How to measure memory consumption during unit tests?

Hello,
I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
I would like to apply this when running a batch of a test suite, targeting tests that exercise memory-sensitive operations.
The intent is to verify that a modification of the code in this area does not introduce a regression (an increase) in memory consumption.
I would include it in the nightly build and monitor the evolution of the summary figure (a-ah, the "userAccount" test suite consumed 615 MB last night, compared to 500 MB the night before... What did we check in yesterday?)
Running on Win32, the system-level figure for memory consumed is known not to be accurate.
Using perfmon is more accurate, but it seems overkill - plus it's difficult to automate, since you have to attach it to an existing process...
I've looked at the hprof agent included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular, there isn't a "summary line" for the total memory consumed...
What tools do you use/suggest?

However, this requires manual code in my unit test classes themselves, e.g. in my setUp/tearDown methods.
I was expecting something more orthogonal to the tests, that I could activate or not depending on the purpose of the test.

Some IDEs display memory usage and execution time for each test/group of tests.

If I don't have another option, OK, I'll wire up my own pre/post memory counting, maybe using AOP, and will activate memory measurement only when needed.

If you need to check the memory used, that is what I would do. You can do the same thing with AOP, but unless you are already using an AOP library, I doubt it is worth the additional effort.
You can do the same thing with AOP. Unless you are using an AOP library, I doubt it is worth additional effort.
Have you actually used your suggestion to automate memory consumption measurement as part of daily builds?

Yes, but I have fewer than a dozen tests which fail if the memory consumption is significantly different.
I have more tests which fail if the execution time is significantly different.
Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
Plus, I did not understand your suggestion, can you elaborate?
- I first assumed you meant freeMemory(), which, as you suggest, is not accurate, since it returns "an approximation of [available memory]".

freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.

- I re-read it and now assume you do mean totalMemory(), which unfortunately will grow only when more memory than the initial heap setting is needed.

More memory is requested when more memory is used. Unless your test uses a significant amount of memory, there is no way to measure it reliably; i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.

- Eventually, I may need to include calls to System.gc(), but I seem to remember it is best-effort only (endless discussion) and may not help accuracy.

If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
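Putting the suggestions from this exchange together, a minimal sketch of the pre/post counting wrapper might look like the following (MemoryMeter is an illustrative name, not an existing class; the measurement stays best-effort, since a GC during the task skews the delta):

```java
// Sketch of the "testMethod() as a wrapper" idea from this thread.
public class MemoryMeter {

    // Encourage the JVM to settle before sampling; best-effort only,
    // as noted above (System.gc() is just a hint).
    private static void stabilize() {
        System.gc();
        Thread.yield();
    }

    // Used heap = totalMemory() - freeMemory(); freeMemory() alone is
    // misleading because the total can grow during the test.
    private static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Runs the task and returns the approximate heap growth in bytes.
    public static long measure(Runnable task) {
        stabilize();
        long before = usedHeap();
        task.run();
        return usedHeap() - before;
    }
}
```

A nightly-build check could then wrap each memory-sensitive test body in measure(...) and fail, or just log, whenever the delta exceeds a per-test threshold.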

Similar Messages

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014) Memory consumption during re-index and Offline MPEG Clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
    When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
    Some clips now "Offline: Importer" in CC 2014
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
    It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
    It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB RAM in my system and regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk.  Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle with this (for now).
    The program retrieves from the database table using a where clause filtering only to a single value company code.  Kindly note that company code is not the only key in the table.  In order to help with the memory consumption, the original developer had employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving for 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving for 1.8m in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump.  This error is encountered at the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
    About the only significant difference between the two company codes is the number of corresponding records they have in the table.  I've checked whether company code B had more values in its columns than company code A, but they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection.  I thought that the memory consumed by both company codes should be the same... at least, in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records all at once.  However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES), the whole query is executed even for a single record in the itab, and all results for
    the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI,
    not by the database.
    If you use package size the resultset is buffered in a system table in the DBI (which allocates memory from your user quota). And from there on the package sizes are built and handed over to your application (into table lt_temp).
    see recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    Calculate the size needed for your big company code B: the number of rows multiplied by the line length.
    That is the minimum amount you need for your user memory quota (quotas can be checked with
    ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE.
    SELECT * FROM <custom table>
    INTO TABLE lt_temp
    FOR ALL ENTRIES IN lt_bukrs
    WHERE bukrs = lt_bukrs-bukrs
    ORDER BY primary key.
    This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES.
    Since with FAE it is buffered anyway in the DBI (and subtracted from your quota), you can
    do it right away and avoid saving portions twice (once in the DBI buffer and a portion of that
    again in the package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas or select
    less data (additional where conditions) or avoid using FAE in this case in order to not read all
    the data in one go.
    Hope this helps,
    Hermann
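As a sanity check on the quota arithmetic suggested in that reply (rows multiplied by line length, using the figures from the question: 1.8 million rows at 1803 bytes each):

```java
// Rough quota estimate for company code B, using the numbers quoted
// in the question above (1.8m rows, 1803 bytes per record).
public class QuotaEstimate {
    public static void main(String[] args) {
        long rows = 1_800_000L;   // company code B
        long lineLength = 1803L;  // bytes per record
        long bytes = rows * lineLength;
        System.out.println(bytes / (1024 * 1024) + " MiB"); // prints "3095 MiB"
    }
}
```

So the full result set alone needs roughly 3 GiB of user memory quota, which explains why the dump appears for company code B but not for the 700k-row company code A.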

  • Deployed models are cached during unit tests

    I am using Stardust 2.1.1, and currently I am writing unit tests with JUnit, Spring and an in-memory Derby database. I would like to create a clean state for each of the unit tests, so I create and initialize the database before every test method and drop it after method executions. I also dispose of the Spring application context and initialize it between test methods with the @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD) annotation.
    My problem is that despite dropping the database and reinitializing the Spring context after every test method my previously deployed models are still cached and I get an authorization-related exception:
    org.eclipse.stardust.common.error.AccessForbiddenException: AUTHx01000 - The user 'motu' does not have the permission 'model.deployProcessModel'.
    at org.eclipse.stardust.engine.core.runtime.utils.Authorization2.checkPermission(Authorization2.java:332) ~[carnot-engine-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.core.runtime.beans.interceptors.GuardingInterceptor.invoke(GuardingInterceptor.java:52) ~[carnot-engine-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.core.runtime.interceptor.MethodInvocationImpl.proceed(MethodInvocationImpl.java:130) [carnot-engine-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.core.runtime.beans.interceptors.AbstractLoginInterceptor.performCall(AbstractLoginInterceptor.java:201) ~[carnot-engine-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.core.runtime.beans.interceptors.AbstractLoginInterceptor.invoke(AbstractLoginInterceptor.java:131) ~[carnot-engine-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.api.spring.SpringBeanLoginInterceptor.invoke(SpringBeanLoginInterceptor.java:79) ~[carnot-spring-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.core.runtime.interceptor.MethodInvocationImpl.proceed(MethodInvocationImpl.java:130) [carnot-engine-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.api.spring.SpringSessionInterceptor.doWithDataSource(SpringSessionInterceptor.java:142) ~[carnot-spring-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.api.spring.SpringSessionInterceptor.access$000(SpringSessionInterceptor.java:48) ~[carnot-spring-2.1.1.jar:2.1.1]
    at org.eclipse.stardust.engine.api.spring.SpringSessionInterceptor$1.doInConnection(SpringSessionInterceptor.java:87) ~[carnot-spring-2.1.1.jar:2.1.1]
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:350) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
    I looked into the code and found out that the ModelManagerBean and ModelManagerPartition classes cache my test model. I set the Infinity.Engine.Caching property to false in my carnot.properties as per the online documentation, but the model is still getting cached.
    What is the proper way to disable caching in Stardust or how can I ensure that it disposes of everything that is cached?
    Thank you.

    In the Authorization2.checkPermission(...) method there's the following call:
    List<IModel> models = ModelManagerFactory.getCurrent().findActiveModels();
    This is a call to the ModelManagerPartition.findActiveModels() method. This class also has a deleteAllModels() method. If I try to call either of them manually after deploying the model in the first test, I get a NullPointerException.

  • How to measure power consumption of a brushless DC motor

    Hi,
        I need to measure the power consumption of a Moog 23-23 motor using a PXI platform, because I need to save the instantaneous power consumption data. I have a DAQmx 6259 and an FPGA module. The problem is that I don't know if I should measure the current and voltage of all three coils of the motor, or if it is enough to measure just one; plus, is it better to measure all the signals directly in differential mode with the DAQmx, or should I use a differential amplifier first?

    I don't see anything wrong with your approach; however, I'll throw out a few thoughts:
    If you could tie the sending of the change-voltage command (in your DLL) to the START TASK command for DAQ, you could reduce the variability in the time between the two events. Maybe that's important, maybe not.
    Can you set the voltage via some LabVIEW code, rather than a DLL?
    You might or might not want a variable sampling rate - if you expect 10 mSec, you might want to sample at 10 kHz to catch the 1% difference between 10.2 and 10.3 mSec. But if you're expecting 500 mSec, you could sample at 200 Hz to catch the 1% difference between 500 mSec and 505 mSec, thereby saving data space and processing time. Maybe that's important, maybe not.
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

  • How to measure memory usage?

    xorg-server 1.16 seems to use much more memory than the previous versions (and generally sucks ;P), but maybe just its memory use reporting has changed.
    https://www.archlinux.org/packages/comm … ny/ps_mem/
    https://github.com/pixelb/ps_mem/
    It's a 32-bit system.
    # ps_mem -p 362
    Private + Shared = RAM used Program
    333.9 MiB + -7156.5 KiB = 326.9 MiB firefox
    326.9 MiB
    =================================
    # ps_mem -p 253
    Private + Shared = RAM used Program
    173.9 MiB + 2.0 MiB = 175.9 MiB Xorg.bin
    175.9 MiB
    =================================
    # free -m
    total used free shared buffers cached
    Mem: 997 861 135 174 44 442
    -/+ buffers/cache: 374 622
    Swap: 258 0 258
    Can someone interpret these results for me?
    Does using zswap have any effect on memory use reporting?
    http://stackoverflow.com/questions/1313 … or-process or https://bbs.archlinux.org/viewtopic.php?id=184496 leave me with even more questions and a headache.

    I'm scratching my head and banging it against the wall. I've developed a nice rhythm ;-)
    It's Xorg.bin now:
    $ pacman -Qo Xorg.bin
    /usr/bin/Xorg.bin is owned by xorg-server 1.16.0-2
    $ smem -kt
    PID User Command Swap USS PSS RSS
    315 karol dwmst 0 160.0K 272.0K 2.1M
    296 karol xinit /home/karol/.xinitrc 0 176.0K 279.0K 1.9M
    516 karol dbus-launch --autolaunch=0d 0 244.0K 347.0K 1.9M
    279 karol /bin/sh /usr/bin/startx 0 292.0K 413.0K 2.9M
    522 karol /usr/bin/dbus-daemon --fork 0 216.0K 479.0K 1.9M
    300 karol dwm 0 336.0K 562.0K 3.1M
    263 karol /usr/lib/systemd/systemd -- 0 408.0K 923.0K 3.1M
    419 karol bash 0 848.0K 995.0K 3.8M
    411 karol bash 0 860.0K 1004.0K 3.8M
    2850 karol bash 0 860.0K 1007.0K 3.9M
    3651 karol bash 0 864.0K 1011.0K 3.8M
    422 karol bash 0 852.0K 1016.0K 4.0M
    417 karol bash 0 856.0K 1017.0K 3.9M
    267 karol -bash 0 856.0K 1018.0K 3.9M
    425 karol bash 0 852.0K 1019.0K 4.0M
    415 karol bash 0 856.0K 1021.0K 3.9M
    330 karol -bash 0 860.0K 1022.0K 4.0M
    443 karol vim -p 0 0 4.0M 4.1M 6.7M
    5934 karol python2 /usr/bin/smem -kt 0 5.3M 5.4M 7.3M
    316 karol urxvtd 0 22.7M 25.5M 33.1M
    573 karol /usr/lib/firefox/plugin-con 0 22.0M 32.7M 48.0M
    297 karol /usr/bin/Xorg.bin -nolisten 0 123.7M 124.8M 128.9M
    452 karol firefox 0 232.0M 245.9M 265.4M
    23 1 0 419.8M 451.5M 545.5M
    $ sudo ps_mem
    Private + Shared = RAM used Program
    116.0 KiB + 44.0 KiB = 160.0 KiB atd
    144.0 KiB + 52.5 KiB = 196.5 KiB vnstatd
    164.0 KiB + 33.5 KiB = 197.5 KiB gpm
    156.0 KiB + 48.5 KiB = 204.5 KiB acpid
    188.0 KiB + 63.0 KiB = 251.0 KiB systemd-resolved
    160.0 KiB + 127.0 KiB = 287.0 KiB dwmst
    176.0 KiB + 119.5 KiB = 295.5 KiB xinit
    244.0 KiB + 121.0 KiB = 365.0 KiB dbus-launch
    292.0 KiB + 131.0 KiB = 423.0 KiB startx
    336.0 KiB + 249.0 KiB = 585.0 KiB dwm
    644.0 KiB + 67.0 KiB = 711.0 KiB systemd-networkd
    672.0 KiB + 72.5 KiB = 744.5 KiB systemd-logind
    932.0 KiB + 120.0 KiB = 1.0 MiB systemd-udevd
    736.0 KiB + 419.5 KiB = 1.1 MiB (sd-pam)
    552.0 KiB + 659.0 KiB = 1.2 MiB dbus-daemon (2)
    876.0 KiB + 587.0 KiB = 1.4 MiB su (3)
    1.2 MiB + 378.5 KiB = 1.6 MiB sudo
    944.0 KiB + 712.0 KiB = 1.6 MiB login (2)
    996.0 KiB + 1.3 MiB = 2.2 MiB systemd (2)
    4.0 MiB + 195.5 KiB = 4.2 MiB vim
    8.1 MiB + 109.0 KiB = 8.2 MiB systemd-journald
    11.1 MiB + 2.1 MiB = 13.2 MiB bash (13)
    26.1 MiB + 5.6 MiB = 31.6 MiB urxvtd (3)
    20.9 MiB + 11.4 MiB = 32.3 MiB plugin-container
    123.7 MiB + 1.2 MiB = 124.9 MiB Xorg.bin
    230.7 MiB + 14.2 MiB = 244.9 MiB firefox
    473.9 MiB
    =================================
    Wrt xrestop, how do you drive this thing?

  • Unit Testing and Code Coverage

    Is there any way to see graphs and charts of how much code was covered during Unit Tests in OBPM 10GR3 from the CUnit and PUnit tests?
    We use Clover Reports in Java Projects.
    Any such tool for OBPM 10GR3 projects?

    Here are some more
    Triggers and DB links are not available in Oracle Explorer - it would be great to have them in there - I found triggers under tables - but I would much prefer them to be broken out under their own node rather than nested under table
    I think others have mentioned this but when you query a table (Retrieve Data) - it would be great to be able to specify a default number of records to retrieve - the 30 second timeout is great - but more control would be nice - also a way to control the timeout would be nice as well
    I noticed that I get different behavior on the date format when I retrieve data (by selecting the option from the table menu when I right click on a table) versus getting the same data through the query window - why isn't it consistent?
    Also - with Intellisense - can you make the icons different for the type of objects that the things represent (like tables versus views versus functions)
    I noticed that I couldn't get dbms_output to show up with Intellisense - I had filtered out of Oracle Explorer all the System objects - does that somehow affect Intellisense as well? I know that the account I am using has access to those packages.
    Also - more control over collapsible regions would be nice - I love that feature of VS - but for ODT it seems to only work at the procedure level (not configurable with some kind of directive etc...)

  • I can't figure out how to set a breakpoint in a SenTestingKit unit test

    I'm learning Cocoa after decades of doing other languages. I'm trying to use SenTestingKit for unit tests. One of my unit tests doesn't work, and I want to set a breakpoint to figure out why. So far, I haven't figured out how to do this.
    I found Chris Hanson's instructions for how to do this, but he apparently wrote them for an earlier version of Xcode, and I don't know enough to adapt them.
    I am writing a tutorial document for writing a simple Cocoa App using Xcode 3.1, git, SenTestingKit, and OCMock. It's at http://xorandor.com/FirstCocoaApp. I've got it up to the point of trying to debug a unit test, and I'm stuck there.
    I also couldn't figure out how to have Xcode run the unit tests while building the application. Again, Chris' instructions didn't quite do it, and I don't know enough to figure out the rest.
    So, if you know how to do this, please give me some hints. I'll put those into my tutorial, so that the next people who need this can find it.
    Thanks,
    Pat

    I figured out more of my problem, but I haven't solved it yet.
    What happens is that, after I create a test case, I try to run it with a breakpoint. After I start the program, Xcode turns the breakpoint from blue (meaning active) to orange (meaning that it can't set the breakpoint).
    Several people have said to turn off the Lazy symbol loading preference. I did that, and that didn't fix the problem.
    I wrote up a detailed list of steps that I took at http://www.xorandor.com/DebugQuestion.txt
    I also put a copy of the project file at http://xorandor.com/MyProject.tar
    If anyone has any suggestions on what to try next, please let me know.
    Thanks,
    Pat

  • Unit Tester: how to refactor

    I' using SQLDeveloper Version 2.1.1.64 Build MAIN-64.45 (german locale).
    How can I edit an existing unit test in case the name of the function/procedure under test changed or it's arguments?
    The user guide at OTN shows a pencil icon which I don't have...
    bye
    TPD

    So the answer is: no way. :o(
    The way I currently refactor my tests is to export the test (suite) to an XML file and change the names of procedures, functions and parameters in the XML file.
    If a new parameter is introduced, I first create a new test, export it to XML (along with the existing test) and copy the tags of the new parameter into the old test's XML file. (Don't forget to change the implementation ID...)
    Hopefully SQL Developer gets some refactoring features for unit tests soon...
    bye
    TPD

  • Dbxml memory consumption

    I have a query that returns about 10MB worth of data when run against my db -- it looks something like the following
    'for $doc in collection("VcObjStore")/doc
    where $doc[@type="Foo"]
    return <item>{$doc}</item>'
    when I run this query in dbxml.exe, I see the memory footprint (of dbxml.exe) increase by 125MB. Once the query finishes, it comes back down.
    I expected memory consumption to be somewhat larger than what the query actually returns but this seems quite extreme.
    Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
    Thanks

    Hi Ron,
    Thanks for a quick reply!
    - I wasn't actually benchmarking DBXML. We've observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help starting with that.
    - Yes, an environment was created for this db. Here is the code we used to set it up
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setInitializeLocking(true);
    envConfig.setInitializeCache(true);
    envConfig.setAllowCreate(true);
    envConfig.setErrorStream(System.err);
    envConfig.setCacheSize(1024 * 1024 * 100);
    - I'd like an explanation on reasons behind the performance difference between these two queries
    Query 1:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return $doc'
    552 objects... <snip>
    Time in seconds for command 'query': 0.031
    Query 2:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return <val>{$doc}</val>'
    552 objects... <snip>
    Time in seconds for command 'query': 5.797
    - Any way to make the query #2 go as fast as #1?
    Thanks!

  • BeginTimer and EndTimer Not Working In Data-Driven Unit Test Using DataSource Attribute

    Using VS2012 Ultimate I have a unit test with the <DataSource> attribute which passes in values from a CSV file.  I'm wrapping the unit test in a load test.  I expect a Transaction to appear in the load test results for each row in my DataSource
    due to the BeginTimer and EndTimer methods in the unit test.  However, I only get 1 transaction, with response time roughly equal to the overall test time.  For example, if each row in the DataSource takes 1 second to process, then my test time and
    transaction time would be about 10 seconds with 10 rows in the CSV file.
    I've created a load test with Constant Load Pattern, 1 user, and 1 test iteration.  Seeing the behavior described above.
    I have tried with and without the check for $LoadTestUserContext.  I'm only using it because I get a NotSupportedException when running the unit test on its own.  I can't believe this bug still
    exists, but that is another topic.
    Note also that I cannot see the Debug output in the Output window in VS2012.
    Any tips?  Thank you.
        <TestMethod()>
        <DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\TestData\MyValues.csv", "MyValues#csv", DataAccessMethod.Sequential)>
        Public Sub WebServiceTest()
            Dim service As MySvc = New MySvcClient()
            Dim input As MySvcInput = New MySvcInput()
            input.A = testContextInstance.DataRow("A")
            input.B = testContextInstance.DataRow("B")
            Dim g As Guid = Guid.NewGuid()
            If TestContext.Properties.Contains("$LoadTestUserContext") Then 'running as load test
                TestContext.BeginTimer("MySvcTrans")
                Debug.WriteLine("Begin Transaction MySvcTrans:  {0}", g)
            End If
            Dim output As MySvcOutput = service.Method(input)
            If TestContext.Properties.Contains("$LoadTestUserContext") Then 'running as load test
                Debug.WriteLine("End Transaction MySvcTrans:  {0}", g)
                TestContext.EndTimer("MySvcTrans")
            End If
            Assert.AreEqual(0, output.ReturnCode)
        End Sub

    Hi John,
    >> I expect a Transaction to appear in the load test results for each row in my DataSource due to the BeginTimer and EndTimer methods in the unit test.  However, I only get 1 transaction, with response time roughly equal to the overall test
    time.
    Could you share a screenshot of the result in the load test?
    Could you share the load test result in the "Transaction" table?
    About how to use transactions in unit tests, maybe you could find useful information here:
    http://blogs.msdn.com/b/slumley/archive/2006/04/14/load-testing-web-services-with-unit-tests.aspx
    Best Regards,
    Jack

  • Very high memory consumption of B1i and cockpit widgets

    Hi all,
    finally I have managed to install B1i successfully, but I think something is wrong.
    Memory consumption in my test environment (Win2003, 1024 MB RAM), while no other applications and no SAP addons are started:
    tomcat5.exe 305 MB
    SAP B1 client 315 MB
    SAP B1DIProxy.exe 115 MB
    sqlservr.exe 40 MB
    SAPB1iEventSender.exe 15 MB
    others less than 6 MB and almost only system based processes...
    For each widget I open (3 default widgets, one on each standard cockpit), tomcat grows bigger and leaves less for the SQL server, which has to fetch all the data (several seconds at 100% CPU usage).
    Is this heavy memory consumption normal? What happens if several users are logged into SAP B1 using widgets?
    Thanks in advance
    Regards
    Sebastian

    Hi Gordon,
    so this is normal? Then I guess the dashboards are not suitable for many customers, especially those who are working on a terminal server infrastructure. Even if the tomcat server has this memory consumption only on the SAP server, when each client needs about 300 MB (plus some hundred more for the several addons they need!), I could not activate the widgets. And generally SAP B1 is not the only application running at the customer's site. Suggesting to buy more memory for some Xcelsius dashboards won't convince the customer.
    I hope that this feature will be improved in the future, otherwise the cockpit is just an extension of the old user menu (except for the brilliant quickfinder on top of the screen).
    Regards
    Sebastian

  • Clear unit test results

    Hello,
    How can I delete or clear unit test results.
    I can't find any funcionality in SQL DEVELOPER.
    B>K

    Hi klenikk,
    You can do this in 3.0 in several ways:
    1. In the Unit Test navigator, right click a test and select "Purge Test Results".
    2. In the menu bar, select Tools/Purge Run Results to remove results for ALL tests.
    3. In the results tab of a test editor, right click a run node and select "Delete Result".
    I hope this helps.
    Philip Richens,
    SQL Developer Team.

  • How to measure JSP Memory Utilization

    I'm trying to build a tool that will tell me how many resources a JSP is consuming. I'm using 1.4.2_14, with a static heap size (1GB) and -Xgc:singlepar. I've created a filter that does a Runtime.totalMemory() - Runtime.freeMemory() before and after a chain to the JSP. To test this I built a simple JSP that I call from a shell script with curl:
    <%
    int alloc = 131065;
    if (null != request.getParameter("alloc"))
        alloc = Integer.parseInt(request.getParameter("alloc"));
    Object[] o = new Object[alloc];
    for (int i = 0; i < o.length; i++)
        o[i] = new Object();
    int count = o.length; // remember the length before releasing the array
    if (null != request.getParameter("clean")) {
        for (int i = 0; i < o.length; i++)
            o[i] = null;
        o = null; // printing o.length after this would throw a NullPointerException
    }
    out.println("Done with " + count);
    %>
    When running this JSP repeatedly, starting with an allocation of 131,064 objects, I get a heap growth of 0 until I increment to 131,067. Then I seem to get good information, but every so often I'll see an 18MB bump in memory. The size I get for heap growth at 131,067 is 524,288 bytes.
    Why can't I see any memory utilization below 512KB?
    What is this 18MB bump in memory?
    Is there a way for me to get a more accurate measurement?
    Thanks,
    Hari

    It's possible that the totalMemory() and freeMemory() calls are not 100% exact all the time; I don't remember exactly how that info is gathered.
    There is a way to get very exact memory consumption with JR. Mail me for details.
    -- Henrik

  • Measure thread's memory consumption

    Hello.
    Nice to see you here.
    Please tell me is there any possibility to measure thread's memory consumption?
    I'm trying to tune application server.
    Totally physical server with Power AIX 5.3 on board has 8GB of memory.
    For example I allocate 1408m for Application Server Java heap (-Xms1408m -Xmx1408m).
    Then I tune Application server thread pools (web-threads, EJB-threads, EJB alarm threads, etc...).
    As I understand it, Java threads live in native memory, not in the Java heap.
    I would like to know how to measure the size of a thread in native memory.
    After that I can set the size of the thread pools (to avoid an OutOfMemoryError, native or heap).

    holod wrote:
    > As I understand it, Java threads live in native memory, not in the Java heap.
    The data the JVM uses to manage threads may live in the JVM's own memory outside of the Java heap. However, that data will be a very tiny fraction of what the JVM is consuming (unless you have a huge number of threads, which are all using very, very little memory).
    > I would like to know how to measure the size of a thread in native memory.
    It will almost certainly be so small as to not matter.
    > After that I can set the size of the thread pools (to avoid an OutOfMemoryError, native or heap).
    No, that will almost certainly not help at all.
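    For completeness, the per-thread native cost that can be tuned is dominated by the stack: it is capped globally with -Xss, and a size can also be requested per thread via the four-argument Thread constructor. A small sketch (the 256 KB figure is just an illustrative request, and the JVM is free to ignore the hint entirely):

    ```java
    // Sketch: requesting a smaller stack for a worker thread. The
    // stackSize argument is only a hint; some JVMs ignore it.
    public class SmallStackDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> System.out.println("worker done");
            // null ThreadGroup = current group; 256 KB requested stack size
            Thread t = new Thread(null, work, "small-stack-worker", 256 * 1024);
            t.start();
            t.join();
        }
    }
    ```

    Whether shrinking the stack actually matters depends on the thread count, which is the point of the reply above.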
