Automated Unit Tests through NWDI / Purpose of testrun.xml

Hello,
I could not find any documentation on the test infrastructure that seems to be integrated into NetWeaver CE 7.1. A DC offers the option "Add/Repair Unit Test Support" via the context menu. When this is used, the Developer Studio automatically creates the necessary test source folder, the JUnit library dependencies and a file called TEST-INF/testrun.xml.
The first two are clear, but what function does the testrun.xml file serve? To me it looks like it defines the tests that should be executed (automatically during the build?). However, the tests are never actually run.
<?xml version="1.0" encoding="UTF-8"?>
<test>
  <run type="junit">
    <include>com.sap.**</include>
    <exclude></exclude>
  </run>
</test>
Can anybody explain how this file is intended to be used, or how to run unit tests automatically during or after the build process?
Thanks in advance.
ciao,
Elmar

Hi test developers,
the integration of JUnit tests into the DC build is planned for release 7.30. You will be able to add your tests in a separate test folder and specify additional "test" dependencies if required. The tests will be executed during the DC build; if a test fails, the build fails as well. Generating code coverage reports will also be possible.
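For illustration, a test placed in such a test source folder is just an ordinary JUnit test class. A minimal, self-contained sketch (the method under test is inlined here only to keep the example short; in a real DC it would live in the regular source folder):
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public class DiscountTest {
    // In a real DC this method would be part of the production code,
    // not of the test class itself.
    private double minusTenPercent(double price) {
        return price * 0.9;
    }
    @Test
    public void tenPercentIsSubtracted() {
        assertEquals(180.0, minusTenPercent(200.0), 0.001);
    }
}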
Best Regards,
  Jochen Ehret.

Similar Messages

  • Automated Unit Tests / TEST-INF/testrun.xml

    Hello!
    Regarding the original question here (Re: Automated Unit Tests through NWDI / Purpose of testrun.xml) I'll try to ask this question again:
    Is it possible to run JUnit tests automatically during the build (CBS)?
    A very promising-looking file (testrun.xml) is not documented.
    We're using the NWDI and CE 7.11.
    Test-driven development isn't a new paradigm in standard Java development, so it should be possible with CE and NWDI as well.
    Thanks in advance,
    --cl

    Hi Carsten
    I guess that testrun.xml allows you to do exactly what you want: to run JUnit tests automatically during the build (CBS).
    There is one small catch, though: I think the tool that understands this file and runs the tests is an SAP-internal tool. So I doubt that a plain CBS server alone is enough to activate automatic test execution; it seems something more is needed.
    I also could not find any documentation regarding this on SDN. That's why I think the functionality is SAP-internal.
    BR, Siarhei

  • "Automating" Unit Test Deployment

    We're trying to develop an automated build process using SQL Developer's Unit Test. This works by developing the unit test(s) on database A and then deploying the unit test(s) to database B for the build. Unfortunately, there is an issue when we come to import a new version (v2) of an existing test (v1). If a previous version of the test already exists on database B then the old version is sometimes merged with the new version.
    A simple example would be where:
    (v1) Version 01 has a Startup Process but no Teardown Process
    (v2) Version 02 has a Teardown Process but no Startup Process
    If I import Version 01 and then Version 02, I get a test with both a Startup Process and a Teardown Process.
    We've managed to work around this manually by using the Purge Repository Objects command, but this is not ideal since the process is meant to be fully automated.
    Any Ideas...

    Phillip / Brian
    Is it possible to "Purge Repository Objects" through a SQL script instead ?
    From what I can infer from looking at the UT tables the "Purge" truncates all the UT tables except UT_LOOKUP_CATEGORIES and UT_METADATA. Now, I've tried this but it doesn't quite seem to work. I'm missing something here.
    I get the following error message when I try to import files after my manual "purge":
    ORA-01400: cannot insert NULL into ("DCI_UT_REPO"."UT_TEST"."CREATED_ON")
    01400.00000 - "cannot insert NULL into (%s)"
    Regards
    Subboss

  • Flash Builder + unit tests = Unknown error generating output application.xml

    Please see my initial forum post here: http://forums.adobe.com/message/3730193
    This forum does not let me mark my initial post as "not answered" (re-opened) and it seems the dev is not going to respond to it as-is (maybe because he does not know)... so I'll re-open it this way
    An update to my initial post:
    I am now seeing this on Flex 4.1 projects as well. Both AIR and Flex Library projects.
    It appears to be some kind of race condition. Sometimes unit tests launch on the first try; other times it takes 4-5 tries. If I keep the unit test application XML open in the background, I see that even on successful launches the <content> node is re-written to contain the square brackets.

    Thanks Sudhir. I had just assumed that you were not getting the notifications since the ticket was marked as "answered".
    I appreciate your help in this matter.

  • How to generate Unit Test reports through Ant/command line?

    Hi,
    I am using the SQL Developer unit test feature to test my database code. I am planning to execute the tests and generate reports by running an Ant script.
    Is it possible to get the unit test results in any format (text, XML, HTML) after running the tests?
    How can report generation be integrated into automated builds?
    Is there any command-line utility that can be invoked through an Ant task?
    Thanks,
    Fernando

    Fernando,
    I, too, am looking to run our PL/SQL unit test suites through our automated Ant build scripts. Currently, I've only been able to determine that there is a "UtUtil.bat" / "UtUtil.sh" command-line utility for Windows and Linux in /sqldeveloper/sqldeveloper/bin. However, it only takes three switches:
    UtUtil -run ?
    UtUtil -imp ?
    UtUtil -exp ?
    While this does provide some limited value to us by automating the import of our exported test suites and then running them as part of our builds, it doesn't help in running reports on the test runs and exporting the reports to something our build processes can consume (i.e. XML); a rough sketch of driving the runs from a build follows at the end of this reply.
    Also, we want to be able to run our full DB build on (almost) any of our development machines, and we don't want to have to have a unit test repository already preconfigured on each development DB. I haven't found a utility to automate creating the unit test repository user and the repository, and then, when all the test suite runs have finished and the reports have run, deleting the repository and the repository user.
    We have used Quest's Code Tester product in the past and it had all of these great features that I am really hoping Oracle can either implement or expose to us.
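    As a rough sketch of what driving UtUtil from a build could look like, here is a small Java wrapper (the path, suite, repository and connection names are only placeholders taken from these threads, and how UtUtil reports failures through its exit code would still have to be verified):
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;
    public class UtUtilRunner {
        public static void main(String[] args) throws Exception {
            // build the same command line you would type by hand
            ProcessBuilder pb = new ProcessBuilder(
                    "cmd", "/c", "UtUtil", "-run", "-suite",
                    "-name", "RCSV1", "-repo", "UNIT_TEST", "-db", "DeverLocal");
            pb.directory(new File("C:\\Program Files\\sqldeveloper\\sqldeveloper\\bin"));
            pb.redirectErrorStream(true);
            Process process = pb.start();
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(process.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // keep the UtUtil output in the build log
            }
            int exitCode = process.waitFor();
            if (exitCode != 0) {
                // fail this build step if the utility signals a problem
                throw new IllegalStateException("UtUtil exited with code " + exitCode);
            }
        }
    }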
    Regards and best of luck,
    Mike Sanchez

  • How can I compare the actual and expected values in Unit testing when they are XML files?

    I have created a unit test for a method in VS 2008. My expected value and actual value are XML. Therefore, even though the output is what I expect, the test fails because I am currently doing a string comparison. How can I compare the expected and the actual XML in unit testing?
    mayooran99

    In a unit test, when you want to validate XML files, feed them into the class or struct that consumes the XML in production and compare the values there. (You don't just feed it into an XMLReader and process it line by line, right? But if that really is how the code works, then that is also how you should test it.)
    In short: however you use the XML in your code, that is how you should test it in your unit test.
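    The original question is about VS 2008 / .NET, but the same idea as a Java/JUnit sketch (assuming JAXB and JUnit 4 on the classpath; the Order class is made up for this example) could look like this:
    import static org.junit.Assert.assertEquals;
    import java.io.StringReader;
    import javax.xml.bind.JAXB;
    import org.junit.Test;
    public class OrderXmlTest {
        // made-up payload class the XML would normally be deserialized into
        public static class Order {
            public String id;
            public int quantity;
        }
        @Test
        public void parsedValuesMatchExpectation() {
            String actualXml = "<order><id>42</id><quantity>3</quantity></order>";
            // compare the values the application actually cares about,
            // instead of comparing two XML documents as raw strings
            Order order = JAXB.unmarshal(new StringReader(actualXml), Order.class);
            assertEquals("42", order.id);
            assertEquals(3, order.quantity);
        }
    }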

  • SSDT: Creation of localDB instances from project file - Sql Server Unit testing purposes

    I have a SQL Server Database Project in my solution (VS2013 Professional) which has a corresponding test project with some stored procedure unit tests. Currently I am using LocalDB; as far as I understand, a local database instance is created in C:\Users\[User]\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\Projects, and the specific .mdf file referenced in the SQL Server Object Explorer is in C:\Users\[User]\AppData\Local\Microsoft\VisualStudio\SSDT\[ProjectName]. The unit tests run fine on the local machine on which I developed the project.
    My issue is that we have a box configured to check out the project from version control, build it using MSBuild commands, and then run the unit tests using VSTest.Console. Usually with C# test projects we reference the test project DLL and the unit tests run fine, and I have referenced the DLL of the test project containing the stored procedure unit tests.
    With the stored procedure unit tests, however, we get this exception:
    Initialization method [project].[spTest].TestInitialize threw exception. System.Data.SqlClient.SqlException: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 - Local Database Runtime error occurred. The specified LocalDB instance does not exist.
    After some digging I have realised that the LocalDB instance seems to be created when the project itself is created in VS (specifically, when LocalDB is first used), not when the project is built. If you look into the AppData folder of the test machine, there is no corresponding .mdf file for the project.
    The question is: is there a way to set up a LocalDB instance on the new machine if all you have is the project file? The only purpose of the project on the test machine is to run the unit tests, no other development purposes. VS2013 Professional is installed on the test machine, but a solution using only config file changes or MSBuild/VSTest commands would be preferable.
    I realise you could change the connection string to an actual test database and run the unit tests off that, but we quite like the LocalDB approach for testing. I also realise that you could potentially transfer the .mdf file (I haven't tested this solution), though I would prefer a solution to my initial question.
    http://technet.microsoft.com/en-us/library/hh234692.aspx
    http://msdn.microsoft.com/en-us/library/hh309441(v=vs.110).aspx
    I have been reading up on LocalDB and I assume an automatic LocalDB instance is created when you create a SQL Server database project (i.e. on first use of LocalDB). I have tried adding the database creation to the test project config file, but I do not really know where to go from there. The second link does not really specify when the named LocalDB instance will be created if you add the config items, and I am not even sure that is an actual solution. Here's my test project config file for reference:
    <configSections>
      <section name="system.data.localdb" type="System.Data.LocalDBConfigurationSection,System.Data,Version=4.0.0.0,Culture=neutral,PublicKeyToken=[PublicKeyToken]"/>
      <section name="SqlUnitTesting_VS2013" type="Microsoft.Data.Tools.Schema.Sql.UnitTesting.Configuration.SqlUnitTestingSection, Microsoft.Data.Tools.Schema.Sql.UnitTesting, Version=12.0.0.0, Culture=neutral, PublicKeyToken=[PublicKeyToken]" />
    </configSections>
    <system.data.localdb>
      <localdbinstances>
        <add name="SimpleUnitTestingDB" version="11.0" />
      </localdbinstances>
    </system.data.localdb>
    <SqlUnitTesting_VS2013>
      <DatabaseDeployment DatabaseProjectFileName="..\..\..\SimpleUnitTestDB\SimpleUnitTestDB.sqlproj"
                          Configuration="Release" />
      <DataGeneration ClearDatabase="true" />
      <ExecutionContext Provider="System.Data.SqlClient"
                        ConnectionString="Data Source=(localdb)\Projects;Initial Catalog=SimpleUnitTestDB;Integrated Security=True;Pooling=False;Connect Timeout=30"
                        CommandTimeout="30" />
      <PrivilegedContext Provider="System.Data.SqlClient"
                         ConnectionString="Data Source=(localdb)\Projects;Initial Catalog=SimpleUnitTestDB;Integrated Security=True;Pooling=False;Connect Timeout=30"
                         CommandTimeout="30" />
    </SqlUnitTesting_VS2013>
    Thanks in advance for any response. Sorry if there is any misunderstanding; while I have been using VS to develop from the start, this is the first time I have used a SQL Server Database Project.
    Regards,
    Christopher. 

    Yes, you can create a LocalDB instance manually. You use the SqlLocalDb utility, see here:
    http://technet.microsoft.com/en-us/library/hh212961.aspx
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Unit Test in SQL Developer 2.1: Automated Builds

    Hi,
    I am interested to know whether the new unit testing framework can be accessed via an API, so that test execution can be initiated from an automated build process.
    Regards,
    Vadim

    I am having a problem with the unit testing command line.
    I am attempting to run the unit testing using the command line interface.
    I can connect to UNIT_TEST_REPOS schema in SQL developer.
    I am successfully running units test and suites in SQL developer.
    The UNIT_TEST_REPOS, RCSV1 and DEVER users are granted the UT_REPO_USER role, and UNIT_TEST_REPOS and DEVER are granted UT_REPO_ADMINISTRATOR.
    The following commands result in an error box saying "No Repository was found on the selected connection, you need to create a repository." (The HELP button apparently does nothing. The OK button closes the box.)
    C:\Program Files\sqldeveloper\sqldeveloper\bin>UtUtil -exp -test -name RCSV1_RCS_SECURITY.GET_LDAP_BASE -repo unit_test_repos -file c:\ut_xml\test.xml
    Unable to open repository
    C:\Program Files\sqldeveloper\sqldeveloper\bin>UtUtil -run -test -name RCSV1_RCS_SECURITY.GET_LDAP_BASE -repo unit_test_repos -db dever
    Unable to open repository
    C:\Program Files\sqldeveloper\sqldeveloper\bin>UtUtil -run -test -name RCSV1_RCS_SECURITY.GET_LDAP_BASE -repo dever -db dever
    Unable to open repository
    I would guess that I am not supplying the correct connection info.
    My last comment triggered an idea. It turns out that the connection names required are the connection names defined in SQL Developer; in my case, they are not the same as the schema names. The following command worked as advertised.
    C:\Program Files\sqldeveloper\sqldeveloper\bin>UtUtil -run -test -name RCSV1_RCS_SECURITY.GET_LDAP_BASE -repo UNIT_TEST -db DeverLocal
    The ANT target is
    <target name="UnitTests">
    <exec executable="cmd" dir="${sqldev.bin.dir}">
    <arg value="/c"/>
    <arg value="UtUtil -run -suite -name RCSV1 -repo UNIT_TEST -db DeverLocal"/>
    </exec>
    </target>
    Regards,
    Bill

  • Selenium Unit Tests / NWDI

    Hello,
    I've recently read the article:
    HUDSON USAGE IN AN NWDI BASED ENVIRONMENT
    [http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/60d4fb24-b062-2e10-55b5-d8488b216af8]
    I was interested in the test tool used, Selenium. Has anyone got experience of using this tool for Web Dynpro apps? I have downloaded the Selenium Firefox plugin to record tests, but when it is used on Web Dynpro apps the right-click option is disabled.
    Is there any way to enable this right-click? There are a number of Selenium test features under this right-click option that help to build tests quickly, e.g. asserts.
    If anyone else has used Selenium to create unit tests in WDJ, have you used a different approach to the Firefox plugin?
    Thanks in advance!
    Jon

    Dear All,
    I am trying to use Selenium to record Web Dynpro pages. I am able to record successfully, but I am unable to play the recording back properly:
    Selenium is unable to write entries into text fields.
    If anybody has any idea about this issue, please help.
    Thank you all.
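    For what it's worth, a different approach to the Firefox plugin is to write the test directly in Java against the Selenium WebDriver API. A minimal sketch (the URL and the element id are placeholders; Web Dynpro generates its own ids, which you would have to look up in the rendered HTML):
    import static org.junit.Assert.assertTrue;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.firefox.FirefoxDriver;
    public class WebDynproInputTest {
        private WebDriver driver;
        @Before
        public void startBrowser() {
            driver = new FirefoxDriver();
        }
        @Test
        public void userNameCanBeTyped() {
            driver.get("http://host:port/webdynpro/dispatcher/local/MyApp/MyApp"); // placeholder URL
            WebElement userField = driver.findElement(By.id("userNameInput"));     // placeholder id
            userField.clear();
            userField.sendKeys("testuser");
            assertTrue(userField.getAttribute("value").contains("testuser"));
        }
        @After
        public void stopBrowser() {
            driver.quit();
        }
    }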

  • Unit Testing, Null, and Warnings

    I have a Unit Test that includes the following lines:
    Dim nullarray As Integer()()
    Assert.AreEqual(nullarray.ToString(False), "Nothing")
    The variable "nullarray" will obviously be null when ToString is called (ToString is an extension method, which is the one I am testing). This is by design, because the purpose of this specific unit test is to make sure that my ToString extension
    method handles null values the way I expect. The test runs fine, but Visual Studio 2013 gives includes the following warning:
    Variable 'nullarray' is used before it has been assigned a value. A null reference exception could result at runtime.
    This warning is to be expected, and I don't want to stop Visual Studio 2013 from showing this warning or other warnings in general, only in this specific case (and several others that involve similar scenarios). Is there any way to mark a line or segment of code so that it is not checked for warnings? Otherwise, I will end up with lots of warnings for things that I am perfectly aware of and don't plan on changing.
    Nathan Sokalski [email protected] http://www.nathansokalski.com/

    Hi Nathan Sokalski,
    Variable 'nullarray' is used before it has been assigned a value. A null reference exception could result at runtime.
    Was the warning above thrown when you built the test project, while the test itself ran successfully? I assume yes.
    Is there any way to mark a line or segment of code so that it is not checked for warnings?
    There is no built-in way to exclude a code snippet or a single line from being checked during compilation, but in Visual Basic you can configure specific warnings not to be reported via the project's Properties -> Compile tab -> warning configurations box.
    For detailed information, please see: Configuring Warnings in Visual Basic
    Another way is to adjust your code logic so that the code no longer generates the warning.
    If I misunderstood you, please tell us which code you don't want checked for warnings, along with a sample, so that we can look further into your issue.
    Thanks,

  • Unit Testing in PL/SQL

    Hi,
    I'm writing some packages that contain several procedures, and now I want to test these procedures. I have downloaded utPLSQL from sourceforge.net.
    But I am unable to use it. Do we need to copy the utPLSQL files onto the machine where Oracle is installed? If so, do we then need to run any .sql scripts before starting unit testing?
    Please also briefly describe the steps needed to start unit testing.
    Thanks in Advance.

    I suggest you also check out Quest Code Tester for Oracle (disclosure: I design, build and use the tool - hey, I also wrote the original version of utPLSQL!), the first commercial automated testing tool for PL/SQL.
    www.quest.com/code-tester-for-oracle
    It generates virtually all of your test code, based on expected behavior you describe through a point and click interface. You can run tests through the UI or scheduled scripts.
    Regards,
    Steven Feuerstein
    www.ToadWorld.com/SF

  • Unit tests and QA process

    Hello,
    (disclaimer : if you agree that this topic does not really belong to this forum please vote for a new Development Process forum there:
    http://forum.java.sun.com/thread.jspa?forumID=53&threadID=504658 ;-)
    My current organization has a dedicated QA team.
    They ensure end-user functional testing but also run and monitor "technical" tests as well.
    In particular they would want to run developer-written junit tests as sanity tests before the functional tests.
    I'm wondering whether this is such a good idea, and how to handle failed unit tests:
    1) Well, indeed, I think this is a good idea: even if all developers abide by the practice of ensuring that 100% of their tests pass before promoting their code (which is unfortunately not the case), the integration of independent developments may cause regressions or interactions that make some tests fail.
    Any reason against QA running junit tests at this stage?
    However the next question is, what do they do with failed tests: QA has no clue how important a given unit test is with regard to the whole application.
    Maybe a single unit test failed out of 3500 means a complete outage of a 24x7 application. Or maybe 20% of failed tests only means a few misaligned icons...
    2) The developer of the failed package may know, but how can he communicate this to QA?
    Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile.
    Are there recommended methods?
    3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA?
    Block the process until 100% of the tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    4) Does your acceptance process require 100% pass before user acceptance starts?
    Indeed I have ruled out requiring 100% pass, but is this a widespread practice?
    I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    This has to be analyzed of course, as tests have to be maintained as well, but this can be a parallel process to the user acceptance (accepting that the software may have to be patched at some point during the acceptance).
    Thank you for your inputs.
    J.

    > Any reason against QA running junit tests at this stage?
    Actually running them seems pointless to me.
    QA could be interested in the following
    - That unit tests do actually exist
    - That the unit tests are actually being run
    - That the unit tests pass.
    This can all be achieved as part of the build process, however. It can either be done for every CM build (like an automated nightly build) or just for release builds.
    This would require that the following information was logged
    - An id unique to each test
    - Pass/fail
    - A collection system.
    Obviously doing this is going to require more work and probably code than if QA was not tracking it.
    > However the next question is, what do they do with failed tests: QA has no clue how important a given unit test is with regard to the whole application. Maybe a single unit test failed out of 3500 means a complete outage of a 24x7 application. Or maybe 20% of failed tests only means a few misaligned icons...
    To me that question is like asking what happens if one class fails to build for a release build.
    To my mind any unit test failure is logged as a severe exception (the entire system is unusable.)
    > 2) The developer of the failed package may know, but how can he communicate this to QA? Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile. Are there recommended methods?
    Automatic collection obviously. This has to be planned for.
    One way is to just log success and failure for each test, gathered in one or more files. Then a separate process munges the result files to collect the data.
    I know that there is a Java build engine (an add-on to Ant, or a wrapper around Ant) which will do periodic builds and email reports to developers. I think it even allows for categorization so that the correct developer gets the correct error.
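    A minimal sketch of such logging with plain JUnit 4 (class and file names are just examples): a RunListener that writes one line per test, a unique id plus PASS or FAIL, into a file that a separate collection process can pick up later.
    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.util.HashSet;
    import java.util.Set;
    import org.junit.runner.Description;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.notification.Failure;
    import org.junit.runner.notification.RunListener;
    public class ResultFileListener extends RunListener {
        private final PrintWriter out;
        private final Set<String> failed = new HashSet<String>();
        public ResultFileListener(String fileName) throws Exception {
            out = new PrintWriter(new FileWriter(fileName));
        }
        @Override
        public void testFailure(Failure failure) {
            failed.add(failure.getDescription().getDisplayName());
        }
        @Override
        public void testFinished(Description description) {
            // one line per test: "<unique id>;PASS" or "<unique id>;FAIL"
            String id = description.getClassName() + "#" + description.getMethodName();
            String status = failed.contains(description.getDisplayName()) ? "FAIL" : "PASS";
            out.println(id + ";" + status);
            out.flush();
        }
        public static void main(String[] args) throws Exception {
            JUnitCore core = new JUnitCore();
            core.addListener(new ResultFileListener("test-results.txt"));
            // fully qualified test class names are passed on the command line
            for (String className : args) {
                core.run(Class.forName(className));
            }
        }
    }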
    > 3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA? Block the process until 100% of the tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    I would block it.
    > 4) Does your acceptance process require 100% pass before user acceptance starts?
    No. But I am not sure what that has to do with what you were discussing above. I consider unit tests and acceptance testing to be two separate things.
    > Indeed I have ruled out requiring 100% pass, but is this a widespread practice? I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    Then something is wrong with your process.
    When you create a release build you should include those things that should be in the release. If they are not done, missing, have build errors, then they shouldn't be in the release.
    If some dependent piece is missing for the release build then the build process has failed. And so it should not be released to QA.
    I have used version control/debug systems which made this relatively easy by allowing control over which bug/enhancements are included in a release. And of course requiring that anything checked in must be done so under a bug/enhancement. They will even control dependencies as well (the files for bug 7 might require that bug 8 is added as well.)

  • Unit test for J2EE application

    I am writing a unit test for a J2EE application. The application is built in such a way that it makes unit testing extremely difficult.
    There are 2 things that contribute to the mess.
    1. Spring integration means all the config files are specified in web.xml independently, even though their beans rely on each other across files. The end result is that in a unit test I cannot load a bean, because some of its dependencies are missing (i.e. they are defined in a config file that the first file does not include). For this I tried to use the AbstractDependencyInjectionSpringContextTests class to set up the Spring application context outside the normal flow, but did not succeed. If anyone has used this, please post an example.
    2. The application uses the Errors interface from the org.springframework.validation package. To write a test for any validator class, you have to pass an Errors object to the validate method together with the command object. My question is how you can set up this Errors object when you are not in the normal flow. For this I tried to use a mock object, e.g.:
    Mockery context = new Mockery();
    final Errors errorMock = context.mock(Errors.class);
    // call the validator with the mocked Errors object
    classObject.doValidate(cmdObject, errorMock);
    This doesn't work. It gives me the error message below:
    unexpected invocation: errors.pushNestedPath("")
    no expectations specified: did you...
    - forget to start an expectation with a cardinality clause?
    - call a mocked method to specify the parameter of an expectation?
    Is there any way to get around these hiccups programmatically in unit tests?
    thanks...

    If you are doing unit testing, try to use straight JUnit 4 without involving the Spring framework. Given that you are doing unit testing, you might not need Spring configuration in your unit test at all: you can programmatically instantiate the instance of the class under test and either programmatically instantiate the collaborating objects or create mock objects for that purpose. If you are doing functional testing, you might need a Spring context after all. Understand that your tests run in a different context than the complete application, so you would have to create a separate application context for your tests. You might have to go through the existing Spring configuration modules that you created for your application and re-jiggle them a bit so that they can be included both in your application context and in your unit test context.
    Hope this helps.
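    As a sketch of that advice under the assumptions of the question (jMock 2, JUnit 4 and Spring's Errors/Validator interfaces on the classpath; the tiny validator is inlined only to keep the example self-contained), the "no expectations specified" error goes away once the mock is told which calls are acceptable:
    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;
    import org.springframework.validation.Errors;
    import org.springframework.validation.Validator;
    public class NameValidatorTest {
        // stand-in validator, defined here only to keep the sketch self-contained
        static class NameValidator implements Validator {
            public boolean supports(Class<?> clazz) {
                return String.class.equals(clazz);
            }
            public void validate(Object target, Errors errors) {
                if (((String) target).length() == 0) {
                    errors.reject("name.empty");
                }
            }
        }
        private final Mockery context = new Mockery();
        @Test
        public void nonEmptyNameProducesNoErrors() {
            final Errors errors = context.mock(Errors.class);
            // the mock must be told what to expect, otherwise jMock reports
            // "unexpected invocation ... no expectations specified"
            context.checking(new Expectations() {{
                never(errors).reject(with(any(String.class)));
            }});
            new NameValidator().validate("some name", errors);
            context.assertIsSatisfied();
        }
    }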

  • Automated GUI testing

    Hey,
    I'm developing an application using UI5 and I'm currently looking for a tool that allows me to record automated GUI test cases.
    Usually we use HP QuickTest for that purpose, but it does not seem to support UI5.
    Can you recommend any software that supports UI5 and can create automated GUI tests?
    Thanks!
    Regards, Timo

    Hi Timo,
    this is not my main expertise, but UI5 development internally also uses QTP. Maybe with additional plug-ins.
    But we are moving towards Selenium for UI tests.
    QUnit, mentioned above, is a code-based unit-testing tool that comes from jQuery and is also heavily used. The official documentation and some of our example test pages (*.qunit.html) should make it easy to write your own tests, but it does not really record UI interaction.
    Regards
    Andreas

  • When making a function module, what is the difference between test / unit test?

    best regards,

    Hi,
    ABAP Unit Tests
    Structure of Unit Tests
    An ABAP Unit test unit (TU) can be a class, function pool, executable program, or module pool. You can call ABAP Unit for testing individual TUs from the Class Builder (SE24), Function Builder (SE37), ABAP Editor (SE38), and SE80. Mass tests can be carried out from the Code Inspector.
    You organize your tests into classes and then into test methods, which are all part of the TU. The system checks small units within the TU, hence the name ABAP Unit. The aim of a test method is to check whether a unit returns the desired result. For this purpose, there are methods of the service class CL_AUNIT_ASSERT that execute comparisons between target and actual values calculated by a unit. The results of all unit tests of a TU are then shown in the ABAP Unit result display.
    Example
    Let us look at an example to help clarify this. Our minimalist TU is a percentage calculator that delegates this task to a subroutine:
    REPORT  Percentages.
    PARAMETERS: price TYPE p.
    PERFORM minus_ten_percent CHANGING price.
    WRITE price.
    FORM minus_ten_percent CHANGING fprice TYPE p.
      price = fprice * '0.9'.
    ENDFORM.                    "Minus_ten_percent
    We add a test class to the report, which tests whether the subroutine correctly calculates the percentage for a specific value.
    CLASS test DEFINITION FOR TESTING.  "#AU Risk_Level Harmless
    "#AU Duration Short
      PRIVATE SECTION.
        METHODS test_minus_ten_percent FOR TESTING.
    ENDCLASS.                   
    CLASS test IMPLEMENTATION.
      METHOD test_minus_ten_percent.
        DATA: testprice type p value 200.
        PERFORM minus_ten_percent CHANGING testprice.
        cl_aunit_assert=>assert_equals( act = testprice exp = 180
                            msg = 'ninety percent not calculated correctly').
      ENDMETHOD.                   
    ENDCLASS.  
    The test class and the test method TEST_MINUS_TEN_PERCENT are recognized by the tool through the addition FOR TESTING. The additions "#AU RISK_LEVEL HARMLESS and "#AU DURATION SHORT are not optional comments but annotations containing required technical information. With RISK_LEVEL you specify to what extent, if at all, executing the test endangers critical data. The time is specified so that any tests in endless loops are automatically cancelled due to a timeout.
    In the transaction SAUNIT_CLIENT_SETUP, you can make settings for an entire system with regard to the RISKLEVEL of tests and the times assigned to the individual attributes that specify the permitted duration (DURATION: SHORT, MEDIUM, and LONG). For example, in some development systems you will want to prohibit tests that change important data. If the test is cancelled for any reason, it is possible that some data may be left with invalid values or in an inconsistent state. Therefore, you will set the permitted RISKLEVEL suitably low in such systems.
    Service of the Class CL_AUNIT_ASSERT
    As parameters, the service method CL_AUNIT_ASSERT=>ASSERT_EQUALS requires a target value and an actual value, which is the result of the tested calculation; it then compares the two values. The MESSAGE parameter should contain a text that explains what went wrong during the test. You can also pass a parameter with the service method CL_AUNIT_ASSERT=>ASSERT_EQUALS that specifies the severity of the error, as well as a parameter stating how to proceed if the test fails: Should the test method simply continue, or should the current test method, the entire test class, or all tests of the TU be canceled?
    In addition to this method, the class CL_AUNIT_ASSERT also provides other methods, which execute different checks and pass the results to the framework. Most importantly, you should note that you can use the parameters of the methods of the service class CL_AUNIT_ASSERT to control the flow of the whole unit test and include information in the test methods that will later provide you with important details about any errors in the result display.
    ABAP Unit Result Display
    On the left side of the result display, all the tests of your TU are grouped into a task. In our case, this is only one test class with a single method:
    Via the name of the method, you can see what parts of the test method caused errors during the test:
    The message you passed with the service method is displayed at the top. Below that, you can see which values diverge and can navigate to the class via the stack line.
    In our example, a closer examination of the code reveals that the input parameter of the report was confused with the change parameter of the subroutine.
