Clear unit test results
Hello,
How can I delete or clear unit test results?
I can't find any functionality in SQL Developer.
B>K
Hi klenikk,
You can do this in 3.0 in several ways:
1. In the Unit Test navigator, right click a test and select "Purge Test Results".
2. In the menu bar, select Tools/Purge Run Results to remove results for ALL tests.
3. In the results tab of a test editor, right click a run node and select "Delete Result".
I hope this helps.
Philip Richens,
SQL Developer Team.
Similar Messages
-
Unit Tester: Manually Deleting Old Unit Test Results
Oracle, as you know, there isn't currently a way to delete the results of previous unit test runs.
Well, those previous test runs are giving us very annoying slow response times in the Unit Tester GUI with some of our tests, because of the amount of time the interface takes to load the results of previous tests.
For example, one of our unit tests took the interface 1 minute, 12 seconds simply to open, even though we weren't on the Results tab. As a unit test is run more and more, the interface gets slower and slower with that unit, because of the increasing number of unit test results.
I know you're working on a long-term solution, but to help us work with these kinds of tests now, can Oracle please provide us with the SQL statements necessary to delete all of a given unit test's results from the unit tester repository? It would make my programmers much less annoyed when working with these particular unit tests and the Unit Tester in general.
I'm guessing that if I go spelunking through the Unit Tester's repository tables I could probably figure it out, but I'm looking for the "Oracle-sanctioned" way, I guess.
Thanks.
Hi,
here are the commands to delete the results from your repository tables.
Make a backup of the tables first(!):
create table t_ut_test_impl_results as select * from ut_test_impl_results ;
create table t_ut_test_impl_val_results as select * from ut_test_impl_val_results ;
create table t_ut_test_results as select * from ut_test_results ;
create table t_ut_suite_results as select * from ut_suite_results ;
create table t_ut_suite_test_results as select * from ut_suite_test_results ;
-- delete data from tests
delete from ut_test_impl_val_results ;
delete from ut_test_impl_results ;
delete from ut_test_results ;
-- delete suite results
delete from ut_suite_test_results ;
delete from ut_suite_results ;
These scripts are not Oracle-sanctioned.
Use them at your own risk.
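If you only need to purge the results of a single test rather than everything, a variant like the following might work. This is a hypothetical sketch: the UT_TEST lookup table and the NAME and UT_ID columns are assumptions about the repository schema, so inspect the actual tables (DESC ut_test, DESC ut_test_results, etc.) and adjust before running anything.

```sql
-- Hypothetical per-test purge. UT_TEST, NAME and UT_ID are assumed
-- schema details; verify them against your own repository first.
delete from ut_test_impl_val_results
 where ut_id in (select ut_id from ut_test where name = 'MY_TEST');
delete from ut_test_impl_results
 where ut_id in (select ut_id from ut_test where name = 'MY_TEST');
delete from ut_test_results
 where ut_id in (select ut_id from ut_test where name = 'MY_TEST');
commit;
```

As with the full purge above, back the tables up first.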
HTH
Oliver -
Unit Testing - Results greater than 0
I am unit testing a PL/SQL function. The function has no inputs and one output (INTERVAL DAY TO SECOND). The output is the time it takes to run a query, so any output value greater than 0 is valid.
When I set up the test, the expected result only accepts a literal INTERVAL DAY TO SECOND value. I want to be able to say that any result greater than 0 is a success.
How can I do this?
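One approach, sketched here under the assumption that {RETURN$} is the substitution variable SQL Developer provides for the function's return value (check your version's documentation for the exact name): uncheck the result's check box and add a Boolean-function process validation such as:

```sql
-- Hedged sketch: succeed whenever the returned interval is positive.
-- {RETURN$} is assumed to be replaced with the function's return value.
BEGIN
  RETURN {RETURN$} > INTERVAL '0' SECOND;
END;
```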
tom -
TFS Build - Unit Test is showing wrong result
HI Team,
I have a build definition(“BDPR's Playground CI Build”) which is compiling 2 solutions and running the unit tests in them. Globally I have
6 unit tests that pass, 0 that fail and 2 that are inconclusive, which gives a total of 8 unit tests. When I run this build definition, what is reported in the build summary is “6/6 test(s) passed, 0 failed, 0 inconclusive”, which is obviously wrong. It looks
like inconclusive unit tests are completely ignored by the build system. Is this a bug or am I missing some subtle option somewhere in the build definition to get this right?
Displaying a given build summary in Visual Studio gives less information than the web portal. The web portal reports the number of inconclusive unit tests (but with a wrong value - see previous item) whereas Visual Studio doesn't report the inconclusive unit tests at all. Example: for the build "BDPR's Playground CI Build_20150409.5", the web portal reports "6/6 test(s) passed, 0 failed, 0 inconclusive" while Visual Studio reports "6 of 6 test(s) passed"! Is this a bug or is there some misconfiguration somewhere?
Regards
Hem
Hi Hem,
Thanks for your post.
What’s the version of your TFS and VS?
Yes, as far as I know, inconclusive unit test results are not shown in the TFS build summary by default. But you can check the TFS build diagnostic log to see which inconclusive unit tests were executed in the build.
-
We are working on some unit tests for HANA and we have two problems:
When running the tests on browser it uses the browser language as language for tests, as we are using text joins the language may affect the tests. Is it possible to force English to be used as language independently of the browser language?
We need to do some tests that involve currency conversion. The conversions are made automatically by the HANA Server. How can we isolate this behavior to use in the unit tests?
Thanks and regards
Jefferson
Hello Arti,
with release 7.02 onwards there will be a special workbench tool that enables developers to execute single tests and measure their coverage. In releases earlier than 7.02 there is NO way to provide a similar service to developers.
Measuring coverage for mass runs should be possible, however. I propose the following approach:
- create a dedicated user account (e.g. ABAPUNIT)
- create a test group in SCOV and assign the dedicated user
- schedule a mass run for the dedicated user with the help of the Code Inspector
- review the unit test results in the Code Inspector
- review the coverage in SCOV
- reset the coverage result in SCOV prior to each follow-up analysis
Best Regards
Klaus -
How to generate Unit Test reports through ANT/Command line?
Hi,
I am using sql developer unit test feature to test my database code. I am planning to execute and generate reports by running ant script.
Is it possible to get the unit test results in any format (text, XML, HTML) after running the tests?
How do I integrate report generation tasks as part of automated builds?
Is there any command line utility I can invoke through an Ant task?
Thanks,
Fernando
Fernando,
I, too, am looking to run our PL/SQL unit test suites through our automated Ant build scripts. Currently, I've only been able to determine that there is a "UtUtil.bat" and "UtUtil.sh" command line utility for Windows and Linux in /sqldeveloper/sqldeveloper/bin. However, it only takes three switches:
UtUtil -run ?
UtUtil -imp ?
and
UtUtil -exp ?
While this does provide some limited value to us through automating the importing of our exported test suites and then running them as part of our builds, it doesn't help in running reports on the test runs and exporting the reports to something our build processes can consume (i.e. xml).
Also, we want to be able to run our full DB build on (almost) any of our development machines, and we don't want to have to have a unit test repository preconfigured on each development DB. I haven't found a utility to automate creating the unit test repository user and the repository, and then, when all the test suite runs have finished and the reports have run, deleting the repository and repository user.
We have used Quest's Code Tester product in the past and it had all of these great features that I am really hoping Oracle can either implement or expose to us.
Regards and best of luck,
Mike Sanchez -
Hi All,
I would like to ask all: can anyone explain to me the concept of unit testing in FICO and how it should be carried out? If possible, send any documentation on this topic to my mail id: [email protected]
Thanks with regards,
Bala
Hi,
refer below; reward if it helps.
Unit testing is done in bits and pieces. For example, in the SD standard order cycle we have 1-create order, then 2-delivery, then 3-transfer order, then 4-PGI and then 5-invoice. So we will be testing 1, 2, 3, 4 and 5 separately, one by one, using test cases and test data. We will not be checking/testing any integration between order and delivery; delivery and TO; TO and PGI; and then invoice.
Whereas in system testing you will be testing the full cycle with its integration, using test cases which give a full cyclic test from order to invoice.
In security testing you will be testing different roles and functionalities and will check and sign off.
Performance testing refers to how much time it takes to perform some action, e.g. PGI. If the BPP definition says 5 seconds for PGI then it should be 5 and not 6 seconds. Usually it is done using software.
Regression testing refers to a test which verifies that some new configuration does not adversely impact existing functionality. This will be done in each phase of testing.
User Acceptance Testing: Refers to Customer testing. The UAT will be performed through the execution of predefined business scenarios, which combine various business processes. The user test model is comprised of a sub-set of system integration test cases.
We use different software during testing. The most commonly used are:
Test Director: which is used to record requirement, preparing test plan and then recording the progress. We will be incorporating defects that are coming during these testings using different test cases.
Mercury Load Runner: is used for performance testing. This is an automatic tool.
What do the following terms mean:
- Technical Unit Testing
- Functional Unit Testing
- Integration Testing
- Volume Testing
- Parallel Testing?
Technical Unit Testing = Test of some technical development such as a user exit, custom program, or interface. The test usually consists of a test data set that is processed according to the new program. A successful test only proves the developed code works and performed the process as designed.
Functional Unit Testing = Test of configuration, system settings or a custom development (it may follow the technical unit testing). These usually use actual data, or data that is masked but essentially the same as a real data set. A successful test shows that the development or configuration works as designed and the resulting data is accurate.
Integration Testing = Testing a process, development or configuration within the context of any other functions that it will touch or integrate with. The test should examine all data involved across all modules and any data indirectly affected. A successful test indicates that the processes work as designed and integrate with other functions without causing problems in any integrated areas.
Volume Testing = Testing a full data set that is either actual or masked to ensure that the full volume does not cause system problems such as network transmission problems, system resource issues, or any systemic problem. A successful test indicates that the processes will not slow or crash the system when a full data set is used.
Parallel Testing = Testing the new system or processes with a complete data set while running the same processes in the legacy system. A successful test will show identical results when the legacy system and new system results are compared.
I would also note that when a new implementation is being done you will want to conduct at least one cut-over test from the old system to the new, and you should probably do several.
What kind of testings that are carried out in testing server?
1. Individual testing (testing individually what we've created)
2. Regression testing (the entire process)
3. Integration testing (along with other integrated modules)
The 3 types of testing are as follows:
1. Unit testing (where an individual process relevant to SD or MM etc. is tested)
2. Integration testing (where a process is tested that cuts across all areas of SAP)
3. Stress testing (where lots of transactions are run to see if the system can handle the data)
http://www50.sap.com/businessmaps/6D1236712F84462F941FDE131A66126C.htm
Unit test issues
The unit test tools test all SAP development work that handles business object processing for the connector. They also enable you to test the interaction of your work with the ABAP components of the connector. The test tools allow you to test your development work as an online user (real-time) only.
Note:
It is important to understand the differences between testing the connector as an online user and testing the connector as if operating as a background user.
The differences between testing the connector as an online user or as a background user are described as follows:
Memory--When testing a business object, the connector must log into the SAP application.
The connector runs as a background user, so it processes in a single memory space that is never implicitly refreshed until the connector is stopped and then restarted (therefore it is critical in business object development to clear memory after processing is complete). Since you are an online user, memory is typically refreshed after each transaction you execute.
For more information, see Developing business objects for the ABAP Extension Module. Any problems that may occur because of this (for example, return codes never being initialized) are not detected using the test tool; only testing with the connector will reveal these issues.
Screen flow behavior--Screen flow behavior is relevant only when using the Call Transaction API. The precise screen and sequence of screens that a user interacts with is usually determined at runtime by the transaction's code. For example, if a user chooses to extend a material master record to include a sales view by checking the Sales view check box, SAP queries the user for the specific Sales Organization information by presenting an additional input field. In this way, the transaction source code at runtime determines the specific screen and its requirements based on the data input by the user. While the test tool does handle this type of test scenario, there is a related scenario that the test tool cannot handle.
SAP's transaction code may present different screens to online users versus background users (usually for usability versus performance). The test tool only operates as an online user. The connector only operates as a background user. Despite this difference, unit testing should get through most of the testing situations. -
Hi,
I am having an issue with unit testing a function module. The issue is that an interface parameter of the FM is not accessible in a subroutine, because the parameter is passed like a field symbol. I get a dump when I try to access the interface parameter.
Can anyone please help me out on this? The following is my unit test method:
METHOD get_user_formats.
DATA lv_format TYPE char35.
PERFORM get_user_formats USING '2'.
READ TABLE user_format INTO lv_format INDEX 1.
" Expected Result
cl_aunit_assert=>assert_equals( act = lv_format
exp = 'DATE = MM/DD/YYYY').
ENDMETHOD.
FORM get_user_formats USING datfm TYPE datfm.
DATA: lv_format TYPE char35.
SELECT SINGLE ddtext FROM dd07t INTO lv_format
WHERE domname = 'XUDATFM' AND
ddlanguage = sy-langu AND domvalue_l = datfm.
CONCATENATE 'DATE = ' lv_format INTO lv_format SEPARATED BY space.
APPEND lv_format TO user_format.
CLEAR lv_format.
* Comment: user_format is a table in TABLES parameter of FM. It is having single field of type CHAR35
ENDFORM. " get_user_formats
Please note that I get the dump ONLY at the time of unit testing. Otherwise the program runs fine. -
Unit Test Variable Substitution in PL/SQL User Validation code not running
Hi
I am using new Unit Test Feature in SQL Developer 2.1.0.62.
I have created a test implemented to test a function. The function has a VARCHAR2 parameter as input and returns a BINARY_INTEGER.
I would like to perform 'Process Validation' instead of specifying an explicit 'Result'. The check box 'Test Result' is unchecked.
I have seen in the docs that I can use substitution variables in my user-defined PL/SQL code. I try
IF {RETURNS$} < 0
THEN ...
but I always get the error
ERROR
<RETURN> : Expected: [Any value because apply check was cleared], Received: [-103]
Validation User PL/Sql Code failed: Non supported SQL92 token at position: 181: RETURNS$
Obviously, the program doesn't recognize {RETURN$}.
Am I missing something?
br
Reiner
Hi all,
I have installed the latest version of SQL Developer (2.1.1) and that fixed the problem - it must have been a bug.
The only problem was that I got an ORA-904 TEST_USER_NAME... error. I exported my tests, dropped the repository, created a new one and reimported everything. Now it works as it should.
br
Reiner -
Unit Test Validation for Output Ref Cursor Not Working
Here is the problem:
I have a stored procedure as follows:
CREATE OR REPLACE
PROCEDURE usp_GetEmployee(
p_employeeId IN NUMBER,
cv_employee OUT Sys_RefCursor )
AS
BEGIN
OPEN cv_employee FOR SELECT * FROM employees WHERE employee_id=p_employeeid;
END usp_GetEmployee;
For this, I am implementing a unit test.
* In the "Select Parameters" step, I am unchecking the "Test Result" check box for the cursor OUT variable.
* In the "Specify Validations" step, I am choosing "Boolean Function" and putting the following PL/SQL code:
DECLARE
emp_rec {cv_employee$}%rowtype;
BEGIN
FETCH {cv_employee$} INTO emp_rec;
IF {cv_employee$}%FOUND THEN
RETURN TRUE;
ELSE
RETURN FALSE;
END IF;
RETURN TRUE;
END;
But, when I try to execute this Test, I get the following error:
Validation Boolean function failed: Unable to convert <oracle.jdbc.driver.OracleResultSetImpl@4f0617> to REF CURSOR.
If I run in the debug mode, I get the following content in a dialog box:
The following procedure was run.
Execution Call
BEGIN
"ARCADMIN"."USP_GETEMPLOYEE"(P_EMPLOYEEID=>:1,
CV_EMPLOYEE=>:2);
END;
Bind variables used
:1 NUMBER IN 1001
:2 REF CURSOR OUT (null)
Execution Results
ERROR
CV_EMPLOYEE : Expected: [Any value because apply check was cleared], Received: [EMPLOYEE_ID COMMISSION_PCT SALARY
1001 0.2 8400
Validation Boolean function failed: Unable to convert <oracle.jdbc.driver.OracleResultSetImpl@31dba0> to REF CURSOR.
Please suggest how to handle this issue.
Thanks,
Rahul
979635 wrote:
But, when I try to execute this Test, I get the following error:
Validation Boolean function failed: Unable to convert <oracle.jdbc.driver.OracleResultSetImpl@4f0617> to REF CURSOR.
If I run in the debug mode, I get the following content in a dialog box:
The following procedure was run.
Execution Call
BEGIN
"ARCADMIN"."USP_GETEMPLOYEE"(P_EMPLOYEEID=>:1,
CV_EMPLOYEE=>:2);
END;
Bind variables used
:1 NUMBER IN 1001
:2 REF CURSOR OUT (null)
Try explicitly declaring the ref cursor instead of using a bind variable, something like (untested):
declare
foo sys_refcursor;
begin
test_procedure(foo);
end;
Alternately, in SQL*Plus use the VARIABLE command to set a named bind variable of type REFCURSOR and use the named bind variable in your test.
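A minimal SQL*Plus sketch of that alternative, assuming the usp_GetEmployee procedure and the employee id from the question:

```sql
-- Bind a REFCURSOR variable, pass it to the procedure, print the rows.
VARIABLE rc REFCURSOR
EXECUTE usp_GetEmployee(1001, :rc);
PRINT rc
```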
Edited by: riedelme on Jan 23, 2013 7:10 AM -
Need to write international Unit Test.
Hello All. I'd like to write a 16-bit charset unit test and view the results in Eclipse. Can anyone tell me how to do so?
I'm writing an application that needs to support international characters. The app will someday support Japanese, Hindi, Hebrew, Chinese, etc. No one in the office has written code in these languages.
I'd like to write a JUnit test to ensure that my persistence layer can save "Hello World" in one of these languages and retrieve the exact String. I found a site showing me how to write Hello World in Japanese (http://unix.org.ua/orelly/java-ent/servlet/ch12_03.htm), but System.out.println("\u4eca\u65e5\u306f\u4e16\u754c"); displays "?????". I'd like to be able to preview the unit test from Eclipse and run it from Ant. What do I need to do to accomplish this?
Thanks in advance,
Steven
Hi,
Steps to be followed for UTP.
UTP : Unit Test Plan. Testing the program by the developer who developed the program is termed as Unit Test Plan.
Two aspects are to be considered in UTP.
1. Black Box Testing
2. White Box Testing.
1. Black Box Testing : The program is executed to view the output.
2. White Box Testing : The code is checked for performance tuning and syntax errors.
Follow below mentioned steps.
<b>Black Box Testing</b>
1. Cover all the test scenarios in the test plan. The test plan is usually prepared at the time of technical spec preparation, by the testing team. Make sure that all the scenarios mentioned in the test plan are covered in the UTP.
2. Execute your code for positive and negative tests. Positive tests - execute the code and check that the program works as expected. Negative tests - execute the code to check whether it works in scenarios in which it is not supposed to work. The code should work only in the specified scenarios and not in all cases.
<b>White Box Testing.</b>
1. Check the Select statments in your code. Check if any redundant fields are being fetched by the select statements.
2. Check If there is any redundant code in the program.
3. Check whether the code adheres to the Coding standards of your client or your company.
4. Check if all the variables are cleared appropriately.
5. Optimize the code by following the performance tuning procedures.
<b>Using tools provided by SAP</b>
1. Check your program using <b>EXTENDED PROGRAM CHECK</b>.
2. Use SQL Trace to estimate the performance and the response of each statement in the code. If changes are required, mention them in the UTP.
3. Use Runtime Analyser and Code Inspector to test your code.
4. Paste the screen shots of all the tests in the UTP document. This gives a clear picture of the tests conducted on the program.
All the above steps are to be mentioned in UTP.
Regards,
Vara -
Unit Testing, Null, and Warnings
I have a Unit Test that includes the following lines:
Dim nullarray As Integer()()
Assert.AreEqual(nullarray.ToString(False), "Nothing")
The variable "nullarray" will obviously be null when ToString is called (ToString is an extension method, which is the one I am testing). This is by design, because the purpose of this specific unit test is to make sure that my ToString extension method handles null values the way I expect. The test runs fine, but Visual Studio 2013 gives the following warning:
Variable 'nullarray' is used before it has been assigned a value. A null reference exception could result at runtime.
This warning is to be expected, and I don't want to stop Visual Studio 2013 from showing this warning or any other warnings, just this specific case (and several others that involve similar scenarios). Is there any way to mark a line or segment
of code so that it is not checked for warnings? Otherwise, I will end up with lots of warnings for things that I am perfectly aware of and don't plan on changing.
Nathan Sokalski [email protected] http://www.nathansokalski.com/
Hi Nathan Sokalski,
Variable 'nullarray' is used before it has been assigned a value. A null reference exception could result at runtime.
Was the warning above thrown when you built the test project, while the test still ran successfully? I assume yes.
Is there any way to mark a line or segment of code so that it is not checked for warnings?
There is no built-in way to exclude a specific code snippet or line from checking during compilation, but we can configure specific warnings not to be reported in Visual Basic through project Properties -> Compile tab -> warning configurations box.
For detailed information, please see: Configuring Warnings in Visual Basic.
Another way is to correct your code logic so that the code will not generate the warning.
If I misunderstood you, please tell us what code you want it not to be checked for warnings with a sample so that we can further look at your issue.
Thanks,
-
Unit Testing / Integration Testing
Hi all
Can somebody provide links to SAP testing....
Can anybody define and write down the step-by-step details of carrying out unit testing, integration testing and user acceptance testing - how it is done, in which systems, with which tools, test scripts / test cases etc.
hi,
For all kinds of testing the first thing is to enumerate the scenarios, i.e. prepare a list of the scenarios that need to be tested. This would include the test data and also the desired results. Templates are provided by the consultants to the end users. These templates have a description of the scenarios, the data used, the transaction codes, the documents generated, and whether the desired results were obtained. These need to be certified by the people concerned.
Integration and unit testing can be done on the same or different clients depending on the system landscape. User acceptance testing generally comes after the integration tests have been satisfactorily conducted. Different kinds of users are asked to do the tests once their IDs, roles and profiles are created. This phase should be handled as if it were a live production environment.
saurabh -
Hi,
We have done a technical migration of value-based roles to derived roles, and we are facing problems designing the unit testing approach for it. Can you please suggest what the unit testing approach should be, and how to create test cases for authorizations, specifically for derived roles created from value-based roles?
The goal is that after testing, end users should not notice any changes in the roles approach.
Thanks.
Regards,
Swapnil
<removed_by_moderator>
Edited by: Julius Bussche on Oct 7, 2008 3:40 PM
Hi Swapnil,
The testing of security roles needs to be taken in a two-step approach:
Step 1 Unit Testing in DEV
A. Prepare the test cases for each of the derived roles; your main focus is to see whether you can execute all the tcodes that have been derived from the parent role without authorization errors. You also need to verify that each of the derived roles is applicable to its respective org-level values.
B. Because there will not be enough data in DEV (except in some cases where you have a fresh refresh of PROD data), it is always advisable to do the actual testing of the roles in QA. The goal here is to see whether you can perform a dry run of all tcodes/reports/programs that belong to the roles.
C. You may create fewer unit test IDs, as you assign only one ID to one role, and once the role is tested you can assign the same ID to another role.
Step 2 Integration Testing in QA
A. Prepare the Integration Test cases for each of the Derived roles. Here most likely the testing will be performed by the end users/Business Analysts in that respective Business Process. Each test case must reflect the possible Org level Authorization Objects and Values that need to be tested.
B. As integration testing is a simulation of the actual production authorization scenario, care must be taken when creating multiple integration test user IDs, assigning them the right roles, and sending the IDs to the end users to perform the testing in QA.
C. The objective here is that the end user must feel comfortable with the test cases and perform both positive and negative testing. Testing results must be captured and documented for any further analysis.
D. In an event of any authorization errors from Integration testing, the authorization errors will be sent to the Security team along with SU53 screenshots. The roles will be corrected in DEV and transported back to QA and the testing continues.
E. Also, the main objective of integration testing is to check whether the transactions reflect the right data when executed; any mismatch in the data is a direct indication that the derived roles do not contain the right org-level values.
Hope this helps you to understand how testing of Security roles (Derived) is done at a high level.
Regards,
Kiran Kandepalli.
Edited by: Kiran Kandepalli on Oct 7, 2008 5:47 AM -
Suggestions requested for Unit Testing process and build processes.
Hi All,
We are using WebLogic Workshop 8.1 SP2 to build our web app. One thing I am trying to put together is a "Best Practises" list for aspects of Workshop development, particularly unit testing, continuous build methodology, source control management etc. I have been through the "Best Practises Guide" that comes with the Workshop help, but it doesn't address these issues. This could help us all on future projects.
1) Could anyone give pointers on how to perform unit testing using either JUnit/JUnitEE in the Workshop realm, given that Controls cannot be accessed directly from POJO test classes?
2) For a project of, say, 5 developers, does it make sense to have a nightly build using tools like CruiseControl? We use CVS for our source control and it's working out pretty well, but we currently have no unit tests that can be run after the build and provide reports on what broke and what didn't.
I am sure we all would appreciate any suggestions and more questions on this topic.
Thanks,
Vik.
Hi, Chris,
can you perhaps explain your solution in greater detail. I am really curious to
find a way to test controls.
"Chris Pyrke" <[email protected]> wrote:
>
I have written (well, it's a bit of a dirty hack really) something that lends itself to the name ControlTest (it unit tests controls). It's a blend of JUnit and Cactus with some of the source of each brutalised a bit to get things to work (not much - it was a couple of hours' work, when I was supposed to be doing something else).
To write a control test you code something like...
package com.liffe;
import com.liffe.controlunit.ControlTestCase;
import controls.Example;
public class TestExample extends ControlTestCase {
    private Example example = null;
    public void setUp() {
        example = (Example) getControl("example");
    }
    public void testExample() {
        this.assertNotNull(example);
        String result = example.getData();
        assertEquals(result, "Happy as Larry");
    }
}
Other tasks required to set up a test are creating a web project with a JPF which needs some cut-and-paste code (14 lines) in its begin method, and a JSP which is also cut-and-paste (5 lines). (I.e. create a standard web project and paste in 2 pieces of code.)
In the web project you need to create a control (A) with an instance name of controlContainer. (If it's called something else, the pasted-in code will need changing to reflect that.)
In this control you need to put an instance of TestContainerImpl and any controls that need testing.
You then need to add a method to the control (A) that looks like...
/**
 * @common:operation
 */
public String controlTestRun(String theSuiteClassName, boolean xsl) {
    container.setControl("example", example);
    return container.controlTestRun(theSuiteClassName, xsl);
}
You need to call container.setControl for each control being tested, and the object 'container' is the instance name of the TestContainerImpl that was put in.
There are 4 jars (JUnit, Cactus etc.) that go in the library. You will also need the ControlUnitBase project (or maybe just its jar).
To use it you call a URL like:
http://localhost:7001/TestWeb/Controller.jpf?test=com.liffe.TestExample
TestWeb is the name I gave to my web project - this will be different for each test project.
com.liffe.TestExample is the class above and will therefore be different for each test case.
You can also call
http://localhost:7001/TestWeb/Controller.jpf?test=com.liffe.TestExample&xsl=true
(note the extra &xsl=true) and the browser will (if it can) render it more prettily.
This seems to do the job quite nicely, but there are several caveats I would hope someone from BEA would be able to address before I start using it widely.
1) To access the control you need to create it (e.g. as a subcontrol in the control (A) above). To get it into the test case you need to pass it around as an Object (you can't return a Control from a control operation). As it's being passed around among plain Java (POJO) classes, I'm assuming the control remains in the control container context, so this is OK. It seems to work, and the Object is some form of proxy, as any control seems to be reproxied before the control is invoked from the test case.
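To make the reproxying idea concrete, the standard JDK java.lang.reflect.Proxy API shows how an interface can be wrapped so that every call is intercepted before it reaches the real object. Whether Workshop's container proxies are actually built this way is my assumption; the ExampleControl interface below is a hypothetical stand-in:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Hypothetical stand-in for a control's public interface.
    public interface ExampleControl {
        String getData();
    }

    // Wrap a target so every call passes through an InvocationHandler
    // first, which is where a container could re-establish its context.
    public static ExampleControl proxyFor(ExampleControl target) {
        InvocationHandler handler = (Object proxy, Method method, Object[] args) ->
                method.invoke(target, args); // forward to the real control
        return (ExampleControl) Proxy.newProxyInstance(
                ExampleControl.class.getClassLoader(),
                new Class<?>[] { ExampleControl.class },
                handler);
    }
}
```

A caller holding the proxy sees the ExampleControl interface, not the concrete class, which matches the observation that the Object handed to the test case is "some form of proxy".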
2) If I'm testing controls called from a JPD, they need to be in a control project (with my test cases called from a web project), which makes for a large increase in project numbers (we already have this problem and are trying to resist it). To avoid it, since a process project is a brain-damaged web project, I simply perform some brain surgery and augment the process project with some standard files found in any old web project. This means I can call the test JPF from a browser. This seems nasty; is there a better way?
3) I would like to be able to deliver without the test code. At worst, the code can be in place but must be inaccessible in a production environment. I don't know how best to do this - any suggestions (without creating lots of projects, or lots of manual effort)?
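One low-effort option for point 3 (my suggestion, not anything from BEA) would be to gate the test entry point behind a JVM system property that only development servers set, so the URL is harmless in production even if the classes ship:

```java
// Hypothetical guard; the property name is made up for illustration.
public class TestGate {
    public static final String FLAG = "controlunit.enabled";

    // Returns true only when -Dcontrolunit.enabled=true was passed
    // on the server's command line.
    public static boolean testsEnabled() {
        return Boolean.parseBoolean(System.getProperty(FLAG, "false"));
    }
}
```

The controlTestRun operation could then return an error string immediately when testsEnabled() is false, rather than running any test suite.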
If anyone has read this far, I would ask: does this seem like the kind of thing that would be useful? Hopefully a future version of Workshop will have something to enable unit testing and this hacking will be unnecessary. Could someone from BEA tell me if this is a dangerous way to do things?
Chris
"vik" <[email protected]> wrote:
Hi All,
We are using WebLogic Workshop 8.1 SP2 to build our web app. One thing I am trying to put together is a "Best Practices" list for aspects of Workshop development, particularly unit testing, continuous build methodology, source control management, etc. I have been through the "Best Practices Guide" that comes with the Workshop help, but it doesn't address these issues. This could help us all for future projects.
1) Could anyone give pointers on how to perform unit testing using either JUnit or JUnitEE in the Workshop realm, given that controls cannot be accessed directly from plain old Java test classes?
2) For a project of, say, 5 developers, does it make sense to have a nightly build using tools like CruiseControl? We use CVS for our source control and it's working out pretty well, but we currently have no unit tests that can be run after the build to report on what broke and what didn't.
I am sure we would all appreciate any suggestions and more questions on this topic.
Thanks,
Vik.