Monitoring and unit testing

How do I do monitoring and unit testing of an interface from the Runtime Workbench?

Hi,
The Runtime Workbench (RWB) is a tool in SAP XI/PI which helps you to monitor messages.
Please find the link below:
http://help.sap.com/saphelp_nw04/helpdata/en/88/21bc3ff6beeb0ce10000000a114084/content.htm
You can also search on SDN or http://help.sap.com.
RWB runs on the Java stack of XI. In RWB you can monitor messages on the Integration Server and on the Adapter Engine as well.
In RWB we have component monitoring and message monitoring:
i) In component monitoring you can monitor the status of your communication channels.
ii) In message monitoring you can monitor your messages in XI.
Regards
srinivas

Similar Messages

  • [svn:osmf:] 10437: Add support and unit tests for parsing VAST documents (inline or wrapper).

    Revision: 10437
    Author:   [email protected]
    Date:     2009-09-20 13:31:16 -0700 (Sun, 20 Sep 2009)
    Log Message:
    Add support and unit tests for parsing VAST documents (inline or wrapper).   All elements are covered with the exception of Video.
    Modified Paths:
        osmf/trunk/libs/VAST/.flexLibProperties
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTAdPackageBase.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTDocument.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTInlineAd.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTWrapperAd.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/parser/VASTParser.as
        osmf/trunk/libs/VASTTest/org/openvideoplayer/vast/parser/TestVASTParser.as
    Added Paths:
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTAdBase.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTCompanionAd.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTNonLinearAd.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTTrackingEvent.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTTrackingEventType.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTUrl.as
        osmf/trunk/libs/VAST/org/openvideoplayer/vast/model/VASTVideoClick.as

    Your problem sounds similar to this one, except that they're using software RAID rather than just pure LVM. If you're using an initrd, you may not have the appropriate modules installed. If you're not using an initrd, then the kernel probably needs LVM support compiled in (not as a module), and the problem could be solved by fixing that. I have never used the ck-patchset, but this should give you an additional data point.
    Also, comment=systemd.automount will be deprecated soon, as I understand it; if you need automounting, x-systemd.automount should be used instead.

  • [svn:osmf:] 11184: Adding new temporal metadata support, including sample app and unit tests.

    Revision: 11184
    Author:   [email protected]
    Date:     2009-10-27 11:08:21 -0700 (Tue, 27 Oct 2009)
    Log Message:
    Adding new temporal metadata support, including sample app and unit tests.
    Added Paths:
        osmf/trunk/apps/samples/framework/CuePointSample/
        osmf/trunk/apps/samples/framework/CuePointSample/.actionScriptProperties
        osmf/trunk/apps/samples/framework/CuePointSample/.flexProperties
        osmf/trunk/apps/samples/framework/CuePointSample/.project
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/AC_OETags.js
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/history/
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/history/history.css
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/history/history.js
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/history/historyFrame.html
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/index.template.html
        osmf/trunk/apps/samples/framework/CuePointSample/html-template/playerProductInstall.swf
        osmf/trunk/apps/samples/framework/CuePointSample/libs/
        osmf/trunk/apps/samples/framework/CuePointSample/src/
        osmf/trunk/apps/samples/framework/CuePointSample/src/CuePointSample.css
        osmf/trunk/apps/samples/framework/CuePointSample/src/CuePointSample.mxml
        osmf/trunk/framework/MediaFramework/org/osmf/metadata/TemporalFacet.as
        osmf/trunk/framework/MediaFramework/org/osmf/metadata/TemporalFacetEvent.as
        osmf/trunk/framework/MediaFramework/org/osmf/metadata/TemporalIdentifier.as
        osmf/trunk/framework/MediaFramework/org/osmf/video/CuePoint.as
        osmf/trunk/framework/MediaFramework/org/osmf/video/CuePointType.as
        osmf/trunk/framework/MediaFrameworkFlexTest/org/osmf/metadata/TestTemporalFacet.as
        osmf/trunk/framework/MediaFrameworkFlexTest/org/osmf/video/TestCuePoint.as

    Greg -
    The metadata sample demonstrates how metadata can be placed on an IMediaResource and used to give the framework more information about the resource. The MediaFactory uses this information to determine which type of media a given resource points to. One of the initial problems we encountered when creating a media factory that created MediaElements from URLResources was the inability to distinguish between image, video, and audio URLs, since the file extension isn't a guaranteed indicator.
    The metadata framework that was developed last sprint was provided to enable metadata such as XMP, NameValue, and CuePoints. Not all of the specific classes that hold this data have been created yet, however. The logic to add metadata automatically hasn't been placed in the framework yet.
    It's currently possible to retrieve onPlayStatus, onMetaData, onXMPData, onImageData, and onCuePoint callbacks:
    var videoElement:VideoElement;
    // After videoElement loads:
    videoElement.client.addHandler(NetStreamCodes.ON_XMP_DATA, onXMPData);
    function onXMPData(info:Object):void
    {
        // Process XMP here.
    }
    Hope this helps.

  • 2.1.0.62: Problem with Package.Functions and Unit Tests

    I like the new SQL Developer - I started trying unit tests as described here: Link: [http://www.oracle.com/technology/obe/11gr2_db_prod/appdev/sqldev/sqldev_unit_test/sqldev_unit_test.htm#t4]
    It worked with a test procedure. Now I am trying to test my package functions, but all I get is this:
    The following procedure was executed.
    Execution call
    BEGIN
    :1 := "PKG_MYPACK"."CREATEFUNCTION"(IN_PROGRAMMEID=>:2,
    IN_AMOUNT=>:3,
    IN_SWS=>:4);
    END;
    Bind variables used
    1 INTEGER OUT (null)
    2 INTEGER IN 1
    3 INTEGER IN 10
    4 INTEGER IN 11
    Execution results
    ERROR
    Invalid conversion requested
    oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:439)
    oracle.jdbc.driver.OraclePreparedStatement.setObjectCritical(OraclePreparedStatement.java:7723)
    oracle.jdbc.driver.OraclePreparedStatement.setObjectInternal(OraclePreparedStatement.java:7496)
    oracle.jdbc.driver.OraclePreparedStatement.setObjectInternal(OraclePreparedStatement.java:7978)
    oracle.jdbc.driver.OracleCallableStatement.setObject(OracleCallableStatement.java:4063)
    oracle.jdbc.driver.OraclePreparedStatementWrapper.setObject(OraclePreparedStatementWrapper.java:221)
    oracle.dbtools.raptor.datatypes.strategies.callablestatement.CallableBindingDatum.customBindIN(CallableBindingDatum.java:135)
    oracle. ...
    What can I do?

    Created
    Bug 8976245 - EA1: UNIT_TEST: INVALID CONVERSION ERROR USING INTEGER PARAMETER
    and have asked bug responder to keep you updated on status here in the forum.
    Bad news is that any INTEGER parameter for which you specify a non-null value will fail.
    Possibly helpful news is that if you create a 'clone' of your function using NUMBER as the data type, you can continue to experiment with how unit testing may be of use to you.
    Sorry no instant answer. :(
    Brian
    SQL Developer Team

  • Unit test and program

    Hi all,
    I created a Z table, a table maintenance generator, and a transaction code for the Z table.
    Now my query is that I want to create a program for the Z table to test the data.
    Please provide me the program and a unit test plan for the Z table. It's urgent.

    Hi,
    Please check if this helps.
        MODIFY s076 FROM TABLE i_final2.
        IF sy-subrc = 0.
          MESSAGE s900 WITH text-007. "Data inserted into table S076
        ELSE.
          MESSAGE s900 WITH text-012. "Data not inserted into table S076
        ENDIF.
    You can display such messages on your program's output screen after the data is inserted into the Z table.
    regards,
    Priya.

  • VI Analyzer Front Panel Size and Postion Test fails

    Hi
    We make a lot of use of the VI Analyzer, and I have been playing with the configuration settings to try to make it fail only the tests that matter to us. There is one test I do not understand correctly.
    From the LabVIEW help
    Panel Size and Position—Checks that a front panel
    completely resides within the bounds of the screen. The test also checks whether
    the front panel is larger than the maximum specified width and height. If you
    are using a multi-monitor system, the test fails if the panel does not reside
    entirely within the bounds of the primary monitor. This test works only on
    standard, control, and global VIs.
    I would like to apply this test as a simple check that developers are not creating huge VIs. It sounds simple enough, but I cannot seem to get this test to pass.
    I am working with two monitors, and this test fails when my VI is on either monitor, with the message "This VI's front panel does not reside entirely within the specified bounds (1280 x 1024)". My current screen resolution is 1280 x 1024, so I assume that as long as my VI front panel fits within one screen it should pass.
    I can clearly see that my VI front panel (and block diagram, for that matter) fits within the screen of either monitor. If I look at the VI properties of my VI, the Window Size is 474 x 513. Could anybody please suggest why this might be failing?
    cheers 
    Dannyt
    Danny Thomson CLAD
    Sub10 Systems Ltd

    Hi Norbert,
    Thanks for the prompt reply. Asking the question, together with your reply, made me dig into this a little deeper, and I have managed to solve my problem.
    In case anyone else has a similar problem here are the details.
    When you have two monitors and look at the Windows display properties, you see one monitor is identified as 1 and the other as 2; these identities are based on the graphics card and which monitor is plugged into which output connector. You can then choose which monitor you wish to act as your primary monitor. You would suspect it would not matter once you have selected your primary monitor, BUT IT DOES.
    My initial set-up:
    Left Hand Monitor    - identity 2, selected to be primary monitor
    Right Hand Monitor - identity 1
    With this set-up everything looks and acts as I expect it to: task bar, icons etc. on my primary monitor. However, if you run the VI Analyzer it will fail Panel Size and Position on BOTH the left and right hand monitors.
    I crawled under my desk and swapped the monitor cables around, so now the set-up is:
    Left Hand Monitor    - identity 1, selected to be primary monitor
    Right Hand Monitor - identity 2
    Now the VI Analyzer's Panel Size & Position test will pass on the left hand monitor but fail on the right hand monitor, just as I would expect.
    It appears that LabVIEW is looking not only at the Windows "Primary Monitor" setting but also at the identities; this does not seem to be the behaviour I would expect, nor the correct behaviour.
    cheers 
    Dannyt
    Danny Thomson CLAD
    Sub10 Systems Ltd

  • Unit Test Module cannot create repository

    Our DBA has created a user with the permissions listed in the documentation, and unit test repository creation still states that basic permissions need to be met. In addition, I am now asked to upgrade my (nonexistent) repository, and the basic permissions are still not met. What exactly does the repository admin require in order to create a repository?
    Additional problems:
    When the DBA types in the correct SYS user password so that SQL Developer can create the correct permissions, it states the password is wrong. How does the SYS user log in as SYSDBA through that dialog? That is our best guess as to why it won't recognize the password. This is also why I am asking what our DBA needs to simply get the admin user running.
    Also, after the repository is created, what permissions can be removed from that user? With our DBAs, less is more :)
    I appreciate any help; I realize this is a duplicate, but following other threads has not yielded results either.
    10gr2 DB, 3.0 now 3.1 Sql Developer
    Edited by: sinistral on Feb 14, 2012 11:15 AM

    Hi sinistral -
    Instructions for setting up a unit test repository in a (dba friendly ;) ) locked down environment are in {message:id=3985967}
    That link points directly to the message for the minimum permissions set-up. Alternatively, a few messages back is one with info on how a DBA can set up the roles/permissions as SQL Developer would, so you don't get asked for SYSDBA info and can use the shared repository feature. {message:id=3985438}
    Brian Jeffries
    SQL Developer Team
    Edited by: bjeffrie on Feb 15, 2012 11:58 AM to include reference to 'full permissions' info.
    Edited by: bjeffrie on Feb 15, 2012 12:45 PM to fix links

  • Unit tests and QA process

    Hello,
    (disclaimer : if you agree that this topic does not really belong to this forum please vote for a new Development Process forum there:
    http://forum.java.sun.com/thread.jspa?forumID=53&threadID=504658 ;-)
    My current organization has a dedicated QA team.
    They ensure end-user functional testing but also run and monitor "technical" tests as well.
    In particular they would want to run developer-written junit tests as sanity tests before the functional tests.
    I'm wondering whether this is such a good idea, and how to handle failed unit tests:
    1) Well, indeed, I think this is a good idea: even if developers all abide by the practice of ensuring 100% of their tests pass before promoting their code (which is unfortunately not the case), integration of independent developments may cause regressions or interactions that make some tests fail.
    Any reason against QA running junit tests at this stage?
    However, the next question is: what do they do with failed tests? QA has no clue how important a given unit test is with regard to the whole application.
    Maybe a single unit test failed out of 3500 means a complete outage of a 24x7 application. Or maybe 20% of failed tests only means a few misaligned icons...
    2) The developer of the failed package may know, but how can he communicate this to QA?
    Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile.
    Are there recommended methods?
    3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA?
    Block the process until 100% of the tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    4) Does your acceptance process require 100% pass before user acceptance starts?
    Indeed I have ruled out requiring 100% pass, but is this a widespread practice?
    I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    This has to be analyzed of course, as tests have to be maintained as well, but this can be a parallel process to the user acceptance (accepting that the software may have to be patched at some point during the acceptance).
    Thank you for your inputs.
    J.

    >
    Any reason against QA running junit tests at this
    stage?
    Actually running them seems pointless to me.
    QA could be interested in the following
    - That unit tests do actually exist
    - That the unit tests are actually being run
    - That the unit tests pass.
    This can all be achieved as part of the build process, however. It can either be done for every CM build (like an automated nightly) or just for release builds.
    This would require that the following information was logged
    - An id unique to each test
    - Pass fail
    - A collection system.
    Obviously doing this is going to require more work and probably code than if QA was not tracking it.
    However the next question is, what do they do with
    failed tests : QA has no clue how important a given
    unit test is with regard to the whole application.
    Maybe a single unit test failed out of 3500 means a
    complete outage of a 24x7 application. Or maybe 20%
    of failed tests only means a few misaligned icons...
    To me that question is like asking what happens if one class fails to build for a release build.
    To my mind any unit test failure is logged as a severe exception (the entire system is unusable.)
    2) The developer of the failed package may know, but
    how can he communicate this to QA?
    Javadocing their unit testing code ("This test is
    mandatory before entering user acceptance") seems a
    bit fragile.
    Are there recommended methods?
    Automatic collection, obviously. This has to be planned for.
    One way is to just log success and failure for each test, gathered in one or more files. Then a separate process munges the result file to collect the data.
    I know that there is a Java build engine (an add-on to Ant, or a wrapper around Ant) which will do periodic builds and email reports to developers. I think it even allows for categorization so the correct developer gets the correct error.
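    For example, a rough JUnit 4 sketch of that per-test logging idea (the class names and the log file name here are made up, not taken from any particular tool) could look like this:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    import org.junit.runner.Description;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;
    import org.junit.runner.notification.RunListener;

    // Appends one line per test (unique id + PASS/FAIL) to a results file
    // that a separate collection step can aggregate later.
    public class ResultLogListener extends RunListener {
        private final PrintWriter out;
        private String lastFailedTest; // id of the test that most recently failed

        public ResultLogListener(String path) throws IOException {
            this.out = new PrintWriter(new FileWriter(path, true)); // append mode
        }

        @Override
        public void testFailure(Failure failure) {
            lastFailedTest = failure.getDescription().getDisplayName();
            out.println(lastFailedTest + " FAIL");
        }

        @Override
        public void testFinished(Description description) {
            String id = description.getDisplayName(); // e.g. "alwaysPasses(ResultLogListener$FooTest)"
            if (!id.equals(lastFailedTest)) {
                out.println(id + " PASS");
            }
            out.flush();
        }

        @Override
        public void testRunFinished(Result result) {
            out.close();
        }

        // Placeholder test class so the sketch is self-contained.
        public static class FooTest {
            @org.junit.Test
            public void alwaysPasses() {
                org.junit.Assert.assertTrue(true);
            }
        }

        public static void main(String[] args) throws IOException {
            JUnitCore core = new JUnitCore();
            core.addListener(new ResultLogListener("unit-test-results.log"));
            core.run(FooTest.class);
        }
    }

    A QA-side script can then aggregate the unit-test-results.log files from each build without needing to know anything about the individual tests.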
    >
    3) Even the developer of the failed package may not
    realize the importance of the failure. So what
    should be the process when unit tests fail in
    QA?
    Block the process until 100% tests pass? Or, start
    acceptance anyway but notify the developper through
    the bug tracking system?
    I would block it.
    4) Does your acceptance process require 100% pass
    before user acceptance starts?
    No. But I am not sure what that has to do with what you were discussing above. I consider unit tests and acceptance testing to be two separate things.
    Indeed I have ruled out requiring 100% pass, but is
    this a widespread practice?
    I rule it out because maybe the failed test indeed
    points out a bad test, or a temporary unavailability
    of a dependent or simulated resource.
    Then something is wrong with your process.
    When you create a release build you should include those things that should be in the release. If they are not done, missing, have build errors, then they shouldn't be in the release.
    If some dependent piece is missing for the release build then the build process has failed. And so it should not be released to QA.
    I have used version control/debug systems which made this relatively easy by allowing control over which bug/enhancements are included in a release. And of course requiring that anything checked in must be done so under a bug/enhancement. They will even control dependencies as well (the files for bug 7 might require that bug 8 is added as well.)

  • Unit Testing, Null, and Warnings

    I have a Unit Test that includes the following lines:
    Dim nullarray As Integer()()
    Assert.AreEqual(nullarray.ToString(False), "Nothing")
    The variable "nullarray" will obviously be null when ToString is called (ToString is an extension method, which is the one I am testing). This is by design, because the purpose of this specific unit test is to make sure that my ToString extension
    method handles null values the way I expect. The test runs fine, but Visual Studio 2013 gives includes the following warning:
    Variable 'nullarray' is used before it has been assigned a value. A null reference exception could result at runtime.
    This warning is to be expected, and I don't want to stop Visual Studio 2013 from showing this warning or any other warnings in general, just in this specific case (and several others that involve similar scenarios). Is there any way to mark a line or segment of code so that it is not checked for warnings? Otherwise, I will end up with lots of warnings for things that I am perfectly aware of and don't plan on changing.
    Nathan Sokalski [email protected] http://www.nathansokalski.com/

    Hi Nathan Sokalski,
    Variable 'nullarray' is used before it has been assigned a value. A null reference exception could result at runtime.
    Was the warning above thrown when you built the test project, while the test itself ran successfully? I assume yes.
    Is there any way to mark a line or segment of code so that it is not checked for warnings?
    There is no built-in way to exclude a code snippet or a single code line from being checked during compilation, but you can configure specific warnings not to be reported in Visual Basic through Project Properties -> Compile tab -> warning configurations box.
    For detailed information, please see: Configuring Warnings in Visual Basic
    Another way is to correct your code logic so that the code will not generate the warning.
    If I misunderstood you, please tell us what code you do not want checked for warnings, with a sample, so that we can look further into your issue.
    Thanks,

  • Unit Testing with Microsoft Sharepoint Emulators and Fakes with Visual Studio 2013

    Hi All,
    I have created a test project and am now creating test cases for SharePoint. I found a link on MSDN which suggests using the Fakes framework, but it supports VS2012 and I am using Visual Studio 2013.
    So how can I use it with VS2013, or is there any other way in which I can implement the test cases with VS2013?
    Please suggest.
    Thanks in advance.
    Himanshu Nigam

    Hi HimanshuNigam,
    According to your description, my understanding is that you want to use the Fakes framework to create test cases for a SharePoint project in Visual Studio 2013.
    If you want to test using the Fakes framework, you can use the CodePlex extension to achieve it. It supports Visual Studio 2013.
    Here is a detailed article for your reference:
    Better Unit Testing with Microsoft Fakes
    For how to include the NuGet package, you can use the links below:
    NuGet Package Manager for Visual Studio 2013
    Installing NuGet
    If you still have questions about this issue, I suggest you create a post in the Visual Studio forums, where more experts can help you and you can get more detailed information:
    https://social.msdn.microsoft.com/Forums/vstudio/en-US/home?category=visualstudio%2Cvsarch%2Cvsdbg%2Cvstest%2Cvstfs%2Cvsdata%2Cvsappdev%2Cvisualbasic%2Cvisualcsharp%2Cvisualc
    Best Regards
    Zhengyu Guo
    TechNet Community Support

  • Unit Testing of servlets and jsp by Junit

    Hi,
    Please suggest how to do unit testing with JUnit.
    Thanks in advance

    Or abstract the business logic away from the actual servlets themselves (always a good idea) and test those classes outside the context of a servlet framework.
    That limits the code that actually is impacted by being a servlet to the absolute minimum (essentially parameter passing).
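    As a minimal sketch of that idea (the class names are hypothetical, and the servlet API plus JUnit 4 are assumed to be on the classpath), the servlet is reduced to parameter passing while the business logic lives in a plain class that JUnit can test directly:

    // File: GreetingService.java - plain business logic, no servlet API, trivial to unit test.
    public class GreetingService {
        public String greet(String name) {
            return (name == null || name.isEmpty()) ? "Hello, guest" : "Hello, " + name;
        }
    }

    // File: GreetingServlet.java - only extracts parameters and delegates.
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GreetingServlet extends HttpServlet {
        private final GreetingService service = new GreetingService();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().write(service.greet(req.getParameter("name")));
        }
    }

    // File: GreetingServiceTest.java - the JUnit test never touches the servlet container.
    import org.junit.Assert;
    import org.junit.Test;

    public class GreetingServiceTest {
        @Test
        public void greetsByName() {
            Assert.assertEquals("Hello, Bob", new GreetingService().greet("Bob"));
        }

        @Test
        public void fallsBackToGuestForMissingName() {
            Assert.assertEquals("Hello, guest", new GreetingService().greet(null));
        }
    }

    Because the test only exercises GreetingService, it needs no container and no HttpServletRequest mock, and it runs as an ordinary JUnit test.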

  • Unit Testing and Code Coverage

    Is there any way to see graphs and charts of how much code was covered during Unit Tests in OBPM 10GR3 from the CUnit and PUnit tests?
    We use Clover Reports in Java Projects.
    Any such tool for OBPM 10GR3 projects?

    Here are some more
    Triggers and DB links are not available in Oracle Explorer - it would be great to have them in there - I found triggers under tables - but I would much prefer them to be broken out under their own node rather than nested under table
    I think others have mentioned this but when you query a table (Retrieve Data) - it would be great to be able to specify a default number of records to retrieve - the 30 second timeout is great - but more control would be nice - also a way to control the timeout would be nice as well
    I noticed that I get different behavior on the date format when I retrieve data (by selecting the option from the table menu when I right click on a table) versus getting the same data through the query window - why isn't it consistent?
    Also - with Intellisense - can you make the icons different for the type of objects that the things represent (like tables versus views versus functions)
    I noticed that I couldn't get dbms_output to show up with Intellisense - I had filtered out of Oracle Explorer all the System objects - does that somehow affect Intellisense as well? I know that the account I am using has access to those packages.
    Also - more control over collapsible regions would be nice - I love that feature of VS - but for ODT it seems to only work at the procedure level (not configurable with some kind of directive etc...)

  • Unit Testing and APEX Global Variables

    We've recently started to unit test our database level PL/SQL business logic.
    As such we have a need to be able to simulate or provide output from PL/SQL APEX components in order to facilitate testing of these components.
    Some of the most obvious portions that need simulation are:
    1. The existence of a session
    2. The current application ID
    3. The current page ID.
    We currently handle requirement #1 by using apex_040100.wwv_flow_session.create_new
    We handle 2 and 3 using the apex_application.g_flow_id and g_flow_step_id global variables.
    I'm just wondering, how safe is it for us to use wwv_flow_session.create_new to simulate the creation of a session at testing time for those things which need a session?
    I've also noticed that there are apex_application.get_application_id and apex_application.get_page_id functions whose output is not tied to the global variables (at least in our current version).
    Is it safe for us to expect that we can set these global variables for use in testing or is apex moving to get_application_id and get_page_id functions away from global variables?
    Will there be corresponding set_application_id and set_page_id functions in the future?
    Sorry for the question bomb. Thanks for any help.

    My first question would be why do you need to establish a session to test your PL/SQL?
    wwv_flow_session is a package internal to APEX, and you should probably leave it be.
    The get_application_id procedure you refer to is in apex_application_install, which is used for scripting installation of applications - not get/set of page ID like you're describing.
    If you're uncomfortable using apex_application.g_flow_id, you can use v('APP_ID') or preferably pass the app_id/page_id as parameters to your procedures.
    Your question seems to have a few unknowns, so that's the best I can describe.
    Scott

  • Unit testing and integration testing

    Hello to all,
    What is the difference between unit and integration testing? In SAP, what does unit testing consist of, and what does integration testing consist of?
    Is this the work of test engineers, or whose work is it?
    Take care,
    love your parents

    Hi Sameer,
    Unit Testing
    A unit test is a procedure used to validate that a particular module of source code is working properly from each modification to the next. The procedure is to write test cases for all functions and methods so that whenever a change causes a regression, it can be quickly identified and fixed. Ideally, each test case is
    separate from the others; constructs such as mock objects can assist in separating unit tests. This type of testing is mostly done by the developers and not by end-users.
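    For illustration only (a hypothetical JUnit 4 example, not specific to SAP), a unit test that isolates the module under test by substituting a simple hand-rolled mock for its collaborator might look like this:

    import org.junit.Assert;
    import org.junit.Test;

    public class PriceCalculatorTest {

        // Collaborator interface the production code depends on.
        interface RateProvider {
            double taxRate(String country);
        }

        // Module under test: depends only on the RateProvider abstraction.
        static class PriceCalculator {
            private final RateProvider rates;
            PriceCalculator(RateProvider rates) { this.rates = rates; }
            double gross(double net, String country) {
                return net * (1 + rates.taxRate(country));
            }
        }

        @Test
        public void addsTaxUsingProvidedRate() {
            RateProvider fixedRate = country -> 0.20; // mock collaborator with a fixed rate
            PriceCalculator calc = new PriceCalculator(fixedRate);
            Assert.assertEquals(12.0, calc.gross(10.0, "DE"), 1e-9);
        }
    }

    Because the rate provider is mocked, a regression in PriceCalculator is identified immediately, independent of any real rate lookup.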
    Integration testing
    Integration testing can proceed in a number of different ways, which can be broadly characterized as top-down or bottom-up. In top-down integration testing the high-level control routines are tested first, possibly with the middle-level control structures present only as stubs. Subprogram stubs are incomplete subprograms which are only present to allow the higher-level control routines to be tested. Thus a menu-driven program may have the major menu options initially present only as stubs, which merely announce that they have been successfully called, in order to allow the high-level menu driver to be tested.
    Top down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Alternatively breadth-first would proceed by refining all the modules at the same level of control
    throughout the application. In practice a combination of the two techniques would be used. At the initial stages all the modules might be only partly functional, possibly being implemented only to deal with non-erroneous data. These would be tested in breadth-first manner, but over a period of time each would be
    replaced with successive refinements which were closer to the full functionality. This allows depth-first testing of a module to be performed simultaneously with breadth-first testing of all the modules. The other major category of integration testing is bottom-up integration testing, where an individual module is
    tested from a test harness. Once a set of individual modules have been tested they are then combined into a collection of modules, known as builds, which are then tested by a second test harness. This process can continue until the build consists of the entire application.
    In practice a combination of top-down and bottom-up testing would be used. In a large software project being developed by a number of sub-teams, or a smaller project where different modules were being built by individuals, the sub-teams or individuals would conduct bottom-up testing of the modules which they were constructing before releasing them to an integration team, which would assemble them together for top-down testing.
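    As a small illustration of the stub idea above (all class and method names are hypothetical), the high-level menu driver can be exercised in a top-down fashion while a lower-level module is still a stub that merely announces it was called:

    // Hypothetical top-down integration sketch: the menu driver is real,
    // the report module is still a stub.
    interface ReportModule {
        void generate(String reportName);
    }

    class ReportModuleStub implements ReportModule {
        @Override
        public void generate(String reportName) {
            System.out.println("ReportModule stub called for: " + reportName);
        }
    }

    class MenuDriver {
        private final ReportModule reports;

        MenuDriver(ReportModule reports) {
            this.reports = reports;
        }

        // High-level control routine under test.
        void select(String option) {
            if ("monthly-report".equals(option)) {
                reports.generate("monthly");
            } else {
                System.out.println("Unknown option: " + option);
            }
        }
    }

    public class TopDownDemo {
        public static void main(String[] args) {
            // Top-down test: real driver, stubbed lower level.
            new MenuDriver(new ReportModuleStub()).select("monthly-report");
        }
    }

    As real ReportModule implementations become available, the stub is replaced without changing the driver, which is exactly the successive refinement described above.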
    I think this will help.
    Thanks ,
    Saptarshi

  • Unit testing and Issue on ODS

    Hi every one,
    I was asked to do unit testing on all the objects I created, which includes data loading, checking the structure, etc. If anyone has a template for unit testing, please forward it to me at [email protected].
    My ODS is designed without checking "Activate ODS data immediately", and we are not using process chains here.
    While testing data loads, whenever we send data to that ODS, how do we activate it through jobs automatically?
    If we use Job selection it may not work out right, because we don't know when we load data. So can I make use of events, so that whenever data comes into the ODS it is activated and pushed to the other ODS? Can anyone throw some light on events and which event to use for automatic activation?
    If anyone has a template, please forward it to the above ID.
    Thanks in advance

    Doing a unit test is simple. Just document whatever steps you follow while loading your ODS. You could follow these steps:
    1. Check the functionality of your InfoSource, whether it is full or delta - take a screen shot.
    2. Check the delta load functionality if you have any delta extractors.
    3. Schedule the InfoPackage.
    4. Go to Manage on the data target and see whether the ODS was loaded successfully or not.
    5. Check whether it is activated or not. In your case it will automatically get activated since you selected automatic activation.
    Don't forget to take screen shots of each process.
    Hope this helps
    Thanks
    Sat
