Test Framework

Hey guys,
Sorry if I'm posting in the wrong place. Just wondering if anyone knows of good testing tools or frameworks specifically for handling Session Initiation Protocol (SIP) servers and instant messaging applications. Sorry for the very open-ended question, but anything will help if anyone can provide info.
Thanks in advance.
Mark

Search this forum for "JCTerminal" and you will find several examples for the two parameters.
Jan

Similar Messages

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used OK. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can later be used to replay the SQL commands, see the explain plans that relate to the SQL and the wait events that occurred during execution, and that provides at the end a profile listing telling you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings while being sure that everything will still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set-based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping (see the sketch after this list)
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - at an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - other ideas around testing?
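    To make the 10046 idea concrete, here's a rough sketch of the on/off mechanics from JDBC (connection details and identifiers here are invented, and the Oracle JDBC driver is assumed to be on the classpath); in OWB itself the same two ALTER SESSION calls would sit in pre- and post-mapping activities:
    import java.sql.*;

    public class MappingTraceToggle {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details for the target warehouse instance
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dwhost:1521/dw", "etl_user", "etl_pass");
                 Statement stmt = conn.createStatement()) {
                // Tag the trace file so it is easy to find in user_dump_dest
                stmt.execute("ALTER SESSION SET tracefile_identifier = 'owb_map_trace'");
                // Level 12 captures bind variables as well as wait events
                stmt.execute("ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'");
                // ... execute the mapping / batch SQL here ...
                stmt.execute("ALTER SESSION SET EVENTS '10046 trace name context off'");
            }
        }
    }
    The resulting trace file can then be formatted with TKPROF and the profile loaded into the repository tables.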
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a Creative Commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have any existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • [svn] 3526: The call to the TestNG task in the configuration test framework had haltOnFailure set to true, which is not what we want

    Revision: 3526
    Author: [email protected]
    Date: 2008-10-08 14:21:40 -0700 (Wed, 08 Oct 2008)
    Log Message:
    The call to the TestNG task in the configuration test framework had haltOnFailure set to true which is not what we want. Failures will get logged to the database at which point we can review them.
    Also fix a failing test.
    Modified Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/build.xml
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/services-config.xml
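    For reference, the kind of change described would look something like this with the TestNG Ant task (names here are illustrative - the actual build.xml isn't shown):
    <taskdef resource="testngtasks" classpath="testng.jar"/>
    <testng classpathref="test.classpath" outputdir="test-output" haltonfailure="false">
        <!-- failures are recorded in the results db rather than halting the build -->
        <xmlfileset dir="." includes="testng.xml"/>
    </testng>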

    I have a standard Ant build script for signing a jar file. I import it into my master Ant build files with
    <import file="Sign.xml"/>
    and then in my master Ant script I setup the name of the jar file e.g.
    <property name="jar-file" value="${fun}/FunApplet.jar"/>
    and then invoke a target
    <target name="sign-jar" depends="jar, sign">
    </target>
    Since this target (sign-jar) depends on targets 'jar' and 'sign', it executes the 'jar' target and then the 'sign' target that is contained in Sign.xml.
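    A minimal sketch of what the imported Sign.xml might contain (the keystore properties are invented - the real file isn't shown; the master build just has to define ${jar-file} and these properties before the 'sign' target runs):
    <project name="Sign">
        <target name="sign">
            <!-- sign the jar named by the master build file -->
            <signjar jar="${jar-file}"
                     keystore="${keystore}"
                     alias="${key-alias}"
                     storepass="${storepass}"/>
        </target>
    </project>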

  • Unit Test Framework for 8.6

    Hi all,
    Do we have Unit Test Framework Toolkit for LabVIEW 8.6?
    We have the Unit Test Framework Toolkit for LabVIEW 8.6.1 (Unit Test Framework Toolkit 1.0).
    Thanks,
    Suresh Kumar.G

    John Harby <[email protected]> wrote:
    Did you ever find anything? We are looking too ...
    I have been reading a great book that has given me some ideas, but I have not solidified any proofs of concept as of yet. Check out Vincent Massol's book JUnit in Action. He has some great working examples of Mock Objects and Stubs using Cactus and Jetty. What I am thinking is that in a separate "Java Project" within the application, we can extend JUnit and create whatever global objects a process needs and then make SOAP-based calls to the JPD, since the JPD is derived from a web service. So the other piece to this is gaining experience unit testing SOAP...
    - Noam
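    A rough sketch of that idea, assuming JUnit 3 and the SAAJ API is available (the endpoint URL and operation name are invented):
    import junit.framework.TestCase;
    import javax.xml.soap.*;

    public class JpdSmokeTest extends TestCase {
        private SOAPConnection conn;

        protected void setUp() throws Exception {
            // Shared fixture: one SOAP connection per test
            conn = SOAPConnectionFactory.newInstance().createConnection();
        }

        public void testProcessResponds() throws Exception {
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            request.getSOAPBody().addChildElement("ping");  // hypothetical operation
            SOAPMessage reply = conn.call(request, "http://localhost:7001/MyProcess.jpd");  // hypothetical JPD URL
            assertFalse(reply.getSOAPBody().hasFault());
        }

        protected void tearDown() throws Exception {
            conn.close();
        }
    }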

  • Test coverage in LabView Unit Test Framework

    Hi,
    can somebody from NI confirm the following two statements about the Unit Test Framework:
    1. The framework does not support "recursive coverage metrics", where the coverage considers sub-VIs that are executed in the VI under test.
    2. 100% coverage means something weaker than common "branch coverage". For example, an "if" VI is a branch in the program but it is not considered as a branch by LabView's test coverage metrics.
    Thanks,
    Peter

    Hello Johannes,
    I'm interested in branch coverage of a VI under test.
    Imagine a VI A that calls another VI B. If A is tested and LV's unit test framework reports 100% test coverage for A, it is possible that the test cases didn't visit all frames (branches) in B.
    Now my question is: is it possible that LV thinks of A as "flattened" so that all code in B is considered as code of A?
    Peter

  • How to run a process PSPPYRUN or PSPPYBLD in PeopleSoft Test Framework ?

    How to run a process PSPPYRUN or PSPPYBLD in PeopleSoft Test Framework ?
    Please advise on the below script:
         Step  Exec   Type     Action       Recognition / Parameters
         1     True   Browser  Start_Login
         2     True   Browser  Set_URL      PORTAL
         3     True   Link     Click        id=fldra_HC_NORTH_AMERICAN_PAYROLL
         4     True   Link     Click        id=fldra_HC_PROCESS_PAYROLL
         5     True   Link     Click        id=fldra_HC_CREATE_PAYSHEETS
         6     True   Link     Click        innerText=Create Paysheets
         7     True   Browser  FrameSet     TargetContent
         8     False  Page     Prompt       MANAGE_PAYROLL_PROCESS_US.RUNCTL_PAYSHEET.USA  add update
         9     True   Text     Set_Value    Name=PRCSRUNCNTL_RUN_CNTL_ID  AA
         10    False  Page     PromptOk
         14    True   Button   Click        Name=#ICSearch  Search
         11    True   Text     Set_Value    Name=PAYSHEET_RUNCTL_RUN_ID  K01FIN
         12    True   Page     Save
         13    False  Process  Run          prcname=psppybld;prctype=COBOL SQL;wait=True
         18    False  Process  Run          prcname=psppybld
         16    True   Process  Run          wait=True;expected=Success;
         15    True   Button   Click        Name=#ICSave

    Build M33.106 is not available in the Service Marketplace yet.
    SAP informed us that customers cannot find information on the next build release date from the Product Availability Matrix (PAM) or anywhere on the SAP site.
    Is there a way to get this information from the Redwood site, like when the next build release is expected?
    Thanks
    Nanda

  • [SOLVED] google test framework: linking issue

    Hello,
    I have got a problem with the Google test framework; a few days ago I was trying this on another machine and did not get any problems.
    I got this from svn and built libgtest.a. Then I tried to compile the first sample as:
    g++ -I../include -L../ -lgtest -lpthread ../src/gtest_main.cc sample1.cc sample1_unittest.cc
    But I got a lot of linking errors like:
    /tmp/ccVHpTQc.o: In function `main':
    gtest_main.cc:(.text+0x28): undefined reference to `testing::InitGoogleTest(int*, char**)'
    gtest_main.cc:(.text+0x2d): undefined reference to `testing::UnitTest::GetInstance()'
    gtest_main.cc:(.text+0x35): undefined reference to `testing::UnitTest::Run()'
    /tmp/ccQuonuE.o: In function `FactorialTest_Negative_Test::TestBody()':
    sample1_unittest.cc:(.text+0x99): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
    sample1_unittest.cc:(.text+0xac): undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
    sample1_unittest.cc:(.text+0xb8): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x15d): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
    sample1_unittest.cc:(.text+0x170): undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
    sample1_unittest.cc:(.text+0x17c): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x221): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
    sample1_unittest.cc:(.text+0x234): undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
    sample1_unittest.cc:(.text+0x240): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x271): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x2b4): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x2f9): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    As a workaround I unpacked the library and got gtest-all.cc.o:
    ar x ../libgtest.a
    So when I link samples with this object file - no errors:
    g++ -I../include -L../ gtest-all.cc.o -lpthread ../src/gtest_main.cc sample1.cc sample1_unittest.cc
    My g++ version is
    [ds|samples]$ g++ --version
    g++ (GCC) 4.7.2
    Copyright (C) 2012 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    What can be wrong with my gcc?

    The problem was fixed by changing the ordering a little - I just moved -lgtest to the end. The linker resolves symbols left to right, so libraries have to come after the object files that reference them:
    g++ -I../include -L../ ../src/gtest_main.cc sample1.cc sample1_unittest.cc -lgtest -lpthread

  • Dynamic data in Composite Test Framework Input Request

    Hi All,
    I have created composite test framework test suites to test existing soa11g bpel composite.
    I have a requirement to have a unique id in the input element every time I test a BPEL service.
    <input>
    <ID>unique id</ID>
    </input>
    Can we generate unique ids in input data from test frame work?
    Can we achieve this ?
    Thanks,
    Praveen

    I remember that we had a prototype for this - but it seems that code never made it into the delivered code base. You might want to file an SR and ask for an enhancement request.

  • [svn] 3229: Made some updates to the config test framework.

    Revision: 3229
    Author: [email protected]
    Date: 2008-09-16 12:15:34 -0700 (Tue, 16 Sep 2008)
    Log Message:
    Made some updates to the config test framework. This should be able to run on all the regression boxes now assuming I got the names of all the log files correct for the different app servers. After this checkin, I will update the regression scripts to start running the config framework tests under automation. This will be another antcall from the run.tests target in automation.xml which will run the tests and then load the results to the test results db.
    Modified Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/build.xml
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/DestinationWithNoChannelTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/DestinationWithNoIDTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidAcknowledgeModeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidDeliveryModeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidDestinationTypeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidMessageTypeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoConnectionFactoryTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/error.txt
    blazeds/trunk/qa/resources/frameworks/qa-frameworks.zip

    Despite the workaround, it doesn't fix the real problem. It shouldn't be a huge deal for Adobe to add support for multiple svn versions. Dreamweaver is the first tool I've used that works with svn that doesn't support several types of svn metadata. If they're going to claim that Dreamweaver supports svn, it should actually support svn - the current version, not a version several years old. This should have been among the first patches released, or at least after Snow Leopard came out (and packaged with it the current version of svn).
    Does anyone know if the code that handles metadata formatting is something that is human readable, or where it might be, or is it in compiled code?
    I signed up for the forums for the sole purpose of being able to vent about this very frustrating and disappointing situation.

  • New version of sapyto - SAP Penetration Testing Framework

    Hello list,
    I'm glad to let you know that a new version of sapyto, the SAP Penetration Testing Framework, is available.
    You can download it by accessing the following link: http://www.cybsec.com/EN/research/sapyto.php
    News in this version:
    This version is mainly a complete re-design of sapyto's core and architecture to support future releases. Some of the new features now available are:
    . Target configuration is now based on "connectors", which represent different ways to communicate with SAP services and components. This makes the framework extensible to handle new types of connections to SAP platforms.
    . Plugins are now divided into three categories:
         . Discovery: Try to discover new targets from the configured/already-discovered ones.
         . Audit: Perform some kind of vulnerability check over configured targets.
         . Exploit: Are used as proofs of concept for discovered vulnerabilities.
    . Exploit plugins now generate shells and/or sapytoAgent objects.
    . New plugins!: User account bruteforcing, client enumeration, SAProuter assessment, and more...
    . Plugin-developer interface drastically simplified and improved.
    . New command switches to allow the configuration of targets/scripts/output independently.
    . Installation process and general documentation improved.
    . Many (many) bugs fixed. :P
    Enjoy!
    Cheers,
    Mariano

    Hi Mariano,
    Thanks for the update.
    We implemented secinfo restrictions 5 years ago, but used a rather complicated approach. We did some tests today (the "local" setting works okay so far) and will continue tomorrow.
    We now use the HOST and USER-HOST set to "local" and let the application security deal with who-can-do-what, and this works quite well; though we have encountered some external 3rd party server programs in some cases. It seems to be popular amongst the business folks, and some of the products use the gateway monitor to communicate with the SAP system to find out when it has completed processing.
    I think this is a design error, but they of course think otherwise.
    What was interesting to note was that we locked ourselves out of an unprotected system. We changed the gw/monitor from 2 to 1 in a test. This worked. But then the gwmon cannot be used to change it back to 2! So we tried RZ11, and experienced the same. So we changed it to 0 in a test, and then 1 was blocked as well. This appears to be implemented in the kernel, as even hobbling the application coding does not help. The parameter is only dynamic when decreasing the value and increasing the security.
    We had to restart the whole system for the instance profile to take effect again. Rather noisy, and a few developers could take an additional 10-minute coffee break as a result.
    We are testing this on 3 different releases with different config:
    - 4.6C (46D)
    - 6.40
    - 7.00
    The different config relates to:
    - gw/sec_info
    - gw/monitor
    - auth/rfc_authority_check
    Our intention behind this is to improve baseline security and harden some special systems further.
    Cheers,
    Julius

  • BlazeDS stress test framework

    Hi all,
    I'm looking for a stress test tool for BlazeDS applications.
    As far as I understand BlazeDS (not that much), Adobe's tool for Flex DS (LCDS) - http://labs.adobe.com/wiki/index.php/Flex_Stress_Testing_Framework - might not be appropriate.
    Am I right here?
    Does anyone have a direction to go with this ?

    Some commercial products are available that have some Flex/BlazeDS support. WebLOAD from RadView has a Flex Add-On that I believe some people have used for testing BlazeDS.
    The Flex Stress Testing Framework, recently updated and renamed the Data Services Stress Testing Framework, only runs on LCDS 2.6.
    http://labs.adobe.com/wiki/index.php/Data_Services_Stress_Testing_Framework
    We are thinking about updating the Data Services Stress Testing Framework to run on BlazeDS as well but don't have any definite plans or schedule for this at this point.
    For the time being, if none of the commercial products work for you, I think you should be able to use the Data Services Stress Testing Framework to test BlazeDS, but I haven't tested this.
    You would deploy the Data Services Stress Testing Framework on LCDS and then in your load test make whichever destination/service you are testing point to your BlazeDS server. For example, if you are load testing a messaging destination on your BlazeDS server, you would have the messaging destination in your load test use a channel with a url that pointed to your BlazeDS server and not your LCDS server. Setting this up would definitely take some understanding of BlazeDS, specifically how channels, destinations, etc. work.
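    For illustration, the channel override in the load test's configuration might look like this (host, port and context root are invented):
    <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
        <!-- point the endpoint at the BlazeDS server rather than the LCDS one -->
        <endpoint url="http://blazeds-host:8400/blazeds/messagebroker/amf"
                  class="flex.messaging.endpoints.AMFEndpoint"/>
    </channel-definition>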
    If that sounds daunting, I would look into one of the commercial products. I think commercial products may be limited at this point in terms of how well they emulate our channel/messaging behaviour but I know some vendors have been working on this so that should be getting better.
    Hope that helps.

  • Peoplesoft Test Framework

    Hi,
    I am using the PS Test Framework and I already have some test cases which function correctly.
    But there is one step which seems to have a problem:
    Type:Browser
    Action:waitForNew
    It waits... the new window opens, but the test execution does not stop waiting; after the waiting time it throws a timeout error. So the opening of the new window does not trigger anything. I cannot fix this problem, so I am asking you for help.
    Facts:
    - some other steps also did not work when I was using two screens with different resolutions. For example, a click on an <img> tag had no effect (but only when clicking on the image; other buttons and links worked)
    - I am using IE 9.0.8112.16421
    - on another machine this step worked well; the IE version was the same and so was the main browser configuration. But the next time it did not work on the other computer either, and I do not know what it depends on. It also happens when I work on one monitor.
    I could not find a workaround and I am blocked, because there are a lot of links which open a new browser instance and there is sometimes no other way to get from A to B in the GUI.
    Thank you in advance.
    Regards,
    Csaba

    Well, I won't speak for Oracle here but rather as a user of the ATS tools for a number of years (all the way back to the original RSWSoftware e-Test suite 3.x in late 1999 then into the Empirix versions and now into Oracle). The suite has been evolving for years since the original "all vb platform" to the new web and java based platforms. So your comment:
    "First they came up with Oracle functional tester (VB Scripting) then Open script (Java) and now for peoplesoft PTF 8.51 I can't get a reason behind this."
    can at least be partially explained by the software evolving... OpenScript was being talked about at the Empirix User Conferences for many years before Oracle officially released it. OFT was really just the old e-Tester that had been hanging around for years and years. I suspect there is a similar story behind the PTF 8.51 release as some of what it represents had been hanging around in PeopleTools for some time and for one reason or another they put a UI on it and released it as yet another tool. PeopleSoft has always marched to their own beat so is it really a surprise :-)
    I suspect - and hopefully Oracle will clarify this - that at some point there will be yet another "accelerator" applied to ATS to integrate with this in the same way that they integrate with the Siebel Test API or EBS. Again, just an opinion here but given that test management, load testing and automated regression testing using ATS is the focus of Siebel, EBS, ADF, Flex, 3rd party Web apps, and Webservices ...etc .... I would say that ATS is the test tool the investments are being made in. I bet at some point they will be hitting us all up for licensing of the "peoplesoft accelerator for leveraging PTF in ATS" - yet another licensing pack! :-)

  • PeopleTools Test Framework issue

    Using 8.51.01 test framework (TF).
    When I launch the browser from within the TF it launches TWO browsers and neither of them captures any output. One browser has the IE8 icon... the other has the old IE6 icon. Very odd. The two-browser-window issue seems to be a known problem with IE 7/8, but it's not clear how to fix it.
    The purpose of this post is to find out in what way the TF modifies the IE settings so it can capture output (the client calls it HOOK INTO IE).
    Is it a proxy setting?
    Thanks
    Graham

  • Test Framework Mojito?

    Hi, I was recently discussing a test framework and remember it being called Mojito. But when I google it, nothing useful comes up. I was wondering if anyone could point me to a website, if this is in fact a test framework.
    Thanks in advance.

    I expect you're talking about Mockito. Not a testing framework, but a mocking framework, often useful in conjunction with, say, JUnit. It's my favourite mocking framework, although I recently discovered Powermock which extends Mockito even further.
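    For a flavour of it, a minimal JUnit + Mockito test (all names invented for illustration):
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    import java.util.List;
    import org.junit.Test;

    public class MockitoFlavourTest {
        @Test
        public void stubsAndVerifies() {
            // Create a mock of the List interface
            @SuppressWarnings("unchecked")
            List<String> mockList = mock(List.class);
            // Stub a call...
            when(mockList.get(0)).thenReturn("hello");
            assertEquals("hello", mockList.get(0));
            // ...and verify it happened exactly once
            verify(mockList, times(1)).get(0);
        }
    }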

  • Test framework with WCF

    Happy New Year guys -
    Designing a test framework for growing test requirements is a daunting task for me. Attached is the concept I am thinking of.
    Notes: Clients can be on the same computer or different ones.
    Question 1: Is it a good idea to use a WCF service based test framework? (The idea is to develop an open test framework where we can attach and detach modules.)
    Question 2: Develop a service for every test module in the library, or
                maintain one service library to manage all test functions?
    My main problem is that I am very new to WCF and C#, but have used C, C++ and COM based apps.
    please share your thoughts or redirect me to right areas.
    thanks
    Jis

    Hi,
    Based on my understanding, you want to create a WCF service to handle communication between the client side and the test machine side, and you also want it to be able to attach and detach modules. For this situation, you could create a WCF service application (not a WCF service library) and create functions for each module, then host the service on IIS. If you want to add more modules, you could add another service file within the WCF service to create functions for the new module.
    For creating WCF service and host it on IIS, you could refer to the following links:
    http://www.codeproject.com/Articles/42643/Creating-and-Consuming-Your-First-WCF-Service
    http://www.codeproject.com/Articles/550796/A-Beginners-Tutorial-on-How-to-Host-a-WCF-Service
    Regards

  • Message Recognition Box - PeopleSoft Test Framework

    I'm trying to get the Message Recognition Box working inside PeopleSoft Test Framework. I have all my messages there, but when I execute my script it won't click OK on the multiple messages I am expecting. What should I put in my step in order to execute the OK?
    Thanks

    Hi,
    I haven't seen this error, nor have I used PTF from the command line - just running online from the tool to the browser - but when I look at PeopleBooks it says:
    If you do not use the -CD= parameter to specify the connection data, use the parameters in the following table:
    -CS= Specify the server:port to connect to. This is the Server:Port value you would enter in the PeopleSoft Test Framework Signon dialog box when signing on to PTF.
    -CNO= Specify the node name.
    -CO= Specify the user name.
    So I would say your command line would have to look like this:
    "C:\Program Files\PeopleSoft\PeopleSoft Test Framework\PsTestFw.exe" -CS=SERVER:PORT -CNO=PT_LOCAL -CO=VP1 -CP=VP1 -TST=OURTEST -TC=DEFAULT -EXO=QE851_No_Folder -LOG=my_run_log
    PeopleBooks does not state that the -CNO and -EXO parameters are optional.
    See PeopleBooks > PeopleTools 8.52: PeopleSoft Test Framework > Creating Tests and Test Cases > Executing a Test from the Command Line
