Test framework for OSB

Is there anything similar to the Test Suites for BPEL described here
http://www.oracle.com/technology/oramag/oracle/07-nov/o67bpel.html
for OSB (formerly ALSB)?
We are looking for a way to execute test cases for OSB flows, just as you can in JDeveloper for BPEL.
thanks

OSB does not have anything like the "Test Suite" described in the link above, but part of the use case can be accomplished with the test console: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/consolehelp/testing.html#wp1052410. Testing can also be done using Workshop for OSB (similar to JDeveloper for BPEL): http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/eclipsehelp/tasks.html#wp1151352. Even though the documentation talks about split-joins, the same approach can be applied to any proxy or business service.
Thanks
Manoj

Similar Messages

  • Unit Test Framework for 8.6

    Hi all,
    Do we have a Unit Test Framework Toolkit for LabVIEW 8.6?
    We have the Unit Test Framework Toolkit for LabVIEW 8.6.1 (Unit Test Framework Toolkit 1.0).
    Thanks,
    Suresh Kumar.G

    John Harby <[email protected]> wrote:
    > Did you ever find anything? We are looking too ...
    I have been reading a great book that has given me some ideas, but I have not
    solidified any proofs of concept as of yet. Check out Vincent Massol's book JUnit
    in Action. He has some great working examples of Mock Objects and Stubs using
    Cactus and Jetty. What I am thinking is that, in a separate "Java Project" within
    the application, we can extend JUnit, create whatever global objects a process
    needs, and then make SOAP-based calls to the JPD, since the JPD is exposed as
    a web service. The other piece of this, then, is experience unit testing SOAP...
    - Noam
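
    Something along those lines might look like this (an untested sketch; the endpoint URL, namespace and operation name are made up, not an actual JPD contract) - a plain JUnit test that drives the process through its web-service interface using SAAJ:

    import javax.xml.soap.*;
    import junit.framework.TestCase;

    // Sketch: exercise a JPD through its web-service interface from JUnit.
    // The endpoint URL, namespace, and operation below are hypothetical.
    public class OrderProcessTest extends TestCase {

        private static final String ENDPOINT =
                "http://localhost:7001/orderApp/OrderProcess.jpd";

        public void testStartProcess() throws Exception {
            SOAPConnection conn =
                    SOAPConnectionFactory.newInstance().createConnection();
            try {
                SOAPMessage request = MessageFactory.newInstance().createMessage();
                // Build the <startOrder> request element (hypothetical operation).
                SOAPElement op = request.getSOAPBody()
                        .addChildElement("startOrder", "po", "http://example.com/order");
                op.addChildElement("orderId", "po").addTextNode("12345");
                request.saveChanges();

                SOAPMessage response = conn.call(request, ENDPOINT);
                // The process ran end-to-end; assert no SOAP fault came back.
                assertFalse("process returned a fault",
                        response.getSOAPBody().hasFault());
            } finally {
                conn.close();
            }
        }
    }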

  • Test framework with WCF

    Happy New Year, guys!
    Designing a test framework for growing test requirements is a daunting task for me. Attached is the concept I am thinking of.
    Notes: clients can be on the same computer or on different ones.
    Question 1: Is it a good idea to use a WCF-service-based test framework? (The idea is to develop an open test framework where we can attach and detach modules.)
    Question 2: Should we develop a service for every test module in the library, or maintain one service library to manage all test functions?
    My main problem is that I am very new to WCF and C#, but I have used C, C++ and COM-based apps.
    Please share your thoughts or redirect me to the right areas.
    thanks
    Jis

    Hi,
    Based on my understanding, you want to create a WCF service to handle communication between the client side and the test-machine side, and you want to be able to attach and detach modules. For this situation, you could create a WCF service application (not a WCF service library), create functions for each module, and host the service on IIS. If you want to add another module, you could add another service file within the WCF service application to hold that module's functions.
    For creating a WCF service and hosting it on IIS, you could refer to the following links:
    http://www.codeproject.com/Articles/42643/Creating-and-Consuming-Your-First-WCF-Service
    http://www.codeproject.com/Articles/550796/A-Beginners-Tutorial-on-How-to-Host-a-WCF-Service
    Regards

  • Best practices or design framework for designing processes in OSB(11g)

    Hi all,
    We have been working with Oracle 10g; in the new project we are going to use SOA Suite 11g. For 10g we designed our services very much along the lines of the AIA framework, but in 11g, with OSB introduced, we cannot fit the AIA framework exactly, because OSB is structured differently from ESB.
    Can anybody suggest best practices or a design framework for designing processes in OSB or 11g SOA Suite?

    http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10223/04_osb.htm
    http://www.oracle.com/technology/products/integration/service-bus/index.html
    Regards,
    Anuj

  • Castor framework improved test coverage for SAPDB/MaxDB

    Hi,
    Just to announce that Castor (an open-source data binding framework for Java) has improved test coverage for SAP DB/MaxDB; with this, maybe MaxDB can gain more community users.
    enjoy
    Clóvis


  • Mocking framework for ABAP unit tests

    Hi, is there a mocking framework for ABAP? Something like EasyMock or Mockito for Java?
    I searched but cannot find anything...
    thanks!

    Hello,
    Now there is one framework available ;-)
    https://github.com/uweku/mockA
    Regards,
    Uwe
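
    For anyone who knows the Java side the question refers to, the pattern a mocking framework gives you looks like this in Mockito (the interface and classes here are made up, purely for illustration; mockA aims at the same style of test in ABAP):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    // A collaborator is stubbed so the class under test runs in isolation.
    // PriceService and Checkout are hypothetical types for this example.
    public class CheckoutTest {

        interface PriceService {
            double priceOf(String sku);
        }

        static class Checkout {
            private final PriceService prices;
            Checkout(PriceService prices) { this.prices = prices; }
            double total(String... skus) {
                double sum = 0;
                for (String sku : skus) sum += prices.priceOf(sku);
                return sum;
            }
        }

        @Test
        public void totalsUsingStubbedPrices() {
            PriceService prices = mock(PriceService.class);
            when(prices.priceOf("A")).thenReturn(2.0);
            when(prices.priceOf("B")).thenReturn(3.0);

            assertEquals(5.0, new Checkout(prices).total("A", "B"), 1e-9);

            // And the interactions can be verified afterwards.
            verify(prices).priceOf("A");
            verify(prices).priceOf("B");
        }
    }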

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile, test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mapping (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment, this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings while being sure that everything will still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we'll know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set-based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? Reporting? Exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - Identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - Put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - Define a lightweight regime for unit testing (as per agile methodologies), a way of automating it (utPLSQL?), and a way of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the sketch after this list).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping-level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, and have an exception report that tells you when a mapping execution time varies by a certain amount.
    - Get the standard set of preferred initialisation parameters for a DW and use these as the starting point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - Identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc.) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g.
    - Investigate the effect of system statistics and come up with recommendations.
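    As a rough illustration of the first point, something like this could switch event 10046 tracing on and off around a mapping run. It's a hypothetical JDBC wrapper - the connection details, trace identifier and mapping call are made up, and in OWB itself the logic would live in pre- and post-mapping activities rather than a standalone class:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ExtendedTraceWrapper {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dwhost:1521:dw", "etl_user", "etl_pw");
            Statement stmt = conn.createStatement();
            try {
                // Tag the trace file so it is easy to find in user_dump_dest.
                stmt.execute("ALTER SESSION SET tracefile_identifier = 'OWB_MAP_SALES'");
                // Pre-mapping: level 8 captures wait events; level 12 adds binds.
                stmt.execute("ALTER SESSION SET events "
                        + "'10046 trace name context forever, level 8'");

                // Placeholder: execute the generated mapping package here,
                // e.g. stmt.execute("BEGIN my_mapping_pkg.main; END;");

                // Post-mapping: switch tracing off again.
                stmt.execute("ALTER SESSION SET events '10046 trace name context off'");
            } finally {
                stmt.close();
                conn.close();
            }
        }
    }

    The resulting trace file could then be profiled with TKPROF or a Method R-style profiler, and the formatted output loaded into the repository tables mentioned above.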
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing? Has anyone tried using SQL trace and TKPROF to profile mappings and process flows, or used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (Stuff like recovery/restart, late-arriving data, and so on.)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • [svn] 3526: The call to the TestNG task in the configuration test framework had haltOnFailure set to true, which is not what we want.

    Revision: 3526
    Author: [email protected]
    Date: 2008-10-08 14:21:40 -0700 (Wed, 08 Oct 2008)
    Log Message:
    The call to the TestNG task in the configuration test framework had haltOnFailure set to true which is not what we want. Failures will get logged to the database at which point we can review them.
    Also fix a failing test.
    Modified Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/build.xml
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/services-config.xml

    I have a standard Ant build script for signing a jar file. I import it into my master Ant build files with
    <import file="Sign.xml"/>
    and then in my master Ant script I setup the name of the jar file e.g.
    <property name="jar-file" value="${fun}/FunApplet.jar"/>
    and then invoke a target
    <target name="sign-jar" depends="jar, sign">
    </target>
    Since this target (sign-jar) depends on the targets 'jar' and 'sign', it executes the 'jar' target and then the 'sign' target contained in Sign.xml.

  • Test coverage in LabView Unit Test Framework

    Hi,
    can somebody from NI confirm the following two statements about the Unit Test Framework:
    1. The framework does not support "recursive coverage metrics", where the coverage considers sub-VIs that are executed in the VI under test.
    2. 100% coverage means something weaker than common "branch coverage". For example, an "if" VI is a branch in the program, but it is not considered a branch by LabVIEW's test coverage metrics.
    Thanks,
    Peter

    Hello Johannes,
    I'm interested in branch coverage of a VI under test.
    Imagine a VI A that calls another VI B. If A is tested and LV's unit test framework reports 100% test coverage for A, it is possible that the test cases didn't visit all frames (branches) in B.
    Now my question is: is it possible that LV thinks of A as "flattened" so that all code in B is considered as code of A?
    Peter

  • How to run a process (PSPPYRUN or PSPPYBLD) in PeopleSoft Test Framework?

    How do you run a process (PSPPYRUN or PSPPYBLD) in PeopleSoft Test Framework?
    Please advise on the script below:
    1     True   Browser  Start_Login
    2     True   Browser  Set_URL    PORTAL
    3     True   Link     Click      id=fldra_HC_NORTH_AMERICAN_PAYROLL
    4     True   Link     Click      id=fldra_HC_PROCESS_PAYROLL
    5     True   Link     Click      id=fldra_HC_CREATE_PAYSHEETS
    6     True   Link     Click      innerText=Create Paysheets
    7     True   Browser  FrameSet   TargetContent
    8     False  Page     Prompt     MANAGE_PAYROLL_PROCESS_US.RUNCTL_PAYSHEET.USA   add update
    9     True   Text     Set_Value  Name=PRCSRUNCNTL_RUN_CNTL_ID                    AA
    10    False  Page     PromptOk
    14    True   Button   Click      Name=#ICSearch                                  Search
    11    True   Text     Set_Value  Name=PAYSHEET_RUNCTL_RUN_ID                     K01FIN
    12    True   Page     Save
    13    False  Process  Run        prcname=psppybld;prctype=COBOL SQL;wait=True
    18    False  Process  Run        prcname=psppybld
    16    True   Process  Run        wait=True;expected=Success;
    15    True   Button   Click      Name=#ICSave

    Build M33.106 is not available in the Service Marketplace yet.
    SAP informed us that customers cannot find information on the next build release date in the Product Availability Matrix (PAM) or anywhere else on the SAP site.
    Is there a way to get this information from the Redwood site, like the expected date for the next release of the build?
    Thanks
    Nanda

  • [SOLVED] google test framework: linking issue

    Hello,
    I have got a problem with the Google test framework; a few days ago I tried this on another machine and did not get any problems.
    I got the code from svn and built libgtest.a. Then I tried to compile the first sample as:
    g++ -I../include -L../ -lgtest -lpthread ../src/gtest_main.cc sample1.cc sample1_unittest.cc
    But got a lot of linking errors like:
    /tmp/ccVHpTQc.o: In function `main':
    gtest_main.cc:(.text+0x28): undefined reference to `testing::InitGoogleTest(int*, char**)'
    gtest_main.cc:(.text+0x2d): undefined reference to `testing::UnitTest::GetInstance()'
    gtest_main.cc:(.text+0x35): undefined reference to `testing::UnitTest::Run()'
    /tmp/ccQuonuE.o: In function `FactorialTest_Negative_Test::TestBody()':
    sample1_unittest.cc:(.text+0x99): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
    sample1_unittest.cc:(.text+0xac): undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
    sample1_unittest.cc:(.text+0xb8): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x15d): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
    sample1_unittest.cc:(.text+0x170): undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
    sample1_unittest.cc:(.text+0x17c): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x221): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
    sample1_unittest.cc:(.text+0x234): undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
    sample1_unittest.cc:(.text+0x240): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x271): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x2b4): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    sample1_unittest.cc:(.text+0x2f9): undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
    As a workaround I unpacked the library and got gtest-all.cc.o:
    ar x ../libgtest.a
    When I link the samples with this object file instead, there are no errors:
    g++ -I../include -L../ gtest-all.cc.o -lpthread ../src/gtest_main.cc sample1.cc sample1_unittest.cc
    My g++ version is
    [ds|samples]$ g++ --version
    g++ (GCC) 4.7.2
    Copyright (C) 2012 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    What can be wrong with my gcc?

    The problem was fixed by changing the ordering a little - I just moved -lgtest to the end:
    g++ -I../include -L../ ../src/gtest_main.cc sample1.cc sample1_unittest.cc -lgtest -lpthread
    (The GNU linker resolves symbols left to right, so a static library has to appear after the object files that reference it.)

  • Dynamic data in Composite Test Framework Input Request

    Hi All,
    I have created composite test framework test suites to test an existing SOA 11g BPEL composite.
    I have a requirement to put a unique ID in the input element every time I test the BPEL service:
    <input>
    <ID>unique id</ID>
    </input>
    Can we generate unique IDs in the input data from the test framework?
    Can we achieve this?
    Thanks,
    Praveen

    I remember that we had a prototype for this - but it seems that code never made it into the delivered code base. You might want to file an SR and ask for an enhancement request.
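
    Until such an enhancement exists, one workaround is to generate the unique ID outside the test framework and drive the composite directly. A minimal sketch in Java using SAAJ; the endpoint URL, namespace and element names are hypothetical stand-ins for your composite's actual contract:

    import java.util.UUID;
    import javax.xml.soap.*;

    public class UniqueIdInvoker {
        public static void main(String[] args) throws Exception {
            // Fresh unique ID for every invocation.
            String uniqueId = UUID.randomUUID().toString();

            // Build the <input><ID>...</ID></input> payload from the thread.
            SOAPMessage msg = MessageFactory.newInstance().createMessage();
            SOAPElement input = msg.getSOAPBody()
                    .addChildElement("input", "ns", "http://example.com/bpel");
            input.addChildElement("ID", "ns").addTextNode(uniqueId);
            msg.saveChanges();

            SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
            try {
                SOAPMessage reply = conn.call(msg,
                        "http://soahost:8001/soa-infra/services/default/MyComposite/bpel_client");
                System.out.println("SOAP fault returned? "
                        + reply.getSOAPBody().hasFault());
            } finally {
                conn.close();
            }
        }
    }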

  • [svn] 3229: Made some updates to the config test framework.

    Revision: 3229
    Author: [email protected]
    Date: 2008-09-16 12:15:34 -0700 (Tue, 16 Sep 2008)
    Log Message:
    Made some updates to the config test framework. This should be able to run on all the regression boxes now assuming I got the names of all the log files correct for the different app servers. After this checkin, I will update the regression scripts to start running the config framework tests under automation. This will be another antcall from the run.tests target in automation.xml which will run the tests and then load the results to the test results db.
    Modified Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/build.xml
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/DestinationWithNoChannelTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/DestinationWithNoIDTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidAcknowledgeModeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidDeliveryModeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidDestinationTypeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidMessageTypeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoConnectionFactoryTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/error.txt
    blazeds/trunk/qa/resources/frameworks/qa-frameworks.zip

    Despite the workaround, it doesn't fix the real problem. It shouldn't be a huge deal for Adobe to add support for multiple svn versions. Dreamweaver is the first svn-aware tool I've used that doesn't support several versions of svn metadata. If they're going to claim that Dreamweaver supports svn, it should actually support svn - the current version, not a version several years old. This should have been among the first patches released, or at least available after Snow Leopard came out (which shipped with the current version of svn).
    Does anyone know whether the code that handles metadata formatting is human-readable, where it might be, or whether it is in compiled code?
    I signed up for the forums for the sole purpose of being able to vent about this very frustrating and disappointing situation.

  • New version of sapyto - SAP Penetration Testing Framework

    Hello list,
    I'm glad to let you know that a new version of sapyto, the SAP Penetration Testing Framework, is available.
    You can download it by accessing the following link: http://www.cybsec.com/EN/research/sapyto.php
    News in this version:
    This version is mainly a complete re-design of sapyto's core and architecture to support future releases. Some of the new features now available are:
    . Target configuration is now based on "connectors", which represent different ways to communicate with SAP services and components. This makes the
    framework extensible to handle new types of connections to SAP platforms.
    . Plugins are now divided into three categories:
         . Discovery: try to discover new targets from the configured/already-discovered ones.
         . Audit: perform some kind of vulnerability check over configured targets.
         . Exploit: used as proofs of concept for discovered vulnerabilities.
    . Exploit plugins now generate shells and/or sapytoAgent objects.
    . New plugins!: User account bruteforcing, client enumeration, SAProuter assessment, and more...
    . Plugin-developer interface drastically simplified and improved.
    . New command switches to allow the configuration of targets/scripts/output independently.
    . Installation process and general documentation improved.
    . Many (many) bugs fixed. :P
    Enjoy!
    Cheers,
    Mariano

    Hi Mariano,
    Thanks for the update.
    We implemented secinfo restrictions 5 years ago, but used a rather complicated approach. We did some tests today (the "local" setting works okay so far) and will continue tomorrow.
    We now use HOST and USER-HOST set to "local" and let the application security deal with who-can-do-what, and this works quite well, though we have encountered some external 3rd-party server programs in some cases. It seems to be popular amongst the business folks, and some of the products use the gateway monitor to communicate with the SAP system to find out when it has completed processing.
    I think this is a design error, but they of course think otherwise.
    What was interesting to note was that we locked ourselves out of an unprotected system. We changed gw/monitor from 2 to 1 in a test. This worked. But then gwmon cannot be used to change it back to 2! So we tried RZ11 and experienced the same. Then we changed it to 0 in a test, and 1 was blocked as well. This appears to be implemented in the kernel, as even hobbling the application coding does not help. The parameter is only dynamic when decreasing the value and increasing the security.
    We had to restart the whole system for the instance profile to take effect again. Rather noisy, and a few developers could take an additional 10-minute coffee break as a result.
    We are testing this on 3 different releases with different config:
    - 4.6C (46D)
    - 6.40
    - 7.00
    The different config relates to:
    - gw/sec_info
    - gw/monitor
    - auth/rfc_authority_check
    Our intention behind this is to improve baseline security and harden some special systems further.
    Cheers,
    Julius

  • BlazeDS stress test framework

    Hi all,
    I'm looking for a stress test tool for BlazeDS applications.
    As far as I understand BlazeDS (not that much), Adobe's tool for Flex Data Services (LCDS) - http://labs.adobe.com/wiki/index.php/Flex_Stress_Testing_Framework - might not be appropriate.
    Am I right here?
    Does anyone have a direction to go with this ?

    Some commercial products are available that have some Flex/BlazeDS support. WebLOAD from RadView has a Flex Add-On that I believe some people have used for testing BlazeDS.
    The Flex Stress Testing Framework, recently updated and renamed the Data Services Stress Testing Framework, only runs on LCDS 2.6.
    http://labs.adobe.com/wiki/index.php/Data_Services_Stress_Testing_Framework
    We are thinking about updating the Data Services Stress Testing Framework to run on BlazeDS as well, but don't have any definite plans or schedule for this at this point.
    For the time being, if none of the commercial products work for you, I think you should be able to use the Data Services Stress Testing Framework to test BlazeDS, but I haven't tested this.
    You would deploy the Data Services Stress Testing Framework on LCDS and then, in your load test, make whichever destination/service you are testing point to your BlazeDS server. For example, if you are load testing a messaging destination on your BlazeDS server, you would have the messaging destination in your load test use a channel with a URL that points to your BlazeDS server and not your LCDS server. Setting this up would definitely take some understanding of BlazeDS, specifically how channels, destinations, etc. work.
    If that sounds daunting, I would look into one of the commercial products. I think commercial products may be limited at this point in terms of how well they emulate our channel/messaging behaviour but I know some vendors have been working on this so that should be getting better.
    Hope that helps.
