Dynamic data in Composite Test Framework Input Request

Hi All,
I have created Composite Test Framework test suites to test an existing SOA 11g BPEL composite.
I have a requirement to put a unique id in the input element every time I test the BPEL service:
<input>
<ID>unique id</ID>
</input>
Can the test framework generate unique ids in the input data? Is this achievable?
Thanks,
Praveen

I remember that we had a prototype for this - but it seems that code never made it into the delivered code base. You might want to file an SR and ask for an enhancement request.
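One workaround that is sometimes used (not an official test-framework feature) is to keep the test input as a template and regenerate it with a fresh id before each run. A minimal Java sketch, assuming a template file containing a ${UNIQUE_ID} placeholder (the file names and the placeholder token are made up for illustration):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.UUID;

public class InjectTestId {
    public static void main(String[] args) throws Exception {
        // Read the test input template kept alongside the test suite
        String template = new String(
                Files.readAllBytes(Paths.get("testsuites/MyTestSuite/input-template.xml")),
                StandardCharsets.UTF_8);

        // Substitute the placeholder with a freshly generated unique id
        String input = template.replace("${UNIQUE_ID}", UUID.randomUUID().toString());

        // Write the resolved input to the file the test suite actually reads
        Files.write(Paths.get("testsuites/MyTestSuite/input.xml"),
                input.getBytes(StandardCharsets.UTF_8));
    }
}
Running this as a pre-step (for example from an Ant target) before invoking the test suite gives the composite a new <ID> value on every execution.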

Similar Messages

  • BlazeDS stress test framework

    Hi all,
    I'm looking for a stress test tool for BlazeDS applications.
    As far as I understand BlazeDS (not that much), Adobe's tool for Flex DS (LC DS) - http://labs.adobe.com/wiki/index.php/Flex_Stress_Testing_Framework - might not be appropriate.
    Am I right here?
    Does anyone have a direction to go with this?

    Some commercial products are available that have some Flex/BlazeDS support. WebLOAD from RadView has a Flex Add-On that I believe some people have used for testing BlazeDS.
    The Flex Stress Testing Framework, recently updated and renamed the Data Services Stress Testing Framework, only runs on LCDS 2.6.
    http://labs.adobe.com/wiki/index.php/Data_Services_Stress_Testing_Framework
    We are thinking about updating the Data Services Stress Testing Framework to run on BlazeDS as well but don't have any definite plans or schedule for this at this point.
    For the time being, if none of the commercial products work for you, I think you should be able to use the Data Services Stress Testing Framework to test BlazeDS, but I haven't tested this.
    You would deploy the Data Services Stress Testing Framework on LCDS and then in your load test make whichever destination/service you are testing point to your BlazeDS server. For example, if you are load testing a messaging destination on your BlazeDS server, you would have the messaging destination in your load test use a channel with a url that pointed to your BlazeDS server and not your LCDS server. Setting this up would definitely take some understanding of BlazeDS, specifically how channels, destinations, etc. work.
    If that sounds daunting, I would look into one of the commercial products. I think commercial products may be limited at this point in terms of how well they emulate our channel/messaging behaviour but I know some vendors have been working on this so that should be getting better.
    Hope that helps.

  • Using an Excel file as a dynamic data source? Opinions needed...

    I have posted this topic before, but as always; in order to get the relevant correct answer you have to ask the correct question.
    I'm trying to create a number of Pricelists linked to an Excel/CSV file. I have an Excel file that contains Pricelist information which is Product specific.
    I have had a number of suggestion that follow:
    A direct link to the Excel file. PROs: Excel file can be uploaded on FTP and overwritten if (and when) amended. Linking this is easy peasy in Dreamweaver. Person browsing can download info straight away on request - no hassle. CONs: Simply, not everyone has Excel and those who don't can not access the information.
    Import Excel file as tabular data. PROs: Fairly easy to do in Dreamweaver. Person browsing can see info straight away. CONs: Can be time consuming on larger Excel files. NOT amendable (so a number of price changes becomes a big job). Can't simply overwrite Excel file on FTP. Larger Excel files can take a lot of page space and thus require tonnes of scrolling.
    Use the Excel file as a dynamic data source. --Not entirely sure how I would go about doing this (any suggestions/links/tutorials etc)-- PROs: ? CONs: Contributor added this suggestion is not a good idea as it performs poorly on the web.
    Create a dynamic page using a database and import the Excel file into that, or maintain the price list in the database rather than in an Excel file. --Again, not entirely sure how I would go about doing this (any suggestions/links/tutorials etc.)-- My understanding of this option is that it will require XML data and Spry work. Is this correct? Can this be done by someone who is not an advanced user?
    If, once again, I'm off the mark and better suggestions can be thought of, please say so.
    As you can see I'm at a bit of a crossroads, so any suggestions, comments, help, links, tutorials or applause would be greatly received.
    Thanks in advance and looking forward to seeing the comments this throws up!

    Hi
    Although not everyone has Excel, just about everyone can open CSV files in some way; if not, offer the option to download OpenOffice, which is free for the PC, and Macs have a program installed by default that can open CSV files.
    The import tabular data is not really an option for the reasons stated.
    Use Excel as a data source - not a good idea; it requires ASP.NET to work correctly, otherwise it runs slowly, and it is not recommended if you are expecting more than a very small number of users.
    As for the dynamic with database, this can be done with xml and spry but if you have a large amount of data this is almost as bad as the option above. You are probably better creating a database and importing your excel spreadsheet into this, for tutorials on creating a php/mysql database and set-up of testing server see - http://www.adobe.com/devnet/dreamweaver/application_development.html.
    No matter which way you go with the last option it will require a fair amount of knowledge and experience to do correctly, efficiently and securely.
    PZ
    www.pziecina.com

  • JSF and dynamic data

    Hi,
    This is not a specific question but rather a high level concept that I'm trying to figure out. I have a system that uses a lot of dynamic data that is handled and viewed in the form of Dynabeans as well as SDO data objects.
    I'm working on trying to develop reusable web widgets such as input forms based on these objects and thought of JSF right away.
    However, it seems like JSF will only work with static data models, because the beans and bean properties need to be specified at design time in the XML configuration files.
    Here it is: let's say I have a dynamic Java bean with dynamic properties that was created from a metadata object. Now I want to develop a single reusable JSF widget that will create an input form using the information from the metadata object and then connect the widget values to the actual dynamic data object.
    It seems that in order to do this, the properties of the dynamic data object would need to be defined at runtime so that it could be classified as a managed bean and JSF would know how to bind values from a widget to the bean.
    Does anyone have any thoughts?

    Hi.
    ValueBinding and MethodBinding work through PropertyResolver and VariableResolver.
    In #{aaaa.bbbb.cccc}, aaaa will be resolved with the VariableResolver (as the root object);
    bbbb and cccc with the PropertyResolver.
    Look at the description of the default implementation of VariableResolver:
    you will see in which places it searches for a root object. At the very least it searches the request, session and application maps, so you can simply put your dynamic beans into one of these maps.
    In the worst case you can substitute your own implementation of the VariableResolver in faces-config.xml:
      <application>
         <variable-resolver>com.qqqqq.MyVariableResolver</variable-resolver>
         <property-resolver>com.qqqqq.MyPropertyResolver</property-resolver>
      </application>
    ...Good luck

  • DYnamic data grid through ActionScript

    Hi...
    I want to create a DataGrid dynamically at runtime, based on some input, through ActionScript.
    It would be helpful if someone could post an example of how to create the data grid dynamically and use its common properties.
    Regards

    Please read the Flex 2 documentation on the DataGrid class for the details, but here's the gist:
    var grid:DataGrid = new DataGrid();
    grid.dataProvider = someCollection; // same as dataProvider="{someCollection}" in MXML
    // set other properties here
    grid.setStyle("alternatingItemColors", [0xff0000, 0x00ff00]); // same as alternatingItemColors="[0xff0000,0x00ff00]" in MXML
    // set other styles here
    var columns:Array = new Array();
    var col:DataGridColumn = new DataGridColumn();
    col.headerText = "Test";
    col.dataField = "someField";
    columns.push(col);
    // create more columns here
    grid.columns = columns; // same as <mx:columns> in MXML
    addChild(grid); // vital - without this your grid will not be visible

  • When to call DSDisposeHandle when you have a DLL function acting as a repeating DS dynamic data source?

    Hi 
    I have a DLL function acting as a dynamic data source within a LabVIEW application (see attachment) - I am using DSNewHandle to dynamically allocate 2D array handle storage via the LV memory manager for the arbitrary size data arrays to be passed into the application. I had assumed that these memory blocks would (magically) be disposed when appropriate by the LabVIEW built in processing blocks that sit downstream, however this is not the case and the LabVIEW system memory usage keeps increasing until LabVIEW environment is exited (not simply if just the offending application is stopped or closed).
    So my question is how and where to mop up the used-up array buffers in the application's processing chain, and how do I know when the buffers are actually exhausted and not being reused downstream? For instance, the two 2D arrays are first combined into a complex 2D array by the Re/Im To Complex LabVIEW operator - is the data memory out of that stage totally different from the inputs, or is it some modified version of the input arrays? If it is totally different and the input 2D arrays are not wired into any other blocks, why does the operator not dispose of the input data stores?
    Perhaps I'm going about this the wrong way by having the DLL data source dynamically allocate the data space arrays? Any advice would be gratefully received.
    Regards
    Steve
    Attachments:
    DataGetFloat.PNG ‏23 KB

    Then instead of doing:
    DLLEXPORT int32_t DataGetFloatDll(... , Array2DFloat ***p_samples_2d_i, ...)
    {
        if (p_samples_2d_i)
        {
            // *p_samples_2d_i can be non-NULL because of a performance optimization where LabVIEW will pass in the same handle
            // that you returned in a previous call from this function, unless some other LabVIEW diagram took ownership of the handle.
            // Your C code can't really take ownership of the handle; it owns the handle for the duration of the function call and either
            // has to pass it back or deallocate it (and if you deallocate it, you had better NULL out the handle before returning from the
            // function, or return a different, newly allocated handle). A NULL handle for an array is valid and treated as an empty array.
            *p_samples_2d_i = (Array2DFloat **) DSNewHandle((sizeof(int32_t) * 2) + (sizeof(float) * channel_count * sample_count));
            // Generally you should first try to insert the data into the array before adjusting the size.
            // The safest approach is to adjust the size after filling in the data if the array gets bigger with respect to the passed-in array,
            // and to do the opposite if the adjusted handle happens to get smaller. This is only really important if your C code can
            // bail out of the code path because of error conditions between adjusting the handle size and adjusting the array sizes.
            // You should definitely avoid returning from this function with the array dimensions indicating a bigger size than what the
            // handle is really allocated for, which can happen if the array was resized to a smaller size and you then return because of errors
            // before adjusting the dimension sizes in the array.
            (**p_samples_2d_i)->Rows = channel_count;
            (**p_samples_2d_i)->Columns = sample_count;
        }
    }
    you should be doing:
    DLLEXPORT int32_t DataGetFloatDll(... , Array2DFloat ***p_samples_2d_i, ...)
    {
        MgErr err = NumericArrayResize(fS /* array of singles */, 2 /* number of dims */, (UHandle *)p_samples_2d_i, channel_count * sample_count);
        if (!err)
        {
            // Fill in the data somehow, then adjust the dimension sizes
            (**p_samples_2d_i)->Rows = channel_count;
            (**p_samples_2d_i)->Columns = sample_count;
        }
    }
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Search 1 d array of dynamic data

    Hi,
    Is there a reason why the 'Search 1 D Array' function doesn't give the expected results when working on Dynamic data?
    I am trying to use one where its input array comes from a Comparison VI inside a For loop, and so is an array of 0's & 1's.
    The element that I have set it to look for is 0, a Numeric Constant converted to Dynamic Data Type using the Convert to Dynamic Data function.
    It appears to me that regardless of whether the array contains a 0, the output is always -1, which indicates that a 0 was not present.
    Any assistance much appreciated
    Thanks
    Lama

    Stradis,
    Thanks for your reply.
    I am also using LabVIEW 8.2.1
    The reason that I am using Dynamic Data types is because this is what the Comparison VIs require as input and produce as output.
    I am using one Comparison VI to compare some DMM readings against a spreadsheet of expected values, with it set to equal within tolerance and outputting one result per datapoint. This will put a 0 against any DMM reading that falls outside of range.
    I am using a second Comparison VI in a similar way, but set to output one result for all channels to produce an overall pass or fail result (1 or 0) for each sweep of the DMM.
    The DMM is set to measure resistance in 2-wire mode, and the two wires are routed through two switching matrixes. I am connecting one of the DMM inputs to the first wire and measuring between it and the remaining wires in turn, via the second switching matrix. This is what I am calling one sweep. I am then moving the first DMM input to the second wire and repeating the sweep, etc.
    Consequently I get a 1 D Array of dynamic data out of my For loop, and I want to search it to see if it contains any 0's, which would indicate an overall fail. I had hoped to use the 'Search 1 D Array' function to do this, and if it returned a -1, to use this as an indication that the equipment under test passed all sweeps of the DMM, but as I said earlier I also get -1 for what should be fail conditions.
    I hope you will forgive the sprawling mess that is my code; I know that I need to reduce it to a number of subVIs etc., but I just wanted to see it work first. Hopefully it is attached.
    Thanks
    Lama
    Attachments:
    DMM & 2 Switch Synchronous Scanning 18.vi ‏646 KB

  • JasperException, calling tag with dynamic data

    Hello
    I want to call a tag which I have written, with dynamic data.
    I can call the tag like this, and it works fine:
    <cm:getCalendar year = "2" week = "51" />
    But I don't want the input to be hard coded.
    Therefore I have tried to pass GET data to the tag like this:
    <cm:getCalendar year = "<%= request.getParameter("year") %>" week = "<%= request.getParameter("week") %>" />
    Or like this:
    <cm:getCalendar year = "%{#request.yearNr}" week = "%{#request.weekNr}" />
    The %{#request.yearNr} syntax should be Struts 2 specific, I think.
    In both these cases I get this exception:
    org.apache.jasper.JasperException: /calendar.jsp(25,0) According to TLD or
    attribute directive in tag file, attribute week does not accept any expressions
    Is there a way in which I could pass dynamic data to my own tag?
    Here is the TLD file
    <taglib>
      <tlib-version>1.0</tlib-version>
      <jsp-version>1.2</jsp-version>
      <short-name></short-name>
      <tag>
           <name>getCalendar</name>
           <tag-class>taglib.PrintCalendar</tag-class>
           <body-content>JSP</body-content>
           <attribute>
                <name>year</name>
                <required>true/false</required>
           </attribute>
           <attribute>
                <name>week</name>
                <required>true/false</required>
           </attribute>
      </tag>
    </taglib>

    I changed the TLD to:
    <taglib>
      <tlib-version>1.0</tlib-version>
      <jsp-version>1.2</jsp-version>
      <short-name></short-name>
      <tag>
           <name>getCalendar</name>
           <tag-class>taglib.PrintCalendar</tag-class>
           <body-content>JSP</body-content>
           <attribute>
                <name>year</name>
                <required>true/false</required>
                <rtexprvalue>true</rtexprvalue>
           </attribute>
           <attribute>
                <name>week</name>
                <required>true/false</required>
                <rtexprvalue>true</rtexprvalue>
           </attribute>
      </tag>
    </taglib>
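    With <rtexprvalue>true</rtexprvalue> the container evaluates the <%= ... %> expressions at request time and passes the results into the tag handler's setters, so the dynamic values reach the tag. The handler class itself is not posted in this thread, but a minimal sketch of what a class like taglib.PrintCalendar could look like (the calendar-printing logic is only a placeholder here) is:
    package taglib;

    import java.io.IOException;

    import javax.servlet.jsp.JspException;
    import javax.servlet.jsp.tagext.TagSupport;

    public class PrintCalendar extends TagSupport {

        private String year;
        private String week;

        // Called by the container with the evaluated attribute values
        public void setYear(String year) {
            this.year = year;
        }

        public void setWeek(String week) {
            this.week = week;
        }

        public int doStartTag() throws JspException {
            try {
                // Placeholder output; the real tag would render the calendar for the given year and week
                pageContext.getOut().print("Calendar for year " + year + ", week " + week);
            } catch (IOException e) {
                throw new JspException(e);
            }
            return SKIP_BODY;
        }
    }
    Note that because the attributes here are typed as String, the runtime expressions must evaluate to Strings (request.getParameter does); if the handler used int attributes, the expressions would have to match that type.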

  • Add data to table view from input fields in a page

    Hi
    I am developing a BSP page which will be called from the SRM shop transaction. After the user enters the line item data, the data will be passed back to the shop transaction using the OCI interface and the page attributes (URL).
    (1) How can I add data from input fields to a table view on a page on a button click? I am able to add the first line, but I could not retain the first line's data when I try to add the second line.
    I am able to add multiple lines to the table view if I use a view and controller, by adding the line data to a static attribute of the controller. However, I can't use the controller and view because I cannot set the attribute on the controller automatically.
    (2) Is there a way to pass an attribute (URL) to a controller from SPRO, like we pass an attribute to a page automatically (automatic page attribute)?
    (3) How can I call a controller and view, and pass the page attribute to the controller, on a button click from a page without a controller?
    Thanks
    Sreenivas

    I'm trying to test the merge with the following data in a test.txt file:
    ZZZZZ114923000004
    1234Z400660000001
    ZZZZZ114923000010
    Getting an error:
    SQL> @C:\dataformats\sql\pc12seriesMerge.sql
    Directory created.
    SP2-0552: Bind variable "17" not declared.
    SQL>
    here it the pc12seriesMerge.sql file
    set serveroutput on
    create or replace directory user_dir as 'c:\dataformats\incoming\';
    DECLARE
    v_filename VARCHAR2(100); -- Data filename
    v_file_exists boolean;
    v_file_length number;
    v_block_size number;
    f utl_file.file_type;
    s varchar2(200);
    lineString varchar(200);
    v_account varchar(5);
    v_IDN varchar(6);
    v_quantity varchar(6);
    BEGIN
    v_filename := 'TEST.TXT';
    DBMS_OUTPUT.PUT_LINE(v_filename); --shows filename
    utl_file.fgetattr('USER_DIR', v_filename, v_file_exists, v_file_length ,v_block_size );
    IF v_file_exists THEN
    dbms_output.put_line('File Exists');
    create table ext_table (
    account varchar2(5),
    idn number(6),
    quantity varchar2(6)
    organization external (
    type oracle_loader
    default directory user_dir
    access parameters (
    records delimited by newline
    fields (
    account position(1:5) char(5),
    idn position(6:11) char(6),
    quantity position(12:17) char(6)
    location ('test.txt')
    reject limit unlimited;
    MERGE INTO id_req_stg t
    USING (
    SELECT account,
    idn,
    decode(quantity, '-', 0, to_number(quantity)) as quantity
    FROM ext_table
    ) v
    ON ( t.account = v.account AND t.idn = v.idn )
    WHEN MATCHED THEN
    UPDATE SET t.quantity = v.quantity
    DELETE WHERE t.quantity = 0
    WHEN NOT MATCHED THEN
    INSERT (account, idn, quantity)
    VALUES (v.account, v.idn, v.quantity);
    ELSE
    dbms_output.put_line('File Does Not Exist');
    END IF; -- file exists
    EXCEPTION
    WHEN UTL_FILE.ACCESS_DENIED THEN
    DBMS_OUTPUT.PUT_LINE('No Access!!!');
    WHEN UTL_FILE.INVALID_PATH THEN
    DBMS_OUTPUT.PUT_LINE('PATH DOES NOT EXIST');
    WHEN others THEN
    DBMS_OUTPUT.PUT_LINE('SQLERRM: ' || SQLERRM);
    END;
    /

  • Creation of Dynamic Date Variables to be used in WebI reports

    What we are trying to achieve is to create 4 optional filters (Current Day, Current Week, Last Week, Last Month) on 4 different dates which will allow the users to use them in WebI reports.
    When using an optional SAP Customer Exit variable in BEx and creating a Universe on top, the filter becomes mandatory (i.e. the whole Universe is filtered by the SAP Exit, irrespective of whether the filter is used in the WebI report). Even if the filter is flagged as optional at the Universe level, it still behaves as mandatory.
    If each filter becomes mandatory then we'll have to create 16 different Universes (for each optional filter and date combination)! This is not feasible.
    I've seen in other posts that MDX statements are not currently supported for Universes based on BW and that SAP Exit variables should be used instead.
    So with the existing BO version, is it possible to create optional dynamic date variables or is that a product limitation?
    We are on XI3.1 SP3 FP3.1
    Thanks

    Hi Adam,
    In BEx, I would create this query very easily using the "Amount" key figure twice in my results and restricting each with a different SAP standard out-of-the-box delivered variable. For your reference, the variables in BEx are: 0FPER and 0FYTLFP.
    If I expose these variables in my OLE DB for OLAP query, they are not transferred into the universe, but rather act as filters on the entire universe. I've seen in documentation that only "Ready for Input" variables can be transferred as options into the universe, which is not something that I have seen mentioned in this thread.
    >> In the BEx Query you have the option to either make the variable "ready for input" or not. The behavior is the same in Bex or in the Universe / Web Intelligence. "Ready for input" means the user can actually provide an input and without the flag the user can not provide an input. Yes those variables are supported in the Universe.
    Why this is a problem: I can't create separate universes based on potential variable periods that users might want to see. Additionally, many financial reports require concurrent use of these measures in the same report. Also, in reality it's not 2 variables, but dozens.
    >> Which is a decision you already make at the BEx query level. If you decide that the variable is not ready for input, then the user cannot change the timeframe in BEx either.
    Also, I don't have a good way to mimic the standard out-of-the-box functionality given with BEx in BO. If I custom create all my variables in the universe, how do I do a lookup from the system date to the fiscal calendar that is stored on the BW server? In other words, how does BO know which date belongs in which period? (the same would be true with factory calendars for a different functional area).
    >> Variable are created in the BEx query and the Universe will leverage those.
    If you want a dynamic date range then EXIT variable as part of the BEx query - ready for input or not - is the solution.
    regards
    Ingo Hilgefort
    The only work around I can see is to require users to enter the current fiscal period and have the BO reports filter based off that user entered value. This is unfortunate as the entire purpose of SAP Exit variables is to avoid having to require user input at report time.

  • How do you save dynamic data type, from the DAQ assistant, for use in Excel or matlab?

    Currently, I have the following basic VI setup to save data from my PCI-6221 Data Acquisition Card.  The problem I'm having is that I keep getting only the last iteration of the while loop in the measurement file and that's pretty much it.  When I try to index the data leaving the loop it gives me a 2D array of data which cannot be input into the "Write to Measurement File" VI.  How would I save this in a useful data/time-step format?  I was wondering about a way to continuously collect the data and then save it in a large measurement file that I could manipulate in Matlab/Excel.  Am I using the wrong type of loop for this application?  I also noticed my Dynamic Data array consists of data, time, timestep and then a vector of the data taken.  Is it possible to just get a vector of the time change per sample alongside the data?  Sorry for the barrage of questions, but any help would be greatly appreciated, and thanks in advance!
    -Bryan
    Attachments:
    basic DAQ.vi ‏120 KB

    There is a VI in the Express > Signal Manipulation palette called "From DDT" that lets you convert from the Dynamic Data Type to other data types that are more compatible with operations like File I/O....for instance, you could convert your DDT into a 2D array and use the Write To Spreadsheet File.vi.  Just a thought...
    -D
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mapping (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the sketch after this list for what switching the trace on and off looks like).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
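    As a concrete illustration of what "switching event 10046 extended SQL trace on and off around a unit of work" means, here is a minimal sketch using plain JDBC rather than an actual OWB pre-/post-mapping trigger. The connection details and the traced statement are placeholders; the ALTER SESSION commands are the standard extended-trace syntax:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TraceMappingRun {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the warehouse instance
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dwhost:1521/dwsvc", "owb_rt", "secret");
            try {
                Statement stmt = conn.createStatement();
                // Tag the trace file so it is easy to find in user_dump_dest
                stmt.execute("ALTER SESSION SET tracefile_identifier = 'owb_map_trace'");
                // Level 12 records wait events and bind values; level 8 records waits only
                stmt.execute("ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'");
                // The unit of work to profile - for OWB this would be a call to the generated mapping package
                stmt.execute("BEGIN null; END;"); // placeholder
                // Switch tracing off again
                stmt.execute("ALTER SESSION SET EVENTS '10046 trace name context off'");
                stmt.close();
            } finally {
                conn.close();
            }
        }
    }
    The resulting trace file can then be fed to TKPROF or a Method R style profiler, which is where the response time profile mentioned above comes from.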
    Further reading / resources:
    - Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing, have they tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage. They did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my dictat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this within a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • What's wrong with dynamic data?

    I am starting this thread to showcase what I might call "dynamic data abuse": The use of dynamic data where it has no business to be used (does it ever?).
    Everybody is invited to contribute with their own examples!
    One of the problems when operating on dynamic data is its opacity. We can never really tell what's going on by looking at the code. It also seems to hide serious programming mistakes by adapting in ways to mask horrendous code and making it seemingly functional, glossing over the glaring programming errors.
    To get it all started, here are a few recent sightings:
    (A)
    (Seen here). No kidding! The user joins a scalar with a waveform, then wires the resulting dynamic data to the index input of "Replace Array Subset". He cannot see anything wrong, because the program actually works (coercing the dynamic data to a scalar will, by a happy coincidence, return the value of the scalar joined earlier). I have no idea what thought process led to this code in the first place (maybe just randomly connecting functions until the result is as expected?).
    (B)
    (seen here)
    What's wrong with simply branching the wire???
    (C)
    I don't remember the link, but here's how somebody transposed a 2D array a while ago (bottom part of image). No way to tell from looking at the code picture! (The "FromDDT" and "ToDDT" need to be configured just right ...).
    ARRRRGH!
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    join.png ‏2 KB
    split.png ‏3 KB
    DynamicTranspose.png ‏18 KB

    Here's another one for you.  Found here: http://forums.ni.com/t5/LabVIEW/Program-unresponsive-after-a-dialog-box-input/m-p/2447608#U2447608
    Put a bunch of booleans into a Combine Signals and then do a Dynamic Data to Boolean Array.
    I supplied the much more cleaned up version of the code below.
    EDIT:  I just realized I needed to subtract 1 from the power before using the Scale by Power of 2.  Oops.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    Button Press Decoder_BD.png ‏41 KB

  • [svn] 3526: The call to the TestNG task in the configuration test framework had haltOnFailure set to true which is not what we want .

    Revision: 3526
    Author: [email protected]
    Date: 2008-10-08 14:21:40 -0700 (Wed, 08 Oct 2008)
    Log Message:
    The call to the TestNG task in the configuration test framework had haltOnFailure set to true which is not what we want. Failures will get logged to the database at which point we can review them.
    Also fix a failing test.
    Modified Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/build.xml
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameTest/services-config.xml

    I have a standard Ant build script for signing a jar file. I import it into my master Ant build files with
    <import file="Sign.xml"/>
    and then in my master Ant script I setup the name of the jar file e.g.
    <property name="jar-file" value="${fun}/FunApplet.jar"/>
    and then invoke a target
    <target name="sign-jar" depends="jar, sign">
    </target>
    Since this target (sign-jar) depends on target 'jar' and target 'sign' it executes the 'jar' target and then the 'sign' target that is contained in Sign.xml.

  • Using Composite Application Framework to deliver global HR applications

    Hello,
    I'm new to Composite applications but have experience with Web development and Web services.
    Our business case is the following:
    - Working on HR applications
    - The customer is a group made of 3 companies.
    - We have 2 separate back-end systems, one global for the group and one dedicated to the company
    - Some HR business processes need to be performed on both the global and the company system.  Example: hiring an employee.  Depending on the company the employee will work for, some data needs to be created on the company system and some on the global system
    I believe this can be achieved with the Composite Application Framework.  I guess a front-end application has to be built (using Composite Designer?).  What I don't know is what type of technology is used to build the front-end application?
    Finally, the idea behind this is to replace the current solution: multiple interfaces between the group system and the company system.
    Thanks for your feedback, any suggestion or tip would be most welcome!
    Laurent

    While I was searching the web for some answers, I found out you can create web-based composite applications using Web Dynpro for Java.
    Is it possible to do so with Web Dynpro for ABAP?  I guess we can build an ABAP Web Dynpro application that connects to different web services from different back-ends to retrieve and update data.  Can anyone confirm this?
    Thanks,
    Laurent
