Best practices for logging results from Looped steps

Hi all
I would like to start a discussion to document best practices for logging results (to reports and databases) from looped steps.
As an application example - let's say you are developing a test for one of NI's analog input or output cards and need to measure a voltage across multiple inputs or outputs.
One way to do that would be to create a sequence that switches the appropriate signals and performs a "Voltage Measurement" test in a loop.    
What are your techniques for keeping track of the individual measurements so that they can be traced to the individual signal paths that are being measured?
I have used a variety of techniques, such as:
i) Creating a custom step type that generates unique identifiers for each iteration of the loop. This required some customization of the results processing, and the sequence developer had to include code to ensure that a unique identifier was generated for each iteration.
ii) Adding an input parameter to the test function/VI, passing the loop iteration to it, and adding this parameter to Additional results parameters to log.

I have attached a simple example (LV 2012 and TS 2012) that includes steps inside a loop structure as well as a looped test.
If you enable both database and report generation, you will see the following:
1) The numeric limit test in the for loop always generates the same name in the report and database, which makes it difficult to determine the result of a particular iteration.
2) The Max voltage test report includes the parameter as an additional result, but the database does not include any differentiating information.
3) The Looped Limit test generates both unique report entries and unique database entries - you can easily see what the result of each iteration is.
As mentioned, I am seeking to start a discussion about how others handle results for steps inside loops. The only way I have been able to accomplish a result similar to that of the looped step (a unique result and database entry for each iteration of the loop) is to modify the process model's results processing.
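To make the idea concrete, here is a minimal sketch in Java (not TestStand or LabVIEW code; the channel names and limits are hypothetical) of the per-iteration tagging pattern behind techniques i) and ii): each iteration's measurement is stored together with an identifier built from the loop index and the signal path, which is exactly the information the report and database need to trace a result back to a channel.

import java.util.ArrayList;
import java.util.List;

// Hypothetical record of one loop iteration's result; in TestStand this information
// would live in the step's result container / additional results instead.
class IterationResult {
    final String stepName;      // unique per iteration, e.g. "Voltage Measurement [AI3]"
    final int iteration;
    final String signalPath;
    final double measurement;
    final boolean passed;

    IterationResult(int iteration, String signalPath, double measurement,
                    double lowLimit, double highLimit) {
        this.iteration = iteration;
        this.signalPath = signalPath;
        this.measurement = measurement;
        this.passed = measurement >= lowLimit && measurement <= highLimit;
        this.stepName = "Voltage Measurement [" + signalPath + "]";
    }
}

public class LoopLoggingSketch {
    public static void main(String[] args) {
        String[] signalPaths = {"AI0", "AI1", "AI2", "AI3"};
        List<IterationResult> results = new ArrayList<>();

        for (int i = 0; i < signalPaths.length; i++) {
            double measured = measureVoltage(signalPaths[i]);   // stand-in for the switched measurement
            results.add(new IterationResult(i, signalPaths[i], measured, 4.9, 5.1));
        }

        // Every row carries its own identifier, so the report/database entry
        // for iteration i can be traced back to the signal path it measured.
        for (IterationResult r : results) {
            System.out.printf("%s | iteration %d | %.3f V | %s%n",
                    r.stepName, r.iteration, r.measurement, r.passed ? "Passed" : "Failed");
        }
    }

    private static double measureVoltage(String signalPath) {
        return 5.0;   // placeholder for the real measurement call
    }
}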
Attachments:
test.vi (27 KB)
Sequence File 2.seq (9 KB)

Similar Messages

  • What's best practice for logging messages in pageflow?

    Workshop complains when I try to use a Log4J logger by saying it's not serializable. Is there a context similar to JWSContext that you can get a logger from?
    There seems to be a big hole in the documentation on debug logging in workflows and JSP pages.
    thanks,
    Rodger...

    Make the configuration change in setDomainEnv.cmd. Find where the following variable is set:
    LOG4J_CONFIG_FILE
    and change it to your desired path.
    In your Global.app class, instantiate a static Logger like this:
    transient static Logger logger = Logger.getLogger(Global.class);
    You should be logging now as long as you have the categories and appenders configured properly in your log4j.xml file.
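    For reference, a minimal sketch of the pattern described above, assuming log4j 1.x and a hypothetical pageflow controller class (the Workshop base class is omitted); the transient static modifiers keep the non-serializable Logger out of the serialized pageflow state, and the categories/appenders still come from the log4j.xml pointed to by LOG4J_CONFIG_FILE:
    import org.apache.log4j.Logger;

    public class MyPageFlowController /* extends the Workshop pageflow base class */ {

        // static + transient: the logger is shared per class and is never serialized
        // with the pageflow instance, which avoids the "not serializable" complaint.
        private transient static Logger logger =
                Logger.getLogger(MyPageFlowController.class);

        public void someAction() {
            logger.debug("Entering someAction()");
            try {
                // ... pageflow logic ...
            } catch (Exception e) {
                logger.error("someAction() failed", e);
            }
        }
    }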

  • Best practice for linking fields from multiple entity objects

    I am currently transitioning from PHP to ADF. I'm looking for the best practice for linking data from multiple entity objects.
    Example:
    EO 'REQUESTS' has fields: req_id, name, dt, his_stat_id, her_stat_id
    EO 'STATUSES' has fields: stat_id, short_txt_descr
    'REQUESTS' is linked to EO 'STATUSES' on: STATUSES.stat_id = REQUESTS.his_status_id
    'REQUESTS' is also linked to EO 'STATUSES' on: STATUSES.stat_id = REQUESTS.her_status_id
    REQUESTS.his_status_id is independent of REQUESTS.her_status_id
    When I create a VO for REQUESTS, I want to display: REQUESTS.name, REQUESTS.dt, STATUSES.short_txt_descr (for his_stat_id), and STATUSES.short_txt_descr (for her_stat_id)
    What is the best practice for accomplishing this? It appears I could do it a few different ways:
    1. Create the REQUESTS VO with a LOV for his_stat_id and her_stat_id
    2. Create the REQUESTS VO with the join to STATUSES performed within the query for the VO. This would require joining on the STATUSES EO twice (his_stat_id, her_stat_id)
    3. I just started reading about View Links - would that somehow do what I'm looking for?
    I also need to be able to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a dropdown.
    Any suggestions on how to approach such a stupidly simple task?
    Using jDeveloper 11.1.2.2.0 if that makes a difference in the solution.
    Thanks ahead of time,
    CJ

    CJ,
    I vote for solution 1, as it matches your use case exactly. As you said, you want to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a drop-down, and that is exactly the LOV solution.
    View Links are used for master-detail navigation (which you are not doing here), and joining the data makes it difficult to update (and you would still need an LOV for the drop-down box).
    Timo

  • Best practice for logging

    Hi All,
    I would like to know if there is any best practice document for Firewall logging. This would include
    1. What level of logging is ideal
    2. If a log is stored in a logging server, how long is it best to store the logs and retain the logs by a backup tape etc.
    This can include for various industries like IT, Banking etc.
    Any document pertaining to these would be helpful. Thanks in advance.
    Regards,
    Manoj

    Manoj,
    Check out the links below for logging best practices and prerequisites on Cisco devices.
    http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080120f48.shtml#logbest
    http://www.ciscopartner.biz/en/US/docs/security/asa/asa82/configuration/guide/monitor_syslog.html#wp1110908
    Hope to Help !!
    Ganesh.H
    Remember to rate the helpful post

  • Best Practice for Removing Zeroes from Database

    Does anyone have some clever bits of code or best practices for evaluating a database for instances of zeroes? I'm working on cleaning up our rules file and am thinking the best way to start would be to write some code to look for zeroes and write them to a log file. This would at least indicate whether there is even a problem with zeroes (which there may or may not be).
    Any suggestions out there / utilities / code samples?
    Thanks.

    We accomplished this using data extracts from a subset of scenarios/years/entities/accounts to ensure that all of our potential rules could be checked to confirm they were not writing zeros. This worked pretty well for our purposes. A text editor called EmEditor allows VB macros fairly easily, and we could write a quick macro to check for strings ending in "; 0." You may also want to review the 'calculated' check box in your extract and see whether the zeros are the result of calculations. A rule output could also work well, although it would take some defining, since you would have to write it out in a sub and make sure you capture the data from all subroutines, whether your zeros are rule-driven or actual inputs. You may also want to check whether very small, insignificant values are being written; we have seen items with a value 13 places to the right of the decimal point that were not really significant.
    JTF
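    As a rough illustration of the "scan the extract and log the zeroes" idea from the question, here is a minimal Java sketch; the file names and the layout (semicolon-delimited lines ending in the value, matching the "; 0." pattern mentioned above) are assumptions, so adjust the parsing to your actual extract format:
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class ZeroScanner {
        public static void main(String[] args) throws IOException {
            Path extract = Path.of("data_extract.txt");   // hypothetical extract file
            Path log = Path.of("zero_records.log");

            List<String> lines = Files.readAllLines(extract);
            StringBuilder report = new StringBuilder();

            for (int i = 0; i < lines.size(); i++) {
                String line = lines.get(i).trim();
                // Flag lines whose trailing field is a zero value, e.g. "...; 0." or "...; 0"
                String lastField = line.substring(line.lastIndexOf(';') + 1).trim();
                if (lastField.matches("0(\\.0*)?")) {
                    report.append("Line ").append(i + 1).append(": ").append(line).append('\n');
                }
            }
            Files.writeString(log, report.toString());
            System.out.println("Zero records written to " + log);
        }
    }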

  • Best Practices for Removing Shots from BDMV folder

    CS6 Production Premium Suite
    Win7x64
    Canon XA10
    I would appreciate feedback on the best practices for the following situation:
    Using Windows Explorer, I copy the BDMV folder from the XA10 to my Talk2 project folder.
    The BDMV folder has three one hour shots (talks) where each shot is one hour called Talk1, Talk2, and Talk3.
    Each shot consists of several MTS files in the STREAM folder since MTS files have a maximum file size so a new MTS file is created when a given MTS file reaches the maximum file size.
    Since I only want to have Talk2 stuff in my Talk2 project folder, I need to remove the Talk1 and Talk3 stuff from the BDMV folder.
    I delete the Talk1 and Talk3 MTS files from the STREAM folder.
    I delete the Talk1 and Talk3 CPI files from the CLIPINF folder.
    I leave the PLAYLIST folder as is.
    Using the Media Browser, I import Talk2 (which consists of two MTS files).
    I edit the clip.
    This procedure seems to work, but I do not know if there are any "gotcha" issues.
    Thanks in advance.

    Oh, don't do it that way.  I know a lot of people do, heck, my boss does, but it's just asking for trouble.
    Treat your card as if it was your original tape master (because it is).  It is the most important thing you have.  Don't delete or move any part of it. 
    If you want to break up the talks, do it as you shoot them.  Use separate cards for each talk and archive each one separately.  There is too much valuable information in the structure of the card format.  You may not need it now but your editing program may need it later.
    Hard drive space is cheap but digital recordings are priceless

  • Best Practices for Logging Interactions in WLST

    I am wondering what the best method is to log commands executed during a WLST session, in both interactive and offline modes.
    Is it best just to import Python's logging module, or is there a better facility provided by WLST?


  • Best practice for workflow triggering from Web Dynpro UI

    Hello, workflow community!
    I'm working on a task which allow to trigger the workflow by clicking a button in Web Dynpro UI. As always, there are multiple ways to do that, for instance, to use SAP Workflow API (SAP_WAPI_START_WORKFLOW) or to raise an event upon the button click, which will be caught by workflow template.
    In my opinion, the optimal solution is to call an FM, which calls an ABAP class that raises an event, which, in turn, is caught by the workflow template. In this case, the FM serves as a kind of wrapper, where we can implement additional checks if needed.
    But the question is, which approach is the best practice: raising an event or using SAP_WAPI_*?
    Thanks.

    Let's combine both: use SAP_WAPI_CREATE_EVENT.
    Usually I would not recommend starting a workflow directly (SAP_WAPI_START_WORKFLOW). When I look for workflows in a system I usually start from SWE2 (the event linkage), which uses events, so a workflow started by SAP_WAPI_START_WORKFLOW will not be visible there. SWE2 also gives you better control over starting the workflow: you can easily deactivate an event linkage, whereas finding where you called SAP_WAPI_START_WORKFLOW is more difficult and deactivating it requires a code change.
    So use events, and start them with SAP_WAPI_CREATE_EVENT.
    Also note that SWE2 gives you a check function module option.

  • Best Practice for EJB calls from servlet?

    Hi folks
    I could not find general rules for making calls to a stateful EJB from the web container (e.g. from a backing bean). Some books say it is bad programming style to call them directly from a common servlet; they say to first create an HttpSession object and call the stateful EJB from there.
    I'm a bit confused because I'm missing a best-practice guide on where to initiate such calls.
    Can somebody please point me in the right direction?
    Kind Regards
    Bruno
    Edited by: zajoho on Oct 30, 2008 11:14 PM

    Hi Bruno,
    The main issue with the combination of stateful session beans and servlets is the servlet threading model.
    It is dangerous to store a stateful session bean reference in servlet instance state, since the servlet instance
    can be accessed concurrently, yet a stateful session bean reference is intended to be used by only one
    client.
    As you point out, one alternative is to store the reference in the HttpSession. That associates the reference
    with a particular client, which matches the stateful session bean programming model.
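    A minimal sketch of the HttpSession approach described above, using a hypothetical Cart business interface and JNDI name; the point is that the stateful bean reference lives in the session (one per client) rather than in a servlet field (shared across threads):
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;
    import java.io.IOException;

    // Hypothetical business interface of a stateful session bean deployed elsewhere.
    interface Cart {
        void addItem(String item);
    }

    public class CartServlet extends HttpServlet {

        // Note: no Cart field here -- a servlet instance is shared across threads,
        // so an instance field would expose one client's bean to other clients.

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession();

            Cart cart = (Cart) session.getAttribute("cart");
            if (cart == null) {
                try {
                    // Hypothetical JNDI name; adjust to your deployment descriptors.
                    cart = (Cart) new InitialContext().lookup("java:comp/env/ejb/Cart");
                } catch (NamingException e) {
                    throw new ServletException(e);
                }
                // Tie the stateful bean reference to this client's HTTP session.
                session.setAttribute("cart", cart);
            }
            cart.addItem(req.getParameter("item"));
            resp.getWriter().println("Item added.");
        }
    }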

  • What is the best practice for logging runtime error  or uncheked exceptions

    hello
    My main problem is logging a NullPointerException to a file.
    Suppose I have this method:
    public void myMethod() {
        // ... some declarations here ...
        User user = obj.findUser("userx");                     // this may return null
        System.out.println("user name is " + user.getName()); // possible NullPointerException here
    }
    So what is the best practice to catch the exception?
    Can I log the exception without catching it?
    thank you

    "A terrible way of logging exceptions."
    "Not that I'm not agreeing with you, but why? (I haven't actually used this.)"
    Because it's always on, for one thing. It's not really configurable either, unless you go to some trouble to make it so. You'd have to provide an InputStream. How? Either at compile time, which is undesirable, or by providing configuration, which a logging framework has already done, better. Then there's the fact that you can't log anything other than the stack trace, which is not particularly helpful a lot of the time. In short, it's a buggy and incomplete solution to something that has already been solved much, much better by, for example, Log4J.
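    As a rough sketch of the conventional approach with Log4J (with a file appender configured in log4j.xml or log4j.properties), and with hypothetical stand-ins for the User and finder types from the question:
    import org.apache.log4j.Logger;

    // Hypothetical types standing in for the ones in the question.
    interface User { String getName(); }
    interface UserRepository { User findUser(String name); }

    public class UserService {

        private static final Logger logger = Logger.getLogger(UserService.class);
        private final UserRepository repository;

        public UserService(UserRepository repository) {
            this.repository = repository;
        }

        public void myMethod() {
            User user = repository.findUser("userx");   // may return null
            if (user == null) {
                // Prefer an explicit check over letting the NullPointerException happen.
                logger.warn("findUser(\"userx\") returned null");
                return;
            }
            logger.info("user name is " + user.getName());
        }

        // For exceptions you did not catch anywhere, a last-resort handler can still
        // route them to the same log file instead of only printing to the console.
        public static void installLastResortHandler() {
            Thread.setDefaultUncaughtExceptionHandler(
                    (thread, throwable) -> logger.error("Uncaught exception in " + thread.getName(), throwable));
        }
    }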

  • Best Practices for configuring ICMP from the outside

    Question,
    Are there any best practices or recommendations on how ICMP should be configured from the outside? I have been cleaning up the rules on our ASA, as a lot were simply ported over years ago when we retired our PIX. I noticed that there is a rule to allow ICMP any any, and began to wonder how this works when the rules above it are for specific IP addresses and specific ports. This in turn started me looking for documentation or anything else that would help me determine a best practice. Does anyone know of anything?
    As a second part, how does this flow on a firewall if all the addresses are NATed? Is the ICMP traffic simply passed through the NAT, with the destination simply responding?
    Brent                   

    Here you go, bro!
    http://checkthenetwork.com/networksecurity%20Cisco%20ASA%20Firewall%20Best%20Practices%20for%20Firewall%20Deployment%201.asp#_Toc218778855
    access-list inside permit icmp any any echo
    access-list inside permit icmp any any echo-reply
    access-list inside permit icmp any any unreachable
    access-list inside permit icmp any any time-exceeded
    access-list inside permit icmp any any packets-too-big
    access-list inside permit udp any any eq 33434 33464
    access-list inside deny icmp any any log
    P/S: if you think this comment is useful, please do rate them nicely :-)

  • Oim11g - Best Practice for Logging

    Hello all,
    I want to know the best practice or common usage for error logging in OIM 11g. As we know, OIM has a sequence of processes that end up in Java. For best practice, where and when should I create the log? Should I create a log inside each function called by an OIM adapter? If I add a sysout at the beginning of each function (with parameter names and values), log e.getMessage() inside catch (Exception e), and add another sysout at the end of the function, is that okay? Or is there a better implementation? Using sysout, or log4j, commons-logging, etc.?
    in my idea (in each function):
    <function_name>::BEGIN
    - Time: <dd-MMM-yyyy>
    <function_name>::PARAMETER
    - <param_name1>: <param1_value>
    - <param_name2>: <param2_value>
    - <param_name3>: <param3_value>
    if an error occurs (inside the catch/throw block):
    <function_name>::ERROR
    - Message: <e.getMessage()>
    <function_name>::END
    is it good?

    This is what I've been doing with 11g logging. In every custom code class I run, I use this to declare my logger:
    private final static Logger LOGGER = Logger.getLogger(<Class_Name>.class.getName().toUpperCase());  // Replace <Class_Name> with the actual class name
    This lets me go to Enterprise Manager and change the logging level once the class has been used.
    You can then use the following code:
    LOGGER.log(Level.INFO, "Insert Information Message Here");
    LOGGER.log(Level.CONFIG, "Insert More Detailed Debugger Information Message Here");
    LOGGER.log(Level.WARNING, "Insert Error Message Information Here", e); //e is your exception that is caught
    Personally, I like to put start and end output in my logging, and for any details in the middle I use the CONFIG level. This lets me know the pieces are running successfully, while I only need to see the details during testing or when needed. When deployed to production, I set the logger to the WARNING level so I only hear about problems.
    By using these, you can set your logger appropriately in Enterprise Manager to output more detail when needed.
    -Kevin
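    A minimal sketch combining the question's BEGIN/PARAMETER/END idea with the java.util.logging approach described above; the adapter class, method, and parameter names are hypothetical:
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class UserProvisioningAdapter {   // hypothetical OIM adapter task class

        private static final Logger LOGGER =
                Logger.getLogger(UserProvisioningAdapter.class.getName().toUpperCase());

        public String createUser(String userLogin, String orgName) {
            LOGGER.log(Level.INFO, "createUser::BEGIN");
            LOGGER.log(Level.CONFIG, "createUser::PARAMETER userLogin={0}, orgName={1}",
                    new Object[] {userLogin, orgName});
            try {
                // ... connector / OIM API calls would go here ...
                return "SUCCESS";
            } catch (Exception e) {
                // Log at WARNING (or SEVERE) with the exception so the stack trace reaches the log.
                LOGGER.log(Level.WARNING, "createUser::ERROR " + e.getMessage(), e);
                return "FAILURE";
            } finally {
                LOGGER.log(Level.INFO, "createUser::END");
            }
        }
    }
    With the logger named after the class, the log level can then be raised or lowered per class from Enterprise Manager, exactly as described above.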

  • Best Practice for Migrating code from Dev to a fresh Test ODI instance

    Dear All,
    This is Priya.
    We are using ODI 11.1.1.6 version.
    In my ODI project, we have separate installations for Dev, Test, and Prod, i.e. the master repositories are not shared between the three. My code is now ready in Dev. The Test environment has just been installed with ODI, and its master and work repositories have been created. That's it.
    Now I need to understand the simplest and best way to export the code from Dev and migrate it to the Test environment. Could someone outline this as a step-by-step procedure in 5-6 lines?
    Some questions on current state.
    1. Do the ids of the master and work repositories in Dev and Test need to be the same?
    2. I usually see a repository id of 999 in the export file and fail to understand what it is exactly. None of my master or work repositories has that id.
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    Thanks,
    Priya
    Edited by: 948115 on Jul 23, 2012 6:19 AM

    948115 wrote:
    Now, I need to know and understand what is the simple & best way to import the code from Dev and migrate it to the test environment. Can someone outline this as a step-by-step procedure in 5-6 lines?
    If this is the first time you are moving to QA, it is better to export/import the complete work repository. If it is not the first time, create scenarios of the specific packages and export/import those to QA. With scenarios you do not need to bother about models/datastores. Keep in mind that the logical schema names in QA should be the same as those used in Dev.
    1. Do the ids of the master and work repositories in Dev and Test need to be the same?
    They should be different.
    2. I usually see a repository id of 999 in the export file and fail to understand what it is exactly.
    It is required to ensure object uniqueness across several work repositories. For more details you can refer to:
    http://docs.oracle.com/cd/E14571_01/integrate.1111/e12643/export_import.htm
    http://odiexperts.com/odi-internal-id/
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    If you export the topology, you will get the logical connection and context details. If you do not export the topology, you need to manually create the contexts and the other physical/logical connections.

  • Java - best practice for getting values from complex dialogs

    Hi,
    Java is a language I use quite a bit, but much of my work hasn't required GUIs. Now, I'm developing an app which will contain a number of custom-made dialogs (which will obviously extend JDialog). Take a typical Options dialog which typically stores many parameters for a given application.
    Simple dialogs that ship with Java let you 'get' a given value, since they assume the dialog only obtains a single piece of data; think JFileChooser.
    So, what is the best way to cope with a dialog that could potentially hold hundreds of parameters? It is surely unfeasible to implement getters for each field. I was thinking of either creating a HashMap to hold key-value pairs and 'getting' that from the dialog, or encapsulating it a little more by creating a new class that manages these values, ProgramOptions or the like, which can be passed around between the dialog and the main app.
    I hope this makes sense
    TIA

    sudman1 wrote:
    I doubt it's the most efficient method, but the way I've dealt with things like this in the past is to set up the fields in an Array or ArrayList, then use a for loop that gets the data and drops it into an Array/ArrayList which is then returned.
    import javax.swing.JDialog;
    import javax.swing.JTextField;
    import java.util.ArrayList;
    class MyDialog extends JDialog {
        // Global variable
        private ArrayList dialogFieldsArray = new ArrayList();
        private int numRequiredFields = 10;   // however many fields the dialog needs
        private void initGUI() {
            for (int i = 0; i < numRequiredFields; i++) {
                dialogFieldsArray.add(new JTextField(10));
            }
        }
        private ArrayList getDialogData() {
            ArrayList rval = new ArrayList();
            for (int i = 0; i < dialogFieldsArray.size(); i++) {
                String tempString = ((JTextField) dialogFieldsArray.get(i)).getText();
                rval.add(tempString);
            }
            return rval;
        }
    }
    The only thing to remember with the above method is the casting involved, but you could implement your own subclasses of ArrayList that deal only with JTextFields and Strings separately to get around that.
    I personally wouldn't go for this exact style; instead I'd opt for a Map collection. This is because you need to know which field is at which exact index. Imagine you make a form that has the fields forename, surname, and date of birth, so you know that forename is .get(0) of your ArrayList, etc.
    But then you decide to put a new field, salutation (Mr, Miss, etc.), at the beginning. This shifts all your old fields along by one, so you need to edit that code. Sure, no biggie with a handful of fields, but large dialogs could cause headaches.
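    A minimal sketch of the Map-based variant suggested above, with hypothetical field names: the dialog registers each input component under a key and returns a Map from key to entered text, so adding or reordering fields never shifts any indices.
    import javax.swing.*;
    import java.util.LinkedHashMap;
    import java.util.Map;

    class OptionsDialog extends JDialog {

        // Key -> input component; LinkedHashMap keeps the fields in insertion order for layout.
        private final Map<String, JTextField> fields = new LinkedHashMap<>();

        OptionsDialog(JFrame owner) {
            super(owner, "Options", true);
            addField("forename");
            addField("surname");
            addField("dateOfBirth");
            setLayout(new BoxLayout(getContentPane(), BoxLayout.Y_AXIS));
            for (Map.Entry<String, JTextField> e : fields.entrySet()) {
                add(new JLabel(e.getKey()));
                add(e.getValue());
            }
            pack();
        }

        private void addField(String key) {
            fields.put(key, new JTextField(10));
        }

        // Snapshot of the entered values, keyed by field name.
        Map<String, String> getDialogData() {
            Map<String, String> values = new LinkedHashMap<>();
            for (Map.Entry<String, JTextField> e : fields.entrySet()) {
                values.put(e.getKey(), e.getValue().getText());
            }
            return values;
        }
    }
    The caller reads getDialogData().get("forename") and so on, or wraps the map in a small ProgramOptions class as suggested in the original question.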

  • Best practice for client migration from SMS 2003 to SCCM 2012 R2?

    Can anyone advise what the most recommended method is for migrating SMS 2003 clients (Windows 7) to SCCM 2012 R2?
    Options are:
    1. SMS package to deploy ccmclean.exe to uninstall SMS 2003 client. Deploy SCCM 2012 R2 client via SCCM console push.
    2. Simply push SCCM client from console installing over existing SMS 2003 client.
    I am finding some folks saying this is ok to do but others saying no it is not supported by Microsoft. Even if it appears to work I do not want to do it if it is not supported by Microsoft. Can anyone provide a definitive answer?
    3. Use a logon script or Group Policy preference to detect the SMS client and delete it using CCMclean and then install the SCCM client.
    Appreciate any feedback. Thank you.

    Hi,
    >>I am finding some folks saying this is ok to do but others saying no it is not supported by Microsoft. Even if it appears to work I do not want to do it if it is not supported by Microsoft. Can anyone provide a definitive answer?
    I haven't seen any official document that covers the option 2 method; the documentation only covers upgrading directly from SCCM 2007 to SCCM 2012.
    I think you could use a logon script or Group Policy preference to detect the SMS client and delete it using CCMclean.exe. Then use SCCM 2012 to discover these computers and push clients to them.
    Best Regards,
    Joyce
