Sequence of execution

Hello everybody,
I need to know about the sequence of execution in a SELECT command.
I have a SQL command that uses a function, something like:
select id, function(item)
from tbl
where conditions...
I want to know whether my function on item executes before the WHERE clause or vice versa.
To explain further: does the SQL engine fetch rows according to the WHERE clause and then execute my function, or execute my function first and then fetch the rows according to the WHERE clause?
If you have a document or something that explains the SQL engine and its sequence of execution, please let me know.
Your help is really appreciated.

EXPLAIN PLAN SET STATEMENT_ID='TSH' FOR
SELECT *
FROM emp e, dept d
WHERE e.deptno = d.deptno
AND e.ename = 'SMITH';
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','TSH','BASIC'));
For this you need to have PLAN_TABLE in your schema.
Regards,
Abu
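To illustrate the original question directly: logically, a SQL engine evaluates the WHERE clause before the SELECT list, so the function should only run for rows that survive the filter. An optimizer is free to reorder as long as the result is unchanged, so this is not guaranteed by every engine; the sketch below uses Python's sqlite3 module (not Oracle) with made-up table and function names to show the typical behaviour:

```python
import sqlite3

calls = []  # record every argument the SQL function is invoked with

def traced(x):
    calls.append(x)
    return x * 2

conn = sqlite3.connect(":memory:")
conn.create_function("traced", 1, traced)
conn.execute("CREATE TABLE tbl (id INTEGER, item INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

rows = conn.execute("SELECT id, traced(item) FROM tbl WHERE id > 1").fetchall()
# traced() was only called for the rows that passed the WHERE clause
print(rows)    # [(2, 40), (3, 60)]
print(calls)   # [20, 30]
```

Row 1 is filtered out before the SELECT list is evaluated, so traced() never sees its item value.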

Similar Messages

  • Customer Exit Variable and Condition in a Query (Sequence of Execution)

    Hi,
For a query I defined a customer exit variable and a condition.
Which will execute first: the variable and then the condition, or vice versa?
Is there any way we can control the sequence of execution?
My requirement is to execute the condition first and then the variable; how can I control this?
Thanks
    Thanks

    Hi
In your customer exit you will have a field named I_STEP which lets you control when the variable executes.
    Assign points if helpful
    Prathish

  • Sequence of execution of events

Can anyone tell me what the sequence of execution of events is?

    Hello,
    Take a look on this: http://help.sap.com/saphelp_nw04s/helpdata/en/9f/db9a1435c111d1829f0000e829fbfe/frameset.htm
    Regards.

  • Sequence of execution of report

    HI BW Experts,
Can anyone tell me the sequence of execution of RKF, CKF, filters, formulas, etc. in a report?
    Regards
    RK

    hello,
the sequence may be:
1. Filters
2. RKF
3. CKF, Formula
For further understanding, please refer to the link:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
    Reg,
    Dhanya

  • Exit Sequence after Execution

Is there a way to automatically close the current sequence file when the test finishes executing? I am using TestStand version 1.0.2.
My problem is with Type Conflict errors occurring while running sequence files through a LabVIEW GUI. The user has to exit all the way out of LabVIEW in order to "unload" the current sequence file types. I am working with code that cannot be modified, so I am wondering if there is a setting somewhere in TestStand that would solve this.

    The API is found in the TestStand Help.  You may be looking for the ApplicationMgr.CloseSequenceFile method call.
NI TestStand™ UI Controls API Reference Help>>UI Classes, Properties, Methods, and Events>>Core UI Classes, Properties, Methods, and Events>>ApplicationMgr>>Methods>>CloseSequenceFile
    Jensen
    National Instruments
    Applications Engineer

  • Sequence of execution of Oracle Joins

    Hi All,
I am applying a combination of outer joins and equi-joins on around 5-6 tables,
but I am not getting the expected results.
I just want to know how Oracle executes these joins, outer and equi.

    Here is the data on which I am playing,
-- Employee table
CREATE TABLE "EMP"
   ("EMP_ID" NUMBER(10,0) NOT NULL ENABLE,
    "FNAME" NVARCHAR2(50) NOT NULL ENABLE,
    "LNAME" NVARCHAR2(50) NOT NULL ENABLE
   );
-- Mapping between task type & department
CREATE TABLE "TASKTYPE_FOR_DEPT"
   ("TASKTYPE_FOR_DEPT_ID" NUMBER(10,0) NOT NULL ENABLE,
    "DEPT_ID" NUMBER(10,0) NOT NULL ENABLE,
    "TASK_TYPE_CD" NVARCHAR2(10) NOT NULL ENABLE
   );
-- Department-wise employee hierarchy
CREATE TABLE "EMP_HIERARCHY"
   ("EMP_ID" NUMBER(10,0) NOT NULL ENABLE,
    "DEPT_ID" NUMBER(10,0) NOT NULL ENABLE
   );
-- Task details
CREATE TABLE "TASKS"
   ("TASK_ID" NUMBER(10,0) NOT NULL ENABLE,
    "TASK_PRIORITY" NVARCHAR2(10) NOT NULL ENABLE,
    "TASK_TYPE" VARCHAR2(20 BYTE)
   );
-- Task allocation
CREATE TABLE "TASKSALLOCATION"
   ("TASKALLOCATION_ID" NUMBER(10,0) NOT NULL ENABLE,
    "EMP_ID" NUMBER(10,0) NOT NULL ENABLE,
    "TASK_ID" NUMBER(10,0) NOT NULL ENABLE
   );
    Insert into EMP (EMP_ID,FNAME,LNAME) values (1,'XYZ','DFD');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (2,'DFDS','FD');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (3,'FDSF','GFH');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (6,'GFHGF','GFHS');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (4,'GFD','FDG');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (5,'DSFDS','FDSAF');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (7,'GHGY','EWE');
    Insert into EMP (EMP_ID,FNAME,LNAME) values (8,'FGRFSAD','SADF');
    Insert into TASKTYPE_FOR_DEPT (TASKTYPE_FOR_DEPT_ID,DEPT_ID,TASK_TYPE_CD) values (1,1,'T1');
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (1,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (2,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (3,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (4,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (5,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (6,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (7,1);
    Insert into EMP_HIERARCHY (EMP_ID,DEPT_ID) values (8,1);
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (1,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (2,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (3,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (4,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (5,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (6,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (7,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (8,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (9,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (10,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (11,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (12,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (13,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (14,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (15,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (16,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (17,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (18,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (19,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (20,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (21,'LOW','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (22,'HIGH','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (23,'MEDIUM','T1');
    Insert into TASKS (TASK_ID,TASK_PRIORITY,TASK_TYPE) values (24,'LOW','T1');
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (1,1,1);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (2,2,1);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (3,3,2);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (4,3,3);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (5,4,4);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (6,4,5);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (7,4,6);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (8,4,7);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (9,5,6);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (10,6,8);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (12,8,8);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (13,8,10);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (14,8,11);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (15,8,12);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (16,6,13);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (17,5,14);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (18,3,12);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (19,3,13);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (20,2,15);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (21,1,16);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (22,2,17);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (23,1,18);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (24,4,19);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (25,6,20);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (26,5,21);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (27,1,22);
    Insert into TASKSALLOCATION (TASKALLOCATION_ID,EMP_ID,TASK_ID) values (28,3,23);
COMMIT;
We want all resources belonging to the department for a given task type, with assigned task counts grouped by priority.
    I tried the following query,
    select emp.fname || ' ' || emp.lname EMP_NAME
         , sum(DECODE(tasks.TASK_PRIORITY, 'HIGH', 1, 0)) HIGH
         , sum(DECODE(tasks.TASK_PRIORITY, 'MEDIUM', 1, 0)) MEDIUM
         , sum(DECODE(tasks.TASK_PRIORITY, 'LOW', 1, 0)) LOW
      from emp,
      EMP_HIERARCHY,
      TASKSALLOCATION,
      TASKS,
      TASKTYPE_FOR_DEPT 
  where
   TASKTYPE_FOR_DEPT.TASK_TYPE_CD = 'T1'
   and emp.EMP_ID = TASKSALLOCATION.EMP_ID(+)
      and TASKSALLOCATION.TASK_ID = tasks.TASK_ID(+)
      and tasks.TASK_TYPE = TASKTYPE_FOR_DEPT.TASK_TYPE_CD
      and TASKTYPE_FOR_DEPT.dept_id = EMP_HIERARCHY.dept_id
    -- and EMP_HIERARCHY.emp_id = emp.EMP_ID
    group by emp.fname || ' ' || emp.lname;
It is not working properly.
We also want to see employees who have no task allocated but belong to the same department;
in the above result set, that is employee 'GHGY EWE'.
    We are expecting resultset something like this.
with
    t as
        (
        select 'GFHGF GFHS' as emp_name, 1 as highPriority, 2 as mediumPriority, 0 as lowPriority from dual union all
        select 'FDSF GFH',     1, 2, 2 from dual union all
        select 'XYZ DFD',      3, 0, 1 from dual union all
        select 'GHGY EWE',     0, 0, 0 from dual union all
        select 'DFDS FD',      1, 1, 1 from dual union all
        select 'GFD FDG',      3, 1, 1 from dual union all
        select 'FGRFSAD SADF', 1, 2, 1 from dual union all
        select 'DSFDS FDSAF',  0, 1, 2 from dual
        )
select * from t;
Note: We are using Oracle version 11.2.0.2.0.
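A likely cause of the missing 'GHGY EWE' row: when a filter on an outer-joined table is written as a plain condition (in Oracle syntax, without the `(+)` marker; in ANSI syntax, in the `WHERE` clause rather than the `ON` clause), it silently turns the outer join into an inner join and drops the unmatched rows. A minimal sketch of the effect using Python's sqlite3, with made-up table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (emp_id INTEGER, name TEXT);
    CREATE TABLE task (emp_id INTEGER, priority TEXT);
    INSERT INTO emp VALUES (1, 'XYZ'), (7, 'GHGY');   -- employee 7 has no tasks
    INSERT INTO task VALUES (1, 'HIGH');
""")

# Filter in the WHERE clause: the outer join degenerates into an inner join,
# so the employee with no tasks disappears (NULL priority fails the filter).
inner_like = conn.execute("""
    SELECT emp.name FROM emp
    LEFT JOIN task ON task.emp_id = emp.emp_id
    WHERE task.priority = 'HIGH'
    ORDER BY emp.emp_id
""").fetchall()

# Filter in the ON clause: unmatched employees are kept with NULL task columns.
outer_kept = conn.execute("""
    SELECT emp.name FROM emp
    LEFT JOIN task ON task.emp_id = emp.emp_id AND task.priority = 'HIGH'
    ORDER BY emp.emp_id
""").fetchall()

print(inner_like)  # [('XYZ',)]
print(outer_kept)  # [('XYZ',), ('GHGY',)]
```

In the original query, conditions like `tasks.TASK_TYPE = TASKTYPE_FOR_DEPT.TASK_TYPE_CD` without `(+)` on the outer-joined side have the same effect.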

  • Can we view line numbers next to each step in sequence file ?

    Hi,
I have a sequence file with 600 steps. I would like to know whether there is any option in TestStand to show a line number for each step in the sequence editor, so that it will be helpful for review.
    Example:
1 Action1
2 Action2
...
600 Action600
    Regards,
    krishna 

    Krishna,
    If you just want to know the index of a step, this information is displayed at the bottom of the sequence editor in the status bar.  It shows you how many steps there are in the sequence, how many you have selected, and the index(es) of the selected step(s).
It is also possible to display this information in the steps view as a column if you are willing to create a new column. Follow the instructions in this KB to see how to create a new column: Changing Columns Properties of the TestStand Sequence Editor Execution View Window.
One of the types of columns is Index. Simply set your column to that type and you will see step indexes as shown below.
    Message Edited by Josh W. on 07-01-2009 01:50 PM
    Message Edited by Josh W. on 07-01-2009 01:53 PM
    Josh W.
    Certified TestStand Architect
    Formerly blue

  • Can we force queries to execute in Sequence?

    Hello all,
    We are at BOXI 3.1/SP2 version. One of our complex webi reports has data from multiple queries ( They are all from the same universe, so same data connection. It's an OLAP universe).
    I have this question
1. When I click "Run Queries", do all the queries get sent to the database at the same time, or is there a sequence that is followed? Can we enforce a sequence?
2. Similarly, if I schedule this report, is it possible to enforce a sequence for the execution of these queries?
    Thanks in advance.

    Hi Dave,
Let me provide more details to clarify my issue.
1. Our queries are a bit intensive, so when I refresh the single report, I actually see all 10 huge MDX queries (multiple queries within a single report) hitting the cube at the same time, causing a general performance impact.
My ideal scenario would be to somehow control BO to fire one query, get its results, and so on, and then collate all that data and present it in the report.
    I hope this clarifies.

  • Copy and replace sequences

    Hi!
    I have a running sequence containing a main loop. Depending on input from user, a given subsequence (in separate file) is called and when it has executed we go back to the main file. From the main loop I have made it possible to run a batch file that copies the whole set of sequence files from a network drive to the local folder.
    Problem: The files seem to not always update, meaning when one certain sub file is called it is still unchanged although the network master was edited and saved. All the sequences are set to unload after execution.
    Ideas?
    /Stefan

    Hello!
When programming with the ActiveX API in TestStand you can get object references to the sequence file, sequence, steps, execution and so on that is executing at the moment. Using these references you can close the open sequence file and then re-load it the ordinary way.
    One of the methods you can use is: SequenceFile.UnloadModules
    Regards,
    Jimmie A.
    Applications Engineer, National Instruments
    Regards,
    Jimmie Adolph
    Systems Engineer Manager, National Instruments Northern Region
    Bring Me The Horizon - Sempiternal

  • I would like to do something after every subsequence finishes execution. Are there any engine callbacks?

After every sequence finishes execution in a client sequence file, I want to check the status of every step in that sequence, and based on the status do something. Currently I have a subsequence that gets called at the end of every sequence. I was wondering whether I could move this subsequence into some callback so that I don't have to call it explicitly in every sequence.

    Saravisu,
    There is a way, using the SequenceFilePostStep callback. This will execute for every step, but you can use a Flow Control 'If' to Check for PropertyExists("Parameters.Result.TS.SequenceCall"); when true, the step results for the sequence call will be in Parameters.Result.TS.SequenceCall.ResultList.
    Note this will only execute for steps that occur in the sequence file that contains the SequenceFilePostStep callback.
    -Jack

  • Commit sequence

    Hi all,
I'd like to know a little more about the DB commit process. In my database,
Table A is the parent of Table B, and Table B is the parent of Table C. The following code attempts to fill each of these tables with respect to their relationships:
class Update {
    DAOFactory mySQL = null;
    ServiceTable_A serviceA = null;
    ServiceTable_B serviceB = null;
    ServiceTable_C serviceC = null;
    int status = 0;  // holds the status of the last insert

    public Update() {
        mySQL = DAOFactory.getDAOFactory(DAOFactory.MYSQL);
        serviceA = mySQL.getServiceA();
        serviceB = mySQL.getServiceB();
        serviceC = mySQL.getServiceC();
    }
    public void addToA(Person a) {
        status = serviceA.insert(a);
    }
    public void addToB(Relative b) {
        status = serviceB.insert(b);
    }
    public void addToC(Address c) {
        status = serviceC.insert(c);
    }
    public boolean success() {
        if (status > -1) {
            mySQL.commit();  // see snapshot of commit method below
            return true;
        } else {
            mySQL.rollback();
            return false;
        }
    }
}

class PerformUpdate {
    Update update = null;

    public PerformUpdate(Update d) {
        update = d;
    }
    public void performNewPerson(Person a, Relative b, Address c) {
        update.addToA(a);
        update.addToB(b);
        update.addToC(c);
    }
}

class View {
    private boolean UPDATE = true;
    private PerformUpdate perf = null;
    Person a;
    Relative b;
    Address c;

    public void constructObjsFromInput() {
        // I construct Person a, Relative b, Address c objects here
    }
    public void executeAction() {
        constructObjsFromInput();
        if (UPDATE) {
            perf = new PerformUpdate(new Update());
            perf.performNewPerson(a, b, c);
        }
    }
}

// HERE IS A SNAPSHOT OF THE COMMIT METHOD IN THE MYSQL FACTORY
public void commit() {
    if (!connection.isAutoCommit()) {
        connection.commit();
        connection.setAutoCommit(true);
        dbUtils.close(connection);  // dbUtils closes connections in a finally block
    }
}
serviceA, serviceB, and serviceC have their own prepared statements that execute the insert commands.
Assuming that none of them fail, it's still important that serviceA commits first, followed by serviceB, then serviceC, based on their relationships.
Is there any way that the sequence of execution triggered by the commit gets jacked up on the way to the DB? I tested this situation several times.
Two situations occur with the same set of data:
1. Table B, child of Table A, cannot be updated because of foreign key constraints. It is as if serviceB attempted its update before Table A did!
2. Table C, child of Table B, cannot be updated because of foreign key constraints. It is as if Table A's update occurred first, then Table C's, followed by Table B's.
Are we guaranteed that prepared statements will be committed in the order they were called?
Or does it depend on the weight of each query?
Or is it just up to the JVM to give resources to whichever one? If so, do we have a way to force the sequence to happen in one way?
    Thanks
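For what it's worth: with most JDBC drivers the INSERT statements are sent to the database in the order `execute` is called, and a foreign-key violation is normally raised at execute time, not at commit time; the commit only makes the already-executed statements durable. A small sketch of that timing using Python's sqlite3 (schema names invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER REFERENCES a(id))")

conn.execute("INSERT INTO a VALUES (1)")
conn.execute("INSERT INTO b VALUES (1, 1)")   # parent exists: executes fine, in call order

failed_at_execute = False
try:
    conn.execute("INSERT INTO b VALUES (2, 99)")  # no parent row 99
except sqlite3.IntegrityError:
    failed_at_execute = True  # the error surfaces here, before any commit

conn.commit()
print(failed_at_execute)  # True
```

So if a child insert fails with an FK error, the parent insert usually did not happen at all (or went to a different connection/transaction), rather than the statements being reordered on the wire.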

Also check for any key assumptions that may not be valid. What is the foreign key constraint? Is it an autonumber that you are making an assumption about, such that when it turns out to be not what you think, it blows it all up?

That's exactly right. The DB keys are auto-generated and auto-incremented.
With this in mind, if I want to add a new row to a table, I compute the next sequence number for the key, in this manner:
class ServiceToTable_A implements ServiceA_DAO {
    private Connection con = null;

    public ServiceToTable_A(Connection con) {
        this.con = con;
    }
    // The body of this method is the same for all the other DAOs,
    // except that each queries the appropriate table for its service.
    // Note: this read-then-increment approach is not safe if another
    // client can insert concurrently.
    public int nextSeq(int increment) {
        int key = 0;
        Statement stat = null;
        ResultSet keySet = null;
        try {
            String query = "Select * from table A";
            stat = con.createStatement();
            keySet = stat.executeQuery(query);
            keySet.last();   // move cursor to last row
            key = keySet.getInt("person_id");
        } catch (Exception n) {
            // log error
        } finally {
            stat.close();
            keySet.close();
        }
        return key + increment;
    }
}

class Loading {
    DAOFactory mySQL = null;
    ServiceTable_A serviceA = null;
    ServiceTable_B serviceB = null;
    ServiceTable_C serviceC = null;

    public Loading() {
        mySQL = DAOFactory.getDAOFactory(DAOFactory.MYSQL);
        serviceA = mySQL.getServiceA();
        serviceB = mySQL.getServiceB();
        serviceC = mySQL.getServiceC();
    }
    public int getNextPersonId(int inc) {
        return serviceA.nextSeq(inc);
    }
    public int getNextRelativeId(int inc) {
        return serviceB.nextSeq(inc);
    }
}

class View {
    Person A;
    Relative B;
    Address C;
    Loading load = new Loading();

    public void constructObjects() {
        A = new Person();
        // Construct A from input fields
        B.setParentKey(load.getNextPersonId(1));
        // Construct B from input fields
        C.setParentKey(load.getNextRelativeId(1));
        // Construct C from input fields
    }
}
As you said, I'll do some more testing to isolate what's going on. It seems very odd that the two outcomes alternate like that. Or I may just have the coolest DB on earth.
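Since the keys are auto-generated, a safer pattern than scanning the table for the last key (which races when two clients insert at the same time) is to let the database assign the key and read it back afterwards, e.g. with JDBC's `Statement.getGeneratedKeys()`. A sketch of the idea using Python's sqlite3 and its `lastrowid` equivalent (table name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# Let the database pick the key, then read back the value it assigned;
# there is no window in which another writer can grab the same number.
cur = conn.execute("INSERT INTO person (name) VALUES (?)", ("XYZ",))
first_id = cur.lastrowid

cur = conn.execute("INSERT INTO person (name) VALUES (?)", ("DFDS",))
second_id = cur.lastrowid

print(first_id, second_id)  # 1 2
```

The child row can then be inserted with the key the database actually assigned, instead of a guessed one.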

  • Runtime Sequence of triggers

Is there any way to identify the runtime sequence of the execution of triggers in a form?
Also, is there a place where the default (template) sequence of execution for Forms triggers can be found?

There is no single place where you can read about all the triggers and the order in which they fire, I am afraid. The order is not very well defined either, since the events that happen are not known in every instance.
The help topic called Navigational Triggers holds some relevant information, and there are other topics as well that would add to your knowledge of this area.
I would suggest that you test out the order by setting up the scenario you would like to know the sequence for, putting a message() call in each trigger you want to know about, and observing the sequence in the status bar.

  • Programatically Loading Several Sequences Using LV Simple Operator Interface

    I am playing around with the supplied LV(2009) Simple Operator Interface for running TestStand (4.2) and hoping that someone can point me in the right direction.  I know this may be too generic for a concise solution, but hopefully I can get some suggestions on the best approach for this. Here is a short description of what I am trying to accomplish. 
    I am testing 2 unique UUTs on a single test fixture that is controlled by a PCI-7831R DAQ.  Each UUT will have a unique serial number and report generated.  Therefore, I am using 2 unique sequence files created using TestStand.  Basically, I want to be able to read in a barcode on the test fixture and have the operator interface load the 2 unique sequence files based on information contained in the barcode.  The operator will then hit a button to start the sequence file executions.  I want the first sequence file to run and test the first UUT.  Once the first sequence file is finished, I want the second sequence file to load and run on the second UUT.
    I can use the simple operator interface to load both sequence files.  I then have to select the combo box to select which one I want to run.  This works fine, however, I am trying to automate this so that the operator doesn't have to open the files and continually use the combo box to select sequence files.
    Any ideas or suggestions would be greatly appreciated.

    I have figured out a solution (for my initial application at least) of programmatically loading the sequence files.  When the operator interface first starts, there is a prompt where the user will scan a barcode that is loaded on the front of the fixture.  For all of our product lines, we will have a unique format for the fixture barcode.  Included in this barcode is a product specific software ID number.  I create a folder with the same software ID number and store the sequence files there.  I then parse the barcode and open all *.seq files in that folder.  I have this working correctly now. 
    Once the sequence files are opened, the user must scan each UUT ID barcode before installing them on the fixture.  The intent is to have both UUTs installed on the fixture at the same time so they can be tested serially.  If, however, they do not scan one of the UUTs, it will not be tested.  Basically this is due to the fact that they may want to run only a single board.  Once all of the UUT barcode scanning has been completed, I have another button that will execute the sequence files using the 'Single Pass' execution entry point.  I agree that the 'Single Pass' and 'Test UUTs' points should be hidden from the user.  Once the execute test(s) button is pressed, it will check each UUT barcode to make sure that it is valid.  If both UUT's have valid barcodes, it will run the first UUT and then the second using the 2 sequence files that were previously loaded, otherwise, it will run only one or the other.  The UUT(s) will then be removed and the process starts over again from the UUT ID barcode scanning.  They will not have to scan the fixture barcode again since the correct sequence files have already been loaded.
    Currently I have the UUT/Sequence File associations hard-coded in the UI.  I will need to put my thinking cap on so that as we increase the product lines (several are already in the pipeline) we can use the same UI without any modifications.  Scalability is King!

  • Update Row into Run Table Task is not executing in correct sequence in DAC

Update Row into Run Table Task is not executing in the correct sequence in DAC.
The task phase for this task is "Post Load". The depth in the execution plan is 19, but this task sometimes runs at depth 12, sometimes at 14, and sometimes at 16. I would like to know whether this sequence of execution is in the correct order or not. Out of the box this task is executed at the end of the entire load. No errors were reported in the DAC log.
Please let me know if there are any documents that would highlight this issue.
    rm

Update into Run Table is a task that's required to update a table called W_ETL_RUN_S. The whole intention of this table is to keep a poor man's run history on the warehouse itself. The actual run history is stored in the DAC runtime tables; however, the DAC repository could be on some other database/schema than the warehouse. It's mostly a legacy table that's being carried around. If one pays close attention to this task, it has phase dependencies defined that dictate when it should run.
    Apologies in advance for a lengthy post.... But sure might help understanding how DAC behaves! And is going to be essential for you to find issues at hand.
The dependency generation in DAC follows these rules of thumb:
- It considers the source table and target table definitions of the tasks. With this information, tasks that write to a table take precedence over tasks that read from it.
- It considers the phase information. With this information it can resolve some of the conflicts: should multiple tasks write to the same table, the phase is used to stagger them appropriately.
- It considers the truncate table option. Should there be multiple tasks that write to the same table with the same phase information, the task that truncates the table takes precedence.
- When more than one task that writes to the same table has similar properties, DAC will stagger them. If you feel that they can all go in parallel, or that a common truncate is desired prior to any of the tasks executing, you can use a task group.
- A task group is also handy when you suspect the application logic dictates cyclical reads and writes. For example, Task 1 reads from A and writes to B; Task 2 reads from B and writes back to A. If these two tasks have different phases, DAC can figure that out and order them accordingly. If they need to be of the same phase for some reason, you can create a task group as well.
Now that I have described how the dependency generation works: there may be some tasks that have no relevance to other tasks either as source tables or target tables. The update into run history is a classic example. The purpose of this task is to update the run information in W_ETL_RUN_S with status 'Completed' and an end timestamp. Because this needs to run at the end, it has a phase dependency defined on it. With this information DAC can stagger the position of execution either before (Block) or after (Wait) all the tasks belonging to a particular phase have completed.
Now a description of depth. While depth gives an indication of the order of execution, it's only an indication of how the tasks may be executed; it's a reflection of how the dependencies have been discovered. Let me explain with an example. Tasks that have no dependencies get a depth of 0. Tasks that depend on one or more tasks of depth 0 get a depth of 1. Tasks that depend on one or more tasks of depth 1 get a depth of 2. It also means, implicitly, that a task of depth 2 depends indirectly on a task of depth 0 through other tasks of depth 1. In essence the dependencies translate to an execution graph, which is different from the batch structures one usually thinks of when it comes to ETL execution.
    Because DAC does runtime optimization in the order in which tasks are executed, it may pick a task thats of order 1 over something else with an order of 0. The factors considered for picking the next best task to run depend on
    - The number of dependent tasks. For example, a task which has 10 dependents gets more priorty than the one whose dependents is 1.
    - If all else equal, it considers the number of source tables. For example a task having 10 source tables gets more priority than the one that has only two source tables.
    - If all else equal, it considers the average time taken by each of the tasks. The longer running ones will get more preference than the quick running ones
    - and many other factors!
    And of course the dependencies are honored through the execution. Unless all the predecessors of a task are in completed state a task does not get picked for execution.
    Another way to think of this depth concept : If one were to execute one task at a time, probably this is the order in which the tasks will be executed.
    The depth can change depending on the number of tasks identified for the execution plan.
    The immediate predecessors and successor can be a very valuable information to look at and should be used to validate the design. All predecessors and successors provide information to corroborate it even further. This can be accessed through clicking on any task and choosing the detail button. You will see all these information over there. As an alternate method, you could also use the 'All/immediate Predecessors' and 'All/immediate Successor' tabs that provide a flat view of the dependencies. Note that these tabs may have to retrieve a large amount of data, and hence will open in a query mode.
    SUMMARY: Irrespective of the depth, validate that
    - the task has 'Phase dependencies' that span all the ETL phases and has a 'Wait' option.
    - clicking on the particular task shows that it does not have any unexpected successors, and that its predecessors include all the tasks from all the phases it is supposed to wait for!
    Once you have inspected the above two you should be good to go, no matter what the depth says!
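    The 'All Predecessors' view mentioned above is conceptually the transitive closure of the 'Immediate Predecessors' relation, which a simple depth-first walk computes. The task names below are the same hypothetical ones used for illustration, not from any real plan.

    ```python
    # The "All Predecessors" tab is conceptually the transitive closure of the
    # "Immediate Predecessors" relation: walk the graph until no new tasks appear.

    def all_predecessors(task, immediate):
        """immediate maps task -> set of its direct predecessors."""
        seen = set()
        stack = list(immediate.get(task, ()))
        while stack:
            p = stack.pop()
            if p not in seen:
                seen.add(p)
                stack.extend(immediate.get(p, ()))
        return seen

    immediate = {
        "load_fact_orders": {"extract_orders", "load_dim_customer"},
        "load_dim_customer": {"extract_customers"},
    }
    print(all_predecessors("load_fact_orders", immediate))
    ```

    This is also why the tabs can retrieve a large amount of data: a task deep in the graph can transitively depend on most of the execution plan.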
    Hope this helps!

  • How to load the sequence file from the process model?

    Does anyone have an example process model that loads a sequence file? The out-of-the-box process models assume the sequence file is already loaded. I want the process model to identify the UUT type and load the appropriate sequence file based on that.

    Mark,
    A better solution to your question can be accomplished if you have TestStand 2.0.
    Within the entry point of a process model you can set the client sequence using Execution.ClientFile(). This is a new method in TestStand 2.0, specifically designed so that you can dynamically set the client sequence within the process model.
    Currently the entry points in the default process models (i.e. Test UUTs and Single Pass) are configured to Show Entry Point When Client File Window is Active. This means that you must open, and have active, a client sequence file before you can execute one of the entry points. You probably do not want this behavior if you are going to set the client file during entry point execution. To change it, go to the sequence properties of your entry point (while the sequence is open, select Edit>>Sequence Properties), switch to the Model tab of the entry point's property dialog box, and enable Show Entry Point For All Windows. The entry point will then appear whether or not you have an open sequence file active.
    You will need to add at least 3 steps to your entry point sequence, all of which use the ActiveX Automation Adapter. Remember that you MUST disable Record Results for any step you add to the process model. The 3 steps will perform the following tasks:
    1) Obtain a reference to the sequence file that you want to be the client sequence file, using the Engine.GetSequenceFileEx method. You will need a local variable (ActiveX data type) in which to store the sequence file reference.
    2) Set the client sequence file using the Execution.ClientFile property.
    3) Close the reference to the client sequence file in the Cleanup step group of your entry point sequence, using Engine.ReleaseSequenceFileEx.
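    The UUT-type lookup that step 1 relies on (decide which file to pass to Engine.GetSequenceFileEx) is ordinary dispatch logic, sketched below in Python. The UUT type names and file paths are entirely hypothetical; in a real station this mapping might live in a station global, an ini file, or a database.

    ```python
    # Hypothetical UUT-type -> client sequence file mapping; this is the decision
    # the entry point makes before calling Engine.GetSequenceFileEx. Names and
    # paths are illustrative only.

    SEQUENCE_FILES = {
        "BOARD_A": r"C:\TestFiles\board_a.seq",
        "BOARD_B": r"C:\TestFiles\board_b.seq",
    }

    def client_file_for(uut_type):
        """Return the sequence file path for a UUT type, or fail loudly."""
        try:
            return SEQUENCE_FILES[uut_type]
        except KeyError:
            raise ValueError("no sequence file registered for UUT type %r" % uut_type)

    print(client_file_for("BOARD_A"))  # C:\TestFiles\board_a.seq
    ```

    Failing loudly on an unknown UUT type is deliberate: silently falling back to a default sequence file would run the wrong tests against the wrong unit.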
    I am attaching a SequenceModel.seq file (the default process model in TestStand 2.0) in which we have modified the TestUUTs entry point as described above.
    Note that you'll be prompted to enter the path to your client sequence file. This is a message popup that you can delete; it was added for your review only.
    Good luck in your project,
    Azucena Perez
    National Instruments
    Attachments:
    sequentialmodel.seq ‏164 KB
