Writing an expression in OWB to generate a SEQ no.?

I want to generate a sequence number for the records within each unique (Cust_no, R_visit_num) combination.
For example
Cust_no   R_visit_num   Seq_vis_num
11        8             8.01
11        8             8.02
11        8             8.03
22        77            77.01
33        55            55.01
33        55            55.01
Let me know if you can help me generate seq_vis_num for all my records.
Thanks,
Maddy

Cust_no R_visit_num Seq_vis_num
11             8             8.01
11             8             8.02
11             8             8.03
22             77           77.01
33             55           55.01
33             55           55.01

Can you explain in more detail how the Seq_vis_num is generated?
For the (11, 8) pair it increases as 8.01, 8.02, 8.03, but for the (33, 55) pair it stays at 55.01.
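
Assuming each row within a (Cust_no, R_visit_num) pair should get its own suffix, one option is to number the rows with an analytic function in a source view or an Expression operator running in set-based mode. A sketch only; CUST_VISITS and VISIT_DATE are placeholder names for the source table and whatever column defines the row order:

SELECT cust_no,
       r_visit_num,
       r_visit_num
         + ROW_NUMBER() OVER (PARTITION BY cust_no, r_visit_num
                              ORDER BY visit_date) / 100 AS seq_vis_num
  FROM cust_visits;

If duplicate rows are instead meant to share the same suffix (as the two 33/55 rows suggest), DENSE_RANK over the distinguishing column would replace ROW_NUMBER.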

Similar Messages

  • Writing a java program for generating .pdf file with the data of MS-Excel .

    Hi all,
    My objective is to write a Java program that will generate a .pdf file after retrieving the data from an MS-Excel file.
    I used POI HSSF to read the data from MS-Excel and used iText to generate the .pdf file.
    My program is:
    /*
     * Created on Apr 13, 2005
     * TODO To change the template for this generated file go to
     * Window - Preferences - Java - Code Style - Code Templates
     */
    package forums;
    import java.io.*;
    import java.awt.Color;
    import com.lowagie.text.*;
    import com.lowagie.text.pdf.*;
    import com.lowagie.text.Font.*;
    import com.lowagie.text.pdf.MultiColumnText;
    import com.lowagie.text.Phrase.*;
    import net.sf.hibernate.mapping.Array;
    import org.apache.poi.hssf.*;
    import org.apache.poi.poifs.filesystem.*;
    import org.apache.poi.hssf.usermodel.*;
    import com.lowagie.text.Phrase.*;
    import java.util.Iterator;
    /**
     * Generates a simple 'Hello World' PDF file.
     * @author blowagie
     */
    public class pdfgenerator {
         /**
          * Generates a PDF file with the text 'Hello World'
          * @param args no arguments needed here
          */
         public static void main(String[] args) {
              System.out.println("Hello World");
              Rectangle pageSize = new Rectangle(916, 1592);
                        pageSize.setBackgroundColor(new java.awt.Color(0xFF, 0xFF, 0xDE));
              // step 1: creation of a document-object
              //Document document = new Document(pageSize);
              Document document = new Document(pageSize, 132, 164, 108, 108);
              try {
                   // step 2:
                   // we create a writer that listens to the document
                   // and directs a PDF-stream to a file
                   PdfWriter writer =PdfWriter.getInstance(document,new FileOutputStream("c:\\weeklystatus.pdf"));
                   writer.setEncryption(PdfWriter.STRENGTH128BITS, "Hello", "World", PdfWriter.AllowCopy | PdfWriter.AllowPrinting);
    //               step 3: we open the document
                             document.open();
                   Paragraph paragraph = new Paragraph("",new Font(Font.TIMES_ROMAN, 13, Font.BOLDITALIC, new Color(0, 0, 255)));
                   POIFSFileSystem pofilesystem=new POIFSFileSystem(new FileInputStream("D:\\ESM\\plans\\weekly report(31-01..04-02).xls"));
                   HSSFWorkbook hbook=new HSSFWorkbook(pofilesystem);
                   HSSFSheet hsheet=hbook.getSheetAt(0);//.createSheet();
                   Iterator rows = hsheet.rowIterator();
                                  while( rows.hasNext() ) {
                                       Phrase phrase=new Phrase();
                                       HSSFRow row = (HSSFRow) rows.next();
                                       //System.out.println( "Row #" + row.getRowNum());
                                       // Iterate over each cell in the row and print out the cell's content
                                       Iterator cells = row.cellIterator();
                                       while( cells.hasNext() ) {
                                            HSSFCell cell = (HSSFCell) cells.next();
                                            //System.out.println( "Cell #" + cell.getCellNum() );
                                            switch ( cell.getCellType() ) {
                                                 case HSSFCell.CELL_TYPE_STRING:
                                                 String stringcell=cell.getStringCellValue ()+" ";
                                                 writer.setSpaceCharRatio(PdfWriter.NO_SPACE_CHAR_RATIO);
                                                 phrase.add(stringcell);
                                            // document.add(new Phrase(string));
                                                      System.out.print( cell.getStringCellValue () );
                                                      break;
                                                 case HSSFCell.CELL_TYPE_FORMULA:
                                                           String stringdate=cell.getCellFormula()+" ";
                                                           writer.setSpaceCharRatio(PdfWriter.NO_SPACE_CHAR_RATIO);
                                                           phrase.add(stringdate);
                                                 System.out.print( cell.getCellFormula() );
                                                           break;
                                                 case HSSFCell.CELL_TYPE_NUMERIC:
                                                 String string=String.valueOf(cell.getNumericCellValue())+" ";
                                                      writer.setSpaceCharRatio(PdfWriter.NO_SPACE_CHAR_RATIO);
                                                      phrase.add(string);
                                                      System.out.print( cell.getNumericCellValue() );
                                                      break;
                                                  default:
                                                       // unsupported cell type
                                                       break;
                                             } // end switch on cell type
                                        } // end while over cells in the row
                                        document.add(new Paragraph(phrase));
                                        document.add(new Paragraph("\n \n \n"));
                                   } // end while over rows
                    // step 4: we add a paragraph to the document
              } catch (DocumentException de) {
                   System.err.println(de.getMessage());
              } catch (IOException ioe) {
                   System.err.println(ioe.getMessage());
               }
               // step 5: we close the document
               document.close();
          }
     }
    My Input from MS-Excel file is:
         Planning and Tracking Template for Interns                                                                 
         Name of the Intern     N.Kesavulu Reddy                                                            
         Project Name     Enterprise Sales and Marketing                                                            
         Description     Estimated Effort in Hrs     Planned/Replanned          Actual          Actual Effort in Hrs     Complexity     Priority     LOC written new & modified     % work completion     Status     Rework     Remarks
    S.No               Start Date     End Date     Start Date     End Date                                        
    1     setup the configuration          31/01/2005     1/2/2005     31/01/2005     1/2/2005                                        
    2     Deploying an application through Tapestry, Spring, Hibernate          2/2/2005     2/2/2005     2/2/2005     2/2/2005                                        
    3     Gone through Componentization and Cxprice application          3/2/2005     3/2/2005     3/2/2005     3/2/2005                                        
    4     Attend the sessions(tapestry,spring, hibernate), QBA          4/2/2005     4/2/2005     4/2/2005     4/2/2005                                        
         The output I'm getting in the .pdf file is:
    Planning and Tracking Template for Interns
    N.Kesavulu Reddy Name of the Intern
    Enterprise Sales and Marketing Project Name
    Remarks Rework Status % work completion LOC written new & modified Priority
    Complexity Actual Effort in Hrs Actual Planned/Replanned Estimated Effort in Hrs Description
    End Date Start Date End Date Start Date S.No
    38354.0 31/01/2005 38354.0 31/01/2005 setup the configuration 1.0
    38385.0 38385.0 38385.0 38385.0 Deploying an application through Tapestry, Spring, Hibernate
    2.0
    38413.0 38413.0 38413.0 38413.0 Gone through Componentization and Cxprice application
    3.0
    38444.0 38444.0 38444.0 38444.0 Attend the sessions(tapestry,spring, hibernate), QBA 4.0
    The issues I'm facing are:
    When it is reading a row from MS-Excel, it is writing to the .pdf file from the last cell to the first cell (the 2nd cell in 1st place, the 1st cell in 2nd place; e.g. if the row has two cells with data such as "Name of the Intern: Kesavulu Reddy", it is written to the .pdf file as "Kesavulu Reddy Name of the Intern").
    The second issue is:
    It is not recognizing the date format; it is recognizing the date in the first row only.
    Please tell me what the solution for this is.
    Regards
    [email protected]

    Don't double post your question:
    http://forum.java.sun.com/thread.jspa?threadID=617605&messageID=3450899#3450899
    /Kaj

  • OWB Code generator - does not generate consistent code in every deployment

    Hi,
    I have a mapping to load dimension WM_USERS_ADVISORS_DIM.
    Every time I generate the code for this mapping, the generated code is different from the previous version. Hence the query optimization applied at the database level does not work.
    Following are 2 different queries generated for the same mapping.
    Also, in the target load order for this dimension, values like USERS_ADVISORS_STG are populated automatically, whereas I have not declared this table/dimension.
    I am unable to remove this from the load order. Every time the code gets generated, this _STG suffix is added to each level in the dimension.
    Please suggest how to have consistent ETL code generated every time the ETL is deployed.
    INSERT /*+ APPEND PARALLEL ("USERS_ADVISORS_STG") */ INTO "OWB$ *USERS_ADVISORS__1FFE2A* " "USERS_ADVISORS_STG" ( "PERSON_ID", "ADVISOR_PERSON_ID", "INVITE_CLIENT_ID", "IS_PRIMARY_ADVISOR", "SCHEMA_NAME", "DUMMY_LEVEL_KEY", "DESCRIPTION", "DUMMY_VALUE") (SELECT "INGRP1". "PERSON_ID" "PERSON_ID$10", "INGRP1". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$10", "INGRP1". "INVITE_CLIENT_ID" "INVITE_CLIENT_ID$8", "INGRP1". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$10", "INGRP1". "SCHEMA_NAME" "SCHEMA_NAME$9", "INGRP2". "DUMMY_LEVEL_KEY" "DUMMY_LEVEL_KEY$8", "INGRP2". "DESCRIPTION" "DESCRIPTION$12", "INGRP1". "LOOKUP$$$_1_DUMMY_VALUE" "LOOKUP$$$_1_DUMMY_VALUE$6" FROM ( SELECT "LOOKUP_INPUT_SUBQUERY$5". "USERS_ADVISORS_KEY$6" "USERS_ADVISORS_KEY", "LOOKUP_INPUT_SUBQUERY$5". "PERSON_ID$11" "PERSON_ID", "LOOKUP_INPUT_SUBQUERY$5". "ADVISOR_PERSON_ID$11" "ADVISOR_PERSON_ID", "LOOKUP_INPUT_SUBQUERY$5". "INVITE_CLIENT_ID$9" "INVITE_CLIENT_ID", "LOOKUP_INPUT_SUBQUERY$5". "IS_PRIMARY_ADVISOR$11" "IS_PRIMARY_ADVISOR", "LOOKUP_INPUT_SUBQUERY$5". "SCHEMA_NAME$10" "SCHEMA_NAME", "LOOKUP_INPUT_SUBQUERY$5". "LOOKUP$$$_1_DUMMY_VALUE$7" "LOOKUP$$$_1_DUMMY_VALUE" FROM (SELECT "DEDUP_SRC_0$2". "USERS_ADVISORS_KEY$7" "USERS_ADVISORS_KEY$6", "DEDUP_SRC_0$2". "PERSON_ID$12" "PERSON_ID$11", "DEDUP_SRC_0$2". "ADVISOR_PERSON_ID$12" "ADVISOR_PERSON_ID$11", "DEDUP_SRC_0$2". "INVITE_CLIENT_ID$10" "INVITE_CLIENT_ID$9", "DEDUP_SRC_0$2". "IS_PRIMARY_ADVISOR$12" "IS_PRIMARY_ADVISOR$11", "DEDUP_SRC_0$2". "SCHEMA_NAME$11" "SCHEMA_NAME$10", "DEDUP_SRC_0$2". "LOOKUP$$$_1_DUMMY_VALUE$8" "LOOKUP$$$_1_DUMMY_VALUE$7" FROM (SELECT CAST (NULL AS NUMERIC) "USERS_ADVISORS_KEY$7", "AGG_INPUT$2". "PERSON_ID$13" "PERSON_ID$12", "AGG_INPUT$2". "ADVISOR_PERSON_ID$13" "ADVISOR_PERSON_ID$12", MIN( "AGG_INPUT$2". "INVITE_CLIENT_ID$11") "INVITE_CLIENT_ID$10", MIN( "AGG_INPUT$2". "IS_PRIMARY_ADVISOR$13") "IS_PRIMARY_ADVISOR$12", "AGG_INPUT$2". "SCHEMA_NAME$12" "SCHEMA_NAME$11", MIN( "AGG_INPUT$2". "LOOKUP$$$_1_DUMMY_VALUE$9") "LOOKUP$$$_1_DUMMY_VALUE$8" FROM (SELECT ( :B4 ) "USERS_ADVISORS_KEY$8", "USER_ADVISOR". "PERSON_ID" "PERSON_ID$13", "USER_ADVISOR". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$13", "INVITE_CLIENTS". "INVITE_CLIENT_ID" "INVITE_CLIENT_ID$11", "USER_ADVISOR". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$13", ( :B3 ) "SCHEMA_NAME$12", 0 "LOOKUP$$$_1_DUMMY_VALUE$9" FROM ( SELECT "SET_OPERATION$2". "PERSON_ID$14" "PERSON_ID", "SET_OPERATION$2". "ADVISOR_PERSON_ID$14" "ADVISOR_PERSON_ID", "SET_OPERATION$2". "CREATED_ON$2" "CREATED_ON", "SET_OPERATION$2". "IS_PRIMARY_ADVISOR$14" "IS_PRIMARY_ADVISOR", "SET_OPERATION$2". "MOD_TAG$2" "MOD_TAG", "SET_OPERATION$2". "MODIFIED_ON$2" "MODIFIED_ON" FROM (SELECT "PERSON_ID" "PERSON_ID$14", "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$14", "CREATED_ON" "CREATED_ON$2", "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$14", "MOD_TAG" "MOD_TAG$2", "MODIFIED_ON" "MODIFIED_ON$2" FROM (SELECT "USERS_ADVISORS". "PERSON_ID" "PERSON_ID", "USERS_ADVISORS". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "USERS_ADVISORS". "CREATED_ON" "CREATED_ON", "USERS_ADVISORS". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR", "USERS_ADVISORS". "MOD_TAG" "MOD_TAG", "USERS_ADVISORS". "MODIFIED_ON" "MODIFIED_ON" FROM "ADVIEWPROD". "USERS_ADVISORS" "USERS_ADVISORS" UNION SELECT "USERS_ADVISORS_DLOG". "PERSON_ID" "PERSON_ID", "USERS_ADVISORS_DLOG". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "USERS_ADVISORS_DLOG". "CREATED_ON" "CREATED_ON", "USERS_ADVISORS_DLOG". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR", "USERS_ADVISORS_DLOG". 
"MOD_TAG" "MOD_TAG", "USERS_ADVISORS_DLOG". "MODIFIED_ON" "MODIFIED_ON" FROM "ADVIEWPROD". "USERS_ADVISORS_DLOG" "USERS_ADVISORS_DLOG") ) "SET_OPERATION$2" ) "USER_ADVISOR" LEFT OUTER JOIN ( SELECT "INVITE_CLIENTS". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "INVITE_CLIENTS". "PERSON_ID" "PERSON_ID", "INVITE_CLIENTS". "INVITE_CLIENT_ID" "INVITE_CLIENT_ID" FROM "ADVIEWPROD". "INVITE_CLIENTS" "INVITE_CLIENTS" ) "INVITE_CLIENTS" ON ( (( "USER_ADVISOR". "PERSON_ID" = "INVITE_CLIENTS". "PERSON_ID" )) AND (( "USER_ADVISOR". "ADVISOR_PERSON_ID" = "INVITE_CLIENTS". "ADVISOR_PERSON_ID" )) ) WHERE ( ( "USER_ADVISOR". "CREATED_ON" BETWEEN (TO_DATE( ( :B2 ) , 'mm/dd/yyyy hh24:mi:ss') ) AND (TO_DATE( ( :B1 ) , 'mm/dd/yyyy hh24:mi:ss') ) OR "USER_ADVISOR". "MODIFIED_ON" BETWEEN (TO_DATE( ( :B2 ) , 'mm/dd/yyyy hh24:mi:ss') ) AND (TO_DATE( ( :B1 ) , 'mm/dd/yyyy hh24:mi:ss') ) ) ) ) "AGG_INPUT$2" GROUP BY "AGG_INPUT$2". "PERSON_ID$13", "AGG_INPUT$2". "ADVISOR_PERSON_ID$13", "AGG_INPUT$2". "SCHEMA_NAME$12" ) "DEDUP_SRC_0$2" ) "LOOKUP_INPUT_SUBQUERY$5" WHERE ( (NOT ( "LOOKUP_INPUT_SUBQUERY$5". "PERSON_ID$11" IS NULL AND "LOOKUP_INPUT_SUBQUERY$5". "ADVISOR_PERSON_ID$11" IS NULL AND "LOOKUP_INPUT_SUBQUERY$5". "SCHEMA_NAME$10" IS NULL)) ) ) "INGRP1" LEFT OUTER JOIN ( SELECT "DUMMY_LEVEL_0". "DUMMY_LEVEL_KEY" "DUMMY_LEVEL_KEY", "DUMMY_LEVEL_0". "DUMMY_VALUE" "DUMMY_VALUE", "DUMMY_LEVEL_0". "DESCRIPTION" "DESCRIPTION" FROM "WM_USERS_ADVISORS_DIM" "DUMMY_LEVEL_0" WHERE ( "DUMMY_LEVEL_0". "DIMENSION_KEY" = "DUMMY_LEVEL_0". "DUMMY_LEVEL_KEY" ) AND ( "DUMMY_LEVEL_0". "DUMMY_LEVEL_KEY" IS NOT NULL ) ) "INGRP2" ON ( ( "INGRP1". "LOOKUP$$$_1_DUMMY_VALUE" = "INGRP2". "DUMMY_VALUE" ) ) )
    INSERT /*+ APPEND PARALLEL ("USERS_ADVISORS_STG") */ INTO "OWB$ *USERS_ADVISORS__1FEA66* " "USERS_ADVISORS_STG" ( "PERSON_ID", "ADVISOR_PERSON_ID", "INVITE_CLIENT_ID", "IS_PRIMARY_ADVISOR", "SCHEMA_NAME", "DUMMY_LEVEL_KEY", "DESCRIPTION", "DUMMY_VALUE") (SELECT "INGRP1". "PERSON_ID" "PERSON_ID$10", "INGRP1". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$10", "INGRP1". "INVITE_CLIENT_ID" "INVITE_CLIENT_ID$8", "INGRP1". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$10", "INGRP1". "SCHEMA_NAME" "SCHEMA_NAME$9", "INGRP2". "DUMMY_LEVEL_KEY" "DUMMY_LEVEL_KEY$8", "INGRP2". "DESCRIPTION" "DESCRIPTION$12", "INGRP1". "LOOKUP$$$_1_DUMMY_VALUE" "LOOKUP$$$_1_DUMMY_VALUE$6" FROM ( SELECT "LOOKUP_INPUT_SUBQUERY$5". "USERS_ADVISORS_KEY$6" "USERS_ADVISORS_KEY", "LOOKUP_INPUT_SUBQUERY$5". "PERSON_ID$11" "PERSON_ID", "LOOKUP_INPUT_SUBQUERY$5". "ADVISOR_PERSON_ID$11" "ADVISOR_PERSON_ID", "LOOKUP_INPUT_SUBQUERY$5". "INVITE_CLIENT_ID$9" "INVITE_CLIENT_ID", "LOOKUP_INPUT_SUBQUERY$5". "IS_PRIMARY_ADVISOR$11" "IS_PRIMARY_ADVISOR", "LOOKUP_INPUT_SUBQUERY$5". "SCHEMA_NAME$10" "SCHEMA_NAME", "LOOKUP_INPUT_SUBQUERY$5". "LOOKUP$$$_1_DUMMY_VALUE$7" "LOOKUP$$$_1_DUMMY_VALUE" FROM (SELECT "DEDUP_SRC_0$2". "USERS_ADVISORS_KEY$7" "USERS_ADVISORS_KEY$6", "DEDUP_SRC_0$2". "PERSON_ID$12" "PERSON_ID$11", "DEDUP_SRC_0$2". "ADVISOR_PERSON_ID$12" "ADVISOR_PERSON_ID$11", "DEDUP_SRC_0$2". "INVITE_CLIENT_ID$10" "INVITE_CLIENT_ID$9", "DEDUP_SRC_0$2". "IS_PRIMARY_ADVISOR$12" "IS_PRIMARY_ADVISOR$11", "DEDUP_SRC_0$2". "SCHEMA_NAME$11" "SCHEMA_NAME$10", "DEDUP_SRC_0$2". "LOOKUP$$$_1_DUMMY_VALUE$8" "LOOKUP$$$_1_DUMMY_VALUE$7" FROM (SELECT CAST (NULL AS NUMERIC) "USERS_ADVISORS_KEY$7", "AGG_INPUT$2". "PERSON_ID$13" "PERSON_ID$12", "AGG_INPUT$2". "ADVISOR_PERSON_ID$13" "ADVISOR_PERSON_ID$12", MIN( "AGG_INPUT$2". "INVITE_CLIENT_ID$11") "INVITE_CLIENT_ID$10", MIN( "AGG_INPUT$2". "IS_PRIMARY_ADVISOR$13") "IS_PRIMARY_ADVISOR$12", "AGG_INPUT$2". "SCHEMA_NAME$12" "SCHEMA_NAME$11", MIN( "AGG_INPUT$2". "LOOKUP$$$_1_DUMMY_VALUE$9") "LOOKUP$$$_1_DUMMY_VALUE$8" FROM (SELECT ( :B4 ) "USERS_ADVISORS_KEY$8", "USER_ADVISOR". "PERSON_ID" "PERSON_ID$13", "USER_ADVISOR". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$13", "INVITE_CLIENTS". "INVITE_CLIENT_ID" "INVITE_CLIENT_ID$11", "USER_ADVISOR". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$13", ( :B3 ) "SCHEMA_NAME$12", 0 "LOOKUP$$$_1_DUMMY_VALUE$9" FROM ( SELECT "SET_OPERATION$2". "PERSON_ID$14" "PERSON_ID", "SET_OPERATION$2". "ADVISOR_PERSON_ID$14" "ADVISOR_PERSON_ID", "SET_OPERATION$2". "CREATED_ON$2" "CREATED_ON", "SET_OPERATION$2". "IS_PRIMARY_ADVISOR$14" "IS_PRIMARY_ADVISOR", "SET_OPERATION$2". "MOD_TAG$2" "MOD_TAG", "SET_OPERATION$2". "MODIFIED_ON$2" "MODIFIED_ON" FROM (SELECT "PERSON_ID" "PERSON_ID$14", "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$14", "CREATED_ON" "CREATED_ON$2", "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$14", "MOD_TAG" "MOD_TAG$2", "MODIFIED_ON" "MODIFIED_ON$2" FROM (SELECT "USERS_ADVISORS". "PERSON_ID" "PERSON_ID", "USERS_ADVISORS". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "USERS_ADVISORS". "CREATED_ON" "CREATED_ON", "USERS_ADVISORS". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR", "USERS_ADVISORS". "MOD_TAG" "MOD_TAG", "USERS_ADVISORS". "MODIFIED_ON" "MODIFIED_ON" FROM "ADVIEWPROD". "USERS_ADVISORS" "USERS_ADVISORS" UNION SELECT "USERS_ADVISORS_DLOG". "PERSON_ID" "PERSON_ID", "USERS_ADVISORS_DLOG". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "USERS_ADVISORS_DLOG". "CREATED_ON" "CREATED_ON", "USERS_ADVISORS_DLOG". "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR", "USERS_ADVISORS_DLOG". 
"MOD_TAG" "MOD_TAG", "USERS_ADVISORS_DLOG". "MODIFIED_ON" "MODIFIED_ON" FROM "ADVIEWPROD". "USERS_ADVISORS_DLOG" "USERS_ADVISORS_DLOG") ) "SET_OPERATION$2" ) "USER_ADVISOR" LEFT OUTER JOIN ( SELECT "INVITE_CLIENTS". "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "INVITE_CLIENTS". "PERSON_ID" "PERSON_ID", "INVITE_CLIENTS". "INVITE_CLIENT_ID" "INVITE_CLIENT_ID" FROM "ADVIEWPROD". "INVITE_CLIENTS" "INVITE_CLIENTS" ) "INVITE_CLIENTS" ON ( (( "USER_ADVISOR". "PERSON_ID" = "INVITE_CLIENTS". "PERSON_ID" )) AND (( "USER_ADVISOR". "ADVISOR_PERSON_ID" = "INVITE_CLIENTS". "ADVISOR_PERSON_ID" )) ) WHERE ( ( "USER_ADVISOR". "CREATED_ON" BETWEEN (TO_DATE( ( :B2 ) , 'mm/dd/yyyy hh24:mi:ss') ) AND (TO_DATE( ( :B1 ) , 'mm/dd/yyyy hh24:mi:ss') ) OR "USER_ADVISOR". "MODIFIED_ON" BETWEEN (TO_DATE( ( :B2 ) , 'mm/dd/yyyy hh24:mi:ss') ) AND (TO_DATE( ( :B1 ) , 'mm/dd/yyyy hh24:mi:ss') ) ) ) ) "AGG_INPUT$2" GROUP BY "AGG_INPUT$2". "PERSON_ID$13", "AGG_INPUT$2". "ADVISOR_PERSON_ID$13", "AGG_INPUT$2". "SCHEMA_NAME$12" ) "DEDUP_SRC_0$2" ) "LOOKUP_INPUT_SUBQUERY$5" WHERE ( (NOT ( "LOOKUP_INPUT_SUBQUERY$5". "PERSON_ID$11" IS NULL AND "LOOKUP_INPUT_SUBQUERY$5". "ADVISOR_PERSON_ID$11" IS NULL AND "LOOKUP_INPUT_SUBQUERY$5". "SCHEMA_NAME$10" IS NULL)) ) ) "INGRP1" LEFT OUTER JOIN ( SELECT "DUMMY_LEVEL_0". "DUMMY_LEVEL_KEY" "DUMMY_LEVEL_KEY", "DUMMY_LEVEL_0". "DUMMY_VALUE" "DUMMY_VALUE", "DUMMY_LEVEL_0". "DESCRIPTION" "DESCRIPTION" FROM "WM_USERS_ADVISORS_DIM" "DUMMY_LEVEL_0" WHERE ( "DUMMY_LEVEL_0". "DIMENSION_KEY" = "DUMMY_LEVEL_0". "DUMMY_LEVEL_KEY" ) AND ( "DUMMY_LEVEL_0". "DUMMY_LEVEL_KEY" IS NOT NULL ) ) "INGRP2" ON ( ( "INGRP1". "LOOKUP$$$_1_DUMMY_VALUE" = "INGRP2". "DUMMY_VALUE" ) ) )
    Thanks in advance
    Meg
    Edited by: Meg on Jan 4, 2012 5:33 PM

    Hello,
    These wrappers were separated into different templates.  What you would need to do is to run the templates that you need. You can find these templates in the C:\Program Files\National Instruments\MATRIXx\mx_71.4\case\ACC\templates folder.
    Hope this helps.
    Ricardo S.
    National Instruments

  • Subsequent Lookup Operators causes OWB to generate undeployable mappings

    Hi,
    I am using OWB 11gR2.
    I am trying to create a fact loading mapping based on a Data Vault.
    That gives me an error during deployment.
    Validating and generating do not give me an error.
    So in trying to load the fact I tie various Data Vault tables together with a joiner operator.
    All tables except the driving table are set to outer join role.
    The output fields are tied to various lookup operator objects.
    The output from those is tied to the target fact table.
    All of this goes well; this mapping is deployable, and upon generating one can see the statements.
    The problem arises when I try to insert another lookup operator between the output of one lookup operator and the fact.
    That mapping does not give a validation error, and generating the intermediate code doesn't error either.
    Deploying doesn't work, however; it complains of an incorrect identifier.
    Inspecting the generated intermediate code does reveal the problem:
    OWB appends all of the join clauses from the first joiner, as a WHERE clause, to the statement used for loading the fact.
    When you look at the first joiner though, it just displays nicely all of the left outer join statements.
    There is no WHERE clause to be found on this first joiner.
    It is only added at the fact stage, at exactly the same place where the left outer joins from the first joiner are.
    Questions:
    Is there a limit to the number of subsequent lookup operators one can use? Two cannot be it, I hope...
    Is there a patch for this ?
    Other remarks: I have noticed that when I use more than 8 lookup operators on my canvas, the lookup conditions get corrupted.
    It becomes something like lookup.fieldname = null instead of lookup.fieldname = input.fieldname.
    When this happens I have to correct every lookup operator on the mapping.
    Is this a known error?
    Hope somebody has an answer for my first problem.
    rgrds Mike
    Edited by: MichaelR64 on 16-jan-2011 23:39

    Hi,
    I did some further testing:
    This happens when there is an unequal number of lookups "attached" to the driving table.
    What I mean is that if there is one lookup attached to a port of the driving table, then the next port that has a lookup cannot have two (serially connected) lookups.
    Or put the other way: if a port has two lookups (serially connected), then the error disappears when all the other ports with lookups also have two lookups (serially connected, that is).
    At first I thought it had something to do with the joiner used in the first stage.
    Replacing that with a view didn't solve it.
    In fact, using a lookup where multiple-row output is specified causes OWB to create this with an outer join.
    It is this outer join part that is being mangled by OWB, as described before.
    If anyone can comment on this..

  • Regular expression in OWB

    Hi
    Can the following be done in OWB using operators?
    SELECT DISTINCT column1
      FROM (SELECT DISTINCT REGEXP_SUBSTR (description,
                                            '[^[:blank:][:punct:] ]+',
                                            1,
                                            LEVEL) column1,
                            LEVEL
              FROM (SELECT FREE_TEXT DESCRIPTION FROM TEST_TABLE)
            CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (description, '[[:alnum:]]')) + 1
             ORDER BY LEVEL)
     WHERE column1 IS NOT NULL
    TEST_TABLE is a table with the following data:
    FREE_TEXT
    THE MAN IS WIELDING A SPADE
    A%WOMAN,WAS SEEN.
    1 THE'GIRL IS LAUGH$ING
    The output from the SQL is:
    Column1
    GIRL
    1
    SPADE
    WIELDING
    IS
    THE
    MAN
    WAS
    WOMAN
    SEEN
    ING
    LAUGH
    A
    Cheers
    Birdy

    Hi David
    I was trying to improve the performance of the above query after tips from some forums.
    The new query is
    SELECT REGEXP_SUBSTR ( free_text, '[^[:blank:][:punct:] ]+', 1, lvl)
      FROM Test_table,
           (SELECT LEVEL lvl
              FROM (SELECT MAX (LENGTH (REGEXP_REPLACE ( free_text, '[[:alnum:]]'))) mx
                      FROM Test_table)
            CONNECT BY LEVEL <= mx + 1)
     WHERE lvl - 1 <= LENGTH (REGEXP_REPLACE ( free_text, '[[:alnum:]]'))
     ORDER BY lvl;
    The only issue I am facing is how to get the max length for the regular expression.
    I am not sure whether this will be possible in a mapping, as we cannot have aggregations in a filter condition...
    I even tried to first get the value MAX (LENGTH (REGEXP_REPLACE ( free_text, '[[:alnum:]]'))) in an aggregator and then use it further, but the data flow changes as I have to go through the aggregator.
    Not sure how to go about this now. I guess I will have to embed it in a function.
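    If it helps, one option might be to wrap the MAX in a standalone function and call that from the filter or expression; a sketch only (the function name is made up), reusing the same formula as above:

    CREATE OR REPLACE FUNCTION get_max_token_count RETURN NUMBER IS
       -- Sketch: returns the maximum token count over TEST_TABLE, so the
       -- value can be used without an aggregator in the data flow.
       v_max NUMBER;
    BEGIN
       SELECT MAX (LENGTH (REGEXP_REPLACE (free_text, '[[:alnum:]]'))) + 1
         INTO v_max
         FROM test_table;
       RETURN v_max;
    END get_max_token_count;
    /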
    The old query mapping works but is way too slow.
    Birdy

  • I need to walk before I run, but I can't help it. Where can I learn more about writing expressions?

    I find myself daydreaming about cool ways to create, and though I am an engineer at heart, my education is in business management.  I speak fluently in three oral languages, but only babble incoherently in one or two programming languages.
    I want to learn how to convert my visions into functions that Siena recognizes.  In addition to the awesome references that you guys are providing here online, are there books I can buy that explain how to write better functions and expressions?
    - Something that allows me to progress at my pace?
    Thank you again
    Aaron
    aka Jagged Rocks

    Hi Aaron, I second your hopes and vision. I too will buy some books or videos etc. to get better at this. But it's at an early stage, still a beta, so I think the best place is actually here. There are already threads with several example apps,
    questions answered, and it looks like a nice little community here already!
    Also, you can join a (little) Facebook group: https://www.facebook.com/groups/projectsiena/
    There are also several blogs popping up...
    So, as long as Microsoft keeps up the pressure on this AppTool, I think we can dig into it and learn a lot!  :)
    While having fun doing it!
    Best regards Terje F - Norway

  • Writing a class using random generator in bluej

    Hello
    I'm trying to write a class for a deck of cards. I'm using a random generator but I don't know how to write the instance variable.
    I have to make 4 suits (hearts, clubs, spades, diamonds) and 13 face values. I know how to randomly generate numbers, like if I were making a slot machine to give me 3 numbers in a range from 0-10; that's just numbers. How do I randomly generate values of 1-13 and have it output a random suit? Also, how do I make it say if it's a jack, king or queen? Do I need a constructor, or how would I make the card with the face value of 13, suit hearts, and have the card be a queen?
    Before jumping down my throat about this being a homework assignment: yes it is, but for the step I'm seeking help on there is no example for this type of generating.
    Thanks for any help
    Rewind

    Well, this is far from bullet-proof, but I think gets the basic idea across. This does sampling with replacement; if you wanted to do something like shuffle a deck of cards you'll need a smarter approach than this.
    import java.util.*;

    public class RandomCards {
      public static void main(String[] args) {
        Suit suit = new Suit();
        for (int i = 0; i < 10; i++) {
          System.out.println(suit.nextSuit());
        }
      }

      private static class Suit {
        public static final String HEART = "Heart";
        public static final String DIAMOND = "Diamond";
        public static final String SPADE = "Spade";
        public static final String CLUB = "Club";
        private final String[] SUITS = { HEART, DIAMOND, SPADE, CLUB };
        private Gen suitGen = new Gen(0, 3);

        public String nextSuit() {
          return SUITS[suitGen.nextInt()];
        }
      }

      private static class Gen {
        private int floor, ceiling;
        private Random rand;

        public Gen(int floor, int ceiling) {
          this.floor = floor;
          this.ceiling = ceiling;
          rand = new Random();
        }

        // returns a value between floor and ceiling, inclusive
        // (the +1 is needed so the last suit can actually be drawn)
        public int nextInt() {
          return rand.nextInt(ceiling - floor + 1) + floor;
        }
      }
    }

  • Getting Error in OWB while generating

    Hi Every one
    I am getting the following error while trying to compile my mapping in OWB. Is there an experienced person who can guide me on how to resolve this issue?
    Thanks in advance
    RB
    Cannot invoke method handleSelectionChanged in class oracle.wh.ui.tsmapping.GenerateEventHandler

    I'm getting a very similar error when attempting to delete an object from my mapping in OWB 10.2.0.3:
    Cannot invoke method handleObjectBeforeDeleted in class oracle.wh.ui.tsmapping.MappingGraph
    I'm assuming this is some type of error with the OWB Java client itself, but I'm with the OP in hoping someone out there has some idea where to start looking for answers regarding these types of errors. Thanks.

  • Writing transformations for OWB/OWF to implement bespoke error handling

    I have implemented mappings which perform a lookup on a translation table; if the lookup is not found, a suitable value is output to a column, e.g. 'ERROR' is written to the output column on an intermediate table, e.g. xxx_temp. The intermediate table is then split into two streams and output to two tables: those with errors to xxx_errors and those that are valid to xxx_out. What I want to know is how to write a transformation which will count the number of errors in xxx_errors and return 'success' if the count is 0 and 'error' if it is > 0, because OWB Process Flows only seem to handle three-state events. Will this example function work, or are there other parameters that I must include in the function before Oracle Workflow can process it correctly?
    FUNCTION etl_md_errors
    RETURN NUMBER IS
    lv_status NUMBER(22) := 0;
    lv_count NUMBER(5) := 0;
    /*
    ** Cursor to count the number of errors in the mappings run
    ** for the Operating Unit Master Data.
    */
    CURSOR lcur_count_errors IS
    SELECT c1.err_cnt + c2.err_cnt + c3.err_cnt
    FROM (
    SELECT Count(Rowid) err_cnt
    FROM t_mdo_ce_errors
    ) c1,
    (
    SELECT Count(Rowid) err_cnt
    FROM t_mdo_cstobj_errors
    ) c2,
    (
    SELECT Count(Rowid) err_cnt
    FROM t_mdo_vntr_errors
    ) c3;
    BEGIN
    OPEN lcur_count_errors;
    FETCH lcur_count_errors
    INTO lv_count;
    CLOSE lcur_count_errors;
    IF lv_count <= 0 THEN
    lv_status := 1;
    ELSE
    lv_status := 3;
    END IF;
    RETURN lv_status;
    EXCEPTION
    WHEN OTHERS THEN
    RETURN 3;
    END etl_md_errors;
    I cannot test the Process Flow deployment as Oracle Workflow 2.6.0 has been installed on an Oracle 8i schema, but the Location registration version pulldown only has one entry, 2.6.2.
    Cheers,
    Phil Thomson

    Hi,
    You seem to have missed the point of the posting.
    I am asking how to write a transformation which can be used in Process Flow to determine whether any lookup validation errors have occurred, e.g. to determine which e-mail to send to the system administrator's email account. I was not asking if the method of processing/validating the transactions needed revamping .. it is what the client asked for and it's what they have tested and approved. I'll give an explanation of the background to the processing.
    a) we are reading in transactions from Country Data Warehouses (SAP BW) which are placed as tables in country data source schemas.
    b) we want to consolidate the transactions so that they can be loaded into a European wide Data Warehouse (SAP BW) and these are placed as tables in the target country schemas ... which are consolidated in a global schema using views and dblinks.
    c) as the Country Data Warehouses are using their own set of reference/look up values the transactions have the country data warehouse reference codes translated to equivalent european reference codes. Therefore in each country target schema we have sets of mapping/xref table(s) which translates one country column value to the equivalent european column value, e.g. area_ctry and area_eur. Area_ctry is the area reference code in the Country Data Warehouse and area_eur is the area reference code to be used in the European Data Warehouse.
    d) when a transaction record has a reference value that does not have an equivalent European reference value, we want to flag that column and record as an error. As there are several column values to be translated, you do not want to flag only the 1st validation error encountered; you want to validate the entire record, and you also want to validate the entire set of transaction records. Users get a bit miffed if you fail the entire batch of transactions on the first column validation or first invalid record, they correct it, then find there are other records with errors ... the user has to repeat until there are no error records.
    e) that is the reason we use the Key Lookup operator on the mapping/xref table, with the transaction column's country value in the input group and as the value for the Lookup Condition. On the Key Lookup operator's output group we set the default value for the looked-up column to 'ERROR', so if no lookup match is found, 'ERROR' will be output as the looked-up value.
    f) the Split operator is then used to identify error records, e.g. area_eur = 'ERROR' OR region_eur = 'ERROR' etc., and to identify valid records, e.g. area_eur != 'ERROR' AND region_eur != 'ERROR' etc. The error records are output to their own table and the valid records are output to their own table.
    g) if there are true SQL errors, e.g. tablespace exceeded or referenced procedure/function state changed, then these will be handled by OWB and should be viewed via the OWB audit browser.
    As stated in my post, what is the template for PL/SQL functions that can be used as transformations in Process Flows with their 'SUCCESS', 'WARNING' and 'ERROR' transition conditions?
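    For anyone reading later, the rough shape I have in mind is the sketch below; the 1/3 return values follow the convention in the function above, and whether 2 maps to 'WARNING' in your OWB/OWF release is an assumption to verify:

    CREATE OR REPLACE FUNCTION check_md_load_status RETURN NUMBER IS
       -- Sketch only: 1 = SUCCESS, 2 = WARNING, 3 = ERROR transitions.
       lv_err_cnt NUMBER := 0;
    BEGIN
       SELECT (SELECT COUNT(*) FROM t_mdo_ce_errors)
            + (SELECT COUNT(*) FROM t_mdo_cstobj_errors)
            + (SELECT COUNT(*) FROM t_mdo_vntr_errors)
         INTO lv_err_cnt
         FROM dual;
       IF lv_err_cnt = 0 THEN
          RETURN 1;   -- SUCCESS transition
       ELSE
          RETURN 3;   -- ERROR transition
       END IF;
    EXCEPTION
       WHEN OTHERS THEN
          RETURN 3;
    END check_md_load_status;
    /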
    Hope the above explanation helps,
    Cheers,
    Phil Thomson

  • Capture auto generating seqs

    Hi,
    Is there a way with FCP to auto create sequences during DV capture based on the timestamp of the clips on the tape? I mean one sequence per take.
    Thanks,
    fred.
      Mac OS X (10.4.3)  

    If your timecode is as it should be, there will be no breaks.
    I should have mentioned - If you are shooting HDV and capturing in FCP5, you have the option of either capturing continuous clips or having FCP break them into discrete shots. This is due to the highly problematic nature of the 15 frame GOP format of HDV.
    have fun.
    x

  • Sequence Generator problem in OWB Mapping

    Hi,
    I am using a Sequence Generator in my dimension mapping, and OWB implements the mapping via an Oracle MERGE statement.
    However, when I execute the mapping, the sequence value gets incremented even when only updates and no inserts take place during the execution.
    I am using the sequence number only for inserts and not for updates in the mapping.
    Is there a way to avoid this situation or a work around to this problem? This is very urgent and any help in this area would be greatly appreciated.
    Thanks

    Hi
    I also have the same problem: I lose the seq numbers when there are only updates, and I do not want to lose thousands of sequence numbers. I tried with bulk size = 1 as well as row-based-only execution, but no result.
    The thing which solved this problem is to have an outer join between source and target, then use a splitter to find the new and updated records (by checking the join key being null on the target table side), then use a function to generate the seq number for the records that need one. But doing all this does not seem elegant from a performance point of view. I do not want to join source and target due to performance issues.
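    For what it's worth, the function step can be a thin wrapper around the sequence so that NEXTVAL is only drawn for records that actually need a new key; a sketch only (the sequence and function names are placeholders):

    CREATE OR REPLACE FUNCTION next_dim_key (p_existing_key IN NUMBER)
       RETURN NUMBER
    IS
       v_key NUMBER;
    BEGIN
       -- Existing record: keep its key, so the sequence is not incremented.
       IF p_existing_key IS NOT NULL THEN
          RETURN p_existing_key;
       END IF;
       -- New record: draw the next value from the dimension's sequence.
       SELECT dim_key_seq.NEXTVAL INTO v_key FROM dual;
       RETURN v_key;
    END next_dim_key;
    /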
    Could you please clarify your approach? Are you using Source MINUS Target and Source INTERSECT Target? What will you do if there is no one-to-one match between the source and the target?
    Thanks

  • OWB Error for using Decode in Expression

    Debug code deployment messages:
    LINE 4558 ,COLUMN 71:
    PLS-00204: function or pseudo-column 'DECODE' may be used inside a SQL statement only
    LINE 4558 ,COLUMN 17:
    PL/SQL: Statement ignored
    End debug code deployment messages
    DBG1012: Debug deployment errors, can't run debug code.

    What I have experienced is that if you use DECODE inside the expression, OWB cannot validate it.
    But it will execute perfectly.
    Use a CASE statement instead so that you can validate and debug it.
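    For example, a DECODE such as the one below can be rewritten as a CASE expression, which is valid in both SQL and PL/SQL, so the expression validates and can be debugged (the column and literal names are made up):

    -- DECODE form: allowed only inside a SQL statement
    DECODE (status_cd, 'A', 'Active', 'I', 'Inactive', 'Unknown')

    -- Equivalent CASE form: valid in SQL and PL/SQL
    CASE status_cd
       WHEN 'A' THEN 'Active'
       WHEN 'I' THEN 'Inactive'
       ELSE 'Unknown'
    END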
    Cheers
    Nawneet

  • Can't generate Merge statement in OWB

    Hi All,
    I'm facing a very strange problem in OWB. The version is 9.0.3. I have a target table which has its properties set to INSERT/UPDATE, and I have set one of its fields to be the one used for matching. However when generating the code, OWB is generating only an INSERT statement and not a MERGE as I'd expect. The table is very simple; it has 4 columns, 1 of which I will be using to match on. Their properties are:
          Load on Insert   Load on Update   Use for Matching
    F1    Y                Y                N
    F2    Y                Y                N
    F3    Y                Y                N
    F4    N                N                Y
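    With these settings I would expect OWB to generate something along the lines of the sketch below (illustrative only; the target and source names are placeholders, and F4 is left out of the insert list because Load on Insert is N):

    MERGE INTO target_table t
    USING (SELECT f1, f2, f3, f4 FROM source_query) s
       ON (t.f4 = s.f4)
     WHEN MATCHED THEN
       UPDATE SET t.f1 = s.f1, t.f2 = s.f2, t.f3 = s.f3
     WHEN NOT MATCHED THEN
       INSERT (f1, f2, f3) VALUES (s.f1, s.f2, s.f3);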
    The module in which this mapping was created is pointing to an Oracle 8i/9i DB. All other mappings, when created in a separate module within this project, CAN generate the MERGE... strange!
    I have used the MERGE statement successfully on many other mappings, but at this client site I'm having problems.
    Any help would me most appreciated.
    Take care
    Mitesh

    Thanks for your reply. My target table does not have any constraints on it, and that is why in the properties in OWB I have selected the value "no constraints" for "match by constraints". I have then set one of the 4 fields to:
    Insert:Use for loading = No
    Update:Use for loading = No
    Update:Use for matching = Yes
    The other three columns are set to:
    Insert:Use for loading = Yes
    Update:Use for loading = Yes
    Update:Use for matching = No
    The table Loading Type is Insert/Update but I'm still not getting the MERGE to work. I really am not sure if this is a Java bug or not. I have deleted the mapping and created it from scratch, but I get the same problem. The next port of call I think would be to create a new module within my project, but it's just strange how other modules in this same project work with MERGE.
    Thanks
    Mitesh

  • Populating Ranks in table through OWB

    I am having trouble using DENSE_RANK in OWB (I am running the mapping in set-based mode).
    I want to select data from one table, rank the rows, then insert the rows with the rank into another table. But I get an error when I validate the expression and the mapping fails when I try to execute it.
    When trying to validate the expression, I receive this error:
    Line 1, Col 14:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    . ( * % & = - + ; < / > at in is mod not rem
    <an exponent (**)> <> or != or ~= >= <= <> and or like
    between ||
    The expression looks like this...
    DENSE_RANK() OVER (PARTITION BY INGRP1.CUSTOMER_ID
    ORDER BY INGRP1.SEQ)
    When I generate the intermediate result and run it in SQL Plus, it runs without errors.
    How can I use RANK in an expression and insert the rank into a table in OWB?
    Thanks!

    Sorry, the RANK and other analytic function syntax is not currently supported. It has to do with the fact that analytic functions are SQL only; they are not valid in the PL/SQL environment. Today, if you put an analytic function into an expression, ignore the validation error and force the map to run in set-based mode, OWB will generate the correct SQL statement. But unfortunately it will not be deployable. The next release of OWB removes this limitation.
    Until then, the workaround is to use the analytical functions in views that serve as sources to OWB maps.
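    For example, a source view along the lines of the sketch below (using the columns from the expression above; the underlying table name is a placeholder) can be imported into OWB and used like any other source:

    -- Sketch only: compute the rank in a database view, then map the
    -- view's columns straight to the target table in OWB.
    CREATE OR REPLACE VIEW v_customer_visit_rank AS
    SELECT customer_id,
           seq,
           DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY seq) AS visit_rank
      FROM source_table;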
    Nikolai

  • Express White Paper

    Thanks to all those who responded requesting the Express white paper. I
    received an overwhelming response. I was expecting a dozen or so
    requests - I received over 80. Apparently there is strong demand for
    lessons learned about working with Express.
    The paper is in-progress. Everyone that requested it will get it when
    it is ready, hopefully before end of September. BTW, my paper on
    Express and the Object/Relational Problem will be published by Dr.
    Dobb's Journal about that time also - the publisher tells me that the
    November issue will be on the newsstands by end of September. The DDJ
    article is a review of the basics of Express, what it does, and how
    you develop with it. It also includes our early experiences with it
    (article was written end of April and reflects almost three month's
    experience with Express at that time). The article also goes briefly
    over our concept for a rapid process specific to Express. The white
    paper will have much more to say on the topic.
    Several people thanked me for my generosity in offering a free white
    paper and sharing our experiences with the Forte' community. There is
    nothing generous about the offer: it is unabashed self-promotion in the
    finest tradition of American crass commercialism. We're a consulting
    company. We sell our knowledge and experience. If, after you get the
    white paper, you would like to retain us on a consulting assignment we
    would be very grateful, and you will have a chance to pay us back for
    our generosity. If not, maybe you can reciprocate and share your
    experiences.
    Now to the subject of this posting: why am I posting this now? Well,
    for one thing, I received several responses that said something
    like: " we've tried Express and we were disappointed with ...", or "we've
    been using it and have been frustrated with ...", or "we've evaluated it
    and we had difficulties with ...". I started writing reply notes to each
    of the individuals who expressed those negative experiences, but when I
    reviewed what I wrote, it sounded like a Dear Abby column, with the replies
    sounding like: Dear Disappointed, or Dear Frustrated, or Dear With
    Difficulties. I decided I'll just post one note for all those who've
    had negative experiences, or who are just starting to use/evaluate Express
    and are likely to have similar experiences. Hence this. I also felt that
    I should give people somewhat of an overview of what's coming in the
    white paper while they're waiting to get the finished product.
    Perhaps initial difficulties with Express is a problem of unrealistic
    expectations. I always try to remember Mick Jagger's words. Mick, as
    everyone knows, is one of the great software minds of the 20th century:
    "You can't always get what you want ...". You must determine if you're
    getting what you need.
    Seriously, I have been working on the object/relational "impedance
    mismatch problem" for close to ten years now (since 1987 when I
    developed an Ada/SQL binding for the US Department of Defense). I have seen
    many solutions, and have developed several myself for C,C++,Ada and
    for Oracle, Sybase, Informix, and Ingres. I find Express to be one
    of the most elegant solutions to that thorny problem. If you look at it from
    that point of view alone, it's very hard to fail to be impressed. If you're
    expecting Express (or PowerBuilder 5, or any other solution) to be yet another
    Silver bullet to slay the development monster then you'll be disappointed.
    Software development is hard, will continue to be hard, and will continue
    to get more complex. Anything that can help us eliminate or reduce what
    Frederick Brooks calls "accidental complexity", and design around "essential
    complexity", will help. Forte' and Express definitely do that. Paul
    Butterworth's paper on "Managing the New Complexities of Application
    Development", shows how Forte' has solved many of the development/deployment
    problems. If you haven't read it, I highly recommend it. If you have, I would
    recommend a re-read if you've forgotten why you chose Forte' to begin with, or
    if you yourself did not participate in making that choice. The Express user's
    manual, "Using Forte' Express", shows how Express extends Forte' to reduce
    the complexity of developing RDBMS-based systems.
    To get an appreciation for what Express does for you, try a simple
    experiment : spec out a GUI/RDBMS application, say the order entry application
    that comes with Express as a tutorial. Do it without Express. Then do it with
    Express. Try to make the application as complete as possible - it must
    implement all your business rules and have all the behaviors that you desire.
    Relax a bit about look and feel. Also remember to keep the experiment fair.
    As part of your application development come up with a framework and an
    architecture that the next application will use. Your non-Express application
    also must be as extensible and modifiable as Express allows an Express
    project. Record the development time of both. If you can beat Express in
    development time, then you're a Forte' development Guru and people should be
    beating a path to your door.
    Lest anyone think I am a cheerleader for Express, I want to mention that
    I have some very strong disagreements with several aspects of the
    Express architecture. One major problem I find with it is conceptual.
    The Express relational encapsulation has added a great deal of accidental
    complexity, i.e complexity that is not inherently there because
    of the nature of the problem. It arises because of design or implementation
    choices. Express represents each database table with three classes (there are
    actually six classes per table, three of which are just derived placeholders
    to contain customizations, so we'll ignore them for this discussion). For a
    table EMP, Express produces three base classes: an EMPClass, an EMPQuery
    class, and an EMPMgr class. The EMPClass is quite understandable. It
    encapsulates the table's data. The EMPMgr class is somewhat understandable,
    it encapsulates operations that manage the table's data as it crosses the
    interfaces. But why do we need one class per table? A manager should manage
    several things, not one thing. That leads us to EMPQuery, the encapsulation
    that I have most difficulty with: creating a query class for each table. That
    is definitely the wrong abstraction.
    If you consider that, in general, a SQL query is multi-table:
    select t1.col1, t2.col2, t3.col3, ...
    from t1, t2, t3, ..
    where <expressions on t1.col1, t2.col2, ...>
    order by <expressions on t1.col1, t2.col2, ...>
    you'll see that the abstraction here is a query tree across many tables,
    many columns, and a large variety of expressions - single and multi-table. To
    attempt to encapsulate that in objects that are basically single table objects
    will produce a great deal of accidental complexity. The design choice of one
    query class per table makes writing one-table queries simple, but writing
    multi-table queries awkward.
    The Express architecture would be much simpler if there is a QueryTree
    class for all tables. Better yet, leave the representation of queries as
    text strings - ANSI or Forte' SQL on the client side, and DBMS-specific on the
    server side. A great deal of complexity in doing query customizations will
    be reduced. You will lose some type checking that the current design has, but
    hey, you can't always get what you want. When you have several hundred tables
    in your database and Express generates six classes per table, you'll see
    that the number of classes generated is excessive. When you try to design a
    general query modification scheme you'll realize how awkward multi-table joins
    are to do via the Express BusinessQuery class. Last week I was developing a
    general design for row-level security; the query structure drove me crazy, and
    I ended up catching the generated SQLText and inserting the security
    constraints.
    Now back to the Dear Abby column: If you're unhappy because of performance
    issues, try to isolate the reason for the poor performance. This is not easy
    in 3-tier applications. Don't be too quick to blame the bad performance on
    Express. Do you have a non-Express benchmark application that does the
    same thing and outperforms Express? Don't be too quick to blame Forte'
    either. Do you have a non-Forte' benchmark, that does the same things
    and outperforms Forte'? The operative words here are "does the same
    things". A VB application that issues a SQL Select is not a benchmark.
    Forte' allows you to instrument applications to study performance
    bottlenecks. Find out where your hot spots are and try to do some design
    work. If the Express architecture gets in the way, it's time for feedback
    to Express developers.
    Performance issues, particularly in 3-tier client/server systems are
    multi-faceted and complex. There are many interactions of database
    issues, interaction of the database with TOOL language issues, locking,
    caching, timing of asynchronous events, shared objects, distributed objects,
    remote references, memory allocation/deallocation, message traffic,
    copying across partitions, etc. etc. that have to be considered. There
    was an interesting discussion just a few days ago on multi-threading
    on the client side, and blocking in DBMS APIs. Issues like that can
    keep you bogged down for days. I have worked on several performance efforts
    on triage tuning teams and swat re-design teams, where several hundred man
    hours were dedicated to performance and tuning of c/s systems. Big and
    complex topic. What I would advise about performance is what Tom Gilb says:
    "(1) don't worry about it, and (2) don't worry about it yet" - assuming of
    course that you have a rational design, and a sound framework. Many sins of
    design are committed in the name of performance. Anyway, enough
    of the harangue about premature considerations of performance. Bottom
    line is : once you get your functionality, instrument, measure, and tune. If
    your architecture was sound, you won't have to re-design for performance, you
    would've designed it in.
    On our project the system is so large we are subsumed with rapid process
    issues: how can we get this monster finished on time? without having to
    expand the team to several times its size, and without having to spend more
    than we can afford? The upcoming white paper's focus will be on the rapid
    process. Probably at a later date, we'll do another paper on performance
    issues with Express.
    Another reason you may be unhappy with Express is if you perceive that
    it is the wrong tool for your application - but was chosen by
    corporate mandate. If your application does not involve an RDBMS (say
    real-time process control), then Express is obviously not for you. It may
    also appear that Express is not suitable for your application if your usage
    of the RDBMS is marginal, but your application logic is quite complex (in our
    case the application has many AI aspects to it, a rules-based database, and
    many interconnected patterns of rules, and rich behaviors). If you find
    you're spending too much time doing things outside Express, fighting
    Express, or doing way too many customizations, then Express may
    not have been the right choice for your application.
    Don't think, however, that Express is only for those applications that
    maintain relational base tables. You can use a relational database to
    store tables other than base tables (state transition tables, dialog
    support tables, views, and other kinds of virtual tables). To make use
    of Express's powerful application generating capabilities you can use
    tables created for the sole purpose of supporting an Express
    application model. The table is in essence, a state transition
    diagram. The Express application model creates rows in this
    virtual table while the dialog is in-progress. You can use insert and
    update triggers in your SQL engine to do the real thing to your base
    tables. This trick is among some I'll detail in the white paper.
    Another reason some people may be unhappy with Express may be methodology
    tension between those who use behavior-driven methodologies (Booch, Jacobson,
    Wirfs-Brock), and those who favor data-driven methodologies (OMT, Coad). If
    you're in the first camp, you'll probably feel that the modeling done via
    Express is not adequate. You'd probably say "that's not an object model!
    that's an ERD". You would be half right - the Express business model shows
    only containment and association relationships. It does not document "uses"
    relationships, so it really can't be considered a full object-model. Granted;
    but once you make that realization, your reaction should be one of joy, not
    sadness. This is a brilliant reduction in the amount of modeling that needs
    to be done since most MIS systems are dominated by their data-model, not their
    behavior model (see Arthur Riel's Design Heuristics). Behavior-based methodologies,
    with their documentation of use-cases and class behavior will tend to be analysis
    overkill for most MIS projects. For some OOA/OOD practitioners, going back to a
    data-centered process may be unpalatable. For those folks my advice would be to try to
    look at the business model/application models as meta-models. Take the
    generated classes and produce a full object model if you wish. Document your
    domain classes in your favorite CASE tool. By all means document
    domain-pertinent behavior and use-cases, they will help you test. But do
    appreciate the productivity gain produced by the reduction of modeling load
    that Express data-centered approach gives you. Your detailed
    behavior-based, use-case model may be a luxury you can't afford.
    If the methodology clash manifests itself politically in your
    organization, where you have the OO purists pooh-pooh a data centered
    approach, then you have my sympathies. My best advice is to cool it on the
    methodology religion front. If you have a product to deliver, you can't
    afford it. Also keep in mind that even if your modeling work is reduced by
    adopting a data-centered Express process, you'll still have ample
    opportunities to fully utilize your OOD expertise when it comes time to add
    functionality or improve performance of the entire application as a whole.
    There will still be processes where Express may not be expressive enough. Those
    processes whose behavior is so rich and intricate that you cannot find a
    data-based trick to model them with, you'd have to do outside Express. These
    should be rare and the exception not the rule in MIS systems, however.
    Does that exhaust the list of reasons of why people may be
    disappointed in Express? Probably not. Undoubtedly Express reduces your
    degrees of freedom, and constrains your choices, but many times "jail
    liberates". More reasons? I've heard some complaints about repository
    corruption problems. I'm not aware that we've had those, or that it is
    something due to Express. I'll check with our Forte' system manager. If we
    have, they must not have been show stoppers, and our system manager must
    have dealt with them quickly enough that the developers did not notice much.
    Until you get the full paper in a few weeks, I'll leave you with some
    thoughts about Express, and OO development in general:
    1. Learn about the concept of "Good enough" in software
    engineering. Here are some sources:
    - Ed Yourdon: Read Ed Yourdon's article in the last issue of Byte,
    titled "When Good Enough is Best". One of Yourdon's tips in the
    article: "It's the Process, Stupid!"
    Don't take "good enough" to mean that development with Express
    requires you to lower your expectations, or lower your
    standards. You must tune the concept of "good enough" to your
    acceptable standards.
    - Arthur Riel: Read Arthur Riel's great book "Object-Oriented Design
    Heuristics". Riel shows that there are many problems with no optimal
    solutions. This is particularly true of systems that
    are not purely object-oriented: systems that interface with
    non-object-oriented "legacy" systems, which is exactly what Express
    is. Also, Riel's discussion of behavior-based vs. data-based
    methodologies is very illuminating.
    2. Don't obsess about look and feel. That's where Express is most
    constraining. If you have unique look and feel requirements,
    and look and feel is paramount to you, save yourself some pain and
    choose another tool, or sing along with Mick: you can't always get what
    you want ...
    3. Be clear about what rapid development really means. An excellent
    resource is the book by Steve McConnell of Microsoft: "Rapid
    Development - Taming Wild Software Schedules". A thick book, but the
    chapters on best practices, and the tens of case studies are great. The
    book shows clearly the differences between evolutionary
    delivery, and staged delivery. It shows the differences between
    evolutionary prototyping, throwaway prototyping, user-interface
    prototyping, and demonstration prototyping and the appropriate uses
    and risks of each. In our white paper we advocate a life cycle
    approach that is basically evolutionary prototyping, with evolutionary
    delivery, and occasional use of throwaway prototypes. We don't advocate
    using Express for demonstration prototyping.
    4. Realize that Express is maturing along with the product you're
    developing. If you don't have deep philosophical objections to the
    Express framework and architecture, then most of
    the concerns with Express would be temporary details that will be
    smoothed as Express, and Forte', mature. How long did we wait for
    Windows to mature? Let's be fair to the Express developers.
    5. The main keys to success in Express are not rocket science (I
    worry now about having hyped up people's expectations myself). The
    major keys to success revolve around management issues, not
    technical issues: expectations management, process management,
    and customizations management.
    The full paper includes the design and implementation of a Customizations
    Management System that allows you to plan customizations needed and to
    inventory customizations completed. It automates the process of
    extracting the customizations completed from the repository and stores
    them in a relational database. A customizations browser then allows
    management to plan and prioritize the implementation of customizations. It
    allows developers to study the completed customizations and to reuse code,
    design, or concepts to implement further customizations. Managing
    customizations is absolutely essential for success in Express. The paper
    will also detail a rapid process that is "Express friendly".
    I'm glad there was such a big response to the white paper offer. Now I have
    to sit down and write it!
    Nabil Hijazi, Optimum Solutions, Inc.
    [email protected]
    Phone: (703) 435-3530   Fax: (703) 435-9212
    201 Elden Street, #501, Herndon, VA 22070
    ================================================
    You can't always get what you want.
    But if you try sometime, you might find,
    you get what you need. Mick Jagger.
    ------------------------------------------------

    [email protected] wrote:
    >
    A few comments on Nabil Hijazi's observations...
    Nabil Hijazi writes...
    One major problem I find with it is conceptual. The Express relational
    encapsulation has added a great deal of accidental complexity, i.e., complexity
    that is not inherently there because of the nature of the problem. It arises
    because of design or implementation choices.
    Paul Krinsky comments...
    Anyone who has used NeXT's Enterprise Object Framework (EOF) will be at home
    with Express's architecture; it is very similar. NeXT has been around for a
    while and has gone through a lot. They originally started with DBKit to solve
    the persistence problem. Basically it wrappered the database libraries. EOF was
    created when it became clear that the DBKit approach wouldn't work. EOF has
    EO's (Enterprise Objects), EOQuery, EOController, etc. that do pretty much what
    BusinessClass, BusinessQuery and BusinessMgr do. I'm not sure if Forte hired
    people with NeXT experience, but it would be interesting to find out if both
    companies came up with the same architecture independently. What are the
    chances?
    Nabil Hijazi writes...
    The design choice of one query class per table makes writing one-table queries
    simple, but writing multi-table queries awkward.
    Paul Krinsky comments...
    I don't think BusinessQuery is too bad once you get used to it. Multi-table
    queries are pretty easy if you use the foreign attributes Express provides to
    build connected queries. One feature I miss from EOF is the EOFault. An EOFault
    stands in for an object to reduce the overhead of retrieving everything an
    object has a pointer to. For example, a retrieve on customer that contains an
    array of orders would bring in EOFaults to stand in for the orders. When one of
    the orders was referenced, EOF would produce a fault (hence the name) and go
    and get the required record. Of course you could force EOF to bring the real
    data and not use EOFaults if you wanted (if the chance were high that you would
    need it). This feature saved a lot of memory and increased the speed of
    retrieval while still providing transparent access from the viewpoint of the
    developer. Another cool feature was uniquing. EOF kept track of the EOs it
    retrieved for a client. So if two windows both retrieved Customer X, EOF would
    realize this and point the 2nd window at the copy already in memory. This
    avoided having multiple copies of the same object in memory and provided
    everyone with the most current changes.
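    (For readers who have not seen EOF: both ideas - a fault that lazily fetches on first touch, and uniquing via an identity map - are easy to sketch. The sketch below is in Java with invented class names; it is not the EOF or Express API, just the concepts.)

    // Sketch of the two EOF ideas described above: a "fault" that defers the
    // real fetch until first use, and an identity map that "uniques" objects
    // so a given key maps to one in-memory instance. Names are invented.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    class Order {
        final int orderNo;
        Order(int orderNo) { this.orderNo = orderNo; }
    }

    // A fault: stands in for an order until somebody actually touches it.
    class OrderFault {
        private final Supplier<Order> fetch;   // e.g. a database retrieval
        private Order real;                    // filled in when the fault "fires"
        OrderFault(Supplier<Order> fetch) { this.fetch = fetch; }
        Order get() {
            if (real == null) real = fetch.get();
            return real;
        }
    }

    // Uniquing: hand back the already-loaded instance instead of a second copy.
    class ObjectStore {
        private final Map<Integer, Order> loaded = new HashMap<>();
        Order orderFor(int orderNo, Supplier<Order> fetch) {
            return loaded.computeIfAbsent(orderNo, k -> fetch.get());
        }
    }

    public class UniquingDemo {
        public static void main(String[] args) {
            ObjectStore store = new ObjectStore();
            // Two "windows" ask for order 42; the second call reuses the first copy.
            Order a = store.orderFor(42, () -> new Order(42));
            Order b = store.orderFor(42, () -> new Order(42));
            System.out.println(a == b);   // true - a single in-memory instance
        }
    }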
    Nabil Hijazi writes...
    The Express architecture would be much simpler if there were one QueryTree
    class for all tables. Better yet, leave the representation of queries as text
    strings - ANSI or Forte' SQL on the client side, and DBMS-specific on the
    server side. A great deal of complexity in doing query customizations will be
    reduced. You will lose some type checking that the current design has, but hey,
    you can't always get what you want. When you have several hundred tables in
    your database and Express generates six classes per table, you'll see the
    number of classes generated as excessive. When you try to design a general
    query modification scheme you'll realize how awkward multi-table joins are to
    do via the Express BusinessQuery class. Last week I was developing a general
    design for row-level security; the query structure drove me crazy, and I ended up
    catching the generated SQLText and inserting the security constraints.
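    (As a rough sketch of that kind of post-processing - intercepting the generated SQL text and splicing in a security predicate - something like the following works. The helper and the predicate are invented for the illustration; this is not the actual Express SQLText hook.)

    // Rough sketch: take SQL text a tool has generated and append a row-level
    // security predicate before it is sent to the server. Invented helper.
    public class RowLevelSecurity {

        // Adds the constraint to an existing WHERE clause, or creates one.
        // (A real implementation would splice before ORDER BY / GROUP BY.)
        static String addSecurityPredicate(String generatedSql, String predicate) {
            if (generatedSql.toUpperCase().contains(" WHERE ")) {
                return generatedSql + " AND (" + predicate + ")";
            }
            return generatedSql + " WHERE " + predicate;
        }

        public static void main(String[] args) {
            String generated = "SELECT cust_no, visit_num FROM customer_visits";
            String secured = addSecurityPredicate(generated,
                    "region_id IN (SELECT region_id FROM user_regions WHERE user_id = :user)");
            System.out.println(secured);
        }
    }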
    Paul Krinsky comments...
    I like the fact that Express manages the mapping to the database. I can change
    the underlying database schema and all my queries still work. When the DBAs
    inform me that I'm not following their naming standard (remove all vowels
    except for 207 "standard" abbreviations that somehow got blessed, then compress
    to 8 characters using a bit compression algorithm that NASA would be proud of -
    am I ranting?) it lets me conform without having to deal with it except in the
    business model. It's nice to have a layer of abstraction.
    I'm not a big fan of having all the generated classes either. I think it's a
    necessary evil because of TOOL. NeXT uses Objective-C which is much more
    dynamic in nature (more in common with Smalltalk than C). Their business model
    can be defined on the fly and changed at runtime. It's pretty powerful but you
    always have the speed vs. size tradeoff. The BusinessQuery is a nice way to
    send only what you need to the server in a format that isn't too difficult
    to translate to SQL but not so close to SQL that you couldn't rip out the
    backend and use the same interface to communicate with something other than a
    relational database.
    With any tool you have to understand its strengths and weaknesses. Express is
    a 1.0 product. Given that, I think they have done a great job. The biggest
    request I have is that Express move away from being so focused on UI and
    database access and focus more on the BusinessClasses. For example, why are the
    Validate and NewObject methods not on the BusinessClass? I understand their
    importance in the Window classes but they should really delegate most of the
    work to the BusinessClass. Otherwise you end up with most of the logic in the
    UI and a 2-tier application. One of the first things we did was to extend the
    Window classes to delegate validation, etc. to the classes they display.

    Paul,
    This is a very good point. After reviewing all the customizations we have done on
    our Express project, (BTW, I work with Nabil) I found that we have not done any
    business service customizations except for database row level security. We could
    have easily moved validation to the business classes. Actually, Express gives you examples
    for this. They recommend customizing the insert and update methods to apply validation.
    You could simply add your own validate method on the business class and have the insert,
    update, or the window call it. This is actually much more object-oriented than coding
    validation into the window classes (for the OO purists out there!).
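    (A minimal sketch of that delegation, with invented class names rather than the classes Express actually generates: the window asks the business object to validate itself before the insert or update goes ahead.)

    // Sketch only: business rules live on the business class; the window (or
    // the insert/update method) just delegates to it. Names are invented.
    class ValidationException extends Exception {
        ValidationException(String msg) { super(msg); }
    }

    class CustomerVisit {                       // stand-in for a business class
        int custNo;
        double visitNum;

        void validate() throws ValidationException {
            if (custNo <= 0) throw new ValidationException("cust_no must be positive");
            if (visitNum <= 0) throw new ValidationException("visit_num must be positive");
        }
    }

    class CustomerVisitWindow {                 // stand-in for a window class
        void onInsert(CustomerVisit visit) throws ValidationException {
            visit.validate();                   // delegate, then persist
            // ... perform the actual insert here ...
        }
    }

    public class ValidationDemo {
        public static void main(String[] args) {
            CustomerVisit visit = new CustomerVisit();
            visit.custNo = 11;
            visit.visitNum = 8.01;
            try {
                new CustomerVisitWindow().onInsert(visit);
                System.out.println("visit accepted");
            } catch (ValidationException e) {
                System.out.println("rejected: " + e.getMessage());
            }
        }
    }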
    Robert Crisafulli
    AMISYS Managed Care Solutions Inc.
    (301) 838-7540
    >
    I look forward to reading the white paper on Express. I would encourage anyone
    else to post similar documents. If anyone is interested, I can dig up some
    stuff I wrote on EOF's architecture. It's a good source for enhancement
    requests if nothing else! If anyone has used other persistence frameworks I
    think the group would benefit from their experiences.
    Paul Krinsky
    Price Waterhouse LLC
    Management Consulting Group
