Class Classification

Hi,
I'd like to ask how you would know that a class characteristic of a material is linked to QM.
I have a problem in QS61: I created a class characteristic in batch class "FIFO" for material A, and that characteristic becomes a default value in QS61.

All batch characteristics using catalogs and numerics and linked to MICs should appear in QS61. 
If you've created a characteristic for use in batch search strategies (FIFO sounds like one of these), you might want to create a separate batch search strategy class.
In addition, SAP already provides several standard characteristics that can be used in a FIFO strategy so you shouldn't have to create your own.  Look at the characteristics that start with LOBM_*.
One of these is batch creation date which is often used in FIFO search strategies for sorting.
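As a rough illustration (plain Python, not SAP code, with made-up batch names and dates), a FIFO search strategy effectively sorts the candidate batches ascending by their creation date, so the oldest batch is proposed first:

```python
from datetime import date

# Hypothetical batch records; in SAP the date would come from a standard
# LOBM_* characteristic such as the batch creation date mentioned above.
batches = [
    {"batch": "B003", "created": date(2011, 7, 3)},
    {"batch": "B001", "created": date(2011, 7, 1)},
    {"batch": "B002", "created": date(2011, 7, 2)},
]

# FIFO: sort ascending by creation date so the oldest batch comes first.
fifo_order = sorted(batches, key=lambda b: b["created"])
print([b["batch"] for b in fifo_order])  # oldest first
```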
Craig

Similar Messages

  • What is class & classifications

    Hi
    My requirement is to create tables the same as standard tables, so they told me to create tables using class & classifications.
    I don't know what class & classifications are. Please help me create tables using this, and also tell me the tcode for that.

    If people ask me to do something I've never done, I tell them so and ask if they can explain what it is...

  • Bapi for class classification

    Hi,
    How do I upload the class classification details in CA01 using BAPI_ROUTING_CREATE?
    Thanks in advance.
    Edited by: Arun on Jul 18, 2011 1:46 PM

    Jessica,
    Try looking at the following: BAPI_BUPA_FRG0040_CREATE
    Note that if you already have maintained Sales Classifications then you'll have to use the _CHANGE version.
    It has an importing structure of type BAPIBUS1006040_SALESCLASS, so you could, for example, populate its fields as shown below and pass the structure when calling the above BAPI:
    DATA ls_salesclass TYPE BAPIBUS1006040_SALESCLASS.
    * Fields of BAPIBUS1006040_SALESCLASS:
    * IS_COMPETITOR, IS_PROSPECT, IS_CONSUMER, IS_CUSTOMER, CUSTOMER_SINCE,
    * IS_COD_CUSTOMER, INDUSTRY, IS_RENTED, ACCOUNT_GROUP, NIELSEN_ID,
    * CLASSIFIC, ATTRIBUTE, ATTRIB_2 ... ATTRIB_10
    ls_salesclass-is_customer   = 'X'.
    ls_salesclass-account_group = '0001'.
    Regards,
    Brad

  • Classes & Classifications

    Hi,
    What are classes & classifications, and what is their relationship to items or fixed assets? Should I upload classes and classifications before an initial stock upload?
    Edited by: Csaba Szommer on Oct 13, 2011 8:34 AM

    Hi,
    Classes are groups of characteristics, and we assign these classes in the Classification tab of the material master.
    Example 1:
    While creating a release procedure we define classes, and these classes have characteristics such as plant value and purchasing org value. Suppose these two are the characteristics: we create the class, define these two characteristics inside it, and then use this class in the release procedure configuration.
    Example 2:
    In batch management, suppose we have created a characteristic SLED and assigned it to a class. Any material that we want to identify by its shelf-life expiration date can then be assigned to that class in the Classification tab of the material master.
    So classes are combinations of similar types of properties, and for that we need to define the properties.
    While uploading the initial stock of these SLED materials, the system asks for the date of manufacture for each material so that it can calculate the shelf-life date.
    Hope you get it.
    Thanks
    Thanks

  • Class & classification in master table

    Hi
    I want to create a dealer master table. How do I create a classification class for the dealer master table? Please help me.

    Hello, I know the question is really old, but maybe the answer is useful for someone else.
    1) Go into KLAH and enter the Class Type and/or the Class.
    Here you get the internal class number.
    2) Then go to KSSK and enter the internal class number and the object.
    The object is the material or customer in the following format:
    - material number (0000000000000XXXXX)
    - customer number (XXXXXXXXX)
    Done.
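A small sketch (Python, illustrative only, with a made-up material number) of the zero-padding convention for the material object key described above:

```python
def material_object_key(matnr: str) -> str:
    # Material object keys are left-padded with zeros to the full
    # 18-character MATNR length, per the post above.
    return matnr.zfill(18)

print(material_object_key("100234"))
```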

  • Revenue classes,classification rules in incentive compensation

    Hi all,
    I am new to incentive compensation. Can anyone explain revenue classes and classification rules and their role in calculating commission?
    I also want to know how siblings are useful, and about input and output expressions.
    Thank you,
    Sandeep Reddy

    Hello Danielle,
    Don't get so frustrated. Cheer up...
    A slight modification to your setup:
    Define Joe's Pizza Shop as a revenue class and also in the hierarchy.
    Capture these fields in the rule set:
    158785 - child value
    252155 - child value
    I guess you are pulling your orders into the CN_COMMISSION_HEADERS_ALL table.
    Go to Tables, search for this table, click on Columns, and from the top dropdown select "Classification".
    Then enable that column and give it a user name such as 'Location', or something that contains the above values.
    That way you can see the 'Location' column when you define your rule set; feed it the two values above so that the transactions can be captured/classified on the basis of those values for calculation.
    Cheers,
    A.P.

  • Change Characteristic values in Material class Classification

    Hi,
    Is there a standard program to change characteristic values in material classification?
    Thanks,
    Shiva

    Thanks for your help.
    Points will be awarded.
    Shiva

  • Transfer of Classes( Classification data) - through ALE

    Hi experts,
    I am trying to transfer classes from a 3.1i system to an ECC 6.0 system through ALE distribution.
    Almost all the classes got transferred, but for a few of the classes the status of the IDoc remains at 64 (waiting to be processed).
    When I scheduled the processing of these idocs in background mode they result in a dump.
    Dump:
    The exception, which is assigned to class 'CX_SY_ARITHMETIC_OVERFLOW', was not
      caught in
    procedure "MAINTAIN_ALLO_VALUES_VIA_API" "(FORM)", nor was it propagated by a
      RAISING clause.
    This dump happens because the sy-index value crosses 2147483647, which is the maximum allowed value.
    I could not find any relevant notes for it. Does anyone know what is causing this issue and how to correct it?
    FYI - There are around 3000 values attached to a characteristic which is attached to the class.
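For reference, 2147483647 is the ceiling of a 4-byte signed integer (2^31 - 1), which is the type behind sy-index; a quick check:

```python
# sy-index is a 4-byte signed integer, so its maximum is 2**31 - 1.
# Incrementing past this raises CX_SY_ARITHMETIC_OVERFLOW, matching the dump.
MAX_INT4 = 2**31 - 1
print(MAX_INT4)  # 2147483647
```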
    Thanks and Regards,
    Karthik

    Hi Karthik,
    Did you ever get an answer on this topic, or did you raise a request with SAP in the meantime? I get the same dump and am looking for a solution too.
    Br
    Markku

  • DMS Migration (DIR, charateristics, classes)

    Hi All,
    I have a requirement to migrate the document info records, characteristics, classes, classification, and the original documents from an R/3 4.7 to an ECC 6 server, as well as move the original docs from the old KPro server to the new KPro server.
    So far I was able to find the IDoc type that I can use for characteristics, classes, and classification, as well as the reports to do a mass distribution.
    My biggest problem is migrating the documents and also ensuring that the links point to the new server. Your help would be really appreciated.
    Thank you in Advance,
    Danny

    Hi,
    You can use the message type DOCMAS.
    Also consider that you would need to transfer the following message types where applicable.
    Classification data needs to be ALE'd to the new system, in the sequence set out below:
    1. Characteristics and Characteristic Values
    Most objects for variant configuration are dependent on characteristics. For this reason, characteristics must be transferred first.
    • Characteristics with value hierarchies, long texts for characteristic values, or linked documents may lead to problems during transfer.
    • Message type: CHRMAS
    • Availability: as of R/3 3.0
    2. Classes
    When you use ALE to distribute classes, the characteristic assignments are also transferred.
    • Message type: CLSMAS
    • Availability: as of R/3 3.0
    3. Variant Table Structures
    These are the variant tables that are created to support data maintenance.
    • Message type: VTAMAS
    • Availability: as of R/3 3.1
    4. Variant Table Contents
    Once the structures of the variant tables have been distributed, their contents can be transferred.
    • Message type: VTMMAS
    • Availability: as of R/3 3.1
    5. User-Defined Functions (Variant Functions, VC Functions)
    User-defined functions in variant configuration let you use function modules that you have written to check and infer characteristic values.
    The distribution of functions only transfers the framework (texts, characteristics, and so on). The function modules that belong to the functions must be transferred first, using the usual R/3 transport system.
    • Message type: VFNMAS
    • Availability: as of R/3 4.5
    6. Object Dependencies (Except Constraints)
    Dependencies (preconditions, selection conditions, procedures, and actions) usually refer to characteristics, characteristic values, variant tables, and variant functions. For this reason, dependencies must be distributed after this other master data. The dependencies transferred here are global dependencies. Local dependencies are transferred with the objects to which they are assigned. For example, if you created a selection condition as a local dependency for a BOM item, this dependency is transferred when you use ALE to distribute the BOM (bill of material).
    • Message type: KNOMAS
    • Availability: as of R/3 3.1
    7. Constraints
    Constraints can only be distributed as of R/3 4.5.
    • Message type: KNOMAS
    • Availability: as of R/3 4.5
    8. Constraint Nets
    Constraint nets can only be distributed as of R/3 4.5.
    • Message type: DEPNET
    • Availability: as of R/3 4.5
    9. Assignment of Dependencies to Characteristics and Characteristic Values
    • The characteristics are transferred once more to do this. Start ALE distribution for characteristics again, and the system transfers the assignments.
    See point 1: Characteristics and Characteristic Values
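The sequence above can be captured as simple ordered data (a Python sketch; the `send` callable is hypothetical) so that a migration script triggers each message type only after its predecessors:

```python
# Distribution order from the post: each message type is sent only after
# the ones before it in this list.
ALE_SEQUENCE = [
    ("Characteristics and characteristic values", "CHRMAS"),
    ("Classes", "CLSMAS"),
    ("Variant table structures", "VTAMAS"),
    ("Variant table contents", "VTMMAS"),
    ("User-defined functions", "VFNMAS"),
    ("Object dependencies (except constraints)", "KNOMAS"),
    ("Constraints", "KNOMAS"),
    ("Constraint nets", "DEPNET"),
    ("Dependency assignments (characteristics re-sent)", "CHRMAS"),
]

def distribute(send):
    # 'send' is a hypothetical callable that triggers the ALE send for one step.
    for step, msgtype in ALE_SEQUENCE:
        send(step, msgtype)
```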
    Regards,
    Freddie.

  • Storage class not allowed in storage type

    Hi All,
    I am getting this error while confirming the TO.
    It says "Storage class is not allowed in storage type". How can this be configured in OMM2?
    Or do we need to define a sequence in OMLY?

    Hi
    Have you defined your material as a hazardous material? If so, yes, you have to make settings in OMLY.
    "Storage class: classifies hazardous materials with regard to their storage conditions. A check can be carried out to find out whether hazardous materials that belong to a specific storage class may be placed into a particular storage type."
    Rgds
    Ramesh

  • How to link material idoc to class & characteristics ?

    Hi Experts,
    Please help me link a material IDoc to class & classification.
    I am new to IDocs.
    Thanks in advance..

    Hi Nikhil,
    Thanks for your reply.
    I know the message types CLSMAS (class master) and CLFMAS (classification data).
    I have been asked to research how, when sending the IDoc, the material is linked to class & classification.

  • Tables to view classification tab

    Hi,
    Suppose in the material master I am maintaining the Classification tab and hence entering classification values. Which tables do I refer to in order to find those values? Do I refer to the KLAH table, or to the MARA/MARC tables?

    Hi,
    You will not get the classification values directly from a single table. You would have to link many tables, and this is a time-consuming job.
    Better to use function module CLAF_CLASSIFICATION_OF_OBJECTS, which gets the class & classification data for an object.
    Pass the following:
    CLASSTYPE    001
    CLINT        for the MATNR, from the KSSK table
    OBJECT       MATNR
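A toy illustration (plain Python with made-up data, not the real table layouts) of why the function module is easier than joining the tables yourself: the object-to-class link (KSSK) and the stored characteristic values (in reality AUSP, keyed by internal numbers) live in separate tables that the FM resolves for you:

```python
# Made-up miniature versions of the classification tables.
kssk = {"MAT001": 4711}              # object -> internal class number (CLINT)
ausp = {("MAT001", "COLOR"): "RED"}  # (object, characteristic) -> value

def classification_of_object(objek):
    # Resolve the class assignment, then collect the object's values --
    # roughly what CLAF_CLASSIFICATION_OF_OBJECTS does across the real tables.
    clint = kssk[objek]
    values = {char: val for (obj, char), val in ausp.items() if obj == objek}
    return clint, values

print(classification_of_object("MAT001"))
```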

  • ECM for Classification

    Dear All,
    We have implemented ECM and are using it for the material master, BOM, and task list. Now we want to use class 001 (Material) in the material master, so we defined a class & characteristic for it.
    Our requirement is to capture the changes made to characteristic values with reference to a change number. For that I created a CC with object types Material, Characteristic, Characteristic of Class & Classification. But when I go to MM02 with a change number, in the Classification view the system gives me the message "Change no. ignored for class type 001", message no. CL141.
    Please suggest what configuration is required for this.
    Regards,
    Dev123....

    Hi Dev123,
    The reason you receive message CL141 is probably that 'ECH (time)' isn't set for class type 001.
    You can check this in your system by following the path:
    (Transaction) O1CL ->
                            Object Table -> MARA
                            Class Type -> 001
                            Details
                            ECH (times)
                            Objects -> ECH (times)
    If ECH (times) is unticked, this leads to the popup you receive. In order to resolve the issue, these flags need to be set.
    I Hope the information helps.
    Enda.

  • Performance issue - Pricing Report

    Hi Experts,
    I have developed an ALV report and I need to improve its performance in production.
    This is a relatively complex issue. I have to fetch pricing data for a pricing condition from all the underlying condition tables (AXXX). Then, for every material, I have to display the material class classification characteristics and their values maintained in the material master, as well as the batch classification characteristics and their corresponding values maintained for every batch.
    For example:
    If the condition has 10 materials, and every material has, say, 10 material class characteristics and 12 batch class characteristics, then the total number of rows in the output should be 130 (one row for the material plus 12 rows for classifications).
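The row count works out as follows (one material row plus one row per batch characteristic, for each material):

```python
materials = 10        # materials in the condition
batch_chars = 12      # batch class characteristics per material
rows = materials * (1 + batch_chars)
print(rows)  # 130
```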
    How can I structure the processing so that performance is optimized?
    I am also fetching other data for the output, such as stock and sales order quantity for every material.
    I have tried minimizing loops and used SELECT ... FOR ALL ENTRIES (I have read a few threads that suggest otherwise). READ statements use BINARY SEARCH. No SELECT * queries.
    Warm Regards,
    Abdullah

    I would not endorse a lot of the recommendations going around here. It seems like a complex report you are doing there, so here are my personal generic recommendations:
    - always use an index when selecting from database tables
    - avoid redundant database accesses to unbuffered tables; only read the fields you really need for processing
    - use sorted or hashed tables for reads or loops and access them by key
    - use ASSIGNING rather than INTO when reading or looping (small gain)
    - use SE30 and, if necessary, ST05 to fine-tune if you are not yet satisfied with the runtime
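The "sorted or hashed tables, access by key" point can be sketched in Python (an illustrative analogy, not ABAP): a keyed lookup avoids scanning the whole table the way an unsorted read would:

```python
# Toy internal table with made-up material numbers.
rows = [{"matnr": "M%05d" % i, "stock": i} for i in range(10000)]

# Linear scan, analogous to READ TABLE on a standard table without BINARY SEARCH.
linear = next(r for r in rows if r["matnr"] == "M09999")

# Keyed access, analogous to a hashed table with a unique key: build once, reuse.
by_key = {r["matnr"]: r for r in rows}
hashed = by_key["M09999"]

assert linear is hashed  # both find the same row; the keyed path does no scan
```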
    Thomas

  • Dmtreedemo.java error

    Hi,
    I got an error in the following Java program, dmtreedemo.java. The code and the error are as mentioned below.
    I have installed Oracle 10g R2 and used JDK 1.4.2, and I set the classpath for jdm.jar and ojdm_api.jar from the Oracle 10g R2 installation. It compiled successfully, but at execution I got this error:
    F:\Mallari\DATA MINING demos\java\samples>java dmtreedemo localhost:1521:orcl scott tiger
    --- Build Model - using cost matrix ---
    javax.datamining.JDMException: Generic Error.
    at oracle.dmt.jdm.resource.OraExceptionHandler.createException(OraExceptionHandler.java:142)
    at oracle.dmt.jdm.resource.OraExceptionHandler.createException(OraExceptionHandler.java:91)
    at oracle.dmt.jdm.OraDMObject.createException(OraDMObject.java:111)
    at oracle.dmt.jdm.base.OraTask.saveObjectInDatabase(OraTask.java:204)
    at oracle.dmt.jdm.OraMiningObject.saveObjectInDatabase(OraMiningObject.java:164)
    at oracle.dmt.jdm.resource.OraPersistanceManagerImpl.saveObject(OraPersistanceManagerImpl.java:245)
    at oracle.dmt.jdm.resource.OraConnection.saveObject(OraConnection.java:383)
    at dmtreedemo.executeTask(dmtreedemo.java:622)
    at dmtreedemo.buildModel(dmtreedemo.java:304)
    at dmtreedemo.main(dmtreedemo.java:199)
    Caused by: java.sql.SQLException: Unsupported feature
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)
    at oracle.jdbc.dbaccess.DBError.throwUnsupportedFeatureSqlException(DBError.java:690)
    at oracle.jdbc.driver.OracleCallableStatement.setString(OracleCallableStatement.java:1337)
    at oracle.dmt.jdm.utils.OraSQLUtils.createCallableStatement(OraSQLUtils.java:126)
    at oracle.dmt.jdm.utils.OraSQLUtils.executeCallableStatement(OraSQLUtils.java:532)
    at oracle.dmt.jdm.scheduler.OraProgramJob.createJob(OraProgramJob.java:77)
    at oracle.dmt.jdm.scheduler.OraJob.saveJob(OraJob.java:107)
    at oracle.dmt.jdm.scheduler.OraProgramJob.saveJob(OraProgramJob.java:85)
    at oracle.dmt.jdm.scheduler.OraProgramJob.saveJob(OraProgramJob.java:290)
    at oracle.dmt.jdm.base.OraTask.saveObjectInDatabase(OraTask.java:199)
    ... 6 more
    Please help me out with this; I will be very thankful.
    ===========================================================
    the sample code is
    // Copyright (c) 2004, 2005, Oracle. All rights reserved.
    // File: dmtreedemo.java
    /*
    * This demo program describes how to use the Oracle Data Mining (ODM) Java API
    * to solve a classification problem using Decision Tree (DT) algorithm.
    * PROBLEM DEFINITION
    * How to predict whether a customer responds or not to the new affinity card
    * program using a classifier based on DT algorithm?
    * DATA DESCRIPTION
    * Data for this demo is composed from base tables in the Sales History (SH)
    * schema. The SH schema is an Oracle Database Sample Schema that has the customer
    * demographics, purchasing, and response details for the previous affinity card
    * programs. Data exploration and preparing the data is a common step before
    * doing data mining. Here in this demo, the following views are created in the user
    * schema using CUSTOMERS, COUNTRIES, and SUPPLEMENTARY_DEMOGRAPHICS tables.
    * MINING_DATA_BUILD_V:
    * This view collects the previous customers' demographics, purchasing, and affinity
    * card response details for building the model.
    * MINING_DATA_TEST_V:
    * This view collects the previous customers' demographics, purchasing, and affinity
    * card response details for testing the model.
    * MINING_DATA_APPLY_V:
    * This view collects the prospective customers' demographics and purchasing
    * details for predicting response for the new affinity card program.
    * DATA MINING PROCESS
    * Prepare Data:
    * 1. Missing Value treatment for predictors
    * See dmsvcdemo.java for a definition of missing values, and the steps to be
    * taken for missing value imputation. SVM interprets all NULL values for a
    * given attribute as "sparse". Sparse data is not suitable for decision
    * trees, but it will accept sparse data nevertheless. Decision Tree
    * implementation in ODM handles missing predictor values (by penalizing
    * predictors which have missing values) and missing target values (by simple
    * discarding records with missing target values). We skip missing values
    * treatment in this demo.
    * 2. Outlier/Clipping treatment for predictors
    * See dmsvcdemo.java for a discussion on outlier treatment. For decision
    * trees, outlier treatment is not really necessary. We skip outlier treatment
    * in this demo.
    * 3. Binning high cardinality data
    * No data preparation for the types we accept is necessary - even for high
    * cardinality predictors. Preprocessing to reduce the cardinality
    * (e.g., binning) can improve the performance of the build, but it could
    * penalize the accuracy of the resulting model.
    * The PrepareData() method in this demo program illustrates the preparation of the
    * build, test, and apply data. We skip PrepareData() since the decision tree
    * algorithm is very capable of handling data which has not been specially
    * prepared. For this demo, no data preparation will be performed.
    * Build Model:
    * Mining Model is the prime object in data mining. The buildModel() method
    * illustrates how to build a classification model using DT algorithm.
    * Test Model:
    * Classification model performance can be evaluated by computing test
    * metrics like accuracy, confusion matrix, lift and ROC. The testModel() or
    * computeTestMetrics() method illustrates how to perform a test operation to
    * compute various metrics.
    * Apply Model:
    * Predicting the target attribute values is the prime function of
    * classification models. The applyModel() method illustrates how to
    * predict the customer response for affinity card program.
    * EXECUTING DEMO PROGRAM
    * Refer to Oracle Data Mining Administrator's Guide
    * for guidelines for executing this demo program.
    */
    // Generic Java api imports
    import java.math.BigDecimal;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.text.DecimalFormat;
    import java.text.MessageFormat;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Stack;
    // Java Data Mining (JDM) standard api imports
    import javax.datamining.ExecutionHandle;
    import javax.datamining.ExecutionState;
    import javax.datamining.ExecutionStatus;
    import javax.datamining.JDMException;
    import javax.datamining.MiningAlgorithm;
    import javax.datamining.MiningFunction;
    import javax.datamining.NamedObject;
    import javax.datamining.SizeUnit;
    import javax.datamining.algorithm.tree.TreeHomogeneityMetric;
    import javax.datamining.algorithm.tree.TreeSettings;
    import javax.datamining.algorithm.tree.TreeSettingsFactory;
    import javax.datamining.base.AlgorithmSettings;
    import javax.datamining.base.Model;
    import javax.datamining.base.Task;
    import javax.datamining.data.AttributeDataType;
    import javax.datamining.data.CategoryProperty;
    import javax.datamining.data.CategorySet;
    import javax.datamining.data.CategorySetFactory;
    import javax.datamining.data.ModelSignature;
    import javax.datamining.data.PhysicalAttribute;
    import javax.datamining.data.PhysicalAttributeFactory;
    import javax.datamining.data.PhysicalAttributeRole;
    import javax.datamining.data.PhysicalDataSet;
    import javax.datamining.data.PhysicalDataSetFactory;
    import javax.datamining.data.SignatureAttribute;
    import javax.datamining.modeldetail.tree.TreeModelDetail;
    import javax.datamining.modeldetail.tree.TreeNode;
    import javax.datamining.resource.Connection;
    import javax.datamining.resource.ConnectionFactory;
    import javax.datamining.resource.ConnectionSpec;
    import javax.datamining.rule.Predicate;
    import javax.datamining.rule.Rule;
    import javax.datamining.supervised.classification.ClassificationApplySettings;
    import javax.datamining.supervised.classification.ClassificationApplySettingsFactory;
    import javax.datamining.supervised.classification.ClassificationModel;
    import javax.datamining.supervised.classification.ClassificationSettings;
    import javax.datamining.supervised.classification.ClassificationSettingsFactory;
    import javax.datamining.supervised.classification.ClassificationTestMetricOption;
    import javax.datamining.supervised.classification.ClassificationTestMetrics;
    import javax.datamining.supervised.classification.ClassificationTestMetricsTask;
    import javax.datamining.supervised.classification.ClassificationTestMetricsTaskFactory;
    import javax.datamining.supervised.classification.ClassificationTestTaskFactory;
    import javax.datamining.supervised.classification.ConfusionMatrix;
    import javax.datamining.supervised.classification.CostMatrix;
    import javax.datamining.supervised.classification.CostMatrixFactory;
    import javax.datamining.supervised.classification.Lift;
    import javax.datamining.supervised.classification.ReceiverOperatingCharacterics;
    import javax.datamining.task.BuildTask;
    import javax.datamining.task.BuildTaskFactory;
    import javax.datamining.task.apply.DataSetApplyTask;
    import javax.datamining.task.apply.DataSetApplyTaskFactory;
    // Oracle Java Data Mining (JDM) implemented api imports
    import oracle.dmt.jdm.algorithm.tree.OraTreeSettings;
    import oracle.dmt.jdm.resource.OraConnection;
    import oracle.dmt.jdm.resource.OraConnectionFactory;
    import oracle.dmt.jdm.modeldetail.tree.OraTreeModelDetail;
    public class dmtreedemo
    {
    //Connection related data members
    private static Connection m_dmeConn;
    private static ConnectionFactory m_dmeConnFactory;
    //Object factories used in this demo program
    private static PhysicalDataSetFactory m_pdsFactory;
    private static PhysicalAttributeFactory m_paFactory;
    private static ClassificationSettingsFactory m_clasFactory;
    private static TreeSettingsFactory m_treeFactory;
    private static BuildTaskFactory m_buildFactory;
    private static DataSetApplyTaskFactory m_dsApplyFactory;
    private static ClassificationTestTaskFactory m_testFactory;
    private static ClassificationApplySettingsFactory m_applySettingsFactory;
    private static CostMatrixFactory m_costMatrixFactory;
    private static CategorySetFactory m_catSetFactory;
    private static ClassificationTestMetricsTaskFactory m_testMetricsTaskFactory;
    // Global constants
    private static DecimalFormat m_df = new DecimalFormat("##.####");
    private static String m_costMatrixName = null;
    public static void main( String args[] )
    {
    try {
    if ( args.length != 3 ) {
    System.out.println("Usage: java dmtreedemo <Host name>:<Port>:<SID> <User Name> <Password>");
    return;
    }
    String uri = args[0];
    String name = args[1];
    String password = args[2];
    // 1. Login to the Data Mining Engine
    m_dmeConnFactory = new OraConnectionFactory();
    ConnectionSpec connSpec = m_dmeConnFactory.getConnectionSpec();
    connSpec.setURI("jdbc:oracle:thin:@"+uri);
    connSpec.setName(name);
    connSpec.setPassword(password);
    m_dmeConn = m_dmeConnFactory.getConnection(connSpec);
    // 2. Clean up all previously created demo objects
    clean();
    // 3. Initialize factories for mining objects
    initFactories();
    m_costMatrixName = createCostMatrix();
    // 4. Build model with supplied cost matrix
    buildModel();
    // 5. Test model - To compute accuracy and confusion matrix, lift result
    // and ROC for the model from apply output data.
    // Please see dnnbdemo.java to see how to test the model
    // with a test input data and cost matrix.
    // Test the model with cost matrix
    computeTestMetrics("DT_TEST_APPLY_OUTPUT_COST_JDM",
    "dtTestMetricsWithCost_jdm", m_costMatrixName);
    // Test the model without cost matrix
    computeTestMetrics("DT_TEST_APPLY_OUTPUT_JDM",
    "dtTestMetrics_jdm", null);
    // 6. Apply the model
    applyModel();
    } catch(Exception anyExp) {
    anyExp.printStackTrace(System.out);
    } finally {
    try {
    // 7. Logout from the Data Mining Engine
    m_dmeConn.close();
    } catch(Exception anyExp1) { } //Ignore
    }
    }
    /**
    * Initialize all object factories used in the demo program.
    * @exception JDMException if factory initialization failed
    */
    public static void initFactories() throws JDMException
    {
    m_pdsFactory = (PhysicalDataSetFactory)m_dmeConn.getFactory(
    "javax.datamining.data.PhysicalDataSet");
    m_paFactory = (PhysicalAttributeFactory)m_dmeConn.getFactory(
    "javax.datamining.data.PhysicalAttribute");
    m_clasFactory = (ClassificationSettingsFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationSettings");
    m_treeFactory = (TreeSettingsFactory) m_dmeConn.getFactory(
    "javax.datamining.algorithm.tree.TreeSettings");
    m_buildFactory = (BuildTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.task.BuildTask");
    m_dsApplyFactory = (DataSetApplyTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.task.apply.DataSetApplyTask");
    m_testFactory = (ClassificationTestTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationTestTask");
    m_applySettingsFactory = (ClassificationApplySettingsFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationApplySettings");
    m_costMatrixFactory = (CostMatrixFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.CostMatrix");
    m_catSetFactory = (CategorySetFactory)m_dmeConn.getFactory(
    "javax.datamining.data.CategorySet" );
    m_testMetricsTaskFactory = (ClassificationTestMetricsTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationTestMetricsTask");
    /**
    * This method illustrates how to build a mining model using the
    * MINING_DATA_BUILD_V dataset and classification settings with
    * the DT algorithm.
    * @exception JDMException if model build failed
    */
    public static void buildModel() throws JDMException
    {
    System.out.println("---------------------------------------------------");
    System.out.println("--- Build Model - using cost matrix ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    PhysicalDataSet buildData =
    m_pdsFactory.create("MINING_DATA_BUILD_V", false);
    PhysicalAttribute pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    buildData.addAttribute(pa);
    m_dmeConn.saveObject("treeBuildData_jdm", buildData, true);
    //2. Create & save Mining Function Settings
    // Create tree algorithm settings
    TreeSettings treeAlgo = m_treeFactory.create();
    // By default, tree algorithm will have the following settings:
    // treeAlgo.setBuildHomogeneityMetric(TreeHomogeneityMetric.gini);
    // treeAlgo.setMaxDepth(7);
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(0.1, SizeUnit.percentage);
    // treeAlgo.setMinNodeSize( 0.05, SizeUnit.percentage );
    // treeAlgo.setMinNodeSize( 10, SizeUnit.count );
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(20, SizeUnit.count);
    // Set cost matrix. A cost matrix is used to influence the weighting of
    // misclassification during model creation (and scoring).
    // See Oracle Data Mining Concepts Guide for more details.
    String costMatrixName = m_costMatrixName;
    // Create ClassificationSettings
    ClassificationSettings buildSettings = m_clasFactory.create();
    buildSettings.setAlgorithmSettings(treeAlgo);
    buildSettings.setCostMatrixName(costMatrixName);
    buildSettings.setTargetAttributeName("AFFINITY_CARD");
    m_dmeConn.saveObject("treeBuildSettings_jdm", buildSettings, true);
    // 3. Create, save & execute Build Task
    BuildTask buildTask = m_buildFactory.create(
    "treeBuildData_jdm", // build data specification
    "treeBuildSettings_jdm", // mining function settings name
    "treeModel_jdm"); // mining model name
    buildTask.setDescription("treeBuildTask_jdm");
    executeTask(buildTask, "treeBuildTask_jdm");
    //4. Restore the model from the DME and explore the details of the model
    ClassificationModel model =
    (ClassificationModel)m_dmeConn.retrieveObject(
    "treeModel_jdm", NamedObject.model);
    // Display model build settings
    ClassificationSettings retrievedBuildSettings =
    (ClassificationSettings)model.getBuildSettings();
    if(retrievedBuildSettings == null)
    System.out.println("Failure to restore build settings.");
    else
    displayBuildSettings(retrievedBuildSettings, "treeBuildSettings_jdm");
    // Display model signature
    displayModelSignature((Model)model);
    // Display model detail
    TreeModelDetail treeModelDetails = (TreeModelDetail) model.getModelDetail();
    displayTreeModelDetailsExtensions(treeModelDetails);
    * Create and save cost matrix.
    * Consider an example where it costs $10 to mail a promotion to a
    * prospective customer and if the prospect becomes a customer, the
    * typical sale including the promotion, is worth $100. Then the cost
    * of missing a customer (i.e. missing a $100 sale) is 10x that of
    * incorrectly indicating that a person is good prospect (spending
    * $10 for the promo). In this case, all prediction errors made by
    * the model are NOT equal. To act on what the model determines to
    * be the most likely (probable) outcome may be a poor choice.
    * Suppose that the probability of a BUY response is 10% for a given
    * prospect. Then the expected revenue from the prospect is:
    * .10 * $100 - .90 * $10 = $1.
    * The optimal action, given the cost matrix, is to simply mail the
    * promotion to the customer, because the action is profitable ($1).
    * In contrast, without the cost matrix, all that can be said is
    * that the most likely response is NO BUY, so don't send the
    * promotion. This shows that cost matrices can be very important.
    * The caveat in all this is that the model predicted probabilities
    * may NOT be accurate. For binary targets, a systematic approach to
    * this issue exists. It is ROC, illustrated below.
    * With ROC computed on a test set, the user can see how various model
    * predicted probability thresholds affect the action of mailing a promotion.
    * Suppose I promote when the probability to BUY exceeds 5, 10, 15%, etc.;
    * what return can I expect? Note that the answer to this question does
    * not rely on the predicted probabilities being accurate, only that
    * they are in approximately the correct rank order.
    * Assuming that the predicted probabilities are accurate, provide the
    * cost matrix table name as input to the RANK_APPLY procedure to get
    * appropriate costed scoring results to determine the most appropriate
    * action.
    * In this demo, we will create the following cost matrix:
    *
    *   ActualTarget  PredictedTarget  Cost
    *        0               0           0
    *        0               1           1
    *        1               0           8
    *        1               1           0
    private static String createCostMatrix() throws JDMException
    String costMatrixName = "treeCostMatrix";
    // Create categorySet
    CategorySet catSet = m_catSetFactory.create(AttributeDataType.integerType);
    // Add category values
    catSet.addCategory(new Integer(0), CategoryProperty.valid);
    catSet.addCategory(new Integer(1), CategoryProperty.valid);
    // Create cost matrix
    CostMatrix costMatrix = m_costMatrixFactory.create(catSet);
    // ActualTarget PredictedTarget Cost
    costMatrix.setValue(new Integer(0), new Integer(0), 0);
    costMatrix.setValue(new Integer(0), new Integer(1), 1);
    costMatrix.setValue(new Integer(1), new Integer(0), 8);
    costMatrix.setValue(new Integer(1), new Integer(1), 0);
    //save cost matrix
    m_dmeConn.saveObject(costMatrixName, costMatrix, true);
    return costMatrixName;
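To sanity-check the expected-revenue arithmetic described in the cost-matrix comment above (.10 * $100 - .90 * $10 = $1), here is a tiny standalone Java sketch. It is independent of the JDM API; the class and method names are illustrative, not part of the demo:

```java
// Standalone sketch of the expected-revenue reasoning behind the cost matrix.
// The figures ($100 sale value, $10 mailing cost) come from the comment above.
public class ExpectedRevenueSketch {

    // Expected revenue of mailing a promotion to one prospect:
    // p(buy) * saleValue - p(no buy) * mailCost
    static double expectedRevenue(double pBuy, double saleValue, double mailCost) {
        return pBuy * saleValue - (1.0 - pBuy) * mailCost;
    }

    public static void main(String[] args) {
        // 10% buy probability, $100 sale, $10 promo cost -> about $1 expected
        // gain, so mailing is profitable even though NO BUY is the most
        // likely single outcome. This is why the prediction errors are
        // weighted unequally in the cost matrix.
        double ev = expectedRevenue(0.10, 100.0, 10.0);
        System.out.println("Expected revenue: $" + ev);
        System.out.println(ev > 0 ? "mail the promotion" : "skip the promotion");
    }
}
```

At lower buy probabilities the expected revenue goes negative and the optimal action flips, which is exactly the threshold question the ROC discussion above addresses.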
    * This method illustrates how to compute test metrics from an apply
    * output table that contains actual and predicted target values. The
    * apply operation is run on the MINING_DATA_TEST_V dataset to create
    * such a table, and test metrics are then computed with a
    * ClassificationTestMetricsTask. This produces the same test metrics
    * results as ClassificationTestTask.
    * @param applyOutputName apply output table name
    * @param testResultName test result name
    * @param costMatrixName table name of the supplied cost matrix
    * @exception JDMException if model test failed
    public static void computeTestMetrics(String applyOutputName,
    String testResultName, String costMatrixName) throws JDMException
    if (costMatrixName != null) {
    System.out.println("---------------------------------------------------");
    System.out.println("--- Test Model - using apply output table ---");
    System.out.println("--- - using cost matrix table ---");
    System.out.println("---------------------------------------------------");
    } else {
    System.out.println("---------------------------------------------------");
    System.out.println("--- Test Model - using apply output table ---");
    System.out.println("--- - using no cost matrix table ---");
    System.out.println("---------------------------------------------------");
    }
    // 1. Do the apply on test data to create an apply output table
    // Create & save PhysicalDataSpecification
    PhysicalDataSet applyData =
    m_pdsFactory.create( "MINING_DATA_TEST_V", false );
    PhysicalAttribute pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeTestApplyData_jdm", applyData, true );
    // 2 Create & save ClassificationApplySettings
    ClassificationApplySettings clasAS = m_applySettingsFactory.create();
    HashMap sourceAttrMap = new HashMap();
    sourceAttrMap.put( "AFFINITY_CARD", "AFFINITY_CARD" );
    clasAS.setSourceDestinationMap( sourceAttrMap );
    m_dmeConn.saveObject( "treeTestApplySettings_jdm", clasAS, true);
    // 3 Create, store & execute apply Task
    DataSetApplyTask applyTask = m_dsApplyFactory.create(
    "treeTestApplyData_jdm",
    "treeModel_jdm",
    "treeTestApplySettings_jdm",
    applyOutputName);
    if(executeTask(applyTask, "treeTestApplyTask_jdm"))
    // Compute test metrics on new created apply output table
    // 4. Create & save PhysicalDataSpecification
    PhysicalDataSet applyOutputData = m_pdsFactory.create(
    applyOutputName, false );
    applyOutputData.addAttribute( pa );
    m_dmeConn.saveObject( "treeTestApplyOutput_jdm", applyOutputData, true );
    // 5. Create a ClassificationTestMetricsTask
    ClassificationTestMetricsTask testMetricsTask =
    m_testMetricsTaskFactory.create(
    "treeTestApplyOutput_jdm", // apply output data used as input
    "AFFINITY_CARD", // actual target column
    "PREDICTION", // predicted target column
    testResultName); // test metrics result name
    testMetricsTask.computeMetric( // enable confusion matrix computation
    ClassificationTestMetricOption.confusionMatrix, true );
    testMetricsTask.computeMetric( // enable lift computation
    ClassificationTestMetricOption.lift, true );
    testMetricsTask.computeMetric( // enable ROC computation
    ClassificationTestMetricOption.receiverOperatingCharacteristics, true );
    testMetricsTask.setPositiveTargetValue( new Integer(1) );
    testMetricsTask.setNumberOfLiftQuantiles( 10 );
    testMetricsTask.setPredictionRankingAttrName( "PROBABILITY" );
    if (costMatrixName != null) {
    testMetricsTask.setCostMatrixName(costMatrixName);
    displayTable(costMatrixName, "", "order by ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE");
    }
    // Store & execute the task
    boolean isTaskSuccess = executeTask(testMetricsTask, "treeTestMetricsTask_jdm");
    if( isTaskSuccess ) {
    // Restore & display test metrics
    ClassificationTestMetrics testMetrics = (ClassificationTestMetrics)
    m_dmeConn.retrieveObject( testResultName, NamedObject.testMetrics );
    // Display classification test metrics
    displayTestMetricDetails(testMetrics);
    * This method illustrates how to apply the mining model to the
    * MINING_DATA_APPLY_V dataset to predict customer response. After the
    * task completes, an apply output table with the predicted results is
    * created at the user-specified location.
    * @exception JDMException if the model apply fails
    public static void applyModel() throws JDMException
    System.out.println("---------------------------------------------------");
    System.out.println("--- Apply Model ---");
    System.out.println("---------------------------------------------------");
    System.out.println("---------------------------------------------------");
    System.out.println("--- Business case 1 ---");
    System.out.println("--- Find the 10 customers who live in Italy ---");
    System.out.println("--- that are least expensive to be convinced to ---");
    System.out.println("--- use an affinity card. ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    PhysicalDataSet applyData =
    m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
    PhysicalAttribute pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
    // 2. Create & save ClassificationApplySettings
    ClassificationApplySettings clasAS = m_applySettingsFactory.create();
    // Add source attributes
    HashMap sourceAttrMap = new HashMap();
    sourceAttrMap.put( "COUNTRY_NAME", "COUNTRY_NAME" );
    clasAS.setSourceDestinationMap( sourceAttrMap );
    // Add cost matrix
    clasAS.setCostMatrixName( m_costMatrixName );
    m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
    // 3. Create, store & execute apply Task
    DataSetApplyTask applyTask = m_dsApplyFactory.create(
    "treeApplyData_jdm", "treeModel_jdm",
    "treeApplySettings_jdm", "TREE_APPLY_OUTPUT1_JDM");
    executeTask(applyTask, "treeApplyTask_jdm");
    // 4. Display apply result -- Note that APPLY results do not need to be
    // reverse transformed, as done in the case of model details. This is
    // because class values of a classification target were not (required to
    // be) binned or normalized.
    // Find the 10 customers who live in Italy that are least expensive to be
    // convinced to use an affinity card.
    displayTable("TREE_APPLY_OUTPUT1_JDM",
    "where COUNTRY_NAME='Italy' and ROWNUM < 11 ",
    "order by COST");
    System.out.println("---------------------------------------------------");
    System.out.println("--- Business case 2 ---");
    System.out.println("--- List ten customers (ordered by their id) ---");
    System.out.println("--- along with likelihood and cost to use or ---");
    System.out.println("--- reject the affinity card. ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    applyData =
    m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
    pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
    // 2. Create & save ClassificationApplySettings
    clasAS = m_applySettingsFactory.create();
    // Add cost matrix
    clasAS.setCostMatrixName( m_costMatrixName );
    m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
    // 3. Create, store & execute apply Task
    applyTask = m_dsApplyFactory.create(
    "treeApplyData_jdm", "treeModel_jdm",
    "treeApplySettings_jdm", "TREE_APPLY_OUTPUT2_JDM");
    executeTask(applyTask, "treeApplyTask_jdm");
    // 4. Display apply result -- Note that APPLY results do not need to be
    // reverse transformed, as done in the case of model details. This is
    // because class values of a classification target were not (required to
    // be) binned or normalized.
    // List ten customers (ordered by their id) along with likelihood and cost
    // to use or reject the affinity card (Note: while this example has a
    // binary target, such a query is useful in multi-class classification -
    // Low, Med, High for example).
    displayTable("TREE_APPLY_OUTPUT2_JDM",
    "where ROWNUM < 21",
    "order by CUST_ID, PREDICTION");
    System.out.println("---------------------------------------------------");
    System.out.println("--- Business case 3 ---");
    System.out.println("--- Find the customers who work in Tech support ---");
    System.out.println("--- and are under 25 who are likely to respond ---");
    System.out.println("--- to the new affinity card program. ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    applyData =
    m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
    pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
    // 2. Create & save ClassificationApplySettings
    clasAS = m_applySettingsFactory.create();
    // Add source attributes
    sourceAttrMap = new HashMap();
    sourceAttrMap.put( "AGE", "AGE" );
    sourceAttrMap.put( "OCCUPATION", "OCCUPATION" );
    clasAS.setSourceDestinationMap( sourceAttrMap );
    m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
    // 3. Create, store & execute apply Task
    applyTask = m_dsApplyFactory.create(
    "treeApplyData_jdm", "treeModel_jdm",
    "treeApplySettings_jdm", "TREE_APPLY_OUTPUT3_JDM");
    executeTask(applyTask, "treeApplyTask_jdm");
    // 4. Display apply result -- Note that APPLY results do not need to be
    // reverse transformed, as done in the case of model details. This is
    // because class values of a classification target were not (required to
    // be) binned or normalized.
    // Find the customers who work in Tech support, are under 25, and are
    // likely to respond to the new affinity card program.
    displayTable("TREE_APPLY_OUTPUT3_JDM",
    "where OCCUPATION = 'TechSup' " +
    "and AGE < 25 " +
    "and PREDICTION = 1 ",
    "order by CUST_ID");
    * This method stores the given task under the specified name in the DMS
    * and submits it for asynchronous execution. It returns true if the
    * task completes successfully; on failure it prints the error
    * description and returns false.
    * @param taskObj task object
    * @param taskName name of the task
    * @return boolean returns true when the task is successful
    * @exception JDMException if task execution failed
    public static boolean executeTask(Task taskObj, String taskName)
    throws JDMException
    boolean isTaskSuccess = false;
    m_dmeConn.saveObject(taskName, taskObj, true);
    ExecutionHandle execHandle = m_dmeConn.execute(taskName);
    System.out.print(taskName + " is started, please wait. ");
    //Wait for completion of the task
    ExecutionStatus status = execHandle.waitForCompletion(Integer.MAX_VALUE);
    //Check the status of the task after completion
    isTaskSuccess = status.getState().equals(ExecutionState.success);
    if( isTaskSuccess ) //Task completed successfully
    System.out.println(taskName + " is successful.");
    else //Task failed
    System.out.println(taskName + " failed.\nFailure Description: " +
    status.getDescription() );
    return isTaskSuccess;
    private static void displayBuildSettings(
    ClassificationSettings clasSettings, String buildSettingsName)
    System.out.println("BuildSettings Details from the "
    + buildSettingsName + " table:");
    displayTable(buildSettingsName, "", "order by SETTING_NAME");
    System.out.println("BuildSettings Details from the "
    + buildSettingsName + " model build settings object:");
    String objName = clasSettings.getName();
    if(objName != null)
    System.out.println("Name = " + objName);
    String objDescription = clasSettings.getDescription();
    if(objDescription != null)
    System.out.println("Description = " + objDescription);
    java.util.Date creationDate = clasSettings.getCreationDate();
    String creator = clasSettings.getCreatorInfo();
    String targetAttrName = clasSettings.getTargetAttributeName();
    System.out.println("Target attribute name = " + targetAttrName);
    AlgorithmSettings algoSettings = clasSettings.getAlgorithmSettings();
    if(algoSettings == null)
    System.out.println("Failure: clasSettings.getAlgorithmSettings() returns null");
    MiningAlgorithm algo = algoSettings.getMiningAlgorithm();
    if(algo == null) System.out.println("Failure: algoSettings.getMiningAlgorithm() returns null");
    System.out.println("Algorithm Name: " + algo.name());
    MiningFunction function = clasSettings.getMiningFunction();
    if(function == null) System.out.println("Failure: clasSettings.getMiningFunction() returns null");
    System.out.println("Function Name: " + function.name());
    try {
    String costMatrixName = clasSettings.getCostMatrixName();
    if(costMatrixName != null) {
    System.out.println("Cost Matrix Details from the " + costMatrixName
    + " table:");
    displayTable(costMatrixName, "", "order by ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE");
    } catch(Exception jdmExp)
    System.out.println("Failure: clasSettings.getCostMatrixName()throws exception");
    jdmExp.printStackTrace();
    // List of DT algorithm settings
    // treeAlgo.setBuildHomogeneityMetric(TreeHomogeneityMetric.gini);
    // treeAlgo.setMaxDepth(7);
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(0.1, SizeUnit.percentage);
    // treeAlgo.setMinNodeSize( 0.05, SizeUnit.percentage );
    // treeAlgo.setMinNodeSize( 10, SizeUnit.count );
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(20, SizeUnit.count);
    TreeHomogeneityMetric homogeneityMetric = ((OraTreeSettings)algoSettings).getBuildHomogeneityMetric();
    System.out.println("Homogeneity Metric: " + homogeneityMetric.name());
    int intValue = ((OraTreeSettings)algoSettings).getMaxDepth();
    System.out.println("Max Depth: " + intValue);
    double doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSizeForSplit(SizeUnit.percentage);
    System.out.println("MinNodeSizeForSplit (percentage): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSizeForSplit(SizeUnit.count);
    System.out.println("MinNodeSizeForSplit (count): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize();
    SizeUnit unit = ((OraTreeSettings)algoSettings).getMinNodeSizeUnit();
    System.out.println("Min Node Size (" + unit.name() +"): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize( SizeUnit.count );
    System.out.println("Min Node Size (" + SizeUnit.count.name() +"): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize( SizeUnit.percentage );
    System.out.println("Min Node Size (" + SizeUnit.percentage.name() +"): " + m_df.format(doubleValue));
    * This method displays the DT model signature.
    * @param model model object
    * @exception JDMException if failed to retrieve model signature
    public static void displayModelSignature(Model model) throws JDMException
    String modelName = model.getName();
    System.out.println("Model Name: " + modelName);
    ModelSignature modelSignature = model.getSignature();
    System.out.println("ModelSignature Details: ( Attribute Name, Attribute Type )");
    MessageFormat mfSign = new MessageFormat(" ( {0}, {1} )");
    String[] vals = new String[3];
    Collection sortedSet = modelSignature.getAttributes();
    Iterator attrIterator = sortedSet.iterator();
    while(attrIterator.hasNext())
    SignatureAttribute attr = (SignatureAttribute)attrIterator.next();
    vals[0] = attr.getName();
    vals[1] = attr.getDataType().name();
    System.out.println( mfSign.format(vals) );
    * This method displays the DT model details.
    * @param treeModelDetails tree model details object
    * @exception JDMException if failed to retrieve model details
    public static void displayTreeModelDetailsExtensions(TreeModelDetail treeModelDetails)
    throws JDMException
    System.out.println( "\nTreeModelDetail: Model name=" + "treeModel_jdm" );
    TreeNode root = treeModelDetails.getRootNode();
    System.out.println( "\nRoot node: " + root.getIdentifier() );
    // get the info for the tree model
    int treeDepth = ((OraTreeModelDetail) treeModelDetails).getTreeDepth();
    System.out.println( "Tree depth: " + treeDepth );
    int totalNodes = ((OraTreeModelDetail) treeModelDetails).getNumberOfNodes();
    System.out.println( "Total number of nodes: " + totalNodes );
    int totalLeaves = ((OraTreeModelDetail) treeModelDetails).getNumberOfLeafNodes();
    System.out.println( "Total number of leaf nodes: " + totalLeaves );
    Stack nodeStack = new Stack();
    nodeStack.push( root);
    while( !nodeStack.empty() )
    TreeNode node = (TreeNode) nodeStack.pop();
    // display this node
    int nodeId = node.getIdentifier();
    long caseCount = node.getCaseCount();
    Object prediction = node.getPrediction();
    int level = node.getLevel();
    int children = node.getNumberOfChildren();
    TreeNode parent = node.getParent();
    System.out.println( "\nNode id=" + nodeId + " at level " + level );
    if( parent != null )
    System.out.println( "parent: " + parent.getIdentifier() +
    ", children=" + children );
    System.out.println( "Case count: " + caseCount + ", prediction: " + prediction );
    Predicate predicate = node.getPredicate();
    System.out.println( "Predicate: " + predicate.toString() );
    Predicate[] surrogates = node.getSurrogates();
    if( surrogates != null )
    for( int i=0; i<surrogates.length; i++ )
    System.out.println( "Surrogate[" + i + "]: " + surrogates[i] );
    // add child nodes in the stack
    if( children > 0 )
    TreeNode[] childNodes = node.getChildren();
    for( int i=0; i<childNodes.length; i++ )
    nodeStack.push( childNodes[i] );
    TreeNode[] allNodes = treeModelDetails.getNodes();
    System.out.print( "\nNode identifiers by getNodes():" );
    for( int i=0; i<allNodes.length; i++ )
    System.out.print( " " + allNodes[i].getIdentifier() );
    System.out.println();
    // display the node identifiers
    int[] nodeIds = treeModelDetails.getNodeIdentifiers();
    System.out.print( "Node identifiers by getNodeIdentifiers():" );
    for( int i=0; i<nodeIds.length; i++ )
    System.out.print( " " + nodeIds[i] );
    System.out.println();
    TreeNode node = treeModelDetails.getNode(nodeIds.length-1);
    System.out.println( "Node identifier by getNode(" + (nodeIds.length-1) +
    "): " + node.getIdentifier() );
    Rule rule2 = treeModelDetails.getRule(nodeIds.length-1);
    System.out.println( "Rule identifier by getRule(" + (nodeIds.length-1) +
    "): " + rule2.getRuleIdentifier() );
    // get the rules and display them
    Collection ruleColl = treeModelDetails.getRules();
    Iterator ruleIterator = ruleColl.iterator();
    while( ruleIterator.hasNext() )
    Rule rule = (Rule) ruleIterator.next();
    int ruleId = rule.getRuleIdentifier();
    Predicate antecedent = (Predicate) rule.getAntecedent();
    Predicate consequent = (Predicate) rule.getConsequent();
    System.out.println( "\nRULE " + ruleId + ": support=" +
    rule.getSupport() + " (abs=" + rule.getAbsoluteSupport() +
    "), confidence=" + rule.getConfidence() );
    System.out.println( antecedent );
    System.out.println( "=======>" );
    System.out.println( consequent );
    * Display classification test metrics object
    * @param testMetrics classification test metrics object
    * @exception JDMException if failed to retrieve test metric details
    public static void displayTestMetricDetails(
    ClassificationTestMetrics testMetrics) throws JDMException
    // Retrieve the classification test metrics details
    // Test Metrics Name
    System.out.println("Test Metrics Name = " + testMetrics.getName());
    // Model Name
    System.out.println("Model Name = " + testMetrics.getModelName());
    // Test Data Name
    System.out.println("Test Data Name = " + testMetrics.getTestDataName());
    // Accuracy
    System.out.println("Accuracy = " + m_df.format(testMetrics.getAccuracy().doubleValue()));
    // Confusion Matrix
    ConfusionMatrix confusionMatrix = testMetrics.getConfusionMatrix();
    Collection categories = confusionMatrix.getCategories();
    Iterator xIterator = categories.iterator();
    System.out.println("Confusion Matrix: Accuracy = " + m_df.format(confusionMatrix.getAccuracy()));
    System.out.println("Confusion Matrix: Error = " + m_df.format(confusionMatrix.getError()));
    System.out.println("Confusion Matrix: ( Actual, Predicted, Value )");
    MessageFormat mf = new MessageFormat(" ( {0}, {1}, {2} )");
    String[] vals = new String[3];
    while(xIterator.hasNext())
    Object actual = xIterator.next();
    vals[0] = actual.toString();
    Iterator yIterator = categories.iterator();
    while(yIterator.hasNext())
    Object predicted = yIterator.next();
    vals[1] = predicted.toString();
    long number = confusionMatrix.getNumberOfPredictions(actual, predicted);
    vals[2] = Long.toString(number);
    System.out.println(mf.format(vals));
    // Lift
    Lift lift = testMetrics.getLift();
    System.out.println("Lift Details:");
    System.out.println("Lift: Target Attribute Name = " + lift.getTargetAttributeName());
    System.out.println("Lift: Positive Target Value = " + lift.getPositiveTargetValue());
    System.out.println("Lift: Total Cases = " + lift.getTotalCases());
    System.out.println("Lift: Total Positive Cases = " + lift.getTotalPositiveCases());
    int numberOfQuantiles = lift.getNumberOfQuantiles();
    System.out.println("Lift: Number Of Quantiles = " + numberOfQuantiles);
    System.out.println("Lift: ( QUANTILE_NUMBER, QUANTILE_TOTAL_COUNT, QUANTILE_TARGET_COUNT, PERCENTAGE_RECORDS_CUMULATIVE,CUMULATIVE_LIFT,CUMULATIVE_TARGET_DENSITY,TARGETS_CUMULATIVE, NON_TARGETS_CUMULATIVE, LIFT_QUANTILE, TARGET_DENSITY )");
    MessageFormat mfLift = new MessageFormat(" ( {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9} )");
    String[] liftVals = new String[10];
    for(int iQuantile=1; iQuantile<= numberOfQuantiles; iQuantile++)
    liftVals[0] = Integer.toString(iQuantile); //QUANTILE_NUMBER
    liftVals[1] = Long.toString(lift.getCases((iQuantile-1), iQuantile));//QUANTILE_TOTAL_COUNT
    liftVals[2] = Long.toString(lift.getNumberOfPositiveCases((iQuantile-1), iQuantile));//QUANTILE_TARGET_COUNT
    liftVals[3] = m_df.format(lift.getCumulativePercentageSize(iQuantile).doubleValue());//PERCENTAGE_RECORDS_CUMULATIVE
    liftVals[4] = m_df.format(lift.getCumulativeLift(iQuantile).doubleValue());//CUMULATIVE_LIFT
    liftVals[5] = m_df.format(lift.getCumulativeTargetDensity(iQuantile).doubleValue());//CUMULATIVE_TARGET_DENSITY
    liftVals[6] = Long.toString(lift.getCumulativePositiveCases(iQuantile));//TARGETS_CUMULATIVE
    liftVals[7] = Long.toString(lift.getCumulativeNegativeCases(iQuantile));//NON_TARGETS_CUMULATIVE
    liftVals[8] = m_df.format(lift.getLift(iQuantile, iQuantile).doubleValue());//LIFT_QUANTILE
    liftVals[9] = m_df.format(lift.getTargetDensity(iQuantile, iQuantile).doubleValue());//TARGET_DENSITY
    System.out.println(mfLift.format(liftVals));
    // ROC
    ReceiverOperatingCharacterics roc = testMetrics.getROC();
    System.out.println("ROC Details:");
    System.out.println("ROC: Area Under Curve = " + m_df.format(roc.getAreaUnderCurve()));
    int nROCThresh = roc.getNumberOfThresholdCandidates();
    System.out.println("ROC: Number Of Threshold Candidates = " + nROCThresh);
    System.out.println("ROC: ( INDEX, PROBABILITY, TRUE_POSITIVES, FALSE_NEGATIVES, FALSE_POSITIVES, TRUE_NEGATIVES, TRUE_POSITIVE_FRACTION, FALSE_POSITIVE_FRACTION )");
    MessageFormat mfROC = new MessageFormat(" ( {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7} )");
    String[] rocVals = new String[8];
    for(int iROC=1; iROC <= nROCThresh; iROC++)
    rocVals[0] = Integer.toString(iROC); //INDEX
    rocVals[1] = m_df.format(roc.getProbabilityThreshold(iROC));//PROBABILITY
    rocVals[2] = Long.toString(roc.getPositives(iROC, true));//TRUE_POSITIVES
    rocVals[3] = Long.toString(roc.getNegatives(iROC, false));//FALSE_NEGATIVES
    rocVals[4] = Long.toString(roc.getPositives(iROC, false));//FALSE_POSITIVES
    rocVals[5] = Long.toString(roc.getNegatives(iROC, true));//TRUE_NEGATIVES
    rocVals[6] = m_df.format(roc.getHitRate(iROC));//TRUE_POSITIVE_FRACTION
    rocVals[7] = m_df.format(roc.getFalseAlarmRate(iROC));//FALSE_POSITIVE_FRACTION
    System.out.println(mfROC.format(rocVals));
    private static void displayTable(String tableName, String whereCause, String orderByColumn)
    StringBuffer emptyCol = new StringBuffer(" ");
    java.sql.Connection dbConn =
    ((OraConnection)m_dmeConn).getDatabaseConnection();
    PreparedStatement pStmt = null;
    ResultSet rs = null;
    try
    pStmt = dbConn.prepareStatement("SELECT * FROM " + tableName + " " + whereCause + " " + orderByColumn);
    rs = pStmt.executeQuery();
    ResultSetMetaData rsMeta = rs.getMetaData();
    int colCount = rsMeta.getColumnCount();
    StringBuffer header = new StringBuffer();
    System.out.println("Table : " + tableName);
    //Build table header
    for(int iCol=1; iCol<=colCount; iCol++)
    String colName = rsMeta.getColumnName(iCol);
    header.append(emptyCol.replace(0, colName.length(), colName));
    emptyCol = new StringBuffer(" ");
    System.out.println(header.toString());
    //Write table data
    while(rs.next())
    StringBuffer rowContent = new StringBuffer();
    for(int iCol=1; iCol<=colCount; iCol++)
    int sqlType = rsMeta.getColumnType(iCol);
    Object obj = rs.getObject(iCol);
    String colContent = null;
    if(obj instanceof java.lang.Number)
    {
    try
    {
    BigDecimal bd = (BigDecimal)obj;
    if(bd.scale() > 5)
    colContent = m_df.format(obj);
    else
    colContent = bd.toString();
    } catch(Exception anyExp) {
    colContent = m_df.format(obj);
    }
    } else
    {
    if(obj == null)
    colContent = "NULL";
    else
    colContent = obj.toString();
    }
    rowContent.append(" "+emptyCol.replace(0, colContent.length(), colContent));
    emptyCol = new StringBuffer(" ");
    System.out.println(rowContent.toString());
    } catch(Exception anySqlExp) {
    anySqlExp.printStackTrace();
    }//Ignore
    private static void createTableForTestMetrics(String applyOutputTableName,
    String testDataName,
    String testMetricsInputTableName)
    //0. need to execute the following in the schema
    String sqlCreate =
    "create table " + testMetricsInputTableName + " as " +
    "select a.id as id, prediction, probability, affinity_card " +
    "from " + testDataName + " a, " + applyOutputTableName + " b " +
    "where a.id = b.id";
    java.sql.Connection dbConn = ((OraConnection) m_dmeConn).getDatabaseConnection();
    Statement stmt = null;
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate( sqlCreate );
    catch( Exception anySqlExp )
    System.out.println( anySqlExp.getMessage() );
    anySqlExp.printStackTrace();
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    private static void clean()
    {
        java.sql.Connection dbConn =
            ((OraConnection) m_dmeConn).getDatabaseConnection();
        // Drop the apply output tables, including the ones created for the
        // test metrics task; failures (e.g. table does not exist) are ignored.
        String[] tablesToDrop = {
            "TREE_APPLY_OUTPUT1_JDM",
            "TREE_APPLY_OUTPUT2_JDM",
            "TREE_APPLY_OUTPUT3_JDM",
            "DT_TEST_APPLY_OUTPUT_COST_JDM",
            "DT_TEST_APPLY_OUTPUT_JDM"
        };
        for(String tableName : tablesToDrop) {
            Statement stmt = null;
            try {
                stmt = dbConn.createStatement();
                stmt.executeUpdate("DROP TABLE " + tableName);
            } catch(Exception anySqlExp) {}//Ignore
            finally {
                try {
                    if(stmt != null) stmt.close();
                } catch(SQLException sqlExp) {}
            }
        }
        //Drop the model
        try {
            m_dmeConn.removeObject("treeModel_jdm", NamedObject.model);
        } catch(Exception jdmExp) {}
        // Drop test metrics results created by TestMetricsTask
        try {
            m_dmeConn.removeObject("dtTestMetricsWithCost_jdm", NamedObject.testMetrics);
        } catch(Exception jdmExp) {}
        try {
            m_dmeConn.removeObject("dtTestMetrics_jdm", NamedObject.testMetrics);
        } catch(Exception jdmExp) {}
    }
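clean() repeats the same drop-and-ignore pattern for every object: attempt the cleanup step, swallow whatever it throws, and move on so one missing table cannot abort the rest. The pattern itself can be sketched without a database; quietly and the Action interface here are illustrative names, not part of the original sample.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class QuietCleanup {
    interface Action { void run() throws Exception; }

    // Run a cleanup step and ignore any failure, the way clean() does when
    // a table or model it tries to drop does not exist.
    static void quietly(Action action) {
        try {
            action.run();
        } catch (Exception ignored) {} // cleanup failures are non-fatal
    }

    public static void main(String[] args) {
        AtomicInteger dropped = new AtomicInteger();
        quietly(dropped::incrementAndGet);
        quietly(() -> { throw new IllegalStateException("table does not exist"); });
        quietly(dropped::incrementAndGet);
        // The failing step did not stop the two successful ones.
        System.out.println("dropped=" + dropped.get()); // dropped=2
    }
}
```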

    Hi,
    I am not sure whether this will help, but someone else was getting an error with a java.sql.SQLException: Unsupported feature. Here is a link to the fix: http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=3&t=007947
    Best wishes
    Michael
