Oracle Data Modeler Version 4.1.0.866 -- issue comparing model vs. database (indexes for PK, UK)

Hello,
I have a Data Modeler design with a relational and a physical model; the data model has three tables. The three tables exist in a database. The primary key is generated using an index, and when I generate the DDL it only shows ALTER TABLE ... ADD CONSTRAINT .... Up to here, everything is OK.
If I compare the database with Data Modeler, I get differences in the indexes that Oracle generated when I created the primary keys.
Is this a bug, or is there some way to fix it through the configuration options?
Thanks

Hello,
So you have a Relational Model containing 3 Tables.  I assume the Physical Model is for an Oracle database.
The primary key is generated using an index, and when I generate the DDL it only shows ALTER TABLE ... ADD CONSTRAINT .... Up to here, everything is OK.
I assume you are doing a DDL generation of your model (using the Generate DDL button above the diagram or using Export > DDL File from the File menu).
In the DDL Generation Options phase of the DDL generation, can you go to the Tables tab and check that the "Selected" check box for the relevant Tables is selected.
(This should normally be selected, but if you deselected it in a previous DDL generation, it will remember that setting.)
The "PK and UK Constraints" and "Indexes" tabs also allow you to control whether the constraints and Indexes are included in the generated DDL.
If I compare the database with Data Modeler, I get differences in the indexes that Oracle generated when I created the primary keys.
Is this a bug, or is there some way to fix it through the configuration options?
I assume here that you have run your generated DDL against your database and you are then doing a File > Import > Data Dictionary to compare your database definitions with your initial model.
It is likely that there will be some differences shown, due to defaults used by the database when the DDL is input (e.g. storage properties for your indexes and tables).
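For example, a primary key added via ALTER TABLE implicitly creates a unique index (unless a usable index already exists), and that index then appears in the comparison. A small illustration (the table and constraint names here are invented):
-- Illustration only; names are invented.
CREATE TABLE dept (
dept_id NUMBER,
dept_name VARCHAR2(50)
);
-- The database silently creates a unique index (named after the
-- constraint) to enforce the primary key.
ALTER TABLE dept ADD CONSTRAINT dept_pk PRIMARY KEY (dept_id);
-- The implicitly created index is then visible in the data dictionary:
SELECT index_name, uniqueness FROM user_indexes WHERE table_name = 'DEPT';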
I suggest you examine the differences (which will be highlighted in red on the Details, Storage Details or Physical Details tabs) for each object in the Compare Models dialog, and provided they are acceptable, select the Merge button to merge them into your model.
If you do not want some of the property differences to be merged, you should unset the "Selected" check box for that property before merging.
Note that it is possible to exclude specific properties from the comparison by selecting the Options tab in the Compare Models dialog, and then selecting the Properties Filter, Physical Properties Filter or Storage Properties Filter tab as appropriate.
I hope this helps.
David

Similar Messages

  • Oracle Data Mining - How to use PREDICTION function with a regression model

    I've been searching this site for Data Mining Q&A specifically related to the prediction function, and I wasn't able to find anything useful on this topic. So I hope that posting it as a new thread will get useful answers for a beginner in Oracle Data Mining.
    So here is my issue with the prediction function:
    Given a table with 17 weeks of sales for a given product, I would like to do a forecast to predict the sales for week 18.
    For that let's start preparing the necessary objects and data:
    CREATE TABLE T_SALES (
    PURCHASE_WEEK DATE,
    WEEK NUMBER,
    SALES NUMBER
    );
    SET DEFINE OFF;
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('11/27/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 1, 55488);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/04/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 2, 78336);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/11/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 3, 77248);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/18/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 4, 106624);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/25/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 5, 104448);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/01/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 6, 90304);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/08/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 7, 44608);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/15/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 8, 95744);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/22/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 9, 129472);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/29/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 10, 110976);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/05/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 11, 139264);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/12/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 12, 87040);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/19/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 13, 47872);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/26/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 14, 120768);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('03/05/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 15, 98463.65);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('03/12/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 16, 67455.84);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('03/19/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 17, 100095.66);
    COMMIT;
    There are a lot of linear regression models and approaches for sales forecasting out on the market; however, I will focus on what Oracle 11g offers, i.e. the package SYS.DBMS_DATA_MINING, to create a model using regression as the mining function and then, once the model is created, to apply the prediction function to the model.
    Therefore I'll have to go through a few steps:
    i) normalization of data
    CREATE OR REPLACE VIEW t_sales_norm AS
    SELECT week,
    sales,
    (sales - 91423.95)/27238.3693126778 sales_norm
    FROM t_sales;
    where the numerical values are the mean and the standard deviation:
    select avg(sales) from t_sales;
    91423.95
    select stddev(sales) from t_sales;
    27238.3693126778
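    (As a side note, the hard-coded statistics can be avoided altogether; a sketch of an equivalent view using analytic functions:)
    -- Same normalization without hard-coded constants: compute the mean
    -- and standard deviation over the whole table with analytic functions.
    CREATE OR REPLACE VIEW t_sales_norm AS
    SELECT week,
    sales,
    (sales - AVG(sales) OVER ()) / STDDEV(sales) OVER () AS sales_norm
    FROM t_sales;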
    ii) auto-correlation. For the sake of simplicity, I will safely assume that there is no auto-correlation (no repetitive pattern in sales among the weeks). Therefore, to define the lag data I will consider the whole set:
    CREATE OR REPLACE VIEW t_sales_lag AS
    SELECT a.*
    FROM (SELECT week,
    sales,
    LAG(sales_norm, 1) OVER (ORDER BY week) L1,
    LAG(sales_norm, 2) OVER (ORDER BY week) L2,
    LAG(sales_norm, 3) OVER (ORDER BY week) L3,
    LAG(sales_norm, 4) OVER (ORDER BY week) L4,
    LAG(sales_norm, 5) OVER (ORDER BY week) L5,
    LAG(sales_norm, 6) OVER (ORDER BY week) L6,
    LAG(sales_norm, 7) OVER (ORDER BY week) L7,
    LAG(sales_norm, 8) OVER (ORDER BY week) L8,
    LAG(sales_norm, 9) OVER (ORDER BY week) L9,
    LAG(sales_norm, 10) OVER (ORDER BY week) L10,
    LAG(sales_norm, 11) OVER (ORDER BY week) L11,
    LAG(sales_norm, 12) OVER (ORDER BY week) L12,
    LAG(sales_norm, 13) OVER (ORDER BY week) L13,
    LAG(sales_norm, 14) OVER (ORDER BY week) L14,
    LAG(sales_norm, 15) OVER (ORDER BY week) L15,
    LAG(sales_norm, 16) OVER (ORDER BY week) L16,
    LAG(sales_norm, 17) OVER (ORDER BY week) L17
    FROM t_sales_norm) a;
    iii) choosing the training data. Again, I will choose the whole set of 17 weeks, as for this discussion it is not relevant how big the set of training data should be.
    CREATE OR REPLACE VIEW t_sales_train AS
    SELECT week, sales,
    L1, L2, L3, L4, L5, L6, L7, L8, L9, L10,
    L11, L12, L13, L14, L15, L16, L17
    FROM t_sales_lag a
    WHERE week >= 1 AND week <= 17;
    iv) build the model
    -- exec SYS.DBMS_DATA_MINING.DROP_MODEL('t_SVM');
    BEGIN
    sys.DBMS_DATA_MINING.CREATE_MODEL( model_name => 't_SVM',
    mining_function => dbms_data_mining.regression,
    data_table_name => 't_sales_train',
    case_id_column_name => 'week',
    target_column_name => 'sales');
    END;
    v) finally, where I am confused is applying the prediction function against this model and making sense of the results.
    In a Google search I found 2 ways of applying this function to my case.
    One way is the following:
    SELECT week, sales,
    PREDICTION(t_SVM USING
    LAG(sales,1) OVER (ORDER BY week) as l1,
    LAG(sales,2) OVER (ORDER BY week) as l2,
    LAG(sales,3) OVER (ORDER BY week) as l3,
    LAG(sales,4) OVER (ORDER BY week) as l4,
    LAG(sales,5) OVER (ORDER BY week) as l5,
    LAG(sales,6) OVER (ORDER BY week) as l6,
    LAG(sales,7) OVER (ORDER BY week) as l7,
    LAG(sales,8) OVER (ORDER BY week) as l8,
    LAG(sales,9) OVER (ORDER BY week) as l9,
    LAG(sales,10) OVER (ORDER BY week) as l10,
    LAG(sales,11) OVER (ORDER BY week) as l11,
    LAG(sales,12) OVER (ORDER BY week) as l12,
    LAG(sales,13) OVER (ORDER BY week) as l13,
    LAG(sales,14) OVER (ORDER BY week) as l14,
    LAG(sales,15) OVER (ORDER BY week) as l15,
    LAG(sales,16) OVER (ORDER BY week) as l16,
    LAG(sales,17) OVER (ORDER BY week) as l17
    ) pred
    FROM t_sales a;
    WEEK, SALES, PREDICTION
    1, 55488, 68861.084076412
    2, 78336, 104816.995823913
    3, 77248, 104816.995823913
    4, 106624, 104816.995823913
    As you can see, for the first row there is a value of 68861.084, and for the remaining 16 rows it is always one and the same value, 104816.995.
    Question: where is my week 18 prediction? Or maybe I should say: which one is it?
    Another way of using prediction, even more confusing, is against the lag table:
    SELECT week, sales,
    PREDICTION(t_svm USING a.*) pred
    FROM t_sales_lag a;
    WEEK, SALES, PREDICTION
    1, 55488, 68861.084076412
    2, 78336, 75512.3642096908
    3, 77248, 85711.5003385927
    4, 106624, 98160.5009687461
    Each of the 17 rows gets its own 'prediction' result.
    Same question: which one is my week 18 prediction?
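    (For what it's worth, a sketch of one explicit way to get a week-18 score, assuming the t_SVM model and t_sales_norm view defined above: build a single input row whose lag n is the normalized sales of week 18 - n, then score just that row.)
    -- Sketch: one input row for week 18; lag n = normalized sales of week 18 - n.
    WITH w18 AS (
    SELECT MAX(CASE WHEN week = 17 THEN sales_norm END) AS l1,
    MAX(CASE WHEN week = 16 THEN sales_norm END) AS l2,
    MAX(CASE WHEN week = 15 THEN sales_norm END) AS l3,
    MAX(CASE WHEN week = 14 THEN sales_norm END) AS l4,
    MAX(CASE WHEN week = 13 THEN sales_norm END) AS l5,
    MAX(CASE WHEN week = 12 THEN sales_norm END) AS l6,
    MAX(CASE WHEN week = 11 THEN sales_norm END) AS l7,
    MAX(CASE WHEN week = 10 THEN sales_norm END) AS l8,
    MAX(CASE WHEN week = 9 THEN sales_norm END) AS l9,
    MAX(CASE WHEN week = 8 THEN sales_norm END) AS l10,
    MAX(CASE WHEN week = 7 THEN sales_norm END) AS l11,
    MAX(CASE WHEN week = 6 THEN sales_norm END) AS l12,
    MAX(CASE WHEN week = 5 THEN sales_norm END) AS l13,
    MAX(CASE WHEN week = 4 THEN sales_norm END) AS l14,
    MAX(CASE WHEN week = 3 THEN sales_norm END) AS l15,
    MAX(CASE WHEN week = 2 THEN sales_norm END) AS l16,
    MAX(CASE WHEN week = 1 THEN sales_norm END) AS l17
    FROM t_sales_norm
    )
    SELECT PREDICTION(t_SVM USING w18.*) AS week18_pred
    FROM w18;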
    Thank you very much for all help that you can provide on this matter.
    It is as always highly appreciated.
    Serge F.

    Kindly let me know how to give input to predict the values. For example, the script to create the model is as follows:
    drop table data_4svm;
    drop table svm_settings;
    begin
    dbms_data_mining.drop_model('MODEL_SVMR1');
    end;
    /
    CREATE TABLE data_4svm (
    id NUMBER,
    a NUMBER,
    b NUMBER
    );
    INSERT INTO data_4svm VALUES (1,0,0);
    INSERT INTO data_4svm VALUES (2,1,1);
    INSERT INTO data_4svm VALUES (3,2,4);
    INSERT INTO data_4svm VALUES (4,3,9);
    commit;
    --setting table
    CREATE TABLE svm_settings (
    setting_name VARCHAR2(30),
    setting_value VARCHAR2(30)
    );
    --settings
    BEGIN
    INSERT INTO svm_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines);
    INSERT INTO svm_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.svms_kernel_function, dbms_data_mining.svms_linear);
    INSERT INTO svm_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.svms_active_learning, dbms_data_mining.svms_al_enable);
    COMMIT;
    END;
    --create model
    BEGIN
    DBMS_DATA_MINING.CREATE_MODEL(
    model_name => 'Model_SVMR1',
    mining_function => dbms_data_mining.regression,
    data_table_name => 'data_4svm',
    case_id_column_name => 'ID',
    target_column_name => 'B',
    settings_table_name => 'svm_settings');
    END;
    -- to show the output
    select class, attribute_name, attribute_value, coefficient
    from table(dbms_data_mining.get_model_details_svm('MODEL_SVMR1')) a, table(a.attribute_set) b
    order by abs(coefficient) desc;
    -- to get predicted values (Q1)
    SELECT PREDICTION(MODEL_SVMR1 USING *
    ) pred
    FROM data_4svm a;
    Here I am not sure how to predict B values. Please suggest the proper usage. Moreover, in a GUI (a .NET Windows Forms application), how can the user give input and the system respond using the Q1 query?
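    (A minimal sketch of one way to score a new input inline; the literal 5 for column A is just an arbitrary example value:)
    -- Predict B for a new value of A without inserting a row.
    SELECT PREDICTION(MODEL_SVMR1 USING 5 AS a) AS predicted_b
    FROM dual;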

  • Data Modeler : Comparing models

    Hi,
    A client of mine is trying to compare data models, and they have some questions:
    1) Is there an easy way to change the "from" and "to" models? I mean, depending on the first file you open, you set the "from" model. It would be good to be able to change this: compare model A against B, and B against A.
    2) Another thing: when you compare models, if you have the "Tables" option selected and then expand the menu, all the tables are unmarked. Is that the expected behaviour?
    3) Also, is it possible to sync with an SVN repository?
    Rgds
    L

    Hi Luis,
    what version of Data Modeler do you use?
    1) Is there an easy way to change the "from" and "to" models? -- No, not in 3.0.
    2) When you compare models with the "Tables" option selected and then expand the menu, all the tables are unmarked; is that expected? -- They are unmarked if there are no differences; you can use the check box in front of "Tables" to mark them all.
    3) Is it possible to sync with an SVN repository? -- Yes, in 3.0 (including the early adopter versions); you can look at demonstrations here: http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html
    Philip

  • Oracle Data Modeler commit change seg physical table

    Hello,
    Sometimes when I save a data model, Oracle Data Modeler (under SVN) reports as an outgoing change a change to a segment of a physical table; if you commit that change, duplicate files are produced in the physical model. Later, when you open the physical model, it gives errors about the duplicate files.
    How can this error be corrected? Why does the segment change occur?
    Thanks

    Hello,
    Version 4.0.0.833.
    There is no pattern; it happens occasionally when saving the data model. How can this error be corrected? Why does the segment change occur?
    Thanks

  • Error when creating a classification model with Oracle Data Miner 10.1.0.2

    Hello everybody!
    Please help me!
    I have created a classification model with Oracle Data Mining 10.1.0.2 on Oracle 10g Release 1 (10.1.0.3), with the following options:
    Single record per case
    Adaptive Bayes Network algorithm
    SingleFeatureBuild
    When it finished, I received an error with the following detail:
    ORA-40101: Data Mining System Error ODM_ABN_MODEL-ODM_ABN_BUILD--20002
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "DMSYS.ODM_ABN_MODEL", line 458
    ORA-06512: at "DMSYS.ODM_ABN_MODEL", line 5664
    ORA-40101: Data Mining System Error ODM_ABN_MODEL-ODM_ABN_BUILD--20012
    ORA-06512: at line 1
    Hope you help me!
    Thanks!

    Hi,
    Can you provide instructions that recreate this problem using sample data we provide in the SH schema?
    Thanks, Mark

  • Ora-00922 when building a model with Oracle Data Miner

    Hi,
    I'm using Oracle Data Miner 10.1.0.2 with Database 10.1.0.3.0 under Linux x86.
    The specific patch 10.1.0.3.1 for Data Mining was applied because I didn't manage to execute the models in the tutorial.
    After that, everything was fine, and I succeeded in creating my own models and executing them.
    But a few months later, i.e. now, I'm trying to build some other models, and when building I again get ORA-00922: missing or invalid option.
    I tried applying the patch once again, but it had no effect!
    Has anyone ever faced this problem?
    Eric

    Hi Xiafang,
    the database was not upgraded, but it was used to build some ETL mappings.
    Here are the results of the SQL statements you gave me:
    SQL> connect / as sysdba
    Connected.
    SQL> select value from v$option where parameter like '%Data Minin%';
    VALUE
    TRUE
    FALSE
    SQL> select comp_id, version, status from dba_registry;
    COMP_ID VERSION STATUS
    WK 10.1.0.3.0 VALID
    EM 10.1.0.3.0 VALID
    XDB 10.1.0.3.0 VALID
    AMD 10.1.0.3.0 VALID
    CONTEXT 10.1.0.3.0 VALID
    SDO 10.1.0.3.0 VALID
    ORDIM 10.1.0.3.0 VALID
    EXF 10.1.0.3.0 VALID
    OWM 10.1.0.2.0 VALID
    ODM 10.1.0.3.1 VALID
    CATALOG 10.1.0.3.0 VALID
    COMP_ID VERSION STATUS
    CATPROC 10.1.0.3.0 VALID
    JAVAVM 10.1.0.3.0 VALID
    XML 10.1.0.3.0 VALID
    CATJAVA 10.1.0.3.0 VALID
    APS 10.1.0.3.0 VALID
    XOQ 10.1.0.3.0 VALID
    Is the DM option enabled? I think yes, but the SQL returns both TRUE and FALSE.
    So, should I enable DM with the commands you gave me?
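    (Selecting the parameter column alongside the value would show which option each row refers to; most likely two options match the pattern, e.g. "Data Mining" and "Data Mining Scoring Engine":)
    -- Show which option each TRUE/FALSE value belongs to.
    SELECT parameter, value
    FROM v$option
    WHERE parameter LIKE '%Data Minin%';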
    Eric

  • Oracle Data Modeling

    Hello, I downloaded Oracle datamodeling-1.5.1-525 and created a logical model of my future database. My database has about 127 entities, 974 attributes and 233 relations.
    I want to generate DDL statements for Oracle and create my schema in my database (Oracle SE 10g R2). I tried to create a relational model through Designer -> Engineer to Relational Model (Ctrl-Shift-F), but nothing happened. The program does nothing: no errors, no models, nothing.
    Can anyone help me, please?
    Thank you very much!
    PS: Excuse me, but I'm Spanish and my English is horrible.

    Hi Roberto,
    do you still use build 525? You can try the latest version here [http://www.oracle.com/technology/products/database/datamodeler/index.html]. Please pay attention to the "release notes" and the remark there - "Note for designs created with earlier versions of Data Modeler – persistence has changed and it’s strongly recommended that you use the “Save As” functionality to create new version of designs".
    The new version will resolve possible inconsistencies, and you will then be able to engineer to a relational model.
    Philip

  • Oracle Data Modeler - Impact Analysis option

    Hi
    I am using Oracle Data Modeler 3.1.0683 and reverse engineering my existing relational models to a logical model. I have 3 relational models and am reverse engineering them to 1 logical model.
    In the logical model, under the entity's properties -> Impact Analysis, how do I add which relational table the logical entity depends on? For example, in the relational models I have the tables Class, Student and Teacher in 3 separate relational models. In the logical model I created an entity Person which depends on the table Student from relational model 1 and Teacher from relational model 2; I want to view (add) these tables under "Impact Analysis".
    The help window says:
    "Impact Analysis
    Enables you to view and specify information to be used by Oracle Warehouse Builder for impact analysis."
    Though I couldn't figure out where to specify this.
    Thanks in advance.
    Regards
    Lahoria

    So any suggestions how I can bring those tables (as mentioned in the original post) to show up in Impact Analysis? -- If your entity is the result of reverse engineering from a relational model, then you can find the related table under Mappings. The same applies if you engineer the logical model to a relational model.
    If you start from a column, then you can see the related attribute in the logical model, and its usage in data flow diagrams and dimensional models.
    Philip

  • Oracle Data Modeler price list

    Hi everyone,
    We would like to install Oracle Data Modeler in the office. We have seen on the web site that we can download it for free, but the price list announces it at $3,000 per machine.
    We don't understand how this works.
    Maybe there is a larger version, or it is only free for personal use...
    Thanks for clearing that up.
    Cheers

    Hi,
    yes, I think we are thinking of the same product.
    Here
    [http://www.oracle.com/technology/products/database/datamodeler/index.html]
    My question is:
    Everywhere on the web it is said that it's free to use, but here you see on the price list that it is a priced product.
    [http://www.oracle.com/technology/products/database/datamodeler/html/pricing_faq.html]
    What we would like to know is why it is free to download, but not free to use.
    Thanks for your reply

  • Error installing the Oracle Data Miner repository

    Hi,
    I'm learning Oracle Data Miner. I'm trying to install the repository following this guide: Setting Up Oracle Data Miner 4.0
    But when I start the Data Miner repository installation (step 7), SQL Developer shows "Task failed".
    Logs:
    anonymous block completed
    anonymous block completed
    Drop public synonyms created by ODMRSYS.
    anonymous block completed
    anonymous block completed
    Total Number of Objects: 0
    Total Number of Objects Dropped: 0
    Total Number of Objects Failed to Drop: 0
    I use a pluggable database.
    Oracle logs are empty.
    Regards,
    Irina

    Hi Denny,
    Thank you for your answer.
    The current versions:
    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    PL/SQL Release 12.1.0.2.0 - Production
    "CORE 12.1.0.2.0 Production"
    TNS for 64-bit Windows: Version 12.1.0.2.0 - Production
    NLSRTL Version 12.1.0.2.0 - Production
    The logs contain the same text.
    I was trying to use Oracle 11g. There was the same error, except that additionally the test examples didn't work properly: there was the error "ORA-40206: invalid setting value for setting name SVMS_CONV_TOLERANCE" for the regression model.
    I think the reason is that I use Windows 8 with the Russian language. On English Linux with Oracle 12c it works properly.
    Regards,
    Irina

  • Designer Vs. Oracle Data warehouse builder

    Dear all,
    Currently I'm responsible for building a data warehousing project using an Oracle database. I'm trying to decide on a tool for modelling my data warehouse. I have two options:
    1) Designer: we have some experience with this tool and we are using it for our main OLTP application.
    2) Oracle Data Warehouse builder: we are using this to design our ETL processes.
    I want to get some advice on whether OWB is capable of modelling my data warehouse and of doing a retrofit action. I am also trying to standardize on the tools used in the Data Warehouse department (currently we are using only OWB).
    I will appreciate any other advice to help in my selection process.
    Best Regards,
    Bilal

    Hi,
    In my experience this choice depends on the implementation of the data warehouse. If you are building a "pure" Kimball-style dimensional data warehouse, you should be able to do this using OWB. I have architected such a DW in the past using only OWB, so I am speaking from experience.
    If on the other hand you are planning to implement an Inmon-style CIF, if your requirements include an operational data store (ODS), or if you for any other reason anticipate that you are going to be doing a lot of ER modelling, then I would not recommend using the current release of OWB for modelling. (Note however that there are significant improvements to the modelling capabilities in the Paris release of OWB, so this may change in the future.)
    The advantage of improved maintainability when using a single tool needs to be weighed against the improved functionality if you choose a combination of the two. In the "two tool" scenario, strict development and deployment routines need to be enforced to prevent the model in Designer from getting out of sync with the metadata in OWB. (Consider the effect of a developer making a change to a table definition in OWB and deploying it directly to the database without updating the model in Designer.)
    Hope this helps.
    Regards,
    Roald

  • Oracle Data Miner (Bug)

    Hello All,
    I am using Oracle Data Miner 11g to build an association model (market basket analysis). When I run the model and view the results, I find that the values of Antecedent Support % and Consequent Support % are swapped, i.e. the support value of the antecedent is listed in the Consequent Support % column.
    Is this a bug in Data Miner, or am I mistaken?

    Hi,
    Yes you are correct.
    You can prove it by adding a Model Details node and connecting the AR Build node to it.
    The rule output from the Model Details node shows the true values.
    We have opened a bug to fix this and it will be in the upcoming SQL Dev 4.0 release.
    Thanks for the help, Mark

  • Oracle data miner

    Hello,
    I am new to Oracle. I am planning on implementing a credit card fraud management system as my academic project... I have wanted to get my hands dirty with Oracle for quite some time and thought that it might be a good idea to develop a fraud management system using Java and Oracle.
    I don't know anything about Oracle Data Miner. All I know is that it provides you with certain models, and you can train them on your data and then check the predictions from those models using simple queries. What I want to know is: is that everything Oracle Data Miner provides, a bunch of models/algorithms? Can I write a new algorithm or modify an existing one?
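    (To illustrate what such a scoring query can look like -- the model and table names here are purely hypothetical:)
    -- Hypothetical sketch: fraud_model and transactions are placeholders
    -- for a trained classification model and a table of cases to score.
    SELECT t.txn_id,
    PREDICTION(fraud_model USING t.*) AS predicted_class,
    PREDICTION_PROBABILITY(fraud_model USING t.*) AS confidence
    FROM transactions t;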
    I am sorry for such a silly question, but any help regarding this is highly appreciated.
    Thanks,
    Ali

    Hi Attila,
    You can find the transform package at the following link:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_dmtran.htm#i1013223
    The ODM mining algorithms use techniques that are not exposed.
    However, there are a lot of features in the db that are available to you to build your own.
    As for an optimization package, I know there are some internal implementations but I am not aware of one in the db that is exposed.
    Thanks, Mark

  • Oracle Data Miner ROC Chart

    Can anyone explain some things about the ROC chart for me?
    How is what is shown in the ROC chart related to the confusion matrix next to it in Oracle Data Miner?
    How is this ROC chart constructed? How is it possible that it represents the decision tree model I made?
    I hope somebody can help me.

    Hi,
    This explanation comes from one of our algorithm engineers:
    "The ROC analysis applies to binary classification problems. One of the classes is selected as a "positive" one. The ROC chart plots the true positive rate as a function of the false positive rate. It is parametrized by the probability threshold values. The true positive rate represents the fraction of positive cases that were correctly classified by the model. The false positive rate represents the fraction of negative cases that were incorrectly classified as positive. Each point on the ROC plot represents a true_positive_rate/false_positive_rate pair corresponding to a particular probability threshold. Each point has a corresponding confusion matrix. The user can analyze the confusion matrices produced at different threshold levels and select a probability threshold to be used for scoring. The probability threshold choice is usually based on application requirements (i.e., acceptable level of false positives).
    The ROC does not represent a model. Instead it quantifies its discriminatory ability and assists the user in selecting an appropriate operating point for scoring."
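    (To make the rates concrete, a sketch of the confusion-matrix counts at a single threshold, assuming a hypothetical scored table scores(actual, prob_positive); then TPR = true_pos / (true_pos + false_neg) and FPR = false_pos / (false_pos + true_neg).)
    -- Hypothetical sketch: confusion-matrix counts at a 0.5 threshold.
    SELECT SUM(CASE WHEN prob_positive >= 0.5 AND actual = 1 THEN 1 ELSE 0 END) AS true_pos,
    SUM(CASE WHEN prob_positive >= 0.5 AND actual = 0 THEN 1 ELSE 0 END) AS false_pos,
    SUM(CASE WHEN prob_positive < 0.5 AND actual = 1 THEN 1 ELSE 0 END) AS false_neg,
    SUM(CASE WHEN prob_positive < 0.5 AND actual = 0 THEN 1 ELSE 0 END) AS true_neg
    FROM scores;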
    I would add to this that you can select a threshold point in the build activity to bias the apply process. Currently we generate a cost matrix based on the selected threshold point, rather than using the threshold point directly.
    Thanks, Mark

  • OracleAS Data Source Plugin for OmniPortlet for SAP

    Hi,
    What is the latest version of the OracleAS Data Source Plugin for OmniPortlet for SAP?
    Is it still a BETA release?
    thanks

    Hello,
    We now get the same error message for our WD4J application:
    "JCo data source missing for type: class com.company.application.main.model.Tvm1T    class com.sap.aii.proxy.framework.core.DataAccessException"
    Does anybody know the reason for this?
    Thanks,
    Robert
