Costs in Data Miner
How can I assign costs for classification in Data Miner?
Hi,
The Test Metric step has an options button that allows you to define a custom cost matrix.
The Apply Activity allows you to pick the test metric cost matrix you want (either the one in the build activity or in a separate test activity).
Now having said that, we have a feature that automates some of this, assuming you don't have "real costs" but simply want to tune the model to achieve better balanced predictions.
See the FAQ list on the following link and look for question "How can you adjust or tune a model when there aren't enough examples of a specific value of the target attribute?".
http://www.oracle.com/technology/products/bi/odm/index.html
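As background, the way a cost matrix biases classification (independent of the Data Miner UI; the names and numbers below are illustrative, not Data Miner output) is to pick the class that minimizes expected cost rather than the most probable class:

```python
# Sketch: a cost matrix shifts the decision away from the raw argmax of
# class probabilities. All values here are made up for illustration.

def predict_with_costs(probs, cost):
    """probs[i]  = P(actual class == i) from the model.
    cost[i][j] = cost of predicting j when the actual class is i.
    Returns the class j that minimizes expected cost."""
    n = len(probs)
    expected = [sum(probs[i] * cost[i][j] for i in range(n)) for j in range(n)]
    return min(range(n), key=lambda j: expected[j])

# Rare positive class: plain argmax would predict class 0 here, but a
# high false-negative cost flips the decision to class 1.
probs = [0.8, 0.2]              # model says "negative" is more likely
cost  = [[0, 1],                # predicting positive when negative: cost 1
         [10, 0]]               # missing a positive: cost 10
print(predict_with_costs(probs, cost))  # -> 1
```

With uniform costs this reduces to the usual most-probable-class prediction; raising the cost of missing the rare class is the spirit of the balance tuning described in the FAQ.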
Thanks, Mark
Similar Messages
-
Is there anyone who can explain some things about the roc chart for me?
How is what is showed in the roc chart related to the confusion matrix next to it given in the Oracle Data Miner?
How is this roc chart constructed? How is it possible that it represents the decision tree model I made?
I hope somebody can help me.
Hi,
This explanation comes from one of our algorithm engineers:
"The ROC analysis applies to binary classification problems. One of the classes is selected as a "positive" one. The ROC chart plots the true positive rate as a function of the false positive rate. It is parametrized by the probability threshold values. The true positive rate represents the fraction of positive cases that were correctly classified by the model. The false positive rate represents the fraction of negative cases that were incorrectly classified as positive. Each point on the ROC plot represents a true_positive_rate/false_positive_rate pair corresponding to a particular probability threshold. Each point has a corresponding confusion matrix. The user can analyze the confusion matrices produced at different threshold levels and select a probability threshold to be used for scoring. The probability threshold choice is usually based on application requirements (i.e., acceptable level of false positives).
The ROC does not represent a model. Instead it quantifies its discriminatory ability and assists the user in selecting an appropriate operating point for scoring."
I would add to this that you can select a threshold point in the build activity to bias the apply process. Currently we generate a cost matrix based on the selected threshold point rather than use the threshold point directly.
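The threshold/confusion-matrix relationship described above can be sketched numerically: each probability threshold produces one (false positive rate, true positive rate) point, i.e. one point on the ROC chart. The scores and labels below are illustrative, not Data Miner output:

```python
# Sketch: one ROC point (and one confusion matrix) per probability threshold.

def roc_points(scores, labels, thresholds):
    """Treat score >= t as a positive prediction; return one
    (false_positive_rate, true_positive_rate) pair per threshold."""
    pos = sum(labels)            # number of actual positives
    neg = len(labels) - pos      # number of actual negatives
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # model probabilities for "positive"
labels = [1,   1,   0,   1,   0,   0]     # actual classes
print(roc_points(scores, labels, [0.5, 0.2]))  # lowering t raises both rates
```

Sweeping the threshold from high to low traces the ROC curve from (0, 0) toward (1, 1); the operating point is chosen by the acceptable false positive level.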
Thanks, Mark -
Error while installing Oracle Data miner 10G Release 2
Hello,
I am a student involved in research in Data mining. I am new to Oracle Database and data miner.
I installed Oracle Enterprise Manager 10g Grid Control Release 2 (10.2.0.1). Now I am trying to install Oracle Data Miner (10.2.0.1). However, at installation time ODM gives the following error:
"specified data mining server is not compatible. 10.1.0.4.0."
I have installed Oracle 10.2.0.1 but when I login using SqlPlus I get the following information -
SQL*Plus: Release 10.1.0.4.0 - Production on Sun Jul 23 09:52:41 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL>
I would be really obliged if someone can help me with this.
Thanks in advance
Pooja
Hi,
Download and install the matching product version (10.2.0.1) of Oracle Data Mining.
Simon -
Oracle Data Miner 10.1.0.2 Interoperate with Database 10g Release 2
Hi all,
I cannot connect from Oracle Data Miner to a newly upgraded Database 10g Release 2 with Data Mining option. This database was 10.1.0.2 before upgrade, and I could connect via Oracle Data Miner before the upgrade (though it needs to be upgraded to 10.1.0.3+ for data mining to function).
I have similar problem for a new installation on another computer. The error message in either case is "Cannot connect to specified Data Mining Server. Check connection information and try again."
I can use SQL*Plus to login as the data mining user using the net service corresponding to the connect string. I check the v$option and DBA_REGISTRY as per the Data Mining Admin. documentation to verify that the data mining option exists and is valid. I am able to use the same connect string "host:port:SID" to connect from Analytical Workspace Manager to verify that the connectivity is OK.
Furthermore, some Oracle by Example tutorials seem not to be valid for a DB of version 10.2. For example, at the URL http://www.oracle.com/technology/obe/obe10gdb/bidw/odm/odm.htm#p, point 6, <ORACLE_HOME>\dm\lib\odmapi.jar is not applicable, because the path <ORACLE_HOME>\dm no longer exists.
Therefore, my query is: can Oracle Data Miner 10.1.0.2 work with DB 10.2? What procedure should I follow? Please advise.
Thanks and regards,
lawman
I am waiting on the beta version since I have installed Oracle 10g R2.
I've been checking the OTN website every day to see when it is released.
If it is not a bother, can you send me an email when I can download it.
Thanks in advance.
Have a wonderful day/weekend,
Andy -
Status of transfer (shipment costs header data)
Kindly explain it to me in general language,
Status of transfer (shipment costs header data)
This status describes the stage of forwarding for a shipment cost document.
Use
The status is determined by the system and cannot be changed manually. To determine the status, the system cumulates the item statuses. The following rules apply:
Item | Status
000001 | _ | A | B | C | B | C | C | C
000002 | _ | _ | _ | _ | A | A | B | C
Header | _ | A | B | C | B | B | B | C
You can select shipment costs that have reached a certain forwarding status.
Dear Pradeep,
the matrix describes how the header status gets calculated when the cost document has two items.
The header status is a cumulation of the item status. This is valid for the calculation, accounting and
transfer status!
Example:
If item 00001 has status 'C' and item 00002 'A', the header status will be 'B'.
Item | Status
000001 | _ | A | B | C | B | C | C | C
000002 | _ | _ | _ | _ | A | A | B | C
Header | _ | A | B | C | B | B | B | C
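A rough reading of the matrix, treating "_" as "not relevant", can be sketched as follows (this is an interpretation of the rule shown, not official SAP logic): blank items are ignored; if all relevant items share one status the header takes it, otherwise the header is 'B' (partially processed).

```python
# Sketch of the header-status cumulation rule from the matrix above.

def header_status(item_statuses):
    """'_' = not relevant, 'A' = not processed, 'B' = partial, 'C' = full."""
    relevant = [s for s in item_statuses if s != '_']
    if not relevant:
        return '_'
    if len(set(relevant)) == 1:
        return relevant[0]       # all items agree (e.g. all 'C')
    return 'B'                   # mixed statuses -> partially processed

print(header_status(['C', 'A']))  # -> 'B'
```

This reproduces every column of the matrix, including the example of item 000001 = 'C' and item 000002 = 'A' giving header 'B'.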
Regards,
Tom -
Error using Data Miner on SQL Developer
Dear all.
I'm trying to use Data Miner in SQL Developer (3.1.06) with Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production, with the Partitioning, OLAP, Data Mining and Real Application Testing options.
I created a project, a workflow, a data source, a classification model and connected them.
I tried to show the Decision Tree generated as described in the tutorial "Using Oracle Data Miner 11g Release 2".
But, I receive the message "It's not possible to load the data mining model because it does not exist" in a dialog box entitled "Model not found".
It seems that something was missed either during installation or when granting rights to my user.
Has anybody seen this message, and do you know what causes it?
Any other suggestion is welcome.
Regards,
Duncan
PUCRS-Brazil
Hi Mark.
Thank you for replying to my post.
I'm not the DBA of Oracle repository here. So, if possible, I'd like to address my issue without having to do anything in it.
I'll try to state better what we have here.
When I try to use the latest SQL Developer version, 11.2.1.1.7.42, it gives me a message stating the current repository isn't compatible with this SQLDev version.
Repository version is: 11.2.1.1.3
I don't know what type of upgrade SQL Developer wants to do...
About your other questions.
"Is the model not found error occurring for all models or just the DT model?" -- The error occurs for all 4 standard models: SVM, GLM, NB and DT. But, I'm able to see all other options: "show tests results", "Compare tests results".
"You did not report that the workflow failed, so you must be seeing a successful completion status for each of the nodes." -- Yes, all steps are executed and finished OK.
"Are you using a proxy user or some other connection definition other than a standard connection?" -- No.
There are currently some problems supporting proxy-based connections; these will be fixed in an upcoming SQL Developer release.
So, if you can provide me some details on how your connection is defined, that will be helpful.
How was the product installed and the user granted rights? -- I need to ask PUCRS DBA team to answer these questions.
Was it done through the UI or by running the scripts in the dataminer/scripts directory?
Lastly, what is your client operating system and are you using the SQL Dev 64 bit or 32 bit release? -- I tested in 2 different computers, both with Windows 7 Professional. In my notebook, it's 64 bits OS version.
Thank you for your attention and kind reply.
Regards,
Duncan -
Data Miner Extension in SQL Developer 4.0 EA3 Release: EA3 Repository Migration Failure
The Data Miner extension in the SQL Developer 4.0 EA3 release fails when attempting to migrate the repository to EA3. This posting contains instructions on how to recover from this failure and how to successfully migrate the repository from EA2 to EA3. There are no problems when installing a fresh Data Miner repository.
Sorry for any difficulty this has caused.
Mark
Failure message displayed in log when migrating to EA3:
Error report -
ORA-06550: line 96, column 40:
PLS-00382: expression is of wrong type
ORA-06550: line 96, column 7:
PL/SQL: Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
Instructions on how to recover from failure and continue using original version of SQL Developer:
1) Execute the following sql as SYS in order to allow the Data Miner repository to be open for use again:
UPDATE ODMRSYS.ODMR$REPOSITORY_PROPERTIES
SET PROPERTY_STR_VALUE = 'LOADED'
WHERE PROPERTY_NAME = 'REPOSITORY_STATUS';
COMMIT;
2) When the attempted migration to EA3 failed, access privileges to the Data Miner repository were revoked from users. In order to use Data Miner again, these grants must be reapplied. You can either use the UI guided process to accomplish this, which requires the SYS password, or you can run the usergrants.sql script, again as the SYS user. For instructions on how to run the usergrants.sql script, or any administrative script, review the instructions contained in the install_scripts_readme.html file. You can find this file in the SQL Developer directories created when you unzipped SQL Developer. It is located in the following relative directory: \<SQL Developer Install Directory>\sqldeveloper\dataminer\scripts.
Instructions on how to successfully migrate to EA3.
If you have already attempted to migrate to EA3 and failed, then you need to perform the recovery instructions noted above, after which you can continue with the instructions below.
If you have not yet attempted to migrate to EA3, then just follow the instructions below.
1) Replace the entire contents of the script file createxmlworkflowsbackup.sql with the specification contained at the end of this posting. The file to edit can be found in the \<SQL Developer Install Directory>\sqldeveloper\dataminer\scripts directory.
2) After completing the edit to createxmlworkflowsbackup.sql, you can proceed with the UI guided migration or perform the migration using the appropriate migration script. Review the install_scripts_readme.html file in the /dataminer/scripts directory for instructions on how to perform the migration using the script. For the UI migration, simply open any user connection in your Data Miner navigator, and you will be prompted to perform the migration.
WHENEVER SQLERROR EXIT SQL.SQLCODE;
DEFINE MAX_VERSIONS = 30
EXECUTE dbms_output.put_line('Start Backup Data Miner Workflows ' || systimestamp);
DECLARE
table_cnt NUMBER;
BEGIN
SELECT count(*) INTO table_cnt FROM all_tables WHERE owner='ODMRSYS' AND table_name='ODMR$WORKFLOWS_BACKUP';
IF (table_cnt = 0) THEN
EXECUTE IMMEDIATE '
CREATE TABLE ODMRSYS.ODMR$WORKFLOWS_BACKUP
(
  USER_NAME VARCHAR2(30 CHAR) NOT NULL
, PROJECT_ID NUMBER NOT NULL
, PROJECT_NAME VARCHAR2(30 CHAR) NOT NULL
, PJ_CREATION_TIME TIMESTAMP(6) NOT NULL
, PJ_LAST_UPDATED_TIME TIMESTAMP(6)
, PJ_COMMENTS VARCHAR2(4000 CHAR)
, WORKFLOW_ID NUMBER NOT NULL
, WORKFLOW_NAME VARCHAR2(30 CHAR) NOT NULL
, WORKFLOW_DATA SYS.XMLTYPE
, CHAIN_NAME VARCHAR2(30 CHAR)
, RUN_MODE VARCHAR2(30 CHAR)
, STATUS VARCHAR2(30 CHAR) NOT NULL
, WF_CREATION_TIME TIMESTAMP(6) NOT NULL
, WF_LAST_UPDATED_TIME TIMESTAMP(6)
, BACKUP_TIME TIMESTAMP(6) NOT NULL
, VERSION NUMBER NOT NULL
, WF_COMMENTS VARCHAR2(4000 CHAR)
, CONSTRAINT ODMR$WORKFLOWS_BACKUP_PK PRIMARY KEY
  (
    PROJECT_ID
  , WORKFLOW_ID
  , VERSION
  )
  ENABLE
)
LOGGING
PCTFREE 10
INITRANS 1
XMLTYPE COLUMN "WORKFLOW_DATA" STORE AS BASICFILE CLOB';
END IF;
END;
/
DECLARE
schema_old_ver VARCHAR2(30);
schema_ver VARCHAR2(30);
patch VARCHAR2(30);
db_ver VARCHAR2(30);
v_storage VARCHAR2(30);
schema_data CLOB;
v_db_11_2_0_2 NUMBER; -- db is <= 11.2.0.2?
row_cnt NUMBER;
ver_num NUMBER := 1;
maintaindom NUMBER;
workflow_rec ODMRSYS.ODMR$WORKFLOWS_BACKUP%ROWTYPE;
BEGIN
SELECT STORAGE_TYPE INTO v_storage FROM ALL_XML_TAB_COLS WHERE OWNER='ODMRSYS' AND TABLE_NAME='ODMR$WORKFLOWS' AND COLUMN_NAME='WORKFLOW_DATA';
-- if (db is >= 11.2.0.3 AND SQL Dev > 3.0) OR (db is <= 11.2.0.2 AND MAINTAIN_DOM_PATCH_INSTALLED)
--   back up all workflows
-- end if
IF (v_storage != 'BINARY') THEN
-- determine xml schema version
SELECT XMLSerialize(CONTENT SCHEMA AS CLOB) INTO schema_data
FROM DBA_XML_SCHEMAS WHERE schema_url = 'http://xmlns.oracle.com/odmr11/odmr.xsd' AND owner = 'ODMRSYS';
maintaindom := INSTR(schema_data, 'xdb:maintainDOM="false"', 1, 1);
-- determine database version
SELECT version INTO db_ver FROM product_component_version WHERE product LIKE 'Oracle Database%';
--- Check schema compatibility
schema_old_ver := '11.2.0.1.9'; -- default value
BEGIN
SELECT property_str_value INTO schema_ver
FROM "ODMRSYS"."ODMR$REPOSITORY_PROPERTIES" WHERE property_name = 'WF_VERSION';
IF schema_old_ver = schema_ver THEN
IF NOT (db_ver = '11.2.0.1' OR db_ver = '11.2.0.2') THEN
dbms_output.put_line('WARNING: The backup process cannot be done. The workflows need to be migrated first.');
RETURN;
END IF;
END IF;
EXCEPTION WHEN NO_DATA_FOUND THEN
schema_ver := schema_old_ver;
dbms_output.put_line('No WF_VERSION found. Defaults to: ' || schema_old_ver);
END;
-- determine if MAINTAIN_DOM_PATCH_INSTALLED
IF (INSTR(db_ver, '11.2.0.2') > 0 OR INSTR(db_ver, '11.2.0.1') > 0 OR INSTR(db_ver, '11.2.0.0') > 0) THEN
v_db_11_2_0_2 := 1;
BEGIN
SELECT PROPERTY_STR_VALUE INTO patch FROM ODMRSYS.ODMR$REPOSITORY_PROPERTIES WHERE PROPERTY_NAME = 'MAINTAIN_DOM_PATCH_INSTALLED';
patch := UPPER(patch);
EXCEPTION WHEN NO_DATA_FOUND THEN
patch := 'FALSE';
END;
ELSE
v_db_11_2_0_2 := 0;
END IF;
END IF;
IF ( v_storage = 'BINARY'
OR (v_db_11_2_0_2 = 0) -- db is >= 11.2.0.3
OR ((v_db_11_2_0_2 > 0) AND ((patch = 'TRUE') OR (maintaindom = 0))) ) THEN -- db is <= 11.2.0.2 AND (MAINTAIN_DOM_PATCH_INSTALLED OR maintaindom="true")
SELECT count(*) INTO row_cnt FROM ODMRSYS.ODMR$WORKFLOWS_BACKUP;
IF (row_cnt > 0) THEN
SELECT NVL(MAX(VERSION)+1,1) INTO ver_num FROM ODMRSYS.ODMR$WORKFLOWS_BACKUP;
END IF;
FOR wf IN (
SELECT
p.USER_NAME "USER_NAME",
p.PROJECT_ID "PROJECT_ID",
p.PROJECT_NAME "PROJECT_NAME",
p.CREATION_TIME "PJ_CREATION_TIME",
p.LAST_UPDATED_TIME "PJ_LAST_UPDATED_TIME",
p.COMMENTS "PJ_COMMENTS",
x.WORKFLOW_ID "WORKFLOW_ID",
x.WORKFLOW_NAME "WORKFLOW_NAME",
xmlserialize(DOCUMENT x.WORKFLOW_DATA as CLOB indent size = 2) "WORKFLOW_DATA",
x.CHAIN_NAME "CHAIN_NAME",
x.RUN_MODE "RUN_MODE",
x.STATUS "STATUS",
x.CREATION_TIME "WF_CREATION_TIME",
x.LAST_UPDATED_TIME "WF_LAST_UPDATED_TIME",
x.COMMENTS "WF_COMMENTS"
FROM ODMRSYS.ODMR$PROJECTS p, ODMRSYS.ODMR$WORKFLOWS x
WHERE p.PROJECT_ID = x.PROJECT_ID
LOOP
workflow_rec.USER_NAME := wf.USER_NAME;
workflow_rec.PROJECT_ID := wf.PROJECT_ID;
workflow_rec.PROJECT_NAME := wf.PROJECT_NAME;
workflow_rec.PJ_CREATION_TIME := wf.PJ_CREATION_TIME;
workflow_rec.PJ_LAST_UPDATED_TIME := wf.PJ_LAST_UPDATED_TIME;
workflow_rec.PJ_COMMENTS := wf.PJ_COMMENTS;
workflow_rec.WORKFLOW_ID := wf.WORKFLOW_ID;
workflow_rec.WORKFLOW_NAME := wf.WORKFLOW_NAME;
workflow_rec.WORKFLOW_DATA := SYS.XMLTYPE(wf.WORKFLOW_DATA);
workflow_rec.CHAIN_NAME := wf.CHAIN_NAME;
workflow_rec.RUN_MODE := wf.RUN_MODE;
workflow_rec.STATUS := wf.STATUS;
workflow_rec.WF_CREATION_TIME := wf.WF_CREATION_TIME;
workflow_rec.WF_LAST_UPDATED_TIME := wf.WF_LAST_UPDATED_TIME;
workflow_rec.BACKUP_TIME := SYSTIMESTAMP;
workflow_rec.VERSION := ver_num;
workflow_rec.WF_COMMENTS := wf.WF_COMMENTS;
BEGIN
-- Output the ids (proj name, workflow name, proj id, workflow id)
dbms_output.put_line('Backup workflow: ('||wf.PROJECT_NAME||', '||wf.WORKFLOW_NAME||', '||wf.PROJECT_ID||', '||wf.WORKFLOW_ID||')');
INSERT INTO ODMRSYS.ODMR$WORKFLOWS_BACKUP VALUES workflow_rec;
COMMIT;
EXCEPTION WHEN OTHERS THEN
dbms_output.put_line('Backup workflow failed: ('||wf.PROJECT_NAME||', '||wf.WORKFLOW_NAME||', '||wf.PROJECT_ID||', '||wf.WORKFLOW_ID||')');
END;
END LOOP;
END IF;
-- keep the latest 30 versions
EXECUTE IMMEDIATE 'DELETE FROM ODMRSYS.ODMR$WORKFLOWS_BACKUP WHERE VERSION <= :1' USING (ver_num - &MAX_VERSIONS);
COMMIT;
EXCEPTION WHEN OTHERS THEN
ROLLBACK;
RAISE_APPLICATION_ERROR(-20000, 'Workflow backup failed. Review install log.');
END;
/
EXECUTE dbms_output.put_line('End Backup Data Miner Workflows. ' || systimestamp);
823006 wrote:
5.a. XLS export of big number columns loses precision, for example a NUMBER(38) column with all digits used.
I believe excel only holds 15 digits of precision.
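That 15-digit ceiling comes from IEEE 754 double precision, which is how Excel stores numeric cells. A quick sketch (Python here purely to illustrate the arithmetic):

```python
# Doubles carry roughly 15-17 significant decimal digits, so a NUMBER(38)
# value with all digits used cannot round-trip through a numeric Excel cell.
big = 10**16 + 1                       # 17 digits
assert int(float(big)) != big          # the trailing 1 is lost
safe = 999_999_999_999_999             # 15 digits, below 2**53
assert int(float(safe)) == safe        # still exact
```

This is why exporting such a column as text (a "string" column, as suggested below) preserves the value.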
Edited by: 823006 on Jan 18, 2011 8:14 AM
Then it would be nice if you could define the "Excel type" of each column, so you could make this number a "string" in the XLS.
Edited by: user9361780 on Jan 18, 2011 9:12 AM -
Using Oracle Text to Data Mine
Can someone provide me with an idea of how to Data Mine with just using Oracle Text and not the data mining option. I need to search a column of customer complaints and then put it in a category based on that. It would be best if the categories were auto generated. It has to be done in PL/SQL.
Thanks,
You cannot have the categories created automatically without data mining. However, if you are willing to create the categories and the queries that determine them, then you can do it with just Oracle Text. I posted an example on the 2nd page of the following thread:
Re: New to Oracle Text search -
Oracle Data Miner connecting to Oracle DB 10.2.0.1 on WinXP; Error
I think this posting is similar to others however I can not figure it out and hoped that someone can help me.
I am trying to connect to the 10g database (10.2.0.1) on WinXP with Data Miner (10.1.0.2).
I am getting the following error message:
"Cannot connect to specified Data Mining Server. Check connection information and try again".
The Connection Name is "RSSODM". This is a new connection name.
User is ODM
Password is ODM. Secret password. :>)
Host is my machine name
Port is 1521
SID is RSS10G.
I can connect to the ODM schema on the RSS10G database. However I do not see any objects owned by DMSYS. Also, when I executed the odmuser.sql file, it fails to grant DMUSER_ROLE to ODM.
I can not find the DMUSER_ROLE. Did I miss something? Can I manually execute something or should I provide all grants on all objects owned by DMSYS to ODM?
Please advise.
Thanks in advance.
Have a wonderful day/weekend,
Andy
P.S. For anyone living near the Gulf states, prayers and thoughts are with you all. :>)
Updated information as of 9/6/2005 1:05 pm.
I found out that several ".sql" files were never executed. For example, dminst1.sql was never executed; running it corrected some of my issues, such as not being able to find DMUSER_ROLE.
I manually executed all the appropriate .sql files. I hope I did not miss anything, ran them in the correct order, and logged in as the correct user each time.
Also, I tried to use ODMiner logged in as ODM and I was still getting the original error message: cannot connect to the server.
Then I tried to use ODMiner logged in as DMSYS and now I get another error message: "Specified Data Mining Server is not compatible. 10.2.0.1.0". The error message appeared in a pop-up window and nothing was seen in the background (MS prompt window).
All error messages were encountered while I was executing the odminer.exe file; used for troubleshooting.
Am I doing something wrong?
Am I doing something right?
Please advise.
Thanks in advance.
Have a wonderful day/week,
Andy -
Data Miner does not connect to DB anymore
Hello,
I cannot connect to the database with Data Miner since yesterday. It worked well until I tried to deploy a patch. Database was shut down and all services stopped. This patch did not work anyway and so I aborted this process, but I think no files were updated. The error always was that some files are still in use.
After a reboot of the machine everything is working without the Data Miner. When I try to connect, no error message is displayed but the connection dialog is displayed again and the application does not start. I use db version 11.1.0.6 EE.
What could be the reason for this? How can I check if the Data Mining option is still available?
Greetings
Joerg
Hello Mark,
thank you for this post. I didn't know that this application shows more information.
The error was quite simple. Somehow the quota for the tablespace was not enough. After setting a new quota the connection works.
Thanks.
Greetings,
Joerg
Edited by: Scantid on 14.08.2009 07:35 -
Using Oracle Data Miner for Future Sales Prediction
Hi ,
I have a question on predicting the sales for the next 5 years, by month. The data is as follows.
YEAR Month Total_Sale
2009 1 88187
2009 2 87654
2009 3 87656
2008 1 10000
2008 2 30000
2008 3 40000
How can I do this in Oracle Data Miner, i.e. predict the sales for future years like 2014, 2015, etc.?
The expected output
is as follows.
2014 1 2000
2014 2 3000
2014 3 9000
2014 4 234
2015 1 2344
2015 2 4000
and so on for all the 12 months and year.
CREATE TABLE "SALE_GROWTH"
( "YEAR" NUMBER(*,0),
"MONTH" NUMBER(*,0),
"TOTAL_SALE" NUMBER
REM INSERTING into SALE_GROWTH
SET DEFINE OFF;
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,1,881725);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,2,1036585);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,3,1406252);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,4,550700);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,5,985413);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,6,727485);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2009,8,228480);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2008,9,699);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2008,10,446428);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2008,11,975335);
Insert into SALE_GROWTH (YEAR,MONTH,TOTAL_SALE) values (2008,12,4853690);
The above is the historical data.
If I use the Oracle SQL slope function (REGR_SLOPE), am I going to get the same sales numbers that Oracle Data Miner will predict?
Please reply.
I appreciate your help.
Thanks,
HS
Hi,
For background on how you might be able to use ODM for sales forecasting, see Marcos Campos' blog series on time series (link below).
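As a rough sketch of what a SQL slope-style forecast computes (a plain least-squares line, not what ODM does internally; the monthly totals below are hypothetical, not the poster's data):

```python
# Minimal least-squares trend fit over a time index 0..n-1 — the same
# slope/intercept that SQL's REGR_SLOPE/REGR_INTERCEPT would return.

def fit_trend(ys):
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

sales = [10000, 30000, 40000, 88187, 87654, 87656]   # six months of history
slope, intercept = fit_trend(sales)
forecast_next = intercept + slope * len(sales)        # extrapolate month 7
```

A straight-line fit captures only the overall trend; it ignores seasonality and will not reproduce ODM's predictions, which is why the time-series approach in the blog is recommended.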
Thanks, Mark
Oracle Data Mining and Analytics: Time Series Forecasting Series -
SQL Developer 3.0 Final Available with Data Miner
A new SQL Developer extension, Oracle Data Miner is the graphical user interface for Oracle Data Mining, an option to the Oracle Database Enterprise Edition. Oracle Data Mining provides in-database functionality that enables users to discover patterns and relationships hidden in their data to predict customer behavior, identify key factors, find new clusters and their profiles, anticipate and combat churn, detect anomalous behavior and solve a wide range of data-driven problems. For more information, visit Oracle Data Miner on OTN http://www.oracle.com/technetwork/database/options/odm/index.html
Hi Mikka,
Only simple PL/SQL Records are currently supported i.e. those that do not contain repeating or optional components and where all the components are themselves supported. This restriction is in place due to the fact that we use JDBC as the parameter passing mechanism which does not directly support the PL/SQL Record type.
ANYTYPE and ANYDATA are not currently supported as they have a dynamic value type and therefore must be set programmatically. In the future, it may be possible to support these through the dynamic value and validation features.
This will remain so for the final version.
Regards,
Richard -
Data Miner Navigator not visible in SQL developer 3.0.04
Hi ,
I have been trying to complete an OBE turorial on http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/bidw/datamining/ODM11gR2-SetUp.htm?print=preview&imgs=visible
In the Install the Data Miner Repository section the tutor is advising to go to the following option :
From the SQL Developer menu, select View > Data Miner > Data Miner Navigator
In the current SQL developer version i do not see the Data Miner Navigator option.
Any guesses? The prerequisites say Oracle SQL Developer, version 3.0 or later, and I am using 3.0.04.
Rgds,
Dominic.
Hi Dominic,
Sorry for the late response, I did not notice this posting.
We have a separate forum for data mining: Data Mining
Try to use that in the future so you get a quicker response.
It looks like the OBE course may have a typo, or the menu item name was changed.
For SQL Dev 3.0 you can display the data miner interfaces in the following way:
The following menu item will make the Data Miner Connections navigator visible, along with all associated viewers (Component Palette, Workflow Jobs, and, when necessary, Thumbnail and Property Inspector):
Tools -> Data Miner -> Make Visible
Go to the following to select just one of the viewers to be made visible:
View->Data Miner->Data Miner Connections
Other options off of the View->Data Miner menu item are: Workflow Jobs, Thumbnail, Property Inspector.
Thanks, Mark -
Problem while using KEKO(Product Costing - Header Data) table in the report
hi,
Below is the select query I have written.
While accessing the table KEKO (Product Costing - Header Data), it takes a long time.
Is there any alternative to the KEKO table for use in my report?
SELECT FEH_STA VBELN POSNR FROM KEKO
INTO TABLE IST_KEKO FOR ALL ENTRIES IN IST_VBAP_KEKO
WHERE KALNR = IST_VBAP_KEKO-KALNR.
regards,
DILIP.
Hi Dilip,
Before going for any other table:
As KALNR is only one of the primary key fields of table KEKO, you can try creating a secondary index on KEKO, which might help improve your report's performance.
Also, you can add more conditions in the WHERE clause if possible, which will also help performance.
Thanks,
Archana -
Delete cost distribution data (infotype 1018)
hi all,
I have a requirement: I have to delete cost distribution data (infotype 1018) at mass level.
how would i do that?
thanks
Sachin
Hi Sachin,
To delete infotype data from the Personnel Planning database, you can use the program RHRHDL00 (the last two characters are zeros).
Go to SE38 >> RHRHDL00 >> Execute
CAUTION:
On the Screen,
1. Mention the plan version.
2. Mention the object type - you can maintain a list here (as IT 1018 can be maintained for work centers, org units, and positions).
3. Maintain the infotype you want to delete (in your case, 1018).
4. If the infotype has a subtype, maintain the subtype as well (e.g., to delete the A008 relationship from the Relationships infotype, maintain 1001 as the infotype and A008 as the subtype).
It works perfectly, but do this at your own risk: one has to be very confident while running this program.
Mass deletion... done!
All the Best !!!
Kumarpal Jain.