Publishing Data Mining results to OBIEE 10.1.3.4
Hi all,
I am working on OBIEE 10.1.3.4.
I've used Oracle Data Miner 10.2 and I need to publish and integrate the data mining results into OBIEE.
Any suggestions?
Thanks very much.
Are you able to reach presentation services and login if you go to http://192.17.61.85:9704/saw.dll ?
If so, this may be a JEE application container issue, or some type of rewriting of the URL going on.
Do you have a list of all the items that the patch modified?
Similar Messages
-
Publishing Data Mining models,objects to Discoverer
HI everybody,
Does anybody know where the ODM objects (tables, views) are inserted during the procedure of publishing to Discoverer?
Although I have defined a gateway to ODM in Discoverer Administrator, and the publishing does not raise any error, I don't see the ODM objects in Discoverer Administrator...
Why might this be happening?
Thanks , beforehand
Simon
Please ensure the following patch is installed. Which version of Oracle BI are you using?
Patch 4430506 for OracleBI Discoverer Administrator, available from Oracle MetaLink; if this patch is not installed, the gateway will not work.
Also, I am assuming you are familiar with how a gateway object can be loaded into the Discoverer Administrator GUI via the load wizard. You can get further help in the Discoverer forum.
Regards
Sunil -
Data Mining on data specified and filtered by the user in runtime
Hi Experts,
I am new to Data Mining in SAP BI (we are on BI 7.0, SP level 20). I familiarised myself with the APD and Data Mining by reading some interesting and useful threads in this forum and some other resources. I thereby got an understanding of the topic and was able to create a basic data mining model for an association analysis and a corresponding APD for it, and to write the results into a DSO by using the data source. But so far I have not been able to find a solution for a concrete customer requirement.
The user shall be able to select an article, a retail location and a month, and get the top n combinations sold with that article in that particular location and month. For that, he may not access the Data Mining Workbench or any other SAP-internal tools, but he shall be able to start the analysis from the portal (preferably a query).
We had some thoughts on the scenario. The first idea was to create an APD for every location for the last month. As we need to cover more than 100 locations, this would not be practicable. Therefore I think it would be necessary that the user can select the particular filters, and the data mining would then be executed with the given input.
The other idea was to use a query as the source. The user would start this query and filter location and month in it. The result of the query could then be used as the source for the APD with the association analysis. We would therefore need to create a jump point from that query which starts the APD with those results. After that, the user should be able to start a result query which displays the result of the association analysis (ideally this result query would start automatically, but starting it manually would be OK, too).
So, I have the following questions for these scenarios:
1.) Is it possible to create variants of a single APD, for automatically doing the data mining for the different locations?
2.) Is it possible to start an APD from a query, with the particular results regarding filtering?
3.) Can we place a query directly on the data mining results (how?) or do we need to write the data mining results in a DSO first?
4.) What about the performance? Would it be practicable to do the data mining in runtime with the user waiting?
5.) Is the idea realistic at all? Do you have any other idea how to accomplish the requirement (e.g. without APD but with a query, specific filter and conditions)?
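Stepping back from the APD tooling for a moment: the top-n result the user wants is, at its core, a co-occurrence count over baskets filtered by location and month. A minimal sketch in Python, with data layout and names invented purely for illustration (the real solution would of course run inside BI):

```python
from collections import Counter

# Each transaction: (location, month, set of articles in the basket).
# Field names and sample values are invented for illustration only.
def top_n_with(transactions, article, location, month, n=3):
    co_counts = Counter()
    for loc, mon, basket in transactions:
        if loc == location and mon == month and article in basket:
            co_counts.update(basket - {article})   # count co-purchased articles
    return co_counts.most_common(n)

transactions = [
    ("R001", "2009-07", {"beer", "chips", "salsa"}),
    ("R001", "2009-07", {"beer", "chips"}),
    ("R001", "2009-07", {"beer", "bread"}),
    ("R002", "2009-07", {"beer", "chips"}),   # other location: ignored for R001
]
print(top_n_with(transactions, "beer", "R001", "2009-07", n=1))   # [('chips', 2)]
```

The user-selected article, location and month simply become filter parameters, which is exactly the role the query variables would play in the APD scenario above.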
Edited by: Markus Maier on Jul 27, 2009 1:57 PM
Hi,
You can see the example: go to SE80, select BSP Application SBSPEXT_HTMLB, then select tableview.bsp; you will get some ideas to make the code you have written clearer.
DATA: tv TYPE REF TO cl_htmlb_tableview.
tv ?= cl_htmlb_manager=>get_data(
       request = runtime->server->request
       name    = 'tableView'
       id      = 'tbl_o_table' ).
IF tv IS NOT INITIAL.
  DATA: tv_data TYPE REF TO cl_htmlb_event_tableview.
  tv_data = tv->data.
  IF tv_data->prevselectedrowindex IS NOT INITIAL.
    FIELD-SYMBOLS: <row> LIKE LINE OF sflight.
    " Replace sflight with your own table
    READ TABLE sflight INDEX tv_data->prevselectedrowindex ASSIGNING <row>.
    DATA: value TYPE string.
    value = tv_data->get_cell_id( row_index    = tv_data->prevselectedrowindex
                                  column_index = '1' ).
  ENDIF.
ENDIF. -
Does Oracle Data Mining support other databases for data mining work?
Dear CB,
I am using Oracle Data Mining and I have some doubts; please clarify them:
1) Can ODM talk to other databases, i.e. can we use ODM to do data mining work against another database?
2) Does ODM support social analytics?
Thanks in advance
Thanks & regards
Suresh
Suresh,
1) Can ODM talk to other databases, i.e. can we use ODM to do data mining work against another database?
Yes, you can use Oracle Data Mining to talk to other databases, but ODM will need to "have" the data inside the Oracle Database during model build and model apply. You can use DB links to pull/push data to/from other Oracle and non-Oracle DBs, but all the data mining work and data transformations occur in-database, by design. You can perform data preparation, data transformation and model building, then compute DM predictions inside the Oracle DB, and then publish the ODM results to any dashboard or query & reporting tool that can make a SQL call to Oracle, either to query the results or to ask an ODM predictive model for a real-time prediction based on current input data.
2) Does ODM support social analytics?
It depends on what you mean, but probably yes. For example, we can mine unstructured data, e.g. Twitter feeds, and get roughly 80% accurate sentiment analysis. See http://www.google.com/url?sa=t&rct=j&q=mining%20twitter%20data%20clasification%20stanford&source=web&cd=1&ved=0CCMQFjAA&url=http%3A%2F%2Fwww.stanford.edu%2F~alecmgo%2Fpapers%2FTwitterDistantSupervision09.pdf&ei=Pk3VTsPOFYaIsQLknuiGDg&usg=AFQjCNGSErmPAa-n6kc_gVCCdxMRMKTeOw for a published tech paper describing an approach that we have successfully replicated in-database using ODM's text mining capabilities and Oracle Text. Add additional structured data, e.g. number of purchases or dollar amount of purchases over time, and you can build better sentiment analysis or other types of predictive models.
Also, ODM can perform, for example, churn analysis and include the "friends & family" usage, activities, and demographics as enriched input data to mine. We mine star schemas, so we can pull together a 360-degree view of the customer, including "social"-type data, e.g. the number of links from a friend. It is a broad topic... hope this helps.
cb -
Oracle Data Mining - How to use PREDICTION function with a regression model
I've been searching this site for data mining Q&A specifically related to the PREDICTION function, and I wasn't able to find anything useful on this topic. So I hope that posting it as a new thread will get useful answers for a beginner in Oracle Data Mining.
So here is my issue with prediction function:
Given a table with 17 weeks of sales for a given product, I would like to do a forecast to predict the sales for week 18.
For that let's start preparing the necessary objects and data:
CREATE TABLE T_SALES (
PURCHASE_WEEK DATE,
WEEK NUMBER,
SALES NUMBER
);
SET DEFINE OFF;
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('11/27/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 1, 55488);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('12/04/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 2, 78336);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('12/11/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 3, 77248);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('12/18/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 4, 106624);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('12/25/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 5, 104448);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('01/01/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 6, 90304);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('01/08/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 7, 44608);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('01/15/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 8, 95744);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('01/22/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 9, 129472);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('01/29/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 10, 110976);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('02/05/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 11, 139264);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('02/12/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 12, 87040);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('02/19/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 13, 47872);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('02/26/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 14, 120768);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('03/05/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 15, 98463.65);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('03/12/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 16, 67455.84);
Insert into T_SALES
(PURCHASE_WEEK, WEEK, SALES)
Values
(TO_DATE('3/19/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 17, 100095.66);
COMMIT;
There are a lot of linear regression models and approaches to sales forecasting on the market; however, I will focus on what Oracle 11g offers, i.e. the SYS.DBMS_DATA_MINING package, to create a model using regression as the mining function and then, once the model is created, to apply the PREDICTION function against the model.
Therefore I'll have to go through a few steps:
i) normalization of data
CREATE OR REPLACE VIEW t_sales_norm AS
SELECT week,
sales,
(sales - 91423.95)/27238.3693126778 sales_norm
FROM t_sales;
where the numerical values are the mean and the standard deviation:
select avg(sales) from t_sales;
91423.95
select stddev(sales) from t_sales;
27238.3693126778
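As a sanity check, the same z-score normalisation can be reproduced outside the database (Oracle's AVG and STDDEV correspond to the mean and the sample standard deviation):

```python
from statistics import mean, stdev

# The 17 weekly sales figures from the T_SALES inserts above.
sales = [55488, 78336, 77248, 106624, 104448, 90304, 44608, 95744,
         129472, 110976, 139264, 87040, 47872, 120768, 98463.65,
         67455.84, 100095.66]

m = mean(sales)    # matches AVG(sales)
s = stdev(sales)   # matches STDDEV(sales): sample std dev, n-1 divisor
sales_norm = [(x - m) / s for x in sales]   # the t_sales_norm view, per row

print(round(m, 2), round(s, 4))   # 91423.95 27238.3693
```

After this transformation the normalised column has mean 0 and standard deviation 1, which is exactly what the hard-coded constants in the t_sales_norm view achieve.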
ii) auto-correlation. For the sake of simplicity, I will safely assume that there is no auto-correlation (no repetitive pattern in sales among the weeks). Therefore to define the lag data I will consider the whole set:
CREATE OR REPLACE VIEW t_sales_lag AS
SELECT a.*
FROM (SELECT week,
sales,
LAG(sales_norm, 1) OVER (ORDER BY week) L1,
LAG(sales_norm, 2) OVER (ORDER BY week) L2,
LAG(sales_norm, 3) OVER (ORDER BY week) L3,
LAG(sales_norm, 4) OVER (ORDER BY week) L4,
LAG(sales_norm, 5) OVER (ORDER BY week) L5,
LAG(sales_norm, 6) OVER (ORDER BY week) L6,
LAG(sales_norm, 7) OVER (ORDER BY week) L7,
LAG(sales_norm, 8) OVER (ORDER BY week) L8,
LAG(sales_norm, 9) OVER (ORDER BY week) L9,
LAG(sales_norm, 10) OVER (ORDER BY week) L10,
LAG(sales_norm, 11) OVER (ORDER BY week) L11,
LAG(sales_norm, 12) OVER (ORDER BY week) L12,
LAG(sales_norm, 13) OVER (ORDER BY week) L13,
LAG(sales_norm, 14) OVER (ORDER BY week) L14,
LAG(sales_norm, 15) OVER (ORDER BY week) L15,
LAG(sales_norm, 16) OVER (ORDER BY week) L16,
LAG(sales_norm, 17) OVER (ORDER BY week) L17
FROM t_sales_norm) a;
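The LAG construction above just shifts the normalised series by 1..17 positions; the same lag-feature table can be sketched in plain Python:

```python
def lag_features(series, max_lag):
    """One row per observation: (value, [lag1, lag2, ..., lagN]).
    Lags that reach before the start of the series are None (SQL NULL)."""
    rows = []
    for i, value in enumerate(series):
        lags = [series[i - k] if i - k >= 0 else None
                for k in range(1, max_lag + 1)]
        rows.append((value, lags))
    return rows

rows = lag_features([10, 20, 30, 40], max_lag=2)
# The row for value 30 sees lag1 = 20 and lag2 = 10.
print(rows[2])   # (30, [20, 10])
```

Note that, as in the SQL view, the early rows are mostly NULL: with 17 lags over 17 weeks, only the last row has even its first few lags populated, which matters for what the model can learn.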
iii) choosing the training data. Again, I will choose the whole set of 17 weeks, as it is not relevant to this discussion how big the training set should be.
CREATE OR REPLACE VIEW t_sales_train AS
SELECT week, sales,
L1, L2, L3, L4, L5, L6, L7, L8, L9, L10,
L11, L12, L13, L14, L15, L16, L17
FROM t_sales_lag a
WHERE week >= 1 AND week <= 17;
iv) build the model
-- exec SYS.DBMS_DATA_MINING.DROP_MODEL('t_SVM');
BEGIN
sys.DBMS_DATA_MINING.CREATE_MODEL( model_name => 't_SVM',
mining_function => dbms_data_mining.regression,
data_table_name => 't_sales_train',
case_id_column_name => 'week',
target_column_name => 'sales');
END;
v) Finally, where I am confused is in applying the prediction function against this model and making sense of the results.
Searching Google, I found two ways of applying this function to my case.
One way is the following:
SELECT week, sales,
PREDICTION(t_SVM USING
LAG(sales,1) OVER (ORDER BY week) as l1,
LAG(sales,2) OVER (ORDER BY week) as l2,
LAG(sales,3) OVER (ORDER BY week) as l3,
LAG(sales,4) OVER (ORDER BY week) as l4,
LAG(sales,5) OVER (ORDER BY week) as l5,
LAG(sales,6) OVER (ORDER BY week) as l6,
LAG(sales,7) OVER (ORDER BY week) as l7,
LAG(sales,8) OVER (ORDER BY week) as l8,
LAG(sales,9) OVER (ORDER BY week) as l9,
LAG(sales,10) OVER (ORDER BY week) as l10,
LAG(sales,11) OVER (ORDER BY week) as l11,
LAG(sales,12) OVER (ORDER BY week) as l12,
LAG(sales,13) OVER (ORDER BY week) as l13,
LAG(sales,14) OVER (ORDER BY week) as l14,
LAG(sales,15) OVER (ORDER BY week) as l15,
LAG(sales,16) OVER (ORDER BY week) as l16,
LAG(sales,17) OVER (ORDER BY week) as l17
) pred
FROM t_sales a;
WEEK, SALES, PREDICTION
1, 55488, 68861.084076412
2, 78336, 104816.995823913
3, 77248, 104816.995823913
4, 106624, 104816.995823913
As you can see, the first row has a value of 68861.084 and the remaining 16 rows all share one and the same value, 104816.995.
Question: where is my week 18 prediction? Or maybe I should say: which one is it?
Another, even more confusing, way of using PREDICTION is against the lag table:
SELECT week, sales,
PREDICTION(t_svm USING a.*) pred
FROM t_sales_lag a;
WEEK, SALES, PREDICTION
1, 55488, 68861.084076412
2, 78336, 75512.3642096908
3, 77248, 85711.5003385927
4, 106624, 98160.5009687461
Each of the 17 rows gets its own "prediction" result.
Same question: which one is my week 18 prediction?
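A regression on lagged values only predicts week 18 when it is fed a row whose lag columns hold the values of weeks 17, 16, and so on; neither query above ever builds that row. The mechanics can be sketched with a plain least-squares fit on a single lag, used here as an illustrative stand-in for the SVM (not Oracle's algorithm):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def forecast_next(series):
    # Train on pairs (sales[t-1] -> sales[t]), then feed the LAST
    # observed value in as lag-1: that constructed row IS the week-18 case.
    a, b = fit_line(series[:-1], series[1:])
    return a + b * series[-1]

# On a perfectly linear series the next value is recovered exactly.
print(forecast_next([1.0, 2.0, 3.0, 4.0, 5.0]))   # 6.0
```

In SQL terms: to score week 18 you must SELECT PREDICTION(... USING w17 AS l1, w16 AS l2, ...) against a one-row input that you construct yourself from the observed weeks.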
Thank you very much for all help that you can provide on this matter.
It is as always highly appreciated.
Serge F.
Kindly let me know how to give input to predict the values. For example, the script to create the model is as follows:
drop table data_4svm;
drop table svm_settings;
begin
dbms_data_mining.drop_model('MODEL_SVMR1');
end;
CREATE TABLE data_4svm (
id NUMBER,
a NUMBER,
b NUMBER
);
INSERT INTO data_4svm VALUES (1,0,0);
INSERT INTO data_4svm VALUES (2,1,1);
INSERT INTO data_4svm VALUES (3,2,4);
INSERT INTO data_4svm VALUES (4,3,9);
commit;
--setting table
CREATE TABLE svm_settings (
setting_name VARCHAR2(30),
setting_value VARCHAR2(30)
);
--settings
BEGIN
INSERT INTO svm_settings (setting_name, setting_value) VALUES
(dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines);
INSERT INTO svm_settings (setting_name, setting_value) VALUES
(dbms_data_mining.svms_kernel_function, dbms_data_mining.svms_linear);
INSERT INTO svm_settings (setting_name, setting_value) VALUES
(dbms_data_mining.svms_active_learning, dbms_data_mining.svms_al_enable);
COMMIT;
END;
--create model
BEGIN
DBMS_DATA_MINING.CREATE_MODEL(
model_name => 'Model_SVMR1',
mining_function => dbms_data_mining.regression,
data_table_name => 'data_4svm',
case_id_column_name => 'ID',
target_column_name => 'B',
settings_table_name => 'svm_settings');
END;
--to show the out put
select class, attribute_name, attribute_value, coefficient
from table(dbms_data_mining.get_model_details_svm('MODEL_SVMR1')) a,
     table(a.attribute_set) b
order by abs(coefficient) desc;
-- to get predicted values (Q1)
SELECT PREDICTION(MODEL_SVMR1 USING *) pred
FROM data_4svm a;
Here I am not sure how to predict the B values. Please suggest the proper usage. Moreover, in a GUI (a .NET Windows Forms application), how can the user give input and the system respond using the query in Q1? -
Beginner installing SQL Server 2014 for Excel Data Mining
Hello, I'm a complete beginner with servers, but I'm desperately trying to gain access to SQL Server for use with the data mining add-in for Excel.
Could someone please help? When I try to make a connection in Excel by choosing DATA MINING > <No Connection> > New, it asks me for a server name in the Connect to Analysis Services box. How can I find out what my server name is, please? I have tried all sorts of names that I have found, such as SQLEXPRESS or localhost, but nothing works. It also tells me to 'Ensure that the server is running'. Another error message I receive: no connection can be made because 'the target machine actively refused it'.
I would be really grateful for some troubleshooting tips.
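"Actively refused" means nothing is listening on the port you are connecting to. A quick, tool-agnostic way to check whether any service is accepting connections is a small socket probe; port 2383 as the default-instance Analysis Services port is an assumption here (a named instance such as SQLEXPRESS listens on a dynamic port instead, visible in SQL Server Configuration Manager):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, unreachable, or timed out
        return False

# 2383 is the default-instance Analysis Services port (assumed, see above).
print(port_open("localhost", 2383))
```

If this prints False, the service either is not installed, is not running, or is listening on a different port, which narrows the problem down before touching Excel at all.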
Thank you
Hi Alberto,
Thanks very much for getting back to me.
Here are the results of the Analysis Services report:
Microsoft SQL Server 2014 Setup Discovery Report

Product                   | Instance   | Instance ID        | Feature                  | Language | Edition         | Version     | Clustered | Configured
Microsoft SQL Server 2014 | SQLEXPRESS | MSSQL12.SQLEXPRESS | Database Engine Services | 1033     | Express Edition | 12.0.2000.8 | No        | Yes
Microsoft SQL Server 2014 | SQLEXPRESS | MSSQL12.SQLEXPRESS | SQL Server Replication   | 1033     | Express Edition | 12.0.2000.8 | No        | Yes
I then ran the System Configuration Checker and these are the results:
Passed: 9. Failed: 1.
Edition WOW64 Platform: Failed
(I can't paste the images as my account has not been verified)
Should I assume that I have installed the wrong version? I am running 64-bit Windows 8.
I just need the most basic version for personal data analysis in Excel with the Data Mining Add-in.
Thanks again -
Processing a data mining structure throws an error
Processing a data mining structure throws an exception stating the following:
"Errors in the OLAP storage engine: An error occurred while the 'IDK' attribute of the 'Test IDK' dimension from the 'Project1' database was being processed."
"Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute IDK of Dimension: Test IDK from Database: project1, Record:17072643"
I am using a DB view as a DSV. It does not have a unique primary key. Since the DB view returns multiple results per IDK, the IDK repeats across multiple rows. The same IDK is defined as the key column for the mining model. Not sure if that is the issue. Please help!
Thanks
Shallu
Hi Shallu,
According to your description, you use a database view in the data source view that does not have a primary key, so you get the error
Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute IDK of Dimension: Test IDK from Database: project1, Record:1707264
when processing the project, right?
In this case, please refer to the links below which describe the similar issue.
http://agilebi.com/ddarden/2009/01/06/analysis-services-error-the-attribute-key-cannot-be-found-when-processing-a-dimension/
http://social.technet.microsoft.com/Forums/systemcenter/en-US/432deebe-52b8-4245-af85-5aa2eecd421a/scsm2012-cube-processing-failing-on-two-cubes-configitemdimkey-not-found?forum=dwreportingdashboards
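Outside SSAS, the root cause is easy to confirm: the column declared as the key must identify each row uniquely. A throwaway check along these lines (column names invented for illustration) will show whether the view really does repeat IDK:

```python
from collections import Counter

def duplicate_keys(rows, key):
    """Return the key values that occur more than once, with their counts."""
    counts = Counter(row[key] for row in rows)
    return {k: n for k, n in counts.items() if n > 1}

# Rows as the DSV view would return them; 'IDK' repeats, which is exactly
# what makes it invalid as a dimension / mining-model key column.
rows = [{"IDK": 1, "x": "a"}, {"IDK": 1, "x": "b"}, {"IDK": 2, "x": "c"}]
print(duplicate_keys(rows, "IDK"))   # {1: 2}
```

If duplicates show up, either aggregate the view down to one row per key or add a genuinely unique column and use that as the key.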
Regards,
Charlie Liao
TechNet Community Support -
Problem in Exploring the Decision Tree Model on Lesson 4 Basic Data Mining SSAS
Hello everyone,
I've tried to follow all the steps mentioned in Basic Data Mining Tutorials, but I've an odd problem in the Lesson 4 - Exploring the Decision Tree Model (http://technet.microsoft.com/en-us/library/cc879269.aspx).
It's stated that "As you view the TM_Decision_Tree model in the Decision Tree viewer, you can see the most important attributes at the left side of the chart. 'Most important' means that these attributes have the greatest influence on the outcome. Attributes further down the tree (to the right of the chart) have less of an effect. In this example, age is the single most important factor in predicting bike buying. The model groups customers by age, and then shows the next most important attribute for each age group. For example, in the group of customers aged 34 to 40, the number of cars owned is the strongest predictor after age."
But I got a different result from what is mentioned in the tutorial: I got the number of cars owned as the most important attribute, followed by the attribute age.
Do you guys know why?
Thanks in advance
Hi,
BEGIN
INSERT INTO DT_CA_SETTINGS_TEST (SETTING_NAME, SETTING_VALUE) VALUES
(dbms_data_mining.TREE_TERM_MINPCT_NODE,to_char(1));
END;
That is not "Mode", it's "NODE".
Now execute.
Cheers. -
Oracle9i Enterprise Edition Release 9.0.1.1.1 Data Mining API
Is the Data Mining API contained in Oracle9i Enterprise Edition Release 9.0.1.1.1, or in Release 2 only?
Thanx
No, it is not so simple. This "le signe est sur l'avant dernier octet" ("the sign is on the next-to-last byte") means that the sign is encoded in the last character of the number.
The signs "é" and "I" are just from my example; they are somehow calculated, but I don't know how.
If I take my example, there is a field with format S9(13)V99 and with value 00000000071049é (with the last sign "é"), and I think the last sign is somehow calculated from the number. And then from this value I get the number.
00000000071049é
FFFFFFFFFFFFFFC
000000000710490 => +0000000007104,90
S9999999999999V99
Whatever I try, converting from binary to hex or anything else, I don't get the result or the last sign, and I wonder whether there is some function in PL/SQL that I can use to get the result I want.
Or do you have some idea how to help me get from "00000000071049é" to "+0000000007104,90" in the example above?
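This is the classic zoned-decimal "signed overpunch" used in COBOL/EBCDIC data: the last byte carries both the final digit and the sign ('{', A–I mean +0..+9; '}', J–R mean −0..−9). A sketch of a decoder follows; the assumption that characters such as 'é' are really '{' (EBCDIC 0xC0) displayed under the wrong code page is mine, inferred from the example:

```python
# Decode an EBCDIC "signed overpunch" (zoned decimal) value such as
# 000000204592D. '{' and A-I encode +0..+9; '}' and J-R encode -0..-9.
from decimal import Decimal

POSITIVE = {c: str(i) for i, c in enumerate('{ABCDEFGHI')}
NEGATIVE = {c: str(i) for i, c in enumerate('}JKLMNOPQR')}

def decode_overpunch(raw: str, decimals: int = 2) -> Decimal:
    digits, last = raw[:-1], raw[-1]
    if last in POSITIVE:
        sign, d = 1, POSITIVE[last]
    elif last in NEGATIVE:
        sign, d = -1, NEGATIVE[last]
    else:
        raise ValueError(f"unknown overpunch character: {last!r}")
    # Append the decoded last digit, then place the implied decimal point (V99).
    return sign * (Decimal(digits + d) / (10 ** decimals))

print(decode_overpunch('000000204592D'))   # D = +4 -> 20459.24
# Assumption: 'é' is '{' (+0) shown in the wrong code page, so the poster's
# value decodes to +7104.90 as expected:
print(decode_overpunch('00000000071049é'.replace('é', '{')))   # 7104.90
```

A robust fix is to read the raw bytes and inspect the zone nibble (0xC = positive, 0xD = negative) instead of relying on how the byte happens to render as text.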
Here are some other examples, just for help:
000000204592D
000000183882D
000000139441C
000000182979H
000000083361F
000000083361F
000000083361F
000000059033F
000000066273E
000000069011G
000000102615B
000000092362F
000000138215‚
000000138215‚
000000138215‚
000000138215‚
000000106760C
000000106760C
000000106760C
000000115024A
000000115024A
000000115024A
000000115024A
000000115024A
000000088149B
000000084459I -
Help to do data mining and transformation...
I have a specific task to accomplish and I am wondering if Oracle Data Mining is the correct tool to use, and if not what possibly might be. Here is brief description:
I have a table with about 500 million rows of data per day: transactional internet traffic data. It contains about 20 columns/dimensions. The requirement is to transform this flat data into a new table that contains (as one column each) every unique variation of those dimension values recorded.
So, for example, if we have 3 dimensions, say gender, age and zip code, we would determine each unique combination of those found in the actual data and write out x number of columns to identify them, storing a count value for each one. The count just tells us how many occurrences of that combination were found in the data, and the end result will of course be an aggregated table for fast querying on all observed dimensions.
For performance reasons we want to pass through the data only once.
We tried cubes, but this takes too long (because it also tries to build out all the non-observed combinations), and we know we could try a code approach but fear this may take too long as well. The problem is of course mainly one of performance, with that many rows and possible combinations to consider.
Any ideas?
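In miniature, what is wanted is exactly one hashed-aggregation pass: count only the combinations that occur, never enumerating the full cross product (this is also what a plain GROUP BY over the chosen dimensions computes). A sketch:

```python
from collections import Counter

def observed_combinations(rows, dims):
    """One pass over the data: count each dimension-value combination
    that actually occurs; unobserved combinations are never materialised."""
    return Counter(tuple(row[d] for d in dims) for row in rows)

# Toy rows; real column names would come from the 20-dimension table.
rows = [
    {"gender": "M", "age": 30, "zip": "94105"},
    {"gender": "M", "age": 30, "zip": "94105"},
    {"gender": "F", "age": 25, "zip": "10001"},
]
counts = observed_combinations(rows, ["gender", "age", "zip"])
print(counts[("M", 30, "94105")])   # 2 -- only observed combos are stored
```

Memory scales with the number of distinct observed combinations, not with the product of dimension cardinalities, which is the property the cube approach was missing.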
Thanks in advance.
After doing some research I realize what I need is a cube, but one that does not contain every single dimension combination, only those that actually exist (to speed up creation time and reduce storage space). Is this something Oracle supports? Anyone?
-
Data Mining Reports against a Cube using Excel
I'm quite new to Data Mining so please bear with me if these are silly questions.
1. Can you create a data mining model against a cube from within Excel? I seem to only be able to target a relational database.
2. Once you have a DM model in place and it has been run against data in Excel, can the results be published to SharePoint and automatically updated? So, for instance, you have 10,000 rows of data in Excel that have been categorised, and you want this updated on a weekly basis so that the categories are updated as the data in the rows changes; someone might change category based on their usage of "something", for instance.
Thanks,
Marcus
Hello Marcus,
1. No. To modify the SSAS database, which contains the data mining model, you need BIDS (Business Intelligence Development Studio) for versions 2005 to 2008 R2, or SSDT (SQL Server Data Tools) for versions 2012 to 2014.
2. You could use the Data Mining Add-in for Excel to create local data mining models; see
http://office.microsoft.com/en-us/excel-help/data-mining-add-ins-HA010342915.aspx
Olaf Helper
[ Blog] [ Xing] [ MVP] -
Custom Activity not returning Published Data when ActivityWarning() is thrown
I'm using Visual Studio 2012 on Windows 8.1 to develop a custom ActivityMonitor class using the imperative approach with the Microsoft System Center 2012 R2 Orchestrator Integration Toolkit. The ActivityMonitor inherits from IActivity, has two inputs and three outputs declared in the Design method, and publishes three outputs in the Execute method.
I want to use this ActivityMonitor so that the Orchestrator smart link only processes published data when a Warning is encountered. Therefore, after calling the IActivityResponse.Publish method three times (once for each output), I throw a new ActivityWarning exception.
Unfortunately, after throwing the ActivityWarning exception, no published data is passed down the smart link. Since this published data is used by the next Activity, it is causing that Activity to fail.
Why is my published data getting discarded just because an ActivityWarning was thrown? How can I make this work?
If this post was helpful, please click the little "Vote as Helpful" button :)
Trevor Sullivan
Trevor Sullivan's Tech Room
Twitter Profile
Hi Mithun Sharma,
Are you getting the desired result when you execute the RFC FM directly in SAP with the same values that you pass from .NET?
Regards
Madhan Doraikannan -
Everyone,
Has anyone already used Weka with Oracle Database? I must do a comparison between ODM and Weka.
What are the advantages and drawbacks of ODM and Weka?
Thank you for your answer.
PY
I agree with Mark Kelly; ODMr has the inherent advantages of in-database mining in Oracle Database.
I haven't used Weka much; I tried it once. It basically gets the data from the database using a JDBC driver and does the mining in the desktop environment.
Another advantage of Oracle Data Miner (ODMr) is that it is designed to give good prediction results very quickly, even for data mining beginners who don't know much about the required transformations, algorithm settings, etc., especially with the Guided Analytics "Activity" template-based approach, which derives intelligent defaults for the transformations, algorithm settings, and so on. ODMr also provides the flexibility required by advanced users who want to control the transformations and algorithm settings themselves.
-Sunil -
Data Mining - DM_Nested_Numericals
Hi,
I'm using Data Mining (PL/SQL) to analyze microarray data and have loaded the data, as per section 4.4.1.2 of the Oracle Data Mining Application Developer's Guide, as a nested table with DM_NESTED_NUMERICALS. The tables loaded OK, the view is defined OK, and the SVM analysis runs OK, but the analysis is definitely wrong, as I get the same results with and without normalisation on the numeric expression attribute in the nested table.
How can I see the embedded nested expression attribute? Selecting * from the view just shows the non-nested attributes, with the gene_expr nested table as a blank column.
data mining 10.1.0.2
Thanks
Martin
Martin,
A couple of questions:
- How many rows and attributes in the data?
- How many classes (distinct values) in the target column?
- What is the class distribution for the target (counts for each distinct value)?
- Have you tried 0-1 normalization?
- Do you have other attributes besides the nested column and the caseID?
It is possible to get the same accuracy from normalized and non-normalized data, though it is a bit surprising to get such a low accuracy. Also, Predictive Analytics (DBMS_PREDICTIVE_ANALYTICS) doesn't support nested columns. Could you describe the list of columns returned by Predictive Analytics?
Regards,
--Marcos -
Error during Text Mining execution [Data Mining System Error ORA-12988: ]
Hi,
When I run the dmkmdemo.java sample program with the code snippet below (starting at <code>) from the ODM 11g bundle, I see the error below (starting at <error>). I've created the user dmuser and run the scripts:
<code>
public static void prepareData() throws JDMException {
    System.out.println("---------------------------------------------------");
    System.out.println("--- Prepare Data ---");
    System.out.println("---------------------------------------------------");
    String inputDataURI = null;
    String outputDataURI = null;
    OraTransformationTask xformTask = null;
    // 1. Prepare build data
    inputDataURI = "MINING_BUILD_TEXT";
    outputDataURI = "mining_build_nested_text"; // NESTED_TABLE_BUILD_TEXT
    // Create OraTextTransform
    OraTextTransform txtXform = (OraTextTransformImpl) m_textXformFactory.create(
        inputDataURI,                   // name of the input data set
        outputDataURI,                  // name of the transformation result
        "CUST_ID",                      // case id column
        new String[] { "COMMENTS" } );  // text column names
    // Create transformation task
    System.out.println("sanku *** JDM transformation");
    xformTask = m_xformTaskFactory.create(txtXform);
    txtXform.setTextColumnList(new String[] { "COMMENTS" }); // for nested column list
    executeTask(xformTask, "kmPrepareBuildTask_jdm");
}
</code>
<error>
kmPrepareBuildTask_jdm is started, please wait. kmPrepareBuildTask_jdm is failed.
Failure Description: ORA-40101: Data Mining System Error ORA-40101: Data Mining System Error ORA-12988: cannot drop column from table owned by SYS
ORA-06512: at "SYS.DBMS_JDM_INTERNAL", line 2772
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_JDM_INTERNAL", line 3000
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_JDM_INTERNAL", line 3021
ORA-06512: at line 1
</error>
Any pointers or help here please? Thanks.
Sanjeev
Hi,
You should also consider looking at the PL/SQL implementations/APIs for data mining.
There is more data mining functionality within the PL/SQL domain, and that is definitely where the emphasis will be going forward.
To gain appreciation for the pl/sql approach you can do the following:
1) Data Miner Classic provides an option to generate PL/SQL code to replicate a mining activity.
2) Using Data Miner Workflow, you can generate the sql for transformations.
We will be coming out with broader sql script generation for workflows in future releases.
3) Oracle Data Mining has sample code available on OTN.
Thanks, Mark