Processing a data mining structure throws an error

Processing a data mining structure throws an exception stating the following:
"Errors in the OLAP storage engine: An error occurred while the 'IDK' attribute of the 'Test  IDK' dimension from the 'Project1' database was being processed."
"Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute IDK of Dimension: Test IDK from Database: project1, Record:17072643"
I am using a database view as the data source view (DSV). The view does not have a unique primary key: it returns multiple rows per IDK, so the IDK value repeats across rows. That same IDK column is defined as the Key column for the mining model. I am not sure whether that is the
issue. Please help!
Thanks
Shallu

Hi Shallu,
According to your description, you use a database view in the data source view that does not have a primary key, so you get the error
Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute IDK of Dimension: Test IDK from Database: project1, Record:1707264
when processing the project, right?
In this case, please refer to the links below, which describe similar issues.
http://agilebi.com/ddarden/2009/01/06/analysis-services-error-the-attribute-key-cannot-be-found-when-processing-a-dimension/
http://social.technet.microsoft.com/Forums/systemcenter/en-US/432deebe-52b8-4245-af85-5aa2eecd421a/scsm2012-cube-processing-failing-on-two-cubes-configitemdimkey-not-found?forum=dwreportingdashboards
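Since the same IDK appears on multiple rows of the view, a quick first check is to confirm the key really is unique before processing. A minimal sketch of the idea with hypothetical data (Python, just for illustration):

```python
# Sketch: a dimension/mining-structure key must identify each row uniquely.
# When the DSV view returns several rows per IDK, SSAS cannot resolve the
# attribute key and reports it as an unknown member.
from collections import Counter

rows = [  # hypothetical rows such a view might return
    {"IDK": 101, "value": "a"},
    {"IDK": 101, "value": "b"},  # IDK repeats -> not a valid key
    {"IDK": 102, "value": "c"},
]

counts = Counter(r["IDK"] for r in rows)
duplicates = [k for k, n in counts.items() if n > 1]
print(duplicates)  # [101] -> deduplicate the view (SELECT DISTINCT / GROUP BY)
```

If this kind of check finds duplicates, deduplicating the view (or choosing a genuinely unique key column) is usually the fix.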
Regards,
Charlie Liao
TechNet Community Support

Similar Messages

  • In a SharePoint browser-enabled InfoPath form, Date and Date Time Pickers throw script errors under an https URL.

    HI All,
    I am working with a SharePoint 2010 browser-enabled InfoPath form that has two InfoPath date picker controls. The form works fine under the normal "http://" URL, but when we open the same site over "https://" and select a date
    from the picker, we get script errors.
    There are no custom scripts used on this site; it is a default SharePoint 2010 publishing site.
    I also checked with the "Date Time Picker" InfoPath control, but it has the same issue.
    Below are the errors for reference:-
    Any help is appreciated..!
    Thanks.

    Thanks Linda, for your quick response.
    The Minimal Download Strategy feature is not available in SharePoint 2010, and the start.js file is also not available.
    Can you please suggest the exact location of the "start.js" file in SharePoint 2010 so I can go further, or suggest another way to find or debug the process.

  • Collation error when data mining

    I'm trying to process a data mining model but keep getting this error: "Errors in the OLAP storage
    engine: The sort order specified for distinct count records is incorrect. Errors in the OLAP storage engine: An error occurred while processing the..."
    I changed my SQL Server collation to Latin1_General_CS_AI with a compatibility level of 100, and in SSAS I set it to Latin1_General_100 with "Case Sensitive" checked... but I still get the same error. Am I not setting the collation correctly?

    I changed my SQL Server collation to Latin1_General_CS_AI ...
    The SQL Server setting is just a default value for new databases, used when you don't explicitly define the collation in the CREATE DATABASE command; the database setting is likewise just a default value. The effective collation is defined at the column level.
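    To see why a sort-order (collation) mismatch matters for distinct count processing, here is a small illustration with hypothetical strings (Python, just to show the idea): a case-sensitive and a case-insensitive ordering of the same values can disagree, which is the "sort order ... is incorrect" situation.

```python
# Sketch: distinct count processing expects rows in the exact order the
# relational engine produced them; differing collations order strings differently.
rows = ["apple", "Banana", "banana", "Apple"]

case_sensitive = sorted(rows)                   # ordinal order, like a _CS collation
case_insensitive = sorted(rows, key=str.lower)  # like a _CI collation (stable ties)

print(case_sensitive)    # ['Apple', 'Banana', 'apple', 'banana']
print(case_insensitive)  # ['apple', 'Apple', 'Banana', 'banana']
print(case_sensitive == case_insensitive)  # False -> the two engines disagree
```

    The practical consequence is that the relational collation and the SSAS collation must agree on ordering, not merely be set to "something case sensitive" on each side.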
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • What software do I need to install to get "Data Mining Framework"

    Hi,
    from edelivery.oracle.com I have downloaded:
    1. Hyperion Essbase - System 9 Release 9.3.1.2 Server Windows Installer
    2. Essbase Administration Services Release 9.3.1 Windows Installer
    I installed both software and I started Administration Services Console, log in etc. Essbase server is working fine.
    In Administration Services Console I click on Essbase Servers | localhost | Data Mining | Algorithms and get the error:
    "message from server [Data Mining Framework is not available. Please contact Administrator]".
    It looks like I need to install some other software to get access to the data mining features. Which software do I need to install?
    Regards,
    Grofaty


  • Oracle Data Mining Workshop

    Oracle is providing a free 1 day on-site technical workshop designed to help customers and prospects move forward with the data mining technologies. The workshop consists of the following:
    1. Understanding of data mining and how it compares to other analytical techniques, the data mining process and data mining in the 9i database.
    2. Taking a business problem and showing how to turn it into a data mining problem.
    3. Based on the data mining problem, what are the best methodologies, techniques and algorithms to use. The pros and cons of using each.
    4. What types of data are needed for the problem, and how the data should be transformed and stored in the database.
    5. How to optimize and tune the 9i database for development and production data mining.
    This workshop is given by Richard Solari who is Oracle's Technical Director of Data Mining Services. For a complete agenda and to schedule your workshop, please contact Richard at [email protected]

    Its primary purpose is to help in the development of data mining applications. You can as you explain utilize it as well as an analytical tool. We expect that any mining application will require analysts and developers to work together in building mining applications.

  • Creating mining structure as dimension using existing Database

    I have a requirement to create a data mining structure as a dimension using an Analysis Services database as the source.
    Could you kindly share the steps?

    user12953093 wrote:
    I'm trying to create a template using DBCA. When I start it and choose Welcome > Manage Templates > Create a database template [select From an existing database (structure as well as data)], this option is greyed out and not possible to choose; the others are available.
    Can anyone tell me why, and what to do to make it available?
    Thanks
    Magnus Johansson

    Do you have an existing database from which to create a template?
    Do you have an entry for said existing database in your oratab file?

  • Processing Mining Structure Error

    I have a problem while processing mining structure...
    This is my Mining Structure
    CREATE MINING STRUCTURE [QUESTION1] (
        [ownerReviewID] LONG KEY,
        [makeName] TEXT DISCRETE,
        [modelName] TEXT DISCRETE,
        [price] LONG CONTINUOUS,
        [priceDisc] LONG DISCRETIZED(AUTOMATIC),
        [seriesYear] LONG CONTINUOUS,
        [seriesYearDisc] LONG DISCRETIZED(AUTOMATIC),
        [ownerName] TEXT DISCRETE,
        [ownedSinceNew] BOOLEAN DISCRETE,
        [ownedPeriod] LONG DISCRETE,
        [feature_ratings] TABLE (
            [transID] LONG KEY,
            [ownerReviewID] LONG DISCRETE,
            [feature] TEXT DISCRETE,
            [rating] LONG CONTINUOUS,
            [ratingDisc] LONG DISCRETIZED(AUTOMATIC)
        )
    ) WITH HOLDOUT (30 PERCENT OR 10000 CASES);
    I've created a Mining Model with Naive Bayes:
    ALTER MINING STRUCTURE [QUESTION1]
    ADD MINING MODEL [PredictReBuy-NBayes] (
        [ownerReviewID],
        [makeName],
        [modelName],
        [priceDisc] AS [price],
        [seriesYearDisc] AS [seriesYear],
        [ownerName],
        [ownedSinceNew],
        [feature_ratings] (
            [transID],
            [feature],
            [ratingDisc] AS [rating] PREDICT
        )
    ) USING Microsoft_Naive_Bayes
    So far, the 2 codes above work fine, all executed.
    The error shown at the bottom comes up when I try to execute the processing code. This is my processing code:
    INSERT INTO MINING STRUCTURE [QUESTION1] (
        [ownerReviewID], [makeName], [modelName], [price], [seriesYear],
        [ownerName], [ownedSinceNew], [ownedPeriod],
        [feature_ratings] (SKIP, [transID])
    )
    SHAPE {
        OPENQUERY([Car Rating], 'SELECT ownerReviewID, makeName, modelName, price, seriesYear, ownerName, ownedSinceNew, ownedPeriod FROM dbo.owner_reviews ORDER BY ownerReviewID')
    }
    APPEND (
        { OPENQUERY([Car Rating], 'SELECT transID, ownerReviewID, feature, rating FROM dbo.feature_ratings ORDER BY transID') }
        RELATE ownerReviewID TO ownerReviewID
    ) AS [feature_ratings]
    I'm currently receiving this error and I don't understand how to solve it... any ideas? Thanks!
    Executing the query ...Error (Data mining): The count of the specified columns does not match the count of
    the mining model columns or the input rowset columns.Execution complete
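    One plausible reading of that error, sketched here with hypothetical lists: the nested-table binding names two items (SKIP and [transID]) while the APPEND query returns four columns, so the counts don't line up.

```python
# Sketch: the DMX error fires when the number of bound columns (including SKIP
# placeholders) differs from the number of columns the input rowset returns.
bound_nested = ["SKIP", "transID"]  # from the INSERT binding feature_ratings(SKIP, [transID])
query_columns = ["transID", "ownerReviewID", "feature", "rating"]  # from the APPEND OPENQUERY

print(len(bound_nested), len(query_columns))    # 2 4
print(len(bound_nested) == len(query_columns))  # False -> the reported error
```

    Matching the binding list to the query's column list (adding SKIP placeholders for every unused column, or trimming the SELECT) is the usual way to reconcile the counts.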

    Hi, it seems that you have posted another thread on Stack Overflow:
    http://stackoverflow.com/questions/23823448/dmx-processing-mining-structure-error
    Nice to see you have got it resolved.
    Regards, Leo

  • Data Mining - Scalar Mining Structure Column Data type error...

    Hoping someone will have a solution for this error:
    Errors in the metadata manager. The data type of the '~CaseDetail ~MG-Fact Voic~6' measure must be the same as its source data type. This is because the aggregate function is not set to count or distinct count.
    Is the problem that the data type of the column used in the mining structure is Long while the underlying field in the cube is BigInt, or am I barking up the wrong tree?

    You're right, the error does occur when processing the mining model. I've built a simple mining structure to look into the problem - it's just a decision tree based on a cube. The cube itself has a single fact table with numerous related dimensions. It has two measure groups - one for the use of distinct count; in the other group the aggregation functions used are count and sum.
    The measure which seems to be causing the problem uses the sum aggregation in the cube. The underlying table has this field as a BigInt, which was used due to rounding issues on the cube when the original Int data type was used.
    In the mining model the same field appears as Continuous with a data type of Long. It has been tried in various guises as PredictOnly and Ignore modes, both with the same results.
    The mining model used to work without a problem when the underlying type was an Integer, but then the cube would report the wrong figures.
    Interestingly, when I build a mining model directly against the source data there were no problems.
    Let me know if you need any further information.
    Thanks

  • Error (Data mining): The specified mining structure does not contain a valid model for the current task.

    I'm trying to run the Cross Validation report on a mining structure that contains just a Microsoft Association Rules mining model. For Target Attribute, I've tried:
    Actual(Service Description).SE value
    Actual([Service Description]).[SE value]
    Actual(Service Description)
    Actual([Service Description])
    just because I don't know the exact correct format, but none of them worked, and I always get the following error:
    Error (Data mining): The specified mining structure does not contain a valid model for the current task.
    the following is my mining model structure

    The Microsoft Association Rules algorithm does not support cross-validation.
    Mark Tabladillo PhD (MVP, SAS Expert; MCT, MCITP, MCAD .NET) http://www.marktab.net

  • Data Mining overflow error while loading the mining viewer

    I developed a time series model that processes successfully. The problem I have is that when I try to open the mining model viewer, which displays the time series on a chart, I get the error below:
    An error occurred while a prediction query was being executed:
    'Error (Data mining): An overflow was encountered while converting a predicted value to the '' column, at line 1, column 33. A higher precision data type for the column is recommended.'.
    ADDITIONAL INFORMATION:
    Error (Data mining): An overflow was encountered while converting a predicted value to the '' column, at line 1, column 33. A higher precision data type for the column is recommended. (Microsoft OLE DB Provider for Analysis Services 2008 R2.)
    It looks like the probability value (which is a float data type, for example 0.4452938765) is being stored in a particular column, and the data type of that column cannot hold the range of decimal places that the probability has.
    I don't know which column to look for to change its data type as the error message says. Would anyone have any idea?
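    As a rough illustration of an overflow of this kind (hypothetical values, not the model's actual output): a predicted value can simply exceed what the target column's type can hold, which is why the message recommends a higher-precision type.

```python
# Sketch: a predicted value larger than the destination column's type overflows.
INT32_MAX = 2**31 - 1  # range of a 32-bit integer column

predicted = 3.5e9      # hypothetical prediction from the time series model
fits = predicted <= INT32_MAX
print(fits)  # False -> a wider type (e.g. 64-bit integer or double) is needed
```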

    Hello,
    Thanks for your posting.
    It is more related to a Data Mining issue in this case. I have moved the thread to the
    Data Mining forum for better support. Thanks for your understanding.
    Regards,
    Elvis Long
    TechNet Community Support

  • Error during Text Mining execution [Data Mining System Error ORA-12988: ]

    Hi,
    When I run the dmkmdemo.java sample program from the ODM 11g bundle with the code snippet below (starting at <code>), I see the error below (starting at <error>). I have created the user dmuser and run the scripts:
    <code>
    public static void prepareData() throws JDMException {
        System.out.println("---------------------------------------------------");
        System.out.println("--- Prepare Data ---");
        System.out.println("---------------------------------------------------");
        String inputDataURI = "MINING_BUILD_TEXT";
        String outputDataURI = "mining_build_nested_text"; // NESTED_TABLE_BUILD_TEXT
        OraTransformationTask xformTask = null;
        // 1. Prepare build data: create the OraTextTransform
        OraTextTransform txtXform = (OraTextTransformImpl) m_textXformFactory.create(
                inputDataURI,                  // name of the input data set
                outputDataURI,                 // name of the transformation result
                "CUST_ID",                     // case id column
                new String[] { "COMMENTS" }); // text column names
        // Create and execute the transformation task
        System.out.println("sanku *** JDM transformation");
        xformTask = m_xformTaskFactory.create(txtXform);
        txtXform.setTextColumnList(new String[] { "COMMENTS" }); // for nested column list
        executeTask(xformTask, "kmPrepareBuildTask_jdm");
    }
    </code>
    <error>
    kmPrepareBuildTask_jdm is started, please wait. kmPrepareBuildTask_jdm is failed.
    Failure Description: ORA-40101: Data Mining System Error ORA-40101: Data Mining System Error ORA-12988: cannot drop column from table owned by SYS
    ORA-06512: at "SYS.DBMS_JDM_INTERNAL", line 2772
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_JDM_INTERNAL", line 3000
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_JDM_INTERNAL", line 3021
    ORA-06512: at line 1
    </error>
    Any pointers or help here please? Thanks.
    Sanjeev

    Hi,
    You should also consider looking at the PL/SQL implementations/APIs for data mining.
    There is more data mining functionality within the PL/SQL domain, and that is definitely where the emphasis will be going forward.
    To gain an appreciation for the PL/SQL approach you can do the following:
    1) Data Miner Classic provides an option to generate PL/SQL code to replicate a mining activity.
    2) With Data Miner Workflow, you can generate the SQL for transformations. We will be coming out with broader SQL script generation for workflows in future releases.
    3) Oracle Data Mining has sample code available on OTN.
    Thanks,
    Mark

  • Bcp doesn't throw an error when the data length exceeds the size of the column

    Hi,
    We are using bcp in SQL Server 2008 R2 to import data from a flat file. When the data length exceeds the size of the column, bcp doesn't throw any error; instead it ignores the row.
    Please suggest how to truncate the data and load it into the table.
    Thanks,
    Pasha

    Hi Pasha,
    According to your description, you want to import data from a flat file into a SQL Server table, truncating the data, in SQL Server 2008 R2. To achieve this, we can use the Import and Export Wizard. For more details, please refer to the following steps:
    Launch SSMS by clicking SQL Server Management Studio from the Microsoft SQL Server program group.
    Right click on the destination database in the Object Explorer, select Tasks, then Import Data from the context menu to launch the Import Wizard.
    Choose Flat File Source as the Data Source, then browse to the flat file.
    Choose SQL Server Native Client 10.0 as Destination, then select the destination database.
    Click Edit Mappings button to change column size or other properties.
    Complete the wizard to finish the import.
    For an example of how to use the Import and Export Wizard, please refer to the blog below:
    http://www.mssqltips.com/sqlservertutorial/203/simple-way-to-import-data-into-sql-server/
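    If the file must still go through bcp rather than the wizard, another option is to pre-truncate each field to the destination column width before loading. A minimal sketch; the column names and widths here are hypothetical:

```python
# Sketch: trim flat-file fields to the destination column sizes so no row is
# ignored for being too long. Widths below are assumptions, not real metadata.
column_widths = {"name": 10, "city": 5}  # destination column sizes (assumed)

def truncate_row(row, widths):
    # keep at most widths[col] characters per field
    return {col: val[: widths[col]] for col, val in row.items()}

row = {"name": "Christopher Alexander", "city": "Indianapolis"}
print(truncate_row(row, column_widths))
# {'name': 'Christophe', 'city': 'India'}
```

    The same idea applies whatever tool does the trimming: make the data fit the columns before bcp sees it, since bcp itself will not truncate for you.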
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Error While updating Process form data Using Scheduler

    Hi All,
    I am trying to update process form data (for example, last name) using a scheduled task. I am getting an error while updating the field.
    Code :
    HashMap<String, String> map = new HashMap<String, String>();
    map.put("UD_EBS_PF_LASTNAME", "lastname");
    formintf.setProcessFormData(instancekey, map);  // I am getting the error at this line
    The error says:
    Thor.API.Exceptions.tcAPIException: The following required fields have not been given values:EBS IT Resource : The following required fields have not been given values:EBS IT Resource
        at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:234)
        at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:348)
        at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:259)
        at Thor.API.Operations.tcFormInstanceOperationsIntfEJB_h6wb8n_tcFormInstanceOperationsIntfRemoteImpl_1036_WLStub.setProcessFormDatax(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at weblogic.ejb.container.internal.RemoteBusinessIntfProxy.invoke(RemoteBusinessIntfProxy.java:85)
        at $Proxy2.setProcessFormDatax(Unknown Source)
        at Thor.API.Operations.tcFormInstanceOperationsIntfDelegate.setProcessFormData(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at Thor.API.Base.SecurityInvocationHandler$1.run(SecurityInvocationHandler.java:68)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
        at weblogic.security.Security.runAs(Security.java:41)
        at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(weblogicLoginSession.java:52)
        at Thor.API.Base.SecurityInvocationHandler.invoke(SecurityInvocationHandler.java:79)
        at $Proxy3.setProcessFormData(Unknown Source)
        at com.wyndham.tasks.AssignRandomPasswordToAllUsersSchedulerTest.execute(AssignRandomPasswordToAllUsersSchedulerTest.java:182)
        at com.wyndham.tasks.AssignRandomPasswordToAllUsersSchedulerTest.main(AssignRandomPasswordToAllUsersSchedulerTest.java:63)
    Caused by: Thor.API.Exceptions.tcAPIException: The following required fields have not been given values:EBS IT Resource : The following required fields have not been given values:EBS IT Resource
        at com.thortech.xl.ejb.beansimpl.tcFormInstanceOperationsBean.setProcessFormData(tcFormInstanceOperationsBean.java:761)
        at com.thortech.xl.ejb.beansimpl.tcFormInstanceOperationsBean.setProcessFormData(tcFormInstanceOperationsBean.java:426)
        at Thor.API.Operations.tcFormInstanceOperationsIntfEJB.setProcessFormDatax(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

    Is it possible that there was a field ZDATE in your form interface/context and now it is gone? I guess some source has changed, so the field in the form (bound to the now nonexistent field) cannot be processed. Otto

  • Error occurred while processing a data cube

    While processing a cube I am getting the following errors. Please help me fix them:
    1) Errors in the high-level relational engine. The following exception occurred while an operation was being performed on a data source view: Method not found: 'System.Threading.Tasks.Task`1<!!0> System.Threading.Tasks.Task.FromResult(!!0)'..
    2)Errors in the high-level relational engine. The following exception occurred while an operation was being performed on a data source view: .
    3)Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Dimension Metier', Name of 'Dimension Metier' was being processed.
    4)Errors in the OLAP storage engine: An error occurred while the 'Metier ID' attribute of the 'Dimension Metier' dimension from the 'MultidimensionalProject1' database was being processed.
    5)Internal error: The operation terminated unsuccessfully.
    6)Server: The current operation was cancelled because another operation in the transaction failed.
    thank you

    Hi Immortal12,
    What is the version of your SQL Server Analysis Services? Did you encounter this issue while processing the cube in SQL Server Management Studio? Please elaborate on your scenario in more detail.
    Currently, we can't figure out the root cause of this issue, so please help us collect more log information for further investigation. Here are some good articles on collecting SSAS log information:
    Error Configuration for Cube, Partition, and Dimension Processing (SSAS - Multidimensional):
    http://msdn.microsoft.com/en-us/library/ms180058.aspx
    Data collection for troubleshooting Analysis Services issues:
    http://blogs.msdn.com/b/as_emea/archive/2012/01/02/initial-data-collection-for-troubleshooting-analysis-services-issues.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • SQL Server Agent jobs throw random errors while the ETL process works fine.

    Hi,
    I have this problem with SQL agent jobs.
    We went from SQL 2008 to SQL 2012 and migrated SSIS without problems. The ETL process runs fine and the OLAP cubes are processed.
    I have a job which calls the master execution dtsx for a particular customer. When the ETL load and OLAP processing finish, it should go on to the next customer. The problem I have is that the agent logs some errors for random customers. I tried running only two clients
    in one job and this works; then I add the third client and it fails (log-wise) for a customer which ran successfully before, when there were only two customers.
    Despite the error message, the ETL did run, there were no duplicate keys, and OLAP was processed.
    I'm very close to pulling out all my hair, because some combinations of two customers work, and adding a third customer to those two makes it fail again (again, the cubes are processed and the data is intact, yet I keep getting these annoying errors in the log).
    Perhaps someone could help me further.
    -Miracles are easy, the impossible takes a bit longer-

    Just double-click the Agent job, then click the Steps property page (on your left); you should see a list of steps with an "On Failure" action, which you should examine.
    Arthur My Blog
