Market Basket Analysis

Hi all,
I am currently taking the course "Getting Started with MS Azure Machine Learning" by Buck Woody. One of the questions I have is: how do I build an algorithm for a retail analytics solution, and what retail data should I gather? The end objective of my app is that when a consumer scans a product in a retail store, the app should pop up other alternatives for them.
My issue is how to achieve market basket analysis using Azure Machine Learning, along the lines of the idea you provided in the "Algorithm for Retail Analytics" discussion. I know I have to use an R script, but I don't have any idea how to write it, so please provide some suggestions on how to achieve this.
Please help.

Hi,
May I interest you in the
https://datamarket.azure.com/dataset/amla/recommendations offer in the marketplace? We are investing quite a bit in this offer, and it supports a number of recommendation types, including frequently-bought-together (which is what I imagine you are trying to solve here).
It is an end-to-end solution built with Azure Machine Learning that is already in production with real customers.
Let us know if you have any questions,
Luis Cabrera
Program Manager
Azure Machine Learning Marketplace
@luisito_cabrera Disclaimer: This post is provided "AS IS" with no warranties, and confers no rights.
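For intuition, the frequently-bought-together recommendation mentioned above boils down to counting how often other items co-occur with a scanned item across baskets. A minimal sketch in plain Python (the function name, item names, and baskets are illustrative, not part of the marketplace offer):

```python
from collections import Counter

def frequently_bought_together(baskets, item, top_n=3):
    """Rank the items that most often appear in the same basket as `item`
    (ties broken alphabetically so the result is deterministic)."""
    co_counts = Counter()
    for basket in baskets:
        unique = set(basket)
        if item in unique:
            co_counts.update(unique - {item})
    return sorted(co_counts, key=lambda i: (-co_counts[i], i))[:top_n]

baskets = [
    ["bread", "milk", "butter"],
    ["bread", "butter"],
    ["milk", "cereal"],
    ["bread", "milk"],
]
print(frequently_bought_together(baskets, "bread"))  # ['butter', 'milk']
```

A production recommender adds normalization (popular items co-occur with everything), but the counting core is the same.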

Similar Messages

  • Convert to Transaction Class to Use Market Basket Analysis Script (arules) in the Execute R Script Module

    Hello,
    I need to do Market Basket Analysis on my data, and I have a working R script in RStudio. I need to transfer that script to Azure ML Studio. I read in another posting that the arules package is pre-installed and that I need to use the Execute R Script module, since there is no other built-in module or function that does anything similar to market basket analysis. I went ahead and copied my R script into the module with some slight modifications in terms of importing the data.
    In RStudio I use the function read.transactions to import my data (a CSV file) and convert it into a transactions object directly from my working directory. It appears that read.transactions does not work this way in Azure ML, yet I need my data to be of the transactions class for the rest of the arules functions to work. How do I get around this?
    Thank you.
    This is my R script:
    library(arules)
    library(arulesViz)
    # Contents of the optional Zip port are unpacked under ./src/
    # source("src/MOD Targeting MBA ML.R")
    # Read the (transaction id, item) pairs and build a transactions object
    Data <- read.transactions("src/data.csv", format = "single", sep = ",", cols = c(1, 2))
    itemFrequencyPlot(Data, topN = 37, type = "absolute")
    # Mine association rules with apriori
    Baskets <- apriori(Data, parameter = list(supp = 0.001, conf = 0.8))
    inspect(Baskets)
    # The Azure ML output port only accepts a data.frame, so convert the rules first
    Results <- as(Baskets, "data.frame")
    maml.mapOutputPort("Results")
    And this is the output log:
    Record Starts at UTC 12/23/2014 19:52:51:
    Run the job:"/dll "ExecuteRScript, Version=6.0.0.0, Culture=neutral, PublicKeyToken=69c3241e6f0468ca;Microsoft.MetaAnalytics.RDataSupport.ExecuteRScript;Run" /Output0 "..\..\Result Dataset\Result Dataset.dataset" /Output1 "..\..\R Device\R Device.dataset" /bundlePath "..\..\Script Bundle\Script Bundle.zip" /rStreamReader "script.R" "
    Starting process 'C:\Resources\directory\c3626c2575d5423e8cb58a9e7230be5e.SingleNodeRuntimeCompute.Packages\AFx\6.0\DllModuleHost.exe' with arguments ' /dll "ExecuteRScript, Version=6.0.0.0, Culture=neutral, PublicKeyToken=69c3241e6f0468ca;Microsoft.MetaAnalytics.RDataSupport.ExecuteRScript;Run" /Output0 "..\..\Result Dataset\Result Dataset.dataset" /Output1 "..\..\R Device\R Device.dataset" /bundlePath "..\..\Script Bundle\Script Bundle.zip" /rStreamReader "script.R" '
    [ModuleOutput] DllModuleHost Start: 1 : Program::Main
    [ModuleOutput] DllModuleHost Start: 1 : DataLabModuleDescriptionParser::ParseModuleDescriptionString
    [ModuleOutput] DllModuleHost Stop: 1 : DataLabModuleDescriptionParser::ParseModuleDescriptionString. Duration: 00:00:00.0050971
    [ModuleOutput] DllModuleHost Start: 1 : DllModuleMethod::DllModuleMethod
    [ModuleOutput] DllModuleHost Stop: 1 : DllModuleMethod::DllModuleMethod. Duration: 00:00:00.0000598
    [ModuleOutput] DllModuleHost Start: 1 : DllModuleMethod::Execute
    [ModuleOutput] DllModuleHost Start: 1 : DataLabModuleBinder::BindModuleMethod
    [ModuleOutput] DllModuleHost Verbose: 1 : moduleMethodDescription ExecuteRScript, Version=6.0.0.0, Culture=neutral, PublicKeyToken=69c3241e6f0468ca;Microsoft.MetaAnalytics.RDataSupport.ExecuteRScript;Run
    [ModuleOutput] DllModuleHost Verbose: 1 : assemblyFullName ExecuteRScript, Version=6.0.0.0, Culture=neutral, PublicKeyToken=69c3241e6f0468ca
    [ModuleOutput] DllModuleHost Start: 1 : DataLabModuleBinder::LoadModuleAssembly
    [ModuleOutput] DllModuleHost Verbose: 1 : Trying to resolve assembly : ExecuteRScript, Version=6.0.0.0, Culture=neutral, PublicKeyToken=69c3241e6f0468ca
    [ModuleOutput] DllModuleHost Verbose: 1 : Loaded moduleAssembly ExecuteRScript, Version=6.0.0.0, Culture=neutral, PublicKeyToken=69c3241e6f0468ca
    [ModuleOutput] DllModuleHost Stop: 1 : DataLabModuleBinder::LoadModuleAssembly. Duration: 00:00:00.0074580
    [ModuleOutput] DllModuleHost Verbose: 1 : moduleTypeName Microsoft.MetaAnalytics.RDataSupport.ExecuteRScript
    [ModuleOutput] DllModuleHost Verbose: 1 : moduleMethodName Run
    [ModuleOutput] DllModuleHost Information: 1 : Module FriendlyName : Execute R Script
    [ModuleOutput] DllModuleHost Information: 1 : Module Release Status : Release
    [ModuleOutput] DllModuleHost Stop: 1 : DataLabModuleBinder::BindModuleMethod. Duration: 00:00:00.0116536
    [ModuleOutput] DllModuleHost Start: 1 : ParameterArgumentBinder::InitializeParameterValues
    [ModuleOutput] DllModuleHost Verbose: 1 : parameterInfos count = 5
    [ModuleOutput] DllModuleHost Verbose: 1 : parameterInfos[0] name = dataset1 , type = Microsoft.Numerics.Data.Local.DataTable
    [ModuleOutput] DllModuleHost Verbose: 1 : Set optional parameter dataset1 value to NULL
    [ModuleOutput] DllModuleHost Verbose: 1 : parameterInfos[1] name = dataset2 , type = Microsoft.Numerics.Data.Local.DataTable
    [ModuleOutput] DllModuleHost Verbose: 1 : Set optional parameter dataset2 value to NULL
    [ModuleOutput] DllModuleHost Verbose: 1 : parameterInfos[2] name = bundlePath , type = System.String
    [ModuleOutput] DllModuleHost Verbose: 1 : parameterInfos[3] name = rStreamReader , type = System.IO.StreamReader
    [ModuleOutput] DllModuleHost Verbose: 1 : parameterInfos[4] name = seed , type = System.Nullable`1[System.Int32]
    [ModuleOutput] DllModuleHost Verbose: 1 : Set optional parameter seed value to NULL
    [ModuleOutput] DllModuleHost Stop: 1 : ParameterArgumentBinder::InitializeParameterValues. Duration: 00:00:00.0102003
    [ModuleOutput] DllModuleHost Verbose: 1 : Found trace source in Execute R Script module...
    [ModuleOutput] DllModuleHost Verbose: 1 : Begin invoking method Run ...
    [ModuleOutput] Microsoft Drawbridge Console Host [Version 1.0.2108.0]
    [ModuleOutput] [1] 56000
    [ModuleOutput] The following files have been unzipped for sourcing in path=["src"]:
    [ModuleOutput] Name Length Date
    [ModuleOutput] 1 data.csv 2875965 2014-12-04 17:08:00
    [ModuleOutput] 2 __MACOSX/ 0 2014-12-23 09:39:00
    [ModuleOutput] 3 __MACOSX/._data.csv 120 2014-12-04 17:08:00
    [ModuleOutput] Loading objects:
    [ModuleOutput] Loading required package: Matrix
    [ModuleOutput] Attaching package: 'arules'
    [ModuleOutput] The following objects are masked from 'package:base':
    [ModuleOutput] %in%, write
    [ModuleOutput] Loading required package: grid
    [ModuleOutput] Attaching package: 'arulesViz'
    [ModuleOutput] The following object is masked from 'package:base':
    [ModuleOutput] abbreviate
    [ModuleOutput] $value
    [ModuleOutput] NULL
    [ModuleOutput] $visible
    [ModuleOutput] [1] FALSE
    [ModuleOutput] Warning messages:
    [ModuleOutput] 1: In strptime(x, format, tz = tz) :
    [ModuleOutput] unable to identify current timezone 'C':
    [ModuleOutput] please set environment variable 'TZ'
    [ModuleOutput] 2: In strptime(x, format, tz = tz) : unknown timezone 'localtime'
    [ModuleOutput] DllModuleHost Stop: 1 : DllModuleMethod::Execute. Duration: 00:00:14.5396895
    [ModuleOutput] DllModuleHost Error: 1 : Program::Main encountered fatal exception: Microsoft.Analytics.Exceptions.ErrorMapping+ModuleException: Error 0063: The following error occurred during evaluation of R script:
    [ModuleOutput] ---------- Start of error message from R ----------
    [ModuleOutput] Error: Mapped variable must be of class type data.frame at this time.
    [ModuleOutput]
    [ModuleOutput]
    [ModuleOutput] Error: Mapped variable must be of class type data.frame at this time.
    [ModuleOutput] ----------- End of error message from R -----------
    Module finished after a runtime of 00:00:14.6091783 with exit code -2
    Module failed due to negative exit code of -2
    Record Ends at UTC 12/23/2014 19:53:07.
    Sorry, it won't let me send a link for some reason.
    Thanks.
    Cindy
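What read.transactions(format = "single", cols = c(1, 2)) does is group (transaction id, item) rows into per-transaction item sets; a commonly suggested workaround inside Azure ML is to bring the CSV in through the dataset port as a data frame and do that conversion yourself (e.g. with as(split(df$item, df$tid), "transactions") in arules). The grouping itself can be sketched in plain Python (the sample data and function name are made up for illustration):

```python
import csv
import io
from collections import defaultdict

def read_single_format(csv_text):
    """Group 'single' format rows (one transaction-id/item pair per row)
    into per-transaction item sets, roughly what arules read.transactions
    does with format = "single", cols = c(1, 2)."""
    baskets = defaultdict(set)
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2:
            tid, item = row[0].strip(), row[1].strip()
            baskets[tid].add(item)
    return dict(baskets)

data = "t1,bread\nt1,milk\nt2,bread\nt2,butter\nt3,milk\n"
print(read_single_format(data))
```

Each key is a transaction id and each value is the set of items in that basket, which is exactly the shape the rest of a basket-analysis pipeline needs.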

  • Offline market basket email functionality yielding nicely-formatted emails

    I'm using the htmlresources.zip contents in an InDesign folio to give some market basket functionality to customers working with sales reps. In the HTML/JavaScript/jQuery code, I would like to allow the reps to send the basket info to the customers when they are ready, and I need the folio to work offline (so we're not dependent on the internet for this app to function).
    The problem I am having is that mailto: does not appear to work with HTML tags in all the popular email clients (so I'm not using HTML tags), and the columns in the body of my email only line up if the customer's email display uses a mono-spaced font, which is not always the case (by the way, tabs do not appear to work very well with mailto: either). Hence, if our sales reps want the columns to line up nicely in the body of the email, I can only see a few options, namely:
    1) Stick with plain-text mailto: and its issues.
    2) Write a separate Xcode app to receive URLs from the folio with all the email info included, and then transmit those strings to a server-side mail server for transmission once internet connectivity is available.
    3) Somehow generate a PDF (to make the basket pretty, line up columns, include nice pictures, etc.) on the fly inside InDesign using Adobe APIs or HTML5/JavaScript/jQuery APIs, and then either include a link to that new PDF in the mailto: URL or save the generated PDF somewhere on the iPad. In the latter case, reps can then use the iPad to manually email the PDF themselves when they are ready.
    These are some of the things I'm working with, and I'm looking for help figuring out the best method. Thank you.

    Moved to DPS forum.

  • BW reports (basket analysis): how should I approach this?

    Dear All,
    I want to develop a report that shows output in matrix format, like:
       a   b   c   d
    a  1   2   5  10
    b  7   9   8  11
    c  5   9   9   9
    d  5  55   5   9
    The values are the association between the row and column articles: a, b, c, d are article codes, and the values are the number of bills in which both appear. I want to do basket analysis.
    Please help me with how to start.
    Thanks in advance
    Kind regards
    Biswarup

    Hi Biswarup,
    Please execute your BEx report and save it as a workbook to your desktop, then make your changes as per your requirement through the pivot table's custom settings. Open it as a workbook and refresh your report.
    Hope this helps!
    -DU
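The matrix Biswarup describes is a co-occurrence matrix: cell (a, b) counts the bills containing both articles, and the diagonal holds each article's own bill count. A small Python sketch of the computation (the article codes and bills are illustrative):

```python
from itertools import product

def cooccurrence_matrix(bills, items):
    """Build a matrix where cell (a, b) counts the bills containing both
    article a and article b; the diagonal is each article's bill count."""
    matrix = {a: {b: 0 for b in items} for a in items}
    for bill in bills:
        present = set(bill) & set(items)
        for a, b in product(present, repeat=2):
            matrix[a][b] += 1
    return matrix

bills = [["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
m = cooccurrence_matrix(bills, ["a", "b", "c"])
print(m["a"]["a"], m["a"]["b"])  # 3 2
```

In BW terms, the same result comes from joining the line-item data to itself on the bill number and counting distinct bills per article pair.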

  • IS-Retail Market-basket pricing procedure

    Hi All,
    Can anyone tell me what market-basket pricing is, in what situations we use it, and what IMG settings (and any other settings) are required to use this pricing?
    Yours sincerely,
    Krishna
    Edited by: gopi krishna on Sep 20, 2008 12:50 PM

    Hi, refer to the URLs below from SDN:
    [Market Basket Calculation |https://www.sdn.sap.com/irj/sdn/bpx-bpp-retail?rid=/business_maps/guid/eg1slxvyat1odhrwoi8vzxn3b3jrcgxhy2uuc2fwlmnvbs9zb2nvdmlldy9ty3bnzxquyxnw%0d%0ap3bhy2thz2vpzd0mdmlldz1zbwnyzw5kzxiyjmlkptyyotdgndcwoum1mtexrdkxmkqxmdazmdzf%0d%0amdu1ree3jiz4c2wtymfzzt1odhrwoi8vzxn3b3jrcgxhy2uuc2fwlmnvbs9zb2nvdmlldy9ty3bn%0d%0azxrfbwltzs5hc3a/dxjspxnkbl9uzxcvjiz3zwitdxjspwh0dhbzoi8vzxn3b3jrcgxhy2uuc2fw%0d%0almnvbs9zb2nvdmlldy9ty3bnzxrfbwltzs5hc3a/dxjspsymehnslwxpc3q9c2rux2dlbmvyywxf%0d%0acmvuzgvylnhzbhq7c2rux2xpbmtyzxdyaxrpbmcuehnsddtnzw5lcmf0zuhutuxfu0rolnhzbhqm%0d%0ajnbhy2thz2utawq9jizyzxnvdxjjzxr5cgu9c2rux3nvbhv0aw9ux2nvbxbvc2vyx2z1bgxwywdl%0d%0ajizwyxjlbnrssuq9l3dlymnvbnrlbnqvd2vicgfnzxmvynb4lzqwieluzhvzdhjpzxmvumv0ywls%0d%0al0j1c2luzxnzifbyb2nlc3mgugxhdgzvcm0gaw4gumv0ywlsjiz0axrszt1szxrhawwguhjpy2lu%0d%0azyatie1hcmtldcbcyxnrzxqgq2fsy3vsyxrpb24mjnzpcnr1yww9dhj1zq==&prtmode=print]
    [Market-Basket Price Calculations - Procedure |http://help.sap.com/erp2005_ehp_03/helpdata/EN/83/8818d107a711d3b470006094b9cbfb/frameset.htm]

  • Market price analysis and loss free valuation example

    Hi,
    I want to get advice on the devaluation activity.
    There are 3 methods: market price, range of coverage, and loss-free valuation.
    I read the SAP documentation but cannot really get it. So before I execute MRN0, MRN1 and MRN3, I need help understanding:
    What is meant by market price and loss-free valuation? Can you give examples of these 2?
    I have the definitions but do not quite understand them, so I need examples to explain these 2 methods.
    "When determining the lowest value based on market prices, the system searches for the lowest price (or alternatively, the most recent price) among the various prices stored for each material. We recommend that you limit the period during which the system retrieves data to the last three months, so the prices are as up-to-date as possible"
    "You may need to devaluate materials sold by your company if they will probably not fetch the material price when they are sold. For example, if the demand for a material diminishes because it is no longer the latest technology, you can only sell this material at a loss compared to the material price."
    Thanks

    Hi
    You may need to devaluate materials sold by your company if they will probably not fetch the material price when they are sold.
    As per my knowledge, devaluate means you are giving a discount on your material.
    Regards
    Kailas Ugale

  • How to build the best Purchasing Propensity Model

    Hello, I'm trying to develop a market basket analysis using over 90 different categories, and I'm not quite sure which model would be the best option for this number of variables.
    I could also use some direction on which algorithms and variables are most relevant in the construction of a purchasing propensity model (e.g. age, frequency of purchase, average ticket value, purchases in other categories, etc.).
      Thanks in advance for your help

    Hi,
    A few things you should keep in mind:
    1) Take the tables and find the relationships between them (PK-FK).
    2) Identify dimensions and facts.
    3) After step 2 there will typically be a division into header-level and detail tables.
    4) If Dim1 has Business_Date and Fact1 has Business_Date, you can join them that way.
    5) Normally, without ETL, we may go for PL/SQL or plain SQL; your data model comes under this point.
    6) In the BMM layer you can form a star schema or snowflake schema.
    Thanks,
    Saichand.v

  • Affinity queries ("customers who bought this item also bought...")

    Hi,
    Do you know the feature of many web shops where you can see similar items (i.e. "customers who bought this item also bought...")?
    A simple approach would be to return similar items from the latest orders. What I would like to list are products that were bought BY MANY OTHER CUSTOMERS in addition to the original product, so I have to somehow count the matching products across all orders. How can I do this?
    Has anybody thought of a way to select meaningful similar items for a product?

    You might also look at Oracle Data Mining, though I haven't used it myself. From the documentation:
    4.2 Association
    An Association model is often used for market basket analysis, which attempts to discover relationships or correlations in a set of items. Market basket analysis is widely used in data analysis for direct marketing, catalog design, and other business decision-making processes. A typical association rule of this kind asserts that, for example, "70% of the people who buy spaghetti, wine, and sauce also buy garlic bread."
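The "70% of the people who buy spaghetti, wine, and sauce also buy garlic bread" statement is the rule's confidence; support is the share of all baskets containing every item in the rule. A quick Python illustration of the two metrics (the helper function and sample baskets are made up):

```python
def rule_metrics(baskets, antecedent, consequent):
    """Support and confidence for the rule antecedent -> consequent.
    Support = fraction of baskets containing all items in the rule;
    confidence = fraction of antecedent baskets that also hold the consequent."""
    n = len(baskets)
    ante = set(antecedent)
    both = ante | set(consequent)
    n_ante = sum(1 for b in baskets if ante <= set(b))
    n_both = sum(1 for b in baskets if both <= set(b))
    support = n_both / n
    confidence = n_both / n_ante if n_ante else 0.0
    return support, confidence

baskets = [
    {"spaghetti", "wine", "sauce", "garlic bread"},
    {"spaghetti", "wine", "sauce"},
    {"spaghetti", "wine", "sauce", "garlic bread"},
    {"milk"},
]
s, c = rule_metrics(baskets, {"spaghetti", "wine", "sauce"}, {"garlic bread"})
```

Here 2 of 4 baskets contain the full rule (support 0.5), and 2 of the 3 antecedent baskets contain garlic bread (confidence about 67%); an association engine like ODM's Apriori searches for rules whose support and confidence clear chosen thresholds.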

  • How to use Data Mining for a blind DBA?

    Hi, I'm a blind DBA and I need some input and some suggestions from the forum on how to access the Data Mining feature.
    This feature is becoming more and more an essential part of my daily tasks, so I need a way to use it.
    Database: Oracle 11.1.0.7.0
    OS: Windows XP Pro and Linux
    Adaptive Technology:
    JAWS Version 11.0.1467 by Freedom Scientific BLV Group, LLC
    JAVA Access Bridge by SUN Microsystems
    I know there is a Windows program called Data Miner; unfortunately my JAWS speech synthesizer does not work with its interface, and I need an alternative. I read Oracle's documentation on the Data Miner API, but it is short on examples, details, and step-by-step instructions. Does anyone have a more comprehensive set of examples on how to use the Data Mining API from start to finish, specifically for market basket analysis? Any help would be appreciated.
    Thanks.

    Hi,
    I am not very knowledgeable about Oracle Data Miner, the graphical user interface to Oracle Data Mining, but I can respond to your question about API documentation.
    You may find the source files for the Data Mining sample programs to be very helpful. They provide many examples. After installing Oracle Database, install Oracle Database Examples. This separate installation process installs the sample programs in the DEMO directory under Oracle Home. Instructions for installing and configuring the sample programs are provided in Oracle Data Mining Administrator's Guide. Here is the link to the relevant chapter in the 11.1 Administrator's Guide.
    http://download.oracle.com/docs/cd/B28359_01/datamine.111/b28130/sampleprogs.htm#BABEHFEB
    As for the API documentation, the best place to start is probably the Data Mining Application Developer's Guide. Here is the link to the 11.1 Application Developer's Guide.
    http://download.oracle.com/docs/cd/B28359_01/datamine.111/b28131/toc.htm
    The Data Mining PL/SQL API for creating data mining models is implemented in the DBMS_DATA_MINING package. The syntax of Oracle PL/SQL packages is documented in PL/SQL Packages and Types Reference. Here is the link to the chapter in the 11.1 version of that manual.
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_datmin.htm#i1062150
    The Data Mining functions for scoring (applying mining models to new data) are implemented as SQL functions. The syntax of Oracle SQL is documented in SQL Language Reference. Here is the link to the Data Mining SQL functions in the 11.1 version of that manual.
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/functions001.htm#sthref953
    Hope this helps,
    Kathy
    Edited by: ktaylor on Nov 10, 2010 8:14 AM

  • Oracle Data Miner (Bug)

    Hello All,
    I am using oracle data miner 11g for building association model (market basket analysis). When I run the model and view the results, I find that the values of Antecedent Support% and Consequent Support% are swapped. i.e. the support value of the antecedent is listed in the Consequent Support% column.
    Is it a bug in the data miner or am I mistaken?
    Edited by: 976043 on Dec 10, 2012 10:00 AM

    Hi,
    Yes you are correct.
    You can prove it by adding a Model Details node and connecting the AR Build node to it.
    The rule output from the Model Details node produced shows the true values.
    We have opened a bug to fix this and it will be in the upcoming SQL Dev 4.0 release.
    Thanks for the help, Mark

  • Sparse data specification via DBMS_DATA_MINING API

    I want to use binary/dummy/indicator variables (eg, x1 =0, x2 =1 means credit card payment,
    x1=1, x2=0 means check, etc)
    in a generalized linear regression model to predict a numeric target.
    My understanding is that this type of coding needs to be treated as sparse data, but I see no option
    in the DBMS_DATA_MINING APIs that allows for specification of sparse data.
    Can anyone enlighten me as to whether my assumptions or approach are incorrect, or if this feature is "missing" from the API?

    There is no flag or setting to indicate to ODM that your data is sparse. Instead, ODM will treat nested data as sparse by default. Having a sparse representation for data is useful in many cases. For example, in market basket analysis, where the number of potential items being purchased at the grocery store can be large, you may want to indicate simply which items were purchased, not the ones that weren't. It's not clear to me that your use case calls for having the data treated as sparse as it appears that the number of payment options might be small.
    But in your case, to implement a sparse field for the payment type, you might create your input table as:
    create table foo (id number, payment dm_nested_numericals, target number)
    nested table payment store as ind_nt;
    insert into foo values (1, dm_nested_numericals(dm_nested_numerical('x2',1)), 0);
    insert into foo values (2, dm_nested_numericals(dm_nested_numerical('x1',1)), 1);
    Mark
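The sparse idea Mark describes (record only the indicators that are present, not every absent one) can be sketched outside the database too. A tiny Python illustration, where the column names x1/x2 follow the original poster's payment-type example:

```python
def to_sparse(row, columns):
    """Keep only the nonzero indicators from a dense row -- the nested-column
    style where you record which payment type (or basket item) is present,
    rather than storing a 0 for every absent one."""
    return {col: val for col, val in zip(columns, row) if val}

columns = ["x1", "x2"]      # x1 = check, x2 = credit card (illustrative coding)
dense_credit = [0, 1]       # credit card payment
dense_check = [1, 0]        # check payment
print(to_sparse(dense_credit, columns))  # {'x2': 1}
```

With two payment types the saving is negligible, which matches Mark's point; the representation pays off when the column universe is large (thousands of grocery items) and each row touches only a few of them.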

  • Degenerate and Junk dimensions

    I am new to SSAS. I want to understand degenerate and junk dimensions in detail. It would be good if you could provide practical examples, please.
    Regards,
    Ramu
    Ramu Gade

    Junk Dimensions
    There are certain scenarios where you will find that the source for a fact table contains a bunch of low-cardinality attributes that don't really relate to any of the other attributes describing these facts. Some of the more common examples are bit/character-based "flags" or "codes" which are useful to the end users for filtering and aggregating the facts. For example, imagine a user who wants to analyze orders from the order fact table that are flagged as "reprocessed": they can either filter for facts with the reprocessed flag if they are only interested in that subset, or they can group by the "reprocessed" flag to calculate things like the percent of orders that are "reprocessed".
    Instead of building a separate dimension for each of these individual attributes, another option is to combine them and build what's known as a Junk Dimension, based on the Cartesian product of each of these attributes and their corresponding ranges of values.
    This technique does 2 important things:
    Saves Disk Space
    Consider a single 4-byte integer key linking to the junk dimension vs. a handful of 4-byte integer keys each linking to a separate dimension. It might not sound like a lot on a per-record basis, but once you extrapolate over a 100-million-record fact table the savings really add up.
    Improves End-User Experience
    By keeping the total number of dimensions down to a manageable size it will be easier for your end-users to find the attributes they’re looking for during ad-hoc analysis. Kimball recommends <= 26 dimensions per fact table – of course there are always a
    few edge-case exceptions.
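The Cartesian-product construction above can be sketched in a few lines of Python (the attribute names and values are illustrative, not from any real schema):

```python
from itertools import product

def build_junk_dimension(attributes):
    """Materialize the Cartesian product of low-cardinality flags/codes as a
    junk-dimension table, assigning one surrogate key per combination."""
    names = list(attributes)
    rows = []
    for key, combo in enumerate(product(*attributes.values()), start=1):
        rows.append({"junk_key": key, **dict(zip(names, combo))})
    return rows

dim = build_junk_dimension({
    "reprocessed_flag": ["Y", "N"],
    "order_code": ["WEB", "PHONE", "STORE"],
})
print(len(dim))  # 6 rows: 2 flags x 3 codes
```

Each fact row then carries the single junk_key of its flag combination instead of one foreign key per attribute, which is exactly the disk-space argument made above.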
    Degenerate Dimensions
    The Degenerate Dimension is another modeling technique for attributes found in the source transaction table.  The main differences between these attributes and the ones that would fall into our Junk Dimension are as follows:
    Cardinality
    these are typically high-cardinality attributes – in some cases having a 1 to 1 relationship with the fact.  These are likely to be the business keys of the fact table such as Purchase Order Number, Work Order Number, etc.  Another potential candidate
    for the degenerate dimension is free-form comment fields.
    Use Case for End-Users
    these attributes are not going to be used for filtering/aggregating facts. Instead, these are the types of attributes that are typically going to be used in drilldown or data mining scenarios (ex. Market Basket Analysis). For example, imagine a user who is
    analyzing purchase orders in the “delayed” status. After drilling down on the delayed POs for a certain supplier in a certain time period…the next step might be to pick up the Purchase Order Number which would allow this user to trace this small subset of
    PO’s back to the source system to find out why they are “delayed”.
    Storage
    Despite the name, these attributes typically remain in the fact table. There really isn’t much point in moving them out to an actual dimension – because of the high-cardinality there’s likely to be zero space savings…in fact it would probably cost you space
    due to the additional surrogate-keys.  You’ll also likely be paying a heavy price on the join at query time.
    Analysis Services Implementation
    For Junk Dimensions, you will create a new dimension at the project-level pointing it to the table (or view) in the data warehouse that materializes the distinct combinations of values for the various junk-attributes.  After configuring the dimension at
    the SSAS project-level, it can be added to the cube(s) and linked up to the measure group(s) via regular relationships (where appropriate).
    For Degenerate Dimensions, the process is the same except you base the project-level dimension off of the fact table (or view). Once the project-level dimension is configured, it can be added to the cube(s) and linked up to the measure group(s) using “fact”-relationships
    (where appropriate).
    Please mark as Answer if this helps!
    Rajasekhar.

  • I want to know the complete business process in the procurement area in SCM

    Hi,
    I want to know the complete business process in the procurement area in SCM.
    Thanks in advance,
    ravi.

    hi ravi,
    Food & Beverages Industry Business Processes
    Procurement & Supply Chain:
    Raw Materials Procurement
    Machinery Procurement
    Food / Beverages Manufacturing:
    Food Preparations Operations
    Quality Control
    Storage & Inventory Control:
    Specialised Storage for Perishable Items
    Inventory Management
    Distribution:
    Transportation
    Sales & Channel Management:
    Order Processing & Fulfilment
    Marketing Management
    Retail Sales Monitoring & Management
    In-store Exposure Management
    Strategy:
    Marketing & Competitor Intelligence Gathering
    Web & E-commerce Strategy
    Daily, Weekly & Monthly
    SAP for Retail supports the following business process categories for food retailers:
    Category and merchandise management-- Manage consumer-driven categories as strategic business units and gain efficiencies in areas such as category and assortment planning, allocations, and item management. Solutions support a full range of processes, including category strategy roll-out and scorecard review, key item planning, image items new product introduction, and SKU rationalization.
    Buying and vendor management -- Strengthen supplier relationship management and overall operational procurement to optimize supplier selection, compress cycle times, and devise sourcing and purchasing strategies. Solutions support a broad range of purchasing activities, including perishable management, private label management, vendor managed inventory, purchase order management, self service, forward and line buying, and invoice matching.
    Revenue management -- Execute the pricing strategies needed to achieve your profitability goals through cost management, optimization and demand creation, and price management. Solutions support base price optimization, demand forecasting, markdown optimization, retail price management and execution, promotions management, clearance management, and total landed cost, deal, and rebate management.
    Supply chain management -- Optimize supply chain planning, execution, and monitoring to streamline your merchandise flow, reduce out-of-stock situations, and balance out your inventory. Solutions enable private label production, perishable planning, collaborative planning forecast and replenishment, available to promise, warehouse and yard management, transportation planning, reverse logistics, global import management, and supply chain visibility and tracking.
    Store and channel management -- Enhance customer relationship management, store operations, workforce management, and point of sale to offer the best services through different channels. Solutions support key multi channel retail activities, including customer segmentation, market basket analysis, alternative store sourcing, custom order management, returns authorization, store level inventory management, mobile checkout, and labor scheduling.
    Enterprise management and support -- Solve business issues in real-time on both a strategic and an operational level -- and gain control of business processes and assets -- through functionality for finance, human capital management, and corporate services. Solutions support key processes, including financial and management accounting, corporate governance, real estate management, payroll and benefits administration, workforce management, life cycle data management, and enterprise asset management.
    Visit the following links:
    http://www.spesfeed.co.za/Supply%20Chain%20Management.htm
    http://www.ebpo.in/ind/food/food.html
    http://www.sap.com/usa/industries/retail/businessprocesses/food/index.epx
    Regards,
    partha

  • Intersection Grid?

    I've been asked to create a grid that displays dimension attributes on rows and columns (in Excel).
    The grid would looks something like
                        Checking   Savings   Safety Deposit
    Checking              100        23           42
    Savings                55        47           33
    Safety Deposit         21        12           62
    It works as follows: where a product intersects with itself, the value is the total number of customers with that product. Above and below that intersection are the dependent counts, i.e. the subsets of those customers that also have the other products.
    For example, Checking and Checking intersect at 100, so there are 100 total customers that have checking accounts; of those 100 customers, 55 have savings accounts and 21 have safety deposit boxes. There are 47 customers at the savings intersection; of those 47 customers, 23 have checking accounts and 12 have safety deposit boxes.
    I started out with a fact table that has the customer and their associated products, and I put the same dimension in twice: one for rows and one for columns. In Excel, I used the customer count as the value, with one product dimension on the columns and the same one on the rows. This got me, somewhat as suspected, a diagonal line of values at the intersection points with 0's everywhere else.
    It's a rather expansive question; perhaps there's a link that might describe how to approach this problem.

    Hi Beeyulee,
    There are different ways to do it, and I reckon you are on the right track. My preferred option would be to use a many-to-many relationship with two product dimensions.
    Your scenario is very similar to basket analysis, where retailers need to identify customers that bought Product A and Product B together. Luckily there are a few articles that cover the topic:
    http://www.codeproject.com/Articles/33765/Market-Basket-Analysis-in-SSAS
    http://blogs.adatis.co.uk/blogs/sachatomey/archive/2007/08/22/basket-analysis-using-analysis-services-2005.aspx
    Hope it helps 
    Cesare
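    Underneath, the many-to-many / two-product-dimension design boils down to a self-join of the fact table on the customer key. A sketch of that join using an in-memory SQLite database (the table and column names are invented for illustration, not the poster's actual schema):

    ```python
    import sqlite3

    # Hypothetical fact table of customer/product ownership.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE FactCustomerProduct (CustomerID INTEGER, Product TEXT);
    INSERT INTO FactCustomerProduct VALUES
      (1,'Checking'),(1,'Savings'),
      (2,'Checking'),(2,'Safety Deposit'),
      (3,'Checking'),(3,'Savings'),(3,'Safety Deposit'),
      (4,'Savings');
    """)

    # Self-joining on CustomerID pairs every product a customer holds
    # with every product the same customer holds; grouping by the product
    # pair yields the intersection grid, diagonal included.
    rows = con.execute("""
    SELECT a.Product, b.Product, COUNT(DISTINCT a.CustomerID) AS Customers
    FROM FactCustomerProduct a
    JOIN FactCustomerProduct b ON a.CustomerID = b.CustomerID
    GROUP BY a.Product, b.Product
    ORDER BY a.Product, b.Product
    """).fetchall()

    for r in rows:
        print(r)
    ```

    The articles linked above implement the same pairing inside Analysis Services, where the measure group plays the role of the self-joined fact table.
    
    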

  • Marketing execution over the top of ORDM ?

    We are implementing ORDM here for its MIS/BI capabilities and have noted that most of the data mining and analytics that the marketing team use in their own tool (Alterian) exist within BI (segment analysis, basket analysis, promotion analysis, etc.).
    What is missing from the ORDM / OBI EE stack that we have is some form of marketing campaign execution / management layer (mailings, email shots, etc.), and I am wondering whether the Oracle (PeopleSoft?) module can be run on the ORDM to integrate with the work we are doing at present and therefore replace the current marketing toolset going forwards.
    Has anyone done this or got any ideas / comments?
    Cheers
    Matt

    Hi, thanks for the replies, both.
    The Oracle Marketing I refer to is part of the E-Business Suite (sorry, the PeopleSoft association seems to have been wrong):
    http://www.oracle.com/applications/marketing/marketing.html
    http://www.oracle.com/applications/marketing/oracle-marketing-data-sheet.pdf
    As it stands, our company has a diverse range of data stores (Sybase, Oracle, SQL Server, etc.) in the operational space, and we are trying to implement one source of truth for BI data, replacing an existing MIS system: hence using the ORDM. We will be copying as much operational data from the source systems as is relevant to make the ORDM work effectively; however, I still have a couple of other systems I am trying to get aligned. One is Marketing, and one is the MDM that holds the "truth" on our customer data. In theory I could use the ORDM as the customer MDM, but Marketing are using Alterian, which would need an extract from the MDM and one from either the ORDM or the relevant operational stores. As the ORDM will actually hold all of the customer / product / basket / history data that Marketing would use, they could quite happily do their analytics in OBI EE on top of it; however, there is clearly no functionality for email or mail campaigns.
    So whilst I would potentially like to implement Oracle Marketing, I don't want to have to create a whole new schema for it to run on and fill it with data when I have nearly all the data it needs in the ORDM (I would be no better off than I am with Alterian now). I am happy to extend the ORDM a bit to include execution-specific tools; I just don't want to have two databases packed full of all the sales and product data.
    Cheers,
    Matt
    Edited by: Matt.H. on Aug 25, 2009 7:10 AM
