Sample of cost/benefit analysis of data warehouse

Greetings!
I am currently doing research on data warehouses. Could you kindly send me a sample cost/benefit analysis?
Looking forward to your prompt response.
E-mail me at "[email protected]"
Thank you and God bless!
Very truly yours,
Benedict

Good day Colin.
Your question is like trying to guess tomorrow's stock quotes...
There are many parameters that combine, and not all of them are constants; in fact, most of them are variables.
The issues vary; for example:
1. The cost of the COBOL programmers needed to support and maintain your current AS/400 custom-written applications and interfacing...
2. The number of systems you want to integrate into your organization's business scenarios..
3. The monitoring capabilities currently available in your organization's spaghetti architecture.
4. Predicting the growth of your company and the addition of business partners' systems...
5. Trying to estimate the reuse of already-written XI interfaces (not possible...)
6. The duplicated data and data inconsistency in your IT environment before XI and after XI...
So my friend, I believe this imaginary generic TCO document can't be created...
Good luck with your project
Nimrod
If it helps you, I recently asked an ABAP programmer how long it would take him to develop a simple File-to-IDoc scenario... he said approximately 8 days, including design documents and testing.
I believe I could do the first one in 5 days and the second in 50% of that time...
Message was edited by: Nimrod Gisis

Similar Messages

  • Cost benefit analysis for using XI 3.0

    Hi,
    Does anyone have a cost benefit analysis document for using XI 3.0 versus developing point to point interfaces ?
    I need something that can easily justify and sell the idea of using XI over going down the custom path for point to point development of interfaces.
    Kind regards
    Colin.

    Good day Colin.
    Your question is like trying to guess tomorrow's stock quotes...
    There are many parameters that combine, and not all of them are constants; in fact, most of them are variables.
    The issues vary; for example:
    1. The cost of the COBOL programmers needed to support and maintain your current AS/400 custom-written applications and interfacing...
    2. The number of systems you want to integrate into your organization's business scenarios..
    3. The monitoring capabilities currently available in your organization's spaghetti architecture.
    4. Predicting the growth of your company and the addition of business partners' systems...
    5. Trying to estimate the reuse of already-written XI interfaces (not possible...)
    6. The duplicated data and data inconsistency in your IT environment before XI and after XI...
    So my friend, I believe this imaginary generic TCO document can't be created...
    Good luck with your project
    Nimrod
    If it helps you, I recently asked an ABAP programmer how long it would take him to develop a simple File-to-IDoc scenario... he said approximately 8 days, including design documents and testing.
    I believe I could do the first one in 5 days and the second in 50% of that time...
    Message was edited by: Nimrod Gisis

  • Project Plan for KM/ Cost Benefit Analysis document for KM

    Is there a template Project Plan Document available for KM?
    Is there a Cost Benefit Analysis document available for KM, that lists the advantages of KM, things like what are the Cost Benefits of Implementing KM? etc.
    Any help is greatly appreciated.
    Regards,
    Bharath


  • How can I do a cost / benefit analysis with new "share anything"?

    I am trying to figure out the best action for my 82 year old mother.  She is currently and has been on a special contract, something like 65+ plan (don't know the exact plan name).  She pays $29.95/ month with 200 anytime minutes.  Her current contract ends in March 2013.
    She is moving to an assisted living facility on July 8th, where her only phone will be wireless, and she will need much more than 200 minutes per month. Several weeks ago (before the “share anything” plan was announced), I told her it might be best to switch to the $50/month unlimited-everything PREPAID plan. She needs to replace her ancient phone, which is 6+ years old, and she doesn't need text or data (she wouldn't be able to figure it out anyway).
    Does she need to do something before June 28th?  Will the $50 per month prepaid plan be affected with the new “share anything” plan?  All of her friends and family are on Verizon.  I can't figure out a cost/benefit analysis because I don't know all the variables.  Is it more cost effective to pay an early termination fee and switch prior to June 28th to a different plan?  She will be using a lot more minutes beginning in July than what her current plan provides.
    Please help with suggestions.  My mom is counting on me and I'm counting on you.  Thanks.

    When you say her only phone will be wireless, I assume that you mean "cellular".
    Is cellular her best option? What about a landline only option with a wireless phone? That way she can still carry the phone with her wherever she is within her home. If she is going to start using "much more than 200 minutes per month" only because she is giving up her current landline, I would assume that the vast majority of her calling minutes take place within her home.
    I guess one question is "will the increase in her calling plan be greater than the cost of the landline she currently has, or would it be cheaper to keep the landline and her current plan?"
    Another question is what "need" does she have for cellular? Will she be out "alone" often if she is moving to an assisted living facility? If not, does she need the availability of cellular? If you are worried about her being out "alone" and getting "confused", will she be too "confused" to make use of a cellular phone?
    In an assisted living facility, the building "could" interfere with her ability to actually receive a cellular signal within her home. Is this going to be a problem, since it seems her current calling habits have her sending/receiving the majority of her calls within her home.
    I ask these questions from experience as my 92 y/o grandmother is also living in an assisted living facility and only has a landline phone with no cellular phone. She chose to give up her cellular phone 7 yrs ago when she moved in.
    These are just a few questions to consider before switching over to strictly cellular.

  • Analysis Services - Data Warehouse

    Hi
    I'm new to DW/Analysis Services. I've created a data warehouse database and populated it with data. Later, I created a cube using BIDS, which created a database in SSAS and the cube inside that same database.
    If I create more cubes, will it create a database for each cube, or will all the cubes be created in the same database?
    When I access this cube, will it pick up data from the data warehouse, use the structure of the cube, or is the data stored in SSAS? If I'm updating the data warehouse database daily, how will these cubes be refreshed? Do I need to follow any procedure to update these cubes?
    Thanks
    Venkat Nori
    nories

    Hi Venkat,
    If you create multiple cubes inside the same SSAS database, then all your cubes will live under that one database. While browsing, you can select which cube you want to browse.
    When you change anything in your SQL database, e.g. loading data or anything related to transactions, you then need to process the cube.
    To process the cube every day, you can create an SSIS package and schedule it on a daily basis.
    Regards
    msbiLearning
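    For the daily processing step, whatever the scheduler (an SSIS Analysis Services Processing Task, a SQL Agent job, etc.), the command ultimately issued to the SSAS instance can be an XMLA ProcessFull. A minimal sketch, assuming a database named MyDWDatabase (a placeholder, not from the thread):

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyDWDatabase</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```

    Processing the whole database this way reprocesses every cube in it; for large warehouses an incremental option such as ProcessAdd on specific partitions is usually preferred.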

  • Batch Management - Cost Benefit Analysis

    Hello experts
    My client is evaluating batch management in their manufacturing operations. The key requirement/driver is material traceability.
    They would like to know the impact of using batch management in terms of additional resources, transactions, and maintenance.
    Does anybody have insights/material on this?
    Many thanks
    Gaurav

    Hi,
    please mention the manufacturing products, then we can decide whether to go for batch management or not
    some of the benefits of using Batch Management are:
    1. Traceability of the material up to the end product.
    2. The first-in-first-out principle can be adopted.
    3. Additional resources are not required.
    4. More storage space is needed.
    5. If the company deals with excisable goods, it is somewhat more complicated.
    Regards
    Ganesh

  • Cost-benefit apple tv vs cable

    Has anyone done a cost-benefit analysis for Apple TV vs cable. Obviously, this will be different for cable offerings in each locale, which can be smoked out, sometimes with difficulty. On the ATV side, I'd like to know what are the costs for the various offerings.
    Thanks

    ATV hoping for two HDTV ports to enable running the cable thru the DVR plus
    For what purpose?  DVRs are for live content.  You can watch content on ATV any time you want, and you can already fast-forward, rewind, etc.  Copying it to a DVR, if you even can, would just require you to wait an hour while the show is copied to the DVR before watching it, instead of just watching it instantly any time you want.
    maybe Apple and the cable providers using their creative minds to come up with a one-stop shopping setup whereby one can grab just one remote and do the job.
    This is already an option.  Logitech has remotes that will do this.  Their setup is pretty involved, but once you have it working it is really nice.

  • Service Manager Data Warehouse Install - Analysis Server Configuration For OLAP Cubes Fail

    Hello everyone,
    I have an issue with my installation of the Data Warehouse for System Center Service Manager 2012 SP1.
    My install environment is the following:
    Windows Server 2012 – System Center Service Manager (Successfully Installed) - Virtual
    Windows Server 2012 – System Center Data Warehouse (Pending) - Virtual
    Windows Server 2012 – MS SQL Server 2012 – Physical, Clustered, 1st of Four Servers
    The SQL Server is a clustered installation with named instances, specifically for SharePoint and Service Manager. Each instance has its own IP address and dynamic ports are turned off. I’m installing using the domain administrator account and I also chose to run the installer as administrator. The domain admin has sysadmin rights to the Service Manager server and the instance I’m trying to install on. However, the account does not have sysadmin rights to some of the other instances.
    The install is smooth up until it needs to connect to the Analysis server database. I have tried connecting to the analysis servers on other SQL servers on site and all were successful. The only difference between the older SQL servers, the SQL 2012 development server, and the SQL 2012 production server I’m trying to install to is that the domain admin account doesn’t have sysadmin access on all the databases on the new production server. The SQL server is being installed and configured by a contractor, so if you all have troubleshooting suggestions, I’ll need to coordinate with the contractor.
    Starting with the screen below, I began searching for help online. There seems to be no one else with this issue or it is not documented properly. I opened a ticket with MS, called the contractor and troubleshot with him, troubleshot as far as I could on
    my own and I’m still at a loss as to what is preventing the installer from connecting specifically to the analysis server.
    I first thought the installer was at issue or that the data warehouse sever was at issue. But all signs are pointing at the SQL server. The installer is able to connect to all the other SQL servers – including other 2012 servers (same versions) – so it can’t
    be the installer. I’m pretty sure the SQL server is going to be at issue.
    After looking at this error, I opened the resource monitor and clicked the dropdown to see if it was trying to connect to the correct server and it was. I then connected to the old and new test and development servers successfully. Then connected to the
    SQL 2008 R2 production cluster successfully. I then compared the two servers. The only difference other than the version numbers is that the admin account doesn’t have sysadmin rights on all the SQL 2012 database servers. But the database servers are not the
    problem. The analysis servers are.
    I then checked the event logs and they are empty as far as this issue is concerned. Actually, there are no errors on the SQL 2012 production box and the Data Warehouse box. I then checked the log that the installer creates during every step of the installation
    and this is what is created when the dropdown is clicked for the analysis server configuration screen. The log file location is:
    “C:\Users\admin\AppData\Local\Temp\2\SCSMSetupWizard01.txt”
    In the file is the following text.
    01:03:34:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012
    01:03:34:Using SQL Server 2012 management scope on SCSMSQL2012
    01:03:36:Collecting SQL instances on server SCSMSQL2012
    01:03:36:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:36:Using SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:38:Found SQL Instance: SCSMSQL2012\PWGSQL2012
    01:03:38:Found SQL Instance: SCSMSQL2012\SCSMSQL2012
    01:03:39:Error:GetSqlInstanceList(), Exception Type: Microsoft.AnalysisServices.ConnectionException, Exception Message: A connection cannot be made. Ensure that the server is running.
    01:03:39:StackTrace:   at Microsoft.AnalysisServices.XmlaClient.GetTcpClient(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenTcpConnection(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenConnection(ConnectionInfo connectionInfo, Boolean& isSessionTokenNeeded)
       at Microsoft.AnalysisServices.XmlaClient.Connect(ConnectionInfo connectionInfo, Boolean beginSession)
       at Microsoft.AnalysisServices.Server.Connect(String connectionString, String sessionId, ObjectExpansion expansionType)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetASVersion(StringBuilder sqlInstanceServiceName)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetSqlInstanceList(String sqlServerName, Int32 serviceType)
    I’m now investigating the issue according to this output, and decided to ask you all if you’ve run into this issue and found a resolution.

    I am running into the same issue. But I don't see anything in the instances section related to port/IPv6. I do see it in the listener section; I tried to remove it, but it comes up again. Please help
    <ConfigurationSettings>
    <Security>
    <RequireClientAuthentication>0</RequireClientAuthentication>
    <SecurityPackageList/>
    </Security>
    <Network>
    <Listener>
    <RequestSizeThreshold>4095</RequestSizeThreshold>
    <MaxAllowedRequestSize>0</MaxAllowedRequestSize>
    <ServerSendTimeout>60000</ServerSendTimeout>
    <ServerReceiveTimeout>60000</ServerReceiveTimeout>
    <IPV4Support>2</IPV4Support>
    <IPV6Support>2</IPV6Support>
    </Listener>
    <TCP>
    <MaxPendingSendCount>12</MaxPendingSendCount>
    <MaxPendingReceiveCount>4</MaxPendingReceiveCount>
    <MinPendingReceiveCount>2</MinPendingReceiveCount>
    <MaxCompletedReceiveCount>9</MaxCompletedReceiveCount>
    <ScatterReceiveMultiplier>5</ScatterReceiveMultiplier>
    <MaxPendingAcceptExCount>10</MaxPendingAcceptExCount>
    <MinPendingAcceptExCount>2</MinPendingAcceptExCount>
    <InitialConnectTimeout>10</InitialConnectTimeout>
    <SocketOptions>
    <SendBufferSize>0</SendBufferSize>
    <ReceiveBufferSize>0</ReceiveBufferSize>
    <DisableNonblockingMode>1</DisableNonblockingMode>
    <EnableNagleAlgorithm>0</EnableNagleAlgorithm>
    <EnableLingerOnClose>0</EnableLingerOnClose>
    <LingerTimeout>0</LingerTimeout>
    </SocketOptions>
    </TCP>
    <Requests>
    <EnableBinaryXML>0</EnableBinaryXML>
    <EnableCompression>0</EnableCompression>
    </Requests>
    <Responses>
    <EnableBinaryXML>1</EnableBinaryXML>
    <EnableCompression>1</EnableCompression>
    <CompressionLevel>9</CompressionLevel>
    </Responses>
    <ListenOnlyOnLocalConnections>0</ListenOnlyOnLocalConnections>
    </Network>
    <Log>
    <File>msmdredir.log</File>
    <FileBufferSize>0</FileBufferSize>
    <MessageLogs>Console;System</MessageLogs>
    <Exception>
    <CreateAndSendCrashReports>0</CreateAndSendCrashReports>
    <CrashReportsFolder/>
    <SQLDumperFlagsOn>0x0</SQLDumperFlagsOn>
    <SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
    <MiniDumpFlagsOn>0x0</MiniDumpFlagsOn>
    <MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
    <MinidumpErrorList>0xC1000000, 0xC1000001, 0xC100000C, 0xC1000016, 0xC1360054, 0xC1360055</MinidumpErrorList>
    <ExceptionHandlingMode>0</ExceptionHandlingMode>
    <MaxExceptions>500</MaxExceptions>
    <MaxDuplicateDumps>1</MaxDuplicateDumps>
    </Exception>
    </Log>
    <Memory>
    <HandleIA64AlignmentFaults>0</HandleIA64AlignmentFaults>
    <PreAllocate>0</PreAllocate>
    <VertiPaqPagingPolicy>0</VertiPaqPagingPolicy>
    <PagePoolRestrictNumaNode>0</PagePoolRestrictNumaNode>
    </Memory>
    <Instances/>
    <VertiPaq>
    <DefaultSegmentRowCount>0</DefaultSegmentRowCount>
    <ProcessingTimeboxSecPerMRow>-1</ProcessingTimeboxSecPerMRow>
    <SEQueryRegistry>
    <Size>0</Size>
    <MinKCycles>0</MinKCycles>
    <MinCyclesPerRow>0</MinCyclesPerRow>
    <MaxArbShpSize>0</MaxArbShpSize>
    </SEQueryRegistry>
    </VertiPaq>
    </ConfigurationSettings>

  • Sample data warehouse.

    I'm looking for a sample data warehouse, like the MS AdventureWorks DW, for non-commercial use. I'm not an expert.
    Can you help me?
    Regards,
    Paul

    hi
    I think the following link may be helpful
    http://www.oracle.com/technology/obe/11gr1_owb/index.htm
    I followed it and learnt many things... if you follow the tutorials, I think it will be better for you.
    best of luck
    Arif
    Edited by: badwanpk on Oct 28, 2008 5:45 PM

  • Implementing hierarchical structure in data warehouse

    I want to create a data warehouse for a credit card application. Each user can have a credit card and multiple supplementary credit cards. Each credit card has a main limit, which can be sub-divided into sub-limits for supplementary credit cards as requested by the user. Let us consider the following example:
    User “A” has a credit card “CC” with Limit “L” and its limit is $100,000.
    User “A” requested for a supplementary credit card “CC1” which is assigned limit
    “L1” = $50,000. He requests for another supplementary credit card “CC2” which is assigned limit “L2” = $100,000.
    Source tables contain data like this:
    1. src_client_card_trans: contains transaction data of client/user credit card usage (client_id, credit_card_number, balance_acquired)
    Client_id     Credit_card_number     Balance_acquired
    A     CC1     $20,000
    A     CC2     $50,000
    A     CC     $70,000
    2. src_card_limits: contains client’s credit cards linked to credit limits.
    Credit_card_number     Limit_id
    CC1     L1
    CC2     L2
    CC     L
    3. src_limit_structure: contains the relationship of limits and sub-limits.
    Limit_id     Sub_Limit_id
    L     L1
    L     L2
    I have designed two dimensions and one fact table. Dimensions are:
    1. LIMITS: contains the limit_id.
    2. CLIENTS: contains credit card user’s information.
    And fact table is LIMIT_BALANCES_FACT, which have some fact columns with the above dimensions.
    How can I implement the above scenario of limit hierarchy in data warehouse? Need your suggestions.
    Thanks in advance

    Much depends on how you want to analyze the data and there are a few options:
    1) Use credit limit as an attribute of the customer dimension. This would allow you to create query filters that can just show those customers with a $100,000 credit limit. This would return a list of credit cards (since the attribute would be assigned to each credit card) and then you can simply add or just keep the parents of that result set.
    However, this assumes you do not want to measure data specifically relating to credit card limit. For example it would not be possible to view a total amount spent by all customers who had a credit-limit of $100,000.
    In this case the attribute, credit limit, is simply used to filter a result set
    2) Create a separate dimension called Credit Limit and create three levels:
    All
    Range
    Credit Limit
    The level Range would contain groupings of credit limits such as 100-500, 501-1,200, 1,201-2,000, and so on.
    This would allow you to analyse your data by customer and by credit limit over time. Allowing you to slice and dice quickly and easily.
    3) A second customer hierarchy could be added to the customer dimension. This would allow you to drill-down through different credit limits to customers to individual credit cards. It would be advisable to follow the same approach as option 2 and create some groupings for the credit limits to make the drill down easier for your business users to navigate:
    All
    Range
    Credit Limit
    Customer
    Credit Card
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/
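    As a concrete sketch of the source tables described in the question, the limit/sub-limit rollup can be exercised in SQLite. The table and column names follow the post and the data values are the example figures given; the query itself is illustrative, not from the thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Source tables exactly as described in the post
cur.execute("CREATE TABLE src_client_card_trans (client_id TEXT, credit_card_number TEXT, balance_acquired INTEGER)")
cur.execute("CREATE TABLE src_card_limits (credit_card_number TEXT, limit_id TEXT)")
cur.execute("CREATE TABLE src_limit_structure (limit_id TEXT, sub_limit_id TEXT)")

cur.executemany("INSERT INTO src_client_card_trans VALUES (?,?,?)",
                [("A", "CC1", 20000), ("A", "CC2", 50000), ("A", "CC", 70000)])
cur.executemany("INSERT INTO src_card_limits VALUES (?,?)",
                [("CC1", "L1"), ("CC2", "L2"), ("CC", "L")])
cur.executemany("INSERT INTO src_limit_structure VALUES (?,?)",
                [("L", "L1"), ("L", "L2")])

# Roll sub-limit balances up to their parent limit:
# card -> limit (src_card_limits) -> parent limit (src_limit_structure)
rows = cur.execute("""
    SELECT ls.limit_id AS parent_limit,
           SUM(t.balance_acquired) AS sub_limit_balance
    FROM src_limit_structure ls
    JOIN src_card_limits cl ON cl.limit_id = ls.sub_limit_id
    JOIN src_client_card_trans t ON t.credit_card_number = cl.credit_card_number
    GROUP BY ls.limit_id
""").fetchall()
print(rows)  # parent limit L aggregates the 20,000 + 50,000 of its sub-limits
```

    The same join path is what an ETL mapping would use to populate a parent-limit level in the LIMITS dimension for the drill-down described in option 3.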

  • Data Warehouse Performance impact

    Hi, everyone.
    I have a model that I'm designing for our Data Warehouse team.
    It is a star schema (not a snowflake), and it is my understanding that all of the FK fields together should define the FACT table's PK.
    However, my developer doesn't want to do that... he'd rather have only the fields which define the actual uniqueness of the FACT data in the PK, leaving the other DIM FKs as attributes. He would also set up indexes on those columns according to what the query structure needs, but he doesn't see any performance or other benefit to the structure that I'm suggesting.
    What benefits can I tell him will be realized by having all the dimensional FKs in the FACT table's PK? Will it help query performance? Will it otherwise enhance BI objectives? Our DBAs were a little unclear on what the performance benefits would be...
    Thanks for the input!
    David

    Hi David,
    These are probably the questions we all have to take time for when designing a system. However, if you forget that we are looking at a 'star schema' and simply look at it from a data-modeling perspective, you quickly realize that only the attributes (FKs) that make up the combination uniquely identifying exactly one row in the fact table should be in the PK.
    Let's say, for example, that you add another FK to the PK even though it is not actually part of the combination that identifies one row uniquely. That would allow two rows to be inserted with the same combination of columns that should uniquely identify exactly one row! Due to the 'extra' FK in the PK, the uniqueness is no longer enforced, because it can be this extra FK that distinguishes both rows and allows the database to store these two rows without throwing a PK constraint violation!
    When building fact mappings that load your fact tables, a lookup is done to see if the new input data already exists. That must be a lookup on the fact table's PK, and the smaller you can keep it (as few columns in the PK index as possible), the better. Regarding performance, test by adding indexes on other columns later, when some data is loaded.
    Looking back, I made this mistake myself somewhere in the past, and it cost me some extra work later to 'redo' things and correct my design. The advantage is: you do such things only once. :-)
    Hope this helps.
    Regards,
    Ed
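    Ed's point about the 'extra' FK can be demonstrated with a small SQLite sketch. The fact table and column names here are hypothetical, chosen only for illustration, and the assumed grain is (date_key, product_key): with the minimal PK the database rejects a second row at the same grain, while the padded PK silently accepts a logical duplicate.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Fact table whose PK is exactly the combination that identifies one row
con.execute("""CREATE TABLE fact_sales_strict (
    date_key INTEGER, product_key INTEGER, store_key INTEGER,
    amount REAL,
    PRIMARY KEY (date_key, product_key))""")

# Same grain, but store_key (a non-identifying FK) was added to the PK
con.execute("""CREATE TABLE fact_sales_loose (
    date_key INTEGER, product_key INTEGER, store_key INTEGER,
    amount REAL,
    PRIMARY KEY (date_key, product_key, store_key))""")

con.execute("INSERT INTO fact_sales_strict VALUES (20240101, 1, 10, 5.0)")
try:
    # A second row at the same grain is rejected: the PK does its job
    con.execute("INSERT INTO fact_sales_strict VALUES (20240101, 1, 11, 7.0)")
    strict_rejected = False
except sqlite3.IntegrityError:
    strict_rejected = True

con.execute("INSERT INTO fact_sales_loose VALUES (20240101, 1, 10, 5.0)")
# The extra PK column lets a logically duplicate row slip in unnoticed
con.execute("INSERT INTO fact_sales_loose VALUES (20240101, 1, 11, 7.0)")
loose_rows = con.execute("SELECT COUNT(*) FROM fact_sales_loose").fetchone()[0]
print(strict_rejected, loose_rows)
```

    The lookup performed by a fact-loading mapping would likewise miss the duplicate in the loose table, which is exactly the double-loading risk Ed describes.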

  • Oracle based data warehouse creation

    Hi All ,
    I need to know the best infrastructure for creating a data warehouse. Our client currently manages data in SQL Server, Oracle, and Excel sheets. Now they want to implement a data warehouse, BI, and data mining.
    Scalability is our main concern because the size of the data will be in terabytes (10 TB) in the next few years. The scope is analysis, impact analysis, forecasting, GIS-based analysis, and mining (trend analysis).
    Please advise me on the following points (I mean, help me with tool selection):
    A. Storage
    B. Operating System, Database Server
    C. Other Servers (Warehouse, Mining, BI).
    D. Tools (Database, ETL, OLAP/Reporting, and Mining)
    I'm working with Hyperion BI; I'm new to data warehousing. It would also be great to get the cost of the tools (for perpetual licenses).
    Thanks in Advance
    Shiv

    Oracle Retail Data Model is independent of RDW and RGBU applications. It can be used with RMS and Point of Sale, but there is no pre-built ETL to load the data included with the product, although there are some options available thru partners.

  • Example database for data warehouse

    Hi!
    Does anybody know where an example database for a data warehouse can be found?
    - a schema is good
    - a schema with data is better
    Best regards

    slkLinuxUser wrote:
    Hi!
    Does anybody know where an example database for a data warehouse can be found?
    - a schema is good
    - a schema with data is better
    Best regards
    Just like an OLTP database, the schema design and its data are 100% dependent on the business needs and (if done properly) the result of a thorough data analysis. Any kind of pre-designed sample would be near worthless to any actual application.

  • Failure on clean install of Service Manager 2012 R2 Data Warehouse installation - Error about "AssignSdkAccountAsSsrsPublisher"

    I'm installing the Data Warehouse component on a new server (Server 2008 R2 SP1 with SQL 2012 SP1 CU6).  Reporting service, analysis services and full text services are installed.  I'm installing under an account that has DBO rights to the instance. 
    All pre-reqs pass.  The installation gets to the very end and I get the following error:
    An error occurred while executing a custom action:_AssignSdkAccountAsSsrsPublisher
    This upgrade attempt has failed before permanent modifications were made. Upgrade has successfully rolled back to the Original state of the system. Once the correction are made, you can retry upgrade for this role.
    I look in the setup logs and only see the following reference to "AssignSdkAccountAsSsrsPublisher"
    MSI (s) (1C:00) [15:09:51:914]: NOTE: custom action _AssignSdkAccountAsSsrsPublisher unexpectedly closed the hInstall handle (type MSIHANDLE) provided to it. The custom action should be fixed to not close that handle.
    CustomAction _AssignSdkAccountAsSsrsPublisher returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
    Originally, I had some issues with services not being started during the install because some accounts didn't have "logon as a service" rights, but those have all been rectified.  Not sure if something is left over from that, or maybe CU6 wasn't tested with SM R2 and I need to go back to an earlier CU?

    That worked for me too.  Manually removing all the folders from SSRS and then retrying the install allowed it to proceed and complete successfully the second time around.
    The first time installing SCSM 2012 R2, I got hit with this error, which is why SSRS wasn't in a clean state.  Make sure to launch setup.exe elevated as an admin!
