Enforce best practices, rules on SSAS OLAP

Hello,
I am coming up with a list of rules for the creation of OLAP cubes, such as naming conventions for cubes, measures, and dimensions, sorting of certain columns, and so on. I am trying to find a tool that would enforce these rules: when a developer is creating a cube and
doesn't follow a rule, he should be notified. For the SQL database engine, we are exploring TOAD, where we can define rules for table creation, create models, and push them out to SQL Server. Does the Microsoft BI suite have a similar tool?
Also, for documentation of the Microsoft BI stack, I am looking at Pragmatic Works Doc xPress. Is there a better or alternative solution?
Thanks,
CK

Hi CK,
According to your description, you are looking for a tool that notifies a developer when a cube he is creating doesn't follow your rules, right? Based on my research, there is no such functionality to achieve this requirement.
If you have any concerns about this behavior, you can submit feedback at
http://connect.microsoft.com/SQLServer/Feedback and hope it is addressed in a future service pack or release. Your feedback enables Microsoft to make its software and services the best they can be, and Microsoft may consider adding this feature
in a following release after official confirmation.
Thank you for your understanding.
Regards,
Charlie Liao
TechNet Community Support
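A possible low-tech workaround, not from the reply above: SSAS exposes its metadata through dynamic management views such as $SYSTEM.MDSCHEMA_MEASURES, which the database engine can query over a linked server. A minimal sketch, assuming a linked server named SSAS_CUBES (MSOLAP provider) and a purely hypothetical "Msr " measure-name prefix convention:

    -- Report measures that break the (assumed) "Msr " prefix rule.
    -- SSAS_CUBES is a hypothetical linked server pointing at the SSAS instance.
    SELECT m.CUBE_NAME, m.MEASURE_NAME
    FROM OPENQUERY(SSAS_CUBES,
        'SELECT CUBE_NAME, MEASURE_NAME FROM $SYSTEM.MDSCHEMA_MEASURES') AS m
    WHERE m.MEASURE_NAME NOT LIKE 'Msr %';

The same pattern against $SYSTEM.MDSCHEMA_DIMENSIONS covers dimension names. Run from a scheduled job, this reports violations after deployment rather than blocking them at design time, but it approximates the rule checking asked about until a dedicated tool exists.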

Similar Messages

  • Best tool to analyze SSAS OLAP performance?

    I have a SQL Server 2014 SSAS OLAP cube and Power View on SharePoint 2013. Report response time is slow.
    What is the best tool to test performance and analyze the reasons why the cube is slow?
    Kenny_I

    Hi Kenny_I,
    According to your description, you want to monitor SSAS performance, right?
    In Analysis Services, you can monitor performance by using SQL Server Profiler or Performance Monitor. In SQL Server Profiler, you can create and manage traces, and analyze and replay trace results. In Performance Monitor,
    you can track the performance of a Microsoft SQL Server Analysis Services (SSAS) instance by using performance counters. Please refer to the links below:
    Use SQL Server Profiler to Monitor Analysis Services
    Performance Counters (SSAS)
    There are also a number of load test tools for SSAS; please refer to the link below:
    Load Test Tools for Analysis Services
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Best Practices for Maintaining SSAS Projects

    We started using SSAS recently, and we maintain one project that we deploy to both DEV and PROD instances by changing the deployment properties. However, this gets messy when we introduce new fact tables into the DEV data warehouse (that are not promoted to
    the production data warehouse). While we work on adding new measure groups and calculations (based on the new fact tables in DEV), we are unable to make any changes to the production cube (such as changes to calculations, formatting, etc.) requested by business
    users. Sorry for the long question, but is there a best practice for managing projects and migrations? Thanks.

     While we work on adding new measure groups and calculations (based on the new fact tables in DEV), we are unable to make any changes to the production cube (such as changes to calculations, formatting, etc.) requested by business users.
    Hi Sbc_wisc,
    You can create a new project by importing the metadata from the production cube on the server, using the Import from Server (Multidimensional and Data Mining) Project template in SQL Server Data Tools (SSDT). Then make your changes in that project
    and redeploy it to the production server.
    Reference:
    Import a Data Mining Project using the Analysis Services Import Wizard
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best practice for the Update of SAP GRC CC Rule Set

    Hi GRC experts,
    In our CC production system we have an SoD matrix that we would like to modify extensively, basically by activating many permissions.
    What is the best practice for accomplishing this?
    Many thanks in advance. Best regards,
      Imanol

    Hi Simon and Amir
    My name is Connie and I work in Accenture's GRC practice (and am a colleague of Imanol's). I have been reading this thread and I would like to ask a question related to this topic. We have a case where a Global Rule Set ("Logic System") is in place, and we may also need to create a Specific Rule Set. Is there a document (from SAP or from best practices) that indicates the potential impact (regarding risk analysis, system performance, process execution time, etc.) of implementing both types of rule sets in a production environment? Are there any special considerations to be aware of? Have you ever implemented this type of scenario?
    I would really appreciate your help, and if you could point me to specific documentation it would be of great assistance. Thanks in advance and best regards,
    Connie

  • Best Practice Question: Portal Sync Rules

    Hi,
    Is there any benefit in combining Inbound and Outbound Flows in a single Sync Rule?
    Or does it not matter if Inbound flows have their own Sync Rule and Outbound flows have their own Sync Rules?
    Does either option generate more/less EREs/DREs?
    Is either option better for performance reasons?
    look forward to your comments, thank you
    sk

    1:
    Only the OSRs create EREs, so if you have 4 outbound and 3 inbound rules (assuming a user gets all 4 outbound SRs), the user should have 4 EREs. If the rules were combined into 3 outbound/inbound and 1 outbound, the user still gets 4 EREs.
    2:
    If my source was AD, I would create a new MV attribute called inAD. Then import a constant "true" from the target, AD for example. Then from my source, HR for example, I'll flow in a constant "false". Lastly, update the MV attribute precedence for the
    inAD attribute to be AD then HR. From there I'd create a custom attribute in the portal so I can use the flag for set criteria.
    This might be a long-winded way of getting it done, but say you have 10k users in AD and used DREs: that means 10k more objects in the MV and in the portal.
    Disclaimer: Not saying this is best practice, it's just what I do :)

  • Best practice for business rules

    Our business rules have
    Fix ( [Cost Center] )
    to extract the user's Cost Center from his form so that it runs faster.
    What is the best practice for running that same business rule, but for all Cost Centers? Would it be to put that business rule in a menu somewhere and let it prompt users to manually type "Cost Center" so that the rule processes all cost centers?
    Thanks.
    David

    You can try this: create your primary business rule with FIX(@RELATIVE(VarCostCenter,0)), where VarCostCenter is a runtime prompt. You can then use it to calculate only the current member on the form (the FIX will give you only one member).
    Then create a new sequence and add your business rule to it, go to the "Launch Variables" tab, find the prompt for Cost Center, set it to "Total Cost Centers" and click Hide. So now you basically have a copy of the primary rule, but it runs for all cost centers automatically.
    Using this technique you only have to maintain one business rule!

  • Rules Best Practice Advice Required

    I find that I'm fighting with the Business Rules in my BPM project, so I thought I'd throw the scenario out here and see what best practices anyone might propose.
    The Example*:
    Assume I have people, and each of them is assigned a list/array of "aspects" from an enumerated set: TALL; SPORTY; TALKATIVE; TRAVELER; STUDIOUS; GREGARIOUS; CLAUSTROPHOBIC.
    Also assume I have several Marketing campaigns, and as part of one or more processes, I need to occasionally determine whether a person fits the criteria for a particular campaign, which is based on the presence of one or more aspects. The definitions of the campaigns may change, so the thought is to define them as business rules; if a definition changes, the rule changes, without impacting the processes themselves (assume the set of campaigns doesn't change, just the rules for matching aspects to a particular campaign).
    My initial take is to define each campaign as a bucketset containing the aspects whose presence indicates inclusion in the campaign. If a person has ANY of the aspects, they are considered a member.
    Campaigns (each perhaps defined as a LOV bucketset):
    DEODORANT: SPORTY, TRAVELER, GREGARIOUS
    E_READER:STUDIOUS,TRAVELER
    BREATH_MINT:TALKATIVE, GREGARIOUS
    HELMET:TALL, CLAUSTROPHOBIC
    So we want to create a service to check: does a person belong to the BREATH_MINT campaign? We extract their aspects and check whether ANY of them are in the BREATH_MINT campaign. If so, we return true. Basically: return ( intersection( BREATH_MINT.elements(), person.aspects() ) ).size() > 0 (the underlying set logic is sketched after this question).
    The problem is: what's the best way to implement this using Business Rules? Functions? Decision Functions? Decision Tables? Straight IF/THEN? Some combination of the above? I find I'm fighting the tool, which means that, although this is a fairly simple problem, I don't understand the purpose of the various parts of the tool well.
    Things to consider:
    Purpose: test a person for inclusion in a specific campaign
    Input - the person's aspects, either directly, or extracted from the person
    Output - a boolean
    There can be a separate service for each campaign, or the campaign could be specified by an enumerated value as a parameter.
    Many thanks in advance!
    ~*Completely Fabricated~
    Edited by: 842765 on Mar 8, 2011 12:07 PM - typos
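    Rules-engine specifics aside, the membership test itself is a plain set intersection with ANY semantics. As a language-neutral illustration (not Oracle Business Rules syntax), here is the same check in SQL against two hypothetical tables, PersonAspect(PersonId, Aspect) and CampaignAspect(Campaign, Aspect):

        -- Returns 1 when the person shares at least one aspect with the campaign.
        DECLARE @PersonId int = 42;  -- hypothetical person id
        SELECT CASE WHEN EXISTS (
                   SELECT 1
                   FROM PersonAspect AS pa
                   JOIN CampaignAspect AS ca
                     ON ca.Aspect = pa.Aspect
                   WHERE pa.PersonId = @PersonId
                     AND ca.Campaign = 'BREATH_MINT'
               ) THEN 1 ELSE 0 END AS IsMember;

    Whatever construct the rules engine ends up using, keeping the campaign definitions as data (rows in a bucket, as above) rather than hard-coded IF/THEN branches is what lets the definitions change without touching the processes.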

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time entering the user/pass for each machine; is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt

  • Query: Best practice SAN switch (network) access control rules?

    Dear SAN experts,
    Are there generic SAN (MDS) switch access control rules that should always be applied within the SAN environment?
    I have a specific interest in network-based access control rules/CLI-commands with respect to traffic flowing through the switch rather than switch management traffic (controls for traffic flowing to the switch).
    Presumably one would want to provide SAN switch demarcation between initiators and targets using VSANs, zoning (and LUN zoning for fine-grained access control, plus defense in depth with storage-device LUN masking), IP ACLs, and read-only zones (or LUNs).
    In a LAN environment controlled by a (gateway) firewall, there are (best practice) generic firewall access control rules that should be instantiated regardless of enterprise network IP range, TCP services, topology etc.
    For example, the blocking of malformed TCP flags or the blocking of inbound and outbound IP ranges outlined in RFC 3330 (and RFC 1918).
    These firewall access control rules can be deployed regardless of the IP range or TCP service traffic used within the enterprise. Of course there are firewall access control rules that should also be implemented as best practice that require specific IP addresses and ports that suit the network in which they are deployed. For example, rate limiting as a DoS preventative, may require knowledge of server IP and port number of the hosted service that is being DoS protected.
    So my question is, are there generic best practice SAN switch (network) access control rules that should also be instantiated?
    regards,
    Will.

    Hi William,
    That's a pretty wide net you're casting there, but I'll do my best to give you some insight into the matter.
    Speaking pure fibre channel, your only real way of controlling which nodes can access which other nodes is Zones.
    For zones there are a few best practices:
    * Default zone: don't use it, unless you're running FICON.
    * Single initiator zones: one host, many storage targets. Don't put 2 initiators in one zone or they'll try logging into each other, which at best will give you a performance hit, at worst will bring down your systems.
    * Don't mix zoning types: you can zone on WWN, on port, and Cisco NX-OS will give you a plethora of other options, like device alias or LUN zoning. Don't use different types of these in one zone.
    * Device alias zoning is definitely recommended, with Enhanced Zoning and Enhanced DA enabled, since it will make replacing HBAs a heck of a lot less painful in your fabric.
    * LUN zoning is being deprecated, so avoid it. You can achieve the same effect on any modern array by doing LUN masking.
    * Read-only zones exist, but again, any modern array should be able to make a LUN read-only.
    * QoS on zoning: not really an ACL method, more of a congestion control.
    VSANs are a way to separate your physical fabric into several logical fabrics. There's one huge distinction from VLANs: as a rule of thumb, you should put things that you want to talk to each other in the same VSAN. There's no such concept as a broadcast domain in FC the way it exists in Ethernet, so VSANs don't serve as isolation for that. Routing in Fibre Channel (IVR, or Inter-VSAN Routing) is possible, but quickly becomes a pain if you use it a lot or structurally. Keep IVR for exceptions; use VSANs for logical units of hosts and storage that belong to each other. A good example would be to put each of 2 remote datacenters in their own VSAN, create a third VSAN for the ports on the array that provide replication between the DCs, and use IVR to give management hosts in-band access to all arrays.
    When using IVR, maintain a manual and minimal topology. IVR tends to become very complex very fast, and auto topology doesn't help with this.
    Traditional IP ACLs (permit this protocol to that destination on such a port and deny other combinations) are very rare on management interfaces, since those are usually connected to already separated segments. The same goes for Fibre Channel over IP links (that connect to Ethernet interfaces in your storage switch).
    They are quite logical to use and work just the same on an MDS as on a traditional Ethernet switch when you want to use IP over FC (not to be confused with FC over IP). But then you'll logically be using your switch as an L2/L3 device.
    I'm personally not an IP guy, but here's a quite good guide to setting up IP services in a FC fabric:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ipsvc.html
    To protect your san from devices that are 'slow-draining' and can cause congestion, I highly recommend enabling slow-drain policy monitors, as described in this document:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/int/nxos/intf.html#wp1743661
    That's a very brief summary of the most important access-control-related Best Practices that come to mind.  If any of this isn't clear to you or you require more detail, let me know. HTH!

  • Best Practices - Enforcing the review of Firefighter Logs/Reports

    Hi,
    I am looking for some best practices as they pertain to the review of Firefighter usage logs. How are companies these days reviewing, documenting, and enforcing that system-generated FF logs/reports are indeed being reviewed and monitored? Anything you can share is greatly appreciated.
    I have searched the GRC forum, the Firefighter posts, and reviewed the recently released "Super User Access" article, but have only found information on the tool's functionality and technical specs.
    Regards,
    Edited by: jmsreyes on Jul 20, 2009 6:38 PM

    Hi,
    There is no standard or best practice for enforcing the review of FF logs and reports. Every client/company plans their own strategy around this.
    One of my clients used to ask every controller to print out the log and file the printed copy with their signature on it. They were required to keep this for a year or so. Another client asked controllers to print the log to PDF and save it to a secure location, which signified that they had reviewed it. If there is any issue, it is the responsibility of the particular FF controller.
    Regards,
    Alpesh

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM and usernames/passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether a service is invoked as part of your larger process or not; if it performs any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Multi layer table view/navigation controller hierarchy best practice

    Hi,
    I am new to iPad/iPhone development and am wondering what the best practice is for multiple layers of table views. I understand the principle of a navigation controller providing the framework for moving up and down a list, but have not yet quite got my head around whether you should have one navigation controller for the whole tree or several navigation controllers.
    In my app I need to have the following:
    Main view -> window view showing some interactive elements (picker, buttons etc.)
    Setup view -> Hierarchy managed by nav controller/table views
    The setup view needs to manage the following hierarchy...
    - Level A:
      - Global app variables (one table view)
      - Level B Items (table view showing list of items belonging to Level B)
        - Level B Item 1 (table view showing list of items at Level C belonging to Level B Item 1)
          - Level C Item 1 (table view showing list of items at Level D belonging to Level C Item 1)
            - Level D Item 1 (table view showing list of items at Level E belonging to Level D Item 1)
              - Level E item (table view for properties of an item at Level E)
            - Level D Item n
          - Level C Item n
        - Level B Item n
    Each level in this has some properties and then a list of child items.
    What would be the best way of structuring this? I would assume that creating a class that extends a view controller for each level is a given, but what about control of the navigation? Should this be handled by one navigation controller, or one per level? I think I know the right answer, but have not seen a neat way of implementing it.
    I think I am also best off having each level in its own xib but, once again, am not 100% sure that this is the best design pattern.
    Many thanks in advance for any help/pointers!
    Cheers
    jez


  • What is the best practice for inserting (unique) rows into a table with a two-column key constraint, where the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained
    (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is SELECT * INTO a #temp table from the join
    of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table, and then insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? What would that look like?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table has a unique key constraint on two columns. [unh? one two-column key or two one-column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
    MERGE statement; Google it (a sketch follows this reply). And do not use temp tables.
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
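
    Since no DDL was posted, here is a minimal sketch of the MERGE approach, with hypothetical names: dbo.FinalData has a unique key on (Key1, Key2), and dbo.DailyCapture has no key. The source must be de-duplicated first, because two capture rows with the same key pair would otherwise both qualify as "not matched" and the insert would violate the unique constraint:

        -- Insert only key pairs that are new to FinalData; ignore the rest.
        MERGE dbo.FinalData AS tgt
        USING (SELECT DISTINCT Key1, Key2, SomeValue  -- swap in ROW_NUMBER() if
               FROM dbo.DailyCapture) AS src          -- non-key columns can differ
            ON tgt.Key1 = src.Key1
           AND tgt.Key2 = src.Key2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (Key1, Key2, SomeValue)
            VALUES (src.Key1, src.Key2, src.SomeValue);

    This replaces the whole #temp/delete/insert sequence with one statement and leaves the daily capture table untouched. An INSTEAD OF trigger would also work, but it hides the de-duplication logic inside the table definition, where the next developer is unlikely to look for it.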

  • Best practice for loading config params for web services in BEA

    Hello all.
    I have deployed a web service using a Java class as the back end.
    I want to read in config values (like init-params for servlets in web.xml). What is the best practice for doing this in the BEA framework? I am not sure how to use the web.xml file in the WAR file, since I do not know the name of the underlying servlet.
    Any useful pointers will be very much appreciated.
    Thank you.


  • BPC 7M SP6 - best practice for multi server setup

    Experts,
    We are considering purchasing new hardware for our BPC 7M implementation. My question is: what is the recommended or best-practice setup for SQL and Analysis Services? Should they be on the same server, or each on a dedicated server?
    The hardware we're looking at would have 4 dual-core processors and 32 GB RAM on an x64 base. Would this adequately support both services?
    Our primary application cube is just under 2 GB and the appset database is about 12 GB. We have over 1,400 users and a concurrency count of 250 users. We'll have 5 app/web servers to handle this concurrency.
    Please let me know if I am missing any information needed to answer this question.
    Thank you,
    Hitesh

    I don't think there's really a preference on that point. As long as it's 64-bit, the servers scale well (CPU, RAM), so SQL and SSAS can be on the same server. But it is important to look beyond CPU and RAM and make sure there are no other bottlenecks, such as storage (best practice is to split the database files across several disks, and of course to keep the logs on disks that are used only for the logs). Also, the memory allocation in SQL and OLAP should be adjusted so that each has enough memory at all times (see the sketch after this reply).
    Another point to consider is high availability. Clustering is quite common on that tier, and you could consider having the active node for SQL on one server and the active node for OLAP (SSAS) on the other. It costs more in SQL licensing, but you get to fully utilize both servers, at the cost of degraded performance in the event of a failover.
    Bruno
    Edited by: Bruno Ranchy on Jul 3, 2010 9:13 AM
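
    On the memory-allocation point in the reply above: the relational engine's share can be capped in T-SQL with sp_configure, while SSAS's limits are set separately (msmdsrv.ini / server properties, e.g. Memory\TotalMemoryLimit). A minimal sketch; the 24 GB cap is an assumed split for a 32 GB box that must leave headroom for SSAS, not a recommendation:

        -- Cap the SQL Server relational engine so SSAS keeps guaranteed headroom.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 24576;  -- assumed 24 GB of 32 GB
        RECONFIGURE;

    Whatever the exact split, the point stands: set explicit ceilings on both sides so neither engine starves the other under load.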

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
  I've just started on a new project. I have significant ABAP development experience (15+ years), but one thing that I have never seen used correctly, on any of the projects I have worked on, is the package concept in ABAP.
    I would like to define some best practices - about when we should create packages - and about how they should be structured.
    My understanding of the package concept is that packages allow you to bundle all of the related objects of a piece of development work together. In previous projects, and on almost every project I have ever worked on, we just have packages ZBASIS, ZMM, ZSD, ZFI and so on. But this to me is a very crude usage of packages; really, it seems that we have not moved on past the 4.6 usage of the old development class concept, and it means that packages do not really add much value.
    I read in the SAP PRESS Next Generation ABAP book (Thomas Jung, Rich Heilman) (I only have the 1st edition) that we should use packages for defining separation of concerns for an application. So it seems they are recommending that for each and every application we write, we define at least 3 packages: one for model, one for controller, and one for view-based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion and so also add little value. Is this really the best-practice approach? Has anyone tried this approach across a full-blown implementation?
    As we are starting a new implementation, we will be running 7 EHP2, and I would really like to get the most out of the functionality that is provided. I wonder what experience others have with the definition of packages.
    One possible usage that occurs to me is to define the packages as a mirror image of the application business object class hierarchy (see below). But perhaps this is overcomplicating their usage, and it would lead to issues later in terms of transport conflicts etc.:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP package concept, from real-life project experience?
    All contributions are most welcome, although please refrain from sending links on how to create packages in SE80.
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another project and just continue.
    4. After go-live, the maintenance partner didn't care.
    So my experience is that it is a nice feature, but only from a high-level design point of view. In real life it will get messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy
