SSAS Tabular: Handling time datatypes

Folks,
I have two dimensions related to time in my DW. One is for dates (dDate), and another which is for the time of the day (dTime), with the granularity set to minutes (we don't care for seconds).
Here is the code for creating my dTime table:
CREATE TABLE dTime (
[timekey] smallint, --NOT SURE ABOUT THIS SEQUENCE AS THE PRIMARY KEY ** MUST REVIEW **
[Hour of Day (24)] tinyint, --0-23, military/European time
[Hour of Day (12)] tinyint, --1-12, repeating for AM/PM, for us American types
[Meridien] char(2), --AM/PM
[Minute of Hour] tinyint, --the minute of the hour, reset at the top of each hour; 0-59
[Half Hour] tinyint, --1 or 2, for the first or second half of the hour
[Half Hour of Day] tinyint, --1-48, incremented at the top of each half hour for the entire day
[Quarter Hour] tinyint, --1-4, for each quarter hour
[Quarter Hour of Day] tinyint, --1-96, incremented at the top of each quarter hour for the entire day
[Time String (24)] char(5), --military/European textual representation
[Time String (12)] char(5), --12 hour clock representation sans AM/PM
[Time Value] time(0), --Time in 00:00:00 T-SQL time datatype
CONSTRAINT PK_dTime_timekey PRIMARY KEY CLUSTERED ([timekey])
);
GO
For the sake of completeness, I have recently added a time(0) field to my table. I am creating an SSAS Tabular model based on data from this DW. However, it has come to my attention that SSAS in Tabular mode does not handle time-only fields. Is that correct?
Could someone please advise?
Best Regards,
P.

Multidimensional and Tabular are different.
Tabular has a dictionary for every column, and there is only one data type (datetime) that contains both date and time. If you have a TIME data type in SQL, it is automatically imported in the right way and assigned to a date in the past (1/1/1900 or something similar).
To optimize the column, lower the number of unique values: if you don't need seconds, round to the minute.
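If the rounding cannot be done at the source, a minimal DAX sketch of the same idea, assuming a hypothetical imported column Fact[TimeValue] (DAX stores a datetime as a number of days, so multiplying by 1440 converts it to minutes), is a calculated column such as:
-- hypothetical calculated column replacing Fact[TimeValue];
-- 1440 = minutes per day, FLOOR truncates to the whole minute
= FLOOR ( Fact[TimeValue] * 1440, 1 ) / 1440
Rounding in the source view before import achieves the same dictionary reduction without carrying an extra column.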
Marco Russo (Blog, Twitter, LinkedIn) - sqlbi.com: Articles, Videos, Tools, Consultancy, Training
Format with DAX Formatter and design with DAX Patterns. Learn Power Pivot and SSAS Tabular.

Similar Messages

  • Excel SSAS Tabular error: An error occurred during an attempt to establish a connection to the external data source

    Hello there,
    I have an Excel report I created which works perfectly fine on my dev environment, but fails on my test environment when I try to do a data refresh.
    The key difference between both dev and test environments is that in dev, everything is installed in one server:
    SharePoint 2013
    SQL 2012: Database Instance, SSAS Instance, SSRS for SharePoint, SSAS POWERPIVOT instance (Powerpivot for SharePoint).
    In my test and production environments, the architecture is different:
    SQL DB Servers in High Availability (irrelevant for this report since it is connecting to the tabular model, just FYI)
    SQL SSAS Tabular server (contains a tabular model that processes data from the SQL DBs).
    2x SharePoint Application Servers (we installed both SSRS and PowerPivot for SharePoint on these servers)
    2x SharePoint FrontEnd Servers (contain the SSRS and PowerPivot add-ins).
    Now in dev, test and production, I can run PowerPivot reports that have been created in SharePoint without any issues. Those reports can access the SSAS Tabular model without any issues, and perform data refresh and OLAP functions (slicing, dicing, etc).
    The problem is with Excel reports (i.e. .xlsx files) uploaded to SharePoint. While I can open them, I am having a hard time performing a data refresh. The error I get is:
    "An error occurred during an attempt to establish a connection to the external data source [...]"
    I ran SQL Profiler on the SSAS server hosting the Tabular instance, and I noticed that every time I try to perform a data refresh, two entries appear under the user name ANONYMOUS LOGON.
    Since things work without any issues on my single-server dev environment, I tried running SQL Server Profiler there as well. In the dev environment the query runs without any issues, and the user name logged is in fact my username from the dev environment domain. I also have a separate user for the test domain, and another for the production domain.
    Now upon some preliminary investigation I believe this has something to do with the data connection settings in Excel and the usage (or no usage) of secure store. This is what I can vouch for so far:
    Library containing reports is configured as trusted in SharePoint Central Admin.
    Library containing data connections is configured as trusted in SharePoint Central Admin.
    The Data Provider referenced in the Excel report (MSOLAP.5) is configured as trusted in SharePoint Central Admin.
    In the Excel report, the Excel Services authentication setting is set to "use authenticated user's account". This works fine in the DEV environment.
    Concerning Secure Store, the PowerPivot configuration has set up the PowerPivotUnattendedAccount application ID in all the environments. There is NO Application ID configured for Excel Services in any of the environments (dev, test or production). Although I reckon this is where the solution lies, I am not 100% sure why it fails in test and prod. But as I read what I am writing, I reckon this is because of the authentication "hops" through servers. Am I right in my assumption?
    Could someone please advise what I am doing wrong in this case? If I am indeed missing a Secure Store entry for Excel Services, could someone advise me on how to set it up? My confusion is around the "Target Application Type" setting.
    Thank you for your time.
    Regards,
    P.

    Hi Rameshwar,
    PowerPivot workbooks contain embedded data connections. To support workbook interaction through slicers and filters, Excel Services must be configured to allow external data access through embedded connection information. External data access is required
    for retrieving PowerPivot data that is loaded on PowerPivot servers in the farm. Please refer to the steps below to solve this issue:
    1. In Central Administration, in Application Management, click Manage service applications.
    2. Click Excel Services Application.
    3. Click Trusted File Location.
    4. Click http:// or the location you want to configure.
    5. In External Data, in Allow External Data, click Trusted data connection libraries and embedded.
    6. Click OK.
    For more information, please see:
    Create a trusted location for PowerPivot sites in Central Administration:
    http://msdn.microsoft.com/en-us/library/ee637428.aspx
    Another reason is Excel Services returns this error when you query PowerPivot data in an Excel workbook that is published to SharePoint, and the SharePoint environment does not have a PowerPivot for SharePoint server, or the SQL Server Analysis
    Services (PowerPivot) service is stopped. Please check this document:
    http://technet.microsoft.com/en-us/library/ff487858(v=sql.110).aspx
    Finally, here is a good article regarding how to troubleshoot PowerPivot data refresh for your reference. Please see:
    Troubleshooting PowerPivot Data Refresh:
    http://social.technet.microsoft.com/wiki/contents/articles/3870.troubleshooting-powerpivot-data-refresh.aspx
    Hope this helps.
    Elvis Long
    TechNet Community Support

  • SSRS Parameters using SSAS Tabular model get cleared

    I have an SSRS report that uses data from an SSAS Tabular model. In the query designer, from the calendar dimension I choose a "Date Inclusive" filter and make it a parameter. I also add another filter using an Organisation Unit dimension and make this a parameter as well. The report is written and deployed to a SharePoint 2013 library.
    Most of the time, the report runs as expected, with the parameters cascading off each other. However, occasionally the parameters get cleared (either after changing a single value such as the Org Unit selection, or sometimes while the report is being rendered). Sometimes you cannot select a value from the available values - you need to navigate somewhere else and then start over.
    I changed the data source for the parameters to use SQL queries that return the same values as the MDX queries, and the problem seems to have gone (time will tell).
    This report has a child (detail) report that has one extra parameter. This parameter happens to have over 1,000 values. With the change to the parent report, you are now able to get to the child report. However, the child report seems to exhibit the same problem with the parameters being cleared - and with a much higher frequency.
    So, that leaves me wondering:
    has anyone else experienced this?
    is this an issue with SSRS 2012 and SSAS Tabular models? (I have not seen this behaviour before, and I have been using SSRS since version 1 and SSAS Multidimensional from when it was called "OLAP Services".)

    We applied SQL Server 2012 Service Pack 2 to the SharePoint farm (the SP Admin needed to re-create the service applications) and the problem is fixed.

  • SSAS Tabular in DirectQuery - What are the workarounds for formula limitations?

    Hello,
    I need to create an SSAS Tabular model against the database of a live, real-time, line of business transactional system (i.e. a CRM).
    The business requirement behind it is that we need to create some complex reports against live data, and our DW is only updated daily.
    This live model will however be partitioned with a time-variance limitation (e.g. only records which are XX old can be returned).
    Now here is the challenge. Since I am querying live data, I believe the model must be configured in DirectQuery mode. Am I right?
    The issue is that DirectQuery mode is full of formula limitations. So my concern is: if I need a calculated column or a measure that I cannot make work due to DirectQuery limitations, what are the alternatives?
    Remember that the data source is from a live system, so it is not like I can create columns and measures in the underlying relational database.
    Please advise.
    Regards,
    P.

    Hi pmdci,
    According to your description, you want to use functions in a calculated measure which are not supported in DirectQuery mode. Right?
    In Analysis Services Tabular, DirectQuery provides real-time access and scalability, but this comes at the price of restrictions on a number of DAX functions and the missing calculated column feature. Generally, the workaround for these scenarios is to replace those functions with other functions that are supported in DirectQuery mode, or to create columns in the data source. However, as you said, in your environment it is not possible to create columns in the database. And many of those restricted functions are not replaceable, like the time intelligence functions. So actually, there is no really effective workaround currently.
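    Where a restricted function can be expressed with plain filter logic inside a measure, a rewrite is sometimes possible. A rough sketch of a year-to-date measure that avoids the time intelligence functions (table and column names here are hypothetical, and whether a given rewrite validates in DirectQuery mode depends on the version and functions involved):
    Sales YTD DQ :=
    CALCULATE (
        SUM ( Sales[Amount] ),
        FILTER (
            ALL ( 'Date' ),
            'Date'[Year] = MAX ( 'Date'[Year] )
                && 'Date'[Date] <= MAX ( 'Date'[Date] )
        )
    )
    -- MAX picks the boundary of the current selection; FILTER keeps all dates
    -- of that year up to the boundary, standing in for DATESYTD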
    For your requirement, I suggest you submit a feature request to Microsoft
    at https://connect.microsoft.com/SQLServer
    so that the product team can try to modify and expand the product features based on your needs.
    Best Regards,  
    Simon Hou
    TechNet Community Support

  • VS2010 SSAS Tabular Model Memory Limitation

    Hi,
    My laptop recently crashed and I am currently using a loan one that has Windows 7 x32, Visual Studio 2010 SP1 and SQL Server 2012 SP1 (CU4) installed. It only has 4GB of RAM, and when I attempt to load a very large fact table in an SSAS Tabular Model project using Visual Studio I receive the following error:
    Memory error: Allocation failure: Not enough storage is available to process this command. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.
    The current operation was cancelled because another operation in the transaction failed.
    Visual Studio 2010 is only available in 32-bit, and I have already changed VertipaqPagingPolicy from the default 0 to 2, but the issue persists. Has anyone experienced this before?

    You have a few options in terms of removing the WHERE clauses:
    1) Build your project in SSDT and then use the Analysis Services Deployment Wizard to create an XMLA deployment script, then edit the script to remove the WHERE clauses before executing it on your server.
    2) If this is a first time deployment change your project deployment options to "Do Not Process" - deploy your project and then edit the partitions for the tables with WHERE clauses to remove them.
    3) Another option is to use views in your relational database. Then you can put the WHERE clauses in the views on Dev, but exclude them in the Prod database.
    http://darren.gosbell.com - please mark correct answers

  • SSAS Tabular DAX - Need to get the MAX value of the MIN (top) hierarchy level row

    EDIT:
    I got closer to resolving the issue using MAX.
    However, if I remove the department hierarchy and just place the MAX measure on its own, I get the single largest value out of all departments.
    It would be ideal if the measure could still SUM the "top level" values across everything in the system when the hierarchy is not placed on the rows grouping.
    So it returns the largest value for a given department, but if the department hierarchy isn't present I need it to return a SUM of all the level 1 values for all departments...
    Basically, return the MAX value from the row with the MIN (top) DeptLevel, but I can't seem to construct that DAX query. So if the DepartmentID hierarchy is on display it gets the MAX per row, but if that is removed it dips into the MAX GoalValue for each L1ID grouping with the MIN DeptLevel.
    /EDIT
    I have a rather odd data table I'm bringing into a SSAS Tabular model.
    Instead of having all data at each child level and then it adding up to a grand total in the parent, it has a grand total predefined at each child level.
    I just need this tool to display the raw data if at all possible instead of trying to aggregate everything. Filter on active level, ignore child levels.
    Is there a way to do that?
    Example:
    SalesGoalsByDepartmentLevel:
    Level1 (top level) = 5,000
    Level2 (lower level) = 0
    Level3 (lower still) = 500
    Level4 (lowest) = 4,250
    So note that adding up all the child levels is still $250 shy of the top 5,000.
    It is just an odd business rule: basically, each level is expected to meet or exceed its goal. The top level goal is 5,000, but management doesn't care where that last 250 comes from; they do care that each defined level is met.
    These levels are in a hierarchy, so if I view the top level of the hierarchy it adds up to 4,250+500+5,000 = 9,750 when I just want to see 5,000 at the top level and the details when they drill down.
    I added a filter to just filter to the top level, but then when I drill down of course those lower levels are blank.
    Is there a way to filter on the current displayed level without aggregating all child levels?
    Thanks!

    You might want to take a look at the Parent-Child Hierarchies pattern here:
    http://www.daxpatterns.com/parent-child-hierarchies/
    You might write DAX code to check what the "current" level is (see the BrowseDepth measure in the sample file you can download) and, depending on its level, set the filter to blank for all the levels below, so you don't aggregate the "children".
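    A minimal sketch of that idea, assuming the department hierarchy is flattened into four hypothetical columns Dept[Level1]..Dept[Level4] (the article above covers the general case):
    BrowseDepth :=
    ISFILTERED ( Dept[Level1] )
        + ISFILTERED ( Dept[Level2] )
        + ISFILTERED ( Dept[Level3] )
        + ISFILTERED ( Dept[Level4] )
    -- a goal measure can then compare [BrowseDepth] with the level at which
    -- the goal is stored and return BLANK () for anything deeper, so child
    -- levels are not aggregated into the parent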
    Just an idea - I'm not sure it corresponds to your requirement, and I don't have time to run more tests.
    I hope it will be helpful.
    Marco Russo (Blog, Twitter, LinkedIn) - sqlbi.com: Articles, Videos, Tools, Consultancy, Training
    Format with DAX Formatter and design with DAX Patterns. Learn Power Pivot and SSAS Tabular.

  • Reading column data types in an SSAS tabular database using ADOMD

    I am working on an SSAS tabular database and I need to find out the column data types. I am using ADOMD for this. So far I have looked at the AdomdDataReader.GetSchemaTable
    method and at the code snippet below to retrieve metadata from the AdomdDataReader.
    foreach (DataRow objRow in schemaTable.Rows)
    {
        foreach (DataColumn objColumn in schemaTable.Columns)
        {
            Console.WriteLine(objColumn.ColumnName + " = " + objRow[objColumn]);
        }
        Console.WriteLine();
    }
    I wonder with only the list of columns available to me, how can I execute a query/command and utilize adomd reader to get this information?
    What is the best approach to read column metadata?

    Hi, one option that you can try is to use AMO. Here is sample code that connects to a multidimensional model:
    http://sqlblogcasts.com/blogs/drjohn/archive/2012/03/15/ssas-utility-to-check-you-have-the-correct-data-types-and-sizes-in-your-cube-definition.aspx
    You will have to change a few things to make it work for a tabular model. Sorry, I did not try it myself, but I hope this will give you some ideas.
    // get DSV - this code assumes only one DSV in cube definition
    AMO.DataSourceView dsv = olapDatabase.DataSourceViews[0];
    foreach (AMO.Dimension dim in olapDatabase.Dimensions)
        foreach (AMO.DimensionAttribute attr in dim.Attributes)
            foreach (AMO.DataItem dItem in attr.KeyColumns)
                AddAttributeDataItem(dtCubeDataTypes, "Key", dsv, attr, dItem);
    AMO.DataItem contains column size and data type when connected to a tabular database.

  • SSAS Tabular: Show balance on latest dimension attribute

    Hi,
    I have a fact table with transactions over time, e.g.:
    20140101, 1000
    20140105,-400
    In SSAS Tabular, I want to add a balance (saldo) measure that shows the balance on any given date from my date dimension:
    Balance at 20140106: 600
    I can do this by using SUMX (or SUMMARIZE):
    Saldo :=
    SUMX (
        VALUES ( 'Date'[Date] ),
        CALCULATE (
            SUM ( Fact[Amount] ),
            DATESBETWEEN ( 'Date'[Date], BLANK (), LASTDATE ( 'Date'[Date] ) ),
            ALL ( 'Date' )
        )
    )
    The issue arises when I want to show the balance for an attribute from a dimension related to the latest fact entry. I can calculate this on dates that have transactions, like this:
    Saldo_MaxFact :=
    MAXX (
        VALUES ( 'Fact'[FactId] ),
        CALCULATE (
            SUM ( Fact[Amount] ),
            DATESBETWEEN ( 'Date'[Date], BLANK (), LASTDATE ( 'Date'[Date] ) ),
            ALL ( 'Date' ),
            ALL ( 'Fact'[FactId] ),
            ALL ( 'Dimension' )
        )
    )
    But on dates with no transactions, this measure is empty (which makes sense, since there is no FactId to roll-up the sum to).
    How would I go about creating a measure that rolls up to any given date AND the attributes on the latest fact entry?
    I have created a sample snapshot: http://1drv.ms/1ly4o6a
    Sample Excel Power Pivot model: http://1drv.ms/1jy2nkX
    Any help would be much appreciated!

    Hi Greg,
    Finally I found out why the query goes out of memory in Tabular mode. I guess this information will be helpful for others, so I am posting my findings.
    Some of the non-key attribute columns in the tabular model tables (mainly the tables which form dimensions) do not have pretty names. So I renamed the non-key attribute columns which needed pretty names.
    For example, in my date dimension there is a non-key attribute named "DateAltKey". This is the date column which I am using. As this is not pretty for the client tools, I renamed this column to "Date" inside the designer (dimension design screen). I deployed the cube, processed the cube: no problem.
    Now here comes the fun part. For every table, inside the Tables node (Tabular SSAS Database > Tables) you can view the partition details. You have a single partition per dimension table if you do not create extra partitions. I opened the partitions screen, clicked on the "Edit" icon and performed a syntax check. Surprisingly, it failed, complaining that the renamed column "Date" cannot be found in the source. So I realized that I cannot simply rename columns like that.
    After that, I created calculated columns (with pretty names) for all the columns it complained about, and the source columns behind the calculated columns were hidden from the client tools. I deployed the cube, processed the cube and performed a syntax check. No errors; everything was perfect.
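    For illustration, such an alias column is trivial DAX; a sketch of the renamed date column described above:
    -- calculated column added in the designer and named "Date"; the physical
    -- DateAltKey column is then hidden from client tools
    = 'Date'[DateAltKey]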
    I ran the query which gave me trouble and guess what... it executed within 5 seconds. My problem is solved. I really do not know why this improved the performance, but the trick worked for me.
    Thanks a lot for your support.
    Chandima

  • Need help ASAP - SSAS Tabular error has me stumped and others' solutions do not apply in this case.

    Hello everyone,
    Here's hoping no one will consider this to be the wrong forum. The data source is an Access database, but the issue is an SSAS Tabular issue. Kind of desperate here... the customer needs this ASAP, and I've been trying to resolve this for a week
    already! Any help would be much appreciated!
    A customer of mine (another employee of the ComIT department) is receiving an error trying to open an Access database via SQL Data Tools.
    The error is "OLE DB or ODBC error: The Microsoft Access database engine cannot open or write to the file '\\[our_domain]\[the_path_to_the_file]\[database_name].accdb'. It is already opened exclusively by another user, or you need permission
    to view and write its data.; 3051."
    What perplexes me is that I do not receive this error, my coworkers on the DBA team do not receive this error, and our boss does not receive this error--even on the customer's machine. But the customer gets this error every single time.
    The only difference is the credentials provided on the "Impersonation Information" window.
    Here is our process:
    We open SQL Data Tools, we create a new Analysis Services Tabular Project, we select the workspace server (our Analysis Services server... db5079\tabular), test the connection ("Test connection succeeded") and click OK.
    Then we click "Import From Data Source," select "Microsoft Access," enter the full network path to the database and test connection ("Test connection succeeded")
    The next window says "Impersonation Information - Specify the credentials used by the Analysis Services server to connect to the data source when importing and processing data"
    The credentials provided at this point are the only difference between me/my team/our boss and the customer. If we use our own credentials at this point--even on the customer's machine--we receive no error and everything is fine.
    If we use the customer's credentials, we get the error above at the end of the next step (i.e., after we choose the data to import and then click Finish).
    So that's it. On the same machine, the customer's credentials produce this error, and our credentials do not.
    I have already added the customer as a server administrator to the Analysis Services server (db5079\tabular).
    Copying the data source to another folder on the network or to his local machine produces the same results: his credentials produce the error, my credentials/my boss's credentials/etc. do not.
    All of our machines are 64 bit, and the Analysis Services server is 2012 64-bit.
    Please help!

    UPDATE: As it turns out, all those who were able to import the data were local administrators on the Analysis Server (i.e., on the OS), and all those who were unable to import data were not.
    When we added someone who couldn't import the data to the local Administrators group on the Analysis Server, they were able to import the data.
    However, we can't give them local admin on the Analysis Server, and we are unable to determine what combination of user rights and permissions on folders we can grant the user as individual that will allow them to import the data.
    Just as a test, we tried giving the user the same user rights that the admin group has, and the same permissions on all the drives as the admin group has, but that didn't work.
    If it had worked, we could have started reducing the rights until we found the minimum necessary, but it didn't and once again we're stuck.
    Please help!

  • Role-Based Security with SSAS Tabular and PPS not working

    Hi,
    I have SSAS (Tabular Model) with role-based security. It is working fine with Power View and PowerPivot.
    But when I use the same with PPS, it gives me an error like 'Data source not accessible'.
    If I don't provide a role while connecting and instead select the unattended account, it works, but with no security.
    Please help me out in this situation, or provide any steps (with screenshots, if possible) on how to make PPS work with an SSAS Tabular model with a role.
    Thanks in advance
    Pinak kakadiya

    Hi Vishal,
    According to your description, you are trying to use time intelligence functions in a SQL Server Analysis Services Tabular model without success, right?
    In order to use time intelligence functions in DAX formulas, you must specify a date table and a unique identifier (datetime) column of the Date data type. Once a column in the date table is specified as the unique identifier, you can create relationships between columns in the date table and any fact tables. Please refer to the links below for the detailed steps to use time intelligence functions in DAX formulas.
    https://msdn.microsoft.com/en-us/library/hh758415.aspx?f=255&MSPPError=-2147217396
    http://blog.gbrueckl.at/2013/02/fiscal-periods-tabular-models-and-time-intelligence/
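    For illustration, once the date table is marked and related to the fact table, the time intelligence functions work as expected. A minimal sketch with hypothetical Sales and 'Date' table names:
    Sales YTD :=
    TOTALYTD ( SUM ( Sales[Amount] ), 'Date'[Date] )
    -- the same pattern extends to the other functions, e.g. wrapping a measure
    -- in CALCULATE ( [Sales YTD], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )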
    If the issue persists, please provide us with more information about your tabular model structure so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Updating DAX in an SSAS Tabular model

    Hi All,
    I have an SSAS tabular model which has around 42 dimension tables and 10 fact tables, each containing some thousands of records. When I write a DAX measure on a fact table, it takes 10-15 minutes for each DAX change to update. I have to write more than 300 measures using DAX. Can anyone please suggest how I can speed up the update process when writing DAX?
    Thanks in advance.
    Sanjay

    Hi Sanjay,
    According to your description, there are around 42 dimension tables and 10 fact tables, each containing some thousands of records, in your tabular model, and it takes 10-15 minutes to push each calculation change to the server, right?
    By default, every modification to a tabular model is updated to the workspace server as you make it, and if the tables contain a lot of data, each update can take a long time. In your scenario, we recommend you use a Multidimensional database instead of a Tabular model when you have a large amount of data with complex requirements.
    A SQL Server Analysis Services (SSAS) Multidimensional model is used when there is a large amount of data with complex requirements: to improve query performance, one cube does the heavy, time-consuming processing and is then synchronized to the query cubes. An SSAS Tabular model, by contrast, is used when the data model is relatively simple. So for a large data model, we recommend a Multidimensional database, processing the data on one server and then synchronizing it to the query cubes.
    Reference
    http://blog.aditi.com/data/choosing-between-analysis-services-multidimensional-and-tabular-models-part-3/
    Regards,
    Charlie Liao
    TechNet Community Support

  • Clustering SSAS Tabular in real case studies

    Hi,
    I need to find more info about clustering SSAS Tabular in real scenarios.
    Any issues? Any particularities? Any warnings?
    Tabular runs in memory, not from storage.
    Many thanks

    The best practice is to put more nodes behind load balancing. I think you can use clustering, but when the service starts on the passive node in case of failover, you have to consider the time required to load the data into memory from storage. I haven't seen a cluster configuration for Tabular in production so far, only load-balanced configurations.
    Marco Russo (Blog, Twitter, LinkedIn) - sqlbi.com: Articles, Videos, Tools, Consultancy, Training
    Format with DAX Formatter and design with DAX Patterns. Learn Power Pivot and SSAS Tabular.

  • SSAS Tabular model Performance Issue

    Hello,
    We have strange behavior with an SSAS tabular model. The model size is approx. 40GB in memory. Our production server has 200GB of memory. Most users access the cube via Excel 2013 (64-bit). We have noticed that performance starts degrading the day after the Analysis Services service has been restarted or the server rebooted. On the day either one is performed we get good response times, but the next day response times become 2 to 3 times longer than on the first day. Is this something anyone else has experienced? We are now in a situation where we are required to restart the service almost every day.
    Any help in this matter would be greatly appreciated.
    Thanks
    Deepak

    Why are you sure role-based security is the culprit? Immediately after cube processing, or immediately after a ClearCache, does the report perform fast for a user without any role-based security filters? I would recommend reviewing the methodology on page 35 of the tabular performance guide to see whether it is a storage engine or formula engine bottleneck, and report back.
    http://aka.ms/ASTabPerf2012
    http://artisconsulting.com/Blogs/GregGalloway

  • Slow Query Performance During Process Of SSAS Tabular

    As part of my SSAS Tabular process script task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries to my Tabular model become very slow.
    Is there a way to prevent the impact of Process Add on user queries? Users need near real-time queries.
    I am using SQL Server 2012 SP2.
    Thanks

    Hi AL.M,
    According to your description, when you query the tabular model during a Process Add, the performance is slow. Right?
    In Analysis Services, there is no supported way for an MDX/DAX query to ignore an in-progress Process Add; queries always see the tabular database being updated. In this scenario, if you really need good query performance, I suggest you create two tabular databases: one for end users to query, and the other used for updates (full process). After the process is done, direct the users' queries to the updated database.
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Evaluation of hardware features of a machine hosting an SSAS Tabular model

    Hi,
    on a machine I've deployed an SSAS Tabular model that has two fact tables, each with 200-300 million rows, and about 10-15 dimension tables. I've implemented some Excel workbooks, each of which queries one measure of one of the two fact tables.
    Each workbook has 10-15 filters/slicers. I've tried several times to reduce the number of filters/slicers or of row expansions, and I've recorded a query response time of 5 to 15 seconds for a change of selection on a filter/slicer. 15 seconds is too long for the user requirements.
    The formula for the two measures has some SUMs and DIVIDEs. In order to improve the query performance I've tried writing the formula in different ways, and I've created partial calculations in the underlying DWH, but with negligible performance improvements. Cutting filters, slicers or row expansions is against the user requirements.
    I've also captured some Profiler traces, but with few indications.
    I suspect that a possible issue derives from the hardware features of the hosting machine. During use of the pivot table, memory usage is about 50-60% and CPU usage is 80-100%.
    With the CPU-Z program I've captured the hardware features (screenshot not included).
    I hope for suggestions derived from experience rather than from papers, e.g. "Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services", in order to improve the query performance from Excel connected to my tabular model.
    Probably the CPU features are not appropriate.
    Many thanks

    Hi Pscorca,
    Based on your description, you want to know whether it is best practice to improve query performance at the hardware level by increasing RAM or CPU. Here are some documents about it; please refer to the links below, which might be helpful for you.
    Forcing NUMA Node affinity for Analysis Services Tabular databases
    SSAS Tabular – NUMA and CPU Cores Performance
    Regards,
    Charlie Liao
    TechNet Community Support
