SSAS Tabular: MDX slow when reporting high-cardinality columns.

Even with small fact tables (~20 million rows), MDX is extremely slow when there are high-cardinality columns in the body of the report.
For example, the following DAX query is sub-second.
EVALUATE
SUMMARIZE (
    CALCULATETABLE (
        'Posted Entry',
        'Cost Centre'[COST_CENTRE_ID] = "981224",
        'Vendor'[VENDOR_NU] = "100001",
        'Posted Entry'[DR_CR] = "S"
    ),
    'Posted Entry'[DOCUMENT_ID],
    'Posted Entry'[DOCUMENT_LINE_DS],
    'Posted Entry'[TAX_CODE_ID],
    "Posted Amount", [GL Amount],
    "Document Count", [Document Count],
    "Record Count", [Row Count],
    "Document Line Count", [Document Line Count],
    "Vendor Count", [Vendor Count]
)
ORDER BY
    'Posted Entry'[GL Amount] DESC
The MDX equivalent takes 1 minute 13 seconds.
SELECT
    { [Measures].[Document Count], [Measures].[Document Line Count], [Measures].[GL Amount], [Measures].[Row Count], [Measures].[Vendor Count] } ON COLUMNS,
    NON EMPTY [Posted Entry].[DOCUMENT_ID_LINE].[DOCUMENT_ID_LINE].ALLMEMBERS
        * [Posted Entry].[DOCUMENT_LINE_DS].[DOCUMENT_LINE_DS].ALLMEMBERS
        * [Posted Entry].[TAX_CODE_ID].[TAX_CODE_ID].ALLMEMBERS ON ROWS
FROM [Scrambled Posted Entry]
WHERE ( [Cost Centre].[COST_CENTRE_ID].&[981224], [Vendor].[VENDOR_NU].&[100001], { [Posted Entry].[DR_CR].&[S] } )
I've tried this under 2012 SP1 and it is still a problem. The slow MDX happens when there is a high-cardinality column on the rows and the selection is done on joined tables. DAX performs well; MDX doesn't. Client-generated MDX or bigger fact tables make the situation worse.
Is there a go-fast switch for MDX in Tabular models?

Hi,
There are only 50 rows returned, and the MDX is still slow even when you only return a couple of rows.
It comes down to DAX producing much more efficient queries against the engine.
For the DAX query, after a number of reference queries in the trace, the main VertiPaq SE query is:
SELECT
[Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID],
SUM([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[POSTING_ENTRY_AMT])
FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
WHERE
 ([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]) IN {('0273185857', 'COUOXKCZKKU:CKZTCO CCU YCOT
XY UUKUO ZTC', 'P0'), ('0272325356', 'ZXOBWUB ZOOOUBL CCBW ZTOKKUB:YKB 9T KOD', 'P0'), ('0271408149', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT BUY', 'P0'), ('0273174968', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0273785256', 'ZOUYOWU ZOCO CLU:Y/WTC-KC
YOBT 3ZT JXO', 'P0'), ('0273967993', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KCB', 'P0'), ('0272435413', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0'), ('0273785417', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0'), ('0272791529', 'ZOUYOWU ZOCO CLU:Y/WTC-KC
YOBT 7.3ZT JXO', 'P0'), ('0270592030', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 89.3Z JXO', 'P0')...[49 total tuples, not all displayed]};
showing a CPU time of 312 ms and a duration of 156 ms. It looks like it has constructed an IN clause listing every row it is retrieving.
The total for the DAX query from the profiler is 889 ms CPU time and a duration of 1,669 ms.
For the MDX query, after a number of reference queries in the trace, the expensive VertiPaq SE query is:
SELECT
[Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]
FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
WHERE
[Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DR_CR] = 'S';
showing a CPU time of 49,213 ms and a duration of 25,818 ms.
It looks like it is filtering only by the debit/credit indicator, which will match roughly half the fact table.
After that it fires some tuple-based queries (similar to the DAX one, but with joins):
SELECT
[Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]
FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
LEFT OUTER JOIN [Vendor_6b7b13d5-69b8-48dd-b7dc-14bcacb6b641] ON [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[VENDOR_NU]=[Vendor_6b7b13d5-69b8-48dd-b7dc-14bcacb6b641].[VENDOR_NU]
LEFT OUTER JOIN [Cost Centre_f181022d-ef5c-474a-9871-51a30095a864] ON [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[COST_CENTRE_ID]=[Cost Centre_f181022d-ef5c-474a-9871-51a30095a864].[COST_CENTRE_ID]
WHERE
[Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DR_CR] = 'S' VAND
([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]) IN {('0271068437/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT ZTC', 'P0'), ('0272510444/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0272606954/1', null, 'P0'), ('0273967993/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KCB', 'P0'), ('0272325356/1', 'ZXOBWUB ZOOOUBL CCBW ZTOKKUB:YKB 9T KOD', 'P0'), ('0272325518/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KUW', 'P0'), ('0273231318/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT ZWB', 'P0'), ('0273967504/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0274055644/1', 'YBUCC OBUC YTT OYX:OD 5.3F81.3ZT TOZUT', 'P5'), ('0272435413/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0')...[49 total tuples, not all displayed]};
This query takes 671 ms CPU and a duration of 234 ms; more expensive than the most expensive part of the DAX query, but still insignificant compared to the expensive part of the MDX.
The total for the MDX query from the profiler is 47,206 ms CPU time and a duration of 73,024 ms.
To me the problem looks like the MDX plan fires a very expensive query against the fact table, filtering by only one fact-table column, and then goes about refining the set later on.
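For what it's worth, one rewrite sometimes worth testing (a sketch only; I can't promise it changes the storage-engine plan in Tabular) is to move the slicers from the WHERE clause into a subselect, so the Cost Centre, Vendor and DR_CR filters define the subcube before the high-cardinality crossjoin is evaluated:

```
SELECT
    { [Measures].[GL Amount], [Measures].[Row Count] } ON COLUMNS,
    NON EMPTY
        [Posted Entry].[DOCUMENT_ID_LINE].[DOCUMENT_ID_LINE].ALLMEMBERS
        * [Posted Entry].[DOCUMENT_LINE_DS].[DOCUMENT_LINE_DS].ALLMEMBERS
        * [Posted Entry].[TAX_CODE_ID].[TAX_CODE_ID].ALLMEMBERS ON ROWS
FROM (
    SELECT (
        [Cost Centre].[COST_CENTRE_ID].&[981224],
        [Vendor].[VENDOR_NU].&[100001],
        [Posted Entry].[DR_CR].&[S]
    ) ON COLUMNS
    FROM [Scrambled Posted Entry]
)
```

Comparing the VertiPaq SE events for this variant against the WHERE-clause version in Profiler would show whether the engine still scans on DR_CR alone.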

Similar Messages

  • Use SSAS Tabular KPI in SSRS report with groups

    I have an SSAS tabular model that I have built KPIs in. I have made several attempts to use a KPI in SSRS with multiple groups, but as you probably know, when you aggregate the KPI it does not calculate correctly. How do you use SSAS KPIs with SSRS groups?
    Thanks in advance.

    Hi Michael,
    According to your description, you want to use the KPI created in SSAS Tabular across multiple SSRS groups, right?
    In Analysis Services, when we create a KPI, it works on each detail record; the KPI does not consider any aggregated value. So when you aggregate the KPI in SSRS, it can't return the correct result. However, in Reporting Services you can use a Variable to define a "KPI", and apply a condition with an aggregation function (like Sum or Avg) based on this "KPI" in an expression, to return the correct result. You can also use the Indicator feature in SSRS to achieve this requirement. Please refer to the links below:
    Report and Group Variables Collections References (Report Builder and SSRS)
    Indicators (Report Builder and SSRS)
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • SSAS Tabular : MDX query goes OutOfMemory for a larger dataset

    Hello all,
    I am using SSAS 2012 Tabular to build a cube to support the organizational reporting requirements. Right now the server is Windows 2008 x64 with 16 GB of RAM installed. I have the following MDX query. What this query does is get the member caption of the "OrderGroupNumber" non-key attribute as a measure, where order group numbers pertain to a specific day and occur in specific seconds of that day. As I want to find in which second I have order group numbers, I cross the time dimension's members with a specific day and filter the tuples using the transaction count. The transaction count is non-zero if an Order Group Number occurs within a specific second of the selected day.
    At present "[TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber]" has 170+ million members (potentially this could grow rapidly) and the time dimension has 86,400 members.
    WITH
    MEMBER [Measures].[OrderGroupNumber]
    AS IIF([Measures].[Transaction Count] > 0, [TransactionsInflight].[OrderGroupNumber].CURRENTMEMBER.MEMBER_CAPTION,
    NULL)
    SELECT
    NON EMPTY{[TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber].MEMBERS}
    ON COLUMNS,
    {FILTER(([Date].[Calendar Hierarchy].[Date].&[2012-07-05T00:00:00], [Time].[Time].[Time].MEMBERS),
    [Measures].[Transaction Count] > 0) } ON
    ROWS
    FROM [OrgDataCube]
    WHERE [Measures].[OrderGroupNumber]
    After I run this query it reaches a dead end and freezes the server (sometimes the SSAS server throws an OutOfMemory exception, but sometimes it does not). Even though I have 16 GB of memory, it consumes all of it while doing nothing. I have to do a hard reset against the server to get it back online. Even if I limit the time members using the ":" range operator, the machine still freezes. I have run out of ideas for fine-tuning the design. Could you provide me some guidelines to optimize this query? I am willing to do a design change if necessary.
    Thanks and best regards,
    Chandima
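    One standard rewrite for this pattern (offered as a sketch; whether it avoids the memory blow-up depends on the model) is to replace FILTER over the full day-by-time crossjoin with NONEMPTY, which lets the engine prune empty tuples instead of materialising all 86,400 tuples and evaluating the measure for each. Note the semantics differ slightly: NONEMPTY removes tuples where the measure is empty, not where it is <= 0, so the two are equivalent only if Transaction Count is never zero or negative while non-empty.

    ```
    WITH
    MEMBER [Measures].[OrderGroupNumber] AS
        IIF([Measures].[Transaction Count] > 0,
            [TransactionsInflight].[OrderGroupNumber].CURRENTMEMBER.MEMBER_CAPTION,
            NULL)
    SELECT
        NON EMPTY { [TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber].MEMBERS } ON COLUMNS,
        NONEMPTY(
            ( [Date].[Calendar Hierarchy].[Date].&[2012-07-05T00:00:00],
              [Time].[Time].[Time].MEMBERS ),
            { [Measures].[Transaction Count] }
        ) ON ROWS
    FROM [OrgDataCube]
    WHERE [Measures].[OrderGroupNumber]
    ```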

    Hi Greg,
    Finally I found out why the query goes out of memory in tabular mode. I guess this information will be helpful for others, so I am posting my findings.
    Some of the non-key attribute columns in the tabular model tables (mainly the tables which form dimensions) do not contain pretty names, so I renamed the columns that needed them.
    For example, in my date dimension there is a non-key attribute named "DateAltKey". This is the date column which I am using. As the name is not pretty for the client tools, I renamed this column to "Date" inside the designer (dimension design screen). I deployed the cube, processed the cube, and there was no problem.
    Now here comes the fun part. For every table, inside the Tables node (Tabular SSAS Database > Tables) you can view the partition details. You have a single partition per dimension table if you do not create extra partitions. I opened the partitions screen, clicked the "Edit" icon, and performed a Syntax Check. Surprisingly it failed: it complained that the renamed column "Date" could not be found in the source. So I realized that I cannot simply rename columns like that.
    After that I created calculated columns (with pretty names) for all the columns it complained about, and hid all the underlying source columns from the client tools. I deployed the cube, processed the cube, and performed a syntax check. No errors; everything was perfect.
    I ran the query which gave me trouble and guess what... it executed within 5 seconds. My problem is solved. I really do not know why this improved the performance, but the trick worked for me.
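    The rename workaround above amounts to something like the following (a sketch using the column name from the example; the calculated column is just a straight copy of the source column, defined in the tabular designer):

    ```
    -- Calculated column named "Date" on the date table,
    -- added in the tabular designer instead of renaming DateAltKey:
    = [DateAltKey]
    -- ...then hide DateAltKey from the client tools.
    ```

    Because the calculated column is a model-level object, the partition's source query keeps referencing the original column name, so the Syntax Check continues to pass.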
    Thanks a lot for your support.
    Chandima

  • Optimize Fact tables in SSAS Tabular Model

    Hi,
    I have five fact tables in an SSAS Tabular model, and each fact table shares the same dimensions. This creates some performance issues, and the data model also looks very complex. Is there a simple way to create a simpler data model using all the fact tables?
    Please suggest an approach for this. For example:
    Fact Tables:
    Fact_Expense
    Fact_Sale
    Fact_Revenue
    Fact_COA
    Fact_COG
    Dimensions:
    Dim_Region
    Dim_Entity
    Dim_Product
    Dim_DateTime
    Dim_Project
    Dim_Employee
    Dim_Customer 

    Hi hussain,
    Please consider merging the fact tables based on granularity. Generally, if you have enough RAM there will be no performance issues; make sure you have double the amount of RAM needed to cater for your processing and operational needs. Try to optimize the model design by removing unused keys and some high-cardinality columns.
    Please go through the document in the link:
    http://msdn.microsoft.com/en-us/library/dn393915.aspx
    Regards,
    Venkata Koppula

  • SSAS Tabular - Why is it wrong to classify it as MOLAP, ROLAP or HOLAP?

    Hi There,
    I am working on an assignment (for a masters degree) in which I am evaluating different OLAP tools, and one of the tools assigned to me is Microsoft's BI stack.
    One of the classifications we are working on is whether a tool is MOLAP, ROLAP or HOLAP. When reading about SSAS Tabular, however, every single thing I read points out that SSAS escapes this traditional classification.
    I am trying to understand why would it be wrong to attempt to classify a Tabular instance of SSAS in such way? Why not say that SSAS Tabular is MOLAP, when using DirectQuery it is ROLAP, and if mixing both it is HOLAP?
    What is so fundamentally wrong about it?
    EDIT: Ah, and how does the Microsoft BI Semantic Model fit into the picture? I assume it has something to do with the Tabular model being outside the traditional MOLAP/ROLAP/HOLAP classification?
    Thanks in advance for your help!
    Regards,
    P.

    Hi Pmdci,
    According to your description, you are looking for the reason why it is wrong to classify SQL Server Analysis Services Tabular as MOLAP, right? As you know, in MOLAP data is stored in a multidimensional cube; the storage is not in a relational database, but in proprietary formats. In ROLAP, data is stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. However, tabular solutions use relational modeling constructs such as tables and relationships for modeling data, and the xVelocity in-memory analytics engine for storing and calculating data. So, as per my understanding, we cannot classify tabular as MOLAP.
    Reference:
    MOLAP, ROLAP, And HOLAP
    Comparing Tabular and Multidimensional Solutions (SSAS)
    Regards,
    Charlie Liao
    If you have any feedback on our support, please click
    here.
    TechNet Community Support

  • If we check only High Cardinality?

    Hello all,
    If we check only the cardinality flag in the dimension assignment, the dimension table will still be available (I hope so).
    Up to how many characteristics can we assign this cardinality?
    Is it mandatory that, when we select the cardinality flag, we also select the Line Item flag?
    Many thanks,
    balaji

    Hi pizzaman,
    Thanks for the info you gave.
    In your statements you said that "when just High Cardinality is selected for a dimension, a b-tree index is created instead of a bitmap index". But if we check only Line Item, which index will be created? (Is it also only a b-tree index?)
    If both Line Item and High Cardinality are checked, which index will be created?
    Many thanks,
    balaji

  • Can I have a browsable report on an SSAS cube that is tabular? [I don't want the different columns to be in relation with each other; no grouping wanted]

    Hi,
    Can anyone here help me with a query I have in SSAS regarding a Designing on Report. 
    Basically what my team desires is a Report, which is already been made from OLPT DB. The nature of report is not of a Cube. 
    They need a Simple tabular report. Which means we want to display columns just like in SELECT Statement in sql. 
    When I drag my dimension on the report (row or column grouping). No data is fetched. 
    Simply can anyone tell me if Simple tabular report is possible in SSAS or not. 

    Hi Senthiaan,
    Yes, you can use "Microsoft.AnalysisServices.AdomdClient.dll" in your .NET application to execute MDX queries and populate DataTables. Take a look at the following code:
    using System;
    using Microsoft.AnalysisServices.AdomdClient;

    namespace ConsoleApplication2
    {
        class Program
        {
            static void Main(string[] args)
            {
                string query = "SELECT {[Measures].[Sales Amount]} ON COLUMNS, NON EMPTY {[Date].[Calendar Year].[Calendar Year].MEMBERS} ON ROWS FROM [Adventure Works]";
                AdomdConnection oConnection = new AdomdConnection("Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=AdventureWorksDW;Data Source=localhost");
                AdomdDataAdapter oAdapter = new AdomdDataAdapter(query, oConnection);
                System.Data.DataTable objDT = new System.Data.DataTable("Results");
                oAdapter.Fill(objDT);
                foreach (System.Data.DataRow r in objDT.Rows)
                {
                    Console.WriteLine(r[0].ToString() + " :- " + r[1].ToString());
                }
                objDT.Dispose();
                oAdapter.Dispose();
                oConnection.Dispose();
                Console.ReadLine();
            }
        }
    }
    ADOMD Client DLL locates in the following location.
    C:\Program Files\Microsoft.NET\ADOMD.NET\110\Microsoft.AnalysisServices.AdomdClient.dll
    Best regards.
    (Mark and Vote if the Answer helps...)

  • Slow Query Performance During Process Of SSAS Tabular

    As part of My SSAS Tabular Process Script Task in a SSIS Package, I read all new rows from the database and insert them to Tabular database using Process Add. The process works fine but for the duration of the Process Add, user queries to my Tabular model
    becomes very slow. 
    Is there a way to prevent the impact of Process Add on user queries? Users need near real time queries.
    I am using SQL Server 2012 SP2.
    Thanks

    Hi AL.M,
    According to your description, when you query the tabular during Process Add, the performance is slow. Right?
    In Analysis Services, it is not supported for an MDX/DAX query to ignore a Process Add on the tabular database; queries always see the updated data. In this scenario, if you really need good query performance, I suggest you create two tabular databases: one for end users to get data from, and the other used for the update (full process). After the Process is done, point the users at the updated database.
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • BO slow when accessing reports through the web but not with a installed client.

    Hi
    We have installed BO 4.0 SP08 in a virtual machine with 32 RAM and 8 CPU processors. We have divided the APS in a number of instance with different services. We have also 8 Web Intelligence Processing servers.
    The application is slow when trying to create or view a report from the web browser (ie or chrome). If we use the installed Web Rich Client on a machine, then it works fine.
    Does anyone knows why?
    thanks
    Teresa

    Any ideas??

  • SSAS Tabular - placing single measure in Excel is fast, multiple from same table is slow?

    With SSAS Tabular using Excel:
    If I place a single measure MyMeasure:=SUM([ColumnNameOnFactTable])
    it happens very quickly.
    I have 3 other dimensions from 3 other dimension tables on Excel with this "MyMeasure" as the value.
    YearMonth in the columns and say Department ID, Account ID, and Call Center (just all made up for this example).
    Now, when I place a second measure from that same table as "MyMeasure", call it SecondMeasure:=SUM([AnotherColumnNameOnFactTable]), the OLAP query in Excel spins, and sometimes even throws an out-of-memory error.
    The server has 24 GB of RAM, and the model is only a few hundred megs.
    I assume something must be off here? 
    Either I've done something foolish with the model or I'm missing something?
    EDIT:
    It SEEMS to work better if I place all my measures on the Excel grid first, then go and add my "dimensions"; adding the measures after the dimensions appears to incur a rather steep penalty?
    Number of rows:
    Largest table (account ID lookup has 180,000)
    Fact table has 7,000
    The others are 1,000 or less...

    Hi,
    Thank you for your question. 
    I am trying to involve someone more familiar with this topic for a further look at this issue. Sometime delay might be expected from the job transferring. Your patience is greatly appreciated. 
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • What user should be used when importing a Power Pivot model into SSAS tabular?

    I created a Power Pivot model using my own userid.
    In SSDT I can import that model but it saves the model in my own My Documents folder. In SSMS, other users can browse the data in that model.
    If I tell SSDT to save the model in the Public user's My Documents, the model is not visible in SSMS for anyone to see.
    If a colleague imports the model, it can be seen in SSMS but there is no data when it is browsed.
    There seems to be a tie-in with the user ID I used when the model was first created. Is it possible to break that link, or should we have a user ID specifically for creating and deploying Power BI models via SSAS Tabular?
    Propdev


  • High Cardinality - only relevant for reporting performance?

    Hello,
    I have read a lot about high cardinality. One thing could not be clarified: is it "only" for reporting purposes? Certainly the tables will be sized differently.
    If I have no reports (we use the BW system integrated in DP) - is the high cardinality flag still relevant?

    It could potentially be used for determining the access path for any SQL accessing the cube. Whether it is open hub, BAPI, ABAP or some retractor: whenever SQL is executed against the cube, this flag may influence how data is read (e.g. which index is used).

  • SSRS Parameters using SSAS Tabular model get cleared

    I have an SSRS report that uses data from an SSAS Tabular model. In the query designer, from the calendar dimension I choose a "Date Inclusive" filter and make it a parameter. I also add another filter using an Organisation Unit dimension and make this a parameter too. The report is written and deployed to a SharePoint 2013 library.
    Most of the time, the report runs as expected with the parameters cascading off each other as expected.  However, occasionally, parameters get cleared (either after changing a single value such as the Org Unit selection or sometime whilst the report
    is being rendered). Sometimes you cannot select a value from the available values - you need to navigate somewhere else and then start over.
    I changed the data source for the parameters to use SQL queries that return the same values as the MDX queries, and the problem seems to have gone (time will tell).
    This report has a child (detail) report that has one extra parameter. This parameter happens to have over 1,000 values. With the change to the parent report, you are now able to get to the child report. However, the child report seems to exhibit the same problem with the parameters being cleared - and with a much higher frequency.
    So, that leaves me wondering:
    has anyone else experienced this?
    is this an issue with SSRS 2012 and SSAS Tabular models? (I have not seen this behaviour before, and I have been using SSRS since version 1 and SSAS Multidimensional from when it was called "OLAP Services".)

    We applied SQL Server 2012 Service Pack 2 to the SharePoint farm (the SP admin needed to re-create the service applications) and the problem is fixed.

  • Excel SSAS Tabular error: An error occurred during an attempt to establish a connection to the external data source

    Hello there,
    I have an Excel report I created which works perfectly fine on my dev environment, but fails on my test environment when I try to do a data refresh.
    The key difference between both dev and test environments is that in dev, everything is installed in one server:
    SharePoint 2013
    SQL 2012: Database Instance, SSAS Instance, SSRS for SharePoint, SSAS POWERPIVOT instance (Powerpivot for SharePoint).
    In my test and production environments, the architecture is different:
    SQL DB Servers in High Availability (irrelevant for this report since it is connecting to the tabular model, just FYI)
    SQL SSAS Tabular server (contains a tabular model that processes data from the SQL DBs).
    2x SharePoint Application Servers (we installed both SSRS and PowerPivot for SharePoint on these servers)
    2x SharePoint FrontEnd Servers (contain the SSRS and PowerPivot add-ins).
    Now in dev, test and production, I can run PowerPivot reports that have been created in SharePoint without any issues. Those reports can access the SSAS Tabular model without any issues, and perform data refresh and OLAP functions (slicing, dicing, etc).
    The problem is with Excel reports (i.e. .xlsx files) uploaded to SharePoint. While I can open them, I am having a hard time performing a data refresh. The error I get is:
    "An error occurred during an attempt to establish a connection to the external data source [...]"
    I ran SQL Profiler on the SSAS server where the Tabular instance is, and I noticed that every time I try to perform a data refresh, I get two entries under the user name ANONYMOUS LOGON.
    Since things work without any issues on my single-server dev environment, I tried running SQL Server Profiler there as well. In the dev environment the query runs without any issues and the user name logged is in fact my username from the dev environment domain. (I also have a separate user for the test domain, and another for the production domain.)
    Now upon some preliminary investigation I believe this has something to do with the data connection settings in Excel and the usage (or no usage) of secure store. This is what I can vouch for so far:
    Library containing reports is configured as trusted in SharePoint Central Admin.
    Library containing data connections is configured as trusted in SharePoint Central Admin.
    The Data Provider referenced in the Excel report (MSOLAP.5) is configured as trusted in SharePoint Central Admin.
    In the Excel report, the Excel Services authentication setting is "use authenticated user's account". This works fine in the DEV environment.
    Concerning Secure Store, the PowerPivot Configurator has configured the PowerPivotUnattendedAccount application ID in all the environments. There is NO configuration of an Application ID for Excel Services in any of the environments (dev, test or production). Although I reckon this is where the solution lies, I am not 100% sure why it fails in test and prod. But as I read what I am writing, I reckon this is because of the authentication "hops" through servers. Am I right in my assumption?
    Could someone please advise what I am doing wrong in this case? If it is the fact that I am missing a Secure Store entry for Excel Services, I wonder if someone could advise me on how to set it up? My confusion is around the "Target Application Type" setting.
    Thank you for your time.
    Regards,
    P.

    Hi Rameshwar,
    PowerPivot workbooks contain embedded data connections. To support workbook interaction through slicers and filters, Excel Services must be configured to allow external data access through embedded connection information. External data access is required
    for retrieving PowerPivot data that is loaded on PowerPivot servers in the farm. Please refer to the steps below to solve this issue:
    In Central Administration, in Application Management, click Manage service applications.
    Click Excel Services Application.
    Click Trusted File Location.
    Click http:// or the location you want to configure.
    In External Data, in Allow External Data, click Trusted data connection libraries and embedded.
    Click OK.
    For more information, please see:
    Create a trusted location for PowerPivot sites in Central Administration:
    http://msdn.microsoft.com/en-us/library/ee637428.aspx
    Another reason is Excel Services returns this error when you query PowerPivot data in an Excel workbook that is published to SharePoint, and the SharePoint environment does not have a PowerPivot for SharePoint server, or the SQL Server Analysis
    Services (PowerPivot) service is stopped. Please check this document:
    http://technet.microsoft.com/en-us/library/ff487858(v=sql.110).aspx
    Finally, here is a good article regarding how to troubleshoot PowerPivot data refresh for your reference. Please see:
    Troubleshooting PowerPivot Data Refresh:
    http://social.technet.microsoft.com/wiki/contents/articles/3870.troubleshooting-powerpivot-data-refresh.aspx
    Hope this helps.
    Elvis Long
    TechNet Community Support

  • What will be physical architecture for SSAS Tabular model with Sharepoint 2013?

    I would like to build a SharePoint 2013-based reporting portal with Power View as the visualization tool.
    I have 300-800 million rows, and the customer's priority is high performance.
    The data sources are a SQL Server database and flat files.
    What would be the recommended physical architecture? Is the following good?
    -SQL Server database server
    -SSAS Tabular application server
    -SharePoint 2013 Front-End (multiple if needed)
    Kenny_I

    Hi Kenny_I,
    According to your description, you want to know whether it's OK to put the SQL Server database server, the SSAS tabular application server and the SharePoint 2013 front end on different servers, right?
    As I said in another thread, even though we can install all of them on the same physical server, in a production environment the recommendation is to install them on different servers; it will be beneficial for troubleshooting when some issue occurs in the environment. So in your scenario, it is OK to install the applications on different servers. Here are some links about the requirements for installing the applications.
    System requirements for Power View
    Hardware
    Sizing a Tabular Solution (SQL Server Analysis Services)
    If I have anything misunderstood, please point it out.
    Regards,
    Charlie Liao
    If you have any feedback on our support, please click
    here.
    TechNet Community Support
