Tabular model: First deployment to server takes 120 min to process, subsequent ProcessFull 15 min?

I have noticed this several times now and I do not understand it.
I have a model with ~45 million rows in the largest table and the first time I deploy to the server and then execute a ProcessFull (via script) it takes over two hours to complete.
*Note: when I deploy from BIDS I have it set as Processing Option: Do Not Process, so it doesn't process until I explicitly call it.
However, the next day (or could be later same day) I kick off the same ProcessFull and it finishes in 15 minutes.
So it appears that the FIRST time it is deployed (as in, the model did not exist historically; prior to deployment there was no tabular database called "MyTestModel" on the server) it takes an extremely long time.
Subsequent ProcessFulls are very quick.
Why is that?  Has anyone else encountered that?
When I watch the progress of the process full script I see it finishes retrieving all the data in a relatively decent amount of time, for example the 45 million row table:
Finished processing the 'BigTableWith45MillionRows' table.
So I know it has completed all its data retrieval operations.
Then it moves onto:
Processing of the 'Model' cube has started.
Processing of the 'ACCOUNT' measure group has started.
and many more various measure groups
later I get:
Finished processing the 'ACCOUNT' measure group.
Finished processing the 'Model' cube.
It moves on to its "CALCULATE;" statements at that point, with "CREATE MEMBER CURRENTCUBE.Measures"... and so forth.
It would be most helpful if I could see which items it had started but not yet finished. The trace prints "Started processing the '<some>' hierarchy" (or calculated column, or whatever), and a few lines later it will say "Finished", but other than reading through them all by hand and matching up every Started with its Finished, trying to find one withOUT a "Finished", I have no way of knowing which are still processing.
It would be helpful to know "item X takes 2 hours to finish processing".
It tends to take the longest amount of time in the hierarchy and calculated column processing phase.

The default events in Profiler are fine. You will likely focus on Progress Report End. How are you running ProcessFull? An XMLA script, or by right-clicking on the database, or by right-clicking on a table and selecting all tables?
http://artisconsulting.com/Blogs/GregGalloway
Right-click on the database, go to Process, select Process Full, and then Script (single database, not each table).
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessFull</Type>
  <Object>
    <DatabaseID>MyDatabaseName</DatabaseID>
  </Object>
</Process>
I finished a full process yesterday and captured the info.
The biggest by CPUTime (I noticed duration would be long but with no CPU time; it seemed like it would flag things as having started, but due to dependencies they just sat and waited?)
was my larger hierarchy: Progress Report End, with a CPU time of 11,925,840 ms, or 3.3 hours. Duration was 11,927,999 ms.
After that was my 45 million row table at a CPU time of 715,296 ms and a duration of 860,773 ms, or about 14 minutes.
It is interesting because a normal ProcessFull is ~15 minutes, so it seems that the hierarchy rebuild is what is "killing me" on these.
A variety of Object Created events had high durations but NULL CPU time; it seems like those were dependent on earlier events, maybe?
Regardless, my big hierarchy was the longest at 3.3 hours.
It has 173,000 unique rows in the hierarchy (again, like Account primary/secondary, though 6 or so levels deep: 1.2.3.4.5.6, etc.)
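In case it helps anyone else: if the Profiler trace is saved to a SQL Server table, the Started/Finished matching can be done with a query instead of by hand. This is only a sketch; the table name dbo.SSASTrace and its column names are assumptions based on a default "save to table" trace, and the Progress Report Begin/End event class IDs in SSAS traces are normally 5 and 6, but verify them against your own trace definition.

-- Heaviest completed operations: Progress Report End (EventClass 6) by duration.
-- dbo.SSASTrace and its column names are hypothetical; adjust to your trace.
SELECT TOP (20) ObjectPath, TextData, Duration, CPUTime
FROM dbo.SSASTrace
WHERE EventClass = 6
ORDER BY Duration DESC;

-- Begin events (EventClass 5) with no matching End yet: likely still running
-- or sitting and waiting on a dependency.
SELECT b.ObjectPath, b.TextData, b.StartTime
FROM dbo.SSASTrace AS b
WHERE b.EventClass = 5
  AND NOT EXISTS (SELECT 1
                  FROM dbo.SSASTrace AS e
                  WHERE e.EventClass = 6
                    AND e.ObjectPath = b.ObjectPath
                    AND e.StartTime >= b.StartTime);

The first query would have answered "item X takes 2 hours to finish processing" directly; the second is the "Started without a Finished" list.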

Similar Messages

  • BC server takes 45 min to come up

    I have a BC server deployed to 8i db. The GUI uses InfoBus and data aware controls and presents 8 tabs for displaying grids of various data types. There are approximately 5000 rows total of data associated with all of these grids. It takes about 45 minutes for the GUI to come up with all of the data. Why should it be taking so long? Is this too much data? The CPU meter on the database server (Sun box; 1 CPU; 500MB RAM) is pegged at 100% utilization for almost the entire 45 minutes until the GUI appears on the client.
    How can I improve this performance while still using InfoBus and DAC controls? Docs say entity objects are created representing each row of data - are 5000 entity objects being created when the BC server starts up?
    Any help greatly appreciated...

    Hi Tom!
    From our previous discussions, I think you've found that the issue stems from the fact that by default in a DAC form, all of your queries are executed when the form starts.
    For the benefit of others who are listening in, one way to reduce this startup time is to delay the execution of the queries. This requires some modification of the wizard-generated code. The steps are below.
    1. Define all the RowSetInfo objects and their attributes.
    2. Parent any RowSetInfo whose queries should be executed immediately to the SessionInfo object.
    3. Publish the SessionInfo.
    4. Parent any RowSetInfo objects whose queries should be delayed to the SessionInfo object.
    5. When appropriate, publish the RowSetInfo using RowSetInfo.publishRowSet(true).
    -- Brian

  • Data "chunking" in SSAS Tabular models

    I have a Tabular model I built in VS2013, deployed to SSAS 2014 Tabular, and consumed via SSRS and PowerPivot. Let's say it consists of 4 fact and 1 measure table. The resulting dataset is about 10 million rows, which for some end-users can be a little unwieldy.
    I had heard of "partitions", and thought it could help me "chunk" the data into manageable "user friendly" portions of the total measure dataset, but it doesn't appear to behave as I envisioned.
    Imagine you have a model with world-wide sales information that you want to maintain as a master model that you deploy, but you want to make portions of the measure data available based on values in a fact table; let's say a subset that represents NA, another for EU, and so on. The geo data is a fact table in the model.
    I envisioned "partitions" being selected parts of the total model, exposed via partition name, that the end user can choose to work with, as in "I'm the AR for the NA region and this is my partition of the WW Sales data cube". How would one accomplish this?

    Hi Torben,
    According to your description, you created a SQL Server Analysis Services Tabular model, deployed it to a SQL Server 2014 server, and use Reporting Services and Excel to interact with this tabular model. Now the problem is that the resulting dataset is about 10 million rows, which for some end-users can be a little unwieldy, right? If I have misunderstood anything, please point it out.
    In your scenario, you want to filter the data based on the user's region to address the performance issue. In this case, please refer to the links below, which describe how to achieve it.
    http://blogs.msdn.com/b/analysisservices/archive/2012/08/10/row-level-security-in-analysis-services-2012.aspx
    http://grow-n-shine.blogspot.jp/2012/10/defining-row-level-security-in-ssas.html
    Besides, to improve your report performance, you can cache your report. Caching can shorten the time required to retrieve a report if the report is large or accessed frequently. Please refer to the link below to see the details.
    http://msdn.microsoft.com/en-us/library/ms155927.aspx
    Regards,
    Charlie Liao
    TechNet Community Support

  • Glassfish wont start after the first deployment

    Good afternoon.
    I have a problem that I would like to solve with your help: after the first deployment of my project into GlassFish I keep getting this error over and over again. When I reinstall the server everything works fine, but after the first deployment the server won't start again. I would be glad to know what I can do to solve this, thx:
    Waiting for domain1 to start ...Error starting domain domain1.
    The server exited prematurely with exit code 1.
    Before it died, it produced the following output:
    Launching GlassFish on Felix platform
    Completed shutdown of GlassFish runtime
    Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at com.sun.enterprise.glassfish.bootstrap.GlassFishMain.main(GlassFishMain.java:97)
    at com.sun.enterprise.glassfish.bootstrap.ASMain.main(ASMain.java:55)
    Caused by: org.jvnet.hk2.component.ComponentException: Failed to create a habitat
    at com.sun.enterprise.module.common_impl.AbstractModulesRegistryImpl.createHabitat(AbstractModulesRegistryImpl.java:169)
    at com.sun.enterprise.module.bootstrap.Main.createHabitat(Main.java:425)
    at org.jvnet.hk2.osgiadapter.HK2Main.createHabitat(HK2Main.java:96)
    at com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishRuntime.newGlassFish(EmbeddedOSGiGlassFishRuntime.java:89)
    at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher.launch(GlassFishMain.java:113)
    ... 6 more
    Caused by: java.lang.RuntimeException: Fatal Error. Unable to parse file:/C:/Program%20Files/glassfishv3/glassfish/domains/domain1/config/domain.xml
    at org.glassfish.config.support.DomainXml.parseDomainXml(DomainXml.java:273)
    at org.glassfish.config.support.DomainXml.run(DomainXml.java:112)
    at com.sun.enterprise.module.common_impl.AbstractModulesRegistryImpl.populateConfig(AbstractModulesRegistryImpl.java:176)
    at com.sun.enterprise.module.common_impl.AbstractModulesRegistryImpl.createHabitat(AbstractModulesRegistryImpl.java:158)
    ... 10 more
    Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[11,88]
    Message: Invalid byte 2 of 4-byte UTF-8 sequence.
    at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:590)
    at javax.xml.stream.util.StreamReaderDelegate.next(StreamReaderDelegate.java:88)
    at org.glassfish.config.support.XMLStreamReaderFilter.thisNextTag(XMLStreamReaderFilter.java:94)
    at org.glassfish.config.support.XMLStreamReaderFilter.nextTag(XMLStreamReaderFilter.java:74)
    at org.jvnet.hk2.config.ConfigParser.handleElement(ConfigParser.java:199)
    at org.jvnet.hk2.config.ConfigParser.handleElement(ConfigParser.java:215)
    at org.jvnet.hk2.config.ConfigParser.handleElement(ConfigParser.java:167)
    at org.jvnet.hk2.config.ConfigParser.parse(ConfigParser.java:98)
    at org.jvnet.hk2.config.ConfigParser.parse(ConfigParser.java:93)
    at org.glassfish.config.support.DomainXml.parseDomainXml(DomainXml.java:254)
    ... 13 more
    Command start-domain failed.

    UPDATE:
    I also tried a safe boot, and after inserting the install DVD and pressing Option (alt) I chose the DVD-Install disc to boot, but after 30-45 seconds this panic message appears on the grey boot screen from "behind" the Terminal:
    panic (cpu 4 caller 0xfffffff80002d1208): Kernel trap at  ...
    Error code: 0x0000000000000000
    Debugger called: <panic>
    Backtrace (CPU 4), Frame : Return Address
    0xfffffffff8153283600 : 0x8000204d15
    BSD process corresponding to current thread: ReportCrash
    Mac Os version:
    10J4139
    Kernel version:
    Darwin Kernel Version 10.7.4: Mon Apr 18 21:24:17 PDT 2011; root:xnu-1504.14.12~3/RELEASE_X86_64
    How can I solve this, any Idea?
    THX

  • Tabular Model Partition

    Hi everyone,
    Please help me,
    I am new to the tabular model.
    My table data size is big, having millions of records; I need to apply partitions on my tables month-wise or year-wise.
    I have searched many blogs but could not find the right solution.
    Please, anyone, I need a step-by-step process.
    Please share any documentation or links.
    That would be very helpful.
    Regards
    Sreeni

    Hi Sreeni,
    Partitions divide a table into logical parts, and each partition can then be processed (refreshed) independently of the other partitions. In this case, you should group your partitions at the month level, so you can directly process the "Jan-2014" partition to load data for the business requirement.
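    For illustration, each partition then boils down to its own source query; a minimal sketch, assuming a hypothetical dbo.FactSales table with an OrderDate column:

    -- Source query for a partition named "FactSales 2014-01" (names are hypothetical).
    SELECT *
    FROM dbo.FactSales
    WHERE OrderDate >= '20140101' AND OrderDate < '20140201';

    -- Source query for a partition named "FactSales 2014-02".
    SELECT *
    FROM dbo.FactSales
    WHERE OrderDate >= '20140201' AND OrderDate < '20140301';

    The half-open date ranges make sure every row lands in exactly one partition, with no gaps or overlaps.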
    Besides, we can process the partition via an SSIS package. Here are some articles for your reference, please see:
    Using Integration Services with tabular models:
    http://blogs.msdn.com/b/cathyk/archive/2011/09/08/using-integration-services-with-tabular-models.aspx
    SSIS Methods to Process SSAS Tabular Partitions:
    http://jessekraut.wordpress.com/2013/09/24/ssis-methods-to-process-ssas-tabular-partitions/
    Process Tabular Model using SSIS:
    http://www.bifeeds.com/2013/04/process-tabular-model-using-ssis.html
    Regards,
    Elvis Long
    TechNet Community Support

  • Processing of a deployed Tabular model inside SSDT

    Hi,
    working inside SSDT, is there a way to process a tabular model (or a table/partition) that is already deployed on the SSAS instance?
    I know that it is possible to process only the workspace data.
    Thanks

    Hi Pscorca,
    According to your description, you want to process data for a tabular model which has already been deployed on the SQL Server Analysis Services instance. When authoring your model project, process actions must be initiated manually in SQL Server Data Tools (SSDT). After a model has been deployed, process operations can be performed by using SQL Server Management Studio or scheduled by using a script. So, no: SSDT cannot process data for a tabular model that has already been deployed on the SQL Server Analysis Services instance.
    If you have any concern about this behavior, you can submit a feedback at
    http://connect.microsoft.com/SQLServer/Feedback and hope it is resolved in the next release of service pack or product.
    Thank you for your understanding.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Deploying SSAS Tabular Model (2014)

    Hi,
    I'm trying to deploy my model with Microsoft.AnalysisServices.Deployment.exe
    This works locally, but when I try it on our test servers it does not work.
    Strangely enough, it tells me that it was all successful...
    The result is exactly the same locally as on the test server,
    but on my computer it is really deployed, while on the test server it isn't...
    Does anyone have an idea how I could debug this?
    Kr,
    Sammy

    Hi Sammy,
    According to your description, you are using Microsoft.AnalysisServices.Deployment.exe to deploy the tabular model, and it doesn't deploy successfully on the test server. Right?
    In this scenario, since the Deployment Utility didn't throw any error when deploying, the deployment should have executed successfully. Please confirm you are connecting to the correct SQL Server instance. Also, please try to deploy via the command prompt instead of using the user interface. You can choose silent mode, and it will generate a log file for your further analysis.
    Reference:
    Deployment Utility (Analysis Services - Multidimensional Data)
    SSAS Tabular Model Deployment
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • VS2010 SSAS Tabular Model Memory Limitation

    Hi,
    My laptop recently crashed, and I am currently using a loaner that has Windows 7 x32, Visual Studio 2010 SP1 and SQL Server 2012 SP1 (CU4) installed. It only has 4GB of RAM, and when I attempt to load a very large fact table in an SSAS Tabular
    Model project using Visual Studio I receive the following error:
    Memory error: Allocation failure : Not enough storage is available to process this command. . If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.
    The current operation was cancelled because another operation in the transaction failed.
    Visual Studio 2010 is only available in 32-bit, and I have already changed VertiPaqPagingPolicy from the default 0 to 2, but the issue still exists. Has anyone experienced this before?

    You have a couple of options in terms of removing the where clauses:
    1) Build your project in SSDT and then use the Analysis Services Deployment Wizard to create an XMLA deployment script, then edit the script to remove the WHERE clauses before executing it on your server.
    2) If this is a first-time deployment, change your project deployment options to "Do Not Process", deploy your project, and then edit the partitions for the tables with WHERE clauses to remove them.
    3) Another option is to use views in your relational database. Then you can put the WHERE clauses in the views on Dev, but exclude them in the Prod database.
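    As a sketch of option 3 (the table, column, and view names here are made up for illustration):

    -- Dev database: the view carries the filter, so the workspace model stays small.
    CREATE VIEW dbo.vFactSales
    AS
    SELECT *
    FROM dbo.FactSales
    WHERE OrderDate >= '20140101';  -- Dev-only filter
    GO

    In Prod you create the same view without the WHERE clause; the model's partitions reference dbo.vFactSales in both environments, so nothing in the project needs editing between deployments.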
    http://darren.gosbell.com - please mark correct answers

  • Tabular model process-took 4 HOURS instead of normal 10 minutes

    There must be some sort of log I can view or review to get a better look inside the process?
    When I refreshed my development project (which is using the same server for workspace) it takes less than 10 or 15 minutes.
    Then I deployed it to my server and after 35 minutes I closed out of SSDT BI as it appeared to have frozen.
    I then deployed again with no processing.
    After that I processed manually through SSMS by using Process (but scripted out) for a process full.
    After 12 or so minutes most everything appeared to have processed and the last feedback was on:
    CREATE MEASURE 'PROFIT_WORKSHEET'[Less Cost Over Profit -Indirect Costs]=CALCULAT ..."
    I could see that the SSAS server was still working via CPU activity, but no further messages were reported to SSMS. Then, 3 hours and 45 minutes later, it claimed to have completed and displayed:
    "Execution Complete" (it had lots of white space above it, around 20 or so blank lines, which was odd)
    The tabular model seems to be functional, and it is less than 350 MB in size (almost exactly the same size as my development workspace model), so I am at a loss as to why there is a delay like this.
    Any suggestions, thoughts?
    It returned 40 million rows, but I have other models that return more and process in 3 or 4 minutes so it is very odd (and it isn't 400 million, just 40 million)
    Thanks!

    Hi OneWithQuestions,
    According to your description, you created a SQL Server Tabular Model project which takes 10 minutes to process in SQL Server Data Tools; the problem is that it takes 4 hours to process this database in SQL Server Management Studio. So you need the detailed log from inside the process, right?
    In your scenario, you can enable SQL Server Profiler to monitor the queries fired by the process; once you find queries that take a very long time to run, consider creating smaller cube partitions or optimizing the queries by adding an index or partition to improve process time.
    http://www.mssqlgirl.com/process-full-on-tabular-model-database-via-ssms.html
    Regards,
    Charlie Liao
    TechNet Community Support

  • SSRS Parameters using SSAS Tabular model get cleared

    I have an SSRS report that uses data from an SSAS Tabular model. In the query designer, from the calendar dimension I choose a "Date Inclusive" filter and make it a parameter. I also add another filter using an Organisation Unit dimension and make it a parameter as well. The report is written and deployed to a SharePoint 2013 library.
    Most of the time, the report runs as expected with the parameters cascading off each other as expected.  However, occasionally, parameters get cleared (either after changing a single value such as the Org Unit selection or sometime whilst the report
    is being rendered). Sometimes you cannot select a value from the available values - you need to navigate somewhere else and then start over.
    I changed the data source for the parameters to use SQL queries that return the same values as the MDX queries, and the problem seems to have gone (time will tell).
    This report has a child (detail) report that has one extra parameter. This parameter happens to have over 1,000 values. With the change to the parent report, you are now able to get to the child report. However, the child report seems to exhibit the same problem with the parameters being cleared, and with a much higher frequency.
    So, that leaves me wondering:
    has anyone else experienced this?
    Is this an issue with SSRS 2012 and SSAS Tabular models? (I have not seen this behaviour before, and I have been using SSRS since version 1 and SSAS Multidimensional since it was called "OLAP Services".)

    We applied SQL Server 2012 Service Pack 2 to the SharePoint farm (the SP Admin needed to re-create the service applications) and the problem is fixed

  • Error while importing data in SSAS Tabular Model

    I am new to the concept Tabular Model in SSAS 2014.
    I am trying to create one tabular model based on Adventureworks DW 2014.
    I am getting below error while importing tables to create model.
    "OLE DB or ODBC error: Login failed for user 'ASIAPAC\CSCINDAE732028$'.; 28000.
    A connection could not be made to the data source with the DataSourceID of '98c2d415-1e84-469c-a170-2bcacd779c1a', Name of 'Adventure Works DB from SQL'.
    An error occurred while processing the partition 'Customer_65240c88-55e7-416c-a7ac-732dece8be8e' in table 'Customer_65240c88-55e7-416c-a7ac-732dece8be8e'.
    The current operation was cancelled because another operation in the transaction failed."
    But the data source itself was created successfully (screenshot below).
    It is while importing that I am facing the below error.
    Note:
    I have multiple instances on my system with unique names.
    Is this causing any ambiguity issues in selecting the right instance?

    Hi Naveen,
    Based on your screenshots, you fail to open a connection to the data source, right?
    In this scenario, the first screenshot you posted is for creating a connection to the server with the current Windows authentication, not for connecting to a data source. So "Test Connection succeeded" means your current Windows user can connect to the server, not to the database you selected in the dropdown list. When you click Next, you can choose the account used to access the data source. Based on the information, your service account 'ASIAPAC\CSCINDAE732028$' doesn't have permission to access the database you selected. Please grant the permission to the service account in SSMS.
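    For example, the grant could look like this (a sketch using the account name from your error message; the database name is assumed to be the AdventureWorks DW 2014 sample, so adjust both to your environment):

    -- Run against the SQL Server instance that hosts the source database.
    USE [master];
    GO
    CREATE LOGIN [ASIAPAC\CSCINDAE732028$] FROM WINDOWS;
    GO
    USE [AdventureWorksDW2014];
    GO
    CREATE USER [ASIAPAC\CSCINDAE732028$] FOR LOGIN [ASIAPAC\CSCINDAE732028$];
    ALTER ROLE [db_datareader] ADD MEMBER [ASIAPAC\CSCINDAE732028$];
    GO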
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • SSAS Tabular Model and browse with MS Excel 2013

    I have a tabular model with an attribute whose data type and data format are "whole number". When I browse the model from Excel and add a label filter, it does not work as expected.
    When I specify "greater than 180" as the label filter, it gives the following output (shown in a screenshot):
    My guess is that it is just filtering on the first two digits, "18", and not "180". When I copy these values into an Excel sheet and create a pivot table on top of it, it works fine.
    Is this a Microsoft bug? Have you encountered something like this, or am I doing something wrong?
    I am using Tabular model 2012 and Excel 2013 or 2010.
    If this post answers your query, please click "Mark As Answer" or "Vote as Helpful".

    Hi Itz,
    According to your description, the filter does not work correctly in Excel against a SQL Server Analysis Services Tabular model, right?
    I have tested it in a local environment, and we cannot reproduce this issue. As per my understanding, this issue is related to the settings of your tabular model. Can this issue be reproduced on multiple servers in your environment? In that case, from a support perspective this is really beyond what we can do here in the forums. If you cannot determine your answer here or on your own, consider opening a support case with Microsoft. Visit this link to see the various support options that are available to better meet your needs:
    http://support.microsoft.com/default.aspx?id=fh;en-us;offerprophone
    Besides, if you consider this issue a bug, you can submit feedback at
    http://connect.microsoft.com/SQLServer/Feedback and Microsoft will confirm whether this issue is a bug or not.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Processing Tabular Model

    1. We are developing a Tabular model. In this model there are likely to be about 15 dimension tables and 2 main fact tables.
    2. One of the fact tables is very large, approximately 70 million rows, and will increase to 150 million rows in about 2-3 years' time.
    3. So on the cube we have created partitions and are using incremental processing. What we have done is to create a partition definition table header and lines. At the header level we store the name of the measure group on which we wish to create partitions, and in the lines table we create the definition of each partition. Using an sp we mark those rows of the partition lines which we wish to reprocess. Such partitions are dropped and recreated. So far this is working well.
    4. I want to generalize this solution so that it works across different projects without any changes.
    Now I have two questions :
    Question 1 :
    If I make changes in the tabular project and deploy it, I believe all partitions will get deleted and all the data will need to be pulled in again. This will happen even if I only add a calculated measure. Is there any method to overcome this?
    Question 2 :
    What is the mechanism for processing only certain measure tables incrementally and all other tables fully? In my example above, only one table has partitions. So if I want to process only the current partition of that table, and all the other tables in full, how do I achieve this?
    Sanjay Shah
    Prosys InfoTech, Pune, India

    1) If you only add a measure or a calculated column, you do not need to read data from the data source. If you have problems with deployment within VS, consider using the Deployment Wizard.
    2) A complete description of process strategies is included in a chapter of our book (http://www.sqlbi.com/books/microsoft-sql-server-2012-analysis-services-the-bism-tabular-model).
    In general, you can control which partition/tables you want to process and in which way, using XMLA scripts, PowerShell and other tools. The easiest way to create an XMLA script is using the Generate Script feature in SSMS when you use the process wizard.
    Marco Russo http://ssasworkshop.com http://www.sqlbi.com http://sqlblog.com/blogs/marco_russo

  • Updation of DAX in SSAS Tabular Model

    Hi All,
    I have an SSAS tabular model which has around 42 dimension tables and 10 fact tables, each containing some thousands of records. When I write some DAX on a fact table to get a measure, it takes 10-15 minutes to update one DAX expression.
    I have to write more than 300 measures using DAX. Can anyone please suggest how I can speed up the update process when writing DAX?
       Thanks in Advance.
    Sanjay

    Hi Sanjay,
    According to your description, there are around 42 dimension tables and 10 fact tables, each containing some thousands of records, in your tabular model, and it takes 10-15 minutes to update a calculation to the server, right?
    By default, every modification is pushed to the server when you change something in your tabular model; if the tables contain a lot of data, the update will take a long time. In your scenario, we recommend you use a Multidimensional database instead of a Tabular model when you have a large amount of data with complex requirements.
    The SQL Server Analysis Services (SSAS) Multidimensional model is used when you have a large amount of data with complex requirements. In order to improve query performance, there will be a cube doing the heavy, time-consuming processing, which is then synchronized to query cubes.
    However, the SSAS Tabular model is used when the data model is relatively simple. So for a large data model, we recommend using a Multidimensional database, processing the data on one server and then synchronizing it to query cubes.
    Reference
    http://blog.aditi.com/data/choosing-between-analysis-services-multidimensional-and-tabular-models-part-3/
    Regards,
    Charlie Liao
    TechNet Community Support

  • Tabular model does not update accordingly with source data change

    Hi all ,
    I am new into Tabular, appreciate if someone can help please.
    I've created a simple Tabular model against a single table (SQL Server 2012). When I process and deploy, I can open it in Excel and view the data.
    Then, when I change data in the source table and process and deploy again, the changes are not reflected when I open it in Excel. But if I give a different Database Name and Cube Name under Tabular Properties and process and deploy, and open it in Excel, I can see the changes.
    I want to keep the same Database and Cube Name under Tabular Properties/Development Server.
    Is it possible to see source table changes in Excel without changing the Database and Cube Name under Tabular Properties/Development Server?
    Deployment Options are as follows:
    Processing Option : Default
    Transaction Deployment :False
    Query Mode: In Memory
    Thanks.
    Mira

    Hi,
    Try changing "Processing Option: Default" to "Processing Option: Full". Sometimes I delete the workspace database created on the SSAS server before doing the processing.
    Best regards.
    Chandima Lakmal Fonseka
