Proactive Caching for Cube Processing

Hi,
We need to implement proactive caching on one of our cubes in SQL Server 2012. We are able to do it at the partition (measures) level when the data in the tables changes. I am looking for an option to implement proactive caching at the cube level every
night at a particular time (12:00 A.M.), irrespective of data changes in the tables. We don't want to use SSIS packages.
Thank You.
Praveen

Hi Praveen,
Proactive caching is a feature in SSAS that allows you to specify when to process a measure group partition or dimension as the data in the relational data source changes.
Generally, to process a cube on a fixed schedule, we develop an SSIS package that processes the dimensions and measure group partitions, and then execute the SSIS package periodically. As
Kieran said, why don't you want to use SSIS packages in your scenario?
Here are some useful links for your reference.
http://vmdp.blogspot.com/2011/07/pro-active-caching-in-ssas.html
http://www.mssqltips.com/sqlservertip/1563/how-to-implement-proactive-caching-in-sql-server-analysis-services-ssas/
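If SSIS really is off the table, one common alternative (a sketch, not something from this thread) is a SQL Server Agent job scheduled for 12:00 A.M. whose step sends an XMLA Process command to the SSAS instance. The database and cube IDs below are placeholders:

```xml
<!-- ProcessFull of one cube; DatabaseID and CubeID are placeholders -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <CubeID>MyCube</CubeID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```

Run it as an Agent job step of type "SQL Server Analysis Services Command"; the job schedule, rather than data changes, then drives the nightly refresh.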
Regards,
Charlie Liao
TechNet Community Support

Similar Messages

  • Error in MOLAP Proactive Caching

    Hello,
We have enabled proactive caching for MOLAP and we are using the polling mechanism. The polling query queries a view for a date column. A SQL view, with joins on different tables, is the data source for the cube.
    When an insert happens on the underlying table, the polling query works fine and starts to process the cube.
    During the cube process, the following error is logged in the SQL Server Profiler
Internal error: The operation terminated unsuccessfully. Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table:vw_realdata, Column: 'Product', Value: 'Product1'. The attribute is 'Product'....
We have enabled the "Apply settings to dimensions" checkbox for the measure group.
    When the complete database is processed, this error does not occur.
    Please let me know how to prevent this error using Proactive Caching?

    Eileen,
    "The issue is during the cube process which is run by SSAS once it detects changes by Poll query"
    Say I have a dimension Product, with Product_Key as the key and an attribute BRAND, with values {1, BRAND-A}.
    Up to now everything works fine.
    Then the dimension data in the database gets updated: BRAND-A becomes BRAND-B.
    During this time, in two windows:
    - after the data changes but before the poll query detects it, and
    - after the poll query has detected it, while the cube is still being processed by SSAS,
    any MDX query fired with the BRAND attribute will look for BRAND-B in the MOLAP dimension and, if it is not found, will throw an error. Why BRAND-B? Because the database is already updated.
    SELECT NON EMPTY [PRODUCT].[BRAND].MEMBERS ON ROWS, [Sales] ON COLUMNS FROM MYCUBE
    translates into a SQL query like the one below:
    SELECT prod.BRAND, SUM(fact.Sales) AS Sales
    FROM <MYFACT> fact
    JOIN PRODUCT prod ON fact.PRODUCT_KEY = prod.PRODUCT_KEY
    GROUP BY prod.BRAND
    The SQL returns BRAND-B|9999.89. The attribute values are checked against the MOLAP dimension, and it fails with the error message Anandh got.
    After the cube process completes via the proactive caching mechanism, the error goes away.
    Thanks
    Shom

  • Clear cache for items not working with multiple items

    APEX 4.0, 11g. I've made a session state process that clears the cache for items (item_1, item_2, item_3), but I keep getting the error "Unexpected error, unable to find item name at application or page level". I have now created 3 session state processes, each with one page item, and that works fine; but as soon as I try to put more than one item in the process field, separated by commas, I get this error. Is there some other setting I have to change to make it accept multiple values in this field?

    No, I wasn't using the parentheses... just the items separated by a comma.
    I don't want to clear the cache for all page items, just these 3. It works fine as 3 separate clear-cache-for-item processes; I just thought it was odd that I can't put all three items into one process separated by commas.

  • Automatic MOLAP cube : Proactive caching was cancelled because newer data became available

    When I process the cube manually after processing the dimensions, it works fine. But when I append data to a database column, proactive caching kicks in, and at that point it fails.
    Sometimes it cannot find the key attribute, because the measure group gets processed before the dimension,
    and sometimes it gives the error:
    Proactive caching was cancelled because newer data became available  Internal error: The operation terminated unsuccessfully. OLE DB error:
    OLE DB or ODBC error: Operation canceled; HY008. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'call dim Monthly 201401 2', Name of 'call dim Monthly
    201401 2' was being processed. Errors in the OLAP storage engine: An error occurred while the 'MSW' attribute of the 'call dim Monthly 201401 2' dimension from the 'callAnalysisProject' database was being processed.  etc....

    I have also seen this error occur in other scenarios.
    First, if you have set proactive caching to refresh every 1 minute and your query takes 2 minutes to refresh, the error above can be displayed. Solution: increase your refresh interval or tune your proactive caching query.
    Related to the above, if your server is short on available resources, that can also cause the slower query response times during the refresh and the same message.
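    For the polling case, the cost of the polling query itself is often what needs tuning, since SSAS re-runs it at every interval. A typical shape (table and column names here are assumptions, not from the thread) is a single cheap scalar that changes whenever the source data changes:

```sql
-- SSAS compares the returned value with the previously polled value;
-- a difference triggers reprocessing. An index on LastUpdatedAt keeps it cheap.
SELECT MAX(LastUpdatedAt) FROM dbo.FactSales;
```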

  • Proactive Caching - Monitoring processing

    I'd like to hear from anyone that is using proactive caching and how they monitor the loads of the cube. 
    I have created an Aggregation=Max measure in each measure group that loads as getdate(); this allows me to see the load date by partition. My date dimension has a partition_cd, which denotes what dates a partition covers. The partition date
    scheme is the same for all measure groups. This handles things from the user perspective: they know how recent their data is.
    What it doesn't do is allow me to see average load times, number of loads per day, etc., the things I need from a support perspective.
    The only solution I have seen for this is the ASTrace.exe application. That would mean installing something custom on the server, which I'd like to avoid if I can. Any other options out there?
    Any other feedback on this area in general?
    As always you guys are great, thanks for all the help!
    -Ken
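    One lighter-weight option than ASTrace, if your SSAS version exposes the dynamic management views, is to poll them from an MDX query window or a small collection script; a sketch (column list abbreviated):

```sql
-- Run against the SSAS instance; lists commands currently executing,
-- including processing kicked off by proactive caching.
SELECT SESSION_SPID, COMMAND_START_TIME, COMMAND_ELAPSED_TIME_MS, COMMAND_TEXT
FROM $SYSTEM.DISCOVER_COMMANDS;
```

    Logging the results on a schedule would give load counts and durations without installing anything custom on the server.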

    Hi Ken,
    Thank you for your question. 
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Cube process stuck - finished building aggregations and indexes for the partition

    Hi friends
    My cube processing is stuck at "Finished building aggregations and indexes for the partition". How can I troubleshoot this?
    Appreciate your help. 
    Royal Thomas

    Royal,
    Your question is discussed here and here as well. Maybe it will help you out.
    Best regards.

  • When does proactive caching make sense?

    Hi all!
    A standard pattern for multi-dimensional cubes is to have
    one cube do the heavy, time-consuming processing and then synchronize it to query cubes.
    In this setup, does proactive caching make sense?
    Best regards
    Bjørn
    B. D. Jensen

    Hello Jensen,
    Proactive caching is useful for low-volume cubes where the data updates frequently, like inventory or forecasting. But I will tell you from my own experience that proactive caching in SSAS is not worth it. It sometimes behaves unexpectedly: when data is updated/inserted/deleted
    in the source table, the cube doesn't start processing. You are better off creating a SQL Agent job to process the cube at a specified time.
    If you want to process the cube at a specified interval, then I would suggest you go with a SQL Agent job.
    Hope this helps!
    Sanjeewan
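    As a sketch of the SQL Agent approach Sanjeewan suggests (the job name, server name, and XMLA object IDs below are placeholders, not from this thread), a job step can send the processing command through the Agent's ANALYSISCOMMAND subsystem:

```sql
-- Add a job step that sends an XMLA ProcessFull to the SSAS instance.
EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'Nightly Cube Process',   -- assumed job name
    @step_name = N'ProcessFull MyCube',
    @subsystem = N'ANALYSISCOMMAND',        -- SSAS command subsystem
    @server    = N'MySSASServer',           -- SSAS instance, not the SQL engine
    @command   = N'<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object><DatabaseID>MyOlapDatabase</DatabaseID><CubeID>MyCube</CubeID></Object>
  <Type>ProcessFull</Type>
</Process>';
```

    Attach a nightly schedule to the job with sp_add_schedule and sp_attach_schedule to run it at a fixed time.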

  • Any problems having Admin Optimization and Proactive caching run concurrently

    Hi,
    We've recently enabled proactive caching, refreshing every 10 minutes, and have seen data in locked versions change after a Full Admin Optimization runs. Given how the data reverts to a prior submitted number, I suspect that proactive caching occurring while the Full Admin Optimization runs may be the culprit.
    Here's an example to depict what is happening.
    original revenue is $10M.
    user submits new revenue amount of $11M.
    version is locked.
    data in locked version is copied into a new open version.
    full optimization runs at night and takes 60 mins. all the while, proactive caching runs every 10 mins.
    user reports the revenue in the previously locked version is $10M and the new version shows $11M.
    We've never experienced this prior to enabling proactive caching which leads me to believe the 2 processes running concurrently may be the source of the problem.
    Is proactive caching supposed to be disabled while Full Admin Optimization process is running?
    Thanks,
    Fo

    Hi Fo
    When a full optimization is run, the following operations take place:
    - data is moved from wb and fac2 tables to the fact table
    - the cube is processed
    If the users are loading data while full optimization occurs then it is expected that a certain discrepancy will be observed. One needs to know that even with proactive caching enabled, the OLAP cube will not be 100% accurate 100% of the time.
    Please have a look at this post which explains the details of proactive caching:
    http://www.bidn.com/blogs/MMilligan/bidn-blog/2468/near-real-time-olap-using-ssas-proactive-caching
    Also - depending on how they are built, the BPC reports may generate a combination of MDX and SQL queries which will retrieve data from the cube and data from the backend tables.
    I would suggest preventing users from loading data or running reports while the optimization takes place.
    Stefan

  • Enable OLAP proactive caching

    Hi All
    Just a quick question: I am unable to find any SAP Notes regarding OLAP proactive caching, and was wondering if enabling it would improve our cube performance when querying.
    The environment is SQL 2005 / AS 2005 SP3 / BPC 7.0 SP7 Patch 2
    Thanks in advance
    Daniel

    Hi Daniel
    Actually, the BPC WB partition already uses proactive caching as ROLAP.
    Even if you set it for the other two partitions (FACT, FAC2), it will not improve your query performance.
    What I suggest is reducing the number of rows and columns in the expansion.
    If you have multiple columns and rows in the expansion, it will create a crossjoin, which makes performance worse.
    If you must have multiple columns and rows, then try to use multiple EVDREs so that they can share the same column/row members.
    That will remove the crossjoin and give better query performance.
    Thank you.
    James Lim

  • SSAS Proactive Cache from View

    Hi,
    I have recently reconfigured some of my cube partitions to the HOLAP storage mode with proactive caching turned on. At the moment I have "Notifications" set to "SQL Server", tracking from a table, and everything is working
    as expected.
    However, ideally I would like to track from an existing view (which would mean I don't have to create new tables for tracking purposes only). I tried this initially and, after some searching, have not found a solution, only that most people solve
    the issue by tracking from tables. Surely there is a way to achieve tracking from views? If someone could point me in the right direction for some reading it would be much appreciated.
    Thank you for taking the time to read this.
    SQL Server Version:
    Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)   Jun 11 2012 16:41:53   Copyright (c) Microsoft Corporation  Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) 

    Hi Nesthead,
    According to your description, you created a project based on SQL Server views, now you need to set the change tracking on the views so that you can set cube partitions to the HOLAP storage mode with proactive caching turned on, right?
    As you know, we can set Change Tracking at the database level and table level; however, there is no such option at the view level. Generally, we create a SQL Server view to combine columns from different tables, so the view is based on multiple
    tables. If we need to set cube partitions to the HOLAP storage mode with proactive caching turned on, we just need to turn on Change Tracking on the tables that are used to create the view. Here is a similar issue, please see:
    http://stackoverflow.com/questions/19978072/ssas-2008-proactive-caching-not-working-on-views
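    Enabling Change Tracking on the base tables behind the view is plain T-SQL; the database and table names below are placeholders for whatever your view actually joins:

```sql
-- Change tracking must be enabled at the database level first...
ALTER DATABASE SalesDW
    SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- ...then on each base table that the view is built from.
ALTER TABLE dbo.Orders   ENABLE CHANGE_TRACKING;
ALTER TABLE dbo.Products ENABLE CHANGE_TRACKING;
```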
    Regards,
    Charlie Liao
    TechNet Community Support

  • SHUTDOWN: waiting for detached processes to terminate

    HI
    I have cold backups running every night, and before the backup session starts we have a cronjob 'srvctl stop -p prod'; our DB servers are running on RAC.
    But starting from last Saturday, our backup session for the raw device failed. When I looked at the alert log, it showed the following (example from 15th November 2006):
    Tue Nov 21 01:33:31 2006
    Thread 1 advanced to log sequence 2855
    Current log# 1 seq# 2855 mem# 0: /u01/oracle/oradata/prod/redo_1_01_01.log
    Current log# 1 seq# 2855 mem# 1: /u01/oracle/oradata/prod/redo_1_01_02.log
    Tue Nov 21 02:37:24 2006
    Reconfiguration started
    List of nodes: 0,
    Global Resource Directory frozen
    one node partition
    Communication channels reestablished
    Server queues filtered
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Resources and enqueues cleaned out
    Resources remastered 13089
    147420 GCS shadows traversed, 0 cancelled, 18306 closed
    63793 GCS resources traversed, 0 cancelled
    98369 GCS resources on freelist, 162162 on array, 162162 allocated
    set master node info
    147420 GCS shadows traversed, 0 replayed, 18306 unopened
    Submitted all remote-enqueue requests
    Update rdomain variables
    0 write requests issued in 129114 GCS resources
    1 PIs marked suspect, 0 flush PI msgs
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Tue Nov 21 02:37:27 2006
    Reconfiguration complete
    Tue Nov 21 02:37:28 2006
    Instance recovery: looking for dead threads
    Instance recovery: lock domain invalid but no dead threads
    Tue Nov 21 02:37:29 2006
    Shutting down instance: further logons disabled
    Shutting down instance (immediate)
    License high water mark = 90
    Tue Nov 21 02:37:29 2006
    ALTER DATABASE CLOSE NORMAL
    Tue Nov 21 02:37:29 2006
    SMON: disabling tx recovery
    SMON: disabling cache recovery
    Tue Nov 21 02:37:34 2006
    Thread 1 closed at log sequence 2855
    Tue Nov 21 02:37:38 2006
    Completed: ALTER DATABASE CLOSE NORMAL
    Tue Nov 21 02:37:38 2006
    ALTER DATABASE DISMOUNT
    Completed: ALTER DATABASE DISMOUNT
    ARCH: Archiving is disabled
    Shutting down archive processes
    archiving is disabled
    Archive process shutdown avoided: 0 active
    ARCH: Archiving is disabled
    Shutting down archive processes
    archiving is disabled
    Archive process shutdown avoided: 0 active
    Tue Nov 21 02:42:49 2006
    SHUTDOWN: waiting for detached processes to terminate.
    Tue Nov 21 07:16:38 2006
    Starting ORACLE instance (normal)
    It seems it hung when the 'shutdown immediate' command was issued. Can somebody help me? What should I do?
    Thanks
    Best regards,
    Nonie

    Hi nonie
    Oracle shutdown Problem
    If the following message is in the Oracle alert file:
    SHUTDOWN: waiting for detached processes to terminate
    you should change the SERVER and SRVR parameters from SHARED to DEDICATED in the file <ORACLE_HOME>\network\ADMIN\tnsnames.ora.
    http://serviceportal.fujitsu-siemens.com/i/en/support/technics/fgm/unix/nsr40a_en.htm
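    For reference, the corrected tnsnames.ora entry would look roughly like this (host, port, and service name are placeholders); the relevant part is the (SERVER = DEDICATED) clause:

```
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = prod)
      (SERVER = DEDICATED)
    )
  )
```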
    hope this helps
    Taj.

  • Is it possible for a process to participate in two separate clusters

    Is it possible for a process to participate in two separate clusters? For example, our application would like to get market data from one cluster that has a separate multicast address, and post orders in another.

    The easiest way for a client to access multiple clusters is via Coherence*Extend:
         http://wiki.tangosol.com/display/COH33UG/Configuring+and+Using+Coherence*Extend
         The client would not be a member of the cluster; instead it would connect to the cluster via a proxy node that is in the cluster. Using <remote-cache-scheme>, you can configure a cache to point to one proxy (in cluster A) and have another cache point to another proxy (in cluster B).
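         A sketch of one such <remote-cache-scheme> entry in the client's cache configuration file (the scheme, service, and host names are illustrative); a second scheme pointing at cluster B's proxy would sit alongside it:

```xml
<remote-cache-scheme>
  <scheme-name>extend-cluster-a</scheme-name>
  <service-name>ExtendTcpClusterA</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>proxy-a.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```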
         Thanks,
         Patrick Peralta

  • Any ideas on this plan for a process chain?

    Hi,
    I have 6 ODSes. I load them on a daily basis from 6 different flat files. ODS1, ODS2 and ODS3 need to be loaded before ODS4, ODS5 and ODS6.
    Once all six ODSes are loaded, they are aggregated based on two key fields and loaded into a cube. Now I want to automate the process.
    Can you please check if my plan for the process chain is right:
    1. Start Process:   
    direct Scheduling
    Change Selections:
    Start date & Time
    Period Jobs: check
    Periodic Values: Daily
    Restrictions: Always execute job
    2. Indexes:(this flows into the first 3 ODSes)
    Delete indexes
    3. Load Data:
    Load Data ODS1 
    Load Data ODS2
    Load Data ODS3
    4. Activate ODS1
    5. Activate ODS2
    6. Activate ODS3
    (? What do I setup here so that the following will be loaded only if ODS1, ODS2 and ODS3 are successful)
    7.Load Data
    Load Data ODS4
    Load Data ODS5
    Load Data ODS6
    8.Delete Indexes
    9. Load Data
    Load data from ODS1, ODS2, ODS3, ODS4, ODS5, ODS6 to the CUBE
    10. Activate Cube (? Does the cube need to be activated? Is there a process type like the one for activating an ODS?)
    11. Create Index (Hmm, will the delete and create index steps in this plan apply to both the ODS and the cube?)
    Thanks, I would love to get hints from you. How do I factor in the PSA? i.e., always go to PSA first, then to ODS and cube?

    Hi,
      1. Start the process (as per your requirement).
      2. Load the data to ODS1, ODS2 and ODS3 in parallel.
      3. Activate the three ODSes, each with a separate ODS activation process type.
      4. Put in an AND condition.
      5. Load data to ODS4, ODS5 and ODS6.
      6. Activate ODS4, ODS5 and ODS6.
      7. Delete the index for the cube.
      8. Load the data from the ODSes to the cube.
      9. Create the index.
    As a diagram:
                     Start
        Load ODS1 -- Load ODS2 -- Load ODS3
        Activate ODS1 -- Activate ODS2 -- Activate ODS3
                     AND (process)
        Load ODS4 -- Load ODS5 -- Load ODS6
        Activate ODS4 -- Activate ODS5 -- Activate ODS6
                     AND
                 Delete the index
                 Load data from the ODSes to the cube
                 Create the index
    There is no concept of activating the cube; that only applies to an ODS.
    Regards,
    Siva.

  • Java Proxy Generation not working - Support for Parallel Processing

    Hi Everyone,
    As per SAP Note 1230721 - Java Proxy Generation - Support for Parallel Processing, when we generate a Java proxy from an interface we are supposed to get 2 archives (one for serial processing and another, suffixed with "PARALLEL", for parallel processing of Java proxies in the JPR).
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1230721
    We are on the correct patch level as per the Note; however, when we generate the Java proxy from the IR for an outbound interface, it generates only 1 zip archive (whose name we provide ourselves in the create-new-archive section). This does not enable parallel processing of the messages in the JPR.
    Could you please help me in this issue, and guide as to how archives can be generated for parallel processing.
    Thanks & Regards,
    Rosie Sasidharan.

    Hi,
    Thanks a lot for your reply, Prateek.
    I have already checked SAP Note 1142580 - "Java Proxy is not processing messages in parallel", where they ask you to modify the ejb-jar.xml. However, after making the change in ejb-jar.xml, while building the EAR I get the following error:
    Error! The state of the source cache is INCONSISTENT for at least one of the request DCs. The build might produce incorrect results.
    Then, on going through SAP Note 1142580 again, I realised that SAP Note 1230721 should also be looked at, as it is needed for generating the Java proxy from Message Interfaces in the IR for parallel processing.
    Kindly help me if any of you have worked on such a scenario.
    Thanks in advance,
    Regards,
    Rosie Sasidharan.

  • Scheduled for Outbound Processing in SXMB_MONI

    I have a scenario RFC -> ccBpm1 -> ccBpm2 -> ccBpm3 -> RFC.
    The scenario was working perfectly. I made some simple changes in ccBpm3 and some errors started to happen.
    The XML message from ccBpm1 to ccBpm2 started to get stuck (message "Scheduled for Outbound Processing" in SXMB_MONI). I didn't make ANY change in ccBpm1 or ccBpm2 before the errors started.
    When I make a "dummy" change in ccBpm2 and reactivate it, the message from ccBpm1 to ccBpm2 is processed, but then the message from ccBpm2 to ccBpm3 gets stuck. If I make a dummy change in ccBpm3, the message from ccBpm1 to ccBpm2 gets stuck again.
    I have no queue locks in SMQ1 and SMQ2. The cache statuses are all 0 in SXI_CACHE. I have tried to reimport all the Integration Processes in the Integration Directory, but with no success.
    Seems to be any kind of bug.
    Any idea?
    Thanks

    First, I would like to know if there was a justification for dividing the scenario into 3 BPMs, because BPMs add overhead: they get converted to workflows on the ABAP engine, and the context switches from the Java to the ABAP engine. You might want to check your configuration in the ID test configuration and then do a close analysis of the message processing in each of the ccBPMs.
    VJ
