Cache questions.

I recently had a faulty hard drive replaced in my iMac, reinstalled a clone off a BBU, and have been running off this.
Whilst everything appears OK, I am a bit wary (some may say paranoid) since it was cloned from the faulty hard drive.
I found a folder in Library/Caches marked with a "red circle with a dash in it". It is called "com.google.UpdateEngine.Framework.0", and I am wondering whether it is OK to trash it and what the red badge means. There is nothing in the folder.
Also would it be worth getting a cache cleaner to clean things up?
If yes, what would be a good safe cache cleaner to get?
I realize I may be doing the "if it ain't broke, keep fixing it till it is" thing, but I would just like to be sure before I re-clone the current system.
Any thoughts welcome. Thanks.

The red circle with the dash means you cannot open that folder - you do not have access because you are not the user who created it. It was created by a Google program that you have installed.
Certain cache files are benign and you can delete them without a problem. But some may be essential to the OS, and deleting them could be problematic.
Use a utility such as TinkerTool System to remove caches safely. Note that once you delete caches your system will be noticeably slower as the missing cache files are recreated. Suffice it to say that unless you are having a problem caused by a corrupted cache there isn't much to be gained by deleting them.

Similar Messages

  • Switching From Windows To Mac: ACR Database, Bridge Cache Questions

    I am planning to switch from Windows XP to the Mac running Leopard. I am now running CS2 in Windows, but will upgrade to CS3 on the Mac. I am maintaining my RAW file settings in the ACR central database and my Bridge cache in a centralized cache file. I've searched the forums but couldn't find answers to my migration questions:
    1. Can one move the ACR database file from Windows (in folder C:\Documents and Settings\<username>\Application Data\Adobe\CameraRaw) to a folder somewhere on the Mac to preserve the settings on RAW files that are also moved to the Mac? Or does one need to export individual XMP files for the RAW files in Windows and move both the RAW files and their associated XMP files to the Mac?
    2. I have a similar question for migrating the Bridge cache from Windows XP to the Mac. Can one somehow move the Bridge cache over (I think it's in C:\Documents and Settings\<username>\Application Data\Adobe\Bridge\Cache\Thumbnails\), or do I need to export the cache for each folder and subfolder and move them over along with the folders and subfolders?
    I would appreciate any help or pointers to any Adobe documents or directions. Thank you.

    Man-Kong,
    Since you said that you were also migrating from CS2 to CS3, I would have said that their cache files were incompatible. That is what I experienced with CS1, CS2, and CS3 in the past. But Thomas indicated it should work. I must be missing something.
    The XMP files move fine for me. The local cache (folder) files have completely different names with each version, and I see no evidence that they can be shared. This is also a problem with archived CDs. CS3 always has to rebuild the cache if they were archived from CS1 or CS2.
    Since the central cache must have some folder references, and this structure will be completely different on the Mac, I don't see how it can work. It doesn't work for scripts or actions either.
    So my humble recommendation is to set your preferences to store cache and XMP files in the local folders where possible. Then simply bite the bullet. Each time you visit a folder for the first time in the new environment, cache will be rebuilt.
    Cheers, Rags :-)

  • Cache Question

    Hello all,
    I am not new to working with Java (not writing it, just supporting it), but I'd like to know a little about this:
    I am currently working with a Java applet displaying TIF images that have been uploaded from another server. My question is about the Java cache. First of all, is it easy to find in the directories? Every time my applet loads a TIF image, what kind of files are downloaded into the cache folder? What happens when the cache folder has reached the maximum memory limit I have set? At that point, are files replaced with the current ones being downloaded, or is the cache no longer able to store files?
    I've been wondering this for some time now, so any help would be appreciated. Thanks!
    -adam

    ajetrumpet wrote:
    If you have a bad day, please don't respond. That's only fair to everyone, ya know? Thanks for the replies, or lack thereof, but I think this thread is pretty much useless now... thanks again for anything you guys helped with.
    I've had a great day, thanks, but if you look at this thread through the eyes of most of us here, you're coming off, well, not looking so good, kind of (and I hate to say it) like a spoiled brat. Is that how you want to appear? Seriously, think about how your reply will appear to others, not just the person you're arguing with. This is being posted with the intention of helping you get better answers in the future. Also, read this link, which tells you how to ask decent questions that others will want to answer: [How To Ask Questions The Smart Way|http://www.catb.org/~esr/faqs/smart-questions.html]
    good luck.

  • Cfquery caching question

    I have a question (actually two) about how caching works with cfquery so I can make sure that what I'm doing on our website makes sense. (We're on CF10)
    I have a database that stores certain details about each of the web pages on our site: contact ID, title, date added/updated, whether it's a recent or featured item, etc. The unique identifier for each page is the path (/folder/subfolder/page.cfm). Since our site is organized in several major topic areas, there's a column to identify which topic the page is in (and that's the same as the folder name that set of pages resides in).
    Some of our topics get tons of hits, and some get very few, so I've been doing this:
    <cfquery name="getalltopics" datasource="dsn" cachedwithin="#createtimespan(0,1,0,0)#">
    SELECT (columns I need)
    FROM pages
    WHERE topic = <cfqueryparam value="#topicname#"  cfsqltype="cf_sql_varchar">
    </cfquery>
    <cfquery name="getpagedetails" dbtype="query">
    SELECT (columns I need)
    FROM getalltopics
    WHERE page_id = <cfqueryparam value="#page_path#"  cfsqltype="cf_sql_varchar">
    </cfquery>
    So here's my question. I know that caches only come into play if the query being run identically matches the one that's cached. So in my mind, the first query would make an individual cache for each of our topics as the pages were viewed during the day. My second query would grab the page details from the first; with several topics cached, how would the second one know which cache to find the page details in? The query works, and is fast, but I'm just wondering if I need to specify the topic ID in the second query.
    My second question is about cache time.  If I do createtimespan(0,1,0,0), does that cache last for an hour after it's created, or does it last for an hour after the last time that query is run?
    Thanks!

    What I ended up doing was creating a variable-based query name, named after the current topic, like so:
    <cfset qname = "q" & topic_name>
    <cfquery name="#qname#" datasource="dsn" cachedwithin="#createtimespan(0,1,0,0)#">
    SELECT (columns I need)
    FROM pages
    WHERE topic = <cfqueryparam value="#topicname#"  cfsqltype="cf_sql_varchar">
    </cfquery>
    <cfquery name="getpagedetails" dbtype="query">
    SELECT (columns I need)
    FROM #qname#
    WHERE page_id = <cfqueryparam value="#page_path#"  cfsqltype="cf_sql_varchar">
    </cfquery>
    This way I (in theory) have a unique cache for each topic, and my second query draws from that cache.
    Upon implementation, I've noticed a significant reduction in database hits (since we still use Access on this site, any hit creates an .ldb file, so I can easily tell if it's getting hit). Prior to the change, the ldb file was basically continuously there, appearing and disappearing a few times per second. Now, several minutes can go by with no ldb, even at the peak of the day, and pages are getting rendered almost instantly. If I add up all my query execution times for a page, I'm getting 5-7 ms total.
    So it has to be doing something.

  • Simple Query Caching Question

    I have a .cfm template that is used to render a handful of pages of my website. Each page is a department, for example, like "Arts & Entertainment", "Health", "Finance", and so on. On each of these pages (rendered by this same template) there is a common element... a list of our top ten articles. I have used the "cachedwithin" feature to cache the query for a 3-hour period.
    My question is this...
    Since it is a single template generating these department pages, the "top articles" query is exactly the same in terms of query name, datasource... only the SQL statement (which uses a "WHERE department_id = X" statement) is different. Let's say I have ten departments rendered by this template... should it be caching all ten queries, regardless of the names being the same?

    Hopefully you only have one cfquery tag, located in the .cfm template, and you are using a variable in your where clause. If not, you are not being efficient with your code.
    If you do have just one cfquery tag with a cachedwithin attribute, ColdFusion will cache a query each time your variable changes.

  • Result cache question

    I create and populate the following table in my schema:
    create table plch_table (id number, time_sleep number);
    begin
      insert into plch_table values (1, 20);
      commit;
    end;
    Then I create this function (it compiles successfully, since my schema has EXECUTE authority on DBMS_LOCK):
    create or replace function plch_func
      return number
      result_cache
    is
      l_time_sleep number;
    begin
      select time_sleep
        into l_time_sleep
        from plch_table
       where id = 1;
      dbms_lock.sleep(l_time_sleep);
      return l_time_sleep;
    end;
    I then start up a second session, connected to the same schema, and execute this block:
    declare
      res number := plch_func;
    begin
      null;
    end;
    Within five seconds of executing the above block, I go back to the first session and I run this block:
    declare
      t1 number;
      t2 number;
    begin
      t1 := dbms_utility.get_time;
      dbms_output.put_line(plch_func);
      t2 := dbms_utility.get_time;
      dbms_output.put_line('Execute in '||round((t2-t1)/100)||' seconds');
    end;
    What will be displayed after this block executes?
    And the result is:
    20
    Execute in 30 seconds
    However, I don't understand why. I mean, what is going on behind this? Why is the result 30? Could somebody tell me?

    Honestly, before yesterday's PL/SQL Challenge question, I had no idea how this worked either. This is very much a deep-internals question; you'd likely have to go looking for a very specialized presentation or blog post to get more detail (or you'd have to do the research yourself). And even then, it's relatively unlikely that it would go into much more detail than the PL/SQL Challenge answer did. Julian Dyke's Result Cache Internals (PPT) is probably one of the more detailed presentations about the internals of the result cache.
    The set of valid statuses for a result cache object is documented in the Oracle Database Reference entry for the v$result_cache_objects view. The two 10-second timeouts are controlled by the database- and session-level settings of the undocumented resultcache_timeout parameter (which, based on a blog post by Vladimir Begun, was set to 60 seconds in 11.1.0.6 and changed to 10 seconds in 11.1.0.7).
    Justin
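    For what it's worth, the arithmetic works out if the first session, finding the plch_func entry still unpublished while the second session sleeps, waits out the roughly 10-second resultcache_timeout described above and then executes the 20-second function body itself. Here is a sketch for watching this from a third session, using the documented v$result_cache_objects view and DBMS_RESULT_CACHE package (the LIKE pattern is just a guess at how the entry is named):
    -- Watch the entry's status (e.g. New -> Published) while the test runs:
    select id, type, status, name
      from v$result_cache_objects
     where name like '%PLCH_FUNC%';
    -- Dump a memory report, or invalidate everything manually
    -- (requires appropriate privileges):
    set serveroutput on
    exec dbms_result_cache.memory_report
    exec dbms_result_cache.flush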

  • Some general portal caching questions

    Hi experts,
    I have some general questions regarding caching functions in portal.
    1. In System Administration -> Navigation I can activate the navigation cache. By default there are 3 connectors: Collaboration Connector, ROLES, and gpn.
    I guess the Collaboration Connector caches collaboration content and ROLES caches the content of the role-based navigation? Is that correct? And what is the gpn connector?
    2. Does this cache only cache the navigation structures, not the iViews and their content?
    3. For some iViews and pages I can activate caching in the PCD with certain cache levels. That caching is not related to navigation caching?
    4. I can't activate caching for Web Dynpro Java iViews and Web Dynpro Java proxy pages. Is that correct? If not, how can I achieve it? Those settings are deactivated for me, so I can't activate them.
    5. In Visual Admin I can activate a navigation cache under com.sap.portal.prt.sapj2ee. Is this option related to the setting under System Administration -> Navigation in the portal? I activated the option in the portal, but VA still showed it as not activated.
    I've crawled through some documentation but couldn't find exact information.
    Thanks and regards
    Manuel

    Hi,
    1. GPN is Guided Procedures Navigation connector
    2. Yes, only navigation nodes are cached (top-level and detailed navigation nodes)
    3. Here it is PCD caching, which has nothing to do with navigation caching
    4. I never tried this, but it looks like what you say is true.
    5. What you see in VA is the old caching mechanism; it is obsolete and can be ignored.
        You should only use the options under System Administration -> Navigation
    (See the help topic "Changes in the Navigation Cache".)
    Regards,
    Praveen Gudapati

  • Explain plans - caching question

    I am trying to get some explain plan information for some queries that we have running poorly. Our application uses some alter session statements to set the optimizer_mode to all_rows. I am trying to determine if we need to include the optimizer_index_cost_adj parameter and possibly the optimizer_index_caching parameter.
    Using SQL Developer I can easily run the different alter session statements needed and then get an explain plan for the query in question. However, I am seeing some weird results: for example, long query times for queries that have a relatively low cost. On the flip side, I am seeing some queries that have a higher cost return quicker.
    I am doing this many times over and am wondering what impact using the same query could have. Is this query being cached in Oracle? To get an accurate explain plan would I need to clear a cache each time?
    Also, in the explain plan is the cost figure the most important? How is it that a query can have a low cost and take longer to complete than the same query with a higher cost and some different parameters?

    There is no fixed road map for what to look for in an explain plan, but some common observations can be made: full table scans due to poor use of indexing, inappropriate hints forcing index use where a full table scan would be faster, unnecessary sorting, Cartesian joins, outer joins, and anti-joins.
    Lastly, I would say that every database runs on different hardware with a different configuration, and these things can affect the Oracle optimizer.
    So, if my statistics are current, what should I be taking out of the explain plan? I always assumed cost was the main indicator of the performance of a query.
    As Justin already said, cost is not what should be considered when tuning a query; the access path is. And beyond the explain plan's access path there are other tools, e.g. TKPROF, which will show you the plan as well as other information that should be considered when tuning a query.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm
    Khurram
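    As a concrete way to compare the optimizer's estimate with the work actually done, here is a hedged sketch using the standard DBMS_XPLAN API (10g or later; the table and predicate are placeholders):
    -- Estimated plan (what SQL Developer's explain plan shows):
    explain plan for
      select * from my_table where my_col = 42;
    select * from table(dbms_xplan.display);
    -- Actual row counts next to the estimates: run the query once with the
    -- GATHER_PLAN_STATISTICS hint, then display the last execution's statistics:
    select /*+ gather_plan_statistics */ * from my_table where my_col = 42;
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    Large gaps between estimated and actual row counts are the usual reason a "cheap" plan runs slowly: the cost figure is only the optimizer's estimate, not a measurement, so it is only as good as the statistics behind it.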

  • Global cache question

    Hi,
    i've a question about global cache (11g)
    Are blocks shared across all nodes when one instance requests them (a select), or only when there are transactions?
    I think in both cases, correct?

    In a document I read about cache synchronization:
    "In an Oracle RAC environment, when users execute queries from different instances, instead of the DBWR process having to retrieve data from the I/O subsystem every single time, data is transferred (traditionally) over the interconnect from one instance to another. (In Oracle Database 11g Release 2, the new "bypass reader" algorithm used in the cache fusion technology bypasses data transfer when large numbers of rows are being read and instead uses the local I/O subsystem from the requesting instance to retrieve data.) This provides considerable performance benefits, because latency of retrieving data from an I/O subsystem is much higher compared to transferring data over the network. Basically, network latency is much lower compared to I/O latency."
    So blocks are shared across all instances in the case of queries that retrieve small numbers of rows.
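    For what it's worth, block shipping for plain queries can be observed in the global cache statistics. A small sketch (statistic names as they appear in 10g/11g RAC; consistent-read blocks are shipped for queries, current blocks for changes):
    select inst_id, name, value
      from gv$sysstat
     where name in ('gc cr blocks received', 'gc current blocks received');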

  • OBIEE 11g caching question - cross database joins

    Hi, I'm seeing something strange (a.k.a. not wanted) in OBIEE (11g, not sure that version matters).
    I have a simple data mart that contains spend information. The supplier dimension contains keys that can be used to join it to detailed supplier information and supplier address information in our ERP system (that sits in a different database / on a different box). In the OBIEE physical layer I've created a cross database join between the supplier dimension table and the ERP tables that contain the address info.
    Here's the odd behavior I'm seeing. If I write an answers request to select the supplier, some address info, and total spend for fiscal year 2010, I'm seeing OBIEE fire off two queries (this I expect):
    A) Select supplier, address key, and total spend for fiscal year = 2010 against the spend mart
    B) Select address_key and associated address info against the ERP system (no limit on this query; it pulls back all rows from the address table)
    OBIEE then does an internal join itself and serves up the results, and everything is correct. But here's what's "wrong": if I then run the exact same Answers request but change the fiscal year to 2009, I again see OBIEE firing off the two queries. What I expected and/or wanted to see is that, since the entire result set from query #B doesn't change at all, it wouldn't have to rerun this query. However, it seems to be rerunning it.
    Is there any way to get #B to cache so that, for any subsequent query that contains supplier address info, OBIEE can pull it from cache instead of rerunning the query (which is pretty slow)? I really thought it would do that, but it doesn't seem to be.
    Thanks!
    Scott

    Hi,
    Could you give a bit more context for this case? The table in SQL Server: is it a dimension, and the one in the Oracle DB a fact? I am guessing you have set up the driving table here. Have you tried taking it off and letting the BI Server do the filtering in memory?
    -Dhar

  • Client Result Cache Question

    Hi,
    I am not sure whether the new "Client Result Cache" feature for OCI connections is an Enterprise Edition-only feature or not.
    The licensing information at http://docs.oracle.com/cd/E11882_01/license.112/e10594/editions.htm#CJACGHEB shows these three features as Enterprise Edition only:
    Client Side Query Cache
    Query Results Cache
    PL/SQL Function Result Cache
    Which of these refers to the Client Result Cache? Is it the Query Results Cache? Or something else?
    As a hint, I am unable to activate the feature on Standard Edition databases, but I am not sure if this is the reason or if I am just making some mistakes in configuration/testing.
    Thanks in advance
    Joerg

    We stopped all tests, because it seems to be an Enterprise Edition-only feature.
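    For anyone retracing these tests, here is a minimal sketch of the server-side pieces usually involved in enabling the client result cache (parameter name and table annotation per the 11g Release 2 documentation; the table name is a placeholder):
    -- Give the client result cache some memory (static parameter, needs a restart):
    alter system set client_result_cache_size = 32M scope = spfile;
    -- Annotate a table so qualifying client queries are cached without a hint:
    alter table my_lookup_table result_cache (mode force);
    -- Or opt in per query:
    select /*+ result_cache */ * from my_lookup_table;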

  • ADF/BC4J EO and VO cache question

    I created an EO off a simple table and a corresponding VO. I created a Struts/JSP app to display, add, and update the records. When I add or update records through the Struts/JSP app, I see my add and updates on the display page --- this is expected. However, when I add or update records outside of the ADF/BC4J framework (i.e., via SQLPlus), I don't see my changes from the Struts/JSP display page. How do I turn caching off at the EO and VO level so that they are always getting the latest and greatest from the database?

    Where do I put the code? In the *Impl.java file of the application module? If so, where? In the corresponding getter method for that view object?

  • Basic caching questions

    I have been going through the setup of caching for the last couple of days, trying to get the initial core functionality working, and have not made much progress.
    I believe part of the problem is that there are several different types of caching, all called by the same name on websites, so I might be following directions for one type and then switching to another without knowing it.
    I want to set up role caching, and I thought I had it set up, but when I tried to access the cache, I got this error:
    {"ErrorCode<ERRCA0042>:SubStatus<ES0001>:Invalid identifier: '[Cache role name or Service Endpoint]'. Check cache rolename/endpoint in configuration."}
    I did find that there is a spot in the configurations for filling in this information.  But since I am using role caching, it doesn't seem like it would apply to my situation.
    Also, I spent a few hours trying to get windowsAzure.caching 2.5 installed; no matter what I do, it installs a debug module that crashes everything if you don't remove it.
    So basically I guess I am lost, trying to get caching to work and running into issues at every turn.
    Can you please tell me why it is asking for distributed caching information when I am trying to use role caching, what I am doing wrong, and where there is an example on the web that actually works with the latest versions?
    Well, I got past that particular error, but now I have a new issue. When it hits the initialization line of code for the cache, it never continues. No error, no breakpoint hit; try/catch doesn't help; it just sort of terminates there or something.
    But the program keeps running.
    Also, I can't seem to find any example that actually fits for a full code block. The comments in the web.config said to use the line below, but IntelliSense doesn't recognize this code (though it does recognize something similar). The initialization class that it is looking for does not exist either.
    CacheFactory.LoadConfiguration(name)

    Daniel,
    If you are trying to use the In-Role cache, here are a few steps you might want to verify:
    1. Enable caching on the role: on the Caching tab, select the Enable Caching check box.
    2. Install the Azure caching NuGet package; it will add the necessary references to your project along with some settings in your configuration file.
    3. You will need to specify the name of the identifier in the configuration even if you are using the In-Role cache, i.e. specify your role name as the identifier.
    e.g.
    <dataCacheClients>
    <tracing sinkType="DiagnosticSink" traceLevel="Verbose"/>
    <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="WebRole1"/>
    </dataCacheClient>
    </dataCacheClients>
    4. Make sure that the data cache client points to the same named cache which you created in step 1; by default it is named 'default'. The cache named default is what's returned when, in code, you call the GetDefaultCache method on the DataCacheFactory object.
    You should be able to use In-Role caching after all these settings. As for the second issue you mentioned, I assume it occurs in debug mode when you try to initialize the DataCacheFactory class.
    You can refer to my blog post, which talks about the same issue and its solution:
    http://passionatetechie.blogspot.com/2014/09/unresponsive-datacachefactory.html
    For more information about how to use the In-Role cache, there is good documentation; see "How to: Use an Azure In-Role Cache in Azure Cloud Service":
    https://msdn.microsoft.com/en-us/library/azure/jj131262.aspx
    Bhushan

  • Local Cache question

    Hi,
    I want to create a partitioned cache in a cluster. I also want entries in the partitioned cache to be removed automatically after a specified amount of time, to be able to reduce the size of the distributed data set.
    As I understand it, this can be done by configuring the local-scheme on each node using the expiry-delay and flush-delay functionality; is this correct? Will expired entries be removed immediately (freeing memory) each time the cache is periodically flushed?
    I do not want to restrict the size of local caches in any other way, so I don't want to use the eviction-policy at all. If I set high-units to 0, will this completely disable the eviction functionality in the local cache?
    Regards
    Andreas

    Hi Andreas,
    This is correct. Eviction and expiry are independent concepts. Expiry is a logical concept, so the data may remain in memory until the flush is performed. The flush is triggered by the next cache access (after the minimum flush period) rather than by a "timer" event.
    For more information please refer to the local scheme configuration element documentation.
    Jon Purdy
    Tangosol, Inc.

  • Procedure cache question

    Does SQL Server cache the entire TSQL batch or each individual TSQL statement in a batch? For cache-match purposes, does it match on the entire batch, or will each statement within the batch attempt to match a previously cached plan?
    When I batch together two TSQL queries in Management Studio and query the DMVs (dm_exec views/functions), I get two separate rows back, where each row has a different plan_handle, sql_handle, query_hash, query_plan_hash, and text. The text for each row represents a single query statement rather than the entire batch that the MSDN docs suggest.
    select * from mytable1 where id = 1
    select * from mytable2 where id = 2
    go
    SELECT 
    cp.objtype
    ,qs.plan_handle
    ,qs.SQL_HANDLE
    ,QS.query_hash
    ,QS.query_plan_hash
    ,ST.[TEXT]
    ,cp.usecounts
    ,QS.EXECUTION_COUNT
    ,qs.total_physical_reads
    ,qs.total_logical_reads
    ,P.query_plan
    FROM [SYS].[DM_EXEC_QUERY_STATS] AS [QS] 
    INNER JOIN SYS.dm_exec_cached_plans cp on cp.plan_handle = qs.plan_handle
    CROSS APPLY [SYS].[DM_EXEC_SQL_TEXT]([QS].[SQL_HANDLE]) AS [ST] 
    CROSS APPLY [sys].[dm_exec_query_plan]([qs].[plan_handle]) as [p]
    WHERE [st].[text] like '%mytable1%' or [st].[text] like '%mytable2%'
    ORDER BY 1, [qs].[execution_count] desc;
    go
    The MSDN docs suggest that the sql_handle from dm_exec_query_stats represents a given TSQL batch of statements. For caching purposes, what constitutes a batch?
    SQL2008

    SQL Server caches the plan for the entire batch; the match when looking for a cache entry is based on a hash that is computed over the entire batch. Note that the hash is computed over the batch text as-is. That is, everything counts: spaces, comments, and lowercase and uppercase count differently.
    But that is not all. If two users submit the same query batch, and the batch includes one or more table references where the schema is not specified, and the users have different default schemas, that will result in two cache entries.
    Furthermore, there are a number of SET options that must match for a cache hit. For instance, different settings for ARITHABORT will result in two cache entries.
    As I said, SQL Server initially compiles a plan for the entire batch. However, during execution, recompiles may occur for a number of reasons, and recompilation is on the statement level. This causes that part of the plan to be replaced, and as I recall the plan_handle remains the same.
    What happens in your case is something called autoparameterisation. You may note that the query text in the cache has changed, and reads:
    (@1 tinyint)SELECT * FROM [dbo].[mytable2] WHERE [id]=@1
    That is not what you submitted. If you take a query batch where autoparameterisation does not occur, you will still see two entries in the output, because there is always one row per statement, but the sql_handle and plan_handle will be the same. For instance, try this:
    create table mytable1 (id int NOT NULL)
    create table mytable2 (id int NOT NULL)
    go
    DBCC FREEPROCCACHE
    go
    select * from dbo.mytable1 where id in (SELECT id FROM dbo.mytable2)
    select * from dbo.mytable2 where id in (SELECT id FROM dbo.mytable1)
    go
    SELECT
    cp.objtype
    ,qs.plan_handle
    ,qs.sql_handle
    ,qs.statement_start_offset
    ,qs.statement_end_offset
    ,qs.query_hash
    ,qs.query_plan_hash
    ,st.[text]
    ,cp.usecounts
    ,qs.execution_count
    ,qs.total_physical_reads
    ,qs.total_logical_reads
    ,p.query_plan
    FROM [sys].[dm_exec_query_stats] AS [qs]
    INNER JOIN sys.dm_exec_cached_plans cp on cp.plan_handle = qs.plan_handle
    CROSS APPLY [sys].[dm_exec_sql_text](qs.[sql_handle]) AS st
    CROSS APPLY [sys].[dm_exec_query_plan]([qs].[plan_handle]) as [p]
    WHERE [st].[text] like '%mytable1%' or [st].[text] like '%mytable2%'
    ORDER BY 1, [qs].[execution_count] desc;
    go
    DROP TABLE mytable1, mytable2
    I have added the column statement_start_offset and statement_end_offset, so that you can see the entries are per statement.
    By the way, all the DMVs are spelled in lowercase only, and I recommend that you stick to this. One day, you may need to run your queries on a case-sensitive system, and things like SYS.DM_EXEC_QUERY_STATS will not work for you in this case.
    Erland Sommarskog, SQL Server MVP, [email protected]
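    To make the SET-options point concrete, a small sketch in the same spirit as the script above (reusing its tables; on a default instance this should show two plan cache entries for byte-identical batch text):
    create table mytable1 (id int NOT NULL)
    create table mytable2 (id int NOT NULL)
    go
    DBCC FREEPROCCACHE
    go
    SET ARITHABORT ON
    select * from dbo.mytable1 where id in (SELECT id FROM dbo.mytable2)
    go
    SET ARITHABORT OFF
    select * from dbo.mytable1 where id in (SELECT id FROM dbo.mytable2)
    go
    -- Same text, two entries, because the SET options differ:
    select cp.plan_handle, cp.usecounts, st.[text]
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) as st
    where st.[text] like '%mytable1%'
    go
    DROP TABLE mytable1, mytable2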

  • OLAP Caching question

    Hi
    I'm not sure if this is the right forum, but I figured I would ask anyway. Using the broadcaster, I have configured a couple of variable settings on a query to fill the OLAP cache.
    Is there a way to update the parameter settings via an API instead of going into each query manually and updating the parameter values?
    For example, say I want to cache only two months' worth of data. I set the query settings for Jan and Feb. Now I want to configure Mar and Apr. Instead of manually opening up the settings, entering the values, and saving, I want to do this using code if possible.
    The reason is that if I have 100 queries to set this on, it would be much easier to run this through code than to update the values manually.
    Thanks in advance,
    Daniel

    Check the table RSRPARAMETRIZA; its content field stores the variant values. You can update the variant values dynamically using ABAP.
    This article shows you how to do that:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f048c590-31a4-2c10-8599-bd01fabb93d4
