How to understand if the OLAP cache is used

Hi,
I am trying to understand whether my query uses the OLAP cache or not.
First I executed the report with specific selections. Then I executed the report again with the same selections, but it took almost the same time to produce the report output. There are lots of exception aggregations in my query, so I wonder whether the OLAP cache will help me or not.
So, how can I tell whether the query uses the OLAP cache? (I am using the RSRT screen, with 'do not use cache' left blank.)
Thanks
Ozan

Hi
Check in transaction RSRT. Enter your query's technical name and, on the Properties tab, you can see whether the OLAP cache is set to be used in the Cache Mode field.
You can check whether the cache entries you create are actually used if, also from RSRT, you go to Cache Monitor -> Main Memory and expand the folder "Query Directory".
Now search for your query's technical name.
A folder with your query's technical name will appear if a cache mode is enabled.
Right-click it and choose Details. Here you will see "Read Accesses"; the number tells you how many times this entry has been read.
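If you prefer to check the query's cache setting from ABAP rather than through the screens, here is a minimal sketch. It assumes the query directory table RSRREPDIR and its COMPID and CACHEMODE fields; field names can differ by release, so verify them in SE11 before relying on this.
" hedged sketch: read a query's cache mode from the query directory
" assumption: RSRREPDIR has fields COMPID (query name) and CACHEMODE
data l_cachemode type c.
select single cachemode
  into l_cachemode
  from rsrrepdir
  where compid = 'MY_QUERY_TECHNAME'. " hypothetical query name
if sy-subrc = 0.
  write: / 'Cache mode:', l_cachemode. " '0' would mean the cache is inactive
else.
  write: / 'Query not found in RSRREPDIR.'.
endif.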
regards
jacob

Similar Messages

  • How to empty out the OLAP Cache for a query?

    Hi experts,
    I think the OLAP cache for a query should be reset (invalidated) after a data load (right?), if I have understood correctly how it works. Where can I check the OLAP cache for the query? I'd like to check how the OLAP cache changes for a specific query.
    I checked in the Cache Monitor but I haven't found any data.
    Francesco

    Hi
    You can use the Transaction code RSRCACHE
    Regards
    N Ganesh

  • OLAP Cache filling using Broadcasting

    Hi All,
    We have two BEx queries which take more than 30 minutes to complete. One is an inventory cube query and the other is a query with a lot of exception aggregation.
    In the statistics, both queries show that around 90% of the execution time is spent at the OLAP level.
    We created broadcasting settings to fill the OLAP cache for both queries and scheduled them in the background. In RSRCACHE, we can see entries for both queries.
    But the execution times still have not improved; both queries take the same time to execute as before.
    Now I doubt whether the OLAP cache supports non-cumulative key figures and exception aggregation at all. Is this mentioned in any documentation?
    Please let me know your suggestions.
    Thanks in advance,
    Regards,
    Bijesh

    Hi,
    You can carry out the following checks:
    1) Check the parameters of the global and local cache. In general the global cache size should be larger than the local cache size. You can configure the cache parameters in RSCUSTV14, or simply ask Basis to optimise them according to system usage.
    2) Also, since you have already checked RSRCACHE, try changing a few options in the query properties: change the read mode to X (for large amounts of data); by default it is H.
    3) Try changing the cache mode to "with swapping" instead of the default "without swapping"; in the case of several application servers, choose mode 4.
    Then click Environment -> Query Mass Maintenance.
    See if these work out.
    regards
    laksh

  • How to understand how a WS makes use of SOAP

    Hi All
    With WLS 8.x, we can easily build a web service from a HelloWorld Java class and test it
    like the following:
    String wsdlUrl = "http://localhost:7001/web-services/HelloWorld?WSDL";
    HelloWorld service = new HelloWorld_Impl( wsdlUrl );
    HelloWorldPort port = service.getHelloWorldPort();
    String result = port.get( ... );
    What I don't understand is: how is SOAP applied here, and how can it be kept
    track of during the request and response?
    Thanks lots
    John
    Toronto

    Hi,
    If you want to keep track of the SOAP message, you can use a handler class (extending GenericHandler), which allows you to read and modify the message or add attachments. In the client code you need to register this handler; its request and response methods will then be called before and after the web service invocation.
    Cheers,
    Rana...
    Internet Inter-Operate.......Webservices everywhere....

  • How can I create an OLAP cube using Oracle 10g

    I want to create a complete MOLAP cube, but I find that I cannot do it in Oracle Database 10g, Analytic Workspace ...
    What components are needed to create a cube, and how do I do it?

    Please refer to the OLAP Application Developer's Guide (10.1.0.4) found at:
    http://www.oracle.com/technology/products/bi/olap/olap.html
    By the way, the doc highlights Analytic Workspace Manager 10g for Oracle Database 10.1.0.4.0. The required RDBMS 10.1.0.4.0 patch set should be available any day now on MetaLink for Linux and Windows. Soon thereafter, the AWM 10g client will be posted as a stand-alone to OTN and MetaLink.

  • Query views are not using OLAP cache

    Hi,
    I am trying to pre-fill the OLAP cache with data from a query so as to improve the performance of query views. 
    I have read several documents on the topic, such as “How to… Performance Tuning with the OLAP Cache” (http://www.sapadvisors.com/resources/Howto...PerformanceTuningwiththeOLAPCache$28pdf$29.pdf)
    As far as I can see, I have followed the instructions and guidelines in detail on how to set up the cache and pre-fill it with data. However, when I run the query views they never use the cache. For example, point 3.4 in the abovementioned document does not correspond with my results.
    I would like some input on what I am doing wrong:
    1. In RSRT I have Cache mode = 1 for the specific query.
    2. The query has no variables, but the following restrictions (in filter): 0CALMONTH = 09.2007, 10.2007, 11.2008 and 12.2007.
    3. I have one query view with the restriction 0CALMONTH = 10.2007, 11.2008 and 12.2007.
    4. I have a second query view, which builds on the same query as the first query view. This second query view has the restriction 0CALMONTH = 11.2008 and 12.2007.
    5. There are no variables in the query.
    6. I run the query. 
    7. I run the first query view, and the second query view immediately after.
    8. I check ST03 and RSRT and see that cache has not been used for either of the query views.
    Looking at point 3.4 in the abovementioned document, I argue that the three criteria have been fulfilled:
    1. Same query ID
    2. The first query view is a superset of the second query view
    3. 0CALMONTH is a part of the drill-down of the first query view.
    Can someone tell me what is wrong with my set-up?
    Kind regards,
    Thor

    You need to use the following process type in your process chain: "Attribute Change Run (ATTRIBCHAN)". This process needs to be incorporated into the process chains which load data into the provider on top of which your query is based.
    See the following links on how to build it:
    https://help.sap.com/saphelp_nw73/helpdata/en/4a/5da82c7df51cece10000000a42189b/frameset.htm
    https://help.sap.com/saphelp_nw70ehp1/helpdata/en/9a/33853bbc188f2be10000000a114084/content.htm
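    To double-check that the chain really contains such a step, a small sketch like the following could list the chain's processes. It assumes the process chain detail table RSPCCHAIN with fields CHAIN_ID, OBJVERS, TYPE and VARIANTE (verify the names in SE11), and 'ZMY_CHAIN' is a made-up chain name:
    " hedged sketch: look for the attribute change run step in a chain
    data: lt_chain type standard table of rspcchain,
          wa_chain type rspcchain.
    select * from rspcchain into table lt_chain
      where chain_id = 'ZMY_CHAIN' " hypothetical chain name
        and objvers = 'A'. " active version, assumed field
    loop at lt_chain into wa_chain where type = 'ATTRIBCHAN'.
      write: / 'Attribute change run step found, variant:', wa_chain-variante.
    endloop.
    if sy-subrc <> 0.
      write: / 'No ATTRIBCHAN step found in this chain.'.
    endif.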
    cheers
    m./

  • How to Create an OLAP Cube in DEV using SSAS from a Raw file system backup from Production?

    How do I create an OLAP cube in DEV using SSAS from a raw file system backup from Production? I don't have an .abf file available. Two partitions in production are missing data. We were able to get back a file system backup which contains the files for these two partitions.
    How do I create a cube in Dev using this file system backup?
    We are on SQL Server 2008 R2.
    Thanks,


  • How to check/verify running sql in lib cache is using updated statistics of table

    How can I check/verify that a running SQL statement in the library cache is using the updated statistics of the tables in its FROM clause?
    One of my application tables is highly busy, i.e. frequent updates/inserts/deletes.
    We gather table stats every 30 minutes.

    Hello, "try dynamic sampling" = think "outside the box"; maybe it hits two birds with one stone.
    As a matter of fact, I was just backing up your statement: "30 minutes seems pretty extreme".
    cheers

  • I purchased the teacher and student lightroom 5. I put in my code and uploaded my evidence to show that I am a teacher. I am guessing it was sent through to someone. How do I know when I can use the software? I don't understand what to do next.


    Please refer to the link below for more information:
    Education FAQ

  • Do Query Views use existing OLAP Cache or create their own

    Hi, I'm looking to find out whether a query view will use the parent query's OLAP cache or create its own.
    I imagine it would use the existing cache, but create additional cache entries if it drilled down on a free characteristic that wasn't in the original query output. Any comments?

    If there are any entries in the cache which meet the drilldown and filter criteria of the view, it will be able to make use of the existing cache.
    As you mentioned, for any additional drilldowns for which there are no entries in the cache, it will have to read from the database.

  • How can we make use of cache management using Presentation Services

    Hi all,
    How can we make use of cache management using Presentation Services?
    Thanks
    Sreedhar

    Hi,
    I have one small question: the first time I submitted a report, the data came from the database. Then I interchanged the columns (col1 to col2 and col2 to col1) and added col1/col2. Will this come from the cache or from the database? (My question is whether any in-memory calculations are done by the cache.)
    Thanks
    Sreedhar

  • How to purge data cache table using command line

    Hi:
    Is there a way to purge the data cache table using the command line?
    thanks!

    Thanks, Mike.
    I'm thinking about the ldconsole provided with ALDSP.
    The ldconsole has a link for purging the cache. Is there anything I can leverage from there? Is it a JMX component that I can call?

  • Warming up the OLAP cache

    Hi,
    I would like to schedule execution of some queries in order to put the results in the OLAP cache for fast use.
    Each user runs the query with a very restricted selection: one period, one node of the cost center hierarchy and one currency type! All the selections are obligatory and single values.
    I created a "super" query with the same characteristics, all in the rows or columns, with non-mandatory variables, ...
    I scheduled the "super" query and it created an entry in the OLAP cache.
    When I now run the production query (another query on the same cube, for one node, one period, one currency), the system doesn't use the OLAP cache, but creates a new line in the OLAP cache.
    It is impossible to schedule every combination.
    Is there a way to warm up the cache with a different query?
    Should the seeding be done with the Broadcaster or the Reporting Agent (in NW2004s)? I think these should be the same ?!
    Any suggestion to fill the cache for my situation?
    Thanks, Tom

    The OLAP cache is at the query level, so to warm up the cache, you must run the actual query that the users will run, not just a similar or "super" query.
    SDN has some documentation on effectively using the global cache which would be good to review.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9f4a452b-0301-0010-8ca6-ef25a095834a
    The other key thing to understand is how the variable setting "Can be changed during query navigation" works. If this setting is NOT set on your variables, then in order for a query to access the data in the cache, the query must have been run previously with the exact same variable values, as the variable values are saved as part of the cached information.
    Selecting the "Can be changed during navigation" setting changes the way the system treats the variable: it behaves as a filter. This has some implications for the way variable prompts appear, so there are user impacts. In BEx, a variable is normally presented for input when you first run the query and whenever you refresh it; with the setting changed to "Can be changed...", the variable prompt is presented the first time the query is run but NOT when a refresh is done. It now behaves as if you had added a filter to the query.
    So if you change all the variables used by the query (or create new ones) to "Can be changed...", then run your query through the Reporting Agent or Information Broadcasting wide open, or with restrictions that encompass all your users' query executions, the subsequent user executions will use the global OLAP cache.

  • Pre-fill the OLAP cache for a query on a data change event of an InfoProvider

    Hi Gurus,
    I have to pre-fill the OLAP cache for a query which has bad performance.
    I read the document 'Periodic Jobs and Tasks in SAP BW',
    which suggested some steps to do this.
    I have created the BEx Broadcasting setting for scheduling job execution upon data change in the InfoProvider.
    Thereafter the document says: "an event has to be raised in the process chain which loads the data to this InfoProvider. When the process chain executes the process 'Trigger Event Data Change (for Broadcaster)', an event is raised to inform the Broadcaster that the query can be filled in the OLAP cache."
    How can this be done? Please provide the proper steps.
    Answers are always appreciated.
    Thanks.

    Hi
    You need to create a process chain, or use the existing process chain with which you load your current solution; just add the data change event process type to the process chain and, inside it, add the InfoProviders that are going to be affected.
    Once you are done with this, go to the Broadcaster and create a new setting for that query. You will see the option for execution upon data change in the InfoProvider; just choose that and create the settings.
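    For completeness: the "Trigger Event Data Change (for Broadcaster)" process type raises the event for you, so you normally do not code anything. But if you ever need to raise a background event from your own ABAP (for example at the end of a custom load step), the standard function module BP_EVENT_RAISE can be used. This is only a hedged sketch; 'Z_BW_DATA_CHANGED' is a hypothetical event that would first have to be defined in SM62:
    " hedged sketch: raise a background event from ABAP
    call function 'BP_EVENT_RAISE'
      exporting
        eventid                = 'Z_BW_DATA_CHANGED' " hypothetical event
      exceptions
        bad_eventid            = 1
        eventid_does_not_exist = 2
        eventid_missing        = 3
        raise_failed           = 4
        others                 = 5.
    if sy-subrc <> 0.
      write: / 'Event could not be raised, sy-subrc =', sy-subrc.
    endif.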
    hope it helps

  • Header, Line Item and Cache Techniques Using Hashed Tables

    Hi,
    How can I work with header and line-item tables, and a cache technique, using hashed tables?
    Thanks,
    Shah.

    Hi,
    Here is an example to clarify the ideas:
    In general, every time you have a header -> lines structure, the lines have a unique key consisting of at least the header key plus one or more fields. I'll make use of this fact.
    I'll try to give an example of how to work with header -> line items and a cache technique using hashed tables.
    Just suppose that you need a list of all the material movements '101'-'901' for a certain range of dates in mkpf-budat. We'll extract these fields:
    mkpf-budat
    mkpf-mblnr,
    mseg-lifnr,
    lfa1-name1,
    mkpf-xblnr,
    mseg-zeile
    mseg-charg,
    mseg-matnr,
    makt-maktx,
    mseg-erfmg,
    mseg-erfme.
    I'll use two caches: one for the lfa1-related data and the other for the makt-related data. Also, I'll only describe the data-gathering part; the display of the data is left to your own imagination.
    The main ideas are:
    1. As this is an example I won't use an inner join. If properly designed, a join may be faster.
    2. I'll use four hashed tables: ht_mkpf, ht_mseg, ht_lfa1 and ht_makt to get the data into memory. Then I'll collect all the data I want to list into a fifth table, ht_lst.
    3. ht_mkpf should have (at least) mkpf's primary key fields: mjahr, mblnr.
    4. ht_mseg should have (at least) mseg's primary key fields: mjahr, mblnr and zeile.
    5. ht_lfa1 should have a unique key by lifnr.
    6. ht_makt should have a unique key by matnr.
    7. I prefer using header lines because they make the code easier to follow and understand. The loss of speed isn't significant (in my experience at least).
    Note: When I've needed to work from the header down to the item lines, I added a counter in ht_header that maintains the count of item lines, and an id in ht_lines, so I can read a given item line directly by key. But this is tricky to implement and to follow. (Nevertheless I've programmed it and it works well.)
    The data will be read in this sequence:
    select data from mkpf into table ht_mkpf
    select data from mseg into table ht_mseg, restricted to the entries of ht_mkpf (for all entries)
    loop at ht_mseg (lines)
    filter unwanted records
    read the caches for lfa1 and makt
    fill ht_lst and collect the data
    endloop.
    " tables
    tables: mkpf, mseg, lfa1, makt.
    " internal tables
    data: begin of wa_mkpf, " header
    mblnr like mkpf-mblnr,
    mjahr like mkpf-mjahr,
    budat like mkpf-budat,
    xblnr like mkpf-xblnr,
    end of wa_mkpf.
    data ht_mkpf like hashed table of wa_mkpf
    with unique key mblnr mjahr
    with header line.
    data: begin of wa_mseg, " line items
    mblnr like mseg-mblnr,
    mjahr like mseg-mjahr,
    zeile like mseg-zeile,
    bwart like mseg-bwart,
    charg like mseg-charg,
    matnr like mseg-matnr,
    lifnr like mseg-lifnr,
    erfmg like mseg-erfmg,
    erfme like mseg-erfme,
    end of wa_mseg.
    data ht_mseg like hashed table of wa_mseg
    with unique key mblnr mjahr zeile
    with header line.
    data: begin of wa_lfa1,
    lifnr like lfa1-lifnr,
    name1 like lfa1-name1,
    end of wa_lfa1.
    data ht_lfa1 like hashed table of wa_lfa1
    with unique key lifnr
    with header line.
    data: begin of wa_makt,
    matnr like makt-matnr,
    maktx like makt-maktx,
    end of wa_makt.
    data: ht_makt like hashed table of wa_makt
    with unique key matnr
    with header line.
    " result table
    data: begin of wa_lst,
    budat like mkpf-budat,
    mblnr like mseg-mblnr,
    lifnr like mseg-lifnr,
    name1 like lfa1-name1,
    xblnr like mkpf-xblnr,
    zeile like mseg-zeile,
    charg like mseg-charg,
    matnr like mseg-matnr,
    maktx like makt-maktx,
    erfmg like mseg-erfmg,
    erfme like mseg-erfme,
    mjahr like mseg-mjahr,
    end of wa_lst.
    data: ht_lst like hashed table of wa_lst
    with unique key mblnr mjahr zeile
    with header line.
    data: g_lines type i.
    select-options: so_budat for mkpf-budat default sy-datum.
    select-options: so_matnr for mseg-matnr.
    form get_data.
    " read the headers for the date range
    select mblnr mjahr budat xblnr
    into table ht_mkpf
    from mkpf
    where budat in so_budat.
    describe table ht_mkpf lines g_lines.
    if g_lines > 0.
    " read only the items belonging to those headers
    select mblnr mjahr zeile bwart charg
    matnr lifnr erfmg erfme
    into table ht_mseg
    from mseg
    for all entries in ht_mkpf
    where mblnr = ht_mkpf-mblnr
    and mjahr = ht_mkpf-mjahr.
    endif.
    loop at ht_mseg.
    " filter unwanted data
    check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
    check ht_mseg-matnr in so_matnr.
    " read the header line for this item
    read table ht_mkpf with table key mblnr = ht_mseg-mblnr
    mjahr = ht_mseg-mjahr.
    clear ht_lst.
    " note: this may be faster if you move field by field
    move-corresponding ht_mkpf to ht_lst.
    move-corresponding ht_mseg to ht_lst.
    perform read_lfa1 using ht_mseg-lifnr changing ht_lst-name1.
    perform read_makt using ht_mseg-matnr changing ht_lst-maktx.
    insert table ht_lst.
    endloop.
    endform.
    " implementation of the cache for lfa1
    form read_lfa1 using p_lifnr changing p_name1.
    read table ht_lfa1 with table key lifnr = p_lifnr
    transporting name1.
    if sy-subrc <> 0.
    clear ht_lfa1.
    ht_lfa1-lifnr = p_lifnr.
    select single name1
    into ht_lfa1-name1
    from lfa1
    where lifnr = p_lifnr.
    if sy-subrc <> 0. ht_lfa1-name1 = 'n/a in lfa1'. endif.
    insert table ht_lfa1.
    endif.
    p_name1 = ht_lfa1-name1.
    endform.
    " implementation of the cache for makt
    form read_makt using p_matnr changing p_maktx.
    read table ht_makt with table key matnr = p_matnr
    transporting maktx.
    if sy-subrc <> 0.
    clear ht_makt.
    ht_makt-matnr = p_matnr.
    select single maktx into ht_makt-maktx
    from makt
    where spras = sy-langu
    and matnr = p_matnr.
    if sy-subrc <> 0. ht_makt-maktx = 'n/a in makt'. endif.
    insert table ht_makt.
    endif.
    p_maktx = ht_makt-maktx.
    endform.
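    To actually run this, only a small driver is missing. A sketch, assuming everything above sits in one plain executable report (the report name is made up):
    report z_mm_movements_101_901. " hypothetical report name
    " ... the declarations, select-options and forms from above go here ...
    start-of-selection.
      perform get_data.
    " displaying ht_lst (for example with write lists or ALV) is left to the reader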
    Reward points if found helpful...
    Cheers,
    Siva.
