Query performance on Inventory Cube

Hi All,
        I have a query on an Inventory Cube with non-cumulative key figures. When I run the query with those key figures it takes 60 to 70 minutes; when I run the same query with the non-cumulatives removed, it displays results in 25 seconds. Is there any way we can improve query performance that is affected by non-cumulative key figures?
    I have checked the performance-related tools: RSRV on the cube and master data shows no errors; in RSRT > Execute + Debug, most of the query time is consumed in the data manager; and in ST03 the DB and data manager times, as well as the unassigned time, are high.
    I know that the query consumes time because of the non-cumulative key figures, as it needs to perform the calculations on the fly, but it is taking a lot longer than that should explain. I appreciate your inputs on this query in advance.
  I will reward points.
Regards
Satish Reddy

Hi Anil,
    It's nice to see you. We have compressed the cube with marker update, and we are using only two InfoSources for the cube (BF and UM). As there are 150 queries on that cube, I don't want to build an aggregate especially for that one query. I also tried a DB stats refresh, there is a process chain to delete and recreate indexes, and I analysed the cube and master data in RSRV, etc.; it didn't really help. Would you please suggest a good solution for this? I appreciate it in advance.
When I check the application log in the cube's Manage screen, it displays 'Mass Upsert of Markers', so I assumed that the markers are updated.
Regards
Satish Arra.

Similar Messages

  • Performance query on 0IC_C03 inventory cube

    Hello,
    I am currently facing performance problems on this cube. The query is on material groups, so the number of rows returned is not too high.
    The cube is compressed. Could aggregates be a solution, or do they not work well on this cube because of the non-cumulative key figures?
    Does anyone have any hints on speeding this cube up? (the only tip I see in the collective note is to always compress)
    Best regards
    Jørgen

    Hi Ruud,
    Once compression with marker update has been done, the latest balances are created automatically for inventory cube 0IC_C03.
    Historic movements are only required to show the stock status for a historic date (e.g. 02-01-2008).
    If users are not interested in checking the stock status from three years ago, old data can be removed from the cube using selective deletion.
    Go through the doc: [How To… Handle Inventory Management Scenarios in BW|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328?overridelayout=true]
    Srini

  • MDX query performance on ASO cube with dynamic members

    We have an ASO cube and we are using MDX queries to extract data from that cube. We are doing some performance testing on the MDX data extract.
    Recently we made around 15-20 account dimension members dynamic in the ASO cube, and it now takes around an hour and a half for the query to run on an empty cube. Earlier the query ran in 1 minute on the empty cube when there were no dynamic members.
    I am not clear why it takes so much time to extract data via MDX from an empty cube when there is nothing to extract. Performance has also degraded when extracting from the cube with data in it.
    Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
    I appreciate any insights on this issue.

    I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
    As an extreme example, I could write a member formula that counts every unique member combination in the cube and assigns it to multiple members; regardless of whether there is any data in the database, that formula is going to resolve itself when you query it, and it is going to take a lot of time. You are probably somewhere between that and a simple formula that doesn't require any overhead, so without seeing the MDX it is hard to say what might be causing the issue.
    As far as excluding members goes, there are various functions in MDX to narrow down the set you are querying:
    Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc.
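    For illustration only (the application, dimension and member names below are hypothetical), a sketch of using Except() to drop a couple of dynamic-calc account members from an extract query could look like this:

        /* hypothetical outline: exclude two dynamic Account members from the extract set */
        SELECT
          {[Period].Levels(0).Members} ON COLUMNS,
          NON EMPTY
            Except(
              [Account].Levels(0).Members,
              {[Account].[Dynamic Ratio 1], [Account].[Dynamic Ratio 2]}
            ) ON ROWS
        FROM [Sample.Basic]

    The same Except() set can be wrapped in a Crossjoin() with the other dimensions you extract, so the dynamic members never have to be evaluated.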
    Keep in mind that you did not make members dynamic, you made a hierarchy dynamic. That is not the same thing, and it does affect the way Essbase internally optimizes the database (stored vs. dynamic hierarchies), so that alone can have an impact as well.

  • Query performance: DSO or Cube?

    For a new report (two queries) we will receive a few hundred thousand service messages pushed into a PSA. These messages have various timestamps for the different statuses of the items that come in. The client would like to report on a very short-term basis (5-15 minutes, near real-time) on a subset of the data, selecting on date and time.
    The idea is as follows. From the PSA we load a subset of the fields to a DSO on a real-time basis, because the business needs operational reporting at very short notice. This DSO should only hold data for at most a few days, so its size won't grow beyond a million records. Different messages with the same unique item key can come in. On this DSO, we build a simple query.
    The other DSO, which gets the data from the PSA with a regular non-real-time DTP (or maybe also real-time, since it should refresh every few hours), contains more fields and has several timestamps on which the client wants to apply selections. We want to build a more complex query on this DSO. This DSO will accumulate millions of records on a weekly basis.
    The question now is: should we run the query directly on the large DSO (with indexes on the selection fields), or build a cube on top? Many fields are timestamps and therefore not suitable as dimension characteristics (maybe line-item dimensions). However, there are some characteristics that do not have many different values (like 0/1 for true/false selections). Only data from the same PSA is used; no derivations from other data and no master data lookups are involved.
    I'm wondering if a cube has advantages over a DSO here. I haven't found clear documentation explaining if it's better to use a DSO or cube. Any ideas?

    Zephania,
    The answer to your question depends on how the report is run. I didn't understand clearly, but it looks like you said the client will filter using multiple timestamp fields.
    If you can determine what the selection screen filter fields are and you can make those fields mandatory, then you can just create indexes on those fields and report directly from the DSO.
    If you can't determine the search criteria and want to give full freedom to the users, and the users don't mind waiting while the database performs full table scans, then you may still report from the DSO. Basis may not like that, though.
    If you want to give full freedom to users, and the report should run quickly, then you have to build an InfoCube and report from that.

  • Query performance on virtual cube

    Hello,
    We have a requirement wherein we are replacing an InfoCube with a virtual one to reduce BWA space. As part of this activity I am carrying out performance testing on the existing queries. For most queries I see an increase of around 20%, which is acceptable. But for one query we are running into an issue where the query times out after hogging a lot of memory. On debugging I could see that this happens inside RSDRC_INFOPROV_READ -> run_query_handler, where it ends up doing a sequential read on the SID table of 0BILL_NUM. Strangely, bill_num is neither part of the rows/free characteristics nor is it used anywhere else (filters, restrictions). Any pointers to what might be going wrong will be highly appreciated. Thanks.
    Regards,
    Dhrubo

    Hi,
    In this case, try to find where most of the time is spent by using ST03.
    If more time is consumed in the front end, rearrange the query by using fewer characteristics in the rows and more in the free characteristics and filter areas. Using variables will also help.
    Use the Reporting Agent to schedule the query in the background to fill the cache, then rerun the query.
    Reg,
    Vishwa

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB, so effectively I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, two of which have 50 million records.
    I have equally distributed the fact table records among 60 partitions. Each partition size is around 900 MB.
    The processing of the cube is not an issue as it completes in 3.5 hours. The issue is with the query performance of the cube.
    When an MDX query is submitted, unfortunately in the majority of cases the storage engine has to scan all the partitions (as our cube is not time-dependent and we can't find a suitable dimension that fits the bill for partitioning the measure group).
    I'm aware of the cache warming and usage-based aggregation (UBO) techniques.
    However, the cube is available for users to run ad hoc queries, so the benefits of cache warming and UBO may cease to contribute to the performance gain, since there is a high probability that each user will look at the data from a different perspective (especially with 20+ dimensions) as the days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube, so the storage engine sends all the granular data the formula engine requests (possibly millions of rows) and then performs the average calculation.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records from the 60 partitions.
    FYI: our server has 32 GB RAM and 8 cores, and it is dedicated to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time dependent and the steps they took to improve the adhoc query performance for the users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article on tuning query performance in SSAS:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    I hope you can find some helpful clues for tuning your SSAS server query performance. Moreover, there are two ways to improve query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detail information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
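    One pattern from the Top 10 list above that maps directly to your average calculations is to define an average as a ratio of two physical (stored) measures rather than as an Avg() over leaf-level cells, so the storage engine can answer from aggregations instead of shipping granular rows to the formula engine. A minimal MDX-script sketch (the measure, dimension and hierarchy names are hypothetical):

        /* Expensive form: pulls leaf-level cells into the formula engine */
        CREATE MEMBER CURRENTCUBE.[Measures].[Avg Line Amount Slow] AS
            Avg(
                Descendants([Order].[Order].CurrentMember,, LEAVES),
                [Measures].[Line Amount]
            );

        /* Cheaper form: ratio of two stored measures, resolved from aggregations */
        CREATE MEMBER CURRENTCUBE.[Measures].[Avg Line Amount] AS
            IIF([Measures].[Line Count] = 0,
                NULL,
                [Measures].[Line Amount] / [Measures].[Line Count]),
            FORMAT_STRING = "#,##0.00";

    The stored [Line Count] measure would be a simple Count or Sum measure added to the measure group, which the existing aggregations can then cover.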
    Regards,
    Elvis Long
    TechNet Community Support

  • Inventory cube - non-cumulative key figure values are showing -ve values

    Hi Guru's,
    To improve the performance of inventory cube 0IC_C03, I did the following steps:
    1) Created a history cube as a copy of the actual cube (0IC_C03).
    2) Transferred all four years of data (2007, 2008, 2009, 2010) to the history cube as a backup, for clustering and cube remodelling.
    3) After doing all this, loaded the most recent three years of data (2008, 2009, 2010) back into the actual cube and kept one year of data (2007) in the history cube (i.e. maintained only the most recent three years of data in the actual cube).
    4) Created a multiprovider that includes the actual and history cubes and pointed the existing report at the multiprovider.
    5) After purging one year of data from the actual cube, the stock values in the reports show negative values.
    6) To clear that issue I loaded the 2007 data back into the actual cube (so the cube again has all years of data, as before) to avoid the negative stock values, but the stock values still show negative values.
    How can I solve this issue in the inventory cube?
    How can I eliminate the negative values in the reports, which were working properly before the data purge (removing the first year of data from the actual cube)?

    Hi Prayog, thanks for answering. Yes, I went to the data targets, and the formula is already written like this: IF( Debit/Credit = 'H', Qty in OUn, ( 1- * Qty in OUn ) ) for the Actual Consumption key figure and IF( Debit/Credit = 'H', Amt. in local curr., ( 1- * Amt. in local curr. ) ) for the Amount.
    As I already said, from one of the InfoSources the data flows through an ODS and then to the cube. So I checked the data in the ODS with the movement type and posting date as used in the report: I selected 'Debit/Credit' = H, the movement type and the posting date, but in the ODS output the key figures are not displayed. This is the problem.
    Cheers,
    Hemanth Aluri...

  • Inventory Ageing query performance

    Hi All,
    I have created an inventory ageing query on our custom cube, which is a replica of 0IC_C03. We have data from 2003 onwards. The performance of the query is very poor; the system almost hangs. I tried to create aggregates to improve performance, but the aggregate fill failed. What should I do to improve the performance, and why did the aggregate filling fail? The cube data is compressed. Please guide.
    Regards:
    Jitendra

    In addition to the above posts, check the points below and take action accordingly to improve query performance.
    Mainly check whether the cube data is compressed; compression will improve query performance.
    1) If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2) Check the code for all exit variables used in the report.
    3) Check the read mode for the query; the recommended setting is H.
    4) If the alternative UOM solution is used, turn off the query cache.
    5) Use Constant Selection instead of SUMCT and SUMGT within formulas.
    6) Check aggregation and exception aggregation on calculated key figures. 'Before aggregation' is generally slower and should not be used unless explicitly needed.
    7) Check whether large hierarchies are used and whether the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.
    8) Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    9) If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down query processing.
    10) Check the usage of user exits involved at OLAP runtime.
    11) Use Constant Selection instead of SUMCT and SUMGT within formulas.
    12) Turn on the BW Statistics: in RSA1, choose Tools -> BW Statistics for InfoCubes (choose OLAP and WHM for your relevant cubes).
    To check the query performance problem:
    Use ST03N -> BW System Load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB or %Frontend shows a high number across the InfoCubes.
    You need to run ST03N in expert mode to get these values.
    Based on that analysis and the values above, check whether an aggregate is suitable, whether OLAP settings should be adjusted, etc.

  • Query on Inventory Cube 0IC_C03

    Hi All
    When running a query on the Inventory Cube, I get some values displayed like "[2,000]", and such values are not taken into account in the total row.
    Any idea what the "[value]" notation means?
    Thanks
    Dror Golani

    Hi,
    It could probably be a NUMC data type; just check.
    venkat

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is being planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning on implementing BIA accelerator and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned into that cube during the day)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, standard (basic) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Building a new Cube Vs Restricted Key figure in Query - Performance issue

    Hi,
    I have a requirement to create an OPEX restricted key figure in a query. The problem is that the key figure has to be restricted to about 30 GL accounts and almost 300 cost centers.
    I do not know whether this might cause a performance issue in the query. At the moment, I am thinking of creating a new OPEX cube, loading only those 30 GL accounts, the 300 cost centers and the amount, and including the OPEX cube in the multiprovider in order to get the OPEX amount in the report.
    What's the best solution: creating the OPEX restricted key figure or the OPEX cube?
    thanks,
    Bhat

    I think you should go for the cube, as restricted key figures are calculated at OLAP runtime, so they will definitely affect query performance. There are a lot of cost centers to restrict on, so during query runtime it will take a lot of time to fetch the data from the InfoProvider. It's better to create a cube with the restrictions and include it in the MultiProvider; it will definitely save a lot of time during query execution.

  • Query performance on remote/virtual cube

    All:
    We are on BW 3.5.
    Can anyone suggest anything for improving query performance on remote/virtual cubes? Analysis shows that query performance is suffering at the database level.
    I am looking for advice beyond hardware and database parameters; the current hardware and database parameters work fine with basic cubes.
    Another solution is a data mart, but can anything be done before/without going to a data mart?
    Any help will be appreciated.
    Thanks

    Hi,
    In this case, try to find where most of the time is spent by using ST03.
    If more time is consumed in the front end, rearrange the query by using fewer characteristics in the rows and more in the free characteristics and filter areas. Using variables will also help.
    Use the Reporting Agent to schedule the query in the background to fill the cache, then rerun the query.
    Reg,
    Vishwa

  • Query performance on Multiprovider(Remote Cube)

    Hi All,
    I have to improve the query performance for a report which is built on a MultiProvider.
    This MultiProvider is designed from several remote cubes, but for this report the data is brought through one remote cube from R/3.
    In the filter I have restricted to that one remote cube, which brings the data from R/3.
    Now in ST03 the stats look like this:
    %Init time - 0, %DB time - 0, %OLAP time - 16.67, %Frontend - 83.33.
    Now I have to improve the %Frontend elapsed time.
    Could you please guide me.
    Thanks
    Srinivas

    Hi Srinivas,
    Please see this document
    https://websmp105.sap-ag.de/~sapidb/011000358700001394912002
    And this Discussion Thread
    Re: Deactivate Hierarchy symbols in excel
    See whether this is helpful in case of Remote Cubes.
    Thanks
    CK

  • Non Cumulative Inventory Cube Remodel

    Hello Gurus,
    I have an existing inventory cube with non-cumulative key figures and a large amount of data. We are using the standard non-cumulative key figures 0TOTALSTCK, 0ISSTOTSTCK and 0RECTOTSTCK.
    Now I have a new requirement to report issues (0ISSTOTSTCK) and receipts (0RECTOTSTCK) only in LB. Right now they are both in the base UOM. Converting them to LB in the query (on the fly) is not an option because of the data volume / report performance.
    I'm planning to do this in the backend by adding two more key figures for issues and receipts, in which I'll convert the standard ones to LB. I'm not going to touch any of the existing non-cumulative key figures, but the overall structure (the F table) is going to change.
    I'm concerned whether this will work? Any risks? Marker updates?
    Thanks.
    Abhijeet

    Please have a look
    http://sap.seo-gym.com/inventory.pdf

  • Non-cumulative Inventory cube to snapshot Inventory cube scenario

    We currently have a non-cumulative inventory cube in the production system, and the stock values match the ECC data. Now we want to build a snapshot cube (to improve query performance), and the question is: can we use the current (non-cumulative) cube as the data source for the new snapshot cube, or do we need to load it from ECC? Is any special handling needed if it is loaded from the current cube?
    Thanks!

    Hi,
      You can use the existing cube, but make sure that you have implemented the OSS Note 1426533 before doing this.
    Regards,
    Raghavendra.
