Query performance on virtual cube

Hello,
We have a requirement to replace an InfoCube with a virtual one to reduce BWA space. As part of this activity I am carrying out performance testing on the existing queries. For most queries I see an increase of around 20%, which is acceptable. But for one query we are running into an issue where the query times out after hogging a lot of memory. On debugging I could see that this happens inside RSDRC_INFOPROV_READ -> run_query_handler, where it ends up doing a sequential read on the SID table (0BILL_NUM). Strangely, 0BILL_NUM is neither part of the rows/free characteristics nor used anywhere else (filters, restrictions). Any pointers to what might be going wrong will be highly appreciated. Thanks.
Regards,
Dhrubo

Hi,
In this case, try to find where most of the time is spent by using ST03.
If most of the time is consumed in the front end, rearrange the query to use fewer characteristics in the rows and more in the free characteristics and filter areas. Using variables will also help.
Use the Reporting Agent to schedule the query in the background to fill the OLAP cache, then rerun the query.
Reg,
Vishwa
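
A sketch of one possible cause of the 0BILL_NUM behaviour, assuming the virtual provider is implemented as an InfoCube with services: the service function module receives the characteristics the query actually requested in I_TH_SFC, and a module that reads or returns characteristics outside that list can force the data manager into SID handling for a field that appears nowhere in the query. The function module name and source table below are hypothetical; the interface follows the RSDRI-style pattern described in SAP's How-To guide on virtual InfoCubes with services.

    FUNCTION z_billing_read.
    * Hypothetical service FM for a virtual InfoCube with services.
    * Abridged RSDRI-style interface:
    *   IMPORTING  i_infoprov    TYPE rsinfoprov
    *              i_th_sfc      TYPE rsdri_th_sfc   " requested characteristics
    *              i_th_sfk      TYPE rsdri_th_sfk   " requested key figures
    *              i_t_range     TYPE rsdri_t_range  " filter selections
    *   EXPORTING  e_t_data      TYPE STANDARD TABLE
    *              e_end_of_data TYPE rs_bool

      DATA: lt_fields TYPE STANDARD TABLE OF string,
            ls_sfc    TYPE rsdri_s_sfc.

    * Build the select list strictly from I_TH_SFC. Selecting or
    * returning extra characteristics (e.g. 0BILL_NUM) makes the data
    * manager perform SID lookups for fields no one asked for.
      LOOP AT i_th_sfc INTO ls_sfc.
        APPEND ls_sfc-chanm TO lt_fields.
      ENDLOOP.

    * ... SELECT (lt_fields) FROM <source table>
    *       INTO CORRESPONDING FIELDS OF TABLE e_t_data
    *       WHERE (condition derived from i_t_range) ...

      e_end_of_data = 'X'.   " no further data packages in this sketch
    ENDFUNCTION.

If the module already restricts itself to I_TH_SFC, an SQL trace (ST05) taken during the query run should at least show which statement actually touches the 0BILL_NUM SID table.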

Similar Messages

  • BEx query based on a virtual cube doesn't display a valid List of Values (LOV)

    Hello
    I have a problem with an invalid LOV. The scenario is the following: there's a BEx query based on a virtual cube. The query has an exit variable on a characteristic that is based on 0CALMONTH.
    In Universe Designer I simply create a connection and a universe based on this query, and export it.
    In Web Intelligence (and also in Live Office), when I try to execute the query, the prompt to fill my exit variable displays a list of values that doesn't match the values of the characteristic in the cube.
    Actually, the list at the prompt starts with 01.0000 and finishes with 05.0968.
    In Universe Designer, the option to edit the list of values is not available. But I think that editing the LOV is not the correct way anyway.
    I've tried creating a new query based on the DSO that is the source of the virtual cube. In this case, I had a valid list. Unfortunately, I can't use this DSO.
    Did anyone already have this problem?

    Hi James,
    can you explain what you mean by "input length for that field"?
    The field in the table is varchar2(120). I couldn't find any options for the list of values.
    Thanks for your response
    Carsten

  • Query ID in Virtual Cube with services-Function module

    Hi,
    I am using a virtual cube with services linked to a function module.
    The function module has fixed parameters (such as the InfoProvider name). None of these parameters contains query information such as the query ID or query name.
    Does anyone know how to determine the query that executed this function module? (See also the sketch at the end of this thread.)
    Best Regards,
    Anil

    Hi Claudio,
    I have never implemented a virtual InfoCube with services using an FM, but I know there are a couple of How-To documents on the subject, named:
    - How to Reporting from External Data via Virtual InfoProvider
    - How to Implement a Virtual InfoCube with Services
    both with some code samples: did you read them?
    Hope it helps
    GFV
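
    On Anil's question of identifying the calling query: the services interface itself does not pass the query name, so one release-dependent workaround sometimes suggested is to inspect the ABAP call stack from inside the function module. A sketch only, as a starting point; whether the OLAP frames on the stack let you recover the query ID has to be verified in the debugger on your own system.

        TYPE-POOLS abap.   " needed on older releases only

        DATA: lt_stack TYPE abap_callstack,
              ls_frame TYPE abap_callstack_line.

        " Return the full call stack of the current session.
        CALL FUNCTION 'SYSTEM_CALLSTACK'
          IMPORTING
            callstack = lt_stack.

        LOOP AT lt_stack INTO ls_frame.
          " Inspect ls_frame-mainprogram / ls_frame-blockname (e.g. in
          " the debugger) to see which OLAP or RSRT frames the query
          " run leaves behind on your release.
        ENDLOOP.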

  • Query on BCS virtual cube is not using the aggregates on BCS basic cube

    Hi all,
    I have a BCS virtual cube which is linked to a BCS basic cube. I built aggregates on the BCS basic cube.
    I created a simple query on the BCS basic cube and ran it in RSRT debug mode; it showed the aggregates on the basic cube. But when I created the same query on the BCS virtual cube and ran it in RSRT debug mode, the query did not show any aggregates, which was strange.
    So my question is whether a query built on the virtual BCS cube can utilize the aggregates built on the BCS basic cube; if possible, please let me know the tweaks.
    Thanks,
    Raj.

    1. Go to SE37, enter RSDRI_INFOPROV_READ and choose Display.
    2. In line 82 (in a BW 3.5 system) there is a line that says:
      CLEAR: e_t_data, e_end_of_data, e_aggregate, e_split_occurred.
    Put the cursor there and press the 'stop sign' icon, or use Ctrl+Shift+F12, to set a breakpoint.
    3. In the same mode, open transaction RSRT, choose your query and execute it. When you stop at the breakpoint, enter I_TH_SFC into one of the fields in the lower left area and press Enter. You should see a table with the requested characteristics.
    As I said, I'm not quite sure whether it works. I'll have access to a BCS system on Monday and will try to find out more then.
    Best regards
    Dirk
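
    To check the same thing without the breakpoint, a small test report can call RSDRI_INFOPROV_READ directly against the basic cube and inspect E_AGGREGATE, which names the aggregate the read actually used. A sketch only: the InfoProvider, characteristic and key figure names are placeholders, and the exact type of E_AGGREGATE should be checked in SE37 on your release.

        REPORT z_check_aggregate_use.

        TYPES: BEGIN OF ty_row,
                 company TYPE char32,  " placeholder characteristic value
                 amount  TYPE f,       " placeholder key figure value
               END OF ty_row.

        DATA: lth_sfc  TYPE rsdri_th_sfc,
              ls_sfc   TYPE rsdri_s_sfc,
              lth_sfk  TYPE rsdri_th_sfk,
              ls_sfk   TYPE rsdri_s_sfk,
              lt_range TYPE rsdri_t_range,
              lt_data  TYPE STANDARD TABLE OF ty_row,
              lv_end   TYPE char1,     " rs_bool is a char1 flag
              lv_aggr  TYPE char30,    " name of aggregate used, if any
              lv_split TYPE char1,
              lv_first TYPE char1 VALUE 'X'.

        ls_sfc-chanm    = '0COMPANY'.  " placeholder characteristic
        ls_sfc-chaalias = 'COMPANY'.
        INSERT ls_sfc INTO TABLE lth_sfc.

        ls_sfk-kyfnm    = '0AMOUNT'.   " placeholder key figure
        ls_sfk-kyfalias = 'AMOUNT'.
        ls_sfk-aggr     = 'SUM'.
        INSERT ls_sfk INTO TABLE lth_sfk.

        CALL FUNCTION 'RSDRI_INFOPROV_READ'
          EXPORTING
            i_infoprov           = 'ZBCSBASIC'  " placeholder basic cube
            i_th_sfc             = lth_sfc
            i_th_sfk             = lth_sfk
            i_t_range            = lt_range     " empty = no filter
            i_use_db_aggregation = 'X'          " aggregate on the DB, as a query would
          IMPORTING
            e_t_data             = lt_data
            e_end_of_data        = lv_end
            e_aggregate          = lv_aggr
            e_split_occurred     = lv_split
          CHANGING
            c_first_call         = lv_first.

        " An empty LV_AGGR means the read went against the cube itself.
        WRITE: / 'Aggregate used:', lv_aggr.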

  • Query performance on Inventory Cube

    Hi All,
    I have a query on an inventory cube with non-cumulative key figures; when I run it with them, it takes 60 to 70 minutes. When I run the same query with the non-cumulative key figures removed, it displays results in 25 seconds. Is there any way to improve query performance that is affected by non-cumulative key figures?
    I have checked the performance-related tools: RSRV on the cube and master data shows no errors; in RSRT (execute + debug) most of the query time is consumed in the data manager; and in ST03, DB and data manager time as well as unassigned time are high.
    I know the query takes time because of the non-cumulative key figures, as it needs to perform the calculations on the fly, but it is taking far longer than that alone would explain. I appreciate your input in advance.
    I will reward points.
    Regards
    Satish Reddy

    Hi Anil,
    It's nice to see you. We have compressed the cube with marker update, and we are using only two InfoSources for the cube (BF and UM). As there are 150 queries on that cube, I don't want to build an aggregate especially for that one query. I also tried a DB statistics refresh; there is a process chain to delete and recreate the indexes; and I analysed the cube and master data in RSRV, etc., but it didn't really help. Would you please suggest a good solution for this? I appreciate it in advance.
    When I check the application log in cube manage, it shows 'Mass Upsert of Markers', so I assume the markers are updated.
    Regards
    Satish Arra.

  • Query Calculation on Virtual Cube

    Hi Gurus,
    I am using the virtual cube 0FIGL_C01 for my balance sheet and income statement queries. In one of the queries I have the line item ASSETS from the financial statement hierarchy (0GLACCEXT) on the rows; it has nodes under it that are either line items or normal nodes, and these are cumulated to give the value of the line item ASSETS. On the columns I have a CKF for the balance.
    I am trying to create a formula in another column that shows the value of each line item under ASSETS as a percentage of the cumulated value of ASSETS.
    Example:
        ASSETS             60000
            Revenue        45000
                Sales      35000
            Taxes          15000
                Income Tax 10000
    So, when the hierarchy is expanded, I need the column to show the % of Revenue (a node of ASSETS) over ASSETS, the % of Sales (a node of Revenue) over ASSETS, the % of Taxes (a node of ASSETS) over ASSETS, the % of Income Tax (a node of Taxes) over ASSETS, and so on.
    Any help will be appreciated
    thanks

    I used cell definition for the calculation and it worked.

  • MDX query performance on ASO cube with dynamic members

    We have an ASO cube and we are using MDX queries to extract data from that cube. We are doing some performance testing on the MDX data extract.
    Recently we made around 15-20 account dimension members dynamic in the ASO cube, and the query now takes around an hour and a half to run on an empty cube. Earlier, when there were no dynamic members in the cube, the query ran in 1 minute on the empty cube.
    I am not clear why it takes so much time to extract data via MDX from an empty cube when there is nothing to extract. Performance has also degraded when extracting data from the cube with data in it.
    Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
    I appreciate any insights on this issue.

    I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
    As an extreme example, I could write a member formula that counts every unique member combination in the cube and assigns it to multiple members; regardless of whether I have any data in the database or not, that formula is going to resolve itself when you query it, and that is going to take a lot of time. You are probably somewhere in between that and a simple formula that doesn't require any overhead. So without seeing the MDX it is hard to say what might be causing the issue.
    As far as excluding members goes, there are various functions in MDX to narrow down the set you are querying:
    Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc.
    Keep in mind you did not make members dynamic; you made a hierarchy dynamic. That is not the same thing, and it impacts the way Essbase internally optimizes the database for stored vs. dynamic hierarchies, so that alone can have an impact as well.

  • Query performance: DSO or Cube?

    For a new report (two queries) we will receive a few hundred thousand service messages pushed into a PSA. These messages carry various timestamps for the different statuses of the items that come in. The client would like to report on a very short term (5-15 minutes, near real-time) on a subset of the data, selecting on date and time.
    The idea is as follows. From the PSA we load a subset of the fields to a DSO on a real-time basis, because the business needs operational reporting at very short notice. This DSO should only hold data for at most a few days, so its size won't grow beyond a million records. Different messages can come in on the same unique item key. On this DSO, we build a simple query.
    The other DSO, which gets its data from the PSA with a regular non-real-time DTP (or maybe also real-time, since it should refresh every few hours), contains more fields and has several timestamps on which the client wants to apply selections. We want to build a more complex query on this DSO. It will accumulate millions of records on a weekly basis.
    The question is now: should we query the large DSO directly (with indexes on the selection fields), or build a cube on top? Many fields are timestamps and therefore not suitable as dimensions (except maybe as line-item dimensions). However, there are some characteristics that do not have many different values (like 0/1 for true/false selections). Only data from the same PSA is used; no derivations from other data or use of master data is involved.
    I'm wondering if a cube has advantages over a DSO here. I haven't found clear documentation explaining whether it's better to use a DSO or a cube. Any ideas?

    Zephania,
    The answer to your question depends on how the report is run. I didn't understand it completely, but it looks like you said the client will filter using multiple timestamp fields.
    If you can determine what the selection-screen filter fields are, and you can make those fields mandatory, then you can just create indexes on those fields and report from the DSO.
    If you can't determine the search criteria and want to give full freedom to the users, but the users don't mind waiting while the database performs full table scans, then you may still report from the DSO. Basis may not like that, though.
    If you want to give full freedom to users and the report should run quickly, then you have to build an InfoCube and report from that.

  • Performance issue on a virtual cube

    Hi BW gurus,
    I am working on a consolidation virtual cube, and query performance through that cube is very bad. I know that we cannot build aggregates or partitions on a virtual cube; what should my approach be, then?
    Your suggestions will be appreciated with lots of points

    Hi Nick,
    If you cannot move away from the virtual cube option, then I think you should try to improve the performance of the virtual cube itself. This is mainly ABAP work. You can use SE30 to analyze which parts of the code are taking too much time. Follow these steps:
    1) Create a breakpoint in the function module of your virtual cube.
    2) Go to LISTCUBE and initiate an extraction from your virtual cube.
    3) In a separate session, run SE30 to analyze the extraction process for your virtual cube (a small timing sketch follows below).
    You can use the report from SE30 as a starting point for your performance optimization work.
    Note: transaction ST05 can help you determine which database calls are taking a long time.
    Hope this helps.
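
    For step 3, if a full SE30 trace is more than you need, a rough first measurement is possible with plain runtime counters around the call to your virtual cube's function module. A sketch; Z_MY_VIRTUAL_READ is a placeholder for your own service FM and its real parameters.

        DATA: lv_t0 TYPE i,
              lv_t1 TYPE i.

        GET RUN TIME FIELD lv_t0.

        " CALL FUNCTION 'Z_MY_VIRTUAL_READ' ...   " placeholder: your
        "   service FM with its real parameters goes here.

        GET RUN TIME FIELD lv_t1.

        " Microseconds spent in the extraction; compare against the
        " ST05 SQL trace to see how much of it is database time.
        WRITE: / 'Runtime (microseconds):', lv_t1 - lv_t0.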

  • Query on virtual cube showing incorrect values

    Experts,
    I have created a virtual cube on a base cube using services (InfoSource). Both the virtual cube and the base cube are Z-developments in BI.
    The base cube has 6 records. When I check the data in the virtual cube using 'Display data', it shows correct values based on the routine written in the virtual cube transformation.
    The problem is that when I create a query on this virtual cube, it doesn't show correct data, or sometimes shows no applicable data. After debugging I found that in the end routine itself the result package contains records with blank values in the fields on which I have written the routine. But it does this only while executing the query, not when I display the data in the cube. (See the sketch at the end of this thread.)
    Regards,
    Akshay

    Hi!
    And how did you solve it?
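
    A hedged guess at what is happening here, worth verifying in the debugger: for a virtual provider the transformation runs at query time with only the fields the query actually requests, so an end routine that depends on a source field the query did not ask for can see blanks there, even though displaying data in the cube (which transfers all fields) looks correct. A sketch of a defensive end-routine body; /BIC/ZSTATUS is a placeholder for the field the routine depends on.

        * Body of the generated end routine (the surrounding
        * METHOD end_routine ... ENDMETHOD. is generated by BW).
            FIELD-SYMBOLS <result> TYPE _ty_s_tg_1.

            LOOP AT RESULT_PACKAGE ASSIGNING <result>.
              IF <result>-/bic/zstatus IS INITIAL.
                " The input arrived empty at query time: drop the row
                " instead of passing blank derived values to the query.
                DELETE RESULT_PACKAGE.
              ENDIF.
            ENDLOOP.

    If the dependency on that field is real, the cleaner fix is usually to make the routine independent of fields the query may not request, or to verify (release-dependent) whether the provider/transformation settings can force their transfer.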

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB, so effectively I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, two of which have 50 million records.
    I have distributed the fact table records equally among 60 partitions. Each partition's size is around 900 MB.
    Processing the cube is not an issue, as it completes in 3.5 hours. The issue is the query performance of the cube.
    When an MDX query is submitted, unfortunately, in the majority of cases the storage engine has to scan all the partitions (as our cube is not time-dependent and we can't find a suitable dimension that fits the bill for partitioning the measure group).
    I'm aware of the cache warming and usage-based aggregation (UBO) techniques.
    However, the cube is available for users to perform ad-hoc queries, so the benefits of cache warming and UBO may cease to contribute to performance gains: there is a high probability that each user will look at the data from a different perspective (especially with 20+ dimensions) as the days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube. So the storage engine sends all the granular data that the formula engine requested (which could be millions of rows), and the formula engine then performs the average calculation.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records (from the 60 partitions).
    FYI, our server has 32 GB of RAM and 8 cores, and it is dedicated to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time-dependent, and the steps they took to improve ad-hoc query performance for users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article on how to tune query performance in SSAS:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    I hope you can find some helpful clues for tuning your SSAS server's query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detailed information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Query performance on remote/virtual cube

    All:
    We are on BW 3.5.
    Can anyone suggest anything for improving query performance on remote/virtual cubes? Analysis shows that query performance is suffering at the database level.
    I am looking for advice beyond hardware and database parameters. The current hardware and database parameters seem to work fine with basic cubes.
    Another solution is a data mart, but can anything be done before/without going to a data mart?
    Any help will be appreciated.
    Thanks

    Hi,
    In this case, try to find where most of the time is spent by using ST03.
    If most of the time is consumed in the front end, rearrange the query to use fewer characteristics in the rows and more in the free characteristics and filter areas. Using variables will also help.
    Use the Reporting Agent to schedule the query in the background to fill the OLAP cache, then rerun the query.
    Reg,
    Vishwa

  • Querying on aggregates created on Virtual Cube

    Hello,
    I have implemented a virtual InfoProvider with services. When I create queries directly on the virtual InfoProvider, the query runs fine and I see the report.
    As per my requirement, I created an aggregate on the virtual InfoProvider and then defined a query on the aggregate. But when I execute this query, I get the following errors:
    Error reading the data of InfoProvider AG4
    An exception with the type CX_SY_REF_IS_INITIAL occurred, but was neither handled locally, nor declared in a RAISING clause
    Dereferencing of the NULL reference.
    Would appreciate any assistance on this topic.
    Thanks
    Priyadarshi

    Yes, it is possible to create aggregates on virtual cubes.
    I will be grateful if anybody who is aware of the method of aggregate creation, and who has faced similar issues, comes forward and throws some light on what the error could be.
    Thanks

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is being planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning to implement the BI Accelerator (BIA) and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned into that cube during the day)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, standard (basic) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • BCS Virtual Cubes Performance Tuning

    Hi All,
    We are working on improving the performance tuning of queries on BCS virtual cubes (with services).
    Any specific changes (from RAM to specific properties of queries or virtual cubes with services) that you have seen improve performance in your environment would be greatly appreciated.
    Thanks,
    - Shashi

    Thanks a lot Marc,
    We are on NW2004, with the following support package levels:
    SAP_BW     350     0016
    FINBASIS     300     0012
    BI_CONT     353     0008
    SEM-BW     400     0012
    I have checked the Service Marketplace, and the currently available SPs are:
    SAP_BW     350     0019
    FINBASIS     300     0015
    BI_CONT     353     0013
    SEM-BW     400     0015
    Which support packages do you suggest we go to for the performance issues on BCS virtual InfoProvider queries?
    Thanks,
    - Shashi
