Query performance on MultiProvider (Remote Cube)

Hi All,
I have to improve the query performance of a report built on a MultiProvider.
The MultiProvider is made up of several remote cubes, but for this report the data is brought in through one remote cube from R/3.
In the query filter I have restricted to that one remote cube, which brings the data from R/3.
In ST03 the statistics look like this:
%Init time - 0, %DB time - 0, %OLAP time - 16.67, %Front end - 83.33.
Now I have to improve the front-end elapsed time.
Could you please guide me.
Thanks
Srinivas

Hi Srinivas,
Please see this document
https://websmp105.sap-ag.de/~sapidb/011000358700001394912002
And this Discussion Thread
Re: Deactivate Hierarchy symbols in excel
See whether these are helpful in the case of remote cubes.
Thanks
CK

Similar Messages

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB, so effectively I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, two of which have 50 million records.
    I have equally distributed the fact table records among 60 partitions. Each partition size is around 900 MB.
    The processing of the cube is not an issue as it completes in 3.5 hours. The issue is with the query performance of the cube.
    When an MDX query is submitted, in the majority of cases the storage engine unfortunately has to scan all the partitions (as our cube is not time-dependent and we can't find a suitable dimension on which to partition the measure group).
    I'm aware of cache warming and usage-based optimization (UBO) aggregation techniques.
    However, the cube is available to users for ad hoc queries, so the benefits of cache warming and UBO may cease to contribute to the performance gain: there is a high probability that each user will look at the data from a different perspective (especially with 20+ dimensions) as the days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube, so the storage engine sends all the granular data the formula engine requests (possibly millions of rows), which then performs the average calculation.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records (from the 60 partitions).
    FYI - our server has 32 GB RAM and 8 cores, and it is dedicated to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time-dependent, and the steps they took to improve ad hoc query performance for their users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article on how to tune query performance in SSAS:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    I hope you can find some helpful clues there for tuning your SSAS server's query performance. Beyond that, there are two ways to improve query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detailed information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Non cumulative remote cube in Multiprovider

    Hi all,
    I have a situation. We created a MultiProvider containing one InfoCube and one remote cube. The remote cube contains a non-cumulative key figure. When we create a query on the MultiProvider and check it, the system gives a system error message: "System error in program CL_RSR and form GET_CHANMID-01-". I have tried to find an OSS note, but could not find any related ones.
    The problem doesn't occur if the query is built directly on the remote cube, or on the MultiProvider with the remote cube excluded. So I think the problem lies in using a non-cumulative remote cube in a MultiProvider.
    Has anyone encountered such a situation? I'd appreciate any opinions or comments.
    Thank you.
    Regards,
    Anzar

    Anyone?
    I really need your advice on this issue.
    Best regards,
    David

  • Query performance on remote/virtual cube

    All:
    We are on BW 3.5.
    Can anyone suggest anything for improving query performance on remote/virtual cubes? Analysis shows query performance is suffering at the database level.
    I am looking for advice beyond hardware and database parameters; the current hardware and database parameters work fine with basic cubes.
    Another solution is a data mart, but can anything be done before, or without, going to a data mart?
    Any help will be appreciated.
    Thanks

    Hi,
    In this case, use ST03 to find where most of the time is spent.
    If most of the time is consumed in the front end, rearrange the query: put fewer characteristics in the rows and more in the free characteristics and filter areas. Using variables will also help.
    You can also use the Reporting Agent to schedule the query in the background to fill the OLAP cache, then rerun the query.
    Reg,
    Vishwa

  • Remote cube: bad performance of query in source system

    I have created a remote cube that reads from a simple R/3 view on VBAK/VBAP.
    In a query on this cube I select on document date; on VBAK there is a special index on document date.
    It appears that my date variable is not included in the SQL statement executed on the source system. This means a full table scan always occurs on VBAK/VBAP.
    The query on the source system (as generated by executing my BW query) looks like this (the document date is AUDAT):
    SELECT
      "MANDT" , "VBELN" , "POSNR" , "VBTYP" ,
      "AUART" , "AUDAT" , "VKORG" , "VTWEG" ,
      "SPART" , "VKGRP" , "VKBUR" , "NETWR" ,
      "WAERK" , "NTGEW" , "GEWEI" , "KWMENG" ,
      "VRKME" , "SHKZG"
    FROM
      "ZBW_2LIS11ITM"
    WHERE
      "MANDT" = :A0#
    My question is: is it normal that full table scans are always executed for a remote cube? That appears to me to be a waste of system resources. Or is it a bug?

    Hi,
    if you are using ABAP routines for populating fields of your RemoteCube, you should provide 'inverse routines' as well. According to SAP documentation, this allows for applying filters in the source system rather than doing full table scans.
    An example can be found here: [http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/f5af4a4db53482e10000000a1553f6/content.htm]
    Best regards
    Christian

  • Problem with a query on Remote Cube

    Hi,
    We are working on a remote cube whose source is a view built on an R/3 base table. Because of the huge data volume in the table (performance reasons), I need to extract the data to BW only for the current date. I used an exit on R/3 to restrict to the current date, and the extractor checker showed only the valid data, i.e. only the current date. But when I built a query on the remote cube, the report showed the complete data (the restriction was not working). We have even tried an inversion routine in the transfer rules to pass the selections to the source system, but it still doesn't work. Have you come across this situation, or can you suggest an alternative solution? We have to use a remote cube.
    Any suggestions asap would be highly appreciated and rewarded with full points.
    Regards,
    Raj

    Could you check whether the BLOB really contains UTF-8 encoded XML? What's your database character set?
    The BLOB contains UTF-8 encoded XML, and the database I am connected to has the AL32UTF8 character set, but my local instance has "AMERICAN_AMERICA.WE8ISO8859P1".
    Is that a problem?
    How could I change the character set of the local Oracle client to the character set of the remote Oracle database?

  • Help required for a Remote cube query

    Hi,
    We are working on a remote cube whose source is a view built on an R/3 base table. Because of the huge data volume in the table (performance reasons), I need to extract the data to BW only for the current date. I used an exit on R/3 to restrict to the current date, and the extractor checker showed only the valid data, i.e. only the current date. But when I built a query on the remote cube, the report showed the complete data (the restriction was not working). We have even tried an inversion routine in the transfer rules to pass the selections to the source system, but it still doesn't work. Have you come across this situation, or can you suggest an alternative solution? We have to use a remote cube.
    Any suggestions asap would be highly appreciated and rewarded with full points.
    Regards,
    Raj

    I can think of two ways to do it.
    The simple way, with no ABAP coding, is a view:
    Create a database view joining a timestamp (control) table and your big table.
    Put an entry with the current date into the timestamp table, then use it as a selection inside the view.
    Unfortunately you cannot use SY fields inside database views (otherwise you could have used sy-datum directly as a selection condition in the view), which is why the control table is needed; see the sketch right after this.
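    A minimal sketch of that setup, assuming a hypothetical control table ZBW_CTRL_DATE with a single DATS field DATUM (both names invented for illustration). The database view would join your big table against it, e.g. VBAK-AUDAT = ZBW_CTRL_DATE-DATUM, and a small daily job keeps the entry current:
    REPORT z_set_ctrl_date.
    " Hypothetical daily job: keep exactly one row - today's date - in the
    " assumed control table ZBW_CTRL_DATE (single key field DATUM, type DATS).
    " The database view joins the big table against this row, so the view
    " only ever returns records for the current date.
    DATA: ls_ctrl TYPE zbw_ctrl_date.

    DELETE FROM zbw_ctrl_date.              " drop yesterday's entry
    ls_ctrl-datum = sy-datum.
    INSERT zbw_ctrl_date FROM ls_ctrl.
    COMMIT WORK.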
    The best way to do it is to use a function module and pass the selections from the query into the SQL statement.
    I prefer the latter, and I also pass the data through a generic structure: you manipulate the data inside the initial loop in the function module and don't need further loops downstream in the transfer rules (to keep the response time down). A sketch of such a module follows.
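    Here is a rough sketch of such a data-read function module, modeled on SAP's example extractor RSAX_BIW_GET_DATA_SIMPLE; the module name, the VBAK/VBAP join and the target structure ZBW_S_VBAP_ITEM are invented for illustration, so check the original example in your system for the exact generated interface. The key point is that the selections arriving in I_T_SELECT are converted into a WHERE condition, so the filtering happens in the source database rather than in BW:
    FUNCTION z_bw_get_data_vbap.
    " Interface as in RSAX_BIW_GET_DATA_SIMPLE (abbreviated):
    "   IMPORTING  VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
    "              VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
    "              VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE  OPTIONAL
    "   TABLES     I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
    "              I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
    "              E_T_DATA   STRUCTURE ZBW_S_VBAP_ITEM OPTIONAL  (hypothetical)
    "   EXCEPTIONS NO_MORE_DATA  ERROR_PASSED_TO_MESS_HANDLER

      STATICS: s_cursor  TYPE cursor,
               s_counter TYPE sy-index.

      DATA:   l_s_select TYPE srsc_s_select.
      RANGES: l_r_audat  FOR vbak-audat.

      IF i_initflag = sbiwa_c_flag_on.
        " Initialization call: nothing to prepare in this simple sketch.
        EXIT.
      ENDIF.

      IF s_counter = 0.
        " First data call: convert the query selection on AUDAT into a
        " range table so the database applies the filter (no full scan).
        LOOP AT i_t_select INTO l_s_select WHERE fieldnm = 'AUDAT'.
          MOVE-CORRESPONDING l_s_select TO l_r_audat.
          APPEND l_r_audat.
        ENDLOOP.

        OPEN CURSOR WITH HOLD s_cursor FOR
          SELECT k~audat p~vbeln p~posnr p~netwr p~waerk
            FROM vbak AS k INNER JOIN vbap AS p
              ON k~vbeln = p~vbeln
            WHERE k~audat IN l_r_audat.   " pushed-down selection
      ENDIF.

      " Return one data package per call until the cursor is exhausted.
      FETCH NEXT CURSOR s_cursor
        APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        RAISE no_more_data.
      ENDIF.
      s_counter = s_counter + 1.
    ENDFUNCTION.
    BW then calls this module once per data package at query runtime, and only rows matching the query's date selection ever leave the database.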

  • Remote Cube Performance

    Hello BW Experts,
    1) Is it advisable to use remote cubes (not for end users, but for internal data reconciliation purposes)?
    2) The tables we are looking at are VBAK, VBAP, VBRK and VBRP from SAP, which we compare against the base sales and billing ODS to produce a reconciliation report. Since the data volume in the source tables VBAP and VBRP is huge, we get memory dumps. Is there a better way of doing this? Any suggestions for solving the memory dumps?
    3) Are there any measures to improve remote cube performance?
    4) Other than the remote cube, is there any other way of doing the reconciliation for the above data?
    Any suggestions appreciated.
    Thanks,
    BWer

    Hi BWer,
    Remote cube performance is really an issue.
    Instead, you can load the data up to the PSA and compare it there.
    There is a "How to... Validate InfoCube Data By Comparing it with PSA Data":
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/how to validate infocube data by comparing it with psa data
    For comparison, you can also quickly build a query on those tables in R/3 using transaction SQ01 and its QuickViewer button.
    Best regards,
    Eugene
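    Regarding the memory dumps from reading the huge VBAP/VBRP tables: whichever route you take, reading the big table in fixed-size packages over a database cursor keeps memory usage flat. A minimal sketch (the package size and the per-package logic are placeholders, and it naively assumes a single currency):
    REPORT z_recon_vbap_packaged.
    " Minimal sketch: read VBAP in packages instead of one huge SELECT,
    " so the internal table never holds more than one package at a time.
    TYPES: BEGIN OF ty_item,
             vbeln TYPE vbap-vbeln,
             posnr TYPE vbap-posnr,
             netwr TYPE vbap-netwr,
           END OF ty_item.

    DATA: lt_items  TYPE STANDARD TABLE OF ty_item,
          ls_item   TYPE ty_item,
          lv_cursor TYPE cursor,
          lv_total  TYPE vbap-netwr.

    OPEN CURSOR lv_cursor FOR
      SELECT vbeln posnr netwr FROM vbap.

    DO.
      " INTO TABLE replaces the table contents with the next package.
      FETCH NEXT CURSOR lv_cursor
        INTO TABLE lt_items
        PACKAGE SIZE 50000.               " placeholder package size
      IF sy-subrc <> 0.
        EXIT.                             " cursor exhausted
      ENDIF.
      " Aggregate this package, e.g. to compare against the ODS totals.
      LOOP AT lt_items INTO ls_item.
        lv_total = lv_total + ls_item-netwr.
      ENDLOOP.
    ENDDO.

    CLOSE CURSOR lv_cursor.
    WRITE: / 'Total net value from VBAP:', lv_total.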

  • Building a new Cube Vs Restricted Key figure in Query - Performance issue

    Hi,
    I have a requirement to create an OPEX restricted key figure in a query. The problem is that the key figure has to be restricted to about 30 G/L accounts and almost 300 cost centers.
    I do not know whether this might cause a performance issue in the query. At the moment I am thinking of creating a new OPEX cube, loading only those 30 G/L accounts, 300 cost centers and the amount, and including the OPEX cube in the MultiProvider in order to get the OPEX amount in the report.
    What's the best solution: creating the OPEX restricted key figure, or the OPEX cube?
    thanks,
    Bhat

    I think you should go for the cube. All restricted key figures are calculated at OLAP runtime, so they will definitely affect query performance. There are a lot of cost centers to restrict on, so at query runtime it will take a long time to fetch the data from the InfoProvider. It's better to create a cube with the restrictions and include it in the MultiProvider; it will definitely save a lot of time during query execution.
    Edited by: shyam agarwal on Feb 29, 2012 10:26 AM

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning to implement the BI Accelerator (BIA) and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as a real-time cube (whether or not data is being loaded/planned in that cube during the day)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, basic (standard) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for the plan queries, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Query performance on Inventory Cube

    Hi All,
    I have a query on an inventory cube with non-cumulative key figures; with them included, the query takes 60 to 70 minutes. When I run the same query with the non-cumulatives removed, it displays results in 25 seconds. Is there any way to improve query performance that is affected by non-cumulative key figures?
    I have checked with the performance tools: RSRV on the cube and master data shows no errors; in RSRT > Execute+Debug, the query spends most of its time in the data manager; and in ST03, DB/data manager time and also unassigned time are high.
    I know the query consumes time because of the non-cumulative key figures, since they have to be calculated on the fly, but it is taking far longer than that should explain. I appreciate your input on this query in advance.
      I will reward points.
    Regards
    Satish Reddy

    Hi Anil,
    It's nice to see you. We have compressed the cube with the marker update, and we use only two InfoSources for the cube (BF and UM). As there are 150 queries on that cube, I don't want to build an aggregate especially for this query. I also tried a DB statistics refresh; there is a process chain to delete and recreate the indexes, and I analysed the cube and master data in RSRV, etc., but it didn't really help. Would you please suggest a good solution for this? I appreciate it in advance.
    When I check the application log in the cube's Manage screen, it displays 'Mass Upsert of Markers', so I assume the markers are updated.
    Regards
    Satish Arra.

  • Error while executing BEx query made on Multiprovider having Virtual cube

    Hi All,
    We are getting an error message while executing a BEx query built on a MultiProvider that contains a virtual InfoCube extracting data from the APO liveCache. The error we get is as below:
    ''Error Reading the data of the infoprovider ZSS_R054
    Error in Substep
    Errors have occured while extracting data from datasource 9A_E2E_FC2
    Errors occured during parallel processing of query 4, RC:3
    Error while reading data; navigation is possible''
    Also, when the query is executed from RSRT in debug mode, it ends up in a TRY/CATCH block.
    Please let us know if anyone faced a similar situation or can suggest me what to do.
    Please suggest.
    Thanking you in advance.
    regards,
    ajay

    Hi Ajay,
    Try transaction RSRV: select All Elementary Tests, then the query tests; drag both items to the right-hand pane, select your query from the popup, and execute.
    Let us know what the result is.
    regards,
    Sree.

  • Effects of R3 DS enhancement on remote cube

    Hi All,
    Is there a direct effect on queries run against a remote cube when its underlying R/3 DataSource has been enhanced?
    An example: I'm running queries on a remote cube copied from OL_PCA_1. Now I have enhanced R/3 DataSource 0EC_PCA_1_9 (which feeds InfoSource 0EC_PCA_1, which in turn is the basis of my remote cube). I did not include the new characteristics in the remote cube itself, but rather in the enhanced standard OL_PCA_1, where more granular reporting will be performed.
    Will there be query performance effects on the remote cube with this approach? How can I measure the performance impact (if any) on the remote cube's queries?
    Thanks.

  • Bad Query Performance

    Hi Experts,
    I have a query built on a MultiProvider which performs slowly.
    I think the main problem is that too many records are selected:
    it reads about 1.1 million rows to display about 500 rows in the result.
    Another point could be the complicated restricted and calculated key figures, which may spend a lot of time in the OLAP processor.
    Here are the statistics of the query:
    OLAP Initialization:   3,186906
    Wait Time, User:       56,971169
    OLAP: Settings:        0,983193
    Read Cache:            0,015642
    Delete Cache:          0,019030
    Write Cache:           0,087655
    Data Manager:          462,039167
    OLAP: Data Selection:  0,671566
    OLAP: Data Transfer:   1,257884
    ST03 statistics:
    %OLAP:     22,74
    %DB:       77,18
    OLAP Time: 29,2
    DB Time:   99,1
    It seems that most of the time is consumed in the database.
    Any suggestions to speed up this query's response time would be great.
    Thanks in advance.
    BR
    Srini.

    Hi,
    You need to do the standard query performance tuning for the underlying cubes: better design, aggregates, etc.
    Improve Performance of Queries/Reports on Multi Cubes
    Refer SAP Note Number: 869487
    Performance optimization for MultiCubes
    How to Create Efficient Multi-Provider Queries
    Please see the How to Guide "How to Create Efficient MultiProvider Queries" at http://service.sap.com/bi > SAP NetWeaver 2004 - Release-Specific Information > How-to Guides > Business Intelligence
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/how%20to%20create%20efficient%20multiprovider%20queries.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Performance of MultiProviders
    Multiprovider performance / aggregate question
    Query Performance
    Multicube performances
    Create Efficient MultiProvider Queries
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/751be690-0201-0010-5e80-f4f92fb4e4ab
    Also try
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    tuning, short dumps
    Performance tuning in BW:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Also notes
    0000903559 MultiProvider optimization is only partially active
    0000825396 Performance in reports with many selections
    multiprovider explanation i need
    Note 629541 - Multiprovider: Parallel Processing 
    Thanks,
    JituK

  • UD Connect to SQL Server using Remote Cube (URGENT!!!!)

    Hi all,
    Scenario:
    I am new to remote/virtual cubes, and I think everyone is new to Universal Data Connect (UD Connect). I have to create a prototype that uses a remote cube as a front-end reporting tool for one of our partners who does not have an SAP system. So instead of our partner buying a front-end tool to query their database, we decided to build the solution from our BW system using remote cubes with UD Connect, since we just upgraded to 3.5. Legal limitations prevent us from loading the data onto our system, so we have to use remote cubes.
    Question:
    I know remote cubes are limited in the amount of data they can report on and in the number of users they can support, but I don't have exact numbers. I'm quite certain no one really knows the 'exact' limits, but it would help to know the approximate figures I'm working with. I need to come up with a prototype ASAP so it can be decided whether we want to go ahead with this.
    I would appreciate ANY help regarding this issue, and if you can think of more issues I might encounter, please let me know. It would be great if someone who has gone through this could guide me as well. Thank you all in advance.
    ML

    Hi,
    1) DB Connect is something like the grandfather of UD Connect (simply put).
    2) Here is some documentation about UDC:
    http://help.sap.com --> SAP NetWeaver --> [release, language] --> SAP NetWeaver --> Information Integration --> SAP Business Information Warehouse --> Data Warehousing --> Data Retrieval --> Data Transfer with UD Connect
    directly:
    http://help.sap.com/saphelp_nw04/helpdata/en/78/ef1441a509064abee6ffd6f38278fd/frameset.htm
    3) Customer Call Series recordings, presentations, etc.:
    http://service.sap.com/nw-cc
    (the webex session for UDConnect was on 17th March)
    4) UD Connect uses the four connector types that SAP provides: XML/A, ODBO, JDBC and SAP Query. The same connectors are used by the BI Java SDK, which is included in Visual Composer via the BI Kit.
    While in the UDC scenario you can either extract the data or access it remotely and build structures for it, in the VC scenario you only access the data remotely.
    Performance in either case depends on the amount of data and on the kind of navigation and aggregation required on the backend system (your MS SQL Server), and of course on the network, the load on the systems, etc.
    Servus
    Mario
