Query Performance for OLE DB OLAP Reporting

Hi Experts,
what are the advantages of enhancing query performance by
A) building aggregates, or
B) using Information Broadcaster query precalculation?
Since the settings in the Information Broadcaster can be made by any user, will the precalculated version be used only for that user, or for all users executing the query?
Are these settings also used if the query is executed via a third-party front-end tool?
Thanks,
Angie

Hi Angie,
Which third-party tool is accessing the query? Is it BO? If so, there is a lot of information available.

Similar Messages

  • Report Performance for GL item level report.

    Hi All,
    I have a requirement for a GL line-item report, so I created a data model (0FI_GL_4 -> DSO -> cube). Everything tested fine, but when the report is executed in production, its performance is very poor.
    The report contains document number, GL account, company code, and posting date objects.
    I have decided to do the following to improve reporting performance:
    ·         Create an aggregate on the document and GL characteristics
    ·         Compression
    Can I fill the aggregates first and then do the compression?
    Please let me know if I am missing anything.
    Regards,
    Naani.

    Hi Naani,
    First fill the aggregates, then do the compression. Run SAP_INFOCUBE_DESIGNS and check the size of the dimensions; where appropriate, mark dimensions as line item or high cardinality. Set the cache mode for the query in RSRT,
    and try to reduce the number of navigational attributes in the report. The document below may help you.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6071ed5f-1057-2e10-deb6-d3426fec0219?QuickLink=index&…
    Regards,
    Jagadeesh

  • Query performance for WEBI on top of BEx

    Hello Everyone,
    Is there a way to get the time taken by a WebI query execution on the BO side and on the BW side? Is this statistic stored somewhere?
    Also, what is the best approach for comparing performance between the original BEx query and the WebI report on top of it?
    Thanks for your help.
    Aashish

    Hi,
    I posted some links for the same question here:
    Poor query performance in WebI on top of BEx queries
    Maybe those can help you.
    Regards
    -Seb.

  • Behavior of BEx query attributes for WebI on an OLAP universe: including one attribute in the query

    My WebI report is based on an OLAP universe (.unv), which in turn is based on a BEx query. I need one attribute field from an InfoObject, but when I include that attribute in the free characteristics pane of my BEx query, every attribute of that InfoObject comes over to the universe.
    Is this the expected behavior?
    How can I ensure that only the attribute I want comes over, and not all the attributes of the InfoObject?
    I am using BO 4.0 SP4.

    Attribute-level restriction of an InfoObject is not possible in a BEx query, and the same is reflected in the universe.
    You can select just the attributes of the InfoObject that you want at the WebI report level.
    ---Raji. S

  • CAML query performance for large lists

    I have a list with more than 10,000 items. I am retrieving the items with a CAML query and displaying them in a RadGrid on my page. Because of a filter, around 1,000 records are retrieved. I have enabled paging in my grid with a PageSize of 25, but I have noticed that the page loads very slowly because all 1,000 records are retrieved at once.
    Is it possible to retrieve just 25 records for the first page on load, and then retrieve the next set of 25 records when the Next button or a page number is clicked?
    I want to know if there is any way to link CAML query paging with RadGrid paging.
    Any code example would be greatly helpful.

    Hi,
    For paging through an SPListItemCollection, use the SPQuery.ListItemCollectionPosition property:
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.listitemcollectionposition(v=office.15).aspx
    Also check these useful URLs:
    http://omourad.blogspot.in/2009/07/paging-with-listitemcollectionposition.html
    http://www.anmolrehan-sharepointconsultant.com/2011/10/client-object-model-access-large-lists.html
    Anil

  • Report burst: To increase query performance in Xcelsius

    Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
    Report bursting is only for distributing your reports to your end users.
    You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in Xcelsius.
    -Anil

  • Query performance on MultiProvider (remote cube)

    Hi All,
    I have to improve the query performance of a report built on a MultiProvider.
    The MultiProvider is designed from several remote cubes, but for this report the data is brought through a single remote cube from R/3, which I have restricted in the filter.
    In ST03 the statistics are:
    %Init time - 0, %DB time - 0, %OLAP time - 16.67, %Frontend - 83.33.
    I now have to improve the %Frontend elapsed time.
    Could you please guide me?
    Thanks
    Srinivas

    Hi Srinivas,
    Please see this document
    https://websmp105.sap-ag.de/~sapidb/011000358700001394912002
    And this Discussion Thread
    Re: Deactivate Hierarchy symbols in excel
    See whether this is helpful in case of Remote Cubes.
    Thanks
    CK

  • Steps to Improve Query Performance

    Hi All,
    I have a request from a user to improve the query performance of a few sales reports. Please let me know the different steps I need to perform to improve it.
    The data comes from R/3, and the query is built on a MultiCube modeled on three cubes.
    The report takes a long time to open, refresh, and then execute, even though the data volume is not really large.
    Please tell me which areas I need to check to understand the issue and how I should proceed to improve the performance.
    It would be a great help if you could provide as much detail as possible.
    Thanks in advance
    Pavan

    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB, so effectively I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, two of which have 50 million records each.
    I have distributed the fact table records equally among 60 partitions; each partition is around 900 MB.
    Processing the cube is not an issue, as it completes in 3.5 hours. The issue is the query performance of the cube.
    When an MDX query is submitted, in the majority of cases the storage engine unfortunately has to scan all the partitions (as our cube is not time-dependent and we can't find a suitable dimension on which to partition the measure group).
    I'm aware of cache warming and usage-based optimization (UBO) aggregation techniques.
    However, the cube is available for users to run ad hoc queries, so the benefits of cache warming and UBO may cease to contribute to the performance gain: there is a high probability that each user will look at the data from a different perspective (especially with 20+ dimensions) as days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube, so the storage engine sends all the granular data that the formula engine requests (possibly millions of rows) and the averages are then calculated.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering records from the 60 partitions.
    FYI, our server has 32 GB of RAM and 8 cores, and it is exclusive to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time-dependent, and the steps they took to improve ad hoc query performance for their users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article on tuning query performance in SSAS; please see:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    Hope you can find some helpful clues for tuning your SSAS server's query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detailed information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Performance in OLE Automation

    Hello.
    We're using OLE automation to generate formatted Excel files on a client. They wanted to have data in Excel with formulas, locked cells, colors, logos, etc.
    The problem is that we're having performance issues when we work with more than 1,000 records: we're getting timeouts and running low on memory.
    I'm wondering if someone who has worked with this could tell me whether there is a limit to the number of records this approach can handle, or whether there is a way to improve performance for OLE automation.
    If we're working with too many records, is there another way to generate formatted Excel from R/3?

    Hi Joaquin,
    For inserting many lines, it is better to use the Desktop Office Integration (OO) technique instead of OLE. Check the help on Desktop Office Integration (e.g. for 4.6C):
    http://help.sap.com/saphelp_46c/helpdata/en/e9/0bee9f408e11d1893b0000e8323c4f/frameset.htm
    Regards,
    John.

  • The query cannot be released for OLE DB for OLAP

    Hi,
    I have created an SAP BI query in Query Designer which includes a replacement path variable. The query works totally fine.
    Now I want to use this query in an SAP BO universe, so I have checked the query property 'Release for OLE DB for OLAP' in the advanced properties of the query, which allows external use of the query.
    But after this, when I try to run the query, it shows the following error messages:
    The query cannot be released for OLE DB for OLAP
    Query ZBO_OVERDUE_CUST could not be opened.
    >> Row: 37 Inc: CONSTRUCTOR Prog: CL_RSR_OLAP_VAR
    Error when generating dataProvider
    If I remove the setting, my query works fine. But the problem is that I need both: I want the replacement path variable in the query, and because I have to use the query in a universe, I also have to allow external use of the query.
    What should I do? How can I get the result I want?
    Thanks,
    Piyush

    I think it may be a limitation due to some of the logic in the BEx query.
    http://help.sap.com/saphelp_nw04/helpdata/en/1e/99ea3bd7896f58e10000000a11402f/content.htm
    Release for OLE DB for OLAP
    External reporting tools that communicate using the OLE DB for OLAP interface use queries as data sources. If you want to release this query as a data source for external Reporting tools, select Allow External Access to this Query.
    Note: Queries containing formulas with the operators %RT, %CT, %GT, SUMRT, SUMCT, SUMGT, and LEAF cannot be released for OLE DB for OLAP. These operators are dependent on how the list is displayed in the BEx Analyzer and the formulas return unexpected values when using OLE DB for OLAP or MDX. You may be able to obtain the required result by selecting constants. For more information, see Selecting Constants.
    For more information about using formula operators, see Defining Formulas.
    For more information about OLE DB for OLAP, see Mapping Metadata.

  • OLAP Cache for Query Performance

    Hi Experts,
    I have two questions before we implement the OLAP cache for our queries:
    1) We have 15 important queries which do NOT have any variable/selection screen.
    (The question here is: will it work for this kind of query, which has no selection screen/variant?) The client wants to prime the cache for a few queries that don't have a variable screen.
    In this case, if the data is later filtered on any characteristic, will it be taken from the cache?
    2) I have a query which initially has only a few characteristics in the drilldown when first executed, and users then drill down on many other characteristics. If I want to fill the OLAP cache for this query, what is the best way to ensure that each drilldown gets its data from the cache?
    Thank you,
    -Su

    Hi Raghavendra,
    Thanks for your response.
    For the first question: do you mean that even if we do not have a selection variable on the query, we can still fill the OLAP cache for all of its values (i.e. no selection means "*")?
    If so, what do we need to define under General Precalculation (Variable Assignment) when creating a new setting for the query?
    Thanks,
    -Su

  • OLAP Query performance

    Hi,
    Does using compression and partitioning (by time) adversely affect reporting performance? I have an 8 GB cube with 13 dimensions built in 10.1.0.4. The cube was defined with 1 dense dimension and the other 12 as sparse in a compressed composite, and it was partitioned by year. It takes close to 1 hour to build the cube; since it is compressed, it is fully aggregated, I would assume. However, the performance of Discoverer queries on this cube has been pathetic: any drill-down or slice/dice takes a long time to return if there are multiple dimensions on either edge of the crosstab. Also, when scrolling down, it freezes for a while before bringing back the data; sometimes it takes a couple of minutes!
    What do I need to check to speed this up? I have already checked things like sparsity, SGA/PGA sizes, the OLAP page pool, etc.
    Regards
    Suresh

    Hi Suresh,
    Before you can implement changes to improve performance, you need to understand the causes of the performance problems. Discoverer for OLAP uses the OLAP API for queries, and the OLAP API generates SQL to query an analytic workspace. There are a few broad possible causes of poor query performance:
    retrieving data from the AW is slow
    SQL execution is slow, perhaps because the SQL is inefficient
    SQL execution is fast, but the OLAP API is slow to fetch data
    Each of these causes demands a different approach. I'd suggest that you enable configuration parameters SQL_TRACE and TIMED_STATISTICS, generate some trace files, and use the tkprof utility to try to narrow down the cause of the trouble.
    Geof
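    A minimal sketch of the tracing workflow suggested above, assuming you can issue ALTER SESSION in the session that runs the slow Discoverer/OLAP query (the trace file name is a placeholder):
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;
    -- run the slow query here, then switch tracing off again
    ALTER SESSION SET sql_trace = FALSE;
    -- the raw trace file is written to the directory given by USER_DUMP_DEST;
    -- format it with tkprof, sorted by execute/fetch elapsed time, e.g.
    -- tkprof <instance>_ora_<spid>.trc tkprof_report.txt sort=exeela,fchela
    If the Discoverer session cannot be altered directly, a DBA can enable the same trace for that session (for example with DBMS_MONITOR.SESSION_TRACE_ENABLE); the statements above only show the parameters involved.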

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    I hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after 20 minutes, as this is far too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see the SQL Inspector plans below:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I've been doing some more digging and have found the database view that is linked to the Journal Batches folder; see the DECODE expression below.
    I think the problem is the column using DECODE. When I query the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be what is causing the full table scans. How do we get around this? (One possible workaround is sketched after the DECODE listing below.)
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
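    A minimal sketch of one possible workaround, assuming the raw status column can be exposed to the query (for example via a custom folder or calculation defined in Discoverer Administrator) and assuming an index exists on that column: filter on the stored code instead of the decoded label, so the predicate is applied to GL_JE_BATCHES.STATUS itself rather than to the DECODE expression, avoiding the full table scan seen in the plan above.
    -- Filter on the raw status code; 'P' decodes to 'Posted' in the expression above.
    SELECT jb.*
    FROM   gl.gl_je_batches jb
    WHERE  jb.status = 'P';
    A function-based index on the decoded value is another option, but the indexed expression would have to match the view's DECODE exactly, so filtering on the raw code is usually the simpler route.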

  • System/Query Performance: What to look for in these tcodes

    Hi
    I have been researching system/query performance in general in the BW environment.
    I have seen tcodes such as
    ST02 :Buffer/Table analysis
    ST03 :System workload
    ST03N:
    ST04 : Database monitor
    ST05 : SQL trace
    ST06 :
    ST66:
    ST21:
    ST22:
    SE30: ABAP runtime analysis
    RSRT:Query performance
    RSRV: Analysis and repair of BW objects
    For example, Note 948066 provides descriptions of these tcodes, but what I am not finding are thresholds and their implications. For example, ST02 gives a "Tune Summary" screen with several rows and columns containing numerical values.
    Is there some information on these rows and columns, such as the typical range and acceptable figures for each, and which numbers under which columns suggest which problems?
    Basically, I am looking for some kind of metric for each of the indicators provided by these performance tcodes.
    Something similar to an operating system, where CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
    I would appreciate some guidelines on the use of these tcodes and, from your personal experience, which indicators you pay attention to under each tcode and why.
    Thanks

    Hi Amanda,
    I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself. In the EarlyWatch report there are red, yellow, and green indicators for the parameters.
    http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
    EarlyWatch focuses on the following aspects:
    ·        Server analysis
    ·        Database analysis
    ·        Configuration analysis
    ·        Application analysis
    ·        Workload analysis
    EarlyWatch Alert – a free part of your standard maintenance contract with SAP – is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
    Ask your Basis team for a sample EarlyWatch report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow, and green indicators.
    Understanding Your EarlyWatch Alert Reports
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
    Hope this helps.
