Query performance and Transport

Guys (those of you who already work in the field may be able to answer this better):
Let's say an end user complains about the performance of a certain query, and the developer ends up creating aggregates on the cube. Would he do all of this on the dev box and then transport it to QA and on to the production environment, or would he work directly in production and not need to transport anything? Any documentation on this would be helpful.
Thanks,
RG

Aggregates are created directly in production.
We create aggregates based on query statistics, which reflect the data read by the query and the various runtimes.
Since this is based on the data in production, we develop directly in production. Dev typically won't have any data, so you have no basis for creating the aggregate there. Even if you used the statistics data from production to create the aggregate in dev, you could not verify the performance improvement from the aggregate, because dev does not hold the data that production does.
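
To make the idea concrete: an aggregate is essentially a pre-summarized copy of the cube's fact data at a coarser granularity, so the OLAP processor reads far fewer rows. A rough, purely illustrative SQL sketch follows (table and column names are made up; in BW the aggregate is defined and maintained by the system from the aggregate definition and query statistics, not by hand-written SQL):

-- Conceptual picture of a BW aggregate: the same fact data rolled up to a
-- coarser level (material group and month instead of material and day).
-- Queries that only need this level can be answered from the much smaller table.
CREATE TABLE sales_agg_by_matgroup AS
SELECT material_group,
       calmonth,
       SUM(revenue)  AS revenue,
       SUM(quantity) AS quantity
FROM   sales_fact
GROUP  BY material_group, calmonth;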

Similar Messages

  • SAP Query Use and Transport Strategy

    Anyone wish to share their experience in the use of SAP Query?  We generally have an understanding that we don't want to be giving out this tool to end-users in Production.  We would like to create queries, and when we wish to give them out we'll attach t-codes to them and roll them out.
    However, in practice this is becoming difficult. For example, in our gold client we create queries and then typically transport them to our unit test client. But every export generates a transport request, so before we are done testing we may end up with tens of transports for a single query.
    Anyone have some ideas on a transport strategy for SAP Query? How about its use in production? Our landscape for changes is typically DEV Gold -> DEV Test -> QAS -> PRD. We would ideally like our transport strategy for queries to match what we do for everything else.

    Hi,
    Query objects are transported in different ways according to the query area in which they were created.
    In order to know which transport options are available, you must first understand how query objects are created.
    Standard Area
    Query objects are stored in the client-specific table AQLDB. They are not connected to the Change and Transport Organizer.
    Global Area
    Query objects are stored in the cross-client table AQGDB. They are connected to the Change and Transport Organizer.
    http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb467f455611d189710000e8322d00/content.htm
    Global area objects can be transported into other systems. Standard area query objects can be transported not only to other clients within their own system, but into all clients of other systems as well. In addition, query objects can be transported from the global query area to the standard query area and back within the same system. Transports are normally performed by the system administrator, not by end users, so you need the appropriate authorizations.
    Check the below links for detailed explanation
    Transporting Global Area Objects
    http://help.sap.com/saphelp_47x200/helpdata/en/ec/052786a30411d1950a0000e82de14a/content.htm
    Transporting Standard Area Objects
    http://help.sap.com/saphelp_47x200/helpdata/en/ec/052789a30411d1950a0000e82de14a/content.htm
    General Transport Description
    http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb4699455611d189710000e8322d00/content.htm
    Generating Transport Datasets
    http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb46a6455611d189710000e8322d00/content.htm
    Reading Transport Datasets
    http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb46e7455611d189710000e8322d00/content.htm
    Managing Transport Datasets
    http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb46f4455611d189710000e8322d00/content.htm
    Transporting Objects between Query Areas
    http://help.sap.com/saphelp_47x200/helpdata/en/ec/05278ca30411d1950a0000e82de14a/content.htm
    I hope this serves your purpose.
    Regards,
    Vara

  • Query Performance and Data Loading Performance Issues

    What query performance issues do we need to take care of? Please explain and let me know the relevant transaction codes. This is urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading and reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance (see the SQL sketch after this list).
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
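    To illustrate point 9, here is a minimal sketch of what such a secondary index boils down to at the database level. The table and field names are made up for illustration; in practice you would define the index in the ABAP Dictionary (SE11) on the DataSource's underlying table rather than issuing DDL directly.
    -- Hypothetical delta table read by the extractor with WHERE conditions on
    -- document date and company code, neither of which is part of the primary
    -- key. A secondary index on exactly those selection fields lets the
    -- extraction job avoid a full table scan.
    CREATE INDEX zdeliv_extract_idx
        ON zsd_delivery_items (doc_date, comp_code);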
    Hope it Helps
    Chetan
    @CP..

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes both setup as real time cubes (only plan cube is being planned against, not actual cube) and both cubes are compressed once a day.
    We are planning on implementing BIA accelerator and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to be a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned during the day in that cube)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to be a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, standard (basic) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Web Template not impacted after query changed and Transported

    Hi. All.
    We have modified a query and transported it to production, but the query changes are not reflected
    in the standard web template. It works fine in BEx Analyzer.
    The issue is that a description was truncated; initially we used the short description.
    Later we mapped it to the medium description, modified the entire flow, and reloaded. That works fine.
    Only in production are we not seeing the correct description in the web report, while it works fine in Analyzer. As a workaround, after executing the report the user has to change the characteristic property from Standard to Medium.
    But that would be a bit inconvenient for users.
    So how can we solve this problem, and why are the query changes not reflected in the web template?
    Please provide your views on this issue.
    Thanks & Regards
    Vijay

    Hi Vijay,
    One reason might be that you use the "personalization" option within the web template. If users can personalize the template, the changes will not be visible to them until their personalized versions are deleted when a new version of the query is provided. There are background tables for the personalization as well, where you can delete all objects for this template at once if necessary.
    Brgds,
    Marcel

  • Query Performance and Cache

    Hi,
    I am trying to analyze Query Performance in ST03N.
    I run a Query with Property = Cache Inactive.
    I get Runtime as say 180 sec.
    When I run the same query again, the runtime decreases; now it is, say, 150 sec.
    Please explain the reason for the lower runtime when no cache is used.

    Two things could be occurring. When you say the cache is inactive, that probably refers to the OLAP global cache. The user session still has a local cache, so if the user executes the same navigation in the same session, results are retrieved from the local cache.
    Also, as the others have mentioned, if the same DB query really is executed on the DB a second time, not only does the SQL statement parsing/optimization not need to occur again, but almost certainly some or all of the data still remains in the DB's buffer, so it only needs to be retrieved from memory rather than physically reading the disk again.

  • Query Performance and reading an Explain Plan

    Hi,
    Below I have posted a query that is running slowly for me - upwards of 10 minutes which I would not expect. I have also supplied the explain plan. I'm fairly new to explain plans and not sure what the danger signs are that I should be looking out for.
    I have added indexes to these tables, a lot of which are used in the JOIN and so I expected this to be quicker.
    Any help or pointers in the right direction would be very much appreciated -
    SELECT a.lot_id, a.route, a.route_rev
    FROM wlos_owner.tbl_current_lot_status_dim a, wlos_owner.tbl_last_seq_num b, wlos_owner.tbl_hist_metrics_at_op_lkp c
    WHERE a.fw_ver = '2'
    AND a.route = b.route
    AND a.route_rev = b.route_rev
    AND a.fw_ver = b.fw_ver
    AND a.route = c.route
    AND a.route_rev = c.route_rev
    AND a.fw_ver = c.fw_ver
    AND a.prod = c.prod
    AND a.lot_type = c.lot_type
    AND c.step_seq_num >= a.step_seq_num
    PLAN_TABLE_OUTPUT
    Plan hash value: 2447083104
    | Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |                            |   333 | 33633 |  1347   (8)| 00:00:17 |
    |*  1 |  HASH JOIN          |                            |   333 | 33633 |  1347   (8)| 00:00:17 |
    |*  2 |   HASH JOIN         |                            |   561 | 46002 |  1333   (7)| 00:00:17 |
    |*  3 |    TABLE ACCESS FULL| TBL_CURRENT_LOT_STATUS_DIM | 11782 |   517K|   203   (5)| 00:00:03 |
    |*  4 |    TABLE ACCESS FULL| TBL_HIST_METRICS_AT_OP_LKP |  178K |  6455K|  1120   (7)| 00:00:14 |
    |*  5 |   TABLE ACCESS FULL | TBL_LAST_SEQ_NUM           |  8301 |   154K|    13  (16)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND
                  "A"."FW_VER"=TO_NUMBER("B"."FW_VER"))
       2 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV" AND
                  "A"."FW_VER"="C"."FW_VER" AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE")
           filter("C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
       3 - filter("A"."FW_VER"=2)
       4 - filter("C"."FW_VER"=2)
       5 - filter(TO_NUMBER("B"."FW_VER")=2)
    24 rows selected.

    Guys, thank you for your help.
    I changed the type of the offending column, and the plan looks a lot better and results seem a lot quicker (see the sketch after the new plan below for what that change amounts to).
    However, I have added to my SELECT quite substantially and have a new explain plan.
    There are two sections in particular that have a high cost, and I was wondering if you see anything inherently wrong, or can explain more fully what the PLAN_TABLE_OUTPUT descriptions are telling me - in particular the
    INDEX FULL SCAN
    PLAN_TABLE_OUTPUT
    Plan hash value: 3665357134
    | Id  | Operation                        | Name                           | Rows | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                 |                                |    4 |   316 |    52   (2)| 00:00:01 |
    |*  1 |  VIEW                            |                                |    4 |   316 |    52   (2)| 00:00:01 |
    |   2 |   WINDOW SORT                    |                                |    4 |   600 |    52   (2)| 00:00:01 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID   | TBL_HIST_METRICS_AT_OP_LKP     |    1 |    71 |     1   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                 |                                |    4 |   600 |    51   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS                |                                |   75 |  5925 |    32   (0)| 00:00:01 |
    |*  6 |       INDEX FULL SCAN            | UNIQUE_LAST_SEQ                |   89 |  2492 |    10   (0)| 00:00:01 |
    |   7 |       TABLE ACCESS BY INDEX ROWID| TBL_CURRENT_LOT_STATUS_DIM     |    1 |    51 |     1   (0)| 00:00:01 |
    |*  8 |        INDEX RANGE SCAN          | TBL_CUR_LOT_STATUS_DIM_IDX1    |    1 |       |     1   (0)| 00:00:01 |
    |*  9 |      INDEX RANGE SCAN            | TBL_HIST_METRIC_AT_OP_LKP_IDX1 |   29 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("SEQ"=1)
       3 - filter("C"."FW_VER"=2 AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE" AND
                  "C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
       6 - access("B"."FW_VER"=2)
           filter("B"."FW_VER"=2)
       8 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND "A"."FW_VER"=2)
       9 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV")
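    For anyone hitting the same pattern: the TO_NUMBER("B"."FW_VER") calls in the first plan's predicate section show that FW_VER was stored as a character column in TBL_LAST_SEQ_NUM while it is numeric elsewhere, so every join row paid an implicit conversion and an index on that column could not be used for the numeric comparison. A hedged sketch of the kind of column-type change described above (the table and column names come from the thread; the working column name and exact steps are assumptions, since Oracle only allows changing a column's datatype in place when the column is empty):
    -- Add a numeric copy of the column, migrate the data, then swap the names.
    ALTER TABLE wlos_owner.tbl_last_seq_num ADD (fw_ver_num NUMBER);
    UPDATE wlos_owner.tbl_last_seq_num SET fw_ver_num = TO_NUMBER(fw_ver);
    ALTER TABLE wlos_owner.tbl_last_seq_num DROP COLUMN fw_ver;
    ALTER TABLE wlos_owner.tbl_last_seq_num RENAME COLUMN fw_ver_num TO fw_ver;
    -- Any index that previously covered the old character column would need to
    -- be recreated on the new numeric column.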

  • Query Performance and Result Set Query/Sub-Query

    Hi,
    I have an InfoSet with as many as 15 joins with different ODS objects and master data. The ODS objects are quite huge (20 million and 160 million records). It is taking a lot of time even for a few days of data, and we need to get at least 3 months of data in a reasonable amount of time.
    Could anyone please tell me whether a sub-query or result set query has to be written against the same InfoProvider (cube, InfoSet, etc.), or whether they can be used in queries
    on different InfoProviders?
    Please suggest...Thanks in Advance.
    Regards
    Anil

    Hi Bhanu,
    I needed some help defining the precalculated value set, as I wasn't successful.
    Please suggest answers, if possible, to the questions below.
    1) Can I use filter criteria to restrict the value set for the characteristic when I define a query on an ODS? When I tried this it gave me errors as below:
    "System error in program CL_RSR_REQUEST and form  EXECUTE_VTABLE:COB_PRO_GET_ALWAYS....                     Error when filling value set DELIVERY..                                               
    Job cancelled after system exception ERROR_MESSAGE"                                           
    2) Can I create a value set predefined on an InfoSet? I was not successful, as InfoSets have names such as "InfosetName_F000232" and I cannot find the characteristic in the Parameter tab for the value set in the Reporting Agent.
    3) How does the precalculated value set variable help if I am not using any filter criteria and it stores the list of all values for the variable from the ODS, which consists of 20 million records?
    Thanks for your help.
    Kind Regards
    Anil

  • Structures Vs RKFs and CKFs In Query performance

    Hi Gurus,
    I am creating a GL query which will return a couple of key figures and some calculations as well, across different GL accounts, and I wanted to know which will be more beneficial: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
    Which option is better for query performance?
    Thanks in advance

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improves query performance and ensures that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table? Is partitioning the table into filegroups the best option, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations.
    For example, suppose our table has one hundred columns and some columns are not directly related to the table's object (say a table named userinfo that also stores address_street, address_zip, and address_province columns; we can move those into a new table named Address and add a foreign key in userinfo referencing Address). In this situation, by splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan.
    Another situation is when the table's records can be grouped easily, for example by a column named year that stores the product release date; in that case we can partition the table into filegroups to improve query performance (see the sketch after the links below). Usually we perform both methods together. Additionally, we can add an index to the table to improve query performance. For more detailed information, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
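    As a minimal sketch of the partition-by-year idea above (all object names are made up for illustration; in a real setup each partition would usually map to its own filegroup rather than PRIMARY):
    -- Partition function: maps year values to partitions (RANGE RIGHT puts each
    -- boundary value into the partition to its right).
    CREATE PARTITION FUNCTION pf_release_year (int)
        AS RANGE RIGHT FOR VALUES (2010, 2011, 2012, 2013);
    -- Partition scheme: maps the partitions onto filegroups.
    CREATE PARTITION SCHEME ps_release_year
        AS PARTITION pf_release_year ALL TO ([PRIMARY]);
    -- The partitioning column must be part of any unique/clustered key.
    CREATE TABLE dbo.Product
    (
        product_id   int           NOT NULL,
        release_year int           NOT NULL,
        name         nvarchar(100) NOT NULL,
        CONSTRAINT pk_product PRIMARY KEY (product_id, release_year)
    ) ON ps_release_year (release_year);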
    Allen Li
    TechNet Community Support

  • Query Performance - Analyzer vs WAD

    Hi experts,
    We've been having user complaints regarding performance in BW. The queries are accessed using the Analyzer/Excel interface. As a test I've tried comparing opening the query through Analyzer and through Web Application Designer. With Excel, my load from S900 was taking almost 2 minutes. With the web browser, the initial load was just 30 seconds! This is great; however, the number is deceiving because it's only showing me a subset of the data: the first 25% of the data is shown, and if I want to view more I have to click 'next page' (called 'Next Rows' on screen). The subsequent set of records (say 25-50%) then takes another 30 seconds. So if you hit next page each time, you end up with 4 loads of 30 seconds each, and thus the same performance as Excel (2 minutes total wait), although maybe less stressful to sit and watch.
    QUESTION:
    It seems like the subsequent loads are taking 30 seconds because they are re-querying the back-end data (the S900 Cube) instead of re-querying the resultset.  Is that true?  Do you see my problem?  Is there a better way to do this to improve performance?
    Thanks for any help you can provide!
    -Patrick

    hi,
    Regarding query performance, please check the following; hope it gives you some ideas.
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'
    some docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc

  • Effect of Restricted Keyfigure & calculated keyfigure in query performance

    Hi,
    What is the effect of restricted key figures and calculated key figures on query performance?
    Regards
    Anil

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improves query performance and ensures that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
    Other than performance, there might be other considerations in determining which of the options should be used.
    If the RKFs are query-specific and not used in the majority of other queries, I would go for structure selections. And from my personal experience, sometimes all the developers end up with so many RKFs and CKFs that you easily get lost in the web, not to mention the duplication.
    If the same structure is needed widely across most of the queries, it might be a good idea to have a global structure available across the provider, which can considerably cut down development time.

  • Query performance - benchmarking

    The performance of a query at any given time is determined by many variables, and I would like to know how to detect performance issues against a benchmark. One thought was to store some data in a separate cube (and not load any more data into it), run a specific query daily (a few times), and have the runtime info updated in the BW statistics cube. Then, when users complain about performance issues on the system for different queries, we can run this specific query to see if there is really a true performance issue in the system.
    The objective is to establish benchmarks for tracking query performance. Any thoughts or suggestions would be appreciated.
    Thanks,
    Sree

    hi Sree,
    That's a good idea, but I think it only considers the data-volume factor; other factors such as query design, aggregates, frontend, and network also matter. Take a look at the following for query performance:
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'
    some docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://websmp206.sap-ag.de/~sapidb/011000358700001394912002
    hope this helps.

  • How to improve query performance at the report level and designer level

    How do I improve query performance at the report level and at the designer level?
    Please let me know in detail.

    First, it's all based on the design of the database, the universe, and the report.
    At the universe level, check your contexts very well to get optimal performance from the universe, and also your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to match it with the key fields in the database.
    good luck
    Amr

  • Creation and transportation of query from development to simulation system

    Hello experts,
    I need to create a new query in the development system in the General Ledger folder in Financial Reporting, and then transport it to the simulation system. What are the correct steps I need to follow to avoid any issues during this transport?
    Do I need to make any modifications to the role ZS_XX_BEX_MENU and transport that as well?
    Could anyone help me in this regard giving me the exact procedure I need to follow.
    Regards

    No need to modify the role, but if the query is in $TMP, reassign it to your own request.
    Follow the general procedure:
    1) In RSA1 go to the Transport tab and collect your query.
    Drag it to the right screen.
    If it is in $TMP, change it to your own request. For this you may need access.
    2) The query collection contains all objects that were used in that query.
    If any InfoObjects were created newly, check them for transport.
    3) Then, finally, click on the transport (truck) icon.
    4) By default, it will collect all new objects, including newly created InfoObjects. You can change the collection with your own selection.
    You will get a request number here. Please save this number so that you can check it in SE09.
    5) In SE09, search for your request number.
    6) Release the request in the proper sequence (sub-contents like the InfoProvider first, and then the query).
