Timing Query Performance in Java

Hello again everyone,
I previously asked a question about measuring the execution time of an individual thread, and I have been unable to uncover an acceptable answer. So, instead of threads I may use system processes. Some background...
I am writing a database load-testing application that simulates load by submitting a set of queries through multiple connections. These connections can be separate threads or processes that run concurrently in order to get a feel for how the database performs under the load of a number of users. My question is: does anyone know how to get the actual execution time of a single process? I am afraid that, due to the large number of users, the system will be unacceptably slow. Normally I could do something like this ...
long start = System.currentTimeMillis();
executeJDBCquery();
long end = System.currentTimeMillis() - start;
however, I am concerned that with a large number of users this will not provide an accurate query execution time, since the system will be distributing its processing power over so many processes. The first suggestion that was raised is to get the query processing time from the RDBMS; however, I can't figure out how this can be done with Oracle 8i. Is there a hidden JDBC method or property that will give the server-side execution time for a query, or can anyone suggest a solution that will work with Oracle? If anyone knows how to get the actual execution time of a single system process, that would be very helpful as well. Thanks for any help that you may be able to give, and let me know if I can provide any more details.
Dan
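
One option on the client side (an editorial sketch, not from the original thread): on Java 5 or later, java.lang.management.ThreadMXBean can report the CPU time consumed by the calling thread, which is less distorted by contention from the other connections than wall-clock time. Note that it still does not include time the query spends executing inside the database server, only CPU used by the JDBC client thread.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class PerThreadTimer {
    public static void main(String[] args) {
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        if (!tm.isCurrentThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU timing is not supported on this JVM");
            return;
        }
        long cpuStart  = tm.getCurrentThreadCpuTime();   // nanoseconds of CPU used by this thread
        long wallStart = System.currentTimeMillis();

        // executeJDBCquery();   // the query call from the post would go here

        long cpuMillis  = (tm.getCurrentThreadCpuTime() - cpuStart) / 1000000;
        long wallMillis = System.currentTimeMillis() - wallStart;
        System.out.println("CPU time:  " + cpuMillis + " ms");
        System.out.println("Wall time: " + wallMillis + " ms");
    }
}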

Try this:
Create a new table in Oracle to track the times, such as:
Create Table time_log (Thread_ID number, Query_ID number,
start_time number, end_time number);
On the tables that you are performing the queries on, create before and after statement-level triggers. These triggers would insert a record into the time_log table as required (i.e., the before trigger records a start time, the after trigger records an end time).
To properly keep track of the execution times, you would need a thread_id (possibly pass this as part of the query call) and a table/query_id. This way you would know which thread queried what table.
For your final timing computations, simply query the time_log table and compute the differences (see the JDBC sketch below).
One note: the start and end times are numbers, not dates, as Oracle dates only resolve down to seconds. You will have to use DBMS_UTILITY.GET_TIME, which returns the current time in hundredths of a second.
HTH,
Sean
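
To illustrate the last step, here is a minimal JDBC sketch (an editorial addition, assuming the time_log layout above and the example connection details used later in this thread; the division by 100 assumes DBMS_UTILITY.GET_TIME values in hundredths of a second):

import java.sql.*;

public class TimeLogReport {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; substitute your own.
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@dbserver1:1521:db1", "user1", "passwd1");
        PreparedStatement ps = conn.prepareStatement(
            "SELECT thread_id, query_id, end_time - start_time FROM time_log");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            long hundredths = rs.getLong(3);           // GET_TIME ticks (1/100 s)
            System.out.println("thread " + rs.getInt(1) +
                               ", query " + rs.getInt(2) +
                               ": " + (hundredths / 100.0) + " s");
        }
        rs.close(); ps.close(); conn.close();
    }
}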

Similar Messages

  • Query performance on RAC is a lot slower than single instance

    I simply followed the steps provided by Oracle to install a RAC DB of 2 nodes.
    The performance on insertion (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
    However, the performance on the select query is very slow compared to a single instance.
    I have tried different methods for the storage configuration (ASM with raw, OCFS2), but the performance is still slow.
    When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as a single instance).
    I am using RHEL 5 64-bit (16 GB of physical memory) and Oracle 11.1.0.6 with patchset 11.1.0.7.
    Could someone help me debug this problem?
    Thanks,
    Chau
    Edited by: user638637 on Aug 6, 2009 8:31 AM

    Top 5 timed foreground events:
    DB CPU: time 943 s, %DB time 47.5%
    cursor: pin S wait on X: waits 13,940, time 321 s, avg wait 23 ms, %DB time 16.15%
    direct path read: waits 95,436, time 288 s, avg wait 3 ms, %DB time 14.51%
    IPC send completion sync: waits 546,712, time 149 s, avg wait 0 ms, %DB time 7.49%
    gc cr multi block request: waits 7,574, time 78 s, avg wait 10 ms, %DB time 4.0%
    Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.
    The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
    You should check the SQL statements from the report and tune them.
    - Check the execution plan.
    - If there is no index, consider creating one.
    SQL> set autot trace explain
    SQL> sql statement;
    cursor: pin S wait on X.
    A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
    Use bind variables and avoid dynamic SQL (see the PreparedStatement sketch at the end of this reply).
    http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
    Check the MEMORY_TARGET initialization parameter.
    By the way, you have a high "DB CPU" (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
    Good Luck
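
    As a concrete illustration of the bind-variable advice above (an editorial sketch, not part of the original reply; the table and column names are made up):

    import java.sql.*;

    public class BindVariableExample {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbserver1:1521:db1", "user1", "passwd1");

            // Bad: concatenating literal values forces a hard parse for every distinct value.
            //   stmt.executeQuery("SELECT * FROM orders WHERE customer_id = " + id);

            // Better: a bind variable lets Oracle reuse the shared cursor.
            PreparedStatement ps = conn.prepareStatement(
                "SELECT order_id, amount FROM orders WHERE customer_id = ?");
            ps.setInt(1, 42);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " " + rs.getDouble(2));
            }
            rs.close(); ps.close(); conn.close();
        }
    }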

  • How to get comparable Oracle JDBC performance using Java 1.4 vs 1.1.7?

    Our application makes extensive use of JDBC to access an Oracle database. We wrote it a number of years ago using java 1.1.7 and we have been unable to move to new versions of java because of the performance degradation.
    I traced the problem to JDBC calls. I can reproduce the problem using a simple program that simply connects to the database, executes a simple query and then reads the data. The same program running under java 1.4 is about 60% slower than under java 1.1.7. The query is about 80% slower and getting the data is about 150% slower!
    The program is attached below. Note, I run the steps twice as the first time the times are much larger which I attribute to java doing some initializations. So the second set of values I think are more representative of the actual performance in our application where there are numerous accesses to the database. Specifically, I focus on step 4 which is the execute query command and step 5 which is the data retrieval step. The table being read has 4 columns with 170 tuples in it.
    Here are the timings I get when I run this on a Sparc Ultra 5 running
    SunOs 5.8 using an Oracle database running 8.1.7:
                     java 1.1.7  java 1.4
            overall:    2.1s         3.5s
            step 1:     30           200
            step 2:    886          2009
            step 3:      2             2
            step 4:      9            17
            step 5:    122           187
            step 6:      1             1
            step 1:      0             0
            step 2:    203           161
            step 3:      0             1
            step 4:      8            15   <-   87% slower
            step 5:     48           117   <-  143% slower
            step 6:      1             2
    I find the same poor performance from java versions 1.2 and 1.3.
    I tried using DataDirect's type 4 JDBC driver which gives a little better performance but overall it is still much slower than using java 1.1.7.
    Why do the newer versions of java have such poor performance when using JDBC?
    What can be done so that we can have performance similar to java 1.1.7
    using java 1.4?
    ========================================================================
    import java.util.*;
    import java.io.*;
    import java.sql.*;
    public class test12 {
        public static void main(String args[]) {
            try {
                    long time1 = System.currentTimeMillis();
    /* step 1 */  DriverManager.registerDriver(
                        new oracle.jdbc.driver.OracleDriver());
                    long time2 = System.currentTimeMillis();
    /* step 2 */  Connection conn = DriverManager.getConnection (
                  "jdbc:oracle:thin:@dbserver1:1521:db1","user1","passwd1");
                    long time3 = System.currentTimeMillis();
    /* step 3 */  Statement stmt = conn.createStatement();
                    long time4 = System.currentTimeMillis();
    /* step 4 */  ResultSet rs = stmt.executeQuery("select * from table1");
                    long time5 = System.currentTimeMillis();
    /* step 5 */  while( rs.next() ) {
                      int message_num = rs.getInt(1);
                      String message = rs.getString(2);
                  }
                    long time6 = System.currentTimeMillis();
    /* step 6 */  rs.close(); stmt.close();
                    long time7 = System.currentTimeMillis();
                System.out.println("step 1: " + (time2 - time1) );
                System.out.println("step 2: " + (time3 - time2) );
                System.out.println("step 3: " + (time4 - time3) );
                System.out.println("step 4: " + (time5 - time4) );
                System.out.println("step 5: " + (time6 - time5) );
                System.out.println("step 6: " + (time7 - time6) );
                System.out.flush();
            } catch ( Exception e ) {
                System.out.println( "got exception: " + e.getMessage() );
            }
            // ... repeat the same 6 steps again ...
        }
    }

    If I run my sample program with the -server option, it
    takes a lot longer (6.8s vs 3.5s).
    Which is to be expected, as the -server option optimizes for long-running programs - so it should go with my second suggestion, more below...
    I am not certain what you mean by "just let the jvm
    running". Our users issue a command (in Unix) which
    invokes one of our java programs to access or update
    data in a database. I think what you are suggesting
    would require that I rewrite our application to have a
    java program always running on the user's workstation
    and to also rewrite our
    commands (over a hundred) to somehow pass data to and
    receive data from this new server java program. That
    does not seem very reasonable just to move to a new
    version of java. Or are you suggesting something else?
    No, I was just suggesting what you describe. But if this is not an option, then maybe you should port your java programs to C or another native language. Or you could try the IBM JDK with the -faststart (or similar) option. If the Unix you mention is AIX, there would also be the option of a resettable VM, but I cannot say whether that VM would solve your problem. Java is definitely not good for applications which only issue a few short-lived commands, because the HotSpot compiler cannot be used efficiently there. You can only try to approach 1.1.7 performance by experimenting with VM parameters (execute java -X to list them).

  • Bad performance of Java Web Report in BI 7

    We are experiencing poor performance with the Java web application (EP)
    compared to the ABAP web interface.
    We migrated our BI reports from the ABAP web interface to the Java web
    interface due to our upgrade to Netweaver2004 environment.
    The problem is that it takes much longer time to load the BI reports in
    Java web than in ABAP web.
    It is the same situation in RSRT. Java web needs more time to execute
    the same query.
    There is no EP logon performance delay, so I think the EP is configured correctly.
    But the response time of the BI Java web report is delayed by an average of 4-5
    seconds for every query compared to that of the ABAP web report.
    Has anyone experienced this situation?

    Hi,
    I am facing a similar problem. How did you solve it?
    Regards,
    Apurva

  • Forms - Query - Performance

    Hi,
    I am working on an application developed in Forms 10g and Oracle 10g.
    I have a few very large transaction tables in the database, and most of the screens in my application are based on these tables.
    When a user performs a query (without any filter conditions), the whole table is loaded into memory, which takes a very long time. Further queries on the same screen perform better.
    How can I keep these tables in memory (buffer) always to reduce the initial query time?
    or
    Is there any way to share the session buffers with other sessions, so that it does not take a long time in each session?
    or
    Any query performance tuning suggestions will be appreciated.
    Thanks in advance

    Thanks a lot for your posts; very large means around
    12 million rows. Yep, that's a large table.
    I have set Query All Records to "No". Which is good. It means only enough records are fetched to fill the initial block. That's probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.
    Even when I try the query in SQL*Plus it is taking a
    long time. Sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it. We can't guess why a query is slow if we have no idea what the query is.
    My concern is, when I execute the same query again or
    in another session (some other user or same user),
    can I increase the performance because the tables are
    already in memory. any possibility for this? Can I
    set any database parameters to share the data between
    sessions like that... The database already does this. If data is retrieved from disk for one user it is cached in the SGA (Shared Global Area). Mind the word Shared. This caching information is shared by all sessions, so other users should benefit from it.
    Caching also has its limits. The most obvious one is the size of the SGA of the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.
    Am I thinking in the right way? Or am I lost somewhere? Don't know.
    There's two approaches:
    - try to tune the query or database to have better performance. For starters, open SQL*Plus, execute "set timing on", then execute "set autotrace traceonly explain statistics", then execute your query and look at the results. It should give you an idea of how the database is executing the query and what improvements could be made. You could come back here with the SELECT statement and the timing and trace results, but the database or SQL forum is probably better
    - MORE IMPORTANTLY: think about whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?
    Thanks

  • Problem with query performance

    Hi All,
    If we run a report while data is loading, will it make any difference to the query performance? I have a cube with 230 million records, and I am running a delta which has another 230 million records. Now I am trying to run the queries on the cube, and all of them timed out.
    But I have a DSO with 480 million records, and the same queries run fine on the DSO. However, I want to run the reports on the cube, not on the DSO; what can I do? Is the data load causing any problem in the query runtime?
    Please advise me what to do.
    Regards
    Kiran

    Hi,
    My load finished. Now I have created indexes and tried to run the query, but every query on this cube times out. On top of the DSO we have the same queries, and those queries run fine. What could be the reason, and how can I go ahead and improve the query performance? Please advise me.
    Kiran
    Edited by: kiran kumar on Mar 14, 2009 1:28 AM

  • Fuzzy searching and concatenated datastore query performance problems.

    I am using the concatenated datastore and indexing two columns.
    The query I am executing includes an exact match on one column and a fuzzy match on the second column.
    When I execute the query, performance should improve as the exact-match column is set to return fewer values.
    This is the case when we execute an exact match search on both columns.
    However, when one column is an exact match and the second column is a fuzzy match this is not true.
    Is this normal processing, and why? Is this a bug?
    If you need more information please let me know.
    We are under a deadline and this is our final road block.
    TIA
    Colleen GEislinger

    I see that you have posted the message in the Oracle text forum, good! You should get a better, more timely answer there.
    Larry

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
    We are facing a serious query performance issue after migration of queries from 3.5 to 7.0.
    I executed a query in 3.5 with some variable values, and it takes a fraction of a second to display the output. But the same migrated query with the same variable entries takes a very long time and gives a timeout error.
    We are not using any aggregates at the InfoProvider level.
    Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 query takes a very long time if more selections are made.
    I checked for notes, but I didn't find a specific note for this particular scenario. I found notes only for general query performance improvement.
    I want to know why only in 7.0 the same 3.5 query takes a long time and gives a timeout error. Please suggest some notes or suggestions related to this scenario.
    Regards,
    Chan

    Hi,
    Queries in BI 7.0 are almost the same as queries in 3.x format.
    In order to check whether the problem is in the query runtime (database time) or the Java runtime (probably rendering), you should try running it from RSRT once in Java web and once in ABAP web.
    If the problem is only with Java web, then you should take the URL and add &profiling=X at the end.
    After the query execution you can view the statistics, which will be shown at the top of the page.
    In my experience, the problem is in the rendering phase of the query. One thing that could be done is to limit the number of rows shown on each page, which can be done by changing the 0ANALYSIS web template - it's one of the web template parameters.
    Tomer.

  • Performance of Java Swing Vs Ajax ( Java Script )

    We developed an application in Swing which displays the data received from a web-service query in tables and JTrees.
    The current technological trend is toward Ajax. I could build this application in Ajax, but I am not willing to replace Swing.
    I have the following question: why should I go for Ajax over Swing, considering only my type of application (besides plugins, the JRE, etc.)?
    And how does the performance of a JavaScript (Ajax) application compare with Swing on the following grounds?
    1. Rendering of components in Swing & HTML ( browser )
    2. Performance of JavaScript over Java (CPU, memory usage)
    3. Event model in JavaScript versus Java Swing
    4. Multitasking & Multithreading.
    5. Storing data in variables.

    The advantage of AJAX over Swing is clear, no special software is needed on a client computer other than an up to date browser.
    Swing does have some advantages however in that it is much more established and there are many more tools available for developing a front end GUI application in Swing.
    I have yet to find what I consider a "good" IDE for Javascript development on multiple platforms, Eg. IE, Firefox, etc...
    Javascript is also not truly object oriented and that makes organizing your code and your design harder.
    Performance, I feel, is a moot point between the two as the difference would hardly be noticeable; however, I would guess that JavaScript performs somewhat worse than Java.

  • Query performance.

    Hi
    I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results I have created a view with the entire result set, and in the procedure there are a number of IF statements that determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated (see the sketch at the end of this thread).
    My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems to be rather slow to run.
    Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
    Or should I work on the query performance?
    What would be the most efficient solution for my problem?
    Any advice would be greatly appreciated.
    Thanks

    When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
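
    As an editorial aside, here is a minimal JDBC sketch of the kind of conditional filtering described above (the names my_view, col_a and col_b are placeholders, and the original poster uses a PL/SQL procedure rather than Java):

    import java.sql.*;

    public class OptionalFilters {
        // v1 and v2 may be null when the user leaves a parameter blank.
        static ResultSet query(Connection conn, String v1, String v2) throws SQLException {
            StringBuilder sql = new StringBuilder("SELECT * FROM my_view WHERE 1 = 1");
            if (v1 != null) sql.append(" AND col_a = ?");
            if (v2 != null) sql.append(" AND col_b = ?");
            PreparedStatement ps = conn.prepareStatement(sql.toString());
            int i = 1;
            if (v1 != null) ps.setString(i++, v1);
            if (v2 != null) ps.setString(i++, v2);
            return ps.executeQuery();   // caller closes the statement via rs.getStatement()
        }
    }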

  • How to improve the query performance in to report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please let me know the details.

    First, it all depends on the design of the database, the universe and the report.
    At the universe level, check your contexts carefully to get optimal performance from the universe, and also check your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters) and so on.
    And when you create a parameter, try to match it with the key fields in the database.
    good luck
    Amr

  • Report burst:To increase query performance in xcelsius

    Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
    Report bursting is only for distributing your reports to your end users.
    You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in xcelsius.
    -Anil

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using oracle 9i DB and forms 6i.
    In the query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; another table (B) has 2 crore records.
    The records fetched are in the range of 1-500.
    Table A has no index on the main columns; after creating an index on the main columns of table A, the query fetched the data quickly.
    But the DBA team doesn't want to create an index on table A because of a tablespace problem.
    If the index is created on the main table (A), there would be a performance overhead in production.
    Concurrent user capacity is 1500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment "If create the index on main table (A) ,then performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
    Justin

  • How to improve query performance built on a ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, roughly 300 million records covering 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips which will help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Do as many restrictions as you can before
    calculations. Try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
