Applied performance tuning techniques, but no result!

Hi,
Requirement:
1. Display data from tables like BSEG, BKPF, CSKS, etc.
2. The amount of data that needs to be displayed is very huge.
3. The requirement will be changed only if we agree that ABAP cannot improve performance any further.
Result:
1. Bad performance, particularly on BSEG.
2. Sometimes we run into a memory overflow issue, because the number of records pulled from the tables is very high.
What I did:
1. I used the FOR ALL ENTRIES method.
2. Used available indexes in the WHERE conditions.
3. Used CLEAR and REFRESH statements wherever necessary.
4. Applied all performance-related options, to the best of my knowledge.
5. There are no syntax, extended check, or Code Inspector errors or warnings.
I know this is because of the huge data volume, but as a developer, how can I still reduce the runtime?
How can I solve this problem?
Is there any way to modify my program architecture to get a result?
Thanks,
Naveen Inuganti.

Hello Naveen,
As I can see from your question, you have tried most of the basic fine-tuning methods to improve the performance of the program.
There might still be some things left, like using binary searches with proper sort criteria, fine-tuning a loop's processing, etc., but that could be suggested only by looking at the designed program.
My recommendation is to first recheck the design of the program, verify whether any loop processing or search criteria can be fine-tuned, and make changes accordingly.
If the runtime still does not improve, try splitting the selection criteria logically inside the program (as the customer might not agree to change the requirement). Once each individual "split" selection's data is processed, you can collect all the data for the common processing.
For memory issues, use FREE and REFRESH statements to delete all the data which is not required. Sometimes we keep internal tables with huge data to be used in later processing. My suggestion is to delete such data once your first steps of processing are completed and select the data again when it is required. You can also check whether "SELECT..ENDSELECT" - or a package-wise SELECT, as sketched below - can be used, which depends on the case.
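A minimal sketch of such package-wise processing (the field selection, the parameters P_BUKRS/P_GJAHR and the PROCESS_PACKAGE routine are made-up placeholders, not the actual program):

DATA: lt_bseg TYPE STANDARD TABLE OF bseg.

* Read BSEG in blocks of 10,000 rows instead of one huge internal table.
SELECT * FROM bseg
         INTO TABLE lt_bseg
         PACKAGE SIZE 10000
         WHERE bukrs = p_bukrs
           AND gjahr = p_gjahr.
  " Process/aggregate this block; lt_bseg is overwritten with the next package.
  PERFORM process_package.
ENDSELECT.

FREE lt_bseg. " Release the memory once all blocks are processed.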
If this is a report which displays data based on some criteria from time to time, then depending on the various selection criteria being used, you can try an alternative solution:
1) Create a database view and, each time the program runs and reports the data, store the data there as well. The view and the design of the database will depend on the requirement and the complexity of the data being shown.
2) The next time the program runs, only the delta data needs to be processed from the actual database tables (BKPF, BSEG, CSKS etc.); the rest of the data can be fetched from the previously processed data.
This might be helpful particularly if the report is working on timestamps.
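A rough sketch of such a delta read on BKPF, using the entry date (P_BUKRS and LV_LAST_RUN are placeholders; LV_LAST_RUN would be read from a small custom control table that stores the date of the previous successful run):

DATA: lt_bkpf TYPE STANDARD TABLE OF bkpf.

* Fetch only the document headers entered since the last run
* (BKPF-CPUDT = day on which the accounting document was entered).
SELECT * FROM bkpf
         INTO TABLE lt_bkpf
         WHERE bukrs = p_bukrs
           AND cpudt >= lv_last_run.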
If it still does not work due to the huge data volume, the only resort is to run the program in the background.
Regards,
Pavan Kolli

Similar Messages

  • HT201412 Phone disabled.  Tried restoring and clearing to factory condition but no results.  Phone still disabled.

    My phone is disabled and can't be used.  I tried restoring it to factory condition to no avail.  What can I do now?

    What happens when you try to restore to factory? See this support document for a disabled phone and what to do. http://support.apple.com/kb/ht1212

  • Oracle 11g Performance tuning approach?

    Hello Experts,
    Is this the right forum to follow Oracle performance tuning discussions? If not, let me know which forum to use to pick up some threads on this subject.
    I am looking for a performance tuning approach for Oracle 11g. I learned there are some new items in 11g in this regard. For people who did tuning in earlier versions of Oracle,
    what will be the best way to adapt to 11g?
    I reviewed the 11g performance tuning guide, but I am looking for some white papers/blogs with case studies and practical approaches. I hope that you have used them.
    What are the other sources to pick up some discussions?
    Would you mind sharing your thoughts?
    Thanks in advance.
    RI

    The best sources of information on performance tuning are:
    1. Jonathan Lewis: http://jonathanlewis.wordpress.com/all-postings/
    2. Christian Antognini: http://www.antognini.ch/
    3. Tanel Poder: http://blog.tanelpoder.com/
    4. Richard Foote: http://richardfoote.wordpress.com/
    5. Cary Millsap: http://carymillsap.blogspot.com/
    and a few dozen others whose blogs you will find cross-referenced in those above.

  • Which tool is used for Oracle 11g performance tuning?

    Hi all
    I used Statspack for 9i's performance tuning.
    But with 11g R2, I now want to know which approach you are using for performance tuning.
    Statspack? OSW? RDA? ADDM? Or other ways?
    Thank you.

    schavali wrote:
    I would start with Automatic Workload Repository (AWR) reports to identify the bottleneck
    MOS Doc 390374.1 - Oracle Performance Diagnostic Guide (OPDG)
    MOS Doc 748642.1 - What is AWR( Automatic workload repository ) and How to generate the AWR report?
    HTH
    Srini
    Thanks for your answer.
    To read the MOS documents I need an Oracle Customer ID, is that right?
    (Where do I get that ID? Our company bought Oracle 10g without any Customer ID in it.)
    Best Wishes.
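    For reference, the AWR route mentioned above boils down to something like the following (a sketch, assuming the Diagnostics Pack is licensed; Statspack remains the no-cost alternative):
    -- Take a manual snapshot, run the workload, then take a second snapshot
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    -- ... run the workload to be analyzed ...
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    -- Generate the report; the script prompts for the begin/end snapshot IDs
    @?/rdbms/admin/awrrpt.sql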

  • Performance tuning of a function which is part of a package.

    Hello
    I just built a function in a package. And now I want to make sure that the SQL is well tuned.
    I was going through the links mentioned in this post
    Performance Tuning
    to know more about performance tuning.
    But I am confused with some simple things.
    So when we analyze a single function present in a package, should I use the SQL of the function, or
    should I use a call to that function?
    For example, first I thought of using EXPLAIN PLAN.
    I see a different explain plan when I use a call to that function than
    when I use the actual SQL of the function with some plugged-in parameters wherever required.
    What is the right way of doing it?
    Thank you

    I recommend :
    1. use SQL tracing with DBMS_MONITOR and then TKPROF on raw SQL trace file
    2. call the function as it is used in your production environment
    See good examples in http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php (just refer to DBMS_MONITOR examples and TKPROF).
    Note that SQL trace will only trace SQL and not PL/SQL. If the SQL elapsed time does not match the function call elapsed time closely enough, you need to use DBMS_PROFILER to find out where the non-SQL PL/SQL elapsed time is spent.
    I don't recommend using EXPLAIN PLAN because it has known limitations http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#PFGRF94673 (others have said "EXPLAIN PLAN lies" - you can google for this expression ....).
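    A minimal sketch of that tracing workflow (the SID/SERIAL# values are placeholders to be taken from V$SESSION, and the trace/output file names are invented):
    -- Enable extended SQL trace, including waits, for the target session
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
    -- ... the session executes the function exactly as production does ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
    Then, on the database server, format the raw trace file:
    tkprof orcl_ora_12345.trc func_tuning.txt sys=no sort=exeela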

  • Planning to start the performance tuning but....

    Friends,
    Database OS: RHEL AS 3.0
    Database: Oracle Release 9.2.0.4.0
    Number of Tables: 503
    TableSpace size - 1.8GB out of 3GB
    Max. records in a table - 1 million, and it's increasing.
    Our DB optimizer mode is CHOOSE (is it RBO?)
    We are not using Enterprise Manager and have not installed any tuning scripts like Statspack etc.
    Currently we are taking user-managed backups without any problem, so we have continued with the same since 2004.
    Now we want to tune our database. (We have never tuned our database.)
    We would like to change our optimizer from RBO to CBO.
    Can anybody tell me the first step for the performance tuning?
    Please don't suggest the Oracle doc; I'm already studying it... it's taking time...
    In the meantime......
    Step 1: Should I analyze the tables or use the dbms_stats package?
    We have not used ANALYZE or dbms_stats at all. So can I start with one of the above, or do you have any other suggestions for the first step?
    Thanks

    our manager feels that if we tune our db the performance will be more than compared to the current one.
    You have a mystique manager then; ask him what kind of "feelings" he has about my database ;) There is no place for feelings in this game. This is the life cycle to be successful: testing -> reporting -> analyzing -> take needed actions -> re-testing -> reporting -> analyzing...
    so while you are surely reading the documentation;
    Oracle9i Database Performance Planning Release 2 (9.2)
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96532/toc.htm
    Oracle9i Database Performance Tuning Guide and Reference Release 2 (9.2)
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm
    The first thing you have to do is set up an appropriate test environment with the same OS and Oracle releases and parameters;
    -- some of them to check
    SELECT NAME, VALUE
      FROM v$system_parameter a
    WHERE a.NAME IN
           ('compatible', 'optimizer_features_enable',
            'optimizer_mode', 'pga_aggregate_target', 'workarea_size_policy',
            'db_file_multiblock_read_count', .. )
    and of course the schema set and data amount. Then you run your application under load and take Statspack snapshots, and do the same after collecting statistics;
    -- customize for your configuration, schema level object statistics
    exec dbms_stats.gather_schema_stats( ownname =>'YOUR_SCHEMA', degree=>16, options=>'GATHER AUTO', estimate_percent=>dbms_stats.auto_sample_size, cascade=>TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', granularity=>'ALL');
    -- check your system stats, with sys account
    SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
    After you have the base report and the report after the change, compare the top 5 waits, the top queries which have dramatic logical I/O changes, etc. At this point you go into session-based tuning in order to understand why a specific query performs worse with the CBO compared to the RBO. You need to be able to create and read execution plans and I/O statistics at least. Here are some quick introductions;
    http://www.bhatipoglu.com/entry/17/oracle-performance-analysis-tracing-and-performance-evaluation
    http://psoug.org/reference/explain_plan.html
    http://coskan.wordpress.com/2007/03/04/viewing-explain-plan/
    and the last words again go to your manager: how does he "feel" about a 10gR2 migration? With Grid Control, AWR, ADDM and ASH, performance tuning has evolved a lot. Important note here: after 10g, the RBO is dead (unsupported).
    Best Regards,
    H.Tonguç YILMAZ
    http://tonguc.yilmaz.googlepages.com/

  • Performance Tuning in IR

    Hello All,
    We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge, and some tables of the relational database have more than 3-4 crore rows individually. We created the .oce connection file using the 'Oracle Net' option. The Oracle client version is 10g. We earlier created pivots, charts and reports in those .bqy files but had to delete them wherever possible to decrease the processing time for generating those reports.
    But deleting those from the file and retaining just the results section (the bare minimum part of the file) has not yet fully solved the performance issue. Even now, in some reports, the system gives the error message 'Out of Memory' at the time of processing. The memory of the client PCs from which the reports are generated is 1 - 1.5 GB. For some reports it even takes 1-2 hours to save the results after processing. In some cases the PCs hang during processing. When we extract the query of those reports into SQL and run it in TOAD/SQL*Plus, it does not take nearly as much time as IR.
    Would you please help us out with the aforesaid issue ASAP? Please share your views/tips/suggestions etc. with respect to performance tuning for IR. All replies would be highly appreciated.
    Regards,
    Raj

    SQL*Plus and TOAD are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time spent by IR manipulating results into objects the user isn't even asking for.
    When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply the sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed more inefficiently than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
    To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate the DAS on your server - it can only transfer 2 GB before it dies and restarts, leaving your requested document hanging. You can replicate the DAS multiple times and should do so to make sure there are enough resources available for any concurrent users to make the necessary requests and have data delivered to them.
    To tune your bqy documents...
    1) Your results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items as possible to your database request line and ask the database to make the calculation (either directly or through stored procedures), so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line need to be put on your new staging table.
    2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
    3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on the table that forces 0 rows in the table until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results, so this will prevent the user from waiting an hour while 1000s of pages are paginated across multiple reports.
    4) Halt any object rendering until requested. Same as above - create a system programmatically for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
    5) Save compressed documents.
    6) Unless this document can be run as a job, there should be NO results stored with the document; but if you do save results with the document, store the calculations too, so you at least don't have to wait for them to be applied again.
    7) Remove all duplicate images and keep the image file size small.
    Hope this helps!
    PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections take up very, very small bits of file size and, as long as there are no excessively large images, the same is true for reports, pivots and charts. Additionally, the impact of file size only matters when the user is requesting the document. The file size is never an issue when the user is processing the report, because it has already been delivered to them and cached (in the workspace and in the web client).

  • Performance tuning for new OFMW environment

    Hello
    We have just set up a new UAT environment entirely on the latest Oracle FMW products, which include:
    1) OS -RHEL V5
    2) WebCenter
    3) SOA Suite
    4) Database
    5) IDM
    6) HTTP Server
    7) WebLogic Server
    As a post-infrastructure-setup task, we need to do performance tuning as well. Should we do only WLS performance tuning, or do we need to tune each and every product in order to get optimum results?
    Any guide or resource to help?
    Please advise.
    Regards
    Dev

    When you install the SOA Suite product (and probably WebCenter too), a prerequisite check is performed
    that gives information about packages and parameters that need to be adjusted on your operating system.
    For products that run on WebLogic Server you probably have to tune the JVM, examples can be found here:
    - http://middlewaremagic.com/weblogic/?p=6930 (discusses JRockit)
    - http://middlewaremagic.com/weblogic/?p=7083 (focuses on Coherence, but the considerations can be applied to other environments as well)
    Examples that set-up the SOA-Suite can be found here:
    - http://middlewaremagic.com/weblogic/?p=6040 (discusses considerations when installing the SOA suite)
    - http://middlewaremagic.com/weblogic/?p=6872 (discusses considerations when adding the web-tier to the SOA suite)
    The enterprise deployment guides can help too:
    - SOA Suite: http://download.oracle.com/docs/cd/E21764_01/core.1111/e12036/toc.htm
    - WebCenter: http://download.oracle.com/docs/cd/E21764_01/core.1111/e12037/toc.htm
    These also contain sections on integration with Oracle Identity Management and the Oracle HTTP Server.

  • Performance Tuning Tips

    Dear All,
    In our project we are facing a lot of problems with performance; users are complaining about the poor performance of a few reports, and we are in the process of fine-tuning the reports by following all the methods/suggestions provided by SAP (like removing SELECT queries from loops, FOR ALL ENTRIES, binary search, etc.).
    But I still want to know from you what we can check from the BASIS perspective (all the settings) and also the ABAP perspective to improve the performance.
    I also have one more query: what are "table statistics", and what is their use?
    Please give your valuable suggestions for improving the performance.
    Thanks in Advance !

    Hi
    Ways of Performance Tuning
    1. Selection Criteria
    2. Select Statements
       • Select Queries
       • SQL Interface
       • Aggregate Functions
       • For All Entries
       • Select Over More than One Internal Table
    Selection Criteria
    1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code with a CHECK statement.
    2. Select with a selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized much further by the code below, which avoids CHECK and selects with a selection list:
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements contd.: Select Queries
    1. Avoid nested SELECTs.
    2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement, rather than using APPEND statements.
    3. When a base table has multiple indexes, the WHERE clause should be in the order of an index, either the primary or a secondary index.
    4. For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
    5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized much further by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines, or if the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized much further by the code below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index that has the same order of fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized for testing the existence of a record than the code mentioned below.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the WHERE condition, you can use SELECT SINGLE.
    SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
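    For example (a minimal sketch with invented key values; besides the client, SBOOK's primary key consists of CARRID, CONNID, FLDATE and BOOKID):
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '20240101'
        AND BOOKID = '00000001'.
    IF SY-SUBRC = 0.
      " Exactly one row was fetched via the primary key.
    ENDIF.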
    Select Statements contd.: SQL Interface
    1. Use column updates instead of single-row updates to update your database tables.
    2. For all frequently used SELECT statements, try to use an index.
    3. Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above code can be optimized by using the following code:
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized by using the following code:
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases network traffic considerably:
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above code can be optimized by using the following code:
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements contd.: Aggregate Functions
    • If you want to find the maximum, minimum, sum or average value, or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    The aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT and COUNT( * ).
    Consider the following extract.
            Maxno = 0.
            Select * from zflight where airln = 'LF' and cntry = 'IN'.
              Check zflight-fligh > maxno.
              Maxno = zflight-fligh.
            Endselect.
    The above code can be optimized much further by using the following code:
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements contd.: For All Entries
    • FOR ALL ENTRIES creates a WHERE clause in which all the entries of the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus:
    • Large amounts of data
    • Mixing processing and reading of data
    • Fast internal reprocessing of data
    • Fast
    The minus:
    • Difficult to program/understand
    • Memory could be critical (use FREE or PACKAGE SIZE)
    Points that must be considered for FOR ALL ENTRIES:
    • Check that data is present in the driver table
    • Sort the driver table
    • Remove duplicates from the driver table
    Consider the following extract:
    Loop at int_cntry.
      Select single * from zfligh into int_fligh
        where cntry = int_cntry-cntry.
      Append int_fligh.
    Endloop.
    The above can be optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements contd.: Select Over More than One Internal Table
    1. It is better to use a view instead of nested SELECT statements.
    2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    3. Instead of using nested SELECT loops, it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be optimized by extracting all the data from the view DD01V:
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized much further by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above code can be optimized even further by using a subquery instead of FOR ALL ENTRIES.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1. Table operations should be done using explicit work areas rather than via header lines.
    2. Always try to use binary search instead of linear search; but don't forget to sort your internal table before that.
    3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4. A binary search using a secondary index takes considerably less time.
    5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6. Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 ..." accelerates the task of updating a line of an internal table.
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    is much faster than
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, a linear search runs in O( n ) time, whereas a binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    is faster than
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7. Accessing the table entries directly via "LOOP ... ASSIGNING ..." considerably accelerates the task of updating a set of lines of an internal table.
    8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending one table to another considerably, compared to LOOP-APPEND-ENDLOOP.
    10. "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably, compared to READ-LOOP-DELETE-ENDLOOP.
    11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably, compared to DO-DELETE-ENDDO.
    Point # 7
    Modifying selected components only makes the program faster than modifying complete lines.
    e.g.,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics; READ ... BINARY SEARCH runs in O( log2( n ) ) time. The piece of code above can be optimized further as:
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O( 1 )).
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12. Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than LOOP-APPEND-ENDLOOP.
    13. Specify the sort key as restrictively as possible to make the program run faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    "SORT ITAB BY K." makes the program run faster than "SORT ITAB."
    Internal Tables contd.: Hashed and Sorted Tables
    1. For single read access, hashed tables are more optimized than sorted tables.
    2. For partial sequential access, sorted tables are more optimized than hashed tables.
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access than the same code on a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly, for partial sequential access, STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    Reward if useful

  • Oracle Performance Tuning

    Hi,
    Our organization recently also encounter Oracle DB performance issue, whereby I am searching for information for us to going into diagnostic the problem and finding the root cause of this performance problem.
    I read through this old article and found that a number of the points highlighted there were a good start for us. On just this point, we would like to understand in detail what AIX 5.3 with asynch I/O turned on has to do with Oracle DB performance. Unfortunately, the URL given in the article is no longer valid (http://www.oracle.com/technology/pub/articles/lewis_cbo.html), possibly because the article dates back to 2006; I tried it, but the page is redirected to (http://www.oracle.com/technetwork/articles/index.html).
    May I know whether you would be able to provide the new URL path for the mentioned AIX 5.3 asynch I/O information?
    Our environment:
    1) OS AIX 5.3
    2) Oracle DB 10g
    (If I provide the AWR report, would you be in a better position to help us?)
    Thank you in advance.
    Best Regards,
    KaiLoon Kok

    Hi,
    performance tuning is a very large topic and the causes of degradation may be many.
    There are several considerations that you have to make, such as:
    - which kind of application is running on the DB
    - which kind of optimizer is used
    - the striping of the data files across the disks
    - the dimension of the DB block
    - the waits that the DB encounters
    - many other things like SORT_AREA_SIZE, HASH_AREA_SIZE, etc.
    To begin your analysis, use the utlbstat.sql and utlestat.sql scripts provided in the $ORACLE_HOME/rdbms/admin directory and check the condition of the DB.
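    For reference, a bare-bones bstat/estat run looks like this (executed from SQL*Plus as a privileged user; utlestat.sql spools its report to report.txt in the current directory):
    -- Begin statistics collection
    @?/rdbms/admin/utlbstat.sql
    -- ... let the instance run under a representative load ...
    -- End collection and write report.txt
    @?/rdbms/admin/utlestat.sql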
    Increasing the SGA, like you've done, may not be the solution; sometimes it may instead decrease the overall performance.
    Also check in which cases the DB is slow, e.g. executing some particular procedure, executing a particular query, ...
    Try to characterize the DB's behavior better and then resubmit your problem.
    Bye Max

  • Performance Tuning 10g

    Hi All,
    I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
    I have the following queries in this respect:
    - How should I start? Should I use tkprof or AWR?
    - How do I enable these tools?
    - How do I view their reports?
    - What should I check in these reports?
    - Will just increasing RAM improve performance, or should we also increase the hard disk?
    - What are CPU cost and I/O?
    Please help.
    Thanks & Regards.

    dbdan wrote:
    Hi All,
    I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
    I have the following queries in this respect:
    - How should I start? Should I use tkprof or AWR?
    - How do I enable these tools?
    - How do I view their reports?
    - What should I check in these reports?
    - Will just increasing RAM improve performance, or should we also increase the hard disk?
    - What are CPU cost and I/O?
    Please help.
    Thanks & Regards.
    Here is something you might try as a starting point:
    Capture the output of the following (to a table, send to Excel, or spool to a file):
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$OSSTAT
    ORDER BY
      STAT_NAME;
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SYS_TIME_MODEL
    ORDER BY
      STAT_NAME;
    SELECT
      EVENT,
      TOTAL_WAITS,
      TOTAL_TIMEOUTS,
      TIME_WAITED
    FROM
      V$SYSTEM_EVENT
    WHERE
      WAIT_CLASS != 'Idle'
    ORDER BY
      EVENT;
    Wait a known amount of time (5 minutes or 10 minutes).
    Execute the above SQL statements again.
    Subtract the starting values from the ending values, and post the results for any items where the difference is greater than 0. The Performance Tuning Guide (especially the 11g version) will help you understand what each item means.
    To repeat what Ed stated, do not randomly change parameters (even if someone claims that they have successfully made the parameter change 100s of times).
    You could also try a Statspack report, but it might be better to start with something which produces less than 70 pages of output.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Performance Tuning Query on Large Tables

    Hi All,
    I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data but, for reasons of a less than optimal operational nature, the datasets differ in a number of ways.
    Essentially I am querying call record detail data. Table 1 (refered to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table one contains the CALLED_NUMBER which is always in a consistent format. It also contains the CALLED_DATE_TIME and DURATION (in seconds).
    Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, and there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different from those in the master table. There is also scope for the timestamp to be out by up to 30 seconds - crazy, I know, but that's just the way it is and I have no control over the source of this data. Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
    I want to create a join returning all of the master data, matching the master table to the reconciliation table on CALLED_NUMBER / CALL_DATE_TIME / DURATION. I have written the query, which works from a logic perspective, but it performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALL_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
    I paste below the create table and insert scripts to recreate my scenario & the query that I am using. Any practical suggestions for query / table optimisation would be greatly appreciated.
    Kind regards
    Mike
    -------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
    /* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
    --CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
    CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                            CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    -- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
    -- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
    -- AND DURATIONS (ALLOW +/- 5 SECS)                                        
    CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                       CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    --CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
    COMMIT;
    -- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 6);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 17);
    COMMIT;
    /* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
    SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
         COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER, COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
         COMPARE.DURATION AS COMPARE_DURATION
    FROM
    ( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
      FROM TIME_TEST
    ) MAIN
    LEFT JOIN
    ( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
      FROM TIME_TEST_COMPARE
    ) COMPARE
    ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
    AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
    AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5;

    What does your execution plan look like?
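    For reference, one way to capture that plan for the test case above is the standard EXPLAIN PLAN / DBMS_XPLAN pair (a sketch; the select list is abbreviated):
    EXPLAIN PLAN FOR
    SELECT M.CALLED_NUMBER, C.CALLED_NUMBER
      FROM TIME_TEST M
      LEFT JOIN TIME_TEST_COMPARE C
        ON INSTR(C.CALLED_NUMBER, M.CALLED_NUMBER) <> 0
       AND M.CALLED_DATE_TIME BETWEEN C.CALLED_DATE_TIME - (60/86400)
                                  AND C.CALLED_DATE_TIME + (60/86400);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the INSTR(...) condition is not an equijoin, so the optimizer cannot use a hash or sort-merge join for it; that alone is likely to force a nested-loop plan with repeated scans of the 6,000,000+ row table.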

  • VAL_FIELD selection to determine RSDRI or MDX query: performance tuning

    According to one of the How-To Guides, I am working on performance tuning. One of the tips is to try to query base members by using BAS(xxx) in the expansion pane of a BPC report.
    I did so and found an interesting issue in one of the COPA reports.
    With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
    I checked that DIRECT_INCOME has three members - GROSS_PROFIT, SGA, REV_OTHER - and none of them has any formulas.
    Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA), BAS(REV_OTHER), and I got an RSDRI query again.
    So in summary:
    BAS(PARENT) =>MDX query.
    BAS(CHILD1)=>RSDRI query.
    BAS(CHILD2)=>RSDRI query.
    BAS(CHILD3)=>RSDRI query.
    BAS(CHILD1),BAS(CHILD2),BAS(CHILD3)=>RSDRI query
    I know VAL_FIELD is an SAP reserved name for BPC dimensions. My question is: why does BAS(PARENT) => MDX query?
    Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
    George

    Ok - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
    I had mine there and, even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
    I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that a "Variable located in Default Values will be ignored in the MDX Access".
    After moving the variables to the Characteristic Restrictions, my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting fewer than 2k.
    Hope this helps someone else

  • Reg: Process Chain, query performance tuning steps

    Hi All,
    I came across a question like this: there is a process chain of 20 processes, of which 5 have completed; at the 6th step an error occurred that cannot be rectified, and the chain should be started again from the 7th step. If I go to a particular step I can run that particular step, but how can I start the entire chain again from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody, help me out.
    Please also let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
    You need to set the failed request/step to green in the database, and you also need to raise the event that will force the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module as described below:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
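    The same steps can also be scripted; an untested sketch, with the field and parameter names taken from the description above (the lv_* selection values are placeholders copied from the 'Chain' popup):
    DATA: ls_log       TYPE rspcprocesslog,
          lv_variant   TYPE rspcprocesslog-variante,
          lv_instance  TYPE rspcprocesslog-instance,
          lv_batchdate TYPE rspcprocesslog-batchdate.
    " Read the log entry of the failed step.
    SELECT SINGLE * FROM rspcprocesslog INTO ls_log
      WHERE variante  = lv_variant
        AND instance  = lv_instance
        AND batchdate = lv_batchdate.
    " Set the step to green ('G') so the chain continues with the next process.
    CALL FUNCTION 'RSPC_PROCESS_FINISH'
      EXPORTING
        i_logid    = ls_log-log_id
        i_type     = ls_log-type
        i_variant  = ls_log-variante
        i_instance = ls_log-instance
        i_state    = 'G'.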
    Query performance tuning
    General tips
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many nav. attr
    2. Avoid RKF and CKF
    3. Many chars in row.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-hand menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table RSDDSTATS to get the statistics.
    Using the cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the records transferred to the front end versus the records selected in the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent.
    If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. The +/- signs are the valuation of the aggregate design and usage (e.g. -3 is one such valuation). "++" means that its compression is good and it is accessed frequently (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate.
    If "-----", then the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you have given selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
    This is done by implementing the BW Statistics Business Content - you need to install it, feed it data, and use the ready-made reports it provides for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • Performance tuning of database using DBACOCKPIT

    Hiii Experts,
    I have installed a new server with OS: Windows Server 2012 and DB: Sybase ASE 15.7.
    Now I want to perform performance tuning of SAP and the database. For that I have configured
    DBACOCKPIT, but I could not change the parameter values, so could you tell me how to change
    parameter values and how to use DBACOCKPIT for performance tuning?
    Thank you,
    Regards,
    Omkar M.

    DBA_Guide wrote:
    Reports are running slowly; I am not sure if the database is using star transformation for the data warehouse database.
    Used the hint /*+ STAR_TRANSFORMATION */; still no performance increase.
    Checked the database parameter -
    SQL> show parameter star;
    NAME                                 TYPE        VALUE
    dg_broker_start                      boolean     FALSE
    fast_start_io_target                 integer     0
    fast_start_mttr_target               integer     0
    fast_start_parallel_rollback         string      LOW
    log_archive_start                    boolean     FALSE
    star_transformation_enabled          string      TRUE
    Any Help is appreciated.
    Thanks!
    If star transformation was not the cause of the slowness, of course the hint won't make things faster.
    I bet you any number of other hints will likewise result in no change in performance.
    This is like replacing the gas tank on your car with a LARGER tank and filling it with more gasoline to make the car go faster.
    The amount of gasoline is not a factor that limits the speed of your car.
