Performance tuning - Gather stats and statID created

SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('HR', 'SAVED_STATS');
SQL> SELECT TABLE_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, USER_STATS, GLOBAL_STATS
2 FROM USER_TABLES
3 WHERE TABLE_NAME = 'MYCOUNTRIES';
TABLE_NAME    NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
MYCOUNTRIES          0       0             0          0  NO   YES
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR',TABNAME=>'MYCOUNTRIES',ESTIMATE_PERCENT=>10,STATOWN=>'HR',STATTAB=>'SAVED_STATS',STATID=>'PREVIOUS1');
Re-running the USER_TABLES query after the gather:
TABLE_NAME    NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
MYCOUNTRIES         25       5             0          0  NO   YES
SQL> select statid, type, count(*)
2 from saved_stats
3 group by statid, type;
STATID T COUNT(*)
PREVIOUS1 C 3
PREVIOUS1 T 1
Qn) Are the stored statistics keyed by statid? i.e. every time I re-gather stats, should I specify a separate statid so that a new version of the stats is created/stored under its own statid?

Yes. Each statid identifies a separate set of statistics within the stat table, so gathering with a new statid stores a new version alongside the previous ones instead of overwriting them.
From http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461
statid Identifier (optional) to associate with these statistics within stattab
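As a minimal sketch of how those versions can then be used (the second statid here, PREVIOUS2, is just an example name), an older set of statistics stored under its own statid can later be copied back into the dictionary:

BEGIN
  -- snapshot the current dictionary statistics under a new statid
  DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'HR', tabname => 'MYCOUNTRIES',
                                stattab => 'SAVED_STATS', statid => 'PREVIOUS2',
                                statown => 'HR');
  -- copy the earlier version, identified by its statid, back into the dictionary
  DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'HR', tabname => 'MYCOUNTRIES',
                                stattab => 'SAVED_STATS', statid => 'PREVIOUS1',
                                statown => 'HR');
END;
/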

Similar Messages

  • Performance tuning in BPEL and ESB

    Hi,
    Any one can tell me how to do Performance tuning in BPEL and ESB.
    How to create WEB SERVICES in BPEL

    Hi,
    Performance tuning in BPEL and ESB:
    This is a very big topic; I can give you two points here.
    In BPEL, avoid duplicate variables. The best way to do this is, whenever you create a new variable, to ask whether an existing variable inside the process can be reused; for example, when creating the input/output variables for an Invoke activity, check whether an existing variable can be used instead of creating a new one.
    All DB-related operations should be performed in a single composite.
    How to create WEB SERVICES in BPEL
    Not sure what you want to ask here, as a BPEL process is itself exposed as a web service.
    -Yatan

  • Performance tuning: lite sessions and local ServletContext

    I have been doing some research on iPlanet performance tuning. In our
    current production environment (iAS6.0 SP1B, iWS4.1 SP2 on Solaris), since
    we don't use clustering there should be a couple of performance improvements
    we can make immediately:
    1. Use lite sessions (<session-impl>lite</session-impl> in ias-web.xml) - I
    believe that if you use lite sessions, the session data is stored in the kjs
    process space as opposed to the kxs process space. This, of course, means
    that if a kjs dies, the users on it will lose their session information, but
    it will provide a performance improvement by reducing kxs/kjs communication.
    2. Use local ServletContexts (<distributable>false</distributable> in
    web.xml) - This should cause the ServletContext to only be stored in the
    originating JVM. So again, if a kjs dies, the user will lose their
    ServletContext but again we will get a performance improvement by reducing
    kxs/kjs communication.
    What I want to understand is how our load balancing configuration will
    affect our production environment if we use this configuration. Right now
    we use sticky load balancing on all our servlets but we don't have our JSPs
    registered and therefore sticky load balancing cannot always be trusted to
    return users to the iAS they came from. We make up for this by using
    hardware load balancing that keeps the majority of our users sticky.
    However, using lite sessions and local ServletContexts will require that a
    user not only stick to an iAS, but to a specific kjs as well. Using sticky
    load balancing would ensure that, but since we also rely on our hardware
    load balancers, could they create a problem? If a user gets sent back to
    the iAS they came from by our hardware load balancers, will the kxs process
    be smart enough to return them to the kjs they came from? If so, then I
    think that means that we can safely switch to lite sessions and local
    ServletContexts, but if not, I think many users will lose their sessions.
    Thanks,
    Linc

    Please follow thru this link for your answers
    http://developer.iplanet.com/viewsource/char_tuningias/index.jsp
    Thanks
    Shital Patel

  • MS SQL Server 7 - Performance of Prepared Statements and Stored Procedures

    Hello All,
    Our team is currently tuning an application running on WL 5.1 SP 10 with a MS
    SQL Server 7 DB that it accesses via the WebLogic jConnect drivers. The application
    uses Prepared Statements for all types of database operations (selects, updates,
    inserts, etc.) and we have noticed that a great deal of the DB host's resources
    are consumed by the parsing of these statements. Our thought was to convert many
    of these Prepared Statements to Stored Procedures with the idea that the parsing
    overhead would be eliminated. However, I have read that because
    of the way that the jConnect drivers are implemented for MS SQL Server, Prepared
    Statements are actually SLOWER than straight SQL because of the way that parameter
    values are converted. Does this also apply to Stored Procedures? If anyone
    can give me an answer, it would be greatly appreciated.
    Thanks in advance!

    Joseph Weinstein <[email protected]> wrote:
    Hi. Stored procedures may help, but you can also try MS's new free type-4 driver,
    which does use DBMS optimizations to make PreparedStatements run faster.
    Joe
    Thanks Joe! I also wanted to know if setting the statement cache (assuming that
    this feature is available in WL 5.1 SP 10) will give a boost for both Prepared Statements
    and stored procs called via Callable Statements. Pretty much all of the Prepared
    Statements that we are replacing are executed from entity bean transactions.
    Thanks again

  • Performance between SQL Statement and Dynamic SQL

    Select emp_id
      into id_val
      from emp
     where emp_id = 100;
    EXECUTE IMMEDIATE
      'select ' || t_emp_id ||
      ' from emp' ||
      ' where emp_id = 100'
      into id_val;
    Will there be more impact in performance while using Dynamic SQL?

    CP wrote:
    Will there be more impact in performance while using Dynamic SQL?
    All SQLs are parsed and executed as SQL cursors.
    The two SQLs (dynamic and static) result in the exact same SQL cursor, so both methods will use an identical cursor. There is therefore no performance difference in terms of how fast that SQL cursor will be.
    When an identical SQL cursor already exists in the shared pool, it is simply reused (a soft parse). If no identical cursor is found, the SQL engine needs to compile the supplied SQL source code into a new SQL cursor (a hard parse).
    Hard parsing burns a lot of CPU cycles. Soft parsing burns less CPU cycles and is therefore better. However, no parsing at all is the best.
    To explain: if the code creates a cursor (e.g. INSERT INTO tab VALUES( :1, :2, :3 ) for inserting data), it can do it as follows:
    while More Data Found loop
      parse INSERT cursor
      bind variables to INSERT cursor
      execute INSERT cursor
      close INSERT cursor
    end loop
    If that INSERT cursor does not yet exist, it will be hard parsed and a cursor created. Each subsequent loop iteration will result in a soft parse.
    However, the code will be far more optimal as follows:
    parse INSERT cursor
    while More Data Found loop
      bind variables to INSERT cursor
      execute INSERT cursor
    end loop
    close INSERT cursor
    With this approach the cursor is parsed (hard or soft) once only. The cursor handle is then used again and again, and when the application is done inserting data, the handle is released.
    With dynamic SQL in PL/SQL, you cannot really follow the optimal approach unless you use DBMS_SQL (a complex cursor interface). With static SQL, the PL/SQL optimiser can kick in: it can optimise access to the cursors your code creates and minimise parsing altogether.
    This is however not the only consideration when using dynamic SQL. Dynamic SQL makes coding a lot more complex. The SQL can now only be checked at execution time and not at development time. There is the issue of creating shareable SQL cursors using bind variables. There is the risk of SQL injection. Etc.
    So dynamic SQL is seldom a good idea. And IMO, the vast majority of people who post dynamic SQL problems here are using dynamic SQL unnecessarily, for no justified or logical reason, creating unstable, insecure and non-performing code.
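    As a minimal sketch of the difference (reusing id_val and the emp table from the example above), the dynamic version should bind the value via USING rather than concatenate it, so repeated executions soft parse the same shareable cursor:
    DECLARE
      id_val emp.emp_id%TYPE;
    BEGIN
      -- static SQL: checked at compile time and its cursor is managed by PL/SQL
      SELECT emp_id INTO id_val FROM emp WHERE emp_id = 100;
      -- native dynamic SQL: a bind placeholder keeps the cursor text identical
      -- and shareable across executions, instead of a new literal every time
      EXECUTE IMMEDIATE 'SELECT emp_id FROM emp WHERE emp_id = :id'
        INTO id_val USING 100;
    END;
    /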

  • Performance issue with drop and re-create index

    My database table has about 2 million records. The index on the table was not optimized, so we created a new index, call it index2. The table now has the original index (index1) and index2. We then inserted data into this table from the other box, and it ran like that for a few weeks.
    Suddenly we noticed that a query which used to take a few seconds now took more than a minute. The execution plan was using index2, which technically should be faster. We checked that the statistics were up to date, and they were. We then dropped the new index, re-ran the query, and it completed in 10 seconds using the old index. This puzzled me, since the point of index2 was to make things better. We then re-created index2 and generated stats for the index, re-ran the query, and it completed in 5 seconds.
    Every time we timed the query, I shut down and restarted the box to clear all caches, so the times quoted are pure, uncached times. The execution plans for the 1-minute and the 5-second runs with index2 are nearly the same, with only minor differences in cost and cardinality. Any ideas why index2 took 1 minute before, and only 5 seconds after the drop and re-create?
    The reason I want to find the cause is to ensure this doesn't happen again, since it is impossible for me to re-create the index every time I see this issue. Any thoughts would be helpful.

    Firstly, the indexes are different: index1 is only on the time column, whereas index2 is a composite index of 3 columns.
    Here are the details. The tests were done last Friday, 3/31. Yesterday and today, when I executed the same query, the times have crept up again: yesterday it took 9 seconds and today 17 seconds. The stats job kicked in on both days and the stats are up to date. Nothing is ever deleted from this table; rows are only added.
    3/31
    Original
    Elapsed: 00:01:02.17
    Execution Plan
    0   SELECT STATEMENT Optimizer=CHOOSE (Cost=6553 Card=9240 Bytes=203280)
    1 0   SORT (UNIQUE) (Cost=6553 Card=9240 Bytes=203280)
    2 1     INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=15982 Card=2306303 Bytes=50738666)
    drop index EVENT_NA_TIME_ETYPE
    Elapsed: 00:00:11.91
    Execution Plan
    0   SELECT STATEMENT Optimizer=CHOOSE (Cost=7792 Card=9275 Bytes=204050)
    1 0   SORT (UNIQUE) (Cost=7792 Card=9275 Bytes=204050)
    2 1     TABLE ACCESS (BY INDEX ROWID) OF 'EVENT' (Cost=2092 Card=2284254 Bytes=50253588)
    3 2       INDEX (RANGE SCAN) OF 'EVENT_TIME_NDX' (NON-UNIQUE) (Cost=6740 Card=2284254)
    create index EVENT_NA_TIME_ETYPE ON EVENT(NET_ADDRESS,TIME,EVENT_TYPE);
    BEGIN
    SYS.DBMS_STATS.GENERATE_STATS('USER','EVENT_NA_TIME_ETYPE',0);
    end;
    Elapsed: 00:00:05.14
    Execution Plan
    0   SELECT STATEMENT Optimizer=CHOOSE (Cost=6345 Card=9275 Bytes=204050)
    1 0   SORT (UNIQUE) (Cost=6345 Card=9275 Bytes=204050)
    2 1     INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12878 Card=2284254 Bytes=50253588)
    4/3
    Elapsed: 00:00:09.70
    Execution Plan
    0   SELECT STATEMENT Optimizer=CHOOSE (Cost=6596 Card=9316 Bytes=204952)
    1 0   SORT (UNIQUE) (Cost=6596 Card=9316 Bytes=204952)
    2 1     INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=11696 Card=2409400 Bytes=53006800)
    Statistics
    0 recursive calls
    0 db block gets
    11933 consistent gets
    9676 physical reads
    724 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    4/4
    Elapsed: 00:00:17.99
    Execution Plan
    0   SELECT STATEMENT Optimizer=CHOOSE (Cost=6681 Card=9421 Bytes=207262)
    1 0   SORT (UNIQUE) (Cost=6681 Card=9421 Bytes=207262)
    2 1     INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12110 Card=2433800 Bytes=53543600)
    Statistics
    0 recursive calls
    0 db block gets
    12279 consistent gets
    9423 physical reads
    2608 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL> select index_name, clustering_factor, blevel, leaf_blocks, distinct_keys from user_indexes where index_name like 'EVENT%';
    INDEX_NAME            CLUSTERING_FACTOR  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS
    EVENT_NA_TIME_ETYPE             2393170       2        12108        2395545
    EVENT_PK                          32640       2         5313        2286158
    EVENT_TIME_NDX                    35673       2         7075        2394055
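    As an aside (a sketch only, assuming the index is owned by the current schema): instead of GENERATE_STATS, the statistics for the recreated index can also be gathered directly:
    BEGIN
      -- gather optimizer statistics for the recreated composite index
      DBMS_STATS.GATHER_INDEX_STATS(ownname => USER,
                                    indname => 'EVENT_NA_TIME_ETYPE');
    END;
    /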

  • Performance Tuning Issues: UNION and Stored Outlines

    Hi,
    I have two questions,
    Firstly I have read this:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_1016.htm#i35699
    What I understand is that using UNION ALL is better than UNION. The guide says:
    "The ALL in UNION ALL is logically valid because of this exclusivity. It allows the plan to be carried out without an expensive sort to rule out duplicate rows for the two halves of the query."
    Can someone explain these sentences to me?
    Secondly, my Oracle Database 10g is set to FIRST_ROWS_1; how can stored outlines help in reducing I/O cost and response time in general? Please explain.
    Thank you,
    Adith

    Union ALL and Union
    SQL> select 1, 2 from dual
    union
    select 1, 2 from dual;
    | Id | Operation | Name | Rows | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 2 | 6 (67)| 00:00:01 |
    | 1 | SORT UNIQUE | | 2 | 6 (67)| 00:00:01 |
    | 2 | UNION-ALL | | | | |
    | 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
    | 4 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
    11 rows selected.
    SQL>select 1, 2 from dual
    union all
    select 1, 2 from dual;
    | Id | Operation | Name | Rows | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 2 | 4 (50)| 00:00:01 |
    | 1 | UNION-ALL | | | | |
    | 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
    | 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
    10 rows selected.
    Adith
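    To illustrate the quoted sentence with a made-up example (a hypothetical orders table with a shipped_date column): UNION ALL is only a safe replacement for UNION when the two branches cannot return the same row, which is exactly when the duplicate-eliminating sort adds cost for no benefit:
    select order_id, 'PENDING' as status from orders where shipped_date is null
    union all
    select order_id, 'SHIPPED' as status from orders where shipped_date is not null;
    -- the two predicates are mutually exclusive, so no row can appear in both
    -- branches and the SORT UNIQUE step that UNION would add is unnecessary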

  • Performance tuning on BKPF and BSEG for my code.

    Please provide alternative code for the following so that processing is faster.
    My select queries are as follows. They take a lot of time and the system gets heavily loaded when the job is scheduled.
    select BUKRS
               BELNR
               GJAHR
               BLART
               BLDAT
               BUDAT
               TCODE
               XBLNR
               STBLG
               WAERS
               KURSF
               AWKEY
               STGRD
        into CORRESPONDING FIELDS OF TABLE it_bkpf
        from bkpf where bukrs = p_bukrs
                                              and   gjahr in s_gjahr
                                              AND BLART NE 'SA'
                              and   budat in s_date.
    select BELNR
             KOART
             SHKZG
             MWSKZ
             DMBTR
             KTOSL
             SGTXT
             VBELN
             HKONT
             KUNNR
             MATNR
             MENGE
      FROM BSEG INTO  CORRESPONDING FIELDS OF TABLE it_bseg
                                        for all entries in it_bkpf
                                         where bukrs = it_bkpf-bukrs
                                         and   belnr = it_bkpf-belnr
                                         and   gjahr = it_bkpf-gjahr
                                         and   MWSKZ in s_MWSKZ .  
    Please help.

    Hi,
    Declare an internal table with exactly the same fields, in the same order, as you are selecting from the database table and remove CORRESPONDING FIELDS OF; also check that it_bkpf is not initial before using FOR ALL ENTRIES.
    For example:
    select BUKRS
    BELNR
    GJAHR
    BLART
    BLDAT
    BUDAT
    TCODE
    XBLNR
    STBLG
    WAERS
    KURSF
    AWKEY
    STGRD
    into TABLE
    it_bkpf from bkpf where bukrs = p_bukrs
    and gjahr in s_gjahr
    AND BLART NE 'SA'
    and budat in s_date.
    IF it_bkpf[] IS NOT INITIAL.
    select BELNR
    KOART
    SHKZG
    MWSKZ
    DMBTR
    KTOSL
    SGTXT
    VBELN
    HKONT
    KUNNR
    MATNR
    MENGE
    FROM BSEG INTO TABLE it_bseg
    for all entries in it_bkpf
    where bukrs = it_bkpf-bukrs
    and belnr = it_bkpf-belnr
    and gjahr = it_bkpf-gjahr
    and MWSKZ in s_MWSKZ .
    ENDIF.
      reward if useful.

  • Gather stats:Time to complete?

    Experts..
    Could you please let me know how to calculate the time it takes to complete gathering stats (analyze) for a huge number of tables?
    I would like to estimate the time to finish the analyze in the database.
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    PL/SQL Release 9.2.0.6.0 - Production
    CORE     9.2.0.6.0     Production
    TNS for Solaris: Version 9.2.0.6.0 - Production
    NLSRTL Version 9.2.0.6.0 - Production
    Thanks

    hi,
    create a table:
    create table analyze_time(task varchar2(20), time date);
    create a procedure for stats gathering:
    create or replace procedure gather_st is
    begin
      insert into analyze_time values('start', sysdate);
      commit;
      -- write the dbms_stats call(s) here to gather stats
      insert into analyze_time values('end', sysdate);
      commit;
    end;
    /
    Execute this procedure to gather stats and afterwards query analyze_time to check what time it started and what time it finished.
    In SQL*Plus, you can also use "set timing on" and then gather stats; when it finishes, the prompt will show you the time it took to complete the operation.
    Salman
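    Another rough way to watch a long, schema-wide gather while it runs (a sketch; the owner name is a placeholder) is to count how many tables have already been analyzed since it started, for example:
    select count(*) as analyzed_today
      from dba_tables
     where owner = 'SCOTT'
       and last_analyzed >= trunc(sysdate);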

  • Oracle Application Performance Tuning

    Hello all,
    Please forgive me if I am asking this in the wrong section.
    I am a software engineer with 2 years of experience. I have been working as a performance tuning resource for Oracle EBS in my company. The work mostly involved SQL tuning and dealing with indexes etc. In this time I took an interest in DBA work and completed my OCA in Oracle 10g. Now my company is giving me an opportunity to move into an Application DBA team and I am totally confused about it.
    Becoming an Apps DBA will mean that the effort I put into the 10g certification will be of no use, since there are other papers for the Apps DBA certification. Should I stay put in performance tuning and wait for a core DBA chance, or should I accept the Apps DBA opportunity?
    Also, does my experience in SQL performance tuning hold any value as such without exposure to DBA activities?
    Kindly guide me in this regard, as I am very confused at this juncture.
    Regards,
    Jithin

    Jithin wrote:
    Hello bigdelboy , Thank you for your reply.
    Yes, By oracle Apps DBA I meant Oracle EBS.
    Clearing 1Z0-046 is an option. However, my doubt is: will clearing both the Admin I and Admin II exams be of any help without practical experience? The EBS DBA work involves support and patching, cloning, new node setup, etc.
    Also, is my performance tuning experience going to help me in any way in the journey forward as a DBA/ EBS DBA?
    Thank you for your valuable time.
    Regards,
    Jithin Sarath
    The way I read it is this.
    People notice you are capable of understanding and performance tuning SQL. And you must have some knowledge of Oracle EBS. And in fact you also have a DBA OCA.
    So there is a 98%+ chance you can be made into a good Oracle Apps DBA (and core DBA). If I were in their position I'd want you on my team too; this is the way we used to bring on DBAs in the old days before certification, and it still has a lot of merit. Okay, you can only do limited tasks at first, but the number of tasks you can do will increase dramatically over a number of months.
    I would imagine the Oracle Apps DBA team will be delighted to have someone on board who can performance tune SQL, which can sometimes seem more like an art than a science. The patching etc. can be taught in small doses. If they are a good team they won't try to give you everything at once; they'll get you to learn one procedure at a time.
    And remember, if in doubt, ask; and if you don't understand, ask again. And be safe rather than sorry on actions that could be critical.
    If you're worried about Linux: Linux Recipes for Oracle DBAs (Apress) might be a good book to read, but it could be expensive in India.
    Likewise, Oracle Applications DBA Field Guide may suit, and even RMAN Recipes for Oracle Database 11g: A Problem-Solution Approach may have some value if you need to get up to speed quickly in those areas.
    These are not perfect, but they sometimes consolidate the information neatly; only buy them if you feel they are cheap enough, as you may well buy them and feel disappointed. (These all happen to be by Apress, who have produced a number of good books, and some rubbish as well.)
    And go over the 2-day DBA examples and the Linux install examples on OTN as well; they are free compared to the books I mentioned.
    Rgds -bigdelboy.

  • Performance tuning of database using DBACOCKPIT

    Hi Experts,
    I have installed a new server with OS: Windows Server 2012 and DB: Sybase ASE 15.7.
    Now I want to perform performance tuning of SAP and the database, so I have configured
    DBACOCKPIT, but I could not change the parameter values. Could you tell me how to change
    parameter values and how to use DBACOCKPIT for performance tuning?
    Thank you,
    Regards,
    Omkar M.

    DBA_Guide wrote:
    Reports running slow; not sure if the database is using star transformation for the data warehouse database.
    Used the hint /*+ STAR_TRANSFORMATION */, still no performance increase.
    Checked database parameter -
    SQL> show parameter star;
    NAME                                 TYPE        VALUE
    dg_broker_start                      boolean     FALSE
    fast_start_io_target                 integer     0
    fast_start_mttr_target               integer     0
    fast_start_parallel_rollback         string      LOW
    log_archive_start                    boolean     FALSE
    star_transformation_enabled          string      TRUE
    Any Help is appreciated.
    Thanks!
    If star transformation was not the cause of the slowness, of course it won't make it faster.
    I bet you any number of other different hints will result in no change in performance.
    This is like replacing the gas tank on your car with a LARGER tank and filling it with more gasoline to make the car go faster.
    The amount of gasoline is not a factor that limits the speed of your car.
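    For reference, a star transformation hint is written as a comment immediately after the SELECT keyword; a sketch with made-up fact and dimension tables:
    select /*+ STAR_TRANSFORMATION */ s.amount_sold, t.calendar_month, p.product_name
      from sales s, times t, products p
     where s.time_id = t.time_id
       and s.product_id = p.product_id
       and t.calendar_year = 2024;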

  • Performance tuning documents

    Hi all,
    could you guys provide me with links or documents for SAP R/3 performance tuning,
    and performance tuning for SRM and others?
    jag

    Hi Jagadish
       Check the link below:
       http://sap-img.com/abap/abap-fine-tuning.htm
       Reward points if it helps.
    Regards
    Karan

  • Monitoring & Performance Tuning

    Hi BW Gurus,
    Please let me know common Monitoring & Performance Tuning tasks/issues and how to resolve them.
    Thanks in advance
    Rajesh

    hi Rajesh,
    take a look at the SDN forum performance center:
    Business Intelligence Performance Tuning [original link is broken]
    there are lots of docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5ee957e5-0201-0010-9c83-fe14a43cd04a
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    and check oss note
    557870 'FAQ BW Query Performance' and 567746 'Composite note BW 3.x performance Query and Web'
    417307, 409641 etc (search with 'bw performance')
    hope this helps.

  • Performance Tuning on Extractors

    Hello friends,
    Can anyone give some information on performance tuning of extractors, and on other performance tuning topics?
    Thanks for your help.

    Our book on BW performance tuning has a detailed explanation and recommendations regarding step by step process of how to do the performance tuning on Extractors. Pl. refer to the following link to know about the book.
    http://www.amazon.com/SAP-Performance-Tuning-Shreekant-Shiralkar/dp/0977725146/ref=sr_1_1?ie=UTF8&s=books&qid=1196767811&sr=1-1.
    You will notice that the book has screenshots and detailed theory on aspects such as indexes (B-tree and bitmap indexes) to help you identify and improve the performance of your BW system.
    Thanking you in advance for sharing your feedback on the book and its "reference" value for your work.
    Shreekant

  • Invalid statement in Performance Tuning Guide

    Oracle® Database Performance Tuning Guide
    10g Release 2 (10.2)
    Part Number B14211-01
    13 The Query Optimizer
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#sthref1324
    excerpt:
    "You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes."
    Emphasis mine - Gints
    Here is counterexample:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> create table blah (sex varchar2(1) not null, data varchar2(4000));
    Table created.
    SQL> insert into blah select 'F', lpad('a', 4000, 'a') from user_objects where rownum<=10;
    10 rows created.
    SQL> insert into blah select 'M', lpad('a', 4000, 'a') from user_objects where rownum<=10;
    10 rows created.
    SQL> commit;
    Commit complete.
    SQL> create bitmap index sexidx on blah(sex);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user, 'blah', cascade=>true)
    PL/SQL procedure successfully completed.
    SQL>
    SQL> set autot traceonly expl
    SQL> set lines 100
    SQL> select count(*) from blah where sex = 'F';
    SQL> /
    Execution Plan
    Plan hash value: 1028317341
    | Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |        |     1 |     2 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE               |        |     1 |     2 |            |          |
    |   2 |   BITMAP CONVERSION COUNT     |        |    10 |    20 |     1   (0)| 00:00:01 |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| SEXIDX |       |       |            |          |
    Predicate Information (identified by operation id):
       3 - filter("SEX"='F')
    SQL> set autot off
    SQL> alter session set events '10046 trace name context forever, level 12';
    Session altered.
    SQL> select count(*) from blah where sex = 'F';
      COUNT(*)
            10
    SQL> disconn
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    And here is the relevant section from the tkprof'ed trace file, confirming that a bitmap index fast full scan really was performed.
    select count(*)
    from
    blah where sex = 'F'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      1      0.00       0.03          0          0          0           0
    Fetch        2      0.00       0.00          0          3          0           1
    total        4      0.00       0.05          0          3          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3 pr=0 pw=0 time=74 us)
          1   BITMAP CONVERSION COUNT (cr=3 pr=0 pw=0 time=55 us)
          1    BITMAP INDEX FAST FULL SCAN SEXIDX (cr=3 pr=0 pw=0 time=43 us)(object id 98446)
    Gints Plivna
    http://www.gplivna.eu

    Hello Gints. I've reported this to the writer responsible for the Performance Tuning Guide. One of us will get back to you with the resolution.
    Regards,
    Diana
