Tuning up indexes

Environment: SQL Server 2008 R2
Problem: in the execution plan for a basic SELECT statement, a single operator accounts for 99% of the total CPU cost. When the SELECT is combined with a LEFT OUTER JOIN, data retrieval slows down noticeably; the plan shows operators at 82%, 61%, and 100% CPU cost. A SQL statement with a LEFT JOIN and a WHERE clause is likewise slow. How can the performance be optimized?
This is a general issue.
Please help.
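
A typical first step is to index the join and filter columns and re-check the plan. A minimal T-SQL sketch -- the enrollment table and its columns are hypothetical stand-ins, since the actual query was not posted; only dbo.course and prgrm_cd appear in the output below:

    -- Hypothetical supporting index: the LEFT OUTER JOIN key first,
    -- with a covering column to avoid key lookups.
    CREATE NONCLUSTERED INDEX IX_enrollment_course_cd
        ON dbo.enrollment (course_cd)
        INCLUDE (student_id);

    -- Re-run with the actual execution plan enabled (Ctrl+M in SSMS)
    -- and check whether the costly scan has become an index seek.
    SELECT c.prgrm_cd, e.student_id
    FROM dbo.course AS c
    LEFT OUTER JOIN dbo.enrollment AS e
        ON e.course_cd = c.course_cd
    WHERE c.prgrm_cd = 'ABC';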

I ran the built-in stored procedure sp_updatestats;
here are the results:
Updating [dbo].[course]
    [prgrm_cd], update is not necessary...
    [_WA_Sys_0000000C_0F8D3381], update is not necessary...
    [_WA_Sys_00000004_0F8D3381], update is not necessary...
    [_WA_Sys_00000005_0F8D3381], update is not necessary...
    [_WA_Sys_00000006_0F8D3381], update is not necessary...
    [_WA_Sys_0000000F_0F8D3381], update is not necessary...
    [_WA_Sys_00000010_0F8D3381], update is not necessary...
    [_WA_Sys_00000002_0F8D3381], update is not necessary...
    [_WA_Sys_00000003_0F8D3381], update is not necessary...
    [_WA_Sys_00000011_0F8D3381], update is not necessary...
    [_WA_Sys_00000007_0F8D3381], update is not necessary...
    [_WA_Sys_00000008_0F8D3381], update is not necessary...
    [_WA_Sys_0000000D_0F8D3381], update is not necessary...
    0 index(es)/statistic(s) have been updated, 13 did not require update.
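
sp_updatestats skips statistics whose underlying data has not changed enough, which is why everything above reports "update is not necessary". If stale statistics are still suspected, a full-scan refresh can be forced manually -- a small sketch against the table from the output:

    -- Rebuild all statistics on dbo.course with a full scan.
    UPDATE STATISTICS dbo.course WITH FULLSCAN;

    -- Or rebuild just the statistic on prgrm_cd.
    UPDATE STATISTICS dbo.course (prgrm_cd) WITH FULLSCAN;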

Similar Messages

  • Tuning concat index but still fail

    I have this query:
    select
    count(*) as y0_
    from
    t_transaction this_
    where this_.USER_ID=:1 and this_.OPTYPE in (:2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32, :33, :34, :35, :36, :37, :38) and this_.TRANSACTION_DATE>=:39 and this_.TRANSACTION_DATE<:40 and ((lower(this_.STATUS) like :41 or lower(this_.STATUS) like :42) or lower(this_.STATUS) like :43)
    so I tried this:
    create index idx1 on t_transaction(user_id,optype,-1,transaction_date,-1,status,-1) reverse tablespace TS_A_IDX;
    but i failed.
    I don't know how to get the index used by the above query. I also tried a domain index for the LIKE clauses, created separately with create index idx1 on t_transaction indextype is ctxsys.context, adding contains(idx1,:41)>0 or contains(idx1,:42)>0 or contains(idx1,:43)>0 to the query .. but still didn't work.
    Any help would be much appreciated in getting a concatenated index to work on the above query, for performance tuning's sake.
    Best Regards,
    Han

    "but i failed."
    "but still didn't work."
    Those are not Oracle error codes & messages. That is like posting:
    "my car didn't work.
    tell me how to make my car go."
    It is really, Really, REALLY difficult to fix a problem that can not be seen.
    Use COPY & PASTE so we can see what you do & how Oracle responds.
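
    For what it's worth, a hedged sketch of indexes that usually fit predicates like the posted ones (column names are taken from the query; whether they actually help depends on the data and the bind values):

        -- Equality on user_id plus a range on transaction_date:
        -- a concatenated b-tree index in exactly that column order.
        create index idx_trx_user_date
            on t_transaction (user_id, transaction_date);

        -- The OR'ed conditions on lower(status) can only use an index
        -- built on the same expression, i.e. a function-based index.
        create index idx_trx_lower_status
            on t_transaction (lower(status));

    Note that a LIKE pattern with a leading wildcard still cannot use the function-based index; that is what the ctxsys.context domain index attempt was for.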

  • Tuning Spatial Index for points

    What are the best settings to use for indexing point data in Oracle Spatial?
    Andrew Greis
    FCC/CompuTech

    Hi Mark,
    Can you post the create index command, as well as the contents of user_sdo_geom_metadata for the point layer?
    Also, can you post the results of doing a select count(*) from the index table as well as select count(*) from the point table?
    Thanks,
    dan
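
    Not an answer from the thread, but a common starting point for point-only layers is to declare the geometry type when creating the spatial index, e.g. (table and column names are placeholders):

        -- layer_gtype=POINT lets Oracle Spatial optimize the index
        -- for point-only data.
        CREATE INDEX my_points_sidx ON my_points (geom)
            INDEXTYPE IS MDSYS.SPATIAL_INDEX
            PARAMETERS ('layer_gtype=POINT');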

  • Oracle DB 10g - Count records from a subquery is too slow. (after tuning)

    Hi everybody:
    I have the following problem.
    1.- I have a transactional table with 11 million records.
    2.- The users asked for a few reports, to get some summarized data for business decisions.
    3.- Initially the query was developed in a development environment, with a subset smaller than the actual data mentioned in item 1.
    4.- When the report was delivered to the end user, it never returned data.
    5.- At this point we performed tuning, adding indexes, re-writing the query, using hints, etc., and the following scenario is occurring:
    a) the query without the count, before the tuning, took about 3 to 5 minutes and returned approximately 332,000 records.
    b) when the number of records was counted, the query took approximately 15 to 23 minutes.
    c) after the tuning, the raw data returns in 1 second; we used some b-tree indexes and some FBIs (because the report needs to filter by to_char functions).
    That time is OK for us; 1 second is a great time in comparison with the 3 to 5 minutes.
    d) the funny thing is that when we add a simple count(1), count(x) or whatever flavor of count, the count takes about 3 minutes to return the 332,000
    records. That is better than the 15 minutes, of course, but we don't need just count(1); we need to use group by, order by, etc., which will increase the query time.
    6.- Another thing is happening: if I use count(1) on a transactional table with 600,000 records, without filtering, the count returns in less than 2 seconds, even though that is more data than the result of my query defined in item 5.c, and that query returns in 1 second.
    Please help me with this; maybe there is something I'm not considering in the tuning. Or if there is a way to run this count query faster, please let me know.
    This is the query:
    WITH historial_depurado_v AS (
    select serie, identificador,
           cc_form_cc_tifo_tipo_forma
          ,cc_form_serie
          ,cc_form_folio
          ,estatus_nuevo
          ,to_char(fecha,'MM') Mes
          ,to_char(fecha,'YYYY') Anio  -- the table has an FBI for this.
          ,get_recaudacion_x_forma (hifo.CC_FORM_CC_TIFO_TIPO_FORMA
                                   ,hifo.CC_FORM_SERIE
                                   ,hifo.CC_FORM_FOLIO) Recaudacion  -- function for the description.
    from  cc_historial_formas hifo
    where not exists (select 1
                         from ve_tipos_placas tipl
                        where tipl.cc_tifo_tipo_forma = hifo.cc_form_cc_tifo_tipo_forma)
    )
    SELECT serie
      FROM historial_depurado_v hide
    WHERE Anio = '2009'  -- using the function-based index.
       AND Mes = '01'
       AND Estatus_nuevo = 'UT'
    -- returns in 1 second, but count(serie) takes 3 to 5 minutes, and that is still not the full report: I need to group by some fields returned by the first subset.
    -- if I count a table with more records, it returns in 2 seconds.

    alopez wrote:
    WHERE Anio = '2009'  -- using the function based index.
    AND Mes = '01'
    AND Estatus_nuevo = 'UT'
    Also, I'm not sure your internal comment is correct: if you have a function-based index on any of those columns, it will ONLY work when you apply the exact function as defined in the FBI (and you aren't using any functions on ANY of your columns).
    An example.
    create table use_the_fbi
    (  column1  date );
    insert into use_the_fbi
    select
       trunc(sysdate, 'dd') - level
    from dual connect by level <= 10000;
    create index use_the_fbi_FBI on use_the_fbi (to_char(column1, 'YYYYMM') );
    exec dbms_stats.gather_table_stats( user, 'USE_THE_FBI', cascade=> true, method_opt=> 'for all hidden columns');
    Let's try without querying in the manner in which we defined the index.
    explain plan for
    select *
    from use_the_fbi
    where column1 = '201006';
    Explained.
    Elapsed: 00:00:00.01
    TUBBY_TUBBZ?@xplain
    PLAN_TABLE_OUTPUT
    Plan hash value: 1722805582
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             |   100 |   900 |     7   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| USE_THE_FBI |   100 |   900 |     7   (0)| 00:00:01 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1 / USE_THE_FBI@SEL$1
    Predicate Information (identified by operation id):
       1 - filter("COLUMN1"='201006')
    Column Projection Information (identified by operation id):
       1 - "COLUMN1"[DATE,7]
    23 rows selected.
    Elapsed: 00:00:00.02
    Sad Christmas .. no index love.
    Now, query as we defined the index
    explain plan for
    select *
    from use_the_fbi
    where to_char(column1, 'YYYYMM') = '201006';
    Explained.
    Elapsed: 00:00:00.01
    TUBBY_TUBBZ?@xplain
    PLAN_TABLE_OUTPUT
    Plan hash value: 268331390
    | Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                 |    30 |   450 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| USE_THE_FBI     |    30 |   450 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | USE_THE_FBI_FBI |    30 |       |     1   (0)| 00:00:01 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1 / USE_THE_FBI@SEL$1
       2 - SEL$1 / USE_THE_FBI@SEL$1
    Predicate Information (identified by operation id):
       2 - access(TO_CHAR(INTERNAL_FUNCTION("COLUMN1"),'YYYYMM')='201006')
    Column Projection Information (identified by operation id):
       1 - "USE_THE_FBI"."COLUMN1"[DATE,7]
       2 - "USE_THE_FBI".ROWID[ROWID,10], TO_CHAR(INTERNAL_FUNCTION("COLUMN1"),'YYYYMM')[VA
           RCHAR2,6]
    27 rows selected.
    Elapsed: 00:00:00.03
    Happy times.

  • Tuning FAQ

    hi all
    can anybody send me some docs on how a select query can be performance tuned using indexes?
    It would be helpful if examples are shown.
    Also, can I get some examples of when there is a join of 4 to 5 tables and a query needs to be broken up for tuning?
    thanks in advance

    Indexes help to speed up selection from the database. They consist of a sorted copy of certain database table fields.
    The primary index is always created automatically in the SAP System. It consists of the primary key fields of the database table, and there is at most one record in the table matching each possible combination of these fields. This kind of index is called a UNIQUE index.
    If you cannot use the primary index to determine a selection result (for example, when the WHERE condition contains none of the primary index fields), the system searches the whole table. To prevent this, and to determine the selection result by searching through a restricted number of database records, you can create a secondary index.
    However, you should not define an index for all possible fields in the WHERE condition.
    Creating an index
    You can create an index in Transaction SE11 by choosing Change → Indexes... → Create. To make the index unique, select UNIQUE. To specify the fields that will comprise the index, choose "Choose fields". You then need to save and activate the index.
    When to create an index
    It is worth creating an index when:
    You want to select table entries based on fields that are not contained in an index, and the response times are very slow.
    The EXPLAIN function in the SQL trace shows which index the system is using. You can generate a list of the database queries involved in an action by entering Transaction ST05 and choosing Trace on → Execute action → Trace off → List trace. If you execute the EXPLAIN SQL function on an EXEC, REEXEC, OPEN, REOPEN or PREPARE statement, the system returns a list containing the index used in the database query.
    The field or fields of the new secondary index are so selective that each index entry corresponds to at most 5% of the total number of table entries. Otherwise, it is not worth creating the index.
    The database table is accessed mainly for reading entries.
    Using an index consisting of several fields
    Even if an index consists of several fields, you can still use it when only a few of the fields actually appear in the WHERE clause. The sequence in which the fields are specified in the index is important. You can only use a field in the index if all of the preceding fields in the index definition are included in the WHERE condition.
    An index can only support search criteria which describe the search value positively, such as EQ or LIKE. The response time of conditions including NEQ is not improved by an index.
    Optimal number of fields for an index
    An index should only consist of a few fields; as a rule, no more than four. This is because the index has to be updated each time you change its fields in a database operation.
    Fields to include in an index
    Include fields that are often selected and have a high selectivity. In other words, you need to check the proportion of the table entries that can be selected with this field. The smaller the proportion, the more selective the field. You should place the most selective fields at the beginning of the index.
    If all of the fields in a SELECT statement are contained in the index, the system does not access the data a second time following the index access. If there are only a few fields in the SELECT statement, you can improve performance significantly by including all of these fields in the index.
    You should not include a field in an index if its value is initial for most of the table entries.
    Optimal number of indexes for a table
    You should not create more than five indexes for any one table because:
    Whenever you change table fields that occur in the index, the index itself is also updated.
    The amount of data increases.
    The optimizer has too many chances to make mistakes by using the 'wrong' index.
    If you are using more than one index for a database table, ensure that they do not overlap.
    Avoiding OR conditions
    The optimizer generally stops if the WHERE condition contains an OR expression. In other words, it does not evaluate the fields in the OR expression with reference to the index.
    An exception to this are OR statements standing on their own. Try to reformulate conditions containing an OR expression for one of the indexed fields. For example, replace:
    SELECT * FROM SPFLI
    WHERE CARRID = 'LH'
    AND (CITYFROM = 'FRANKFURT' OR CITYFROM = 'NEW YORK').
    with:
    SELECT * FROM SPFLI
    WHERE (CARRID = 'LH' AND CITYFROM = 'FRANKFURT')
    OR (CARRID = 'LH' AND CITYFROM = 'NEW YORK').
    Problems with IS NULL
    The value NULL is not stored in the index structure of some database systems. The consequence of this is that the index is not used for that field.
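
    To illustrate the field-order rule above in plain SQL (SFLIGHT, CARRID and CONNID are from the SAP demo flight model; the index name is made up):

        -- Secondary index with CARRID leading and CONNID second.
        CREATE INDEX sflight_z01 ON sflight (carrid, connid);

        -- Can use the index: the leading field is restricted.
        SELECT * FROM sflight WHERE carrid = 'LH';
        SELECT * FROM sflight WHERE carrid = 'LH' AND connid = '0400';

        -- Cannot use the index: the leading field CARRID is missing.
        SELECT * FROM sflight WHERE connid = '0400';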

  • New tables in 12.1

    Hi
    We are working on the migration of Studio 11.5 to 12.1. While creating the library tables we are seeing the two additional tables below in 12.1. Please let us know the significance of these tables.
    User Table
    Common Fields Table
    Do we need to have these two tables created, or are they optional?
    Thanks in Advance

    The legacy USERINFO.DBF file is now held in the database used to house the resources. The legacy FDB file, for the field database, is also now contained in the database.
    The legacy Codebase repository, held on the file system, is no longer used since the introduction of Documaker V12.0. It is possible to maintain existing Codebase workspaces in DMStudio V12.0 and higher, but any new workspaces must use a database as the library for all files that may previously have been Codebase.
    As of Documaker V12.0 any new workspaces created will be held in a database. If Standard Edition is used this can continue to be SQL Server or Oracle; however, if Enterprise Edition is used it is necessary to use Oracle DB.
    If you are being prompted for these tables it sounds like you didn't use DMStudio in V11.5 and you were using the legacy Image Editor, Formset Editor, etc applications for resource maintenance. You have two options available to you:
    1) Using V11.5 DMStudio create a new workspace and import the legacy resources to that workspace. You can choose a Codebase workspace to house the resources. That workspace can then be opened in DMStudio V12.0.
    2) Using V12.0 DMstudio create a new workspace and import the legacy resources to that workspace. You will have to choose an ODBC database, through the DSN selection screen, to identify a database that will house all resources.
    My recommendation would be to use option 2), as the Codebase repository is a legacy option that will no longer be developed or enhanced. Additionally, the Codebase format cannot be tuned or indexed for performance gains. Finally, the Codebase format is very easy to corrupt, as it is a single file, and this could result in losing your whole library and having to move to a backup. The use of an ODBC database for the library is more stable and secure, and allows room for improvement through performance tuning.

  • Puzzled by N96 and N81

    Here's why i'm puzzled.
    N95/N82 and others carry the omap 3d cpu... (I thought for n-gage ??)
    N96 does not support N-Gage, because "lack of 3D chip"
    N81 supports N-Gage, however it too has a "lack of 3D chip"
    Now if you go to
    http://www.glbenchmark.com/result.jsp
    You'll see that the N81 is actually slower than the Nokia 6110, because it has no 3D chip.
    See where i'm going?
    Compare the N95 to the N81 on the game, Global Racer, you'll see how the N81 does what the Nokia 6110 does, about 3 frames per second if that.....
    So the million dollar question, if the N81 supports N-Gage games then so should every single Nokia phone which does not have the omap CPU... the 6110 is faster, faster cpu, no 3D chip, yet i don't see N-Gage mentioned anywhere...
    What's going on? How can a phone without a 3D chip which is found in the N82/N95 etc play n-gage games?....

    To the best of my knowledge (and that's saying something), the N96 DOES support Ngage.
    I don't know where you got that information. As was said, the lack of 3D acceleration doesn't automatically prevent N-Gage compatibility.
    According to Nokia's little chart it will not be supporting N-Gage, but the point was not that it *can't* support it; it obviously can, along with every other Symbian S60 phone Nokia has released over the past couple of years. They're just deciding not to...
    http://www.nokia-tuning.net/index.php?s=processor
    What interests me is that if the next gen phones of the N Series no longer use the omap cpu from texas instruments then the 3D acceleration will just be a relic from the past... obviously, the STn8815 Nomadik cpu (ARM926 332 MHz)
    is going to be pretty powerful as it can support all that video functionality in real time to record tv shows and play them back, i'm betting it can pull off some nice calculations for games..... but that's another story.
    http://www.st.com/stonline/products/families/mobile/processors/stn8815.htm
    info on the N96 processor above.
    The TI OMAP 2420 range processors can handle the 3D processing. As you can see, the original N-Gage phone itself, where all this gaming began, uses a 32-bit RISC ARM-9 CPU at 104 MHz, which also happens to be in all of the "cheaper" models, which is why every single phone should be capable of using N-Gage and playing N-Gage games..
    just that Nokia chooses not to for the N96.
    http://www.planetn96.com/images/n96vn95.jpg
    As indicated on the chart from Nokia (only the image is externally linked; if that's a no-no, sorry, I'll not post an externally linked image again, but the screenshot is taken directly from the Nokia site, which I can't seem to find now),
    it clearly states that the N96 will not support N-Gage, although quite clearly it could easily be added.

  • Re: Transaction time

    Dear All,
    My client has raised a query: his users are facing problems while executing transaction MB5B. This transaction is required by 10 users at the same time with different reporting variants.
    The system takes a huge amount of time, and in many cases it even gives a dump.
    Please advise how to resolve this situation.
    I understand there is something in BASIS called performance tuning, or indexing. Can this solve my issue?
    Please advise.
    Regards,
    Vivek

    Hi,
    This is a standard SAP transaction.
    Only Basis can improve the performance.
    Check the SAP EarlyWatch report with Basis (they are the custodians of this report).
    Regards,
    Siva

  • Tuning OBIEE query with DB index, but it's getting slower

    Hello guys
    I just have a quick question about tuning the performance of a SQL query using bitmap indexes..
    Currently there are 2 tables, date and fact. The fact table has about 1 billion rows and the date dim has about 1 million. These 2 tables are joined on 2 columns:
    Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber
    I have a query that needs to be run as the following:
    Select dates.dayofweek, dates.dates, fact.opened_amount from dates, facts
    where Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber and dates.dayofweek = 'monday'.
    Currently this query runs forever. I think it is the join that takes most of the time. I have created a bitmap index on the dayofweek column because it has few distinct values, but it didn't seem to speed up performance; rather, it made it worse. The explain plan before and after creating that index was the same:
    Select statement optimizer
    nested loops
    partition list all
    index full scan RD_T.PK_FACTS_SNPSH
    TABLE ACCESS BY INDEX ROWID DATES_DIM
    INDEX UNIQUE SCAN DATES_DIM_DATE
    It seems the bitmap index I created on DATES_DIM wasn't used... Also, I am wondering what other indexes I should create on the fact table.
    I'd like to know what other indexes would be helpful, and on which table & columns. I am thinking of creating another one on companynumber, since it also has few distinct values.
    Currently I can't purge data or create a smaller table; I have to work with what I have.
    So please let me know your thoughts in terms of performance tuning for such a situation.
    Thanks
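
    One option not mentioned in the thread: when the filter column lives on the dimension but the expensive table is the fact, a bitmap join index can precompute the join. A sketch only -- the fact column names are guesses from the post, and Oracle requires a unique constraint on the dimension join columns (dateid, companyid) for this to compile:

        -- Bitmap join index: bitmaps on dates.dayofweek are stored
        -- against rowids of the facts table, so the join to dates
        -- is resolved at index-build time rather than at query time.
        CREATE BITMAP INDEX facts_dow_bjix
            ON facts (dates.dayofweek)
            FROM facts, dates
            WHERE dates.dateid    = facts.snapshot_dates
              AND dates.companyid = facts.companynumber;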

    Jack,
    Thank you for your response. It helped me to clean up the query. All the changes did not give a much better explain plan - at least not to my inexperienced eyes - but the total execution time for the query is now reduced to under two minutes. The query as it is now:
    select /*+ ordered all_rows */ x.rowid1
    from table(sdo_join('CYCLORAMA','GEOMETRIE','CYCLORAMA','GEOMETRIE','distance=3 mask=ANYINTERACT')) x
    , cyclorama s, cyclorama t
    where not x.rowid1 = x.rowid2
    and s.rowid = x.rowid1 and x.rowid2 = t.rowid
    and s.datasetid != t.datasetid
    and s.opnamedatum < t.opnamedatum;
    Because the docs state that mask=FILTER is the default, I added an explicit mask=ANYINTERACT to the sdo_join parameters when I removed the sdo_distance from the query. Still, the query returns 205035 records with the sdo_distance, and 205125 without. But this may be the result of the extra 0.001 from sdo_dim. I did not investigate, as the 3 meters is not crucial.
    I believe I already had a mechanism in place to reduce the number of self-joins with "not x.rowid1 = x.rowid2" and "s.opnamedatum < t.opnamedatum". I can not guarantee that for all records s.opnamedatum < t.opnamedatum equals x.rowid1 < x.rowid2.
    Based on the docs (http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14255/sdo_operat.htm#BGEDJIBF), I finally added an 'ordered' hint and reshuffled the tables in the from clause.
    I'm happy with performance now, and creating a new table with the records to keep should not be a problem. Still I'm curious about the following:
    - Is it worth optimizing the SDO_DIM_ARRAY for the original table, e.g. narrowing the range of coordinate values?
    - How can sdo_join best be used for an anti-join, to select the records to keep?
    Thank you,
    J-----.

  • Performance tuning - index on big table on date column

    Hi,
    I am working on Oracle 10g with Oracle Apps 11i on Sun.
    We have a large non-partitioned table, GL_JE_HEADERS, with 7 million rows.
    Now we want to run a query selecting rows using a BETWEEN clause on a date column.
    I have created a b-tree index on this table.
    Now how can I tune the query? Which hint should I use for the query?
    Thanks,
    rane

    Hi Rane,
    Now how can I tune the query?
    Indexes on DATE datatypes are tricky, as the SQL queries must match the index!
    For example, an index on ship_date would NOT match a query:
    WHERE trunc(ship_date) > trunc(sysdate-7);
    WHERE to_char(ship_date,'YYYY-MM-DD') = '2004-01-04';
    You may need to create a function-based index, so that the DATE reference in your SQL matches the index:
    http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
    To start testing, go into SQL*Plus and "set autotrace on" and run the queries.
    Then confirm that your index is being used.
    Which hint should I use for the query?
    Hints are a last resort!
    Your query is fully tuned when it fetches the rows you need with a minimum of block touches (logical reads, consistent gets).
    See here for details:
    http://www.dba-oracle.com/art_sql_tune.htm
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
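
    A small sketch of the point above: with a plain b-tree index on the date column, the predicates must reference the bare column for the index to be usable (je_date and the index name are placeholders, not actual GL_JE_HEADERS columns):

        CREATE INDEX gl_je_headers_n99 ON gl_je_headers (je_date);

        -- Matches the index: no function wrapped around the column.
        SELECT *
          FROM gl_je_headers
         WHERE je_date >= DATE '2009-01-01'
           AND je_date <  DATE '2009-02-01';

        -- Does NOT match the b-tree index: trunc() hides the column.
        -- WHERE trunc(je_date) = DATE '2009-01-15'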

  • Index Tuning Wizard - Oracle 9i 9.2.0.1 (Linux)

    Hello,
    I recently installed Oracle 9i Release 2 on a Linux box (RH 8.0). I have been able to get everything working without any problems. I was interested in trying out the Index Tuning Wizard but I can't seem to track down where it is in my installation. I downloaded the images from technet (3 CDs) and re-checked my installation for the Tuning Pack, but I don't see the Index Tuning Wizard in the OEM Console running on the Management Server.
    Under Tools/Tuning Pack it lists only three tools (Outline Management, ReOrg Wizard, and Tablespace Map). Anyway, I am stumped. I could have sworn I used it on 8.1.7 in the past using Oracle Expert... Has it been removed, or is it only available as a separate product now?
    Thanks in advance.
    Eric

    Well, this is how I did it -- 3 short scripts:
    1) One called 'oracle', placed in /etc/init.d with links at the
    appropriate runlevels -- say K03oracle and S95oracle.
    This script was:
    #!/bin/bash
    export ORACLE_HOME=/home/oracle/OraHome
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export ORACLE_SID=quaoar
    PATH=$ORACLE_HOME/bin:$PATH
    export PATH
    if [ "$1" = 'start' ]
    then
    # the value should be around half your available memory
    sysctl -w kernel.shmmax=192000000
    su - oracle -c startOracle
    fi
    if [ "$1" = 'stop' ]
    then
    su - oracle -c stopOracle
    fi
    I then placed the two scripts 'startOracle' and 'stopOracle'
    so that they are found in the PATH (e.g. /usr/local/bin).
    This is so they can be later used to start and stop the
    database manually if needed.
    The 'startOracle' script was:
    #!/bin/bash
    sqlplus /nolog <<-HERE
    connect / as sysdba
    startup
    quit
    HERE
    lsnrctl start
    And the 'stopOracle' script was:
    #!/bin/bash
    sqlplus /nolog <<-HERE
    connect / as sysdba
    shutdown
    quit
    HERE
    lsnrctl stop
    This seemed to work just fine.
    Oracle has 'enhanced' sqlplus to do some of the
    old 'svrmgrl' tasks. Use the /nolog switch and
    'connect / as sysdba' while logged in as the main
    (install) Oracle user. This is the same as the
    old 'connect internal'.
    Harold

  • Query tuning and using the "better" index.

    I have a database table with about 40,000 records in it. Using
    a certain index first limits the number of rows to 11,000
    records. Using a different index first (by disabling the other
    index in the query) limits the number of rows to 2,500 records.
    According to the explain plan, the rest of the query is parsed the
    same way for both queries. What can explain why the query using the
    index that returns 11,000 records first runs faster than the one
    using the "better" index? I thought the whole idea behind query
    tuning was to use the index that limits the data the most.

    It looks like Oracle likes the equality condition more than the greater-than/less-than combination (which you might like to recode as a BETWEEN condition for clarity).
    There are a number of factors here.
    i) Are the "test names" equally distributed? Do some test names appear with greater frequency than others? If so, collecting column statistics might cause the BATCH_2 index to be used for some test names, and not for others.
    ii) Likewise, what is the distribution of cdates? How does the distribution of cdates vary by test name?
    iii) You could force the use of the BATCH_6 index over the BATCH_2 by using an optimizer hint instead of dropping the BATCH_2 index ...
    select /*+ index(batch batch_6) */ id, test, cdate
    from batch
    where test = 'Some test name'
    and cdate >= &start date&
    and cdate < &end date& + 1
    ... or even try prompting Oracle to use both indexes ...
    select /*+ index(batch batch_6) index(batch batch_2) */ id, test, cdate
    from batch
    where test = 'Some test name'
    and cdate >= &start date&
    and cdate < &end date& + 1
    ... and test the response times, then choose to use the optimizer hint in your application.
    iv) You might like to replace the rather unselective BATCH_2 index with a BATCH_2_6 index on both test and cdate (in that order); see the sketch after this post. That would probably give you an excellent result, and the BATCH_6 index can still be used to satisfy queries selective on cdate that are not selective on test (in very recent versions of Oracle the BATCH_2_6 index might be used for such an operation, and you could drop both BATCH_6 and BATCH_2).
    Well, see if any of this helps.
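
    A sketch of the combined index described in (iv), using the table and column names from the thread:

        -- Equality column first, range column second: the optimizer can
        -- seek on test and range-scan cdate within that slice.
        CREATE INDEX batch_2_6 ON batch (test, cdate);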

  • Index Tuning Wizard in Enterprise Manager 10g Grid Control

    Hi,
    9i OEM has an Index Tuning Wizard. Does anyone know where is it located in 10g Enterprise Manager?
    Thanks!
    -Ranit.

    10g has an SQL Access Advisor (for tuning related to indexes and Materialized views) that is accessible via the OEM GC Advisor Central Link.
    Look to DB Targets - Select the DB - Select Advisor Central from the Related Links Section at the bottom - Select SQL Access Advisor.
    HTH,
    Tony

  • Performance tuning of query using bitmap indexes

    Hello guys
    I just have a quick question about tuning the performance of a SQL query using bitmap indexes..
    Currently there are 2 tables, date and fact. The fact table has about 1 billion rows and the date dim has about 1 million. These 2 tables are joined on 2 columns:
    Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber
    I have a query that needs to be run as the following:
    Select dates.dayofweek, dates.dates, fact.opened_amount from dates, facts
    where Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber and dates.dayofweek = 'monday'.
    Currently this query runs forever. I think it is the join that takes most of the time. I have created a bitmap index on the dayofweek column because it has few distinct values, but it didn't seem to speed up performance..
    I'd like to know what other indexes would be helpful. I am thinking of creating another one on companynumber, since it also has few distinct values.
    Currently the query is generated by front-end tools like OBIEE, so I can't change the SQL; nor can I purge data or create a smaller table. I have to work with what I have..
    So please let me know your thoughts in terms of performance tuning.
    Thanks

    The explain plan is:
    Operation                                  Rows  Cost  Bytes
    SELECT STATEMENT (optimizer)                  1     1
     NESTED LOOPS                                 1     1    299
      PARTITION LIST ALL                          1     0    266
       INDEX FULL SCAN RD_T.PK_FACTS_SNPSH        1     0    266
      TABLE ACCESS BY INDEX ROWID DATES_DIM       1     1     33
       INDEX UNIQUE SCAN DATES_DIM_DATE           1     1
    There are no changes nor wait states, but the query takes 18 minutes to return results. When it does, it returns 1 billion rows, which is the same number of rows as the fact table... (strange?)

    That's not a bitmap plan. Plans using bitmaps should have steps indicating bitmap conversions; this plan is listing ordinary b-tree index access. The rows and bytes on the plan have to be incorrect for the volume of data you suggested. (1 row instead of 1B?????)
    What version of the database are you using?
    What is your partition key?
    Are the partitioned table indexes global or local? Is the partition key part of the join columns, and is it indexed?
    Analyze the tables and all indexes (use dbms_stats) and see if the statistics get better. If that doesn't work, try the dynamic sampling hint (there is some overhead for this) to get statistics at runtime.
    I have seen stats like the ones you listed appear in 10g myself.
    Edited by: riedelme on Oct 30, 2009 10:37 AM
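
    A minimal sketch of the dbms_stats call suggested above, assuming the table names from the post:

        BEGIN
          -- cascade => TRUE also gathers statistics on all indexes.
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'FACTS', cascade => TRUE);
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'DATES', cascade => TRUE);
        END;
        /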

  • Launching Index Tuning Wizard from command prompt

    I have been playing with the Index Tuning Wizard and I could not figure out how to launch it from the command line (the manual says it can be launched via the management console or Oracle Expert).
    DB2 and SQL Server provide executables which can be launched from the command line. For example, the command below evaluates a given workload on a target database and recommends something if necessary:
    Command> itw -d <targetDB> -w <workloadFile> -o <indexRecommendations>
    Is it possible to launch the Oracle Index Tuning Wizard from a command prompt in a similar way?
    -fa

    "TNS No Listener" => Start the listener
    To be connected as SYSDBA you don't need a password if you are logged in as a member of the DBA group; you just have to:
    PROD_:id
    uid=102(oracle) gid=103(oinstall)
    PROD_:sqlplus "/ as sysdba"
    SQL*Plus: Release 8.1.7.0.0 - Production on Wed Jul 23 11:46:50 2003
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    Connected to:
    Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.4.0 - Production
    SQL> show user
    USER is "SYS"
    SQL>
    Fred
