DBMS_STATS not capturing index stats on 11.2.0.2

deleting this thread..
Edited by: OraDBA02 on Oct 3, 2012 2:34 PM

select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
DBMS_STATS Parameters.
(The automatic optimizer stats job runs daily in the WEEKNIGHT_WINDOW/WEEKEND_WINDOW.)
METHOD_OPT=FOR ALL COLUMNS SIZE SKEWONLY
STALE_PERCENT=20
ESTIMATE_PERCENT=5
CASCADE=DBMS_STATS.AUTO_CASCADE
One of the indexes on a production table is showing all-zero statistics.
TABLE_NAME INDEX_NAME NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR BLEVEL LAST_ANALYZED LEAF_BLOCKS LK DK
WEB_ACTIVATED_CARDS I_WAC_CLAIM_CODE_ID 0 0 0 2 08.Aug.12/00:02:45 0 0 0
WEB_ACTIVATED_CARDS PK_WEB_ACTIVATED_CARD_ID 3430768 3430768 1012555 2 08.Aug.12/00:02:47 9846 1 1
WEB_ACTIVATED_CARDS I_WAC_ENCRYPTED_CLAIM_CODE 3434835 3434835 3434771 2 08.Aug.12/00:02:53 18407 1 1
TABLE_NAME PAR NUM_ROWS BLOCKS LAST_ANALYZED SAMPLE_SIZE
WEB_ACTIVATED_CARDS NO 3429860 49496 08-aug-12:00:02:43 171493
Index I_WAC_CLAIM_CODE_ID was added to this table on 12-Jun-12, and since then DBMS_STATS has run only once against the table (on 08-Aug-12 00:02:43). DBA_TAB_STATS_HISTORY shows only one entry for this table, which may be due to the 20% STALE_PERCENT.
DBMS_STATS.GET_STATS_HISTORY_RETENTION is 31 days.
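As a sanity check on the 20% staleness theory, the pending-change counts can be inspected directly (a minimal sketch; assumes DBA-view access, and the flush call just makes sure the monitoring counters are current):
exec dbms_stats.flush_database_monitoring_info;
select inserts, updates, deletes, timestamp from dba_tab_modifications where table_name = 'WEB_ACTIVATED_CARDS';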
After a plan-flip issue, I explicitly gathered the index statistics:
analyze index I_WAC_CLAIM_CODE_ID compute statistics;
TABLE_NAME INDEX_NAME NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR BLEVEL LAST_ANALYZED LEAF_BLOCKS LK DK
WEB_ACTIVATED_CARDS I_WAC_CLAIM_CODE_ID 216876 216876 206622 2 23.Aug.12/17:31:33 2239 1 1
Am I hitting a DBMS_STATS bug?
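For comparison, a DBMS_STATS equivalent of the ANALYZE above (a sketch only; 'OWNER' is a hypothetical placeholder for the schema name):
exec dbms_stats.gather_index_stats(ownname => 'OWNER', indname => 'I_WAC_CLAIM_CODE_ID', estimate_percent => dbms_stats.auto_sample_size);
Mixing ANALYZE and DBMS_STATS on the same object is generally discouraged, since the optimizer expects DBMS_STATS-gathered statistics.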

Similar Messages

• Update database table from internal table: statement not using index

    Hi Guys,
We are updating a database table from a file. The file has a couple of fields which hold data different from what the database has (non-primary fields :). We upload the file data into an internal table and then update the database table from the internal table. At any one time, the internal table is supposed to hold 10,000 records. I did an SQL trace and found that the update statement is not making use of the database index.
    Should not the update statement here be using the table index (for primary key)?
    Regards,
    Munish

... as so often, there are recommendations in this forum that make me wonder how people overestimate their knowledge!
Updates and deletes do of course use indexes, as can be seen in the SQL trace (use Explain).
Inserts don't use indexes, because in many databases inserts are just done wherever there is room. But even with an INSERT, the primary key enforces the uniqueness condition; duplicate keys are not allowed.
Coming to the original question: what are you actually coding for the update?
What is the table, which fields are in the internal table, and what are the indexes?
    Siegfried

  • Queries not using indexes

We installed and configured a new environment of OBIEE and are trying to run a simple query in our data warehouse. This simple query takes only 7 seconds to complete in our previous data warehouse using TOAD, but is taking 8+ minutes to complete in our new environment, also using TOAD.
Looking at the explain plans, the query in the new environment is not using indexes. Does anyone have an idea why it is not using the indexes? We checked and all of the indexes have been created and still exist. We also ran ANALYZE again on the two tables used in the query, but the query still did not use the indexes.
    Please let me know if anyone has ideas ASAP since we are baffled.

- Are the object statistics identical? The ANALYZE statement has been deprecated for a while, particularly for data warehouse environments where there may be partitioning. Were you not using the DBMS_STATS package to gather statistics in the previous environment? Were statistics computed on the indexes?
    - Can you post the two query plans (formatted via DBMS_XPLAN and including the filter conditions)? It is not immediately obvious to me what index(es) might be useful here unless one of the two conditions is particularly selective which doesn't seem terribly likely based on just the table names involved.
- When you do post the query plans, please use the [code] and [/code] tags to preserve the white space so that the output is readable.
    Justin
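To illustrate both suggestions with a hedged sketch (the DWH owner, FACT_SALES table, and REGION_ID column are placeholders, not names from the thread):
exec dbms_stats.gather_table_stats(ownname => 'DWH', tabname => 'FACT_SALES', cascade => true);
explain plan for select * from dwh.fact_sales where region_id = 42;
select * from table(dbms_xplan.display);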

  • Importing a table and its index statistics, cannot import index stats

    Hi,
    Oracle 10.2.0.4 on Solaris.
I have asked the DBA to import table and index statistics for a table from prod into QA for further analysis. The stats for this table are locked in prod.
    DBA has used the following command for export and import table statistics
    exec dbms_stats.export_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'INV');
exec dbms_stats.import_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'SCHEMA');
Although cascade is set to true above, this resulted in only table stats being imported, no index stats. So we have imported the prod table-level stats but no index stats! FYI, the indexes in prod do have stats (LAST_ANALYZED is set).
Next the DBA tried the export and import using export_index_stats and import_index_stats, but no luck. The DBA is advising me that the only option we have is to import the table itself from prod to QA. It seems that import with cascade does not work.
    Is this a bug in 10g or there is another way around to get index statistics as well?
    Thanks
    Edited by: 902986 on 25-Feb-2013 06:22

    902986 wrote:
    Hi,
    Oracle 10.2.0.4 on Solaris.
I have asked the DBA to import table and index statistics for a table from prod into QA for further analysis. The stats for this table are locked in prod.
    DBA has used the following command for export and import table statistics
    exec dbms_stats.export_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'INV');
exec dbms_stats.import_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'SCHEMA');
Although cascade is set to true above, this resulted in only table stats being imported, no index stats.
Problem Exists Between Keyboard And Chair
    "Gather statistics on the indexes for this table. Index statistics gathering is not parallelized. Using this option is equivalent to running the GATHER_INDEX_STATS Procedure on each of the table's indexes. Use the constant DBMS_STATS.AUTO_CASCADE to have Oracle determine whether index statistics to be collected or not. This is the default. The default value can be changed using theSET_PARAM Procedure."
    Handle:     902986
    Status Level:     Newbie
    Registered:     Dec 17, 2011
    Total Posts:     69
    Total Questions:     18 (12 unresolved)
    Why so MANY unanswered questions?
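For what it's worth, one detail visible in the calls as posted: the export writes the stats table under statown 'INV', while the import reads it under statown 'SCHEMA'. A consistent pair, sketched under the assumption the stats table really lives in INV (force => true is a hypothetical extra, available in 10.2's IMPORT_TABLE_STATS, to override a stats lock on the target side):
exec dbms_stats.export_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'INV');
exec dbms_stats.import_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'INV',force => true);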

  • Lost index stats in Ora 8.1.6 Tables when selecting via jdbc

    Hi,
I'm using JBuilder8 and Kylix2 on a Linux machine with the Oracle OCI drivers of the 8.1.7 client.
Connecting through JB8 JDBC via DBPilot is successful and shows the tables of the DB.
After executing a SELECT on any table, the index stats of that table are lost.
Kylix works fine without this failure.
    Thanks for any comments to this problem
    Jens

Well, it's not the PL/SQL code that is causing a problem. Everything worked fine for many months. Then one day (without any changes in the environment or code) the update of a table from a Java application (via JDBC) failed. The same update done directly on the DB with SQL*Plus still succeeds!
This led us to think that something is wrong with the JDBC connection (which had been up for several months). Maybe memory corruption?
    Anyone that experienced similar problems with JDBC?

  • What is the source of this Crawl Error `The item could not be indexed successfully because the item failed in the indexing subsystem`

Once in a while my full crawl fails and stops working properly. The crawl logs show this error for all items crawled:
The item could not be indexed successfully because the item failed in the indexing subsystem. ( The item could not be indexed successfully because the item failed in the indexing subsystem.; Caught exception when preparing generation GID[7570]: (Previous generation (last=GID[7569], curr=GID[7570]) is still active. Cannot open GID[7570]); Aborting insert of item in Link Database because it was not inserted to the Search Index.; ; SearchID = F201681E-AF1B-45D2-BFFD-6A2582D10C19 )
The full crawl starts out OK; after a while (1.5 hours into the process, 50% of all the data) suddenly no more items can be added to the index. The index seems to be stuck. The index files on disk are no longer updated (located in D:\Microsoft Office Servers\15.0\Data\Office Server\Applications\Search\Nodes\BAADC4\IndexComponent3\storage\data\SP4d91e6081ac3.3.I.0.0\ms\%default). The Index and Admin components start to report these errors in the ULS logs:
NodeRunnerIndex: Journal[SP4d91e6081ac3]: Rolling back GID[7570] to GID[7569] prepGen=GID[7569]
NodeRunnerIndex: Remote service invocation: method=RollbackGeneration() Service = { Implementation type=Microsoft.Ceres.SearchCore.ContentTargets.IndexRouter.IndexRouter Component: SP4d91e6081ac3I.0.0.IndexRouter Exposer Name: GenerationContentTarget} terminated with exception: System.InvalidOperationException: Illegal state transition in SP4d91e6081ac3I.0.0.FastServer.FSIndex: Rollback -> Rollback
NodeRunnerAdmin: RetryableInvocation[SP4d91e6081ac3]: Exception invoking index cell I.0.0. Retrying in 16 seconds: System.InvalidOperationException: Illegal state transition in SP4d91e6081ac3I.0.0.FastServer.FSIndex: Rollback -> Rollback
It looks to me like the index has trouble updating/merging 'generations'. But the exact working of the indexer is not documented (as far as I know), let alone how to fix this.
Other (maybe related) observations:
Just before the errors start, NodeRunnerIndex starts a checkpoint: Journal[SP4d91e6081ac3]: Starting checkpoint because forceCheckpoint is true. which ends a few moments later with Journal[SP4d91e6081ac3]: All journal users have completed checkpoint Checkpoint[7560-7569].
Also just before the errors appear, a timer job starts: Name=Timer Job job-application-server-admin-service. This timer job does some strange things to the search topology: Synchronizing Search Topology for application 'Search Service Application' with active topology [...] and Activating components. Previous topology:   ---  New Topology: TopologyId: [...] followed by Starting to execute Index RedistributeData method.
And right after these two events the errors start to occur (each row is a ULS log entry):
    INFO : fsplugin: IndexComponent3-bd83a8aa-923b-4526-97e8-47eac0986ff7-SP4d91e6081ac3.I.0.0 (4236): Prepare generation: 324 documents
    IndexRouter[SP4d91e6081ac3]: Caught exception when preparing generation GID[7570]: (External component has thrown an exception.): System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
    GenerationDispatcher[SP4d91e6081ac3]: Failed to prepare GID[7570] in 453 ms, failed on cells: [I.0.0], stale services: []
    Journal[SP4d91e6081ac3]: Rolling back GID[7570] to GID[7569] prepGen=GID[7569]
    Remote service invocation: method=RollbackGeneration() Service = { Implementation type=Microsoft.Ceres.SearchCore.ContentTargets.IndexRouter.IndexRouter Component: SP4d91e6081ac3I.0.0.IndexRouter Exposer Name: GenerationContentTarget} terminated with exception: System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
    RetryableInvocation[SP4d91e6081ac3]: Exception invoking index cell I.0.0. Retrying in 2 seconds: System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
    Journal[SP4d91e6081ac3]: Rolling back GID[7570] to GID[7569] prepGen=GID[7569]
The question: what is causing this? And how to prevent it? It has happened twice in two weeks now, out of the blue; no config change has been made, and all disks have enough space.
Known fix (this resolves the problem, but doesn't address the root cause!):
Stop all crawls.
Wait a few minutes to let the crawl come to a complete stop.
Reset the index (clearing everything!).
Start a full crawl. In the meantime no search is available to the end user (boohoo!)

    Hi,
I searched for a similar error log; that issue was finally solved by adding more drive space, even though they thought there was plenty of space already.
    https://social.technet.microsoft.com/Forums/office/en-US/d06c9b2c-0bc1-44c6-b83a-2dc0b70936c4/the-item-could-not-be-indexed-successfully-because-the-item-failed-in-the-indexing-subsystem?forum=sharepointsearch
    http://community.spiceworks.com/topic/480369-the-item-could-not-be-indexed-successfully
From your description, the issue seems to occur during your full crawl. One point from the best practices for crawling: run full crawls only when necessary. The reasons to do a full crawl are listed here:
    https://technet.microsoft.com/en-us/library/jj219577.aspx#Plan_full_crawl
    https://technet.microsoft.com/en-us/library/dn535606.aspx
    Regards,
    Rebecca Tu
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected]

  • Could not find prepared statement with handle %.

Greetings. I've seen several posts for this error on the web, but no clear-cut answers. I captured the code below in Profiler, with the intention of replaying it in Management Studio.
However, the attempt ends in the following error: "Could not find prepared statement with handle 612."
    declare @p1 int
    set @p1=612
    declare @p2 int
    set @p2=0
    declare @p7 int
    set @p7=0
    exec sp_cursorprepexec @p1 output,@p2 output,N'@P0 int,@P1 int,@P2 int,@P3 int,@P4 bit',N'EXEC dbo.mySproc @P0,@P1,@P2,@P3,@P4 ',4112,8193,@p7 output,219717,95,NULL,1,0
    select @p1, @p2, @p7
Something noteworthy is that my sproc only has 5 input parameters, but this makes it look like it has many more.
How do I manipulate the code enough to make it work in Management Studio? Thanks!
    TIA, ChrisRDBA

In Profiler you would normally see RPC:Starting and RPC:Completed. The statement shown in RPC:Starting is what you need to pick because, as Erland explained, Completed would show "funky" behavior.
    Balmukund Lakhani | Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    This posting is provided "AS IS" with no warranties, and confers no rights.
    My Blog |
    Team Blog | @Twitter
    Author: SQL Server 2012 AlwaysOn -
    Paperback, Kindle
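Put differently, rather than replaying the sp_cursorprepexec call (its handle 612 existed only in the original session), you can execute the inner statement directly with the captured parameter values (a sketch; the NULL and bit assignments are simply read off the trace line above):
declare @P0 int, @P1 int, @P2 int, @P3 int, @P4 bit
select @P0 = 219717, @P1 = 95, @P2 = NULL, @P3 = 1, @P4 = 0
exec dbo.mySproc @P0, @P1, @P2, @P3, @P4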

• Oracle stored procedure call failed, but not captured by the error handling

    Hi All,
I have a Unix shell script which calls a stored proc in Oracle. The stored proc call failed due to "ORA-01033: ORACLE initialization or shutdown in progress", but the failure is not captured in the error-handling block. Any ideas why this happened?
The SQL file my_test_sql.sql contains:
    exec my_proc(..............);
    Unix shell script has this call:
    sqlplus -s my_user/my_pwd@db1 @my_test_sql.sql
    if [[ $? -ne 0 ]]; then
    echo "failed"
    exit 1
    else
    echo "success"
    fi
If I execute the above shell script, I get the following:
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS SYSDBA]
where <logon> ::= <username>[/<password>][@<connect_identifier>] | /
    success.
    This puzzled me, any pointers?

    The $? status variable shows the return code of the last command executed. It will be difficult to determine what the exit status of your sql script is without knowing the script. Do you have any "WHENEVER SQLERROR EXIT" statements in the script?
    The ORA-01033 error happens when the database is not open, perhaps in recovery, or startup or shutdown is halted due to a failed or full disk, error in archiving or writing to redo, etc.
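A minimal sketch of my_test_sql.sql with explicit error trapping, so SQL*Plus hands the shell a nonzero exit code that the $? test can see (invoking sqlplus with the -L option, which gives up after one failed logon attempt, also helps here):
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT FAILURE
exec my_proc(..............);
EXIT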

  • ALV edit not capturing new value

    Hi
In my ALV I have made one field editable. When I edit the field and click the Save button, control comes to the code below.
    CALL METHOD gro_grid->get_selected_rows
        IMPORTING
          et_index_rows = gwa_selected_rows.
    *Through the index capturing the values of selected rows
      LOOP AT gwa_selected_rows INTO gv_selected_rows.
        READ TABLE git_data INTO gwa_data INDEX gv_selected_rows-index.
Here git_data is the internal table given to the ALV grid. In the above READ statement, gwa_data returns the old value of the field; it is not capturing the new value. How can I solve this? Please help.

    Hi,
    Use event data_changed.
When 'SAVE' is pressed, call check_changed_data( ), which will trigger the event data_changed.
    Inside handler method copy the modified cells to table.
    *PAI
    When 'SAVE'.
    g_o_grid->check_changed_data( ).  " It triggers event 'data_changed'.
    LOOP AT g_t_modcells INTO g_r_modcells.   " Loop at modified cells table
READ TABLE git_data INTO gwa_data INDEX g_r_modcells-row_id.      " <--- your code
    ENDLOOP.
    clear: g_t_modcells[],g_t_modcell[].
    *Declare data for handler method.
    Data:  g_t_modcells type lvc_t_modi,
              g_t_modcell type lvc_t_modi.
    *Declare handler method for event 'data_changed'.
    METHODS: data_changed FOR EVENT data_changed OF cl_gui_alv_grid IMPORTING er_data_changed.
    *Handler method implementation
      METHOD data_changed.
        IF er_data_changed->mt_good_cells[] IS NOT INITIAL.
          g_t_modcell[] = er_data_changed->mt_good_cells[].
          APPEND LINES OF g_t_modcell TO g_t_modcells.  " Modified cells are copied to table g_t_modcells[]
        ENDIF.
      ENDMETHOD.
    Thanks,

• Queries not using indexes

    hi all.
I want to know which queries are not using indexes. Is this possible?
    My db version is 10.2
    Thanks.

    gomcar wrote:
    hi all.
I want to know which queries are not using indexes. Is this possible?
    My db version is 10.2
Thanks.
Here is something that I just put together as a possible solution. You probably do not want to execute this SQL statement frequently, as it might cause a latching problem (note: not thoroughly tested):
    SELECT /*+ ORDERED */
      SP.SQL_ID,
      SP.HASH_VALUE,
      SP.CHILD_NUMBER,
      S.SQL_TEXT
    FROM
      (SELECT
        SP.SQL_ID,
        SP.HASH_VALUE,
        SP.CHILD_NUMBER,
        SUM(DECODE(INSTR(SP.OBJECT_TYPE,'INDEX'),0,0,1)) COUNTER
      FROM
        V$SQL_PLAN_STATISTICS_ALL SP
      WHERE
        SP.OBJECT_TYPE IS NOT NULL
      GROUP BY
        SP.SQL_ID,
        SP.HASH_VALUE,
        SP.CHILD_NUMBER
      HAVING
        SUM(DECODE(INSTR(SP.OBJECT_TYPE,'INDEX'),0,0,1))=0) SP,
      V$SQL S
    WHERE
      SP.SQL_ID=S.SQL_ID
      AND SP.HASH_VALUE=S.HASH_VALUE
      AND SP.CHILD_NUMBER=S.CHILD_NUMBER
ORDER BY
  S.SQL_TEXT;
Explanation of the above:
    The above looks at the stored execution plans for the queries currently in the shared pool, throwing out any line in the plan where no object is specified. If the OBJECT_TYPE column is found to not contain the word INDEX, a 0 is returned, otherwise a 1 is returned for that line in the plan. The sum of this generated column is calculated for each plan, and those plans having the sum of the generated column equal to 0 are returned. This inline view then drives into the V$SQL view to retrieve the matching SQL statements. An ordered hint is used to make certain that Oracle drives from the inline view into V$SQL.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

• SQL Server 2000 to Oracle 9.2 problem: "Source database not captured"

hi, first-timer here, working with the latest OMWB version.
Anyway, I'm trying to do an offline migration of the "Northwind" demo database as an example. There are no problems with the process, everything works like a charm... until it is time to press the "Finish" button in the wizard summary screen.
After that the output seems normal and there's no error; the completion pop-up reports 0 warnings and 0 errors, supposedly ending the loading of the model, but the capture is labeled as aborted and the source database is not captured at all.
I have done everything in the manual: created the privileged user "omwb_user", checked that the corresponding tablespace was created, gave enough space everywhere.
So, what's the problem here?
I will appreciate any help or hint.
Thanks in advance

I thought that my domain standard of "john.doe" was the problem, so I took out all the users; now there are only the "sa" and "guest" accounts.
I tried the online capture and it worked completely; I tried the offline capture again, same problem:
    ** Started : Fri Oct 22 09:26:41 COT 2004
    ** Workbench Repository : Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Repository Connection URL: jdbc:oracle:thin:@localhost:1521:orcl
    ** The following plugins are installed:
    ** Microsoft SQLServer 2000 Plugin, Production Release 10.1.0.2
    ** Microsoft SQLServer 6.5 Plugin, Production Release 10.1.0.2
    ** Active Plugin : SQLServer2K
    EXCEPTION :SQLServer2KDisconnSourceModelLoad.loadSourceModel(): oracle.mtg.migration.MigrationStopException: java.lang.IndexOutOfBoundsException: Index: 15, Size: 15
    ** Shutdown : Fri Oct 22 09:27:14 COT 2004
    any ideas?
    thanks

  • ODC - Import Server Failing "not properly indexed"

I've installed ODC 10.1351.1 along with Import Server, trying to import files from a folder.
Under "Batch Job Settings" > "Folder/File List Provider", I'm selecting the "Import from folder" option. I am NOT creating a list file. I've provided a file cabinet as part of the setup, and I'm trying to use Commit Server to do the commits.
When I execute the job in Import Server, my test TIF file gets renamed as expected, but my batch does not get committed. In the logs, I see the following error:
    Event: 1223
    ***Warning*** Batch OCCIT0913113350855 was not committed because it was not properly indexed.
I do see the batch job if I go to the Commit Server UI and look at the batch list, and I could review the items in Commit Server. However, I'm looking to simply load the folder and auto-commit the items. Am I off base, or do I need more setup? The documentation is not terribly clear here.
    Thanks for any insight,
    William

You are missing an indexing step, which can't be done automatically without Recognition Server (or manual indexing). The ODC manual explains briefly on page 34:
    Import Server, which imports images and other electronic documents directly into Capture from sources such as email, FTP sites, network folders, list files, and fax providers. As images and electronic documents are imported, they are converted into Capture batches where they can be indexed using Index or Recognition Server and then committed.
I use Import Server only for importing network-folder scans and assigning them to the appropriate batch for manual indexing, so I can't give you elaborate advice.
    Regards,
    Boris
    Edited by: tombo on 2011.09.14 01:21

  • Content Index State Crawling

We have an Exchange database that seems to always have a Content Index State of Crawling. Some end users have complained about slow searches and indexing issues (which is expected). We have stopped the search services and renamed the catalog directory in an effort to rebuild the search catalog, but it just goes right back to crawling. The database is only about 300 GB, so I don't think size is an issue.
Could it be that there is some corruption in the database that is causing issues with the index catalog? We have removed the database from the DAG and tried it as a standalone database, with the same results.
    Any ideas would be appreciated.

Did you find anything in the Event Viewer (App/Sys logs)?
You may consider restarting the Search Indexer service and then checking whether the index becomes healthy.
If it still doesn't report healthy, you might have to consider resetting the search index for the problematic database, or for all databases if the problem affects all of them.
Refer to the links below for info about how to reset the search index for a database:
Exchange 2010 Database copy on server has content index catalog files in the following state: Failed
Establishing Exchange Content Index Rebuild Baselines – Part 1
How To Troubleshoot Issues With Exchange 2010 Search
    Reply back with the outcome!
Pavan Maganti ~ ( Exchange | 2003/2007/2010/E15(2013)) ~~ Please remember to click "Vote As Helpful" if it really helps and "Mark as Answer" if it answers your question, "Unmark as Answer" if a marked post does not actually answer your question. ~~ This information is provided "AS IS" and confers NO rights!!

• Partitioned table not using index

    Hi Experts,
Actually I have a production partitioned table SMS_DELIVERY_NODETAILS with partitions PS_WD_01, PS_WD_02, ..., PS_WD_30, partitioned by date on the TOOPERATOR column (values like '10-11-2012', '11-11-2012', ...).
I have created a local index DELIVERY_CAMP on the CAMPAIGN_NAME column.
The issue I face:
Case 1: the query uses index DELIVERY_CAMP when I reference partition PS_WD_07 by name.
Case 2: when the query runs with TOOPERATOR='2012-11-25', the table is full-scanned.
Oracle version = 10g
OS version = Linux 5.5
    SQL> DESC SMS_DELIVERY_NODETAILS
Name                 Null?    Type
MSISDN                        VARCHAR2(15)
TRANSACTIONID        NOT NULL VARCHAR2(50)
TOOPERATOR                    VARCHAR2(25)
FROMOPERATOR                  VARCHAR2(25)
STATUS                        VARCHAR2(25)
TID_INDEX                     NUMBER
CAMPAIGN_NAME                 VARCHAR2(100)
NETWORK_ERROR_CODE            VARCHAR2(20)
    Case 1:
    SQL> EXPLAIN PLAN FOR
      2  SELECT count(*) from SMS_DELIVERY_NODETAILS partition(PS_WD_07) where CAMPAIGN_NAME ='1353814653772_ftp_Churnscore100_pe_100';
    Explained.
    SQL> set line 200
    @?/rdbms/admin/utlxpls.sql
    SQL>
    PLAN_TABLE_OUTPUT
    Plan hash value: 2934568714
    | Id  | Operation               | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT        |               |     1 |    38 |    53   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE         |               |     1 |    38 |            |          |       |       |
    |   2 |   PARTITION RANGE SINGLE|               |  6320 |   234K|    53   (0)| 00:00:01 |    31 |    31 |
    |   3 |    PARTITION LIST ALL   |               |  6320 |   234K|    53   (0)| 00:00:01 |     1 |   100 |
    |*  4 |     INDEX RANGE SCAN    | DELIVERY_CAMP |  6320 |   234K|    53   (0)| 00:00:01 |  3001 |  3100 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       4 - access("CAMPAIGN_NAME"='1353814653772_ftp_Churnscore100_pe_100')
    16 rows selected.
    case 2:
    SQL> SQL> EXPLAIN PLAN FOR
      2  SELECT count(*) from SMS_DELIVERY_NODETAILS WHERE TOOPERATOR='2012-11-25' and  CAMPAIGN_NAME ='1353814653772_ftp_Churnscore100_pe_100';
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3258763602
    | Id  | Operation               | Name                   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT        |                        |     1 |    58 | 76394   (2)| 00:15:17 |       |       |
    |   1 |  SORT AGGREGATE         |                        |     1 |    58 |            |          |       |       |
    |   2 |   PARTITION RANGE SINGLE|                        |     1 |    58 | 76394   (2)| 00:15:17 |    31 |    31 |
    |   3 |    PARTITION LIST ALL   |                        |     1 |    58 | 76394   (2)| 00:15:17 |     1 |   100 |
    |*  4 |     TABLE ACCESS FULL   | SMS_DELIVERY_NODETAILS |     1 |    58 | 76394   (2)| 00:15:17 |  3001 |  3100 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       4 - filter("TOOPERATOR"='2012-11-25' AND "CAMPAIGN_NAME"='1353814653772_ftp_Churnscore100_pe_100')
    16 rows selected.

Dear rp0428,
1. The table and index DDL
Table =>
Table_name                     Partition key column           Subpartition key column
SMS_DELIVERY_NODETAILS         TOOPERATOR                     TID_INDEX
Index =>
create index DELIVERY_TID_MSISDN_NODETAILS on SMS_DELIVERY_NODETAILS(TRANSACTIONID, MSISDN) local;
create index DELIVERY_CAMP on SMS_DELIVERY_NODETAILS(CAMPAIGN_NAME) local;
2. The query you used to collect the table and index stats
SELECT count(*) from SMS_DELIVERY_NODETAILS WHERE TOOPERATOR='2012-11-25' and CAMPAIGN_NAME ='1353814653772_ftp_Churnscore100_pe_100';
3. The table, partition, subpartition row counts
    SQL> select PARTITION_POSITION,PARTITION_NAME,SUBPARTITION_COUNT,HIGH_VALUE from user_tab_partitions where TABLE_NAME='SMS_DELIVERY_NODETAILS' order by PARTITION_POSITION ;
    PARTITION_POSITION PARTITION_NAME                 SUBPARTITION_COUNT HIGH_VALUE
                     1 PS_WD_10                                      100 '2012-10-28'
                     2 PS_WD_11                                      100 '2012-10-29'
                     3 PS_WD_12                                      100 '2012-10-30'
                     4 PS_WD_13                                      100 '2012-10-31'
                     5 PS_WD_14                                      100 '2012-11-01'
                     6 PS_WD_15                                      100 '2012-11-02'
                     7 PS_WD_16                                      100 '2012-11-03'
                     8 PS_WD_17                                      100 '2012-11-04'
                     9 PS_WD_18                                      100 '2012-11-05'
                    10 PS_WD_19                                      100 '2012-11-06'
                    11 PS_WD_20                                      100 '2012-11-07'
    PARTITION_POSITION PARTITION_NAME                 SUBPARTITION_COUNT HIGH_VALUE
                    12 PS_WD_21                                      100 '2012-11-08'
                    13 PS_WD_22                                      100 '2012-11-09'
                    14 PS_WD_23                                      100 '2012-11-10'
                    15 PS_WD_24                                      100 '2012-11-11'
                    16 PS_WD_25                                      100 '2012-11-12'
                    17 PS_WD_26                                      100 '2012-11-13'
                    18 PS_WD_27                                      100 '2012-11-14'
                    19 PS_WD_28                                      100 '2012-11-15'
                    20 PS_WD_29                                      100 '2012-11-16'
                    21 PS_WD_30                                      100 '2012-11-17'
                    22 PS_WD_31                                      100 '2012-11-18'
    PARTITION_POSITION PARTITION_NAME                 SUBPARTITION_COUNT HIGH_VALUE
                    23 PS_WD_32                                      100 '2012-11-19'
                    24 PS_WD_01                                      100 '2012-11-20'
                    25 PS_WD_02                                      100 '2012-11-21'
                    26 PS_WD_03                                      100 '2012-11-22'
                    27 PS_WD_04                                      100 '2012-11-23'
                    28 PS_WD_05                                      100 '2012-11-24'
                    29 PS_WD_06                                      100 '2012-11-25'
                    30 PS_WD_07                                      100 '2012-11-26'
                    31 PS_WD_08                                      100 '2012-11-27'
                    32 PS_WD_09                                      100 '2012-11-28'
                    33 PS_WD_DEFAULT                                 100 MAXVALUE
    33 rows selected.
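A hedged diagnostic that may bear on case 2: check whether the local index partitions of DELIVERY_CAMP actually carry statistics, since missing or zero partition-level stats can make the full scan look cheap to the optimizer:
select partition_name, num_rows, last_analyzed
from   user_ind_partitions
where  index_name = 'DELIVERY_CAMP'
order  by partition_position;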

• Query cannot use index

1. I found a SQL statement that costs heavily:
select userid, repute from user_attribute where repute > 3000 order by repute desc
SELECT STATEMENT Cost = 637
SORT ORDER BY
TABLE ACCESS FULL USER_ATTRIBUTE
2. I ran: select index_name from user_indexes where table_name = 'USER_ATTRIBUTE'
INDEX_NAME
IDX_USER_ATTRIBUTE_FACE
IDX_USER_ATTRIBUTE_POWER
IDX_USER_ATTRIBUTE_REPUTE
IDX_USER_ATTRIBUTE_USERID
so the repute column is indexed.
3. I use the CBO and analyzed the schema with COMPUTE:
optimizer_index_caching integer 99
optimizer_index_cost_adj integer 5
4. I tried: select /*index(IDX_USER_ATTRIBUTE_REPUTE)*/ userid, repute from user_attribute where repute > 3000 order by repute desc
and got the same explain plan as before.
5. Why can it not use the index for this query? Thanks.

I think your optimizer hint syntax is wrong. You need a "+" sign to indicate that the comment block is an optimizer hint, and the table name is not optional in the index hint:
select /*+ index(user_attribute IDX_USER_ATTRIBUTE_REPUTE) */ userid, repute
from user_attribute where repute > 3000 order by repute desc
Also, try:
select /*+ index_desc(user_attribute IDX_USER_ATTRIBUTE_REPUTE) */ userid, repute
from user_attribute where repute > 3000
This should order the result for you.
