Use of DBMS_STATS

Hi All,
I want to use the DBMS_STATS package to gather statistics:
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS (
ownname => 'SCOTT',
stattab => 'DBSTATS',
statid => 'SCOTT' || '_' || TO_CHAR(SYSDATE,'MMDDYYYY'),
statown => 'DB_MONITOR');
In the above syntax the DBSTATS table does not exist. Can someone tell me how to create the DBSTATS table, and what its structure should be so that it can store statistics?
Please guide me.
Thanks
Ajay

Hi, you must create the table with the procedure
DBMS_STATS.CREATE_STAT_TABLE (
ownname VARCHAR2,
stattab VARCHAR2,
tblspace VARCHAR2 DEFAULT NULL);
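For example, a minimal sketch of the call (the owning schema here follows your STATOWN value, and the tablespace is an assumption; adjust to your environment):
EXEC DBMS_STATS.CREATE_STAT_TABLE (
ownname => 'DB_MONITOR',
stattab => 'DBSTATS',
tblspace => 'USERS');
This creates DBSTATS with the structure DBMS_STATS expects, and your EXPORT_SCHEMA_STATS call can then populate it.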
If you need more information, please visit the following link:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1035018
Regards

Similar Messages

  • Dbms_stats.gather_table_stats; What are the parameters you gurus use?

    DB Version: 10.2.0.1.0
    Sometimes I want to collect the stats of just a few tables. I use the following script to gather individual table stats. Are these parameters OK?
    begin
    dbms_stats.gather_table_stats(user, upper('table_name'), estimate_percent=>100,
        no_invalidate=>false);
    end;
    /

    Hi,
    Check these links:
    [GATHER_TABLE_STATS Procedure|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461]
    [Using the DBMS_STATS-package|http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:735625536552]
    [Analyze and DBMS_STATS|http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4347359891525]
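    As an illustration only (these parameter values are common choices, not a universal recommendation), a fuller call might look like this:
    begin
    dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'TABLE_NAME',
    estimate_percent => dbms_stats.auto_sample_size, -- let Oracle pick the sample size
    method_opt       => 'for all columns size auto', -- column stats and histograms where useful
    cascade          => true,                        -- gather index stats too
    no_invalidate    => false);                      -- invalidate dependent cursors immediately
    end;
    /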
    Regards,

  • Problems using dbms_stats.auto_sample_size

    Hi,
    when I try to execute the following statement:
    exec dbms_stats.analyze_schema('SNT', dbms_stats.auto_sample_size);
    I get the following error:
    SQL> execute dbms_stats.gather_schema_stat
    BEGIN dbms_stats.gather_schema_stats('SNT'
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    ORA-06512: at "SYS.DBMS_STATS", line 9136
    ORA-06512: at "SYS.DBMS_STATS", line 9616
    ORA-06512: at "SYS.DBMS_STATS", line 9800
    ORA-06512: at "SYS.DBMS_STATS", line 9854
    ORA-06512: at "SYS.DBMS_STATS", line 9831
    ORA-06512: at line 1
    If I use:
    exec dbms_stats.analyze_schema('SNT', 30);
    the statement runs without problems. This is an Oracle 9.2 release.
    What could be the problem with the first statement? Why doesn't Oracle accept AUTO_SAMPLE_SIZE?
    Thanks in advance
    Dana

    FYI: I can only reproduce bug 2968571 with 9.2.0.3 and 9.2.0.4.
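    For reference, a sketch of the named-parameter form of the affected call; the syntax itself is valid, and the failure comes from the bug:
    exec dbms_stats.gather_schema_stats(ownname => 'SNT', estimate_percent => dbms_stats.auto_sample_size);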

  • COMPUTE INDEX vs. SYS.DBMS_STATS.GATHER_INDEX_STATS

    Hi,
    I've got an Oracle 9 database.
    I am creating an index, which I had previously created using the following syntax:
    CREATE INDEX index1. . . . . . COMPUTE STATISTICS;
    One of my colleagues tells me that I should scrap 'COMPUTE STATISTICS' and instead make this a two step process:
    (1) CREATE INDEX index1. . . . .
    (2) EXEC SYS.DBMS_STATS.GATHER_INDEX_STATS (
    OwnName => 'schema',
    IndName => 'index_name',
    Estimate_Percent => 10,
    Degree => 4,
    No_Invalidate => FALSE);
    Is there any advantage to using SYS.DBMS_STATS.GATHER_INDEX_STATS instead of 'COMPUTE STATISTICS'?
    Thanks,
    Tom

    See the 9i references. In particular, note that COMPUTE STATISTICS is deprecated at 10g, where statistics are gathered automatically during index creation.

  • Problem using CTXXPATH index

    Hi all,
    I'm using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 on Windows.
    I created this table:
    create table PERSISTENT_COMPOSITION (
      COMPOSITION_ID NUMBER(19) not null,
      XML_CONTENT    SYS.XMLTYPE not null
    );
    and filled it with more or less 1,000,000 records (that is, 1,000,000 XML documents loaded into XML_CONTENT).
    Then, first of all, I tested it with a simple query like the following:
    SELECT *
      FROM PERSISTENT_COMPOSITION t
    WHERE existsNode(t.xml_content, '/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]') = 1;
    obtaining the expected result: 50,000 records found.
    Now, in order to improve query performance, I created a CTXXPATH index as follows:
    CREATE INDEX IDX#COMP_CTXXPATH ON PERSISTENT_COMPOSITION(XML_CONTENT) INDEXTYPE IS CTXSYS.CTXXPATH;
    Then I tested the new performance using exactly the same query shown above... and here comes the problem: the query returns NO RESULT! No record was found! I looked at the query execution plan and it uses the created index IDX#COMP_CTXXPATH... but no record could be found.
    I thought it could be a matter of namespaces: in fact, the loaded XML documents have an xmlns set, so I changed the query as follows:
    SELECT *
    FROM persistent_composition t
    WHERE existsNode(t.xml_content,
                     '/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]',
                     'xmlns="http://this.is.an.xmlns.url.org/v1"') = 1
    and surprise: I obtained my 50,000 results just like before BUT, looking at the query execution plan, the IDX#COMP_CTXXPATH index HASN'T BEEN USED!
    I really don't understand why I get no result when using IDX#COMP_CTXXPATH... can someone help me?
    Thank you very much
    P.S.: I tried using ANALYZE (both on the index and on the table), CTX_DDL.sync_index and CTX_DDL.optimize_index, but got no result.

    Besides following Mark's advice (and I could be mistaken regarding this in combination with domain indexes), you should NOT use ANALYZE anymore in an Oracle 10 environment. Instead use DBMS_STATS; it's more flexible.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_4005.htm#SQLRF01105
    Note:
    Do not use the COMPUTE and ESTIMATE clauses of ANALYZE to collect optimizer statistics.
    These clauses are supported for backward compatibility.
    Instead, use the DBMS_STATS package, which lets you collect statistics in parallel,
    collect global statistics for partitioned objects, and fine tune your statistics collection
    in other ways. The optimizer, which depends upon statistics, will eventually use only
    statistics that have been collected by DBMS_STATS.
    See PL/SQL Packages and Types Reference for more information on the
    DBMS_STATS package. You must use the ANALYZE statement (rather than
    DBMS_STATS) for statistics collection not related to the cost-based optimizer, such as:
    - To use the VALIDATE or LIST CHAINED ROWS clauses
    - To collect information on freelist blocks
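    For the table in this thread, a minimal DBMS_STATS sketch of the replacement for ANALYZE (the owner is assumed to be the current user):
    begin
    dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'PERSISTENT_COMPOSITION',
    cascade => true); -- also gathers statistics on the table's indexes
    end;
    /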

  • Index is not used for this query

    I have this query and it doesn't use an index. Can you give your suggestions please?
    SELECT /*+ ORDERED USE_HASH(IC_GSMRELATION) USE_HASH(IC_UTRANCELL) USE_HASH(IC_SECTOR) USE_HASH(bt) */
    /* cp */
    bt.value value,
    bt.tstamp tstamp,
    ic_GsmRelation.instance_id instance_id
    FROM
    xr_scenario_tmp IC_GSMRELATION,
    xr_scenario_tmp IC_UTRANCELL,
    xr_scenario_tmp IC_SECTOR,
    rg_busyhour_tmp bt
    WHERE
    bt.instance_id != -1
    AND (IC_GSMRELATION.entity_id = 133)
    AND (IC_GSMRELATION.parentinstance_id = ic_UtranCell.instance_id)
    AND (IC_UTRANCELL.entity_id = 254)
    AND (IC_UTRANCELL.parentinstance_id = ic_Sector.instance_id)
    AND (IC_SECTOR.entity_id = 227)
    AND (IC_SECTOR.parentinstance_id = bt.instance_id);
    table: xr_scenario_tmp
      entity_id          num
      instance_id        num
      parentinstance_id  num
      localkey           varchar
    indexes:
      1. entity_id + instance_id
      2. entity_id + parentinstance_id
    table: rg_busyhour_tmp
      instance_id  not null  num
      tstamp       not null  date
      rank         not null  num
      value                  float
    index: instance_id + tstamp + rank
    thanks

    user5797895 wrote:
    Thanks for the update
    1. I don't understand where to put {}. you meant in the forum page like below
    Use the {code} tag. Read the [FAQ|http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ?t=anon] for more information. It's the link in the top right corner.
    >
    2. AROUND 8000 IN DEV MACHINE. BUT 1.5M IN PRODUCTION
    It's a more or less useless exercise if you have that vast difference between the two systems. You need to test this thoroughly using a similar amount of data.
    3.
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    You need to re-create your PLAN_TABLE; that's why important information is missing from your plans: the so-called "Predicate Information" section below the execution plan requires the current version of the plan table. Drop your current plan table and re-run this in SQL*Plus on the server:
    @?/rdbms/admin/utlxplan
    to re-create the plan table.
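    For reference, once the plan table is current, a minimal way to display a plan including the "Predicate Information" section (the query here is only a placeholder):
    explain plan for select * from dual;
    select * from table(dbms_xplan.display);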
    Dynamic sampling doesn't alter the plan in any way no matter what sampling level I choose.
    When I added Cardinality it switched from 1 full table scan and 2 index read
    Can you post the statements with the hints included, or at least just the first line including the hints used for the different attempts?
    # WITH dbms_stats.gather_table_stats, without cardinality it uses indexes all the time.
    How did you call DBMS_STATS.GATHER_TABLE_STATS, i.e. which parameter values were you using?
    # After deleting the table stats performance improved back
    All these different attempts are not really helpful if you don't say which of them was more effective than the other ones. That's why I'm asking for the "Predicate Information" section so that this information can be used to determine which of your tables might benefit from an indexed access path and which don't.
    As already mentioned several times if you use SQL tracing as described in one of the links provided you could see which operation produces how many rows. This would allow to determine if it is efficient or not.
    But given that you're doing all this with your test data it doesn't say much about the performance in your production environment.
    4. whether GTT created with "ON COMMIT PRESERVE ROWS"?
    YES - BUT DIFFERENT SESSIONS HAS DIFFERENT NUMBER OF ROWS
    The question is whether the number of rows differs significantly; if yes, then you shouldn't use the DBMS_STATS approach.
    5. neigher (48 sec. / 25 sec. run time) are sufficient, then what is the expected?
    ACTUALLY I AM DOING IT IN A DEVELOPMENT MACHINE. IN PRODUCTION THE NUMBER OF ROWS IS DIFFERENT. LAST TIME WHEN WE RELEASED THE
    PATCH WITH THIS CODE, THE PERFORMANCE WAS BAD.
    See 2., you need to have a suitable test environment. It's a more or less useless exercise if you only have a fraction of the actual amount of data.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Queries not using indexes

    We installed and configured a new environment of OBIEE and are trying to run a simple query in our data warehouse. This simple query takes only 7 seconds to complete in our previous data warehouse using TOAD but is taking 8+ minutes to complete in our new environment also using TOAD.
    Looking at the explain plans, the query in the new environment is not using indexes. Does anyone have an idea why it is not using the indexes? We checked and all of the indexes have been created and still exist. We also ran Analyze again on the two tables used in the query, but the query still did not use the indexes.
    Please let me know if anyone has ideas ASAP since we are baffled.

    - Are the object statistics identical? The ANALYZE statement has been deprecated for a while, particularly for data warehouse environments where there may be partitioning. Were you not using the DBMS_STATS package to gather statistics in the previous environment? Were statistics computed on the indexes?
    - Can you post the two query plans (formatted via DBMS_XPLAN and including the filter conditions)? It is not immediately obvious to me what index(es) might be useful here unless one of the two conditions is particularly selective which doesn't seem terribly likely based on just the table names involved.
    - When you do post the query plans, please use the [code] tags to preserve the white space so that the output is readable.
    Justin

  • ERROR in dbms_stats

    Hi,
    I am executing DBMS_STATS, but I am receiving the following error:
    SQL> exec dbms_stats.gather_schema_stats ( ownname => 'STA' , estimate_percent => 10 );
    BEGIN dbms_stats.gather_schema_stats ( ownname => 'STA' , estimate_percent => 10 ); END;
    ERROR at line 1:
    ORA-06521: PL/SQL: Error mapping function
    ORA-06512: at "SYS.DBMS_STATS", line 9375
    ORA-06512: at "SYS.DBMS_STATS", line 9857
    ORA-06512: at "SYS.DBMS_STATS", line 10041
    ORA-06512: at "SYS.DBMS_STATS", line 10095
    ORA-06512: at "SYS.DBMS_STATS", line 10072
    ORA-06512: at line 1
    Please provide me with a solution.

    I find this better
    @>ed
    Wrote file afiedt.buf
      1  begin
      2     dbms_stats.gather_schema_stats(
      3  ownname          => 'HR',
      4  estimate_percent => dbms_stats.auto_sample_size,
      5  degree           => 7
      6     );
      7* end;
    @>/
    PL/SQL procedure successfully completed.
    Use estimate_percent => dbms_stats.auto_sample_size to let Oracle determine the appropriate sample size
    for good statistics.
    Please visit here:
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036456
    Adith

  • Low_value and high_value in USER_TAB_COL_STATISTICS in readable format

    Hi,
    My Oracle Version is 10.2.0.4.
    How can I read the LOW_VALUE and HIGH_VALUE columns of USER_TAB_COL_STATISTICS in a readable format? I am getting the values in RAW. How would I get these values as CHAR for CHAR columns, NUMBER for NUMBER columns, and DATE for DATE columns?
    See the example given below.
    swamy@VSFTRAC1> DESC employee_attendance
    Name                                                                                   Null?    Type
    EMPID                                                                                  NOT NULL VARCHAR2(10)
    ACCESS_TIME                                                                            NOT NULL DATE
    ENAME                                                                                           VARCHAR2(50)
    FLOOR                                                                                           VARCHAR2(10)
    DOOR                                                                                            VARCHAR2(10)
    INOUT                                                                                           VARCHAR2(3)
    ACCESS_RESULT                                                                                   VARCHAR2(50)
    swamy@VSFTRAC1> SELECT column_name, density, num_distinct, num_nulls, low_value, high_value, avg_COL_len FROM user_tab_col_statistics WHERE table_name='EMPLOYEE_ATTENDANCE';
    COLUMN_NAME                       DENSITY NUM_DISTINCT  NUM_NULLS LOW_VALUE                      HIGH_VALUE                     AVG_COL_LEN
    EMPID                          .008333333          120          0 30303031303830                 3031313633                               7
    ACCESS_TIME                    .000259538         3853          0 786E0101031121                 786E0106121B01                           8
    ENAME                          .008333333          120          0 414248494A49542050415449       57494E53544F4E2053414D55454C20          16
                                                                                                     52414A552050
    FLOOR                                  .5            2          0 5345434F4E44                   5448495244                               7
    DOOR                                   .5            2          0 454E5452414E4345               535441495243415345                      10
    INOUT                                  .5            2          0 494E                           4F5554                                   4
    ACCESS_RESULT                           1            1          0 414343455353204752414E544544   414343455353204752414E544544            15
    7 rows selected.
    swamy@VSFTRAC1>

    Hi,
    You can use dbms_stats.convert_raw_value to convert the value to a readable format.
    Refer to the following for an example:
    http://structureddata.org/2007/10/16/how-to-display-high_valuelow_value-columns-from-user_tab_col_statistics/
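    As a quick sketch of the idea (the hex literal is the EMPID LOW_VALUE from the output above):
    set serveroutput on
    declare
      v_empid varchar2(50);
    begin
      dbms_stats.convert_raw_value(hextoraw('30303031303830'), v_empid);
      dbms_output.put_line('low EMPID = ' || v_empid); -- prints 0001080
    end;
    /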
    Hope that helps and solution for your requirement.
    - Pavan Kumar N
    Oracle 9i/10g - OCP
    http://oracleinternals.blogspot.com/

  • Upgrade R12.0.4 DB 10.2.0.3 to 11.1.0.7

    Hi All,
    I have followed the steps below to upgrade an R12 DB on SLES 10 SP2 from 10g to 11g:
    3- Perform prerequisites for patch 6928236
    4- Apply INTEROPERABILITY PATCH 6928236
    5- Apply TXK - 12.0.4 Consolidated Patch 1 7207440
    6- Perform patch 7207440 post install steps
    7- Apply patch 6400501 to the application tier, following these steps:
         cd /oracle/apps/prod/a*/a*/a*
    . ./APPSPROD_prod.env
    export PATH=$PATH:$ORACLE_HOME/OPatch
    cd /oracle/patches/apps/6400501
         opatch lsinventory -invPtrLoc $INST_TOP/admin/oraInst.loc
    opatch apply -invPtrLoc $INST_TOP/admin/oraInst.loc
    After patch is applied do the following:
    cd $ORACLE_HOME/forms/lib
    make -f ins_forms.mk install
    cd $ORACLE_HOME/reports/lib
    8- Apply timezone V4 patch 5632264 to 10g DB (OLD DB) using opatch apply. shutdown and start it up again for the change to take effect
         export PATH=$PATH:$ORACLE_HOME/OPatch
         cd /oracle/patches/db/5632264
         opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc
         opatch apply -invPtrLoc $ORACLE_HOME/oraInst.loc     
    9- Edit /home/oracle/.bash_profile with gedit (create it if it doesn't exist) and write in it:
    # Oracle Settings
    TMP=/tmp; export TMP
    TMPDIR=$TMP; export TMPDIR
    ORACLE_BASE=/oracle/apps/prod/db11; export ORACLE_BASE
    ORACLE_HOME=$ORACLE_BASE/tech_st/11.1.0; export ORACLE_HOME
    ORACLE_SID=PROD; export ORACLE_SID
    LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
    PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH; export PATH
    PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl/5.8.3; export PERL5LIB
    TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
    10- Install Oracle Database 11gR1 software only using the command
    ./runInstaller -invPtrLoc /oracle/apps/prod/db11/tech_st/11.1.0/admin/oui/PROD_prod/oraInventory/oraInst.loc
    In the Installation Types window, use the Product Languages button to select any languages other than American English that are used by your Applications database instance. Choose the Enterprise Edition installation type. In the subsequent windows, select the options not to upgrade an existing database and to install the database software only. Select Oracle Home Name to be PROD_db111_RDBMS
    11- Install Oracle Database 11g Products from the 11g Examples CD using the command
    ./runInstaller -invPtrLoc /oracle/apps/prod/db11/tech_st/11.1.0/admin/oui/PROD_prod/oraInventory/oraInst.loc
    In the Installation Types window, use the Product Languages button to select any languages other than American English that are used by your Applications database instance.
    12- Set ENV variables in /home/oracle/.bash_profile
    ORACLE_BASE=/oracle/apps/prod/db11; export ORACLE_BASE
    ORACLE_HOME=$ORACLE_BASE/tech_st/11.1.0; export ORACLE_HOME
    LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
    PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH; export PATH
    PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl/5.8.3; export PERL5LIB
    13- Install Oracle Database 11gR1 Patchset 7 using the command
    ./runInstaller -invPtrLoc /oracle/apps/prod/db11/tech_st/11.1.0/admin/oui/PROD_prod/oraInventory/oraInst.loc
    14- Run $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/nls/data/old/cr9idata.pl
    15- Add the following line to /home/oracle/.bash_profile ORA_NLS10=/oracle/apps/prod/db11/tech_st/11.1.0/nls/data/9idata; export ORA_NLS10
    16- export PATH=$PATH:$ORACLE_HOME/OPatch
    17- cp $ORACLE_HOME/oraInst.loc /etc
    18- opatch lsinventory
    19- Apply patch 7486407 to 11g home using
         opatch apply
    20- Apply patch 7684818 to 11g home using
         opatch napply -skip_subset -skip_duplicate
    21- Gather stats for dictionary tables on the 10g DB (OLD DB) using exec dbms_stats.gather_dictionary_stats;
    22- Extend the old database's SYSAUX tablespace to 500 MB using the following commands:
    A- alter database datafile '/oracle/apps/prod/db/apps_st/data/sysaux01.dbf' resize 500m;
    B- alter database datafile '/oracle/apps/prod/db/apps_st/data/sysaux01.dbf' autoextend on;
    23- Run the Oracle Net Configuration Assistant, to start Oracle Net Configuration Assistant run
    netca from $ORACLE_HOME/bin.
    24- Edit /etc/oratab and add the following line (otherwise DBUA will not find the old DB): PROD:/oracle/apps/prod/db/tech_st/10.2.0:N
    25- MANDATORY STEP: Copy the initPROD.ora file from Linux Patches to $ORACLE_HOME/dbs
    26- Copy the new DB oraInst.loc to /etc: cp $ORACLE_HOME/oraInst.loc /etc/oraInst.loc
    27- Upgrade the database using the Database Upgrade Assistant from the 11gR1 database and follow the steps:
         run dbua to start the Database Upgrade Assistant
    The upgrade fails in the post-upgrade steps where it's creating the control file for the new database (during the DBUA run I specified that I need to move the DB files to a new location) with the error:
    ORA-01166: file number 288 is larger than MAXDATAFILES (100)
    ORA-01110: data file 288: '$ORACLE_HOME/apps_st/data/PROD/system10.dbf'
    I have checked DB_FILES in initPROD.ora and it's 512. I also checked MAXDATAFILES in the control file from the source 10g DB and it's 512 (I used ALTER DATABASE BACKUP CONTROLFILE TO TRACE to get the MAXDATAFILES value).
    Any ideas how to fix this issue?
    Appreciate your help.
    Mohammed Tammous

    Create a SQL script file create_controlfile.sql. Open a trace log and find something similar to the following:
    show sqlprompt
    set sqlprompt 'SQL>'
    Create controlfile reuse set database "INSTANCE_NAME"
    MAXINSTANCES 8
    MAXLOGHISTORY 1
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    Datafile
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system02.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system03.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system04.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system05.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/ctxd01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/owad01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_queue02.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/odm.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/olap.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_tools1.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_ref03.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/xx_cdf_data1.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/xx_cdf_idx1.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system10.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system06.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/portal01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system07.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system09.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system08.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/system11.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/undo01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_data01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_ind01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_ref01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_int01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_summ01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_nolog01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_archive01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_queue01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_media01.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_data02.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_data03.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_ind02.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_ind03.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_ind04.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_txn_ind05.dbf',
    '/disk1/dbtier/db/oradata/INSTANCE_NAME/a_ref02.dbf',
    '/disk1/dbtier/data/sysaux01.dbf'
    LOGFILE GROUP 1 ('/disk1/dbtier/db/oradata/INSTANCE_NAME/log01a.dbf', '/disk1/dbtier/db/oradata/INSTANCE_NAME/log01b.dbf') SIZE 10240K,
    GROUP 2 ('/disk1/dbtier/db/oradata/INSTANCE_NAME/log02a.dbf', '/disk1/dbtier/db/oradata/INSTANCE_NAME/log02b.dbf') SIZE 10240K RESETLOGS;
    set heading off
    set timing off
    set pagesize 0
    set feedback off
    set linesize 2048
    set sqlprompt 'SQL_ENGINE_END_OF_SQL'
    show sqlprompt
    set sqlprompt 'SQL>'
    Copy this to your create_controlfile.sql and change MAXDATAFILES 100 to MAXDATAFILES 1024. Then connect to the database with SQL*Plus as a SYSDBA user and run @create_controlfile.sql.
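    As a hedged sketch of the overall sequence (CREATE CONTROLFILE requires the instance started NOMOUNT, and the RESETLOGS clause in the script means the database must then be opened with RESETLOGS):
    sqlplus / as sysdba
    SQL> STARTUP NOMOUNT
    SQL> @create_controlfile.sql
    SQL> ALTER DATABASE OPEN RESETLOGS;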

  • SQL query with Bind variable with slower execution plan

    I have a 'normal' SQL select-insert statement (not using bind variables) and it yields the following execution plan:
    Execution Plan
    0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
    1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
    3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
    4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
    (Cost=2 Card=135 Bytes=6480)
    Statistics
    0 recursive calls
    18 db block gets
    15558 consistent gets
    47 physical reads
    9896 redo size
    423 bytes sent via SQL*Net to client
    1095 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    55 rows processed
    I have the same query but instead run it using a bind variable (I tested it with both Oracle Forms and SQL*Plus); it takes considerably longer with a different execution plan:
    Execution Plan
    0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
    2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
    3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
    4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
    Statistics
    0 recursive calls
    12 db block gets
    3003199 consistent gets
    54 physical reads
    9448 redo size
    423 bytes sent via SQL*Net to client
    1258 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    55 rows processed
    TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with bind variables? I have DBA access to the database.
    Regards
    Ivan

    Many thanks for your reply.
    I have already gathered statistics for both TABLEA and TABLEB, as well as all the indexes associated with both tables (using DBMS_STATS; I am on a 9i DB), but not the indexed columns.
    For tables I use:
    begin
    dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
    end;
    For indexes I use:
    begin
    dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
    end;
    Is it possible to show me a sample of how to collect statistics for indexed columns?
    regards
    Ivan
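    A hedged illustration of one way to gather statistics on the indexed columns (the METHOD_OPT value is a common choice, not the only option):
    begin
    dbms_stats.gather_table_stats(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    method_opt => 'for all indexed columns size auto', -- column stats/histograms on indexed columns only
    cascade    => true);                               -- regathers the index stats as well
    end;
    /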

  • How to find where the bottleneck is (Oracle 11.2)

    I am running Oracle 11gR2 on Windows Server 2008 R2 with 20 GB of RAM.
    I am taking a Statspack report on the database (it follows below).
    I am not able to tell where the actual bottleneck is, and sometimes there is a problem with a procedure hanging that contains business logic fetched by a cursor.
    I generate statistics every day using:
    begin
    dbms_stats.gather_schema_stats(ownname => 'MFG',cascade => TRUE,no_invalidate => FALSE);
    end;
    and also using:
    analyze table abc compute statistics;
    Statspack report:
    Load Profile Per Second Per Transaction Per Exec Per Call
    ~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
    DB time(s): 2.0 0.8 0.00 0.01
    DB CPU(s): 1.5 0.6 0.00 0.00
    Redo size: 68,274.8 28,441.6
    Logical reads: 83,672.4 34,855.8
    Block changes: 633.0 263.7
    Physical reads: 2,763.0 1,151.0
    Physical writes: 37.4 15.6
    User calls: 379.8 158.2
    Parses: 342.6 142.7
    Hard parses: 0.3 0.1
    W/A MB processed: 9.9 4.1
    Logons: 34.0 14.2
    Executes: 2,702.3 1,125.7
    Rollbacks: 0.0 0.0
    Transactions: 2.4
    Instance Efficiency Indicators
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 98.06 Optimal W/A Exec %: 99.99
    Library Hit %: 100.03 Soft Parse %: 99.90
    Execute to Parse %: 87.32 Latch Hit %: 99.97
    Parse CPU to Parse Elapsd %: 91.04 % Non-Parse CPU: 99.19
    Shared Pool Statistics Begin End
    Memory Usage %: 68.94 69.12
    % SQL with executions>1: 64.57 65.59
    % Memory for SQL w/exec>1: 81.76 82.22
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time
    CPU time 2,194 71.4
    db file sequential read 131,781 323 2 10.5
    db file scattered read 89,404 206 2 6.7
    Disk file operations I/O 91,788 176 2 5.7
    direct path read 17,001 89 5 2.9
    Host CPU (CPUs: 8 Cores: 4 Sockets: 1)
    ~~~~~~~~ Load Average
    Begin End User System Idle WIO WCPU
    16.82 5.62 77.56
    Instance CPU
    ~~~~~~~~~~~~ % Time (seconds)
    Host: Total time (s): 15,156.8
    Host: Busy CPU time (s): 3,401.3
    % of time Host is Busy: 22.4
    Instance: Total CPU time (s): 2,827.1
    % of Busy CPU used for Instance: 83.1
    Instance: Total Database time (s): 3,753.3
    %DB time waiting for CPU (Resource Mgr): 0.0
    Memory Statistics Begin End
    ~~~~~~~~~~~~~~~~~ ------------ ------------
    Host Mem (MB): 20,468.5 20,468.5
    SGA use (MB): 11,022.6 11,022.6
    PGA use (MB): 779.3 807.4
    % Host Mem used for SGA+PGA: 57.7 57.8
    Time Model System Stats DB/Inst: ORACLE/oracle Snaps: 511-512
    -> Ordered by % of DB time desc, Statistic name
    Statistic Time (s) % DB time
    sql execute elapsed time 2,912.2 78.8
    DB CPU 2,825.0 76.4
    connection management call elapsed 343.6 9.3
    PL/SQL execution elapsed time 56.6 1.5
    parse time elapsed 42.8 1.2
    hard parse elapsed time 25.1 .7
    PL/SQL compilation elapsed time 1.1 .0
    repeated bind elapsed time 1.0 .0
    inbound PL/SQL rpc elapsed time 0.7 .0
    hard parse (sharing criteria) elaps 0.5 .0
    sequence load elapsed time 0.1 .0
    failed parse elapsed time 0.0 .0
    hard parse (bind mismatch) elapsed 0.0 .0
    DB time 3,697.2
    background elapsed time 56.1
    background cpu time 2.1
    Foreground Wait Events DB/Inst: ORACLE/oracle Snaps: 511-512
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> ordered by Total Wait Time desc, Waits desc (idle events last)
    Avg %Total
    %Tim Total Wait wait Waits Call
    Event Waits out Time (s) (ms) /txn Time
    db file sequential read 131,324 0 321 2 28.9 10.5
    db file scattered read 89,376 0 206 2 19.6 6.7
    Disk file operations I/O 91,705 0 176 2 20.2 5.7
    direct path read 16,992 0 89 5 3.7 2.9
    log file sync 5,064 0 16 3 1.1 .5
    db file parallel read 1,575 0 14 9 0.3 .4
    enq: KO - fast object checkp 8 0 5 591 0.0 .2
    control file sequential read 4,457 0 2 0 1.0 .1
    direct path write temp 1,635 0 2 1 0.4 .1
    SQL*Net more data to client 14,776 0 1 0 3.2 .0
    SQL*Net message from dblink 603 0 0 1 0.1 .0
    ADR block file read 91 0 0 4 0.0 .0
    direct path read temp 713 0 0 0 0.2 .0
    SQL*Net break/reset to clien 152 0 0 0 0.0 .0
    asynch descriptor resize 8,239 100 0 0 1.8 .0
    library cache: mutex X 1,238 0 0 0 0.3 .0
    SQL*Net more data from dblin 345 0 0 0 0.1 .0
    ADR block file write 5 0 0 0 0.0 .0
    latch free 66 0 0 0 0.0 .0
    direct path write 4 0 0 0 0.0 .0
    cursor: pin S 10 0 0 0 0.0 .0
    SQL*Net message from client 526,480 0 238,770 454 115.7
    jobq slave wait 3,954 100 2,034 514 0.9
    wait for unread message on b 1,896 98 1,894 999 0.4
    Streams AQ: waiting for mess 379 100 1,892 4993 0.1
    SQL*Net more data from clien 11,597 0 4 0 2.5
    single-task message 29 0 1 21 0.0
    SQL*Net message to client 526,478 0 1 0 115.7
    Background Wait Events DB/Inst: ORACLE/oracle Snaps: 511-512
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> ordered by Total Wait Time desc, Waits desc (idle events last)
    Avg %Total
    %Tim Total Wait wait Waits Call
    Event Waits out Time (s) (ms) /txn Time
    log file parallel write 7,020 0 16 2 1.5 .5
    db file parallel write 5,529 0 14 3 1.2 .5
    control file sequential read 5,966 0 5 1 1.3 .2
    control file parallel write 1,618 0 3 2 0.4 .1
    log file sequential read 66 0 3 50 0.0 .1
    SQL*Net more data to client 64,218 0 2 0 14.1 .1
    db file sequential read 457 0 2 4 0.1 .1
    os thread startup 66 0 1 8 0.0 .0
    Disk file operations I/O 83 0 0 1 0.0 .0
    asynch descriptor resize 64,343 100 0 0 14.1 .0
    direct path read 9 0 0 6 0.0 .0
    db file scattered read 28 0 0 2 0.0 .0
    rdbms ipc reply 8 0 0 0 0.0 .0
    LGWR wait for redo copy 57 0 0 0 0.0 .0
    log file single write 8 0 0 0 0.0 .0
    db file single write 1 0 0 2 0.0 .0
    rdbms ipc message 14,376 51 31,809 2213 3.2
    DIAG idle wait 3,738 100 3,788 1013 0.8
    smon timer 12 33 2,077 ###### 0.0
    dispatcher timer 32 100 1,920 60012 0.0
    Streams AQ: qmn coordinator 136 50 1,905 14009 0.0
    Streams AQ: qmn slave idle w 68 0 1,905 28017 0.0
    pmon timer 2,099 30 1,896 903 0.5
    Space Manager: slave idle wa 381 98 1,894 4970 0.1
    shared server idle wait 63 100 1,891 30014 0.0
    SQL*Net message from client 257,103 0 1,741 7 56.5
    SQL*Net more data from clien 64,218 0 88 1 14.1
    SQL*Net message to client 192,827 0 0 0 42.4
    -------------------------------------------------------------

    At the moment I am using only the ANALYZE method for statistics. But since DBMS_STATS is the newer method and covers all the limitations of ANALYZE, why should you not prefer it?
    Second, we don't have a license for the tuning pack, so AWR cannot be used. I am using the Statspack utility and some queries to find the top SQL by load time, which are below:
    high memory consumed:
    110125     19483     5.65
    begin CHECK_EMP_ISSUE(P_FOR_COMP=>:P_FOR_COMP, P_FOR_TRANS_DATE=>:P_FOR_TRANS_DATE, P_FOR_EMP_CODE=>:P_FOR_EMP_CODE, P_FOR_DEPT_CODE=>:P_FOR_DEPT_CODE, P_FOR_PROCESS=>:P_FOR_PROCESS, P_FOR_KAPAN_NO=>:P_FOR_KAPAN_NO, P_FOR_PACKET_NO=>:P_FOR_PACKET_NO, P_FOR_PACKET_ID=>:P_FOR_PACKET_ID, P_FOR_ISSUE_TYPE=>:P_FOR_ISSUE_TYPE, P_FOR_SHIFT_NO=>:P_FOR_SHIFT_NO, P_FOR_MC_CODE=>:P_FOR_MC_CODE, P_FOR_CHECK_PREF_EMP=>:P_FOR_CHECK_PREF_EMP, P_FOR_DUMMY=>:P_FOR_DUMMY, P_FOR_MSG=>:P_FOR_MSG, P_FOR_WARNING=>:P_FOR_WARNING, P_FOR_MSG_VALUE=>:P_FOR_MSG_VALUE, VREC=>:VREC); end;
    high i/o consumed:
    EVENT                         WAIT_CLASS       USER_IO_WAIT_TIME          SQL_TEXT
    asynch descriptor resize     Other            32454263             SELECT /*+ result_cache */ COMP_CODE,KAPAN_GROUP,VKAPAN_GROUP,SEQ_NO,PACKET_ID,CHILD_ID,INW_DATE, VKAPAN_NO,KAPAN_NO,VPACKET_NO,PACKET_NO,SUB_ID,STONE_TYPE,PCS,WGT,CUR_WGT,P_SEQ_NO,L_SEQ_NO, STAGE,STATUS,PACKET_TYPE,:B1 DEPT_CODE,EMP_CODE,PROCESS,CLV_END_DATE,MFG_END_DATE,TRUNC(SYSDATE)-TRUNC(INW_DATE)VDAYS FROM ( SELECT /*+ leading(a) use_hash(b)*/ :B3 COMP_CODE,A.KAPAN_GROUP,A.VKAPAN_GROUP,:B1 DEPT_CODE,A.SEQ_NO,A.PACKET_ID,A.CHILD_ID,A.INW_DATE, A.VKAPAN_NO,A.KAPAN_NO,A.VPACKET_NO,A.PACKET_NO,A.SUB_ID,A.STONE_TYPE,A.PCS PCS,A.WGT,A.CUR_WGT,A.P_SEQ_NO,A.L_SEQ_NO,

  • Question on 11g Manual Upgrade (Step #33)

    In the Note 837570.1 - Complete Checklist for Manual Upgrades to 11gR2 (Doc ID 837570.1), under Step #33
    Step 33
    Upgrade Statistics Tables Created by the DBMS_STATS Package
    If you created statistics tables using the DBMS_STATS.CREATE_STAT_TABLE procedure, then upgrade these tables by executing the following procedure:
    EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('SYS','dictstattab');
    In the example, 'SYS' is the owner of the statistics table and 'dictstattab' is the name of the statistics table. Execute this procedure for each statistics table.
    How do we know which tables were previously created with DBMS_STATS.CREATE_STAT_TABLE prior to the upgrade?
    Thank you very much.

    Hello and welcome to the forums.
    The statistics tables created by DBMS_STATS are used for exporting/importing statistics; they are not used by the optimizer. So if you are not planning to import statistics, you don't need to find (and upgrade) these tables.
    Anyway, stats tables have a basic table structure and can be found by the following query:
    SELECT OWNER, TABLE_NAME FROM DBA_TAB_COLUMNS WHERE
    COLUMN_NAME = 'STATID' AND DATA_TYPE = 'VARCHAR2' AND DATA_LENGTH = 30;
    Check if the listed tables also have these columns:
    Name                                      Null?    Type                       
    STATID                                             VARCHAR2(90)               
    TYPE                                               CHAR(3)                    
    VERSION                                            NUMBER                     
    FLAGS                                              NUMBER                     
    C1                                                 VARCHAR2(90)               
    C2                                                 VARCHAR2(90)               
    C3                                                 VARCHAR2(90)               
    C4                                                 VARCHAR2(90)               
    C5                                                 VARCHAR2(90)               
    N1                                                 NUMBER                     
    N2                                                 NUMBER                     
    N3                                                 NUMBER                     
    N4                                                 NUMBER                     
    N5                                                 NUMBER                     
    N6                                                 NUMBER                     
    N7                                                 NUMBER                     
    N8                                                 NUMBER                     
    N9                                                 NUMBER                     
    N10                                                NUMBER                     
    N11                                                NUMBER                     
    N12                                                NUMBER                     
    D1                                                 DATE                       
    R1                                                 RAW(32)                    
    R2                                                 RAW(32)                    
    CH1                                                VARCHAR2(3000)
    Regards
    Gokhan Atil
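    A hedged sketch combining the query above with Step 33; it attempts to upgrade every candidate table, so review the query's result list before running it:
    begin
      for t in (select owner, table_name
                  from dba_tab_columns
                 where column_name = 'STATID'
                   and data_type = 'VARCHAR2'
                   and data_length = 30) loop
        dbms_stats.upgrade_stat_table(t.owner, t.table_name);
      end loop;
    end;
    /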

  • Advice regarding how best to collect stats on 10G RAC Production system

    Friends,
    I have read quite a lot of blogs and docs and need some help with the best way forward. I am a DBA new to RAC with limited experience of busy 24x7 10g systems on the scale of my current employer's.
    Historically, stats are gathered here as follows:
    exec dbms_stats.unlock_schema_stats('BP');
    exec dbms_stats.gather_schema_stats(ownname => 'BP', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
    exec dbms_stats.lock_schema_stats('BP');
    Then flush the shared pool, OK?
    Because of previous issues with this, all tables are currently locked and this process is recommended every 1-2 months rather than daily.
    EM Grid Control is used when performance is poor and the sql tuning advisor is run to generate recommendations from which a sql profile could be selected and enabled for the selected code.
    My plan is to bring back gathering of stats every 1 to 2 months; my goal is to make sure I can fix things quickly if it all goes to custard!
    From research it looks like a SQL profile is like a hint and independent of gathering stats: it tells the optimizer what hints to use when executing the SQL.
    This thread is for advice from professional DBAs in my shoes: how do you approach this so that any issues are quickly rectified?
    My thinking is to query dba_sql_profiles and get the list of profiles and statuses for all statements with SQL profiles.
    This is so profiles can be disabled and then quickly re-enabled if there is a problem after the tables are analyzed.
    To revert all the schema stats :-
    exec dbms_stats.unlock_schema_stats('BP');
    exec dbms_stats.restore_schema_stats(ownname=>'BP',as_of_timestamp=>sysdate-1);
    exec dbms_stats.lock_schema_stats('BP');
    To revert a table's stats (this looks more finicky, so I'm not sure it's the way to go):
    Before gathering stats:
    select stats_update_time from user_tab_stats_history where table_name = '<EnterTabName>';
    exec dbms_stats.create_stat_table('SCOTT', 'stattab_new');
    exec dbms_stats.export_table_stats('SCOTT', 'DEPT', null, 'stattab_new', null, true, 'SCOTT');
    Then later, after gathering stats:
    exec dbms_stats.restore_table_stats('SCOTT', 'DEPT', '21-JAN-09 11.00.00.000000 AM -05:00');
    Enable/Disable Profile
    exec dbms_sqltune.alter_sql_profile('<Profile name>', 'STATUS', 'DISABLED');
    exec dbms_sqltune.alter_sql_profile('<Profile name>', 'STATUS', 'ENABLED');
    I will do the plan below on a test system first; however, the load may not really identify problems until it runs for real on the Prod system.
    My plan is to :-
    1 analyze all tables as per outline at start above (existing practice)
    2 Disable the sql profiles that are in use on the analyzed tables
    3 See what code is affected and what tables
    If a profile exists for these SQL statements then either apply the existing profile (as disabled) or use the tuning advisor to create another profile.
    (Advice welcome here: what do you do on big systems?)
    4 If it's a catastrophe, I can restore the schema stats using exec dbms_stats.restore_schema_stats(ownname=>'BP',as_of_timestamp=>sysdate-1);
    and then possibly re-enable the SQL profiles that were in place before.
    I welcome any advice based on similar experiences that can help me get this right.
    Many thanks,
    cheers, Rob

    Useful Link:
    http://www.oradev.com/create_statistics.jsp
    ## Gather schema stats
    begin
    dbms_stats.gather_schema_stats(ownname=>'SYSLOG');
    end;
    ## Gather a particular table stats of a schema
    begin
    DBMS_STATS.gather_table_stats(ownname=>'syslog',tabname=>'logs');
    end;
    Regards
    Asif Kabir
    -- mark the answer as correct/helpful

  • Bad performance after analyze index

    Hi,
    I have a range-partitioned table to which I recently had to add new partitions. After adding the new partitions, a particular query I had running on it started doing full table scans instead of using the index.
    I read somewhere that you have to disable the primary key prior to adding new partitions and re-enable it after completion. This I did, and true, the query was using the index again. However, as soon as I analyze the index, the full table scan is back.
    I even tried to analyze the table using:
    begin
    dbms_stats.gather_table_stats
    (user,'tablename',
    method_opt => 'for all indexes for all indexed columns size auto',
    cascade => true);
    end;
    Still the query doesn't improve.
    I am using 9.0.1.3 on Solaris.
    Any ideas on this? I'm stuck at the moment.
    Thanks!

    Hello,
    Please check <Bug:3048661>
    Hdr: 3048661 9.2.0.3.0 RDBMS 9.2.0.3.0 RAM INDEX PRODID-5 PORTID-23
    Abstract: CREATE BITMAP INDEXES TAKES LONGER IN 9203 WHEN PGA => 2GB
    Thanks
    Ashish
