Dbms_utility.analyze_schema

Does anyone know what percentage the package uses for estimating if no percentage is specified?
(The default is null, but it does do some estimating.)
The reason I ask is that there is a bug where, when using ANALYZE TABLE etc., if the estimate percentage is too high (for large numbers of rows), temp gets filled, then sys, and then the database will hang.

You are using an incorrect method of calculating statistics, one which has been deprecated by Oracle.
You should use dbms_stats.gather_schema_stats instead.
Also, you could and should have looked up this procedure in the online documentation at http://tahiti.oracle.com
Please do not abuse this forum by asking documentation questions.
Sybrand Bakker
Senior Oracle DBA
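For reference, a minimal dbms_stats call along those lines might look like this (the schema name is a placeholder; AUTO_SAMPLE_SIZE lets Oracle pick the sample size):

```sql
-- Placeholder schema name; adjust to your environment.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCOTT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- gather index statistics as well
END;
/
```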

Similar Messages

  • Error in execute DBMS_UTILITY.ANALYZE_SCHEMA

    Hi..
    I'm trying to execute this procedure in my script and get the following error:
    exec DBMS_UTILITY.ANALYZE_SCHEMA(b,'COMPUTE');
    ERROR at line 19:
    ORA-06550: line 19, column 10:
    PLS-00103: Encountered the symbol "DBMS_UTILITY" when expecting one of the
    following:
    := . ( @ % ;
    The symbol ":=" was substituted for "DBMS_UTILITY" to continue.
    ORA-06550: line 19, column 52:
    PLS-00103: Encountered the symbol "/" when expecting one of the following:
    Message was edited by:
    Dicipulofer

    The part of my script that calls it:
    declare
    TYPE lista_usuario_rec IS RECORD
    usuario VARCHAR2(50)
    TYPE lista_usuarios IS TABLE OF lista_usuario_rec INDEX BY BINARY_INTEGER;
    cursor usr_banco is select username from dba_users where username not in ('SYS','SYSTEM','OUTLN','DBSNMP','TRACESVR','BKPORCL','PERFSTAT','PROC');
    lst_usr lista_usuarios;
    contador NUMBER;
    b varchar2(30);
    begin
    contador:=0;
    for usr_rec in usr_banco loop
    lst_usr(contador).usuario := usr_rec.username;
    contador := contador + 1;
    end loop;
    for i in 1 .. contador-1 loop
    b:=lst_usr(i).usuario;
    exec DBMS_UTILITY.ANALYZE_SCHEMA(b,'COMPUTE'); // same thing
    end loop;
    end;
    execute immediate 'UPDATE logix.path_logix_v2 SET nom_caminho = ' ||&1|| Lower (cod_sistema) || '/';
    begin
    if '&2' = 'hml' then
    UPDATE logixexp.tb_usuarios SET dsc_caminho_fatura= 'C:\Arquivos de Programas\Logixexphml' WHERE cod_nivel_usuario = 4;
    UPDATE logixexp.tb_usuarios SET dsc_caminho_fatura= '\\red\usuarios\logixexphml\' WHERE cod_nivel_usuario <> 4;
    UPDATE logix.empresa SET den_empresa='ELFUSA *** BANCO DE DADOS HML ***' WHERE den_empresa LIKE 'ELFUSA GERAL DE ELETROFUSA%';
    else
    UPDATE logixexp.tb_usuarios SET dsc_caminho_fatura='C:\Arquivos de Programas\Logixexptst' WHERE cod_nivel_usuario = 4;
    UPDATE logixexp.tb_usuarios SET dsc_caminho_fatura='\\red\usuarios\logixexptst\' WHERE cod_nivel_usuario <> 4;
    UPDATE logix.empresa SET den_empresa='ELFUSA *** BANCO DE DADOS TESTE ***' WHERE den_empresa LIKE 'ELFUSA GERAL DE ELETROFUSA%';
    end if;
    end;
    exit
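    For what it's worth, the PLS-00103 comes from using the SQL*Plus command "exec" inside a PL/SQL block; "exec" is not PL/SQL, so the procedure must be called directly. A corrected sketch of the loop (the collection types in the original are not needed for this):

    ```sql
    -- "exec" removed: inside PL/SQL, call the procedure directly.
    BEGIN
      FOR usr_rec IN (SELECT username
                        FROM dba_users
                       WHERE username NOT IN ('SYS','SYSTEM','OUTLN','DBSNMP',
                                              'TRACESVR','BKPORCL','PERFSTAT','PROC'))
      LOOP
        DBMS_UTILITY.ANALYZE_SCHEMA(usr_rec.username, 'COMPUTE');
      END LOOP;
    END;
    /
    ```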

  • Error executing DBMS_UTILITY.ANALYZE_SCHEMA(b,'COMPUTE');

    Hi,
    I'm trying to execute this package and after some time I got this error:
    ORA-12012: error on auto execute of job 41
    ORA-00942: table or view does not exist
    ORA-06512: at "LOGIX.TEMPTABPKG", line 112
    ORA-06512: at "LOGIX.TEMPTABPKG", line 126
    I'm investigating to discover the cause. It's very strange because I did a full export and a full import.
    When I run DBMS_UTILITY.ANALYZE_SCHEMA a second time, this error doesn't happen.
    When the error happened on the first run, the process aborted and the subsequent tables didn't get statistics.
    As a workaround, is there some parameter in DBMS_UTILITY I could use to continue generating statistics if an error happens?
    Thanks.

    I suppose that your second execution is successful because the shared pool has "cached" this table (this is normal). I would recommend tuning the shared pool memory (make it bigger).
    But the two errors appearing the first time is not normal. Do you know this object "LOGIX.TEMPTABPKG"?
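    As a possible workaround (ANALYZE_SCHEMA has no continue-on-error parameter, as far as I know), you could analyze table by table and log individual failures. A sketch, using the LOGIX schema from the post:

    ```sql
    -- Gather stats per table and skip (but report) any table that errors out.
    BEGIN
      FOR t IN (SELECT table_name FROM dba_tables WHERE owner = 'LOGIX') LOOP
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => 'LOGIX',
                                        tabname => t.table_name);
        EXCEPTION
          WHEN OTHERS THEN
            DBMS_OUTPUT.PUT_LINE('Skipped ' || t.table_name || ': ' || SQLERRM);
        END;
      END LOOP;
    END;
    /
    ```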

  • Problem executing DBMS_UTILITY.ANALYZE_SCHEMA

    Hi
    I have a DBA schema called ASP.
    In this schema I have a procedure which loops through all the other client schema names to be analyzed and tries to execute:
    DBMS_UTILITY.ANALYZE_SCHEMA ( clientSchema , 'COMPUTE' )
    It is giving following exception:
    ORA-20000: You have insufficient privileges for an object in this schema.
    What kind of privileges do I have to grant to the ASP schema so that it can call DBMS_UTILITY.ANALYZE_SCHEMA on other client schemas?
    Is there any global privilege or role we can grant to the ASP schema so that it can call DBMS_UTILITY.ANALYZE_SCHEMA on all created schemas?
    Please give your input.
    Thanks in advance
    -Gopal

    grant ANALYZE ANY to ASP;

  • Dbms_utility 11g

    Hi brother,
    I am using 11g (11.1.0.7) with the dbms_stats call below to update the table and index stats, but I see the table stats haven't been updated at all. Should I use "dbms_utility.analyze_schema" instead?
    My existing script as below:
    dbms_stats.gather_schema_stats(
    ownname=> 'schema' ,
    options=> 'GATHER AUTO');

    Hi,
    Well, I would not suggest you use dbms_utility in this case; it does not generate stats that can be used by the optimizer.
    In your case:
    GATHER AUTO: Gathers all necessary statistics automatically. Oracle implicitly determines which objects need new statistics, and determines how to gather those statistics.
    How did you decide that this is not gathering the stats?
    Do statistics exist for the objects of the schema?
    This procedure also gives the list of objects it has processed (gathered stats for). You can use the 'objlist' parameter to get the list. Check whether Oracle is really not gathering the stats.
    Regards
    Anurag Tibrewal.
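    A sketch of the objlist suggestion above (the schema name is a placeholder):

    ```sql
    -- Capture which objects GATHER AUTO actually processed.
    DECLARE
      objs DBMS_STATS.OBJECTTAB;
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCHEMA',   -- placeholder name
                                     options => 'GATHER AUTO',
                                     objlist => objs);
      FOR i IN 1 .. objs.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(objs(i).objtype || ' ' || objs(i).objname);
      END LOOP;
    END;
    /
    ```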

  • Select query hangs on Oracle 9i but works fine in Oracle 8i

    Hi Guys,
    For a recap of what happened:
    Migrated from Oracle 8i to 9i for a customer; all queries and statements are working fine except for this particular query. If I run the same query on 8i it works like a charm.
    In 9i, if I remove even one field from the query it works; otherwise it just hangs.
    Any ideas, anyone?
    **Added 2:09PM: When I removed some ltrim and rtrim calls that I believe are not necessary, the query works fine. Is there any field length limitation in Oracle 9i???
    Below is the query:
    set pagesize 100;
    set linesize 1024;
    set heading off;
    set echo off;
    spool scb_xfer_hdr_npsx;
    select ltrim(rtrim(to_char(hdr_srl_no,'99'))) || ';' ||
    'P' || ';' || rtrim(ltrim(payment_type))|| ';'|| ';' ||
    ltrim(rtrim(our_ref_no))|| ';'|| 'MY;KUL;' ||
         rtrim(ltrim(debit_account_no))||';'||
    rtrim(ltrim(to_char(scb_batch_date,'YYYY/MM/DD'))) || ';'|| ';' ||
    rtrim(ltrim(payee_name_1))|| ';'|| ';' ||
    rtrim(ltrim(address_1))|| ';'||
    rtrim(ltrim(address_2))|| ';'||
    rtrim(ltrim(address_3))|| ';'|| ';' ||
    rtrim(ltrim(payee_name_11))|| ';' ||
    rtrim(ltrim(address_11))|| ';'||
    rtrim(ltrim(address_21))|| ';'||
    rtrim(ltrim(address_31))|| ';'|| ';' ||
         rtrim(ltrim(payee_bank_code)) || ';' || ';' ||
         rtrim(ltrim(payee_account_no)) || ';' ||
         rtrim(ltrim(our_ref_no2)) || ';' || ';' ||
         rtrim(ltrim(our_ref_no2)) || ';' || ';' || ';' || ';' ||
    rtrim(ltrim(payment_currency))|| ';'||
    rtrim(ltrim(to_char(payment_amount,'9999999999.99')))|| ';'||
         'C;P;;' || rtrim(ltrim(payee_bank_name))|| ';' ||
         'KL;;' ||
         rtrim(ltrim(delivery_method)) || ';' ||
         rtrim(ltrim(delivery_by)) || ';' ||
         rtrim(ltrim(counter_pickup_location))
    from scb_xfer_hdr_npsx
    order by hdr_srl_no;
    select distinct 'T;' || ltrim(rtrim(to_char(total_payments))) || ';' ||
         ltrim(rtrim(to_char(total_pay_amount,'9999999999.99')))
    from scb_xfer_hdr_npsx;
    spool off;
    spool scb_xfer_dtl_npsx;
    select ltrim(rtrim(to_char(srl_no,'99'))) || ';' || 'I' || ';' ||
         ltrim(rtrim(doc_no)) || ';' || ltrim(rtrim(to_char(doc_date,'yyyy/mm/dd'))) || ';' ||
         ltrim(rtrim(doc_description)) || ';' ||
         ltrim(rtrim(to_char(doc_amount,'9999999999.99')))
    from scb_xfer_dtl_npsx
    order by srl_no;
    spool off;
    set echo on;
    exit;
    Message was edited by:
    Logesh

    Hi,
    are you still on a 32bit kernel on AIX?
    How are the Form4.5 connected, are they on the same box, or are they using the forms GUI?
    What about the statistics, how do you collect them? Is the database set to the COST based optimizer?
    Do you use dbms_utility.analyze_schema, or this:
    exec dbms_stats.gather_schema_stats( -
    ownname => '$OWNER', -
    estimate_percent => 10, -
    granularity => 'ALL', -
    method_opt => 'FOR ALL COLUMNS SIZE 75', -
    degree => NULL, -
    options => 'GATHER $GATH', -
    cascade => TRUE )
    What does it mean that the SQL hangs? Can you cancel the query? Do you have to kill the session? Do you use any diagnostic tools like TOAD to trace the session and see what it is doing? Usually, if this is a performance/tuning issue, you will see the session is not dead and you will see advancing reads.
    Regards,
    Richard.

  • Problem with job scheduling

    When I execute that job :
    VARIABLE nojob NUMBER;
    begin
    DBMS_JOB.SUBMIT(:nojob,'my_proc();', sysdate, 'sysdate + 15/24');
    end;
    The job submits successfully, but it doesn't run automatically. If I force it:
    begin
    DBMS_JOB.RUN(:nojob);
    end;
    I've the following error :
    ERROR at line 1:
    ORA-12011: execution of 1 Jobs failed
    ORA-06512: at "SYS.DBMS_IJOB", line 394
    ORA-06512: at "SYS.DBMS_JOB", line 267
    ORA-06512: at line 2
    Have you already seen this problem?
    And do you know how to resolve it?
    Thanks.

    The same problem persists. I removed the ':' from my code and tried again; no progress. Here is my code and the error message again from when I try to run the job manually with dbms_job.run(jobno).
    DECLARE
    jobno number;
    BEGIN
    DBMS_JOB.SUBMIT(jobno,
    'dbms_utility.analyze_schema(''JHTMW_TU_KAMAL'',''COMPUTE'');',
    SYSDATE, 'NEXT_DAY(TRUNC(SYSDATE), ''WEDNESDAY'') + 13/24');
    COMMIT;
    END;
    The error code is
    SQL> exec dbms_job.run(21);
    BEGIN dbms_job.run(21); END;
    ERROR at line 1:
    ORA-12011: execution of 1 jobs failed
    ORA-06512: at "SYS.DBMS_IJOB", line 394
    ORA-06512: at "SYS.DBMS_JOB", line 276
    ORA-06512: at line 1
    Michael
    My job processes are on: I have job_queue_processes set to 2 and job_queue_interval set to 10 in my init.ora file.
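    A diagnostic sketch that may help here: the job queue records failures in the data dictionary, so you can check the job's state before forcing it again (the underlying ORA- error usually lands in the alert log or a trace file):

    ```sql
    -- Job number 21 is taken from the post above.
    SELECT job, what, broken, failures, last_date, next_date
      FROM dba_jobs
     WHERE job = 21;
    ```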

  • Losing indexes working with large amounts of data

    SCENARIO
    We are working on an Interface project with ORACLE 10g that works basically like this:
    We have some PARAMETER TABLES in which the key users can insert, update or delete parameters via a web UI.
    There is a download process that brings around 20 million records from our ERP system into what we call RFC TABLES. There are around 14 RFC TABLES.
    We developed several procedures that process all this data against the PARAMETER tables according to some business rules, and we end up with what we call XML TABLES because they are sent to another piece of software, completing the interface cycle. We also have INTERMEDIATE TABLES that are loaded in the middle of the process.
    The whole process takes around 2 hours to run.
    We had to create several indexes to get to this time. Without the indexes the process will take forever.
    Every night the RFC, INTERMEDIATE and XML tables need to be truncated and then loaded again.
    I know it might seem strange that we delete millions of records and then load them again. The reason is that the data the users insert in the PARAMETER TABLES needs to be processed against ALL the data that comes from the ERP and goes to the other software.
    PROBLEMS
    As I said we created several indexes in order to make the process run in less than 2 hours.
    We were able to run the whole process in that time a few times but, suddenly, the process started to HANG forever and we realized some indexes were just not being used anymore.
    When running EXPLAIN we found the indexes were having no effect and we had some full table scans (ACCESS FULL). Curiously, when taking the HINTS out and putting them back in, the indexes started working again.
    SOLUTION
    We tried things like
    DBMS_STATS.GATHER_SCHEMA_STATS(ownname => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'), cascade=>TRUE);
    dbms_utility.analyze_schema
    Dropping all tables and recreating them every time before the process starts
    Nothing solved our problem so far.
    We need advice from someone that worked in a process like this. Where millions of records are deleted and inserted and where a lot of indexes are needed.
    THANKS!
    Jose

    skynyrd wrote:
    I don't know anything about
    BIND variables
    Stored Outlines
    bind peeking issue
    or plan stability in the docs
    but I will research about all of them
    we are currently running the process with a new change:
    We put this line:
    DBMS_STATS.GATHER_SCHEMA_STATS(ownname => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'), cascade=>TRUE);
    after every big INSERT or UPDATE (more than 1 million records)
    It is running well so far (it's almost in the end of the process). But I don't know if this will be a definitive solution. I hope so.
    I will post here after I have an answer if it solved the problem or not.
    Thanks a lot for your help so far.

    Well, you'd best get someone in there who knows what those things are; basic development, basic performance tuning and basic administration are all predicated on understanding these basic concepts, and patching is necessary (unless you are on XE). I would recommend getting books by Tom Kyte; he clearly explains the concepts you need to know to make things work well. You ought to find some good explanations of bind peeking online if you google that term with +Kyte.
    You will be subject to this error at random times if you don't find the root cause and fix it.
    Here is some food for your thoughts:
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/ (one of those "what to expect from the 10g optimizer" links does work)
    http://kerryosborne.oracle-guy.com/2009/03/bind-variable-peeking-drives-me-nuts/
    http://pastebin.com/yTqnuRNN
    http://kerryosborne.oracle-guy.com/category/oracle/plan-stability/
    Getting stats on the entire schema as frequently as you do may be overkill and time wasting, or even counterproductive if you have an issue with skewed stats. Note that you can figure out what statistics you need and lock them, or if you have several scenarios, export them and import them as necessary. You need to know exactly what you are doing, and that is some amount of work. It's not magic, but it is math. Get Jonathan Lewis' optimizer book.
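    The locking/exporting idea above might be sketched like this (schema, table and stat-table names are placeholders):

    ```sql
    BEGIN
      -- Freeze known-good statistics so nightly reloads cannot disturb them:
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'FACT_TAB');
      -- Or keep a copy in a user stat table to restore later:
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYSCHEMA', stattab => 'STATS_BACKUP');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'FACT_TAB',
                                    stattab => 'STATS_BACKUP');
    END;
    /
    ```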

  • 8.1.7.4 Performance Issue

    Hi, I'm not sure I'm posting this question in the right place, but here is better than nowhere. We recently upgraded all our DB environments (in the last month) from 8.0.6 to 8.1.7.4 and took a performance hit. SQL that used to perform rather well is doing very poorly. We're doing full table scans instead of using indexes. Here's one example of a SQL statement that is doing badly:
    SELECT ALL COMPANY.ED_NO, COMPANY.CURR_IND, COMPANY.NAME, COMPANY.CITY,
    COMPANY.STATE, COMPANY.ZIP, COMPANY.ACCT_TYPE, COMPANY.VERSION_NO,
    COMPANY.AUDIT_DATE, COMPANY.AUDIT_USER, CONTRACT.CONTRACT_NO,
    EDITORNO.SOURCE, EDITORNO.ACCT_AUDIT_DATE, EDITORNO.ACCT_AUDIT_USER,
    COMPANY.SEARCH_KEY, EDITORNO.DONT_PUB_IND
    FROM COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT
    WHERE (COMPANY.SEARCH_KEY LIKE 'DAWN%' OR COMPANY.SEARCH_KEY LIKE 'DAWN%')
    AND COMPANY.ED_NO = CONTRACT.ED_NO(+) AND COMPANY.ED_NO = EDITORNO.ED_NO(+)
    AND COMPANY.ED_NO = TACS_CONTRACT.ED_NO(+) AND TACS_CONTRACT.PUB_CODE(+) = '01'
    AND CONTRACT.PUB_CODE(+) = '01' AND COMPANY.CURR_IND = '1'
    ORDER BY COMPANY.NAME
    The explain on this is:
    SELECT STATEMENT Cost = 1380
    2.1 SORT ORDER BY
    3.1 HASH JOIN OUTER
    4.1 HASH JOIN OUTER
    5.1 HASH JOIN OUTER
    6.1 TABLE ACCESS FULL COMPANY
    6.2 TABLE ACCESS FULL EDITORNO
    5.2 TABLE ACCESS FULL TACS_CONTRACT
    4.2 TABLE ACCESS FULL CONTRACT
    Low cost but our database has never done hash joins very well. This can take up to 3 minutes to return a result.
    If we disable the hash joins then we get:
    SELECT STATEMENT Cost = 2546
    2.1 SORT ORDER BY
    3.1 MERGE JOIN OUTER
    4.1 MERGE JOIN OUTER
    5.1 MERGE JOIN OUTER
    6.1 SORT JOIN
    7.1 TABLE ACCESS FULL COMPANY
    6.2 SORT JOIN
    7.1 TABLE ACCESS FULL TACS_CONTRACT
    5.2 SORT JOIN
    6.1 TABLE ACCESS FULL CONTRACT
    4.2 SORT JOIN
    5.1 TABLE ACCESS FULL EDITORNO
    This query runs in about the same amount of time as the one above (3 mins).
    So we go the hint route and add a hint:
    SELECT /*+ INDEX (company company_ie6) USE_NL(contract) USE_NL(editorno) USE_NL(tacs_contract)*/
    ALL COMPANY.ED_NO, COMPANY.CURR_IND, COMPANY.NAME, COMPANY.CITY,
    COMPANY.STATE, COMPANY.ZIP, COMPANY.ACCT_TYPE, COMPANY.VERSION_NO,
    COMPANY.AUDIT_DATE, COMPANY.AUDIT_USER, CONTRACT.CONTRACT_NO,
    EDITORNO.SOURCE, EDITORNO.ACCT_AUDIT_DATE, EDITORNO.ACCT_AUDIT_USER,
    COMPANY.SEARCH_KEY, EDITORNO.DONT_PUB_IND
    FROM COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT
    WHERE (COMPANY.SEARCH_KEY LIKE 'DAWN%' OR COMPANY.SEARCH_KEY LIKE 'DAWN%')
    AND COMPANY.ED_NO = CONTRACT.ED_NO(+) AND COMPANY.ED_NO = EDITORNO.ED_NO(+)
    AND COMPANY.ED_NO = TACS_CONTRACT.ED_NO(+) AND TACS_CONTRACT.PUB_CODE(+) = '01'
    AND CONTRACT.PUB_CODE(+) = '01' AND COMPANY.CURR_IND = '1'
    ORDER BY COMPANY.NAME;
    Here is the explain on this:
    SELECT STATEMENT Cost = 50743
    2.1 SORT ORDER BY
    3.1 CONCATENATION
    4.1 NESTED LOOPS OUTER
    5.1 NESTED LOOPS OUTER
    6.1 NESTED LOOPS OUTER
    7.1 TABLE ACCESS BY INDEX ROWID COMPANY
    8.1 INDEX RANGE SCAN COMPANY_IE6 NON-UNIQUE
    7.2 TABLE ACCESS BY INDEX ROWID TACS_CONTRACT
    8.1 INDEX RANGE SCAN TACS_CONTRACT_IE1 NON-UNIQUE
    6.2 TABLE ACCESS BY INDEX ROWID CONTRACT
    7.1 INDEX UNIQUE SCAN CONTRACT_PK UNIQUE
    5.2 TABLE ACCESS BY INDEX ROWID EDITORNO
    6.1 INDEX UNIQUE SCAN EDITORNO_PK UNIQUE
    4.2 NESTED LOOPS OUTER
    5.1 NESTED LOOPS OUTER
    6.1 NESTED LOOPS OUTER
    7.1 TABLE ACCESS BY INDEX ROWID COMPANY
    8.1 INDEX RANGE SCAN COMPANY_IE6 NON-UNIQUE
    7.2 TABLE ACCESS BY INDEX ROWID EDITORNO
    8.1 INDEX UNIQUE SCAN EDITORNO_PK UNIQUE
    6.2 TABLE ACCESS BY INDEX ROWID CONTRACT
    7.1 INDEX UNIQUE SCAN CONTRACT_PK UNIQUE
    5.2 TABLE ACCESS BY INDEX ROWID TACS_CONTRACT
    6.1 INDEX RANGE SCAN TACS_CONTRACT_IE1 NON-UNIQUE
    This query runs in a few seconds. So why does the query with the worst cost run the best? I'm concerned that we are going to alter our production application to add hints and I'm not even sure how to evaluate those hints because "Cost" no longer seems as reliable as before. Is anyone else experiencing this?
    Thank you for any help you can provide.
    Dawn
    [email protected]

    You can ignore the cost= part of an explain statement. This is something used internally by Oracle when calculating explain plans and doesn't indicate which plan is better. I don't know why it's included in the output except to confuse people.

    Really? This indicator (while not perfect) has always worked pretty well for me in the past.

    I think I may have been wrong about this after reading the 8.1.7 documentation. I'd seen other messages before saying to ignore the cost of explain plans, and I took those posts as being right.
    Anyway, here's what the 8.1.7 documentation says about analyzing tables. Maybe you should try analyzing your tables using the dbms_stats package mentioned below. There's also the dbms_utility.analyze_schema procedure, which we use and haven't had any problems with.
    From Oracle 8i Designing and Tuning for Performance Ch. 4:
    The CBO consists of the following steps:
    1. The optimizer generates a set of potential plans for the SQL statement based on its available access paths and hints.
    2. The optimizer estimates the cost of each plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement.
    The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of each possible access method and join order based on the estimated computer resources, including (but not limited to) I/O and memory, that are required to execute the statement using the plan.
    Serial plans with greater costs take more time to execute than those with smaller costs. When using a parallel plan, however, resource use is not directly related to elapsed time.
    3. The optimizer compares the costs of the plans and chooses the one with the smallest cost.
    To maintain the effectiveness of the CBO, you must gather statistics and keep them current. Gather statistics on your objects using either of the following:
    For releases prior to Oracle8i, use the ANALYZE statement.
    For Oracle8i releases, use the DBMS_STATS package.
    For table columns which contain skewed data (i.e., values with large variations in number of duplicates), you must collect histograms.
    The resulting statistics provide the CBO with information about data uniqueness and distribution. Using this information, the CBO is able to compute plan costs with a
    high degree of accuracy. This enables the CBO to choose the best execution plan based on the least cost.
    See Also:
    For detailed information on gathering statistics, see Chapter 8, "Gathering Statistics".
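    A histogram-gathering sketch along the lines of that advice, using the COMPANY.SEARCH_KEY column from the query above (the owner name and bucket count are assumptions):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'MYSCHEMA',   -- placeholder owner
        tabname    => 'COMPANY',
        method_opt => 'FOR COLUMNS SEARCH_KEY SIZE 254');  -- 254 = max buckets
    END;
    /
    ```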

  • Index size (row_nums) is bigger than the tables row

    Hi everyone,
    I'm encountering some strange problems with the CBO in Oracle 10.2.0.3: it's telling me that I have more rows in the indexes than there are rows in the tables.
    I've tried all combinations of dbms_stats and ANALYZE and cannot understand how the CBO comes up with such numbers. I've even done a "delete statistics" and re-analyzed the table and indexes, but it doesn't help.
    The command I used is variations of the following:
    exec
    DBMS_STATS.GATHER_TABLE_STATS(ownname=>'MBS',tabname=>'READINGTOU', -
    estimate_percent=>dbms_stats.auto_sample_size,method_opt=>'FOR COLUMNS PROCESSSTATUS',degree=>2);
    EVEN TRIED
    exec sys.dbms_utility.analyze_schema('MBS','ESTIMATE', estimate_percent => 15);
    I've even used an estimate_percent of 50 and am still getting lower numbers for the table.
    Initially I was afraid that since the index is larger than the table, the index would never be used. So the question is, does it really matter that the indexes' num_rows is bigger than the tables' num_rows? What is the consequence of this? And how do I get the optimizer to correct the differences in the stats? The table is 30G in size and growing, so a COMPUTE is out of the question.
    But I have the same problem in dev, and I did the COMPUTE in dev, and I get the same thing: more rows in the indexes than there are rows in the tables.
    Edited by: user630084 on Mar 11, 2009 10:45 AM

    Is your issue that you are having problems with the execution plans of queries referencing these objects? Or is your problem that you are observing more num_rows in the index than in the table when you query the data dictionary?
    If it's the latter then there's really no concern (unless the estimates are insanely inconsistent). The statistics are estimates and as such, will not be 100% accurate, though they should do a reasonable job of representing the data in your system (when they don't, then you have an issue, but we've seen nothing to indicate that as of yet).
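    If it's the latter, a quick dictionary check (sketched here with the owner and table from the post) shows how far apart the estimates really are, and when each object was last analyzed:

    ```sql
    SELECT t.table_name, t.num_rows AS table_rows,
           i.index_name, i.num_rows AS index_rows,
           t.last_analyzed AS tab_analyzed, i.last_analyzed AS idx_analyzed
      FROM dba_tables  t
      JOIN dba_indexes i ON i.table_owner = t.owner
                        AND i.table_name  = t.table_name
     WHERE t.owner = 'MBS' AND t.table_name = 'READINGTOU';
    ```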

  • Problems with Partition pruning using LIST option in 9i

    I am declaring the following partitions on a fact table and when
    I do an explain plan on the following SELECT statements it is
    doing a full table scan on the fact table. However if I use
    the "PARTITION" statement in the FROM clause it picks up the
    correct partition and runs great. I have used the analyze
    command and dbms_utility.analyze_schema command on all tables
    and indexes. I had similar problems when partitioning with the
    RANGE options too. I have looked through all of the INIT file
    parameters and don't see anything obvious. Is there something I
    am missing?
    Any help would be appreciated!
    Thanks,
    Bryan
    Facttable create statement....
    CREATE TABLE FactTable (
    ProductGID INTEGER NULL,
    CustomerGID INTEGER NULL,
    OrganizationGID INTEGER NULL,
    TimeGID INTEGER NULL,
    FactValue NUMBER NOT NULL,
    ModDate DATE NULL,
    CombinedGID INTEGER NOT NULL)
    PARTITION BY LIST (CombinedGID)
    (PARTITION sales1 VALUES(9999101),
    PARTITION sales2 VALUES(9999102),
    PARTITION sales3 VALUES(9999103),
    PARTITION model1 VALUES(9999204),
    PARTITION model2 VALUES(9999205),
    PARTITION model3 VALUES(9999206));
    Select statement that is causing a full table scan (the *tc tables are the equivalent of dimension tables in a star schema):
    SELECT tco.parentgid, tcc.parentgid, tcp.parentgid, sum
    (factvalue)
    FROM facttable f, custtc tcc, timetc tct, prodtc tcp, orgtc tco
    WHERE
    tco.childgid = f.organizationgid
    AND tco.parentgid = 18262
    AND tcc.childgid = f.customergid
    AND tcc.parentmembertypegid = 16
    AND tcp.childgid = f.productgid
    AND tcp.parentmembertypegid = 7
    AND tct.childgid = f.timegid
    AND tct.parentgid = 1009
    GROUP BY tco.parentgid, tcc.parentgid, tcp.parentgid;
    Select statement that works great....
    SELECT tco.parentgid, tcc.parentgid, tcp.parentgid, sum
    (factvalue)
    FROM facttable partition(sales1) f, custtc tcc, timetc tct,
    prodtc tcp, orgtc tco
    WHERE
    tco.childgid = f.organizationgid
    AND tco.parentgid = 18262
    AND tcc.childgid = f.customergid
    AND tcc.parentmembertypegid = 16
    AND tcp.childgid = f.productgid
    AND tcp.parentmembertypegid = 7
    AND tct.childgid = f.timegid
    AND tct.parentgid = 1009
    GROUP BY tco.parentgid, tcc.parentgid, tcp.parentgid;

    Hi Hoek,
    the DB version is 10.2 (italian version, then SET is correct).
    ...there's something strange: now I can INSERT rows but I can't update them!
    I'm using this command string:
    UPDATE TW_E_CUSTOMER_UNIFIED SET END_VALIDITY_DATE = TO_DATE('09-SET-09', 'DD-MON-RR') WHERE
    id_customer_unified = '123' and start_validity_date = TO_DATE('09-SET-09', 'DD-MON-RR');
    And this is the error:
    Error SQL: ORA-14402: updating partition key column would cause a partition change
    14402. 00000 - "updating partition key column would cause a partition change"
    *Cause:    An UPDATE statement attempted to change the value of a partition
    key column causing migration of the row to another partition
    *Action:   Do not attempt to update a partition key column or make sure that
    the new partition key is within the range containing the old
    partition key.
    I think it is impossible to use a PARTITION/SUBPARTITION like that: in fact, the update of "END_VALIDITY_DATE" causes a partition change.
    Do you agree, or is an update possible on a field that implies a partition change?
    Regards Steve
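    For what it's worth, ORA-14402 is commonly addressed by allowing rows to migrate between partitions:

    ```sql
    -- Let the UPDATE on the partition key move the row to its new partition.
    ALTER TABLE TW_E_CUSTOMER_UNIFIED ENABLE ROW MOVEMENT;
    -- Optionally switch it back off afterwards:
    -- ALTER TABLE TW_E_CUSTOMER_UNIFIED DISABLE ROW MOVEMENT;
    ```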

  • Handling the ORABPEL-schema in production

    Hi,
    are there any guidelines, how to handle the orabpel-schema in the database?
    Its tablespace is growing and growing and the "purge all instances"-button says:
    You are about to delete all the instances stored in the BPEL server. This operation should only be used if you want to clean your testing/development environment. It CANNOT be undone.
    So do i have to export the tablespace and then??
    Any documentation? Best practises?

    Use this script; it will solve your problem.
    This script purges all the instances in one go from the dehydration store.
    It is a faster way to clear the dehydration store.
    truncate table cube_instance;
    truncate table cube_scope;
    truncate table work_item;
    truncate table wi_exception;
    truncate table document_ci_ref;
    truncate table document_dlv_msg_ref;
    truncate table scope_activation;
    truncate table dlv_subscription;
    truncate table audit_trail;
    truncate table audit_details;
    truncate table sync_trail;
    truncate table sync_store;
    truncate table dlv_message;
    truncate table invoke_message;
    truncate table ci_indexes;
    alter table cube_instance deallocate unused;
    alter table cube_scope deallocate unused;
    alter table work_item deallocate unused;
    alter table wi_exception deallocate unused;
    alter table document_ci_ref deallocate unused;
    alter table document_dlv_msg_ref deallocate unused;
    alter table scope_activation deallocate unused;
    alter table dlv_subscription deallocate unused;
    alter table audit_trail deallocate unused;
    alter table audit_details deallocate unused;
    alter table sync_trail deallocate unused;
    alter table sync_store deallocate unused;
    alter table dlv_message deallocate unused;
    alter table invoke_message deallocate unused;
    alter table ci_indexes deallocate unused;
    alter table cube_scope enable row movement;
    alter table cube_scope shrink space compact;
    alter table cube_scope shrink space;
    alter table cube_scope disable row movement;
    alter table cube_instance enable row movement;
    alter table cube_instance shrink space compact;
    alter table cube_instance shrink space;
    alter table cube_instance disable row movement;
    exec dbms_utility.analyze_schema('ORABPEL', 'Compute');
    Cheers,
    Abhi.

  • Analyze table 10g steps

    Hi,
    DB: 10.2.0.4 RAC ASM
    OS: AIX 5.3L 64-bit
    I want to do analyze tables for all users.Please give me the steps for table and schema level.
    Thanks & Regards,
    Sunand

    CJ,
    dbms_utility.analyze_schema has been deprecated since 9i; you should be using dbms_stats.
    Sunand, by default there will be a gather stats job running on your database picking up any 'stale' statistics, have you disabled it?
    If you want/need to run it manually, dbms_stats.gather_database_stats is what you need. Documentation is here http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41448
    Carl
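    A quick way to check the automatic job Carl mentions (10g naming assumed; the job is called GATHER_STATS_JOB there):

    ```sql
    SELECT job_name, enabled, last_start_date
      FROM dba_scheduler_jobs
     WHERE job_name = 'GATHER_STATS_JOB';
    ```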

  • 8i database, 8i client, dev2000

    I have installed the 8i database on my computer (with the NT O/S) and then installed dev2000; it works okay. Then I installed the 8i client on this computer, and things stopped working properly.
    My purpose in installing the 8i client:
    To use the Oracle ODBC Drivers for building apps in VB. Meanwhile, I also sometimes create apps with dev2000.
    My question: it seems that dev2000 and the 8i client cannot be installed on the same computer. Any suggestions for this problem?
    Thanks.

    DBMS_STATS.GATHER_SCHEMA_STATS - Gathers Optimizer Statistics
    DBMS_UTILITY.ANALYZE_SCHEMA - Gathers Non-Optimizer Statistics
    Starting from 8.1.5, you can use DBMS_STATS.GATHER_SCHEMA_STATS to produce more accurate statistics than ANALYZE_SCHEMA. DBMS_STATS offers more features, such as:
    1. You can save and export the statistics from one user's schema tables for use in another schema.
    2. You can use it to compute exact global statistics at the table or index level.
    3. You can specify parallel runs against tables, schemas or the database.
    4. Automatic monitoring of statistics.
    5. It computes statistics at different levels: Global, Partition, Subpartition, All.

  • Stopping the schema stats gathering process

    Hi,
    I have a large schema in my DB (7 TB) against which I triggered a UNIX background job to compute statistics. The SQL I used was:
    EXEC DBMS_UTILITY.ANALYZE_SCHEMA('<<schema_name>>','COMPUTE');
    However, since the schema is quite big, this stats computation process has been running for the last 7 hours and I am not sure how long it will take to finish.
    My questions are as below:
    1. Is there any method to determine the status of this stats computation process?
    2. If I kill this background job (because I now have to run other data-loading jobs on the schema and I don't want them to error out because of exclusive locks on those objects), will it affect DB performance in any way?
    3. If I kill this background process, will the entire stats computation be rolled back, or will it stop at the point where it was halted?
    Kindly advice.
    Thanks in advance.

    Hi,
    Thanks for the prompt response. I agree that I should be using dbms_stats with an estimate for computing the schema stats. I re-analyzed the tables under the schema yesterday, so I hope I won't face the incorrect-cardinality problem now.
    However, I am facing another problem with the DB now. As a result of the data loading operation on the table, I see that all subsequent DML operations are taking a long time to complete, typically 30 minutes for what was earlier a 10-second operation.
    What do you suggest is causing the problem and needs rectification?
    Thanks in advance.
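    Regarding question 1 from the original post, the full scans driven by a COMPUTE are usually visible in V$SESSION_LONGOPS, which gives a rough progress estimate:

    ```sql
    SELECT sid, opname, target, sofar, totalwork,
           ROUND(sofar / totalwork * 100, 1) AS pct_done
      FROM v$session_longops
     WHERE totalwork > 0
       AND sofar < totalwork;
    ```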
