Gather stats

Hi All,
I want to add a DBMS_STATS.gather_table_stats call for a table to my scripts.
I'm just confused about where to put it.
Consider the following steps and sequence:
1. Table ABC is created, without any records.
2. Data is inserted into table ABC.
3. SOME MORE data is inserted into table ABC.
4. DBMS_STATS.gather_table_stats is executed.
5. Table ABC is SELECTed from.
6. Table ABC is UPDATEd.
7. Rows are DELETEd from table ABC.
8. SOME MORE data is inserted into table ABC.
Do I need to run DBMS_STATS.gather_table_stats between steps 6, 7 and 8?
rgds
saaz

Rakesh jayappa wrote:
Hi,
I want to add a DBMS_STATS.gather_table_stats call for a table to my scripts.
I'm just confused about where to put it.
Consider the following steps and sequence:
1. Table ABC is created, without any records.
2. Data is inserted into table ABC.
3. SOME MORE data is inserted into table ABC.
4. DBMS_STATS.gather_table_stats is executed.
5. Table ABC is SELECTed from.
6. Table ABC is UPDATEd.
7. Rows are DELETEd from table ABC.
8. SOME MORE data is inserted into table ABC.
Do I need to run DBMS_STATS.gather_table_stats between steps 6, 7 and 8?
Let me tell you the golden rule.
sql> desc dba_tab_modifications;
sql> select inserts, updates, deletes from dba_tab_modifications where table_name = 'ABC';
Let's assume it shows 10 inserts, 10 updates and 10 deletes: 10+10+10 = 30 modifications.
sql> select count(*) from ABC;
Say that returns 300 rows, so about 10% of the data got modified. Once roughly 10% of the data has been modified, Oracle suggests collecting the stats again if queries against the table are performing slowly.
Hope this answers your question
Kind Regards,
Rakesh Jayappa

Let me tell you the platinum rule
The optimizer uses as much information as it can to decide on an access plan for the data. If the information is accurate for the query, performance may or may not be good, though normally it is. If it is not accurate, performance may or may not be good.
It is entirely possible that the information gathered in statistics will not be accurate. It is possible that different queries will need different plans to get good performance, and the optimizer could be mistaken for some of them.
The optimizer is geared toward a steady state, not mass changes as described. Even so, it is possible to have a situation where a small change in data leads to a significant change in plan, with a consequent change in performance. Bind variable peeking means you can have a significant change in performance without changing the plan, simply from different values in the query.
For this scripted situation, you need to be explicit about which queries will be used and the best plan for each in the varying data situations. You can use hints, which may become a maintenance problem down the road and require a deep understanding of each query. You can use plan stability, which uses hints. You can lock the statistics, which may or may not be appropriate depending on the different queries. You cannot assume that any given percentage of data change will have a predictable result. You cannot predict what will happen just by gathering statistics at varying points in the process. You can test, test, test, and still not hit a real production situation.
So, it depends.
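To make the "it depends" concrete for the original script, here is a minimal sketch (table ABC is from the question; the staging table and the chosen parameters are assumptions) of gathering stats once the data is in a representative state:

```sql
-- Steps 2-3: bulk load (staging_abc is a hypothetical source).
INSERT INTO abc SELECT * FROM staging_abc;
COMMIT;

-- Step 4: gather stats while the data looks like what the
-- following SELECT/UPDATE will actually see.
BEGIN
   DBMS_STATS.gather_table_stats(
      ownname          => USER,
      tabname          => 'ABC',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      cascade          => TRUE);   -- gather index stats too
END;
/
-- Whether another gather belongs between steps 6/7 and 8 depends on
-- how much the DELETE and re-insert change volume and distribution.
```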

Similar Messages

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
In our application, we have users uploading files, resulting in inserts of records into a table. A file could contain anywhere from 10,000 to 1 million records.
I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost reduces and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats during a non peak hour apart from the nightly automated oracle job for these two tables.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
    TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
    ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(ownname => USER, tabname => ltab(i) , estimate_percent => dbms_stats.auto_sample_size,
            method_opt => 'for all indexed columns size auto', degree =>
            dbms_stats.auto_degree ,CASCADE => TRUE );
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
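If you do decide to keep the default job away from these two tables and manage them yourself, one common sketch (assuming TAB1/TAB2 live in the current schema) is to lock their statistics and gather manually from your scheduled job:

```sql
-- Lock the stats so the nightly auto job leaves TAB1/TAB2 alone.
BEGIN
   DBMS_STATS.lock_table_stats(ownname => USER, tabname => 'TAB1');
   DBMS_STATS.lock_table_stats(ownname => USER, tabname => 'TAB2');
END;
/
-- A manual gather must then override the lock explicitly:
BEGIN
   DBMS_STATS.gather_table_stats(
      ownname => USER, tabname => 'TAB1',
      cascade => TRUE,
      force   => TRUE);   -- force => TRUE gathers despite the lock
END;
/
```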

  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
My issue is that when my target table's size has not doubled or halved, the target table does NOT get analyzed.
    I am looking for a way or settings in OWB 10gR2, to gather stats on my target table no matter its size after the target table is loaded/updated.
    Thanks for your help in advance...
    ~Salil

    Hi
Unfortunately we have to disable automatic stats gathering on the 10g database.
My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
So I need to make sure to analyze my TEMP tables after they are truncated, loaded and subsequently updated, before I can process the data and load it into my data warehouse tables.
I also need to truncate all TEMP tables after the load is completed, to save space on my target database.
If we keep automatic stats ON in my target 10g database, it might gather stats for those TEMP tables while they happen to be empty.
    Any ideas to overcome this issue is appreciated.
    Thanks
    Salil
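One way to get deterministic stats on the TEMP tables regardless of the automatic job is to gather them explicitly in the load process, right after each TEMP table is populated (the table name below is illustrative):

```sql
-- After truncating and reloading a TEMP table, gather its stats
-- before the downstream processing reads it.
BEGIN
   DBMS_STATS.gather_table_stats(
      ownname => USER,
      tabname => 'TEMP_SALES_STG',   -- hypothetical staging table
      cascade => TRUE);
END;
/
-- Locking the stats afterwards keeps the automatic job from
-- re-gathering them while the table happens to be empty:
EXEC DBMS_STATS.lock_table_stats(USER, 'TEMP_SALES_STG');
```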

  • Performance tuning - Gather stats and statID created

    SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('HR', 'SAVED_STATS');
SQL> SELECT table_name, num_rows, blocks, empty_blocks, avg_space, user_stats, global_stats
  2  FROM user_tables
  3  WHERE table_name = 'MYCOUNTRIES';

TABLE_NAME    NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
MYCOUNTRIES          0       0             0          0  NO   YES

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR', TABNAME=>'MYCOUNTRIES', ESTIMATE_PERCENT=>10, STATOWN=>'HR', STATTAB=>'SAVED_STATS', STATID=>'PREVIOUS1');

TABLE_NAME    NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
MYCOUNTRIES         25       5             0          0  NO   YES

SQL> select statid, type, count(*)
  2  from saved_stats
  3  group by statid, type;

STATID     T  COUNT(*)
PREVIOUS1  C         3
PREVIOUS1  T         1
Qn) Are the stored statistics keyed by statid? i.e., every time I re-gather stats, should I specify a separate statid so that a new version of the stats is created/stored under a separate statid?

Are the statistics which are stored based on statid? i.e. every time I re-gather stats, should I mention a separate statid name, so that a new version of stats is created/stored under a separate statid?
Yes.
    From http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461
    statid Identifier (optional) to associate with these statistics within stattab
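A short sketch of how statids let you keep and restore multiple versions of stats in one stat table (owner/table/stattab names as in the thread; the statid PREVIOUS2 is made up):

```sql
-- Save the current dictionary stats for MYCOUNTRIES under a new statid.
BEGIN
   DBMS_STATS.export_table_stats(
      ownname => 'HR', tabname => 'MYCOUNTRIES',
      stattab => 'SAVED_STATS', statid => 'PREVIOUS2', statown => 'HR');
END;
/
-- Later, restore exactly that version back into the dictionary.
BEGIN
   DBMS_STATS.import_table_stats(
      ownname => 'HR', tabname => 'MYCOUNTRIES',
      stattab => 'SAVED_STATS', statid => 'PREVIOUS2', statown => 'HR');
END;
/
```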

  • Gather stats on every table in a schema

    Hi,
I have a CRM application running on a 10g R2 DB. It has 5000 tables, of which fewer than 10% are dynamic. The gather stats job runs every day at 2am successfully.
I was monitoring the statistics (dba_tables, dba_tab_modifications, dba_tab_statistics) and noticed that only 28 tables are updated with fresh stats every day for the CRM schema, and most of these are the same tables each day. During query tuning I found that some tables have stale stats, but they are not flagged in the STALE_STATS column of dba_tab_statistics, even though dba_tab_modifications shows numbers of rows inserted and updated.
My question: is there any drawback in gathering stats for all the tables every day (except tables with no rows), irrespective of whether 10% of the data was loaded or not?

Thanks for the quick response, it was helpful.
Due to application vendor recommendations, stats were disabled for some tables and optimizer parameters were changed, which prevents dynamic sampling from gathering stats on the fly for some queries that use the tables with no stats. As per the documentation, dynamic sampling should calculate stats on the fly when querying tables whose stats have not been gathered.
As of now I am not gathering stats manually for this schema, since the auto job is scheduled. I will verify whether the stats really are updated once 10% of the data is loaded; if not, I may manually gather stats for only those tables.
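One way to see which tables have pending changes before deciding on a manual gather is a query along these lines (the schema name is illustrative; column names are from the 10g data dictionary):

```sql
-- Monitoring info is flushed to the dictionary only periodically;
-- force a flush first so dba_tab_modifications is current.
EXEC DBMS_STATS.flush_database_monitoring_info;

-- Tables whose tracked changes exceed 10% of the analyzed row count.
SELECT m.table_name,
       m.inserts, m.updates, m.deletes,
       t.num_rows
FROM   dba_tab_modifications m
       JOIN dba_tables t
         ON t.owner = m.table_owner AND t.table_name = m.table_name
WHERE  m.table_owner = 'CRM'   -- hypothetical schema name
AND    t.num_rows > 0
AND    (m.inserts + m.updates + m.deletes) > 0.1 * t.num_rows;
```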

  • Auto task - Gather Stats.How do Oracle knows which DB objects to analyze ?

    Hi,
Does Oracle have any specific criteria for identifying which DB objects need to be analyzed as part of gather stats (included as an "auto task")?
Does it use information from DBA_TAB_MODIFICATIONS to find some fixed percentage of change, like 10%, or does it analyze all objects?

Copied and pasted from the documentation, which you can find via some simple Google searches if you don't know about http://docs.oracle.com:
    GATHER AUTO: Gathers all necessary statistics automatically. Oracle implicitly determines which objects need new statistics, and determines how to gather those statistics. When GATHER AUTO is specified, the only additional valid parameters are stattab,statid, objlist and statown; all other parameter settings are ignored. Returns a list of processed objects.
    GATHER STALE: Gathers statistics on stale objects as determined by looking at the *_tab_modifications views. Also, return a list of objects found to be stale.
    GATHER EMPTY: Gathers statistics on objects which currently have no statistics. Return a list of objects found to have no statistics.
    14.2.1 GATHER_STATS_JOB
    Optimizer statistics are automatically gathered with the job GATHER_STATS_JOB. This job gathers statistics on all objects in the database which have:
    Missing statistics
    Stale statistics
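The GATHER AUTO / GATHER STALE options quoted above can also be tried directly; a sketch (schema name is illustrative) that captures the list of objects DBMS_STATS decided to process:

```sql
-- Ask DBMS_STATS itself which objects it considers worth gathering.
DECLARE
   l_objects DBMS_STATS.objecttab;
BEGIN
   DBMS_STATS.gather_schema_stats(
      ownname => 'SCOTT',          -- hypothetical schema
      options => 'GATHER AUTO',
      objlist => l_objects);       -- filled with the processed objects
   FOR i IN 1 .. l_objects.COUNT LOOP
      DBMS_OUTPUT.put_line(l_objects(i).objtype || ' ' ||
                           l_objects(i).objname);
   END LOOP;
END;
/
```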

  • Refreshing mview is hanging after a database level gather stats

    hi guys,
    can you please help me identify the root cause of this issue.
    the scenario is this:
1. We have a scheduled unix job that refreshes an mview every day, from Tuesday to Saturday.
2. Database maintenance is done during weekends (Sundays), gathering stats at the database level.
3. The refresh mview unix job apparently hangs every Tuesday.
4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job. And voila, the mview refresh then succeeds.
5. For the rest of the weekdays through Saturday, the mview refresh has no problems.
We already identified during testing that the scenario where the mview refresh fails is after we gather stats at the database level.
After gathering stats at the schema level, the mview refresh is successful.
Can you please help me understand why refreshing the mview fails after we gather stats at the database level?
We are using Oracle 9i.
    the creation of the mview goes something like below:
    create materialized view hanging_mview
    build deferred
    refresh on demand
    query rewrite disabled
    appreciate all your help.
    thanks a lot in advance.

1. We have a scheduled unix job that refreshes an mview every day, from Tuesday to Saturday.
2. Database maintenance is done during weekends (Sundays), gathering stats at the database level.
3. The refresh mview unix job apparently hangs every Tuesday.
4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job. And voila, the mview refresh then succeeds.
5. For the rest of the weekdays through Saturday, the mview refresh has no problems.
You know Tuesday's MV refresh "hangs".
You don't know why it does not complete.
You want a solution so that it does complete.
You don't really know what it is doing on Tuesdays, but hope an automagical solution will be offered here.
The ONLY way I know to possibly get some clues is SQL_TRACE.
Only after knowing where the time is being spent will you have a chance to take corrective action.
The ball is in your court.
Enjoy your mystery!
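On 9i, a sketch of turning SQL_TRACE on for the refresh (the event-based syntax; the sid/serial# values are placeholders you would read from v$session for the hanging job, and DBMS_SYSTEM.set_ev is an unsupported but widely used call):

```sql
-- From the session that runs the refresh itself:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- Or, for another session (sid/serial# taken from v$session):
EXEC DBMS_SYSTEM.set_ev(123, 456, 10046, 8, '');  -- hypothetical sid/serial#

-- The trace file appears in user_dump_dest and can be formatted
-- with tkprof to see where the time is going.
```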

  • Gather Stats on Newly Partitioned Table

I partitioned an existing table containing 92 million rows. The method was dbms_redefinition, whereby I started the redef and then added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created, and I did not analyze any of the indexes. Then I loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran dbms_stats on this particular partition and it took over 15 hours. Normally it only takes 4 hours to gather stats on an individual partition, so I stopped it after 15 hours. When I monitored it while it was running, it looked like it was spending a very long time gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent it from taking so long when I run gather stats? Oracle Version 10.2.0.4

    -- Gather PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname          => upper(v_table_owner),
        tabname          => upper(v_table_name),
        partname         => v_table_partition_name,
        estimate_percent => 20,
        cascade          => FALSE,
        granularity      => 'PARTITION');

    -- Gather GLOBAL INDEX Statistics
    for i in (select * from sys.dba_indexes
              where table_owner = upper(v_table_owner)
              and table_name = upper(v_table_name)
              and partitioned = 'NO'
              order by index_name)
    loop
        SYS.DBMS_STATS.gather_index_stats(
            ownname          => upper(v_table_owner),
            indname          => i.index_name,
            estimate_percent => 20,
            degree           => NULL);
    end loop;

    -- Gather SUB-PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname          => upper(v_table_owner),
        tabname          => upper(v_table_name),
        partname         => v_table_subpartition_name,
        estimate_percent => 20,
        cascade          => TRUE,
        granularity      => 'ALL');

  • DBMS Gather Stats using ESTIMATE, gives varying number of rows

    I am a little intrigued by the gather_table_stats results for number of rows on user_tables.
    A table with no maintenance on it, has 242115 rows.
    When I gather stats COMPUTE, num_rows from user_tables = 242115.
    However, when I ESTIMATE the figure changes, without apparent reason:
    10% - 240710
    25% - 240564
    50% - 242904
    99% - 242136
Using ESTIMATE, I would expect the number of rows inspected to change, but not the resulting num_rows in user_tables!
I wonder why this is?
    Thanks

    Thank you for that amusing analogy!
However, it would be interesting to know where it gets this idea from. Why does it decide sometimes more, sometimes less, and on what basis?
Actually I'm not the person who knows the precise algorithm, and I don't have links handy to docs that describe how it is done. But one approach would be to enable trace and check what SQL Oracle issues to gather stats. Of course that won't reveal the whole algorithm, but you'll probably get some insight.
I mean, if I knew I had not removed or added any socks to the wardrobe, or even if I was unsure whether anyone else had, I would use the previous count as my starting point.
AFAIK Oracle doesn't have any such previous knowledge; to be more precise, Oracle doesn't use it. You as a person probably know something more about how the table was or wasn't changed, but Oracle doesn't know and/or use such information, at least for stats gathering.
    Gints Plivna
    http://www.gplivna.eu
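The varying num_rows can be reproduced directly: with ESTIMATE, Oracle counts rows in a sample and scales the result up, so each sample yields a slightly different extrapolation. A sketch, assuming a table named ABC in the current schema:

```sql
-- Each ESTIMATE gather samples different blocks, so the scaled-up
-- num_rows differs slightly from run to run.
EXEC DBMS_STATS.gather_table_stats(USER, 'ABC', estimate_percent => 10);
SELECT num_rows FROM user_tables WHERE table_name = 'ABC';

EXEC DBMS_STATS.gather_table_stats(USER, 'ABC', estimate_percent => 10);
SELECT num_rows FROM user_tables WHERE table_name = 'ABC';

-- COMPUTE (estimate_percent => NULL) reads every row and is exact.
EXEC DBMS_STATS.gather_table_stats(USER, 'ABC', estimate_percent => NULL);
SELECT num_rows FROM user_tables WHERE table_name = 'ABC';
```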

  • How to check whether gather stats job is running or not in OEM

    Hi,
People in our team say that there is an automatic job running in OEM to gather table statistics, and that it decides which tables need stats gathered.
I don't have much experience with OEM (Oracle 10g); please let me know how to check the job which gathers the table statistics, and where to check that job.
    Thanks in advance,
    Mahi

You may query dba_scheduler_job_log, like:
SQL> select job_name, log_date, status from dba_scheduler_job_log;
There you should see GATHER_STATS_JOB and its runs.
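To also see the job's current state and its next scheduled run, queries along these lines work against the 10g dictionary views:

```sql
-- Current state and schedule of the automatic stats job.
SELECT job_name, state, last_start_date, next_run_date
FROM   dba_scheduler_jobs
WHERE  job_name = 'GATHER_STATS_JOB';

-- Its run history, most recent first.
SELECT log_date, status
FROM   dba_scheduler_job_log
WHERE  job_name = 'GATHER_STATS_JOB'
ORDER BY log_date DESC;
```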

  • Can we change GATHER_STATS_PROG (auto gather stat job) behavior

    Hi All,
Is there any way we can change the default Oracle auto gather job?
We know that the Oracle gather job (GATHER_STATS_JOB, below 11g) triggers based on the Maintenance Window, and we have a few huge production environments.
We're thinking of gathering application schema stats using our own scripts, and would like to know whether we can still keep the Oracle auto gather stats job enabled
but exclude the application schemas.
    Thanks in advance.
    Regards,
    Klnghau

    Hi All,
Forgot to mention that my Oracle version is 10.2.0.4. I think I found the solution in My Oracle Support:
    Oracle10g: New DBMS_STATS parameter AUTOSTATS_TARGET [ID 276358.1]
    This is a new parameter in Oracle10g for the DBMS_STATS package.
    According to the documentation for this package in file dbmsstat.sql
    (under ORACLE_HOME/rdbms/admin):
    This parameter is applicable only for auto stats collection.
    The value of this parameter controls the objects considered for stats collection.
    It takes the following values:
    'ALL' -- statistics collected for all objects in system
    'ORACLE' -- statistics collected for all oracle owned objects
    'AUTO' -- oracle decides for which objects to collect stats
The default is AUTO; I think I can change it to ORACLE in my environment.
    Thanks
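Setting it is a single call (10g syntax; note this changes the behavior of the automatic job database-wide):

```sql
-- Check the current setting, then restrict the auto job to
-- Oracle-owned objects so application schemas are left alone.
SELECT DBMS_STATS.get_param('AUTOSTATS_TARGET') FROM dual;

EXEC DBMS_STATS.set_param('AUTOSTATS_TARGET', 'ORACLE');
```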

  • Gather stats:Time to complete?

    Experts..
Could you please let me know how to estimate the time to complete gathering stats (analyze) for a huge number of tables.
I would like to calculate the time to finish the analyze in the database.
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    PL/SQL Release 9.2.0.6.0 - Production
    CORE     9.2.0.6.0     Production
    TNS for Solaris: Version 9.2.0.6.0 - Production
    NLSRTL Version 9.2.0.6.0 - Production
    Thanks

hi,
create a table:
create table analyze_time(task varchar2(20), time date);
create a procedure for stats gathering:
create or replace procedure gather_st is
begin
    insert into analyze_time values ('start', sysdate);
    commit;
    -- call the dbms_stats procedure(s) here to gather stats
    insert into analyze_time values ('end', sysdate);
    commit;
end;
/
Execute this procedure to gather stats, and afterwards query analyze_time to check what time it started and what time it finished.
In SQL*Plus, you can also use "set timing on" and then gather stats; when it finishes, the prompt will show you the time it took to complete the operation.
    Salman
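For a gather that is already running, v$session_longops can also give Oracle's own rough estimate of the time remaining for the long operations it spawns:

```sql
-- In-flight long-running operations with a time-remaining estimate
-- (in seconds), e.g. the full scans done by a stats gather.
SELECT opname, target, sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;
```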

  • Gather Schema Statistics - GATHER AUTO option failing to gather stats

    Hi ,
    We recently upgraded to 10g DB and 11.5.10 version of Oracle EBS. I want to employ GATHER AUTO option while running Gather Schema Statistics.
To test this, I created a test table with 1 million rows. Then stats were gathered for this table alone using Gather Table Stats. Next, I deleted ~12% of the rows and issued a commit. The view all_tab_statistics shows that the table has stale statistics (STALE_STATS column = YES). After that I ran Gather Schema Stats for that particular schema, but the request did not pick up the test table.
    What is the criterion on which Oracle chooses which all tables to be gather statistics for under Gather Auto option? I am aware of the 10% change in data, but how is this 10% calculated? Is it only based on (insert + update + delete)?
    Also, what is the difference between Gather Auto and Gather Stale ?
    Any help is appreciated.
    Thanks,
    Jithin

Randalf,
FYI, this is what happens inside the concurrent program call; there are a few additional parameters for output/error messages:
    procedure GATHER_SCHEMA_STATS(errbuf out varchar2,
                                  retcode out varchar2,
                                  schemaname in varchar2,
                                  estimate_percent in number,
                                  degree in number,
                                  internal_flag in varchar2,
                                  request_id in number,
                                  hmode in varchar2 default 'LASTRUN',
                                  options in varchar2 default 'GATHER',
                                  modpercent in number default 10,
                                  invalidate in varchar2 default 'Y')
    is
        exist_insufficient exception;
        bad_input exception;
        pragma exception_init(exist_insufficient, -20000);
        pragma exception_init(bad_input, -20001);
        l_message varchar2(1000);
        Error_counter number := 0;
        Errors Error_Out;
        -- num_request_id number(15);
        conc_request_id number(15);
        degree_parallel number(2);
    begin
        -- Set the package body variable.
        stathist := hmode;
        -- check first if degree is null
        if degree is null then
            degree_parallel := def_degree;
        else
            degree_parallel := degree;
        end if;
        l_message := 'In GATHER_SCHEMA_STATS , schema_name= '|| schemaname
                     || ' percent= '|| to_char(estimate_percent) || ' degree = '
                     || to_char(degree_parallel) || ' internal_flag= '|| internal_flag;
        FND_FILE.put_line(FND_FILE.log, l_message);
        BEGIN
            FND_STATS.GATHER_SCHEMA_STATS(schemaname, estimate_percent,
                degree_parallel, internal_flag, Errors, request_id, stathist,
                options, modpercent, invalidate);
        exception
            when exist_insufficient then
                errbuf := sqlerrm;
                retcode := '2';
                l_message := errbuf;
                FND_FILE.put_line(FND_FILE.log, l_message);
                raise;
            when bad_input then
                errbuf := sqlerrm;
                retcode := '2';
                l_message := errbuf;
                FND_FILE.put_line(FND_FILE.log, l_message);
                raise;
            when others then
                errbuf := sqlerrm;
                retcode := '2';
                l_message := errbuf;
                FND_FILE.put_line(FND_FILE.log, l_message);
                raise;
        END;
        FOR i in 0..MAX_ERRORS_PRINTED LOOP
            exit when Errors(i) is null;
            Error_counter := i + 1;
            FND_FILE.put_line(FND_FILE.log, 'Error #'||Error_counter||': '||Errors(i));
            -- added to send back status to concurrent program manager bug 2625022
            errbuf := sqlerrm;
            retcode := '2';
        END LOOP;
    end;

  • Gather stats after shrink command

    Hello.
    Running Oracle 11.2 and was preparing to run shrink command on a major table to release 5G of wasted space on a 15G segment.
    I have already done this in our test database and was preparing to do this in production, when one web site I went to said we should run fresh stats after the shrink operation.
    Does this make sense, and does it seem to be necessary?
    It does seem to make sense that the values for number of blocks, number of empty blocks would be different after the shrink operation.
    But I am surprised I did not see this recommendation on any other site except the one site.

    Well, sure... if we want to consider this a DML or DDL operation, but actually, we are not changing or defining new data structure, and we are not manipulating the data (per se).
    But, I'm in agreement that it makes sense to gather fresh stats just based on the difference of blocks and empty blocks which we can assume the optimizer considers when choosing an execution plan.
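A minimal sketch of the whole operation (the table name is illustrative; SHRINK SPACE requires an ASSM tablespace and row movement enabled):

```sql
-- Release the wasted space, then refresh the stats the optimizer
-- uses (blocks / empty_blocks change after the shrink).
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE CASCADE;   -- CASCADE shrinks indexes too

BEGIN
   DBMS_STATS.gather_table_stats(
      ownname => USER, tabname => 'BIG_TABLE', cascade => TRUE);
END;
/
```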

  • Gather Stats Job in 11g

    Hi,
    I am using 11.1.0.7 on IBMAIX Power based 64 bit system.
In 10g, if I query the dba_scheduler_jobs view, I see the GATHER_STATS_JOB for automated statistics collection, but in 11g I don't see this; rather, I see the BSLN_MAINTAIN_STATS_JOB job, which executes the BSLN_MAINTAIN_STATS_PROG program for stats collection.
And if I query DBA_SCHEDULER_PROGRAMS, I also see the GATHER_STATS_PROG program there. Can the gurus help me understand both in 11g? Why are there two different programs, and what is the difference?
    Actually the problem is that i am receiving following error message in my alert log file
    Mon Aug 16 22:01:42 2010
    GATHER_STATS_JOB encountered errors.  Check the trace file.
    Errors in file /oracle/diag/rdbms/usgdwdbp/usgdwdbp/trace/usgdwdbp_j000_1179854.trc:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
The trace file shows:
    *** 2010-08-14 22:10:14.449
    *** SESSION ID:(2028.20611) 2010-08-14 22:10:14.449
    *** CLIENT ID:() 2010-08-14 22:10:14.449
    *** SERVICE NAME:(SYS$USERS) 2010-08-14 22:10:14.449
    *** MODULE NAME:(DBMS_SCHEDULER) 2010-08-14 22:10:14.449
    *** ACTION NAME:(ORA$AT_OS_OPT_SY_3407) 2010-08-14 22:10:14.449
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    *** 2010-08-14 22:10:14.450
    GATHER_STATS_JOB: GATHER_TABLE_STATS('"DWDB_ADMIN_SYN"','"TEMP_HIST_HEADER_LIVE"','""', ...)
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
But we don't have GATHER_STATS_JOB in 11g, and the job BSLN_MAINTAIN_STATS_JOB runs only on weekends, yet the above error message came last night, 16 August.
    Thanks
    Salman

    Thanks for the people who are contributing.
I know from that information that the table was locked, but I tried manually locking a table and executing the gather_table_stats procedure, and it ran fine. I have two questions here:
Where is GATHER_STATS_JOB in 11g? As you can see, the trace file text says GATHER_STATS_JOB failed, but I don't see any GATHER_STATS_JOB in 11g.
BSLN_MAINTAIN_STATS_JOB is supposed to gather statistics, but only on weekend nights; how come I see this error occurring at 22:11 on 16 August, which is not a weekend night?
    Salman
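In 11g the old GATHER_STATS_JOB was replaced by the automated maintenance task framework, which runs the stats collection under generated job names like ORA$AT_OS_OPT_SY_nnn (matching the ACTION NAME in the trace above). The task and its run history can be inspected like this:

```sql
-- 11g: stats collection is an autotask, not a scheduler job.
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';

-- Its recent runs (the ORA$AT_OS_OPT_SY_nnn jobs).
SELECT job_name, job_start_time, job_status
FROM   dba_autotask_job_history
WHERE  client_name = 'auto optimizer stats collection'
ORDER BY job_start_time DESC;
```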

  • Gather Stats-Upgrade

    Dear All,
R11.5.10.2 upgrade to R 12.3
As per the upgrade guide, it is necessary to run GATHER SCHEMA STATS (for all schemas). Our DB is 2.8 TB, and running this for all schemas is tedious, will consume a lot of time, and will not be a feasible option during the production upgrade.
    Please advise

As per the upgrade guide, it is necessary to run GATHER SCHEMA STATS (for all schemas). Our DB is 2.8 TB, and running this for all schemas is tedious, will consume a lot of time, and will not be a feasible option during the production upgrade.
You need to run Gather Schema Stats before you start the upgrade (before your actual downtime starts), so it should not impact the upgrade downtime.
    Oracle E-Business Suite Upgrade Guide Release 11i to 12.1.1 [ID 1082375.1]
    11i and R12: Are there any recommended/ideal Percentages for running Gather Schema Statistics across e-Business Suite? [ID 1184276.1]
    https://forums.oracle.com/forums/search.jspa?threadID=&q=GATHER+AND+SCHEMA+AND+Upgrade&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein
