Gather Table Statistics (Stats) Crashing Application
Hello
Just started at a new place of work which has a 9i database with a VB front-end app.
I noticed that no stats had ever been gathered for any objects in the database.
I decided to run a gather-stats procedure on a particular heavily used table, but it caused the application to crash/freeze:
begin
dbms_stats.gather_table_stats(OWNNAME=> , TABNAME=> ,
ESTIMATE_PERCENT=> dbms_stats.auto_sample_size,
METHOD_OPT=> 'FOR ALL COLUMNS SIZE 1', CASCADE=>TRUE);
end;
I decided to use auto sample size and to also gather stats on the indexes (CASCADE=>TRUE rather than FALSE), but it appeared to crash the application. The gather job itself completed fine.
Has anyone ever encountered such an issue? Could there be a reason for this (i.e. no stats ever having been gathered)?
Thanks
Edited by: user636420 on Jan 22, 2010 9:22 AM
I have found something interesting...
Prior to Oracle 10g, Oracle's default optimizer mode was called “choose.” In the choose optimizer mode, Oracle will execute the rule-based optimizer if there are no statistics present for the table; it will execute the cost-based optimizer if statistics are present. The danger with using the choose optimizer mode is that problems can occur in cases where one Oracle table in a complex query has statistics and the other tables do not.
This probably explains what happened: I gathered statistics for one table and not the others.
Just checked the explain plan for a particular SQL query and it's using CHOOSE optimizer mode.
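Given the CHOOSE behaviour described above, the safer move is arguably to give the CBO consistent statistics for every table the application queries, rather than just one. A hedged sketch; the schema name APP_OWNER is made up, substitute the real application owner:

```sql
-- Gather stats for the whole application schema so CHOOSE
-- doesn't mix analyzed and un-analyzed tables in one query.
-- APP_OWNER is a hypothetical name.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APP_OWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',
    cascade          => TRUE);
END;
/
```

Alternatively, DBMS_STATS.DELETE_TABLE_STATS on the one analyzed table would send those queries back to the rule-based optimizer until a full gather can be scheduled.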
Edited by: user636420 on Jan 22, 2010 9:35 AM
Similar Messages
-
Different Ways To Gather Table Statistics
Can anyone tell me the difference(s), if any, between these two ways to refresh table stats:
1. Execute this procedure which includes everything owned by the specified user
EXEC DBMS_STATS.gather_schema_stats (ownname => 'USERNAME', degree => dbms_stats.auto_degree);
2. Execute this statement for each table owned by the specifed user
analyze table USERNAME.TABLENAME compute statistics;
Generally speaking, is one way better than the other? Do they act differently, and how?

In Oracle's automatic stats collection, not all objects are included. Only tables with stale stats are taken for stats collection. I don't remember off the top of my head, but it's either 10% or 20%, i.e. tables where more than 10% (or 20%) of the data has changed are marked as stale, and only those stale objects will be considered for stats collection.
Do you really think each and every object/table has to be analyzed every day? How long does it take when you gather stats for all objects? -
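The staleness bookkeeping described above can be inspected directly, assuming table monitoring is enabled (a sketch; the 10% threshold is the commonly cited default):

```sql
-- Flush the in-memory modification counters to the dictionary first
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- Tables whose tracked changes exceed ~10% of NUM_ROWS are stale candidates
SELECT m.table_owner, m.table_name,
       m.inserts + m.updates + m.deletes AS changes,
       t.num_rows
FROM   dba_tab_modifications m
       JOIN dba_tables t
         ON t.owner = m.table_owner
        AND t.table_name = m.table_name
WHERE  t.num_rows > 0
AND    (m.inserts + m.updates + m.deletes) > 0.1 * t.num_rows;
```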
I need to collect statistics for a few tables in a schema. I am doing the following:
First I create a table to save the statistics:
begin
dbms_stats.create_stat_table(ownname => 'TEST',
stattab => 'TEST_ORD');
end;
and then I gather the table stats for a particular table as follows:
exec dbms_stats.gather_table_stats(
ownname => 'BNS',
estimate_percent => 10,
statown =>'TEST',
tabname =>'ORDERS',
stattab =>'TEST_ORD',
statid =>'CR');
My question is: how can I do this for multiple tables? The reason is that I have to create a job to run this script automatically every day.
Can I put this into a procedure? Or any other ideas?
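Yes, a procedure works; one way to script this for several tables is a simple cursor loop. A sketch under assumptions: the wrapper name and the second table (ORDER_LINES) are hypothetical, the other names come from the post:

```sql
-- Hypothetical wrapper: gather stats for a fixed list of BNS tables,
-- saving them into the TEST.TEST_ORD stat table as in the post.
CREATE OR REPLACE PROCEDURE gather_bns_stats IS
BEGIN
  FOR r IN (SELECT table_name
            FROM   all_tables
            WHERE  owner = 'BNS'
            AND    table_name IN ('ORDERS', 'ORDER_LINES')) -- made-up list
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'BNS',
      tabname          => r.table_name,
      estimate_percent => 10,
      statown          => 'TEST',
      stattab          => 'TEST_ORD',
      statid           => 'CR');
  END LOOP;
END;
/
```

The procedure can then be scheduled daily with DBMS_JOB or DBMS_SCHEDULER.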
I appreciate all your help!

In many environments the default job does a pretty good job of generating statistics, but I have to agree that there are environments where the default job does not produce very good statistics, which is why I mentioned the ability to lock statistics. For anyone who has not had to deal with statistics much: you can generate a workable set of statistics manually and then lock them in place, so the provided automatic statistics generation task does not re-generate that specific set of statistics.
Also, by default the 10g task collects histograms. If histograms were not in use in 9i, then removing the histograms often 'fixes' most of the query performance issues encountered upon upgrading to 10g in these cases. 11g adds the ability to set dbms_stats parameters at the table level and have those settings retained for use with future stats collections.
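The gather-then-lock approach mentioned above is done with DBMS_STATS; a minimal sketch, with SCOTT.EMP as a stand-in table:

```sql
-- Gather a workable set of stats once, then lock them so the
-- automatic maintenance task leaves them alone.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');

-- UNLOCK_TABLE_STATS reverses this when a re-gather is wanted.
```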
Yes, cron assumes a UNIX (or Linux) platform, but Windows also has a job scheduler available. I am not sure what difference RAC would make; we use cron tasks against RAC all the time, though we normally just run the tasks against the local instance.
Nicely organized script/code library.
HTH -- Mark D Powell -- -
Temp table, and gather table stats
One of my developers is generating a report from Oracle. He loads a subset of the data he needs into a temp table, then creates an index on the temp table, and then runs his report from the temp table (which is a lot smaller than the original table).
My question is: is it necessary to gather table statistics for the temp table, and the index on the temp table, before querying it?

It depends. Yesterday I had a very bad experience with stats: one of my tables had NUM_ROWS = 300 while COUNT(*) returned 7 million, on database version 9.2.0.6 (bad era, with optimizer bugs), so queries started breaking with a lot of buffer busy and latch free waits. It took a while to figure out, but I deleted the stats and everything came back under control. My point is that statistics can be good and bad; once you start collecting them, you should keep an eye on them.
Thanks. -
Gather Schema Statistics issue?
Hi
Actually, we have a custom schema in our EBS R12.0.6 instance database, but I have observed that the 'Gather Schema Statistics' program is not picking up this schema. Why? Maybe something is wrong with the database schema registration, but for the past year and a half the interface associated with this schema has been running fine. I do not know how to resolve this issue.
I can manually run 'Gather Table Statistics' program against all tables.
Regards

Hi;
To run gather stats for a custom schema, please check:
gather schema stats for EBS 11.5.10
Regarding "I can manually run 'Gather Table Statistics' against all tables" -- please see:
How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually [ID 419728.1]
Also see:
How to work Gather stat
Gather Schema Statistics
http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
Regard
Helios -
How often we need to run gather schema statistics etc.. ??
HI,
Am on 11.5.10.2
RDBMS 9.2.0.6
How often we need to run the following requests in Production...
1.Gather schema statistics
2.Gather Column statistics
3.Gather Table statistics
4.Gather All Column statisitics
Thanks

Hi;
We discussed here before about same issue. Please check below thread which could be helpful about your issue:
How often we need to run gather schema statistics
Re: Gather schema stats run
How we can collect custom schema information wiht gather statistics
gather schema stats for EBS 11.5.10
gather schema stats conc. program taking too long time
Re: gather schema stats conc. program taking too long time
How it runs
Gather Schema Statistics
http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
gather statistict collect which informations
Gather Schema Statistics...
Regard
Helios -
Hi
Is there any command to delete the gathered statistics of a table?
Suppose I have gathered table statistics as follows:
exec dbms_stats.gather_table_stats('TEST1','EMP');
now I want to delete the statistics from the above table EMP
Regards
Jewel

The package dbms_stats is fully documented in the Oracle-supplied Packages and Types Reference manual for your (unstated) version of Oracle.
It has a call to delete statistics.
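For the record, the call in question is DBMS_STATS.DELETE_TABLE_STATS; a minimal sketch matching the EMP example above:

```sql
-- Remove the statistics gathered earlier for TEST1.EMP
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'TEST1', tabname => 'EMP');
```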
As per forums etiquette, please do the obvious first and consult documentation.
Sybrand Bakker
Senior Oracle DBA -
Gather Schema Statistics improve the performance of the R12 application?
Hi All,
If we run “Gather Schema Statistics” program, it will improve the performance of the R12 application?
Platform Linux and DB version 10.2.0.4.
Thanks & Regards,
Tharun

Hi Tharun,
If we run the "Gather Schema Statistics" program, will it improve the performance of the R12 application?
Yes, it can speed things up, as it ensures you have up-to-date statistics.
Please refer notes:
Concurrent Processing - How To Gather Statistics On Oracle Applications Release 11i and/or Release 12 - Concurrent Process,Temp Tables, Manually [ID 419728.1]
How Often Should Gather Schema Statistics Program be Run? [ID 168136.1]
Why Stats Gather?
Stats gathering should be set up as a routine job and is recommended to be scheduled. Even though this program is available from the front end as a concurrent program submission, it basically performs a DB-level task and ensures that you have up-to-date optimizer statistics. Because the objects in a database can be constantly changing, statistics must be regularly updated so that they accurately describe those database objects.
For an in-depth understanding of why it should be run, please refer to the doc:
Managing Optimizer Statistics
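Outside EBS (where the concurrent program and FND_STATS should be used rather than calling DBMS_STATS directly), the equivalent routine job can be sketched with DBMS_SCHEDULER; the job name, schema, and 10 pm schedule here are made up:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',  -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(ownname => ''APP_OWNER''); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=22',  -- daily at 10 pm
    enabled         => TRUE);
END;
/
```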
Thanks &
Best Regards, -
Gather Schema Statistics - GATHER AUTO option failing to gather stats
Hi ,
We recently upgraded to 10g DB and 11.5.10 version of Oracle EBS. I want to employ GATHER AUTO option while running Gather Schema Statistics.
To test this, I created a test table with 1 million rows. Then stats were gathered for this table alone using Gather Table Stats. Next, I deleted ~12% of the rows and issued a commit. The view all_tab_statistics shows that the table has stale statistics (STALE_STATS column = YES). After that I ran Gather Schema Stats for that particular schema, but the request did not pick up the test table.
What is the criterion on which Oracle chooses which all tables to be gather statistics for under Gather Auto option? I am aware of the 10% change in data, but how is this 10% calculated? Is it only based on (insert + update + delete)?
Also, what is the difference between Gather Auto and Gather Stale ?
Any help is appreciated.
Thanks,
Jithin

Randalf,
FYI, this is what happens inside the concurrent program call; there are a few additional parameters for output/error messages:
procedure GATHER_SCHEMA_STATS(errbuf out varchar2,
retcode out varchar2,
schemaname in varchar2,
estimate_percent in number,
degree in number ,
internal_flag in varchar2,
request_id in number,
hmode in varchar2 default 'LASTRUN',
options in varchar2 default 'GATHER',
modpercent in number default 10,
invalidate in varchar2 default 'Y')
is
exist_insufficient exception;
bad_input exception;
pragma exception_init(exist_insufficient,-20000);
pragma exception_init(bad_input,-20001);
l_message varchar2(1000);
Error_counter number := 0;
Errors Error_Out;
-- num_request_id number(15);
conc_request_id number(15);
degree_parallel number(2);
begin
-- Set the package body variable.
stathist := hmode;
-- check first if degree is null
if degree is null then
degree_parallel:=def_degree;
else
degree_parallel := degree;
end if;
l_message := 'In GATHER_SCHEMA_STATS , schema_name= '|| schemaname
|| ' percent= '|| to_char(estimate_percent) || ' degree = '
|| to_char(degree_parallel) || ' internal_flag= '|| internal_flag ;
FND_FILE.put_line(FND_FILE.log,l_message);
BEGIN
FND_STATS.GATHER_SCHEMA_STATS(schemaname, estimate_percent,
degree_parallel, internal_flag, Errors, request_id,stathist,
options,modpercent,invalidate);
exception
when exist_insufficient then
errbuf := sqlerrm ;
retcode := '2';
l_message := errbuf;
FND_FILE.put_line(FND_FILE.log,l_message);
raise;
when bad_input then
errbuf := sqlerrm ;
retcode := '2';
l_message := errbuf;
FND_FILE.put_line(FND_FILE.log,l_message);
raise;
when others then
errbuf := sqlerrm ;
retcode := '2';
l_message := errbuf;
FND_FILE.put_line(FND_FILE.log,l_message);
raise;
END;
FOR i in 0..MAX_ERRORS_PRINTED LOOP
exit when Errors(i) is null;
Error_counter:=i+1;
FND_FILE.put_line(FND_FILE.log,'Error #'||Error_counter||
': '||Errors(i));
-- added to send back status to concurrent program manager bug 2625022
errbuf := sqlerrm ;
retcode := '2';
END LOOP;
end; -
Can I gather object statistics on large tables at the same time?
We have large partitioned tables to the tune of 3-4 billion rows, and they have no object statistics. Can I gather object statistics for them at the same time? For example, 4-5 large tables at the same time. I need to gather them in multiple tables because we have several of those large tables and I have to schedule the gathering carefully. So, I want to start at 4 tables. I'm wondering if gathering statistics on the above tables will be intrusive or will impact the performance while it is running.
Alex

What version are you running? If you are running 11g, stats are automatically gathered via the autotask job DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC which, depending on your maintenance window, will normally run daily at 10 pm. I'm very surprised that no stats have been gathered, as this collects stats on all new tables and on any table where more than 10% of the rows have changed.
See the following links:
http://www.oracle-base.com/articles/11g/automated-database-maintenance-task-management-11gr1.php
http://docs.oracle.com/cd/B28359_01/server.111/b28274/stats.htm -
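If it does turn out to be 11g, whether that autotask is actually enabled can be checked directly (a sketch):

```sql
-- Status of the 11g automatic optimizer stats collection task
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';
```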
How to gather table stats for tables in a different schema
Hi All,
I have a table present in one schema and I want to gather stats for the table in a different schema.
I issued: GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;
But when I tried to gather stats using
DBMS_STATS.GATHER_TABLE_STATS (OWNNAME=>'SCHEMA1', TABNAME=>'T1');
the call failed.
Is there any way we can gather table stats for tables in one schema in an another schema.
Thanks,
MK.

You need to grant ANALYZE ANY to schema2.
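A sketch of that suggestion, run as a suitably privileged user; object grants like GRANT ALL are not enough here, since gathering stats on another schema's table is a system privilege:

```sql
-- As a DBA:
GRANT ANALYZE ANY TO schema2;

-- Then, connected as SCHEMA2:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCHEMA1', tabname => 'T1');
```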
SY. -
Gather table stats taking longer for Large tables
Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
Since the row count needs to be calculated, a big table's stats collection will understandably take somewhat longer (it runs a SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
Does table size actually matter for stats collection?

Max wrote:
Version : 11.2
I've noticed that gathers stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
Since row count needs to be calculated, a big table's stats collection would be understandably slightly longer (Running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats ? Apart from row count and index info what other information is gathered for gather table stats ?
09:40:05 SQL> desc user_tables
Name Null? Type
TABLE_NAME NOT NULL VARCHAR2(30)
TABLESPACE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(30)
IOT_NAME VARCHAR2(30)
STATUS VARCHAR2(8)
PCT_FREE NUMBER
PCT_USED NUMBER
INI_TRANS NUMBER
MAX_TRANS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
LOGGING VARCHAR2(3)
BACKED_UP VARCHAR2(1)
NUM_ROWS NUMBER
BLOCKS NUMBER
EMPTY_BLOCKS NUMBER
AVG_SPACE NUMBER
CHAIN_CNT NUMBER
AVG_ROW_LEN NUMBER
AVG_SPACE_FREELIST_BLOCKS NUMBER
NUM_FREELIST_BLOCKS NUMBER
DEGREE VARCHAR2(10)
INSTANCES VARCHAR2(10)
CACHE VARCHAR2(5)
TABLE_LOCK VARCHAR2(8)
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
PARTITIONED VARCHAR2(3)
IOT_TYPE VARCHAR2(12)
TEMPORARY VARCHAR2(1)
SECONDARY VARCHAR2(1)
NESTED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
FLASH_CACHE VARCHAR2(7)
CELL_FLASH_CACHE VARCHAR2(7)
ROW_MOVEMENT VARCHAR2(8)
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
SKIP_CORRUPT VARCHAR2(8)
MONITORING VARCHAR2(3)
CLUSTER_OWNER VARCHAR2(30)
DEPENDENCIES VARCHAR2(8)
COMPRESSION VARCHAR2(8)
COMPRESS_FOR VARCHAR2(12)
DROPPED VARCHAR2(3)
READ_ONLY VARCHAR2(3)
SEGMENT_CREATED VARCHAR2(3)
RESULT_CACHE VARCHAR2(7)
09:40:10 SQL>

Does table size actually matter for stats collection?

Yes.
Handle: Max
Status Level: Newbie
Registered: Nov 10, 2008
Total Posts: 155
Total Questions: 80 (49 unresolved)
why so many unanswered questions? -
Hi All,
DB version:10.2.0.4
OS:Aix 6.1
I want to gather table stats for a table, since the query which uses this table is running slow. I also noticed that this table is getting a full table scan and was last analyzed two months ago.
I am planning to execute the below query for gathering the stats. The table has 50 million records.
COUNT(*)
51364617
I expect this is going to take a long time if I execute the query like below.
EXEC DBMS_STATS.gather_table_stats('schema_name', 'table_name');
My doubts are:
1. Can I use the estimate_percent parameter when gathering the stats?
2. What percentage should I specify for estimate_percent?
3. What difference will it make if I use estimate_percent?
Thanks in advance
Edited by: user13364377 on Mar 27, 2012 1:28 PM

If you are worried about the stats gathering process running for a long time, consider gathering the stats in parallel.
1. Can you use estimate_percent? Sure! Go for it.
2. What % to use? Why not let the database decide with auto_sample_size? Various "rules of thumb" have been thrown around over the years, usually around 10% to 20%.
3. What difference will it make? Very little, probably. Occasionally you might see where a small sample size makes a difference, but in general it is perfectly ok to estimate stats.
Perhaps something like this:
BEGIN
  dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_TABLE',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt => 'for all columns size auto',
    cascade => true, degree => 8);
END;
/
Execution Frequency of Applications 11i Gather Schema Statistics
Product: AOL
Date written: 2003-12-02
Execution Frequency of Applications 11i Gather Schema Statistics
================================================
PURPOSE
How often the Applications 11i Gather Schema Statistics program should be run.
Explanation
There is no fixed schedule for running Gather Schema Statistics. Some systems may need to run it weekly, while others can run it monthly. The right frequency depends on the volume and shape of the data and on how often it changes.
To determine the most effective frequency, run it on different schedules and monitor the results.
In general, run it:
1) After a large amount of data has been added or changed
2) After a data import
3) When performance degradation occurs
In 11i the ANALYZE command and the DBMS_STATS packages are not supported, so FND_STATS must be used instead.
Gather Schema Statistics uses FND_STATS.
Example
Reference Documents
Note 168136.1 - How Often Should Gather Schema Statistics Program be Run?

John,
you can do these things
1. gather schema statistics regularly - weekly, full
2. gather schema statistics daily - at least 10%
3. rebuild fragmented indexes regularly - every 15 days
4. coalesce the tablespaces once a month
5. purge unwanted data once a week
6. pin the DB objects into the SGA with the dbms_shared_pool package
7. find the objects which have become invalid and then revalidate them
8. purge workflow runtime data
and there are still some more that, as system administrator, you should keep a watch on...
but do the above and your job is well done
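Items 6 and 7 in the list above can be sketched as follows; the pinned package APPS.FND_GLOBAL is only an illustrative choice:

```sql
-- 7. Find objects that have become invalid...
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';

-- ...then recompile each one, e.g. for a package body:
-- ALTER PACKAGE some_owner.some_pkg COMPILE BODY;

-- 6. Pin a hot package into the shared pool ('P' = package)
EXEC DBMS_SHARED_POOL.KEEP('APPS.FND_GLOBAL', 'P');
```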
any help post here
regards
sdsreenivas -
hi!
I have some confusion about the analyze and gather table stats commands. Please answer my questions to clear up this confusion:
1 - What's the major difference between analyze and gather table stats?
2 - Which is better?
3 - Suppose I run the analyze/stats command while some transactions are being performed on the table; how will performance be affected?
Hope you will support me.
regards
Irfan Ahmad[email protected] wrote:
1 - What's the major difference between analyze and gather table stats?
2 - Which is better?
3 - Suppose I run the analyze/stats command while some transactions are being performed on the table; how will performance be affected?

1. analyze is older and probably not being extended with new functionality/support for new features.
2. overall, dbms_stats is better - it should support everything
3. Any queries running when the analyze takes place will have to use the old stats
Although dbms_stats is probably better, I find the analyze syntax LOTS easier to remember! The temptation for me to be lazy and just use analyze in development is very high. For advanced environments though dbms_stats will probably work better.
There is one other minor difference between analyze and dbms_stats. There's a column in the user/all/dba_histograms views called endpoint_actual_value that I have seen analyze populate with the data value the histogram was created for, but have not seen dbms_stats populate.
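That column difference can be checked directly after each method; a sketch, assuming a table T with a histogram on column C (both names hypothetical):

```sql
-- Compare endpoint_actual_value after ANALYZE vs. dbms_stats
SELECT endpoint_number, endpoint_value, endpoint_actual_value
FROM   user_histograms
WHERE  table_name = 'T'
AND    column_name = 'C'
ORDER  BY endpoint_number;
```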