Gather table stats after shrink?
Hello,
Do I need to (or is it useful to) run dbms_stats.gather_table_stats after a table shrink, or does Oracle update the statistics automatically?
Regards,
Arndt
Hi,
I'd suggest writing a script and running it from crontab to gather statistics automatically.
You can do the same via Enterprise Manager, scheduling it on a daily or hourly basis as you wish.
select 'Gather Schema Stats' from dual;
select 'Started: '|| to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') from dual;
-- exec dbms_stats.gather_schema_stats('<my_schema>');
select '- for all columns size auto' from dual;
exec DBMS_STATS.gather_schema_stats(ownname => '<my_schema>', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', degree => DBMS_STATS.AUTO_DEGREE, cascade => true, force => true);
select '- for all indexed columns size skewonly' from dual;
exec DBMS_STATS.gather_schema_stats(ownname => '<my_schema>', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE SKEWONLY', degree => DBMS_STATS.AUTO_DEGREE, cascade => true, force => true);
select 'Finished: '|| to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') from dual;
Good luck!
http://dba-star.blogspot.com/
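As an alternative to cron, the same collection can be scheduled inside the database with DBMS_SCHEDULER (10g and later). A minimal sketch; the job name, schema name, and schedule below are placeholders:

```sql
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'GATHER_MY_SCHEMA_STATS',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            DBMS_STATS.gather_schema_stats(
                              ownname          => 'MY_SCHEMA',  -- replace with your schema
                              estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                              method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                              cascade          => TRUE);
                          END;]',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',      -- nightly at 02:00
    enabled         => TRUE);
END;
/
```

This keeps the schedule with the database itself, so it follows the instance through restarts without any external cron dependency.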
Similar Messages
-
How to gather table stats for tables in a different schema
Hi All,
I have a table present in one schema and I want to gather stats for the table in a different schema.
I gave GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;
And when I tried to execute the command to gather stats using
DBMS_STATS.GATHER_TABLE_STATS (OWNNAME=>'SCHMEA1',TABNAME=>'T1');
The function is failing.
Is there any way we can gather table stats for tables in one schema in an another schema.
Thanks,
MK.

You need to grant ANALYZE ANY to schema2.
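The steps above can be sketched as follows. ANALYZE ANY is a system privilege, so the object-level GRANT ALL on the table is not sufficient (schema and table names here are placeholders):

```sql
-- As a privileged user:
GRANT ANALYZE ANY TO schema2;

-- Then, connected as SCHEMA2:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCHEMA1', tabname => 'T1');
END;
/
```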
SY. -
Gather table stats taking longer for Large tables
Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
Since row count needs to be calculated, a big table's stats collection would be understandably slightly longer (Running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
Does table size actually matter for stats collection?

Max wrote:
Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
Since row count needs to be calculated, a big table's stats collection would be understandably slightly longer (Running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats ? Apart from row count and index info what other information is gathered for gather table stats ?
09:40:05 SQL> desc user_tables
Name Null? Type
TABLE_NAME NOT NULL VARCHAR2(30)
TABLESPACE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(30)
IOT_NAME VARCHAR2(30)
STATUS VARCHAR2(8)
PCT_FREE NUMBER
PCT_USED NUMBER
INI_TRANS NUMBER
MAX_TRANS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
LOGGING VARCHAR2(3)
BACKED_UP VARCHAR2(1)
NUM_ROWS NUMBER
BLOCKS NUMBER
EMPTY_BLOCKS NUMBER
AVG_SPACE NUMBER
CHAIN_CNT NUMBER
AVG_ROW_LEN NUMBER
AVG_SPACE_FREELIST_BLOCKS NUMBER
NUM_FREELIST_BLOCKS NUMBER
DEGREE VARCHAR2(10)
INSTANCES VARCHAR2(10)
CACHE VARCHAR2(5)
TABLE_LOCK VARCHAR2(8)
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
PARTITIONED VARCHAR2(3)
IOT_TYPE VARCHAR2(12)
TEMPORARY VARCHAR2(1)
SECONDARY VARCHAR2(1)
NESTED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
FLASH_CACHE VARCHAR2(7)
CELL_FLASH_CACHE VARCHAR2(7)
ROW_MOVEMENT VARCHAR2(8)
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
SKIP_CORRUPT VARCHAR2(8)
MONITORING VARCHAR2(3)
CLUSTER_OWNER VARCHAR2(30)
DEPENDENCIES VARCHAR2(8)
COMPRESSION VARCHAR2(8)
COMPRESS_FOR VARCHAR2(12)
DROPPED VARCHAR2(3)
READ_ONLY VARCHAR2(3)
SEGMENT_CREATED VARCHAR2(3)
RESULT_CACHE VARCHAR2(7)
09:40:10 SQL>
Does table size actually matter for stats collection?

Yes.
Handle: Max
Status Level: Newbie
Registered: Nov 10, 2008
Total Posts: 155
Total Questions: 80 (49 unresolved)
why so many unanswered questions? -
Hi All,
DB version:10.2.0.4
OS:Aix 6.1
I want to gather table stats for a table since the query which uses this table is running slow. Also I noticed that this table is accessed via a full table scan, and it was last analyzed 2 months back.
I am planning to execute the below query for gathering the stats. The table has 50 million records.
COUNT(*)
51364617
I expect this is going to take a long time if I execute the query like below.
EXEC DBMS_STATS.gather_table_stats('schema_name', 'table_name');
My doubts specified below.
1. can i use the estimate_percent parameter also for gathering the stats ?
2. how much percentage should I specify for the parameter estimate_percent?
3. what difference will it make if I use the estimate_percent parameter?
Thanks in advance
Edited by: user13364377 on Mar 27, 2012 1:28 PM

If you are worried about the stats gathering process running for a long time, consider gathering stats in parallel.
1. Can you use estimate_percent? Sure! Go for it.
2. What % to use? Why not let the database decide with auto_sample_size? Various "rules of thumb" have been thrown around over the years, usually around 10% to 20%.
3. What difference will it make? Very little, probably. Occasionally you might see where a small sample size makes a difference, but in general it is perfectly ok to estimate stats.
Perhaps something like this:
BEGIN
dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_TABLE',
estimate_percent => dbms_stats.auto_sample_size, method_opt=>'for all columns size auto',
cascade=>true,degree=>8);
END; -
hi!
I have some confusions about analyze and gather table stats command. Please answers my questions to remove these confusions:
1 - What's the major difference between analyze and gather table stats?
2 - Which is better?
3 - Suppose I am running the analyze/gather stats command while transactions are being performed on the table; what will be the effect on performance?
Hopes you will support me.
regards
Irfan Ahmad

[email protected] wrote:
1 - What’s major difference between analyze and gather table stats ?
2 - Which is better?
3 - Suppose I am running the analyze/gather stats command while transactions are being performed on the table; what will be the effect on performance?

1. analyze is older and is probably not being extended with new functionality or support for new features
2. overall, dbms_stats is better - it should support everything
3. Any queries running when the analyze takes place will have to use the old stats
Although dbms_stats is probably better, I find the analyze syntax LOTS easier to remember! The temptation for me to be lazy and just use analyze in development is very high. For advanced environments though dbms_stats will probably work better.
There is one other minor difference between analyze and dbms_stats. There's a column in the user/all/dba_histograms view called endpoint_actual_value that I have seen analyze populate with the data value the histogram was created for but have not seen dbms_stats populate. -
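To see the column mentioned above for yourself, a quick look at the histogram view (table name is a placeholder); per the observation above, ENDPOINT_ACTUAL_VALUE is often populated after ANALYZE but left NULL by DBMS_STATS:

```sql
SELECT column_name,
       endpoint_number,
       endpoint_value,
       endpoint_actual_value   -- seen populated by ANALYZE, usually NULL after DBMS_STATS
FROM   user_histograms
WHERE  table_name = 'MY_TABLE'
ORDER  BY column_name, endpoint_number;
```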
Gather stats after shrink command
Hello.
Running Oracle 11.2 and was preparing to run shrink command on a major table to release 5G of wasted space on a 15G segment.
I have already done this in our test database and was preparing to do this in production, when one web site I went to said we should run fresh stats after the shrink operation.
Does this make sense, and does it seem to be necessary?
It does seem to make sense that the values for number of blocks, number of empty blocks would be different after the shrink operation.
But I am surprised I did not see this recommendation on any other site except the one site.

Well, sure... if we want to consider this a DML or DDL operation, but actually we are not changing or defining a new data structure, and we are not manipulating the data (per se).
But, I'm in agreement that it makes sense to gather fresh stats just based on the difference of blocks and empty blocks which we can assume the optimizer considers when choosing an execution plan. -
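The whole sequence being discussed can be sketched as follows (table name is a placeholder; row movement must be enabled before a shrink):

```sql
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE;   -- compacts rows and lowers the high-water mark

-- Refresh the block-level statistics (blocks, empty_blocks, avg_space) afterwards:
BEGIN
  DBMS_STATS.gather_table_stats(ownname => user,
                                tabname => 'MY_TABLE',
                                cascade => TRUE);
END;
/
```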
Temp table, and gather table stats
One of my developers is generating a report from Oracle. He loads a subset of the data he needs into a temp table, then creates an index on the temp table, and then runs his report from the temp table (which is a lot smaller than the original table).
My question is: is it necessary to gather table statistics for the temp table, and the index on the temp table, before querying it?

It depends. Yesterday I had a very bad experience with stats: one of my tables had NUM_ROWS = 300 while COUNT(*) returned 7 million, and the database version is 9.2.0.6 (a bad era for optimizer bugs), so queries started breaking with lots of buffer busy waits and latch free waits. It took a while to figure out, but I deleted the stats and everything came under control. What I mean to say is that statistics can be good and bad; once you start collecting them you should keep an eye on them.
Thanks. -
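The fix described above, deleting the stale statistics so the optimizer stops trusting them, looks roughly like this (table name is a placeholder):

```sql
BEGIN
  DBMS_STATS.delete_table_stats(ownname         => user,
                                tabname         => 'MY_TABLE',
                                cascade_indexes => TRUE);  -- drop index stats too
END;
/
```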
Question on gather table stats.
Guys,
I am trying to gather stats on a table and it is taking a long time to finish up ( it had been running for about 28 hours ), after dropping some partitions and creating couple of partitions by splitting the part lead partition I executed the below command for gathering the stats on the table along with the newly created partitions. Our database in on Oracle 10g and OS is windows 2003 server. Can someone please let me know if I am missing any parameters.
exec dbms_stats.gather_table_stats (-
ownname => 'TRACK', -
tabname => 'TRK_DTL_STAGE', -
estimate_percent => dbms_stats.auto_sample_size, -
cascade => true, -
degree => 5);
Thanks for all your help.

user12055522 wrote:
Thanks for the reply. I am not seeing any waits, and I also see activity against this session when I look in the long ops. Would I be better off killing the current session and just grabbing the stats for the partitions which I created from the split?

How can it be possible that you do not see any waits? It could be that you did not care to check.
Killing the existing session would be your decision, and I do not see any problem in doing that.
Regards
Anurag -
Gather table stats takes time for big table
Table has got millions of record and it is partitioned. when i analyze the table using the following syntax it takes more than 5 hrs and it has got one index.
I tried with auto sample size and also by changing the estimate percentage to values like 20, 50, 70, etc., but the time is almost the same.
exec dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'TEST',granularity =>'ALL',ESTIMATE_PERCENT=>100,cascade=>TRUE);
What should I do to reduce the analyze time for big tables? Can anyone help me?

Hello,
The behaviour of the ESTIMATE_PERCENT may change from one Release to another.
In some Release when you specify a "too high" (>25%,...) ESTIMATE_PERCENT in fact you collect the Statistics over 100% of the rows, as in COMPUTE mode:
Using DBMS_STATS.GATHER_TABLE_STATS With ESTIMATE_PERCENT Parameter Samples All Rows [ID 223065.1]

For later releases, *10g* or *11g*, you have the possibility to use the following value:
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE

In fact, you may use it even in *9.2*, but in that release it is recommended to use a specific estimate value.
Moreover, starting with *10.1* it is possible to schedule the statistics collection by using DBMS_SCHEDULER and to specify a window so that the job doesn't run during production hours.
So, the answer may depends on the Oracle Release and also on the Application (SAP, Peoplesoft, ...).
Best regards,
Jean-Valentin -
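Since the table in question is partitioned, another common way to cut the time is to gather only the changed partition instead of the whole table. A sketch using the names from the command above (the partition name is a placeholder):

```sql
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname          => 'SYSADM',
    tabname          => 'TEST',
    partname         => 'P_LATEST',            -- hypothetical partition name
    granularity      => 'PARTITION',           -- skip global and other partitions
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/
```

GRANULARITY => 'ALL' with ESTIMATE_PERCENT => 100, as in the original command, forces a full compute of global, partition, and subpartition stats, which largely explains the 5-hour runtime.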
Question on Exporting Table Stats to another database
I have a question exporting Table Stats from one schema in database A to another schema in another database B.
Currently running Oracle 9.0.2.6 for unix in both prod and dev.
Currently table stats are gathered using the ANALYZE TABLE command. We currently don't use the DBMS_STATS package to gather table statistics.
Question:
If I execute DBMS_STATS.EXPORT_TABLE_STATS in database A can I import them to database B if I'm only using the ANALYZE TABLE to gather table stats? Do I need to execute the DBMS_STATS.GATHER_TABLE_STATS package in database A prior to excuting DBMS_STATS.EXPORT_TABLE_STATS ?
The overall goal is to take table stats from Production in its current state and import them into a Development environment to be used for testing data / processes.
Yes we will be upgrading to Oracle 10 / 11g in near future.
Yes we will be changing our method of gathering table stats by using the DBMS_STATS package.
Thanks,
Russ D

Hi,

If I execute DBMS_STATS.EXPORT_TABLE_STATS in database A, can I import them to database B if I'm only using ANALYZE TABLE to gather table stats?

You need to use the DBMS_STATS package for the gather and export process if you want to migrate the stats to another database.

Do I need to execute the DBMS_STATS.GATHER_TABLE_STATS package in database A prior to executing DBMS_STATS.EXPORT_TABLE_STATS?

Yes, you need to execute DBMS_STATS.GATHER_TABLE_STATS first.
Good luck.
Regards. -
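The export/import flow discussed above can be sketched as follows (schema, table, and stat-table names are placeholders):

```sql
-- In database A: gather, create a staging table, and export into it.
BEGIN
  DBMS_STATS.gather_table_stats(ownname => 'APP', tabname => 'T1');
  DBMS_STATS.create_stat_table(ownname => 'APP', stattab => 'STATS_XFER');
  DBMS_STATS.export_table_stats(ownname => 'APP', tabname => 'T1',
                                stattab => 'STATS_XFER');
END;
/

-- Move table STATS_XFER to database B (exp/imp, Data Pump, or a DB link), then:
BEGIN
  DBMS_STATS.import_table_stats(ownname => 'APP', tabname => 'T1',
                                stattab => 'STATS_XFER');
END;
/
```

This is how production stats are typically copied into a development environment so the optimizer there behaves as it would in production.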
Why Oracle not using the correct indexes after running table stats
I created an index on the table and ran the a sql statement. I found that via the explain plan that index is being used and is cheaper which I wanted.
Later I ran stats on all tables and found, again via explain plan, that the same SQL is now using a different index and a more costly plan. I don't know what is going on. Why is this happening? Any suggestions.

Thx

I just wanted to know the cost using the index.
To gather histograms use (method_opt is the one that causes the package to collect histograms)
DBMS_STATS.GATHER_SCHEMA_STATS (
ownname => 'SCHEMA',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
block_sample => TRUE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO',
degree => 4,
granularity => 'ALL',
cascade => TRUE,
options => 'GATHER'
); -
Hi All,
We locked the stats on a table (TABLE1) with the intention of preventing the nightly auto gather stats job from picking it up for stats gathering.
Understand that the nightly gather stats job kicks off at 10pm daily (on weekdays).
The manual stats gathering below will run for about 4 hours.
If we trigger it at 8.30pm, will the nightly gather stats job (10pm) pick the table up for stats gathering?
The first SQL unlocks the stats, and they are locked again only after completion, which is about 12.30am.
exec dbms_stats.unlock_table_stats('AC', 'TABLE1');
exec dbms_stats.gather_table_stats(ownname=>'ac',tabname=>'TABLE1',estimate_percent => 50,cascade=>true, method_opt => 'for all columns size 254');
exec dbms_stats.lock_table_stats('AC', 'TABLE1');
thanks

Didn't see your version mentioned, but there may be no need to unlock the stats if you are updating them manually. Just add the force parameter to your call to gather_table_stats if your version has that option.
From the doc..
force
Gather statistics of table even if it is locked -
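With force, the unlock/gather/lock sequence collapses to a single call that leaves the stats locked throughout, closing the window in which the nightly job could sneak in. A sketch, assuming your release supports the force parameter:

```sql
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname          => 'AC',
    tabname          => 'TABLE1',
    estimate_percent => 50,
    method_opt       => 'for all columns size 254',
    cascade          => TRUE,
    force            => TRUE);   -- gather even though the stats are locked
END;
/
```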
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to a difference, between 10g and the database that was upgraded from 10g to 11g, in the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call. Currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA;

When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately, granting select privilege to a role on the table which was just created:
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also, included the offending sys generate SQL which is not issued when the same test is run on a 10g environment that has been set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM

So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username'); package to remove the 194558 orphaned job entries from the job$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now wasted ~67MB of space I've opted to create a new index on the JOB$ table to sidestep the full table scan.
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;

The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Creating a better update table statement
Hello,
I have the following update table statement that I would like to make more efficient. This thing is taking forever. A little background: the source table/views are not indexed, and the larger of the two only has 150k records. Any ideas on making it more efficient would be appreciated.
Thanks.
Ryan
Script:
DECLARE
V_EID_CIV_ID SBI_EID_W_VALID_ANUM_V.SUBJECT_KEY%TYPE;
V_EID_DOE DATE;
V_EID_POE SBI_EID_W_VALID_ANUM_V.POINT_OF_ENTRY%TYPE;
V_EID_APPR_DATE DATE;
V_CASE_CIV_ID SBI_DACS_CASE_RECORDS.CASE_EID_CIV_ID%TYPE;
V_CASE_DOE DATE;
V_CASE_POE SBI_DACS_CASE_RECORDS.CASE_CODE_ENTRY_PLACE%TYPE;
V_CASE_APPR_DATE DATE;
V_CASE_DEPART_DATE DATE;
V_SBI_UPDATE_STEP SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP%TYPE;
V_SBI_CIV_ID SBI_DACS_CASE_RECORDS.SBI_CIV_ID%TYPE;
CURSOR VALID_CIV_ID_FROM_EID IS
SELECT EID.SUBJECT_KEY,
TO_DATE(EID.PROCESS_ENTRY_DATE),
EID.POINT_OF_ENTRY,
TO_DATE(EID.APPREHENSION_DATE),
DACS.CASE_EID_CIV_ID,
TO_DATE(DACS.CASE_DATE_OF_ENTRY,'YYYYMMDD'),
DACS.CASE_CODE_ENTRY_PLACE,
TO_DATE(DACS.CASE_DATE_APPR,'YYYYMMDD'),
TO_DATE(DACS.CASE_DATE_DEPARTED,'YYYYMMDD'),
DACS.SBI_UPDATE_STEP,
DACS.SBI_CIV_ID
FROM SBI_EID_W_VALID_ANUM_V EID,
SBI_DACS_CASE_RECORDS DACS
WHERE DACS.CASE_NBR_A = EID.ALIEN_FILE_NUMBER;
BEGIN
OPEN VALID_CIV_ID_FROM_EID;
SAVEPOINT A;
LOOP
FETCH VALID_CIV_ID_FROM_EID INTO V_EID_CIV_ID, V_EID_DOE, V_EID_POE,V_EID_APPR_DATE,V_CASE_CIV_ID, V_CASE_DOE,V_CASE_POE,V_CASE_APPR_DATE,V_CASE_DEPART_DATE,V_SBI_UPDATE_STEP,V_SBI_CIV_ID;
DBMS_OUTPUT.PUT_LINE('BEFORE');
EXIT WHEN VALID_CIV_ID_FROM_EID%FOUND;
DBMS_OUTPUT.PUT_LINE('AFTER');
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_CASE_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 1
WHERE V_CASE_CIV_ID IS NOT NULL
AND V_CASE_CIV_ID <> 0;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 2
WHERE V_SBI_CIV_ID IS NULL AND V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE
AND V_EID_APPR_DATE = V_CASE_DEPART_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 3
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 4
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4 ;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 5
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE <> V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 6
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 7
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 8
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 9
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 10
WHERE V_SBI_UPDATE_STEP = 0
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4;
END LOOP;
CLOSE VALID_CIV_ID_FROM_EID;
COMMIT;
END;
-----That's it. Thanks for your help.
Ryan

Please use [ code] or [ pre] tags to format code before posting:
Peter D. -
Different Ways To Gather Table Statistics
Can anyone tell me the difference(s), if any, between these two ways to refresh table stats:
1. Execute this procedure which includes everything owned by the specified user
EXEC DBMS_STATS.gather_schema_stats (ownname => 'USERNAME', degree => dbms_stats.auto_degree);
2. Execute this statement for each table owned by the specified user
analyze table USERNAME.TABLENAME compute statistics;
Generally speaking, is one way better than the other? Do they act differently, and how?In Oracle's automatic stats collection, not all objects are included.
Only those tables which have stale stats are taken for stats collection. I don't remember off the top of my head, but it's either 10% or 20%, i.e. tables where more than 10% (or 20%) of the data has changed are marked as stale, and only those stale objects will be considered for stats collection.
Do you really think each and every object/table has to be analyzed every day? How long does it take when you gather stats for all objects?
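As a sketch of how to check this yourself (MY_TABLE is a placeholder; the GET_PREFS/SET_TABLE_PREFS calls need 11g or later, where the threshold is the STALE_PERCENT preference, default 10%):

```sql
-- Flush in-memory DML monitoring counters so staleness info is current
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- Tables the automatic job would treat as stale (10g+)
SELECT table_name, stale_stats
  FROM user_tab_statistics
 WHERE stale_stats = 'YES';

-- 11g+: read the staleness threshold for a table ...
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', user, 'MY_TABLE') FROM dual;

-- ... or change it for one table, e.g. to 5%
EXEC DBMS_STATS.SET_TABLE_PREFS(user, 'MY_TABLE', 'STALE_PERCENT', '5');
```

Note also that ANALYZE ... COMPUTE STATISTICS is deprecated for gathering optimizer statistics; Oracle recommends DBMS_STATS instead.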