Gathering daily statistics of a table using DBMS_STATS..!!
Hi all,
Can somebody please help me with how to collect daily statistics on a table?
Executing DBMS_STATS manually is fine, but when I assign it as a job using OEM the job never starts. It just shows the status as "job submitted" but never executes.
Is there any other way to execute DBMS_STATS daily?
In 10g, it is executed daily at 22:00 by default.
Otherwise, you can execute something like
begin
  dbms_job.isubmit(job => 1,
    what => 'BEGIN
               DBMS_STATS.GATHER_DATABASE_STATS (
                 estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                 block_sample     => TRUE,
                 method_opt       => ''FOR ALL COLUMNS SIZE AUTO'',
                 degree           => 6,
                 granularity      => ''ALL'',
                 cascade          => TRUE,
                 options          => ''GATHER STALE'');
             END;',
    next_date => sysdate + 2,
    interval  => 'TRUNC(SysDate + 1) + 22/24',
    no_parse  => TRUE);
end;
/
commit;
Make sure you commit.
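On 10g and later, an alternative to DBMS_JOB is DBMS_SCHEDULER, which avoids the quoting of the interval expression and needs no explicit commit. A minimal sketch; the job name and the 22:00 schedule are illustrative:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_GATHER_STATS',   -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          DBMS_STATS.GATHER_DATABASE_STATS(
                            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                            options          => ''GATHER STALE'',
                            cascade          => TRUE);
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=22',  -- every day at 22:00
    enabled         => TRUE);
END;
/
```

Unlike DBMS_JOB, CREATE_JOB commits implicitly, so the job is visible to other sessions immediately.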
Similar Messages
-
Hi,
I'm trying to get statistics from a variety of related tables
where the statistical counts are based on comparing the value on
one table with an equivalent value on a second table (and I want to
view the results grouped by time period, day, week, month, etc);
e.g.
Consider two tables - Customer & Customer_Action which
are related as a one-to-many (one customer can take many actions):
Customer: id, create_date, customer_name, etc.
Customer_Action: id, customer_id, create_date, action, etc.
If the customer took their first action at the same time as
they registered on the system (i.e. if the customer record was
created at the same time as the very first action record)
YES_count is incremented by 1 else
NO_count is incremented by 1
So running the query against the database the report would
look something like:
Month     Customers  Yes  No
January   8          5    3
February  14         9    5
.... etc.
I've tried this around a number of different ways but always
seem to end up with double counting in one way or another: see this
sample data
Customer Create_Date Action_Date
01 05/07/2008 12:36 05/07/2008 12:36
01 05/07/2008 12:36 28/08/2008 22:22
02 10/07/2008 12:04 10/07/2008 12:04
03 10/07/2008 12:12 10/07/2008 12:12
This should give me
Month  Count  Yes  No
July   3      3    0
...... but I always get counts of 4, 2 and 2!
My current statement is .....
SELECT count( m2u_Customer.id ) AS Customer,
min( m2u_Customer_Action.action_date ) AS Action_Date,
DATE_FORMAT( m2u_Customer.create_date, '%m-%M' ) AS Month,
sum(case when m2u_Customer.create_date =
m2u_Customer_Action.create_date then 1 else 0 end) as Yes,
sum(case when m2u_Customer.create_date !=
m2u_Customer_Action.create_date then 1 else 0 end) as No
FROM m2u_Customer
LEFT JOIN m2u_Customer_Action ON ( m2u_Customer.id =
m2u_Customer_Action.customer_id )
WHERE m2u_Customer.create_date > '2008-07-02'
AND m2u_Customer.create_date < '2008-08-01'
GROUP BY DATE_FORMAT( m2u_Customer.create_date, '%m-%M' )
Can this be done?
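One way the double counting could be avoided, sketched here under the assumption of the table and column names above: collapse each customer to their first action in a derived table, so every customer contributes exactly one row to the aggregate.

```sql
SELECT DATE_FORMAT(c.create_date, '%m-%M') AS Month,
       COUNT(*) AS Customers,
       -- first action at the same moment as registration => Yes
       SUM(CASE WHEN c.create_date =  fa.first_action THEN 1 ELSE 0 END) AS Yes,
       SUM(CASE WHEN c.create_date <> fa.first_action THEN 1 ELSE 0 END) AS No
FROM m2u_Customer c
JOIN (SELECT customer_id, MIN(create_date) AS first_action
      FROM m2u_Customer_Action
      GROUP BY customer_id) fa ON fa.customer_id = c.id
WHERE c.create_date >= '2008-07-01'
  AND c.create_date <  '2008-08-01'
GROUP BY DATE_FORMAT(c.create_date, '%m-%M');
```

Because the derived table has one row per customer, a customer with several actions (like customer 01 above) is no longer counted once per action.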
Regards.
Patrick
-
Regarding gathering statistics using dbms_stats
Hi,
BANNER
Oracle8i Enterprise Edition Release 8.1.7.0.0 - 64bit Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for Solaris: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
I would like to generate statistics for tables using DBMS_STATS. I have a lot of users in the database, but I only want to generate statistics for the "asap" user's objects (tables and indexes). I am using the following syntax, connecting to the database as the asap user:
sql>connect asap/asap;
sql>exec DBMS_STATS.GATHER_TABLE_STATS(NULL,'ITEM_T',CASCADE=>TRUE,METHOD_OPT=>'FOR ALL COLUMNS SIZE 254',
ESTIMATE_PERCENT=>10,DEGREE=>4);
Is this statement correct? Could anybody share valuable inputs, since this is my first time generating statistics in this database (I am a new joiner)?
Regards
Prakash
Hi Prakash,
if I want to gather statistics for the asap schema, how do I need to connect to the database: as the asap user, as the sys user, or are both cases OK?
In general, I would carefully test DBMS_STATS and the resulting new plans BEFORE doing anything in production! Lots may change!
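For reference, a schema-level gather might look like the following; the parameter values are illustrative, not a recommendation, and should be checked against the 8.1.7 DBMS_STATS signature. Run it as the schema owner, or as a privileged user passing OWNNAME explicitly:

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'ASAP',
    estimate_percent => 10,                          -- sample 10%
    method_opt       => 'FOR ALL COLUMNS SIZE 254',  -- histograms on all columns
    degree           => 4,                           -- parallel degree
    cascade          => TRUE);                       -- include indexes
END;
/
```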
Oracle recommends re-analyzing only when you have had enough data changes that it will affect your SQL execution plans.
Is there a reason for your desire to re-analyze? Have you identified sub-optimal SQL as a result of bad stats?
The adage "If it ain't broke, don't fix it" applies here.
Andrew Holdsworth recommends taking a "deep" sample (perhaps more than 10% if you have the time), including histograms, and keeping the stats as-is, only re-analyzing when required.
Without SKEWONLY, you will need to add your histograms manually, by identifying skewed column distributions and correlating them to historical SQL. I have some notes on manual histogram detection here:
http://articles.techrepublic.com.com/5100-10878_11-5091017.html
We have the term "Monday Morning Mayhem" to describe databases where the DBA blindly re-analyzes stats on Sunday, only to find that the execution plans change dramatically.
Just make sure to test it thoroughly in both TEST and QA instances before changing production! -
Import Table Statistics to another table
Hi,
Just like to know if I can use dbms_stats.import_table_stats to import table statistics to another table?
Scenario:
I exported the table statistics of the table (T1) using the command below.
exec dbms_stats.export_table_stats('<user>','T1',NULL,'<stat table>');
PL/SQL procedure successfully completed.
And then, I have another table named T2, T1 and T2 are identical tables. T2 does not have statistics (I intentionally did not run gather statistics). I am wondering
if I can import table statistics from T1 to T2 using dbms_stats package?.
From what I understand, statistics can be imported back to the same table (T1), but not to another table, using the dbms_stats package. If I am wrong, anyone can correct me.
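One approach sometimes used, sketched here for reference: export T1's statistics and retarget the exported rows before importing them for T2. In a DBMS_STATS stat table the C1 column holds the object name. This relies on the stat table's internal layout, so test carefully before trusting it:

```sql
-- export T1's stats into the user stat table
EXEC DBMS_STATS.EXPORT_TABLE_STATS(USER, 'T1', NULL, 'STAT_TAB');

-- point the exported rows at T2 (C1 holds the table name in a stat table)
UPDATE stat_tab SET c1 = 'T2' WHERE c1 = 'T1';
COMMIT;

-- import the stats for T2
EXEC DBMS_STATS.IMPORT_TABLE_STATS(USER, 'T2', NULL, 'STAT_TAB');
```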
Thanks
hi
just try ;-) you probably lose nothing,
check the LAST_ANALYZED value for that table in USER_TABLES afterwards;
if something is wrong, run regular stats -
Statistics on a table to be collected daily.
Hi,
I have a scenario where a table gets truncated every day at 10 PM before being populated with a billion or two records. At 12 AM, the table will be used by the warehousing system. How do we gather statistics for this table on a daily basis to ensure that Oracle accesses this table in the best possible way?
I know that a job something like this...
DECLARE
JOB USER_JOBS.JOB%TYPE;
BEGIN
DBMS_JOB.SUBMIT (JOB, 'BEGIN DBMS_STATS.GATHER_TABLE_STATS ('TABLE_NAME')', to_date ('24/11/2009 23:00', 'dd/mm/yyyy hh24:mi'), sysdate+1);
END;
will get the statistics for me.
Is there any other approach??
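For comparison, a version of that submission with the mandatory OWNNAME/TABNAME arguments supplied and the nested quotes escaped might look like this; the schema name is a placeholder:

```sql
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => l_job,
    -- note the doubled quotes inside the WHAT string
    what      => 'BEGIN DBMS_STATS.GATHER_TABLE_STATS(''SCOTT'', ''TABLE_NAME''); END;',
    next_date => TO_DATE('24/11/2009 23:00', 'dd/mm/yyyy hh24:mi'),
    interval  => 'SYSDATE + 1');
  COMMIT;  -- DBMS_JOB requires an explicit commit
END;
/
```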
Thanks,
Aswin.
PROCEDURE GATHER_TABLE_STATS
Argument Name Type In/Out Default?
OWNNAME VARCHAR2 IN
TABNAME VARCHAR2 IN
PARTNAME VARCHAR2 IN DEFAULT
ESTIMATE_PERCENT NUMBER IN DEFAULT
BLOCK_SAMPLE BOOLEAN IN DEFAULT
METHOD_OPT VARCHAR2 IN DEFAULT
DEGREE NUMBER IN DEFAULT
GRANULARITY VARCHAR2 IN DEFAULT
CASCADE BOOLEAN IN DEFAULT
STATTAB VARCHAR2 IN DEFAULT
STATID VARCHAR2 IN DEFAULT
STATOWN VARCHAR2 IN DEFAULT
NO_INVALIDATE BOOLEAN IN DEFAULT
STATTYPE VARCHAR2 IN DEFAULT
FORCE BOOLEAN IN DEFAULT
Your post contains an error.
Provide all mandatory arguments. -
Managing statistics for object collections used as table types in SQL
Hi All,
Is there a way to manage statistics for collections used as table types in SQL?
Below is my test case
Oracle Version :
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL>
Original Query:
SELECT
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
(
SELECT *
FROM
TABLE(
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
)
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:02.90
Execution Plan
Plan hash value: 3970072279
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 1 | HASH JOIN | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 2 | HASH JOIN | | 8168 | 287K| 695 (3)| 00:00:09 |
| 3 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | TG_FILE | 565K| 12M| 659 (2)| 00:00:08 |
| 7 | TABLE ACCESS FULL | TG_FILE_DATA | 852K| 128M| 3863 (1)| 00:00:47 |
Predicate Information (identified by operation id):
1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
Statistics
7 recursive calls
0 db block gets
16783 consistent gets
16779 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Indexes are present in both the tables ( TG_FILE, TG_FILE_DATA ) on column FILE_ID.
select
index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
from
all_indexes
where table_name in ('TG_FILE','TG_FILE_DATA');
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
TG_FILE_PK 2 2160 552842 21401 552842 285428
TG_FILE_DATA_PK 2 3544 852297 61437 852297 852297
Ideally the view should have used a NESTED LOOP join to make use of the indexes, since the number of rows coming from the object collection is only 2.
But it takes the default of 8168, leading to a HASH join between the tables and full table accesses.
So my question is: is there any way to change the statistics while using collections in SQL?
I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
Modified query with hints :
SELECT
/*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
(
SELECT *
FROM
TABLE(
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
)
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1670128954
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 29978 (1)| 00:06:00 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 194 | 29978 (1)| 00:06:00 |
| 3 | NESTED LOOPS | | 8168 | 1363K| 16379 (1)| 00:03:17 |
| 4 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | TG_FILE_DATA | 1 | 158 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | TG_FILE_DATA_PK | 1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | TG_FILE_PK | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID | TG_FILE | 1 | 23 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
filter("TF"."FILE_ID"="TFD"."FILE_ID")
Statistics
0 recursive calls
0 db block gets
16 consistent gets
8 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Thanks,
B
Thanks Tubby,
While searching I had found out that we can use the CARDINALITY hint to set statistics for a TABLE function.
But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post.
http://www.oracle-developer.net/display.php?id=427
The article mentions the following ways to set statistics:
1) CARDINALITY (undocumented)
2) OPT_ESTIMATE (undocumented)
3) DYNAMIC_SAMPLING (documented)
4) Extensible Optimiser
I tried it out with the different hints and it works as expected,
i.e. CARDINALITY and OPT_ESTIMATE take the value I set,
but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
With CARDINALITY hint
SELECT
/*+ cardinality( e, 5) */*
FROM
TABLE(
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 10 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With OPT_ESTIMATE hint
SELECT
/*+ opt_estimate(table, e, scale_rows=0.0006) */*
FROM
TABLE(
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Execution Plan
Plan hash value: 4043204977
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 485 | 29 (0)| 00:00:01 |
| 1 | VIEW | | 5 | 485 | 29 (0)| 00:00:01 |
| 2 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With DYNAMIC_SAMPLING hint
SELECT
/*+ dynamic_sampling( e, 5) */*
FROM
TABLE(
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 | 11 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 2 | 4 | 11 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement (level=2)
I will be testing the last option, "Extensible Optimizer", and will put my findings here.
I hope Oracle, in future releases, improves statistics gathering for collections used in DML, rather than just using the default based on block size.
By the way, are you aware why it uses the default based on block size? Is it because that is the smallest granular unit Oracle provides?
Regards,
B -
How to check the Statistics generated for a table through DBMS_STATS.
Hi,
How to check the statistics generated for a Table through DBMS_STATS.GATHER_TABLE_STATS procedure ?
Please let me know.
Thanks !
Regards,
Rajasekhar
Rajasekhar wrote:
Hi,
How to check the statistics generated for a Table through DBMS_STATS.GATHER_TABLE_STATS procedure ?
Please let me know.
Thanks !
Regards,
Rajasekhar
Query ALL_TABLES (e.g. NUM_ROWS, BLOCKS, AVG_ROW_LEN, LAST_ANALYZED), and ALL_TAB_COL_STATISTICS for column-level statistics. -
Where are stats that are gathered using dbms_stats stored?
where are stats that are gathered using dbms_stats stored?
http://docs.oracle.com/cd/E11882_01/server.112/e16638/stats.htm#PFGRF94765
-
Disable Statistics for specific Tables
Is it possible to disable statistics for specific tables???
If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to have a table set up that lists what tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS.
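A minimal sketch of that approach; the exclusion table STATS_EXCLUDE is hypothetical:

```sql
-- hypothetical exclusion list
CREATE TABLE stats_exclude (table_name VARCHAR2(30));

-- gather stats on every table in the schema except the excluded ones
BEGIN
  FOR t IN (SELECT table_name
            FROM   user_tables
            WHERE  table_name NOT IN (SELECT table_name FROM stats_exclude))
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => t.table_name);
  END LOOP;
END;
/
```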
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
How to disable automatic statistics collections on tables
Hi
I am using Oracle 10g and we have a few tables which are frequently truncated and have new rows added to them. Oracle automatically analyzes the tables, which collects statistics, but at the wrong time (when the table is empty). This makes my query do a full table scan rather than using indexes, since the statistics were collected when the table was empty. Could anyone please let me know how to disable the automatic statistics collection feature of Oracle?
Cheers
Anantha PV
Hi,
First of all I think it's important that you understand why Oracle collects statistics on these tables: Because it considers the statistics of the object to be missing or stale. So if you just disable the statistics gathering on these tables then you won't have statistics at all or outdated statistics.
So as said by the previous posts you should gather the statistics manually yourself anyway. If you do so right after loading the data into the truncated table, you don't need to disable the automatic statistics gathering as it only processes objects that are stale or don't have statistics at all.
If you still think that you need to disable it there are several ways to accomplish it:
As already mentioned, for particular objects you can lock the statistics using DBMS_STATS.LOCK_TABLE_STATS, or for a complete schema using DBMS_STATS.LOCK_SCHEMA_STATS. These statistics then won't be touched by the automatic gathering job. You can still gather statistics using the FORCE=>TRUE option of the GATHER_*_STATS procedures.
If you want to change the automatic gathering job that it only gathers statistics on objects owned by Oracle (data dictionary, AWR etc.), then you can do so by calling DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET', 'ORACLE'). This is the recommended method.
If you disable the schedule job as mentioned in the documentation by calling DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB') then no statistics at all will be gathered automatically, causing your data dictionary statistics to be become stale over time, which could lead to suboptimal performance of queries on the data dictionary.
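For reference, the three options above might be issued as follows; schema and table names are placeholders:

```sql
-- 1) lock statistics for one table, or for a whole schema
EXEC DBMS_STATS.LOCK_TABLE_STATS('SCOTT', 'MY_TABLE');
EXEC DBMS_STATS.LOCK_SCHEMA_STATS('SCOTT');

-- 2) restrict the automatic job to Oracle-owned objects (recommended)
EXEC DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET', 'ORACLE');

-- 3) disable the automatic gathering job entirely (not recommended)
EXEC DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
```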
All this applies to Oracle 10.2, some of the features mentioned might not be available in Oracle 10.1 (as you haven't mentioned your version of 10g).
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
What are the database resources when collecting stats using dbms_stats
Hello,
We have tables that contain stale stats and would want to collect stats using dbms_stats with an estimate of 30%. What database resources would be consumed when dbms_stats is used on a table? Also, would the table be locked during dbms_stats? Thank you.
1) I'm not sure what resources you're talking about. Obviously, gathering statistics requires I/O, since you've got to read lots of data from the table. It requires CPU, particularly if you are gathering histograms. It requires RAM to the extent that you'll be doing sorts in the PGA and putting blocks in the buffer cache (thus causing other blocks to age out), etc. Depending on whether you immediately invalidate query plans, you may force other sessions to start doing a lot more hard parsing as well.
2) You cannot do DDL on a table while you are gathering statistics, but you can do DML. You would generally not want to gather statistics while an application is active.
Justin -
Gathering Schema statistics automatically
Hi All,
I have a couple of doubts regarding automatic statistics gathering in Oracle 9i.
I have seen a Metalink document telling that in 9i statistics can be gathered automatically by giving options => 'GATHER AUTO'. My doubts are:
1) Once this statement is executed, don't we need to run it again afterwards? i.e., will the statistics be gathered automatically for all objects in this schema without our intervention henceforth?
2) It is mentioned to execute the following statement before the one above "exec dbms_stats.ALTER_SCHEMA_TAB_MONITORING('<owner>',TRUE);"
If we enable table monitoring, will it impact database performance in any way? Will there be any overhead associated with table monitoring?
Please clear my above doubts regarding the DBMS_STATS. Also could you please inform me which is the better way to gather object statistics for a schema.
Thanks in advance
Satish
1) Yes, you need to create a job inside Oracle (recommended) or a batch job in your OS to calculate statistics at the interval (daily, weekly, etc.) that you want. If you execute it only once, the statistics will be calculated only that one time.
2)Yes, "it's recommended" but not mandatory. Except:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref6562
Well, I think there will be some sort of impact, but I'm not sure how much. I once had that in a prod DB and I didn't notice any kind of impact. You should test it beforehand.
I don't think there's a rule for which way is better to gather statistics, because some options of DBMS_STATS apply to some cases and others to completely different cases.
It depends on the type of DB that you have.
Statistics on temporary table issue
We have a temporary table that causes sometimes a performance issue as the table statistics are not correct. The solution might be to compute the object statistics just before several select statements using the temporary table are executed (this is currently not the case). I think there can still be issues with this approach:
step 1.) Job 1 fills temporary table and gathers statistics on the table
step 2.) Job 1 executes first sql on temporary table
step 3.) Job 2 fills temporary table and gathers statistics on the table
step 4.) Job 2 executes first sql on temporary table using the new statistics
step 5.) Job 1 executes second sql on the table - but has now new (= wrong) table statistics of job 2
Job 1 executes for mandant 1 and job 2 for mandant 2, etc. Some of the "heap-organized" tables are partitioned by mandant.
How can we solve this problem? We consider to partition the temporary table into partitions by mandant, too. Are there other / better solutions?
(Oracle 10.2.0.4)
Hello,
If you don't have statistics on your Temporary Table Oracle will use dynamic sampling instead.
So, you may try to delete the statistics on this Table and then Lock them, as follow:
execute dbms_stats.delete_table_stats('SCHEMA','TEMPORARY_TABLE');
execute dbms_stats.lock_table_stats('SCHEMA','TEMPORARY_TABLE');
You may want to check the performance, as dynamic sampling is also resource consuming.
Hope this helps.
Best regards,
Jean-Valentin -
Gather statistics or analyze table
What is the difference between gather statistics for table and analyze table?
Regards
Arpit
Analyzing a table is gathering statistics (whether you're using the old ANALYZE statement or the preferred dbms_stats package).
-
Statistics Analysis on Tables that are often empty
Right now I'm dealing with a user application that was originally developed in Ora10g. Recently the database was upgraded to Ora11g, and the schema and data was imported successfully.
However, since the user started using Ora11, some of their applications have been running slower and slower. I'm just wondering if the problem could be due to statistics.
The application has several tables which contains temporary data. Usually these tables are empty, although when a user application runs they are populated, and queried against, and then at the end the data is deleted. (Its this program that's running slower and slower.)
I'm just wondering if the problem could be due to a problem with user statistics.
When I look at the 'last_analyzed' field in user_tables, the date goes back to the date of last import. I know Oracle regularly updates statistics, so what I suspect is happening is that, by luck, Oracle has only been gathering statistics when the tables are empty. (And since the tables are empty, the statistics are of no help in optimizing the DB.)
Am I on the right track?
And if so, is there a way to automatically trigger a statistics gather job when a table gets above a certain size?
System details:
Oracle: 11gR2 (64 bit) Standard version
File System: ASM (GRID infrastructure)
"Usually these tables are empty, although when a user application runs they are populated, and queried against, and then at the end the data is deleted"
You have three options (and depending on how the data changes, you might find that not all temporary tables work best with the same option):
1. Load representative data into the temporary table, collect statistics (including any histograms that you identify as necessary) and then lock the statistics
2. Modify the job to re-gather statistics immediately after a temporary table is populated
3. Delete statistics and then lock the statistics and check the results (execution plan and performance) when the optimizer uses dynamic sampling
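Option 1 might be sketched as follows; schema and table names are placeholders:

```sql
-- load representative data into the temporary table first, then:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'TEMP_WORK');

-- lock the statistics so later gathers (possibly against an empty table)
-- do not overwrite them
EXEC DBMS_STATS.LOCK_TABLE_STATS('APP', 'TEMP_WORK');
```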
Note: It is perfectly reasonable to create indexes on temporary tables, provided that you DO create the correct indexes. If jobs are querying the temporary tables for the full data set (all rows), indexes are a hindrance. If there are many separate queries against the temporary table, each retrieving a small set of rows, an index or two may be beneficial. Also, some designs use unique indexes to enforce uniqueness when the tables are loaded.
Hemant K Chitale