Exclude Schema from Gather Statistics Job
I am using 11.1.0.7. I would like to know if we can exclude some schemas while Oracle automatically gathers statistics.
The reason I want this is that I have one database supporting multiple applications, but per one application's demand we should not gather stats on its schema. So in total I have 20 schemas, but while this job runs I want to exclude the 2 schemas that this application uses.
Locking the Stats for these schemas is not an option.
Thanks!
Have you enquired from the vendor exactly why they do not want you to gather statistics on its schema?
Are there other options they are willing to consider, like baselining the explain plan, or perhaps not publishing the statistics to the tables by using the pending statistics feature?
exec dbms_stats.set_table_prefs('SH', 'CUSTOMERS', 'PUBLISH', 'FALSE');
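A slightly fuller sketch of the pending-statistics route (the SH.CUSTOMERS names follow the example above; treat this as one possible workflow, not the only one):

```sql
-- Stop publishing newly gathered stats for this table; they become "pending"
exec dbms_stats.set_table_prefs('SH', 'CUSTOMERS', 'PUBLISH', 'FALSE');

-- Stats gathered from now on (manually or by the automatic job) land in the
-- pending area instead of the dictionary
exec dbms_stats.gather_table_stats('SH', 'CUSTOMERS');

-- Try the pending stats out in the current session only
alter session set optimizer_use_pending_statistics = true;

-- Publish once validated, or throw them away
exec dbms_stats.publish_pending_stats('SH', 'CUSTOMERS');
-- exec dbms_stats.delete_pending_stats('SH', 'CUSTOMERS');
```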
Regards
Tim Boles
Edited by: Tim Boles on Aug 4, 2010 11:11 AM
Similar Messages
-
Autom. Gather Statistics Job
Hello,
I am facing the following problem with the automatic gather statistics job which runs every night.
The job starts when the weeknight/weekend maintenance window opens, runs approx. 30 minutes, and rebuilds statistics for around 270 objects. But when we check the status of objects the next morning, or immediately after the job run, we still find hundreds of objects with stale statistics. We are not facing any heavy performance problems, but I want to know why we find stale statistics. The job uses 30 minutes of the 480-minute window, so in my opinion there is plenty of time to create new statistics for all objects.
Is this normal behaviour?
I can't find anything on Metalink.
Systeminfo:
Host: HP-UX 11.31
DB: 10.2.0.4
Regards
Ulli

Ulli,
uwaldt wrote:
Hello Randolf,
we check if column STALE_STATS contains a 'YES'. The column LAST_ANALYZED is filled with a date.

OK, and does the date correspond to the latest run of the automatic statistics gathering job, or is it outdated? What is the content of the corresponding *_TAB_MODIFICATIONS view in that case?

After we checked the SQL script with the 'original' check via dbms_stats.gather_schema_stats, we see that there is a difference between ALL_TAB/IND_STATISTICS and the real value in *_TAB_MODIFICATIONS.

Apologies, but I'm not able to follow: what is the "difference between ALL_TAB/IND_STATISTICS and the real value in *_TAB_MODIFICATIONS"? Do you mean that the list of objects marked with STALE_STATS = 'YES' differs from the output of the "LIST STALE" call?

But what does Grid Control show if we check there for stale stats?

Good question; I can't answer, since I seldom use the GUI tools.

The next thing is, the in-house developers argue with ALL_TAB/IND_STATISTICS and blame the DBAs for not doing their housekeeping, especially when users complain about bad performance. I know that the gather job does its work, but we can't prove it to the developers.

This all sounds a bit odd. Two comments:
1. It is possible in 10g to lock the statistics of tables so that the automatic statistics collection job doesn't touch them. It's quite unlikely here, since you have to invoke it manually/explicitly using DBMS_STATS.LOCK_TABLE_STATS/LOCK_SCHEMA_STATS, but you may want to check the column STATTYPE_LOCKED in the *_TAB_STATISTICS views for the objects in question, just to make sure.
2. All the monitoring-related information (*_TAB_MODIFICATIONS, the STALE_STATS column, etc.) is usually only updated every three hours, or when a call to DBMS_STATS.GATHER_*_STATS is performed. If you want to flush the monitoring info manually, you can call DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO to make sure you are looking at the latest monitoring information available.
The above point might explain why you can spot differences/inconsistencies before/after calling DBMS_STATS.GATHER_SCHEMA_STATS to list the stale objects.
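To illustrate point 2, a hedged sketch (the SCOTT schema is just a placeholder):

```sql
-- Force the in-memory DML monitoring data out to the dictionary
exec dbms_stats.flush_database_monitoring_info;

-- Then compare the modification counters ...
select table_name, inserts, updates, deletes, timestamp
  from dba_tab_modifications
 where table_owner = 'SCOTT';

-- ... against the staleness flag
select table_name, last_analyzed, stale_stats
  from dba_tab_statistics
 where owner = 'SCOTT';
```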
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Exclude a table from GATHER SCHEMA STATISTICS
Hello All,
How do you exclude a table from gather schema statistics?
Thanks.
Gregg

Hi,
it is explained here:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1043993
LOCK_TABLE_STATS Procedure
This procedure locks the statistics on the table.
Syntax
DBMS_STATS.LOCK_TABLE_STATS (
ownname VARCHAR2,
tabname VARCHAR2);
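A short usage sketch (SCOTT.EMP is a placeholder):

```sql
-- Lock stats on one table; gather_schema_stats and the automatic job skip it
exec dbms_stats.lock_table_stats('SCOTT', 'EMP');

-- Verify: STATTYPE_LOCKED shows 'ALL' for locked objects
select table_name, stattype_locked
  from dba_tab_statistics
 where owner = 'SCOTT' and table_name = 'EMP';

-- Unlock again when you want stats gathered
exec dbms_stats.unlock_table_stats('SCOTT', 'EMP');
```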
thanks.
Edited by: varun4dba on Dec 10, 2010 7:39 PM -
How to exclude tables from Schema level replication
Hi All,
I am currently trying to setup Oracle Streams (Oracle 11.1.0.6 on RHEL5) to replicate a schema from one instance to another.
The basic schema level replication is working well, and copying DDL and DML changes without any problems. However there are a couple of tables that I need to exclude from the stream, due to incompatible datatypes.
Does anybody have any ideas or notes on how I could achieve this? I have been reading the Oracle documentation and find it difficult to follow and confusing, and I have not found any examples on the internet.
Thanks heaps.
Gavin

When you use SCHEMA-level rules for capture and need to skip the replication of a few tables, you create rules in the negative rule set for the table.
Here is an example of creating table rules in the negative rule set for the capture process.
begin
  dbms_streams_adm.add_table_rules(
    table_name     => 'schema.table_to_be_skipped',
    streams_type   => 'CAPTURE',
    streams_name   => 'your_capture_name',
    queue_name     => 'strmadmin.capture_queue_name',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => false);
end;
/
table_name parameter identifies the fully qualified table name (schema.table)
streams_name identifies the capture process for which the rules are to be added
queue_name specifies the name of the queue associated with the capture process.
inclusion_rule => false indicates that the created rules are to be placed in the negative rule set (i.e., skip this table)
include_dml=> true indicates DML changes for the table (ie, skip DML changes for this table)
include_ddl=> true indicates DDL changes for the table (ie, skip DDL changes for this table) -
How to exclude schema name from exported files (PL SQL Developer)
Dear all,
Just one question: I am using PL/SQL Developer. My goal is to export some data (as .sql and .dmp files) from one database and import it into another database (both databases have an identical structure, test and production, just different database and schema names). In order to make this possible, I need to exclude the schema name from the generated export file. I believe it is possible to do this automatically by setting up parameters of PL/SQL Developer. How?
Thank you in advance,
Kindest regards,
Dragana

In the meantime, I have found the answer to my previous question:
Actually, the initial idea (how to exclude the schema name from exported files) was wrong. No need for any intervention.
The trick is: the schema name can be changed during the import of the exported files (PL/SQL Developer during import offers the possibility From User (old schema) / To User (new schema)).
Hope that this will be useful info for others.
Dragana -
Hi,
I launched a job to gather statistics on just one schema of my database in OEM. How can I identify its session? Many thanks in advance.

You can try the query below:

select s.username, s.sid, s.serial#, sq.sql_text
  from v$session s, v$sqlarea sq
 where s.sql_address = sq.address
   and s.sql_hash_value = sq.hash_value
   and s.username = '&username'
   and lower(sq.sql_text) like '%dbms_stats%';
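If the job was submitted through the scheduler, another hedged option is to map the running job to its session directly (the job-name pattern is an assumption):

```sql
-- Running scheduler jobs expose their session id
select j.job_name, j.session_id, s.serial#, s.username, s.program
  from dba_scheduler_running_jobs j
  join v$session s on s.sid = j.session_id
 where j.job_name like '%GATHER%';
```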
Best Regards
Krystian Zieja / mob -
Gather Statistics recommendations
Hi
How do I find the recommendations from the gather schema statistics job?
Thanks

998932 wrote:
Sometimes when a Gather Schema stats job is scheduled (parameters: APPLSYS, 30, , NOBACKUP, , LASTRUN, GATHER, , Y), it will get executed in 25 minutes and sometimes 60 minutes.

a) What could be the possible reasons?

This is normal; it depends on how busy your system is.

b) How can I reduce the execution time?

How big is your database, and how long does it take to gather stats for all schemas?
What if you change 30 to 10; does it make a difference?

c) What if I increase the estimate percent and give the degree values?
PROD CPUs are 12, cores are 6 and RAM is 72 GB

You need to try it yourself and see how long it takes. Increasing the estimate percent will take longer, but a different degree value might change that. You should tweak your parameters to find the best values, and your concern should be your system's performance after running gather schema stats rather than how long the program takes to run.
Thanks,
Hussein -
Hi folks!
I gathered statistics in one schema with the following options:

exec dbms_stats.gather_schema_stats(user, method_opt=>'FOR ALL COLUMNS SIZE 1', estimate_percent=>null, gather_temp=>false, cascade=>true, degree=>dbms_stats.auto_degree);

Then:

exec dbms_stats.gather_schema_stats(user, method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY', estimate_percent=>null, gather_temp=>false, cascade=>true, degree=>dbms_stats.auto_degree);

Now I have turned on table monitoring and plan to gather statistics (per day) with these parameters:

dbms_stats.gather_schema_stats(ownname=>user, options=>'GATHER STALE', gather_temp=>false, cascade=>true, estimate_percent=>100, degree=>dbms_stats.auto_degree);

The default method_opt is FOR ALL COLUMNS SIZE AUTO.
Now I get better performance. How often do I need to gather statistics?
Thanks in advance.
Best regards, Pavel.

Thanks, I also think so.
A little test case:
SQL>exec dbms_stats.gather_table_stats(user,'TABLE1',estimate_percent=>null);
Elapsed: 00:00:01.03
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE1';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
DISGRP 7944 33204
DISMBR 18948 33204
TYPE 4 33204
Elapsed: 00:00:00.03
SQL> exec dbms_stats.gather_table_stats(user,'TABLE1',estimate_percent=>10);
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.78
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE1';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
DISGRP 3836 3269
DISMBR 13686 3269
TYPE 4 3269
Elapsed: 00:00:00.00
SQL> exec dbms_stats.gather_table_stats(user,'TABLE1',estimate_percent=>100);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.52
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE1';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
DISGRP 7944 33204
DISMBR 18948 33204
TYPE 4 33204
Elapsed: 00:00:00.01
SQL> exec dbms_stats.gather_table_stats(user,'TABLE1',estimate_percent=>dbms_stats.auto_sample_size);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.53
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE1';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
DISGRP 8448 5568
DISMBR 18948 33204
TYPE 4 5568
Elapsed: 00:00:00.00
SQL> select count(distinct disgrp) from TABLE1;
COUNT(DISTINCTDISGRP)
7944
Elapsed: 00:00:00.02
SQL> select count(distinct dismbr) from TABLE1;
COUNT(DISTINCTDISMBR)
18948
Elapsed: 00:00:00.03
SQL> select count(distinct type) from TABLE1;
COUNT(DISTINCTTYPE)
4
Elapsed: 00:00:00.01
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE2';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
ID 219120 219120
UNIT 114762 219120
LOC 61 219120
TIS 1230 219120
RTYP 3 219120
Elapsed: 00:00:00.02
SQL> exec dbms_stats.gather_table_stats(user,'TABLE2',estimate_percent=>null);
PL/SQL procedure successfully completed.
Elapsed: 00:00:07.61
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE2';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
ID 219120 219120
UNIT 114762 219120
LOC 61 219120
TIS 1230 219120
RTYP 3 219120
Elapsed: 00:00:00.00
SQL> exec dbms_stats.gather_table_stats(user,'TABLE2',estimate_percent=>10);
PL/SQL procedure successfully completed.
Elapsed: 00:00:02.90
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE2';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
ID 219950 21995
UNIT 70812 21995
LOC 43 21995
TIS 496 21995
RTYP 3 21995
Elapsed: 00:00:00.00
SQL> exec dbms_stats.gather_table_stats(user,'TABLE2',estimate_percent=>50);
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.70
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE2';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
ID 218716 109358
UNIT 92338 109426
LOC 61 109455
TIS 994 109567
RTYP 3 109422
Elapsed: 00:00:00.00
SQL> exec dbms_stats.gather_table_stats(user,'TABLE2',estimate_percent=>100);
PL/SQL procedure successfully completed.
Elapsed: 00:00:04.37
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE2';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
ID 219120 219120
UNIT 114762 219120
LOC 61 219120
TIS 1230 219120
RTYP 3 219120
Elapsed: 00:00:00.00
SQL> exec dbms_stats.gather_table_stats(user,'TABLE2',estimate_percent=>dbms_stats.auto_sample_size);
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.81
SQL> select column_name,num_distinct,sample_size from user_tab_col_statistics where table_name='TABLE2';
COLUMN_NAME NUM_DISTINCT SAMPLE_SIZE
ID 217968 54492
UNIT 122237 54492
LOC 31 5495
TIS 240 5495
RTYP 3 5495
Elapsed: 00:00:00.01
SQL> select count(distinct id) from TABLE2;
COUNT(DISTINCTID)
219120
Elapsed: 00:00:00.27
SQL> select count(distinct unit) from TABLE2;
COUNT(DISTINCTUNIT)
114762
Elapsed: 00:00:00.30
SQL> select count(distinct loc) from TABLE2;
COUNT(DISTINCTLOC)
61
Elapsed: 00:00:00.06
SQL> select count(distinct tis) from TABLE2;
COUNT(DISTINCTTIS)
1230
Elapsed: 00:00:00.09

In that situation, auto_sample_size is not bad? But estimate_percent=>100 is exact.
Best regards, Pavel. -
How to export a user and their schema from one 10g database to another?
Hi,
I would like to export a user and their entire schema from one 10g database to another one. How do I do this?
thx
adam

If you want to export a user and the schema owned by that user, and import it to the same user in a different database, or to a different user in the same database, you can use the exp and imp commands as described in the Utilities manual.
These commands are very versatile and have a lot of options, and are well worth learning properly. To give you a simplistic shortcut, see below: I create a user 'test_move', create some objects in the schema, export, create a new user 'new_move' in the database, and import.
oracle@fuzzy:~> sqlplus system/?????
SQL*Plus: Release 10.2.0.1.0 - Production on Sat Mar 11 21:46:54 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> create user test_move identified by test_move;
User created.
SQL> grant create session, resource to test_move;
Grant succeeded.
SQL> connect test_move/test_move
Connected.
SQL> create table test (x number);
Table created.
SQL> insert into test values (1);
1 row created.
SQL> exit
Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
oracle@fuzzy:~> exp system/????? file=exp.dmp owner=test_move
Export: Release 10.2.0.1.0 - Production on Sat Mar 11 21:48:34 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user TEST_MOVE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user TEST_MOVE
About to export TEST_MOVE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export TEST_MOVE's tables via Conventional Path ...
. . exporting table TEST 1 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
oracle@fuzzy:~> sqlplus system/?????
SQL*Plus: Release 10.2.0.1.0 - Production on Sat Mar 11 21:49:23 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> create user new_move identified by new_move;
User created.
SQL> grant create session, resource to new_move;
Grant succeeded.
SQL> exit
Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
oracle@fuzzy:~> imp system/????? file=exp.dmp fromuser=test_move touser=new_move
Import: Release 10.2.0.1.0 - Production on Sat Mar 11 21:50:12 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Export file created by EXPORT:V10.02.01 via conventional path
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. importing TEST_MOVE's objects into NEW_MOVE
. . importing table "TEST" 1 rows imported
Import terminated successfully without warnings.
oracle@fuzzy:~>

If moving between databases, remember to set the SID properly before the import. If keeping the same userid, skip the from/to stuff in the import.
There are many variations on the theme ...
You can simplify this. You can select tables individually. You can use a parameter file. You can transport all the constraints and data. You can skip the data and only move the definitions. You can get some help (imp/exp help=yes).
And, if it's all 10g, there is a new and improved facility called expdp/impdp (dp = data pump) which has a lot more capability as well, including direct transfer (no intermediate file) together with suspend/restart. Also documented in the Utilities manual. -
Trying to create 3 schemas from one schema
DB version : 11.2.0.2 Enterprise Edition
Platform : RHEL 5.6
I have an expdp dump of a schema (HRTB_AP_PROD). I wanted to create 3 schemas from this dump in one go, so I tried this:
## The parfile I used
DIRECTORY=DPUMP_DIR
DUMPFILE=HRTB_AP_PROD%u.dmp
LOGFILE=TheThreeSchemas-imp.log
remap_schema=HRTB_AP_PROD:HRTB_AP_DEV1
remap_schema=HRTB_AP_PROD:HRTB_AP_DEV2
remap_schema=HRTB_AP_PROD:HRTB_AP_DEV3
exclude=statistics
parallel=2
nohup impdp \'/ as sysdba\' parfile=impdp-aug23.par &

But I encountered:

ORA-39046: Metadata remap REMAP_SCHEMA has already been specified.

When I googled it, I found the following link, in which Dean says it is not possible:
Re: one dump file inport into multiple schema
So I had to run 3 separate imports (impdp) to do this.
This is a bit weird. I am surprised that the Oracle guys haven't done anything about this. This is like DB2!

Is there a question in your post, or are you just letting us know the obvious?
:p -
Hi,
I am using 11.1.0.7 on an IBM AIX Power-based 64-bit system.
In 10g, if I query the DBA_SCHEDULER_JOBS view, I see the GATHER_STATS_JOB for automated statistics collection, but in 11g I don't see this; rather, I see the BSLN_MAINTAIN_STATS_JOB job, which executes the BSLN_MAINTAIN_STATS_PROG program for stats collection.
And if I query DBA_SCHEDULER_PROGRAMS, I also see the GATHER_STATS_PROG program there. Can the gurus help me understand both in 11g? Why are there two different programs, and what is the difference?
Actually, the problem is that I am receiving the following error message in my alert log file:
Mon Aug 16 22:01:42 2010
GATHER_STATS_JOB encountered errors. Check the trace file.
Errors in file /oracle/diag/rdbms/usgdwdbp/usgdwdbp/trace/usgdwdbp_j000_1179854.trc:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

The trace file shows:
*** 2010-08-14 22:10:14.449
*** SESSION ID:(2028.20611) 2010-08-14 22:10:14.449
*** CLIENT ID:() 2010-08-14 22:10:14.449
*** SERVICE NAME:(SYS$USERS) 2010-08-14 22:10:14.449
*** MODULE NAME:(DBMS_SCHEDULER) 2010-08-14 22:10:14.449
*** ACTION NAME:(ORA$AT_OS_OPT_SY_3407) 2010-08-14 22:10:14.449
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
*** 2010-08-14 22:10:14.450
GATHER_STATS_JOB: GATHER_TABLE_STATS('"DWDB_ADMIN_SYN"','"TEMP_HIST_HEADER_LIVE"','""', ...)
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

But we don't have GATHER_STATS_JOB in 11g, and the BSLN_MAINTAIN_STATS_JOB job runs only on weekends, yet the above error message came last night, 16 August.
Thanks
Salman

Thanks to the people who are contributing.
I know from this information that the table is locked, but I have tried manually locking a table and executing the gather_table_stats procedure, and it runs fine. Here I have two questions:
1. Where is GATHER_STATS_JOB in 11g? As you can see in the trace file text, it says that GATHER_STATS_JOB failed, yet I don't see any GATHER_STATS_JOB in 11g.
2. The BSLN_MAINTAIN_STATS_JOB job is supposed to gather statistics, but only on weekend nights; so how come I see this error occurring last night at 22:11 on 16 August, which is not a weekend night?
Salman -
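For what it's worth, a hedged way to check this on 11g: the nightly statistics collection is no longer the GATHER_STATS_JOB scheduler job but the "auto optimizer stats collection" automated maintenance task (its runs show up with ORA$AT_OS_OPT_SY_* names, matching the ACTION NAME in the trace above), while BSLN_MAINTAIN_STATS_JOB is, as far as I know, only about AWR baseline statistics:

```sql
-- Is the automatic stats autotask enabled?
select client_name, status
  from dba_autotask_client
 where client_name = 'auto optimizer stats collection';

-- Its run history; job names look like ORA$AT_OS_OPT_SY_nnn
select job_name, job_status, job_start_time
  from dba_autotask_job_history
 where client_name = 'auto optimizer stats collection'
 order by job_start_time desc;
```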
How to Schedule Java statistics job
Hi All,
We have installed an NW07 Portal system on one of the Unix boxes. Now we would like to run the Oracle statistics on it. Since there is no ABAP stack, we can't schedule this through DB13. Is there any way we can run it?
Can I configure the Java statistics jobs for this from the central Solution Manager using DBACOCKPIT?
Can somebody tell me how to do this?
Thanks for the help.
Regards
Ravi

Ravi,
Check out the following SAP Note as well as the PDFs to accomplish the task:
Note 1027146 - Database administration and monitoring in the DBA Cockpit
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/107aa3f5-2302-2a10-f990-b4d2af1aaca0
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1062428c-f1df-2910-b08f-c322feddcd10
Regards,
Karthick Eswaran -
exec dbms_stats.gather_schema_stats( -
ownname => 'PROD1', -
options => 'GATHER AUTO', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size repeat', -
degree => 25)
I have 400 tables in PROD1 schema, but I want to gather statistics for 120 tables only
How do I gather statistics for 120 tables?
For specifying degree, what is the minimum and maximum? Will this affect any performance?

user8934564 wrote:
exec dbms_stats.gather_schema_stats( -
ownname => 'PROD1', -
options => 'GATHER AUTO', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size repeat', -
degree => 25)
I have 400 tables in PROD1 schema, but I want to gather statistics for 120 tables only
How do I gather statistics for 120 tables?
For specifying degree, what is the minimum and maximum? Will this affect any performance?

From Oracle Docs:
Degree is the degree of parallelism. The default for degree is NULL. The default value can be changed using the SET_PARAM procedure. NULL means use the table default value specified by the DEGREE clause in the CREATE TABLE or ALTER TABLE statement. Use the constant DBMS_STATS.DEFAULT_DEGREE to specify the default value based on the initialization parameters. The AUTO_DEGREE value determines the degree of parallelism automatically: either 1 (serial execution) or DEFAULT_DEGREE (the system default value based on the number of CPUs and initialization parameters), according to the size of the object.

To be able to gather stats for only 120 tables:
1) You can create a temporary table, enter the names of these tables, create a PL/SQL procedure, loop over the table names, and call GATHER_TABLE_STATS for each table.
2) You can lock the other tables' stats and call GATHER_SCHEMA_STATS.
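Option 1 could be sketched like this (STATS_TABLE_LIST is a hypothetical driver table holding the 120 table names):

```sql
begin
  for r in (select table_name from stats_table_list) loop
    dbms_stats.gather_table_stats(
      ownname          => 'PROD1',
      tabname          => r.table_name,
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size repeat',
      cascade          => true);
  end loop;
end;
/
```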
Regards
Gokhan -
Gather Statistics taking very long time
Hi,
In one of our critical databases, gather statistics is taking a very long time.
How do I go about debugging this bad performance?
Regards,
Narayan

Start with:
- What is the exact command used to gather statistics (and what are all the global settings that are used for any arguments you're not providing)?
- What does "very long time" mean in numbers? Are we talking about 20 minutes? 4 hours? 4 days?
- How much data are we talking about? What fraction of the tables and indexes are you gathering statistics on?
- Have you done anything to look at, say, v$session_longops (or trace the session, or watch the SQL being executed, etc.) to see what the job is doing?
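A starting point for the v$session_longops suggestion, as a sketch:

```sql
-- Operations still in progress, with a rough percent-done figure
select sid, opname, target, sofar, totalwork,
       round(sofar / totalwork * 100, 1) as pct_done,
       time_remaining
  from v$session_longops
 where totalwork > 0
   and sofar < totalwork;
```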
Justin -
Suggested schedule for gather statistics on sysadm
We have recently upgraded to 11.1.0.7 with PeopleSoft version 9.0 and tools 8.49 on Sun Solaris version 10. Currently we are running stats every night on all non-work SYSADM tables. I'm wondering if this is too much. We are seeing SQL statements use different explain plans. For example, one SQL takes plan A and finishes in the expected time. At some other point during the week, that same SQL will take plan B and will either never finish or take way too long.
So I'm curious to see how other folks are handling their statistics.
Other items of note:
we disabled the nightly oracle statistics job
our database is about 200 GB of used space
Any feedback is much appreciated.

Welcome to the forum!
Whenever you post provide your 4 digit Oracle version (result of SELECT * FROM V$VERSION)
>
For example, there is a table T1, 25 GB in size (3 crore, about 30 million, rows) and partitioned. When I try generating statistics for table T1 with the command below, it takes a lot of time and I don't know what the progress is.
>
Your command will generate statistics for the entire table; that is, ALL partitions. Is that what you intended?
Sometimes partitioned tables have historical data in older partitions and a small number of ACTIVE partitions where the amount of data changes substantially. You could collect stats only on the partitions that have large data changes. The older, static partitions may not need frequent stat collections since the amount of data that changes is small.
If you are using 11g, you could collect incremental statistics instead of global stats. Here is an article on stats by Maria Colgan, an Oracle developer on the Optimizer team:
https://blogs.oracle.com/optimizer/entry/how_do_i_compare_statistics
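A minimal sketch of switching a partitioned table to incremental statistics (SCOTT.T1 is a placeholder; 11g behavior assumed):

```sql
-- Maintain per-partition synopses so global stats can be derived
exec dbms_stats.set_table_prefs('SCOTT', 'T1', 'INCREMENTAL', 'TRUE');

-- Subsequent gathers only scan partitions whose data changed
exec dbms_stats.gather_table_stats('SCOTT', 'T1', granularity => 'AUTO');
```

(Incremental stats also require estimate_percent to be left at AUTO_SAMPLE_SIZE, the default.)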