Incremental statistics in 10g
Dear all,
Facts:
- Oracle Enterprise Edition 10.1.0.5
- AIX 5.3
- data warehouse environment
- many partitioned tables
I want to implement incremental statistics so that I gather stats only on recently loaded partitions, not on the whole table. Do you have a script or PL/SQL block that does that?
I know this method exists in 11g, but I'm looking for a solution on 10.1.0.5.
Thanks for your answers.
http://docs.oracle.com/cd/B14117_01/appdev.101/b10802/d_stats.htm#996757
Table 93-32 GATHER_TABLE_STATS Procedure Parameters
partname - Name of partition.
So, is it possible or not?
DBMS_STATS.GATHER_TABLE_STATS (
ownname VARCHAR2,
tabname VARCHAR2,
partname VARCHAR2 DEFAULT NULL,
estimate_percent NUMBER DEFAULT to_estimate_percent_type
(get_param('ESTIMATE_PERCENT')),
block_sample BOOLEAN DEFAULT FALSE,
method_opt VARCHAR2 DEFAULT get_param('METHOD_OPT'),
degree NUMBER DEFAULT to_degree_type(get_param('DEGREE')),
granularity VARCHAR2 DEFAULT 'AUTO',
cascade BOOLEAN DEFAULT to_cascade_type(get_param('CASCADE')),
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT to_no_invalidate_type (
get_param('NO_INVALIDATE')));
Looks like it's possible in PL/SQL...
Select the partition names from the data dictionary and run the gather via the scheduler.
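As a hedged sketch of that idea (owner, table name, and the "never analyzed" filter are placeholders — adapt the dictionary predicate to however you identify freshly loaded partitions):

```sql
-- Sketch for 10.1: gather stats only on partitions that have no stats yet
-- (e.g. partitions created by a recent load). DWH/SALES are placeholders.
BEGIN
  FOR p IN (SELECT table_owner, table_name, partition_name
              FROM dba_tab_partitions
             WHERE table_owner = 'DWH'
               AND table_name  = 'SALES'
               AND last_analyzed IS NULL)
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname     => p.table_owner,
      tabname     => p.table_name,
      partname    => p.partition_name,
      granularity => 'PARTITION',  -- this partition only, no global stats
      cascade     => TRUE);        -- local index partitions too
  END LOOP;
END;
/
```

Note that this only refreshes partition-level stats; global (table-level) statistics still go stale over time, which is exactly the problem 11g incremental statistics were designed to solve.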
Similar Messages
-
Best practices for gathering statistics in 10g
I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.
Hi,
>Is this the right approach to generate object stats for a schema and keep it up to date?
The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned ANALYZE TABLE and DBMS_UTILITY methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for every SQL statement.
I spoke with Andrew Holdsworth of Oracle's SQL tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, re-analyzing only when there is a chance that doing so would change execution plans (not at the default staleness threshold).
I have my detailed notes here:
http://www.dba-oracle.com/art_otn_cbo.htm
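For reference, a weekly schema-level job of the kind discussed above might look like this sketch (the schema name is a placeholder; CASCADE=>TRUE covers the index stats the poster asked about):

```sql
-- Sketch: weekly stats gathering for one schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'MYSCHEMA',
    options          => 'GATHER STALE',               -- only stale objects
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);                        -- index stats included
END;
/
```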
As to system stats, oh yes!
By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
No Workload (NW) stats:
CPUSPEEDNW - CPU speed (noworkload)
IOSEEKTIM - I/O seek time in milliseconds
IOTFRSPEED - I/O transfer speed in bytes per millisecond
I have my notes here:
http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
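Gathering the noworkload figures is a one-liner:

```sql
-- Collect the noworkload system statistics (CPUSPEEDNW, IOSEEKTIM, IOTFRSPEED)
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
```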
Hope this helps. . . .
Don Burleson
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
Best Way to gather statistics in 10g
Hi All,
What is the best way to gather optimizer statistics in 10g databases? We currently rely on the default automatic statistics gathering feature of 10g, but we feel it has some shortcomings: for many tables the stats are not up to date, and we have also hit a 10g bug that can cause "cursor: pin S wait on X" waits during stats gathering (as described in a MetaLink note).
So what do you experts think about this issue? What would be the best way to gather stats: manual or auto?
Regards
Satish
The right reply to your question is "it depends". It depends on your application systems, the amount of change your data undergoes over time, and your queries. You can choose what statistics to gather and when, but you have to know your data and your application. There is no simple answer, right for everyone, that could be dispensed and set as a "golden rule". A great site with many useful articles about statistics is Wolfgang Breitling's: www.centrexcc.com. That is for starters. Your question is far from trivial and is not easily answered; the best reply you can get is "it depends". -
11g Incremental statistics gathering experiences
Hi folks,
I was wondering if people who have configured their 11g DB's to use incremental statistics would share their experiences good/bad etc.
Has anyone setup incremental stats - was worthwhile for you or not - why - etc?
Any problems / performance issues / bugs hit etc etc?
I would welcome any posts of experiences encountered or any related comments.
Thanks,
firefly
Edited by: firefly on 10-Mar-2011 06:58
>I was wondering if people who have configured their 11g DB's to use incremental statistics
What exactly are "incremental statistics"?
How are they collected?
Where are they stored?
How are they used? -
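Briefly, and as a sketch (owner/table names below are placeholders): incremental statistics are enabled per table via a DBMS_STATS preference; Oracle then stores per-partition synopses in the SYSAUX tablespace and derives the global statistics from them, so only changed partitions need to be rescanned.

```sql
-- Enable synopsis-based global statistics for one partitioned table
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('DWH', 'SALES', 'INCREMENTAL', 'TRUE');
  -- With the preference set, AUTO granularity maintains global stats
  -- from the partition synopses instead of rescanning the full table
  DBMS_STATS.GATHER_TABLE_STATS('DWH', 'SALES', granularity => 'AUTO');
END;
/
```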
Gathering system statistics in 10g
I am confused. I read in the Oracle® Database Performance Tuning Guide, for 10g Release 1, that the system statistics are NOT automatically generated and must be manually generated using DBMS_STATS. However, when I query V$SESSTAT, V$STATNAME and V$OSSTAT I have records returned.
I thought that DBMS_STATS was no longer used in 10g and that everything was automatic. Does anyone know which is correct? If I have data in those views does that mean that system statistics have been run?
Thanks!
You can still manually collect stats in 10g using DBMS_STATS, but 10g can also collect statistics on stale objects automatically when that feature is enabled. Our other DBA was involved in setting up that item, but I think the Oracle tool involved is the automatic maintenance infrastructure documented alongside the Automatic Workload Repository. Note that V$SESSTAT, V$STATNAME and V$OSSTAT are dynamic performance views populated by the instance itself, so seeing rows there does not by itself mean that system statistics have been gathered.
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/autostat.htm
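To check whether optimizer system statistics have actually been gathered (as opposed to the dynamic counters in V$OSSTAT), query the table where DBMS_STATS stores them:

```sql
-- System statistics used by the CBO live here
SELECT pname, pval1
  FROM sys.aux_stats$
 WHERE sname = 'SYSSTATS_MAIN';
```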
-Chuck -
We use the query below to find the "global cache hit ratio" in a RAC environment on 9i:
SELECT
a.inst_id "Instance",
(A.VALUE+B.VALUE+C.VALUE+D.VALUE)/(E.VALUE+F.VALUE) "GLOBAL CACHE HIT RATIO"
FROM
GV$SYSSTAT A,
GV$SYSSTAT B,
GV$SYSSTAT C,
GV$SYSSTAT D,
GV$SYSSTAT E,
GV$SYSSTAT F
WHERE
A.NAME='global cache gets'
AND B.NAME='global cache converts'
AND C.NAME='global cache cr blocks received'
AND D.NAME='global cache current blocks received'
AND E.NAME='consistent gets'
AND F.NAME='db block gets'
AND B.INST_ID=A.INST_ID
AND C.INST_ID=A.INST_ID
AND D.INST_ID=A.INST_ID
AND E.INST_ID=A.INST_ID
AND F.INST_ID=A.INST_ID;
This query gives "no rows selected" in Oracle 10g. The reason is that some of the NAME values, such as 'global cache gets', are not present in 10g. What are the new statistics that have replaced these?
Can you suggest an alternative query that will run on all Oracle versions: 9i, 10g and 11g?
Hope this query helps. I ran it in our 10g environment and it works :) — now rows are selected.
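One hedged option: in 10g the 'global cache ...' statistics were renamed to 'gc ...', and I believe the gets/converts counters were dropped, so an exact 9i-style ratio is not reproducible. A rough equivalent using names that exist in 10g/11g (treat this as a sketch, not a drop-in replacement):

```sql
-- Approximate global cache hit ratio per instance on 10g/11g
SELECT a.inst_id "Instance",
       (a.value + b.value) / (e.value + f.value) "GLOBAL CACHE HIT RATIO"
  FROM gv$sysstat a, gv$sysstat b, gv$sysstat e, gv$sysstat f
 WHERE a.name = 'gc cr blocks received'
   AND b.name = 'gc current blocks received'
   AND e.name = 'consistent gets'
   AND f.name = 'db block gets'
   AND b.inst_id = a.inst_id
   AND e.inst_id = a.inst_id
   AND f.inst_id = a.inst_id;
```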
select a.inst_id "instance",
a.value "global blocks lost",
b.value "global current blocks served",
c.value "global cr blocks served",
a.value/(b.value+c.value) ratio
from gv$sysstat a, gv$sysstat b, gv$sysstat c
where a.name='global cache blocks lost' and
b.name='global cache current blocks served' and
c.name='global cache cr blocks served' and
b.inst_id=a.inst_id and c.inst_id = a.inst_id
/ -
Collecting database statistics in 10g
Hi,
We are using Oracle Database 10.2.0.4 on an HP OS. As we know, in 10g AWR automatically collects stats every hour. Is there any need to collect database stats again manually by using DBMS_STATS?
Is there any difference between collecting stats via AWR and via DBMS_STATS?
"execute sys.dbms_stats.gather_system_stats('Start') ;
execute sys.dbms_stats.gather_schema_stats( ownname=>'pc01', cascade=>FALSE, degree=>dbms_stats.default_degree, estimate_percent=>100);
execute dbms_stats.delete_table_stats( ownname=>'pc01', tabname=>'statcol');
execute sys.dbms_stats.gather_system_stats('Stop');"
Any idea?
Hello,
Thanks a lot.
Some of our production systems running on Oracle 10g collect database stats manually once a month, using DBMS_STATS, to improve system performance.
So is there any need to collect stats manually?
As per my understanding there is no need to collect them manually, because the automatic statistics job is doing this.
Am I right? -
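To verify that assumption, you can check whether the 10g automatic statistics job is enabled and when it last ran (GATHER_STATS_JOB is the default 10g job name):

```sql
-- Status of the 10g automatic optimizer statistics job
SELECT job_name, enabled, last_start_date
  FROM dba_scheduler_jobs
 WHERE job_name = 'GATHER_STATS_JOB';
```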
Optimized incremental backup in 10g
Hi,
I was studying http://www.oracle.com/technology/products/database/oracle10g/pdf/twp_general_10gdb_product_family.pdf and I have one question about it.
In the "Incremental backup" feature/option there is a note: "SE/XE no optimized incremental backup".
What are the features of "optimized incremental backup"?
Thanks,
Sim
I believe that is referring to block change tracking. It allows Oracle to track which blocks have actually changed since the last backup, so RMAN can just read the tracking file and back up those blocks rather than examining every block to see whether it has changed.
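Enabling it takes one statement (the tracking-file path is a placeholder), and V$BLOCK_CHANGE_TRACKING confirms the status:

```sql
-- Turn on block change tracking; later incremental backups read this file
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct.f';

-- Verify
SELECT status, filename FROM v$block_change_tracking;
```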
Justin -
Hi,
I am using RMAN 10.2.0.3.0
and target db 10.2.0.3.0
in NOCATALOG mode
I have scheduled the level 0 backup weekly and level 1 backup daily as below.
Saturday
run
show all;
backup incremental level 0 database;
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup archivelog all;
report obsolete;
delete obsolete;
restore validate database;
restore validate controlfile;
exit
Sunday-Friday
run
show all;
backup incremental level 1 database;
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup archivelog all;
report obsolete;
delete obsolete;
restore validate database;
restore validate controlfile;
exit
On Saturday the incremental level 0 completed without any issues, but the next day, while running the level 1, it should have taken Saturday's level 0 as its base; instead it reports 'no parent backup or copy of datafile found' and takes a full backup again.
Please let me know if I did anything wrong. Also, my retention policy is set to RECOVERY WINDOW OF 8 DAYS and my CONTROL_FILE_RECORD_KEEP_TIME is 7.
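One thing worth checking, as a hedged suggestion rather than a confirmed diagnosis: in NOCATALOG mode all RMAN metadata lives in the control file, and Oracle's guidance is to keep CONTROL_FILE_RECORD_KEEP_TIME greater than the recovery window, otherwise backup records can age out and an incremental can fail to find its parent. Here it is 7 days against an 8-day window:

```sql
-- Keep control-file backup records longer than the 8-day recovery window
ALTER SYSTEM SET control_file_record_keep_time = 9 SCOPE=BOTH;
```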
Please help me
Thanks,
Priya
Edited by: Priya on Sep 22, 2010 6:12 AM
Log of Incremental Level 0
Recovery Manager: Release 10.2.0.3.0 - Production on Mon Sep 20 12:07:45 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: FUJIVDB (DBID=639859515)
RMAN> run
2> {
3> show all;
4> backup incremental level 0 database;
5> sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
6> backup archivelog all;
7> report obsolete;
8> delete obsolete;
9> restore validate database;
10> restore validate controlfile;
11>
12> }
13> exit
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'C:\Backups\Controlfile\cf_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'C:\Backups\datafile\ora_df%t_s%s_s%p';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA102\DATABASE\SNCFFUJIVDB_%F%.ORA';
Starting backup at 20-SEP-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=323 devtype=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00003 name=C:\ORACLE\ORADATA\FUJIVDB\SYSAUX01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\FUJIVDB\UNDO02.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\FUJIVDB\UNDO01.DBF
input datafile fno=00001 name=C:\ORACLE\ORADATA\FUJIVDB\SYSTEM01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\FUJIVDB\USER01.DBF
input datafile fno=00011 name=C:\ORACLE\ORADATA\FUJIVDB\INDX01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\FUJIVDB\USER11.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\FUJIVDB\USER12.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\FUJIVDB\USER13.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\FUJIVDB\USER14.DBF
input datafile fno=00012 name=C:\ORACLE\ORADATA\FUJIVDB\INDX11.DBF
input datafile fno=00013 name=C:\ORACLE\ORADATA\FUJIVDB\INDX12.DBF
input datafile fno=00014 name=C:\ORACLE\ORADATA\FUJIVDB\INDX13.DBF
input datafile fno=00015 name=C:\ORACLE\ORADATA\FUJIVDB\INDX14.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\FUJIVDB\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 20-SEP-10
channel ORA_DISK_1: finished piece 1 at 20-SEP-10
piece handle=C:\BACKUPS\DATAFILE\ORA_DF730210069_S31_S1 tag=TAG20100920T120749 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:06
Finished backup at 20-SEP-10
Starting Control File and SPFILE Autobackup at 20-SEP-10
piece handle=C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100920-00 comment=NONE
Finished Control File and SPFILE Autobackup at 20-SEP-10
sql statement: ALTER SYSTEM ARCHIVE LOG CURRENT
Starting backup at 20-SEP-10
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=9 stamp=729858731
input archive log thread=1 sequence=2 recid=10 stamp=729887062
input archive log thread=1 sequence=3 recid=11 stamp=729953114
input archive log thread=1 sequence=4 recid=12 stamp=729973634
input archive log thread=1 sequence=5 recid=13 stamp=729973634
input archive log thread=1 sequence=6 recid=14 stamp=730015209
input archive log thread=1 sequence=7 recid=15 stamp=730058416
input archive log thread=1 sequence=8 recid=16 stamp=730058416
input archive log thread=1 sequence=9 recid=17 stamp=730144816
input archive log thread=1 sequence=10 recid=18 stamp=730144816
input archive log thread=1 sequence=11 recid=19 stamp=730210042
input archive log thread=1 sequence=12 recid=20 stamp=730210200
input archive log thread=1 sequence=13 recid=21 stamp=730210201
channel ORA_DISK_1: starting piece 1 at 20-SEP-10
channel ORA_DISK_1: finished piece 1 at 20-SEP-10
piece handle=C:\BACKUPS\DATAFILE\ORA_DF730210201_S33_S1 tag=TAG20100920T121001 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:17
Finished backup at 20-SEP-10
Starting Control File and SPFILE Autobackup at 20-SEP-10
piece handle=C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100920-01 comment=NONE
Finished Control File and SPFILE Autobackup at 20-SEP-10
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 8 days
no obsolete backups found
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 8 days
using channel ORA_DISK_1
no obsolete backups found
Starting restore at 20-SEP-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backupset
channel ORA_DISK_1: reading from backup piece C:\BACKUPS\DATAFILE\ORA_DF729786183_S13_S1
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\BACKUPS\DATAFILE\ORA_DF729786183_S13_S1 tag=TAG20100915T142303
channel ORA_DISK_1: validation complete, elapsed time: 00:00:25
Finished restore at 20-SEP-10
Starting restore at 20-SEP-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backupset
channel ORA_DISK_1: reading from backup piece C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100920-01
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100920-01 tag=TAG20100920T121018
channel ORA_DISK_1: validation complete, elapsed time: 00:00:03
Finished restore at 20-SEP-10
Recovery Manager complete.
Next day Log Of Incremental Level 1
Recovery Manager: Release 10.2.0.3.0 - Production on Tue Sep 21 17:58:00 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: FUJIVDB (DBID=639859515)
RMAN> run
2> {
3> show all;
4> backup incremental level 1 database;
5> sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
6> backup archivelog all;
7> report obsolete;
8> delete obsolete;
9> restore validate database;
10> restore validate controlfile;
11> }
12> exit
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'C:\Backups\Controlfile\cf_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'C:\Backups\datafile\ora_df%t_s%s_s%p';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA102\DATABASE\SNCFFUJIVDB_%F%.ORA';
Starting backup at 21-SEP-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=321 devtype=DISK
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 6 found
no parent backup or copy of datafile 11 found
no parent backup or copy of datafile 7 found
no parent backup or copy of datafile 8 found
no parent backup or copy of datafile 9 found
no parent backup or copy of datafile 10 found
no parent backup or copy of datafile 12 found
no parent backup or copy of datafile 13 found
no parent backup or copy of datafile 14 found
no parent backup or copy of datafile 15 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting incremental level 1 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00003 name=C:\ORACLE\ORADATA\FUJIVDB\SYSAUX01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\FUJIVDB\UNDO02.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\FUJIVDB\UNDO01.DBF
input datafile fno=00001 name=C:\ORACLE\ORADATA\FUJIVDB\SYSTEM01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\FUJIVDB\USER01.DBF
input datafile fno=00011 name=C:\ORACLE\ORADATA\FUJIVDB\INDX01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\FUJIVDB\USER11.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\FUJIVDB\USER12.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\FUJIVDB\USER13.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\FUJIVDB\USER14.DBF
input datafile fno=00012 name=C:\ORACLE\ORADATA\FUJIVDB\INDX11.DBF
input datafile fno=00013 name=C:\ORACLE\ORADATA\FUJIVDB\INDX12.DBF
input datafile fno=00014 name=C:\ORACLE\ORADATA\FUJIVDB\INDX13.DBF
input datafile fno=00015 name=C:\ORACLE\ORADATA\FUJIVDB\INDX14.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\FUJIVDB\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 21-SEP-10
channel ORA_DISK_1: finished piece 1 at 21-SEP-10
piece handle=C:\BACKUPS\DATAFILE\ORA_DF730317484_S56_S1 tag=TAG20100921T175803 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:06
Finished backup at 21-SEP-10
Starting Control File and SPFILE Autobackup at 21-SEP-10
piece handle=C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100921-06 comment=NONE
Finished Control File and SPFILE Autobackup at 21-SEP-10
sql statement: ALTER SYSTEM ARCHIVE LOG CURRENT
Starting backup at 21-SEP-10
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=9 stamp=729858731
input archive log thread=1 sequence=2 recid=10 stamp=729887062
input archive log thread=1 sequence=3 recid=11 stamp=729953114
input archive log thread=1 sequence=4 recid=12 stamp=729973634
input archive log thread=1 sequence=5 recid=13 stamp=729973634
input archive log thread=1 sequence=6 recid=14 stamp=730015209
input archive log thread=1 sequence=7 recid=15 stamp=730058416
input archive log thread=1 sequence=8 recid=16 stamp=730058416
input archive log thread=1 sequence=9 recid=17 stamp=730144816
input archive log thread=1 sequence=10 recid=18 stamp=730144816
input archive log thread=1 sequence=11 recid=19 stamp=730210042
input archive log thread=1 sequence=12 recid=20 stamp=730210200
input archive log thread=1 sequence=13 recid=21 stamp=730210201
input archive log thread=1 sequence=14 recid=22 stamp=730210443
input archive log thread=1 sequence=15 recid=23 stamp=730210444
input archive log thread=1 sequence=16 recid=24 stamp=730231219
input archive log thread=1 sequence=17 recid=25 stamp=730231220
input archive log thread=1 sequence=18 recid=26 stamp=730261837
input archive log thread=1 sequence=19 recid=27 stamp=730299124
input archive log thread=1 sequence=20 recid=28 stamp=730299125
input archive log thread=1 sequence=21 recid=29 stamp=730299448
input archive log thread=1 sequence=22 recid=30 stamp=730299459
input archive log thread=1 sequence=23 recid=31 stamp=730299531
input archive log thread=1 sequence=24 recid=32 stamp=730299531
input archive log thread=1 sequence=25 recid=33 stamp=730317616
input archive log thread=1 sequence=26 recid=34 stamp=730317616
channel ORA_DISK_1: starting piece 1 at 21-SEP-10
channel ORA_DISK_1: finished piece 1 at 21-SEP-10
piece handle=C:\BACKUPS\DATAFILE\ORA_DF730317617_S58_S1 tag=TAG20100921T180016 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:18
Finished backup at 21-SEP-10
Starting Control File and SPFILE Autobackup at 21-SEP-10
piece handle=C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100921-07 comment=NONE
Finished Control File and SPFILE Autobackup at 21-SEP-10
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 8 days
no obsolete backups found
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 8 days
using channel ORA_DISK_1
no obsolete backups found
Starting restore at 21-SEP-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backupset
channel ORA_DISK_1: reading from backup piece C:\BACKUPS\DATAFILE\ORA_DF729786183_S13_S1
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\BACKUPS\DATAFILE\ORA_DF729786183_S13_S1 tag=TAG20100915T142303
channel ORA_DISK_1: validation complete, elapsed time: 00:00:26
Finished restore at 21-SEP-10
Starting restore at 21-SEP-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backupset
channel ORA_DISK_1: reading from backup piece C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100921-07
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\BACKUPS\CONTROLFILE\CF_C-639859515-20100921-07 tag=TAG20100921T180035
channel ORA_DISK_1: validation complete, elapsed time: 00:00:02
Finished restore at 21-SEP-10
Recovery Manager complete. -
SQLs for performance statistics in 10g
I need help in getting this information. Can somebody please provide the sqls. I don't seem to get this from AWR.
1. #transactions/sec
2. Physical reads/sec
3. Logical reads/sec
Database is 10.2.0.2 with RAC
Thanks
Hi,
Are you sure you have read the AWR report correctly? I have one and can easily spot:
physical reads per sec
logical reads per sec
transactions per sec
Look in the Load Profile section of the AWR report, where you can simply search for this information.
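If you prefer the raw counters behind those Load Profile lines, they can be read from GV$SYSSTAT (cumulative since instance startup; take two samples and divide the deltas by the elapsed seconds to get per-second rates — transactions/sec is user commits plus user rollbacks):

```sql
-- Cumulative counters behind the AWR Load Profile figures
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name IN ('user commits', 'user rollbacks',
                'physical reads', 'session logical reads');
```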
rgds
alan -
Elapsed time went up from 1min to 22min after migrating from 10g to 11g
I just migrated one of my databases from 10.2.0.2 (Red Hat Linux, 2-node RAC, SGA=1GB) to 11.2.0.1 (Red Hat Linux, 2-node RAC, SGA=7GB).
The timing for one specific query shot up from 1 min to 22 min.
Following is the query:
SELECT /*+ gather_plan_statistics */ docr.DRCONTENT
FROM WRPADMIN.T_DOCREPORT docr, WRPADMIN.t_document doc
WHERE doc.docid = docr.docid
AND 294325 = doc.rdocid
AND ( ( ( (EXISTS
(SELECT 'X'
FROM WRPADMIN.t_mastermap mstm1,
WRPADMIN.t_docdimmap docdim1
WHERE doc.docid = mstm1.docid
AND mstm1.dimlvlid = 2
AND mstm1.mstmapid = docdim1.mstmapid
AND docdim1.dimid IN (86541))))
OR (EXISTS
(SELECT 'X'
FROM WRPADMIN.t_mastermap mstm2,
WRPADMIN.t_docdimmap docdim2
WHERE doc.rdocid = mstm2.rdocid
AND mstm2.dimlvlid = 1
AND mstm2.mstmapid = docdim2.mstmapid
AND docdim2.dimid IN (28388)))))
ORDER BY doc.DOCID
The selected field (docr.DRCONTENT) is a CLOB column.
Following is the plan and statistics in 10g
Statistics
1 recursive calls
0 db block gets
675018 consistent gets
52225 physical reads
0 redo size
59486837 bytes sent via SQL*Net to client
27199426 bytes received via SQL*Net from client
103648 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
51823 rows processed
SQL>
Plan hash value: 129748299
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 50 | 51823 |00:00:14.72 | 627K| 5379 | 26M| 1873K| 23M (0)|
|* 2 | FILTER | | 1 | | 51823 |00:00:08.90 | 627K| 5379 | | | |
| 3 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCREPORT | 1 | 1 | 51823 |00:00:05.42 | 159K| 3773 | | | |
| 4 | NESTED LOOPS | | 1 | 50 | 103K|00:00:12.65 | 156K| 628 | | | |
| 5 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCUMENT | 1 | 50 | 51823 |00:00:00.15 | 481 | 481 | | | |
|* 6 | INDEX RANGE SCAN | RDOC2_INDEX | 1 | 514 | 51823 |00:00:00.09 | 245 | 245 | | | |
|* 7 | INDEX RANGE SCAN | DOCID9_INDEX | 51823 | 1 | 51823 |00:00:00.46 | 155K| 147 | | | |
|* 8 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 51823 | 1 | 0 |00:00:04.52 | 467K| 1140 | | | |
| 9 | NESTED LOOPS | | 51823 | 1 | 207K|00:00:03.48 | 415K| 479 | | | |
|* 10 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_MASTERMAP | 51823 | 1 | 51823 |00:00:01.20 | 207K| 190 | | | |
|* 11 | INDEX RANGE SCAN | DOCID4_INDEX | 51823 | 1 | 51824 |00:00:00.41 | 155K| 146 | | | |
|* 12 | INDEX RANGE SCAN | MSTMAPID_INDEX | 51823 | 1 | 103K|00:00:00.43 | 207K| 289 | | | |
|* 13 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 1 | 1 | 1 |00:00:01.05 | 469 | 466 | | | |
| 14 | NESTED LOOPS | | 1 | 1 | 15 |00:00:14.62 | 468 | 465 | | | |
|* 15 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_MASTERMAP | 1 | 1 | 1 |00:00:01.02 | 464 | 463 | | | |
|* 16 | INDEX RANGE SCAN | RDOCID3_INDEX | 1 | 629 | 44585 |00:00:00.29 | 198 | 198 | | | |
|* 17 | INDEX RANGE SCAN | MSTMAPID_INDEX | 1 | 1 | 14 |00:00:00.02 | 4 | 2 | | | |
Predicate Information (identified by operation id):
2 - filter(( IS NOT NULL OR IS NOT NULL))
6 - access("DOC"."RDOCID"=294325)
7 - access("DOC"."DOCID"="DOCR"."DOCID")
8 - filter("DOCDIM1"."DIMID"=86541)
10 - filter("MSTM1"."DIMLVLID"=2)
11 - access("MSTM1"."DOCID"=:B1)
12 - access("MSTM1"."MSTMAPID"="DOCDIM1"."MSTMAPID")
13 - filter("DOCDIM2"."DIMID"=28388)
15 - filter("MSTM2"."DIMLVLID"=1)
16 - access("MSTM2"."RDOCID"=:B1)
17 - access("MSTM2"."MSTMAPID"="DOCDIM2"."MSTMAPID")
Following is the plan in 11g:
Statistics
32 recursive calls
0 db block gets
20959179 consistent gets
105948 physical reads
348 redo size
37320945 bytes sent via SQL*Net to client
15110877 bytes received via SQL*Net from client
103648 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
51823 rows processed
SQL>
Plan hash value: 1013746825
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 51823 |00:01:10.08 | 20M| 2306 | | | |
| 1 | SORT ORDER BY | | 1 | 1 | 51823 |00:01:10.08 | 20M| 2306 | 9266K| 1184K| 8236K (0)|
|* 2 | FILTER | | 1 | | 51823 |00:21:41.79 | 20M| 2306 | | | |
| 3 | NESTED LOOPS | | 1 | | 51823 |00:00:01.95 | 8054 | 1156 | | | |
| 4 | NESTED LOOPS | | 1 | 335 | 51823 |00:00:00.99 | 4970 | 563 | | | |
| 5 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCUMENT | 1 | 335 | 51823 |00:00:00.38 | 402 | 401 | | | |
|* 6 | INDEX RANGE SCAN | RDOC2_INDEX | 1 | 335 | 51823 |00:00:00.17 | 148 | 147 | | | |
|* 7 | INDEX RANGE SCAN | DOCID9_INDEX | 51823 | 1 | 51823 |00:00:00.55 | 4568 | 162 | | | |
| 8 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCREPORT | 51823 | 1 | 51823 |00:00:00.94 | 3084 | 593 | | | |
| 9 | CONCATENATION | | 51823 | | 51823 |00:22:16.08 | 20M| 1150 | | | |
| 10 | NESTED LOOPS | | 51823 | | 0 |00:00:02.71 | 221K| 1150 | | | |
| 11 | NESTED LOOPS | | 51823 | 1 | 103K|00:00:01.19 | 169K| 480 | | | |
|* 12 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_MASTERMAP | 51823 | 1 | 51823 |00:00:00.72 | 108K| 163 | | | |
|* 13 | INDEX RANGE SCAN | DOCID4_INDEX | 51823 | 1 | 51824 |00:00:00.52 | 56402 | 163 | | | |
|* 14 | INDEX RANGE SCAN | MSTMAPID_INDEX | 51823 | 2 | 103K|00:00:00.60 | 61061 | 317 | | | |
|* 15 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 103K| 1 | 0 |00:00:01.14 | 52584 | 670 | | | |
| 16 | NESTED LOOPS | | 51823 | | 51823 |00:22:13.19 | 20M| 0 | | | |
| 17 | NESTED LOOPS | | 51823 | 1 | 725K|00:22:12.31 | 20M| 0 | | | |
|* 18 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_MASTERMAP | 51823 | 1 | 51823 |00:22:11.09 | 20M| 0 | | | |
|* 19 | INDEX RANGE SCAN | RDOCID3_INDEX | 51823 | 336 | 2310M|00:12:08.04 | 6477K| 0 | | | |
|* 20 | INDEX RANGE SCAN | MSTMAPID_INDEX | 51823 | 2 | 725K|00:00:00.83 | 51838 | 0 | | | |
|* 21 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 725K| 1 | 51823 |00:00:00.92 | 51823 | 0 | | | |
Predicate Information (identified by operation id):
2 - filter( IS NOT NULL)
6 - access("DOC"."RDOCID"=294325)
7 - access("DOC"."DOCID"="DOCR"."DOCID")
12 - filter("MSTM1"."DIMLVLID"=2)
13 - access("MSTM1"."DOCID"=:B1)
14 - access("MSTM1"."MSTMAPID"="DOCDIM1"."MSTMAPID")
15 - filter((INTERNAL_FUNCTION("DOCDIM1"."DIMID") AND (("DOCDIM1"."DIMID"=86541 AND "MSTM1"."DIMLVLID"=2 AND "MSTM1"."DOCID"=:B1) OR
("DOCDIM1"."DIMID"=28388 AND "MSTM1"."DIMLVLID"=1 AND "MSTM1"."RDOCID"=:B2))))
18 - filter(("MSTM1"."DIMLVLID"=1 AND (LNNVL("MSTM1"."DOCID"=:B1) OR LNNVL("MSTM1"."DIMLVLID"=2))))
19 - access("MSTM1"."RDOCID"=:B1)
20 - access("MSTM1"."MSTMAPID"="DOCDIM1"."MSTMAPID")
21 - filter((INTERNAL_FUNCTION("DOCDIM1"."DIMID") AND (("DOCDIM1"."DIMID"=86541 AND "MSTM1"."DIMLVLID"=2 AND "MSTM1"."DOCID"=:B1) OR
("DOCDIM1"."DIMID"=28388 AND "MSTM1"."DIMLVLID"=1 AND "MSTM1"."RDOCID"=:B2))))
Calling all performance experts. Any ideas?
Edited by: dm_ptldba on Oct 8, 2012 7:50 AM
If you check lines 2, 3, 8, and 13 in the 10g plan you will see that Oracle has operated your two EXISTS subqueries separately (there is a bug in that version with multiple filter subqueries that indents each subquery after the first one extra place, so the shape of the plan is a little deceptive). The statistics show that the second subquery ran only once, because existence was almost always satisfied by the first.
In the 11g plan, lines 2, 3, and 9 show that the optimizer has combined your TWO subqueries into a single subquery, then transformed that single subquery into a concatenation; in effect this makes it execute both subqueries for every row from the driving table, and all the extra work comes from the redundant execution of what used to be the second EXISTS subquery.
If you extract the OUTLINE from the execution plans (add 'outline' to the call to dbms_xplan as one of the format options) you may see a hint that shows the optimizer combining the two subqueries; if so, put in the corresponding "NO_xxx" hint to block it. Alternatively, you could simply try adding the hint /*+ no_query_transformation */ to stop ALL cost-based query transformations.
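For example, pulling a cursor's plan with the outline section included looks like this (the SQL_ID shown is a placeholder):

```sql
-- Runtime row-source statistics plus the full set of outline hints
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
               sql_id => 'placeholder_sqlid',
               format => 'ALLSTATS LAST OUTLINE'));
```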
Regards
Jonathan Lewis -
Suggested schedule for gather statistics on sysadm
We have recently upgraded to 11.1.0.7 with PeopleSoft version 9.0 and tools 8.49 on Sun Solaris version 10. Currently we are running stats every night on all non-work SYSADM tables. I'm wondering if this is too much. We are seeing SQL statements use different explain plans. For example: one SQL statement takes plan A and finishes in the expected time. At some other point during the week the same SQL will take plan B and will either never finish or take far too long.
So I'm curious to see how other folks are handling their statistics.
Other items of note:
we disabled the nightly oracle statistics job
our database is about 200 GB of used space
Any feedback is much appreciated.
Welcome to the forum!
Whenever you post provide your 4 digit Oracle version (result of SELECT * FROM V$VERSION)
>
For example there is a table T1 of 25GB (3 crore rows), partitioned. When I try generating statistics for table T1 with the below command it takes a lot of time and I don't know what the progress is.
>
Your command will generate statistics for the entire table; that is, ALL partitions. Is that what you intended?
Sometimes partitioned tables have historical data in older partitions and a small number of ACTIVE partitions where the amount of data changes substantially. You could collect stats only on the partitions that have large data changes. The older, static partitions may not need frequent stat collections since the amount of data that changes is small.
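In 10g terms, collecting stats only on the recently loaded partitions is just the partname parameter from the question - a hedged sketch, where the schema, table, and partition names are placeholders:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'DWH_OWNER',    -- placeholder schema
    tabname     => 'SALES_FACT',   -- placeholder partitioned table
    partname    => 'P_2012_10',    -- the partition just loaded
    granularity => 'PARTITION',    -- partition-level stats only
    cascade     => TRUE);          -- include local index statistics
END;
/
```

Note that with granularity => 'PARTITION' the global (table-level) statistics are not refreshed, so in 10g they still need an occasional full gather.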
If you are using 11g you could collect incremental statistics instead of global stats. Here is an article on stats by Maria Colgan, an Oracle developer on the Optimizer team.
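The 11g incremental approach mentioned above looks like the following - a sketch only, since this preference does not exist before 11g, and the schema and table names are placeholders:

```sql
-- 11g+ only: mark the table for incremental global statistics.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('DWH_OWNER', 'SALES_FACT',
                             'INCREMENTAL', 'TRUE');
END;
/

-- Subsequent gathers with AUTO granularity then refresh only the changed
-- partitions and derive global stats from the per-partition synopses.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'DWH_OWNER',
    tabname     => 'SALES_FACT',
    granularity => 'AUTO');
END;
/
```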
https://blogs.oracle.com/optimizer/entry/how_do_i_compare_statistics -
Difference between dbms_stats and COMPUTE STATISTICS
Hello everyone,
Can anyone tell me what is the difference between:
dbms_stats.gather_table_stats(
ownname => 'me',
tabname => 'ORGANISATIONS',
estimate_percent => dbms_stats.auto_sample_size
);
and
ANALYZE TABLE ORGANISATIONS COMPUTE STATISTICS;
I guess both methods are valid ways to compute statistics, but when I run the first method, the num_rows in USER_TABLES is wrong.
But when I execute the second method, I get the correct num_rows.
So, what is exactly the difference and which one is best?
Thanks,
Hello,
It's not recommended to use the ANALYZE statement to collect optimizer statistics, so you should use DBMS_STATS.
Also, about the number of rows: since you used an estimate method, you may get a row count that is not accurate.
What is the result if you choose this:
estimate_percent => NULL
NB: Here, NULL means COMPUTE.
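For example, the same call with a full compute - a sketch reusing the owner and table from the question above:

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'me',
    tabname          => 'ORGANISATIONS',
    estimate_percent => NULL);  -- NULL = COMPUTE: scan every row
END;
/
```

With a full scan the num_rows in USER_TABLES should match the actual row count, at the cost of a longer gather.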
You may have more detail on the following link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#PFGRF003
This Note from My Oracle Support may give you many useful advices:
Recommendations for Gathering Optimizer Statistics on 10g [ID 605439.1]
Hope this helps.
Best regards,
Jean-Valentin -
Performance problems post Oracle 10.2.0.5 upgrade
Hi All,
We have patched our SAP ECC6 system's Oracle database from 10.2.0.2 to 10.2.0.5. (Operating system Solaris). This was done using the SAP Bundle Patch released in February 2011. (patched DEV, QA and then Production).
Post patching production, we are now experiencing slower performance of our long-running background jobs; e.g. our billing run has increased from 2 hours to 4 hours. The slowdown is constant and has not increased or decreased over a period of two weeks.
We have so far implemented the following in production without any affect:
We have double checked that database parameters are set correctly as per note Note 830576 - Parameter recommendations for Oracle 10g.
We have executed with db02old the abap<->db crosscheck to check for missing indexes.
Note 1020260 - Delivery of Oracle statistics (Oracle 10g, 11g).
It was suggested to look at adding specific indexes on tables and changing ABAP code identified by looking at the most "expensive" SQL statements being executed, but these statements were all present pre-patching and are not within the critical long-running processes. Although it is a good idea to optimise, this would not resolve the root cause of the problem introduced by the upgrade to 10.2.0.5. It was thus not implemented in production, although the suggested new indexes were tested in QA without effect, then backed out.
It was also suggested to implement SAP Note 1525673 - Optimizer merge fix for Oracle 10.2.0.5, which was not part of the SAP Bundle Patch released in February 2011 that we implemented. To do this we were required to implement the SAP Bundle Patch released in May 2011. As this also contains other Oracle fixes we did not want to implement it directly in production. We thus ran baseline tests to measure performance in our QA environment, implemented the SAP Bundle Patch, and ran the same tests again (a simplified version of the implementation route). Result: no improvement in performance; in fact in some cases we had degraded performance (double the time). As this had the potential to negatively affect production, we have not yet implemented it in production.
Any suggestions would be greatly appreciated!
Hello Johan,
well, the first goal should be to restore the original performance so that you have time to do deeper analysis in your QA system (if the data set is the same).
If the problem is caused by some new optimizer features or bugs you can try to "force" the optimizer to use the "old" 10.2.0.2 behaviour. Just set the parameter OPTIMIZER_FEATURES_ENABLE to 10.2.0.2 and check your performance.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm#CHDFABEF
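A sketch of testing this, trying it at session level first before committing to an instance-wide change:

```sql
-- Test in one session first:
ALTER SESSION SET optimizer_features_enable = '10.2.0.2';

-- If the affected statements recover their old plans, apply it
-- instance-wide (assumes an spfile is in use):
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.2' SCOPE = BOTH;
```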
To get more information we need an AWR report (for an overview) and the problematic SQL statements (with all the information such as execution plan, statistics, etc.). This kind of analysis is very hard to do through a forum. I would suggest opening an SAP SR for this issue.
Regards
Stefan -
Hi gurus
I have an issue where secondary indexes are missing on production. I checked the status and found that the index is not available in the database. I created the index with the help of SE14 and that solved my issue.
The problem is that when I monitor again after two days, I see the warning still exists in DB13.
please help in resolving the issue.
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZCSKA
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZCSKB
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZCSKU
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZSKA1
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZSKAT
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZSKB1
Thanks
Hi Pranay,
For the MISSING_INDEX warning, this is caused by the following:
- The index is defined in the ABAP Dictionary but is missing in
the database
- The index is created in the database but is unknown to the
ABAP dictionary
Please first drop and then recreate these indexes.
Firstly, I would like to recommend updating your BR*Tools to the latest version.
MISSING_STATISTICS CHECK
This condition checks whether there are tables or indexes that do not have statistics although they should. The object field is not specified for this condition. This condition has no checking operands, threshold values or value units.
Please check that the update and check optimizer statistics jobs are scheduled regularly.
As of Oracle 10g, statistics must exist for ALL SAP tables. You can use the following statement to check whether there are still SAP tables without statistics under 10g.
SELECT
T.OWNER,
T.TABLE_NAME,
TO_CHAR(O.CREATED, 'dd.mm.yyyy hh24:mi:ss') CREATION_TIME
FROM
DBA_TABLES T,
DBA_OBJECTS O
WHERE
T.OWNER = O.OWNER AND
T.TABLE_NAME = O.OBJECT_NAME AND
T.OWNER LIKE 'SAP%' AND
T.LAST_ANALYZED IS NULL AND
O.OBJECT_TYPE = 'TABLE';
When the system returns tables, the reason for the missing statistics should be identified and new statistics should be created.
You can update the tables that are missing statistics in DB20, and also run brconnect -u / -c -f stats -t missing
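The dictionary query above can also be combined with a gather call directly - a hedged PL/SQL sketch, not a replacement for the BR*Tools run, that collects statistics for each SAP table found without any:

```sql
BEGIN
  FOR r IN (SELECT t.owner, t.table_name
              FROM dba_tables t
             WHERE t.owner LIKE 'SAP%'
               AND t.last_analyzed IS NULL) LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => r.owner,
      tabname          => r.table_name,
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      cascade          => TRUE);  -- include index statistics
  END LOOP;
END;
/
```

In an SAP system the supported route remains brconnect, which also respects SAP-specific exceptions (e.g. pool/cluster tables), so treat this loop as a diagnostic convenience only.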
Hope by using this solution, you don't get warning again.
Thanks
Kishore