GC statistics in 10g
In 9i we use the query below to find the "global cache hit ratio" in a RAC environment:
SELECT
a.inst_id "Instance",
(A.VALUE+B.VALUE+C.VALUE+D.VALUE)/(E.VALUE+F.VALUE) "GLOBAL CACHE HIT RATIO"
FROM
GV$SYSSTAT A,
GV$SYSSTAT B,
GV$SYSSTAT C,
GV$SYSSTAT D,
GV$SYSSTAT E,
GV$SYSSTAT F
WHERE
A.NAME='global cache gets'
AND B.NAME='global cache converts'
AND C.NAME='global cache cr blocks received'
AND D.NAME='global cache current blocks received'
AND E.NAME='consistent gets'
AND F.NAME='db block gets'
AND B.INST_ID=A.INST_ID
AND C.INST_ID=A.INST_ID
AND D.INST_ID=A.INST_ID
AND E.INST_ID=A.INST_ID
AND F.INST_ID=A.INST_ID;
This query gives "no rows selected" in Oracle 10g, because some of the NAME column values, such as 'global cache gets', are no longer present in 10g. What are the new statistics that have replaced them?
Can you suggest an alternative query that will run on all Oracle versions: 9i, 10g and 11g?
Hope this query helps. I ran it in our 10g environment and it works :) and rows are now returned.
select a.inst_id "instance", a.value "global blocks lost",
b.value "global current blocks served",
c.value "global cr blocks served",
a.value/(b.value+c.value) ratio
from gv$sysstat a, gv$sysstat b, gv$sysstat c
where a.name='global cache blocks lost' and
b.name='global cache current blocks served' and
c.name='global cache cr blocks served' and
b.inst_id=a.inst_id and c.inst_id = a.inst_id
/
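For the original hit-ratio question, one hedged approach is a single query that matches both naming conventions, so it degrades gracefully across versions. Note the 'gc ...' names are my assumed 10g/11g renames of the old 'global cache ...' statistics; verify the exact names against V$STATNAME on your release before relying on this.

```sql
-- Sketch: version-tolerant global cache hit ratio per instance.
-- Sums whichever statistic names exist on the release; the 'gc ...'
-- names are assumptions - check V$STATNAME on your database.
SELECT inst_id "Instance",
       SUM(CASE WHEN name IN
             ('global cache gets',                  'gc gets',
              'global cache converts',              'gc converts',
              'global cache cr blocks received',    'gc cr blocks received',
              'global cache current blocks received','gc current blocks received')
           THEN value END) /
       SUM(CASE WHEN name IN ('consistent gets', 'db block gets')
           THEN value END) "GLOBAL CACHE HIT RATIO"
FROM   gv$sysstat
GROUP  BY inst_id;
```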
Similar Messages
-
Best practices for gathering statistics in 10g
I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.
Hi,
"Is this the right approach to generate object stats for a schema and keep it up to date?"
The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned ANALYZE TABLE and DBMS_UTILITY methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
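As a minimal sketch of the weekly schema-stats job idea from the question (the schema name is an illustrative assumption; CASCADE=>TRUE answers the index question by gathering index statistics in the same pass):

```sql
-- Hypothetical weekly schema stats job; adjust owner and options.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'MYSCHEMA',                    -- illustrative schema
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,   -- let Oracle pick the sample
    cascade          => TRUE);                         -- include index statistics
END;
/
```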
I spoke with Andrew Holsworth of Oracle Corp SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keep it, only re-analyzing when there is a chance that would make a difference in execution plans (not the default 20% re-analyze threshold).
I have my detailed notes here:
http://www.dba-oracle.com/art_otn_cbo.htm
As to system stats, oh yes!
By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
No Workload (NW) stats:
CPUSPEEDNW - CPU speed
IOSEEKTIM - The I/O seek time in milliseconds
IOTFRSPEED - The I/O transfer speed in bytes per millisecond
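A minimal sketch of collecting these. Noworkload mode fills in the three values above; interval mode samples a real workload over a stated period (60 minutes here is an arbitrary example):

```sql
-- Noworkload system statistics (CPUSPEEDNW, IOSEEKTIM, IOTFRSPEED):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');

-- Or workload statistics sampled over a representative 60-minute window:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 60);
```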
I have my notes here:
http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
Hope this helps. . . .
Don Burleson
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
Best Way to gather statistics in 10g
Hi All,
What is the best way to gather optimizer statistics in 10g databases? We are currently following the default automatic statistics gathering feature of 10g, but we feel it has some shortcomings. For many of the tables the stats are not up to date. Also we have hit one bug in 10g which can cause "cursor: pin S wait on X" during stats gathering (as given in the MetaLink note).
So what do you experts feel about the issue. What would be the best way to gather stats=> manual or auto?
Regards
Satish
The right reply to your question is "it depends". It depends on your application systems, the amount of change your data undergoes over time, and your queries. You can choose what statistics to gather and when. You have to know your data and your application, though. There is no simple answer, right for everyone, which could be dispensed as a "golden rule". A great site with many useful articles about statistics is Wolfgang Breitling's site: www.centrexcc.com. That is for starters. Your question is far from trivial and is not easily answered. The best reply you can get is "it depends". -
Gathering system statistics in 10g
I am confused. I read in the Oracle® Database Performance Tuning Guide, for 10g Release 1, that the system statistics are NOT automatically generated and must be manually generated using DBMS_STATS. However, when I query V$SESSTAT, V$STATNAME and V$OSSTAT I have records returned.
I thought that DBMS_STATS was no longer used in 10g and that everything was automatic. Does anyone know which is correct? If I have data in those views does that mean that system statistics have been run?
Thanks!
You can still manually collect stats in 10g using DBMS_STATS, but 10g can also perform statistics collection on stale tables automatically, when enabled. Our other DBA was involved in setting up that item, but I think the Oracle tool involved is the Automatic Workload Repository.
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/autostat.htm
-Chuck -
dear all
facts:
- oracle ent 10.1.0.5
- aix 5.3
- datawarehouse environment
- a lot of partitioned tables
I want to implement incremental statistics gathering that covers only recently loaded partitions, not the whole table. Do you have a script or PL/SQL block to do that?
I know that in 11g this method exists, but I am looking for 10.1.0.5.
Thanks for your answers.
http://docs.oracle.com/cd/B14117_01/appdev.101/b10802/d_stats.htm#996757
Table 93-32 GATHER_TABLE_STATS Procedure Parameters
partname - Name of partition.
So is it possible or not???
DBMS_STATS.GATHER_TABLE_STATS (
ownname VARCHAR2,
tabname VARCHAR2,
partname VARCHAR2 DEFAULT NULL,
estimate_percent NUMBER DEFAULT to_estimate_percent_type
(get_param('ESTIMATE_PERCENT')),
block_sample BOOLEAN DEFAULT FALSE,
method_opt VARCHAR2 DEFAULT get_param('METHOD_OPT'),
degree NUMBER DEFAULT to_degree_type(get_param('DEGREE')),
granularity VARCHAR2 DEFAULT 'AUTO',
cascade BOOLEAN DEFAULT to_cascade_type(get_param('CASCADE')),
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT to_no_invalidate_type (
get_param('NO_INVALIDATE')));
Looks like in PL/SQL it's possible...
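A hedged PL/SQL sketch of that idea. The owner, table name, and the "needs stats" filter (LAST_ANALYZED IS NULL) are illustrative assumptions; in a real load you would probably drive the loop from your own list of freshly loaded partitions:

```sql
-- Gather stats only for selected partitions; names are hypothetical.
BEGIN
  FOR p IN (SELECT partition_name
            FROM   dba_tab_partitions
            WHERE  table_owner = 'DWH'          -- illustrative owner
            AND    table_name  = 'SALES_FACT'   -- illustrative table
            AND    last_analyzed IS NULL)       -- crude "never analyzed" filter
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname     => 'DWH',
      tabname     => 'SALES_FACT',
      partname    => p.partition_name,
      granularity => 'PARTITION');              -- partition-level stats only
  END LOOP;
END;
/
```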
Try to SELECT the partition names from the dictionary and run it with the scheduler. -
Collecting database statistics in 10g
Hi,
We are using Oracle database 10.2.0.4 on HP OS. As we know, in 10g AWR automatically collects stats every hour. Is there any need to collect database stats again manually by using dbms_stats?
Is there any difference between collecting stats by AWR and by dbms_stats?
"execute sys.dbms_stats.gather_system_stats('Start') ;
execute sys.dbms_stats.gather_schema_stats( ownname=>'pc01', cascade=>FALSE, degree=>dbms_stats.default_degree, estimate_percent=>100);
execute dbms_stats.delete_table_stats( ownname=>'pc01', tabname=>'statcol');
execute sys.dbms_stats.gather_system_stats('Stop');"
Any idea?
Hello,
Thanks a lot. Some of our production systems running on Oracle 10g collect database stats manually once a month, using dbms_stats, to improve system performance.
So is there any need to collect stats manually? As per my understanding there is no need, because AWR is doing this.
Am I right? -
SQLs for performance statistics in 10g
I need help in getting this information. Can somebody please provide the sqls. I don't seem to get this from AWR.
1. #transactions/sec
2. Physical reads/sec
3. Logical reads/sec
Database is 10.2.0.2 with RAC
Thanks
Hi,
Are you sure you have read the AWR report correctly?
I have one and can easily spot
physical reads per sec
logical reads
transactions per sec
look in the Load Profile section of awr and then you can simply search for your information
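If you want the raw counters outside AWR, here is a sketch. These are cumulative values since instance startup, so to get per-second rates you must sample twice and divide the deltas by the elapsed seconds (use GV$SYSSTAT on RAC to see every instance):

```sql
-- Cumulative counters; sample twice and divide the deltas by the
-- interval in seconds to obtain per-second rates.
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN ('user commits', 'user rollbacks',
                'physical reads', 'session logical reads')
ORDER  BY inst_id, name;
```

Transactions/sec is conventionally approximated as the rate of 'user commits' plus 'user rollbacks'.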
rgds
alan -
Elapsed time went up from 1min to 22min after migrating from 10g to 11g
I just migrated one of my database from 10.2.0.2(Red hat Linux, 2 node RAC, sga= 1Gb) to 11.2.0.1 (red Hat Linux 2 Node RAC, SGA=7GB)
The timing for one specific query shot up from 1 min to 22 min.
Following is the query:
SELECT /*+ gather_plan_statistics */ docr.DRCONTENT
FROM WRPADMIN.T_DOCREPORT docr, WRPADMIN.t_document doc
WHERE doc.docid = docr.docid
AND 294325 = doc.rdocid
AND ( ( ( (EXISTS
(SELECT 'X'
FROM WRPADMIN.t_mastermap mstm1,
WRPADMIN.t_docdimmap docdim1
WHERE doc.docid = mstm1.docid
AND mstm1.dimlvlid = 2
AND mstm1.mstmapid = docdim1.mstmapid
AND docdim1.dimid IN (86541))))
OR (EXISTS
(SELECT 'X'
FROM WRPADMIN.t_mastermap mstm2,
WRPADMIN.t_docdimmap docdim2
WHERE doc.rdocid = mstm2.rdocid
AND mstm2.dimlvlid = 1
AND mstm2.mstmapid = docdim2.mstmapid
AND docdim2.dimid IN (28388)))))
ORDER BY doc.DOCID
The selected column (docr.DRCONTENT) is a CLOB column.
Following is the plan and statistics in 10g
Statistics
1 recursive calls
0 db block gets
675018 consistent gets
52225 physical reads
0 redo size
59486837 bytes sent via SQL*Net to client
27199426 bytes received via SQL*Net from client
103648 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
51823 rows processed
SQL>
Plan hash value: 129748299
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | SORT ORDER BY | | 1 | 50 | 51823 |00:00:14.72 | 627K| 5379 | 26M| 1873K| 23M (0)|
|* 2 | FILTER | | 1 | | 51823 |00:00:08.90 | 627K| 5379 | | | |
| 3 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCREPORT | 1 | 1 | 51823 |00:00:05.42 | 159K| 3773 | | | |
| 4 | NESTED LOOPS | | 1 | 50 | 103K|00:00:12.65 | 156K| 628 | | | |
| 5 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCUMENT | 1 | 50 | 51823 |00:00:00.15 | 481 | 481 | | | |
|* 6 | INDEX RANGE SCAN | RDOC2_INDEX | 1 | 514 | 51823 |00:00:00.09 | 245 | 245 | | | |
|* 7 | INDEX RANGE SCAN | DOCID9_INDEX | 51823 | 1 | 51823 |00:00:00.46 | 155K| 147 | | | |
|* 8 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 51823 | 1 | 0 |00:00:04.52 | 467K| 1140 | | | |
| 9 | NESTED LOOPS | | 51823 | 1 | 207K|00:00:03.48 | 415K| 479 | | | |
|* 10 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_MASTERMAP | 51823 | 1 | 51823 |00:00:01.20 | 207K| 190 | | | |
|* 11 | INDEX RANGE SCAN | DOCID4_INDEX | 51823 | 1 | 51824 |00:00:00.41 | 155K| 146 | | | |
|* 12 | INDEX RANGE SCAN | MSTMAPID_INDEX | 51823 | 1 | 103K|00:00:00.43 | 207K| 289 | | | |
|* 13 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 1 | 1 | 1 |00:00:01.05 | 469 | 466 | | | |
| 14 | NESTED LOOPS | | 1 | 1 | 15 |00:00:14.62 | 468 | 465 | | | |
|* 15 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_MASTERMAP | 1 | 1 | 1 |00:00:01.02 | 464 | 463 | | | |
|* 16 | INDEX RANGE SCAN | RDOCID3_INDEX | 1 | 629 | 44585 |00:00:00.29 | 198 | 198 | | | |
|* 17 | INDEX RANGE SCAN | MSTMAPID_INDEX | 1 | 1 | 14 |00:00:00.02 | 4 | 2 | | | |
Predicate Information (identified by operation id):
2 - filter(( IS NOT NULL OR IS NOT NULL))
6 - access("DOC"."RDOCID"=294325)
7 - access("DOC"."DOCID"="DOCR"."DOCID")
8 - filter("DOCDIM1"."DIMID"=86541)
10 - filter("MSTM1"."DIMLVLID"=2)
11 - access("MSTM1"."DOCID"=:B1)
12 - access("MSTM1"."MSTMAPID"="DOCDIM1"."MSTMAPID")
13 - filter("DOCDIM2"."DIMID"=28388)
15 - filter("MSTM2"."DIMLVLID"=1)
16 - access("MSTM2"."RDOCID"=:B1)
17 - access("MSTM2"."MSTMAPID"="DOCDIM2"."MSTMAPID")
Following is the plan in 11g:
Statistics
32 recursive calls
0 db block gets
20959179 consistent gets
105948 physical reads
348 redo size
37320945 bytes sent via SQL*Net to client
15110877 bytes received via SQL*Net from client
103648 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
51823 rows processed
SQL>
Plan hash value: 1013746825
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 51823 |00:01:10.08 | 20M| 2306 | | | |
| 1 | SORT ORDER BY | | 1 | 1 | 51823 |00:01:10.08 | 20M| 2306 | 9266K| 1184K| 8236K (0)|
|* 2 | FILTER | | 1 | | 51823 |00:21:41.79 | 20M| 2306 | | | |
| 3 | NESTED LOOPS | | 1 | | 51823 |00:00:01.95 | 8054 | 1156 | | | |
| 4 | NESTED LOOPS | | 1 | 335 | 51823 |00:00:00.99 | 4970 | 563 | | | |
| 5 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCUMENT | 1 | 335 | 51823 |00:00:00.38 | 402 | 401 | | | |
|* 6 | INDEX RANGE SCAN | RDOC2_INDEX | 1 | 335 | 51823 |00:00:00.17 | 148 | 147 | | | |
|* 7 | INDEX RANGE SCAN | DOCID9_INDEX | 51823 | 1 | 51823 |00:00:00.55 | 4568 | 162 | | | |
| 8 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCREPORT | 51823 | 1 | 51823 |00:00:00.94 | 3084 | 593 | | | |
| 9 | CONCATENATION | | 51823 | | 51823 |00:22:16.08 | 20M| 1150 | | | |
| 10 | NESTED LOOPS | | 51823 | | 0 |00:00:02.71 | 221K| 1150 | | | |
| 11 | NESTED LOOPS | | 51823 | 1 | 103K|00:00:01.19 | 169K| 480 | | | |
|* 12 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_MASTERMAP | 51823 | 1 | 51823 |00:00:00.72 | 108K| 163 | | | |
|* 13 | INDEX RANGE SCAN | DOCID4_INDEX | 51823 | 1 | 51824 |00:00:00.52 | 56402 | 163 | | | |
|* 14 | INDEX RANGE SCAN | MSTMAPID_INDEX | 51823 | 2 | 103K|00:00:00.60 | 61061 | 317 | | | |
|* 15 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 103K| 1 | 0 |00:00:01.14 | 52584 | 670 | | | |
| 16 | NESTED LOOPS | | 51823 | | 51823 |00:22:13.19 | 20M| 0 | | | |
| 17 | NESTED LOOPS | | 51823 | 1 | 725K|00:22:12.31 | 20M| 0 | | | |
|* 18 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_MASTERMAP | 51823 | 1 | 51823 |00:22:11.09 | 20M| 0 | | | |
|* 19 | INDEX RANGE SCAN | RDOCID3_INDEX | 51823 | 336 | 2310M|00:12:08.04 | 6477K| 0 | | | |
|* 20 | INDEX RANGE SCAN | MSTMAPID_INDEX | 51823 | 2 | 725K|00:00:00.83 | 51838 | 0 | | | |
|* 21 | TABLE ACCESS BY GLOBAL INDEX ROWID | T_DOCDIMMAP | 725K| 1 | 51823 |00:00:00.92 | 51823 | 0 | | | |
Predicate Information (identified by operation id):
2 - filter( IS NOT NULL)
6 - access("DOC"."RDOCID"=294325)
7 - access("DOC"."DOCID"="DOCR"."DOCID")
12 - filter("MSTM1"."DIMLVLID"=2)
13 - access("MSTM1"."DOCID"=:B1)
14 - access("MSTM1"."MSTMAPID"="DOCDIM1"."MSTMAPID")
15 - filter((INTERNAL_FUNCTION("DOCDIM1"."DIMID") AND (("DOCDIM1"."DIMID"=86541 AND "MSTM1"."DIMLVLID"=2 AND "MSTM1"."DOCID"=:B1) OR
("DOCDIM1"."DIMID"=28388 AND "MSTM1"."DIMLVLID"=1 AND "MSTM1"."RDOCID"=:B2))))
18 - filter(("MSTM1"."DIMLVLID"=1 AND (LNNVL("MSTM1"."DOCID"=:B1) OR LNNVL("MSTM1"."DIMLVLID"=2))))
19 - access("MSTM1"."RDOCID"=:B1)
20 - access("MSTM1"."MSTMAPID"="DOCDIM1"."MSTMAPID")
21 - filter((INTERNAL_FUNCTION("DOCDIM1"."DIMID") AND (("DOCDIM1"."DIMID"=86541 AND "MSTM1"."DIMLVLID"=2 AND "MSTM1"."DOCID"=:B1) OR
("DOCDIM1"."DIMID"=28388 AND "MSTM1"."DIMLVLID"=1 AND "MSTM1"."RDOCID"=:B2))))
Calling all performance experts. Any ideas??
Edited by: dm_ptldba on Oct 8, 2012 7:50 AM
If you check lines 2, 3, 8, and 13 in the 10g plan you will see that Oracle has operated your two EXISTS subqueries separately (there is a bug in that version with multiple filter subqueries that indents each subquery after the first one extra place, so the shape of the plan is a little deceptive). The statistics show that the second subquery only ran once, because existence was almost always satisfied by the first.
In the 11g plan, lines 2, 3, and 9 show that the optimizer has transformed your two subqueries into a single subquery, then transformed that single subquery into a concatenation. This has, in effect, made it execute both subqueries for every row from the driving table; all the extra work comes from the redundant execution of what was the second EXISTS subquery.
If you extract the OUTLINE from the execution plans (add 'outline' to the call to dbms_xplan as one of the format options) you may see some hint that shows the optimizer combining the two subqueries - if so, put in the corresponding "NO_xxx" hint to block it. Alternatively you could simply try adding the hint /*+ no_query_transformation */ to stop ALL cost-based query transformations.
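A sketch of both suggestions (the sql_id value is a placeholder for your statement's id from V$SQL):

```sql
-- Show runtime stats plus the outline hints for a cached cursor:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
               'abcd1234efgh5',            -- placeholder sql_id
               NULL,                       -- child number (NULL = last executed)
               'ALLSTATS LAST OUTLINE'));  -- include the outline section

-- Or block all cost-based transformations for the statement itself:
SELECT /*+ gather_plan_statistics no_query_transformation */ docr.DRCONTENT
FROM   WRPADMIN.T_DOCREPORT docr, WRPADMIN.t_document doc
WHERE  doc.docid = docr.docid
AND    294325 = doc.rdocid
-- ... rest of the original query unchanged ...
ORDER  BY doc.DOCID;
```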
Regards
Jonathan Lewis -
Difference between dbms_stats and COMPUTE STATISTICS
Hello everyone,
Can anyone tell me what is the difference between:
dbms_stats.gather_table_stats(
ownname => 'me',
tabname => 'ORGANISATIONS',
estimate_percent => dbms_stats.auto_sample_size
);
and
ANALYZE TABLE ORGANISATIONS COMPUTE STATISTICS;
I guess both methods are valid to compute statistics, but when I run the first method, the num_rows in USER_TABLES is wrong.
But when I execute the second method, I get the correct num_rows.
So, what is exactly the difference and which one is best?
Thanks,
Hello,
It's not recommended to use ANALYZE statement to collect Optimizer statistics. So you should use DBMS_STATS.
Also, about the number of rows: because you used an estimate method, you may get a row count that is not accurate.
What is the result if you choose this:
estimate_percent => NULL
NB: Here, NULL means COMPUTE.
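So, a hedged sketch of the COMPUTE-equivalent call (same owner and table as in the question):

```sql
-- estimate_percent => NULL means a full compute,
-- equivalent in coverage to ANALYZE ... COMPUTE STATISTICS:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'ME',
    tabname          => 'ORGANISATIONS',
    estimate_percent => NULL);
END;
/
```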
You may have more detail on the following link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#PFGRF003
This Note from My Oracle Support may give you many useful advices:
Recommendations for Gathering Optimizer Statistics on 10g [ID 605439.1]
Hope this helps.
Best regards,
Jean-Valentin -
Performance problems post Oracle 10.2.0.5 upgrade
Hi All,
We have patched our SAP ECC6 system's Oracle database from 10.2.0.2 to 10.2.0.5. (Operating system Solaris). This was done using the SAP Bundle Patch released in February 2011. (patched DEV, QA and then Production).
Post patching production, we are now experiencing slower performance of our long running background jobs, e.g. our billing runs has increased from 2 hours to 4 hours. The slow down is constant and has not increased or decreased over a period of two weeks.
We have so far implemented the following in production without any affect:
We have double checked that database parameters are set correctly as per note Note 830576 - Parameter recommendations for Oracle 10g.
We have executed with db02old the abap<->db crosscheck to check for missing indexes.
Note 1020260 - Delivery of Oracle statistics (Oracle 10g, 11g).
It was suggested to look at adding specific indexes on tables and changing abap code identified by looking at the most "expensive" SQL statements being executed, but these were all there pre patching and not within the critical long running processes. Although a good idea to optimise, this will not resolve the root cause of the problem introduced by the upgrade to 10.2.0.5. It was thus not implemented in production, although suggested new indexes were tested in QA without effect, then backed out.
It was also suggested to implement SAP Note 1525673 - Optimizer merge fix for Oracle 10.2.0.5, which was not part of the SAP Bundle Patch released in February 2011 that we implemented. To do this we were required to implement the SAP Bundle Patch released in May 2011. As this also contains other Oracle fixes, we did not want to implement it directly in production. We therefore ran baseline tests to measure performance in our QA environment, implemented the SAP Bundle Patch, and ran the same tests again (a simplified version of the implementation route). Result: no improvement in performance; in fact in some cases performance degraded (double the time). As this had the potential to negatively affect production, we have not yet implemented it in production.
Any suggestions would be greatly appreciated!
Hello Johan,
well the first goal should be to get the original performance so that you have time to do deeper analysis in your QA system (if the data set is the same).
If the problem is caused by some new optimizer features or bugs you can try to "force" the optimizer to use the "old" 10.2.0.2 behaviour. Just set the parameter OPTIMIZER_FEATURES_ENABLE to 10.2.0.2 and check your performance.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm#CHDFABEF
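For example (a sketch; verify the scope and spfile handling for your system before running it):

```sql
-- Test the pre-upgrade optimizer behaviour for one session first:
ALTER SESSION SET optimizer_features_enable = '10.2.0.2';

-- If it helps, set it instance-wide (and persist it in the spfile):
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.2' SCOPE = BOTH;
```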
To get more information we need an AWR (for an overview) and the problematic SQL statements (with all the information like execution plan, statistics, etc.). This analysis are very hard through a forum. I would suggest to open a SAP SR for this issue.
Regards
Stefan -
Hi gurus,
I have an issue where secondary indexes are missing in production. I checked the status and found that the index is not available in the database. I created the index with the help of SE14 and that solved my issue.
The problem is that when I monitor again after two days, I see the warning still exists in DB13.
Please help in resolving the issue.
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZCSKA
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZCSKB
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZCSKU
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZSKA1
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZSKAT
BR0970W Database administration alert - level: ERROR, type: MISSING_INDEX, object: (table) SAPSR3.ZZSKB1
Thanks
Hi Pranay,
For the MISSING_INDEX warning, this is caused by the following:
- The index is defined in the ABAP Dictionary but is missing in
the database
- The index is created in the database but is unknown to the
ABAP dictionary
Please first drop and then recreate these indexes.
Firstly I would like to recommend that you update your br*tools version to the latest.
MISSING_STATISTICS CHECK
This condition checks whether there are tables or indexes that do not have statistics although they should. The object field is not specified for this condition. This condition has no checking operands, threshold values or value units.
Please check that the update and check optimizer statistics jobs are scheduled regularly.
As of Oracle 10g, statistics must exist for ALL SAP tables. You can use the following statement to check whether there are still SAP tables without statistics under 10g.
SELECT
T.OWNER,
T.TABLE_NAME,
TO_CHAR(O.CREATED, 'dd.mm.yyyy hh24:mi:ss') CREATION_TIME
FROM
DBA_TABLES T,
DBA_OBJECTS O
WHERE
T.OWNER = O.OWNER AND
T.TABLE_NAME = O.OBJECT_NAME AND
T.OWNER LIKE 'SAP%' AND
T.LAST_ANALYZED IS NULL AND
O.OBJECT_TYPE = 'TABLE';
When the system returns tables, the reason for the missing statistics should be identified and new statistics should be created.
You can update those tables which are missing statistics in DB20, or run: brconnect -u / -c -f stats -t missing
Hope by using this solution, you don't get warning again.
Thanks
Kishore -
Gather_table_stats and AWR
Hi all,
i would like to disable automatic gathering of table statistics with dbms_scheduler.disable(name => 'SYS.GATHER_STATS_JOB'), because we like to gather statistics every time immediately after we batch load our tables in our data warehouse. Is it right that this job is also needed for AWR and that Oracle will not save system statistics once per hour if I disable this job?
Database version is 10gR2.
Thanks,
Robert
Robert,
the recommended way to disable automatic statistics collection on the non-Oracle objects in 10g is:
exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET', 'ORACLE')
This way the job still gets executed but collects only statistics on the dictionary / AWR / etc. (owned by Oracle) tables. If you disable the job completely, your dictionary statistics might get outdated, potentially leading to suboptimal dictionary performance.
By the way:
"Oracle Database 10g uses a scheduled job, GATHER_STATS_JOB, to collect AWR statistics."
This is a bit misleading: the job's purpose is to collect object statistics. As already mentioned, the AWR snapshots are controlled differently (and by default are taken every hour, not only during the maintenance windows).
Another way you could handle your requirement would be to lock the statistics of the affected table or schemas by using DBMS_STATS.LOCK_TABLE/SCHEMA_STATS. This way these tables/schemas will be skipped by the default gathering job, and you need then to use the "FORCE=>true" parameter when gathering statistics individually. Note that there is one peculiarity with this approach: If you create/rebuild indexes on a table having the statistics locked, the default option "COMPUTE STATISTICS" of 10g doesn't apply any more, so it doesn't update the statistics of the index automatically in the dictionary, and there is unfortunately no "force" parameter available in the CREATE/ALTER INDEX command. You would need to either temporarily unlock the statistics or gather the index statistics separately (which means additional work for the database).
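A sketch of the locking approach just described (the owner and table names are placeholders):

```sql
-- Lock stats so the default gathering job skips the table:
EXEC DBMS_STATS.LOCK_TABLE_STATS('DWH', 'SALES_FACT');

-- Your own post-load gather must then override the lock with FORCE:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'DWH',
    tabname => 'SALES_FACT',
    cascade => TRUE,
    force   => TRUE);   -- required because the table's stats are locked
END;
/
```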
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Edited by: Randolf Geist on Jan 18, 2009 12:17 PM
Added note about lock statistics approach -
Hi everybody.
I would like to understand what we are missing here.
We have several installations running 10g, and its default value for OPTIMIZER_MODE is ALL_ROWS. Well, if there are no statistics for the application tables, the RDBMS by default falls back to the rule-based optimizer. Ok.
After the statistics are generated (by the automatic Oracle job), the RDBMS switches to the cost-based optimizer. I can understand that.
The problem is: why do several queries run much slower when using CBO? When we analyze the execution plan, we see the wrong indexes being used.
The solution I have for now is set OPTIMIZER_MODE=RULE. Then everything runs smoothly again.
Why does this happen? Shouldn't CBO, after the statistics are generated, find out the best execution plan possible? I really can't use CBO on our sites, because performance is so much worse...
Thanks in advance.
Carlos Inglez
Hi Carlos,
"The solution I have for now is set OPTIMIZER_MODE=RULE. Then everything runs smoothly again."
It's almost always an issue with CBO parms or CBO statistics.
There are several issues in 10g CBO, and here are my notes:
http://www.dba-oracle.com/t_slow_performance_after_upgrade.htm
Oracle has improved the cost-based Oracle optimizer in 9.0.5 and again in 10g, so you need to take a close look at your environmental parameter settings (init.ora parms) and your optimizer statistics.
- Check optimizer parameters - Ensure that you are using the proper optimizer_mode (default is all_rows) and check optimal settings for optimizer_index_cost_adj (lower from the default of 100) and optimizer_index_caching (set to a higher value than the default).
- Re-set optimizer costing - Consider unsetting your CPU-based optimizer costing (the 10g default, a change from 9i). CPU costing is best if you see CPU in your top-5 timed events in your STATSPACK/AWR report, and the 10g default of "_optimizer_cost_model"=cpu will try to minimize CPU by invoking more full scans, especially in tablespaces with large blocksizes. To return to your 9i CBO I/O-based costing, set the hidden parameter "_optimizer_cost_model"=io
- Verify deprecated parameters - you need to set optimizer_features_enable = 10.2.0.2 and optimizer_mode = FIRST_ROWS_n (or ALL_ROWS for a warehouse, but remove the 9i CHOOSE default).
- Verify quality of CBO statistics - Oracle 10g does automatic statistics collection and your original customized dbms_stats job (with your customized parameters) will be overlaid. You may also see a statistics deficiency (i.e. not enough histograms) causing performance issues. Re-analyze object statistics using dbms_stats and make sure that you collect system statistics.
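For instance, a hedged re-analyze covering both object and system stats (the schema name is illustrative; SIZE AUTO lets Oracle decide which columns get histograms):

```sql
-- Re-gather object statistics, letting Oracle choose histogram columns:
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname    => 'APP',                          -- illustrative schema
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',    -- auto histogram selection
    cascade    => TRUE);                          -- include index statistics
END;
/

-- And collect system statistics:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
```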
Hope this helps. . .
Donald K. Burleson
Oracle Press author -
Enq: TX - row lock contention problem
Hi ,
Db version 10.2.0.4
os solaris.
i have upgraded my database from 9.2.0.4 to 10.2.0.4 by using exp/imp as my database is small.
I created a new 10g instance, set the parameter values as in 9i (as required), then imported from 9i into the 10g instance.
After importing into the 10g instance we are facing an application-wide performance problem: the response time of the application is very slow.
i have taken awr report of various times and have seeen
SELECT puid,ptimestamp FROM PPOM_OBJECT WHERE puid IN (:1) FOR UPDATE
this query is causing the problem: enq: TX - row lock contention
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 756M 756M Std Block Size: 8K
Shared Pool Size: 252M 252M Log Buffer: 1,264K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 2,501.54 3,029.25
Logical reads: 2,067.79 2,504.00
Block changes: 17.99 21.78
Physical reads: 0.02 0.03
Physical writes: 0.41 0.50
User calls: 140.74 170.44
Parses: 139.55 168.99
Hard parses: 0.01 0.01
Sorts: 10.65 12.89
Logons: 0.32 0.38
Executes: 139.76 169.24
Transactions: 0.83
% Blocks changed per Read: 0.87 Recursive Call %: 17.60
Rollback per transaction %: 0.00 Rows per Sort: 16.86
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.03 Soft Parse %: 100.00
Execute to Parse %: 0.15 Latch Hit %: 99.89
Parse CPU to Parse Elapsd %: 93.19 % Non-Parse CPU: 94.94
Shared Pool Statistics Begin End
Memory Usage %: 86.73 86.55
% SQL with executions>1: 90.99 95.33
% Memory for SQL w/exec>1: 79.15 90.58
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 397 86.3
enq: TX - row lock contention 508 59 115 12.7 Applicatio
log file sync 2,991 5 2 1.1 Commit
log file parallel write 3,238 5 2 1.1 System I/O
SQL*Net more data to client 59,871 4 0 1.0 Network
^LTime Model Statistics DB/Inst: WGMUGPR2/wgmugpr2 Snaps: 706-707
-> Total time in database user-calls (DB Time): 460.5s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
enq: TX - row lock contentio 508 .0 59 115 0.2
log file sync 2,991 .0 5 2 1.0
log file parallel write 3,238 .0 5 2 1.1
SQL*Net more data to client 59,871 .0 4 0 20.1
control file parallel write 1,201 .0 1 1 0.4
SQL*Net more data from clien 3,393 .0 1 0 1.1
SQL*Net message to client 509,864 .0 1 0 170.9
os thread startup 3 .0 1 196 0.0
db file parallel write 845 .0 1 1 0.3
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
59 1 1,377 0.0 12.9 bwnt27fp0z3gm
Module: syncdizio_op@snstr09 (TNS V1-V3)
SELECT puid,ptimestamp FROM PPOM_OBJECT WHERE puid IN (:1) FOR UPDATE
41 41 459 0.1 8.9 8cdswsp7cva2h
Module: syncdizio_op@snstr09 (TNS V1-V3)
select rpad(argument_name, 32, ' ') || in_out || ' ' || nvl(type_subname, data_t
ype) info from user_arguments where package_name IS NULL and object_name = uppe
r(:1) and argument_name is not null order by object_name, position
39 38 7,457 0.0 8.4 271hn6sgra2d8
Module: syncdizio_op@snstr09 (TNS V1-V3)
SELECT DISTINCT t_0.puid FROM PIMANTYPE t_0 WHERE (UPPER(t_0.ptype_name) = UPPER
(:1))
23 22 459 0.0 4.9 g92t08k78tgrw
Module: syncdizio_op@snstr09 (TNS V1-V3)
SELECT PIMANTYPE.puid, ptimestamp, ppid, rowning_siteu, rowning_sitec, pis_froze
n, ptype_class, ptype_name FROM PPOM_OBJECT, PIMANTYPE WHERE PPOM_OBJECT.puid =
(PIMANTYPE.puid)
22 22 158,004 0.0 4.9 chqpmv9c05ghq
Module: syncdizio_op@snstr09 (TNS V1-V3)
SELECT puid,ptimestamp FROM PPOM_OBJECT WHERE puid = :1
17 17 2,294 0.0 3.7 3n5trh11n1x8w
Module: syncdizio_op@snstr09 (TNS V1-V3)
SELECT PTYPECANNEDMETHOD.puid, ptimestamp, ppid, rowning_siteu, rowning_sitec, p
is_frozen, pobject_desc, psecure_bits,VLA_344_5, pmethod_name, pmsg_name, ptype_
name, pexec_seq, paction_type FROM PPOM_OBJECT,PBUSINESSRULE, PTYPECANNEDMETHOD
WHERE PTYPECANNEDMETHOD.puid IN (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,
In 9i there is a parameter ENQUEUE_RESOURCES, but in 10g Release 2 it has become obsolete.
I am new to performance tuning, please advise me!
Regards
Vamshi
The CBO has changed substantially between 9.2.x and 10.2.x. Please see MOS Doc 754931.1 (Cost Based Optimizer - Common Misconceptions and Issues - 10g and Above). Please verify that statistics have been gathered and are current - see MOS Doc 605439.1 (Master Note: Recommendations for Gathering Optimizer Statistics on 10g).
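As a quick check that statistics are current, a sketch like the following can help (the schema name `SCOTT` is a placeholder; replace it with your own):

```sql
-- Show when optimizer statistics were last gathered for each table,
-- oldest (or never-analyzed) tables first
SELECT table_name, num_rows, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'SCOTT'
ORDER  BY last_analyzed NULLS FIRST;
```

Tables with a NULL or very old LAST_ANALYZED are candidates for a DBMS_STATS.GATHER_TABLE_STATS run.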
Looking at your output, it seems to me that the database is entirely CPU-bound: 86.3% of time is spent on CPU, and for the last 5 SQL statements in the output virtually all of the elapsed time is spent on CPU.
Please post your init.ora parameters, along with your hardware specs. This question might be more appropriate in the "Database - General" forum.
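If the init.ora file is not at hand, one way to pull the explicitly set parameters straight from the running instance (a sketch; requires SELECT access to the V$ views):

```sql
-- List only the parameters that have been changed from their defaults
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;
```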
HTH
Srini -
Check DB Job taking very long time
Hi All,
I have BI system SAP EHP 1 for SAP NetWeaver 7.0 with SP level as
SAP_ABA 701 SAPKA70105
SAP_BASIS 701 SAPKB70105
Kernel is 152 and DBSL patch is 148
Database is Oracle 10.2.0.4.0.
Database size is 4.6 TB and the checkdb job is taking approx 9-10 hrs to complete. Even though the database is big, CHECKDB should not take this long to complete.
Every time we have to cancel this job as it impacts system performance.
There are enough background work process available in system.
Please provide any inputs if it can be helpful.
Regards
Vinay
Hi Vinay,
To avoid this unexpected behavior, you need to use the latest BR*Tools, update/adjust the individual statistic values, exclude the relevant tables from statistics creation using the BRCONNECT tool (ACTIV=I in DBSTATC), and also lock their statistics at the Oracle level (DBMS_STATS.LOCK_TABLE_STATS).
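A minimal sketch of the Oracle-side locking step (the owner `SAPSR3` and table name `MYTABLE` are placeholders; the DBSTATC/BRCONNECT exclusion is done separately in the SAP tools):

```sql
-- Lock optimizer statistics for one table so later gathering jobs skip it
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SAPSR3', tabname => 'MYTABLE');

-- Verify the lock (STATTYPE_LOCKED shows 'ALL' when both data and cache stats are locked)
SELECT table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  owner = 'SAPSR3' AND table_name = 'MYTABLE';

-- Undo if the table's statistics need to be refreshed later
EXEC DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'SAPSR3', tabname => 'MYTABLE');
```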
BR*Tools 7.10 are used with Oracle 10g by default. This is also a prerequisite for most of the new features.
We need to have a plan for the following.
1. Update the brtools version from 7.00 (40) to 7.10 (41) [ Latest available in SMP ] or to 7.20 (16) [ Exceptions with Non-ABAP Systems ]
2. Execute the script attached to Note 1020260 - Delivery of Oracle statistics (Oracle 10g, 11g)
Br,
Venky.