Gather system statistics
Hi Friends,
I want to gather system statistics in my Oracle 9.2.0.7 (Windows) environment.
(1) First I created a statistics table:
execute DBMS_STATS.CREATE_STAT_TABLE ('SYS','MY_STATS');
(2) Script to gather SYSTEM statistics during office hours (8 hours: 8 AM to 4 PM)
begin
  dbms_stats.gather_system_stats(
    gathering_mode => 'interval',
    interval       => 480,
    stattab        => 'MY_STATS',
    statid         => 'OLTP');
end;
/
REM ===================================================
REM END of Script
REM ===================================================
(3) Import the Collected System Statistics
Import the statistics daily around 8 AM, because that is when users start entering transactions.
variable jobno number;
begin
dbms_job.submit(:jobno,'dbms_stats.import_system_stats(''MY_STATS'',''OLTP'');',
                SYSDATE,'SYSDATE+1');
COMMIT;
END;
/
========================================================
I will also collect OLAP statistics during the night.
Gathering system statistics will end after 4 PM, so it captures system statistics over the 8-hour interval, and I am going to import them the next day at 8 AM.
Is this the correct method? Please shed some light on this.
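For comparison, the same capture can also be done with explicit start/stop calls instead of interval mode; a minimal sketch, assuming the MY_STATS table from step (1) already exists:

```sql
-- At 8 AM: start capturing workload statistics
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'start');

-- At 4 PM, after a representative OLTP day: stop capturing,
-- saving the values into the user statistics table under a stat ID
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'stop', stattab => 'MY_STATS', statid => 'OLTP');
```

This avoids relying on the interval timer and lets you decide the exact window interactively.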
Hi,
Oracle recommends gathering system statistics from Oracle 9i onwards. These lines are from the Oracle 9i R2 documentation:
Oracle9i Database Performance Tuning Guide and Reference
Release 2 (9.2)
Gathering System Statistics
System statistics enable the optimizer to consider a system's I/O and CPU performance and utilization. For each plan candidate, the optimizer computes estimates for I/O and CPU costs. It is important to know the system characteristics to pick the most efficient plan with optimal proportion between I/O and CPU cost.
System I/O characteristics depend on many factors and do not stay constant all the time. Using system statistics management routines, database administrators can capture statistics in the interval of time when the system has the most common workload. For example, database applications can process OLTP transactions during the day and run OLAP reports at night. Administrators can gather statistics for both states and activate appropriate OLTP or OLAP statistics when needed. This enables the optimizer to generate relevant costs with respect to available system resource plans.
When Oracle generates system statistics, it analyzes system activity in a specified period of time. Unlike table, index, or column statistics, Oracle does not invalidate already parsed SQL statements when system statistics get updated. All new SQL statements are parsed using new statistics. Oracle Corporation highly recommends that you gather system statistics. The DBMS_STATS.GATHER_SYSTEM_STATS routine collects system statistics in a user-defined timeframe. You can also set system statistics values explicitly using DBMS_STATS.SET_SYSTEM_STATS. Use DBMS_STATS.GET_SYSTEM_STATS to verify system statistics.
So, for better performance, as per the above document, we need system statistics.
If they are not needed, why are they not needed? Can you please describe this?
Similar Messages
-
MBRC and SYSTEM STATISTICS in Oracle 10g database.
Hi All,
I am performing a database upgrade from Oracle 8i on Solaris to Oracle 10g on HP-UX using the exp/imp method.
But I have some doubts regarding MBRC and system statistics.
MBRC in Oracle 10g is adjusted automatically if the MBRC parameter is not set, but I found the value 128, as shown below.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> show parameter multi
NAME                                 TYPE        VALUE
db_file_multiblock_read_count        integer     128
Also I performed a simple full table scan to test it, but db file scattered read was reading 128 blocks. So I don't think 128 is suitable or automatic; I mean, MBRC is not set accordingly, it always uses 128.
Does this MBRC value affect whole-database performance?
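If the goal is to let 10g auto-tune the value, one option (a sketch; it assumes an spfile is in use and the parameter was set explicitly) is to remove the explicit setting so the parameter reverts to its default behavior:

```sql
-- Remove the explicit setting; takes effect after the next restart
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';
```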
Regarding SYSTEM STATISTICS i found below result:
SQL> select * from AUX_STATS$
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 11-09-2009 04:59
SYSSTATS_INFO DSTOP 11-09-2009 04:59
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 128.239557
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
Now, whether NOWORKLOAD or WORKLOAD is better? This server is still being built, so how can I collect WORKLOAD stats when a high load can't be generated on this server? Is it really required to gather system statistics, and what will happen with NOWORKLOAD stats?
I have not seen single database where system stats are gathered in our organisation having more than 2000 databases.
-Yasser
Maybe this article written by Tom Kyte helps:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:499197100346264909 -
Hi, there,
This might be a dumb question, but is it necessary to gather system statistics on Exadata machines?
I (fairly) recently migrated my production EDW from a V2 quarter-rack to an X3-2 quarter-rack. On a "normal" system, if I migrated the database to a different (faster) server, I would look at regathering the system statistics.
Is this something that's sensible or worthwhile with Exadata?
Mark
Hi Mark,
Before you gather system stats you can run the following sql to get your current values.
SET SERVEROUTPUT ON
DECLARE
STATUS VARCHAR2(20);
DSTART DATE;
DSTOP DATE;
PVALUE NUMBER;
PNAME VARCHAR2(30);
BEGIN
PNAME := 'CPUSPEEDNW';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('cpuspeednw : '||pvalue);
PNAME := 'IOSEEKTIM';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('ioseektime in ms : '||pvalue);
PNAME := 'IOTFRSPEED';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('iotfrspeed : '||pvalue);
PNAME := 'SREADTIM';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('single block readtime in ms : '||pvalue);
PNAME := 'MREADTIM';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('multi block readtime in ms : '||pvalue);
PNAME := 'CPUSPEED';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('cpuspeed : '||pvalue);
PNAME := 'MBRC';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('multiblock read count : '||pvalue);
PNAME := 'MAXTHR';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('max threads : '||pvalue);
PNAME := 'SLAVETHR';
DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue);
DBMS_OUTPUT.PUT_LINE('slave threads : '||pvalue);
END;
/
Best advice I can give would be to check Doc ID 1274318.1 and search for dbms_stats.
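For reference, newer DBMS_STATS versions add an Exadata-specific gathering mode that sets MBRC and the I/O transfer speed to values appropriate for the platform; availability depends on database and patch level, so treat this as a sketch to verify against Doc ID 1274318.1 before running:

```sql
-- Set system statistics to Exadata-appropriate values (no workload capture)
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');
```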
Regards,
Tycho -
Gather Schema Statistics Report taking more than 13 hours to complete is it normal?
I ran the Gather Schema Statistics report at 9 PM and it completed at 11 AM the next morning. It took more than 13 hours; is this behavior normal?
I used the following parameters:
Schema name: ALL
Estimate percent:50
Backup Flag :NOBACKUP
History Mode :LASTRUN
Gather Option:GATHER
Invalidate Dependent Cursor : Y
My database size is about 250 GB.
Please reply.
Gather Schema Statistics is erroring out when I'm using the GATHER_AUTO option with 10%.
Here is the log file
+---------------------------------------------------------------------------+
Application Object Library: Version : 12.0.0
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
FNDGSCST module: Gather Schema Statistics
+---------------------------------------------------------------------------+
Current system time is 13-AUG-2013 10:42:12
+---------------------------------------------------------------------------+
**Starts**13-AUG-2013 10:42:12
ORACLE error 20001 in FDPSTP
Cause: FDPSTP failed due to ORA-20001: SYS_NTGNSVL1S+OCZGRAAHKD9MYG== is an invalid identifier
ORA-06512: at "APPS.FND_STATS", line 774
ORA-06512: at line 1
The SQL statement being executed at the time of the error was: SE
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP
ORA-20001: SYS_NTGNSVL1S+OCZGRAAHKD9MYG== is an invalid identifier
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
+---------------------------------------------------------------------------+
Executing request completion options...
Finished executing request completion options.
+---------------------------------------------------------------------------+
Concurrent request completed
Current system time is 13-AUG-2013 10:43:29
+---------------------------------------------------------------------------+
I have used the following parameters
Schema name: ALL
Estimate percent:10
Backup Flag :NOBACKUP
History Mode :LASTRUN
Gather Option:GATHER_AUTO
Invalidate Dependent Cursor : Y -
Gather Schema Statistics produces HUGE oracle trace file
Hi
I am running gather schema stats on my E-Business Suite 12.1.3 / 11.2.0.3 system.
Each week we run Gather Schema Statistics.
I have noticed that when the job completes, a 3 GB trace file is produced under the Oracle RDBMS home.
I have tried un-checking the box when the concurrent request is submitted (DO NOT SAVE OUTPUT FILE), but it is still produced.
I don't want it and do not understand why it is produced. Does anyone out there have a resolution?
Reply:
- What is the file name/location? What is the content of this trace file?
- Do you have any debug/trace enabled on the HRMS schema?
- Was this working before? If yes, have any changes been made since then?
What are the parameters you use to run "Gather Schema Statistics" concurrent program?
Thanks,
Hussein -
'Gather Schema Statistics' concurrent request question
Hello,
I just have a question about the 'Gather Schema Statistics' concurrent request.
We run this every evening on ALL schemas, however our SYS and SYSTEM schemas aren't registered in the application so therefore stats aren't gathered for these.
Should SYS and SYSTEM be registered and stats gathered for these 2 schemas every night?
Please advise
Thank you
Sarah
Hi Sarah,
First of all, it's not necessary to run gather schema statistics every evening. Once a week or once a month would suffice. Please look at the following note:
How Often Should Gather Schema Statistics Program be Run? [ID 168136.1]
With respect to your question:
The SYS and SYSTEM users are not included when the "ALL" parameter is specified in the Gather Schema Statistics program; it covers only the product-related users. However, it is recommended that you collect statistics for the SYS and SYSTEM users, and you can automate this by creating a job and scheduling it at the DB level. The following note will assist you:
EBPERF FAQ - Collecting Statistics in Oracle EBS 11i and R12 [ID 368252.1]
Cheers,
Asif -
Hi,
I am moving my 10.2.0.3 instance from HP-UX PA-RISC to HP-UX Itanium.
I am planning to gather system and dictionary stats based on Oracle Metalink note:
How to gather statistics on SYS objects and fixed_objects? (Doc ID 457926.1)
To gather the dictionary stats:-
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
SQL> exec DBMS_STATS.GATHER_DATABASE_STATS (gather_sys=>TRUE);
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
GATHER_FIXED_OBJECTS_STATS also gathers statistics for dynamic tables, e.g. the X$ tables, which are
loaded into the SGA during startup. Gathering statistics for fixed objects is normally recommended if poor performance is encountered while querying dynamic views, e.g. V$ views.
Since fixed objects record current database activity, statistics gathering should be done when the database has a representative load, so that the statistics reflect normal database activity.
To gather the fixed objects stats use: EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
I have two questions:
1. Should I run all 4 commands mentioned above in order to collect system stats?
2. How can I roll back those statistics in case of poor performance against the dictionary tables?
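One way to make question 2 possible is to export the current stats to a user table before gathering, then restore on demand; a sketch (STAT_BACKUP is a hypothetical stat-table name):

```sql
-- Before gathering: keep a copy of the current dictionary stats
EXEC DBMS_STATS.CREATE_STAT_TABLE(user, 'STAT_BACKUP');
EXEC DBMS_STATS.EXPORT_DICTIONARY_STATS(stattab => 'STAT_BACKUP');

-- If performance regresses, put the old stats back
EXEC DBMS_STATS.IMPORT_DICTIONARY_STATS(stattab => 'STAT_BACKUP');

-- On 10g+ there is also an automatic history to fall back on,
-- e.g. restore the stats as they were 24 hours ago
EXEC DBMS_STATS.RESTORE_DICTIONARY_STATS(SYSTIMESTAMP - INTERVAL '1' DAY);
```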
Thanks
Yoav
SQL> exec dbms_stats.create_stat_table(user,'STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.export_system_stats('STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
After moving to production I will gather system/dictionary stats, and in case of problems I should import those stats.
Is that correct? YES
Second, what about the 4 commands:
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
SQL> exec DBMS_STATS.GATHER_DATABASE_STATS (gather_sys=>TRUE);
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
Should I run them all?
If you run statistics with these commands and later want to roll them back, you need to run the IMPORT statistics procedure so that the old statistics are reloaded into the database. :) -
APPLICATIONS 11i GATHER SCHEMA STATISTICS run frequency
Product: AOL
Date written: 2003-12-02
================================================
PURPOSE
How often to run Gather Schema Statistics in Applications 11i.
Explanation
There is no fixed schedule for running Gather Schema Statistics. Some systems may need it weekly, while others can run it monthly. The frequency depends on the volume and type of data and on how often it changes.
To determine the most effective schedule, run it at different intervals and monitor the results.
In general, run it:
1) After a large amount of data, or the data content, has changed
2) After a data import
3) When performance degradation occurs
In 11i the ANALYZE command and the DBMS_STATS package are not supported, so FND_STATS must be used.
Gather Schema Statistics uses FND_STATS.
Example
Reference Documents
Note 168136.1 - How Often Should Gather Schema Statistics Program be Run?
john,
you can do these things
1. gather schema statistics weekly - full
2. gather schema statistics daily - at least 10%
3. rebuild fragmented indexes regularly - every 15 days
4. coalesce the tablespaces once a month
5. purge unwanted data once a week
6. pin the DB objects into the SGA with the dbms_shared_pool package
7. find the objects which have become invalid and then validate them
8. purge workflow runtime data
and there are still some more things that, as system administrator, you should keep a watch on...
but if you do the above, your job is well done
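Item 6 in the list above can be sketched like this (the package name is only an example, and DBMS_SHARED_POOL must be installed via dbmspool.sql):

```sql
-- Pin a frequently used package in the shared pool so it is not aged out
-- ('P' = package; other flags cover procedures, triggers, sequences, cursors)
EXEC DBMS_SHARED_POOL.KEEP('APPS.FND_GLOBAL', 'P');
```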
any help post here
regards
sdsreenivas -
Effectively Determining System Statistics
I've done a number of runs of DBMS_STATS.GATHER_SYSTEM_STATS against our production system in preparation for putting system statistics into effect (OLTP system for web services).
I'm trying to determine which "run" would be best. I have runs during our heaviest load that run for 45 minutes every hour.
I also have runs that span multiple hours starting during peak times and going into less intense periods.
Total of 26 runs.
CPU avg = 1107, median = 1115, Max = 1154, Min = 998
Single read I/O avg = 0.81, median = 0.84, Max = 1.49, Min 0.15
Multi-block I/O avg = 0.16, median = 0.14, Max = 0.34, Min = 0.07
Max thruput avg = 50M, median = 48.6M, Max =85.5M, Min = 31.5M
Avg MB Read Count avg = 5.7, median = 6, max = 7, min = 4
Should I use the values closest to the averages/median? Or the "longest" run? Or the runs at peak load?
How critical are the settings for the I/O values?
Does the CBO look at the Max Throughput or Average MB Read Count?
I like the answer, but naturally it's effectively impossible in our environment. We have no real ability to generate a representative workload (we're on 9i); the resources just aren't there.
I took the results and gave them to our system performance expert/statistician, who did a very nice job of statistical analysis (which also basically matched my more instinctive selection). There was one run that fell within a 95% confidence level for all factors, so we'll go with that one.
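Once a representative run is chosen, its values can also be pinned explicitly with DBMS_STATS.SET_SYSTEM_STATS; a sketch using the medians quoted above (the values are illustrative, not recommendations):

```sql
BEGIN
  DBMS_STATS.SET_SYSTEM_STATS('CPUSPEED', 1115);  -- MHz
  DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 0.84);  -- ms, single-block read
  DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 0.14);  -- ms, multiblock read
  DBMS_STATS.SET_SYSTEM_STATS('MBRC',     6);     -- average multiblock read count
END;
/
```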
But it would be nice if someone had some guidelines about how to effectively gather statistics; I haven't found any, and I'm sure they would be useful to a great many people. -
How often we need to run gather schema statistics etc.. ??
HI,
Am on 11.5.10.2
RDBMS 9.2.0.6
How often we need to run the following requests in Production...
1.Gather schema statistics
2.Gather Column statistics
3.Gather Table statistics
4.Gather All Column statistics
Thanks
Hi;
We discussed here before about same issue. Please check below thread which could be helpful about your issue:
How often we need to run gather schema statistics
Re: Gather schema stats run
How we can collect custom schema information wiht gather statistics
gather schema stats for EBS 11.5.10
gather schema stats conc. program taking too long time
Re: gather schema stats conc. program taking too long time
How it runs
Gather Schema Statistics
http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
gather statistict collect which informations
Gather Schema Statistics...
Regard
Helios -
Hi
I ran gather schema Statistics
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 4 internal_flag= NOBACKUP
the error are:
stats on table AQ$_WF_CONTROL_P is locked
stats on table FND_CP_GSM_IPC_AQTBL is locked
Error #1: ERROR: While GATHER_TABLE_STATS:
object_name=AP.JE_FR_DAS_010***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Error #2: ERROR: While GATHER_TABLE_STATS:
object_name=AP.JE_FR_DAS_010_NEW***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Error #3: ERROR: While GATHER_TABLE_STATS:
object_name=AP.JG_ZZ_SYS_FORMATS_ALL_B***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
I ran this a while ago.
Can anyone help me to fix it?
Thanks
Please see old threads which discuss the same issue -- http://forums.oracle.com/forums/search.jspa?threadID=&q=Gather+AND+Schema+AND+Statistics+AND+ORA-20001&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
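Note that the "stats on table ... is locked" lines are warnings, not errors; statistics on AQ/queue tables are typically locked deliberately and should stay that way. Locked statistics can be listed with a query like this sketch (the owner list is an example):

```sql
-- Show which tables have locked statistics and what kind of lock
SELECT owner, table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  stattype_locked IS NOT NULL
AND    owner IN ('APPLSYS', 'AP');
```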
Thanks,
Hussein -
Hi All,
When I run gather schema statistics, it completes with status Normal, but it took only 45 min.
EBS - 12.1.3
DB - 11.2.0.3
OS - RHEL 64
As per my understanding, it should take 2-3 hrs. After the DB was upgraded to 11.2.0.3, it completed with an error, and I followed the note: Gather Schema Statistics fails with Ora-20001 errors after 11G database upgrade [ID 781813.1]
After that it completed with status Normal, but it took only 45 min max in every run, with percentage 10 and 40. My DB size is 90 GB.
Request Log file:
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 40 degree = 8 internal_flag= NOBACKUP
stats on table FND_CP_GSM_IPC_AQTBL is locked
stats on table FND_SOA_JMS_IN is locked
stats on table FND_SOA_JMS_OUT is locked
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP
stats on table FND_CP_GSM_IPC_AQTBL is locked
stats on table FND_SOA_JMS_IN is locked
stats on table FND_SOA_JMS_OUT is locked
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
Now I query the below:
SQL> select column_name, nvl(hsize,254) hsize from FND_HISTOGRAM_COLS where table_name = 'JE_BE_LINE_TYPE_MAP' order by column_name;
COLUMN_NAME HSIZE
SOURCE 254
a) Is this expected behavior?
b) If not, please suggest how to fix it.
Thanks
SAL
Did the concurrent program fail? -- Gather Schema Statistics Fails With Error For APPLSYS Schema (Doc ID 1393184.1)
Please check the LAST_ANALYZED column of DBA_TABLES and DBA_INDEXES views (Note: 166346.1 - How to Determine When a Table Was Last Analyzed By the Gather Schema Statistics Program).
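The check suggested above can be sketched as follows (the schema name is an example):

```sql
-- See when each table's statistics were last gathered
SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'APPLSYS'
ORDER  BY last_analyzed DESC NULLS LAST;
```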
Thanks,
Hussein -
Gather Schema Statistics issue?
Hi
Actually, we have a custom schema in our EBS R12.0.6 instance database. But I have observed that the 'Gather Schema Statistics' program is not picking up this schema. Why? Maybe something is wrong with the database schema registration, but the interface associated with this schema has been running fine for a year and a half. I do not know how to resolve this issue.
I can manually run 'Gather Table Statistics' program against all tables.
Regards
Hi;
To run gather stats for a custom schema, please check:
gather schema stats for EBS 11.5.10
gather schema stats for EBS 11.5.10
I can manually run the 'Gather Table Statistics' program against all tables. Please see:
How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually [ID 419728.1]
Also see:
How to work Gather stat
Gather Schema Statistics
http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
Regard
Helios -
Gather Schema Statistics - GATHER AUTO option failing to gather stats
Hi ,
We recently upgraded to a 10g DB and the 11.5.10 version of Oracle EBS. I want to employ the GATHER AUTO option while running Gather Schema Statistics.
To test how it works, I created a test table with 1 million rows. Then stats were gathered for this table alone by using Gather Table Stats. Next, I deleted ~12% of the rows and issued a commit. The view all_tab_statistics shows that the table has stale statistics (stale_stats column = YES). After that I ran Gather Schema Stats for that particular schema, but the request did not pick the test table to be gathered.
What is the criterion by which Oracle chooses which tables to gather statistics for under the Gather Auto option? I am aware of the 10% change in data, but how is this 10% calculated? Is it based only on (inserts + updates + deletes)?
Also, what is the difference between Gather Auto and Gather Stale?
Any help is appreciated.
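The staleness decision is driven by the DML counts that table monitoring records; a sketch for inspecting them (the schema name XX_TEST is hypothetical):

```sql
-- Push the in-memory monitoring counters to the dictionary first
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- Tracked DML volume since the last stats gather
SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'XX_TEST';
```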
Thanks,
Jithin
Randalf,
FYI, this is what happens inside the concurrent program call; there are a few additional parameters for output/error msgs:
FYI.. this is what happens inside the concurrent progarm call.. there are a few additional parameters for output/ error msgs:
procedure GATHER_SCHEMA_STATS(errbuf out varchar2,
retcode out varchar2,
schemaname in varchar2,
estimate_percent in number,
degree in number ,
internal_flag in varchar2,
request_id in number,
hmode in varchar2 default 'LASTRUN',
options in varchar2 default 'GATHER',
modpercent in number default 10,
invalidate in varchar2 default 'Y')
is
exist_insufficient exception;
bad_input exception;
pragma exception_init(exist_insufficient,-20000);
pragma exception_init(bad_input,-20001);
l_message varchar2(1000);
Error_counter number := 0;
Errors Error_Out;
-- num_request_id number(15);
conc_request_id number(15);
degree_parallel number(2);
begin
-- Set the package body variable.
stathist := hmode;
-- check first if degree is null
if degree is null then
degree_parallel:=def_degree;
else
degree_parallel := degree;
end if;
l_message := 'In GATHER_SCHEMA_STATS , schema_name= '|| schemaname
|| ' percent= '|| to_char(estimate_percent) || ' degree = '
|| to_char(degree_parallel) || ' internal_flag= '|| internal_flag ;
FND_FILE.put_line(FND_FILE.log,l_message);
BEGIN
FND_STATS.GATHER_SCHEMA_STATS(schemaname, estimate_percent,
degree_parallel, internal_flag, Errors, request_id,stathist,
options,modpercent,invalidate);
exception
when exist_insufficient then
errbuf := sqlerrm ;
retcode := '2';
l_message := errbuf;
FND_FILE.put_line(FND_FILE.log,l_message);
raise;
when bad_input then
errbuf := sqlerrm ;
retcode := '2';
l_message := errbuf;
FND_FILE.put_line(FND_FILE.log,l_message);
raise;
when others then
errbuf := sqlerrm ;
retcode := '2';
l_message := errbuf;
FND_FILE.put_line(FND_FILE.log,l_message);
raise;
END;
FOR i in 0..MAX_ERRORS_PRINTED LOOP
exit when Errors(i) is null;
Error_counter:=i+1;
FND_FILE.put_line(FND_FILE.log,'Error #'||Error_counter||
': '||Errors(i));
-- added to send back status to concurrent program manager bug 2625022
errbuf := sqlerrm ;
retcode := '2';
END LOOP;
end; -
Time Series Format of Operating System Statistics - "Please Ignore"
Hi List,
On the AWR report we see the "Operating System Statistics" section. Since generating multiple AWR reports is daunting and takes a lot of time, and I'm only interested in particular columns of dba_hist_osstat (from which that section of the AWR report pulls its data), I'd like to have an output like this:
SNAP_ID BUSY_TIME LOAD NUM_CPUS PHYSICAL_MEMORY_BYTES
244 6792 .23 1 169520
245 1603 .04 1 154464
246 28415 .05 1 5148
Problem is, the dba_hist_osstat values are stored as rows...
select * from dba_hist_osstat where snap_id = 244;
244 2607950532 1 0 NUM_CPUS 1
244 2607950532 1 1 IDLE_TIME 57153
244 2607950532 1 2 BUSY_TIME 5339
244 2607950532 1 3 USER_TIME 1189
244 2607950532 1 4 SYS_TIME 4077
244 2607950532 1 5 IOWAIT_TIME 2432
244 2607950532 1 6 NICE_TIME 0
244 2607950532 1 14 RSRC_MGR_CPU_WAIT_TIME 0
244 2607950532 1 15 LOAD 0.099609375
244 2607950532 1 1008 PHYSICAL_MEMORY_BYTES 300536
So I wrote this query to walk through all the SNAP_IDs with the following output. I'd just like to format particular columns like the ones mentioned above so that the data is more meaningful and can easily be loaded into Excel for visualization. I can also do this for "Instance Activity Stats" (dba_hist_sysstat) and other sections.
select
b.snap_id, substr(e.stat_name, 1, 35) as name,
(case when e.stat_name like 'NUM_CPU%' then e.value
when e.stat_name = 'LOAD' then e.value
when e.stat_name = 'PHYSICAL_MEMORY_BYTES' then e.value
else e.value - b.value
end) as value
from dba_hist_osstat b,
dba_hist_osstat e
where
b.stat_name = 'BUSY_TIME' and
b.dbid = 2607950532
and e.dbid = 2607950532
and b.instance_number = 1
and e.instance_number = 1
and e.snap_id = b.snap_id + 1
and b.stat_id = e.stat_id
order by snap_id, name asc
SNAP_ID NAME VALUE
244 BUSY_TIME 6792
245 BUSY_TIME 1603
246 BUSY_TIME 28415
BTW, if you generate AWR reports on the same SNAP_IDs, the values you get from the query will match the report.
- Karl Arao
karlarao.wordpress.com
Edited by: Karl Arao on Jan 31, 2010 12:07 PM
I got the final version of the script...
SELECT s0.snap_id,
TO_CHAR(s0.END_INTERVAL_TIME,'YYYY-Mon-DD HH24:MI:SS') snap_start,
TO_CHAR(s1.END_INTERVAL_TIME,'YYYY-Mon-DD HH24:MI:SS') snap_end,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) ela_min,
s1t1.value - s1t0.value AS busy_time,
s2t1.value AS load,
s3t1.value AS num_cpus,
s4t1.value AS physical_memory_bytes
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_osstat s1t0,
dba_hist_osstat s1t1,
dba_hist_osstat s2t1,
dba_hist_osstat s3t1,
dba_hist_osstat s4t1
WHERE s0.dbid = 2607950532
AND s1t0.dbid = s0.dbid
AND s1t1.dbid = s0.dbid
AND s2t1.dbid = s0.dbid
AND s3t1.dbid = s0.dbid
AND s4t1.dbid = s0.dbid
AND s0.instance_number = 1
AND s1t0.instance_number = s0.instance_number
AND s1t1.instance_number = s0.instance_number
AND s2t1.instance_number = s0.instance_number
AND s3t1.instance_number = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s1t0.snap_id = s0.snap_id
AND s1t1.snap_id = s0.snap_id + 1
AND s2t1.snap_id = s0.snap_id + 1
AND s3t1.snap_id = s0.snap_id + 1
AND s4t1.snap_id = s0.snap_id + 1
AND s1t0.stat_name = 'BUSY_TIME'
AND s1t1.stat_name = s1t0.stat_name
AND s2t1.stat_name = 'LOAD'
AND s3t1.stat_name = 'NUM_CPUS'
AND s4t1.stat_name = 'PHYSICAL_MEMORY_BYTES'
ORDER BY snap_id ASC
SNAP_ID SNAP_START SNAP_END ELA_MIN BUSY_TIME LOAD NUM_CPUS PHYSICAL_MEMORY_BYTES
244 2010-Jan-14 12:38:36 2010-Jan-14 13:03:23 24.78 6792 .239257813 1 169520
245 2010-Jan-14 13:03:23 2010-Jan-14 13:10:38 7.25 1603 .049804688 1 154464
246 2010-Jan-14 13:10:38 2010-Jan-14 14:00:39 50.02 28415 .059570313 1 5148
247 2010-Jan-14 14:00:39 2010-Jan-14 15:00:42 60.04 8993 0 1 29292
248 2010-Jan-14 15:00:42 2010-Jan-15 23:56:37 1975.92 -46770 .049804688 1 311216
249 2010-Jan-15 23:56:37 2010-Jan-16 01:00:40 64.05 17722 .659179688 1 109880
250 2010-Jan-16 01:00:40 2010-Jan-16 02:00:42 60.03 7089 .229492188 1 43576
251 2010-Jan-16 02:00:42 2010-Jan-16 10:05:01 484.31 -23928 0 1 310720
252 2010-Jan-16 10:05:01 2010-Jan-16 11:01:02 56.03 8906 .099609375 1 186432
From the AWR...
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 244 14-Jan-10 12:38:36 25 1.8 <-- same from above output (start & end)
End Snap: 245 14-Jan-10 13:03:23 24 2.0
Elapsed: 24.78 (mins) <-- same from above output (4th col)
DB Time: 0.32 (mins)
... output snipped ...
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 244-245
Statistic Total
BUSY_TIME 6,792 <-- same from above output (5th col)
IDLE_TIME 141,649
IOWAIT_TIME 4,468
NICE_TIME 0
SYS_TIME 4,506
USER_TIME 2,115
LOAD 0
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 169,520 <-- same from above output (last col)
NUM_CPUS 1
And coming from the original query, all output matches the above:
select
b.snap_id, substr(e.stat_name, 1, 35) as name,
(case when e.stat_name like 'NUM_CPU%' then e.value
when e.stat_name = 'LOAD' then e.value
when e.stat_name = 'PHYSICAL_MEMORY_BYTES' then e.value
else e.value - b.value
end) as value
from dba_hist_osstat b,
dba_hist_osstat e
where
b.stat_name = 'LOAD' and
b.dbid = 2607950532
and e.dbid = 2607950532
and b.instance_number = 1
and e.instance_number = 1
and e.snap_id = b.snap_id + 1
and b.stat_id = e.stat_id
order by snap_id, name asc
SNAP_ID NAME VALUE
244 LOAD .239257813
245 LOAD .049804688
246 LOAD .059570313
247 LOAD 0
248 LOAD .049804688
249 LOAD .659179688
250 LOAD .229492188
251 LOAD 0
252 LOAD .099609375
Hope this helps...
- Karl Arao
karlarao.wordpress.com