SQL timings, statistics
Hi,
I want to test the difference between ANSI SQL and Oracle native SQL using joins (inner, outer, left, right, full). How do I get the statistics and the timings, and where do they get displayed? Into which table?
Please help with this.
Thanks in advance,
Ajit
There are various methods and tools that you can use. You can simply SET TIMING ON and run your queries to see roughly how long each one takes to execute once. For more exact results, you can use DBMS_UTILITY.GET_TIME before and after the query and take the difference. Better yet, you can repeat the query multiple times in a loop and take the difference between the start and end times from DBMS_UTILITY.GET_TIME. Below is a link to a method by Tom Kyte that does this and also reports exactly what resources each query uses. Below the link I have included an example of how you would use Tom Kyte's runstats_pkg to do the sort of comparison you want. You will probably want to use much larger tables and more loop iterations than in my brief example, which is only meant to demonstrate the method and the type of information it provides. Although the times were the same in my test, you can see that there are tiny differences that may become significant with larger test samples and more loops. When you have completed your testing, please post the results.
http://asktom.oracle.com/~tkyte/runstats.html
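For the simpler DBMS_UTILITY.GET_TIME approach mentioned above, a minimal sketch looks like the following (the tables and iteration count are illustrative; GET_TIME returns hundredths of a second, and SET SERVEROUTPUT ON is needed to see the result):

```sql
DECLARE
  l_start  PLS_INTEGER;
  l_dummy  NUMBER;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;          -- time in hundredths of a second
  FOR i IN 1 .. 1000 LOOP
    SELECT COUNT(*) INTO l_dummy
    FROM   emp, dept
    WHERE  emp.deptno = dept.deptno;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Elapsed: '
    || (DBMS_UTILITY.GET_TIME - l_start) || ' hsecs');
END;
/
```

Run the same block once per join syntax and compare the elapsed figures; runstats_pkg below gives you the same timing plus latch and statistic deltas.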
-- usage example:
scott@ORA92> SET SERVEROUTPUT ON SIZE 1000000
scott@ORA92> EXECUTE runstats_pkg.rs_start
PL/SQL procedure successfully completed.
scott@ORA92> DECLARE
2 v_dummy NUMBER;
3 BEGIN
4 FOR i IN 1 .. 1000
5 LOOP
6 SELECT COUNT(*) INTO v_dummy
7 FROM emp, dept
8 WHERE emp.deptno = dept.deptno;
9 END LOOP;
10 END;
11 /
PL/SQL procedure successfully completed.
scott@ORA92> EXECUTE runstats_pkg.rs_middle
PL/SQL procedure successfully completed.
scott@ORA92> DECLARE
2 v_dummy NUMBER;
3 BEGIN
4 FOR i IN 1 .. 1000
5 LOOP
6 SELECT COUNT(*) INTO v_dummy
7 FROM emp INNER JOIN dept
8 ON emp.deptno = dept.deptno;
9 END LOOP;
10 END;
11 /
PL/SQL procedure successfully completed.
scott@ORA92> EXECUTE runstats_pkg.rs_stop
Run1 ran in 31 hsecs
Run2 ran in 31 hsecs
run 1 ran in 100% of the time
Name Run1 Run2 Diff
LATCH.active checkpoint queue 0 1 1
LATCH.lgwr LWN SCN 0 1 1
LATCH.mostly latch-free SCN 0 1 1
LATCH.redo allocation 487 489 2
STAT...calls to kcmgcs 3 5 2
STAT...consistent gets 5,003 5,005 2
STAT...consistent gets - exami 1,003 1,005 2
STAT...cleanout - number of kt 3 5 2
STAT...active txn count during 3 5 2
LATCH.shared pool 1,069 1,071 2
LATCH.redo writing 0 3 3
STAT...CPU used by this sessio 23 20 -3
STAT...recursive cpu usage 20 17 -3
STAT...CPU used when call star 23 20 -3
LATCH.cache buffers chains 11,496 11,500 4
STAT...db block gets 501 505 4
STAT...db block changes 982 986 4
STAT...consistent changes 491 495 4
LATCH.messages 0 5 5
LATCH.library cache 2,100 2,106 6
STAT...session logical reads 5,504 5,510 6
LATCH.library cache pin 2,056 2,062 6
STAT...bytes received via SQL* 1,021 1,029 8
LATCH.checkpoint queue latch 0 32 32
STAT...redo size 60,628 60,668 40
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
17,257 17,320 63 99.64%
PL/SQL procedure successfully completed.
Similar Messages
-
Query on SQL timings
How do I select sysdate and systime from SQL Server 2005?
And
I have timings like this:
4:50
4:50
I want the output as the sum of the above timings:
9:40
in SQL Server 2005, in a query.
You may have better luck in a SQL Server forum.
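For what it's worth, one way to sum 'h:mm' strings in T-SQL is to convert each value to minutes, sum, and format the total back; the table and column names below are hypothetical:

```sql
-- Assumes a table timings(t VARCHAR(10)) holding values like '4:50'
SELECT CAST(SUM(total_min) / 60 AS VARCHAR(10)) + ':'
     + RIGHT('0' + CAST(SUM(total_min) % 60 AS VARCHAR(2)), 2) AS total
FROM (
  SELECT CAST(LEFT(t, CHARINDEX(':', t) - 1) AS INT) * 60
       + CAST(SUBSTRING(t, CHARINDEX(':', t) + 1, 2) AS INT) AS total_min
  FROM timings
) x;
-- '4:50' + '4:50' -> 580 minutes -> '9:40'
```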
-
Windows Azure SQL Databases Statistics Best practices
On SQL Azure, is it good practice to have auto-update of statistics disabled, or not?
Please do not compare Azure to the on-premises SQL engine; those who have worked on Azure know what I mean.
It is a pain to maintain the indexes, especially if they have BLOB-type columns; no online index rebuilds are allowed in Azure. I was targeting statistics because I see the data being frequently updated, so maybe I can have the developers update the stats as soon as they do a major update/insert, or I can have a job do it on a weekly basis if I turn off the auto stats update.
I execute a stats FULLSCAN update every week, but I think it is overwritten by the auto update stats. So, back to my question: does anyone have any experience with turning off stats on Azure (any benefits)?
You can't disable auto stats in WASD. They're on by default and have to stay that way.
Rebuilding indexes is possible, but you have to be careful how you approach it. See my blog post for rebuilding indexes:
http://sqltuna.blogspot.co.uk/2013/10/index-fragmentation-in-wasd.html
As a rule I wouldn't have LOB columns as part of an index key - is that what's causing you issues?
Statistics work the same as on-premises, in that they are triggered when a certain threshold of changes is reached (or by some other triggers). That's not a bad thing, though, as it means they're up to date. Is there any reason you think this is causing you issues?
-
Tuning: how to distinguish PL/SQL timings from SQL in trace and/or TKPROF output?
Hi,
we have a performance problem with one of our customer's databases and are trying to tune it out.
The activity in question is a long-running PL/SQL stored procedure operating more-or-less in batch mode, calling many sub-procedures along the way. The PL/SQL code has been instrumented to take timings of execution of different operations, and we are running with tracing on, and analyzing the trace output using TKPROF.
Oddly, even though we are running through 70+ pages of PL/SQL code, with bulk-collect into large tables and nesting of SQL in other SQL cursor loops, TKPROF is reporting SQL times which account for nearly all the elapsed time reported by our log messages. I mean, we're talking within one or two percent of elapsed time.
How can I distinguish the time spent in PL/SQL operations from the time doing an execute or a fetch in SQL? Should I just believe TKPROF when it says all the time is going into SQL?
Thanks in advance...
A common problem with Oracle timings is that the granularity of the clock can give misleading results when viewed over many iterations. That might not be your issue, but it's worth noting. I would suggest you use DBMS_PROFILER if you want a more accurate picture of your code's performance. In most PL/SQL processes, the SQL will account for the vast majority of the processing time (maybe even 95%+). You really have to be doing some intense string or analytical processing for that percentage to change much.
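The DBMS_PROFILER suggestion can be sketched roughly as follows; the procedure name is hypothetical, and the profiler tables must first be created with proftab.sql:

```sql
BEGIN
  DBMS_PROFILER.START_PROFILER('batch_run_1');  -- label for this run
  my_batch_procedure;                           -- hypothetical procedure under test
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Time spent per line of PL/SQL, worst first
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d
ON     u.runid = d.runid AND u.unit_number = d.unit_number
ORDER  BY d.total_time DESC;
```

This attributes time to individual PL/SQL lines, so you can see directly how much is pure PL/SQL versus SQL calls.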
Richard -
What exactly are STATISTICS in SQL Server?
Hi all,
What exactly are STATISTICS in the SQL Server query optimizer?
Thanks
Selva
Some good content with proper examples can definitely help you here.
Link:
http://blog.idera.com/sql-server/understanding-sql-server-statistics/
Part of the text may give you an idea:
If there’s an upcoming election and you are running for office, getting ready to go from town to town and city to city with your flyers, you will want to know approximately how many flyers to bring.
If you’re the coach of a sports team, you will want to know your players’ stats before you decide who plays when, and against whom. You will often play a matchup game: even if you have 20 players, you might be allowed to play just 5 at a time, and you will want to know which of your players best match up to the other team’s roster. And you don’t want to interview them one by one at game time (a table scan); you want to know, based on their statistics, who your best bets are.
Just like the election candidate or the sports coach, SQL Server tries to use statistics to “react intelligently” in its query optimization. Knowing the number of records, the density of pages, the histogram, or the available indexes helps the SQL Server optimizer “guess” more accurately how it can best retrieve data. A common misconception is that if you have indexes, SQL Server will use those indexes to retrieve the records in your query.
Not necessarily. If you create, let’s say, an index on a column City and the vast majority (say 90%) of the values are ‘Vancouver’, SQL Server will most likely opt for a table scan instead of using the index if it knows these stats.
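To see what the optimizer actually knows, you can inspect a statistics object directly; the table and statistics names below are illustrative:

```sql
-- Returns the header (rows sampled, last updated),
-- the density vector, and the histogram for one statistics object
DBCC SHOW_STATISTICS ('dbo.Customers', 'IX_Customers_City');
```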
Santosh Singh -
How to find the backend SQL query of a JSP page in OIC
Does anybody know the best way to find the backend SQL query of an OIC JSP page?
How to Generate Trace Files in HTML/JSP (using a Profile Option)
Note: This requires proper responsibility to set SQL Initialization statement using Profile option.
Step 1. Log in to the desired Forms application.
Step 2. Select +Profile >> System (the 'Find System Profile Values' screen will pop up).
Step 3. Check 'User' and type in the username (the account for that user will be traced).
Step 4. Type 'Initialization%' in the Profile box and hit 'Find'.
Step 5. In the User box, type the following statement and hit 'Save':
BEGIN FND_CTL.FND_SESS_CTL('','','TRUE','TRUE','','ALTER SESSION SET TRACEFILE_IDENTIFIER = TESTING MAX_DUMP_FILE_SIZE = 5000000 EVENTS ='||''''||' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12'||'''');END;
Note: specify any name you like to identify your trace; in this case, TESTING is the identifier at the end of the trace file name. You can also specify the maximum amount of data allowed in the trace; in this case, it is set to 5000000. Make sure you hit 'Save' afterwards. [Quotes in the statement are all single quotes.]
Specifying a TRACEFILE_IDENTIFIER value is mandatory when setting up the trace using the above profile option value.
Step 6. Log in to the HTML/JSP page with your username/password and start your flow. (Everything you do once logged in to HTML/JSP will get traced.)
Step 7. Log out of the HTML/JSP application once you have completed your flow.
Step 8. Go back to the profile option in the Forms application, delete the initialization SQL statement, and hit 'Save'.
Step 9. Log in to the database server or login server and retrieve your trace file.
Identify and retrieve the trace file using the TRACEFILE_IDENTIFIER specified in Step 5; in this case, it is TESTING.
Note: If you need to regenerate your trace or trace a new flow, repeat Steps 1 to 8. To avoid confusion, choose a different trace identifier every time you set up a trace.
Step 10. See the TKPROF section on how to format the trace file into readable text.
Trace Options Definitions
No Trace: Tracing is not activated; activities will not get traced.
Regular Trace (Level 1): Contains SQL, execution statistics, and the execution plan. Provides the execution path and row counts, and produces the smallest flat file.
Trace with Binds (Level 4): Regular trace plus the values supplied to the SQL statements via bind variables.
Trace with Waits (Level 8): Regular trace plus timings of the database operations the SQL waited on in order to complete, e.g. disk access.
Trace with Binds and Waits (Level 12): Regular trace with both waits and binds information. Contains the most complete information and produces the largest trace file.
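For Step 10, the raw trace is formatted with the tkprof command-line utility on the database server; the file names below are illustrative:

```shell
# sys=no drops recursive SYS statements; sort puts the slowest statements first
tkprof orcl_ora_12345_TESTING.trc testing_report.txt sys=no sort=exeela,fchela
```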
****Send me an email to [email protected], and I will share the document with you. -
Please go through the important checklist/guidelines below to identify and resolve any performance issue in no time.
Checklist for Quick Performance Problem Resolution
· Get the trace, code, and other information for the given performance case:
- latest code from the production environment
- trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
- program parameters and their frequently used values
- run frequency of the program
- existing run time/response time in production
- business purpose
· Identify the most time-consuming SQL, taking more than 60% of the program time, using trace and code analysis
· Check that all mandatory parameters/bind variables are directly mapped to index columns of large transaction tables without any functions
· Identify the most time-consuming operation(s) using the Row Source Operation section
· Study the program parameter input directly mapped to the SQL
· Identify all input bind parameters being used in the SQL
· Is the SQL query returning a large number of records for the given inputs?
· What are the large tables, and which of their columns are mapped to input parameters?
· Which operation is scanning the highest number of records in the Row Source Operation/Explain Plan?
· Is the Oracle cost-based optimizer using the right driving table for the given SQL?
· Check the time-consuming index on the large table and measure index selectivity
· Study the WHERE clause for input parameters mapped to tables and their columns to find the correct/optimal usage of indexes
· Is the correct index being used for all large tables?
· Is there any full table scan on large tables?
· Is there any unwanted table being used in the SQL?
· Evaluate the join conditions on large tables and their columns
· Is the FTS on a large table caused by the use of non-indexed columns?
· Is there any implicit or explicit conversion causing an index not to be used?
· Are the statistics of all large tables up to date?
Quick Resolution tips
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FOR ALL for DML instead of row by row processing
2) Use Data Caching Technique/Options to cache static data
3) Use Pipe Line Table Functions whenever possible
4) Use Global Temporary Table, Materialized view to process complex records
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
6) Use EXTERNAL Table to build interface rather then creating custom table and program to Load and validate the data
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
8) Follow Oracle PL/SQL Best Practices
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
12) Review Join condition on existing query explain plan
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
14) Avoid applying SQL functions on index columns
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
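For reference, the bulk-processing pattern in tip 1 can be sketched as follows; the table, flag column, and batch size are illustrative:

```sql
DECLARE
  CURSOR c IS SELECT id FROM big_table WHERE processed = 'N';
  TYPE t_ids IS TABLE OF big_table.id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 1000;  -- fetch in batches of 1000
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT                 -- one context switch per batch
      UPDATE big_table SET processed = 'Y' WHERE id = l_ids(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

(As the reply below this list points out, a single pure-SQL UPDATE is usually better still when it can express the same logic.)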
Thanks
Praful
I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FOR ALL for DML instead of row by row processing
No, use pure SQL.
2) Use Data Caching Technique/Options to cache static data
No, use pure SQL, and the database and operating system will handle caching.
3) Use Pipe Line Table Functions whenever possible
No, use pure SQL
4) Use Global Temporary Table, Materialized view to process complex records
No, use pure SQL
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
No, use pure SQL
6) Use EXTERNAL Table to build interface rather then creating custom table and program to Load and validate the data
Makes no sense.
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
What about using the execution trace?
8) Follow Oracle PL/SQL Best Practices
Which are?
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
You mean design your database and queries properly? And table scanning is not always bad.
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
It depends if that is necessary or not.
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
12) Review Join condition on existing query explain plan
Well, if you don't have your join conditions right then your query won't work, so that's obvious.
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
No. Oracle recommends you do not use hints for query optimization (it says so in the documentation). Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general. Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
14) Avoid applying SQL functions on index columns
Why? If there's a need for a function based index, then it should be used.
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
See 13.
In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits. -
Hi Experts,
IF Auto Update Statistics ENABLED in Database Design, Why we need to Update Statistics as a maintenance plan for Daily/weekly??
Vinai Kumar Gandla
Hi Vikki,
Many systems rely solely on SQL Server to update statistics automatically (AUTO UPDATE STATISTICS enabled); however, based on my research, large tables, tables with uneven data distributions, tables with ever-increasing keys, and tables with significant changes in distribution often require manual statistics updates, for the following reasons.
1. If a table is very big, then waiting for 20% of the rows to change before SQL Server automatically updates the statistics could mean that millions of rows are modified, added, or removed before it happens. Depending on the workload patterns and the data, this could mean the optimizer is choosing substandard execution plans long before SQL Server reaches the threshold where it invalidates statistics for a table and starts to update them automatically. In such cases, you might consider updating statistics manually for those tables on a defined schedule (while leaving AUTO UPDATE STATISTICS enabled so that SQL Server continues to maintain statistics for other tables).
2.In cases where you know data distribution in a column is "skewed", it may be necessary to update statistics manually with a full sample, or create a set of filtered statistics in order to generate query plans of good quality. Remember,
however, that sampling with FULLSCAN can be costly for larger tables, and must be done so as not to affect production performance.
3.It is quite common to see an ascending key, such as an IDENTITY or date/time data types, used as the leading column in an index. In such cases, the statistic for the key rarely matches the actual data, unless we update the Statistic manually after
every insert.
So in the case above, we could perform manual statistics updates by
creating a maintenance plan that will run the UPDATE STATISTICS command, and update statistics on a regular schedule. For more information about the process, please refer to the article:
https://www.simple-talk.com/sql/performance/managing-sql-server-statistics/
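A maintenance-plan step of that kind boils down to commands like the following; the table names and sample size are illustrative:

```sql
-- Full sample for a table with a skewed leading column
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Cheaper sampled update for a very large table
UPDATE STATISTICS dbo.OrderLines WITH SAMPLE 25 PERCENT;
```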
Regards,
Michelle Li -
Scaleability with Functions in SQL queries
Hi,
In one of our applications we have many views that use a packaged function in the where clause to filter data. This function uses a SYS_CONTEXT() to set and get values. There are couple of issues while using this approach:
1/ The deterministic function doesn't allow any scalability with PQ servers.
2/ Another issue with this function, and also with the SYS_CONTEXT function: they distort the estimated CBO statistics.
CREATE TABLE TAB_I (
  COLUMN1 NUMBER(16,0) NOT NULL
, COLUMN2 VARCHAR2(20)
, CONSTRAINT TAB_I_PK PRIMARY KEY (COLUMN1) ENABLE
);
CREATE TABLE TAB_V (
  I_COL1 NUMBER(16,0) NOT NULL ENABLE,
  VERSION_ID NUMBER(16,0) NOT NULL ENABLE,
  CRE_DATIM TIMESTAMP(6) NOT NULL ENABLE,
  TERM_DATIM TIMESTAMP(6) NOT NULL ENABLE,
  VERSION_VALID_FROM DATE NOT NULL ENABLE,
  VERSION_VALID_TILL DATE NOT NULL ENABLE,
  CONSTRAINT TAB_V_PK PRIMARY KEY (I_COL1, VERSION_ID) USING INDEX NOCOMPRESS LOGGING ENABLE,
  CONSTRAINT COL1_FK FOREIGN KEY (I_COL1) REFERENCES TAB_I (COLUMN1) ENABLE
);
CREATE OR REPLACE PACKAGE app_bitemporal_rules IS
  FUNCTION f_knowledge_time RETURN TIMESTAMP DETERMINISTIC;
END app_bitemporal_rules;
/
CREATE OR REPLACE PACKAGE BODY app_bitemporal_rules IS
  FUNCTION f_knowledge_time RETURN TIMESTAMP DETERMINISTIC IS
  BEGIN
    RETURN TO_TIMESTAMP(SYS_CONTEXT('APP_USR_CTX', 'KNOWLEDGE_TIME'), 'DD.MM.YYYY HH24.MI.SSXFF');
  END f_knowledge_time;
END app_bitemporal_rules;
/
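For completeness, the APP_USR_CTX context the function reads has to be created and populated somewhere; a minimal sketch (the trusted package name app_ctx_pkg is hypothetical):

```sql
CREATE OR REPLACE CONTEXT app_usr_ctx USING app_ctx_pkg;

-- Inside app_ctx_pkg, a procedure sets the value for the session, e.g.:
--   DBMS_SESSION.SET_CONTEXT('APP_USR_CTX', 'KNOWLEDGE_TIME',
--       TO_CHAR(SYSTIMESTAMP, 'DD.MM.YYYY HH24.MI.SSXFF'));
```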
explain plan for select *
FROM tab_i
JOIN tab_v
ON tab_i.column1 = tab_v.i_col1
AND app_bitemporal_rules.f_knowledge_time BETWEEN tab_v.CRE_DATIM AND tab_v.TERM_DATIM
where tab_i.column1 = 11111;
select * from table(dbms_xplan.display);
Plan hash value: 621902595
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 95 | 5 (0)| 00:00:06 |
| 1 | NESTED LOOPS | | 1 | 95 | 5 (0)| 00:00:06 |
| 2 | TABLE ACCESS BY INDEX ROWID| TAB_I | 1 | 25 | 1 (0)| 00:00:02 |
|* 3 | INDEX UNIQUE SCAN | TAB_I_PK | 1 | | 1 (0)| 00:00:02 |
|* 4 | TABLE ACCESS FULL | TAB_V | 1 | 70 | 4 (0)| 00:00:05 |
Predicate Information (identified by operation id):
3 - access("TAB_I"."COLUMN1"=11111)
4 - filter("TAB_V"."I_COL1"=11111 AND
"TAB_V"."CRE_DATIM"<="APP_BITEMPORAL_RULES"."F_KNOWLEDGE_TIME"() AND
"TAB_V"."TERM_DATIM">="APP_BITEMPORAL_RULES"."F_KNOWLEDGE_TIME"())
Note
- 'PLAN_TABLE' is old version
- dynamic sampling used for this statement (level=2)
explain plan for select *
FROM tab_i
JOIN tab_v
ON tab_i.column1 = tab_v.i_col1
AND '10-OCT-2011' BETWEEN tab_v.CRE_DATIM AND tab_v.TERM_DATIM
where tab_i.column1 = 11111;
select * from table(dbms_xplan.display);
Plan hash value: 621902595
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 256 | 24320 | 5 (0)| 00:00:06 |
| 1 | NESTED LOOPS | | 256 | 24320 | 5 (0)| 00:00:06 |
| 2 | TABLE ACCESS BY INDEX ROWID| TAB_I | 1 | 25 | 1 (0)| 00:00:02 |
|* 3 | INDEX UNIQUE SCAN | TAB_I_PK | 1 | | 1 (0)| 00:00:02 |
|* 4 | TABLE ACCESS FULL | TAB_V | 256 | 17920 | 4 (0)| 00:00:05 |
Predicate Information (identified by operation id):
3 - access("TAB_I"."COLUMN1"=11111)
4 - filter("TAB_V"."I_COL1"=11111 AND "TAB_V"."CRE_DATIM"<=TIMESTAMP'
2011-10-10 00:00:00.000000000' AND "TAB_V"."TERM_DATIM">=TIMESTAMP' 2011-10-10
00:00:00.000000000')
Note
- 'PLAN_TABLE' is old version
- dynamic sampling used for this statement (level=2)
As can be seen, the cardinality has been guessed correctly in the second plan, but not in the first case.
I have also tried with:
ASSOCIATE STATISTICS WITH PACKAGES app_bitemporal_rules DEFAULT COST (1000000/*246919*/,1000,0) DEFAULT SELECTIVITY 50;
But this just leads to an increased cost, with no change in cardinality.
Problem (1) gets solved if I directly use "TO_TIMESTAMP(SYS_CONTEXT ('APP_USR_CTX', 'KNOWLEDGE_TIME'),'DD.MM.YYYY HH24.MI.SSXFF')" in the where clause, but I am not able to find a solution for issue (2).
Can you please help.
Regards,
Vikram R
Hi Vikram,
On the subject of using [url http://download.oracle.com/docs/cd/E11882_01/server.112/e26088/statements_4006.htm#i2115932]ASSOCIATE STATISTICS, having done a little investigation on 11.2.0.2, I'm having trouble adjusting selectivity via "associate statistics ... default selectivity" but no problems with adjusting the default cost.
I've also tried to do the same using an interface type and am running into other issues.
It's not functionality that I'm overly familiar with as I try to avoid/eliminate using functions in predicates.
Further analysis/investigation required.
Including test case of what I've done so far in case anyone else wants to chip in.
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> drop table t1;
Table dropped.
SQL>
SQL> create table t1
2 as
3 select rownum col1
4 from dual
5 connect by rownum <= 100000;
Table created.
SQL>
SQL> exec dbms_stats.gather_table_stats(USER,'T1');
PL/SQL procedure successfully completed.
SQL>
SQL> create or replace function f1
2 return number
3 as
4 begin
5 return 1;
6 end;
7 /
Function created.
SQL>
SQL> create or replace function f2 (
2 i_col1 in number
3 )
4 return number
5 as
6 begin
7 return 1;
8 end;
9 /
Function created.
SQL> Created one table with 100000 rows.
Two functions - one without arguments, one with (for later).
With no associations:
SQL> select * from user_associations;
no rows selected
SQL> Run a statement that uses the function:
SQL> select count(*) from t1 where col1 >= f1;
COUNT(*)
100000
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID gm7ppkbzut114, child number 0
select count(*) from t1 where col1 >= f1
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 139 (100)| |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | TABLE ACCESS FULL| T1 | 5000 | 25000 | 139 (62)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("COL1">="F1"())
19 rows selected.
SQL> This shows the default selectivity of 5% for a predicate against a function.
Let's try to adjust the selectivity using associate statistics - the argument for selectivity should be a percentage between 0 and 100:
(turning off cardinality feedback for clarity/simplicity)
SQL> alter session set "_optimizer_use_feedback" = false;
Session altered.
SQL>
SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f1 default selectivity 100;
Statistics associated.
SQL> select count(*) from t1 where col1 >= f1;
COUNT(*)
100000
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID gm7ppkbzut114, child number 1
select count(*) from t1 where col1 >= f1
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 139 (100)| |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | TABLE ACCESS FULL| T1 | 5000 | 25000 | 139 (62)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("COL1">="F1"())
19 rows selected.
SQL> Didn't make any difference to selectivity.
An excerpt from a 10053 trace file had the following:
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning FALSE.
No statistics type defined for function F1
No default cost defined for function F1
So, crucially, what's missing here is a clause saying:
No default selectivity defined for function F1
But there's no other information that I could see to indicate why it should be discarded.
Moving on, adjusting the cost does happen:
SQL>exec spflush;
PL/SQL procedure successfully completed.
SQL> disassociate statistics from functions f1;
Statistics disassociated.
SQL>
SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f1 default selectivity 100 default cost (100,5,0);
Statistics associated.
SQL> select count(*) from t1 where col1 >= f1;
COUNT(*)
100000
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID gm7ppkbzut114, child number 0
select count(*) from t1 where col1 >= f1
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 500K(100)| |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | TABLE ACCESS FULL| T1 | 5000 | 25000 | 500K (1)| 00:41:41 |
Predicate Information (identified by operation id):
2 - filter("COL1">="F1"())
19 rows selected.
SQL> And we see the following in a 10053:
No statistics type defined for function F1
Default costs for function F1 CPU: 100, I/O: 5
So, confirmation that the default costs for the function were found and applied, but again nothing about selectivity.
I wondered whether the lack of arguments for function F1 made any difference, hence function F2.
Didn't seem to:
Vanilla:
SQL> select count(*) from t1 where col1 >= f2(col1);
COUNT(*)
100000
SQL>
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 2wxw32wadgc1v, child number 0
select count(*) from t1 where col1 >= f2(col1)
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 139 (100)| |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | TABLE ACCESS FULL| T1 | 5000 | 25000 | 139 (62)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("COL1">="F2"("COL1"))
19 rows selected.
SQL> Plus association:
SQL>exec spflush;
PL/SQL procedure successfully completed.
SQL>
SQL> associate statistics with functions f2 default selectivity 90 default cost (100,5,0);
Statistics associated.
SQL> select count(*) from t1 where col1 >= f2(col1);
COUNT(*)
100000
SQL>
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 2wxw32wadgc1v, child number 0
select count(*) from t1 where col1 >= f2(col1)
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 500K(100)| |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | TABLE ACCESS FULL| T1 | 5000 | 25000 | 500K (1)| 00:41:41 |
Predicate Information (identified by operation id):
2 - filter("COL1">="F2"("COL1"))
19 rows selected.
SQL> Just to confirm associations:
SQL> select * from user_associations;
OBJECT_OWNER OBJECT_NAME COLUMN_NAME OBJECT_TY
STATSTYPE_SCHEMA STATSTYPE_NAME DEF_SELECTIVITY DEF_CPU_COST DEF_IO_COST DEF_NET_COST
INTERFACE_VERSION MAINTENANCE_TY
RIMS F2 FUNCTION
90 100 5
0 USER_MANAGED
RIMS F1 FUNCTION
100 100 5
0 USER_MANAGED
SQL> So... I started thinking about whether using an interface type would help.
SQL> CREATE OR REPLACE TYPE test_stats_ot AS OBJECT
2 (dummy_attribute NUMBER
3 ,STATIC FUNCTION ODCIGetInterfaces (
4 ifclist OUT SYS.ODCIObjectList
5 ) RETURN NUMBER
6 ,STATIC FUNCTION ODCIStatsSelectivity (
7 pred IN SYS.ODCIPredInfo,
8 sel OUT NUMBER,
9 args IN SYS.ODCIArgDescList,
10 strt IN NUMBER,
11 stop IN NUMBER,
12 --i_col1 in NUMBER,
13 env IN SYS.ODCIEnv
14 ) RETURN NUMBER
15 --,STATIC FUNCTION ODCIStatsFunctionCost (
16 -- func IN SYS.ODCIPredInfo,
17 -- cost OUT SYS.ODCICost,
18 -- args IN SYS.ODCIArgDescList,
19 -- i_col1 in NUMBER,
20 -- env IN SYS.ODCIEnv
21 -- ) RETURN NUMBER
22 );
23 /
Type created.
SQL> CREATE OR REPLACE TYPE BODY test_stats_ot
2 AS
3 STATIC FUNCTION ODCIGetInterfaces (
4 ifclist OUT SYS.ODCIObjectList
5 ) RETURN NUMBER
6 IS
7 BEGIN
8 ifclist := sys.odciobjectlist(sys.odciobject('SYS','ODCISTATS2'));
9 RETURN odciconst.success;
10 END;
11 STATIC FUNCTION ODCIStatsSelectivity
12 (pred IN SYS.ODCIPredInfo,
13 sel OUT NUMBER,
14 args IN SYS.ODCIArgDescList,
15 strt IN NUMBER,
16 stop IN NUMBER,
17 --i_col1 in NUMBER,
18 env IN SYS.ODCIEnv)
19 RETURN NUMBER
20 IS
21 BEGIN
22 sel := 90;
23 RETURN odciconst.success;
24 END;
25 -- STATIC FUNCTION ODCIStatsFunctionCost (
26 -- func IN SYS.ODCIPredInfo,
27 -- cost OUT SYS.ODCICost,
28 -- args IN SYS.ODCIArgDescList,
29 -- i_col1 in NUMBER,
30 -- env IN SYS.ODCIEnv
31 -- ) RETURN NUMBER
32 -- IS
33 -- BEGIN
34 -- cost := sys.ODCICost(10000,5,0,0);
35 -- RETURN odciconst.success;
36 -- END;
37 END;
38 /
Type body created.
SQL> But this approach is not happy - perhaps not liking the function with no arguments?
SQL> disassociate statistics from functions f1;
Statistics disassociated.
SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f1 USING test_stats_ot;
Statistics associated.
SQL> select count(*) from t1 where col1 >= f1;
select count(*) from t1 where col1 >= f1
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-06550: line 12, column 22:
PLS-00103: Encountered the symbol "ÀÄ" when expecting one of the following:
) , * & = - + < / > at in is mod remainder not rem =>
<an exponent (**)> <> or != or ~= >= <= <> and or like like2
like4 likec between || multiset member submultiset
SQL> So, back to F2 again (uncommenting argument i_col1 in ODCIStatsSelectivity):
SQL> disassociate statistics from functions f1;
Statistics disassociated.
SQL> disassociate statistics from functions f2;
Statistics disassociated.
SQL> ASSOCIATE STATISTICS WITH FUNCTIONS f2 USING test_stats_ot;
Statistics associated.
SQL> select count(*) from t1 where col1 >= f2(col1);
COUNT(*)
100000
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 2wxw32wadgc1v, child number 0
select count(*) from t1 where col1 >= f2(col1)
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 139 (100)| |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | TABLE ACCESS FULL| T1 | 5000 | 25000 | 139 (62)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("COL1">="F2"("COL1"))
19 rows selected.
SQL> Nothing obviously happening.
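Two checks are worth running at this point (a sketch using the standard dictionary views): confirm the association is actually recorded, and compare the plan cardinality against the forced selectivity. With sel := 90 (i.e. 90%), the 100,000-row table should estimate roughly 90,000 rows; the 5,000 shown above is just the default 5% guess for an unbounded range predicate, which suggests the method is not being consulted.

```sql
-- Is the statistics type recorded as associated with F2?
SELECT object_name, object_type, statstype_schema, statstype_name
FROM   user_associations
WHERE  object_name = 'F2';

-- Re-check the optimizer's cardinality estimate:
-- ~90000 rows would mean ODCIStatsSelectivity is being called
EXPLAIN PLAN FOR
SELECT COUNT(*) FROM t1 WHERE col1 >= f2(col1);

SELECT * FROM table(dbms_xplan.display);
```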
You'll note also in my interface type implementation that I commented out a declaration of ODCIStatsFunctionCost.
This post is probably too long already, so I've skipped some of the detail.
But when ODCIStatsFunctionCost was used with function F2, I presume I've made a mistake in the implementation because I had an error in the 10053 trace as follows:
Calling user-defined function cost function...
predicate: "RIMS"."F2"("T1"."COL1")
declare
cost sys.ODCICost := sys.ODCICost(NULL, NULL, NULL, NULL);
arg0 NUMBER := null;
begin
:1 := "RIMS"."TEST_STATS_OT".ODCIStatsFunctionCost(
sys.ODCIFuncInfo('RIMS',
'F2',
NULL,
1),
cost,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(2, 'T1', 'RIMS', '"COL1"', NULL, NULL, NULL))
, arg0,
sys.ODCIENV(:5,:6,:7,:8));
if cost.CPUCost IS NULL then
:2 := -1.0;
else
:2 := cost.CPUCost;
end if;
if cost.IOCost IS NULL then
:3 := -1.0;
else
:3 := cost.IOCost;
end if;
if cost.NetworkCost IS NULL then
:4 := -1.0;
else
:4 := cost.NetworkCost;
end if;
exception
when others then
raise;
end;
ODCIEnv Bind :5 Value 0
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 4
ORA-6550 received when calling RIMS.TEST_STATS_OT.ODCIStatsFunctionCost -- method ignored
There was never any such feedback about ODCIStatsSelectivity.
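Comparing the generated block in the 10053 trace with the commented-out declaration earlier, two mismatches stand out: the trace constructs the first argument as sys.ODCIFuncInfo (the commented spec declared SYS.ODCIPredInfo), and it passes an extra NUMBER (arg0, mirroring the costed function's own parameter) between args and env. A declaration matching that call shape would presumably look like this (a sketch, not verified):

```sql
,STATIC FUNCTION ODCIStatsFunctionCost (
   func   IN  SYS.ODCIFuncInfo,   -- note: FuncInfo, not PredInfo
   cost   OUT SYS.ODCICost,
   args   IN  SYS.ODCIArgDescList,
   i_col1 IN  NUMBER,             -- shadows F2's own NUMBER argument
   env    IN  SYS.ODCIEnv
 ) RETURN NUMBER
```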
So, in summary, more questions than answers.
I'll try to have another look later. -
Sql statements in v$sqlarea
Hi
When I get the SQLs from v$sqlarea, have these SQLs already executed, or are they currently running?
I suggest you check the Oracle documentation.
While they show similar content, you might also note there are differences between the V$SQLAREA and V$SQL views:
V$SQL
V$SQL lists statistics on shared SQL area without the GROUP BY clause and contains one row for each child of the original SQL text entered. Statistics displayed in V$SQL are normally updated at the end of query execution. However, for long running queries, they are updated every 5 seconds. This makes it easy to see the impact of long running SQL statements while they are still in progress.
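As a quick illustration of the difference (a sketch using documented columns; &your_sql_id is a placeholder for a SQL_ID of interest), the same statement can show several child rows in V$SQL but only one aggregated row in V$SQLAREA:

```sql
-- One row per child cursor (per plan / optimizer environment)
SELECT sql_id, child_number, plan_hash_value, executions, buffer_gets
FROM   v$sql
WHERE  sql_id = '&your_sql_id';

-- One row per SQL string; VERSION_COUNT shows how many children it has
SELECT sql_id, version_count, executions, buffer_gets
FROM   v$sqlarea
WHERE  sql_id = '&your_sql_id';
```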
V$SQLAREA
V$SQLAREA lists statistics on shared SQL area and contains one row per SQL string. It provides statistics on SQL statements that are in memory, parsed, and ready for execution. -
Hello,
The system we use is a kind of OLTP thing.
platform - linux
version - 10.2
Here, everything in the Statspack report seems okay to me except the logical reads (if not, please tell).
The problem is that CPU usage grows gradually and reaches 100%.
I need the CPU to be steady.
Can somebody tell me what is happening here?
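One way to see which statements drive the CPU while it climbs is to rank the cumulative V$SQL view (a sketch; CPU_TIME is in microseconds and cumulative since load, so compare two samples taken a few minutes apart rather than the absolute values):

```sql
-- Top 10 statements by cumulative CPU (10.2: use rownum, not FETCH FIRST)
SELECT *
FROM  (SELECT sql_id, cpu_time, executions, buffer_gets,
              ROUND(cpu_time / NULLIF(executions, 0)) AS cpu_per_exec
       FROM   v$sql
       ORDER  BY cpu_time DESC)
WHERE rownum <= 10;
```

The full Statspack report is below.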
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
2386172435 apple22a 1 11-Aug-09 23:14 10.2.0.1.0 NO
Host Name: xxxxxxxxx Num CPUs: 4 Phys Memory (MB): 2
~~~~
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 1747 11-Aug-09 23:23:46 96 7.6
End Snap: 1752 11-Aug-09 23:34:00 218 12.5
Elapsed: 10.23 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 2,864M Std Block Size: 8K
Shared Pool Size: 656M Log Buffer: 29,855K
Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 8,051,891.15 5,042.02
Logical reads: 289,821.64 181.48
Block changes: 49,889.55 31.24
Physical reads: 197.76 0.12
Physical writes: 717.84 0.45
User calls: 1,908.82 1.20
Parses: 962.84 0.60
Hard parses: 0.25 0.00
Sorts: 591.85 0.37
Logons: 0.35 0.00
Executes: 25,757.48 16.13
Transactions: 1,596.96
% Blocks changed per Read: 17.21 Recursive Call %: 94.11
Rollback per transaction %: 26.58 Rows per Sort: 628.58
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.97 Redo NoWait %: 100.00
Buffer Hit %: 99.93 In-memory Sort %: 100.00
Library Hit %: 100.01 Soft Parse %: 99.97
Execute to Parse %: 96.26 Latch Hit %: 99.78
Parse CPU to Parse Elapsd %: 91.30 % Non-Parse CPU: 99.31
Shared Pool Statistics Begin End
Memory Usage %: 47.56 49.99
% SQL with executions>1: 60.62 73.55
% Memory for SQL w/exec>1: 77.58 84.79
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
CPU time 1,362 31.6
log file sync 16,960 1,264 75 29.4
PL/SQL lock timer 10 586 58606 13.6
buffer busy waits 57,444 388 7 9.0
enq: TX - row lock contention 12,036 298 25 6.9
Host CPU (CPUs: 4)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
0.20 10.74 53.82 9.51 36.67
Note: There is a 8% discrepancy between the OS Stat total CPU time and
the total CPU time estimated by Statspack
OS Stat CPU time: 2261(s) (BUSY_TIME + IDLE_TIME)
Statspack CPU time: 2456(s) (Elapsed time * num CPUs in end snap)
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 63.51
% of busy CPU for Instance: 100.30
%DB time waiting for CPU - Resource Mgr:
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 1.9 .0
SGA use (MB): 3,584.0 3,584.0
PGA use (MB): 164.2 258.5
% Host Mem used for SGA+PGA: 194875.2 8987233.1
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file sync 16,960 4 1,264 75 0.0
PL/SQL lock timer 10 100 586 58606 0.0
buffer busy waits 57,444 0 388 7 0.1
enq: TX - row lock contention 12,036 0 298 25 0.0
log file parallel write 11,870 0 163 14 0.0
db file sequential read 21,324 0 95 4 0.0
log file sequential read 3,963 0 47 12 0.0
db file scattered read 22,614 0 29 1 0.0
log file switch completion 102 17 28 272 0.0
latch: cache buffers chains 5,829 0 11 2 0.0
Log archive I/O 4,346 0 9 2 0.0
enq: TX - index contention 1,153 0 7 6 0.0
latch free 1,483 0 4 3 0.0
control file parallel write 328 0 4 11 0.0
control file sequential read 1,593 0 2 1 0.0
latch: enqueue hash chains 337 0 2 6 0.0
buffer deadlock 1,091 99 2 2 0.0
Segments by Logical Reads DB/Inst: apple22A/apple22a Snaps: 1747-1752
-> End Segment Logical Reads Threshold: 10000
-> Pct Total shows % of logical reads for each top segment compared with total
logical reads for all segments captured by the Snapshot
Subobject Obj. Logical Pct
Owner Tablespace Object Name Name Type Reads Total
TPCCDB TPCCDB NEW_ORDER TABLE 89,638,240 51.4
TPCCDB TPCCDB PK_STOCK INDEX 22,913,776 13.1
TPCCDB TPCCDB PK_ORDER_LINE INDEX 14,941,264 8.6
TPCCDB TPCCDB PK_O_ORDER INDEX 10,503,040 6.0
TPCCDB TPCCDB ORDER_LINE TABLE 6,368,896 3.7
Segments by Physical Reads DB/Inst: apple22A/apple22a Snaps: 1747-1752
-> End Segment Physical Reads Threshold: 1000
Subobject Obj. Physical Pct
Owner Tablespace Object Name Name Type Reads Total
TPCCDB TPCCDB NEW_ORDER TABLE 49 12.2
TPCCDB TPCCDB WAREHOUSE TABLE 49 12.2
TPCCDB TPCCDB DISTRICT TABLE 49 12.2
TPCCDB TPCCDB INDEX_NO_D_ID INDEX 49 12.2
TPCCDB TPCCDB PK_NEW_ORDER INDEX 49 12.2
SQL Memory Statistics DB/Inst: apple22A/apple22a Snaps: 1747-1752
Begin End % Diff
Avg Cursor Size (KB): 65.12 67.79 3.95
Cursor to Parent ratio: 1.03 1.02 -.08
Total Cursors: 560 620 9.68
Total Parents: 546 605 9.75
init.ora Parameters DB/Inst: apple22A/apple22a Snaps: 1747-1752
End value
Parameter Name Begin value (if different)
aq_tm_processes 1
audit_file_dest /rdbms/oracle/apple22i/64/admin/o
background_dump_dest /rdbms/oracle/apple22i/64/admin/o
commit_write BATCH,NOWAIT
compatible 10.2.0.1.0
control_files /rdbms/oracle/apple22i/64/oradata
core_dump_dest /rdbms/oracle/apple22i/64/admin/o
cursor_sharing EXACT
db_block_size 8192
db_domain yyyyyyy
db_file_multiblock_read_count 16
db_name apple22a
db_recovery_file_dest /rdbms/oracle/apple22i/64/flash_r
db_recovery_file_dest_size 2147483648
dispatchers (PROTOCOL=TCP) (SERVICE=apple22aX
dml_locks 30028
global_names TRUE
job_queue_processes 10
log_archive_dest_1 LOCATION=/perf0/Archivelog_10g_ch
log_archive_format arch_%t_%s_%r.dbf
log_buffer 30571520
open_cursors 300
pga_aggregate_target 524288000
processes 2000
remote_login_passwordfile EXCLUSIVE
sessions 2205
sga_max_size 3758096384
sga_target 3758096384
transactions 7507
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /rdbms/oracle/apple22i/64/admin/o
-------------------------------------------------------------
Process Memory Summary Stats DB/Inst: apple22A/apple22a Snaps: 2147-2151
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> Num Procs or Allocs: For Begin/End snapshot lines, it is the number of
processes. For Category lines, it is the number of allocations
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist Num
Avg Std Dev Max Max Procs
Alloc Used Freeabl Alloc Alloc Alloc Alloc or
Category (MB) (MB) (MB) (MB) (MB) (MB) (MB) Allocs
B -------- 192.0 95.1 8.8 2.0 6.4 51 55 97
Other 179.0 1.8 6.3 50 54 97
Freeable 8.8 .0 .8 .6 2 11
PL/SQL 2.7 1.4 .0 .0 0 0 95
SQL 2.0 1.0 .0 .0 0 2 58
E -------- 311.2 166.7 11.3 1.4 4.3 52 55 220
Other 284.0 1.3 4.1 49 52 220
Freeable 11.4 .0 1.0 1.0 3 11
PL/SQL 10.0 5.4 .0 .0 0 0 218
SQL 5.8 2.8 .0 .0 0 2 208
Top Process Memory (by component) DB/Inst: apple22A/apple22a Snaps: 2147-2151
-> ordered by Begin/End snapshot, Alloc (MB) desc
Alloc Used Freeabl Max Hist Max
PId Category (MB) (MB) (MB) Alloc (MB) Alloc (MB)
B 5 DBW0 -------- 51.3 22.5 1.0 51.3 54.8
Other 50.3 50.3 53.8
Freeable 1.0 .0 1.0
PL/SQL .0 .0 .0 .0
6 LGWR -------- 24.7 11.7 .1 24.7 25.5
Other 24.5 24.5 25.4
Freeable .1 .0 .1
PL/SQL .0 .0 .0 .0
16 ARC0 -------- 21.9 10.3 .0 21.9 21.9
Other 21.9 21.9 21.9
PL/SQL .0 .0 .0 .0
17 ARC1 -------- 21.9 10.3 .0 21.9 21.9
Other 21.9 21.9 21.9
PL/SQL .0 .0 .0 .0
54 TNS V1-V3 --- 4.4 1.3 1.7 4.4 4.4
Other 2.6 2.6 2.6
Freeable 1.7 .0 1.7
SQL .2 .1 .2 2.3
PL/SQL .0 .0 .0 .0
11 MMON -------- 3.5 1.6 1.3 3.5 3.6
Other 2.1 2.1 2.1
Freeable 1.3 .0 1.3
SQL .1 .0 .1 1.1
PL/SQL .0 .0 .0 .1
8 SMON -------- 2.8 .7 1.9 2.8 2.8
Freeable 1.9 .0 1.9
Other .8 .8 .8
SQL .1 .0 .1 .6
PL/SQL .0 .0 .0 .0
10 CJQ0 -------- 1.6 .6 .8 1.6 1.7
Freeable .8 .0 .8
Other .7 .7 .7
SQL .1 .0 .1 .6
PL/SQL .0 .0 .0 .0
20 q000 -------- 1.6 .7 .2 1.6 1.6
Other 1.3 1.3 1.3
Freeable .2 .0 .2
SQL .1 .1 .1 .5
PL/SQL .0 .0 .0 .0
24 ------------ 1.6 .6 .3 1.6 1.6
Other 1.2 1.2 1.2
Freeable .3 .0 .3
SQL .1 .0 .1 .6
PL/SQL .1 .0 .1 .1
7 CKPT -------- 1.4 .4 .8 1.4 2.3
Freeable .8 .0 .8
Other .6 .6 1.4
SQL .0 .0 .0 .1
PL/SQL .0 .0 .0 .0
9 RECO -------- 1.2 .5 .6 1.2 1.2
Freeable .6 .0 .6
Other .5 .5 .5
SQL .1 .1 .1 .5
B 9 PL/SQL .0 .0 .0 .0
21 ------------ 1.1 .5 .0 1.1 1.1
Other 1.0 1.0 1.0
PL/SQL .0 .0 .0 .0
SQL .0 .0 .0 .2
31 ------------ 1.1 .6 .1 1.1 1.1
Other .9 .9 .9
SQL .1 .0 .1 .2
Freeable .1 .0 .1
PL/SQL .1 .0 .1 .1
E 5 DBW0 -------- 52.4 23.4 3.3 52.4 54.8
Other 49.2 49.2 51.5
Freeable 3.3 .0 3.3
PL/SQL .0 .0 .0 .0
6 LGWR -------- 24.7 11.7 .1 24.7 25.5
Other 24.5 24.5 25.4
Freeable .1 .0 .1
PL/SQL .0 .0 .0 .0
16 ARC0 -------- 21.9 10.3 .0 21.9 21.9
Other 21.9 21.9 21.9
PL/SQL .0 .0 .0 .0
17 ARC1 -------- 21.9 10.3 .0 21.9 21.9
Other 21.9 21.9 21.9
PL/SQL .0 .0 .0 .0
54 TNS V1-V3 --- 4.6 1.3 1.9 4.6 4.6
Other 2.4 2.4 2.4
Freeable 2.1 .0 2.1
SQL .1 .1 .1 2.5
PL/SQL .0 .0 .0 .0
11 MMON -------- 3.5 1.6 1.3 3.5 3.6
Other 2.1 2.1 2.1
Freeable 1.3 .0 1.3
SQL .1 .0 .1 1.1
PL/SQL .0 .0 .0 .1
8 SMON -------- 2.8 .7 1.8 2.8 2.8
Freeable 1.8 .0 1.8
Other 1.0 1.0 1.0
SQL .1 .0 .1 .6
PL/SQL .0 .0 .0 .0
10 CJQ0 -------- 1.6 .6 .8 1.6 1.7
Freeable .8 .0 .8
Other .7 .7 .7
SQL .1 .0 .1 .6
PL/SQL .0 .0 .0 .0
20 q000 -------- 1.6 .7 .2 1.6 1.6
Other 1.3 1.3 1.3
Freeable .2 .0 .2
SQL .1 .1 .1 .5
PL/SQL .0 .0 .0 .0
24 ------------ 1.6 .6 .6 1.6 1.6
Other .9 .9 .9
Freeable .6 .0 .6
SQL .1 .0 .1 .6
E 24 PL/SQL .1 .0 .1 .1
7 CKPT -------- 1.5 .4 .7 1.5 2.3
Other .8 .8 1.5
Freeable .7 .0 .7
SQL .0 .0 .0 .1
PL/SQL .0 .0 .0 .0
9 RECO -------- 1.2 .5 .6 1.2 1.2
Freeable .6 .0 .6
Other .5 .5 .5
SQL .1 .1 .1 .5
PL/SQL .0 .0 .0 .0
219 ------------ 1.2 .5 .0 1.2 1.2
Other 1.1 1.1 1.1
PL/SQL .0 .0 .0 .0
SQL .0 .0 .0 .2
21 ------------ 1.1 .5 .0 1.1 1.1
Other 1.0 1.0 1.0
PL/SQL .0 .0 .0 .0
SQL .0 .0 .0 .2
31 ------------ 1.1 .6 .1 1.1 1.1
Other .9 .9 .9
SQL .1 .0 .1 .2
Freeable .1 .0 .1
PL/SQL .1 .0 .1 .1
205 ------------ 1.1 .5 .0 1.1 1.1
Other 1.0 1.0 1.0
PL/SQL .1 .0 .1 .1
SQL .0 .0 .0 .1
27 ------------ 1.1 .5 .0 1.1 1.1
Other 1.0 1.0 1.0
PL/SQL .1 .0 .1 .1
SQL .0 .0 .0 .1
158 ------------ 1.1 .5 .0 1.1 1.1
Other 1.0 1.0 1.0
PL/SQL .1 .0 .1 .1
SQL .0 .0 .0 .1
172 ------------ 1.1 .5 .0 1.1 1.1
Other 1.0 1.0 1.0
PL/SQL .1 .0 .1 .1
SQL .0 .0 .0 .1
Enqueue activity DB/Inst: apple22A/apple22a Snaps: 2147-2151
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc
Enqueue Type (Request Reason)
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
TX-Transaction (row lock contention)
106,475 106,474 0 106,341 20,273 190.64
TX-Transaction (index contention)
44,355 44,355 0 44,319 2,784 62.81
TX-Transaction (allocate ITL entry)
184 184 0 182 9 46.81
HW-Segment High Water Mark
1,975 1,975 0 70 5 66.29
FB-Format Block
2,164 2,164 0 50 3 54.60
TX-Transaction
394,649 394,668 0 30 0 4.33
Undo Segment Summary DB/Inst: apple22A/apple22a Snaps: 2147-2151
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out Of Space count
-> Undo segment block stats:
uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concy TR (mins) OOS eS/eR/eU
1 117.7 322,423 49 73 15/15 0/0 0/0/0/0/0/0
Undo Segment Stats DB/Inst: apple22A/apple22a Snaps: 2147-2151
-> Most recent 35 Undostat rows, ordered by End Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
17-Aug 03:40 117,733 322,423 49 73 15 0/0 0/0/0/0/0/0
Latch Activity DB/Inst: apple22A/apple22a Snaps: 2147-2151
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
Consistent RBA 3,517 0.0 0 0
FAL request queue 11 0.0 0 0
FAL subheap alocation 11 0.0 0 0
FIB s.o chain latch 20 0.0 0 0
FOB s.o list latch 361 0.0 0 0
JS mem alloc latch 2 0.0 0 0
JS queue access latch 2 0.0 0 0
JS queue state obj latch 3,706 0.0 0 0
JS slv state obj latch 16 0.0 0 0
KGX 0 0 353,668 6.5
KMG MMAN ready and start 636 0.0 0 0
KMG resize request state 27 33.3 1.0 0 0
KTF sga latch 2 0.0 0 165 0.0
KWQP Prop Status 4 0.0 0 0
MQL Tracking Latch 0 0 11 0.0
Memory Management Latch 660 0.2 0.0 0 624 0.0
OS process 294 0.0 0 0
OS process allocation 507 0.0 0 0
OS process: request allo 333 0.0 0 0
PL/SQL warning settings 270,940 0.3 0.0 0 0
SGA IO buffer pool latch 2,654 0.0 0 5,801 0.0
SQL memory manager latch 4 0.0 0 158 0.0
SQL memory manager worka 11,158 0.0 0 0
Shared B-Tree 29 0.0 0 0
active checkpoint queue 8,205 0.0 0 0
active service list 2,335 0.0 0.0 0 174 0.0
archive control 13 0.0 0 0
archive process latch 171 0.0 0 0
buffer pool 139 0.0 0 0
cache buffer handles 46,062 0.1 0.0 0 0
cache buffers chains 457,192,374 0.2 0.0 1082 3,785,637 0.6
cache buffers lru chain 447,547 0.5 0.3 8 90,454,746 2.6
cache table scan latch 0 0 11,447 0.0
cas latch 100 0.0 0 0
channel handle pool latc 333 0.0 0 0
channel operations paren 8,286 0.0 0 0
checkpoint queue latch 199,380 0.0 0.0 0 386,367 0.0
client/application info 1,208 0.0 0 0
compile environment latc 791,470 0.0 0.1 1 0
dml lock allocation 3,552,580 0.5 0.1 117 0
dummy allocation 336 0.3 0.0 0 0
enqueue hash chains 5,288,101 0.3 0.1 45 23,479 0.4
enqueues 1,120,394 0.1 0.1 2 0
event group latch 239 0.0 0 0
file cache latch 2,388 0.0 0 0
global KZLD latch for me 236 0.0 0 0
hash table column usage 0 0 4,564 0.0
hash table modification 30 0.0 0 0
job workq parent latch 0 0 4 0.0
job_queue_processes para 11 0.0 0 0
kks stats 302 0.0 0 0
ksuosstats global area 58 0.0 0 0
ktm global data 270 0.0 0 0
kwqbsn:qsga 29 0.0 0 0
lgwr LWN SCN 3,520 0.0 0 0
library cache 19,899,407 0.4 0.0 199 16,683 ######
library cache load lock 1,030 0.0 0 63 0.0
library cache lock 17,688 0.2 0.0 0 0
library cache lock alloc 990 0.0 0 0
library cache pin 19,007,237 0.2 0.0 35 1,074 0.0
library cache pin alloca 681 0.0 0 0
list of block allocation 1,042 0.1 1.0 0 0
longop free list parent 8 0.0 0 16 12.5
messages 38,525 0.0 0.0 0 0
mostly latch-free SCN 2,543,316 0.1 0.0 0 0
multiblock read objects 30,207 0.0 1.0 0 0
ncodef allocation latch 8 0.0 0 0
object queue header heap 10 0.0 0 1,365 0.0
object queue header oper 1,198,162 0.1 0.1 0 0
object stats modificatio 832 0.0 0 0
parallel query alloc buf 64 0.0 0 0
parameter table allocati 116 1.7 0.5 0 0
post/wait queue 28,580 0.4 0.0 0 8,842 0.0
process allocation 333 0.0 0 239 0.0
process group creation 333 0.0 0 0
qmn state object latch 1 0.0 0 0
qmn task queue latch 124 0.0 0 0
redo allocation 22,668 2.0 0.2 1 9,366,319 0.5
redo copy 13 76.9 1.3 0 9,367,099 0.4
redo on-disk SCN 11,212 0.0 0 0
redo writing 23,270 0.0 0.0 0 0
resmgr group change latc 244 0.0 0 0
resmgr:actses active lis 347 0.0 0 0
resmgr:actses change gro 238 0.0 0 0
resmgr:free threads list 335 0.3 0.0 0 0
resmgr:schema config 12 0.0 0 0
rm cas latch 1,038 0.0 0 0
row cache objects 464,390 0.0 0.0 0 0
rules engine rule set st 400 0.0 0 0
sequence cache 752 0.0 0 0
session allocation 1,627,067 0.2 0.0 1 0
session idle bit 1,875,662 0.0 0.0 0 0
session state list latch 486 0.0 0 0
session switching 8 0.0 0 0
session timer 174 0.0 0 0
shared pool 58,091 0.3 0.3 1 0
simulator hash latch 32,009,012 0.0 0.0 0 0
simulator lru latch 20,996,297 4.9 0.0 1243 15,131 0.2
slave class 1 0.0 0 0
slave class create 3 0.0 0 0
sort extent pool 100 0.0 0 0
threshold alerts latch 29 0.0 0 0
transaction allocation 965 0.0 0 0
transaction branch alloc 8 0.0 0 0
undo global data 24,845,984 0.2 0.0 20 0
user lock 658 4.4 0.9 1 0
Latch Sleep breakdown DB/Inst: apple22A/apple22a Snaps: 2147-2151
-> ordered by misses desc
Get Spin
Latch Name Requests Misses Sleeps Gets
simulator lru latch 20,996,297 1,020,829 20,140 1,003,339
cache buffers chains 457,192,374 1,016,828 24,247 994,418
library cache 19,899,407 86,387 3,201 83,529
undo global data 24,845,984 42,072 497 41,638
library cache pin 19,007,237 36,024 619 35,469
dml lock allocation 3,552,580 17,725 1,223 16,696
enqueue hash chains 5,288,101 14,754 1,086 13,773
simulator hash latch 32,009,012 7,219 54 7,171
session allocation 1,627,067 2,489 117 2,385
cache buffers lru chain 447,547 2,278 583 1,792
mostly latch-free SCN 2,543,316 1,814 14 1,802
enqueues 1,120,394 1,253 89 1,172
object queue header operat 1,198,162 1,010 52 965
PL/SQL warning settings 270,940 682 5 677
redo allocation 22,668 448 71 389
session idle bit 1,875,662 387 8 380
compile environment latch 791,470 176 12 165
shared pool 58,091 171 48 127
checkpoint queue latch 199,380 33 1 32
user lock 658 29 25 5
redo copy 13 10 13 0
KMG resize request state o 27 9 9 0
parameter table allocation 116 2 1 1
multiblock read objects 30,207 1 1 0
list of block allocation 1,042 1 1 0
-------------------------------------------------------------
Edited by: praveenkumaar on Aug 18, 2009 4:07 AM -
I am attempting to manually install Oracle Spatial with Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64 bit. It is my understanding that in order to install Oracle Spatial, the following components are required:
1) Java
2) XML database
3) Oracle Multimedia (In Pre-11g versions, its called interMedia)
I was able to successfully install the Java component by running the following scripts while connected as sysdba:
alter system set "_system_trig_enabled" = false scope=memory;
>@C:\oracle\ora11\javavm\install\initjvm.sql
>@C:\oracle\ora11\xdk\admin\initxml.sql
>@C:\oracle\ora11\xdk\admin\xmlja.sql
>@C:\oracle\ora11\RDBMS\ADMIN\catjava.sql
>@C:\oracle\ora11\RDBMS\ADMIN\catexf.sql
After running all of these scripts, I run the following query:
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
I get the following results:
COUNT(*) OBJECT_TYPE
332 JAVA DATA
837 JAVA RESOURCE
22958 JAVA CLASS
2 JAVA SOURCE
I then run the following query:
select comp_id,version,status from dba_registry
where comp_id in ('JAVAVM');
I get the following results, which suggest that the Java component is now correctly installed:
COMP_ID
VERSION
STATUS
JAVAVM
11.2.0.3.0
VALID
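As an additional sanity check (optional; a sketch against the standard dictionary view), counting invalid Java objects should come back empty on a healthy install:

```sql
-- Any invalid Java objects left over from initjvm.sql / catjava.sql?
SELECT object_type, COUNT(*)
FROM   dba_objects
WHERE  status = 'INVALID'
AND    object_type LIKE '%JAVA%'
GROUP  BY object_type;
```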
After successfully installing the Java component, I next attempt to install the XML component by running the following scripts:
create tablespace XDB
datafile 'C:\ORACLE\ORADATA\ORA11\XDB1.DBF' size 50m autoextend on next 1m maxsize 4096m
extent management local
uniform size 1m
segment space management auto;
> @C:\oracle\ora11\RDBMS\ADMIN\catqm.sql XDB XDB TEMP
I get the following errors when running this script:
drop type xdb.xdbpi_im
ERROR at line 1:
ORA-04043: object XDBPI_IM does not exist
drop table xdb.xdb$path_index_params
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> drop view xdb.xdb$resource_view;
drop view xdb.xdb$resource_view
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> drop view xdb.xdb$rv;
drop view xdb.xdb$rv
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> drop indextype xdb.xdbhi_idxtyp force;
drop indextype xdb.xdbhi_idxtyp force
ERROR at line 1:
ORA-29833: indextype does not exist
SQL> drop operator xdb.path force;
drop operator xdb.path force
ERROR at line 1:
ORA-29807: specified operator does not exist
SQL> drop operator xdb.depth force;
drop operator xdb.depth force
ERROR at line 1:
ORA-29807: specified operator does not exist
SQL> drop operator xdb.abspath force;
drop operator xdb.abspath force
ERROR at line 1:
ORA-29807: specified operator does not exist
SQL> drop operator xdb.under_path force;
drop operator xdb.under_path force
ERROR at line 1:
ORA-29807: specified operator does not exist
SQL> drop operator xdb.equals_path force;
drop operator xdb.equals_path force
ERROR at line 1:
ORA-29807: specified operator does not exist
SQL> drop package xdb.xdb_ancop;
drop package xdb.xdb_ancop
ERROR at line 1:
ORA-04043: object XDB_ANCOP does not exist
SQL> drop package xdb.xdb_funcimpl;
drop package xdb.xdb_funcimpl
ERROR at line 1:
ORA-04043: object XDB_FUNCIMPL does not exist
SQL> drop type xdb.xdbhi_im force;
drop type xdb.xdbhi_im force
ERROR at line 1:
ORA-04043: object XDBHI_IM does not exist
SQL> drop type xdb.path_array force;
drop type xdb.path_array force
ERROR at line 1:
ORA-04043: object PATH_ARRAY does not exist
SQL> drop type xdb.path_linkinfo force;
drop type xdb.path_linkinfo force
ERROR at line 1:
ORA-04043: object PATH_LINKINFO does not exist
SQL> drop table xdb.xdb$workspace;
drop table xdb.xdb$workspace
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> drop table xdb.xdb$checkouts;
drop table xdb.xdb$checkouts
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> drop operator xdb.all_path force;
drop operator xdb.all_path force
ERROR at line 1:
ORA-29807: specified operator does not exist
SQL> drop function XDB.XMLIndexInsFunc;
drop function XDB.XMLIndexInsFunc
ERROR at line 1:
ORA-04043: object XMLINDEXINSFUNC does not exist
SQL> drop function XDB.XMLIndexLoadFunc;
drop function XDB.XMLIndexLoadFunc
ERROR at line 1:
ORA-04043: object XMLINDEXLOADFUNC does not exist
SQL> drop type XDB.XMLIndexLoad_Imp_t force;
drop type XDB.XMLIndexLoad_Imp_t force
ERROR at line 1:
ORA-04043: object XMLINDEXLOAD_IMP_T does not exist
SQL> drop type XDB.XMLIndexTab_t;
drop type XDB.XMLIndexTab_t
ERROR at line 1:
ORA-04043: object XMLINDEXTAB_T does not exist
SQL> drop type XDB.XMLIndexLoad_t force;
drop type XDB.XMLIndexLoad_t force
ERROR at line 1:
ORA-04043: object XMLINDEXLOAD_T does not exist
SQL> /* disassociate statistics type */
SQL> disassociate statistics from indextypes xdb.xdbhi_idxtyp;
disassociate statistics from indextypes xdb.xdbhi_idxtyp
ERROR at line 1:
ORA-29931: specified association does not exist
SQL> disassociate statistics from packages xdb.xdb_funcimpl;
disassociate statistics from packages xdb.xdb_funcimpl
ERROR at line 1:
ORA-29931: specified association does not exist
SQL>
SQL> /* drop statistics type */
SQL> drop type xdb.funcstats;
drop type xdb.funcstats
ERROR at line 1:
ORA-04043: object FUNCSTATS does not exist
create table NET$_ACL
ERROR at line 1:
ORA-00955: name is already used by an existing object
create table WALLET$_ACL
ERROR at line 1:
ORA-00955: name is already used by an existing object
Does anyone have any idea what might be causing these errors? After the script finishes, it appears that the XML component is installed correctly. I ran the following query to check:
select comp_id,version,status from dba_registry
where comp_id in ('XML');
COMP_ID
VERSION
STATUS
XML
11.2.0.3.0
VALID
Should I be concerned about any of these errors?
Cheers,
Jeremy
I think those errors are OK, as they are really warnings about objects that do not exist. As long as the status is VALID in the end, the component should be valid.
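To double-check the whole stack after the manual steps, a simple query of the standard registry view lists every component and its status:

```sql
-- Every installed component should report VALID
SELECT comp_id, comp_name, version, status
FROM   dba_registry
ORDER  BY comp_id;
```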
siva -
Error installing RAPID INSTALL R12 on Linux
I did a Rapid Install of Oracle Applications R12 (which contains 10g database) from the root user. After the installation, the system validates the configuration and it shows some problem with database availability. I have attached the log for database availability (bottom of this email), which shows the error "RW-50011: Error: - Apps ORACLE_HOME connection test has returned an error: 2"
Log for Database Availability :-
Database Availability
command: su oracle -c "/home/apps_rapid_install/startCD/Disk1/rapidwiz/bin/riwTDBup.sh /d01/oracle/VIS/db/tech_st/10.2.0/VIS_localhost.env APPS/APPS"
riwTDBup.sh started at Wed Sep 26 10:50:45 PDT 2007
Parameters passed are : /d01/oracle/VIS/db/tech_st/10.2.0/VIS_localhost.env APPS/APPS
The environment settings are as follows ...
ORACLE_HOME : /d01/oracle/VIS/db/tech_st/10.2.0
ORACLE_SID : VIS
TWO_TASK :
PATH : /d01/oracle/VIS/db/tech_st/10.2.0/perl/bin:/d01/oracle/VIS/db/tech_st/10.2.0/bin:/usr/bin:/usr/sbin:/d01/oracle/VIS/db/tech_st/10.2.0/appsutil/jre/bin:/usr/ccs/bin:/bin:/usr/bin/X11:/usr/local/bin:/usr/bin:/home/apps_rapid_install/startCD/Disk1/rapidwiz/unzip/Linux:/usr/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/usr/java/jre1.5.0_06/bin:/root/bin:/sbin:/usr/sbin:/usr/java/jre1.5.0_06/bin
LD_LIBRARY_PATH : /d01/oracle/VIS/db/tech_st/10.2.0/lib:/usr/X11R6/lib:/usr/openwin/lib:/d01/oracle/VIS/db/tech_st/10.2.0/lib:/usr/dt/lib:/d01/oracle/VIS/db/tech_st/10.2.0/ctx/lib
Executable : /d01/oracle/VIS/db/tech_st/10.2.0/bin/sqlplus
riwTDBup.sh exiting with status 0
Database ORACLE_HOME connection test has succeeded
command: su applmgr -c "/home/apps_rapid_install/startCD/Disk1/rapidwiz/bin/riwTDBup.sh /d01/oracle/VIS/inst/apps/VIS_localhost/ora/10.1.2/VIS_localhost.env APPS/APPS"
riwTDBup.sh started at Wed Sep 26 10:50:47 PDT 2007
Parameters passed are : /d01/oracle/VIS/inst/apps/VIS_localhost/ora/10.1.2/VIS_localhost.env APPS/APPS
The environment settings are as follows ...
ORACLE_HOME : /d01/oracle/VIS/apps/tech_st/10.1.2
ORACLE_SID :
TWO_TASK : VIS
PATH : /d01/oracle/VIS/apps/tech_st/10.1.2/bin:/usr/bin:/usr/ccs/bin:/usr/sbin:/usr/bin:/home/apps_rapid_install/startCD/Disk1/rapidwiz/unzip/Linux:/usr/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/usr/java/jre1.5.0_06/bin:/root/bin:/sbin:/usr/sbin:/usr/java/jre1.5.0_06/bin
LD_LIBRARY_PATH : /d01/oracle/VIS/apps/tech_st/10.1.2/lib32:/d01/oracle/VIS/apps/tech_st/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/d01/oracle/VIS/apps/tech_st/10.1.2/jdk/jre/lib/i386:/d01/oracle/VIS/apps/tech_st/10.1.2/jdk/jre/lib/i386/server:/d01/oracle/VIS/apps/tech_st/10.1.2/jdk/jre/lib/i386/native_threads
Executable : /d01/oracle/VIS/apps/tech_st/10.1.2/bin/sqlplus
riwTDBup.sh exiting with status 2
RW-50011: Error: - Apps ORACLE_HOME connection test has returned an error: 2
As the database is not available, the login page does not come up. It gives this error.
Login Page
checking URL = http://localhost.localdomain:8016/OA_HTML/AppsLogin
RW-50016: Error: - {0} was not created:
File = {1}
Even the Configuration Upload gives this error :-
Configuration Upload
uploading config file at /d01/oracle/VIS/db/tech_st/10.2.0/appsutil/conf_VIS.txt
Failed upload of config file at /d01/oracle/VIS/db/tech_st/10.2.0/appsutil/conf_VIS.txt
Can you please advise on where I could be going wrong? There was no error during the installation. It seems that some setup may be missing on the Linux box.
Hi Tom,
This excerpts the pre-upgrade patches MW recommends:
Patch 4685497...11i UPGRADE TO R12: OPTIONAL
PRE-UPGRADE PROGRAMS IN 11i Product: General Ledger
Patch 4676589...11i.ATG_PF.H.RUP4 Product:
Applications Technology Family
Patch 3438354...Patch 11i.ATG_PF.H Product:
Applications Technology Family
Patch 5120936...TUMS for R12: TO DELIVER TUMS UTILITY
FOR UPGRADES FROM 11i TO R12
Patch 4712852...Minipack 11i.AD.I.4 Product:
Applications DBA
Patch 4963569...CN UPGRADE - PRE-UPGRADE PATCH FOR
RELEASE 12.0 Product: Incentive Compensation
Patch 5382135...CSD R12 PRE-UPGRADE DATA VERIFICATION
PATCH UPDATE Product: Depot Repair
Patch 5259121...R12.FIN.A.UI.XB.3: ASM PRE/POST
DIAGNOSIS PROGRAM OUTPUT EDITS Product: General
Ledger
Patch 3649470...11i.10 Pre-Upgrade patch to find
carrier ship method inconsistencies Product: Shipping
Execution Common
Patch 3464936...AD.I NT PREREQ Product: Applications
DBA
Patch 5233248...SLA PRE-UPGRADE PROGRAM MODIFICATIONS
FOR 11i Product: Subledger Accounting
These are either optional pre-upgrade patches, where you can control how much 11i data you migrate, or patches that were already included in the list of R12 upgrade patches provided.
And this excerpts the patches listed in 403339.1 Path C:
For all platforms:
* 4733582 - APPSPERF: AQ: UPGRADE SCRIPT
(A1001000.SQL) DELETES STATISTICS ON AQ TABLES. -
Product: RDBMS Server
* 4906594 - APPSST10201:ORA-39706 WHILE UPGRADE
FROM 8174 TO 10201 - Product: Oracle Intermedia
* 5567658 - ORACLE CONFIGURATION MANAGER UPDATE -
Product: Oracle Configuration Manager Family
* 5601428 - MLR BACKPORT FOR 5126270, 5238255 ON
TOP OF VERSION 10.2.0.2 - Product: RDBMS Server
* 4247037 - RFID-EPC GENERATION FEATURE FROM
DATABASE SERVER. TO BE USED WITH APPS WMS R12 -
Product: Spatial
* 4868804 - REGRESSION WHEN PATCH 4628170 IS
APPLIED TO 10.1.0.4 AND 10.2.0.1. ENV - Product:
RDBMS Server
* 4898580 - APPSST10G: UNABLE TO EXTRACT COMMENT
COLUMN DEFINITION ON A MVIEW - Product: RDBMS
Server
* 5005469 - NEED SCRIPT TO ASSIST WITH MIGRATION
FROM KOREAN_LEXER TO KOREAN_MORPH_LEXER - Product:
Oracle Text
* 5153209 - R12: AS10G: REHOST JRE 1.5.0_06 FOR
RDBMS ORACLE HOME 10.2.0.2 - Product: Techstack
* 5477912 - APPST:MEMORY LEAK ISSUE FOR R12 -
Product: RDBMS Server
* 5865568 UPDATE TO JVM TIME ZONE CLASSES NEEDED -
Product: Java/VM
For Linux and Unix only:
* 4380928 - APPSST102 : ALTER MATERIALIZED VIEW
LOG FAILS WITH ORA-03113 ERROR - Product: RDBMS
Server
* 4518443 - LISTENER GETS HUNG UP - Product:
Oracle Database Family
* 4592596 - APPSST102: ORA-01410 INVALID ROWID
WHEN SELECTING FROM A TABLE - Product: RDBMS Server
* 4639977 - APPSST102: ORA-00600: [KKQCBY: QBCFKKC
NOT ZERO], - Product: RDBMS Server
* 4643322 - APPSST: GETTING ERROR "ORA-22167"
WHILE DOING BULK UPDATE OPERATION - Product: RDBMS
Server
* 4686006 - QUERY ON VIEW WITH EXISTS AND HAVING
CLAUSE RAISES ORA-600 [KKQSALJG:NOBJS IS 0] -
Product: RDBMS Server
* 4689959 - DST RULE CHANGE IN US, NEED PATCHED
TIMEZONE FILES - Product: CORE
* 4744317 - SCMATGR12:D5 APPSST-XSLT TRANS FAILS
WHEN TAG CONTAINS '_' AT BEGINNING IN 10GR2 -
Product: XML Developers Kit
* 4751145 - ARRAYINDEXOUTOFBOUNDSEXCEPTION ON
QUERY EXECUTION WITH 10G JDBC - Product: JDBC
* 4932527 - CLIENT SIDE PLSQL CRASHES PASSING
RECORD PARAMETER TO STORED FUNCTION - Product: RDBMS
Server
* 4949257 - UNNESTED SUBQUERY RETURNS WRONG RESULT
- Product: RDBMS Server
* 4966417
* 4967236 - APPSST10201:ORA 600[17282], WHILE
PERFORMING DATA IMPORT. R12 PROJECT - Product: XML
Developers Kit
* 5128946 - APPSPERF: RDBMS: MEMORY LEAK DUE TO
DDL REPARSE - Product: RDBMS Server
* 5150177 - APPSST: CREATE INTERMEDIA (TEXT)
INDEX ON 10G TAKES A LONG TIME - Product: Oracle
Text
* 5206570 - DUMP IN KSUSRS() WHEN SESSION FROM SQL
DEVELOPER IS SNIPED - Product: RDBMS Server
* 5254539 - INTERNAL ERROR CODE ON EXECUTING
HIERARCHICAL QUERY - Product: RDBMS Server
* 5434572 - MERGE LABEL REQUEST ON TOP OF 10.2.0.2
FOR BUGS 5106909 4151363 - Product: RDBMS Server
* 5455623 - MERGE LABEL REQUEST ON TOP OF 10.2.0.2
FOR BUGS 5392772 4759183 - Product: RDBMS Server
* 5460159 - MERGE LABEL REQUEST ON TOP OF 10.2.0.2
FOR BUGS 4417341 5092134 - Product: RDBMS Server
* 5548758 - MERGE LABEL REQUEST ON TOP OF 10.2.0.2
FOR BUGS 5253806 5125886 5117856 - Product: RDBMS
Server
* 5718367 - MERGE LABEL REQUEST ON TOP OF 10.2.0.2
FOR BUGS 5647056 5691091 5199213 - Product: RDBMS
Server
* 4450497 - UPGRADE 9.0.1.4.0 TO 10.1.0.4 RESULTS
IN SDO_INDEX_METADATA NULL FETCH - Product: Spatial
* 5066528 -
DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.TRACE_ALL_EXCEPTIONS) ROWS - Product: PLSQL
* 5612127 - (Except HP-UX Itanium) THE OLAP C-MINUS PATCH - Product: Oracle OLAP
These patches are required if you upgrade your 11.5.9-or-higher 9i database to 10.2.0.2 in preparation for an R12 upgrade. You can do this and run 11i on 10.2.0.2 for some time before the R12 upgrade in order to split up your downtimes. When you install the R12 Upgrade Filesystem, the 10.2.0.2 Oracle Home already includes all of these database patches. After the database upgrade, you just need to run the manual steps from these patch readmes, which can be condensed down to the following:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba"
spool post_install.log
@?/rdbms/admin/catdph
@?/rdbms/admin/catdpb
@?/md/admin/catmgdidcode
@?/rdbms/admin/dbmsxmld.sql
@?/rdbms/admin/prvtxmld.plb
@?/rdbms/admin/dbmspbt.sql
@?/rdbms/admin/prvtpbt.plb
@?/rdbms/admin/tracetab.sql
@?/rdbms/admin/prvtaw.plb
@?/olap/admin/apsrelod.sql
@?/olap/admin/xoqrelod.sql
@?/olap/admin/amdpatch.sql
@?/rdbms/admin/utlrp.sql
cd $ORACLE_HOME/md/admin
alter session set current_schema=MDSYS;
@?/md/admin/prvtimd.plb
spool off
exit
cd $ORACLE_HOME/javavm/lib/zi/
@fix5075470a.sql
shutdown immediate
startup
@fix5075470b.sql
Install OLAP and ODM for R12 Analytical Workspaces:
Install Oracle Data Mining and OLAP (conditional)
Verify that Oracle Data Mining and OLAP are installed in your database by using SQL*Plus to connect to the database as SYSDBA and running the following command:
SQL> select comp_id from dba_registry where comp_id='ODM' or comp_id='AMD';
If the query does not return ODM, then you do not have Oracle Data Mining installed. To install Data Mining, use SQL*Plus to connect to the database as SYSDBA and run the following command:
SQL> @$ORACLE_HOME/rdbms/admin/dminst.sql SYSAUX TEMP
If the query does not return AMD, then you do not have OLAP installed. To install OLAP, use SQL*Plus to connect to the database as SYSDBA and run the following command:
SQL> @$ORACLE_HOME/olap/admin/olap.sql SYSAUX TEMP
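Once the install scripts have run, you can re-check the component registrations. As a sketch (assuming a SYSDBA connection), pulling the status column as well confirms the components are usable:

```sql
-- Verify ODM and OLAP registration after install; both should report VALID
SELECT comp_id, comp_name, version, status
FROM   dba_registry
WHERE  comp_id IN ('ODM', 'AMD');
```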
Is it possible to have my update-statistics job run against only the correct (existing) tables?
Whenever a scheduled maintenance run executes, I receive an error:
Executing the query "UPDATE STATISTICS [Perf].[PerfHourly_F65954CD35A54..." failed with the following error: "Table 'PerfHourly_F65954CD35A549E886A48E53F148F277' does not exist.". Possible failure reasons: Problems with the query, "ResultSet"
property not set correctly, parameters not set correctly, or connection not established correctly.
Thanks
Use the script below (change it if required):
USE [dbname]
go
-- Loop through all user tables and rebuild their statistics with a full scan
DECLARE @mytable_id INT
DECLARE @mytable sysname
DECLARE @owner sysname
DECLARE @SQL NVARCHAR(512)

SELECT @mytable_id = MIN(object_id)
FROM sys.tables WITH (NOLOCK)
WHERE is_ms_shipped = 0

WHILE @mytable_id IS NOT NULL
BEGIN
    SELECT @owner = SCHEMA_NAME(schema_id), @mytable = name
    FROM sys.tables
    WHERE object_id = @mytable_id

    SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@owner) + '.' + QUOTENAME(@mytable) + ' WITH ALL, FULLSCAN;'
    PRINT @SQL
    EXEC (@SQL)

    -- Advance to the next user table
    SELECT @mytable_id = MIN(object_id)
    FROM sys.tables WITH (NOLOCK)
    WHERE object_id > @mytable_id
      AND is_ms_shipped = 0
END
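If a FULLSCAN rebuild of every table is heavier than you need, SQL Server also ships a built-in procedure that touches only statistics whose underlying data has changed, using the default sampling rate. A minimal alternative would be:

```sql
USE [dbname];
GO
-- Built-in: updates only statistics that require it, based on row modifications
EXEC sp_updatestats;
```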
Or use the script below for the required tables only. Note that it does not execute anything; it only generates the UPDATE STATISTICS statements. Change it as per your requirements:
SELECT X.*,
ISNULL(CASE
WHEN X.[Total Rows]<=1000
THEN
CASE
WHEN [Percent Modified] >=20.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN --20% Small Table Rule'
END
WHEN [Percent Modified] = 100.00
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN --100% No real Stats Rule'
--WHEN X.[Rows Modified] > 1000
--THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN --1000 Rows Modified Rule'
ELSE
CASE
WHEN X.[Total Rows] > 1000000000 --billion rows
THEN CASE
WHEN [Percent Modified] > 0.1
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 1B Big Table Rule'
END
WHEN X.[Total Rows] > 100000000 --hundred million rows
THEN CASE
WHEN [Percent Modified] > 1.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 100M Big Table Rule'
END
WHEN X.[Total Rows] > 10000000 --ten million rows
THEN CASE
WHEN [Percent Modified] > 2.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 10M Big Table Rule'
END
WHEN X.[Total Rows] > 1000000 --million rows
THEN CASE
WHEN [Percent Modified] > 5.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 1M Big Table Rule'
END
WHEN X.[Total Rows] > 100000 --hundred thousand rows
THEN CASE
WHEN [Percent Modified] > 10.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 100K Big Table Rule'
END
WHEN X.[Total Rows] > 10000 --ten thousand rows
THEN CASE
WHEN [Percent Modified] > 20.0
THEN 'UPDATE STATISTICS ' + [Schema Name] + '.' + [Table Name] + ' WITH ALL, FULLSCAN -- 10K Big Table Rule'
END
END
END,'') AS [Statistics SQL]
FROM (
SELECT DISTINCT
DB_NAME() AS [Database],
S.name AS [Schema Name],
T.name AS [Table Name],
I.rowmodctr AS [Rows Modified],
P.rows AS [Total Rows],
CASE
WHEN I.rowmodctr > P.rows
THEN 100
ELSE CONVERT(decimal(8,2),((I.rowmodctr * 1.0) / P.rows * 1.) * 100.0)
END AS [Percent Modified]
FROM
sys.partitions P
INNER JOIN sys.tables T ON P.object_Id = T.object_id
INNER JOIN sys.schemas S ON T.schema_id = S.schema_id
INNER JOIN sysindexes I ON P.object_id = I.id
WHERE P.index_id in (0,1)
AND I.rowmodctr > 0
) X
WHERE [Rows Modified] > 1000
ORDER BY [Rows Modified] DESC
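On SQL Server 2008 R2 SP2 / 2012 SP1 and later, the legacy sysindexes.rowmodctr check above can be replaced with the sys.dm_db_stats_properties DMV, which exposes a per-statistic modification counter and last-update time. A sketch (the 1000-row threshold is just an example):

```sql
-- Per-statistic staleness via the modification-counter DMV
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 1000
ORDER BY sp.modification_counter DESC;
```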
Hi all gurus,
My question is this: if a SQL query is currently running on the production server, and I have found using the V$SQL view that it is taking a very long time, and it is because of this query that my database is going slow, what can I do about that slow query while it is running?
Regards
Sahil Soni
Find the problem SQL.
Check:
The SQL itself.
Statistics on the tables and indexes.
Full table/partition scans.
Local or remote access to the table.
The explain plan.
Any locks.
v$session_wait.
Referential integrity.
Whether indexes are being used.
etc.
You can identify and collect information on all of the above using tools and data dictionary views.
Use the tools available from Oracle, such as AWR, the SQL Tuning Advisor, and the SQL Access Advisor.
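As a sketch of how those dictionary views can be used to find the problem SQL (the 60-second filter is just an example threshold):

```sql
-- Active user sessions that have been running the longest
SELECT s.sid, s.serial#, s.sql_id, s.last_call_et, q.sql_text
FROM   v$session s
JOIN   v$sql q
       ON q.sql_id = s.sql_id
      AND q.child_number = s.sql_child_number
WHERE  s.status = 'ACTIVE'
AND    s.type = 'USER'
AND    s.last_call_et > 60
ORDER  BY s.last_call_et DESC;

-- Actual execution plan of a suspect statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));

-- As a last resort, the offending session can be terminated:
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
```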
HTH
-Anantha