HTMLDB 1.5 SQL Optimization
Hi All
I'm using HTMLDB 1.5, and SQL optimizer hints (e.g. /*+ INDEX */) vanish from all regions when my app is migrated from development to production.
I tested re-importing the app in the dev environment and hit the same issue.
Is this an HTML DB bug, or am I doing something wrong?
Thanks
Kezie
Kezie - Actually that particular bug was fixed in 1.5.1. If you can apply the 1.5.1 patch, the application installation page will not strip out hints. For SQL*Plus import/install, you must connect as FLOWS_010500 (DBA can change password) or connect as any schema assigned to the workspace into which your application will be installed. The workspace ID and app ID must be identical to those from the source HTML DB instance for this to work.
Scott
Similar Messages
-
OPTIMIZATION OF SQL WITH MANY IN-LIST VALUES AND MULTIPLE OR OPERATIONS
Product: ORACLE SERVER
Date written: 2004-04-19
=========================================================
PURPOSE
This document describes how the CBO handles SQL that contains many IN-list values and many OR operators.
Explanation
Many developers and DBAs have run into SQL using the IN and OR operators that causes excessive optimization time. This document explains how the CBO processes IN lists and OR operators.
When the CBO encounters an IN-list operation, it chooses between the following options:
1. Split the SQL statement into a series of statements combined with UNION ALL.
Consider the statement:
SELECT empno FROM emp WHERE deptno IN (10,20,30);
It can be rewritten as:
SELECT empno FROM emp WHERE deptno = 10
UNION ALL
SELECT empno FROM emp WHERE deptno = 20
UNION ALL
SELECT empno FROM emp WHERE deptno = 30
If the deptno column is indexed, the index can be used for the lookup in each branch.
If the Cost Based Optimizer does not perform this split automatically, it can be forced with the USE_CONCAT hint.
See <Note:17214.1> for details.
2. Keep the IN list as a list and use its values as a filter.
In Oracle 7 this option cannot use an index.
In Oracle 8 this option is implemented by the 'inlist iterator', which can use an index.
The NO_EXPAND hint tells the CBO not to expand the list.
A very long in-list can cause problems under the CBO, especially when it is expanded into a large number of UNION ALL branches, because the CBO must cost each expanded branch; with many branches this costing is time-consuming.
Under the RBO (Rule Based Optimizer) this is not a problem, because no costing is performed.
Workaround
If a very long in-list causes parsing problems, the workarounds are:
1) Use the NO_EXPAND hint. With this hint an index cannot be used in Oracle 7, but can be used in Oracle 8.
2) Use the RBO.
3) Rewrite the query: store the in-list values in a lookup table and join to that table instead of using the in-list.
Note: remember that using a hint causes the statement to be optimized by the CBO.
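The lookup-table rewrite in workaround 3 can be sketched as follows (a sketch only; the filter table name is illustrative, building on the EMP example above):

```sql
-- Instead of a very long literal in-list such as
--   SELECT empno FROM emp WHERE deptno IN (10, 20, 30 /* , ...hundreds more */ );
-- store the values once and join to them.
CREATE TABLE dept_filter (deptno NUMBER PRIMARY KEY);

INSERT INTO dept_filter VALUES (10);
INSERT INTO dept_filter VALUES (20);
INSERT INTO dept_filter VALUES (30);
-- ...one row per former in-list value

SELECT e.empno
FROM   emp e, dept_filter f
WHERE  e.deptno = f.deptno;
```

Parse time then stays flat no matter how many values are filtered, because the optimizer costs a single join rather than hundreds of expanded UNION ALL branches.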
Example: none
Reference Documents
<Note:62153.1> -
SQL Optimization with join and in subselect
Hello,
I am having problems finding a way to optimize a query that joins a fact table to several dimension tables (star schema) with a constraint defined as an IN (SELECT ...). I was hoping this constraint would filter the fact table first and then perform the joins, but I am seeing just the opposite: the optimizer joins first and filters at the very end. I am using the cost-based optimizer and saw that it evaluates IN subselects last in the predicate order. I tried the PUSH_SUBQ hint with no success.
Does anyone have any other suggestions?
Thanks in advance,
David
example sql:
select ....
from fact, dim1, dim2, .... dim <n>
where
fact.dim1_fk in ( select pf from dim1 where code = '10' )
and fact.dim1_fk = dim1.pk
and fact.dim2_fk = dim2.pk
and fact.dim<n>_fk = dim<n>.pk
The original query probably shouldn't use the IN clause, because in this example it is not necessary. There is no limit on the values returned if a sub-select is used; the limit is only an issue with hard-coded literals like
.. in (1, 2, 3, 4 ...)
Something like this is okay even in 8.1.7:
SQL> select count(*) from all_objects
2 where object_id in
3 (select object_id from all_objects);
COUNT(*)
32378
The IN clause has its uses and performs better than EXISTS in some conditions. Blanket statements to avoid IN and use EXISTS instead are just nonsense.
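For David's example specifically, DIM1 is already joined to the fact table, so the IN-subselect may be redundant; a speculative rewrite folds the filter into the existing join:

```sql
-- Sketch only: assumes the subquery's filter (code = '10') applies to
-- the same DIM1 rows the join already visits.
SELECT ...
FROM   fact, dim1, dim2
WHERE  fact.dim1_fk = dim1.pk
AND    dim1.code    = '10'        -- filter folded into the join
AND    fact.dim2_fk = dim2.pk;
```

Whether the optimizer then filters the fact table early still depends on statistics and available indexes; this sketches the rewrite, not a guaranteed plan.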
Martin -
Oracle 10.2.0.4 vs 10.2.0.5 SQL optimizer
Hello,
Recently we upgraded from Oracle 10.2.0.4 to 10.2.0.5, deployed on AIX 5. Immediately we could see slowness for a particular SQL statement that used a partition clause as well as an indexed column in its predicate.
e.g.
SELECT COL1, COL2
FROM TAB1 PARTITION (P1)
WHERE TAB1.COL3 = 123;
There is an index created on COL3, but the explain plan for this SQL showed that the index was not being used. Surprisingly, when we removed the partition clause from the SQL, it used the index:
SELECT COL1, COL2
FROM TAB1
WHERE TAB1.COL3 = 123;
There is one more observation: when we reverted to the 10.2.0.4 optimization strategy on Oracle 10.2.0.5, the original SQL with the partition clause used the index as it should, and the explain plan matched what we had before the Oracle upgrade.
I have few questions based on these observations. Any help will be appreciated.
1. Are there any changes in the 10.2.0.5 optimizer that could make SQL slower?
2. Is there any problem in the SQL that is making it slow?
3. I believe moving to the 10.2.0.4 optimizer on Oracle 10.2.0.5 is a short-term solution. Is there a permanent fix for this problem?
4. Does Oracle 11g support the 10.2.0.4 optimizer?
Please let me know if more details are needed.
Thank you!
Onkar Talekar wrote:
1. Are there any changes in the 10.2.0.5 optimizer that could make SQL slower?
There are always changes to the CBO; it's a complicated piece of software. Some bugs get fixed, others introduced. You may have been unfortunate enough to hit a bug; search MOS or raise an SR with Oracle Support if you feel that is the case.
Onkar Talekar wrote:
2. Is there any problem in the SQL that is making it slow?
Entirely possible you have a poorly written SQL statement, yes.
Onkar Talekar wrote:
3. I believe moving to the 10.2.0.4 optimizer on Oracle 10.2.0.5 is a short-term solution. Is there a permanent fix for this problem?
Yes, raise an SR with Oracle.
Onkar Talekar wrote:
4. Does Oracle 11g support the 10.2.0.4 optimizer?
Yes, but I wouldn't recommend running an 11g instance with optimizer compatibility set lower than the current version without a very compelling reason (the one you've posted doesn't seem compelling to me at the moment).
What happens if you specify the partition key column in the WHERE clause instead of the actual partition in the FROM clause? Oracle should use partition elimination to visit only that partition and utilize the local index on COL3 (I am assuming there is a local index in play here).
I would guess, a very speculative guess, that you hit a bug pertaining to specifying the partition name, and that if you can get Oracle to do partition elimination on its own (instead of 'hard coding' the partition name), it will smarten up and you'll get the execution plan you want / expect ... just a guess. -
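A sketch of that suggestion (the real partition key column is not given in the post, so PART_COL and the date bounds below are placeholders):

```sql
-- Instead of naming the partition:
--   SELECT col1, col2 FROM tab1 PARTITION (p1) WHERE tab1.col3 = 123;
-- filter on the partition key and let the optimizer prune to P1 itself:
SELECT col1, col2
FROM   tab1
WHERE  part_col >= DATE '2011-01-01'   -- placeholder bounds covering P1
AND    part_col <  DATE '2011-02-01'
AND    col3 = 123;
```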
10.2.0.4 vs 10.2.0.5 SQL optimizer
Hello,
Recently we upgraded from Oracle 10.2.0.4 to 10.2.0.5, deployed on AIX 5. Immediately we could see slowness for a particular SQL statement that used a partition clause as well as an indexed column in its predicate.
e.g.
SELECT COL1, COL2
FROM TAB1 PARTITION (P1)
WHERE TAB1.COL3 = 123;
There is an index created on COL3, but the explain plan for this SQL showed that the index was not being used. Surprisingly, when we removed the partition clause from the SQL, it used the index:
SELECT COL1, COL2
FROM TAB1
WHERE TAB1.COL3 = 123;
There is one more observation: when we reverted to the 10.2.0.4 optimization strategy on Oracle 10.2.0.5, the original SQL with the partition clause used the index as it should, and the explain plan matched what we had before the Oracle upgrade.
I have few questions based on these observations. Any help will be appreciated.
1. Are there any changes in the 10.2.0.5 optimizer that could make SQL slower?
2. Is there any problem in the SQL that is making it slow?
3. I believe moving to the 10.2.0.4 optimizer on Oracle 10.2.0.5 is a short-term solution. Is there a permanent fix for this problem?
4. Does Oracle 11g support the 10.2.0.4 optimizer?
Please let me know if more details are needed.
Thank you!
Have statistics been gathered after the upgrade? Has the OPTIMIZER_FEATURES_ENABLE init.ora parameter been set to 10.2.0.5 after the upgrade?
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm#CHDFABEF
Please post the explain plans for the statement on both 10.2.0.4 and 10.2.0.5, following the instructions in this thread - HOW TO: Post a SQL statement tuning request - template posting
Every upgrade/patch can introduce changes in the optimizer. You can use 10.2.0.4 optimizer behavior in 11g via the parameter mentioned above - but why would you want to do that?
HTH
Srini -
Importing HTMLDB application through sql scripts
I have a question regarding importing an application in HTML DB. We have developed an application in one of our instances and now we would like
to deploy it in all other instances (around 180). We will copy the schema objects through export/import from the back end (through a script), but to import the application we currently need to log on to HTML DB (from the front end) and import it.
As we have to do this across 180 instances, it is impractical to log in to HTML DB and import the application on each one. We would like to know if we can import an application and workspace by running the SQL scripts (created by exporting the application and workspace on the source instance) at the SQL*Plus prompt.
In this regard could any one please let me know about the following things.
1. Can we directly run the SQL scripts (created when we exported the workspace and application) in the target database?
2. If yes, as which user should we run the script? (We are presently using HTML DB version 2.0.)
I have also noticed the following schemas are locked in the database.
FLOWS_020000
FLOWS_FILES
FLOWS_010500
3. Are there any extra steps we need to perform after running the SQL scripts in the target instance?
4. Is there any way to delete the application and workspace from the back end (i.e. SQL*Plus) in case we need to re-import the same application and workspace?
(This situation arises if we need to make any changes to the application after deploying it in the data center.)
Your help in this regard is highly appreciated.
Thanks in advance.
Suneel
if we can import an application and workspace by running the sql scripts (created by exporting application and workspace on source instance) in the sqlplus prompt
Yes, you can.
1. Yes.
2. The workspace export files need to be run as the FLOWS_nnnnnn schema (depending on which version of HTML DB is the active one, you seem to have both 1.5 and 2.0 installed). Yes, you would need to unlock the account first to do this.
3. If your application needs external CSS, JavaScript, or image files, you need to copy those over using standard OS file-copy tools.
4. Importing an application using SQL*Plus deletes the existing application first, so you are fine. You would never need to delete a workspace; a workspace is just a "shell", a container for all your applications.
Hope this helps. -
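The 180-instance rollout described above can be scripted; a rough SQL*Plus sketch (the file names, TNS alias, and &-substitution variables are placeholders, and this assumes HTML DB 2.0, i.e. the FLOWS_020000 engine schema):

```sql
-- install_app.sql : run once per target instance, e.g.
--   sqlplus /nolog @install_app.sql
-- The engine account must be unlocked first (as a DBA):
--   ALTER USER flows_020000 ACCOUNT UNLOCK;
CONNECT flows_020000/&engine_password@&target_tns

@workspace_export.sql      -- install the workspace first
@application_export.sql    -- then the application
```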
SQL Optimization - Exit on First Match
Hi,
I have a requirement where a query sometimes takes more than 25 seconds. It should return in less than 1 second.
About the Query :
SELECT 1 FROM DUAL
WHERE exists
(SELECT TM.AD
FROM TM,
GM
WHERE TM.AD = GM.AD
AND TM.LOA = :b1
and GM.soid='Y');
The way this query is written, it fetches only 1 row. The plan of this query is below (not from production but from my test instance, where I could reproduce the issue; only the row counts differ):
Rows Row Source Operation
1 FILTER (cr=26 pr=0 pw=0 time=433 us)
1 FAST DUAL (cr=0 pr=0 pw=0 time=12 us)
1 NESTED LOOPS (cr=26 pr=0 pw=0 time=398 us)
9 TABLE ACCESS BY INDEX ROWID TM (cr=6 pr=0 pw=0 time=150 us)
9 INDEX RANGE SCAN TM_LOA (cr=2 pr=0 pw=0 time=21 us)(object id 56302)
1 TABLE ACCESS BY INDEX ROWID GM (cr=20 pr=0 pw=0 time=258 us)
9 INDEX UNIQUE SCAN PK_GM (cr=11 pr=0 pw=0 time=123 us)(object id 56304)
The plan of Production is exactly the same. The issue here is :
1. LOA has an index, and for certain values of LOA the number of records is around 1000. The issue is normally reported when the number of rows fetched is more than 800.
2. The clustering factor of the LOA index is poor; from the plan it is observed that roughly as many table blocks are read as rows fetched from the index.
3. AD column of GM is a Primary Key
Also, the problem is visible when the disk reads of this query are very high, i.e. if CR is 800, PR is 700. For any subsequent execution, it returns the result in less than a second.
In my view, it is the table access of TM that is increasing the response time, so I would like to eliminate these (unwanted) table accesses. One way is reorganizing the table to improve the clustering factor, but that can have a negative impact; optimizing the query therefore seems the better option.
Based on the query plan, I assume the optimizer gets 1000 rows from the index and table TM, then joins to GM. Fetching these 1000 rows seems to be the issue. The query could be optimized if the scan of TM exited as soon as a matching row is found in GM: instead of fetching all 1000 rows, it would check each row and stop at the first match. AD in TM is not unique, so for each AD from TM the existence check runs against GM; if there are 10 matching ADs between TM and GM, the search should stop at the first one.
Would appreciate help on this.
Regards
Hi,
I will check the performance with FIRST_ROWS and arrays, but I feel these will not yield any benefit because 1) the code is run directly on the server, and 2) it is already doing a proper index scan; the code needs a modification to exit immediately when the first match is found.
A pl/sql representation of this code is pasted below :
create or replace function check_exists(la in varchar2)
return number
as
  cursor tm_csr is
    select ad from tm
    where loa = la;
  l_number number := 0;
  l_ad     tm.ad%type;
begin
  open tm_csr;
  loop
    fetch tm_csr into l_ad;
    exit when tm_csr%notfound;       -- stop when TM is exhausted
    begin
      select 1 into l_number from gm
      where gm.ad = l_ad
      and gm.soid = 'Y';
    exception
      when no_data_found then        -- no match in GM for this AD
        l_number := 0;
    end;
    exit when l_number = 1;          -- first match found: stop scanning
  end loop;
  close tm_csr;
  return l_number;
end;
The code, while not a feasible solution in itself, represents the requirement: it fetches AD from TM, checks for its existence in GM, and exits the loop as soon as a matching row is found.
Edited by: Vivek Sharma on Jul 1, 2009 12:20 PM -
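Since the cost comes from TABLE ACCESS BY INDEX ROWID on TM (the single-column LOA index does not cover AD, and the clustering factor is poor), one speculative alternative to the PL/SQL loop is a covering index, so the semi-join never touches the TM table blocks at all:

```sql
-- Hypothetical composite index: a range scan on LOA now also yields AD
-- straight from the index leaf blocks, with no table access on TM.
CREATE INDEX tm_loa_ad ON tm (loa, ad);
```

The original EXISTS query keeps its early-exit (semi-join) semantics; the plan should then show an index range scan on the new index with no TABLE ACCESS BY INDEX ROWID on TM. Worth verifying before dropping the old single-column index.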
PL/SQL: Optimize for speed
I've got a very simple function that I need to use to order the result of a SELECT.
It takes two parameters. One is the current code (which changes with the values of the SELECT), and the other does not change.
I'm using Oracle 8.1.5 under Linux and I'd like to optimize this for speed.
Any suggestions?
Thanks a lot!
FUNCTION OrdenarNomenclator2 (CurrentCode in varchar2, CodeToFind in varchar2) return number IS
BEGIN
IF CurrentCode LIKE CodeToFind||'%' then
IF CurrentCode = CodeToFind then
return 20;
ELSE
return 10;
END IF;
ELSE
return 0;
END IF;
END;
Ferran,
There is an overhead in calling stored functions. Try this as an expression in your select:
decode(CurrentCode ,CodeToFind,20,decode(substr(CurrentCode ,1,length(CodeToFind)),CodeToFind,10,0))
which seems quite fast.
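On 8.1.5 the DECODE above is the safer choice, since the SQL CASE expression only became available in later 8i releases; on later versions a searched CASE is equivalent, evaluates its branches in order, and is arguably easier to read (the table alias below is illustrative):

```sql
SELECT t.*,
       CASE
         WHEN CurrentCode = CodeToFind           THEN 20
         WHEN CurrentCode LIKE CodeToFind || '%' THEN 10
         ELSE 0
       END AS orden
FROM   some_table t   -- hypothetical source of CurrentCode
ORDER  BY orden DESC;
```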
Hi,
I'm relatively new to SQL and I'm trying to figure out a good way to run a select statement multiple times (with different parameters) and get only one result table.
SELECT *
FROM table
WHERE date=&1
This is what I'm doing now, but I'm sure it's not optimal:
SELECT *
FROM table
WHERE date=date1
UNION ALL
SELECT *
FROM table
WHERE date=date2
UNION ALL
SELECT *
FROM table
WHERE date=date3
etc....
Help please.
Hi,
Can you insert the parameters into a global temporary table? If so, you could join to the global temporary table like this:
/* Set up demo table */
create table source_table (
id number,
date_field date
);
create index idx_date_field on source_table (date_field);
/* Insert 10000 dates */
insert into source_table
select rownum, trunc(sysdate) + rownum from dual connect by rownum <= 10000;
commit;
/* Set up the global temporary table */
create global temporary table gtt_dates (date_val date) on commit preserve rows;
/* Insert 10 random dates */
insert into gtt_dates
select * from (select date_field from source_table order by dbms_random.random) where rownum <= 10;
/* Demo query */
select a.*
from source_table a,
gtt_dates g
where a.date_field = g.date_val;
Regards,
Melvin -
Hello,
I am using Oracle 11.2, I have a very slow query on the following table:
Create table tb_base (crt_dttm timestamp(6), extrl_id varchar2(12), extrl_addr varchar2(128));
Create index idx111_base on tb_base(crt_dttm);
Create index idx222_base on tb_base(extrl_id);
When the query contains a subquery, the explain plan shows a FULL TABLE SCAN on tb_base and it takes a very long time to complete:
Select * from tb_base
where crt_dttm between to_timestamp('24/03/2011 17:58:59', 'DD-MM-YYYY HH24:MI:SS') and to_timestamp('24/03/2011 16:58:59', 'DD-MM-YYYY HH24:MI:SS')
OR extrl_id in ( ...a sub query for other tables... )
Explain Plan:
SELECT STATEMENT ALL ROWS
Cost 16
FILTER
TABLE ACCESS FULL TABLE TB_BASE
Cost 16 Bytes 1,678,600 Cardinality 3,185
HASH JOIN
When I use specific values instead of the subquery, the explain plan shows CONCATENATION with index scans and the query completes in a few seconds:
Select * from tb_base
where crt_dttm between to_timestamp('24/03/2011 17:58:59', 'DD-MM-YYYY HH24:MI:SS') and to_timestamp('24/03/2011 16:58:59', 'DD-MM-YYYY HH24:MI:SS')
OR extrl_id in ( 'EXT001')
Explain plan:
SELECT STATEMENT ALL ROWS
Cost 42 Bytes 1,320 Cardinality 3
CONCATENATION
TABLE ACCESS BY INDEX ROWID TABLE TB_BASE
Cost 14 Bytes 44 Cardinality 1
INDEX RANGE SCAN INDEX idx111_base
TABLE ACCESS BY INDEX ROWID TABLE TB_BASE
Cost 14 Bytes 44 Cardinality 1
INDEX RANGE SCAN INDEX idx222_base
Does anyone know why the first query takes so long to complete? The subquery itself takes only 2 seconds. How can I speed up the first query?
Thanks in advance!
Don't use "*"...
To my knowledge, any time you use "select *" you force a full table scan. You only have three columns... have you tried listing them out?
But the answer seems simple to me: you are taking the calculation out of the equation in the second query, so it won't take as long.
For the first query, for each row it finds in the subquery it also has to check against the primary query in order to give you the desired output.
Not knowing how many rows are in the subquery, it's hard to give a firm answer. But when you give it a finite value by eliminating the subquery, it no longer has to figure that part out... so it's faster. -
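A common rewrite for an OR between a range predicate and a subquery is to split the statement so each branch can use its own index; a sketch against the posted table (UNION rather than UNION ALL, to avoid duplicating rows that satisfy both predicates). Note, incidentally, that the BETWEEN bounds in the post run from 17:58:59 down to 16:58:59, which matches no rows; they are ordered low-to-high here:

```sql
SELECT * FROM tb_base
WHERE  crt_dttm BETWEEN to_timestamp('24/03/2011 16:58:59', 'DD-MM-YYYY HH24:MI:SS')
                    AND to_timestamp('24/03/2011 17:58:59', 'DD-MM-YYYY HH24:MI:SS')
UNION
SELECT * FROM tb_base
WHERE  extrl_id IN ( /* ...a sub query for other tables... */ );
```

The first branch can range-scan idx111_base and the second can use idx222_base, instead of one FILTER over a full scan; whether this is actually cheaper depends on the subquery's selectivity.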
Pl/sql boolean expression short circuit behavior and the 10g optimizer
Oracle documents that a PL/SQL IF condition such as
IF p OR q
will always short circuit if p is TRUE. The documents confirm that this is also true for CASE and for COALESCE and DECODE (although DECODE is not available in PL/SQL).
Charles Wetherell, in his paper "Freedom, Order and PL/SQL Optimization," (available on OTN) says that "For most operators, operands may be evaluated in any order. There are some operators (OR, AND, IN, CASE, and so on) which enforce some order of evaluation on their operands."
My questions:
(1) In his list of "operators that enforce some order of evaluation," what does "and so on" include?
(2) Is short circuit evaluation ALWAYS used with Boolean expressions in PL/SQL, even when they the expression is outside one of these statements? For example:
boolvariable := p OR q;
Or:
CALL foo(p or q);
This is a very interesting paper. To attempt to answer your questions:
1) I suppose BETWEEN would be included in the "and so on" list.
2) I've tried to come up with a reasonably simple means of investigating this below. What I'm attempting to do is run a series of evaluations and record everything that is evaluated. To do this, I have a simple package (PKG) with two functions (F1 and F2), returning the constants 0 and 1 respectively. These functions are "naughty" in that they record the fact that they have been called in a table (T). First the simple code.
SQL> CREATE TABLE t( c1 VARCHAR2(30), c2 VARCHAR2(30) );
Table created.
SQL>
SQL> CREATE OR REPLACE PACKAGE pkg AS
2 FUNCTION f1( p IN VARCHAR2 ) RETURN NUMBER;
3 FUNCTION f2( p IN VARCHAR2 ) RETURN NUMBER;
4 END pkg;
5 /
Package created.
SQL>
SQL> CREATE OR REPLACE PACKAGE BODY pkg AS
2
3 PROCEDURE ins( p1 IN VARCHAR2, p2 IN VARCHAR2 ) IS
4 PRAGMA autonomous_transaction;
5 BEGIN
6 INSERT INTO t( c1, c2 ) VALUES( p1, p2 );
7 COMMIT;
8 END ins;
9
10 FUNCTION f1( p IN VARCHAR2 ) RETURN NUMBER IS
11 BEGIN
12 ins( p, 'F1' );
13 RETURN 0;
14 END f1;
15
16 FUNCTION f2( p IN VARCHAR2 ) RETURN NUMBER IS
17 BEGIN
18 ins( p, 'F2' );
19 RETURN 1;
20 END f2;
21
22 END pkg;
23 /
Package body created.
Now, to demonstrate how CASE and COALESCE short-circuit further evaluations whereas NVL doesn't, we can run a simple SQL statement and look at what we recorded in T afterwards.
SQL> SELECT SUM(
2 CASE
3 WHEN pkg.f1('CASE') = 0
4 OR pkg.f2('CASE') = 1
5 THEN 0
6 ELSE 1
7 END
8 ) AS just_a_number_1
9 , SUM(
10 NVL( pkg.f1('NVL'), pkg.f2('NVL') )
11 ) AS just_a_number_2
12 , SUM(
13 COALESCE(
14 pkg.f1('COALESCE'),
15 pkg.f2('COALESCE'))
16 ) AS just_a_number_3
17 FROM user_objects;
JUST_A_NUMBER_1 JUST_A_NUMBER_2 JUST_A_NUMBER_3
0 0 0
SQL>
SQL> SELECT c1, c2, count(*)
2 FROM t
3 GROUP BY
4 c1, c2;
C1 C2 COUNT(*)
NVL F1 41
NVL F2 41
CASE F1 41
COALESCE F1 41
We can see that NVL executes both functions even though the first parameter (F1) is never NULL. To see what happens in PL/SQL, I set up the following procedure. In 100 iterations of a loop, this will test both of your queries ( 1) IF ..OR.. and 2) bool := (... OR ...) ).
SQL> CREATE OR REPLACE PROCEDURE bool_order ( rc OUT SYS_REFCURSOR ) AS
2
3 PROCEDURE take_a_bool( b IN BOOLEAN ) IS
4 BEGIN
5 NULL;
6 END take_a_bool;
7
8 BEGIN
9
10 FOR i IN 1 .. 100 LOOP
11
12 IF pkg.f1('ANON_LOOP') = 0
13 OR pkg.f2('ANON_LOOP') = 1
14 THEN
15 take_a_bool(
16 pkg.f1('TAKE_A_BOOL') = 0 OR pkg.f2('TAKE_A_BOOL') = 1
17 );
18 END IF;
19
20 END LOOP;
21
22 OPEN rc FOR SELECT c1, c2, COUNT(*) AS c3
23 FROM t
24 GROUP BY
25 c1, c2;
26
27 END bool_order;
28 /
Procedure created.
Now to test it...
SQL> TRUNCATE TABLE t;
Table truncated.
SQL>
SQL> var rc refcursor;
SQL> set autoprint on
SQL>
SQL> exec bool_order(:rc);
PL/SQL procedure successfully completed.
C1 C2 C3
ANON_LOOP F1 100
TAKE_A_BOOL F1 100
SQL> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL=0;
Session altered.
SQL> exec bool_order(:rc);
PL/SQL procedure successfully completed.
C1 C2 C3
ANON_LOOP F1 200
TAKE_A_BOOL F1 200The above shows that the short-circuiting occurs as documented, under the maximum and minimum optimisation levels ( 10g-specific ). The F2 function is never called. What we have NOT seen, however, is PL/SQL exploiting the freedom to re-order these expressions, presumably because on such a simple example, there is no clear benefit to doing so. And I can verify that switching the order of the calls to F1 and F2 around yields the results in favour of F2 as expected.
Regards
Adrian -
I have a few questions about SQL Developer that I was wondering if someone can help with:
1. Does SQL Developer have the sql optimization feature that is present in Toad?
Within TOAD, there is a facility that will help the user to optimize the sql and I was wondering if this was present in SQL Developer
2. When using the compare facility for showing the difference between 2 tables, does SQL Developer show the data that is different?
Thanks
K,
There are a couple of projects in the works for this. Here's a quick excerpt of what we are planning. There is no confirmed release train for this at the minute, though. This is provided for information only.
SQL Code Advisor*
By leveraging database features like Automated Workload Repository (AWR) and Active Session History (ASH), the potential exists to evaluate any SQL statement within a package or worksheet that has been executed and flag any statements exceeding some performance threshold. The developer will then immediately know if the SQL in question merits any tuning effort or should be left "as is".
The goal of SQL Code Advisor is to provide real-time feedback to developer within an editor or worksheet on factors which may impact performance. Without going into great detail in this overview, here are some:
* Connected to database instance with missing system statistics
* SQL references tables and indexes with missing or stale statistics, or indexes in an invalid state
* Population and cardinality estimates of referenced tables
* Type, compression status, cache status, degree of parallelism for referenced tables
* Explain plan indicates Full Table Scan performed on a large table
* Explicit datatype conversions of columns in predicates, preventing use of available indexes
SQL Tuning Advisor*
By leveraging existing database APIs in the DBMS_SQLTUNE and DBMS_ADVISOR packages, with appropriate UI enhancements, this SQL Tuning Advisor extension will allow a developer to generate a report to warn when SQL performance may be impaired by:
* stale optimizer statistics
* missing indexes
* improper coding practices.
These APIs are able to perform more in-depth analyses of SQL statements than the optimizer. As a consequence, in addition to offering advice on specific environment and coding issues, it can produce a SQL Profile. The Profile contains additional statistics which help the optimizer find a more efficient execution plan. The original execution plan can be presented side-by-side with the enhanced SQL Profile-assisted execution plan for comparison. The developer has control over which, if any, of the recommendations to accept and deploy. -
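The DBMS_SQLTUNE flow that such an extension would wrap looks roughly like this in plain SQL (the statement, bind value, and task name below are illustrative; running it requires the ADVISOR privilege and, in practice, Tuning Pack licensing):

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for one statement
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text  => 'SELECT * FROM employees WHERE department_id = :b1',
              bind_list => sql_binds(anydata.ConvertNumber(50)),
              task_name => 'demo_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Read the advice (missing indexes, stale stats, SQL Profile, restructuring)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;
```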
SQL Server 2008R2 SP2 Query optimizer memory leak ?
It looks like we are facing a SQL Server 2008 R2 query optimizer memory leak.
We have below version of SQL Server
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Jun 28 2012 08:36:30
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
The instance is set to a maximum memory of 20 GB.
After executing a huge query (2,277 kB of SQL text, generated by IBM SPSS Clementine) with tons of CASE expressions and a lot of AND/OR conditions in the WHERE clauses and CASE expressions, plus multiple subqueries, the server stops responding with an out-of-memory error in the 'internal' resource pool, and the query optimizer has allocated all the memory.
From Management Data Warehouse we can see that the query was executed at 7.11.2014 22:40:57.
Then at 01:22:48 we receive FAIL_PAGE_ALLOCATION 1:
2014-11-08 01:22:48.70 spid75 Failed allocate pages: FAIL_PAGE_ALLOCATION 1
And then tons of below errors
2014-11-08 01:24:02.22 spid87 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:02.22 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:02.22 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:02.30 Server Error: 17312, Severity: 16, State: 1.
2014-11-08 01:24:02.30 Server SQL Server is terminating a system or background task Fulltext Host Controller Timer Task due to errors in starting up the task (setup state 1).
2014-11-08 01:24:02.22 spid74 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:02.22 spid74 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 Server Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:13.22 spid87 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:13.22 spid87 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 spid63 Error: 701, Severity: 17, State: 130.
2014-11-08 01:24:13.22 spid63 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 spid57 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:13.22 spid57 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:13.22 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:18.26 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:24.43 spid81 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:24.43 spid81 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:18.25 Server Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:18.25 Server BRKR TASK: Operating system error Exception 0x1 encountered.
2014-11-08 01:24:30.11 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:30.11 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:35.18 spid57 Error: 701, Severity: 17, State: 131.
2014-11-08 01:24:35.18 spid57 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:35.18 spid71 Error: 701, Severity: 17, State: 193.
2014-11-08 01:24:35.18 spid71 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:35.18 Server Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:35.41 Server Error: 17312, Severity: 16, State: 1.
2014-11-08 01:24:35.41 Server SQL Server is terminating a system or background task SSB Task due to errors in starting up the task (setup state 1).
2014-11-08 01:24:35.71 Server Error: 17053, Severity: 16, State: 1.
2014-11-08 01:24:35.71 Server BRKR TASK: Operating system error Exception 0x1 encountered.
2014-11-08 01:24:35.71 spid73 Error: 701, Severity: 17, State: 123.
2014-11-08 01:24:35.71 spid73 There is insufficient system memory in resource pool 'internal' to run this query.
2014-11-08 01:24:46.30 Server Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:51.31 Server Error: 17053, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:51.31 Server Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2014-11-08 01:24:51.31 Logon Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
The last error message comes half an hour after the initial out-of-memory error, at 2014-11-08 01:52:54.03. Then the instance shuts down completely.
From the memory information in the error log we can see that all the memory is consumed by the QUERY_OPTIMIZER:
Buffer Pool             Value
Committed               2621440
Target                  2621440
Database                130726
Dirty                   3682
In IO                   0
Latched                 1
Free                    346
Stolen                  2490368
Reserved                0
Visible                 2621440
Stolen Potential        0
Limiting Factor         17
Last OOM Factor         0
Last OS Error           0
Page Life Expectancy    28
2014-11-08 01:22:48.90 spid75
Process/System Counts Value
Available Physical Memory 29361627136
Available Virtual Memory 8691842715648
Available Paging File 51593969664
Working Set 628932608
Percent of Committed Memory in WS 100
Page Faults 48955000
System physical memory high 1
System physical memory low 0
Process physical memory low 1
Process virtual memory low 0
MEMORYCLERK_SQLOPTIMIZER (node 1) KB
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
SinglePage Allocator 19419712
MultiPage Allocator 128
Memory Manager KB
VM Reserved 100960236
VM Committed 277664
Locked Pages Allocated 21483904
Reserved Memory 1024
Reserved Memory In Use 0
On the other hand, MDW reports that MEMORYCLERK_SQLOPTIMIZER grows from the start of the query's execution up to the point of the out-of-memory error, yet the average value during that period is only 54.7 MB, as can be seen in the attached graph.
We have already encountered this issue twice (every time the critical query is executed).
Hi,
This looks to me like a memory leak, specifically in the SQL optimizer, which leaked so much memory from the buffer pool that nothing was left to allocate for a new page.
MEMORYCLERK_SQLOPTIMIZER (node 1) KB
VM Reserved 0
VM Committed 0
Locked Pages Allocated 0
SM Reserved 0
SM Committed 0
SinglePage Allocator 19419712
MultiPage Allocator 128
Can you post the complete DBCC MEMORYSTATUS output that was generated in the error log? Is this the only message in the error log, or are there more messages before and after it?
select (SUM(single_pages_kb) * 1024) / 8192 as total_stolen_pages, type
from sys.dm_os_memory_clerks
group by type
order by total_stolen_pages desc
and
select sum(pages_allocated_count * page_size_in_bytes) / 1024 as allocated_kb, type
from sys.dm_os_memory_objects
group by type
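As a narrower companion to the two queries above, the optimizer clerk can also be isolated directly (a sketch; column names are as in the SQL Server 2008 R2 version of this DMV, where buffer-pool pages appear under single_pages_kb):

```sql
-- Memory charged to the SQL optimizer clerk, per memory node.
-- single_pages_kb is memory stolen from the buffer pool, which is
-- where the growth shows up in the posted DBCC MEMORYSTATUS output.
SELECT memory_node_id,
       type,
       single_pages_kb,
       multi_pages_kb,
       virtual_memory_committed_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'MEMORYCLERK_SQLOPTIMIZER';
```

Sampling this while the critical query runs would show whether the clerk keeps growing until the 701 errors appear.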
If you can post the output of the above two queries, along with the DBCC MEMORYSTATUS output, on a shared drive and share the location with us here, I will try to find out what is leaking memory.
You could also apply SQL Server 2008 R2 SP3 and see whether the issue subsides, but I am not sure whether this particular bug has been fixed there.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Post Upgrade SQL Performance Issue
Hello,
I just upgraded/migrated my database from 11.1.0.6 SE to 11.2.0.3 EE. I did this with Data Pump export/import out of 11.1.0.6 into a new 11.2.0.3 database. Both the old and the new database are on the same Linux server. The new database has 2 GB more RAM assigned to its SGA than the old one. Both databases use AMM.
The strange part is that I have a SQL statement that completes in 1 second in the old DB and takes 30 seconds in the new one. I even moved the SQL plan from the old DB into the new DB, so they are using the same plan.
To sum up the issue: I have one SQL statement, using the same SQL plan, running at dramatically different speeds on two different databases on the same server. The databases are 11.1.0.7 SE and 11.2.0.3 EE.
I am not sure what is going on or how to fix it; any help would be great!
I have included explain plans and autotraces from both the NEW and OLD databases.
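Beyond autotrace, the actual row-source statistics could be captured on both databases for a step-by-step comparison (a sketch; it assumes the statement can be re-run once with the gather_plan_statistics hint):

```sql
-- Re-run the statement once with row-source statistics enabled;
-- "..." stands for the problem query against PBM_MEMBER_INTAKE_VW.
SELECT /*+ gather_plan_statistics */ ...

-- Then show estimated vs. actual rows and buffer gets per step
-- for the last execution of the cursor in this session.
SELECT *
FROM   TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing the A-Rows and Buffers columns between the two databases would show exactly which plan step behaves differently despite the identical plan hash.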
NEW DB Explain Plan (Slow)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 94861 | 193M| | 74043 (1)| 00:18:52 |
| 1 | SORT ORDER BY | | 94861 | 193M| 247M| 74043 (1)| 00:18:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 94861 | 193M| | 31803 (1)| 00:08:07 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1889 | 173K| | 455 (1)| 00:00:07 |
|* 5 | HASH JOIN | | 1889 | 164K| | 454 (1)| 00:00:07 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2138 | 21380 | | 8 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1889 | 145K| | 446 (1)| 00:00:07 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 92972 | 9987K| | 31347 (1)| 00:08:00 |
| 10 | NESTED LOOPS OUTER| | 92972 | 8443K| | 31346 (1)| 00:08:00 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 92972 | 7989K| | 31344 (1)| 00:08:00 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%' AND "MI"."CLAIM_NUMBER" IS NOT NULL)
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%' AND "M"."THEIR_GROUP_ID" IS NOT NULL)
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
Note
- SQL plan baseline "SYS_SQL_PLAN_a3c20fdcecd98dfe" used for this statement
OLD DB Explain Plan (Fast)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 95201 | 193M| | 74262 (1)| 00:14:52 |
| 1 | SORT ORDER BY | | 95201 | 193M| 495M| 74262 (1)| 00:14:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 95201 | 193M| | 31853 (1)| 00:06:23 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1943 | 178K| | 486 (1)| 00:00:06 |
|* 5 | HASH JOIN | | 1943 | 168K| | 486 (1)| 00:00:06 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2105 | 21050 | | 7 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1943 | 149K| | 479 (1)| 00:00:06 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 93258 | 9M| | 31367 (1)| 00:06:17 |
| 10 | NESTED LOOPS OUTER| | 93258 | 8469K| | 31358 (1)| 00:06:17 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 93258 | 8014K| | 31352 (1)| 00:06:17 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%')
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%')
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
NEW DB Auto trace (Slow)
active txn count during cleanout 0
blocks decrypted 0
buffer is not pinned count 664129
buffer is pinned count 3061793
bytes received via SQL*Net from client 3339
bytes sent via SQL*Net to client 28758
Cached Commit SCN referenced 662366
calls to get snapshot scn: kcmgss 3
calls to kcmgas 0
calls to kcmgcs 8
CCursor + sql area evicted 0
cell physical IO interconnect bytes 0
cleanout - number of ktugct calls 0
cleanouts only - consistent read gets 0
cluster key scan block gets 0
cluster key scans 0
commit cleanout failures: block lost 0
commit cleanout failures: callback failure 0
commit cleanouts 0
commit cleanouts successfully completed 0
Commit SCN cached 0
commit txn count during cleanout 0
concurrency wait time 0
consistent changes 0
consistent gets 985371
consistent gets - examination 2993
consistent gets direct 0
consistent gets from cache 985371
consistent gets from cache (fastpath) 982093
CPU used by this session 3551
CPU used when call started 3551
CR blocks created 0
cursor authentications 1
data blocks consistent reads - undo records applied 0
db block changes 0
db block gets 0
db block gets direct 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 3553
deferred (CURRENT) block cleanout applications 0
dirty buffers inspected 0
Effective IO time 0
enqueue releases 0
enqueue requests 0
execute count 3
file io wait time 0
free buffer inspected 0
free buffer requested 0
heap block compress 0
Heap Segment Array Updates 0
hot buffers moved to head of LRU 0
HSC Heap Segment Block Changes 0
immediate (CR) block cleanout applications 0
immediate (CURRENT) block cleanout applications 0
IMU Flushes 0
IMU ktichg flush 0
IMU Redo allocation size 0
IMU undo allocation size 0
index fast full scans (full) 2
index fetch by key 0
index scans kdiixs1 12944
lob reads 0
LOB table id lookup cache misses 0
lob writes 0
lob writes unaligned 0
logical read bytes from cache -517775360
logons cumulative 0
logons current 0
messages sent 0
no buffer to keep pinned count 10
no work - consistent read gets 982086
non-idle wait count 6
non-idle wait time 0
Number of read IOs issued 0
opened cursors cumulative 4
opened cursors current 1
OS Involuntary context switches 853
OS Maximum resident set size 0
OS Page faults 0
OS Page reclaims 2453
OS System time used 9
OS User time used 3549
OS Voluntary context switches 238
parse count (failures) 0
parse count (hard) 0
parse count (total) 1
parse time cpu 0
parse time elapsed 0
physical read bytes 0
physical read IO requests 0
physical read total bytes 0
physical read total IO requests 0
physical read total multi block requests 0
physical reads 0
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 0
physical reads direct (lob) 0
physical write bytes 0
physical write IO requests 0
physical write total bytes 0
physical write total IO requests 0
physical writes 0
physical writes direct 0
physical writes direct (lob) 0
physical writes non checkpoint 0
pinned buffers inspected 0
pinned cursors current 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo size for direct writes 0
redo subscn max counts 0
redo synch time 0
redo synch time (usec) 0
redo synch writes 0
Requests to/from client 3
rollbacks only - consistent read gets 0
RowCR - row contention 0
RowCR attempts 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 3
session logical reads 985371
session pga memory 131072
session pga memory max 0
session uga memory 392928
session uga memory max 0
shared hash latch upgrades - no wait 284
shared hash latch upgrades - wait 0
sorts (memory) 3
sorts (rows) 243
sql area evicted 0
sql area purged 0
SQL*Net roundtrips to/from client 4
switch current to new buffer 0
table fetch by rowid 1861456
table fetch continued row 9
table scan blocks gotten 0
table scan rows gotten 0
table scans (short tables) 0
temp space allocated (bytes) 0
undo change vector size 0
user calls 7
user commits 0
user I/O wait time 0
workarea executions - optimal 10
workarea memory allocated 342
OLD DB Auto trace (Fast)
active txn count during cleanout 0
buffer is not pinned count 4
buffer is pinned count 101
bytes received via SQL*Net from client 1322
bytes sent via SQL*Net to client 9560
calls to get snapshot scn: kcmgss 15
calls to kcmgas 0
calls to kcmgcs 0
calls to kcmgrs 1
cleanout - number of ktugct calls 0
cluster key scan block gets 0
cluster key scans 0
commit cleanouts 0
commit cleanouts successfully completed 0
concurrency wait time 0
consistent changes 0
consistent gets 117149
consistent gets - examination 56
consistent gets direct 115301
consistent gets from cache 1848
consistent gets from cache (fastpath) 1792
CPU used by this session 118
CPU used when call started 119
cursor authentications 1
db block changes 0
db block gets 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 123
deferred (CURRENT) block cleanout applications 0
Effective IO time 2012
enqueue conversions 3
enqueue releases 2
enqueue requests 2
enqueue waits 1
execute count 2
free buffer requested 0
HSC Heap Segment Block Changes 0
IMU Flushes 0
IMU ktichg flush 0
index fast full scans (full) 0
index fetch by key 101
index scans kdiixs1 0
lob writes 0
lob writes unaligned 0
logons cumulative 0
logons current 0
messages sent 0
no work - consistent read gets 117080
Number of read IOs issued 1019
opened cursors cumulative 3
opened cursors current 1
OS Involuntary context switches 54
OS Maximum resident set size 7868
OS Page faults 12
OS Page reclaims 2911
OS System time used 57
OS User time used 71
OS Voluntary context switches 25
parse count (failures) 0
parse count (hard) 0
parse count (total) 3
parse time cpu 0
parse time elapsed 0
physical read bytes 944545792
physical read IO requests 1019
physical read total bytes 944545792
physical read total IO requests 1019
physical read total multi block requests 905
physical reads 115301
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 115301
physical reads prefetch warmup 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo synch writes 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 2
session logical reads 117149
session pga memory -983040
session pga memory max 0
session uga memory 0
session uga memory max 0
shared hash latch upgrades - no wait 0
sorts (memory) 2
sorts (rows) 157
sql area purged 0
SQL*Net roundtrips to/from client 3
table fetch by rowid 0
table fetch continued row 0
table scan blocks gotten 117077
table scan rows gotten 1972604
table scans (direct read) 1
table scans (long tables) 1
table scans (short tables) 2
undo change vector size 0
user calls 5
user I/O wait time 0
workarea executions - optimal 4
Hi Srini,
Yes, the stats on the tables and indexes are current in both DBs. However, the NEW DB has system stats in sys.aux_stats$ and the OLD DB does not. The old DB has optimizer_index_caching=0 and optimizer_index_cost_adj=100. The new DB has them at optimizer_index_caching=90 and optimizer_index_cost_adj=25, but it should not be using them because of the system stats.
Also, I thought none of the optimizer settings would matter because I forced my own SQL plan using SPM.
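Both claims could be verified on each database with something like the following (a sketch; the baseline name is the one reported in the plan's Note section, and the last query lists the non-default parameters so the two instances can be diffed at the source):

```sql
-- Is the baseline enabled and accepted?
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines
WHERE  plan_name = 'SYS_SQL_PLAN_a3c20fdcecd98dfe';

-- Which system statistics is the CBO using?
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN';

-- Non-default initialization parameters on this instance.
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;
```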
Differences in init.ora
OLD-11 optimizerpush_pred_cost_based = FALSE
NEW-15 audit_sys_operations = FALSE
audit_trail = "DB, EXTENDED"
awr_snapshot_time_offset = 0
OLD-16 audit_sys_operations = TRUE
audit_trail = "XML, EXTENDED"
NEW-22 cell_offload_compaction = "ADAPTIVE"
cell_offload_decryption = TRUE
cell_offload_plan_display = "AUTO"
cell_offload_processing = TRUE
NEW-28 clonedb = FALSE
NEW-32 compatible = "11.2.0.0.0"
OLD-27 compatible = "11.1.0.0.0"
NEW-37 cursor_bind_capture_destination = "memory+disk"
cursor_sharing = "FORCE"
OLD-32 cursor_sharing = "EXACT"
NEW-50 db_cache_size = 4294967296
db_domain = "my.com"
OLD-44 db_cache_size = 0
NEW-54 db_flash_cache_size = 0
NEW-58 db_name = "NEWDB"
db_recovery_file_dest_size = 214748364800
OLD-50 db_name = "OLDDB"
db_recovery_file_dest_size = 8438939648
NEW-63 db_unique_name = "NEWDB"
db_unrecoverable_scn_tracking = TRUE
db_writer_processes = 2
OLD-55 db_unique_name = "OLDDB"
db_writer_processes = 1
NEW-68 deferred_segment_creation = TRUE
NEW-71 dispatchers = "(PROTOCOL=TCP) (SERVICE=NEWDBXDB)"
OLD-61 dispatchers = "(PROTOCOL=TCP) (SERVICE=OLDDBXDB)"
NEW-73 dml_locks = 5068
dst_upgrade_insert_conv = TRUE
OLD-63 dml_locks = 3652
drs_start = FALSE
NEW-80 filesystemio_options = "SETALL"
OLD-70 filesystemio_options = "none"
NEW-87 instance_name = "NEWDB"
OLD-77 instance_name = "OLDDB"
NEW-94 job_queue_processes = 1000
OLD-84 job_queue_processes = 100
NEW-104 log_archive_dest_state_11 = "enable"
log_archive_dest_state_12 = "enable"
log_archive_dest_state_13 = "enable"
log_archive_dest_state_14 = "enable"
log_archive_dest_state_15 = "enable"
log_archive_dest_state_16 = "enable"
log_archive_dest_state_17 = "enable"
log_archive_dest_state_18 = "enable"
log_archive_dest_state_19 = "enable"
NEW-114 log_archive_dest_state_20 = "enable"
log_archive_dest_state_21 = "enable"
log_archive_dest_state_22 = "enable"
log_archive_dest_state_23 = "enable"
log_archive_dest_state_24 = "enable"
log_archive_dest_state_25 = "enable"
log_archive_dest_state_26 = "enable"
log_archive_dest_state_27 = "enable"
log_archive_dest_state_28 = "enable"
log_archive_dest_state_29 = "enable"
NEW-125 log_archive_dest_state_30 = "enable"
log_archive_dest_state_31 = "enable"
NEW-139 log_buffer = 7012352
OLD-108 log_buffer = 34412032
OLD-112 max_commit_propagation_delay = 0
NEW-144 max_enabled_roles = 150
memory_max_target = 12884901888
memory_target = 8589934592
nls_calendar = "GREGORIAN"
OLD-114 max_enabled_roles = 140
memory_max_target = 6576668672
memory_target = 6576668672
NEW-149 nls_currency = "$"
nls_date_format = "DD-MON-RR"
nls_date_language = "AMERICAN"
nls_dual_currency = "$"
nls_iso_currency = "AMERICA"
NEW-157 nls_numeric_characters = ".,"
nls_sort = "BINARY"
NEW-160 nls_time_format = "HH.MI.SSXFF AM"
nls_time_tz_format = "HH.MI.SSXFF AM TZR"
nls_timestamp_format = "DD-MON-RR HH.MI.SSXFF AM"
nls_timestamp_tz_format = "DD-MON-RR HH.MI.SSXFF AM TZR"
NEW-172 optimizer_features_enable = "11.2.0.3"
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
OLD-130 optimizer_features_enable = "11.1.0.6"
optimizer_index_caching = 0
optimizer_index_cost_adj = 100
NEW-184 parallel_degree_limit = "CPU"
parallel_degree_policy = "MANUAL"
parallel_execution_message_size = 16384
parallel_force_local = FALSE
OLD-142 parallel_execution_message_size = 2152
NEW-189 parallel_max_servers = 320
OLD-144 parallel_max_servers = 0
NEW-192 parallel_min_time_threshold = "AUTO"
NEW-195 parallel_servers_target = 128
NEW-197 permit_92_wrap_format = TRUE
OLD-154 plsql_native_library_subdir_count = 0
NEW-220 result_cache_max_size = 21495808
OLD-173 result_cache_max_size = 0
NEW-230 service_names = "NEWDB, NEWDB.my.com, NEW"
OLD-183 service_names = "OLDDB, OLD.my.com"
NEW-233 sessions = 1152
sga_max_size = 12884901888
OLD-186 sessions = 830
sga_max_size = 6576668672
NEW-238 shared_pool_reserved_size = 35232153
OLD-191 shared_pool_reserved_size = 53687091
OLD-199 sql_version = "NATIVE"
NEW-248 star_transformation_enabled = "TRUE"
OLD-202 star_transformation_enabled = "FALSE"
NEW-253 timed_os_statistics = 60
OLD-207 timed_os_statistics = 5
NEW-256 transactions = 1267
OLD-210 transactions = 913
NEW-262 use_large_pages = "TRUE" -
Login failed when I log in to a SQL Server in a different domain
Hi Experts,
I have this issue: when I try to connect to a SQL Server located in another domain (Domain A) from a Management Studio of the same version in another domain (Domain B), it does not allow me. I am getting the error below.
Error content details:
===================================
Cannot connect to Server_sql.E2K.AD.GE.COM.
===================================
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider:
Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (.Net SqlClient Data Provider)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=53&LinkId=20476
Error Number: 53
Severity: 20
State: 0
Program Location:
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity)
at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, SqlConnection owningObject)
at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, TimeoutTimer timeout)
at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, TimeoutTimer timeout, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
at Microsoft.SqlServer.Management.SqlStudio.Explorer.ObjectExplorerService.ValidateConnection(UIConnectionInfo ci, IServerType server)
at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()
When I connect to the SQL Server in Domain A using its IP address in SQL Server Management Studio with my Domain B account, it is able to connect and shows the databases and everything.
There is a two-way forest trust between Domain A and Domain B.
Can anyone help please?
Gautam.75801
Hi,
I think the network admin should create an account in Active Directory as well and then add it to the host server. Otherwise, connect with SQL authentication.
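If SQL authentication turns out to be the way to go, the minimal setup on the Domain A server would look something like this (a sketch; the login name, password, and database name are all placeholders):

```sql
-- Create a SQL-authenticated login and map it into the target
-- database (all names here are placeholders).
CREATE LOGIN crossdomain_user WITH PASSWORD = 'Str0ng!Passw0rd';
GO
USE TargetDb;
GO
CREATE USER crossdomain_user FOR LOGIN crossdomain_user;
```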
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence