SQL Discovery issue
Hello everyone,
I have an issue with SQL DB Engine discovery: for one of the clustered SQL instances, discovery is not happening. I followed all the steps specified in the SQL MP guide; below is the configuration in general:
1) Installed the agent on the cluster nodes and enabled agent proxying for those agents.
2) Configured the Run As accounts and linked them to the specific SQL profiles as mentioned in the SQL MP guide.
3) Granted the necessary permissions for a low-privilege environment on the SQL nodes, the cluster, and the databases.
When I logged on to the cluster nodes to investigate further, I saw that different versions of SQL Server (2008 and 2012) are installed on the same node, and I am wondering whether the SQL MP is capable of discovering this type of node.
Any comments or suggestions on how to monitor this type of SQL environment would be appreciated. Thanks in advance!
Regards,
Bhaskar K
Hi,
Based on my experience, the MP is able to discover both versions of SQL instances.
System Center Management Pack for SQL Server
https://www.microsoft.com/en-us/download/details.aspx?id=10631
Are there any errors in the Operations Manager event log?
Or any errors in the SQL Server error logs of one of the instances?
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
Similar Messages
-
SQL Performance issue: Using user defined function with group by
Hi Everyone,
I'm new here and could really use some help with a weird performance issue. I hope this is the right topic for SQL performance issues.
Well, OK, I created a function for converting a date from the GMT timezone to a specified timezone.
CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
IS
tz_name VARCHAR2(100);
date_out date;
BEGIN
SELECT
to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
INTO date_out
FROM dual;
RETURN date_out;
END fnc_user_rep_date_to_local;
The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
This statement selects ~70,000 rows and needs ~70 ms.
If I use the function, it selects the same number of rows ;-) but takes ~4 sec ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')
I understand that the DB has to execute the function for each row.
But if I execute the following statement, it takes only ~90ms ...
select
fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
noi
from (
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
)
The execution plan for all three statements is EXACTLY the same!!!
Usually I would say that I use the third statement and the world is in order. BUT I'm working on a BI project with a tool called Business Objects that generates the SQL, so my hands are tied and I can't make the tool generate the SQL as a subselect.
My questions are:
Why is the second statement so much slower than the third?
and
How can I force the optimizer to do whatever it is doing to make the third statement so fast?
I would really appreciate some help on this really weird issue.
Thanks in advance,
Andi
Hi,
The execution plan for all three statements is EXACTLY the same!!!
Not exactly. The plans are the same, true, but they use a slightly different approach to call the function. See:
drop table t cascade constraints purge;
create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
exec dbms_stats.gather_table_stats(user, 't');
create or replace function test_fnc(p_int number) return number is
begin
return trunc(p_int);
end;
explain plan for select id from t group by id;
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from t group by test_fnc(id);
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from (select id from t group by id);
select * from table(dbms_xplan.display(null,null,'advanced'));
Output:
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL>
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "TEST_FNC"("ID")[22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$F5BB74E1
2 - SEL$F5BB74E1 / T@SEL$2
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
MERGE(@"SEL$2")
OUTLINE_LEAF(@"SEL$F5BB74E1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
37 rows selected.
-
How to track the sql commands issued against database
Hello, we are using an interface developed in Oracle Forms. It gives an error when pressing an icon, and I am not able to trace where it arises. The error is displayed by a form developed in Oracle Forms, and I don't have the source code to track it. I would like to know which SQL statement raises the error, so that I can update the Oracle object and solve the problem if I can find the exact SQL statement issued before the error was displayed. Kindly help me. Thanks.
habfat wrote:
Hello, We are using an interface developed by oracle forms. Its giving some error while pressing an icon. I can not able to trace out from where it arises. This error is displayed by form developed by oracle forms. I dont have source code to track it. I would like to know , which SQL statement rises the error, so that I can update the oracle object and can able to solve the problem if I can able to find the exect SQL statment issued before the error was displayed. kindly help me . thanks.
Hmm, well, kind of silly, but still: if you don't have the source code of the form (you said so), then even if you came to know which statement is raising the error, how would you go and edit it? Did I miss something? And you asked for a SQL statement; how did you come to know that it is a SQL statement that is causing the error? And if it actually is a SQL statement error, isn't it associated with a meaningful message?
Find the developer who coded the application; he would be the best person to track the error and tell you its resolution, IMO.
HTH
Aman.... -
PL/SQL speed issues when using a variable
I have a very strange issue that is causing problems.
I am running Golden, connecting to an 11g database.
I created a procedure to insert records into a table based on a query. The source query includes variables that I populate prior to the insert statement. The problem is that if I use the variable in one very specific WHERE condition, the statement goes from running in less than 1 second to running in 15 minutes. It gets even stranger, though: not only does a second variable not cause any problems, the exact same variable works fine elsewhere in the same statement.
This procedure takes 15 minutes to run.
declare
v_start_period date;
v_end_period date;
begin
select add_months(trunc(sysdate,'mm'), -1), trunc(sysdate,'mm')
into v_start_period, v_end_period
from dual;
insert into RESULTS_TABLE
(first_audit_date, last_audit_date,
data_column1, data_column2, data_column3)
select
a.first_audit_date, a.last_audit_date,
b.data_column1, b.data_column2, b.data_column3
from
SOURCE_TABLE_1 b,
(select marker_id, min(action_time) as first_audit_date, max(action_time) as last_audit_date
from SOURCE_TABLE_2
where action_time >= v_start_period and action_time < v_end_period
group by marker_id) a
where b.marker_id = a.marker_id
and exists (select 1 from SOURCE_TABLE_2
where marker_id = b.marker_id
and action_time >= v_start_period and action_time < v_end_period
and action_type = 'XYZ');
commit;
end;
This procedure runs in less than 1 second, yet returns the exact same results.
declare
v_start_period date;
v_end_period date;
begin
select add_months(trunc(sysdate,'mm'), -1), trunc(sysdate,'mm')
into v_start_period, v_end_period
from dual;
insert into RESULTS_TABLE
(first_audit_date, last_audit_date,
data_column1, data_column2, data_column3)
select
a.first_audit_date, a.last_audit_date,
b.data_column1, b.data_column2, b.data_column3
from
SOURCE_TABLE_1 b,
(select marker_id, min(action_time) as first_audit_date, max(action_time) as last_audit_date
from SOURCE_TABLE_2
where action_time >= v_start_period and action_time < trunc(sysdate,'mm')
group by marker_id) a
where b.marker_id = a.marker_id
and exists (select 1 from SOURCE_TABLE_2
where marker_id = b.marker_id
and action_time >= v_start_period and action_time < v_end_period
and action_type = 'XYZ');
commit;
end;
The only difference between the two is that I replace the first v_end_period variable with trunc(sysdate,'mm').
I've been googling for possible solutions and keep running into something called "parameter sniffing". It appears to be a SQL Server term, though, as I cannot find solutions for Oracle. I tried nesting the insert statement inside its own procedure with new variables populated by the old variables, but nothing changed.
Edited by: user_7000017 on Jan 8, 2013 9:45 AM
Edited by: user_7000017 on Jan 8, 2013 9:52 AM
Put the code in code tags. You are not describing procedures; you are listing anonymous PL/SQL blocks.
As for the code: this approach to assigning PL/SQL variables is highly questionable. As the functions are native PL/SQL functions, there is no need to parse and execute a SQL cursor, and do context switching, in order to assign values to PL/SQL variables.
This is wrong:
select
add_months(trunc(sysdate,'mm'), -1), trunc(sysdate,'mm')
into v_start_period, v_end_period
from dual;
This is correct:
v_start_period := add_months(trunc(sysdate,'mm'), -1);
v_end_period := trunc(sysdate,'mm');
Just as you would not use "select 1 into plVariable from dual;" instead of a standard variable assignment statement like "plVariable := 1;".
As for the performance/execution difference: it does not make sense. I suggest simplifying the code in order to isolate the problem. For example, take that in-line SQL (aliased as a in the main SQL) and create a test case where it uses the function versus a PL/SQL bind variable with the same data type and value as that returned by the function. Examine and compare the execution plans.
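One way to do that comparison (a sketch; table and column names are taken from the post, bind names are illustrative, and note that EXPLAIN PLAN does not peek bind values, so the runtime plan can still differ):

```sql
variable v_start varchar2(20)
variable v_end   varchar2(20)

-- The inline view from the post, with the bind upper bound (the slow form)
explain plan set statement_id = 'BIND' for
select marker_id, min(action_time), max(action_time)
from   source_table_2
where  action_time >= to_date(:v_start,'dd-mm-yyyy')
and    action_time <  to_date(:v_end,'dd-mm-yyyy')
group  by marker_id;

select * from table(dbms_xplan.display('plan_table', 'BIND', 'advanced'));

-- The same query with the function as the upper bound (the fast form)
explain plan set statement_id = 'FUNC' for
select marker_id, min(action_time), max(action_time)
from   source_table_2
where  action_time >= to_date(:v_start,'dd-mm-yyyy')
and    action_time <  trunc(sysdate,'mm')
group  by marker_id;

select * from table(dbms_xplan.display('plan_table', 'FUNC', 'advanced'));
```

Differences in the "Outline Data" and cardinality estimates between the two outputs usually point at where the optimizer changes its mind.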
Increase complexity (if no error) and repeat. Until the error is isolated.
The first step in any troubleshooting is IDENTIFYING the problem. Without knowing what the problem is, how can one fix it?
-
Oracle 11g - External Table/SQL Developer Issue?
Oracle 11g - External Table/SQL Developer Issue?
==============================
I hope this is the right forum for this issue; if not, let me know where to go.
We are using Oracle 11g (11.2.0.1.0) on (Platform : solaris[tm] oe (64-bit)), Sql Developer 3.0.04
We are trying to use an Oracle external table to load text files in .csv format. Here is what our data looks like:
======================
Date1,date2,Political party,Name, ROLE
20-Jan-66,22-Nov-69,Democratic,"John ", MMM
22-Nov-70,20-Jan-71,Democratic,"John Jr.",MMM
20-Jan-68,9-Aug-70,Republican,"Rick Ford Sr.", MMM
9-Aug-72,20-Jan-75,Republican,Henry,MMM
------ ALL NULL -- record
20-Jan-80,20-Jan-89,Democratic,"Donald Smith",MMM
======================
Our external table structure is as follows:
CREATE TABLE P_LOAD
(
  DATE1    VARCHAR2(10),
  DATE2    VARCHAR2(10),
  POL_PRTY VARCHAR2(30),
  P_NAME   VARCHAR2(30),
  P_ROLE   VARCHAR2(5)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY P_EXT_TAB_D
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    MISSING FIELD VALUES ARE NULL
    (
      DATE1    CHAR(10) TERMINATED BY ",",
      DATE2    CHAR(10) TERMINATED BY ",",
      POL_PRTY CHAR(30) TERMINATED BY ",",
      P_NAME   CHAR(30) TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      P_ROLE   CHAR(5)  TERMINATED BY ","
    )
  )
  LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
It was created successfully using SQL Developer.
Here is the issue.
It is not loading the records where fields are enclosed in '"' (records 2, 3, 4, 7).
It is loading the all-NULL record (record 6).
*** If we remove the '"' from the input data, it loads all records, including the all-NULL record.
Log file has
KUP-04021: field formatting error for field P_NAME
KUP-04036: second enclosing delimiter not found
KUP-04101: record 2 rejected in file ....
Our questions
Why is "REJECT ROWS WITH ALL NULL FIELDS" not working?
Why is Terminated by "," OPTIONALLY ENCLOSED BY '"' not working?
Any idea?
Thanks for helping.
I don't think this is a SQL Developer issue. You will get better answers in the Database - General or perhaps the SQL and PL/SQL forums.
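For what it's worth, one simplification sometimes suggested for KUP-04036 errors (untested against this exact file) is to drop the per-field list entirely and let the record-level FIELDS clause handle termination and enclosure for every column:

```sql
-- Hypothetical variation: record-level enclosure handling only,
-- no per-field TERMINATED BY / ENCLOSED BY overrides
CREATE TABLE P_LOAD
( DATE1    VARCHAR2(10),
  DATE2    VARCHAR2(10),
  POL_PRTY VARCHAR2(30),
  P_NAME   VARCHAR2(30),
  P_ROLE   VARCHAR2(5)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY P_EXT_TAB_D
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
```

Since all the fields are plain character data here, per-field definitions add nothing, and mixing field-level terminators with enclosures is a common source of "second enclosing delimiter not found" rejects.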
-
How to monitor SQL statements issued by SIEBEL ?
Hi,
We have developed BI Siebel 10.1.3.3. application.
One of the requirements is to persist/store in the database the real SQL statements issued by BI Dashboards/Answers.
The best solution would be having on-line access to the cursor cache.
Could someone please tell me how to achieve this?
Regards,
Cezary
Sounds like you're looking for Usage Tracking.
OBIEE Server Administration Guide – Pages: 220
OBIEE Installation and Configuration Guide – Pages: 229
And this post here;
http://oraclebizint.wordpress.com/2007/08/14/usage-tracking-in-obi-ee/
A. -
Trace SQL statement issued by BC4J
Is there a way to see at runtime (in the console window or in a log file) the SQL statements issued by BC4J to the database?
Perhaps there is a switch or a -D option to set on the OC4J startup command. This would be really helpful during development.
Thanks,
Marco.
Yes, you are right. That can be done by specifying a Java virtual machine parameter: -Djbo.debugoutput=console.
-
How to retrieve the last SQL statement issued
Is there any way in code to get the last SQL statement issued? I want to know the exact statement issued so that I can log it with the error message when any DB errors occur in our app.
TIA
If your application is developed using Forms 4.5/6i, then you can use :system.last_query to get the most recently executed query within that session. If your app is developed using some other tool, then you have to rely on the V$ views, e.g. querying V$SESSION and V$SQLAREA by OSUSER, USERNAME, etc.
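A sketch of the V$ approach (column names as in 10g and later; the OS user is hypothetical, and SELECT privileges on the V$ views are required):

```sql
-- Last statement executed by each session of a given OS user
select s.sid, s.osuser, s.username, q.sql_text
from   v$session s
join   v$sql     q
on     q.sql_id       = s.prev_sql_id
and    q.child_number = s.prev_child_number
where  s.osuser = 'jsmith';   -- hypothetical OS user
```

PREV_SQL_ID points at the statement the session executed before its current one, which is usually what you want when logging the statement that preceded an error.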
-
A SQL tuning issue-sql runs much slower in test than in production?
Hi Buddies,
I am working on a SQL tuning issue: a SQL statement runs much slower in test than in production.
I compared the two explain plans in test and production.
It seems that in test, the CBO refuses to use the index SUBLEDGER_ENTRY_I2.
We rebuilt it and re-gathered the index statistics; it still runs slowly.
I compared the init.ora parameters like hash_area_size and sort_area_size in test; they are the same as in production.
I wonder if any expert friend can shed some light.
in production,
SQL> set autotrace traceonly
SQL> SELECT rpt_horizon_subledger_entry_vw.onst_offst_cd,
2 rpt_horizon_subledger_entry_vw.bkng_prd,
3 rpt_horizon_subledger_entry_vw.systm_afflt_cd,
4 rpt_horizon_subledger_entry_vw.jrnl_id,
5 rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
6 rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
7 rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
8 rpt_horizon_subledger_entry_vw.crprt_melob_cd AS corp_mlb_cd,
rpt_horizon_subledger_entry_vw.onst_offst_cd, SUM (amt) AS amount
9 10 FROM rpt_horizon_subledger_entry_vw
11 WHERE rpt_horizon_subledger_entry_vw.bkng_prd = '092008'
12 AND rpt_horizon_subledger_entry_vw.jrnl_id = 'RCS0002100'
13 AND rpt_horizon_subledger_entry_vw.systm_afflt_cd = 'SAFF01'
14 GROUP BY rpt_horizon_subledger_entry_vw.onst_offst_cd,
15 rpt_horizon_subledger_entry_vw.bkng_prd,
16 rpt_horizon_subledger_entry_vw.systm_afflt_cd,
17 rpt_horizon_subledger_entry_vw.jrnl_id,
18 rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
19 rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
20 rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
21 rpt_horizon_subledger_entry_vw.crprt_melob_cd,
22 rpt_horizon_subledger_entry_vw.onst_offst_cd;
491 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=130605 Card=218764 B
ytes=16407300)
1 0 SORT (GROUP BY) (Cost=130605 Card=218764 Bytes=16407300)
2 1 VIEW OF 'RPT_HORIZON_SUBLEDGER_ENTRY_VW' (Cost=129217 Ca
rd=218764 Bytes=16407300)
3 2 SORT (UNIQUE) (Cost=129217 Card=218764 Bytes=35877296)
4 3 UNION-ALL
5 4 HASH JOIN (Cost=61901 Card=109382 Bytes=17719884)
6 5 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_ACCNT'
(Cost=2 Card=111 Bytes=3774)
7 5 HASH JOIN (Cost=61897 Card=109382 Bytes=14000896
8 7 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACC
OUNT' (Cost=2 Card=57 Bytes=1881)
9 7 HASH JOIN (Cost=61893 Card=109382 Bytes=103912
90)
10 9 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cost=
34 Card=4282 Bytes=132742)
11 9 HASH JOIN (Cost=61833 Card=109390 Bytes=7000
960)
12 11 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGE
R_ENTRY' (Cost=42958 Card=82076 Bytes=3611344)
13 12 INDEX (RANGE SCAN) OF 'SUBLEDGER_ENTRY_I
2' (NON-UNIQUE) (Cost=1069 Card=328303)
14 11 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_
LINK' (Cost=14314 Card=9235474 Bytes=184709480)
15 4 HASH JOIN (Cost=61907 Card=109382 Bytes=18157412)
16 15 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_ACCNT'
(Cost=2 Card=111 Bytes=3774)
17 15 HASH JOIN (Cost=61903 Card=109382 Bytes=14438424
18 17 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACC
OUNT' (Cost=2 Card=57 Bytes=1881)
19 17 HASH JOIN (Cost=61899 Card=109382 Bytes=108288
18)
20 19 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cost=
34 Card=4282 Bytes=132742)
21 19 HASH JOIN (Cost=61838 Card=109390 Bytes=7438
520)
22 21 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGE
R_ENTRY' (Cost=42958 Card=82076 Bytes=3939648)
23 22 INDEX (RANGE SCAN) OF 'SUBLEDGER_ENTRY_I
2' (NON-UNIQUE) (Cost=1069 Card=328303)
24 21 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_
LINK' (Cost=14314 Card=9235474 Bytes=184709480)
Statistics
25 recursive calls
18 db block gets
343266 consistent gets
370353 physical reads
0 redo size
15051 bytes sent via SQL*Net to client
1007 bytes received via SQL*Net from client
34 SQL*Net roundtrips to/from client
1 sorts (memory)
1 sorts (disk)
491 rows processed
in test
SQL> set autotrace traceonly
SQL> SELECT rpt_horizon_subledger_entry_vw.onst_offst_cd,
2 rpt_horizon_subledger_entry_vw.bkng_prd,
3 rpt_horizon_subledger_entry_vw.systm_afflt_cd,
4 rpt_horizon_subledger_entry_vw.jrnl_id,
5 rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
6 7 rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
8 rpt_horizon_subledger_entry_vw.crprt_melob_cd AS corp_mlb_cd,
9 rpt_horizon_subledger_entry_vw.onst_offst_cd, SUM (amt) AS amount
10 FROM rpt_horizon_subledger_entry_vw
11 WHERE rpt_horizon_subledger_entry_vw.bkng_prd = '092008'
12 AND rpt_horizon_subledger_entry_vw.jrnl_id = 'RCS0002100'
AND rpt_horizon_subledger_entry_vw.systm_afflt_cd = 'SAFF01'
13 14 GROUP BY rpt_horizon_subledger_entry_vw.onst_offst_cd,
15 rpt_horizon_subledger_entry_vw.bkng_prd,
16 rpt_horizon_subledger_entry_vw.systm_afflt_cd,
17 rpt_horizon_subledger_entry_vw.jrnl_id,
18 rpt_horizon_subledger_entry_vw.ntrl_accnt_cd,
rpt_horizon_subledger_entry_vw.gnrl_ldgr_chrt_of_accnt_nm,
rpt_horizon_subledger_entry_vw.lgl_entty_brnch_cd,
rpt_horizon_subledger_entry_vw.crprt_melob_cd,
rpt_horizon_subledger_entry_vw.onst_offst_cd; 19 20 21 22
no rows selected
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=92944 Card=708 Bytes
=53100)
1 0 SORT (GROUP BY) (Cost=92944 Card=708 Bytes=53100)
2 1 VIEW OF 'RPT_HORIZON_SUBLEDGER_ENTRY_VW' (Cost=92937 Car
d=708 Bytes=53100)
3 2 SORT (UNIQUE) (Cost=92937 Card=708 Bytes=124962)
4 3 UNION-ALL
5 4 HASH JOIN (Cost=46456 Card=354 Bytes=60180)
6 5 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACCOU
NT' (Cost=2 Card=57 Bytes=1881)
7 5 NESTED LOOPS (Cost=46453 Card=354 Bytes=48498)
8 7 HASH JOIN (Cost=11065 Card=17694 Bytes=1362438
9 8 HASH JOIN (Cost=27 Card=87 Bytes=5133)
10 9 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cos
t=24 Card=87 Bytes=2175)
11 9 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_
ACCNT' (Cost=2 Card=111 Bytes=3774)
12 8 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_LI
NK' (Cost=11037 Card=142561 Bytes=2566098)
13 7 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGER_EN
TRY' (Cost=2 Card=1 Bytes=60)
14 13 INDEX (UNIQUE SCAN) OF 'SUBLEDGER_ENTRY_PK'
(UNIQUE) (Cost=1 Card=1)
15 4 HASH JOIN (Cost=46456 Card=354 Bytes=64782)
16 15 TABLE ACCESS (FULL) OF 'SUBLEDGER_CHART_OF_ACCOU
NT' (Cost=2 Card=57 Bytes=1881)
17 15 NESTED LOOPS (Cost=46453 Card=354 Bytes=53100)
18 17 HASH JOIN (Cost=11065 Card=17694 Bytes=1362438
19 18 HASH JOIN (Cost=27 Card=87 Bytes=5133)
20 19 TABLE ACCESS (FULL) OF 'HORIZON_LINE' (Cos
t=24 Card=87 Bytes=2175)
21 19 TABLE ACCESS (FULL) OF 'GNRL_LDGR_CHRT_OF_
ACCNT' (Cost=2 Card=111 Bytes=3774)
22 18 TABLE ACCESS (FULL) OF 'HORIZON_SUBLEDGER_LI
NK' (Cost=11037 Card=142561 Bytes=2566098)
23 17 TABLE ACCESS (BY INDEX ROWID) OF 'SUBLEDGER_EN
TRY' (Cost=2 Card=1 Bytes=73)
24 23 INDEX (UNIQUE SCAN) OF 'SUBLEDGER_ENTRY_PK'
(UNIQUE) (Cost=1 Card=1)
Statistics
1134 recursive calls
0 db block gets
38903505 consistent gets
598254 physical reads
60 redo size
901 bytes sent via SQL*Net to client
461 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
34 sorts (memory)
0 sorts (disk)
0 rows processed
Thanks a lot in advance
Jerry
Hi
Basically there are two kinds of tables
- fact
- lookup
The number of records in a lookup table is usually small.
The number of records in a fact table is usually huge.
However, in test systems the number of records in a fact table is often also small.
This results in different execution plans.
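If test needs to mimic production plans despite the smaller data volumes, one common approach is to copy the optimizer statistics across with DBMS_STATS; a sketch (the APP schema and PROD_STATS table names are hypothetical):

```sql
-- On production: stage the schema statistics in a transportable table
exec dbms_stats.create_stat_table(ownname => 'APP', stattab => 'PROD_STATS');
exec dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'PROD_STATS');

-- Move APP.PROD_STATS to test (exp/imp or Data Pump), then on test:
exec dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'PROD_STATS');
```

With production statistics in place, the test CBO sees production cardinalities and will usually generate the production plans, even against small tables.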
I notice again that you don't post version and platform info, and you didn't make sure your explain plan is properly indented.
Please read the FAQ to make sure it is properly indented.
Also, using the word 'buddies' is, as far as I am concerned, nearing disrespect and rudeness.
Sybrand Bakker
Senior Oracle DBA -
Post Upgrade SQL Performance Issue
Hello,
I just upgraded/migrated my database from 11.1.0.6 SE to 11.2.0.3 EE. I did this with a Data Pump export/import out of the 11.1.0.6 database and into a new 11.2.0.3 database. Both the old and the new database are on the same Linux server. The new database has 2 GB more RAM assigned to its SGA than the old one. Both DBs are using AMM.
The strange part is I have a SQL statement that completes in 1 second in the Old DB and takes 30 seconds in the new one. I even moved the SQL Plan from the Old DB into the New DB so they are using the same plan.
To sum up the issue. I have one SQL statement using the same SQL Plan running at dramatically different speeds on two different databases on the same server. The databases are 11.1.0.7 SE and 11.2.0.3 EE.
Not sure what is going on or how to fix it; any help would be great!
I have included Explains and Auto Traces from both NEW and OLD databases.
NEW DB Explain Plan (Slow)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 94861 | 193M| | 74043 (1)| 00:18:52 |
| 1 | SORT ORDER BY | | 94861 | 193M| 247M| 74043 (1)| 00:18:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 94861 | 193M| | 31803 (1)| 00:08:07 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1889 | 173K| | 455 (1)| 00:00:07 |
|* 5 | HASH JOIN | | 1889 | 164K| | 454 (1)| 00:00:07 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2138 | 21380 | | 8 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1889 | 145K| | 446 (1)| 00:00:07 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 92972 | 9987K| | 31347 (1)| 00:08:00 |
| 10 | NESTED LOOPS OUTER| | 92972 | 8443K| | 31346 (1)| 00:08:00 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 92972 | 7989K| | 31344 (1)| 00:08:00 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%' AND "MI"."CLAIM_NUMBER" IS NOT NULL)
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%' AND "M"."THEIR_GROUP_ID" IS NOT NULL)
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
Note
- SQL plan baseline "SYS_SQL_PLAN_a3c20fdcecd98dfe" used for this statement
OLD DB Explain Plan (Fast)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 95201 | 193M| | 74262 (1)| 00:14:52 |
| 1 | SORT ORDER BY | | 95201 | 193M| 495M| 74262 (1)| 00:14:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 95201 | 193M| | 31853 (1)| 00:06:23 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1943 | 178K| | 486 (1)| 00:00:06 |
|* 5 | HASH JOIN | | 1943 | 168K| | 486 (1)| 00:00:06 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2105 | 21050 | | 7 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1943 | 149K| | 479 (1)| 00:00:06 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 93258 | 9M| | 31367 (1)| 00:06:17 |
| 10 | NESTED LOOPS OUTER| | 93258 | 8469K| | 31358 (1)| 00:06:17 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 93258 | 8014K| | 31352 (1)| 00:06:17 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%')
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%')
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
NEW DB Auto trace (Slow)
active txn count during cleanout 0
blocks decrypted 0
buffer is not pinned count 664129
buffer is pinned count 3061793
bytes received via SQL*Net from client 3339
bytes sent via SQL*Net to client 28758
Cached Commit SCN referenced 662366
calls to get snapshot scn: kcmgss 3
calls to kcmgas 0
calls to kcmgcs 8
CCursor + sql area evicted 0
cell physical IO interconnect bytes 0
cleanout - number of ktugct calls 0
cleanouts only - consistent read gets 0
cluster key scan block gets 0
cluster key scans 0
commit cleanout failures: block lost 0
commit cleanout failures: callback failure 0
commit cleanouts 0
commit cleanouts successfully completed 0
Commit SCN cached 0
commit txn count during cleanout 0
concurrency wait time 0
consistent changes 0
consistent gets 985371
consistent gets - examination 2993
consistent gets direct 0
consistent gets from cache 985371
consistent gets from cache (fastpath) 982093
CPU used by this session 3551
CPU used when call started 3551
CR blocks created 0
cursor authentications 1
data blocks consistent reads - undo records applied 0
db block changes 0
db block gets 0
db block gets direct 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 3553
deferred (CURRENT) block cleanout applications 0
dirty buffers inspected 0
Effective IO time 0
enqueue releases 0
enqueue requests 0
execute count 3
file io wait time 0
free buffer inspected 0
free buffer requested 0
heap block compress 0
Heap Segment Array Updates 0
hot buffers moved to head of LRU 0
HSC Heap Segment Block Changes 0
immediate (CR) block cleanout applications 0
immediate (CURRENT) block cleanout applications 0
IMU Flushes 0
IMU ktichg flush 0
IMU Redo allocation size 0
IMU undo allocation size 0
index fast full scans (full) 2
index fetch by key 0
index scans kdiixs1 12944
lob reads 0
LOB table id lookup cache misses 0
lob writes 0
lob writes unaligned 0
logical read bytes from cache -517775360
logons cumulative 0
logons current 0
messages sent 0
no buffer to keep pinned count 10
no work - consistent read gets 982086
non-idle wait count 6
non-idle wait time 0
Number of read IOs issued 0
opened cursors cumulative 4
opened cursors current 1
OS Involuntary context switches 853
OS Maximum resident set size 0
OS Page faults 0
OS Page reclaims 2453
OS System time used 9
OS User time used 3549
OS Voluntary context switches 238
parse count (failures) 0
parse count (hard) 0
parse count (total) 1
parse time cpu 0
parse time elapsed 0
physical read bytes 0
physical read IO requests 0
physical read total bytes 0
physical read total IO requests 0
physical read total multi block requests 0
physical reads 0
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 0
physical reads direct (lob) 0
physical write bytes 0
physical write IO requests 0
physical write total bytes 0
physical write total IO requests 0
physical writes 0
physical writes direct 0
physical writes direct (lob) 0
physical writes non checkpoint 0
pinned buffers inspected 0
pinned cursors current 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo size for direct writes 0
redo subscn max counts 0
redo synch time 0
redo synch time (usec) 0
redo synch writes 0
Requests to/from client 3
rollbacks only - consistent read gets 0
RowCR - row contention 0
RowCR attempts 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 3
session logical reads 985371
session pga memory 131072
session pga memory max 0
session uga memory 392928
session uga memory max 0
shared hash latch upgrades - no wait 284
shared hash latch upgrades - wait 0
sorts (memory) 3
sorts (rows) 243
sql area evicted 0
sql area purged 0
SQL*Net roundtrips to/from client 4
switch current to new buffer 0
table fetch by rowid 1861456
table fetch continued row 9
table scan blocks gotten 0
table scan rows gotten 0
table scans (short tables) 0
temp space allocated (bytes) 0
undo change vector size 0
user calls 7
user commits 0
user I/O wait time 0
workarea executions - optimal 10
workarea memory allocated 342
OLD DB Auto trace (Fast)
active txn count during cleanout 0
buffer is not pinned count 4
buffer is pinned count 101
bytes received via SQL*Net from client 1322
bytes sent via SQL*Net to client 9560
calls to get snapshot scn: kcmgss 15
calls to kcmgas 0
calls to kcmgcs 0
calls to kcmgrs 1
cleanout - number of ktugct calls 0
cluster key scan block gets 0
cluster key scans 0
commit cleanouts 0
commit cleanouts successfully completed 0
concurrency wait time 0
consistent changes 0
consistent gets 117149
consistent gets - examination 56
consistent gets direct 115301
consistent gets from cache 1848
consistent gets from cache (fastpath) 1792
CPU used by this session 118
CPU used when call started 119
cursor authentications 1
db block changes 0
db block gets 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 123
deferred (CURRENT) block cleanout applications 0
Effective IO time 2012
enqueue conversions 3
enqueue releases 2
enqueue requests 2
enqueue waits 1
execute count 2
free buffer requested 0
HSC Heap Segment Block Changes 0
IMU Flushes 0
IMU ktichg flush 0
index fast full scans (full) 0
index fetch by key 101
index scans kdiixs1 0
lob writes 0
lob writes unaligned 0
logons cumulative 0
logons current 0
messages sent 0
no work - consistent read gets 117080
Number of read IOs issued 1019
opened cursors cumulative 3
opened cursors current 1
OS Involuntary context switches 54
OS Maximum resident set size 7868
OS Page faults 12
OS Page reclaims 2911
OS System time used 57
OS User time used 71
OS Voluntary context switches 25
parse count (failures) 0
parse count (hard) 0
parse count (total) 3
parse time cpu 0
parse time elapsed 0
physical read bytes 944545792
physical read IO requests 1019
physical read total bytes 944545792
physical read total IO requests 1019
physical read total multi block requests 905
physical reads 115301
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 115301
physical reads prefetch warmup 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo synch writes 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 2
session logical reads 117149
session pga memory -983040
session pga memory max 0
session uga memory 0
session uga memory max 0
shared hash latch upgrades - no wait 0
sorts (memory) 2
sorts (rows) 157
sql area purged 0
SQL*Net roundtrips to/from client 3
table fetch by rowid 0
table fetch continued row 0
table scan blocks gotten 117077
table scan rows gotten 1972604
table scans (direct read) 1
table scans (long tables) 1
table scans (short tables) 2
undo change vector size 0
user calls 5
user I/O wait time 0
workarea executions - optimal 4
Hi Srini,
Yes, the stats on the tables and indexes are current in both DBs. However, the NEW DB has "System Stats" in sys.aux_stats$ and the OLD DB does not. The old DB has optimizer_index_caching=0 and optimizer_index_cost_adj=100. The new DB has them at optimizer_index_caching=90 and optimizer_index_cost_adj=25, but it should not be using them because of the "System Stats".
Also I thought none of the SQL Optimize stuff would matter because I forced in my own SQL Plan using SPM.
Differences in init.ora
OLD-11 optimizerpush_pred_cost_based = FALSE
NEW-15 audit_sys_operations = FALSE
audit_trail = "DB, EXTENDED"
awr_snapshot_time_offset = 0
OLD-16 audit_sys_operations = TRUE
audit_trail = "XML, EXTENDED"
NEW-22 cell_offload_compaction = "ADAPTIVE"
cell_offload_decryption = TRUE
cell_offload_plan_display = "AUTO"
cell_offload_processing = TRUE
NEW-28 clonedb = FALSE
NEW-32 compatible = "11.2.0.0.0"
OLD-27 compatible = "11.1.0.0.0"
NEW-37 cursor_bind_capture_destination = "memory+disk"
cursor_sharing = "FORCE"
OLD-32 cursor_sharing = "EXACT"
NEW-50 db_cache_size = 4294967296
db_domain = "my.com"
OLD-44 db_cache_size = 0
NEW-54 db_flash_cache_size = 0
NEW-58 db_name = "NEWDB"
db_recovery_file_dest_size = 214748364800
OLD-50 db_name = "OLDDB"
db_recovery_file_dest_size = 8438939648
NEW-63 db_unique_name = "NEWDB"
db_unrecoverable_scn_tracking = TRUE
db_writer_processes = 2
OLD-55 db_unique_name = "OLDDB"
db_writer_processes = 1
NEW-68 deferred_segment_creation = TRUE
NEW-71 dispatchers = "(PROTOCOL=TCP) (SERVICE=NEWDBXDB)"
OLD-61 dispatchers = "(PROTOCOL=TCP) (SERVICE=OLDDBXDB)"
NEW-73 dml_locks = 5068
dst_upgrade_insert_conv = TRUE
OLD-63 dml_locks = 3652
drs_start = FALSE
NEW-80 filesystemio_options = "SETALL"
OLD-70 filesystemio_options = "none"
NEW-87 instance_name = "NEWDB"
OLD-77 instance_name = "OLDDB"
NEW-94 job_queue_processes = 1000
OLD-84 job_queue_processes = 100
NEW-104 log_archive_dest_state_11 = "enable"
log_archive_dest_state_12 = "enable"
log_archive_dest_state_13 = "enable"
log_archive_dest_state_14 = "enable"
log_archive_dest_state_15 = "enable"
log_archive_dest_state_16 = "enable"
log_archive_dest_state_17 = "enable"
log_archive_dest_state_18 = "enable"
log_archive_dest_state_19 = "enable"
NEW-114 log_archive_dest_state_20 = "enable"
log_archive_dest_state_21 = "enable"
log_archive_dest_state_22 = "enable"
log_archive_dest_state_23 = "enable"
log_archive_dest_state_24 = "enable"
log_archive_dest_state_25 = "enable"
log_archive_dest_state_26 = "enable"
log_archive_dest_state_27 = "enable"
log_archive_dest_state_28 = "enable"
log_archive_dest_state_29 = "enable"
NEW-125 log_archive_dest_state_30 = "enable"
log_archive_dest_state_31 = "enable"
NEW-139 log_buffer = 7012352
OLD-108 log_buffer = 34412032
OLD-112 max_commit_propagation_delay = 0
NEW-144 max_enabled_roles = 150
memory_max_target = 12884901888
memory_target = 8589934592
nls_calendar = "GREGORIAN"
OLD-114 max_enabled_roles = 140
memory_max_target = 6576668672
memory_target = 6576668672
NEW-149 nls_currency = "$"
nls_date_format = "DD-MON-RR"
nls_date_language = "AMERICAN"
nls_dual_currency = "$"
nls_iso_currency = "AMERICA"
NEW-157 nls_numeric_characters = ".,"
nls_sort = "BINARY"
NEW-160 nls_time_format = "HH.MI.SSXFF AM"
nls_time_tz_format = "HH.MI.SSXFF AM TZR"
nls_timestamp_format = "DD-MON-RR HH.MI.SSXFF AM"
nls_timestamp_tz_format = "DD-MON-RR HH.MI.SSXFF AM TZR"
NEW-172 optimizer_features_enable = "11.2.0.3"
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
OLD-130 optimizer_features_enable = "11.1.0.6"
optimizer_index_caching = 0
optimizer_index_cost_adj = 100
NEW-184 parallel_degree_limit = "CPU"
parallel_degree_policy = "MANUAL"
parallel_execution_message_size = 16384
parallel_force_local = FALSE
OLD-142 parallel_execution_message_size = 2152
NEW-189 parallel_max_servers = 320
OLD-144 parallel_max_servers = 0
NEW-192 parallel_min_time_threshold = "AUTO"
NEW-195 parallel_servers_target = 128
NEW-197 permit_92_wrap_format = TRUE
OLD-154 plsql_native_library_subdir_count = 0
NEW-220 result_cache_max_size = 21495808
OLD-173 result_cache_max_size = 0
NEW-230 service_names = "NEWDB, NEWDB.my.com, NEW"
OLD-183 service_names = "OLDDB, OLD.my.com"
NEW-233 sessions = 1152
sga_max_size = 12884901888
OLD-186 sessions = 830
sga_max_size = 6576668672
NEW-238 shared_pool_reserved_size = 35232153
OLD-191 shared_pool_reserved_size = 53687091
OLD-199 sql_version = "NATIVE"
NEW-248 star_transformation_enabled = "TRUE"
OLD-202 star_transformation_enabled = "FALSE"
NEW-253 timed_os_statistics = 60
OLD-207 timed_os_statistics = 5
NEW-256 transactions = 1267
OLD-210 transactions = 913
NEW-262 use_large_pages = "TRUE"
-
Please go through the important checklist/guidelines below to identify the cause of any performance issue and reach a resolution quickly.
Checklist for Quick Performance problem Resolution
· get trace, code and other information for given PE case
- Latest Code from Production env
- Trace (sql queries, statistics, row source operations with row count, explain plan, all wait events)
- Program parameters & their frequently used values
- Run Frequency of the program
- existing Run-time/response time in Production
- Business Purpose
· Identify most time consuming SQL taking more than 60 % of program time using Trace & Code analysis
· Check all mandatory parameters/bind variables are directly mapped to index columns of large transaction tables without any functions
· Identify most time consuming operation(s) using Row Source Operation section
· Study program parameter input directly mapped to SQL
· Identify all Input bind parameters being used to SQL
· Is SQL query returning large records for given inputs
· Which large tables, and which of their columns, are mapped to the input parameters?
· which operation is scanning highest number of records in Row Source operation/Explain Plan
· Is Oracle Cost Based Optimizer using right Driving table for given SQL ?
· Check the time consuming index on large table and measure Index Selectivity
· Study Where clause for input parameters mapped to tables and their columns to find the correct/optimal usage of index
· Is correct index being used for all large tables?
· Is there any Full Table Scan on Large tables ?
· Is there any unwanted Table being used in SQL ?
· Evaluate Join condition on Large tables and their columns
· Is the FTS on a large table caused by use of non-index columns?
· Is there any implicit or explicit conversion causing index not getting used ?
· Are the statistics on all large tables up to date?
Quick Resolution tips
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
2) Use Data Caching Technique/Options to cache static data
3) Use Pipe Line Table Functions whenever possible
4) Use Global Temporary Table, Materialized view to process complex records
5) Try to avoid multiple network trips for every row between two databases using a dblink; use a global temporary table or set operators to reduce network trips
6) Use an EXTERNAL table to build the interface rather than creating a custom table and program to load and validate the data
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
8) Follow Oracle PL/SQL Best Practices
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
12) Review Join condition on existing query explain plan
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
14) Avoid applying SQL functions on index columns
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
Thanks
Praful
I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
No, use pure SQL.
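To make that point concrete, here is a sketch contrasting row-by-row processing with a single set-based statement. It uses Python's bundled sqlite3 as a stand-in database; the table and column names are hypothetical, and the same idea applies to PL/SQL loops versus one UPDATE.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("emp_rowwise", "emp_setwise"):
    conn.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, sal REAL)")
    conn.executemany(f"INSERT INTO {t} VALUES (?, ?)",
                     [(i, 1000.0) for i in range(5)])

# Row-by-row ("slow-by-slow") processing: one statement per row.
for (emp_id,) in conn.execute("SELECT id FROM emp_rowwise").fetchall():
    conn.execute("UPDATE emp_rowwise SET sal = sal * 1.1 WHERE id = ?",
                 (emp_id,))

# Pure SQL: the same result in one set-based statement, which the
# engine can optimize as a whole.
conn.execute("UPDATE emp_setwise SET sal = sal * 1.1")

row = conn.execute("SELECT SUM(sal) FROM emp_rowwise").fetchone()[0]
setw = conn.execute("SELECT SUM(sal) FROM emp_setwise").fetchone()[0]
assert abs(row - setw) < 1e-9  # both approaches give the same data
```

The set-based form is usually also the faster one on a real server, since it avoids per-row statement overhead and lets the optimizer work on the whole operation.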
2) Use Data Caching Technique/Options to cache static data
No, use pure SQL, and the database and operating system will handle caching.
3) Use Pipe Line Table Functions whenever possible
No, use pure SQL
4) Use Global Temporary Table, Materialized view to process complex records
No, use pure SQL
5) Try to avoid multiple network trips for every row between two databases using a dblink; use a global temporary table or set operators to reduce network trips
No, use pure SQL
6) Use an EXTERNAL table to build the interface rather than creating a custom table and program to load and validate the data
Makes no sense.
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
What about using the execution trace?
8) Follow Oracle PL/SQL Best Practices
Which are?
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
You mean design your database and queries properly? And table scanning is not always bad.
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
It depends if that is necessary or not.
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
12) Review Join condition on existing query explain plan
Well, if you don't have your join conditions right then your query won't work, so that's obvious.
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
No. Oracle recommends that you do not use hints for query optimization (it says so in the documentation). Only certain hints, such as APPEND, which relate to specific operations like direct-path inserts, are generally acceptable. Oracle recommends you use the query optimization tools to help tune your queries rather than hints.
14) Avoid applying SQL functions on index columns
Why? If there's a need for a function based index, then it should be used.
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
See 13.
In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits.
-
How to load XML file to table (non-XML) with SQL*Loader -- issue with nulls
I have been attempting to use SQL*Loader to load an XML file into a "regular" Oracle table. All fields work fine, unless a null is encountered. The way that nulls are represented is shown below:
<PAYLOAD>
<FIELD1>ABCDEF</FIELD1>
<FIELD2/>
<FIELD3>123456</FIELD3>
</PAYLOAD>
In the above example, FIELD2 is a null field and that is the way it is presented. I have searched everywhere and have not found how I could code for this. The issue is that if FIELD2 is present, it is coded like: <FIELD2>SOMEDATA</FIELD2>, but the null is represented as <FIELD2/>. Here is a sample of the control file I am using to attempt the load -- very simplistic, but works fine when fields are present:
load data
infile 'testdata.xml' "str '<PAYLOAD>'"
TRUNCATE
into table DATA_FROM_XML
(FIELD1 ENCLOSED BY '<FIELD1>' AND '</FIELD1>',
FIELD2 ENCLOSED BY '<FIELD2>' AND '</FIELD2>',
FIELD3 ENCLOSED BY '<FIELD3>' AND '</FIELD3>')
What do I need to do to account for the way that nulls are presented? I have tried everything I could glean from the web and the documentation and nothing has worked. Any help would be really appreciated.
I hadn't even got that far. Can you direct me to where the docs are for importing data that is stored within XML, where you don't need any XML functionality and that just happens to be the format the data is stored in? Thx
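Since ENCLOSED BY expects both delimiters to be present, one workaround is to preprocess the file so self-closing tags like `<FIELD2/>` become `<FIELD2></FIELD2>` before SQL*Loader sees them. A minimal sketch (the preprocessing script is an assumption, not part of SQL*Loader itself; an empty enclosed field should then load as NULL, but verify against your SQL*Loader version):

```python
import re

def expand_self_closing(xml_text: str) -> str:
    """Rewrite self-closing tags like <FIELD2/> into <FIELD2></FIELD2>
    so SQL*Loader's ENCLOSED BY delimiters are always present."""
    return re.sub(r"<(\w+)\s*/>", r"<\1></\1>", xml_text)

# Example: a null FIELD2 becomes an empty enclosed field.
payload = "<PAYLOAD><FIELD1>ABCDEF</FIELD1><FIELD2/><FIELD3>123456</FIELD3></PAYLOAD>"
print(expand_self_closing(payload))
# <PAYLOAD><FIELD1>ABCDEF</FIELD1><FIELD2></FIELD2><FIELD3>123456</FIELD3></PAYLOAD>
```

You would run this over testdata.xml before invoking sqlldr, so the control file itself needs no changes.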
-
I am attempting to connect to a Progress database using JDBC and am having a few issues with my SQL statement. The code concerned looks like this:
// Get a statement from the connection
Statement stmt = conn.createStatement() ;
// Execute the query
ResultSet rs = stmt.executeQuery( "SELECT * FROM pub.cust" ) ;
// Loop through the result set
while( rs.next() )
System.out.println( rs.getString( "cust-code" ) ) ;
That works just fine. Prints out each of the cust-code values. My issue comes when I try to refine the query to be:
ResultSet rs = stmt.executeQuery( "SELECT * FROM pub.cust WHERE cust.Cust-code='100001-0'" ) ;
or even
ResultSet rs = stmt.executeQuery( "SELECT Cust-code FROM pub.cust" ) ;
Both of these give me an error saying Column not found/specified.
If I wrap Cust-code in single quotes, the script prints Cust-code on each line, not the value for it. If I escape Cust-code like Cust\-code, I get an Illegal escape character. Any ideas?
I have never used Progress, but maybe you have to fully qualify the field names like this:
ResultSet rs = stmt.executeQuery( "SELECT * FROM pub.cust WHERE pub.cust.Cust-code='100001-0'" ) ;
ResultSet rs = stmt.executeQuery( "SELECT pub.cust.Cust-code FROM pub.cust" ) ;
Just a thought.
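Another possibility worth trying: the hyphen makes an unquoted `Cust-code` parse as the subtraction `Cust - code`. The SQL-standard fix is a delimited identifier in double quotes, e.g. `SELECT "Cust-code" FROM pub.cust`, though Progress's SQL engine may have its own quoting conventions. A small sketch of the quoting rule (Python here rather than Java, and the helper name is hypothetical; the resulting string is what you would pass to stmt.executeQuery):

```python
def quote_ident(name: str) -> str:
    """Wrap a column name in SQL-standard double quotes, doubling any
    embedded quotes, so names with hyphens parse as identifiers."""
    return '"' + name.replace('"', '""') + '"'

# Unquoted, Cust-code would parse as the expression (Cust - code).
sql = (f"SELECT {quote_ident('Cust-code')} FROM pub.cust "
       f"WHERE {quote_ident('Cust-code')} = '100001-0'")
print(sql)
# SELECT "Cust-code" FROM pub.cust WHERE "Cust-code" = '100001-0'
```

If double quotes are rejected, check the driver's documentation for its delimited-identifier syntax (some drivers have historically used other quoting characters).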
Oracle Sql Query issue Running on Different DB Version
Hello All,
I have run into a situation where we are tuning SQL queries on different Oracle DB versions and have a performance issue. Let me describe it briefly; I would really appreciate your prompt response as this is very important.
I have a query that takes around 30 minutes on a 7.3.4 database, whereas the same query on 8i takes 15 seconds, which is a huge difference. I have analyzed the statistics on 7.3 and the cost is comparatively very high. The query selects data from a table in the same schema, two tables in another DB via a DB link, and two further tables in a third DB via another DB link. How can we optimize this and get the 7.3 database to run in the same time as 8i? I hope my question is clear; eagerly waiting for your replies.
Thanks in Advance.
Message was edited by:
Ram8
Difficult to be sure without any more detailed information, but I suspect that O7 is in effect copying the remote tables to local temp space, then joining; 8i is factoring out a better query to send to the remote DBs, which does as much work as possible on the remote DB before shipping remaining rows back to local.
You should be able to use EXPLAIN PLAN to identify what SQL is being shipped to the remote DB. If you can't (and it's been quite a while since I tried DB links or O7), then get the remote DBs to yourself and set SQL_TRACE on for the remote instances. Execute the query and then examine the remote trace files. And don't forget to turn off the tracing when you're done.
Of course, it could just be that the CBO got better...
HTH - if not, post your query and plans for the local db, and the remote queries.
Regards Nigel -
SQL Developer 1.1.2.25 Build MAIN-25.79
Java 1.5.0_11
Issue 1:
The version in the about box lists '(null) Version 1.1.2.25'
Issue 2:
Right-clicking on a cell in the results window yields the following exception in the console:
java.lang.IndexOutOfBoundsException: out of bounds: dataSize: 1912 offset: 904 length: 1104
at oracle.javatools.buffer.GapArrayTextBuffer.checkOffsets(GapArrayTextBuffer.java:506)
at oracle.javatools.buffer.GapArrayTextBuffer.getStringImpl(GapArrayTextBuffer.java:209)
at oracle.javatools.buffer.AbstractTextBuffer.getString(AbstractTextBuffer.java:343)
at oracle.ide.model.TextNode$TextBufferWrapper.getString(TextNode.java:1743)
at oracle.javatools.editor.BasicDocument.getText(BasicDocument.java:679)
at oracle.dbdev.oviewer.base.PopupDescribe.menuWillShow(PopupDescribe.java:197)
at oracle.ide.controller.ContextMenu.callMenuWillShow(ContextMenu.java:489)
at oracle.ide.controller.ContextMenu.prepareShow(ContextMenu.java:265)
at oracle.ide.controller.ContextMenu.show(ContextMenu.java:229)
at oracle.dbtools.sqlworksheet.sqlview.SqlEditorMainPanel._tryContextMenu(SqlEditorMainPanel.java:1262)
at oracle.dbtools.sqlworksheet.sqlview.SqlEditorMainPanel.access$1900(SqlEditorMainPanel.java:161)
at oracle.dbtools.sqlworksheet.sqlview.SqlEditorMainPanel$SqlEditorHandler.mouseReleased(SqlEditorMainPanel.java:1959)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:232)
at java.awt.Component.processMouseEvent(Component.java:5501)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3135)
at java.awt.Component.processEvent(Component.java:5266)
at java.awt.Container.processEvent(Container.java:1966)
at java.awt.Component.dispatchEventImpl(Component.java:3968)
at java.awt.Container.dispatchEventImpl(Container.java:2024)
at java.awt.Component.dispatchEvent(Component.java:3803)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4212)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:3892)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:3822)
at java.awt.Container.dispatchEventImpl(Container.java:2010)
at java.awt.Window.dispatchEventImpl(Window.java:1778)
at java.awt.Component.dispatchEvent(Component.java:3803)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:463)
at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:163)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:157)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:149)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:110)
Issue 3:
After issue 2 occurs, selecting Export from the context menu causes the application to freeze with no further exceptions.
Notes:
I am able to reproduce this every time for a specific long-running query which completes normally after 260-290 seconds. I cannot reproduce this issue for queries which complete rapidly (i.e. select * from table).
On Issue 1:
As MRM has said, this was posted several times for 1.1.2 and I noticed the (null) in the Help -> About window with 1.1.2. I have now upgraded to 1.1.3 via check for updates and I don't get the (null) anymore, so I would guess that you are still on 1.1.2. Note that my Help -> About shows Version 1.1.2.25.79, with the 1.1.3.27.66 version only appearing in the Extension tab.
On Issue 3:
1.1.3.27.66 (as per Extensions) runs queries twice to export - once to show the data in the Results (ie 1st 260-290 seconds) and once again when exporting (ie 2nd 260-290 seconds).
1.1.2.25.79 (as per Extensions) runs queries three times to export - once to show the data in the Results (ie 1st 260-290 seconds), once as part of opening the Export dialog (ie 2nd 260-290 seconds) and once again when exporting (ie 3rd 260-290 seconds).
This means that if you are really still on 1.1.2 it will take up to 5 minutes for the Export dialog to appear :(