Client-induced table scan
We have a situation where a combination of NLS_COMP='LINGUISTIC' in an ONLOGON trigger and
Select X.stk_num From X left join Y on X.stk_num = Y.stk_num WHERE Y.textfield = 'asdf'
induces Oracle to do a full table scan on X for the join when executed from FoxPro (and FoxPro only). The index on X.stk_num exists as a linguistic function-based index:
CREATE INDEX X_STK_NUM ON X (NLSSORT(stk_num, 'NLS_SORT=BINARY_CI'));
I tried adding an index in plain old binary sort order as well, with no effect. If we change the ONLOGON trigger to set NLS_COMP='BINARY', Oracle uses the index. What really confuses us is that when the query is executed from other clients over the same ODBC DSN, or from JDBC, Oracle always uses the index no matter what the ONLOGON trigger says. (Incidentally, the index on Y.textfield is always used.)
The only difference I can see in the connection parameters after sniffing the connection (an admittedly clumsy approach, but I'm not a DBA) is that FoxPro includes
AUTH_RTT=25955
AUTH_CLNT_MEM=4096
whatever they are.
The reason for our overall approach is that we are modifying a large FoxPro application to use Oracle on the back end in place of native FoxPro tables. We created the typical ONLOGON trigger...
<snippet>
cmmd1:='ALTER SESSION SET NLS_SORT=BINARY_CI';
cmmd2:='ALTER SESSION SET NLS_COMP=LINGUISTIC';
EXECUTE IMMEDIATE cmmd1;
EXECUTE IMMEDIATE cmmd2;
</snippet>
to preserve the application's Fox/Windows case-insensitive behavior. We also created function-based indexes, as shown above, to work with the case-insensitive queries. We sometimes have to adjust the SQL in Fox to account for the lack of control over explicitly setting data type parameters in WHERE clauses, but this is not a parameter issue, since the table scan is on the join.
We would like to keep NLS_COMP = 'LINGUISTIC' rather than 'BINARY' so that case-insensitivity is honored in all contexts, per Oracle's documentation, but obviously we cannot if Oracle won't use the indexes. Any ideas?
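One way to confirm what each client actually ends up with (a hedged suggestion, not from the original post) is to query the session NLS settings from each client after connecting:

```sql
-- Run from both the FoxPro connection and a JDBC/ODBC connection, then compare.
SELECT parameter, value
FROM   nls_session_parameters
WHERE  parameter IN ('NLS_COMP', 'NLS_SORT');
```

If the values differ between clients, the logon trigger is not firing (or is being overridden) for one of them, which by itself would explain the plan difference.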
Some quick thoughts:
1) Why are you fooling everybody, including yourself, by writing a LEFT [OUTER] JOIN with a WHERE clause on the outer table, when it is effectively an inner join? The predicate Y.textfield = 'asdf' discards exactly the NULL-extended rows the outer join preserves. Do you really understand what you are getting back, and is it really what you need?
2) Is stk_num a NUMBER? If so, does NLS_COMP really make any difference for numbers? I'm not certain, but my first thought is that for numbers it shouldn't matter, since linguistic comparison applies to character data.
3) You can set event 10053 to trace why Oracle chooses this particular execution plan. Perhaps this client changes something in the optimizer environment?
4) For cases like this I'd personally use a function-based index on something like UPPER(column), with the same expression in both the index and the WHERE and/or join clause. It may be hard for you to use because you would need to change too much code.
5) You could also post this question in the [url http://forums.oracle.com/forums/forum.jspa?forumID=50&start=0]globalization support forum if it really is a problem with NLS_COMP = 'LINGUISTIC'.
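A minimal sketch of suggestion 4, using the table and column names from the question (the index name is assumed):

```sql
-- Function-based index on UPPER(); the query must use the same expression
-- for the index to be usable.
CREATE INDEX y_textfield_upper ON Y (UPPER(textfield));

SELECT x.stk_num
FROM   X
LEFT JOIN Y ON x.stk_num = y.stk_num
WHERE  UPPER(y.textfield) = UPPER('asdf');
```

This keeps comparisons case-insensitive even with NLS_COMP = 'BINARY', at the cost of touching the application SQL.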
Gints Plivna
http://www.gplivna.eu
Similar Messages
-
Serial table scan with direct path read compared to db file scattered read
Hi,
The environment
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
8K block size
db_file_multiblock_read_count is 128
show sga
Total System Global Area 1.6702E+10 bytes
Fixed Size 2219952 bytes
Variable Size 7918846032 bytes
Database Buffers 8724152320 bytes
Redo Buffers 57090048 bytes
16GB of SGA with 8GB of db buffer cache.
-- database is built on Solid State Disks
-- SQL trace and wait events
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true )
-- The underlying table is called tdash. It has 1.7 Million rows based on data in all_objects. NO index
TABLE_NAME Rows Table Size/MB Used/MB Free/MB
TDASH 1,729,204 15,242 15,186 56
TABLE_NAME Allocated blocks Empty blocks Average space/KB Free list blocks
TDASH 1,943,823 7,153 805 0
Objectives
To show that for serial scans, a database built on solid state disks (SSD) gains far less over magnetic disks (HDD) than random reads with index scans do.
Approach
We want to read the first 100 rows of the tdash table randomly into the buffer, recording the wait events and wait times generated. The idea is that on SSD the wait times will be better than on HDD, but not by much, given the serial nature of table scans.
The code used
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_with_tdash_ssdtester_noindex';
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
/
Server is rebooted prior to any tests.
When run with defaults, the optimizer (although some attribute this to the execution engine) chooses direct path read into the PGA in preference to db file scattered read.
With this choice it takes 6,520 seconds to complete the query. The results are shown below
SQL ID: 78kxqdhk1ubvq
Plan Hash: 1148949653
SELECT *
FROM
TDASH WHERE OBJECT_ID = :B1
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 2 47 0 0
Execute 100 0.00 0.00 1 51 0 0
Fetch 100 10.88 6519.89 194142802 194831012 0 100
total 201 10.90 6519.90 194142805 194831110 0 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL TDASH (cr=1948310 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 2 0.00 0.00
direct path read 1517504 0.05 6199.93
asynch descriptor resize 196 0.00 0.00
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 3.84 4.03 320 48666 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 3.84 4.03 320 48666 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID: 9babjv8yq8ru3
Plan Hash: 0
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 3.84 4.03 320 48666 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.84 4.03 320 48666 0 2
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.01 0.00 2 47 0 0
Execute 129 0.01 0.00 1 52 2 1
Fetch 140 10.88 6519.89 194142805 194831110 0 130
total 278 10.91 6519.91 194142808 194831209 2 131
Misses in library cache during parse: 9
Misses in library cache during execute: 8
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 5 0.00 0.00
Disk file operations I/O 3 0.00 0.00
direct path read 1517504 0.05 6199.93
asynch descriptor resize 196 0.00 0.00
102 user SQL statements in session.
29 internal SQL statements in session.
131 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: mydb_ora_16394_test_with_tdash_ssdtester_noindex.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
102 user SQL statements in trace file.
29 internal SQL statements in trace file.
131 SQL statements in trace file.
11 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ssdtester.plan_table
Schema was specified.
Table was created.
Table was dropped.
1531657 lines in trace file.
6520 elapsed seconds in trace file.
I then force the query not to use direct path read by invoking
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1'; -- No direct path read
In this case the optimizer predominantly uses db file scattered read and the query takes 4,299 seconds to finish, around 34% faster than with direct path read (the default).
The report is shown below
SQL ID: 78kxqdhk1ubvq
Plan Hash: 1148949653
SELECT *
FROM
TDASH WHERE OBJECT_ID = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 2 47 0 0
Execute 100 0.00 0.00 2 51 0 0
Fetch 100 143.44 4298.87 110348670 194490912 0 100
total 201 143.45 4298.88 110348674 194491010 0 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL TDASH (cr=1944909 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 129759 0.01 17.50
db file scattered read 1218651 0.05 3770.02
latch: object queue header operation 2 0.00 0.00
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 3.92 4.07 319 48625 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 3.92 4.07 319 48625 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID: 9babjv8yq8ru3
Plan Hash: 0
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 3.92 4.07 319 48625 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.92 4.07 319 48625 0 2
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.01 0.00 2 47 0 0
Execute 129 0.00 0.00 2 52 2 1
Fetch 140 143.44 4298.87 110348674 194491010 0 130
total 278 143.46 4298.88 110348678 194491109 2 131
Misses in library cache during parse: 9
Misses in library cache during execute: 8
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 129763 0.01 17.50
Disk file operations I/O 3 0.00 0.00
db file scattered read 1218651 0.05 3770.02
latch: object queue header operation 2 0.00 0.00
102 user SQL statements in session.
29 internal SQL statements in session.
131 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: mydb_ora_26796_test_with_tdash_ssdtester_noindex_NDPR.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
102 user SQL statements in trace file.
29 internal SQL statements in trace file.
131 SQL statements in trace file.
11 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ssdtester.plan_table
Schema was specified.
Table was created.
Table was dropped.
1357958 lines in trace file.
4299 elapsed seconds in trace file.
I note that there are 1,517,504 direct path read waits with a total time of nearly 6,200 seconds. In comparison, without direct path read there are 1,218,651 db file scattered read waits with a total wait time of 3,770 seconds. My understanding is that direct path read can use single- or multi-block reads into the PGA, whereas db file scattered reads do multi-block reads into multiple discontiguous SGA buffers. So is it possible, given the higher number of direct path waits, that Oracle cannot do multi-block reads (contiguous buffers within the PGA) and has to revert to single-block reads, resulting in more calls and more waits?
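As a hedged aside (not part of the original trace, and it assumes the DBA and V$ views are accessible): in 11g a serial full scan becomes eligible for direct path read roughly when the segment is large relative to the buffer cache. The exact heuristic is an undocumented internal, so the comparison below is only a sanity check, not the actual decision rule:

```sql
-- Compare the segment's block count with the buffer cache expressed in blocks.
-- Table name taken from the post; if table_blocks dwarfs cache_blocks, a
-- direct path read of TDASH is the expected default in 11g.
SELECT t.blocks                  AS table_blocks,
       ROUND(c.bytes / b.value)  AS cache_blocks
FROM   dba_tables  t,
       v$sgainfo   c,
       v$parameter b
WHERE  t.table_name = 'TDASH'
AND    c.name       = 'Buffer Cache Size'
AND    b.name       = 'db_block_size';
```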
Appreciate any advice, and apologies for being long-winded.
Thanks,
Mich
Hi Charles,
I am doing your tests for t1 table using my server.
Just to clarify my environment is:
I did the whole of this test on my server. My server has I7-980 HEX core processor with 24GB of RAM and 1 TB of HDD SATA II for test/scratch backup and archive. The operating system is RHES 5.2 64-bit installed on a 120GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive.
Oracle version installed was 11g Enterprise Edition Release 11.2.0.1.0 -64bit. The binaries were created on HDD. Oracle itself was configured with 16GB of SGA, of which 7.5GB was allocated to Variable Size and 8GB to Database Buffers.
For Oracle tablespaces including SYS, SYSTEM, SYSAUX, TEMPORARY, UNDO and redo logs, I used file systems on a 240GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive. With 4K random read at 53,500 IOPS and 4K random write at 56,000 IOPS (manufacturer's figures), this drive is probably one of the fastest commodity SSDs using NAND flash memory with Multi-Level Cell (MLC). My T1 table was created per your script and has the following rows and blocks (8K block size):
SELECT
NUM_ROWS,
BLOCKS
FROM
USER_TABLES
WHERE
TABLE_NAME='T1';
NUM_ROWS BLOCKS
12000000 178952
which is pretty much identical to yours.
Then I ran the query as below:
set timing on
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_bed_T1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
SELECT
COUNT(*)
FROM
T1
WHERE
RN=1;
which gives
COUNT(*)
60000
Elapsed: 00:00:05.29
tkprof output shows
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.02 5.28 178292 178299 0 1
total 4 0.02 5.28 178292 178299 0 1
Compared to yours:
Fetch 2 0.60 4.10 178493 178498 0 1
It appears that my CPU utilisation is an order of magnitude better, but my elapsed time is worse!
The way I see it, elapsed time = CPU time + wait time. Further down I have
Rows Row Source Operation
1 SORT AGGREGATE (cr=178299 pr=178292 pw=0 time=0 us)
60000 TABLE ACCESS FULL T1 (cr=178299 pr=178292 pw=0 time=42216 us cost=48697 size=240000 card=60000)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
60000 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 3 0.00 0.00
SQL*Net message from client 3 0.00 0.00
Disk file operations I/O 3 0.00 0.00
direct path read 1405 0.00 4.68
Your direct path reads are
direct path read 1404 0.01 3.40
This indicates to me that you have faster disks than mine, whereas it sounds like my CPU is faster than yours.
With db file scattered read I get
Elapsed: 00:00:06.95
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.22 6.93 178293 178315 0 1
total 4 1.22 6.94 178293 178315 0 1
Rows Row Source Operation
1 SORT AGGREGATE (cr=178315 pr=178293 pw=0 time=0 us)
60000 TABLE ACCESS FULL T1 (cr=178315 pr=178293 pw=0 time=41832 us cost=48697 size=240000 card=60000)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
60000 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
Disk file operations I/O 3 0.00 0.00
db file sequential read 1 0.00 0.00
db file scattered read 1414 0.00 5.36
SQL*Net message from client 2 0.00 0.00
compared to your
db file scattered read 1415 0.00 4.16
On the face of it, this test of mine shows a 21% improvement with direct path read compared to db file scattered read. So now I can go back to revisit my original test results:
First default with direct path read
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 2 47 0 0
Execute 100 0.00 0.00 1 51 0 0
Fetch 100 10.88 6519.89 194142802 194831012 0 100
total 201 10.90 6519.90 194142805 194831110 0 100
CPU ~ 11 sec, elapsed ~ 6520 sec
wait stats
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
direct path read 1517504 0.05 6199.93
roughly 0.004 sec for each I/O
Now with db file scattered read I get
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 2 47 0 0
Execute 100 0.00 0.00 2 51 0 0
Fetch 100 143.44 4298.87 110348670 194490912 0 100
total 201 143.45 4298.88 110348674 194491010 0 100
CPU ~ 143 sec, elapsed ~ 4299 sec
and waits:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 129759 0.01 17.50
db file scattered read 1218651 0.05 3770.02
roughly 17.5/129759 = 0.00013 sec for single-block I/O and 3770.02/1218651 = 0.0030 sec for multi-block I/O
My theory is that the improvement comes from the large buffer cache (8,320 MB) inducing Oracle to do read-aheads (asynchronous pre-fetch). Read-aheads are quasi-logical I/Os and are cheaper than physical I/O. So when there is a large buffer cache and read-aheads can be done, is using the buffer cache a better choice than the PGA?
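A hedged sketch for repeating this comparison without a full trace: the cumulative waits for both read paths can be read from V$SESSION_EVENT for the current session before and after each run:

```sql
-- Snapshot the session's cumulative waits for the three read events involved.
SELECT event,
       total_waits,
       time_waited_micro / 1e6 AS seconds_waited
FROM   v$session_event
WHERE  sid = SYS_CONTEXT('USERENV', 'SID')
AND    event IN ('direct path read',
                 'db file scattered read',
                 'db file sequential read');
```

Taking the delta between two snapshots around each test isolates that test's waits without rebooting or parsing a trace file.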
Regards,
Mich -
How to avoid full Table scan when using Rule based optimizer (Oracle817)
1. We have an Oracle 8.1.7 DB, and the optimizer_mode is set to "RULE".
2. There are several indexes on table cm_contract_supply, a large table with 28,732,830 rows and an average row length of 149 bytes:
COLUMN_NAME INDEX_NAME
PROGRESS_RECID XAK11CM_CONTRACT_SUPPLY
COMPANY_CODE XIE1CM_CONTRACT_SUPPLY
CONTRACT_NUMBER XIE1CM_CONTRACT_SUPPLY
COUNTRY_CODE XIE1CM_CONTRACT_SUPPLY
SUPPLY_TYPE_CODE XIE1CM_CONTRACT_SUPPLY
VERSION_NUMBER XIE1CM_CONTRACT_SUPPLY
CAMPAIGN_CODE XIF1290CM_CONTRACT_SUPPLY
COMPANY_CODE XIF1290CM_CONTRACT_SUPPLY
COUNTRY_CODE XIF1290CM_CONTRACT_SUPPLY
SUPPLIER_BP_ID XIF801CONTRACT_SUPPLY
COMMISSION_LETTER_CODE XIF803CONTRACT_SUPPLY
COMPANY_CODE XIF803CONTRACT_SUPPLY
COUNTRY_CODE XIF803CONTRACT_SUPPLY
COMPANY_CODE XPKCM_CONTRACT_SUPPLY
CONTRACT_NUMBER XPKCM_CONTRACT_SUPPLY
COUNTRY_CODE XPKCM_CONTRACT_SUPPLY
SUPPLY_SEQUENCE_NUMBER XPKCM_CONTRACT_SUPPLY
VERSION_NUMBER XPKCM_CONTRACT_SUPPLY
3. We are querying the table for a particular contract_number and version_number. We want to avoid full table scan.
SELECT /*+ INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
However, despite the hint, the query results are fetched after a full table scan.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE (Cost=1182 Card=1 Bytes=742)
1 0 TABLE ACCESS (FULL) OF 'CM_CONTRACT_SUPPLY' (Cost=1182 Card=1 Bytes=742)
4. I have tried giving
SELECT /*+ FIRST_ROWS + INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
and
SELECT /*+ CHOOSE + INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
But it does not work.
Is there some way, without changing the optimizer mode and without creating an additional index, that we can use the index instead of a full table scan?
David,
Here is my test on an Oracle 10g database.
Here is my test on a Oracle 10g database.
SQL> create table mytable as select * from all_tables;
Table created.
SQL> set autot traceonly
SQL> alter session set optimizer_mode = choose;
Session altered.
SQL> select count(*) from mytable;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE)
Statistics
1 recursive calls
0 db block gets
29 consistent gets
0 physical reads
0 redo size
223 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> analyze table mytable compute statistics;
Table analyzed.
SQL> select count(*) from mytable
2 ;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE) (Cost=11 Card=1
788)
Statistics
1 recursive calls
0 db block gets
29 consistent gets
0 physical reads
0 redo size
222 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options -
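Coming back to the hint question above, one likely reason the hint was ignored (a hedged observation): the INDEX hint must name the table or its alias before the index name, and the hinted index XAK11CM_CONTRACT_SUPPLY is on PROGRESS_RECID, which the predicate never references. A sketch using an index that does cover the predicate columns:

```sql
-- Alias 'cs' is assumed; XIE1CM_CONTRACT_SUPPLY includes CONTRACT_NUMBER and
-- VERSION_NUMBER per the column listing in the question.
SELECT /*+ INDEX(cs XIE1CM_CONTRACT_SUPPLY) */
       rowid, cs.*
FROM   pms.cm_contract_supply cs
WHERE  contract_number = '0000000000131710'
AND    version_number  = 3;
```

Under rule-based optimization an equality predicate on leading index columns should use the index even without a hint, so if this still full-scans, it is worth checking whether contract_number is really a character column (an implicit number conversion on the column would disable the index).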
FASTER THROUGH PUT ON FULL TABLE SCAN
Product: ORACLE SERVER
Date written: 1995-04-10
Subject: Faster through put on Full table scans
db_file_multiblock_read_count only affects the performance of full table scans.
Oracle has a maximum I/O size of 64 KB, hence db_block_size *
db_file_multiblock_read_count must be less than or equal to 64 KB.
If your query is really doing an index range scan then the performance
of full scans is irrelevant. In order to improve the performance of this
type of query it is important to reduce the number of blocks that
the 'interesting' part of the index is contained within.
Obviously the db_blocksize has the most impact here.
Historically Informix has not been able to modify their database block size,
and has had a fixed 2KB block.
On most Unix platforms Oracle can use up to 8KBytes.
(Some eg: Sequent allow 16KB).
This means that for the same size of B-tree index, Oracle with
an 8KB block size can read its contents in 1/4 of the time that
Informix with a 2KB block could.
You should also consider whether the PCTFREE value used for your index is
appropriate. If it is too large then you will be wasting space
in each index block. (It's too large IF you are not going to get any
entry size extension OR you are not going to get any new rows for existing
index values. NB: this is usually only a real consideration for large indexes - 10,000 entries is small.)
db_file_simultaneous_writes has no direct relevance to index re-balancing.
(PS: In the U.K. we benchmarked against Informix, Sybase, Unify and
HP/Allbase for the database server application that HP uses internally to
monitor and control its tape drive manufacturing lines. They chose
Oracle because: we outperformed Informix.
Sybase was too slow AND too unreliable.
Unify was short on functionality and SLOW.
HP/Allbase couldn't match the availability requirements and wasn't as functional.
Informix had problems demonstrating the ability to do hot backups without
severely affecting the system throughput.
HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with
different disks (ie: HP-IB and SCSI). Oracle came out ahead in all
configurations.
NNB: It's always worth throwing in a simulated system failure whilst the
benchmark is in progress. Informix has a history of not coping gracefully.
That is they usually need some manual intervention to perform the database
recovery.)
I have a prospective client who is running a stripped-down, souped-up version of Informix with no catalytic converter. One of their queries boils down to an index range scan on 10,000 records. How can I achieve better throughput
on a single drive single CPU machine (HP/UX) without using raw devices.
I had heard rebuilding the database with a block size factor greater than
the OS block size would yield better performance. Also I tried changing
the db_file_multiblock_read_count to 32 without much improvement.
Adjusting the db_writers to two did not help either.
Also, will adjusting db_file_simultaneous_writes help with the maintenance of an index during rebalancing operations?
2) If CBO, how are the stats collected?
daily (tables with less than millions of rows) and weekly (all tables)
There's no need to collect stats so frequently unless it's absolutely necessary, such as massive updates on tables daily or weekly.
It will help if you can post your sample explain plan and query. -
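As a sketch of the stats advice above (table name and schedule are assumptions, not from the thread), gathering statistics on demand after a bulk change rather than on a fixed daily schedule might look like:

```sql
-- Gather table and index stats once, after a significant data change.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'CM_CONTRACT_SUPPLY',        -- assumed table name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -- let Oracle pick the sample
    cascade          => TRUE);                       -- also gather index stats
END;
/
```

Re-gathering when nothing material has changed just burns resources and risks plan instability.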
Prompt on DATE forces FULL TABLE SCAN
When using a prompt on a datetime field, OBIEE sends SQL to the database with a TIMESTAMP literal.
Due to the TIMESTAMP literal, the Oracle database does a full table scan. The field ATIST is a date with time on the physical database.
By default ATIST was configured as TIMESTAMP in the RPD physical layer. The SQL request is sent to an Oracle 10 database.
That is the query sent to the database:
-------------------- Sending query to database named PlantControl1 (id: <<10167>>):
select distinct T1471.ATIST as c1,
T1471.GUTMENGEMELD2 as c2
from
AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
where ( T1471.ATIST = TIMESTAMP '2005-04-01 13:48:05' )
order by c1, c2
The result takes more than half a minute to appear.
Because OBIEE is using TIMESTAMP, the database performs a full table scan instead of using the index.
By using TO_DATE instead of TIMESTAMP, the result appears after a second.
select distinct T1471.ATIST, T1471.GUTMENGEMELD2 as c2
from
AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
where ( T1471.ATIST = to_date('2005.04.01 13:48:05', 'yyyy.mm.dd hh24:mi:ss') );
Is there any way resolving the issue?
PS: When the field ATIST is configured as DATE at the physical layer, the SQL performs well, as it uses TO_DATE instead of TIMESTAMP. But this cuts off the time part of the date. When it is configured as DATETIME, OBIEE uses TIMESTAMP again.
What I need is a working date + time field.
Anybody encountered a similar problem?
To be honest I haven't come across many scenarios where the time has been important. Most of our reporting stops at day level.
What is the real world business question being asked here that requires DayTime?
Incidentally if you change your datatype on the base table you will see it works fine.
CREATE TABLE daytime( daytime TIMESTAMP );
CREATE UNIQUE INDEX dt ON daytime (daytime)
SQL> set autotrace traceonly
SQL> SELECT * FROM daytime
2 WHERE daytime = TIMESTAMP '2007-04-01 13:00:45';
no rows selected
Execution Plan
Plan hash value: 3985400340
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 13 | 1 (0)| 00:00:01 |
|* 1 | INDEX UNIQUE SCAN| DT | 1 | 13 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("DAYTIME"=TIMESTAMP' 2007-04-01 13:00:45.000000000')
Statistics
1 recursive calls
0 db block gets
1 consistent gets
0 physical reads
0 redo size
242 bytes sent via SQL*Net to client
362 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
However if it's a DATE it would appear to do some internal function call, which I guess is the source of the problem ...
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 9 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DAYTIME | 1 | 9 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(INTERNAL_FUNCTION("DAYTIME")=TIMESTAMP' 2007-04-01
13:00:45.000000000') -
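A possible database-side workaround for the TIMESTAMP-versus-DATE thread above (a hedged sketch, not from the thread; whether the optimizer matches the expression is version-dependent, so verify with an execution plan): index the DATE column cast to TIMESTAMP, so the TIMESTAMP-literal predicate no longer forces a per-row conversion:

```sql
-- Table/column names from the question; the index name is assumed.
-- The INTERNAL_FUNCTION shown in the filter above is effectively a cast of
-- the DATE column to TIMESTAMP, which this function-based index precomputes.
CREATE INDEX agrueck_atist_ts ON agrueck (CAST(atist AS TIMESTAMP));
```

If the optimizer does not rewrite the generated predicate onto the indexed expression automatically, the alternative remains fixing the datatype mapping in the RPD physical layer as discussed.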
Question about Full Table Scans and Tablespaces
Good evening (or morning),
I'm reading the Oracle Concepts (I'm new to Oracle) and it seems that, based on the way that Oracle allocates and manages storage the following premise would be true:
Premise: A table that is often accessed using a full table scan (for whatever reasons) would best reside in its own dedicated tablespace.
The main reason I came to this conclusion is that when doing a full table scan, Oracle does multiblock I/O, likely reading one extent at a time. If the tablespace's datafile(s) contain data for only a single table, then a serial read will not have to skip over segments that contain data for other tables (as would be the case if the tablespace were shared with other tables). The performance improvement is probably small, but it would seem that there is one nonetheless.
I'd like to have the thoughts of experienced DBAs regarding the above premise.
Thank you for your contribution,
John.
Good morning :) Aman,
>
A little correction! A segment(be it a table,index, cluster, temporary) , would stay always in its own tablespace. Segments can't span tablespaces!
>
Fortunately, I understood that from the beginning :)
You mentioned fragmentation, I understand that too. As rows get deleted small holes start existing in the segment and those holes are not easily reusable because of their limited size.
What I am referring to is different though.
Let's consider a tablespace that is the home of 2 or more tables, the tablespace in turn is represented by one or more OS datafiles, in that case the situation will be as shown in the following diagram (not a very good diagram but... best I can do here ;) ):
Tablespace TablespaceWithManyTables
(segment 1 contents)
TableA Extent 1
TableA Block 1
TableA Block 2
Fragmentation may happen in these blocks or
even across blocks because Oracle allows rows
to span blocks
TableA Block n
End of TableA Extent 1
more extents here all for TableA
(end of segment 1 contents)
(segment 2 contents)
TableZ Extent 5
blocks here
End of TableZ Extent 5
more extents here, all for tableZ
(end of segment 2 contents)
and so on
(more segments belonging to various tables)
end of Tablespace TablespaceWithManyTables
On the other hand, if the tablespace hosts only one table, the layout will be:
Tablespace TablespaceExclusiveForTableA
(segment 1 contents)
TableA Extent 1
TableA Block 1
TableA Block 2
Fragmentation may happen in these blocks or
even across blocks because Oracle allows rows
to span blocks
TableA Block n
End of TableA Extent 1
another extent for TableA
(end of segment 1 contents)
(segment 2 contents)
TableA Extent 5
blocks here
End of TableA Extent 5
more extents for TableA
(end of segment 2 contents)
and so on
(more segments belonging to TableA)
end of Tablespace TablespaceExclusiveForTableA
The fragmentation you mentioned takes place in both cases. In the first case, regardless of fragmentation, some segments don't belong to the table that is being serially scanned, so they have to be skipped over at the OS level. In the second case, since all the extents belong to the same table, they can be read serially at the OS level. I realize that the segments may not be read in the "right" sequence, but they don't have to be, because they can be served to the client app in sequence.
It is because of this that, I thought that if a particular table is mostly read serially, there might be a performance benefit (and also less work for Oracle) to dedicate a tablespace to it.
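For concreteness, the dedicated-tablespace idea described above would look something like the sketch below. All names, paths, and sizes are made up for illustration; whether this actually helps depends heavily on the underlying storage layout.

```sql
-- Hypothetical tablespace and table names; size is a placeholder.
CREATE TABLESPACE ts_big_scans
  DATAFILE '/u01/oradata/big_scans01.dbf' SIZE 10G;

-- Relocate the frequently scanned table into its own tablespace.
-- Note: MOVE invalidates the table's indexes, which must then be rebuilt.
ALTER TABLE scan_heavy_table MOVE TABLESPACE ts_big_scans;
```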
I can't wait to see what you think of this :)
John. -
Dear all,
While doing stress testing I found that there were a lot of full table scans, due to which there was a performance drop. How can I avoid full table scans? Please suggest some ways, as I am at the client's place.
Waiting for your help.
Regards and thanks in advance
SL
Hi SL,
How can I avoid full table scans
Full table scans are not always bad! It depends foremost on your optimizer goal (all_rows vs. first_rows), plus your multiblock read count, table size, percentage of rows requested, and many other factors.
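As a concrete illustration of the optimizer-goal point, the goal can be flipped at session level and the plans compared; this is only a sketch, and the `FIRST_ROWS_10` value is just one example:

```sql
-- ALL_ROWS optimizes total throughput and is more willing to full-scan;
-- FIRST_ROWS_n optimizes initial response time and leans toward index access.
ALTER SESSION SET optimizer_mode = ALL_ROWS;
-- ...explain your statement here, then switch and compare:
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Same statement, two plans to compare:
EXPLAIN PLAN FOR SELECT /* your query */ * FROM dual;
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
```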
Here are my notes:
http://www.dba-oracle.com/art_orafaq_oracle_sql_tune_table_scan.htm
To avoid full table scans, start by running plan9i.sql and then drill-in and see if you have missing indexes:
http://www.dba-oracle.com/t_plan9i_sql_full_table_scans.htm
You can also run the 10g SQL Tuning Advisor to find missing indexes, and don't forget to consider function-based indexes, a great way to eliminate unnecessary large-table full-table scans:
http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
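A minimal function-based index sketch (the `emp`/`ename` names are hypothetical), showing the shape of the fix:

```sql
-- Without this index, WHERE UPPER(ename) = :x typically full-scans the table;
-- with it, the optimizer can range-scan the index instead.
CREATE INDEX emp_upper_ename_idx ON emp (UPPER(ename));

-- Fresh statistics help the optimizer cost the new access path.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP');
```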
Hope this helps. . .
Donald K. Burleson
Oracle Press author -
How do I check whether a small table still gets a full table scan when I use an indexed column in the where clause?
Is there an example link where I can test whether a small table is full-scanned even though an index is used in the where clause?
Use explain plan on your statement, or set autotrace traceonly in your SQL*Plus session followed by the SQL you are testing.
For example
SQL> set autotrace traceonly
SQL> select *
2 from XXX
3 where id='fga';
no rows selected
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=13 Card=1 Bytes=16
5)
1 0 PARTITION RANGE (ALL) (Cost=13 Card=1 Bytes=165)
2 1 TABLE ACCESS (FULL) OF 'XXX' (TABLE) (Cost=13 Card
=1 Bytes=165)
Statistics
1 recursive calls
0 db block gets
1561 consistent gets
540 physical reads
0 redo size
1864 bytes sent via SQL*Net to client
333 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed -
Data Federator Full Table scan
Hi,
Is it possible to prevent a Full table scan when creating a join between tables of 2 catalogues in a Data Federator?
This is seriously hampering the development time within Data Federator Development.
I am working with IDT Beta to create a Universe based on Multiple Source. The delay is so huge when creating joins that we could revert to Universe Design Tool. I have posted it here as Data Federator gurus will have tweak about IDT as it incorporates DF within itself in BI 4.0.
Any inputs will be great.. In case this is in the wrong forums, then please move it accordingly.
VFernandes
The issue was fixed when the GA was released. This was present in Beta.
-
Select statement in a function does Full Table Scan
All,
I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column which requires a call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that, due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who are long gone.
Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function and I have verified through TOAD’s debugger that their values are correct.
Table ps2_benefit_register has over 40 million rows, but case_number is indexed for that table.
Table ps1_case_fs has more than 20 million rows, and it also has an index on case_number.
Select #1 – causes a full table scan; it runs and writes the 38K rows in a couple of hours.
{code}
SELECT max(a2.application_date)
INTO l_app_date
FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
WHERE a2.case_number = case_number_in and
a1.case_number = a2.case_number and
a2.application_date <= effective_date_in and
a1.DOCUMENT_TYPE = 'F';
{code}
Select #2 – hard-coding the values makes the code write the same 38K rows in a few minutes.
{code}
SELECT max(a2.application_date)
INTO l_app_date
FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
WHERE a2.case_number = 'A006438' and
a1.case_number = a2.case_number and
a2.application_date <= '01-Apr-2009' and
a1.DOCUMENT_TYPE = 'F';
{code}
Why does using the values passed in the parameters in the first select statement cause a full table scan?
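One common cause worth ruling out here is an implicit datatype conversion on the indexed column when the parameter's declared type does not match the column's. A hedged diagnostic sketch (10g+; the LIKE pattern is a guess and must be adjusted to the actual statement text):

```sql
-- After one slow execution of the procedure, locate the cursor...
SELECT sql_id, child_number, sql_text
FROM v$sql
WHERE sql_text LIKE 'SELECT MAX(A2.APPLICATION_DATE)%';

-- ...then inspect the plan actually used, including its predicates:
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_number));

-- In the Predicate Information section, something like
--   filter(INTERNAL_FUNCTION("CASE_NUMBER")=:B1)
-- would indicate an implicit conversion applied to the column itself,
-- which prevents use of the index on case_number.
```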
Thank you for your help,
Seyed
Edited by: user11117178 on Jul 30, 2009 6:22 AM
Edited by: user11117178 on Jul 30, 2009 6:23 AM
Edited by: user11117178 on Jul 30, 2009 6:24 AM
Hello Dan,
Thank you for your input. The function is not deterministic; therefore, I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
PLAN_TABLE_OUTPUT
Plan hash value: 2132048964
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 324K| 33M| 3138 (5)| 00:00:38 | | |
|* 1 | HASH JOIN | | 324K| 33M| 3138 (5)| 00:00:38 | | |
| 2 | BITMAP CONVERSION TO ROWIDS | | 3 | 9 | 1 (0)| 00:00:01 | | |
|* 3 | BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES | | | | | | |
| 4 | PARTITION RANGE ITERATOR | | 866K| 87M| 3121 (4)| 00:00:38 | 154 | 158 |
| 5 | TABLE ACCESS FULL | PS2_FS_TRANSACTION_FACT | 866K| 87M| 3121 (4)| 00:00:38 | 154 | 158 |
Predicate Information (identified by operation id):
1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
Thank you very much,
Seyed -
URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query
Hi Everyone,
When I checked the EXPLAIN PLAN for the below SQL query, I saw that Full table scans is going on both the tables TABLE_A and TABLE_B
UPDATE TABLE_A a
SET a.current_commit_date =
(SELECT MAX (b.loading_date)
FROM TABLE_B b
WHERE a.sales_order_id = b.sales_order_id
AND a.sales_order_line_id = b.sales_order_line_id
AND b.confirmed_qty > 0
AND b.data_flag IS NULL
OR b.schedule_line_delivery_date >= '23 NOV 2008')
Though TABLE_A is a small table having nearly 1 lakh (100,000) records, TABLE_B is a huge table having nearly 2.5 crore (25 million) records.
I created an Index on the TABLE_B having all its fields used in the WHERE clause. But, still the explain plan is showing FULL TABLE SCAN only.
When I run the query, it takes a very long time to execute (more than 1 day), and each time I have to kill the session.
Please please help me in optimizing this.
Thanks,
Sudhindra
Check the instruction again; you're leaving out information we need in order to help you, like optimizer information.
- Post your exact database version, that is: the result of select * from v$version;
- Don't use TOAD's execution plan, but use
SQL> explain plan for <your_query>;
SQL> select * from table(dbms_xplan.display);
(You can execute that in TOAD as well.)
Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
It's also explained in the instruction.
When was the last time statistics were gathered for table_a and table_b?
You can find out by issuing the following query:
select table_name
, last_analyzed
, num_rows
from user_tables
where table_name in ('TABLE_A', 'TABLE_B');
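Separately, the AND/OR precedence in the posted UPDATE is worth a second look: without parentheses, AND binds tighter than OR, so the date predicate stands alone and the subquery can aggregate far more of TABLE_B than intended. Assuming the date was meant as an alternative to the data_flag condition, the rewrite would be:

```sql
UPDATE TABLE_A a
SET a.current_commit_date =
    (SELECT MAX(b.loading_date)
     FROM TABLE_B b
     WHERE a.sales_order_id = b.sales_order_id
       AND a.sales_order_line_id = b.sales_order_line_id
       AND b.confirmed_qty > 0
       -- parentheses keep the OR local to these two conditions:
       AND (b.data_flag IS NULL
            OR b.schedule_line_delivery_date >= DATE '2008-11-23'));
```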
Can you also post the results of these counts:
select count(*)
from table_b
where confirmed_qty > 0;
select count(*)
from table_b
where data_flag is null;
select count(*)
from table_b
where schedule_line_delivery_date >= /* assuming you're using a date, and not a string*/ to_date('23 NOV 2008', 'dd mon yyyy'); -
Query optimization - Query is taking long time even there is no table scan in execution plan
Hi All,
The below query execution is taking very long time even there are all required indexes present.
Also in execution plan there is no table scan. I did a lot of research but i am unable to find a solution.
Please help, this is required very urgently. Thanks in advance. :)
WITH cte
AS (
SELECT Acc_ex1_3
FROM Acc_ex1
INNER JOIN Acc_ex5 ON (
Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
)
WHERE (
cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
)
)
SELECT DISTINCT R.ReportsTo AS directReportingUserId
,UC.UserName AS EmpName
,UC.EmployeeCode AS EmpCode
,UEx1.Use_ex1_1 AS PortfolioCode
,(
SELECT TOP 1 TerritoryName
FROM UserTerritoryLevelView
WHERE displayOrder = 6
AND UserId = R.ReportsTo
) AS BranchName
,GroupsNotContacted AS groupLastContact
,GroupCount AS groupTotal
FROM ReportingMembers R
INNER JOIN TeamMembers T ON (
T.OwnerID = R.OwnerID
AND T.MemberID = R.ReportsTo
AND T.ReportsTo = 1
)
INNER JOIN UserContact UC ON (
UC.CompanyID = R.OwnerID
AND UC.UserID = R.ReportsTo
)
INNER JOIN Use_ex1 UEx1 ON (
UEx1.OwnerId = R.OwnerID
AND UEx1.Use_ex1_Id = R.ReportsTo
)
INNER JOIN (
SELECT Accounts.AssignedTo
,count(DISTINCT Acc_ex1_3) AS GroupCount
FROM Accounts
INNER JOIN Acc_ex1 ON (
Accounts.AccountID = Acc_ex1.Acc_ex1_Id
AND Acc_ex1.Acc_ex1_3 > '0'
AND Accounts.OwnerID = 109
)
GROUP BY Accounts.AssignedTo
) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
INNER JOIN (
SELECT Accounts.AssignedTo
,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
FROM Accounts
INNER JOIN Acc_ex1 ON (
Accounts.AccountID = Acc_ex1.Acc_ex1_Id
AND Acc_ex1.OwnerID = Accounts.OwnerID
AND Acc_ex1.Acc_ex1_3 > '0'
)
INNER JOIN Acc_ex5 ON (
Accounts.AccountID = Acc_ex5.Acc_ex5_Id
AND Acc_ex5.OwnerID = Accounts.OwnerID
)
WHERE Accounts.OwnerID = 109
AND Acc_ex1.Acc_ex1_3 NOT IN (
SELECT Acc_ex1_3
FROM cte
)
GROUP BY Accounts.AssignedTo
) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
WHERE R.OwnerID = 109
Please mark it as an answer/helpful if you find it useful. Thanks, Satya Prakash Jugran
Hi All,
Thanks for the replies.
I have optimized that query to make it run in few seconds.
Here is my final query.
select ReportsTo as directReportingUserId,
UserName AS EmpName,
EmployeeCode AS EmpCode,
Use_ex1_1 AS PortfolioCode,
BranchName,
GroupInfo.groupTotal,
GroupInfo.groupLastContact,
case when exists
(select 1 from ReportingMembers RM
where RM.ReportsTo = UserInfo.ReportsTo
and RM.MemberID <> UserInfo.ReportsTo
) then 0 else UserInfo.ReportsTo end as memberid1,
(select code from Regions where ownerid=109 and name=UserInfo.BranchName) as BranchCode,
ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
FROM
(select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo )
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
union
select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
--INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
where R.MemberID = 1
) UserInfo
inner join (
select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
from (
select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
union
select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
where R.MemberID = 1
) GroupWiseInfo
group by directReportingUserId
) GroupInfo
on UserInfo.ReportsTo = GroupInfo.directReportingUserId
Please mark it as an answer/helpful if you find it useful. Thanks, Satya Prakash Jugran -
Associative Array vs Table Scan
Still new to PL/SQL, but very keen to learn. I wondered if somebody could advise me whether I should use a collection (such as an associative array) instead of repeating a table scan within a loop for the example below. I need to read from an input table of experiment data and, if the EXPERIMENT_ID does not already exist in my EXPERIMENTS table, add it. Here is the code I have so far. My instinct is that my code is inefficient. Would it be more efficient to scan the EXPERIMENTS table only once, store the list of IDs in a collection, then scan the collection within the loop?
-- Create any new Experiment IDs if needed
open CurExperiments;
loop
-- Fetch the explicit cursor
fetch CurExperiments
into vExpId, dExpDate;
exit when CurExperiments%notfound;
-- Check to see if already exists
select count(id)
into iCheckExpExists
from experiments
where id = vExpId;
if iCheckExpExists = 0 then
-- Experiment ID is not already in table so add a row
insert into experiments
(id, experiment_date)
values(vExpId, dExpDate);
end if;
end loop;
Except that rownum is assigned after the result set is computed, so the whole table will have to be scanned.
really?
SQL> explain plan for select * from i;
Explained.
SQL> select * from table( dbms_xplan.display );
PLAN_TABLE_OUTPUT
Plan hash value: 1766854993
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 910K| 4443K| 630 (3)| 00:00:08 |
| 1 | TABLE ACCESS FULL| I | 910K| 4443K| 630 (3)| 00:00:08 |
8 rows selected.
SQL> explain plan for select * from i where rownum=1;
Explained.
SQL> select * from table( dbms_xplan.display );
PLAN_TABLE_OUTPUT
Plan hash value: 2766403234
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 | 2 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | TABLE ACCESS FULL| I | 1 | 5 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM=1)
14 rows selected. -
I have a problem with full table scans that make a report's performance very slow.
The test case is below. It looks like when the column is selected from the table directly, the index is used. If I use the same select from the view, then I get a table scan.
I would appreciate any idea on how to optimize it.
Thanks a lot for the help.
mj
<pre>
create table test1 (id1 number , id2 number, id3 number, col1 varchar(10),col2 varchar(50), col3 varchar(100));
create table test2 (id4 number , id5 number, id6 number, col4 varchar(10),col5 varchar(50), col6 varchar(100));
ALTER TABLE test1 ADD CONSTRAINT PK_test1 PRIMARY KEY(ID1) USING INDEX REVERSE;
create index index1 on test1(ID2);
create index index2 on test1(ID3,col2 );
ALTER TABLE test2 ADD CONSTRAINT PK_test2 PRIMARY KEY(ID4) USING INDEX REVERSE;
create or replace view test_view as select t1.*,
case (select t2.id4 from test2 t2 where t1.id2 = t2.id5 and t2.id6 = -1)
when t1.id2 then t1.id3
else t1.id2
end as main_id
from test1 t1 ;
create or replace view test_view2 as select * from test_view; --(required by security levels)
select * from test1 where id2 =1000;
select * from test_view where id2 = 1000;
select * from test_view2 where id2 = 1000;
SQL> select * from test_view where id2 = 1000;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1970977999
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 125 | 1 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL | TEST2 | 1 | 39 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| TEST1 | 1 | 125 | 1 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | INDEX1 | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("T2"."ID5"=:B1 AND "T2"."ID6"=(-1))
3 - access("T1"."ID2"=1000)
SQL> select * from test_view where main_id = 1000;
Elapsed: 00:00:00.03
Execution Plan
Plan hash value: 3806368241
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 125 | 4 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL | TEST2 | 1 | 39 | 2 (0)| 00:00:01 |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS FULL| TEST1 | 1 | 125 | 2 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| TEST2 | 1 | 39 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("T2"."ID5"=:B1 AND "T2"."ID6"=(-1))
2 - filter(CASE WHEN "T1"."ID2"= (SELECT /*+ */ "T2"."ID4" FROM
MJ42."TEST2" "T2" WHERE "T2"."ID5"=:B1 AND "T2"."ID6"=(-1)) THEN
"T1"."ID3" ELSE "T1"."ID2" END =1000)
4 - filter("T2"."ID5"=:B1 AND "T2"."ID6"=(-1))
SQL>
</pre>
If you think about what the two queries are doing, it is easy to see why the first uses an index and the second does not.
Your first query:
SELECT * FROM test_view WHERE id2 = 1000
explicitly uses an indexed column from test1 in the predicate. Oracle can use the index to identify the correct row from test1. Having found that single row in test1, it uses the full scan of test2 to resolve the case statement.
Your second query:
SELECT * FROM test_view WHERE main_id = 1000
uses the result of the case statement as the predicate. Oracle has no way of determining what row from test1 to use initially, so it must full scan both tables.
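If the view must stay filterable on main_id, one possible rewrite replaces the correlated scalar subquery with an outer join, which gives the optimizer more transformation options. This is an untested sketch and is only equivalent if (id5, id6) matches at most one test2 row per test1 row:

```sql
-- When no test2 row matches, t2.id4 is NULL, the CASE condition is false,
-- and main_id falls back to t1.id2 -- same result as the original subquery.
CREATE OR REPLACE VIEW test_view AS
SELECT t1.*,
       CASE WHEN t2.id4 = t1.id2 THEN t1.id3 ELSE t1.id2 END AS main_id
FROM test1 t1
LEFT JOIN test2 t2
  ON t2.id5 = t1.id2
 AND t2.id6 = -1;
```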
John -
Use of client dependent tables
Hi Gurus,
I have read many threads on difference between client dependent and independent data/objects etc.
But could someone please tell me what exactly is the 'use or advantage' of client-dependent tables/data?
Thanks in advance.
V
HI,
The use of client-dependent tables is that if data in a table is updated in one particular client, that data won't be seen in any other client. This provides security for the data: we restrict users with their authorizations in different clients.
When you log on to an SAP System, you log on to a particular client of this system. Any activities you carry out in the system are always carried out in one client. When you plan your SAP system landscape, you must consider which clients you need for which activities.
By assigning activities to be performed in a client, you give each client a particular role. This section describes the most important client roles.
Since you need to adapt the SAP software for your own business needs, each SAP system landscape requires a client where Customizing settings, and possibly ABAP Workbench developments, can be made. This client is known as the Customizing and development client, or Customizing client for short. The abbreviation CUST is used for this client.
Before you can use the Customizing settings and Workbench developments productively, you need to test them extensively for errors. Any faulty settings can seriously disrupt productive operations, and at worst, lead to the loss of productive data. The integrated nature of the various SAP applications means that there are many dependencies between the different Customizing settings. Even an experienced Customizing developer may not discover these dependencies immediately. The correctness of the settings can only be guaranteed with extensive testing. The client where these tests are made is the Quality Assurance Client, QTST for short.
A separate client is required for productive use of the SAP System. So that this client can be used without disruption, it is essential that no Customizing settings or Workbench developments are made here, and also that no tests are carried out. This client is known as the Production Client, PROD for short.
These three clients, CUST, QTST and PROD, are the central clients that exist in every system landscape. Standard system landscapes have precisely one client for each of these client roles.
We recommend that you make all your Customizing settings in a single Customizing client, and then use the CTS to transport them to the other clients.
We also recommend that you do not make any Customizing settings or Workbench developments in the quality assurance or production clients. You can make sure of this by making appropriate client settings.
In addition to the central clients, you can also set up other clients for other tasks. However, you must remember that each extra client takes up additional system resources (main memory and database space). They also need to be administrated. For example, you need to set up and administrate access authorization for the users, and also distribute any changes to other clients with the CTS. You must weigh up the advantages and disadvantages of setting up other clients.
Examples of other client roles are:
Development test client (TEST): Developers can use this client to test their Customizing settings and Workbench developments, before they release their change requests. In this client the developers can create test application data for realistic tests. If they discover errors, they can remove them in the Customizing client. A development test client is always set up in the same SAP System as the Customizing client. This means that any changes that are made to cross-client data in the Customizing client are also immediately visible in the development test client. Changes to client-specific data are copied from the Customizing client to the development test client using a special client copy function. The client copy function uses the unreleased change requests from the Customizing client to do this. The development test client is set so that you cannot make changes to Customizing data and Repository objects.
Prototype or sandbox client (SAND): You can use this client to test any client-specific Customizing settings if you are not sure whether you want to use them in this form. Any settings that you want to keep are then entered in the Customizing client. To prevent conflicts between the prototype client settings and real settings in the Customizing client, you cannot make changes to cross-client Customizing data and Repository objects in the prototype client. The CTS does not record changes made to client-specific Customizing data, and does not transport them from the prototype client. You can make sure of this by making appropriate client settings.
Training client (TRNG): To prepare end users for new functions that are to be transported into the production client, you can set up a training client. The users can use the new functions in this client with specially created application data. This client is set so that you cannot make changes to Customizing data and Repository objects.
Please reward if this information is useful to you.