Bug in 10.2.0.5 for query using index on trunc(date)
Hi,
We recently upgraded from 10.2.0.3 to 10.2.0.5 (Enterprise Edition, 64-bit Linux). This resulted in some strange behaviour, which I think could be the result of a bug.
In 10.2.0.5, after running the script below, the final SELECT statement gives different results for TRUNC(b) and TRUNC(b + 0). Running the same script on 10.2.0.3, the SELECT returns correct results.
BTW: it is related to index usage, because skipping the CREATE INDEX statement leads to correct results for the SELECT.
Can somebody please confirm this bug?
Regards,
Henk Enting
-- test script:
DROP TABLE test_table;
CREATE TABLE test_table(a integer, b date);
CREATE INDEX test_trunc_ind ON test_table(trunc(b));
BEGIN
  FOR i IN 1..100 LOOP
    INSERT INTO test_table(a, b) VALUES (i, SYSDATE - i);
  END LOOP;
END;
/
COMMIT;

SELECT *
FROM (
  SELECT DISTINCT
         trunc(b)
       , trunc(b + 0)
  FROM test_table
);
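For anyone trying to reproduce this: a quick way to confirm that the function-based index is involved (a sketch; NO_INDEX is a standard optimizer hint, and test_trunc_ind is the index name from the script above) is to compare the plan and the results with the index bypassed:

```sql
-- Show whether the final query actually uses the function-based index
EXPLAIN PLAN FOR
SELECT DISTINCT trunc(b), trunc(b + 0) FROM test_table;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Re-run the query with the index bypassed; per the report above,
-- this should bring back matching TRUNC(b) and TRUNC(b + 0) values
SELECT /*+ NO_INDEX(t test_trunc_ind) */ DISTINCT
       trunc(b), trunc(b + 0)
FROM test_table t;
```

If the hinted run returns correct values while the unhinted one does not, that localises the problem to the index access path rather than to TRUNC itself.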
Results on 10.2.0.3:
TRUNC(B) TRUNC(B+0)
05-08-2010 05-08-2010
04-08-2010 04-08-2010
01-08-2010 01-08-2010
30-07-2010 30-07-2010
28-07-2010 28-07-2010
27-07-2010 27-07-2010
23-07-2010 23-07-2010
22-07-2010 22-07-2010
17-07-2010 17-07-2010
03-07-2010 03-07-2010
26-06-2010 26-06-2010
etc.
Results on 10.2.0.5:
TRUNC(B) TRUNC(B+0)
04-05-2010 03-08-2010
04-05-2010 31-07-2010
04-05-2010 24-07-2010
04-05-2010 06-07-2010
04-05-2010 05-07-2010
04-05-2010 01-07-2010
04-05-2010 16-06-2010
04-05-2010 14-06-2010
04-05-2010 08-06-2010
04-05-2010 07-06-2010
04-05-2010 30-05-2010
etc.
Thanks for your reply.
I already looked at the metalink doc. It lists 4 bugs introduced in 10.2.0.5, but none of them seems related to my problem. Did I overlook something?
Regards,
Henk
Similar Messages
-
How to write the query using Index
Hi All,
I have to fetch records from a database table using an index. How can I write the query using an index? Can anybody please send me sample code.
Help me,
Balu.

Hi,
See the below Example.
select * from vbak up to 100 rows
into table t_vbak
where vbeln > v_vbeln.
sort t_vbak by vbeln descending.
read table t_vbak index 1.
Regards,
Ram
Pls reward points if helpful. -
Query Uses Index in 8i but not in 9i
I have a simple query which runs well in Oracle 8i. It uses an index if I use = or an IN clause.
The same query does not use the index in Oracle 9i (we set the optimizer to CHOOSE). If I remove the IN clause and use =, it uses the index.
select * from DWFE_ELE_CAT_ACC_HISTORY
where udc_acct_num in (Select z.LAH_CURR_LDC_ACCT_NUM
from DWFE_LDC_ACCT_HISTORY B,DWFE_LDC_ACCT_HISTORY z
Where B.LAH_CURR_LDC_ACCT_NUM ='0382900397'
and B.cpa_prem_num = Z.cpa_prem_num );
Plan for Oracle 8i:
Execution Plan
0 SELECT STATEMENT Optimizer=RULE
1 0 NESTED LOOPS
2 1 VIEW OF 'VW_NSO_1'
3 2 REMOTE* RSSCP_DBLINK
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'CAT_TRANS_HISTORY'
5 4 INDEX (RANGE SCAN) OF 'IDX3_CAT_TRANS_HISTORY' (NON-UNIQUE)
3 SERIAL_FROM_REMOTE SELECT /*+ */ DISTINCT "A1"."LAH_CURR_LDC_ACCT_NUM" FROM "RSSC"."LDC_ACCT_HISTOR
Statistics
0 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
7827 bytes sent via SQL*Net to client
316 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
6 rows processed
Plan for Oracle 9i
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=12018 Card=67728 Bytes=9752832)
1 0 HASH JOIN (Cost=12018 Card=67728 Bytes=9752832)
2 1 VIEW OF 'VW_NSO_1' (Cost=707 Card=17041 Bytes=238574)
3 2 REMOTE* RSSCP_DBLINK
4 1 TABLE ACCESS (FULL) OF 'CAT_TRANS_HISTORY' (Cost=6204 Card=1905290 Bytes=247687700)
3 SERIAL_FROM_REMOTE SELECT /*+ */ "A1"."LAH_CURR_LDC_ACCT_NUM" FROM "RSSC"."LDC_ACCT_HISTORY" "A2","
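One way to check whether the 9i plan is a costing choice rather than a missing access path (a sketch: the index name IDX3_CAT_TRANS_HISTORY is taken from the 8i plan, which assumes DWFE_ELE_CAT_ACC_HISTORY resolves to that same CAT_TRANS_HISTORY segment, e.g. via a synonym) is to force the index and compare the run statistics:

```sql
SELECT /*+ INDEX(h IDX3_CAT_TRANS_HISTORY) */ *
FROM DWFE_ELE_CAT_ACC_HISTORY h
WHERE h.udc_acct_num IN (SELECT z.LAH_CURR_LDC_ACCT_NUM
                         FROM DWFE_LDC_ACCT_HISTORY b, DWFE_LDC_ACCT_HISTORY z
                         WHERE b.LAH_CURR_LDC_ACCT_NUM = '0382900397'
                           AND b.cpa_prem_num = z.cpa_prem_num);
```

If the hinted run brings the consistent gets back down to 8i-like numbers, the difference is in statistics/costing rather than the upgrade removing the access path.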
Statistics
42 recursive calls
1 db block gets
41038 consistent gets
41010 physical reads
380 redo size
7833 bytes sent via SQL*Net to client
253 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
6 rows processed -
RRI for query using ABAP program
For a report, we are using RRI (jump to target) functionality to see invoices in the R/3 system using an ABAP program. I do not have much idea of how invoices are set up in the R/3 Dev, Test and Prod systems. The ABAP report is done by a backend person; I need to place it in RSBBS. My doubt is: do we need to follow the same procedure to move it to the production system, i.e. first create the RRI and ABAP in the respective Dev systems and transport both to Test and then to Prod? Could anyone please explain the steps on how this will work in real time?
Points will be assigned.
Thanks

Hi,
I am not able to find how the two reports are connected.
You can check in the RSBBS t-code. RRI is defined there to jump from one query to another, and there are many more options.
My work is to copy the parent query and do some modifications to it. Please let me know how to check and achieve that.
You can do this in the BEx Query Designer itself.
Regards,
Suman -
Query tuning for query using connect by prior
I have written the following query to fetch the data. The query is written in this format because there are multiple rows which make up one record, and we need to bring that record into one row.
For one CAT (commented out here), this query takes around 4 minutes and fetches 6900 records, but when it runs for 3 CATs, it takes 17 minutes.
I want to tune this, as it has to run for 350 CAT values.
It is doing a full table scan on the main table. I tried different hints like PARALLEL and APPEND (in the insert) but nothing worked.
The cost of the query is 51.
Any help/suggestions will be appreciated.
SELECT DISTINCT MIN(SEQ) SEQ,
PT, APP, IT, LR, QT,CD, M_A_FLAG,
STRAGG(REM) REM, -- aggregates the data from different columns to one which is parent
CAT
FROM (WITH R AS (SELECT CAT, SEQ, PT, M_A_FLAG, IT, LR, QT, CD, REM, APP
                 FROM table1
                 WHERE REC = '36' AND M_A_FLAG = '1'
                 -- AND CAT = '11113'
                )
SELECT CAT, SEQ,
CONNECT_BY_ROOT PT AS PT,
CONNECT_BY_ROOT APP AS APP,
M_A_FLAG,
CONNECT_BY_ROOT IT AS IT,
CONNECT_BY_ROOT LR AS LR,
CONNECT_BY_ROOT QT AS QT,
CONNECT_BY_ROOT CD AS CD,
REM
FROM R A
START WITH PT IS NOT NULL
CONNECT BY PRIOR SEQ + 1 = SEQ
AND PRIOR CAT = CAT
AND PT IS NULL)
GROUP BY PT, APP, IT,LR, QT, CD, M_A_FLAG, CAT
ORDER BY SEQ;
Thanks.
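As a side note: since the CONNECT BY here only walks consecutive SEQ values within a CAT to attach detail rows to the row where PT is populated, a non-hierarchical rewrite using analytics may be worth testing. This is a sketch only, assuming each record starts at a row with a non-null PT, that the other CONNECT_BY_ROOT columns are populated only on that root row, and that SEQ has no gaps inside a record; STRAGG is the user-defined aggregate from the original query:

```sql
SELECT CAT, MIN(SEQ) AS SEQ,
       MAX(PT) AS PT, MAX(APP) AS APP, MAX(IT) AS IT,
       MAX(LR) AS LR, MAX(QT) AS QT, MAX(CD) AS CD,
       M_A_FLAG, STRAGG(REM) AS REM
FROM (SELECT t.*,
             -- running count of non-null PT values: increments at each
             -- record's root row, so it labels the rows of one record
             COUNT(PT) OVER (PARTITION BY CAT ORDER BY SEQ) AS grp
      FROM table1 t
      WHERE REC = '36' AND M_A_FLAG = '1')
GROUP BY CAT, grp, M_A_FLAG
ORDER BY 2;
```

This reads the table once and avoids the hierarchical join, but verify it returns the same rows as the CONNECT BY version before relying on it.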
Edited by: user2544469 on Feb 11, 2011 1:12 AM

The following threads detail the approach and information required.
Please gather relevant info and post back.
How to post a SQL tuning request - HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long - When your query takes too long ... -
I'm using the LabView Database Connectivity Toolset and am using the following query...
UPDATE IndexStation
SET Signal_Size=200
WHERE 'StartTime=12:05:23'
Now the problem is that this command seems to update all rows in the table IndexStation, not just the row where StartTime=12:05:23.
I have tried all sorts of {} [] / ' " around certain characters and column names, but it always seems to update all rows...
I've begun to use the SQL query tab in Access to try and narrow down as to why this happens, but no luck!
Any ideas!?
Thanks,
Chris.

Chris Walter wrote:
I completely agree about the Microsoft issue.
But it seems no SQL based manual states that { } will provide a Date/Time constant.
Is this an NI only implementation? Because I can't seem to get it to function correctly within LabView or in any SQL query.
Chris.
There is nothing about the Database Toolkit in terms of SQL syntax that would be NI-specific. The Database Toolkit simply interfaces to MS ADO/DAO, and the actual SQL syntax is usually implemented in the database driver or the database itself, although I wouldn't be surprised if ADO/DAO at times munges that a bit too.
The Database Toolkit definitely does not. So this might be a documentation error indeed. My understanding of SQL syntax is in fact rather limited, so I'm not sure which databases use which delimiters to format date/time values. I know that SQL Server is rather tricky, thanks to MS catering for the local date/time format in all their tools; the so-called universal date/time format has borked on me on several occasions.
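On the delimiter question: { } on its own is not a literal delimiter; the ODBC escape-sequence forms are {t '...'} for times, {d '...'} for dates and {ts '...'} for timestamps, which the driver translates for the target database, while Access/Jet additionally accepts # # delimiters. A sketch of the UPDATE from the original question, rewritten on that assumption:

```sql
-- ODBC escape sequence: the driver rewrites {t '...'} for the backend
UPDATE IndexStation
SET Signal_Size = 200
WHERE StartTime = {t '12:05:23'};

-- Access/Jet equivalent using its native date/time delimiters:
-- UPDATE IndexStation SET Signal_Size = 200 WHERE StartTime = #12:05:23#;
```

Note that the original statement, WHERE 'StartTime=12:05:23', quotes the whole condition as one string literal instead of comparing the column, which is likely why every row matched.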
Rolf Kalbermatter
CIT Engineering Netherlands
a division of Test & Measurement Solutions -
Slow Query Using index. Fast with full table Scan.
Hi;
(Thanks for the links)
Here's my question correctly formatted.
The query:
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
Runs in 32 seconds!
Same query, but with one extra where clause:
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD') --- ADDED HERE
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
This runs in 400 seconds.
It should return data from one table, given the conditions.
The version of the database is Oracle9i Release 9.2.0.7.0
These are the parameters relevant to the optimizer:
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 99
optimizer_index_cost_adj integer 10
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL>
Here is the output of EXPLAIN PLAN for the first, fast query:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
|* 2 | TABLE ACCESS FULL | EHCONS | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS NULL AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss') AND "EC"."TYPE"='BAR')
Note: rule based optimization
Here is the output of EXPLAIN PLAN for the slow query:
PLAN_TABLE_OUTPUT
| 1 | SORT AGGREGATE | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| ehgeoconstru | |
|* 3 | INDEX RANGE SCAN | ehgeoconstru_VSN | |
Predicate Information (identified by operation id):
2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS NULL AND "EC"."TYPE"='BAR')
3 - access("EC"."CONTEXTVERSION"='REALWORLD' AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss'))
filter("EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss'))
Note: rule based optimization
The TKPROF output for this slow statement is:
TKPROF: Release 9.2.0.7.0 - Production on Tue Nov 17 14:46:32 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: gen_ora_3120.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD')
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 538.12 162221 1355323 0 1
total 4 0.00 538.12 162221 1355323 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 153
Rows Row Source Operation
1 SORT AGGREGATE
27747 TABLE ACCESS BY INDEX ROWID OBJ#(73959)
2134955 INDEX RANGE SCAN OBJ#(73962) (object id 73962)
alter session set sql_trace=true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.02 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 153
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.02 0 0 0 0
Fetch 2 0.00 538.12 162221 1355323 0 1
total 5 0.00 538.15 162221 1355323 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
Trace file: gen_ora_3120.trc
Trace file compatibility: 9.02.00
Sort options: prsela exeela fchela
2 sessions in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
94 lines in trace file.
Edited by: PauloSMO on 17/Nov/2009 4:21
Edited by: PauloSMO on 17/Nov/2009 7:07
Edited by: PauloSMO on 17/Nov/2009 7:38 - Changed title to be more correct.

Although your optimizer_mode is CHOOSE, it appears that no statistics have been gathered on ehgeoconstru. The lack of cost estimates and estimated row counts in each step of the plan, and the "Note: rule based optimization" at the end of both plans, tend to confirm this.
Optimizer_mode CHOOSE means that if statistics are gathered the CBO will be used, but if no statistics are present on any of the tables in the query, the rule-based optimizer will be used. The RBO tends to be index-happy at the best of times. I'm guessing that the index ehgeoconstru_VSN has contextversion as the leading column and also includes birthdate.
You can either gather statistics on the table (if all of the other tables have statistics) using dbms_stats.gather_table_stats, or hint the query to use a full scan instead of the index. Another alternative is to apply a function or operation to contextversion to preclude the use of the index, something like this:
SELECT COUNT(*)
FROM ehgeoconstru ec
WHERE ec.type='BAR' and
ec.contextVersion||'' = 'REALWORLD' and
ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') and
deathdate is null and
SUBSTR(ec.strgfd, 1, LENGTH('[CIMText')) <> '[CIMText'
or perhaps UPPER(ec.contextVersion) if that would not change the rows returned.
John -
Writing a query using a multisource-enabled data foundation
I know there is an easy way to do this but I'm suffering from a mind block late on a Friday afternoon.
What I have is a multisource-enabled data foundation that is reading data from a connection to multi-cube MC07 in our DEV system and multi-cube MC07 in our QAS system. The two multi-cubes are joined in the data foundation on create date and contract number.
Here is what I have done so far:
- Created two relational multisource-enabled connections: one to multi-cube MC07 in the DEV system and one to multi-cube MC07 in the QA system
- Created a multi-source data foundation on the connections with joins on create date and contract number.
- Created a relational data source business layer using my data foundation
- In the business layer I click on the Queries tab and then the "+" to bring up the query panel
I want to write a query that combines the data from DEV and QA and list all the contract numbers, their create date and a gross quantity field from both systems
How do I write the query?
Appreciate any help
Mike...

Whenever we are creating a data foundation with multi-source enabled, the data sources are not enabled. For single source it is working fine. We are doing it with SAP BO 4.0 SP4 client tools. How to resolve this issue?
-
HR Query using two diffferent master data attribute values
I have a requirement to calculate headount based on certain condition.
Example: The condition is as follows:
For each "Performance Rating" (ROW)
Count Headcount for "Below Min" (COLUMN)
- Below Min = "Annual Salary" less than "Pay Grade Level Min"
- "Annual Salary" and "Pay Grade Level" are attributes of "Employee" master data and are of type CURR
- "Pay Grade Level Min" in turn is an attribute of "Pay Grade Level" master data.
I am trying to build this query based on the standard InfoSet 0ECM_IS02, which has an ODS (Compensation Process), Employee and Person.
Any help on the required approach is appreciated, e.g. creating a restricted KF using ROWCOUNT (Number of Records) and implementing the logic for "Below Min".
Thanks
Raj

Hi,
Have you tried creating a variable for this KF with an exit? I think it is possible here.
Assign points if useful.
Regards
N Ganesh -
SQL Query using a Variable in Data Flow Task
I have a Data Flow task that I created. THe source query is in this file "LPSreason.sql" and is stored in a shared drive such as
\\servername\scripts\LPSreason.sql
How can I use this .sql file as a SOURCE in my Data Flow task? I guess I can use SQL Command as the access mode, but I'm not sure how to do that.

Hi Desigal59,
You can use a Flat File Source adapter to get the query statement from the .sql file. When creating the Flat File Connection Manager, set the Row delimiter to a character that won't be in the SQL statement, such as "Vertical Bar {|}". In this way, the Flat File Source outputs only one row with one column. If necessary, you can set the data type of the column from DT_STR to DT_TEXT so that the Flat File Source can handle SQL statements longer than 8000 characters.
After that, connect the Flat File Source to a Recordset Destination, so that we store the column to a SSIS object variable (supposing the variable name is varQuery).
In the Control Flow, we can use one of the following two methods to pass the value of the Object type variable varQuery to a String type variable QueryStr which can be used in an OLE DB Source directly.
Method 1: via Script Task
1. Add a Script Task under the Data Flow Task and connect them.
2. Add User::varQuery as ReadOnlyVariables and User::QueryStr as ReadWriteVariables.
3. Edit the script as follows:
public void Main()
{
    // Fill a DataTable from the recordset held in the object variable
    System.Data.OleDb.OleDbDataAdapter da = new System.Data.OleDb.OleDbDataAdapter();
    DataTable dt = new DataTable();
    da.Fill(dt, Dts.Variables["User::varQuery"].Value);
    Dts.Variables["User::QueryStr"].Value = dt.Rows[0].ItemArray[0];
    Dts.TaskResult = (int)ScriptResults.Success;
}
4. Add another Data Flow Task under the Script Task, and join them. In the Data Flow Task, add an OLE DB Source, set its Data access mode to "SQL command from variable", and select the variable User::QueryStr.
Method 2: via Foreach Loop Container
Add a Foreach Loop Container under the Data Flow Task, and join them.
Set the enumerator of the Foreach Loop Container to Foreach ADO Enumerator, and select the ADO object source variable as User::varQuery.
In the Variable Mappings tab, map the collection value of the Script Task to User::QueryStr, and Index to 0.
Inside the Foreach Loop Container, add a Data Flow Task like step 4 in method 1.
Regards,
Mike Yin
TechNet Community Support -
HELP! Query Using View to denormalize data
Help!!!
Below are two logical records of data that have been denormalized so that each column is represented as a different record in the database
RECORD NUMBER, COL POSITION, VALUE
1, 1, John
1, 2, Doe
1, 3, 123 Nowhere Lake
1, 4, Tallahassee
1, 5, FL
2, 1, Mary
2, 2, Jane
2, 3, 21 Shelby Lane
2, 4, Indianapolis
2, 5, IN
I need to write a view to return the data values in this format:
John, Doe, 123 Nowhere Lake, Tallahassee, FL
Mary, Jane, 21 Shelby Lane, Indianapolis, IN
I REALLY need this as soon as possible! Someone PLEASE come to my rescue!!!

Assuming that the other table has one record per record_num in the first table, you could do something like:
SQL> SELECT * FROM t1;
RECORD_NUM DATE_COL
1 17-MAR-05
2 16-MAR-05
SQL> SELECT a.record_num, col1, col2, col3, col4, col5, t1.date_col
2 FROM (SELECT record_num,
3 MAX(DECODE(col_position, 1, value)) Col1,
4 MAX(DECODE(col_position, 2, value)) Col2,
5 MAX(DECODE(col_position, 3, value)) Col3,
6 MAX(DECODE(col_position, 4, value)) Col4,
7 MAX(DECODE(col_position, 5, value)) Col5
8 FROM t
9 GROUP BY record_num) a, t1
10 WHERE a.record_num = t1.record_num;
RECORD_NUM COL1 COL2 COL3 COL4 COL5 DATE_COL
1 John Doe 123 Nowhere Lake Tallahassee FL 17-MAR-05
2 Mary Jane 21 Shelby Lane Indianapolis IN 16-MAR-05

If your second table is structured like the first table, then something more like:
SELECT record_num,
MAX(DECODE(source,'Tab1',DECODE(col_position, 1, value))) Col1,
MAX(DECODE(source,'Tab1',DECODE(col_position, 2, value))) Col2,
MAX(DECODE(source,'Tab1',DECODE(col_position, 3, value))) Col3,
MAX(DECODE(source,'Tab1',DECODE(col_position, 4, value))) Col4,
MAX(DECODE(source,'Tab1',DECODE(col_position, 5, value))) Col5,
MAX(DECODE(source,'Tab2',DECODE(col_position, 1, value))) T2_Col1,
MAX(DECODE(source,'Tab2',DECODE(col_position, 2, value))) T2_Col2
FROM (SELECT 'Tab1' source, record_num, col_position, value
FROM t
UNION ALL
SELECT 'Tab2' source, record_num, col_position, value
FROM t1)
GROUP BY record_num

By the way, I can't say I am enamoured of your data model.
John -
How to assure index is being used for Query
Hi All,
Indexes are the best way to speed up data retrieval selectively.
However, I wanted to know how to trace whether an index is being used in a particular query.
1. Table tab1 has a composite index on a and b.
2. SELECT a, b, c FROM tab1 WHERE a > 500;
Will this query use the index created in 1?
Conceptually it should not, because the WHERE part contains only field a, whereas the index is composite on fields a and b.
However, I wanted to know how to trace whether a particular query is using an index or not.
This question was asked to me in an interview.
Please let me know at [email protected]
Thanks in advance,
Alok

Hi,
Use EXPLAIN PLAN to know if an index is used.
As you can see in the following example, even if the index is composite and I use only part of it in the query, the index can still be used:
SQL> create table test (col1 number, col2 number, col3 varchar2(10));
Table created.
SQL> insert into test select object_id, data_object_id, substr(object_name,1,10) from dba_objects;
70211 rows created.
SQL> create index idx on test(col1,col2);
Index created.
SQL> explain plan for
2 select col1,col2,col3 from test where col1 > 500;
Explained.
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST | | | |
|* 2 | INDEX RANGE SCAN | IDX | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - access("TEST"."COL1">500)
filter("TEST"."COL1">500)
Note: rule based optimization
16 rows selected.
--If stats are collected, the index is no longer used
SQL> analyze table test compute statistics;
SQL> explain plan for
2 select col1,col2,col3 from test where col1 > 500;
Explained.
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 69808 | 1227K| 16 |
|* 1 | TABLE ACCESS FULL | TEST | 69808 | 1227K| 16 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - filter("TEST"."COL1">500)
Note: cpu costing is off
14 rows selected.
--But if you use an equality in your query instead of >, the index is still used
SQL> explain plan for
2 select col1,col2,col3 from test where col1 = 500;
Explained.
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 18 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 18 | 3 |
|* 2 | INDEX RANGE SCAN | IDX | 1 | | 2 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - access("TEST"."COL1"=500)
Note: cpu costing is off
15 rows selected.
SQL> Nicolas. -
In DBI, how to find out the source query used for the report
Hi All,
How do I find the source query used to display the data in DBI Reports or Dashboards? In the Apps front end we can get it by going to Help and Record History, but DBI runs in Internet Explorer, so I don't know how to get the source (SELECT) query used.
In IE we have View --> Source, but that does not help, since it gives the HTML code and not the SELECT query used.
If anyone has ever worked on it...Please help me in finding it.
Thanks,
Neeraj Shrivastava

Hi Neeraj,
You can see the query used to display reports. Follow these steps to get the query:
1) Login to Oracle Apps
2) Select the "Daily Business Intelligence Administrator" responsibility
3) Click on "Enable/Disable Debugging" (debugging is now enabled)
4) Open the report whose query you want to see
5) The view source now displays the query along with the bind variables
Feel free to ping me if you have any doubts
thanks
kittu -
How to know whether query is using Indexes in oracle?
Please let me know the necessary steps to check whether a query is using indexes.
Try the query below and check the explain plan; the index access shows up as the TABLE ACCESS BY INDEX ROWID and INDEX UNIQUE SCAN steps in the plan.
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM emp WHERE empno = 7839;
Execution Plan
Plan hash value: 2949544139
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 38 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 38 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | PK_EMP | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("EMPNO"=7839) -
How to create the layout for query in Bex 3.5
Hi All,
I have a requirement to create a layout for a query. After passing the variable values, the user selects the required layout, and the output should follow the fields of the selected layout.
I have no idea how to create a layout.
Thanks in advance.
Thanks & Regards,
Mallikarjuna.k

Hi Gregor,
In the note 1149346 that you have mentioned, it says -
You must start the input-ready query in change mode.
BEx Analyzer: In the properties of the analysis grid, the switch
"Suppress new rows" must not be set. Furthermore,
the switch "Allow entry of plant values"
must be set in the local query properties.
I have not seen this setting "Allow entry of plant values" in a query - can you tell what it refers to?