Tricky query optimisation
Hi all,
our task is to develop a new application based on the data
model of an old application. As the old application must remain
available and running, no changes are possible to tune the
database! (It is currently running on 9.2.0.7.)
We've put together this example code describing the problem:
-------------------- snip --------------------
-- clean up last run
drop table shares;
drop table files;
drop table families;
-- families table
create table families (
family_id integer,
primary key (family_id)
);
-- files table: each file belongs to one family
create table files (
file_id integer,
family_id integer not null,
primary key (file_id),
foreign key (family_id) references families (family_id)
);
-- shares table: shares can be defined on family basis or on file basis.
-- If defined on family and file, only the file shares are relevant!
create table shares (
file_id integer,
family_id integer not null,
person varchar2(100) not null,
percentage number(4,2) not null,
unique (file_id, family_id, person, percentage),
foreign key (file_id) references files (file_id),
foreign key (family_id) references families (family_id)
);
-- insert one family and two files with shares for demonstration purposes
insert into families values (1);
insert into files values (1,1);
insert into files values (2,1);
insert into shares values (null, 1, 'Thomas',33.4);
insert into shares values (null, 1, 'Marcus',66.6);
insert into shares values (2, 1, 'Thomas',50);
insert into shares values (2, 1, 'Marcus',50);
commit;
-- Goal: determine the shares for each file
-- good query:
select s.family_id, s.file_id, s.person, s.percentage
from shares s
where s.file_id is not null
union
select p.family_id, p.file_id, s.person, s.percentage
from shares s, files p
where s.family_id = p.family_id
and s.file_id is null
and not exists (select 1
from shares p2
where p2.file_id = p.file_id);
-- poor query:
select distinct p.family_id, p.file_id, s.person, s.percentage
from files p, shares s
where p.family_id = s.family_id
and (s.file_id is null or s.file_id = p.file_id)
order by p.family_id, p.file_id;
-------------------- /snip --------------------
Now here's the hook!
Amount of data (rows):
- families: 116,000
- files: 633,000
- shares: 423,000
- avg. persons per family: 2.1
We were using the queries above in views to hide the
complexity from our newly developed application.
A test case containing the view and several other tables performed
like this:
- execution time when the view used the poor query: ~ 0.3 seconds
- execution time when the view used the good query: ~ 37 seconds
We know the design has its flaws, but it evolved over time and
proved useful for the old application.
So my question is: does anyone have a clue how we can combine
performance and correctness within one statement? Any comment
is appreciated!
I'm with shoblock. If the "good" query returns the wrong results, then it's unusable and not really good.
I'm afraid you're stuck with tuning the long-running query which returns the right results.
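A possible middle ground, not from the thread itself: join files to shares once and let file-level rows win via a NOT EXISTS. The sketch below rebuilds the posted schema in SQLite purely to check correctness; whether this beats the UNION on 9.2.0.7 would have to be tested against the real data.

```python
# Sketch (assumption, not the poster's final solution): one pass over
# files joined to shares, where a family-level share only applies when
# the file has no file-level shares of its own.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table families (family_id integer primary key);
create table files (file_id integer primary key, family_id integer not null);
create table shares (file_id integer, family_id integer not null,
                     person text not null, percentage real not null);
insert into families values (1);
insert into files values (1,1), (2,1);
insert into shares values (null,1,'Thomas',33.4), (null,1,'Marcus',66.6),
                          (2,1,'Thomas',50), (2,1,'Marcus',50);
""")

rows = con.execute("""
select p.family_id, p.file_id, s.person, s.percentage
from files p
join shares s
  on s.family_id = p.family_id
 and ( s.file_id = p.file_id                        -- file-level share wins
    or ( s.file_id is null                          -- fall back to family share
         and not exists (select 1 from shares s2    -- ...only if no file share
                         where s2.file_id = p.file_id) ) )
order by p.file_id, s.person
""").fetchall()
for r in rows:
    print(r)
```

File 1 has no file-level shares, so it picks up the family split; file 2's file-level shares override the family ones.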
Similar Messages
-
Hi
I have a problem regarding query optimisation, and in particular with one query. The query is for a view, and when I ran the cost on it, there were full table scans and some costs were quite high:
Select a.name, b.id, c.date
from table a,
table b,
table c
where a.id = b.id(+) and
decode(a.version, null, a.num, a.version) = b.version(+) and
a.id = c.id and
b.type is null
My question is whether this query can be made more efficient by removing the outer joins. I was thinking this could be done with some union or intersect query wrapped in an outer select. Is this possible, and if so, what would be the best alternative query?
Thanks
Hi,
Is b.type a NOT NULL column? Is the reason you have "b.type is null" to exclude the records that are present in both table A and table B? In that case, try this and see if it gives you the same result:
select a.name,
a.id,
c.date
from table1 a, table3 c
where a.id = c.id
and not exists
(select 1
from table2 b
where a.id = b.id
and nvl (a.version, a.num) = b.version)
Make sure you have gathered statistics on the tables.
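As a quick sanity check of that rewrite (table and column names invented for the demo): when b.type is NOT NULL, "outer join + b.type IS NULL" keeps exactly the rows of a with no match in b, which is the same set the NOT EXISTS anti-join returns. SQLite's COALESCE stands in for DECODE/NVL here.

```python
# Demonstrates, on made-up data, that the outer-join-plus-IS-NULL filter
# and the NOT EXISTS anti-join return the same rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table a (id integer, version integer, num integer, name text);
create table b (id integer, version integer, type text not null);
insert into a values (1, null, 10, 'x'), (2, 2, 20, 'y'), (3, 3, 30, 'z');
insert into b values (1, 10, 't'), (3, 99, 't');
""")

outer_join = con.execute("""
select a.name from a
left join b on a.id = b.id
           and coalesce(a.version, a.num) = b.version  -- stands in for NVL
where b.type is null
order by a.name
""").fetchall()

anti_join = con.execute("""
select a.name from a
where not exists (select 1 from b
                  where b.id = a.id
                    and b.version = coalesce(a.version, a.num))
order by a.name
""").fetchall()

print(outer_join, anti_join)  # the two forms agree
```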
And as sb92075 said above. Always mark a thread as answered once you get the answer. You need to help the forum as well rather than just taking help from the forum.
G. -
Can someone explain this crazy query optimisation?
A software company has me trialling a product that has a query optimiser. I can't for the life of me explain what is going on below and would like some help from someone with a bit more SQL experience. I have a query I've been struggling to bring down the time on:
CREATE OR REPLACE VIEW PLATE_STATS_DATA_VIEW AS
SELECT P.Folder_ID, P.expt_or_control_ID, P.Plate_Type, P.Dose_Weight, P.Volume, P.Strain_Code, P.S9_Plus,
P.type_Name as Contents, P.Replicate_ID,
P.Number_Of_Plates, round(avg(P.count)) as mean_count,
min(P.count) as min_count, max(P.count) as max_count, count(P.count) as Plates_Counted
FROM expt_folder_plates P, History_Control_Log L
WHERE P.expt_or_control_ID = L.Control_ID
AND P.Strain_Code = L.Strain_Code
AND P.Plate_Type = L.Type_Code
AND P.S9_Plus = L.S9_Plus
AND L.control_Included > 0
GROUP BY P.Folder_ID, P.expt_or_control_ID, P.Plate_Type, P.Dose_Weight, P.Volume, P.Strain_Code,
P.S9_Plus, P.type_Name, P.Replicate_ID, P.Number_Of_Plates
It took 20 seconds on my large test database, so I put it through the optimiser. It took it down to 0.1 seconds simply by changing 'WHERE P.expt_or_control_ID = L.Control_ID' to 'WHERE P.expt_or_control_ID = L.Control_ID + 0'.
I have no idea why this would make any difference - adding zero to a value?! Can anyone enlighten me?
Many thanks,
Gary
Message was edited by:
GaryKyle
Ahhh, thanks guys. I'm a bit of a beginner here. This is my first look at explain plans; I just had to work out how to see them! I think I understand what is happening now: it looks like that with the index, it does the GROUP BY first on all the data, and this takes a large amount of time. Am I right?
Before +0:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=162787 Cardinality=1380965 Bytes=328669670
SORT GROUP BY Cost=162787 Cardinality=1380965 Bytes=328669670
HASH JOIN Cost=16773 Cardinality=1380965 Bytes=328669670
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_DETAILS Cost=29 Cardinality=4038 Bytes=387648
HASH JOIN Cost=16730 Cardinality=1380965 Bytes=196097030
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=AMES_PLATE_TYPES Cost=2 Cardinality=6 Bytes=192
HASH JOIN Cost=16715 Cardinality=1380965 Bytes=151906150
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=HISTORY_CONTROL_LOG Cost=2 Cardinality=40 Bytes=880
HASH JOIN Cost=16694 Cardinality=2002400 Bytes=176211200
HASH JOIN Cost=59 Cardinality=8076 Bytes=282660
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_SOLVENTS Cost=2 Cardinality=3 Bytes=51
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=CONTROLS Cost=56 Cardinality=8078 Bytes=145404
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_PLATES Cost=16584 Cardinality=5499657 Bytes=291481821
After +0:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=1655 Cardinality=138 Bytes=45954
HASH JOIN Cost=1655 Cardinality=138 Bytes=45954
HASH JOIN Cost=1625 Cardinality=138 Bytes=33672
HASH JOIN Cost=1569 Cardinality=414 Bytes=96462
MERGE JOIN CARTESIAN Cost=4 Cardinality=18 Bytes=630
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_SOLVENTS Cost=2 Cardinality=3 Bytes=30
BUFFER SORT Cost=2 Cardinality=6 Bytes=150
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=AMES_PLATE_TYPES Cost=1 Cardinality=6 Bytes=150
VIEW Object owner=PI_AMES_BIG Object name=TEST_PLATE_STATS Cost=1564 Cardinality=138 Bytes=27324
SORT GROUP BY Cost=1564 Cardinality=138 Bytes=10350
TABLE ACCESS BY INDEX ROWID Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_PLATES Cost=39 Cardinality=3 Bytes=159
NESTED LOOPS Cost=1563 Cardinality=138 Bytes=10350
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=HISTORY_CONTROL_LOG Cost=2 Cardinality=40 Bytes=880
INDEX RANGE SCAN Object owner=PI_AMES_BIG Object name=EXPT_CONTROL_ID_INDEX Cost=5 Cardinality=248
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=CONTROLS Cost=56 Cardinality=8078 Bytes=88858
TABLE ACCESS FULL Object owner=PI_AMES_BIG Object name=EXPT_FOLDER_DETAILS Cost=29 Cardinality=4038 Bytes=359382
Thanks again,
Gary
P.S. looks like the explain plan's made the post horribly wide again ;) sorry. I'll keep it this way though otherwise the plan is hard to read. -
Tricky query with multiple hierarchial sub queries
Here's a puzzle that I cannot solve. Who can help me?
Given table F (records form a binary tree with levels 0, 1, and
2):
Id IdParent
F1 null
F2 F1
F3 F2
F4 F2
F5 F1
F6 F5
F7 F5
and given table D (records form a similar binary tree with
levels 0, 1, and 2):
Id IdParent
D1 null
D2 D1
D3 D2
D4 D2
D5 D1
D6 D5
D7 D5
and given table P (cross referencing tables F and D):
IdF IdD
F2 D6
F3 D2
F5 D7
and given table S (which holds the seed to the query):
IdF
F3
and given table I (just any collection of records that reference
table D which we want to select records from):
Id IdD
I1 D1
I2 D2
I3 D3
I4 D4
I5 D5
I6 D6
I7 D7
I8 D1
I9 D5
all together being related like in figure 1:
S.IdF =>> F.Id
F.Id <- P.IdF
P.IdD -> D.Id
D.Id <<= I.IdD
where =>> denotes 'is or is a descendant of'
and -> denotes 'is'
I want to build a query that lets me select all records from
table I (which reference table D) such that the referenced
records in table D are hierarchical descendants of any record
referenced by records in table P, where those records in P
reference records in table F that are ancestors of the records
referenced by records in table S.
If it wasn't for the hierarchical approach on both sides of the
cross-referencing table, matters would have been easy. Then the
relational map would have been like in figure 2:
S.IdF <- P.IdF
P.IdD -> I.IdD
and the query would have been:
SELECT I.Id
FROM I, P, S
WHERE I.IdD = P.IdD
AND P.IdF = S.IdF
So in what I am looking for, you may say that the '='-signs in
this select statement should denote 'is or is a descendant of'
going towards the side of table P.
Given the tables listed above and given the query I am seeking,
I expect to retrieve the following result set:
I.Id
I2
I3
I4
I6
Tricky, eh? I believe the figures are the best angle from which to understand this problem.
You do this with subqueries and hierarchical queries...
First the hierarchical subquery on F...
SQL> SELECT f.id
2 FROM f
3 CONNECT BY PRIOR f.idp = f.id
4 START WITH f.id IN ( SELECT idf FROM s )
5 ;
ID
F3
F2
F1
Then join with the cross reference table...
SQL> SELECT p.idd
2 FROM p
3 , ( SELECT f.id
4 FROM f
5 START WITH f.id IN ( SELECT idf FROM s )
6 CONNECT BY PRIOR f.idp = f.id
7 ) s
8 WHERE s.id = p.idf
9 ;
ID
D6
D2
Use this as a subquery in a hierarchical query for the
descendants in D...
SQL> SELECT d.id
2 FROM d
3 START WITH d.id IN ( SELECT p.idd
4 FROM p
5 , ( SELECT f.id
6 FROM f
7 START WITH f.id IN ( SELECT idf FROM s )
8 CONNECT BY PRIOR f.idp = f.id
9 ) s
10 WHERE s.id = p.idf
11 )
12 CONNECT BY PRIOR d.id = d.idp
13 ;
ID
D2
D3
D4
D6
Then use that as a subquery to return the I result set...
SQL> SELECT i.id
2 FROM i
3 WHERE i.idd IN ( SELECT d.id
4 FROM d
5 START WITH d.id IN ( SELECT p.idd
6 FROM p
7 , ( SELECT f.id
8 FROM f
9 START WITH f.id IN ( SELECT idf FROM s )
10 CONNECT BY PRIOR f.idp = f.id
11 ) s
12 WHERE s.id = p.idf
13 )
14 CONNECT BY PRIOR d.id = d.idp
15 )
16 ;
ID
I2
I3
I4
I6
Good Luck... -
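For what it's worth, on databases with recursive CTEs (Oracle 11gR2 onwards, or SQLite as used in this sketch) the same F-ancestors, then P, then D-descendants, then I traversal can be written without CONNECT BY. This is an illustration of the technique, not something tested on Oracle:

```python
# Recursive-CTE version of the four-step CONNECT BY solution, using the
# exact sample data from the post.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table f (id text, idparent text);
create table d (id text, idparent text);
create table p (idf text, idd text);
create table s (idf text);
create table i (id text, idd text);
insert into f values ('F1',null),('F2','F1'),('F3','F2'),('F4','F2'),
                     ('F5','F1'),('F6','F5'),('F7','F5');
insert into d values ('D1',null),('D2','D1'),('D3','D2'),('D4','D2'),
                     ('D5','D1'),('D6','D5'),('D7','D5');
insert into p values ('F2','D6'),('F3','D2'),('F5','D7');
insert into s values ('F3');
insert into i values ('I1','D1'),('I2','D2'),('I3','D3'),('I4','D4'),
                     ('I5','D5'),('I6','D6'),('I7','D7'),('I8','D1'),('I9','D5');
""")

rows = con.execute("""
with recursive
f_anc(id) as (                      -- seed rows from S and their ancestors in F
    select idf from s
    union
    select f.idparent from f join f_anc on f.id = f_anc.id
     where f.idparent is not null),
d_seed(id) as (                     -- D records cross-referenced from those
    select p.idd from p join f_anc on p.idf = f_anc.id),
d_desc(id) as (                     -- those D records and their descendants
    select id from d_seed
    union
    select d.id from d join d_desc on d.idparent = d_desc.id)
select i.id from i join d_desc on i.idd = d_desc.id
order by i.id
""").fetchall()
print([r[0] for r in rows])
```

This returns the same I2, I3, I4, I6 result set as the nested CONNECT BY approach.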
Hi.
Is there any way to optimise the code below? Or can I write a join query for this? If yes, then how?
SELECT land1
zone1
FROM tzone
INTO CORRESPONDING FIELDS OF TABLE t_land
FOR ALL ENTRIES IN int_delivery
WHERE zone1 = int_delivery-zone1.
IF sy-subrc = 0.
SELECT land1
landx
FROM t005t
INTO CORRESPONDING FIELDS OF TABLE t_landx
FOR ALL ENTRIES IN t_land
WHERE land1 = t_land-land1.
ENDIF.
Thanks
Any help will not go unappreciated.
hi,
how to optimize?
1. Avoid INTO CORRESPONDING FIELDS; create an internal table with only the required fields.
2.
select tzone~land1 tzone~zone1 t005t~landx
from tzone
join t005t
on tzone~land1 = t005t~land1
into table t_landx
for all entries in int_delivery
where tzone~zone1 = int_delivery-zone1
and t005t~spras = sy-langu.
Regards
Anver -
Tricky Query regarding JTables.
This is a bit tricky, regarding JTables.
I have run the following query:
select id_status 'STATUS',id_entity 'ENTITY'
from TableA
group by id_status,id_entity
Output
STATUS ENTITY
100 AGL
123 BSS
234 RDB
245 GER
I have now both the rows and cols in 2 vectors.
The Output I want is this:
Depending on the no. of entities displayed above, these then
become the column headings, and the 4 rows will be shown as one row:
OUTPUT REQUIRED:
Type AGL BSS RDB GER
Files 100 123 234 245
I am able to display the 4 entity names as column names by:
for ( int i = 0; i < rowCount; i++ ) {
m_colName.add(stm.getValueAt(i,2) ); // Where m_colName is a vector.
}
JTable table = new JTable(m_rowName,m_colName);
Now how do I read the data from the row Vector which now contains 4 rows
as one row and display them
This is my current row display:
for ( int i = 0; i < rowCount; i++ ) {
m_rows = new Vector();
for ( int z = 0; z < colCount; z++ ) {
m_rows.add(stm.getValueAt(i,z) );
}
// Vector of vectors.
m_rowValues.addElement(m_rows);
}
I may have to modify the above code so that the 4 rows are displayed as one row
and displayed as discussed above.
My present output is
AGL BSS RDB GER
100
123
234
245
Please help me now to read the 4 rows as one row and display them.
Wrong use of rows/columns. Try like below:
for ( int i = 0; i < colCount; i++ ) {
m_rows = new Vector();
for ( int z = 0; z < rowCount; z++ ) {
m_rows.add(stm.getValueAt(z,i) ); // note: row index first when transposing
}
// Vector of vectors.
m_rowValues.addElement(m_rows);
}
You weren't so far from it ;-) -
Need help in query optimisation.
Hi,
A small doubt about increasing query performance.
I have a join query operating on multiple tables. The tables contain millions of records. Will it improve performance if the columns used in the joins are sorted? I am using these joins in DW mappings.
Thanks for any suggestions.
Ashish
Here are some things you can look at:
The best way to optimise a join is to ensure that each join involves only a few tables.
Make sure that the smallest result set is created first (either use hints or rearrange the order of the tables in the join clause).
Ensure that the join columns are indexed and that the query does not inadvertently prevent the use of indexes by using functions such as "upper".
You must also be aware that the creation of new indexes may affect the performance of unrelated queries and that de-normalising tables is a trade off between faster queries and slower updates (and reduced flexibility).
Better to use EXPLAIN PLAN to analyse how your query behaves... -
Expert help needed with tricky query
I have a query database with a really simple schema but a tricky requirement: I need to display records with a simple SELECT but then filter the result based on the authority/access level of the user making the query.
The source data is held in a table with just the following columns:
SRCTABLE:
subject_ID
date
data_ID
data_item
All column types are text strings and the first 3 are a composite key. There are 10s of millions of records in the table.
The access authorization is held in another table with the following columns:
ACCTABLE:
data_ID
access1
access2
...
accessn
The ellipsis means there are as many (boolean type) access1...n columns as there are distinct access levels to the source data.
The table contains one row for each distinct data_ID appearing in the source table. On each row the TRUE values in the access1...n columns indicate authorization to see the data item and the filter should leave that row in the result set.
The question then is how to write the query statement? It is assumed that the access id (i.e. the relevant column) is known when the query is made.
Something like
SELECT data_item FROM SRCTABLE
WHERE subject_ID="xxx" AND date = "1/1/2000";
would do it except for the need to filter each row based on the access authorization of the user.
Any help would be appreciated.
Thanks everybody for responding.
APC has a good point about really protecting every single item type separately. Unfortunately, this is precisely the case. The security here is not organised into levels; rather, each kind of item is protected by need-to-know security specific to that particular item. Users are classified by their need to know a combination of the item types, those combinations are not in any sense consistent, and there will be new classes over time. Access control therefore necessarily becomes a matrix of item types vs. access classes.
Fortunately this particular database does not exist yet so i am free to solve the problem in any way that fulfills the requirement. This is just the suggested form. I am not entirely happy with it hence the question on this forum in the first place.
So, i appreciate it should you have any further suggestions for optimal solution to handle the requirements. Again, those are:
1. A query that returns the data_items for a given ID and date (this is dead simple)
2. A filter (preferably in the query) that filters out those data_items the current user (his/her access class is known) is not authorized to see.
3. The plan calls for a table listing every possible item type with a column for each access class, enumerating the items allowed for that class. Any other solution to this issue would be acceptable provided it is capable to independently validate any single item type against any access class.
I hope this makes sense. -
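One hedged sketch of an alternative to the boolean-column matrix (all names below are invented for the demo): store one (data_ID, access_class) row per grant. The filter in requirement 2 then becomes a plain join with the user's class as a bind variable, and adding a new access class needs no DDL.

```python
# Normalised grant table instead of one boolean column per access class.
# SRCTABLE's "date" column is renamed dt here to avoid the reserved word.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table srctable (subject_id text, dt text, data_id text, data_item text,
                       primary key (subject_id, dt, data_id));
create table accgrant (data_id text, access_class text,
                       primary key (data_id, access_class));
insert into srctable values ('xxx','2000-01-01','HR','salary'),
                            ('xxx','2000-01-01','PUB','name');
insert into accgrant values ('PUB','clerk'), ('PUB','auditor'), ('HR','auditor');
""")

def items_for(subject_id, dt, access_class):
    """Return only the data items this access class is allowed to see."""
    return [r[0] for r in con.execute("""
        select s.data_item
        from srctable s
        join accgrant g on g.data_id = s.data_id
        where s.subject_id = ? and s.dt = ? and g.access_class = ?
        order by s.data_item""", (subject_id, dt, access_class))]

print(items_for('xxx', '2000-01-01', 'clerk'))    # only PUB items
print(items_for('xxx', '2000-01-01', 'auditor'))  # HR and PUB items
```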
Tricky query needing clever SQL...
A table called alt_websearch_log records the time it takes to complete each search performed on a website. I am therefore able to write something like this:
select to_char(search_date,'DD/MON/YYYY'), count(*), avg(searchtime_secs), max(searchtime_secs)
from alt_websearch_log
group by to_char(search_date,'DD/MON/YYYY')
order by to_char(search_date,'DD/MON/YYYY') desc
...and it works fine, giving me a daily average search time over a period of weeks or months for reporting to management. The trouble is that I have to periodically rebuild the indexes that make this search work, and during the time that the rebuild is taking place -not surprisingly- search performance slows down quite a bit.
(Incidentally, I know not to rebuild indexes routinely, but these two are special, because they're Context indexes and the table on which they're built is itself completely reconstructed at weekends. It's a weird system, but we have to do it that way, for lots of reasons I ask you to take on trust, at least for now!)
My rebuilding code has been modified to insert a line into the alt_websearch_log table, so that I can select from it and see this sort of thing:
SESSION_ID SERVERNAME SEARCHTERMS SEARCHDATE
99999999999 LOCALDBSRVR START REBUILD 15/7/2008 16:00
And when the rebuild completes, I put a very similar line that says 'STOP REBUILD' at (say) 15/7/2008 18:00
So what I would like to be able to do is to run the same sort of query as I started with, but with all the searches that take place between a start and stop rebuild marker being ignored for the purposes of computing the average and maximum.
My problem is that I cannot conceive of a query that discards rows between a start and stop time that happens repeatedly (for example, I would have a START/STOP pair on 8th July, 1st July, 23 June and so on, but because manual rebuilds might sometimes happen, there's no definite pattern for when such pairs occur).
In other words, if the sorted data ran;
1/7/08 10:00 1.3 secs
1/7/08 10:05 1.4 secs
1/7/08 10:10 start rebuild
1/7/08 10:11 2.5 secs
1/7/08 10:12 4.5 secs
1/7/08 10:15 stop rebuild
1/7/08 10:16 1.1 secs
2/7/08 11:00 1.1 secs
2/7/08 11:03 1.1 secs
2/7/08 11:05 start rebuild
2/7/08 11:14 3.1 secs
2/7/08 11:16 3.6 secs
2/7/08 11:25 stop rebuild
2/7/08 11:36 1.2 secs
...I would want 1.3, 1.4 and 1.1 to be averaged to 1.267 for 1/7/08, with the two 2-second-plus searches on that day ignored for reporting purposes. And then I'd need the average for 2/7/08 to be reported as 1.13 seconds, with the two 3+ second response times ignored, because they were recorded between that day's start/stop rebuild markers.
Whilst it is theoretically possible that a 'start' could begin at, say 5 to midnight and its corresponding 'stop' wouldn't be recorded until 5 past midnight, if it makes it any easier, you can assume that a start and a stop always share the same date.
It is also possible in real life that we might do more than one rebuild per day, but if it makes it any easier, it's OK to assume only one start/stop marker will be recorded per day. (Actually, it would be nice if you didn't have to assume that, because this is a situation I know we would definitely face at some point in the future).
I'd very much appreciate some insight into any SQL techniques that can be used to deal with this issue. I am guessing some of the modelling commands might be useful, but I don't know where to begin on that. I'm using 10.2.0.3 on Windows.
Sorry for the long question. Grateful for any help.
OK, build a sample table:
create table alt_websearch_log
(search_date date,
search_time_secs numeric(9,2),
searchterms varchar2(50));
insert into alt_websearch_log values (to_date('1/7/08 10:00','dd/mm/yy hh24:mi'),1.3,'bob,sally');
insert into alt_websearch_log values (to_date('1/7/08 10:05','dd/mm/yy hh24:mi'),1.4,'dev');
insert into alt_websearch_log values (to_date('1/7/08 10:10','dd/mm/yy hh24:mi'),null,'start rebuild');
insert into alt_websearch_log values (to_date('1/7/08 10:11','dd/mm/yy hh24:mi'),2.5,'code');
insert into alt_websearch_log values (to_date('1/7/08 10:12','dd/mm/yy hh24:mi'),4.5,'nugget');
insert into alt_websearch_log values (to_date('1/7/08 10:15','dd/mm/yy hh24:mi'),null,'stop rebuild');
insert into alt_websearch_log values (to_date('1/7/08 10:16','dd/mm/yy hh24:mi'),1.1,'random');
insert into alt_websearch_log values (to_date('2/7/08 11:00','dd/mm/yy hh24:mi'),1.1,'search');
insert into alt_websearch_log values (to_date('2/7/08 11:03','dd/mm/yy hh24:mi'),1.1,'term');
insert into alt_websearch_log values (to_date('2/7/08 11:05','dd/mm/yy hh24:mi'),null,'start rebuild');
insert into alt_websearch_log values (to_date('2/7/08 11:14','dd/mm/yy hh24:mi'),3.1,null);
insert into alt_websearch_log values (to_date('2/7/08 11:16','dd/mm/yy hh24:mi'),3.1,null);
insert into alt_websearch_log values (to_date('2/7/08 11:25','dd/mm/yy hh24:mi'),null,'stop rebuild');
insert into alt_websearch_log values (to_date('2/7/08 11:36','dd/mm/yy hh24:mi'),3.1,null);
insert into alt_websearch_log values (to_date('2/7/08 11:39','dd/mm/yy hh24:mi'),1.1,'tube');
commit;
then try this..
col av_stime for 99.99
col max_s_time for 99.99
select trunc(search_date),avg(SEARCH_TIME_SECS) av_stime,max(search_time_secs) max_s_time from
(select search_date,search_time_secs,
nvl((LAST_VALUE(decode(searchterms,'stop rebuild','stop rebuild','start rebuild','start rebuild',null) IGNORE NULLS) over
(ORDER BY search_date ROWS BETWEEN UNBOUNDED PRECEDING AND current row)),'stop rebuild') AS term
from alt_websearch_log) a
where term = 'stop rebuild'
group by trunc(search_date)
order by trunc(search_date) desc;
TRUNC(SEA AV_STIME MAX_S_TIME
02-JUL-08 1.60 3.10
01-JUL-08 1.27 1.40
This works across days and with multiple rebuilds. Randolph's comments apply: if the logic is broken and a rebuild doesn't get logged correctly, this will return odd results. It may be safer to add an additional column to log the rebuilds, as you will have a problem if a user searches for one of the two key terms. Also, I have assumed that search_time_secs will be null when you log the rebuilds.
Chris -
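An alternative sketch of the same idea that avoids IGNORE NULLS (shown here in SQLite with simplified columns; untested on Oracle): keep a running sum of +1 for each 'start rebuild' and -1 for each 'stop rebuild', and report only rows where that running depth is zero. This also copes with several rebuilds per day and a rebuild spanning midnight.

```python
# Running start/stop counter: rows inside a rebuild window carry depth > 0
# and are excluded from the daily averages. Only day 1 of the sample data
# is loaded to keep the demo short.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table alt_websearch_log (search_date text, secs real, searchterms text);
insert into alt_websearch_log values
 ('2008-07-01 10:00',1.3,'bob'), ('2008-07-01 10:05',1.4,'dev'),
 ('2008-07-01 10:10',null,'start rebuild'),
 ('2008-07-01 10:11',2.5,'code'), ('2008-07-01 10:12',4.5,'nugget'),
 ('2008-07-01 10:15',null,'stop rebuild'),
 ('2008-07-01 10:16',1.1,'random');
""")

rows = con.execute("""
select date(search_date),
       round(avg(secs), 3), max(secs)
from (select search_date, secs,
             sum(case searchterms when 'start rebuild' then 1
                                  when 'stop rebuild' then -1
                                  else 0 end)
               over (order by search_date) as depth
      from alt_websearch_log)
where depth = 0                      -- outside any rebuild window
  and secs is not null               -- drop the marker rows themselves
group by date(search_date)
order by 1 desc
""").fetchall()
print(rows)
```

The 2.5 s and 4.5 s searches inside the window are ignored, giving the expected 1.267 s average for the day.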
Where-clause with the in-keyword. Query optimising.
Hi I have following question.
I have a query, e. g.
select x, y from tableA where y in (a_1, a_2, ..., a_n)
I have test it with n=1...5
1. 5-6 ms
2. 5-6 ms
3. 150 - 200 ms
4. 150 - 200 ms
5. 180 - 250 ms
There is a gap between n = 2 and n = 3. According to the execution plan the Oracle make a full table access if n >= 3. If n < 3 a an index is used. Is it possible to ensure that the index is always used?
Additionally I have test the equivalent query:
select x, y from tableA where y = a_1 or y = a_2 ... < = a_n
It shows the same behaviour.
Hi,
I wouldn't say that the optimizer is wrong here. I would look at some more inputs.
How many values of y do you have in the table?
please post the result of:
select count(*),y
from tablea
group by y;
to see how many values of each y-value exists in the table.
For example, if there are only the values 1..5 for y in the table, a full table scan is the fastest answer for your query, because all values have to be read!
Regards
Udo -
I need help with SQL query (if it can be accomplished with query at all).
I'm going to create a table with structure similar to:
Article_Name varchar2(30), Author_Name varchar2(30), Position varchar2(2). The Position field is basically the position of an article's author in the author list, e.g. if there is one author, his/her position is 0; if there are 2, then the first author is 0 and the second is 1, etc.
Article_Name Author_Name Position
Outer Space Smith 0
Outer Space Blake 1
How can I automate creation of Position, based on number of authors on the fly? Let's say I have original table without Position, but I want to create a new table that will have this information.
Regards
If you have an existing table whose structure doesn't tell you what position the author is in, what's the algorithm you'd use to determine who was the first author, the second author, etc? If you issue a select query on a table without providing an "order by" clause, Oracle makes no guarantees about the order in which it retrieves rows.
As an aside, why would you store position number in a varchar2 field? If it's a number, it ought to be stored as a number.
Justin -
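Building on Justin's point: you do need some column that records the order (here a hypothetical listed_order column). Given one, ROW_NUMBER() numbers the authors per article on the fly, and the position comes out as a number rather than a VARCHAR2. A sketch in SQLite:

```python
# Derive the 0-based author position per article with ROW_NUMBER();
# listed_order is an invented column standing in for whatever records
# the author sequence in the real data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table articles (article_name text, author_name text, listed_order integer);
insert into articles values ('Outer Space','Smith',10), ('Outer Space','Blake',20);
""")

rows = con.execute("""
select article_name, author_name,
       row_number() over (partition by article_name
                          order by listed_order) - 1 as position
from articles
order by article_name, position
""").fetchall()
print(rows)
```

The same expression works in an `INSERT ... SELECT` to populate a new table with the Position column filled in.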
Suggest the query to this tricky query please
Hi people,
I just got a query which is a brain teaser for me (though perhaps not for all of you). It is:
sql> select * from mytab;
sql> no name
1 asuri
1 prasanth
2 brian
2 lara
the above is returned by the SQL; here 1 and 2 are duplicated values.
Now the output required is:
sql>no name
1 asuri prasanth
2 brian lara
Is this possible with a single sql query;
please tell me solution
regards
prasanth
How do you find that it should be 'brian lara' and not 'lara brian'?
But anyway, the following will do...
SQL> select * from mytab;
A NAME
1 asuri
1 prasanth
2 brian
2 lara
SQL> select a,name||' '||name2 name from
2 (
3 select a,name,lead(name) over (partition by a order by a) name2 from mytab)
4 where name2 is not null;
A NAME
1 asuri prasanth
2 brian lara
The names returned by the above query could just as well be 'lara brian' and 'prasanth asuri', as there is no way of telling which one is a surname. -
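A side note on the same problem: where a string-aggregate function is available (LISTAGG ... WITHIN GROUP on Oracle 11gR2 and later, group_concat on SQLite as sketched below), the concatenation can be done directly without the LEAD trick; the caveat about not knowing which name is the surname still applies.

```python
# Collapse the duplicated keys into one row per key with group_concat.
# The ordered subquery nudges the concatenation order, but group_concat
# makes no hard ordering guarantee, so the test checks the word set.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table mytab (no integer, name text);
insert into mytab values (1,'asuri'),(1,'prasanth'),(2,'brian'),(2,'lara');
""")

rows = con.execute("""
select no, group_concat(name, ' ') as name
from (select no, name from mytab order by name)
group by no
order by no
""").fetchall()
print(rows)
```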
Suggest me the tricky query please
Hi
I want to write a SQL query which works as follows:
input inline:
a : prasanth
b : asuri
output should be:
prasanth asuri (a combination of the a and b variables)
and that too without a PL/SQL block.
please suggest me sql query;
thanks in advance
prasanth
Hi,
SELECT SYS_CONNECT_BY_PATH(name, ' ') "Path" FROM
(SELECT rownum ID ,rownum-1 PID,name FROM (
SELECT 'CHITTA' NAME FROM DUAL UNION SELECT 'RANJAN' FROM DUAL))
where id = (2) -- this should be max Rownum
START WITH pID=0
CONNECT BY PRIOR id = pid
or if it is a table
SELECT SYS_CONNECT_BY_PATH(dname, ' ') "Path" FROM
(SELECT rownum ID ,rownum-1 PID,dname FROM (select * from dept))
where id = (select max(rownum) from dept)
START WITH pID=0
CONNECT BY PRIOR id = pid
Is that OK with you?
Bye
Chitta -
I am facing a funny problem regarding query performance.
If I fire this query through PL/SQL:
Select a.*,b.*
into ..........
from a,b
where a.key between m_min and m_max
and a.col_1 = b.key
This takes a long time to execute. But if I replace m_min and m_max with actual values, then it's very fast,
like
Select a.*,b.*
into ..........
from a,b
where a.key between 100 and 100
and a.col_1 = b.key
Has anybody faced such a problem? Let me know what the cause could be.
Check if your database is running in CHOOSE optimizer mode:
sqlplus> show parameter optimizer_mode
CHOOSE
If so, did you do an analyze of one of the tables?
sqlplus> select num_rows from user_tables where table_name='A'
If this returns a value other then NULL, your table is analyzed.
If so, please make sure that you periodically analyze your tables again (especially after a batch upload)
sqlplus> analyze table a compute statistics; -
Hello,
I have table such that:
SQL> desc sample_table;
Name Null? Type
USER VARCHAR2(30)
TIMESTAMP DATE
ACTION VARCHAR2(100)
In the table I have values like this:
USER TIMESTAMP ACTION
FRED 5/8/2002 8:47:52 AM OPEN
FRED 5/8/2002 8:50:33 AM CLOSE
JOHN 5/8/2002 8:53:57 AM OPEN
ANN 5/8/2002 8:54:17 AM OPEN
JOHN 5/8/2002 8:55:02 AM CLOSE
ANN 5/8/2002 8:58:43 AM CLOSE
ETC....
I am displaying this information in FORMS 6i.
So I have defined the USER field as a character field.
The TIMESTAMP is a date field and I have a format mask = 'DD-MON-YYYY HH24:MI:SS'
The ACTION is another character field.
I also have an 'ON-ERROR' trigger on the TIMESTAMP field to format the date correctly, i.e. if they enter 5/08/2002 it will automatically return as 08-MAY-2002. The format mask will make it 08-MAY-2002 00:00:00.
I wanted to execute a query on the timestamp field. Now if I just enter a date like 08-MAY-2002 I will get no returns. I understand this as it would try to match up the TIMESTAMP field with 08-MAY-2002 00:00:00, which there is none.
I tried to create a block query for 'KEY_EXEQY' such that:
DECLARE
where_cl VARCHAR2 (5000);
date_c1 VARCHAR2 (25);
BEGIN
date_c1 := '''' || TO_CHAR (:sample_table.TIMESTAMP, 'DD-MON-YYYY') || '''';
IF :sample_table.TIMESTAMP IS NOT NULL THEN
where_cl := get_block_property ('sample_table', default_where);
IF where_cl IS NULL THEN
where_cl := 'TRUNC(TIMESTAMP) = ' || date_c1;
ELSE
where_cl := where_cl || ' AND TRUNC(TIMESTAMP) = ' || date_c1;
END IF;
set_block_property ('sample_table', default_where, where_cl);
END IF;
execute_query;
set_block_property ('sample_table', default_where, '');
END;
But when I tried to execute the query by entering 08-MAY-2002 in the timestamp field, I received the error 'Query caused no records to be retrieved.'
I have tried removing the 'ON-ERROR' trigger on the TIMESTAMP field, but it did not change anything.
Since the database time field has the HH:MI:SS how can you query on that field (and have it return the values for that specified date)? How can you also allow it to accept other values for the query (IE: On the form press F7 and then for USER type FRED and for TIMESTAMP enter 08-MAY-2002 then press F8 to query it (should return all records where user is FRED and the TRUNC(TIMESTAMP) = '08-MAY-2002')) and return the desired results.
Anyone have a solution or idea?
Yes, I also tried a similar piece of code for Pre-Query.
I also received an email suggesting:
Present is = date_c1 := '''' || TO_CHAR (:sample_table.TIMESTAMP,
'DD-MON-YYYY') || '''';
Try for = date_c1 := '''' || TO_CHAR (:sample_table.TIMESTAMP,
'DD-MON-YYYY') || '''' || %
I would tend to think this would work, but when I attempted it I still received the same error notice.
For some reason it does not like me replacing the where clause. If I force the where clause in KEY_EXEQRY or the PRE-QUERY it works (i.e.: set_block_property ('sample_table', default_where, 'trunc(timestamp) = '||''''||'08-MAY-2002'||'''') ). Strange how it can work in one case, but when I try to use a variable it just blows up. I wonder if it has anything to do with using the :sample_table.TIMESTAMP field as a source for the variable?
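A side note on the underlying SQL (not Forms-specific; names invented for the sketch): TRUNC(TIMESTAMP) = :d applies a function to the column, which defeats any index on it. A half-open range, column >= day AND column < day + 1, matches the same rows and stays index-friendly:

```python
# Half-open date-range predicate instead of TRUNC(column) = day.
# "user" is renamed usr since it is a reserved word in most dialects.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table audit_log (usr text, ts text, action text);
insert into audit_log values
 ('FRED','2002-05-08 08:47:52','OPEN'),
 ('FRED','2002-05-08 08:50:33','CLOSE'),
 ('JOHN','2002-05-07 23:59:59','OPEN');
""")

rows = con.execute("""
select usr, action from audit_log
where usr = ?
  and ts >= ?                   -- start of the day (inclusive)
  and ts <  date(?, '+1 day')   -- start of the next day (exclusive)
order by ts
""", ('FRED', '2002-05-08', '2002-05-08')).fetchall()
print(rows)
```

The Oracle equivalent of the range would be `TIMESTAMP >= :d AND TIMESTAMP < :d + 1`, which the Forms where-clause could build instead of the TRUNC comparison.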