Query to run select statement conditionally
Want: when either variable is null, run a full report; when both variables are not null, run the report with the specified range.
I have the following query, but keep getting an invalid SQL error. Can you help?
IF (:P1_START_DATE IS NULL or :P1_END_DATE IS NULL)
THEN select empno,ename,job,mgr, hiredate from emp;
else IF (:P1_START_DATE IS NOT NULL and :P1_END_DATE IS NOT NULL)
then select empno,ename,job,mgr, hiredate from emp where hiredate between :P1_START_DATE and :P1_END_DATE;
end if;
Tai
Hi,
The way most people deal with that problem is to always filter on the parameters, but, when a parameter is not passed, make it default to some value that will include all rows. With DATEs, that's easy, because it's easy to specify the earliest and latest DATEs that are possible in Oracle:
Assuming the datatype of :p1_start_date and :p1_end_date is DATE:
SELECT empno
, ename
, job
, mgr
, hiredate
FROM emp
WHERE hiredate BETWEEN NVL ( :P1_START_DATE
                           , TO_DATE ('1', 'J')
                           )
                   AND NVL ( :P1_END_DATE
                           , TO_DATE ('31-Dec-9999', 'DD-Mon-YYYY')
                           )
;
The query above treats each parameter independently. If, say, :p1_start_date is NULL, but :p1_end_date is January 1, 2005, then the query above would look for people hired on or before January 1, 2005. That is, there would be no lower limit on hiredate, because :p1_start_date was NULL, but there would be an upper limit, because :p1_end_date was not NULL.
You're saying that if either parameter is NULL, you want to ignore both of them, not just the one that is NULL.
Here's one way to do that:
SELECT empno
, ename
, job
, mgr
, hiredate
FROM emp
WHERE hiredate BETWEEN NVL ( LEAST ( :P1_START_DATE
                                   , :P1_END_DATE
                                   )
                           , TO_DATE ('1', 'J')
                           )
                   AND NVL ( GREATEST ( :P1_START_DATE
                                      , :P1_END_DATE
                                      )
                           , TO_DATE ('31-Dec-9999', 'DD-Mon-YYYY')
                           )
;
This will also automatically correct the error if someone enters the parameters in the wrong order.
If you're doing this in PL/SQL, and speed is very important, then you may want to have two separate queries (one with a WHERE clause, the other without), as in the example you posted.
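That two-query approach might be sketched like this (the function name and ref-cursor interface are invented for illustration; they are not from the original post):

```sql
-- Sketch: two separate queries behind one interface.
CREATE OR REPLACE FUNCTION get_emp_report (
   p_start_date IN DATE,
   p_end_date   IN DATE
) RETURN SYS_REFCURSOR
IS
   rc SYS_REFCURSOR;
BEGIN
   IF p_start_date IS NULL OR p_end_date IS NULL THEN
      OPEN rc FOR                      -- full report: no WHERE clause
         SELECT empno, ename, job, mgr, hiredate
           FROM emp;
   ELSE
      OPEN rc FOR                      -- restricted report
         SELECT empno, ename, job, mgr, hiredate
           FROM emp
          WHERE hiredate BETWEEN p_start_date AND p_end_date;
   END IF;
   RETURN rc;
END;
/
```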
Edited by: Frank Kulash on Jun 18, 2010 3:29 PM
MScallion's solution is better.
It can be simplified like this:
WHERE :p1_start_date IS NULL
OR :p1_end_date IS NULL
OR hiredate BETWEEN :p1_start_date
AND :p1_end_date
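Plugged into the full statement from the original post, that predicate gives a single report query (a sketch; either bind being NULL makes one of the first two conditions true, so the BETWEEN is skipped and all rows come back):

```sql
SELECT empno, ename, job, mgr, hiredate
  FROM emp
 WHERE :p1_start_date IS NULL
    OR :p1_end_date IS NULL
    OR hiredate BETWEEN :p1_start_date
                    AND :p1_end_date;
```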
Similar Messages
-
Long running select statement and v$session_longops
Oracle Version: 10.2.0.4
I've a long running sql query that takes the estimated 6 minutes to complete and return the result.
While it's running I'd like to observe it into the view v$session_longops.
I altered the session that runs the query with
ALTER SESSION SET timed_statistics=TRUE;
The tables it queries have gathered statistics on them.
However I don't see any rows in the view v$session_longops for the respective SID and serial#. Why is that? What am I missing?
Thank you!
Hi,
Now I understand what you all meant by "loops" here. Yes, the query does nested loops, as one can see from the execution plan. So that could be the reason:
SELECT STATEMENT, GOAL = ALL_ROWS
SORT GROUP BY
CONCATENATION
TABLE ACCESS BY LOCAL INDEX ROWID TABLE_1
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY GLOBAL INDEX ROWID TABLE_2
INDEX RANGE SCAN IPK_t2_CDATE
TABLE ACCESS BY INDEX ROWID TABLE_3
INDEX RANGE SCAN IPK_T3
PARTITION RANGE ALL
INDEX RANGE SCAN IRGP_REGCODE
TABLE ACCESS BY LOCAL INDEX ROWID TABLE_1
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY GLOBAL INDEX ROWID TABLE_2
INDEX RANGE SCAN IPK_t2_STATUS
TABLE ACCESS BY INDEX ROWID TABLE_3
INDEX RANGE SCAN IPK_T3
PARTITION RANGE SINGLE
INDEX RANGE SCAN IRGP_REGCODE
-
Hi,
I want to read the data from aufk and jest table .
while selecting data i have some exclude functions .
Exclusions :
jest-stat = 'I0045' jest-inact = ' '.
jest-stat = '0012' jest-inact = ' '.
jest-stat = '0016' jest-inact = ' '.
I wrote the below code.
DATA : r_stat TYPE RANGE OF jest-stat,
r_inact TYPE RANGE OF jest-inact,
wa_stat LIKE LINE OF r_stat,
wa_inact LIKE LINE OF r_inact.
CLEAR: wa_stat,
wa_inact.
REFRESH: r_stat,
r_inact.
wa_stat-sign = 'E'.
wa_stat-option = 'EQ'.
wa_stat-low = 'I0045'.
APPEND wa_stat TO r_stat.
CLEAR wa_stat.
wa_stat-sign = 'E'.
wa_stat-option = 'EQ'.
wa_stat-low = '0012'.
APPEND wa_stat TO r_stat.
CLEAR wa_stat.
wa_stat-sign = 'E'.
wa_stat-option = 'EQ'.
wa_stat-low = 'I0016'.
wa_stat-high = 'I0016'.
APPEND wa_stat TO r_stat.
CLEAR wa_stat.
wa_inact-sign = 'E'.
wa_inact-option = 'EQ'.
wa_inact-low = ' '.
APPEND wa_inact TO r_inact.
CLEAR wa_inact.
types: begin of ty_aufk,
aufnr type aufk-aufnr,
objnr type aufk-objnr,
stat type jest-stat,
inact type jest-inact,
end of ty_aufk.
data: gt_aufk type table of ty_aufk,
wa_aufk type ty_aufk.
SELECT a~aufnr a~objnr b~stat b~inact INTO TABLE gt_aufk FROM aufk AS a JOIN jest AS b ON a~objnr = b~objnr
WHERE a~werks = '0001'
AND a~auart EQ 'YB02'
AND b~stat IN r_stat
AND b~inact IN r_inact .
The JEST table contains data like:
objnr stat inact
OR000000600000 I0001 X
OR000000600000 I0002
OR000000600000 I0016
I am getting only the first record.
How can I modify my select conditions?
Regards,
Suresh
You can use INTO CORRESPONDING FIELDS OF TABLE <internal table> in the select query:
SELECT aufk~aufnr aufk~objnr jest~stat jest~inact
into corresponding fields of table gt_aufk
FROM aufk inner JOIN jest
ON aufk~objnr = jest~objnr
WHERE aufk~werks = '0001'
AND aufk~auart EQ 'YB02'
AND jest~stat IN r_stat
AND jest~inact IN r_inact .
Edited by: krupa jani on Nov 25, 2008 8:22 AM
-
How to use open query in oracle select statement
hi i have a requirement like this. I need to use the output of a procedure ( pl/sql table ) in developing a report
like ( SELECT * FROM <procedure name >)
Please help me.
You can do it using a classic report, but I can't tell you more without knowing the complete details. If possible, can you post the procedure query you want to use to create the report?
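If the procedure can be rewritten as a pipelined table function, its output can be queried directly with SELECT, which a classic report can then use. A sketch (type, function, and column names are invented):

```sql
CREATE TYPE emp_row AS OBJECT (empno NUMBER, ename VARCHAR2(30));
/
CREATE TYPE emp_tab AS TABLE OF emp_row;
/
CREATE OR REPLACE FUNCTION get_emps RETURN emp_tab PIPELINED
IS
BEGIN
   FOR r IN (SELECT empno, ename FROM emp) LOOP
      PIPE ROW (emp_row (r.empno, r.ename));
   END LOOP;
   RETURN;
END;
/
-- The function can now stand in for a table in the report query:
SELECT * FROM TABLE (get_emps);
```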
Thanks
Lakshmi
-
Can I query with a select statement in the from statement
I'm working with an application that creates a MASTERTABLE that keeps track of DATATABLEs as it creates them. These data tables are only allowed to be so big, so the data tables are time stamped. What I need to do is to be able to query the MASTERTABLE to find out what the latest datatable is and then query the specific datatable.
I've tried
select count(*) from (select max(timestamp) from mastertable)
and it always comes back with a count of 1.
Is this possible, or is there a better way?
Well, I'm trying to understand... and if I understand, then you need something dynamic. I did create the following example, of course not exactly as yours (I don't have your data). Each employee has a table with his name, containing the sal history, and I want to query that table starting from the actual sal:
SCOTT@db102 SQL> select ename from emp where sal= 3000;
ENAME
SCOTT
FORD
SCOTT@db102 SQL> select * from scott;
SAL_DATE SAL
01-JAN-85 2500
01-JAN-95 2750
01-JAN-05 3000
SCOTT@db102 SQL> set serveroutput on
SCOTT@db102 SQL> declare
2 v_rc sys_refcursor;
3 tname varchar2(30);
4 v_date date;
5 v_sal number;
6 begin
7 select max(ename) into tname
8 from emp
9 where sal = 3000;
10 open v_rc for 'select * from '||tname;
11 loop
12 fetch v_rc into v_date,v_sal;
13 exit when v_rc%notfound;
14 dbms_output.put_line(v_date||' '||v_sal);
15 end loop;
16* end;
SCOTT@db102 SQL> /
01-JAN-85 2500
01-JAN-95 2750
01-JAN-05 3000
PL/SQL procedure successfully completed.
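The same pattern can be applied to the original question: look up the newest data table's name in MASTERTABLE, then open a cursor against it. A sketch; the column holding the data-table name is not given in the post, so table_name below is a hypothetical name:

```sql
DECLARE
   v_rc    SYS_REFCURSOR;
   tname   VARCHAR2(30);
   v_cnt   NUMBER;
BEGIN
   SELECT table_name                  -- hypothetical column name
     INTO tname
     FROM mastertable
    WHERE timestamp = (SELECT MAX (timestamp) FROM mastertable);

   OPEN v_rc FOR 'SELECT COUNT(*) FROM ' || tname;
   FETCH v_rc INTO v_cnt;
   CLOSE v_rc;
   DBMS_OUTPUT.put_line (tname || ': ' || v_cnt || ' rows');
END;
/
```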
SCOTT@db102 SQL>
-
Dear All,
I have a small query on the Select Statement. If there are 2 identical rows and if i am retrieving them using the Select Statement, then will the select statement retrieves both the rows which are identical or only the first possible occurence? Pls help me out in this.
Thanks,
Sirisha.
Hi,
That depends on the statement you use. With SELECT you can retrieve all the records that are identical in one or more fields. For that, you need to put the output INTO A TABLE.
Ex. 1
data : int_ekko type table of ekko with header line,
fs_ekko type ekko.
select * from ekko into table int_ekko where ebeln = '6361003191'.
Here, int_ekko is an internal table containing all records whose EBELN = '6361003191'.
2) If you use 'SELECT SINGLE', then it will retrieve only the first record out of all the records that satisfy the condition EBELN = '6361003191'.
Ex 2:
select single * from ekko into fs_ekko where ebeln = '6361003191'.
'fs_ekko' is not a table, just a structure, so it contains only one record.
Hope it helps you.
Kindly reward points if helpful.
Regards,
Shanthi.
Edited by: Shanthi on Mar 4, 2008 8:17 AM
-
Latency is very high when SELECT statements are running for LONG
We are a simple DOWN STREAM streams replication environment ( Archive log is shipped from source , CAPTURE & APPLY are running on destination DB).
Whenever there is a long running SELECT statement on TARGET the latency become very high.
SGA_MAX_SIZE = 8GB
STREAMS_POOL_SIZE=2GB
APPLY parallelism = 4
How can I resolve this issue?
Is the log file shipped but not acknowledged? -- NO
Is the log file not shipped? -- It is shipped
Is the log file acknowledged but not applied? -- Yes... But the Apply process was not stopped. It may be slow or waiting for something?
It is a 10g environment. I will run AWR, but what should I look for in AWR?
-
How to use column name as variable in select statement
hi,
I want to write a SQL query that uses a variable as the column name in the select statement, but it's not working. Please guide me on how I can do this.
select :m1 from table1;
regards
Hi,
Is this what you want..
SQL> select &m1 from dept;
Enter value for m1: deptno
old 1: select &m1 from dept
new 1: select deptno from dept
DEPTNO
10
20
30
40
SQL> select &m1 from dept;
Enter value for m1: dname
old 1: select &m1 from dept
new 1: select dname from dept
DNAME
ACCOUNTING
RESEARCH
SALES
OPERATIONS
SQL> select &&m1 from dept;
Enter value for m1: loc
old 1: select &&m1 from dept
new 1: select loc from dept
LOC
NEW YORK
DALLAS
CHICAGO
BOSTON
SQL> select &&m1 from dept;
old 1: select &&m1 from dept
new 1: select loc from dept
LOC
NEW YORK
DALLAS
CHICAGO
BOSTON
If you use a single '&', then each time you run the query it will ask for a new value.
But if you use a double '&&', the value of m1 will persist across the session.
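Note that &m1 is a SQL*Plus substitution variable: it is pasted into the statement text before parsing, which is why it can name a column. Inside PL/SQL, where substitution variables don't exist, the equivalent is dynamic SQL. A sketch (the column name is just an example):

```sql
DECLARE
   v_col  VARCHAR2(30) := 'deptno';   -- column name chosen at run time
   v_rc   SYS_REFCURSOR;
   v_val  VARCHAR2(100);
BEGIN
   -- The column name is concatenated into the statement text,
   -- exactly what substitution does in SQL*Plus:
   OPEN v_rc FOR 'SELECT ' || v_col || ' FROM dept';
   LOOP
      FETCH v_rc INTO v_val;
      EXIT WHEN v_rc%NOTFOUND;
      DBMS_OUTPUT.put_line (v_val);
   END LOOP;
   CLOSE v_rc;
END;
/
```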
Twinkle -
Tracking select statement on a table
Hi Guys,
I want to be able to keep track of the person (user) who runs a select statement on a table to retrieve a specific person.
For example: I have a table where I store a list of customers. Several people have the ability to run select statements on this table. I want to know (store) who ran a select statement on this table to retrieve customer "Bob".
How can I do this. Is this possible in oracle??
Thanks
Thanks for your reply.
we are using oracle 10g
I have looked at statement auditing, but that applies to the whole table, meaning everyone who runs a select statement on that table will be reported. I don't want that.
I want everyone who runs a select statement to get a specific person from that table, for example:
select last_name from customers where customers.first_name = 'BOB'
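Row-level auditing of SELECTs like this is what fine-grained auditing (DBMS_FGA), available in 10g, is designed for. A minimal sketch; the schema name and policy name are invented:

```sql
BEGIN
   DBMS_FGA.add_policy (
      object_schema   => 'APP',                     -- hypothetical owner
      object_name     => 'CUSTOMERS',
      policy_name     => 'AUDIT_BOB_LOOKUPS',
      audit_condition => 'first_name = ''BOB''',    -- fires only for this row
      statement_types => 'SELECT');
END;
/
-- Matching statements (with the user and the SQL text) then appear in
-- DBA_FGA_AUDIT_TRAIL.
```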
Thanks
-
Performance in the select statement
Hi All,
(select a.siteid siteid,a.bpaadd_0 bpaadd_0,a.bpanum_0 bpanum_0,
case when a.bpaaddlig_0 = '' then '-' else a.bpaaddlig_0 end
address1,
case when a.bpaaddlig_1 = '' then '-' else a.bpaaddlig_1 end
address2,
case when a.bpaaddlig_2 = '' then '-' else a.bpaaddlig_2 end
address3,
case when a.bpades_0 = '' then '-' else a.bpades_0 end place,
case when a.cty_0 = '' then '-' else a.cty_0 end city,
case when a.poscod_0 = '' then '-' else a.poscod_0 end
pincode,
case when b.cntnam_0 = '' then '-' else b.cntnam_0 end
contactname,
case when b.fax_0 = '' then '-' else b.fax_0 end fax,
case when b.MOBTEL_0 = '' then '-' else b.MOBTEL_0 end mobile,
case when b.TEL_0 = '' then '-' else b.TEL_0 end phone,
case when b.web_0 = '' then '-' else b.web_0 end website,
c.zinvcty_0 zcity,c.bpainv_0 bpainv_0,c.bpcnum_0 bpcnum_0
from lbcreport.bpaddress@info a,lbcreport.contact@info b
,lbcreport.bpcustomer@info c
where (a.bpanum_0=b.bpanum_0) and (a.cty_0 = c.zinvcty_0) and
(a.siteid = c.siteid))
Is there any performance degradation in the above query? Can I proceed with the same query, or is there another way to increase its speed?
Also, are this many CASE expressions allowed in one select statement?
Please could anybody help me in this?
Thanks in advance
bye
Srikavi
Change your query as follows. (Note that in Oracle a zero-length string is NULL, so a comparison like a.bpaaddlig_0 = '' can never be true; NVL expresses the intended defaulting directly.)
(select
a.siteid siteid,
a.bpaadd_0 bpaadd_0,
a.bpanum_0 bpanum_0,
nvl(a.bpaaddlig_0, '-') address1,
nvl(a.bpaaddlig_1,'-' ) address2,
nvl(a.bpaaddlig_2,'-' ) address3,
nvl(a.bpades_0,'-' ) place,
nvl(a.cty_0,'-' ) city,
nvl(a.poscod_0,'-' ) pincode,
nvl(b.cntnam_0,'-' ) contactname,
nvl(b.fax_0,'-' ) fax,
nvl(b.MOBTEL_0,'-' ) mobile,
nvl(b.TEL_0,'-' ) phone,
nvl(b.web_0,'-' ) website,
c.zinvcty_0 zcity,c.bpainv_0 bpainv_0,c.bpcnum_0 bpcnum_0
from
lbcreport.bpaddress@info a,
lbcreport.contact@info b,
lbcreport.bpcustomer@info c
where
(a.bpanum_0=b.bpanum_0) and
(a.cty_0 = c.zinvcty_0) and
(a.siteid = c.siteid))
/
For performance, check the execution plan of the query; see also BluShadow's post.
Regards
Singh
-
Adhoc Query : Error during selection; check the selection conditions
Hi
We have a report set-up and which we want to run using our adhoc query report tcode S_PH0_48000513
The report has a few different selection criteria to look at all Actions infotype screen data in the system for employees in specific personnel areas. There is also a criterion that allows us to paste in specific employee numbers we are interested in. The issue I am facing is that above about 3000 IDs, the system returns a message as soon as I click the Output button to run the report, which states:
Error during selection; check the selection conditions
Message no. PAIS206
I am not sure why this is happening. The selection criteria are fine and the other day I ran the report and I experienced no issues. The report ran successfully. Now though, if I try and paste in all the ids I am interested in (about 8000) I get this message straightaway.
Can anything be done to overcome this issue?
Any advice would be much appreciated.
Nicola
Hi
The message in full is:
Error during selection; check the selection conditions
Message no. PAIS206
Diagnosis
A runtime error occurred during dynamic selection.
System response
The runtime error will be caught; no short dump will be created. This error should not occur as a rule. However, very large select statements may trigger the runtime error SAPSQL_STMNT_TOO_LARGE or DBIF_RSQL_INVALID_RSQL. There is no way to prevent this happening. In this case, the error can only be caught.
Procedure
Check the selection conditions to see whether the error was caused because the option "Import from text file" included too many objects in the "Multiple selection" dialog. If this is so, you must limit the number of individual values.
-
SELECT query sometimes runs extremely slowly - UNDO question
Hi,
The Background
We have a subpartitioned table:
CREATE TABLE TAB_A
(
RUN_ID NUMBER NOT NULL,
COB_DATE DATE NOT NULL,
PARTITION_KEY NUMBER NOT NULL,
DATA_TYPE VARCHAR2(10),
START_DATE DATE,
END_DATE DATE,
VALUE NUMBER,
HOLDING_DATE DATE,
VALUE_CURRENCY VARCHAR2(3),
NAME VARCHAR2(60)
)
PARTITION BY RANGE (COB_DATE)
SUBPARTITION BY LIST (PARTITION_KEY)
SUBPARTITION TEMPLATE
(SUBPARTITION GROUP1 VALUES (1) TABLESPACE BROIL_LARGE_DATA,
SUBPARTITION GROUP2 VALUES (2) TABLESPACE BROIL_LARGE_DATA,
SUBPARTITION GROUP3 VALUES (3) TABLESPACE BROIL_LARGE_DATA,
SUBPARTITION GROUP4 VALUES (4) TABLESPACE BROIL_LARGE_DATA,
SUBPARTITION GROUP5 VALUES (DEFAULT) TABLESPACE BROIL_LARGE_DATA
)
(
PARTITION PARTNO_03 VALUES LESS THAN
(TO_DATE(' 2008-07-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
( SUBPARTITION PARTNO_03_GROUP1 VALUES (1),
SUBPARTITION PARTNO_03_GROUP2 VALUES (2),
SUBPARTITION PARTNO_03_GROUP3 VALUES (3),
SUBPARTITION PARTNO_03_GROUP4 VALUES (4),
SUBPARTITION PARTNO_03_GROUP5 VALUES (DEFAULT) ),
PARTITION PARTNO_01 VALUES LESS THAN
(TO_DATE(' 2008-07-23 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
( SUBPARTITION PARTNO_01_GROUP1 VALUES (1),
SUBPARTITION PARTNO_01_GROUP2 VALUES (2),
SUBPARTITION PARTNO_01_GROUP3 VALUES (3),
SUBPARTITION PARTNO_01_GROUP4 VALUES (4),
SUBPARTITION PARTNO_01_GROUP5 VALUES (DEFAULT) ),
PARTITION PARTNO_02 VALUES LESS THAN
(TO_DATE(' 2008-07-24 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
( SUBPARTITION PARTNO_02_GROUP1 VALUES (1),
SUBPARTITION PARTNO_02_GROUP2 VALUES (2),
SUBPARTITION PARTNO_02_GROUP3 VALUES (3),
SUBPARTITION PARTNO_02_GROUP4 VALUES (4),
SUBPARTITION PARTNO_02_GROUP5 VALUES (DEFAULT) ),
PARTITION PARTNO_OTHER VALUES LESS THAN (MAXVALUE)
( SUBPARTITION PARTNO_OTHER_GROUP1 VALUES (1),
SUBPARTITION PARTNO_OTHER_GROUP2 VALUES (2),
SUBPARTITION PARTNO_OTHER_GROUP3 VALUES (3),
SUBPARTITION PARTNO_OTHER_GROUP4 VALUES (4),
SUBPARTITION PARTNO_OTHER_GROUP5 VALUES (DEFAULT) )
);
CREATE INDEX TAB_A_IDX ON TAB_A
(RUN_ID, COB_DATE, PARTITION_KEY, DATA_TYPE, VALUE_CURRENCY)
LOCAL;
The table is subpartitioned as each partition typically has 135 million rows in it.
Overnight, several runs occur that load data into this table (the partitions are rolled over daily, such that the oldest one is dropped and a new one created. Stats are exported from the oldest partition prior to being dropped and imported to the newly created partition. The oldest partition, once the new partition has been created, has its stats analyzed).
Data loads can load anything from 200 rows to 20million rows into the table, with most of the rows ending up in the Default subpartition. Most of the runs that load a larger set of rows have been set up to add into one of the other 4 subpartitions.
We then run a process to extract data from the table that gets put into a file. This is a two step process (due to Oracle completely picking the wrong execution plan and us not being able to rewrite the query in such a way that it'll pick the right path up by itself!):
1. Identify all the unique currencies
2. Update the (dynamic) sql query to add a CASE clause into the select clause based on the currencies identified in step 1, and run the query.
Step 1 uses this query:
SELECT DISTINCT value_currency
FROM tab_a
WHERE run_id = :b3 AND cob_date = :b2 AND partition_key = :b1;
It usually finishes within 20 minutes.
The problem
Occasionally, this simple query runs over 20 minutes (I don't think we've ever seen it run to completion on these occurrences, and I've certainly seen it take over 3 hours before we killed it, for a run where it would normally complete in 2 or 3 minutes), which we've now come to recognise as it "being stuck". The execution path it takes is the same as when it runs normally, there are no unusual wait events, and no unusual wait times. All in all, it looks "normal" except for the fact that it's taking forever (tongue-in-cheek!) to run. When we kill and rerun, the execution time returns to normal. (We've sent system state dumps to Oracle to be analyzed, and they came back with "The database is doing stuff, can't see anything wrong")
We've never been able to come up with any explanation before, but the same run has failed consistently for the last three days, so I managed to wangle a DBA to help me investigate it further.
After looking through the ASH reports, he proposed a theory that the problem was it was having to go to the UNDO to retrieve results, and that this could explain the massive run time of the query.
I looked at the runs and agreed that UNDO might have been used in that particular instance of the query, as another run had loaded data into the table at the same time it was being read.
However, another one of the problematic runs had not had any inserts (or updates/deletes - they don't happen in our process) during the reading of the data, and yet it had taken a long time too. The ASH report showed that it too had read from UNDO.
My question
I understand from this link: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:44798632736844 about how Selects may generate REDO, but I don't see why UNDO would possibly be generated by a select. Does anyone know of a situation where a select would end up looking through UNDO, even though no inserts/updates/deletes had taken place on the table/index it was looking at?
Also, does the theory that having to look through the UNDO (currently UNDO ts is 50000MB, in case that's relevant) causing queries to take an extremely long time hold water? We're on 10.2.0.3
Message was edited by:
Boneist
Ok, having carried on searching t'internet, I can see that it's maybe Delayed Block Cleanout that's causing the UNDO to be referenced. Even taking that into account, I can't see why going back to the UNDO to be told to commit the change to disk could slow the query down that much? What waits would this show up as, if any?
Since you're on 10.2 and I understand that you use
the statistics of the "previous" content for the
partition that you are now loading (am I right?) you
have to be very careful with the 10g optimizer. If
the statistics tell the optimizer that the values
that you're querying for are sufficiently
out-of-range this might change the execution plan
because it estimates that only a few or no rows will
be returned. So if the old statistics do not fit the
new data loaded in terms of column min/max values
then this could be a valid reason for different
execution plans for some executions (depending on the
statistics of the old partition and the current
values used). Your RUN_ID is a good candidate I guess
as it could be ever increasing... If the max value of
the old partition is sufficiently different from the
current value this might be the cause.
Do you actually use bind variables for that
particular statement? Then we have in addition bind
variable peeking and potentially statement re-using
to consider.
I would prefer literals instead of bind variables or
do you encounter parse issues?Yes, that query runs as part of a procedure and uses bind variables (well, pl/sql variables!). We are aware that because of the histograms that get produced, the stats are not as good as we'd like. I'm wondering if analyzing the partition would be the best way to go, only that means analysing the entire partition, not just the subpartition, I guess? But if other inserts are taking place at the same time, having several analyzes taking place won't help the speed of inserting, or won't it matter?
Do you have the "default" 10g statistics gathering
job active? This could also explain why you get
different execution plans at different execution
times. If the job determines that the statistics of
some of your partitions are stale then it will
attempt to gather statistics even if you already have
statistics imported/generated.
No, we turned that off. The stats do not change when we rerun the query - we guess there is some sort of contention taking place, possibly when reading from the UNDO, although I would expect that to show up in the waits - it doesn't appear to though.
Data loads can load anything from 200 rows to
20million rows into the table, with most of therows
ending up in the Default subpartition. Most of the
runs that load a larger set of rows have been setup
to add into one of the other 4 subpartitions.
I'm not sure about above description. Do most of the
rows end up in the default subpartition (most rows in
default partition) or do the larger sets load into
the other 4 ones... (most rows in the non-default
partitions)?
Sorry, I mean that the loads that load say 20 million+ rows at a time have a specified subpartition to go into (defined via some config. We had to make up a "partition key" in order to do this, as there is nothing in the data that lends itself to the subpartition list, unfortunately - the process determines which subpartition to load to/extract from via a config table), but this applies to not many of the runs. So, the majority of the runs (with fewer rows) go into the default partition.
The query itself scans the index, not the table, doing partition pruning, etc.
The same SQL_ID doesn't mean it's the same plan. So
are you 100% sure that the plans where the same? I
could imagine (see above) that the execution plans
might be different.
The DBA looking at it said that the plans were the same, and I have no reason to doubt him. Also, the session browser in Toad shows the same explain plan in the "Current Statement" tab for normal and abnormal runs, as follows:
Time IO Cost CPU Cost Cardinality Bytes Cost Plan
6 SELECT STATEMENT ALL_ROWS
4 1 4 4 4 82,406 4 1 4 20 4 6 4 HASH UNIQUE
3 1 3 4 3 28,686 3 1 3 20 3 5 3 PARTITION RANGE SINGLE Partition #: 2
2 1 2 4 2 28,686 2 1 2 20 2 5 2 PARTITION LIST SINGLE Partition #: 3
1 1 1 4 1 28,686 1 1 1 20 1 5 1 INDEX RANGE SCAN INDEX TAB_A_IDX Access Predicates: "RUN_ID"=:B3 AND "COB_DATE"=:B2 AND "PARTITION_KEY"=:B1 Partition #: 3
How do you perform your INSERTs? Are overlapping
loads and queries actually working on the same
(sub-)partition of the table or in different ones? Do
you use direct-path inserts or parallel dml?
Direct-path inserts as far I know create "clean"
blocks that do not need a delayed block cleanout.
We insert using a select from an external table - there's a parallel hint in there, but I think that is often ignored (at least, I've never seen any hint of sessions running in parallel when looking at the session browser, and I've seen it happen in one of our dev databases, so...). As mentioned above, rows could get inserted into different partitions, although the majority of runs load into the default subpartition. In practice, I don't think more than 3 or 4 loads take place at the same time.
If you loading and querying different partitions then
your queries shouldn't have to check for UNDO except
for the delayed block cleanout case.
You should check at least two important things:
- Are the execution plans different for the slow and
normal executions?
- Get the session statistics (logical I/Os, redo
generated) for the normal and slow ones in order to
see and compare the amount of work that they
generate, and to find out how much redo your query
potentially generated due to delayed block cleanout.
It's difficult to do a direct comparison that's exact, due to other work going on in the database, and the abnormal query taking far longer than normal, but here is the ASH comparison between a normal run (1st) and our abnormal run (2nd) (both taken over 90 mins, and the first run may well include other runs that use the same query in the results):
Exec Time of DB Exec Time (ms) #Exec/sec CPU Time (ms) Physical Reads / Exec #Rows Processed
Time / Exec (DB Time) / Exec / Exec
SQL Id 1st 2nd Diff 1st 2nd 1st 2nd 1st 2nd 1st 2nd 1st 2nd Multiple Plans SQL Text
gpgaxqgnssnvt 3.54 15.76 12.23 223,751 1,720,297 0.00 0.00 11,127 49,095 42,333.00 176,565.00 2.67 4.00 No SELECT DISTINCT VALUE_CURRENCY...
-
1 Conditional report based on 3 select lists/ 3 different select statements
I have made 1 report based on the three select lists. The report is displayed in the center of the page. The user needs to fill them in order, the select lists are:
Selectlist:
1. P1_LOVPG - Null is allowed, but is only appearing at top with a label of Productgroup
2. P1_LOVSG - Null is allowed, but is only appearing at top with a label of Subgroup
3. P1_LOVMA - Null is allowed, but is only appearing at top with a label of Manufacturer
LOVPG contains a distinct collect of the ProductGroups
QUERY LOV = select distinct pg from X
LOVSG contains a distinct collect of the SubGroups inside the selected PG.list
QUERY LOV = select distinct sg from X where pg = :P1_LOVPG
LOVMA contains a distinct collect of the Manufacturers inside the selected SG.lst
QUERY LOV = select distinct ma from X where sg = :P1_LOVSG
Based on the the selected items the user would see the following:
Table X
PG SG MA ART
A-----X----M---1
A-----X----N---2
A-----Y----M---3
A-----X----M---4
B-----X----M---5
B-----Y----N---6
B-----Z----O---7
Seletion 1 PG = A -> selects PG A in select list result, User sees:
Report A
PG SG MA ART
A-----X----M---1
A-----X----N---2
A-----Y----M---3
A-----X----M---4
Query would be: select * from X where PG = :P1_LOVPG
Selection 2, user still sees the above, can only select from the SG select list NULL, X, Y. User needs to choose between X or Y value. He picks X, he sees:
Report B
PG SG MA ART
A-----X----M---1
A-----X----N---2
A-----X----M---4
Query would be: select * from X where PG = :P1_LOVPG and SG = :P1_LOVSG
Selection 3, user still sees selection 2 on his screen, can only select from the MA list bewteen NULL, M or N, user needs to choose between M or N. He picks M, he sees:
Report C
PG SG MA ART
A-----X----M---1
A-----X----M---4
Query would be: select * from X where PG = :P1_LOVPG and SG = :P1_LOVSG and MA = :P1_LOVMA
As you can see, the query changes as the user goes deeper into the structure. It is a simple if-then-else system where the query changes. How do I set this up in htmldb?
(I've read something about Oracle's SQL and its DECODE function, but can they be used with changing select statements?)
Are you sure your data meets the JOIN conditions?
(I've read something about Oracle's SQL and it's decode function, but can they be used with changing select statements?)are you sure your data meets the JOIN conditions?
You can make a quick check.. only example...
select single * from KONP into ls_konp
where knumh = P_knumh.
if sy-subrc eq 0.
select * from A905 into table lt_a905 where
kappl in so_kappl
and kschl in so_kschl
and VKORG in so_vkorg
and vtweg in so_vtweg
and kondm in so_kondm
and wkreg in so_wkreg
and knumh = ls_konp-knumh.
if sy-subrc eq 0.
select * from A919 into table lt_a919
for all entries in lt_a905
where kappl = lt_a905-kappl
and kschl = lt_a905-kschl
and knumh = lt_a905-knumh.
endif.
endif.
-
Using plsql tables in select statement of report query
Hi
Does anyone have experience using a PL/SQL table in the select statement of a report? In other words, how do I run a report using flat-file (xx.txt) information - for example, read 10 records from the flat file, use those 10 records in the report, and produce PDF files?
thanks in advance
suresh
hi,
you can use the UTL_FILE package to do that, using a ref cursor query in the data model, and you can use this code to read data from a flat file:
declare
ur_file utl_file.file_type;
my_result varchar2(250);
begin
ur_file := UTL_FILE.FOPEN ('&directory', '&filename', 'r') ;
utl_file.get_line(ur_file, my_result);
dbms_output.put_line(my_result);
utl_file.fclose(ur_file);
end;
make sure you have an entry in your init.ora saying:
utl_file_dir = '\your directory where your files reside'
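An alternative to UTL_FILE worth considering: define the flat file as an external table, and the report query can then SELECT from it directly. A sketch (the directory path, table name, and single-column layout are assumptions):

```sql
CREATE DIRECTORY data_dir AS '/your/directory';

CREATE TABLE flat_file_ext (
   line  VARCHAR2(250)
)
ORGANIZATION EXTERNAL (
   TYPE oracle_loader
   DEFAULT DIRECTORY data_dir
   LOCATION ('xx.txt')
);

-- Each line of xx.txt now comes back as a row:
SELECT * FROM flat_file_ext;
```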
cheers!
[email protected] -
Slow query results for simple select statement on Exadata
I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same: 10k rows every 6 seconds or so.
I ran an execution plan; it returns just this:
Plan
SELECT STATEMENT ALL_ROWSCost: 136,019 Bytes: 4,954,594,096 Cardinality: 33,935,576
1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI Cost: 136,019 Bytes: 4,954,594,096 Cardinality: 33,935,576
I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before; I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
Edited by: k1ng87 on Apr 24, 2013 7:58 AM
k1ng87 wrote:
I've noticed the same querying speed using Toad (export to CSV).
That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), then letting Toad format the data into CSV (adding a little processing bottleneck), and then writing the data to another hard disk (more disk I/O = more bottleneck).
I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?
Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
Is that even possible, as our system resides off site, so the traffic is going through multiple networks?
Ouch... yes, that could certainly be responsible.
I don't think it's the network though, because when I run both at the same time, they both still query at about 10k rows every 6 seconds.
I don't think your performance measuring is accurate. What happens if you actually do the cube in Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)?
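If the time really is going into the database rather than the network or the client, 11g's Real-Time SQL Monitoring is a more direct way to see where a long-running statement spends its time than timing a CSV export. A sketch (it requires the Tuning Pack, and the sql_id must be substituted):

```sql
-- Locate the monitored execution of the statement...
SELECT sql_id, status, elapsed_time
  FROM v$sql_monitor
 WHERE sql_text LIKE 'SELECT%DS3R_FH_1XRTT_FA_LVL_KPI%';

-- ...then pull a per-step activity report for it:
SELECT DBMS_SQLTUNE.report_sql_monitor (
          sql_id => '&sql_id',        -- substitute the id found above
          type   => 'TEXT')
  FROM dual;
```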