Need to fine tune Query ....
Hi...
SELECT
a.id, a.fname, a.lname
FROM
a inner join b ON a.id = b.id
WHERE
a.del_status = 'N' AND a.type = 2
GROUP BY
a.id, a.name, a.lname
If I run this query, it takes one second.
SELECT
a.id, a.fname, a.lname
FROM
a inner join b ON a.id = b.id
WHERE
a.del_status = 'N' AND a.type = 2 and b.status =1
GROUP BY
a.id, a.name, a.lname
If I add the b.status filter, it takes 7 minutes.
I can't find the reason.
This is the explain plan for the second query:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 73 92
SORT GROUP BY 73 3 K 92
NESTED LOOPS 73 3 K 88
TABLE ACCESS FULL NOTIFY_SOURCE_DETAIL 73 1 K 15
TABLE ACCESS BY INDEX ROWID MEMBER 1 20 1
INDEX UNIQUE SCAN I_MEMBER_2 2
Thanks,
Hi
If you have an index on b.id then Oracle does not have to read the actual table data in the first query.
But when you refer to b.status, Oracle has to retrieve each row and check that column.
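One hedged sketch of what that means in practice (the table and column names are from the question; the index name is made up, and this assumes b currently has a single-column index on id): extending the index to also cover status lets Oracle evaluate the filter from the index alone, without a table access per row.

```sql
-- Hypothetical sketch, not tested against the poster's schema.
-- A "covering" index holding both the join column and the filter column:
CREATE INDEX i_b_id_status ON b (id, status);

-- The slow query can then resolve both b.id and b.status from the index:
SELECT a.id, a.fname, a.lname
FROM   a INNER JOIN b ON a.id = b.id
WHERE  a.del_status = 'N' AND a.type = 2 AND b.status = 1
GROUP BY a.id, a.fname, a.lname;
```

Whether the optimizer actually chooses this index depends on statistics, so compare the explain plans before and after.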
Please supply BOTH explain plans.
regards
Mette
Similar Messages
-
Hi All,
I need to fine-tune my SELECT query. I am giving a sample query which is a model of my actual query.
Sample Tables: Employee, Manager, Department
Select a.employee_name, a.employee_age, decode(a.employee_status,'Y','Promoted','Still in Progress'),
NVL(b.employee_details, 0), c.department_name
from employee a, manager b, department c
where a.employee_id = b.employee_id(+)
and a.employee_id = c.employee_id(+)
and c.department_status in ('Y','A')
order by a.employee_joining_date
This is a sample query. My actual query, which needs fine-tuning, has a lot of decode and nvl functions. This query is passed to Oracle as a string from Java. It takes 2 seconds but it should be taking only a fraction of a second.
What shall I do?
Shall I create materialized views, or a function in Oracle with a SYS_REFCURSOR return value for the Java code? Does a function-based index have to be used in the WHERE clause only? If so, what should I do to minimise the decode and nvl functions? Please help me.
Select a.employee_name,a.employee_age,decode(a.employee_status,'Y','Promoted','Still in Progress'),
NVL(b.employee_details , 0),c.department_name from employee a, manager b , department c
where a.employee_id =b.employee_id(+) and a.employee_id = c.employee_id(+) and c.department_status
in ('Y','A') order by a.employee_joining_date
I'm wondering why an "employee_details" field is 0 - doesn't make much sense to me.
Also, the (+) syntax is deprecated - use standard ANSI join syntax instead (Mr. Google is your friend).
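For illustration, the sample query rewritten with ANSI join syntax might look like the sketch below (table and column names are taken from the question; the IN list is assumed to be ('Y','A')):

```sql
SELECT a.employee_name,
       a.employee_age,
       DECODE(a.employee_status, 'Y', 'Promoted', 'Still in Progress'),
       NVL(b.employee_details, 0),
       c.department_name
FROM   employee a
       LEFT OUTER JOIN manager    b ON a.employee_id = b.employee_id
       LEFT OUTER JOIN department c ON a.employee_id = c.employee_id
-- Note: a WHERE filter on the outer-joined table c effectively turns
-- that outer join back into an inner join, here as in the original.
WHERE  c.department_status IN ('Y', 'A')
ORDER  BY a.employee_joining_date;
```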
This is a sample query. My query, which should be fine-tuned, has a lot of decode and nvl functions.
This query is passed to Oracle as a string from Java. It takes 2 seconds but it should be taking only a fraction of a second.
Why should it take only a fraction of a second? How do you know this for a fact? TKProf should be
able to tell you what's happening.
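One common way to get a TKProf-ready trace is sketched below (assuming you have ALTER SESSION privilege; the identifier string is made up, and the trace file lands in the server's diagnostic/trace directory):

```sql
ALTER SESSION SET tracefile_identifier = 'slow_query';
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... run the slow statement here ...
ALTER SESSION SET events '10046 trace name context off';
-- Then, on the database server:
--   tkprof <trace_file>.trc report.txt sys=no
```

Level 8 includes wait events, which usually matter more for tuning than the bare execution counts.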
What shall I do?
Shall I create materialized views or a function in Oracle with a SYS_REFCURSOR return value for the Java code?
Does a function-based index have to be used in the WHERE clause only? If so, what should I do to minimise
the decode and nvl functions? Please help me
Sounds to me like you're thrashing around desperately in search of a "fancy" solution.
Read some performance books on Oracle - you don't mention your version of Oracle
or your OS, so it's difficult to help with that.
Christopher Lawson's book is an excellent and very readable intro to Oracle performance.
Also, Oracle Performance 101 - as well as anything by Tom Kyte. More advanced stuff is
available from Jonathan Lewis and Christian Antognini. Also, take a look at the FAQ for
this group (as recommended by the other poster) and don't forget the docco*
or (short version) RTFM*.
HTH,
Paul... -
Hi,
I have the two queries below. The 2nd query takes more time and gives a timeout error. Can anybody tell me how to fine-tune the query below? Thanks.
1st Query.
SELECT EKET~EBELN EKET~EBELP EKET~ETENR EKET~EINDT
EKET~MENGE EKET~WEMNG
INTO TABLE I_EKET
FROM EKET
WHERE EKET~MENGE <> EKET~WEMNG
AND
EKET~EINDT IN S_EINDT.
DESCRIBE TABLE I_EKET LINES V_ZLINES.
IF V_ZLINES > 0.
2nd Query.
SELECT EKKO~EBELN EKKO~AEDAT EKKO~LIFNR EKPO~EBELP EKPO~MATNR
EKPO~WERKS
EKPO~LOEKZ EKPO~ELIKZ EKPO~TXZ01 EKPO~NETPR LFA1~NAME1
INTO TABLE I_PODEL
FROM EKKO
INNER JOIN EKPO ON EKKO~EBELN = EKPO~EBELN
INNER JOIN LFA1 ON EKKO~LIFNR = LFA1~LIFNR
FOR ALL ENTRIES IN I_EKET
WHERE EKKO~EBELN = I_EKET-EBELN AND
EKPO~EBELP = I_EKET-EBELP AND
EKPO~MATNR IN S_MATNR AND
EKPO~WERKS IN S_WERKS AND
EKPO~WERKS NE 'W001' AND
EKKO~EKORG = P_EKORG AND
EKKO~LIFNR IN S_LIFNR AND
EKKO~LOEKZ NE 'X' AND
EKPO~LOEKZ NE 'S' AND
EKPO~ELIKZ NE 'X' AND
EKPO~LOEKZ NE 'L' AND
EKKO~AEDAT IN S_AEDAT.
ELSE.
WRITE 'No POs found for the selection criteria!'.
ENDIF.
Not the right forum to ask this question.
VJ -
Hi, I would like to ask the experts here: how can I fine-tune the query below? It currently returns data within 60 seconds; however, my client requires the data to return in under 5 seconds.
SELECT DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE) START_DATE,
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR) START_HOUR,
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER) IN_PARTNER,
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER) OUT_PARTNER,
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE) SUBSCRIBER_TYPE,
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE) TRAFFIC_TYPE,
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE) EVENT_TYPE,
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD) INTERVAL_PERIOD,
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD) THRESHOLD,
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP) CALLED_NO_GRP,
SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) EVENT_COUNT,
--SUM (DECODE (CURR.EVENT_DURATION, '', 0, CURR.EVENT_DURATION)) EVENT_DURATION,
--SUM (DECODE (CURR.DATA_VOLUME, '', 0, CURR.DATA_VOLUME)) DATA_VOLUME,
--AVG (DECODE (CURR.AVERAGE_DURATION, '', 0, CURR.AVERAGE_DURATION)) AVERAGE_DURATION,
SUM (DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV)) EVENT_COUNT_PREV,
--SUM ( DECODE (PREV.EVENT_DURATION_PREV, '', 0, PREV.EVENT_DURATION_PREV)) EVENT_DURATION_PREV,
--SUM (DECODE (PREV.DATA_VOLUME_PREV, '', 0, PREV.DATA_VOLUME_PREV)) DATA_VOLUME_PREV,
--AVG ( DECODE (PREV.AVERAGE_DURATION_PREV, '', 0, PREV.AVERAGE_DURATION_PREV)) AVERAGE_DURATION_PREV,
ABS ( SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) - SUM ( DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV))) EVENT_COUNT_DIFF
FROM ------------------------------- CURR
(SELECT START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
--rd_thr.param_value THRESHOLD,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--SUM (DATA_VOLUME) AS DATA_VOLUME,
--SUM (EVENT_DURATION) AS EVENT_DURATION,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION,
SUM (EVENT_COUNT) AS EVENT_COUNT
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC (SYSDATE - TO_DATE (START_DATE, 'YYYYMMDD')) <= rd_min.param_value
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TO_DATE (START_DATE, 'YYYYMMDD') < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
) CURR
FULL OUTER JOIN
---------------------------------- PREV --------------------------
(SELECT TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD') START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--rd_thr.param_value THRESHOLD,
SUM (EVENT_COUNT) AS EVENT_COUNT_PREV
--SUM (EVENT_DURATION) AS EVENT_DURATION_PREV,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION_PREV,
--SUM (DATA_VOLUME) AS DATA_VOLUME_PREV
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC ( SYSDATE - TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD'))) <= rd_min.param_value
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')) < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD'),
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
) PREV
-------------------------- join ------------------
ON ( CURR.START_DATE = PREV.START_DATE
AND CURR.START_HOUR = PREV.START_HOUR
AND CURR.IN_PARTNER = PREV.IN_PARTNER
AND CURR.OUT_PARTNER = PREV.OUT_PARTNER
AND CURR.SUBSCRIBER_TYPE = PREV.SUBSCRIBER_TYPE
AND CURR.TRAFFIC_TYPE = PREV.TRAFFIC_TYPE
AND CURR.INTERVAL_PERIOD = PREV.INTERVAL_PERIOD
--AND CURR.THRESHOLD = PREV.THRESHOLD
AND CURR.EVENT_TYPE = PREV.EVENT_TYPE
AND CURR.CALLED_NO_GRP = PREV.CALLED_NO_GRP)
GROUP BY DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE),
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR),
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER),
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER),
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE),
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE),
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD),
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD),
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE),
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP);
I changed the query as below; however, the performance is not much different compared to the original:
WITH CURR AS
(SELECT /*+ MATERIALIZE */ START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
--rd_thr.param_value THRESHOLD,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--SUM (DATA_VOLUME) AS DATA_VOLUME,
--SUM (EVENT_DURATION) AS EVENT_DURATION,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION,
SUM (EVENT_COUNT) AS EVENT_COUNT
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC (SYSDATE - TO_DATE (START_DATE, 'YYYYMMDD')) <= rd_min.param_value
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TO_DATE (START_DATE, 'YYYYMMDD') < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
), PREV AS
(SELECT /*+ MATERIALIZE */ TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD') START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--rd_thr.param_value THRESHOLD,
SUM (EVENT_COUNT) AS EVENT_COUNT_PREV
--SUM (EVENT_DURATION) AS EVENT_DURATION_PREV,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION_PREV,
--SUM (DATA_VOLUME) AS DATA_VOLUME_PREV
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC ( SYSDATE - TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD'))) <= rd_min.param_value
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')) < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD'),
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
)
SELECT /*+ USE_HASH(T1 T2) */ DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE) START_DATE,
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR) START_HOUR,
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER) IN_PARTNER,
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER) OUT_PARTNER,
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE) SUBSCRIBER_TYPE,
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE) TRAFFIC_TYPE,
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE) EVENT_TYPE,
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD) INTERVAL_PERIOD,
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD) THRESHOLD,
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP) CALLED_NO_GRP,
SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) EVENT_COUNT,
--SUM (DECODE (CURR.EVENT_DURATION, '', 0, CURR.EVENT_DURATION)) EVENT_DURATION,
--SUM (DECODE (CURR.DATA_VOLUME, '', 0, CURR.DATA_VOLUME)) DATA_VOLUME,
--AVG (DECODE (CURR.AVERAGE_DURATION, '', 0, CURR.AVERAGE_DURATION)) AVERAGE_DURATION,
SUM (DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV)) EVENT_COUNT_PREV,
--SUM ( DECODE (PREV.EVENT_DURATION_PREV, '', 0, PREV.EVENT_DURATION_PREV)) EVENT_DURATION_PREV,
--SUM (DECODE (PREV.DATA_VOLUME_PREV, '', 0, PREV.DATA_VOLUME_PREV)) DATA_VOLUME_PREV,
--AVG ( DECODE (PREV.AVERAGE_DURATION_PREV, '', 0, PREV.AVERAGE_DURATION_PREV)) AVERAGE_DURATION_PREV,
ABS ( SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) - SUM ( DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV))) EVENT_COUNT_DIFF
FROM CURR
FULL OUTER JOIN
PREV
ON ( CURR.START_DATE = PREV.START_DATE
AND CURR.START_HOUR = PREV.START_HOUR
AND CURR.IN_PARTNER = PREV.IN_PARTNER
AND CURR.OUT_PARTNER = PREV.OUT_PARTNER
AND CURR.SUBSCRIBER_TYPE = PREV.SUBSCRIBER_TYPE
AND CURR.TRAFFIC_TYPE = PREV.TRAFFIC_TYPE
AND CURR.INTERVAL_PERIOD = PREV.INTERVAL_PERIOD
--AND CURR.THRESHOLD = PREV.THRESHOLD
AND CURR.EVENT_TYPE = PREV.EVENT_TYPE
AND CURR.CALLED_NO_GRP = PREV.CALLED_NO_GRP)
GROUP BY DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE),
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR),
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER),
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER),
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE),
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE),
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD),
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD),
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE),
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP); -
Need to fine tune ....
HI....
I'm using the block below and it takes more than 15 minutes. The cursor query returns 20000 records. Is there any other way to do this update? Please give a solution for this.
declare
cursor c1 is select m_id from mem where del_status='N' and exists(select 1 from mem_alb where ma_id =mem_id) and exists (select 1 from mem_us where mu_m_id=m_id and MU_USED =0);
begin
for idx in c1 loop
update mem_us set (MUL_PUSED , MU_USED) =(select sum(1),sum(m_size) from mem_alb,mem_alb_poto where map_ma_id=ma_id and ma_m_id=idx.m_id and map_del_status=0 and mal_del_status=0) where mu_m_id=idx.m_id;
end loop;
end;
Hello
Well, I'm in dire need of a double espresso so apologies if I've missed something, but I think that this is logically equivalent:
update
mem_us upd
set
(MUL_PUSED , MU_USED) =(select
sum(1),
sum(m_size)
from
mem_alb,
mem_alb_poto
where
map_ma_id=ma_id
and
ma_m_id=upd.m_id
and
map_del_status=0
and
mal_del_status=0
)
where
upd.del_status='N'
and
exists( select
1
from
mem_alb
where
ma_id =upd.m_id
and
exists( select
1
from
mem_us
where
mu_m_id=upd.m_id
and
MU_USED =0
));
HTH
David
Message was edited by:
david_tyler
Fixed the formatting
Message was edited by:
david_tyler
Again!
Message was edited by:
david_tyler
Oh I give up! :-) -
Dear All,
I am using version 11g. The query is given below.
Sample Data:
Table1: KEY (it has 4 million records and the DUR_UK column is a unique key)
PK
DUR_UK
1
BD|4|J650|01AB|0001LLA|N01005309106|01003014101|0H9ZN|0HB0T|6970189|30748R2|001000
2
BD|506001022502|06076001901|0G1GF|0HB0T|7270478|2035772C91|000100
3
BD|J08005852109|10004001301|0HB0T|6971058|3551787C91|000100|03-MAY-10|31-DEC-99|3551787C91
Table2: Section (It has 12k records)
Column: FVC
Data: OHBOT
In table 1, there are 3 types of order for the bold and underlined data. I need to get the PK and JVC from the KEY table by JOINING with the Section table, matching the bold and underlined field against the FVC column.
I tried the query below, but it takes a long time and didn't complete, because it goes through the 4 million records 3 times.
SELECT A.PK,B.JVC
FROM SECTION B
INNER JOIN KEY A
ON (REGEXP_SUBSTR(A.DUR_UK,'[^|]+',1,9) = B.FVC OR REGEXP_SUBSTR(A.DUR_UK,'[^|]+',1,5) = B.FVC OR REGEXP_SUBSTR(A.DUR_UK,'[^|]+',1,4) = B.FVC);
The output could be like.
PK JVC
1 OHBOT
2 OHBOT
3 OHBOT
Can someone please help me fine-tune this?
Thanks
Hi,
In a relational database, each column of each row should contain 1 piece of information, not a delimited list of several items. This is so basic to database design that it's called First Normal Form. Relational databases are designed to work with data that is in First Normal Form. Your table was not designed to work well, and it seems to be doing exactly what it was designed for. This problem (and many others, no doubt) would be much simpler and much faster if each of the |-delimited items was on a separate row.
If you must use the current design, avoid regular expressions. You don't need all the power of regular expressions in this problem; you can get the results you need faster using less powerful tools, such as LIKE, INSTR and SUBSTR.
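As an illustration of that point (a sketch only, using the question's table and column names, and assuming each DUR_UK value contains at least four '|' delimiters): extracting, say, the 4th '|'-delimited field with plain INSTR/SUBSTR instead of REGEXP_SUBSTR.

```sql
-- Plain-SQL equivalent of REGEXP_SUBSTR(dur_uk, '[^|]+', 1, 4):
-- locate the 3rd and 4th delimiters, then take the text between them.
SELECT pk,
       SUBSTR(dur_uk,
              INSTR(dur_uk, '|', 1, 3) + 1,
              INSTR(dur_uk, '|', 1, 4) - INSTR(dur_uk, '|', 1, 3) - 1) AS field4
FROM   key;
```

INSTR and SUBSTR are cheap character operations, whereas each REGEXP_SUBSTR call re-parses the pattern against the string.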
Depending on your data and your requirements, you might try something like this:
WITH got_pos AS
(SELECT k.pk, k.dur_uk
, s.jvc, s.fvc
, INSTR (k.dur_uk, '|', 1, 3) AS pos3
, INSTR (k.dur_uk, '|', 1, 4) AS pos4
, INSTR (k.dur_uk, '|', 1, 5) AS pos5
, INSTR (k.dur_uk, '|', 1, 8) AS pos8
, INSTR (k.dur_uk, '|', 1, 9) AS pos9
FROM section s
JOIN key k ON k.dur_uk LIKE '%|' || s.fvc || '|%'
)
SELECT pk, jvc
FROM got_pos
WHERE fvc IN ( SUBSTR (dur_uk, pos3 + 1, pos4 - (pos3 + 1))
, SUBSTR (dur_uk, pos4 + 1, pos5 - (pos4 + 1))
, SUBSTR (dur_uk, pos8 + 1, pos9 - (pos8 + 1))
)
Of course, without any sample data, I can't test this.
Oracle Text might help with this problem. I don't use it myself, but I believe it allows you to create indexes so that it's faster to find keywords (such as anything in section.fvc) anywhere in large strings (such as key.dur_uk).
Help needed to optimize the query
Help needed to optimize the query:
The requirement is to select the record with the max eff_date from HIST_TBL, and that max eff_date should be >= '01-Jan-2007'.
This has a high cost and takes around 15 minutes to execute.
Can anyone help to fine-tune this?
SELECT c.H_SEC,
c.S_PAID,
c.H_PAID,
table_c.EFF_DATE
FROM MTCH_TBL c
LEFT OUTER JOIN
(SELECT b.SEC_ALIAS,
b.EFF_DATE,
b.INSTANCE
FROM HIST_TBL b
WHERE b.EFF_DATE =
(SELECT MAX (b2.EFF_DATE)
FROM HIST_TBL b2
WHERE b.SEC_ALIAS = b2.SEC_ALIAS
AND b.INSTANCE =
b2.INSTANCE
AND b2.EFF_DATE >= '01-Jan-2007')
OR b.EFF_DATE IS NULL) table_c
ON table_c.SEC_ALIAS=c.H_SEC
AND table_c.INSTANCE = 100;
To start with, I would avoid scanning HIST_TBL twice.
Try this
select c.h_sec
, c.s_paid
, c.h_paid
, table_c.eff_date
from mtch_tbl c
left
join (
select sec_alias
, eff_date
, instance
from (
select sec_alias
, eff_date
, instance
, max(eff_date) over(partition by sec_alias, instance) max_eff_date
from hist_tbl b
where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
or eff_date is null
)
where eff_date = max_eff_date
or eff_date is null
) table_c
on table_c.sec_alias = c.h_sec
and table_c.instance = 100; -
I need to pass a query as a string to the DBMS_XMLQUERY.GETXML package. The parameters to the query are a date and a varchar. Please help me build the string. Below are the query and the output. (The string builds fine except that the parameters come out without quotes.)
here is the procedure
create or replace
procedure temp(
P_MTR_ID VARCHAR2,
P_FROM_DATE IN DATE ,
P_THROUGH_DATE IN DATE ) AS
L_XML CLOB;
l_query VARCHAR2(2000);
BEGIN
l_query:= 'SELECT
a.s_datetime DATETIME,
a.downdate Ending_date,
a.downtime Ending_time,
TO_CHAR(ROUND(a.downusage,3),''9999999.000'') kWh_Usage,
TO_CHAR(ROUND(a.downcost,2),''$9,999,999.00'') kWh_cost,
TO_CHAR(ROUND(B.DOWNUSAGE,3),''9999999.000'') KVARH
FROM
(SELECT s_datetime + .000011574 s_datetime,
TO_CHAR(S_DATETIME ,''mm/dd/yyyy'') DOWNDATE,
DECODE(TO_CHAR(s_datetime+.000011574 ,''hh24:'
||'mi''), ''00:'
||'00'',''24:'
||'00'', TO_CHAR(s_datetime+.000011574,''hh24:'
||'mi'')) downtime,
s_usage downusage,
s_cost downcost
FROM summary_qtrhour
WHERE s_mtrid = '
||P_MTR_ID||
' AND s_mtrch = ''1''
AND s_datetime BETWEEN TO_DATE('
||P_FROM_DATE||
',''DD-MON-YY'') AND (TO_DATE('
||P_THROUGH_DATE||
',''DD-MON-YY'') + 1)
) a,
(SELECT s_datetime + .000011574 s_datetime,
s_usage downusage
FROM summary_qtrhour
WHERE s_mtrid = '
||P_MTR_ID||
' AND s_mtrch = ''2''
AND s_datetime BETWEEN TO_DATE('
||P_FROM_DATE||
',''DD-MON-YY'') AND (TO_DATE('
||P_THROUGH_DATE||
','' DD-MON-YY'') + 1)
) B
where a.S_DATETIME = B.S_DATETIME(+)';
SELECT DBMS_XMLQUERY.GETXML('L_QUERY') INTO L_XML FROM DUAL;
INSERT INTO NK VALUES (L_XML);
DBMS_OUTPUT.PUT_LINE('L_QUERY IS :'||L_QUERY);
END;
OUTPUT: parameters are in bold (the issue is they are coming out without single quotes; otherwise the query is fine).
L_QUERY IS :SELECT
a.s_datetime DATETIME,
a.downdate Ending_date,
a.downtime Ending_time,
TO_CHAR(ROUND(a.downusage,3),'9999999.000') kWh_Usage,
TO_CHAR(ROUND(a.downcost,2),'$9,999,999.00') kWh_cost,
TO_CHAR(ROUND(B.DOWNUSAGE,3),'9999999.000') KVARH
FROM
(SELECT s_datetime + .000011574 s_datetime,
TO_CHAR(S_DATETIME ,'mm/dd/yyyy') DOWNDATE,
DECODE(TO_CHAR(s_datetime+.000011574 ,'hh24:mi'), '00:00','24:00', TO_CHAR(s_datetime+.000011574,'hh24:mi')) downtime,
s_usage downusage,
s_cost downcost
FROM summary_qtrhour
WHERE s_mtrid = N3165 AND s_mtrch = '1'
AND s_datetime BETWEEN TO_DATE(01-JAN-13,'DD-MON-YY') AND (TO_DATE(31-JAN-13,'DD-MON-YY') + 1)
) a,
(SELECT s_datetime + .000011574 s_datetime,
s_usage downusage
FROM summary_qtrhour
WHERE s_mtrid = N3165 AND s_mtrch = '2'
AND s_datetime BETWEEN TO_DATE(01-JAN-13,'DD-MON-YY') AND (TO_DATE(31-JAN-13,' DD-MON-YY') + 1)
) B
where a.S_DATETIME = B.S_DATETIME(+)
The correct way to handle this is to use bind variables.
And use DBMS_XMLGEN instead of DBMS_XMLQUERY :
create or replace procedure temp (
p_mtr_id in varchar2
, p_from_date in date
, p_through_date in date
)
is
l_xml CLOB;
l_query VARCHAR2(2000);
l_ctx dbms_xmlgen.ctxHandle;
begin
l_query:= 'SELECT
a.s_datetime DATETIME,
a.downdate Ending_date,
a.downtime Ending_time,
TO_CHAR(ROUND(a.downusage,3),''9999999.000'') kWh_Usage,
TO_CHAR(ROUND(a.downcost,2),''$9,999,999.00'') kWh_cost,
TO_CHAR(ROUND(B.DOWNUSAGE,3),''9999999.000'') KVARH
FROM
(SELECT s_datetime + .000011574 s_datetime,
TO_CHAR(S_DATETIME ,''mm/dd/yyyy'') DOWNDATE,
DECODE(TO_CHAR(s_datetime+.000011574 ,''hh24:'
||'mi''), ''00:'
||'00'',''24:'
||'00'', TO_CHAR(s_datetime+.000011574,''hh24:'
||'mi'')) downtime,
s_usage downusage,
s_cost downcost
FROM summary_qtrhour
WHERE s_mtrid = :P_MTR_ID
AND s_mtrch = ''1''
AND s_datetime BETWEEN TO_DATE(:P_FROM_DATE,''DD-MON-YY'')
AND (TO_DATE(:P_THROUGH_DATE,''DD-MON-YY'') + 1)
) a,
(SELECT s_datetime + .000011574 s_datetime,
s_usage downusage
FROM summary_qtrhour
WHERE s_mtrid = :P_MTR_ID
AND s_mtrch = ''2''
AND s_datetime BETWEEN TO_DATE(:P_FROM_DATE,''DD-MON-YY'')
AND (TO_DATE(:P_THROUGH_DATE,'' DD-MON-YY'') + 1)
) B
where a.S_DATETIME = B.S_DATETIME(+)';
l_ctx := dbms_xmlgen.newContext(l_query);
dbms_xmlgen.setBindValue(l_ctx, 'P_MTR_ID', p_mtr_id);
dbms_xmlgen.setBindValue(l_ctx, 'P_FROM_DATE', to_char(p_from_date, 'DD-MON-YY'));
dbms_xmlgen.setBindValue(l_ctx, 'P_THROUGH_DATE', to_char(p_through_date, 'DD-MON-YY'));
l_xml := dbms_xmlgen.getXML(l_ctx);
dbms_xmlgen.closeContext(l_ctx);
insert into nk values (l_xml);
end; -
Need to write a query while configuring Database Adapter
Hi,
I am working in BPEL. I need to write a query while configuring the DB adapter. I need to query a table which contains five rows, each row containing 7 columns. My query should work like this: if I give the first column name (for eg: Title) it should return the complete row (i.e. all 7 columns in the row).
So after deploying, during testing, when I give the column name (Title) in the BPEL console it should return the row.
How do I write a query such that if I give any title name in the five rows it returns the respective row?
Please help me out.
Regards,
Mohamed
Hi,
I got the answer for this. It's working fine now.
the query is: select TITLE,..... from movie where (TITLE =#title);
The second "title" will be the one which I am going to give in the BPEL console,
so it will check against the "TITLE" in the table and execute the query.
Regards,
Mohamed -
How to fine tune ROIs of LEDs of a camera image?
Hi,
I use an NI Smart Camera and NI Vision Builder 2012 for measuring HUE values of the LEDs. I can manually adjust the ROI of each LED. Sometimes the mechanics move a bit and my ROIs are then tilted/off from the LED and measurement results start to fail. I am wondering how to set Vision Builder/the camera to automatically fine-tune the ROIs when needed? I attach a sample picture of the LEDs. There is also a situation where I manually fine-tune LED D79's ROI.
BR,
Jick
Attachments:
LEDs.png 37 KB
Thanks for the example. There is still one open question: how do I detect if one LED is OFF? For example, if one LED is missing, the Detect Objects 1 step finds one object less, but how can I know which LED is missing/off? How about creating a Detect Objects step for each LED separately, so that if there is no LED the measure colors step just fails? I attach my suggestion here.
BR,
Jick
Attachments:
Adjust_ROIs.vbai 114 KB
camera_simulation_image2.png 26 KB -
Hi DBA,
I am new to this field.
I can see from EM that some SQL statements are taking 8%-95% of the DB. I get the SQL_ID from EM; how do I fine-tune this?
Also, can you please help me out with some standard recommendations to run the DB in the best manner?
Thanks in advance.
As a generic recommendation for slow SQL queries, read the following thread,
When your query takes too long ...
HTH
Aman.... -
Need help tuning the query or rewriting the query --
Hi,
Need help to tune the query below, or to rewrite the query to reduce the execution time. Please find the query and explain plan below.
QUERY
explain plan FOR SELECT consumer_key,product_key,days_in_product,20100201 period_key FROM
(SELECT consumer_key,
product_key,
days_in_product,
row_number() over ( Partition BY consumer_key order by Days_in_product DESC) row_num
FROM
(SELECT consumer_key,
product_key,
SUM(no_ofdays) days_in_product
FROM
(SELECT pcv.consumer_key,
pcv.product_key,
pcv.product_consumer_valid_from,
pcv.product_consumer_valid_to,
DECODE (SIGN(20100201000000-product_consumer_valid_from),1,20100201000000,product_consumer_valid_from) period_start,
DECODE (SIGN(20100228235959-product_consumer_valid_to),1,product_consumer_valid_to,20100228235959) period_end,
CASE
WHEN to_number(TO_CHAR(cd.activation_date,'YYYYMMDDHH24MISS')) BETWEEN 20100201000000 AND 20100228235959
AND activation_date > to_Date(product_consumer_valid_to,'YYYYMMDDHH24MISS')
THEN 0
WHEN to_number(TO_CHAR(cd.activation_date,'YYYYMMDDHH24MISS')) BETWEEN 20100201000000 AND 20100228235959
AND activation_date BETWEEN to_Date(product_consumer_valid_from,'YYYYMMDDHH24MISS') AND to_Date(product_consumer_valid_to,'YYYYMMDDHH24MISS')
THEN
--to_char(activation_date,'MON-YYYY')='PERIOD_ACTIVE' and activation_date >= to_Date(product_consumer_valid_from,'YYYYMMDDHH24MISS') then
(to_date(DECODE (SIGN(20100228235959-product_consumer_valid_to),1,product_consumer_valid_to,20100228235959),'YYYYMMDDHH24MISS') - to_date(TO_CHAR(activation_date,'YYYYMMDDHH24MISS'),'YYYYMMDDHH24MISS') )
WHEN to_number(TO_CHAR(cd.activation_date,'YYYYMMDDHH24MISS')) < 20100201000000
THEN (to_date(DECODE (SIGN(20100228235959-product_consumer_valid_to),1,product_consumer_valid_to,20100228235959),'YYYYMMDDHH24MISS') - to_Date(DECODE (SIGN(20100201000000-product_consumer_valid_from),1,20100201000000,product_consumer_valid_from),'YYYYMMDDHH24MISS') )
WHEN to_number(TO_CHAR(cd.activation_date,'YYYYMMDDHH24MISS')) > 20100228235959
THEN 0
ELSE
--unusual situation
(to_date(DECODE (SIGN(20100228235959-product_consumer_valid_to),1,product_consumer_valid_to,20100228235959),'YYYYMMDDHH24MISS') - to_Date(DECODE (SIGN(20100201000000-product_consumer_valid_from),1,20100201000000,product_consumer_valid_from),'YYYYMMDDHH24MISS') )
END No_ofDays
FROM cimtran.product_consumer_validity pcv,
consumer_dimension cd
WHERE pcv.consumer_key =cd.consumer_key
AND product_consumer_valid_to >= 20100201000000
AND product_consumer_valid_from <= 20100228235959
--and product_consumer_valid_from > '20090801000000'
ORDER BY consumer_key,
product_key,
product_consumer_valid_from
) a
GROUP BY consumer_key,
product_key
ORDER BY consumer_key,
product_key
) WHERE row_num=1;
EXPLAIN PLAN
"PLAN_TABLE_OUTPUT"
"Plan hash value: 3823907703"
"| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |"
"| 0 | SELECT STATEMENT | | 4665K| 231M| | 133K (1)| 00:31:08 |"
"|* 1 | VIEW | | 4665K| 231M| | 133K (1)| 00:31:08 |"
"|* 2 | WINDOW SORT PUSHED RANK| | 4665K| 173M| 232M| 133K (1)| 00:31:08 |"
"| 3 | VIEW | | 4665K| 173M| | 104K (1)| 00:24:18 |"
"| 4 | SORT GROUP BY | | 4665K| 182M| 729M| 104K (1)| 00:24:18 |"
"|* 5 | HASH JOIN | | 13M| 533M| 65M| 44241 (1)| 00:10:20 |"
"| 6 | TABLE ACCESS FULL | CONSUMER_DIMENSION | 2657K| 35M| | 4337 (1)| 00:01:01 |"
"|* 7 | TABLE ACCESS FULL | PRODUCT_CONSUMER_VALIDITY | 13M| 351M| | 15340 (2)| 00:03:35 |"
"Predicate Information (identified by operation id):"
" 1 - filter(""ROW_NUM""=1)"
" 2 - filter(ROW_NUMBER() OVER ( PARTITION BY ""CONSUMER_KEY"" ORDER BY "
" INTERNAL_FUNCTION(""DAYS_IN_PRODUCT"") DESC )<=1)"
" 5 - access(""PCV"".""CONSUMER_KEY""=""CD"".""CONSUMER_KEY"")"
" 7 - filter(""PRODUCT_CONSUMER_VALID_FROM""<=20100228235959 AND "
" ""PRODUCT_CONSUMER_VALID_TO"">=20100201000000)"
I doubt that this query can be tuned without using indexes. There is a lot of unnecessary work specified in your query, like unnecessary intermediate sorting and selecting unused columns. The cost-based optimizer seems to recognize this and skips some of that unnecessary work. For clarity's sake, I would rewrite your query as below. Note that the query is untested:
select consumer_key
, max(product_key) keep (dense_rank last order by days_in_product) product_key
, max(days_in_product) days_in_product
, 20100201 period_key
from ( select pcv.consumer_key
, pcv.product_key
, sum
( case
when to_number(to_char(cd.activation_date,'yyyymmddhh24miss')) between 20100201000000 and 20100228235959
then
case
when cd.activation_date > to_date(pcv.product_consumer_valid_to,'yyyymmddhh24miss')
then
0
when cd.activation_date between to_date(pcv.product_consumer_valid_from,'yyyymmddhh24miss') and to_date(product_consumer_valid_to,'yyyymmddhh24miss')
then
to_date(to_char(pcv.product_consumer_valid_to),'yyyymmddhh24miss')
- to_date(to_char(activation_date,'yyyymmddhh24miss'),'yyyymmddhh24miss')
end
when to_number(to_char(cd.activation_date,'yyyymmddhh24miss')) < 20100201000000
then
to_date(to_char(pcv.product_consumer_valid_to),'yyyymmddhh24miss')
- to_date(to_char(pcv.product_consumer_valid_from),'yyyymmddhh24miss')
when to_number(to_char(cd.activation_date,'yyyymmddhh24miss')) > 20100228235959
then
0
end
) days_in_product
from cimtran.product_consumer_validity pcv
, consumer_dimension cd
where pcv.consumer_key = cd.consumer_key
and product_consumer_valid_to >= 20100201000000
and product_consumer_valid_from <= 20100228235959
group by consumer_key
, product_key
group by consumer_key
Regards,
Rob. -
Hi, Everyone,
My oracle database version is oracle 11.2.0.1.
I am running this procedure in a PL/SQL package. When execution reaches this procedure, it never comes back. Could you please help me fine-tune it?
PROCEDURE ICM_MIN_PR_DIFF
(V_LOW IN NUMBER,
V_UP IN NUMBER)
AS
L_INS_TABLE VARCHAR2(4000);
L_QRY_1_PART VARCHAR2(4000);
L_QRY_2_PART VARCHAR2(4000);
L_QRY_3_PART VARCHAR2(4000);
L_QRY_POPULATE VARCHAR2(4000);
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE ICM_MIN_PRDIFF_0';
L_QRY_1_PART := ' select a.customer_no,a.pr_code_bb,a.score,min(b.price_diff) price_diff, a.flag
from ICM_MAX_SCORE_0 a, icm b
where
a.customer_no = b.customer_no
and a.pr_code_bb=b.pr_code_bb
and a.score = b.score
and a.flag = b.flag
and b.price_diff > 0
and b.score >=0.5
and b.flag = 0
and b.price_diff > ';
L_QRY_2_PART := ' AND b.price_diff <= ';
L_QRY_3_PART := ' GROUP BY a.customer_no,a.pr_code_bb,a.score,a.flag ';
L_QRY_POPULATE := L_QRY_1_PART || V_LOW ||L_QRY_2_PART || V_UP || L_QRY_3_PART;
--Construct Insertion Statment by appending the Query Statement
L_INS_TABLE := 'INSERT /*+ APPEND */ INTO ICM_MIN_PRDIFF_0 ' ||
L_QRY_POPULATE;
--DBMS_OUTPUT.PUT_LINE(L_INS_TABLE);
DBMS_OUTPUT.PUT_LINE('POPULATED ICM_MIN_PRDIFF_0');
EXECUTE IMMEDIATE L_INS_TABLE;
COMMIT;
END ICM_MIN_PR_DIFF;
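As an aside: since V_LOW and V_UP are plain NUMBER parameters, the dynamic statement does not need string concatenation at all. A minimal sketch (same tables as above, untested) of the insert using bind variables, which avoids a hard parse for every distinct value pair:

```
-- Sketch: EXECUTE IMMEDIATE with a USING clause binds V_LOW and V_UP
-- instead of splicing them into the SQL text.
EXECUTE IMMEDIATE
  'INSERT /*+ APPEND */ INTO icm_min_prdiff_0
   SELECT a.customer_no, a.pr_code_bb, a.score, MIN(b.price_diff), a.flag
   FROM   icm_max_score_0 a, icm b
   WHERE  a.customer_no = b.customer_no
   AND    a.pr_code_bb  = b.pr_code_bb
   AND    a.score       = b.score
   AND    a.flag        = b.flag
   AND    b.score      >= 0.5
   AND    b.flag        = 0
   AND    b.price_diff  > :low
   AND    b.price_diff <= :up
   GROUP BY a.customer_no, a.pr_code_bb, a.score, a.flag'
  USING V_LOW, V_UP;
```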
The following is the explain plan for the query
plan FOR succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 636749222
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1390K| 132M| | 1796K (1)| 05:59:13 |
| 1 | LOAD TABLE CONVENTIONAL | ICM_MIN_PRDIFF_0 | | | | | |
| 2 | HASH GROUP BY | | 1390K| 132M| 155M| 1796K (1)| 05:59:13 |
|* 3 | HASH JOIN | | 1390K| 132M| 68M| 1764K (1)| 05:52:54 |
|* 4 | TABLE ACCESS FULL | ICM_MAX_SCORE_0 | 1390K| 53M| | 2478 (1)| 00:00:30 |
|* 5 | TABLE ACCESS FULL | ICM | 10M| 611M| | 1722K (1)| 05:44:26 |
Predicate Information (identified by operation id):
3 - access("A"."CUSTOMER_NO"="B"."CUSTOMER_NO" AND "A"."PR_CODE_BB"="B"."PR_CODE_BB" AND
"A"."SCORE"="B"."SCORE" AND "A"."FLAG"="B"."FLAG")
4 - filter("A"."FLAG"=0 AND "A"."SCORE">=0.5)
5 - filter("B"."PRICE_DIFF"<=10 AND "B"."SCORE">=0.5 AND "B"."PRICE_DIFF">0 AND "B"."FLAG"=0)
20 rows selected
select count(1) from ICM_MAX_SCORE_0 = 1390409
select count(1) from ICM = 586260240

Please look at the explain plan for the SQL after implementing your advice:
explain plan for
insert /*+ append */ into ICM_MIN_PRDIFF_0 (customer_no,pr_code_bb,score,price_diff,flag)
select a.customer_no,a.pr_code_bb,a.score,min(b.price_diff) price_diff, a.flag
from ICM_MAX_SCORE_0 a, icm b
where a.customer_no = b.customer_no
and a.pr_code_bb=b.pr_code_bb
and a.score = b.score
and a.flag = b.flag
and b.price_diff > 0
and b.score >=0.5
and b.flag = 0
and b.price_diff > 0
AND b.price_diff <= 10
GROUP BY a.customer_no,a.pr_code_bb,a.score,a.flag ;
SELECT * FROM TABLE(dbms_xplan.display);
plan FOR succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 4089758326
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1390K| 132M| | 1796K (1)| 05:59:13 |
| 1 | LOAD AS SELECT | ICM_MIN_PRDIFF_0 | | | | | |
| 2 | HASH GROUP BY | | 1390K| 132M| 155M| 1796K (1)| 05:59:13 |
|* 3 | HASH JOIN | | 1390K| 132M| 68M| 1764K (1)| 05:52:54 |
|* 4 | TABLE ACCESS FULL| ICM_MAX_SCORE_0 | 1390K| 53M| | 2478 (1)| 00:00:30 |
|* 5 | TABLE ACCESS FULL| ICM | 10M| 611M| | 1722K (1)| 05:44:26 |
Predicate Information (identified by operation id):
3 - access("A"."CUSTOMER_NO"="B"."CUSTOMER_NO" AND "A"."PR_CODE_BB"="B"."PR_CODE_BB" AND
"A"."SCORE"="B"."SCORE" AND "A"."FLAG"="B"."FLAG")
4 - filter("A"."FLAG"=0 AND "A"."SCORE">=0.5)
5 - filter("B"."PRICE_DIFF"<=10 AND "B"."SCORE">=0.5 AND "B"."PRICE_DIFF">0 AND "B"."FLAG"=0)
20 rows selected -
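Two things stand out in the plan above. First, the optimizer estimates 10M rows from ICM while the count posted earlier reports roughly 586M, which suggests the table statistics are stale. Second, ICM is still read by full scan. A hedged sketch of both remedies; the index name and column order are my assumptions, taken from the filter predicates in the plan:

```
-- Refresh stale statistics so cardinality estimates match reality.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'ICM');
END;
/

-- Hypothetical index: equality columns (flag, score) lead, the
-- range column (price_diff) follows, join columns make it covering.
CREATE INDEX i_icm_flag_score_pd
  ON icm (flag, score, price_diff, customer_no, pr_code_bb);
```

With a filter that still matches over 10M rows, a full scan feeding a hash join may remain the cheapest plan, so compare EXPLAIN PLAN output before and after.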
Can I fine tune the Ken Burns effect?
I'm using iPhoto 6 for the first time - making a slide show. I turned off the default Ken Burns effect for all slides, but I'm manually setting the KB effect on some slides. I like the ability to specify the starting view and the ending view - such as starting the view with a close-up of a bottle of wine, then zooming back to show the table full of food and the people around it, but....
Is it possible to "fine tune" the KB effect as follows: Have it show the starting view for a specified period of time (e.g. one second), then specify the transition time (e.g. 3 seconds), then have it show the final view for a specified period of time (e.g. 2 seconds).
I.e., I'd like to set up the KB effect so that it "lingers" on the starting view and ending view for a time period that I specify. Is this possible? (If not, this would be a great feature to add in a future release.)

You can kinda do that in iPhoto. You can set individual slide display times with the Adjust pane in slideshow mode, provided the Fit slideshow to Music option is not selected. You can also set transitions and the transition speed.
To get music to end at the end you will have to estimate the time of the slideshow and use music with a time close to that time. I use a blank, black slide at the end of my slideshow so that the music will end to a blank screen if I miss the timing a bit instead of starting over.
TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto (iPhoto.Library for iPhoto 5 and earlier) database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
I've created an Automator workflow application (requires Tiger or later), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 6 and 7 libraries and Tiger and Leopard. Just put the application in the Dock and click on it whenever you want to backup the dB file. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
Note: There's now an Automator backup application for iPhoto 5 that will work with Tiger or Leopard. -
Hi all,
I have noticed some slowness in my Oracle database recently; it is not performing as it did before. I want to tune this database. Could you please tell me what approach I should follow to tune my database in general?
The database is on unix system and 10g version.
I have heard of Statspack and TKPROF, but I've never used them and I don't know how to interpret their output. Please help.
Thanks
Raitsarevo

If you want help, you need to ask a specific question.
Oracle provides a Performance and Tuning manual which contains a great deal of information including coverage for the statistics reported by the AWR / Statspack and use of the SQL trace facility.
For more help with statspack see Oracle support document
STATSPACK COMPLETE REFERENCE #94224.1
HTH -- Mark D Powell --
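To make the Statspack pointer above concrete, here is a minimal session sketch. The scripts named are the ones Oracle ships under $ORACLE_HOME/rdbms/admin; exact prompts vary by version, so treat this as an outline rather than a recipe:

```
-- One-time install: creates the PERFSTAT schema (run as SYSDBA,
-- prompts for a password and tablespaces).
@?/rdbms/admin/spcreate.sql

-- Take a snapshot before the slow period, connected as PERFSTAT.
EXEC statspack.snap;

-- ... let the workload you want to analyze run ...

-- Take a second snapshot after it.
EXEC statspack.snap;

-- Generate a report; it prompts for the two snapshot IDs.
@?/rdbms/admin/spreport.sql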