Tune query in Discoverer 4i
Hi,
I am using Discoverer 4i Desktop edition (version 4.1.48). I want to tune a query/report by adding some hints to the query.
I am new to Discoverer. Can somebody guide me on how to do this?
I want to know how to edit the query to introduce the hints.
Hi,
Using Discoverer Administrator you can enter hints in the optimizer hints property of the folders. These will then be added to the query when the folder is used in Discoverer Desktop.
Rod West
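For illustration only (the folder and index names below are made up, and the exact syntax the property expects should be confirmed via SQL Inspector on the generated statement), a hint entered in the Optimizer Hints property ends up in the generated query roughly like this:

```sql
-- sketch of the SQL Discoverer would generate once the folder's
-- Optimizer Hints property carries an INDEX hint
SELECT /*+ INDEX(l GL_JE_LINES_N1) */
       l.je_header_id, l.je_line_num
  FROM gl.gl_je_lines l
 WHERE l.period_name = :p;
```

Checking the statement in SQL Inspector after the change is the quickest way to verify the hint was actually picked up.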
Similar Messages
-
Performance issues when creating a Report / Query in Discoverer
Hi forum,
Hope you can help; this involves a performance issue when creating a report/query.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
Please see attached the SQL Inspector Plan:
Before Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
AND-EQUAL
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_N1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
After Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS FULL GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
Many thanks,
Lance
Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query was returned within seconds again.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
Lance
DECODE( JOURNAL_BATCH1.STATUS,
'+', 'Unable to validate or create CTA',
'+*', 'Was unable to validate or create CTA',
'-','Invalid or inactive rounding differences account in journal entry',
'-*', 'Modified invalid or inactive rounding differences account in journal entry',
'<', 'Showing sequence assignment failure',
'<*', 'Was showing sequence assignment failure',
'>', 'Showing cutoff rule violation',
'>*', 'Was showing cutoff rule violation',
'A', 'Journal batch failed funds reservation',
'A*', 'Journal batch previously failed funds reservation',
'AU', 'Showing batch with unopened period',
'B', 'Showing batch control total violation',
'B*', 'Was showing batch control total violation',
'BF', 'Showing batch with frozen or inactive budget',
'BU', 'Showing batch with unopened budget year',
'C', 'Showing unopened reporting period',
'C*', 'Was showing unopened reporting period',
'D', 'Selected for posting to an unopened period',
'D*', 'Was selected for posting to an unopened period',
'E', 'Showing no journal entries for this batch',
'E*', 'Was showing no journal entries for this batch',
'EU', 'Showing batch with unopened encumbrance year',
'F', 'Showing unopened reporting encumbrance year',
'F*', 'Was showing unopened reporting encumbrance year',
'G', 'Showing journal entry with invalid or inactive suspense account',
'G*', 'Was showing journal entry with invalid or inactive suspense account',
'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
'I', 'In the process of being posted',
'J', 'Showing journal control total violation',
'J*', 'Was showing journal control total violation',
'K', 'Showing unbalanced intercompany journal entry',
'K*', 'Was showing unbalanced intercompany journal entry',
'L', 'Showing unbalanced journal entry by account category',
'L*', 'Was showing unbalanced journal entry by account category',
'M', 'Showing multiple problems preventing posting of batch',
'M*', 'Was showing multiple problems preventing posting of batch',
'N', 'Journal produced error during intercompany balance processing',
'N*', 'Journal produced error during intercompany balance processing',
'O', 'Unable to convert amounts into reporting currency',
'O*', 'Was unable to convert amounts into reporting currency',
'P', 'Posted',
'Q', 'Showing untaxed journal entry',
'Q*', 'Was showing untaxed journal entry',
'R', 'Showing unbalanced encumbrance entry without reserve account',
'R*', 'Was showing unbalanced encumbrance entry without reserve account',
'S', 'Already selected for posting',
'T', 'Showing invalid period and conversion information for this batch',
'T*', 'Was showing invalid period and conversion information for this batch',
'U', 'Unposted',
'V', 'Journal batch is unapproved',
'V*', 'Journal batch was unapproved',
'W', 'Showing an encumbrance journal entry with no encumbrance type',
'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
'X', 'Showing an unbalanced journal entry but suspense not allowed',
'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
'Z', 'Showing invalid journal entry lines or no journal entry lines',
'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ), -
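Regarding the DECODE above: because Discoverer's condition is applied to the decoded text, the database must evaluate the DECODE for every JOURNAL_BATCH1 row before it can filter, which defeats any index on STATUS. A workaround sketch (assuming the raw STATUS column can be exposed as an item in the folder) is to filter on the underlying code instead:

```sql
-- condition on the decoded item: the DECODE must be evaluated per
-- row, typically forcing a full scan of GL_JE_BATCHES
WHERE DECODE(jb.status, 'P', 'Posted', 'U', 'Unposted', ...) = 'Posted'

-- condition on the raw code: a plain predicate the optimizer can
-- drive through an index on STATUS, if one exists
WHERE jb.status = 'P'
```

The decoded column can still be selected for display; only the filtering condition needs to move to the raw code.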
Passing a parameter to a query in a Discoverer Administrator custom folder.
I am developing a report in which I need to pass in the request ID as a parameter to display to the user the data corresponding to that request ID. However, the query is very complex, and if I enter the request ID in Discoverer Desktop as a condition on the workbook, the report takes a lot of time to generate. Is there any mechanism through which I can pass the parameter directly to the custom folder query? This would reduce my computation time, since all the joins would be done only for the rows with that given request ID rather than for all the rows, which is what happens currently.
Thanks.
Hi,
Well there is no straight forward way to do that.
There is a work around to get that functionality by using outer tables or context.
Take a look at the following posts:
Re: Parameters in SubQuery
Re: Parameters in Discoverer Administration
Re: Passing multiple parameters into Custom Folder... -
How do I get this query into Discoverer Plus
Hi all,
I have the following query:
SELECT h.hrs, NVL(Quantity, 0) Quantity
FROM (SELECT TRIM(to_char(LEVEL - 1, '00')) hrs
FROM dual
CONNECT BY LEVEL < 25) h
LEFT JOIN (SELECT TO_CHAR(event_date, 'HH24') AS during_hour,
COUNT(*) Quantity
FROM user_activity u
WHERE event_date BETWEEN
to_date('15-JUN-2010 14:00:00', 'DD-MON-YYYY HH24:MI:SS') AND
to_date('16-JUN-2010 13:59:59', 'DD-MON-YYYY HH24:MI:SS')
AND event = 'user.login'
GROUP BY TO_CHAR(event_date, 'HH24')) t
ON (h.hrs = t.during_hour)
ORDER BY h.hrs;
Which produces the number of actions performed (from an event table, user_activity) grouped by the hour of the day they occurred (including displaying hours that have zero records - this bit is important!). I want to be able to put this into Discoverer plus as a worksheet, but I'm having trouble trying to figure out how.
I was able to create a custom folder in Administrator for the select from DUAL, but I cannot link it to the user_activity table because there's no relationship between to two tables to create a join.
The user_activity table is:
USER_ID - VARCHAR2(8 CHAR)
EVENT_DATE - DATE
EVENT - VARCHAR2(100 CHAR)
The custom folder is this part of the SQL:
SELECT TRIM(to_char(LEVEL - 1, '00')) hrs FROM dual CONNECT BY LEVEL < 25
Any suggestions would be greatly appreciated.
Thanks.
Edited by: Cyntech on Aug 12, 2010 10:41 AM
KK wrote:
hi,
In the custom folder we can join tables, but as you said, there is no join between them.
I would suggest you build this query into a view and use that view with the DUAL table, via an inline query or subquery, whichever way works.
This is the only possibility I can think of.
Hope it helps you.
By,
KK
Hi,
Thanks for the reply, though I'm not sure that I understand what you are suggesting.
Which query would you turn into a view? If you are referring to the select from DUAL, then how would the view be any different from the custom folder? You still would not be able to join it to user_activity, as there are no common columns.
Edited by: Cyntech on Aug 12, 2010 2:49 PM -
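On the DUAL/user_activity join problem above: since Discoverer cannot join folders without a shared column, one sketch (untested here) is to put the entire statement, hours spine and aggregation both, into a single custom folder, so the join happens inside the folder's own SQL and the EUL never needs one:

```sql
-- one custom folder containing both halves of the original query;
-- the date range filter is omitted here and would normally become
-- a condition or parameter on the folder items
SELECT h.hrs, NVL(t.quantity, 0) AS quantity
  FROM (SELECT TRIM(TO_CHAR(LEVEL - 1, '00')) AS hrs
          FROM dual CONNECT BY LEVEL < 25) h
  LEFT JOIN (SELECT TO_CHAR(event_date, 'HH24') AS during_hour,
                    COUNT(*) AS quantity
               FROM user_activity
              WHERE event = 'user.login'
              GROUP BY TO_CHAR(event_date, 'HH24')) t
    ON h.hrs = t.during_hour
```

The ORDER BY is dropped because sorting can be done in the worksheet itself.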
Can you import local tables into Oracle to query in Discoverer?
long time lurker, first time poster...
I've got a local list in Excel of over 100k records. I want to bounce this list off our network's data in Oracle to add a few fields to my file. Is there a way to import the local list into Oracle (like I might using MS Access) so that I can query the data using Discoverer, and output my local list with the additional data I pull out of Oracle? Thanks so much for any assistance!
Hi,
Create an external table in the database that points to the Excel workbook; then you will be able to query it like any other table in Discoverer. You will need to create a database directory which points to the directory containing the workbook, and you will also have to save the workbook in CSV format. But once that is set up, you will be able to make changes in the workbook, save them, then refresh the Discoverer workbook and immediately see the changed data in Discoverer.
Rod West -
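A minimal sketch of the setup Rod describes; the directory path, file name and columns are placeholders for whatever the real spreadsheet contains:

```sql
-- directory object pointing at the folder holding the CSV export
CREATE DIRECTORY xls_dir AS '/data/excel_exports';

-- external table over the workbook saved as CSV; querying it
-- re-reads the file, so edits show up on the next refresh
CREATE TABLE local_list_ext (
  record_id   VARCHAR2(20),
  record_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY xls_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('local_list.csv')
);
```

The external table can then be registered in the EUL like any ordinary table and joined to the Oracle data.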
How to Dynamically Pass Parameters to a Query in Discoverer
Hi All,
I have a query where I would like to pass the parameters dynamically. In my query I have three selects and two UNION ALLs. In the three selects I have to pass Period & Year parameters. In each select the period calculation is different: in the first and second selects I calculate the previous months from the passed period parameter, whereas in the last select I do the calculation with the passed period itself. How is this possible with Discoverer?
Thnx in Advance
Rgds,
Sai Srivatshav
Hi,
Well there is no straight forward way to do that.
There is a work around to get that functionality by using outer tables or context.
Take a look at the following posts:
Re: Parameters in SubQuery
Re: Parameters in Discoverer Administration
Re: Passing multiple parameters into Custom Folder... -
Hi,
I have the two queries below. The 2nd query is taking more time and gives a timeout error. Can anybody tell me how to fine-tune it? Thanks.
1st Query.
SELECT EKET~EBELN EKET~EBELP EKET~ETENR EKET~EINDT
EKET~MENGE EKET~WEMNG
INTO TABLE I_EKET
FROM EKET
WHERE EKET~MENGE <> EKET~WEMNG
AND
EKET~EINDT IN S_EINDT.
DESCRIBE TABLE I_EKET LINES V_ZLINES.
IF V_ZLINES > 0.
2nd Query.
SELECT EKKO~EBELN EKKO~AEDAT EKKO~LIFNR EKPO~EBELP EKPO~MATNR
EKPO~WERKS
EKPO~LOEKZ EKPO~ELIKZ EKPO~TXZ01 EKPO~NETPR LFA1~NAME1
INTO TABLE I_PODEL
FROM EKKO
INNER JOIN EKPO ON EKKO~EBELN = EKPO~EBELN
INNER JOIN LFA1 ON EKKO~LIFNR = LFA1~LIFNR
FOR ALL ENTRIES IN I_EKET
WHERE EKKO~EBELN = I_EKET-EBELN AND
EKPO~EBELP = I_EKET-EBELP AND
EKPO~MATNR IN S_MATNR AND
EKPO~WERKS IN S_WERKS AND
EKPO~WERKS NE 'W001' AND
EKKO~EKORG = P_EKORG AND
EKKO~LIFNR IN S_LIFNR AND
EKKO~LOEKZ NE 'X' AND
EKPO~LOEKZ NE 'S' AND
EKPO~ELIKZ NE 'X' AND
EKPO~LOEKZ NE 'L' AND
EKKO~AEDAT IN S_AEDAT.
ELSE.
WRITE 'No POs found for the selection criteria!'.
ENDIF.
Not the right forum to ask this question.
VJ -
Tune query on dual table.
Hello,
I am using ORACLE 11g and RHEL 5.
I have a performance tuning issue.
One of my queries returns its output in 0.016 seconds, i.e. the one below:
select to_number((sysdate- to_date('1970-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS')) * 86400 -
((substr(TZ_OFFSET(sessiontimezone),1,
instr(TZ_OFFSET(sessiontimezone),':')-1) * 3600)+((substr(TZ_OFFSET(SESSIONTIMEZONE),
(instr(TZ_OFFSET(sessiontimezone),':')+1), 2)*60)))) TIME_STAMP ,
to_number(to_char(systimestamp,'FF')) MICROSEC
from dual
I want to optimize this query to such an extent that it gives the output in less than 10 microseconds.
Right now this query returns the output in 0.016 seconds... which is way too much for the need.
I could also live with it if the output time were constant every time.
For example:
if the query returns the output in 15 microseconds when fired for the first time, then it should return the output in 15 microseconds for all subsequent executions. If the query gets executed 10 times, then I should get the output in 15 microseconds all 10 times.
can anyone guide me how to tune this query to get the output in less than 10 microsecond ???
else
how can i execute this query to return the output in the same time at every execution ???
Thanks in advance...
Hi,
VJ4 wrote:
can anyone guide me how to tune this query to get the output in less than 10 microseconds???
I don't think it is possible using a (disk-based) database. Its purpose is not to measure time but to serve and manipulate relational data.
As far as i know, <10µs will be hard to achieve.
Maybe TimesTen IMDB (InMemoryDataBase) can do that...
but it wouldn't guarantee a constant execution time.
VJ4 wrote:
how can I execute this query so it returns the output in the same time at every execution???
Well, I don't think you can. The Oracle database is not real-time. The execution time will depend highly on several factors, like whether the parsed query is still in the shared pool, whether the database server is under heavy load, etc...
Moreover: assuming the query will always execute at exactly the same "speed" in order to measure how fast your processing unit is, is not a good idea, as it assumes your unit's processing speed is constant.
I think you should try to explain the big picture, and how you came to decide you need a query that return in less than 10µs.
Tell us why you cannot just execute it once in a while, and do the usual math:
processing speed = ("units processed at T2" - "units processed at T1") / (T2 - T1) -
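As an aside on the query itself: the session-time-zone arithmetic can be collapsed into a single UTC conversion. This is a sketch that is functionally close to the original (it simplifies the epoch-seconds term, not the timing requirement):

```sql
-- seconds since 1970-01-01 00:00:00 UTC, computed without
-- parsing TZ_OFFSET strings
SELECT (CAST(SYS_EXTRACT_UTC(SYSTIMESTAMP) AS DATE)
          - DATE '1970-01-01') * 86400        AS time_stamp,
       TO_NUMBER(TO_CHAR(SYSTIMESTAMP, 'FF')) AS microsec
  FROM dual;
```

A simpler statement parses and executes faster, but no rewrite will bring a round trip through the SQL layer under 10 microseconds.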
Hi
I have created more than 15 packages for a single application
I have used many procedures and functions on a number of tables.
After running, the project is taking a long, long time.
What is the best way to tune the PL/SQL queries?
And please explain how to tune the queries with an example.
I am very thankful... in advance
user1175507 wrote:
Hi
I have created more than 15 packages for a single application
i have used many procedures functions on no:of tables.
after running the project they have taking long longtime.
what is best way to tune the plsql query's
and please explain how to tune the query's with an example..
i am very thankfull ... in advance
You mean tune the SQL queries and tune the PL/SQL code, not "PL/SQL queries" :)
Ok, let's start from the begining :
- Are you a DBA ?
- Which Oracle version are you using ? -
Modify existing query in Discoverer Report
Hi,
1) I want to put an outer join to the existing discoverer work sheet query. Is there any way I could do it.
2) I am trying to modify the query via the Export and Import options in Discoverer Desktop, and I am getting the below error. All I did was add (+) to the existing query.
Error: No EUL folders based on table AR_LOOKUPS found.
was there any thing I am missing here?
Thank you,
Prashanth
There seems to be a little confusion in the details mentioned.
First check which folder and Business Area the report is based on. Then: is that folder a SQL folder or a table/view?
Then you may have to change the code in the source, adding outer joins etc., not in the Discoverer report.
From what I can tell, export/import does not work in this scenario. -
Hi Team,
Could you please tell me how to tune the below query?
UPDATE OBSERVATION_T
SET OBS_NOOBSDOC =
      (SELECT COUNT(*)
         FROM OBSERVATION_DOC_T od
        WHERE od.SEQ_NO_OBS = :1
          AND od.OBS_SEVERITY NOT IN ('K','X')),
    UPD_DATE = :2,
    VERSION_NO = VERSION_NO + 1,
    OBS_SEVERITY = :3,
    OBS_CORRSTAT = :4,
    OBS_MESSAGE = :5
WHERE SEQ_NO_OBS = :6
Please post the explain plan for your query.
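For reference, a plan for a statement like the one above can be captured as follows (the non-subquery SET columns are trimmed here for brevity):

```sql
EXPLAIN PLAN FOR
UPDATE observation_t
   SET obs_noobsdoc = (SELECT COUNT(*)
                         FROM observation_doc_t od
                        WHERE od.seq_no_obs = :1
                          AND od.obs_severity NOT IN ('K', 'X'))
 WHERE seq_no_obs = :6;

-- render the plan just captured into PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The key things to look for are the access paths on OBSERVATION_T and OBSERVATION_DOC_T: an index on OBSERVATION_DOC_T(SEQ_NO_OBS) usually matters most for this shape of update.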
-
How to tune query using dblink?
Hello,
I have query running on production DB(Version 9i).
The query is fetching data over the dblink.
It's taking around 14 minutes, and the total number of rows fetched is around 2 million.
Is there any way I can tune the query? I have used the DRIVING_SITE hint but that didn't help.
Ario wrote:
Hello,
I have query running on production DB(Version 9i).
The query is fetching data over the dblink.
Its taking around 14 minutes and total number of rows fetched are around 2 million.
Is there any way I can tune the query? I have used the DRIVING_SITE hint but that didn't help.
Do some math and tell us if there is a way you can tune this.
We have no idea what "2 million rows" means in terms of size. That could be 1GB of data, it could be 1PB of data. Figure out how much data you're trying to move (size wise), then figure out roughly how fast your network can pipe the data (again, we have no idea if you have a fiber line and this is an internal network between machines, or if you're transporting the data 1/2 way around the globe over a dial up connection).
When you have all that figured out you'll know if the 14 minutes is / or is not an acceptable amount of time.
Cheers, -
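For completeness, the DRIVING_SITE hint mentioned above names the alias of the remote table; a sketch (table, alias and link names are invented):

```sql
-- ask Oracle to run the join at the remote site and ship only
-- the joined result back over the database link
SELECT /*+ DRIVING_SITE(r) */ l.id, r.payload
  FROM local_table l,
       remote_table@prod_link r
 WHERE l.id = r.id;
```

Whether this helps depends entirely on which side of the link holds the bulk of the data; if 2 million rows must come back regardless, the network transfer dominates, as the reply above explains.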
Hi,
How can I tune a complex query?
What are the steps to follow?
A very good one:
http://www.dba-oracle.com/art_sql_tune.htm
Not particularly. At best it's a rambling collection of general principles where you have to work out which bits are right, out of date, or wrong. (Or - like the bit about Oracle 7 - so far out of date as to be a waste of space).
How much are you going to trust an article about SQL tuning that manages to say the following:
<i>
For example, a simple query such as “What students received an A last semester?” can be written in three ways, as shown in below, each returning an identical result.
</i>
A nested query:
SELECT *
FROM STUDENT
WHERE
student_id =
(SELECT student_id
FROM REGISTRATION
WHERE
grade = 'A');
A correlated subquery:
SELECT *
FROM STUDENT
WHERE
0 <
(SELECT count(*)
FROM REGISTRATION
WHERE
grade = 'A'
AND
student_id = STUDENT.student_id
);
Far from returning identical results, the first one returns multiple copies of each student's information, one row for each 'A' they got (along with the registration columns); the second will fail with Oracle error "ORA-01427: single-row subquery returns more than one row" if there is more than one 'A' in the entire registration table; and the third query returns the required result.
Funnily enough, the most "intuitive" query - which is the one with an existence subquery - is not mentioned.
The OP would be far better off starting with the Performance Tuning Guide from http://tahiti.oracle.com
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The greatest enemy of knowledge is not ignorance,
it is the illusion of knowledge." Stephen Hawking. -
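The existence-subquery form Jonathan alludes to would look roughly like this, using the same STUDENT and REGISTRATION tables as the quoted article:

```sql
-- one row per student who received at least one 'A',
-- with no duplication and no ORA-01427 risk
SELECT *
  FROM student s
 WHERE EXISTS (SELECT NULL
                 FROM registration r
                WHERE r.student_id = s.student_id
                  AND r.grade = 'A');
```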
Hi, I would like to ask the experts here: how could I fine-tune the below query? It currently returns data within 60 seconds; however, my client requires the data to return in under 5 seconds.
SELECT DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE) START_DATE,
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR) START_HOUR,
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER) IN_PARTNER,
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER) OUT_PARTNER,
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE) SUBSCRIBER_TYPE,
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE) TRAFFIC_TYPE,
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE) EVENT_TYPE,
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD) INTERVAL_PERIOD,
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD) THRESHOLD,
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP) CALLED_NO_GRP,
SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) EVENT_COUNT,
--SUM (DECODE (CURR.EVENT_DURATION, '', 0, CURR.EVENT_DURATION)) EVENT_DURATION,
--SUM (DECODE (CURR.DATA_VOLUME, '', 0, CURR.DATA_VOLUME)) DATA_VOLUME,
--AVG (DECODE (CURR.AVERAGE_DURATION, '', 0, CURR.AVERAGE_DURATION)) AVERAGE_DURATION,
SUM (DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV)) EVENT_COUNT_PREV,
--SUM ( DECODE (PREV.EVENT_DURATION_PREV, '', 0, PREV.EVENT_DURATION_PREV)) EVENT_DURATION_PREV,
--SUM (DECODE (PREV.DATA_VOLUME_PREV, '', 0, PREV.DATA_VOLUME_PREV)) DATA_VOLUME_PREV,
--AVG ( DECODE (PREV.AVERAGE_DURATION_PREV, '', 0, PREV.AVERAGE_DURATION_PREV)) AVERAGE_DURATION_PREV,
ABS ( SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) - SUM ( DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV))) EVENT_COUNT_DIFF
FROM ------------------------------- CURR
(SELECT START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
--rd_thr.param_value THRESHOLD,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--SUM (DATA_VOLUME) AS DATA_VOLUME,
--SUM (EVENT_DURATION) AS EVENT_DURATION,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION,
SUM (EVENT_COUNT) AS EVENT_COUNT
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC (SYSDATE - TO_DATE (START_DATE, 'YYYYMMDD')) <= rd_min.param_value
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TO_DATE (START_DATE, 'YYYYMMDD') < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
) CURR
FULL OUTER JOIN
---------------------------------- PREV --------------------------
(SELECT TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD') START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--rd_thr.param_value THRESHOLD,
SUM (EVENT_COUNT) AS EVENT_COUNT_PREV
--SUM (EVENT_DURATION) AS EVENT_DURATION_PREV,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION_PREV,
--SUM (DATA_VOLUME) AS DATA_VOLUME_PREV
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC ( SYSDATE - TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD'))) <= rd_min.param_value
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')) < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD'),
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
) PREV
-------------------------- join ------------------
ON ( CURR.START_DATE = PREV.START_DATE
AND CURR.START_HOUR = PREV.START_HOUR
AND CURR.IN_PARTNER = PREV.IN_PARTNER
AND CURR.OUT_PARTNER = PREV.OUT_PARTNER
AND CURR.SUBSCRIBER_TYPE = PREV.SUBSCRIBER_TYPE
AND CURR.TRAFFIC_TYPE = PREV.TRAFFIC_TYPE
AND CURR.INTERVAL_PERIOD = PREV.INTERVAL_PERIOD
--AND CURR.THRESHOLD = PREV.THRESHOLD
AND CURR.EVENT_TYPE = PREV.EVENT_TYPE
AND CURR.CALLED_NO_GRP = PREV.CALLED_NO_GRP)
GROUP BY DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE),
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR),
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER),
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER),
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE),
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE),
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD),
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD),
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE),
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP);
I changed the query as below; however, the performance is not much different compared to the original.
WITH CURR AS
(SELECT /*+ MATERIALIZE */ START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
--rd_thr.param_value THRESHOLD,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--SUM (DATA_VOLUME) AS DATA_VOLUME,
--SUM (EVENT_DURATION) AS EVENT_DURATION,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION,
SUM (EVENT_COUNT) AS EVENT_COUNT
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC (SYSDATE - TO_DATE (START_DATE, 'YYYYMMDD')) <= rd_min.param_value
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TO_DATE (START_DATE, 'YYYYMMDD') < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
), PREV AS
(SELECT /*+ MATERIALIZE */ TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD') START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--rd_thr.param_value THRESHOLD,
SUM (EVENT_COUNT) AS EVENT_COUNT_PREV
--SUM (EVENT_DURATION) AS EVENT_DURATION_PREV,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION_PREV,
--SUM (DATA_VOLUME) AS DATA_VOLUME_PREV
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC ( SYSDATE - TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD'))) <= rd_min.param_value
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')) < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD'),
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
)
SELECT /*+ USE_HASH(T1 T2) */ DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE) START_DATE,
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR) START_HOUR,
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER) IN_PARTNER,
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER) OUT_PARTNER,
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE) SUBSCRIBER_TYPE,
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE) TRAFFIC_TYPE,
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE) EVENT_TYPE,
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD) INTERVAL_PERIOD,
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD) THRESHOLD,
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP) CALLED_NO_GRP,
SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) EVENT_COUNT,
--SUM (DECODE (CURR.EVENT_DURATION, '', 0, CURR.EVENT_DURATION)) EVENT_DURATION,
--SUM (DECODE (CURR.DATA_VOLUME, '', 0, CURR.DATA_VOLUME)) DATA_VOLUME,
--AVG (DECODE (CURR.AVERAGE_DURATION, '', 0, CURR.AVERAGE_DURATION)) AVERAGE_DURATION,
SUM (DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV)) EVENT_COUNT_PREV,
--SUM ( DECODE (PREV.EVENT_DURATION_PREV, '', 0, PREV.EVENT_DURATION_PREV)) EVENT_DURATION_PREV,
--SUM (DECODE (PREV.DATA_VOLUME_PREV, '', 0, PREV.DATA_VOLUME_PREV)) DATA_VOLUME_PREV,
--AVG ( DECODE (PREV.AVERAGE_DURATION_PREV, '', 0, PREV.AVERAGE_DURATION_PREV)) AVERAGE_DURATION_PREV,
ABS ( SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) - SUM ( DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV))) EVENT_COUNT_DIFF
FROM CURR
FULL OUTER JOIN
PREV
ON ( CURR.START_DATE = PREV.START_DATE
AND CURR.START_HOUR = PREV.START_HOUR
AND CURR.IN_PARTNER = PREV.IN_PARTNER
AND CURR.OUT_PARTNER = PREV.OUT_PARTNER
AND CURR.SUBSCRIBER_TYPE = PREV.SUBSCRIBER_TYPE
AND CURR.TRAFFIC_TYPE = PREV.TRAFFIC_TYPE
AND CURR.INTERVAL_PERIOD = PREV.INTERVAL_PERIOD
--AND CURR.THRESHOLD = PREV.THRESHOLD
AND CURR.EVENT_TYPE = PREV.EVENT_TYPE
AND CURR.CALLED_NO_GRP = PREV.CALLED_NO_GRP)
GROUP BY DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE),
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR),
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER),
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER),
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE),
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE),
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD),
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD),
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE),
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP); -
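One avenue worth testing against the query above (a sketch only; the boundary behaviour must be verified before relying on it): both MSC_OUT_AGG filters wrap START_DATE in TO_DATE and date arithmetic, so they must be evaluated for every row. Moving the arithmetic to the SYSDATE side leaves START_DATE bare, and since 'YYYYMMDD' strings sort in date order, an index on (SOURCE, START_DATE) could then drive the access:

```sql
-- original shape: function applied to every START_DATE value,
-- so no index on START_DATE can be used
WHERE TRUNC(SYSDATE - TO_DATE(start_date, 'YYYYMMDD')) <= rd_min.param_value
  AND TO_DATE(start_date, 'YYYYMMDD') < SYSDATE - rd_max.param_value

-- candidate rewrite: START_DATE compared as a plain column; the
-- <=/< boundaries are close but not provably identical to the
-- original, so results must be compared before switching
WHERE start_date >= TO_CHAR(TRUNC(SYSDATE) - rd_min.param_value, 'YYYYMMDD')
  AND start_date <  TO_CHAR(SYSDATE - rd_max.param_value, 'YYYYMMDD')
```

Separately, the four self-joins to raid_t_parameters return one row each, so fetching those parameter values once up front (or letting scalar-subquery caching handle them) would remove repeated lookups from both branches of the full outer join.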
Hi,
Friends, I need some help.
I have to tune some queries.
1. How to find the last STATISTICS generation date?
2. How generate statistics in database level?
3. What is the difference between
>alter table emp compute statistics;
and
>alter table emp estimate statistics;
Can anyone give some good advice on tuning queries?
regards
Mathew
Hi,
Friends, I need some help.
I have to tune some queries.
1. How to find the last STATISTICS generation date?
Query the USER_TABLES/DBA_TABLES view, column LAST_ANALYZED.
2. How to generate statistics at the database level?
Use
ANALYZE TABLE <table_name> <statistic_collection_method>
Or
DBMS_STATS package
3. What is the difference between
alter table emp compute statistics;
and
alter table emp estimate statistics;
It should be ANALYZE TABLE emp COMPUTE STATISTICS;
and ANALYZE TABLE emp ESTIMATE STATISTICS;
COMPUTE STATISTICS samples all data in the table;
ESTIMATE only samples a certain percentage of the table's data.
Can anyone give some good advice to tune queries?
See the Oracle Performance Tuning Guide:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
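Both answers in one sketch (ANALYZE still works, but DBMS_STATS is the recommended route on recent versions; the table name is just an example):

```sql
-- 1. when were statistics last gathered?
SELECT table_name, last_analyzed
  FROM user_tables
 WHERE table_name = 'EMP';

-- 2. gather statistics; estimate_percent => 10 samples 10% of the
--    rows, while omitting it (or using 100) computes on all rows
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'EMP',
    estimate_percent => 10);
END;
/
```

For whole-schema or whole-database gathering, DBMS_STATS.GATHER_SCHEMA_STATS and GATHER_DATABASE_STATS take analogous arguments.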