How to optimize an aggregate query
There is a table, table1, with more than 3 lakh (300,000) records. It has an index on a column, say col1. When we issue a simple query, select count(col1) from table1, it takes about 1 minute to execute even though the index exists. Can anyone guide me on how to optimize it?
More information about the problem.
SQL> select count(r_object_id) from dmi_queue_item_s;
COUNT(R_OBJECT_ID)
292784
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname,pname,pval1,pval2
2 from sys.aux_stats$;
no rows selected
SQL> explain plan for
2 select count(r_object_id) from dmi_queue_item_s;
select count(r_object_id) from dmi_queue_item_s
ERROR at line 2:
ORA-02402: PLAN_TABLE not found
Similar Messages
-
How to optimize the select query that is executed in a cursor for loop?
Hi Friends,
I have executed the code below and clocked the times for every line of the code using DBMS_PROFILER.
CREATE OR REPLACE PROCEDURE TEST
AS
p_file_id NUMBER := 151;
v_shipper_ind ah_item.shipper_ind%TYPE;
v_sales_reserve_ind ah_item.special_sales_reserve_ind%TYPE;
v_location_indicator ah_item.exe_location_ind%TYPE;
CURSOR activity_c
IS
SELECT *
FROM ah_activity_internal
WHERE status_id = 30
AND file_id = p_file_id;
BEGIN
DBMS_PROFILER.start_profiler ('TEST');
FOR rec IN activity_c
LOOP
SELECT DISTINCT shipper_ind, special_sales_reserve_ind, exe_location_ind
INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
FROM ah_item --464000 rows in this table
WHERE item_id_edw IN (
SELECT item_id_edw
FROM ah_item_xref --700000 rows in this table
WHERE item_code_cust = rec.item_code_cust
AND facility_num IN (
SELECT facility_code
FROM ah_chain_div_facility --17 rows in this table
WHERE chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
AND div_id = (SELECT div_id
FROM ah_div --8 rows in this table
WHERE division = rec.division)));
END LOOP;
DBMS_PROFILER.stop_profiler;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
NULL;
WHEN TOO_MANY_ROWS
THEN
NULL;
END TEST;
The SELECT query inside the cursor FOR LOOP took 773 seconds.
I have tried using BULK COLLECT instead of cursor for loop but it did not help.
When I took out the select query separately and executed with a sample value then it gave the results in a flash of second.
All the tables have primary key indexes.
Any ideas what can be done to make this code perform better?
Thanks,
Raj.
As suggested, I'd try merging the queries into a single SQL statement. You could also rewrite your IN clauses as JOINs and see if that helps, e.g.
SELECT DISTINCT ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
WHERE ai.item_id_edw = aix.item_id_edw
AND aix.item_code_cust = rec.item_code_cust
AND aix.facility_num = acdf.facility_code
AND acdf.chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
AND acdf.div_id = ad.div_id
AND ad.division = rec.division;
ALSO: You are calling ah_internal_data_pkg.get_chain_id (p_file_id) every time. Why not call it once outside the loop and just use a variable in the inner query? That will avoid repeated context switching and improve speed.
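The hoisting advice in the last paragraph generalizes beyond PL/SQL: any loop-invariant call belongs outside the loop. A minimal Python sketch of the same refactoring (get_chain_id here is a stand-in for ah_internal_data_pkg.get_chain_id, not the real function):

```python
# Hypothetical expensive lookup whose argument never changes inside the loop.
def get_chain_id(file_id):
    # stand-in for ah_internal_data_pkg.get_chain_id
    return file_id * 10

def process_slow(records, file_id):
    results = []
    for rec in records:
        chain_id = get_chain_id(file_id)  # re-evaluated on every iteration
        results.append((rec, chain_id))
    return results

def process_fast(records, file_id):
    chain_id = get_chain_id(file_id)      # hoisted: evaluated exactly once
    return [(rec, chain_id) for rec in records]
```

Both return the same result; only the number of calls differs, which is exactly the context-switch saving described above.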
Edited by: Dave Hemming on Dec 3, 2008 9:34 AM -
How do you find whether a query touches aggregates or not?
Hi gurus
How do you find whether a query touches aggregates or not?
Thanks in advance
Raj
Hi Rajaiah.
You can test this from TA RSRT -> Execute and debug -> Display aggregate found.
Hope it helps.
BR
Stefan -
I am trying to create a multi provider: how to optimize my query performance?
hi,
I am creating a multi provider using four ODS objects. Can anyone let me know how to optimize my query performance, since my query takes a lot of time to execute?
If anyone has any docs on query optimization for queries built on a multi provider, please send them to my email id [email protected]
regds
haritha
Hi wond,
Thanks a lot for the quick response. Can you let me know how to create secondary indexes on the ODS and how the partitioning should be carried out?
If you have any docs or URLs, can you please share them? My email id is [email protected]
regds
haritha -
How can I change this query so I can display the name and scores in one row?
How can I change this query so I can add the ID from the table SPRIDEN?
As of now it is giving me what I want:
1,543 A05 24 A01 24 BAC 24 BAE 24 A02 20 BAM 20
in one line, but I would like to add the id and name that are stored in the table SPRIDEN.
SELECT sortest_pidm,
max(decode(rn,1,sortest_tesc_code)) tesc_code1,
max(decode(rn,1,score)) score1,
max(decode(rn,2,sortest_tesc_code)) tesc_code2,
max(decode(rn,2,score)) score2,
max(decode(rn,3,sortest_tesc_code)) tesc_code3,
max(decode(rn,3,score)) score3,
max(decode(rn,4,sortest_tesc_code)) tesc_code4,
max(decode(rn,4,score)) score4,
max(decode(rn,5,sortest_tesc_code)) tesc_code5,
max(decode(rn,5,score)) score5,
max(decode(rn,6,sortest_tesc_code)) tesc_code6,
max(decode(rn,6,score)) score6
FROM (select sortest_pidm,
sortest_tesc_code,
score,
row_number() over (partition by sortest_pidm order by score desc) rn
FROM (select sortest_pidm,
sortest_tesc_code,
max(sortest_test_score) score
from sortest,SPRIDEN
where
SPRIDEN_pidm =SORTEST_PIDM
AND sortest_tesc_code in ('A01','BAE','A02','BAM','A05','BAC')
and sortest_pidm is not null
GROUP BY sortest_pidm, sortest_tesc_code))
GROUP BY sortest_pidm;
Hi,
That depends on whether spriden_pidm is unique, and on what you want for results.
Whenever you have a problem, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables, and the results you want from that data.
If you can illustrate your problem using commonly available tables (such as those in the scott or hr schemas) then you don't have to post any sample data; just post the results you want.
Either way, explain how you get those results from that data.
Always say which version of Oracle you're using.
It looks like you're doing something similar to the following.
Using the emp and dept tables in the scott schema, produce one row of output per department showing the highest salary in each job, for a given set of jobs:
DEPTNO DNAME LOC JOB_1 SAL_1 JOB_2 SAL_2 JOB_3 SAL_3
20 RESEARCH DALLAS ANALYST 3000 MANAGER 2975 CLERK 1100
10 ACCOUNTING NEW YORK MANAGER 2450 CLERK 1300
30 SALES CHICAGO MANAGER 2850 CLERK 950
On each row, the jobs are listed in order by the highest salary.
This seems to be analogous to what you're doing. The roles played by sortest_pidm, sortest_tesc_code and sortest_test_score in your sortest table are played by deptno, job and sal in the emp table. The roles played by spriden_pidm, id and name in your spriden table are played by deptno, dname and loc in the dept table.
It sounds like you already have something like the query below, that produces the correct output, except that it does not include the dname and loc columns from the dept table.
SELECT deptno
, MAX (DECODE (rn, 1, job)) AS job_1
, MAX (DECODE (rn, 1, max_sal)) AS sal_1
, MAX (DECODE (rn, 2, job)) AS job_2
, MAX (DECODE (rn, 2, max_sal)) AS sal_2
, MAX (DECODE (rn, 3, job)) AS job_3
, MAX (DECODE (rn, 3, max_sal)) AS sal_3
FROM (
SELECT deptno
, job
, max_sal
, ROW_NUMBER () OVER ( PARTITION BY deptno
ORDER BY max_sal DESC
) AS rn
FROM (
SELECT e.deptno
, e.job
, MAX (e.sal) AS max_sal
FROM scott.emp e
, scott.dept d
WHERE e.deptno = d.deptno
AND e.job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY e.deptno
, e.job
)
)
GROUP BY deptno
;
Since dept.deptno is unique, there will only be one dname and one loc for each deptno, so we can change the query by replacing "deptno" with "deptno, dname, loc" throughout the query (except in the join condition, of course):
SELECT deptno, dname, loc -- Changed
, MAX (DECODE (rn, 1, job)) AS job_1
, MAX (DECODE (rn, 1, max_sal)) AS sal_1
, MAX (DECODE (rn, 2, job)) AS job_2
, MAX (DECODE (rn, 2, max_sal)) AS sal_2
, MAX (DECODE (rn, 3, job)) AS job_3
, MAX (DECODE (rn, 3, max_sal)) AS sal_3
FROM (
SELECT deptno, dname, loc -- Changed
, job
, max_sal
, ROW_NUMBER () OVER ( PARTITION BY deptno -- , dname, loc -- Changed
ORDER BY max_sal DESC
) AS rn
FROM (
SELECT e.deptno, d.dname, d.loc -- Changed
, e.job
, MAX (e.sal) AS max_sal
FROM scott.emp e
, scott.dept d
WHERE e.deptno = d.deptno
AND e.job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY e.deptno, d.dname, d.loc -- Changed
, e.job
)
)
GROUP BY deptno, dname, loc -- Changed
;
Actually, you can keep using just deptno in the analytic PARTITION BY clause. It might be a little more efficient to use just deptno, like I did above, but it won't change the results if you use all 3, provided there is only 1 dname and 1 loc per deptno.
By the way, you don't need so many sub-queries. You're using the inner sub-query to compute the MAX, and the outer sub-query to compute rn. Analytic functions are computed after aggregate functions, so you can do both in the same sub-query like this:
SELECT deptno, dname, loc
, MAX (DECODE (rn, 1, job)) AS job_1
, MAX (DECODE (rn, 1, max_sal)) AS sal_1
, MAX (DECODE (rn, 2, job)) AS job_2
, MAX (DECODE (rn, 2, max_sal)) AS sal_2
, MAX (DECODE (rn, 3, job)) AS job_3
, MAX (DECODE (rn, 3, max_sal)) AS sal_3
FROM (
SELECT e.deptno, d.dname, d.loc
, e.job
, MAX (e.sal) AS max_sal
, ROW_NUMBER () OVER ( PARTITION BY e.deptno
ORDER BY MAX (e.sal) DESC
) AS rn
FROM scott.emp e
, scott.dept d
WHERE e.deptno = d.deptno
AND e.job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY e.deptno, d.dname, d.loc
, e.job
)
GROUP BY deptno, dname, loc
;
This will work in Oracle 8.1 and up. In Oracle 11, however, it's better to use the SELECT ... PIVOT feature. -
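The MAX (DECODE (rn, ...)) pattern above is ordinary conditional aggregation, so the same shape works in any SQL dialect with window functions. A small SQLite sketch (CASE standing in for DECODE, toy rows standing in for scott.emp):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (deptno INT, job TEXT, sal INT);
INSERT INTO emp VALUES
  (10, 'MANAGER', 2450), (10, 'CLERK', 1300),
  (30, 'MANAGER', 2850), (30, 'CLERK', 950);
""")
# One row per deptno; jobs ranked by their highest salary,
# then spread across columns with conditional aggregation.
rows = conn.execute("""
SELECT deptno,
       MAX(CASE WHEN rn = 1 THEN job END)     AS job_1,
       MAX(CASE WHEN rn = 1 THEN max_sal END) AS sal_1,
       MAX(CASE WHEN rn = 2 THEN job END)     AS job_2,
       MAX(CASE WHEN rn = 2 THEN max_sal END) AS sal_2
FROM (
    SELECT deptno, job, max_sal,
           ROW_NUMBER() OVER (PARTITION BY deptno
                              ORDER BY max_sal DESC) AS rn
    FROM (
        SELECT deptno, job, MAX(sal) AS max_sal
        FROM emp
        GROUP BY deptno, job
    )
)
GROUP BY deptno
ORDER BY deptno
""").fetchall()
```

This is not Oracle syntax for syntax's sake: CASE is standard SQL, so the rewrite travels, while DECODE and PIVOT are Oracle-specific.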
Is there a better way to do this projection/aggregate query?
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDO's. Unlike 3rd party tools that go straight to the database we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDO's - no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
it takes <100ms to produce 5300 rows from 83000 assets.
The nearest I have managed with JDO is (pseudo code):
perform projection query to get t.description, d.description for every asset
loop on results
if this is first time we've had this combination of t.description,
d.description
perform aggregate query to get aggregates for this combination
The java code is below. It takes about 16000ms (with debug/trace logging
off, c.f. 100ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult(
"assetType.description, assetDescription.description");
q1.setOrdering(
"assetType.description ascending, "
+ "assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters(
"String myAssetType, String myAssetDescription");
q2.setFilter(
"assetType.description == myAssetType && "
+ "assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
Object[] cols = (Object[]) i.next();
String assetType = (String) cols[0];
String assetDescription = (String) cols[1];
String type_description =
assetDescription != null
? assetType + "~" + assetDescription
: assetType;
if (distinct.add(type_description)) {
Object[] cols2 =
(Object[]) q2.execute(assetType,
assetDescription);
// System.out.println(
// "type "
// + assetType
// + ", description "
// + assetDescription
// + ", count "
// + cols2[0]
// + ", sum "
// + cols2[1]);
}
}
q2.closeAll();
q1.closeAll();
Neil,
It sounds like the problem that you're running into is that Kodo doesn't
yet support the JDO2 grouping constructs, so you're doing your own
grouping in the Java code. Is that accurate?
We do plan on adding direct grouping support to our aggregate/projection
capabilities in the near future, but as you've noticed, those
capabilities are not there yet.
-Patrick
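Once direct grouping support exists, the whole inner loop collapses into one grouped query. A SQLite sketch of the difference between the two shapes (schema and names invented to mirror the assets example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assets (type_desc TEXT, desc_desc TEXT, purch_price INT);
INSERT INTO assets VALUES
  ('PC', 'desktop', 100), ('PC', 'desktop', 200),
  ('PC', NULL, 50), ('Printer', 'laser', 300);
""")

# N+1 style: one aggregate query per distinct combination,
# which is what the Java loop above does.
combos = conn.execute(
    "SELECT DISTINCT type_desc, desc_desc FROM assets").fetchall()
slow = {}
for t, d in combos:
    cnt, total = conn.execute(
        "SELECT COUNT(*), SUM(purch_price) FROM assets "
        "WHERE type_desc = ? AND desc_desc IS ?", (t, d)).fetchone()
    slow[(t, d)] = (cnt, total)

# Grouped style: one round trip, same aggregates.
fast = {(t, d): (cnt, total) for t, d, cnt, total in conn.execute(
    "SELECT type_desc, desc_desc, COUNT(*), SUM(purch_price) "
    "FROM assets GROUP BY type_desc, desc_desc")}

assert slow == fast
```

The 10x gap Neil measured is the per-combination round trips; the grouped form does the distinct-and-aggregate work in a single statement.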
-
How to optimize an XQuery expression?
hi,
I have a Berkeley DB XML database with the containers dicom.dbxml and instancemetadata.dbxml.
dicom.dbxml contains documents as follow:
<?xml version="1.0" encoding="UTF-8"?>
<instance docid="dicom_1009">
<dicom_item>
<dicom_header>
<dicom_tag group="0002" element="0000" vr="UL">194</dicom_tag>
<dicom_tag group="0002" element="0001" vr="OB"/>
<dicom_tag group="0002" element="0002" vr="UI">1.2.840.10008.5.1.4.1.1.2</dicom_tag>
<dicom_tag group="0002" element="0003" vr="UI">2.16.840.1.113662.2.1.4519.41582.4105152.419990505.410523251</dicom_tag>
<dicom_tag group="0002" element="0010" vr="UI">1.2.840.10008.1.2.1</dicom_tag>
<dicom_tag group="0002" element="0012" vr="UI">2.16.840.1.113662.2.1.1</dicom_tag>
<dicom_tag group="0002" element="0016" vr="AE">PHOENIXSCP</dicom_tag>
</dicom_header>
<dicom_body>
<dicom_tag group="0008" element="0000" vr="UL">596</dicom_tag>
<dicom_tag group="0008" element="0005" vr="CS">ISO_IR 100</dicom_tag>
<dicom_tag group="0008" element="0008" vr="CS">ORIGINAL\PRIMARY\AXIAL</dicom_tag>
<dicom_tag group="0008" element="0012" vr="DA">1999.05.05</dicom_tag>
<dicom_tag group="0008" element="0013" vr="TM">10:52:34.530000</dicom_tag>
<dicom_tag group="0008" element="0016" vr="UI">1.2.840.10008.5.1.4.1.1.2</dicom_tag>
<dicom_tag group="0008" element="0018" vr="UI">2.16.840.1.113662.2.1.4519.41582.4105152.419990505.410523251</dicom_tag>
<dicom_tag group="0008" element="0020" vr="DA">1999.05.05</dicom_tag>
<dicom_tag group="0008" element="0021" vr="DA">1999.05.05</dicom_tag>
<dicom_tag group="0008" element="0022" vr="DA">1999.05.05</dicom_tag>
<dicom_tag group="0008" element="0023" vr="DA">1999.05.05</dicom_tag>
<dicom_tag group="0008" element="0030" vr="TM">10:52:34.530000</dicom_tag>
<dicom_tag group="0008" element="0031" vr="TM">10:52:34.530000</dicom_tag>
<dicom_tag group="0008" element="0032" vr="TM">10:52:34.530000</dicom_tag>
<dicom_tag group="0008" element="0033" vr="TM">10:52:32.510000</dicom_tag>
<dicom_tag group="0008" element="0060" vr="CS">CTTR</dicom_tag>
</dicom_body>
</dicom_item>
</instance>
instancemetadata.dbxml contains documents as follow:
<?xml version="1.0" encoding="UTF-8"?>
<instancemetadata xmlns="imuba.med" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="imuba.med Instancemetadata.xsd">
<name/>
<notes/>
<id>instancemetadata_1</id>
<instanceid>dicom_1</instanceid>
<createusername>dd</createusername>
<createdate>Tue May 02 21:08:06 CEST 2006</createdate>
<lastmodusername>dd</lastmodusername>
<lastmoddate>Tue May 02 21:08:06 CEST 2006</lastmoddate>
</instancemetadata>
and I have this XQuery expression:
declare namespace n = "imuba.med";
declare variable $insCont external;
for $ins in collection(concat(concat("dbxml:containers/", string($insCont)),".dbxml"))/instance,
$met in collection("dbxml:containers/instancemetadata.dbxml")/n:instancemetadata
where
$ins/dicom_item/dicom_body/dicom_tag[@group='0008' and @element='0060'] = "CTTR" and
$ins/@docid = $met/n:instanceid
return
<row>
{ $ins/@docid }
{ $met/n:name }
{ $met/n:notes }
{ $met/n:id }
{ $met/n:instanceid }
{ $met/n:createusername }
{ $met/n:createdate }
{ $met/n:lastmodusername }
{ $met/n:lastmoddate }
</row>
With 5000 documents in the dicom container, the XQuery execution time is close to 10 seconds. I've tried to create indices using these commands:
XmlIndexSpecification is = xcDicom.getIndexSpecification();
is.addIndex("", "docid", "unique-node-attribute-equality-string");
and
XmlIndexSpecification iss = xcIns.getIndexSpecification();
iss.addIndex("imuba.med", "instanceid", "unique-node-element-equality-string");
With those, the execution time comes down to about 7-8 seconds, but that's still large (the database contains only 5000 documents).
Have you any idea how to optimize it? I suppose an index on the element I'm using in the WHERE clause would be helpful (dicom_item/dicom_body/dicom_tag[@group='0008' and @element='0060']). However, I haven't found a way to add an index on an element identified by an XPath expression.
thanks for any help
Darek
Hi Darek,
First off, why not try adding these indexes to see what happens:
is.addIndex("", "dicom_tag", "node-element-equality-string");
is.addIndex("", "group", "node-attribute-equality-string");
is.addIndex("", "element", "node-attribute-equality-string");
Secondly, what storage model are you using? I would expect you to get better query times using a NodeContainer, with the DBXML_INDEX_NODES flag enabled.
Thirdly, your "instance" document is not very "XML" like, so you will struggle to get very good query times using that format. If you have control over the format of the document, I would suggest incorporating one or more of the "group", "element", and "vr" attributes into the name of the element - so that you will get multiple elements with different names, instead of one element name with multiple permutations of attributes. Selecting an element by name will always be faster than selecting it by some kind of value.
Let me know how you get on with these suggestions,
John -
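John's third point, that matching on element name beats filtering permutations of attributes, can be illustrated outside BDB XML too. A small Python/ElementTree sketch (the renamed tag dicom_tag_0008_0060 is an invented example of folding the group/element attributes into the element name):

```python
import xml.etree.ElementTree as ET

# Original shape: one element name, keys carried as attributes.
generic = ET.fromstring(
    "<body>"
    "<dicom_tag group='0008' element='0060' vr='CS'>CTTR</dicom_tag>"
    "<dicom_tag group='0008' element='0020' vr='DA'>1999.05.05</dicom_tag>"
    "</body>")
# Attribute-based selection: every <dicom_tag> must be inspected.
val1 = [e.text for e in generic.findall("dicom_tag")
        if e.get("group") == "0008" and e.get("element") == "0060"]

# Reshaped: the key is part of the element name itself.
renamed = ET.fromstring(
    "<body>"
    "<dicom_tag_0008_0060 vr='CS'>CTTR</dicom_tag_0008_0060>"
    "<dicom_tag_0008_0020 vr='DA'>1999.05.05</dicom_tag_0008_0020>"
    "</body>")
# Name-based selection: a name index can satisfy this directly,
# with no per-element attribute comparison.
val2 = [e.text for e in renamed.findall("dicom_tag_0008_0060")]

assert val1 == val2 == ["CTTR"]
```

ElementTree does no indexing, so this only shows the query shapes are equivalent; the performance argument is about what a name index in BDB XML can then exploit.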
Performance optimization on select query for all entries
Hi All,
I want to optimize the select query in my Program.
The select query takes a lot of time to search for records matching the given condition in the where clause,
and, more interestingly, no records are fetched from the database because the where condition matches nothing.
It takes more than 30 minutes to search, and the result is "no record found".
Below is my select query. I have also created the secondary Index for the same.
In my opinion FOR ALL ENTRIES is taking a lot of time, because there are more than 1200 records in the internal table t_ajot.
select banfn bnfpo bsart txz01 matnr Werks lgort matkl reswk menge meins flief ekorg
INTO CORRESPONDING FIELDS OF TABLE t_req
FROM eban
FOR ALL ENTRIES IN t_ajot
WHERE matkl >= t_ajot-matkl_low
AND matkl <= t_ajot-matkl_high
AND werks = t_ajot-werks
AND loekz = ' '
AND badat IN s_badat
AND bsart = 'NB'.
Please suggest.
Hi,
that,
FOR ALL ENTRIES IN t_ajot
WHERE matkl >= t_ajot-matkl_low
AND matkl <= t_ajot-matkl_high
AND werks = t_ajot-werks
AND loekz = ' '
AND badat IN s_badat
AND bsart = 'NB'.
looks strange.
However:
What does your index look like?
What execution plan do you get?
What do the statistics look like?
What's the content of the variables t_ajot-... and s_badat?
Kind regards,
Hermann -
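Hermann's questions point at the usual fix: express the per-entry range check as one joined, set-oriented statement rather than FOR ALL ENTRIES comparisons row by row. A rough SQLite sketch of that shape (table contents are invented; in ABAP this would be a join or a ranges table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE eban   (banfn TEXT, matkl TEXT, werks TEXT);
CREATE TABLE t_ajot (matkl_low TEXT, matkl_high TEXT, werks TEXT);
INSERT INTO eban   VALUES ('PR1', '100', 'W1'), ('PR2', '250', 'W1'),
                          ('PR3', '300', 'W2');
INSERT INTO t_ajot VALUES ('100', '200', 'W1'), ('250', '350', 'W2');
""")
# A join expresses the per-entry range check in one statement;
# DISTINCT mirrors the de-duplication FOR ALL ENTRIES performs.
rows = conn.execute("""
SELECT DISTINCT e.banfn
FROM eban e
JOIN t_ajot a
  ON e.matkl BETWEEN a.matkl_low AND a.matkl_high
 AND e.werks = a.werks
ORDER BY e.banfn
""").fetchall()
```

The database can then use one index-driven plan over the whole selection set instead of evaluating 1200+ entry comparisons.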
What is the best way to Optimize a SQL query : call a function or do a join?
Hi, I want to know the best way to optimize a SQL query: call a function inside the SELECT statement, or do a simple join?
Hi,
If you're even considering a join, then it will probably be faster. As Justin said, it depends on lots of factors.
A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
You might choose to have a user-defined function even though you could get the same result with a join. That is, you realize that the function is slow, but you believe that the convenience of using a function is more important than better performance in that particular case. -
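To make the trade-off concrete: a per-row scalar function forces a callback for every output row, while a join is one set operation the optimizer can plan as a whole. A SQLite sketch with invented tables (both forms must return the same rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INT, dept_id INT);
CREATE TABLE depts  (dept_id INT PRIMARY KEY, dept_name TEXT);
INSERT INTO orders VALUES (1, 10), (2, 20), (3, 10);
INSERT INTO depts  VALUES (10, 'SALES'), (20, 'HR');
""")

# Per-row scalar function: the engine calls back into user code
# once for every output row.
lookup = dict(conn.execute("SELECT dept_id, dept_name FROM depts"))
conn.create_function("dept_name", 1, lambda dept_id: lookup[dept_id])
via_function = conn.execute(
    "SELECT id, dept_name(dept_id) FROM orders ORDER BY id").fetchall()

# Join: the lookup is one set-oriented operation.
via_join = conn.execute("""
    SELECT o.id, d.dept_name
    FROM orders o JOIN depts d ON d.dept_id = o.dept_id
    ORDER BY o.id
""").fetchall()

assert via_function == via_join
```

On three rows the difference is invisible; on millions, the per-row callback (and, in Oracle, the SQL-to-PL/SQL context switch) is typically what makes the function form slower than the join.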
How to optimize an MDX aggregation function containing "Exists"?
I have the following calculated measure:
sum(([D Player].[Player Name].[All],
exists([D Match].[Match Id].children,([D Player].[Player Name].currentmember,[Measures].[In Time]),"F Player In Match Stat" ))
,[Measures].[Goals])
Analyzing this calculated measure (the one with "nonempty") in MDX Studio shows "Function
'Exists' was used inside aggregation function - this disables block computation mode".
Mosha Pasumansky spoke about this in one of his posts titled "Optimizing
MDX aggregation functions" where he explains how to optimize MDX aggregation functions containing "Filter",
"NonEmpty", and "Union", but he said he didn't have time to write about Exists, CrossJoin, Descendants, or EXISTING (he posted this in Oct. 2008 and the busy man hasn't had time since then :P )... so does anyone know of an article that continues
where Mosha left off? How do you optimize an MDX aggregation function containing "Exists"? What can I do to achieve the same result as this calculated measure, but in block mode rather than cell-by-cell mode?
Sorry for the late reply.
I didn't check if your last proposed solution is faster or not, but I'm sorry to say that it gave the wrong result, look at this:
Player Name | Players Team | Goals Player Scored with Team | A | Team's Goals in Player's Played Matches
Lionel Messi | Argentina | 28 | 28 | 110
Lionel Messi | Barcelona | 341 | 330 | 978
The correct result should be like the green column. The last proposed solution in the red column.
If you look at the query in my first post you will find that the intention is to find the total number of goals a team scored in all matches a player participated in. So in the above example Messi scored 28 goals for Argentina (before the last world cup :) ), while the whole Argentinian team scored 110 goals (including Messi's goals) in those matches that Messi played even one minute in.
How to optimize the below select query
SELECT mvke~matnr "Material Number
mbew~bwkey "Plant
mvke~mvgr1 "Line of Business
mara~meins "Unit of measure
mara~ntgew "Net weight
mara~mhdrz "Remaining shelf life
mara~zzmax_exp_days "Max expiry days
makt~maktx "Material Description
tvm1t~bezei "Line of Business description
mbew~stprs "Standard price
mbew~vprsv "Price control (S/V)
mbew~verpr "Variable Price
FROM mvke INNER JOIN mara
ON mara~matnr EQ mvke~matnr
INNER JOIN makt
ON makt~matnr EQ mvke~matnr
INNER JOIN tvm1t
ON tvm1t~mvgr1 EQ mvke~mvgr1
INNER JOIN mbew
ON mbew~matnr EQ mara~matnr
AND
mbew~bwkey EQ mvke~vkorg
INTO TABLE gt_matdata
WHERE mbew~matnr IN s_matnr AND
mbew~bwkey IN s_werks AND
mvke~mvgr1 IN s_mvgr1 AND
makt~spras EQ sy-langu AND
tvm1t~spras EQ sy-langu.
Please advice.
Thanks and Regards
Syed Samdani
Hi,
You are taking different fields of data from five different tables it seems.
Well, you can take separate internal tables and fire five separate select queries using the FOR ALL ENTRIES condition after the first select statement, so that you get all the related data. Then take your final internal table and merge the data from all five internal tables into your final table.
Regards,
Jayadeep -
How to speed up this query?
I have created a demo table:
create table demo1(d date);
and insert some data to table:
begin
-- add 6000000 rows
for i in 1..1000000 loop
insert into demo1 values(trunc(sysdate-i));
insert into demo1 values(trunc(sysdate-i));
insert into demo1 values(trunc(sysdate-i));
insert into demo1 values(trunc(sysdate-i));
insert into demo1 values(trunc(sysdate-i));
insert into demo1 values(trunc(sysdate-i));
end loop;
commit;
end;
The query
select * from demo1
where d=to_date('25.10.2004','DD.MM.YYYY')
executed three times faster than
select * from demo1 where d=trunc(sysdate-1);
Why? How can I speed up this query if I do not want to use an index?
I have created index:
create index demo1_indx on demo1(d);
Execution time of the queries became identical (for this volume of data).
Connected to:
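For the index part of the question, the access-path change is easy to observe directly. A SQLite sketch using EXPLAIN QUERY PLAN (toy data, not the Oracle demo1 volume, but the full-scan versus index-search distinction is the same one the transcript below shows as TABLE ACCESS (FULL)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo1 (d TEXT)")
# 300 rows cycling over 30 dates: each date appears 10 times.
conn.executemany("INSERT INTO demo1 VALUES (?)",
                 [("2004-10-%02d" % (i % 30 + 1),) for i in range(300)])

def plan(sql):
    # Column 3 of each EXPLAIN QUERY PLAN row is the detail text.
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM demo1 WHERE d = '2004-10-25'")
conn.execute("CREATE INDEX demo1_indx ON demo1(d)")
after = plan("SELECT * FROM demo1 WHERE d = '2004-10-25'")

assert "SCAN" in before and "demo1_indx" not in before  # full table scan
assert "demo1_indx" in after                            # index search
```

With an equality predicate on an indexed column, the engine switches from reading every block to probing the index, which is why the original poster's timings converged after creating demo1_indx.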
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> create table demo1(d date);
Table created.
SQL> begin
2 for i in 1..1000000 loop
3 insert into demo1 values(trunc(sysdate-i));
4 insert into demo1 values(trunc(sysdate-i));
5 insert into demo1 values(trunc(sysdate-i));
6 insert into demo1 values(trunc(sysdate-i));
7 insert into demo1 values(trunc(sysdate-i));
8 insert into demo1 values(trunc(sysdate-i));
9 insert into demo1 values(trunc(sysdate-i));
10 insert into demo1 values(trunc(sysdate-i));
11 end loop;
12 commit;
13 end;
14 /
PL/SQL procedure successfully completed.
SQL> alter session set timed_statistics=true;
Session altered.
SQL> alter session set sql_trace=true;
Session altered.
SQL> set timing on;
SQL> set autotrace on;
SQL> select * from demo1 where d='25.10.2004';
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:10.70
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
s=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
Bytes=1431)
Statistics
29 recursive calls
1 db block gets
28988 consistent gets
13030 physical reads
1035300 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d='25.10.2004';
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:03.35
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
s=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
Bytes=1431)
Statistics
0 recursive calls
0 db block gets
14441 consistent gets
12837 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d='25.10.2004';
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:04.95
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
s=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
Bytes=1431)
Statistics
0 recursive calls
0 db block gets
14441 consistent gets
12757 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d='25.10.2004';
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:03.82
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
s=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
Bytes=1431)
Statistics
0 recursive calls
0 db block gets
14441 consistent gets
12752 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d=trunc(sysdate-3);
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:17.53
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Byte
s=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159
Bytes=1431)
Statistics
6 recursive calls
0 db block gets
14503 consistent gets
12758 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d=trunc(sysdate-3);
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:15.82
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Bytes=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159 Bytes=1431)
Statistics
0 recursive calls
0 db block gets
14441 consistent gets
12753 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d=trunc(sysdate-3);
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:14.56
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Bytes=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159 Bytes=1431)
Statistics
0 recursive calls
0 db block gets
14441 consistent gets
12758 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> select * from demo1 where d=trunc(sysdate-3);
D
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
25.10.04
8 rows selected.
Elapsed: 00:00:11.84
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Bytes=1431)
1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159 Bytes=1431)
Statistics
0 recursive calls
0 db block gets
14441 consistent gets
12757 physical reads
0 redo size
453 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL> alter session set sql_trace=false;
Session altered.
Elapsed: 00:00:00.00
SQL> alter session set timed_statistics=false;
Session altered.
Elapsed: 00:00:00.01
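Every run in the trace above full-scans roughly 14,400 blocks to return just 8 rows, whether the predicate is the string literal or trunc(sysdate-3). A minimal sketch of the usual fix, assuming demo1.d is a DATE column and no index on d exists yet (both inferred from the plans, not stated outright):

```sql
-- Create an index on the filter column so the 8 matching rows
-- can be located without scanning the whole table.
CREATE INDEX demo1_d_idx ON demo1 (d);

-- Refresh optimizer statistics so the CBO can cost the index path.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'DEMO1', cascade => TRUE);

-- Compare the column as a DATE, not a string, so no implicit
-- NLS-dependent conversion gets in the way.
SELECT * FROM demo1 WHERE d = TO_DATE('25.10.2004', 'DD.MM.YYYY');
```

With an index and fresh statistics in place, the plan should switch from the TABLE ACCESS (FULL) shown above to an index range scan touching a handful of blocks.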
SQL>
-
How to optimize multiple inserts at runtime?
Hello, guys,
I have a problem optimizing multiple inserts at runtime using Pro*C. The execution has the following form:
for (int i = 0; i < 100000; i++)
    EXEC SQL EXECUTE IMMEDIATE :QUERY[i];
EXEC SQL COMMIT WORK;
The QUERY strings are only known at runtime, and all of them insert into the same table with different VALUES clauses, e.g.
"INSERT INTO NSMALL (AN,DU,DE,AD,F1,F2,F3,F4,CAL,TYP,TS,TC,TSL,TCE,PC,RDU,ASD,AF,NETIDENT,ES,EF,LS,LF) VALUES('1',1,0,'','','','','','','',NULL,NULL,NULL,NULL,0,0,NULL,NULL,'',TO_DATE('19760101','YYYYMMDD'),TO_DATE('19760101','YYYYMMDD'),TO_DATE('19760101','YYYYMMDD'),TO_DATE('19760101','YYYYMMDD')) "
I have tried concatenating the queries with ';', enclosing them in "begin ... end", and executing them as a single SQL statement, but got less than a 10% improvement (100 inserts/batch).
Host arrays and the FORALL clause cannot be used in this case, since the table is not known until runtime.
So I have no idea how to attack this problem; could anyone tell me how to optimize it?
Thank you very much!
You are sending 100,000 insert statements to the database.
If you want better performance, then send only 1 statement that inserts 100,000 rows.
So get rid of the for-loop and issue this one instead:
insert into nsmall
( an
, du
, de
, ad
, f1
, f2
, f3
, f4
, cal
, typ
, ts
, tc
, tsl
, tce
, pc
, rdu
, asd
, af
, netident
, es
, ef
, ls
, lf
)
select '1'
, 1
, 0
, null
, null
, null
, null
, null
, null
, null
, null
, null
, null
, null
, 0
, 0
, null
, null
, null
, to_date('19760101','yyyymmdd')
, to_date('19760101','yyyymmdd')
, to_date('19760101','yyyymmdd')
, to_date('19760101','yyyymmdd')
from dual
connect by level <= 100000;
Regards,
Rob. -
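A follow-up to the batching discussion above: the single-SELECT rewrite works when every row is identical, but if the 100,000 rows genuinely differ, one middle ground is Oracle's multi-row INSERT ALL, which packs many VALUES clauses into a single statement and a single round trip. This is a hedged sketch only, with an abbreviated two-column list standing in for the real NSMALL columns:

```sql
-- Illustrative sketch: batch heterogeneous rows in one statement.
-- Only two columns are shown; substitute the full NSMALL column list.
INSERT ALL
  INTO nsmall (an, du) VALUES ('1', 1)
  INTO nsmall (an, du) VALUES ('2', 2)
  INTO nsmall (an, du) VALUES ('3', 3)
SELECT * FROM dual;
```

Generating such a statement for a few hundred rows at a time typically cuts round trips far more than wrapping individual INSERTs in a BEGIN ... END block does.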
Hi All,
How to get the physical SQL query for the OBIEE reports.
Thanks in advance,
Haree.
Hi Anitha,
Thanks for your reply.
I am getting an XML script in the log file (Settings > Administration > Manage Sessions > View Log).
How do I get the physical SQL query?
Thanks,
Haree -
How to tune a past SQL query?
Hi Team,
Straight to the issue --> I saw a query running for a long time, but by the time I began to trace it, it had already finished. Now how do I trace that specific SID and query?
I am working on a 10.2.0.4 version DB.
How can I trace an SQL query after its execution? Please provide the steps to begin with.
regards
dkoracle
dkoracle wrote:
How can I trace an SQL query after its execution?
That cannot be done after the fact; you can only enable tracing and run the query again:
ALTER SESSION SET SQL_TRACE=TRUE;
-- run query again
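That said, on 10.2 you can still recover the statement text and its cumulative cost after the fact from the shared pool (or from AWR history, if licensed). A sketch, assuming the slow statement is still cached; the LIKE filter is a placeholder to adapt:

```sql
-- Find recent expensive statements still in the shared pool.
SELECT sql_id,
       elapsed_time / 1e6 AS elapsed_secs,
       executions,
       sql_text
FROM   v$sql
WHERE  sql_text LIKE '%your_table%'   -- placeholder filter
ORDER  BY elapsed_time DESC;
```

Once you have the sql_id, you can pull its cached plan with DBMS_XPLAN.DISPLAY_CURSOR before re-running the statement under trace.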