Aggregate query.
I want to find out the difference between the average earnings of department no. 20 and department no. 30.
Could anyone help me with this?
Hi,
Welcome to the forum!
You can do that without sub-queries, like this:
SELECT AVG (CASE WHEN deptno = 20 THEN sal END)
- AVG (CASE WHEN deptno = 30 THEN sal END)
AS diff
FROM scott.emp
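If it helps to see the conditional-aggregation idea run end to end, here is a small sketch (Python/sqlite3 as a stand-in for Oracle; the emp rows are demo data, not the real scott.emp contents):

```python
import sqlite3

# Hypothetical in-memory copy of scott.emp (deptno, sal) -- demo data only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (deptno INTEGER, sal REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(20, 800), (20, 3000), (30, 1600), (30, 1250), (10, 5000)])

# Conditional aggregation: each AVG only sees the rows where its CASE
# expression yields a value; all other rows contribute NULL, which AVG skips.
row = conn.execute("""
    SELECT AVG(CASE WHEN deptno = 20 THEN sal END)
         - AVG(CASE WHEN deptno = 30 THEN sal END) AS diff
    FROM emp
""").fetchone()
print(row[0])  # avg(dept 20) - avg(dept 30) = 1900.0 - 1425.0 = 475.0
```

The same query text runs unchanged on Oracle against the real scott.emp.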
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all the tables involved, and the results you want from that data.
Explain, using specific examples, how you get those results from that data.
If you're using commonly available tables (such as those in the scott schema) then you don't need to post sample data; just make it clear which table(s) you're using, and post the results and the explanation.
Always say what version of Oracle you're using (e.g. 11.2.0.2.0).
See the forum FAQ {message:id=9360002}
Similar Messages
-
Is there a better way to do this projection/aggregate query?
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDO's. Unlike 3rd party tools that go straight to the database we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDO's - no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
it takes <100ms to produce 5300 rows from 83000 assets.
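Just to illustrate why the single GROUP BY is so much cheaper than issuing one aggregate query per distinct combination, here is a toy, runnable sketch (Python/sqlite3; the table contents are made up, only the query shape matches the SQL above):

```python
import sqlite3

# Toy stand-ins for assets / asset_types / asset_descriptions (demo data only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE asset_types (atype_no INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE asset_descriptions (adesc_no INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE assets (atype_no INTEGER, adesc_no INTEGER, purch_price REAL);
    INSERT INTO asset_types VALUES (1, 'Vehicle'), (2, 'Computer');
    INSERT INTO asset_descriptions VALUES (10, 'Truck');
    INSERT INTO assets VALUES (1, 10, 100.0), (1, 10, 50.0), (2, NULL, 20.0);
""")

# One round trip: the database does the grouping, instead of the application
# running one aggregate query per distinct (type, description) pair.
rows = conn.execute("""
    SELECT t.description, d.description, COUNT(*), SUM(a.purch_price)
    FROM assets a
    LEFT OUTER JOIN asset_descriptions d ON a.adesc_no = d.adesc_no
    JOIN asset_types t ON a.atype_no = t.atype_no
    GROUP BY t.description, d.description
    ORDER BY t.description, d.description
""").fetchall()
print(rows)
```
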
The nearest I have managed with JDO is (pseudo-code):
perform projection query to get t.description, d.description for every asset
loop on results
if this is first time we've had this combination of t.description,
d.description
perform aggregate query to get aggregates for this combination
The java code is below. It takes about 16000ms (with debug/trace logging
off, c.f. 100ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult("assetType.description, assetDescription.description");
q1.setOrdering("assetType.description ascending, assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters("String myAssetType, String myAssetDescription");
q2.setFilter("assetType.description == myAssetType && assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
    Object[] cols = (Object[]) i.next();
    String assetType = (String) cols[0];
    String assetDescription = (String) cols[1];
    String type_description = assetDescription != null
        ? assetType + "~" + assetDescription
        : assetType;
    if (distinct.add(type_description)) {
        Object[] cols2 = (Object[]) q2.execute(assetType, assetDescription);
        // System.out.println("type " + assetType
        //     + ", description " + assetDescription
        //     + ", count " + cols2[0] + ", sum " + cols2[1]);
    }
}
q2.closeAll();
q1.closeAll();
Neil,
It sounds like the problem that you're running into is that Kodo doesn't
yet support the JDO2 grouping constructs, so you're doing your own
grouping in the Java code. Is that accurate?
We do plan on adding direct grouping support to our aggregate/projection
capabilities in the near future, but as you've noticed, those
capabilities are not there yet.
-Patrick
-
Characteristic "XYZ" is compressed but is not in the aggregate/query
Hi there,
we have build couple of aggregates and receive the following message in debugging mode:
e.g. "Characteristic 0VALUATION is compressed but is not in the aggregate/query"
Does this mean we have to include the characteristic in the aggregate, or does it mean it is in the aggregate but not compressed?
Hi,
Is this a warning message or an error message?
It seems to be a warning message. It is just saying the characteristic is compressed but is not in the aggregates or the query. You can ignore it.
Regards,
Chandu. -
Right time to do aggregates/query caching
Hi
When is the right time to do aggregates/query caching? What's the business scenario?
Thanks.
Hi Jack,
Refer to the document link below for details.
https://websmp106.sap-ag.de/~sapidb/011000358700004339892004
Hope it helps.
Cheers
Anurag -
Aggregate Query Optimization (with indexes) for trivial queries
Table myTable, which is quite large, has an index on the month column.
"select max(month) from myTable" uses this index and returns quickly.
"select max(month) from myTable where 1 = 1" does not use this index, falls through to a full table scan, and takes a very long time.
Can this possibly be a genuine omission in the query optimizer, or is there some setting or other to convince it to perform the latter query more sanely?
Oracle 11.2.0.1
SQL> select table_name, num_rows from dba_tables where table_name = 'DWH_ADDRESS_MASTER';
TABLE_NAME NUM_ROWS
DWH_ADDRESS_MASTER 295729948
SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 8 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1 | 1 | 8 | 4 (0)| 00:00:01 |
SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER where 1 = 1;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 8 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | FIRST ROW | | 1 | 8 | 4 (0)| 00:00:01 |
| 3 | INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1 | 1 | 8 | 4 (0)| 00:00:01 |
-
How to optimize an aggregate query
There is a table, table1, with more than 3 lakh (300,000) records. It has an index on a column, say col1. When we issue a simple query, select count(col1) from table1, it takes about 1 minute to execute even though the index is there. Can anyone guide me on how to optimize it?
More information about the problem.
SQL> select count(r_object_id) from dmi_queue_item_s;
COUNT(R_OBJECT_ID)
292784
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname,pname,pval1,pval2
2 from sys.aux_stats$;
no rows selected
SQL> explain plan for
2 select count(r_object_id) from dmi_queue_item_s;
select count(r_object_id) from dmi_queue_item_s
ERROR at line 2:
ORA-02402: PLAN_TABLE not found -
Query rewrite doesn't work for an aggregate query but works for a join query
Dear experts,
Let me know what's wrong with
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
I have two MATERIALIZED VIEW:
A) -- Only join
CREATE MATERIALIZED VIEW "SCOTT"."TST_MV"
ENABLE QUERY REWRITE AS
SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
"T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
"T57410"."DISTANCE" "DISTANCE",
"T57410"."IS_LEAF" "IS_LEAF",
"T57460"."DEPARTMENTID" "DEPARTMENTID",
"T57460"."NAME" "NAME","T57460"."PARENT"
"PARENT","T57460"."SHORTNAME" "SHORTNAME",
"T57460"."SKIMOID" "SKIMOID"
FROM "BI_OIV_HIER" "T57410",
"BI_DEPARTMENTS" "T57460"
WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
B) -- Join with aggregation
CREATE MATERIALIZED VIEW "SCOTT"."TST_MV2"
("C41", "C42", "C43",
"C44", "C45", "C46",
"C47", "C48", "C49",
"C50", "C51", "C52",
"C53", "C54", "C55",
"C56", "C57", "C58",
"C59", "C60", "C61",
"INCIDENTTYPE")
ENABLE QUERY REWRITE
AS SELECT COUNT(T56454.TOTAL) AS c41,
T56840.CATEGORYID AS c42,
T56840.PARENT AS c43,
T56908.DOCSTATEID AS c44,
T56908.PARENT AS c45,
T56947.EXPIREDID AS c46,
T56947.PARENT AS c47,
T56986.ISSUESTATEID AS c48,
T56986.PARENT AS c49,
T57025.LOCATIONID AS c50,
T57025.PARENT AS c51,
T57064.NEWID AS c52,
T57064.PARENT AS c53,
T57103.PARENT AS c54,
T57103.RESOLUTIONID AS c55,
T57142.PARENT AS c56,
T57142.RESPONSIBLEID AS c57,
T57181.PARENT AS c58,
T57181.SOURCEID AS c59,
T57460.DEPARTMENTID AS c60,
T57460.PARENT AS c61,
T56454.INCIDENTTYPE
FROM BI_OIV_HIER T57410,
BI_DEPARTMENTS T57460,
BI_SOURCE_HIER T57176,
SOURCE T57181,
BI_RESPONSIBLE_HIER T57137,
RESPONSIBLE T57142,
BI_RESOLUTIONS_HIER T57098,
RESOLUTIONS T57103,
BI_NEW_HIER T57059,
NEW T57064,
BI_LOCATIONS_HIER T57020,
LOCATIONS T57025,
BI_ISSUESTATES_HIER T56981,
ISSUESTATES T56986,
BI_EXPIRED_HIER T56942,
EXPIRED T56947,
BI_DOCSTATES_HIER T56903,
DOCSTATES T56908,
BI_CATEGORY_HIER T56835,
CATEGORY T56840,
INCIDENTS T56454
WHERE T56454.RESOLUTION = T57098.MEMBER_KEY
AND T56454.CATEGORY = T56835.MEMBER_KEY
AND T56454.DOCSTATE = T56903.MEMBER_KEY
AND T56454.EXPIRED = T56942.MEMBER_KEY
AND T56454.ISSUESTATE = T56981.MEMBER_KEY
AND T56454.LOCATION = T57020.MEMBER_KEY
AND T56454.NEW = T57059.MEMBER_KEY
AND T56454.RESPONSIBLE = T57137.MEMBER_KEY
AND T56454.SOURCE = T57176.MEMBER_KEY
AND T56454.DEPARTMENTID = T57410.MEMBER_KEY
AND T56835.ANCESTOR_KEY = T56840.CATEGORYID
AND T56903.ANCESTOR_KEY = T56908.DOCSTATEID
AND T56942.ANCESTOR_KEY = T56947.EXPIREDID
AND T56981.ANCESTOR_KEY = T56986.ISSUESTATEID
AND T57020.ANCESTOR_KEY = T57025.LOCATIONID
AND T57059.ANCESTOR_KEY = T57064.NEWID
AND T57098.ANCESTOR_KEY = T57103.RESOLUTIONID
AND T57137.ANCESTOR_KEY = T57142.RESPONSIBLEID
AND T57176.ANCESTOR_KEY = T57181.SOURCEID
AND T57410.ANCESTOR_KEY = T57460.DEPARTMENTID
GROUP BY T56840.CATEGORYID,
T56840.PARENT,
T56908.DOCSTATEID,
T56908.PARENT,
T56947.EXPIREDID,
T56947.PARENT,
T56986.ISSUESTATEID,
T56986.PARENT,
T57025.LOCATIONID,
T57025.PARENT,
T57064.NEWID,
T57064.PARENT,
T57103.PARENT,
T57103.RESOLUTIONID,
T57142.PARENT,
T57142.RESPONSIBLEID,
T57181.PARENT,
T57181.SOURCEID,
T57460.DEPARTMENTID,
T57460.PARENT,
T56454.INCIDENTTYPE;
So, the optimizer uses query rewrite for
select * from TST_MV
but doesn't use query rewrite for
select * from TST_MV2
within one session.
select * from TST_MV should be read as underlying select for TST_MV:
SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
"T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
"T57410"."DISTANCE" "DISTANCE",
"T57410"."IS_LEAF" "IS_LEAF",
"T57460"."DEPARTMENTID" "DEPARTMENTID",
"T57460"."NAME" "NAME","T57460"."PARENT"
"PARENT","T57460"."SHORTNAME" "SHORTNAME",
"T57460"."SKIMOID" "SKIMOID"
FROM "BI_OIV_HIER" "T57410",
"BI_DEPARTMENTS" "T57460"
WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
So, select * from TST_MV2 should similarly be read as the underlying select for TST_MV2.
DBMS_STATS.GATHER_TABLE_STATS is done for each table and MV.
Please help to investigate the issue.
Why isn't TST_MV2 used for query rewrite?
Kind regards.
Hi Carlos,
It looks like you have more than one question in your posting. Would I be right in saying that you have an issue with how long Discoverer takes when compared with SQL, and a second issue with regards to MVs not being used? I will add some comments on both. If one of these is not an issue please inform.
Issue 1:
Have you compared the explain plan from Discoverer with SQL? You may need to use a tool like TOAD to see it.
Also, is Discoverer doing anything complicated with the data after it comes back? By complicated I mean do you have a large number of Page Items and / or Group Sorted items? SQL wouldn't have this overhead you see.
Because SQL would create a table, have you tried creating a table in Discoverer and seeing how long it takes?
Finally, what version of the database are you using?
Issue 2:
Your initial statement was that query rewrite works with several MV but not with others, yet in the body of the report you only show explain plans that do use the MV. Could you therefore go into some more detail regarding this situation.
Best wishes
Michael -
I have a partioned table containing 2002 data with the following format:
TAGNAME NOT NULL VARCHAR2(15)
DATA_TYPE NOT NULL CHAR(1)
DATA_DATE NOT NULL DATE
QUALITY CHAR(1)
AVERAGE NUMBER(22,6)
MINIMUM NUMBER(22,6)
MINTIME DATE
MAXIMUM NUMBER(22,6)
MAXTIME DATE
TOTAL NUMBER(22,6)
MINUTES_ON NUMBER(11)
MINUTES_OFF NUMBER(11)
MOTOR_STARTS NUMBER(11)
MOTOR_STOPS NUMBER(11)
CERTIFIED CHAR(1)
I need to build a query that will bring back the max(total) and min(total) and the dates they occurred for month-to-date and year-to-date. I have tried including the value in a subquery, but it fails on "group function not allowed here." I also need to keep the date to one returned value in the event that more than one date exists with the min/max value.
An example of one I tried just getting the min value and date returned for the year (the partition):
SELECT min(sd.total) as "Minimum",
(SELECT data_date
FROM sd_table
WHERE total = min(sd.total)
and tagname = sd.tagname
and rownum = 1) as "MaxDate"
FROM sd_table sd
SQL> /
(SELECT data_date FROM sh_table WHERE total = min(a.total)
ERROR at line 2:
ORA-00934: group function is not allowed here
Any suggestions??
Thanks.
Correction:
The examples that I gave above would have been for the last year or 365 days and the last month or 30 days. The following corrected version is for the current year and the current month up until today's date, which is what I think you mean by year-to-date and month-to-date.
scott@ORA92> -- test data
scott@ORA92> SELECT total, data_date FROM sd_table
2 /
TOTAL DATA_DATE
3 03-APR-04
20 24-APR-04
1 24-APR-04
3 02-APR-04
4 01-APR-04
2 03-FEB-04
10 24-JAN-04
7 rows selected.
scott@ORA92> COLUMN min_or_max FORMAT A10
scott@ORA92> SELECT 'Year-to-date' AS period,
2 'Minimum' AS min_or_max,
3 total,
4 data_date
5 FROM (SELECT total, data_date,
6 ROW_NUMBER () OVER (ORDER BY total ASC, data_date DESC) AS rk
7 FROM sd_table
8 WHERE data_date BETWEEN TRUNC (SYSDATE, 'YYYY') AND SYSDATE)
9 WHERE rk = 1
10 UNION ALL
11 SELECT 'Month-to-date' AS period,
12 'Minimum' AS min_or_max,
13 total,
14 data_date
15 FROM (SELECT total, data_date,
16 ROW_NUMBER () OVER (ORDER BY total ASC, data_date DESC) AS rk
17 FROM sd_table
18 WHERE data_date BETWEEN TRUNC (SYSDATE, 'MM') AND SYSDATE)
19 WHERE rk = 1
20 UNION ALL
21 SELECT 'Year-to-date' AS period,
22 'Maximum' AS min_or_max,
23 total,
24 data_date
25 FROM (SELECT total, data_date,
26 ROW_NUMBER () OVER (ORDER BY total DESC, data_date DESC) AS rk
27 FROM sd_table
28 WHERE data_date BETWEEN TRUNC (SYSDATE, 'YYYY') AND SYSDATE)
29 WHERE rk = 1
30 UNION ALL
31 SELECT 'Month-to-date' AS period,
32 'Maximum' AS min_or_max,
33 total,
34 data_date
35 FROM (SELECT total, data_date,
36 ROW_NUMBER () OVER (ORDER BY total DESC, data_date DESC) AS rk
37 FROM sd_table
38 WHERE data_date BETWEEN TRUNC (SYSDATE, 'MM') AND SYSDATE)
39 WHERE rk = 1
40 /
PERIOD MIN_OR_MAX TOTAL DATA_DATE
Year-to-date Minimum 2 03-FEB-04
Month-to-date Minimum 3 03-APR-04
Year-to-date Maximum 10 24-JAN-04
Month-to-date Maximum 4 01-APR-04
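The core trick in the query above is ROW_NUMBER, which keeps exactly one row per extreme even when several rows tie on the value. A minimal runnable sketch (Python/sqlite3, same sample rows, no date-window filter; add a data_date BETWEEN filter for the year-to-date or month-to-date variants):

```python
import sqlite3

# Demo sd_table with the same sample rows as above (dates as ISO strings).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sd_table (total INTEGER, data_date TEXT)")
conn.executemany("INSERT INTO sd_table VALUES (?, ?)", [
    (3, '2004-04-03'), (20, '2004-04-24'), (1, '2004-04-24'),
    (3, '2004-04-02'), (4, '2004-04-01'), (2, '2004-02-03'), (10, '2004-01-24'),
])

# ORDER BY total ASC, data_date DESC ranks the smallest total first and,
# among ties, the latest date first; rk = 1 keeps exactly one row.
min_row = conn.execute("""
    SELECT total, data_date
    FROM (SELECT total, data_date,
                 ROW_NUMBER() OVER (ORDER BY total ASC, data_date DESC) AS rk
          FROM sd_table)
    WHERE rk = 1
""").fetchone()
print(min_row)  # (1, '2004-04-24')
```

Flipping ASC to DESC on total gives the maximum with its date the same way.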
scott@ORA92> SELECT year_min.total AS ytd_min,
2 year_min.data_date AS y_mindate,
3 mon_min.total AS mtd_min,
4 mon_min.data_date AS m_mindate,
5 year_max.total AS ytd_max,
6 year_max.data_date AS y_maxdate,
7 mon_max.total AS mtd_max,
8 mon_max.data_date AS m_maxdate
9 FROM (SELECT total, data_date
10 FROM (SELECT total, data_date,
11 ROW_NUMBER () OVER (ORDER BY total ASC, data_date DESC) AS rk
12 FROM sd_table
13 WHERE data_date BETWEEN TRUNC (SYSDATE, 'YYYY') AND SYSDATE)
14 WHERE rk = 1) year_min,
15 (SELECT total, data_date
16 FROM (SELECT total, data_date,
17 ROW_NUMBER () OVER (ORDER BY total ASC, data_date DESC) AS rk
18 FROM sd_table
19 WHERE data_date BETWEEN TRUNC (SYSDATE, 'MM') AND SYSDATE)
20 WHERE rk = 1) mon_min,
21 (SELECT total, data_date
22 FROM (SELECT total, data_date,
23 ROW_NUMBER () OVER (ORDER BY total DESC, data_date DESC) AS rk
24 FROM sd_table
25 WHERE data_date BETWEEN TRUNC (SYSDATE, 'YYYY') AND SYSDATE)
26 WHERE rk = 1) year_max,
27 (SELECT total, data_date
28 FROM (SELECT total, data_date,
29 ROW_NUMBER () OVER (ORDER BY total DESC, data_date DESC) AS rk
30 FROM sd_table
31 WHERE data_date BETWEEN TRUNC (SYSDATE, 'MM') AND SYSDATE)
32 WHERE rk = 1) mon_max
33 /
YTD_MIN Y_MINDATE MTD_MIN M_MINDATE YTD_MAX Y_MAXDATE MTD_MAX M_MAXDATE
2 03-FEB-04 3 03-APR-04 10 24-JAN-04 4 01-APR-04 -
This is probably a relatively easy question as long as I explain it well enough:
I have two tables:
categorycodes and properties
categorycodes is a lookup table.
both tables have a field catcode that is a char(1) that contains matching data (only numbers 1 - 6)
CREATE TABLE CATEGORYCODES
(
CATCODE CHAR(1 BYTE) NOT NULL,
DESCRIPTION VARCHAR2(25 BYTE) NOT NULL,
CONSTRAINT CATEGORYCODES_PK PRIMARY KEY (CATCODE) ENABLE
);
catcode
1
2
3
4
5
6
The properties table has about 600,000 records. The properties table also has a field called parcelno which is a char(9). It contains a string of numbers and only numbers.
What I'd like is the following:
catcode, count(*)
1 580
2 300
3 3000
4 235
5 0
6 80
I limited the query results to make sure that there was a set that would not have all the catcodes in it. I am having trouble getting the one with zero to display. I know this has to do with how I'm doing the join, but I'm not sure what.
This is a sample of what I've tried:
select i.*
from
(select catcode, count(*)
from properties p
where substr(parcelno,1,3) = '871'
group by catcode) i
right outer join categorycodes cc
on i.catcode = cc.catcode;
I am not worried about situations where catcode is null in properties. Parcelno can never be null.
Hi,
It looks like your query should work; except you don't want to COUNT (*); that would make every number at least 1. COUNT (*) means count the total number of rows, no matter what is on them, so it will see the row with just the catcode from the lookup table that doesn't match anything, and count that as 1. You want to count the number of rows from the properties table, so count some column from the properties that can't be NULL.
Here's a slightly different way
SELECT c.catcode
, COUNT (p.catcode) AS cnt
FROM categorycodes c
LEFT OUTER JOIN properties p ON p.catcode = c.catcode
AND SUBSTR ( p.parcelno
, 1
, 3
) = '871'
If the condition about '871' is part of the join condition, then you don't need a sub-query.
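The two points above (count a column from the optional table, and put the '871' test in the join condition) can be checked with a small runnable sketch (Python/sqlite3; the rows are made-up demo values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categorycodes (catcode TEXT PRIMARY KEY);
    CREATE TABLE properties (catcode TEXT, parcelno TEXT);
    INSERT INTO categorycodes VALUES ('1'), ('2'), ('3');
    INSERT INTO properties VALUES
        ('1', '871000001'), ('1', '871000002'), ('3', '999000001');
""")

# COUNT(p.catcode) counts only matched property rows, so categories with no
# match report 0 rather than 1; putting the '871' test in the join condition
# (not the WHERE clause) keeps the unmatched categories in the result.
rows = conn.execute("""
    SELECT c.catcode, COUNT(p.catcode) AS cnt
    FROM categorycodes c
    LEFT OUTER JOIN properties p
      ON  p.catcode = c.catcode
      AND SUBSTR(p.parcelno, 1, 3) = '871'
    GROUP BY c.catcode
    ORDER BY c.catcode
""").fetchall()
print(rows)  # [('1', 2), ('2', 0), ('3', 0)]
```

Note catcode 3 shows 0: its only property fails the '871' test, but the LEFT JOIN keeps the category row.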
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables involved, and also post the results you want from that data.
Point out where the statement above is getting the wrong results, and explain, using specific examples, how you get the right results from the given data in those places.
Always say which version of Oracle you're using (e.g., 11.2.0.2.0).
See the forum FAQ: https://forums.oracle.com/message/9362002 -
Please help with optimizing aggregate query
Good Morning.
I hope this will be a simple question, so I won't give a lot of detail about the db unless you ask for clarification.
I have four tables I need to join. CLC can have many CLS, and a CLS can have many CUSTD, and a CUSTD can have many CUSTDR.
I need to add up all the rows in CUSTD for one CLC, and subtract the sum of all the rows in CUSTDR for that CLC.
I first tried this
SELECT SUM(custd.amount_owed) - SUM(custdr.amount_refunded)
but that doesn't work, because the amount owed is repeated for every amount refunded.
Then I tried using a subquery
SELECT
TO_CHAR(owed.amount - NVL(SUM(custdr.refund_amount),0), 'L999G999G999G999D00')
INTO
g$_value
FROM
claim_settlements cls
,customer_debts custd
,customer_debt_recoveries custdr
,(
SELECT
cls1.clc_id id
,NVL(SUM(custd1.owed_amount),0) amount
FROM
claim_settlements cls1
,customer_debts custd1
WHERE
custd1.st_table_short_name = 'CLS'
AND custd1.key_value = cls1.id
AND custd1.status != 'WO'
GROUP BY
cls1.clc_id
)owed
WHERE
custd.st_table_short_name = 'CLS'
AND custd.key_value = cls.id
AND custd.id = custdr.custd_id(+)
AND cls.clc_id = p$_key_value
AND owed.id = cls.clc_id
GROUP BY
owed.amount;
I would like to know if this is possible using an analytic sum. This query works, but I read that analytics are better than sub-queries, and I am still not sure of all the uses of analytic functions.
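The reason the first attempt over-counts is join fan-out: each debt row is repeated once per matching recovery row before SUM runs. A small demonstration (Python/sqlite3, hypothetical simplified tables, not the real schema):

```python
import sqlite3

# Minimal stand-ins for customer_debts / customer_debt_recoveries:
# one debt of 20 with two partial recoveries (5 and 3). Demo data only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE debts (id INTEGER, owed REAL);
    CREATE TABLE recoveries (debt_id INTEGER, refunded REAL);
    INSERT INTO debts VALUES (1, 20.0);
    INSERT INTO recoveries VALUES (1, 5.0), (1, 3.0);
""")

# Naive join: the single debt row fans out to two joined rows, so SUM(owed)
# double-counts it (40 instead of 20), giving 40 - 8 = 32.
naive = conn.execute("""
    SELECT SUM(d.owed) - SUM(r.refunded)
    FROM debts d JOIN recoveries r ON r.debt_id = d.id
""").fetchone()[0]

# Aggregate each side before combining, so each amount is counted once:
# 20 - 8 = 12.
correct = conn.execute("""
    SELECT (SELECT SUM(owed) FROM debts)
         - (SELECT COALESCE(SUM(refunded), 0) FROM recoveries)
""").fetchone()[0]
print(naive, correct)  # 32.0 12.0
```

This is why the working version sums the debts in a separate inline view before joining to the recoveries.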
Thanks
Message was edited by:
dmill
Updated my question.
Message was edited by:
dmill
Thanks, Nic.
That is the kind of query I was imaging, so I hope we can get it to work.
Here is information about the tables and the data:
DESC customer_debts
Name Null Type
ID NOT NULL NUMBER(28)
CUSTA_ID NOT NULL NUMBER(28)
GLTT_ID NOT NULL NUMBER(28)
ST_TABLE_SHORT_NAME NOT NULL VARCHAR2(10)
KEY_VALUE NOT NULL NUMBER(28)
OWED_AMOUNT NOT NULL NUMBER(11,2)
OWED_BY_CUSTOMER NOT NULL VARCHAR2(1)
STATUS NOT NULL VARCHAR2(2)
NOTES VARCHAR2(4000)
DATE_CREATED NOT NULL DATE
CREATED_BY NOT NULL VARCHAR2(30)
DATE_MODIFIED DATE
MODIFIED_BY VARCHAR2(30)
13 rows selected
DESC customer_debt_recoveries
Name Null Type
ID NOT NULL NUMBER(28)
CUSTD_ID NOT NULL NUMBER(28)
ST_TABLE_FOR_PAID_BY NOT NULL VARCHAR2(10)
KEY_VALUE_FOR_PAID_BY NOT NULL NUMBER(28)
REFUND_AMOUNT NOT NULL NUMBER(11,2)
REFUND_CHECK_NUMBER NUMBER(9)
DATA_SOURCE NOT NULL VARCHAR2(1)
NOTES VARCHAR2(4000)
DATE_CREATED NOT NULL DATE
CREATED_BY NOT NULL VARCHAR2(30)
DATE_MODIFIED DATE
MODIFIED_BY VARCHAR2(30)
12 rows selected
customer_debts custd
ID CSL_ID OWED_AMOUNT
1 4143802 20
2 4143802 10
3 4143802 10
5 4143796 10
6 4143806 10
7 999999999 20
8 999999999 10
9 999999999 10
11 4143802 100
9 rows selected
customer_debt_recoveries custdr
ID CUSTD_ID REFUND_AMOUNT
1 5 10
2 1 27
3 1 5
3 rows selected
claim_charges clc and claim_settlements cls
CLC_ID CLS_ID
537842 4143802
537842 999999999
538057 4143796
538209 4143806
4 rows selected
The clc is the object that we want the information for. For example, clc 537842 should return the amount of 148, which is the sum of all the debts minus the sum of all the recoveries.
clc 4143796 would return 0
and 4143806 would return 10
When I run your query for clc 537842, which looks like this with joins
select distinct
clc.id clc_id
,sum(custd.owed_amount) over (partition by clc.id, cls.id, custd.id)
- sum(custdr.refund_amount) over (partition by clc.id, cls.id, custd.id, custdr.id) amount_owed
from
claim_charges clc
,claim_settlements cls
,customer_debts custd
,customer_debt_recoveries custdr
where
cls.clc_id = clc.id
AND custd.key_value = cls.id
AND custd.st_table_short_name = 'CLS'
AND custd.id = custdr.custd_id (+)
AND custd.status != 'WO'
AND clc.id = 537842
I get this result:
CLC_ID AMOUNT_OWED
537842
537842 35
537842 13
3 rows selected
Thanks again, and let me know if this is not enough info.
Question on an aggregate query
DB Version:10gR2
I have four tables : container1, container2, container3, container4. All these tables have the same structure(have two columns : ITEM_TYPE, QUANTITY). ITEM_TYPE columns in all these tables are interlinked.
create table container1
(item_type VARCHAR2(35),--Container1's PK
quantity number
create table container2
(item_type VARCHAR2(35),--Container2's PK
quantity number
create table container3
(item_type VARCHAR2(35),--Container3's PK
quantity number
create table container4
(item_type VARCHAR2(35),--Container4's PK
quantity number
insert into container1 values ('APPLE',15)
insert into container2 values ('APPLE',20)
insert into container3 values ('APPLE',30)
insert into container4 values ('APPLE',45)
insert into container1 values ('ORANGE',5)
insert into container2 values ('ORANGE',10)
insert into container3 values ('ORANGE',25)
insert into container4 values ('ORANGE',30)
SELECT * FROM CONTAINER1;
ITEM_TYPE QUANTITY
APPLE 15
ORANGE 5
SELECT * FROM CONTAINER2;
ITEM_TYPE QUANTITY
APPLE 20
ORANGE 10
SELECT * FROM CONTAINER3;
ITEM_TYPE QUANTITY
APPLE 30
ORANGE 25
SELECT * FROM CONTAINER4;
ITEM_TYPE QUANTITY
APPLE 45
ORANGE 30
I want to generate a report which will return the sum of all the quantities (15+20+30+45) in these 4 tables for a particular item_type.
The result should look like
ITEM_TYPE TOTAL_QUANTITY
APPLE 110
ORANGE 70
Can I do this like
SELECT item_type,
--logic required
--logic required
FROM container1 INNER JOIN container2
INNER JOIN container3 INNER JOIN container4
ON container1.item_type=container2.item_type
AND container3.item_type=container4.item_type
GROUP BY item_type;
or should I use more complex grouping/aggregate functions?
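An alternative worth knowing is UNION ALL plus GROUP BY, which still works if an item is missing from one of the containers (a 4-way inner join would drop it). A runnable sketch (Python/sqlite3, using the sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE container1 (item_type TEXT, quantity INTEGER);
    CREATE TABLE container2 (item_type TEXT, quantity INTEGER);
    CREATE TABLE container3 (item_type TEXT, quantity INTEGER);
    CREATE TABLE container4 (item_type TEXT, quantity INTEGER);
    INSERT INTO container1 VALUES ('APPLE', 15), ('ORANGE', 5);
    INSERT INTO container2 VALUES ('APPLE', 20), ('ORANGE', 10);
    INSERT INTO container3 VALUES ('APPLE', 30), ('ORANGE', 25);
    INSERT INTO container4 VALUES ('APPLE', 45), ('ORANGE', 30);
""")

# UNION ALL stacks all four tables into one rowset, then a single GROUP BY
# sums per item_type -- no join needed.
rows = conn.execute("""
    SELECT item_type, SUM(quantity) AS total_quantity
    FROM (SELECT item_type, quantity FROM container1
          UNION ALL SELECT item_type, quantity FROM container2
          UNION ALL SELECT item_type, quantity FROM container3
          UNION ALL SELECT item_type, quantity FROM container4)
    GROUP BY item_type
    ORDER BY item_type
""").fetchall()
print(rows)  # [('APPLE', 110), ('ORANGE', 70)]
```
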
Edited by: user10450365 on Jan 7, 2009 1:20 AM
Else you can do this:
SQL> SELECT
2 container1.item_type,
3 (container1.quantity + container2.quantity + container3.quantity + container4.quantity) qty
4 FROM
5 container1
6 INNER JOIN CONTAINER2 on (container1.item_type = container2.item_type)
7 INNER JOIN CONTAINER3 on (container1.item_type = container3.item_type)
8 INNER JOIN CONTAINER4 on (container1.item_type = container4.item_type)
9 /
ITEM_TYPE QTY
APPLE 110
ORANGE 70 -
Aggregate query on global cache group table
Hi,
I set up two global cache nodes. As we know, a global cache group is dynamic.
The cache group can be dynamically loaded by primary key or foreign key, as I understand it.
There are three records in oracle cache table, and one record is loaded in node A, and the other two records in node B.
Oracle:
1 Java
2 C
3 Python
Node A:
1 Java
Node B:
2 C
3 Python
If I select count(*) in Node A or Node B, the result respectively is 1 and 2.
The questions are:
How can I get the real count of 3?
Is it reasonable to do this query on a global cache group table?
I have one idea: create another read-only node for aggregation queries, but it seems weird.
Thanks very much.
Regards,
Nesta
Edited by: user12240056 on Dec 2, 2009 12:54 AM
Do you mean something like
UPDATE sometable SET somecol = somevalue;
where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then maybe you would adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
Chris -
I was wondering how to rewrite a query like this to include the group-by attribute and the count (even if it's zero). My example will eliminate the row if the count is zero.
Company (c_id)
Client (c_id, client_id)
Bonus (client_id, approved)
select
company.c_id,
count(bonus.b_id),
count(bonus2.b_id)
from
db.company company
left outer join
db.client client on client.c_id=company.c_id
inner join
db.bonus bonus on bonus.client_id=client.client_id
inner join
db.bonus bonus2 on bonus2.client_id=client.client_id
where
bonus.approved=1 and bonus2.approved=0
group by company.c_id
I can do it with a where c_id=x, but that doesn't help me. Thanks for any help.
Still not sure, but it is counting for 0; it will not count NULL:
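The NULL-skipping behavior of COUNT(column) can also be checked with a quick script (Python/sqlite3, same demo values as the session below):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (id INTEGER, counta INTEGER, countb INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                 [(1, 0, 1), (2, 0, 0), (3, 5, 6), (4, None, None)])

# COUNT(col) counts non-NULL values: a zero counts as one value present,
# while a NULL contributes nothing to the count.
rows = conn.execute(
    "SELECT id, COUNT(counta), COUNT(countb) FROM tab GROUP BY id ORDER BY id"
).fetchall()
print(rows)  # [(1, 1, 1), (2, 1, 1), (3, 1, 1), (4, 0, 0)]
```
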
dev>with tab
2 as (select 1 as id, 0 as counta, 1 countb from dual
3 union all
4 select 2 as id, 0 counta, 0 countb from dual
5 union all
6 select 3 as id, 5 as counta , 6 countb from dual
7 union all
8 select 4 as id, null counta, null countb from dual
9 )
10 select id,count(counta),count(countb) from tab group by id;
ID COUNT(COUNTA) COUNT(COUNTB)
1 1 1
2 1 1
3 1 1
4 0 0
-
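One way to keep the zero-count rows (a sketch, not the only possible rewrite): replace the two INNER JOINs with LEFT OUTER JOINs and count conditionally with SUM over CASE, so companies with no matching bonus rows still appear. The schema and data below are illustrative, run here against SQLite:

```python
import sqlite3

# Company 1 has one approved and one unapproved bonus; company 2 has a
# client but no bonus rows at all, yet it must still appear with zeros.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company (c_id INTEGER PRIMARY KEY);
CREATE TABLE client  (c_id INTEGER, client_id INTEGER PRIMARY KEY);
CREATE TABLE bonus   (b_id INTEGER PRIMARY KEY, client_id INTEGER, approved INTEGER);
INSERT INTO company VALUES (1), (2);
INSERT INTO client  VALUES (1, 10), (2, 20);
INSERT INTO bonus   VALUES (100, 10, 1), (101, 10, 0);
""")

rows = conn.execute("""
SELECT company.c_id,
       SUM(CASE WHEN bonus.approved = 1 THEN 1 ELSE 0 END) AS approved_cnt,
       SUM(CASE WHEN bonus.approved = 0 THEN 1 ELSE 0 END) AS rejected_cnt
FROM company
LEFT OUTER JOIN client ON client.c_id = company.c_id
LEFT OUTER JOIN bonus  ON bonus.client_id = client.client_id
GROUP BY company.c_id
ORDER BY company.c_id
""").fetchall()
print(rows)  # -> [(1, 1, 1), (2, 0, 0)]
```

For the LEFT JOIN rows with no bonus, `bonus.approved` is NULL, so neither CASE branch matches the WHEN and both fall through to ELSE 0, giving the zero counts that the original pair of INNER JOINs filtered out.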
Query with aggregate on custom mapping returning wrong type
I've got a JDOQL query that returns the sum of a single column, where that
column is custom-mapped, but the result I get back is losing precision.
I create the JDOQL query as normal and set the result to the aggregate
expression:
KodoQuery query = (KodoQuery) pm.newQuery(candidateClass, filter);
query.setResult("sum(amount)");
I can also setUnique for good measure as I am expecting just 1 row back:
query.setUnique(true);
The query returns an Integer, but my amount column is a decimal with 5
digits after the decimal point. If I ask for a Double or BigDecimal as the
resultClass, it does return an object of that type, but loses all
precision after the decimal point:
query.setResultClass(BigDecimal.class);
The amount field in my candidate class is of the class Money, a class that
encapsulates a currency and a BigDecimal amount. See
http://www.martinfowler.com/ap2/quantity.html
It is mapped as a custom money mapping to an amount and currency column,
based on the custom mapping in the Kodo examples. I have tried mapping the
amount as a BigDecimal value, and querying the sum of this works. So the
problem seems to be the aggregate query on my custom mapping. Do I need to
write some code for my custom mapping to be able to handle aggregates?
Thanks,
Alex
Can you post your custom mapping?
Also, does casting the value have any effect?
q.setResult ("sum((BigDecimal) amount)");
-
Report is not fetching the data from Aggregate..
Hi All,
I am facing a problem with aggregates.
For example, when I run the report using transaction RSRT2, the BW report is not fetching the data from aggregates; instead of going to the aggregate, it scans the whole cube data.
FYI, I checked that the characteristics exactly match the aggregates,
and also it is giving the message as:
<b>Characteristic 0G_CWWPTY is compressed but is not in the aggregate/query</b>
Can somebody explain this error message? Please let me know the solution as soon as possible.
Thankyou in advance.
With regards,
Hari
Hi
Deactivate the aggregates, rebuild the indexes, and then activate the aggregates again.
GTR