Question on an aggregate query
DB Version: 10gR2
I have four tables: container1, container2, container3, and container4. They all have the same structure (two columns: ITEM_TYPE and QUANTITY), and the ITEM_TYPE columns are related across the tables.
create table container1
(item_type varchar2(35), -- Container1's PK
 quantity  number
);
create table container2
(item_type varchar2(35), -- Container2's PK
 quantity  number
);
create table container3
(item_type varchar2(35), -- Container3's PK
 quantity  number
);
create table container4
(item_type varchar2(35), -- Container4's PK
 quantity  number
);
insert into container1 values ('APPLE', 15);
insert into container2 values ('APPLE', 20);
insert into container3 values ('APPLE', 30);
insert into container4 values ('APPLE', 45);
insert into container1 values ('ORANGE', 5);
insert into container2 values ('ORANGE', 10);
insert into container3 values ('ORANGE', 25);
insert into container4 values ('ORANGE', 30);
SELECT * FROM CONTAINER1;
ITEM_TYPE QUANTITY
APPLE 15
ORANGE 5
SELECT * FROM CONTAINER2;
ITEM_TYPE QUANTITY
APPLE 20
ORANGE 10
SELECT * FROM CONTAINER3;
ITEM_TYPE QUANTITY
APPLE 30
ORANGE 25
SELECT * FROM CONTAINER4;
ITEM_TYPE QUANTITY
APPLE 45
ORANGE 30

I want to generate a report that returns, for a particular item_type, the sum of the quantities across these four tables (e.g. 15+20+30+45 for APPLE).
The result should look like
ITEM_TYPE TOTAL_QUANTITY
APPLE 110
ORANGE 70

Can I do this with something like:
SELECT item_type,
       --logic required
FROM container1
INNER JOIN container2 ON container1.item_type = container2.item_type
INNER JOIN container3 ON container2.item_type = container3.item_type
INNER JOIN container4 ON container3.item_type = container4.item_type
GROUP BY item_type;

or should I use more complex grouping/aggregate functions?
Edited by: user10450365 on Jan 7, 2009 1:20 AM
Alternatively, you can do this:
SQL> SELECT
2 container1.item_type,
3 (container1.quantity + container2.quantity + container3.quantity + container4.quantity) qty
4 FROM
5 container1
6 INNER JOIN CONTAINER2 on (container1.item_type = container2.item_type)
7 INNER JOIN CONTAINER3 on (container1.item_type = container3.item_type)
8 INNER JOIN CONTAINER4 on (container1.item_type = container4.item_type)
9 /
ITEM_TYPE QTY
APPLE 110
ORANGE 70
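Note that the join version above only returns a row when an item_type is present in all four tables; a row missing from any one container silently drops that item from the report. A more defensive sketch (assuming the four tables keep this identical two-column structure) stacks the tables with UNION ALL and aggregates once:

```sql
-- Sum quantities per item_type across all four containers.
-- UNION ALL keeps duplicates, so every row contributes to the total.
SELECT   item_type,
         SUM(quantity) AS total_quantity
FROM (
         SELECT item_type, quantity FROM container1
         UNION ALL
         SELECT item_type, quantity FROM container2
         UNION ALL
         SELECT item_type, quantity FROM container3
         UNION ALL
         SELECT item_type, quantity FROM container4
)
GROUP BY item_type
ORDER BY item_type;
```

With the sample data this yields APPLE 110 and ORANGE 70, and it keeps working if, say, ORANGE were missing from container3.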
Similar Messages
-
Questions in Ad Hoc Query & How to Configure the EEO standard reports
Hi all,
I have a question in Ad hoc query report in HR.
<b>How to:</b> Get the total number of employees included in a particular report at the end of the report. For example, if I create and run a report for salaried employees, sorted by company code, how can I get a sub-total and the total number of employees listed in the report?
I tried the Ranked format, but when you print the report it doesn't retain the report name at the top.
-->I have a question regarding the standard reports for EEO and AAP.
<b>How do I</b>
1. Start configuring these reports?
2. What should I have in place before configuring them in the IMG?
If anyone can provide me with documentation on EEO and AAP report configuration, that would be great.
Thanks in advance.
Harish

This can be done using security for the InfoProvider: give the users access to create queries only for that InfoProvider.
-
Is there a better way to do this projection/aggregate query?
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDO's. Unlike 3rd party tools that go straight to the database we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDO's - no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
it takes <100ms to produce 5300 rows from 83000 assets.
The nearest I have managed with JDO is (pseudo code):
perform projection query to get t.description, d.description for every asset
loop on results
if this is first time we've had this combination of t.description,
d.description
perform aggregate query to get aggregates for this combination
The java code is below. It takes about 16000ms (with debug/trace logging
off, c.f. 100ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult(
    "assetType.description, assetDescription.description");
q1.setOrdering(
    "assetType.description ascending, "
        + "assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters(
    "String myAssetType, String myAssetDescription");
q2.setFilter(
    "assetType.description == myAssetType && "
        + "assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
Object[] cols = (Object[]) i.next();
String assetType = (String) cols[0];
String assetDescription = (String) cols[1];
String type_description =
assetDescription != null
? assetType + "~" + assetDescription
: assetType;
    if (distinct.add(type_description)) {
        Object[] cols2 =
            (Object[]) q2.execute(assetType, assetDescription);
        // System.out.println("type " + assetType
        //     + ", description " + assetDescription
        //     + ", count " + cols2[0]
        //     + ", sum " + cols2[1]);
    }
}
q2.closeAll();
q1.closeAll();

Neil,
It sounds like the problem that you're running into is that Kodo doesn't
yet support the JDO2 grouping constructs, so you're doing your own
grouping in the Java code. Is that accurate?
We do plan on adding direct grouping support to our aggregate/projection
capabilities in the near future, but as you've noticed, those
capabilities are not there yet.
-Patrick
Neil Bacon wrote:
[original message quoted in full above]
-
Characteristic "XYZ" is compressed but is not in the aggregate/query
Hi there,
we have build couple of aggregates and receive the following message in debugging mode:
e.g." ____Characteristic 0VALUATION is compressed but is not in the aggregate/query"
Does this mean we have to include the characteristic in the aggregate, or does it mean it is in the aggregate but not compressed?

Hi,
Is this a warning message or an error message?
It appears to be a warning message rather than an error; you can ignore it.
Regards,
Chandu. -
Right time to do aggregates/query caching
Hi
When is the right time to do aggregates/query caching? What is the business scenario?
Thanks

Hi Jack,
Refer to the document link below for details.
https://websmp106.sap-ag.de/~sapidb/011000358700004339892004
Hope it helps.
Cheers
Anurag -
Query rewrite doesn't work for an aggregate query but works for a join query
Dear experts,
Let me know what's wrong for
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
I have two MATERIALIZED VIEW:
A) -- Only join
CREATE MATERIALIZED VIEW "SCOTT"."TST_MV"
ENABLE QUERY REWRITE AS
SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
"T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
"T57410"."DISTANCE" "DISTANCE",
"T57410"."IS_LEAF" "IS_LEAF",
"T57460"."DEPARTMENTID" "DEPARTMENTID",
"T57460"."NAME" "NAME","T57460"."PARENT"
"PARENT","T57460"."SHORTNAME" "SHORTNAME",
"T57460"."SKIMOID" "SKIMOID"
FROM "BI_OIV_HIER" "T57410",
"BI_DEPARTMENTS" "T57460"
WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
B) -- Join with aggregation
CREATE MATERIALIZED VIEW "SCOTT"."TST_MV2"
("C41", "C42", "C43",
"C44", "C45", "C46",
"C47", "C48", "C49",
"C50", "C51", "C52",
"C53", "C54", "C55",
"C56", "C57", "C58",
"C59", "C60", "C61",
"INCIDENTTYPE")
ENABLE QUERY REWRITE
AS SELECT COUNT(T56454.TOTAL) AS c41,
T56840.CATEGORYID AS c42,
T56840.PARENT AS c43,
T56908.DOCSTATEID AS c44,
T56908.PARENT AS c45,
T56947.EXPIREDID AS c46,
T56947.PARENT AS c47,
T56986.ISSUESTATEID AS c48,
T56986.PARENT AS c49,
T57025.LOCATIONID AS c50,
T57025.PARENT AS c51,
T57064.NEWID AS c52,
T57064.PARENT AS c53,
T57103.PARENT AS c54,
T57103.RESOLUTIONID AS c55,
T57142.PARENT AS c56,
T57142.RESPONSIBLEID AS c57,
T57181.PARENT AS c58,
T57181.SOURCEID AS c59,
T57460.DEPARTMENTID AS c60,
T57460.PARENT AS c61,
T56454.INCIDENTTYPE
FROM BI_OIV_HIER T57410,
BI_DEPARTMENTS T57460,
BI_SOURCE_HIER T57176,
SOURCE T57181,
BI_RESPONSIBLE_HIER T57137,
RESPONSIBLE T57142,
BI_RESOLUTIONS_HIER T57098,
RESOLUTIONS T57103,
BI_NEW_HIER T57059,
NEW T57064,
BI_LOCATIONS_HIER T57020,
LOCATIONS T57025,
BI_ISSUESTATES_HIER T56981,
ISSUESTATES T56986,
BI_EXPIRED_HIER T56942,
EXPIRED T56947,
BI_DOCSTATES_HIER T56903,
DOCSTATES T56908,
BI_CATEGORY_HIER T56835,
CATEGORY T56840,
INCIDENTS T56454
WHERE T56454.RESOLUTION = T57098.MEMBER_KEY
AND T56454.CATEGORY = T56835.MEMBER_KEY
AND T56454.DOCSTATE = T56903.MEMBER_KEY
AND T56454.EXPIRED = T56942.MEMBER_KEY
AND T56454.ISSUESTATE = T56981.MEMBER_KEY
AND T56454.LOCATION = T57020.MEMBER_KEY
AND T56454.NEW = T57059.MEMBER_KEY
AND T56454.RESPONSIBLE = T57137.MEMBER_KEY
AND T56454.SOURCE = T57176.MEMBER_KEY
AND T56454.DEPARTMENTID = T57410.MEMBER_KEY
AND T56835.ANCESTOR_KEY = T56840.CATEGORYID
AND T56903.ANCESTOR_KEY = T56908.DOCSTATEID
AND T56942.ANCESTOR_KEY = T56947.EXPIREDID
AND T56981.ANCESTOR_KEY = T56986.ISSUESTATEID
AND T57020.ANCESTOR_KEY = T57025.LOCATIONID
AND T57059.ANCESTOR_KEY = T57064.NEWID
AND T57098.ANCESTOR_KEY = T57103.RESOLUTIONID
AND T57137.ANCESTOR_KEY = T57142.RESPONSIBLEID
AND T57176.ANCESTOR_KEY = T57181.SOURCEID
AND T57410.ANCESTOR_KEY = T57460.DEPARTMENTID
GROUP BY T56840.CATEGORYID,
T56840.PARENT,
T56908.DOCSTATEID,
T56908.PARENT,
T56947.EXPIREDID,
T56947.PARENT,
T56986.ISSUESTATEID,
T56986.PARENT,
T57025.LOCATIONID,
T57025.PARENT,
T57064.NEWID,
T57064.PARENT,
T57103.PARENT,
T57103.RESOLUTIONID,
T57142.PARENT,
T57142.RESPONSIBLEID,
T57181.PARENT,
T57181.SOURCEID,
T57460.DEPARTMENTID,
T57460.PARENT,
T56454.INCIDENTTYPE;
So, the optimizer uses query rewrite for
select * from TST_MV
but does not use query rewrite for
select * from TST_MV2
within the same session.
select * from TST_MV is rewritten to the underlying select of TST_MV:
SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
"T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
"T57410"."DISTANCE" "DISTANCE",
"T57410"."IS_LEAF" "IS_LEAF",
"T57460"."DEPARTMENTID" "DEPARTMENTID",
"T57460"."NAME" "NAME","T57460"."PARENT"
"PARENT","T57460"."SHORTNAME" "SHORTNAME",
"T57460"."SKIMOID" "SKIMOID"
FROM "BI_OIV_HIER" "T57410",
"BI_DEPARTMENTS" "T57460"
WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
So select * from TST_MV2 should similarly be rewritten to the underlying select of TST_MV2.
DBMS_STATS.GATHER_TABLE_STATS has been run for each table and MV.
Please help to investigate the issue.
Why isn't TST_MV2 used for query rewrite?
Kind regards.

Hi Carlos,
It looks like you have more than one question in your posting. Would I be right in saying that you have an issue with how long Discoverer takes when compared with SQL, and a second issue with regards to MVs not being used? I will add some comments on both. If one of these is not an issue please inform.
Issue 1:
Have you compared the explain plan from Discoverer with SQL? You may need to use a tool like TOAD to see it.
Also, is Discoverer doing anything complicated with the data after it comes back? By complicated I mean do you have a large number of Page Items and / or Group Sorted items? SQL wouldn't have this overhead you see.
Because SQL would create a table, have you tried creating a table in Discoverer and seeing how long it takes?
Finally, what version of the database are you using?
Issue 2:
Your initial statement was that query rewrite works with several MV but not with others, yet in the body of the report you only show explain plans that do use the MV. Could you therefore go into some more detail regarding this situation.
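One way to get a concrete answer from the database itself is DBMS_MV.EXPLAIN_REWRITE, which reports why a given statement was not rewritten against a particular MV. A sketch (the query text and statement_id below are placeholders, and the REWRITE_TABLE must first be created with the utlxrw.sql script):

```sql
-- Create the output table once per schema (standard install location):
-- @?/rdbms/admin/utlxrw.sql
BEGIN
  DBMS_MV.EXPLAIN_REWRITE(
    query        => 'SELECT /* paste the aggregate query here */ 1 FROM dual',
    mv           => 'TST_MV2',
    statement_id => 'chk1');
END;
/
-- Each MESSAGE row explains a reason the rewrite succeeded or failed:
SELECT message FROM rewrite_table WHERE statement_id = 'chk1';
```

The messages typically point at the exact blocker (missing dimension, stale MV, QUERY_REWRITE_INTEGRITY setting, etc.), which is much faster than guessing from explain plans.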
Best wishes
Michael -
Simply question: Are these two query same?
Question 1:
There is this query from someone: (I'm not sure how Join ON works)
SELECT NULL AS order_id
,line_id
,o.item_id
,o.customer_id
INTO v_line_table
FROM xx_items_v i,
JOIN order_dtl o ON i.item_id = o.item_id
WHERE o.que_id = 3380;
Is this same as below?
SELECT ORDER_ID , line_id , o.item_id, o.customer_id
into v_line_table
FROM xx_items_v i, order_dtl o
where i.item_id = o.item_id (+)
and o.que_id = 3380;
Question 2:
I want to retrieve ORDER_ID information from a third table (order_header.order_id). The key between order_dtl and order_header is que_id. How can I add this to the first query above?

No. Your first query will never work: it has a comma between the first table and the JOIN keyword. Remove it, then remove the (+) from your second piece of code and it will be the same as the amended first piece of code, or turn the first piece into an outer join by specifying whether it is a left or a right outer join.
In your case it's a left join:
select [column list]
from table1 i
left join table2 o on i.key = o.key
where o.column = 1234;

However, the WHERE clause turns this query back into an inner join, because the right side of the join must exist. You can either allow o.column to be null in the WHERE clause, or move the predicate up into the join. The two following queries are equivalent in their results:
select [column list]
from table1 i
left join table2 o on i.key = o.key
where o.column = 1234
or o.column is null;
select [column list]
from table1 i
left join table2 o on i.key = o.key and o.column = 1234;

Message was edited by:
Sentinel
Question related to changing query from Production to Development
Hello friends, I have a question.
I designed a query and transported it to production.
Now a lot of modifications need to be made to it.
Can I change the query in production or not?
If I have to modify it, what should I do?
If I do it in development again, how do I transport it back to production, and what precautions do I need to take?
Thanks

Hi Kartikey,
Follow the below steps.
1. Go to the transport connection in RSA1 of your development system.
2. Click on the BEx icon and create a BEx request.
3. Edit your query and assign it to the created BEx request.
4. Transport it to production.
Your previous query in Production will be over written by the new Query.
Assigning points is the way of saying thanks in SDN.
Madhu. -
Question related to analytic query
Just realized I should post this question in this forum rather than the regular SQL and PL/SQL topic.
Suppose I have a table with three columns: class, name, and score, with data like below:
class Name score
c1 n1 76
c1 n2 92
c1 n3 37
c1 n4 50
c1 n5 87
c2 n6 97
c2 n7 85
c2 n8 61
c2 n9 88
c3 n10 85
I want to have the summary report based on the percentage distribution like (0%~30%, 31%~70%, 71%~100%). The result I want to generate is something like
class percentCategroy avg_score
c1 0%~30% 57
c1 31%~70% 88
c1 70%~100% 92
c3 0%~100% 85
Note: the avg_score values for c1 are not correct; they are just dummy numbers.
Does anyone know how to write a query like the one above? Thank you very much in advance.

I was thinking something like this:
SELECT class,
CASE
WHEN score >= top_third
THEN '66-100%'
WHEN (score < top_third
AND score >= middle_third)
THEN '33-66%'
ELSE '0-33%'
END pct_category,
AVG(score) avg_score
FROM
(SELECT class ,
max_score ,
min_score ,
score ,
ROUND((max_score - min_score)/3) * 2 + min_score top_third,
ROUND((max_score - min_score)/3) + min_score middle_third
FROM
(SELECT class ,
MAX(score) over (partition BY class) max_score,
MIN(score) over (partition BY class) min_score,
score
FROM t1
 )
)
GROUP BY class,
CASE
WHEN score >= top_third
THEN '66-100%'
WHEN (score < top_third
AND score >= middle_third)
THEN '33-66%'
ELSE '0-33%'
END
ORDER BY 1,2;
CLASS PCT_CATEGORY AVG_SCORE
c1 0-33% 43.5
c1 66-100% 85
c2 0-33% 61
c2 66-100% 90
c3 66-100% 85
5 rows selected

That NTILE analytic function looks promising as well.
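For completeness, a hedged sketch of the NTILE approach. Note that it is not equivalent to the CASE version: NTILE(3) splits each class into three equal-sized buckets by rank, not by score range, so the bucket boundaries and averages can differ:

```sql
-- NTILE(3) assigns each row within its class to bucket 1, 2, or 3
-- by ascending score, with bucket sizes as equal as possible.
SELECT class,
       CASE tile WHEN 1 THEN '0-33%'
                 WHEN 2 THEN '33-66%'
                 ELSE        '66-100%'
       END AS pct_category,
       AVG(score) AS avg_score
FROM (
       SELECT class, score,
              NTILE(3) OVER (PARTITION BY class ORDER BY score) AS tile
       FROM   t1
)
GROUP BY class, tile
ORDER BY class, tile;
```

Pick whichever semantics match the report: NTILE for equal-count buckets, the min/max CASE expression for equal-width score ranges.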
Message was edited by:
JoeC -
This is probably a relatively easy question as long as I explain it well enough:
I have two tables:
categorycodes and properties
categorycodes is a lookup table.
both tables have a field catcode that is a char(1) that contains matching data (only numbers 1 - 6)
CREATE TABLE CATEGORYCODES
(
  CATCODE     CHAR(1 BYTE)      NOT NULL,
  DESCRIPTION VARCHAR2(25 BYTE) NOT NULL,
  CONSTRAINT CATEGORYCODES_PK PRIMARY KEY (CATCODE) ENABLE
);
catcode
1
2
3
4
5
6
The properties table has about 600,000 records. The properties table also has a field called parcelno which is a char(9). It contains a string of numbers and only numbers.
What I'd like is the following:
catcode, count(*)
1 580
2 300
3 3000
4 235
5 0
6 80
I limited the query results to make sure there was a result set that would not contain all the catcodes. I am having trouble getting the row with a zero count to display. I know this has to do with how I'm doing the join, but I'm not sure what's wrong.
This is a sample of what I've tried:
select i.*
from
(select catcode, count(*)
from properties p
where substr(parcelno,1,3) = '871'
group by catcode) i
right outer join categorycodes cc
on i.catcode = cc.catcode;
I am not worried about situations where catcode is null in properties. Parcelno can never be null.

Hi,
It looks like your query should work; except you don't want to COUNT (*); that would make every number at least 1. COUNT (*) means count the total number of rows, no matter what is on them, so it will see the row with just the catcode from the lookup table that doesn't match anything, and count that as 1. You want to count the number of rows from the properties table, so count some column from the properties that can't be NULL.
Here's a slightly different way
SELECT c.catcode
, COUNT (p.catcode) AS cnt
FROM categorycodes c
LEFT OUTER JOIN properties p ON p.catcode = c.catcode
AND SUBSTR ( p.parcelno
, 1
, 3
             ) = '871'
GROUP BY c.catcode;
If the condition about '871' is part of the join condition, then you don't need a sub-query.
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables involved, and also post the results you want from that data.
Point out where the statement above is getting the wrong results, and explain, using specific examples, how you get the right results from the given data in those places.
Always say which version of Oracle you're using (e.g., 11.2.0.2.0).
See the forum FAQ: https://forums.oracle.com/message/9362002 -
Please help with optimizing aggregate query
Good Morning.
I hope this will be a simple question, so I won't give a lot of detail about the db unless you ask for clarification.
I have four tables I need to join. CLC can have many CLS, and a CLS can have many CUSTD, and a CUSTD can have many CUSTDR.
I need to add up all the rows in CUSTD for one CLC, and subtract the sum of all the rows in CUSTDR for that CLC.
I first tried this
SELECT SUM(custd.amount_owed) - SUM(custdr.amount_refunded)

but that doesn't work, because the amount owed is repeated for every amount refunded.
Then I tried using a subquery
SELECT
TO_CHAR(owed.amount - NVL(SUM(custdr.refund_amount),0), 'L999G999G999G999D00')
INTO
g$_value
FROM
claim_settlements cls
,customer_debts custd
,customer_debt_recoveries custdr
,(
SELECT
cls1.clc_id id
,NVL(SUM(custd1.owed_amount),0) amount
FROM
claim_settlements cls1
,customer_debts custd1
WHERE
custd1.st_table_short_name = 'CLS'
AND custd1.key_value = cls1.id
AND custd1.status != 'WO'
GROUP BY
cls1.clc_id
)owed
WHERE
custd.st_table_short_name = 'CLS'
AND custd.key_value = cls.id
AND custd.id = custdr.custd_id(+)
AND cls.clc_id = p$_key_value
AND owed.id = cls.clc_id
GROUP BY
owed.amount;

I would like to know if this is possible using an analytic sum. This query works, but I have read that analytic functions can be better than subqueries, and I am still not sure of all the uses of analytic functions.
Thanks
Message was edited by:
dmill
Updated my question.
Message was edited by:
dmill

Thanks, Nic
That is the kind of query I was imaging, so I hope we can get it to work.
Here is information about the tables and the data:
DESC customer_debts
Name Null Type
ID NOT NULL NUMBER(28)
CUSTA_ID NOT NULL NUMBER(28)
GLTT_ID NOT NULL NUMBER(28)
ST_TABLE_SHORT_NAME NOT NULL VARCHAR2(10)
KEY_VALUE NOT NULL NUMBER(28)
OWED_AMOUNT NOT NULL NUMBER(11,2)
OWED_BY_CUSTOMER NOT NULL VARCHAR2(1)
STATUS NOT NULL VARCHAR2(2)
NOTES VARCHAR2(4000)
DATE_CREATED NOT NULL DATE
CREATED_BY NOT NULL VARCHAR2(30)
DATE_MODIFIED DATE
MODIFIED_BY VARCHAR2(30)
13 rows selected
DESC customer_debt_recoveries
Name Null Type
ID NOT NULL NUMBER(28)
CUSTD_ID NOT NULL NUMBER(28)
ST_TABLE_FOR_PAID_BY NOT NULL VARCHAR2(10)
KEY_VALUE_FOR_PAID_BY NOT NULL NUMBER(28)
REFUND_AMOUNT NOT NULL NUMBER(11,2)
REFUND_CHECK_NUMBER NUMBER(9)
DATA_SOURCE NOT NULL VARCHAR2(1)
NOTES VARCHAR2(4000)
DATE_CREATED NOT NULL DATE
CREATED_BY NOT NULL VARCHAR2(30)
DATE_MODIFIED DATE
MODIFIED_BY VARCHAR2(30)
12 rows selected
customer_debts custd
ID CSL_ID OWED_AMOUNT
1 4143802 20
2 4143802 10
3 4143802 10
5 4143796 10
6 4143806 10
7 999999999 20
8 999999999 10
9 999999999 10
11 4143802 100
9 rows selected
customer_debt_recoveries custdr
ID CUSTD_ID REFUND_AMOUNT
1 5 10
2 1 27
3 1 5
3 rows selected
claim_charges clc and claim_settlements cls
CLC_ID CLS_ID
537842 4143802
537842 999999999
538057 4143796
538209 4143806
4 rows selected

The clc is the object that we want the information for. For example, clc 537842 should return 148, which is the sum of all its debts minus the sum of all its recoveries.
clc 4143796 would return 0
and 4143806 would return 10.
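One sketch that avoids the fan-out entirely (untested against the real schema, so treat the join conditions as assumptions drawn from the query above) is to pre-aggregate the recoveries per debt in an inline view, so each owed_amount is counted exactly once:

```sql
-- Pre-aggregate refunds per customer_debt before joining, so the
-- owed_amount rows are never multiplied by the number of refund rows.
SELECT clc.id AS clc_id,
       SUM(custd.owed_amount) - SUM(NVL(r.refunded, 0)) AS amount_owed
FROM   claim_charges       clc
JOIN   claim_settlements   cls   ON cls.clc_id = clc.id
JOIN   customer_debts      custd ON custd.key_value = cls.id
                                AND custd.st_table_short_name = 'CLS'
                                AND custd.status != 'WO'
LEFT JOIN (
       SELECT custd_id, SUM(refund_amount) AS refunded
       FROM   customer_debt_recoveries
       GROUP  BY custd_id
) r ON r.custd_id = custd.id
GROUP  BY clc.id;
```

Against the sample data above this returns 148 for clc 537842 (180 owed minus the 32 recovered against debt 1).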
When I run your query for clc 537842, which looks like this with joins
select distinct
clc.id clc_id
,sum(custd.owed_amount) over (partition by clc.id, cls.id, custd.id)
 - sum(custdr.refund_amount) over (partition by clc.id, cls.id, custd.id, custdr.id) amount_owed
from
claim_charges clc
,claim_settlements cls
,customer_debts custd
,customer_debt_recoveries custdr
where
cls.clc_id = clc.id
AND custd.key_value = cls.id
AND custd.st_table_short_name = 'CLS'
AND custd.id = custdr.custd_id (+)
AND custd.status != 'WO'
AND clc.id = 537842

I get this result:
CLC_ID AMOUNT_OWED
537842
537842 35
537842 13
3 rows selected

Thanks again, and let me know if this is not enough info. -
Aggregate Query Optimization (with indexes) for trivial queries
Table myTable, which is quite large, has an index on the month column.
"select max(month) from myTable" uses this index and returns quickly.
"select max(month) from myTable where 1 = 1" does not use this index, falls through to a full table scan, and takes a very long time.
Can this possibly be a genuine omission in the query optimizer, or is there some setting to convince it to handle the latter query more sensibly?

Oracle 11.2.0.1
SQL> select table_name, num_rows from dba_tables where table_name = 'DWH_ADDRESS_MASTER';
TABLE_NAME NUM_ROWS
DWH_ADDRESS_MASTER 295729948
SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 8 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1 | 1 | 8 | 4 (0)| 00:00:01 |
SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER where 1 = 1;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 8 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | FIRST ROW | | 1 | 8 | 4 (0)| 00:00:01 |
| 3 | INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1 | 1 | 8 | 4 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------ -
Aggregate query on global cache group table
Hi,
I set up two global cache nodes. As we know, global cache group is dynamic.
The cache group can be dynamically loaded by primary key or foreign key as my understanding.
There are three records in oracle cache table, and one record is loaded in node A, and the other two records in node B.
Oracle:
1 Java
2 C
3 Python
Node A:
1 Java
Node B:
2 C
3 Python
If I select count(*) in Node A or Node B, the result respectively is 1 and 2.
The questions are:
how I can get the real count 3?
Is it reasonable to do this query on global cache group table?
I have one idea that create another read-only node for aggregation query, but it seems weird.
Thanks very much.
Regards,
Nesta
Edited by: user12240056 on Dec 2, 2009 12:54 AM

Do you mean something like
UPDATE sometable SET somecol = somevalue;
where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys or every row that must be updated then you could simply execute multiple individual updates. If the number of rows is large or you do not know all the ketys in advance then maybe you would adopt the approach of ensuring that all relevant rows are in the local cache grid node already via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
Chris -
Basic question - Problems with basic query - Mask = INSIDE
Greetings,
I'm running a basic query on Oracle 10g Spatial to determine the intersection between two layers: one of points (called "Calidad1") and one of polygons (called "Parishes").
I issue the following query, based on the Oracle documentation:
select C1.ID, C1.SECUENCIA,
       P.COD_ENTIDAD, P.STATE,
       P.COD_COUNTY, P.TOWNSHIP,
       P.COD_PARROQUIA, P.PARISH
from CALIDAD1 c1,
     PARISHES p
where sdo_relate(c1.geoloc, p.geoloc, 'mask=INSIDE querytype=WINDOW') = 'TRUE'
order by C1.ID;
When I run the query there are no errors, but it runs far too long (more than 10 minutes) without finishing, and I have to cancel it.
Cancelling shows me the following error:
"ORA-13268: error obtaining dimension from USER_SDO_GEOM_METADATA
ORA-06512: at "MDSYS.MD", line 1723
ORA-06512: at "MDSYS.MDERR", line 8
ORA-06512: at "MDSYS.SDO_3GL", line 89"
This query is very basic and the data volume is small, so I conclude I must be skipping a step or activity.
Can you guide me a little to resolve this situation and get the query that I need?
Thanks to all

First, try this query with the ordered hint, and also note the change in the FROM clause:
select /*+ ordered */ C1.ID, C1.SECUENCIA,
       P.COD_ENTIDAD, P.STATE,
       P.COD_COUNTY, P.TOWNSHIP,
       P.COD_PARROQUIA, P.PARISH
from PARISHES p,
     CALIDAD1 c1
where sdo_relate(c1.geoloc, p.geoloc, 'mask=INSIDE querytype=WINDOW') = 'TRUE'
order by C1.ID;
See if this is using the index as expected. If not, try this:
select /*+ ordered index(p name_of_the_spatial_index_on_the_PARISH_table) */
       C1.ID, C1.SECUENCIA,
       P.COD_ENTIDAD, P.STATE,
       P.COD_COUNTY, P.TOWNSHIP,
       P.COD_PARROQUIA, P.PARISH
from PARISHES p,
     CALIDAD1 c1
where sdo_relate(c1.geoloc, p.geoloc, 'mask=INSIDE querytype=WINDOW') = 'TRUE'
order by C1.ID;
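Also, ORA-13268 usually means that one of the layers has no row in USER_SDO_GEOM_METADATA. A sketch of registering a layer (the dimension bounds, tolerance, and SRID below are placeholders; use values that match your coordinate system, and repeat for the PARISHES layer):

```sql
-- Register CALIDAD1.GEOLOC so Spatial can find its dimensions.
-- The WGS84-style bounds and SRID 8307 here are only an example.
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('CALIDAD1', 'GEOLOC',
        SDO_DIM_ARRAY(
          SDO_DIM_ELEMENT('X', -180, 180, 0.005),
          SDO_DIM_ELEMENT('Y',  -90,  90, 0.005)),
        8307);
COMMIT;
```

After registering both layers (and rebuilding the spatial indexes if they were created without metadata), the sdo_relate query should run without the ORA-13268 error.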
siva -
How to optimize an aggregate query
There is a table, table1, with more than 300,000 records. It has an index on a column, say col1. When we issue a simple query, select count(col1) from table1, it takes about 1 minute to execute even though the index is there. Can anyone guide me on how to optimize it?
More information about the problem.
SQL> select count(r_object_id) from dmi_queue_item_s;
COUNT(R_OBJECT_ID)
292784
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname,pname,pval1,pval2
2 from sys.aux_stats$;
no rows selected
SQL> explain plan for
2 select count(r_object_id) from dmi_queue_item_s;
select count(r_object_id) from dmi_queue_item_s
ERROR at line 2:
ORA-02402: PLAN_TABLE not found
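As for the ORA-02402: the session simply has no PLAN_TABLE yet. A hedged sketch of the usual next steps on a 9.2 database (the script path is the standard install location; the INDEX_FFS hint assumes r_object_id is indexed and declared NOT NULL, and the alias q is mine):

```sql
-- Create the explain-plan table once per schema (SQL*Plus):
@?/rdbms/admin/utlxplan.sql

-- With optimizer_mode = CHOOSE and no statistics (aux_stats$ was empty),
-- the optimizer may fall back to rule-based full scans; gather stats first:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'DMI_QUEUE_ITEM_S');

-- COUNT over an indexed NOT NULL column can be satisfied by an
-- index fast full scan instead of a full table scan:
SELECT /*+ INDEX_FFS(q) */ COUNT(r_object_id)
FROM   dmi_queue_item_s q;
```

If r_object_id is nullable, the index cannot be used for the count (NULLs are not indexed), so either count the primary key or add a NOT NULL constraint if the data allows it.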