Complex Query on a huge table
Hi,
I have a table with around 100 million records. The table has three columns:
customer_id
start_date
end_date
Customer ID may be repeated with different ranges of start_date and end_date. Below is an example
Customer ID Start_Date End_Date
1 01-JAN-2013 31-MAR-2013
1 01-APR-2013 31-MAY-2013
1 01-JUL-2013 31-DEC-2013
2 01-MAY-2013 NULL
2 01-JAN-2013 NULL
I need to create another table with the below output
Customer ID Start_Date End_Date
1 01-JAN-2013 31-MAY-2013 --- These two rows merged because the first end date (31-MAR-2013) runs right up to the next start date (01-APR-2013), so they can be combined into one record.
1 01-JUL-2013 31-DEC-2013 --- This row did not merge because there is a gap between the previous end date (31-MAY-2013) and this start date (01-JUL-2013).
2 01-JAN-2013 NULL --- A NULL end date is treated as an active record, so we pick the MIN of the start dates.
Is there a way to do this using a single query? I have a PL/SQL script where I have implemented this logic.
Sample data and table structure are provided below. Thanks.
create table xx_temp (customer_id number,
start_date date,
end_date date);
insert into xx_temp values (1,to_date('01012013','ddmmyyyy'), to_date('31032013','ddmmyyyy'));
insert into xx_temp values (1,to_date('01042013','ddmmyyyy'), to_date('31052013','ddmmyyyy'));
insert into xx_temp values (1,to_date('01072013','ddmmyyyy'), to_date('31122013','ddmmyyyy'));
insert into xx_temp values (2,to_date('01012013','ddmmyyyy'), null);
insert into xx_temp values (2,to_date('01052013','ddmmyyyy'), null);
Hi.
{message:id=10947774} shows how to do that.
Use
NVL ( end_date
, DATE '9999-12-31'
)
to equate NULL end_dates with an impossibly late date.
Edited by: Frank Kulash on Apr 7, 2013 7:01 AM
I see you added CREATE TABLE and INSERT statements, so I can test it. Thanks.
Here's what you requested:
CREATE TABLE another_table
AS
WITH got_new_grp AS
(
    SELECT  customer_id, start_date, end_date
    ,       CASE
                WHEN  start_date > 1 + MAX (end_date) OVER
                                           ( PARTITION BY  customer_id
                                             ORDER BY      start_date
                                             RANGE BETWEEN UNBOUNDED PRECEDING
                                                       AND 1 PRECEDING
                                           )
                THEN  1
                ELSE  0
            END  AS new_grp
    FROM    xx_temp
--  WHERE   ...   -- if you need any filtering, put it here
)
, got_grp AS
(
    SELECT  customer_id, start_date, end_date
    ,       SUM (new_grp) OVER ( PARTITION BY  customer_id
                                 ORDER BY      start_date
                               )  AS grp
    FROM    got_new_grp
)
SELECT    customer_id
,         MIN (start_date)  AS start_date
,         MAX (end_date)    AS end_date
FROM      got_grp
GROUP BY  customer_id
,         grp
;
There's no need to equate NULLs with an impossibly late date in this case. Ignore what I said about NVL.
Since you consider an end_date of March 31, 2013 to overlap with a start_date of April 1, 2013, I changed
start_date >= MAX (end_date) ...
from the earlier thread to
start_date > 1 + MAX (end_date) ...
above.
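As a sanity check, querying the new table should reproduce the output requested at the top of the thread:

```sql
-- quick check against the sample rows from the original post
SELECT   customer_id, start_date, end_date
FROM     another_table
ORDER BY customer_id, start_date;

-- requested output (as stated in the original post):
--   1  01-JAN-2013  31-MAY-2013
--   1  01-JUL-2013  31-DEC-2013
--   2  01-JAN-2013  (null)
```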
Similar Messages
-
1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the statistics?
2. Does row source statistics give some kind of understanding of Extended stats?

You can get Row Source Statistics only *after* the SQL has been executed. An Explain Plan by itself cannot give you row source statistics.
To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
Then use dbms_xplan.display_cursor
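For example, the hint-and-display_cursor approach looks like this (the table name here is just a placeholder):

```sql
-- execute the statement with row source statistics collection enabled
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   some_table;   -- placeholder table name

-- then fetch the plan with actual row counts alongside the estimates
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```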
Hemant K Chitale -
Query for the huge table is not working.
Hi,
I have a database link between an Oracle server and Microsoft SQL Server, let's say 'SQLWEB'. This link works perfectly when I query a table with a few hundred thousand records, but it fails for one of the tables that has more than 3 million records on the SQL Server side. Does anyone have any idea why this peculiar behavior occurs? Are there any limitations on heterogeneous links, and is there a workaround? Below, the first query returns the count from a smaller table, but the second query gets disconnected because it runs against a very large table with millions of records.
shams at oracleserver> select count(*) from investors@sqlweb ;
COUNT(*)
15096
shams at oracleserver> select count(*) from transactions@sqlweb;
select count(*) from transactions@sqlweb
ERROR at line 1:
ORA-02068: following severe error from SQLWEB
ORA-28511: lost RPC connection to heterogeneous remote agent using SID=%s
ORA-28509: unable to establish a connection to non-Oracle system
Regards
Shamsheer
Message was edited by:
Shamsheer

In general you want to minimize the traffic going over the dblink. This is best handled with a view on the SQL Server side. You might try creating a view on SQL Server like:
create view all_investors as
select * from investors
Then from sql plus:
select count(*) from all_investors@sqlweb; -
How to create a Service based on complex query
Hi,
I'm using JDev 11.1.2.2.0 for developing fusion based application based on EJB3.0 and JPA. I search on net and got a link for sample application as shown below:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/jdev/obe11jdev/ps1/ejb/ejb.html#t2s1
But the application above is based on "Entities based on Tables", while my requirement is to build a service based on a query that spans multiple tables; in other words, I want to create a generic one...
Please let me know what I need to change, or provide a link to any relevant docs.

Hi Desmukh,
I want to create a WSDL for complex queries based on an ADF Fusion application using EJB 3.0 and JPA...!
The link you provided, 'entity with tables', is too restrictive for my requirement; I want a similar implementation for complex queries :( -
SELECT query performance : One big table Vs many small tables
Hello,
We are using BDB 11g with SQLite support. I have a question about SELECT query performance when we have one huge table vs. multiple small tables.
Basically, in our application we need to run a SELECT query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
For test purposes we tried creating multiple tables, but the performance of the SELECT query was more or less the same. Could that be because all tables map to a single database in the backend as key/value pairs, so a lookup (SELECT query) on a small table or a big table makes no difference?
Thanks.

Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra -
I am a novice in PL/SQL. I need some help in writing a complex query.
I imagine the following table structure.
Type(string) Date(date) count(int)
Given a date range
The query needs to group the records by type and, for each group, return the record (type and count) that has the max date (just the date; no time is involved). If more than one record has the max date, then the average count should be returned for that type.
I would be glad if someone could give me some ideas on how to go about this query. Thank you in advance.
Regards.

Here's the query. Forget the period column; what this query is supposed to do is group on assigned KI for a given date range, then get the value of the last record for that KI in the date range. If there are two records with the max date, it should give an average.
e.g
assignedKI / date / value
a   01-May-2008   10
b   02-May-2008   12
c   01-May-2008   13
a   30-Apr-2008   16
b   04-May-2008   17
a   01-May-2008   20
The query should return:
a   01-May-2008   15   (the average, as there are two values for the max date for a)
b   04-May-2008   17
c   01-May-2008   13
The following query does not work:
SELECT
kiv2.assigned_k_i,
kiv2.ki_name,
kiv.period,
max( kiv2.ki_value_date) ,
avg(kiv2.ki_value)
FROM
(
SELECT
assigned_k_i,
period p1,
TO_CHAR(ki_value_date, period) period,
MAX(ki_value_date) ki_value_date
FROM
v_ki_value,
(select ? period from dual)
WHERE
(status = 'APPROVED' OR status = 'AUTOAPPROVED')
AND ( trunc(to_date(to_char(ki_value_date, 'DD/MM/YYYY'), 'DD/MM/YYYY')) >= ?)
AND ( trunc(to_date(to_char(ki_value_date, 'DD/MM/YYYY'), 'DD/MM/YYYY')) <= ?)
AND (INSTR(?, TO_CHAR(assigned_k_i)) > 0)
GROUP BY
assigned_k_i,
TO_CHAR(ki_value_date, period)
) kiv,
v_ki_value kiv2
WHERE
kiv.assigned_k_i = kiv2.assigned_k_i
AND to_char(kiv.ki_value_date, kiv.p1) = to_char(kiv2.ki_value_date, kiv.p1)
GROUP BY
kiv2.assigned_k_i,
kiv2.ki_name,
kiv.period,
kiv2.ki_value
ORDER BY
kiv2.assigned_k_i -
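For the requirement in the thread above (the value on the latest date per group, averaged when the latest date appears more than once), Oracle's KEEP (DENSE_RANK LAST) aggregate is often a simpler starting point. A sketch, assuming a plain table with the three columns from the example (table and column names are made up):

```sql
SELECT   assigned_ki,
         MAX(value_date) AS value_date,
         -- averages only the rows sharing the latest value_date in each group
         AVG(value) KEEP (DENSE_RANK LAST ORDER BY value_date) AS value
FROM     ki_values   -- hypothetical table: assigned_ki, value_date, value
WHERE    value_date BETWEEN DATE '2008-04-01' AND DATE '2008-05-31'
GROUP BY assigned_ki;
```

Against the sample rows above this yields a / 01-May-2008 / 15, b / 04-May-2008 / 17, c / 01-May-2008 / 13.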
Trying to form complex query - need help
I have a fairly complex query that I need to join the results of to show actual and goal by day. The actuals are an aggregation of records that get put in every day, while the targets are a single entry in range format indicating an active range for which the target applies. I'm working on a query that will put things together by month and I'm running into a snag. Can someone please point out where appropriate naming needs to go to get this to come together?
This one works:
(select DATE_INDEX, SUM(LDS) as TTLLDS, SUM(TONS) as TTLTONS from
(select DATE_INDEX, VEH_LOC, SUM(LDS) as LDS, SUM(WT) as TONS from
(select c.DATE_INDEX, c.VEH_LOC, COUNT(j.LOAD_JOB_ID) as LDS,
CASE WHEN SUM(w.SPOT_WEIGHT) = 0 THEN SUM(j.MAN_SPOT_WT)
ELSE SUM(w.SPOT_WEIGHT)
END as WT
from TC c, TC_LOAD_JOBS j, LOAD_RATES r, SPOT_WEIGHTS w
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
and c.VEH_LOC in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
group by c.DATE_INDEX, c.VEH_LOC
union
select c.DATE_INDEX, c.VEH_LOC, COUNT(j.JOB_ID) as LDS,
DECODE(SUM(j.AVG_SPOT_WEIGHT),0,SUM(r.BID_TONS),SUM(j.AVG_SPOT_WEIGHT)) as WT
from TC_3RDPARTY c, TC_3RDPARTY_JOBS j, LOAD_RATES r
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
and j.FACTORY_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
group by c.DATE_INDEX, c.VEH_LOC)
group by DATE_INDEX, VEH_LOC)
group by DATE_INDEX)

Now I need to add in the following query:
select (u.MACH_TPH_D+u.MACH_TPH_N)/2 as MTPH_TGT, (u.LABOR_TPH_D+u.LABOR_TPH_N)/2 as LTPH_TGT
from UTIL_TARGET_LOADERS u
where u.ORG_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)

The join needs to be based on VEH_LOC and DAY in the form:
... WHERE u.ORG_ID = x.VEH_LOC
AND x.DATE_INDEX between u.START_DATE and NVL(u.END_DATE,sysdate)

I had one that worked just fine when only one entity was involved; the complication is that this is a division-level report, so I have to individually resolve the subordinates and their goals before I can aggregate. This is one of two queries I need to tie together using a WITH clause so I can pivot the whole thing and present it month by month. When I try to tie it together like the query below, I get: invalid relational operator.
select ttls.DATE_INDEX, SUM(ttls.LDS) as TTLLDS, SUM(ttls.TONS) as TTLTONS, u.TARGET_LTPH, u.TARGET_MTPH
from UTIL_TARGET_LOADERS u,
(select DATE_INDEX, VEH_LOC, SUM(LDS) as LDS, SUM(WT) as TONS from
(select c.DATE_INDEX, c.VEH_LOC, COUNT(j.LOAD_JOB_ID) as LDS,
CASE WHEN SUM(w.SPOT_WEIGHT) = 0 THEN SUM(j.MAN_SPOT_WT)
ELSE SUM(w.SPOT_WEIGHT)
END as WT
from TC c, TC_LOAD_JOBS j, LOAD_RATES r, SPOT_WEIGHTS w
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
and c.VEH_LOC in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
group by c.DATE_INDEX, c.VEH_LOC
union
select c.DATE_INDEX, c.VEH_LOC, COUNT(j.JOB_ID) as LDS,
DECODE(SUM(j.AVG_SPOT_WEIGHT),0,SUM(r.BID_TONS),SUM(j.AVG_SPOT_WEIGHT)) as WT
from TC_3RDPARTY c, TC_3RDPARTY_JOBS j, LOAD_RATES r
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
and j.FACTORY_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
group by c.DATE_INDEX, c.VEH_LOC)
group by DATE_INDEX, VEH_LOC) ttls
where ttls.DATE_INDEX beween u.START_DATE and NVL(u.END_DATE,sysdate)
and ttls.VEH_LOC = u.ORG_ID
group by ttls.DATE_INDEX

I know this is a nested mess, as it has to grab the production from two tables for a range of VEH_LOC values, sum and aggregate by day and VEH_LOC, and then match that to the targets based on VEH_LOC and day. My final query aggregates the whole mess of sums and averages by month.
I'd appreciate it if someone could point me in the right direction.

Figured it out.
select ttl.DATE_INDEX, SUM(ttl.LDS) as TTLLDS, SUM(ttl.TONS) as TTLTONS,
AVG((u.MACH_TPH_D+u.MACH_TPH_N)/2) as MTPH_TGT,
AVG((u.LABOR_TPH_D+u.LABOR_TPH_N)/2) as LTPH_TGT
from
(select DATE_INDEX, VEH_LOC, SUM(LDS) as LDS, SUM(WT) as TONS from
(select c.DATE_INDEX, c.VEH_LOC, COUNT(j.LOAD_JOB_ID) as LDS,
CASE WHEN SUM(w.SPOT_WEIGHT) = 0 THEN SUM(j.MAN_SPOT_WT)
ELSE SUM(w.SPOT_WEIGHT)
END as WT
from TC c, TC_LOAD_JOBS j, LOAD_RATES r, SPOT_WEIGHTS w
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
and c.VEH_LOC in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
group by c.DATE_INDEX, c.VEH_LOC
union
select c.DATE_INDEX, c.VEH_LOC, COUNT(j.JOB_ID) as LDS,
DECODE(SUM(j.AVG_SPOT_WEIGHT),0,SUM(r.BID_TONS),SUM(j.AVG_SPOT_WEIGHT)) as WT
from TC_3RDPARTY c, TC_3RDPARTY_JOBS j, LOAD_RATES r
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
and j.FACTORY_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
group by c.DATE_INDEX, c.VEH_LOC)
group by DATE_INDEX, VEH_LOC) ttl, UTIL_TARGET_LOADERS u
where u.ORG_ID = ttl.VEH_LOC
and ttl.DATE_INDEX between u.START_DATE and NVL(U.END_DATE,sysdate)
group by ttl.DATE_INDEX, (u.LABOR_TPH_D+u.LABOR_TPH_N)/2 -
Need complex query with joins and AGGREGATE functions.
Hello Everyone ;
Good Morning to all ;
I have three tables with 2 lakh (200,000) records. I need to check query performance: how does the CBO rewrite my query in a materialized view?
I want to make a complex join with an AGGREGATE FUNCTION.
my table details
SQL> select * from tab;
TNAME      TABTYPE   CLUSTERID
DEPT       TABLE
PAYROLL    TABLE
EMP        TABLE
SQL> desc emp
Name
EID
ENAME
EDOB
EGENDER
EQUAL
EGRADUATION
EDESIGNATION
ELEVEL
EDOMAIN_ID
EMOB_NO
SQL> desc dept
Name
EID
DNAME
DMANAGER
DCONTACT_NO
DPROJ_NAME
SQL> desc payroll
Name
EID
PF_NO
SAL_ACC_NO
SALARY
BONUS
I want to make a complex query with joins and AGGREGATE functions.
Dept names are : IT , ITES , Accounts , Mgmt , Hr
GRADUATIONS are : Engineering , Arts , Accounts , business_applications
I want to select records for employees who work in IT and ITES, whose graduation is "Engineering",
with salary > 20000 and <= 22800, and bonus > 1000 and <= 1999, with counts for males and females separately.
Please help me to build such a complex query with joins ..
Thanks in advance ..
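Based only on the column listings above (assuming EID is the join key between the three tables and EGENDER holds 'M'/'F' values; both are assumptions, not stated in the post), one way to sketch the requested query:

```sql
SELECT   d.dname,
         COUNT(CASE WHEN e.egender = 'M' THEN 1 END) AS male_count,
         COUNT(CASE WHEN e.egender = 'F' THEN 1 END) AS female_count
FROM     emp     e
JOIN     dept    d ON d.eid = e.eid     -- assumption: EID links the tables
JOIN     payroll p ON p.eid = e.eid
WHERE    d.dname       IN ('IT', 'ITES')
AND      e.egraduation  = 'Engineering'
AND      p.salary  > 20000 AND p.salary <= 22800
AND      p.bonus   > 1000  AND p.bonus  <= 1999
GROUP BY d.dname;
```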
Edited by: 969352 on May 25, 2013 11:34 AM

969352 wrote:
why do you avoid providing requested & NEEDED details?
I do NOT understand what you expect. My Goal is:
1. When executing my own query I need to check the explain plan.

Please proceed to do so:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9010.htm#SQLRF01601
2. If I enable the query rewrite option, I want to check the explain plan (how the optimizer rewrites my query).

Please proceed to do so:
http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#PFGRF009
3. My only aim is QUERY PERFORMANCE with the QUERY REWRITE clause in a materialized view.

It is an admirable goal.
Best Wishes on your quest for performance improvements. -
Complex Query in toplink descriptor
I have a complex query with "Order by" and the "Max" aggregate function, and I am trying to convert it to a named query. I am using the TopLink descriptor in JDeveloper 11g.
1. Is it possible to write a named query which returns values from more than two different tables?
2. How do I specify the Max operation in the TopLink descriptor?
Here is my Query
select * from table1 t1 ,table2 t2
where
t1.id = t2.id and
t1.name ='Name' and
t1.status<>'DELETE' and
t2.a_id = (select max(a_id) from table2 t21 where
t21.a_id = t1.id
group by id)

TopLink supports sub-selects through sub-queries in TopLink Expressions, and through JPQL. I'm not sure about the JDev support, though; you may need to use a code customizer or amendment to add the named query.
James : http://www.eclipselink.org -
Bitmap index or Composite index better on a huge table
Hi All,
I got a question regarding the Bitmap index and Composite Index.
I got a table which has only two columns: CUSTOMER(group_no NUMBER, order_no NUMBER).
This is a 100-million-plus-record table with 100K group_nos and 100 million unique order numbers, i.e. each group has about 1000 order numbers.
I tested by creating a GLOBAL bitmap index on this huge table (more than 1.5 GB in size). The bitmap index that got created is under 50 MB, and when I query for a group number, say SELECT * FROM CUSTOMER WHERE group_no=67677; it takes 0.5 seconds to retrieve all 1000 rows. I checked different groups and it is the same.
Now I dropped the bitmap index and re-created a composite index on (group_no, order_no). The index is larger than the table, around 2 GB in size, and when I query using the same select statement, SELECT * FROM CUSTOMER WHERE group_no=67677; it also takes 0.5 seconds to retrieve all 1000 rows.
My question is: which one is BETTER, B-tree or bitmap index, and WHY?
Appreciate your valuable inputs on this one.
Regards,
Madhu K.

Dear,
First of all, bitmap indexes are not recommended for write-intensive OLTP applications due to the locking threat they can pose in such applications.
You told us that this table is never updated; I suppose rows are not deleted either.
Second, bitmap indexes are suitable for columns having low cardinality. The question is how we define "low cardinality"; you said that you have 100,000 distinct group_no values in a table of 100,000,000 rows.
That is a cardinality ratio of 100,000/100,000,000 = 0.001, so the group_no column might be a good candidate for a bitmap index.
You said that order_no is unique, so you have a very high cardinality on this column, and it might not be a candidate for your bitmap index.
Third, your query's WHERE clause involves only the group_no column, so why are you including both columns when testing the bitmap and the B-tree index?
Are you designing such an index in order to avoid visiting the table? In your case the table consists of only those two columns, so why not follow Hemant's advice and use an Index Organized Table?
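The Index Organized Table suggestion might look like this (a sketch; the table and constraint names are made up, assuming the same two columns as the original CUSTOMER table):

```sql
-- all data lives in the primary key index itself,
-- so there is no separate table segment to visit
CREATE TABLE customer_iot (
    group_no  NUMBER,
    order_no  NUMBER,
    CONSTRAINT customer_iot_pk PRIMARY KEY (group_no, order_no)
) ORGANIZATION INDEX;
```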
Finally, you can find more details about bitmap indexes in the following Richard Foote blog article:
http://richardfoote.wordpress.com/2008/02/01/bitmap-indexes-with-many-distinct-column-values-wotsuh-the-deal/
Best Regards
Mohamed Houri -
Complex Query which needs tuning
Hello :
I have a complex query that needs to be tuned. I have little experience in tuning SQL, so I am asking for your help.
The Query is as given below:
Database version 11g
SELECT DISTINCT P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR,
P.PRODUCT_SERIES, P.PRODUCT_CATEGORY AS Category1, SO.REGION_CODE,
SO.STORE_CODE, S.Store_Name, SOL.PRODUCT_CODE, PRI.REPLENISHMENT_TYPE,
PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE,
PRI.INVOICE_COST, SOL.FIFO_COST,
SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '' AS FNAME, '' AS LNAME,
SOL.PRICE_EXCEPTION_CODE, SOL.AS_IS,
SOL.STATUS_DATE,
Sum(SOL.QUANTITY) AS SumOfQUANTITY,
Sum(SOL.EXTENDED_PRICE) AS SumOfEXTENDED_PRICE
--Format([SALES_ORDER].[STATUS_DATE],"mmm-yy") AS [Month]
FROM PRODUCT P,
PRODUCT_MAJORS PM,
SALES_ORDER_LINE SOL,
STORE S,
SALES_ORDER SO,
SALES_ORDER_SPLITS SOS,
PRODUCT_REGIONAL_INFO PRI,
REGION_MAP R
WHERE P.product_major = PM.PRODUCT_MAJOR
and SOL.PRODUCT_CODE = P.PRODUCT_CODE
and SO.STORE_CODE = S.STORE_CODE
AND SO.REGION_CODE = S.REGION_CODE
AND SOL.REGION_CODE = SO.REGION_CODE
AND SOL.DOCUMENT_NUM = SO.DOCUMENT_NUM
AND SOL.DELIVERY_SEQUENCE_NUM = SO.DELIVERY_SEQUENCE_NUM
AND SOL.STATUS_CODE = SO.STATUS_CODE
AND SOL.STATUS_DATE = SO.STATUS_DATE
AND SO.REGION_CODE = SOS.REGION_CODE
AND SO.DOCUMENT_NUM = SOS.DOCUMENT_NUM
AND SOL.PRODUCT_CODE = PRI.PRODUCT_CODE
AND PRI.REGION_CODE = R.CORP_REGION_CODE
AND SO.REGION_CODE = R.DS_REGION_CODE
AND P.PRODUCT_MAJOR In ('STEREO','TELEVISION','VIDEO')
AND SOL.STATUS_CODE = 'D'
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
AND SO.STORE_CODE NOT IN
('10','20','30','40','70','91','95','93','94','96','97','98','99',
'9V','9W','9X','9Y','9Z','8Z',
'8Y','92','CZ','FR','FS','FT','FZ','FY','FX','FW','FV','GZ','GY','GU','GW','GV','GX')
GROUP BY
P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR, P.PRODUCT_SERIES, P.PRODUCT_CATEGORY,
SO.REGION_CODE, SO.STORE_CODE, /*S.Short Name, */
S.Store_Name, SOL.PRODUCT_CODE,
PRI.REPLENISHMENT_TYPE, PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE, PRI.INVOICE_COST,
SOL.FIFO_COST, SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '', '', SOL.PRICE_EXCEPTION_CODE,
SOL.AS_IS, SOL.STATUS_DATE
Explain Plan:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=583 Cardinality=1 Bytes=253
HASH GROUP BY Cost=583 Cardinality=1 Bytes=253
FILTER
NESTED LOOPS Cost=583 Cardinality=1 Bytes=253
HASH JOIN OUTER Cost=582 Cardinality=1 Bytes=234
NESTED LOOPS
NESTED LOOPS Cost=571 Cardinality=1 Bytes=229
NESTED LOOPS Cost=571 Cardinality=1 Bytes=207
NESTED LOOPS Cost=569 Cardinality=2 Bytes=368
NESTED LOOPS Cost=568 Cardinality=2 Bytes=360
NESTED LOOPS Cost=556 Cardinality=3 Bytes=435
NESTED LOOPS Cost=178 Cardinality=4 Bytes=336
NESTED LOOPS Cost=7 Cardinality=1 Bytes=49
HASH JOIN Cost=7 Cardinality=1 Bytes=39
VIEW Object owner=CORP Object name=index$_join$_015 Cost=2 Cardinality=3 Bytes=57
HASH JOIN
INLIST ITERATOR
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=3 Bytes=57
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMJR_PR_FK_I Cost=1 Cardinality=3 Bytes=57
VIEW Object owner=CORP Object name=index$_join$_016 Cost=4 Cardinality=37 Bytes=740
HASH JOIN
INLIST ITERATOR
INDEX RANGE SCAN Object owner=CORP Object name=PRDMNR1 Cost=3 Cardinality=37 Bytes=740
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMNR_PK Cost=4 Cardinality=37 Bytes=740
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=1 Bytes=10
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCTS Cost=171 Cardinality=480 Bytes=16800
INDEX RANGE SCAN Object owner=CORP Object name=PRD2 Cost=3 Cardinality=681
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER_LINE Cost=556 Cardinality=1 Bytes=145
BITMAP CONVERSION TO ROWIDS
BITMAP INDEX SINGLE VALUE Object owner=DS Object name=SOL2
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER Cost=4 Cardinality=1 Bytes=35
INDEX RANGE SCAN Object owner=DS Object name=SO1 Cost=3 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=REGION_MAP Cost=1 Cardinality=1 Bytes=4
INDEX RANGE SCAN Object owner=DS Object name=REGCD Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCT_REGIONAL_INFO Cost=2 Cardinality=1 Bytes=23
INDEX UNIQUE SCAN Object owner=CORP Object name=PRDRI_PK Cost=1 Cardinality=1
INDEX UNIQUE SCAN Object owner=CORP Object name=BI_STORE_INFO_PK Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=BI_STORE_INFO Cost=1 Cardinality=1 Bytes=22
VIEW Object owner=DS cost=11 Cardinality=342 Bytes=1710
HASH JOIN Cost=11 Cardinality=342 Bytes=7866
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_CORP Cost=5 Cardinality=429 Bytes=3003
NESTED LOOPS Cost=5 Cardinality=478 Bytes=7648
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_GROUP Cost=5 Cardinality=478 Bytes=5258
INDEX UNIQUE SCAN Object owner=CORP Object name=STORE_REGIONAL_INFO_PK Cost=0 Cardinality=1 Bytes=5
INDEX RANGE SCAN Object owner=DS Object name=SOS_PK Cost=2 Cardinality=1 Bytes=19
Regards,
BMP

First thing that I notice in this query is that you are using DISTINCT as well as GROUP BY.
Your GROUP BY will always give you distinct results, so why do you need the DISTINCT?
For example
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT distinct col1,col2,sum(value) from t
group by col1,col2

is always the same as
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT col1,col2,sum(value) from t
group by col1,col2

And also:
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'

It would be best to use TO_DATE when hard-coding your dates.
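For example, the date filter could be written with an explicit format mask (or ANSI date literals) so it does not depend on the session's NLS settings:

```sql
AND sol.status_date BETWEEN TO_DATE('01-JUN-2009', 'DD-MON-YYYY')
                        AND TO_DATE('30-JUN-2009', 'DD-MON-YYYY')
-- or, equivalently, ANSI date literals:
-- AND sol.status_date BETWEEN DATE '2009-06-01' AND DATE '2009-06-30'
```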
Edited by: user5495111 on Aug 6, 2009 1:32 PM -
How to store data from a complex query and only fresh hourly or daily?
We have a report which runs quite slowly (1-2 minutes) because the query is quite complicated, so we would like to run this query only once daily and store the results in a table. That way, procedures that need this complex query as a subquery can just join to this table directly to get results.
However, I am not sure what kind of object I should use to store the data for this complex query. Does data in a global temp table persist only within a transaction? I need something that can persist the data and be accessed by procedures.
Any suggestions are welcome,
Cheers

Thank you for your reply. I looked at materialized views earlier on, but have some difficulties using them. So I have some questions:
1. The complex query does not use SUM or aggregate functions; it just needs to get data from different tables based on different conditions. In this case, is it still appropriate to use a materialized view?
2. If it is, I created one, but how do I use it in my procedure? From the articles I read, it seems I can't just query this view directly. So do I need to keep the complex query in my procedure, and how will the procedure use the materialized view instead?
3. I also put the complex query in a normal view, then created a materialized view on top of this normal view (I expected the data from the complex query to be cached there). In the procedure I just SELECT * FROM my_NormalView, but it takes the same time to run even when I set QUERY_REWRITE_ENABLED to true in ALTER SESSION. So I am not sure what else I need to do to make the procedure use the materialized view instead of the normal view. Can I query the materialized view directly?
Below is the code I copied from one of the articles to create the materialized view based on my normal view:
CREATE MATERIALIZED VIEW HK3ControlDB.MW_RIRating
PCTFREE 5 PCTUSED 60
TABLESPACE "USERS"
STORAGE (INITIAL 50K NEXT 50K)
USING INDEX STORAGE (INITIAL 25K NEXT 25K)
REFRESH START WITH ROUND(SYSDATE + 1) + 11/24
NEXT NEXT_DAY(TRUNC(SYSDATE), 'MONDAY') + 15/24
enable query rewrite
AS SELECT * FROM HK3ControlDB.VW_RIRating;
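On question 3 above: a materialized view is stored as a regular table segment, so it can be queried directly, and it can be refreshed on demand with the standard DBMS_MVIEW package. A sketch, using the view name from the DDL above:

```sql
-- query the materialized view directly, like any table
SELECT * FROM HK3ControlDB.MW_RIRating;

-- refresh on demand ('C' = complete refresh)
BEGIN
    DBMS_MVIEW.REFRESH('HK3ControlDB.MW_RIRating', 'C');
END;
/
```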
Cheers -
Creating a complex query in CR designer
I'm running Crystal Reports XI Release 2 on Windows NT. I have a very complex query and cannot figure out how to make it work in CR. Basically, I've got a table A with fields containing reference codes that may or may not be present. I created a SQL query that embeds select statements in the select list (the fields between SELECT and FROM). So it would read something like: SELECT field1, field2, (select Code_Name from REF_Table where REF_Table.code = TableA.code and TableA.code is not null) FROM TableA. That works great in SQL Developer.
In CR, I tried a Command object, but that produces a Cartesian set: I get a line for every record that has the field, for each and every record in TableA. No good. I just want the value in the TableA record translated to its name when it is present. Is there a way to do this in CR? I used to be able to edit the SQL directly in CR, but now it won't let me do that. Does anyone know a better way to solve this kind of problem?

I'm not an Oracle guy, so I don't know what you can or can't do in PL/SQL.
I'm surprised that you are getting different results between SQL Developer and the CR Command. CR should pass the query, exactly as it's written, back to the server, same as SQL Developer.
In T-SQL, if you want to make sure that the sub-query returns only one row, try placing "TOP 1" after the SELECT.
(select top 1 loc.location_name
from ref_location loc
where loc.location_num=obd.location_num and odb.location_num is not null)
I'm thinking that Oracle doesn't have the TOP N option, but it does have a ROWNUM feature. So maybe...
(SELECT x.location_name
FROM (
select loc.location_name
from ref_location loc
where loc.location_num=obd.location_num and odb.location_num is not null) x
WHERE rownum = 1
ORDER BY rownum)
This is assuming that Oracle allows you to use ORDER BY in a sub-query. T-SQL only allows it if it's used in conjunction with TOP N...
Just a thought,
Jason -
Working with a complex query and trying to get rid of duplicates
I'm working with the following complex query, and I need to select vin, number, year, make, model, and state for:
2007 vehicles in certain models and VIN types
2006 vehicles in certain models and VIN types
Is there a more efficient way to write this than how I've been writing it?
select distinct vehicle.vin VIN, unit.STATION_ID STID, unit.EMBEDDED_AREA_CODE||unit.EMBEDDED_PREFIX||lpad(unit.EMBEDDED_RON,4,0)
MIN, unit.AUTHENTICATION_ID AUTHCODE, vehicle.user_veh_desc, veh_model.VEH_MODEL_DESC MODEL, veh_model.VEH_MANUF_YEAR YEAR, acct_veh.STATE STATE
from vehicle
inner join veh_unit on vehicle.vehicle_sak=veh_unit.vehicle_sak
inner join unit on veh_unit.unit_sak=unit.unit_sak
inner join vdu_profile on vdu_profile.vehicle_sak=vehicle.vehicle_sak
inner join vdu_profile_program on vdu_profile_program.VDU_PROFILE_SAK=vdu_profile.VDU_PROFILE_SAK
inner join veh_model on vehicle.VEH_MODEL = veh_model.VEH_MODEL
inner join acct_veh on acct_veh.VEHICLE_SAK = vehicle.VEHICLE_SAK
and vehicle.user_veh_desc like ('%2007%')
AND unit.unit_gen_id >= 36
AND vdu_profile_program.VDU_PROGRAM_SAK = 3
and vdu_profile_program.PREFERENCE_VALUE = 'Y'
and acct_veh.STATE in ('MN','ND','IA')
AND (veh_model.VEH_MODEL_DESC like ('%Vehicle2%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle3%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle5%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle6%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle7%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle8%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle9%'))
and ACCT_VEH.ACCT_VEH_STATUS_ID = 'A'
and (vehicle.vin like ('_______3_________')
or vehicle.vin like ('_______W_________')
or vehicle.vin like ('_______K_________')
or vehicle.vin like ('_______0_________'))
UNION
select distinct vehicle.vin VIN, unit.STATION_ID STID, unit.EMBEDDED_AREA_CODE||unit.EMBEDDED_PREFIX||lpad(unit.EMBEDDED_RON,4,0)
MIN, unit.AUTHENTICATION_ID AUTHCODE, vehicle.user_veh_desc, veh_model.VEH_MODEL_DESC MODEL, veh_model.VEH_MANUF_YEAR YEAR, acct_veh.STATE STATE
from vehicle
inner join veh_unit on vehicle.vehicle_sak=veh_unit.vehicle_sak
inner join unit on veh_unit.unit_sak=unit.unit_sak
inner join vdu_profile on vdu_profile.vehicle_sak=vehicle.vehicle_sak
inner join vdu_profile_program on vdu_profile_program.VDU_PROFILE_SAK=vdu_profile.VDU_PROFILE_SAK
inner join veh_model on vehicle.VEH_MODEL = veh_model.VEH_MODEL
inner join acct_veh on acct_veh.VEHICLE_SAK = vehicle.VEHICLE_SAK
and vehicle.user_veh_desc like ('%2006%')
AND unit.unit_gen_id >= 36
AND vdu_profile_program.VDU_PROGRAM_SAK = 3
and vdu_profile_program.PREFERENCE_VALUE = 'Y'
and acct_veh.STATE in ('MN','ND','IA')
AND (veh_model.VEH_MODEL_DESC like ('%Vehicle1%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle2%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle3%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle4%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle5%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle6%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle7%'))
and ACCT_VEH.ACCT_VEH_STATUS_ID = 'A'
and (vehicle.vin like ('_______Z_________')
or vehicle.vin like ('_______K_________'))

I am not sure how many rows you have in the tables, but I am assuming that there would be performance benefits in performing the query only once. You can combine similar coding and use an OR condition for the parts that are different. I left the 'K' vin comparison in both years for readability.
I was not able to fully test my code, but I believe that it will work as is. I hope it helps. If you are happier with the UNION, you may want to investigate the WITH clause, as it lets you query a subset of the data twice rather than the full data set.
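A WITH-clause version might look like the sketch below. This is a sketch only: the select list and join list are trimmed to a few of the poster's columns for brevity, and the regular-expression filters mirror the LIKE patterns ('.' plays the role of LIKE's '_' wildcard):

```sql
-- Sketch only: shared joins/filters are factored into "base",
-- then each year's conditions are applied to that subset once.
WITH base AS (
  SELECT vehicle.vin,
         vehicle.user_veh_desc,
         veh_model.veh_model_desc model,
         acct_veh.state
  FROM   vehicle
         INNER JOIN veh_model ON vehicle.veh_model    = veh_model.veh_model
         INNER JOIN acct_veh  ON acct_veh.vehicle_sak = vehicle.vehicle_sak
  WHERE  acct_veh.state IN ('MN', 'ND', 'IA')
  AND    acct_veh.acct_veh_status_id = 'A'
)
SELECT DISTINCT vin, user_veh_desc, model, state
FROM   base
WHERE  (    user_veh_desc LIKE '%2007%'
        AND REGEXP_LIKE (model, 'Vehicle[235-9]')
        AND REGEXP_LIKE (vin, '^.{7}[3WK0].{9}$'))
OR     (    user_veh_desc LIKE '%2006%'
        AND REGEXP_LIKE (model, 'Vehicle[1-7]')
        AND REGEXP_LIKE (vin, '^.{7}[ZK].{9}$'))
```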
-- Common code
SELECT DISTINCT
vehicle.vin,
unit.station_id stid,
unit.embedded_area_code || unit.embedded_prefix || LPAD (unit.embedded_ron, 4, 0) MIN,
unit.authentication_id authcode,
vehicle.user_veh_desc,
veh_model.veh_model_desc model,
veh_model.veh_manuf_year YEAR,
acct_veh.state
FROM vehicle
INNER JOIN veh_unit
ON vehicle.vehicle_sak = veh_unit.vehicle_sak
INNER JOIN unit
ON veh_unit.unit_sak = unit.unit_sak
INNER JOIN vdu_profile
ON vdu_profile.vehicle_sak = vehicle.vehicle_sak
INNER JOIN vdu_profile_program
ON vdu_profile_program.vdu_profile_sak = vdu_profile.vdu_profile_sak
INNER JOIN veh_model
ON vehicle.veh_model = veh_model.veh_model
INNER JOIN acct_veh
ON acct_veh.vehicle_sak = vehicle.vehicle_sak
AND unit.unit_gen_id >= 36
AND vdu_profile_program.vdu_program_sak = 3
AND vdu_profile_program.preference_value = 'Y'
AND acct_veh.state IN ('MN', 'ND', 'IA')
AND acct_veh.acct_veh_status_id = 'A'
AND
-- Annual conditions
-- Note: in REGEXP_LIKE, '.' matches any single character; LIKE's '_'
-- wildcard would be a literal underscore inside a regular expression.
(   ( -- 2007 conditions
        vehicle.user_veh_desc LIKE '%2007%'
    AND REGEXP_LIKE (veh_model.veh_model_desc, 'Vehicle[235-9]')
    AND REGEXP_LIKE (vehicle.vin, '^.{7}[3WK0].{9}$')
    )
 OR ( -- 2006 conditions
        vehicle.user_veh_desc LIKE '%2006%'
    AND REGEXP_LIKE (veh_model.veh_model_desc, 'Vehicle[1-7]')
    AND REGEXP_LIKE (vehicle.vin, '^.{7}[ZK].{9}$')
    )
) -
Working with multiple results of a complex query
Hi all!
As I "advance" in learning PL/SQL with Oracle, I now get stuck handling multiple results of a complex query. As far as I know, I cannot use a cursor here, as there is no table the cursor could point to.
Here is the concept of what I want to do (pseudocode):
foreach result in SELECT * FROM table_1, table_n WHERE key_1 = foreign_key_in_n;
-- do something with the result

Here is my attempt, which freezes the browser GUI and throws an internal database error:
declare
  type t_stock is record(
    baggage_id baggage.baggage_id%type,
    section_id sections.section_id%type,
    shelf_id   shelves.shelf_id%type);
  v_stock t_stock;
  rcnt    number(2);
begin
  dbms_output.put_line(TO_CHAR(rcnt)); -- rcnt is still NULL here
  loop
    SELECT COUNT(*) INTO rcnt FROM (
      SELECT baggage.baggage_id, sections.section_id, shelves.shelf_id
      FROM baggage, sections, shelves
      WHERE baggage.baggage_id = sections.contained_baggage_id
      AND shelves.is_connex_to_section_id = sections.section_id);
    IF rcnt <= 0 THEN
      exit;
    END IF;
    SELECT baggage.baggage_id, sections.section_id, shelves.shelf_id INTO v_stock
    FROM baggage, sections, shelves
    WHERE baggage.baggage_id = sections.contained_baggage_id
    AND shelves.is_connex_to_section_id = sections.section_id
    AND ROWNUM < 2;
    -- Bug: this compares section_id to v_stock.baggage_id (should be
    -- v_stock.section_id); if no row is updated, rcnt never decreases
    -- and the loop never terminates.
    UPDATE sections SET contained_baggage_id = NULL WHERE section_id = v_stock.baggage_id;
    commit; -- do I need that?
  end loop;
END;
/

So, is there a way to traverse a list of results from a complex query? Maybe without creating a temporary table (or is that the better way?).
regards, Alex
I reformatted the code.
pktm

Ok, here are the details:
The tables are used to model kind of a transport system. There are terminals connected with sections that may contain 1 piece of baggage. The baggage is moved by a procedure through a transport system. After each of these "moving steps", I check if the baggage is in front of the shelf it should be in.
[To be honest, the given statement doesn't contain the info about which shelf the baggage will be inserted into. That was spared out because of the lack of a working piece of code :)]
But: if we consider the fact, that a baggage is in front of such a shelf in the way, that it should be put in this shelf, then all this makes some sense.
- move baggage through a transport system
- see if you can put baggage into a shelf
In order to "put baggage in a shelf", I need to remove it from the transport section. As the transport system is not normalized, I need to update the section where the baggage was in.
Uhm... yes it's a task that doesn't make too much sense. It seems to be some kind of general spirit in university homework :)
But: the FOR r IN (statement) looks good. I'll use that.
And, the ROWNUM < 2 is used to limit the size of the result set to 1; there is no need for a specific ordering. It's just because - afaik - Oracle doesn't have a LIMIT clause. I would appreciate your help if you know a better way to limit result sets.
best regards, Alex
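A cursor FOR loop version of the block above might look like the following sketch; it iterates directly over the join, so no COUNT loop, ROWNUM trick, or temporary table is needed. The UPDATE condition (matching on section_id) is an assumption based on the intent described:

```sql
BEGIN
  -- A cursor FOR loop works over any SELECT, not just a single table.
  FOR r IN (SELECT baggage.baggage_id, sections.section_id, shelves.shelf_id
            FROM   baggage, sections, shelves
            WHERE  baggage.baggage_id = sections.contained_baggage_id
            AND    shelves.is_connex_to_section_id = sections.section_id)
  LOOP
    -- "Put the baggage into the shelf": clear the transport section.
    UPDATE sections
    SET    contained_baggage_id = NULL
    WHERE  section_id = r.section_id;
  END LOOP;
  COMMIT;  -- a single commit after the loop is usually enough
END;
/
```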