Complex query advice needed
I have a MySQL table that basically represents a log of daily events. I need to be able to get the total number of events for each week in a year. How can I efficiently query the database and tally the total records for each week in a year, given that my events are recorded daily? Any advice would be much appreciated. Thanks.
Presumably the events have a timestamp.
One possibility (you need to deal with boundary conditions):
1. Find a function that returns day of the year from a timestamp (dayInYear)
2. Determine offset from beginning of year to 'first week' in year. (Offset)
3. week = (dayInYear(timestamp) - Offset)/7
4. Write a query that does a sum using group by on the week value.
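In MySQL the date functions can collapse steps 1-4 into a single GROUP BY. A runnable sketch of the idea (using SQLite so it is self-contained; the table and column names `events`/`event_time` are hypothetical):

```python
# Weekly event totals from a daily event log.
# In MySQL the equivalent one-liner would be roughly:
#   SELECT YEARWEEK(event_time) AS wk, COUNT(*) FROM events GROUP BY wk;
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_time TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?)",
    [("2023-01-02",), ("2023-01-03",), ("2023-01-09",),
     ("2023-01-10",), ("2023-01-11",)],
)

# strftime('%Y-%W') yields year + week number (weeks start on Monday),
# so grouping by it tallies events per week.
rows = conn.execute(
    """
    SELECT strftime('%Y-%W', event_time) AS wk, COUNT(*) AS total
    FROM events
    GROUP BY wk
    ORDER BY wk
    """
).fetchall()
print(rows)
```

Note the week-numbering convention (Sunday vs Monday start, ISO weeks) differs between functions, which is exactly the "boundary conditions" caveat above.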
Similar Messages
-
Complex Query which needs tuning
Hello :
I have a complex query that needs to be tuned. I have little experience tuning SQL, so I'm asking for your help.
The Query is as given below:
Database version 11g
SELECT DISTINCT P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR,
P.PRODUCT_SERIES, P.PRODUCT_CATEGORY AS Category1, SO.REGION_CODE,
SO.STORE_CODE, S.Store_Name, SOL.PRODUCT_CODE, PRI.REPLENISHMENT_TYPE,
PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE,
PRI.INVOICE_COST, SOL.FIFO_COST,
SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '' AS FNAME, '' AS LNAME,
SOL.PRICE_EXCEPTION_CODE, SOL.AS_IS,
SOL.STATUS_DATE,
Sum(SOL.QUANTITY) AS SumOfQUANTITY,
Sum(SOL.EXTENDED_PRICE) AS SumOfEXTENDED_PRICE
--Format([SALES_ORDER].[STATUS_DATE],"mmm-yy") AS [Month]
FROM PRODUCT P,
PRODUCT_MAJORS PM,
SALES_ORDER_LINE SOL,
STORE S,
SALES_ORDER SO,
SALES_ORDER_SPLITS SOS,
PRODUCT_REGIONAL_INFO PRI,
REGION_MAP R
WHERE P.product_major = PM.PRODUCT_MAJOR
and SOL.PRODUCT_CODE = P.PRODUCT_CODE
and SO.STORE_CODE = S.STORE_CODE
AND SO.REGION_CODE = S.REGION_CODE
AND SOL.REGION_CODE = SO.REGION_CODE
AND SOL.DOCUMENT_NUM = SO.DOCUMENT_NUM
AND SOL.DELIVERY_SEQUENCE_NUM = SO.DELIVERY_SEQUENCE_NUM
AND SOL.STATUS_CODE = SO.STATUS_CODE
AND SOL.STATUS_DATE = SO.STATUS_DATE
AND SO.REGION_CODE = SOS.REGION_CODE
AND SO.DOCUMENT_NUM = SOS.DOCUMENT_NUM
AND SOL.PRODUCT_CODE = PRI.PRODUCT_CODE
AND PRI.REGION_CODE = R.CORP_REGION_CODE
AND SO.REGION_CODE = R.DS_REGION_CODE
AND P.PRODUCT_MAJOR In ('STEREO','TELEVISION','VIDEO')
AND SOL.STATUS_CODE = 'D'
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
AND SO.STORE_CODE NOT IN
('10','20','30','40','70','91','95','93','94','96','97','98','99',
'9V','9W','9X','9Y','9Z','8Z',
'8Y','92','CZ','FR','FS','FT','FZ','FY','FX','FW','FV','GZ','GY','GU','GW','GV','GX')
GROUP BY
P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR, P.PRODUCT_SERIES, P.PRODUCT_CATEGORY,
SO.REGION_CODE, SO.STORE_CODE, /*S.Short Name, */
S.Store_Name, SOL.PRODUCT_CODE,
PRI.REPLENISHMENT_TYPE, PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE, PRI.INVOICE_COST,
SOL.FIFO_COST, SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '', '', SOL.PRICE_EXCEPTION_CODE,
SOL.AS_IS, SOL.STATUS_DATE
Explain Plan:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=583 Cardinality=1 Bytes=253
HASH GROUP BY Cost=583 Cardinality=1 Bytes=253
FILTER
NESTED LOOPS Cost=583 Cardinality=1 Bytes=253
HASH JOIN OUTER Cost=582 Cardinality=1 Bytes=234
NESTED LOOPS
NESTED LOOPS Cost=571 Cardinality=1 Bytes=229
NESTED LOOPS Cost=571 Cardinality=1 Bytes=207
NESTED LOOPS Cost=569 Cardinality=2 Bytes=368
NESTED LOOPS Cost=568 Cardinality=2 Bytes=360
NESTED LOOPS Cost=556 Cardinality=3 Bytes=435
NESTED LOOPS Cost=178 Cardinality=4 Bytes=336
NESTED LOOPS Cost=7 Cardinality=1 Bytes=49
HASH JOIN Cost=7 Cardinality=1 Bytes=39
VIEW Object owner=CORP Object name=index$_join$_015 Cost=2 Cardinality=3 Bytes=57
HASH JOIN
INLIST ITERATOR
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=3 Bytes=57
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMJR_PR_FK_I Cost=1 Cardinality=3 Bytes=57
VIEW Object owner=CORP Object name=index$_join$_016 Cost=4 Cardinality=37 Bytes=740
HASH JOIN
INLIST ITERATOR
INDEX RANGE SCAN Object owner=CORP Object name=PRDMNR1 Cost=3 Cardinality=37 Bytes=740
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMNR_PK Cost=4 Cardinality=37 Bytes=740
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=1 Bytes=10
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCTS Cost=171 Cardinality=480 Bytes=16800
INDEX RANGE SCAN Object owner=CORP Object name=PRD2 Cost=3 Cardinality=681
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER_LINE Cost=556 Cardinality=1 Bytes=145
BITMAP CONVERSION TO ROWIDS
BITMAP INDEX SINGLE VALUE Object owner=DS Object name=SOL2
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER Cost=4 Cardinality=1 Bytes=35
INDEX RANGE SCAN Object owner=DS Object name=SO1 Cost=3 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=REGION_MAP Cost=1 Cardinality=1 Bytes=4
INDEX RANGE SCAN Object owner=DS Object name=REGCD Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCT_REGIONAL_INFO Cost=2 Cardinality=1 Bytes=23
INDEX UNIQUE SCAN Object owner=CORP Object name=PRDRI_PK Cost=1 Cardinality=1
INDEX UNIQUE SCAN Object owner=CORP Object name=BI_STORE_INFO_PK Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=BI_STORE_INFO Cost=1 Cardinality=1 Bytes=22
VIEW Object owner=DS cost=11 Cardinality=342 Bytes=1710
HASH JOIN Cost=11 Cardinality=342 Bytes=7866
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_CORP Cost=5 Cardinality=429 Bytes=3003
NESTED LOOPS Cost=5 Cardinality=478 Bytes=7648
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_GROUP Cost=5 Cardinality=478 Bytes=5258
INDEX UNIQUE SCAN Object owner=CORP Object name=STORE_REGIONAL_INFO_PK Cost=0 Cardinality=1 Bytes=5
INDEX RANGE SCAN Object owner=DS Object name=SOS_PK Cost=2 Cardinality=1 Bytes=19
Regards,
BMP
First thing that I notice in this query is that you are using DISTINCT as well as GROUP BY.
Your GROUP BY will always give you distinct results, so why do you need the DISTINCT?
For example
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT distinct col1,col2,sum(value) from t
group by col1,col2
is always the same as
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT col1,col2,sum(value) from t
group by col1,col2
And also:
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
It would be best to use TO_DATE with an explicit format mask when hard-coding your dates.
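The point about hard-coded dates: `BETWEEN '01-JUN-09' AND '30-JUN-09'` relies on the session's NLS_DATE_FORMAT for the implicit conversion, whereas `TO_DATE('01-JUN-09','DD-MON-RR')` pins the format down. The same principle, sketched in Python:

```python
# Parsing a date string with an explicit format instead of relying on an
# ambient default -- the same reasoning behind Oracle's TO_DATE with a
# format mask.
from datetime import datetime

# Explicit format: unambiguous regardless of environment settings.
d = datetime.strptime("01-JUN-09", "%d-%b-%y")
print(d.year, d.month, d.day)
```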
Edited by: user5495111 on Aug 6, 2009 1:32 PM -
Automating Complex jobs - advice needed (all are welcome)
Hi all,
We have successfully implemented an XML workflow, fully automated through a script which places all tables and images as per citation etc., and it is working fine (these are one-time script-writing jobs since they share the same style and layout); they are magazines & journals.
Now we are concentrating on automating books. We know that is not a one-time script, since each book has different elements, styles and boxes.
Here is where we need advice from the scripting guys on how to tackle these types of projects.
The projects we have are highly complex jobs with lots of boxes (each box has its own design). We are going to take the book projects into an XML workflow using DocBook.
All the boxes are placed in the library. We have a question: if we place the box styles in the library, is a script capable of dragging the appropriate boxes from the library and placing the text automatically? We didn't try using the library.
Sorry, I am out of ideas; anybody who has come across complex job automation, please give some ideas on how to tackle these types of projects.
Thanks.
Kavya
Do you want "general" advice or something more specific for your project?
Generally, when I have a large project, I like to break it down to smaller components. I script one simple action to make sure it all works. Then I try another part of the job and make sure all those commands work. Once I have about 60% of the core functions worked out then I start to combine them into a workflow application (I use XCode to develop Applescript applications). In each step I make sure to code it for flexibility for future changes.
As for dealing with library objects like you mention, I have not tried to work with libraries. You should make sure that you can script the library objects, if that is how you are going to fill items or build a document.
Chris -
Trying to form complex query - need help
I have a fairly complex query that I need to join the results of to show actual and goal by day. The actuals are an aggregation of records that get put in every day, while the targets are a single entry in range format indicating an active range for which the target applies. I'm working on a query that will put things together by month and I'm running into a snag. Can someone please point out where appropriate naming needs to go to get this to come together?
This one works:
(select DATE_INDEX, SUM(LDS) as TTLLDS, SUM(TONS) as TTLTONS from
(select DATE_INDEX, VEH_LOC, SUM(LDS) as LDS, SUM(WT) as TONS from
(select c.DATE_INDEX, c.VEH_LOC, COUNT(j.LOAD_JOB_ID) as LDS,
CASE WHEN SUM(w.SPOT_WEIGHT) = 0 THEN SUM(j.MAN_SPOT_WT)
ELSE SUM(w.SPOT_WEIGHT)
END as WT
from TC c, TC_LOAD_JOBS j, LOAD_RATES r, SPOT_WEIGHTS w
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
and c.VEH_LOC in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
group by c.DATE_INDEX, c.VEH_LOC
union
select c.DATE_INDEX, c.VEH_LOC, COUNT(j.JOB_ID) as LDS,
DECODE(SUM(j.AVG_SPOT_WEIGHT),0,SUM(r.BID_TONS),SUM(j.AVG_SPOT_WEIGHT)) as WT
from TC_3RDPARTY c, TC_3RDPARTY_JOBS j, LOAD_RATES r
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
and j.FACTORY_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
group by c.DATE_INDEX, c.VEH_LOC)
group by DATE_INDEX, VEH_LOC)
group by DATE_INDEX)
Now I need to add in the following query:
select (u.MACH_TPH_D+u.MACH_TPH_N)/2 as MTPH_TGT, (u.LABOR_TPH_D+u.LABOR_TPH_N)/2 as LTPH_TGT
from UTIL_TARGET_LOADERS u
where u.ORG_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
The join needs to be based on VEH_LOC and DAY in the form:
... WHERE u.ORG_ID = x.VEH_LOC
AND x.DATE_INDEX between u.START_DATE and NVL(u.END_DATE,sysdate)
I had one that worked just fine when only one entity was involved; the complication arises in that this is a division-level report, so I have to individually resolve the subordinates and their goals before I can aggregate. This is one of two queries I need to tie together using a WITH clause so I can pivot the whole thing and present it month by month. When I try to tie it together like the query below, I get: invalid relational operator.
select ttls.DATE_INDEX, SUM(ttls.LDS) as TTLLDS, SUM(ttls.TONS) as TTLTONS, u.TARGET_LTPH, u.TARGET_MTPH
from UTIL_TARGET_LOADERS u,
(select DATE_INDEX, VEH_LOC, SUM(LDS) as LDS, SUM(WT) as TONS from
(select c.DATE_INDEX, c.VEH_LOC, COUNT(j.LOAD_JOB_ID) as LDS,
CASE WHEN SUM(w.SPOT_WEIGHT) = 0 THEN SUM(j.MAN_SPOT_WT)
ELSE SUM(w.SPOT_WEIGHT)
END as WT
from TC c, TC_LOAD_JOBS j, LOAD_RATES r, SPOT_WEIGHTS w
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
and c.VEH_LOC in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
group by c.DATE_INDEX, c.VEH_LOC
union
select c.DATE_INDEX, c.VEH_LOC, COUNT(j.JOB_ID) as LDS,
DECODE(SUM(j.AVG_SPOT_WEIGHT),0,SUM(r.BID_TONS),SUM(j.AVG_SPOT_WEIGHT)) as WT
from TC_3RDPARTY c, TC_3RDPARTY_JOBS j, LOAD_RATES r
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
and j.FACTORY_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
group by c.DATE_INDEX, c.VEH_LOC)
group by DATE_INDEX, VEH_LOC) ttls
where ttls.DATE_INDEX beween u.START_DATE and NVL(u.END_DATE,sysdate)
and ttls.VEH_LOC = u.ORG_ID
group by ttls.DATE_INDEX
I know this is a nested mess, as it has to grab the production from two tables for a range of VEH_LOC values, sum and aggregate by day and VEH_LOC, and then match that to the targets based on VEH_LOC and day. My final query is to aggregate the whole mess of sums and averages by month.
I'd appreciate it if someone can point me in the right direction.
Figured it out.
select ttl.DATE_INDEX, SUM(ttl.LDS) as TTLLDS, SUM(ttl.TONS) as TTLTONS,
AVG((u.MACH_TPH_D+u.MACH_TPH_N)/2) as MTPH_TGT,
AVG((u.LABOR_TPH_D+u.LABOR_TPH_N)/2) as LTPH_TGT
from
(select DATE_INDEX, VEH_LOC, SUM(LDS) as LDS, SUM(WT) as TONS from
(select c.DATE_INDEX, c.VEH_LOC, COUNT(j.LOAD_JOB_ID) as LDS,
CASE WHEN SUM(w.SPOT_WEIGHT) = 0 THEN SUM(j.MAN_SPOT_WT)
ELSE SUM(w.SPOT_WEIGHT)
END as WT
from TC c, TC_LOAD_JOBS j, LOAD_RATES r, SPOT_WEIGHTS w
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
and c.VEH_LOC in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
group by c.DATE_INDEX, c.VEH_LOC
union
select c.DATE_INDEX, c.VEH_LOC, COUNT(j.JOB_ID) as LDS,
DECODE(SUM(j.AVG_SPOT_WEIGHT),0,SUM(r.BID_TONS),SUM(j.AVG_SPOT_WEIGHT)) as WT
from TC_3RDPARTY c, TC_3RDPARTY_JOBS j, LOAD_RATES r
where c.TC_ID = j.TC_ID and j.LOAD_RATE_ID = r.LOAD_RATE_ID
and c.DATE_INDEX between to_date('08/01/2010','MM/DD/YYYY') and to_date('07/31/2011','MM/DD/YYYY')
and j.FACTORY_ID in (select ORG_ID from ORG_ENTITIES where MNG_ORG_ID = 200)
group by c.DATE_INDEX, c.VEH_LOC)
group by DATE_INDEX, VEH_LOC) ttl, UTIL_TARGET_LOADERS u
where u.ORG_ID = ttl.VEH_LOC
and ttl.DATE_INDEX between u.START_DATE and NVL(U.END_DATE,sysdate)
group by ttl.DATE_INDEX, (u.LABOR_TPH_D+u.LABOR_TPH_N)/2 -
Need complex query with joins and AGGREGATE functions.
Hello Everyone ;
Good Morning to all ;
I have 3 tables with 2 lakh records. I need to check query performance: how does the CBO rewrite my query against a materialized view?
I want to make a complex join with an AGGREGATE FUNCTION.
my table details
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
DEPT TABLE
PAYROLL TABLE
EMP TABLE
SQL> desc emp
Name
EID
ENAME
EDOB
EGENDER
EQUAL
EGRADUATION
EDESIGNATION
ELEVEL
EDOMAIN_ID
EMOB_NO
SQL> desc dept
Name
EID
DNAME
DMANAGER
DCONTACT_NO
DPROJ_NAME
SQL> desc payroll
Name
EID
PF_NO
SAL_ACC_NO
SALARY
BONUS
I want to make complex query with joins and AGGREGATE functions.
Dept names are : IT , ITES , Accounts , Mgmt , Hr
GRADUATIONS are : Engineering , Arts , Accounts , business_applications
I want to select records of employees working in IT or ITES whose graduation is "Engineering",
with salary > 20000 and <= 22800 and bonus > 1000 and <= 1999, counting males and females separately.
Please help me to make such a complex query with joins.
Thanks in advance.
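For what it's worth, the requested query could be shaped roughly as below. This is only a sketch run against SQLite with made-up rows; the join key (EID) and the exact column semantics are assumptions read off the DESC listings above:

```python
# Join emp/dept/payroll, filter on department, graduation, salary and
# bonus ranges, then count per gender -- a toy-data sketch of the query
# asked for above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp     (eid INTEGER, egender TEXT, egraduation TEXT);
CREATE TABLE dept    (eid INTEGER, dname TEXT);
CREATE TABLE payroll (eid INTEGER, salary INTEGER, bonus INTEGER);
INSERT INTO emp VALUES (1,'M','Engineering'),(2,'F','Engineering'),
                       (3,'M','Arts'),(4,'F','Engineering');
INSERT INTO dept VALUES (1,'IT'),(2,'ITES'),(3,'IT'),(4,'Mgmt');
INSERT INTO payroll VALUES (1,21000,1500),(2,22800,1999),
                           (3,21000,1500),(4,21000,1500);
""")

rows = conn.execute("""
    SELECT e.egender, COUNT(*) AS cnt
    FROM emp e
    JOIN dept d    ON d.eid = e.eid
    JOIN payroll p ON p.eid = e.eid
    WHERE d.dname IN ('IT','ITES')
      AND e.egraduation = 'Engineering'
      AND p.salary > 20000 AND p.salary <= 22800
      AND p.bonus  > 1000  AND p.bonus  <= 1999
    GROUP BY e.egender
    ORDER BY e.egender
""").fetchall()
print(rows)
```

Only employees 1 and 2 pass all the filters here, giving one male and one female count.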
Edited by: 969352 on May 25, 2013 11:34 AM
969352 wrote:
why do you avoid providing requested & NEEDED details?
I do NOT understand what you expect.
My Goal is :
1. When executing my own query I need to check the explain plan.
Please proceed to do so:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9010.htm#SQLRF01601
2. If I enable the query rewrite option, I want to check the explain plan ( how the optimizer rewrites my query ).
Please proceed to do so:
http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#PFGRF009
3. My only aim is QUERY PERFORMANCE with the QUERY REWRITE clause in a materialized view.
That is an admirable goal.
Best Wishes on your quest for performance improvements. -
1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the stats/plan?
2. Do row source statistics give some kind of understanding of extended stats?
You can get row source statistics only *after* the SQL has been executed. An Explain Plan alone cannot give you row source statistics.
To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
Then use dbms_xplan.display_cursor
Hemant K Chitale -
Hope someone can help with this.
I have a database structure :
table : photos
Photo_ID (PK)
Title
Orientation
etc
table : keywords
Keyword_ID (PK)
Keyword
Category
table : photokeywords
Photo_ID
Keyword_ID
I have a search page that uses checkboxes allowing users to
search on keywords, that returns any photos which match the
selected keywords.
In the search page the checkboxes are created using an array,
and are displayed as :
//Display the checkbox
echo "<td width=\"2%\">";
echo "<input type=\"checkbox\"
class=\"tickbox_".$row_type."\"";
if (in_array($keyword['Keyword_ID'],$photokeywords)) { echo
" checked"; }
echo " name=\"ckbox[".$keyword['Keyword_ID']."]\"
id=\"ckbox[".$keyword['Keyword_ID']."]\">";
echo "</td>\n";
Someone kindly helped me out some time ago to write a fairly
complex (for me!) query that counts the matches, so enabling the
results to be ordered by the number of matches, ie the most
relevant matches first.
The code for this is :
//db session
mysql_select_db($database_Photolibrary, $Photolibrary);
if (isset($_GET['ckbox'])){
// get profile keys
$ckbox = array_keys($_GET['ckbox']);
// sql string
$sql = 'SELECT *, Count(*) As rank FROM photos, photokeywords
WHERE photos.Photo_ID =
photokeywords.Photo_ID AND photokeywords.Keyword_ID IN(' .
implode(',', $ckbox).') GROUP BY
photos.Photo_ID ORDER BY rank DESC, Title';
$photos = mysql_query($sql) or die(mysql_error());
$row_photos = mysql_fetch_assoc($photos);
@$totalRows = mysql_num_rows($photos);
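As an aside, the working query above splices `implode(',', $ckbox)` straight into the SQL string; the same IN (...) pattern can be built with bound placeholders instead. A sketch in Python with sqlite3 (the PDO/mysqli prepared-statement equivalents follow the same shape; the table names mirror the thread, the data is made up):

```python
# Build an IN (...) clause from a dynamic list of checkbox keys using
# bound placeholders rather than string interpolation, then rank photos
# by how many selected keywords they match.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE photos (Photo_ID INTEGER, Title TEXT);
CREATE TABLE photokeywords (Photo_ID INTEGER, Keyword_ID INTEGER);
INSERT INTO photos VALUES (1,'Sunset'),(2,'Harbour'),(3,'Forest');
INSERT INTO photokeywords VALUES (1,10),(1,11),(2,10),(3,12);
""")

ckbox = [10, 11]  # keyword IDs ticked by the user
placeholders = ",".join("?" for _ in ckbox)  # -> "?,?"
sql = f"""
    SELECT p.Photo_ID, p.Title, COUNT(*) AS rank
    FROM photos p
    JOIN photokeywords pk ON pk.Photo_ID = p.Photo_ID
    WHERE pk.Keyword_ID IN ({placeholders})
    GROUP BY p.Photo_ID, p.Title
    ORDER BY rank DESC, p.Title
"""
rows = conn.execute(sql, ckbox).fetchall()
print(rows)
```

Only the count of placeholders is interpolated; the values themselves travel as bound parameters, which sidesteps SQL injection.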
The page I have is using Tom Muck's pages list and horizontal
looper behaviours to display the results.
Everything works fine except the pages list, where there is
more than one page of results. I've emailed the ever helpful Tom
who has advised me that the query may need to be constructed in DW,
as the Pages List behaviour relies on DW's code to manage the
paging.
So I've gotten as far as the basic query code :
<?php
mysql_select_db($database_Photolibrary, $Photolibrary);
$query_photos = "SELECT *, Count(*) As rank FROM photos,
photokeywords WHERE photos.Photo_ID = photokeywords.Photo_ID AND
photokeywords.Keyword_ID IN(' . implode(',', $ckbox).') GROUP BY
photos.Photo_ID ORDER BY rank DESC, Title";
$photos = mysql_query($query_photos, $Photolibrary) or
die(mysql_error());
$row_photos = mysql_fetch_assoc($photos);
$totalRows_photos = mysql_num_rows($photos);
?>
But am unsure where I need to call the ckbox values,
previously :
if (isset($_GET['ckbox'])){
// get profile keys
$ckbox = array_keys($_GET['ckbox']);
in the DW construction.
I've been playing around with different things in the query's
variables, such as
Name : Keyword_ID
Default value : 1
Run time value : $_GET['ckbox']
but to no avail.
If anyone can help out with this it'd be much appreciated.
Cheers.
Something like this?
SQL> var P_DISCOUNT_RATE number
SQL> var P_BASE_YEAR number
SQL> var P_FUTURE_YEAR number
SQL> exec :P_DISCOUNT_RATE := 0.1; :P_BASE_YEAR := 2003; :P_FUTURE_YEAR := 2009;
PL/SQL procedure successfully completed.
SQL> select sum(npd)
2 from ( select nvl
3 ( ( select sum(sal)
4 from emp
5 where to_number(to_char(hiredate,'yyyy')) + 25 = :P_FUTURE_YEAR - l
6 )
7 , 0
8 ) / power(1 + :P_DISCOUNT_RATE, l) npd
9 from ( select level-1 l from dual connect by level <= 1 + :P_FUTURE_YEAR - :P_BASE_YEAR )
10 )
11 /
SUM(NPD)
22248,89010313503176012567447578717301
1 row selected.
I counted 7 iterations for the values 2003 and 2009, so you might want to explain a little more.
Regards,
Rob. -
I am a novice in PL/SQL and need some help writing a complex query.
I imagine the following table structure.
Type(string) Date(date) count(int)
Given a date range
The query needs to group the records on type and, for each group, return the record (type and count) that has the max date (it's just a date; no time is involved). If more than one record has the max date, then the average count should be returned for that type.
I would be glad if someone could give me any ideas on how to go about this query. Thanks in advance.
Regards.
Here's the query ... Forget the period ... What this query is supposed to do is group on assigned KI for a given date range. Then it has to get the value of the last record for that KI in the date range. If there are 2 records with the max date then it should give an average.
e.g
assignedKI / date / value
a 1st may 2008 10
b 2nd may 2008 12
c 1st may 2008 13
a 30 - apr-2008 16
b 4th may 2008 17
a 1st may 2008 20
The query should return
a 1st may 2008 15 (which is the average as there are 2 values for the max date for a)
b 4th may 2008 17
c 1st may 2008 13
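That requirement, the latest date per key with ties averaged, can be met by joining back to a per-key MAX(date) subquery. A runnable sketch using the same sample data (SQLite here; the column names are simplified stand-ins for assigned_k_i / ki_value_date / ki_value):

```python
# Latest-date row per key, averaging the value when two rows share the
# max date -- reproduces the a=15, b=17, c=13 result expected above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (assigned_ki TEXT, ki_date TEXT, ki_value REAL)")
conn.executemany("INSERT INTO kv VALUES (?,?,?)", [
    ("a", "2008-05-01", 10),
    ("b", "2008-05-02", 12),
    ("c", "2008-05-01", 13),
    ("a", "2008-04-30", 16),
    ("b", "2008-05-04", 17),
    ("a", "2008-05-01", 20),
])

rows = conn.execute("""
    SELECT k.assigned_ki, k.ki_date, AVG(k.ki_value) AS val
    FROM kv k
    JOIN (SELECT assigned_ki, MAX(ki_date) AS max_date
          FROM kv GROUP BY assigned_ki) m
      ON m.assigned_ki = k.assigned_ki AND m.max_date = k.ki_date
    GROUP BY k.assigned_ki, k.ki_date
    ORDER BY k.assigned_ki
""").fetchall()
print(rows)
```

The inner query finds the max date per key; the outer AVG collapses any ties on that date, so 'a' averages 10 and 20 to 15.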
The following query does not work:
SELECT
kiv2.assigned_k_i,
kiv2.ki_name,
kiv.period,
max( kiv2.ki_value_date) ,
avg(kiv2.ki_value)
FROM (
SELECT
assigned_k_i,
period p1,
TO_CHAR(ki_value_date, period) period,
MAX(ki_value_date) ki_value_date
FROM
v_ki_value,
(select ? period from dual)
WHERE
(status = 'APPROVED' OR status = 'AUTOAPPROVED')
AND ( trunc(to_date(to_char(ki_value_date, 'DD/MM/YYYY'), 'DD/MM/YYYY')) >= ?)
AND ( trunc(to_date(to_char(ki_value_date, 'DD/MM/YYYY'), 'DD/MM/YYYY')) <= ?)
AND (INSTR(?, TO_CHAR(assigned_k_i)) > 0)
GROUP BY
assigned_k_i,
TO_CHAR(ki_value_date, period)
) kiv,
v_ki_value kiv2
WHERE
kiv.assigned_k_i = kiv2.assigned_k_i
AND to_char(kiv.ki_value_date, kiv.p1) = to_char(kiv2.ki_value_date, kiv.p1)
GROUP BY
kiv2.assigned_k_i,
kiv2.ki_name,
kiv.period,
kiv2.ki_value
ORDER BY
kiv2.assigned_k_i -
Complex Query in toplink descriptor
I have a complex query with "ORDER BY" and the "MAX" aggregate function, and I am trying to convert it to a named query. I am using the TopLink descriptor in JDeveloper 11g.
1. Is it possible to write a named query which returns values from more than two different tables?
2. How do I specify the MAX operation in the TopLink descriptor?
Here is my Query
select * from table1 t1 ,table2 t2
where
t1.id = t2.id and
t1.name ='Name' and
t1.status<>'DELETE' and
t2.a_id = (select max(a_id) from table2 t21 where
t21.a_id = t1.id and
group by id)
TopLink supports sub-selects through sub-queries in TopLink Expressions, and through JPQL. I'm not sure about the JDev support though; you may need to use a code customizer or amendment to add the named query.
James : http://www.eclipselink.org -
How to store data from a complex query and only fresh hourly or daily?
We have a report which runs quite slowly (1-2 minutes) because the query is quite complicated, so we would like to run this query only daily and store the results in a table; procedures that need this complex query as a subquery can then just join to this table directly to get results.
However, I am not sure what kind of object I should use to store data for this complex query. Does data in a global temp table only persist within a transaction? I need something that can persist the data and be accessed by procedures.
Any suggestions are welcome,
Cheers
Thank you for your reply. I looked at the materialized view earlier on, but had some difficulties using it. So I have some questions:
1. The complex query does not use SUM or other aggregate functions; it just needs to get data from different tables based on different conditions. In this case is it still appropriate to use a materialized view?
2. If it is: I created one, but how do I use it in my procedure? From the articles I read, it seems I can't just query this view directly. So do I need to keep the complex query in my procedure, and how will the procedure use the materialized view instead?
3. I also put the complex query in a normal view, then created a materialized view on this normal view (I expected the data from the complex query to be cached there). In the procedure I just select * from my_NormalView, but it takes the same time to run even when I set QUERY_REWRITE_ENABLED to true in ALTER SESSION. So I am not sure what else I need to do to make sure the procedure uses the materialized view instead of the normal view. Can I query the materialized view directly?
Below is the code I copied from one of the articles to create the materialized view based on my normal view:
CREATE MATERIALIZED VIEW HK3ControlDB.MW_RIRating
PCTFREE 5 PCTUSED 60
TABLESPACE "USERS"
STORAGE (INITIAL 50K NEXT 50K)
USING INDEX STORAGE (INITIAL 25K NEXT 25K)
REFRESH START WITH ROUND(SYSDATE + 1) + 11/24
NEXT NEXT_DAY(TRUNC(SYSDATE), 'MONDAY') + 15/24
enable query rewrite
AS SELECT * FROM HK3ControlDB.VW_RIRating;
Cheers -
EJB-QL--- Complex Query --- Two Entites
hi
Can I write a complex query in EJB-QL
between two entities?
Has anybody written the same?
I badly require the same.
bye
RAJ
Raj -- OC4J will support EJB-QL soon but not yet. You can look at
Re: Data Integrity
for more details.
Now, as for complex query, you may still be able to do this in the current version through the orion-ejb-jar.xml. You can
include complex SQL there in the finder methods.
What exactly do you need to do?
later -- Jeff -
Complex query crashing database!
Hi
Our 10g database runs on Windows 2003 (with 32 GB RAM on server).
Whenever a user is running a particular complex query, the database just crashes suddenly.
In fact, it doesn't write anything in alert log file for any problem.
The user starts the query, it runs for sometime, then suddenly database crashes!
Only way to work there after is to restart the database.
Although I asked the user not to run that query again, it is generated by some application which may generate a similar query again!
So, how can I prevent the database from crashing again?
Will creating a resource profile to allocate finite resources help? The problem is, since the database crashes so suddenly, I can't find any info in the log files!
Thanx
The database instance crashes. We have to start ORACLE.EXE again to bring the database up.
Here is last few lines of the alert.log
Fri Feb 05 11:14:59 2010
Setting recovery target incarnation to 1
Fri Feb 05 11:14:59 2010
Successful mount of redo thread 1, with mount id 1911309359
Fri Feb 05 11:14:59 2010
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Fri Feb 05 11:14:59 2010
alter database open
Fri Feb 05 11:15:00 2010
Beginning crash recovery of 1 threads
parallel recovery started with 7 processes
Fri Feb 05 11:15:00 2010
Started redo scan
Fri Feb 05 11:15:00 2010
Completed redo scan
3886 redo blocks read, 276 data blocks need recovery
Fri Feb 05 11:15:00 2010
Started redo application at
Thread 1: logseq 179837, block 3
Fri Feb 05 11:15:00 2010
Recovery of Online Redo Log: Thread 1 Group 6 Seq 179837 Reading mem 0
Mem# 0 errs 0: F:\DB LOG FILES\ORADATA\REDO06.LOG
Mem# 1 errs 0: F:\DB LOG FILES MULTIPLEXED\ORADATA\REDO06.LOG
Fri Feb 05 11:15:00 2010
Completed redo application
Fri Feb 05 11:15:00 2010
Completed crash recovery at
Thread 1: logseq 179837, block 3889, scn 10584777673
276 data blocks read, 276 data blocks written, 3886 redo blocks read
Fri Feb 05 11:15:01 2010
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=21, OS id=5752
Fri Feb 05 11:15:01 2010
ARC0: Archival started
ARC1: Archival started
ARC1 started with pid=22, OS id=4548
Fri Feb 05 11:15:02 2010
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 179838
Thread 1 opened at log sequence 179838
Current log# 7 seq# 179838 mem# 0: F:\DB LOG FILES\ORADATA\REDO07.LOG
Current log# 7 seq# 179838 mem# 1: F:\DB LOG FILES MULTIPLEXED\ORADATA\REDO07.LOG
Successful open of redo thread 1
Fri Feb 05 11:15:02 2010
ARC1: STARTING ARCH PROCESSES
Fri Feb 05 11:15:02 2010
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
Fri Feb 05 11:15:02 2010
SMON: enabling cache recovery
Fri Feb 05 11:15:02 2010
ARC2: Archival started
ARC1: STARTING ARCH PROCESSES COMPLETE
ARC2 started with pid=23, OS id=6120
Fri Feb 05 11:15:02 2010
Successfully onlined Undo Tablespace 1.
Fri Feb 05 11:15:02 2010
SMON: enabling tx recovery
Fri Feb 05 11:15:02 2010
Database Characterset is WE8MSWIN1252
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=25, OS id=5312
Fri Feb 05 11:15:03 2010
Completed: alter database open
Fri Feb 05 11:15:03 2010
ARC1: Becoming the heartbeat ARCH
Fri Feb 05 11:16:01 2010
Shutting down archive processes
Fri Feb 05 11:16:06 2010
ARCH shutting down
ARC2: Archival stopped -
Working with a complex query and trying to get rid of duplicates
I'm working with the following complex query, and I need to
select vin, number, year, make, model, and state from
2007 vehicles in certain models, vin types
2006 vehicles in certain models, vin types
Is there a more efficient way to write this than how I've been writing it?:
select distinct vehicle.vin VIN, unit.STATION_ID STID, unit.EMBEDDED_AREA_CODE||unit.EMBEDDED_PREFIX||lpad(unit.EMBEDDED_RON,4,0)
MIN, unit.AUTHENTICATION_ID AUTHCODE, vehicle.user_veh_desc, veh_model.VEH_MODEL_DESC MODEL, veh_model.VEH_MANUF_YEAR YEAR, acct_veh.STATE STATE
from vehicle
inner join veh_unit on vehicle.vehicle_sak=veh_unit.vehicle_sak
inner join unit on veh_unit.unit_sak=unit.unit_sak
inner join vdu_profile on vdu_profile.vehicle_sak=vehicle.vehicle_sak
inner join vdu_profile_program on vdu_profile_program.VDU_PROFILE_SAK=vdu_profile.VDU_PROFILE_SAK
inner join veh_model on vehicle.VEH_MODEL = veh_model.VEH_MODEL
inner join acct_veh on acct_veh.VEHICLE_SAK = vehicle.VEHICLE_SAK
and vehicle.user_veh_desc like ('%2007%')
AND unit.unit_gen_id >= 36
AND vdu_profile_program.VDU_PROGRAM_SAK = 3
and vdu_profile_program.PREFERENCE_VALUE = 'Y'
and acct_veh.STATE in ('MN','ND','IA')
AND (veh_model.VEH_MODEL_DESC like ('%Vehicle2%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle3%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle5%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle6%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle7%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle8%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle9%'))
and ACCT_VEH.ACCT_VEH_STATUS_ID = 'A'
and (vehicle.vin like ('_______3_________')
or vehicle.vin like ('_______W_________')
or vehicle.vin like ('_______K_________')
or vehicle.vin like ('_______0_________'))
UNION
select distinct vehicle.vin VIN, unit.STATION_ID STID, unit.EMBEDDED_AREA_CODE||unit.EMBEDDED_PREFIX||lpad(unit.EMBEDDED_RON,4,0)
MIN, unit.AUTHENTICATION_ID AUTHCODE, vehicle.user_veh_desc, veh_model.VEH_MODEL_DESC MODEL, veh_model.VEH_MANUF_YEAR YEAR, acct_veh.STATE STATE
from vehicle
inner join veh_unit on vehicle.vehicle_sak=veh_unit.vehicle_sak
inner join unit on veh_unit.unit_sak=unit.unit_sak
inner join vdu_profile on vdu_profile.vehicle_sak=vehicle.vehicle_sak
inner join vdu_profile_program on vdu_profile_program.VDU_PROFILE_SAK=vdu_profile.VDU_PROFILE_SAK
inner join veh_model on vehicle.VEH_MODEL = veh_model.VEH_MODEL
inner join acct_veh on acct_veh.VEHICLE_SAK = vehicle.VEHICLE_SAK
and vehicle.user_veh_desc like ('%2006%')
AND unit.unit_gen_id >= 36
AND vdu_profile_program.VDU_PROGRAM_SAK = 3
and vdu_profile_program.PREFERENCE_VALUE = 'Y'
and acct_veh.STATE in ('MN','ND','IA')
AND (veh_model.VEH_MODEL_DESC like ('%Vehicle1%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle2%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle3%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle4%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle5%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle6%')
OR veh_model.VEH_MODEL_DESC like ('%Vehicle7%'))
and ACCT_VEH.ACCT_VEH_STATUS_ID = 'A'
and (vehicle.vin like ('_______Z_________')
or vehicle.vin like ('_______K_________'))
I am not sure how many rows you have in the tables, but I am assuming that there would be performance benefits in performing the query only once. You can combine the similar code and use an OR condition for the parts that are different. I left the 'K' VIN comparison in both years for readability.
I was not able to fully test my code, but I believe it will work as is. I hope it helps. If you are happier with the UNION, you may want to investigate the WITH clause, as it lets you query a subset of the data twice rather than the full data set.
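To illustrate the WITH-clause idea, here is a minimal runnable sketch. It uses Python's sqlite3 as a stand-in for Oracle, and a one-table toy schema (the table, columns, and sample VINs are invented for the demo, not the poster's real schema): the shared conditions are evaluated once in a named subquery, and the two year-specific branches then filter that subset with an OR.

```python
# Illustrative sketch only (sqlite3 stand-in for Oracle; toy schema, not the
# poster's): a WITH clause names the shared subset once, then the two
# year-specific filters are combined with OR against that subset.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vehicle (vin TEXT, user_veh_desc TEXT, state TEXT, status_id TEXT);
INSERT INTO vehicle VALUES
  ('1ABCDEF3GHIJKLMNO', 'My 2007 truck', 'MN', 'A'),
  ('1ABCDEFZGHIJKLMNO', 'My 2006 sedan', 'IA', 'A'),
  ('1ABCDEFQGHIJKLMNO', 'My 2006 sedan', 'CA', 'A');
""")

rows = conn.execute("""
WITH active_vehicles AS (          -- shared conditions evaluated once
    SELECT vin, user_veh_desc
    FROM vehicle
    WHERE status_id = 'A' AND state IN ('MN', 'ND', 'IA')
)
SELECT vin FROM active_vehicles
WHERE (user_veh_desc LIKE '%2007%' AND substr(vin, 8, 1) IN ('3','W','K','0'))
   OR (user_veh_desc LIKE '%2006%' AND substr(vin, 8, 1) IN ('Z','K'))
ORDER BY vin
""").fetchall()
print(rows)  # the two in-region VINs; the 'CA' row is filtered out by the CTE
```

In Oracle the same shape is WITH name AS (SELECT ...) SELECT ... FROM name, so the full join list from the answer above would live once inside the CTE.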
-- Common code
SELECT DISTINCT
vehicle.vin,
unit.station_id stid,
unit.embedded_area_code || unit.embedded_prefix || LPAD (unit.embedded_ron, 4, 0) MIN,
unit.authentication_id authcode,
vehicle.user_veh_desc,
veh_model.veh_model_desc model,
veh_model.veh_manuf_year YEAR,
acct_veh.state
FROM vehicle
INNER JOIN veh_unit
ON vehicle.vehicle_sak = veh_unit.vehicle_sak
INNER JOIN unit
ON veh_unit.unit_sak = unit.unit_sak
INNER JOIN vdu_profile
ON vdu_profile.vehicle_sak = vehicle.vehicle_sak
INNER JOIN vdu_profile_program
ON vdu_profile_program.vdu_profile_sak = vdu_profile.vdu_profile_sak
INNER JOIN veh_model
ON vehicle.veh_model = veh_model.veh_model
INNER JOIN acct_veh
ON acct_veh.vehicle_sak = vehicle.vehicle_sak
AND unit.unit_gen_id >= 36
AND vdu_profile_program.vdu_program_sak = 3
AND vdu_profile_program.preference_value = 'Y'
AND acct_veh.state IN ('MN', 'ND', 'IA')
AND acct_veh.acct_veh_status_id = 'A'
AND
-- Annual code
(
    ( -- 2007 conditions
            vehicle.user_veh_desc LIKE '%2007%'
        AND REGEXP_LIKE (veh_model.veh_model_desc, 'Vehicle[2356789]')
        AND REGEXP_LIKE (vehicle.vin, '^.{7}[3WK0].{9}$')
    )
    OR
    ( -- 2006 conditions
            vehicle.user_veh_desc LIKE '%2006%'
        AND REGEXP_LIKE (veh_model.veh_model_desc, 'Vehicle[1234567]')
        AND REGEXP_LIKE (vehicle.vin, '^.{7}[ZK].{9}$')
    )
)
-
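One pitfall when translating the LIKE patterns above into REGEXP_LIKE: in a LIKE pattern '_' means "any single character", but in a regular expression '_' is a literal underscore, so the wildcard has to become '.'. A quick check, using Python's re module as a stand-in for Oracle's POSIX-style REGEXP_LIKE (the VIN value is hypothetical):

```python
# Python's re engine shown as a stand-in for Oracle REGEXP_LIKE; the VIN is a
# made-up 17-character example.
import re

vin = '1ABCDEF3GHIJKLMNO'

# Wrong translation: '_{7}' demands seven literal underscore characters.
assert re.search(r'_{7}[3WK0]_{9}', vin) is None

# Correct translation of LIKE '_______3_________': '.' wildcards, anchored.
assert re.fullmatch(r'.{7}[3WK0].{9}', vin) is not None
```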
Working with multiple results of a complex query
Hi all!
As I "advance" in learning PL/SQL with oracle, I now get stuck in handling multiple results of a complex query. As far as I know, I cannot use a cursor here, as there is no table where the cursor could point to.
Here is the concept of what I want to do (pseudocode):
foreach result in SELECT * FROM table_1, table_n WHERE key_1 = foreign_key_in_n;
-- do something with the result
Here is my attempt, which freezes the browser GUI and throws an internal database error:
declare
type t_stock is record(
baggage_id baggage.baggage_id%type,
section_id sections.section_id%type,
shelf_id shelves.shelf_id%type
);
v_stock t_stock;
rcnt number(2);
begin
dbms_output.put_line(TO_CHAR(rcnt));
loop
SELECT COUNT(*) INTO rcnt FROM (
SELECT baggage.baggage_id, sections.section_id, shelves.shelf_id
FROM baggage, sections, shelves
WHERE baggage.baggage_id = sections.contained_baggage_id
AND shelves.is_connex_to_section_id = sections.section_id);
IF rcnt <= 0 THEN
exit;
END IF;
SELECT baggage.baggage_id, sections.section_id, shelves.shelf_id INTO v_stock
FROM baggage, sections, shelves
WHERE baggage.baggage_id = sections.contained_baggage_id
AND shelves.is_connex_to_section_id = sections.section_id
AND ROWNUM < 2;
UPDATE sections SET contained_baggage_id = NULL WHERE section_id = v_stock.baggage_id;
commit; -- do I need that?
end loop;
END;
/So, is there a way to traverse a list of results from a complex query? Maybe without creating a temporary table (or is that the better way?).
regards, Alex
I reformatted the code.
pktm
Ok, here are the details:
The tables are used to model kind of a transport system. There are terminals connected with sections that may contain 1 piece of baggage. The baggage is moved by a procedure through a transport system. After each of these "moving steps", I check if the baggage is in front of the shelf it should be in.
[To be honest, the given statement doesn't contain the info about which shelf the baggage will be inserted into. That was left out because of the lack of a working piece of code :)]
But if we consider that a piece of baggage standing in front of such a shelf is meant to be put into that shelf, then all this makes some sense:
- move baggage through a transport system
- see if you can put baggage into a shelf
In order to "put baggage in a shelf", I need to remove it from the transport section. As the transport system is not normalized, I need to update the section the baggage was in.
Uhm... yes it's a task that doesn't make too much sense. It seems to be some kind of general spirit in university homework :)
But the FOR r IN (statement) loop looks good. I'll use that.
And the ROWNUM < 2 is used to limit the size of the result to 1; there is no need for a specific ordering. It's just because, afaik, Oracle doesn't have a LIMIT clause. I would appreciate your help if you know a better way to limit result sets.
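For what it's worth, both ideas from this thread can be sketched outside PL/SQL. The snippet below uses Python's sqlite3 as a stand-in (table and values are invented for the demo): iterating a query's rows directly, with no temporary table, and limiting a result set to one arbitrary row. sqlite's LIMIT 1 plays the role of Oracle's ROWNUM < 2 (or, on Oracle 12c and later, FETCH FIRST 1 ROW ONLY).

```python
# sqlite3 stand-in for Oracle/PL-SQL; toy 'sections' table invented for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sections (section_id INTEGER, contained_baggage_id INTEGER);
INSERT INTO sections VALUES (1, 10), (2, 20), (3, NULL);
""")

# Cursor-style loop over the query results: no temporary table needed,
# analogous to PL/SQL's  FOR r IN (SELECT ...) LOOP ... END LOOP;
for (section_id, baggage_id) in conn.execute(
        "SELECT section_id, contained_baggage_id "
        "FROM sections WHERE contained_baggage_id IS NOT NULL"):
    print(section_id, baggage_id)

# Limiting to a single arbitrary row: the ROWNUM < 2 idiom.
first = conn.execute(
    "SELECT section_id FROM sections "
    "WHERE contained_baggage_id IS NOT NULL LIMIT 1").fetchone()
print(first)
```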
best regards, Alex -
Please help with this complex query, I have been working on a solution for hours now. Here is a simplified version:
I have 3 fields in tableA:field1, field2, field3
I want to return all those records that have both field2 and field3 the same but field1 different.
For example:
Field1 Field2 Field3
======================
Cars Blue 6 liter
Cars Blue 6 liter
Van Blue 6 liter
Cars Green 5 liter
Cars Green 5 liter
I need the first 3 records returned because field2 and field3 are the same but field1 is different.
Anyone have any ideas?
I think the second SELECT is unnecessary. In other words, the following should be equivalent:
SELECT
TABLEA.*
FROM
TABLEA
WHERE
(FIELD2, FIELD3)
IN
    (
    SELECT
        FIELD2,
        FIELD3
    FROM
        (
        SELECT DISTINCT
            FIELD1,
            FIELD2,
            FIELD3
        FROM
            TABLEA
        )
    GROUP BY
        FIELD2,
        FIELD3
    HAVING
        COUNT(*) > 1
    )
Assuming FIELD1 is NOT NULL, it is also possible to eliminate the SELECT DISTINCT by using COUNT(DISTINCT...), as in something like:
SELECT
TABLEA.*
FROM
TABLEA
WHERE
(FIELD2, FIELD3)
IN
    (
    SELECT
        FIELD2,
        FIELD3
    FROM
        TABLEA
    GROUP BY
        FIELD2,
        FIELD3
    HAVING
        COUNT(DISTINCT FIELD1) > 1
    )
No doubt there are yet other solutions using analytics.