Query related to GROUP BY ROLLUP, CUBE
Hello experts,
I am not getting how the below query gets executed.
{code}
SELECT department_id, job_id, manager_id, SUM(salary) FROM employees GROUP BY department_id, ROLLUP(job_id), CUBE(manager_id)
{code}
Can anybody simplify this, please?
Thanks in advance
Hi,
SShubhangi wrote:
Hello experts,
I am not getting how the below query gets executed.
{code}
SELECT department_id, job_id, manager_id, SUM(salary) FROM employees GROUP BY department_id, ROLLUP(job_id), CUBE(manager_id)
{code}
Can anybody simplify this, please?
Thanks in advance
Here's how it works.
Since the GROUP BY clause includes department_id (not modified by ROLLUP or CUBE), every row of the result set will be limited to a specific department_id.
Since the GROUP BY clause includes ROLLUP (job_id), some rows of the result set will represent a specific job_id, and some rows will be super-aggregate rows, representing all job_ids at the same time.
Since the GROUP BY clause includes CUBE (manager_id), some rows of the result set will represent a specific manager_id, and some rows will be super-aggregate rows, representing all manager_ids at the same time. (When there is only 1 expression inside the parentheses, CUBE means the same thing as ROLLUP.)
Here's how it can be simplified:
GROUP BY department_id
, CUBE (job_id, manager_id)
Here's why:
Since ROLLUP (x) means the same thing as CUBE (x) - when x is a single expression - then what you posted is equivalent to
GROUP BY department_id
, CUBE (job_id)
, CUBE (manager_id)
and
GROUP BY ...
CUBE (x)
, CUBE (y)
is just a longer way of saying
GROUP BY ...
CUBE (x, y)
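To see why the composition rule holds, here is a small sketch in Python; the rollup/cube helpers below are just a model of the grouping sets each clause generates, not any real API, and the column names are only illustrative:

```python
from itertools import product

def rollup(*cols):
    # ROLLUP(a, b, ...) generates the grouping sets (a, b, ...), ..., (a), ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube(*cols):
    # CUBE over n columns generates all 2**n subsets of the columns
    sets = [()]
    for c in cols:
        sets = [s + (c,) for s in sets] + sets
    return sets

def combine(*clauses):
    # Listing several clauses in one GROUP BY cross-multiplies their grouping sets
    return {tuple(c for part in combo for c in part) for combo in product(*clauses)}

# What was posted: department_id, ROLLUP(job_id), CUBE(manager_id)
posted = combine([('department_id',)], rollup('job_id'), cube('manager_id'))
# The simplification: department_id, CUBE(job_id, manager_id)
simplified = combine([('department_id',)], cube('job_id', 'manager_id'))
print(posted == simplified)  # True
```

Both forms produce the same four grouping sets, each anchored on department_id, which is exactly the equivalence described above.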
To understand this better, do some experiments yourself. Try different combinations of ROLLUP and CUBE, and see what results they produce.
DO NOT use the hr.employees table for your experiments; it has far too many groups for anyone to understand. Also, department_id and manager_id can be NULL, so it's hard to tell super-aggregate rows from normal aggregate rows. Even scott.emp is more complicated than necessary. I suggest you make your own table, like this:
CREATE TABLE simp_emp AS
SELECT ename
, deptno
, job
, CASE
WHEN job IN ('MANAGER', 'PRESIDENT')
THEN 'NO'
ELSE 'YES'
END AS unionized
, EXTRACT (YEAR FROM hiredate) AS hireyear
, sal
FROM scott.emp;
In this table, there are only 2 possible values for unionized, 3 values for deptno, and 4 values for hireyear, and none of those columns is ever NULL.
Similar Messages
-
Query related to filter group on matnr created in ALE distribution model
Hi All,
I have a query related to a filter group on matnr created in the ALE distribution model.
I have created a filter group on matnr in the ALE distribution model and put in the value E* (the purpose is that changes to any material number starting with E should be triggered). But it is not working.
<b>Can anybody suggest a solution for this, i.e. how to capture the E* value for material master changes so that an IDoc is triggered via change pointers using BD21?</b>
Thanks & Regards
Prabhat
Unfortunately, you cannot filter using wildcards or exclusions. You have to explicitly list each allowed value in its entirety.
In my opinion, the simplest solution would be to copy function MASTERIDOC_CREATE_SMD_MATMAS, modify it to handle your custom filtering and update the message type entry in transaction BD60. -
Query relating to the creation of Managed Service Accounts
Hi Folks
I am studying for my 70-411 exam and have a query relating to the creation of Managed Service Accounts.
I have successfully created an MSA account named 'MSATest' on a DC using:
new-adserviceaccount -name msatest -dnshostname home-dc-01 -passthru
and
add-AdcomputerServiceAccount -identity home-ap-01 -serviceaccount msatest -passthru
However, the guide that I am using says that I now need to run Install-ADServiceAccount on the host computer in the domain to install the MSA and make it available for use by services.
So on my member server (home-ap-01) I have installed the Active Directory Module for powershell and ran:
PS C:\Users\administrator.PCECORP> Install-ADServiceAccount -Identity msatest
Install-ADServiceAccount : Cannot install service account. Error Message: 'An
unspecified error has occurred'.
At line:1 char:1
+ Install-ADServiceAccount -Identity msatest
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : WriteError: (msatest:String) [Install-ADServiceA
ccount], ADException
+ FullyQualifiedErrorId : InstallADServiceAccount:PerformOperation:Install
ServiceAcccountFailure,Microsoft.ActiveDirectory.Management.Commands.Insta
llADServiceAccount
PS C:\Users\administrator.PCECORP>
However this errors. Have I misunderstood the purpose of Install-ADServiceAccount, or am I doing something wrong?
Thanks in advance for your help.
Try using the -RestrictToSingleComputer parameter when creating the service account with New-ADServiceAccount.
Gleb.
Hi Gleb
Thank you for your help, it is appreciated. That did the trick.
All the best. -
Please teach the method of acquiring the parent and child relation of the group with EDK5.2.
Hello.
The class for this is:
com.plumtree.remote.auth.ChildGroupList
Best Regards,
--------- Hiroko Iida (05/10/28 18:27) -------
Please teach the method of acquiring the parent and child relation of the group with EDK5.2. -
Query related to the transfer of the control to the other controller.
Hi all,
I have a query related to the transfer of control to another controller.
I have components A and B. From a view of component A I need to open a window which belongs to component B. The problem is that if I use create_window_for_cmp_usage( ) and the open( ) method, any code after those calls gets executed before the window opens.
I want control to come back to that code only after the window has popped up and been closed.
Eg
method ONACTIONOPEN_WINDOW .
DATA lo_window_manager TYPE REF TO if_wd_window_manager.
DATA lo_api_component TYPE REF TO if_wd_component.
DATA lo_window TYPE REF TO if_wd_window.
lo_api_component = wd_comp_controller->wd_get_api( ).
lo_window_manager = lo_api_component->get_window_manager( ).
lo_window = lo_window_manager->create_window_for_cmp_usage(
interface_view_name = 'ZHELLO_WORLD'
component_usage_name = 'USAGE_HELLO'
close_in_any_case = abap_true
message_display_mode = if_wd_window=>co_msg_display_mode_selected ).
lo_window->open( ).
data a type i.
data b type i.
a = 2.
b = 3.
a = a + b.
endmethod.
In this case I am calling the ONACTIONOPEN_WINDOW method, but a is calculated before the window opens. I want the calculations to be done after the window pops up.
How will I achieve this.
Thanks in advance.
Edited by: vaibhav nirmal on Nov 25, 2008 6:42 AM
Hi,
You will have to do your calculation as an event in your new window, or capture the closing of the new window as an event in your current view and do your calculations in that event.
Regards,
Shruthi R -
Customized heading in the Group by Rollup clause
I have a table with the following data.
SQL> select region, accname, secname, col1 from acc;
REGION ACCNAME SECNAME COL1
region1 acc1 sec1 40
region1 acc1 sec2 60
region1 acc1 sec3 80
region1 acc2 sec2 50
region1 acc2 sec5 70
region2 acc3 sec6 120
6 rows selected.
I get the following output for the below query.
SELECT region, accname, secname, SUM (col1)
FROM acc
GROUP BY ROLLUP (region, accname, secname);
REGION ACCNAME SECNAME SUM(COL1)
region1 acc1 sec1 40
region1 acc1 sec2 60
region1 acc1 sec3 80
region1 acc1 180
region1 acc2 sec2 50
region1 acc2 sec5 70
region1 acc2 120
region1 300
region2 acc3 sec6 120
region2 acc3 120
region2 120
420
12 rows selected.
I need customized headings like 'Security Total'/'Account Total'/'Regionwise Total' against the grouped amounts. Is there any way to get this? I am using Oracle 9i. Please help me.
Thanks.
We can throw in the GROUPING_ID function:
Thanks.We can throw in the GROUPING_ID function:
select case grouping_id(deptno, job)
when 0
then 'No Subtotalling'
when 1
then 'Job Subtotal'
when 2
then 'Department Total'
when 3
then 'Grand Total'
end
, case grouping(deptno)
when 1
then 'Department Total'
else to_char(deptno)
end Department
, case grouping(job)
when 1
then 'Job SubTotal'
else job
end Job
, sum(sal) salary_sum
from emp
group
by rollup (deptno, job)
I am not exactly sure what you are looking for. However, using GROUPING_ID you can determine what columns you are sub-totaling on (or rolling up by) in a given record.
If you are only interested in certain subtotals, you can use this query in an inline view that you further restrict.
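For what it's worth, GROUPING_ID is just the GROUPING() flags of its arguments packed into a bit vector, with the leftmost argument as the most significant bit. A tiny Python model of that (the 0/1 flags stand in for GROUPING(col)):

```python
def grouping_id(*flags):
    # Each flag is GROUPING(col): 1 if the column is rolled up in this row, else 0.
    # The leftmost argument becomes the most significant bit.
    gid = 0
    for f in flags:
        gid = (gid << 1) | f
    return gid

labels = {0: 'No Subtotalling', 1: 'Job Subtotal',
          2: 'Department Total', 3: 'Grand Total'}

print(labels[grouping_id(0, 0)])  # detail row       -> No Subtotalling
print(labels[grouping_id(0, 1)])  # job rolled up    -> Job Subtotal
print(labels[grouping_id(1, 1)])  # everything rolled up -> Grand Total
```

That is why the CASE on grouping_id(deptno, job) in the query above maps 0/1/2/3 to those four headings.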
Results:
CASEGROUPING_ID( DEPARTMENT JOB SALARY_SUM
No Subtotalling 10 CLERK 1300
No Subtotalling 10 MANAGER 2450
No Subtotalling 10 PRESIDENT 5000
Job Subtotal 10 Job SubTotal 8750
No Subtotalling 20 CLERK 1900
No Subtotalling 20 ANALYST 6000
No Subtotalling 20 MANAGER 2975
Job Subtotal 20 Job SubTotal 10875
No Subtotalling 30 CLERK 950
No Subtotalling 30 MANAGER 2850
No Subtotalling 30 SALESMAN 5600
Job Subtotal 30 Job SubTotal 9400
Grand Total Department Total Job SubTotal 29025
Lucas -
Query related to the all_tab_partitions
Hey experts ,
When I am executing the query
{code}
SELECT partition_name, high_value
FROM all_tab_partitions
WHERE table_name = 'XXXXX'
{code}
in the result set I can see an entry like
partition_name high_value
XXXXX_PCURRENT MAXVALUE
What is the meaning of this entry?
See the Range Partitioning section of the VLDB and Partitioning Guide:
http://docs.oracle.com/cd/B28359_01/server.111/b32024/partition.htm
Range Partitioning
Range partitioning maps data to partitions based on ranges of values of the partitioning key that you establish for each partition. It is the most common type of partitioning and is often used with dates. For a table with a date column as the partitioning key, the January-2005 partition would contain rows with partitioning key values from 01-Jan-2005 to 31-Jan-2005.
Each partition has a VALUES LESS THAN clause, which specifies a non-inclusive upper bound for the partitions. Any values of the partitioning key equal to or higher than this literal are added to the next higher partition. All partitions, except the first, have an implicit lower bound specified by the VALUES LESS THAN clause of the previous partition.
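As a rough model of that routing rule (the partition names and bounds here are made up, and MAXVALUE is modelled as +infinity so it sorts above every real key):

```python
import bisect
import math

# Illustrative partition list: (name, VALUES LESS THAN bound), ascending by bound.
MAXVALUE = math.inf
partitions = [('p_jan2005', 20050201), ('p_feb2005', 20050301), ('p_current', MAXVALUE)]

def route(key):
    bounds = [b for _, b in partitions]
    # Bounds are non-inclusive: a key equal to a bound belongs to the NEXT partition
    i = bisect.bisect_right(bounds, key)
    return partitions[i][0]

print(route(20050115))  # p_jan2005
print(route(20050201))  # p_feb2005 (equal to the bound -> next higher partition)
print(route(20991231))  # p_current (caught by the MAXVALUE partition)
```

A partition named something like XXXXX_PCURRENT with high_value MAXVALUE plays the role of p_current here: it catches every key above the last explicit bound.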
A MAXVALUE literal can be defined for the highest partition. MAXVALUE represents a virtual infinite value that sorts higher than any other possible value for the partitioning key, including the NULL value. -
Query related to finding out the Monday date
Hi All,
can you suggest any standard FM to find out the week's Monday date?
I will pass any date as the importing parameter, and I want the Monday date of that date's week.
Example: if I pass the date 01.08.2007, the FM should return the Monday date (30.07.2007).
Thanks
Amit
Hi,
One solution is to set a reference date that is a known Monday, for example 30.07.2007, and then check the day interval between it and the date you import. Let the imported date be 22.08.2007; the interval is 22.08.2007 - 30.07.2007 = 23 days, and since
23 mod 7 = 2, you know the Monday is 22.08.2007 - 2 = 20.08.2007.
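The arithmetic above can be sketched like this (in Python rather than ABAP, just to show the two equivalent calculations):

```python
from datetime import date, timedelta

KNOWN_MONDAY = date(2007, 7, 30)  # the reference Monday from the post

def monday_by_reference(d):
    # Day interval to a known Monday, mod 7, gives the offset back to Monday
    return d - timedelta(days=(d - KNOWN_MONDAY).days % 7)

def monday(d):
    # Same result without a reference date: weekday() is 0 for Monday
    return d - timedelta(days=d.weekday())

print(monday_by_reference(date(2007, 8, 22)))  # 2007-08-20
print(monday(date(2007, 8, 1)))                # 2007-07-30
```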
so all you need is to develop a form that calculates the day interval. -
MDX query for to get the data from two cubes
Hi
Can you tell me how to create an MDX query to get values from two cubes (one hierarchy from the first cube and one hierarchy from the second cube)?
Can you give me one example.
Regards,
Madhu.
Sudhan
Hi Sudhan,
According to your description, you want to retrieve data from two different cubes, right? The short answer is yes. To query multiple cubes from a single MDX statement use the LOOKUPCUBE function (you can't specify multiple cubes in your FROM statement).
The LOOKUPCUBE function will only work on cubes that utilize the same source database as the cube on which the MDX statement is running. For detailed information, please refer to the blog linked below.
Retrieving Data From Multiple Cubes in an MDX Query Using the Lookupcube Function
Regards,
Charlie Liao
TechNet Community Support -
Query to get the hierarchical results
Hi,
Please help me in writing a Query to get the hierarchical results. I want a result like follows...
course-----groupname---TotalMembers---NotStarted---INProgress---Completed
Course1---country1--------12---------------6----------3-------------3
Course1-----state11-------12---------------6----------3-------------3
Course1------District111--10---------------5----------0-------------0
Course1--------City1111----0---------------0----------0-------------0
Course1--------City1112----1---------------0----------0-------------1
Course1--------City1113----6---------------3----------2-------------1
Course1---country2--------12---------------6----------3-------------3
Course1----state21--------12---------------6----------3-------------3
Course1------District211--10---------------5----------0-------------0
Course1--------City2111----0---------------0----------0-------------0
Course1--------City2112----1---------------0----------0-------------1
Course1--------City2113----6---------------3----------2-------------1
Course2---country1--------12---------------6----------2-------------3
Course2----state11--------12---------------6----------2-------------3
Course2------District111--10---------------5----------0-------------0
Course2--------City1111----0---------------0----------0-------------0
Course2--------City1112----1---------------0----------0-------------1
Course2--------City1113----6---------------3----------1-------------2
Course2---country2--------12---------------6----------3-------------3
Course2-----state21-------12---------------6----------3-------------3
Course2------District211--10---------------5----------0-------------0
Course2--------City2111----0---------------0----------0-------------0
Course2--------City2112----1---------------0----------0-------------1
Course2--------City2113----6---------------3----------2-------------1
These are the Tables available to me.
(I have just given some example data in the tables, to give the idea)
"Groups" Table (This table gives the information of the group)
GROUPID-----NAME-------PARENTID
1---------Universe--------1
2---------country1--------1
3---------state11---------2
4---------District111-----3
5---------City1111--------4
6---------City1112--------4
7---------City1113--------4
8---------country2--------1
9---------state21---------8
10--------District211-----9
11--------City2111--------10
12--------City2112--------10
13--------City2113--------10
"Users" Table (This table provides the user information)
userID----FIRSTNAME---LASTNAME
user1-----------Jim-------Carry
user2-----------Tom-------lee
user3-----------sunny-----boo
user4-----------mary------mall
"User-Group" Tables (This table provides the relation between the groups
and the members)
GROUPID---userID
3-------------user1
3-------------user2
3-------------user4
4-------------user5
5-------------user6
5-------------user7
user_score (This table provides the user scores of different courses)
USERID----course-----STATUS
user1------course1-----complete
user1------course2-----NotStarted
user2------course1-----NotStarted
user2------course2-----complete
user3------course1-----complete
user3------course2-----InProgress
user4------course2-----complete
user4------course1-----NotStarted
I will explain the first four lines of the above result.
Course1---country1--------12---------------6----------4-------------2
Course1-----state11-------12---------------6----------4-------------2
Course1------District111--10---------------5----------3-------------2
Course1--------City1111----0---------------0----------0-------------0
Course1--------City1112----1---------------0----------0-------------1
Course1--------City1113----6---------------3----------2-------------1
# "city1111" group has 0 members
# "city1112" group has 1 member (1 member completed the course1)
# "city1113" group has 6 members(3 members notStarted,2 members
InProgress,1 member completed the course1)
# "District111" is the parent group of the above three groups, and has 3
members of its own (2 members NotStarted, 1 member InProgress for course1). But this
group has child groups, so the scores of this group have to roll up the
child groups' scores also. That's why it has 2+3+0+0=5 members Not
Started, 1+2+0+0=3 members InProgress, 0+0+1+1=2 members completed.
# "state11" group also same as the above group.
I am able to get the group hierarchy by using "Connect By" like follows
"select name,groupid,parentid from groups_info start with groupid=1 connect by parentid = prior groupid;"
But i want to get the result as i have mentioned in the begining of this discussion.
I am using oracle 8i (oracle8.1.7).
Thank you for any help
Srinivas M
This may not be exactly what you want,
but it should be fairly close:
SET LINESIZE 100
SET PAGESIZE 24
COLUMN groupname FORMAT A20
SELECT INITCAP (user_score.course) "course",
groupnames.name "groupname",
COUNT (*) "TotalMembers",
SUM (NVL (DECODE (UPPER (user_score.status), 'NOTSTARTED', 1), 0)) "NotStarted",
SUM (NVL (DECODE (UPPER (user_score.status), 'INPROGRESS', 1), 0)) "InProgress",
SUM (NVL (DECODE (UPPER (user_score.status), 'COMPLETE', 1), 0)) "Completed"
FROM user_score,
user_group,
(SELECT ROWNUM rn,
name,
groupid
FROM (SELECT LPAD (' ', 2 * LEVEL - 2) || name AS name,
groupid
FROM groups
START WITH groupid = 1
CONNECT BY PRIOR groupid = parentid)) groupnames
WHERE user_score.userid = user_group.userid
AND user_group.groupid IN
(SELECT groupid
FROM groups
START WITH groupid = groupnames.groupid
CONNECT BY PRIOR groupid = parentid)
GROUP BY user_score.course, groupnames.name, groupnames.rn
ORDER BY user_score.course, groupnames.rn
I entered the minimal test data that you
provided and a bit more and got this result
(It was formatted as you requested,
but I don't know if it will display properly
on this post, or wrap around):
course groupname TotalMembers NotStarted InProgress Completed
Course1 Universe 6 2 0 4
Course1 country1 5 2 0 3
Course1 state11 5 2 0 3
Course1 District111 2 0 0 2
Course1 City1112 1 0 0 1
Course1 City1113 1 0 0 1
Course1 country2 1 0 0 1
Course1 state21 1 0 0 1
Course1 District211 1 0 0 1
Course1 City2113 1 0 0 1
Course2 Universe 5 1 1 3
Course2 country1 4 1 1 2
Course2 state11 4 1 1 2
Course2 District111 1 0 1 0
Course2 City1113 1 0 1 0
Course2 country2 1 0 0 1
Course2 state21 1 0 0 1
Course2 District211 1 0 0 1
Course2 City2113 1 0 0 1
Here is the test data that I used, in case
anyone else wants to play with it:
create table groups
(groupid number,
name varchar2(15),
parentid number)
insert into groups
values (1,'Universe',null)
insert into groups
values (2,'country1',1)
insert into groups
values (3,'state11',2)
insert into groups
values (4,'District111',3)
insert into groups
values (5,'City1111',4)
insert into groups
values (6,'City1112',4)
insert into groups
values (7,'City1113',4)
insert into groups
values (8,'country2',1)
insert into groups
values (9,'state21',8)
insert into groups
values (10,'District211',9)
insert into groups
values (11,'City2111',10)
insert into groups
values (12,'City2112',10)
insert into groups
values (13,'City2113',10)
create table user_group
(groupid number,
userid varchar2(5))
insert into user_group
values (3,'user1')
insert into user_group
values (3,'user2')
insert into user_group
values (3,'user4')
insert into user_group
values (4,'user5')
insert into user_group
values (5,'user6')
insert into user_group
values (5,'user7')
insert into user_group
values (7,'user8')
insert into user_group
values (13,'user9')
insert into user_group
values (11,'use11')
insert into user_group
values (6,'use6')
create table user_score
(userid varchar2(5),
course varchar2(7),
status varchar2(10))
insert into user_score
values ('use6','course1','complete')
insert into user_score
values ('user9','course1','complete')
insert into user_score
values ('user9','course2','complete')
insert into user_score
values ('user8','course1','complete')
insert into user_score
values ('user8','course2','InProgress')
insert into user_score
values ('user1','course1','complete')
insert into user_score
values ('user1','course2','NotStarted')
insert into user_score
values ('user2','course1','NotStarted')
insert into user_score
values ('user2','course2','complete')
insert into user_score
values ('user3','course1','complete')
insert into user_score
values ('user3','course2','InProgress')
insert into user_score
values ('user4','course2','complete')
insert into user_score
values ('user4','course1','NotStarted') -
Hi,
Can anyone help me to get the following output with site sub-totals and a grand total within one query? (I know it's possible either with analytic functions or the new GROUP BY ROLLUP/CUBE functionality, but I do not know how to use them.)
QUERY:
SELECT /*+ index(con con_start_end_i) */
con.id Contract,
con.property_id Property,
con.site_code,
con.ctyp_code,
con.crea_code,
con.purchase_price Puarchase_Amt,
con.commission_amount Comm_Amt,
con.start_date,
con.target_exchange_date,
con.target_completion_date,
con.empl_id,
con.sold_by
FROM
t_contracts con
WHERE
EXISTS ( SELECT 1 FROM t_employee_sites es WHERE es.emp_login_name = USER and es.site_code = con.site_code ) AND
con.empl_id LIKE '%' AND
NVL(con.sold_by,-1) = NVL(NULL, nvl(con.sold_by,-1)) AND
con.ctyp_code = 'SALE' AND
INSTR('WITHDRAW,PULL_OUT,', NVL(con.crea_code,'*')) = 0 AND
con.deleted IS NULL AND
(con.completed_date IS NULL OR con.completed_date > TO_DATE('13/06/2005', 'dd/mm/rrrr')) AND
con.start_date BETWEEN TO_DATE('13/06/2005', 'dd/mm/rrrr') and TRUNC(SYSDATE)
ORDER BY
con.site_code;
Sample Output:
Please find the following sample output:
Contract, Property, Crea_Code, Purchase_Amt, Comm_Amt, Start_Date, Sold_By
Site: <Site_Code1>
xxxx xxxx xxxx 9,999,999.99 9,999,999.99 dd/mm/yyyy xx
xxxx xxxx xxxx 9,999,999.99 9,999,999.99 dd/mm/yyyy xx
Total for Site: <Site_Code1> <Site1_Pur> <Site1_Comm>
No of Property: <Site1_Count>
Site: <Site_Code2>
xxxx xxxx xxxx 9,999,999.99 9,999,999.99 dd/mm/yyyy xx
xxxx xxxx xxxx 9,999,999.99 9,999,999.99 dd/mm/yyyy xx
Total for Site: <Site_Code2> <Site2_Pur> <Site2_Comm>
No of Property: <Site2_Count>
Total: <Tot_Pur> <Tot_Comm>
Total Property: <Tot_Count>
Your help would be appreciated to sort this out ASAP.
Kind Regards,
B Tanna
London
UK
Hi Riedelme,
Thanks for your reply. Following it, I went through the GROUP BY functionality. I tried different ways and found the solution. I have to use
GROUP BY GROUPING SETS(con.site_code, (con.site_code, con.id, con.property_id, con.ctyp_code, con.crea_code, con.start_date, con.target_exchange_date, con.target_completion_date, con.empl_id, con.sold_by), ())
But now my worry is what happens if the number of columns grows, say, to 35. There should be some way of writing a good query. Would it be possible using an analytic function?
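Just to illustrate what those three grouping sets do, here is a toy Python model (the sites and amounts are made up, not the real t_contracts data): the wide set gives the detail rows, (site_code) gives the per-site totals, and () gives the grand total. Adding more detail columns only widens the first set; the sub-total and grand-total sets stay the same.

```python
from collections import defaultdict

# (site_code, contract_id, purchase_price) -- made-up sample rows
rows = [('S1', 'C1', 100.0), ('S1', 'C2', 250.0), ('S2', 'C3', 75.0)]

def group_sum(rows, key_positions):
    # Aggregate SUM(price) grouped by the given key columns
    out = defaultdict(float)
    for r in rows:
        out[tuple(r[p] for p in key_positions)] += r[2]
    return dict(out)

detail = group_sum(rows, (0, 1))   # the wide grouping set: one row per contract
by_site = group_sum(rows, (0,))    # (site_code): the site sub-totals
grand = group_sum(rows, ())        # (): the grand total

print(by_site)  # {('S1',): 350.0, ('S2',): 75.0}
print(grand)    # {(): 425.0}
```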
Experts please respond.. -
Discoverer Summary Adviser and GROUP BY / ROLLUP
Can anyone answer the following for me?
1. Does the Discoverer Summary Adviser ever create materialized views using the new ROLLUP function, to create summaries that calculate all the subtotals along all hierarchies and dimensions?
2. If the Discoverer Summary Adviser cannot create them using GROUP BY and ROLLUP, but I generated them manually or using OEM, would the Discoverer queries ever be eligable for query rewrite, as I presume the SQL generated by Discoverer doesn't use the GROUP BY / ROLLUP feature found in Oracle 9i?
Any advice gratefully received.
Mark -
Can you please explain how this query is fetching the rows?
Here is a query to find the top 3 salaries. But the thing is that I am not able to understand how it works to get the correct data. How does the data in the aliased tables P1 and P2 get compared? Can you please explain in some steps?
SELECT MIN(P1.SAL) FROM PSAL P1, PSAL P2
WHERE P1.SAL >= P2.SAL
GROUP BY P2.SAL
HAVING COUNT (DISTINCT P1.SAL) <=3 ;
here is the data i used :
SQL> select * from psal;
NAME SAL
able 1000
baker 900
charles 900
delta 800
eddy 700
fred 700
george 700
george 700
Regards,
Renu
... Please help me in understanding the query.
Your query looks like anything but a Top-N query.
If you run it in steps and analyze the output at the end of each step, then you should be able to understand what it does.
Given below is some brief information on the same:
test@ora>
test@ora> --
test@ora> -- Query 1 - using the non-equi (theta) join
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT p1.sal AS p1_sal, p1.NAME AS p1_name, p2.sal AS p2_sal,
12 p2.NAME AS p2_name
13 FROM psal p1, psal p2
14 WHERE p1.sal >= p2.sal;
P1_SAL P1_NAME P2_SAL P2_NAME
1000 able 1000 able
1000 able 900 baker
1000 able 900 charles
1000 able 800 delta
1000 able 700 eddy
1000 able 700 fred
1000 able 700 george
1000 able 700 george
900 baker 900 baker
900 baker 900 charles
900 baker 800 delta
900 baker 700 eddy
900 baker 700 fred
900 baker 700 george
900 baker 700 george
900 charles 900 baker
900 charles 900 charles
900 charles 800 delta
900 charles 700 eddy
900 charles 700 fred
900 charles 700 george
900 charles 700 george
800 delta 800 delta
800 delta 700 eddy
800 delta 700 fred
800 delta 700 george
800 delta 700 george
700 eddy 700 eddy
700 eddy 700 fred
700 eddy 700 george
700 eddy 700 george
700 fred 700 eddy
700 fred 700 fred
700 fred 700 george
700 fred 700 george
700 george 700 eddy
700 george 700 fred
700 george 700 george
700 george 700 george
700 george 700 eddy
700 george 700 fred
700 george 700 george
700 george 700 george
43 rows selected.
test@ora>
test@ora>
This query joins PSAL with itself using a non equi-join. Take each row of PSAL p1 and see how it compares with PSAL p2. You'll see that:
- Row 1 with sal 1000 is >= to all sal values of p2, so it occurs 8 times
- Row 2 with sal 900 is >= to 7 sal values of p2, so it occurs 7 times
- Row 3: 7 times again... and so on.
- So, total no. of rows are: 8 + 7 + 7 + 5 + 4 + 4 + 4 + 4 = 43
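The counting argument is easy to verify outside the database; a quick Python check of the same salary list:

```python
# Same salary list as the PSAL test data
sals = [1000, 900, 900, 800, 700, 700, 700, 700]

# The theta join keeps every (p1, p2) pair with p1.sal >= p2.sal
pairs = [(p1, p2) for p1 in sals for p2 in sals if p1 >= p2]
print(len(pairs))  # 43

# How often each p1 row survives the join condition
counts = [sum(1 for p2 in sals if p1 >= p2) for p1 in sals]
print(counts)  # [8, 7, 7, 5, 4, 4, 4, 4]
```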
test@ora>
test@ora> --
test@ora> -- Query 2 - add the GROUP BY
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT p2.sal AS p2_sal,
12 COUNT(*) as cnt,
13 COUNT(p1.sal) as cnt_p1_sal,
14 COUNT(DISTINCT p1.sal) as cnt_dist_p1_sal,
15 MIN(p1.sal) as min_p1_sal,
16 MAX(p1.sal) as max_p1_sal
17 FROM psal p1, psal p2
18 WHERE p1.sal >= p2.sal
19 GROUP BY p2.sal;
P2_SAL CNT CNT_P1_SAL CNT_DIST_P1_SAL MIN_P1_SAL MAX_P1_SAL
700 32 32 4 700 1000
800 4 4 3 800 1000
900 6 6 2 900 1000
1000 1 1 1 1000 1000
test@ora>
test@ora>
Now, if you group by p2.sal in the output of query 1, and check the number of distinct p1.sal, min of p1.sal etc., you see that for p2.sal values 800, 900 and 1000, there are 3 or fewer distinct p1.sal values associated.
So, the last 3 rows are the ones you are interested in, essentially. As follows:
test@ora>
test@ora> --
test@ora> -- Query 3 - GROUP BY and HAVING
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT p2.sal AS p2_sal,
12 COUNT(*) as cnt,
13 COUNT(p1.sal) as cnt_p1_sal,
14 COUNT(DISTINCT p1.sal) as cnt_dist_p1_sal,
15 MIN(p1.sal) as min_p1_sal,
16 MAX(p1.sal) as max_p1_sal
17 FROM psal p1, psal p2
18 WHERE p1.sal >= p2.sal
19 GROUP BY p2.sal
20 HAVING COUNT(DISTINCT p1.sal) <= 3;
P2_SAL CNT CNT_P1_SAL CNT_DIST_P1_SAL MIN_P1_SAL MAX_P1_SAL
800 4 4 3 800 1000
900 6 6 2 900 1000
1000 1 1 1 1000 1000
test@ora>
test@ora>
test@ora>
That's what you are doing in that query.
The thing is - in order to find out Top-N values, you simply need to scan that one table PSAL. So, joining it to itself is not necessary.
A much simpler query is as follows:
test@ora>
test@ora>
test@ora> --
test@ora> -- Top-3 salaries - distinct or not; using ROWNUM on ORDER BY
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT sal
12 FROM (
13 SELECT sal
14 FROM psal
15 ORDER BY sal DESC
16 )
17 WHERE rownum <= 3;
SAL
1000
900
900
test@ora>
test@ora>
test@ora>
And for the Top-3 distinct salaries:
test@ora>
test@ora> --
test@ora> -- Top-3 DISTINCT salaries; using ROWNUM on ORDER BY on DISTINCT
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT sal
12 FROM (
13 SELECT DISTINCT sal
14 FROM psal
15 ORDER BY sal DESC
16 )
17 WHERE rownum <= 3;
SAL
1000
900
800
test@ora>
test@ora>
test@ora>
You may also want to check out the RANK and DENSE_RANK analytic functions.
RANK:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions123.htm#SQLRF00690
DENSE_RANK:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions043.htm#SQLRF00633
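The difference between the two queries above (ROWNUM over ORDER BY versus ROWNUM over DISTINCT, which is the distinction DENSE_RANK ranks by) comes down to whether duplicate salaries count; in Python terms:

```python
sals = [1000, 900, 900, 800, 700, 700, 700, 700]

# ROWNUM <= 3 over ORDER BY sal DESC: the top three ROWS, duplicates kept
top3_rows = sorted(sals, reverse=True)[:3]
print(top3_rows)  # [1000, 900, 900]

# ROWNUM <= 3 over DISTINCT sal ordered DESC (DENSE_RANK-style): top three VALUES
top3_values = sorted(set(sals), reverse=True)[:3]
print(top3_values)  # [1000, 900, 800]
```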
HTH
isotope -
Oracle 10g Reports: Control Break using Group By Rollup
Oracle 10g Control-Break Reporting
Hello. I am trying to create a report using Group By Rollup. The report should look like:
MONTH   WEEK     CODE        TOTAL
JULY    WEEK 1   K1              2
                 K1              2
                 SUB:            4
        WEEK 2   K1              2
                 K1              2
                 SUB:            4
        WEEK 3   K1              2
                 K1              2
                 SUB:            4
        WEEK 4   K1              2
                 K1              2
                 SUB:            4
                 MTH Tot:       16
AUG     WEEK 1   K1              2
                 K1              2
                 SUB:            4
        WEEK 2   K1              2
                 K1              2
                 SUB:            4
        WEEK 3   K1              2
                 K1              2
                 SUB:            4
        WEEK 4   K1              2
                 K1              2
                 SUB:            4
                 MTH Tot:       16
                 GRND TOT:      32
Not sure how to group the codes into the correct month/week, and the labels are a problem. Here is the table/data and my poor attempt at using GROUP BY ROLLUP. I'm still working on it, but any help would be very welcome.
create table translog
(
ttcd VARCHAR2(5) not null,
stime TIMESTAMP(6) not null,
etime TIMESTAMP(6)
);
insert into translog ( TTCD, STIME, ETIME)
values ('T4', '01-JUL-12 12.00.01.131172 AM', '01-JUL-12 12.00.16.553256 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T4', '01-JUL-12 12.00.17.023083 AM', '01-JUL-12 12.00.37.762118 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('K2', '01-JUL-12 12.00.38.262408 AM', '01-JUL-12 12.00.40.686331 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('U1', '01-JUL-12 12.00.40.769385 AM', '01-JUL-12 12.00.41.281300 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('SK4', '08-JUL-12 12.00.41.746175 AM', '08-JUL-12 12.00.51.775487 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '08-JUL-12 12.00.53.274039 AM', '08-JUL-12 12.00.53.802800 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1','08-JUL-12 12.00.54.340423 AM', '08-JUL-12 12.01.03.767422 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '08-JUL-12 12.01.04.699631 AM', '08-JUL-12 12.01.04.744194 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '15-JUL-12 12.01.04.796472 AM', '15-JUL-12 12.01.04.817773 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-JUL-12 12.01.04.865641 AM', '15-JUL-12 12.01.05.154274 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-JUL-12 12.01.05.200749 AM', '15-JUL-12 12.01.05.508953 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '15-JUL-12 12.01.06.876433 AM', '15-JUL-12 12.01.07.510032 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-JUL-12 12.01.07.653582 AM', '15-JUL-12 12.01.07.686764 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '15-JUL-12 12.01.07.736894 AM', '15-JUL-12 12.01.08.163321 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.08.297696 AM', '22-JUL-12 12.01.08.562933 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '22-JUL-12 12.01.08.583805 AM', '22-JUL-12 12.01.08.620702 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.08.744821 AM', '22-JUL-12 12.01.08.987524 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.09.096695 AM', '22-JUL-12 12.01.09.382138 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.09.530122 AM', '22-JUL-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '01-AUG-12 12.01.10.550234 AM', '01-AUG-12 12.01.10.581535 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '01-AUG-12 12.01.10.628756 AM', '01-AUG-12 12.01.10.656373 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '01-AUG-12 12.01.10.740711 AM', '01-AUG-12 12.01.10.768745 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '01-AUG-12 12.01.10.819635 AM', '01-AUG-12 12.01.10.900849 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '01-AUG-12 12.01.09.530122 AM', '01-AUG-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '08-AUG-12 12.01.11.231004 AM', '08-AUG-12 12.01.24.073071 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '08-AUG-12 12.01.24.202920 AM', '08-AUG-12 12.01.24.244538 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '08-AUG-12 12.01.24.292334 AM', '08-AUG-12 12.01.24.318852 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '08-AUG-12 12.01.24.362643 AM', '08-AUG-12 12.01.24.397662 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1','15-AUG-12 12.01.09.530122 AM', '15-AUG-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-AUG-12 12.01.24.414572 AM', '15-AUG-12 12.01.24.444615 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L2W', '15-AUG-12 12.01.24.478739 AM', '15-AUG-12 12.01.25.020265 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('K4', '15-AUG-12 12.01.25.206721 AM', '15-AUG-12 12.01.25.729493 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '15-AUG-12 12.01.25.784746 AM', '15-AUG-12 12.01.39.226921 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1','15-AUG-12 12.01.39.517953 AM', '15-AUG-12 12.01.50.775295 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.01.57.676446 AM', '22-AUG-12 12.01.58.252945 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.01.09.530122 AM', '22-AUG-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.01.58.573242 AM', '22-AUG-12 12.02.10.651922 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.02.11.209305 AM', '22-AUG-12 12.02.24.140456 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('SK4','22-AUG-12 12.02.25.204035 AM', '22-AUG-12 12.02.25.580603 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1','22-AUG-12 12.02.25.656474 AM', '22-AUG-12 12.02.25.689249 AM');
select
decode(grouping(trunc(stime)),1, 'Grand Total: ', trunc(stime)) AS "DATE"
,decode(grouping(ttcd),1, 'SUB TTL:', ttcd) CODE,count(*) TOTAL
from translog
group by rollup (trunc(stime),ttcd);
Thank you.
830894 wrote:
Oracle 10g Contol-Break Reporting
Hello. I am trying to create a report using Group By Rollup. The report should look like:
A couple of things:
1) Your test data setup does not match your expected output, and
2) layout of data (like control breaks) should ideally be carried out using reporting tools.
Here is what you are probably looking for:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> create table translog
2 (
3 ttcd VARCHAR2(5) not null,
4 stime TIMESTAMP(6) not null,
5 etime TIMESTAMP(6)
6 );
Table created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T4', '01-JUL-12 12.00.01.131172 AM', '01-JUL-12 12.00.16.553256 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T4', '01-JUL-12 12.00.17.023083 AM', '01-JUL-12 12.00.37.762118 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('K2', '01-JUL-12 12.00.38.262408 AM', '01-JUL-12 12.00.40.686331 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('U1', '01-JUL-12 12.00.40.769385 AM', '01-JUL-12 12.00.41.281300 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('SK4', '08-JUL-12 12.00.41.746175 AM', '08-JUL-12 12.00.51.775487 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '08-JUL-12 12.00.53.274039 AM', '08-JUL-12 12.00.53.802800 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1','08-JUL-12 12.00.54.340423 AM', '08-JUL-12 12.01.03.767422 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '08-JUL-12 12.01.04.699631 AM', '08-JUL-12 12.01.04.744194 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '15-JUL-12 12.01.04.796472 AM', '15-JUL-12 12.01.04.817773 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-JUL-12 12.01.04.865641 AM', '15-JUL-12 12.01.05.154274 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-JUL-12 12.01.05.200749 AM', '15-JUL-12 12.01.05.508953 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '15-JUL-12 12.01.06.876433 AM', '15-JUL-12 12.01.07.510032 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-JUL-12 12.01.07.653582 AM', '15-JUL-12 12.01.07.686764 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '15-JUL-12 12.01.07.736894 AM', '15-JUL-12 12.01.08.163321 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.08.297696 AM', '22-JUL-12 12.01.08.562933 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '22-JUL-12 12.01.08.583805 AM', '22-JUL-12 12.01.08.620702 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.08.744821 AM', '22-JUL-12 12.01.08.987524 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.09.096695 AM', '22-JUL-12 12.01.09.382138 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.09.530122 AM', '22-JUL-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '01-AUG-12 12.01.10.550234 AM', '01-AUG-12 12.01.10.581535 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '01-AUG-12 12.01.10.628756 AM', '01-AUG-12 12.01.10.656373 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '01-AUG-12 12.01.10.740711 AM', '01-AUG-12 12.01.10.768745 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '01-AUG-12 12.01.10.819635 AM', '01-AUG-12 12.01.10.900849 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '01-AUG-12 12.01.09.530122 AM', '01-AUG-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '08-AUG-12 12.01.11.231004 AM', '08-AUG-12 12.01.24.073071 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '08-AUG-12 12.01.24.202920 AM', '08-AUG-12 12.01.24.244538 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '08-AUG-12 12.01.24.292334 AM', '08-AUG-12 12.01.24.318852 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '08-AUG-12 12.01.24.362643 AM', '08-AUG-12 12.01.24.397662 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1','15-AUG-12 12.01.09.530122 AM', '15-AUG-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-AUG-12 12.01.24.414572 AM', '15-AUG-12 12.01.24.444615 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L2W', '15-AUG-12 12.01.24.478739 AM', '15-AUG-12 12.01.25.020265 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('K4', '15-AUG-12 12.01.25.206721 AM', '15-AUG-12 12.01.25.729493 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '15-AUG-12 12.01.25.784746 AM', '15-AUG-12 12.01.39.226921 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1','15-AUG-12 12.01.39.517953 AM', '15-AUG-12 12.01.50.775295 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.01.57.676446 AM', '22-AUG-12 12.01.58.252945 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.01.09.530122 AM', '22-AUG-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.01.58.573242 AM', '22-AUG-12 12.02.10.651922 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.02.11.209305 AM', '22-AUG-12 12.02.24.140456 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('SK4','22-AUG-12 12.02.25.204035 AM', '22-AUG-12 12.02.25.580603 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1','22-AUG-12 12.02.25.656474 AM', '22-AUG-12 12.02.25.689249 AM');
1 row created.
SQL> commit ;
Commit complete.
SQL> select case when row_number() over (partition by mth order by mth, wk, ttcd) = 1 then mth end as "Month"
2 ,case when row_number() over (partition by mth, wk order by mth, wk, ttcd) = 1 and wk is not null then 'WEEK '||wk end as "Week"
3 ,case when gttcd = 1 and gwk = 0 and gmth = 0 then 'SUB:'
4 when gttcd = 1 and gwk = 1 and gmth = 0 then 'Month Total:'
5 when gttcd = 1 and gwk = 1 and gmth = 1 then 'Grand Total:'
6 else ttcd
7 end as "Code"
8 ,cnt as "Total"
9 from (
10 select trunc(stime, 'MM') as mth, to_char(stime, 'W') as wk, ttcd, count(*) as cnt
11 ,grouping(trunc(stime, 'MM')) as gmth, grouping(to_char(stime, 'W')) as gwk, grouping(ttcd) as gttcd
12 from translog
13 group by rollup(trunc(stime, 'MM'), to_char(stime, 'W'), ttcd)
14 order by trunc(stime, 'MM'), to_char(stime, 'W'), ttcd
15 ) ;
Month Week Code Total
01-JUL-12 WEEK 1 K2 1
T4 2
U1 1
SUB: 4
WEEK 2 L1 2
SK4 1
T1 1
SUB: 4
WEEK 3 L1 1
S2 2
T1 3
SUB: 6
WEEK 4 L1 4
T1 1
SUB: 5
Month Total: 19
01-AUG-12 WEEK 1 L1 1
S2 1
T1 3
SUB: 5
WEEK 2 L1 1
S2 1
T1 2
SUB: 4
WEEK 3 K4 1
L1 3
L2W 1
T1 1
SUB: 6
WEEK 4 L1 4
SK4 1
T1 1
SUB: 6
Month Total: 21
Grand Total: 40
35 rows selected.
-
GROUP BY ROLLUP with daily stats
I am pretty new to the GROUP BY ROLLUP clause and I want to roll up daily stats into monthly subtotals. When I try to roll up this data by month, it rolls it up day by day, then gives me a subtotal of all months together, and then a grand total at the bottom for all interface_ids. Is it possible to take daily stats and roll them up by month? I'm confused.
set linesize 300
col company_info for a30
col site_name for a25
col Month for a20
select c.company_info,
s.site_name,
i.interface_name,
to_char(to_date(a.day_month_year,'MM/DD/YY'), 'Month DD YYYY') month,
nvl(sum(adm_total_hits),0) adm_hits,
nvl(sum(end_total_hits),0) end_hits,
nvl(sum(bandwidth_bytes),0) bandwidth
from siteinfo.companies c,
siteinfo.sites s,
siteinfo.interfaces i,
apache_daily_stats a
where c.company_id = s.company_id
and s.site_id = i.site_id
and i.interface_id = a.interface_id(+)
and i.interface_id = 4511
group by rollup (c.company_info, s.site_name, i.interface_name, a.day_month_year);
set linesize 80
Thanks in advance,
ReedK
Yes, I would like to sum by month. This is as opposed to building in-line views to sum up all of the daily stats into a month subtotal. To get 12 months of data using daily stats, I would have to build 12 in-line views. Yuk! I would much rather use this cool GROUP BY ROLLUP feature to sum daily stats by month.
Any clarity on this function would be appreciated.
Thanks
ReedK
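One direction the poster could try (an untested sketch, assuming the same tables and that day_month_year holds an 'MM/DD/YY' string, as the posted query implies): put the month truncation inside the ROLLUP list instead of the raw day, so the detail level of the rollup is already the month and the subtotals land at month boundaries:

```sql
-- Sketch: roll up at month granularity by grouping on TRUNC(date, 'MM')
-- rather than the raw day. Table and column names are taken from the
-- original post; the date format mask is an assumption.
select c.company_info,
       s.site_name,
       i.interface_name,
       to_char(trunc(to_date(a.day_month_year, 'MM/DD/YY'), 'MM'),
               'Month YYYY')               as month,
       nvl(sum(a.adm_total_hits), 0)       as adm_hits,
       nvl(sum(a.end_total_hits), 0)       as end_hits,
       nvl(sum(a.bandwidth_bytes), 0)      as bandwidth
from   siteinfo.companies  c,
       siteinfo.sites      s,
       siteinfo.interfaces i,
       apache_daily_stats  a
where  c.company_id   = s.company_id
and    s.site_id      = i.site_id
and    i.interface_id = a.interface_id(+)
and    i.interface_id = 4511
group by rollup (c.company_info, s.site_name, i.interface_name,
                 trunc(to_date(a.day_month_year, 'MM/DD/YY'), 'MM'));
```

Because the rollup's innermost expression is the truncated month, each "detail" row is already a monthly sum, and the super-aggregate rows become per-interface, per-site, per-company, and grand totals.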