Discoverer Summary Adviser and GROUP BY / ROLLUP
Can anyone answer the following for me?
1. Does the Discoverer Summary Adviser ever create materialized views using the new ROLLUP function, to create summaries that calculate all the subtotals along all hierarchies and dimensions?
2. If the Discoverer Summary Adviser cannot create them using GROUP BY and ROLLUP, but I generated them manually or using OEM, would the Discoverer queries ever be eligible for query rewrite? I presume the SQL generated by Discoverer doesn't use the GROUP BY / ROLLUP feature found in Oracle 9i.
Any advice gratefully received.
Mark
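On question 2: in general, rewrite eligibility does not require the incoming SQL to use ROLLUP. A materialized view built with GROUP BY ROLLUP that also carries GROUPING_ID and the supporting aggregates can, with query rewrite enabled, service both plain GROUP BY queries and subtotal queries. A minimal sketch, with illustrative object names (not anything Discoverer itself generates):

```sql
-- Illustrative names; a fact table sales_fact(region, product, amount) is assumed.
CREATE MATERIALIZED VIEW sales_rollup_mv
  BUILD IMMEDIATE
  REFRESH FORCE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT region,
       product,
       GROUPING_ID(region, product) AS gid,   -- marks which subtotal level each row belongs to
       SUM(amount)                  AS sum_amount,
       COUNT(amount)                AS cnt_amount,
       COUNT(*)                     AS cnt_all
FROM sales_fact
GROUP BY ROLLUP (region, product);
```

With QUERY_REWRITE_ENABLED = TRUE, a plain `SELECT region, SUM(amount) FROM sales_fact GROUP BY region` can then be answered from the region-level subtotal rows of this one MV.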
Similar Messages
-
Rank only summary rows from group by rollup
I would like to rank only the total rows generated by a group by rollup, but not rank the rows within each group by bucket. Furthermore, I'd like to order by rank in descending order and include the unranked rows within each group.
My query looks something like this:
select customer, month, sum (sell)
from sales
group by rollup (customer, month)
order by customer, month desc nulls first
The results look like:
customer month sum(sell)
<summary rowA> 1,200
custA JAN 100
custA FEB 100
custA MAR 100
custA DEC 100
<summary rowB> 600
custB JAN 50
custB FEB 50
custB MAR 50
custB DEC 50
What I would like to do is:
select dense_rank() over (order by sum(sell) desc <rollup rows only>) as rank,
customer, month, sum (sell)
from sales
group by rollup (customer, month)
order by <summary row rank desc> customer, month desc nulls first
The results would look like:
rank customer month sum(sell)
1 <summary rowA> 1,200
custA JAN 100
custA FEB 100
custA MAR 100
custA DEC 100
2 <summary rowB> 600
custB JAN 50
custB FEB 50
custB MAR 50
custB DEC 50
Any advice?
Thank you
Like this?
sql>
select decode(job, null,
              decode(deptno, null, null,
                     dense_rank() over (order by lst desc) - 1),
              null) rank,
       deptno, job, sm
from (select deptno, job, sm,
             last_value(sm) over (partition by deptno order by null) lst
      from (select deptno, job, sum(sal) sm
            from emp
            group by rollup(deptno, job)));
RANK DEPTNO JOB SM
29025
20 CLERK 1900
20 MANAGER 2975
1 20 10875
20 ANALYST 6000
30 CLERK 950
30 MANAGER 2850
2 30 9400
30 SALESMAN 5600
10 CLERK 1300
10 MANAGER 2450
10 PRESIDENT 5000
3 10 8750 -
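For comparison, a variant that avoids the last_value trick: GROUPING() distinguishes the deptno subtotal rows produced by the ROLLUP, and partitioning the DENSE_RANK by the grouping flags ranks only those rows. A sketch against the same emp table:

```sql
-- Rank only the deptno subtotal rows (GROUPING(job) = 1, GROUPING(deptno) = 0);
-- detail rows and the grand total get a NULL rank.
SELECT CASE
         WHEN GROUPING(deptno) = 0 AND GROUPING(job) = 1 THEN
           DENSE_RANK() OVER (PARTITION BY GROUPING(deptno), GROUPING(job)
                              ORDER BY SUM(sal) DESC)
       END AS rnk,
       deptno, job, SUM(sal) AS sm
FROM emp
GROUP BY ROLLUP (deptno, job);
```

The PARTITION BY on the grouping flags means the DENSE_RANK only competes among subtotal rows, so the detail rows never disturb the ranking.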
Group by rollup and order by question
hi all,
normally, using ORDER BY would sort data based on the column. However, I am using a GROUP BY ROLLUP with an ORDER BY, and my SELECT contains some CASE grouping... see my code below.
SELECT
CASE GROUPING(org) + GROUPING(sbu)
WHEN 2 THEN 'Grand Total'
WHEN 1 THEN NULL
ELSE NULLIF(org, 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ')
END org,
CASE GROUPING(org) + GROUPING(sbu)
WHEN 2 THEN NULL
WHEN 1 THEN 'Total'
ELSE ipl_rep_sylk_pkg.get_sbu_name(sbu)
END sbu,
SUM(ft) ft,
SUM(pt) pt,
NULLIF(NVL(SUM(ft),0) + NVL(SUM(pt),0),0) tot,
NULLIF(NVL(SUM(ter),0) + NVL(SUM(ret),0),0) ter_ret,
SUM(adds) adds,
SUM(ins) ins,
SUM(outs) outs,
SUM(fft) fft,
SUM(fpt) fpt,
NULLIF(NVL(SUM(fft),0) + NVL(SUM(fpt),0),0) ftot,
SUM(bft) bft,
SUM(bpt) bpt,
NULLIF(NVL(SUM(bft),0) + NVL(SUM(bpt),0),0) btot
FROM xxipl_ceo_personnel_tbl
GROUP BY ROLLUP (org, sbu)
ORDER BY org, sbu
not sure why but my data always start with the Grand Total Row and ends with all the Total Rows.
is there something wrong with my query?
thanks
allen
A.Sandiego wrote:
not sure why but my data always start with the Grand Total Row and ends with all the Total Rows.
Allen,
You have aliased your columns with the same name as the original columns org and sbu. Your ORDER BY now orders by your aliased expressions instead of the original columns. And the "G" of "Grand Total" comes before the "T" of "Total" in the alphabet ;-)
Regards,
Rob. -
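A common fix (assuming the intent is detail rows first, each org's 'Total' row after its details, and 'Grand Total' last) is to sort on the GROUPING flags of the underlying columns rather than on the aliased display values:

```sql
-- GROUPING() looks at the original columns, so the display aliases
-- no longer interfere: detail rows (flag 0) sort before subtotal
-- rows (flag 1), and the Grand Total row sorts last of all.
ORDER BY GROUPING(org), org, GROUPING(sbu), sbu
```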
Hi
I am using group by rollup ((....)) in order to list my data and then provide a summary at the end.
Please take this example:
with mydata as (
select 'ABC' Part_no, sysdate-10 as Trans_Date, 100 Amount from dual union all
select 'ABC' Part_no, sysdate-10 as Trans_Date, -100 Amount from dual union all
select 'ABD' Part_no, sysdate-10 as Trans_Date, 100 Amount from dual union all
select 'ABD' Part_no, sysdate-10 as Trans_Date, 100 Amount from dual union all
select 'ABE' Part_no, sysdate-10 as Trans_Date, 100 Amount from dual union all
select 'ABE' Part_no, sysdate-10 as Trans_Date, 50 Amount from dual
)
select
part_no,
trans_date,
sum(amount) total
from
mydata
group by rollup ((part_no, trans_date))
The trouble is that in this example, because some rows have no unique identifier, they are grouped and therefore summarised (by nature of GROUP BY).
However, I want to retain the detail and also have a summary at the end, as a grand total.
I get:
ABC 5/13/2008 6:10:41 AM 0
ABD 5/13/2008 6:10:41 AM 200
ABE 5/13/2008 6:10:41 AM 150
350
I want:
ABC 5/13/2008 6:10:41 AM 100
ABC 5/13/2008 6:10:41 AM -100
ABD 5/13/2008 6:10:41 AM 100
ABD 5/13/2008 6:10:41 AM 100
ABE 5/13/2008 6:10:41 AM 100
ABE 5/13/2008 6:10:41 AM 50
350
Can this be done?
Thanks
You can get the total with something like this:
WITH mydata AS
(SELECT 'ABC' part_no, SYSDATE - 10 AS trans_date, 100 amount
FROM DUAL
UNION ALL
SELECT 'ABC' part_no, SYSDATE - 10 AS trans_date, -100 amount
FROM DUAL
UNION ALL
SELECT 'ABD' part_no, SYSDATE - 10 AS trans_date, 100 amount
FROM DUAL
UNION ALL
SELECT 'ABD' part_no, SYSDATE - 10 AS trans_date, 100 amount
FROM DUAL
UNION ALL
SELECT 'ABE' part_no, SYSDATE - 10 AS trans_date, 100 amount
FROM DUAL
UNION ALL
SELECT 'ABE' part_no, SYSDATE - 10 AS trans_date, 50 amount
FROM DUAL)
SELECT part_no, trans_date, amount, sum (amount) over (partition by 1) total
FROM mydata
PART_NO TRANS_DATE AMOUNT TOTAL
ABC 5/13/2008 1:30:28 AM 100 350
ABC 5/13/2008 1:30:28 AM -100 350
ABE 5/13/2008 1:30:28 AM 50 350
ABD 5/13/2008 1:30:28 AM 100 350
ABE 5/13/2008 1:30:28 AM 100 350
ABD 5/13/2008 1:30:28 AM 100 350
---or
SELECT part_no, trans_date, amount, sum (amount) over (order by part_no) total
FROM mydata -
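For the requirement above, if a single query that keeps every detail row and appends one grand total is preferred over the analytic column, GROUPING SETS with a synthetic unique key is another option. A sketch, using ROWNUM as the key so the two identical ABD rows are not collapsed:

```sql
-- Each detail row is its own group (rn is unique), and the empty
-- grouping set () adds a single grand-total row at the end.
WITH mydata AS (
  SELECT 'ABC' part_no, SYSDATE - 10 AS trans_date, 100 amount FROM DUAL UNION ALL
  SELECT 'ABC', SYSDATE - 10, -100 FROM DUAL UNION ALL
  SELECT 'ABD', SYSDATE - 10, 100 FROM DUAL UNION ALL
  SELECT 'ABD', SYSDATE - 10, 100 FROM DUAL UNION ALL
  SELECT 'ABE', SYSDATE - 10, 100 FROM DUAL UNION ALL
  SELECT 'ABE', SYSDATE - 10, 50 FROM DUAL
)
SELECT part_no, trans_date, SUM(amount) AS amount
FROM (SELECT m.*, ROWNUM AS rn FROM mydata m)
GROUP BY GROUPING SETS ((rn, part_no, trans_date), ())
ORDER BY part_no, rn;
```

The grand-total row has NULL part_no and rn, so with the default NULLS LAST it sorts to the bottom.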
Selection criteria are not applied to summary fields on group footers.
I wonder if anyone can help me with this problem. I am using Crystal reports version 11.2, and my data source is a Sql Server view.
The records on the view have a date field, and I have selected all records within a given date range in "Selection Formulas".
The records are then grouped, and the Crystal summary facility used to summarise number fields on the group footers.
So for example, if my view contains four records, one with field "amount" = 2, one with field "amount" = 8, one with field "amount" = 6, one with field "amount" = 3, but only the first two records are within the valid date range, you would expect to see the first two records listed out at detail level, then field "amount" summarised at group level, with a summarised value of 10.
ie .... record1 2
record2 8
group level total 10
This works fine when I run the report using Crystal's "print preview" facility. However, when the report is run from within an application written in C#.NET, the selection criteria are not applied to the summary field, so you get ..
record1 2
record2 8
group level total 19
I tried putting the date selection criteria at both record and group level, but that did not work.
I googled the problem and found an article explaining that Crystal first performs the record-level selection, then it creates the groups and totals up any summary fields, and only then does it apply the group-level selection criteria, which can lead to problems like the one I have described above. However, since I have put my date selection criteria at both record and group level, I do not understand why I still get the problem.
In one report I got round this problem by creating a formula that returned zero if the record date was outside of the valid date range, and returned the number field to be summarised if the date was valid, then summarising that formula, instead of summarising the number field directly.
In other reports I created one formula to set a shared variable as zero, then another formula to accumulate it at detail record level, then another formula to display the variable at the group footer. In other words, I did not bother with the Crystal summary facility at all, but created my own summary facility.
While googling the problem to see what other people did in this situation, I noticed that most fixes used variations of the "shared variables and formulae" fix to get round the problem.
The problem is that I have lots of complex reports and it will take ages to replace the summarised fields with shared variables and formulae. The reports were initially tested with "Print Preview" so we did not notice this problem until the C#.Net application was ready to use them. And I can't believe that you are simply meant to ignore the summary facility and re-invent the wheel by doing it all manually.
Please tell me that there is something simple that I have been doing wrong!!! If I have not given enough information for you to answer, please let me know.
Thanks,
Anne-Marie
Hi, Anne-Marie;
You may be running into a common issue that is documented here:
[SelectionFormula|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_erq/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233303334333833393335%7D.do]
Regards,
Jonathan
Edited by: Jonathan Parminter on Mar 16, 2009 8:03 AM
-
Using member sorting and grouping with two reports sharing rows
Hi!
I have a problem with one report and I need some help or advise here.
I have two dimensions with dynamic expansion in rows (PRODUCT, MATERIAL), and I use the option Member Sorting and Grouping at Member selector to obtain the total amount of PRODUCT group by PARENTH1:
PRODUCT MATERIAL AMOUNT
TOTAL PROD_A-X 100
PROD_A_A MAT1 22
PROD_A_B MAT1 50
PROD_A_A MAT2 28
TOTAL PROD_B-X 120
PROD_B_A MAT1 30
PROD_B_A MAT2 50
PROD_B_B MAT2 40
This works fine if I only have one report, but I need to create another one sharing the row and page axis with the Default Report, when I do that the option Member Sorting and Grouping doesn't work. I really need to have two reports with shared rows and also the summation by PARENTH1, how can I do that?
Thank you very much
-
Radio button 'Names' and grouping
Ok, I am still working on my Monster form...
Probably pushing the envelope for what Forms can do...but hey...
My "buttons" are all named "Button.section.number" (ex. Button.1.13) so that my validate scripts can quickly go through and build other tables and things....
In each section, there are 3-4 buttons where I want at-least-one to be selected.
Obviously, a "Radio Button" will give me ONLY one selected, but I am having NAME problems: when I try to name them so the scripts will work, the buttons come out of the group (and crash Acrobat).
so...in a section, what I want to have is:
Button.2.0 (.hidden, .value = "Grounds")
Button.2.1 \
Button.2.2 want these 4 in an exclusive group, where an entry is ".required" for at least one of them
Button.2.3 /
Button.2.4 A validate script runs later that looks for "blank" .required fields and reports out using "userName" in an "appAlert"
Button.2.5
Button.2.16 (.hidden, .value= "MAX")
My Validate scripts go through all the sections and all the buttons finding boxes that are checked that represent problems, and copy each field's "Export Data" to a summary page -AND- look for empty .required fields (and other things)
I would LIKE the user to be forced to select one of : Serviceable, Deficient, Hazard or N/A for each section. (Button.1.1, Button.1.2, Button.1.3, Button.1.4 in this example)
Do I need a "button_up" script (the same in all 4 buttons) that makes sure there is at least one selected, clearing/setting the .required flags on the fields? That seems messy. I could start the form with the last choice (N/A) checked, but that seems error prone.
The current validate script checks for empty .required fields, so if I make all 4 fields as required I will get 4 error messages, not ideal....
Ideas Please??
Idea as I was writing this.... Since Button.section.0 is hidden, I can toggle the .required flag on that? But I still need a script that will set/clear the .required when the user select -any- of the checkboxes/buttons
In case people are interested, all my sections are different sizes, but Button.section.0 contains the "Section Name" ('Grounds' for example) and when (Button.Section.number).value = "MAX" you have reached the end of -that- section. It works very nicely and is FAST because the build is only done when requested, not continuously..
PS...this works with PC Reader ONLY. Android Reader does not support this, but qPDF for Android does.
I had the same problem; hopefully my solution will be of help to anyone searching this problem via a search engine.
http://forums.adobe.com/message/4347266#4347266
Individual radio buttons cannot be named (only the overall exclusion group can be named) or else rawValues will not export.
-
Currency and Groups Dimensions in Legal Consolidation, BPC 7.5
Dear BPC Experts:
I am using BPC 7.5 NW SP03 and building Legal Consolidation now. According to the SAP Library, http://help.sap.com/saphelp_bpc75_nw/helpdata/en/bpc_nw_index.htm, in 7.5 the currency and groups dimensions are required and should hold currency members and consolidation members separately. I have a couple of questions about this design.
1. Are GROUPS and CURRENCY dimensions both included in Legal Consolidation Application at the same time?
2. How do I move data in the currency dimension into the corresponding members in the GROUPS dimension?
For example: the GROUPS member CG1 (its currency is USD). How do I let data in currency USD move to CG1?
3. In the Legal Consolidation data package, should I modify the prompt dimension from CURRENCY_DIM to GROUP_DIM? Because 7.5 has the CURRENCY dimension in its package settings by default, yet the consolidation package is based on the GROUP perspective.
Can anyone give me any ideas on this? I really appreciate your help.
Best Regards,
Fred Cheng
Hi Collet,
Thanks for such a quick response. Based on your answer,
The group-type dimension is used for storing the group component of legal consolidation. The group-type dimension represents the relationship of entities for a given consolidation result. This group is consolidated in a single currency, so there is no need to have another dimension. As of Planning and Consolidation 7.5, you can continue to use the currency-type dimension for this purpose, or you can split it into a group-type dimension (type G) and use a pure currency-type dimension (type R) to allow reporting in multiple group currencies.
If I go by that, will it be sufficient to have only the Group dimension and ignore the RptCurrency field in BPC 7.5?
As of now, my Group dimension looks like this:
Member ID EVDESCRIPTION GROUP_CURRENCY PARENT_GROUP ENTITY
G_CG1 XXXX USD E_CG1
G_CG2 XXX USD G_CG1 E_CG2
G_CG3 XXXX USD G_CG2 E_CG3
So now, do I need to enter any RptCurrency members under the Group Member ID column to have RptCurrency ignored once and for all? By the way, our RptCurrency and Group currency is USD only.
Please advise, as right now we are not able to get currency conversion functioning properly. I mean, when I executed FXTRANSLGF, my DM status shows Successful, but it says 0 Submitted 0 Success 0 Fail.
I am unable to crack this. I tried playing with Groups and RptCurrency, but got the same result: Successful but no records. We have maintained all exchange rates correctly.
-
Hierarchical Query with Rollup Sum (CONNECT BY with GROUP BY ROLLUP)
Hi all,
Imagine the following scenario: i have an ACCOUNT table which holds accounts and their hierarchy (currently 5 levels), and a BALANCE table which holds balance records for the accounts. Only CHILD accounts (level 5) have records in the BALANCE table. Simple example:
CREATE TABLE accounts (account_code VARCHAR2(30), parent_account VARCHAR2(30), account_desc VARCHAR2(400));
CREATE TABLE balances (account_code VARCHAR2(30), balance_amount NUMBER(18,2));
INSERT INTO ACCOUNTS VALUES ('TOT',NULL,'Total');
INSERT INTO ACCOUNTS VALUES ('ANA1','TOT','General Expenses');
INSERT INTO ACCOUNTS VALUES ('4801001','ANA1','Small Expenses');
INSERT INTO ACCOUNTS VALUES ('4801002','ANA1','Transportation');
INSERT INTO ACCOUNTS VALUES ('ANA2','TOT','Health Expenses');
INSERT INTO ACCOUNTS VALUES ('4802001','ANA2','Healthcare');
INSERT INTO ACCOUNTS VALUES ('4802002','ANA2','Facilities');
INSERT INTO BALANCES VALUES ('4801001', 2000);
INSERT INTO BALANCES VALUES ('4801002', 1000);
INSERT INTO BALANCES VALUES ('4802001', 3000);
INSERT INTO BALANCES VALUES ('4802002', 4000);
What I need in this scenario is to run a hierarchical query where, for each node, I compute the sum of all its children (for LEAF nodes, which are the child accounts, this sum is the value in BALANCES itself). The final result would be:
TOT -> 10000
ANA1 -> 3000
4801001 -> 2000
4801002 -> 1000
ANA2 -> 7000
4802001 -> 3000
4802002 -> 4000
I have tried various ways, and found a workaround which works for a fixed number of levels: it builds the hierarchy, computes SYS_CONNECT_BY_PATH, splits it with a regular expression, and uses GROUP BY ROLLUP to compute the higher levels. Then I assemble it again with the computed values. Below is the example query:
select level
     , NVL(vfinal.child_acct, 'TOTAL') || ' - ' ||
       ( SELECT account_desc
           FROM accounts
          WHERE account_code = vfinal.child_acct ) account_name
     , to_char(sum_bal, 'fm999g999g999g990') as rolled_up_balance
from (
    select coalesce( princ.lvl3, princ.lvl2, princ.lvl1 ) child_acct
         , DECODE ( princ.lvl2, NULL
                  , NULL
                  , DECODE ( princ.lvl3, NULL
                           , princ.lvl1, princ.lvl2 ) ) parent_acct
         , sum(princ.balance_amount) sum_bal
    from (
        select hier.lvl1
             , hier.lvl2
             , hier.lvl3
             , hier.parent_account
             , hier.account_code child_acct
             , bal.balance_amount
        from ( select level
                    , sys_connect_by_path( account_code, '/' ) hierarchy_acct
                    , REGEXP_SUBSTR(sys_connect_by_path( account_code, '/' ), '[^/]+', 1, 3) lvl3
                    , REGEXP_SUBSTR(sys_connect_by_path( account_code, '/' ), '[^/]+', 1, 2) lvl2
                    , REGEXP_SUBSTR(sys_connect_by_path( account_code, '/' ), '[^/]+', 1, 1) lvl1
                    , account_code
                    , parent_account
               from accounts acc
               where level <= 3
               start with parent_account is null
               connect by nocycle prior account_code = parent_account
               order siblings by parent_account
             ) hier
           , balances bal
        where bal.account_code = hier.account_code
    ) princ
    where princ.lvl1 is not null
    group by rollup ( princ.lvl1
                    , princ.lvl2
                    , princ.lvl3 )
    order by princ.lvl1
           , princ.lvl2
           , princ.lvl3
) vfinal
where child_acct is not null
start with parent_acct is null
connect by nocycle prior child_acct = parent_acct;
All said and done, what I need is to do the same thing for an unlimited number of levels, because this query has 3 fixed levels. Do you know how I can structure a new query where, independently of the number of levels, the parent sums are all rolled up like this?
Thanks a lot in advance! Best Regards!
Thiago
Edited by: Thiago on Sep 6, 2011 11:31 AM
Edited by: Thiago on Sep 6, 2011 1:01 PM
Hi,
Thiago wrote:
Hi all,
Imagine the following scenario: i have an ACCOUNT table which holds accounts and their hierarchy (currently 5 levels), and a BALANCE table which holds balance records for the accounts. Only CHILD accounts (level 5) have records in the BALANCE table. Simple example:
CREATE TABLE accounts (account_code VARCHAR2(30), parent_account VARCHAR2(30), account_desc VARCHAR2(400));
CREATE TABLE balances (account_code VARCHAR2(30), balance_amount NUMBER(18,2));
INSERT INTO ACCOUNTS ('TOT',NULL,'Total');
INSERT INTO ACCOUNTS ('ANA1','TOT','General Expenses');
INSERT INTO ACCOUNTS ('4801001','ANA1','Small Expenses');
INSERT INTO ACCOUNTS ('4801002','ANA1','Transportation');
INSERT INTO ACCOUNTS ('ANA2','TOT','Health Expenses');
INSERT INTO ACCOUNTS ('4802001','ANA2','Healthcare');
INSERT INTO ACCOUNTS ('4802002','ANA2','Facilities');
INSERT INTO BALANCES ('4801001', 2000);
INSERT INTO BALANCES ('4801001', 1000);
INSERT INTO BALANCES ('4802001', 3000);
INSERT INTO BALANCES ('4802001', 4000);
Thanks for posting the CREATE TABLE and INSERT statements. Remember why you do it: so that the people who want to help you can re-create the problem and test their ideas. If the statements don't work, then they are not so useful. None of the INSERT statements you posted work: they all need a VALUES keyword. Please test those statements before you post them.
Also, make sure that the results you post correspond to the sample data you post. In your sample data, there are no rows in balances for account_codes '4801002' or '4802002'.
I think you want something like this:
WITH connect_by_results AS
(
    SELECT CONNECT_BY_ROOT account_code AS root_account_code
         , account_code
    FROM accounts
    -- NOTE: No START WITH clause
    CONNECT BY parent_account = PRIOR account_code
)
SELECT c.root_account_code || ' -> '
    || TO_CHAR (SUM (b.balance_amount)) AS txt
FROM connect_by_results c
LEFT OUTER JOIN balances b ON c.account_code = b.account_code
GROUP BY c.root_account_code
;
-
Materialized views with GROUP by ROLLUP
I am trying to create a materialized view (snapshot) using the GROUP BY ROLLUP functionality.
The idea is to use a snapshot with all subtotals already calculated in place.
For example, an average with GROUP BY ROLLUP over the columns of the time dimension.
The idea is to use Oracle's "query rewrite" functionality with a snapshot of calculated subtotals.
Does anybody have experience with this method?
Thank you
Michael
Query rewrite is an internal function of Oracle. Normally, there is nothing to do other than verify that the query OBIEE performs is rewritten.
Alternatively, you can define in OBIEE that you have created an aggregate, and OBIEE will (re-)write the query.
http://www.rittmanmead.com/2007/10/26/using-the-obiee-aggregate-persistence-wizard/
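For completeness, Oracle can also report exactly why a given query is or is not rewritten against a snapshot. A sketch (REWRITE_TABLE must first be created with the utlxrw.sql script; the MV and query names are illustrative):

```sql
-- Requires QUERY_REWRITE_ENABLED = TRUE and an MV created with
-- ENABLE QUERY REWRITE; names below are illustrative only.
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT region, SUM(amount) FROM sales_fact GROUP BY region',
    mv           => 'SALES_ROLLUP_MV',
    statement_id => 'chk1');
END;
/
-- The diagnostic messages land in REWRITE_TABLE:
SELECT message FROM rewrite_table WHERE statement_id = 'chk1';
```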
Regards
Nico
-
Summary Column and repeating frames
I have a report that shows customer orders (1 per page). If a customer orders 3 pizzas and 2 of them are identical (same size, same toppings, same crust), I want the quantity column to sum as 2. So for this example, instead of having 3 rows with two of them identical and the quantity column displaying 1 for each, I want 2 rows, with quantity showing 2 for the identical pizzas and 1 for the other.
I have everything in the same group/repeating frame right now and it is showing all 3 pizzas on their own rows. Do I need a summary column and a change to my groupings? How can I achieve the summed quantity for identical orders?
CURRENT:
Crust......Toppings......size.....quantity
Thin.......Cheese........small....1
Thin.......Cheese........small....1
thick.......meat.........large....1
NEW:
Crust......Toppings......size.....quantity
Thin.......Cheese........small....2
thick.......meat.........large....1
Forget summary columns. This can easily be just a GROUP BY query:
with t as (select 'Thin' crust, 'Cheese' top, 'Small' sze, 1 qty from dual
           union all
           select 'Thin', 'Cheese', 'Small', 2 from dual
           union all
           select 'Thick', 'Meat', 'Large', 1 from dual)
select t.crust, t.top, t.sze, sum(qty)
from t
group by crust, top, sze;
CRUST TOP SZE SUM(QTY)
Thin Cheese Small 3
Thick Meat Large 1
-
Extending DAM Asset Editor formitems and grouping new fields into a widgetcollection
Hi,
I'm a newbie so be kind pretty please...
We have a requirement for one of the websites hosted on our CQ instance to have some extended DAM properties for PDF files.
Eg. Summary (textfield) and Search Weighting (selection)
We have added a new namespace with our new items/properties underneath:
/apps/dam/option/metadata/edexcel
We’ve also extended the pdf asset editor by adding the new fields to formitems under:
/apps/dam/content/asseteditors/application/pdf/formitems
And we are getting the expected result in DAM editor in that when we choose to open a PDF we have new fields to edit our new properties.
We have also been able to prove that using group permissions we can tailor it so that only specific website authors see these extended PDF fields; essentially making the extension local to an individual website.
The issue we are having is trying to assign group permissions so that we can deploy and maintain these in the best possible way. What we are trying to do is group these new fields in formitems into a widgetcollection so that we can assign the permissions we need at one widgetcollection level rather than needing to assign at individual field level.
I have tried creating a widgetcollection under formitems and adding my new fields within and this works ok up until the point when I try to assign read permission at the widgetcollection level. Then what happens is I can no longer ‘open’ a PDF at all.
Is there a way that we can do this? Can you suggest any alternative solutions please?
Many thanks in advance
Claire
Thank you jorg. So you are saying that by applying an ACL and denying read access for project B, and if the DAM folder is overlayed under apps (such as apps/dam), I can restrict the modified version of DAM for project B, thereby having project B use the foundation DAM while project A has the modified version? Please correct me if I am wrong, since I have the JSPs such as the child asset and metadata editor JSPs modified after being overlayed.
Regards,
NZ -
Group by rollup - inconsistent?
Ran across this when doing some aggregates with real data, where the difference is more pronounced. But this test case shows it:
with t as (
select case
when dbms_random.value(0,2) < 1 then 10
else 20
end dept,
case
when dbms_random.value(0,2) < 1 then 'abc'
else 'def'
end marker,
sysdate - 20 + trunc(dbms_random.value(1,4)) new_date,
sysdate - 20 + trunc(dbms_random.value(4,7)) old_date
from dual
connect by level<=20000)
select dept,
marker,
diff,
COUNT(*) / MAX(cnt) percent
from (select t.*,
old_date - new_date diff,
COUNT(*) over (partition by dept, marker) cnt
from t)
group by rollup(dept,
marker,
diff)
order by dept,
marker,
diff
So this generates 20,000 random records, groups them depending on how far apart two dates are, and then displays the % that each difference makes up of that grouping. A ROLLUP isn't really necessary - I just put it in because I wanted to verify, quickly, that it was adding up to 100% for each group.
Now, here's a sample output:
DEPT MARKER DIFF PERCENT
10 abc 1 0.1125854443104141535987133092078809811017
10 abc 2 0.2088862082830719742661841576196220345798
10 abc 3 0.3331322878970647366304784881383192601528
10 abc 4 0.2316043425814234016887816646562123039807
10 abc 5 0.113791716928025733815842380377965420185
10 abc 1
10 def 1 0.1072504518979714802169110263105041172926
10 def 2 0.2185177746535448885318337015464952801767
10 def 3 0.3426390841534444667603936533440449889536
10 def 4 0.2217312713396264310102430206868849166499
10 def 5 0.1098614179554127334806185981120706969271
10 def 1
10 1.9989957822855995179754970877686282386
20 abc 1 0.11625148279952550415183867141162514828
20 abc 2 0.2196520363780150257018584420719652036378
20 abc 3 0.3301700276789244760775009885330170027679
20 abc 4 0.2178726769474100434954527481217872676947
20 abc 5 0.1160537761961249505733491498616053776196
20 abc 1
20 def 1 0.1060332732010422930446983363399478853478
20 def 2 0.2136700741631589496893164962918420525155
20 def 3 0.3553818400481058328322309079975947083584
20 def 4 0.2156744838645019041892162758067749047905
20 def 5 0.1092403287231910202445379835638404489878
20 def 1
20 1.98635824436536180308422301304863582444
3.9541320680110715697904310003954132068
27 rows selected
Now, with enough records I would expect some inaccuracy - this is a computer, and roundoff errors are a fact of life. The part that confuses me is that the first level of aggregation is always exactly "1", indicating a perfect 100% - so there is no roundoff error occurring there. But further rollups start to show some error.
Any ideas on why some would show roundoff errors while the others do not? With my actual data, it's much more pronounced - only getting ~142% out of what should be 200% for one, 193% out of 200% for the second, and the overall total is only ~211% out of 400%.
Figured out how to show the "very weird" results - skewed data:
with t as (
select case
when dbms_random.value(0,2) < .7 then 10
else 20
end dept,
case
when dbms_random.value(0,2) < .5 then 'abc'
else 'def'
end marker,
sysdate - 20 + trunc(dbms_random.value(1,4)*5) new_date,
sysdate - 20 + trunc(dbms_random.value(4,7)*5) old_date
from dual
connect by level<=20000)
select dept,
marker,
case
when diff < 3 then '1-2'
when diff < 8 then '3-7'
when diff < 17 then '8-16'
else '17 or more'
end diff_range,
COUNT(*) / MAX(cnt) percent
from (select t.*,
old_date - new_date diff,
COUNT(*) over (partition by dept, marker) cnt
from t)
group by rollup(dept,
marker,
case
when diff < 3 then '1-2'
when diff < 8 then '3-7'
when diff < 17 then '8-16'
else '17 or more'
end)
order by dept,
marker,
length(diff_range),
diff_range
DEPT MARKER DIFF_RANGE PERCENT
10 abc 1-2 0.0121472621939824331900579330966174546814
10 abc 3-7 0.1113810502709773874042235096243692767707
10 abc 8-16 0.4692580826013829190805456923939450569987
10 abc 17 or more 0.4072136049336572603251728648850682115492
10 abc 1
10 def 1-2 0.0155172413793103448275862068965517241379
10 def 3-7 0.1143678160919540229885057471264367816092
10 def 8-16 0.4614942528735632183908045977011494252874
10 def 17 or more 0.4086206896551724137931034482758620689655
10 def 1
10 1.32517286488506821154924313212483647916
20 abc 1-2 0.0124575311438278595696489241223103057758
20 abc 3-7 0.109132091012045711932461649335941521672
20 abc 8-16 0.4739009574796664264387933697106970040152
20 abc 17 or more 0.404509420364460002059096056831051168537
20 abc 1
20 def 1-2 0.010325406758448060075093867334167709637
20 def 3-7 0.1129536921151439299123904881101376720901
20 def 8-16 0.4693366708385481852315394242803504380476
20 def 17 or more 0.4073842302878598247809762202753441802253
20 def 1
20 1.32904354988160197673221455780912179553
2.05909605683105116853701225162153814475
23 rows selected -
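For what it's worth, the figures above look less like roundoff than like the denominator: at a rollup level, COUNT(*) counts the union of the (dept, marker) partitions, while MAX(cnt) is only the largest single partition, so the dept rows land near 2 (two markers each) and the grand total near 4 (four partitions). A sketch that keeps every level at exactly 1 by dividing by a count taken at the same grouping level, using the same generated t as above:

```sql
-- SUM(COUNT(*)) OVER (...) totals the rows of the matching grouping
-- level: detail rows are divided by their (dept, marker) group total,
-- while each subtotal row is alone in its partition and yields 1.
select dept, marker, diff,
       COUNT(*) / SUM(COUNT(*)) OVER (
         PARTITION BY GROUPING(dept), GROUPING(marker), GROUPING(diff),
                      dept, marker) AS percent
from (select t.*, old_date - new_date diff from t)
group by rollup(dept, marker, diff)
order by dept, marker, diff;
```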
Populating users and groups - design considerations/best practice
We are currently running a 4.5 Portal in production. We are doing requirements/design for the 5.0 upgrade.
We currently have a stored procedure that assigns users to the appropriate groups based on the domain info and role info from an ERP database after they are imported and synched up by the authentication source.
We need to migrate this functionality to the 5.0 portal. We are debating whether to provide this functionality by doing this process via a custom Profile Web service. It was recommended during ADC and other presentation that we should stay away from using the database security/membership tables in the database directy and use the EDK/PRC instead.
Please advise on the best way to approach (with details) this issue. We need to finalize the best approach to take asap.
Thanks.
Vanita
So the best way to do this is to write a custom Authentication Web Service. Database customizations can do much more damage, and the EDK/PRC/API are designed to prevent inconsistencies and problems.
Along those lines, they also make it really easy to rationalize data from multiple backend systems into an organization you'd like for your portal. For example, you could write a Custom Authentication Source that would connect to your NT Domain and get all the users and groups, then connect to your ERP system and do the same work your stored procedure would do. It can then present this information to the portal in the way that the portal expects and let the portal maintain its own database and information store.
Another solution is to write an External Operation that encapsulates the logic in your stored procedure but uses the PRC/Server API to manipulate users and group memberships. I suggest you use the PRC interface since the Server API may change in subtle ways from release to release and is not as well documented.
Either of these solutions would be easier in the long term to maintain than a database stored procedure.
Hope this helps,
-Akash
-
Oracle 10g Reports: Control Break using Group By Rollup
Oracle 10g Control-Break Reporting
Hello. I am trying to create a report using Group By Rollup. The report should look like:
MONTH......._WEEK_..... CODE.... TOTAL
JULY..........WEEK 1..... K1...........2
............................. K1...........2
.............................SUB:.........4
................WEEK 2..... K1...........2
............................. K1...........2
.............................SUB:.........4
...............WEEK 3..... K1...........2
............................. K1...........2
.............................SUB:.........4
...............WEEK 4..... K1...........2
............................. K1...........2
.............................SUB:.........4
..........................MTH Tot:.....16
AUG..........WEEK 1..... K1...........2
............................. K1...........2
.............................SUB:.........4
................WEEK 2..... K1...........2
............................. K1...........2
.............................SUB:.........4
...............WEEK 3..... K1...........2
............................. K1...........2
.............................SUB:.........4
...............WEEK 4..... K1...........2
............................. K1...........2
.............................SUB:.........4
..........................MTH Tot:.....16
..........................GRND TOT: 32
Not sure how to group the codes into the correct month/week, and the labels are a problem. Here is the table/data and my poor attempt at using the Group by rollup. I'm still working on it, but any help would be very nice.
create table translog
(
ttcd VARCHAR2(5) not null,
stime TIMESTAMP(6) not null,
etime TIMESTAMP(6)
);
insert into translog ( TTCD, STIME, ETIME)
values ('T4', '01-JUL-12 12.00.01.131172 AM', '01-JUL-12 12.00.16.553256 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T4', '01-JUL-12 12.00.17.023083 AM', '01-JUL-12 12.00.37.762118 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('K2', '01-JUL-12 12.00.38.262408 AM', '01-JUL-12 12.00.40.686331 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('U1', '01-JUL-12 12.00.40.769385 AM', '01-JUL-12 12.00.41.281300 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('SK4', '08-JUL-12 12.00.41.746175 AM', '08-JUL-12 12.00.51.775487 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '08-JUL-12 12.00.53.274039 AM', '08-JUL-12 12.00.53.802800 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1','08-JUL-12 12.00.54.340423 AM', '08-JUL-12 12.01.03.767422 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '08-JUL-12 12.01.04.699631 AM', '08-JUL-12 12.01.04.744194 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '15-JUL-12 12.01.04.796472 AM', '15-JUL-12 12.01.04.817773 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-JUL-12 12.01.04.865641 AM', '15-JUL-12 12.01.05.154274 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-JUL-12 12.01.05.200749 AM', '15-JUL-12 12.01.05.508953 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '15-JUL-12 12.01.06.876433 AM', '15-JUL-12 12.01.07.510032 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-JUL-12 12.01.07.653582 AM', '15-JUL-12 12.01.07.686764 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '15-JUL-12 12.01.07.736894 AM', '15-JUL-12 12.01.08.163321 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.08.297696 AM', '22-JUL-12 12.01.08.562933 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '22-JUL-12 12.01.08.583805 AM', '22-JUL-12 12.01.08.620702 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.08.744821 AM', '22-JUL-12 12.01.08.987524 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.09.096695 AM', '22-JUL-12 12.01.09.382138 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-JUL-12 12.01.09.530122 AM', '22-JUL-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '01-AUG-12 12.01.10.550234 AM', '01-AUG-12 12.01.10.581535 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '01-AUG-12 12.01.10.628756 AM', '01-AUG-12 12.01.10.656373 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '01-AUG-12 12.01.10.740711 AM', '01-AUG-12 12.01.10.768745 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '01-AUG-12 12.01.10.819635 AM', '01-AUG-12 12.01.10.900849 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '01-AUG-12 12.01.09.530122 AM', '01-AUG-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '08-AUG-12 12.01.11.231004 AM', '08-AUG-12 12.01.24.073071 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '08-AUG-12 12.01.24.202920 AM', '08-AUG-12 12.01.24.244538 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('S2', '08-AUG-12 12.01.24.292334 AM', '08-AUG-12 12.01.24.318852 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '08-AUG-12 12.01.24.362643 AM', '08-AUG-12 12.01.24.397662 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1','15-AUG-12 12.01.09.530122 AM', '15-AUG-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1', '15-AUG-12 12.01.24.414572 AM', '15-AUG-12 12.01.24.444615 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L2W', '15-AUG-12 12.01.24.478739 AM', '15-AUG-12 12.01.25.020265 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('K4', '15-AUG-12 12.01.25.206721 AM', '15-AUG-12 12.01.25.729493 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '15-AUG-12 12.01.25.784746 AM', '15-AUG-12 12.01.39.226921 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1','15-AUG-12 12.01.39.517953 AM', '15-AUG-12 12.01.50.775295 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.01.57.676446 AM', '22-AUG-12 12.01.58.252945 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.01.09.530122 AM', '22-AUG-12 12.01.10.420257 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.01.58.573242 AM', '22-AUG-12 12.02.10.651922 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('L1', '22-AUG-12 12.02.11.209305 AM', '22-AUG-12 12.02.24.140456 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('SK4','22-AUG-12 12.02.25.204035 AM', '22-AUG-12 12.02.25.580603 AM');
insert into translog ( TTCD, STIME, ETIME)
values ('T1','22-AUG-12 12.02.25.656474 AM', '22-AUG-12 12.02.25.689249 AM');
select
decode(grouping(trunc(stime)),1, 'Grand Total: ', trunc(stime)) AS "DATE"
,decode(grouping(ttcd),1, 'SUB TTL:', ttcd) CODE,count(*) TOTAL
from translog
group by rollup (trunc(stime), ttcd);
Thank you.
830894 wrote:
Oracle 10g Control-Break Reporting
Hello. I am trying to create a report using Group By Rollup. The report should look like:
Couple of things:
1) Your test data setup does not match with your expected output &
2) layout of data (like control break) should ideally be carried out using reporting tools
Here is what you are probably looking for:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> create table translog
2 (
3 ttcd VARCHAR2(5) not null,
4 stime TIMESTAMP(6) not null,
5 etime TIMESTAMP(6)
6 );
Table created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T4', '01-JUL-12 12.00.01.131172 AM', '01-JUL-12 12.00.16.553256 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T4', '01-JUL-12 12.00.17.023083 AM', '01-JUL-12 12.00.37.762118 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('K2', '01-JUL-12 12.00.38.262408 AM', '01-JUL-12 12.00.40.686331 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('U1', '01-JUL-12 12.00.40.769385 AM', '01-JUL-12 12.00.41.281300 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('SK4', '08-JUL-12 12.00.41.746175 AM', '08-JUL-12 12.00.51.775487 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '08-JUL-12 12.00.53.274039 AM', '08-JUL-12 12.00.53.802800 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1','08-JUL-12 12.00.54.340423 AM', '08-JUL-12 12.01.03.767422 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '08-JUL-12 12.01.04.699631 AM', '08-JUL-12 12.01.04.744194 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '15-JUL-12 12.01.04.796472 AM', '15-JUL-12 12.01.04.817773 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-JUL-12 12.01.04.865641 AM', '15-JUL-12 12.01.05.154274 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-JUL-12 12.01.05.200749 AM', '15-JUL-12 12.01.05.508953 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '15-JUL-12 12.01.06.876433 AM', '15-JUL-12 12.01.07.510032 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-JUL-12 12.01.07.653582 AM', '15-JUL-12 12.01.07.686764 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '15-JUL-12 12.01.07.736894 AM', '15-JUL-12 12.01.08.163321 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.08.297696 AM', '22-JUL-12 12.01.08.562933 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '22-JUL-12 12.01.08.583805 AM', '22-JUL-12 12.01.08.620702 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.08.744821 AM', '22-JUL-12 12.01.08.987524 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.09.096695 AM', '22-JUL-12 12.01.09.382138 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-JUL-12 12.01.09.530122 AM', '22-JUL-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '01-AUG-12 12.01.10.550234 AM', '01-AUG-12 12.01.10.581535 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '01-AUG-12 12.01.10.628756 AM', '01-AUG-12 12.01.10.656373 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '01-AUG-12 12.01.10.740711 AM', '01-AUG-12 12.01.10.768745 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '01-AUG-12 12.01.10.819635 AM', '01-AUG-12 12.01.10.900849 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '01-AUG-12 12.01.09.530122 AM', '01-AUG-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '08-AUG-12 12.01.11.231004 AM', '08-AUG-12 12.01.24.073071 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '08-AUG-12 12.01.24.202920 AM', '08-AUG-12 12.01.24.244538 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('S2', '08-AUG-12 12.01.24.292334 AM', '08-AUG-12 12.01.24.318852 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '08-AUG-12 12.01.24.362643 AM', '08-AUG-12 12.01.24.397662 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1','15-AUG-12 12.01.09.530122 AM', '15-AUG-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1', '15-AUG-12 12.01.24.414572 AM', '15-AUG-12 12.01.24.444615 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L2W', '15-AUG-12 12.01.24.478739 AM', '15-AUG-12 12.01.25.020265 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('K4', '15-AUG-12 12.01.25.206721 AM', '15-AUG-12 12.01.25.729493 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '15-AUG-12 12.01.25.784746 AM', '15-AUG-12 12.01.39.226921 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1','15-AUG-12 12.01.39.517953 AM', '15-AUG-12 12.01.50.775295 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.01.57.676446 AM', '22-AUG-12 12.01.58.252945 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.01.09.530122 AM', '22-AUG-12 12.01.10.420257 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.01.58.573242 AM', '22-AUG-12 12.02.10.651922 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('L1', '22-AUG-12 12.02.11.209305 AM', '22-AUG-12 12.02.24.140456 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('SK4','22-AUG-12 12.02.25.204035 AM', '22-AUG-12 12.02.25.580603 AM');
1 row created.
SQL> insert into translog ( TTCD, STIME, ETIME)
2 values ('T1','22-AUG-12 12.02.25.656474 AM', '22-AUG-12 12.02.25.689249 AM');
1 row created.
SQL> commit ;
Commit complete.
SQL> select case when row_number() over (partition by mth order by mth, wk, ttcd) = 1 then mth end as "Month"
2 ,case when row_number() over (partition by mth, wk order by mth, wk, ttcd) = 1 and wk is not null then 'WEEK '||wk end as "Week"
3 ,case when gttcd = 1 and gwk = 0 and gmth = 0 then 'SUB:'
4 when gttcd = 1 and gwk = 1 and gmth = 0 then 'Month Total:'
5 when gttcd = 1 and gwk = 1 and gmth = 1 then 'Grand Total:'
6 else ttcd
7 end as "Code"
8 ,cnt as "Total"
9 from (
10 select trunc(stime, 'MM') as mth, to_char(stime, 'W') as wk, ttcd, count(*) as cnt
11 ,grouping(trunc(stime, 'MM')) as gmth, grouping(to_char(stime, 'W')) as gwk, grouping(ttcd) as gttcd
12 from translog
13 group by rollup(trunc(stime, 'MM'), to_char(stime, 'W'), ttcd)
14 order by trunc(stime, 'MM'), to_char(stime, 'W'), ttcd
15 ) ;
Month Week Code Total
01-JUL-12 WEEK 1 K2 1
T4 2
U1 1
SUB: 4
WEEK 2 L1 2
SK4 1
T1 1
SUB: 4
WEEK 3 L1 1
S2 2
T1 3
SUB: 6
WEEK 4 L1 4
T1 1
SUB: 5
Month Total: 19
01-AUG-12 WEEK 1 L1 1
S2 1
T1 3
SUB: 5
WEEK 2 L1 1
S2 1
T1 2
SUB: 4
WEEK 3 K4 1
L1 3
L2W 1
T1 1
SUB: 6
WEEK 4 L1 4
SK4 1
T1 1
SUB: 6
Month Total: 21
Grand Total: 40
35 rows selected.
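The GROUP BY ROLLUP(month, week, code) in the accepted query produces one detail row per (month, week, code) plus a subtotal row at each prefix of the grouping list. That mechanics can be emulated procedurally. A minimal Python sketch (hypothetical rows, not the translog data) where a key containing None plays the role of a GROUPING() = 1 column:

```python
from collections import Counter

def rollup_counts(rows):
    """rows: iterable of (month, week, code) tuples.

    Returns counts keyed like GROUP BY ROLLUP(month, week, code):
      (m, w, c)          -> detail row
      (m, w, None)       -> week subtotal   ('SUB:')
      (m, None, None)    -> month total     ('Month Total:')
      (None, None, None) -> grand total     ('Grand Total:')
    """
    out = Counter()
    for m, w, c in rows:
        # Each input row contributes to its detail group and to
        # every coarser rollup level, which is exactly what ROLLUP does.
        out[(m, w, c)] += 1
        out[(m, w, None)] += 1
        out[(m, None, None)] += 1
        out[(None, None, None)] += 1
    return dict(out)

rows = [('JUL', 1, 'K2'), ('JUL', 1, 'T4'), ('JUL', 2, 'L1'), ('AUG', 1, 'T1')]
totals = rollup_counts(rows)
```

This mirrors why the SQL solution tests `grouping(...)` flags to decide between printing a code, 'SUB:', 'Month Total:', or 'Grand Total:': a None (NULL) in a key position marks that column as rolled up.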