Aggregates Combinations
I have data like this.
CREATE TABLE grouping_ex
( channel VARCHAR2(10 BYTE)
, code    VARCHAR2(10 BYTE)
, cnt     NUMBER
);
COMMIT;
Insert into GROUPING_EX (CHANNEL, CODE, CNT) Values('a', 'c1', 1);
Insert into GROUPING_EX (CHANNEL, CODE, CNT) Values('a', 'c2', 1);
Insert into GROUPING_EX (CHANNEL, CODE, CNT) Values('a', 'c3', 1);
COMMIT;
And this is the output format that I need:
Channel, New_Code, Cnt
a, c1, 1
a, c2, 1
a, c3, 1
a, c1,c2 2
a, c1,c3 2
a, c2,c3 2
a, c1,c2,c3 3
I am trying it with CUBE and ROLLUP but not getting it correct. Any hints are appreciated.
Kumar
Hi, Kumar,
Thanks for posting the CREATE TABLE and INSERT statements; that's very helpful!
GROUP BY can get the possible combinations of a value from column_x and a value from column_y, but (as far as I know) it can't get the combinations of 2 different values from column_x. You could possibly do a pivot, so that the different values were in different columns, but you'd have to know at the time you wrote the query how many different values there would be at run-time.
CONNECT BY is much better at getting the combinations from a single column:
WITH got_combinations AS
(
    SELECT  channel
    ,       SYS_CONNECT_BY_PATH (code, ',') || ',' AS code_list
    FROM    grouping_ex
    CONNECT BY  code > PRIOR code
        AND     channel = PRIOR channel
)
SELECT c.channel
, TRIM (',' FROM c.code_list) AS new_code
, SUM (x.cnt) AS cnt
FROM got_combinations c
JOIN grouping_ex x ON c.channel = x.channel
AND INSTR ( c.code_list
, ',' || x.code || ','
) > 0
GROUP BY c.channel
, c.code_list
ORDER BY LENGTH (new_code)
, new_code
;
Output:
CHANNEL NEW_CODE CNT
a c1 1
a c2 1
a c3 1
a c1,c2 2
a c1,c3 2
a c2,c3 2
a c1,c2,c3 3
What role do channel and cnt play in this problem? It's hard to tell from the sample data, because they always have the same value in that data.
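For reference, the combination-and-sum logic the CONNECT BY query implements can be sketched outside SQL. A minimal Python sketch (the data literal mirrors the sample INSERTs; `itertools.combinations` stands in for the CONNECT BY tree walk, so this illustrates the logic, not the query itself):

```python
from itertools import combinations

# Sample rows mirroring the INSERT statements: (channel, code, cnt)
rows = [("a", "c1", 1), ("a", "c2", 1), ("a", "c3", 1)]

def code_combinations(rows):
    """For each channel, emit every non-empty combination of codes
    (kept in sorted order, as CONNECT BY code > PRIOR code does)
    together with the summed cnt of its member rows."""
    by_channel = {}
    for channel, code, cnt in rows:
        by_channel.setdefault(channel, {})[code] = cnt
    out = []
    for channel, codes in by_channel.items():
        ordered = sorted(codes)
        for r in range(1, len(ordered) + 1):
            for combo in combinations(ordered, r):
                out.append((channel, ",".join(combo),
                            sum(codes[c] for c in combo)))
    # Sort like the query: shortest code list first, then alphabetically
    out.sort(key=lambda t: (len(t[1]), t[1]))
    return out

for row in code_combinations(rows):
    print(row)
```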
Similar Messages
-
Hi gurus,
I have a problem with creating aggregate tables. I use the Oracle Business Intelligence 11g R1 Cookbook for help, but I think the author forgot some important things and I can't create them.
I have a fact table and a dimension (fact_table, group_product_dim). I summarized them, and now I have a fact aggregate table and an aggregate dimension (fact_table_agg, group_product_dim_agg):
summary:
fact_table
fact_table_agg
group_product_dim
group_product_dim_agg
After that, I create a physical diagram for the base and aggregate combination in the physical layer.
Now I need to move these tables into the BMM. Which combination do I need to bring into the BMM layer? Both, just the basic combination, or just the aggregate combination?
I move the basic combination (fact_table, group_product_dim) and try to add two logical table sources to the dimension group_product_dim. I create a new logical table source by right-clicking on the group_product_dim logical table and selecting the New Object | Logical Table Source option. This brings up the new Logical Table Source window.
I select the physical table by clicking the + sign and choose the aggregate dimension table (group_product_dim_agg). Now I do the column mapping on the Column Mapping tab of the same Logical Table Source. Only a few columns are mapped and all the others are unmapped, because the aggregate dimension table doesn't store all the columns from the basic dimension group_product_dim. The last modification I make is on the Content tab of the Logical Table Source.
I need to define the levels at which this logical table source can satisfy queries. This step is very important: making any kind of mistake will cause the BI Server to use the wrong logical table source for queries.
Here I have a problem, because I can't put a Logical Level into the "Aggregation content, group by" menu. I can't select anything other than the default value, Column.
Thanks,
best regards
1.) http://www.amazon.com/Oracle-Business-Intelligence-Cookbook-ebook/dp/B00DHCKTXQ/ref=wl_it_dp_o_pC_nS_nC?ie=UTF8&colid=322X8WO18RFEG&coliid=I13QJ38KK8PYTB
2.) I create a NEW business model in the BMM and then drag fact_table and group_product_dim into it. Do I also need to drag the aggregate objects into the new business model?
I don't; I drag only fact_table and group_product_dim and create a New Logical Table Source on group_product_dim, where I try to make the new source.
Can you tell me what I need to do in the background? Do I need to map columns from the dimension to the aggregate dimension? -
Concatenated groupings question
Hello,
Regarding concatenated groupings: if I have GROUP BY GROUPING SETS(a,b), GROUPING SETS(c,d), the result is a cross product of groupings from each grouping set. So we have the following groupings: (a,c), (a,d), (b,c), (b,d).
But if we have GROUP BY department_id, ROLLUP(job_id), CUBE(manager_id), which are the groupings in this case?
Also, on GROUP BY department_id, ROLLUP(job_id, <other column>), CUBE(manager_id) ?
The last 2 examples are a bit confusing for me.
Thanks!
Hi,
Roger22 wrote:
Ok, so:
SQL> select department_id, job_id, manager_id, sum(salary), grouping(department_id)
2 from employees e
3 group by rollup(department_id, (job_id, manager_id))
4 ;
DEPARTMENT_ID JOB_ID MANAGER_ID SUM(SALARY) GROUPING(DEPARTMENT_ID)
SA_REP 149 7000 0
7000 0
10 AD_ASST 101 4400 0
10 4400 0
23 rows selected
What table are you using? When I run that query with my hr.employees table, I get 46 rows, not 23. If your employees table is a subset of hr.employees, then post the code you used to create it.
I don't understand the first two rows (those with salary 7000). There is no grouping of (job_id, manager_id) in this example, because it's treated as a unit.
Sorry, I don't understand what you mean here.
There is a grouping of (job_id, manager_id) in this example. The GROUP BY clause contains "(job_id, manager_id)", so there is a separate row for each distinct combination of job_id and manager_id.
it's a composite column and ROLLUP does not roll up in each direction possible (so, with all combinations in GROUP BY clause), but rolls up from the highest level to the outermost level (from right to left). So, the groupings are:
department_id, job_id, manager_id
department_id
and () -- which is the grand total calculated
Please clarify in this case... hope I was clearer.
That's right.
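Both cases you asked about can be enumerated mechanically, since concatenated grouping sets are just a cross product. A small Python sketch (column names are only labels; this illustrates the rule, not an actual query):

```python
from itertools import product

def concat_groupings(*grouping_sets):
    """Cross product of grouping-set lists, as Oracle forms for
    GROUP BY <set1>, <set2>, ...; each argument is a list of tuples."""
    return [sum(combo, ()) for combo in product(*grouping_sets)]

# GROUP BY department_id, ROLLUP(job_id), CUBE(manager_id)
plain  = [("department_id",)]      # a bare column contributes one grouping
rollup = [("job_id",), ()]         # ROLLUP(x) -> (x), ()
cube   = [("manager_id",), ()]     # CUBE(x)   -> (x), ()
groupings = concat_groupings(plain, rollup, cube)
# -> (d, j, m), (d, j), (d, m), (d)

# ROLLUP with a composite column: ROLLUP(a, (b, c)) -> (a, b, c), (a), ()
composite = [("department_id", "job_id", "manager_id"),
             ("department_id",), ()]
```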
What exactly don't you understand about the beginning of the output?
I really think it would help if you displayed GROUPING for all of the GROUP BY columns, not just one of them, like this:
SELECT department_id
, job_id
, manager_id
, SUM (salary) AS sum_salary
, GROUPING (department_id) AS g_d
, GROUPING (job_id) AS g_j
, GROUPING (manager_id) As g_m
FROM hr.employees e
GROUP BY ROLLUP ( department_id
                , (job_id, manager_id)
                )
;
Output (abridged):
DEPARTMENT_ID JOB_ID MANAGER_ID SUM_SALARY G_D G_J G_M
SA_REP 149 7000 0 0 0
7000 0 1 1
10 AD_ASST 101 4400 0 0 0
10 4400 0 1 1
691400 1 1 1
As you said, ROLLUP is treating job_id and manager_id as a unit.
On each row where an individual job_id is used, an individual manager_id is used also.
Whenever all job_ids are combined, all manager_ids are combined also.
GROUPING (job_id) = GROUPING (manager_id) on every row.
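That "unit" behavior can be simulated directly. A Python sketch of ROLLUP(dept, (job, mgr)) over made-up rows, computing the same GROUPING flags (1 means the row aggregates over that column; None models SQL NULL):

```python
from collections import defaultdict

def rollup_composite(rows):
    """Simulate GROUP BY ROLLUP(dept, (job, mgr)) over
    (dept, job, mgr, salary) rows: the groupings are
    (dept, job, mgr), (dept) and the grand total. Each output row is
    (dept, job, mgr, sum_salary, g_dept, g_job, g_mgr)."""
    detail = defaultdict(int)
    per_dept = defaultdict(int)
    total = 0
    for dept, job, mgr, salary in rows:
        detail[(dept, job, mgr)] += salary
        per_dept[dept] += salary
        total += salary
    out = [(d, j, m, s, 0, 0, 0) for (d, j, m), s in detail.items()]
    # job and mgr always roll up together, so g_job == g_mgr on every row
    out += [(d, None, None, s, 0, 1, 1) for d, s in per_dept.items()]
    out.append((None, None, None, total, 1, 1, 1))
    return out
```

Note that the per-department rows carry g_dept = 0 even when dept is None, mirroring how GROUP BY treats NULL as a value.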
It's unclear, but you seem to have a question about how the 2nd row above
DEPARTMENT_ID JOB_ID MANAGER_ID SUM_SALARY G_D G_J G_M
7000 0 1 1
is being produced. Look at the GROUPING output. GROUPING (department_id) = 0, meaning the row represents only one department_id (GROUP BY treats NULL as a value), but GROUPING (job_id) = 1 and GROUPING (manager_id) = 1, meaning that this row is a super-aggregate, combining rows regardless of their job_id or manager_id. The 2nd row functions exactly the same as any other row with the same values (0, 1, 1) in the GROUPING columns. For example, the 4th row above:
DEPARTMENT_ID JOB_ID MANAGER_ID SUM_SALARY G_D G_J G_M
10 4400 0 1 1
Do you understand how this row is formed? It is a summary of all the rows in the table where department_id = 10. (As it happens, there is only one row with department_id = 10 in the table, but GROUP BY produces a summary anyway.)
In exactly the same way, the 2nd row of output is a summary of all the rows in the table where department_id is NULL. (As it happens, there is only one such row in the table, but GROUP BY produces a summary anyway.) -
Hi,
No doubt there are a number of people with the same type of question, but to date I have seen no solution.
I have an iPhone 5 and a PC. I have about 5k songs in iTunes, and I would like my wife to listen to the music; she has an iPhone 5 and an iPad mini.
We both have separate Apple IDs, and we both have separate credit cards on the separate accounts (thankfully), but we would both like to listen to our music, i.e. my music from my PC that I have bought.
I want to be able to use my Apple ID as normal, and she wishes to use hers for purchases, strange apps etc. Can we both share the music from our library and still keep our separate Apple IDs? A bit like the old days when you bought a CD and made a copy for the wife to listen to on her portable Discman.
I appreciate that sometimes there are concerns over copying and distributing etc, but managing the number of hardware items that can access Match surely should solve this. We share a house, a bed, a child, some kisses, my money, but we can't share the music collection; Apple, surely it is a simple matter to allow this by creating a family ID of some kind.
Can anyone out there help please... I fear some of the sharing may stop if I cannot find an answer... not sharing my money is great; the kisses and bed, on the other hand, well let's not go there...
Thank you all in anticipation.
I want to know whether there is any plan for a family plan that could aggregate, combine, share, associate, interdepend, link, connect or allow any kind of association so that members of a proven family could access a specific iMatch library?
In case of a negative response, please suggest the best way to address this issue: my wife is complaining that she also has the right to listen to the music we both purchased together, and she cannot use my Apple ID. Maybe I should leave iMatch and store all the music on an external HD with wireless capabilities. -
Aggregate-collection-mapping combined with optimistic-locking
I've tried to set up an aggregate-collection-mapping and everything seems to be okay. But one thing is still lacking: using this aggregate collection in combination with optimistic locking. Defining it to use optimistic locking (specifically, timestamp locking) doesn't work. The generated SQL statement doesn't have the wanted timestamp field. Any idea is welcome.
Hi,
Aggregate collection mappings do support optimistic locking. You MUST map and store the version value in the object, not the cache. Aggregate objects are not cached independently of their parents, so they cannot store their version value in the cache.
Make sure that you have:
- provided a direct-to-field (non-read-only) mapping for the version field
- defined the locking policy to store the version value in the object, not the cache
Example:
descriptor.descriptorIsAggregateCollection();
descriptor.useTimestampLocking("VERSION", false);
// SECTION: DIRECTTOFIELDMAPPING
DirectToFieldMapping directtofieldmapping3 = new DirectToFieldMapping();
directtofieldmapping3.setAttributeName("version");
directtofieldmapping3.setIsReadOnly(false );
directtofieldmapping3.setFieldName("VERSION");
descriptor.addMapping(directtofieldmapping3); -
Combine Motu 24I/O and internal spdif with "aggregate device"
Hi,
I'm using Logic Pro 7 with a MOTU 24 I/O for recording, e.g. vocals. To improve the quality I plan to buy a Benchmark ADC-1 converter, but there are no digital inputs on my MOTU. Can I combine the 24 I/O with the internal S/PDIF input? The MOTU runs at 64 samples without trouble. If I combine both via an "aggregate device", what about latencies? Is the latency of the MOTU as high as the internal S/PDIF? Should I rather buy a digital card (RME)? Does anyone know a good digital card (PCI and not too expensive)? Thanks a lot,
Oliver
Send Apple feedback. They won't answer, but at least they will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
Feedback
Or you can use your Apple ID to register with this site and go the Apple BugReporter. Supposedly you will get an answer if you submit feedback.
Feedback via Apple Developer -
Unable to combine two distinct USB Audio devices into one Aggregate Device
I'm trying to use Audio MIDI Setup to combine two separate USB audio speaker systems into "one" so that iTunes will play through both simultaneously.
I can create the "Aggregate Device" and add the two USB devices but I can't get the two checkboxes to remain selected after I "Apply" changes when setting them up in "stereo". If I set them to Multichannel quadraphonic then I can select both devices but only one of them will play sound...I assume because iTunes doesn't output in Quadraphonic.
I've searched these forums and the Interwebs and hear of people having success (and difficulties). Difficulties mostly because of the misunderstanding of what a "device" is. These are both EXTERNAL USB devices. One is SoundSticks and the other is a USB audio converter that passes analog into an amp.
Why is this so difficult? Oh, I know... because Apple wants to sell you an $80 Airport Express to stream music wirelessly to devices that are literally ON THE SAME DESK!
Anyone have success (or confirm more failures) with what should be a trivial exercise...
The idea with the Aggregate Device is to allow several audio devices to behave as though they are one multi-port device with discrete one-to-one I/O mapping. It is not intended for one stereo source to feed more than 2 channels of output.
However...
Set up your aggregate devices as 4 audio outputs, install Jack (http://jackaudio.org/) and use that as a virtual router with qjackctl.
You may also get similar results with Soundflower (http://cycling74.com/products/soundflower/)
Steve -
Combining aggregate and single row
for the below query
select
sum(rd_rate1*(last_date-old_date+1))/sum(last_date-old_date+1) "RATE1",
sum(rd_rate2*(last_date-old_date+1))/sum(last_date-old_date+1) "RATE2",
sum(rd_rate3*(last_date-old_date+1))/sum(last_date-old_date+1) "RATE3",
sum(rd_rate4*(last_date-old_date+1))/sum(last_date-old_date+1) "RATE4",
sum(rd_rate5*(last_date-old_date+1))/sum(last_date-old_date+1) "RATE5",
sum(rd_child*(last_date-old_date+1))/sum(last_date-old_date+1) "CHILD"
from ( select rd.begin_date + greatest(p.b_date-rd.begin_date,0) old_date, rd.end_date + least (p.e_date-rd.end_date ,0) last_date,
RD.rate1 as RD_RATE1, RD.rate2 as RD_RATE2, RD.rate3 as RD_RATE3,
RD.rate4 as RD_RATE4, RD.rate5 as RD_RATE5, RD.children_charge as RD_CHILD
from
pms_rate_header RH,pms_rate_detail RD,
pms_rate_room_cat RRC,
(select to_date('25-feb-2007', 'dd-mon-yyyy') b_date, to_date('26-feb-2007', 'dd-mon-yyyy') e_date from dual) p
where
RH.RESORT='HCHHGR' and RH.RATE_CODE ='CH$WEB' and RD.rate_header_id = RH.rate_header_id
and RD.inactive_date is NULL and rd.begin_date <= p.e_date and rd.end_date >= p.b_date
and RRC.rate_detail_id=RD.rate_detail_id and RRC.room_category=2477)
I get following out put and it is correct
RATE1 RATE2 RATE3 RATE4 RATE5 CHILD
1 95 105
Now I also want to display room_category as a seventh column alongside the result.
How do I do this? Please help.
Lee1212
Include it in the inner query and group by it in the outer.
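In other words: keep the category in the inner query's select list and add it to the outer GROUP BY. The arithmetic itself, a day-weighted average per category, can be sketched in Python (rows and names are made up for illustration):

```python
from collections import defaultdict

# (room_category, rate, days): days = last_date - old_date + 1
rows = [(2477, 100.0, 2), (2477, 120.0, 1), (3001, 90.0, 3)]

def weighted_rates(rows):
    """Day-weighted average rate per room category:
    sum(rate * days) / sum(days), grouped by category."""
    num = defaultdict(float)
    den = defaultdict(int)
    for cat, rate, days in rows:
        num[cat] += rate * days
        den[cat] += days
    return {cat: num[cat] / den[cat] for cat in num}
```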
-
Best way to combine multiple fact tables in single mart
Hi, quick question that I think I know the answer to, just wanted to bounce it off everyone here to make sure I'm on the right track.
I have a HR datamart that contains several different fact tables. Some of the facts are additive across time (i.e. compensation - people get paid on different days, when I look at a month I want to see the total of all pay dates within that month). The other type of fact is more "status over a set of time" - i.e. a record saying that I'm employed in job X with a salary of Y from a given start date to a given end date.
For the "status over time" type facts, if I choose January 2009 (month level) in the time dimension, what I'd really like to see is the fact records that were in place "as of" the last day of the month - i.e. all records where the start date is on or before 1/31/2009 and whose end date is on or after 1/31/2009. Note that my time dimension does go down to the day level (so you could look at a person "as of" the middle of the month, etc. if you're browsing on a day-by-day basis)
I've set up the join between the time dimension and the fact table as a complex join in the physical layer, with a clause like "DIM_DATE.DATE >= FACT.START_DATE AND DIM_DATE.DATE <= FACT.END_DATE". This seems to work perfectly at the day level - I have no problems at all finding the proper records for a person as of any given day.
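That complex join is just an "as of" filter: a record qualifies on a day when start_date <= day <= end_date. A Python sketch of the same predicate (field layout is hypothetical):

```python
from datetime import date

# Status records: (job, salary, start_date, end_date); end may be open-ended
facts = [("Analyst", 50000, date(2008, 3, 1), date(2009, 6, 30)),
         ("Manager", 70000, date(2009, 7, 1), date(9999, 12, 31))]

def as_of(facts, day):
    """Records in effect on a given day: start_date <= day <= end_date,
    the same test the complex physical join expresses."""
    return [f for f in facts if f[2] <= day <= f[3]]
```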
However, I'm not quite sure how to proceed at the month level. My initial thought is:
a) create a new LTS for the fact table at the month level
b) in the new LTS, add the join to the time dimension
c) in the new LTS, add a where clause similar to LAST_DAY_IND = 'Y' (true for the last day of each month).
Is this the proper way to do this?
Thanks in advance!
Scott
Hi Scott,
I think you're on the right track but I don't think you need the last part. Let me generalize the situation to the following tables
DAILY_FACT (
DAILY_FACT_KEY NUMBER, -- PRIMARY KEY
START_DATE_KEY NUMBER, -- FOREIGN KEY TO DATE DIMENSION FOR START DATE
END_DATE_KEY NUMBER, -- FOREIGN KEY TO DATE DIMENSION FOR END DATE
DAILY_VALUE NUMBER); -- FACT MEASURE
MONTHLY_FACT(
MONTHLY_FACT_KEY NUMBER, -- PRIMARY KEY
MONTH_DATE_KEY NUMBER, -- FOREIGN KEY TO DATE DIMENSION, POPULATED WITH THE KEY TO THE LAST DAY OF THE MONTH
MONTHLY_VALUE NUMBER); -- FACT MEASURE at MONTH LEVEL. DATE_KEY is at END of MONTH
DIM_DATE(
DATE_KEY NUMBER,
DATE_VALUE DATE,
DATE_MONTH VARCHAR2(20),
DATE_YEAR NUMBER(4));
DIM_DATE_END (ALIAS OF DIM_DATE for END_DATE_KEY join)
Step 1)
Make the following three joins in the physical layer:
a. DAILY_FACT.START_DATE_KEY = DIM_DATE.DATE_KEY
b. DAILY_FACT.END_DATE_KEY = DIM_DATE_END.DATE_KEY
c. MONTHLY_FACT.MONTH_DATE_KEY = DIM_DATE.DATE_KEY
Note: MONTHLY_FACT's MONTH_DATE_KEY is joined to the same instance of the date dimension as the START_DATE_KEY of the DAILY_FACT table. This is because these are the dates you want to make sure are in the same month.
Step 2)
Create a business model and drag DIM_DATE, DAILY_FACT and DIM_DATE_END into it.
Step 3)
Drag the physical table MONTHLY_FACT into the logical table source of the logical table DAILY_FACT.
Step 4)
Set DAILY_VALUE and MONTHLY_VALUE to be aggregates with a "SUM" aggregation function
Step 5)
Drag all required reporting columns to the Presentation layer.
Step 6)
Create your report using the two different measures from the different fact tables.
Step 7)
Filter the report by the Month that joined to the Start Date/Monthly Date (not the one that joined to the end date).
Step 8)
You're done.
The act of combining the two facts into one logical table allows you to report on them at the same time. The strategy of joining the START_DATE_KEY and the MONTH_DATE_KEY allows you to make sure that the daily measure start date will be in the same month as the monthly fact table.
Hope that helps!
-Joe
Edited by: Joe Bertram on Jan 5, 2010 6:29 PM -
Key figure fixing in aggregate level partially locking
Hi Guys,
When fixing a cell in the planning book, we get the message "One or more cells could not be completely fixed".
1. If a material has only one CVC in the MPOS, its quantity can be fixed correctly without any issues.
2. If a material has more than one CVC combination and we try to fix the quantity of one CVC combination, it fixes partially and we get the above message.
3. It does not even allow fixing the quantity at the aggregate level.
We are in SCM 7.0.
Is there a precondition that the material must have only one CVC combination for fixing?
Why, for a material with multiple CVC combinations, is it not allowed to fix one CVC combination at the detail level?
Is key figure fixing at the aggregate level not allowed?
Please clarify.
Thanks
Saravanan V
Hi,
It is not mandatory to assign a standard KF to be able to fix. However, the custom InfoObject that you created must be of type APO KF and not BW KF.
That said, let us try to address your first problem.
You can fix at an aggregate level. However, there are a few points to remember.
Let us consider a couple of scenarios.
1) Your selection ID shows a number of products. You select all the products at one go, load the data, and try to fix at this level. This is not possible.
2) In your selection ID, you have selected a product division. For a single product division you load the data and try to fix at this level. This is a true aggregate level, and fixing should be possible here.
Hope this helps.
Thanks
Mani Suresh -
InfoCube Data Modeling "Or" combination Result Set
Hi All,
I am new to BW so please let me know if this is something that can be done without too much complexity.
How would one go about tackling an issue like this.
Below is the sales ODS data.
<u>Sales ODS Data</u>
Customer -- Sales $ -- Rebate Indicator -- Refund Indicator
Apple -- 1000 -- True -- False
Apple -- 500 -- False -- True
Apple -- 2000 -- True -- True
How would one design an InfoCube such that when a user selects the following BEx inputs, the result brings back all the rows? I would like the input to be an "OR" combination instead of an "AND" combination. Thanks.
<u>QUERY INPUTS #1 :</u>
Customer Site = Apple
Rebate Indicator = True
Refund Indicator = True
QUERY RESULT :
Customer -- Sales $
Apple -- 1000 + 500 + 2000 = 3500
Message was edited by:
Nigel K
Hi Nigel,
If I understood your requirement, you would like to see your result as
CUSTOMER SALES
Apple>>>>>500
Apple>>>>>1000
Apple>>>>>2000
But try to have the other two fields, Rebate and Refund, in the rows of the query; then you will get all three line items,
because the cube will aggregate when the key is common.
Or else, if your requirement is to display the result when one of the columns is true, either Rebate or Refund, then you have to add one more field to your InfoCube and fill it in the update rules:
If Rebate eq true or Refund eq true.
Result = True.
Else.
Result = False.
Then, while displaying the query result, you can filter on this field and you will get the correct result.
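The derived-flag rule above amounts to filtering on (Rebate OR Refund); a Python sketch over the sample rows:

```python
# (customer, sales, rebate_indicator, refund_indicator)
rows = [("Apple", 1000, True, False),
        ("Apple", 500, False, True),
        ("Apple", 2000, True, True)]

def or_filter_total(rows, customer):
    """Sum sales where rebate OR refund is true, i.e. where the
    derived update-rule flag would be True."""
    return sum(sales for cust, sales, rebate, refund in rows
               if cust == customer and (rebate or refund))
```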
thanks and regards
Neel
Message was edited by:
Neel Kamal -
How do I use Aggregate formulas with multiple measures from different tables?
I have three measures:
Cash - this sums the £ column in my 'Cash' table.
Online - this sums the £ column in my 'Online' table.
Phone - this sums the £ column in my 'Phone' table.
How do I now write aggregate formulas that combine these three measures, for example:
Find the MIN or MAX of the three measures
Average the three measures
I have worked out how to use simple aggregation like add, subtract etc through the CALCULATION formula, but doing the average or MIN/MAX does not work.
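For what it's worth, the reason a fixed three-way formula breaks is that the number of measures can vary; MIN/MAX/average over however many measure totals exist can be sketched like this (measure names taken from the question, numbers made up):

```python
def measure_stats(measure_totals):
    """MIN, MAX and average across however many measure totals exist,
    so it keeps working when the set of measures changes."""
    values = list(measure_totals.values())
    return {"min": min(values),
            "max": max(values),
            "avg": sum(values) / len(values)}

# Totals as the three table-level SUM measures might come out
totals = {"Cash": 120.0, "Online": 300.0, "Phone": 180.0}
stats = measure_stats(totals)
```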
Thanks.
Hi, thanks for the suggestions.
Re: Julian
I had thought about this method, unfortunately it is not always three measures so this doesn't work.
Re: Tim
I was not aware of the APPEND formula however I will definitely give it a try and report back - I can see this one working.
Re: Michael
Apologies, I have never found an easy way of simulating some of my issues, since it would mean creating a new power model and establishing quite a number of relationships. I definitely see the benefit when posting on the forum, since it makes my issue far more accessible; unfortunately, when I've posted before I've generally been racing against time and not had time to prepare this anonymised data. Is there a quick way of doing it? -
Trying to create a Histogram type/object for aggregate functions
Hi,
I am trying to create an aggregate function that will return a histogram type.
It doesn't have to be an object that is returned; I don't mind returning a string, but I would like to keep the associative array (or something else indexed by VARCHAR2) as a static variable between iterations.
I started out with the SecondMax example in
http://www.csis.gvsu.edu/GeneralInfo/Oracle/appdev.920/a96595/dci11agg.htm#1004821
But it seems that even a simpler aggregate function like strCat below (which works) has problems, because I get multiple permutations for every combination. The natural way to solve this would be to create an associative array as a static variable as part of the Histogram type (see code below). However, Oracle apparently refuses to accept associative arrays in this context (PLS-00355: use of pl/sql table not allowed in this context).
If there is no easy way to do the histogram quickly, can we at least get something like strCat to work in a specific order with a "partition by ... order by" clause? It seems that even with PARALLEL_ENABLE commented out, strCat still calls merge for function calls like:
select hr,qtr, count(tzrwy) rwys,
noam.strCat(cnt) rwycnt,
noam.strCat(tzrwy) config,
sum(cnt) cnt, min(minscore) minscore, max(maxscore) maxscore from
ordrwys group by hr,qtr
Not only does this create duplicate entries in the query result, like "A,B,C" and "A,C,B"; the order in rwycnt and config is not always the same either, so a user cannot match the results based on their order.
The difference between my functions and functions like SUM, or the secondMax demonstrated in the documentation, is that secondMax does not care about the order in which it gets its arguments and does not need to maintain an ordered set to return correct results. A good example of a built-in Oracle function that does care about all its arguments, and probably has to maintain a data structure similar to the one I want, is the PERCENTILE_DISC function. If you can find the code for that function (or something like it) and forward a reference to me, that in itself would be very helpful.
Thanks,
K.Dingle
CREATE OR REPLACE type Histogram as object
-- TYPE Hist10 IS TABLE OF pls_integer INDEX BY varchar2(10),
-- retval hist10;
-- retval number,
retval noam.const.hist10,
static function ODCIAggregateInitialize (sctx IN OUT Histogram)
return number,
member function ODCIAggregateIterate (self IN OUT Histogram,
value IN varchar2) return number,
member function ODCIAggregateTerminate (self IN Histogram,
returnValue OUT varchar2,
flags IN number) return number,
member function ODCIAggregateMerge (self IN OUT Histogram,
ctx2 IN Histogram) return number
CREATE OR REPLACE type body Histogram is
static function ODCIAggregateInitialize(sctx IN OUT Histogram) return
number is
begin
sctx := const.Hist10();
return ODCIConst.Success;
end;
member function ODCIAggregateIterate(self IN OUT Histogram, value IN
varchar2)
return number is
begin
if self.retval.exist(value)
then self.retval(value):=self.retval(value)+1;
else self.retval(value):=1;
end if;
return ODCIConst.Success;
end;
member function ODCIAggregateTerminate(self IN Histogram,
returnValue OUT varchar2,
flags IN number)
return number is
begin
returnValue := self.retval;
return ODCIConst.Success;
end;
member function ODCIAggregateMerge(self IN OUT Histogram,
ctx2 IN Histogram) return number is
begin
i := ctx2.FIRST; -- get subscript of first element
WHILE i IS NOT NULL LOOP
if self.retval.exist(ctx2(i))
then self.retval(i):=self.retval(i)+ctx2.retval(i);
else self.retval(value):=ctx2.retval(i);
end if;
i := ctx2.NEXT(i); -- get subscript of next element
END LOOP;
return ODCIConst.Success;
end;
end;
CREATE OR REPLACE type stringCat as object
retval varchar2(16383), -- concatenation of all values so far
static function ODCIAggregateInitialize (sctx IN OUT stringCat)
return number,
member function ODCIAggregateIterate (self IN OUT stringCat,
value IN varchar2) return number,
member function ODCIAggregateTerminate (self IN stringCat,
returnValue OUT varchar2,
flags IN number) return number,
member function ODCIAggregateMerge (self IN OUT stringCat,
ctx2 IN stringCat) return number
CREATE OR REPLACE type body stringCat is
static function ODCIAggregateInitialize(sctx IN OUT stringCat) return
number is
begin
sctx := stringCat('');
return ODCIConst.Success;
end;
member function ODCIAggregateIterate(self IN OUT stringCat, value IN
varchar2)
return number is
begin
if self.retval is null
then self.retval:=value;
else self.retval:=self.retval || ',' || value;
end if;
return ODCIConst.Success;
end;
member function ODCIAggregateTerminate(self IN stringCat,
returnValue OUT varchar2,
flags IN number)
return number is
begin
returnValue := self.retval;
return ODCIConst.Success;
end;
member function ODCIAggregateMerge(self IN OUT stringCat,
ctx2 IN stringCat) return number is
begin
self.retval := self.retval || ctx2.retval;
return ODCIConst.Success;
end;
end;
CREATE OR REPLACE FUNCTION StrCat (input varchar2) RETURN varchar2
-- PARALLEL_ENABLE
AGGREGATE USING StringCat;
GraphicsConfiguration is an abstract class. You would need to subclass it. From the line of code you posted, it seems like you are going about things the wrong way. What are you trying to accomplish? Shouldn't this question be posted in the Swing or AWT forum?
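As an aside, the ODCI lifecycle the question implements (initialize, iterate, merge, terminate) maps naturally onto a dictionary-backed histogram. A Python sketch of the same shape; because merge just adds counts, it is order-independent and avoids the permutation problem strCat hits:

```python
from collections import Counter

class HistogramAgg:
    """Order-independent histogram aggregate: counts occurrences of each
    value, mirroring ODCIAggregate{Initialize,Iterate,Merge,Terminate}."""
    def __init__(self):          # ODCIAggregateInitialize
        self.counts = Counter()
    def iterate(self, value):    # ODCIAggregateIterate: one input row
        self.counts[value] += 1
    def merge(self, other):      # ODCIAggregateMerge: parallel branches
        self.counts += other.counts
    def terminate(self):         # ODCIAggregateTerminate: final string
        return ",".join(f"{k}:{v}" for k, v in sorted(self.counts.items()))
```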
-
Is there a better way to do this projection/aggregate query?
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDO's. Unlike 3rd party tools that go straight to the database we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDO's - no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
it takes <100ms to produce 5300 rows from 83000 assets.
The nearest I have managed with JDO is (pseudo code):
perform projection query to get t.description, d.description for every asset
loop on results
if this is first time we've had this combination of t.description,
d.description
perform aggregate query to get aggregates for this combination
The java code is below. It takes about 16000ms (with debug/trace logging
off, c.f. 100ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult(
"assetType.description, assetDescription.description");
q1.setOrdering(
"assetType.description ascending, assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters(
"String myAssetType, String myAssetDescription");
q2.setFilter(
"assetType.description == myAssetType && assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
Object[] cols = (Object[]) i.next();
String assetType = (String) cols[0];
String assetDescription = (String) cols[1];
String type_description =
assetDescription != null
? assetType + "~" + assetDescription
: assetType;
if (distinct.add(type_description)) {
Object[] cols2 =
(Object[]) q2.execute(assetType,
assetDescription);
// System.out.println(
// "type "
// + assetType
// + ", description "
// + assetDescription
// + ", count "
// + cols2[0]
// + ", sum "
// + cols2[1]);
q2.closeAll();
q1.closeAll();Neil,
It sounds like the problem that you're running into is that Kodo doesn't
yet support the JDO2 grouping constructs, so you're doing your own
grouping in the Java code. Is that accurate?
We do plan on adding direct grouping support to our aggregate/projection
capabilities in the near future, but as you've noticed, those
capabilities are not there yet.
-Patrick
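One interim workaround, until grouping is supported in the query layer: add purchPrice to the q1 projection and do the whole aggregation client-side in a single pass, instead of executing the inner aggregate query once per combination. A minimal sketch in plain (modern) Java; the row layout and all sample figures here are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupInMemory {

    // Aggregates accumulated per (type, description) combination.
    static final class Agg {
        long count;
        double sumPurchPrice;
    }

    // Single pass over projection rows of the form
    // {assetType.description, assetDescription.description, purchPrice}:
    // no extra query per combination.
    static Map<String, Agg> group(List<Object[]> rows) {
        Map<String, Agg> byKey = new LinkedHashMap<>();
        for (Object[] cols : rows) {
            String type = (String) cols[0];
            String description = (String) cols[1];
            double purchPrice = ((Number) cols[2]).doubleValue();
            // Same "type~description" key convention as the original code.
            String key = (description != null) ? type + "~" + description : type;
            Agg agg = byKey.computeIfAbsent(key, k -> new Agg());
            agg.count++;
            agg.sumPurchPrice += purchPrice;
        }
        return byKey;
    }

    public static void main(String[] args) {
        // Invented sample rows standing in for the q1 projection results.
        List<Object[]> rows = List.of(
                new Object[] {"Vehicle", "Truck", 100.0},
                new Object[] {"Vehicle", "Truck", 150.0},
                new Object[] {"Vehicle", null, 80.0},
                new Object[] {"Plant", "Crane", 500.0});
        for (Map.Entry<String, Agg> e : group(rows).entrySet()) {
            System.out.println(e.getKey() + " count=" + e.getValue().count
                    + " sum=" + e.getValue().sumPurchPrice);
        }
    }
}
```

This replaces the N+1 pattern (one q2 execution per distinct combination, which the timings below show costs 9/10ths of the elapsed time) with one projection query plus an O(n) in-memory pass.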
Neil Bacon wrote:
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDOs. Unlike 3rd-party tools that go straight to the database, we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDOs; no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
It takes <100 ms to produce 5,300 rows from 83,000 assets.
The nearest I have managed with JDO is (pseudocode):
perform projection query to get t.description, d.description for every asset
loop on results
    if this is the first time we've seen this combination of t.description, d.description
        perform aggregate query to get aggregates for this combination
The Java code is below. It takes about 16,000 ms (with debug/trace logging
off, cf. 100 ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult(
    "assetType.description, assetDescription.description");
q1.setOrdering(
    "assetType.description ascending, "
    + "assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters(
    "String myAssetType, String myAssetDescription");
q2.setFilter(
    "assetType.description == myAssetType "
    + "&& assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
    Object[] cols = (Object[]) i.next();
    String assetType = (String) cols[0];
    String assetDescription = (String) cols[1];
    String type_description =
        assetDescription != null
            ? assetType + "~" + assetDescription
            : assetType;
    if (distinct.add(type_description)) {
        Object[] cols2 =
            (Object[]) q2.execute(assetType, assetDescription);
        // System.out.println("type " + assetType
        //     + ", description " + assetDescription
        //     + ", count " + cols2[0]
        //     + ", sum " + cols2[1]);
    }
}
q2.closeAll();
q1.closeAll();
-
Aggregate functions cannot be used in group expressions
Hi, I have a report showing sales by vendor. I need to list all vendors with a monthly total > 5000 and combine the rest as "OTHER VENDORS".
Vendor is a group in my report, so I tried putting this expression on the group:
=IIF(Sum(Fields!Mth_1_Sales.Value)>5000,Fields!Vendor_No.Value,"OTHER VENDORS")
I've got an error: "aggregate functions cannot be used in group expressions"
How do I get vendors with sales < 5000 into "OTHER VENDORS"?
Hi,
You need to group by Month in the group expression, and then you can use the same expression in the report column:
=IIF(Sum(Fields!Mth_1_Sales.Value)>5000,Fields!Vendor_No.Value,"OTHER VENDORS")
Many Thanks
Please mark the post as answered if this post helps to solve your problem.
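Since SSRS rejects aggregates inside group expressions, another common route is to pre-bucket the vendors before the data reaches the report, in the dataset query or a processing step. The folding logic is a two-step aggregation: total per vendor, then merge vendors at or below the threshold into one "OTHER VENDORS" row. A sketch in plain Java, with all vendor names and figures invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VendorBuckets {

    // Vendors above the threshold keep their own row; everything else
    // is folded into a single "OTHER VENDORS" bucket.
    static Map<String, Double> bucket(Map<String, Double> totalsByVendor,
                                      double threshold) {
        Map<String, Double> out = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : totalsByVendor.entrySet()) {
            String key = e.getValue() > threshold ? e.getKey() : "OTHER VENDORS";
            out.merge(key, e.getValue(), Double::sum);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Double> totals = new LinkedHashMap<>();
        totals.put("ACME", 12000.0);  // above 5000: keeps its own row
        totals.put("SMALLCO", 900.0); // folded into OTHER VENDORS
        totals.put("TINYCO", 400.0);  // folded into OTHER VENDORS
        System.out.println(bucket(totals, 5000.0));
        // prints {ACME=12000.0, OTHER VENDORS=1300.0}
    }
}
```

The report can then group on the pre-computed bucket column directly, with no aggregate needed inside the group expression.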