Aggregation at cube
Hi All,
I am loading Transaction data into my cube from flat file.
My requirement is:
There are 2 documents:
x -- 10 -- 10/10/2008
x -- 20 -- 26/10/2008
When I load for the month, these two records get aggregated, but I need the date field in the display (the date field is Actual Date).
I know this has limited scope.
Any help would be great.
Regards,
Madhu
Hi,
Aggregation at month level is not possible at the cube level, so forget about it: since you have 0CALDAY in the records (the lowest granularity), the cube will never aggregate above it.
If you want data at the month level, then I would suggest you go for another cube or remove 0CALDAY from the existing cube.
Anyway, you can have a month-by-month display in the query, since you can restrict on month there.
Another way is to create a new key figure and store the sum for every month in it.
Here you can write the logic to aggregate the values based on the same set of characteristics, but it will be very tough to maintain.
Thanks
Ajeet
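The behaviour Ajeet describes can be sketched outside BW. A minimal Python sketch (not BW code; the document numbers and amounts are taken from the question, everything else is illustrative): records stored at day granularity stay separate, while a query restricted to the month rolls them up.

```python
from collections import defaultdict
from datetime import date

# The two documents "x" from the question: (doc_no, amount, actual_date)
records = [
    ("x", 10, date(2008, 10, 10)),
    ("x", 20, date(2008, 10, 26)),
]

# Stored at day granularity (like a cube with 0CALDAY): the two
# postings stay separate because their dates differ.
by_day = defaultdict(int)
for doc, amount, d in records:
    by_day[(doc, d)] += amount

# A query restricted to the month (like 0CALMONTH in the query
# definition) rolls them up: the day detail is summed away.
by_month = defaultdict(int)
for doc, amount, d in records:
    by_month[(doc, d.strftime("%Y%m"))] += amount

print(dict(by_month))  # {('x', '200810'): 30}
```

This is why the two requirements conflict: as soon as the display drops the date, the day-level records collapse into one monthly total.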
Similar Messages
-
That's what happens in my production environment... :-(

alter database myappl.mycube clear aggregates;
OK
execute aggregate selection on database myappl.mycube force_dump to view_file def_sel;
OK
execute aggregate build on database myappl.mycube using view_file def_sel;
1270032 The specified view list is invalid or the views were selected using a different outline

If I try an "execute aggregate process" it works and builds just the views that were defined in my csc file def_sel.

Any idea?
Thanks in advance

In my test environment it works...
Could you have a fragmented outline?
If so, see this thread for ways to reduce that fragmentation: Re: ASO too large
See Glenn's blog for an alternate way to get rid of fragmentation: http://glennschwartzbergs-essbase-blog.blogspot.com/2010/06/aso-outline-compaction.html
Regards,
Cameron Lackpour -
Net price aggregation in Purchase cube
Hi,
I have enhanced 0PUR_C01 cube to have Net price key figure, which I am getting from 2LIS_02_SCL. Problem I am facing over here is net price is getting aggregated in cube. Net price key figure is created with exception aggregation with Material as ref characteristic.
Can anyone explain how price/rate key figures can be used in such cases?
regards,
Vikram.

Hello Vikram,
Could I know how you solved this? I am also facing the same problem.
But for me, I am using exception aggregation as "Summation" only.
Regards
Raaju Saravanan -
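The problem in this sub-thread is that summing a price across documents is meaningless. A minimal Python sketch (illustrative only; the material and document numbers are made up, and "average per material" stands in for whichever exception aggregation the model actually needs, e.g. last value or average with Material as reference characteristic):

```python
from collections import defaultdict

# Hypothetical purchasing rows: (material, document, net_price)
rows = [
    ("MAT1", "4500000001", 10.0),
    ("MAT1", "4500000002", 10.0),
    ("MAT2", "4500000003", 25.0),
]

# Plain summation -- what the cube does by default. The same unit
# price is added per document, so the "price" doubles up.
plain_sum = sum(price for _, _, price in rows)
print(plain_sum)  # 45.0 -- not a meaningful price

# Exception-aggregation sketch: first collapse the price per
# reference characteristic (Material), then report one value each.
per_material = defaultdict(list)
for material, _, price in rows:
    per_material[material].append(price)
avg_price = {m: sum(p) / len(p) for m, p in per_material.items()}
print(avg_price)  # {'MAT1': 10.0, 'MAT2': 25.0}
```

This is why a plain "Summation" exception aggregation does not help here: it reproduces the default behaviour instead of collapsing the price per material.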
Help please - 10.1.0.4 aggregation dying
Hi,
I have an aggregation that is dying and I'm not sure exactly what is going on. I have a pretty large cube (12 dimensions - 1 dense and 11 sparse), and was trying to solve it with COMPRESSION turned on. Below I've attached the POUTFILEUNIT results.
Note that the solve died when the database ran out of temp space (at 20 GB).
The POUTFILEUNIT output looks weird to me: when I tried aggregating on a small test cube, I got messages like "2,800,000 total tuples" and "2,400,000 singles", implying that the compression would work wonders. However, I'm not seeing anything at all about singles in this POUTFILEUNIT.
How should I read this output and/or is it actually using the compression?
Thanks,
Scott
16:23:57.258 [ AW] Reattaching SYS.AWXML
16:23:57.258 [ AW] Done
16:23:57.367 [ AW] Reattaching DW_AW.TEST_RISKMGMT
16:23:57.367 [ AW] Done
16:23:57.883 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=all
16:23:58.805 [ AgClean] finish clean=all
16:23:58.805 [ AgDangle] start aggmap=TEST_RISKMGMT!OBJ993979675
16:23:58.805 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=memory
16:23:58.805 [ AgClean] finish clean=memory
16:23:58.820 [ AgDangle] finish
16:23:58.867 [ MHierCheck] start rel=TEST_RISKMGMT!AS_OF_DATE_PARENTREL multidim
16:23:58.867 [ MHierCheck] finish - validated
16:23:58.867 [ MHierCheck] start rel=TEST_RISKMGMT!OWNERSHIP_PARENTREL multidim
16:24:00.367 [ MHierCheck] finish - validated
16:24:00.367 [ MHierCheck] start rel=TEST_RISKMGMT!COMPLIANCE_RATING_PARENTREL multidim
16:24:00.367 [ MHierCheck] finish - validated
16:24:00.367 [ MHierCheck] start rel=TEST_RISKMGMT!SECURITY_PARENTREL multidim
16:24:00.555 [ MHierCheck] finish - validated
16:24:00.555 [ MHierCheck] start rel=TEST_RISKMGMT!EXPOSURE_PARENTREL multidim
16:24:00.633 [ MHierCheck] finish - validated
16:24:00.633 [ MHierCheck] start rel=TEST_RISKMGMT!PROPERTY_PARENTREL multidim
16:24:00.648 [ MHierCheck] finish - validated
16:24:00.648 [ MHierCheck] start rel=TEST_RISKMGMT!SETTLEMENT_DATE_PARENTREL multidim
16:24:00.680 [ MHierCheck] finish - validated
16:24:00.680 [ MHierCheck] start rel=TEST_RISKMGMT!TRADE_DATE_PARENTREL multidim
16:24:00.711 [ MHierCheck] finish - validated
16:24:00.711 [ MHierCheck] start rel=TEST_RISKMGMT!LINES_OF_BUSINESS_PARENTREL multidim
16:24:00.711 [ MHierCheck] finish - validated
16:24:00.711 [ MHierCheck] start rel=TEST_RISKMGMT!MATURITY_DATE_PARENTREL multidim
16:24:00.726 [ MHierCheck] finish - validated
16:24:00.726 [ MHierCheck] start rel=TEST_RISKMGMT!STATUS_PARENTREL multidim
16:24:00.726 [ MHierCheck] finish - validated
16:24:00.742 [ MHierCheck] start rel=TEST_RISKMGMT!SYSTEM_PARENTREL multidim
16:24:00.742 [ MHierCheck] finish - validated
16:24:00.742 [multipath check] start
16:24:00.742 [multipath check] finish
16:24:02.539 [multipath check] start
16:24:10.476 [multipath check] finish
16:24:10.679 [multipath check] start
16:24:10.695 [multipath check] finish
16:24:11.039 [multipath check] start
16:24:12.023 [multipath check] finish
16:24:12.367 [multipath check] start
16:24:12.820 [multipath check] finish
16:24:12.898 [multipath check] start
16:24:12.945 [multipath check] finish
16:24:13.023 [multipath check] start
16:24:13.101 [multipath check] finish
16:24:13.179 [multipath check] start
16:24:13.258 [multipath check] finish
16:24:13.320 [multipath check] start
16:24:13.336 [multipath check] finish
16:24:13.445 [multipath check] start
16:24:13.523 [multipath check] finish
16:24:13.539 [multipath check] start
16:24:13.539 [multipath check] finish
16:24:13.554 [multipath check] start
16:24:13.570 [multipath check] finish
16:24:13.586 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=session
16:24:13.633 [ AgClean] finish clean=session
16:24:13.633 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=session
16:24:13.633 [ AgClean] finish clean=session
16:24:15.445 [ SQL Import] Start
16:24:15.445 [ SQL Import] 13 defines done, starting SQL execution
16:25:40.851 [ SQL Import] finished SQL execution
16:25:48.663 [ SQL Import] row # 100001
16:25:53.960 [ SQL Import] row # 200001
16:25:58.413 [ SQL Import] row # 300001
16:26:02.491 [ SQL Import] row # 400001
16:26:06.116 [ SQL Import] row # 500001
16:26:09.976 [ SQL Import] row # 600001
16:26:14.569 [ SQL Import] row # 700001
16:26:19.100 [ SQL Import] row # 800001
16:26:25.069 [ SQL Import] row # 900001
16:26:29.725 [ SQL Import] row # 1000001
16:26:34.257 [ SQL Import] row # 1100001
16:26:38.507 [ SQL Import] row # 1200001
16:26:42.991 [ SQL Import] row # 1300001
16:26:47.303 [ SQL Import] row # 1400001
16:26:52.882 [ SQL Import] row # 1500001
16:26:57.397 [ SQL Import] row # 1600001
16:27:01.366 [ SQL Import] row # 1700001
16:27:05.850 [ SQL Import] row # 1800001
16:27:10.381 [ SQL Import] row # 1900001
16:27:14.725 [ SQL Import] row # 2000001
16:27:19.334 [ SQL Import] row # 2100001
16:27:23.834 [ SQL Import] row # 2200001
16:27:28.272 [ SQL Import] row # 2300001
16:27:32.631 [ SQL Import] row # 2400001
16:27:36.444 [ SQL Import] row # 2500001
16:27:40.662 [ SQL Import] row # 2600001
16:27:45.584 [ SQL Import] row # 2700001
16:27:51.303 [ SQL Import] normal finish, 2796291 rows
16:27:51.334 [ Update] start
16:27:51.381 [ Update] updating TEST_RISKMGMT
16:27:52.787 [ Solo update] DW_AW.TEST_RISKMGMT-33024-0 (new=1) pg 2-64 begin
16:27:52.834 [ Solo update] DW_AW.TEST_RISKMGMT-33024-0 2/2 pgs end
16:27:52.850 [ Solo update] DW_AW.TEST_RISKMGMT-33025-0 (new=1) pg 2-64 begin
16:27:52.959 [ Solo update] DW_AW.TEST_RISKMGMT-33025-0 11/11 pgs end
16:27:52.975 [ Solo update] DW_AW.TEST_RISKMGMT-33027-0 (new=1) pg 2-64 begin
16:27:53.069 [ Solo update] DW_AW.TEST_RISKMGMT-33027-0 4/4 pgs end
16:27:53.084 [ Solo update] DW_AW.TEST_RISKMGMT-33029-0 (new=1) pg 2-64 begin
16:27:53.178 [ Solo update] DW_AW.TEST_RISKMGMT-33029-0 4/4 pgs end
16:27:53.178 [ Solo update] DW_AW.TEST_RISKMGMT-33030-0 (new=1) pg 2-64 begin
16:27:53.178 [ Solo update] DW_AW.TEST_RISKMGMT-33030-0 2/2 pgs end
16:27:53.194 [ Solo update] DW_AW.TEST_RISKMGMT-33031-0 (new=1) pg 2-64 begin
16:27:53.287 [ Solo update] DW_AW.TEST_RISKMGMT-33031-0 5/5 pgs end
16:27:53.303 [ Solo update] DW_AW.TEST_RISKMGMT-33032-0 (new=1) pg 2-64 begin
16:27:53.350 [ Solo update] DW_AW.TEST_RISKMGMT-33032-0 2/2 pgs end
16:27:53.350 [ Solo update] DW_AW.TEST_RISKMGMT-33033-0 (new=1) pg 2-64 begin
16:27:53.381 [ Solo update] DW_AW.TEST_RISKMGMT-33033-0 5/5 pgs end
16:27:53.381 [ Solo update] DW_AW.TEST_RISKMGMT-33035-0 (new=1) pg 2-64 begin
16:27:53.444 [ Solo update] DW_AW.TEST_RISKMGMT-33035-0 4/4 pgs end
16:27:53.459 [ Solo update] DW_AW.TEST_RISKMGMT-33037-0 (new=1) pg 2-64 begin
16:27:53.584 [ Solo update] DW_AW.TEST_RISKMGMT-33037-0 4/4 pgs end
16:27:53.600 [ Solo update] DW_AW.TEST_RISKMGMT-33038-0 (new=1) pg 2-64 begin
16:27:53.678 [ Solo update] DW_AW.TEST_RISKMGMT-33038-0 2/2 pgs end
16:27:53.678 [ Solo update] DW_AW.TEST_RISKMGMT-33039-0 (new=1) pg 2-64 begin
16:27:53.787 [ Solo update] DW_AW.TEST_RISKMGMT-33039-0 7/7 pgs end
16:27:53.819 [ Solo update] DW_AW.TEST_RISKMGMT-33040-0 (new=1) pg 2-64 begin
16:27:53.850 [ Solo update] DW_AW.TEST_RISKMGMT-33040-0 2/2 pgs end
16:27:53.866 [ Solo update] DW_AW.TEST_RISKMGMT-33041-0 (new=1) pg 2-64 begin
16:27:53.881 [ Solo update] DW_AW.TEST_RISKMGMT-33041-0 10/10 pgs end
16:27:53.897 [ Solo update] DW_AW.TEST_RISKMGMT-33043-0 (new=1) pg 2-64 begin
16:27:54.006 [ Solo update] DW_AW.TEST_RISKMGMT-33043-0 4/4 pgs end
16:27:54.022 [ Solo update] DW_AW.TEST_RISKMGMT-33045-0 (new=1) pg 2-64 begin
16:27:54.084 [ Solo update] DW_AW.TEST_RISKMGMT-33045-0 4/4 pgs end
16:27:54.100 [ Solo update] DW_AW.TEST_RISKMGMT-33046-0 (new=1) pg 2-64 begin
16:27:54.100 [ Solo update] DW_AW.TEST_RISKMGMT-33046-0 2/2 pgs end
16:27:54.100 [ Solo update] DW_AW.TEST_RISKMGMT-33047-0 (new=1) pg 2-64 begin
16:27:54.178 [ Solo update] DW_AW.TEST_RISKMGMT-33047-0 5/5 pgs end
16:27:54.194 [ Solo update] DW_AW.TEST_RISKMGMT-33048-0 (new=1) pg 2-64 begin
16:27:54.194 [ Solo update] DW_AW.TEST_RISKMGMT-33048-0 2/2 pgs end
16:27:54.194 [ Solo update] DW_AW.TEST_RISKMGMT-33049-0 (new=1) pg 2-64 begin
16:27:54.225 [ Solo update] DW_AW.TEST_RISKMGMT-33049-0 5/5 pgs end
16:27:54.225 [ Solo update] DW_AW.TEST_RISKMGMT-33051-0 (new=1) pg 2-64 begin
16:27:54.303 [ Solo update] DW_AW.TEST_RISKMGMT-33051-0 4/4 pgs end
16:27:54.319 [ Solo update] DW_AW.TEST_RISKMGMT-33053-0 (new=1) pg 2-64 begin
16:27:54.350 [ Solo update] DW_AW.TEST_RISKMGMT-33053-0 4/4 pgs end
16:27:54.350 [ Solo update] DW_AW.TEST_RISKMGMT-33054-0 (new=1) pg 2-64 begin
16:27:54.350 [ Solo update] DW_AW.TEST_RISKMGMT-33054-0 2/2 pgs end
16:27:54.616 [ Solo update] DW_AW.TEST_RISKMGMT-30751-0 (new=0) pg 2-64 begin
16:27:54.647 [ Solo update] DW_AW.TEST_RISKMGMT-30751-0 2/25 pgs end
16:27:54.647 [ Solo update] DW_AW.TEST_RISKMGMT-33055-0 (new=1) pg 2-64 begin
16:27:54.694 [ Solo update] DW_AW.TEST_RISKMGMT-33055-0 5/5 pgs end
16:27:54.709 [ Solo update] DW_AW.TEST_RISKMGMT-33056-0 (new=1) pg 2-64 begin
16:27:54.725 [ Solo update] DW_AW.TEST_RISKMGMT-33056-0 2/2 pgs end
16:27:54.725 [ Solo update] DW_AW.TEST_RISKMGMT-33057-0 (new=1) pg 2-64 begin
16:27:54.803 [ Solo update] DW_AW.TEST_RISKMGMT-33057-0 5/5 pgs end
16:27:54.866 [ Solo update] DW_AW.TEST_RISKMGMT-30244-0 (new=0) pg 1-464 begin
16:27:54.975 [ Solo update] DW_AW.TEST_RISKMGMT-30244-0 8/465 pgs end
16:27:55.084 [ Solo update] DW_AW.TEST_RISKMGMT-30502-0 (new=0) pg 2-64 begin
16:27:55.084 [ Solo update] DW_AW.TEST_RISKMGMT-30502-0 2/15 pgs end
16:27:55.256 [ Solo update] DW_AW.TEST_RISKMGMT-29995-0 (new=0) pg 2-64 begin
16:27:55.272 [ Solo update] DW_AW.TEST_RISKMGMT-29995-0 2/9 pgs end
16:27:55.334 [ Solo update] DW_AW.TEST_RISKMGMT-32570-0 (new=0) pg 2-64 begin
16:27:55.350 [ Solo update] DW_AW.TEST_RISKMGMT-32570-0 2/5 pgs end
16:27:55.381 [ Solo update] DW_AW.TEST_RISKMGMT-32571-0 (new=0) pg 2-64 begin
16:27:55.397 [ Solo update] DW_AW.TEST_RISKMGMT-32571-0 2/2 pgs end
16:27:55.537 [ Solo update] DW_AW.TEST_RISKMGMT-32576-0 (new=0) pg 2-64 begin
16:27:55.584 [ Solo update] DW_AW.TEST_RISKMGMT-32576-0 2/2 pgs end
16:27:55.678 [ Solo update] DW_AW.TEST_RISKMGMT-32577-0 (new=0) pg 2-64 begin
16:27:55.694 [ Solo update] DW_AW.TEST_RISKMGMT-32577-0 2/4 pgs end
16:27:55.772 [ Solo update] DW_AW.TEST_RISKMGMT-32578-0 (new=0) pg 2-64 begin
16:27:55.850 [ Solo update] DW_AW.TEST_RISKMGMT-32578-0 2/2 pgs end
16:27:55.959 [ Solo update] DW_AW.TEST_RISKMGMT-32579-0 (new=0) pg 2-64 begin
16:27:56.006 [ Solo update] DW_AW.TEST_RISKMGMT-32579-0 2/4 pgs end
16:27:56.053 [ Solo update] DW_AW.TEST_RISKMGMT-32580-0 (new=0) pg 2-64 begin
16:27:56.100 [ Solo update] DW_AW.TEST_RISKMGMT-32580-0 2/2 pgs end
16:27:56.162 [ Solo update] DW_AW.TEST_RISKMGMT-32581-0 (new=0) pg 2-64 begin
16:27:56.178 [ Solo update] DW_AW.TEST_RISKMGMT-32581-0 2/4 pgs end
16:27:56.240 [ Solo update] DW_AW.TEST_RISKMGMT-32584-0 (new=0) pg 2-64 begin
16:27:56.272 [ Solo update] DW_AW.TEST_RISKMGMT-32584-0 2/2 pgs end
16:27:56.334 [ Solo update] DW_AW.TEST_RISKMGMT-32585-0 (new=0) pg 2-64 begin
16:27:56.350 [ Solo update] DW_AW.TEST_RISKMGMT-32585-0 2/4 pgs end
16:27:56.365 [ Solo update] DW_AW.TEST_RISKMGMT-32588-0 (new=0) pg 2-64 begin
16:27:56.381 [ Solo update] DW_AW.TEST_RISKMGMT-32588-0 2/2 pgs end
16:27:56.397 [ Solo update] DW_AW.TEST_RISKMGMT-32589-0 (new=0) pg 2-64 begin
16:27:56.397 [ Solo update] DW_AW.TEST_RISKMGMT-32589-0 2/4 pgs end
16:27:56.428 [ Solo update] DW_AW.TEST_RISKMGMT-32592-0 (new=0) pg 2-64 begin
16:27:56.428 [ Solo update] DW_AW.TEST_RISKMGMT-32592-0 2/2 pgs end
16:27:56.459 [ Solo update] DW_AW.TEST_RISKMGMT-32593-0 (new=0) pg 2-64 begin
16:27:56.490 [ Solo update] DW_AW.TEST_RISKMGMT-32593-0 2/4 pgs end
16:27:56.553 [ Solo update] DW_AW.TEST_RISKMGMT-32596-0 (new=0) pg 2-64 begin
16:27:56.553 [ Solo update] DW_AW.TEST_RISKMGMT-32596-0 2/2 pgs end
16:27:56.631 [ Solo update] DW_AW.TEST_RISKMGMT-32597-0 (new=0) pg 2-64 begin
16:27:56.709 [ Solo update] DW_AW.TEST_RISKMGMT-32597-0 2/4 pgs end
16:27:56.787 [ Solo update] DW_AW.TEST_RISKMGMT-32600-0 (new=0) pg 2-64 begin
16:27:56.850 [ Solo update] DW_AW.TEST_RISKMGMT-32600-0 2/2 pgs end
16:27:57.006 [ Solo update] DW_AW.TEST_RISKMGMT-32601-0 (new=0) pg 2-64 begin
16:27:57.022 [ Solo update] DW_AW.TEST_RISKMGMT-32601-0 2/4 pgs end
16:27:57.069 [ Solo update] DW_AW.TEST_RISKMGMT-32604-0 (new=0) pg 2-64 begin
16:27:57.069 [ Solo update] DW_AW.TEST_RISKMGMT-32604-0 2/2 pgs end
16:27:57.100 [ Solo update] DW_AW.TEST_RISKMGMT-32605-0 (new=0) pg 2-64 begin
16:27:57.147 [ Solo update] DW_AW.TEST_RISKMGMT-32605-0 2/4 pgs end
16:27:57.178 [ Solo update] DW_AW.TEST_RISKMGMT-32608-0 (new=0) pg 2-64 begin
16:27:57.194 [ Solo update] DW_AW.TEST_RISKMGMT-32608-0 2/2 pgs end
16:27:57.272 [ Solo update] DW_AW.TEST_RISKMGMT-32609-0 (new=0) pg 2-64 begin
16:27:57.287 [ Solo update] DW_AW.TEST_RISKMGMT-32609-0 2/4 pgs end
16:27:57.303 [ Solo update] DW_AW.TEST_RISKMGMT-32612-0 (new=0) pg 2-64 begin
16:27:57.319 [ Solo update] DW_AW.TEST_RISKMGMT-32612-0 2/2 pgs end
16:27:57.381 [ Solo update] DW_AW.TEST_RISKMGMT-32613-0 (new=0) pg 2-64 begin
16:27:57.397 [ Solo update] DW_AW.TEST_RISKMGMT-32613-0 2/4 pgs end
16:27:57.428 [ Solo update] DW_AW.TEST_RISKMGMT-32616-0 (new=0) pg 2-64 begin
16:27:57.428 [ Solo update] DW_AW.TEST_RISKMGMT-32616-0 2/2 pgs end
16:27:57.475 [ Solo update] DW_AW.TEST_RISKMGMT-32617-0 (new=0) pg 2-64 begin
16:27:57.475 [ Solo update] DW_AW.TEST_RISKMGMT-32617-0 2/4 pgs end
16:27:57.506 [ Solo update] DW_AW.TEST_RISKMGMT-32620-0 (new=0) pg 2-64 begin
16:27:57.537 [ Solo update] DW_AW.TEST_RISKMGMT-32620-0 2/2 pgs end
16:27:57.569 [ Solo update] DW_AW.TEST_RISKMGMT-32621-0 (new=0) pg 2-64 begin
16:27:57.584 [ Solo update] DW_AW.TEST_RISKMGMT-32621-0 2/4 pgs end
16:27:57.725 [ Solo update] DW_AW.TEST_RISKMGMT-30835-0 (new=0) pg 2-64 begin
16:27:57.772 [ Solo update] DW_AW.TEST_RISKMGMT-30835-0 2/8 pgs end
16:27:57.865 [ Solo update] DW_AW.TEST_RISKMGMT-30583-0 (new=0) pg 2-64 begin
16:27:57.897 [ Solo update] DW_AW.TEST_RISKMGMT-30583-0 2/9 pgs end
16:27:58.256 [ Solo update] DW_AW.TEST_RISKMGMT-30334-0 (new=0) pg 2-64 begin
16:27:58.272 [ Solo update] DW_AW.TEST_RISKMGMT-30334-0 2/10 pgs end
16:27:58.381 [ Solo update] DW_AW.TEST_RISKMGMT-30079-0 (new=0) pg 2-64 begin
16:27:58.490 [ Solo update] DW_AW.TEST_RISKMGMT-30079-0 2/9 pgs end
16:27:58.897 [ Solo update] DW_AW.TEST_RISKMGMT-30913-0 (new=0) pg 2-64 begin
16:27:58.959 [ Solo update] DW_AW.TEST_RISKMGMT-30913-0 2/25 pgs end
16:27:58.975 [ Solo update] DW_AW.TEST_RISKMGMT-32962-0 (new=1) pg 2-64 begin
16:27:58.990 [ Solo update] DW_AW.TEST_RISKMGMT-32962-0 2/2 pgs end
16:27:59.037 [ Solo update] DW_AW.TEST_RISKMGMT-32963-0 (new=1) pg 2-64 begin
16:27:59.069 [ Solo update] DW_AW.TEST_RISKMGMT-32963-0 5/5 pgs end
16:27:59.069 [ Solo update] DW_AW.TEST_RISKMGMT-32965-0 (new=1) pg 2-64 begin
16:27:59.178 [ Solo update] DW_AW.TEST_RISKMGMT-32965-0 4/4 pgs end
16:27:59.194 [ Solo update] DW_AW.TEST_RISKMGMT-32967-0 (new=1) pg 2-64 begin
16:27:59.225 [ Solo update] DW_AW.TEST_RISKMGMT-32967-0 4/4 pgs end
16:27:59.303 [ Solo update] DW_AW.TEST_RISKMGMT-30664-0 (new=0) pg 1-132 begin
16:27:59.397 [ Solo update] DW_AW.TEST_RISKMGMT-30664-0 8/133 pgs end
16:27:59.412 [ Solo update] DW_AW.TEST_RISKMGMT-32968-0 (new=1) pg 2-64 begin
16:27:59.444 [ Solo update] DW_AW.TEST_RISKMGMT-32968-0 2/2 pgs end
16:27:59.490 [ Solo update] DW_AW.TEST_RISKMGMT-32969-0 (new=1) pg 2-64 begin
16:27:59.600 [ Solo update] DW_AW.TEST_RISKMGMT-32969-0 5/5 pgs end
16:27:59.881 [ Solo update] DW_AW.TEST_RISKMGMT-29642-0 (new=0) pg 2-64 begin
16:27:59.897 [ Solo update] DW_AW.TEST_RISKMGMT-29642-0 5/28 pgs end
16:27:59.928 [ Solo update] DW_AW.TEST_RISKMGMT-29643-0 (new=0) pg 2-64 begin
16:27:59.990 [ Solo update] DW_AW.TEST_RISKMGMT-29643-0 2/2 pgs end
16:27:59.990 [ Solo update] DW_AW.TEST_RISKMGMT-32971-0 (new=1) pg 2-64 begin
16:27:59.990 [ Solo update] DW_AW.TEST_RISKMGMT-32971-0 4/4 pgs end
16:28:00.006 [ Solo update] DW_AW.TEST_RISKMGMT-32973-0 (new=1) pg 2-64 begin
16:28:00.084 [ Solo update] DW_AW.TEST_RISKMGMT-32973-0 4/4 pgs end
16:28:00.084 [ Solo update] DW_AW.TEST_RISKMGMT-32974-0 (new=1) pg 2-64 begin
16:28:00.162 [ Solo update] DW_AW.TEST_RISKMGMT-32974-0 3/44 pgs end
16:28:00.162 [ Solo update] DW_AW.TEST_RISKMGMT-32975-0 (new=1) pg 2-64 begin
16:28:00.584 [ Solo update] DW_AW.TEST_RISKMGMT-32975-0 52/52 pgs end
16:28:01.115 [ Solo update] DW_AW.TEST_RISKMGMT-30160-0 (new=0) pg 2-64 begin
16:28:01.178 [ Solo update] DW_AW.TEST_RISKMGMT-30160-0 2/64 pgs end
16:28:01.194 [ Solo update] DW_AW.TEST_RISKMGMT-32976-0 (new=1) pg 2-64 begin
16:28:01.225 [ Solo update] DW_AW.TEST_RISKMGMT-32976-0 3/44 pgs end
16:28:01.287 [ Solo update] DW_AW.TEST_RISKMGMT-32977-0 (new=1) pg 1-232 begin
16:28:03.100 [ Solo update] DW_AW.TEST_RISKMGMT-32977-0 231/233 pgs end
16:28:03.272 [ Solo update] DW_AW.TEST_RISKMGMT-30418-0 (new=0) pg 2-64 begin
16:28:03.287 [ Solo update] DW_AW.TEST_RISKMGMT-30418-0 2/23 pgs end
16:28:03.303 [ Solo update] DW_AW.TEST_RISKMGMT-32979-0 (new=1) pg 2-64 begin
16:28:03.381 [ Solo update] DW_AW.TEST_RISKMGMT-32979-0 4/4 pgs end
16:28:03.381 [ Solo update] DW_AW.TEST_RISKMGMT-32981-0 (new=1) pg 2-64 begin
16:28:03.397 [ Solo update] DW_AW.TEST_RISKMGMT-32981-0 4/4 pgs end
16:28:03.397 [ Solo update] DW_AW.TEST_RISKMGMT-32982-0 (new=1) pg 2-64 begin
16:28:03.412 [ Solo update] DW_AW.TEST_RISKMGMT-32982-0 2/2 pgs end
16:28:03.553 [ Solo update] DW_AW.TEST_RISKMGMT-31191-0 (new=0) pg 2-64 begin
16:28:03.569 [ Solo update] DW_AW.TEST_RISKMGMT-31191-0 3/6 pgs end
16:28:03.569 [ Solo update] DW_AW.TEST_RISKMGMT-32983-0 (new=1) pg 2-64 begin
16:28:03.584 [ Solo update] DW_AW.TEST_RISKMGMT-32983-0 5/5 pgs end
16:28:03.584 [ Solo update] DW_AW.TEST_RISKMGMT-32984-0 (new=1) pg 2-64 begin
16:28:03.600 [ Solo update] DW_AW.TEST_RISKMGMT-32984-0 2/2 pgs end
16:28:03.600 [ Solo update] DW_AW.TEST_RISKMGMT-32985-0 (new=1) pg 2-64 begin
16:28:03.615 [ Solo update] DW_AW.TEST_RISKMGMT-32985-0 5/5 pgs end
16:28:03.615 [ Solo update] DW_AW.TEST_RISKMGMT-32987-0 (new=1) pg 2-64 begin
16:28:03.725 [ Solo update] DW_AW.TEST_RISKMGMT-32987-0 4/4 pgs end
16:28:03.725 [ Solo update] DW_AW.TEST_RISKMGMT-32989-0 (new=1) pg 2-64 begin
16:28:03.772 [ Solo update] DW_AW.TEST_RISKMGMT-32989-0 4/4 pgs end
16:28:03.772 [ Solo update] DW_AW.TEST_RISKMGMT-32990-0 (new=1) pg 2-64 begin
16:28:03.819 [ Solo update] DW_AW.TEST_RISKMGMT-32990-0 3/12 pgs end
16:28:03.819 [ Solo update] DW_AW.TEST_RISKMGMT-32991-0 (new=1) pg 2-64 begin
16:28:04.115 [ Solo update] DW_AW.TEST_RISKMGMT-32991-0 29/29 pgs end
16:28:04.115 [ Solo update] DW_AW.TEST_RISKMGMT-32992-0 (new=1) pg 2-64 begin
16:28:04.131 [ Solo update] DW_AW.TEST_RISKMGMT-32992-0 3/12 pgs end
16:28:04.131 [ Solo update] DW_AW.TEST_RISKMGMT-32993-0 (new=1) pg 2-64 begin
16:28:04.537 [ Solo update] DW_AW.TEST_RISKMGMT-32993-0 51/51 pgs end
16:28:04.569 [ Solo update] DW_AW.TEST_RISKMGMT-32995-0 (new=1) pg 2-64 begin
16:28:04.584 [ Solo update] DW_AW.TEST_RISKMGMT-32995-0 4/4 pgs end
16:28:04.584 [ Solo update] DW_AW.TEST_RISKMGMT-32997-0 (new=1) pg 2-64 begin
16:28:04.662 [ Solo update] DW_AW.TEST_RISKMGMT-32997-0 4/4 pgs end
16:28:04.662 [ Solo update] DW_AW.TEST_RISKMGMT-32998-0 (new=1) pg 2-64 begin
16:28:04.662 [ Solo update] DW_AW.TEST_RISKMGMT-32998-0 2/2 pgs end
16:28:04.740 [ Solo update] DW_AW.TEST_RISKMGMT-32999-0 (new=1) pg 2-64 begin
16:28:04.912 [ Solo update] DW_AW.TEST_RISKMGMT-32999-0 11/11 pgs end
16:28:04.912 [ Solo update] DW_AW.TEST_RISKMGMT-33000-0 (new=1) pg 2-64 begin
16:28:04.928 [ Solo update] DW_AW.TEST_RISKMGMT-33000-0 2/2 pgs end
16:28:04.928 [ Solo update] DW_AW.TEST_RISKMGMT-33001-0 (new=1) pg 2-64 begin
16:28:05.100 [ Solo update] DW_AW.TEST_RISKMGMT-33001-0 24/24 pgs end
16:28:05.115 [ Solo update] DW_AW.TEST_RISKMGMT-33003-0 (new=1) pg 2-64 begin
16:28:05.115 [ Solo update] DW_AW.TEST_RISKMGMT-33003-0 4/4 pgs end
16:28:05.131 [ Solo update] DW_AW.TEST_RISKMGMT-33005-0 (new=1) pg 2-64 begin
16:28:05.178 [ Solo update] DW_AW.TEST_RISKMGMT-33005-0 4/4 pgs end
16:28:05.178 [ Solo update] DW_AW.TEST_RISKMGMT-33006-0 (new=1) pg 2-64 begin
16:28:05.178 [ Solo update] DW_AW.TEST_RISKMGMT-33006-0 2/2 pgs end
16:28:05.194 [ Solo update] DW_AW.TEST_RISKMGMT-33007-0 (new=1) pg 2-64 begin
16:28:05.303 [ Solo update] DW_AW.TEST_RISKMGMT-33007-0 6/6 pgs end
16:28:05.303 [ Solo update] DW_AW.TEST_RISKMGMT-33008-0 (new=1) pg 2-64 begin
16:28:05.334 [ Solo update] DW_AW.TEST_RISKMGMT-33008-0 2/2 pgs end
16:28:05.350 [ Solo update] DW_AW.TEST_RISKMGMT-33009-0 (new=1) pg 2-64 begin
16:28:05.428 [ Solo update] DW_AW.TEST_RISKMGMT-33009-0 8/8 pgs end
16:28:05.428 [ Solo update] DW_AW.TEST_RISKMGMT-33011-0 (new=1) pg 2-64 begin
16:28:05.522 [ Solo update] DW_AW.TEST_RISKMGMT-33011-0 4/4 pgs end
16:28:05.522 [ Solo update] DW_AW.TEST_RISKMGMT-33013-0 (new=1) pg 2-64 begin
16:28:05.537 [ Solo update] DW_AW.TEST_RISKMGMT-33013-0 4/4 pgs end
16:28:05.537 [ Solo update] DW_AW.TEST_RISKMGMT-33014-0 (new=1) pg 2-64 begin
16:28:05.553 [ Solo update] DW_AW.TEST_RISKMGMT-33014-0 2/2 pgs end
16:28:05.553 [ Solo update] DW_AW.TEST_RISKMGMT-33015-0 (new=1) pg 2-64 begin
16:28:05.569 [ Solo update] DW_AW.TEST_RISKMGMT-33015-0 8/8 pgs end
16:28:05.569 [ Solo update] DW_AW.TEST_RISKMGMT-33016-0 (new=1) pg 2-64 begin
16:28:05.631 [ Solo update] DW_AW.TEST_RISKMGMT-33016-0 2/2 pgs end
16:28:05.662 [ Solo update] DW_AW.TEST_RISKMGMT-33017-0 (new=1) pg 2-64 begin
16:28:05.725 [ Solo update] DW_AW.TEST_RISKMGMT-33017-0 11/11 pgs end
16:28:05.725 [ Solo update] DW_AW.TEST_RISKMGMT-33019-0 (new=1) pg 2-64 begin
16:28:05.819 [ Solo update] DW_AW.TEST_RISKMGMT-33019-0 4/4 pgs end
16:28:05.834 [ Solo update] DW_AW.TEST_RISKMGMT-33021-0 (new=1) pg 2-64 begin
16:28:05.850 [ Solo update] DW_AW.TEST_RISKMGMT-33021-0 4/4 pgs end
16:28:05.850 [ Solo update] DW_AW.TEST_RISKMGMT-33022-0 (new=1) pg 2-64 begin
16:28:05.850 [ Solo update] DW_AW.TEST_RISKMGMT-33022-0 2/2 pgs end
16:28:05.865 [ Solo update] DW_AW.TEST_RISKMGMT-33023-0 (new=1) pg 2-64 begin
16:28:05.959 [ Solo update] DW_AW.TEST_RISKMGMT-33023-0 8/8 pgs end
16:28:06.006 [ Para update] begin parallel update of 3 files w/ 3 nslaves.
16:28:37.021 [ Para update] ps 32567 ext 0 done 2664 pgs written
16:28:40.646 [ Para update] ps 32568 ext 0 done 3089 pgs written
16:29:38.896 [ Para update] ps 32573 ext 0 done 17314 pgs written
16:29:38.896 [ Para update] finished parallel update of 3 files w/ 3 nslaves.
16:29:39.302 [ Update] finish
16:29:40.771 [ MHierCheck] start rel=TEST_RISKMGMT!AS_OF_DATE_PARENTREL multidim
16:29:40.787 [ MHierCheck] finish - validated
16:29:40.787 [ MHierCheck] start rel=TEST_RISKMGMT!OWNERSHIP_PARENTREL multidim
16:29:41.990 [ MHierCheck] finish - validated
16:29:41.990 [ MHierCheck] start rel=TEST_RISKMGMT!COMPLIANCE_RATING_PARENTREL multidim
16:29:42.005 [ MHierCheck] finish - validated
16:29:42.005 [ MHierCheck] start rel=TEST_RISKMGMT!SECURITY_PARENTREL multidim
16:29:42.302 [ MHierCheck] finish - validated
16:29:42.302 [ MHierCheck] start rel=TEST_RISKMGMT!EXPOSURE_PARENTREL multidim
16:29:42.427 [ MHierCheck] finish - validated
16:29:42.427 [ MHierCheck] start rel=TEST_RISKMGMT!PROPERTY_PARENTREL multidim
16:29:42.443 [ MHierCheck] finish - validated
16:29:42.443 [ MHierCheck] start rel=TEST_RISKMGMT!SETTLEMENT_DATE_PARENTREL multidim
16:29:42.521 [ MHierCheck] finish - validated
16:29:42.521 [ MHierCheck] start rel=TEST_RISKMGMT!TRADE_DATE_PARENTREL multidim
16:29:42.552 [ MHierCheck] finish - validated
16:29:42.552 [ MHierCheck] start rel=TEST_RISKMGMT!LINES_OF_BUSINESS_PARENTREL multidim
16:29:42.552 [ MHierCheck] finish - validated
16:29:42.552 [ MHierCheck] start rel=TEST_RISKMGMT!MATURITY_DATE_PARENTREL multidim
16:29:42.584 [ MHierCheck] finish - validated
16:29:42.599 [ MHierCheck] start rel=TEST_RISKMGMT!STATUS_PARENTREL multidim
16:29:42.599 [ MHierCheck] finish - validated
16:29:42.599 [ MHierCheck] start rel=TEST_RISKMGMT!SYSTEM_PARENTREL multidim
16:29:42.599 [ MHierCheck] finish - validated
16:29:42.646 [multipath check] start
16:29:42.646 [multipath check] finish
16:29:44.474 [multipath check] start
16:29:52.287 [multipath check] finish
16:29:52.459 [multipath check] start
16:29:52.459 [multipath check] finish
16:29:52.802 [multipath check] start
16:29:53.865 [multipath check] finish
16:29:54.052 [multipath check] start
16:29:54.505 [multipath check] finish
16:29:54.568 [multipath check] start
16:29:54.630 [multipath check] finish
16:29:54.849 [multipath check] start
16:29:54.927 [multipath check] finish
16:29:55.005 [multipath check] start
16:29:55.083 [multipath check] finish
16:29:55.177 [multipath check] start
16:29:55.193 [multipath check] finish
16:29:55.302 [multipath check] start
16:29:55.380 [multipath check] finish
16:29:55.380 [multipath check] start
16:29:55.396 [multipath check] finish
16:29:55.474 [multipath check] start
16:29:55.474 [multipath check] finish
16:29:55.490 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=session
16:29:55.490 [ AgClean] finish clean=session
16:29:55.490 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=session
16:29:55.490 [ AgClean] finish clean=session
16:29:56.193 [ Aggregate] start func=0 vars=1
16:29:56.208 [ Aggregate] compound: <POSITIONS2_PRT_MEASDIM AS_OF_DATE POSITIONS2_COMPOSITE <OWNERSHIP COMPLIANCE_RATING SECURITY EXPOSURE __XML_GENERATED_7 SETTLEMENT_DATE TRADE_DATE LINES_OF_BUSINESS MATURITY_DATE STATUS __XML_GENERATED_12>>
16:29:56.208 [ Dimen] dim[0]=POSITIONS2_CO stval=443295 stlen=0 smax=-1 composite
16:29:56.208 [ Dimen] dim[1]=AS_OF_DATE stval=0 stlen=39 smax=38
16:29:56.208 [ Dimen] dim[2]=POSITIONS2_PR stval=0 stlen=1 smax=0
16:29:56.208 [ MHierCheck] start rel=TEST_RISKMGMT!AS_OF_DATE_PARENTREL multidim
16:29:56.208 [ MHierCheck] finish - validated
16:29:56.208 [ MHierCheck] start rel=TEST_RISKMGMT!OWNERSHIP_PARENTREL multidim
16:29:57.146 [ MHierCheck] finish - validated
16:29:57.146 [ MHierCheck] start rel=TEST_RISKMGMT!COMPLIANCE_RATING_PARENTREL multidim
16:29:57.146 [ MHierCheck] finish - validated
16:29:57.146 [ MHierCheck] start rel=TEST_RISKMGMT!SECURITY_PARENTREL multidim
16:29:57.365 [ MHierCheck] finish - validated
16:29:57.380 [ MHierCheck] start rel=TEST_RISKMGMT!EXPOSURE_PARENTREL multidim
16:29:57.443 [ MHierCheck] finish - validated
16:29:57.443 [ MHierCheck] start rel=TEST_RISKMGMT!PROPERTY_PARENTREL multidim
16:29:57.458 [ MHierCheck] finish - validated
16:29:57.474 [ MHierCheck] start rel=TEST_RISKMGMT!SETTLEMENT_DATE_PARENTREL multidim
16:29:57.490 [ MHierCheck] finish - validated
16:29:57.490 [ MHierCheck] start rel=TEST_RISKMGMT!TRADE_DATE_PARENTREL multidim
16:29:57.521 [ MHierCheck] finish - validated
16:29:57.521 [ MHierCheck] start rel=TEST_RISKMGMT!LINES_OF_BUSINESS_PARENTREL multidim
16:29:57.537 [ MHierCheck] finish - validated
16:29:57.537 [ MHierCheck] start rel=TEST_RISKMGMT!MATURITY_DATE_PARENTREL multidim
16:29:57.552 [ MHierCheck] finish - validated
16:29:57.552 [ MHierCheck] start rel=TEST_RISKMGMT!STATUS_PARENTREL multidim
16:29:57.552 [ MHierCheck] finish - validated
16:29:57.568 [ MHierCheck] start rel=TEST_RISKMGMT!SYSTEM_PARENTREL multidim
16:29:57.568 [ MHierCheck] finish - validated
16:30:00.974 [multipath check] start
16:30:00.974 [multipath check] finish
16:30:00.990 [ CCCovStat] start dim=POSITIONS2_COMPOSITE
16:30:00.990 [ CCCovStat] base __XML_GENERATED_12 (9 of 10 in status)
16:30:00.990 [ CCCovStat] base STATUS (30 of 34 in status)
16:30:00.990 [ CCCovStat] base MATURITY_DATE (2996 of 3675 in status)
16:30:00.990 [ CCCovStat] base LINES_OF_BUSINESS (211 of 278 in status)
16:30:00.990 [ CCCovStat] base TRADE_DATE (3623 of 4184 in status)
16:30:00.990 [ CCCovStat] base SETTLEMENT_DATE (3623 of 4208 in status)
16:30:00.990 [ CCCovStat] base __XML_GENERATED_7 (2090 of 2137 in status)
16:30:01.005 [ CCCovStat] base EXPOSURE (10846 of 10871 in status)
16:30:01.005 [ CCCovStat] base SECURITY (20353 of 20526 in status)
16:30:01.005 [ CCCovStat] base COMPLIANCE_RATING (23 of 32 in status)
16:30:01.005 [ CCCovStat] base OWNERSHIP (82599 of 83007 in status)
16:30:01.005 [ CCCovStat] build leafstat
16:30:07.208 [ CCCovStat] finished stlen=443296
16:30:07.365 [multipath check] start
16:30:15.365 [multipath check] finish
16:30:15.380 [multipath check] start
16:30:15.380 [multipath check] finish
16:30:15.380 [multipath check] start
16:30:16.365 [multipath check] finish
16:30:16.365 [multipath check] start
16:30:16.833 [multipath check] finish
16:30:16.849 [multipath check] start
16:30:16.896 [multipath check] finish
16:30:16.896 [multipath check] start
16:30:16.974 [multipath check] finish
16:30:16.974 [multipath check] start
16:30:17.052 [multipath check] finish
16:30:17.052 [multipath check] start
16:30:17.068 [multipath check] finish
16:30:17.068 [multipath check] start
16:30:17.130 [multipath check] finish
16:30:17.130 [multipath check] start
16:30:17.130 [multipath check] finish
16:30:17.146 [multipath check] start
16:30:17.146 [multipath check] finish
16:30:17.177 [ ccube] build dim=TEST_RISKMGMT!POSITIONS2_COMPOSITE
16:30:17.458 [ ccube] Relation TEST_RISKMGMT!SYSTEM_PARENTREL, cost=18
16:30:17.490 [ ccube] Relation TEST_RISKMGMT!COMPLIANCE_RATING_PARENTREL, cost=69
16:30:17.505 [ ccube] Relation TEST_RISKMGMT!STATUS_PARENTREL, cost=90
16:30:17.536 [ ccube] Relation TEST_RISKMGMT!LINES_OF_BUSINESS_PARENTREL, cost=844
16:30:17.583 [ ccube] Relation TEST_RISKMGMT!PROPERTY_PARENTREL, cost=6270
16:30:17.599 [ ccube] Relation TEST_RISKMGMT!MATURITY_DATE_PARENTREL, cost=11984
16:30:17.646 [ ccube] Relation TEST_RISKMGMT!SETTLEMENT_DATE_PARENTREL, cost=14492
16:30:17.693 [ ccube] Relation TEST_RISKMGMT!TRADE_DATE_PARENTREL, cost=14492
16:30:17.755 [ ccube] Relation TEST_RISKMGMT!EXPOSURE_PARENTREL, cost=43384
16:30:17.927 [ ccube] Relation TEST_RISKMGMT!SECURITY_PARENTREL, cost=81412
16:30:18.568 [ ccube] Relation TEST_RISKMGMT!OWNERSHIP_PARENTREL, cost=412995
16:30:31.224 [ ccube] calc input=443296
16:30:31.224 [ ccube] calc hierarchy 0
16:30:37.739 [ ccube] calc nodes=443296/443296
16:30:43.989 [ ccube] calc depth=1 classes=0 avgchildren=0.0
16:30:49.989 [ ccube] calc hierarchy 1
16:30:57.411 [ ccube] calc nodes=443296/443296
16:31:08.036 [ ccube] calc depth=1 classes=5704 avgchildren=2.0
16:31:13.036 [ ccube] calc nodes=437366/449000
16:31:23.989 [ ccube] calc depth=2 classes=5060 avgchildren=2.0
16:31:31.380 [ ccube] calc hierarchy 2
16:31:38.708 [ ccube] calc nodes=432272/454060
16:32:02.458 [ ccube] calc depth=1 classes=86862 avgchildren=2.1
16:32:08.551 [ ccube] calc nodes=339360/540922
16:32:32.786 [ ccube] calc depth=2 classes=49366 avgchildren=2.0
16:32:37.692 [ ccube] calc hierarchy 3
16:32:43.114 [ ccube] calc nodes=288911/592157
16:33:20.114 [ ccube] calc depth=1 classes=49202 avgchildren=3.5
16:33:22.754 [ ccube] calc nodes=166295/700035
16:33:48.176 [ ccube] calc depth=2 classes=16454 avgchildren=4.1
16:33:49.145 [ ccube] calc nodes=115662/736271
16:34:02.239 [ ccube] calc depth=3 classes=7094 avgchildren=2.9
16:34:03.489 [ ccube] calc hierarchy 4
16:34:04.520 [ ccube] calc nodes=101845/751904
16:34:07.270 [ ccube] calc depth=1 classes=179 avgchildren=2.0
16:34:08.004 [ ccube] calc nodes=101664/755186
16:34:08.786 [ ccube] calc depth=2 classes=17 avgchildren=2.0
16:34:09.536 [ ccube] calc hierarchy 5
16:34:10.957 [ ccube] calc nodes=101647/755326
16:34:12.895 [ ccube] calc depth=1 classes=66 avgchildren=2.0
16:34:13.582 [ ccube] calc nodes=101581/755833
16:34:15.395 [ ccube] calc depth=2 classes=38 avgchildren=2.0
16:34:16.067 [ ccube] calc nodes=101543/755988
16:34:17.645 [ ccube] calc depth=3 classes=75 avgchildren=2.0
16:34:18.582 [ ccube] calc hierarchy 6
16:34:19.504 [ ccube] calc nodes=101468/756486
16:34:21.348 [ ccube] calc depth=1 classes=79 avgchildren=2.0
16:34:22.098 [ ccube] calc nodes=101389/757609
16:34:23.520 [ ccube] calc depth=2 classes=39 avgchildren=2.0
16:34:24.051 [ ccube] calc nodes=101350/757998
16:34:24.504 [ ccube] calc depth=3 classes=0 avgchildren=0.0
16:34:25.254 [ ccube] calc hierarchy 7
16:34:26.114 [ ccube] calc nodes=101350/757998
16:34:27.551 [ ccube] calc depth=1 classes=8 avgchildren=2.0
16:34:28.348 [ ccube] calc nodes=101342/758010
16:34:30.239 [ ccube] calc depth=2 classes=8 avgchildren=2.0
16:34:31.161 [ ccube] calc nodes=101334/758106
16:34:31.848 [ ccube] calc depth=3 classes=0 avgchildren=0.0
16:34:32.801 [ ccube] calc hierarchy 8
16:34:34.129 [ ccube] calc nodes=101334/758106
16:35:22.254 [ ccube] calc depth=1 classes=9415 avgchildren=3.0
16:35:23.848 [ ccube] calc nodes=82674/881186
16:35:24.411 [ ccube] calc depth=2 classes=13 avgchildren=2.0
16:35:24.879 [ ccube] calc nodes=82661/881202
16:35:26.567 [ ccube] calc depth=3 classes=32 avgchildren=2.0
16:35:27.864 [ ccube] calc hierarchy 9
16:35:30.661 [ ccube] calc nodes=82629/881243
16:35:39.379 [ ccube] calc depth=1 classes=1359 avgchildren=2.0
16:35:41.614 [ ccube] calc nodes=163862/890199
16:35:44.067 [ ccube] calc depth=2 classes=50 avgchildren=2.0
16:35:46.957 [ ccube] calc hierarchy 10
16:35:50.411 [ ccube] calc nodes=163812/890472
16:50:24.614 [ ccube] calc depth=1 classes=12571 avgchildren=12.0
16:50:29.411 [ ccube] calc nodes=25117/8688971
17:12:08.223 [ ccube] calc depth=2 classes=3385 avgchildren=3.8
17:12:14.067 [ ccube] calc nodes=15752/19103114
18:12:35.989 [ ccube] calc depth=3 classes=6645 avgchildren=3.7
18:12:47.301 [ ccube] calc nodes=6794/41907707
19:03:50.973 [ ccube] calc depth=4 classes=1829 avgchildren=3.0
19:03:57.051 [ ccube] calc topnodes=10241, dimmax=55141493
19:03:57.051 [ ccube] calc done
19:03:57.551 [ Aggregate] njobs=2
19:03:57.551 [ CCCovStat] start dim=POSITIONS2_COMPOSITE
19:03:57.598 [ CCCovStat] base __XML_GENERATED_12 (9 of 10 in status)
19:03:57.614 [ CCCovStat] base STATUS (30 of 34 in status)
19:03:57.614 [ CCCovStat] base MATURITY_DATE (2996 of 3675 in status)
19:03:57.614 [ CCCovStat] base LINES_OF_BUSINESS (211 of 278 in status)
19:03:57.614 [ CCCovStat] base TRADE_DATE (3623 of 4184 in status)
19:03:57.614 [ CCCovStat] base SETTLEMENT_DATE (3623 of 4208 in status)
19:03:57.629 [ CCCovStat] base __XML_GENERATED_7 (2090 of 2137 in status)
19:03:57.629 [ CCCovStat] base EXPOSURE (10846 of 10871 in status)
19:03:57.629 [ CCCovStat] base SECURITY (20353 of 20526 in status)
19:03:57.629 [ CCCovStat] base COMPLIANCE_RATING (23 of 32 in status)
19:03:57.645 [ CCCovStat] base OWNERSHIP (82599 of 83007 in status)
19:03:57.786 [ CCCovStat] build leafstat
19:04:05.457 [ CCCovStat] finished stlen=443296
19:04:05.457 [ Aggregate] cp2bas dim=0 stlen=443296 stval=-1
19:04:05.457 [ Aggregate] calcstart ndims=3 rudpos=1 depth=3
19:04:05.489 [ Aggregate] aggregate POSITIONS2_PRT_TOPVAR using OBJ993979675 over AS_OF_DATE op HIERARCHICAL-LAST
19:04:05.536 [ Dimen] dim[0]=POSITIONS2_CO stval=-1 stlen=443296 smax=55141493 composite
19:04:05.536 [ Dimen] dim[0]=__XML_GENERAT stval=9 stlen=9 smax=9
19:04:05.551 [ Dimen] dim[1]=STATUS stval=33 stlen=30 smax=33
19:04:05.551 [ Dimen] dim[2]=MATURITY_DATE stval=3674 stlen=2996 smax=3674
19:04:05.551 [ Dimen] dim[3]=LINES_OF_BUSI stval=277 stlen=211 smax=277
19:04:05.551 [ Dimen] dim[4]=TRADE_DATE stval=4183 stlen=3623 smax=4183
19:04:05.551 [ Dimen] dim[5]=SETTLEMENT_DA stval=4207 stlen=3623 smax=4207
19:04:05.551 [ Dimen] dim[6]=__XML_GENERAT stval=2136 stlen=2090 smax=2136
19:04:05.551 [ Dimen] dim[7]=EXPOSURE stval=10870 stlen=10846 smax=10870
19:04:05.551 [ Dimen] dim[8]=SECURITY stval=20525 stlen=20353 smax=20525
19:04:05.551 [ Dimen] dim[9]=COMPLIANCE_RA stval=31 stlen=23 smax=31
19:04:05.551 [ Dimen] dim[10]=OWNERSHIP stval=83006 stlen=82599 smax=83006
19:04:05.551 [ Dimen] dim[1]=AS_OF_DATE stval=38 stlen=39 smax=38
19:04:05.551 [ Dimen] dim[2]=POSITIONS2_PR stval=0 stlen=1 smax=0
19:04:05.567 [ Aggregate] agglen=1
19:04:05.567 [ Aggregate] fastdim=0 maxchsize=1
19:04:05.567 [ AgWlist] Generating worklists
19:04:05.567 [ AgWlist] start=0 end=3 len=3
19:04:05.567 [ AgWlist] done
19:04:05.567 [ Aggregate] wlstats worklist=1 parents=15
19:05:15.707 [ Aggregate] wlstats worklist=2 parents=6
19:06:15.707 [ Aggregate] wlstats worklist=3 parents=3
19:07:12.864 [ Aggregate] calcstart ndims=3 rudpos=0 depth=80
19:07:12.879 [ CCCovStat] start dim=POSITIONS2_COMPOSITE
19:07:12.942 [ CCCovStat] base __XML_GENERATED_12 (9 of 10 in status)
19:07:12.942 [ CCCovStat] base STATUS (30 of 34 in status)
19:07:12.942 [ CCCovStat] base MATURITY_DATE (2996 of 3675 in status)
19:07:12.957 [ CCCovStat] base LINES_OF_BUSINESS (211 of 278 in status)
19:07:12.957 [ CCCovStat] base TRADE_DATE (3623 of 4184 in status)
19:07:12.973 [ CCCovStat] base SETTLEMENT_DATE (3623 of 4208 in status)
19:07:13.051 [ CCCovStat] base __XML_GENERATED_7 (2090 of 2137 in status)
19:07:13.114 [ CCCovStat] base EXPOSURE (10846 of 10871 in status)
19:07:13.161 [ CCCovStat] base SECURITY (20353 of 20526 in status)
19:07:13.270 [ CCCovStat] base COMPLIANCE_RATING (23 of 32 in status)
19:07:13.270 [ CCCovStat] base OWNERSHIP (82599 of 83007 in status)
19:07:13.504 [ CCCovStat] build leafstat
19:07:21.629 [ CCCovStat] finished stlen=443296
19:07:22.176 [ CCCovStat] start dim=POSITIONS2_COMPOSITE
19:07:22.207 [ CCCovStat] base __XML_GENERATED_12 (10 of 10 in status)
19:07:22.223 [ CCCovStat] base STATUS (34 of 34 in status)
19:07:22.239 [ CCCovStat] base MATURITY_DATE (3675 of 3675 in status)
19:07:22.254 [ CCCovStat] base LINES_OF_BUSINESS (278 of 278 in status)
19:07:22.270 [ CCCovStat] base TRADE_DATE (4184 of 4184 in status)
19:07:22.286 [ CCCovStat] base SETTLEMENT_DATE (4208 of 4208 in status)
19:07:22.301 [ CCCovStat] base __XML_GENERATED_7 (2137 of 2137 in status)
19:07:22.317 [ CCCovStat] base EXPOSURE (10871 of 10871 in status)
19:07:22.332 [ CCCovStat] base SECURITY (20526 of 20526 in status)
19:07:22.332 [ CCCovStat] base COMPLIANCE_RATING (32 of 32 in status)
19:07:22.332 [ CCCovStat] base OWNERSHIP (83007 of 83007 in status)
19:07:22.348 [ CCCovStat] build allstat
19:08:35.754 [ Cmp2Bases] Created ALL status for POSITIONS2_COMPOSITE with 55141494 values
19:08:35.770 [ CCCovStat] finished stlen=55141494
19:08:35.770 [ Aggregate] aggregate POSITIONS2_PRT_TOPVAR using OBJ993979675 over POSITIONS2_COMPOSITE op SUM
19:08:35.770 [ Dimen] dim[0]=POSITIONS2_CO stval=-1 stlen=55141494 smax=55141493 composite
19:08:35.770 [ Dimen] dim[0]=__XML_GENERAT stval=9 stlen=10 smax=9
19:08:35.770 [ Dimen] dim[1]=STATUS stval=33 stlen=34 smax=33
19:08:35.770 [ Dimen] dim[2]=MATURITY_DATE stval=3674 stlen=3675 smax=3674
19:08:35.770 [ Dimen] dim[3]=LINES_OF_BUSI stval=277 stlen=278 smax=277
19:08:35.770 [ Dimen] dim[4]=TRADE_DATE stval=4183 stlen=4184 smax=4183
19:08:35.770 [ Dimen] dim[5]=SETTLEMENT_DA stval=4207 stlen=4208 smax=4207
19:08:35.770 [ Dimen] dim[6]=__XML_GENERAT stval=2136 stlen=2137 smax=2136
19:08:35.770 [ Dimen] dim[7]=EXPOSURE stval=10870 stlen=10871 smax=10870
19:08:35.770 [ Dimen] dim[8]=SECURITY stval=20525 stlen=20526 smax=20525
19:08:35.770 [ Dimen] dim[9]=COMPLIANCE_RA stval=31 stlen=32 smax=31
19:08:35.770 [ Dimen] dim[10]=OWNERSHIP stval=83006 stlen=83007 smax=83006
19:08:35.770 [ Dimen] dim[1]=AS_OF_DATE stval=2 stlen=39 smax=38
19:08:35.801 [ Dimen] dim[2]=POSITIONS2_PR stval=0 stlen=1 smax=0
19:08:35.801 [ Aggregate] agglen=39
19:08:35.801 [ Aggregate] fastdim=0 maxchsize=39
19:08:35.801 [ Aggregate] fastdim=1 maxchsize=1
19:08:35.801 [ AgWlist] Generating worklists
19:08:35.801 [ AgWlist] start=0 end=80 len=80
19:09:52.832 [ AgWlist] done
19:09:52.832 [ Aggregate] wlstats worklist=1 parents=5704
19:09:57.770 [ Aggregate] wlstats worklist=2 parents=5060
19:10:02.551 [ Aggregate] wlstats worklist=3 parents=86862
19:10:38.301 [ Aggregate] wlstats worklist=4 parents=51235
19:11:17.176 [ Aggregate] wlstats worklist=5 parents=107878
19:12:58.661 [ Aggregate] wlstats worklist=6 parents=36236
19:14:03.895 [ Aggregate] wlstats worklist=7 parents=15633
19:14:25.176 [ Aggregate] wlstats worklist=8 parents=2919
19:14:26.911 [ Aggregate] wlstats worklist=9 parents=340
19:14:27.004 [ Aggregate] wlstats worklist=10 parents=23
19:14:27.020 [ Aggregate] wlstats worklist=11 parents=136
19:14:27.207 [ Aggregate] wlstats worklist=12 parents=4
19:14:27.207 [ Aggregate] wlstats worklist=13 parents=505
19:14:27.536 [ Aggregate] wlstats worklist=14 parents=2
19:14:27.536 [ Aggregate] wlstats worklist=15 parents=155
19:14:27.770 [ Aggregate] wlstats worklist=16 parents=473
19:14:28.082 [ Aggregate] wlstats worklist=17 parents=20
19:14:28.082 [ Aggregate] wlstats worklist=18 parents=5
19:14:28.098 [ Aggregate] wlstats worklist=19 parents=1014
19:14:28.707 [ Aggregate] wlstats worklist=20 parents=90
19:14:28.723 [ Aggregate] wlstats worklist=21 parents=19
19:14:28.723 [ Aggregate] wlstats worklist=22 parents=389
19:14:29.036 [ Aggregate] wlstats worklist=23 parents=12
19:14:29.067 [ Aggregate] wlstats worklist=24 parents=92
19:14:29.223 [ Aggregate] wlstats worklist=25 parents=4
19:14:29.223 [ Aggregate] wlstats worklist=26 parents=108719
19:16:37.332 [ Aggregate] wlstats worklist=27 parents=11592
19:16:58.020 [ Aggregate] wlstats worklist=28 parents=2753
19:17:08.411 [ Aggregate] wlstats worklist=29 parents=16
19:17:08.739 [ Aggregate] wlstats worklist=30 parents=16
19:17:08.817 [ Aggregate] wlstats worklist=31 parents=41
19:17:09.082 [ Aggregate] wlstats worklist=32 parents=8486
19:17:16.411 [ Aggregate] wlstats worklist=33 parents=396
19:17:16.629 [ Aggregate] wlstats worklist=34 parents=68
19:17:16.645 [ Aggregate] wlstats worklist=35 parents=6
19:17:16.645 [ Aggregate] wlstats worklist=36 parents=258
19:17:16.723 [ Aggregate] wlstats worklist=37 parents=12
19:17:16.739 [ Aggregate] wlstats worklist=38 parents=3
19:17:16.739 [ Aggregate] wlstats worklist=39 parents=4016805
19:47:21.989 [ Aggregate] wlstats worklist=40 parents=2322919
20:14:22.661 [ Aggregate] wlstats worklist=41 parents=1008803
20:36:36.879 [ Aggregate] wlstats worklist=42 parents=333714
20:49:24.801 [ Aggregate] wlstats worklist=43 parents=90024
20:55:40.114 [ Aggregate] wlstats worklist=44 parents=20086
20:57:53.911 [ Aggregate] wlstats worklist=45 parents=4605
20:58:24.848 [ Aggregate] wlstats worklist=46 parents=1194
20:58:36.145 [ Aggregate] wlstats worklist=47 parents=281
20:58:36.973 [ Aggregate] wlstats worklist=48 parents=64
20:58:37.239 [ Aggregate] wlstats worklist=49 parents=4
20:58:37.286 [ Aggregate] wlstats worklist=50 parents=5038548
21:43:04.364 [ Aggregate] wlstats worklist=51 parents=3195811
22:24:27.582 [ Aggregate] wlstats worklist=52 parents=1429880
22:49:59.754 [ Aggregate] wlstats worklist=53 parents=517557
23:07:10.395 [ Aggregate] wlstats worklist=54 parents=169518
23:16:41.254 [ Aggregate] wlstats worklist=55 parents=48159
23:21:19.661 [ Aggregate] wlstats worklist=56 parents=11725
23:23:04.442 [ Aggregate] wlstats worklist=57 parents=2434
23:23:31.364 [ Aggregate] wlstats worklist=58 parents=453
23:23:36.551 [ Aggregate] wlstats worklist=59 parents=55
23:23:37.129 [ Aggregate] wlstats worklist=60 parents=3
23:23:37.207 [ Aggregate] wlstats worklist=61 parents=10862772
00:32:28.036 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=session
00:34:45.528 [ AgClean] finish clean=session
08:50:50.068 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675_PRT_PRTAGGMAP clean=memory
08:50:50.083 [ AgClean] finish clean=memory
08:50:50.083 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675_PRT_RUNAGGMAP clean=memory
08:50:50.099 [ AgClean] finish clean=memory
08:50:50.099 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675_PRT_TOPAGGMAP clean=memory
08:50:50.115 [ AgClean] finish clean=memory
08:50:50.130 [ AgClean] start aggmap=TEST_RISKMGMT!OBJ993979675 clean=memory
08:50:50.130 [ AgClean] finish clean=memory
SESSION DIED HERE WITH OUT OF SPACE ON SERVER
08:50:52.615 [ SessCache] Destroying sesscache on TEST_RISKMGMT!AS_OF_DATE_SHORT_DESCRIPTION
08:50:52.615 [ SessCache] Destroying sesscache on TEST_RISKMGMT!AS_OF_DATE_LONG_DESCRIPTION
08:50:52.646 [ SessCache] Destroying sesscache on TEST_RISKMGMT!AS_OF_DATE_GID
08:50:52.646 [ SessCache] Destroying sesscache on TEST_RISKMGMT!AS_OF_DATE_FAMILYREL
08:50:52.646 [ SessCache] Destroying sesscache on TEST_RISKMGMT!AS_OF_DATE_PARENTREL

Chris, sorry to keep pestering you with this, but I have yet a few more questions. Any help is appreciated! (Are there any published resources with this info in them?)
Thanks,
Scott
a) First, please see the POUTFILEUNIT text at the bottom. This came from a compressed composite aggregation on version 10.1.0.3. In it, you can see that the log was capturing both the total number of composite tuples and how many were "singles" (apparently now called "coverage classes").
Something changed between 10.1.0.3 and 10.1.0.4, because I no longer see the singles data in the POUTFILEUNIT. Just for kicks, I'm going to try aggregating the cube in 10.1.0.4 from the command line instead of through the front end, but I suspect that will not make a difference. Any ideas on why the "numSingles" line has gone away?
b) Last (for now at least!!!), I tried to rebuild the cube today by taking away 2 of the dimensions that were least important. One of these dimensions had 3 total levels (including the leaf level), and one had 2 levels.
I expected my total number of CCs to decrease by a factor of 6, based on these CCs not having to be stored 3 times for the one dimension and 2 times for the other. However, the number of CCs barely went down at all (from 54 million down to just 48 million or so). Obviously my thinking is wrong here, but I just can't quite figure out why.
Thanks again!
Scott
partial POUTFILEUNIT log from version 10.1.0.3:
23:51:52.213 [ chiers] start
23:51:52.228 [ Cmp2Bases] start dim=AW3_RISKMGMT_COMPOSITE, order=0, options specified=0000, used=0020
23:51:52.228 [ Cmp2Bases] entering skip/scan
23:51:52.228 [ Cmp2Bases] base AW3_DIM_COMPLIANCE (23 of 32 in status)
23:51:52.228 [ Cmp2Bases] base AW3_DIM_INSTRUMENTS (171 of 190 in status)
23:51:52.228 [ Cmp2Bases] base AW3_DIM_LOB (162 of 229 in status)
23:51:52.228 [ Cmp2Bases] base AW3_DIM_OWNERSHIP (69771 of 70162 in status)
23:51:53.666 [ Cmp2Bases] leaving skip/scan
23:51:53.666 [ Cmp2Bases] finish stlen=190676
23:52:00.369 [ ctuphiers] start
23:52:03.744 [ ctuphiers] [0] 220699
23:52:06.744 [ ctuphiers] [1] 335050
23:52:10.119 [ ctuphiers] [2] 525726
23:55:53.147 [ ctuphiers] [177] 4650739
23:55:53.147 [ ctuphiers] [178] 4650740
23:55:53.147 [ ctuphiers] joinhier
00:03:56.813 [ ctuphiers] joinhier - numSingles 4150231 -
Integrating Essbase cubes with Oracle Tables in BI Server
I'm trying to link together data from an aggregated Essbase cube with some static table data from our Oracle system. Both the Essbase and Oracle data exist correctly in their own right at the physical, business and presentation levels. The aggregated data is client sales; the static data is client details.
Within the OBIEE Administration tool I've tried to drag the physical Oracle table for clients onto the clients Essbase section in the business area, and it seems to work OK until you try to report on them together, and I get the following error:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 42043] An external aggregate is found in an outer query block. (HY000)
Can anyone advise on what I'm doing wrong?
Thanks

Thanks, Christian. I found some very useful articles (one or two by you) - I'll have to look harder on the net before posting.
One thing I found out with respect to vertical federation, which others may benefit from, is that it was much easier to start from the most detailed level and then attach the less detailed source, rather than start with the less detailed source and add the more detailed one on. -
Hi guys,
There are many things that I couldn't understand about creating a cube
for OLAP using Enterprise Manager (or the CWM2 package).
I need more details about:
1- aggregation in a cube
2- solve order
3- the dimension alias for each hierarchy - what is the difference between it and the dimension?
In the aggregation step I defined how each measure will be aggregated over each dimension,
and I don't understand this.
Can anyone explain these things in more detail?
Thanks a lot.

The issue seems to be with the ODBC and it got resolved... cheers
-
Hi,
I am using AWM 10g R2.
I am having a problem defining my own formula for aggregating data at higher levels of a hierarchy in a cube. Does anyone know how to define a custom aggregation other than the ones defined in AWM (like sum, average, maximum, minimum, etc.)? For example, I need a completely different way of aggregating a measure at the Quarter level than at the Month level.
If anyone knows how to define a custom aggregation for a cube, it would help me a lot.
Thanks
Subash

No, there is no workaround I am aware of for this.
I would recommend you just copy the struts tld, map it to a different URI and then add your custom extension stuff to that. -
Excessive time when maintaining cube
Hi there,
I have a star schema with:
a) 2 dimensions:
year, with hierarchy: CALENDAR_YEAR ------------> all_years
location, with hierarchy: COUNTRY -------------> CONTINENT -----------> ALL_COUNTRIES
b) 6 partitioned cubes (uncompressed)
Each cube contains measures with different data types. In particular, each measure may have one of the following 3 data types:
varchar2 ------------> with aggregation maximum
int or dec ------------> with aggregation SUM (cube's aggregation)
date ------------> with aggregation Non additive
When I execute Maintain Cube (for one of the cubes I have), I leave my PC for 2 hours to load the data, and after that it doesn't end; it just continues to load data. So data loading never finishes. I have been at my PC for a week trying to solve the problem but nothing has changed. What could the problem be?
Notes:
(A)
I checked the NLS parameters and the data's format, and they are both compatible. See for yourself:
SQL> select value from V$NLS_Parameters;
VALUE
AMERICAN
AMERICA
$
AMERICA
GREGORIAN
DD-MON-RR
AMERICAN
WE8MSWIN1252
BINARY
HH.MI.SSXFF AM
VALUE
DD-MON-RR HH.MI.SSXFF AM
HH.MI.SSXFF AM TZR
DD-MON-RR HH.MI.SSXFF AM TZR
$
AL16UTF16
BINARY
BYTE
FALSE
19 rows selected.
(B)
Mappings are also OK; I checked them. For each hierarchy, I gave each attribute values that prevent data conflicts. I think the `all_years` and `all_countries` levels are also OK, as they include everything.
(C)
My computer is an Intel Pentium 4 with 2x 512 MB RAM. I am running Oracle 11g on Windows XP Professional Service Pack 2.
Thanks in advance

I need uncompressed cubes because, as I said, I have non-numeric data types in my data: dates, numbers and VARCHAR2.
Anyway.
I don't understand what you mean by dimension members, but I suppose you are referring to the levels and the hierarchy of each dimension. I have already included that in my previous post. Check it! If you mean something else, let me know!
As for the amount of data:
YEAR:2 RECORDS (1990 and 1991)
CREATE TABLE YEARS
(CALENDAR_YEAR_KEY NUMBER NOT NULL,
CALENDAR_YEAR_NAME varchar2(40),
CALENDAR_YEAR_TIME_SPAN NUMBER,
CALENDAR_YEAR_END_DATE DATE,
PRIMARY KEY(CALENDAR_YEAR_KEY));
LOCATION : 256 RECORDS (It also contains a CONTINENT_ID whose value range from 350 to 362 REPRESENTING all oceans, continents and the world. COUNTRY_ID ranges from 1 to 253)
CREATE TABLE LOCATIONS
(COUNTRY_KEY varchar2(44) NOT NULL,
COUNTRY_NAME varchar2(54),
CONTINENT_KEY varchar2(20) NOT NULL,
CONTINENT_NAME varchar2(30),
COUNTRY_ID NUMBER,
CONTINENT_ID NUMBER NOT NULL,
PRIMARY KEY(COUNTRY_ID));
MEASURES : 498 RECORDS (249 records for 1990 and 249 records for 1991)
CREATE TABLE MEASURES
(GEOGRAPHY_total_area DEC(11,1),
GEOGRAPHY_local_area DEC(11,1),
GEOGRAPHY_arable_land DEC(5,4),
GEOGRAPHY_permanent_crops DEC(5,4),
. (various other measures)
MEASURES_YEAR NUMBER,
MEASURES_COUNTRY NUMBER,
PRIMARY KEY(MEASURES_YEAR,MEASURES_COUNTRY),
FOREIGN KEY (MEASURES_YEAR) REFERENCES YEARS(CALENDAR_YEAR_KEY),
FOREIGN KEY (MEASURES_COUNTRY) REFERENCES LOCATIONS(COUNTRY_ID));
TOTALLY : 268 measures
But to make data loading easier, I created 6 cubes in Analytic Workspace Manager, each one containing:
GEOGRAPHY : 51 attributes
PEOPLE : 24 attributes
ECONOMY : 40 attributes
GOVERNMENT : 113 attributes
COMMUNICATION : 28 attributes
DEFENSE FORCES : 11 attributes
(If I made any counting errors, forgive me. I only wanted to show you that there are many measures.)
So, Is there anything I can do to solve the problem? -
Query Performance with Exception aggregation
Hello,
My query key figures have exception aggregation at the order-line level, as per the requirement.
Currently the cube holds 5M records; when we run the query it runs for more than 30 minutes.
We can't remove the exception aggregation.
The cube is already modeled correctly, and we don't want to use the cache.
Can anybody please advise whether there is any better approach to improve query performance with exception aggregation?
Thanks

Hi,
We have the same problem and raised an OSS ticket. They replied with note 1257455, which offers all the ways of improving performance in such cases. I guess there's nothing else to do but precalculate this exception-aggregated formula in the data model via transformations or ABAP.
By the way, the cache cannot help you in this case, since exception aggregation is calculated after cache retrieval.
Hope this helps,
Sunil -
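As a hedged illustration of that precalculation idea (the table and column names here are hypothetical, not from the original post), an exception aggregation such as a distinct count of order lines can be materialized at load time in the data model, leaving only a plain SUM for the query engine:

```sql
-- Sketch: materialize the exception aggregation (here, a distinct
-- count of order lines per material) during the load, so the query
-- no longer computes it at runtime. All names are illustrative.
SELECT material,
       COUNT(DISTINCT order_line) AS order_line_count
FROM   fact_orders
GROUP  BY material;
```

Storing the result of this query in the data target during the transformation is the kind of "precalculate via transformations or ABAP" approach the note describes.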
Re aggregation when a member is deleted from dimension
Hi All,
I am aware that deleting a member from a non-partitioned dimension of a cube will trigger a re-aggregation.
The behaviour I am seeing is that this re-aggregation takes almost as long as a full initial solve. For example, a full initial solve takes around 30 minutes, whereas re-aggregating the cube after a dimension member is removed takes roughly 28 minutes. Re-aggregating the cube again just after that is then quick, around 2 minutes.
The member that was removed from the dimension does not have any data associated with it in the cube, so I was expecting the re-aggregation to be quick, as it does not affect any of the existing aggregated data.
Could someone explain this behaviour?
The cube is compressed and is partitioned by Time Dimension. I am on Oracle 11.2.0.2
Thanks

There certainly can be a performance difference between 'Fast Solve' and 'Complete'. When no dimension members are changed, the fast solve will usually be quicker because it engages incremental aggregation. But your situation is different because you are adding new members. In theory this should only impact the latest partition, but it does not, due to some known bugs. Here are two that I believe are relevant to your case. Neither is publicly visible at this point.
BUG 12536825 - CHANGED RELATION WON'T RETURN THE RIGHT VALUE
Bug 11934210 - CC USES FULL BUILD INSTEAD OF INCREMENTAL WHEN RELATION CHANGED
The RELATION in both cases refers to the parent-child relationship in the dimension. When you add a new member to the dimension, this relation is changed. The effect of bug 12536825 is that partitions that are not really involved (because there is no data for the new member) are re-aggregated anyway. The effect of bug 11934210 is that these partitions can get fully re-aggregated even though no data has changed in that partition.
These bugs are not (as of writing) fixed in any public patch, but you may be able to get a one-off fix if this is seriously impacting your performance. I would open an SR describing the problem. You can refer to my name, the bugs above, and this post so that it will be properly forwarded. -
Rejcted records cube build log.
I am on an 11.1.0.7 DB with 11.1.0.7B AWM, loading the dimensions and cube. The olapsys.xml_load_log table is not populated with the build log. I can query cube_build_log, but xml_load_log gives me better information (rejected/processed records). I know there is a bug for this issue on 11.1.
Can I somehow get the rejected-records information other than by doing a MINUS between fact and dimensions? Any workaround for this?
Thanks,

With 11gR2 we have the following logs, which will give all the information, including rejected records, without having to do a MINUS from the fact table.
Maintenance Logs
The first time you load data into a cube or dimension using Analytic Workspace Manager, it creates several logs. These logs are stored in tables in the same schema as the analytic workspace:
• Cube Build Log: Contains information about what happened during a build. Use this log to determine whether the build produced the results you were expecting, and if not, why not. The log is continually updated whenever a cube or dimension is refreshed, whether by Analytic Workspace Manager, the database materialized view refresh subsystem, or a PL/SQL procedure. You can query the log at any time to evaluate the progress of the build and to estimate the time to completion. The default table name is CUBE_BUILD_LOG.
• Cube Dimension Compile Log: Contains errors that occur during the validation of the dimension hierarchies when OLAP is aggregating a cube. The default table name is CUBE_DIMENSION_COMPILE.
• Cube Operations Log: Contains messages and debugging information for all OLAP engine events. The default table name is CUBE_OPERATIONS_LOG.
• Cube Rejected Records Log: Identifies any records that were rejected because they did not meet the expected format. The default table name is CUBE_REJECTED_RECORDS.
You can also run the $ORACLE_HOME/olap/admin/utlolaplog.sql script to create the build log with some useful views. -
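A minimal sketch of inspecting those logs after a build, assuming the default table names listed above exist in the analytic workspace schema (the ORDER BY column is an assumption about the log layout, so adjust to your table definitions):

```sql
-- Sketch: review the latest build steps and any rejected records,
-- assuming the default log table names in the AW schema.
SELECT *
FROM   cube_build_log
ORDER  BY time DESC;

SELECT *
FROM   cube_rejected_records;
```

This avoids the MINUS between fact and dimension tables, since the rejected rows are captured at load time.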
Hi, I'm having a difficult time finding the right balance between aggregation, cube size and retrieval time. I have an ASO cube that has 3 years of data in it, totalling 160 MB. It has a total of 8 dimensions, 6 of which contain multiple rollups. So the data is obviously being aggregated at many different points. The problem is that if I set the aggregation size higher, this will help make the retrieval times faster, but at the expense of making the cube much larger. For example, an aggregation of 400 MB will take 14.79 seconds, whereas an aggregation of 1500 MB will take 7.66 seconds. My goal is to get retrieval times down to 1-3 seconds, where the user would notice almost no delay at all.

Does anyone have any suggestions on how to find a good balance between faster retrievals and the different aggregation sizes? FYI, I've also tried changing whether members are tagged as Dynamic or Multiple Hierarchies Enabled, but I didn't find any noticeable difference. Is there anything else that I should consider to make the retrievals faster?

Thanks in advance.
Given that disk space is relatively inexpensive, this is often something you shouldn't worry about as much as query performance, especially since even a highly aggregated ASO cube is only a fraction of the footprint of a BSO cube. You should maximize as much aggregation as you can to get the best retrieval time possible and not worry about the disk space too much. Unfortunately, in current releases of Essbase you cannot customize aggregations, but you can use query tracking to improve aggregations by focusing on the areas most often queried. So I would suggest enabling that, letting your users get in there, and then aggregating again based on the results of the tracking. Of course, remember that by its very nature an ASO cube is very dynamic and some queries are just going to take a little longer.

Something else to consider is how complex the query is and whether the amount of time it takes to retrieve is appropriate. Are you pulling back thousands and thousands of members? If you are, then you have to expect a certain amount of time just to bring over the metadata. Try turning on navigate without data. If your queries still take a long time to come back even though you are not pulling data, then it just means you have a large result set coming back and that's just the way it is. Also look at whether your result set returns members with member formulas. MDX formulas can take a little while to run depending on how complex they are and how well they are written.
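The query-tracking suggestion above can be sketched in MaxL (the application and database names sample.asosamp are placeholders for your own cube):

```
alter database sample.asosamp enable query_tracking;

/* ...let users run their typical queries for a while... */

execute aggregate process on database sample.asosamp
    based on query_data;
```

The second statement rebuilds the aggregate views weighted by the tracked query patterns, so frequently hit intersections get materialized first.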
-
Permanently change default error configuration in Analysis Services 2005
Hi,
Currently, I am working on a BPC 5.1 application. The data for this application is loaded (inserted via SQL statement) right into the FACT table, and then a full process is run for that cube via an SSIS package using the Analysis Services Processing Task. Often records loaded this way have a dimension member that has not been added to the Account dimension yet. After loading, these records are considered 'orphan records' until the accounts are added to the Account dimension.
This loading process is used because of the volume of records loaded(over 2 million at a time) and the timing of the company's business process. They will receive data sometimes weeks before the account dimension is updated in BPC with the new dimension members.
If I try and process the application from the BPC Administration area with these orphan records in the FACT table, the processing stops and an error displays. Then when I process the cube from Analysis services, an error is displayed telling me that orphan data was found.
A temporary work-around is to go into the cube properties in Analysis Services 2005, click on Error Configuration, uncheck 'Use default error configuration' and select 'Ignore errors'. Then you can process the application from BPC's Administration page successfully. But, the problem is that after processing the application successfully, the Analysis Services Error Configuration automatically switches back from 'Ignore errors' to 'Use default error configuration'.
Does anyone have any suggestions on how to permanently keep the 'Ignore errors' configuration selected so it does not automatically switch back to 'Use default error configuration'? Prior to BPC 5.0 this was not occurring.
Also, does anyone know why this was changed in BPC 5.0/5.1?
Thanks,
Glenn

Hi Glenn,
I understand the problem, but I would say it comes from a bad migration of the appset from 4.2 to 5.0.
Anyway, they are using a DTS package to import data into the fact table. That means they have to add another step to that package to verify the records before inserting them into the fact table. The verification can be done using the same mechanism as our standard import: just edit the customer package and add similar steps.
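The verification step suggested here — reject any incoming fact row whose account is not yet in the Account dimension — can be sketched as follows. This is a minimal illustration, not BPC's actual import logic, and all table and field names are hypothetical:

```python
# Hypothetical incoming fact rows and the current Account dimension member set.
fact_rows = [
    {"account": "CASH",    "amount": 100.0},
    {"account": "NEWACCT", "amount": 250.0},  # not yet in the dimension -> orphan
    {"account": "SALES",   "amount": 75.0},
]
account_dimension = {"CASH", "SALES", "REVENUE"}

def split_orphans(rows, dimension_members):
    """Separate loadable rows from orphans whose account is missing from the dimension."""
    loadable = [r for r in rows if r["account"] in dimension_members]
    orphans = [r for r in rows if r["account"] not in dimension_members]
    return loadable, orphans

loadable, orphans = split_orphans(fact_rows, account_dimension)
# Only `loadable` would be inserted into the FACT table; `orphans` are held back
# (or logged) until the missing accounts are added to the dimension.
```

In a real package this check would typically be an anti-join in SQL against the dimension table before the insert, but the principle is the same.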
Be aware that you need somebody with experience developing DTS packages for BPC to avoid other problems.
One of the big benefits of 5.X compared with 4.2 is that we are able to use the optimization schema and aggregations for cubes.
With those orphan records it is not possible to use the optimization schema, and you are not able to create good aggregations in your cube.
So my suggestion is to give all this information to the customer and try to modify the package, instead of enabling an option that can cause many other issues.
Sorin -
Dear All,
I am using BEx Query Designer to build a query, and I view the report in the browser. I have two key figure fields, "price" and "forecast quantity", and I need to arrive at "forecast value" = price * forecast quantity using a formula in Query Designer. For each individual material this comes out correct in the material-wise report. But if I view the report customer-wise, then for a customer who has bought 10 materials the prices are added up first, the forecast quantities are added up, and the two sums are multiplied to give the forecast value, which is wrong.
Illustration :
For Customer 1 :
Material | Quantity | Price | Value (Quantity * Price)
Material1 | 2 | 10 | 20
Material2 | 3 | 10 | 30
Material3 | 5 | 10 | 50
Material4 | 7 | 10 | 70
Total | 17 | 40 | 170 = correct value
But if I see report customer wise I get the value as 17 * 40 = 680 which is wrong.
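The difference between the two calculation orders can be shown with a short sketch of the arithmetic (using the illustration's numbers):

```python
# Quantity/price pairs for the four materials bought by Customer 1.
rows = [(2, 10), (3, 10), (5, 10), (7, 10)]

# "After aggregation": sum each column first, then multiply the totals (wrong here).
after_agg = sum(q for q, p in rows) * sum(p for q, p in rows)   # 17 * 40 = 680

# "Before aggregation": multiply per row, then sum the products (the wanted result).
before_agg = sum(q * p for q, p in rows)                        # 20+30+50+70 = 170
```

Multiplication does not distribute over separate column sums, which is why the result row must be computed from the row-level products.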
Is there any way, when materials are aggregated, to have the formula column first multiply the individual rows and then add the rows to give me the final value?
Regards,
Ratish
[email protected]

Hello,
Right-click on the calculated key figure, select Properties -> Enhanced, and change the time of calculation to "Before aggregation"; the result is then calculated correctly.
But if you use before aggregation, the cube's aggregates are not used, so this will have a negative effect on performance. This property is applicable to query-level key figures.
hope it is clear
assign points if useful -
Data loaded all level in a hierarchy and need to aggregate
I am relatively new to Essbase and I am having problems with the aggregation of a cube.
Cube outline
Compute_Date (dense)
20101010 ~
20101011 ~
20101012 ~
Scenario (dense)
S1 ~
S2 ~
S3 ~
S4 ~
Portfolio (sparse)
F1 +
F11 +
F111 +
F112 +
F113 +
F12 +
F121 +
F122 +
F13 +
F131 +
F132 +
Instrument (sparse)
I1 +
I2 +
I3 +
I4 +
I5 +
Accounts (dense)
AGGPNL ~
PNL ~
Portfolio is a ragged hierarchy
Scenario is a flat hierarchy
Instrument is a flat hierarchy
PNL values are loaded for instruments at different points in the portfolio hierarchy.
I then want to aggregate the PNL values up the Portfolio hierarchy into AGGPNL, which is not working; the loaded PNL values should remain unchanged.
I have tried defining the following formula on AGGPNL, but it is not working.
IF (@ISLEV("Portfolio", 0))
    "PNL";
ELSE
    "PNL" + @SUMRANGE("PNL", @RELATIVE(@CURRMBR("Portfolio"), @CURGEN("Portfolio") + 1));
ENDIF;
using a calc script
AGG (instrument);
AGGPNL;
Having searched for a solution, I have seen that Essbase does implicit sharing when a parent has a single child. I can disable this, but I do not think it is the sole cause of my issue: the children of F11 are aggregated, but the value already loaded at F11 is overwritten and ignored in the aggregation.

^^^That's the way Essbase works.
How about something like this:
F1 +
===F11 +
======F111 +
======F112 +
======F113 +
======F11A +
===F12 +
======F121 +
======F122 +
===F13 +
======F131 +
======F132 +
Value it like this:
F111 = 1
F112 = 2
F113 = 3
F11A = 4
Then F11 = 1 + 2 + 3 + 4 = 10.
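The shadow-member technique above can be sketched with a tiny rollup model — a minimal illustration of the outline change, not Essbase itself; the member names follow the example:

```python
# Hypothetical ragged Portfolio subtree: the value formerly loaded at F11 is
# moved to shadow leaf F11A, so the parent can be computed purely from children.
children = {
    "F11": ["F111", "F112", "F113", "F11A"],
}
loaded = {"F111": 1, "F112": 2, "F113": 3, "F11A": 4}  # F11's own value lives in F11A

def rollup(member):
    """Sum loaded leaf values up the hierarchy; leaves return their loaded value."""
    kids = children.get(member)
    if not kids:
        return loaded.get(member, 0)
    return sum(rollup(k) for k in kids)

print(rollup("F11"))  # 10
```

Nothing is overwritten because no upper-level member carries loaded data; every input lives at level 0 and the parent is always derived.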
Loading at upper levels is something I try to avoid whenever possible. The technique used above is incredibly common, practically universal, as it allows the group-level value to be loaded alongside the detail and still aggregate up correctly. Yes, you can load to upper-level members, but you have hit upon why it isn't done all that often.
NB -- What you are doing is only possible in BSO cubes. ASO cube data must be at level 0.
Regards,
Cameron Lackpour