Aggregation issues II
all,
I have an ADS & PRO AMT logical column in a fact table with two sources, Derived and Aggregate.
The derived source is calculated differently from the aggregate one. For example:
derived = promoAmt (filter on PROMO TYPE from another table) where services = 'ADS & PRO' - fairly straightforward.
calculated = ToT_non_Margin Amt - (some other amt) (filter on PROMO TYPE from another table) where services = 'ADS & PRO' and deal type = 'Non Margin'.
So if I set the day (derived) grain formulas in the fact table, how do I tell OBIEE to follow a different rule for the aggregate calculation?
Is there a way I can set up both on the same column?
Dear UOOLK,
This is just how aggregation works in SQL.
You can set a level for one logical column, A, and leave the other, B, alone; in that case B acts as a dimension and A as a measure.
As long as you don't use both columns in the same analysis, it won't matter.
But if you do use them together, column A will be aggregated again, grouped by column B, like:
select SUM(A), B
from tab_name
group by B
Mark if it helps,
fiaz
Similar Messages
-
Hi all,
I am facing the following aggregation issue at reporting level (BW system 3.5).
Cube1
Material, Company code, Cost center, Material, Month, Volume KF
Cube2
Material, Company code, Cost center, Material, Month, Price KF
Multi provider
Material, Company code, Cost center, Material, Month, Volume KF, Price KF
Report
- Global Calculated key figure 'Value' is based on basic KF's Volume KF, Price KF
- Time of aggregation is set to "Before aggregation" in the properties of the Calculated Key Figure.
- Only one characteristic, 'Company code', is used in the report.
When I execute this report, the Calculated KF does not work (no values). If I change the time of aggregation to "After aggregation" in the properties of the Calculated Key Figure, it works but gives wrong values: Price gets aggregated (added up) and is multiplied by Volume, which is wrong.
Can you please give me an ideal solution to resolve this?
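The arithmetic behind the symptom can be sketched outside BW: with "after aggregation", the query effectively computes sum(Volume) × sum(Price), while the intended Value is the row-by-row sum of Volume × Price. A small sketch with invented numbers (not from the post):

```python
# Illustrative (material, volume, price) rows after joining the two cubes
# on material -- numbers are invented, not from the post.
rows = [("M1", 10, 2.0), ("M2", 5, 4.0), ("M3", 20, 1.0)]

# "Before aggregation": multiply per row, then sum -- the intended Value KF.
value_before = sum(vol * price for _, vol, price in rows)

# "After aggregation": sum each KF first, then multiply -- what the report did.
# The prices get added up (2 + 4 + 1) and multiplied by the total volume.
value_after = sum(vol for _, vol, _ in rows) * sum(price for *_, price in rows)

print(value_before)  # 60.0
print(value_after)   # 245.0
```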
Thanks,
Harry
Hi all,
Can I assume there is no solution for this issue?
Thanks,
Harry -
Aggregation issue for report with bw structure
Hi,
I am facing an aggregation issue while grouping reports in WebI.
We have a BW query with 16 values, which we bring into BO as a structure. Of the 16, 8 are percentage values (the aggregation type should be average).
If we bring the data at site level, the data comes through properly. But if we use the same query and try to sum/group (at region level), the percentages get added.
Since it's a dashboard report with lots of filters, we cannot create a separate query for each level (site, region, zone).
How can we resolve this? Please give me suggestions.
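The underlying rule here: a percentage is a ratio, so rolling it up from site to region has to re-derive it from the numerator and denominator (or at least average it), never add it. A sketch with invented site figures:

```python
# Invented site-level figures for one region: (site, numerator, denominator).
sites = [("S1", 80, 100), ("S2", 45, 50), ("S3", 30, 60)]

pct = {s: 100.0 * n / d for s, n, d in sites}   # per-site percentages: 80, 90, 50

wrong = sum(pct.values())                  # 220 -- "the percentage gets added"
naive_avg = sum(pct.values()) / len(pct)   # ~73.33 -- better, but unweighted
correct = 100.0 * sum(n for _, n, _ in sites) / sum(d for *_, d in sites)
# (80+45+30) / (100+50+60) = 155/210 -> ~73.81: ratio recomputed at region level
```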
Regards
Baby
Hi,
Since we were using a structure, it was not possible to produce the required result in BO.
We changed the structure to key figures and brought all of them into BO. All the column formulas are now on the BO side.
Now it is working fine.
Regards
Baby
Edited by: Baby on May 10, 2010 11:39 AM -
I have an Enterprise T5220 server running Solaris 10 that I am using as a backup server. On this server I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3), and until recently I was getting up to, and sometimes over, 1.5 Gb/s as desired. However, something has happened recently such that I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server, no changes were made to the switch it's connected to (Nortel Passport 8600 series), and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail (LACP enabled/disabled, different policies, etc.). I've also tried using different ports on the server and switch to rule out any faulty-port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side, but I can get more details if needed.
In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
the two ports.
Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
Regards,
Jim
Output of several commands on the T5220:
uname -a:
SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
ifconfig -a (IP and broadcast hidden for security):
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
ether 0:14:4f:ec:bc:1e
dladm show-dev:
e1000g0 link: unknown speed: 0 Mbps duplex: half
e1000g1 link: unknown speed: 0 Mbps duplex: half
e1000g2 link: up speed: 1000 Mbps duplex: full
e1000g3 link: up speed: 1000 Mbps duplex: full
dladm show-link:
e1000g0 type: non-vlan mtu: 1500 device: e1000g0
e1000g1 type: non-vlan mtu: 1500 device: e1000g1
e1000g2 type: non-vlan mtu: 1500 device: e1000g2
e1000g3 type: non-vlan mtu: 1500 device: e1000g3
aggr1 type: non-vlan mtu: 1500 aggregation: key 1
dladm show-aggr:
key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) device address speed
duplex link state
e1000g2 0:14:4f:ec:bc:1e 1000 Mbps full up attached
e1000g3 <unknown> 1000 Mbps full up attached
dladm show-aggr -L:
key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
device activity timeout aggregatable sync coll dist defaulted expired
e1000g2 active short yes yes yes yes no no
e1000g3 active short yes yes yes yes no no
dladm show-aggr -s:
key: 1 ipackets rbytes opackets obytes %ipkts %opkts
Total 464982722061215050501612388529872161440848661
e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5
Edited by: JimBuitt on Sep 26, 2008 12:04 PM
JimBuitt wrote:
I have a Enterprise T5220 server, running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3) and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently to where I can now barely get over 1 Gb/s.
Is this with multiple backup streams or just one?
I would not expect to get higher throughput with a single stream. Only with the aggregate throughput of multiple streams.
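Darren's explanation can be illustrated with a toy model: an L4 policy hashes each flow's endpoints onto one member link, so every packet of a single backup stream leaves through the same NIC and is capped at that NIC's 1 Gb/s; only multiple concurrent streams use both links. (The hash below is Python's, not the one in the Solaris aggr driver; only the flow-pinning behaviour is the point.)

```python
# Toy L4 distribution: pin each (src_ip, src_port, dst_ip, dst_port)
# flow onto one of two aggregated links.
def pick_link(flow, n_links=2):
    return hash(flow) % n_links

# Eight hypothetical client flows hitting the backup server's test port.
flows = [("10.0.0.%d" % i, 40000 + i, "10.0.0.99", 5001) for i in range(8)]
assignment = {f: pick_link(f) for f in flows}

# A given flow always maps to the same link, so one stream can never
# exceed a single link's line rate; many streams spread across both.
assert all(pick_link(f) == assignment[f] for f in flows)
assert set(assignment.values()) <= {0, 1}
```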
Darren -
BIGINT aggregation issue in Hana rev 91
Hi,
I have a BIGINT value field that isn't aggregating beyond 2147483648 (the max INTEGER value).
I'm seeing results as follows:
Period    Value
5         320,272,401
6         635,021,492
7         515,993,660
8         546,668,931
9         702,138,445
10        438,782,780
11        459,387,988
12        722,479,250
Result    -2,147,483,648
We've recently upgraded from rev 83 to 91. I'm pretty sure this is a new issue - has anyone else seen this?
I'm hoping there is some kind of fix as I don't want to have to convert fields throughout our system to a longer DECIMAL.
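For what it's worth, −2,147,483,648 is INT_MIN, the hallmark of a total that no longer fits a signed 32-bit slot (consistent with the sqlType 4 / INTEGER definition described further down); the true BIGINT total of the listed periods is about 4.34 billion. A sketch of 32-bit wrap-around, independent of HANA:

```python
# The period values from the post; their true total exceeds the 32-bit range.
values = [320272401, 635021492, 515993660, 546668931,
          702138445, 438782780, 459387988, 722479250]
true_total = sum(values)   # Python ints don't overflow

# Simulate accumulation in a signed 32-bit register, as a calculation
# that treats the field as INTEGER instead of BIGINT effectively does.
def wrap_int32(x):
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

acc = 0
for v in values:
    acc = wrap_int32(acc + v)

print(true_total)  # 4340744947 -- the correct BIGINT total
print(acc)         # a wrapped, meaningless 32-bit value
```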
thanks
Guy
I've figured out this issue only affects Analytical Views that have calculated attributes.
Such views generate a CALCULATION SCENARIO in _SYS_BIC, which seems to incorrectly define my field (which is in the data foundation, modelled as a BIGINT) as SQL Type 4, sqlLength 9, as per the following:
{"__Attribute__": true,"name": "miles","role": 2,"datatype": {"__DataType__": true,"type": 66,"length": 18,"sqlType": 4,"sqlLength": 9},"kfAggregationType": 1,"attributeType": 0}
I also have calculated measures modelled as BIGINT's in the Analytical View. These are correctly defined in the CALCULATION SCENARIO with an SQL length of 18, for example:
{"__Attribute__": true,"name": "count","role": 2,"datatype": {"__DataType__": true,"type": 66,"length": 18,"sqlType": 34,"sqlLength": 18},"kfAggregationType": 1,"attributeType": 0}
This looks like a bug to me. As a workaround I had to define a calculated measure BIGINT which simply equals my "miles" field, then hide the original field. -
ESSBASE Aggregation Issue.
Hi,
I am facing a serious problem with Essbase. I am implementing Hyperion Planning 11.1.2.2 for one of our clients, and it is my first time implementing this version.
Aggregation is not working in my setup. I have written a rule to aggregate the hierarchy and have tried AGG, CALC DIM, etc., but still have the same issue.
I have also tried running the Calculate web form rule file, but aggregation still does not happen.
I have also noticed that in Planning dimension maintenance, even the level 0 members show the consolidation operator.
Any body has clue?
Please help me as I am unable to proceed further.
Thanks in Advance.
Regards,
Sunil.
It is probably worth testing your script as a calc script and running it directly against the Essbase database using EAS, then checking the data with Smart View; this process should eliminate any issues in Planning or Calc Manager.
If you are still having problems then post your script and I am sure somebody will give you some further advice.
Cheers
John
http://john-goodwin.blogspot.com/ -
Hi Experts
Here is a scenario for which I need some help. We have multiple locations, and these locations can be supplied by one or more distribution centers. The distribution centers need to be planned in APO, but the plants have to be MRP-planned. So the scenario is: demand from multiple plants aggregates to distribution center A, and demand from another set of plants aggregates to DC B. Apart from the demand from the plants, the DCs also have their own demand.
I have maintained the hierarchy at material - DC level and tried to plan the DCs using the SNP aggregate planning book. But the problem is that the DCs' original demand gets overwritten with the demand from the individual plants. How do I overcome this issue - that is, how do I make sure the aggregated demand is the sum of the DCs' original demand plus the demand placed on the DCs by the plants? I want to know if there is any straightforward way of achieving this before modifying the macros to achieve it.
Thanks
Saradha
Edited by: Saradha Ramesh on Sep 3, 2010 11:12 PM
Datta,
Yes, the plants are not planned in APO; only the DCs are APO-planned. We run MRP at plant level to create STOs from the DCs to the plants. We forecast the material in DP (forecast for plants and DCs) and release the forecast to SNP. We transfer the supply, demand (SOs) and stock from R/3 to APO for the material (the plants' and DCs' transaction data). Now we know the net demand value at each plant. We roll up the net demand from the plants to the DCs using the aggregation/hierarchies. Up to this point everything is fine. But the issue arises when the net demand from the plants overwrites the DCs' demand. That is, the DC has 10 EA of demand from the plants. The DC supplies a customer, and the demand placed by the customer on the DC is, say, 5 EA. When I aggregate the demand I should see 10 + 5 = 15 EA, but what I see is 10 EA. This is the issue.
Thanks
Saradha -
Exception aggregation on non-cumulative KF - aggregation issue in the query
Hi Gurus,
Can anyone tell me a solution for the scenario below? I am using the BW 3.5 front end.
I have a non-cumulative KF coming from my Stock cube and a Pricing KF coming from my
Pricing cube (both cubes are in a MultiProvider and my query is on top of it).
I want to multiply both KFs to get a WSL Value CKF, but my query is not at material level;
it is at plant level.
So it behaves like this, for example (remember my Qty is a non-cumulative KF):
Plant  Material  QTY  PRC
P1     M1        10   50
P1     M2        0    25
P1     M3        5    20
My WSL value should be 600, but it gives me 15 * 95, which is way too high.
I have tried storing QTY and PRC in two separate CKFs, setting the aggregation
to "before aggregation" and then multiplying them, but it didn't work.
I also tried exception aggregation, but in the 3.5 front end we don't have the 'Total' option
that BI 7.0 offers.
So any other ideas guys. Any responses would be appreciated.
Thanks
Jay.
I don't think you can solve this issue at the query level.
This type of calculation should be done before aggregation, and that feature no longer exists in BI 7.0. No kind of exception aggregation will help here.
It should be done either through a virtual KF (see below) or using a stock snapshot approach.
The key figure QTY*PRC should be a virtual key figure. In that case you just need one cube (stock quantity) and pick up PRC at query run time.
Plant  Material  QTY  PRC
P1     M1        10   50
P1     M2        0    25
P1     M3        5    20
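The virtual-KF idea can be sketched: keep quantities at material grain and look the price up per material at query run time, multiplying before any aggregation to plant level (data taken from the example above):

```python
# Simulate the virtual-KF approach: quantities stay at material grain in the
# stock cube; the price is looked up per material at "query run time".
stock = [("P1", "M1", 10), ("P1", "M2", 0), ("P1", "M3", 5)]  # (plant, material, qty)
price = {"M1": 50, "M2": 25, "M3": 20}

# Multiply per material first, then aggregate to plant level.
wsl_by_plant = {}
for plant, mat, qty in stock:
    wsl_by_plant[plant] = wsl_by_plant.get(plant, 0) + qty * price[mat]

print(wsl_by_plant)  # {'P1': 600} -- not sum(QTY) * sum(PRC) = 15 * 95 = 1425
```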
Require Very Urgent Help on Aggregation Issue. Thanks in advance.
Hi All,
I am new to essbase.
I have an issue with aggregation in Essbase. I load data at level zero, and when I aggregate using CALC DIM I do not get any values.
The zero level load being:
Budget,Version,Levmbr(Entity,0),Levmbr(Accounts,0),NoRegion,NoLoc,NoMod,Year,Month.
When I use the default calc, or CALC DIM for the above, no aggregation takes place at the parent level.
Requirement:
Values at Version, Region, Location, Model, Year, Month, Budget level.
Please advise.
Thanks in advance.
Bal
Edited by: user11091956 on Mar 19, 2010 1:07 AM
Edited by: user11091956 on Mar 19, 2010 1:10 AM
Hi Bal,
If you loaded without error and your default calc still results in non-aggregated values, then I can imagine only one way that can happen: through your outline consolidations.
Check whether the members the data is loaded at have IGNORE (~) as their consolidation operator.
Sandeep Reddy Enti
HCC
http://hyperionconsultancy.com/ -
I created an infospoke with a BADI.
Basically I am extracting data from a MultiProvider to a flat file, and I need the data aggregated. In the BADI I sum the data. However, I noticed in the monitor that when it runs, the data is written to the file per data package, so even though I sum the data I still get records with duplicate keys that are not aggregated. If the number of records extracted is less than the data package size I have no issue, but I have a lot of data.
How do I fix this?
Hi,
You need to create a temporary internal table with the same structure as E_T_DATA_OUT, transfer the raw data there, clear E_T_DATA_OUT, then loop over the temporary table and transfer the records back into E_T_DATA_OUT via the COLLECT statement; this will aggregate (sum) the figures.
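Outside ABAP, COLLECT amounts to keyed accumulation: rows sharing a key are merged and their numeric fields summed before the package is written out. A sketch (key and field names invented):

```python
# COLLECT-style aggregation: merge rows with the same key, summing the amounts.
def collect(rows):
    merged = {}
    for key, amount in rows:
        merged[key] = merged.get(key, 0) + amount
    return sorted(merged.items())

# Rows within one data package, with duplicate keys.
package = [("A", 10), ("B", 5), ("A", 7), ("C", 1), ("B", 2)]
print(collect(package))   # [('A', 17), ('B', 7), ('C', 1)]
```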
Assign points if it helps...
Thanks and regards,
Raymond -
Aggregation Issue when we use Hierarchy InfoObject in the Bex Query.
Hi All,
I have created a BEx query with some characteristics, one hierarchy InfoObject in the rows, and RKFs. I haven't used any exception aggregation objects in the RKF, but when I execute the query, the overall result shows exception aggregation based on the hierarchy object.
Briefly, my problem is illustrated here:
OrgUnitHierarchy   EmpID   RKF
Root               1       1
RootA1             1       1
RootA2             1       1
Root               2       1
RootB1             2       1
RootB2             2       1
Root               3       1
RootC1             3       1
RootC2             3       1
Overall result             3
In the above example the sum of the RKF is 9, but it shows only 3. When I connect this to a Crystal Report, the sum of the RKF shows 9. Please help me understand which one is correct, and why it is not aggregating the child nodes.
Is there any configuration needed to aggregate all the nodes of the hierarchy? Thanks for your support in advance.
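The two totals are consistent with two different readings of the hierarchy: BW's overall result aggregates each underlying fact row once, while Crystal sums the displayed rows, where each employee appears under every node on its path. A sketch using the layout from the post:

```python
# Three base rows (one per employee), each displayed under its
# leaf nodes and under Root.
base_rows = {"1": 1, "2": 1, "3": 1}   # EmpID -> RKF
displayed = [("Root", "1"), ("RootA1", "1"), ("RootA2", "1"),
             ("Root", "2"), ("RootB1", "2"), ("RootB2", "2"),
             ("Root", "3"), ("RootC1", "3"), ("RootC2", "3")]

overall_bw = sum(base_rows.values())                           # 3: each base row once
overall_crystal = sum(base_rows[emp] for _, emp in displayed)  # 9: per displayed row
```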
Regards,
Shiva
Hi,
Is this related to BEx Analyzer or BEx Web reporting? If so, I would suggest posting in the BEx Suite forum, as this forum is for the SAP Integration Kit from BusinessObjects.
Ingo -
DATA AGGREGATION ISSUE IN REPORT
Hi,
When we run the query selecting the version, the data aggregates and we don't know where it went wrong.
Explanation:
We already loaded data with versions for the year 2010. We started loading from April 2010 to March 2011, as 12 fiscal periods, with 53 versions. Here 'version' means week (1 to 53 weeks per year), but we started loading from April, i.e. week 14 of 2010, through March 2011. Up to here there is no problem with the data; everything matches in the report.
Now it is time to load data from April 2011. Again version 14 comes up, which is already in the system for April 2010.
Now what is happening:
When we load data for week 14 of 2011 (April), the data aggregates, or gets mixed up, with the existing data.
So what we did was add calendar year to the filter, so that users see only the data for the version in the respective year.
Even so, the data aggregates with the previous year (2010) for the same version already in the system.
Nothing is working out.
Can you please suggest whether there is a problem in the back end, or whether we need to make modelling changes?
Until now, what we have done is delete the data from the cube and DSO for April 2010, version 14, and load the data for April 2011 with version 14, which is then a totally new version in the system.
So this keeps repeating every time the data doesn't match.
Now for May, with version 20, we have to load again, and the same problem should not recur.
Please help with your valuable suggestions, in detail.
This is a forecast summary report; we have data for the next two years. Planning is done two years ahead from the current date.
Any queries, please let me know.
Edited by: afzal baig on May 11, 2011 4:00 PM
Hi,
Is your data stored in a cube or a DSO? If it is a DSO, are version and period/calmonth key fields?
What type of key figure are you using, and what is the aggregation rule for this key figure in the InfoObject definition and in the query definition?
Is there anything unusual in the definition of your version characteristic? Any compounding?
Why did you not use 0CALWEEK?
regards
Cornelia -
Grouping - aggregation issue- Sample data included
The data below shows that this particular course, A16, at the top level (set = 01) is 60% coursework and 40% exam. At this level the set = 01 rows should always total 100%.
The complication is where a piece of coursework and/or exam is made up of other parts. As we can see in the data below, the 60% coursework is made up of an Assessment paper (58%) and a Practical piece (42%). You can see the association between the 01 level and the 02 and 03 levels through the set/subset relationship.
So the record with subset 02 has two pieces associated with it.
with t as (
select 'A16' course, '01' sett, '03' Subset, 'E' Code, 'Exam' Descr, 40 "weight%" from dual UNION
select 'A16' course, '01' sett, '02' Subset, 'C' Code, 'Courswork Total' Descr, 60 "weight%" from dual UNION
select 'A16' course, '02' sett, '' Subset, 'C' Code, '1. Assignement' Descr, 58 "weight%" from dual UNION
select 'A16' course, '02' sett, '' Subset, 'P' Code, '2. Practical' Descr, 42 "weight%" from dual UNION
select 'A16' course, '03' sett, '' Subset, 'E' Code, '1. Exam' Descr, 50 "weight%" from dual UNION
select 'A16' course, '03' sett, '' Subset, 'W' Code, '2. Written Piece' Descr, 50 "weight%" from dual)
select * from t;
This is what I had so far, but it only looks at the top level, which as you can see is no good, as it doesn't know about the practical elements.
SELECT course
,sett
,NVL(SUM(CASE WHEN CODE IN ('C','F','J','L','R','Y') THEN "weight%" end),0) AS Coursework
,NVL(SUM(CASE WHEN CODE IN ('E','Q') THEN "weight%" end),0) AS Written
,NVL(SUM(CASE WHEN CODE IN ('A','D','O','P','S','T','V','W') THEN "weight%" end),0) AS Practical
FROM t
where sett = '01'
GROUP BY course, sett
ORDER BY sett;
What I am trying to calculate is a total Exam%, Written% and Practical%, which when summed equal 100%.
EXPECTED Results for the supplied data set are below:
select 'A16' course, 20 Exam, 45.2 Practical, 34.8 Coursework, 20+45.2+34.8 Total from dual;
The t.Code relates to whether the piece is coursework, exam or practical, as seen below. This is how I know which sections relate to which part.
I basically need to sieve through each level, calculate its % of the 01 level, and group them into Exam, Practical and Coursework.
CODE IN ('C','F','J','L','R','Y') Coursework
CODE IN ('E','Q') Written
CODE IN ('A','D','O','P','S','T','V','W') Practical
Any ideas would be much appreciated.
Thanks for that sKr.
abhi: The only issue I have is with courses such as the one below. This course has only 01s, so it can't roll them up unless we use the 00 record. If I change the code to start with 00 it works for this one, but not for the OP data I sent (this is further down this post):
START WITH T.SETT = '00'
with t as (
--select 'A16' course, '00' sett, '01' Subset, '' Code, 'Generated' Descr, 100 "WEIGHT" from dual UNION -- This record is the master record.
select 'A16' course, '01' sett, '' Subset, 'O' Code, 'Presen' Descr, 10 "WEIGHT" from dual UNION
select 'A16' course, '01' sett, '' Subset, 'R' Code, 'Case' Descr, 70 "WEIGHT" from dual UNION
select 'A16' course, '01' sett, '' Subset, 'O' Code, 'Poster' Descr, 10 "WEIGHT" from dual UNION
select 'A16' course, '01' sett, '' Subset, 'C' Code, 'Journel' Descr, 10 "WEIGHT" from dual)
SELECT TT.COURSE,
NVL(SUM(CASE WHEN TT.CODE IN ('E', 'Q') THEN "WW" END), 0) AS EXAM,
NVL(SUM(CASE WHEN TT.CODE IN ('A', 'D', 'O', 'P', 'S', 'T', 'V', 'W') THEN "WW" END), 0) AS PRACTICAL,
NVL(SUM(CASE WHEN TT.CODE IN ('C', 'F', 'J', 'L', 'R', 'Y') THEN "WW" END), 0) AS COURSEWORK,
NVL(SUM(WW),0) TOTAL
FROM (SELECT T.*, LEVEL, (CONNECT_BY_ROOT "WEIGHT") * T.WEIGHT / 100 WW
FROM T
START WITH T.SETT = '01'
CONNECT BY SETT = PRIOR SUBSET) TT
WHERE TT.SUBSET IS NULL
GROUP BY COURSE;
In the OP, the data set had master levels at 01. But there is actually one more level above that, 00, which all courses have. I thought I would mention it in case it can be used to fix the above examples. The OP data, with the missing 00 record, is below:
with t as (
select 'A16' course, '00' sett, '01' Subset, '' Code, 'Generated' Descr, 100 "WEIGHT" from dual UNION
select 'A16' course, '01' sett, '03' Subset, 'E' Code, 'Exam' Descr, 40 "WEIGHT" from dual UNION
select 'A16' course, '01' sett, '02' Subset, 'C' Code, 'Courswork Total' Descr, 60 "WEIGHT" from dual UNION
select 'A16' course, '02' sett, '' Subset, 'C' Code, '1. Assignement' Descr, 58 "WEIGHT" from dual UNION
select 'A16' course, '02' sett, '' Subset, 'P' Code, '2. Practical' Descr, 42 "WEIGHT" from dual UNION
select 'A16' course, '03' sett, '' Subset, 'E' Code, '1. Exam' Descr, 50 "WEIGHT" from dual UNION
select 'A16' course, '03' sett, '' Subset, 'W' Code, '2. Written Piece' Descr, 50 "WEIGHT" from dual)
SELECT TT.COURSE,
NVL(SUM(CASE WHEN TT.CODE IN ('E', 'Q') THEN "WW" END), 0) AS EXAM,
NVL(SUM(CASE WHEN TT.CODE IN ('A', 'D', 'O', 'P', 'S', 'T', 'V', 'W') THEN "WW" END), 0) AS PRACTICAL,
NVL(SUM(CASE WHEN TT.CODE IN ('C', 'F', 'J', 'L', 'R', 'Y') THEN "WW" END), 0) AS COURSEWORK,
NVL(SUM(WW),0) TOTAL
FROM (SELECT T.*, LEVEL, (CONNECT_BY_ROOT "WEIGHT") * T.WEIGHT / 100 WW
FROM T
START WITH T.SETT = '01'
CONNECT BY SETT = PRIOR SUBSET) TT
WHERE TT.SUBSET IS NULL
GROUP BY COURSE;
Edited by: oraCraft on Mar 12, 2012 10:16 AM -
"A" is an IT application (Name). This IT application has a total of 50 users (Number of Users), so the number of users needs to be aggregated at the IT-application level. The application provides different application functionalities, which belong to certain application functionality groups, for instance "SC", "PO", etc. The relation between Application Functionality and Application Functionality Group is 1:n. Application Functionality Group "PO" will also be associated with other IT applications (Name), e.g. "B". Because multiple application functionalities can be related to one application functionality group, if a query is created for Number of Users per Application Functionality Group, the number of users must not be duplicated.
Example (this is just to illustrate the above, not actual data):
Name   Number of Users   Application Functionality   Application Functionality Group
A      50                OCP                         SC
A      50                DS                          PO
A      50                WM                          PO
A      50                PMD                         MDM
A      50                RPC                         SC
A      50                IM                          SM
B      222               DS                          PO
B      222               WM                          PO
B      222               RPC                         SC
If I look at Names and Number of Users only, I would expect the report to show this:
Name   Number of Users
A      50
B      222
If I look at Application Functionality Groups and Number of Users only, I would expect to see this:
Application Functionality Group   Number of Users
PO                                272
SC                                272
MDM                               50
SM                                50
Hi,
No need for confusion.
First create the selection, then apply a new formula, and use exception aggregation based on your requirements (e.g. summation/total); try it with a different characteristic as the reference characteristic.
See the document below.
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f0b8ed5b-1025-2d10-b193-839cfdf7362a?overridelayout=t…
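The expected figures in the question can be reproduced by de-duplicating on (application, group) before summing users, which is the effect the exception aggregation has to achieve. A sketch over the example data (not a BW implementation):

```python
# (name, users, functionality, group) rows from the example table.
rows = [("A", 50, "OCP", "SC"), ("A", 50, "DS", "PO"), ("A", 50, "WM", "PO"),
        ("A", 50, "PMD", "MDM"), ("A", 50, "RPC", "SC"), ("A", 50, "IM", "SM"),
        ("B", 222, "DS", "PO"), ("B", 222, "WM", "PO"), ("B", 222, "RPC", "SC")]

# Keep one user count per (group, application) pair, then sum per group,
# so an application listed twice under a group is counted once.
users_per_group = {}
for name, users, _func, group in rows:
    users_per_group.setdefault(group, {})[name] = users

totals = {g: sum(apps.values()) for g, apps in users_per_group.items()}
print(totals)   # {'SC': 272, 'PO': 272, 'MDM': 50, 'SM': 50}
```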
Thanks,
Phani. -
CASE causes strange aggregation issue.
Hi, I have a subject area, Journals, which has a dimension Budget Type (Actual or Budget), a dimension GL Code (Cost Centre), and a fact folder BUDGET FACTS which has a field Amount that aggregates by default as SUM. (Simplified, but I think this is all that matters.)
I create a report in Answers thus:
Actual Type Cost Centre Amount
Actual 801041 100
Budget 801041 150
This gives results as desired/expected.
However, wishing to see Sum(Amount) in two columns, one for Budget and one for Actual, next to each other, I try this:
Cost Centre Budget Amount Actual Amount
where Budget Amount is populated by the following formula:
CASE WHEN "Journals"."Budget Type" = 'Budget' THEN amount else 0 end
And Actual Amount
CASE WHEN "Journals"."Budget Type" = 'Actual' THEN amount else 0 end
(The syntax may not be exact, but you get the idea.)
Now all of my results look Cartesian, producing massively bigger numbers...
I have tried SUM in the formula and experimenting with default aggregation but cannot get it to work.
Any suggestions - do I have to resort to the ADVANCED tab and setting the SQL - GROUPING - which I can do but I am hoping for a simpler solution to pass on to my wider user community - and pointing them at writing SQL fills me with cold dread....
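The arithmetic the report is after is a conditional-sum pivot (in plain SQL, SUM wrapped around the CASE, grouped by cost centre); the post notes this didn't behave as hoped in Answers, but the intended result looks like this, sketched with invented amounts:

```python
# (cost centre, budget type, amount) rows -- amounts invented for illustration.
rows = [("801041", "Actual", 60), ("801041", "Actual", 40),
        ("801041", "Budget", 150)]

# Conditional-sum pivot: one output row per cost centre, with the
# amount routed into the Actual or Budget measure before summing.
pivot = {}
for cc, typ, amount in rows:
    actual, budget = pivot.get(cc, (0, 0))
    if typ == "Actual":
        actual += amount
    else:
        budget += amount
    pivot[cc] = (actual, budget)

print(pivot)   # {'801041': (100, 150)} -- both measures on one row, no fan-out
```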
I am on 10.1.3.4
Thanks for your input,
Robert.
Hi,
these are the log files...
The only difference between what was generated in Answers in the first one and the second is that in the first I leave the 'Actual Type' column in my report, which works except that budgets and actuals do not appear on the same row, even when I hide the column...
So the second is identical, but with the aforementioned 'Actual Type' column deleted.
Here all of the result data winds up in the 'Actual Amount' column (the first of the two case statements), which makes no sense...
Is there a way around this, other than creating a calculation in the repository as you suggest?
thanks,
Robert (code follows)
select distinct D1.c2 as c1,
D1.c3 as c2,
case when D1.c4 = 'Actual' then D1.c1 else 0 end as c3,
case when D1.c4 = 'Budget' then D1.c1 else 0 end as c4,
D1.c1 as c5,
D1.c4 as c6,
D1.c5 as c7
from
(select sum(T29613.AMOUNT) as c1,
T29642.COST_CENTRE as c2,
T29706.PERIOD_NAME as c3,
T31281.ACTUAL_TYPE as c4,
T29706.PERIOD_NUM as c5
from
GL_ACTUAL_TYPE_MV T31281,
GL_CODE_COMBINATIONS_MV T29642 /* Gl Code Combinations for GL Journal Drill */ ,
GL_PERIODS T29706 /* Gl Periods for Gl Journal Drill */ ,
GL_JOURNAL_DRILL T29613
where ( T29613.ACTUAL_KEY = T31281.ACTUAL_FLAG and T29613.CODE_KEY = T29642.CODE_KEY and T29613.PERIOD_KEY = T29706.PERIOD_NAME and T29613.PERIOD_KEY = 'JUL-11' and T29642.COST_CENTRE = '801040' and T29706.PERIOD_NAME = 'JUL-11' )
group by T29642.COST_CENTRE, T29706.PERIOD_NAME, T29706.PERIOD_NUM, T31281.ACTUAL_TYPE
) D1
order by c1, c7, c6
select D1.c2 as c1,
D1.c3 as c2,
D1.c1 as c3,
D1.c4 as c4
from
(select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4
from
(select T31281.ACTUAL_TYPE as c1,
T29642.COST_CENTRE as c2,
T29706.PERIOD_NAME as c3,
T29706.PERIOD_NUM as c4,
ROW_NUMBER() OVER (PARTITION BY T29642.COST_CENTRE, T29706.PERIOD_NAME, T31281.ACTUAL_TYPE ORDER BY T29642.COST_CENTRE ASC, T29706.PERIOD_NAME ASC, T31281.ACTUAL_TYPE ASC) as c5
from
GL_ACTUAL_TYPE_MV T31281,
GL_CODE_COMBINATIONS_MV T29642 /* Gl Code Combinations for GL Journal Drill */ ,
GL_PERIODS T29706 /* Gl Periods for Gl Journal Drill */ ,
GL_JOURNAL_DRILL T29613
where ( T29613.ACTUAL_KEY = T31281.ACTUAL_FLAG and T29613.CODE_KEY = T29642.CODE_KEY and T29613.PERIOD_KEY = T29706.PERIOD_NAME and T29613.PERIOD_KEY = 'JUL-11' and T29642.COST_CENTRE = '801040' and T29706.PERIOD_NAME = 'JUL-11' )
) D1
where ( D1.c5 = 1 )
) D1
order by c2, c1