Aggregation in Infocube
Hi,
I have created an InfoCube which gets its data from a DSO with a delta update.
I maintained aggregation on the InfoCube and compressed it once. When new data comes in, I want to aggregate once again, but it doesn't work: the new data cannot be compressed.
Why can the aggregation be done only one time?
Thanks!!!!!!!!
Hi,
I believe the aggregation you mention means aggregates. You need to roll up the data whenever new data is loaded into the cube. When the roll-up is performed, it pushes the data in the cube into the aggregates and makes the data available for reporting.
To roll up the new data, add a roll-up step to your process chain.
Hope I am answering your concern.
Thanks
Venkat
Similar Messages
-
Aggregation for reference characteristic
Can I use aggregation (in an InfoCube) for a reference characteristic?
Regards

Hi...
The regular standard aggregation process consists of totaling or
determining the MIN/MAX values; however, more complex
aggregation processes may be required. The exception aggregation
is an example of such a process. This aggregation process
comprises different types of aggregation such as SUM, MAX, MIN,
AVG, COUNT, FIRST, LAST, VARIANCE. Typical examples of exception
aggregation include the calculation of the number of deliveries
per month and the average revenue per customer.
These types of key figures require the specification of an
InfoObject as a reference value. The InfoObjects can be both
time-related and non-time-related characteristics.
The following illustrates the calculation of an exception
aggregation based on a sample average value calculation for the
number of purchase orders per day:
STANDARD AGGREGATION SUM (database)
(Σ = sum over the rows of one day/customer; Ø = average over the day's customers)

CALENDAR DAY   CUSTOMER   MATERIAL   ORDERS
01/17/05       0815       AAA        14   Σ 30
01/17/05       0815       BBB        16
01/17/05       0816       AAA        50
01/18/05       0815       CCC        22
01/18/05       0816       DDD        18   Σ 36
01/18/05       0816       AAA        18

BECOMES -->

CALENDAR DAY   CUSTOMER   ORDERS
01/17/05       0815       30   Ø 40
01/17/05       0816       50
01/18/05       0815       22   Ø 29
01/18/05       0816       36
The calculation of the Orders key figure is based on the standard
aggregation process, Summation (SUM). This step is carried out
in the database. Moreover, the key figure also contains the
exception aggregation, Average (AVG). The calculation of the key
figure Average number of orders per day is handled by the OLAP
Processor.
Versus... EXCEPTION AGGREGATION AVG (OLAP) -->
CALENDAR DAY Ø ORDERS
01/17/05 40
01/18/05 29
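The two-step calculation above can be sketched as a short simulation (Python used here purely as neutral pseudo-code; the rows are the sample values from the tables above):

```python
from collections import defaultdict

# Sample fact rows: (calendar day, customer, material, orders)
rows = [
    ("01/17/05", "0815", "AAA", 14),
    ("01/17/05", "0815", "BBB", 16),
    ("01/17/05", "0816", "AAA", 50),
    ("01/18/05", "0815", "CCC", 22),
    ("01/18/05", "0816", "DDD", 18),
    ("01/18/05", "0816", "AAA", 18),
]

# Step 1 - standard aggregation SUM, performed in the database:
# group by day and customer; material drops out of the drilldown.
sums = defaultdict(int)
for day, customer, material, orders in rows:
    sums[(day, customer)] += orders

# Step 2 - exception aggregation AVG, performed by the OLAP processor
# per value of the reference characteristic 0CALDAY.
per_day = defaultdict(list)
for (day, customer), total in sums.items():
    per_day[day].append(total)
avg_orders = {day: sum(v) / len(v) for day, v in per_day.items()}

print(avg_orders)  # {'01/17/05': 40.0, '01/18/05': 29.0}
```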
When you use key figures with exception aggregation in the
InfoCube, the reference characteristics of the key figure are
automatically included in the aggregate. In the example shown
above, the aggregate must contain the time characteristic
0CALDAY because 0CALDAY represents the reference characteristic
for the exception aggregation.
The automatic inclusion of reference characteristics in exception
aggregation can affect the size and therefore the performance of
the aggregate. You can remove the reference characteristics via
the Expert mode (menu path: Extras > Switch Expert Mode On).
But, in that case, the OLAP processor can use the aggregate only
if the query does not contain the corresponding key figure.
Hope this helps you..
Regards,
Debjani.. -
We have created batch jobs for roll-ups on the cube to commence after the load is over. But this job finishes in no time, and the job log says "no rollups necessary". In the job log we can also see that it tries to roll up from request 00000000, which is not present, to the latest request. Then it says:
"Aggregation of InfoCube terminated, as end request ID 0000000000 not permitted
Rollup is finished: Data target , from 0000185518 to 000000000
No rollup necessary"
When we stop the scheduled batch job and execute it manually, the roll-ups complete successfully.
Could anyone advise on this? Thanks in advance.

Hi Rao,
What is the program you are using in the batch job? Maybe that is what is causing the problem, or maybe you are not passing the correct parameters to it.
Bye
Dinesh -
Compression failed - Inventory Cube - 2LIS_03_BX
HI Experts,
the compression of the initial request from DataSource 2LIS_03_BX failed. The compression was done with "no marker update".
On the development system it works fine, but on the test system the compression fails. The time dimension contains 0FISCPER and 0FISCYEAR.
All checks with RSDV are successful.
Please find below the Job Log. Have anyone an idea ?
Thanks
Constantin
Message text | Message class | Message no. | Message type
Job started | 0 | 516 | S
Step 001 started (program RSCOMP1, variant &0000000002226, user ID xxx) | 0 | 550 | S
Performing check and potential update for status control table | RSM1 | 490 | S
FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200 | RSM | 53 | S
Request '3.439.472'; DTA 'TKGSPLB16'; action 'C'; with dialog 'X' | RSM | 54 | S
Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State '' | RSM | 55 | S
RSS2_DTP_RNR_SUBSEQ_PROC_SET GET_INSTANCE_FOR_RNR 3439472 LINE 43 | RSAR | 51 | S
RSS2_DTP_RNR_SUBSEQ_PROC_SET GET_TSTATE_FOR_RNR 2 LINE 243 | RSAR | 51 | S
Status transition 2 / 2 to 7 / 7 completed successfully | RSBK | 222 | S
RSS2_DTP_RNR_SUBSEQ_PROC_SET SET_TSTATE_FURTHER_START_OK LINE 261 | RSAR | 51 | S
Aggregation of InfoCube TKGSPLB16 to request 3439472 | DBMAN | 396 | S
statistic data written | DBMAN | 102 | S
Requirements check compelted: ok | DBMAN | 102 | S
enqueue set: TKGSPLB16 | DBMAN | 102 | S
compond index checked on table: /BIC/ETKGSPLB16 | DBMAN | 102 | S
ref. points have const. pdim: 0 | DBMAN | 102 | S
enqueue released: TKGSPLB16 | DBMAN | 102 | S
Prerequisite for successful compression not fulfilled | DBMAN | 378 | S
Collapse terminated: Data target TKGSPLB16, from to 3.439.472 | RSM | 747 | S
(the nine messages from "Aggregation of InfoCube ..." through "Collapse terminated ..." then repeat for a second attempt)
Report RSCOMP1 ended with errors | RSM1 | 798 | E
Job cancelled after system exception ERROR_MESSAGE | 0 | 564 | A
---> Compressing the request containing the opening stock that was just uploaded: make sure the "No marker update" indicator is NOT set.
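As a rough sketch of why this indicator matters (a toy model, not SAP's actual implementation): the E table keeps a per-stock "reference point" (marker), and compressing a request either rolls it into that marker or leaves it out:

```python
# Toy model of the inventory marker (reference point) in the E table.
marker = 0

def compress(qty, no_marker_update):
    """Compress one request; roll it into the marker unless the
    'No marker update' indicator is set."""
    global marker
    if not no_marker_update:
        marker += qty

# 2LIS_03_BX opening stock: the indicator must NOT be set,
# so the stock snapshot becomes the reference point.
compress(100, no_marker_update=False)

# Historical 2LIS_03_BF movements are already contained in the opening
# stock, so they are compressed WITH 'No marker update' set.
compress(-20, no_marker_update=True)

print(marker)  # 100
```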
The problem was the time period 0FISCPER...
When I substitute it with 0CALMONTH and so on, it works!
What are the best design requisites for a Query design?
Hi Guru's
Could you please let me know which item executes first when you run a query: a calculated key figure, a restricted key figure, or a formula, etc.? How does this affect query performance? What are the design requisites for optimising query performance?
Thanks in advance,
rgds,
Srini.

Hi Srinivas....
The design of queries can have a significant impact on the performance.
Sometimes long running queries are the result of poor design, not just the amount
of data. There are a number of design techniques that developers can use to
provide optimal query performance.
For example, in most cases characteristics should be placed in the rows and key
figures in the columns. A characteristic should only be used in the columns in
certain circumstances (like time). Characteristics having potentially many values
(such as 0MATERIAL) must not be added to the columns without a filter or
variables. Alternatively, it can be integrated into the query as a free characteristic
enabling it to be used in navigation.
If a relatively detailed time characteristic, such as calendar day (0CALDAY) is
added to the rows, the more aggregated time characteristics (such as calendar
month (0CALMONTH)) and calendar year (0CALYEAR) should also be included
in the free characteristics of the query. For most reports, a current period of time
(current month, previous or current calendar year) is useful. For this reason, the
use of variables is particularly relevant for time characteristics.
To improve query performance
1) Variables and drop down lists can improve query performance by making the
data request more specific. This is very important for queries against Data Store
Objects and InfoSets, which are not aggregated like InfoCubes.
2) When using restricted key figures, filters or selections, try to avoid the Exclusion
option if possible. Only characteristics in the inclusion can use database indexes.
Characteristics in the exclusion cannot use indexes.
3) When a query is run against a MultiProvider, all of InfoProviders in that
MultiProvider are read. The selection of the InfoProviders in a MultiProvider
query can be controlled by restricting the virtual characteristic 0INFOPROVIDER
to only read the InfoProviders that are needed. In this way, there will be no
unnecessary database reads.
4) Defining calculated key figures at the InfoProvider level instead
of the query level will improve query runtime performance, but may add
time for data loads.
5) Cell calculation by means of the cell editor generates separate queries at query
runtime. Be cautious with cell calculations.
6) Customer-specific code is necessary for virtual key figures and characteristics.
Check Code in Customer Exits.
7) Using graphics in queries, such as charts, can have a performance impact.
Hope this helps.........
Regards,
Debjani......... -
Performance problem counting occurrences
Hi,
I have an InfoCube with 5 characteristics (region, company, distribution center, route, customer) and 3 key figures. I have set one of these KFs to average (values different from 0), and I am loading data from 16 months and 70 weeks. In my query I have set a calculated KF which counts the occurrences of the lowest characteristic, to obtain the count at each granularity level; therefore I always count the lowest detail (customer). There are approx. 500K customers, so my web templates take more than 10 minutes to display the 12 months. I have looked at building aggregates, but the query is not using them anyway. Has anyone had this kind of performance problem with such a low volume of data (6 million records for 12 months)? Has anyone found a workaround to improve performance? I really hope someone has this experience and can help me out; the life of BW in the organization will depend on it.
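In effect, the calculated KF is doing a distinct count of customers per drill-down level, which (with made-up characteristic values) amounts to something like:

```python
# Toy distinct-customer count per region; all values are invented.
rows = [
    ("R1", "DC1", "C001"),
    ("R1", "DC1", "C001"),  # same customer appears twice
    ("R1", "DC2", "C002"),
    ("R2", "DC3", "C003"),
]

customers_per_region = {}
for region, dist_center, customer in rows:
    customers_per_region.setdefault(region, set()).add(customer)

counts = {region: len(s) for region, s in customers_per_region.items()}
print(counts)  # {'R1': 2, 'R2': 1}
```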
Please help me out!
Thanks in advance!

Hi,
First of all, thanks for your advice. I have taken part of both suggestions in my solution: I am no longer considering the avg defined in the ratio, however I am still considering it in the query, and it is answering, at least for now, taking up to 10 mins. My exact requirement is to display the count of distinct customers grouped by the upper levels. I have populated my InfoCube with 1 in my key figure; however, it may be duplicated for a distribution center, company or region, therefore I have to find the distinct customers. With SAP's "How to count occurrences" I managed that, but it is not performing at an acceptable level. I have performed tests without the division between CKF customer / CKF avg customer and found this is what is now slowing the query. I find the boolean evaluation might be more useful and less costly; if you could hint a little more at how to do it, I would appreciate it with points. Also, a change in the model could be costly on the front-end side because of dependencies with queries and web templates; I would rather have it solved in the BW workbench by partitioning, aggregation or new InfoCubes, which is a solution I have already analyzed by disaggregating the characteristics by totals into different InfoCubes with the same KF and then selecting the appropriate one per query. I was wondering if an initial routine could do the count-distinct and group-by with the same ratio for different characteristics, so I do not rework the other configuration I already have. -
Hi experts,
I have a rollup step in my process chain. It shows a green light after execution; however, the rollup is not performed.
I got the message "Aggregation of infocube XXXXX terminated, as end request id 0000000000 not permitted".
When I manage the cube, the technical status also shows a green light; however, there is no tick for "compression status of aggregate" or "rollup status", and "request for reporting available" is empty. When I tried the "Rollup" tab, it said "no valid request available for rollup".
Does anyone know what's going on... and how to solve it? Thanks in advance.
Points will be awarded for useful answers.

Hi,
Check whether the aggregates are activated and showing green. If not, reactivate and fill them.
If they are green, go to the Manage tab, take the latest request ID from the list, put it in the Rollup tab, and then try to roll up.
Regds,
Shashank -
Compression Status of a Request in the cube
Dear Experts,
Can you please let me know how I can check the compression status of a request in an Inventory cube.
For example, I have a request loaded from the 2LIS_03_BX DataSource into 0IC_C03, and I would like to know how the request was compressed, i.e. with or without the marker update option.
Is there any way to know this?
regards
Krishna Mohan

Hi Tibollo,
Thanks for your answer; I was able to find the log as you said. Below is the information I found in the log:
" Aggregation of InfoCube 0IC_C03 to request 180698
Mass insert for marker initialization executed (51050 data records)
P-DIMID 625 deleted from table /BI0/D0IC_C03P (InfoCube 0IC_C03)
Request 180698 was summarized correctly
DB Statistics: 00000000 selects, 00000000 inserts, 00000000 updates, deletes 00000000
Request with P-Dimid 180698 from InfoCube 0IC_C03 deleted (51050 data records)
Aggregation of InfoCube 100033 to request 180698
InfoCube 0IC_C03 Successfully Compressed Up To Request 180,698 "
But how can we know whether NO MARKER UPDATE was set while compressing the request?
Which of the above messages states that, and how?
regards
Krishna Mohan -
How to delete a request in a collapsed and aggregated infocube ???
Hi all,
<u><b>The situation :</b></u>
I have a cube that is collapsed, and that has aggregates.
I'm not the one who designed this cube, but if my understanding is good, aggregates were created to avoid duplication of records with the same characteristic key caused by multiple loads. Actually, the KFs of the same record are fed by various chains. Each chain creates a segment for the same characteristic values while updating the concerned KF. The aggregates make it possible to gather all these segments into the same record by aligning the various updated KFs. (That's definitely not clear to me, but... this might be the reason why...)
In the last chain that feeds the cube, right after loading data, there is an "adjust" step, and then a "collapse" step.
<u><b>The problem :</b></u>
Sometimes for this cube (when we have a problem with figures, for example) I have to run the loading chains a second time for the same day. The result is obviously that data is doubled in the cube, so I have to delete requests in the cube to get it right.
The problem is that, when I try to delete a request, the following pop up message appears : "Request ID xxx is aggregated, compressed or scheduled
Message no. RSM725"
I think I have to "uncollapse" the cube to do that, but I don't know how. And I'd want to "recollapse" the cube afterwards (it seems that's necessary).
Could someone explain the steps to solve the problem, and at the same time explain the necessity of, and the differences between, creating aggregates and collapsing, please?
That would be so useful to me; obviously I'll assign points for helpful answers!
Thanks !
Fabrice

Aggregates
check this links for aggregates
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/2299d290-0201-0010-1a8e-880c6d3d0ade
How to doc
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
relation between agg and comp
Relation between Rollup and compression.
Aggregate Rollup and Compression -
Record Check / Aggregation when loading to infocube
When pushing daily data to an InfoCube, summarized by week, I am getting inconsistent results in the cube.
For example, if in one week I have 5 separate records in the ODS with the same plant, variety, and week, but each on a different day, those 5 records should roll up to one record in the InfoCube by week.
In my record count, however, I notice the system generates packets of varying sizes during the update to the cube. My question is: if ODS records with the same keys are spread across the packets, could this result in an inaccurate update to the cube?
In the record check screen, the Converted --> Updated columns suggest to me that unless similar records exist in the same packet, there is a chance they will not be rolled up properly.
Any thoughts?
CB

I agree that compression will yield the correct results, but this does not seem to address the root concern: that data are not fully rolled up during the load.
I would not expect the individual packets to have an impact on overall cube roll-up, but in our testing it appears this is the case.
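For what it's worth, the packet split should only affect how many rows land in the F table, not the totals a query (or compression) returns; a toy check, with invented values:

```python
from collections import defaultdict

# ODS records sharing one key (plant, variety, week), split across packets.
packets = [
    [("P01", "V1", "200401", 10), ("P01", "V1", "200401", 20)],  # packet 1
    [("P01", "V1", "200401", 30)],                               # packet 2
]

# Load: each packet writes its own partially aggregated rows to the F table.
f_table = []
for packet in packets:
    partial = defaultdict(int)
    for plant, variety, week, qty in packet:
        partial[(plant, variety, week)] += qty
    f_table.extend(partial.items())

print(len(f_table))  # 2 rows exist for the same key

# Query-time aggregation (or compression) still sums them correctly.
totals = defaultdict(int)
for key, qty in f_table:
    totals[key] += qty
print(dict(totals))  # {('P01', 'V1', '200401'): 60}
```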
Do you know if the roll-up of data in a cube with similar characteristic values should be impacted by the breakdown of data in the packets? -
Infocube- Roll up ,compression and Aggregation
Hi,
Can anyone explain the following InfoCube concepts to me:
F-table --> F-Aggregate table
F-table --> E-Table
F-Aggregate table --> E-Aggregate table
With some examples.
Thanks in advance
Regards,
Swarnalatha.M

Hi Swarna,
An InfoCube contains two fact tables:
1. the F fact table  2. the E fact table
Whenever you run compression, the request ID is deleted and the data is aggregated based on the characteristic values and moved to the E fact table.
Ex:
The F fact table contains:
Request ID   Customer no   Material no   Price   Qty   Rev
1516         10000         101           1000    2     20000
1517         10001         101           1000    2     20000
1518         10000         101           1000    2     20000
Once you have done the compression, the E table looks like this:
Customer no   Material no   Price   Qty   Rev
10000         101           1000    4     40000
10001         101           1000    2     20000
Aggregate tables: an aggregate is also a smaller cube. It does not contain the request ID either; it aggregates your data based on the characteristic values it contains.
The aggregate table then looks like this:
Customer no   Price   Qty   Rev
10000         1000    4     40000
10001         1000    2     20000
I think the above example makes it clear.
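The compression step in the example can be simulated like this (Python as neutral pseudo-code; price is kept constant alongside the characteristics here, as in the rows above):

```python
from collections import defaultdict

# F fact table rows: (request id, customer, material, price, qty, revenue)
f_table = [
    (1516, "10000", "101", 1000, 2, 20000),
    (1517, "10001", "101", 1000, 2, 20000),
    (1518, "10000", "101", 1000, 2, 20000),
]

# Compression: drop the request id and sum the key figures
# per remaining characteristic combination -> E fact table.
e_table = defaultdict(lambda: [0, 0])
for req_id, customer, material, price, qty, rev in f_table:
    key = (customer, material, price)
    e_table[key][0] += qty
    e_table[key][1] += rev

print(dict(e_table))
# {('10000', '101', 1000): [4, 40000], ('10001', '101', 1000): [2, 20000]}
```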
Thanks & Regards,
Venkat. -
Data in the Cube not getting aggregated
Hi Friends
We have Cube 1 and Cube 2.
The data flow is represented below:
R/3 DataSource>Cube1>Cube2
In Cube1, data is stored according to the calendar day.
Cube2 has calweek.
In the transformation from Cube1 to Cube2, Calday of Cube1 is mapped to Calweek of Cube2.
In Cube2, when I upload data from Cube1, the key figure values are not getting summed.
EXAMPLE: Data in Cube 1
MatNo CustNo qty calday
10001 xyz 100 01.01.2010
10001 xyz 100 02.01.2010
10001 xyz 100 03.01.2010
10001 xyz 100 04.01.2010
10001 xyz 100 05.01.2010
10001 xyz 100 06.01.2010
10001 xyz 100 07.01.2010
Data in Cube 2:
MatNo CustNo qty calweek
10001 xyz 100 01.2010
10001 xyz 100 01.2010
10001 xyz 100 01.2010
10001 xyz 100 01.2010
10001 xyz 100 01.2010
10001 xyz 100 01.2010
10001 xyz 100 01.2010
But Expected Output Should be:
MatNo CustNo qty calweek
10001 xyz 700 01.2010
How do I achieve this?
I checked in the transformations: all key figures are maintained with aggregation Summation.
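For reference, the expected collapse only happens when the rows share an identical characteristic key after the mapping; in pseudo-form:

```python
from collections import defaultdict

# Cube2 rows after the transformation: calday already mapped to calweek.
rows = [("10001", "xyz", 100, "01.2010")] * 7

summed = defaultdict(int)
for matno, custno, qty, calweek in rows:
    summed[(matno, custno, calweek)] += qty

print(dict(summed))  # {('10001', 'xyz', '01.2010'): 700}
```

In the cube itself, each request keeps its own rows, so the seven 100s only collapse into 700 at query time or after compression; if they are not even summed in the query, something else (for example the request dimension or an inconsistent time dimension) is keeping the keys distinct.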
regards
Preetam

Just now I performed a consistency check for the cube and am getting the following warnings:
Time characteristic 0CALWEEK value 200915 does not fit with time char 0CALMONTH val 0
Consistency of time dimension of InfoCube &1
Description
This test checks whether or not the time characteristics of the InfoCube used in the time dimension are consistent. The consistency of time characteristics is extremely important for non-cumulative Cubes and partitioned InfoCubes.
Values that do not fit together in the time dimension of an InfoCube result in incorrect results for non-cumulative cubes and InfoCubes that are partitioned according to time characteristics.
For InfoCubes that have been partitioned according to time characteristics, conditions for the partitioning characteristic are derived from restrictions for the time characteristic.
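What the check verifies can be sketched as follows (this sketch assumes ISO weeks; SAP's actual 0CALWEEK derivation can differ depending on calendar settings):

```python
from datetime import date

def time_dimension_consistent(day: date, calweek: str, calmonth: str) -> bool:
    """Check that stored 0CALWEEK / 0CALMONTH values fit the calendar day."""
    iso_year, iso_week, _ = day.isocalendar()
    return (calweek == f"{iso_year}{iso_week:02d}"
            and calmonth == f"{day.year}{day.month:02d}")

# The warning above: week 200915 stored together with month value 0.
print(time_dimension_consistent(date(2009, 4, 8), "200915", "000000"))  # False
print(time_dimension_consistent(date(2009, 4, 8), "200915", "200904"))  # True
```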
Errors
When an error arises, the InfoCube is marked as a cube with an inconsistent time dimension. This has the following consequences:
The derivation of conditions for partitioning criteria is deactivated on account of the non-fitting time characteristics. This usually has a negative effect on performance.
When the InfoCube contains non-cumulatives, the system generates a warning for each query indicating that the displayed data may be incorrect.
Repair Options
Caution
No action is required if the InfoCube does not contain non-cumulatives or is not partitioned.
If the Infocube is partitioned, an action is only required if the read performance has gotten worse.
You cannot automatically repair the entries of the time dimension table. However, you are able to delete entries that are no longer in use from the time dimension table.
The system displays whether the incorrect dimension entries are still being used in the fact table.
If these entries are no longer being used, you can carry out an automatic repair. In this case, all time dimension entries not being used in the fact table are removed.
After the repair, the system checks whether or not the dimension is correct. If the time dimension is correct again, the InfoCube is marked as an InfoCube with a correct time dimension once again.
If the entries are still being used, use transaction Listcube to check which data packages are affected. You may be able to delete the data packages and then use the repair to remove the time dimension entries no longer being used. You can then reload the deleted data packages. Otherwise the InfoCube has to be built again. -
Data Error in the Query/Report after selective data deletion for infocube
Hi Experts,
Please advise what I was missing and what went wrong...
I have a query (Forecast) on a MultiCube, which is based on 2 InfoCubes with aggregates...
As I identified some data discrepancy, yesterday I performed selective data deletion on one of the InfoCubes, executed the report, and the results in the query were correct...
When I executed the same report today, I got different results.
When I compared the results of the report with the data in the cube, they do not match.
The report is not displaying the data in the cube; for some rows it displays the data in the cube, but for some rows it just repeats the row above.
No data has been loaded into the InfoCube after the selective deletion.
Do I need to perform request compression and fill the aggregates after the selective deletion?
Please advise what went wrong.

Hi Venkat,
No, I haven't done anything to the aggregates before or after the selective delete.
As there was no data load after the selective delete, according to the SAP manual we don't need to do anything to the aggregates, since selective data deletion on the cube deletes the data from the aggregates as well.
Please advise how to identify the error.
InfoSet in SAP BI 7.10 and Key figure aggregation
HI SAP Gurus,
I am new to the SAP BI area, and I have my first problem.
I want to create a report for the profit of goods.
The cost of goods (cogs) is constant for each material for one month.
The formula to calculate the profit of goods = sales turnover – cogs of month * sales amount.
I have defined in BW a time-dependent InfoObject with the attribute cogs.
I have 2 InfoSources: an InfoCube for transactional sales data from R/3, and material cogs master data loaded from a CSV file each month into the InfoObject.
The InfoProvider for the report is an InfoSet (transactional cube and cogs InfoObject).
My problems are
1) When I create an InfoSet, SAP BW automatically creates new technical names for all characteristics and key figures; the first technical name should be an alias for each InfoCube and InfoObject in the InfoSet.
2) The new InfoSet technical names erased my aggregation reference characteristic (= calmonth).
3) In the report, the key figure cogs was aggregated over customer sales and customers; that means the value of cogs is not constant when it is aggregated according to the customer sales order.
Thanks a lot for your support
Solomon Kassaye
Munich, Germany

Solomon, find some code below for the start routine; change the fields and edit the code to suit your exact structure and requirements, but the logic is all there.
4) Create a Start Routine on the transformation from sales DSO to Profit of Goods InfoCube.
Use a lookup on the COG DSO to populate the monthly COG field in the sales data.
*Global Declaration
TYPES: BEGIN OF i_s_cog,
         /bic/goods_number TYPE /bic/a<DSO Table name>-/bic/goods_number,
         /bic/goods_name   TYPE /bic/a<DSO Table name>-/bic/goods_name,
         /bic/cog          TYPE /bic/a<DSO Table name>-/bic/cog,
         /bic/period       TYPE /bic/a<DSO Table name>-/bic/period,
       END OF i_s_cog.
DATA: i_t_cog TYPE STANDARD TABLE OF i_s_cog,
      wa_cog  LIKE LINE OF i_t_cog.
*Local Declaration
DATA: temp TYPE _ty_t_SC_1.
FIELD-SYMBOLS: <source_fields> TYPE _ty_s_SC_1.
* copy the source package so it can be used with FOR ALL ENTRIES
temp[] = SOURCE_PACKAGE[].
SELECT /bic/goods_number /bic/goods_name /bic/cog /bic/period
  FROM /bic/a<DSO Table name>
  INTO CORRESPONDING FIELDS OF TABLE i_t_cog
  FOR ALL ENTRIES IN temp
  WHERE /bic/goods_number = temp-/bic/goods_number.
SORT i_t_cog BY /bic/goods_number /bic/period.
* look up the monthly COG for each source record
LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
  READ TABLE i_t_cog INTO wa_cog
       WITH KEY /bic/goods_number = <source_fields>-/bic/goods_number
                /bic/period       = <source_fields>-/bic/period
       BINARY SEARCH.
  IF sy-subrc = 0.
    <source_fields>-/bic/cog = wa_cog-/bic/cog.
  ENDIF.
ENDLOOP.
5) Create an End Routine which calculates Profit using the formula and updates the result set with the value in the Profit column.
Given your requirement for the profit calculation
profit of goods = sales turnover – cogs of month * sales amount
Write a simple end routine yourself
*Local Declaration
FIELD-SYMBOLS: <result_fields> TYPE _ty_s_TG_1.
* the field names below are placeholders - replace them with the actual
* technical names of your key figures
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  <result_fields>-/bic/profit = <result_fields>-/bic/turnover
    - <result_fields>-/bic/cog * <result_fields>-/bic/salesamt.
ENDLOOP.
As the above start and end routines are used to enhance your sales DSO, your fields for customer number and the sales order should already be in your DSO for drilldown.
Let me know how you get on. -
Unable to consolidate data from two DSOs into an InfoCube for reporting.
Hello Experts
I am working on BW 7.0 to create BEx Report as below:
This report will have data coming from two different sources, some data from COPA DSO [such as Customer Number, Product Hierarchy1, Product Hierarchy2, Product Hierarchy3, Product Hierarchy4. Product Hierarchy5, Product Hierarchy6 and a few other Key Figures] and the rest [such as Product Hierarchy, Reference Document, Condition Type (both Active & Inactive), Condition Value and a few other Key Figures] from another DSO (ZSD_DS18) which is a copy of the BCT DSO (0SD_O06). I've chosen this DSO because this is the BCT DSO which is used to store data from a Standard Extractor 2LIS_13_VDKON.
Below are the screenshots of these 2 DSOs:
I have successfully extracted the data from 2LIS_13_VDKON (includes PROD_HIER but not Customer Number) and loaded into a DSO (ZSD_D17).
All the testing is done using only one Sales Document No (VBELN).
First test that I tried is.. to create an Infocube and loaded data from ZCOPA_01 and ZSD_DS18 and when the LISTCUBE was run on this InfoCube, the data is coming up in two different lines which is not very meaningful. Screenshot below:
Therefore, I have created another DSO (ZSD_DS17) to Consolidate the data from ZCOPA_01 & ZSD_DS18 establishing mappings between some of common Chars such as below:
ZCOPA_01 ZSD_DS18
0REFER_DOC <-> 0BILL_NUM
0REFER_ITM <-> 0BILL_ITEM
0ME_ORDER <-> 0DOC_NUMBER
0ME_ITEM <-> 0S_ORD_ITEM
51 records are loaded from ZSD_DS18 into ZSD_DS17 and 4 records are loaded from ZCOPA_01 into ZSD_DS17 for a particular Sales Document Number.
When I use a write-optimized DSO, the data comes in just 1 line, but it shows only 4 lines, aggregated, which is as expected since the W/O DSO aggregates the data. However, when I use a standard DSO, the data again splits up into many lines.
Is there something I am missing in the data model design, or does this call for some ABAP, and if so, where? Or should I talk to the functional lead and enhance the standard extractor? Even then, I would still have to bring in those key figures from ZCOPA_01 for my reporting.
Thank you very much in advance and your help is appreciated.
Thanks,
Chandu

In your (current) InfoCube setup, you could work with "constant selection" on the key figures:
for the COPA key figures, you'll need to add product hierarchy to the key figures with a selection of # and constant selection flagged
for the SD key figures, you'll have to do the same for customer & the product hierarchy levels (instead of product hierarchy)