Poor MDX performance on F4 master data lookup
Hi,
I've posted this here as it didn't get much help in the BW 7.0 forum. I'm thinking it was too MDX-oriented to get any help there. Hopefully someone has some ideas.
We have upgraded our BW system to 7.0 EHP1 SP6 from BW 3.5. There is substantial use of SAP BusinessObjects Enterprise XI 3.1 (BOXI) and also significant use of navigational attributes. Everything works fine in 3.5, and we have worked through a number of performance problems in BW 7.0. We are using BOXI 3.1 SP1 but have tested with SP2 and it generates the same MDX. We do, however, have all the latest MDX-related notes, including the composite note 1142664.
We have a number of "fat" queries that act as universes for BOXI, and it is when BOXI sends an MDX statement that includes certain crossjoins with navigational attributes that things fall apart. This is an example of one that runs in about a minute in 3.5:
SELECT { [Measures].[494GFZKQ2EHOMQEPILFPU9QMV], [Measures].[494GFZSELD3E5CY5OFI24BPCN], [Measures].[494GG07RNAAT6M1203MQOFMS7], [Measures].[494GG0N4P7I87V3YBRRF8JK7R] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0MAT_SALES__ZPRODCAT].[LEVEL01].MEMBERS, EXCEPT( { [0MAT_SALES__ZASS_GRP].[LEVEL01].MEMBERS } , { { [0MAT_SALES__ZASS_GRP].[M5], [0MAT_SALES__ZASS_GRP].[M6] } } ) ), EXCEPT( { [0SALES_OFF].[LEVEL01].MEMBERS } , { { [0SALES_OFF].[#] } } ) ), [0SALES_OFF__ZPLNTAREA].[LEVEL01].MEMBERS ), [0SALES_OFF__ZPLNTREGN].[LEVEL01].MEMBERS ), [ZMFIFWEEK].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES MEMBER_UNIQUE_NAME, MEMBER_NAME, MEMBER_CAPTION ON ROWS FROM [ZMSD01/ZMSD01_QBO_Q0010]
However, in 7.0 there appear to be some master data lookups that are killing performance before we even get to the BW queries. Note that in RSRT terms this is prior even to the popup screen with "display aggregate".
They were taking 700 seconds but now take about 150 seconds after an index was created on the ODS /BIC/AZOSDOR0300. From what I can see, the navigational attributes require BW to ask "what are the valid SIDs for SALES_OFF in this multiprovider". The odd thing is that BW 3.5 does no such query. It just hits the fact tables directly.
SELECT "SID", "SALES_OFF" FROM (
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BI0/D0PCA_C021" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDBL018" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR028" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR038" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR058" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR081" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDPAY016" "D" )
UNION
SELECT "S0000"."SID", "P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "P0000"."SALES_OFF" IN ( SELECT "O"."SALES_OFF" AS "KEY" FROM "/BIC/AZOSDOR0300" "O" )
) ORDER BY "SALES_OFF" ASC
I had assumed this had something to do with BOXI, but I don't think this is an MDX-specific problem, even though it's hard to test in RSRT as it's a query navigation. I also assumed it might be something to do with the F4 master data lookup, but that's not the case, because this "fat" query doesn't have a selection screen, just a small initial view and a large number of free characteristics. Still, I changed the characteristic settings so that lookups are not done on the master data values, and that made no difference. Nonetheless, you can see event 6001 (F4: Read Data) in the MDXTEST trace. Curiously, this is an extra event that sits between event 40011 (MDX Initialization) and event 40010 (MDX Execution).
I've tuned this query as much as I can from the Oracle perspective and checked the indexes and statistics. I've also checked that Oracle is correctly tuned and parameterized for 10.2.0.4 with the May 2010 patchset for AIX. But this query returns an estimated 56 million rows and runs an expensive UNION on them, so it's no surprise that it's slow. As a point of interest, changing it from UNION to UNION ALL cuts the time to 30 seconds. I don't think that helps me, though, other than confirming that it is the sort (for duplicate elimination) that is expensive on 56m records.
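As an aside, the UNION vs. UNION ALL difference described above (UNION must sort and deduplicate the combined branches; UNION ALL just concatenates them) can be demonstrated with a tiny, hypothetical example. The table and column names here are invented for the sketch, not the real BW tables:

```python
import sqlite3

# Two branches with identical rows, mimicking the overlapping SID sets that
# the BW-generated statement UNIONs together (names are invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branch_a (sid INTEGER, sales_off TEXT)")
conn.execute("CREATE TABLE branch_b (sid INTEGER, sales_off TEXT)")
rows = [(1, "S001"), (2, "S002"), (3, "S003")]
conn.executemany("INSERT INTO branch_a VALUES (?, ?)", rows)
conn.executemany("INSERT INTO branch_b VALUES (?, ?)", rows)

# UNION removes duplicates, which requires sorting/hashing the full result.
union = conn.execute(
    "SELECT sid, sales_off FROM branch_a UNION SELECT sid, sales_off FROM branch_b"
).fetchall()
# UNION ALL simply appends the branches with no duplicate elimination.
union_all = conn.execute(
    "SELECT sid, sales_off FROM branch_a UNION ALL SELECT sid, sales_off FROM branch_b"
).fetchall()
print(len(union), len(union_all))  # 3 6
```

On 56 million rows, that duplicate-elimination step is exactly the work UNION ALL skips, which matches the 150s vs. 30s observation.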
Thinking that the UNORDER MDX function might make a difference, I changed the MDX to the following, but that didn't make any difference either.
SELECT { [Measures].[494GFZKQ2EHOMQEPILFPU9QMV], [Measures].[494GFZSELD3E5CY5OFI24BPCN], [Measures].[494GG07RNAAT6M1203MQOFMS7], [Measures].[494GG0N4P7I87V3YBRRF8JK7R] } ON COLUMNS ,
NON EMPTY UNORDER( CROSSJOIN(
UNORDER( CROSSJOIN(
UNORDER( CROSSJOIN(
UNORDER( CROSSJOIN(
UNORDER( CROSSJOIN(
[0MAT_SALES__ZPRODCAT].[LEVEL01].MEMBERS, EXCEPT(
{ [0MAT_SALES__ZASS_GRP].[LEVEL01].MEMBERS } , { { [0MAT_SALES__ZASS_GRP].[M5], [0MAT_SALES__ZASS_GRP].[M6] } }
) ) ), EXCEPT(
{ [0SALES_OFF].[LEVEL01].MEMBERS } , { { [0SALES_OFF].[#] } }
) ) ), [0SALES_OFF__ZPLNTAREA].[LEVEL01].MEMBERS
) ), [0SALES_OFF__ZPLNTREGN].[LEVEL01].MEMBERS
) ), [ZMFIFWEEK].[LEVEL01].MEMBERS
) )
DIMENSION PROPERTIES MEMBER_UNIQUE_NAME, MEMBER_NAME, MEMBER_CAPTION ON ROWS FROM [ZMSD01/ZMSD01_QBO_Q0010]
Does anyone know why BW 7.0 behaves differently in this respect, and what I can do to resolve the problem? It is very difficult to make any changes to the universe or BEx query because there are thousands of WebI queries written on top of them, and the regression test would be very expensive.
Regards,
John
Hi John,
a couple of comments:
- first of all, you posted this in the wrong forum; it belongs in the BW forum
- the MDX enhancements with regard to BusinessObjects are part of BW 7.01 SP05, not plain BW 7.0
I would suggest you post it in the BW forum.
Ingo
Similar Messages
-
Master data Lookup missing for sold to party
The master data lookup is missing for sold-to party. The lookup flow is from E_BOM to E_BOM1.
Regards
Hi,
Check if SIDs are generated for the InfoObject; if not, run the attribute change run and then load the data to the cube.
Hope this helps...
Rgs,
Ravikanth. -
I am trying ABAP for a master data lookup in an update rule.
Here is the scenario:
There is a master data object MDABC with attributes A1 and A2. I need to map IO1 to A1 and IO2 to A2.
What should the start routine and update routine be? Please help with working code.
Thanks
sap_newbee wrote:
> Thanks Aashish ,
> Here is the code I am usind but Its not populating any result May be you could help me out in debugging
>
>
> Start Routine -
>
> DATA: BEGIN OF ITAB_MDABC OCCURS 0,
> MDABC LIKE /BIC/PMDABC-/BIC/MDABC,
> A1 LIKE /BIC/PMDABC-/BIC/A1,
> A2 LIKE /BIC/PMDABC-/BIC/A2,
> END OF ITAB_NMDABC.
>
> SELECT
> /BIC/MDABC
> /BIC/A1
> /BIC/A2
> FROM /BIC/PMDABC INTO TABLE ITAB_MDABC
> FOR ALL ENTRIES IN DATA_PACKAGE
> WHERE /BIC/MDABC = DATA_PACKAGE-/BIC/MDABC.
> ENDSELECT.
>
>
> In Update Routine for Infoobject IO1 The code Iam using is
>
>
> READ TABLE ITAB_MDABC WITH KEY
> MDABC = COMM_STRUCTURE-/BIC/MDABC
> BINARY SEARCH.
> IF sy-subrc = 0.
> RESULT = ITAB_MDABC-A1.
> ENDIF.
> RETURNCODE = 0.
>
> ABORT = 0.
>
> Please help.
> Thanks
Please use INTO TABLE in the SELECT statement without the ENDSELECT, and SORT ITAB_MDABC by the key before the BINARY SEARCH read.
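For what it's worth, the READ TABLE ... BINARY SEARCH in the update routine above only gives reliable results if ITAB_MDABC is sorted by the search key. The same requirement can be sketched in Python with the bisect module (a generic illustration, not SAP code; the sample keys are invented):

```python
import bisect

# An unsorted lookup table, as it might come back from a SELECT.
itab = [("M003", "a1-3"), ("M001", "a1-1"), ("M002", "a1-2")]
itab.sort(key=lambda row: row[0])  # equivalent of ABAP: SORT itab BY mdabc.

# Binary search only works because of the sort above.
keys = [row[0] for row in itab]
idx = bisect.bisect_left(keys, "M002")
found = idx < len(keys) and keys[idx] == "M002"
print(found, itab[idx][1])  # True a1-2
```

Without the sort, a binary search can miss rows that are actually present, which would show up as sy-subrc <> 0 and an unpopulated RESULT.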
Edited by: Ashish Gour on Oct 17, 2008 2:57 PM -
0CALDAY for time dependent master data lookup unknown when migrated to 7.0
I am in the process of migrating a number of InfoProviders from the 3.x Business Content to the new methodology for BI 2004s.
When I try to create a transformation from the Update Rules for 0PA_C01, all of the rules that use a master data lookup into 0EMPLOYEE give me the error such as "Rule 41 (target field: 0PERSON group: Standard Group): Time char. 0CALDAY in rule unknown".
How do I fix the transformation rule that is generated from the update rule for these time-dependent master data attributes?
Hi Mark,
have a look at http://www.service.sap.com/. I guess you need to implement some corrections or newer support packages.
kind regards
Siggi
PS: take a look: https://websmp104.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=941525&_NLANG=EN
Message was edited by:
Siegfried Szameitat -
Master Data lookup in Update Rule problem
Hi all,
I am currently having a problem loading data to an InfoCube using flat files.
The architecture is as follows:
1) The source of the data is a flat file
2) The data is loaded thru an Update Rule and is of type Full-Update
3) The Update Rules determines the Profit Center using the Master Data of the WBS-Element
4) The data is written in an InfoCube
This solution however does not always work as planned. In the following situation a problem occurs:
1) The flat file contains WBS-element RD.00753.02.01, which has a Profit Center attribute value 8060
2) When I load the flat file, the PC value 8060 is written into the row in the InfoCube, which is correct
3) Then I change the master data of the WBS-element by setting the Profit Center attribute value to 8068
4) I run the Attribute Change Run
5) Then i load a flat file again, which also contains WBS-element RD.00753.02.01
6) The master data attribute value should now write the value 8068 into the InfoCube. HOWEVER, this is when the evil occurs. BW does not write a PC value of 8068, but it write the value 8060. This is wrong.
Why does BW not take the newest version of the Master Data to performe the attribute value look-up? Or why doesn't BW write the correct Profit Center into the cube?
Thanks,
Onno
Hi Ricardo,
The debug via PSA simulation of the update indicates that the CORRECT Profit Center value is to be written into the InfoCube.
However, if I check the contents of the cube (after the load has finished) using the request ID, the WRONG Profit Center value is shown. This indicates that the correct master data is used, but the update of the cube is wrong. Why does this happen? The load is of type full update, so it should add a new row in the cube using the value coming from the update rule.
Onno -
Problem with master data lookup in transformations
Hi,
We're experiencing a strange problem in the transformations for an Error Reporting scenario.
We have the SAP standard customer (0CUSTOMER) in DSO 2, which derives its value from the navigational attribute of YCUSTNR (0CUSTOMER is a navigational attribute of YCUSTNR) in DSO 1. The problem is that even when there is no entry for YCUSTNR, 0CUSTOMER in DSO 2 returns a value.
A referential integrity check is not employed in this scenario. Is it because of this that the missing master data returns some value held in memory? I'm not sure whether it's an issue with the cache. Is there any workaround for solving this problem other than the standard master data lookup?
Any help in this issue would be appreciated and rewarded.
Thanks & Regards
Hari
Note 1031553 ("Reading master data returns value that does not exist") will solve this problem.
Regards,
József. -
I am building custom screens in Web Dynpro that write to an ODS table in BI.
I have a few fields, such as Customer and Profit Center, that the user has to select on the custom screen.
For now I manually created another screen, which brings up the customer list, that I pop up from screen 1 to select a customer. I am doing the same for Profit Center and the other lookup fields.
I have two questions.
1. Is there a way to directly get those lookup screens and use them in my custom screen without manually creating them?
2. If I cannot do that: I have another application where I need similar functionality, and I am ending up creating another lookup window there. Is there a way to reuse windows created in one application in another application?
How do we modularize these common lookup windows so that they can be used across multiple applications?
Thanks
Web Dynpro gurus,
Anyone who has an answer, yes or no, please respond. I am trying to understand this so that I can decide which path to go down.
Thanks
Performance tuning of Master Data Table: VBAK LIPS VBFA MSEG MKPF
Hi, ALL
How to improve performance to following statements: inner join.
SELECT LIPS~VGBEL LIPS~VBELN MSEG~MATNR MKPF~BUDAT
       MKPF~USNAM MSEG~LBKUM MSEG~BWART MSEG~WERKS
       VBAK~IHREZ MSEG~MBLNR VBAK~AUART LIPS~PSTYV
       MSEG~LGPLA MSEG~MEINS MSEG~MJAHR MKPF~MJAHR
  INTO CORRESPONDING FIELDS OF TABLE it_out
  FROM ( VBAK
         INNER JOIN LIPS
            ON LIPS~VGBEL = VBAK~VBELN
         INNER JOIN VBFA
            ON VBFA~POSNV = LIPS~POSNR
           AND VBFA~VBELV = LIPS~VBELN
         INNER JOIN MSEG
            ON MSEG~MATNR = VBFA~MATNR
           AND MSEG~MBLNR = VBFA~VBELN
         INNER JOIN MKPF
            ON MKPF~MBLNR = MSEG~MBLNR )
where VBAK~AUART in S_AUART
and VBAK~IHREZ in S_IHREZ
and LIPS~PSTYV in S_PSTYV
and LIPS~VBELN in S_VBELN
and LIPS~VGBEL in S_VGBEL
and MSEG~BWART in S_BWART
and MSEG~LBKUM in S_LBKUM
and MSEG~MATNR in S_MATNR
and MSEG~MBLNR in S_MBLNR
and MSEG~MENGE in S_MENGE
and MSEG~WERKS in S_WERKS
and MKPF~BUDAT in S_BUDAT
and MKPF~USNAM in S_USNAM.
Thank you very much.
Thanks to all of you for your suggestions.
Now I have modified my original select statement into two parts:
1. Select LIPS, VBAK and VBFA into an internal table it_A.
2. Select MKPF and MSEG into an internal table it_B FOR ALL ENTRIES IN it_A.
3. Stopped using "INTO CORRESPONDING FIELDS OF TABLE".
After that, my performance improved about 10 times.
Any other suggestions are welcome.
Performance is a forever topic, I think :)
Performance: reading huge amount of master data in end routine
In our 7.0 system, each day a full load runs from DSO X to DSO Y, in which master data from six characteristics is read into about 15 fields. DSO X contains about 2 million records, which are all transferred each day. The master data tables each contain between 2 and 4 million records. Before this load starts, DSO Y is emptied. DSO Y is write-optimized.
At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups. We redesigned it and now fill all master data attributes in the end routine, after filling internal tables with the master data values corresponding to the data package:
* Read 0UCPREMISE into temp table
SELECT ucpremise ucpremisty ucdele_ind
FROM /BI0/PUCPREMISE
INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
FOR ALL ENTRIES IN RESULT_PACKAGE
WHERE ucpremise EQ RESULT_PACKAGE-ucpremise.
And when we loop over the data package, we write something like:
LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
READ TABLE lt_0ucpremise INTO ls_0ucpremise
WITH KEY ucpremise = <fs_rp>-ucpremise
BINARY SEARCH.
IF sy-subrc EQ 0.
<fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
<fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
ENDIF.
*all other MD reads
ENDLOOP.
So the above statement is repeated for all the master data we need to read from. This method is quite a bit faster (1.5 hrs), but we want to make it faster still. We noticed that reading the master data into the internal tables still takes a long time, and this has to be repeated for each data package. We want to change this. We have now tried a similar method, but load all master data into internal tables without filtering on the data package, and we do this only once.
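The "fill the lookup buffer once, then do keyed reads per record" pattern described here can be sketched outside ABAP roughly as follows. This is a hypothetical Python illustration (a hash map instead of a sorted internal table with BINARY SEARCH), and all names and sample values are invented:

```python
# Stand-in for the master data P table (invented sample data).
premise_master = {
    "P001": {"ucpremisty": "01", "ucdele_ind": ""},
    "P002": {"ucpremisty": "02", "ucdele_ind": "X"},
}

_buffer = {}  # filled once, then reused for every data package

def lookup(ucpremise):
    # Fill the buffer on first use only (mirrors the lv_data_loaded flag).
    if not _buffer:
        _buffer.update(premise_master)  # one bulk read instead of N single reads
    rec = _buffer.get(ucpremise)
    return (rec["ucpremisty"], rec["ucdele_ind"]) if rec else ("", "")

# Stand-in for looping over RESULT_PACKAGE and assigning the attributes.
result_package = [{"ucpremise": "P001"}, {"ucpremise": "P002"}]
for row in result_package:
    row["ucpremisty"], row["ucdele_ind"] = lookup(row["ucpremise"])
```

The design point is the same as in the ABAP version: the expensive read happens once, and every record afterwards pays only a cheap in-memory lookup.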
* Read 0UCPREMISE into temp table
SELECT ucpremise ucpremisty ucdele_ind
FROM /BI0/PUCPREMISE
INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise.
So when the first data package starts, it fills all master data values, 95% of which we would need anyway. So that the following data packages can use the same tables and don't need to fill them again, we placed the definition of the internal tables in the global part of the end routine. In the global part we also write:
DATA: lv_data_loaded TYPE C LENGTH 1.
And in the method we write:
IF lv_data_loaded IS INITIAL.
lv_0bpartner_loaded = 'X'.
* load all internal tables
lv_data_loaded = 'Y'.
WHILE lv_0bpartner_loaded NE 'Y'.
Call FUNCTION 'ENQUEUE_SLEEP'
EXPORTING
seconds = 1.
ENDWHILE.
LOOP AT RESULT_PACKAGE
* assign all data
ENDLOOP.
This makes sure that another data package that already started, "sleeps" until the first data package is done with filling the internal tables.
Well, this all seems to work: it now takes 10 minutes to load everything to DSO Y. But I'm wondering if I'm missing anything. The system seems to work fine loading all these records into internal tables, but any improvements or critical remarks are very welcome.
This is a great question, and you've clearly done a good job of investigating this, but there are some additional things you should look at and perhaps a few things you have missed.
Zephania Wilder wrote:
At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups.
This is not accurate. After SP14, BW does a prefetch and buffers the master data values used in the lookup. Note [1092539|https://service.sap.com/sap/support/notes/1092539] discusses this in detail. The important thing, and most likely the reason you are probably seeing individual master data lookups on the DB, is that you must manually maintain the MD_LOOKUP_MAX_BUFFER_SIZE parameter to be larger than the number of lines of master data (from all characteristics used in lookups) that will be read. If you are seeing one select statement per line, then something is going wrong.
You might want to go back and test with master data lookups using this setting and see how fast it goes. If memory serves, the BW master data lookup uses an approach very similar to your second example (1,5 hrs), though I think that it first loops through the source package and extracts the lists of required master data keys, which is probably faster than your statement "FOR ALL ENTRIES IN RESULT_PACKAGE" if RESULT_PACKAGE contains very many duplicate keys.
I'm guessing you'll get down to at least the 1,5 hrs that you saw in your second example, but it is possible that it will get down quite a bit further.
Zephania Wilder wrote:
This makes sure that another data package that already started, "sleeps" until the first data package is done with filling the internal tables.
This sleeping approach is not necessary, as only one data package will be running at a time in any given process. I believe that the "global" internal table is not shared between parallel processes, so if your DTP is running with three parallel processes, then this table will just get filled three times. Within a process, all data packages are processed serially, so all you need to do is check whether or not it has already been filled. Or are you doing something additional to export the filled lookup table into a shared memory location?
Actually, you have your global data defined with the statement "DATA: lv_data_loaded TYPE C LENGTH 1.". I'm not completely sure, but I don't think that this data will persist from one data package to the next. Data defined in the global section using "DATA" is global to the package start, end, and field routines, but I believe it is discarded between packages. I think you need to use "CLASS-DATA: lv_data_loaded TYPE C LENGTH 1." to get the variables to persist between packages. Have you checked in the debugger that you are really only filling the table once per request and not once per package in your current setup? << This is incorrect - see next posting for correction.
Otherwise the third approach is fine as long as you are comfortable managing your process memory allocations and you know the maximum size that your master data tables can have. On the other hand, if your master data tables grow regularly, then you are eventually going to run out of memory and start seeing dumps.
Hopefully that helps out a little bit. This was a great question. If I'm off-base with my assumptions above and you can provide more information, I would be really interested in looking at it further.
Edited by: Ethan Jewett on Feb 13, 2011 1:47 PM -
Abap in Transfer Structure to lookup in Master Data
Hello Friends ,
I need help with one ABAP code...
I have an InfoObject 'Employee' which is mapped to source system 'A' in the transfer structure. But now I want it to look into my employee master data and get populated from there.
Please suggest the ABAP code for this requirement.
Thanks a lot!
Hello SAP_newbee,
Here is the code for master data lookup
DATA: L_TGTFIELD LIKE RESULT,
      L_TGTFIELD_CHVAL LIKE COMM_STRUCTURE-XXXXX.
L_TGTFIELD_CHVAL = COMM_STRUCTURE-XXXXX.     "XXXXX is the master data IO
CALL FUNCTION 'RSAU_READ_MASTER_DATA'
  EXPORTING
    I_IOBJNM  = 'XXXXX'                      "master data InfoObject
    I_CHAVL   = L_TGTFIELD_CHVAL             "value whose attribute you want
    I_ATTRNM  = 'YYYYY'                      "attribute InfoObject tech name
  IMPORTING
    E_ATTRVAL = L_TGTFIELD
  EXCEPTIONS
    READ_ERROR              = 1
    NO_SUCH_ATTRIBUTE       = 2
    WRONG_IMPORT_PARAMETERS = 3
    CHAVL_NOT_FOUND         = 4
    OTHERS                  = 5.
CASE SY-SUBRC.
  WHEN 1 OR 2 OR 3 OR 5.
*   Error during read --> skip the whole package
    ABORT = 4.
*   Copy the error message into the error log
    MOVE-CORRESPONDING SY TO MONITOR.
    APPEND MONITOR.
  WHEN 0 OR 4.
*   Attribute found successfully, or not found
    RESULT = L_TGTFIELD.
    ABORT = 0.
ENDCASE.
Thanks
Tripple k -
Production orders: Master data updating
Hi everyone,
This is my scenario:
I had created hundreds of production orders with a previously defined BOM and routing. Now both the BOM and the Routing have been modified.
I was wondering if there's any way for updating the master data in all the former production orders without doing it explicitely accessing one by one.
Points will be rewarded.
Thanks and best regards.
Ben.
Dear Ben,
I hope you can do it by mass processing of production orders.
Logistics -> Production -> Production Control -> COHV Mass Processing takes you to the Mass Processing Production Orders screen.
Transaction code: COHVOMPP
Here you have to run the capacity and availability check for all the orders. You can include the function on the mass processing tab page in COHVOMPP and also set the error log. Once you have executed, a list of production orders will be displayed; select all of them and hit the mass processing button.
2. In case your order is beyond the shop floor, you can do it through Order Change Management.
3. Or you can perform "Read PP Master Data" in CO02 (Functions tab), which will ask you to include your BOM/routing.
Hope this helps to clear up the idea.
Reward points if useful.
Regards
Jia
Hi experts!!!
I am going to load new master data into our BI system. Our BI consists of some cubes and DSOs which hold data from the past 3 years, based on the old master data. Now the master data has completely changed to new values, which will be loaded into our BI. I have the following doubt:
For the past three years the transaction data loaded into our cubes and DSOs has been based on the old master data. If I now suddenly load the NEW master data values, what will happen to the old transaction data loaded until today? I can imagine that once I load the new values, transaction data will from now on be based on the new values only. The question is: what will happen to the old transaction data?
Please consider both cases, time-independent and time-dependent master data, when reading my question.
Thanks for your time.
Hi,
Case 1 (the normal case, happens frequently): only attributes of the master data have changed.
Your transaction data would reflect the old master data where you have master data lookups or reads on the master data tables (P and Q tables for time-independent/time-dependent MD) to populate master data in the transaction data flow.
In case just the attributes on master data objects like Customer/Material have changed and these are used at query level, the queries would show the new attribute values. In case the time validity/dependency has changed, the new time validity takes effect. For example, person X is responsible for cost center A from 01/01/2010 till 31/12/2012. Now the MD changes and person Y is responsible for cost center A from 01/01/2011 till 31/12/2015. If the report is run now, person Y is shown in the report.
Case 2: the master data has big changes in the source system, e.g. customer 10000 is now 11111, or material 12345 is now 112233.
In these cases your historical transaction data would show the old master data.
For example, person X is responsible for cost center A from 01/01/2010 till 31/12/2012. Now cost center A is changed to B and person Y is responsible for the same period. Old transactions with cost center A show X as responsible, and new transactions with cost center B show Y as responsible.
Even if your master data has changed completely, there would not be any effect on the existing transaction data, as the new master data entries are written to the tables and the existing master data remains.
Hi Xperts,
I am working on a BI 7.0 system. Some of our data flows have many master data lookups. I heard that if we use master data lookups, we have to run full loads rather than deltas, because the master data changes are otherwise not reflected in the data targets. Is that correct?
Advance thanks,
Mannev.
Hi Mannev,
Any data in the data target already will not be updated unless it is in the DSO/ODS and you are overwriting it. In many cases, a full load will be necessary, however, if you don't want the master data updated, then you do not want to do a full load.
For example, if you load Material A with a Std Price of $1.00 on Monday, you probably don't want to reload that data on Friday if the Std Price changed to $1.20. It will throw off all of your historical data.
Since I don't know what you are dealing with specifically, it's hard to tell you whether or not to use a full or delta load. There are benefits to both, you just need to understand the requirements enough to make the best decision.
Thanks,
Brian -
How InfoSpoke reads time dependent master data ?
Hello Experts !!
How does an InfoSpoke read time-dependent master data?
What key date does it refer to?
Can you please explain? I want to use this concept in writing a master data lookup for time-dependent attributes of 0MATERIAL.
Thanks a lot!
You can either specify the time period in the filtering area of the InfoSpoke or implement the transformation BAdI OPENHUB_TRANSFORM to manipulate the data in whichever way suits your requirement. All time-dependent InfoObjects have DATEFROM and DATETO fields, which you can use to choose your date range accordingly.
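The DATEFROM/DATETO interval selection can be pictured like this. This is a hypothetical Python sketch of a key-date read on a time-dependent attribute table (one validity interval per row), with invented sample data, not InfoSpoke internals:

```python
from datetime import date

# Stand-in for a time-dependent attribute (Q) table of a material (invented rows).
q_table = [
    {"material": "M1", "datefrom": date(2010, 1, 1),
     "dateto": date(2010, 6, 30), "matl_group": "A"},
    {"material": "M1", "datefrom": date(2010, 7, 1),
     "dateto": date(9999, 12, 31), "matl_group": "B"},
]

def attribute_on(material, key_date):
    # Pick the validity interval that covers the key date.
    for row in q_table:
        if row["material"] == material and row["datefrom"] <= key_date <= row["dateto"]:
            return row["matl_group"]
    return None

print(attribute_on("M1", date(2010, 3, 15)))  # A
print(attribute_on("M1", date(2011, 1, 1)))   # B
```

Whatever key date you choose (or filter interval you set on the InfoSpoke) determines which of these rows is returned.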
Hope this helped you. -
Performance credit master data (FD32)--SAP Note 705317
Hi,
When running the FD32 transaction, I am changing the credit account and saving the transaction. It is taking quite a long time and ends in a dump with a TIME_OUT error.
The appropriate notes for this problem are 705317 and 99937.
I shall give the details of the note.
Symptom: Long runtimes occur for the reorganization of credit data in SD documents.
This reorganization can occur in two situations:
- You start report RFDKLI20.
- You create or change credit master data in Transaction FD32.
Reason and prerequisites: In credit management, you change assignments in Customizing, or create or change
credit master data. The transfer of the changed settings into existing SD documents
requires a reorganization of these documents.
The reorganization is triggered from Transaction FD32 or using report RFDKLI20. Since
the relevant partner function in credit management is the payer, all open sales
documents for a payer are selected for the reorganization.
The selection is performed within function module SD_CREDIT_RECREATE, which is called
from both Transaction FD32 and report RFDKLI20.
The documents are selected according to one of the two following options:
1. Partner index VAKPA
The relevant documents are selected via partner index VAKPA. In index table VAKPA,
the system updates all SD documents for a customer number in dependency on the
partner function. All documents for a payer can be determined using index table
VAKPA.
CAUTION: Partner index VAKPA is only updated if the corresponding partner
functions and transactions are activated in Customizing table TINPA. Table TINPA
does not provide any update for the payer partner function in the standard system!
2. Document table VBPA
If the update for the payer is not active in table TINPA, the relevant documents
are selected via application table VBPA. Table VBPA contains the used partners
(customer numbers) for SD documents.
The access to table VBPA with the customer number of the payer requires a long
runtime because the customer number is not a key field in table VBPA. It might
result in a TIME_OUT.
Solution
Due to the technical conditions described above, there are two possible solutions for
runtime problems in Transaction FD32 or with report RFDKLI20:
1. Activate the update of index table VAKPA for the payer partner function:
Activate the update of index table VAKPA for the payer partner function by
creating an entry with transaction = 0, partner function = RG in table TINPA. You
can use Transaction SM31 for that.
Subsequently, a reorganization of index table VAKPA is absolutely required!! You
can execute the reorganization with report RVV05IVB. Note that all existing SD
documents thus also those ones already closed are indexed again by this report.
Depending on the number of your documents in the system, this can result in a
runtime which is correspondingly long. Therefore test this beforehand in a copy of
your production system!
2. Create a database secondary index for table VBPA:
If a reorganization of index table VAKPA is not possible or not wanted, you can
optimize the access via document table VBPA. For that, create a database secondary
index for table VBPA.
We cannot provide a general recommendation on how you have to define the secondary
index in detail. This basically depends on your dataset.
Use the SQL trace with Transaction ST05 in order to analyze the access onto table
VBPA within function module SD_CREDIT_RECREATE. Then define a suitable database
secondary index for table VBPA according to your analysis results.
For example, you could create the secondary index for VBPA fields KUNNR and PARVW.
Test the new secondary index by creating an SQL trace with Transaction ST05 again
in order to check the effectiveness of the database access and the runtime.
Also take the related notes into account.
Use Note 99937 to avoid the automatic reorganization - and thus runtime problems - in
Transaction FD32. Note that this does not solve the runtime problem during the
reorganization in general, but shifts it to RFDKLI20 which you can schedule as a job.
Thus, you are not disturbed by the runtime problem.
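The effect of the secondary index suggested in the note can be illustrated generically in SQL. This is a sketch using an in-memory sqlite database, not the SE11 procedure; the index name and sample values are invented:

```python
import sqlite3

# A toy VBPA-like table: KUNNR is not part of the primary key, so a payer
# lookup by customer number would otherwise have to scan the whole table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE vbpa (vbeln TEXT, posnr TEXT, parvw TEXT, kunnr TEXT, "
    "PRIMARY KEY (vbeln, posnr, parvw))"
)
conn.execute("INSERT INTO vbpa VALUES ('0000000001', '000000', 'RG', '0000100042')")

# The note's suggestion: a secondary index on (KUNNR, PARVW).
conn.execute("CREATE INDEX vbpa_z01 ON vbpa (kunnr, parvw)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT vbeln FROM vbpa "
    "WHERE kunnr = '0000100042' AND parvw = 'RG'"
).fetchall()
print(plan)  # the plan shows a search using index vbpa_z01
```

The same idea applies on Oracle: with both equality predicates covered by the index, SD_CREDIT_RECREATE's payer selection becomes an index range scan instead of a full table scan.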
Please can anyone help me as to which table and what fields I need to use to create the secondary index?
Thank you,
Madhu.
Hi,
Instead of creating indexes, contact Basis to increase the time limit for the execution.
Regards
Shiva