How purchasing master data setup impacts the CO costing run

Dear all,
Question 1:
Could somebody with knowledge of this area of integration with CO explain the detailed process and logic of how, when a costing run is performed, the master data in the material master fields Procurement Type and Special Procurement Type, the Purchasing Information Records, and the Source List affect the success of the costing run?
Question 2:
The CO colleague complains that the Purchasing Info Record (PIR) and source list are missing for the following types of material setup in the material master:
A) Procurement type F, special procurement blank (i.e. third party)
B) Procurement type F, special procurement non-blank (e.g. intercompany, subcontracting, vendor consignment)
Hence the costing run is unable to determine the price. Is this a valid complaint?
Regards,
Daniel

Hi,
Costing is primarily driven by the price control, costing lot size and valuation class in the material master. It is equally dependent on the special procurement type, which is maintained in both the Costing view and the MRP view.
Refer to the effect of the special procurement type on costing in the help link below:
https://help.sap.com/saphelp_46c/helpdata/en/7e/cb7f5643a311d189ee0000e81ddfac/content.htm
Refer to the basics of the standard cost estimate in the link below:
Basics of SAP Standard Cost estimate- Understanding the flow of cost settings-Part 1
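For a quick self-check of the fields mentioned above, here is a minimal ABAP sketch (the material and plant values are placeholders, and the valuation area is assumed to equal the plant) that reads the costing-relevant master data from MARC and MBEW:

  REPORT zcheck_costing_master.

  " Placeholder selection values - adjust to your material and plant.
  PARAMETERS: p_matnr TYPE matnr   DEFAULT 'FERT-0001',
              p_werks TYPE werks_d DEFAULT '1000'.

  DATA: ls_marc TYPE marc,
        ls_mbew TYPE mbew.

  " Procurement type (BESKZ) and special procurement type (SOBSL) - MRP/Costing views.
  SELECT SINGLE beskz sobsl FROM marc
    INTO CORRESPONDING FIELDS OF ls_marc
    WHERE matnr = p_matnr
      AND werks = p_werks.

  " Price control (VPRSV), valuation class (BKLAS) and costing lot size (LOSGR)
  " - Accounting/Costing views; valuation area assumed to equal the plant.
  SELECT SINGLE vprsv bklas losgr FROM mbew
    INTO CORRESPONDING FIELDS OF ls_mbew
    WHERE matnr = p_matnr
      AND bwkey = p_werks.

  WRITE: / 'Procurement type:   ', ls_marc-beskz,
         / 'Special procurement:', ls_marc-sobsl,
         / 'Price control:      ', ls_mbew-vprsv,
         / 'Valuation class:    ', ls_mbew-bklas,
         / 'Costing lot size:   ', ls_mbew-losgr.

If the costing run cannot find a price for externally procured or special procurement items, these are the fields the valuation strategy evaluates before it falls back to purchasing info records and the source list.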

Similar Messages

  • Master Data Setup Steps & Execution for SNP Aggregated Planning

    Hi,
    I need some URGENT help on the master data steps and execution of SNP Aggregated Planning. I've read the SCM 5.0 help, but its explanation of the master data setup left me quite confused.
    I want to run SNP for a Group of Products at a Group of Locations (say Customer Group or Group of Distribution Centers) at an Aggregate Level & then later Disaggregate the Generated Supplies from the Aggregate Level to the detailed Product-Locations level.
    I have set up a Location Hierarchy with the group of Distribution Centers and a Product Hierarchy with a group of Products, and since the SNP_LOCPROD aggregation is assigned to the 9ASNP02 Planning Area, I created a LOCPROD Hierarchy which uses the Location & Product Hierarchies.
    The Main intention is to Improve Performance of Planning Runs at a Rough Cut  / Aggregate Level by doing a Rough Cut Capacity Check & Split Distribution Center requirements to Multiple Plants producing the same product.  This would also help increase the Optimizer Performance.
    I am finding there is no place to run the SNP Heuristic by specifying a Hierarchy in the selection screen of /sapapo/snp01. So how is Aggregated Planning even run?
    (I know there are Aggregation & Disaggregation Functions available in SNP Interactive Planning ... but what do I put in the Selection as 'Hierarchies' are not available in the Selections in the 9ASNPAGGR  Aggregation Planning Book).
    Can someone also provide me detailed steps and point me to a detailed training document on this? I could not find anything on the SDN Forum, the BPX Forum, or the Wiki site within SDN.
    Can anybody please guide me urgently? I need to test this quickly to demo the APO capabilities to someone (else the customer may lose interest in APO, as he is comparing its functionality with another tool). I don't want this to happen.
    Regards,
    Ambrish Mathur

    Harish,
    Read your blog with great interest. I think you have captured the features very nicely over there. It was very helpful.
    I still have some very, very basic questions (sorry if I was not clear earlier) and I will try to state them with an example.
    Are you saying ...
    1. If I have 2 Products FG1 & FG2, then I need to FIRST create another Product Master called FG1_2_GP (say) via Trnx. /sapapo/mat1?
    2. Similarly, for the 5 DCs, say DC1, DC2 ... DC5, I need to first create a Location of type 1002 called DC1_5_GP (say) via Trnx. /sapapo/loc3?
    I think I have not done these at all.
    3. Finally, are you saying that I need to now create the Hierarchies ZCPGPROD (using FG1 & FG2) and ZCPGLOC (using DC1, DC2 ... DC5), and then create a generated Hierarchy ZCPGLOCPROD with Structure SNP_LOCPROD, using ZCPGPROD and ZCPGLOC as Component Hierarchies?
    4. I am still not clear how you would link Product Group FG1_2_GP with ZCPGPROD. Where do we set up that FG1_2_GP is a header of FG1 & FG2?
    5. Similarly, where do you link Location Group DC1_5_GP with the Location Hierarchy ZCPGLOC?
    6. Finally, in SNP Interactive Planning with Planning Book 9ASNPAGGR, using the Location-Product (Header) as the selection, what will I enter in this field?
    I read your blog; more details on the example would help. I saw in your blog the PH_POWDER product group (containing the products PH_FG1, 2 and 3), but I was not clear whether it was a new Product you created via /sapapo/mat1 or a new Hierarchy code created via /sapapo/relhshow. Similarly, you did create the Location Group PH_DC_AGG but do not seem to have used it. I think you created the PH_POWDER group and assigned it to Locations 3000, 3100 & 3400 individually via /sapapo/mat1, but it was not clear how you linked it to the Location-Product Hierarchies. You also do not seem to have created a Location-Product Hierarchy using PH_POWDER and PH_DC_AGG at all ... or have you?
    7. Is the untold trick that the Product Hierarchy name must be the same as the Product Group created in Trnx. MAT1, and the Location Hierarchy name must be the same as the Location Group code created in Trnx. LOC3?
    8. I had created a generated Hierarchy called ZCPGLOCPROD ... is this what I enter in 9ASNPAGGR as the Selection of Location-Product (Header)?
    9. I want to first understand standard SNP before I go in the direction of creating my own Planning Areas and Planning Books. Is this really needed for me to plan 2 Products in these 5 DCs at an aggregate level? I will be aggregating all kinds of demand (STOs, POs, Forecast, Sales Orders, TLB Order Demand) and want the supply generated at the aggregated level to disaggregate to the 2 FGs at the 5 DC Locations based on the proportion of demand.
    I think my question is far more basic, i.e. about the master data setup. It would help if you could clarify each of my questions separately in your reply. The confusion is due to a lack of clarity on the master data setup, in linking the Product Group product code and the Location Group location code to the respective Hierarchies, and also because SNP Interactive Planning has no place to enter these created Hierarchies.
    I do want to praise and appreciate your contribution to the blog ... well done. Full 10 points guaranteed on a reply to the above questions.
    Regards,
    Ambrish

  • How to fetch data for sales order costing

    Hi All,
    How can I fetch the data shown under VA03 --> Extras --> Costing, including the cost element details?
    Thanks
    Gaurav

    Hi Gaurav,
    There is no standard function module that extracts this data in a single column; you have to combine the data from all of the period columns into one column yourself.
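    If you need the underlying line items rather than the screen output, one possible approach (a sketch only, not an official interface) is to read the costing tables directly, assuming you have already determined the cost estimate number (KALNR) of the sales order item; any further selection fields should be checked in your release:

      REPORT zread_so_cost_estimate.

      " The cost estimate number of the sales order item is assumed to be known.
      PARAMETERS p_kalnr TYPE keko-kalnr.

      DATA: ls_keko TYPE keko,
            lt_ckis TYPE STANDARD TABLE OF ckis,
            ls_ckis TYPE ckis.

      " Cost estimate header.
      SELECT SINGLE * FROM keko INTO ls_keko
        WHERE kalnr = p_kalnr.

      " Cost estimate line items; KSTAR is the cost element of each line.
      SELECT * FROM ckis INTO TABLE lt_ckis
        WHERE kalnr = p_kalnr.

      LOOP AT lt_ckis INTO ls_ckis.
        WRITE: / 'Cost element:', ls_ckis-kstar.
      ENDLOOP.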
    regards
    Deepak.

  • How master data in multiple languages is shared in InfoCubes

    Hi friends,
    How is master data in multiple languages shared in InfoCubes?
    Bye,
    Venkata Ramana Reddy G.

    Hi Venkata,
    In the InfoCube you only have the key. Then there is a text master data table whose key is the characteristic value (the one stored in the cube) plus the language, with the text in that language held in further fields (short, medium, long).
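    As an illustration, here is a minimal sketch of reading such a text table, using a hypothetical custom characteristic ZMATGRP; the generated table name /BIC/TZMATGRP and its exact key are assumptions, while TXTSH/TXTMD/TXTLG are the usual generated text fields:

      REPORT zread_bw_texts.

      " Hypothetical generated text table of a custom characteristic ZMATGRP.
      CONSTANTS gc_text_table TYPE tabname VALUE '/BIC/TZMATGRP'.

      " Usual layout: characteristic value + language key, then short/medium/long text.
      TYPES: BEGIN OF ty_text,
               langu TYPE sy-langu,      " language key (part of the table key)
               txtsh TYPE c LENGTH 20,   " short text
               txtmd TYPE c LENGTH 40,   " medium text
               txtlg TYPE c LENGTH 60,   " long text
             END OF ty_text.

      DATA gt_texts TYPE STANDARD TABLE OF ty_text.

      START-OF-SELECTION.
        " The cube dimension stores only the key; at query runtime the texts are
        " joined in via the SID, filtered by the requested language.
        SELECT * FROM (gc_text_table)
          INTO CORRESPONDING FIELDS OF TABLE gt_texts
          WHERE langu = sy-langu.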
    Hope this helps.
    Regards,
    Diego

  • How the master data delta  works

    Hi experts,
    I want to know how the master data delta (ALE change pointer) works. What is the principle? For example, the 0ASSET_ATTR DataSource has 20 fields; can a change to any of the 20 fields trigger the delta? And if I enhance the asset master data with 3 additional fields, can a change to those 3 fields trigger the delta?

    Application Link Enabling (ALE) change pointers are configured and used to trigger processing of an outbound process, such as data extraction, and to determine only those master data records that have changed. This is all done without a custom program being required to determine the deltas.
    The SAP-delivered program RBDMIDOC runs periodically and determines whether change pointers have been updated for specific message types. New or modified master data records are automatically initiated via ALE. RBDMIDOC references the correct IDoc program for any given type via TBDME (the TBDME table also cross-references message types with the ALE change pointer table) so that the data goes where it should and is processed accordingly. Transaction BD60 is the interface in SAP for maintaining the TBDME table.
    When a change is made to a standard content master data record, the delta will be identified. Any columns that have been enhanced in the master data will not be identified as a delta, because enhancements are a post-extraction process and only get filled after the standard content structure has been populated.
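    For illustration, a small sketch that lists the change pointers still waiting to be processed for one message type (assuming the combined table BDCP2 of newer releases; older systems use BDCP/BDCPS, and the field names should be verified there):

      REPORT zshow_open_change_pointers.

      " The message type is only an example; use the one relevant for your master data.
      PARAMETERS p_mestyp TYPE edi_mestyp DEFAULT 'COSMAS'.

      DATA: lt_cp    TYPE STANDARD TABLE OF bdcp2,
            lv_count TYPE i.

      " Unprocessed change pointers (PROCESS still initial) for the message type;
      " RBDMIDOC reads the same entries when it creates the outbound IDocs.
      SELECT * FROM bdcp2 INTO TABLE lt_cp
        WHERE mestyp = p_mestyp
          AND process = space.

      lv_count = lines( lt_cp ).
      WRITE: / 'Open change pointers:', lv_count.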

  • Master data load for 0COSTCENTER (Cost Center) failing

    Hi Experts
    I have a master data load for 0COSTCENTER (Cost Center). The load has been failing for a couple of days at the DTP.
    (R) Filter Out New Records with the Same Key
          (G) Starting Processing...
          (R) Dump: ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    I am unable to understand the reason for this failure. I tried loading the data one cost center at a time and it still fails, so I doubt it is an internal table storage issue as suggested by the dump.
    Could you help me on this one please.
    Regards
    Akshay Chonkar

    Thanks all,
    I have got the issue resolved.
    The error stack for the DTP had accumulated so many entries that it was unable to process further.
    So I deleted the entries in the error stack using SE14 and then executed my DTP again.
    Everything is fine now; thanks for your help.
    Regards
    Akshay Chonkar

  • Checking the master data automatically before the payroll run...

    Hi Friends,
    Can anyone tell me whether it is possible to check the master data automatically before the payroll run?
    If a user forgets to maintain an important infotype (like Cost Distribution) during master data maintenance, SAP still allows the payroll system to run payroll for that employee. Only at the time of account posting does the user notice the problem, and to rectify it they have to delete the payroll.
    I would therefore like to know whether there is any check point, or anything that can be customized, which checks the necessary infotypes before running payroll, generates an error message about the missing infotype, and skips processing for that person.
    With regards,
    Diptendu

    Have a look at this report, but before that make sure you know which infotypes are mandatory for the employee, so that you have an idea of which infotypes are actually maintained for the employee.
    In SE38, see report HTWLINFO and read its documentation; it might be useful.

  • Retro master data change for previous cost center

    HI All,
    Could anyone please advise how I can unblock a blocked cost center to make retro changes to the master data?
    Thank you in advance. 
    Abhay

    Hi Abhay,
    1. Unblock the cost center for the relevant period so the master data changes can be applied. Unblock it via transaction KS02.
    2. Once the legacy cost center is unblocked, immediately proceed with the master data updates.
    3. Once the master data is updated, restore the blocked status of the cost center (i.e. block it again).
    Thanks,
    Dhiraj

  • How Consolidated Master Data Gets Linked to the Transactional Data?

    Hi,
    Suppose we have three legacy systems holding customer master data. We fetch the data from these three systems into MDM and consolidate the records. Let's say there were three customer master records:
    CUST123, John James, 123 Elm Street
    CT00123, James, John, 123 Elm St.
    0000123,  John James, 123 Elm Street.
    We merge them in MDM and create one record
    NEWCUST123, James, John, 123 Elm Street
    The question is: how will the transactional data from the three legacy systems be mapped to this new customer master record? How will the purchase orders know that CUST123 is now NEWCUST123? When does this mapping happen in a project, and who does it?
    Regards,
    -T

    Hi,
    I agree with Anil on that. This is the same reason why a few projects have a key mapping repository in place, which holds the key field mapping information telling them what a given customer number corresponds to in the legacy system.
    Manual intervention is also required, as any PR/PO that is still active should be closed before such an activity.
    Thanks,
    Ravi

  • How does master data delta work?

    Can anybody please explain to me what the data flow is in master data delta? Are the ALE change pointers utilised, and if yes, how? And is the delta queue used in master data delta?

    Amit,
    As you may be aware, ALE change pointers are used to minimize the number of records to be sent to the extracting system. In other words, the master data delta is handled this way. It is best that you read the following, which should give you some idea:
    http://help.sap.com/saphelp_47x200/helpdata/en/84/63b320b55bf540aee7a40b225cda50/frameset.htm
    Hope this Helps.
    Alex.

  • How many master data tables

    Hi Friends,
    When you activate master data with all possible settings, how many tables do we get, and what are their naming conventions? I was told by somebody that we get 12 tables; is that right?
    Regards,
    Balaji K. Reddy

    Hi Balaji,
    It depends upon the settings you make when you create your info-object. If you check all the options then these tables would be created:
    1. SID table - /BI0/SXXXX
    2. View of the master data tables - /BI0/MXXXX
    3. Master data table (time-independent) - /BI0/PXXXX
    4. SID table for time-independent navigational attributes - /BI0/XXXXX
    5. Text table - /BI0/TXXXX
    6. Master data table (time-dependent) - /BI0/QXXXX
    7. SID table for time-dependent navigational attributes - /BI0/YXXXX
    8. Hierarchy table (of a characteristic) - /BI0/HXXXX
    9. SID table for hierarchies - /BI0/KXXXX
    10. SID structure of the hierarchy - /BI0/IXXXX
    11. Hierarchy interval table (of a characteristic) - /BI0/JXXXX
    where "XXXX" is a standard SAP info-object.
    All these tables are generated by the system, and the data from these tables is fetched depending on the action executed in the query. In short, we need not bother about these tables; the system takes care of them.
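    For quick reference, here is a small sketch that simply derives those generated table names from an InfoObject name (string handling only; /BI0/ applies to standard SAP InfoObjects, customer InfoObjects use the /BIC/ namespace):

      REPORT zbw_master_data_tables.

      " InfoObject name without the leading 0, e.g. COSTCENTER for 0COSTCENTER.
      PARAMETERS p_iobj TYPE c LENGTH 30 DEFAULT 'COSTCENTER'.

      TYPES ty_prefix TYPE c LENGTH 1.

      DATA: lt_prefix TYPE STANDARD TABLE OF ty_prefix,
            lv_prefix TYPE ty_prefix,
            lv_table  TYPE string.

      " One-character table prefixes of the generated master data tables.
      APPEND: 'S' TO lt_prefix,   " SID table
              'M' TO lt_prefix,   " master data view
              'P' TO lt_prefix,   " attributes, time-independent
              'Q' TO lt_prefix,   " attributes, time-dependent
              'X' TO lt_prefix,   " SIDs of time-independent navigational attributes
              'Y' TO lt_prefix,   " SIDs of time-dependent navigational attributes
              'T' TO lt_prefix,   " texts
              'H' TO lt_prefix,   " hierarchy
              'K' TO lt_prefix,   " hierarchy SIDs
              'I' TO lt_prefix,   " hierarchy SID structure
              'J' TO lt_prefix.   " hierarchy intervals

      LOOP AT lt_prefix INTO lv_prefix.
        CONCATENATE '/BI0/' lv_prefix p_iobj INTO lv_table.
        WRITE: / lv_table.
      ENDLOOP.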
    Bye
    Dinesh

  • Master data and impact of loads from transaction data

    If I have the following scenario:
    1. Load to cube which contains a dimension which has an info object 0CUSTOMER
    2. The transaction data is being loaded from a non-SAP system, and 0CUSTOMER is loaded on a daily basis from the SAP system
    3. As part of the transaction load, some customers have values which do not exist in the 0CUSTOMER InfoObject
    Do these new customers also get loaded to the 0CUSTOMER master data, i.e. are the SIDs added to the SID table for 0CUSTOMER, or are they just loaded as part of the cube and stored in the specific dimension?
    Thanks

    Hi,
    They are also automatically loaded to 0CUSTOMER, as the SID of each record is needed in the cube. If there is no relation to your R/3 customers, you may end up with some initial records in your customer master (meaning all attributes as well as the texts are initial).
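    One way to see this, as a hedged sketch (assuming 0CUSTOMER is active so the generated table /BI0/SCUSTOMER exists), is to check whether a value delivered only by the transaction load already has a SID:

      REPORT zcheck_customer_sid.

      " Example customer value coming from the non-SAP source system (placeholder).
      PARAMETERS p_cust TYPE c LENGTH 10 DEFAULT 'NEWCUST01'.

      DATA lv_sid TYPE i.

      " The transaction data load creates the SID even before the attribute load
      " from the SAP system delivers the customer; the master data record stays initial.
      SELECT SINGLE sid FROM /bi0/scustomer INTO lv_sid
        WHERE customer = p_cust.

      IF sy-subrc = 0.
        WRITE: / 'SID already exists for', p_cust, ':', lv_sid.
      ELSE.
        WRITE: / 'No SID yet for', p_cust.
      ENDIF.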
    regards
    Siggi

  • How to make a master data object available for ACR

    Hi,
    I remember that there is a program we can run to make the master data object available for the ACR (attribute change run).
    I cannot recall the program name. Please advise.
    Thanks,
    Dev

    Hi,
    InfoObjects and hierarchies that are planned for a structural change are selected by default. If you go to RSA1 >> Administrator Workbench >> Tools >> Hierarchy/Attribute Changes from the menu and click on the InfoObject List or Hierarchy List, you will find that those InfoObjects are already selected.
    If you want to run the ACR for some InfoObjects manually, you can either do it in RSA1, by selecting only those InfoObjects for which you want to run the ACR, or you can use the program RSDDS_AGGREGATES_MAINTAIN.
    If a particular InfoObject is not in the InfoObject or Hierarchy List in RSA1, that means no structural changes are required and there is no need to run the ACR.
    Regards,
    Debjani
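    For example, a minimal sketch of triggering the change run from your own program (the report name RSDDS_AGGREGATES_MAINTAIN is as given above; the variant name below is a placeholder, and since the selection options vary by release the report is normally scheduled with a saved variant):

      REPORT ztrigger_change_run.

      " Run the attribute change run via the standard report.
      " 'ZDAILY_ACR' is a placeholder variant listing the InfoObjects/hierarchies to adjust.
      SUBMIT rsdds_aggregates_maintain
        USING SELECTION-SET 'ZDAILY_ACR'
        AND RETURN.

      WRITE: / 'Attribute change run submitted.'.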

  • Master data loads vs attr change runs

    Hi experts,
    I need to load master data daily and want to know the difference between attribute change runs and process chains for master data. Please explain with steps. I know how to create process chains.

    An attribute change run is nothing but adjusting the master data after it has been loaded, from time to time, so that the SIDs can be changed, generated or adjusted and you do not run into problems when loading transaction data into the data targets.
    Whenever master data attributes or hierarchies are loaded, an attribute change run needs to be performed, for the following reasons:
    1. When master data is changed/updated, it gets loaded into the master data table as version "M" (modified). It is only after an attribute change run that the master data becomes activated, i.e. version "A".
    2. Master data attributes and hierarchies are also used in cube aggregates. So, to update the latest master data attribute values in the aggregates, an attribute change run needs to be performed.
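    To make the version handling above concrete, here is a small sketch using 0COSTCENTER as an example (the generated table /BI0/PCOSTCENTER with its OBJVERS field follows the standard pattern) that counts records still waiting for the change run:

      REPORT zcheck_master_data_versions.

      DATA: lv_modified TYPE i,
            lv_active   TYPE i.

      " 'M' = loaded but not yet activated, 'A' = active version used in reporting.
      SELECT COUNT( * ) FROM /bi0/pcostcenter INTO lv_modified
        WHERE objvers = 'M'.

      SELECT COUNT( * ) FROM /bi0/pcostcenter INTO lv_active
        WHERE objvers = 'A'.

      WRITE: / 'Modified records (waiting for change run):', lv_modified,
             / 'Active records:', lv_active.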
    Re: Attribute Change Run
    Re: Attribute Change Run for Hierarchy

  • About master data activation and the attribute change run

    Is there any difference between master data activation and the attribute change run?

    Hi,
    Master data activation means activating the master data: once you extract the data from the source system, the data is in an inactive state, so in order to make the data available we activate the master data.
    Attribute change run: it is used to update the newly changed master data records in the hierarchies and aggregates.
    If the master data is already used in aggregates of InfoCubes, you cannot activate the master data individually. In this case we use the attribute change run.
    For the attribute change run there is a direct tcode, RSATTR, or in the main menu choose the path Tools > Hierarchy/Attribute Change.
    regards
    KP
