Go Live Strategy for CO/MM

Hi,
Please suggest a go-live strategy for CO/MM, especially from the product costing point of view.
My question is whether the material prices of FG and SFG should be uploaded along with the quantities.
1. Should these prices be maintained in the material master before executing the standard cost estimate?
2. What happens if standard prices are maintained (based on legacy stock valuation) and the standard cost estimate is run afterwards? Would the difference be recorded as a material revaluation?
3. Suppose go-live is on the 1st and the stock valuation as of the 31st is available only on the 3rd. In that case I would have to execute the standard cost estimate earlier, to ensure production and despatches are not stopped on the 1st and 2nd. How would the FG/SFG value from the initial upload then match the material master?
Please suggest the implications as well as the best practice.
Regards

Hi Swapnik,
Your understanding is correct. Please go through the following.
There are two strategies for this:
1. The one I explained previously. In this case you need to ask the core team to estimate the expected price difference (PDiff) in SAP and post a suitable JV to neutralise the balances at the time of upload itself. Once you have released the cost estimates for the products and the inventory quantity balances are available in Excel (or the legacy system), you will know exactly what the difference between the legacy inventory value and the SAP inventory value is going to be. The business even gets a chance to reconsider its quantity structure and other inputs.
This option is advisable when the core team has systematic data and a clear approach to the standard costs of the products.
2. The second one is your strategy:
Upload the inventory with values, then run the standard cost estimate. In this method you should be sure of how much PDiff is going to hit your P&L. You can even change the sequence of the steps you mentioned as follows:
1. Upload material (input) quantities and values.
2. Upload the stock of FG and SFG with values; this tallies with the GL balance upload.
3. Run and release the standard cost estimates for SFG and FG; the difference is posted to the material revaluation account.
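To put made-up figures on step 3: if 1,000 units of an FG were uploaded at the legacy value of 100 per unit (100,000 in stock and in the GL upload) and the released standard cost estimate works out to 95 per unit, the release revalues the stock to 95,000 and posts the 5,000 difference to the material revaluation account. That 5,000 is the PDiff you should be able to estimate in advance and, if required, neutralise as described in method 1.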
Whether to follow method 1 or method 2 is a decision you are best placed to make, depending on your client. It is an open secret that clients tend to adjust their data heavily so that they do not face problems once they are in SAP; method 1 is the more suitable choice in such cases.
Hope this helps; revert for further explanation.
Srikanth Munnaluri
Edited by: Srikanth Munnaluri on Apr 18, 2009 12:16 AM

Similar Messages

  • Cutover strategy for BW

    BW Gurus,
    What is the best cutover strategy for BW? We are planning to do a system copy for go-live.
    1) What are the points/instructions that need to be completed on the original system before doing a system copy?
    2) What are the steps for the Basis team?
    3) What should be the status of the outbound queues on R/3 and other source systems in the original system?
    If the answer is put in points, it will be appreciated.
    Thanks
    Simmi

    Dear Simmi,
    For the delta queue, check this:
    https://websmp104.sap-ag.de/~sapidb/011000358700001584392004#17
    "I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?"
    Before you copy a source client or source system, make sure that your deltas have been fetched from the DeltaQueue into SAP NetWeaver BI and that no delta is pending. After the client copy, an inconsistency might occur between SAP NetWeaver BI delta tables and the OLTP delta tables as described in Note 405943. After the client copy, Table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case since delta depends on both the SAP NetWeaver BI system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in SAP NetWeaver BI when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.

  • Release strategy for PO and Contract

    Hi,
    I maintain these characteristics for PO:
    Plant
    Doc Type
    Net Order Value
    Purchasing Group
    I maintain these characteristics for contract:
    Company code
    Target Value
    Doc Type
    Purchasing group
    My class consists of:
    Plant
    Doc Type
    Net Order Value
    Purchasing Group
    Company code
    Target value
    Q1: Do I need to maintain ALL characteristic values when I define my release strategy for PO or contract, given that some characteristics apply only to POs and others only to contracts?
    Q2: If I do have to maintain all characteristic values, what are my options? Can I leave them blank/empty?
    Currently my PO release works fine, but when I maintain the contract release inside the same class, my PO release stops triggering. I maintained different release groups for PO and contract.
    Thanks for your advice.

    First of all, purchase orders and contracts are both held in table EKKO, so there is no need to maintain separate release setups unless your business requires it. Take the document type as one of the characteristics; it will distinguish whether the document is a PO or a contract (e.g. CEKKO-BSTYP 'F' for purchase orders and 'K' for contracts).
    You have to maintain all the characteristic values; only then will the release strategy be triggered.
    Check the release indicator as well.
    Hope it will help.

  • Release strategy for quantity contract

    Hi Gurus,
    In a quantity contract the target value (in the header details) is not filled, since it is a quantity contract.
    We may have different line items with different costs.
    My doubt is how the release strategy will be triggered for a quantity contract and how the system will consider the whole value of the contract.
    please explain/clarify
    regards
    subbu

    Hi,
    The release strategy for the contract works by the same mechanism as for the PO, i.e. it is determined based on the total net value of the entire purchasing document.
    Cheers,
    HT

  • Release Strategy for SA/Contract/PO

    Hi friends,
    Should we have different release groups and release codes for contracts, SAs and POs?
    Because...
    I have maintained 4 characteristics in the release strategy for PO [company code, plant, purchasing org, total net order value].
    But I want to maintain only 2 characteristics in the release strategy for contracts [plant, total net order value],
    because the same class is reflected in the contract configuration as well.
    Is this possible? Please help me, friends.
    Prabhu

    Hi Prabhu,
    Unfortunately, you need to maintain all possible characteristic values for company code and plant for contracts in order to subject them to the release strategy.
    I wish SAP were more flexible with the release of POs, contracts and RFQs, but it is not.
    Regards

  • Release Strategy for Contract is not getting picked whereas it is ok for PO

    Dear Experts,
    I have created a release strategy for PO with document type, purchasing org and net value as characteristics, FRG_EKKO as the class, and release group 02.
    I kept NB, FO, MK and WK as characteristic values. It works fine for POs, but it is not getting picked for contracts. We need the same release strategy for POs as well as contracts.
    Kindly resolve the issue.
    Satish

    Hi,
    Create another characteristic for CEKKO-GNETW, maintain the MK and WK values in its value field, assign this characteristic to your class, and save.
    Thanking you

  • Data conversion strategy for new SOB

    Dear Viewers
    on 11.5.10
    We are creating a new SOB with a change in currency from Feb-11, as this is the requirement.
    For this we need to do a data conversion.
    I am unsure about Purchase Orders and Sales Orders.
    Purchase Orders:
    Open purchase orders will be converted, meaning the unfulfilled POs, i.e. the ones not yet received and fully open.
    For POs which have been received but not delivered, users have been requested to clear the in-transit receipts.
    What should be done for POs which are partially received?
    A PO that is fully received and delivered will not be converted to the new SOB, as it is not an open PO,
    but if the invoice comes after Feb-11, how will the matching be done?
    What if a return has to be made going forward in Feb-11 under the new SOB?
    Sales Orders:
    Open sales orders will be converted, that is, the ones that have been entered but not yet booked.
    Users have been requested to clear off the sales order lines which are already pick-confirmed but not yet shipped, so they will be shipped and interfaced to AR.
    For the sales orders that have been booked, the lines that have not yet been processed further will also be converted.
    Now, if a receipt comes after Feb-11, how should this be handled, given that the sales order would not have been converted?
    Please give your advice on the data migration strategy for POs and SOs,
    and please add any point that I may have missed.
    Appreciate your help
    Thanks
    Emm

    Hi David,
    For master data conversion you can use LSMW and the RE-FX BAPIs (please refer to SAP note [782947|https://service.sap.com/sap/support/notes/782947]).
    Regards, Franz

  • Release strategy for Schedule lines in SD

    Hello All
    Can we create a release strategy for schedule lines in a scheduling agreement? If yes, how?
    I have already maintained a release strategy at item level, which I want to apply at schedule-line level as well.
    Thanks in advance.

    Hello,
    I think you are in the wrong forum. Please choose the right forum category and post it there.
    Matthias

  • Release strategy for Scheduling Agreement(SA)

    Dear Forum,
    This issue is with regard to the release strategy for scheduling agreements (SA).
    Until now, release has only been applicable to contracts at our client site, but we now plan to introduce it for SAs as well. Such a feature is available in SAP, but the problem is that with the given release strategy GR is not allowed until the release has been effected, yet the system still allows you to post the delivery schedule.
    What I want is that, after the SA is created, the system should not allow a delivery schedule to be created if the release has not been effected.
    We also have MRP runs at the client site. Kindly suggest how to configure this. Is there any user exit for the same? If so, please discuss.
    Warm Regards
    Nainesh
    SAP ECC 6.0

    Hi,
    What is the SA type that was created via MRP?
    If it is LP (without release documentation), the release will not do anything, but
    if it is LPA (with release documentation), you will not be able to post the GR until the release has been effected.
    Thanks.
    Scheduling agreements can be categorised as
    1) LP (without release documentation) and
    2) LPA (with release documentation).
    With LP you can do a GR without a release, but with LPA you need to release the schedule lines via ME38.
    Thanks.
    Dear Mr. Satish,
    Thanks a lot for your prompt reply. My point is that I want the system not to allow a delivery schedule to be created if the release has not been effected, even though the schedule lines are generated through the MRP run. Is this possible, by any chance?

  • How to check LIV made for a partner vendor in Schedule agreement or P.O.?

    I have defined a partner vendor in a scheduling agreement and posted the LIV (logistics invoice verification) for a different vendor during MIRO. I need to check this in the system. What is the procedure for viewing the LIV made for a partner vendor, with details such as LIV number, amount, partner vendor, condition type, etc.?

    Hi,
    Check this to see if it helps: Re: MM LIV-Report
    Thanks,
    Gordon

  • How to setup a release strategy for store generated purchase order

    Hi there,
    Does anybody know how to set up a release strategy for store/plant-generated purchase orders? I have a request from our client, but I have never come across this before. Please help and let me know the steps in detail.
    Many thanks for your help.
    Kind Regards,
    2tea

    Please go through the release procedure below and check whether you have maintained all the settings properly.
    PO RELEASE STRATEGY
    The release code is a two-character ID that allows a person to release (clear or approve) a requisition or an external purchasing document. Release codes are controlled via a system of authorizations (authorization object M_EINK_FRG).
    Use SE12, structure CEKKO, to check all the fields available for controlling the Purchase Order.
    For example, if the total value of the Purchase Order exceeds 10,000, release strategy 01 is assigned to the Purchase Order. Only one characteristic is created in this example. To control the Purchase Order document type, create a characteristic for CEKKO-BSART with the value NB.
    CT04 - Create Characteristic, e.g. NETVALUE
    Click Additional data, enter table name CEKKO and field name GNETW, and press Enter
    (for a currency-dependent field, you are prompted to enter the currency into which the system converts the currency of the purchasing document)
    In the Basic data (X refers to a tick):
    X Multiple values
    X Interval values
    In the Value data, in the Char. value column, type >10000 and press Enter
    Save your data
    CL02 - Class
    Class - Create REL_PUR
    Class type - 032
    Click Create
    Description - Release Procedure for Purchase Order
    In the Same Classification section, click Check with error
    In the Char. (characteristic) tab, type NETVALUE to assign your characteristics to the class
    OMGS - Define Release Procedure for Purchase Order Type
    Release Group - New entries
    Rel.group Rel. Object Class Description
    02 REL_PUR Rel. Strategy for PO
    Release codes - New entries
    Grp Code
    02 01
    Release indicators
    Release indicators Release Description
    0 Blocked
    1 X Release
    Release Strategy
    Release group 02
    Rel.strategy 01
    Release codes 01
    Release status 01
    Classification Choose your check values
    OMGSCK - Check Release Strategies
    (make sure there are no error messages)
    If the Purchase Order has not been released, buyers will not be able to print it.
    Goods receipt attempts will show message no. ME 390 - Purchasing document XXXXXXX not yet released.
    In 4.6C, Purchase Orders with a release strategy have a tab at the end of the header, which allows buyers to check the release status of the Purchase Order.
    The person with the release authorization has to use ME28 to release the Purchase Order.
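    As a small addition (not part of the original reply): a minimal ABAP report sketch that lists purchase orders of a given release group which are still awaiting release. It assumes the standard EKKO release fields FRGGR (release group), FRGKE (release indicator) and FRGZU (release status); please verify these field names in your system before using it.
    REPORT zmm_po_release_check.
    " Hypothetical check report: show purchasing documents not yet released.
    TYPES: BEGIN OF ty_po,
             ebeln TYPE ekko-ebeln,  " purchasing document number
             bsart TYPE ekko-bsart,  " document type (NB, FO, MK, WK, ...)
             frgzu TYPE ekko-frgzu,  " release status
           END OF ty_po.
    DATA: lt_po TYPE STANDARD TABLE OF ty_po,
          ls_po TYPE ty_po.
    SELECT ebeln bsart frgzu
      FROM ekko
      INTO TABLE lt_po
      WHERE frggr = '02'      " release group as defined in OMGS
        AND frgke = space.    " release indicator still initial = not yet released
    LOOP AT lt_po INTO ls_po.
      WRITE: / ls_po-ebeln, ls_po-bsart, ls_po-frgzu.
    ENDLOOP.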
    Regards,
    Ashok

  • Long-term  retention backup strategy for Oracle 10g

    Hello,
    I need to design an archivelog, level 0 and level 1 long-term retention backup strategy for Oracle 10g. It must be based on RMAN with tapes.
    Could somebody tell me the best way to configure long-term retention backups in Oracle? Is "CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1825 DAYS;" possible?
    Regards and thanks in advance

    Hello;
    RECOVERY WINDOW takes an integer, so yes, it is possible. Does it make sense, though?
    Later: my bad, this is Oracle 11 only (the gavinsoorma link). It is getting harder and harder to think about 10. I don't think I would use RECOVERY WINDOW.
    I would let the tape backup system handle the retention of your RMAN backups.
    Using the CROSSCHECK and DELETE EXPIRED commands you can keep the catalog current with the backups actually available on tape.
    Oracle 10
    Keeping a Long-Term Backup: Example
    http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmbackp.htm#i1006840
    http://web.njit.edu/info/oracle/DOC/backup.102/b14191/advmaint005.htm
    Oracle 11
    If you want to keep 5 years' worth of backups, I might just use KEEP with a specific date via the UNTIL TIME clause:
    keep until time 'sysdate+1825'
    More info:
    RMAN KEEP FOREVER, KEEP UNTIL TIME and FORCE commands
    http://gavinsoorma.com/2010/04/rman-keep-forever-keep-until-time-and-force-commands/
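    A rough sketch only (the 1825-day figure just mirrors the question, and the commands assume your tape channels/MML are already configured): a long-term KEEP backup plus the catalog housekeeping mentioned above could look like this on 10g:
    # take a backup that RMAN keeps for ~5 years, exempt from the normal retention policy
    BACKUP DATABASE KEEP UNTIL TIME 'SYSDATE+1825' LOGS;
    # housekeeping so the catalog matches what is really still available on tape
    CROSSCHECK BACKUP;
    DELETE NOPROMPT EXPIRED BACKUP;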
    Best Regards
    mseberg
    Edited by: mseberg on Dec 4, 2012 7:20 AM

  • How to turn on / off live updating for all smart playlists at once?

    Hello. Is there a way to turn live updating on or off for all my smart playlists at once? That would be a fantastic feature, since too many smart playlists with live updating turned on bring iTunes to a crawl.
    Currently, I run iTunes on Windows 7.  I have over 40,000 songs, and I have hundreds of smart playlists to help organize all my songs and what's on my various iPods at any time.  The problem with having so many smart playlists is that iTunes runs very slow.  One thing that I do to make iTunes run slightly faster is to turn off live updating for as many smart playlists as I can stand (by going into each one and editing), but then it's hard to tell which have live updating turned on and which don't.  As a result, many of my smart playlists don't get updated when I would like them to because I have to go in and manually edit each one again to turn it back on.  Maybe showing which have live updating turned on or off would be useful, too? 
    On a related note, my iPod Classic will actually not work when I sync too many smart playlists.  This has been a long-standing problem with iPods, and I'm surprised that it hasn't been fixed yet.  I have not experimented to see if it makes any difference if the smart playlists have live updating turned on or off. 
    Thanks.

    This is exactly what I was hoping to see, but I can't get it to run right on iTunes 12 under Yosemite.
    It also doesn't seem to recognize "Home Videos" or "Audiobooks" as special and tries to turn them off. I modified the earlier script to add those two to the list of exclusions, but I still get the same error message on the first "unspecial/real" smart playlist. Has something changed in the object model, perhaps?
    Thank you muchly!
    =mjb
    tell application "iTunes"
    activate
      set allPlaylists to (get every user playlist)
      repeat with thisPlaylist in allPlaylists
      if (thisPlaylist is smart) and name of thisPlaylist is not in {"Music", "Movies", "Podcasts", "Books", "TV Shows", "iTunes U", "Apps", "Ping", "iTunes DJ", "Genius", "Home Videos", "Audiobooks"} then
      set view of browser window 1 to thisPlaylist
      tell application "System Events"
      tell process "iTunes"
      tell menu bar 1
      tell menu bar item "File"
      tell menu "File"
      click menu item "Edit Smart Playlist"
      end tell
      end tell
      end tell
      set theCheckbox to checkbox "Live updating" of window 1
      tell theCheckbox
      if (its value as boolean) then click theCheckbox
      end tell
      click button "OK" of window 1
      end tell
      end tell
      end if
      end repeat
    end tell

  • What is your strategy for form validation when using MVC pattern?

    This is more of a general discussion topic and will not necessarily have a correct answer. I'm using some of the Flex validator components in order to do form validation, but it seems I'm always coming back to the same issue, which is that in the world of Flex, validation needs to be put in the view components since in order to show error messages you need to set the source property of the validator to an instance of a view component. This again in my case seems to lead to me duplicating the code for setting up my Validators into several views. But, in terms of the MVC pattern, I always thought that data validation should happen in the model, since whether or not a piece of data is valid might be depending on business rules, which again should be stored in the model. Also, this way you'd only need to write the validation rules once for all fields that contain the same type of information in your application.
    So my question is, what strategies do you use when validating data and using an MVC framework? Do you create all the validators in the views and just duplicate the validator if the exact same rules are needed in some other view, or do you store the validators in the model and somehow reference them from the views, changing the source properties as needed? Or do you use some completely different strategy for validating forms and showing error messages to the user?

    Thanks for your answer, JoshBeall. Just to clarify, you would basically create a subclass of e.g. TextInput and add the validation rules to that? Then you'd use your subclass when you need a textinput with validation?
    Anyway, I ended up building sort of my own validation framework. Because the other issue I had with the standard validation was that it relies on inheritance instead of composition. Say I needed a TextInput to both check that it doesn't contain an empty string or just space characters, is between 4 and 100 characters long, and follows a certain pattern (e.g. allows only alphanumerical characters). With the Flex built in validators I would have to create a subclass or my own validator in order to meet all the requirements and if at some point I need another configuration (say just a length and pattern restriction) I would have to create another subclass which duplicates most of the rules, or I would have to build a lot of flags and conditional statements into that one subclass. With the framework I created I can just string together different rules using composition, and the filter classes themselves can be kept very simple since they only need to handle a single condition (check the string length for instance). E.g. below is the rule for my username:
    library["user_name"] = new EmptyStringFilter( new StringLengthFilter(4,255, new RegExpFilter(/^[a-z0-9\-@\._]+$/i) ) );
    <code>library</code> is a Dictionary that contains all my validation rules, and which resides in the model in a ValidationManager class. The framework calls a method <code>validate</code> on the stored filter references which goes through all the filters, the first filter to fail returns an error message and the validation fails:
    (library["user_name"] as IValidationFilter).validate("testuser");
    I only need to setup the rule once for each property I want to validate, regardless where in the app the validation needs to happen. The biggest plus of course that I can be sure the same rules are applied every time I need to validate e.g. a username.
    The second part of the framework basically relies on Chris Callendar's great ErrorTipManager class and a custom subclass of spark.components.Panel (in my case that seemed like a reasonable place to put the code, although perhaps extending Form would be even better). ErrorTipManager allows you to force open an error tooltip on a target component easily. The subclass I've created basically lets me extend it whenever I need a form and pass in an array of the inputs I want to validate in the creationComplete handler:
    validatableInputs = [{source:productName, validateAs:"product_name"},
                         {source:unitWeight, validateAs:"unit_weight", dataField:"value"},
                         {source:unitsPerBox, validateAs:"units_per_box", dataField:"value"},
                         {source:producer, validateAs:"producer"}];
    The final step is to add a focusOut handler on the inputs that I want to validate if I want the validation to happen right away. The handler just calls a validateForm method, which in turn iterates through each of the inputs in the validatableInputs array, passing a reference of the input to a suitable validation rule in the model (a reference to the model has been injected into the view for this).
    Having written this down I could probably improve the View side of things a bit, remove the dependency on the Panel component and make the API easier (have the framework wire up more of the boilerplate like adding listeners etc). But for now the code does what it needs to.

  • Best strategy for variable aggregate custom component in dataTable

    Hey group, I've got a question.
    I'd like to write a custom component to display a series of editable Things in a dataTable, but the structure of each Thing will vary depending on what type of Thing it is. So, some Things will display radio button groups (with each radio button selecting a small set of additional input elements, so we have a vertical array of radio buttons and, beside each radio button, a small number of additional input elements), some will display text-entry fields, and so on.
    I'm wondering what the best strategy for tackling this sort of thing is. I'm sort of thinking I'll need to do something like dynamically add to the component tree in my custom component's encodeBegin(), and purge the extra (sub-) components in encodeEnd().
    Decoding will be a bit of a challenge, maybe.
    Or do I simply instantiate (via constructor calls, not createComponent()) the components I want and explicitly call their encode*() and decode() methods, without adding them to the view tree?
    To add to the fun of all this, I'm only just learning Faces (having gone through the Dudney book, Mastering JSF, and writing some simpler custom components) and I don't have experience with anything other than plain vanilla JSP. (No EJB, no Struts, no Tapestry, no spiffy VisualDevStudioWysiwyg++ [bah humbug, I'm an emacs user]). I'm using JSP 2.0, JSF 1.1_01, JBoss 4.0.1 and JDK 1.4.2. No, I won't upgrade to 1.5 (yet).
    Any hints, pointers to good sample code? I've looked at some of the sample code that came with the RI and I've tried to navigate the JSF Blueprints stuff, but I haven't really found anything on aggregating components into a custom component. Did I miss something obvious?
    If this isn't a good question, please let me know how I can sharpen it up a bit.
    Thanks.
    John.

    Hi,
    We're doing something very similar. I had a look at the Tomahawk Date component, and it seems to dynamically create InputText components in encodeEnd(). However, it doesn't decode these directly (it only expects a single textual value). I expect you may have to check the request yourself in decode().
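    For what it's worth, here is a purely hypothetical sketch (not the Tomahawk code; the ":amount" sub-field id is invented) of a component extending UIInput that reads a dynamically rendered field straight from the request in decode():
    import java.util.Map;
    import javax.faces.component.UIInput;
    import javax.faces.context.FacesContext;

    // Hypothetical custom component: decodes one dynamically rendered sub-field
    // by reading the request parameter map directly, as suggested above.
    public class ThingInput extends UIInput {
        public void decode(FacesContext context) {
            Map params = context.getExternalContext().getRequestParameterMap();
            String submitted = (String) params.get(getClientId(context) + ":amount");
            if (submitted != null) {
                setSubmittedValue(submitted);
            }
        }
    }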
    Other ideas would be appreciated, though - I'm still new to JSF.
