Best practice for centralized material planning in a distributed environment

Hi All - Hope everybody is doing fine. Things are going fine at my end.
We are debating the organizational hierarchy decision at our current utilities client. We are proposing the following:
1)     We will have the central warehouse as its own plant/storage location/warehouse.
2)     Create one plant (it will be virtual only) and 65 storage locations. Plant maintenance orders will be created at the storage location level and will generate the reservation at the plant level. This reservation will be converted into an STO during the MRP run. The central warehouse plant will use this STO to create the outbound delivery and then the shipping documents. APO will be fine with this arrangement, as all the requirements are centralized. Now my problem is that the reservation doesn't carry any delivery address to copy into the STO. To populate the delivery address automatically, I believe we will need a modification / enhancement. Am I right? If so, what type of modification will we require, and what will it mean effort-wise (hours etc.) and time-wise?
Any help is highly appreciated.
Best regards,
KHAN

Hi,
If all 65 storage locations are going to be just different areas within the same physical site (address) then that should be OK, but if any of those storage locations has a different site address from the others then I would suggest that they should be Plants and NOT storage locations.
The reason you are having problems with addresses is that in a plant-to-plant reservation the plants have different addresses.
PLEASE don't use storage locations for different sites; if you DO, this will only be the first of MANY modifications that you will have to make.
Yes, there will be more master data if they are all set up as plants, but how many times does master data change compared to how many receipts, issues, transfers etc. will occur? I would rather compromise on master data maintenance than have to compromise on EVERY movement.
I can't stress this strongly enough: if you use storage locations when you should be using plants, you WILL lose major functionality and the SAP implementation will be seen as a poor one.
I have seen many implementations first-hand where this has been the one influencing factor that resulted in a poor implementation instead of a good one.
Steve B

Similar Messages

  • Best practices for loading APO planning book data to a cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen two types of design:
    1) The planning book extractor data is loaded first to a cube within the APO BW system, and then transferred to the actual BW system. Reports are run from the actual BW system's cube.
    2) The planning book extractor data is loaded directly to a cube within the actual BW system.
    We do these data loads once a day, during the evening hours.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for BACKUP purposes), weekly or monthly, depending on how much data change you expect and how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the BW staging layer, and then to BW cubes, for reporting. For DP this is monthly, for SNP daily.
    You can also use option 1 that you mentioned. In that case, the APO cube is the backup cube, while the BW cube is the one you would use for reporting, and the BW cube gets its data from the APO cube.
    The benefit there is that data has to be extracted from the planning area only once, so the planning area is available to jobs/users for more time. However, backup and reporting extraction get mixed together, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently and have yet to see the full impact.
    Thanks - Pawan

  • What is the best practice for APO - Demand planning implementation

    Hi,
    My client wants to implement Demand Planning.
    The client has come up with one scenario: a new customer is created in ECC, and if I use the BI and then APO flow for Demand Planning, the user will have to wait another day (as BI always has a one-day delay).
    For this scenario the user is insisting on a direct ECC to APO-DP interface.
    Will anybody suggest what the best practice for Demand Planning should be?
    ECC -> standalone BI -> planning area (planning is done in APO) -> standalone BI
    or ECC -> APO-DP (planning is done in APO) -> standalone BI system
    I hope I have been able to explain my scenario.
    Regards,
    Saurabh

    Any suggestions!!

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best practice recommendations for enterprise MediaNet design specify that VLANs be local to a switch / switch stack (i.e., to limit the scope of spanning tree).
    In the wireless world, this causes problems if you want roaming users to keep real-time applications up and running. Every time they connect to a new AP on a different VLAN, they need to get a new IP address, which interrupts real-time apps.
    So... the best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a very few /20 subnets bridged over entire large sites.
    These several large sites (with a few hundred wireless users per site) are connected to an HQ location (where the 7510s in failover mode are installed) via 1G Ethernet hand-offs (MPLS at the WAN provider). The 7510s are new and are replacing older controllers at the HQ location.
    The internal employee wireless users use resources both local to their site and centralized. There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only. (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.)
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would traverse the WAN twice. Too much bandwidth would be used. That implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest going routed to the closet, and that breaks seamless roaming for users...
    This conundrum is why I thought I'd post here and see if there was some other cool / nifty solution I wasn't yet aware of.
    The only other solution (possibly friendly to both needs) I'd thought of was to GRE tunnel a subnet from each closet to the collapsed Core / Distribution switch at each site. Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future) is to use LAN switches such as 3850s that have WLC functionality built in. I haven't yet worked with the WLC s/w available on those, but it looks like they could be put into a mobility group, and L3 user roaming between them might then work. Do you happen to know if this might be a workable solution to the overall big-picture problem?
    Thanks again for taking the time and trouble to reply!
    Deb

  • Best Practice for making material obsolete

    Hi Experts,
    Could anyone please advise me on the best practice for making a material obsolete? If there is no established best practice, what are the different ways a material can be made obsolete?
    Please advise me on this.
    Thanking you in advance
    Regards,
    Gopalakrishnan.S

    Archiving is a process rather than a transaction. You have to take care that you fulfill local laws and retain your data for x years for internal and external auditors.
    Archiving requires customizing: you have to tell SAP where your archive is, what name it has, how big it can be, and when (by defining a retention period) a document can be archived.
    You have to talk with your business about how long they want the data kept in the production system (this depends on how long they need to run reports on this data).
    You have to define who can access the archived data and how this data is accessed.
    Who runs archiving?
    Tough question. The data owner is the business, so they should run it.
    But archiving just fragments your tables.
    So many companies have a person who is full-time responsible for archiving and does it for all business units and organisations.
    Archiving is not something that can be done "on the fly"; you will encounter all kinds of errors, and processes that are not designed well will certainly create a lot of problems in archiving and need to be reworked.
    I had an archiving project with 50 days of project time and could develop guidelines for archiving about 30 different objects. There are 100 more objects possible, and we have still not archived master data like materials, vendors and customers because of the many dependencies and objects that need to be archived prior to them.

  • Best Practice for Managing Cookies in an Enterprise Environment

    We are upgrading to IE11 for our enterprise. One member of the team wants to set a group policy that will delete all cookies every time the user exits IE11. We have some websites that users access that use cookies to track progress in training, but the cookies are deleted when the user closes the browser. What is the business best practice regarding deleting all history, temporary internet files and, especially, cookies when closing a browser?
    If you can point me to a white paper on this topic, that would be helpful.
    Thanks
    Bill

    Hi,
    Regarding cookie settings, we can manage the IE privacy settings using the Administrative Templates for IE 11:
    Administrative templates and Internet Explorer 11
    Delete and manage cookies
    The Administrative Templates for IE 11 can be downloaded from here:
    Administrative Templates for Internet Explorer 11
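    If it helps to see what such a policy toggles under the hood, here is a minimal sketch of enabling "delete browsing history on exit" for a single user via the registry; the key and value name are my assumption based on the IE 11 privacy settings, so please verify them against the templates above before relying on this:
    reg add "HKCU\Software\Microsoft\Internet Explorer\Privacy" /v ClearBrowsingHistoryOnExit /t REG_DWORD /d 1 /f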
    Hope this may help
    Best regards
    Michael Shao
    TechNet Community Support

  • Need Best Practice for creating BE in ZFS boot environment with zones

    Good Afternoon -
    I have a SPARC system with a ZFS root file system and zones. I need to create a BE for whenever we do patching or upgrades to the O/S. I have run into issues when testing booting off of the new BE, where the zones did not show up. I tried to go back to the original BE by running luactivate on it and received errors. I did a fresh install of the O/S from CD-ROM on a ZFS filesystem, then ran the following commands to create the zones, create the BE, activate it, and boot off of it. Please tell me if any steps were left out or if the sequence was incorrect.
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones
    # zfs mount rpool/ROOT/s10be/zones
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z1
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z2
    # zfs mount rpool/ROOT/s10be/zones/z1
    # zfs mount rpool/ROOT/s10be/zones/z2
    # chmod 700 /zones/z1
    # chmod 700 /zones/z2
    # zonecfg -z z1
    z1: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:z1> create
    zonecfg:z1> set zonepath=/zones/z1
    zonecfg:z1> verify
    zonecfg:z1> commit
    zonecfg:z1> exit
    # zonecfg -z z2
    z2: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:z2> create
    zonecfg:z2> set zonepath=/zones/z2
    zonecfg:z2> verify
    zonecfg:z2> commit
    zonecfg:z2> exit
    # zoneadm -z z1 install
    # zoneadm -z z2 install
    # zlogin -C -e 9. z1
    # zlogin -C -e 9. z2
    Output from zoneadm list -v:
    # zoneadm list -v
    ID  NAME    STATUS   PATH        BRAND   IP
     0  global  running  /           native  shared
     2  z1      running  /zones/z1   native  shared
     4  z2      running  /zones/z2   native  shared
    Now for the BE create:
    # lucreate -n newBE
    # zfs list
    rpool/ROOT/newBE 349K 56.7G 5.48G /.alt.tmp.b-vEe.mnt   <-- the same type of mountpoint was shown for all filesystems
    # zfs inherit -r mountpoint rpool/ROOT/newBE
    # zfs set mountpoint=/ rpool/ROOT/newBE
    # zfs inherit -r mountpoint rpool/ROOT/newBE/var
    # zfs set mountpoint=/var rpool/ROOT/newBE/var
    # zfs inherit -r mountpoint rpool/ROOT/newBE/zones
    # zfs set mountpoint=/zones rpool/ROOT/newBE/zones
    and did it for the zones too.
    When I ran luactivate newBE it came up with errors, so I changed the mountpoints again and rebooted.
    Once it came up, I ran luactivate newBE again and it completed successfully. I ran lustatus and got:
    # lustatus
    Boot Environment   Is Complete  Active Now  Active On Reboot  Can Delete  Copy Status
    s10s_u8wos_08a     yes          yes         no                no          -
    newBE              yes          no          yes               no          -
    I ran init 0, then at the ok prompt ran boot -L, picked item two (newBE), and booted.
    The system came up, but df showed no zones, zfs list showed no zones, and when I cd'd into /zones there was nothing there.
    Please help!
    thanks julie

    The issue here is that lucreate adds an entry to the vfstab in newBE for the ZFS filesystems of the zones. You need to lumount newBE /mnt, then edit /mnt/etc/vfstab and remove the entries for any ZFS filesystems. Then, once you luumount it, you can continue. It's my understanding that this has been reported to Sun and that the fix is in the next release of Solaris.
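    A minimal command sketch of that workaround (the /mnt mount point is just an example):
    # lumount newBE /mnt
    # vi /mnt/etc/vfstab      (delete the lines lucreate added for the zones' ZFS filesystems)
    # luumount newBE
    # luactivate newBE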

  • What is the best practice for QOS in an MDS 9000 environment for the FCIP traffic?

    Our storage group is implementing an MDS 9000 FCIP environment. The person configuring the Nexus equipment is asking what QoS markings and bandwidth requirements should be used for the FCIP traffic.
    Is there a document that describes the QoS recommendations for MDS FCIP?

    Hello,
    You can change the DB isolation level to READ UNCOMMITTED:
    http://technet.microsoft.com/en-us/library/ms378149(v=sql.110).aspx
    or use WITH (NOLOCK).
    I use the NOLOCK option for dirty reads, to avoid taking locks on the tables.
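    A minimal sketch of both options from the command line (the server, database and table names here are hypothetical):
    sqlcmd -S MYSERVER -d MyDb -Q "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT OrderID, Status FROM dbo.Orders;"
    sqlcmd -S MYSERVER -d MyDb -Q "SELECT OrderID, Status FROM dbo.Orders WITH (NOLOCK);"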
    Javier Villegas |
    @javier_vill | http://sql-javier-villegas.blogspot.com/
    Please click "Propose As Answer" if a post solves your problem or "Vote As Helpful" if a post has been useful to you

  • What is the Best practice for ceramic industry?

    Dear All,
    I would like to ask two questions:
    1- Which manufacturing category (process or discrete) fits the ceramic industry?
    2- What is the best practice for the ceramic industry?
    Please note, from the link below
    [https://websmp103.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000409682008E ]
    I recognized that the ceramic industry falls under a category called building material, which in turn falls under mill products and mining,
    but there are no best practices for building material or even mill products; only fabricated metal and mining best practices are available.
    Thanks in advance

    Hi,
    I understand that you are referring to the production of ceramic tiles. The PP solution was process manufacturing, with these steps: raw materials preparation (glazes and frits), dry pressing (I don't know the extrusion process), glazing, firing (single fire), sorting and packing. In Spain these are usually all-in-one solutions (R/3 or ECC). Perhaps the production of decors involves fast firing and additional processes.
    In my opinion, the interesting part is batch determination in SD, which you must run in the sales order, because builders want the order to be homogeneous in tone and caliber, and they may split the order into different deliveries. You must think of the batch as tone (different colours in firing and so on) and caliber.
    I hope this helps you
    Regards,
    Eduardo

  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64-bit VPS configuration, MySQL, IIS 7, CF 9).
    Has anyone seen any docs, or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best: one new site in IIS for each client (domain), pointed at CF.
    Given this environment, is anyone aware of any gotchas in the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend that you do the following: install CF 9 without connecting it to IIS, then install the 9.0.1 upgrade and any hotfixes, and then connect CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • Best practices for deploying EMGrid Control

    Can I use one DB for the OEM & RMAN repository? I'm looking for best practices for deploying EM Grid Control in our environment. I have experience working with EM Grid Control and it was very slow; how can I make it fast? I enjoy the speed of EM DBControl...

    DBA2008 wrote:
    Is it a good idea to put the RMAN recovery catalog & OID schema in the OEM repository DB? I am thinking of consolidating all these schemas in one DB.
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example: what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, while the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure: set up combined on one box, or separate? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to make custom reports using PDPs and Project Plan data.
    What is the best practice for using "static/random data" (which is not available in the MS Project 2013 columns) in PDPs and MS Project 2013?
    Should I add that data in a custom field (in MS Project 2013) or put it in the PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a project-level custom field "Supervisor Name" that is used for Project Information.
    For the purpose of viewing that project-level custom field's data in Project views, I have made a task-level custom field "SupName" and used the formula:
    [SupName] = [Supervisor Name]
    That shows the Supervisor Name in Schedule.aspx.
    ============
    Question: I want that project-level custom field "Supervisor Name" in the My Work views (Tasks.aspx).
    The field is enabled in Tasks.aspx, BUT the data is not present / the column is blank.
    How can I get the data into the "My Work" views?
    Noman Sohail

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best practices to be followed for distributing/releasing J2EE applications in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time. Some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation time, so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is using 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton and additional classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? That is, if we generate a deployable file having the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL databases. With other database formats it's common to distribute them as scripts, but that approach seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script, and we're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, since it limits which versions of SQL Server will accept the database.
    What do you recommend, and can you point me to some documentation that covers this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs (not, however, if you already have an automated backup/maintenance routine in your environment).
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This generates downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and, while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" here. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.
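    As a rough sketch of the backup/restore hand-off described above (the server names, paths and logical file names are hypothetical and would need to match your database):
    sqlcmd -S SOURCESRV -Q "BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak' WITH INIT;"
    (copy MyDb.bak to the customer's server, then:)
    sqlcmd -S DESTSRV -Q "RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak' WITH MOVE 'MyDb' TO 'C:\Data\MyDb.mdf', MOVE 'MyDb_log' TO 'C:\Data\MyDb_log.ldf';"
    Keep the version caveat above in mind: a backup taken on a newer SQL Server version cannot be restored on an older one.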
