Pharmaceuticals Best Practice - 21 CFR Part 11

I work for an SAP partner and our company would like to learn more about, and provide support for, 21 CFR Part 11 compliance - tracking changes and who made them. Do we need to have the Pharmaceuticals Best Practice installed in our SAP installation, or can I simply use the AUT01 - AUT10 transactions?
Any help / suggestions are appreciated
Kathleen

Hi Kathleen,
The trick with best practices is that you can reuse individual building blocks from most best practices in other environments. You can follow the P45: Configuration Guide (http://help.sap.com/bp_pharmav1600/Pharma_US/HTML/business_support_EN_US.htm) by 'simply' working through the steps in that guide.
Of course there is a 'but': you need to be sure that you have implemented the right notes and baseline packages.
I hope this gives you some direction on where to look.
Kind regards,
Ruud

Similar Messages

  • Best Practice for Plan for Every Part (PFEP) Database/Dashboard?

    Hello All-
    I was wondering if anyone had experience with implementing / developing a Plan for Every Part (PFEP) database in SAP. My company is looking to migrate its existing PFEP solution (a custom-developed Excel/Access system) into SAP. If you are unfamiliar, a PFEP is a dashboard view of a part/material that provides various business groups with dedicated views of data from Material Masters, Info Records, and Vendor Master Records and combines it with historical/forecasting information. The goal is to provide a single source for all the settings for a given part/material.
    Is there a Best Practice PFEP in SAP? Or is this something that most companies custom-develop in ERP or BI?
    Thanks in advance.
    -Ron

    I think you will likely get a response in SAP ERP - Logistics Materials Management (SAP MM).
    Additionally, you might want to do some searches based on SAP lean inventory, perhaps Kanban. I am assuming you are not using WM or EWM either?
    Where I have seen PFEP incorporated into the supply chain strategy, it typically requires not inconsiderable additions to the alternate UoM in MM, dropping of automatic replenishment levels (reorder level), and rethinking aspects of the MRP plan, so be prepared for significant additional data management work if you haven't already started on that. I believe Ryder Logistics uses PFEP and their SAP infrastructure is managed by IBM; it might be an idea to try and find a LinkedIn resource from there. You may also find one of the ASUG supply chain, logistics, MM, or WM SIGs a good place to ask questions and look for answers.
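    For concreteness, here is a rough sketch of the kind of cross-table read a custom PFEP dashboard ends up doing. This is only an illustration under assumptions: it uses the open-source pyrfc library and the generic RFC_READ_TABLE function module, the connection parameters and material number are placeholders, and a real PFEP view would pull far more fields (MRP settings from MARC, info record conditions, forecast history, etc.).

    from pyrfc import Connection

    def read_table(conn, table, fields, where=""):
        """Call RFC_READ_TABLE and return rows as dicts keyed by field name."""
        result = conn.call(
            "RFC_READ_TABLE",
            QUERY_TABLE=table,
            DELIMITER="|",
            FIELDS=[{"FIELDNAME": f} for f in fields],
            OPTIONS=[{"TEXT": where}] if where else [],
        )
        return [dict(zip(fields, (v.strip() for v in row["WA"].split("|"))))
                for row in result["DATA"]]

    # Placeholder connection details - replace with your system's values.
    conn = Connection(ashost="sap-host", sysnr="00", client="100",
                      user="rfc_user", passwd="secret")

    material = "100-100"  # hypothetical material number
    mara = read_table(conn, "MARA", ["MATNR", "MTART", "MEINS"],
                      f"MATNR = '{material}'")   # material master basics
    eina = read_table(conn, "EINA", ["MATNR", "LIFNR"],
                      f"MATNR = '{material}'")   # purchasing info records
    for rec in eina:                             # vendor name per info record
        lfa1 = read_table(conn, "LFA1", ["LIFNR", "NAME1"],
                          f"LIFNR = '{rec['LIFNR']}'")
        print(mara, rec, lfa1)

    Whether this lives in a BI dashboard or a custom ERP transaction, the join logic is the same; that is essentially the build-vs-buy question you are asking.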

  • Best practice to develop the news web part to retrieve news data across the farms

    Hi,
    We have developed a news web part. Its function is to pull news from other SharePoint farms and display it in the site. Currently we are using the Secure Store Service to connect to the different farms and retrieve the news from each of them.
    The issue with this approach is that every time a user hits the page, it hits the Secure Store Service. There are almost 80K users who will be using this web part. Moreover, this web part connects to multiple sites for news from different divisions.
    What would be the best practice for developing such a web part without consuming server resources by hitting the server repeatedly for new news? The news does not change very often.
    Regards
    Rajaniesh

    Hi,
    According to your description, my understanding is that you want to know the best way to handle retrieving data across SharePoint farms at scale.
    If you are developing a custom web part to retrieve data from another farm, I suggest you first create a custom timer job to fetch the data hourly in the background and store it in a list. Then you can create a web part that reads from the list to display
    the data.
    For cross-farm data access, I suggest you create a custom web service to achieve it.
    Also, you can use AJAX to display the web part data asynchronously. It will improve performance and reduce the load on the server.
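    To make the shape of the timer-job-plus-list approach concrete, here is a minimal sketch of the caching pattern in Python. A real SharePoint timer job is of course .NET code built on the SPJobDefinition class, and the cache would be a SharePoint list; the feed URLs, cache file, and JSON format here are hypothetical stand-ins.

    import json, time, urllib.request

    # Hypothetical remote-farm endpoints and local cache location.
    FEEDS = ["https://farm-a.example.com/_api/news",
             "https://farm-b.example.com/_api/news"]
    CACHE_FILE = "news_cache.json"
    REFRESH_SECONDS = 3600  # the "hourly" timer-job interval

    def refresh_cache():
        """Background job: pull news from every remote farm once, store it locally."""
        items = []
        for url in FEEDS:
            with urllib.request.urlopen(url, timeout=10) as resp:
                items.extend(json.load(resp))  # assumes each feed returns a JSON list
        with open(CACHE_FILE, "w") as f:
            json.dump({"fetched_at": time.time(), "items": items}, f)

    def get_news():
        """Page requests read the local cache; they never hit the remote farms."""
        with open(CACHE_FILE) as f:
            return json.load(f)["items"]

    if __name__ == "__main__":
        while True:  # in SharePoint, the timer service plays this role
            refresh_cache()
            time.sleep(REFRESH_SECONDS)

    The point is that the 80K page hits only ever read the cached copy, while the hourly job is the single consumer of the Secure Store Service and the remote farms.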
    Here are some detailed articles for your reference:
    Create and Deploy Custom Timer Job Definition in SharePoint Programatically
    Creating a Custom ASP.NET Web Service
    Create asynchronous web parts for Sharepoint
    Thanks
    Best Regards
    Forum Support
    Jerry Guo
    TechNet Community Support

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The Batch field is used for the bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock & define the MRP type as "Reorder Point Planning" (see the sketch below). This way you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, neither planned nor unplanned maintenance will be delayed.
    3. By doing GI against a reservation, the quantity can be tracked against the order & the equipment.
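    As a quick illustration of point 1, the arithmetic behind reorder point planning is simple; the numbers below are made up.

    def reorder_point(avg_daily_usage, lead_time_days, safety_stock):
        """Stock level at which MRP should propose a replenishment."""
        return avg_daily_usage * lead_time_days + safety_stock

    # Example: a bearing consumed ~2/day, a 10-day replenishment lead time,
    # and 5 pieces of safety stock held against unplanned breakdowns.
    print(reorder_point(avg_daily_usage=2, lead_time_days=10, safety_stock=5))  # 25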
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best Practices in SharePoint 2010 ( Out of the box feature vs Custom Web Part, development )

    Hi
    How do we decide when to allow custom web parts and when to use out-of-the-box features?
    What are the performance implications when we deploy a custom web part to the SharePoint server?
    Why do some companies prefer to allow only out-of-the-box features, with no custom work at all?

    SharePoint is a powerful, flexible server product that can provide a rich collaboration environment right out of the box.
    The best answer to your question depends on your requirements. Sometimes out-of-the-box features will solve the whole problem with a little designing, but sometimes your requirements need a custom web part / solution.
    With an OOTB implementation, the big advantage is that it is easy to troubleshoot & fix issues. You will also find tons of blogs on the internet for OOTB features. In custom development, it is hard to troubleshoot & identify whether it is a SharePoint issue or a custom code issue.
    Check the articles below for more ideas.
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/1e7845ef-61e0-4d01-bb6c-92519e6d7139/sharepoint-2010-outofbox-best-practices?forum=sharepointgeneralprevious
    http://www.cdh.com/media/articles/Pages/SharePoint-out-of-the-box---To-customize-or-not-to-customize.aspx
    Master List of SharePoint 2010 On-Premises Custom Development Best Practices
    http://i.zdnet.com/whitepapers/Quest_WPW_SharepointDev_Custom_US_KS_v3.pdf
    Thanks -WS MCITP(SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Installing the SAP Best Practices for Pharmaceuticals

    Hi!!
    Is it necessary to install SAP Best Practices for Pharmaceuticals in a different client of the same SAP server?
    Best Regards,
    Ajit Dubal.

    Hi Ajit,
    You can do it either way, so check with the functional people which way they want it.
    Regards,
    Hari.

  • Is Adobe Connect part of Adobe Creative Cloud? Are there any best practices ideas from people who use Connect and Creative Cloud?

    I have an Adobe Connect account and I'm also in the early stages of developing a webinar. I am looking for any tips and advice from anyone who uses both of these services.

    The £27 was an introductory offer; upon completion of one year, the price changes to the normal Creative Cloud cost, which is £46.88. However, if you have one of the previous versions of the Creative Suite, such as CS3, 4, 5, 5.5, or CS6, you can get the offer at £27.34 per month incl. VAT. Note that this requires an annual commitment, billed monthly.

  • Best practice for creating RFC destination entries for 3rd parties(Biztalk)

    Hi,
    We are on SAP ECC 6 and we have been creating multiple RFC destination entries for external 3rd-party applications such as BizTalk and others, using the TCP/IP connection type and sharing the program ID.
    The RFC connections with IDoc as the data flow have been made using synchronous mode for the time-critical ones (a few) and asynchronous mode for the majority. RFC destination entries have been created for many interfaces, each with a unique RFC destination and its corresponding port defined in SAP.
    We have both inbound and outbound connectivity. With the large number of RFC destinations being added, we wanted to review the approach. We wanted to check with others who have encountered a similar situation and learn from their experiences.
    We also wanted to know if there are any best practices for optimising the number of RFC destinations.
    Here were a few suggestions we had in mind to tackle the same.
    1. Create unique RFC destinations for every port defined in SAP for external applications such as BizTalk, for as many connections. (This would mean one for inbound, one for outbound.)
    2. Create one single RFC destination entry for the external host/application, with the external application receiving the IDoc control record to interpret what action to perform at its end.
    3. Create RFC destinations based on the modules they link with, such as materials management, sales and distribution, and warehouse management. This would ensure we can limit the number of RFCs to be created and make it simple to understand the flow of data.
    I have checked the SAP Best Practices website, SAP OSS notes, and help pages but could not find the specific information I was after.
    I do understand we can have an unlimited number of RFC destinations and maximum connections using the appropriate profile parameters for the gateway, RFC, client connections, and additional app servers.
    I would appreciate it if you could suggest the best architecture or practice for setting up RFC destinations in an optimized manner.
    Thanks in advance
    Sam

    Not easy to give a perfect answer.
    1. Create unique RFC destinations for every port defined in SAP for external applications such as BizTalk, for as many connections. (This would mean one for inbound, one for outbound.)
    -> Be careful if you have multiple clients (for example, in acceptance): RFC destinations are client-independent but ports are not! You could run into trouble.
    2. Create one single RFC destination entry for the external host/application, with the external application receiving the IDoc control record to interpret what action to perform at its end.
    -> This could be the best solution: it is easier to create partner profiles, and the control record will contain the correct partner.
    3. Create RFC destinations based on the modules they link with, such as materials management, sales and distribution, and warehouse management. This would ensure we can limit the number of RFCs to be created and make it simple to understand the flow of data.
    -> Instead of this, consider option 2.
    We send to our message broker with one RFC destination, sending multiple IDoc types, different partners, and different ports.
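    To illustrate what option 2 implies on the receiving side, here is a rough sketch of routing by the IDoc control record. The field names (MESTYP, IDOCTYP, SNDPRN, DOCNUM) are standard EDI_DC40 control record fields; the handler functions, type pairs, and partner name are hypothetical.

    def handle_orders(idoc):
        print("process purchase order, IDoc", idoc["DOCNUM"])

    def handle_desadv(idoc):
        print("process shipping notification, IDoc", idoc["DOCNUM"])

    # One RFC destination delivers everything; the receiver dispatches on
    # (message type, basic type) taken from the control record.
    ROUTES = {
        ("ORDERS", "ORDERS05"): handle_orders,
        ("DESADV", "DELVRY07"): handle_desadv,
    }

    def dispatch(idoc):
        key = (idoc["MESTYP"], idoc["IDOCTYP"])
        handler = ROUTES.get(key)
        if handler is None:
            raise ValueError(f"no route for {key} from partner {idoc['SNDPRN']}")
        handler(idoc)

    dispatch({"DOCNUM": "0000000123", "MESTYP": "ORDERS",
              "IDOCTYP": "ORDERS05", "SNDPRN": "SAPCLNT100"})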

  • Best Practice Life Science Pharmaceuticals

    Dear all,
    I was wondering if there is any news about the Life Science Best Practice for Pharma. The latest Pharma Best Practice dates back to 2007.
    Maybe with the 7.0 release there will also be an update to the best practice, but there is nothing to be found regarding this subject.
    Anybody got some insight?
    Thanks.
    Regs,
    Ruud

    Would love to get some insights too!
    Agne

  • BASELINE PACKAGE - V1-V2.603 and Best Practices for Pharmaceuticals

    Hi All,
             > Recently we have upgraded to EHP4 stack 4.
             > I am trying to install BASELINE PACKAGE - V1-V2.603; I chose it because it is localized for India.
             > When I try to install BP-ERP05 603V7 and BP-INSTASS 600V7, it asks for BBPCRM 600, BBPCRM 700
                and SAP_APPL 600, SAP_APPL 603.
             > Why does the installer ask for lower versions? Of course, I have read that a version higher than the one mentioned
                in the "Quick Guide to Installing the SAP Best Practices Baseline Package (IN)" will not work.
             > But do we have any Baseline Packages specific to EHP4?
             > If not, could anyone tell me where and how to download and install BBPCRM as an add-on?
             > I only need that software component because I strongly feel that all the interdependencies are linked to it.
    Any help and suggestions are welcome.
    Regards,
    Antony Chaitanya.

    Hi Sunny,
                    Thanks very much for your response.
    My best guess at the major problem is that I somehow did not include the BBPCRM software component, or the related CRM software components, at the time of the upgrade.
    So the latest add-ons BP-CRM60 600V3 and BP-CRM70 700V1, and the add-ons BP-ERP05 600VD/603V7 along with BP-INSTASS 600V7, cannot be installed because the prerequisites are not met. Only BP-INSTASS 600V1 got installed.
    Can anyone tell me what to do now, please?
    For example, how to get the CRM-related software components installed, or whether there is a workaround to get the Baseline Packages (IN) activated without the CRM.
    Regards,
    Antony Chaitanya.

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services that are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same Oracle Application Server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM and usernames and passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the gateway.
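    As a toy illustration of that rule expressed in application code (in practice you would enforce it in the firewall, and the address below is a placeholder):

    # WSGI middleware that refuses calls not originating from the gateway.
    GATEWAY_ADDRS = {"10.0.0.5"}  # placeholder: the gateway's address

    def gateway_only(app):
        def middleware(environ, start_response):
            if environ.get("REMOTE_ADDR") not in GATEWAY_ADDRS:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"direct access bypassing the gateway is not allowed\n"]
            return app(environ, start_response)
        return middleware

    def low_level_service(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"service response\n"]

    application = gateway_only(low_level_service)  # run under any WSGI server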
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best Practice for setting systems up in SMSY

    Good afternoon - I want to clean up our SMSY information and I am looking for some best practice advice on this. We started with an ERP 6.0 dual-stack system, so I created a logical component Z_ECC under "SAP ERP" --> "SAP ECC Server" and assigned all of my various instances (Dev, QA, Train, Prod) to this logical component. We then applied Enhancement Package 4 to these systems. I see that under logical components there is an entry for "SAP ERP ENHANCE PACKAGE". Now that we are on EhP4, should I create a different logical component for my ERP 6.0 EhP4 systems? Under "SAP ERP ENHANCE PACKAGE" there are entries for the different products that can be updated to EhP4, such as "ABAP Technology for ERP EHP4", "Central Applications", ... "Utilities/Waste&Recycl./Telco". If I am supposed to change the logical component to something based on EhP4, which should I choose?
    The reason that this is important is that when I go to Maintenance Optimizer, I need to ensure that my version information is correct so that I am presented with all of the available patches for the parts that I have installed.
    My Solution Manager system is 7.01 SPS 26. The ERP systems are ECC 6.0 EhP4 SPS 7.
    Any assistance is appreciated!
    Regards,
    Blair Towe

    Hello Blair,
    In this case you have to assign both products to your system in SMSY: EHP4 for ERP 6.0 and SAP ERP 6.0.
    You will then have two entries in SMSY, one under each product; the main instance for EHP4 for ERP 6.0 must be Central Applications, and the one for SAP ERP 6.0 is SAP ECC Server.
    This way your system should be correctly configured to use the MOPZ.
    Unfortunately I'm not aware of a guide explaining these details.
    Sometimes the System Landscape guide at service.sap.com/diagnostics can be very useful. See also note 987835.
    Hope it can help.
    Regards,
    Daniel.

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of the DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
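    A minimal sketch of what "scripts in the proper order, from ground zero" looks like in practice: numbered SQL files applied by a small runner, all of it kept in version control. sqlite3 stands in for the real database here, and the directory name and file-naming convention are assumptions.

    import pathlib, sqlite3

    SCRIPT_DIR = pathlib.Path("deploy")  # e.g. 001_create_tables.sql, 002_seed_data.sql

    def deploy(db_path):
        scripts = sorted(SCRIPT_DIR.glob("*.sql"))  # numeric prefixes fix the order
        conn = sqlite3.connect(db_path)
        try:
            for script in scripts:
                print("applying", script.name)
                # Note: sqlite3's executescript() commits implicitly, and in many
                # RDBMSs DDL is not transactional anyway - which is exactly why the
                # deployment document must spell out the recovery steps.
                conn.executescript(script.read_text())
        finally:
            conn.close()

    deploy("production_standin.db")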
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses at each reference to a class - there are 100s of them. I'm having difficulty understanding what the best practice is for getting consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign but can see it's there when I open up the PDF in Acrobat
    Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign then perfecting them using Acrobat (removing additional links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety each month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited - the hyperlink, the content, or something else?

  • Best practices of having a different external/internal domain

    In the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain; either way, it was no big deal.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it ok to have companyname.net be the local domain, even when companyname.net is also the address to our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues, I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct, which IMHO certainly raises the chance of it working correctly compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now, one of the downsides of this is that you need to maintain two copies of your companyname.net zone data - one for the internal view and one for the external view (but that's not much more effort than maintaining companyname.net and companyname.local) - and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.
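    One way to reduce the "make sure you edit the right one" risk is to generate both views from a single source of truth. Here is a hedged sketch with made-up hosts and addresses; it emits A-record fragments only (SOA/NS records omitted).

    # name, external address, internal address (None = absent from that view)
    RECORDS = [
        ("mail",       "203.0.113.10", "192.168.1.10"),
        ("www",        "203.0.113.20", "192.168.1.20"),
        ("fileserver", None,           "192.168.1.30"),  # internal-only host
    ]

    def write_zone(path, view):
        """Write the A records for one view; only RECORDS is ever edited by hand."""
        with open(path, "w") as f:
            f.write("$ORIGIN companyname.net.\n$TTL 3600\n")
            for name, external, internal in RECORDS:
                addr = internal if view == "internal" else external
                if addr:
                    f.write(f"{name}\tIN\tA\t{addr}\n")

    write_zone("db.companyname.net.internal", "internal")
    write_zone("db.companyname.net.external", "external")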
