Best Practices using Pre-configurator

Hi
Is anybody aware of how to do QM configuration as per Best Practices through the Pre-configurator?

Hi
You can configure the QM module by using the SAP Library. Basically, you have to know the Quality Management process.
Regards
M B raju

Similar Messages

  • Best practice steps of configuration

    Hello All
    Can anyone write back on what the best-practice configuration steps are in HCM for a new implementation?
    Thanks

    Hi,
    SAP Best Practices are prepackaged, ready-to-use solutions for small and midsize businesses (SMBs).
    SAP Best Practices is based entirely on the Building Block methodology.
    SAP Best Practices is fully documented, including preconfigured business processes, training material, data conversion tools, and test catalogs.
    Each Building Block contains:
    Business Configuration Sets (BC Sets)
    Sample master data
    Configuration documentation
    Print forms or reports
    Business Configuration Sets are groups of configuration settings; you can think of them as groups of tables for a specific business process.
    For more details, go to help.sap.com and look for Best Practices; you will find very useful information there.
    Regs,
    Brahma

  • Best Practices Used in CMS

    Hi,
    Can anyone share the best practices used in CMS transports?
    Basically, the reason I need this is that we want to keep track of all the transports that are done in QA/Prod.
    Regards,
    Sreenivas

    Hi
    You can try checking this document for info about CMS
    (How To… Transport XI Content Using CMS)
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f85ff411-0d01-0010-0096-ba14e5db6306
    Also...
    How to configure CMS for XI?
    Business System Groups - CMS

  • Favorite / Best Practice / Useful MTE's ??!?!

    Hi Everyone,
    I'm setting up monitoring on a multi-system architecture; I've got all of the agents working and reporting to my CEN. I've even got the auto-response email all set up. Great!
    I've been looking around for any best practice on which MTEs (out of the hundreds there!) I should be setting up virtual and rule-based nodes for.
    So come on, everyone: what are your favorite MTEs? Of course I'm looking at Dialog Response Times and Filesystem %'s, but any other tips? Hints? Tricks? Any neat MTEs out there that you love to check?
    A best practice / useful guide would be brilliant.
    Thanks in advance.
    Nic Doodson


  • What is the BEST practice - use BO or Java Object in process as webservice

    Hi All,
    I have my BP published as a web service. I have defined my process input and output as BOs. My BP talks to the DB through a DAO layer (written in Java) that has its own Java objects. So I have BOs as well as Java objects. Since I am collecting user input in a BO, I have to assign the individual values contained in the BO to the Java object's fields.
    I want to remove this extra step and use either the BO or the Java object alone. I want to know what the best practice is: use a BO or a Java object as process input? If it is a BO, how can I reuse BOs in Java?
    Thanks in advance.
    Thanks,
    Sujata P. Galinde

    Hi Mark,
    Thanks for your response. I also wanted to use a Java object only. When I use a Java object as the process input argument, it is fine. But when I try to create the process web service, I get a compilation error: "data type not supported".
    To get rid of this error, I tried to use inheritance (a BO inheriting from the Java class). But when invoking the process as a web service, it does not ask for the fields inherited from the Java class.
    Then I created a Business Object with a field of type Java class. This also is not working. When sending the request, it gives an error that the field types for the fields from the Java class are not found.
    Conclusion: I am not able to use a Java object as the input argument of a process exposed as a web service.
    What is the best and most feasible way to accomplish the task: a process using a DAO in Java, exposed as a web service?
    Thanks & Regards,
    Sujata
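
    If the process input has to stay a BO, the field-by-field copying described above can at least be written once. Below is a minimal, hypothetical Java sketch, not tied to any specific BPM product: it copies same-named JavaBean properties from one object to another via the standard java.beans introspection API. The class name is illustrative, not from the original posts.

        import java.beans.BeanInfo;
        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;
        import java.lang.reflect.Method;
        import java.util.HashMap;
        import java.util.Map;

        public final class BeanCopier {

            private BeanCopier() { }

            // Copies every readable property of src into a same-named writable property of dst.
            public static void copyProperties(Object src, Object dst) throws Exception {
                // Index the destination's setters by property name.
                Map<String, Method> setters = new HashMap<String, Method>();
                BeanInfo dstInfo = Introspector.getBeanInfo(dst.getClass(), Object.class);
                for (PropertyDescriptor pd : dstInfo.getPropertyDescriptors()) {
                    if (pd.getWriteMethod() != null) {
                        setters.put(pd.getName(), pd.getWriteMethod());
                    }
                }
                // Walk the source's getters and invoke the matching setter, if any.
                BeanInfo srcInfo = Introspector.getBeanInfo(src.getClass(), Object.class);
                for (PropertyDescriptor pd : srcInfo.getPropertyDescriptors()) {
                    Method getter = pd.getReadMethod();
                    Method setter = setters.get(pd.getName());
                    if (getter != null && setter != null) {
                        setter.invoke(dst, getter.invoke(src));
                    }
                }
            }
        }

    Assuming the BO and the DAO-layer POJO both follow JavaBean getter/setter naming and the shared properties have compatible types, a call like BeanCopier.copyProperties(orderBO, orderPojo) replaces the manual assignments; mismatched types would still need per-field handling.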

  • Request for any Best Practice document for configuring Service Parameters on CUCM 9.1.1

    Hi Team,
    Could you please send any Best Practice document you have for configuring Service Parameters on CUCM 9.1.1? That would really help.
    Thanks,
    Guru

    Hi
    There's no 'best practice' as such, but there are a few settings that I think should be defaults:
    Enabling CDR, On-Hook Pickup, CFwdAll Override... but really the settings are specific to the requirements of the deployment.
    Aaron

  • Looking for best practices using Linux

    I use the Linux platform for all the Hyperion tools. We have been having problems with Analyzer V7.0.1; the server hangs up randomly.
    I'm looking for Linux best practices for Analyzer, Essbase, EAS, etc.
    I'll appreciate any good or bad comments about Hyperion on Linux.
    Thanks in advance.
    Mario Guerrero
    Mexico

    Hi,
    Did you search for patches? It could be a known problem. I use all the Hyperion tools on Windows without any big problems.
    Hope this helps,
    Grofaty

  • Best practice for server configuration for iTunes U

    Hello all. I'm completely new to iTunes U; I'd never heard of it until now, and we have zero documentation on how to set it up. I was given the task of looking at best practice for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within those you have things like lecture audio/video files that students can download/view on iTunes. So where are these files hosted? Is it on your own server or on Apple's servers? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll. My question is what the best method is to configure them so they meet our need for high availability in active/active mode, load balancing, and server scaling. Originally I was thinking about using a third-party load-balancing device, but I was told there is no budget for it, so that is not going to happen. I know there is IP failover, but then one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): say that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage); if we go with DNS round robin and put the two servers in active/active mode, can both servers access a common shared network space? Or is this not possible, so each server must have its own storage space, and I must use something like rsync to make sure the contents on both servers are identical? Should I use Xsan, or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    What is the best practice for having a server pool, i.e.
    1) having a single large server pool consisting of "n" guest VMs, or
    2) having multiple small server pools consisting of fewer guest VMs?
    I prefer option 1, as it gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have at most 20 servers in it: OCFS2 starts to strain after that.

  • Best practice MPLS design/configuration for small service provider

    We are a small regional service provider and do not yet have MPLS support on our network. To start supporting MPLS, I'd like to get opinions and recommendations on the best-practice configuration.
    Here is what we have today –
    We have our own BGP AS and multiple /24s.
    We are running OSPF on the Cores and BGP on the Edge routers peering with ISPs.
    We peer with multiple tier-1 ISPs for internet traffic. We do not provide public transit.
    What we want for phase one MPLS implementation –
    Configure basic MPLS/VPN functionality.
    No QoS optimization required for phase 1.
    We have Cisco ME 3600X for the PEs. Any recommendations will be appreciated.

    Not sure what kind of devices or routers you have in your network, but check whether you have support for labeled multicast for MVPN; that will avoid the added complexity of running other control protocols (like PIM) in the core.
    PE redundancy can be obtained through BGP attributes, and CE-PE connectivity can be tuned using an IGP or VRRP/HSRP.
    You can have multiple RSVP-TE tunnels for various contract traffic, and you can bind different kinds of traffic to different RSVP tunnels based on the contract or service with your customer.
    An RSVP-TE design with link/node protection will be of great help in achieving faster failover.

  • Tabular Model Best Practice - use of views

    Hi,
    I've read on some sites that using views to get the model data is a best practice.
    Is this related to fact tables only? Friendly names can be configured in the model, and views can be used to restrict data volume, but besides that, what are the other advantages?
    The model needs to know all the relationships between tables, so a single view that joins everything together into one big view isn't useful.
    Best regards

    Yes, I think most people would agree that it isn't helpful to "denormalise" multiple tables into a single view. The model understands the relationships between tables, and queries are more efficient with multiple smaller related tables.
    Views can be helpful in giving a thin layer of independence from the data. You might want to change data types (char() to date, etc.), split first/last names, trim irrelevant columns, or simply isolate the model from future physical table changes.
    In my view, there aren't any hard and fast rules. Do what is pragmatic and cleanest.
    Hope that helps,
    Richard

  • Best practices in wireless configuration?

    Hi,
    Is there a best practice document that shows 3500 AP with 5508 controllers? The questions I have are below.
    1. Do I configure each AP on non-overlapping neighbor channels (1, 6, 11 for 2.4 GHz) or leave that to the controller to decide? Does the controller change the channel of an AP when it sees congestion on a specific frequency?
    2. For 5 GHz, is it a good idea to bond the channels? What frequency should be used for neighboring APs? Or again, leave it to the controller to shift as needed?
    3. For security, what's best practice? 802.1x or something different?
    Thanks,
    Sm

    1. Do I configure each AP on non-overlapping neighbor channels (1, 6, 11 for 2.4 GHz) or leave that to the controller to decide?
    Let the controller(s) decide. By default, Dynamic Channel Assignment (DCA) checks the channel for interference every 600 seconds. Because you have 3500s, make sure you enable Event-Driven RRM (Radio Resource Management) on both bands.
    Does the controller change the channel of an AP when it sees congestion on a specific frequency?
    The controllers will not change the channel when they see congestion. The controller will change the channel if it sees interference on the same channel. CleanAir will change the channel when it sees interference from non-AP sources like Bluetooth, microwave ovens, cordless phones, etc.
    2. For 5 GHz, is it a good idea to bond the channels?
    Sure.
    What frequency should be used for neighboring APs? Or again, leave it to the controller to shift as needed?
    Leave this option at its default.
    3. For security, what's best practice? 802.1x or something different?
    Sure.

  • Best practices on wlan configuration

    I have 6 access points with a controller. We are using voice, video, and data on the wireless with multiple SSIDs and VLANs. What would be the best practice for the access points' QoS, and for the switch port configuration on the 3750: should it be a trunk or an access port?
    Sent from Cisco Technical Support iPad App

    Well, you can start out by looking at the 792x deployment guide, if that's what you're using for voice.
    http://www.cisco.com/en/US/docs/voice_ip_comm/cuipph/7925g/7_0/english/deployment/guide/7925dply.pdf
    http://www.cisco.com/en/US/tech/tk722/tk809/technologies_tech_note09186a0080810880.shtml
    Sent from Cisco Technical Support iPhone App

  • 2nd Mac - best practices using iPhoto on both?

    Hi -
    I just got a new MacBook and have an iMac that is still the "hub" of my photo library. It is, in fact, about a 180 GB iPhoto library. I know that I can't sync libraries between Macs (a shame; someone should come up with a way to do that, assuming they haven't already!), so I'm just looking for any best practices.
    I got the MacBook to be able to work on some photos while on the road; I can at least work on post-processing in Photoshop, etc. I'm thinking now that my best strategy is possibly to work with the images on my MacBook, importing them into its iPhoto library if desired, and then use my photo sharing service, Phanfare, to "sync" them. That requires me to download them on the other side and pull them again into the iPhoto library on the iMac.
    I don't use the MobileMe Gallery, but I suppose that would be another way to have access to them on the other computer?
    Any other best practices or suggestions?
    Thx!

    So, if there are times when I'm not home to access my external drive, then going with the two libraries is the best solution, yes?
    Perhaps, but you can get very small and portable external HDs these days.
    I'm not sure, though, if I should really make both a 180 GB iPhoto library, do you? It is a backup, true, but it seems like a chunk to move.
    But you only do it once, the first time. Thereafter you're simply updating the other with the changes.
    At least maybe I could split off the pictures from 2009-2010 and have that library on both my iMac and the MacBook. I very rarely access anything before then (only if I need something specific), so I could access that via the iMac exclusively?
    That would be viable.
    I would maintain a +full Library+ on the desktop, and the mobile version a smaller subset.
    I'm sort of ruling out the one-library-on-the-external solution because it eliminates the possibility of being remote -
    As I said above, you can get tiny portable drives...
    unless there is some swanky Log In to My Computer service or something that works with a Mac, and that can go remotely to my computer and then to my external drive.
    *_This_* might help.
    Regards
    TD

  • Best practice using regular properties

    What is considered best practice when it comes to using properties, for example hostname and port number when connecting to an external resource?
    Should property files be used? Is this considered bad practice? Should deployment descriptors be used instead, and if so, how does one update these properties when they change?
    Are there any utility classes that make it easy to access this kind of property?
    ---- Trond

    Depends on the properties. Many properties like hostname etc. can be retrieved using different API calls, such as the request object or other portal-specific objects.
    Properties that your application needs, and that might change, can be stored in a properties file. I use a singleton to retrieve them, with a reload method on the singleton that I can call if I need to reload the properties once the server has started (a minimal sketch follows below).
    Kunal
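
    Here is a minimal sketch of the singleton-with-reload approach described above, assuming a plain java.util.Properties file on the classpath; the file name app.properties and the class name AppConfig are illustrative, not from the original post.

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public final class AppConfig {

            private static final AppConfig INSTANCE = new AppConfig();

            // volatile so a reload() from an admin thread is visible to readers
            private volatile Properties props = load();

            private AppConfig() { }

            public static AppConfig getInstance() { return INSTANCE; }

            public String get(String key, String defaultValue) {
                return props.getProperty(key, defaultValue);
            }

            // Re-reads the file so changed values are picked up without a restart.
            public void reload() { props = load(); }

            private static Properties load() {
                Properties p = new Properties();
                // app.properties is an assumed name; adjust to your deployment.
                try (InputStream in = AppConfig.class.getResourceAsStream("/app.properties")) {
                    if (in != null) {
                        p.load(in);
                    }
                } catch (IOException e) {
                    throw new IllegalStateException("Could not load app.properties", e);
                }
                return p;
            }
        }

    Callers would read settings like String host = AppConfig.getInstance().get("hostname", "localhost"); and an admin hook could call AppConfig.getInstance().reload() after editing the file.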

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    We have instances almost every day where the RBDMANI2 job gets stuck and runs for a very long time. We frequently have multiple SHPCON IDocs coming in that contain the same material number, and IDocs frequently fail because the material in the IDoc has become locked. Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed IDocs begin processing. The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound IDocs such as these for maximum performance in a very high volume system. I know that RBDAPP01 processes IDocs in status 64 and 66, and RBDMANI2 is used to reprocess IDocs in all statuses. I have been told that setting the messages to trigger immediately in WE20 can result in poor performance. So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than I am on this could confirm the best practice for this process and comment on the correct packet size in the program variant and on whether or not parallel processing is desirable. Because of the material locking issue, I felt that parallel processing was not desirable and might actually increase the material locking problem. I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes, immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past, for high-volume IDoc processing, we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs that had errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually, we specifically would not want to parallel-process the errors, to avoid running into a lock issue again). In short, your steps 1-3 are correct, but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.
