Re-engineering of an existing process / Best Practice (customization)

Hi all,
We are implementing SAP ECC 6.0 for one of our clients. The client has asked us to compare their existing business processes with the Best Practice / standard processes and, based on the result, to prepare a gap analysis between the existing processes and best practice for their business.
As I understand it, SAP embodies best practice in the respective domains / business processes, so by implementing SAP ERP the client gets best-practice processes. But the question is: how can I demonstrate to the client that what SAP delivers really is best practice, so that the client accepts the SAP process as the best practice for their business?
Please suggest a solution.
Regards,
Ramki

Hi,
I'm not sure deleting keys from the registry is ever a best practice; however, Xcelsius has entries under:
HKEY_CURRENT_USER > Software > Business Objects > Xcelsius
HKEY_LOCAL_MACHINE > SOFTWARE > Business Objects > Suite 12.0 > Xcelsius
The current user folder holds temporary settings, such as how you've modified your interface.
The local machine folder holds more important information.
As always, it's recommended that you back up the registry and/or create a restore point before modifying or deleting any keys.
As for directories, the only directory Xcelsius uses is the one you install to.  It also places some install logs in the temp directory, but they have no effect on the application.

Similar Messages

  • Order Process Best Practice Suggestions?

    Hey CF World,
    I have to revamp an online order process. The process is broken into 4 steps.
    The app as it exists today was built by a different developer and for the life of me, I have wasted about 5 hours trying to figure out exactly what the person is doing in the code just so I can make some basic tweaks to the process.
    Could anyone offer what might be considered today's best practice for a step by step order process?
    The idea is that the user completes step 1; on clicking Next, the form fields are validated and the user is taken to step 2, and so on, until the end, where upon final submission the order is written to the database and the next process is triggered internally.
    Should I have one page that, on submission of step 1, posts back to itself, processes the data, and then loads a separate div of info for step 2, or...?
    Any suggestions would be great.  Thank you so much in advance for your help, I sincerely appreciate it.
    Ciao'
    D.

    Hello,
    Thank you so much for that. Let me qualify a few things as I probably should have in the first place. (my apologies)
    Coldfusion 8
    SQL Server  2005
    There is no payment or credit card information being provided.
    The user comes online, goes through a basic order process for some work to be done. As mentioned, it is a multi step process for gathering their information.
    Once the entire order is in and all the fields validated along the way to ensure they were populated where required, the order is to be written into the pending orders table and an email is sent to the branch closest to the customer notifying them of the new order with a link into the details. The branch then calls them directly to confirm the details of the order before activating it.
    So, the code I received is next to impossible to follow; for the life of me I cannot figure out what the former developer has done. I need to make some changes to the process, and if I cannot even follow the flow to figure out where to make my changes, that could pose a problem.
    I have not coded too much in Coldfusion for the past two years but did so quite extensively before that. I totally agree on the CFTransaction suggestion. I guess what I was looking for is, are there any best practices for coding that I should be aware of, especially considering what I want to accomplish? Previously we used the "fusebox" concept of coding and had most of our code in CustomTags in a very reusable and easy to follow structure and flow.
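    On the CFTRANSACTION point, here is a minimal sketch of the final write it would wrap, expressed directly in T-SQL against a hypothetical pending_orders schema (table, column, and variable names are illustrative, not from this thread):

        -- SQL Server 2005 compatible; values come from the validated wizard steps
        DECLARE @customer_id INT, @branch_id INT;
        SET @customer_id = 42;
        SET @branch_id = 7;

        BEGIN TRY
            BEGIN TRANSACTION;

            -- order header
            INSERT INTO pending_orders (customer_id, branch_id, created_at)
            VALUES (@customer_id, @branch_id, GETDATE());

            -- one line item gathered during the wizard steps
            INSERT INTO order_items (order_id, item_description, quantity)
            VALUES (SCOPE_IDENTITY(), 'Site survey', 1);

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- leave no partial order behind
            RAISERROR('Order insert failed', 16, 1);  -- let the app surface the error
        END CATCH;

    From CFML, a CFTRANSACTION block around the equivalent CFQUERY tags achieves the same atomicity.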
    Any thoughts/suggestions would be great! Thank you very much!
    D.

  • Deadline Branch in Correlation Process - Best Practice

    Hello,
    I have an integration process with a correlation - there is an asynchronous send step which activates a correlation, and afterwards an asynchronous receive step that uses that correlation.
    Furthermore I have a deadline branch to cancel the process after 24 hours.
    My question now is:
    There could be (rare) cases where a message arrives later than 24 hours. According to my understanding, the received message will then block the inbound queue, as no active correlation can be found anymore. Is this correct? How can I avoid this situation? I assume a blocked queue would also block other messages that are sent to the integration process.
    What would be best practice to handle such a scenario? I could leave the process instance open for 1 month, however this might have a significant impact on system performance.
    Thank you for your advice.

    There could be (rare) cases where a message arrives later than 24 hours, so according to my understanding the received message will block the inbound queue as no active correlation can be found anymore
    A "no correlation found" error will occur only when the BPM instance is running and the message tries to enter the relevant receive step (not the first one).
    However, since the process is cancelled after the deadline, you need not worry about the message going into the queue and blocking the BPM queue.
    Regards,
    Abhishek.

  • IDoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound IDocs. The message type is SHPCON, and transaction volume is very high. I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    Almost every day we have instances of the RBDMANI2 job getting stuck and running for a very long time. We frequently receive multiple SHPCON IDocs containing the same material number, and IDocs often fail because the material in the IDoc has become locked. Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed IDocs begin processing. The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound IDocs such as this for maximum performance in a very high volume system. I know that RBDAPP01 processes IDocs in status 64 and 66, and RBDMANI2 is used to reprocess IDocs in all statuses. I have been told that setting the messages to trigger immediately in WE20 can result in poor performance. So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but Note 1333417 (Performance problems when processing IDocs immediately) does state that immediate processing is not a good option for high volumes.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Af:dialog model-restore / cancel-button processing best practice?

    Using JDev 11.1.1.3. I have an af:dialog running in an af:popup which contains auto-submit components (for cross-component enablement, validation etc.). My question is: what are the preferred ways of discarding the submitted model changes made through popup processing if/when the af:dialog Cancel button is pressed by the user? I figured that using a task flow for the popup content could be an option, using the task flow savepoint-restore feature, but that looks more like a database restore than a model restore. I want to be able to restore the model content to the way it looked before the popup executed, without requiring a submit to the database. How is this most commonly and best achieved?
    Thanks,

    Taskflow savepoints are not database savepoints. A transactional BTF can be configured to issue an automatic savepoint at TF entry and eventually to "roll back" to it at TF exit. The internal implementation uses the ApplicationModule's passivation/activation mechanism to passivate the AM state at TF entry and to activate it again at TF exit, back to the state passivated at entry. In this way it appears that you have made no modifications in ADF BC, so your model layer will be restored to the state before TF entry. (Of course, you must not perform any DB commits during the lifetime of this TF.) I have used this mechanism successfully for the same goal you are asking about.
    Also there are savepoints managed by the ADF Controller, but I could be of little help here because I have never used them. I suspect that this mechanism could be what you need, so you may have a look here for more details:
    Adding Save Points to a Task Flow
    and in this thread:
    {thread:id=2128956}
    Dimitar

  • Filming to film editing process - best practice?

    Hey everyone,
    This is an odd question, and I'm not really sure where to start looking. But basically, I've been doing a lot of film editing for my company. They send out 2 people to do the filming, then hand me the raw footage to edit. I am given a rough storyboard as well.
    Now, my problem is this - because the filming is done by easiest access at the time (so it won't necessarily be in the order of the storyboard) and I'm not involved in the actual filming (so I don't know roughly when things were filmed), it's taking me much longer to go through all the footage and figure out what I need.
    So does anyone have any good links or advice on how best to bridge these two processes? Or any experience in the industry to help me out?
    I get very tight deadlines, and don't really have time to trawl through hours of video, which I then forget and get lost in anyway, so I have to search through it again.
    Thank you for your help

    Steven L. Gotz wrote:
    Or, learn to use Adobe Prelude. I don't use it, but those who do might chime in here with their own opinions on the subject.
    It's been a little while since I've worked extensively in Prelude, so some of the features and details may have changed, but the workflow is the same, so I'll give it a shot...
    Prelude is an app specifically designed to log and manage your footage before (and during...) the editing process. There's a lot you can do with it to edit metadata, add markers or selectively ingest footage, but at its most basic level it's good for marking things up into subclips and then ordering them as a sort of rough cut to bring directly into PrPro. But of course that all takes time too, so depending on what your needs and proficiency level are, it still might be better to just use bins and things to organize it all in PrPro and then edit directly from that.
    Any way you cut it, 'logging' is a huge job, big enough that we developed a whole app just to help people do it effectively. That's also why production houses often have dedicated logging pros (but it still often falls to the editor in a big way). The production people in the field could certainly help your plight by trying to organize things a little as they go, but that sort of work comes with its own heavy pressures and priorities, so that probably won't happen.

  • PL/SQL After submit process - best practice?

    I have an after-submit process which fires a PL/SQL procedure. In this procedure I do some updates and would also like to generate some XML output and send it to the browser so that the user can save it to a file. What I'm asking is: what is the proper way to handle this?
    I realize that starting the procedure from an after-submit process is too late. If I understand correctly, the page is already rendered at that time, so htp.p output from the PL/SQL procedure is not shown (although the procedure is executed). So I create a branch to the PL/SQL procedure (after the button is pressed). That way the procedure actually creates a new window and I can use the htp.p functions. Although now I have trouble closing the window, I hope I can manage that.
    Is there some other, better way to do the export? Maybe a JavaScript popup and calling the procedure from there? Any suggestions?
    Thanks!
    Marko

    How should I send this content to the user so that the browser recognizes it as a file (for opening or saving)?
    Put that code in an onLoad process similar to what Scott shows at http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html
    With this in place, when you issue a show request on that page, your generated content will be offered by the browser using an open/save dialog box.
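    To make that concrete, here is a minimal sketch of such a process (the generating function and page item are hypothetical, and the call that stops normal page rendering varies by APEX/HTML DB release):

        declare
          -- get_order_xml and P10_ORDER_ID are illustrative names only
          l_xml varchar2(32767) := get_order_xml(:P10_ORDER_ID);
        begin
          owa_util.mime_header('text/xml', FALSE);                          -- set content type, keep header open
          htp.p('Content-Disposition: attachment; filename="export.xml"');  -- ask the browser to save, not render
          owa_util.http_header_close;
          htp.p(l_xml);                                                     -- emit the payload (chunk it for CLOB-sized output)
          -- finally, stop the normal page from rendering underneath the download;
          -- the exact mechanism depends on the release (e.g. apex_application.stop_apex_engine
          -- in later APEX versions)
        end;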

  • Arranging fields in a table-like form: best-practice solution wanted

    Hello Experts,
    I'm wondering if there is a 'best practice' for how to arrange fields in a table-like form.
    I know about cross-tabs, but that's not what we need. Most of the requirements I have come across are simply that certain fields should be put in a certain order in a table-like layout.
    We have tried to do this using the drawing functions (e.g. putting a box around the fields and using certain border styles), but it often happens that the lines overlap or there are breaks between them, so you have to do a lot of manual fiddling with the 'table'.
    Since this is a requirement I've come across in many reports, I can't believe that this is supposed to be the best solution.
    I don't understand why there isn't a table-like element in Crystal Reports to use for this, e.g. put a table with x rows and y columns in the header or group header section and then just put the fields in it.
    Many thanks in advance for your help !

    Hi Frank,
    You can use the built-in templates available in the Template Expert.
    Click the Report menu -> Template Expert.
    Select the desired template (the Table Grid template would suit best here) and click OK.
    There is no facility for inserting a table directly, as you said. You will have to build it manually using lines and boxes.
    Hope this is helpful.
    Regards

  • Best practice for select access to users

    Not sure if this is the correct forum to post in; if not, let me know where I should post.
    From my understanding this is the best forum to ask this question.
    Are you aware of any "Best Practice Document" on granting select access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
    From time to time users want more and more access to different tables so that they can investigate properly.
    Let me know if there is a best practice document around this.
    I asked in this forum as this is related to PL/SQL access.

    Welcome to the forum!
    Whenever you post provide your 4 digit Oracle version.
    >
    Are you aware of any "Best Practice Document" on granting select access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
    From time to time users want more and more access to different tables so that they can investigate properly.
    Let me know if there is a best practice document around this.
    >
    There are many best practices documents about various aspects of security for Oracle DBs, but none are specific to developers doing investigation.
    Here is the main page for Oracle's OPAC white papers about security.
    http://www.oracletechnetwork-ap.com/topics/201207-Security/resources_whitepaper.cfm
    Take a look at the ones on 'Oracle Identity Management' and on 'Developers and Identity Services'.
    http://www.dbspecialists.com/files/presentations/implementing_oracle_11g_enterprise_user_security.pdf
    This paper by Database Specialists shows how to use Oracle Identity Management to limit access to users such as developers through the use of roles. It shows some examples of users using their own account but having limited privileges based on the role they are given.
    And this Oracle White Paper, 'Oracle Database Security Checklist', is a more basic security doc that discusses the entire range of security issues that should be considered for an Oracle Database.
    http://www.oracle.com/technetwork/database/security/twp-security-checklist-database-1-132870.pdf
    You don't mention what environment (PROD/QA/TEST/DEV) you are even talking about or whether the access is to deal with emergency issues or general reproduction and fixing of bugs.
    Many sites create special READONLY roles, e.g. READ_ONLY_APP1, and then grant privileges to those roles for the tables/objects that the application uses. That role can then be granted to users who need privileges for that application and revoked when they no longer need it.
    Some sites prefer creating special READONLY users that have those read only roles. If a user needs access the DBA changes the password and provides the account info to the user. When the user has completed their duties the DBA resets the password to something no one else knows.
    Those special users have auditing on them and the user using them is responsible for all activity recorded in the logs during the time the user has access to that account.
    In general you grant the minimum privileges needed and revoke them when they are no longer needed; generally through the use of roles.
    >
    Asked in this forum as this is related to PL/SQL access.
    >
    Please explain that. Your question was about 'access to different tables'. How does PL/SQL access fit into that?
    The important reason for the difference is that access is easily controlled through the use of roles, but in named PL/SQL blocks roles are disabled. So the special roles and accounts mentioned above are well suited to allowing developers to query data, but not if the user needs to execute PL/SQL code belonging to another schema (the app schema).
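    As a concrete sketch of the role pattern described above (schema, object, and user names are all illustrative):

        -- Hypothetical application schema APP1 and developer account DEV_USER
        CREATE ROLE read_only_app1;

        GRANT SELECT ON app1.orders    TO read_only_app1;  -- one grant per object the app uses
        GRANT SELECT ON app1.customers TO read_only_app1;

        GRANT read_only_app1 TO dev_user;     -- while the investigation is open
        REVOKE read_only_app1 FROM dev_user;  -- once it is closed

        -- Caveat from the post above: these grants cover ad hoc queries, but they are
        -- invisible inside APP1's definer's-rights PL/SQL, where roles are disabled.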

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns. I insert data into this table from a daily capture table, which also contains the two columns that make up the key in the final data table but does not constrain them (they are not unique there). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? What would that look like?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table has a unique key constraint on two key columns. [unh? one two-column key or two one-column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table, which also contains the two columns that make up the key in the final data table but does not constrain them (they are not unique there). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
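    For illustration, a minimal sketch of that MERGE (table and column names are hypothetical, and MERGE requires SQL Server 2008 or later):

        -- Hypothetical tables: final_data(key1, key2, payload) with a unique
        -- constraint on (key1, key2); daily_capture has the same columns, unconstrained.
        MERGE INTO final_data AS tgt
        USING (SELECT DISTINCT key1, key2, payload
               FROM daily_capture) AS src          -- DISTINCT collapses exact duplicates in the source
           ON tgt.key1 = src.key1
          AND tgt.key2 = src.key2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (key1, key2, payload)
            VALUES (src.key1, src.key2, src.payload);

    On SQL Server 2005, an INSERT ... SELECT with a WHERE NOT EXISTS subquery against final_data achieves the same result, also without temp tables.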
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set the MRP type to "Reorder Point Planning". This way you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, neither planned nor unplanned maintenance will be delayed.
    3. By doing goods issue against a reservation, quantities can be tracked against the order and the equipment.
    As this question is MM & WM related, those forums can give better clarity on it.
    Regards,
    Maheswaran.

  • Best Practice Process Chains

    Hi All,
    What are the Best Practice process chains recommended by SAP, mainly for FICO-related content?
    Does anyone have the structure of the FI staging and reporting layer process chains, step by step?
    Thanks in Advance
    Regards
    con

    Hello con,
    Two things to follow:
    1) Have a change run step once the master data load completes, for all the InfoObjects you have loaded.
    The change run activates the master data and adapts all aggregates for the newly loaded master data and hierarchies.
    2) If you want to replace the existing data in the DataTarget completely, first delete the data in the DataTarget (and in the PSA if present) and load afterwards. This will help improve load times.
    Hope this helps!
    - Nandita

  • Customize OIM jsp forms using xlWebApp.war: Best Practice?

    Based on searching the forum, it appears that inflating the war files, making changes and putting them back again is the way to customize the jsp forms in OIM. Is this a best practice though? What happens when I want to upgrade? Do I lose all my customizations or is there another way to do this?

    Hi,
    That is the only way to do it. On every upgrade you will have to merge your changes into the new war file and redeploy it. Just take care of one thing: do not modify the existing JSPs or classes. Create your own JSPs and classes.
    Regards

  • Customization approach as per best practice for SharePoint Online

    Hi All,
    I am working for a customer on customizations for SharePoint Online. I need to create the following customizations.
    For each department one site collection is to be created; there will be 15 site collections.
    Each site collection will have a couple of team sites.
    Each team site will have a couple of document libraries and custom lists.
    The custom lists and document libraries will have custom views.
    The master page and layout will be customized to apply the UI branding.
    The customer wants configuration management to follow Microsoft best practice, and I am wondering what approach I should use.
    Should I create a Visual Studio solution? Since 15 different site collections are required, I believe a sandboxed solution will not be feasible, as sandboxed solutions are scoped to a single site collection.
    I also believe that if I create a Visual Studio solution, the development effort will be extensive.
    I am not sure whether it is feasible to use SharePoint Designer to apply this customization. If it is possible, how would I promote the customization to production?
    I am also unsure how, with SharePoint Online, I will keep the production and development environments separate. What is the best practice around this?
    Regards,
    Restless Spirit

    Hi,
    You can create a custom master page using SharePoint Designer. For the first four points, from creating the site collections to creating the views, you can describe the hierarchy of objects in a CSV file and then write a PowerShell script that reads the CSV and creates the site collections, team sites, lists/libraries, and views.
    http://blogs.technet.com/b/fromthefield/archive/2013/08/22/create-a-site-structure-using-powershell.aspx
    http://blog.falchionconsulting.com/index.php/2009/12/creating-a-sharepoint-2010-site-structure-using-powershell/
    Details about the SharePoint Online Management Shell can be found at the links below:
    http://technet.microsoft.com/en-us/library/fp161362%28v=office.15%29.aspx
    https://support.office.com/en-GB/article/Introduction-to-the-SharePoint-Online-Management-Shell-c16941c3-19b4-4710-8056-34c034493429
    Best Regards,
    Brij K

  • Looking for Some Examples / Best Practices on User Profile Customization in RDS 2012 R2

    We're currently running RDS on Windows 2008 R2. We're controlling users' desktops largely with Group Policy, and we're using Folder Redirection to configure their Start Menus as well.
    We've installed a Server 2012 R2 RDS box and all the applications that users will need. Should we follow the same customization steps for 2012 R2 that we used in 2008 R2? I would love to see some articles by someone who has customized a user profile/desktop in 2012 R2 to see what's possible.
    Orange County District Attorney

    Hi Sandy,
    Here are some related articles below for you:
    Easier User Data Management with User Profile Disks in Windows Server 2012
    http://blogs.msdn.com/b/rds/archive/2012/11/13/easier-user-data-management-with-user-profile-disks-in-windows-server-2012.aspx
    User Profile Best Practices
    http://social.technet.microsoft.com/wiki/contents/articles/15871.user-profile-best-practices.aspx
    Since you want to customize user profile, here is another blog for you:
    Customizing Default users profile using CopyProfile
    http://blogs.technet.com/b/askcore/archive/2010/07/28/customizing-default-users-profile-using-copyprofile.aspx
    Best Regards,
    Amy
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]
