System Emails Best Practices

Hi, I need a professional recommendation on best practice for dealing with our customers' system emails.
I'm asking because BC creates all the workflows and system emails with the partner's details, which I find really ridiculous.
You then have to go and change all of these emails to the client's email address. I realised lately that BC has a default email setting for system emails. I changed all of these to my client's email address, but I'm still receiving inquiries and they are somehow getting my name.
This is very embarrassing and I hate to see it happen again. Can someone please help me with the procedures I need to follow to avoid this?
As partners, do we usually leave ourselves as administrators, receiving every workflow our client receives? What if you have 100 clients?
Shouldn't BC add a "BC Partner" user type, like the Super Administrator in Joomla, instead of lumping us in with the other administrators?
Thanks
Michel

Hi Michel,
The system is a framework in terms of site elements, so of course it will have default information, which you can choose to change or not (such as workflows).
In terms of forms and notification emails:
- Make sure you update form notifications and emails
- Make sure you update the notification emails for mailing lists
- Make sure the mass change for system emails is applied, and choose the correct template for each one if you wish
For Workflows:
- The system comes with some basic ones, and of course you can build your own.
- Normally you will modify or add your clients to "users" or give them their own role, such as "business name admin" for example. You may have multiple users, each with multiple roles. Under the new interface, permissions control what they see and have access to.
- Workflows are initially assigned to you mainly for testing, allowing you to receive workflow and email notifications as you develop, to see how things are shaping up and working.
- Go into the workflows and change them so that emails/texts/steps go to the right people.
You can find more information on workflows in the knowledgebase if you haven't gone through it already.

Similar Messages

  • Site System Roles - Best Practices

    Hi all -
    I was wondering if there were any best practice recommendations for how to configure Site System Roles. We had a vendor come onsite and set up our environment and, without going into a lot of detail on why, I wasn't able to work with the vendor. I am trying to understand why they did certain things after the fact.
    For scoping purposes, we have about 12,000 clients, and this is how our environment was set up:
    SERVERA - Site Server, Management Point
    SERVERB - Management Point, Software Update Point
    SERVERC - Asset Intelligence Synchronization Point, Application Catalog Web Service Point, Application Catalog Website Point, Fallback Status Point, Software Update Point
    SERVERD - Distribution Point (we will add more DPs later)
    SERVERE - Distribution Point (we will add more DPs later)
    SERVERF - Reporting Services Point
    The rest is dedicated to our SQL cluster.
    I was wondering if this seems like a good setup, and had a few specific questions:
    Our Site Server is also a Management Point, and we have a second Management Point as well - is that best practice?
    Should our Fallback Status Point be a Distribution Point?
    I really appreciate any help on this.

    The FSP role has nothing to do with the 'Allow fallback source location for content' option on the DP.
    http://technet.microsoft.com/en-us/library/gg681976.aspx
    http://blogs.technet.com/b/cmpfekevin/archive/2013/03/05/what-is-fallback-and-what-does-it-mean.aspx
    Benoit Lecours | Blog: System Center Dudes

  • Portal System Transport (Best Practice)

    Hello,
    We have a DEV, QA and PRD landscape. We have created systems that connect to the backend ECC systems. Since the DEV and QA ECC systems each have one application server, we created a portal system of type 'single application server' in the DEV Portal that points to the DEV ECC system. Subsequently we transported this portal system to the QA portal and made it point to QA ECC.
    Now, the PRD ECC system is of type 'load balancing' with multiple servers. The portal system that connects to the PRD ECC system should also be of type 'load balancing', so we cannot transport the QA portal system that connects to the QA ECC system to PRD, since it is of type 'single application server'.
    What would be the best strategy to create the portal system in the PRD portal that points to PRD ECC?
    1. Create the portal system freshly in the PRD system, of type 'load balancing'. Does that adhere to the best practice approach, which suggests not creating anything in the PRD system directly?
    OR
    2. Is there any other way I should follow to make sure that best practices for portal development are observed?
    Regards
    Deb

    I don't find it useful to transport system objects so I make them manually.

  • Development System Backup - Best Practice / Policy for offsite backups

    Hi, I have not found any recommendations from SAP on best practices for backing up Development systems offsite, so I would appreciate some input on what policies other companies have. We continuously make enhancements to our SAP systems and perform daily backups; however, we do not send any Development system backups offsite, which I feel is a risk (losing development work, losing transport & change logs...).
    Does anyone know whether SAP has any recommendations on backing up Development systems offsite? What policies does your company have?
    Thanks,
    Thomas

    Thomas,
    Your question does not mention consideration of both sides of the equation - you have mentioned only the risk. What about the incremental cost of frequent backups stored offsite? Shouldn't the question be how the 'frequent backup' cost matches up with the risk cost?
    I have never worked on an SAP system where the developers had so much unique work in progress that they could not reproduce their efforts in an acceptable amount of time, at an acceptable cost. There is typically nothing in dev that is so valuable as to be irreplaceable (unlike production, where the loss of 'yesterday's' data is extremely costly). Given how seldom an offsite dev backup is actually required for a restore, and given that the value of the daily backed-up data is already so low, the actual risk cost is virtually zero.
    I have never seen SAP publish a 'best practice' in this area. Every business is different, and I don't see how SAP could possibly make a meaningful recommendation that would fit yours. In your business, the risk (the pro-rata cost of infrequently needing to use offsite storage to replace or rebuild 'lost' low-cost development work) may in fact outweigh the ongoing incremental costs of creating and maintaining offsite daily recovery media. Your company will have to perform that calculation to make the business decision. I personally have never seen a situation where daily offsite backup storage of dev was even close to making any kind of economic sense.
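    To put invented numbers on that calculation: if an offsite dev restore is needed once every five years and re-creating the lost work would cost $20,000, the annualized risk cost is about $4,000; if offsite media handling costs $800 a month ($9,600 a year), the backups cost more than double the risk they cover.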
    Best Regards,
    DB49

  • System.out vs. System.err -- best practices?

    When I catch an exception, where is the best place to write the stack trace, etc? Should I write it to System.out or System.err?
    Thanks,
    Curt

    If you call printStackTrace() it will by default print to the standard error stream.
    But really you should look at log4j.

  • Forward Terminated Employee Emails Best Practice

    I'll start off by saying Exchange is not my bread and butter, but it is my responsibility now, so this is my logical thought...
    When we have an employee leave and their email needs to be forwarded to a manager or another employee, the current process is to disable/delete their account and add their SMTP address to the person who will now receive their emails. Is this the most logical way to accomplish this? To me it seems like the quickest way, but it doesn't allow much flexibility or logic. All actionable rules would need to take place in the Outlook of the new receiver, like auto-response, rejection, etc.
    I don't know how rampant the SMTP process is throughout the environment, but I can find out via PowerShell, I'm sure.
    But let's say it happens on a lot of mailboxes... If I create a Transport Rule for each of these instances, will the TRs create a lot of extra overhead on the servers having to process the rules? How do others handle this type of task? It has to be pretty common.

    While that will work, you can also put a forwarding address on the terminated mailbox. On the Mail Flow Settings tab, get the properties of the Delivery Options. Add a forwarding address, and if you want rules in the original mailbox to handle auto-responses, etc., select the 'Deliver message to both forwarding address and mailbox' tickbox. You can also do this in the shell with the following command:
    Set-Mailbox -Identity <mailbox ID> -DeliverToMailboxAndForward $true -ForwardingAddress <forwarding address>
    The advantage of this is that when the account is finally deleted, there is no cleanup required.
    And you do NOT want to use Transport Rules, as you surmise. (-;
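    For illustration, here is a rough sketch of both halves in the Exchange Management Shell: auditing how widespread the extra-SMTP-address approach already is, and then applying the forwarding approach described above. The mailbox name 'jsmith' and the manager address are hypothetical.

    # Audit: list mailboxes carrying more than one SMTP proxy address.
    # (Secondary addresses can also be legitimate aliases, so review the
    # output by hand before treating it as a list of terminated users.)
    Get-Mailbox -ResultSize Unlimited |
        Where-Object { ($_.EmailAddresses | Where-Object { $_.PrefixString -eq 'smtp' }).Count -gt 1 } |
        Select-Object Name, EmailAddresses

    # Forward a terminated user's mail and keep a copy in the original
    # mailbox so its own rules (auto-response, rejection) still fire.
    Set-Mailbox -Identity 'jsmith' -ForwardingAddress 'manager@contoso.com' -DeliverToMailboxAndForward $true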

  • SPARC system asset - best practice

    Hello,
    Is it possible to have an OS asset and a Service Processor asset for a single host system as one asset subtree?
    (Platform is T1000, EM is 12.1.1.0.0)


  • System Management Best Practice Questions

    I help out with IT at a small fire department in a town of around 3,000 people. They have about 10 PCs. The only time I really do anything is when someone has a question or points out a problem. Everything is in place and works well. However, is there anything I should be regularly checking on? We have the usual in place: anti-virus on all PCs, Windows Updates are automatic, all PCs have passwords, the Wi-Fi has a password, etc.


  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years managing and planning smaller Windows server environments, however, my non-profit has recently purchased
    two StoreEasy 1630 servers and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end user
    workstations, taking into account redundancy, backup and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we need switch-hardware-based LACP, or will the Windows 2012 NIC-teaming options be sufficient across the four 1000T ports on the StoreEasy?
    NAS Enclosures
    There are 2 StoreEasy 1630 Windows Storage servers. One in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives, for a total raw storage capacity of 42TB. By default the StoreEasy servers were configured with 2 RAID 6 arrays with 1 hot standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and presents two logical drives to the storage server: a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can later be increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size should we make them? Is there a max capacity? 64TB? Please let us know what the best approach to storage pooling would be for our environment.
    Windows Sharing
    We were thinking that we would create a single share granting all users within the AD FullOrganization user group read/write permission. Then within this share we were thinking of using NTFS permissions to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach, or do you suggest a different one?
    DFS
    In order to provide high availability and redundancy we would like to use DFS replication on shared folders to mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been informed that HP will provide an upgrade to 2012 R2 Storage Server when it becomes available. In the meanwhile, how should we design our storage and replication strategy around the limits?
    Backup Strategy
    I read that Windows Server Backup can only back up disks up to 2TB in size. We were thinking that we would like our 2 current StoreEasy servers to back up to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to capture the data volumes, or should we use third-party backup software?

    Hi,
    Sorry for the delay in replying.
    I'll try to reply to each of your questions. However, for the first one you may want to post to the Network forum for further information, or contact your device provider (HP) to see if there is any recommendation.
    For Storage Pooling:
    From your description you would like to create VHDX files on the RAID 6 disks to allow for growth. That is fine and, as you said, the limit is 64TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is using Storage Spaces, a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    It lets you add hard disks to a storage pool and create virtual disks from the pool. You can add disks to the pool later and create new virtual disks if needed.
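    For illustration, a minimal sketch of the Storage Spaces route (assuming the enclosure can present raw, poolable disks to the OS rather than RAID 6 logical drives; the pool and disk names are made up):

    # Find disks that are eligible for pooling (unpartitioned, not in use).
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a pool on the server's storage subsystem, then carve out a
    # thinly provisioned mirrored virtual disk so capacity can grow when
    # more enclosures are added later.
    New-StoragePool -FriendlyName 'CampusPool' -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName 'CampusPool' -FriendlyName 'Shares01' -Size 10TB -ProvisioningType Thin -ResiliencySettingName Mirror

    # Initialize, partition and format the new disk as usual.
    Get-VirtualDisk -FriendlyName 'Shares01' | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Shares01'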
    For Windows Sharing
    Generally you will end up with different shared folders over time. Creating all shares under a single root folder sounds good, but in practice it may not be achievable, so it depends on your actual environment.
    For DFS replication limitation
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR department about the limitation. DFS-R can actually replicate more data than that; there is no exact limit. As you can see, the article was created in 2009.
    For Backup
    As you said, there is a backup limitation (2TB for a single backup). So if that cannot meet your requirements you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx
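    If Windows Server Backup does fit, a minimal sketch of the nightly cross-site job described in the question might look like this (wbadmin ships with the Windows Server Backup feature; the share path, volume letter and credentials are placeholders):

    # One-off backup of the data volume to an unreplicated share on the
    # partner StoreEasy server.
    wbadmin start backup -backupTarget:\\storage02\backups -include:D: -quiet

    # Or register a recurring nightly run. Note that backing up to a
    # remote share keeps only the most recent backup version.
    wbadmin enable backup -addtarget:\\storage02\backups -include:D: -schedule:23:00 -user:CONTOSO\backupsvc -password:Placeh0lder!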
    If you have any feedback on our support, please send to [email protected]

  • Best practices for data entry online system

    Hi all
    I am (with a team of 4 members) going to build an online data entry system which may have approximately 30 screens. I am going to use Spring BlazeDS remoting to connect to the middleware.
    Could anyone please suggest some good practices to follow on the Flex side for such a "data entry" application?
    The points below are a few common best practices we need to follow while coding, but I am not sure how to achieve them on the Flex side:
    User experience (I can probably get a little info regarding this from my client)
    Code maintainability
    Code extendibility
    Memory and CPU optimization
    Ability to work with team members (multiple checkouts)
    Best framework
    So I am looking for valuable suggestions from great minds.

    There are two options, none of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create an SQL login that is a member of sysadmin - or that has rights to impersonate an account that is a member of sysadmin. And for that matter, you could use the built-in sa account - but rename it to something else.
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden - the IP addresses for the login attempts were in China.
    Just so you know what you can expect.
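    For illustration, a sketch of that approach (the login name, password and server are placeholders; ALTER SERVER ROLE needs SQL Server 2012 or later - on older versions use sp_addsrvrolemember):

    Import-Module SqlServer   # or the older SQLPS module

    $q = '
    -- A dedicated sysadmin login instead of relying on Windows accounts.
    CREATE LOGIN [AppAdmin] WITH PASSWORD = ''Str0ng!Passphrase'';
    ALTER SERVER ROLE [sysadmin] ADD MEMBER [AppAdmin];
    -- Rename sa so brute-force attempts against that name hit a dead end.
    ALTER LOGIN [sa] WITH NAME = [svc_dba];
    '
    Invoke-Sqlcmd -ServerInstance 'SQLDEV01' -Query $q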
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Best Practice for setting systems up in SMSY

    Good afternoon - I want to clean up our SMSY information and I am looking for some best practice advice on this. We started with an ERP 6.0 dual-stack system, so I created a logical component Z_ECC under "SAP ERP" --> "SAP ECC Server" and assigned all of my various instances (Dev, QA, Train, Prod) to this logical component. We then applied Enhancement Package 4 to these systems. I see under logical components there is an entry for "SAP ERP ENHANCE PACKAGE". Now that we are on EhP4, should I create a different logical component for my ERP 6.0 EhP4 systems? I see in logical components under "SAP ERP ENHANCE PACKAGE" there are entries for the different products that can be updated to EhP4, such as "ABAP Technology for ERP EHP4", "Central Applications", ... "Utilities/Waste&Recycl./Telco". If I am supposed to change the logical component to something based on EhP4, which should I choose?
    The reason that this is important is that when I go to Maintenance Optimizer, I need to ensure that my version information is correct so that I am presented with all of the available patches for the parts that I have installed.
    My Solution Manager system is 7.01 SPS 26. The ERP systems are ECC 6.0 EhP4 SPS 7.
    Any assistance is appreciated!
    Regards,
    Blair Towe

    Hello Blair,
    In this case you have to assign the products EHP 4 for ERP 6 and SAP ERP 6 to your system in SMSY.
    You will then have 2 entries in SMSY, one under each product; the main instance for EHP 4 for ERP 6 must be Central Applications, and the one for SAP ERP 6 is SAP ECC SERVER.
    This way your system should be correctly configured to use the MOPZ.
    Unfortunately I'm not aware of a guide explaining these details.
    Sometimes the System Landscape guide at service.sap.com/diagnostics can be very useful. See also note 987835.
    Hope it can help.
    Regards,
    Daniel.
    Edited by: Daniel Nicol on May 24, 2011 10:36 PM

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and to the Basis group at my company. I will be working on a project to identify best practices for system- and service-level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring, but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape... what are common things we would want to monitor across, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini
    Edited by: Srinivasan Radhakrishnan on Jul 7, 2008 7:02 PM

  • External System Authentication Credentials Best practice

    We are in the process of a 5.0 upgrade.
    We are using NTLM as our authentication source to get the users and groups and authenticate against the source. So currently we only have the NT user ID and group info (the NT domain password is not stored).
    We need to get user credentials for other systems/applications so that we can pass them on to the specific applications when we search/crawl or integrate with those apps/systems.
    We were thinking of getting the credentials (app user ID and password) for the other applications by developing a custom Profile Web Service to gather the information specific to these users. However, we don't know whether the external application password is secure when retrieved from the external repository via a PWS and stored in the Portal database.
    Is this the best approach to take to gather the above information? If not, please recommend the best practice to follow.
    Alternatively, we can have the users enter the external system credentials by having them edit their user profile. However, this approach is not preferred.
    If we can't store the user credentials for the external apps, we won't be able to enhance the user experience when doing a search or click-through to the other applications.
    Any insight would be appreciated.
    Thanks.
    Vanita

    Hi Vanita,
    So your solution sounds fine - however, it might be easier to use an SSO token or the Plumtree user ID in your external applications as a definitive authentication token.
    For example, if you have some external application that requires a username and password, and you are in a portlet view of the application, the application should be able to take the user ID Plumtree sends it to authenticate that it is the correct user. You should limit this sort of password bypass to traffic being gatewayed by the portal (i.e. coming from the portal server only).
    If you want to write a Profile Web Service, the data that gets stored in the Plumtree database is exactly what the Profile Web Service sends it as the value for a particular attribute. For example, if your PWS tells Plumtree that the APP1UserName and APP1Password for user My Domain\Akash are Akash and password, then that is what we save. If your PWS encrypts the password using some 2-way encryption beforehand, then that is what we will save. These properties are simply attached to the user, and can be sent to different portlets.
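    For what it's worth, a minimal sketch of what 2-way (reversible) encryption means here, as opposed to a one-way hash - PowerShell/.NET purely to illustrate the idea; real key and IV management is the hard part:

    # Aes::Create() generates a random key and IV; the same pair both
    # encrypts and decrypts, which is what makes the scheme 2-way.
    $aes    = [System.Security.Cryptography.Aes]::Create()
    $plain  = [System.Text.Encoding]::UTF8.GetBytes('APP1Password-value')
    $cipher = $aes.CreateEncryptor().TransformFinalBlock($plain, 0, $plain.Length)
    # Store [Convert]::ToBase64String($cipher) as the profile attribute.

    # Later, the same key and IV recover the original password:
    $back = $aes.CreateDecryptor().TransformFinalBlock($cipher, 0, $cipher.Length)
    [System.Text.Encoding]::UTF8.GetString($back)   # -> 'APP1Password-value'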
    Hope this helps,
    -aki-

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that have to be mounted on all the TREX systems (master, backup and slaves), but we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system on TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Best practice in migrating to a production system

    Dear experts,
    Which is the best practice to follow during an implementation project to organize the development, quality and production environments?
    In my case, considering that SRM is connected to the back-end development system, what should be done to connect SRM to the back-end quality environment:
    - connect the same SRM server to the back-end quality environment, even though in this case the old data remains in SRM, or
    - connect another SRM server to the back-end quality environment?
    thanks,

    Hello Gaia,
    If you have a 3-system landscape, the backend connections should be like this:
    SRM DEV - ERP DEV
    SRM QAS - ERP QAS
    SRM PRD - ERP PRD
    If you have a 2-system landscape:
    SRM (client 100) - ERP DEV
    SRM (client 200) - ERP QAS
    SRM PRD - ERP PRD
    Regards,
    Masa
