Best Practices for JMS Service Documentation
Our software consists of a variety of JMS producers and consumers used to transform and transmit business-to-business messages at large scale.
The service nodes do a variety of things, and over the years we've had trouble ensuring that every queue-based service clearly identifies the parameters and payload it accepts, so that everyone from programmers to system administrators can easily see what services are available and how they are to be used.
Some have advocated always adding a web service in front of each message-based service to guarantee that interface contracts are well publicized. I think there are reasons to choose web services and reasons to choose message-based services, and I'm not convinced this is the right answer to the limitations in expressing the design contract for a message-based service. That said, I really like what we've been able to do with self-documenting web services based on annotations, and I wonder if there's a conceptual equivalent in message-based software.
What are your best practices for ensuring your message-based services are as self-documenting as your modern web services?
Thanks in advance for your advice!
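For what it's worth, one conceptual equivalent I've been sketching is a custom annotation that captures a consumer's contract, plus a small reflection routine that renders it as documentation, much the way annotation-driven web frameworks render endpoint docs. Everything here (the annotation, the queue name, the property names) is hypothetical, not an existing JMS facility:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical annotation capturing a queue consumer's contract.
@Retention(RetentionPolicy.RUNTIME)
@interface MessageContract {
    String destination();
    String payloadType();
    String[] requiredProperties() default {};
}

// An example consumer declaring its contract up front.
@MessageContract(destination = "orders.inbound",
                 payloadType = "OrderXml v2",
                 requiredProperties = {"tenantId", "correlationId"})
class OrderConsumer { /* onMessage(...) would live here */ }

public class ContractDoc {
    // Render a consumer's contract as plain text, the way an
    // annotation-driven web framework renders its endpoint docs.
    public static String describe(Class<?> consumer) {
        MessageContract c = consumer.getAnnotation(MessageContract.class);
        return "destination=" + c.destination()
             + " payload=" + c.payloadType()
             + " properties=" + String.join(",", c.requiredProperties());
    }

    public static void main(String[] args) {
        // prints: destination=orders.inbound payload=OrderXml v2 properties=tenantId,correlationId
        System.out.println(describe(OrderConsumer.class));
    }
}
```

A scanner run at build or deploy time could apply the same reflection over every consumer class and publish a service catalog, which is roughly what the annotation-based web frameworks do for us today.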
Edited by: lsamaha on Apr 16, 2012 11:46 AM
*bump
Similar Messages
-
SAP Best Practice for Self Service Procuement
Hello All,
I am trying to find Best Practice for Self Service Procuement in the help.sap.com.
But I couldn't find them. Please help me to locate them.
Hi, check these links below, if useful for you:
http://www50.sap.com/businessmaps/59E32671A32A411692387571253E292A.htm
help.sap.com/.../SAP_Best_Practices_whatsnew_AU_V3600_EN.ppt
www50.sap.com/.../DADF68FA02AB4E0482021E98D8BB986F.htm
Thanks,
Batchu -
Best practices for the Search service in a SharePoint farm
Hi
In a SharePoint web application, many BI dashboards are deployed, and we also plan to
configure enterprise search for this application.
In our SharePoint 2010 farm we have:
2 application servers
2 WFE servers
Here, one application server runs
Central Administration + the Web Analytics service and is itself a domain controller;
the second application server runs only the Secure Store service + PerformancePoint service.
1. If we run the Search server service on the second application server, can it cause any issues for BI performance?
2. Is it best practice to run the PerformancePoint service and the Search service on one server?
3. Is it best practice to run the Search service on an application server where other services are already running?
We have only one SharePoint web application that needs to be crawled and indexed, with the crawl schedule below:
we only run a full crawl per week and an incremental crawl at midnight daily.
adil
Hi adil,
Based on your description, you want to know the best practices for search service in a SharePoint farm.
Different farms have different search topologies; for the best search performance, I recommend that you follow the guidance for small, medium, and large farms.
The articles below give this guidance for the different farm sizes.
The Search service can run with other services on the same server; if conditions permit and you want better performance for the Search service and the other services (including BI performance), you can deploy the Search service on a dedicated server.
If conditions permit, I recommend combining a query component with a front-end web server, to avoid putting crawl components and query components on the same server.
In your SharePoint farm, you can deploy the query components in a WFE server and the crawl components in an application server.
The articles below describe the best practices for enterprise search.
https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
Best regards
Sara Fan
TechNet Community Support -
Best Practice for the Service Distribution on multiple servers
Hi,
Could you please suggest a topology, as per best practice, for the above.
Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel Services, Workflow Manager, App Management, etc.)
Capacity: we have 12 servers, excluding the SQL Server.
Please do not just refer to a URL; suggest per the requirements.
Thanks
srabon
How about a link to the MS guidance!
http://go.microsoft.com/fwlink/p/?LinkId=286957 -
Highly Required CRM 5.0 Best practices for CRM Service Module
Dear all,
I have been searching the Internet for CRM 5.0 best practices for quite a long period, but could not find them anywhere.
Currently SAP provides best practices only for the SAP CRM 2007 version.
Since most of the configuration differs because of the WebClient interface, I request you to refer me to a source where I can get the CRM 5.0 best practices for the Service module.
Your suggestions and help will be highly appreciated.
Best regards
Raghu ram
Hi Srini,
<removed by moderator>
Thank you & Best regards
Raghu ram
Edited by: Raghu Ram on Jul 16, 2009 6:09 AM
Edited by: Raghu Ram on Jul 16, 2009 6:11 AM
Moderator message please review the rules of engagement located here:
https://www.sdn.sap.com/irj/scn/wiki?path=/display/home/rulesofEngagement
Edited by: Stephen Johannes on Jul 16, 2009 8:12 AM -
Best Practices for Using Service Controller for Entity Framework Database
I'm running into an issue my first time creating a web service with a .NET backend on Azure. I designed a database in Entity Framework and had it create the models, but I couldn't create a controller for the table unless I made the model inherit from
EntityData. Here's the catch: the database model has an int Id, but EntityData has a string Id, so, of course, I'm getting errors. What is best practice for what I'm trying to do?
Michael DiLeo
Hi Michael,
Thanks for your posting!
Sorry, I am not totally understanding your issue. Maybe two points need your confirmation:
1. I am confused by the "service controller". Do you mean an MVC controller? Or ServiceController (http://www.codeproject.com/Articles/31688/Using-the-ServiceController-in-C-to-stop-and-start)?
2. Does the type of the Id in the model match the database? In other words, is the type of the Id in the .edmx matched to the database?
By the way, it seems that this issue is more related to EF. You could post this issue on EF discussion for better support.
Thanks & Regards,
Will -
Current best practice for Time service settings for Hyper-V 2012 R2 Host and guest OS's
I am trying to find out what the current best practice is for Time service settings in a Hyper-V 2012 environment. I find conflicting information; can anyone point me in the right direction? I have found some different sources (links below), but again they are not consistent. Thanks
http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/11/19/time-synchronization-in-hyper-v.aspx
http://technet.microsoft.com/en-us/library/virtual_active_directory_domain_controller_virtualization_hyperv(v=ws.10).aspx
http://social.technet.microsoft.com/wiki/contents/articles/12709.time-services-for-a-domain-controller-on-hyper-v.aspx
From the first link provided by Brian, it does state that the time service should be off, but then the update changes that statement. Still, it is best to rely on the first link in the OP - it was written by the guy who has been responsible for much of what gets coded into Hyper-V, starting from before there ever was a Hyper-V. I'd say that's a pretty reliable source.
Time service
For virtual machines that are configured as domain controllers, it is recommended that you disable time synchronization between the host system and guest operating system acting as a domain controller. This enables your guest domain controller to synchronize
time from the domain hierarchy.
To disable the Hyper-V time synchronization provider, shut down the VM and clear the Time synchronization check box under Integration Services.
Note
This guidance has been recently updated to reflect the current recommendation to synchronize time for the guest domain controller from only the domain hierarchy, rather than the previous recommendation to partially disable time synchronization between the
host system and guest domain controller.
. : | : . : | : . tim -
Best practice for web service call
I can add a web service using the standard data connection wizard - works fine. I can also do it all in JavaScript, which gives me a bit more flexibility. Is there some guideline or wisdom for which is best?
It all depends on your requirements.
For example, if you know your web service address at design time, then it would be better to put it in the data connection tab.
But if your web service address changes at run time, based on the environment your application is deployed in, then you can use the JavaScript code to change the web service address dynamically.
Thanks
Srini -
Best practice for implementing services
I am doing some testing with implementing web services and I am wondering what the best way is...
First, some background about the project. The idea is that the user interface uses web services for almost everything. The complete data layer is created somewhere else and I just use those services. For example, I have a WSDL that holds the UserServices. It describes the services for creating, updating, deleting, getUserByCompany, getUserByKey, and some other stuff.
As far as I can see, I have 2 options in JDev:
1) Create a data control based upon the WSDL. This way I can easily drag & drop the services and use the data bindings.
I'm afraid that this approach is not that flexible. It isn't really easy to update the DC once the WSDL has been changed.
I have a popup to edit/create a user. It also does not seem easy to implement this, because when I open the popup for create, my input fields should be bound to the parameters of the createService, but when I open the popup to edit a user, those fields should be bound to the editService instead. This does not look easy...
Also, the table that lists the users depends on the role of the user. When an admin requests the page with the user table, he must see all the users, but when another user requests the page, he can only see the users from his company, so we have 2 services for this: getUsers (gets all the users) and getUsersByCompany. So here also, my table can be bound to 2 services... which does not seem easy to implement.
2) The second way of implementing services is using a proxy. This seems way more flexible. It just creates a Java interface to call the service. This way I can create my own POJOs and create a DC from that.
This way I can create a function getUsers(String company). When I drop that onto my page, I can bind the company parameter to a backing bean. This way I can write some logic in the POJO based upon the value: if company is null I return the result of the getUsers service, else I use the getUsersByCompany service instead.
It's also very easy to regenerate the proxy if the WSDL has been changed - something that isn't possible with the first approach.
What do you do when you use web services this way? Is there any difference in performance?
Any other tips?
You got it basically right.
With the proxy approach you write code that wraps your calls to the web service - and this allows you to do various modifications on how the service is called, what to do with the results, etc.
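The facade-over-proxy idea from the question might be sketched like this. The two-operation interface stands in for the generated proxy; the class and method names are illustrative, not ADF-generated code:

```java
import java.util.List;

// Hypothetical proxy interface generated from the WSDL's two operations.
interface UserService {
    List<String> getUsers();                      // all users (admin view)
    List<String> getUsersByCompany(String name);  // company-scoped view
}

// POJO facade the data control binds to: one method, routing on the parameter.
public class UserFacade {
    private final UserService service;

    public UserFacade(UserService service) { this.service = service; }

    // null company means "no restriction", so call the broad operation
    public List<String> getUsers(String company) {
        return (company == null) ? service.getUsers()
                                 : service.getUsersByCompany(company);
    }

    public static void main(String[] args) {
        // Stub standing in for the generated proxy.
        UserService stub = new UserService() {
            public List<String> getUsers() { return List.of("alice", "bob"); }
            public List<String> getUsersByCompany(String c) { return List.of("alice"); }
        };
        UserFacade facade = new UserFacade(stub);
        System.out.println(facade.getUsers(null));   // [alice, bob]
        System.out.println(facade.getUsers("Acme")); // [alice]
    }
}
```

The point of the facade is that the page binds to one method while the service-selection logic stays in plain Java, so regenerating the proxy after a WSDL change does not disturb the bindings.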
However check out this example to see how you can use the same result set in a Web service data control for both a query and an update/insert:
http://blogs.oracle.com/shay/2010/05/updateinsert_with_adf_web_serv.html -
Best Practices for Service Entry Sheet Approval
Hi All
Just like to get some opinions on best practices for external services management - particularly the approval process for the Service Entry Sheet.
We have a 2-step approval process using workflow:
1. Entry Sheet created (blocked)
2. Workflow to the requisition creator to verify/unblock the Entry Sheet
3. Workflow to the Cost Object owner to approve the Entry Sheet
For high-volume users (e.g. capital projects) this is a cumbersome process - we are looking to streamline it but still maintain control.
What do other leaders do in this area? To me, mass release seems to lack control, but perhaps by using a good release strategy we could find a middle ground?
Any ideas or experiences would be greatly appreciated.
thanks
AC.
Hi,
You can have the purchasing group (OME4) as a department and link the cost center to a department (KS02). Use a user exit for Service Entry Sheet release, with two characteristics: one for value (CESSR-LWERT) and one for department (CESSR-USRC1). Have one release class for Service Entry Sheet release, then add the value characteristic (CESSR-LWERT) and the department characteristic (CESSR-USRC1). Now you can design release strategies for the Service Entry Sheet based on department and value, so that the SES will be created and then released by users with a release code, based on the department and value assigned to them.
Regards,
Biju K -
Best practices for deploying forms in a 'cluster'?
Anyone know of any public docs that discuss typical best practices for
- forms deployment;
- forms apps management and version control; and/or
- deploying (and keeping) the .frm/.frx files in sync when using multiple forms servers in an HA or load-balancing environment? -
Is there a list of best practices for Azure Cloud Services?
Hi all;
I was talking with a SQL Server expert today and learned that Azure SQL Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn when architecting, as opposed to when we go live.
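One architectural consequence of that kind of transient latency is that callers need a bounded retry with backoff rather than a single blocking call. A minimal sketch (the policy and names are illustrative, not an Azure API):

```java
import java.util.function.Supplier;

// Minimal bounded retry with exponential backoff, a defensive pattern for
// calls that may stall or fail transiently. (Illustrative, not an Azure API.)
public class Retry {
    public static <T> T withRetry(Supplier<T> call, int attempts, long baseDelayMs)
            throws InterruptedException {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;                        // remember the failure
                Thread.sleep(baseDelayMs << i);  // back off: base, 2x, 4x, ...
            }
        }
        throw last;  // give up after the final attempt
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulate a transiently slow service: fails twice, then succeeds.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timeout");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: ok after 3 attempts
    }
}
```

In production you would also want a timeout around each attempt and jitter on the delay, but the shape of the pattern is the same.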
Cloud Services are not SQL Server (obviously), but that led to the question - is there a list of best practices for Azure Cloud Services? If so, what are they?
We will be placing the cloud services in multiple datacenters and using Traffic Manager to point people to the right one. The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client.
Mostly it will pass all requests & responses across from one to the other.
thanks - dave
What we did for the last 6 months -
made the world's coolest reporting & docgen system even more amazing
hi dave,
>>Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
For this issue, I have collected some blogs and documents about best practices for Azure cloud services; you can view them, but I am not sure they cover your need.
http://msdn.microsoft.com/en-us/library/azure/xx130451.aspx
http://gauravmantri.com/2013/01/11/some-best-practices-for-building-windows-azure-cloud-applications/
http://www.hanselman.com/blog/CloudPowerHowToScaleAzureWebsitesGloballyWithTrafficManager.aspx
http://msdn.microsoft.com/en-us/library/azure/jj717232.aspx
http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
>>The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests & responses across from one to the other.
For your scenario, if you'd like to communicate between instances, I recommend you refer to this document
(http://msdn.microsoft.com/en-us/library/azure/hh180158.aspx). And generally, if we want to connect the client to the server on Azure, Service Bus is a good choice (http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-multi-tier-app-using-service-bus-queues/).
If I misunderstood, please let me know.
Regards,
Will -
Best Practice for Securing Web Services in the BPEL Workflow
What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
They are all deployed on the same Oracle Application Server.
Defining an agent for each?
A gateway for all?
A BPEL security extension?
The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each of the lower-level services?
Regards
Farbod
It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation then it should be secured.
The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
You can enforce rules at your network layer to allow access to the app server only from the Gateway.
When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
The next BPEL developer in your project may not be aware of the security extensions.
Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
Thanks
Ram -
Best practice for integrating oracle atg with external web service
Hi All
What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
With Thanks & Regards
Abhishek
Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for 3rd-party integration, therefore I am not able to comment on it.
Calling it directly from a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
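The "configurable properties" idea above might look roughly like this: the endpoint and credentials come from a properties object instead of being hard-coded at the call site. The property names here are illustrative, not real ATG configuration keys:

```java
import java.util.Properties;

// Sketch of the "configurable client" idea: the endpoint and credentials come
// from component properties instead of being hard-coded at the call site.
// (Property names here are illustrative, not real ATG configuration keys.)
public class ServiceClientConfig {
    private final String endpoint;
    private final String user;

    public ServiceClientConfig(Properties props) {
        this.endpoint = props.getProperty("service.endpoint");
        this.user = props.getProperty("service.user", "anonymous");  // default
    }

    public String endpoint() { return endpoint; }
    public String user() { return user; }

    public static void main(String[] args) {
        Properties p = new Properties();  // in ATG this would come from a .properties file
        p.setProperty("service.endpoint", "https://example.com/ws");
        ServiceClientConfig c = new ServiceClientConfig(p);
        System.out.println(c.endpoint() + " as " + c.user());
        // prints: https://example.com/ws as anonymous
    }
}
```

The WS client then reads the endpoint from this object, so switching environments is a properties change rather than a code change.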
Cheers
R
Edited by: Rajeev_R on Apr 29, 2013 3:49 AM -
Best practice for loading config params for web services in BEA
Hello all.
I have deployed a web service using a java class as back end.
I want to read in config values (like init-params for servlets in web.xml). What is the best practice for doing this in the BEA framework? I am not sure how to use the web.xml file in the WAR file, since I do not know the name of the underlying servlet.
Any useful pointers will be very much appreciated.
Thank you.