Best practice for developing with CRM 2013 (On Premises)

Hello all.  I'm just starting to work with CRM, and I have some questions that hopefully will be simple for the seasoned developers.  It's mostly just some best practice or general how-to questions for the group.
- When creating a new Visual Studio CRM project I can connect to my CRM instance and create new web resources, which deploy to the CRM instance just fine, but how can I pull all the existing items that are in the CRM solution into the Visual Studio CRM project? Or do I need to export the solution to a ZIP, expand it with SolutionPackager.exe, and then copy the extracted files into my Visual Studio project to get it in sync?
- When multiple developers are working on changes, is it best to keep everything in a Visual Studio project as mentioned above, or is it better for everyone to have their own instance of CRM to code against, exporting/importing solutions as needed and then manually merging those solutions before moving into a common Test/QA environment?
- When modifying the submenu on a CRM form, is it suggested to use Ribbon Workbench, or is it better/easier to just export the solution, expand it with SolutionPackager.exe, modify the RibbonDiffXml and anything else required for the change, package it back up, and then reimport it into CRM? I've heard from some that Ribbon Workbench has some limitations, but being green I wasn't sure what those limitations might be or whether it would be best to just make these changes manually. Or is there any way to keep a copy of the RibbonDiffXml in Visual Studio and deploy it without having to repackage the solution and import the ZIP?
I think that's it for now :)  Thanks for any advice or suggestions. I really want to start learning the ins and outs of CRM and how all the pieces fit together. Also, can someone direct me to some documentation or books that might give more insight into developing for CRM 2013 or 2015 (moving to this soon)?
Thanks for your time.

Hi Sam
I'm also interested in best practice around this area - especially recommended development routes, unit testing, continuous integration etc. - so it would be great if you posted here if you find any good articles. At the moment we tend to just push changes onto a live system as and when appropriate, and I'd prefer to move away from that...
Thanks
Stuart

Similar Messages

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what is best practice for dealing with data retrieved via JDBC as Recordsets without involving third-party products such as Hibernate etc. I've been told NOT to use Recordsets throughout my applications since they hold on to resources and are expensive. I'm wondering which collection type is best to convert Recordsets into. The apps I'm building are web-based, using JSPs as the presentation layer, plus beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    e.g.:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    i.e. there is a many-to-many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
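    To make the readPermissions idea concrete, here is a minimal sketch of what such a DAO method might look like with plain JDBC, copying the ResultSet into an ordinary List before the connection is closed. The table, column and class names are my own illustrative assumptions, not something from this thread, and a List of simple values or beans is also one reasonable answer to the original question about which collection type to convert result sets into.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // Hypothetical DAO for the user/role/permission model described above.
    public class UserDAO {
        private final DataSource dataSource;

        public UserDAO(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Joins user -> userRole -> rolePermission -> permission and copies
        // the ResultSet into a plain List, so no JDBC resources leak out of
        // the persistence layer.
        public List<String> readPermissions(long userId) throws SQLException {
            String sql =
                "SELECT DISTINCT p.name "
              + "FROM users u "
              + "JOIN user_roles ur ON ur.user_id = u.id "
              + "JOIN role_permissions rp ON rp.role_id = ur.role_id "
              + "JOIN permissions p ON p.id = rp.permission_id "
              + "WHERE u.id = ?";
            List<String> permissions = new ArrayList<String>();
            Connection con = dataSource.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(sql);
                ps.setLong(1, userId);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    permissions.add(rs.getString("name"));
                }
                ps.close();
            } finally {
                con.close();
            }
            return permissions;
        }
    }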

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX and the production versions of the apps will overlay it. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else) that leaves the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 3 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The issue is that you absolutely are going to need to version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has moved on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which version of the backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically commit the exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository and import into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production etc.
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation. I have the Health Analyzer rules enabled for index fragmentation and they run daily, but I've never received an alert, despite the majority of our databases having greater than 40% fragmentation and some even above 95%.
    Obviously it has our attention now and we want to get this addressed. My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was in 2007.
    Thanks,
    Troy

    It depends. Here are the rules for that job:
    Sampled mode:
    - Page count > 24 and avg fragmentation in percent > 5, or
    - Page count > 8, avg page space used in percent < fill_factor * 0.9 (the fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors).
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set MRP to "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
    3. By doing goods issue (GI) against reservations, the quantity can be tracked against the order and the equipment.
    As this question is MM- and WM-related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code and we have updated it so that it does not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
    catch (Exception e) {
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
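    A couple of observations, offered as my own reading rather than official guidance: the trace above actually fails inside readExternal because com.edmunds.vehicle.Style$PublicationState cannot be loaded on the storage member, so that particular incident is a classpath/deployment problem that no amount of catching in filter logic will fix. And as far as I know, ensureRuntimeException only wraps a Throwable in a RuntimeException for rethrowing, so by itself it would not keep the service alive. For exceptions thrown from your own evaluation logic, one defensive pattern is to catch and log inside the filter so nothing propagates back to the cache service thread. A minimal sketch follows (a real filter would of course also need to be POF/serializable to reach the storage members):
    import com.tangosol.util.Filter;

    // Hypothetical wrapper that keeps application exceptions out of the
    // Coherence service thread by logging them and treating the entry as a
    // non-match instead of rethrowing.
    public class SafeFilter implements Filter {
        private final Filter delegate;

        public SafeFilter(Filter delegate) {
            this.delegate = delegate;
        }

        public boolean evaluate(Object target) {
            try {
                return delegate.evaluate(target);
            } catch (RuntimeException e) {
                System.err.println("Filter failed for " + target + ": " + e);
                return false;
            }
        }
    }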

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables and return them as Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the Recordset? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    Would you use one SQL statement to retrieve all of the data for display?
    Yes.
    And use logic to avoid printing the redundant part of the data?
    No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than Strings - i.e. one object, with a collection attribute to hold the related "many" records.
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining about how backwards the work they have done is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets
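    To illustrate the "one object with a collection attribute" suggestion above, here is a rough sketch of folding a single joined query into master beans that each hold their detail rows, so the JSP just nests two loops instead of filtering out repeated columns. The Order/OrderLine names and the table and column names are invented for the example, not taken from the thread.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    class OrderLineBean {
        String product;
        int quantity;
        // getters/setters omitted for brevity
    }

    class OrderBean {
        long id;
        String customer;
        List<OrderLineBean> lines = new ArrayList<OrderLineBean>();
        // getters/setters omitted for brevity
    }

    public class OrderDao {
        // One query, one pass: rows sharing an order id are folded into the
        // same OrderBean, and each row's line columns become an OrderLineBean.
        public List<OrderBean> findOrders(Connection con) throws SQLException {
            String sql = "SELECT o.id, o.customer, l.product, l.quantity "
                       + "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id "
                       + "ORDER BY o.id";
            Map<Long, OrderBean> byId = new LinkedHashMap<Long, OrderBean>();
            PreparedStatement ps = con.prepareStatement(sql);
            try {
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    long id = rs.getLong("id");
                    OrderBean order = byId.get(id);
                    if (order == null) {
                        order = new OrderBean();
                        order.id = id;
                        order.customer = rs.getString("customer");
                        byId.put(id, order);
                    }
                    String product = rs.getString("product");
                    if (product != null) { // null when an order has no lines
                        OrderLineBean line = new OrderLineBean();
                        line.product = product;
                        line.quantity = rs.getInt("quantity");
                        order.lines.add(line);
                    }
                }
            } finally {
                ps.close(); // closing the statement also closes the ResultSet
            }
            return new ArrayList<OrderBean>(byId.values());
        }
    }
    The servlet can then put the returned list in the request or session, and the JSP iterates the orders and, inside each one, its lines, with no duplicate-suppression logic.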

  • Best practice for development using REST API - OData

    Hi All, I am new to REST. I am a developer who works mostly in server-side code using Visual Studio. Now that Microsoft is advocating writing code using the REST API instead of server-side code or the client-side object model, I am trying to use the REST API.
    I googled, and most of the examples show writing some code and putting it in a Content Editor/Script Editor web part. How do I organize code and deploy it to staging/production in this scenario? Is there any best practice or example around this?
    Regards,
    Khushi

    If you are writing code in aspx or cs, it does not mean that you need to deploy it on the SharePoint server; it could be any other application running on your remote server. What I mean is you can use C# and the REST API to connect to the SharePoint server.
    The REST API in SharePoint 2013 provides developers with a simple, standardized method of retrieving information from SharePoint, and it can be used from any technology that is capable of sending standard HTTP requests.
    Refer to the following links, which provide more details comparing the major features of these programming choices:
    http://msdn.microsoft.com/en-us/library/jj164060.aspx#RESTODataA
    http://dlr2008.wordpress.com/2013/10/31/sharepoint-2013-rest-api-the-c-connection-part-1-using-system-net-http-httpclient/
    Hope this helps
    --Cheers
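    Since the point above is that any HTTP-capable client can talk to the SharePoint 2013 REST (OData) endpoints, here is a minimal sketch in plain Java just to show the shape of a request. The site URL and list title are placeholders, and authentication (NTLM/claims, or an OAuth token for apps) is deliberately left out, so treat it as an illustration rather than deployable code.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SharePointRestSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder site and list; _api/web is the SharePoint 2013 REST root.
            URL url = new URL(
                "https://server/sites/dev/_api/web/lists/getbytitle('Documents')/items");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // Ask for JSON instead of the default ATOM/XML payload.
            conn.setRequestProperty("Accept", "application/json;odata=verbose");

            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                body.append(line).append('\n');
            }
            in.close();
            System.out.println(body); // raw OData JSON describing the list items
        }
    }
    Organizing and deploying such code then becomes an ordinary application-lifecycle question (a console job, a service, or a provider-hosted app) rather than something that lives in a Content Editor web part.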

  • What is the best practice for developing web service?

    Hi All,
    I'm a newbie to web services...
    I was wondering what would be the best approach in developing a web service: using tools or a programmatic approach?
    If I use WebLogic Workshop, am I tied to a vendor?
    Is it possible for me to develop web services using Workshop and deploy them to another app server?
    I would appreciate if somebody could give me a pointer to start.
    I have already referred BEA's docs.
    I'm still confused about a good starting point and the best approach to developing portable web services.
    Thanks in advance for any inputs.
    K K

    K K-
    You have a very valid point about tools either simplifying or complicating matters. If you are going for clean and not-so-time-centric code, then there are several different programs and packages out there you can choose from.
    Since you are specialized in J2EE, the Sun package may be what you are looking for. BEA's classes simplify much of the work you will be doing, but you could emulate their classes or extend yours above the functions provided in theirs.
    It all boils down to how much work you are willing to do.
    If you are asking for more detailed coding 'Design Patterns' to utilize, I would wait for a few more posts from other folks, as my work often requires me to utilize the tools provided.
    Sincerely,
    Eric Ballou
    "K K" <[email protected]> wrote:
    Eric,
    Thanks for the response.
    I was also looking at Sun's WSDP 1.1, which is a more programmatic approach.
    Somehow, I feel that being a J2EE developer, I should go in the direction of the programmatic approach.
    Using the tools could simplify or complicate things. Also, the Workshop samples import all the WebLogic-specific packages.
    My code looks so dirty with many vendor-specific packages being imported.
    Could you give me your suggestions for a clean and neat approach?
    I would personally prefer to avoid the quick and dirty approach.
    Thanks again.
    "Eric Ballou" <[email protected]> wrote in message
    news:[email protected]...
    K K-
    The best approach in developing portable web services is knowing what you are planning on using them for, as well as how much is willing to be spent, etc.
    BEA's Workshop is portable to other frameworks, but the ease of integrating a developed client or a developed server can vary greatly. Even more of an issue is migration from one framework to another. If you choose to develop in Workshop and your company later deploys .Net solutions, some of your work may have to be redone unless the company is willing to keep portions of the 'old' system around until new versions of the service are available. However, Workshop has several ant tools available that would assist you in deploying to other app servers or even a stand-alone application, should you need cross-framework abilities.
    If you are just starting out in web services, http://www.webservices.org is a good place to start checking out vendors in the space.
    Sincerely,
    Eric Ballou
    "K K" <[email protected]> wrote:
    Hi All,
    I'm a newbee to web services...
    I was wondering what would be the best approach in developing a web
    service,
    using tools or programmatic approach?
    If I use WebLogic Workshop, am I tied to a vendor?
    Is it possible for me to develop web services using workshop and deploy
    in
    another app server..?
    I would appreciate if somebody could give me a pointer to start.
    I have already referred BEA's docs.
    I'm still confused on a good starting point on the best approach todevelop
    protable web services.
    Thanks in advance for any inputs.
    K K

  • What is best practice for integration with freight forwarders?

    Hello,
    We are looking into the possibilities for automatically exchanging data with one of our freight forwarders. We will send them our shipment information and they will send back shipment status and date information, including some additional information like the house bill. Sending the shipment data from our SAP (ECC 6) system is no issue; we have done that before. What is new to us is receiving the status updates back from the forwarder. Is there a kind of best practice for where to store this information on the shipment (or in a separate table), and what standard function module or BAdI to use for this?
    We are using ECC 6.0 sales and distribution, but no transportation management or SCM modules.
    We would like to hear the experiences of people who have done this type of integration with their forwarders.
    Regards,
    Ed

    SAP has added SAP TM 8.10 as a separate package which is also integrated with R/3. This means a separate server is required if SAP TM needs to be implemented, and it will take care of your expectations. For more information on this, search Google and you will find a couple of documents on this topic.
    G. Lakshmipathi

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding; I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John
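    For what it's worth, here is the kind of edit I have in mind for that ANT step, written as a small stand-alone utility rather than XMLTask syntax so it stays purely illustrative. It assumes the data control usages in DataBindings.cpx are BC4JDataControl elements carrying a Configuration attribute that names the AM configuration; please verify both assumptions against your own cpx file before relying on anything like this.
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Rewrites the AM configuration referenced by every data control usage,
    // e.g. java SwitchAmConfiguration adfmsrc/DataBindings.cpx AppModuleProd
    public class SwitchAmConfiguration {
        public static void main(String[] args) throws Exception {
            File cpx = new File(args[0]);
            String targetConfig = args[1];

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(cpx);
            NodeList controls = doc.getElementsByTagName("BC4JDataControl");
            for (int i = 0; i < controls.getLength(); i++) {
                Element control = (Element) controls.item(i);
                control.setAttribute("Configuration", targetConfig);
            }
            // Write the modified document back in place.
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.transform(new DOMSource(doc), new StreamResult(cpx));
        }
    }
    An ANT target per SDLC stage (the "build-test-ear" and "build-prod_ear" targets mentioned earlier) could run something like this before packaging and restore the development file afterwards, which matches the swap-in/swap-out idea described in the original question.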

  • Any Best Practices for developing custom ABAP reports for Portal?

    Hello,
    The developers on our project are debating the best way to develop custom reports and make them available on the portal.  Of these options that we can think of, can you give any pros & cons, or experiences, or other options?
    - Web-enabled Abap report programs
    - WebDynpro for Abap
    - WebDynpro for Abap using ALV
    - Adobe forms
    Does a "Best Practices" document or blog exist on this topic?
    Thanks,
    Colleen


  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150ppi at actual size, so they're honking big, sometimes up to 2GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8G ram and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed pdf files. Tiffs too.
    This has to do with the preview, which contains much more information than you need to work with.
    On the other hand if you place EPSs and take care not to turn on overprint preview you can get away with huge files.
    If you do turn on overprint preview your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention – displays for whole walls – and had some considerable trouble until I reverted to the old EPS format. They say it's dying, but it ain't dead yet.

  • Best practices for start with itunesu

    Hi people, I'm from Barcelona, Spain. We are starting to deploy iTunes U for our university. We have a public and a private site; the public site works with Public Site Manager.
    I'm looking for the best way to handle videos and feeds, and I have tried Podcast Producer 2, Feeder and Podcast Maker.
    Podcast Maker is a nice option, but its XML does not have the elements for iTunes U.
    Feeder is the best option; it produces a perfect XML for iTunes U.
    Podcast Producer 2: I don't understand the difference between RSS feeds and Atom feeds. The workflow produces an iPod version and an Apple TV version; in Public Site Manager, when I add the RSS feed I can see an iPod version, an Apple TV version and an audio version, but with the Atom feed only an iPod version. Why?
    I create a workflow with the name of the course.
    Podcast Producer has no elements for iTunes U (category, order, etc.).
    The name of the author is the same as the username in Podcast Producer.
    Do you need to edit the XML file (UUIDNumber_offeed.cache) to add the iTunes U elements?
    Is there any recommended software for publishing to iTunes U?
    Sorry for my English.
    Thanks a lot


  • Best Practices for VidConf with external parties

    We currently use Sony Video conference units for our end units. On the backend, we use a Cisco 3515 for multipoint conferences.
    We now need to be able to do video conferences with external parties using our MCU.
    Since there are many ways of doing this, are there solutions that work better than others? Basically I need to publish a doc that says 'here is what you will need. This port open on your firewall, a public IP address that is not NATd....'
    Any help would be appreciated.

    David,
    Unfortunately, Cisco has not (currently) chosen to implement H.460.x in their H.323 infrastructure solutions, so a Cisco GK could not be used in combination with a Tandberg SBC (session border controller). However, we currently use Cisco gatekeepers for everything but firewall traversal. For traversal, the Tandberg GK functions as the inside portion of the traversal solution and proxies for all my codecs (a mix of six different Tandberg and Polycom product lines)--even those that are H.460-capable. Combined with the SBC is a Cisco MCM (25xx router) sitting on the public side of the firewall and neighbored to the session border controller.
    The "public" GK is for the entities who write custom policies in their firewalls or set their codecs on the public side of their security. If not H.460 compliant, they have to register with a GK, they can't register directly with the SBC, hence, the public GK which is simply neighbored to the SBC.
    I would encourage you to closely look at the Polycom V2IU as well. It has come a LONG way since it was introduced a couple of years ago.
    Personally, I still don't feel like the V2IU has as much flexibility nor does it implement a dialing methodology best suited for converging networks. It is an ALG (App Layer GW) that has been tweaked to support H.323 traversal, so I don't think it will ever truly match the Tandberg Expressway solution apples-apples, but it is dramatically less expensive and thus worth considering.
    We tested both the Tandberg and Polycom traversal solutions with our internal CAC (call admission and control infrastructure), which is made up of multiple Cisco GK products in a fully meshed neighboring scenario prior to purchase of a traversal solution, and the Cisco products interoperated with both traversal solutions.
    Cisco did present a solution that proposed a Layer 3 solution, but we felt it best to pursue something based within the H.323 umbrella standard.
    If you want to talk more, please email me at [email protected] w/ direct contact info and I'll be happy to assist you in any way I can.
    Greg
