Best Practice for Maintaining a Web-Scoped Feature

I am in the process of creating a web-scoped feature that creates a few lists and some event receivers for those lists.
My plan was to have the lists created on feature activation. That way, users can simply activate the feature if they want to take advantage of this functionality, on a site-by-site basis.
While trying to flesh out this idea, I realized that if we ever need to make changes to this feature in the future, we'll likely have to uninstall and reinstall the solution. After uninstalling/reinstalling the solution, the feature will no longer be active on any site by default. This means that all of the users who were using the functionality will have to go in and re-activate the feature to continue using the event receivers.
What I'd like is for the sites that had this feature activated to still have it activated after I uninstall and reinstall the solution, and for the sites that didn't have the feature active to still not have it activated.
I know that there is the option to update a solution, which will do what I want, but updating is limited in that I cannot add additional items to the solution, which means eventually I'll have to uninstall and reinstall.
Is there a good way to go about this?

If you are modifying or extending the feature, then your best option is the Feature Upgrade mechanism:
http://www.sharepointnutsandbolts.com/2010/06/feature-upgrade-part-1-fundamentals.html
>> I know that there is the option to update a solution, which will do what I want, but updating is limited in that I cannot add additional items to the solution, which means eventually I'll have to uninstall and reinstall.
Yes, this is correct. Updating a solution works only for the same set of files and features already in the solution. If anything else changes, you have to retract and deploy the new solution.
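For example, in SharePoint 2010 the upgrade actions are declared in the feature manifest. A minimal sketch (the Id, the version numbers, and the element file name are placeholders, not values from your project):

    <Feature xmlns="http://schemas.microsoft.com/sharepoint/"
             Title="Lists and Event Receivers"
             Id="00000000-0000-0000-0000-000000000000"
             Scope="Web"
             Version="2.0.0.0">
      <UpgradeActions>
        <!-- Runs only on sites where an older version is already activated -->
        <VersionRange BeginVersion="1.0.0.0" EndVersion="2.0.0.0">
          <ApplyElementManifests>
            <!-- Provisions the list/receiver elements added in version 2 -->
            <ElementManifest Location="NewList\Elements.xml" />
          </ApplyElementManifests>
        </VersionRange>
      </UpgradeActions>
    </Feature>

Note that after Update-SPSolution the features do not upgrade themselves: you would enumerate the feature instances that need upgrading (for example via SPWebApplication.QueryFeatures(featureId, true) from PowerShell or code) and call Upgrade() on each. Sites that never activated the feature are untouched, which is exactly the behaviour you are after.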
My Blog - http://www.sharepoint-journey.com
If a post answers your question, please click Mark As Answer on that post and Vote as Helpful

Similar Messages

  • Best Practices for Maintaining SSAS Projects

    We started using SSAS recently, and we maintain one project that we deploy to both the DEV and PROD instances by changing the deployment properties. However, this gets messy when we introduce new fact tables into the DEV data warehouse (ones that are not yet promoted to the production data warehouse). While we work on adding new measure groups and calculations (based on the new fact tables in DEV), we are unable to make any changes to the production cube (such as changes to calculations, formatting, etc.) requested by business users. Sorry for the long question, but is there a best practice for managing projects and migrations? Thanks.

     While we work on adding new measure groups and calculations (based on new fact tables in DEV) we are unable to make any changes to production cube (such as changes to calculations, formatting etc) requested by business users.
    Hi Sbc_wisc,
    You can create a new project by importing the metadata from the production cube on the server, using the Import from Server (Multidimensional and Data Mining) Project template in SQL Server Data Tools (SSDT). You can then make your changes in this project and redeploy it to the production server.
    Reference:
    Import a Data Mining Project using the Analysis Services Import Wizard
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best practice for maintaining URLs between Dev, Test, Production servers

    We sometimes send order confirmations which include links to other services in requestcenter.
    For example, we might use the link <a href="http://#Site.URL#/myservices/navigate.do?query=orderform&sid=54">Also see these services</a>
    However, the service ID (sid=54) changes between our dev, test, and production environments.  Thus we need to manually go through notifications when we deploy between servers.
    Any best practices out there?

    Your best practice in this instance depends a bit on how much work you want to put into it at the front end and how tied to the idea of a direct link to a service you are.
    If your team uses a decent build sheet and migration checklist, then updating the various URLs can just be part of the process. This is cumbersome, but it's the least "technical" solution if you want to continue using direct links.
    A more technical solution would be to replace your direct links with links to a "broker page". It's relatively simple to create an ASP page that accepts the name of the service as a parameter, runs a SQL query against the DB to return the ServiceID, constructs the appropriate link, and passes the user through (see the servlet sketch at the end of this reply).
    A less precise, but typically viable, option would be to use links that take advantage of the built in search query functionality. Your link might display more results than just one service but you can typically tailor your search query to narrow it down. For example:
    If you have a service called Order New Laptop or Desktop and you want to provide a link that will get the user to that service you could use: http://#Site.URL#/RequestCenter/myservices/navigate.do?query=searchresult&&searchPattern=Order%20New%20Desktop%20or%20Laptop
    The above would open the site and present the same results as if the user searched for “Order New Desktop or Laptop” manually. It’s not as exact as providing a direct link but it’s quick to implement, requires no special technical expertise and would be “environment agnostic”.
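    The reply above suggests an ASP page; purely to illustrate the broker idea, here is the same thing sketched as a Java servlet. The table and column names, the JDBC URL, and the credentials are assumptions, not actual RequestCenter schema:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical broker: a notification links to /broker?service=Order%20New%20Laptop,
    // and the servlet looks up whatever sid that service has in *this* environment.
    public class ServiceBrokerServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String name = req.getParameter("service");
            // Point this at wherever your environment stores the
            // service-name -> ServiceID mapping.
            String sql = "SELECT ServiceID FROM ServiceDef WHERE ServiceName = ?";
            try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@dbhost:1521:rc", "user", "pass");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // Rebuild the direct link with the locally valid sid
                        resp.sendRedirect("/RequestCenter/myservices/navigate.do"
                                + "?query=orderform&sid=" + rs.getInt(1));
                        return;
                    }
                }
            } catch (Exception e) {
                throw new IOException(e);
            }
            resp.sendError(HttpServletResponse.SC_NOT_FOUND,
                    "Unknown service: " + name);
        }
    }

    Because only the broker knows the environment-specific sid, the link embedded in the notification can be identical in dev, test, and production.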

  • Best practice for testing a web form after converting to a connection file

    I was using InfoPath to test a web form using a web service for the connection. I decided to deploy the form to SharePoint to make sure it worked outside of InfoPath preview mode. This required me to convert the connection to a connection file and publish
    that in SharePoint. After getting all that set up, it appears to work from SharePoint.
    Now I want to go back and work on the form some more. But in InfoPath it appears preview is no longer an option because the form is using a connection file. While trying to preview the form I am given a message that basically says the form will have to be
    published in order for it to make the connection. This would mean I would have to overwrite the form on SharePoint just to test.
    I suppose I could publish to a different form library and switch back when I'm done but it would seem that InfoPath preview should still be an option for testing. Is there something I'm missing in InfoPath that would allow me to still test with this form?

    Not sure what you're doing, but previewing IP forms with data connections should be quite possible.
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs
    To put it simply, here are the basic steps:
    Create IP form based on web service
    Fill out form and test via Preview
    Convert data connection to a connection file
    Publish IP form and test on SharePoint
    Go back into IP and try to test again via Preview <--- no longer works - now stuck with testing on SharePoint which is "live"
    This could be a factor of how our SharePoint environment is configured. I have no other experience to base it on.

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining an agent for each?
    A Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each of the lower-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, it should be secured.
    The idea of SOA / designing services is to have services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are on one application server, you can make the configuration/development environment a lot easier by securing them with the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from proprietary extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled, and it helps with scalability.
    Thanks
    Ram

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the Integration Repository, or calling the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead, depending on the operation you are doing; I have never used the Integration Repository for third-party integration, so I cannot comment on it.
    Calling the service directly from a Java client is an easy approach, and you can use the ATG component framework to support it by making the endpoint, security credentials, etc. configurable properties (see the sketch below).
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM
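    To make the "configurable properties" suggestion concrete, here is a minimal sketch. The class and property names are made up, not ATG API; in ATG you would register this as a Nucleus component and set endpointUrl from the component's .properties file per environment:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical client bean: the endpoint and timeout are plain bean
    // properties, so each environment configures them rather than
    // hard-coding them in the class.
    public class PartnerServiceClient {
        private String endpointUrl;
        private int timeoutMs = 10000;

        public void setEndpointUrl(String endpointUrl) { this.endpointUrl = endpointUrl; }
        public void setTimeoutMs(int timeoutMs) { this.timeoutMs = timeoutMs; }

        // Posts a prebuilt SOAP envelope and returns the HTTP status code;
        // a real client would also read and parse the response body.
        public int call(String soapEnvelope) throws Exception {
            HttpURLConnection con =
                    (HttpURLConnection) new URL(endpointUrl).openConnection();
            con.setConnectTimeout(timeoutMs);
            con.setReadTimeout(timeoutMs);
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            con.setDoOutput(true);
            try (OutputStream out = con.getOutputStream()) {
                out.write(soapEnvelope.getBytes("UTF-8"));
            }
            return con.getResponseCode();
        }
    }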

  • Best Practice for External Libraries Shared Libraries and Web Dynrpo

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested in merely something that works; I really want to know the best practice for this. What is the best way to limit the number of JARs that need to be kept in a shared library/external library? And when is sharing a service reference, etc., a valid approach versus hunting down the JARs in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than being 100% secure ("we have unknown unknowns"). The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a "less trodden" path, since that is exactly the area where more security problems get discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general using a DMZ is a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for loading config params for web services in BEA

    Hello all.
    I have deployed a web service using a Java class as the back end.
    I want to read in config values (like init-params for servlets in web.xml). What is the best practice for doing this in the BEA framework? I am not sure how to use the web.xml file in the WAR file, since I do not know the name of the underlying servlet.
    Any useful pointers will be very much appreciated.
    Thank you.

  • Best practice for consuming web services

    Hi
    We are consuming a web service in an orchestration via "Add Generated Items". Using this option creates one orchestration, one XSD file, and some bindings.
    We have different projects for schemas, maps, and orchestrations under our solution in Visual Studio.
    Now I need to know the best practice for consuming a web service in an orchestration, i.e. in which project I should use "Add Generated Items" (the orchestration project or the schemas project), since it generates both an orchestration and a schema.
    thanks

    From a service-orientation perspective, you should separate the service artifacts from the other artifacts. Otherwise it will be very difficult to update the service interface without affecting the other artifacts. For example, you don't want to have to redeploy your entire application if only one field changes in the service you consume.
    So I typically generate the items, remove the unnecessary stuff, and put them in a separate project.
    Depending on the control you have over the services you want to consume, it would be even better to create another layer of abstraction. By that I mean: create your own interface (schema) and map it to the one the service exposes. This is mainly necessary if you consume external services that are beyond your control. By abstracting the interface a service exposes, you limit the impact of changes to that interface on the rest of your system; all changes are absorbed behind your own interface.
    If you consume internal services, you can probably control the way the interface is defined. In a service-oriented world, all internal services expose a well-known interface, based on the domain objects you have within your organisation.
    Jean-Paul Smit | Didago IT Consultancy
    Blog |
    Twitter | LinkedIn
    MCTS BizTalk 2006/2010 + Certified SOA Architect
    Please indicate "Mark as Answer" if this post has answered the question.

  • Best practice for auto update flex web applications

    Hi all
    Is there a best practice for auto-updating Flex web applications, much in the same way AIR applications have an auto-update mechanism?
    Can you please point me in the right direction?
    cheers
    Yariv

    Hey drkstr,
    I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application, etc.
    I can always query the server for the version and prevent loading from the cache when a module needs to be updated,
    but I was hoping for something easy like the AIR auto-update feature.

  • Best practice for simply invoking a web service

    Hello,
    We have web services deployed and accessible as WSDL documents in the SOA service manager/UDDI product. What is the best practice for simply calling a deployed web service method, without regard to whether it is deployed on Tomcat, WebSphere, Oracle, etc.?
    I'd just like to create a Java client to make a web service call, and JAX-RPC has me confused because you seem to need custody of the web service implementation code to call a service. Can someone point me in the right direction?
    Thanks,
    Sean

    Thanks. Here is my WSDL; how would you say this is encoded?
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions
         name="OracleProcess"
         targetNamespace="http://xmlns.oracle.com/OracleProcess"
         xmlns="http://schemas.xmlsoap.org/wsdl/"
         xmlns:tns="http://xmlns.oracle.com/OracleProcess"
         xmlns:plnk="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
         xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
         xmlns:client="http://xmlns.oracle.com/OracleProcess"
        >
        <types>
            <schema attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://xmlns.oracle.com/OracleProcess"
                 xmlns="http://www.w3.org/2001/XMLSchema">
                <element name="OracleProcessProcessRequest">
                    <complexType>
                        <sequence>
                            <element name="input" type="string"/>
                            <element name="input2" type="string"/>
                        </sequence>
                    </complexType>
                </element>
                <element name="OracleProcessProcessResponse">
                    <complexType>
                        <sequence>
                            <element name="result" type="string"/>
                        </sequence>
                    </complexType>
                </element>
            </schema>
        </types>
        <message name="OracleProcessRequestMessage">
            <part name="payload" element="tns:OracleProcessProcessRequest"/>
        </message>
        <message name="OracleProcessResponseMessage">
            <part name="payload" element="tns:OracleProcessProcessResponse"/>
        </message>
        <portType name="OracleProcess">
            <operation name="process">
                <input message="tns:OracleProcessRequestMessage"/>
                <output message="tns:OracleProcessResponseMessage"/>
            </operation>
        </portType>
        <binding name="OracleProcessBinding" type="tns:OracleProcess">
            <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
            <operation name="process">
                <soap:operation style="document" soapAction="process"/>
                <input>
                    <soap:body use="literal"/>
                </input>
                <output>
                    <soap:body use="literal"/>
                </output>
            </operation>
        </binding>
        <service name="OracleProcess">
            <port name="OracleProcessPort" binding="tns:OracleProcessBinding">
                <soap:address location="http://st4s:9700/orabpel/default/OracleProcess/1.0"/>
            </port>
        </service>
      <plnk:partnerLinkType name="OracleProcess">
        <plnk:role name="OracleProcessProvider">
          <plnk:portType name="tns:OracleProcess"/>
        </plnk:role>
      </plnk:partnerLinkType>
    </definitions>
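    Since the soap:binding above uses style="document" and both bodies are use="literal", this WSDL is document/literal. That also means you can call it without generated stubs. Below is a minimal dynamic client using the JAX-WS Dispatch API (the successor to JAX-RPC); the service, port, and element names come straight from the WSDL, while the ?wsdl suffix is an assumption about how the endpoint publishes its WSDL:

    import java.io.StringReader;
    import java.net.URL;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;

    public class OracleProcessClient {
        public static void main(String[] args) throws Exception {
            String ns = "http://xmlns.oracle.com/OracleProcess";
            // Read the WSDL at runtime; no generated artifacts needed.
            Service service = Service.create(
                    new URL("http://st4s:9700/orabpel/default/OracleProcess/1.0?wsdl"),
                    new QName(ns, "OracleProcess"));
            Dispatch<Source> dispatch = service.createDispatch(
                    new QName(ns, "OracleProcessPort"),
                    Source.class, Service.Mode.PAYLOAD);
            // Document/literal: the payload is the OracleProcessProcessRequest element.
            String payload = "<OracleProcessProcessRequest xmlns='" + ns + "'>"
                    + "<input>hello</input><input2>world</input2>"
                    + "</OracleProcessProcessRequest>";
            Source reply = dispatch.invoke(new StreamSource(new StringReader(payload)));
            // Write the OracleProcessProcessResponse payload to stdout.
            TransformerFactory.newInstance().newTransformer()
                    .transform(reply, new StreamResult(System.out));
        }
    }

    If the server insists on the soapAction="process" declared in the binding, set BindingProvider.SOAPACTION_USE_PROPERTY and BindingProvider.SOAPACTION_URI_PROPERTY in dispatch.getRequestContext() before invoking.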

  • Best Practice for embedding webi reports in hyperlinks..

    SAP Community
    What is the best practice for embedding hyperlinks to webi reports in email and sending them to users, so the users can quickly consume the report with minimal effort (click the link and launch InfoView/the report)?
    John

    As mentioned already, BI has its own inbox, and/or SMTP integration for broadcasting.
    Otherwise, if you go to Folders and right-click on your report instance, then select "Copy URL" (or "Document Link"; I cannot remember the exact term), that will give you an OpenDocument link to invoke the viewer.
    Regards,
    H
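    For reference, such a link typically has the OpenDocument form below (the server, port, and CUID are placeholders, and the exact path varies by BusinessObjects version):

    http://yourserver:8080/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=AbCdEf123456

    Pasting that URL into the email gives the user one-click access to the report in InfoView.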

  • Best practice for keeping a mail session open in web application?

    Hello,
    We have a webmail like application where users login with their IMAP credentials, then are taken to an authenticated area of the site where they can manage different things about their email account.
    Right now the application is opening and closing a mail store connection (via a new javax.mail.Session) for each page load based on the current logged in user credentials. To me this seems like it would be a bad practice to keep opening and closing a connection each page load.
    Are there any best practices for this situation? It would be nice to be able to open the connection to the mail server on login, then keep that connection open until the person logs out, session expires, etc.
    I can probably put the javax.mail.Session into the HTTP session, but that seems like it would break any clustering functionality of Tomcat. This would be fine if the machine the user is on didn't fail, but I'd assume that if they failed over to another machine, the mail session would be gone. Maybe keeping the mail session in the HTTP session, checking for a connection, and first attempting to reconnect with the logged-in credentials before giving up would be a possibility?
    Any pointers would be appreciated

    If you keep the connection open across pages, you're going to need to deal with timeouts - from the HTTP session and from the mail server.
    If you don't keep the connection open, you're going to need to "resynchronize" your view of the store/folder with each operation, in case the folder is modified by another session.
    The former is easier in the common cases, especially if you don't care how gracefully you handle failures. The latter is more difficult in the common cases, but handles failure better, and in particular handles clustering better. You'll need to measure it to see if it meets your performance and scalability requirements. You may need to mix the two approaches to get acceptable performance.
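    A minimal sketch of the "check the connection, reconnect before giving up" compromise the question proposes (the host name and session attribute key are made up; IMAP over SSL is assumed):

    import java.util.Properties;
    import javax.mail.Session;
    import javax.mail.Store;
    import javax.servlet.http.HttpSession;

    public class MailStoreHelper {
        // Returns a connected Store, reusing the one cached in the HTTP session
        // when possible. Store is not serializable, so after a cluster failover
        // the attribute is simply absent and we transparently reconnect.
        public static Store getStore(HttpSession http, String user, String password)
                throws Exception {
            Store store = (Store) http.getAttribute("mail.store");
            if (store == null || !store.isConnected()) {
                Properties props = new Properties();
                props.setProperty("mail.store.protocol", "imaps");
                store = Session.getInstance(props).getStore();
                store.connect("imap.example.com", user, password);
                http.setAttribute("mail.store", store);
            }
            return store;
        }
    }

    You still have to handle the server-side timeouts mentioned above, but since isConnected() probes the server, a dropped connection is detected here and simply triggers a reconnect with the stored credentials.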

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like to seek for advise on how I should partition a 300 GB disk on Solaris 8.x, what would be the optimal size for each of the partition.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice" regardles of what others might say. I depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some that assumed that /var is only going to be for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export system that was mounted on a root-level directory. They made the assumption that "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files in that directory that were created went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM, and assign that partition as the system/core dump device for system crashes.
    * /usr or whatever file system it's on must be big enough to assume that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.
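    As a worked example only, applying the guidelines above to a 300 GB disk on a box with (say) 8 GB of RAM; your mix of applications should drive the real numbers:

    /        (s0)    20 GB
    swap     (s1)     8 GB   - match installed RAM; also serves as the dump device
    /var     (s3)    20 GB   - patches, packages, and logs
    /usr     (s4)    20 GB   - room for FOSS/Sunfreeware tools
    /export  (s7)  ~230 GB   - application and database data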

  • Best practice for real time requirement

    All,
    I am trying to find out the best practice for reporting against real-time data. Is it:
    1 - WebI against a universe/BEx query on top of hybrid cubes?
    2 - Crystal Reports directly against the ECC data?
    3 - another solution such as Data Federator, or something different?
    I am looking to know if anyone has had such a requirement, and also to hear their experience.
    Did they face any huge challenges with hybrid cubes?
    Thanks in advance for your help
    Philippe

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, I want it to load my latest data.
    With Live Office, I can either schedule a Crystal report and get the data delayed, or use the Live Office option to make it refresh right now. Is that a correct assumption?
    I was talking about BW, just in case they are willing to change the requirement from real time to every 5 minutes.
    Just so you know, we are also thinking about the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build some interactive reports on top of these virtual providers within CRM
    3 - get the link to this report (it is one of the report features within CRM)
    4 - design and build your dashboard on top of it
    5 - export your SWF file to the CRM Web UI
    We are trying to see which one is the best.
    Philippe

Maybe you are looking for

  • App store keeps telling me I have an update but I don't.

    The App Store on my iPad had an update for an app the other day, but rather than updating I chose to delete the app, as it was of no use. However, the App Store keeps telling me I have an update, but when I open the App Store none are listed. Anyone expe

  • Folder Action + Move Finder Items = Zero KB files

    I've got a folder action set that is supposed to run the Automator Action "Move Finder Items" on any file that gets placed in a specific folder (called "Folder A") and move it to "Folder B". I'm having an issue with larger files though. As soon as I

  • HT5037 I clicked the finder icon and pulled down the Finder menu and never found a "go" menu.  Where is the ******* go menu?

    I am trying to install the update to iPhoto on my new Mac laptop. It tells me to go to Finder and hit the "Go menu" in the Finder. I pulled down the Go menu at the top of the screen and clicked the Finder icon at the bottom of the screen. Where is this "go m

  • Where can I find the PL/SQL callback procedure error

    I am using Oracle 11.2.0.3. I have registered a callback procedure to my queue (a point-to-point persistent message queue). Is the error from the callback procedure stored somewhere? I did see some entries in SYS.AQ_SRVNTFN_TABLE_1, which is exception ta

  • Error at the time of database backup

    Dear Experts, we are on ECC 6.0, Windows, MaxDB version 7.6. My quality system's log is full, and I am trying to reduce it by taking a backup of the log file, but I am getting the error "-24988 SQl error {backup_save"SNQ_LOG2" LOg Recovery]; error in online user mode. Even if