Search Fragment - Best Practice?

We are designing a search fragment that will reside at the beginning of every form. The search fragment will allow a user to select one customer from a number of customers displayed. Once the user selects a customer, the search fragment will populate the main (parent) form with the customer data and then the search fragment should disappear.
We have a mock-up of the search fragment, but questions are surfacing as to the best way to integrate this into the form. Each form has different "customer data" fields that it needs from the search, for example, one form needs customer name and account while another form only needs customer name.
Here are some questions that are surfacing:
1. Should the search fragment "poke" the information into the parent form once the user makes a selection? If so, the fragment needs to detect which form it's on and send only the data the form needs. This will make the search fragment's JavaScript more complex.
2. Or, should the parent form "grab" only the information from the search fragment that it needs? This localizes the JavaScript to the parent form, leaving the search frag more generic. In this scenario I envision that once a user selects a customer, all customer information will be plopped into hidden fields on the search frag and the parent will use only the data that it needs.
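The second option could be sketched roughly like this (plain JavaScript with hypothetical field names; a real fragment would publish its values via hidden form fields rather than a plain object):

```javascript
// Option 2 sketch: the search fragment exposes the full customer record;
// each parent form copies only the fields it needs. The field names
// ("name", "account", "region") are illustrative, not from a real schema.
function populateParent(customer, neededFields) {
  const result = {};
  for (const field of neededFields) {
    if (field in customer) result[field] = customer[field];
  }
  return result;
}

// One form needs name and account; another form would pass ["name"] only.
const customer = { name: "Acme Corp", account: "A-1001", region: "EMEA" };
const picked = populateParent(customer, ["name", "account"]);
```

This keeps the fragment generic: it always publishes everything, and each parent form owns the decision about what to consume.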
Is there a best practice for doing something like this? Lessons learned?
Thanks,
Elaine

How is the search fragment going to know what to extract, or is it going to be the same info every time? I assume that is what you are doing. How the data is returned to you will dictate the best practice. If it is a stream of XML, then I would load that stream into the data DOM and allow each of the fields to get their own data from the DOM (this would minimize the code needed on each form). If you are getting the info back field by field, then the hidden-field route is the best way to go (it is simple and the coding is very simple as well).

Similar Messages

  • Sharepoint 2013 search component best practice

    hi
    With a big SharePoint 2013 farm:
    2 WFE
    2 App server
    2 Search App servers (haven't set up these servers yet)
    What would be the best way to divide the search components between the two search app servers?
    Crawl component
    Crawls content sources to collect crawled properties and metadata from crawled items and sends this information to the content processing component.
    Content processing component
    Transforms the crawled items and sends them to the index component. This component also maps crawled properties to managed properties.
    Analytics processing component
    Carries out search analytics and usage analytics.
    Index component
    Receives the processed items from the content processing component and writes them to the search index. This component also handles incoming queries, retrieves information from the search index and sends back the
    result set to the query processing component.
    Query processing component
    Analyzes incoming queries. This helps optimize precision, recall and relevance. The queries are sent to the index component, which returns a set of search results for the query.
    Search administration component
    Runs the system processes for search, and adds and initializes new instances of search components.
    You can ignore the need for redundancy (we may add another 2 search app servers for that later).
    brgs
    Bjorn

    Hi Bjorn,
    According to your description, my understanding is that you want to deploy the search components in two search app servers.
    It depends on how many search service applications you need and the size of your environment.
    Here is a link about how to configure topology for one search service application with multiple search components across 2 servers for redundancy and performance:
    http://blogs.msdn.com/b/chandru/archive/2013/02/19/sharepoint-2013-configuring-search-service-application-and-topology-using-powershell.aspx
    You can also run the Query Processing component in web front server and run the other components in the search servers or application servers.
    Please refer to the link below:
    http://blogs.technet.com/b/meamcs/archive/2013/04/09/configuring-sharepoint-2013-search-topology.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+yahoo%2FZfiM+(Team+blog+of+MCS+%40+Middle+East+and+Africa)
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • Implementing a search box - best practices

    I'm implementing a simple search box to allow visitors to search for merchandise, which is held in a table. I can see two main approaches, each with its pros and cons:
    The merchandise data has several fields that could potentially be employed in the search: long description, short description and title.
    A thorough search would look through each long description field, which is 100 chars long. The downside is the speed hit of searching such a large field.
    A quick search would look through the title field - quick but not thorough.
    Alternatively I could create a separate table, searchTags, which contains a list of keywords for each item of merchandise - quicker but not as thorough.
    Just wondering what type of approach people use?

    Ah, ok, I'll do it with LIKE
    I plan to get it up and running and record the type of things people are searching for. Having seen some of the things people type into a search box, I'll need to employ some of CF's string and list functions to break the search string down into a series of words, then search for each one.
    For example, if someone typed "Silver Jewellery", it wouldn't bring up any results, as there's no occurrence of that string in the database.
    However, if I break that down into "Silver" and "Jewellery" that would produce results.
    Think I can use CF's string and list functions such as listToArray, for that.
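The word-splitting idea can be sketched like this (plain JavaScript rather than ColdFusion, and the column name is a placeholder; in real code the values should go through query parameters such as cfqueryparam rather than string concatenation):

```javascript
// Break a search phrase into words and build one LIKE clause per word,
// so "Silver Jewellery" matches rows containing either word.
function buildLikeClauses(phrase, column) {
  return phrase
    .trim()
    .split(/\s+/)                          // listToArray equivalent
    .map((word) => `${column} LIKE '%${word}%'`)
    .join(" OR ");
}

buildLikeClauses("Silver Jewellery", "title");
// → "title LIKE '%Silver%' OR title LIKE '%Jewellery%'"
```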

  • Searching for best practice recommendations regarding connecting Exchange calendars to SP 2010

    A user has requested assistance setting up a web part on SP 2010 to display their "on call" calendar from Exchange.
    When she attempted it, she got the error that says, in part, "[...]
    This could be due to the
    fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server"
    I am a bit puzzled as to where to start. The admins for the Exchange server are in another group - it sounds to me like there is some sort of issue going on between SP 2010 and Exchange 2010, but I am not clear on how to proceed so that we can fix it.
    I do know that I have noticed in the event logs an Application error event that says that "an operation failed because the following certificate has validation errors" and later "SSL policy errors have been encountered. Error code '0x2'".
    In asking around, I was told that the site mentioned elsewhere in the error is a verisign signed site which has chained certificates. I have no idea what specific action triggered the error - the message says
    Source: Microsoft-SharePoint Products-SharePoint Foundation
    Task Category: Topology
    These may be two totally separate situations. I don't understand the certificate concept well enough to be able to determine that.
    Does this ring any bells with anyone?

    Hi,
    Thank you for your question. I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience. Thank you for your understanding and support.
    Thanks,
    Linda Li
    Linda Li
    TechNet Community Support

  • Searching for best practices or step by step directions for setting up governance framework specific to Project server 2013

    Hi! Could anyone lead me to the information I requested?
    thanks
    Aby

    Hope this blog helps
    http://blogs.technet.com/b/projectadministration/archive/2010/09/03/implementing-governance-in-sharepoint-2010-whitepaper.aspx
    Cheers! Happy troubleshooting! Dinesh S. Rai - MSFT Enterprise Project Management. Please click Mark As Answer if a post solves your problem, or Vote As Helpful if a post has been useful to you. This can be beneficial to other community members reading the thread.

  • Search for ABAP Webdynpro Best practice or/and Evaluation grid

    Hi Gurus,
    Managers and team leaders are facing the move of SAP application development to the web. Functional people propose web applications to business people. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing claims about Web Dynpro response time. The business wants a 3-second response time and we have 20 or 25 seconds.
    I want to give functional people a recommendation document explaining that in certain cases the usage of Web Dynpro will not be a benefit for the business.
    I know that the transfer of data, the complexity of the screen and also the hardware are among the key factors, but I expect some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot. I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has just recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the [first part|http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/]. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • Expired Updates Removal - Best Practices

    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I was searching for best practices for removing expired updates from environment and found this useful link.
    There are some queries :
    1) It actually says to remove all the expired updates in one go without removing them from SUGs first. When expired updates that are also part of an active SUG are removed, wouldn't this trigger a software update rescan request for all clients in the collections those SUGs were targeted to, since there was a change in the SUG?
    2) How about deleting the deployments from collections and then removing the expired updates from only those SUGs, proceeding that way? Wouldn't this lower the processing?
    3) Expired updates that are not part of any SUG will be removed; just to make sure, if an expired update is part of a SUG that is not targeted to any collection, will it still be removed?
    4) Once an expired update is removed, what is the process for its removal from the distribution point? What other automated tasks will be triggered, such as updating the software update packages on the DP once there is any change? I have been prestaging software update packages and extracting them on DPs. For any new DP, since the prestage still contains the older (expired, removed) updates, will they get extracted on the new DP?
    Are all the steps I mentioned above valid for superseded updates instead of expired ones?

    I am not clear on the below, Jacob:
    "If you delete the deployment, all of the policy for those updates will be removed. But that removes every single update and not just the ones you removed. A bit more processing goes into removing everything."
    My concern here is: suppose there are 10 SUGs, each deployed to 40 collections, and say there are 1000 updates.
    If I select all the expired updates and just edit their membership, and random updates are part of all 10 SUG deployments, then removing them will trigger the policy cycle for all the collection clients.
    What I was talking about is: pick 1 SUG out of the 10 and remove it from the 40 collections first. Once that is done, go ahead and remove the expired updates from that SUG.
    This is what I need some clarification on.

  • How to get Best practices solution

    Hi,
           We want to apply Best Practices on our server.
           Please advise how to get Best Practices via Solution Manager or the SAP Service Marketplace.
    Best regards,
       Gaito

    Use your secure user ID to log in to http://service.sap.com. Click on "Quick Links" in the top right corner of the marketplace. Under "Software downloads", navigate to the "Installations and Upgrades" link.
    You can search for "Best Practices" from here.

  • Best practice on dynamically changing drop down in page fragment

    I have a search form, which is a page fragment that is shared across many pages. This form allows users to select a field from the drop-down list and search for a particular value, as seen in the screenshot here:
    http://photos1.blogger.com/blogger2/1291/174258525936074/1600/expanded.5.gif
    Please note that the search options are part of a page fragment embedded within a page - so that I can re-use it across many pages.
    The drop-down list changes based on the page this fragment is embedded in. For the users page, it will be last name, first name, etc. For the parts page, it will be part number, part description, etc.
    Here is my code:
    Iterator it = getTableRowGroup1().getTableColumnChildren();
    Option[] options = new Option[getTableRowGroup1().getColumnCount()];
    int i = 0;
    while (it.hasNext()) {
        TableColumn column = (TableColumn) it.next();
        if (column.getSort() != null) {
            options[i] = new Option(column.getSort().toString(), column.getHeaderText());
        } else {
            options[i] = new Option(column.getHeaderText(), column.getHeaderText());
        }
        i++;
    }
    search search = (search) getBean("search");
    search.getSearchDDFieldsDefaultOptions().setOptions(options);
    This code works, but it gives me all fields of the table available in the drop-down. I want to be able to pick and choose. Maybe have an external properties file associated with each page, where I can list the fields available for the search drop-down?
    What is the best practice to load the list of options available for the drop-down on each page (i.e. last name, first name, etc.)? I can have the code embedded in prerender of each page, use some sort of resource bundle for each page, or maybe use a bean?
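Whichever place the list lives (properties file, resource bundle, or bean), the pick-and-choose step itself is just filtering the full column list through a per-page allow-list. A minimal sketch, in JavaScript for illustration (the actual app is Java/JSF, and the column names are made up):

```javascript
// Filter the full list of table columns through a per-page allow-list,
// which could be loaded from a per-page properties file or bean.
function filterOptions(allColumns, allowed) {
  return allColumns.filter((column) => allowed.includes(column));
}

// The users page only exposes name fields in the search drop-down:
const visible = filterOptions(["lastName", "firstName", "email", "id"],
                              ["lastName", "firstName"]);
// → ["lastName", "firstName"]
```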

    I have to agree with Pixlor and here's why:
    http://www.losingfight.com/blog/2006/08/11/the-sordid-tale-of-mm_menufw_menujs/
    and another:
    http://apptools.com/rants/menus.php
    Don't waste your time on them, you'll only end up pulling your hair out  :-)
    Nadia
    Adobe® Community Expert : Dreamweaver
    Unique CSS Templates |Tutorials |SEO Articles
    http://www.DreamweaverResources.com
    Book: Ultimate CSS Reference
    http://www.sitepoint.com/launch/005dfd4/3/133
    http://twitter.com/nadiap

  • Best practice for searching on surname/lastname/name in Dutch

    I'm looking for a best practice to store names of persons, but also names of companies, in my database.
    I always store them as is (seems logical since you need to be able to display the original input-name) but I also want to store them transformed in some sort of way so I can easily search on them with LIKE! (Soundex, Metaphone, Q-Gram, ...)
    I know SOUNDEX and DIFFERENCE are included in SQLServer, but they don't do the trick.
    If somebody searches for the phrase "BAKKER", you should find names like "Backer", "Bakker", ... but also "De Backer", "Debecker", ... and this is where SOUNDEX fails ...
    Does someone know some websites to visit, or someone already wrote a good function to transform a string that I can use to store the names but also to transform my search data?
    (Example:  (Pseudo lang :-))
    function MakeSearchable (sString)
      sString = sString.Replace(" ", ""); //Remove spaces
      sString = sString.Replace("CK", "K");
      sString = sString.Replace("KK", "K");
      sString = sString.Replace("C", "S");
      sString = sString.Replace("SS", "S");
      return sString;
    Greetz,
    Tim

    Thanks for the response, but unfortunately the provided links are not much help:
    - The first link is about an article I don't have access to (I'm not a registered user)
    - The second link is about Integration Services. This is nice for Integration stuff, but I need to have a functionality within a frontend. 
    - The third link is for use in Excel.
    Maybe I'm looking for the wrong thing when wanting to create an extra column with "cleaned" up data. Maybe there's another solution from within my frontend or business layer, but I simply want a textbox on a form where users can type a search-value like
    "BAKKER". The result of the search should return names like "DEBACKER", "DE BEKKER", "BACKER", "BAKRE", ...
    I used to work in a hospital where they wrote their own SQL-function (on an Interbase database) to do this: They had a column with the original name, and a column with a converted name:
    => DEBACKER => Converted = DEBAKKER
    => DE BEKKER => Converted = DEBEKKER
    => BACKER => Converted = BAKKER
    => BAKRE => Converted = BAKKER
    When you searched for "BAKKER", you did a LIKE operation on the converted column ...
    What I am looking for is a good function to convert my data as above.
    Greetz,
    Tim
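A minimal sketch of such a conversion function, using ordered replacements as in the examples above. The rule list is illustrative and would need tuning against real Dutch name data; in particular, a case like "BAKRE" → "BAKKER" involves a transposition that plain replacement rules can't express, so a phonetic algorithm (e.g. a Dutch-adapted Double Metaphone) may ultimately be needed:

```javascript
// Convert a name to a normalized search form, stored in a separate
// column; searches then do a LIKE match against the converted column.
// Rule order matters: compound rules (CK) run before single letters (C).
function makeSearchable(name) {
  const rules = [
    [/\s+/g, ""],   // drop spaces: "DE BEKKER" -> "DEBEKKER"
    [/CK/g, "KK"],  // "DEBACKER" -> "DEBAKKER", "BACKER" -> "BAKKER"
    [/C/g, "S"],    // remaining C's treated as S
  ];
  let s = name.toUpperCase();
  for (const [pattern, replacement] of rules) {
    s = s.replace(pattern, replacement);
  }
  return s;
}

makeSearchable("De Backer");  // → "DEBAKKER"
```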

  • Best Practice for saving all fields and searches in capital letters

    I want to save all fields on all my pages in CAPS and also to search in CAPS, e.g. a user enters search criteria in small letters, and it should automatically convert to caps. What is the best practice to do that?
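The conversion itself is a one-liner; the real question is where to apply it (on input, on save, and on the search term). A sketch in JavaScript, assuming a web front end:

```javascript
// Normalize both the stored value and the search term at the same
// boundary, so storage and comparison are both uppercase.
const normalize = (value) => value.trim().toUpperCase();

normalize("  john smith ");  // → "JOHN SMITH"
```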

    Hi,
    There are already so many discussions on this in this forum, some of the links are:
    Uppercase
    How to convert user input in the page to upper case?
    Sireesha

  • Azure Search Best Practice

    I have a few questions regarding best practices for implementing Azure Search, I am working on loading Titles and Usernames into Azure Search with a unique ID as key, search would be on various matching words in Titles or Usernames:
    - Is there a detailed article or whitepaper that discusses best practice?
    - Should we always use filter instead of search to improve response time?
    - I don't know how things work under the hood, is it possible to turn off certain features; for example, scoring profiles, to improve response time?
    - Can i run a load test on GET queries?  How many different GET queries?  Does the same query get cached?
    - I have set up an indexer with an Azure SQL data source with a data change policy against a DateModified column in the table. This indexer runs every 5 minutes. I'd imagine an index is necessary on this column in the SQL table? Also, when the indexer runs, does it check all documents in the search index against the data source?
    Thanks in advance,
    Ken

    We don't have an end-to-end whitepaper that covers all of this yet. Here are notes on your specific questions, feel free to add more questions as details come up:
    Filter vs search: in general, more selective queries (where the combination of filter + search matches fewer documents of the overall index) will run faster, since we need to score and sort fewer documents. As for whether you choose filter vs search: if you want an exact boolean predicate, then use a filter; if you want soft search criteria (with linguistics and such applied to it), then use search.
    Scoring profiles are off by default. They only kick in if you create a scoring profile in the index explicitly and either reference it in queries or mark it as default. With no scoring profiles present, scoring of documents is based on the properties of
    the search text and document text.
    Yes, you can do your perf testing using GET for search requests. While the same query doesn't get cached the underlying data ends up being warmer. A good pattern is to have a bunch of keywords and have your test create different searches with 2-3 words each
    (or whatever is typical in your scenario) based on those keywords.
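That keyword-based load-test pattern could be sketched like this (JavaScript for illustration; the keyword list and word count are placeholders):

```javascript
// Build varied search strings from a keyword pool so successive GET
// queries differ, exercising the index rather than repeating one
// already-warm query.
function randomQuery(keywords, wordsPerQuery) {
  const pool = [...keywords];   // copy so we can remove picked words
  const picked = [];
  for (let i = 0; i < wordsPerQuery && pool.length > 0; i++) {
    const idx = Math.floor(Math.random() * pool.length);
    picked.push(pool.splice(idx, 1)[0]);
  }
  return picked.join(" ");
}

randomQuery(["silver", "ring", "necklace", "gold"], 2);
// e.g. "necklace silver"
```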
    For the SQL table question, yes, it's better if you have an index in the column you use as high-watermark so SQL doesn't need to do a table scan each time the indexer runs. The larger the table the more important this is.
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Best Practices - Removing table fragmentations.

    What is the best practice for removing fragmentation in the tables? Should this process be run every week, every month, or every day?

    Hi Chunm,
    "What is the best practice for removing fragmentation in the tables?"
    I monitor chained rows (table fetch continued rows), reorg with dbms_redefinition (or move to a larger blocksize), and ALWAYS adjust PCTFREE to prevent future fragmentation. Read this:
    http://www.dba-oracle.com/t_identify_chained_rows.htm
    http://www.dba-oracle.com/oracle_tips_fetch_cont_rws.htm
    Hope this helps. . .
    Don Burleson
    Oracle Press author

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document of best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation.  I have the Health Analyzer rules enabled for index fragmentation and they run daily, but I've never received an alert despite the majority
    of our databases having greater than 40% fragmentation and some are even above 95%.  
    Obviously it has our attention now and we want to get this addressed.  My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was
    in 2007. 
    Thanks,
    Troy

    It depends. Here are the rules for that job:
    Sampled mode:
    Page count > 24 and average fragmentation in percent > 5,
    or
    Page count > 8 and average page space used in percent < fill_factor * 0.9
    (the fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors)
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
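For reference, the two trigger conditions can be written as a single predicate (a paraphrase of the rule as described here, not Microsoft's actual code; the property names are made up):

```javascript
// Returns true when the health rule (sampled mode) would flag the index.
function needsDefrag({ pageCount, avgFragmentationPercent,
                       avgPageSpaceUsedPercent, fillFactor }) {
  return (pageCount > 24 && avgFragmentationPercent > 5) ||
         (pageCount > 8 && avgPageSpaceUsedPercent < fillFactor * 0.9);
}

needsDefrag({ pageCount: 100, avgFragmentationPercent: 40,
              avgPageSpaceUsedPercent: 95, fillFactor: 100 });  // → true
```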
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • BEST PRACTICE FOR AN EFFICIENT SEARCH FACILITY

    Good Morning,
    Whilst in training, our trainer said that the best efficiency from the SharePoint Search would come from installing the Search facility on a separate (hardware) server.
    I'm not sure how to have this done.
    Your advice and recommendation would be greatly appreciated.
    thanks a mil.
    NRH

    Hi,
    You can create a dedicated search server that hosts all search components - the query and index roles, and crawl - all on one physical server.
    Here are some articles for your reference:
    Best practices for search in SharePoint Server 2010:
    http://technet.microsoft.com/en-us//library/cc850696(v=office.14).aspx
    Estimate performance and capacity requirements for SharePoint Server 2010 Search:
    http://technet.microsoft.com/en-us/library/gg750251(v=office.14).aspx
    Below is a similar post for your reference:
    http://social.technet.microsoft.com/Forums/en-US/be5fcccd-d4a3-449e-a945-542d6d917517/setting-up-dedicated-search-and-crawl-servers?forum=sharepointgeneralprevious
    Best regards
    Wendy Li
    TechNet Community Support
