Azure Search Best Practice

I have a few questions regarding best practices for implementing Azure Search. I am working on loading Titles and Usernames into Azure Search with a unique ID as the key; searches will match various words in the Titles or Usernames:
- Is there a detailed article or whitepaper that discusses best practice?
- Should we always use filter instead of search to improve response time?
- I don't know how things work under the hood; is it possible to turn off certain features (for example, scoring profiles) to improve response time?
- Can I run a load test on GET queries? How many different GET queries should I use? Does the same query get cached?
- I have set up an indexer with an Azure SQL data source and a data change policy against a DateModified column in the table. This indexer runs every 5 minutes. I'd imagine an index is necessary on this column in the SQL table? Also, when
the indexer runs, does it check all documents in the search index against the data source?
Thanks in advance,
Ken

We don't have an end-to-end whitepaper that covers all of this yet. Here are notes on your specific questions; feel free to add more questions as details come up:
Filter vs. search: in general, more selective queries (where the combination of filter + search matches fewer documents of the overall index) will run faster, since we need to score and sort fewer documents. As for when to choose filter vs. search: if you want
an exact Boolean predicate, use a filter; if you want soft search criteria (with linguistics and such applied), use search.
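For illustration (a sketch added here, not from the original reply): over the REST API, soft search criteria and an exact filter combine in one request. The service name, index name, field names, and API version below are all assumptions.

    import requests

    url = "https://myservice.search.windows.net/indexes/titles/docs"
    params = {
        "api-version": "2015-02-28",
        "search": "azure best practice",   # soft criteria: analyzed and scored
        "$filter": "Username eq 'ken'",    # exact Boolean predicate, not scored
    }
    resp = requests.get(url, params=params, headers={"api-key": "<query-key>"})
    print(resp.json())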
Scoring profiles are off by default. They only kick in if you explicitly create a scoring profile in the index and either reference it in queries or mark it as the default. With no scoring profiles present, document scoring is based on the properties of
the search text and the document text.
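To make that concrete (again my sketch, not from the reply): a scoring profile affects a query only when it is named, e.g. via the scoringProfile parameter; "boostTitles" is a hypothetical profile that would have to exist on the index.

    params = {
        "api-version": "2015-02-28",
        "search": "azure",
        "scoringProfile": "boostTitles",  # omit this and default scoring applies
    }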
Yes, you can do your perf testing using GET for search requests. While the same query doesn't get cached, the underlying data ends up being warmer after repeated access. A good pattern is to take a pool of keywords and have your test build different searches of 2-3 words each
(or whatever is typical in your scenario) from those keywords.
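A sketch of that load-test pattern in Python (the keyword pool, request count, and endpoint details are placeholders):

    import random
    import requests

    KEYWORDS = ["azure", "search", "index", "title", "username", "query"]
    URL = "https://myservice.search.windows.net/indexes/titles/docs"

    def one_request():
        words = random.sample(KEYWORDS, random.randint(2, 3))  # 2-3 word queries
        params = {"api-version": "2015-02-28", "search": " ".join(words)}
        r = requests.get(URL, params=params, headers={"api-key": "<query-key>"})
        return r.elapsed.total_seconds()

    latencies = [one_request() for _ in range(100)]
    print("avg latency: %.3fs" % (sum(latencies) / len(latencies)))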
For the SQL table question: yes, it's better to have an index on the column you use as the high-watermark, so SQL doesn't need to do a table scan each time the indexer runs. The larger the table, the more important this is.
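For example (a sketch with assumed table and column names; the same T-SQL can be run directly in SSMS):

    import pyodbc

    conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                          "DATABASE=mydb;UID=user;PWD=<password>")
    # Lets the indexer's "DateModified > @last_high_watermark" probe
    # use an index seek instead of a table scan.
    conn.execute("CREATE NONCLUSTERED INDEX IX_Titles_DateModified "
                 "ON dbo.Titles (DateModified)")
    conn.commit()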
This posting is provided "AS IS" with no warranties, and confers no rights.

Similar Messages

  • Use MS Account or Organization Account to create Azure Account - Best Practice?

    Hi, I see that it is now possible to create Azure accounts as an org: http://azure.microsoft.com/en-us/documentation/articles/sign-up-organization/. Previously, you needed an MS Account. I also note that http://msdn.microsoft.com/en-us/library/azure/hh531793.aspx
    says "Use organizational accounts for all administrative roles". Note the word "all", which I guess includes the main admin account itself. Is this now considered MS's best practice for organizations? I have to admit that at the moment I can't see
    what difference it really makes in practice. Any thoughts?
    TIA.
    Mark

    Hi,
    "Mark as answer" means that the post could help you, of course, we hope our posts could give you some help, if not, please feel free unmark, if you still have issues with this topic, we welcome to post again, Thanks for your understanding.
    For this topic, as mention by Bharath Kumar P, we usually use Microsoft account for single user, you could try to sign up a one-month free trial at:
    http://azure.microsoft.com/en-us/pricing/free-trial/, here are Azure Free Trial FAQ:
    http://azure.microsoft.com/en-us/pricing/free-trial-faq/, if you have any questions, please feel free to let me know.   
    Best Regards,
    Jambor

  • Best practice for searching on surname/lastname/name in Dutch

    I'm looking for a best practice for storing names of persons, but also names of companies, in my database.
    I always store them as-is (which seems logical, since you need to be able to display the original input name), but I also want to store them transformed in some way so I can easily search on them with LIKE (Soundex, Metaphone, Q-Gram, ...).
    I know SOUNDEX and DIFFERENCE are included in SQL Server, but they don't do the trick.
    If somebody searches for the phrase "BAKKER", you should find names like "Backer", "Bakker", ... but also "De Backer", "Debecker", ... and this is where SOUNDEX fails.
    Does someone know some websites to visit, or has someone already written a good function that transforms a string, which I can use both to store the names and to transform my search input?
    (Example:  (Pseudo lang :-))
    static string MakeSearchable(string sString)
    {
      sString = sString.ToUpper();          // normalize case first
      sString = sString.Replace(" ", "");   // remove spaces: "DE BACKER" -> "DEBACKER"
      sString = sString.Replace("CK", "K"); // order matters: apply before the single-letter rules
      sString = sString.Replace("KK", "K");
      sString = sString.Replace("C", "S");
      sString = sString.Replace("SS", "S");
      return sString;
    }
    Greetz,
    Tim

    Thanks for the response, but unfortunately the provided links are not much help:
    - The first link is to an article I don't have access to (I'm not a registered user).
    - The second link is about Integration Services. This is nice for integration work, but I need the functionality within a frontend.
    - The third link is for use in Excel.
    Maybe I'm looking for the wrong thing in wanting to create an extra column with "cleaned-up" data. Maybe there's another solution within my frontend or business layer, but I simply want a textbox on a form where users can type a search value like
    "BAKKER". The search should return names like "DEBACKER", "DE BEKKER", "BACKER", "BAKRE", ...
    I used to work in a hospital where they wrote their own SQL function (on an InterBase database) to do this: they had a column with the original name and a column with a converted name:
    => DEBACKER => Converted = DEBAKKER
    => DE BEKKER => Converted = DEBEKKER
    => BACKER => Converted = BAKKER
    => BAKRE => Converted = BAKKER
    When you searched for "BAKKER", you did a LIKE operation on the converted column ...
    What I am looking for is a good function to convert my data as above.
    Greetz,
    Tim
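    A rough Python sketch of that converted-column idea (an illustration added here, not from the thread; the replacement rules are assumptions and far from a complete Dutch phonetic algorithm):

        import re

        def make_searchable(name):
            """Collapse a name to a crude phonetic key, as in the hospital example."""
            s = re.sub(r"[^A-Z]", "", name.upper())      # uppercase, drop spaces/punctuation
            s = s.replace("CK", "KK").replace("C", "K")  # C/CK -> K sounds
            s = re.sub(r"[AEIOUY]+", "A", s)             # fold vowels so BAKKER ~ BEKKER
            return s

        names = ["De Backer", "Debecker", "Bakker", "De Bekker", "Bakre"]
        key = make_searchable("BAKKER")                  # -> "BAKKAR"
        # Store make_searchable(name) in a converted column; search with LIKE '%key%'.
        print([n for n in names if key in make_searchable(n)])
        # Matches all but "Bakre"; transpositions would need extra rules.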

  • Best Practice: Configuring Windows Azure Management Services

    I have 3 Websites, 1 Blob Storage account, and 1 SQL Server database that I would like to configure for basic stability and performance monitoring. I know I can set up alerts through Management Services based on various metrics. My question is: can someone give me a recommended
    set of metrics that are good baselines?
    It is nice that Azure is so customizable, but frankly I have no idea how much CPU Time in milliseconds over a given evaluation window is appropriate. Or how many HTTP Server Errors? More than 0 seems bad, no? Wouldn't I want to know of any/all errors?
    So if anyone has some "best practice" metrics for me, that would be really helpful.
    Thanks.

    Hi,
      >> can someone give me a recommended set of metrics that are good baselines?
    Actually, many metrics depend on your scenario. For instance, if there are a lot of concurrent requests, or if a single request is expected to do some heavy computation, then high CPU usage is expected, so it is difficult to give
    you a specific number.
    In general, you want the CPU usage of a web server to be as high as possible (idle CPU costs money but does not provide valuable results), yet low enough that additional concurrent requests can still be served without too much
    delay. In Windows Azure, you may want to set up auto scaling so that if CPU usage is high over a period, you add an instance, and if CPU usage is low over a period, you remove an instance. You may also want to use response time in addition
    to CPU to decide whether to add/remove an instance.
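    A toy Python sketch of that scale-out/scale-in rule (the thresholds and the averaging window are illustrative assumptions, not Azure defaults):

        def scaling_decision(cpu_samples, high=0.75, low=0.25):
            # cpu_samples: CPU utilization (0..1) over the evaluation window.
            avg = sum(cpu_samples) / len(cpu_samples)
            if avg > high:
                return +1   # scale out: add an instance
            if avg < low:
                return -1   # scale in: remove an instance
            return 0        # within band: do nothing

        print(scaling_decision([0.9, 0.85, 0.8]))  # -> 1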
      >> Or how many Http Server Errors? More than 0 seems bad, no? Wouldn't I want to know of any/all errors?
    As for server errors: in general you want to be notified of all errors (> 0), since they're unexpected and need to be investigated. But if in your scenario you expect a certain level of server errors, then it is fine to use a larger threshold.
    Best Regards,
    Ming Xu

  • Search for ABAP Webdynpro Best practice or/and Evaluation grid

    Hi Gurus,
    Managers and team leaders are facing the move of SAP application development to the web, and functional people are proposing web applications to business people. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing complaints about Web Dynpro response times: the business wants 3-second response times, and we see 20 or 25 seconds.
    I want to give functional people a recommendation document explaining that in certain cases the use of Web Dynpro will not benefit the business.
    I know that data transfer volume, screen complexity, and hardware are among the key factors, but I'd appreciate some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot; I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has just recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the first part: http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • Best Practice for saving all fields and searches in capital letters

    I want to save all fields on all my pages in CAPS, and also to search in CAPS; e.g., if a user enters search criteria in small letters, it should automatically be converted to caps. What is the best practice for doing that?

    Hi,
    There are already many discussions on this in this forum; some of the links are:
    Uppercase
    How to convert user input in the page to upper case?
    Sireesha

  • Best Practice while configuring Traffic Manager for Azure Website

    Hi Team,
    I want to understand what the best practice is when configuring Traffic Manager for an Azure website.
    To give you the background, let me explain my requirement: I have one website whose target audience is roughly 40% East US, 40% UK, and the remaining 20% Asia-Pacific.
    What I want is a failover + performance based Traffic Manager configuration.
    My thinking:
    1) We create 1 website with 2 instances in each region (East US, East Asia, West US, for example), so 3 deployments of the website in total (each with a region-based URL).
    2) Create a Traffic Manager profile based on performance and add those 3 instances; that becomes website-tmonperformance.
    3) Create a Traffic Manager profile based on failover and add those 3 instances; that becomes website-tmonfailover.
    4) Create a Traffic Manager profile and ?? (I don't know the criteria), add both of the above profiles to it, and take its URL as the final URL for end users.
    I am not sure (1) whether this is the right approach, and (2) if it is, which criterion we should select in the 4th step when creating the final Traffic Manager profile: round-robin/performance/failover?
    After all this, if a user tries to access the site from the US, will Traffic Manager divert them to a US data center, or will it wait for failover, and until then serve them from East Asia if East Asia is the first instance in my configuration?
    Regards, Brijesh Shah

    Hi Jonathan,
    Thanks for your quick reply. Actually, the question is a bit different; let me explain it another way.
    I was asking for a recommendation from the Azure Traffic Manager team on whether my understanding is correct. We want performance with failover.
    So we have one Azure website, say todoapp, deployed in 3 different regions. I want performance-based routing as well as failover-based routing, but obviously I can't give two URLs to my end users, so on top of those I will
    require one more Traffic Manager profile. So:
    step 1: I create one Traffic Manager profile with the performance criterion, named TMForPerformance.trafficmanager.com, and add all 3 instances (all in different regions, so that won't create any issues).
    step 2: I create one more Traffic Manager profile with the failover criterion, named TMForFailover.trafficmanager.com, and add all 3 instances (all in different regions, so that won't create any issues).
    step 3: I create one final Traffic Manager profile with the performance criterion, named todoapp.trafficmanager.com, and add these two Traffic Manager profiles instead of the 3 regional websites.
    Question 1) Is this the correct structure if we want to achieve performance with failover, or is there a better solution?
    Question 2) In step 3, which criterion should we select: performance / round robin / failover?
    Regards, Brijesh Shah

  • BEST PRACTICE FOR AN EFFICIENT SEARCH FACILITY

    Good Morning,
    Whilst in training, our trainer said that the most efficient use of SharePoint Search would be to install the search facility on a separate server (separate hardware).
    I'm not sure how to have this done.
    Your advice and recommendation would be greatly appreciated.
    thanks a mil.
    NRH

    Hi,
    You can create a dedicated search server that hosts all search components (query role, index role, and crawl) on one physical server.
    Here are some articles for your reference:
    Best practices for search in SharePoint Server 2010:
    http://technet.microsoft.com/en-us//library/cc850696(v=office.14).aspx
    Estimate performance and capacity requirements for SharePoint Server 2010 Search:
    http://technet.microsoft.com/en-us/library/gg750251(v=office.14).aspx
    Below is a similar post for your reference:
    http://social.technet.microsoft.com/Forums/en-US/be5fcccd-d4a3-449e-a945-542d6d917517/setting-up-dedicated-search-and-crawl-servers?forum=sharepointgeneralprevious
    Best regards
    Wendy Li
    TechNet Community Support

  • Search Fragment - Best Practice?

    We are designing a search fragment that will reside at the beginning of every form. The search fragment will allow a user to select one customer from a number of customers displayed. Once the user selects a customer, the search fragment will populate the main (parent) form with the customer data and then the search fragment should disappear.
    We have a mock-up of the search fragment, but questions are surfacing as to the best way to integrate it into the form. Each form needs different "customer data" fields from the search; for example, one form needs customer name and account, while another form only needs customer name.
    Here are some questions that are surfacing:
    1. Should the search fragment "poke" the information into the parent form once the user makes a selection? If so, the fragment needs to detect which form it's on and send only the data the form needs. This will make the search fragment's javascript more complex.
    2. Or, should the parent form "grab" only the information from the search fragment that it needs? This localizes the javascript to the parent form, leaving the search frag more generic. In this scenario I envision that once a user selects a customer, all customer information will be plopped into hidden fields on the search frag and the parent will use only the data that it needs.
    Is there a best practice for doing something like this? Lessons learned?
    Thanks,
    Elaine

    How is the search fragment going to know what to extract, or is it going to be the same info every time? I assume that is what you are doing. How the data is returned to you will dictate the best practice. If it is a stream of XML, then I would load that stream into the data DOM and allow each of the fields to get its own data from the DOM (this minimizes the code needed on each form). If you are getting the info back field by field, then the hidden-field route is the best way to go (it is simple, and the coding is very simple as well).

  • Best Practices for highly dynamic features like Search

    For a project I need to implement a "Search" component, which will most probably use the Lucene search that is built into CQ5. Since most of the other content on the site is cached on the dispatcher, my concern is the load such a dynamic feature will create on the publish instance.
    What are the best practices to minimize the load on the publish instance in such a scenario?

    One option is to have your search results displayed via AJAX rather than a full page request. That way most of the page is cached in the dispatcher, and only the AJAX request with the search results is dynamic.
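    A generic sketch of that split (an illustration added here, not CQ5-specific; a real CQ5 implementation would be a Sling servlet on the publish instance returning JSON): the page itself stays cacheable, and only a small search endpoint is dynamic.

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        TITLES = ["Getting started", "Dispatcher caching", "Search basics"]  # stand-in data

        @app.route("/search.json")  # only this request bypasses the page cache
        def search():
            q = request.args.get("q", "").lower()
            return jsonify(results=[t for t in TITLES if q in t.lower()])

        if __name__ == "__main__":
            app.run()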

  • Best Practices for Handling queries for searching XML content

    Experts: We have a requirement to count, out of 4M rows, those where a specific XML tag's value begins with a given prefix. I have a text index created, but the query is extremely slow when I use the contains operator.
    select count(1) from employee
    where contains(doc, 'scott% INPATH (/root/element1/element2/element3/element4/element5)') > 0
    What is Oracle's best-practice recommendation for querying/indexing such searches?
    Thanks

    Can you provide a test case that shows the structure of the data and how you've generated the index? Otherwise, the generic advice is going to be "use prefix indexing".
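    To make "prefix indexing" concrete (a sketch added here, not from the reply): in Oracle Text, a BASIC_WORDLIST preference with PREFIX_INDEX enabled speeds up right-truncated terms like scott%. Here it is via the cx_Oracle driver (the connection string, object names, and prefix lengths are assumptions; the same DDL can be run directly in SQL*Plus):

        import cx_Oracle

        conn = cx_Oracle.connect("user/password@host/service")
        cur = conn.cursor()
        # Wordlist preference that also indexes token prefixes of 1-6 chars.
        cur.execute("""
            begin
              ctx_ddl.create_preference('emp_wordlist', 'BASIC_WORDLIST');
              ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_INDEX', 'YES');
              ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_MIN_LENGTH', '1');
              ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_MAX_LENGTH', '6');
            end;""")
        # PATH_SECTION_GROUP is what makes INPATH(...) queries possible.
        cur.execute("""
            create index emp_doc_idx on employee (doc)
            indextype is ctxsys.context
            parameters ('wordlist emp_wordlist section group ctxsys.path_section_group')""")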

  • Best practices for search service in a sharepont farm

    Hi
    In a SharePoint web application, many BI dashboards are deployed, and we also plan to configure enterprise search for this application.
    In our SharePoint 2010 farm we have:
    2 application servers
    2 WFE servers
    One application server runs Central Administration + the Web Analytics service and is itself a domain controller;
    the second application server runs only the Secure Store Service + PerformancePoint Service.
    1 - If we run the Search Server service on the second application server, can it cause any issues for BI performance?
    2 - Is it best practice to run the PerformancePoint Service and the search service on one server?
    3 - Is it best practice to run the search service on an application server where other services are already running,
    given that we have only one SharePoint web application that needs to be crawled and indexed, with the crawl schedule below?
    We only run a full crawl once per week and an incremental crawl at midnight daily.
    adil

    Hi adil,                      
    Based on your description, you want to know the best practices for the search service in a SharePoint farm.
    Different farms have different search topologies; for the best search performance, I recommend that you follow the guidance for small, medium, and large farms.
    The first article below covers the guidance for the different farm sizes.
    The search service can run with other services on the same server, but if conditions permit and you want better performance for the search service and the other services (including BI), you can deploy the search service on a dedicated server.
    If conditions permit, I also recommend combining a query component with a front-end web server, to avoid putting crawl components and query components on the same server.
    In your SharePoint farm, you can deploy the query components on a WFE server and the crawl components on an application server.
    The articles below describe the best practices for enterprise search.
    https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
    https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
    Best regards      
    Sara Fan
    TechNet Community Support

  • Best Practice/Validation for deploying a Package to Azure

    Before deploying a package to Azure, what kind of best-practice validation can be done to check the package's compatibility with the Azure environment?

    What do you mean by the compatibility of the Azure package with the Azure environment? What do you want to validate? It would be great if you provided a bit of background for your question.
    As far as deployment best practice is concerned, the usual way is to upload your Azure cloud service deployment package and configuration files (*.cspkg and *.cscfg) to a blob container first, and then deploy to the cloud service by referring to the uploaded
    blobs. This not only gives you the flexibility to keep different versions of your deployments, which you can use to roll back the entire service, but also makes the deployment comparatively faster than deploying from VS or uploading
    manually from the file system.
    You can refer link - http://azure.microsoft.com/en-in/documentation/articles/cloud-services-how-to-create-deploy/#deploy
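    A small sketch of that upload step (an illustration added here; it assumes the azure-storage Python SDK's BlockBlobService, and the account, container, and file names are placeholders):

        from azure.storage.blob import BlockBlobService

        blob_service = BlockBlobService(account_name="myaccount", account_key="<key>")
        blob_service.create_container("deployments")
        # Keeping versioned copies of package + config makes rollback a re-deploy
        # of an older blob rather than a rebuild.
        for name in ["MyService.cspkg", "ServiceConfiguration.cscfg"]:
            blob_service.create_blob_from_path("deployments", "v1.2/" + name, name)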
    Bhushan | Blog | LinkedIn | Twitter

  • Is there a list of best practices for Azure Cloud Services?

    Hi all;
    I was talking with a SQL Server expert today and learned that Azure SQL Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting, as
    opposed to when we go live.
    Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
    We will be placing the cloud services in multiple datacenters and using Traffic Manager to point people to the right one. The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client.
    Mostly it will pass all requests & responses across from one to the other.
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    hi dave,
    >>Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
    For this issue, I have collected some blogs and documents about best practices for Azure cloud services. You can review them, but I am not sure they cover exactly what you need.
    http://msdn.microsoft.com/en-us/library/azure/xx130451.aspx
    http://gauravmantri.com/2013/01/11/some-best-practices-for-building-windows-azure-cloud-applications/
    http://www.hanselman.com/blog/CloudPowerHowToScaleAzureWebsitesGloballyWithTrafficManager.aspx
    http://msdn.microsoft.com/en-us/library/azure/jj717232.aspx
    http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
    >>The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests & responses across from one to the other.
    For your scenario, if you'd like the instances to communicate with each other, I recommend you refer to this document:
    http://msdn.microsoft.com/en-us/library/azure/hh180158.aspx. And generally, if you want to connect the client to the server on Azure, Service Bus is a good choice (http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-multi-tier-app-using-service-bus-queues/).
    If I misunderstood, please let me know.
    Regards,
    Will

  • Azure table design best practice

    What's the best practice for designing tables in Azure Tables to optimize query performance?

    Hi Raj,
    When we design an Azure table, we need to consider the scalability of the table,
    and selecting the PartitionKey is the most important factor for scalability.
    Basically, we have two options, each of which has its advantages and disadvantages:
    First option: have a single partition, by using the same PartitionKey value for all entities.
    Second option: use a unique PartitionKey value for every entity.
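    (A sketch added here, not from the original reply: with the azure-storage Python SDK of the era, a common middle ground between those two options is one partition per natural key, e.g. per customer; the names below are illustrative.)

        from azure.storage.table import TableService

        table_service = TableService(account_name="myaccount", account_key="<key>")
        table_service.create_table("orders")
        # One partition per customer: that customer's rows stay together (fast
        # range queries), while different customers spread across partitions.
        table_service.insert_entity("orders", {
            "PartitionKey": "customer-42",
            "RowKey": "2014-07-01_order-9001",
            "Total": 129.95,
        })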
    For more information about how to get the most out of Windows Azure Tables, please refer to the link below:
    http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
    There is also a detailed article that explains how to design a scalable partitioning strategy for Windows Azure Storage; please refer to the link below:
    http://msdn.microsoft.com/en-us/library/hh508997.aspx
    Best Regards,
    Kevin Shen
