Best practice for a distributed BI4 deployment?

Hello Experts,
Need suggestions on the following:
We plan to deploy the BI4 software across 6 Servers (1 node on each server)
We plan to dedicate 2 servers to the Management/Intelligence tier and 4 servers to the Processing tier (Web Intelligence, Crystal Reports and Dashboards)
WebApp and Database will be hosted on separate servers and will not be part of the above 6 servers.
Option 1:
-Install BI4 on the first server (Start a new deployment)
-Then on the remaining 5 servers, use the option (Expand an existing deployment)
In this approach a CMS is created on all 6 nodes. However, our requirement is to have a CMS on only 2 nodes (we would stop it on the remaining 4 servers after the install). Is this a recommended approach? Since CMS instances poll each other frequently, we feel this might not be ideal. Could such an approach have a negative impact on the overall platform?
Or should we consider another option: install the BI4 software independently on each server, cluster the first 2 nodes with all default services, and then add the other 4 nodes to the cluster (creating only processing servers on those 4 nodes)?
Look forward to your recommendations!
Cheers,
Vikram

Hello,
maybe you want to check out the SAP BI Platform Pattern Books:
SAP Business Intelligence Platform Pattern Books - Business Intelligence (BusinessObjects) - SCN Wiki
Regards
-Seb.

Similar Messages

  • Best Practice for Server 2012 R2 deployment

    Hello all,
    I am trying to deploy Server 2012 R2 into a Server 2008 R2 forest. Should I install a 2012 R2 DC first and then the application servers, or install the application servers first and then a 2012 R2 DC?
    By application servers in this case I am referring to WSUS.  We already have a WSUS server in place but are having trouble patching Windows 8.1 and Server 2012 R2 in our test lab.  My concern is that I will have the same problem in production.
    So should I deploy a WSUS server on 2012 R2 before I deploy a 2012 R2 DC?

    Sorry, I had considered this a closed issue after getting your reply about doing the DC first. That makes sense after all. Plus I have been dealing with some technical issues with the software; I could not activate it initially but figured out how to do so later.
    You can close this thread.

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both
    versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know
    each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused. 

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling the storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
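    The wrapper the poster describes is VBScript; purely as an illustration of the same detect-the-architecture-and-dispatch idea, here is a minimal Python sketch (the folder layout, installer name and /quiet switch are hypothetical):
    import platform
    import subprocess
    from pathlib import Path

    APP_DIR = Path(__file__).resolve().parent  # one folder per application, as described above

    def main() -> None:
        # On 64-bit Windows, platform.machine() reports "AMD64"; anything else is treated as x86 here.
        arch = "x64" if platform.machine().endswith("64") else "x86"
        installer = APP_DIR / arch / "setup.exe"   # hypothetical per-architecture binary
        if not installer.exists():
            installer = APP_DIR / "setup.exe"      # fall back to a single shared installer
        subprocess.run([str(installer), "/quiet"], check=True)  # hypothetical silent switch

    if __name__ == "__main__":
        main()
    Import it once into MDT and assign it once, exactly as with the VBScript original.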
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe this way is easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, task sequence, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best Practice for SRST deployment at a remote site

    What is the best practice for an SRST deployment at a remote site? Should a separate router, such as a 3800 series, be deployed for telephony in addition to another router for data? Is there a need for 2 different devices?

    Hi Brian,
    This is typically done all on one ISR router at the remote site. :) There are two flavors of SRST; here is the feature comparison:
    SRST Fallback
    This feature enables routers to provide call-handling support for Cisco Unified IP phones if they lose connection to remote primary, secondary, or tertiary Cisco Unified Communications Manager installations or if the WAN connection is down. When Cisco Unified SRST functionality is provided by Cisco Unified CME, provisioning of phones is automatic and most Cisco Unified CME features are available to the phones during periods of fallback, including hunt-groups, call park and access to Cisco Unity voice messaging services using SCCP protocol. The benefit is that Cisco Unified Communications Manager users will gain access to more features during fallback without any additional licensing costs.
    Comparison of Cisco Unified SRST and Cisco Unified CME in SRST Fallback Mode:
    Cisco Unified CME in SRST Fallback Mode
    • First supported with Cisco Unified CME 4.0: Cisco IOS Software 12.4(9)T
    • IP phones re-home to Cisco Unified CME if Cisco Unified Communications Manager fails. CME in SRST allows IP phones to access some advanced Cisco Unified CME telephony features not supported in traditional SRST
    • Support for up to 240 phones
    • No support for Cisco VG248 48-Port Analog Phone Gateway registration during fallback
    • Lack of support for alias command
    • Support for Cisco Unity® unified messaging at remote sites (Distributed Exchange or Domino)
    • Support for features such as Pickup Groups, Hunt Groups, Basic Automatic Call Distributor (BACD), Call Park, softkey templates, and paging
    • Support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0 on same computer
    • No support for secure voice in SRST mode
    • More complex configuration required
    • Support for digital signal processor (DSP)-based hardware conferencing
    • E-911 support with per-phone emergency response location (ERL) assignment for IP phones (Cisco Unified CME 4.1 only)
    Cisco Unified SRST
    • Supported since Cisco Unified SRST 2.0 with Cisco IOS Software 12.2(8)T5
    • IP phones re-home to SRST router if Cisco Unified Communications Manager fails. SRST allows IP phones to have basic telephony features
    • Support for up to 720 phones
    • Support for Cisco VG248 registration during fallback
    • Support for alias command
    • Lack of support for features such as Pickup Groups, Hunt Groups, Call Park, and BACD
    • No support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0
    • Support for secure voice during SRST fallback
    • Simple, one-time configuration for SRST fallback service
    • No per-phone emergency response location (ERL) assignment for SCCP Phones (E911 is a new feature supported in SRST 4.1)
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/prod_qas0900aecd8028d113.html
    These SRST hardware-based restrictions are very similar to the number of phones supported with CME. Here is the actual breakdown:
    Cisco 880 SRST Series Integrated Services Router: up to 4 phones
    Cisco 1861 Integrated Services Router: up to 8 phones
    Cisco 2801 Integrated Services Router: up to 25 phones
    Cisco 2811 Integrated Services Router: up to 35 phones
    Cisco 2821 Integrated Services Router: up to 50 phones
    Cisco 2851 Integrated Services Router: up to 100 phones
    Cisco 3825 Integrated Services Router: up to 350 phones
    Cisco Catalyst® 6500 Series Communications Media Module (CMM): up to 480 phones
    Cisco 3845 Integrated Services Router: up to 730 phones
    *The number of phones supported by SRST has been changed to multiples of 5 starting with Cisco IOS Software Release 12.4(15)T3.
    From this excellent doc;
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/data_sheet_c78-485221.html
    Hope this helps!
    Rob

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most App. Servers can generate the required classes while deploying the EJBs in the
    application, i.e. at install time. Some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation, so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways
    in which the classes can be generated. The first is using 'ejbc' ( assume we are
    using BEA Weblogic ), which generates the stub, skeleton and additional classes for
    the application and returns the file, say, "Deployable_myapp.ear" containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the Weblogic App. server
    itself, generate the required classes at the installation time.
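    Purely as an illustration of the first ('pre-generate') route, the generation step can be wrapped in the build; a rough Python sketch is below. weblogic.ejbc is the classic WebLogic stub/skeleton compiler, but the archive names, the weblogic.jar location and the exact options are placeholders and differ between WebLogic releases:
    import subprocess

    def pregenerate(source_jar: str, deployable_jar: str, weblogic_jar: str) -> None:
        # Runs the WebLogic ejbc compiler so the shipped archive already contains
        # the container-generated stub and skeleton classes.
        subprocess.run(
            ["java", "-classpath", weblogic_jar, "weblogic.ejbc", source_jar, deployable_jar],
            check=True,
        )

    # Example: pregenerate("myapp-ejb.jar", "Deployable_myapp-ejb.jar", "/opt/weblogic/lib/weblogic.jar")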
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the App. Server that we support? I.e., if we generate a deployable file having the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif


  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised to not find a best practice document for how to distribute Microsoft SQL Databases. With other database formats, it's common to distribute them as scripts. It seems that feature is rather limited with the built-in
    tools Microsoft provides. There appear to be limits to the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages by limiting what versions
    of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is less of an issue if you already have an automated backup/maintenance routine in your environment.
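    As a minimal sketch of the BACKUP/RESTORE route (server names, paths and the logical file names are placeholders; run RESTORE FILELISTONLY against the .bak to discover the real logical names):
    import subprocess

    def run_sql(server: str, tsql: str) -> None:
        # -E uses Windows authentication; swap for -U/-P if you use SQL logins.
        subprocess.run(["sqlcmd", "-S", server, "-E", "-Q", tsql], check=True)

    # 1. On the source instance: produce the backup that will be shipped out.
    run_sql("SourceServer", r"BACKUP DATABASE [MyDb] TO DISK = N'C:\Dist\MyDb.bak' WITH INIT")

    # 2. On the destination instance: restore it, relocating the data and log files.
    run_sql(
        "DestServer",
        r"RESTORE DATABASE [MyDb] FROM DISK = N'C:\Dist\MyDb.bak' "
        r"WITH MOVE 'MyDb' TO N'D:\Data\MyDb.mdf', "
        r"MOVE 'MyDb_log' TO N'D:\Logs\MyDb_log.ldf'",
    )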
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
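    A similar sketch of the detach/copy/attach route (instance names and file paths are placeholders; remember the source database is unavailable while detached):
    import subprocess

    def run_sql(server: str, tsql: str) -> None:
        subprocess.run(["sqlcmd", "-S", server, "-E", "-Q", tsql], check=True)

    # 1. Detach on the source instance (requires no open connections).
    run_sql("SourceServer", "EXEC sp_detach_db @dbname = N'MyDb'")

    # 2. Copy MyDb.mdf / MyDb_log.ldf to the destination's data folder by any file-copy means.

    # 3. Attach on the destination (and re-attach on the source if it should stay available).
    run_sql(
        "DestServer",
        r"CREATE DATABASE [MyDb] ON (FILENAME = N'D:\Data\MyDb.mdf'), "
        r"(FILENAME = N'D:\Data\MyDb_log.ldf') FOR ATTACH",
    )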
    And then there is the previously mentioned method of generating scripts for the schema, and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and is not as practical as the other options, but it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align to different business requirements.

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the 1st time but may work the 2nd time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering the old application and then re-registering the new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    Thanks in advance for your replies.
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the runtime layout of your application that make it clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of currently deployed GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef. Finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, not recreating an existing application with new GUIDs, and not removing registered components, you may skip the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the runtime space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
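    For that "no descriptor changes, just drop the newer bits in" shortcut, here is a minimal sketch of exploding an archive into the runtime space (the APPS path and module name are placeholders for your own install):
    import zipfile
    from pathlib import Path

    def explode(package: str, target_dir: str) -> None:
        # Unpack the ear/war/jar into the directory the server already runs the app from.
        dest = Path(target_dir)
        dest.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(package) as archive:
            archive.extractall(dest)

    # Example: explode("myapp.war", "/opt/ias6/ias/APPS/modules/myapp")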
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the lifespan of a session.

  • Best practice for RDGW placement in RDS 2012 R2 deployment

    Hi,
    I have been setting up a RDS 2012 R2 farm deployment and the time has come for setting up the RDGW servers. I have a farm with 4 SH servers, 2 WA servers, 2 CB servers and 1 LS.
    Farm works great for LAN and VPN users.
    Now i want to add two domain joined RDGW servers.
    The question is; I've read a lot on technet and different sites about how to set the thing up, but no one mentions any best practices for where to place them.
    Should i:
    - set up WAP in my DMZ with ADFS in LAN, then place the RDGW in the LAN and reverse proxy in
    - place RDGW in the DMZ, opening all those required ports into the LAN
    - place the RDGW in the LAN, then port forward port 443 into it from internet
    Any help is greatly appreciated.
    This posting is provided "AS IS" with no warranties or guarantees and confers no rights

    Hi,
    The deployment depends entirely on your company's requirements, as many things need to be taken care of, such as hardware, network, security and other related matters. Personally, to set up the RD Gateway server I would not recommend the 1st option. Based on my research, for the best result you can use option 2 (place the RDG server in the DMZ and then allow the required ports). By doing so, the outside network cannot connect directly to your internal servers, and it is harder for an attacker to break into the network. A perimeter network (DMZ) is a small network that is set up separately from an organization's private network and the Internet. In a network, the hosts most vulnerable to attack are those that provide services to users outside of the LAN, such as e-mail, web, RD Gateway, RD Web Access and DNS servers. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network, called a perimeter network, in order to protect the rest of the network if an intruder were to succeed. You can refer to the article below for more information.
    RD Gateway deployment in a perimeter network & Firewall rules
    http://blogs.msdn.com/b/rds/archive/2009/07/31/rd-gateway-deployment-in-a-perimeter-network-firewall-rules.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
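    Whichever placement is chosen, the gateway ultimately has to be reachable from the Internet on TCP 443 (plus UDP 3391 if the UDP transport is enabled, which this sketch does not test). A quick reachability check in Python, with a placeholder hostname:
    import socket

    def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("RDGW reachable on 443:", tcp_port_open("rdgw.example.com", 443))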

  • Best Practice for Deploying ADF application

    I am tasked with developing a best or preferred practice for deploying a large ADF application. Background: we are in the process of redeveloping a UI for a large system. We have broken the system down into subsystems, and each subsystem's UI will be an ADF application. This is a move from an MS .NET front end. The backend (batch processes etc.) is being developed in Java. So my question is: if I have several ADF projects for each subsystem, plus common components that they all will use, what is the best practice to compile, package and deploy? The deployment will be to a WebLogic server or servers (cluster).
    We have a team of at least 40-50 developers worldwide, so we are looking for an automated build and deploy and would like to follow Oracle best practice. So far I have read Deploying ADF Applications (http://download.oracle.com/docs/cd/E15523_01/web.1111/e15470/deploy.htm#BGBJHGFH) and have followed the links. I have also looked at the ADF evangelist blogs; there is lots of chatter about ojdeploy. My concern about ojdeploy is that dependent files are also being compiled at the same time. I expected that we would want shared dependent files compiled only once (is that a valid concern)?
    So when we build the source out of Subversion (ojdeploy? Ant?), what is the best practice to deploy to a WebLogic server (WLST, admin console)? Again, we want it to be automated.
    Thank you in advance for replies.
    RK
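    Regarding the WLST automation mentioned in the question, here is a minimal sketch of a WLST (Jython) deployment script that a build could invoke with "java weblogic.WLST deploy_app.py"; the admin URL, credentials, paths and cluster name are placeholders:
    # connect()/deploy()/disconnect() are standard WLST commands, only available inside WLST.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    deploy('myapp', '/builds/myapp.ear', targets='MyCluster', upload='true')
    # For subsequent builds of an already-targeted application, redeploy('myapp') can be used.
    disconnect()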

    Rule 1: Never use the "Automatically Expose UI Componentes in a New Managed Bean" option, create your bindings manually;
    Rule 2: Rule 1 is always right;
    Rule 3: In doubts, refer to rule 2.
    You may also want to check out :
    http://groups.google.com/group/adf-methodology
    And :
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf

  • Best practice for customizing EJB property after deployment

    Hi Gurus,
    What is the best practice for customizing an EJB property after deployment in NW 7.1? I have a stateless session bean and it needs to get some environment information before acting, but the information can only be known at runtime. What should I do to achieve this? I thought I could bind the property to a JNDI context, but I did not find out where to declare and change the context value. Please advise. Thanks.
    B.R.

    Hi.
    I have a similar problem, but I still cannot edit the properties in the ejb-jar.xml.
    I tried stopping the web service, but the properties still remain unmodifiable.
    Could you advise me how to change them?
    We have installed SAP Server 7.0.2

  • What is the best practice for AppleScript deployment on several machines?

    Hi,
    I am developing some AppleScripts for my colleagues at work and I don't want to visit each of them to deploy my AppleScript on their Macs.
    So, what is the best practice for AppleScript deployment on several machines?
    Is there an installer created by the Automator available?
    I would like to have something like an App to run which puts all my AppleScript relevant files into the right place onto a destination Mac.
    Thanks in advance.
    Regards,

    There's really no 'right place' to put AppleScripts. Folder action scripts need to go in ~/Library/Scripts/Folder Action Scripts (or /Library/Scripts/Folder Action Scripts), anything you want to appear in the script menu needs to go in ~/Library/Scripts (or /Library/Scripts), and script applications should probably go in the Applications folder, but otherwise scripts can be placed anywhere. Conventional places to put them are in ~/Library/Scripts or in a subfolder of ~/Library/Application Support if they are run by an application. The more important issue is to make sure you generalize the scripts: use the "path to" command to get local paths rather than hard-coding them, test that any applications or unix executables you call are present on the machine, and use script bundles rather than plain scripts if your scripts have private resources.
    You can write a quick installer script if you want to make sure scripts go where you want them. A skeleton version looks like this:
    set scriptsFolder to path to scripts folder from user domain
    set scriptsToExport to path to resource "xxx.scpt" in directory "yyy"
    tell application "Finder"
      duplicate scriptsToExport to scriptsFolder with replacing
    end tell
    say "Scripts are installed"
    Save this as a script application, then open the application package, create a folder called "yyy" in the Resources folder, and copy your script "xxx.scpt" into it. Other people can run the app to install the script.

  • Best practice for locations to deploy AP3602i's?

    I am doing an install for a new building for my company. We have 2 office floors and 19x 2602's. They will be switched through a WLC-5508. I am looking to find a best practices guide on how to deploy them (location wise). For instance, should they be spaced certain distances from each other, at least X feet away from walls/obstacles, at least X high off the floor, in a triangular pattern, etc etc. Anyone know where to find documentation such as this?
    Each one of our floors is rectangular in shape, around 20k square feet, with mechanical and elevator shafts in the center of the floor, so it is a large squarish oval. With nine APs per floor, I thought about placing them along the center line of the oval, but in sort of a zig-zag pattern, not in a straight line along the perimeter. They would all be ceiling mounted. I am looking for any info to let me know if that is a good plan, or how I should change it to conform to best practices.
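    For a rough sense of scale, the numbers above work out to roughly 2,200 sq ft per AP. A quick back-of-envelope calculation, assuming uniform circular coverage cells (only a proper site survey can confirm real coverage):
    import math

    floor_area_sqft = 20_000
    aps_per_floor = 9

    area_per_ap = floor_area_sqft / aps_per_floor        # ~2,222 sq ft per AP
    coverage_radius = math.sqrt(area_per_ap / math.pi)   # ~26.6 ft
    suggested_spacing = 2 * coverage_radius              # ~53 ft between APs, before overlap

    print(f"{area_per_ap:.0f} sq ft per AP, ~{coverage_radius:.0f} ft radius, "
          f"~{suggested_spacing:.0f} ft spacing")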

    Whether you do voice or just data, I recommend you do a site survey before and after. Make sure you leave some slack in the cables so that you can reposition APs after the post-deployment site survey. The lowest SNR for a good, solid connection is 20-25 dB, because you need some fade margin, so perform the site survey with that in mind.
    Lighting fixtures are not a big deal if they are recessed and mounted above the access points. Bigger concerns are fire doors, leaded walls, elevator columns, etc.
    Remember that a site survey after the deployment is very important for a good wireless network.
    Good luck.

  • What is best practice for deploying agent(10204) on RAC 9i

    Hello,
    What would be the best practice for deploying the agent (10204) on RAC 9i? Should the agent be deployed on each node, or should it be deployed on the cluster file system? What are the advantages/disadvantages of deploying on individual nodes vs. on the cluster file system? Please advise. Thank you in advance.

    Please use the Agent Push application to deploy the agent on all the nodes in one shot.
    Please refer to the OBE:
    http://www.oracle.com/technology/obe/obe10gemgc_10203/agentpush/agentpush.htm

  • Best practices for deploying forms in a 'cluster'?

    Anyone know of any public docs that discuss typical best practices for
    - forms deployment;
    - forms apps management and version control; and/or
    - deploying (and keeping) the .frm/frx in sync when using multiple forms servers in a HA or load balancing envrionment?

    Hi adil,                      
    Based on your description, you want to know the best practices for search service in a SharePoint farm.
    Different farms have different search topologies. For the best search performance, I recommend that you follow the guidance for small, medium, and large farms; the articles below cover the guidance for the different farm sizes.
    The search service can run alongside other services on the same server. If conditions permit and you want better performance for the search service and the other services, including BI performance, you can deploy the search service on a dedicated server.
    If conditions permit, I recommend combining a query component with a front-end web server to avoid putting crawl components and query components on the same server.
    In your SharePoint farm, you can deploy the query components in a WFE server and the crawl components in an application server.
    The articles below describe the best practices for enterprise search.
    https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
    https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
    Best regards      
    Sara Fan
    TechNet Community Support

  • Best Practices for Packaging and Deploying Server-Specific Configurations

    We have some server-specific properties that vary for each server. We'd
    like to have these properties collected together in their own properties
    file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as an
    ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored
    external to the ear on the server itself, so that the production folks can
    configure each server's properties at the server. But it appears that an
    application can't access a file external to the ear, or at least we can't
    figure out the magic to do it. If there is a way to do this, please let me
    know how.
    Or do we have to build a unique ear for each server? This is possible, of
    course, but we'd prefer to build one deployment package (ear), and then
    ship that off to each server that is already configured for its specific
    environment. We have some audit requirements where we need to ensure that
    an ear that has been tested by QA is the very same ear that has been
    deployed, but if we have to build one for each server, this is not
    possible.
    Any help or pointers would be most appreciated. If this is an old issue,
    my apologies, would you please point me to any previous material to read?
    I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA

    The one drawback to this is that you have to go all the way back to Ant and the build system to make changes. You really want these env variables to be late binding.
    cheers
    mbg
    "Sai S Prasad" <[email protected]> wrote in message
    news:[email protected]...
    Paul,
    I have a similar situation in our project and I don't create ear files specific to the environment. I do the following:
    1) Create a .properties file for every environment with the same attribute names but different values in it. For example, I have phoenix.properties.NT, phoenix.properties.DEV, phoenix.properties.QA, phoenix.properties.PROD.
    2) Use Ant to compile, package and deploy the ear file.
    I have a .bat file on NT and a .sh for Solaris that in turn calls ant.bat or ant.sh respectively. To the wrapper batch file or shell script you can pass the name of the environment. The wrapper batch file will copy the appropriate properties file to "phoenix.properties". In the Ant build.xml I always refer to phoenix.properties, which is always available, with contents depending on the environment.
    It works great and I can't think of a more flexible way. Hope that helps.
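    A minimal Python sketch of that "copy the per-environment file into place, then build" step (file names follow the phoenix.properties.* convention above; the Ant target is a placeholder):
    import shutil
    import subprocess
    import sys

    def stage_properties(env: str) -> None:
        # e.g. phoenix.properties.DEV -> phoenix.properties, the name build.xml always references
        shutil.copyfile(f"phoenix.properties.{env}", "phoenix.properties")

    if __name__ == "__main__":
        environment = sys.argv[1] if len(sys.argv) > 1 else "DEV"
        stage_properties(environment)
        subprocess.run(["ant", "package"], check=True)  # placeholder Ant target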
