Load balancing & scalability when an OSB service is deployed on a single server
Hi
We have a file-write service which we are deploying to a clustered domain with a single admin and managed server.
We are doing this because there can be file-locking issues if the service is deployed to a cluster with multiple managed servers and more than one instance has to write to the same file (writing to the same file is the requirement).
What are our options for load balancing, scalability and failover?
As I understand it, scalability in this case is limited to increasing the capacity of the hardware, and there is no load balancing.
For failover, we can go for a backup domain in PASSIVE mode.
Can someone please suggest any other options?
Thanks
kedar
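For illustration (not OSB itself): the reason a single managed server sidesteps the locking issue is that one process serializes all writes to the file. A minimal Python sketch of that single-writer pattern, with invented file and record names:

```python
import os
import queue
import tempfile
import threading

class SingleFileWriter:
    """Serialize all writes to one file through a single writer thread,
    mimicking what a single managed server gives you for free."""
    def __init__(self, path):
        self.path = path
        self.q = queue.Queue()
        self.thread = threading.Thread(target=self._drain, daemon=True)
        self.thread.start()

    def _drain(self):
        with open(self.path, "a") as f:
            while True:
                line = self.q.get()
                if line is None:          # shutdown sentinel
                    return
                f.write(line + "\n")      # only this thread touches the file
                f.flush()

    def write(self, line):
        self.q.put(line)                  # callers never touch the file

    def close(self):
        self.q.put(None)
        self.thread.join()

path = os.path.join(tempfile.mkdtemp(), "out.txt")
w = SingleFileWriter(path)
for i in range(3):
    w.write(f"record-{i}")
w.close()
with open(path) as f:
    print(f.read().splitlines())          # ['record-0', 'record-1', 'record-2']
```

Because every append goes through one queue and one thread, records never interleave; with two managed servers each opening the file, that guarantee disappears, which is the locking problem described above.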
Hi,
One SSA is OK, but you should think about access rights. If the access is clear-cut between all the web apps you should be OK with one SSA. Multiple result sources limiting on content source also work, but could easily be bypassed.
Multiple SSAs will eat up RAM/CPU like a mother :)
As for popular items etc., it could be due to how those sources are set up, but I haven't investigated or tested this much.
Thanks,
Mikael
Search Enthusiast - SharePoint MVP/MCT/MCPD - If you find an answer useful, please up-vote it.
http://techmikael.blogspot.com/
Author of Working with FAST Search Server 2010 for SharePoint
Similar Messages
-
Load-balancing between Analytical Provider service nodes in a cluster
Hi All,
- First, a little background on my architecture. My EPM environment consists of 3 Solaris servers:
Server1: Foundation Services + APS + EAS + WLS Admin server
Server2: Foundation Services + APS + EAS
Server3: Essbase server + Essbase Studio server
All of the above services are deployed to a single domain. We have a load balancer sitting in front of server1 and server2 that redirects requests based on availability of the services.
- Consider APS:
We have an APS cluster "AnalyticProviderServices" with members AnalyticProviderServices1 deployed on Server1 and AnalyticProviderServices2 deployed on Server2.
So I connect to APS and log in as user1. Say the load balancer decides to forward my request to server1; all my requests are then managed by APS on Server1. Now if APS on server1 is brought down, any requests to APS on server1 are redirected by WebLogic to APS on server2.
Now ideally APS on server2 should say "hey, I see APS on server1 is down, so I will take up your session where it left off". So I expect the 2nd APS node in the cluster to take up my session. But this does not happen: I need to log in again when I hit refresh in Excel, as I get the error "Invalid session. Please login again". When I open EAS I see I have been logged in with a new session ID. So it seems that the cluster nodes simply act as load balancers and are not smart enough to take up a failed node's sessions where it left off.
Is my understanding correct, or do I have to configure something to allow this to happen?
Thanks,
Kent
Thanks for your reply, John!
I was hoping APS could do something like that. I am not sure if restoring sessions of a dead APS cluster node on another APS would be helpful, but I can think of one situation: a drill-through report is running for a long time on the Essbase server and APS goes down. It would be good to have the other APS take up the session and return the drill-through output to the user. -
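The behaviour described in this thread can be sketched abstractly. In this hypothetical Python model (all names invented), session state lives only in the memory of the node that created it, so failing over to the other node necessarily produces "Invalid session":

```python
class Node:
    """A server node whose sessions live only in its own memory."""
    def __init__(self, name):
        self.name = name
        self.up = True
        self.sessions = set()

    def login(self, user):
        sid = f"{self.name}:{user}"
        self.sessions.add(sid)
        return sid

    def handle(self, sid):
        # A node only recognizes sessions it created itself.
        if sid not in self.sessions:
            raise PermissionError("Invalid session. Please login again")
        return f"{self.name} served {sid}"

def route(nodes, sid):
    """Forward the request to the first healthy node (the load balancer)."""
    for n in nodes:
        if n.up:
            return n.handle(sid)
    raise ConnectionError("no nodes available")

a, b = Node("APS1"), Node("APS2")
sid = a.login("user1")
print(route([a, b], sid))       # APS1 serves the session normally
a.up = False                    # node 1 goes down
try:
    route([a, b], sid)          # APS2 never saw this session
except PermissionError as e:
    print(e)                    # Invalid session. Please login again
```

True session failover would require the nodes to replicate session state to each other (or to shared storage), which is exactly what the cluster in the thread does not appear to do.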
Question on how does load balancing work on Firewall Services Module (FWSM)
Hi everyone,
I have a question about the algorithm of load balancing on Firewall Services Module (FWSM).
I understand that the FWSM supports up to three equal cost routes on the same interface for load balancing.
Please see a lower simple figure.
outside inside
--- L3 SW --+
|
MHSRP +--- FWSM ----
|
--- L3 SW --+
I am going to configure the following default routes on the FWSM, pointing to each MHSRP VIP (192.168.13.29 and 192.168.13.30), for load balancing.
route outside_1 0.0.0.0 0.0.0.0 192.168.13.29 1
route outside_1 0.0.0.0 0.0.0.0 192.168.13.30 1
However, I don't know how load balancing works on the FWSM.
On the FWSM, does load balancing work based on:
Per-Destination ?
Per-Source ?
Per-Packet ?
or
Other criteria ?
Your information would be greatly appreciated.
Best Regards,
Configuring "tunnel default gateway" on the concentrator allowed traffic to flow as desired through the FWSM.
The FWSM is not capable of performing policy-based routing, and the additional static routes for the VPN load balancing caused half of the packets to be lost. As a result, it appears that the VPN concentrators will not be able to load balance. -
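The thread never settles the question of the FWSM's exact algorithm, and the documentation should be the authority there. But equal-cost load balancing on Cisco platforms is commonly per-flow rather than per-packet: a hash over the source/destination pair selects one of the equal-cost next hops, so all packets of one connection follow the same path. A rough, illustrative Python sketch (next-hop addresses taken from the routes above; the hashing details are an assumption, not the FWSM's documented behaviour):

```python
import hashlib

NEXT_HOPS = ["192.168.13.29", "192.168.13.30"]  # the two MHSRP VIPs

def pick_next_hop(src_ip, dst_ip):
    """Per-flow ECMP: hash the flow identifiers (not each packet),
    so a given src/dst pair is always pinned to the same next hop."""
    h = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return NEXT_HOPS[h[0] % len(NEXT_HOPS)]

# Every packet of the same flow gets the same gateway...
assert pick_next_hop("10.0.0.1", "8.8.8.8") == pick_next_hop("10.0.0.1", "8.8.8.8")

# ...while many different flows spread across the available next hops.
paths = {pick_next_hop(f"10.0.0.{i}", "8.8.8.8") for i in range(50)}
print(sorted(paths))
```

Per-packet balancing, by contrast, would pick a gateway independently for each packet, which causes reordering and plays badly with stateful inspection; that is one reason per-flow hashing is the usual choice.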
Load Balance Forms and Reports Services
Is it possible to load balance an application (written in Developer 9i) between
two computers running Oracle Application Server 10g (9.0.4) Forms and Reports Services?
If it is possible, can anybody give an example or some kind of documentation, because
the product documentation is not clear.
Thanks in advance...
Yes, it is possible, but it all depends on your definition of load balancing. Do you mean hardware machine balancing or service availability? This is a large territory and you have to be specific; you even have RAC.
Oracle has [Oracle Application Server High Availability |http://download.oracle.com/docs/cd/B14099_19/core.1012/b14003/toc.htm] and metalink Note:740202.1 explains in detail how to configure it.
Tony -
Load balance traffic to a service based on a added field in the HTTP Header
I am trying to use HTTP header load balancing, but the field we want to use in order to load balance is "user-defined", for example HTTP_TOTO = toto1.
Do you have any idea how I could perform this?
Thanks in advance
Load balancing using pre-defined headers is supported. I'm not sure if load balancing using user-defined fields is possible. You could refer to the following document.
http://www.cisco.com/univercd/cc/td/doc/product/webscale/css/css_710/bsccfggd/httphead.htm
We would appreciate it if someone could share their experience if they know more about this. -
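As the reply notes, only pre-defined headers are known to be supported; whether the CSS can match a user-defined field should be checked against the linked document. Conceptually, though, the rule being asked for looks like this (Python sketch; the header name comes from the question, the server names are invented):

```python
import hashlib

SERVERS = ["server-a", "server-b"]   # invented backend names

def stable_hash(s):
    # Python's built-in hash() is salted per process; use a stable digest
    # so the same header value always maps to the same backend.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:4], "big")

def route_on_header(headers, field="HTTP_TOTO"):
    """Pin requests carrying the same value of a custom header to one
    backend; fall back to the first server when the header is absent."""
    value = headers.get(field)
    if value is None:
        return SERVERS[0]
    return SERVERS[stable_hash(value) % len(SERVERS)]

print(route_on_header({"HTTP_TOTO": "toto1"}))
print(route_on_header({}))           # no header -> server-a
```

The essential property is determinism: all requests with HTTP_TOTO = toto1 land on the same backend, which is what header-based balancing rules provide regardless of whether the header is predefined or custom.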
I have created a WCF cloud service which is being deployed to the cloud through a Bitbucket repository.
I want to create a .svclog file to trace logs on my Azure local storage.
For that, I have referred to many posts and finally configured my solution as below:
ServiceConfiguration.Cloud.cscfg:
<Role name="MyServiceWebRole">
  <Instances count="1" />
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=StorageName;AccountKey=MyStorageKey" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="Certificate" thumbprint="certificatethumbprint" thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
ServiceConfiguration.Local.cscfg:
<Role name="MyServiceWebRole">
  <Instances count="1" />
  <ConfigurationSettings>
    <!-- Also tried with value="UseDevelopmentStorage=true" -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=StorageName;AccountKey=MyStorageKey" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="Certificate" thumbprint="certificatethumbprint" thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
ServiceDefinition.csdef:
<WebRole name="MyServiceWebRole" vmsize="Small">
  <Sites>
    <Site name="Web">
      <Bindings>
        <Binding name="Endpoint1" endpointName="Endpoint1" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  </Endpoints>
  <Imports>
    <Import moduleName="Diagnostics" />
  </Imports>
  <LocalResources>
    <LocalStorage name="MyServiceWebRole.svclog" sizeInMB="1000" cleanOnRoleRecycle="false" />
  </LocalResources>
  <Certificates>
    <Certificate name="Certificate" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>
web.config (MyServiceWebRole project):
<system.diagnostics>
  <trace autoflush="false">
    <listeners>
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </listeners>
  </trace>
</system.diagnostics>
............
<system.serviceModel>
  <diagnostics>
    <messageLogging maxMessagesToLog="3000" logEntireMessage="true" logMessagesAtServiceLevel="true"
                    logMalformedMessages="true" logMessagesAtTransportLevel="true" />
  </diagnostics>
............
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Microsoft.WindowsAzure.Diagnostics" publicKeyToken="31bf3856ad364e35" culture="neutral" />
      <!--<bindingRedirect oldVersion="0.0.0.0-1.8.0.0" newVersion="2.2.0.0" />-->
    </dependentAssembly>
  </assemblyBinding>
</runtime>
WebRole.cs (MyServiceWebRole project):
public override bool OnStart()
{
    //Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
    Trace.Listeners.Add(new AzureLocalStorageTraceListener());
    Trace.AutoFlush = false;
    Trace.TraceInformation("Information");
    Trace.TraceError("Error");
    Trace.TraceWarning("Warning");
    TimeSpan tsOneMinute = TimeSpan.FromMinutes(1);

    // To enable the AzureLocalStorageTraceListener, uncomment the relevant section in the web.config
    DiagnosticMonitorConfiguration diagnosticConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
    // Transfer logs to storage every minute
    diagnosticConfig.Logs.ScheduledTransferPeriod = tsOneMinute;
    // Transfer verbose, critical, etc. logs
    diagnosticConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    // Start up the diagnostic manager with the given configuration
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticConfig);

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    return base.OnStart();
}
AzureLocalStorageTraceListener.cs (MyServiceWebRole project):
public class AzureLocalStorageTraceListener : XmlWriterTraceListener
{
    public AzureLocalStorageTraceListener()
        : base(Path.Combine(GetLogDirectory().Path, "MyServiceWebRole.svclog"))
    {
    }

    public static DirectoryConfiguration GetLogDirectory()
    {
        try
        {
            DirectoryConfiguration directory = new DirectoryConfiguration();
            // SHOULD I HAVE THIS CONTAINER ALREADY EXIST IN MY LOCAL STORAGE?
            directory.Container = "wad-tracefiles";
            directory.DirectoryQuotaInMB = 10;
            directory.Path = RoleEnvironment.GetLocalResource("MyServiceWebRole.svclog").RootPath;
            var val = RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
            return directory;
        }
        catch (ConfigurationErrorsException)
        {
            throw;   // rethrow, preserving the original stack trace
        }
    }
}
I also tried commenting out the element in the ServiceDefinition.csdef file, but then I get a build-time error (the XML specification is not valid).
In my case, I am pushing all source code to a Bitbucket repository and from there it is deployed to the Azure "WebSite". Here are more details:
I need help to know:
Why is my service not creating the .svclog file locally?
Why is it also not doing so once it has been deployed to Azure?
In which location (container) can I find the .svclog file in local storage?
Please suggest the correct way, or a modification, so that I can overcome this issue. Please reply fast.
Thanks.
Hello _Adian,
Thanks for response.
I uploaded all my code on bitbucket repository and configured a website on portal using "Integrate source control" (please refer: http://azure.microsoft.com/en-in/documentation/articles/web-sites-publish-source-control/).
(NOTE: This is the way my client is following.)
Here is the structure of my solution:
1. a wcf service application (.svc)
2. few class library projects
3. Azure cloud service (with Project 1 as web role).
Now whenever I push my updated code to Bitbucket, it is automatically deployed to Azure.
So, please suggest how I can create a separate .svclog file in local storage (using the above environment).
I hope this info will be helpful for answering. -
How can I observe load balance event when I have setup FCF?
Dear expert,
I have set up a two-node RAC on Linux and implemented FCF. HA events work fine: after one node goes down, connections to the failed node in the OC4J connection pool are cleaned up immediately, and I can observe an HA event like
FINER: eventType= 256, svcName= orclx, instName= orclx1, db Name= orclx, hostName= linuxrac1, status= down, cardinality= 0
But I cannot observe an LB event; it should be received too, shouldn't it?
Enabling Event Notification for Connection Failures in Oracle Real Application Clusters
Event notification is enabled if the SQL_ORCLATTR_FAILOVER_CALLBACK and SQL_ORCLATTR_FAILOVER_HANDLE attributes of the SQLSetConnectAttr function are set when a connection failure occurs in an Oracle RAC Database environment. Both attributes are set using the SQLSetConnectAttr function. The symbols for the new attributes are defined in the sqora.h file. The SQL_ORCLATTR_FAILOVER_CALLBACK attribute is used to specify the address of a routine to call when a failure event takes place.
The SQL_ORCLATTR_FAILOVER_HANDLE attribute is used to specify a context handle which will be passed as one of the parameters in the callback routine. This attribute is necessary in order for the ODBC application to determine which connection the failure event is taking place on.
The function prototype for the callback routine is as follows:
void failover_callback(void *handle, SQLINTEGER fo_code)
The handle parameter is the value that was set by the SQL_ORCLATTR_FAILOVER_HANDLE attribute. Null is returned if the attribute has not been set.
The fo_code parameter identifies the failure event that is taking place. The failure events map directly to the events defined in the OCI programming interface. The list of possible events is as follows:
ODBC_FO_BEGIN
ODBC_FO_ERROR
ODBC_FO_ABORT
ODBC_FO_REAUTH
ODBC_FO_END
see Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide
chapter 6 for an example -
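The callback flow described above can be mimicked in a few lines of Python to make the sequence concrete. This is not the ODBC API itself, just an illustration: the event names mirror the list above, and the handle dictionary plays the role of the context set via SQL_ORCLATTR_FAILOVER_HANDLE.

```python
# Event codes modelled on the OCI/ODBC failover events listed above.
ODBC_FO_BEGIN, ODBC_FO_ERROR, ODBC_FO_ABORT, ODBC_FO_REAUTH, ODBC_FO_END = range(5)
NAMES = {0: "BEGIN", 1: "ERROR", 2: "ABORT", 3: "REAUTH", 4: "END"}

log = []

def failover_callback(handle, fo_code):
    """Receives the per-connection context plus the event code, mirroring
    the C prototype: void failover_callback(void *handle, SQLINTEGER fo_code)."""
    log.append((handle["conn"], NAMES[fo_code]))

# Simulate a successful failover on connection "conn-1" (an invented name):
ctx = {"conn": "conn-1"}
for event in (ODBC_FO_BEGIN, ODBC_FO_END):
    failover_callback(ctx, event)
print(log)   # [('conn-1', 'BEGIN'), ('conn-1', 'END')]
```

The handle is what lets one callback serve many connections: each connection registers its own context, and the callback inspects it to know which connection is failing over, exactly as the text describes.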
Load balancing SMA web service and SMA end point URL
Hi,
We have set up the recommended 3 servers with Azure Pack, SMA Web Service and Runbook Worker. We now want to configure the Azure Pack portal with the SMA endpoint URL for the web service. Before we do that, we are assuming we should load balance the web services to answer on one URL (i.e., smaws.domainname.com).
1. Is there any guidance, or things to consider, when load balancing the 3 web services to answer on one URL? We will probably use F5, since that is what we use.
2. The endpoint URL that we configure for Azure Pack automation should be this load-balanced URL, correct?
3. Should we have Azure Pack installed on just one of the servers, or all 3? We did all 3, but it seems like servers 2 and 3 just redirect to 1 anyway, so I am assuming the URL for Azure Pack is stored in a DB somewhere.
4. Are there any other components of SMA/Azure Pack that should also be load balanced?
Thanks
Thanks Lance
So in this case you need to register the SMA Runbook Workers (do this on machine 1):
$webService = "https://localhost"
$workers = (Get-SmaRunbookWorkerDeployment -WebServiceEndpoint $webService).ComputerName
if ($workers -isnot [System.Array]) { $workers = @($workers) }
$workers += "MachineName2"
$workers += "MachineName3"
New-SmaRunbookWorkerDeployment -WebServiceEndpoint $webService -ComputerName $workers
-
Load balancing when creating Broadcast setting
Hi guys,
How can I use the load-balancing capability when creating a broadcast setting for a workbook?
I have 2 precalculation servers defined in RSPRECADMIN. When the first precalculation server reaches maximum capacity, the next free available precalculation server should be used. But in my case the first 10 workbooks are precalculated with one server, and then the other free and available server is not used anymore.
How can I have more than 10 workbooks in the queue?
Thanks,
Murat
Hello,
Please check if process-based load distribution helps (SAP BW Precalculation Service Multi Instance).
Apply these 2 Notes:
1275837 PrecServer: process based load distribution (ABAP part)
1275828 PrecServer: process based load distribution (Frontend part)
Thanks,
Michael
Edited by: Michael Devine on Aug 23, 2010 10:21 AM -
OBPM Enterprise Deployment on WLS - No Cluster, But Load Balanced
All,
Does anyone know of any gotchas when deploying BPM to WLS on 2 separate nodes, sharing the same directory, but not clustered? The system is load balanced by an F5. Basically we are talking about a hot server/cold server deployment.
When we deploy projects, they default to the hot server even if the cold server is specified for deployment.
Anyone done this before?
TIA,
IGS
Hi,
Sorry, but I could not completely understand your architecture.
Are you talking about the Workspace (not clustered but load balanced)? That's supported.
Or are you trying to load balance the engine (a single engine with 2 or more nodes)?
If so... I wouldn't recommend that you do that.
Let me explain why.
The engine uses the queue to balance the work among the different nodes (that's why you have to configure a Distributed Queue and disable server affinity in the connection factory).
Even more, the engine has an internal synchronization mechanism among nodes to avoid inter-node locking. If your engine nodes are not in a cluster, that mechanism will be disabled and the overall engine performance will be significantly degraded.
I'm not sure if I have answered your question. If not, please add more details of your configuration.
Hope this helps,
Ariel -
A question about the SharePoint services load balancer
Let's consider a farm with one WFE and two app servers, A and B. Both app servers are running the Managed Metadata Service (MMS).
User requests a page from the WFE, which talks to the database server. The operation needs information from the MMS, so the WFE requests information from the round robin load balancer for SharePoint web services. Let's say server A is down.
Here's my question - what happens next?
a) The round robin load balancer tells the WFE the MMS is on servers A & B. The WFE tries server A, fails, and returns a failure.
b) The round robin returns servers A & B. The WFE tries server A, which fails. The WFE then tries server B.
c) The round robin returns either A or B, depending on which is next in rotation. The WFE tries the server returned. If the server returned is A, the WFE returns a failure.
d) The round robin returns either A or B, depending on which is next in rotation. The WFE tries the server returned. If the server returned is A, the WFE queries the round robin service again.
e) The round robin knows server A is down, returns only server B to the WFE.
Philo Janus, MCP. Bridging business & technology: http://www.saintchad.org/ Telecommuter? http://www.homeofficesurvival.com/ Author: Pro InfoPath 2007 & Pro InfoPath 2010, Pro PerformancePoint 2007, Pro SQL Server Analysis Services 2008, Building Integrated Business Intelligence Solutions
When a Service Application is down, the application load balancer removes that endpoint from the load balancer. When it becomes available again, it adds it back. This way the WFE just contacts the MMS endpoint that is available, rather than trying and timing out against an unavailable endpoint.
Trevor Seward
Follow or contact me at...
  
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
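The answer above (the application load balancer drops a dead endpoint from the rotation and re-adds it when it recovers, so the WFE never times out against a down server) can be sketched as follows (Python, with invented server names):

```python
class RoundRobinBalancer:
    """Round-robin over only the endpoints currently marked healthy, so a
    caller is never handed a server the balancer knows is down."""
    def __init__(self, endpoints):
        self.health = {e: True for e in endpoints}
        self._i = 0

    def mark(self, endpoint, up):
        # A down endpoint is simply removed from rotation; marking it up
        # again adds it back, as described in the answer above.
        self.health[endpoint] = up

    def next_endpoint(self):
        alive = [e for e, up in self.health.items() if up]
        if not alive:
            raise ConnectionError("no service endpoints available")
        self._i += 1
        return alive[self._i % len(alive)]

lb = RoundRobinBalancer(["serverA", "serverB"])
lb.mark("serverA", False)                    # server A goes down
picks = {lb.next_endpoint() for _ in range(4)}
print(picks)                                 # only serverB is ever returned
lb.mark("serverA", True)                     # A recovers, back in rotation
```

In the lettered options above, this corresponds closest to (e): the balancer itself tracks availability, so the WFE is only ever given endpoints believed to be up.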
Hi,
I have a question about load-balancing facilities for Time service. If I
schedule a TimeScheduleDef object to run at certain intervals, does the
run-time environment take care of load balancing it across multiple machines
on a cluster?
Thanks,
Deepak
WebLogic Time is not a clustered service -- it is configured per server, and no load balancing is performed.
Thanks,
Michael
Michael Girdley
Product Manager, WebLogic Server & Express
BEA Systems Inc
-
SMA Web service load balancing issues.
Hi, We have configured 3 servers with the SMA web service
https://machinename1.domain.com:9090
https://machinename2.domain.com:9090
https://machinename3.domain.com:9090
We have load balanced these 3 web services using F5 load balancer and have the LB site set as https://smaws.domain.com
We used this URL with Windows Azure Pack automation endpoint and that seemed to work.
We are not sure if this is working properly or not.
How can we test? Should the URL to the web service contain a guid?
https://yoursmaserver:9090/00000000-0000-0000-0000-000000000000
We have tried the following but all of them produce errors.
PS C:\Users\lance_lyons> Get-SmaRunbookWorkerDeployment -WebServiceEndpoint https://smaws.domain.com
Get-SmaRunbookWorkerDeployment : Exception has been thrown by the target of an invocation.
At line:1 char:1
+ Get-SmaRunbookWorkerDeployment -WebServiceEndpoint https://smaws.domain.co ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-SmaRunbookWorkerDeployment], TargetInvocationException
+ FullyQualifiedErrorId : System.Reflection.TargetInvocationException,Microsoft.SystemCenter.ServiceManagementAutomation.GetSmaRunbookWorkerDeployment
PS C:\Users\lance_lyons> Get-SmaRunbookWorkerDeployment -WebServiceEndpoint https://machinename:9090
Get-SmaRunbookWorkerDeployment : Invalid URI: Invalid port specified.
At line:1 char:1
+ Get-SmaRunbookWorkerDeployment -WebServiceEndpoint https://machinename:9090
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-SmaRunbookWorkerDeployment], UriFormatException
+ FullyQualifiedErrorId : System.UriFormatException,Microsoft.SystemCenter.ServiceManagementAutomation.GetSmaRunbookWorkerDeployment
PS C:\Users\lance_lyons> Get-SmaRunbookWorkerDeployment -WebServiceEndpoint https://machinename.domain.local:9090
Get-SmaRunbookWorkerDeployment : Invalid URI: Invalid port specified.
At line:1 char:1
+ Get-SmaRunbookWorkerDeployment -WebServiceEndpoint https://machinename.domain.local: ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-SmaRunbookWorkerDeployment], UriFormatException
+ FullyQualifiedErrorId : System.UriFormatException,Microsoft.SystemCenter.ServiceManagementAutomation.GetSmaRunbookWorkerDeployment
We do have the web service configured in IIS with host headers smaws.domain.com, machinename and machinename.domain.local.
Thanks
Thanks Lance
So in this case you need to register the SMA Runbook Workers (do this on machine 1):
$webService = "https://localhost"
$workers = (Get-SmaRunbookWorkerDeployment -WebServiceEndpoint $webService).ComputerName
if ($workers -isnot [System.Array]) { $workers = @($workers) }
$workers += "MachineName2"
$workers += "MachineName3"
New-SmaRunbookWorkerDeployment -WebServiceEndpoint $webService -ComputerName $workers
-
Hello, I am researching the best way to load balance our FIM Service infrastructure, and I wanted to get some advice from others who have been down this road. Here is our current setup and what we are trying to achieve:
We currently have two FIM Service machines in place that share a FIM Service DB and use the same AD FIM Service account
Machine one has a FIM Service address of fimservice.acme.com (FQN = myfirstmachine.acme.com)
The second machine has a FIM Service address of fimserviceOther.acme.com (FQN = mysecondmachine.acme.com)
Each FIM service has its own partition.
Our goal is to load balance the two FIM services under one address, fimservice.acme.com. The NLB would route traffic to the original fimservice.acme.com instance as well as the fimserviceOther.acme.com instance.
Under this scenario, are there any changes that we need to make to our environment? Or will simply setting up the VIP with an address of fimservice.acme.com suffice, with the two nodes as myfirstmachine.acme.com and mysecondmachine.acme.com?
Are there any changes that we need to make to the FIM partitions, or is keeping them separate as they currently are OK?
Cheers!
Hi, any thoughts on this would be appreciated!
Cheers -
Lync 2013 Enterprise load balancing on the front end and edge pool
Hi,
I am setting up a Lync 2013 Enterprise deployment consisting of a Front End pool (x2 FE servers) and an Edge pool (x2 Edge servers). I'm seeing some conflicting advice regarding load balancing using hardware or DNS for the front end and the edge.
On the front end I have 2 internal DNS records 'lyncfepool1.contoso.local', each of which maps to one of the IPs of the FE servers. I've used my details to populate the Detailed Design Planner Excel spreadsheet and am told that I require an HLB to load balance my front end pool. I'm aware of the need to load balance HTTPS traffic internally (which will be done by TMG); however, can other traffic to the front end (SIP, etc.) be balanced by DNS only, without requiring an HLB?
Can someone clarify the front end requirement?
Also, looking now at the edge pool: this site again has two edge servers in a pool. We are using a total of six private IP addresses, two per edge service (2 x av.contoso.com, 2 x sip.contoso.com and 2 x webcon.contoso.com). These will be NAT'ed by the external firewall and directed to the respective external (DMZ) IP addresses on the Edge servers on port 443. I know this isn't true round robin due to the intelligence of the Lync client when connecting (the Lync client will connect to one of the public IPs and, if it can't connect, it will know to connect to the other service IP); however, I want to clarify this setup, particularly the need to direct the external public IP traffic at the DMZ Edge IP specified in Topology Builder.
I've attached a basic diagram of the external/DMZ/Edge side which hopefully helps with this question.
Persevere, Persevere, Per..
That is because you will always need an HLB for a front-end server, since it hosts the Lync web services, which use HTTP/HTTPS traffic.
The description on the calculation tool also describes this correctly:
Supports Standard and Enterprise pools (up to 12 nodes), with pure device-based load balancing or a combination of DNS load balancing and device-based load balancing (for Lync web services)
You can use either hardware or DNS load balancing for SIP traffic only, but you will always need an HLB for the web services. Both are applicable for the Front End, so you have either:
full HLB for both SIP and HTTP(S) traffic, or
DNS LB for SIP traffic and HLB for HTTP(S) traffic.
Hope this is more clear :-)
Lync Server MVP | MCITP Lync Server 2010 | If you think my post is the answer to your question, please mark it as answer so future visitors can easily find it.