VIP still reachable even if primary server farm is down

Hi,
I want to make sure that the VIP is no longer PING-able when the primary server farm is down (all servers down).
For that, I have the following configuration:
serverfarm host NCL_FARM_TEST
  probe NCL_PROBE_HTTP
  rserver CHPAUN028 443
    inservice
policy-map type loadbalance http first-match L7_POLICY_NCL_TEST_HTTP
  description *** Load balancing rule for test in http mode ***
  class L7_CLASS_TEST
    serverfarm NCL_FARM_TEST backup NCL_REDIRECT_FARM_SORRY
    compress default-method gzip
    insert-http Source-IP header-value "%is"
    insert-http Remote-Port header-value "%pd"
    ssl-proxy client NCL_SSL_CLIENT
policy-map multi-match VIP_PROD_AND_TEST
  class L4_CLASS_NCL_TEST_HTTP
    loadbalance vip inservice
    loadbalance policy L7_POLICY_NCL_TEST_HTTP
    loadbalance vip icmp-reply active primary-inservice
    nat dynamic 2 vlan 115
    appl-parameter http advanced-options NCL_HTTP_PARAM
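As I understand the documentation, the keywords on this option change when the ACE answers pings sent to the VIP. Here is a short sketch reusing the same class name, with my reading of each variant in the comments:

policy-map multi-match VIP_PROD_AND_TEST
  class L4_CLASS_NCL_TEST_HTTP
    ! loadbalance vip icmp-reply
    !   -> answer pings to the VIP regardless of the VIP state
    ! loadbalance vip icmp-reply active
    !   -> answer pings only while the VIP is in service
    loadbalance vip icmp-reply active primary-inservice
    !   -> answer pings only while the VIP is in service AND at least one
    !      rserver of the primary server farm is up (the behaviour I want)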
While testing this feature, I realized that the VIP is still reachable (PING) even when the server in the farm is in PROBE-FAILED status (for this test, there is only one rserver in the farm).
Here is the server farm status while the PING is still answered:
CH01AC03/P-115-A# sh serverfarm NCL_FARM_TEST detail
serverfarm     : NCL_FARM_TEST, type: HOST
total rservers : 1
active rservers: 0
description    : *** Test Server Farm ***
state          : INACTIVE
predictor      : ROUNDROBIN
failaction     : -
back-inservice    : 0
partial-threshold : 0
num times failover       : 27
num times back inservice : 28
total conn-dropcount : 0
Probe(s) :
    NCL_PROBE_HTTP,  type = HTTP
                                               ----------connections-----------
       real                  weight state        current    total      failures
   ---+---------------------+------+------------+----------+----------+---------
   rserver: CHPAUN028
       10.240.3.128:443      8      PROBE-FAILED 0          609        8
         description : -
         max-conns            : - , out-of-rotation count : -
         min-conns            : -
         conn-rate-limit      : - , out-of-rotation count : -
         bandwidth-rate-limit : - , out-of-rotation count : -
         retcode out-of-rotation count : -
The documentation for the command "loadbalance vip icmp-reply active primary-inservice" states that the ACE should discard ping packets when all servers in the primary server farm are down.
I probably missed something, but what?
Here is the service-policy status:
Policy-map : VIP_PROD_AND_TEST
  Status : ACTIVE
  Interface: vlan 1 115
    class: L4_CLASS_NCL_TEST_HTTP
      nat:
        nat dynamic 2 vlan 115
        curr conns       : 0        , hit count        : 56
        dropped conns    : 0
        client pkt count : 809      , client byte count: 231750
        server pkt count : 1262     , server byte count: 1375334
        conn-rate-limit      : 0    , drop-count : 0
        bandwidth-rate-limit : 0    , drop-count : 0
      loadbalance:
        L7 loadbalance policy: L7_POLICY_NCL_TEST_HTTP
        VIP ICMP Reply       : ENABLED-WHEN-PRIMARY-SF-UP
        VIP State: INSERVICE
        Persistence Rebalance: ENABLED
        curr conns       : 0        , hit count        : 56
        dropped conns    : 0
        client pkt count : 809      , client byte count: 231750
        server pkt count : 1262     , server byte count: 1375334
        conn-rate-limit      : 0    , drop-count : 0
        bandwidth-rate-limit : 0    , drop-count : 0
      compression:
        bytes_in  : 1052393
        bytes_out : 309229
        Compression ratio : 70.61%
      Parameter-map(s):
        NCL_HTTP_PARAM
Thank you for any hints,
Yves Haemmerli

Gilles,
I do indeed have four different policy maps:
- one for PROD when the client arrives with HTTP
- one for PROD when the client arrives with HTTPS
- one for TEST when the client arrives with HTTP
- one for TEST when the client arrives with HTTPS
However, the PROD and the TEST environments use different server farms. I am testing the icmp-reply feature in the TEST environment, where both Layer-7 policy maps use the same server farm.
Here are the four policy maps:
policy-map type loadbalance http first-match L7_POLICY_NCL_PROD_HTTP
  description *** Load balancing rule for production in http mode ***
  class L7_CLASS_PROD
    serverfarm NCL_FARM_PROD backup NCL_REDIRECT_FARM_SORRY
    insert-http Source-IP header-value "%is"
    insert-http Remote-Port header-value "%pd"
    ssl-proxy client NCL_SSL_CLIENT
  class L7_CLASS_REDIRECT
    serverfarm NCL_REDIRECT_FARM_PROD_HTTP

policy-map type loadbalance http first-match L7_POLICY_NCL_PROD_HTTPS
  description *** Load balancing rule for production in https mode ***
  class L7_CLASS_PROD
    serverfarm NCL_FARM_PROD backup NCL_REDIRECT_FARM_SORRY
    insert-http Source-IP header-value "%is"
    insert-http Remote-Port header-value "%pd"
    ssl-proxy client NCL_SSL_CLIENT
  class L7_CLASS_REDIRECT
    serverfarm NCL_REDIRECT_FARM_PROD_HTTPS

policy-map type loadbalance http first-match L7_POLICY_NCL_TEST_HTTP
  description *** Load balancing rule for test in http mode ***
  class L7_CLASS_TEST
    serverfarm NCL_FARM_TEST backup NCL_REDIRECT_FARM_SORRY
    compress default-method gzip
    insert-http Source-IP header-value "%is"
    insert-http Remote-Port header-value "%pd"
    ssl-proxy client NCL_SSL_CLIENT
  class L7_CLASS_REDIRECT
    serverfarm NCL_REDIRECT_FARM_TEST_HTTP

policy-map type loadbalance http first-match L7_POLICY_NCL_TEST_HTTPS
  description *** Load balancing rule for test in https mode ***
  class L7_CLASS_TEST
    serverfarm NCL_FARM_TEST backup NCL_REDIRECT_FARM_SORRY
    insert-http Source-IP header-value "%is"
    insert-http Remote-Port header-value "%pd"
    ssl-proxy client NCL_SSL_CLIENT
  class L7_CLASS_REDIRECT
    serverfarm NCL_REDIRECT_FARM_TEST_HTTPS
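One thing I still have to verify is whether the icmp-reply option must also be configured under the Layer-4 class that matches the TEST VIP on HTTPS, since the option is set per class in the multi-match policy map. Here is a sketch of what I would add (the HTTPS class name below is assumed, as I did not paste the Layer-4 class maps):

policy-map multi-match VIP_PROD_AND_TEST
  class L4_CLASS_NCL_TEST_HTTP
    loadbalance vip icmp-reply active primary-inservice
  ! assumed name of the Layer-4 class matching the TEST VIP on port 443
  class L4_CLASS_NCL_TEST_HTTPS
    loadbalance vip icmp-reply active primary-inservice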
Yves

Similar Messages

  • Failover between server farms

    Hi,
    I'm requesting advice on the problem below:
    - I have 2 datacenters with one server farm in each DC and 5 servers behind each server farm
    - each server has a 5k max-connection limit in each server farm
    - I want to be able to fail over from one SF to the other when the connection count for the server farm reaches 25k (meaning each of the 5 servers has reached its max conn)
    Can I do that with partial-threshold?
    The Cisco documentation states: "Each time that a server is taken out of service (for example, using the CLI, a probe failure, or the retcode threshold is exceeded), the ACE is updated"
    Would exceeding max-conns be equivalent to "out of service"?
    thanks for any contribution
    cheers
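    For reference, what I had in mind with partial-threshold looks roughly like this (a sketch with invented names and thresholds; I am not sure whether reaching maxconn counts as "out of service" for it):
    serverfarm host SF_DC1
      ! take the farm out of service when fewer than 20% of the rservers are
      ! usable, bring it back once 40% are usable again (values invented)
      partial-threshold 20 back-inservice 40
      rserver SRV1
        conn-limit max 5000 min 4000
        inservice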

    Hi,
    I believe that, because of its hardware design, the Cisco ACE platform will not fail over on partial-threshold when the primary server farm's servers have reached the MAXCONN state and the partial-threshold trigger fires; you will observe connection drops in that condition.
    For your setup I suggest using a simple backup server farm with no partial threshold. This works: when all the servers in the server farm are no longer usable (out of service or at maxconn), the backup server farm will be activated.
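    Something along these lines, for example (a sketch; the farm, rserver and policy names are invented):
    serverfarm host SF_DC1
      rserver SRV1
        conn-limit max 5000 min 4000
        inservice
    serverfarm host SF_DC2
      rserver SRV6
        inservice
    policy-map type loadbalance first-match LB_POLICY_SKETCH
      class class-default
        ! no partial-threshold anywhere; SF_DC2 takes over only when
        ! every rserver in SF_DC1 is out of service or at maxconn
        serverfarm SF_DC1 backup SF_DC2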

  • Satellite Server traffic v Primary Server traffic

    I recently did a packet trace on a workstation and noticed a lot of traffic going to the primary servers and not just the satellite server.
    I have a 10.3.3 environment with Windows 2003 servers and full SQL 2005, with a Windows XP workstation acting as a satellite server. That satellite server has the Content, Authentication and Collection roles configured. It also has a Closest Server Rule set to a CIDR /16 IP address.
    I am not sure why, but the workstation seems to hit the primary server first, and even another primary server second. During login it does hit the satellite server a bit, but it hits the primary server just as much and, as I said, it also hits another primary server that isn't configured for that site. I do have content on all primary servers, so I understand it hitting those servers some, but during authentication and content retrieval I am confused about why it wouldn't be hammering the satellite server rather than the primary servers.
    I need to make sure I have the satellite servers and primary server configured correctly due to connection issues. Please, any help on configuration would be appreciated. Thanks.

    melnikok,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • CSM: Is it possible to access IPv4 server farms via IPv6 vIP?

    Dear all
    Before we start a more extensive testing programme I would like to ask the experts whether or not it should be possible to access already existing server farms (with IPv4 vIP) via an additional IPv6 vIP configured on the load balancer.
    The system in question is 6509 with Sup720 and CSM WS-X6066-SLB-APC.
    The idea is simple: Take an existing server farm (running completely on v4) and add an additional v6 vIP on the load balancer without the need to change the actual v4 networking behind the load balancer.
    Might this work (at least for some protocols like http, ftp, etc.)?
    Any "yes" or "no" or "maybe" or "with restrictions" appreciated.;)
    Thanks in advance,
    Grischa

    Fairly sure this isn't possible. Unless I've missed something, the CSM doesn't support IPv6 at all. Even if it did, I don't think a v6 VIP to a v4 real would work. The only place I've seen this work was on a NetScaler, because the NetScaler holds independent connections open to the client and to the servers as an HTTP proxy, passing the request between the two. I forget how the ACE operates; it may be able to act as a proxy, but I don't think it supports v6 either.
    v6 support on CSMs would be totally awesome, but I'm not holding my breath.

  • Can a real Server be applied in two different server farms associated with two different VIP IP and TCP Port

    Good day everyone,
    I have a question regarding real server operation with different server farms and VIPs.
    Can a real server be associated (for simplicity) with two different server farms, each with its own VIP, servicing the same TCP port (443)?
    Example:
    SF-A
    RSRV-1: 192.168.1.10 /24
    RSRV-2: 192.168.1.11 /24
    VIP-A: 192.168.1.20 /24
    VIP-A: https:web-A
    Protocol: HTTPS
    SF-B
    RSRV-2: 192.168.1.11 /24
    RSRV-3: 192.168.1.12 /24
    VIP-B: 192.168.1.30 /24
    VIP-b: https:web-B
    Protocol: HTTPS
    Client-A: 172.16.128.10
    Client-B: 172.16.128.15
    I have attached a sketch depicting the connectivity.
    As always, any feedback/suggestions will be greatly appreciated.
    Cheers,
    Raman Azizian

    Raman,
    This type of config is no problem. What the server is doing is virtual web hosting. The server would run two different web services on the same IP, each listening for a unique host header.
    From an IP point of view both connections would be destined to the rserver address on port 80, but in the HTTP header they would carry two different Host headers,
    one for www.example1.com and the second for www.example2.com. If the web server is configured correctly, so that each host name is tied to one web service, it will not have any issues.
    The config you attached looks OK. The way you have the sticky group doing source IP is fine. If you use cookies for the sticky group, I would suggest you create two sticky groups, each with a different cookie name, and add the same serverfarm to both groups. The client will only send a cookie for the domain it received it from, so using the same cookie in two VIPs could cause problems if the same client hits both VIPs.
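    Something like this, for example (a sketch; the cookie and group names are made up, and SF-SHARED stands for the serverfarm you attach to both VIPs):
    sticky http-cookie CookieWebA STKY-VIP-A
      cookie insert
      serverfarm SF-SHARED
    sticky http-cookie CookieWebB STKY-VIP-B
      cookie insert
      serverfarm SF-SHARED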
    Hope that helps
    Regards
    Jim

  • In a Sharepoint 2013 Server farm how far away can the backup of failover cluster servers be located away from the primary servers

    Hello Community
    When you set up a SharePoint 2013 Server farm and want to provide failover clustering, can the servers that the farm fails over to be located off-site in a different location? In other words, how far away (in distance) can the failover cluster of backup SharePoint 2013 Server farm servers be from the primary servers in the farm?
        Thank you
        Shabeaut

    Hiya,
    this article describes in detail the different aspects of creating a highly available SharePoint 2013 server farm.
    Create a high availability architecture and strategy for SharePoint 2013
    http://technet.microsoft.com/en-us/library/cc748824%28v=office.15%29.aspx

  • Can two server farm share the same VIP?

    Hello,
    Can I create two server farms that share the same VIP? For example,
    is this configuration possible?
    rserver host des1
      ip address 10.24.18.34
      inservice
    rserver host des2
      ip address 10.24.18.35
      inservice
    rserver host was1
      ip address 10.24.18.10
      inservice
    rserver host was2
      ip address 10.24.18.11
      inservice
    serverfarm host farm1
      rserver des1
        inservice
      rserver des2
        inservice
    serverfarm host farm2
      rserver was1
        inservice
      rserver was2
        inservice
    class-map type http loadbalance match-all Check-Headers-10
      2 match http url .*
      3 match http header Host header-value "10.24.16.*"
      4 match http header User-Agent header-value ".*MSIE.*"
    class-map type http loadbalance match-all Check-Headers-s-10
      2 match http url .*
      3 match http header Host header-value "10.24.16.*"
      4 match http header User-Agent header-value ".*MSIE.*"
    class-map type http loadbalance match-all other-http-10
      2 match http url .*
    class-map type http loadbalance match-all other-http-s-10
      2 match http url .*
    class-map match-all server-vlan-vip-10-http
      2 match virtual-address 10.24.16.10 tcp eq www
    class-map match-all server-vlan-vip-10-https
      2 match virtual-address 10.24.16.10 tcp eq https
    policy-map type loadbalance first-match http-10-lb
      class Check-Headers-10
        serverfarm farm2
      class other-http-10
        serverfarm farm2
    policy-map type loadbalance first-match http-10-s-lb
      class Check-Headers-s-10
        serverfarm farm1
      class other-http-s-10
        serverfarm farm1
    policy-map type loadbalance first-match lb-logic-10
      class class-default
        serverfarm farm2
    policy-map type loadbalance first-match lb-logic-s-10
      class class-default
        serverfarm farm1
    policy-map multi-match server-vip-service-policy-10
      class server-vlan-vip-10-http
        loadbalance vip inservice
        loadbalance policy http-10-lb
        loadbalance policy http-10-s-lb
        loadbalance vip icmp-reply
      class server-vlan-vip-10-https
        loadbalance vip inservice
        loadbalance policy lb-logic-10
        loadbalance policy lb-logic-s-10
        loadbalance vip icmp-reply
    interface vlan 233
      description Servidores_Balanceados_outside
      peer ip address 10.24.16.7 255.255.255.0
      access-group input anyone
      access-group output anyone
      service-policy input client-vips
      no shutdown
    interface vlan 242
      description Servidores_desarrollo1
      peer ip address 10.24.18.33 255.255.255.240
      access-group input anyone
      access-group output anyone
      service-policy input server-vip-service-policy-10
      no shutdown

    Hello gdufour,
    Actually, I've got this configuration:
    1.) One serverfarm (farm1).
    2.) In this serverfarm, I have two real servers, des1 and des2.
    3.) The real servers are behind VIP 10.24.16.10.
    4.) The load balancing is round-robin over HTTP using headers.
    I want to have:
    1.) One new server (a.b.c.d); it can be in the same subnet.
    2.) I don't know whether this server can belong to serverfarm farm1.
    3.) When I browse to http://index/url/url1, the request has to go to VIP 10.24.16.10.
    4.) When I reach that link, VIP 10.24.16.10 should send the request to server a.b.c.d.
    5.) When server a.b.c.d is down, serverfarm farm1 has to take the load for that URL.
    Is this configuration possible?
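    Roughly, what I imagine is something like this (a sketch; the new rserver, farm and class-map names are invented, a.b.c.d stands for the new server's address, and the new class would have to be evaluated before the existing catch-all classes):
    rserver host newsrv
      ip address a.b.c.d
      inservice
    serverfarm host farm-new
      rserver newsrv
        inservice
    class-map type http loadbalance match-all url1-10
      2 match http url .*/url/url1.*
    policy-map type loadbalance first-match http-10-lb
      class url1-10
        ! send this URL to the new server, fall back to farm1 if it is down
        serverfarm farm-new backup farm1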
    Thank you.
    Best Regards

  • ACE with sticky http-cookies across two server farms issue

    Hi,
    We need the same sticky HTTP cookie to be applied to two server farms (which are actually the same servers, but listening on different ports in each farm) in order to persist sessions to the same real backend server.
    e.g.
    Farm1 (front end HTTP service) - StickyGroup1
    rserver1 - 192.168.0.1:80
    rserver2 - 192.168.0.2:80
    rserver3 - 192.168.0.3:80
    Farm2 (SSL front end authentication service) - StickyGroup2
    rserver1 - 192.168.0.1:443
    rserver2 - 192.168.0.2:443
    rserver3 - 192.168.0.3:443
    We have setup two Sticky Groups (one for each of the farms above) both using the same cookie name e.g. cookieXYZ
    Our service is behind a single virtual server configured as follows (example URL and addresses):
    Virtual Server Configuration
    Virtual server name: www.somedomain.com
    Virtual IP: 2.2.2.2
    TCP/443 (https)
    SSL Termination - Proxy service name: www.somedomain.com (all keys and certs loaded and correct)
    L7 Load Balancing - **inline** rule match HTTP URL:(/AuthenticateMe/).*  Action : Sticky, Group: StickyGroup2, SSL Initiation enabled (www.somedomain.com)
    Default L7 Load Balancing action : Sticky, Group: StickyGroup1
    So normally we would expect users to hit www.somedomain.com first, and therefore Farm1, get cookieXYZ from the ACE (cookie insert is only enabled on StickyGroup1), and then be redirected to www.somedomain.com/AuthenticateMe, which matches the inline URL L7 rule and directs the request to Farm2. At this point we expected the ACE to use cookieXYZ to persist the user to the same real server hit in Farm1, but instead the stickiness doesn't seem to work.
    We suspect that the ACE uses IP:port as the unique value behind the cookie ID and therefore fails to match the same real host in a different farm, because we are using a mix of port numbers across the farms. Is this correct? Is there another way of accomplishing what we are after with a different configuration, but still the same setup with a single VIP and multiple services on the backend servers?
    Any suggestions or solutions appreciated.
    Thanks
    Paul

    The issue is not really about persistence: the SSL side only presents "new" services on the backend, and what you want to preserve is the real server's IP address.
    With a little bit of development, the only way to achieve this is to redirect the user after he has been sent to the HTTP farm while adding a "tag" (a cookie, or a token in the URL); then, on the SSL virtual server, when performing SSL offload, match this tag to send the user to the right server. But it will be a 1-to-1 mapping.
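    On the ACE side, the matching part of that idea could look something like this (a sketch; the cookie name, value, class and one-server farm are invented, and the tag itself has to be set by the application or by the redirect described above):
    serverfarm host FARM2-RS1
      rserver rserver1 443
        inservice
    class-map type http loadbalance match-all TAG-RS1
      2 match http cookie serverTag cookie-value "rs1"
    policy-map type loadbalance http first-match HTTPS-AUTH-LB
      class TAG-RS1
        ! a request carrying the tag for rserver1 always lands on rserver1
        serverfarm FARM2-RS1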

  • Agent Primary Server Failover

    Hi All
    Can someone tell me how the adaptive agent decides which server to connect to? We have two primary servers in our zone and it always connects to the same one; even when that first primary server is switched off, the agent authentication will fail rather than use the second server.
    We moved the Sybase database from one server to another and still the agent will not authenticate. So we added a DNS round robin so that the second server answers to the same DNS name as the primary, and that kind of works (slow access), but it seems like a very dodgy way of getting things to work from the second server.
    So just to clarify:
    Server1 (First Primary on): when this is on, all works well.
    Server1 (First Primary down): the second zone member, Server2, does not work even though it holds the database; the agent still tries to connect to Server1.
    We made a DNS round robin so that Server1 and Server2 resolve to the DNS name Server1.ourcomany.com. It appears to work, but it seems a bit of a cowboy fix to me; is this really the way we would recover from a primary crash if we did not use remote databases?

    I'm guessing the connection to the 2nd server is failing when the first server
    is down because the first server is your Certificate Authority.
    Are you using an Internal or External Cert?
    Craig Wilson - MCNE, MCSE, CCNA
    Novell Support Forums Volunteer Sysop
    Novell does not officially monitor these forums.
    Suggestions/Opinions/Statements made by me are solely my own.
    These thoughts may not be shared by either Novell or any rational human.
    "106666" <[email protected]> wrote in message
    news:[email protected]..
    >
    > Hi All
    >
    >
    > Can someone tell me how the adaptive agent decides to choose which
    > server to connect to?, we have two primary servers in our zone and it
    > always connects to the same one, even when the first primary server is
    > switched off they agent authentication will fail.
    >
    > We moved the sybase database from one server to another and still the
    > agent will not authenticate. So we added a DNS round robin so that the
    > second server had the same DNS name as the primary and that kind of
    > works (slow access) but seems like a very dodgy way of getting things
    > to work from the second server.
    >
    > So just to clarify:-
    >
    > Server1 (First Primary On) When this is on all works well.
    >
    > Server1 (First Primary Down) The Second Zone Member Server2 does not
    > work even when it holds the database, the agent still tries to connect
    > to Server1.
    >
    > We make a DNS Round Robin so that Server1&Server2 resolve to the DNS of
    > Server1.ourcomany.com it appears to work but seems a bit of a cowboy fix
    > to me, is this really the way we would recover from a primary crash if
    > we did not use remote databases.
    >
    >
    > --
    > 106666
    > ------------------------------------------------------------------------
    > 106666's Profile: http://forums.novell.com/member.php?userid=6795
    > View this thread: http://forums.novell.com/showthread.php?t=314104
    >

  • Server farm design under 3 tier design

    Hi guys,
    Just wondering which is best practice:
    connect the server farm to the distribution switches, or connect it directly to the core switches?
    I understand that different articles describe different methodologies, but from what I see in Cisco network designs, the server farm is always connected to the distribution switches.
    What factors should be considered when connecting to the distribution layer, and when should connecting to the core be considered?
    Thanks

    Jon, very nice reply.  Thanks for joining this thread.
    This makes sense because what real value would a dedicated core provide? If traffic is routing between VLANs it does this on the distro switches, and the only time it routes anywhere else is to the WAN, and you do not need high-speed core switches to do this.
    Yep, in my cited example, the 4th 6509 uses its 6708s for two off-site 10g links.
    I have also done the same as Joseph and connected servers to a collapsed pair of distro/core switches, primarily because of cost but also because your core/distro switches tend to have the greater throughput, so it is a logical place to put them. In addition, because they are on the core/distro switches you do not have to worry about oversubscribing uplinks from a different pair of switches, although there might still be oversubscription on the core/distro switches.
    Re: cost, again, yep, why buy another box?
    Re: greater throughput, also again, yep, for example, note I noted 4th box is all CEF720, i.e. all fabric, vs. classic bus in user 6509s.
    Re: oversubscription, and again, yep, 4th 6509's server 6748 cards are 40 Gbps to fabric, vs. gig or even 10g uplinks from a separate server switch.
    Jon, one reason I enjoyed, so much, reading you've done similar for similar reasons, late last year the business unit came to me and told me they want a separate dual core to increase reliability (as call centers are considered critical).  I noted that, yes, adding a second "core" box, by adding a redundant chassis (vs. the single chassis with redundant everything else) would decrease the MTBF by about 2 hours a year for the off-site links (expensively, IMO, for those 2 hours, but as they are footing the bill, who am I to say no), but I didn't see any advantage for adding another (2nd) "core" box (vs. continuing using the existing box as a collapsed core).  Well I got overruled because you just can't share a "core" device for anything else .
    Unfortunately, this is a case of, I think, some people reading design guides which say "core" devices do X, and concluding that you can therefore never do otherwise. Again, I very much enjoyed reading about someone else not always following the 3-tier model literally.

  • Xserve stopped responding as primary server

    We are using this server as the primary DNS server and rely on DynDNS as the secondary server for a couple of domain names. We have been doing so for 2 years.
    Since yesterday the primary server has suddenly stopped responding to hostname requests, even though nothing has been changed at all on our side, nor on our data-centre side.
    The domain name is not expired and is still correctly registered. Any idea where the problem might be (server side or not)?

    Some update: if we replace the alias records with 'machine' records, then it works. This seems to be the case with all domains/records hosted on this server. We have no clue why the aliases suddenly stopped working.

  • SP 2013 - Error: This operation can be performed only on a computer that is joined to a server farm

    Hello Community!
    I am working with SharePoint 2013 and I built a farm inside the firewall. Then a decision was made to move the two WFEs to the DMZ. Since that time, whenever I try to access the site collections, I get the error below. Other information: all the web applications are on port 80, but I'm not having any problem accessing Central Administration, which is on port 8080. The network team did misconfigure the DNS originally, but I'm told it's correct now. The two WFE servers each show twice in Manage Servers in Farm: once with the server name and once with the fully qualified domain name, and the FQDN entries show a Services Running status of "Not Configured".
    Has anyone else ever seen this error? If so, please provide guidance and instructions for fixing it.
    Thanks!
    Tom
    Server Error in '/' Application.
    This operation can be performed only on a computer that is joined to a server farm by users who have permissions in SQL Server to read from the configuration database. To connect this server to the server farm, use the SharePoint Products Configuration Wizard, located on the Start menu in Microsoft SharePoint 2010 Products.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.InvalidOperationException: This operation can be performed only on a computer that is joined to a server farm by users who have permissions in SQL Server to read from the configuration database. To connect this server to the server farm, use the SharePoint Products Configuration Wizard, located on the Start menu in Microsoft SharePoint 2010 Products.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [InvalidOperationException: This operation can be performed only on a computer that is joined to a server farm by users who have permissions in SQL Server to read from the configuration database. To connect this server to the server farm, use the SharePoint Products Configuration Wizard, located on the Start menu in Microsoft SharePoint 2010 Products.]
       Microsoft.SharePoint.Utilities.SPUtility.AlternateServerUrlFromHttpRequestUrl(Uri url) +262
       Microsoft.SharePoint.Administration.SPAlternateUrl.GetContextUri(HttpContext ctx) +385
       Microsoft.SharePoint.SPAppRequestContext.InitCurrent(HttpContext context) +1013
       Microsoft.SharePoint.SPAppRequestContext.get_Current() +175
       Microsoft.SharePoint.SPGlobal.CreateSPRequestAndSetIdentity(SPSite site, String name, Boolean bNotGlobalAdminCode, String strUrl, Boolean bNotAddToContext, Byte[] UserToken, SPAppPrincipalToken appPrincipalToken, String userName, Boolean bIgnoreTokenTimeout, Boolean bAsAnonymous) +400
       Microsoft.SharePoint.SPRequestManager.GetContextRequest(SPRequestAuthenticationMode authenticationMode) +120
       Microsoft.SharePoint.Administration.SPFarm.get_RequestAny() +370
       Microsoft.SharePoint.SPLanguageSettings.GetGlobalInstalledLanguages(Int32 compatibilityLevel) +39
       Microsoft.SharePoint.Administration.SPTemplateFileSystemWatcher.RefreshInstalledLocales() +103
       Microsoft.SharePoint.Administration.SPTemplateFileSystemWatcher.Initialize() +130
       Microsoft.SharePoint.ApplicationRuntime.SPRequestModule.System.Web.IHttpModule.Init(HttpApplication app) +873
       System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +582
       System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +322
       System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +384
       System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +397
    [HttpException (0x80004005): This operation can be performed only on a computer that is joined to a server farm by users who have permissions in SQL Server to read from the configuration database. To connect this server to the server farm, use the SharePoint Products Configuration Wizard, located on the Start menu in Microsoft SharePoint 2010 Products.]
       System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +646
       System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +159
       System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +771
    Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.18010
    Tom Molskow - Senior SharePoint Architect - Microsoft Community Contributor 2011 and 2012 Award -
    Linked-In - SharePoint Gypsy

    Hi Tom,
    According to your description, my understanding is that you got an error when you moved your SharePoint.
    This error occurs when the SharePoint farm administrator cannot connect to your configuration database. Please verify the following:
    Make sure that you can ping your SQL Server from the SharePoint front-end and application servers.
    Make sure that your farm account has permission to the configuration database.
    Lastly, verify that your database didn't for some reason go into recovery mode.
    Once everything is fine and you are still having issues, restart the SQL host service on the SQL Server. Once the service is restarted, you will need to reboot Central Admin and then your front-end servers.
    In addition, as you built your farm inside the firewall, please disable the firewall or create rules for the SQL Server service in the firewall on the SQL Server. For more information about creating rules in the firewall, please refer to the following posts:
    http://social.technet.microsoft.com/Forums/en-US/c5d4d0d0-9a3b-4431-8150-17ccfbc6fb82/can-not-create-data-source-to-an-sql-server
    http://www.mssqltips.com/sqlservertip/1929/configure-windows-firewall-to-work-with-sql-server/
    Here is a similar post for you to take a look at:
    http://social.technet.microsoft.com/Forums/en-US/ea54e26c-1728-48d4-b2c5-2a3376a1082c/this-operation-can-be-performed-only-on-a-computer-that-is-joined-to-a-server-farm-by-users-who-have?forum=sharepointgeneral
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • My Mail program has gone south on Leopard on my 27-month old Macbook. I can't send, even though the server details are correct. I tried reinstalling from the install DVD - but no go: no longer compatible, evidently. What to do?

    My Mail program has gone south on Leopard on my 27-month old Macbook. I can't send, even though the server details are correct. I tried reinstalling the Mail program from the install DVD, but no go; apparently that two-year-old Mail is no longer compatible with my up-to-date Leopard. I tried deleting the account (Hotmail) from Mail and setting up a different account (Yahoo). After loading the whole inbox two things happened: first, I still couldn't send, and second, when I closed the Mail window the whole inbox disappeared and hasn't come back. Although I couldn't reinstall the Mail program from the install DVD, would it still be possible to reinstall the whole system from that DVD? If I do, will I lose files, or will there be another problem since that DVD is now over two years old?
    Thanks for any suggestions; they will be much appreciated.
    P.S. I've just noticed that now I can't change the desktop picture: I go through the motions in Preferences, but the new picture doesn't appear on the desktop. Is there a systemic problem?

    You are waiting for an apology to something that happened over a year ago? Really? This is why there is a manager in the store. You have a problem with an employee you speak to the manager. Just like you did on the phone. You would have gotten your apology in July 2013.
    Here is the information about your upgrade fee.
    Upgrade Fee
    It is because when you have a problem you (customers) go running to the store and want to take up the time of the reps to fix it. Other carriers have third parties that deal with technical support and those locations are few and far between. VZW provides this directly through their stores. Also, when you subsidize a $650 and pay $200 VZW has to pay $400. Your monthly service fee doesn't begin to scratch the surface of paying that back. Not with all the money that is put into the network and its improvements.
    Then over a year later you get someone on the phone who apologized and offered to waive the fee on your phone and you didn't take it? That offer won't come down the pike again.
    One thing you should know is that all these employees are people and as such they sometimes come off cross. I doubt that you speak to everyone so sweetly all the time. Cut them a little slack and put this whole thing behind you after 15 months. Either upgrade with VZW or move on.

  • Do you need an additional slot for the RBS and does the RBS count as an additional hard drive in the Sharepoint 2013 Server farm?

    Hello Community
        If you have a clustered Sharepoint 2013 Server farm running RAID 10, on a
    clustered WS2012 R2 server that has 10 hard drive slots:
            (4) drives being part of the Sharepoint 2013 Server farm
            (1) drive for the Web App Server
            (5) are for hot swap if any of the production hard drives fail
        The questions are:
            - If the server only has 10 slots for hard drives, are you still allowed to add a NAS/RBS drive, and does the NAS/RBS drive get clustered too, or does this scenario require reducing the RAID level down to RAID 5 to make room for the NAS/RBS hard drive?
            - Does the NAS/RBS hard drive need its own hot-swap spare?
            - Does the NAS/RBS drive have to have SharePoint 2013 Server or WS2012 R2 running on it?
            - Does the Web App Server have SharePoint 2013 running on it, or just WS2012 R2?
        Thank you
        Shabeaut

    RAID 10 requires a minimum of 4 hard drives. The LUN for RBS must be attached to the SQL Server, not SharePoint server, as a local drive. You just need to be running the minimum required OS for the version of SQL Server you're using.
    Do you mean Office Web Apps by "Web App Server"? If so, that cannot have SharePoint installed on it. Otherwise, if you're just referring to it as another SharePoint server, yes it would have SharePoint installed on it.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Logging into a specific server in a terminal server farm

    We have several terminal server farms, and in each farm we have the need for one user to always log into a specific server in the farm. This is due to a little piece of software that is required for a device that only this one user has, and the fact that it is licensed to only one server. The user must use that server for it to work. I want to include this server in the farm because it seems silly to have a server for only one user. How can I point one PC/user to the same server in the farm all the time? We are using the Connection Broker and NLB, which seem to work just fine for all other users.
    Thanks

    Hi Steve,
    What operating system version are you running on your servers? Server 2008 R2? Server 2012?
    When you configure an RDS farm to be load-balanced by the connection broker, all servers in the farm are intended to have the exact same applications installed. The idea is that the RDCB can redirect users to different servers as needed to balance the load, and that you may take any particular server (or servers, if you have enough) offline and your farm will still work.
    Now, there are always exceptions, and I understand it would be nice if you could assign a user/app to a specific server to handle a case like yours. For example, you would accept that this particular user or app would not be load-balanced or highly available, and that if the one server was down it would not work, while other users/RemoteApps would be load-balanced as usual. This is not a feature of the current versions of RDS.
    To do what you want, the "best way" would require writing a custom plugin for RDCB. In your custom plugin you would specify the load-balancing logic. For example, when one of the "special" users logs on, your logic would direct them to the correct specific server, but when a regular user logs on you would allow the normal RDCB load-balancing logic to apply. Please see here for more information:
    Terminal Services Session Broker Plug-in reference
    http://msdn.microsoft.com/en-us/library/windows/desktop/cc644962(v=vs.85).aspx
    Besides writing a custom plugin, I suggest you consider the following workarounds:
    1. Instead of running the app under RDSH, run it in a Win7/Win8 VM pool if possible, either a pool of identical VMs or a dedicated VM assigned to each user that needs the app. The downside is added complexity, licensing for VDI, and an increase in the hardware resources required to run the VMs.
    2. Have the user connect to the server using /admin. You can change the permissions so that a specific group may connect using an /admin connection without being administrators. The downside is that some features of RDSH are not present in an administrative RDP session, and only two active admin sessions are permitted.
    3. If running Server 2008 R2, you could set the server so that it does not participate in load balancing and have the users that need this special app connect directly to the server's IP address instead of to the farm name. The downside is more uneven load distribution; however, it may not be that bad if you are balancing your initial connections using NLB and all of your regular users connect to the farm name as usual.
    4. Have a separate server in each farm (not joined to the farm) just for this one app. If possible they could be VMs with modest resources dedicated to each. I know this is what you did not want to do, but I mention it because an extra base Windows Server license for each farm is likely less additional cost than licensing the special software on all servers. If you can run the app in VMs, the additional hardware cost of doing it this way is reduced.
    -TP

    My organization has (had) an email retention policy of 30 days for email and 365 days for appointments. We have been on legal hold for the last 3 years so I am looking at roughly that amount of data that will need to be removed when we get the green