Single point of failure of proxy server?

Three questions regarding using the proxy server for WLS clustering:
          1. What happens if the proxy server fails? Is the entire cluster then
          inaccessible? If so, is there any provision ("meta-configuration")
          for proxy server failover?
          2. Does the term "in-memory persistence" clustering (as opposed to
          JDBC) imply that the session data is "shared" in memory between the
          primary & secondary servers, or is it completely replicated on each
          server?
          3. How bad is the performance of JDBC session persistence and of
          in-memory replication? Has anyone experimented with either?
          Any thoughts and comments are helpful and appreciated.
          Frank Wang
          

Frank,
          If using File/JDBC persistence, it doesn't matter which server gets the request
          because all servers have access to every session's data.
          When you assign a hostname "to the cluster" and map that hostname to the IP
          address of each server in the cluster, this sets up something called DNS
          round-robining. What this does is round-robin which IP address is returned when
          the cluster hostname is resolved. Unfortunately, DNS is not very good at
          detecting failed machines so it may continue to hand out IP addresses of failed
          machines.
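The weakness described above can be illustrated with a toy round-robin resolver (purely illustrative; real round-robin happens on the name server, not the client):

```python
from itertools import cycle

# Hypothetical A records for a cluster hostname. Plain DNS round-robin just
# cycles through them; it has no health checks, so the address of a dead
# server keeps being handed out.
cluster_ips = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
resolver = cycle(cluster_ips)

def resolve(hostname):
    """Return the next IP for the cluster hostname, round-robin style."""
    return next(resolver)

answers = [resolve("mycluster.example.com") for _ in range(4)]
print(answers)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
# Even if 10.0.0.2 is down, it will still be returned on the next pass --
# which is exactly why a health-checking router is the better option.
```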
          A better way to do this is with a hardware router that is made for this purpose
          (e.g., LocalDirector). The router can detect a failed machine and redirect the
          request to another machine in the cluster. Unlike the in-memory replication
          case, it doesn't matter which server gets each request so no proxy is required.
          Hope this helps,
          Robert
          Frank Wang wrote:
          > Hi, Robert,
          >
          > Thank you for the answers !!
          >
          > It is still not very clear to me how this works when using File/JDBC
          > session persistence.
          > Since there is no "coordinating" proxy server that knows how to
          > load-balance among the servers in the cluster (based on the algorithm
          > specified in the proxy server properties file), how do load balancing
          > (and failover) work when the requests are directed to a "mask" cluster
          > IP (virtual proxy) that in turn broadcasts to all the servers?
          >
          > Frank
          >
          > Robert Patrick wrote:
          >
          > > Hi Frank,
          > >
          > > Frank Wang wrote:
          > >
          > > > Three questions regarding using the proxy server for WLS clustering:
          > > >
          > > > 1. What happens if the proxy server fails? Is the entire cluster
          > > > then inaccessible?
          > > > If so, is there any provision ("meta-configuration") for
          > > > proxy server failover?
          > >
          > > If you have a single proxy server, this is correct in that it is a single
          > > point of failure. A common configuration is to use multiple proxy
          > > servers (with something like LocalDirector sitting in front to do routing
          > > and load balancing to the proxy servers) that proxy requests to a cluster
          > > of WLS servers.
          > >
          > > > 2. Does the term "in-memory persistence" clustering (as opposed
          > > > to JDBC) imply that the session data is "shared" in memory
          > > > between the primary & secondary servers, or is it completely
          > > > replicated on each server?
          > >
          > > HttpSession state can be shared across the servers in a cluster in one of
          > > three ways.
          > >
          > > 1.) Using File-based persistence (i.e., serialization) - This requires a
          > > shared file system across all of the servers in the cluster. In this
          > > configuration, all servers are equal and can access the session state.
          > > As you might imagine, this approach is rather expensive since file I/O is
          > > involved.
          > >
          > > 2.) Using JDBC-based persistence - This requires that all servers in the
          > > cluster be configured with the same JDBC connection pool. As with method
          > > 1, all servers are equal and can access the session state. As you might
          > > imagine, this approach is rather expensive since database I/O is
          > > involved.
          > >
          > > 3.) In-memory replication (not really persistence) - This scheme uses a
          > > primary-secondary replication scheme so that each session object is kept
          > > on only two machines in the cluster (which two machines vary depending on
          > > the particular session instance). In this scheme, we need a proxy server
          > > sitting in front of the cluster that can route the requests to the server
          > > with the primary copy of the session for each request (or to the
          > > secondary if the primary has failed). The location information is
          > > encoded in the session id and the proxy knows how to decode this
          > > information and route the requests accordingly (because the proxy is
          > > using code supplied by BEA -- the NSAPI or ISAPI plugins or the
          > > HttpClusterServlet).
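As a rough illustration of scheme (3), here is a hypothetical sketch of routing on location data embedded in the session id. The real WebLogic session-id format is internal to BEA; the `!`-separated layout below is an assumption made purely for illustration:

```python
# Hypothetical sketch: a session id that carries the addresses of the primary
# and secondary servers, and a proxy routing rule that prefers the primary
# and fails over to the secondary.
def make_session_id(token, primary, secondary):
    return f"{token}!{primary}!{secondary}"

def route(session_id, alive):
    """Send the request to the primary if it is up, else to the secondary."""
    _, primary, secondary = session_id.split("!")
    return primary if primary in alive else secondary

sid = make_session_id("ABC123", "10.0.0.1:7001", "10.0.0.2:7001")
print(route(sid, alive={"10.0.0.1:7001", "10.0.0.2:7001"}))  # 10.0.0.1:7001
print(route(sid, alive={"10.0.0.2:7001"}))                   # 10.0.0.2:7001
```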
          > >
          > > > 3. How bad is the performance of JDBC session persistence and of
          > > > in-memory replication? Has anyone experimented with either?
          > >
          > > JDBC session persistence performance is highly dependent on the
          > > underlying DBMS. In my experience in doing benchmarks with WLS,
          > > in-memory replication (of a reasonably small amount of session data) does
          > > not add any measurable overhead. Of course, the key words here are
          > > "reasonably small amount of session data". The more data you stuff
          > > into the HttpSession, the more data needs to be serialized between
          > > servers, and the more performance is impacted.
          > >
          > > Just my two cents,
          > > Robert
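Robert's point about session size can be made concrete with a quick measurement sketch (Python's pickle standing in for Java serialization, purely for illustration):

```python
import pickle

# The more you put in the session, the more bytes must be serialized and
# copied to the secondary on every update.
small_session = {"user": "frank", "cart": [1, 2, 3]}
large_session = {"user": "frank", "cart": list(range(10000))}

small_bytes = len(pickle.dumps(small_session))
large_bytes = len(pickle.dumps(large_session))
print(small_bytes < large_bytes)  # True -- replication cost grows with state
```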
          

Similar Messages

  • How to configure SharePoint HNSC with a reverse proxy server so that HNSC Share Point URLs are not exposed to end users.

    Could you please let me know how SharePoint HNSC can be configured with a reverse proxy server so that HNSC Share Point URLs are not exposed to end users.
    In normal path based site collections/web applications, reverse proxy configuration can be done using alternate access mappings with  Public URL = "proxy URL", internal = "HNSC Share Point URL" so that share point sends response back
    to Public URL = "proxy URL".
    In host-named site collections, alternate access mappings are not supported. Each HNSC is designed to have only one URL in each zone (one of the five zones: Default, Intranet, Internet, Custom, Extranet), with only one
    URL associated with each zone. This is what the PowerShell command "Set-SPSiteUrl" gives us, but it does not help us get the response back to the proxy URL after a request is sent to SharePoint, because we could not find any mechanism in
    SharePoint HNSC to respond to a different URL (the proxy URL). Consequently, SharePoint URLs are exposed to external users.
    The SharePoint article below from the MSDN blog matches what we are observing with SharePoint 2013 and a proxy server. It mentions that internal HNSC URLs cannot be hidden using any proxy server, and suggests
    using a web application instead of host-named site collections if hiding the internal SharePoint URLs is a requirement.
    Since I am observing the same behavior with SharePoint 2013 HNSC, could you please confirm that my understanding is correct?
    http://blogs.msdn.com/b/kaevans/archive/2012/03/27/what-every-sharepoint-admin-needs-to-know-about-host-named-site-collections.aspx
    Excerpt from above article-
    "Host Named Site Collections Only Use One Host Name
    Continuing on the discussion on AAMs and host named site collections, you cannot use multiple host names to address a site collection in SharePoint 2010. Because host-named site collections have a single URL, they do not support alternate access mappings and
    are always considered to be in the Default zone.  This is important if you are using a reverse proxy to provide access to external users. Products like Unified Access Gateway 2010 allow external users to authenticate to your gateway and access a site
    as http://uag.sharepoint.com and forward the call to http://portal.sharepoint.com. Remember that URL rewriting is not permitted. Further, a site collection can only respond to one host name. This means if you are using a reverse proxy, it must forward the
    calls to the same URL.  If your networking team has a policy against exposing internal URLs externally, you must instead use web applications and extend the web application using an alternate access mapping."

    Hi Satish,
    You are right that only one URL is allowed for each zone of the host-name site collections in both SharePoint 2010 and SharePoint 2013.
    It is by design that each host-name site collection only support one URL for each zone.
    The article below is about RTM version of SharePoint, and it is the same for SharePoint 2013 with the latest CU.
    https://support.microsoft.com/en-us/kb/2826457
    So hiding the URL of an HNSC from external users is not supported; you need to use path-based sites instead.
    Best regards.
    Thanks
    TechNet Community Support

  • Lync Edge and Proxy server public DNS records port forwarding rules

    Hi All
    I have a question regarding the port-forwarding rules for port 443 of the simple URLs.
    I have 4 public ip addresses.
    1 edge server (4 nics , 3 running with different ip for sip, meet and dialin in DMZ network, 1 connected to internal local network).
    1 proxy server (2 nics, 1 running with an ip which is in DMZ same as edge, and 1 connected to internal local network)
    1 front end (lync 2013 standard installed.) connected to internal local network
    1 office web apps . connected to internal local network
    The question is this: I am using 3 public IP addresses in the public DNS records for sip, meet, and dialin (AV), all on port 443 as configured on the edge server. So I can forward those 3 public IP addresses to the 3 DMZ IP addresses on the edge
    for sip, meet, and dialin (AV), as per the Microsoft documentation.
    However, I also have a reverse proxy. My understanding is that all public DNS records except SIP should be pointed (and port-forwarded on 443) to the reverse proxy IP address in the DMZ, since the reverse proxy redirects 443 and 80 to 4443 and 8080 on the front
    end.
    So the question is: if the simple-URL public DNS records and the port-443 forwarding rules should point to the reverse proxy server, why do they also need to be set, per IP address and port, in the front-end server topology pointing to the edge server?
    If anyone knows, please help me set this up correctly: what is the correct configuration for a Lync 2013 topology?

    Hi George
    Thanks for your reply. Attached is my topology, which should make things a bit clearer. You can see the public DNS host records in the image. I set 4 host records: sip, meet, dialin, and owa. The first 3 are pointed at the Lync edge via NAT on port
    443, as you said. However, my understanding is that they should point to the reverse proxy instead. For instance, I need meet.xxx.com on port 443 to be redirected through the reverse proxy to port 4443 on the front end. That way, when external
    customers who do not have the Lync client installed receive a meeting invitation via Outlook, they just need to click the "join Lync meeting" link in the email to join the meeting in the browser. (Is my understanding correct?)
    If Lync web meetings work like that, then the question is why I need to set three of the same addresses in the front-end topology builder for the edge and make them point to the edge server instead.
    1. Access Edge service (SIP.XXX.COM) ---> I understand this is used for external login to the Lync front end.
    2. Web conferencing edge service (can I set this to meet.xxx.com, the same as the simple URL that points to the reverse proxy?) ----> If I can set this address to be the same as the simple URL that points to the reverse proxy, why should it be NATed to the edge
    instead? To be honest, I have tested this: if I set this URL as sip.xxx.com (i.e., use a single FQDN and IP address with port 444) and point the simple URL meet.xxx.com to the reverse proxy, joining a Lync meeting sent from
    Outlook still works. I do not really understand what this URL is used for at this stage.
    3. AV edge --- same question as for web conferencing.
    Regards
    Wen Fei Cao

  • Is a cluster proxy a single-point-of-failure?

    Our group is planning on configuring a two machine cluster to host
              servlets/jsp's and a single backend app server to host all EJBs and a
              database.
              IIS is going to be configured on each of the two cluster machines with a
              cluster plugin. IIS is being used to optimize performance of static HTTP
              requests. All servlet/jsp request would be forwarded to the Weblogic
              cluster. Resonate's Central Dispatch is also going to be installed on the
              two cluster machines. Central Dispatch is being used to provide HTTP
              request load-balancing and to provide failover in case one of the IIS
              servers fails (because the IIS process fails or the cluster machine it's on
              fails).
              Will this configuration work? I'm most concerned about the failover of the
              IIS cluster proxy. If one of the proxies is managing a sticky session (X),
              what happens when the machine (the proxy is on) dies and we failover to the
              other proxy? Is that proxy going to have any awareness of session X?
              Probably not. The new proxy is probably going to believe this request is
              new and forward it to a machine which may not host the existing
              primary session. Wouldn't this be an error?
              Is a cluster proxy a single-point-of-failure? Is there any way to avoid
              this? Does the same problem exist if you use Weblogic's HTTP server (as the
              cluster proxy)?
              Thank you.
              Marko.
              

    We found our entity bean bottlenecks using JProbe Profiler. It's great for
              watching the application and seeing what methods it spends its time in. We
              found an exceedingly high number of calls to ejbLoad were taking a lot of
              time, probably due to the fact that our EBs don't all have bulk-access
              methods.
              We also had to do some low-level method tracing to watch WebLogic thrash EB
              locks, basically it locks the EB instance every time it is accessed in a
              transaction. Our DBA says that Oracle is seeing a LOT of lock/unlock
              activity also. Since much of our EB data is just configuration information
              we don't want to incur the overhead of Java object locks, excess queries,
              and Oracle row locks just to read some config values. Deadlocks were also a
              major issue because many txns would access the same config data.
              Our data is also very normalized, and also very recursive, so using EBs
              makes it tricky to do joins and recursive SQL queries. It's possible that we
              could get good EB performance using bulk-access methods and multi-table EBs
              that use custom recursive SQL queries, but we'd still have the
              lock-thrashing overhead. Your app may differ, you may not run into these
              problems and EBs may be fine for you.
              If you have a cluster proxy you don't need to use sticky sessions with your
              load balancer. We use sticky sessions at the load-balancer level because we
              don't have a cluster proxy. For our purposes we decided that the minimal
              overhead of hardware ip-sticky session load balancing was more tolerable
              than the overhead of a dog-slow cluster proxy on WebLogic. If you do use the
              proxy then your load balancer can do round-robin or any other algorithm
              amongst all the proxies.
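The point above, that a stateless cluster proxy makes load-balancer stickiness unnecessary, can be sketched as follows (the cookie layout is hypothetical, not the actual plugin format):

```python
# All routing state lives in the client's cookie, so any proxy instance
# decodes it to the same answer -- no per-session table on the proxy.
def decode_cookie(cookie):
    """Cookie carries the primary and secondary server addresses."""
    primary, secondary = cookie.split("|")
    return primary, secondary

class ClusterProxy:
    """Stateless: routing depends only on the request's cookie."""
    def route(self, cookie):
        primary, _ = decode_cookie(cookie)
        return primary

cookie = "10.0.0.5:7001|10.0.0.6:7001"
proxy_a, proxy_b = ClusterProxy(), ClusterProxy()
# Both proxies route identically, so the load balancer can use plain
# round-robin among them instead of sticky sessions.
print(proxy_a.route(cookie) == proxy_b.route(cookie))  # True
```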
              Marko Milicevic <[email protected]> wrote in message
              news:[email protected]...
              > Sorry Grant. I meant to reply to the newsgroup. I am putting this reply
              > back on the group.
              >
              > Thanks for your observations. I will keep them all in mind.
              > Is there any easy way for me to tell if I am getting acceptable
              performance
              > with our configuration? For example, how do I know if my use of Entity
              > beans is slow? Will I have to do 2 implementations? One implementation
              > using entity beans and anther implementation that replaces all entity use
              > with session beans, then compare the performance?
              >
              > One last question about the cluster proxy. You mentioned that you are
              using
              > Load Director with sticky sessions. We too are planning on using sticky
              > sessions with Central Dispatch. But since the cluster proxy is stateless,
              > does it matter if sticky sessions is used by the load balancer? No matter
              > which cluster proxy the request is directed to (by load balancing) the
              > cluster proxy will in turn redirect the request to the correct machine
              (with
              > the primary session). Is this correct? If I do not have to incur the
              cost
              > of sticky sessions (with the load balancer) I would rather avoid it.
              >
              > Thanks again Grant.
              >
              > Marko.
              > .
              >
              > -----Original Message-----
              > From: Grant Kushida [mailto:[email protected]]
              > Sent: Monday, May 01, 2000 5:16 PM
              > To: Marko Milicevic
              > Subject: RE: Is a cluster proxy a single-point-of-failure?
              >
              >
              > We haven't had too many app server VM crashes, although our web server
              > typically needs to be restarted every day or so due to socket strangeness
              or
              > flat out process hanging. Running 2 app server processes on the same box
              > would help with the VM stuff, but remember to get 2 NICs, because all
              > servers on a cluster need to run on the same port with different IP addrs.
              >
              > We use only stateless session beans and entity beans - we have had a
              number
              > of performance problems with entity beans though so we will be migrating
              > away from them shortly, at least for our configuration-oriented tables.
              > Since each entity (unique row in the database) can only be accessed by one
              > transaction at a time, we ran into many deadlocks. There was also a lot of
              > lock thrashing because of this transaction locking. And of course the
              > performance hit of the naive database synching (read/write for each method
              > call). We're using bean-managed persistence in 4.5.1, so no read-only
              beans
              > for us yet.
              >
              > It's not the servlets that are slower, it's the response time due to the
              > funneling of requests through the ClusterProxy servlet running on a
              WebLogic
              > proxy server. You don't have that configuration so you don't really need
              to
              > worry. Although i have heard about performance issues with the cluster
              proxy
              > on IIS/netscape, we found performance to be just fine with the Netscape
              > proxy.
              >
              > We're currently using no session persistence. I have a philosophical issue
              > with going to vendor-specific servlet extensions that tie us to WebLogic.
              We
              > do the session-sticky load balancing with a Cisco localdirector, meanwhile
              > we are investigating alternative servlet engines (Apache/JRun being the
              > frontrunner). We might set up Apache as our proxy server running the
              > Apache-WL proxy plugin once we migrate up to 5.1, though.
              >
              > > -----Original Message-----
              > > From: Marko Milicevic [mailto:[email protected]]
              > > Sent: Monday, May 01, 2000 1:08 PM
              > > To: Grant Kushida
              > > Subject: Re: Is a cluster proxy a single-point-of-failure?
              > >
              > >
              > > Thanks for the info Grant.
              > >
              > > That is good news. I was worried that the proxy maintained
              > > state, but since
              > > it is all in the cookie, then I guess we are ok.
              > >
              > > As for the app server, you are right. It is a single point
              > > of failure, but
              > > the machine is a beast (HP/9000 N-class) with hardware
              > > redundancy up the
              > > yin-yang. We were unsure how much benefit we would get if we
              > > clustered
              > > beans. There seems to be a lot of overhead associated with
              > > clustered entity
              > > beans since every bean read includes a synch with the
              > > database, and there is
              > > no fail over support. Stateful session beans are not load
              > > balanced and do
              > > not support fail over. There seems to be real benefit for
              > > only stateless
              > > beans and read-only entities. Neither of which we have many
              > > of. We felt
              > > that we would probably get better performance by locating all
              > > of our beans
              > > on the same box as the data source. We are considering creating a two
              > > instance cluster within the single app server box to protect
              > > against a VM
              > > crash. What do you think? Do you recommend a different
              > > configuration?
              > >
              > > Thanks for the servlet performance tip. So you are saying
              > > that running
              > > servlets without clustering is 6-7x faster than with
              > > clustering? Are you
              > > using in-memory state replication for the session? Is this
              > > performance
              > > behavior under 4.5, 5.1, or both? We are planning on
              > > implementing under
              > > 5.1.
              > >
              > > Thanks again Grant.
              > >
              > > Marko.
              > > .
              >
              >
              > Grant Kushida <[email protected]> wrote in message
              > news:[email protected]...
              > > Seems like you'll be OK as far as session clustering goes. The Cluster
              > > proxies running on your IIS servers are pretty dumb - they just analyze
              > the
              > > cookie and determine the primary/secondary IP addresses of the WebLogic
              > web
              > > servers that hold the session data for that request. If one goes down
              the
              > > other is perfectly capable of analyzing the cookie too. As long as one
              > proxy
              > > and one of your two clustered WL web servers survives your users will
              have
              > > intact sessions.
              > >
              > > You do, however, have a single point of failure at the app server level,
              > and
              > > at the database server level, compounded by the fact that both are on a
              > > single machine.
              > >
              > > Don't use WebLogic to run the cluster servlet. Its performance is
              > > terrible - we experienced a 6-7x performance degradation, and WL support
              > had
              > > no idea why. They wanted us to run a version of ClusterServlet with
              > timing
              > > code in it so that we could help them debug their code. I don't think
              so.
              > >
              > >
              > > Marko Milicevic <[email protected]> wrote in message
              > > news:[email protected]...
              > > > Our group is planning on configuring a two machine cluster to host
              > > > servlets/jsp's and a single backend app server to host all EJBs and a
              > > > database.
              > > >
              > > > IIS is going to be configured on each of the two cluster machines with
              a
              > > > cluster plugin. IIS is being used to optimize performance of static
              > HTTP
              > > > requests. All servlet/jsp request would be forwarded to the Weblogic
              > > > cluster. Resonate's Central Dispatch is also going to be installed on
              > the
              > > > two cluster machines. Central Dispatch is being used to provide HTTP
              > > > request load-balancing and to provide failover in case one of the IIS
              > > > servers fails (because the IIS process fails or the cluster machine
              it's
              > > on
              > > > fails).
              > > >
              > > > Will this configuration work? I'm most concerned about the failover
              of
              > > the
              > > > IIS cluster proxy. If one of the proxies is managing a sticky session
              > > (X),
              > > > what happens when the machine (the proxy is on) dies and we failover
              to
              > > the
              > > > other proxy? Is that proxy going to have any awareness of session X?
              > > > Probably not. The new proxy is probably going to believe this request
              > is
              > > > new and forward the request to a machine which may not host the
              existing
              > > > primary session. I believe this is an error?
              > > >
              > > > Is a cluster proxy a single-point-of-failure? Is there any way to
              avoid
              > > > this? Does the same problem exist if you use Weblogic's HTTP server
              (as
              > > the
              > > > cluster proxy)?
              > > >
              > > > Thank you.
              > > >
              > > > Marko.
              > > > .
              > > >
              > > >
              > > >
              > > >
              > > >
              > > >
              > >
              > >
              >
              >
              

  • Administrative Server - Single Point of Failure?

    From my understanding, all managed servers in a cluster get their
              configuration by contacting the administrative server in the cluster.
              So i assume in the following scenario, the administrative server
              could be a single point of failure.
              Scenario:
              1. The machine on which the administrative server was running had a
              hardware defect.
              2. Due to some bad coding, one of the managed servers on another machine
              crashed.
              3. A small script tries to restart the previously failed server from step 2.
              I assume that step 3 is not possible, because there is no backup
              administrative server in the whole cluster, so the script will fail
              when trying to restart the crashed managed server.
              Did I understand this right? Do you have any suggestions on how to avoid
              this situation?
              what does BEA recommend to their enterprise customers?
              best regards
              Thomas
              

    Hi Thomas,
              There is no reason why you couldn't keep a backup administration server
              available that is NOT running. If the primary administration server
              went down, you could launch the backup with the same administration
              information, and the managed servers could retrieve the required
              information from it.
              regards,
              -Rob
              Robert Castaneda [email protected]
              CustomWare http://www.customware.com
              "Thomas E. Wieger" <[email protected]> wrote in message
              news:[email protected]...
              > From my understanding, all managed servers in a cluster get their
              > configuration by contacting the administrative server in the cluster.
              > So i assume in the following scenario, the administrative server
              > could be a single point of failure.
              >
              > Scenario:
              > 1. The machine, on which the administrative server was running got a
              > hardware defect.
              > 2. due to some bad coding one of the managed servers on another machine
              > crashed.
              > 3. a small script tries to restart the previously failed server from step
              2.
              >
              > i assume, that step 3. is not possible, because there is no backup
              > administrative server
              > in the whole cluster. So the script will fail when trying to start the
              > crashed managed server
              > again.
              >
              > did i understand this right? do you have some suggestions, how to avoid
              this
              > situation?
              > what does BEA recommend to their enterprise customers?
              >
              > best regards
              >
              > Thomas
              >
              >
              >
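The cold-standby scheme Rob describes could be driven by a small watchdog along these lines (the start command is a placeholder, not real WebLogic tooling):

```python
# Hypothetical watchdog: use the primary admin server while it is alive;
# launch the cold-standby backup only when the primary is down.
def choose_admin(primary_alive, backup_start_cmd, start):
    if primary_alive:
        return "primary"
    start(backup_start_cmd)  # bring up the backup on demand
    return "backup"

started = []
print(choose_admin(True, "startBackupAdmin.sh", started.append))   # primary
print(choose_admin(False, "startBackupAdmin.sh", started.append))  # backup
print(started)  # ['startBackupAdmin.sh']
```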
              

  • Primary site server a single point of failure?

    I'm installing ConfigMgr 2012 R2, and employing a redundant design as much as possible. I have 2 servers, call them CM01,CM02, in a single primary site, and on each server I have installed the following roles: Management Point, Distribution Point, Software
    Update Point, as well as the installing the SMS Provider on both servers. SQL is on a 3rd box.
    I am now testing failover from a client perspective by powering down CM01 and querying the current management point on the client: (get-wmiobject -namespace root\ccm -class ccm_authority).CurrentManagementPoint . The management point assigned to
    the client flips to the 2nd server, CM02, as expected. However, when I try to open the CM management console, I cannot connect to the Site, and the SMSAdminUI log reveals this error: "Provider machine not found".
    Is the Primary site server a single point of failure? 
    Why can't I point the console to a secondary SMS provider?
    If this just isn't possible, what is the course of action to restore console access once the Primary Site server is down?
    Many Thanks

    Yes, it is. The notion that a CAS and multiple primaries provides redundancy is completely false; in fact, they introduce multiple single points of failure. The only technical reason for a CAS and multiple primary sites is scale-out; i.e., supporting 100,000+ managed systems.
    HA is achieved from a client perspective by adding multiple site systems hosting the client-facing roles: MP, DP, SUP, App Catalog.
    Beyond that, all other roles are non-critical to client operations and thus have no built-in HA mechanism. This includes the site server itself.
    The real question is what service that ConfigMgr provides do you need HA for?
    Jason | http://blog.configmgrftw.com

  • Cluster Single Point of Failure

              It appears from the documentation that in order to have a cluster
              you need to set up a proxy server which acts as the director for
              all the other servers in the cluster.
              If this proxy server goes down, then it doesn't matter
              how many servers you have in the cluster.....
              Doesn't this represent a single point of failure ?
              Thanks in advance!
              

    George,
              You may have multiple proxies, avoiding a SPOF. They are "stateless", so it
              doesn't matter which request goes to which proxy, so you can use hardware load
              balancers in front of the proxies. Most hardware load balancers also support
              failover, avoiding a SPOF there as well.
              In 6.0, the proxy is not required, but unless you have reasonably good
              hardware load balancers in front, you should still use one or more proxies.
              One other thing to consider is distribution of work. The proxy (such as
              Apache) can act as the web server for static content and is much more
              efficient in terms of execution and in terms of dollars than Weblogic for
              such simple tasks.
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com
              +1.617.623.5782
              WebLogic Consulting Available
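Because the proxies are stateless, a front-end balancer only needs round-robin plus a liveness check; which proxy serves a given request never matters. A minimal sketch of that selection logic (the `is_alive` probe is a hypothetical stand-in for a real health check):

```python
class ProxyPool:
    """Round-robin over stateless proxies, skipping any that fail a health check."""

    def __init__(self, proxies):
        self.proxies = list(proxies)
        self._next = 0

    def pick(self, is_alive):
        # Try each proxy at most once, starting from where we left off
        for _ in range(len(self.proxies)):
            proxy = self.proxies[self._next]
            self._next = (self._next + 1) % len(self.proxies)
            if is_alive(proxy):
                return proxy
        raise RuntimeError("no live proxies")
```

Since no proxy holds session state, a failed proxy can simply be skipped and the surviving ones serve every request.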

  • How can I design Load Balancing for distant Datacenters? without single point of failure

    Dear Experts,
    We are using the following very old and passive method of redundancy for our cloud SaaS, but it's time to make it appropriate. Can you please advise:
    Current issues:
    1. No load balancing. IP selection is based on primary and secondary IP configurations. If the primary fails to respond, the DNS record changes to the secondary IP with TTL=1min.
    2. When the primary server fails, it takes around 15 min for clients to access the servers. Way too long!
    The target:
    A. Activate a load balancing mechanism to utilize the stand-by server.
    B. How can the solution be designed to avoid a single point of failure? In the previous example, UltraDNS is a single point of failure.
    C. If using GSS is the solution, how can it be designed in both server locations (for active redundancy) using an ordinary DNS server?
    D. How can HSRP, GSS, GSLB, and/or VIP be used? What would be the best solution?
    The servers are running Oracle DB, MS SQL, and Tomcat, with 2x SANs of 64TB each.
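The 15-minute window in issue 2 is the cost of relying on DNS alone: even with a 1-minute TTL, resolvers and client caches can keep handing out the dead primary's address. A client-side active/passive probe avoids waiting on DNS entirely; here is a rough sketch (the `try_connect` callback is a hypothetical stand-in for a real TCP or HTTP health probe):

```python
def select_server(primary, secondary, try_connect, timeout=3.0):
    """Return the first reachable server, preferring the primary.

    try_connect(host, timeout) returns True when the host answers; the
    worst-case failover time is bounded by the probe timeout, not DNS TTL.
    """
    for host in (primary, secondary):
        if try_connect(host, timeout):
            return host
    raise ConnectionError("both servers unreachable")
```

A real deployment would put this logic in a load balancer or GSS device rather than in every client, but the principle is the same: failover latency is set by the health-check interval, not by DNS propagation.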


  • Single points of failure?

    So, we are looking into the xServe RAID, and I'd like some insight into making things as bulletproof as possible.
    Right now we plan to have:
    a load balancer and a failover load balancer (running on cheap BSD hardware, since hardware load balancers are so damned expensive) feeding into
    two application servers, which communicate with
    one back-end server, which serves as both a database server and an NFS server for the app servers
    And the volumes that will be NFS-mounted would be on our xServe RAID, which would be connected directly to the back-end server.
    The networking hardware would all be failover through multiple switches and cards and so forth.
    The idea here is to avoid as many single points of failure as possible. Unfortunately at the moment we don't have a DBA who is fluent in clustering, so we can't yet get rid of the back-end server as a single point of failure. (Which is also why I'm mounting the RAID on it and sharing via NFS... if the database goes down, it won't matter that the file service is down too.) However, in the current setup, there's one other failure point: the RAID controllers on the xServe RAID.
    Performance is less important to us on this than reliability is. We can't afford two RAID units at the moment, but we can afford one full of 500 gig drives, and we really only need about 4 TB of storage right now, so I was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc. As far as I understand it, this eliminates the RAID controllers as a single point of failure, and as far as I know they are at least supposedly the only single point of failure in the xServe RAID system. (I could also do RAID 10 that way, but due to the way we store files, that wouldn't buy us anything except added complexity.)
    And later on, down the road, when we have someone good enough to figure out how to cluster the database, if I understand correctly, we can spend the money to get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it, thus cutting out the middle-man NFS service. (I am under the impression that this sort of volume-sharing is possible via FC... is that correct?)
    Comments? Suggestions? Corrections to my misapprehensions?
    --Adam Lang

    Camelot wrote:
    A couple of points.
    was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc.
    Really? Assuming you're using fourteen 500GB drives, this will give you seven volumes mounted on the server, each a 500GB mirror split across the two controllers. That's fine from a redundancy standpoint, but it suffers from the standpoint of managing seven direct mount points on the server, as well as seven NFS shares and 14 NFS mount points on the clients. Not to mention file allocations between the volumes, etc.
    If your application is such that it's easy to dictate which volume any particular file should be on and you don't mind managing all those volumes, go ahead, otherwise consider creating two RAID 5 volumes, one on each controller, using RAID 1 to mirror them on the back-end server and exporting a single NFS share to the clients/front-end servers.
    Quite simple, actually. But admittedly, two RAID 5s RAID-1-ed together would be much simpler to manage, though a bit less efficient, space-wise.
    if I understand correctly, we can spend the money get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it
    Yes, although you'll need another intermediate server as the metadata controller to arbitrate connections from the two machines. It becomes an expensive option, but your performance will increase, as will the ease with which you can expand your storage network (adding more storage as well as more front-end clients).
    But then that means that the metadata controller is a single point of failure...?
    --Adam Lang
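For what it's worth, the usable capacity of the two layouts for fourteen 500 GB drives works out as follows (straight arithmetic, ignoring filesystem overhead):

```python
DRIVE_GB = 500
DRIVES_PER_CONTROLLER = 7  # xServe RAID: 14 bays, 7 per controller

# Mirrored-pairs layout: seven 2-drive software mirrors,
# one drive from each controller per mirror
mirror_usable = DRIVES_PER_CONTROLLER * DRIVE_GB          # each mirror keeps one drive's worth

# RAID 5 + RAID 1 layout: a 7-drive RAID 5 per controller (one drive's
# worth of parity each), then software RAID 1 across the two volumes
raid5_usable = (DRIVES_PER_CONTROLLER - 1) * DRIVE_GB     # per controller
raid51_usable = raid5_usable                              # RAID 1 keeps one copy

print(mirror_usable, raid51_usable)  # 3500 3000
```

So the mirrored-pairs layout yields a bit more usable space (3.5 TB vs 3.0 TB); the RAID 5 + RAID 1 layout wins on manageability, not capacity.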

  • Linux cluster, no single point of failure

    I'm having difficulty setting up a Business Objects cluster on Linux with no single point of failure. Following the instructions for a custom install, I end up connecting to the CMS on the other server, with no CMS running on the server I'm doing the install on. That is a cluster, but we only have the CMS running on one server in this scenario, and we can't have a single point of failure. Could someone explain how to set up a 2-server clustered solution on Linux that doesn't have a single point of failure?

    It's not working. I can see my other node listed in the config, but the information for the servers states that the SIA is available. I've checked network/port connectivity between the boxes, and the SIA is running and available on each box.
    In the instructions for installing on a system with Windows capabilities, I read about a step to connect to an existing CMS:
    http://wiki.sdn.sap.com/wiki/download/attachments/154828917/Opiton1_add_cms3.jpg
    http://wiki.sdn.sap.com/wiki/display/BOBJ/Appendix1-Deployment-howtoaddanotherCMS-Multipleenvironments
    But via the Linux install.sh script, no matter what I do, I cannot find any way to reach that step.

  • Single point of failure for web dispatcher

    Hi
    I need advice on how I can resolve the single point of failure for the web
    dispatcher. In case the web dispatcher goes down, what
    are the alternatives which can be used to avoid this?
    In our environment we have a DB server with two application servers, and the web
    dispatcher is installed on the DB server. I need to know what I can do when
    the web dispatcher on the DB server crashes and cannot be restarted at all.
    We are running Oracle 10.2.0.2.0 on AIX 5.3.
    Regards,
    Codlick

    Hi Codlick,
    the answer is: you cannot (switch to two web dispatchers).
    If you want to use two web dispatchers, they need something in front, like a hardware load balancer. This would actually work, as the WDs know their sessions and the sticky servers for those. But remember, you always need a single point for the incoming address (IP).
    Your problem really is about switchover groups. Both WDs need to run in different switchover groups and need to switch to the same third software. I'm not sure if your switchover software can handle this (I'm not even sure if anyone can...), as this means the third WD needs to be in two switchover groups at the same time.
    Hope this helps,
    Regards,
    Benny

  • Forms 10g 2 ApplicationServers Single Point of Failure

    Hi,
    we are planning a migration from Forms 6i to Forms 10g, and we are thinking about eliminating, as much as possible, any single point of failure.
    Today we have all those clients running the Forms runtime with the FMBs ...
    They all create a connection to the database, which we have secured as much as possible against loss of service.
    After the migration we will have all those clients running a browser and calling a URL which points to the application server(s) running the Forms runtime processes. If this machine fails, none of the clients can work anymore. Because of that, we are planning on 2 AS, to be on the safer side against the loss of one server.
    But here starts the question :
    When a client starts, it will point to a URL which leads to an IP address.
    The IP address could be that of a hardware load balancer; if so, the LB will forward to Oracle Web Cache on one of the AS. If not, the IP address leads directly to one Web Cache.
    From there it proceeds to the HTTP server on one of the AS and then further to the mod_oc4j instance, which could be duplicated as well.
    All those "instances" (hardware load balancer, Web Cache, HTTP server, mod_oc4j instances) can be doubled or more, but that only makes sense if they run on different hardware, which means different IP addresses. I can imagine using a virtual IP address for connecting to the HLB or the Web Cache, but where is it split to the different real addresses, without having one box as a single point of failure?
    I'm looking for a solution to double the application server as easily as possible, but without having the clients decide which server they can work on, and without having a single box in front which would be a S.P.O.F.
    I know that there are HLBs out there which can act as a cluster, so that should eliminate the problem, but I would like to know whether that can be done on the AS only.
    Thanks,
    Mark

    Thanks wilfred,
    yes I've read that manual. Probably not every single page ;-)
    I agree that high availability is a very broad and complex topic, but my question (although it was difficult to explain what I mean) is only about a small part of it:
    I understand that I can have multiple instances at each level (OC4J, HTTP, Web Cache, LBR), but where, or what, accepts one single URL and leads the requests to the available AS?
    As mentioned in my post before, we may test the Microsoft NLB cluster to divide the requests among the Web Cache instances on the 2 AS, and then the 2 Web Caches proceed to the 2 HTTP servers and so on.
    The idea is that Windows offers a virtual IP address for those 2 Windows servers, and somehow the requests will be transferred to a running Web Cache.
    Does that work correctly with session binding ...
    We'll see
    thanks,
    Mark

  • Deploying OracleAS Single Sign-On Server Cluster setup with a Proxy Server

    I have a question regarding setting up an OracleAS Single Sign-On server in cluster mode along with an Apache proxy server.
    Step 1 - I'm planning to install OracleAS Single Sign-On Server on two nodes, sso1.oracle.com and sso2.oracle.com, in a cluster. Both nodes in the cluster are accessed via a load balancer, i.e. sso.oracle.com.
    Step 2 - Then I'm planning to set up two Apache servers as proxy servers, i.e. apache1.oracle.com and apache2.oracle.com. These two Apache servers are accessed via a load balancer, i.e. apache.oracle.com.
    The question I have is:
    1) While setting up the OracleAS Single Sign-On cluster, I would provide the load balancer host, i.e. sso.oracle.com, as part of the install, so that all user requests coming to sso1.oracle.com/sso2.oracle.com get redirected back to the load balancer.
    2) But as part of the Apache proxy setup, I am also supposed to redirect from the SSO server to apache.oracle.com.
    Using ssocfg.sh I can only provide either sso.oracle.com or apache.oracle.com, NOT BOTH.
    In this case, should I
    1) avoid redirecting to sso.oracle.com and instead redirect only to the Apache server, OR are there other ways to configure this?
    I have the above setup working fine in the DEV environment, where there is only one SSO server and one Apache proxy server. The problem really comes when I set up the OSSO server as a cluster; in this case, must I redirect to the load balancer as well as the proxy server?

    Why not use Web Cache clustering between the Apache servers and the 2 SSO servers?

  • Is OAM a single point of failure?

    Hi Adam
    I have a serious doubt about our OAM implementation...
    What is the best practice for OAM implementation, and what are the fallback plans for critical web application integrations?
    Once the web applications are integrated with OAM, the login traffic will always redirect to OAM for authentication and authorization...
    But once OAM is down, all the critical applications are down !!
    So, from the customer's point of view, OAM seems like a single point of failure..
    Do you have any brilliant ideas on this ?
    Thanks in million...
    Best Regards
    John

    john,chong wrote:
    Hi Pramod
    Yup, HA must always be in place for this kind of critical implementation..
    BUT for ESSO (desktop ESSO) implementations, even if ESSO is down, the user is still able to do a manual login to their application..
    Really? What if the password has been changed by ESSO to a random one for some application? That's very common in ESSO implementations. The user doesn't know the password, only ESSO does.

  • AP Extreme (WiFi Access Point)... LAN... Web Proxy Server help.

    Hello...
    I need a little help configuring this AirPort Extreme as a wireless access point, serving a bunch of iPads via the school's LAN connection, for which traffic is routed through a web proxy server. I've been told to set it up as a bridge, as the PC LAN and proxy are providing NAT, but I can't seem to crack it.
    The WiFi side of things is up and running; we can all see and connect to the AP.
    I'm told that it was working fine before the school break in the summer; then something was changed and the position of the AP altered.
    The web proxy server is normally accessed from the PCs via the following address... IP > 10.12.14.122  //  PORT > 3128
    I'm not certain where the proxy settings need to go in the new 'simple' AirPort Utility; I can't see a place for the port at all?!?
    (I've taken the AP home and tried it on my home network, and it works fine, so we know it's all OK and it's down to config.)
    Here are some screen images of the settings as they are, that do not work.
    (I was trying a few different settings, hence the screens like Static/DHCP etc.)
    Any help is greatly appreciated.
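For what it's worth, in bridge mode the AirPort never sees the proxy at all: proxy settings belong on each client (the iPads' per-network HTTP proxy fields, or in code on a scripted client). A sketch of a client on that LAN routing HTTP through the school's proxy, using the address quoted in the post:

```python
import urllib.request

# Web proxy address and port from the post (10.12.14.122:3128)
PROXY = "http://10.12.14.122:3128"

# Send both http and https requests through the proxy
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# opener.open("http://example.org") would now be fetched via the proxy
```

The AirPort's own config only needs bridge mode enabled; nothing about port 3128 goes into AirPort Utility.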

    Hi Daniel,
    >>Now when I go on a client site my internet access on the host laptop is via a web proxy on a LAN connection.
    Does "LAN connection" mean the physical NIC (Realtek PCIe GBE Family Controller)?
    Does "web proxy" mean adding a proxy server IP in IE?
    After binding the NIC (Realtek PCIe) to an external virtual switch and connecting all VMs to that external virtual switch, can you still not access it?
    Best Regards
    Elton Ji
