Is a cluster proxy a single-point-of-failure?

Our group is planning on configuring a two machine cluster to host
          servlets/jsp's and a single backend app server to host all EJBs and a
          database.
          IIS is going to be configured on each of the two cluster machines with a
          cluster plugin. IIS is being used to optimize performance of static HTTP
          requests. All servlet/jsp requests would be forwarded to the WebLogic
          cluster. Resonate's Central Dispatch is also going to be installed on the
          two cluster machines. Central Dispatch is being used to provide HTTP
          request load-balancing and to provide failover in case one of the IIS
          servers fails (because the IIS process fails or the cluster machine it's on
          fails).
          Will this configuration work? I'm most concerned about the failover of the
          IIS cluster proxy. If one of the proxies is managing a sticky session (X),
          what happens when the machine the proxy is on dies and we fail over to the
          other proxy? Is that proxy going to have any awareness of session X?
          Probably not. The new proxy is probably going to believe this request is
          new and forward it to a machine which may not host the existing primary
          session. Isn't this an error?
          Is a cluster proxy a single-point-of-failure? Is there any way to avoid
          this? Does the same problem exist if you use Weblogic's HTTP server (as the
          cluster proxy)?
          Thank you.
          Marko.
          

We found our entity bean bottlenecks using JProbe Profiler. It's great for
          watching the application and seeing what methods it spends its time in. We
          found that an exceedingly high number of calls to ejbLoad were taking a lot of
          time, probably due to the fact that our EBs don't all have bulk-access
          methods.
          We also had to do some low-level method tracing to watch WebLogic thrash EB
          locks; basically, it locks the EB instance every time it is accessed in a
          transaction. Our DBA says that Oracle is seeing a LOT of lock/unlock
          activity also. Since much of our EB data is just configuration information
          we don't want to incur the overhead of Java object locks, excess queries,
          and Oracle row locks just to read some config values. Deadlocks were also a
          major issue because many txns would access the same config data.
          Our data is also very normalized, and also very recursive, so using EBs
          makes it tricky to do joins and recursive SQL queries. It's possible that we
          could get good EB performance using bulk-access methods and multi-table EBs
          that use custom recursive SQL queries, but we'd still have the
          lock-thrashing overhead. Your app may differ, you may not run into these
          problems and EBs may be fine for you.
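The bulk-access idea mentioned above can be sketched as follows. This is a hypothetical illustration, not code from the thread: `ConfigBean`, `ConfigData`, and the load counter are stand-ins, and the real EJB home/remote interfaces and container callbacks are omitted. The point is simply that one coarse accessor avoids a per-field round trip (and, in a pessimistic container, a per-call state reload):

```java
import java.io.Serializable;

// Hypothetical sketch of a bulk-access method on an entity bean.
public class ConfigBean {
    // Value object returned by the bulk accessor, serializable so it
    // could travel over a remote interface.
    public static class ConfigData implements Serializable {
        public final String name;
        public final String value;
        public final int version;
        public ConfigData(String name, String value, int version) {
            this.name = name; this.value = value; this.version = version;
        }
    }

    private String name = "timeout";
    private String value = "30";
    private int version = 2;
    int loadCount = 0; // counts how often the "database sync" ran

    // Stand-in for the container re-synching state from the database.
    private void ejbLoad() { loadCount++; }

    // Naive style: each getter is a separate call, each paying the sync cost.
    public String getName()  { ejbLoad(); return name; }
    public String getValue() { ejbLoad(); return value; }
    public int getVersion()  { ejbLoad(); return version; }

    // Bulk accessor: one call, one load, all fields at once.
    public ConfigData getConfigData() {
        ejbLoad();
        return new ConfigData(name, value, version);
    }
}
```

Reading three fields through the naive getters costs three loads; the bulk accessor returns the same data for one.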
          If you have a cluster proxy you don't need to use sticky sessions with your
          load balancer. We use sticky sessions at the load-balancer level because we
          don't have a cluster proxy. For our purposes we decided that the minimal
          overhead of hardware ip-sticky session load balancing was more tolerable
          than the overhead of a dog-slow cluster proxy on WebLogic. If you do use the
          proxy then your load balancer can do round-robin or any other algorithm
          amongst all the proxies.
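The trade-off described above can be made concrete with a toy balancer. The class, server names, and hash policy are illustrative assumptions, not Central Dispatch or LocalDirector behavior: with stateless cluster proxies in front, plain round-robin is fine; without them, hashing the client IP keeps each client pinned to one server.

```java
// Toy comparison of the two balancing policies discussed above.
public class Balancer {
    private final String[] servers;
    private int next = 0;

    public Balancer(String[] servers) { this.servers = servers; }

    // Round-robin: any request may go to any server; safe when a
    // stateless cluster proxy behind us re-routes to the right session host.
    public String roundRobin() {
        String s = servers[next];
        next = (next + 1) % servers.length;
        return s;
    }

    // IP-sticky: the same client IP always maps to the same server,
    // so session affinity survives without any proxy-side smarts.
    public String ipSticky(String clientIp) {
        return servers[Math.floorMod(clientIp.hashCode(), servers.length)];
    }
}
```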
          Marko Milicevic <[email protected]> wrote in message
          news:[email protected]...
          > Sorry Grant. I meant to reply to the newsgroup. I am putting this reply
          > back on the group.
          >
          > Thanks for your observations. I will keep them all in mind.
          > Is there any easy way for me to tell if I am getting acceptable performance
          > with our configuration? For example, how do I know if my use of entity
          > beans is slow? Will I have to do 2 implementations? One implementation
          > using entity beans and another implementation that replaces all entity use
          > with session beans, then compare the performance?
          >
          > One last question about the cluster proxy. You mentioned that you are using
          > Load Director with sticky sessions. We too are planning on using sticky
          > sessions with Central Dispatch. But since the cluster proxy is stateless,
          > does it matter if sticky sessions are used by the load balancer? No matter
          > which cluster proxy the request is directed to (by load balancing), the
          > cluster proxy will in turn redirect the request to the correct machine (with
          > the primary session). Is this correct? If I do not have to incur the cost
          > of sticky sessions (with the load balancer) I would rather avoid it.
          >
          > Thanks again Grant.
          >
          > Marko.
          > .
          >
          > -----Original Message-----
          > From: Grant Kushida [mailto:[email protected]]
          > Sent: Monday, May 01, 2000 5:16 PM
          > To: Marko Milicevic
          > Subject: RE: Is a cluster proxy a single-point-of-failure?
          >
          >
          > We haven't had too many app server VM crashes, although our web server
          > typically needs to be restarted every day or so due to socket strangeness or
          > flat out process hanging. Running 2 app server processes on the same box
          > would help with the VM stuff, but remember to get 2 NICs, because all
          > servers on a cluster need to run on the same port with different IP addrs.
          >
          > We use only stateless session beans and entity beans - we have had a number
          > of performance problems with entity beans though, so we will be migrating
          > away from them shortly, at least for our configuration-oriented tables.
          > Since each entity (unique row in the database) can only be accessed by one
          > transaction at a time, we ran into many deadlocks. There was also a lot of
          > lock thrashing because of this transaction locking. And of course the
          > performance hit of the naive database synching (read/write for each method
          > call). We're using bean-managed persistence in 4.5.1, so no read-only beans
          > for us yet.
          >
          > It's not the servlets that are slower, it's the response time due to the
          > funneling of requests through the ClusterProxy servlet running on a WebLogic
          > proxy server. You don't have that configuration so you don't really need to
          > worry. Although I have heard about performance issues with the cluster proxy
          > on IIS/Netscape, we found performance to be just fine with the Netscape
          > proxy.
          >
          > We're currently using no session persistence. I have a philosophical issue
          > with going to vendor-specific servlet extensions that tie us to WebLogic. We
          > do the session-sticky load balancing with a Cisco LocalDirector; meanwhile
          > we are investigating alternative servlet engines (Apache/JRun being the
          > frontrunner). We might set up Apache as our proxy server running the
          > Apache-WL proxy plugin once we migrate up to 5.1, though.
          >
          > > -----Original Message-----
          > > From: Marko Milicevic [mailto:[email protected]]
          > > Sent: Monday, May 01, 2000 1:08 PM
          > > To: Grant Kushida
          > > Subject: Re: Is a cluster proxy a single-point-of-failure?
          > >
          > >
          > > Thanks for the info Grant.
          > >
          > > That is good news. I was worried that the proxy maintained state, but
          > > since it is all in the cookie, then I guess we are ok.
          > >
          > > As for the app server, you are right. It is a single point of failure,
          > > but the machine is a beast (HP/9000 N-class) with hardware redundancy up
          > > the yin-yang. We were unsure how much benefit we would get if we
          > > clustered beans. There seems to be a lot of overhead associated with
          > > clustered entity beans since every bean read includes a synch with the
          > > database, and there is no failover support. Stateful session beans are
          > > not load balanced and do not support failover. There seems to be real
          > > benefit for only stateless beans and read-only entities, neither of
          > > which we have many of. We felt that we would probably get better
          > > performance by locating all of our beans on the same box as the data
          > > source. We are considering creating a two instance cluster within the
          > > single app server box to protect against a VM crash. What do you think?
          > > Do you recommend a different configuration?
          > >
          > > Thanks for the servlet performance tip. So you are saying that running
          > > servlets without clustering is 6-7x faster than with clustering? Are you
          > > using in-memory state replication for the session? Is this performance
          > > behavior under 4.5, 5.1, or both? We are planning on implementing under
          > > 5.1.
          > >
          > > Thanks again Grant.
          > >
          > > Marko.
          > > .
          >
          >
          > Grant Kushida <[email protected]> wrote in message
          > news:[email protected]...
          > > Seems like you'll be OK as far as session clustering goes. The cluster
          > > proxies running on your IIS servers are pretty dumb - they just analyze the
          > > cookie and determine the primary/secondary IP addresses of the WebLogic web
          > > servers that hold the session data for that request. If one goes down, the
          > > other is perfectly capable of analyzing the cookie too. As long as one proxy
          > > and one of your two clustered WL web servers survives, your users will have
          > > intact sessions.
          > >
          > > You do, however, have a single point of failure at the app server level, and
          > > at the database server level, compounded by the fact that both are on a
          > > single machine.
          > >
          > > Don't use WebLogic to run the cluster servlet. Its performance is
          > > terrible - we experienced a 6-7x performance degradation, and WL support had
          > > no idea why. They wanted us to run a version of ClusterServlet with timing
          > > code in it so that we could help them debug their code. I don't think so.
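The stateless routing Grant describes, where any proxy can recover the primary/secondary WebLogic server addresses from the session cookie alone, can be sketched as follows. This is a hypothetical illustration: the `id!primary!secondary` cookie layout and the names are assumptions for clarity, not WebLogic's exact cookie format.

```java
import java.util.Set;

// Hypothetical sketch: because routing state lives in the cookie, losing a
// proxy loses no session state, and any surviving proxy can route the request.
public class ClusterProxy {
    // Pick the target server from the cookie, preferring the primary.
    public static String route(String cookie, Set<String> aliveServers) {
        String[] parts = cookie.split("!");   // assumed "id!primary!secondary"
        String primary = parts[1];
        String secondary = parts[2];
        if (aliveServers.contains(primary)) return primary;
        if (aliveServers.contains(secondary)) return secondary;
        return null; // session lost: both replicas are down
    }
}
```

Failover of a proxy is therefore invisible to the client, as long as one proxy and one of the session's two WebLogic hosts survive.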
          

Similar Messages

  • Cluster Single Point of Failure

              It appears from the documentation that in order to have a cluster
              you need to set up a proxy server which acts as the director to
              all the other servers in the cluster.
              What happens if this proxy server goes down? Then it doesn't matter
              how many servers you have in the cluster.....
              Doesn't this represent a single point of failure?
              Thanks in advance!
              

    George,
              You may have multiple proxies, avoiding SPF. They are "stateless", so it
              doesn't matter what request goes to which, so you can use hardware load
              balancers in front of the proxies. Most hardware load balancers also support
              failover, avoiding SPF there as well.
              In 6.0, the proxy is not required, but unless you have reasonably good
              hardware load balancers in front, you should still use one or more proxies.
              One other thing to consider is distribution of work. The proxy (such as
              Apache) can act as the web server for static content and is much more
              efficient in terms of execution and in terms of dollars than Weblogic for
              such simple tasks.
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com
              +1.617.623.5782
              WebLogic Consulting Available
              

  • Linux cluster, no single point of failure

     I'm having difficulty setting up a Business Objects cluster on Linux with no single point of failure. Following the instructions for a custom install, I end up connecting to the CMS on the other server, with no CMS running on the server I'm doing the install on. That is a cluster, but we only have a CMS running on one server in this scenario, and we can't have a single point of failure. Could someone explain how to set up a 2-server clustered solution on Linux that doesn't have a single point of failure?

     It's not working: I can see my other node listed in the config, but the server information states that the SIA is available; I've checked network/port connectivity between the boxes, and the SIA is running and available on each box.
     In the instructions for installing on a Windows system I read about a step to connect to an existing CMS:
     http://wiki.sdn.sap.com/wiki/download/attachments/154828917/Opiton1_add_cms3.jpg
     http://wiki.sdn.sap.com/wiki/display/BOBJ/Appendix1-Deployment-howtoaddanotherCMS-Multipleenvironments
     Via the Linux install.sh script, no matter what I do, I can't find any way to reach that step.

  • Administrative Server - Single Point of Failure?

    From my understanding, all managed servers in a cluster get their
              configuration by contacting the administrative server in the cluster.
              So I assume that in the following scenario, the administrative server
              could be a single point of failure.
              Scenario:
              1. The machine on which the administrative server was running got a
              hardware defect.
              2. Due to some bad coding, one of the managed servers on another machine
              crashed.
              3. A small script tries to restart the previously failed server from step 2.
              I assume that step 3 is not possible, because there is no backup
              administrative server in the whole cluster, so the script will fail when
              trying to restart the crashed managed server.
              Did I understand this right? Do you have some suggestions on how to avoid
              this situation?
              what does BEA recommend to their enterprise customers?
              best regards
              Thomas
              

    Hi Thomas,
              There is no reason why you couldn't keep a backup administration server
              available that is NOT running, so that if the primary administration server
              went down, you could launch a secondary server with the same administration
              information, and the managed server could retrieve the required information
              from the backup administration server.
              regards,
              -Rob
              Robert Castaneda [email protected]
              CustomWare http://www.customware.com
              

  • Single points of failure?

    So, we are looking into the xServe RAID, and I'd like some insight into making things as bulletproof as possible.
    Right now we plan to have:
    a load balancer and a failover load balancer (running on cheap BSD hardware, since hardware load balancers are so damned expensive) feeding into
    two application servers, which communicate with
    one back-end server, which serves as both a database server and an NFS server for the app servers
    And the volumes that will be NFS-mounted would be on our xServe RAID, which would be connected directly to the back-end server.
    The networking hardware would all be failover through multiple switches and cards and so forth.
    The idea here is to avoid as many single points of failure as possible. Unfortunately at the moment we don't have a DBA who is fluent in clustering, so we can't yet get rid of the back-end server as a single point of failure. (Which is also why I'm mounting the RAID on it and sharing via NFS... if the database goes down, it won't matter that the file service is down too.) However, in the current setup, there's one other failure point: the RAID controllers on the xServe RAID.
    Performance is less important to us on this than reliability is. We can't afford two RAID units at the moment, but we can afford one full of 500 gig drives, and we really only need about 4 TB of storage right now, so I was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc. As far as I understand it, this eliminates the RAID controllers as a single point of failure, and as far as I know they are at least supposedly the only single point of failure in the xServe RAID system. (I could also do RAID 10 that way, but due to the way we store files, that wouldn't buy us anything except added complexity.)
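The controller-spanning mirror layout described above can be sketched with a hypothetical helper (class, drive, and controller names are illustrative): drive i on controller 0 is software-mirrored with drive i on controller 1, so no single RAID controller failure can take out both halves of any mirror.

```java
// Hypothetical sketch of the mirror plan: pair each drive on controller 0
// with the same-numbered drive on controller 1.
public class MirrorPlan {
    // Returns pairs as "c0d<i> <-> c1d<i>" for n drives per controller.
    public static String[] pairs(int drivesPerController) {
        String[] out = new String[drivesPerController];
        for (int i = 0; i < drivesPerController; i++) {
            out[i] = "c0d" + i + " <-> c1d" + i;
        }
        return out;
    }

    // Usable capacity: each two-drive mirror contributes one drive's worth.
    public static int usableGb(int drivesPerController, int driveGb) {
        return drivesPerController * driveGb;
    }
}
```

With fourteen 500 GB drives (seven per controller), this yields seven mirrors and 3500 GB usable.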
     And later on, down the road, when we have someone good enough to figure out how to cluster the database, if I understand correctly, we can spend the money to get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it, thus cutting out the middleman NFS service. (I am under the impression that this sort of volume-sharing is possible via FC... is that correct?)
     Comments? Suggestions? Corrections to my misapprehensions?
    --Adam Lang

    Camelot wrote:
    A couple of points.
    was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc.
     Really? Assuming you're using fourteen 500GB drives, this will give you seven volumes mounted on the server, each a 500GB mirror split across the two controllers. That's fine from a redundancy standpoint, but it's a pain from the standpoint of managing seven direct mountpoints on the server, as well as seven NFS shares, and 14 NFS mount points on the clients. Not to mention file allocations between the volumes, etc.
    If your application is such that it's easy to dictate which volume any particular file should be on and you don't mind managing all those volumes, go ahead, otherwise consider creating two RAID 5 volumes, one on each controller, using RAID 1 to mirror them on the back-end server and exporting a single NFS share to the clients/front-end servers.
    Quite simple, actually. But admittedly, two RAID 5s RAID-1-ed together would be much more efficient, space-wise.
    if I understand correctly, we can spend the money get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it
    Yes, although you'll need another intermediate server as the metadata controller to arbitrate connections from the two machines. It becomes an expensive option, but your performance will increase, as will the ease with which you can expand your storage network (adding more storage as well as more front-end clients).
    But then that means that the metadata controller is a single point of failure...?
    --Adam Lang

  • Forms 10g 2 ApplicationServers Single Point of Failure

    Hi,
     we are planning a migration from Forms 6i to Forms 10g and we are thinking about eliminating any single point of failure as much as possible.
     Today we have all those clients running the Forms runtime with the FMBs...
     They all create a connection to the database, which we have secured as much as possible against loss of service.
     After the migration we will have all those clients running a browser and calling a URL which points to the application server(s) running the Forms runtime processes. If this machine fails, none of the clients can work anymore. Because of that, we are planning for 2 AS, to be on the safer side against the loss of one server.
     But here starts the question:
     When a client starts, it will point to a URL which leads to an IP address.
     The IP address could be that of a hardware load balancer; if so, the LB will forward to Oracle Web Cache on one of the AS. If not, the IP address leads directly to one Web Cache.
     From there it proceeds to the HTTP server on one of the AS and then further to the mod_OC4J instance, which could be duplicated as well.
     All those "instances" (hardware load balancer, Web Cache, HTTP server, mod_OC4J instances) can be doubled or more, but that only makes sense if they run on different hardware, which means different IP addresses. I can imagine using a virtual IP address for connecting to the HLB or the Web Cache, but where is it split to the different real addresses without having one box as a single point of failure?
     I'm looking for a solution to double the application server as easily as possible, but without having the clients decide on which server they can work, and without having a single box in front which would lead to a S.P.O.F.
     I know that there are HLBs out there which can act as a cluster, so that should eliminate the problem, but I would like to know whether that can be done on the AS only.
    Thanks,
    Mark

    Thanks wilfred,
    yes I've read that manual. Probably not every single page ;-)
     I agree that high availability is a very broad and complex topic, but my question is (although it was difficult to explain what I mean) only about a small part of it:
     I understand that I can have multiple instances on each level (OC4J, HTTP, Web Cache, LBR), but where or what accepts one single URL and leads the requests to the available AS?
     As mentioned in my post before, we may test the Microsoft NLB cluster to divide the requests to the Web Cache instances on the 2 AS, and then the 2 Web Caches proceed to the 2 HTTP servers and so on.
     The idea of that is that Windows offers a virtual IP address for those 2 Windows servers, and somehow the requests will be transferred to a running Web Cache.
     Does that work correctly with session binding...?
    We'll see
    thanks,
    Mark

  • How can I design Load Balancing for distant Datacenters? without single point of failure

    Dear Experts,
     We are using the following very old and passive method of redundancy for our cloud SaaS, but it's time to do this properly. Can you please advise:
    Current issues:
    1. No load balancing. IP selection is based on primary and secondary IP configurations. If Primary fails to respond, IP record for DNS changes to secondary IP with TTL=1min
    2. When primary server fails, it takes around 15 min for clients to access the servers. Way too long!
    The target:
     A. Activate a load balancing mechanism to utilize the stand-by server.
    B. How can the solution be designed to avoid single point of failure? In the previous example, UltraDNS is a single point of failure.
    C. If using GSS is the solution, how can it be designed in both server locations (for active redundancy) using ordinary DNS server?
    D. How can HSRP, GSS, GSLB, and/or VIP be used? What would be the best solution?
    Servers are running ORACLE DB, MS SQL, and tomcat with 2x SAN of 64TB each.
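The primary/secondary DNS scheme in point 1 can be modeled minimally. The class and numbers are illustrative, not UltraDNS behavior: the worst case a client sees is roughly failure-detection time plus the cached TTL, and resolvers or browsers that cache beyond the TTL can stretch that further, which may explain the ~15 minutes observed despite a 1-minute TTL.

```java
// Toy model of DNS-based failover: the published A record follows a
// primary/secondary health check, and clients converge only after their
// cached TTL expires.
public class DnsFailover {
    // Which IP the DNS record currently advertises.
    public static String publishedIp(boolean primaryUp,
                                     String primaryIp, String secondaryIp) {
        return primaryUp ? primaryIp : secondaryIp;
    }

    // Rough worst-case client failover time: time to detect the failure
    // and update the record, plus the TTL still cached on the client side.
    public static int worstCaseSeconds(int detectSeconds, int ttlSeconds) {
        return detectSeconds + ttlSeconds;
    }
}
```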


  • Single point of failure for web dispatcher

    Hi
     I need advice on how I can resolve the single point of failure for the web dispatcher: in case the web dispatcher goes down, what are the alternatives which can be used to avoid this?
     In our environment we have a DB server with two application servers, and the web dispatcher is installed on the DB server; I need to know what I can do when the web dispatcher on the DB server crashes and cannot be restarted at all.
    We are running oracle 10.2.0.2.0 on AIX 5.3.
    Regards,
    Codlick

    Hi Codlick,
    the answer is, you cannot (switch to two web dispatchers).
    If you want to use two web dispatchers, they need something in front, like a hardware load balancer. This would actually work, as WD know their sessions and sticky servers for those. But remember you always need a single point for the incoming address (ip).
    Your problem really is about switchover groups. Both WD need to run in different switchover groups and need to switch to the same third software. I'm not sure if your switchover software can handle this (I'm not even sure if anyone can do this...), as this means the third WD needs to be in two switchover groups at the same time.
    Hope this helps,
    Regards,
    Benny

  • Is OAM a single point of failure?

    Hi Adam
     I have a serious doubt about our OAM implementation...
     What is the best practice for OAM implementation, and what are the fallback plans for critical web application integrations?
     Once the web applications are integrated with OAM, login traffic will always redirect to OAM for authentication and authorization...
     But once OAM is down, all the critical applications are down!!
     So, from the customer's point of view, OAM seems like a single point of failure..
     Do you have any brilliant ideas on this?
    Thanks in million...
    Best Regards
    John

     john,chong wrote:
     Hi Pramod
     Yup, HA must always be in place for this kind of critical implementation..
     BUT for an ESSO (desktop ESSO) implementation, even if ESSO is down, the user is still able to do a manual login to their application..
     Really? What if the password has been changed by ESSO to a random one for some application? That's very common in ESSO implementations. The user doesn't know the password, only ESSO does.

  • Primary site server a single point of failure?

     I'm installing ConfigMgr 2012 R2, and employing a redundant design as much as possible. I have 2 servers, call them CM01 and CM02, in a single primary site, and on each server I have installed the following roles: Management Point, Distribution Point, Software Update Point, as well as installing the SMS Provider on both servers. SQL is on a 3rd box.
     I am now testing failover from a client perspective by powering down CM01 and querying the current management point on the client: (get-wmiobject -namespace root\ccm -class ccm_authority).CurrentManagementPoint . The management point assigned to the client flips to the 2nd server, CM02, as expected. However, when I try to open the CM management console, I cannot connect to the Site, and reading the SMSAdminUI log reveals this error: "Provider machine not found".
    Is the Primary site server a single point of failure? 
    Why can't I point the console to a secondary SMS provider?
    If this just isn't possible, what is the course of action to restore console access once the Primary Site server is down?
    Many Thanks

    Yes, that is a completely false statement. Using a CAS and multiple primaries in fact introduces multiple single points of failure. The only technical reason for a CAS and multiple primary sites is scale-out; i.e., supporting 100,000+ managed systems.
    HA is achieved from a client perspective by adding multiple site systems hosting the client facing roles: MP, DP, SUP, App Catalog.
    Beyond that, all other roles are non-critical to client operations and thus have no built-in HA mechanism. This includes the site server itself also.
    The real question is what service that ConfigMgr provides do you need HA for?
    Jason | http://blog.configmgrftw.com

  • Single Point of Failure

    How do you build a network without a single point of failure?

    Hi Friend,
    Your question is very broad.
    Redundancy can be on the LAN, the WAN, routing, etc.
    On the LAN there are many features you can use for complete redundancy, like STP (root bridge and secondary root bridge) and EtherChannel between access and distribution and between distribution and core; you can also use UplinkFast for fast convergence, and you can run HSRP. So there are many features available for complete redundancy.
    On the WAN side you can implement leased lines with Frame Relay and ISDN as backup, or you can play around with your routing protocols and static routes for redundancy.
    Nowadays there are many features in IOS, on both the LAN and WAN side, that can be used for complete redundancy, so if you are aware of the features you can design your network very well.
    HTH
    Ankur
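
    As a concrete illustration of one of the features Ankur mentions, here is a minimal HSRP sketch for a pair of distribution switches. The VLAN number, addresses, and priorities are made up for illustration, so adapt them to your design:

    ```
    ! Switch A -- preferred active gateway for VLAN 10 (hypothetical addressing)
    interface Vlan10
     ip address 10.1.10.2 255.255.255.0
     standby 10 ip 10.1.10.1
     standby 10 priority 110
     standby 10 preempt
    !
    ! Switch B -- standby gateway; takes over 10.1.10.1 if Switch A fails
    interface Vlan10
     ip address 10.1.10.3 255.255.255.0
     standby 10 ip 10.1.10.1
     standby 10 priority 100
    ```

    Hosts point their default gateway at the virtual address 10.1.10.1, so the gateway itself is no longer a single point of failure.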

  • N5K - single point of failure?

    When both N5Ks (running 4.2(1)N1(1b)) are powered down and one of them fails to power up, all N2Ks connected to the two N5Ks also fail to come up. This scenario could happen during a power maintenance window when both N5Ks are brought down.
    It looks like it is related to the following:
    "Beginning with Cisco NX-OS Release 5.0(2)N1(1), you can configure the Cisco Nexus 5000 Series switch to restore vPC services when its peer switch fails to come online by using the reload restore command. You must save this setting in the startup configuration. On reload, Cisco NX-OS Release 5.0(2)N1(1) starts a user-configurable timer (the default is 240 seconds). If the peer-link port comes up physically or the peer-keepalive is functional, the timer is stopped."
    Can anyone confirm that?
    Thanks
    Eng Wee

    This design option works.
    However, keep in mind that your design has a single point of failure on the Nexus side; if you need it redundant end to end, you need to consider adding a second switch to the topology.
    Hope this helps
    Sent from Cisco Technical Support iPad App
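
    For the reload scenario in the question, the fix quoted from the release notes would sit under the vPC domain configuration, roughly as sketched below. This assumes NX-OS 5.0(2)N1(1) syntax (later releases renamed the feature auto-recovery), so verify the exact commands against the configuration guide for your release:

    ```
    vpc domain 1
      reload restore delay 240
    ! The setting must survive a reload, so save it:
    ! copy running-config startup-config
    ```

    With this saved, a single N5K that comes up without ever seeing its peer will bring up its vPCs (and the attached N2Ks) after the timer expires instead of keeping them down.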

  • Nics and Switch - single point of failure ?

    Hi all
    we have installed our RAC on two Dell PowerEdge 1855 blades.
    These blades have a limitation of two NICs. The blades are in a chassis with two embedded switches.
    Now our problem is: is it possible to configure the NICs and switches so that the switches will not be a single point of failure?
    We have tried different configurations, but we did not find any solution.
    My opinion is that three NICs are needed to achieve intra-cluster redundancy with two switches, so this hardware is not a good solution.
    What do you think?
    Regards
    Paolo

    We pull the cable on the public NIC, and the VIP never fails over. This is on Solaris 10.

    When you pull the interface cable out on node1, can you still ping the node itself (node1) and its VIP (node1-vip)?
    Could you paste the output from "crs_stat -p <vip_resource_name>"?
    (vip_resource_name is something like "ora.node1.vip".)
    I want to see whether the VIP monitoring interval is 0 (in that case it's not monitored).
    If not, then I would check the logs in $ORA_CRS_HOME/logs/<nodename>/racg/ora.<nodename>.vip.log
    I doubt you will see a lot of detail, but you can enable debug tracing by setting USR_ORA_DEBUG=1 (what you see in crs_stat -p is USR_ORA_DEBUG=0 by default).
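
    The checks above can be run as a short CLI session on the affected node; the node and resource names (node1, ora.node1.vip) are placeholders from the post, so substitute your own:

    ```
    # Inspect the VIP resource profile: look for the check interval
    # and the USR_ORA_DEBUG setting
    crs_stat -p ora.node1.vip

    # With the public cable pulled, test reachability from node1 itself
    ping node1
    ping node1-vip

    # Then review the racg log for the VIP resource
    # (path taken from the post above; exact layout varies by version)
    tail -100 $ORA_CRS_HOME/logs/node1/racg/ora.node1.vip.log
    ```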

  • Single Point of failure with Administrative Server?

              We are in the process of upgrading from WLS 5.1 to 6.1.
              With 5.1 clustering, you could run any instance of the cluster and the cluster would
              be up. Further, you could bring any of those instances up and down after a failure
              of any node without any problem.
              It looks like with 6.1, if your administrative server goes down, your cluster is still
              up as long as your managed servers don't go down or get bounced.
              What happens if you need to restart one of your managed servers while the administrative
              server is out? My guess is you are SOL. Is it true that while the administrative
              server is down you can't restart any of your managed servers?
              If this is the case, I think WLS 6.1 clustering took a big step backwards from 5.1
              clustering (failover-wise, at least). With 5.1, if one of your nodes in a cluster
              blew out, you could take as long as you needed to fix it. The other nodes were fully
              functional. Now it looks like if the 'super node', aka the administrative server, goes
              down, you need to fix it ASAP or you can't release any new code or restart any servers.
              Am I missing something?
              

              Scott W wrote:
              > We are in the process of upgrading from wls 5.1 to 6.1.
              > With 5.1 clustering you could run any instance of the cluster and the cluster would
              > be up. Further you could bring any of those instances up and down after a failure
              > of any node w/o any problem.
              >
              > It looks like with 6.1 if you administrative server goes down your cluster is still
              > up as long as your managed servers don't go down or get bounced.
              >
              > What happens if you need to restart one of your managed servers, while the administrative
              > server is out? My guess is you are SOL. Is is true that while the administrative
              > server is down you can't restart any of your managed servers?
              >
              Yes.
              But this is solved in 7.0, i.e. Managed Server Independence.
              See http://e-docs.bea.com/wls/docs70//////adminguide/startstop.html#1057374
              Kumar
              > If this is the case I think WLS 6.1 clustering took a big step backwards from 5.1
              > clustering (fail-over wise at least). With 5.1 if one of your nodes in a cluster
              > blew out, you could take as long as you needed to fix it. The other nodes were fully
              > functional. Now it looks like if the 'super node' aka administrative server goes
              > down, you need to fix it ASAP or you can't relase any new code or restart any servers.
              >
              > Am I missing something?
              >
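
              With Managed Server Independence, a managed server can boot from its locally cached
              configuration when the admin server is unreachable. A rough sketch of such a start
              command follows; the server name and credentials are placeholders, and the key point
              is that the server falls back to its local config copy when it cannot reach the
              address given by -Dweblogic.management.server (or when that flag is omitted):

              ```
              # Hypothetical MSI-mode start of a managed server (WLS 7.0+)
              java -Dweblogic.Name=managedServer1 \
                   -Dweblogic.management.username=system \
                   -Dweblogic.management.password=password \
                   weblogic.Server
              ```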
              

  • Shared Application Tier -- NFS Single Point of Failure

    I am trying to convince mgmt that a shared application tier via NFS is the only way to go.
    Currently we have a 4-node environment: 1 node dedicated to the database, then 3 application tiers.
    Is there any reasonably fault-tolerant solution for NFS?
    I am not a storage expert, but can a storage area be created on a SAN, and then have all tiers NFS-mount that SAN area?
    Inherently, the SAN devices are fault tolerant, and dual HBAs to dual SAN switches provide protection.
    Am I missing something? By the way, we are a Solaris shop.
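
    If the SAN-backed NFS approach is workable, the client side on Solaris would look something like the sketch below. The server name nfs-ha and the paths are hypothetical, and the real fault tolerance has to come from the NFS service itself (e.g., a clustered filer or an HA-NFS cluster service fronting the SAN storage), not from the mount options:

    ```
    # Hypothetical Solaris client mount of a shared application tier
    mount -F nfs -o hard,intr,rsize=32768,wsize=32768 \
        nfs-ha:/export/appl_top /u01/appl_top
    ```

    The hard,intr options make clients ride out a brief NFS-server failover rather than erroring out, which is what you want for a shared APPL_TOP.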

    Sawwan,
    Look at the discussion in Steven Chan's forum section:
    http://blogs.oracle.com/stevenChan/2007/05/reducing_patching_downtimes_vi.html
    Posted on June 1, 2009 14:30
    Robin Chatterjee:
    Hi Atul, I notice that this may be a dead thread, but I believe the answer to your question is that when you run AutoConfig and check the log, you will notice there is a preliminary stage where it says it is updating context values in the database. At that point, if the database has a higher serial number than the filesystem file, then the filesystem context file will be updated with the values from the database (I assume not all of the values). This then results in the creation of an updated XML file; in fact, the XML file is rewritten every time you run AutoConfig.
