Eliminating points of failure...

Hello Everyone,
I am looking into a SAN system for a client and wanted to get some feedback regarding the current status of Xsan.
1. First and foremost: Lion Server and Xsan 2 - is this combination production ready? Should I present such a system to a client, or should I consider an alternative software or hardware solution?
If it is production ready, then would the following be possible?
I would like to eliminate all single points of failure. In a SAN system I can identify several key components; somewhere in there you will find my confusion.
1. The Fibre Channel switch creates the Fibre Channel fabric - I CAN use a secondary switch, which would connect to the second port on my Fibre Channel cards.
2. The metadata controller - I CAN use a standby secondary backup metadata controller (or, I imagine, as many as I want). How many can I use?
3. Client machines - these will actually be running file sharing services and allowing access for my users to connect. I CAN use 2 more servers connected to the Fibre Channel switch to serve AFP and SMB users. I could load balance by splitting up users, or make only one available at a time.
4. The private switch - I CAN use another private switch with a THIRD Ethernet card, but Mac minis cannot take a third 1 Gb card, sooo what do I do? Mac Pros only??
Now confusion sets in...
5. Metadata RAID array - how can I make this redundant? Obviously it would be RAID 1, so a failed drive is covered, but what if the RAID unit itself physically fails, like blows up? Can I have a second physical RAID unit also acting as a backup metadata RAID array? Also, in the setup scripts the Promise RAIDs are built with the metadata RAID LUN and the storage RAID LUN in the same box. Do people use a dedicated RAID unit just for the metadata LUN, or are they combined into the same box in an ideal setup?
6. That leads me to the second part, the storage pool. Can I spread the data redundantly across two RAID units but keep it as one volume for my client servers, so that if the same thing happens and one RAID unit blows up with my whole storage pool on it, the second one already has the data redundantly written to it?
I am still reading up and would appreciate anyone's help in this matter. Thank you.

I love this little conversation I am having with myself. My end goal is to create an AFP/SMB high-availability cluster or system.
Basically what I have discovered is the following:
SAN - The SAN system I envisioned is going to cost well into the $35,000-and-above range. The problem is the manuals give you examples of systems that still have points of failure (while needing 2 metadata Ethernet switches and 2 metadata Fibre Channel controllers). If you want to build a true high-availability system it needs a lot of hardware, which makes me wonder: why even TEASE people by including Xsan, knowing it's a whole bag of hurt if set up incorrectly? And on top of that, NO SAN mirroring, so if a backplane fails on a RAID/LUN you are SOL!!!
Poor Man's System - On the other side of the spectrum, I envision building a system containing TWO Mac Pro servers handling AFP, but not simultaneously. Each would be connected to a RAID unit using Fibre Channel, but the volume would only ever be mounted on ONE of the systems and never on both simultaneously. Wow, seems like a great solution; I could even drop $6,000 for a Fibre Channel switch to make it a bit cleaner. BUT wait.... What happened to IP failover in Lion? That's right, it has been removed, silently assassinated in the night without a sound or cry.
Failover replacement software - My only option was to create a process made up of 3 scripts that fully simulate what heartbeatd and failoverd provided on previous systems. They check for constant availability and monitor services; when issues are detected they try to repair the issue, then fail over if that is not successful (a rough sketch of the idea is below). Right now I am using this with two servers that do not share the same file pool. They sync nightly, so if a failover occurs, the data is from the night before until we can bring the main server back up and sync it.
I would like to incorporate my Poor Man's System into my custom failover software so that the data is current from the moment before the failover.
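Roughly, the watchdog portion of those scripts boils down to a loop like the sketch below. This is only an illustration in Python, not the actual scripts described above: the peer address, the AFP port, and the serveradmin/diskutil calls are placeholder assumptions, and of course the shared FC volume must never be mounted on both nodes at once.

#!/usr/bin/env python3
"""Minimal heartbeat/failover sketch (a stand-in for the old heartbeatd/failoverd).
The peer address, port, and the serveradmin/diskutil commands are hypothetical
placeholders, not the real configuration."""
import socket
import subprocess
import time

PEER_HOST = "10.0.0.2"          # hypothetical address of the primary server
AFP_PORT = 548                  # AFP listens on TCP 548
CHECK_INTERVAL = 10             # seconds between health checks
FAILURES_BEFORE_TAKEOVER = 3    # how many misses before we fail over

def service_is_up(host, port, timeout=5.0):
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def try_repair():
    """Ask the primary to bounce its AFP service (hypothetical repair step)."""
    result = subprocess.run(
        ["ssh", PEER_HOST, "sudo serveradmin stop afp; sudo serveradmin start afp"],
        capture_output=True)
    return result.returncode == 0

def take_over():
    """Fail over: mount the shared FC volume here and start serving AFP.
    The device name is a placeholder; never mount it on both nodes at once."""
    subprocess.run(["diskutil", "mount", "disk2s2"], check=False)
    subprocess.run(["sudo", "serveradmin", "start", "afp"], check=False)
    # Claiming the public/service IP address would also happen here (omitted).

def main():
    failures = 0
    while True:
        if service_is_up(PEER_HOST, AFP_PORT):
            failures = 0
        else:
            failures += 1
            if failures == 1 and try_repair():
                failures = 0            # repair worked, keep watching
            elif failures >= FAILURES_BEFORE_TAKEOVER:
                take_over()
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    main()

The parts this sketch leaves out are exactly the hard ones: making sure the failed node is really down before mounting the volume (so both nodes never write to it), moving the service IP, and resyncing when the primary comes back.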
Thoughts?????????

Similar Messages

  • Forms 10g 2 ApplicationServers Single Point of Failure

    Hi,
    we are planning a migration from Forms6i to Forms10g, and we are thinking about eliminating, as far as possible, any single point of failure.
    Today we have all those Clients running Forms-Runtime with the FMBs ...
    They all create a connection against the Database which we have secured as much as possible against Loss of Service.
    After the migration we will have all those clients running a browser and calling a URL which points to the Application Server(s) running the Forms runtime processes. If this machine fails, none of the clients can work anymore. Because of that, we are planning for 2 AS, to be on the safe side against the loss of one server.
    But here starts the question :
    When a client starts, it will point to a URL which leads to an IP address.
    The IP address could be that of a hardware load balancer; if so, the LB will forward to Oracle Web Cache on one of the AS. If not, the IP address leads directly to one Web Cache.
    From there it proceeds to the HTTP Server on one of the AS and then further to the MOD-OC4J instance, which could be duplicated as well.
    All those "instances" - hardware load balancer, Web Cache, HTTP Server, MOD-OC4J instances - can be doubled or more, but that only makes sense if they run on different hardware, which means different IP addresses. I can imagine using a virtual IP address for connecting to the HLB or the Web Cache, but where is it split to the different real addresses without having one box as a single point of failure?
    I'm looking for a solution to double the Application Server as easily as possible, but without having the clients decide on which server they can work, and without having a single box in front which would lead to a S.P.O.F.
    I know that there are HLBs out there which can act as a cluster, so that should eliminate the problem, but I would like to know whether that can be done on the AS only.
    Thanks,
    Mark

    Thanks wilfred,
    yes I've read that manual. Probably not every single page ;-)
    I agree that High Availability is a very broad and complex topic, but my question is (although it was difficult to explain what I mean) only on a small part of it:
    I understand that I can have multiple instances at each level (OC4J, HTTP, Web Cache, LBR), but where or what accepts one single URL and leads the requests to the available AS?
    As mentioned in my post before, we may test the Microsoft NLB cluster to divide the requests between the Web Cache instances on the 2 AS, and then the 2 Web Caches proceed to the 2 HTTP servers and so on.
    The idea of that is that Windows offers a virtual IP address for those 2 Windows servers, and somehow the requests will be transferred to a running Web Cache.
    Does that work correctly with session-Binding ...
    We'll see
    thanks,
    Mark

  • Cold backup can use archivelog to recover database to point of failure?

    Hi,
    I have a small doubt. In any DB, let us say we have the following conditions:
    1) DB is running in ARCHIVELOG mode.
    2) Cold backup is performed every sunday.
    3) No hot backup or RMAN backup is taken.
    So if any failure happens during the week, say on Friday, can we recover our database till the point of failure with the help of the cold backup? If yes, then please specify the steps for how to do it.
    Could archive logs be applied on a cold backup?
    Thanks,
    Shailesh

    Shailesh.mishra wrote:
    Hi,
    I have a small doubt. In any DB, let us say we have the following conditions:
    1) DB is running in ARCHIVELOG mode.
    2) Cold backup is performed every Sunday.
    3) No hot backup or RMAN backup is taken.
    So if any failure happens during the week, say on Friday, can we recover our database till the point of failure with the help of the cold backup? If yes, then please specify the steps for how to do it.
    Could archive logs be applied on a cold backup?
    Shailesh,
    Cold or hot backup doesn't govern the recovery. Whether the recovery is going to be a complete or incomplete recovery, I mean whether we can recover the complete data or are thrown back to the last good backup stage, is governed by the availability of the archives. If you have archive logging enabled, this means you have all the transaction records already with you. So if you have taken your backup on Sunday and faced a crash on Tuesday, all you have to do is restore the last good backup from Sunday, give a recover command, and Oracle will start applying the archive logs to the files. This would be a complete recovery, provided nothing else had happened which could stop it, for example the loss of an archive log.
    So the answer would be: yes, archive logs are very well applicable on a cold backup.
    HTH
    Aman....

  • How can I design Load Balancing for distant Datacenters? without single point of failure

    Dear Experts,
    We are using the following very old and passive method of redundancy for our cloud SaaS, but it's time to make it appropriate. Can you please advise:
    Current issues:
    1. No load balancing. IP selection is based on primary and secondary IP configurations. If the primary fails to respond, the DNS record changes to the secondary IP with TTL = 1 min (see the sketch at the end of this post).
    2. When primary server fails, it takes around 15 min for clients to access the servers. Way too long!
    The target:
    A. Activate a load balancing mechanism to utilize the standby server.
    B. How can the solution be designed to avoid single point of failure? In the previous example, UltraDNS is a single point of failure.
    C. If using GSS is the solution, how can it be designed in both server locations (for active redundancy) using ordinary DNS server?
    D. How can HSRP, GSS, GSLB, and/or VIP be used? What would be the best solution?
    Servers are running ORACLE DB, MS SQL, and tomcat with 2x SAN of 64TB each.
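    To make point 1 above concrete, the current DNS-based failover amounts to roughly the sketch below. The names, addresses, and the update_a_record helper are placeholders; the real update call depends on whatever API the DNS provider (UltraDNS here) exposes.

    #!/usr/bin/env python3
    """Sketch of the primary/secondary DNS failover described in point 1.
    Hostnames, IPs, and update_a_record are hypothetical placeholders."""
    import socket
    import time

    SERVICE_NAME = "app.example.com"   # record the clients resolve (placeholder)
    PRIMARY_IP = "203.0.113.10"        # primary datacenter (placeholder)
    STANDBY_IP = "203.0.113.20"        # standby datacenter (placeholder)
    SERVICE_PORT = 443
    TTL_SECONDS = 60                   # the 1-minute TTL from the post

    def healthy(ip, port, timeout=5.0):
        """True if the server at ip:port accepts a TCP connection."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def update_a_record(name, ip, ttl):
        """Placeholder for the DNS provider's update API call."""
        print("would publish %s -> %s with TTL %ss" % (name, ip, ttl))

    def monitor():
        current = PRIMARY_IP
        while True:
            desired = PRIMARY_IP if healthy(PRIMARY_IP, SERVICE_PORT) else STANDBY_IP
            if desired != current:
                update_a_record(SERVICE_NAME, desired, TTL_SECONDS)
                current = desired
            time.sleep(30)

    if __name__ == "__main__":
        monitor()

    Even with a 1-minute TTL, resolvers and clients that cache the old record for longer are a likely reason failover is observed to take around 15 minutes, which is why the targets above point toward GSS/GSLB or a load balancer owning a single virtual IP rather than DNS record changes alone.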

  • Is a cluster proxy a single-point-of-failure?

    Our group is planning on configuring a two machine cluster to host servlets/JSPs and a single backend app server to host all EJBs and a database.
    IIS is going to be configured on each of the two cluster machines with a cluster plugin. IIS is being used to optimize performance of static HTTP requests. All servlet/JSP requests would be forwarded to the WebLogic cluster. Resonate's Central Dispatch is also going to be installed on the two cluster machines. Central Dispatch is being used to provide HTTP request load-balancing and to provide failover in case one of the IIS servers fails (because the IIS process fails or the cluster machine it's on fails).
    Will this configuration work? I'm most concerned about the failover of the IIS cluster proxy. If one of the proxies is managing a sticky session (X), what happens when the machine (that the proxy is on) dies and we fail over to the other proxy? Is that proxy going to have any awareness of session X? Probably not. The new proxy is probably going to believe this request is new and forward the request to a machine which may not host the existing primary session. I believe this is an error?
    Is a cluster proxy a single point of failure? Is there any way to avoid this? Does the same problem exist if you use WebLogic's HTTP server (as the cluster proxy)?
    Thank you.
    Marko.

    We found our entity bean bottlenecks using JProbe Profiler. It's great for watching the application and seeing what methods it spends its time in. We found an exceedingly high number of calls to ejbLoad were taking a lot of time, probably due to the fact that our EBs don't all have bulk-access methods.
    We also had to do some low-level method tracing to watch WebLogic thrash EB locks; basically it locks the EB instance every time it is accessed in a transaction. Our DBA says that Oracle is seeing a LOT of lock/unlock activity also. Since much of our EB data is just configuration information, we don't want to incur the overhead of Java object locks, excess queries, and Oracle row locks just to read some config values. Deadlocks were also a major issue because many txns would access the same config data.
    Our data is also very normalized, and also very recursive, so using EBs makes it tricky to do joins and recursive SQL queries. It's possible that we could get good EB performance using bulk-access methods and multi-table EBs that use custom recursive SQL queries, but we'd still have the lock-thrashing overhead. Your app may differ; you may not run into these problems and EBs may be fine for you.
    If you have a cluster proxy you don't need to use sticky sessions with your load balancer. We use sticky sessions at the load-balancer level because we don't have a cluster proxy. For our purposes we decided that the minimal overhead of hardware IP-sticky session load balancing was more tolerable than the overhead of a dog-slow cluster proxy on WebLogic. If you do use the proxy then your load balancer can do round-robin or any other algorithm amongst all the proxies.
    Marko Milicevic <[email protected]> wrote in message news:[email protected]...
    > Sorry Grant. I meant to reply to the newsgroup. I am putting this reply back on the group.
    >
    > Thanks for your observations. I will keep them all in mind.
    > Is there any easy way for me to tell if I am getting acceptable performance with our configuration? For example, how do I know if my use of entity beans is slow? Will I have to do 2 implementations? One implementation using entity beans and another implementation that replaces all entity use with session beans, then compare the performance?
    >
    > One last question about the cluster proxy. You mentioned that you are using LocalDirector with sticky sessions. We too are planning on using sticky sessions with Central Dispatch. But since the cluster proxy is stateless, does it matter if sticky sessions are used by the load balancer? No matter which cluster proxy the request is directed to (by load balancing), the cluster proxy will in turn redirect the request to the correct machine (with the primary session). Is this correct? If I do not have to incur the cost of sticky sessions (with the load balancer) I would rather avoid it.
    >
    > Thanks again Grant.
    >
    > Marko.
    >
    > -----Original Message-----
    > From: Grant Kushida [mailto:[email protected]]
    > Sent: Monday, May 01, 2000 5:16 PM
    > To: Marko Milicevic
    > Subject: RE: Is a cluster proxy a single-point-of-failure?
    >
    > We haven't had too many app server VM crashes, although our web server typically needs to be restarted every day or so due to socket strangeness or flat out process hanging. Running 2 app server processes on the same box would help with the VM stuff, but remember to get 2 NICs, because all servers on a cluster need to run on the same port with different IP addrs.
    >
    > We use only stateless session beans and entity beans - we have had a number of performance problems with entity beans though, so we will be migrating away from them shortly, at least for our configuration-oriented tables. Since each entity (unique row in the database) can only be accessed by one transaction at a time, we ran into many deadlocks. There was also a lot of lock thrashing because of this transaction locking. And of course the performance hit of the naive database synching (read/write for each method call). We're using bean-managed persistence in 4.5.1, so no read-only beans for us yet.
    >
    > It's not the servlets that are slower, it's the response time due to the funneling of requests through the ClusterProxy servlet running on a WebLogic proxy server. You don't have that configuration so you don't really need to worry. Although I have heard about performance issues with the cluster proxy on IIS/Netscape, we found performance to be just fine with the Netscape proxy.
    >
    > We're currently using no session persistence. I have a philosophical issue with going to vendor-specific servlet extensions that tie us to WebLogic. We do the session-sticky load balancing with a Cisco LocalDirector; meanwhile we are investigating alternative servlet engines (Apache/JRun being the frontrunner). We might set up Apache as our proxy server running the Apache-WL proxy plugin once we migrate up to 5.1, though.
    >
    > > -----Original Message-----
    > > From: Marko Milicevic [mailto:[email protected]]
    > > Sent: Monday, May 01, 2000 1:08 PM
    > > To: Grant Kushida
    > > Subject: Re: Is a cluster proxy a single-point-of-failure?
    > >
    > > Thanks for the info Grant.
    > >
    > > That is good news. I was worried that the proxy maintained state, but since it is all in the cookie, then I guess we are ok.
    > >
    > > As for the app server, you are right. It is a single point of failure, but the machine is a beast (HP/9000 N-class) with hardware redundancy up the yin-yang. We were unsure how much benefit we would get if we clustered beans. There seems to be a lot of overhead associated with clustered entity beans, since every bean read includes a synch with the database, and there is no failover support. Stateful session beans are not load balanced and do not support failover. There seems to be real benefit for only stateless beans and read-only entities, neither of which we have many of. We felt that we would probably get better performance by locating all of our beans on the same box as the data source. We are considering creating a two instance cluster within the single app server box to protect against a VM crash. What do you think? Do you recommend a different configuration?
    > >
    > > Thanks for the servlet performance tip. So you are saying that running servlets without clustering is 6-7x faster than with clustering? Are you using in-memory state replication for the session? Is this performance behavior under 4.5, 5.1, or both? We are planning on implementing under 5.1.
    > >
    > > Thanks again Grant.
    > >
    > > Marko.
    >
    > Grant Kushida <[email protected]> wrote in message news:[email protected]...
    > > Seems like you'll be OK as far as session clustering goes. The cluster proxies running on your IIS servers are pretty dumb - they just analyze the cookie and determine the primary/secondary IP addresses of the WebLogic web servers that hold the session data for that request. If one goes down, the other is perfectly capable of analyzing the cookie too. As long as one proxy and one of your two clustered WL web servers survives, your users will have intact sessions.
    > >
    > > You do, however, have a single point of failure at the app server level, and at the database server level, compounded by the fact that both are on a single machine.
    > >
    > > Don't use WebLogic to run the cluster servlet. Its performance is terrible - we experienced a 6-7x performance degradation, and WL support had no idea why. They wanted us to run a version of ClusterServlet with timing code in it so that we could help them debug their code. I don't think so.

  • Administrative Server - Single Point of Failure?

    From my understanding, all managed servers in a cluster get their configuration by contacting the administrative server in the cluster. So I assume in the following scenario, the administrative server could be a single point of failure.
    Scenario:
    1. The machine on which the administrative server was running got a hardware defect.
    2. Due to some bad coding, one of the managed servers on another machine crashed.
    3. A small script tries to restart the previously failed server from step 2.
    I assume that step 3 is not possible, because there is no backup administrative server in the whole cluster, so the script will fail when trying to start the crashed managed server again.
    Did I understand this right? Do you have some suggestions on how to avoid this situation?
    What does BEA recommend to their enterprise customers?
    Best regards
    Thomas

    Hi Thomas,
    There is no reason why you couldn't keep a backup administration server available that is NOT running, so that if the primary administration server went down, you could launch the secondary server with the same administration information and the managed servers could retrieve the required information from the backup administration server.
    Regards,
    -Rob
    Robert Castaneda [email protected]
    CustomWare http://www.customware.com

  • Re run a failed map from the point of failure.

    Can anyone please let me know if you can re-run a map from the point of failure.
    Rdgs,
    Dominic

    Hi Dominic,
    Interesting question, I've never run into a requirement for that, even though it seems pretty standard.
    Just philosophising about this: you can build a mapping with a relatively low commit frequency value (say 1% of total rows = 20,000), and a check mechanism on the PK value that checks in your target whether the PK already exists; if so, you don't insert it, if not, you do insert it.
    The check mechanism is possible by building a view on the source with a NOT EXISTS clause looking up the target, or by using the target as a source in the mapping, outer joining it against the source and then only picking up the rows for which there is no match. Pick the one with the best performance.
    Of course this only works if the source in your mapping is a table that contains all the data necessary to do this check (i.e. if the 'end' source for your target table is the result of the work of lots of mapping operators, it might prove to be harder).
    Then when you have built this mapping, you can simply re-run it without worrying about introducing duplicates etc.; a rough sketch of the idea is below.
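    To make that concrete, here is a rough sketch of such a restartable load done outside OWB; the table, column, and connection details are invented for illustration, and it assumes the cx_Oracle driver is available.

    #!/usr/bin/env python3
    """Sketch of an idempotent (re-runnable) load: only rows whose primary key
    is not already in the target get inserted, so a re-run after a failure
    picks up where the last committed work left off. All names are made up."""
    import cx_Oracle

    RESTARTABLE_INSERT = """
        INSERT INTO target_table (pk_col, col_a, col_b)
        SELECT s.pk_col, s.col_a, s.col_b
        FROM   source_table s
        WHERE  NOT EXISTS (
            SELECT 1 FROM target_table t WHERE t.pk_col = s.pk_col
        )
    """

    def run_load(dsn, user, password):
        """Run the restartable insert and return the number of rows loaded."""
        with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
            cur = conn.cursor()
            cur.execute(RESTARTABLE_INSERT)
            rows = cur.rowcount
            conn.commit()
            return rows

    if __name__ == "__main__":
        print(run_load("dbhost/ORCLPDB1", "scott", "tiger"))

    A single set-based statement like this commits once at the end; the low commit frequency mentioned above matters when the load inserts in smaller batches, because every chunk committed before the failure is then skipped automatically on the re-run.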
    Hope this helps.
    Good luck, Patrick

  • Load plan not starting from point of failure in obia 11

    hi,
    I am using OBIA 11.1.1.7, and once the load plan is started, a few sessions fail due to duplicates. After removing the duplicates, when I start the load plan from Configuration Manager, it starts again from the beginning, not from the point of failure. I tried restarting the load plan from ODI Studio as well, from the load plan and the scenarios too, but it again started from the beginning, not from the point of failure.
    Can someone explain how to use the ODI "restart from point of failure" feature in OBIA?
    Thanks,
    Paresh

    JamesW wrote:
    That's exactly it. I shut down the database, made the correction in the PFILE, started the DB using the PFILE, and then created a new SPFILE. I restarted the DB (using the SPFILE), restarted the upgrade assistant... it's now at 9% and running.
    Pesky Oracle. Thanks guys for your help.
    FWIW, and for your future understanding, you didn't actually have to *start* the database with the pfile in order to create the spfile.
    In this example, I don't even have a database named fubar ...
    oracle:fubar$ pwd
    /u01/app/oracle/product/11.2.0/db_1/dbs
    2013-08-15 10:52:29
    oracle:fubar$ ls -l init*
    -rw-r--r-- 1 oracle oinstall 2851 May 15  2009 init.ora
    -rw-r----- 1 oracle oinstall   35 Jan 31  2013 initorcl.ora
    2013-08-15 10:52:36
    oracle:fubar$ ls -l spfile*
    -rw-r----- 1 oracle asmadmin 2560 Aug 15 10:47 spfilekilroy.ora
    2013-08-15 10:52:39
    oracle:fubar$ export ORACLE_SID=fubar
    2013-08-15 10:52:48
    oracle:fubar$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.1.0 Production on Thu Aug 15 10:52:53 2013
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    Connected to an idle instance.
    SQL> create spfile from pfile='init.ora';
    File created.
    SQL> exit
    Disconnected
    2013-08-15 10:53:05
    oracle:fubar$ ls -l spfile*
    -rw-r----- 1 oracle asmadmin 1536 Aug 15 10:53 spfilefubar.ora
    -rw-r----- 1 oracle asmadmin 2560 Aug 15 10:47 spfilekilroy.ora
    2013-08-15 10:53:07
    oracle:fubar$

  • Cluster point of failure

    I'm trying to set up an environment where, if my primary web server goes down, requests will be sent to the backup. I think clustering can help me here, but my fear is that I have a single point of failure on the managing server. If I have a cluster, is one machine managing all traffic? If that machine were to go down, my entire site would be down. Any suggestions on how to handle this at the router level would be appreciated also.
    Scott

    I'm not sure I understand your question completely.
    You can certainly run multiple managed servers and/or a cluster of managed servers to give you some redundancy.
    You can run multiple physical and/or virtual machines.
    You can run multiple sites etc for disaster recovery.
    I can't recall a site I've visited in a long time that didn't do all of these.
    Was there a specific question you had about HA or failure scenarios?
    -- Rob
    WLS Blog http://dev2dev.bea.com/blog/rwoollen/

  • How to recover till point of failure

    dear all,
    I am a newbie and I have one question regarding RMAN: I have yesterday's backup and I want to recover the database till the point of failure (e.g. the failure of any datafile, tablespace, redo log, or control file). How can I recover the database? Your help is highly appreciated. Thanks.

    938946 wrote:
    dear friends,
    I dropped two tables, i.e. table1 at 8:00 and table2 at 8:20. I recovered the database using RMAN with the catalog until time 8:18. I got the dropped table table2, but when I check table1 I get the error that table1 doesn't exist in the database. Why didn't RMAN recover table1 (though table1 was dropped before table2)? Your help is highly appreciated. Thanks to all my dear friends.
    Because you recovered until 8:18. At 8:18, table1 had already been dropped. So at 8:18 the state of the database was that there was no table1, and that is the point to which you recovered your database.
    RMAN did this by applying all of the redo (that is, by re-applying all of the DDL and DML) that occurred between the time of the restored data file and the 'recover to' time you specified. Since that time was after you dropped table1, it re-applied the DROP statement.

  • Single points of failure?

    So, we are looking into the xServe RAID, and I'd like some insight into making things as bulletproof as possible.
    Right now we plan to have:
    a load balancer and a failover load balancer (running on cheap BSD hardware, since hardware load balancers are so damned expensive) feeding into
    two application servers, which communicate with
    one back-end server, which serves as both a database server and an NFS server for the app servers
    And the volumes that will be NFS-mounted would be on our xServe RAID, which would be connected directly to the back-end server.
    The networking hardware would all be failover through multiple switches and cards and so forth.
    The idea here is to avoid as many single points of failure as possible. Unfortunately at the moment we don't have a DBA who is fluent in clustering, so we can't yet get rid of the back-end server as a single point of failure. (Which is also why I'm mounting the RAID on it and sharing via NFS... if the database goes down, it won't matter that the file service is down too.) However, in the current setup, there's one other failure point: the RAID controllers on the xServe RAID.
    Performance is less important to us on this than reliability is. We can't afford two RAID units at the moment, but we can afford one full of 500 gig drives, and we really only need about 4 TB of storage right now, so I was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc. As far as I understand it, this eliminates the RAID controllers as a single point of failure, and as far as I know they are at least supposedly the only single point of failure in the xServe RAID system. (I could also do RAID 10 that way, but due to the way we store files, that wouldn't buy us anything except added complexity.)
    And later on, down the road, when we have someone good enough to figure out how to cluster the database, if I understand correctly, we can spend the money to get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it, thus cutting out the middle-man NFS service. (I am under the impression that this sort of volume-sharing is possible via FC... is that correct?)
    Comments? Suggestions? Corrections to my misapprehensions?
    --Adam Lang

    Camelot wrote:
    A couple of points.
    was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc.
    Really? Assuming you're using fourteen 500GB drives, this will give you seven volumes mounted on the server, each a 500GB mirror split across the two controllers. That's fine from a redundancy standpoint, but it's a pain from the standpoint of managing seven direct mountpoints on the server, as well as seven NFS shares and 14 NFS mount points on the clients. Not to mention file allocations between the volumes, etc.
    If your application is such that it's easy to dictate which volume any particular file should be on and you don't mind managing all those volumes, go ahead, otherwise consider creating two RAID 5 volumes, one on each controller, using RAID 1 to mirror them on the back-end server and exporting a single NFS share to the clients/front-end servers.
    Quite simple, actually. But admittedly, two RAID 5s RAID-1-ed together would be much more efficient, space-wise.
    if I understand correctly, we can spend the money get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it
    Yes, although you'll need another intermediate server as the metadata controller to arbitrate connections from the two machines. It becomes an expensive option, but your performance will increase, as will the ease with which you can expand your storage network (adding more storage as well as more front-end clients).
    But then that means that the metadata controller is a single point of failure...?
    --Adam Lang

  • Linux cluster, no single point of failure

    I'm having difficulty setting up a Business Objects cluster on Linux with no single point of failure. Following the instructions for a custom install, I end up connecting to the CMS on the other server, with no CMS running on the server I'm doing the install on. That gives a cluster, however we only have a CMS running on one server in this scenario, and we can't have a single point of failure. Could someone explain how to set up a 2-server clustered solution on Linux that doesn't have a single point of failure?

    not working, I can see my other node listed in the config, but the information for the servers state that the SIA is available, I've checked network/port connectivity between the boxes and SIA is running and available for each box.
    Via the instructions for installing on a system with windows capabilities I read about a step to connect to an existing CMS.
    http://wiki.sdn.sap.com/wiki/download/attachments/154828917/Opiton1_add_cms3.jpg
    http://wiki.sdn.sap.com/wiki/display/BOBJ/Appendix1-Deployment-howtoaddanotherCMS-Multipleenvironments
    via the linux install.sh script, no matter what I do I'm not coming across any way that allows me to reach that step.

  • Single point of failure for web dispatcher

    Hi
    I need advice on how I can resolve the single point of failure for the Web Dispatcher: in case the Web Dispatcher goes down, what are the alternatives (e.g. on another system) which can be used to avoid this?
    In our environment we have a DB server with two application servers, and the Web Dispatcher is installed on the DB server. I need to know what I can do when the Web Dispatcher on the DB server crashes and cannot be restarted at all.
    We are running Oracle 10.2.0.2.0 on AIX 5.3.
    Regards,
    Codlick

    Hi Codlick,
    the answer is, you cannot (switch to two web dispatchers).
    If you want to use two web dispatchers, they need something in front, like a hardware load balancer. This would actually work, as WD know their sessions and sticky servers for those. But remember you always need a single point for the incoming address (ip).
    Your problem really is about switchover groups. Both WD need to run in different switchover groups and need to switch to the same third software. I'm not sure if your switchover software can handle this (I'm not even sure if anyone can do this...), as this means the third WD needs to be in two switchover groups at the same time.
    Hope this helps,
    Regards,
    Benny

  • Does OAM is single point of failure ?

    Hi Adam
    I have a serious doubt about the OAM implementation...
    What is the best practice for OAM implementation, and what are the fallback plans for critical web application integrations?
    Once the web applications are integrated with OAM, the login traffic will always redirect to OAM for authentication and authorization...
    But if OAM is down, all the critical applications are down!!
    So, from the customer's point of view, OAM seems like a single point of failure..
    Do you have any brilliant ideas on this ?
    Thanks in million...
    Best Regards
    John

    john,chong wrote:
    Hi Pramod
    Yup, HA must always be in place for these kinds of critical implementations..
    BUT for an ESSO (desktop ESSO) implementation, even if ESSO is down, the user is still able to do a manual login to their application..
    Really? What if the password has been changed by ESSO to a random one for some application? That's very common in ESSO implementations. The user doesn't know the password, only ESSO does.

  • Primary site server a single point of failure?

    I'm installing ConfigMgr 2012 R2, and employing a redundant design as much as possible. I have 2 servers, call them CM01 and CM02, in a single primary site, and on each server I have installed the following roles: Management Point, Distribution Point, and Software Update Point, as well as installing the SMS Provider on both servers. SQL is on a 3rd box.
    I am now testing failover from a client perspective by powering down CM01 and querying the current management point on the client: (get-wmiobject -namespace root\ccm -class ccm_authority).CurrentManagementPoint . The management point assigned to the client flips to the 2nd server, CM02, as expected. However, when I try to open the CM management console, I cannot connect to the site, and reading the SMSAdminUI log reveals this error: "Provider machine not found".
    Is the Primary site server a single point of failure? 
    Why can't I point the console to a secondary SMS provider?
    If this just isn't possible, what is the course of action to restore console access once the Primary Site server is down?
    Many Thanks

    Yes, that is a completely false statement. Using a CAS and multiple primaries in fact introduces multiple single points of failure. The only technical reason for a CAS and multiple primary sites is scale-out; i.e., supporting 100,000+ managed systems.
    HA is achieved from a client perspective by adding multiple site systems hosting the client facing roles: MP, DP, SUP, App Catalog.
    Beyond that, all other roles are non-critical to client operations and thus have no built-in HA mechanism. This includes the site server itself also.
    The real question is what service that ConfigMgr provides do you need HA for?
    Jason | http://blog.configmgrftw.com
