CSS stateful fail-over

The current version of the CSS allows stateful fail-over using a direct connection between two CSS units.
I am working on a project for a customer where the two CSSs are far away from each other. Stateful fail-over is a strong requirement for this customer. What is Cisco's position on this requirement?
Thank you
Yves Haemmerli

How far away?
You can use fiber for the stateful connection, and I think it can go up to 10 km.
If you need the servers to be more than 10 km apart, you have to contact a Cisco sales person to explain your requirements.
Regards,
Gilles.

Similar Messages

  • WLS6.1sp1 stateful EJB problem: load-balancing and fail-over

              I have three problems.
              1. I have 2 clustered servers. My weblogic-ejb-jar.xml is here:
              <?xml version="1.0"?>
              <!DOCTYPE weblogic-ejb-jar PUBLIC '-//BEA Systems, Inc.//DTD WebLogic 6.0.0 EJB//EN'
              'http://www.bea.com/servers/wls600/dtd/weblogic-ejb-jar.dtd'>
              <weblogic-ejb-jar>
              <weblogic-enterprise-bean>
                   <ejb-name>DBStatefulEJB</ejb-name>
                   <stateful-session-descriptor>
                   <stateful-session-cache>
                        <max-beans-in-cache>100</max-beans-in-cache>
                        <idle-timeout-seconds>120</idle-timeout-seconds>
                   </stateful-session-cache>
                   <stateful-session-clustering>
                        <home-is-clusterable>true</home-is-clusterable>
                        <home-load-algorithm>RoundRobin</home-load-algorithm>
                        <home-call-router-class-name>common.QARouter</home-call-router-class-name>
                        <replication-type>InMemory</replication-type>
                   </stateful-session-clustering>
                   </stateful-session-descriptor>
                   <jndi-name>com.daou.EJBS.solutions.DBStatefulBean</jndi-name>
              </weblogic-enterprise-bean>
              </weblogic-ejb-jar>
              When I use "<home-call-router-class-name>common.QARouter</home-call-router-class-name>"
              and deploy this EJB, this exception occurs:
              <Warning> <Dispatcher> <RuntimeException thrown by rmi server: 'weblogic.rmi.cluster.ReplicaAwareServerRef@9 - jvmid: '2903098842594628659S:203.231.15.167:[5001,5001,5002,5002,5001,5002,-1]:mydomain:cluster1', oid: '9', implementation: 'weblogic.jndi.internal.RootNamingNode@5f39bc''
              java.lang.IllegalArgumentException: Failed to instantiate weblogic.rmi.cluster.BasicReplicaHandler due to java.lang.reflect.InvocationTargetException
                   at weblogic.rmi.cluster.ReplicaAwareInfo.instantiate(ReplicaAwareInfo.java:185)
                   at weblogic.rmi.cluster.ReplicaAwareInfo.getReplicaHandler(ReplicaAwareInfo.java:105)
                   at weblogic.rmi.cluster.ReplicaAwareRemoteRef.initialize(ReplicaAwareRemoteRef.java:79)
                   at weblogic.rmi.cluster.ClusterableRemoteRef.initialize(ClusterableRemoteRef.java:28)
                   at weblogic.rmi.cluster.ClusterableRemoteObject.initializeRef(ClusterableRemoteObject.java:255)
                   at weblogic.rmi.cluster.ClusterableRemoteObject.onBind(ClusterableRemoteObject.java:149)
                   at weblogic.jndi.internal.BasicNamingNode.rebindHere(BasicNamingNode.java:392)
                   at weblogic.jndi.internal.ServerNamingNode.rebindHere(ServerNamingNode.java:142)
                   at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:362)
                   at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                   at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                   at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                   at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                   at weblogic.jndi.internal.RootNamingNode_WLSkel.invoke(Unknown Source)
                   at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:296)
              So must I use it or not?
              2. When I don't use "<home-call-router-class-name>common.QARouter</home-call-router-class-name>",
              there's no exception, but load balancing does not happen. According to the documentation,
              load balancing should happen when I call the home.create() method.
              My client program goes here:
                   DBStateful the_ejb1 = (DBStateful) PortableRemoteObject.narrow(home.create(), DBStateful.class);
                   DBStateful the_ejb2 = (DBStateful) PortableRemoteObject.narrow(home.create(3), DBStateful.class);
              The result is like this:
                   the_ejb1 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@4695a6)/397
                   the_ejb2 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@acf6e)/398
                   or
                   the_ejb1 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@252fdf)/380
                   the_ejb2 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@6a0252)/381
              I think the result should be like the one below, shouldn't it?
                   the_ejb1 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@4695a6)/397
                   the_ejb2 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@6a0252)/381
              In this case I think the_ejb1 and the_ejb2 should have instances on different cluster
              servers, but they go to one server.
              3. If I don't use "<home-call-router-class-name>common.QARouter</home-call-router-class-name>" or
              "<replication-type>InMemory</replication-type>", then load balancing happens but
              there's no fail-over.
              So how can I get load-balancing and fail-over together?
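              For reference, WLS 6.x expects the class named in <home-call-router-class-name> to implement weblogic.rmi.extensions.CallRouter with a public no-argument constructor, and a deploy-time InvocationTargetException like the one above often means the router class itself could not be loaded or instantiated. A minimal sketch of such a router follows; the server names and the routing rule are placeholder assumptions, not the actual common.QARouter:

              package common;

              import java.lang.reflect.Method;
              import weblogic.rmi.extensions.CallRouter;

              // Routes create(int) calls by parameter value; any other call returns
              // null, which falls back to the home's default RoundRobin algorithm.
              public class QARouter implements CallRouter {
                   private static final String[] SERVER_ONE = { "server1" };
                   private static final String[] SERVER_TWO = { "server2" };

                   public String[] getServerList(Method m, Object[] params) {
                        if (m.getName().equals("create")
                                  && params != null && params.length == 1
                                  && params[0] instanceof Integer) {
                             int arg = ((Integer) params[0]).intValue();
                             return (arg % 2 == 0) ? SERVER_ONE : SERVER_TWO;
                        }
                        return null;
                   }
              }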
              


  • Stateful bean not failing over

              I have a cluster of two servers and an Admin server. Both servers are running NT
              4 SP6 and WLS6 SP1.
              When I stop one of the servers, the client doesn't automatically fail over to
              the other server; instead it fails, unable to contact the server that has gone down.
              My bean is configured to have its home clusterable and is a stateful bean. My
              client holds onto the remote interface and makes calls through this. If Server
              B fails then it should automatically fail over to Server A.
              I have tested my multicast address and all seems to be working fine between servers;
              my stateless beans work well, load balancing between servers nicely.
              Does anybody have any ideas regarding what could be causing the stateful bean's
              remote interface not to provide failover info?
              Also, is it true that you can have only one JMS destination queue/topic per cluster? The
              JMS cluster targeting doesn't work at the moment, so you need to deploy to individual
              servers?
              Thanks
              

    Did you enable stateful session bean replication in the
              weblogic-ejb-jar.xml?
              -- Rob
              Wayne Highland wrote:
              >
              > I have a cluster of two servers and an Admin server. Both servers are running NT
              > 4 SP6 and WLS6 SP1.
              > When I stop one of the servers, the client doesn't automatically fail over to
              > the other server; instead it fails, unable to contact the server that has gone down.
              >
              > My bean is configured to have its home clusterable and is a stateful bean. My
              > client holds onto the remote interface and makes calls through this. If Server
              > B fails then it should automatically fail over to Server A.
              >
              > I have tested my multicast address and all seems to be working fine between servers;
              > my stateless beans work well, load balancing between servers nicely.
              >
              > Does anybody have any ideas regarding what could be causing the stateful bean's
              > remote interface not to provide failover info?
              >
              > Also, is it true that you can have only one JMS destination queue/topic per cluster? The
              > JMS cluster targeting doesn't work at the moment, so you need to deploy to individual
              > servers?
              >
              > Thanks
              Coming Soon: Building J2EE Applications & BEA WebLogic Server
              by Michael Girdley, Rob Woollen, and Sandra Emerson
              http://learnweblogic.com
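              For reference, the replication switch lives in the stateful-session-clustering stanza of weblogic-ejb-jar.xml; a minimal sketch using the same WLS 6.0 DTD elements as the descriptor quoted in the previous thread:

              <stateful-session-descriptor>
                   <stateful-session-clustering>
                        <home-is-clusterable>true</home-is-clusterable>
                        <replication-type>InMemory</replication-type>
                   </stateful-session-clustering>
              </stateful-session-descriptor>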
              

  • Failing over Oracle connections in a pool

              Hi,
              This message is probably a bit out of context (I've already posted
              it to the JDBC group). I post here as well, since I guess it's
              the place where people have the most experience with clustering
              and HA. Original posting below...
              Could you please tell me whether, yes or no, connections to an
              Oracle database should fail over (when the database fails over
              to another machine)? I use Oracle's Transparent Application Failover
              (configured via Net8) with Weblogic 6 on Linux and Oracle 8.1.7
              on Solaris/SPARC.
              If this doesn't work in my configuration, is there any configuration
              where it should work? (Another version of Oracle, WLS, OS, ...)
              When I try TAF using the PetStore application, I get exceptions
              related to not being connected to the database.
              If TAF doesn't work with WebLogic, is there a way to work around
              the problem? Can I catch these exceptions and renew the connections
              in the pool? Or, what else is possible...?
              I'd appreciate any help. I'd like to demonstrate our HA product
              with WLS. If it doesn't work, I'll turn to iPlanet instead. Pity,
              I really like WLS!
              Thanks in advance for any help or advice!
              Regards, Frank Olsen
              

              Hi (Frank ;-)
              I got carried away a bit too fast...
              Some more testing shows that it doesn't work in all cases:
              - when someone is trying to check out the shopping cart when the
              database fails (and fails over), I get exceptions once the
              database has restarted on the backup node
              - the exceptions are related to some transactions being rolled
              back and Oracle stating that it couldn't safely replay the transactions
              - browsing the categories still works fine
              - all access to the shopping cart and sign-in/sign-out causes time-outs
              and exceptions
              Any ideas what may cause this problem, please?
              Regards,
              Frank Olsen
              "Frank Olsen" <[email protected]> wrote:
              >
              >Hi,
              >
              >TAF worked with WLS 6 on NT with the Oracle 8.1.7 client!
              >
              >Has anyone tested it on Solaris/SPARC?
              >
              >Regards,
              >Frank Olsen
              >
              >
              >
              >"Frank Olsen" <[email protected]> wrote:
              >>
              >>Hi,
              >>
              >>Most of my question below is still valid (in particular
              >>concerning
              >>whether TAF should work with WLS on some or all platforms
              >>and
              >>versions).
              >>
              >>However, when I tested TAF with the Oracle client (sqlplus)
              >>there
              >>also was no failover of the (one) connection. I then
              >checked
              >>the
              >>`V$SESSION' view and the colums related to failover showed
              >>that
              >>TAF was not correctly configured. Strange because I copied
              >>the
              >>`tnsnames.ora' parameters from the Oracle documentation
              >>for TAF.
              >>
              >>Has anyone managed to configure and use TAF, with or
              >without
              >>WLS?!
              >>
              >>Thanks in advance for your help!
              >>
              >>Regards,
              >>Frank Olsen
              >>
              >>
              >>"Frank Olsen" <[email protected]> wrote:
              >>>
              >>>Hi,
              >>>
              >>>This message is probably a bit out of context (I've
              >already
              >>>posted
              >>>it to the JDBC group). I post here as well, since I
              >guess
              >>>it's
              >>>the place where people have the most experience with
              >>clustering
              >>>and HA. Original posting below...
              >>>
              >>>----
              >>>
              >>>Could you please tell me whether, yes or no, connections
              >>>to an
              >>>Oracle database should fail over (when the database
              >fails
              >>>over
              >>>to another machine)? I use Oracle's Transparent Application
              >>>Failover
              >>>(configured via Net8) with Weblogic 6 on Linux and Oracle
              >>>8.1.7
              >>>on Solaris/SPARC.
              >>>
              >>>If this doesn't work in my configuration, is there any
              >>>configuration
              >>>where it should work? (Another version of Oracle,
              >WLS,
              >>>OS, ...)
              >>>
              >>>
              >>>When I try TAF using the PetStore application, I get
              >>exceptions
              >>>related to no being connected to the database.
              >>>
              >>>If TAF doesn't work with WebLogic, is there a way to
              >>work
              >>>around
              >>>the problem? Can I catch these exceptions and renew
              >the
              >>>connections
              >>>in the pool? Or, what else is possible...?
              >>>
              >>>I'd appreciate any help. I'd like to demonstrate our
              >>HA
              >>>product
              >>>with WLS. If it doesn't work, I'll turn to iPlanet instead.
              >>>Pity,
              >>>I really like WLS!
              >>>
              >>>Thanks in advance for any help or advice!
              >>>
              >>>Regards, Frank Olsen
              >>>
              >>
              >
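              For comparison, a minimal TAF entry in tnsnames.ora, assuming the BASIC/SELECT mode described in the Net8 documentation (host and service names below are placeholders, not Frank's actual configuration), followed by the V$SESSION check mentioned above:

              # tnsnames.ora -- hypothetical hosts db-node-a / db-node-b
              SALES =
                (DESCRIPTION =
                  (ADDRESS_LIST =
                    (ADDRESS = (PROTOCOL = TCP)(HOST = db-node-a)(PORT = 1521))
                    (ADDRESS = (PROTOCOL = TCP)(HOST = db-node-b)(PORT = 1521)))
                  (CONNECT_DATA =
                    (SERVICE_NAME = sales)
                    (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 3))))

              -- In sqlplus, confirm TAF is actually active for the session:
              SELECT username, failover_type, failover_method, failed_over
              FROM   v$session
              WHERE  username IS NOT NULL;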
              

  • GSLB Zone-Based DNS Payment Gw - Config Active-Active: Not Failing Over

    Hello All:
    Currently having a bit of a problem, have exhausted all resources and brain power dwindling.
    Brief:
    Two geographically diverse sites. Different AS's, different front ends. Migrated from one site with two CSS 11506's to two sites with one 11506 each.
    Flow of connection is as follows:
    Client --> FW Public Destination NAT --> CSS Private content VIP/destination NAT --> server/service --> CSS Source VIP/NAT --> FW Public Source NAT --> client.
    Using Load Balancers as DNS servers, authoritative for zones due to the requirement for second-level domain DNS load balancing (i.e. xxxx.com, AND FQDNs like www.xxxx.com). Thus, the CSS is configured to respond as authoritative for xxxx.com, www.xxxx.com, postxx.xxxx.com, tmx.xxxx.com, etc., but of course cannot do MX records, so it is also configured with dns-forwarders, which consequently were the original DNS servers for the domains. Those DNS servers have had their zone files changed to reflect that the new DNS servers are in fact the CSSes. Domain records (i.e. NS records in the zone file) and the records at the registrar (i.e. Tucows, which I believe resells .com, .net and .org for NetSol) have been changed to reflect the same. That part of the equation has already been tested and is true to DNS workings. The reason for the forwarders is of course for things such as non-load-balanced domain names, as well as MX records, etc...
    Due to design, which unfortunately cannot be changed, dns-record configuration uses kal-ap, example:
    dns-record a www.xxxx.com 0 111.222.333.444 multiple kal-ap 10.xx.1.xx 254 sticky-enabled weightedrr 10
    So, to explain so we're absolutely clear:
    - 111.222.333.444 is the public address returned to the client.
    - multiple is configured so we return both site addresses for redundancy (unless I'm misunderstanding that configuration option)
    - kal-ap and the 10.xx.1.xx address because, due to the configuration, we have no other way of knowing the content rule/service is down so that we stop advertising the address for said server/rule
    - sticky-enabled because we don't want to lose a payment and have it go through twice or something crazy like that
    - weightedrr 10 (and on the other side weightedrr 1) because we want to keep most of the traffic on the site that is closer to where the bulk of the clients are
    So, now, the problem becomes that the clients (i.e. something like an Interac machine, RFID tags...) need to be able to fail over almost instantly to either of the sites should one lose connectivity and/or servers/services. However, this does not happen. The CSS changes its advertisement, and this has been confirmed by running nslookups/digs directly against the CSSes... however, the client does not recognize this and ends up returning a "DNS Error/Page not found".
    I'm thinking this may have something to do with the "sticky-enabled" and/or the fact that DNS doesn't necessarily react very well to a TTL of "0".
    Any thoughts... comments... suggestions... experiences???
    Much appreciated in advance for any responses!!!
    Oh... should probably add:
    nslookups to some DNS servers consistently - ALWAYS the same ones - take 3 lookups before getting a reply. Other DNS servers are instant....
    Cheers,
    Ben Shellrude
    Sr. Network Analyst
    MTS AllStream Inc

    Hi Ben,
    if I got your posting right, the CSSes are doing their job and do advertise the correct IP for a DNS query, right?
    If some of your clients are having a problem, this might be related to DNS caching. Some clients cache the DNS response and do not do a refresh until they fail or the timeout has passed.
    Even worse, if the request fails you sometimes have to restart the client's DNS daemon so that it requests IP addresses from scratch. I had this issue with some Unix boxes. If I remember it correctly, you can configure the DNS behaviour for Unix boxes and forbid them to cache DNS responses.
    Kind Regards,
    joerg
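    One thing worth trying (a sketch only, keeping Ben's field order and his placeholder addresses): raise the TTL from 0 to a small nonzero value, so resolvers cache briefly but still re-query quickly after a site failure:

    ! same record as above, with a 30-second TTL instead of 0
    dns-record a www.xxxx.com 30 111.222.333.444 multiple kal-ap 10.xx.1.xx 254 sticky-enabled weightedrr 10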

  • How to 'fail-over' CSS11503-AC when ALL 5 Reals Servers (Services) die

    Hi all,
    Could anyone out there possibly provide an idea/config of how it is possible to 'fail-over' a CSS11503 set-up in Active/Standby mode with "ASR" enabled when:-
    - ALL your real servers (Services) for a particular VIP 'die'/OR the NIC is faulty.
    - So NOT just 1 of the real servers, but when ALL 5 are not reachable, I need to 'failover'.
    My initial thoughts are to use the "critical reporter" or "critical service" to report back to the 'active' CSS.
    Anyone who has done this scenario before, please advise..
    thanks

    Thanks very much Syed for this. I was thinking that no one could answer this query.
    After a little testing, I set the following config in the lab and it works, but it is different to yours. I cannot seem to configure the service as "type local". When I input 'type ?' I get options such as nci-direct-return, nci-info-only, proxy-cache, redirect etc...etc... NO 'local'...!!
    Please advise. Thanks in advance
    ************************* INTERFACE *************************
    interface 1/1
    bridge vlan 800
    phy 1Gbits-FD-no-pause
    interface 1/2
    phy 1Gbits-FD-no-pause
    bridge vlan 20
    interface Ethernet-Mgmt
    description "Management Interface"
    interface 2/1
    description "1st ASR Link"
    isc-port-one
    interface 2/3
    description "2nd ASR Link"
    isc-port-two
    ************************** CIRCUIT **************************
    circuit VLAN800
    description "FE_CORE"
    ip address 192.168.83.249 255.255.255.0
    ip virtual-router 1 priority 110
    ip redundant-vip 1 192.168.83.148
    ip redundant-vip 1 192.168.83.158
    ip critical-service 1 DTSFE01
    ip critical-service 1 DTSFE02
    ip critical-service 1 DTSFE03
    ip critical-service 1 DTSFE04
    ip critical-service 1 DTSFE05
    ip critical-reporter 1 Physical_if_DWN
    ip critical-reporter 1 r1
    circuit VLAN20
    description "LBAL"
    ip address 192.168.20.1 255.255.255.0
    ip virtual-router 2 priority 110
    ip redundant-interface 2 192.168.20.3
    ip critical-service 2 DTSFE01
    ip critical-service 2 DTSFE02
    ip critical-service 2 DTSFE03
    ip critical-service 2 DTSFE04
    ip critical-service 2 DTSFE05
    ip critical-reporter 2 Physical_if_DWN
    ip critical-reporter 2 r1
    ************************** REPORTER **************************
    reporter Physical_if_DWN
    type critical-phy-all-up
    phy 1/1
    phy 1/2
    active
    reporter r1
    type vrid-peering
    vrid 192.168.83.249 1
    vrid 192.168.20.1 2
    active
    ************************** SERVICE **************************
    service FE01
    ip address 192.168.20.183
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 4
    service FE02
    ip address 192.168.20.184
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 5
    service FE03
    ip address 192.168.20.185
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 6
    service FE04
    ip address 192.168.20.186
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 7
    service NWFE02
    ip address 192.168.20.204
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 10
    active
    !*************************** OWNER ***************************
    owner SERVICES
    content DTS_192.168.83.148_443
    add service DTSFE01
    add service DTSFE02
    add service DTSFE03
    add service DTSFE04
    add service DTSFE05
    vip address 192.168.83.148
    port 443
    protocol tcp
    advanced-balance sticky-srcip
    redundant-index 1
    sticky-inact-timeout 5
    owner NW_SERVICES
    content NWCS_192.168.83.158_443
    add service NWCSFE01
    add service NWCSFE02
    vip address 192.168.83.158
    protocol tcp
    port 443
    sticky-inact-timeout 5
    redundant-index 2
    advanced-balance sticky-srcip
    active

  • Help With Fail Over

    I have been playing around with Directory Server fail-over with iMS 5.2; however, it's not working too well. So far I have:
    configured local.ugldaphost to ldap-a ldap-b
    No problem there, on ldap server a (Master) I have setup replication of the following:
    dc=domain,dc=blah,dc=blah
    o=internet
    Using this method I get "Can't connect to LDAP server" once I have logged in to iMS 5.2 webmail: trying to access my folders works, but the personal address book doesn't. Looking at the logs for HTTP it states:
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Debug: ldappool::new_conn failed: Can't connect to the LDAP server Connection refused
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Debug: ldappool::access_pool_get 0/0 valid connections
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Debug: PAB_Search() error: Can't connect to the LDAP server
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Error: Cannot search address book at ou=user, ou=people, o=internet,dc=domain,dc=BLAH,dc=BLAH,o=pab: Can't connect to the LDAP server
    Do I also need to replicate:
    o=NetscapeRoot
    o=pab
    I have tried replicating simply o=NetscapeRoot, and I could no longer log in as admin into the directory server console on the REPLICA (ldap-b).
    Can anybody help me out?

    Hi,
    To successfully have a 'low cost' failover iDS5/iMS5 scenario, you need to do
    a number of things. Or, if you have a wad of cash, use Veritas Cluster
    (HA iPlanet agents), etc., or Directory Server proxies (iDAR) :(
    Currently I'm using a 'low cost' failover technique.
    None of what I'm about to describe is in the SunONE documentation for iMS.
    I have tested this in the lab; it all works, and it's now in production.
    Before you do anything, test in the lab first, so you feel comfortable with
    the setups.
    OK, my scenario:
    Primary LDAP = ldap-a
    Secondary LDAP = ldap-b
    Mailserver = mta1
    iDS5 = iDS5.1p1
    iMS5 = iMS5.2p1
    1. Install iDS5 on ldap-a. Acts a User dir. and Config dir. server.
    2. Prep. ldap-a for iMS5 install (run ims_dssetup.pl).
         - YES to schema files/indices
    3. Install iMS5 on mta1, using ldap-a as User and Config dir. server.
         - I have iMS5 configured in Direct LDAP mode.
    4. Install iDS5 on ldap-b; use ldap-b as the User dir. and ldap-a as the Config dir. server.
         - No need to populate the User tree on ldap-b (ie. example users from install)
         - MUST USE ldap-a as the config server, as you will be replicating this tree;
    if not, you will not be able to access the admin server, as you stated.
    5. Also run ims_dssetup.pl on ldap-b.
         - YES to schema files/indices
    6. Setup multi-master replication between ldap-a and ldap-b.
         - See Admin Guide, have a good read, needs correct setup !
         - This allows read/write, thus seamless to email user for password
         changing, PAB writes...if LDAP has failed over.
         - Replicate all suffices
              o=isp           (User dir.)
              o=internet      (DC tree)
              o=pab          (PAB tree)
              o=NetscapeRoot     (Config. dir)
         - Init consumer(ldap-b) from ldap-a for all suffices.
    Note: The only problem with multi-master is uniqueness plugins (if you use them);
    it's no problem as long as you use ldap-a as master. See the iDS Admin Guide.
    7. Now ldap-b requires a change to allow iMS5 to write to o=NetscapeRoot
    in the event of failover. Otherwise you get the error message "ldap server unavailable,
    no configuration server, using locally cached values..." and so on.
    Managing Console Fail Over
    If you have a multi-master installation with o=NetscapeRoot replicated
    between your two masters, ldap-a and ldap-b, you can modify the console
    on the second server (ldap-b) so that it uses ldap-b's instance instead
    of ldap-a's. (By default, writes with ldap-b's console would be made to
    ldap-a then replicated over.)
    To accomplish this, you must:
    Shut down the Administration Server and Directory Server.
    Change these files to reflect ldap-b's values:
    'serverRoot'/userdb/dbswitch.conf:
    directory default ldap://ldap-b:389/o%3DNetscapeRoot
    'serverRoot'/admin-serv/config/adm.conf:
    ldapHost: ldap-b
    ldapPort: 389
    'serverRoot'/shared/config/dbswitch.conf:
    directory default ldap://ldap-b:389/o%3DNetscapeRoot
    'serverRoot'/slapd-serverID/config/dse.ldif:
    nsslapd-pluginarg0: ldap://ldap-b:389/o%3DnetscapeRoot
    Note: assuming your LDAP TCP port is 389
    Turn off the pass through authentication (PTA) plug-in on ldap-b by editing
    its dse.ldif file.
    In a text editor, open the 'serverRoot'/slapd-serverID/config/dse.ldif file.
    Locate the entry for the PTA plug-in:
    dn: cn=Pass Through Authentication,cn=plugins,cn=config
    Change nsslapd-pluginEnabled: on to nsslapd-pluginEnabled: off.
    Restart the Directory Server and Administration Server.
    8. Now on mta1, using configutil, set options to these values
    **a. local.ldaphost = "ldap-a ldap-b"
              - Required to use both servers as Config dir. servers,
    in the event of failover, config. is taken from ldap-b.
         b. local.ugldaphost = "ldap-a ldap-b"
              - Required to use both servers for User dir. lookups in event of
    failover.
    c. local.service.pab = "ldap-a ldap-b"
    - Required to use both servers for PAB lookup/additions in the event
    of failover.
         ** To make config dir. failover work, shut down the Admin server and change
    'serverroot'/shared/config/dbswitch.conf:
    directory default ldap://ldap-a ldap-b:389/o%3DNetscapeRoot
         Restart the Admin server.
    OK that's all there is too it.
    Now test everything: fail over LDAP, test logins for email POP/Webmail, and IMAP if used.
    Test email connections (ie. inbound/outbound email conns).
    Note:
    Whichever LDAP server fails, iMS5 will continue to use the other LDAP server, even when the
    failed LDAP server comes back online. Either stop/start iDS5 on the current LDAP server or
    stop/start iMS5.
    .....Well, it worked for me!
    Good luck ;)
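    For reference, the configutil settings from step 8 look like this on the command line (a sketch; run from wherever configutil lives in your iMS server root, which varies by install):

    # iMS configutil -- point config, user lookups and PAB at both LDAP hosts
    ./configutil -o local.ldaphost -v "ldap-a ldap-b"
    ./configutil -o local.ugldaphost -v "ldap-a ldap-b"
    ./configutil -o local.service.pab -v "ldap-a ldap-b"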

  • Fail over is not happening in Weblogic JSP Server

    Hi..
    We have 6 Weblogic instances running as application server (EJB) and 4 Weblogic
    instances running as web server (JSP). We have configured one cluster for EJB
    servers and one cluster for JSP servers. In the front end we are using four Apache
    servers to proxy requests to the Weblogic JSP cluster. In my httpd.conf file I
    have configured the Weblogic cluster. I can see the requests are going to
    all the servers and believe the cluster is working fine in terms of load balancing
    (round-robin). The clients are accessing the servers using CSS (Cisco Load Balancer).
    But when we test the fail-over in the cluster, we are facing problems. Let me
    explain the scenarios of the fail-over test:
    1.     The load was generated by the Load Generator
    2.     When the load is there, we shut down one Apache server; even though there were
    some failed transactions, the servers immediately become stable. So fail-over is
    happening in this stage.
    3.     When I shut down one EJB instance, again after some failed transactions, the
    transactions become stable.
    4.     But when I shut down one JSP instance, the transactions immediately fail; they are
    not failed over to another JSP server, and the number of failed transactions
    increases.
    So I guess there is some problem in the proxy plug-in configuration, such that
    when I shut down one JSP server, requests are still being sent to that JSP server
    by the Apache proxy plug-in.
    I have read various queries posted in the newsgroups and found some information
    about configuring session and cookie information in the weblogic.xml file. Also,
    I’m not sure what configurations need to be done in the weblogic.xml
    and httpd.conf files. Kindly help me to resolve the problem. I would appreciate
    your response.
    ===============================================================
    My httpd.conf file plug-in configuration:
    ###WebLogic Proxy Directives. If proxying to a WebLogic Cluster see WebLogic
    Documentation.
    <IfModule mod_weblogic.c>
    WebLogicCluster X.X.X.X1:7001,X.X.X.X2:7001,X.X.X.X3:7001,X.X.X.X4:7001
    MatchExpression *.jsp
    </IfModule>
    <Location /apollo>
    SetHandler weblogic-handler
    DynamicServerList ON
    HungServerRecoverSecs 600
    ConnectTimeoutSecs 40
    ConnectRetrySecs 2
    </Location>
    ==============================================================
    Thanks in advance,
    Siva.
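    A sketch of the weblogic.xml piece mentioned above, assuming the WLS 6.x session-descriptor scheme (in-memory session replication is what lets a surviving JSP server pick up the session once the plug-in fails a request over; this is not a confirmed fix for the plug-in routing itself):

    <!-- weblogic.xml: replicate HTTP sessions across the JSP cluster -->
    <weblogic-web-app>
      <session-descriptor>
        <session-param>
          <param-name>PersistentStoreType</param-name>
          <param-value>replicated</param-value>
        </session-param>
      </session-descriptor>
    </weblogic-web-app>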

    Hi,
    I can see that bug 13703600 already got fixed in 12.1.2, but if you still have the same problem please raise a ticket with Oracle support.
    Regards,
    Kal

  • SQL Server 2014 Always on HA takes 8-14 seconds to fail over. Application side timeouts occur

    Hi All,
    I have a very similar post in the SQL Server 2014 forums too (https://social.technet.microsoft.com/Forums/sqlserver/en-US/adb5e338-907e-4405-aa62-d3ea93c7a98a/sql-server-2014-always-on-ha-takes-814-seconds-to-fail-over-application-side-timeouts-occur?forum=sqldisasterrecovery) -
    advice in the end was to post a question here.
    SQL Server Nodes, 2014 (12.0.2480.0)
    1 Share witness (on separate subnet)
    1 Cluster
    1 Listener
    I have been testing the response time to failovers – both manual (right-click, fail over in SSMS) and automatic (shut down the primary host). The way I am testing response is to have an SSMS query running on my desktop, connected to the listener, querying
    a small table, and hit execute.
    The Query response time, from execute to receiving the result, has been between 8 and 14 seconds based on my testing. My previous experience (in a separate environment) showed around 2 second fail over times in a very similar configuration.
    The availability DB is 200 MB and is not actively used. The nodes are synchronised.
    SQL Server Hosts: Windows 2012, 2 cpu, 8gb RAM.
    Questions:
    1: It’s a big question but what should I expect for a ‘normal’ fail over time. Keep in mind this scenario is about as simple as it gets.
    2: As it stands, an 8 to 14 second ‘outage’ could cause some applications to time out. Or am I being unreasonable? I am seeing the very simple query in SSMS time out with this:
    Msg 983, Level 14, State 1, Line 2
    Unable to access availability database 'DATABASE' because the database replica is not in the PRIMARY or SECONDARY role. Connections to
    an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role. Try the operation again later.
    Cluster logs are long - this section accounts for 8 seconds of the 11 second outage I experienced. I can supply the full log if required. Also this log is just the 2 cluster nodes, I removed the witness share to make sure it was as simple as possible.
    00001090.00002128::2015/02/25-03:05:08.255 INFO  [GEM] Node 2: Deleting [1:65 , 1:71] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:10.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:11.888 INFO  [GEM] Node 2: Deleting [1:72 , 1:73] (both included) as it has been ack'd by every node
    00001090.00002698::2015/02/25-03:05:11.889 INFO  [GUM] Node 2: Processing RequestLock 2:49
    00001090.00002128::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 67)
    00001090.00002698::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: executing request locally, gumId:68, my action: /dm/update, # of updates: 1
    00001090.00002128::2015/02/25-03:05:12.890 INFO  [GEM] Node 2: Deleting [1:74 , 1:74] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:15.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:16.988 INFO  [GUM] Node 2: Processing RequestLock 1:28
    Thanks in advance.
    Keegan

    Hi Keegan,
    From these event logs, what I can see is that "Sending request Netname" is where the time went.
    Could you please tell us the network configuration of the cluster nodes?
    If I recall correctly, it is recommended to leave only the TCP/IP protocol enabled and disable NetBIOS over TCP/IP for the "Private Network"; also, do not configure DNS/WINS/default gateway for the "Private Network":
    https://support.microsoft.com/kb/258750?wa=wsignin1.0
    After that, please test again.
    Best Regards,
    Elton JI
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .
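    On the application side, a hedged suggestion rather than a fix for the cluster transition itself: clients that support it can be pointed at the listener with MultiSubnetFailover enabled, so every IP registered for the listener is tried in parallel, together with a connect timeout long enough to ride out an 8-14 second failover. A .NET-style connection string would look roughly like this (the listener name AGLISTENER is a placeholder):

    Server=tcp:AGLISTENER,1433;Database=DATABASE;Integrated Security=SSPI;MultiSubnetFailover=True;Connect Timeout=30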

  • Front End pool failed over

    Hi all,
    1. I set up a pool with three Front End servers (the FQDN of the pool is pool.site1.sip96x2.com and it points to the IP addresses of the three Front End servers). Everything works fine. But when I disable the network interface on FE1 and FE2, the Lync clients are disconnected.
    I haven't clearly understood how the Lync clients fail over in a pool. Please clarify this for me.
    2. I have two central sites (Root site and Primary site; they have different domains, sip96x2.com and site1.sip96x2.com). The simple URL dialin points to the Front End server at the Root site. So if the link between the Root site and the Primary site is down, how can the
    users at the Primary site connect to the dialin URL? 
    3. In building the topology for the Front End pool, I checked Override FQDN internal web service and the FQDN is "poolint.site1.sip96x2.com". I created three A records "poolint.site1.sip96x2.com" pointing to the three IP addresses of the Front End
    servers. Is that right?
    Thanks so much!

    Ah ok, well first thing, if I am reading this correctly: pool pairing Standard with Enterprise is not supported. You should only pair Standard with Standard and Enterprise with Enterprise (even though Topology Builder won't stop you). Take a look here for
    supported scenarios: http://technet.microsoft.com/en-us/library/jj204697.aspx
    To deal with the simple URLs in the event of failover you need to add them using Powershell. Take a look at this article which explains and gives an example: http://blogs.perficient.com/microsoft/2012/01/configuring-simple-urls-for-multiple-lync-pools/
    If this helped you please click "Vote As Helpful" if it answered your question please click "Mark As Answer"
    Georg Thomas | Lync MVP
    Blog www.lynced.com.au | Twitter
    @georgathomas
    Lync Edge Port Check (Beta)
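    As a sketch of the PowerShell approach from the article above (the URLs and scope below are illustrative for the sip96x2.com topology, not confirmed values; run from the Lync Server Management Shell):

    # Add a dialin simple URL entry that both sites can resolve
    $entry = New-CsSimpleUrlEntry -Url "https://dialin.sip96x2.com"
    $dialin = New-CsSimpleUrl -Component "dialin" -Domain "*" -SimpleUrl $entry -ActiveUrl "https://dialin.sip96x2.com"
    Set-CsSimpleUrlConfiguration -Identity "Global" -SimpleUrl @{Add=$dialin}
    # Depending on the change, Enable-CsComputer may also need to run on each Front End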

  • How Front End pool deals with fail over to keep user state?

         Hello to all. I searched a lot of articles to understand how Lync 2010 keeps user state if a failure happens on a Front End pool node, but didn't find anything clear.
         I found some MS info about this topic: "The Front End Servers maintain transient information—such as logged-on state and control information for an IM, Web, or audio/video (A/V) conference—only for the duration of a user’s session.
    This configuration
    is an advantage because in the event of a Front End Server failure, the clients connected to that server can quickly reconnect to another Front End Server that belongs to the same Front End pool."
        As I read it, the client uses DNS to reconnect to another Front End in the pool. When it reconnects to an available server, does the user lose what he/she was doing in the Lync client? Can the server that is now hosting the session recover all
    the "user's session data"? If so, how?
       Regards, EEOC.

    The presence information and other dynamic user data is stored in the RTCDYN database on the backend SQL database in a 2010 pool:
    http://blog.insidelync.com/2011/04/the-lync-server-databases/  If you fail over to another pool member, this pool member has access to the same data.
    Ongoing conversations and the like are cached at the workstation.
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question please click "Mark As Answer".
    SWC Unified Communications

  • Is it possible to add hyper-V fail over clustering afterwards?

    Hi,
    We are testing Windows 2012 R2 Hyper-V using only one standalone host without failover clustering, now with a few virtual machines. Is it possible to add failover clustering afterwards, add a second Hyper-V node and a shared disk, and move the virtual
    machines there, or do we have to install both nodes from scratch?
    ~ Jukka ~

    Hi Jukka,
    In addition, before you build a Hyper-V failover cluster please refer to the requirements within the article below:
    http://technet.microsoft.com/en-us/library/jj863389.aspx
    Best Regards
    Elton Ji
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.
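    To sketch the path (node names, cluster name and IP below are placeholders): clustering can indeed be added afterwards. The existing VMs keep running while you validate and form the cluster, and each VM becomes highly available once its files sit on shared storage:

    # Run on each node, then validate and create the cluster
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    Test-Cluster -Node HV01, HV02
    New-Cluster -Name HVCL01 -Node HV01, HV02 -StaticAddress 10.0.0.50
    # Once a VM's storage is on a cluster disk/CSV, make it highly available
    Add-ClusterVirtualMachineRole -VirtualMachine "VM01"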

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail over from that node and restart on another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach the max number of diskgroups handled by an ASM instance (63 only, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.,
    whenever you dismount disks from one node and mount them on another;
    So (if both conditions are true) you might be interested in this story,
    then please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroup will be added by users, for DATA, REDO, FRA etc of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-nodes cluster, this fact has two consequences.
    One conseguence is that we hit ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none ot the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, surviving two should get resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances)
    - in case two nodes failed, the only node survived should not be able to mount additional diskgroups
    (because of limit of max 63 diskgroup mounted by an ASM instance), so all other would remain unmounted
    and their db instances stopped (they are not Production instances);
    But it didn't worked, since ASM has parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups,
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and other diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second conseguence is that, every time our CRS handmade scripts dismount diskgroups
    from one node and mount it to another, there are delays in the range of seconds (especially with multipath).
    Also we found inside CRS log that, whenever we mounted diskgroups (on one node only), then
    behind the scenes were created on the fly additional fake resources
    of type ora*.dg, maybe to accomodate the fact that on other nodes those diskgroups were left unmounted
    (once again, instances are single-node here, and not RAC type).
    That's all.
    Did anyone go into similar problems?
    We opened a SR to Oracle asking about what options do we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redolog files also be in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore a Full backup plus Archived Redo Logs plus Online Redo Logs (otherwise you will stop at the latest Archived).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at your will; also, you might create additional scripts in case of additional resources you might need (Oracle Agents, backup agents, file systems, monitoring tools, etc)
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution
    - then they told us to use RAC One Node, which actually has some better features: in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in case of a running transaction), but we already have a few fail-over clusters, we cannot change them all
    So we plan to move OCR and voting disks onto block devices (we think that the other solution, which needs a shared file system, would take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle for confirmation that this is supported; they said yes, but it is discouraged (and also doesn't work with OUI or ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, so we'll start here and, if everything is fine, after a while we'll do it also on the Prod ones.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar
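    To make the plan above concrete, a sketch under the assumptions in this thread (placeholder device paths; see the Notes cited above for the supported procedure): first relocate OCR and voting out of ASM, then flip CLUSTER_DATABASE on the ASM instances:

    # As root: move OCR and voting files onto block devices (Note 428681.1)
    ocrconfig -add /dev/mapper/ocr01
    ocrconfig -delete +CRS_DATA
    crsctl replace votedisk /dev/mapper/vote01

    -- Then, on each ASM instance (SQL*Plus):
    ALTER SYSTEM SET cluster_database = FALSE SCOPE = SPFILE SID = '*';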

  • Failover cluster server - File Server role is clustered - Shadow copies do not seem to travel to other node when failing over

    Hi,
    New to 2012 and implementing a clustered environment for our File Services role.  Have got to a point where I have successfully configured the Shadow copy settings.
    Have a large (15 TB) disk: S:
    Have a VSS drive (volume shadow copy drive): V:
    Have successfully configured through Windows Explorer the Shadow copy settings.
    Created dependencies in the Failover Cluster Server console whereby S: depends on V:
    However, when I fail over the resource and browse the Client Access Point share, there are no entries under the "Previous Versions" tab. 
    When I visit the S: drive in Windows Explorer and open the Shadow Copy dialogue box, there are entries showing the times and dates of the shadow copies that ran on the original node. So the disk knows about the shadow copies that ran on the
    original node, but the "Previous Versions" tab has no entries to display.
    This is in a 2012 server (NOT R2 version).
    Can anyone explain what might be the reason?  Do I have an "issue" or is this by design?
    All help appreciated!
    Kathy
    Kathleen Hayhurst Senior IT Support Analyst

    Hi,
    Please first check the requirements in following article:
    Using Shadow Copies of Shared Folders in a server cluster
    http://technet.microsoft.com/en-us/library/cc779378(v=ws.10).aspx
    Cluster-managed shadow copies can only be created in a single quorum device cluster on a disk with a Physical Disk resource. In a single node cluster or majority node set cluster without a shared cluster disk, shadow copies can only be created and managed
    locally.
    You cannot enable Shadow Copies of Shared Folders for the quorum resource, although you can enable Shadow Copies of Shared Folders for a File Share resource.
    The recurring scheduled task that generates volume shadow copies must run on the same node that currently owns the storage volume.
    The cluster resource that manages the scheduled task must be able to fail over with the Physical Disk resource that manages the storage volume.
    If you have any feedback on our support, please send to [email protected]

  • Which role do I need DFS or File server on fail over cluster server 2012 R2?

    What I want to achieve is to share all my user data files in a central location and have them highly available all the time, whether it's a general share or folder redirection data. BUT I'm a bit confused; I have a failover cluster set up
    on Server 2012, and now I would like to add DFS as a role, but then we have another role called File Server, and virtually it does the same thing as DFS? I mean, it creates a namespace share that can be accessed even if one of the nodes goes down. Now I am thinking:
    DFS does the replication between two physical locations, but failover cluster works slightly differently, and with File Server it pretty much does the same thing except for replicating data from one drive to another. Now what do you suggest I do, or
    did I get the concept wrong like a noob?

    DFS and Failover Clustering for file shares provides a similar end result for file access, but they are significantly different implementations.
    Clustering provides high availability to files by presenting shared access to a set of files served from a cluster.  With 2012 R2 Microsoft added the ability to create a Scale-Out File Server that even allows all nodes of the cluster to serve access to
    the files for a higher level of performance and other great things.  Bottom line with failover clusters for files is that there is a single copy of the file presented from the cluster.
    DFS on the other hand provides high availability to files by presenting multiple copies of the file, making a copy in two or more locations and presenting a namespace that allows access to the file through any of the network paths.  DFS works very
    well for files that are primarily read-only.  When you get into a situation where there is a lot of updating of the shared files, DFS is not a very good solution.  There are ways to implement DFS for read/write files, but it generally requires a
    good knowledge of how the files are used and how you want to manage them.
    The key to answering your question comes in your first sentence "I want to share all my user data files in a central location and to be highly available all the time".  My initial reaction to this is that central location means Failover Cluster
    - there is only a single copy of the file.  However, "all the time" can be compromised by network failures to the central site.  Remote sites would not have access if they can't access the central site.  DFS provides the ability to
    have copies remotely, but then if you allow updating at multiple sites, you have to manage the merging of the changes, among other things.
    . : | : . : | : . tim

    Hi all, Well, having been a full-time Applescripter for five years at Apple, I thought it would be easy to dust my chops off and write a simple script to get a file name from one folder and paste that name onto a file in another folder.... but I gues