Loadsharing between 2 Datacenters

Hi,
I have been assigned the following challenge: a customer wants to build 2 datacenters and connect them to an L3 backbone. I have attached a (very) simplified network diagram.
Challenge: The customer wants to span an L2 domain across both DCs and needs an Active/Active firewall. This in turn means that the traffic flow needs to be symmetric. Since both firewalls (gateways) and clients in this example are in the same stretched L2 subnet, how do I get the clients in DC1 to primarily use the FW 10.0.0.1 as their default gateway, and the clients in DC2 to use 10.0.0.254? Of course the clients need to use DHCP ;)
Possible solution: See the attached diagram. However, this relies on the additional delay of the DCI to assign different default gateways to clients: e.g. a DHCP request from a client in DC1 will get the quickest response from a DHCP server in DC1, which assigns 10.0.0.1 as the default gateway. The DHCP response from DC2 (which would assign 10.0.0.254 as the default gateway) would arrive later and be ignored by the client.
This does not seem like the perfect solution to me, since we are relying on many factors (e.g. the delay may change under other circumstances). Does anybody have other suggestions?
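
For illustration, the split I have in mind would look roughly like this if the DHCP servers were IOS-based (a sketch only; the platform, the /24 mask and the pool names are assumptions):

! DHCP server serving DC1 clients
ip dhcp pool STRETCHED-SUBNET-DC1
 network 10.0.0.0 255.255.255.0
 default-router 10.0.0.1
!
! DHCP server serving DC2 clients
ip dhcp pool STRETCHED-SUBNET-DC2
 network 10.0.0.0 255.255.255.0
 default-router 10.0.0.254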
Thanks in advance!

Relying on the timing of the DHCP offers is not deterministic.
Maybe:
- keep your DHCP split-brain idea
- block DHCP offers from crossing the DCI link (to the other DC) on both sides
- this way hosts in DC1 never get .254 as the default gateway, and vice versa
- make sure the DHCP service has redundancy within each DC (e.g. two DHCP servers per DC)
Or bring the first hop down to the L3 switch the hosts connect to and use a routing protocol between the FWs and the switches:
- use an FHRP between the two DC access switches
- two vIPs, .1 and .254
- DC1 is active for .1, DC2 is standby
- DC2 is active for .254, DC1 is standby
- you could track interfaces, routes, etc. to decide when to change who is the active forwarder
(A rough config sketch of both ideas follows below.)
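
A rough sketch of both ideas, assuming Cisco IOS switches (interface names, VLAN number, HSRP group numbers and priorities are all made up for illustration, and support for IP ACLs on L2 switchports varies by platform):

! Keep the other DC's DHCP offers off the local segment: apply inbound
! on the DCI-facing L2 port on both sides (shown here for DC1).
ip access-list extended DENY-REMOTE-DHCP-OFFERS
 deny udp any eq bootps any eq bootpc
 permit ip any any
!
interface GigabitEthernet1/0/48
 switchport mode trunk
 ip access-group DENY-REMOTE-DHCP-OFFERS in
!
! FHRP variant: the access switches (not the firewalls) present the two
! virtual gateways. DC1 switch is primary for .1, DC2 for .254.
! DC1 access switch:
interface Vlan100
 ip address 10.0.0.2 255.255.255.0
 standby 1 ip 10.0.0.1
 standby 1 priority 110
 standby 1 preempt
 standby 2 ip 10.0.0.254
 standby 2 priority 90
 standby 2 preempt
!
! DC2 access switch (mirror image):
interface Vlan100
 ip address 10.0.0.3 255.255.255.0
 standby 1 ip 10.0.0.1
 standby 1 priority 90
 standby 1 preempt
 standby 2 ip 10.0.0.254
 standby 2 priority 110
 standby 2 preempt

Interface or object tracking ("standby <group> track ...") can then lower the priority and shift the active forwarder when an uplink or the local firewall path fails.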

Similar Messages

  • Communication between data centres

    Hi,
    I'm new to vCloud Air and I want to establish communication between 2 datacenters.
    DataCenter.USA -> Gateway (public IP) -> NAT (private IP) -> <Server>, <Client1>
    DataCenter.GER -> Gateway (public IP) -> NAT -> <Client2>
    I'm trying to establish bi-directional connectivity between,
    - server & client2
    - client1 & client2
    Please suggest.
    Vinay

    Hi Vinay.
    You may want to have a look at the tutorials (here: Cloud Computing Tutorials | vCloud Air by VMware)
    Particularly the NAT tutorial: Introduction to Gateway Services: Network Address Translation - VMware vCloud Air
    Consider also that you will need to configure the firewall services to allow / block traffic to/from those VMs.

  • Active/Active datacenters with CSS

    This is going to sound pretty general at first, but is there any way to do active/active balancing between two datacenters using Cisco CSSs?
    Let me explain that a bit further.
    In my case, I'm not necessarily interested in having clients load-balanced between two datacenters as a whole, but more in having members of a load-balanced service group located in two different datacenters.
    For example, I have two datacenters, DC1 and DC2, each with a public IP for a certain TCP application. The public IPs are the front-end addresses for a load-balanced cluster of application servers. In the past, we've had it so that each datacenter was completely separate -- each public IP went to a group of servers within that datacenter. I want to change it, though, so that each public IP is the front end for a load-balanced group of servers located in both datacenters. Currently, each datacenter is its own /24 subnet.
    I do realize that it might not be possible to adapt the existing infrastructure to support inter-datacenter load balancing; in that case, can anybody point me in the right direction to build a new infrastructure that can support it?
    Thanks in advance for the help.

    Thanks Gilles. Just to clarify, you're saying I need to NAT the incoming client connections to something unique per datacenter, right? So, for example, incoming connections in DC1 will be NAT'd to 10.250.0.0/16, and incoming connections in DC2 will be NAT'd to 10.240.0.0/16. Then I configure routing back through the CSS for those ranges?
    Thanks again,
    --Brandon
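
    For what it's worth, the return routing being asked about could look roughly like this on whatever plain IOS router acts as the servers' gateway in each DC (a sketch; only the two NAT ranges come from the thread, the CSS-facing next-hop addresses are made up):

    ! DC1 server gateway: replies to the DC1 NAT pool go back via the local CSS
    ip route 10.250.0.0 255.255.0.0 10.1.1.5
    ! DC2 server gateway: replies to the DC2 NAT pool go back via the DC2 CSS
    ip route 10.240.0.0 255.255.0.0 10.2.1.5

    That way each server's replies to NAT'd clients return through the CSS that handled the original connection, keeping the flows symmetric per datacenter.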

  • Questions in Lync 2013 HADR

    Hi Team,
    One of our customers raised the following query:
    In our scenario, we want Active/Active High availability between different geolocations with RPO=0 and RTO near zero (seconds).
    Questions:
    1. Isn't this possible with pool pairing and AlwaysOn database availability groups with synchronous commit?
    2. What is the bandwidth needed between both sites?
    3. Do you think to achieve Active/Active high availability (RPO=0, RTO=+/-0) for Lync between 2 datacenters we should go with the following scenario:
    --> Storage: virtualization (stretched LUNs)
    -->Compute: Hyper-v Clustering (failover cluster)
    -->DNS: Global Datacenter Server Load Balancer
    4. What is the RTO and RPO in your proposed solution?
    Please advise. Many Thanks.

    1) No. Pool pairing doesn't automatically fail over, therefore it does not meet the requirements. Also, HA within a pool isn't supported across geographic locations, so I don't believe this requirement can be met within the supported model.
    If you have a solid enough pipe between the locations with very low latency, you could go unsupported with the old Metropolitan Site Resiliency model:
    https://technet.microsoft.com/en-us/library/gg670905(v=ocs.14).aspx but it is not supported in 2013.
    2) This can't be answered easily, it depends on what they're doing and using. How many users, how much archived data... the SQL mirroring will be quite a bit, as well as the shared presence data on front ends.  Will they use video between sites?  
    Too many questions to get any kind of reliable answer.
    3) If RTO/RPO is this critical, then I'm assuming it's voice.  If it's not, then a short outage should be more tolerable.  If it is voice, do not leave the supported model... just don't.  You don't want to be in that
    situation when systems are down and it's your phone.  No live migrations, just what's supported via TechNet and virtualization whitepapers.
    4) My proposed solution would be HA pools in both datacenters, built big enough that they're unlikely to go down. If a site does go down, pool failover can happen in a reasonable amount of time, perhaps 15 minutes if you're well prepared,
    but phones could potentially stay online during this time. 
    -Anthony

  • CWMS HA and MDC in the same time

    Hi,
    One of our customers has a Main DC and a DR site. They require CWMS 2.5 in the Main DC with HA for 250 ports, and, in case the Main DC goes down, the DR site will be the backup, containing CWMS 2.5 for 250 ports without HA. They also require the Main DC and DR to work as active/active.
    Is that scenario possible and valid: 250 ports with HA in one data center, another 250 ports without HA in the other data center, and an MDC license between both datacenters so they work as active/active?
    Please advise.
    Regards,

    Hi Asherif,
    It will not be possible to have a CWMS HA system in the primary DC and a non-HA system in the DR DC acting as one MDC. The reason is that a system running HA cannot be joined to an MDC.
    Ref:
    http://www.cisco.com/c/en/us/td/docs/collaboration/CWMS/2_5/Administration_Guide/Administration_Guide/Administration_Guide_chapter_0111.html
    The HA system has the following constraints:
    A system running HA cannot join a Multi-data Center (MDC).
    However, the above scenario is a perfect fit for an MDC system. You can have two CWMS servers, one at each DC, joined in an MDC setup. Both servers will function as one system, and end users will access both using one URL/number. To the end users, the MDC will look like a normal single CWMS system, yet it will provide all the HA and DRS functionality you would normally seek.
    You will need MDC license for each DC.
    Let me know if you have any more queries on this.
    -Terry

  • Network Ports requirements in Enterprise Lync Infrastructure with DR

    Hello,
    We are in process of Lync Server 2013 enterprise deployment where all the workloads will be used. We have two distinct datacenters representing two different sites. Below is the infrastructure information
    Single Forest
    ContoBill.Com
    Empty root domain with three child domains
    Users.contobill.com
    Resources.contobill.com
    Staging.contobill.com
    Two AD sites, Default and DR; both will have DC/GC/DNS for all the domains
    Physically we have three tier architecture
    Two Datacenters both are identical, details below
    Tier1: DMZ that will host the servers that are not domain-joined and are used for internet gateways such as Edge, proxy, etc.
    Tier2: Internal network (intranet) that will host the database servers and applications
    Tier3: Internal network (Intranet) that will host Exchange, SharePoint and Lync
    Each tier is a different IP subnet and there is a firewall between each of them, and also there is a firewall between the datacenters as well.
    I have following questions in order to plan the lync deployment in highly available environment and maintain supportability.
    Q1. Does Microsoft support having a firewall between a pool pair? The pool is active/passive and will be utilized only in case of disaster recovery.
    Q2. If the scenario in Q1 is supported, which ports are required to be opened on the network firewall in order to successfully synchronize between the pair?
    Q3. Does Microsoft support having a firewall between the front-end and back-end servers? If yes, which ports need to be opened on the network firewall?

    You can find all the Ports and Protocols required by all Lync components at http://technet.microsoft.com/en-us/library/gg398833.aspx

  • Site Resilience

    Greetings,
    New Exchange Server 2013 Deployment,
    Two Sites (Main Site, and DR Site)
    Each Site has one Edge Server, CAS Server and Two MBX Servers
    - External DNS Records Resolve to Edge Servers with Round Robin
    - DR Site should automatically respond in case of any failure in Main Site
    - CAS Server in DR should respond to users in case the CAS Server in the Main Site is down, and vice versa
    Questions
    1) What is the external and internal DNS (DNS records) configuration to achieve high availability on the Edge and CAS servers? Should DNS be configured with round robin for the Edge and CAS servers?
    2) What DNS records should be present in the case of a split DNS design (internal/external)?
    Thanking you
    Jamil

    Li Zhen ...
    First, I did not ask about DAG; if you just read the entire thread you could have seen I am asking about the CAS and Edge roles ... specifically about DNS records related to a high-availability scenario of two sites.
    Second, this is for you and Andy, I am quite sure you are out of technology, I am asking about Exchange Server 2013, just search for "High Availability and Site Resilience" in Exchange Online help, and see the section
    of "Site resilience" you will find:
    ==================
    Site resilience
    Although Exchange 2013 continues to use DAGs and Windows Failover
    Clustering for Mailbox server role high availability and site resilience, site
    resilience isn't the same in Exchange 2013. Site resilience is much better in
    Exchange 2013 because it has been simplified. The underlying architectural
    changes that were made in Exchange 2013 have significant impact on the recovery
    aspects of a site resilience configuration.
    In Exchange 2010, mailbox (DAG) and client access (Client Access
    server array) recovery were tied together. If you lost all of your Client Access
    servers, the VIP for the array, or a significant portion of your DAG, you were
    in a situation where you needed to do a datacenter switchover. This is a
    well-documented and generally well-understood process, although it takes time to
    perform, and requires human intervention to begin the process.
    In Exchange 2013, if you lose your Client Access server array for
    whatever reason (for example, the load balancer fails), you don't need to
    perform a datacenter switchover. With the proper configuration, failover happens
    at the client level and clients are automatically redirected to a second
    datacenter that has operating Client Access servers, and those operating Client
    Access servers proxy the communication back to the user's Mailbox server, which
    remains unaffected by the outage (because you don't do a switchover). Instead of
    working to recover service, the service recovers itself and you can focus on
    fixing the core issue (for example, replacing the failed load balancer).
    Furthermore, with the namespace simplification, consolidation of
    server roles, de-coupling of Active Directory site server role requirements,
    separation of Client Access server array and DAG recovery, and load balancing
    changes, there are changes in Exchange 2013 that now enable both Client Access
    server and DAG recovery to be separate and automatic across sites, thereby
    providing datacenter failover scenarios, if you have three locations.
    In Exchange 2010, you could deploy a DAG across two datacenters and
    host the witness in a third datacenter and enable failover for the Mailbox
    server role for either datacenter. But you didn't get failover for the solution
    itself, because the namespace still needed to be manually changed for the
    non-Mailbox server roles.
    In Exchange 2013, the namespace doesn't need to move with the DAG.
    Exchange leverages fault tolerance built into the namespace through multiple IP
    addresses, load balancing (and if need be, the ability to take servers in and
    out of service). Modern HTTP clients work with this redundancy automatically.
    The HTTP stack can accept multiple IP addresses for a fully qualified domain
    name (FQDN), and if the first IP address it tries fails hard (that is, it can't
    connect), it will try the next IP address in the list. In a soft failure
    (connection is lost after the session is established, perhaps due to an
    intermittent failure in the service where, for example, a device is dropping
    packets and needs to be taken out of service), the user might need to refresh
    their browser.
    This means the namespace is no longer a single point of failure as
    it was in Exchange 2010. In Exchange 2010, perhaps the biggest single point of
    failure in the messaging system is the FQDN that you give to users because it
    tells the user where to go. In the Exchange 2010 paradigm, changing where that
    FQDN goes isn't easy because you have to change DNS, and then handle DNS
    latency, which in some parts of the world is challenging. And you have name
    caches in browsers that are typically about 30 minutes or more that also have to
    be handled.
    One of the changes in Exchange 2013 is to enable clients to have
    more than one place to go. Assuming the client has the ability to use more than
    one place to go (almost all the client access protocols in Exchange 2013 are
    HTTP based (examples include Outlook, Outlook Anywhere, EAS, EWS, OWA, and EAC),
    and all supported HTTP clients have the ability to use multiple IP addresses),
    thereby providing failover on the client side. You can configure DNS to hand
    multiple IP addresses to a client during name resolution. The client asks for
    mail.contoso.com and gets back two IP addresses, or four IP addresses, for
    example. However many IP addresses the client gets back will be used reliably by
    the client. This makes the client a lot better off because if one of the IP
    addresses fails, the client has one or more other IP addresses to try to connect
    to. If a client tries one and it fails, it waits about 20 seconds and then tries
    the next one in the list. Thus, if you lose the VIP for the Client Access server
    array, recovery for the clients happens automatically, and in about 21
    seconds.
    The benefits include the following:
    In Exchange 2010, if you lose the load balancer in your primary datacenter
    and you don't have another one in that site, you had to do a datacenter
    switchover. In Exchange 2013, if you lose the load balancer in your primary
    site, you simply turn it off (or maybe turn off the VIP) and repair or replace
    it. Clients that aren't already using the VIP in the secondary datacenter will
    automatically fail over to the secondary VIP without any change of namespace,
    and without any change in DNS. Not only does that mean you no longer have to
    perform a switchover, but it also means that all of the time normally associated
    with a datacenter switchover recovery isn't spent. In Exchange 2010, you had to
    handle DNS latency (hence, the recommendation to set the Time to Live (TTL) to 5
    minutes, and the introduction of the failback URL). In Exchange 2013, you don't
    need to do that because you get fast failover (20 seconds) of the namespace
    between VIPs (datacenters).
    Because you can fail over the namespace between datacenters, all that's
    needed to achieve a datacenter failover is a mechanism for failover of the
    Mailbox server role across datacenters. To get automatic failover for the DAG,
    you simply architect a solution where the DAG is evenly split between two
    datacenters, and then place the witness server in a third location so that it
    can be arbitrated by DAG members in either datacenter, regardless of the state
    of the network between the datacenters that contain the DAG members.
    In this scenario, the administrator's efforts are geared toward simply
    fixing the problem, and not spent restoring service. You simply fix the thing
    that failed; while service has been running and data integrity has been
    maintained. The urgency and stress level you feel when fixing a broken device is
    nothing like the urgency and stress you feel when you're working to restore
    service. It's better for the end user, and less stressful for the
    administrator.
    You can allow failover to occur without having to perform
    switchbacks (sometimes mistakenly referred to as failbacks). If you lose Client
    Access servers in your primary datacenter and that results in a 20 second
    interruption for clients, you might not even care about failing back. At this
    point, your primary concern would be fixing the core issue (for example,
    replacing the failed load balancer). After it's back online and functioning,
    some clients will start using it, and other clients might remain operational
    through the second datacenter.
    Exchange 2013 also provides functionality that enables
    administrators to deal with intermittent failures. An intermittent failure is
    where, for example, the initial TCP connection can be made, but nothing happens
    afterward. An intermittent failure requires some sort of extra administrative
    action to be taken because it might be the result of a replacement device being
    put into service. While this repair process is occurring, the device might be
    powered on and accepting some requests, but not really ready to service clients
    until the necessary configuration steps are performed. In this scenario, the
    administrator can perform a namespace switchover by simply removing the VIP for
    the device being replaced from DNS. Then during that service period, no clients
    will be trying to connect to it. After the replacement process has completed,
    the administrator can add the VIP back to DNS, and clients will eventually start
    using it.
    ==================
    1. As I already said, if you only fail over CAS but not the Mailbox role, it does not make any sense while all the databases are dismounted.
    2. In terms of DAG technology, there is no difference between 2010 and 2013. I don't think I am out of technology. If you look at my profile, I am an MCSE on Exchange 2013 charter member.

  • ISE node group behind load balancer

    I'm trying to gather info on distributed deployment w/ multiple PSN nodes.
    Having read through some documents, it looks like you can put multiple PSN's in a node group, and then place the node group behind a load balancer.
    Q1:
    Node group config requires multicast.
    The Cisco ACE LB doesn't support multicast, except in bridge mode.
    How do people support a distributed deployment with a node group behind a Cisco ACE?
    Q2:
    User guide says: "We recommend that you have two, three, or a maximum of four nodes in a node group."
    http://www.cisco.com/en/US/docs/security/ise/1.1.1/user_guide/ise_dis_deploy.html#wp1134272
    What if we need more than 4 PSN nodes to support our network & user base?
    Q3:
    Has anyone been able to implement distributed deployment between two datacenters behind GSS?
    If GSS isn't possible, we'll be happy to just have it in working state behind ACE LB.
    thx!

    I have had close to zero experience with LBs so my answers will be limited:
    Q1: I don't think the multicast plays any role with the LB. The multicast address is needed for the ISE nodes for replication
    Q2: You will have to create a new node group with a new multicast address
    Q3: No help here
    Couple of other things to remember:
    1. The nodes must be layer 2 adjacent
    2. You must use routed mode...no NAT/SNAT. Each node must be reachable directly from the end clients
    3. You must configure stickiness (persistence)
    4. The Load balancers must be listed as NADs in ISE
    Hope this provides some help to you.

  • Site to Site Data Replication

    Hello,
    We are currently working to design a DR site with site to site data replication via Oracle products.
    CWDM and DWDM seem interesting, but I would like to know how the communication over these solutions happens. Is it over an IP network with IP source/target addresses, or something else? What protocol is used?
    Secondly, we are looking for 40 Mbps of bandwidth over fibre between the active and DR sites. Is CWDM/DWDM the right option, or are there other options, such as plain routing via edge routers between the data centers?
    Thanks.

    Recently I set up replication between 2 datacenters that were about 35 miles apart. We had a DWDM/PtP link between them running at 10 Gbit total bandwidth, and I had a 2 Gbit slice of that. The latency of that DWDM and the speed of my connection allowed me to run a simple ISL between the 2 sites. Nothing special was required; I just built a channel and trunked it. Worked great. Just thought I would share a past experience in case it is in line with what you are trying to do.

  • CWDM Question

    Just to pick your brain: I have 2 pairs of dark fibers between two datacenters. The dark fibers terminate on 8-port channel CWDM, and two pairs of Catalyst 6509 switches connect to the CWDM using 2 channels each at each datacenter. The connection between the 6509s via the CWDM is Layer 3 routed using EIGRP. I would also like a Layer 2 connection between the datacenters for server clustering (two servers between the DCs need to be on the same subnet), since I have a few spare channels on the CWDM, and I don't want to encounter any STP issues between the 6509 switches. How can I achieve this?
    My idea:
    Don't have any Layer 2 connection on the 6500s, since they are the core. Instead, connect another pair of Catalyst 4507s at each datacenter to the spare channels on the CWDM, run a Layer 2 trunk between the datacenters from the Catalyst 4507s acting as distribution switches, passing frames between the two servers at Layer 2 for VLAN 95, and have the 4507s connect to the core 6509s via Layer 3 routing. So the Catalyst 4507 will act as distribution and access.
    Will my idea work?
    Thanks.

    Hi
    At a high level I cannot see why this wouldn't work, although you might consider either 3750-E or 4948 switches rather than the 4500, which seems like slight overkill for forming a separate L2 link.
    You don't say what function your 6500s serve, but assuming they are the core within your DCs, I would support separate switches for the L2 connectivity if you can afford it.
    HTH
    Jon
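
    For what it's worth, the L2 part of the proposed design is just a trunk limited to the cluster VLAN; a minimal sketch assuming IOS on the 4507s (interface names and the routed-uplink addressing are made up, VLAN 95 comes from the question):

    ! 4507 port facing the spare CWDM channel towards the other DC
    interface TenGigabitEthernet1/1
     switchport mode trunk
     switchport trunk allowed vlan 95
    ! (older supervisors may also need "switchport trunk encapsulation dot1q")
    !
    ! 4507 uplink to the local 6509 core, kept routed
    interface TenGigabitEthernet1/2
     no switchport
     ip address 10.95.1.2 255.255.255.252

    Pruning the trunk down to VLAN 95 and keeping the core uplinks Layer 3 keeps the spanning-tree domain for the stretched VLAN contained to the two 4507s.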

  • ACE over NEXUS with Overlay Transport Virtualization (OTV) HELP!!!!

    Hi guys,
    I need to implement two datacenters in different locations, and I have a little question.
    Scenario:
    Both datacenters will have Cisco Nexus 7010 in the core, and I need to configure two ACEs in each site.
    My question is: can I configure 2 ACEs in redundancy mode over one Layer 3 circuit between the two datacenters, connected to the Nexus and running Overlay Transport Virtualization?
    I hope that with this design I can configure a server farm and a backup server farm in different locations on the same ACE, ensuring good redundancy between the ACEs and the balanced applications.
    Thanks for all your help and if you have example configuration or some information please let me know.
    Luis Alonso
    Costa Rica.

    Hello Luis,
    Just to clarify, are you asking to run one pair (active/standby) in one DC and another pair (active/standby) in the other DC, or are you asking if OTV will "enable" you to split the ACE FT pair across the DCs? If the latter is what you are looking to achieve, then OTV isn't what you need. As it relates to ACE and the 7K DC, OTV was intended to enable/deliver the dynamic workload scaling (DWS) functionality between the ACE and VMware hosts/VMs.
    However, if you are looking for an active/standby or even an active/active DC load-balancing solution, the ACE GSS (Global Site Selector) deployed in tandem with the ACE LBs will provide either of these for you. The GSS provides intelligent DNS / load-based global load balancing for active/active DC server farms.
    Here is a quick white paper overview related to OTV/DWS and ACE..
    http://tools.cisco.com/search/display?url=http%3A%2F%2Fwww.cisco.com%2Fen%2FUS%2Fprod%2Fcollateral%2Fmodules%2Fps2706%2Fat_a_glance_c89-644619.pdf&pos=1&strqueryid=&websessionid=xpM_tWFOyN-1zcp8veaCVJn
    BR.

  • How connect to airport from windows partition on mbp?

    I have recently added Windows 7 to my MBP. The MBP is connected to my home network via Airport Extreme. When I'm in Windows, there is no connection to a network. Whenever I try to set up a network connection, a screen shows "no connection" and stops me cold. I do get a screen that asks several tech questions, such as ISP address, password, etc.
    I did find my "IP" address in Airport Utility, but I don't know if that's the same as "ISP." If anyone has successfully installed Windows 7 on their Mac and established a connection with Airport Extreme, I'd like to know how they did it. If you need more info, let me know.

    Hi,
    To connect to the virtual network with a point-to-site VPN, you'll need to install a VPN package on the VPN client computer. Since Windows 10 and Mac OS are not supported, the related VPN packages are unavailable.
    In addition, if you have an on-premises VPN device, you can create a site-to-site VPN instead. Or you can use ExpressRoute to create private connections between Azure datacenters and infrastructure that's on your premises.
    For more detailed information, please refer to the links below:
    Configure a Point-to-Site VPN in the Management Portal
    ExpressRoute or Virtual Network VPN – What’s right for me?
    Best regards,
    Susie

  • WAN Redundancy

    Between two datacenter sites we have a WAN connection provided by a local telco, and routing between the sites uses BGP.
    We also have a layer 2 fibre end to end connection between the datacenters for diversity, although this fibre link is not part of the BGP routing process.
    I would like to enable automated failover of the routing between sites, utilizing the secondary end-to-end fibre connection in the event that we lose the primary connection.
    Currently, I am only utilizing the end to end fibre connection for some traffic using policy based routing. In the event of a BGP outage I have to manually add static routing via the secondary fibre link to reconnect the sites.
    I would be interested in how I could better automate this process and utilize both circuits for redundancy (BGP, IP SLAs?).
    I would appreciate any recommendations or direction on a possible solution.
    thanks, Peter.

    Thanks guys for your recommendations...I just want to get a little more detail to move forward
    We have a pretty minimal setup with a router at each datacenter. The primary connection currently runs eBGP via a local telco, and most of our traffic is routed this way.
    A second telco then provided a Layer 2 fiber connection, which we have terminated at both ends on the datacenter routers as a directly connected end-to-end /30.
    Both connections are approximately 20 Mb metro between the datacenters, with 100 Mb access terminations.
    I would like to utilize both links as much as I can and use dynamic routing to have routes fail over if one of the links goes down.
    As mentioned, I only currently use the direct fiber connection by implementing policy based routing and if the primary E-BGP connection goes down I have to add static routing to use the direct router connection.
    I don't really care which connection is the primary / secondary but I do want to use both links and have dynamic failover.
    Would you recommend declaring an internal routing process for the direct fibre connection using OSPF, RIP v2 or EIGRP for this?
    If the traffic then prefers the direct fiber link for the datacenter-to-datacenter routing, is it possible to use policy-based routing across the eBGP link?
    Anything else to watch out for or be careful of with this setup?
    I know that we should have dual routers etc at each end, but we currently just have a warm spare at each end and I want to move forward from the static routing arrangement and then tackle the single point of failure issue a little bit later.
    thanks, Peter.
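
    One common way to get the dynamic failover described above, sketched here purely as an illustration (the remote prefix and the /30 next-hop address are made up): a floating static over the direct fibre /30, optionally tied to an IP SLA probe of the far-end router, so it is only used when the eBGP-learned route disappears or the far end stops answering.

    ! Probe the far end of the direct fibre /30
    ip sla 10
     icmp-echo 192.0.2.2
     frequency 5
    ip sla schedule 10 life forever start-time now
    !
    track 10 ip sla 10 reachability
    !
    ! Floating static for the remote DC's prefix: AD 250 loses to eBGP (AD 20)
    ! while the primary is up, and is withdrawn if the probe fails
    ip route 10.20.0.0 255.255.0.0 192.0.2.2 250 track 10

    Running OSPF or EIGRP over the fibre /30 instead would behave much the same, since eBGP's lower administrative distance keeps the telco path preferred until it fails; the choice mostly comes down to how many prefixes need to be carried and how much tuning is wanted.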

  • DAG on WAN with Inconsistent Link

    We want to implement an Exchange 2013 DAG across two different sites, Site A and Site B. The sites are geographically separate and are connected by MPLS and satellite links. But the MPLS link at Site B is very inconsistent and goes down every day for 3-4 hours; the response time is only 45 ms (TTL=120). So whenever the link goes down we switch to the satellite link, which is worse than MPLS: the response time on the satellite link is 800 ms.
    We created a test lab and configured a DAG between the two sites, but whenever the MPLS link goes down or we switch to the satellite link, we have to reseed the database copy to the remote site, and the total database size is 4 TB. Our ultimate target is to run Exchange 2013 high availability within each local site and site resilience between the two sites.

    Or read this from the Exchange team... you don't even need two namespaces, and a quorum witness on a 3rd site will also avoid the issue of the DAG going down:
    http://blogs.technet.com/b/scottschnoll/archive/2012/11/01/storage-high-availability-and-site-resilience-in-exchange-server-2013-part-3.aspx
    Since you can failover the namespace between datacenters now, all that is needed to achieve a datacenter failover is a mechanism for failover of the Mailbox role across datacenters. To get automatic failover for the DAG, you simply architect a solution where
    the DAG is evenly split between two datacenters, and then place the witness server in a
    third location so that it can be arbitrated by DAG members in either datacenter, regardless of the state of the network between the datacenters that contain the DAG members. The key is that third location is isolated
    from network failures that affect the first and/or second location (the locations containing the DAG members).

  • Non http GSLB with failover pairs in each DC

    Hi guys,
    I'm fairly confident that it's possible to use GSLB between two datacenters for non-HTTP traffic. My question is, can you do this while having redundant CSSs in each datacenter?
    Thanks

    Just to make sure I understand...
    You can either configure box-to-box redundancy or VIP redundancy within the datacenter (locally) -- not both. Between the datacenters you use GSLB.
    You mentioned that the easiest solution at the local level would be box-to-box, but could you use VIP redundancy locally if you still wanted local services to be redundant within each datacenter? Basically, can you mix it so that you have some services redundant across datacenters, but other services redundant within datacenters?
    Thanks again,
