Data Center InterConnect with Dark Fibre

Dear all,
We are designing a Data Center Interconnect for our two data centers on top of a 10G dark fibre.
Our primary goals are to:
extend a few VLANs between the two DCs;
support VMware vMotion between the two DCs;
run asymmetric SAN synchronization;
use FCoE for SAN connectivity between the two DCs.
So may I ask whether we could run both LAN and SAN traffic over this dark-fibre connection? We have a Nexus 5K in one DC and a Nexus 7K in the other; are any specific devices required to enable both LAN and SAN connections?
It would be really appreciated if anyone could shed some light on this. Any suggestions are welcome!
Best Regards,
James Ren
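
For context, running LAN VLANs and FCoE over a single trunked port channel on NX-OS generally takes the shape sketched below. This is only a rough illustration, not a validated DCI design: the VLAN/VSAN numbers and port-channel ID are placeholders, and multi-hop FCoE across a long dark-fibre span has strict lossless-Ethernet, distance and platform/licensing requirements (for example, FCoE on the N7K lives in a storage VDC on F-series modules) that would need to be verified for these platforms first.

feature fcoe
!
! Placeholder: VLAN 100 is dedicated to FCoE and mapped to VSAN 10
vlan 100
  fcoe vsan 10
vsan database
  vsan 10
!
! Ethernet port channel over the dark fibre carries the data VLANs plus the FCoE VLAN
interface port-channel10
  switchport mode trunk
  switchport trunk allowed vlan 50-60,100
!
! Virtual Fibre Channel interface bound to the port channel (VE port towards the remote switch)
interface vfc10
  bind interface port-channel10
  switchport mode E
  switchport trunk allowed vsan 10
  no shutdown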

Similar Messages

  • Data Center Interconnect - Layer 2 Extension using vPC

    Hi, I would like, if possible, to validate a design that connects 4 Nexus 7010s to provide data center interconnect and Layer 2 extension, using the same vPC and the same port-channel number and only 2 links between them, as shown in the attached PPT.
    Is anybody using a design like that?

    This will work if it is *only* Layer 2 between the two pairs of N7Ks. You cannot create an L3 SVI and attempt to route it via the vPC port channel; that won't work.  If you need both L3 and L2, one option is to use OTV.  Rgds, Eng Wee
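
    For what it's worth, the Layer 2 side of such a back-to-back (double-sided) vPC design boils down to something like the NX-OS sketch below on one of the four N7Ks, with the other three mirrored. The domain ID, keepalive addresses, VLAN range and port-channel numbers are placeholders, and, per the note above, the inter-DC port channel stays purely Layer 2 with no SVI routed across it.

    feature vpc
    feature lacp
    !
    vpc domain 10
      peer-keepalive destination 192.0.2.2 source 192.0.2.1
    !
    ! vPC peer-link between the two local N7Ks
    interface port-channel1
      switchport
      switchport mode trunk
      vpc peer-link
    !
    ! Double-sided vPC towards the remote N7K pair (the DCI port channel, L2 only)
    interface port-channel20
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      vpc 20
    !
    interface Ethernet1/1
      switchport
      switchport mode trunk
      channel-group 20 mode active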

  • Data Center Interconnect using MPLS/VPLS

    We are deploying a backup data center and need to extend a couple of VLANs over to it. The two DCs are interconnected by a fibre link, which we manage, terminating on ODC2MAN and ODCXMAN. We run MPLS on these devices, ODC2MAN and ODCXMAN (Cisco 6880), as PE routers. I have configured OSPF between these devices and advertised their loopbacks.
    I need configuration assistance on my PEs (ODCXMAN and ODC2MAN) to run the VFI and the VPLS instances. The VLANs on ODCXAGG need to extend to ODC2AGG.
    Also, I am looking for configuration assistance such that each core device has 3 EIGRP neighbors.
    For example:
    ODC2COR1 should have EIGRP neighbors ODCXCOR1, ODCXCOR2 and ODC2COR2, and my VPLS cloud should be emulated as a transparent bridge to my core devices, so that ODC2COR1 appears directly connected to ODCXCOR1 and ODCXCOR2 and forms CDP neighbor relationships. I have attached the diagram. Please let me know your inputs.

    Hello.
    If you are running an Active/Backup DC scenario, I would suggest making the network design and configuration exactly the same on both sides. This includes platforms, interconnect types, and so on.
    Do you know what the latency is on the fibre between these two DCs?
    Another question: why do you run the 6880s in VSS? Do you really need this?
    A question about the diagram: are you going to use 4 fibres for the DC interconnection?
    PS: did you think about OTV+LISP instead of MPLS?
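
    On the VFI/VPLS configuration you asked about: a minimal legacy VPLS sketch on an IOS-based PE such as the 6880 generally looks like the lines below, assuming LDP runs between the loopbacks you already advertise via OSPF. The VFI name, VPN ID and peer loopback address are placeholders; repeat the VFI per VLAN to be extended and mirror it on the other PE.

    mpls ldp router-id Loopback0 force
    !
    ! One VFI per extended VLAN; the neighbor is the other PE's loopback
    l2 vfi VFI_VLAN100 manual
     vpn id 100
     neighbor 10.255.255.2 encapsulation mpls
    !
    ! Attach the VFI to the SVI of the VLAN being extended
    interface Vlan100
     no ip address
     xconnect vfi VFI_VLAN100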

  • Ask the Expert: Scaling Data Center Networks with Cisco FabricPath

    With Hatim Badr and Iqbal Syed
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco FabricPath with Cisco technical support experts Hatim Badr and Iqbal Syed. Cisco FabricPath is a Cisco NX-OS Software innovation combining the plug-and-play simplicity of Ethernet with the reliability and scalability of Layer 3 routing. Cisco FabricPath uses many of the best characteristics of traditional Layer 2 and Layer 3 technologies, bringing them together in a new control-plane and data-plane implementation that combines the immediately operational "plug-and-play" deployment model of a bridged spanning-tree environment with the stability, re-convergence characteristics, and ability to use multiple parallel paths typical of a Layer 3 routed environment. The result is a scalable, flexible, and highly available Ethernet fabric suitable for even the most demanding data center environments. Using FabricPath, you can build highly scalable Layer 2 multipath networks without the Spanning Tree Protocol. Such networks are particularly suitable for large virtualization deployments, private clouds, and high-performance computing (HPC) environments.
    This event will focus on technical support questions related to the benefits of Cisco FabricPath over STP or VPC based architectures, design options with FabricPath, migration to FabricPath from STP/VPC based networks and FabricPath design and implementation best practices.
    Hatim Badr is a Solutions Architect for Cisco Advanced Services in Toronto, where he supports Cisco customers across Canada as a specialist in Data Center architecture, design, and optimization projects. He has more than 12 years of experience in the networking industry. He holds CCIE (#14847) in Routing & Switching, CCDP and Cisco Data Center certifications.
    Iqbal Syed is a Technical Marketing Engineer for the Cisco Nexus 7000 Series of switches. He is responsible for product road-mapping and marketing the Nexus 7000 line of products with a focus on L2 technologies such as VPC & Cisco FabricPath and also helps customers with DC design and training. He also focuses on SP customers worldwide and helps promote N7K business within different SP segments. Syed has been with Cisco for more than 10 years, which includes experience in Cisco Advanced Services and the Cisco Technical Assistance Center. His experience ranges from reactive technical support to proactive engineering, design, and optimization. He holds CCIE (#24192) in Routing & Switching, CCDP, Cisco Data Center, and TOGAF (v9) certifications.
    Remember to use the rating system to let Hatim and Iqbal know if you have received an adequate response.  
    They might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community Unified Computing discussion forum shortly after the event. This event lasts through Dec 7, 2012. Visit this support forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Sarah,
    Thank you for your question.
    Spanning Tree Protocol is used to build a loop-free topology. Although Spanning Tree Protocol serves a critical function in these Layer 2 networks, it is also frequently the cause of a variety of problems, both operational and architectural.
    One important aspect of Spanning Tree Protocol behavior is its inability to use parallel forwarding paths. Spanning Tree Protocol forms a forwarding tree, rooted at a single device, along which all data-plane traffic must flow. The addition of parallel paths serves as a redundancy mechanism, but adding more than one such path has little benefit because Spanning Tree Protocol blocks any additional paths.
    In addition, rooting the forwarding path at a single device results in suboptimal forwarding paths: although a direct connection may exist, it cannot be used because only one active forwarding path is allowed.
    Virtual PortChannel (vPC) technology partially mitigates the limitations of Spanning Tree Protocol. vPC allows a single Ethernet device to connect simultaneously to two discrete Cisco Nexus switches while treating these parallel connections as a single logical PortChannel interface. The result is active-active forwarding paths and the removal of Spanning Tree Protocol blocked links, delivering an effective way to use two parallel paths in the typical Layer 2 topologies used with Spanning Tree Protocol.
    vPC provides several benefits over standard Spanning Tree Protocol, such as the elimination of blocked ports, and both vPC switches can behave as the active default gateway for first-hop redundancy protocols such as Hot Standby Router Protocol (HSRP): that is, traffic can be routed by either vPC peer switch.
    At the same time, however, many of the overall design constraints of a Spanning Tree Protocol network remain even when you deploy vPC, such as:
    1.     Although vPC provides active-active forwarding, only two active parallel paths are possible.
    2.     vPC offers no means by which VLANs can be extended, a critical limitation of traditional Spanning Tree Protocol designs.
    With Cisco FabricPath, you can create a flexible Ethernet fabric that eliminates many of the constraints of Spanning Tree Protocol. At the control plane, Cisco FabricPath uses a Shortest-Path First (SPF) routing protocol to determine reachability and selects the best path or paths to any given destination in the Cisco FabricPath domain. In addition, the Cisco FabricPath data plane introduces capabilities that help ensure that the network remains stable, and it provides scalable, hardware-based learning and forwarding capabilities not bound by software or CPU capacity.
    Benefits of deploying an Ethernet fabric based on Cisco FabricPath include:
    • Simplicity, reducing operating expenses
    – Cisco FabricPath is extremely simple to configure. In fact, the only necessary configuration consists of distinguishing the core ports, which link the switches, from the edge ports, where end devices are attached. There is no need to tune any parameter to get an optimal configuration, and switch addresses are assigned automatically.
    – A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. The Cisco FabricPath solution requires less combined configuration than an equivalent Spanning Tree Protocol-based network, further reducing the overall management cost.
    – A device that does not support Cisco FabricPath can be attached redundantly to two separate Cisco FabricPath bridges with enhanced virtual PortChannel (vPC+) technology, providing an easy migration path. Just like vPC, vPC+ relies on PortChannel technology to provide multipathing and redundancy without resorting to Spanning Tree Protocol.
    • Scalability based on proven technology
    – Cisco FabricPath uses a control protocol built on top of the powerful Intermediate System-to-Intermediate System (IS-IS) routing protocol, an industry standard that provides fast convergence and that has been proven to scale up to the largest service provider environments. Nevertheless, no specific knowledge of IS-IS is required in order to operate a Cisco FabricPath network.
    – Loop prevention and mitigation is available in the data plane, helping ensure safe forwarding that cannot be matched by any transparent bridging technology. The Cisco FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a Reverse Path Forwarding (RPF) check is also applied.
    • Efficiency and high performance
    – Because equal-cost multipath (ECMP) can be used in the data plane, the network can use all the links available between any two devices. The first-generation hardware supporting Cisco FabricPath can perform 16-way ECMP, which, when combined with 16-port 10-Gbps port channels, represents a potential bandwidth of 2.56 terabits per second (Tbps) between switches.
    – Frames are forwarded along the shortest path to their destination, reducing the latency of the exchanges between end stations compared to a spanning tree-based solution.
    – MAC addresses are learned selectively at the edge, allowing the network to scale beyond the limits of the MAC address tables of individual switches.
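
    To illustrate how little configuration is involved, a minimal FabricPath sketch on NX-OS would be roughly the following (the switch-id, VLAN range and interface are placeholders, and the switch-id line is optional since IDs are otherwise assigned automatically):

    install feature-set fabricpath
    feature-set fabricpath
    !
    ! Optional: FabricPath switch IDs are assigned automatically if not set
    fabricpath switch-id 11
    !
    ! VLANs carried over the FabricPath core
    vlan 100-110
      mode fabricpath
    !
    ! Core port towards another FabricPath switch; edge ports keep their normal CE configuration
    interface Ethernet1/1
      switchport mode fabricpath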

  • Single CAS NameSpace in Multi-Data Center Model With Exchange 2013

    Hi
    We are in process of transitioning from Exchange 2007 to Exchange 2013. Our Exchange 2007 infrastructure is as follows:
    2 Data centers (DC 1 and DC 2). Both with active user population. Both have their own direct Internet Connectivity
    Standalone Exchange 2007 mailbox servers in each data center
    Load Balanced CAS (HT co-located) servers using Hardware Load Balancers in each data center. Load balancers are configured with VIP and FQDNs (LoadBalancer1.Com and LoadBalancer2.com)
    Currently No access allowed from Internet except ActiveSync (No OWA or OA)
    Outlook Anywhere is disabled in the Exchange 2007 organization, but once mailboxes are moved to Exchange 2013, OA will definitely be used – we will provide OA on the intranet as well as the Internet
    All the internal URLs including Autodiscover point to VIP (Load Balancer IP)
    Autodiscover is not currently published on Internet, but we have a plan to publish it now once Exchange 2013 is introduced
    We want to keep a single CAS namespace, BYOD.ABC.Com, for our ActiveSync and OA access from the Internet (we are not going to allow OWA). We want to have split DNS for our new Exchange 2013 infrastructure due to
    the simplicity it brings, so we are going to use the one name BYOD.ABC.Com from the Internet. We have a GSLB that provides fault tolerance and geo-load balancing between the two data centers for external requests coming from Exchange clients. When we
    install the new Exchange 2013 servers, they'll be part of a new VIP, so:
    In a 2 data center model, can we name our internal VIPs the same in both data centers (i.e. BYOD.ABC.Com), as we have decided to go with split DNS? Do you see any caveats to this strategy?
    If the above strategy will not work, what are the alternative approach(es)?
    If we configure the same name for the VIPs in both data centers, it will mean that the Autodiscover SCPs for all the Exchange 2013 CAS objects (and Exchange 2007 CAS objects during co-existence) will point to BYOD.ABC.Com. This should not be a problem for
    AD-joined systems, as they'll find and contact Autodiscover endpoints in their own sites (based on the Keywords attribute that tells which AD site an SCP belongs to) –
    please correct me if this is wrong.
    If we configure the same name for the VIPs in both data centers, this also means that we have to configure BYOD.ABC.Com as the external as well as the internal URL on all the Exchange 2013 servers across both data centers – wouldn't that be a problem in terms
    of loops during CAS-CAS proxy/redirection?
    If we configure different names for the VIPs (say BYOD1.ABC.Com and BYOD2.ABC.Com), how will Outlook Anywhere requests be handled in the two data centers? The OA requests from DC1 will expect the Certificate Principal Name to be BYOD1.ABC.Com and requests
    from DC2 will expect the Certificate Principal Name to be BYOD2.ABC.Com. How do we get this working? As far as I know, OA expects the CPN to match its name.
    Thanks
    Taranjeet Singh
    zamn

    Any comments/suggestions from the community?
    Thanks
    Taranjeet Singh
    zamn

  • Configuring an Exchange 2013 DAG on two Windows Server 2012 Datacenter hosts with the Hyper-V role

    Dears,
    I am planning to install two hosts (Windows Server 2012 Datacenter), then install the Hyper-V role on both of them, and then create a VM on each host on which to install Exchange 2013.
    After that I want to configure a DAG between the Exchange servers, so what are the prerequisites for doing that?
    Note: I use external IBM storage that will hold all the VMs and the DAG.
    Many thanks 

    Hi Moon,
    In addition to Gulab's suggestion, I would like to clarify the following things:
    1. Yes, we can use Standard or Datacenter version of the Windows Server 2012 operating system to configure Exchange 2013 DAG.
    2. Each member of the DAG should be running the same operating system.
    3. The DAG with an even number of members should have a witness server. A witness server is a server outside a DAG that's used to achieve and maintain quorum when the DAG has an even number of members.
    What's more, here are some helpful articles for your reference.
    Planning for High Availability and Site Resilience
    http://technet.microsoft.com/en-us/library/dd638104(v=exchg.150).aspx#HR
    High Availability and Site Resilience
    http://technet.microsoft.com/en-us/library/dd638137(v=exchg.150).aspx
    Hope it helps.
    If there are any problems, please feel free to let me know.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • Simplest Data Center Interconnect?

    Hi all,
    What's a simple way to implement an L2 network across 2 L3 DCs connected by 2x1Gig links using a 6504-E with SUP720-3C?! The DCs are only a few kilometres apart and our local service provider can only provide 1Gig fiber links between DCs (which I can then configure as L2 or L3). I do not want to simply configure flat L2 across both DCs - I would like to keep each DC as a separate L3 site and run OSPF for fast convergence and therefore avoid spanning-tree altogether.
    At the moment each DC uses 3750 switches connected by L3 links and runs EIGRP. We then use separate hardware (7200) and L2TPv3 to create some shared L2 networks across that. We're moving to the 6500 platform and so it's a good opportunity to redesign things - and hopefully I can minimise the amount of hardware needed and consolidate using only the 6500 platform in each DC.
    I also have a Cisco ACE appliance to fit at each site, and to have redundancy for these they need to live in a shared network! That's what happens when the design process starts after the kit has already been bought (not my choice btw!).
    Any ideas?

    Howdy,
    The 2x1Gig links are to connect the 2 DCs together - but the question is what's the best way to do this? For example, best practice dictates that sites should be L3 only. However, I also need some kind of L2 connectivity for certain clustered services which require L2.
    What I've ended up doing is a bit of both L2 and L3. Basically I created a L2 etherchannel which only allows 2 things - a VLAN which is used to provide a small /30 link so that I can create SVIs on each end and run L3 on top; and VLANs which are used as pure L2 which run HSRP. Here's the config:
    DC1 switch:
    interface Port-channel1
     description Link to DC2 - Po1
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 2
     switchport trunk allowed vlan 2,120
     switchport mode trunk
    !
    interface Vlan2
     ip address 10.x.x.9 255.255.255.252
     ip ospf network point-to-point
    !
    interface Vlan120
     description Shared VLAN
     ip address 10.120.0.253 255.255.255.0
     standby 120 ip 10.120.0.254
     standby 120 priority 150
     standby 120 preempt

    DC2 switch:
    interface Port-channel1
     description Link to DC1 - Po1
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 2
     switchport trunk allowed vlan 2,120
     switchport mode trunk
    !
    interface Vlan2
     ip address 10.x.x.10 255.255.255.252
     ip ospf network point-to-point
     ip ospf priority 0
    !
    interface Vlan120
     description Shared VLAN
     ip address 10.120.0.252 255.255.255.0
     standby 120 ip 10.120.0.254
     standby 120 preempt
    It does seem to work ok - for example I have different networks at each DC which I can reach independently and I have a couple of VLANs which stretch across sites. The only problem that I can see is that there would be serious problems if the 2 Gig links went down between the 2 switches - each would then be HSRP master. Also there's a trombone effect with traffic from DC2 using DC1 as its default gateway but there's no way around this unless we use OTV or similar!
    Any thoughts are very welcome! Thank you.

  • Data Center to Data Center Layer 2 connectivity

    What would be the best way
    to provide layer 2 connectivity between 2 data centers? Sample router configs?
    Thanks!!
    Gary

    Hi Gary,
    Data center to data center can be connected in different ways (for example a point-to-point link, over MPLS, or some other means), and the configuration all depends on what connectivity you have in your current network setup.
    Check out the below link on Data Center interconnect consideration.
    http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/white_paper_c11_493718.html
    Hope to Help !!
    Remember to rate the helpful post
    Ganesh.H
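
    Since sample router configs were asked for: when the interconnect is a routed/IP path, one common option is an L2TPv3 pseudowire, which on IOS looks roughly like the sketch below. The peer address, VC ID and attachment interface are placeholders; an EoMPLS xconnect looks very similar if MPLS runs between the DCs.

    pseudowire-class DCI-L2TPV3
     encapsulation l2tpv3
     ip local interface Loopback0
    !
    ! Attachment circuit facing the LAN segment to be extended to the other DC
    interface GigabitEthernet0/1
     no ip address
     xconnect 192.0.2.2 100 pw-class DCI-L2TPV3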

  • Deploying Cisco Overlay Transport Virtualization (OTV) in Data Center Networks

    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about how to plan, design, and implement Cisco Overlay Transport Virtualization (OTV) in your Data Center Network with Cisco experts Anees Mohamed Abdulla and Pranav Doshi.
    Anees Mohamed Abdulla is a network consulting engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as vPC, FabricPath, and OTV. He has 10 years of experience in the enterprise data center networking area and has carried various roles within Cisco such as LAN switching content engineer and LAN switching TAC engineer. He holds a bachelor's degree in electronics and communications and has a CCIE certification 18764 in routing and switching. 
    Pranav Doshi is a network consulting engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as vPC, FabricPath, and OTV. Pranav has experience in the enterprise data center networking area and has carried various roles within Cisco such as LAN switching TAC engineer and now network consulting engineer. He holds a bachelor's degree in electronics and communications and a master's degree in electrical engineering from the University of Southern California.
    Remember to use the rating system to let Anees and Pranav know if you have received an adequate response.  
    Because of the volume expected during this event, Anees and Pranav might not be able to answer each question. Remember that you can continue the conversation on the Data Center sub-community forum shortly after the event. This event lasts through August 23, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Dennis,
    All those Layer 2 extension technologies require STP to be extended between data centers if you need to have multiple paths between them. OTV does not extend STP; rather, it has its own mechanism (AED election) to avoid loops when multiple paths are enabled. This means that an STP control-plane issue is not carried over to the other data center.
    OTV natively suppresses unknown unicast flooding across the OTV overlay. Unknown unicast flooding is a painful problem in a Layer 2 network, and its root cause is difficult to identify if you don't have a proper network monitoring tool.
    OTV also has ARP optimization, which eliminates flooding of ARP packets across data centers by responding locally with cached ARP messages. One of the common issues I have seen in data centers is some server or device in the network sending continuous ARP packets, which hit the control plane in the aggregation layer and in turn cause network connectivity issues.
    The above three points demonstrate the Layer 2 failure-domain isolation between data centers. If you have redundant data centers with Layer 2 extended without OTV, a Layer 2 issue of the kind described above that happens in one data center carries the same failure to the second data center, which raises the question of what the point of having two different data centers is if we cannot isolate the failure domain.
    OTV natively supports HSRP localization with a few command lines. This is a very important requirement in building an Active/Active data center.
    Even though your question is related to L2TP, OTV deserves a comparison with VPLS, and that comparison will also be applicable to L2TP. The link below explains it in detail:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-574984.html
    Thanks,
    Anees.
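
    For anyone looking for the general shape of the configuration, a minimal OTV sketch on NX-OS looks roughly like the lines below, assuming a multicast-capable transport between the sites (a unicast adjacency-server mode also exists). The site VLAN, site identifier, groups, join interface and VLAN range are placeholders.

    feature otv
    !
    otv site-vlan 99
    otv site-identifier 0x1
    !
    interface Overlay1
      otv join-interface Ethernet1/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-110
      no shutdown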

  • Data center design guide

    Hi all,
    is anybody familiar with a good Cisco data center design guide involving Nexus 2000, 5000 & 7000 with FCoE?
    thanks,

    Hi ,
    Check out the below link on Data center design with Nexus switches
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572831-00_Dsgn_Nexus_vPC_DG.pdf
    Hope to Help !!
    Ganesh.H
    Remember to rate the helpful post

  • Data Center Design: Nexus 7K with VDC-core/VDC-agg model

    Dear all,
    I'm working with a collapsed VDC-core/VDC-agg model on the same chassis, with 2 redundant Cisco Nexus 7010s and a pair of Cisco 6509s used as service chassis without VSS. Each core VDC has redundant links to 2 PEs based on the Cisco 7606.
    After reading many Cisco design documents, I'm asking what the need for a core layer in a data center is, especially if it is small or medium sized with only 1 aggregation layer and dedicated to a virtualized multi-tenant environment. What is driving the need for a core layer?
    Thanx

    If your data center is small enough to not require a core, then it's fine to run with a collapsed core (distribution + core as the same device).  For a redundant design you need to uplink all your distribution switches to each of your cores.  If you have no cores, then you need a full mesh at your distribution layer (for full redundancy).
    Let's say you have only 4 distribution pairs, so 8 switches.  For full redundancy each one needs an uplink to each of the others.  This means you need 28 total ports to connect all the switches together (n(n-1)/2).  That's also assuming 1 link to each device.  However, if you had redundant cores, the number of links used for uplinks reduces to 21 total links (this includes links between each distribution switch pair in a site, and the link between the two cores).  So here you see you're only saving 7 links; you're not gaining much by adding a core.
    However, if you have 12 distribution pairs, so 24 switches, full redundancy means you have 276 links dedicated for this.  If you add a core, this drops to 61 links.  Here you see the payoff.
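
    As a compact check of those figures: a full mesh of $n$ distribution switches needs $\binom{n}{2} = \frac{n(n-1)}{2}$ links, while the core-based design needs roughly
    $$2n + \frac{n}{2} + 1$$
    links (two core uplinks per switch, one link inside each distribution pair, and one core-to-core link), which gives 28 versus 21 for $n = 8$ and 276 versus 61 for $n = 24$.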

  • Registering with the WebEx Data Center and the Cisco WebEx Node Management System

    Dear guys, ...
    Please help,
    I want to implement a WebEx node on an ASR 1000. I have read in "Configuring the Cisco Webex Node for ASR 100.pdf" that there is a prerequisite to implement it, namely "Registering with the WebEx Data Center and the Cisco WebEx Node Management System".
    Can someone tell me how to do this "Registering with the WebEx Data Center and the Cisco WebEx Node Management System"?
    Is there any step-by-step documentation for "Registering with the WebEx Data Center and the Cisco WebEx Node Management System"?
    Thank you
    BR

    You should have received a PAK Key with your order.  Go to Cisco licensing and enter the PAK Key, as this will start the process.  Once the PAK Key is validated, a screen will be displayed to enter your request for ASR 1000 integration.  It normally takes a few days to a couple of weeks to get the information back from WebEx needed to configure your ASR.
    If you did not get a PAK Key contact your WebEx rep to get the process started to integrate your ASR to your WebEx site.
    Hope this helps
    John

  • Windows server 2012 Data Center with VDI configuration error message ( The remote session was disconnected because there are no remote desktop license servers available)

    Dears,
    I have two Windows Server 2012 Datacenter servers and I configured VDI (Virtual Desktop Infrastructure) on them.
    All my clients have been connecting to both servers using Remote Desktop sessions for the past 5 months.
    Currently, when the clients connect to either of the servers, they receive the following error:
    "The remote session was disconnected because there are no remote desktop license servers available to provide license"
    Kindly note, I installed the Windows Server Datacenter licenses on both servers.
    Regards.

    Hi,
    Please let us know if you have purchased RDS CALs and installed them on your RD licensing server.
    Also, on the RD Session Host servers, please make sure that you have specified the licensing mode and pointed them to the RD licensing server.
    Remote Desktop Services Client Access Licenses (RDS CALs)
    http://technet.microsoft.com/en-us/library/cc753650.aspx
    RD Licensing Configuration on Windows Server 2012
    http://blogs.technet.com/b/askperf/archive/2013/09/20/rd-licensing-configuration-on-windows-server-2012.aspx
    Hope this helps.
    Jeremy Wu
    TechNet Community Support

  • Ask the Expert: Data Center Integrated Systems and Solutions

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about utilizing Cisco data center technology and solutions with subject matter expert Ramses Smeyers. Additionally, Ramses will answer questions about FlexPOD, vBlock, Unified Computing Systems, Nexus 2000/5000, SAP HANA, and VDI.
    Ramses Smeyers is a technical leader in Cisco Technical Services, where he works in the Datacenter Solutions support team. His main job consists of supporting customers to implement and manage Cisco UCS, FlexPod, vBlock, VDI, and VXI infrastructures. He has a very strong background in computing, networking, and storage and has 10+ years of experience deploying enterprise and service provider data center solutions. Relevant certifications include VMware VCDX, Cisco CCIE Voice, CCIE Data Center, and RHCE.
    Remember to use the rating system to let Ramses know if you have received an adequate response.
    Because of the volume expected during this event, Ramses might not be able to answer every question. Remember that you can continue the conversation in the Data Center Community, under the subcommunity Unified Computing, shortly after the event. This event lasts through August 1, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Ramses,
    I have dozen questions but will try to restrain myself and start with the most important ones :)
    1. Can cables between IOM and FI be configured in a port-channel? Let me clarify what I'm trying to achieve: if I have only one chassis with only one B200M3 blade inside, will the 2208 IOM and FI6296 allow me to achieve more than 10Gbps throughput between the blade and the Nexus 5k? Of course, we are talking here about a clean Ethernet environment.
         B200M3 --- IOM2208 --- 4 links --- FI6296 --- port-channel (4 links) --- Nexus5548
    2. Is it possible to view/measure throughput for Fibre Channel interfaces?
    3. Here is one about FlexPod: I know that in the case of vBlock there is a company that delivers a fully preconfigured system and offers one universal support point, so customers don't have to call Cisco, VMware, or storage support separately. What I don't know is how it works for FlexPod. Before you answer that you are not a sales guy, let me ask more technical questions: Is FlexPod a Cisco product or a NetApp product, or is it just a concept developed by the two companies that should be embraced by various Cisco/NetApp partners? As you obviously support data center solutions, if a customer/partner calls you with a FlexPod-related problem, does it matter to you, from the support side, whether you are troubleshooting a fully compliant FlexPod system, or will you provide the same level of support even if the system is customized (not a 100% FlexPod environment)?
    4. When talking about vCenter, can you share your opinion on the following: what is the most important reason to create a cluster, and what is its most important limitation?
    5. I know that NetApp has a feature called Rapid Clones that allows faster cloning than what vCenter offers. Any chance you can compare the two? I remember that the NetApp option should be much faster, but I didn't understand what actually happens during the cloning process, and I'm hoping you can clarify this. Maybe a quick hint here: it seems to me it would be helpful if I could understand the traffic path used in each case. Also, it would be nice to know whether vBlock (i.e. EMC) offers a similar feature and what it is called.
    6. Can I connect Nexus 2000 to the FI6xxx?
    7. Is vBlock utilizing Fabric Failover? It seems to me it is not, and I would like to hear your opinion on why.
    Thanks for providing us this opportunity to talk about this great topic.
    Regards,
    Tenaro

  • Data Center Redundancy

    Hi, dear experts!
    I) My input data is (please read, or see the attachment):
    - I have one active data center (main office), one backup data center (backup office), several branch offices, and many corporate Internet users.
    - Each of the offices has a redundant Internet connection: the main office via ISP1 and ISP2, the backup office via ISP3 and ISP4.
    - The standby data center duplicates corporate services (such as Exchange, SharePoint, and file storage).
    - The main office and backup office are far apart (about 800 km) and interconnected via a 1 Gb fibre-optic link.
    II) My tasks are:
    1. Provide a redundant network connection for local office users to corporate services.
    2. Provide a redundant network connection for branch offices and Internet users to corporate services.
    III) My ideas are:
    1. For the first task, I propose to use load balancers in a redundant configuration.
    2. For the second task, to my mind there are two scenarios.
    2.1 First scenario: build a DMVPN topology using the main and backup offices as hubs and the branch offices as spokes.
    2.2 Second scenario: buy a provider-independent IPv4 address block and an ASN, and advertise the main and backup office networks to the Internet.
    IV) My questions are:
    - Which scenario for the second task is better: using a DMVPN topology or using ASN-based redundancy?
    - Is it possible to avoid asymmetric routing problems when using ASN-based redundancy?
    Thank you!

    I think a global load-balancer device will solve both of your issues, or there is another solution for the second question:
    use a BGP confederation. That means using two private ASNs internally, one in each DC, putting both DCs in one confederation, and using one public ASN with all your ISPs.
    Regards,
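
    A rough sketch of that confederation idea on IOS is shown below; every ASN and address here is a placeholder (64500 stands for the single public/confederation ASN, 65001 and 65002 for the private sub-ASNs, one per DC).

    router bgp 65001
     bgp confederation identifier 64500
     bgp confederation peers 65002
     ! intra-confederation eBGP session towards the backup DC (sub-AS 65002)
     neighbor 10.0.0.2 remote-as 65002
     ! upstream ISP session; the ISP only ever sees confederation AS 64500
     neighbor 198.51.100.1 remote-as 64510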
