Data Center switching oversubscription

Dear All,
I manage a data center that consists of 4 stacked 3750X switches and 3 HP c-Class enclosures with 2 Cisco 3020 switches each.
On each 3020 switch I currently have 2 aggregated links to the 3750X stack.
Since we are going to fill one of the enclosures with 16 blades, I am concerned about the uplink bandwidth to the 3750X stack.
Is there any tool or best practice to calculate the minimum oversubscription ratio needed to avoid drops?
I have seen the 20:1 rule for the access layer, but I believe that applies to switches where end devices are connected (PCs, printers, etc.), not to a data center.
thanks,
Paolo

Hi Paolo,
There is no such rule; you can have an oversubscription of 100:1 if you wish. What tells you whether that oversubscription is acceptable is the application behavior, i.e. how you are using your network.
There are a lot of tools on the market (some free) that help you monitor your network bandwidth consumption; take that data together with the growth forecast for the network to arrive at a number.
I won't list the network monitoring tools, because there are hundreds out there.
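For example, a rough sketch of the arithmetic for the setup described above (the 1 Gbps port speeds are my assumption, since the post does not state them):

    # Rough oversubscription estimate for one blade-enclosure switch.
    # Assumed: 16 blades at 1 Gbps each behind a 2 x 1 Gbps aggregated uplink.
    server_ports = 16          # blades facing the 3020
    server_speed_gbps = 1.0    # assumed NIC speed
    uplinks = 2                # aggregated links to the 3750X stack
    uplink_speed_gbps = 1.0    # assumed uplink speed

    downstream_gbps = server_ports * server_speed_gbps  # 16 Gbps offered
    upstream_gbps = uplinks * uplink_speed_gbps          # 2 Gbps available
    print(f"{downstream_gbps / upstream_gbps:.0f}:1")    # -> 8:1 oversubscription

Whether a ratio like 8:1 is acceptable depends, as noted above, on measured traffic and the growth forecast rather than on a fixed rule.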
Regards.
Richard

Similar Messages

  • Exchange Data Center Switching Database Not Mounting

    - Primary DC: 2 Exchange servers (all roles installed), with MAPI & replication networks between the two
    - DR site: 1 server (all roles) + 1 server (mailbox role only), MAPI network only
    - All the servers in both sites are members of a DAG called DAG1
    - All mailbox servers have database copies in a healthy state
    I shut down the primary completely and saw that the mailbox databases were not mounting at the DR site.
    What needs to be configured for switchover and automatic mounting in case the primary data center goes down?
    Here is the screenshot (and another taken after some time); the screenshots are not included here.

    Hi,
    All DAG members should have the same number of networks, both MAPI and replication networks. Based on your description, the members in the DR site only have a MAPI network.
    If your primary data center fails, you need to switch over to the secondary data center manually.
    You can look at the following article.
    http://technet.microsoft.com/en-us/library/dd351049(v=exchg.141).aspx
    Best regards,
    Belinda Ma
    TechNet Community Support

  • Welcome to the Solutions and Architectures Data Center & Virtualization Community

    Welcome to the Solutions and Architectures Data Center & Virtualization Community. We encourage everyone to share their knowledge and start conversations related to Data Center and Virtualization solutions and architectures. All topics are welcome, including Servers – Unified Computing, Data Center Security, Data Center Switching, Data Center Management and Automation, Storage Networking, Application Networking Services, and solutions to solve business problems.
    Remember, just like in the workplace, be courteous to your fellow forum participants. Please refrain from using disparaging or obscene language or posting advertisements.
    Cheers,
    Dan Bruhn 

    Hi,
    I have a question...
    I am going to install two Nexus 7009s with three N7K-F248XP-25 modules in each one. I am planning to create 3 VDCs, but in the initial configuration the system does not show the Ethernet ports of these modules, even though with show inventory and show module I can see that the modules are recognized and their status is OK. Is there something I have to do before starting to configure these modules? Do I need to enable some feature or license in order to see the ports in the show running CLI?

  • Data Center Aggregation/Access SW Nexus

    I have a design scenario for a backup email data center, and I am facing some difficulties trying to match the requirements to boxes.
    The design requires a Nexus 5548UP in addition to 2 virtualized data center switches; it also requires 12 CPU licenses for the VM virtual network switch. I suggested adding a Nexus 1000 Series switch, but the concern is whether I can use it without adding a Nexus 2K. If I have to use the N2K and the N1K, what is the best configuration scenario?

    Hi Shakeeb,
    I don't understand your question very well, but I will try to clarify some points.
    You don't need a Nexus 2000 if you have enough ports available in your Nexus 5500, even if you will use the Nexus 1000V.
    In this scenario, what I recommend is to connect the two Nexus 5548s to each other and create a vPC with the upstream routers and the downstream blades and storage.
    Richard

  • Welcome to the Enterprise Data Center Networking Discussion

    Welcome to the Cisco Networking Professionals Connection Network Infrastructure Forum. This conversation will provide you the opportunity to discuss general issues surrounding Enterprise Data Center Networking. We encourage everyone to share their knowledge and start conversations on issues such as Mainframe connectivity, SNA Switching Services, DLSw+, managing SNA/IP and any other topic concerning Enterprise Data Center Networking.
    Remember, just like in the workplace, be courteous to your fellow forum participants. Please refrain from using disparaging or obscene language or posting advertisements.
    We encourage you to tell your fellow networking professionals about the site!
    If you would like us to send them a personal invitation simply send their names and e-mail addresses along with your name to us at [email protected]

    Hi all,
    Since the release of SAP NetWeaver 2004s to 'Unrestricted Shipment' as of 6th of June 2006, we have renamed the forum 'SAP NetWeaver2004s Ramp-Up' to 'BI in SAP NetWeaver2004s'.
    The forum should continue to address BI issues particular to the SAP NetWeaver 2004s release. Please post general BI, project, etc. questions to the other existing BI forums.
    The SAP NetWeaver BI organisation will also use this forum to communicate / roll out information particular to the SAP NetWeaver 2004s release (in addition to the FAQs and other material on the SAP Service Marketplace and information in other areas of the SDN).
      Cheers
         SAP NetWeaver BI Organisation

  • Deploying Cisco Overlay Transport Virtualization (OTV) in Data Center Networks

    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about how to plan, design, and implement Cisco Overlay Transport Virtualization (OTV) in your Data Center Network with Cisco experts Anees Mohamed Abdulla and Pranav Doshi.
    Anees Mohamed Abdulla is a network consulting engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as vPC, FabricPath, and OTV. He has 10 years of experience in the enterprise data center networking area and has carried various roles within Cisco such as LAN switching content engineer and LAN switching TAC engineer. He holds a bachelor's degree in electronics and communications and has a CCIE certification 18764 in routing and switching. 
    Pranav Doshi is a network consulting engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as vPC, FabricPath, and OTV. Pranav has experience in the enterprise data center networking area and has carried various roles within Cisco such as LAN switching TAC engineer and now network consulting engineer. He holds a bachelor's degree in electronics and communications and a master's degree in electrical engineering from the University of Southern California.
    Remember to use the rating system to let Anees and Pranav know if you have received an adequate response.  
    Because of the volume expected during this event, Anees and Pranav might not be able to answer every question. Remember that you can continue the conversation on the Data Center sub-community forum shortly after the event. This event lasts through August 23, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Dennis,
    All those Layer 2 extension technologies require STP to be extended between data centers if you need multiple paths between them. OTV does not extend STP; rather, it has its own mechanism (AED election) to avoid loops when multiple paths are enabled. This means any STP control-plane issue is not carried to the other data center.
    OTV natively suppresses unknown unicast flooding across the OTV overlay. Unknown unicast flooding is a painful problem in a Layer 2 network, and it is difficult to troubleshoot and identify the root cause if you don't have a proper network monitoring tool.
    OTV also has ARP optimization, which eliminates flooding of ARP packets across data centers by responding locally with cached ARP messages. One of the common issues I have seen in data centers is a server or device in the network sending continuous ARP packets that hit the control plane at the aggregation layer, which in turn causes network connectivity issues.
    The above three points demonstrate the Layer 2 domain isolation between data centers. If you have redundant data centers with Layer 2 extended without OTV, a Layer 2 issue of the kind described above that happens in one data center carries the same failure to the second data center, which raises the question of what the point is of having two separate data centers if we cannot isolate the failure domain.
    OTV also natively supports HSRP localization with a few command lines. This is a very important requirement for building an active/active data center.
    Even though your question is related to L2TP, OTV deserves a comparison with VPLS, and that comparison will also be applicable to L2TP. The link below explains this in detail:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-574984.html
    Thanks,
    Anees.

  • Ask the Expert: Scaling Data Center Networks with Cisco FabricPath

    With Hatim Badr and Iqbal Syed
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco FabricPath with Cisco technical support experts Hatim Badr and Iqbal Syed. Cisco FabricPath is a Cisco NX-OS Software innovation combining the plug-and-play simplicity of Ethernet with the reliability and scalability of Layer 3 routing. Cisco FabricPath uses many of the best characteristics of traditional Layer 2 and Layer 3 technologies, combining them into a new control-plane and data-plane implementation that combines the immediately operational "plug-and-play" deployment model of a bridged spanning-tree environment with the stability, re-convergence characteristics, and ability to use multiple parallel paths typical of a Layer 3 routed environment. The result is a scalable, flexible, and highly available Ethernet fabric suitable for even the most demanding data center environments. Using FabricPath, you can build highly scalable Layer 2 multipath networks without the Spanning Tree Protocol. Such networks are particularly suitable for large virtualization deployments, private clouds, and high-performance computing (HPC) environments.
    This event will focus on technical support questions related to the benefits of Cisco FabricPath over STP or VPC based architectures, design options with FabricPath, migration to FabricPath from STP/VPC based networks and FabricPath design and implementation best practices.
    Hatim Badr is a Solutions Architect for Cisco Advanced Services in Toronto, where he supports Cisco customers across Canada as a specialist in Data Center architecture, design, and optimization projects. He has more than 12 years of experience in the networking industry. He holds CCIE (#14847) in Routing & Switching, CCDP and Cisco Data Center certifications.
    Iqbal Syed is a Technical Marketing Engineer for the Cisco Nexus 7000 Series of switches. He is responsible for product road-mapping and marketing the Nexus 7000 line of products with a focus on L2 technologies such as VPC & Cisco FabricPath and also helps customers with DC design and training. He also focuses on SP customers worldwide and helps promote N7K business within different SP segments. Syed has been with Cisco for more than 10 years, which includes experience in Cisco Advanced Services and the Cisco Technical Assistance Center. His experience ranges from reactive technical support to proactive engineering, design, and optimization. He holds CCIE (#24192) in Routing & Switching, CCDP, Cisco Data Center, and TOGAF (v9) certifications.
    Remember to use the rating system to let Hatim and Iqbal know if you have received an adequate response.  
    They might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community Unified Computing discussion forum shortly after the event. This event lasts through Dec 7, 2012. Visit this support forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Sarah,
    Thank you for your question.
    Spanning Tree Protocol is used to build a loop-free topology. Although Spanning Tree Protocol serves a critical function in these Layer 2 networks, it is also frequently the cause of a variety of problems, both operational and architectural.
    One important aspect of Spanning Tree Protocol behavior is its inability to use parallel forwarding paths. Spanning Tree Protocol forms a forwarding tree, rooted at a single device, along which all data-plane traffic must flow. The addition of parallel paths serves as a redundancy mechanism, but adding more than one such path has little benefit because Spanning Tree Protocol blocks any additional paths.
    In addition, rooting the forwarding path at a single device results in suboptimal forwarding paths: although a direct connection may exist, it cannot be used, because only one active forwarding path is allowed.
    Virtual PortChannel (vPC) technology partially mitigates the limitations of Spanning Tree Protocol. vPC allows a single Ethernet device to connect simultaneously to two discrete Cisco Nexus switches while treating these parallel connections as a single logical PortChannel interface. The result is active-active forwarding paths and the removal of Spanning Tree Protocol blocked links, delivering an effective way to use two parallel paths in the typical Layer 2 topologies used with Spanning Tree Protocol.
    vPC provides several benefits over standard Spanning Tree Protocol, such as the elimination of blocked ports, and both vPC switches can act as the active default gateway for first-hop redundancy protocols such as Hot Standby Router Protocol (HSRP): that is, traffic can be routed by either vPC peer switch.
    At the same time, however, many of the overall design constraints of a Spanning Tree Protocol network remain even when you deploy vPC, such as:
    1.     Although vPC provides active-active forwarding, only two active parallel paths are possible.
    2.     vPC offers no means by which VLANs can be extended, a critical limitation of traditional Spanning Tree Protocol designs.
    With Cisco FabricPath, you can create a flexible Ethernet fabric that eliminates many of the constraints of Spanning Tree Protocol. At the control plane, Cisco FabricPath uses a Shortest-Path First (SPF) routing protocol to determine reachability and selects the best path or paths to any given destination in the Cisco FabricPath domain. In addition, the Cisco FabricPath data plane introduces capabilities that help ensure that the network remains stable, and it provides scalable, hardware-based learning and forwarding capabilities not bound by software or CPU capacity.
    Benefits of deploying an Ethernet fabric based on Cisco FabricPath include:
    • Simplicity, reducing operating expenses
    – Cisco FabricPath is extremely simple to configure. In fact, the only necessary configuration consists of distinguishing the core ports, which link the switches, from the edge ports, where end devices are attached. There is no need to tune any parameter to get an optimal configuration, and switch addresses are assigned automatically.
    – A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. The Cisco FabricPath solution requires less combined configuration than an equivalent Spanning Tree Protocol-based network, further reducing the overall management cost.
    – A device that does not support Cisco FabricPath can be attached redundantly to two separate Cisco FabricPath bridges with enhanced virtual PortChannel (vPC+) technology, providing an easy migration path. Just like vPC, vPC+ relies on PortChannel technology to provide multipathing and redundancy without resorting to Spanning Tree Protocol.
    • Scalability based on proven technology
    – Cisco FabricPath uses a control protocol built on top of the powerful Intermediate System-to-Intermediate System (IS-IS) routing protocol, an industry standard that provides fast convergence and that has been proven to scale up to the largest service provider environments. Nevertheless, no specific knowledge of IS-IS is required in order to operate a Cisco FabricPath network.
    – Loop prevention and mitigation is available in the data plane, helping ensure safe forwarding that cannot be matched by any transparent bridging technology. The Cisco FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a Reverse Path Forwarding (RPF) check is also applied.
    • Efficiency and high performance
    – Because equal-cost multipath (ECMP) can be used in the data plane, the network can use all the links available between any two devices. The first-generation hardware supporting Cisco FabricPath can perform 16-way ECMP, which, when combined with 16-port 10-Gbps port channels, represents a potential bandwidth of 2.56 terabits per second (Tbps) between switches (see the quick calculation at the end of this reply).
    – Frames are forwarded along the shortest path to their destination, reducing the latency of the exchanges between end stations compared to a spanning tree-based solution.
    – MAC addresses are learned selectively at the edge, allowing the network to scale beyond the limits of the MAC addr
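    A quick check of the 2.56 Tbps figure mentioned above (purely illustrative; the port counts and speeds are taken from the text):

        # 16-way ECMP, each path being a 16-port 10-Gbps port channel
        ecmp_paths = 16
        ports_per_channel = 16
        port_speed_gbps = 10

        total_gbps = ecmp_paths * ports_per_channel * port_speed_gbps
        print(total_gbps / 1000, "Tbps")  # -> 2.56 Tbps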

  • Please shed some light on Data Center design

    Hi,
    I would like you guys to recommend what the design should be. I'm familiar with the HP blade system. Let me clarify the existing devices:
    1. HP blades with Flex Fabric, which supports FCoE.
    2. MDS SAN switch for the storage.
    3. Network switch for the IP network.
    4. HP storage.
    The HP blades have 2 interface types, one for the IP network (network switch) and one for Fibre Channel (SAN).
    What is the benefit of using Nexus switches and FCoE with my existing devices? What would a new design with Nexus switches look like? Please give me some ideas.
    THX
    Toshi 

    Hi, Toshi:
    Most of these chat boards have become quite boring. Troubleshooting OSPF LSA problems is old news. But I do pop my head in every now and then. Also, there are so many other companies out there doing exciting things in the data center. You have Dell, Brocade, Arista, Juniper, etc. So one runs the risk of developing a myopic view of the world of IT by lingering around this board for too long.
    If you want to use the new B22 FEX for the HP c7000 blade chassis, you certainly can. That means the Nexus will receive the FCoE traffic and leverage its FCF functionality; either separate the Ethernet and FC traffic there, or create a VE-port instantiation with another FCF for multihop deployments. Good luck fighting the SAN team on that one! Another aspect of using the HP B22 is the fact that the FEX is largely plug and play, so you don't have to manage the Flex Fabric switches.
    HTH

  • Collapsed Data Center Tier - Best Practice

    Hey guys,
    I'm working with a company who's doing a Data Center build-out. This is not a huge build out and I don't believe I really need a 2 tier design (access, core/aggregation). I'm looking for a 1 tier design. I say this because they only really have one rack of hosts - and we are not connected to a WAN or campus network - we are a dev shop (albeit a pretty damn big dev shop) who hosts internet sites and web applications to the public. 
    My network design relies heavily on VRFs. I treat every web application published to the internet as its own "tenant" with one leaked route, which is my management network, so any management servers (continuous deployment, monitoring, etc.) sit in that leaked subnet. Each VRF has its own route to a virtual firewall context of its own and out to the internet.
    Right now we are in a managed data center. I'm going to be building out their own switching environment utilizing the above design and moving away from the managed data center. That being said, I need to pick the correct switches for this 1-tier design. I need a good amount of 10GbE port density (124 ports minimum). I was thinking about going with 4 x 5672UP or 4 x C3064TQ-10GT; these will work as both my access and core (about 61 servers, one fiber uplink to my corporate network, and one fiber uplink to a firewall running multiple device contexts via multiple VLANs).
    That being said, with the use of VRFs, VLANs, and MP-BGP (used to leak my routes), what is the best redundancy topology for this design? If I were using Catalyst 6500s I would do VSS and be done with it, but I don't believe vPC on the Nexus works that way; it is really more suited to a two-tier model (vPC on two cores, with an aggregation/access switch connecting up to both cores so that they look like one). What I need to accomplish sounds to me like I'm going to be doing this the old-fashioned way: running a port channel between each switch and hopefully using a non-STP method to avoid loops.
    Am I left with any other options? 

    The ISP comes into the collapsed core after a router. A specific firewall interface (the firewall is in multi-context mode) sits on the "outside" VLAN specific to each VRF.

  • Layer 2 connect - data center web hosting

    Hi, I need your help!
    I have a data center with Nexus 7000 switches, with web servers connected to them. My company does hosting for customers.
    The point is that we have shared resources, like VMware on blades and so on, meaning that the blade ports connect physically to the Nexus 7000 with trunks and VLANs for every customer.
    My Nexus connects to a firewall, then to WAN switches, then to routers connected to the internet, so if I am asked to do hosting from the internet it's easy.
    The problem is that now I have a customer who wants to connect his switch over the WAN directly to his area in my data center. We have built servers for him that are the same as his servers, with the same subnet, and he does replication.
    He doesn't have a router; he connects his switch to me at Layer 2 over a WAN provider.
    Should I connect him directly to my Nexus with his VLAN? Or do I need another solution like EoMPLS? What is the safest way to connect him at Layer 2? And I repeat, the problem is that our servers are shared between many customers on the same Nexus ports. Please help!

    Hello,
    1. PIX is the precursor to the ASA, so at this point the ASA is probably a better choice, since it will be around longer and I'm sure they have beefed up the base hardware compared to the PIX.
    2. Your external router depends on how much traffic you're going to be dropping into your hosting site. A 7200 Series router is a fairly beefy router and should be able to handle what you need.
    3. One of the nice things about the 6500 is that you can put in an FWSM and segment all your different hosting servers to provide more granular network control.
    I don't have any case studies, but I will look around and post them if I find some.
    Patrick

  • Question about GG if Data Center Fails

    Hi, I came across a question from my team about what happens to GoldenGate replication when a data center failure occurs or an automatic switchover to the VCS cluster happens.
    An automatic switchover to the VCS cluster did happen once in our environment: when the NIC card on our primary production server stopped working, VCS shut down the primary node and automatically switched over to the failover node in the VCS cluster.
    I am also looking into what will happen to the ongoing Extract and Pump processes on the source. How will I be able to synchronize the residual data from the source side to the target side if such things happen?
    Any comments or experience will help me.
    Thanks

    All of our filesystems on the primary get failed over to the failover cluster node, just as happens with the database binaries and data files.
    We have tested this and verified that we can start GoldenGate on the failover node after failover happens.
    Thanks

  • Data Center Design: Nexus 7K with VDC-core/VDC-agg model

    Dear all,
    I'm working with a collapsed VDC-core/VDC-agg model on the same chassis, with 2 redundant Cisco Nexus 7010s and a pair of Cisco 6509s used as service chassis without VSS. Each core VDC has redundant links to 2 PEs based on the Cisco 7606.
    After reading many Cisco design documents, I am asking: what is the need for a core layer in a data center, especially if it is small or medium sized, with only 1 aggregation layer, and dedicated to a virtualized multi-tenant environment? What drives the need for a core layer?
    Thanx

    If your data center is small enough not to require a core, then it's fine to run with a collapsed core (distribution + core as the same device). For a redundant design you need to uplink all your distribution switches to each of your cores. If you have no cores, then you need a full mesh at your distribution layer (for full redundancy).
    Let's say you have only 4 distribution pairs, so 8 switches. For full redundancy each one needs an uplink to each of the others. This means you need 28 total links to connect all the switches together (n(n-1)/2), assuming 1 link to each device. However, if you had redundant cores, the number of links used for uplinks drops to 21 total (this includes the link between the switches of each distribution pair and the link between the two cores). So here you are only saving 7 links; you are not gaining much by adding a core.
    However, if you have 12 distribution pairs, so 24 switches, full redundancy means you have 276 links dedicated to this. If you add cores, this drops to 61 links. Here you see the payoff (see the sketch below).
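    A small sketch of that link-count arithmetic (the core case assumes, as described above, two uplinks per distribution switch - one to each core - plus one link inside each distribution pair and one link between the two cores):

        def full_mesh_links(switches):
            # every switch links to every other switch once: n(n-1)/2
            return switches * (switches - 1) // 2

        def links_with_cores(pairs):
            # 2 uplinks per distribution switch (one to each core),
            # one link inside each distribution pair, one core-to-core link
            return 2 * pairs * 2 + pairs + 1

        for pairs in (4, 12):
            print(pairs, "pairs:",
                  full_mesh_links(2 * pairs), "links full mesh vs",
                  links_with_cores(pairs), "links with redundant cores")
        # -> 4 pairs: 28 links full mesh vs 21 links with redundant cores
        # -> 12 pairs: 276 links full mesh vs 61 links with redundant cores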

  • Nexus 5K - Data Center to Data Center

    Hi All,
    A scenario was recently presented to me that involved connecting two SANs between two Data Centers.  One of the Data Centers is existing and utilizes Nexus 5K switches while the other Data Center will be greenfield and is currently just a shell.
    From what I've been reading, the Nexus 5K does not support FCIP, and I have some concerns regarding the feasibility of doing SAN-to-SAN over a service provider's WDM infrastructure, mainly due to cost. More reading suggested that this could not be done with FCoE.
    I just wanted to pick everyone's brains regarding what solutions are available to interconnect two SANs between data centers. The solution would involve using a service provider; Ethernet and WDM services are available to both locations.

    You can add a pair of MDS 9222i switches in the middle. They will take care of the FCIP and DWDM traffic.

  • Data Center Network Design

    I'm looking at a couple of options for a small network in a data center. I seem to be getting hung up on all the different options. One of the options I'm looking at is end of row, using both 2960S and Blade Center chassis switches, with each physical server dual-homed into a 2960, each ESX server dual-homed into a blade switch, and each of the switches with a Layer 2 10Gb uplink (20 total with EtherChannel) to one of two 4900Ms. The 4900Ms would then have a Layer 2 uplink between them to accommodate VLANs that span the access-layer switches. This would be an inverted-U topology. That's simple enough, and maybe that is where I should leave it, but the now-available stacking feature of the 2960S has me wondering if there is another option available with dual-homing a stack. Is there such a beast? Would it be better to stack 2960Ss, or even 3750s, so as to make each end of row with 2 redundant switches appear as one logical stack, and then uplink that stack to an aggregation multilayer switch such as a pair of 4900Ms? Or might that limit me to keeping VLANs within a stack and end of row?
    thank you,
    Bill

    Hi Bill-
    First, I personally would not use the 2960S for the data center, no matter the size. That switch was purpose-built for user access and has some limitations. Also, what you need to accomplish will determine your design. I recently did a design similar to what you are describing. We ended up putting 3750Xs at the top of rack as a stack. This allows for EtherChannel to your servers with both server NICs being active. From there we uplinked to a pair of 6509s in VSS. From a Layer 2 point of view this was about as simple as it gets: one switch connected to another switch connected to a server. No spanning tree! If you can't afford stackable switches, you may want to look at routing at the top of rack. However, you will lose functionality like moving VLANs between racks, you will rely on server NIC software for active/passive links, and the movement of VMs could be limited.

  • Data center design guide

    Hi all,
    is anybody familiar with a good Cisco data center design guide involving the Nexus 2000, 5000 & 7000 with FCoE?
    thanks,

    Hi,
    Check out the link below on data center design with Nexus switches:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572831-00_Dsgn_Nexus_vPC_DG.pdf
    Hope to Help !!
    Ganesh.H
    Remember to rate the helpful post
