N7k as redundant core with vpc to 4510/3750 as distribution switch

Hi - basic question here
I have two N7Ks as a redundant core running vPC, connected down to a 4510 and a 3750 acting as redundant distribution switches running MST. I got stuck with some bad cabling design from our IDF to the data center, so I have two access switches, each of which will have an EtherChannel to both the 4510 and the 3750 distribution switches. My question: is this a doable design? I am not sure how the vPC upstream affects EtherChannel and MST at my distribution and access layers.
Thanks

A vPC is seen as one logical link by both upstream and downstream connected devices.
The question here is: are you going to run L3 between the distribution and core devices? (That is the recommended design.) If so, you do not need to worry about MST and vPC interaction, since everything from the distribution devices up to the core would be routed.
One thing to consider is that the distribution switches in your design differ greatly in backplane throughput, i.e. the 4500 versus the 3750!
If you can use a 4500 for both, it will be a better and more consistent design.
Good luck.
Please rate if helpful.
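For reference, a vPC toward a downstream switch is configured roughly as below; the downstream 4510 just sees an ordinary EtherChannel, so MST on the distribution/access side is unaffected. All interface, domain, and IP numbers here are examples, not taken from the thread:

```
! N7K-1 (mirror the vPC config on N7K-2)
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
!
interface port-channel10
  switchport mode trunk
  vpc peer-link
!
interface port-channel30
  switchport mode trunk
  vpc 30                  ! downstream vPC toward the 4510
!
! 4510 side: a regular LACP EtherChannel, one member link to each N7K
interface Port-channel30
 switchport mode trunk
```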

Similar Messages

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
This event is focused on discussing all possible types of vPC along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with a primary focus on data center technologies such as Cisco UCS, the Cisco Nexus 1000V Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications. He is also working toward a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
Control plane: to establish a routing adjacency, the first (hello) packet is punted to the CPU. So in the case of the triangle routed vPC topology specified in the operations guide link below, multicast for routing adjacencies will work. The hello packets will be exchanged across all three routers, and adjacency will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now for Data Plane we have two types of traffic – Unicast and Multicast.
The unicast traffic will not have any forwarding issues as such, but because Layer 3 ECMP and the port channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hashing chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer-link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when the traffic could be routed directly.
In that topology, multicast traffic might see complete traffic loss, because when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch while the multicast data might be received by the other switch.
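As a side note on the unicast case above: the scenario where N5k-2 receives frames destined to N5k-1's MAC is commonly addressed with the vPC peer-gateway feature, which lets each peer locally route frames addressed to its peer's gateway MAC instead of sending them over the peer link. A minimal sketch (the domain number is an example):

```
vpc domain 10
  peer-gateway
```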
Loop avoidance works a little differently on the Nexus 5000 and Nexus 7000.
Similarity: on both products, loop avoidance is possible due to the VSL bit.
    The VSL bit is set in the DBUS header internal to the Nexus.
    It is not something that is set in the ethernet packet that can be identified. The VSL bit is set on the port asic for the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This is not something that would traverse the peer link.
    This mechanism is used for loop prevention within the chassis.
The idea is that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer would already have forwarded it out its vPC-enabled port channels toward the end device, so the egress vPC interface's port ASIC will filter the packet on egress.
Differences: on the Nexus 5000, when it has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared, so the traffic is not dropped, in contrast to the Nexus 7000 and Nexus 3000.
It still does loop prevention, but the L3-to-L2 lookup is handled differently on the Nexus 5000 and Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
DCI scenario: if both pairs are Nexus 5000, separation of the L3/L2 links is not needed.
But in most scenarios I have seen a pair of Nexus 5000 with a pair of Nexus 7000 over DCI, or two pairs of Nexus 7000 over DCI. If Nexus 7000s are used, then separate L3 and L2 links are definitely required, as noted in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal

  • Nexus 7000 and 2000. Is FEX supported with vPC?

    I know this was not supported a few months ago, curious if anything has changed?

    Hi Jenny,
I think the answer will depend on what you mean by "is FEX supported with vPC?"
When connecting a FEX to the Nexus 7000, you're able to run vPC from the host interfaces of a pair of FEX to an end system running IEEE 802.1AX (802.3ad) Link Aggregation. This is shown in illustration 7 of the diagram on the post Nexus 7000 Fex Supported/Not Supported Topologies.
What you're not able to do is run vPC on the FEX network interfaces that connect up to the Nexus 7000, i.e., dual-homing the FEX to two Nexus 7000s. This is shown in illustrations 8 and 9, under the FEX topologies not supported, on the same page.
    There's some discussion on this in the forum post DualHoming 2248TP-E to N7K that explains why it's not supported, but essentially it offers no additional resilience.
    From that post:
The view is that when connecting FEX to the Nexus 7000, dual-homing does not add any level of resilience to the design. A server with dual NICs can attach to two FEXes, so there is no need to connect the FEX to two parent switches. A server with only a single NIC can only attach to a single FEX, but given that the FEX is backed by a fully redundant Nexus 7000, i.e., supervisors, fabrics, power, I/O modules, etc., the availability is limited by the single FEX, and so dual-homing does not increase availability.
    Regards
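As in illustration 7, the supported design is a host vPC built from a pair of single-homed FEX. A rough sketch of one side (FEX, port-channel, and VLAN numbers are examples):

```
! N7K-1: server-facing host interface on FEX 101
interface Ethernet101/1/1
  switchport access vlan 10
  channel-group 50 mode active
!
interface port-channel50
  switchport access vlan 10
  vpc 50
!
! N7K-2 mirrors this on its own FEX (e.g., Ethernet102/1/1) with the same vpc number,
! and the server bonds its two NICs with LACP
```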

Running new 10.8.2 on my Pro 2x3 GHz quad-core with 32 GB DDR2; everything seems to slow down and freeze up sometimes

I did a fresh install of 10.8.2 on an eight-core with 32 GB of RAM. Everything seems to slow down and freeze up sometimes, but running OS X 10.6.8 is fine. Can anyone help?
    System Spec:
    Mac OSX 10.8.2
    Pro: 2x3 Ghz Quad-core Intel Xeon
    Memory: 32 GB 667 MHz DDR2
    Hard drive Raid 1-0: 500 GB

I am certainly confused. 10.8.2 is not supported on pre-2008 models. There was an 8-core 2008 that takes 800 MHz DDR2 FB-DIMMs (and can accept 667 MHz as well).
    A clean install is the safest and best way, you can migrate or let Setup Assistant import your settings.
    Old PowerPC apps, drivers and plugins are a no-no, no Rosetta. Check Roaringapps.com
    Put your system on an SSD.
Put your data on a non-array, or if you must, use a mirror only; a stripe is fine. Both should have 2-3 backups and extra redundancy instead of trying software-based 1+0 with four drives.
    Upgrade to new(er) drives. And format them with ML. Build or rebuild the arrays in ML.
    Where is SL located compared to ML? same drive with different partition? or different disk drives?
    10.8.5 is current and other than support for 3TB drives, you should not be using anything less than 10.8.3.

  • Question re. behaviour of single homed FEX with vPC

    Hi Folks,
    I have been looking at configuring Nexus 5Ks with FEX modules.  Referring to the Cisco documentation;
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_n1_1/b_Cisco_n5k_layer2_config_gd_rel_513_N1_1_chapter_01001.html
In figure 3, showing a single-homed FEX with vPC topology, I'm curious what happens if one of the 5Ks fails. For example, if the 5K on the left-hand side of the diagram fails, do the ports on the attached FEX that the server is connected to drop? If not, I would assume the server has no way of knowing that there is no longer a valid path through those links and will continue to use them?
    Many thanks in advance,
    Shane.

    Hello Shane.
Depending on the type of failure, both N5Ks can take corrective actions, and the end host will always know that one of the port-channel members is down.
For example, if one 5K crashes or is reloaded, all of its connected FEXes will go offline. FEXes are not standalone switches and cannot work without a "master" switch.
Also, the links from the FEX to the end host will be in vPC mode, which means that all the vPC redundancy features/advantages will be present.
    HTH,
    Alex 

  • GLBP with vPC configuration acceptable?

    Hello,
    I'm reposting this discussion here.
I have designed GLBP with vPC configured on a pair of N7K switches. But in the Cisco documentation, the best-practice configuration uses HSRP in a vPC environment. The customer doesn't feel comfortable with GLBP since Cisco's best practice uses HSRP. Is there any potential issue using GLBP in a vPC environment?
    Thanks,
    Jason

    Hi, there is no need for GLBP with vPC. Both HSRP peers are active.
    http://www.netcraftsmen.net/component/content/article/69-data-center/1260.html
    "...since both  peers forward. This behavior also provides HSRP load-balancing without needing to  switch to GLBP."
    Don't forget to rate all posts that are helpful.
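HSRP on a vPC pair is configured the usual way; the active/active forwarding behavior quoted above needs no extra configuration. A minimal sketch with example VLAN and addresses:

```
! N7K-1 (N7K-2 identical except its own interface IP and a lower priority)
feature hsrp
feature interface-vlan
!
interface Vlan100
  ip address 10.1.100.2/24
  hsrp 100
    priority 110
    preempt
    ip 10.1.100.1
```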

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
    There are 2 Nexus 7000 connected with VPC Peer link. Each of the Nexus 7000 has a FEX attached to it.
The server has two connections going to the FEX on each Nexus 7K (vPC). The FEXes are not dual-homed; as far as I know that is not currently supported.
R(A)                R(S)
 |                    |
 7K ---Peer Link--- 7K
 |                    |
FEX                 FEX
Server connected to both FEXes
The question is: we have two routers, one connected to each Nexus 7K, running HSRP (one active, one standby). How can I configure HSRP on the Nexus switches, and how will traffic be routed from the standby Nexus switch to the active Nexus switch? (I know HSRP works differently here, as both of them can forward packets.) Will the traffic go to the secondary switch, then via the peer link to the active switch, and then to the active router? (From what I read, packets from end hosts that cross the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and create a vPC; otherwise, configure those ports as orphan ports, which will leverage the vPC peer link.
    HTH
    Jay Ocampo
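If the routers end up on a non-vPC VLAN, the dedicated inter-switch trunk mentioned above might look like this (interface and VLAN numbers are examples):

```
! On both N7Ks: a separate physical link, NOT part of the vPC peer-link
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 200   ! the non-vPC VLAN carrying the routers
```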

  • Multicast: duplicated packets on nexus 7k with vpc and HSRP

    Hi guys,
I'm testing a multicast deployment in the lab described below. The sender and the receiver are connected to the 6500 in two different VLANs: the sender is in VLAN 23 and the receiver in VLAN 500. They are connected to the 6500 with a trunk link. There is a vPC between the two Nexus 7Ks and the 6500.
Furthermore, HSRP is running on the two VLAN interfaces, 23 and 500, on both Nexus switches.
I have configured the minimum to use PIM-SM with a static RP. The RP is the 3750 above the Nexus switches. (*,G) and (S,G) states are created correctly.
IGMP snooping is enabled on the 6500 and on the two Nexus switches.
I'm using iperf to generate my flow, and NetFlow and SNMP to monitor what happens.
Everything works: my receiver receives the flow, and it takes the correct route. My problem is that I see four times as much multicast traffic on VLAN interface 500 on both Nexus switches, but this traffic is sent only once to the receiver (which is the correct behavior), and the excess traffic does not show up outbound on any other physical interface.
Indeed, I'm sending one flow, and the two Nexus switches receive it (one from the peer link and the other from the 6500) in VLAN 23 (for example, 25 packets inbound).
But when the flow is routed into VLAN 500, there are 100 packets outbound on each VLAN 500 interface on each Nexus.
And when monitoring all physical interfaces, I only see 25 packets outbound on the interface linked to the receiver; the excess traffic never leaves the box.
I have attached the graphs I obtained on one of the Nexus switches for VLAN 23 and VLAN 500. NetFlow says the same thing in bits/s.
Has anyone seen this before? Any idea about the duplication of the packets?
    Thanks for any comment,
    Regards,
    Configuration:
    Nexus 1: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
    Nexus 2: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
    6500: s72033-adventerprisek9_wan-mz.122-33.SXI5.bin (12.2(33)SXI5)
    3750: c3750-ipservicesk9-mz.122-50.SE5.bin (12.2(50)SE5)
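For tracking where the extra copies are counted, a few standard NX-OS show commands are usually enough (the group address below is an example placeholder for whatever group iperf is sending to):

```
show ip mroute 239.1.1.1              ! check the OIF lists on both vPC peers
show ip pim neighbor                  ! confirm which peer formed the PIM adjacency
show ip igmp snooping groups vlan 500 ! verify snooping state for the receiver VLAN
show vpc                              ! verify peer-link and consistency status
```

Comparing the OIF lists on the two peers against the interface counters can show whether the duplication is real forwarding or just per-peer replication being counted on the SVI.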


I have a 12-core with Quadro 4000 and 5770 and want to use a dual-monitor setup; monitors are NEC with SpectraView II. How do I connect? The 4000 only has 1 DisplayPort and 1 DVI; the 5770 has 2 of each. If I use both 5770 DisplayPorts, does the 4000 contribute?

I just bought a 12-core with a Quadro 4000 and a 5770. I want to use a dual-monitor setup; the monitors are NEC with SpectraView II. How do I connect? The 4000 only has 1 DisplayPort and 1 DVI. The 5770 has 2 of each; if I use both 5770 DisplayPorts, does the 4000 contribute any work at all? I read that on a PC they would work together, but on a Mac they do not.
I read that DisplayPort has higher bandwidth than DVI, and NEC recommends using DisplayPort with these monitors for best performance.
When I was setting this up, I looked at an Nvidia Quadro 4000; unfortunately, that was the PC version with 2 DisplayPorts — in the Mac version they reduce it to one. I did not think there could be a difference.
I mainly want to use it for CS6 and LR4.
How to proceed???
I do not want to use the Quadro 4000 for both, as that would not optimize both monitors (one DP and 1 DVI). Using just the 5770 would work, but I do not think the 4000 would be doing anything, and the 5770 has been replaced by the 5870, which has more bandwidth.
Any ideas? I am a Mac newbie, have not ever tried a Mac Pro, just bought off eBay, and now I have these problems.
As a last resort I could sell both and get a 5870. That would work, I'm sure of that; it's just that I wanted the better graphics card.
    Thanks,
    Bill

    The Hatter,
    I am a novice at Mac so I read all I can.  From what I understand the NEC monitors I bought require Display Port for their maximum performance.  The GTX 680 only has DVI outputs.  Difference from what I understand is larger bandwidth with the DP.
    You said I have the 4000 for CUDA.  I am not all that familiar with CUDA and when I do read about it I do not understand it. 
A concern I have is that if I connect the 2 high-end NEC monitors via the 5770, using its 2 DisplayPorts, I would have nothing connected to the 4000. Is the 4000 doing anything with nothing connected? I read that in a PC system the 2 cards would interact, but in a Mac system they do not.
    Bottom line, as I see it, the 4000 will not be useful at all to me, since I want a dual monitor set-up.
    So far the 5870 seems the best choice, higher band width than the 5770, and it has 2 Display Ports to optimize the NEC monitors.
    I'm not sure how fine I am splitting hairs, nor do I know how important those hairs are.  I am just trying to set up a really fast reliable system that will mainly be used for CS6 and LR4.  Those NEC monitors are supposed to be top notch.

  • Problems in using Windows Explorer with VPC Virtual PC?

    Has anybody experienced problems in using Windows Explorer with VPC Virtual PC?
    Lacking any "forbidden" or "appropriate usage" guidelines, I regularly use Windows Explorer (Windows 2000) to transfer file from the desktop. I have occasionally sensed that this might be wrong. Today I inadvertently clicked the MAC harddrive instead of the Desktop (within Windows Explorer) and caused all manner of mischief.
    Any other views please?

    Let me correct this:
    I regularly use Windows Explorer (Windows 2000) to transfer files from the "Mac" desktop
    Any ideas please Virtual PC VPC users?

  • 13'' dual-core with 16gb ram vs 15'' quad-core 8gb ram -- ram vs cores?

    Hello.  I am at a point between choosing a 13'' dual-core i5 with 16gb ram vs the 15'' quad-core with 8gb ram - both retinas.
I will be using Photoshop and Illustrator while uploading/downloading large files constantly. After much head-scratching, I almost decided on the 13'' i5 because, from my experience, RAM limitations are where I usually run into problems, and I don't see the extra price of the i7 as worth it when it is still a dual-core. For the same price I can get a 15'' quad, but will be stuck with 8 GB of RAM.
I don't care about the screen size; I have good eyes, and the weight trade-off makes this a non-issue.
I'm thinking about down the road, as I'm sure either of these will be a fine solution. I'm just wondering where the edge really comes in this case: RAM, or processor/cores?
    Thanks.

Thanks for your reply, tjk. Of course, with 16 GB on the 15'' for $200 more I'd have an easier time, but in this case I am comparing against a refurbished unit; the cheapest 15'' with 16 GB I could find would cost $500 more, and I just can't do it.
I should make clear that in this particular comparison between the 13'' w/16 and the 15'' w/8, they are the same price (the 13 being new), which is why I'm stuck.
    To rephrase - for the same price - which will I regret less in 4 years?
    As a side note, I can't believe I'm even thinking about paying this much money for something that is not upgradeable, but that's another discussion entirely.

  • Unstable vMotion behavior over DCI with vPC?

    hi out there
I need some ideas to track down a problem. We have a DC running a VMware ESXi 4.1 cluster (2 x 2 sets of blade servers, one set at each site) with a DR site, interconnected with 4 x 10G fiber, over which we have established 2 x 2 port channels (Cisco Nexus 5K with vPC): one vPC port channel with two 10G links for iSCSI, and one vPC port channel, also with two 10G links, for "non-iSCSI" traffic, i.e. the rest. We have fully separated the iSCSI traffic from the rest of the network. This gives us a "simple" Layer 2 DC interconnect with a latency between the sites of ~1 ms, and no errors reported by any of the involved devices. The iSCSI setup consists of two EMC VNX 5500 controllers, one at each site, each with a "local" SAN array.
My problem is that from time to time, when we issue a vMotion or clone of a VM between the sites, we get either an extremely slow response (which will probably end in a timeout) or the operation fails with a timeout, e.g. "disk clone failed. Canceling Storage vMotion. Storage vMotion clone operation failed. Cause: Connection timed out".
Any suggestions on how to track this down? It is a bit hard to trace the network connections since it is 10 Gig (we haven't got any sniffer equipment yet that can keep up with a 10 Gig interface). Could there be buffer allocation problems on the Nexus switches (no errors logged; any suggestions on which debug level)?
    best regards /ti

Hi - we have a similar setup, but we use the NX5Ks to service the DCI and vPC as L2 only, and then run L3 on the NX7Ks. You need to have all the same VLANs on the vPC as far as I know; you can't fool it, but you might be able to trick something with some Q-in-Q trunks between the two sets of NX7Ks.
    best regards /ti

  • Has anyone been able to successfully use more than 8 cores with Logic?

I've posted this before and was recommended to phrase the question like this: has anyone been able to successfully use more than 8 cores with Logic?
If so (or if not), please specify what version of Logic you have and what computer, since only the new Mac Pros have 16 virtual cores, and any other information or reference saying how to do it or why it's not possible. This seems to be a question many users need answered before upgrading to a new Mac. Thanks!

    DrumStudio wrote:
I've posted this before and was recommended to phrase the question like this: has anyone been able to successfully use more than 8 cores with Logic?
If so (or if not), please specify what version of Logic you have and what computer, since only the new Mac Pros have 16 virtual cores, and any other information or reference saying how to do it or why it's not possible. This seems to be a question many users need answered before upgrading to a new Mac. Thanks!
    No,
    But I've been able to record a quartet using only 4 musicians.

  • Redundant installation with 4200 sensors

    Hi
We are in the process of starting work on a design with several IPS 4270s. The requirement is to make the design redundant, with high availability.
As far as I am aware, there is no redundancy support (e.g., no protocol support like HSRP) within the IPS itself, but there are several ways to build a redundant installation. I'm looking for white papers, case studies, or design suggestions involving a redundant installation. Could you please guide me on where to find such information?
    Thanks
    Johan Kellerman

    Hi,
    See the following white paper:
    "IPS Deployments in Enterprise Data Centers"
    http://www.cisco.com/en/US/prod/collateral/vpndevc/ps5729/ps5713/ps4077/prod_white_paper0900aecd806e724b.html
    Regards,

Support for Palomino core with OLD BIOS?

I have a question regarding MS-6380 compatibility with the Palomino core and an old BIOS.
The problem is that my T-bird 1000 MHz just burned up, and I haven't updated my BIOS at all, so now I'm in pursuit of a new CPU. I don't remember what version the BIOS is, but I'm sure it's older than the 6380v15 that is supposed to add "Palomino CPU support 1.5 GHz - 1.7 GHz".
So my questions are these:
Will it work if I put in an AMD 1500+ or 1600+ CPU (with the Palomino core but no higher clock speed than 1400 MHz)? Will it work long enough for me to update the BIOS? Or will it not work at all?
In short, will the motherboard work at all with a Palomino-based CPU, even with an old BIOS (probably ver 1.0)?

    Hi!
The XP 1800+ will normally run at 1533 MHz, assuming an FSB of 133 MHz. If the FSB is set to 100 MHz, the result will be 1533/1.33 ≈ 1150 MHz.
    Hans
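The arithmetic behind this, assuming the XP 1800+'s fixed 11.5x multiplier:

```latex
f_{\text{CPU}} = k \cdot f_{\text{FSB}}, \quad k = 11.5
\;\Rightarrow\; 11.5 \times 133.3\,\text{MHz} \approx 1533\,\text{MHz}, \qquad
11.5 \times 100\,\text{MHz} = 1150\,\text{MHz}
```

So with the old BIOS forcing a 100 MHz FSB, the chip would boot underclocked at roughly 1150 MHz rather than not at all.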

Maybe you are looking for

  • File to Proxy----Tables not getting updated.

    Hi all, I have File to proxy scnerio, where Data from file is uploaded in to BAPI in turn updated in to tables. If i take pay load from Moni  and test in SPROXY then tables is getting updated. But when i run Scnerio form XI tables are not getting upd

  • Query regarding data loading from xls

    Hi I want to read data ( integers , only one column) from xls file. I donot want to load it in a table otherwise I could have tried using loader. But what I need to do is I have lakhs of rows in excel sheet and I need to pick them up in a query . I c

  • Weighted Blend Tool

    I would like to see weighting added to the blend tool. So the user can define the spacing to be smaller on one side and further apart on the other, similar to Ease In/Out in Flash.

  • Failover Question

    I am new to JMS. I have a cluster with two instances (A & B), and I am wanting to implement failover with the brokers. My producer creates a transactional Topic on instance A's broker. My consumer subscribes to that same Topic via instance B's broker

  • Losing Mouse "hand" over buttons when publish for Web.

    Hi, I'm new to Flash. I'm publishing a web-site and I lose the "Hand" icon for the mouse over buttons after I publish and run the HTML=>Swf files from the Browser. When I "Test the movie" from flash I see the "Hand" over buttons just fine. Is there a