6500 migration to Nexus 7K

Hello,
I need help with CLNS on NX-OS. I am replacing a 6500 with a Nexus 7K and I have a problem with CLNS.
This is the configuration I have in IOS:
interface Tunnel9999
no ip address
ip broadcast-address 0.0.0.0
tunnel source Vlan402
tunnel destination 10.130.68.1
tunnel mode eon
clns router isis
isis hello-multiplier 15
isis hello-interval 1
router isis
net 39.724f.3000.001a.0000.0000.1013.1013.2000.0000.00
interface Vlan402
ip address 10.40.2.2 255.255.255.0
no ip redirects
ip pim sparse-dense-mode
ip ospf cost 4
standby ip 10.40.2.1
standby timers 1 3
standby priority 120
standby preempt
standby authentication airtel
clns router isis
I need to set up that configuration in NX-OS, but I read that there is no CLNS in NX-OS and that only IS-IS for IP can be configured. Is that true? Does anyone know what the solution is?
Thanks everybody
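For reference, the IP-only equivalent on NX-OS might look roughly like the sketch below (hypothetical IS-IS process tag CORE; NX-OS runs integrated IS-IS for IP only, so there is no CLNS routing and no eon tunnel mode, meaning Tunnel9999 has no direct counterpart, and PIM on NX-OS is sparse-mode only):
feature isis
feature ospf
feature pim
feature hsrp
feature interface-vlan
!
router isis CORE
  net 39.724f.3000.001a.0000.0000.1013.1013.2000.0000.00
!
interface Vlan402
  no ip redirects
  ip address 10.40.2.2/24
  ip ospf cost 4
  ip pim sparse-mode
  ip router isis CORE
  isis hello-interval 1
  isis hello-multiplier 15
  hsrp 0
    authentication text airtel
    preempt
    priority 120
    timers 1 3
    ip 10.40.2.1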

  Just make sure the priority is lower on the 6500. Do you have a vPC between your 6500 and the 7Ks? We are in the same boat: we have two 6500s connected to two 7Ks, and most of the routing is still being done on the 6500s. We haven't had the time to migrate the routing to the 7Ks because the customer never gives us any downtime, and quite frankly moving all of that scares the heck out of me. :-)

Similar Messages

  • 6500-VSS and NEXUS 56XX vPC interoperability

    Hello, is it possible to establish a port channel between a pair of Cisco 6500s running in VSS mode and a pair of Nexus 5000s running vPC? The design would be back-to-back: VSS -- Port-Channel -- vPC.
    I also want to support L2 and L3 flows between the two pairs.
    I have read many forums but I am not sure it works.
    Is such a design, if it works, supported by Cisco?
    Thanks a lot for your help.

    Hi Tlequertier,
    We have VSS 6509Es with Sup 2Ts and 6908 modules. These have a 40 Gb/s (4 x 10 Gb/s) uplink to our Nexus 5548UP vPC switches.
    So we have a fully meshed EtherChannel between the 4 physical switches (2 x N5548UP and 2 x 6509E).
    Kind regards,
    Tim
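    A minimal sketch of that back-to-back setup, with hypothetical interface and port-channel numbers (assumes the vPC domain, peer-link and LACP are already in place; the VSS pair presents one multichassis EtherChannel, and each N5K bundles its members into the same vPC):
    ! 6500 VSS side (members spread across both chassis)
    interface TenGigabitEthernet1/1/1
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 10 mode active
    interface TenGigabitEthernet2/1/1
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 10 mode active
    interface Port-channel10
     switchport
     switchport mode trunk
    ! each Nexus 5000 of the vPC pair
    interface ethernet 1/1-2
      switchport mode trunk
      channel-group 10 mode active
    interface port-channel 10
      switchport mode trunk
      vpc 10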

  • Migration of Nexus 5010 to 5548

    Is there a document/procedure for replacing the 5010 pair with a 5548 pair? From what I understand the vPCs are compatible between the 50XX and 55XX hardware due to a change in the ASIC; is this correct? Has anyone completed this type of upgrade?
    Thanks,
    Joe

    hi Isabel,
    >> Jul 5, 2009, 9:14am PST
    >> Both the Nexus 5000 and Nexus 7000 will add support for a
    >> feature called Virtual Port Channels (VPC) which allows
    >> the formation of multi-chassis ether channels like that
    >> with VSS on the Catalyst 6500.
    Nexus 7000 has had virtual Port Channel (vPC) since the release of NX-OS 4.2 around January this year.
    see http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
    you are correct that the Nexus 5000 doesn't yet have this. The N5K will gain vPC when NX-OS 4.2 is released on the N5K, around Q3 CY2009.
    cheers,
    lincoln.

  • Migration from Nexus 7000 without VDC to VDC

    Hi all
    I am working on a DataCenter architecture where we would like to implement Nexus 7000.
    For the time being there is only one "context", but we may take the opportunity to implement VDCs later on.
    I was not able to find a clear answer to the following:
      Can we add the VDC license and configure a new VDC on a Nexus 7000 currently running without VDCs?
      I suppose this is possible, but does the whole configuration need to be changed, or can a VDC be added without any interruption to the current environment?
    Thanks in advance !

    Hello
    To have VDC support on the N7K you will require the following license:
    LAN_ADVANCED_SERVICES_PKG
    To configure a new VDC you need to run:
    Nexus(config)# vdc <vdc-name>
    This will create a new VDC that is separate from the current one. It shouldn't affect the production environment, since separate processes are started for the new VDC.
    Then you can allocate some interfaces to it and configure them.
    But be careful to allocate only unused interfaces and not to add resource-intensive configuration.
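    A rough sketch of that workflow from the default VDC (hypothetical VDC name and interface range):
    N7K(config)# vdc TEST-VDC
    N7K(config-vdc)# allocate interface Ethernet2/1-4
    ! NX-OS warns that any existing configuration on the moved ports is removed
    N7K(config-vdc)# exit
    N7K# switchto vdc TEST-VDC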
    Here is a very good explanation of what is VDC and how it works:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/White_Paper_Tech_Overview_Virtual_Device_Contexts.html
    And here is VDC config guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/virtual_device_context/configuration/guide/vdc_nx-os_cfg.html
    HTH,
    Alex

  • Cisco VSS Dual-active PAgP detection via Nexus and vPC

    Hi!
    We will soon implement Cisco Nexus 5596s in our datacenter.
    However, we will still be using a pair of C6500s in a VSS in some parts of the network.
    Today we are running dual-active detection via a PAgP port channel, but those port channels will be removed and the only port channel we will have is to a pair of Nexus 5596s.
    Does anyone know if we can run PAgP dual-active detection via this MEC/vPC?
    Thank you for your time!
    //Olle

    Hi,
    The easiest way is to connect both links from the 6500s to only one of the Nexus switches. Then create a port channel on both the 6500 and the Nexus.
    The other option would be to connect the 6500s directly together via a gig link and use fast hello instead of ePAgP.
    I would use fast hello since it is supported on the 6500s.
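    If you go the fast hello route, a minimal sketch on the VSS pair (hypothetical domain number and dedicated non-VSL links, one per chassis):
    switch virtual domain 100
     dual-active detection fast-hello
    !
    interface GigabitEthernet1/5/1
     dual-active fast-hello
    interface GigabitEthernet2/5/1
     dual-active fast-hello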
    HTH

  • NTP on Nexus5k and 3560

    I have begun moving NTP from our 6500 to 4 Nexus 5Ks as part of a core upgrade. The Nexus will act as our internal NTP servers for all switches. Any switches that are on the same VLAN as the Nexus have no issues syncing NTP from them. However, any switch whose NTP traffic has to be routed to the Nexus shows the time source as insane.
    The configuration on our Nexus is as follows (the Nexus are .11, .12, .13 and .14):
    ntp peer 172.24.1.12
    ntp peer 172.24.1.13
    ntp peer 172.24.1.14
    ntp server 192.43.244.18
    clock timezone CST -6 0
    clock summer-time CDT 2 Sun Mar 2:00 1 Sun Nov 2:00 60
    Here is the configuration on one of our 3560's:
    clock timezone CST -6
    clock summer-time CDT recurring
    ntp server 172.24.1.11
    ntp server 172.24.1.13
    ntp server 172.24.1.12
    ntp server 172.24.1.14
    This same configuration worked when the switches were configured as NTP peers of our 6500 (172.24.1.1). The IP for the 6500 has been moved to an HSRP address across the Nexus, so I have pointed the switches at the individual IP of each Nexus.
    Here is a debug ntp packet output from one of the 3560s:
    .Mar  7 17:21:22: NTP: xmit packet to 172.24.1.11:
    .Mar  7 17:21:22:  leap 3, mode 3, version 3, stratum 0, ppoll 64
    .Mar  7 17:21:22:  rtdel 2445 (141.678), rtdsp C804D (12501.175), refid AC180101
    (172.24.1.1)
    .Mar  7 17:21:22:  ref D2F4A4F5.9CBFA919 (06:32:53.612 CST6 Sun Feb 26 2012)
    .Mar  7 17:21:22:  org 00000000.00000000 (18:00:00.000 CST6 Thu Dec 31 1899)
    .Mar  7 17:21:22:  rec 00000000.00000000 (18:00:00.000 CST6 Thu Dec 31 1899)
    .Mar  7 17:21:22:  xmt D3021792.8D0B8963 (11:21:22.550 CST6 Wed Mar 7 2012)

    Thanks for your reply.
    My issue may be a little different from the one you encountered. In my configuration I am able to get some, but not all, SVIs on Nexus 5548s to function as NTP servers.
    I have two Nexus 5548 vPC peers configured (N5K-1 and N5K-2) for HSRP and as NTP servers. A downstream 2960S switch stack (STK-7) is the NTP client. STK-7 is connected to N5K-1 and N5K-2 with a physical link each bundled into a port channel (multi-chassis Etherchannel on the STK-7 stack and vPC on the 5548 peers).
    When the STK-7 NTP client is configured for NTP server IP addresses on the same network as the switch stack (10.3.0.0 in the diagram below), all possible IP addresses work (IP addresses in green): the "real" IP addresses of each SVI on the 5548s (10.3.0.111 & 10.3.0.112) as well as the HSRP IP address (10.3.0.1).
    When the STK-7 NTP client is configured for NTP server IP addresses on a different network than the switch stack (10.10.0.0 in the diagram below), only the "real" IP address of the SVI on the 5548 to which the EtherChannel load-balancing mechanism directs the client-to-server NTP traffic (N5K-2) works. In the diagram above the client-to-server NTP traffic is sent on the link to N5K-2. In the diagram below NTP server 10.10.0.112 is reported as sane, but NTP servers 10.10.0.111 and 10.10.0.1 are reported as insane (in red).
    I am concerned that the issue is related to my vPC configuration.
    Cisco TAC has indicated that this behavior is normal.

  • VPC+, aka L3 on back-to-back vPC domains

    Hi,
    Please consider this scenario, where L2 VLANs span two data centers and R1-R4 are L2/L3 N7K routers (replacing existing 6Ks).
    (I wish VSS were also available on the N7K to make life 10x easier!)
                          R1          |          R2
                          ||          |          ||
    vPC peer-link         ||   ======MAN======   ||         vPC peer-link
                          ||          |          ||
                          R3          |          R4
                               Site A    |    Site B
    Attached to R1 and R3 there are servers (dual-attached via 6K access switches) that may need to communicate with other servers in the same VLAN on the other side of the MAN. The VLANs are trunked over the MAN, so that is fine. This traffic can go over R1 or R3, both for L2 (vPC) and for L3 (the HSRP vPC enhancements).
    Anyway, there is also a global OSPF domain for inter-VLAN communication and for reaching destinations outside the DCs via other routers attached to the cloud above.
    I've heard there is a kind of enhancement request (or bug? CSCtc71813) to have this kind of back-to-back vPC scenario handle L3 traffic transparently (should the peer-gateway command also deliver control-plane L3 information?). There are two workarounds available for this design:
    1. Define an additional router-on-a-stick using an extra VDC on each 7K. In this case, for example for R1, we would use 3 VDCs: one VDC for admin, one VDC for L2, and one VDC for R1.
    2. Define static routes to tell each 7K how to reach the other 7Ks' L3 next hops.
    a) Which workaround is the best choice to smooth the later upgrade to the version of vPC that will handle this issue?
    b) Are there any more caveats I don't see? I haven't seen any link on CCO, so I am unsure how to proceed with the design.
    c) I would be tempted to think that using additional static routes is the better choice, because it would be easier to remove them once vPC+ is there.
    What static routes should I add? R1 to R2, R1 to R4, and so on? I am missing the details of this implementation.
    d) What would vPC+ look like once (and when?) it is there?
    Thanks in advance for your valuable input.
    G.

    To expand on Lucian's comment, because I'm sure the next thought will be: can I run OSPF over a VLAN and just carry THAT over my vPC? You don't want to do this either.
    We don't support running routing protocols over vPC-enabled VLANs.
    What happens is that your 6500 will form a routing adjacency with each Nexus 7000, let's say Nexus 7000-1 and 7000-2. Note my picture below.
    Let's say that R1 is trying to send to a network that is behind R2. R1 is adjacent to 7000-1 and 7000-2, so we have equal-cost paths. CEF chooses 7000-1 to route the packet, but EtherChannel load balancing chooses the physical link to 7000-2. 7000-2 then needs to switch the packet over the vPC peer-link to 7000-1. 7000-1 receives the packet and tries to send it out the vPC member port toward R2, but the egress port drops the packet. This happens because a packet received on a vPC member link that crosses the vPC peer-link is not allowed to be sent out another vPC member link.
    I'd suggest running an L3 link from your 6500 to each Nexus 7000 if you do want to do L3 on it.
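    A sketch of such a dedicated routed link, with hypothetical interfaces, OSPF process number and /30 addressing (repeat one per Nexus 7000):
    ! Nexus 7000 side
    feature ospf
    interface Ethernet1/1
      description L3 link to 6500
      no switchport
      ip address 10.99.0.1/30
      ip router ospf 1 area 0.0.0.0
      no shutdown
    ! 6500 side
    interface TenGigabitEthernet1/1
     description L3 link to N7K-1
     no switchport
     ip address 10.99.0.2 255.255.255.252
    router ospf 1
     network 10.99.0.0 0.0.0.3 area 0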

  • Cisco UCS network uplink on aggregation layer

    Hello Cisco Community,
    we are using our UCS (Version 2.21d) for ESX Hosts. Each host has 3 vnics as follows:
    vnic 0 = VLAN 10 --> Fabric A, Failover Fabric B
    vnic 1 = VLAN 20 --> Fabric B, Failover Fabric A
    vnic 2 = VLAN 100 --> Fabric A, Failover Fabric B
    Currently the UCS is connected to the access layer (Catalyst 6509) and we are migrating to Nexus (vPC). As you know, Cisco UCS Fabric Interconnects can handle Layer 2 traffic themselves, so we are planning to connect our UCS Fabric Interconnects directly to our new L3 Nexus switches.
    Has anyone connected UCS directly to the L3 layer? Is there anything we have to pay attention to? Are there any recommendations?
    thanks in advance
    best regards
    /Danny

    We are using ESXi 5.5 with a dvSwitch (distributed vSwitch). In our Cisco UCS power workshop we discussed the pros and cons of hardware and software failover, and we committed to using hardware failover. It is very fast and we currently have no problems.
    This is a never-ending and misunderstood story: your design should provide load balancing AND failover. Hardware failover only gives you the latter. In your design you use just one fabric per VLAN; what a waste!
    And think about a failover on an ESXi host with 200 VMs: at least 200 gARP messages have to be sent out, and this is load on the FI CPU. Most likely dozens or more ESXi servers are impacted...
    Cisco best practice: if you use a soft switch, let it do the load balancing and failover; don't use hardware failover.
    see attachment (the paper is not up to date,
    For ESX Server running vSwitch/DVS/Nexus 1000v and using Cisco UCS Manager Version 1.3 and below, it is recommended that fabric failover not be enabled, as that will require a chatty server for predictable failover. Instead, create regular vNICs and let the soft switch send
    gARPs for VMs. vNICs should be assigned in pairs (Fabric A and B) so that both fabrics are utilized.
    Cisco UCS version 1.4 has introduced the Fabric Sync feature, which enhances the fabric failover functionality for hypervisors as gARPs for VMs are sent out by the standby FI on failover. It does not necessarily reduce the number of vNICs as load sharing among the fabric is highly recommended. Also recommended is to keep the vNICs with fabric failover disabled, avoiding the use of the Fabric Sync feature in 1.4 for ESX based soft switches for quicker failover.

  • VPC for L3 links

    Hi,
    I have two Cat 6509s working as core switches, mostly on L3 interfaces running OSPF, and further connected to the campus distribution (2 x 6509) and datacentre distribution (2 x 6509). I have to replace both core switches with two Nexus 7Ks with the same configuration. Is there any possibility that I can use vPC on L3 links? Is it recommended to use vPC on L3 links, or what is the way to make both Nexus act as a single cluster?
    Sanjay

    To expand on Lucian's comment, because I'm sure the next thought will be: can I run OSPF over a VLAN and just carry THAT over my vPC? You don't want to do this either.
    We don't support running routing protocols over vPC-enabled VLANs.
    What happens is that your 6500 will form a routing adjacency with each Nexus 7000, let's say Nexus 7000-1 and 7000-2. Note my picture below.
    Let's say that R1 is trying to send to a network that is behind R2. R1 is adjacent to 7000-1 and 7000-2, so we have equal-cost paths. CEF chooses 7000-1 to route the packet, but EtherChannel load balancing chooses the physical link to 7000-2. 7000-2 then needs to switch the packet over the vPC peer-link to 7000-1. 7000-1 receives the packet and tries to send it out the vPC member port toward R2, but the egress port drops the packet. This happens because a packet received on a vPC member link that crosses the vPC peer-link is not allowed to be sent out another vPC member link.
    I'd suggest running an L3 link from your 6500 to each Nexus 7000 if you do want to do L3 on it.

  • LACP / Loadbalance

    I want to try and understand the error messages listed below.
    My setup has two 3560 switches, both working in sync and providing failover in the event of any port going down. Here are the messages:
    402942: *Mar  6 23:30:13.243: %SW_MATM-4-MACFLAP_NOTIF: Host 000c.29d4.3265 in vlan 10 is flapping between port Po12 and port Gi0/1
    402946: *Mar  6 23:30:37.427: %SW_MATM-4-MACFLAP_NOTIF: Host 000c.29d4.3265 in vlan 10 is flapping between port Gi0/1 and port Po12
    402947: *Mar  6 23:30:39.306: %SW_MATM-4-MACFLAP_NOTIF: Host 000c.29d4.326f in vlan 60 is flapping between port Gi0/1 and port Gi0/2
    402937: *Mar  6 23:30:01.776: %SW_MATM-4-MACFLAP_NOTIF: Host 000c.29d4.326f in vlan 60 is flapping between port Gi0/2 and port Gi0/1
    From a little research I understand this could be a loop or incorrectly configured loadbalancing.  Here are the configurations of both switches:
    Port:
    interface GigabitEthernet0/1
    description esx1
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 2-100
    switchport mode trunk
    channel-group 1 mode active
    spanning-tree portfast trunk
    Here is the port channel:
    interface Port-channel1
    description esx1
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 2-100
    switchport mode trunk
    spanning-tree portfast trunk
    The other switch has an interconnect on ports 47 and 48 using Port-channel 12.
    Any advice or ways to debug this would be very helpful. I must add that I cannot use the debug option in IOS, as the switches are remote, I cannot risk the ports being unavailable, and of course they are in production.

    Hi Chris,
    How are the ESX servers configured in this setup? The way I understand it based on what you've explained previously I think you have all four server NICs connected to the same vSwitch in the ESX server and using "Route based on IP hash" as the load balancing mechanism. If that is the case then you effectively have a single "aggregate link" (port channel) from the ESX server and what you have configured is as shown in the diagram below.
    If that is indeed the case, then this is not a supported configuration on the Catalyst 3560 switches. When the NICs that form a single aggregate on the ESX server connect across two physical switches, you need switches that support some kind of Multi-Chassis Link Aggregation (MLAG), e.g. a Catalyst 3750 "stack", a Catalyst 6500 with VSS, a Nexus 5000 with vPC, etc.
    This would also explain why you're seeing MAC flaps. The ESX server is sending traffic from a single MAC on any of the physical NICs, as it's a single aggregate, but as far as the network is concerned the MAC is seen to move from one switch to another.
    Unless you have a very specific reason to use port channels to the ESX server, i.e. you need a single VM to be able to use more bandwidth than is available on a single physical NIC, I would personally remove the port channels and use the default "Route based on originating virtual port ID" load balancing on the ESX servers. If you have a good number of VMs hosted on the ESX server you'll still get good load balancing across all four links.
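    If you do remove the channel, a sketch of the resulting switch-side config (assuming the existing trunk settings are kept on each NIC-facing port):
    interface GigabitEthernet0/1
     description esx1
     no channel-group 1
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 2-100
     switchport mode trunk
     spanning-tree portfast trunk
    !
    no interface Port-channel1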
    Regards

  • Catalyst 6500 - Nexus 7000 migration

    Hello,
    I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as the FHRP, rapid-pvst, and OSPF as the IGP. Furthermore, the Cat6500s use MPLS/L3VPN with BGP for 2/3 of the VLANs. Otherwise the topology is quite standard, with a number of 6500s and CBS3020/3120s as server access.
    In preparing for the migration, VTP will be discontinued and VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming EtherChannels will run in "normal" mode, to avoid any problems with BA. For now, only L2 will be used on the N7Ks, as we're awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
    The questions arise when migrating Layer 3 functionality, including HSRP. As I understand it, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peer-link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks with unused IPs, assigning the same virtual IP, and only decrementing the priority to a value below the current standby router. Spanning-tree priority will also, if necessary, be modified to better align with HSRP.
    From a routing perspective, I'm thinking of configuring OSPF/BGP etc. similarly to the 6500s, only tweaking the metrics (cost, local-pref, etc.) to constrain forwarding to the 6500s, and subsequently migrating both routing and FHRP at the same time; maybe not in a big-bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present this seems like a valid approach to me, but maybe someone has experience with this (good or bad), so I'm hoping someone has some insight they would like to share.
    Topology drawing is attached.
    Thanks
    /Ulrich

    In a normal scenario, yes, but not with vPC. HSRP is a bit different in a vPC environment: even though an SVI is not the HSRP primary, it will still forward traffic. Please see the white paper below.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
    I would suggest setting up the SVIs on the N7K but leaving them in the down state. When you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "you are ready" I mean that the spanning-tree root is on the N7K along with all the L3 northbound links (toward the core).
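    A sketch of such a staged SVI on the N7K (hypothetical VLAN and addressing), left shut down and with the HSRP priority below the current 6500 standby router:
    feature interface-vlan
    feature hsrp
    !
    interface Vlan100
      shutdown
      no ip redirects
      ip address 10.100.0.3/24
      hsrp 1
        ip 10.100.0.1
        priority 90
        preempt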
    I had a customer who did the same thing that you are trying to do, to avoid downtime. However, out of the 50+ SVIs, we had one SVI for which HSRP would not establish between the C6K and N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec to 1 min per SVI, but it was less painful and wasted less time, because we didn't need to figure out what was wrong or chase any NX-OS bugs.
    HTH,
    jerry

  • VPC on Nexus 5000 with Catalyst 6500 (no VSS)

    Hi, I'm pretty new to the Nexus and UCS world, so I have many questions and I hope you can help me get some answers.
    The diagram below is the configuration we are looking to deploy. It is done this way because we do not have VSS on the 6500 switches, so we cannot create just one EtherChannel to the 6500s.
    The blades inserted in the UCS chassis have Intel dual-port cards, so they do not support full failover.
    Questions I have are.
    - Is this my best deployment choice?
    - vPC depends heavily on the management interface on the Nexus 5000 for the peer-keepalive monitoring, so what is going to happen if the vPC breaks because:
         - one of the 6500s goes down
              - STP?
              - What is going to happen to the EtherChannels on the remaining 6500?
         - the management interface goes down for any other reason
              - which one is going to be the primary Nexus?
    Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.
    Any help is appreciated.
    Devices
    · 2 Cisco Catalyst switches with two WS-SUP720-3B each (no VSS)
    · 2 Cisco Nexus 5010
    · 2 Cisco UCS 6120xp
    · 2 UCS chassis
         - 4 Cisco B200-M1 blades (2 per chassis)
              - Dual 10Gb Intel card (1 per blade)
    vPC configuration on the Nexus 5000s
    TACSWN01:
    feature vpc
    vpc domain 5
    reload restore
    reload restore delay 300
    peer-keepalive destination 10.11.3.10
    role priority 10
    !--- Enables vPC and defines the vPC domain and the peer-keepalive peer
    interface ethernet 1/9-10
    channel-group 50 mode active
    !--- Puts the interfaces into Po50
    interface port-channel 50
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link
    !--- Po50 configured as the vPC peer-link
    interface ethernet 1/17-18
    description UCS6120-A
    switchport mode trunk
    channel-group 51 mode active
    !--- Associates the interfaces with Po51, connected to UCS6120xp-A
    interface port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 with Po51
    interface ethernet 1/19-20
    description UCS6120-B
    switchport mode trunk
    channel-group 52 mode active
    !--- Associates the interfaces with Po52, connected to UCS6120xp-B
    interface port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 with Po52
    !----- Configuration for the connection to the Catalyst 6506s
    interface ethernet 1/1-3
    description Cat6506-01
    switchport mode trunk
    channel-group 61 mode active
    !--- Associates the interfaces with Po61, connected to Cat6506-01
    interface port-channel 61
    switchport mode trunk
    vpc 61
    !--- Associates vPC 61 with Po61
    interface ethernet 1/4-6
    description Cat6506-02
    switchport mode trunk
    channel-group 62 mode active
    !--- Associates the interfaces with Po62, connected to Cat6506-02
    interface port-channel 62
    switchport mode trunk
    vpc 62
    !--- Associates vPC 62 with Po62
    TACSWN02:
    feature vpc
    vpc domain 5
    reload restore
    reload restore delay 300
    peer-keepalive destination 10.11.3.9
    role priority 20
    !--- Enables vPC and defines the vPC domain and the peer-keepalive peer
    interface ethernet 1/9-10
    channel-group 50 mode active
    !--- Puts the interfaces into Po50
    interface port-channel 50
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link
    !--- Po50 configured as the vPC peer-link
    interface ethernet 1/17-18
    description UCS6120-A
    switchport mode trunk
    channel-group 51 mode active
    !--- Associates the interfaces with Po51, connected to UCS6120xp-A
    interface port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 with Po51
    interface ethernet 1/19-20
    description UCS6120-B
    switchport mode trunk
    channel-group 52 mode active
    !--- Associates the interfaces with Po52, connected to UCS6120xp-B
    interface port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 with Po52
    !----- Configuration for the connection to the Catalyst 6506s
    interface ethernet 1/1-3
    description Cat6506-01
    switchport mode trunk
    channel-group 61 mode active
    !--- Associates the interfaces with Po61, connected to Cat6506-01
    interface port-channel 61
    switchport mode trunk
    vpc 61
    !--- Associates vPC 61 with Po61
    interface ethernet 1/4-6
    description Cat6506-02
    switchport mode trunk
    channel-group 62 mode active
    !--- Associates the interfaces with Po62, connected to Cat6506-02
    interface port-channel 62
    switchport mode trunk
    vpc 62
    !--- Associates vPC 62 with Po62
    vPC verification
    show vpc consistency-parameters
    !--- Displays the compatibility parameters
    show feature
    !--- Use it to verify that the vpc and lacp features are enabled
    show vpc brief
    !--- Displays information about the vPC domain
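    A couple of additional checks that are often useful here (standard NX-OS show commands, assuming a recent release):
    show vpc peer-keepalive
    !--- Displays the keepalive status and the source/destination addresses in use
    show port-channel summary
    !--- Displays the state of the port channels and their member ports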
    EtherChannel configuration on the TAC 6500s
    TACSWC01:
    interface range GigabitEthernet2/38 - 43
    description TACSWN01 (Po61 vPC61)
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    no ip address
    channel-group 61 mode active
    TACSWC02:
    interface range GigabitEthernet2/38 - 43
    description TACSWN02 (Po62 vPC62)
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    no ip address
    channel-group 62 mode active

    ihernandez81,
    Between c1-r1 & c1-r2 there are no L2 links, and ditto for d6-s1 & d6-s2. We did have a routed link just to allow orphan traffic.
    All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port channels 203 & 204 carry the exact same VLANs.
    The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have Po203 with 4 x 10 Gb links going to the 5Ks (2 from each VSS member to each 5K).
    As you can tell, what we were doing was extending VM VLANs between two data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
    If you got on any 5K you would see two port channels (203 & 204) going to each 6500; again, when one pair went to VSS, Po204 went away.
    I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack, how would you hook up a 3750 stack to two 6500s, and if you did, why would you run an L2 link between the 6500s?
    For us, using four 10G ports between the 6509s took ports that were too expensive (we had 6704s), so we used the 5Ks.
    Our blocking link was on one of the links between site 1 & site 2. If we did not have WAN connectivity there would have been no blocking or loops.
    Caution: if you go with 7Ks, beware of the inability to do combined L2/L3 via vPCs.
    Better?
    One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity while you are migrating things, they tend to work, unless they really break.

  • What is the best way to migrate zoning from an MDS9216i to a Nexus 5596?

    I am migrating from an MDS 9216i to a Nexus 5596. We have dual paths, so I can migrate one leg at a time; the sole function of these switches is Fibre Channel attached storage.
    What is the best way to migrate the zoning information? I have been told I can ISL the new and old switch together and then push the zoning to the new switch. Is that better than just cutting and pasting the zoning information onto the new switch?
    Also, I have to move four Brocade M4424 switches to the new switches; they are currently attached via interop mode 1, but will now be attached using NPIV. Has anyone done this before, and did you have any issues?
    Any help or advice would be appreciated. Thanks!
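    For what it's worth, the cut-and-paste approach usually amounts to something like this sketch (hypothetical VSAN 10 and zoneset name):
    ! on the MDS 9216i: capture the zoning configuration
    show zoneset vsan 10
    ! paste the zone and zoneset definitions into the 5596 configuration, then activate
    zoneset activate name ZS_VSAN10 vsan 10
    show zoneset active vsan 10
    ! alternatively, with an ISL between the old and new switch, enabling full zoneset distribution (zoneset distribute full vsan 10) propagates the full zone database on activation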

    Use an ethernet cable to connect the two computers, and set up file sharing. After that copying files from one computer to the other is exactly like copying from one hard drive to another:
    http://docs.info.apple.com/article.html?artnum=106658

  • Connection 10 gig between 6500 and Nexus 7009

    I am not able to get link between a 10 Gig interface with a 10GBASE-LRM optic on a Nexus 7K and a 6500 with a 10 Gig 10GBASE-LX4 interface. I was told this would work. Does anyone know if the interfaces are compatible? If not, what optic do I need to get for the Nexus 7K box?
    Thank You.                   

    Please check the links mentioned below. I hope they help.
    http://www.cisco.com/en/US/prod/collateral/modules/ps2797/ps5138/product_data_sheet09186a008007cd00.pd
    http://www.cisco.com/en/US/prod/collateral/modules/ps5455/eol_c51_599855.html

  • LMS 4.1: 6500 and Nexus 7k devices appear unreachable in Discovery

    Good afternoon,
    I have spent days configuring LMS 4.1, but I have two main problems.
    I have 6500 and Nexus 7k devices that always appear as "unreachable" when running Discovery, even though SNMP, Telnet, and SSH check out fine and their traps and configurations are collected.
    The other problem is that I have 3600 devices that use VRFs, and the SNMP loopback is in a VRF; the device is discovered by Discovery, but via another IP in another VRF, and it then sends its traps from the correct loopback IP.
    I have already increased the SNMP timeout attributes to 10.
    Could anyone help me?
