Connect Nexus 5600 and Qlogic

Hi,
Have a new DC solution with Nexus 56128 switches running LAN + native FC.
The storage arrays shall be connected to the Nexus switches via 8G FC.
The customer has Dell blades with QLogic SAN switches, where the existing zoning is configured. Today these are connected directly to the storage array.
Is it possible to connect the QLogic switches to the Nexus and disconnect the storage from the QLogic?
Dell blade server -> QLogic FC switch -> Nexus FC switch -> EMC VNX storage
I am thinking about interoperability issues between Nexus and QLogic.
/Jorgen

Hi,
No problem with that; just bear in mind that you have to configure NPV mode (or whatever QLogic calls the equivalent) on the QLogic switch.
Rgds,
Felipon
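For reference, the Nexus core side of such a design also needs NPIV enabled so it accepts the multiple fabric logins forwarded by the edge switch. A minimal NX-OS sketch (the VSAN and interface numbers are illustrative; the matching NPV/transparent setting on the QLogic side follows that vendor's own syntax):

```
feature npiv                ! accept multiple FC logins on a single F port
vsan database
  vsan 10 interface fc1/1   ! example: port facing the QLogic uplink
interface fc1/1
  switchport mode F
  no shutdown
```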

Similar Messages

  • Nexus 5596 and Qlogic 8150

    I am having a problem getting the QLogic 8150 working in a Windows 2008 R2 server. SAN connectivity is working, but I am having a problem with the networking.
    I have set up VLAN 666 as the data VLAN.
    On the Ethernet trunk I have configured:
    interface ethernet 1/17
    switchport mode trunk
    switchport trunk allowed vlan 666
    spanning-tree port type edge trunk
    This does not work, but if I also allow VLAN 1 on the trunk (switchport trunk allowed vlan 1,666) it works and I have full IP connectivity. Any ideas why adding VLAN 1 to the trunk makes everything work? Thanks

    You set VLAN 666 as the VlanID under the QLogic adapter's advanced properties, right? It should be the last value in the scroll box.
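    A likely explanation (an assumption based on the symptom, not confirmed in the thread): if the adapter is not tagging, its frames arrive untagged and are placed in the trunk's native VLAN, which defaults to VLAN 1. Making VLAN 666 the native VLAN on the trunk is one way to test this:

    ```
    interface ethernet 1/17
      switchport mode trunk
      switchport trunk native vlan 666   ! untagged frames from the host now land in VLAN 666
      switchport trunk allowed vlan 666
    ```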

  • Interoperability issues between Nexus 5k and HP storageworks (8/20q)

    Hello community,
    I am trying to get a VM host and a Windows server to connect to their storage across a Nexus and an HP (QLogic) fabric switch. The VM host is currently unable to see its datastores, possibly due to an interoperability issue between Cisco and HP (QLogic).
    I configured and tested the connectivity using only the Cisco Nexus and this worked; I then tested it using only the HP fabric switch (HP 8/20q) and this also worked.
    However, when using the HP and the Cisco Nexus together as shown in the attached diagram, things stop working.
    The connection uses native Fibre Channel. On the Cisco side I performed the following steps:
    Configured the Nexus with Domain ID 10 and the HP with Domain ID 20.
    Connected the 2 fabric switches on fc1/48 (Cisco) and port 0 (HP) and confirmed that the ISL came up (E_port 8G), I confirmed connectivity using fcping both ways.
    I connected the SAN to the Nexus and the servers to the HP
    Configured VSAN 10
    Added interfaces fc1/41 to 48 in VSAN 10
    Created 2 zones ( ESXI and Windows)
    Added the PWWN for the ESXI server and the MSA2040 to the ESXI zone
    Added the PWWN for the Windows 2k8 server and MS2040 to the Windows zones
    Created zoneset (Fabric-A) and added both the above zones in it
    Activated the FABRIC-A zoneset
    The result is that the zones and zoneset are synchronised to the HP switch. I confirmed that I was able to see the server and SAN WWNs in the correct zones on the HP.
    From the 8/20q switch I am able to fcping the SAN, the Nexus, and the servers; however, the Nexus is only able to fcping the SAN and the HP, and it returns a “no response from destination” when pinging the servers.
    I have added the FCIDs for all the units to the same zones to see if it makes any difference, to no avail; the result seems to be the same. I have gone through various Nexus/MDS/HP/QLogic user guides and forums; unfortunately I have not come across any that cover this specific topology.
    The HP user guide is here: http://h20565.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c02256394
    I’m attaching the Nexus config and a partial view of “show interface brief” showing the Fibre Channel port status:
    Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                      Mode   Trunk                          Mode  Speed  Channel
                             Mode                                 (Gbps)
    fc1/47     10     auto   on      up               swl    F       8    --
    fc1/48     10     auto   on      up               swl    E       8    --
    Any help and advice would be greatly appreciated. thanks in advance

    Hi all, after much reading, Walter Dey provided the hint that put me on the right track.
    By default the Nexus 5k is in interop mode 1. However, one of the requirements for this to be interoperable with other vendors is that every FC domain ID in the fabric must be between 97 and 127, as stated on the Cisco website.
    http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/interoperability/guide/ICG_test.html
    Another issue that had me and my colleague scratching our heads was a high level of CRC errors on the ISL interfaces. This was caused by an ARBFF settings mismatch between the Nexus and the HP. It was resolved by ensuring that the ARBFF setting on the HP was set to false and that the command "switchport fill-pattern ARBFF speed 8000" was configured on the ISL interface linking the two switches. (Note that Cisco's default fill pattern for these ports is IDLE; until this is changed the link will not stabilise.)
    Thanks for all your help guys.
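    Summarising the two fixes above as an NX-OS sketch (the domain ID, VSAN, and interface numbers are examples from this thread; verify the exact ranges and syntax against your release):

    ```
    ! Keep every domain ID in the fabric inside the 97-127 interop range
    fcdomain domain 100 static vsan 10
    ! Match the fill word on the 8G ISL; the Cisco default is IDLE
    interface fc1/48
      switchport fill-pattern ARBFF speed 8000
    ```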

  • LACP passive mode on EvPC on Nexus 5600

    I have a Nexus 5600 in a co-lo facility. The server administrators want to be able to PXE-boot servers connected to an active/active FEX and, once provisioned, bundle the two ports with LACP. On a typical switch you would put the ports in passive mode, where they act like normal ports until the server starts talking active LACP after it is built. However, on the Nexus, while the ports show up and connected, they don't seem to operate until the port-channel comes up. While in a channel group they won't pull a MAC address off the host; if I take them out of the channel group they both pick up MAC addresses and can PXE-boot.
    Has anyone else run into this issue, and is there something extra I need to do on the Nexus to make it work?

    Hi Scott,
    Could you please share the network diagram along with the configuration, so that I can help you further.
    Best Regards
    Sachin Garg
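    For anyone hitting the same symptom: on Nexus switches, LACP port-channel members that receive no LACP PDUs are suspended, which blocks pre-OS traffic such as PXE. One commonly used workaround (verify support on your NX-OS release) is to disable that suspension on the port-channel so members forward as individual links until LACP negotiates:

    ```
    interface port-channel 10
      no lacp suspend-individual   ! members stay up as individual ports before LACP forms
    ```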

  • NEXUS 56128 and FEX with breakout cable

    Hello,
    Using two Nexus 56128 switches, I wish to create an enhanced vPC architecture with FEX 2248TP.
    My question is: can I use the 40 Gb/s ports on the NX56128 with a QSFP breakout DAC cable (40G to 4x10G) to connect the FEX 2248?
    Regards

    Hi Yallies,
    For the Nexus 56128 (http://cs.co/9002c2aC), it supports 8 true 40 GE QSFP ports and up to 96 10 GE ports (of which 48 are unified ports). Since the Nexus 5600 is a high-density fabric extender aggregation platform, it can be used with the Nexus 2248PQ and 2248TP. Feel free to message me directly for further assistance. Hope this helps! :)
    Thanks,
    Angela ([email protected])
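    For reference, attaching a FEX on NX-OS looks roughly like the sketch below (the FEX number and uplink port are examples). Whether a QSFP breakout port can be used as a fex-fabric uplink depends on the platform and NX-OS release, so check the 5600/2248 release notes before ordering cables:

    ```
    feature fex
    fex 100
      description "FEX 2248TP"
    interface ethernet 1/49
      switchport mode fex-fabric
      fex associate 100
      channel-group 100
    interface port-channel 100
      switchport mode fex-fabric
      fex associate 100
    ```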

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, No Nexus 1000V, using VMware Distributed Switch:
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pros of a port channel are both bandwidth and redundancy. vPC would be preferred.
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct? //Without the 1000v, I always tend to leave the dvSwitch load-balance behavior at the default of "route by port ID".
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES? //UCS can perform L2, but northbound should be performing L3.
    Cheers,
    David Jarzynka
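    A minimal sketch of the vPC plumbing discussed in Q1 (the domain number, IPs, and port-channel IDs are illustrative):

    ```
    feature vpc
    feature lacp
    vpc domain 1
      peer-keepalive destination 10.0.0.2 source 10.0.0.1
    interface port-channel 1
      switchport mode trunk
      vpc peer-link                 ! link between the two Nexus 7000s
    interface port-channel 10
      switchport mode trunk
      vpc 10                        ! member links to Fabric Interconnect A, one from each 7K
    ```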

  • SAN Port-Channel between Nexus 5000 and Brocade 5100

    I have a Nexus 5000 running in NPV mode connected to a Brocade 5100 FC switch using two FC ports on a native FC module in the Nexus 5000. I would like to configure these two physical links as one logical link using a SAN Port-Channel/ISL-Trunk. An ISL trunking license is already installed on the Brocade 5100. The Nexus 5000 is running NX-OS 4.2(1), the Brocade 5100 Fabric OS 6.20. Does anybody know if this is a supported configuration? If so, how can this be configured on the Nexus 5000 and the Brocade 5100? Thank you in advance for any comments.
    Best regards,
    Florian

    I tried that and I could see the status lights on the ports come on, but it still showed not connected.
    I configured another switch (a 3560) with the same config and the same fiber layout, and I got the connection up on it. I just can't seem to get it up on the 4506. Could it be something with the supervisor? Could it be wanting to use the 10 Gb port instead of the 1 Gb ports?
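    For reference, the NX-OS side of a SAN port-channel is defined as sketched below (interface numbers are examples). Whether the Brocade end will bundle the links with a Nexus in NPV mode is a separate support question, so check the interoperability matrices for your NX-OS and Fabric OS versions:

    ```
    interface san-port-channel 1
      channel mode active
    interface fc2/1-2
      channel-group 1 force
      no shutdown
    ```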

  • Connecting Nexus 5K to HP 3800 stack on vPC

    As the title says, I am connecting a stack of 3800 switches to a Nexus 5K pair using a virtual port channel. However, the second connection never comes up, and in the one case we've had where connection 1 went down, the second connection didn't take over. Is anyone out there doing this successfully?
    Thanks,
    Russell
         +------------+__________+------------+
         |  Nexus 5K  |__________|  Nexus 5K  |
         +------------+    vPC   +------------+
              |                          |
             c|                          |
             o|                          |
             n|                          |
             n|                          |c
              |      +------------+      |o
             1+------| HP 3800-1  |      |n
                     +------------+      |n
                          |S|            |
                          |T|            |2
                          |A|            |
                          |C|            |
                          |K|            |
                     +------------+      |
                     | HP 3800-5  |------+
                     +------------+

    Duplicate of connect nexus 5k to hp 3800 via vPC in the LAN forum.
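    For anyone comparing notes, the Nexus side of such a vPC is normally shaped like this sketch (the IDs and peer-keepalive address are placeholders; the HP stack must present both links as a single LACP trunk spanning its stack members):

    ```
    feature vpc
    feature lacp
    vpc domain 20
      peer-keepalive destination 192.0.2.2   ! example peer mgmt address
    interface port-channel 38
      switchport mode trunk
      vpc 38
    interface ethernet 1/38
      channel-group 38 mode active           ! same config on the second 5K's member port
    ```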

  • Nexus 5600 QoS

    Hi
    I want to apply QoS scheduling using 8 classes per interface on the Nexus 5672.
    As far as I know, the Nexus 5600 has 8 queues per port.
    So on the ingress port I want to classify on CoS markings using 8 class-maps and a policy-map of type qos,
    but I don't understand why I must apply "set qos-group" in the policy-map.
    Then on the egress port, to schedule the bandwidth for each class,
    I applied a class-map and a policy-map of type queueing,
    but the queueing class-map only supports "match qos-group", and qos-group only accepts the values 1-5.
    So how do I apply a bandwidth percent for each of the 8 class-maps in the per-port policy-map?
    Is it impossible?
    Please let me know the reason and how to do it.
    Thank you

    Duplicate posts.  
    Go here:  http://supportforums.cisco.com/discussion/12142791/nexus-7000-qos
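    For readers with the same question, the NX-OS QoS model chains ingress classification to egress queueing through the qos-group tag, roughly as below (class names, group numbers, and percentages are examples; how many qos-groups remain free for user classes depends on platform defaults such as the FCoE class):

    ```
    class-map type qos match-any CM-COS5
      match cos 5
    policy-map type qos PM-IN
      class CM-COS5
        set qos-group 5              ! internal tag linking the ingress class to the egress queue
    class-map type queueing CM-Q5
      match qos-group 5
    policy-map type queueing PM-OUT
      class type queueing CM-Q5
        bandwidth percent 20
    system qos
      service-policy type qos input PM-IN
      service-policy type queueing output PM-OUT
    ```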

  • Trunking nexus 5596 and netapp or exsi issue

    Hi,
    I have two issues with trunking between a Nexus 5596 and an ESXi server: I cannot get the servers to ping out,
    and the NetApp connected to the same 5596 cannot ping either.
    If the server is on an access port, it works fine.
    Are there any tricks that need to be configured on the Nexus to make this work?

    Make sure you are actually tagging those VLANs on the host (NetApp/ESXi). If you are not, that would explain why it works in access mode on the switch.
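    If the host cannot tag, another option is to carry its VLAN untagged as the native VLAN on the trunk, as in this sketch (VLAN and interface numbers are examples):

    ```
    interface ethernet 1/10
      switchport mode trunk
      switchport trunk native vlan 100   ! untagged host traffic lands in VLAN 100
      switchport trunk allowed vlan 100,200
    ```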

  • Connect Nexus 5548UP-L3 to Catalyst 3750G-24T-E Layer 3 Switch

    Please help!
    Could anyone out there please assist me with a basic configuration between a Nexus switch and a Catalyst switch, so that devices connected to the Catalyst switch can talk to devices connected to the Nexus switch and vice versa? In my current setup all servers on VLAN 40 are connected to Catalyst Switch A as shown in the diagram below, and all desktops and other peripherals are connected to Catalyst Switch B. I am required to implement a new Nexus 5548 switch that will eventually replace Switch A. For now I just need to connect both switches together and start moving the servers from Switch A to the Nexus switch.
    The current network setup is shown as per diagram below:
    SWITCH A – this is a layer 3 switch. All servers are connected to this switch on the VLAN 40.
    SWITCH B – all desktops, VoIP telephones, and printers are connected to this switch. This switch is also a layer 3 switch.
    I have connected together the Nexus 5548UP and SWITCH A (3750G) using the GLC-T= 1000BASE-T SFP transceiver module for Category 5 copper wire. The new network is shown as per diagram below:
    Below is the configuration I have created in both Switches:
    SWITCH A - 3750G
    interface Vlan40
    description ** Server VLAN **
    ip address 10.144.40.2 255.255.255.128
    ip helper-address 10.144.40.39
    ip helper-address 10.144.40.40
    interface Vlan122
    description connection to N5K-C5548UP Switch mgmt0
    ip address 172.16.0.1 255.255.255.128
    no ip redirects
    interface Port-channel1
    description UpLink to N5K-C5548UP Switch e1/1-2
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 1,30,40,100,101,122
    switchport mode trunk
    interface GigabitEthernet1/0/3
    description **Connected to server A**
    switchport access vlan 40
    no mdix auto
    spanning-tree portfast
    interface GigabitEthernet1/0/20
    description connection to N5K-C5548UP Switch mgmt0
    switchport access vlan 122
    switchport mode access
    spanning-tree portfast
    interface GigabitEthernet1/0/23
    description UpLink to N5K-C5548UP Switch e1/1
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 1,30,40,100,101,122
    switchport mode trunk
    channel-group 1 mode active
    interface GigabitEthernet1/0/24
    description UpLink to N5K-C5548UP Switch e1/2
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 1,30,40,100,101,122
    switchport mode trunk
    channel-group 1 mode active
    N5K-C5548UP Switch
    feature interface-vlan
    feature lacp
    feature dhcp
    feature lldp
    vrf context management
      ip route 0.0.0.0/0 172.16.0.1
    vlan 1
    vlan 100
    service dhcp
    ip dhcp relay
    interface Vlan1
      no shutdown
    interface Vlan40
      description ** Server VLAN **
      no shutdown
      ip address 10.144.40.3/25
      ip dhcp relay address 10.144.40.39
      ip dhcp relay address 10.144.40.40
    interface port-channel1
      description ** Trunk Link to Switch A g1/0/23-24 **
      switchport mode trunk
      switchport trunk allowed vlan 1,30,40,100-101,122
      speed 1000
    interface Ethernet1/1
      description ** Trunk Link to Switch A g1/0/23**
      switchport mode trunk
      switchport trunk allowed vlan 1,30,40,100-101,122
      speed 1000
      channel-group 1 mode active
    interface Ethernet1/2
      description ** Trunk Link to Switch A g1/0/24**
      switchport mode trunk
      switchport trunk allowed vlan 1,30,40,100-101,122
      speed 1000
      channel-group 1 mode active
    interface Ethernet1/3
      description **Connected to server B**
      switchport access vlan 40
      speed 1000
    interface mgmt0
      description connection to Switch A g2/0/20
      no ip redirects
      ip address 172.16.0.2/25
    I get a successful response from Server A when I ping the N5K-C5548UP switch's VLAN 40 interface (10.144.40.3), but if I try to ping from Server A to Server B, or vice versa, the ping fails. From the N5K-C5548UP I can successfully ping either Server A or Server B. What am I doing wrong here? Is there any additional configuration that I need to add on the Nexus switch? Please help. Thank you.

    No, no secret, aukhadiev.
    I made a mistake without realising it, and interface e1/3 was showing "Interface Ethernet1/3 is down (Inactive)". After spending some time trying to figure out what was wrong with the interface or the switch, it turned out that I had forgotten to add VLAN 40. Now the config looks like this:
    N5K-C5548UP Switch
    feature interface-vlan
    feature lacp
    feature dhcp
    feature lldp
    vrf context management
      ip route 0.0.0.0/0 172.16.0.1
    vlan 1
    vlan 40
    vlan 100
    service dhcp
    ip dhcp relay
    interface Vlan1
      no shutdown
    interface Vlan40
      description ** Server VLAN **
      no shutdown
      ip address 10.144.40.3/25
      ip dhcp relay address 10.144.40.39
      ip dhcp relay address 10.144.40.40
    interface port-channel1
      description ** Trunk Link to Switch A g1/0/23-24 **
      switchport mode trunk
      switchport trunk allowed vlan 1,30,40,100-101,122
      speed 1000
    interface Ethernet1/1
      description ** Trunk Link to Switch A g1/0/23**
      switchport mode trunk
      switchport trunk allowed vlan 1,30,40,100-101,122
      speed 1000
      channel-group 1 mode active
    interface Ethernet1/2
      description ** Trunk Link to Switch A g1/0/24**
      switchport mode trunk
      switchport trunk allowed vlan 1,30,40,100-101,122
      speed 1000
      channel-group 1 mode active
    interface Ethernet1/3
      description **Connected to server B**
      switchport access vlan 40
      speed 1000
    interface mgmt0
      description connection to Switch A g2/0/20
      no ip redirects
      ip address 172.16.0.2/25
    Thank you,
    JN

  • Nexus 7k and native vlan 1

    Hi, is it recommended to use a native VLAN other than 1 on the trunks connecting Nexus boxes? It used to be that you should not use native VLAN 1 on trunks between switches. Is this no longer an issue?
    Thanks

    Hi Chuck,
    It is recommended to use a VLAN other than VLAN 1 as your default/native VLAN.
    This is one of the best practices for securing the overall network.
    For example:
    In a switch spoofing attack, an attacking host imitates a trunking switch by speaking the tagging and trunking protocols (e.g. Multiple VLAN Registration Protocol, IEEE 802.1Q, VLAN Trunking Protocol) used in maintaining a VLAN. Traffic for multiple VLANs then becomes accessible to the attacking host.
    HTH,
    Aman
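    The usual implementation of this best practice is to park trunks on an otherwise unused VLAN, as in this sketch (VLAN 999 and the interface are examples):

    ```
    vlan 999
      name UNUSED-NATIVE
    interface ethernet 1/1
      switchport mode trunk
      switchport trunk native vlan 999   ! keep user traffic off the native VLAN
    ```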

  • Nexus 5548 and Define static route to forward traffic to Catalyst 4500

    Dear Experts,
    I need your technical assistance with static routing between a Nexus 5548 and a Catalyst 4500.
    I connected both Nexus 5548s to the Catalyst 4500 as individual trunk ports because there is HSRP on the Catalyst 4500. So I took one port from each Nexus 5548 and made it a trunk to the core switch (and likewise made each corresponding Catalyst port a trunk). I changed the speed on the Nexus to 1000 because the line card on the Catalyst 4500 side is 1G RJ45.
    Here is the config on the Nexus 5548 that makes the port a trunk:
    N5548-A / N5548-B
    interface ethernet 1/3
    switchport mode trunk
    speed 1000
    I added the static route on both Nexus switches pointing at the core HSRP IP: ip route 0.0.0.0/0 10.10.150.39 (virtual HSRP IP).
    But I am not able to ping the core switch's HSRP IP from the N5548 console. Is there any further configuration required to enable routing or ping?
    Please suggest.

    Hello,
    Please see the attached config for both Nexus 5548s. I don't have a Catalyst 4500, but below is the simple config I applied:
    Both Catalyst 4500s:
    interface gig 3/48
    switchport mode trunk
    switchport trunk encap dot1q
    On the Nexus 5548, port 1/3 is a trunk.
    Thanks,
    Jehan
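    One likely missing piece in the original config: for the static route to work, the Nexus needs a Layer 3 interface in the 10.10.150.0 subnet (and, on a 5548, the Layer 3 module and feature set). A sketch with an assumed VLAN ID and host address:

    ```
    feature interface-vlan
    vlan 150
    interface vlan 150
      no shutdown
      ip address 10.10.150.41/24       ! example address in the HSRP subnet
    ip route 0.0.0.0/0 10.10.150.39    ! HSRP virtual IP as next hop
    ```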

  • My battery died, and when I charged the phone it asked me to connect to iTunes and restore. If I restore, how can I get my data back?

    My iPhone battery died, and on charging the phone it asked me to connect to iTunes and restore the phone.
    If I restore the phone, how can I get my data back, and how can I check when I last backed up my phone?

    If you have been syncing regularly, as the iPhone is designed for, then you can sync the data back.
    Regardless, you will have to restore the iPhone.

  • Diff b/w Nexus 5548P and 5548UP

    What is the difference between the Nexus 5548P and the 5548UP?
    Regards.

    Hi,
    UP stands for Unified Ports, which allow you to configure ports as Ethernet, native Fibre Channel, or Fibre Channel over Ethernet (FCoE) ports. By default the ports are Ethernet ports, but you can change the port mode to Fibre Channel on the following unified ports:
    Any port on the Cisco Nexus 5548UP switch or the Cisco Nexus 5596UP switch.
    The ports on the Cisco N55-M16UP expansion module that is installed in a Cisco Nexus 5548P switch.
    More details:
    http://www.cisco.com/web/techdoc/dc/reference/cli/nxos/commands/l2/port.html
    Compare the 5548P and 5548UP:
    http://www.cisco.com/en/US/products/ps9670/prod_models_comparison.html
    ./Abhinav
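    For completeness, converting unified ports to native FC on a 5548UP is done per slot, as in this sketch (the port range is an example; ports are converted in a contiguous block from the high end of the slot, and a reload is required for the change to take effect):

    ```
    slot 1
      port 31-32 type fc   ! highest-numbered ports become FC after the next reload
    ```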
