Connecting Cisco UCS via Nexus to NFS over IP

We have a Cisco UCS solution connected to Nexus aggregation switches. When we try to connect one or more virtual servers or SX hosts to an NFS export over IP, the connection is not established and times out; however, when the Cisco UCS solution is connected to a pair of 6500s that act as extensions of the Nexus switches, the connection comes up without problems.

We do indeed have this problem. The UCS servers connected through the Nexus switches only establish the connection to the NFS after several attempts, and once it is up, queries become extremely slow. In our network the Nexus switches are the head-end access switches and handle Layer 3 for the data center; four 6513 switches are attached to them via vPC as Layer 2 extensions, and the UCS fabric interconnects are also connected via vPC to both Nexus switches. When the fabrics were connected to the 6500s, the connection to the NFS came up normally; we had to move the servers that need this type of connection to IBM blades that are connected to the 6500s.
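For reference (an addition, not part of the original post), a first pass on the Nexus side would be to confirm that the vPC toward the fabric interconnects is healthy and that the NFS path is consistent end to end; NFS behind a vPC commonly breaks on inconsistent port-channel parameters, a missing peer-gateway (some filers reply to the MAC of the vPC peer), or an MTU mismatch. A minimal NX-OS sketch, with the vPC domain ID and port-channel number assumed:

show vpc
show vpc consistency-parameters global
show port-channel summary
! assumed vPC domain; peer-gateway helps NAS devices that reply to the peer's router MAC
vpc domain 10
  peer-gateway
! confirm MTU on the port channel carrying NFS (assumed Po11)
show interface port-channel 11 | include MTU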

Similar Messages

  • Monitoring Cisco UCS Manager via HP System Information Manager 7.1 (SIM)

    I am working with a customer to configure HP System Information Manager 7.1 (SIM) to monitor their Cisco UCS Manager.
    The customer is looking to monitor the following:
    - CPU Utilization on manager, blades, servers, etc...
    - Memory utilization
    - Network utilization
    - System inventory
    Alerting is needed for the following:
    - Hardware failures: memory, power supply, drive, etc...
    - Predictive failures
    - Alert messages
    I have the list of all the MIBs provided by Cisco but am having the following issues while loading them into HP SIM.
    While loading MIB "CISCO-UNIFIED-COMPUTING-TC-MIB" I get the following error message:
    Line 128: Error defining object: expected a label, found reserved symbol {
    Line in MIB: SYNTAX Gauge32 {
    Gauge32 is imported from the SNMPv2-SMI MIB.
    To get past this error I found a version of the MIB that removes all the textual conventions that were causing errors. I have attached the fixed MIB file to this discussion. With the fixed version of the MIB installed in SIM, everything compiles and installs except the following two MIBs: CISCO-UNIFIED-COMPUTING-NOTIFS-MIB and CISCO-UNIFIED-COMPUTING-CONFORM-MIB.
    Questions:
    1. Is there any way to get the CISCO-UNIFIED-COMPUTING-TC-MIB MIB to install correctly into HP SIM?
    2. Is my MIB load order setup correctly?
    3. Has anyone had success getting HP SIM to monitor and alert for Cisco UCS manager?
    MIB Load Order:
    SNMPv2-SMI
    SNMPv2-TC
    SNMP-FRAMEWORK-MIB
    RFC1213-MIB
    IF-MIB
    CISCO-SMI
    CISCO-ST-TC
    ENTITY-MIB
    INET-ADDRESS-MIB
    CISCO-UNIFIED-COMPUTING-MIB
    CISCO-UNIFIED-COMPUTING-TC-MIB
    CISCO-UNIFIED-COMPUTING-FAULT-MIB
    CISCO-UNIFIED-COMPUTING-NOTIFS-MIB
    CISCO-UNIFIED-COMPUTING-AAA-MIB
    CISCO-UNIFIED-COMPUTING-ADAPTOR-MIB
    CISCO-UNIFIED-COMPUTING-BIOS-MIB
    CISCO-UNIFIED-COMPUTING-BMC-MIB
    CISCO-UNIFIED-COMPUTING-CALLHOME-MIB
    CISCO-UNIFIED-COMPUTING-CAPABILITY-MIB
    CISCO-UNIFIED-COMPUTING-COMM-MIB
    CISCO-UNIFIED-COMPUTING-COMPUTE-MIB
    CISCO-UNIFIED-COMPUTING-CONFORM-MIB
    CISCO-UNIFIED-COMPUTING-DCX-MIB
    CISCO-UNIFIED-COMPUTING-DHCP-MIB
    CISCO-UNIFIED-COMPUTING-DIAG-MIB
    CISCO-UNIFIED-COMPUTING-DPSEC-MIB
    CISCO-UNIFIED-COMPUTING-EPQOS-MIB
    CISCO-UNIFIED-COMPUTING-EQUIPMENT-MIB
    CISCO-UNIFIED-COMPUTING-ETHER-MIB
    CISCO-UNIFIED-COMPUTING-EVENT-MIB
    CISCO-UNIFIED-COMPUTING-EXTMGMT-MIB
    CISCO-UNIFIED-COMPUTING-EXTVMM-MIB
    CISCO-UNIFIED-COMPUTING-FABRIC-MIB
    CISCO-UNIFIED-COMPUTING-FC-MIB
    CISCO-UNIFIED-COMPUTING-FCPOOL-MIB
    CISCO-UNIFIED-COMPUTING-FIRMWARE-MIB
    CISCO-UNIFIED-COMPUTING-FLOWCTRL-MIB
    CISCO-UNIFIED-COMPUTING-HOSTIMG-MIB
    CISCO-UNIFIED-COMPUTING-IMGPROV-MIB
    CISCO-UNIFIED-COMPUTING-IMGSEC-MIB
    CISCO-UNIFIED-COMPUTING-IPPOOL-MIB
    CISCO-UNIFIED-COMPUTING-IQNPOOL-MIB
    CISCO-UNIFIED-COMPUTING-ISCSI-MIB
    CISCO-UNIFIED-COMPUTING-LICENSE-MIB
    CISCO-UNIFIED-COMPUTING-LLDP-MIB
    CISCO-UNIFIED-COMPUTING-LSBOOT-MIB
    CISCO-UNIFIED-COMPUTING-LSMAINT-MIB
    CISCO-UNIFIED-COMPUTING-LS-MIB
    CISCO-UNIFIED-COMPUTING-MACPOOL-MIB
    CISCO-UNIFIED-COMPUTING-MAPPINGS-MIB
    CISCO-UNIFIED-COMPUTING-MEMORY-MIB
    CISCO-UNIFIED-COMPUTING-MGMT-MIB
    CISCO-UNIFIED-COMPUTING-NETWORK-MIB
    CISCO-UNIFIED-COMPUTING-NWCTRL-MIB
    CISCO-UNIFIED-COMPUTING-ORG-MIB
    CISCO-UNIFIED-COMPUTING-OS-MIB
    CISCO-UNIFIED-COMPUTING-PCI-MIB
    CISCO-UNIFIED-COMPUTING-PKI-MIB
    CISCO-UNIFIED-COMPUTING-PORT-MIB
    CISCO-UNIFIED-COMPUTING-POWER-MIB
    CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB
    CISCO-UNIFIED-COMPUTING-PROC-MIB
    CISCO-UNIFIED-COMPUTING-QOSCLASS-MIB
    CISCO-UNIFIED-COMPUTING-SOL-MIB
    CISCO-UNIFIED-COMPUTING-STATS-MIB
    CISCO-UNIFIED-COMPUTING-STORAGE-MIB
    CISCO-UNIFIED-COMPUTING-SW-MIB
    CISCO-UNIFIED-COMPUTING-SYSDEBUG-MIB
    CISCO-UNIFIED-COMPUTING-SYSFILE-MIB
    CISCO-UNIFIED-COMPUTING-TOP-MIB
    CISCO-UNIFIED-COMPUTING-TRIG-MIB
    CISCO-UNIFIED-COMPUTING-UUIDPOOL-MIB
    CISCO-UNIFIED-COMPUTING-VM-MIB
    CISCO-UNIFIED-COMPUTING-VNIC-MIB
    References:
    ftp://ftp.cisco.com/pub/mibs/supportlists/ucs/ucs-manager-supportlist.html#_Toc303691433
    http://www.hp.com/wwsolutions/misc/hpsim-helpfiles/simsnmp.pdf
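    As a side note (not from the original post), the UCS MIBs can be sanity-checked with net-snmp before importing them into HP SIM; the local directory name below is an assumption:
    # compile the textual-convention MIB and print its tree; parse errors surface here first
    snmptranslate -M +./ucs-mibs -m CISCO-UNIFIED-COMPUTING-TC-MIB -Tp | head
    # repeat for the top-level MIB once the TC MIB loads cleanly
    snmptranslate -M +./ucs-mibs -m CISCO-UNIFIED-COMPUTING-MIB -Tp > /dev/null && echo "parsed OK"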


  • ISCSI- Adding iSCSI SRs to XS booting via iSCSI [Cisco UCS]

    Hello all-
    I am currently running XS 6.0.2 with hotfixes XS602E001 and XS602E002 applied, running on Cisco UCS B200-M3 blades. XS seems to be functioning OK, except that I cannot see the subnet assigned to my SAN, and as a result I cannot add any SR for VM storage.
    The subnet and vNIC that i am configuring is also used for the iSCSI boot.
    The vNIC is on LAN 2 and is set to Native
    The SAN is a LeftHand P4300 connected directly to appliance ports (LAN2) on the Fabrics
    Used the following commands during installation to get the installer to see the LUN.
    echo "InitiatorName=my.lun.x.x.x" > /etc/iscsi/initiatorname.iscsi
    /opt/xensource/installer/init --use_ibft
    If i missed anything or more info is needed, please let me know.

    Thanks Padramas,
    I have 2 NICs that show up in XenCenter. NIC A is the vNIC associated with VLAN 1 and provides the uplink to the rest of the network. The second NIC in XenCenter matches the MAC of the vNIC used to boot XenServer, so the NIC used for booting the OS is visible in XenCenter.
    I have tried many different things on my own to resolve this issue. I believe I have already tried adding a 3rd vNIC to the service profile for VM iSCSI traffic, but I will make another attempt.
    When configuring the vNIC for VM iSCSI traffic, do I only need to add the 3rd vNIC, or do I need to create both a vNIC and a second iSCSI vNIC with its overlay vNIC pointing to the newly created (3rd) vNIC? My understanding is that the iSCSI vNICs are only used for booting, but I am not 100% sure.
    Thanks again for the help!
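    For completeness (an addition, not from the thread), once the SAN subnet is reachable on the new vNIC the SR can be attached from the XenServer CLI roughly as follows; the target IP, IQN, and SCSI ID below are placeholders:
    # confirm the LeftHand target answers on the storage VLAN
    iscsiadm -m discovery -t sendtargets -p 192.0.2.50
    # probe for available IQNs/LUNs, then create the shared iSCSI SR
    xe sr-probe type=lvmoiscsi device-config:target=192.0.2.50
    xe sr-create name-label="VM-iSCSI-SR" shared=true type=lvmoiscsi \
      device-config:target=192.0.2.50 \
      device-config:targetIQN=iqn.2003-10.com.lefthandnetworks:example \
      device-config:SCSIid=<scsi-id>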

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how the integration between the N1K and UCS Manager works:
    First question:
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs reside). I also created VLAN 100 (the same as in the N1K).
    Second question: With the configuration above, if I include only VLAN 100 (not as native VLAN) in the vNIC profile, the two VMs cannot ping each other. Instead, if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
    I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    - Yes. Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network. These would be your 'type ethernet' port-profiles. The upstream network would need to bridge the VLAN between the two physical NICs.
    Second question: With the configuration above, if I include only VLAN 100 (not as native VLAN) in the vNIC profile, the two VMs cannot ping each other. Instead, if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    - The N1K port profiles are switchport access, making them untagged. This would be the native VLAN in UCS. If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
    Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
    - All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively, or untagged. In the N1K, you will want to leave your vEthernet port profiles as 'switchport mode access'. For your Ethernet profiles, you will want 'switchport mode trunk'. Use an unused VLAN as the native VLAN. All production VLANs will be passed from the N1K to UCS as tagged VLANs.
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk
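    To make the reply above concrete, a minimal N1K port-profile pair consistent with Dan's recommendations might look like this (the VLAN numbers and profile names are assumptions, not taken from the thread):
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100
      switchport trunk native vlan 999
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled
    port-profile type vethernet VM-VLAN-100
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled
    The matching UCS vNIC would then carry VLAN 100 as a tagged VLAN; because the vEthernet ports are access (untagged) ports, traffic only flows when the uplink tagging and the UCS native-VLAN setting line up, which is the behavior described in the second question.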

  • FCoE options for Cisco UCS and Compellent SAN

    Hi,
    We have a Dell Compellent SAN storage with iSCSI and FCoE module in pre-production environment.
    It is connected to new Cisco UCS infrastructure (5108 chassis with 2208 IOMs + B200 M2 blades + 6248 Fabric Interconnects) via the 10G iSCSI module (the FCoE module isn't being used at this moment).
    I reviewed the interconnect compatibility matrix, but the Compellent (Dell) SAN is only supported on FI NX-OS 1.3(1) and 1.4(1) without using the 6248 and 2208 IOM, which is what we have. I'm sure some of you have a similar hardware configuration to ours, and I'd like to see if there is any supported Cisco FC/FCoE deployment option for the Compellent. We're pretty tight on budget at the moment, so purchasing a couple of Nexus 5K switches or something equivalent for such a small number of chassis (we only have one) is not a preferred option. If additional hardware acquisition is inevitable, what would be the most cost-effective solution to support an FCoE implementation?
    Thank you in advance for your help on this.

    Unfortunately there isn't really one - with direct-attach storage there is still the requirement that an upstream MDS/N5K pushes the zoning to it. Without an MDS to push the zoning, the setup isn't recommended for production.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101.html#concept_05717B723C2746E1A9F6AB3A3FFA2C72
    Even if you had a MDS/N5K the 6248/2208's wouldn't support the Compellent SAN - see note 9.
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.pdf
    That's not to say that it won't work, it's just that we haven't tested it and don't know what it will do and thus TAC cannot troubleshoot SAN errors on the UCS.
    On the plus side, iSCSI, if set up correctly, can be very solid and can give you a great amount of throughput - just make sure to configure the QoS correctly, and if you need more throughput then just add some additional links.
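    As an aside (not part of the original reply), before tuning QoS it is worth proving that the iSCSI path really carries jumbo frames end to end; the target address below is a placeholder:
    # from an ESXi host: 8972-byte payload + headers = 9000-byte frame, do-not-fragment set
    vmkping -d -s 8972 192.0.2.60
    # the equivalent from a Linux host
    ping -M do -s 8972 192.0.2.60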

  • IPv6 Support on Cisco UCS 6248UP Device

    Hi,
    We are in the process of implementing IPv6 in our production environment and wanted to know what needs to be done at the UCS level to achieve this.
    We have Cisco 4948 and Nexus 3048 switches in our production environment above the 6248 switches.
    Thanks,
    Amit Vyas           

    Starting with 2.2(1b) (El Capitan), IPv6 is supported (don't forget, the Fabric Interconnect is an L2 switch; there is no L3 routing).
    IPv6 Management Support:
    All 3 IP addresses (2 physical and 1 cluster) can now have IPv6 addresses, as can the new CIMC "in-band" addresses. Services such as NTP and DNS are also reachable via IPv6.

  • Cisco UCS - Syslog/Trap vs. Event Subscription

    Cisco UCS allows clients to subscribe to XML-based events that can be used to receive configuration change information over HTTP/HTTPS. Is the same or similar information available via syslog messages or SNMP traps from UCS? If not, what information is available via syslog messages or trap notifications? Also, is the syslog or SNMP trap functionality supported in the Cisco UCSM simulators?

    You can set SNMP traps to monitor fans, power supplies, and temperature sensors, or to test a Call Home application. Use the following commands to generate test SNMP traps:
    test pfm snmp test-trap fan
    test pfm snmp test-trap powersupply
    test pfm snmp test-trap temp-sensor
    SNMP Traps and Informs:
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/mib/reference/UCS_MIBRef.html#wp75887
    Set up Syslog for Cisco UCS:
    http://www.cisco.com/en/US/products/ps10281/products_configuration_example09186a0080ae0f24.shtml
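    Regarding the XML event subscription mentioned in the question, a rough sketch of consuming UCSM configuration-change events over the XML API could look like this (hostname and credentials are placeholders; eventSubscribe holds the HTTP connection open and streams events back):
    # log in and note outCookie in the response
    curl -k -d '<aaaLogin inName="admin" inPassword="password"/>' https://ucsm.example.com/nuova
    # subscribe to the event channel with the returned cookie; leave the connection open
    curl -k -N -d '<eventSubscribe cookie="COOKIE_FROM_LOGIN"/>' https://ucsm.example.com/nuova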

  • Cisco UCS 6200 Fabric InterConnects Needed?

    Currently looking to put Cisco UCS 5100 Series blade chassis into our datacenter. Our core currently has a 7K with F2 48-port blades with FCoE licenses. We currently have our ESXi servers terminating directly on the 7K for both 10G Ethernet and storage via FCoE. If we decide to replace our standalone rackmount servers with UCS blades, can we connect the blade chassis directly to the 7K, or are the 6200 fabric interconnects required? If so, what purpose do they serve? Thanks.

    Yes, a fabric interconnect is required for a UCS blade solution; it can aggregate a maximum of 20 chassis of 8 blades each, for a total of 160 blades.
    UCS Manager runs on the fabric interconnect as an application, which allows you to do firmware management for the server farm (BIOS, I/O adapter, CIMC, ...); the concept of service profiles implements server abstraction and mobility across hardware.

  • Windows 2008 R2 on Cisco UCS B200M networking problems

    This is driving me completely nuts.  Let me start by saying I am new to blade servers and Cisco UCS.  I did take an introduction class, and it seemed straight-forward enough.  I have a chassis with two B200M blades, on which I am trying to configure two Windows 2008 R2 servers, which I will eventually make Hyper-V servers.  This is all in a test environment, so I can do anything I want to on them.
    Right now I have installed W2008 directly on hard disks on the B200M hardware.
    The problem is this: even though I think I've configured the network hardware correctly, using the Cisco VIC driver software, I cannot get networking to work in any reliable way. I cannot even get ping to work consistently. I can ping my local server address, but I cannot ping my gateway (HSRP address). When I try, I get "Reply from 10.100.1.x: Destination host unreachable" (x being each particular server's last octet). I CAN, however, ping the individual IP addresses of the core switches. I can also ping some, but not all, of the other devices that share the servers' subnet. There are no errors being generated, the ARP tables (for those devices I can ping) look good, and netstat looks OK. But I cannot get outside the local subnet...
    Except when I can.
    There are times when I can get all the way out to the Internet, and I can download patches from Microsoft.  When it works, it works as expected.  But if I reboot the server, oftentimes networking stops working.  Yet another reboot can get things going again.  This happens even though I've made no changes to either the UCS configs or the OS.
    I cannot figure out any reason why it works at some times and not at others. I've made sure I have a native VLAN set, and I've tried pinning to specific ports on the Fabric Interconnects. There is just no rhyme or reason to it.
    Anyone know of where I can look?  I'm very familiar with Windows on stand-alone boxes (although it's no longer my area of expertise), and I manage a global WAN (BGP, OSPF, Nexus 7k, etc.) so I'm no dummy when it comes to networking, but I am utterly stumped on this one.        

    The problem was this: while the NICs on the blade server are called vNIC0 and vNIC1, Windows was calling vNIC1 "Local Area Connection" and vNIC0 "Local Area Connection 2". So what I configured in UCS did not match what I was configuring in Windows. Completely, utterly ridiculous.
    Anyway, networking is working now without any issues. Thanks for your suggestion; it did get me looking in the right direction.
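    For anyone hitting the same mismatch, a quick way to map UCS vNICs to Windows adapter names is to compare MAC addresses with standard Windows tools (a sketch, nothing UCS-specific):
    rem list every connection name with its MAC address
    getmac /v /fo list
    rem or, with IP details included
    ipconfig /all
    rem match these MACs against the vNIC MAC addresses shown in the UCSM service profile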

  • Ask the Expert: Cisco UCS B-Series Latest Version New Features

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco UCS Manager 2.2(1) release, which delivers several important features and major enhancements in the fabric, compute, and operational areas. Some of these features include fabric scaling, VLANs, VIFs, IGMP groups, network endpoints, unidirectional link detection (UDLD) support, support for virtual machine queue (VMQ), direct connect C-Series to FI without FEX, direct KVM access, and several other features.
    Teclus Dsouza is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has over 15 years of total IT experience. He has worked across different technologies and a wide range of data center products. He is an expert in Cisco Nexus 1000V and Cisco UCS products. He has more than 6 years of experience on VMware virtualization products.  
    Chetan Parik is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has seven years of total experience. He has worked on a wide range of Cisco data center products such as Cisco UCS and Cisco Nexus 1000V. He also has five years of experience on VMware virtualization products.
    Remember to use the rating system to let Teclus and Chetan know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through May 9, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Jackson,
    Yes, it is possible. Connect the storage array to the fabric interconnects using two 10 Gb links per storage processor. Connect each SP to both fabric interconnects and configure the ports on the fabric interconnects as "Appliance" ports from UCSM.
    For more information on how to connect NetApp storage using other protocols like iSCSI or FCoE, please check the URL below.
    http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6100-series-fabric-interconnects/whitepaper_c11-702584.html
    Regards
    Teclus Dsouza

  • Cisco UCS with FEX system Component requirements

    Hi ,
    I am looking for a quick suggestion on the components required to implement VMware vSAN using a FEX system with maximum fabric throughput - ideally a configuration without multipath using a single fabric switch, with the option to upgrade to a multipath fabric later.
    We are in a great rush to deliver a proof of concept, and I was hoping you folks could advise me on this. If needed, I am available to discuss in more detail.
    Appreciate your help here.
    Whom can I reach to build and quote the configurations for implementing a complete UCS solution?
    Are the components below absolutely required?
    2 x Nexus 2232PP 10GE Fabric Extenders
    The Cisco Nexus 2232PP 10GE provides 32 x 10 Gb lossless, low-latency Ethernet and Fibre Channel over Ethernet (FCoE) Small Form-Factor Pluggable Plus (SFP+) server ports and eight 10 Gb Ethernet and FCoE SFP+ uplink ports in a compact 1-rack-unit (1RU) form factor. They enable seamless inclusion of UCS rack servers into a UCS Manager domain with UCS blade servers when connected to the UCS 6200 Fabric Interconnects, providing converged LAN, SAN, and management connectivity.
    We currently have these below servers :
    2 x  UCSC-BASE-M2-C460
            524 GB RAM
            No SSDs
            No VIC Card
            4 x SAS 1 TB Drives
            1 x L1 Intel 1 Gbps Ethernet Adapter
            1 x L2 10Gbps Ethernet Adapter
            1 x LSI Mega RAID SAS 9240-8i (no RAID 5 support, Need a card that supports RAID5)
    1 x  UCSC-C240-M3L
            132 GB RAM
            30 TB SAS HDD
            1 x VIC Card
            1 x Intel I350 Gigabit Ethernet Card with 4 ports
            1 x LSI Mega RAID SAS 9240-8i (no RAID 5 support, Need a card that supports RAID5)
    1 x 5548UP Nexus Switch (will I be able to use this switch in place of the Nexus 2232PP 10GE Fabric Extenders to achieve a complete UCS solution?)

    Cisco UCS Manager 2.2 supports an option to connect the C-Series Rack-Mount Server directly to the Fabric Interconnects. You do not need the Fabric Extenders. This option enables Cisco UCS Manager to manage the C-Series Rack-Mount Servers using a single cable for both management traffic and data traffic.
    If you need high performance low latency (storage, for VMware VSAN), direct connection to UCS Fabric Interconnect and/or N5k is recommended.

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Server without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. Most common TAC cases that we have seen on Boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify configuration and troubleshoot from server to storage switches to storage array.
    Before diving into troubleshooting, make sure there is clear understanding of this topology. This is very vital with any troubleshooting scenario. Know what devices you have and how they are connected, how many paths are connected, Switch/NPV mode and so on.
    Always try to troubleshoot one path at a time and verify that the setup is compliant with the SW/HW interoperability matrix tested by Cisco.
    Step 1: Check at server
    a. make sure to have uniform firmware version across all components of UCS
    b. Verify if VSAN is created and FC uplinks are configured correctly. VSANs/FCoE-vlan should be unique per fabric
    c. Verify at service profile level for configuration of vHBAs - vHBA per Fabric should have unique VSAN number
    Note down the WWPN of your vhba. This will be needed in step 2 for zoning on the SAN switch and step 3 for LUN masking on the storage array.
    d. verify if Boot Policy of the service profile is configured to Boot From SAN - the Boot Order and its parameters such as Lun ID and WWN are extremely important
    e. finally, at the UCS CLI, verify the FLOGI of the vHBAs (for NPV mode, the command from the NX-OS shell is: show npv flogi-table)
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. Most important configuration to check on Storage Switch is the zoning
    Zoning is basically access control for our initiator to  targets. Most common design is to configure one zone per initiator and target.
    Zoning will require you to configure a zone, put that zone into your current zoneset, and then ACTIVATE it. (command - show zoneset active vsan X; see the configuration sketch at the end of this reply)
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is crucial step on Storage Array which gives particular host (server) access to specific LUN
    Assuming that both the storage and initiator have FLOGI’d into the fabric and the zoning is correct (as per Step 1 & 2)
    Following needs to be verified at Storage Array level
    a. Are the wwpn of the initiators (vhba of the hosts) visible on the storage array?
    b. If above is yes then Is LUN Masking applied?
    c. What LUN number is presented to the host - this is the number that we see in Lun ID on the 'Boot Order' of Step 1
    Below document has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 
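    As an illustration of the zoning check in Step 2e (the zone name, zoneset name, VSAN, and WWPNs below are hypothetical), a single-initiator/single-target zone on an MDS or Nexus 5000 is built and activated like this:
    configure terminal
    zone name ESX01_vHBA-A_to_ARRAY vsan 10
      member pwwn 20:00:00:25:b5:0a:00:01
      member pwwn 50:06:01:60:3b:a0:11:22
    zoneset name FABRIC-A-ZS vsan 10
      member ESX01_vHBA-A_to_ARRAY
    zoneset activate name FABRIC-A-ZS vsan 10
    end
    show zoneset active vsan 10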

  • Design question, UCS and Nexus 5k - FCP

    Hi,
    I need some advice (mainly from a Nexus person); I have drawn and attached the proposed solution (below).
    I am designing a solution with 3 UCS chassis, Nexus 5K and 2X NetApp 3240 (T1 and T2). FC will be used to access disk on the SAN. Also, Non UCS compute will need access to the T2 SAN only. (UCS will access T1 and T2). It is a requirement for this solution that non UCS devices do not connect to the same Nexus switches that the UCS chassis use.
    UCS Compute:
    The 3 chassis will connect to 2 x 6296 FIs, which will cross-connect to the 2 x Nexus 5Ks through an FC port channel; the Nexus 5Ks will be configured in NPIV mode to provide access to the SAN. FC from each Nexus 5K to the NetApp controllers will be provided through a total of 4 FC port channels (2 FC member ports per port channel) from each Nexus 5K, one going to controller A and the other to controller B.
    Non UCS compute:
    These will connect directly through their HBAs to their own Nexus 5Ks and then to the T2 SAN, these will be zoned to never have access to the T1 SAN.
    Questions:
    1-      As the UCS compute will need to have access to the T1, what is the best way to connect the Nexus 5Ks on the LHS in the image below to the Nexus on the RHS (This should be FC connection).
    2-      Can fiber channel be configured in a vPC domain as Ethernet? Is this a better way for this solution?
    3-      Is FC better than FCoE for this solution? I hear FCoE is still not highly recommended.
    4- Each NetApp controller is only capable of pushing 20 Gbps max, which is why each port channel connecting to each controller has only 2 members. However, I'm connecting 4 port-channel members from each Fabric Interconnect (6296) to each Nexus switch. Is this a waste? Remember that connectivity from each FI is also required to the T2 SAN.

    Max,
    What you are implementing is a traditional FlexPod design with slight variations.
    I recommend looking at the FlexPod design zone for some additional material if you have not done so yet.
    http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns1050/landing_flexpod.html
    To answer your questions:
    1) FC and FCoE do not support vPC. If UCS needs to have access to T1, then there is no need to have ISL between sites. If UCS needs to have access to T1 and T2, then best option would be to set up VSAN trunking on N5K and UCS and configure vHBAs accordingly.
    2) Both should work just fine. If you go with FCoE, then UCS would need to be on the latest version for multi-hop FCoE support.
    3) If you are only worried about storage throughput then yes, you will never utilize a 40 Gb port channel if your source is a 20 Gb port channel. What are your projected peak and average loads on this network?
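    For reference, the VSAN-trunking option described in point 1 would look roughly like this on each Nexus 5K facing the fabric interconnects (the VSAN numbers, FC interfaces, and port-channel ID are assumptions); the UCS side would enable trunking on the matching FC uplink port channel in UCSM:
    feature npiv
    feature fport-channel-trunk
    interface san-port-channel 10
      channel mode active
      switchport trunk mode on
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20
    interface fc2/1
      channel-group 10 force
      no shutdown
    interface fc2/2
      channel-group 10 force
      no shutdown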

  • Cisco UCS Requirements

    Does UCS (servers and storage) have to have Nexus infrastructure, or can it also work with Cisco Catalyst 6509 series switches?
    If I understand correctly, Nexus is the recommended switch for optimal performance, but UCS should also work with Cisco Catalyst infrastructure.
    Thanks

    Both switches will work fine. If you already have Catalyst, then you can use that for IP connectivity with UCS.
    The Nexus platform offers higher speeds and more connectivity options, reducing the number of devices you have in the rack.

  • Cisco UCS network uplink on aggregation layer

    Hello Cisco Community,
    We are using our UCS (version 2.2(1d)) for ESX hosts. Each host has 3 vNICs as follows:
    vnic 0 = VLAN 10 --> Fabric A, Failover Fabric B
    vnic 1 = VLAN 20 --> Fabric B, Failover Fabric A
    vnic 2 = VLAN 100 --> Fabric A, Failover Fabric B
    Currently the UCS is connected to the access layer (Catalyst 6509) and we are migrating to Nexus (vPC). As you know, Cisco UCS Fabric Interconnects can handle Layer 2 traffic themselves, so we are planning to connect our UCS Fabric Interconnects directly to our new L3 Nexus switches.
    Has anyone connected UCS directly to L3? Do we have to pay attention to anything? Are there any recommendations?
    thanks in advance
    best regards
    /Danny

    We are using ESXi 5.5 with a dvSwitch (distributed vSwitch). In our Cisco UCS power workshop we discussed the pros and cons of hardware and software failover, and we committed to using hardware failover. It is very fast and we currently have no problems.
    This is a never-ending and misunderstood story: your design should provide load balancing AND failover. Hardware failover only gives you the latter. In your design, you just use one fabric per VLAN - what a waste!
    And think about a failover on an ESXi host with 200 VMs; at least 200 GARP messages have to be sent out, and this is load on the FI CPU. Most likely dozens or more ESXi servers are impacted...
    Cisco best practice: if you use a soft switch, let it do the load balancing and failover; don't use hardware failover.
    See the attachment (the paper is not up to date):
    For ESX Server running vSwitch/DVS/Nexus 1000v and using Cisco UCS Manager Version 1.3 and below, it is recommended that fabric failover not be enabled, as that will require a chatty server for predictable failover. Instead, create regular vNICs and let the soft switch send
    gARPs for VMs. vNICs should be assigned in pairs (Fabric A and B) so that both fabrics are utilized.
    Cisco UCS version 1.4 has introduced the Fabric Sync feature, which enhances the fabric failover functionality for hypervisors as gARPs for VMs are sent out by the standby FI on failover. It does not necessarily reduce the number of vNICs as load sharing among the fabric is highly recommended. Also recommended is to keep the vNICs with fabric failover disabled, avoiding the use of the Fabric Sync feature in 1.4 for ESX based soft switches for quicker failover.
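    To illustrate the recommendation with a standard vSwitch (with a DVS the equivalent teaming policy is set in vCenter), letting the soft switch balance and fail over across a fabric-A and a fabric-B vNIC could look like this; the vSwitch and vmnic names are assumptions:
    # both uplinks active (one pinned to Fabric A, one to Fabric B), originating-port-ID load balancing,
    # with fabric failover left disabled on the UCS vNICs
    esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic0,vmnic1 -l portid
    # verify the resulting teaming policy
    esxcli network vswitch standard policy failover get -v vSwitch0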
