Cisco UCS 6248 and Dell EQ 6510x

We have a Dell EQ 6510x in production, and we are about to power on a Cisco UCS B-Series chassis with a pair of 6248UP FIs. I'm planning to connect the four 6510x 10G ports directly to the 6248 FIs and configure them as appliance ports.
My question is: would it be better to connect the 6510x primary controller's two 10G ports directly to FI-A and the standby controller's two 10G ports directly to FI-B, or to stagger them? By stagger I mean: primary controller, one 10G port to FI-A and one to FI-B; standby controller, one 10G port to FI-A and one to FI-B. Thanks!

Hello there,
In your design, keep in mind that both UCS FIs are separate switches; they always talk to each other via the upstream switch. The best practice is to connect anything that has to talk to the UCS domain to the upstream switch.
If that is not an option, you can connect through the FIs using appliance ports. In your case I believe the staggered connection is the better option, but I'm not certain; it depends on how your vNICs are configured and whether your Dell EQ can do port-channels in active/standby mode.
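If you do end up using appliance ports, here is a rough sketch of creating one from the UCSM CLI (the fabric and port numbers are examples only; verify the exact syntax against the UCS Manager CLI configuration guide for your release):

    UCS-A# scope eth-storage
    UCS-A /eth-storage # scope fabric a
    UCS-A /eth-storage/fabric # create interface 1 20
    UCS-A /eth-storage/fabric/interface # commit-buffer

The same can be done in the UCSM GUI under LAN > Appliances; remember to repeat it on fabric B, and to define the appliance VLAN under the Appliances tab as well.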

Similar Messages

  • Cisco UCS Blades and vSphere DPM

    I followed this guide:
    https://supportforums.cisco.com/docs/DOC-8582
    And it worked, but I have 2 problems.
    1) The blade being put into Standby starts back up immediately - almost like a reboot -
    and doesn't stay in Standby mode
    2) The Blade being put into Standby mode has Faults on all vFabric's etc because it is
    "off"
    Any suggestions?
    Jim

    Hi Rob and Jim,
    How did you guys progress with this one?  I am having the same issue and would be interested to know the solution.
    My environment is vSphere 5.0 with B230 M2 and DPM is not yet enabled.  UCS Manager and CIMC are running 2.0(1s).
    On testing for Standby from the vSphere client, the blade reboots automatically (no power down).  A shutdown command gives the same result (which is a reboot) with a host connection failure alert.
    Any help is appreciated.
    Thanks,
    Noli

  • Cisco UCS components and Heartbleed bug

    I was reading about Cisco products affected by the Heartbleed vulnerability in the following Cisco security advisory:
    http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20140409-heartbleed#@ID
    I couldn't find whether the products/components below are affected by this vulnerability. Can someone confirm whether these products/components are vulnerable to Heartbleed?
    Cisco UCS Manager
    Cisco Integrated Management Controller (CIMC)
    Cisco UCS Blade Chassis
    tia

    I agree the phrasing is a bit off; note that the notice is talking about _products_ affected (or not), not particular _components_ of a product.
    UCS seems to be off the hook. Not affected are: 
    Cisco UCS B-Series (Blade) Servers
    Cisco UCS C-Series (Stand alone Rack) Servers
    Cisco UCS Central
    Cisco UCS Fabric Interconnects
    Cisco UCS Invicta Series Solid State Systems
    CIMC and UCSM would be part of the FI or the B- or C-Series servers, etc.

  • Cisco UCS Vmware and HDS

    We are about to build a new data center with a Hitachi VSP disk array, Cisco UCS, and VMware. I have some information from the Hitachi side, but it really says nothing about the UCS. Is there a "best practices" or "configuration guide" for the UCS side of this?

    see
    http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_hds.html
    Cisco Solution for Hitachi Unified Compute Platform Select with VMware vSphere

  • Connectivity between UCS 6249 - UCS 6248 Switches and 10GB iscsi on EMC Storage - VNXe 3200

    Hi
    I am trying to verify VNXe 3200 connectivity with SFP+ to Cisco UCS 6248 Fabric Interconnects.
    Will the 10G iSCSI fiber/copper connect directly to the UCS 6248, and what type of cables are required?
    Will SFP and SFP+ connect to each other, or will that cause issues?
    I tried searching on the EMC side but am having difficulty finding a document showing the compatibility, so I wanted to check from the Cisco side as well.
    I know this is the Guide to refer to
    http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6200-series-fabric-interconnects/data_sheet_c78-675245.pdf
    but please help me finding answers
    Thanks

    Hi Sami
    https://community.emc.com/message/841141#841141
    has the answer
    EMC does not support Passive Twinaxial cables in an iSCSI or FCOE environment.
    EMC only supports Active Twinaxial cables in an iSCSI or FCOE environment.
    We currently support EMC, Brocade and Cisco cables.
    SFP-H10GB-ACU7M and SFP-H10GB-ACU10M are active twinax cables
    see http://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/10GE_Tx_Matrix.html
    https://community.emc.com/message/837419 might also be useful.
    Walter.

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how works the integration between N1K and UCS Manager:
    First question:
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (VLAN 100), and a vEthernet port-profile for the VMs in switchport mode access (VLAN 100) as well. On the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs run), and I also created VLAN 100 (the same as on the N1K).
    Second question: with the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I believe it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: how does VLAN tagging work on the Fabric Interconnect and on the N1K?
    I tried reading different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    - Yes.  Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network.  These would be your 'type ethernet' port-profiles.  The upstream network would need to bridge the VLAN between the two physical NICs.
    Second question: with the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (VLAN 1) as the native VLAN, everything works fine. Why?
    -  The N1K port profiles are switchport access, making them untagged.  This corresponds to the native VLAN in UCS.  If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
    Third question: how does VLAN tagging work on the Fabric Interconnect and on the N1K?
    -  All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively, i.e. untagged.  In N1K, leave your vEthernet port profiles as 'switchport mode access'.  Your Ethernet profiles should be 'switchport mode trunk'.  Use an unused VLAN as the native VLAN.  All production VLANs will then be passed from N1K to UCS as tagged VLANs.
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk
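    A minimal sketch of the N1K port profiles described above (profile names and VLAN numbers are examples only; adjust them to your environment):

        port-profile type ethernet SYSTEM-UPLINK
          switchport mode trunk
          switchport trunk allowed vlan 100
          switchport trunk native vlan 999
          channel-group auto mode on mac-pinning
          no shutdown
          state enabled
        !
        port-profile type vethernet VM-VLAN100
          switchport mode access
          switchport access vlan 100
          no shutdown
          state enabled

    Here VLAN 999 stands in for an otherwise unused VLAN serving as the native VLAN, so that all production VLANs are carried tagged.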

  • Port-Channel issue between UCS FI and MDS 9222i switch

    Hi
    I have a problem with the port-channel between the UCS FI and the MDS switch. When MDS-A is powered down, the port-channel fails, but the UCS blade vHBA does not detect the failure of the port-channel on the UCS FI and leaves the vHBA online. However, if there is no port-channel between the FI and the MDS, it works fine.
    UCS version   
    System version: 2.0(2q)
    FI - Cisco UCS 6248 Series Fabric Interconnect ("O2 32X10GE/Modular Universal Platform Supervisor")
    Software
      BIOS:      version 3.5.0
      loader:    version N/A
      kickstart: version 5.0(3)N2(2.02q)
      system:    version 5.0(3)N2(2.02q)
      power-seq: Module 1: version v1.0
                 Module 3: version v2.0
      uC:        version v1.2.0.1
      SFP uC:    Module 1: v1.0.0.0
    MDS 9222i
    Software
      BIOS:      version 1.0.19
      loader:    version N/A
      kickstart: version 5.0(8)
      system:    version 5.0(8)
    Here is the config from MDS switch
    Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                      Mode   Trunk                          Mode  Speed  Channel
                             Mode                                 (Gbps)
    fc1/1      103    auto   on      trunking         swl    TF      4    10
    fc1/2      103    auto   on      trunking         swl    TF      4    10
    fc1/9      103    auto   on      trunking         swl    TF      4    10
    fc1/10     103    auto   on      trunking         swl    TF      4    10
    This is from FI.
    Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                      Mode   Trunk                          Mode  Speed  Channel
                             Mode                                 (Gbps)
    fc1/29     103    NP     on      trunking         swl    TNP     4    103
    fc1/30     103    NP     on      trunking         swl    TNP     4    103
    fc1/31     103    NP     on      trunking         swl    TNP     4    103
    fc1/32     103    NP     on      trunking         swl    TNP     4    103
    Any thoughts on this?

    Sultan,
    This is a recently found issue and is fixed in UCSM version 2.0.3a.
    http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCua88227
    which was duplicated to CSCtz21585.
    It happens only when the following conditions are met:
    FI in end-host mode
    FC uplinks configured for port-channel + trunking
    Certain link-event failures (such as abrupt power loss of the upstream MDS switch)
    Padma

  • Oracle 11g - Solaris 10 on cisco UCS server

    I would like to know if anyone has experience with installing Oracle databases on cisco UCS servers.  I recently took over a dba shop and my client has purchased a cisco UCS server and is planning on migrating some databases currently on dedicated servers running on Solaris 10 and others running on Linux RH 5.4 platforms.  I need to find out if Oracle 11g and Solaris 10 is compatible and covered by Oracle licenses.  Does anyone have any specs and/or information on this topic?  Thanks in advance.
    Jonathan Begazo-Leon

    Jonathan,
    there's a dedicated Oracle database forum (Database) where you can post your issue. This forum here only covers 3rd-party-to-Oracle migrations using the Oracle migration tools (Migration Workbench or SQL Developer). As I can't move your thread, please close this one and post your issue again in the database forum.
    Thanks,
    Klaus

  • Cisco UCS Director UCS Manager/Central Comparison

    Hello Community,
    Can someone please tell me what are the fundamental differences between Cisco UCS Director and UCS Manager/Central?
    Cheers
    Carlton

    Carlton,
    I am pretty sure this post series (3 in total) from Jeremy Waldrop's blog (Jeremy usually visits this community too) will help explain UCS Director:  http://jeremywaldrop.wordpress.com/2014/04/01/cisco-ucs-director-part-1/
    Besides what you can see in the above link... UCS Manager is typically used for a single domain, while UCS Central integrates more than one domain into a single central point...
    HTH,
    -Kenny

  • FCoE options for Cisco UCS and Compellent SAN

    Hi,
    We have a Dell Compellent SAN storage with iSCSI and FCoE module in pre-production environment.
    It is connected to new Cisco UCS infrastructure (5108 chassis with 2208 IOMs + B200 M2 blades + 6248 Fabric Interconnects) via the 10G iSCSI module (the FCoE module isn't being used at this moment).
    I reviewed the compatibility matrix for the interconnect, but Compellent (Dell) SAN is only supported on FI NX-OS 1.3(1) and 1.4(1) without the 6248 and 2208 IOM, which is what we have. I'm sure some of you have a similar hardware configuration, and I'd like to know whether there is any supported Cisco FC/FCoE deployment option for the Compellent. We're pretty tight on budget at the moment, so purchasing a couple of Nexus 5K switches or something equivalent for such a small number of chassis (we only have one) is not a preferred option. If additional hardware acquisition is inevitable, what would be the most cost-effective solution to support an FCoE implementation?
    Thank you in advance for your help on this.

    Unfortunately there isn't really one - with direct-attach storage there is still the requirement that an upstream MDS/N5K pushes the zoning to it.  Without an MDS to push the zoning, the setup isn't recommended for production.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101.html#concept_05717B723C2746E1A9F6AB3A3FFA2C72
    Even if you had a MDS/N5K the 6248/2208's wouldn't support the Compellent SAN - see note 9.
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.pdf
    That's not to say that it won't work, it's just that we haven't tested it and don't know what it will do and thus TAC cannot troubleshoot SAN errors on the UCS.
    On the plus side, iSCSI, if set up correctly, can be very solid and give you a great amount of throughput - just make sure to configure QoS correctly, and if you need more throughput, add some additional links.

  • Ask the Expert : Initial Set Up and LAN Connectivity for Cisco UCS Servers

    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the initial setup of UCS C and B Series, including LAN connectivity from the UCS perspective, with Cisco subject matter expert Kenny Perez.
    In particular, Kenny will cover topics such as: ESXi/Windows installations, RAID configurations (best practices for good performance and configuration), VLAN/jumbo frames configuration for B-Series and C-Series servers, pools/policies/upgrades/templates/troubleshooting tips for blade and rack servers, Fabric Interconnect configuration, and general compatibility of hardware/software/drivers, amongst other topics.
    Kenny Perez is a technical leader in the Cisco Technical Assistance Center, where he works in the Server Virtualization support team. His main job consists of helping customers implement and manage Cisco UCS B-Series and C-Series. He has a background in computing, networking, and VMware ESXi, has 3+ years of experience supporting UCS servers, and is VCP certified.
    Remember to use the rating system to let Kenny know if he has given you an adequate response. 
    This event lasts through October 10th, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi,
    We currently have a UCS 6248 Fabric Interconnect; the first twelve ports are enabled, and the same shows in Cisco UCS Manager.
    When more ports become active via an expansion module, can UCSM manage those too, or is another license needed for UCSM as well?
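    For reference, license usage (including port licenses) can be checked from the UCSM CLI, roughly like this (a sketch; the exact output format varies by release):

        UCS-A# scope license
        UCS-A /license # show usage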

  • Cisco UCS model to match Dell R900

    I am trying to replace existing Dell R900 ( 1 TB of drive space, 24GB RAM and 2 Quad core CPU ) with Cisco UCS but not sure which model I should go with.  any response will be appreciated!

    It looks like the Dell R900 is a rack server, so if you are looking for something similar, you may want to try the C460-M4:
    Specifications at a Glance
    Four rack-unit (4RU) chassis
    Either 2 or 4 Intel® Xeon® processor E7-4800 v2 or E7-8800 v2 product family CPUs
    Up to 6 terabytes (TB)* of double-data-rate 3 (DDR3) memory in 96 dual in-line memory module (DIMM) slots
    Up to 12 Small Form Factor (SFF) hot-pluggable SAS/SATA/SSD disk drives
    10 PCI Express (PCIe) Gen 3 slots supporting the Cisco UCS Virtual Interface Cards and third-party adapters and GPUs
    Two Gigabit Ethernet LAN-on-motherboard (LOM) ports
    Two 10-Gigabit Ethernet ports
    A dedicated out-of-band (OOB) management port
    *With 96 x 64 GB DIMMs when available.
    http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c460-m4-rack-server/index.html
    I hope this helps.
    Let me know if you have questions, if none, please mark the question as answered.
    -Kenny

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Server without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. Most common TAC cases that we have seen on Boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify configuration and troubleshoot from server to storage switches to storage array.
    Before diving into troubleshooting, make sure there is clear understanding of this topology. This is very vital with any troubleshooting scenario. Know what devices you have and how they are connected, how many paths are connected, Switch/NPV mode and so on.
    Always try to troubleshoot one path at a time, and verify that the setup is in compliance with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at server
    a. make sure to have uniform firmware version across all components of UCS
    b. Verify if VSAN is created and FC uplinks are configured correctly. VSANs/FCoE-vlan should be unique per fabric
    c. Verify at service profile level for configuration of vHBAs - vHBA per Fabric should have unique VSAN number
    Note down the WWPN of your vhba. This will be needed in step 2 for zoning on the SAN switch and step 3 for LUN masking on the storage array.
    d. verify if Boot Policy of the service profile is configured to Boot From SAN - the Boot Order and its parameters such as Lun ID and WWN are extremely important
    e. finally, at the UCS CLI, verify the FLOGI of the vHBAs (for NPV mode, the command from NX-OS is: show npv flogi-table)
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. Most important configuration to check on Storage Switch is the zoning
    Zoning is basically access control between our initiator and targets. The most common design is to configure one zone per initiator-target pair.
    Zoning requires you to configure a zone, put that zone into your current zoneset, and then ACTIVATE it. (command - show zoneset active vsan X)
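    A minimal zoning sketch for one initiator/target pair on an MDS (the WWPNs, zone name, and zoneset name below are placeholders):

        conf t
        zone name z_blade1_vhba0 vsan 103
          member pwwn 20:00:00:25:b5:01:00:0a
          member pwwn 50:06:01:60:08:60:11:22
        zoneset name zs_fabric_a vsan 103
          member z_blade1_vhba0
        zoneset activate name zs_fabric_a vsan 103

    The first pwwn would be the vHBA (initiator) and the second the storage port (target); confirm activation with 'show zoneset active vsan 103'.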
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is the crucial step on the storage array; it gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (as per Steps 1 and 2), the following needs to be verified at the storage-array level:
    a. Are the WWPNs of the initiators (the hosts' vHBAs) visible on the storage array?
    b. If yes, is LUN masking applied?
    c. What LUN number is presented to the host? This is the number we see in the LUN ID of the 'Boot Order' in Step 1.
    Below document has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 

  • Cisco Unity Connection 8.X and Cisco UCS

    Hi
    We are in a planning phase for Unity Connection 8.X on Cisco UCS C-Series. The Cisco Unified Communications SRND 8.0 states that requires reserving one physical core per physical server.
    what does it really mean?

    See sample depiction below of applications and physical core usage on a server with 2 CPUs and 8 total physical cores. In white is the reservation of one physical core.
    Regards.

  • Cisco 2950 and Dell Powerconnect 5224

    I am trying to cascade a Cisco 2950 and a Dell PowerConnect 5224. I am connecting port 32 on the 2950 to port 24 (gigabit port) on the Dell. Any idea how I can get the cascading to work? This is what I have on the Dell and the Cisco.
    Dell Powerconnect 5224:
    interface ethernet 1/24
    switchport allowed vlan add 1 untagged
    switchport native vlan 1
    switchport mode trunk
    switchport allowed vlan add 1,10 tagged
    Cisco 2950:
    interface FastEthernet0/32
    switchport access vlan 10
    switchport mode trunk
    Dell documentation on cascading between the PowerConnect and the Catalyst 4000 talks about setting up GVRP on both the Dell and Cisco switches. However, the 2950 doesn't have GVRP.
    http://www.dell.com/downloads/global/products/pwcnt/en/app_note_4.pdf
    Any ideas, tips. Thanks.

    Try this instead:
    Dell:
    interface ethernet 1/24
    switchport allowed vlan add 1 untagged
    switchport native vlan 1
    switchport mode trunk
    switchport allowed vlan add 10 tagged
    Cisco 2950:
    interface FastEthernet0/32
    switchport mode trunk
    switchport trunk allowed vlan 1,10
    switchport nonegotiate
    You don't need "switchport access vlan 10" on the Cisco because the port is not in access mode, it's in trunk mode. And on the Dell you don't want VLAN 1 to be both tagged and untagged.
    Good luck.
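    Once both sides are configured, the trunk can be verified from the 2950 with something like this (interface number as above):

        show interfaces fastEthernet 0/32 trunk
        show interfaces fastEthernet 0/32 switchport

    Both VLANs 1 and 10 should show as allowed and active on the trunk.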
