VM-FEX on Fabric Path

Hi all,
Does VM-FEX work in a FabricPath environment?
If so, please point me to some references.
Thanks
Jagath

Thanks
One more question.
According to your first post, VM-FEX is supported on all N5K switches, but the following Cisco document states that it is supported only on the Nexus 5500:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_n1_1/b_Cisco_n5k_layer2_config_gd_rel_513_N1_1_chapter_010101.html
VM-FEX is supported by the Cisco Nexus 5500 Platform running Cisco NX-OS Release 5.1(3)N1(1) or later.
So, which one is correct?

Similar Messages

  • Layer 3 config design on Nexus 5500 with Fabric Path

    I am trying to design the network for a new data center. I am new to data center design; I have attached the network diagram.
    I would like to know if I can configure Layer 3 on the 5500s and run FabricPath to the uplink switches.
    Please give your suggestions on this design.

    You can configure Layer 3 on the 5500 series, but you need to install a daughter card in each 5500.
    See this link:
    Layer 3 Daughter Card and Expansion Module Options for Cisco Nexus 5548P, 5548UP, 5596UP, and 5596T Switches
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
    HTH

  • Fabric path vlan question

    The setup is as shown in the attachment. S1-S4 are FabricPath spine switches. Will it work, or do I need to configure both VLAN 10 and VLAN 20 in "mode fabricpath" on all of S1-S4, even though VLAN 10 is only required on S1-S2 and VLAN 20 only on S3-S4?

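    For reference, putting a VLAN into FabricPath mode is a per-switch setting; a minimal sketch (VLAN IDs taken from this thread, exact commands may vary by NX-OS release) looks like:

        feature-set fabricpath
        vlan 10
          mode fabricpath
        interface ethernet 1/1
          switchport mode fabricpath

    A VLAN is only forwarded across the fabric where it is defined in fabricpath mode, so whether the spines need both VLANs depends on which VLANs their core links must carry.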

  • Nexus 7000, 2000, FCOE and Fabric Path

    Hello,
    I have a couple of design questions that I am hoping some of you can help me with.
    I am working on a dual-DC upgrade. It is a pretty standard design; the customer requires an L2 extension between the DCs for vMotion etc. The customer would like to leverage certain features of the Nexus product suite, including:
    TrustSec
    VDC
    VPC
    High Bandwidth Scalability
    Unified I/O
    As always, cost is a major issue and consolidation is encouraged where possible. I have worked on a couple of Nexus designs in the past and have leveraged the 7000, 5000, 2000 and 1000 in the DC.
    The feedback that I am getting from the customer seems to be mirrored in Cisco's technology roadmap, specifically regarding the features supported in the Nexus 7000 and Nexus 5000.
    Many large enterprise customers ask why they need both the 7000 and the 5000 in their topologies, when many of the features they need are supported on both platforms and their environments will never scale to justify such a modular, tiered design.
    I have a few specific questions that I am hoping can be answered:
    The Nexus 7000 only supports the 2000 on the M-series I/O modules; can FCoE be implemented on a 2000 connected to a 7000 using an M-series I/O module?
    Is the F-series I/O module the only one that supports FCoE?
    Are there any plans to introduce native FC support on the Nexus 7000?
    Are there any plans to introduce full fabric support (230 Gbps) to the M-series I/O module?
    Are there any plans to introduce FabricPath to the M-series I/O module?
    Are there any plans to introduce L3 support to the F-series I/O module?
    Is an entire 2000 series FEX allocated to a single VDC, or can individual 2000 series ports be allocated to different VDCs?
    Is TrustSec only supported on multi-hop DCI links when using the ASR with an EoMPLS pseudowire?
    Are there any plans to introduce TrustSec and VDCs to the Nexus 5500?
    Thanks,
    Colm

    Hello Allan
    The only I/O card that cannot coexist with other cards in the same VDC is the F2, due to its specific hardware implementation.
    All other cards can be mixed.
    Regarding the fabric versions: Fabric-2 provides much higher throughput than Fabric-1, so in order to get full speed from F2/M2 modules you will need Fab-2 fabric modules.
    Fab-2 modules won't give any advantage to M1/F1 modules.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/prodcut_bulletin_c25-688075.html
    HTH,
    Alex

  • Fabric Path

    Does anyone have a deep-dive white paper on Fabric Path?
    All I can find on Cisco's website is an 8-page overview.
    Thanks

    Hi
    Please see the FabricPath configuration guide (requires a Cisco.com login):
    http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/fabricpath/configuration/guide/fp_cli_Book.html
    Thanks
    Hatim Badr

  • Python script for Fabric Path Vlan audit?

    Has anyone written any kind of script/automation to audit an NX-OS FabricPath domain and ensure the CU VLANs are properly configured?

    Have you tried looking on GitHub? You might find a Python script that you can re-use.
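    If nothing turns up, a starting point is easy to sketch in Python: collect "show vlan" from each switch (however you already gather CLI output) and diff the VLANs reported in FabricPath mode against the expected set. The helper name and the assumption that the "Vlan-mode" column shows FABRICPATH/CE are mine, not from this thread, and the exact output format may vary by release:

    ```python
    import re

    def audit_fabricpath_vlans(show_vlan_output, expected_fp_vlans):
        """Return (missing, unexpected) FabricPath VLAN ID sets.

        Parses the 'VLAN Type Vlan-mode' table of NX-OS 'show vlan' output,
        where FabricPath VLANs are flagged FABRICPATH (CE otherwise).
        """
        found = set()
        for line in show_vlan_output.splitlines():
            # e.g. "10   enet  FABRICPATH"
            m = re.match(r'\s*(\d+)\s+\S+\s+FABRICPATH\b', line, re.IGNORECASE)
            if m:
                found.add(int(m.group(1)))
        expected = set(expected_fp_vlans)
        return expected - found, found - expected
    ```

    Run it against the output from every switch in the FP domain; any non-empty result flags a switch worth inspecting.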

  • Fabric Path / 802.1Q tag insertion on leaf

    Hello Gents
    Is it correct to state that when a frame arrives on a leaf's CE port that is in access mode for VLAN X, the leaf will also insert an 802.1Q tag into the original frame before adding the FabricPath header?
    Will this also happen for frames in the native (untagged) VLAN?

    Is it correct to state that when a frame arrives on a leaf's CE port that is in access mode for VLAN X, the leaf will also insert an 802.1Q tag into the original frame before adding the FabricPath header?
    Yes.
    Will this also happen for frames in the native (untagged) VLAN?
    Yes.

  • Fabricpath with FEX topology

    I am currently deploying Nexus 5500 with FabricPath, and deploying FEX as well. The FEX data sheet describes four different topologies.
    I have two questions.
    Please see link below:
    http://www.cisco.com/c/en/us/products/collateral/switches/nexus-2000-series-fabric-extenders/data_sheet_c78-507093.html
    With FabricPath vPC+ I am seeing only single-homed FEX.
    1. Does this mean only single-homed FEX is supported with FabricPath vPC+?
    2. I am also seeing a lot of debate on single-homed vs. dual-homed FEX. There is no single right answer as to which is better, but for design simplicity and troubleshooting I like single-homed FEX with the servers dual-homed to different FEXes.
    Thanks in advance
    Sincerely
    Viral Patel

    Simens,
    Please check the link below to see if it is helpful:
    http://adamraffe.com/2013/08/23/anycast-hsrp-4-way-hsrp-with-fabricpath/
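    As background, vPC+ differs from plain vPC in that the vPC domain is given a FabricPath switch-id, so the pair appears to the fabric as one FabricPath switch; a minimal sketch (domain and switch-id values are illustrative) would be:

        vpc domain 1
          fabricpath switch-id 1000
        interface port-channel 1
          switchport mode fabricpath
          vpc peer-link

    Host-facing vPC port-channels and FEX are then configured on top of this, subject to the topology restrictions in the data sheet linked above.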

  • Adapter FEX and VM Fex

    Hi!
    I have a questions about Cisco Adapter FEX and VM Fex.
    As I understand it, Cisco Adapter FEX provides multiple vNICs on one mezzanine card, and each of them appears as a virtual port on the Fabric Interconnect.
    The same goes for VM-FEX: each VM NIC appears as a virtual port on the Fabric Interconnect.
    Am I right?
    Thank you!

    Adapter FEX is I/O adapter virtualization, which is agnostic to the OS and therefore works in virtualized as well as non-virtualized environments.
    VM-FEX (also called hypervisor bypass) is available for several hypervisors:
    Cisco UCS Manager VM-FEX for Hyper-V CLI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for Hyper-V GUI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for Hyper-V CLI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for Hyper-V GUI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide   
    Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide   
    Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide   
    Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide   
    Example: VM-FEX for VMware ESX
    VM-FEX (previously known as VN-link) is a method to extend the network fabric completely down to the VMs. With VM-FEX, the Fabric Interconnects handle switching for the ESXi host's VMs. UCSM utilizes the vCenter dVS Application Programming Interfaces (API) to this end. Therefore, VM-FEX shows as a dVS in the ESXi host.
    There are many benefits to VM-FEX:
    Reduced CPU overhead on the ESX host
    Faster performance
    VMware DirectPath I/O with vMotion support
    Network management moved up to the FIs rather than on the ESXi host
    Visibility into vSphere with UCSM

  • Fex 1 Nexus 2148 to 2 Nexus 5010 problem?

    hi all
    I'm having trouble dual-homing one Nexus 2000 to two Nexus 5000s.
    I've configured it as dual-homed from the one Nexus 2K, but after completing the configuration only one Nexus 5K can see its 48 ports; the other cannot see it, and the FEX shows as module offline.
    Here are the results:
    N5K-01(config-if)# sh int e1/4 fex-intf
    Fabric           FEX
    Interface        Interfaces
    Eth1/4        
    N5K-01(config-if)#
    N5K-02# sh int e1/4 fex-intf
    Fabric           FEX
    Interface        Interfaces
    Eth1/4          Eth100/1/48   Eth100/1/47   Eth100/1/46   Eth100/1/45 
                     Eth100/1/44   Eth100/1/43   Eth100/1/42   Eth100/1/41 
                     Eth100/1/40   Eth100/1/39   Eth100/1/38   Eth100/1/37 
                     Eth100/1/36   Eth100/1/35   Eth100/1/34   Eth100/1/33 
                     Eth100/1/32   Eth100/1/31   Eth100/1/30   Eth100/1/29 
                     Eth100/1/28   Eth100/1/27   Eth100/1/26   Eth100/1/25 
                     Eth100/1/24   Eth100/1/23   Eth100/1/22   Eth100/1/21 
                     Eth100/1/20   Eth100/1/19   Eth100/1/18   Eth100/1/17 
                     Eth100/1/16   Eth100/1/15   Eth100/1/14   Eth100/1/13 
                     Eth100/1/12   Eth100/1/11   Eth100/1/10   Eth100/1/9  
                     Eth100/1/8    Eth100/1/7    Eth100/1/6    Eth100/1/5  
                     Eth100/1/4    Eth100/1/3    Eth100/1/2    Eth100/1/1
    ===============================
    N5K-01(config-if)# sh fex 100 detail
    FEX: 100 Description: FEX0100   state: Offline
      FEX version: 4.1(3)N2(1a) [Switch version: 4.1(3)N2(1a)]
      FEX Interim version: 4.1(3)N2(1a)
      Switch Interim version: 4.1(3)N2(1a)
      Extender Model: N2K-C2148T-1GE,  Extender Serial: JAF1405CKHC
      Part No: 73-12009-06
      Card Id: 70, Mac Addr: 00:0d:ec:fd:48:82, Num Macs: 64
      Module Sw Gen: 12594  [Switch Sw Gen: 21]
    pinning-mode: static    Max-links: 1
      Fabric port for control traffic: Eth1/4
      Fabric interface state:
        Eth1/4 - Interface Up. State: Active
      Fex Port        State  Fabric Port  Primary Fabric
           Eth100/1/1  Down      Eth1/4      Eth1/4
           Eth100/1/2  Down      Eth1/4      Eth1/4
           Eth100/1/3  Down      Eth1/4      Eth1/4
           Eth100/1/4  Down      Eth1/4      Eth1/4
           Eth100/1/5  Down      Eth1/4      Eth1/4
           Eth100/1/6  Down      Eth1/4      Eth1/4
           Eth100/1/7  Down      Eth1/4      Eth1/4
           Eth100/1/8  Down      Eth1/4      Eth1/4
           Eth100/1/9  Down      Eth1/4      Eth1/4
          Eth100/1/10  Down      Eth1/4      Eth1/4
          Eth100/1/11  Down      Eth1/4      Eth1/4
          Eth100/1/12  Down      Eth1/4      Eth1/4
          Eth100/1/13  Down      Eth1/4      Eth1/4
          Eth100/1/14  Down      Eth1/4      Eth1/4
          Eth100/1/15  Down      Eth1/4      Eth1/4
          Eth100/1/16  Down      Eth1/4      Eth1/4
          Eth100/1/17  Down      Eth1/4      Eth1/4
          Eth100/1/18  Down      Eth1/4      Eth1/4
          Eth100/1/19  Down      Eth1/4      Eth1/4
          Eth100/1/20  Down      Eth1/4      Eth1/4
          Eth100/1/21  Down      Eth1/4      Eth1/4
          Eth100/1/22  Down      Eth1/4      Eth1/4
          Eth100/1/23  Down      Eth1/4      Eth1/4
          Eth100/1/24  Down      Eth1/4      Eth1/4
          Eth100/1/25  Down      Eth1/4      Eth1/4
          Eth100/1/26  Down      Eth1/4      Eth1/4
          Eth100/1/27  Down      Eth1/4      Eth1/4
          Eth100/1/28  Down      Eth1/4      Eth1/4
          Eth100/1/29  Down      Eth1/4      Eth1/4
          Eth100/1/30  Down      Eth1/4      Eth1/4
          Eth100/1/31  Down      Eth1/4      Eth1/4
          Eth100/1/32  Down      Eth1/4      Eth1/4
          Eth100/1/33  Down      Eth1/4      Eth1/4
          Eth100/1/34  Down      Eth1/4      Eth1/4
          Eth100/1/35  Down      Eth1/4      Eth1/4
          Eth100/1/36  Down      Eth1/4      Eth1/4
          Eth100/1/37  Down      Eth1/4      Eth1/4
          Eth100/1/38  Down      Eth1/4      Eth1/4
          Eth100/1/39  Down      Eth1/4      Eth1/4
          Eth100/1/40  Down      Eth1/4      Eth1/4
          Eth100/1/41  Down      Eth1/4      Eth1/4
          Eth100/1/42  Down      Eth1/4      Eth1/4
          Eth100/1/43  Down      Eth1/4      Eth1/4
          Eth100/1/44  Down      Eth1/4      Eth1/4
          Eth100/1/45  Down      Eth1/4      Eth1/4
          Eth100/1/46  Down      Eth1/4      Eth1/4
          Eth100/1/47  Down      Eth1/4      Eth1/4
          Eth100/1/48  Down      Eth1/4      Eth1/4
    Logs:
    [11/24/2010 03:16:20.624283] Module register received
    [11/24/2010 03:16:20.627639] Registration response sent
    [11/24/2010 03:16:21.115021] Module Online Sequence
    [11/24/2010 03:16:22.379395] Module Online
    [11/24/2010 03:52:09.779773] Module disconnected
    [11/24/2010 03:52:09.781691] Offlining Module
    [11/24/2010 03:52:09.782231] Module Offline Sequence
    [11/24/2010 03:52:11.735462] Module Offline
    [11/24/2010 04:06:25.258659] Module disconnected
    [11/24/2010 04:06:25.259890] Offlining Module
    =================
    N5K-02# sh fex 100 detail
    FEX: 100 Description: FEX0100   state: Online
      FEX version: 4.1(3)N2(1a) [Switch version: 4.1(3)N2(1a)]
      FEX Interim version: 4.1(3)N2(1a)
      Switch Interim version: 4.1(3)N2(1a)
      Extender Model: N2K-C2148T-1GE,  Extender Serial: JAF1405CKHC
      Part No: 73-12009-06
      Card Id: 70, Mac Addr: 00:0d:ec:fd:48:82, Num Macs: 64
      Module Sw Gen: 12594  [Switch Sw Gen: 21]
    pinning-mode: static    Max-links: 1
      Fabric port for control traffic: Eth1/4
      Fabric interface state:
        Eth1/4 - Interface Up. State: Active
      Fex Port        State  Fabric Port  Primary Fabric
           Eth100/1/1  Down      Eth1/4      Eth1/4
           Eth100/1/2  Down      Eth1/4      Eth1/4
           Eth100/1/3  Down      Eth1/4      Eth1/4
           Eth100/1/4  Down      Eth1/4      Eth1/4
           Eth100/1/5  Down      Eth1/4      Eth1/4
           Eth100/1/6  Down      Eth1/4      Eth1/4
           Eth100/1/7  Down      Eth1/4      Eth1/4
           Eth100/1/8  Down      Eth1/4      Eth1/4
           Eth100/1/9  Down      Eth1/4      Eth1/4
          Eth100/1/10  Down      Eth1/4      Eth1/4
          Eth100/1/11  Down      Eth1/4      Eth1/4
          Eth100/1/12  Down      Eth1/4      Eth1/4
          Eth100/1/13  Down      Eth1/4      Eth1/4
          Eth100/1/14  Down      Eth1/4      Eth1/4
          Eth100/1/15  Down      Eth1/4      Eth1/4
          Eth100/1/16  Down      Eth1/4      Eth1/4
          Eth100/1/17  Down      Eth1/4      Eth1/4
          Eth100/1/18  Down      Eth1/4      Eth1/4
          Eth100/1/19  Down      Eth1/4      Eth1/4
          Eth100/1/20  Down      Eth1/4      Eth1/4
          Eth100/1/21  Down      Eth1/4      Eth1/4
          Eth100/1/22  Down      Eth1/4      Eth1/4
          Eth100/1/23  Down      Eth1/4      Eth1/4
          Eth100/1/24  Down      Eth1/4      Eth1/4
          Eth100/1/25  Down      Eth1/4      Eth1/4
          Eth100/1/26  Down      Eth1/4      Eth1/4
          Eth100/1/27  Down      Eth1/4      Eth1/4
          Eth100/1/28  Down      Eth1/4      Eth1/4
          Eth100/1/29  Down      Eth1/4      Eth1/4
          Eth100/1/30  Down      Eth1/4      Eth1/4
          Eth100/1/31  Down      Eth1/4      Eth1/4
          Eth100/1/32  Down      Eth1/4      Eth1/4
          Eth100/1/33  Down      Eth1/4      Eth1/4
          Eth100/1/34  Down      Eth1/4      Eth1/4
          Eth100/1/35  Down      Eth1/4      Eth1/4
          Eth100/1/36  Down      Eth1/4      Eth1/4
          Eth100/1/37  Down      Eth1/4      Eth1/4
          Eth100/1/38  Down      Eth1/4      Eth1/4
          Eth100/1/39  Down      Eth1/4      Eth1/4
          Eth100/1/40  Down      Eth1/4      Eth1/4
          Eth100/1/41  Down      Eth1/4      Eth1/4
          Eth100/1/42  Down      Eth1/4      Eth1/4
          Eth100/1/43  Down      Eth1/4      Eth1/4
          Eth100/1/44  Down      Eth1/4      Eth1/4
          Eth100/1/45  Down      Eth1/4      Eth1/4
          Eth100/1/46  Down      Eth1/4      Eth1/4
          Eth100/1/47  Down      Eth1/4      Eth1/4
          Eth100/1/48  Down      Eth1/4      Eth1/4
    Logs:
    [11/24/2010 03:49:38.824900] Module timed out
    [11/24/2010 03:54:08.225000] Module register received
    [11/24/2010 03:54:08.226093] Registration response sent
    [11/24/2010 03:54:08.540805] Module Online Sequence
    [11/24/2010 03:54:10.7287] Module Online
    Can anyone help me with this?
    Thanks.

    I have already configured vPC between the two Nexus 5010s:
    N5K-01# sh vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                   : 1  
    Peer status                     : peer adjacency formed ok     
    vPC keep-alive status           : peer is alive                
    Configuration consistency status: success
    vPC role                        : primary                     
    vPC Peer-link status
    id   Port   Status Active vlans   
    1    Po1    up     1                                                      
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    100    Po100       down   success     success                    - 
    N5K-02# sh vpc
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                   : 1  
    Peer status                     : peer adjacency formed ok     
    vPC keep-alive status           : peer is alive                
    Configuration consistency status: success
    vPC role                        : secondary                   
    vPC Peer-link status
    id   Port   Status Active vlans   
    1    Po1    up     1                                                      
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    100    Po100       down*  success     success                    -        
    N5K-02#
    Am I missing some config?
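    For comparison, a typical dual-homed (active/active) FEX configuration puts the FEX fabric links into a port-channel that is a vPC on both 5Ks; a sketch, assuming the peer-link and keepalive are already healthy (interface and FEX numbers match this thread, the channel/vPC numbers are illustrative):

        ! identical on both N5K-01 and N5K-02
        feature fex
        interface ethernet 1/4
          switchport mode fex-fabric
          fex associate 100
          channel-group 100
        interface port-channel 100
          switchport mode fex-fabric
          fex associate 100
          vpc 100

    Without the port-channel/vPC on the fabric interface, each 5K tries to bring up the FEX independently, which matches the Online/Offline split shown in the outputs above.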

  • N7K F1 TCAM limitations ("ERROR: Hardware programming failed. Reason: Tcam will be over used, please enable bank chaining and/or turn off atomic update")

    Platform/versions:
    # sh ver
      kickstart: version 6.2(2)
      system:    version 6.2(2)
    # sh mod
    Mod  Ports  Module-Type                         Model              Status
    1    32     10 Gbps Ethernet Module             N7K-M132XP-12      ok
    2    48     1000 Mbps Optical Ethernet Module   N7K-M148GS-11      ok
    3    32     1/10 Gbps Ethernet Module           N7K-F132XP-15      ok
    4    32     1/10 Gbps Ethernet Module           N7K-F132XP-15      ok
    5    0      Supervisor Module-1X                N7K-SUP1           ha-standby
    6    0      Supervisor Module-1X                N7K-SUP1           active *
    I recently tried to add a couple of "ip dhcp relay address" statements to an SVI and received the following error message:
    ERROR: Hardware programming failed. Reason: Tcam will be over used, please enable bank chaining and/or turn off atomic update
    Studying this page, I was able to determine that I seemed to be hitting a 50% TCAM utilization limit on the F1 modules, which prevented atomic updates:
    # show hardware access-list resource entries module 3
             ACL Hardware Resource Utilization (Module 3)
                              Instance  1, Ingress
    TCAM: 530 valid entries   494 free entries
    IPv6 TCAM: 8 valid entries   248 free entries
                              Used    Free    Percent
                                              Utilization
    TCAM                      530     494     51.75 
    I was able to work around it by disabling atomic updates:
    hardware access-list update default-result permit
    no hardware access-list update atomic
    I understand that with this config I am theoretically permitting traffic during ACL updates that shouldn't be permitted (alternatively I could drop it instead), but that's not really my primary concern here.
    First, I need to understand why adding DHCP relays apparently affects my TCAM entry resources.
    Second, I need to understand whether there are other implications of disabling atomic updates, such as during ISSU.
    Third, what are my options, if any, for planning the usage of the apparently relatively scarce TCAM resources on the F1 modules?

    Could be CSCua13121:
    Symptom:
    Host of certain Vlan will not get IP address from DHCP.
    Conditions:
    An M1/F1 chassis with FabricPath VLANs and atomic update enabled.
    On such a system, if an SVI configured with DHCP relay is bounced (shut down and brought back up), it may cause a DHCP relay issue.
    Workaround:
    Disable atomic update and bounce the VLAN. Disabling atomic update may cause packet drops.
    Except that it ought to be fixed in 6.2(2).

  • NEXUS Multicast questions

    I have a question regarding multicast support for the Nexus 1000V/4001i/5548 with the L3 daughter card. Before the questions, some quick background: we are in the process of buying an IBM BladeCenter with the aforementioned network pieces. We are a modeling and simulation site, and the servers we use primarily communicate via multicast. Typically 6 class C's worth of multicast addresses are reserved per event. I was looking at doing Layer 3 on the 5548s, but it has a limitation of 2000 multicast entries, and I'm not even sure what that means, i.e. per VLAN/VRF or just in total. We have 4 suites, so if we run two events at the same time we will hit that number quickly. So we will continue to do Layer 3 at the 6509s for multicast, and the Nexus family will handle it at Layer 2.
    My real questions and concerns are these:
    The 4001i states it will hold 1000 IGMP snooping entries. What happens when it exceeds this number? Should we just turn IGMP snooping off and let it flood everything in that VLAN? (I have my concerns with that.)
    I cannot find any multicast limitations on the 1000V; are there any?
    Any advice/help will be greatly appreciated. 
    Thanks
    Brad

    Ex,
    Yes, all three Nexus products are available for order as of today.
    As for FabricPath, it's currently only supported on the Nexus 7K with the F1 line cards running NX-OS 5.1 or later. FabricPath will be available on the 5548/5596 with the 5.1(3)N1(1) "Fairhaven" release of NX-OS, currently targeting mid-to-late 2011.
    FYI, FabricPath will require a separate license.
    Regards,
    Robert

  • OS level load balancing in OEL

    Hi,
    I would like to know if we can do OS-level load balancing with OEL.
    I know that in Windows there is Network Load Balancing (NLB) that does this, so I am hoping there is something similar in OEL?
    Thanks.

    Optimus prime wrote:
    "Would like to know if we can do OS level load balancing with OEL?"
    Any kernel automatically "load balances" processing across the resources (e.g. CPUs) available to the kernel. The 2.6 kernel uses the Completely Fair Scheduler.
    "I know in Windows there is Network Load Balancing NLB, that would do this, so I am hoping is there something similar in OEL?"
    Kind of. There are a number of clustering options for Linux, including commercial products such as Oracle Grid.
    I think one of the oldest (and best-known) Linux clustering software projects is Beowulf.
    For networking specifically (comparing it with what I read in Microsoft's FAQ for NLB), there is Linux Virtual Server. Quote:
    "The Linux Virtual Server as an advanced load balancing solution can be used to build highly scalable and highly available network services, such as scalable web, cache, mail, ftp, media and VoIP services."
    There are also other options, such as network devices (in addition to standard switches and routers) that specifically provide load balancing. Personally, though, I did not like this approach much for Oracle RAC, and we rolled our own load balancing using NAT and iptables.
    Bonding, as mentioned above, is one of many technical considerations when implementing a load balancer. It is (in my experience) primarily for high availability, providing redundant paths from the server to a resource. It does not need to be IP based; it can be InfiniBand based too (e.g. as used by the Oracle Database Machine for redundant storage fabric paths to the Exadata Storage Servers).
    Typically two separate interfaces/dual ports are wired into separate switches, which in turn are wired into the greater network or storage system. These are then bonded on the server as a single logical interface. If an interface, port, cable or even a switch fails, the logical interface still has a secondary path providing full connectivity. Bonding is discussed at http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding.
    This is, however, a very low-level way of dealing with load balancing (and redundancy). On its own it should not be considered a load balancing solution.
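    To make the bonding point concrete, on OEL/RHEL-style systems an active-backup bond is typically defined with ifcfg files; a sketch (device names, addresses and options are illustrative, not from this thread):

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        BOOTPROTO=static
        IPADDR=192.168.1.10
        NETMASK=255.255.255.0
        ONBOOT=yes
        BONDING_OPTS="mode=active-backup miimon=100"

        # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
        DEVICE=eth0
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes

    After a network restart, bond0 carries the IP and fails over between the slaves on link loss (miimon is the link-monitor interval in ms).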

  • FabricPath vPC port-channel err-disabled CE Vlan?

    I have a pair of Nexus 56128s configured with FabricPath and vPC+. The Nexus pair has UCS connected downstream using vPC port-channels. When a VLAN is in mode fabricpath, it works fine over the vPC+ peer-link and the vPC port-channel to UCS. However, when I changed the VLAN to classic Ethernet, it went err-disabled on the vPC port-channels.
    Is this the normal behavior of a FabricPath domain? In other words, CE VLANs and FabricPath VLANs cannot use the same Layer 2 path, correct?
    If I need to transport both CE VLANs and FabricPath VLANs from the Nexus pair (FabricPath vPC+) to UCS, do I have to use a separate non-vPC port-channel for the CE VLANs between each Nexus and UCS?
    Thanks


  • L2 across DC's

    Recently we had a vendor come in and recommend that we extend pure Layer 2 links across our cores and over the WAN. As far as I've ever known, it's not good practice to extend Layer 2 across the core, let alone a WAN connection, if it can be helped. I know there are a number of technologies such as OTV, VPLS, and FabricPath that are designed as DC interconnects, but what was proposed was simply a link running at Layer 2. So my question is: is this a trend that others are starting to see become more common? If so, what is the reasoning for or against it? I have a lot of reservations about this proposed solution because it seems to go against everything I know. Any help would be appreciated.
    -Shaun

    The guy I replaced at my current gig ordered a 200 Mb L2 circuit connecting my two DCs. There are no routers, only L3 on the cores, so I am struggling to reverse-engineer what his thought process was.
    My thinking is that this is not the best way to do it. I personally would have extended the VLAN over L3 using routers on each side, so that I not only have DR between sites but also another route out.
    But these are the cards I have been dealt, so I have to play them.
