Regarding Nexus 5000 + 7000 software (Shellshock bug)

Hi people.
Regarding the Shellshock bug: what is the recommended software upgrade for the Nexus 5000 and 7000?
It is important that vPC and FCoE keep working after an upgrade.
I need recommendations for the following devices:
Nexus 5000
https://tools.cisco.com/bugsearch/bug/CSCur05017
All current versions of NX-OS on this platform are affected unless otherwise stated. This bug will be updated with detailed affected and fixed software versions once fixed software is available.
Exposure is not configuration dependent.
Authentication is required to exploit this vulnerability.
Nexus 7000
https://tools.cisco.com/bugsearch/bug/CSCuq98748
All current versions of NX-OS on this platform are affected unless otherwise stated.
Exposure is not configuration dependent.
Authentication is required to exploit this vulnerability.
This bug is fixed in NX-OS versions specified below:
5.2(9a)
6.1(5a)
6.2(8b)
6.2(10) and above
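For reference, a common way to check whether a given Bash build is vulnerable to the original Shellshock flaw (CVE-2014-6271) is the one-liner below. This is a generic test for any host where you can reach a Bash prompt, not an NX-OS command, and it is offered only as a sketch:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

A vulnerable Bash prints "vulnerable" before the test string; a patched Bash prints only the test string.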
Does anyone have more information on this?
Many thanks in advance,

Hello.
Let me look into this for you. Do you have an existing support contract or SmartNet coverage for these Nexus 5000 and 7000 switches, by the way?
Let me know if you have other concerns as well, or e-mail me directly ([email protected]).
Kind regards. 

Similar Messages

  • Nexus 5000 / 7000: can you network boot or service config?

    Hi All
    Just writing up a security document that has one version for IOS and one for Nexus.
    I can't find any information about the Nexus having a service config or a network boot
    capability.
    Does anyone know how this would work on a Nexus?
    Steve

    Posting as an update to this issue. I've noticed it may not be the flash card as I previously thought. Getting a chance to look inside the system while it boots, I noticed two LEDs turn on and stay solid on the motherboard.
    One is labeled SEQ B FAIL, the other SEQ C FAIL. I assume these are for POST or something similar, but this now leads me to believe that the motherboard itself is fried. If there's any way to fix this, please let me know. Thank you.

  • Nexus 5000 rebooted after receiving %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by on ppm.3523

    Hi,
    We have two Nexus 5000s in an HA configuration that use a switch port profile to share configuration changes.
    Earlier this morning the secondary switch rebooted (this has happened before).
    Just prior to the reboot, the syslog server received the following messages:
    Dec 18 07:08:32 10.91.254.62 : 2013 Dec 18 07:08:32 AEST: %ASCII-CFG-6-INFORMATION: Reading ACFG Runtime information
    Dec 18 07:08:33 10.91.254.62 : 2013 Dec 18 07:08:33 AEST: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by  on ppm.3523
    Dec 18 07:08:33 10.91.254.62 : 2013 Dec 18 07:08:33 AEST: %ASCII-CFG-6-INFORMATION: Reading ACFG Runtime information
    Any help with diagnosing this issue would be appreciated.
    Version information is below.
    Software
      BIOS:      version 3.5.0
      loader:    version N/A
      kickstart: version 5.1(3)N2(1)
      system:    version 5.1(3)N2(1)
      power-seq: Module 1: version v1.0
                 Module 3: version v2.0
      uC:        version v1.2.0.1
      SFP uC:    Module 1: v1.0.0.0
      BIOS compile time:       02/03/2011
      kickstart image file is: bootflash:///n5000-uk9-kickstart.5.1.3.N2.1.bin
      kickstart compile time:  3/20/2012 0:00:00 [03/20/2012 18:51:29]
      system image file is:    bootflash:///n5000-uk9.5.1.3.N2.1.bin
      system compile time:     3/20/2012 0:00:00 [03/20/2012 20:00:58]
    Hardware
      cisco Nexus5548 Chassis ("O2 32X10GE/Modular Universal Platform Supervisor")
      Intel(R) Xeon(R) CPU         with 8263864 kB of memory.
      Processor Board ID FOC15504PN3
      Device name: AALN5KPDC03
      bootflash:    2007040 kB
    Kernel uptime is 0 day(s), 3 hour(s), 1 minute(s), 48 second(s)
    Last reset at 429902 usecs after  Wed Dec 18 07:03:10 2013
      Reason: Kernel Panic
      System version: 5.1(3)N2(1)
      Service:
    plugin
      Core Plugin, Ethernet Plugin

    Hello,
    In the case of an unexpected reboot with a kernel panic as the reset reason, I recommend opening a case with the Cisco TAC team.
    It is probably related to bug CSCua53582.
    Richard

  • Nexus 5000 - Odd Ethernet interface behavior (link down inactive)

    Hi Guys,
    This may sound really trivial, but it is very odd behavior.
    - We have a server connected to 2 Nexus 5000s (for resiliency).
    - When there is no config on the Ethernet interfaces whatsoever, the interface is UP/UP and there is a minimal amount of traffic on the link. E.g.:
    Ethernet1/16 is up
      Hardware: 1000/10000 Ethernet, address: 000d.ece7.85d7 (bia 000d.ece7.85d7)
      Description: shipley-p1.its RK14/A13
      MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      full-duplex, 10 Gb/s, media type is 1/10g
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Rate mode is dedicated
      Switchport monitor is off
      Last link flapped 00:00:07
      Last clearing of "show interface" counters 05:42:32
      30 seconds input rate 0 bits/sec, 0 packets/sec
      30 seconds output rate 96 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 0 bps, 0 pps; output rate 8 bps, 0 pps
      RX
        0 unicast packets  0 multicast packets  0 broadcast packets
        0 input packets  0 bytes
        0 jumbo packets  0 storm suppression packets
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        0 unicast packets  163 multicast packets  0 broadcast packets
        163 output packets  15883 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble
        0 Tx pause
      1 interface resets
    - As soon as I configure the link to be an access port, the link goes down, flagged "inactive". E.g.:
    sh int e1/16
    Ethernet1/16 is down (inactive)
      Hardware: 1000/10000 Ethernet, address: 000d.ece7.85d7 (bia 000d.ece7.85d7)
      Description: shipley-p1.its RK14/A13
      MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      auto-duplex, 10 Gb/s, media type is 1/10g
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Rate mode is dedicated
      Switchport monitor is off
      Last link flapped 05:38:03
      Last clearing of "show interface" counters 05:41:33
      30 seconds input rate 0 bits/sec, 0 packets/sec
      30 seconds output rate 0 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
      RX
        0 unicast packets  0 multicast packets  0 broadcast packets
        0 input packets  0 bytes
        0 jumbo packets  0 storm suppression packets
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        0 unicast packets  146 multicast packets  0 broadcast packets
        146 output packets  13083 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble
        0 Tx pause
      0 interface resets
    - This behavior is seen on both 5Ks.
    - I've tried using a different set of ports, changing SFPs, and changing the fibre cabling, to no avail.
    - I can't understand this behavior: why would configuring the port cause the link to go down?
    - If anyone has experienced this before, or could shed some light on this behavior, it would be appreciated.
    sh ver
    Cisco Nexus Operating System (NX-OS) Software
    TAC support: http://www.cisco.com/tac
    Copyright (c) 2002-2010, Cisco Systems, Inc. All rights reserved.
    The copyrights to certain works contained herein are owned by
    other third parties and are used and distributed under license.
    Some parts of this software are covered under the GNU Public
    License. A copy of the license is available at
    http://www.gnu.org/licenses/gpl.html.
    Software
      BIOS:      version 1.2.0
      loader:    version N/A
      kickstart: version 4.2(1)N1(1)
      system:    version 4.2(1)N1(1)
      power-seq: version v1.2
      BIOS compile time:       06/19/08
      kickstart image file is: bootflash:/n5000-uk9-kickstart.4.2.1.N1.1.bin
      kickstart compile time:  4/29/2010 19:00:00 [04/30/2010 02:38:04]
      system image file is:    bootflash:/n5000-uk9.4.2.1.N1.1.bin
      system compile time:     4/29/2010 19:00:00 [04/30/2010 03:51:47]
    thanks
    Sheldon

    I had an identical issue.
    Two interfaces on two different FEXes were INACTIVE. I have two Nexus 5596s in vPC and A/A FEXes.
    I also use the config-sync feature.
    The very same configuration was applied to other ports on other FEXes, and they worked with no problems.
    interface Ethernet119/1/1
      inherit port-profile PP-Exchange2003
    I checked the VLAN status associated with this profile and it was active (of course it was; the other ports were fine).
    I solved it by removing the port profile from the port and re-applying it... voila, the port changed state to up!
    Very, very strange.
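
    For anyone hitting the same symptom: an "(inactive)" port state on the Nexus 5000 usually means the port's access or allowed VLAN does not exist or is not active on that switch. A minimal sketch of the checks, assuming Ethernet1/16 is the affected port and VLAN 10 is its access VLAN (both placeholders):
    show interface ethernet 1/16 switchport
    show vlan id 10
    configure terminal
      vlan 10
    !--- Creating the missing VLAN typically brings the port out of "inactive"
    If the VLAN is active and the port still shows inactive, removing and re-applying the port profile, as described above, has also been reported to clear the state.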

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is a key factor in the success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with a primary focus on data center technologies such as Cisco UCS, the Cisco Nexus 1000V Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications. He is also working toward a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: to establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology specified in the Operations Guide link, multicast for routing adjacencies will work. The hello packets will be exchanged across all 3 routers, and adjacency will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now, for the data plane we have two types of traffic: unicast and multicast.
    Unicast traffic will not have any forwarding issues, but because Layer 3 ECMP and the port channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hash chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer-link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when the traffic could be routed directly.
    In that topology, multicast traffic might see complete traffic loss, because when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently on the Nexus 5000 and the Nexus 7000.
    Similarity: on both products, loop avoidance is possible due to the VSL bit.
    The VSL bit is set in the DBUS header internal to the Nexus.
    It is not something that is set in the ethernet packet that can be identified. The VSL bit is set on the port asic for the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This is not something that would traverse the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea is that if the packet came in on the peer link from the vPC peer, the system assumes that the vPC peer has already forwarded this packet out of the vPC-enabled port channels toward the end device, so the egress vPC interface's port ASIC will filter the packet on egress.
    Differences: when the Nexus 5000 has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared, so the traffic is not dropped, in contrast to the Nexus 7000 and Nexus 3000.
    It still does loop prevention, but the L3-to-L2 lookup behaves differently on the Nexus 5000 and the Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: if both pairs are Nexus 5000s, then separation of the L3/L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000s with a pair of Nexus 7000s over DCI, or 2 pairs of Nexus 7000s over DCI. If Nexus 7000s are used, then separate L3 and L2 links are definitely required, as described in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal
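
    A side note on the ECMP-versus-port-channel hashing point above: you can at least inspect, and if needed tune, the port-channel hash inputs on each Nexus 5000. A minimal sketch; source-dest-ip is just one of several valid load-balance keywords, and the available options vary by platform and release:
    show port-channel load-balance
    configure terminal
      port-channel load-balance ethernet source-dest-ip
    This only controls the port-channel side; the Layer 3 ECMP hash is computed independently, which is why the mismatch described above can occur at all.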

  • Tacacs cfs on the Nexus 5000

    Hi
    I want to distribute the TACACS+ configuration from the Nexus 7000 to the Nexus 5000
    via CFS.
    When I do 'sh cfs app' I get this:   tacacs         No        Physical-fc-ip
    However, you cannot enter the distribute command for TACACS, 'tacacs+ distribute'.
    You also cannot run the command 'sh cfs app name tacacs'.
    Obviously there must be different commands, but I cannot find them.
    If I can't distribute TACACS, how can I make this work?
    many thanks
    Steve

    I think the command set does not matter.
    Because the Nexus only takes the role and does not use per-command authorization (AFAIK), it will take the role from the shell profile; selecting the command set does not matter because per-command authorization is not used.
    I used command sets with a CRS-1 and they had no effect. Only the shell profile configuration matters.
    What is the situation at your end? Do things work fine with/without selecting the command set, or with an empty command set in place?
    Rating useful replies is more useful than saying "thank you".
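
    On the original CFS question: on NX-OS platforms that support CFS distribution for TACACS+, the distribute keyword only becomes available once the feature itself is enabled. A minimal sketch, assuming your platform and release support it (availability varies):
    configure terminal
      feature tacacs+
      tacacs+ distribute
      tacacs+ commit
    If 'tacacs+ distribute' is still rejected after enabling the feature, that platform/release most likely does not support CFS distribution for TACACS+ at all.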

  • Nexus 5000 as NTP client

    We run 6509 core routers as NTP servers for other IOS routers/switches and servers of several OS flavours.
    All good.
    We recently added some Nexus 5000s and cannot get them to lock.
    There are no firewalls or ACLs in the path.
    6509 (1 of 4) state:
    LNPSQ01CORR01>sh ntp ass
          address         ref clock     st  when  poll reach  delay  offset    disp
    + 10.0.1.2         131.188.3.220     2   223  1024  377     0.5   -6.23     0.7
    +~130.149.17.21    .PPS.             1   885  1024  377    33.7   -0.26     0.8
    *~138.96.64.10     .GPS.             1   680  1024  377    22.7   -2.15     1.0
    +~129.6.15.29      .ACTS.            1   720  1024  377    84.9   -3.37     0.6
    +~129.6.15.28      .ACTS.            1   855  1024  377    84.8   -3.30     2.3
    * master (synced), # master (unsynced), + selected, - candidate, ~ configured
    Nexus state:
    BL01R01B10SRVS01# sh ntp peer-status
    Total peers : 4
    * - selected for sync, + -  peer mode(active),
    - - peer mode(passive), = - polled in client mode
        remote               local              st  poll  reach   delay
    =10.0.1.1               10.0.201.11            16   64       0   0.00000
    =10.0.1.2               10.0.201.11            16   64       0   0.00000
    =10.0.1.3               10.0.201.11            16   64       0   0.00000
    =10.0.1.4               10.0.201.11            16   64       0   0.00000
    Nexus config:
    ntp distribute
    ntp server 10.0.1.1
    ntp server 10.0.1.2
    ntp server 10.0.1.3
    ntp server 10.0.1.4
    ntp source 10.0.201.11
    ntp commit
    interface mgmt0
      ip address 10.0.201.11/24
    vrf context management
      ip route 0.0.0.0/0 10.0.201.254
    Reachability to the NTP source...
    BL01R01B10SRVS01# ping 10.0.1.1 vrf management source 10.0.201.11
    PING 10.0.1.1 (10.0.1.1) from 10.0.201.11: 56 data bytes
    64 bytes from 10.0.1.1: icmp_seq=0 ttl=253 time=3.487 ms
    64 bytes from 10.0.1.1: icmp_seq=1 ttl=253 time=4.02 ms
    64 bytes from 10.0.1.1: icmp_seq=2 ttl=253 time=3.959 ms
    64 bytes from 10.0.1.1: icmp_seq=3 ttl=253 time=4.053 ms
    64 bytes from 10.0.1.1: icmp_seq=4 ttl=253 time=4.093 ms
    --- 10.0.1.1 ping statistics ---
    5 packets transmitted, 5 packets received, 0.00% packet loss
    round-trip min/avg/max = 3.487/3.922/4.093 ms
    BL01R01B10SRVS01#
    Are we missing some NTP or management VRF setup on the Nexus 5Ks?
    Thanks
    Rob Spain
    UK

    I have multiple 5020s, 5548s, and 5596s, and they all experience this same problem. Mind you, I run strictly Layer 2; I don't even have feature interface-vlan enabled. I tried "ntp server X.X.X.X use-vrf management" as well as "clock protocol ntp". These didn't help.
    I was told by TAC that there is a bug (sorry, I do not have the ID): basically, NTP will not work over the management VRF. The only way I got NTP to work was by enabling feature interface-vlan and adding a VLAN interface with an IP, then retrieving NTP through this interface.
    I upgraded to 5.2(1) in hopes that this would fix the issue, but it did not.
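
    For what it's worth, the config in the original post points at servers reachable only through mgmt0 but never tells NTP to use the management VRF. On releases where NTP over the management VRF works (see the bug caveat above), the usual form is the sketch below, reusing the addresses from the post:
    configure terminal
      ntp server 10.0.1.1 use-vrf management
      ntp server 10.0.1.2 use-vrf management
      ntp server 10.0.1.3 use-vrf management
      ntp server 10.0.1.4 use-vrf management
      ntp source 10.0.201.11
      ntp commit
    The 'ntp commit' is only needed because the original config uses 'ntp distribute' (CFS).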

  • Nexus 5000 - Securing MGMT Access

    Could anyone comment on whether the capability exists to configure an ACL that protects management access, restricting it to certain source subnets? I want to use inband management access (the interface VLAN feature) but limit the access by IP. ACLs seem to be configurable only on a per-port or VLAN-mapped basis, not on the VLAN interface or line VTY. Thanks in advance to anyone who offers a comment!

    Hi Adam,
    [edit] This is fixed in 4.1(3)N2(1) with defect CSCta26533.  It is also available in 4.2(1)N1(1).  I just tested this to verify, I was confused earlier as to what version my switches were running.
    Here's an example in 4.2(1)N1(1):
    Nexus5010# conf t
    Nexus5010(config)# ip access-list someACL
    Nexus5010(config-acl)# deny ip 192.168.0.0/16 any                      
    Nexus5010(config-acl)# permit ip any any
    Nexus5010(config-acl)# int mgmt0
    Nexus5010(config-if)# ip access-group someACL in
    Nexus5010(config-if)# exit
    Nexus5010# sh ip access-lists summary
    IPV4 ACL someACL
            Total ACEs Configured: 2
            Configured on interfaces:
                    mgmt0 - ingress (Router ACL)
            Active on interfaces:
                    mgmt0 - ingress (Router ACL)
    Also, CSCsq20638 will allow you to put an ACL on VTY lines. CSCsq20638 slipped the target release since my first answer, but is now committed to the 5.0 train for the Nexus 7000.
    The Nexus 5000 should pick up this enhancement sometime in Q4 of 2010. I can't be specific about a release date since it's under active development, but it should be called 5.0(2)N1(1).
    Regarding a VACL, that will work for inband management (SVI / VLAN interface), but not for those managing via MGMT0.
    Regards,
    John Gill
    Message was edited by: johgill

  • Shellshock bug

    Is no one curious about whether Apple is working on this?

    If the issue concerns an older, obsolete vintage of Mac OS X and a former security
    issue, long since bypassed through upgrades and updates over many years, I'd guess no.
    However, there is a new issue that re-uses an old bug's name... of a different nature.
    I see this page, but wonder about its validity (it consumes resources to view):
    http://www.imore.com/about-bash-shellshock-vulnerability-and-what-it-means-os-x
    A fresh installation on a wiped hard drive would be one way to remove it from a Mac.
    Please define the system and hardware this issue is confined to, if you have that information.
    • What does the Shellshock bug affect?
    http://www.thesafemac.com/?s=shellshock&submit=Search
    http://www.thesafemac.com/what-does-the-shellshock-bug-affect/#more-1688
    While I have Leopard on a few machines, I try not to install software from odd
    places that are suspect. See if TheSafeMac has anything about it, or e-mail the
    author of the site and ask him. http://www.thesafemac.com/tech-guides/
    Good luck and happy computing!
    edited

  • 1921 integrated services router possibly affected by ShellShock bug?

    Hi all,
    Can anyone advise whether this device runs Linux- or OS X-based software?
    1900 series (1921) integrated services router
    I have been asked to check whether this hardware is possibly at risk from the Shellshock bug.
    Many thanks,
    James

    Hello,
    Please see the following link; it may answer your questions:
    http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20140926-bash
    Carlos

  • Tacacs do not function in Nexus 5000

    Dear sirs,
    For some reason, TACACS+ is not functioning on my Nexus 5000. I am using the following configuration:
    tacacs-server key 7 "0310551D121F2D595D"
    ip tacacs source-interface Vlan5
    tacacs-server host 10.20.2.80
    tacacs-server host 10.20.16.138
    aaa group server tacacs+ TACSERVER
        server 10.20.2.80
        server 10.20.16.138
        source-interface Vlan5
        use-vrf default
    aaa authentication login default group TACSERVER
    no aaa user default-role
    aaa authentication login error-enable
    tacacs-server directed-request
    I did a telnet to port 49 on the server address, and it connects. That rules out a security problem (FW, ACL, etc.).
    When I run the test, nothing shows up in the TACACS server logs.
    The log messages are the following:
    2012 Aug 22 15:54:45 NITE1 %TACACS-3-TACACS_ERROR_MESSAGE: received bad authentication packet from 10.20.2.80
    2012 Aug 22 15:54:45 NITE1 %TACACS-3-TACACS_ERROR_MESSAGE: All servers failed to respond
    2012 Aug 22 15:54:48 NITE1 %AUTHPRIV-3-SYSTEM_MSG: pam_aaa:Authentication failed for user GPALAVE from 10.20.2.80 - login[3087]
    The problem is very strange.
    I need help.
    Best regards

    Your config looks fine. Can you ping the TACACS+ servers from VLAN 5? Also, did you add VLAN 5's IP address as a client on your TACACS+ server?
    Regards,
    jerry
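
    A hint on the error itself: "received bad authentication packet" from a TACACS+ server very often indicates a shared-key mismatch between the switch and the server. One common cause is pasting a key that was already encrypted (type 7, as in the config above) from another device. A sketch of how to exercise the whole AAA path from the switch, with "someuser"/"somepass" as placeholder credentials:
    test aaa group TACSERVER someuser somepass
    If that fails, try re-entering the key in clear text on the switch with 'tacacs-server key 0 <key>' and make sure the same string is configured on the server.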

  • New major release of Sun Storage 7000 software is available

    I noticed a new major release of the Sun Storage 7000 software is available at the website below.
    http://wikis.sun.com/display/FishWorks/Sun+Storage+7000+Series+Software+Updates

    The format change is for BDB Java Edition 5.0 only and does not apply to other BDB products. It is forward compatible, which means that a BDB JE application using JE 5.0 can use environments written with older versions of JE. However, it is not backward compatible; once your environment has been opened by JE 5.0, it has been converted to the new format and can no longer be read by older versions of JE.
    Just for the benefit of others who may not have read the changelog, here's the full text:
    JE 5.0.34 has moved to on-disk file format 8.
    The change is forward compatible in that JE files created with release 4.1 and earlier can be read when opened with JE 5.0.34. The change is not backward compatible in that files created with JE 5.0 cannot be read by earlier releases. Note that if an existing environment is opened read/write, a new log file is written by JE 5.0 and the environment can no longer be read by earlier releases.
    There are two important notes about the file format change.
    The file format change enabled significant improvements in the operation performance, memory and disk footprint, and concurrency of databases with duplicate keys. Environments which contain databases with duplicate keys must run an upgrade utility before opening an environment with this release. See the Performance section for more information.
    An application which uses JE replication may not upgrade directly from JE 4.0 to JE 5.0. Instead, the upgrade must be done from JE 4.0 to JE 4.1 and then to JE 5.0. Applications already at JE 4.1 are not affected. Upgrade guidance can be found in the new chapter, "Upgrading a JE Replication Group", in the "Getting Started with BDB JE High Availability" guide.
    I may not have fully understood your question, so let us know if that doesn't answer it.
    Edited by: Linda Lee on Dec 1, 2011 10:41 AM

  • UCS C-Series VIC-1225 to Nexus 5000 setup

    Hello,
    I have two Nexus 5000s set up with a vPC peer link. I also have a Cisco C240 M3 server with a VIC-1225 card that will be running ESX 5.1, plus four 2248 fabric extenders. I have been searching for best-practice information on how to set up this equipment. The Nexus gear is already running, so this is more about connecting the C240 and the VIC-1225 to the Nexus switches. I guess this is better than connecting to the fabric extenders, in order to minimize hops?
    All the documentation I have found involves setup/configuration with fabric interconnects, which I don't have and have been told I do not need. Does anyone have any info on this, and can you point me in the right direction to set this up correctly?
    More specifically, how should I connect the VIC-1225 card to the Nexus? Just create a regular vPC/port-channel to the Nexuses, use LACP, and set it to active?
    Do I need to make any configuration changes on the VIC card via the CIMC on the C240 server to make this work?

    Hello again, I'm stuck.
    This is what I have done: I have created the vPC between my ESX host and my two Nexus 5000 switches, but it doesn't seem to come up:
    S02# sh port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    Group Port-       Type     Protocol  Member Ports
          Channel
    4     Po4(SD)     Eth      LACP      Eth1/9(D)
    vPC info:
    S02# sh vpc 4
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    4      Po4         down*  success     success                    -
    vPC config:
    interface port-channel4
      switchport mode trunk
      switchport trunk allowed vlan 20,27,30,50,100,500-501
      spanning-tree port type edge trunk
      vpc 4
    interface Ethernet1/9
      switchport mode trunk
      switchport trunk allowed vlan 20,27,30,50,100,500-501
      spanning-tree port type edge trunk
      channel-group 4 mode active
    I'm unsure what I must configure on the Cisco C240 M3 (ESX host) side to make this work. I only have the two default interfaces (eth0 and eth1) on the VIC-1225 installed in the ESX host, and both have the VLAN mode set to TRUNK.
    Any ideas on what I am missing?
    Message was edited by: HDA
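
    One likely culprit, offered as a guess rather than a confirmed diagnosis: the standard vSwitch in ESXi 5.1 does not speak LACP (only the distributed switch gained LACP support), so a channel-group in mode active will sit in (SD)/(D) indefinitely. The usual workaround is a static port channel on the Nexus side, sketched below, paired with "Route based on IP hash" NIC teaming on the vSwitch:
    configure terminal
      interface ethernet 1/9
        no channel-group
        channel-group 4 mode on
    !--- mode on = static channel, no LACP negotiation
    The same change is needed on the member port of the peer 5K, and the IP-hash teaming policy on the vSwitch is required for a static EtherChannel to hash traffic correctly.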

  • VPC on Nexus 5000 with Catalyst 6500 (no VSS)

    Hi, I'm pretty new to the Nexus and UCS world, so I have many questions; I hope you can help with some answers.
    The diagram below shows the configuration we are looking to deploy. It is this way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.
    The blades inserted in the UCS chassis have Intel dual-port cards, so they do not support full failover.
    The questions I have are:
    - Is this my best deployment choice?
    - vPC depends heavily on the management interface on the Nexus 5000 for keepalive peer monitoring, so what is going to happen if the vPC breaks because:
         - one of the 6500s goes down?
              - STP?
              - What is going to happen to the EtherChannels on the remaining 6500?
         - the management interface goes down for any other reason?
              - Which one is going to be the primary Nexus?
    Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.
    Any help is appreciated.
    Devices
    · 2 Cisco Catalyst switches with two WS-SUP720-3B each (no VSS)
    · 2 Cisco Nexus 5010
    · 2 Cisco UCS 6120XP
    · 2 UCS chassis
         - 4 Cisco B200-M1 blades (2 per chassis)
              - Dual-port 10Gb Intel card (1 per blade)
    vPC Configuration on Nexus 5000
    TACSWN01
    feature vpc
    vpc domain 5
      reload restore
      reload restore delay 300
      peer-keepalive destination 10.11.3.10
      role priority 10
    !--- Enables vPC, defines the vPC domain and the peer for keepalive
    interface ethernet 1/9-10
      channel-group 50 mode active
    !--- Puts the interfaces on Po50
    interface port-channel 50
      switchport mode trunk
      spanning-tree port type network
      vpc peer-link
    !--- Po50 configured as the peer-link for vPC
    interface ethernet 1/17-18
      description UCS6120-A
      switchport mode trunk
      channel-group 51 mode active
    !--- Associates the interfaces to Po51, connected to UCS6120xp-A
    interface port-channel 51
      switchport mode trunk
      vpc 51
      spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    interface ethernet 1/19-20
      description UCS6120-B
      switchport mode trunk
      channel-group 52 mode active
    !--- Associates the interfaces to Po52, connected to UCS6120xp-B
    interface port-channel 52
      switchport mode trunk
      vpc 52
      spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- CONFIGURATION for connection to Catalyst 6506
    interface ethernet 1/1-3
      description Cat6506-01
      switchport mode trunk
      channel-group 61 mode active
    !--- Associates the interfaces to Po61, connected to Cat6506-01
    interface port-channel 61
      switchport mode trunk
      vpc 61
    !--- Associates vPC 61 to Po61
    interface ethernet 1/4-6
      description Cat6506-02
      switchport mode trunk
      channel-group 62 mode active
    !--- Associates the interfaces to Po62, connected to Cat6506-02
    interface port-channel 62
      switchport mode trunk
      vpc 62
    !--- Associates vPC 62 to Po62
    TACSWN02
    feature vpc
    vpc domain 5
      reload restore
      reload restore delay 300
      peer-keepalive destination 10.11.3.9
      role priority 20
    !--- Enables vPC, defines the vPC domain and the peer for keepalive
    interface ethernet 1/9-10
      channel-group 50 mode active
    !--- Puts the interfaces on Po50
    interface port-channel 50
      switchport mode trunk
      spanning-tree port type network
      vpc peer-link
    !--- Po50 configured as the peer-link for vPC
    interface ethernet 1/17-18
      description UCS6120-A
      switchport mode trunk
      channel-group 51 mode active
    !--- Associates the interfaces to Po51, connected to UCS6120xp-A
    interface port-channel 51
      switchport mode trunk
      vpc 51
      spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    interface ethernet 1/19-20
      description UCS6120-B
      switchport mode trunk
      channel-group 52 mode active
    !--- Associates the interfaces to Po52, connected to UCS6120xp-B
    interface port-channel 52
      switchport mode trunk
      vpc 52
      spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- CONFIGURATION for connection to Catalyst 6506
    interface ethernet 1/1-3
      description Cat6506-01
      switchport mode trunk
      channel-group 61 mode active
    !--- Associates the interfaces to Po61, connected to Cat6506-01
    interface port-channel 61
      switchport mode trunk
      vpc 61
    !--- Associates vPC 61 to Po61
    interface ethernet 1/4-6
      description Cat6506-02
      switchport mode trunk
      channel-group 62 mode active
    !--- Associates the interfaces to Po62, connected to Cat6506-02
    interface port-channel 62
      switchport mode trunk
      vpc 62
    !--- Associates vPC 62 to Po62
    vPC verification
    show vpc consistency-parameters
    !--- Shows the compatibility parameters
    show feature
    !--- Use it to verify that the vpc and lacp features are enabled
    show vpc brief
    !--- Displays information about the vPC domain
    EtherChannel configuration on the TAC 6500s
    TACSWC01
    interface range GigabitEthernet2/38 - 43
      description TACSWN01 (Po61 vPC61)
      switchport
      switchport trunk encapsulation dot1q
      switchport mode trunk
      no ip address
      channel-group 61 mode active
    TACSWC02
    interface range GigabitEthernet2/38 - 43
      description TACSWN02 (Po62 vPC62)
      switchport
      switchport trunk encapsulation dot1q
      switchport mode trunk
      no ip address
      channel-group 62 mode active
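
    On the keepalive questions above, as a general note: if only the peer-keepalive link fails while the vPC peer-link stays up, both 5Ks keep forwarding and the roles do not change; it is when the peer-link itself fails that the keepalive is used to detect whether the peer is still alive, in which case the operational secondary suspends its vPC member ports. A quick sketch of the commands for watching this state on either 5K:
    show vpc peer-keepalive
    !--- Keepalive status, source/destination, last send/receive times
    show vpc role
    !--- Operational vPC role (primary/secondary) and system priorities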

    ihernandez81,
    Between c1-r1 and c1-r2 there are no L2 links; ditto with d6-s1 and d6-s2. We did have a routed link just to allow orphan traffic.
    All the c1-r1 and c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 and hosp-n5k-s2. Port channels 203 and 204 carry exactly the same VLANs.
    The same is the case on the d6-s1 and d6-s2 side, except we converted them to a VSS cluster, so we only have Po203, with 4 x 10Gb links going to the 5Ks (2 from each VSS member to each 5K).
    As you can tell, what we were doing was extending VM VLANs between 2 data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
    If you got on any 5K you would see 2 port channels, 203 and 204, going to each 6500; again, when one pair went to VSS, Po204 went away.
    I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack: how would you hook up a 3750 stack to 2 6500s, and if you did, why would you run an L2 link between the 6500s?
    For us, using 4 10G ports between 6509s took ports that were too expensive (we had 6704s), so we used the 5Ks.
    Our blocking link was on one of the links between site1 and site2. If we did not have WAN connectivity there would have been no blocking or loops.
    Caution: if you go with 7Ks, beware of the inability to do L2/L3 via vPCs.
    Better?
    One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity, things you are migrating tend to keep working, unless they really break.

  • What are best practices for connecting asa to nexus 5000

    Just trying to get a feel for the best way to connect redundant ASAs to redundant Nexus 5000s.
    Using a vPC VLAN is fine, but then running a routing protocol isn't supported, so putting static routes on the 5000s works; however, the 5000 doesn't support IP SLA yet, so you can't really stop advertising the default route if your Internet goes down. Just looking for what is recommended.
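
    For reference, the static-route approach usually looks like the sketch below: point the default route at the ASA failover pair's shared inside address so the route survives an ASA failover (10.1.1.1 is a placeholder):
    configure terminal
      ip route 0.0.0.0/0 10.1.1.1
    !--- 10.1.1.1 = the ASA pair's active inside IP (placeholder)
    The remaining gap, as noted above, is that without IP SLA-style tracking the 5000 keeps using this route even when the path beyond the ASA is down.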

    You want to test a RAC upgrade on a non-RAC database. If you ask me, that is a risk, but it depends on many things:
    Application configuration: if your application is configured for RAC, FAN, etc., you cannot test it on non-RAC systems.
    Cluster upgrade: if your standalone database is RAC One Node, you can probably test your cluster upgrade there. If you have a non-RAC database, then you will not be able to test the cluster upgrade or CRS.
    Database upgrade: there are differences when you upgrade a RAC vs. a non-RAC database which you will not be able to test.
    I think the best way for you is to convert your standalone database to a RAC One Node database and test that; it will take you close to a multi-node RAC.
