Nexus 5548 - DCNM - SAN Connectivity
I'm having an issue seeing the switch and managing VSANs and zoning from within DCNM Fabric Manager. Device Manager and Fabric Manager both discover the switches; however, Fabric Manager won't let me configure anything. When I try to edit the local zone database of the active VSAN I get the error message "SNMP: No Switches are available", and the switch in the picture pane has a red line across it too. What am I missing here? Is there configuration needed on the 5548 to allow me to manage it?
FC ports are set up and active, and FC devices show up in the FLOGI database just fine. I'd rather not manage this via the CLI!
I am having this issue too in one of my three DCNM environments, all of a sudden. Everything is the same: DCNM version 6.2(3), and all switches are running system software 6.2(1). The only thing I have done recently was connect some Cisco blade NPV switches.
Similar Messages
-
Unable to discover a Nexus 5548 with DCNM 5.2(2e)
Hello,
I am unable to discover two Nexus 5548s with the SAN client of DCNM 5.2(2e).
These Nexus switches are used as both LAN and SAN switches; each Nexus is its own SAN fabric. I want to use DCNM to configure the zones/zonesets via the GUI. These Nexus 5548s run the 5.1(3)N2(1b) release.
The Nexus switches are NOT managed via the mgmt interface (out-of-band) but via an interface VLAN (in-band).
I could not correctly configure:
- the snmp-server user (SNMP v1/v2 or v3 user + group?) in the CLI on the Nexus
- the discovery so that DCNM discovers each fabric, from either the web GUI or the Java SAN client
Please help!
I believe DCNM requires an SSH login to the Nexus, not just SNMP.
DCNM uses the NETCONF over SSH protocol. See this earlier posting. -
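As a starting point, here is a minimal sketch of the switch-side credentials DCNM typically needs for discovery: an SSH login plus an SNMPv3 user (NX-OS keeps SNMPv3 and local CLI users in sync, so one account can cover both). The username, role, and passwords below are placeholders, not values from this thread:

```
! On each Nexus 5548 (username/passwords are examples only)
feature ssh
snmp-server user dcnmadmin network-admin auth sha S3cretPw priv S3cretPw
! Use these same credentials as the SNMPv3 username/password
! when adding the seed switch in DCNM discovery.
```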
Connecting IBM v7000 to Nexus 5548
20-Sep-2012 16:51 (in response to feisalb)
IBM V7000 with Nexus 5548UP and Nexus 4000 Design/implemetation guide
Hi Guys
I have a question in regards to connecting IBM v7000 directly to Nexus5548.
CAN WE DO THIS?
Our current setup is IBM v7000 -> MDS 9124 -> Nexus 5548.
But our MDS 9124 are out of warranty now and we need to take them out of production. And only way we can do this is if we connect our IBM v7000 fibre ports directly to our Nexus 5548.
Can someone please point me to the right direction any knowledge base articles etc.
Thanks Heaps
Sid
Dear prkrishn,
I am working on the Data Center Solution between two Data Center, details underneath
DC 1 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
DC 2 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
With the above-mentioned setup, can I configure FCIP between DC1 and DC2 using the Nexus 7009? Or do I need an FCIP-capable switch such as the IBM SAN switch (SAN06-BR)? I was wondering if FCIP can be configured on the Nexus 7009 DC switch.
Hoping for your kind response at the earliest.
Kind Regards,
Arnold -
Cisco Nexus 5548UP to Brocade SAN Connectivity
Hi,
I have a server-SAN-storage setup as shown in the attachment/below. There are new Cisco UCS rack-mount servers with VIC 1225 adapters, and virtual servers on the VMware ESXi hypervisor, connected to a new Cisco Nexus 5548UP switch in IP+FCoE mode. The new 5548 switch then connects to an existing production SAN: a Brocade 48000 Director SAN switch that connects to storage in the production environment.
I need to know the best way to connect the 5548 switch to the Brocade SAN switch without disrupting the existing production SAN environment, i.e. the Brocade SAN switch configuration and setup (e.g. fabric principal switch, priority, etc.).
Would configuring the 5548 switch in NPV mode be best practice?
See Figure 6-2, Converged Multi-hop FCoE Network Design Using FCoE NPV:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/fcoe/513_n1_1/ops_fcoe/ops_fcoe_npv.html
You have to use NPV on the N5k to connect to Brocade; there is no FC interop support on the N5k. -
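A minimal sketch of what enabling NPV on the 5548 might look like; note that `feature npv` erases the switch's FC configuration and reloads it, so plan a maintenance window (interface numbers below are examples, not taken from this setup):

```
! On the Nexus 5548UP (disruptive: wipes FC config and reloads)
feature npv
! Uplink toward the Brocade core becomes an NP port
interface fc2/1
  switchport mode NP
  no shutdown
! Server-facing FC ports stay in F mode
interface fc2/15
  switchport mode F
  no shutdown
```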
Servers connected to Nexus 5548 only getting 200 Mbps of throughput
Servers connected to the Nexus 5k were only getting 100 Mbps of throughput, so I disabled flow-control receive on all the ports. After this we are getting 200 Mbps. The servers are connected through 10-gig ports. Could you please suggest why the throughput is still so low? We should be getting at least 1 Gbps.
Hi Adam,
I think we probably need a little more information to go on. Can you answer the following?
What type of servers and NICs?
What OS are you running on the servers?
What cables do you have from the servers to the switch?
Are the two servers in the same subnet or is the traffic between them routed?
If routed, is that in the Nexus 5548 or some other router?
How are you testing throughput?
Presumably you're not seeing any errors on the switch ports that the servers are connected to?
Regards -
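On the last question in the list above: a reproducible way to measure raw TCP throughput between the two servers is iperf rather than a file copy (file copies can be disk-bound). A sketch, with the hostname as a placeholder:

```
# On server A (receiver)
iperf3 -s
# On server B (sender): 30-second TCP test with four parallel streams
iperf3 -c serverA -t 30 -P 4
```

If iperf shows near line rate but file transfers stay slow, the bottleneck is likely storage or the application rather than the switch.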
DCNM SAN 6.2 + Nexus 5500 not using interface mgmt
When trying to discover a newly set-up fabric, we set the SVI of one of the Nexus 5500 switches as the seed device in the DCNM SAN server. The problem seems to be the same in DCNM 6.2 as in DCNM 5.x: one cannot use an SVI. After adding the seed switch with the IP address of the SVI, it appears in Admin -> Data sources -> Fabric with the IP address of the mgmt interface. The problem is that this IP address cannot be reached from the DCNM SAN server, since we have only used a crossover Ethernet cable over the mgmt interfaces for the peer-keepalive.
Is there a way in DCNM SAN 6.2 to force SAN management over an SVI instead of the mgmt interface?
Here is the solution:
In the menu bar at the top (the lower one), go to Admin > Server Properties.
Scroll to the middle of the list, locate "fabric.managementIpOverwrite" and set it to false.
Restart the DCNM servers. -
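For reference, the same setting can be made directly in DCNM's server properties file; the path below is typical for a default install but may differ in yours, so treat it as an assumption:

```
# <DCNM_install_dir>/dcm/fm/conf/server.properties  (path may vary)
# When false, DCNM keeps the seed IP you entered (e.g. an SVI)
# instead of overwriting it with the switch's mgmt0 address.
fabric.managementIpOverwrite=false
```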
Installing DCNM-SAN as a VSB in Nexus 1010
Hi everyone,
I'm trying to install DCNM-SAN as a VSB in a Nexus 1010. I've read in the release notes that since the 5.2.x release the distribution is done via a single image, and there's an image available right now for 5.2(2).
But although I've tried numerous times (and read a lot through the two available documents, the Installation and Licensing Guide and Fundamentals), every time I create a new VSB with these images it becomes a DCNM-LAN install.
I have no prior experience with DCNM, so I might be missing something.
Any ideas? What am I missing? Am I trying to do something that is not supported? Or am I just missing a critical point?
NOTE: This is currently a lab topology, but I have a customer that's very keen on using DCNM-SAN with their MDS switches, and I feel that having this on the 1010 as a VSB is the best way to do it, if it can be done, so that network staff don't have to deal with servers, etc.
Thanks,
Emre
Andrew,
We are a little frustrated with the VSB version of DCNM-SAN: it does not appear possible to manage a fabric without advanced licensing for the switches, yet the DCNM licensing guides list an essentials capability that should include port-, switch-, and fabric-level configuration. On the Windows version we can manage the fabric unlicensed as long as we do it from the server directly or, in previous versions, when running standalone. These options do not appear to exist for the VSB version. Is there any plan to include any essentials capability in the VSB/OVF versions of DCNM-SAN? If this is not planned, the documentation should be changed to state that essentials functions are only available in the Windows version. -
Connecting NEXUS 5548 1gig interface to 100mbps
Hi,
I have a 5548 that I need to connect to a firewall that supports 100 Mbps only.
Can I configure the interface speed on a Nexus 5548 interface (GLC-T) to 100 Mbps in order to connect it to the firewall?
Regards,
Sabih
Hi Sabih,
The interfaces on a Nexus 5548 can NOT be configured as 100 Mbps.
If you wish to connect to the firewall via a 100 Mbps connection, you will need to make use of a Fabric Extender (Nexus 2000) that supports 100 Mbps.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
Thanks,
Michael -
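Once a 100 Mbps-capable FEX (e.g. a Nexus 2248TP) is online behind the 5548, the host-facing FEX port can be set to 100 Mbps; a sketch with example FEX and interface numbering:

```
! On the Nexus 5548, for a host interface on FEX 101
interface ethernet101/1/1
  speed 100
  duplex full
  no shutdown
```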
Connectivity Issue between Nexus 5548 to VNX 5300
Hi All,
I am doing a lab setup where I want to connect a Nexus 5548UP to VNX 5300 storage directly. The physical connectivity between the switch and the storage is established, but on the Nexus the port status shows "linkFailure". I tried matching the port mode (e.g. auto, F) and speed, but the port always shows "linkFailure".
The connectivity from the Nexus to the VNX is FC.
Can anyone suggest the root cause or any troubleshooting steps?
Regards,
Abhilash
"linkFailure" might be a GUI status.
show interface fcx/y might say:
Link failure or not connected
The physical layer link is not operational.
This means the switch is not detecting light, so the physical layer is the suspect: the cable and the lasers (SFPs, HBAs, or whatever adapter the VNX uses). It could also mean you need to bring the interface up from the VNX side. -
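A few NX-OS checks that usually narrow a linkFailure down (fc1/29 is just an example interface; exact command availability can vary slightly by release):

```
show interface fc1/29              ! look for "Link failure or not connected"
show interface fc1/29 transceiver  ! is the SFP present and supported?
show vsan membership interface fc1/29
```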
Does anybody know if (and when) DCNM-SAN will be available as a Nexus 1010 VSB, as it is with DCNM-LAN now?
PS: Here is my previous question on how (as I wrongly assumed it was possible, looking at the documentation), which was answered by Padma (a generous and kind Cisco employee).
Thanks,
Emre
Hi Andrew,
Great news, thanks... Looking forward for the release -
UCS FI 6248 to Nexus 5548 San port-channel - not working
Hi all,
I'm sure I am missing something fairly obvious and stupid but I need several sets of eyes and help.
Here is the scenario:
I want to be able to create san port-channels between the FI and Nexus. I don't need to trunk yet as I can't even get the channel to come up.
UCS FI 6248:
Interfaces fc1/31-32
Nexus 5548
interfaces fc2/15-16
FI is in end-host mode and Nexus is running NPIV mode with fport-channel-trunk feature enabled.
I'm going to output the relevants configurations below.
Nexus 5548:
NX5KA(config)# show feature | include enabled
fcoe 1 enabled
fex 1 enabled
fport-channel-trunk 1 enabled
hsrp_engine 1 enabled
interface-vlan 1 enabled
lacp 1 enabled
lldp 1 enabled
npiv 1 enabled
sshServer 1 enabled
vpc 1 enabled
interface san-port-channel 133
channel mode active
no switchport trunk allowed vsan all
switchport trunk mode off
interface fc2/15
switchport trunk mode off
channel-group 133 force
no shutdown
interface fc2/16
switchport trunk mode off
channel-group 133 force
no shutdown
NX5KA# show vsan membership
vsan 1 interfaces:
fc2/13 fc2/14
vsan 133 interfaces:
fc2/15 fc2/16 san-port-channel 133
vsan 4079(evfp_isolated_vsan) interfaces:
vsan 4094(isolated_vsan) interfaces:
NX5KA# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
summary header
Group Port- Type Protocol Member Ports
Channel
133 San-po133 FC PCP (D) FC fc2/15(D) fc2/16(D)
UCS Fabric Interconnect outputs:
UCS-FI-A-A(nxos)# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
summary header
Group Port- Type Protocol Member Ports
Channel
133 San-po133 FC PCP (D) FC fc1/31(D) fc1/32(D)
UCS-FI-A-A(nxos)#
UCS-FI-A-A(nxos)# show run int fc1/31-32
!Command: show running-config interface fc1/31-32
!Time: Fri Dec 20 22:58:51 2013
version 5.2(3)N2(2.21b)
interface fc1/31
switchport mode NP
channel-group 133 force
no shutdown
interface fc1/32
switchport mode NP
channel-group 133 force
no shutdown
UCS-FI-A-A(nxos)#
UCS-FI-A-A(nxos)# show run int san-port-channel 133
!Command: show running-config interface san-port-channel 133
!Time: Fri Dec 20 22:59:09 2013
version 5.2(3)N2(2.21b)
interface san-port-channel 133
channel mode active
switchport mode NP
!Command: show running-config interface san-port-channel 133
!Time: Sat May 16 04:59:07 2009
version 5.1(3)N1(1)
interface san-port-channel 133
channel mode active
switchport mode F
switchport trunk mode off
Changed it as you suggested...
Followed the order of operations for "no shut"
Nexus FC -> Nexus SAN-PC -> FI FC -> FI SAN-PC.
Didn't work:
NX5KA(config-if)# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
summary header
Group Port- Type Protocol Member Ports
Channel
133 San-po133 FC PCP (D) FC fc2/15(D) fc2/16(D)
NX5KA(config-if)#
Here is the output as you requested:
NX5KA(config-if)# show int san-port-channel 133
san-port-channel 133 is down (No operational members)
Hardware is Fibre Channel
Port WWN is 24:85:00:2a:6a:5a:81:00
Admin port mode is F, trunk mode is off
snmp link state traps are enabled
Port vsan is 133
1 minute input rate 1256 bits/sec, 157 bytes/sec, 0 frames/sec
1 minute output rate 248 bits/sec, 31 bytes/sec, 0 frames/sec
3966 frames input, 615568 bytes
0 discards, 0 errors
0 CRC, 0 unknown class
0 too long, 0 too short
2956 frames output, 143624 bytes
0 discards, 0 errors
46 input OLS, 41 LRR, 73 NOS, 0 loop inits
257 output OLS, 189 LRR, 219 NOS, 0 loop inits
last clearing of "show interface" counters never
Member[1] : fc2/15
Member[2] : fc2/16
NX5KA(config-if)#
NX5KA(config-if)# show int brief
Interface Vsan Admin Admin Status SFP Oper Oper Port
Mode Trunk Mode Speed Channel
Mode (Gbps)
fc2/13 1 auto on sfpAbsent -- -- --
fc2/14 1 auto on sfpAbsent -- -- --
fc2/15 133 F off init swl -- 133
fc2/16 133 F off init swl -- 133 -
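For comparison, a sketch of the NPIV-core side configuration that normally pairs with a UCS FI in end-host mode. Note that on the FI itself the SAN port-channel should be created through UCSM (SAN tab > FC Port Channels), not from the `connect nxos` shell, since NX-OS-level changes on a Fabric Interconnect are not persistent or supported:

```
! Nexus 5548 (NPIV core) side for san-port-channel 133
feature npiv
feature fport-channel-trunk
interface san-port-channel 133
  channel mode active
  switchport mode F
  switchport trunk mode off
interface fc2/15
  switchport mode F
  switchport trunk mode off
  channel-group 133 force
  no shutdown
interface fc2/16
  switchport mode F
  switchport trunk mode off
  channel-group 133 force
  no shutdown
vsan database
  vsan 133 interface san-port-channel 133
```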
Storage Connectivity with Nexus 5548
Hi all,
How do I connect storage to an N5K? Is it possible to connect my storage to a normal N5k port, or do we have to use FCoE ports? What configuration is required for the same?
Thanks in advance..
Regards,
Ajith
Hello,
To connect native FC/FCoE into any port on an N5548 you would require the N5548UP model ("UP" is short for Unified Ports, meaning the ports can take FC/FCoE/Ethernet). If you have an N5548P, you will require one of the expansion modules for your SAN connectivity, as all fixed ports are Ethernet only.
The available expansion modules are listed on the data sheet here.
The configuration guides are available here.
HTH -
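On a 5548UP the fixed ports default to Ethernet, so the ports intended for FC must be carved as unified FC ports first. A sketch (port numbers are examples; FC ports must be the last contiguous ports of the module, and the change takes effect only after a reload):

```
feature fcoe
slot 1
  port 31-32 type fc
! ... reload required ...
interface fc1/31
  switchport mode F
  no shutdown
```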
Hi,
I have two SUN 4270 M2 servers connected to a Nexus 5548 switch over 10 Gb fiber cards. I am getting a transfer rate of just 60 MB per second while transferring a 5 GB file between the two servers; that is about the same speed I used to get on a 1 Gb network. Please suggest how to improve the transfer speed. On the servers, ports eth4 and eth5 are bonded in bond0 with mode=1. The server environment will be used for OVS 2.2.2.
Below are the details of the network configuration on the servers. Quick help would be highly appreciated.
[root@host1 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:5648589 errors:215 dropped:0 overruns:0 frame:215
TX packets:3741680 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2492781394 (2.3 GiB) TX bytes:3911207623 (3.6 GiB)
[root@host1 network-scripts]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:52961 errors:215 dropped:0 overruns:0 frame:215
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3916644 (3.7 MiB) TX bytes:0 (0.0 b)
[root@host1 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]#
[root@host1 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4c
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4d
[root@host1 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host1 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0e224c no bond0.301
vlan302 8000.90e2ba0e224c no vif1.0
bond0.302
vlan303 8000.90e2ba0e224c no bond0.303
vlan304 8000.90e2ba0e224c no bond0.304
[root@host2 test]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:4416730 errors:215 dropped:0 overruns:0 frame:215
TX packets:2617152 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:190977431 (182.1 MiB) TX bytes:3114347186 (2.9 GiB)
[root@host2 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:28616 errors:3 dropped:0 overruns:0 frame:3
TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4982317 (4.7 MiB) TX bytes:80029 (78.1 KiB)
[root@host2 test]#
[root@host2 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 test]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth5
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:14
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:15
[root@host2 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host2 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0fc315 no bond0.301
vlan302 8000.90e2ba0fc315 no bond0.302
vlan303 8000.90e2ba0fc315 no bond0.303
vlan304 8000.90e2ba0fc315 no vif1.0
bond0.304
Thanks....
Jay
Hi,
Thanks for the reply, but the RX error count keeps increasing, and the transfer speed between the two servers is at most 60 MB/s on the 10 Gb card. Even to storage I get the same speed when I try to transfer data from a server on the 10 Gb card. The servers and storage are connected through the Nexus 5548 switch.
#ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:21187303 errors:1330 dropped:0 overruns:0 frame:1330
TX packets:17805543 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:624978785 (596.0 MiB) TX bytes:2897603160 (2.6 GiB)
JP -
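One observation from the output posted above: bond0 runs in mode=1 (active-backup), so only one 10 Gb link is ever active, and the climbing RX frame/error counters suggest a physical-layer issue (cable/SFP) on the active slave that should be fixed first. If aggregate bandwidth is the goal and the 5548 ports are placed in an LACP port-channel (`channel-group N mode active`), the matching host-side bonding options would look roughly like this on a RHEL5-era system (a sketch, not the poster's config):

```
# /etc/modprobe.conf (example; requires LACP on the switch ports)
alias bond0 bonding
options bond0 mode=4 miimon=100 xmit_hash_policy=layer3+4
```

Note that a single TCP flow still uses only one link; hashing only spreads multiple concurrent flows.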
DCNM SAN Client - SNMP Timeout
Hello Everyone,
Right now I am struggling with the configuration of the DCNM SAN client.
The installation and licensing (with some trouble) were finished a few days ago. The DCNM LAN client is working well and there are no problems with connectivity or timeouts. SSH and SNMPv3 are working fine for both clients (SAN & LAN).
Discovering fabrics for the DCNM SAN client is working very well too. But when I start the DCNM SAN client and open a session to a fabric, it looks like this:
If I try to "edit local full zone database" it looks like this:
I tried to troubleshoot this behaviour, but at the moment I am not able to figure out the problem.
Maybe someone in this community can?
Best Regards,
Sascha
This can be closed.
I got all answers... For the benefit of others who are searching answers for the same question:
1) I just downloaded the DCNM client for LAN. Is there a separate client for SAN? I don't see any download links for SAN. --- Yes, DCNM SAN has a separate client. After installation, you can open the DCNM page over HTTP, and you will find the download links for DCNM LAN, SAN, and Device Manager in the top right corner.
2) Is it possible to add Brocade switches to DCNM-SAN? Does it require additional licenses? --- I don't think this is possible.
3) Can we use the Fabric Manager license "N5000FMS1k9" for DCNM SAN, or do we need the "DCNM-SAN-N5K-K9" license for the Nexus 5000? --- Yes, use the same license and generate the PAK file; FMS licenses do work in DCNM-SAN.
Good luck -
FCoE with Cisco Nexus 5548 switches and VMware ESXi 4.1
Can someone share with me what needs to be setup on the Cisco Nexus side to work with VMware in the following scenario?
Two servers with two cards dual port FCoE cards with two ports connected to two Nexus 5548 switches that are clusterd together. We want to team the ports together on the VMware side using IP Hash so what should be done on the cisco side for this to work?
Thanks...
Andres,
The Cisco roadmap for the 5010 and 5020 doesn't include extending the current total (12) of supported FEXes. The 5548 and 5596 will support more (16) per 55xx, and with the 7K will support up to 32 FEXes.
Documentation has been spotty on this subject, because the term "5k" suggests that all 5000-series switches will support extended FEX counts, which is not the case: only the 55xx will support more than 12 FEXes. Maybe in the future the terminology for the 5k series should distinguish the "5000 series" and "5500 series" Nexus; there are several differences and advancements between the two series.
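On the original question: VMware's "Route based on IP hash" teaming on a standard vSwitch requires a static EtherChannel (no LACP), and with the two host uplinks split across two clustered 5548s that means a vPC. A sketch per host, with interface/vPC numbers as examples (vPC domain and peer-link configuration omitted):

```
! On each Nexus 5548 vPC peer
interface port-channel20
  switchport mode trunk
  vpc 20
interface ethernet1/20
  switchport mode trunk
  channel-group 20 mode on   ! static: required for IP-hash teaming
```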