PVLANs on Nexus1000v and FI/Nexus7K
Hello All,
I'm trying to implement PVLANs in the following scenario:
desktop1 (vlan 100) == link1 == Nexus1000v == link2 == Fabric Interconnect == link3 == Nexus 7K == link4 == ASA
desktop2 (vlan 100)
server1 (vlan100)
server2 (vlan200)
Desktops 1 & 2 need to talk to servers 1 & 2 but not to each other, so I'm putting them into an isolated secondary VLAN (link1; calling it just one link for simplicity). Servers 1 & 2 will be in the promiscuous VLAN (link1).
What I'm not sure about is whether I need to configure the secondary VLANs on the Nexus 7K and the UCS FI, or whether configuring a promiscuous trunk on the Nexus 1000v (link2) would be enough. Communication between desktops 1 & 2 and server 2 is done via the ASA firewall (subinterfaces).
So where do I need to span secondary VLANs to?
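For reference, the kind of promiscuous PVLAN trunk I mean on link2 would look roughly like this. This is only a sketch: the profile name is made up, VLAN 101 is an assumed isolated secondary ID, and whether this actually removes the need for secondary VLANs on the FI/7K is exactly what I'm asking:

```
! Assumed IDs: primary VLAN 100, isolated secondary VLAN 101
vlan 100
  private-vlan primary
  private-vlan association 101
vlan 101
  private-vlan isolated

! Uplink (link2) as a promiscuous PVLAN trunk; frames would leave
! the N1Kv on the primary VLAN 100 only
port-profile type ethernet pvlan-uplink
  switchport mode private-vlan trunk promiscuous
  switchport private-vlan trunk allowed vlan 100
  switchport private-vlan mapping trunk 100 101
  no shutdown
  state enabled
```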
Thanks
Thanks for your reply,
The drops are not incrementing now, but we experienced network performance issues while the drops were occurring.
I'll try the vempkt capture as soon as possible, even though I have no drops right now.
Here are the port profiles for vmk2 and vmk3:
vmk3 :
port-profile 1120-VXLAN
type: Vethernet
description:
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
capability vxlan
switchport mode access
switchport access vlan 1120
no shutdown
evaluated config attributes:
capability vxlan
switchport mode access
switchport access vlan 1120
no shutdown
assigned interfaces:
Vethernet2
Vethernet4
Vethernet9
Vethernet11
Vethernet13
Vethernet15
Vethernet16
Vethernet17
Vethernet238
Vethernet240
Vethernet244
Vethernet680
Vethernet744
Vethernet878
port-group: 1120-VXLAN
system vlans: none
capability l3control: no
capability iscsi-multipath: no
capability vxlan: yes
capability l3-vn-service: no
port-profile role: none
port-binding: static
vmk2 :
port-profile 1100-vMotion
type: Vethernet
description:
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode access
switchport access vlan 1100
no shutdown
evaluated config attributes:
switchport mode access
switchport access vlan 1100
no shutdown
assigned interfaces:
Vethernet1
Vethernet3
Vethernet5
Vethernet6
Vethernet7
Vethernet8
Vethernet10
Vethernet12
Vethernet233
Vethernet239
Vethernet243
Vethernet673
Vethernet676
Vethernet877
port-group: 1100-vMotion
system vlans: none
capability l3control: no
capability iscsi-multipath: no
capability vxlan: no
capability l3-vn-service: no
port-profile role: none
port-binding: static
Thanks
Federica
Similar Messages
-
Nexus1000v and nexus1010 vlan id
Hi All,
When I read the Cisco doc about the Nexus 1010, I am a bit confused about the VLAN IDs between the Nexus 1010 and the Nexus 1000v.
The 1000v has 3 VLANs: mgmt, packet and control. Let's say mgmt VLAN = 100, packet VLAN = 200, control VLAN = 300.
The 1010 has mgmt and control VLANs as well. I am quite sure the 1010 mgmt VLAN should be the same as the 1000v's, which is VLAN 100.
What about the control VLAN on the 1010: should it be the same as the 1000v control VLAN 300, or can it be different?
If either case works, what's the recommendation from Cisco? (I guess a different control VLAN for the 1000v and the 1010 is better.)

Hi David,
The management VLAN is the only VLAN which needs to match between the Nexus 1010 and Nexus 1000v. This means the mgmt0 interface on the VSM needs to be on the same subnet as the mgmt0 interface that the Nexus 1010 appliance uses.
The control VLAN could be the same between the N1010 and N1Kv but it doesn't have to be.
Additionally, the Domain ID used needs to be different between the N1010 and N1Kv. Each svs-domain pair must have a unique domain-id. For example, the N1010 pair could share a domain-id of 10 and the first N1Kv pair would share domain-id 20.
So to make this specific to your example:
* Nexus 1010
- Management: VLAN 100
- Control: Could be VLAN 300 or something different (up to you)
- Domain ID: XX
* Nexus 1000v
- Management: VLAN 100
- Control: VLAN 300
- Domain ID: YY
- Packet: VLAN 200
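Purely as an illustration of the above (using hypothetical domain IDs 10 and 20 in place of XX and YY), the two svs-domain stanzas would look something like this:

```
! On the Nexus 1010 appliance
svs-domain
  domain id 10
  control vlan 300
  management vlan 100

! On the Nexus 1000v VSM pair (must use a different domain id)
svs-domain
  domain id 20
  control vlan 300
  packet vlan 200
```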
Hope that helps clarify things for you.
Thanks,
Michael -
I need to work out how to add a single port to a VLAN on a switch that is using PVLANS.
For example port 3/7 is in VLAN 600
I want to add port 4/6 to VLAN 600 but am being told 'unable to add port to vlan use pvlan command'
The only PVLAN is isolated, and I do not want port 4/6 isolated; it is a backup server and must be able to connect to other devices.
Any ideas?
Thanks

I guess it will look like this (if you run CatOS):
set pvlan mapping PRI_VLAN SEC_VLAN 4/6
Where PRI_VLAN is your primary VLAN id and SEC_VLAN is your isolated VLAN id.
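If the switch runs native IOS rather than CatOS, the rough equivalent (same placeholders) would be a promiscuous port mapping, though I haven't verified this on your platform:

```
interface FastEthernet4/6
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping PRI_VLAN SEC_VLAN
```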
//Mikhail Galiulin -
UCS 1.4 support for PVLAN
Hi all,
Cisco advises that UCS 1.4 supports PVLANs, but I see the following comments about PVLAN in UCS 1.4:
"UCS extends PVLAN support for non virtualised deployments (without vSwitch ) . "
"UCS release 1.4(1) provides support for isolated PVLAN support for physical server access ports or for Palo CNA vNIC ports."
Does this mean PVLANs won't work for virtual machines if the VMs are connected to UCS via a Nexus1000v or vDS, even though I am using the Palo (M81KR) card?
Could anybody confirm that?
Thank you very much!

I have not got that working so far... how would that traffic flow work?
1000v -> 6120 -> 5020 -> 6500s
(2) 10Gbe interfaces, one on each fabric to the blades. All VLANs (including the PVLAN parent and child VLAN IDs) are defined and added to the server templates - so propagated to each ESX host.
At this point, nothing can do layer 3 except for the 6500s. Let's say my primary VLAN ID for one PVLAN is 160 and the isolated vlan ID is 161...
On the Nexus 1000v:
vlan 160
name TEN1-Maint-PVLAN
private-vlan primary
private-vlan association 161
vlan 161
name TEN1-Maint-Iso
private-vlan isolated
port-profile type vethernet TEN1-Maint-PVLAN-Isolated
description TEN1-Maint-PVLAN-Isolated
vmware port-group
switchport mode private-vlan host
switchport private-vlan host-association 160 161
no shutdown
state enabled
port-profile type vethernet TEN1-Maint-PVLAN-Promiscuous
description TEN1-Maint-PVLAN-Promiscuous
vmware port-group
switchport mode private-vlan promiscuous
switchport private-vlan mapping 160 161
no shutdown
state enabled
port-profile type ethernet system-uplink
description Physical uplink from N1Kv to physical switch
vmware port-group
switchport mode trunk
switchport trunk allowed vlan all
channel-group auto mode on mac-pinning
no shutdown
system vlan 20,116-119,202,1408
state enabled
This works fine to and from VMs on the same ESX host (the PVLAN port-profiles work as expected)... If I move a VM over to another host, nothing works; I'm pretty sure not even within the same promiscuous port-profile. How does the 6120 handle this traffic? What do the frames get tagged with when they leave the 1000v?
Greetings forum. So I am working on a project that could potentially make use of PVLANs to isolate some hosting servers we are possibly going to bring online in the coming weeks. We currently have Catalyst 4507R switches as the core at our DC running IOS version 15.0(2)SG7. We are using SVIs for Inter-vlan routing on the 4507Rs. We are running VTP version 2 currently. I am trying to lock down some answers with regards to running PVLANs in my environment. Any help here is much appreciated.
I have read that to run PVLANs, you need to put your switches in transparent mode before enabling PVLANs. The thing I am not sure about is why. Do they say this because only VTP version 3 supports synchronization of PVLANs in the VTP domain, so without version 3 your PVLANs will not be propagated, or is there another reason to put your switches in transparent mode? I understand that without v3 I would have to manually configure the PVLANs on the switches that need them. I am just trying to understand if that's the reason they say to put VTP in transparent mode before implementing PVLANs, or if there is something else to it that I am missing. Can I run PVLANs using VTPv2 and manually configure the PVLANs on the switches that need them?
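For what it's worth, the manual per-switch configuration I have in mind would be something like this (VLAN IDs made up for illustration):

```
vtp mode transparent
!
vlan 601
 private-vlan isolated
vlan 600
 private-vlan primary
 private-vlan association 601
```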
Secondly, in order to switch from VTPv2 to VTPv3, are there any gotchas I need to be aware of? I have two VTP servers in my VTP domain. I understand VTPv3 works differently with regards to VTP updates. When I change to version 3, will the current VLAN database be overwritten, causing me to lose my current VLANs, or will the current VLANs stay as-is while I go about switching all my switches to VTPv3? I would like to avoid wiping out my network if possible.
Third, I have a VMWare ESX setup. These new hosts will be VM servers. We do not have a license to support the Distributed VSwitch which allows PVLAN support for ESX VMs. These VMs are running on a Dell M1000e chassis with Cisco WS-CBS3130G-S switches in it. They have limited support for PVLANs. We have VLAN trunk uplinked to our core switches from these blade switches, and then trunk to the VMWare standard switch so we can control VLAN placement of the VLAN hosts. Looks like this:
4507R ---trunk---> WS-CBS3130G-S ---trunk---> VMWareStandardSwitch ---access---> VMWare Guests
I believe there is some way to set up PVLANs with a topology like this, I think using an isolated PVLAN trunk port. The blade switch does not seem to support that feature (running 12.2(40)EX1). An isolated PVLAN trunk is said to be the desired practice when you have upstream switches which do not support PVLANs. Since my VMware switch does not support them, but the switch linking the core to the vswitch does somewhat, I am trying to understand the issues that would be seen there.
Again, lots of questions. Any help would be much appreciated.

Hi Hoan, we have a new manual that may be of help to you. I have attached it here. It is the newScale Service Library Quick Reference Design Guide. The guide provides an overview of the decisions newScale made when organizing our available service libraries, when pairing images with categories and services, as well as other tips and best practices, all focused on ease of use and navigation for the end user.
The link to the guide on ou -
Hi,
We are planning to install Cisco Nexus 1000v in our environment. Before we want to install we want to explore little bit about Cisco Nexus 1000v
• I know there are two elements to the Cisco 1000v, the VEM and the VSM. Is the VSM required? Can we configure the VEMs individually?
• How does the Nexus 1000v integrate with vCenter? Can we do all Nexus 1000v configuration from vCenter without going to the VEM or VSM?
• In terms of alarming and reporting, do we need SNMP traps/gets from each individual VEM, or can we use the VSM for that? Or can we get Nexus 1000v alarming and reporting from VMware vCenter?
• Apart from using a Nexus 1010, what's the recommended hosting location for the VSM (same host as a VEM, a different VM, a different physical server)?
Foyez Ahammed

Hi Foyez,
Here is a brief on the Nexus1000v and I'll answer some of your questions in that:
The Nexus1000v is a virtual distributed switch (software based) from Cisco which integrates with the vSphere environment to provide uniform networking across your VMware environment for the hosts as well as the VMs. There are two components to the N1K infrastructure: 1) the VSM and 2) the VEM.
VSM - The Virtual Supervisor Module is what controls the entire N1K setup and is where the configuration is done for the VEM modules, interfaces, security, monitoring, etc. The VSM is the component that interacts with the vCenter (VC).
VEM - The Virtual Ethernet Modules are simply the modules, or virtual linecards, which provide the connectivity options and virtual ports for the VMs and other virtual interfaces. Each ESX host today can have only one VEM. These VEMs receive their configuration/programming from the VSM.
If you are aware of other switching products from Cisco, such as the Cat 6k switches, the N1K behaves the same way but in a software/virtual environment, where the VSMs are the equivalent of the SUPs and the VEMs are similar to the line cards. The control and packet VLANs in the N1K provide the same kind of AIPC and inband connectivity as the 6k backplane would for communication between the modules and the SUP (the VSM in this case).
*The N1K configuration is done only from the VSM and is visible in the VC. However, the port-profiles created on the VSM are pushed from the VSM to the VC and have to be assigned to the virtual/physical ports from the VC.
*You can run the VSM either on the Nexus 1010 as a virtual service blade (VSB) or as a normal VM on any ESX/ESXi server. Running the VSM and a VEM on the same server is fully supported.
You can refer to the following deployment guide for some more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
Hope this answers your queries!
./Abhinav -
Private Vlan, Etherchannel and Isolated Trunk on Nexus 5010
I'm not sure if I'm missing something basic here; however, I thought I'd ask the question. I received a request from a client who is trying to separate traffic out of an IBM P780: one set of VIO servers/clients (Prod) is tagged with VLAN x going out LAG 1, and another set of VIO servers/clients (Test) is tagged with VLANs y and z going out LAG 2. The problem is that the management subnet for these devices is on one subnet.
The infrastructure: the host device is trunked via an LACP etherchannel to a Nexus 2148TP (5010), which then connects to the distribution layer, a Catalyst 6504 VSS. I have tried many things today; however, I feel that the correct solution is to use an isolated trunk (as the host device does not have private VLAN functionality), even though there is no requirement for hosts to be segregated. I have configured:
1. Private vlan mapping on the SVI;
2. Primary vlan and association, and isolated vlan on Distribution (6504 VSS) and Access Layer (5010/2148)
3. All Vlans are trunked between switches
4. Private vlan isolated trunk and host mappings on the port-channel interface to the host (P780).
I haven't had any luck. What I am seeing is that as soon as I configure the primary VLAN on the Nexus 5010 (v5.2) (vlan y | private-vlan primary), this VLAN (y) does not forward on any trunk on the Nexus 5010 switch, even without any other private VLAN configuration. I believe this may be the cause of most of the issues I am having. Has anyone else experienced this behaviour? Also, I haven't had a lot of experience with private VLANs, so I might be missing some fundamentals in this configuration. Any help would be appreciated.

Hello Emcmanamy, Bruce,
Thanks for your feedback.
Just like you, I have been facing the same problem over the last few months with my customer.
Regarding PVLANs on FEX, and as concluded in Bruce's previous posts, I understand:
You can configure a host interface as an isolated or community access port only.
We can configure “isolated trunk port” as well on a host interface. Maybe this specific point could be updated in the documentation.
This ability is documented here =>
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_N2_1/b_Cisco_n5k_layer2_config_gd_rel_513_N2_1_chapter_0101.html#task_1170903
You cannot configure a host interface as a promiscuous port.
You cannot configure a host interface as a private VLAN trunk port.
Indeed, a PVLAN is not allowed on a trunk defined on a FEX host interface.
However, since NX-OS 5.1(3)N2(1), the 'PVLAN on FEX trunk' feature is supported, but a command has to be activated first: system private-vlan fex trunk. When it is entered, a warning about the presence of 'FEX isolated trunks' is prompted.
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_N2_1/b_Cisco_n5k_layer2_config_gd_rel_513_N2_1_chapter_0101.html#task_16C0869F1B0C4A68AFC3452721909705
All these conditions are not met on a N5K interface.
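To illustrate (interface and VLAN IDs are hypothetical; please check the linked guide for the exact syntax on your release), an isolated PVLAN trunk on a FEX host interface would look roughly like:

```
system private-vlan fex trunk
!
interface Ethernet101/1/10
  switchport mode private-vlan trunk secondary
  switchport private-vlan trunk allowed vlan 160-161
  switchport private-vlan association trunk 160 161
  no shutdown
```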
Best regards.
Karim -
N1010/N1000v : VMWare Vcenter - VSM (L3) on N1010 - VEM on ESX-host
Hi Vishal Mehta,
I am currently busy with the rollout of a nexus1010/nexus1000v.
Current status :
1/ Nexus 1010 (primary/secondary) setup is done/OK.
2/ Adding the VSBs for the Nexus 1000v is likewise done/OK.
3/ VSB (VSM) to VMware vCenter communication (show svs connections :)) is done/OK.
4/ VEM activation on an ESX host: this is the part where I'm a bit lost (sorry for that :( ). How should I understand the VSM-to-VEM communication? In other words, can an ESX host (with a vSwitch or distributed vSwitch) stay active in parallel with a VEM on that same ESX host? Based on my current testing, everything points in the direction of migrating from an existing vSwitch to a VEM setup.
[Note: I have a VSM-to-VEM Layer 3 setup configured.]
4bis/ What about the mgmt communication of the ESX-host vSwitch vs. the mgmt communication of the ESX-host VEM (which is in my case Layer 3)?
Is there a good one-pager available which shows the difference between a VMware (distributed) vSwitch and VMware with the Cisco Nexus 1000v VEM? And additionally, which parts can run in parallel between VMware vCenter and an ESX host when it comes to controlling the vSwitch and the Nexus 1000v VEM?
Many thanks for your replies,
Best Regards,
Joost.

Hi Joost,
You have scored on all the initial setup steps and are almost there with VSM-VEM :)
To answer your main question: yes, an ESXi host can have multiple active virtual switches in parallel.
That is to say, you can have VMware's DVS, the Nexus 1000v VEM, vSwitch 1, vSwitch 2, ..., vSwitch X all on at the same time.
The separation at the switching level happens on the basis of which VMs (via port-groups) use which virtual switch.
The uplinks (network adapters, vmnics) of the host are distributed across the virtual switches (they CANNOT be shared),
so multiple active virtual switches give you the flexibility to segregate your virtual workloads across those uplinks.
Now, regarding the L3 mode between the VSM and VEM:
you can either use the existing mgmt interface (vmk0) for communication between the ESXi host (VEM) and the Nexus 1000v VSM,
or you can have a dedicated (separate from mgmt) IP subnet with a new VMkernel interface (say vmk1) for VEM-VSM communication.
Please refer below document which walks through the scenario you have implemented:
https://communities.cisco.com/docs/DOC-28631
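As a rough sketch only (VLAN ID hypothetical), the vethernet port-profile for a dedicated L3 control vmk would carry the l3control capability and be marked as a system VLAN:

```
port-profile type vethernet L3-Control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 100
  system vlan 100
  no shutdown
  state enabled
```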
We don't have a specific document comparing the Nexus 1000v with other distributed virtual switches,
but a few of the advantages of opting for the Nexus 1000v are: it's free, you get all the NX-OS features, and it's a separate entity which can be owned/managed by the network team, plus the other special features which I presented in the webcast recordings above.
A common deployment I have seen in the field is customers using a vSwitch for mgmt (vmk0) and other host-specific functions,
and the Nexus 1000v VEM for additional NX-OS functionality like LACP, PVLAN, QoS, ERSPAN and the virtual machine traffic.
Thank you!!
Regards,
Vishal -
Nexus1000v Configuration and Migration from vmware vDS
I really need some guidance here... I am trying to deploy the 1000v in L3 mode, which to my understanding uses mgmt0 and control0 only; packet and control traffic are carried on the same interface. So here are my questions:
Do I still need 3 port-groups for mgmt, packet, and control, or can I do control and mgmt only?
Do my VSMs still need 3 NICs? If 3 port-groups are not configured but I need 3 NICs, do you map two NICs to the control VLAN?
I am not using VLAN 1 at all; my native VLAN is set to 202 in UCSM, which is the same VLAN my ESXi hosts reside in, so the ESXi management port group on my current vDS does not use a VLAN ID. Does VLAN 1 in the 1000v map to my native VLAN of 202, or do I need to configure the 1000v specifically for VLAN 202? The examples I am looking at use VLAN 1 everywhere, and it confuses me.
The VLAN that is created for control0 has to be different from my mgmt0 VLAN, but does this VLAN need to exist on the links from the FI to the 5108 chassis?
I am getting really lost on the whole native VLAN and VLAN 1 thing... I have literally been at this for months, with many "hands up, give up" moments.
show run
!Command: show running-config
!Time: Sat Apr 19 01:01:37 2014
version 4.2(1)SV2(2.2)
svs switch edition essential
no feature telnet
username admin password 5 $1$pIdF9m7q$PIhIpsr//2BIkySzd5y9r. role network-admin
banner motd #Nexus 1000v Switch#
ip domain-lookup
ip host N1KV-01 10.170.202.5
switchname N1KV-01
errdisable recovery cause failed-port-state
vem 3
host id c4b52629-fbe7-e211-0000-000000000005
snmp-server user admin network-admin auth md5 0x7bfb0100d1a2c5faf79c77aad3c8ecec priv 0x7bfb0100d1a2c5faf79c77aad3c8ecec localizedkey
snmp-server community atieppublic group network-operator
ntp server 10.170.5.10
vrf context management
ip route 0.0.0.0/0 10.170.202.1
vlan 1,5,201-205,900
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type ethernet VM-Sys-Uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,5,201-205,900
switchport trunk native vlan 202
channel-group auto mode on mac-pinning
no shutdown
system vlan 1,5,201-205,900
state enabled
port-profile type vethernet Mgmt1
vmware port-group
switchport mode access
switchport access vlan 1
no shutdown
state enabled
port-profile type vethernet N1KV-Control
vmware port-group
switchport mode access
switchport access vlan 201
no shutdown
system vlan 201
state enabled
port-profile type vethernet vMotion
vmware port-group
switchport mode access
switchport access vlan 203
no shutdown
state enabled
port-profile type vethernet Servers-Prod
vmware port-group
switchport mode access
switchport access vlan 5
no shutdown
state enabled
port-profile type vethernet N1kV-Mgmt
vmware port-group
switchport mode access
switchport access vlan 202
no shutdown
system vlan 202
state enabled
port-profile type vethernet NS_NI_1_1
vmware port-group
switchport mode access
switchport access vlan 5
no shutdown
state enabled
port-profile type vethernet NFS
vmware port-group
switchport mode access
no shutdown
state enabled
port-profile type vethernet DMZ
vmware port-group
switchport mode access
system storage-loss log time 30
vdc N1KV-01 id 1
limit-resource vlan minimum 16 maximum 2049
limit-resource monitor-session minimum 0 maximum 2
limit-resource vrf minimum 16 maximum 8192
limit-resource port-channel minimum 0 maximum 768
limit-resource u4route-mem minimum 1 maximum 1
limit-resource u6route-mem minimum 1 maximum 1
interface mgmt0
ip address 10.170.202.5/24
interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.2.2.bin sup-1
boot system bootflash:/nexus-1000v.4.2.1.SV2.2.2.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.2.2.bin sup-2
boot system bootflash:/nexus-1000v.4.2.1.SV2.2.2.bin sup-2
svs-domain
domain id 202
control vlan 201
packet vlan 201
svs mode L3 interface mgmt0
svs connection vcenter
protocol vmware-vim
remote ip address 10.170.5.35 port 80
vmware dvs uuid "7a 82 10 50 a3 3c 4c fe-df 91 60 28 66 1d 6f 59" datacenter-name Nashville HQ
admin user n1kUser
max-ports 8192
connect
vservice global type vsg
tcp state-checks invalid-ack
tcp state-checks seq-past-window
no tcp state-checks window-variation
no bypass asa-traffic
vnm-policy-agent
registration-ip 0.0.0.0
shared-secret **********
log-level
N1KV-01# show vlan br
VLAN Name Status Ports
1 default active
5 VLAN0005 active
201 VLAN0201 active
202 VLAN0202 active
203 VLAN0203 active
204 VLAN0204 active
205 VLAN0205 active
900 VLAN0900 active
N1KV-01# show mod
Mod Ports Module-Type Model Status
1 0 Virtual Supervisor Module Nexus1000V ha-standby
2 0 Virtual Supervisor Module Nexus1000V active *
Mod Sw Hw
1 4.2(1)SV2(2.2) 0.0
2 4.2(1)SV2(2.2) 0.0
Mod Server-IP Server-UUID Server-Name
1 10.170.202.5 NA NA
2 10.170.202.5 NA NA
* this terminal session

Hi Steven,
Q: "Do I still need 3 port-groups for mgmt., packet, and control or can I do control and mgmt only?"
A: In reality, everything could be on the same VLAN, but that would be poor practice... So the answer is: control and management should be on two different L2 networks, which means two different port-profiles are needed. For the packet adapter (the third adapter), I would assign a dummy port-profile.
Q: "I am not using vlan 1 at all, my native vlan is set to 202 in USCM which is the same vlan my ESXi hosts reside in, so therefore the Esxi management port group on my current vDS does not use a vlan ID. Does the vlan 1 in 1000v map to my native vlan of 202 or do I need to configure the 1000v specifically for vlan 202? Examples I am looking at are using vlan 1 everyone and it confuses me."
A: VLAN 1 on the N1k does not map to native VLAN 202 on the UCS; you would need to configure the N1k specifically. If your vethernet port-profile is config'd for 'sw acc vlan 202' and your uplink port-profile has 'sw tr native vlan 202', the frames will be sent out of the N1k untagged, which is, I think, what you're going for...
Q: The vlan that is created for Control0 has to be different than my mgmt0 vlan, but does this vlan need to exist on the links from the FI to 5108 Chassis?
A: Yes. The control0 interfaces on the VSMs are used for HA heartbeats between the two VSMs. The traffic between the VSMs is L2, so if the VSMs live on different hosts, that control VLAN needs to exist end to end between the hosts.
Feel free to get back with questions and we can try to work through your scenario.
Thanks,
Joe -
Network segmentation policy and QOS settings on Nexus1000v
Hi all,
Does someone have an example of applying QoS (CoS marking) using a network segmentation policy, with the NSM feature enabled and vShield Manager/vCloud, on the Nexus1000v?
I'm using the current default policy:
network-segment policy default_segmentation_template
description Default template used for isolation backed pools
type segmentation
import port-profile NSM_template_segmentation
port-profile type vethernet NSM_template_segmentation
no shutdown
description NSM default port-profile for VXLAN networks.
state enabled
port-profile type vethernet <NSM Created>
vmware port-group
port-binding static auto expand
inherit port-profile NSM_template_segmentation
switchport access bridge-domain "xxxxxxxxx"
description NSM created profile.
state enabled
But now I need to configure a CoS marking for some org IDs (not for all organizations).
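One way this is commonly approached on the N1Kv (a sketch only; the policy and profile names are hypothetical, and I have not verified how it interacts with NSM-created profiles) is a QoS policy that sets CoS, attached to the per-organization port-profile:

```
! Mark all traffic from this org's profile with CoS 4 (value hypothetical)
policy-map type qos PM-ORG1-COS
  class class-default
    set cos 4
!
port-profile type vethernet ORG1_segmentation_template
  service-policy type qos input PM-ORG1-COS
```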
-
I have a new customer moving into my multi-tenant DC. The customer will have its own VRF but still wants further isolation within this VRF. They've got prod, dev and staging networks and would like to keep traffic strictly restricted within those areas. They also require some of these servers to be load-balanced and some non-load-balanced.
I have ACE modules running A2(2.4), and my plan is to use PVLANs within this VRF to keep the different areas isolated. But will the ACE support these PVLANs (even on newer versions), or will I have to use ACLs to keep the real servers in PVLAN_1 from talking to the VIPs of PVLAN_2?

PVLANs seem to be supported on the ACE: http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A1/configuration/rtg_brdg/guide/vlansif.html#wp1022397
http://www.cisco.com/en/US/partner/docs/interfaces_modules/services_modules/ace/vA4_1_0/configuration/rtg_brdg/guide/vlansif.html
Folks,
Not sure if this is the correct place to post, but I'm hopeful someone here will have used this scenario.
I have an existing and working PVLAN with the usual community and isolated ports, but I need to add a new setup to it.
I have a requirement to add a virtual server which will host several virtual machines, and they will all be on the same switch port.
Has anyone any experience of this, and if so, are there any pitfalls to look out for?
The virtual servers shouldn't need to talk to each other and should only be talking to the promiscuous port on the default gateway.
Thanks to anyone taking the time to read this or to reply.

Hi Bro
I've never configured PVLANs in any of my deployments, but I'm familiar with the dos and don'ts of PVLANs.
Basically, your question here is: in one physical server you'll have many virtual servers. Each of these virtual servers will have a different IP address but the same MAC address, connected to a single switchport of a Cisco switch.
Even if you were to configure that particular switchport as "isolated", this still wouldn't resolve your issue, because you want complete separation of one virtual server from another within the same physical server connected to a single switchport of a Cisco switch.
I did some homework for you, and I believe VMWARE can do this. Please do refer to
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010691
P/S: If you think this comment is useful, please do rate them nicely :-) -
Trunking between Pix and pvlans
Hi,
I already wrote here about this problem, but no one answered me, so I'm trying again, phrasing the question in a different way... :-)
I have a Cisco switch (Catalyst 6500) and a Cisco firewall (PIX 535). On the switch, I configured various PVLANs, then I associated them with primary VLANs.
On the PIX, I created logical interfaces with the same VLANs as on the switch. The question is: why doesn't the trunk work? I know that trunking PVLANs between switches works without problems, but I cannot trunk PVLANs between a switch and a firewall. If I try to trunk a plain VLAN, it works! I can't find documentation about this. Can you please help me? Do you think it may be a software version problem?
Many thanks to all!!
If you think it's useful, I can post pix and switch main steps configurations..
Regards and kisses!! :-)
Daniela

Hi Robert,
The SG200 is a Layer 2 switch, and the SG300 defaults to Layer 2 mode. Let's say your default VLAN is 1 on each switch. You may create VLAN 10, then configure the port between the switches as 1u, 10t (VLAN 1 untagged, VLAN 10 tagged). This will allow VLANs 1 and 10 to be carried between the switches. Next, if you assign any port as an access port in VLAN 10, anything connecting to a VLAN 10 port on the same subnet will talk to each other.
View the picture below: the VLAN 10 computers will communicate with each other ONLY on VLAN 10. Since this is Layer 2 networking, the VLAN ID separates the Layer 2 traffic, while the Layer 3 information is separated by subnet. In a fully functional network, the router would need to support the multiple subnets, either through subinterfaces, multiple IP interfaces or dot1q trunks. If you introduced a router to this mix, the VLAN 1 subnet would get to the Internet while the VLAN 10 subnet would not.
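On the SG300's CLI, the 1u, 10t inter-switch port described above would look roughly like this (interface numbers hypothetical):

```
! Inter-switch link: VLAN 1 untagged (default/native), VLAN 10 tagged
interface gigabitethernet1
 switchport mode trunk
 switchport trunk allowed vlan add 10
!
! Access port for a VLAN 10 host
interface gigabitethernet2
 switchport mode access
 switchport access vlan 10
```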
-Tom
Please rate helpful posts -
PVLAN Config with UCS and 1000v Question ??
We are attempting to set up a multi-tenant environment where the customer would like each tenant to have a single subnet which is segmented with private VLANs. The design calls for three (3) UCS chassis. Two (2) of the chassis are fully populated with half-width blades, and the third chassis has two half-width blades. The fully populated chassis will all be ESX hosts with Palo adapters and will run the 1000v.
As their current VLAN plan is to have approximately 10 private VLANs per tenant/subnet, I'm concerned this design will not scale well at all within a UCS environment due to the limitations on the total number of vNICs/vHBAs per chassis. I viewed a post where it was indicated that we could bypass the vNIC limitation by simply trunking all VLANs down to the VEM and configuring all private VLANs on the 1000v only. This would allegedly alleviate the vNIC limitation in a larger multi-tenant environment. Is this a valid and supported design/configuration, and does this actually work? Or do we instead need to create a vNIC for every private VLAN we want to present to each ESX host, as recommended/required in the config guides?

Hey Joey,
Thanks for the response. I've added this same post to the TAC case but i'll update this discussion with the same for anyone else who may be interested.
Our customer is still debating their requirement for PVLAN use within this pod. However, if they choose to move forward, my primary concern is mostly with the UCS configuration related to PVLANs, how the UCS PVLAN configuration differs (if at all) with the integration of the 1000v, and whether this multi-tenant setup will cause the Fabric Interconnects to exceed their maximum VIF count. Regarding the UCS configuration, it's my understanding that for every isolated private VLAN we would like to present to a blade, we need to create a separate vNIC in UCS. This customer is attempting to construct an environment based on the FlexPod model where multiple tenants would be present. Their idea was to create a single subnet for each tenant and then isolate tier/purpose traffic via Layer 2 PVLANs within each tenant subnet. That is where the need for the PVLANs comes in; it is simply a customer request in their design.
So, how this relates to my primary questions: if we have to add 5+ vNICs for every tenant introduced into the pod, my understanding is that we will easily exceed 120 VIFs per chassis in no time. It's my understanding that we have a total of (15 * [number of I/O Module uplinks] - 2) VIFs available. (I'm assuming this works out to 118 total VIFs for two fully populated 2104s per chassis?) Currently, the design calls for twelve (12) VIFs (10 vNICs and 2 vHBAs) as a standard on each of the eight blades in each chassis. On top of this, we would begin adding tenant-specific vNICs for each tenant's PVLANs (if that is the proper/required config). However, I have since read that if we integrate the 1000vs into the environment, the need to create a new vNIC in UCS for every isolated PVLAN no longer applies: all that is required is to trunk all "parent" VLANs down to the VEMs, and the PVLAN config can be performed at the 1000v level only. It has been recommended to configure this environment similarly to the strategy used when the upstream switch does not understand/perform PVLANs. Is this correct, or would we still need to add the vNICs to the service profiles in UCS even when integrating the Nexus 1000vs?
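For reference, a hedged sketch of what the "1000v-only" approach could look like. The VLAN numbers and profile names below are assumptions, not from this design: the primary/secondary VLANs are defined only on the 1000v, guest vNICs land in a PVLAN host port-profile, and the uplink runs as a promiscuous PVLAN trunk so the upstream (FI/UCS) only needs to carry the primary VLAN:

```
! Hypothetical PVLAN config confined to the Nexus 1000v
vlan 100
  private-vlan primary
  private-vlan association 101
vlan 101
  private-vlan isolated

! vEthernet profile for isolated guest VMs
port-profile type vethernet TENANT-A-ISO
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101
  no shutdown
  state enabled

! Uplink profile: promiscuous PVLAN trunk toward the FI.
! Isolated VLAN 101 is mapped back to primary 100 here,
! so the upstream only ever sees VLAN 100.
port-profile type ethernet UPLINK-PVLAN
  switchport mode private-vlan trunk promiscuous
  switchport private-vlan trunk allowed vlan 100
  switchport private-vlan mapping trunk 100 101
  no shutdown
  state enabled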
I have yet to really find a document that discusses the use of PVLANs within the UCS environment when implementing Nexus 1000v which would tie all of these questions together.
Thanks,
Eric -
Hi all,
one simple question: is it possible to trunk a Catalyst 6500 to a Catalyst 4500 with an EtherChannel and transport a private VLAN (in my case, one isolated VLAN)?
Thanks in advance.
I don't think this is possible.