Nexus 1000V - High availability of SC + VMotion
Hello,
I'm trying to configure a high-availability scenario for the Service Console and the VMotion interface. My goal is that, in the default state, the Service Console uses one physical NIC and the VMotion interface uses another NIC. If one of these NICs or its connection fails, both interfaces (SC + VMotion) should use the same remaining NIC.
Service Console and VMotion use two different VLANs and are connected to the Nexus.
My first idea was vPC-HM with MAC-Pinning, but I'm searching for a simple solution which can be easily deployed on many clusters/ESX.
I would appreciate any idea.
Greetz
Tobias
Hi,
You don't need to configure static pinning to make use of MAC pinning, i.e. you do not have to manually assign the SC and VMotion vmknics to specific uplinks. In particular, if you have a dedicated NIC for VMotion which carries only the VMotion VLAN and nothing else, the VMotion vmknic will get pinned to it by default. Similarly, if you have a dedicated NIC that carries just the two VLANs for SC and VMotion, both those vmknics will get pinned to that specific uplink, without any manual configuration.
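For what it's worth, one simple shape for the SC + VMotion failover goal in the original post is a single uplink profile with MAC pinning that both NICs attach to, carrying both VLANs. This is only a sketch - the profile name and VLAN IDs (10 for SC, 20 for VMotion) are placeholders, not values from this thread:

port-profile type ethernet sc-vmotion-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 10,20
channel-group auto mode on mac-pinning
no shutdown
system vlan 10
state enabled

With MAC pinning, each vmknic is pinned to one of the two NICs, and if a pinned NIC fails, its interfaces are re-pinned to the surviving NIC - which matches the failover behaviour asked about above.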
Regards,
Sundar.
Similar Messages
-
Nexus 1000v repo is not available
Hi everyone.
Cisco's Yum repo for the Nexus 1000v is not available at the moment. I am wondering: is this expected - has Cisco finished its experiment with the free Nexus 1k - or do I need to contact someone (who?) to ask them to fix this problem?
PS Link to the repo: https://cnsg-yum-server.cisco.com/yumrepo
Let's set the record straight here - to avoid confusion.
1. VEMs will continue to forward traffic in the event one or both VSMs are unavailable - this requires the VEM to remain online and not reboot while both VSMs are offline. VSM communication is only required for config changes (and LACP negotiation prior to 1.4).
2. If there is no VSM reachable and a VEM is rebooted, only the system VLANs will go into a forwarding state. All other non-system VLANs will remain down. This addresses the chicken-and-egg problem of a VEM needing to communicate with a VSM in the first place to obtain its programming.
The ONLY VLANs & vEth Profiles that should be set as system vlans are:
1000v-Control
1000v-Packet
Service Console/VMkernel for Mgmt
IP Storage (iSCSI or NFS)
Everything else should not be defined as a system VLAN, including VMotion - which is a common mistake.
**Remember that for a vEth port profile to behave like a system profile, the VLAN must be defined as a system VLAN on BOTH the vEth and Eth port profiles. It's a two-factor check. This allows port profiles that may not be critical, yet share the same VLAN ID, to behave differently.
There are a total of 16 profiles that can include system VLANs. If you exceed this, you can potentially run into issues where the opaque data pushed from vCenter is truncated, causing programming errors on your VEMs. Adhering to the limitations above should never lead to this situation.
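As a minimal sketch of that two-factor check (profile names and VLAN IDs are placeholders, not from this thread): VLAN 10 below only behaves as a system VLAN because it appears under "system vlan" in both the Eth uplink profile and the vEth profile:

port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 10,20
system vlan 10
no shutdown
state enabled
port-profile type vethernet mgmt-vmk
vmware port-group
switchport mode access
switchport access vlan 10
system vlan 10
no shutdown
state enabled

A vEth profile for ordinary VM traffic on VLAN 20 would simply omit the "system vlan" line, so VLAN 20 stays down on a rebooting VEM until a VSM is reachable.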
Regards,
Robert -
Cisco Nexus 1000v Virtual Switch for Hyper-V Availability
Hi,
Does anyone have any information on the availability of the Cisco Nexus 1000v virtual switch for Hyper-V. Is it available to download from Cisco yet? If not when will it be released? Are there any Beta programs etc?
I can download the 1000v for VmWare but cannot find any downloads for the Hyper-V version.
Microsoft Partner
Any updates on the Cisco Nexus 1000v virtual switch for Hyper-V? Just checked on the Cisco site, however still only the download for VMware and no trace of any beta version. Also posted the same question at:
http://blogs.technet.com/b/schadinio/archive/2012/06/09/windows-server-2012-hyper-v-extensible-switch-cisco-nexus-1000v.aspx
"Hyper-V support isn't out yet. We are looking at a beta for Hyper-V starting at the end of February or the beginning of March."
-Ian @ Cisco Community
|| MCITP: EA, VA, EMA, Lync SA, makes a killer sandwich. || -
Nexus 1000V VMotion between 2 different 1KV Switches
Hello Virtual Experts,
I was informed that you cannot vMotion from one N1kv in one domain ID instance to another N1kv in a different domain ID.
As I understand, every Nexus 1000v switch needs to be in its own domain.
If this is the case, how does Cisco facilitate VMotion between switches? How does Cisco facilitate long range Vmotion?
Any response is much appreciated.
/r
Rob
Robert,
You are correct - just as between any vDS and a standard vSwitch, you can't vMotion VMs between them (at least while the network interfaces are connected). VMotion will fail the network port group validation. The networking is what is tripping you up here, and it's not specific to Cisco; it's a VMware validation requirement.
With long distance vMotion, the VMs are still part of the same DVS so there's no issue here.
You have a couple options here.
1. You can do a cold migration, then re-assign the network binding on the destination switch. This would require VM downtime.
2. If going from a host connected to a vDS to a host using a vSwitch, you can create a temporary vSwitch on the source host, create a port group with the same name as the destination host's port group, give it an uplink, and then migrate the VM from there. This can be done online without downtime for the VM.
Not sure of any other methods, but if anyone else has an idea, feel free to share!
Regards,
Robert -
Can a Nexus 1000v be configured to NOT do local switching in an ESX host?
Before the big YES of "use an external Nexus switch with VN-Tag": the question is about a 3120 in a blade chassis that connects to ESX hosts with the 1000v installed. So the first hop outside the ESX host is not a Nexus box.
Looking for whether this is possible, if so how, and if not, where that might be documented. I have a client whose security policy prohibits switching (yes, even on the same VLAN) within a host (in this case a blade server). Oh, and there is an insistence on using 3120s inside the blade chassis.
Has to be the strangest request I have had in a while.
Any data would be GREATLY appreciated!
Thanks for the follow up.
So by private VLANs, are you referring to "PVLAN":
"PVLANs: PVLANs are a new feature available with the VMware vDS and the Cisco Nexus 1000V Series. PVLANs provide a simple mechanism for isolating virtual machines in the same VLAN from each other. The VMware vDS implements PVLAN enforcement at the destination host. The Cisco Nexus 1000V Series supports a highly efficient enforcement mechanism that filters packets at the source rather than at the destination, helping ensure that no unwanted traffic traverses the physical network and so increasing the network bandwidth available to other virtual machines" -
Cisco Nexus 1000v stops inheriting
Guys,
I have an issue with the Nexus 1000v: the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP link port profile, which means that not all VLANs get presented down that given trunk port. It's like it gets completely out of sync somehow. An example is below.
THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
show int trunk
Po9 100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
sh run int po9
interface port-channel9
inherit port-profile DATA-UP
switchport trunk allowed vlan add 438-439,446,449-450,591,850 (the system added this, not the user)
THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
show int trunk
Po2 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
sh run int po2
interface port-channel2
inherit port-profile DATA-UP
I have no idea why this keeps happening. When I remove the manual static trunk configuration on po9, everything is fine; a few days later, it happens again. It's not just po9 - there are at least 3 port-channels that it affects.
My DATA-UP link port-profile configuration looks like this, and all port-channels should reflect the VLANs allowed, but some are way off.
port-profile type ethernet DATA-UP
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
channel-group auto mode on sub-group cdp
no shutdown
state enabled
The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
The Cisco Nexus version is 4.2.1
Anyone seen this problem?
Cheers
Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure.
If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
There are two main guides. One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade. They're not very long guides, and should be easy to follow.
1000v Upgrade Guide:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
VEM Upgrade Guides:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
In a nutshell the procedure looks like this:
-Backup of VSM Config
-Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
-Upgrade standby VSM
-Perform switchover
-Upgrade image on old active (current standby)
-Upgrade VEM modules
One decision you'll need to make is whether to use Update Manager or not for the VEM upgrades. If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded & when. It will allow you to migrate VMs off the host, upgrade it, and then continue in this manner for all remaining hosts. The alternative is Update Manager, which can be a little sticky if it runs into issues. This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one. This is a non-stop process so there's a little less control from that perspective. My own preference: for any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
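The VSM-side steps above map roughly to commands like these - a sketch only, using the same boot-statement style the 1000v keeps in its running-config (image filenames are placeholders; the upgrade guide is authoritative for your release):

copy running-config startup-config
copy running-config bootflash:backup-run.cfg
! point the boot variables at the new images for both sups:
boot kickstart bootflash:nexus-1000v-kickstart.NEW.bin sup-1
boot system bootflash:nexus-1000v.NEW.bin sup-1
boot kickstart bootflash:nexus-1000v-kickstart.NEW.bin sup-2
boot system bootflash:nexus-1000v.NEW.bin sup-2
copy running-config startup-config
! reload the standby VSM, wait for it to come up on the new image, then:
system switchover
! finally reload the old active (now standby) and upgrade the VEMs

Run the pre-upgrade check script from the guide before any of this; it validates the old config against the new version.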
Let me know if you have any other questions.
Regards,
Robert -
Firewall between Nexus 1000V VSM and vCenter
Hi,
Customer has multiple security zones in environment, and VMware vCenter is located in a Management Security Zone. VSMs in security zones have dedicated management interface facing Management Security Zone with firewall in between. What ports do we need to open for the communication between VSMs and vCenter? The Nexus 1000V troubleshooting guide only mentioned TCP/80 and TCP/443. Are these outbound from VSM to vCenter? Is there any requirements from vCenter to VSM? What's the best practice for VSM management interface configuration in multiple security zones environment? Thanks.
Avi -
You need the connection between vCenter and the VSM anytime you want to add or make any changes to the existing port-profiles. This is how the port-profiles become available to the virtual machines that reside on your ESX hosts.
One problem when the vCenter is down is what you pointed out - configuration changes cannot be pushed
The VEM/VSM relationship is independent of the VSM/vCenter connection. There are separate VLANs or L3 interfaces that are used to pass information and heartbeats between the VSM and its VEMs.
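On the ports question: the "svs connection" on a VSM points at the vCenter address and port, so the VSM initiates the HTTP/HTTPS sessions. A rule sketch in generic ACL form, with placeholder addresses (192.0.2.10 standing in for the VSM mgmt0 address, 192.0.2.20 for vCenter):

permit tcp host 192.0.2.10 host 192.0.2.20 eq 80
permit tcp host 192.0.2.10 host 192.0.2.20 eq 443

I'm not aware of any connection initiated from vCenter toward the VSM - only the stateful return traffic comes back - but verify against the troubleshooting guide for your release before deploying.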
Jen -
Cisco Prime LMS High Availability
Hi,
I am trying to set up Prime LMS 4.2 with a pair of soft appliances. As I understand it, HA is possible with the use of Veritas/VMware for Windows/Solaris; I was wondering what high-availability options are possible with a pair of Prime LMS appliances. Can they form an active/secondary pair with data synchronization/redundancy of the LMS, on top of the traditional backup/restore of the LMS?
Any input is appreciated.
Thanks
As iceman said, in VMware you do not need a pair of host machines to configure HA. Pairs are managed using third-party HA services like Veritas.
In VMware's HA concept all host machines are pooled into one cluster, and in case of a host failure the VMs on that host are restarted on another host in the cluster. vMotion can also help to move a VM to another host.
That covers failure of the host where the VM resides. In case of failure of the VM itself, HA can be set for various actions, like an automatic restart when a hardware or OS failure is detected, or restarting on a backup host in another cluster when a failure is detected.
You need to check the available HA options in VMware, and you can consider HA via third-party applications like Veritas as well.
-Thanks
Vinod
**Support Contributors. Rate them. ** -
Nexus 1000V private-vlan issue
Hello
I need to transmit both the private VLANs (as a promiscuous trunk) and regular VLANs on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that?
Thank you in advance
Lucas
Control vlan is a totally separate VLAN from your Service Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
We only need an interface on the ESX host if you decide to use L3 control. In that instance you would create or use an existing VMK interface on the ESX host. -
Cisco Nexus 1000v on Hyper-v 2012 R2
Dears;
I have deployed Cisco Nexus 1000v on Hyper-V 2012 R2 hosts, and I'm in the phase of testing and exploring features. While doing this I removed the Nexus virtual switch (VEM) from a host; it disappeared from the host, but I couldn't reuse the uplink previously attached to the switch, as the host still sees it as attached to the Nexus 1000v. I tried to remove it several ways; finally the host became unusable and I had to set it up again.
The question here: there is no mention in the Cisco documents of how to uninstall or remove the VEM attached to a host. Can anyone help with this?
Thanks
Regards
Zoning is generally a term used with Fibre Channel, but I think I understand what you mean.
Microsoft Failover Clusters rely on shared storage, so you would configure your storage so that it is accessible from all three nodes of the cluster. Any LUN you want to be part of the cluster should be presented to all nodes. With iSCSI, it is recommended to use two different IP subnets and configure MPIO. The LUNs have to be formatted as NTFS volumes. Run the cluster validation wizard once you think you have things configured correctly. It will help you find any potential configuration issues.
After you have run a cluster validation and there aren't any warnings left that you can't resolve, build the cluster. The cluster will form with the available LUNs as storage to the cluster. Configure the storage as Cluster Shared Volumes for the VMs, and leave the witness as the witness. By default, the cluster will take the smallest LUN as the witness disk. If you are just using the cluster for Hyper-V (recommended) you do not need to assign drive letters to any of the disks.
You do not need, nor is it recommended, to use pass-through disks. There are many downsides to using pass-through disks, and maybe one benefit, and that one is very iffy.
. : | : . : | : . tim -
Nexus 1000v 4.2.1 - Interface Ethernet3/5 has been quarantined due to Cmd Failure
Hello,
I get the error message "Interface Ethernet3/5 has been quarantined due to Cmd Failure" when I try to activate the system uplink ports on the Nexus 1000v VSM. The symptom occurs under 4.2.1.SV1.4 (a fresh setup; I did earlier tests with 4.0.4). Unfortunately, the link to the 4.2.1 troubleshooting guide does not work (it seems it hasn't been released yet).
Does anyone have an idea what the root cause could be?
The VSM and VEM run on an HP DL3xxG7 with 2 x dual-port 10Gbit CNA adapters.
Nexus 1k config:
vlan 1
vlan 260
name Servers
vlan 340
name NfsA
vlan 357
name vMotion
vlan 920
name Packet_Control
port-profile type ethernet SYSTEM-UPLINK
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
spanning-tree port type edge trunk
switchport trunk native vlan 1
channel-group auto mode active
no shutdown
system vlan 1,357,920
state enabled
port-profile type ethernet STORAGE-UPLINK
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 340
channel-group auto mode active
no shutdown
system vlan 340
state enabled
When I do a "no shut" on the physical ports I get:
switch(config-if)# no shut
2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/7 has been quarantined due to Cmd Failure
2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/5 has been quarantined due to Cmd Failure
The other etherchannel (Port Profile STORAGE-UPLINK) does work pretty well...
The peer switches are two Nexus 5k with VPC.
config:
port-profile type port-channel VMWare-LAN
switchport mode trunk
switchport trunk allowed vlan 260, 301, 303, 305, 307, 357, 544, 920
spanning-tree port type edge trunk
switchport trunk native vlan 1
state enabled
!
interface port-channel18
inherit port-profile VMWare-LAN
description CHA vshpvm001 LAN
vpc 18
speed 10000
!
interface Ethernet1/18
description CHA vshpvm001 LAN
switchport mode trunk
switchport trunk allowed vlan 260,301,303,305,307,357,544,920
channel-group 18 mode active
switch# show port-profile sync-status
Ethernet3/5
port-profile: SYSTEM-UPLINK
interface status: quarantine
sync status: out of sync
cached commands:
errors:
cached command failed
recovery steps:
unshut interface
Ethernet3/7
port-profile: SYSTEM-UPLINK
interface status: quarantine
sync status: out of sync
cached commands:
errors:
cached command failed
recovery steps:
unshut interface
kind regards,
andy
Sean,
thank you !
"show accounting log" helped me - I had the command "spanning-tree port type edge trunk" in the config, and I somehow didn't realize that we didn't have this command in the 4.0.4 lab setup. So it was a copy/paste error (I copied the port-profile config from the N5k down to the N1k).
Fri Feb 25 07:20:32 2011:update:ppm.13880:admin:configure terminal ; interface Ethernet3/5 ; spanning-tree port type edge trunk (FAILURE)
Fri Feb 25 07:20:32 2011:update:ppm.13890:admin:configure terminal ; interface Ethernet3/5 ; shutdown (FAILURE)
As the N1k doesn't do STP at all (or does it?), it's no wonder that the CLI was complaining...
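For anyone hitting the same quarantine error, the working profile is simply the original one from this thread minus the unsupported spanning-tree command:

port-profile type ethernet SYSTEM-UPLINK
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
switchport trunk native vlan 1
channel-group auto mode active
no shutdown
system vlan 1,357,920
state enabled

followed by a "no shutdown" on the quarantined Ethernet interfaces, per the recovery steps shown in the "show port-profile sync-status" output.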
Maybe this command should get more attention in the tshoot guide as it seems to be a very helpful one.
Cheers & Thanks,
Andy -
Is there really any true need for port-security on Nexus 1000v for vethernet ports? Can a VM be assigned a previously used vethernet port that would trigger a port-security action?
If you want to prevent admins or malicious users from being able change the mac address of a VM then port-security is a useful feature. Especially in VDI environments where users might have full admin control of the VM and can change the mac of the vnic.
Now about veth ports. A veth gets assigned to a VM and stays with that VM. A veth is only released when either the NIC on the VM is deleted, or the NIC is assigned to another port-profile on the N1KV or a port group on a vSwitch or VMware DVS. When the veth is released it does not retain any of the prior information; it's freed up and added to a pool of available veths. When a veth is needed for a VM, in either the same port-profile or a different one, a free veth will be grabbed and initialized. It does not retain any of the previous settings.
So assigning a VM to a previously used veth port should not trigger a violation. The MAC should get learned and traffic should be able to flow. -
Nexus 1000v port-channels questions
Hi,
I’m running vCenter 4.1 and Nexus 1000v and about 30 ESX Hosts.
I’m using one system uplink port profile for all 30 ESX hosts. On each of the ESX hosts I have 2 NICs going to a Catalyst 3750 switch stack (Switch A), and another 2 NICs going to another Catalyst 3750 switch stack (Switch B).
The Nexus is configured with the “sub-group CDP” command on the system uplink port profile like the following:
port-profile type ethernet uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
switchport trunk native vlan 500
mtu 1500
channel-group auto mode on sub-group cdp
no shutdown
system vlan 988-989
description System-Uplink
state enabled
And the port channel on the Catalyst 3750 are configured like the following:
interface Port-channel11
description ESX-10(Virtual Machine)
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 800,802,900,988-991
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
end
interface GigabitEthernet1/0/18
description ESX-10(Virtual Machine)
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 800,802,900,988-991
switchport mode trunk
switchport nonegotiate
channel-group 11 mode on
spanning-tree portfast trunk
spanning-tree guard root
end
interface GigabitEthernet1/0/1
description ESX-10(Virtual Machine)
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 800,802,900,988-991
switchport mode trunk
switchport nonegotiate
channel-group 11 mode on
spanning-tree portfast trunk
spanning-tree guard root
end
Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750s should not be configured in a port-channel like above, but as individual trunks.
First question: Is the above statement correct, are my uplinks configured wrong? Should they be configured individually in trunks instead of a port-channel?
Second question: If I need to add the MAC pinning configuration to my system uplink port-profile, can I create a new system uplink port profile with the MAC pinning configuration and then move one ESX host (with no VMs on it) at a time to that new system uplink port profile? This way I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC pinning configuration?
Thanks.
Hello,
From what I understood, you have the following setup:
- Each ESX host has 4 NICS
- 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
- all 4 vmnics on the ESX host use the same Ethernet port-profile
- this has 'channel-group auto mode on sub-group cdp'
- The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP: with CDP loss, the port-channels would go down.
'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this; they can be just regular trunk ports. If all your ports are on the same stack, then you can look at LACP. The CDP option will not be supported in future releases. In fact, it was supposed to be removed in 4.2(1)SV1(2.1) but I still see the command available (ignore the 4.2(1)SV1(4) next to it) - I'll follow up on this internally:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
For migrating, the best option would be as you suggested. Create a new port-profile with mac-pinning and move one host at a time. You can migrate VMs off the host before you change the port-profile and can remove the upstream port-channel config as well.
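A mac-pinning version of the uplink profile from the original post would look something like this - a sketch only (the profile name is made up; same VLANs as above; verify syntax against the interface configuration guide for your release):

port-profile type ethernet uplink-macpin
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
switchport trunk native vlan 500
mtu 1500
channel-group auto mode on mac-pinning
no shutdown
system vlan 988-989
state enabled

On the 3750 side you would then drop "channel-group 11 mode on" from the member interfaces so each one is a plain trunk port.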
Thanks,
Shankar -
Nexus 1000v: Control VLAN must be same VLAN as ESX hosts?
Hello,
I'm trying to install nexus 1000v and came across the below prerequisite.
The below release notes for Nexus 1000v states
VMware and Host Prerequisites
The VSM VM control interface must be on the same Layer 2 VLAN as the ESX 4.0 host that it manages. If you configure Layer 3, then you do not have this restriction. In each case however, the two VSMs must run in the same IP subnet.
What I'm trying to do is create 2 VLANs - one for management and the other for control & data (as per the latest deployment guide, we can put control & data in the same VLAN).
However, I wanted to have all ESX host management on the same VLAN as the VSM management, as well as the vCenter management - essentially creating a management network.
However, given the "VMware and Host Prerequisites" above, does this mean I cannot do this?
Do I need to have the ESX host management on the same VLAN as the control VLAN?
Would that mean my ESX hosts reside in a different VLAN than my management subnet?
Thanks...
Control vlan is a totally separate VLAN from your Service Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
We only need an interface on the ESX host if you decide to use L3 control. In that instance you would create or use an existing VMK interface on the ESX host. -
Nexus 1000v VSM can't communicate with the VEM
This is the configuration I have on my vsm
!Command: show running-config
!Time: Thu Dec 20 02:15:30 2012
version 4.2(1)SV2(1.1)
svs switch edition essential
no feature telnet
banner motd #Nexus 1000v Switch#
ssh key rsa 2048
ip domain-lookup
ip host Nexus-1000v 172.16.0.69
hostname Nexus-1000v
errdisable recovery cause failed-port-state
vem 3
host vmware id 78201fe5-cc43-e211-0000-00000000000c
vem 4
host vmware id e51f2078-43cc-11e2-0000-000000000009
priv 0xa2cb98ffa3f2bc53380d54d63b6752db localizedkey
vrf context management
ip route 0.0.0.0/0 172.16.0.1
vlan 1-2
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type ethernet vmware-uplinks
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1-3967,4048-4093
channel-group auto mode on
no shutdown
system vlan 2
state enabled
port-profile type vethernet Management
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet vMotion
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet ServidoresGestion
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet L3-VSM
capability l3control
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
system vlan 2
state enabled
port-profile type vethernet VSG-Data
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet VSG-HA
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
vdc Nexus-1000v id 1
limit-resource vlan minimum 16 maximum 2049
limit-resource monitor-session minimum 0 maximum 2
limit-resource vrf minimum 16 maximum 8192
limit-resource port-channel minimum 0 maximum 768
limit-resource u4route-mem minimum 1 maximum 1
limit-resource u6route-mem minimum 1 maximum 1
interface mgmt0
ip address 172.16.0.69/25
interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.1.1.bin sup-1
boot system bootflash:/nexus-1000v.4.2.1.SV2.1.1.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.1.1.bin sup-2
boot system bootflash:/nexus-1000v.4.2.1.SV2.1.1.bin sup-2
svs-domain
domain id 1
control vlan 1
packet vlan 1
svs mode L3 interface mgmt0
svs connection vcenter
protocol vmware-vim
remote ip address 172.16.0.66 port 80
vmware dvs uuid "ae 31 14 50 cf b2 e7 3a-5c 48 65 0f 01 9b b5 b1" datacenter-name DTIC Datacenter
admin user n1kUser
max-ports 8192
connect
vservice global type vsg
tcp state-checks invalid-ack
tcp state-checks seq-past-window
no tcp state-checks window-variation
no bypass asa-traffic
vnm-policy-agent
registration-ip 172.16.0.70
shared-secret **********
policy-agent-image bootflash:/vnmc-vsmpa.2.0.0.38.bin
log-level
For some reason my VSM can't see the VEM. I could before, but then my server crashed without doing a copy run start, and when it booted up all my config except the uplinks was lost.
When I tried to configure the connection again it wasn't working.
I'm also attaching a screen capture of the vds
and a capture of the regular switch.
I will appreciate very much any help you could give me and will provide any configuration details that you might need.
Thank you so much.
Carlos,
Looking at vds.jpg, you do not have any VEM vmkernel interface attached to port-profile L3-VSM. To fix the VSM-VEM communication problem, either migrate your VEM management vmkernel interface to the L3-VSM port-profile of the vDS, or create a new VMkernel port on your VEM/host and attach it to the L3-VSM port-profile.
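Once a vmkernel interface is attached to L3-VSM, a few standard commands help confirm the module comes up - shown here as a general checklist, not output from this environment:

show svs domain      (on the VSM: confirms L3 control mode and domain status)
show module          (on the VSM: the host should appear as a VEM module)
vemcmd show card     (on the ESX host: shows the VEM's view of its VSM connectivity)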