OVM 3.1.1 iSCSI multipathing

I am setting up an OVM 3.1.1 environment at a customer site where iSCSI LUNs are presented from an EMC filer. I have a few questions:
* What is the proper way to discover a set of iSCSI LUNs when the storage unit has 4 unique IP addresses on 2 different VLANs? If I discover all 4 paths, they present in the GUI as 4 separate SAN servers, and the LUNs show up scattered across all 4 of them. By my simple logic, if I were to lose access to one of those SAN servers, the LUNs that happen to be presented via it would disappear and become inaccessible. I know this isn't actually the case, because multipath -ll on the OVM server shows 4 distinct paths to each LUN I'm expecting to see, and I've verified that multipath is working by downing one of the two NICs allocated to iSCSI: two of the four paths fail, but I can still access the disk just fine. Is this just me not setting things up the right way in the GUI, or is the GUI implemented poorly here and in need of a redesign so it's clear to both myself AND the customer?
* Has anyone used the Storage Connect plug-ins for either iSCSI or Fibre Channel storage with OVM? What do they actually do for you, and are they easier to implement than unmanaged storage? Is it worth the hassle?

Here are the notes I had written down:
== change iSCSI default timeout in /etc/iscsi/iscsid.conf for any future connections ==
* change node.session.timeo.replacement_timeout from 120 to 5
#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 5
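If you prefer to script the edit, a one-line sed does the same thing (a sketch; it assumes the active line reads exactly as above, and -i.bak keeps a backup of iscsid.conf):
sed -i.bak 's/^node.session.timeo.replacement_timeout = 120/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf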
== identify iSCSI LUNs ==
# iscsiadm -m session
tcp: [1] xx.xx.xx.xx:3260,4 iqn.1992-04.com.emc:cx.apm00115000338.b9
tcp: [2] xx.xx.xx.xx:3260,3 iqn.1992-04.com.emc:cx.apm00115000338.b8
tcp: [3] xx.xx.xx.xx:3260,1 iqn.1992-04.com.emc:cx.apm00115000338.a8
tcp: [4] xx.xx.xx.xx:3260,2 iqn.1992-04.com.emc:cx.apm00115000338.a9
== confirm current active timeout value before the change ==
cat /sys/class/iscsi_session/session*/recovery_tmo
120
120
120
120
== manually change timeout on each iSCSI LUN for current active connections ==
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.b9 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.b8 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.a8 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.a9 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
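Rather than listing every target and portal by hand, the same update can be looped over all recorded nodes (a sketch; it assumes every recorded node should get the new timeout, and relies on omitting -p so the update applies to all portals of each target):
for target in $(iscsiadm -m node | awk '{print $2}' | sort -u); do
  # update the replacement timeout on every portal record for this target
  iscsiadm -m node -T "$target" -o update -n node.session.timeo.replacement_timeout -v 5
done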
== restart iscsi to make changes take effect ==
service iscsi stop
service iscsi start
NOTE: service iscsi restart and /etc/init.d/iscsi restart don't seem to work. Only by stopping, then explicitly starting the iscsi service does the change seem to apply consistently.
== restart multipathd ==
# service multipathd restart
Stopping multipathd daemon: [  OK  ]
Starting multipathd daemon: [  OK  ]
== Verify new timeout value on active sessions ==
cat /sys/class/iscsi_session/session*/recovery_tmo
5
5
5
5
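== verify multipath still sees all paths (optional sanity check) ==
After the restart it's worth confirming that each LUN still shows all of its paths; a sketch (WWIDs and sd devices are site-specific):
# multipath -ll
Each EMC LUN should list 4 paths in the active/ready state, two per iSCSI NIC.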

Similar Messages

  • Configure iSCSI multipath in OVM 3.1.1 with storage plug-in not possible?

    I have a configuration with 4 iSCSI paths to the storage system. All works fine if the discovery and login
    process is performed manually. Multipath is working well with 4 paths.
    # iscsiadm -m session
    tcp: [13] 192.168.10.1:3260,25 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM0CA3P0
    tcp: [14] 192.168.20.1:3260,26 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM0CA3P1
    tcp: [15] 192.168.10.2:3260,27 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM1CA3P0
    tcp: [16] 192.168.20.2:3260,28 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM1CA3P1
    # multipath -ll
    3600000e00d10000000100000000c0000 dm-2 FUJITSU,ETERNUS_DX400
    size=30G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=enabled
    | |- 25:0:0:1 sdg 8:96 active ready running
    | `- 26:0:0:1 sdi 8:128 active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      |- 27:0:0:1 sdk 8:160 active ready running
      `- 28:0:0:1 sdm 8:192 active ready running
    I want to configure iSCSI multipath in OVM Manager using the specific storage plug-in.
    I proceeded with "Discover SAN Server" using the following parameters:
    Name: Storage-Name
    Storage Type: iSCSI Storage Server
    Storage Plug-in: Fujitsu ETERNUS
    Admin Host: IP-Addr. of storage system
    Admin Username: user name to access storage system
    Admin Password: password
    Access Host (IP) Address:           192.168.10.1
    After this configuration I can see my storage system and have assigned physical volumes to an access group.
    Now the volumes are available on the OVM storage server, >>>> but I lost the multipath functionality <<<<<.
    # multipath -ll
    3600000e00d1000000010000000120000 dm-1 FUJITSU,ETERNUS_DX400
    size=30G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=50 status=active
      `- 33:0:0:1 sdd 8:48 active ready running
    The reason for this behavior is that only one Access Host (IP) Address can be specified.
    With this IP address a session was established … but only 1 session, not 4!
    How can I specify the remaining 3 paths to get proper multipath functionality?
    I know that Citrix XenServer accepts multiple Access Host (IP) Addresses,
    e.g. 192.168.10.1,192.168.10.2,192.168.20.1,192.168.20.2
    Thanks for any help.

    @user12273962
    Yes, you are right. The storage plug-in is for management of the storage within OVM.
    This is working well, but I'm still missing the multipath functionality. The storage plug-in is not responsible
    for establishing multipath; OVM should take care of this.
    @budachst
    Yes, OVM only opened one session and logged in to only one target, because only one Access Host (IP) Address
    (192.168.10.1) can be specified.
    # iscsiadm -m session
    tcp: [21] 192.168.10.1:3260,25 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM0CA3P0
    From my point of view it is not possible to configure the remaining targets, so this is a design problem.
    Any new input and information is welcome.
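    A possible interim workaround (untested here, and outside what the plug-in manages) is to discover and log in to the remaining three portals manually on each OVM server, using the portal IPs from the session listing above:
    # iscsiadm -m discovery -t sendtargets -p 192.168.10.2:3260
    # iscsiadm -m discovery -t sendtargets -p 192.168.20.1:3260
    # iscsiadm -m discovery -t sendtargets -p 192.168.20.2:3260
    # iscsiadm -m node -L all
    after which multipath -ll should again show four paths per volume, though OVM Manager may not be aware of the extra sessions.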

  • ESX 5.1 and iSCSI multipathing

    Hello.
    I have a problem configuring a vSphere cluster with iSCSI multipathing.
    Each host in the cluster has four physical interfaces, and four vmk interfaces for iSCSI, each mapped to an individual physical interface.
    Storage is an IBM Storwize 3700 with four configured targets (10.78.2.2 - 10.78.2.5).
    Why do I see only 4 paths to storage? I think there should be 16 (4 interfaces on the storage x 4 interfaces on the host), am I right?
    Rescanning the HBA does not resolve the problem.
    P.S. All paths on the storage work properly from another host that is not in the cluster.
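    For what it's worth, getting 16 paths usually requires each vmk to be bound to the software iSCSI adapter as a network port binding; a minimal sketch from the ESXi shell (assuming the software iSCSI adapter is vmhba33 and the vmk interfaces are vmk1-vmk4):
    # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
    # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4
    # esxcli iscsi networkportal list --adapter=vmhba33
    followed by a rescan of the adapter. If the bindings are already in place, the list command should show all four vmks.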

    Actually found the reference, it's in the firmware upgrade guide itself:
    All connectivity may be lost during firmware upgrades if you have configured both Enable Failover on one
    or more vNICs and you have also configured NIC teaming/bonding at the host operating system level. Please
    design for availability by using one or the other method, but never both.
    Is this still valid? What's the setting to be done on ESX NIC teaming if we want to use Enable Failover on UCS?

  • iSCSI multipath issue --- all sessions created only from the first NIC

    Hello everyone. I have an x86 host (X4200) with Solaris 10 U6 and 2 Intel NICs; the array is an EqualLogic PS5000. The EqualLogic array has only one group IP and only one target port (3260). After I configured all the host NICs and the array group IP in the same subnet, I did:
    1. iscsiadm add discovery-address ARRAY_GROUP_IP
    2. iscsiadm modify initiator-node -c 2
    3. iscsiadm modify discovery -t enable -s enable
    I found that every iSCSI session was created based on NIC1; only after I disabled NIC1 were all sessions created based on NIC2, rather than one session on NIC1 and one on NIC2. What's the problem?
    I had tried iscsiadm modify target-param -c NIC1IP,NIC2IP targetname, and that didn't help either.
    Does anyone have experience with this? Any reply is much appreciated.
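    For reference, a sketch of the Solaris 10 multiple-sessions-per-target setup (placeholders in angle brackets; I'm assuming the -c option accepts a comma-separated list of local IP addresses as well as a session count):
    iscsiadm modify initiator-node -c 2
    iscsiadm modify target-param -c <NIC1-IP>,<NIC2-IP> <target-IQN>
    iscsiadm list target -v
    If the sessions still all come up on NIC1, verify that both NICs really are plumbed on the array's subnet and that MPxIO is enabled (mpxio-disable="no" in /kernel/drv/iscsi.conf).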

  • Install OVM server 3.2.1 on iSCSI LUN

    Hello friends,
    I have a Cisco blade and would like to boot OVM server from an iSCSI LUN. I created a LUN on a SAN and presented it to the blade. When the server boots up, it sees the iSCSI LUN just fine, but when I tried to install OVM server 3.2.1, the installer did not detect that LUN. I tried to install Oracle Linux 6.3 and it sees the LUN OK. Is there a way to make OVM server see that LUN?
    thanks,
    TD

    I have done too many OEL and OVS installs lately, such that they are blurring together...
    On the OVM 3.2.1 install, watch closely for a place to add support for non-local storage. I remember seeing a small prompt in some of the installs, but it may have been in some of the OEL installs (I have also been doing OEL 4, 5, & 6) and not OVM.
    Sorry I can't remember right now. If I get a moment to run through the install process I will, and will report back if no one else does.

  • iSCSI storage with UCS

    Hi All,
    Can I ask a question with regard to connecting iSCSI storage for use with UCS? We are looking at using Nimble storage, which is iSCSI based, and want to understand best-practice recommendations on how to connect it to UCS to get the best level of performance and reliability/resilience.
    Another question is more around how VMware deals with loss of connectivity on one path (where dual connections are set up from storage to fabrics): would it re-route traffic to the running path?
    Any suggestions would be appreciated.
    Kassim

    Hello Kassim,
    Currently Nimble iSCSI storage is certified with UCS firmware version 2.0.3.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/interoperability/matrix/r_hcl_B_rel2.03.pdf
    The following guide can serve as reference.
    Cisco Desktop Virtualization Solution with Nimble Storage Reference Architecture
    http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns836/ns978/guide_c07-719522.pdf
    In the above setup, ESXi software iSCSI multipathing with the PSP Round Robin algorithm is implemented to take care of load-balancing I/O and failover across the dual paths.
    HTH
    Padma
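    As a side note, Round Robin can be made the default PSP for the claiming SATP from the ESXi shell; a sketch (the SATP that claims Nimble volumes may differ, so check the device list first):
    # esxcli storage nmp device list
    # esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA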

  • Only system VLANs forward traffic on 1000v

    I am trying to migrate to a Nexus 1000v vDS, but only VMs in the system VLAN can forward traffic. I do not want to make my voice VLAN a system VLAN, but that is the only way I can get a VM in that VLAN to work properly. I have a host with its vmk in the L3Control port group. From the VSM, a show module shows VEM 3 with an "ok" status. I currently have only 1 NIC under vDS control. My VMs using the VM_Network port group work fine and can forward traffic normally. When I put a VM in the Voice_Network port group I lose communication with it. If I add VLAN 5 as a system VLAN to my Uplink port profile, then the VMs in the Voice_Network work properly. I thought you shouldn't create system VLANs for every VLAN and should only use them for critical management functions, so I would rather not make it one. Below is my n1k config. The upstream switch is a 2960X with the "switchport mode trunk" command. Am I missing something that is not allowing VLAN 5 to communicate over the Uplink port profile?
    port-profile type ethernet Unused_Or_Quarantine_Uplink
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type vethernet VM_Network
      vmware port-group
      switchport mode access
      switchport access vlan 1
      no shutdown
      system vlan 1
      max-ports 256
      description VLAN 1
      state enabled
    port-profile type vethernet L3-control-vlan1
      capability l3control
      vmware port-group L3Control
      switchport mode access
      switchport access vlan 1
      no shutdown
      system vlan 1
      state enabled
    port-profile type ethernet iSCSI-50
      vmware port-group "iSCSI Uplink"
      switchport mode trunk
      switchport trunk allowed vlan 50
      switchport trunk native vlan 50
      mtu 9000
      channel-group auto mode active
      no shutdown
      system vlan 50
      state enabled
    port-profile type vethernet iSCSI-A
      vmware port-group
      switchport access vlan 50
      switchport mode access
      capability iscsi-multipath
      no shutdown
      system vlan 50
      state enabled
    port-profile type vethernet iSCSI-B
      vmware port-group
      switchport access vlan 50
      switchport mode access
      capability iscsi-multipath
      no shutdown
      system vlan 50
      state enabled
    port-profile type ethernet Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,5
      no shutdown
      system vlan 1
      state enabled
    port-profile type vethernet Voice_Network
      vmware port-group
      switchport mode access
      switchport access vlan 5
      no shutdown
      max-ports 256
      description VLAN 5
      state enabled

    Below is the output you requested. Thank you.
    ~ # vemcmd show card
    Card UUID type  2: 4c4c4544-004c-5110-804a-b9c04f564831
    Card name: synergvm5
    Switch name: synergVSM
    Switch alias: DvsPortset-0
    Switch uuid: 7d e9 0d 50 b3 3b 25 47-64 14 61 c0 3f c0 7b d9
    Card domain: 4094
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 1
    VEM Control (AIPC) MAC: 00:02:3d:1f:fe:02
    VEM Packet (Inband) MAC: 00:02:3d:2f:fe:02
    VEM Control Agent (DPA) MAC: 00:02:3d:4f:fe:02
    VEM SPAN MAC: 00:02:3d:3f:fe:02
    Primary VSM MAC : 00:50:56:aa:70:b9
    Primary VSM PKT MAC : 00:50:56:aa:70:bb
    Primary VSM MGMT MAC : 00:50:56:aa:70:ba
    Standby VSM CTRL MAC : 00:50:56:aa:70:b6
    Management IPv4 address: 172.30.2.64
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 172.30.100.1
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 16
      Processor Cores: 8
    Processor Sockets: 2
      Kernel Memory:   62904468
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ~ # vemcmd show port
      LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
       24     Eth3/8     UP   UP    FWD       0          vmnic7
       49      Veth1     UP   UP    FWD       0            vmk1
       50      Veth2     UP   UP    FWD       0        XP-Voice.eth0
       51      Veth3     UP   UP    FWD       0        synergPresence.eth0
    ~ # vemcmd show port vlans
                              Native  VLAN   Allowed
      LTL   VSM Port  Mode    VLAN    State* Vlans
       24     Eth3/8   T          1   FWD    1
       49      Veth1   A          1   FWD    1
       50      Veth2   A          1   FWD    1
       51      Veth3   A          5   FWD    5
    * VLAN State: VLAN State represents the state of allowed vlans.
    ~ # vemcmd show bd
    Number of valid BDS: 10
    BD 1, vdc 1, vlan 1, swbd 1, 5 ports, ""
    Portlist:
    BD 2, vdc 1, vlan 3972, swbd 3972, 0 ports, ""
    Portlist:
    BD 3, vdc 1, vlan 3970, swbd 3970, 0 ports, ""
    Portlist:
    BD 4, vdc 1, vlan 3969, swbd 3969, 2 ports, ""
    Portlist:
          8
          9
    BD 5, vdc 1, vlan 3968, swbd 3968, 3 ports, ""
    Portlist:
          1  inban
          5  inband port securit
         11
    BD 6, vdc 1, vlan 3971, swbd 3971, 2 ports, ""
    Portlist:
         14
         15
    BD 7, vdc 1, vlan 5, swbd 5, 1 ports, ""
    Portlist:
         51  synergPresence.eth0
    BD 8, vdc 1, vlan 50, swbd 50, 0 ports, ""
    Portlist:
    BD 9, vdc 1, vlan 77, swbd 77, 0 ports, ""
    Portlist:
    BD 10, vdc 1, vlan 199, swbd 199, 0 ports, ""
    Portlist:
    ~ #
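    One thing that stands out in the output above: vemcmd show port vlans shows the Eth3/8 trunk forwarding only VLAN 1, even though the Uplink profile allows 1,5. A sketch of what to verify on the VSM (only relevant if VLAN 5 was never explicitly created there, which would produce exactly this symptom; the VLAN name is arbitrary):
    show vlan brief
    conf t
      vlan 5
        name Voice
    A VLAN that is allowed on a trunk but not defined on the VSM will not be programmed on the VEM uplink.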

  • Case Resolution CSCul82285

    Microsoft KB: http://support.microsoft.com/kb/296930
    I was told this was a Microsoft problem, but the resolution would not be implemented until the next major release. Cisco has implemented a workaround via ENIC driver version 3.0.0.6 or higher. This driver has not been publicly released, but should be in the El Capitan Maintenance Release 2 (2.3.0), which is slated for around Sept. 2014. The 3.0.0.6 driver I received from TAC has not been through Microsoft's Hardware Quality Labs, so it is not signed by Microsoft as of yet. It is signed with Cisco's publicly trusted Code Signing Certificate, which will prompt you to accept its validity on the first install.
    I can attest that the driver does work to enable the combined use of iSCSI boot and 2012/2012 R2 NIC Teaming. At this point, you will have to open a TAC case and your support engineer will need to contact the DEV team to procure the updated driver.
    Here is what worked for me on a 2012 Failover Clustering Hyper-V using iSCSI Boot, Software NIC Teaming, and VMFEX hypervisor bypass:
    1) During installation, use the ENIC 2.4.0.15 driver from the UCS Server Utility image.
    2) During first logon, start the Microsoft iSCSI service and use the Cisco VIO install utility to custom install all options. Reboot when prompted.
    3) In Device Manager, install the ENIC 3.0.0.6 driver on each Cisco VIC Ethernet network interface. You will be prompted to reboot when you replace the driver on your iSCSI interfaces. DO NOT REBOOT until you have updated the driver on all interfaces.
    4) Configure your iSCSI multipath.
    5) Team and configure your desired network adapters so that you can use both fabrics simultaneously for Management, Live Migration, and Fault Tolerance.
    6) Set up Hyper-V and failover clustering.

  • ZFS Configuration - getting different disk target numbers

    Hi,
    We are in the process of configuring Oracle 11gR2 RAC, using ZFS as iSCSI storage. We are getting different disk target numbers on the two machines (disks 4, 5 and 6). To configure ASM we need them to be consistent; please help us reset the volume names.
    i.e.
    4. c2t25d0 <DEFAULT cyl 3580 alt 2 hd 128 sec 32>
    Disable iSCSI multipathing on dev2 and dev3:
    cat /kernel/drv/iscsi.conf
    set mpxio-disable="yes"
    [root@dev3 /]$ format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <DEFAULT cyl 17845 alt 2 hd 255 sec 63>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@0,0
           1. c1t1d0 <DEFAULT cyl 17845 alt 2 hd 255 sec 63>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@1,0
           2. c1t2d0 <SEAGATE-ST914602SSUN146G-0603-136.73GB>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@2,0
           3. c1t3d0 <SEAGATE-ST914602SSUN146G-0603-136.73GB>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@3,0
           4. c2t25d0 <DEFAULT cyl 3580 alt 2 hd 128 sec 32>
              /iscsi/[email protected]%3A02%3Aadfcea0d-d88e-e052-eeaa-d523fb12bce40001,0
           5. c2t26d0 <DEFAULT cyl 39159 alt 2 hd 255 sec 63>
              /iscsi/[email protected]%3A02%3Aa0833922-e008-c492-c50e-fe09b494248f0001,0
           6. c2t27d0 <DEFAULT cyl 65267 alt 2 hd 255 sec 63>
              /iscsi/[email protected]%3A02%3A66ba50e2-51ce-665d-e32c-d8449d1e7ec10001,0
    [root@dev4 /]$ format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <DEFAULT cyl 17845 alt 2 hd 255 sec 63>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@0,0
           1. c1t1d0 <DEFAULT cyl 17845 alt 2 hd 255 sec 63>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@1,0
           2. c1t2d0 <SEAGATE-ST914603SSUN146G-0868-136.73GB>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@2,0
           3. c1t3d0 <SEAGATE-ST914603SSUN146G-0868-136.73GB>
              /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@3,0
           4. c2t23d0 <DEFAULT cyl 39159 alt 2 hd 255 sec 63>
              /iscsi/[email protected]%3A02%3Aa0833922-e008-c492-c50e-fe09b494248f0001,0
           5. c2t24d0 <DEFAULT cyl 65267 alt 2 hd 255 sec 63>
              /iscsi/[email protected]%3A02%3A66ba50e2-51ce-665d-e32c-d8449d1e7ec10001,0
           6. c2t25d0 <DEFAULT cyl 3580 alt 2 hd 128 sec 32>
              /iscsi/[email protected]%3A02%3Aadfcea0d-d88e-e052-eeaa-d523fb12bce40001,0Regards,
    Sachin
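    One approach that normally yields identical device names on both nodes (a sketch; note that the iscsi.conf quoted above currently disables it) is to enable MPxIO for the iSCSI initiator, so disks are named by GUID rather than by per-host controller/target numbers:
    # in /kernel/drv/iscsi.conf on both nodes:
    mpxio-disable="no"
    # then reboot; disks appear as /dev/rdsk/c*t<GUID>d0 with the same GUID on both
    # nodes, and the ASM disk strings can reference those stable names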

    > I have no room for the ZFS partition on a slice. I have a lot of free space unallocated
    > to any partition and this is where I would like the ZFS partition.
    You would need to allocate it to one yourself.
    > I saw a lot of confusion between slices and partitions.
    Yes. In many cases they are used interchangeably.
    However, I often try to use the term "slice" for the Solaris label subdivision, leaving "partition" to refer to x86 architecture-specific divisions.
    Solaris on x86 can only (easily) use a single x86 partition of type "Solaris". You can't use another partition (whether with ZFS or not).
    > In my case, I have a 20GB partition where I have 10 slices created when I installed
    > Solaris on UFS. The 120GB left is not allocated to any slice.
    If it's visible to the Solaris label, then allocate it to a slice.
    Darren

  • Veth drops on Nexus 1000v toward vmk

    Hi all,
    during a troubleshooting analysis I've noticed a lot of drops on some of the Nexus 1000v interfaces.
    The 1000v sits in an architecture of N7K - UCS FI 6296 - UCS M-series blades - VMware ESXi 5.1.
    The 1000v version is 4.2(1)SV1(5.2).
    In particular, the drops occurred on the veth interfaces toward the vmks for vMotion and VXLAN (two different vmks, vmk2/vmk3), and on the uplink port-channels dedicated to VXLAN and vMotion.
    The VXLAN implementation is an L2 one.
    On the port-channel we use MAC address pinning toward UCS End Host Mode, so those drops are quite normal (broadcast/multicast on a non-designated port), but I'm worried about the drops on the vmks.
    I've cleared the counters on one of the veth interfaces, and the drops are still 0, no increment.
    Can someone help me investigate the problem?
    Maybe some MTU issue on the DVS, or on ESXi?
    Or some performance issue? I think these drops occurred during a peak of traffic toward a customer VM (the traffic was only 70k pps, so not high enough to suggest a performance issue...).
    These are the drops on one of the vmks for VXLAN (input+output, or output only):
    Vethernet4 is up
      Port description is VMware VMkernel, vmk3
      Owner is VMware VMkernel, adapter is vmk3
      Active on module 10
      VMware DVS port 104
      Port-Profile is 1120-VXLAN
      Port mode is access
      5 minute input rate 88509912 bits/second, 12542 packets/second
      5 minute output rate 22010120 bits/second, 9501 packets/second
      Rx
        90293108855 Input Packets 7293052419905852798 Unicast Packets
        2319412221426068554 Multicast Packets 240550728439 Broadcast Packets
        94313685068147 Bytes
      Tx
        114597486997 Output Packets 65506738625 Unicast Packets
        3775177797832689983 Multicast Packets 62508584 Broadcast Packets 2159481 Flood Packets
        39473265121989 Bytes
        0 Input Packet Drops 1297658861645201427 Output Packet Drops
    or
    Vethernet15 is up
      Port description is VMware VMkernel, vmk3
      Owner is VMware VMkernel, adapter is vmk3
      Active on module 7
      VMware DVS port 103
      Port-Profile is 1120-VXLAN
      Port mode is access
      5 minute input rate 3119448 bits/second, 940 packets/second
      5 minute output rate 8593160 bits/second, 1294 packets/second
      Rx
        7885912029 Input Packets 4121983545487723792 Unicast Packets
        3472330426551476492 Multicast Packets 7307199665338263582 Broadcast Packets
        3607343173490 Bytes
      Tx
        10215155642 Output Packets 9550212849 Unicast Packets
        8386058791768379170 Multicast Packets 7162252202995480725 Broadcast Packets 8030178 Flood
    Packets
        10342867772679 Bytes
        7954892334481738253 Input Packet Drops 2322280091609214343 Output Packet Drops
    Thanks
    Federica

    Thanks for your reply.
    The drops are not incrementing now, but we experienced network performance issues during the drops.
    As soon as possible I'll try the vempkt capture, even if I have no drops right now.
    Here are the port profiles for vmk2 and vmk3:
    vmk3 :
    port-profile 1120-VXLAN
    type: Vethernet
    description:
    status: enabled
    max-ports: 32
    min-ports: 1
    inherit:
    config attributes:
      capability vxlan
      switchport mode access
      switchport access vlan 1120
      no shutdown
    evaluated config attributes:
      capability vxlan
      switchport mode access
      switchport access vlan 1120
      no shutdown
    assigned interfaces:
      Vethernet2
      Vethernet4
      Vethernet9
      Vethernet11
      Vethernet13
      Vethernet15
      Vethernet16
      Vethernet17
      Vethernet238
      Vethernet240
      Vethernet244
      Vethernet680
      Vethernet744
      Vethernet878
    port-group: 1120-VXLAN
    system vlans: none
    capability l3control: no
    capability iscsi-multipath: no
    capability vxlan: yes
    capability l3-vn-service: no
    port-profile role: none
    port-binding: static
    vmk2 :
    port-profile 1100-vMotion
    type: Vethernet
    description:
    status: enabled
    max-ports: 32
    min-ports: 1
    inherit:
    config attributes:
      switchport mode access
      switchport access vlan 1100
      no shutdown
    evaluated config attributes:
      switchport mode access
      switchport access vlan 1100
      no shutdown
    assigned interfaces:
      Vethernet1
      Vethernet3
      Vethernet5
      Vethernet6
      Vethernet7
      Vethernet8
      Vethernet10
      Vethernet12
      Vethernet233
      Vethernet239
      Vethernet243
      Vethernet673
      Vethernet676
      Vethernet877
    port-group: 1100-vMotion
    system vlans: none
    capability l3control: no
    capability iscsi-multipath: no
    capability vxlan: no
    capability l3-vn-service: no
    port-profile role: none
    port-binding: static
    Thanks
    Federica

  • Identify the storage of our RAC database

    I am new to RAC. We have a two-node 11gR2 RAC setup. I am doing a sanity check as below: what is our storage, and how do we identify how our shared disks are defined (local, SAN, iSCSI)?

    967023 wrote:
    I am new to RAC. We have a two-node 11gR2 RAC setup. I am doing a sanity check as below: what is our storage, and how do we identify how our shared disks are defined (local, SAN, iSCSI)?
    Local disks cannot be used in the case of RAC, as the latter needs shared storage.
    1. Identify disks used for ASM:
    asmcmd lsdsk
    2. Disks may be presented through HBA, iSCSI, dNFS.
    3. Disks may be presented with a single path, in which case item 1 will give the necessary information.
    Or multipath may be used.
    4. List disks with:
    ls -l /dev/disk/by-id
    A path like
    lrwxrwxrwx 1 root root 24 Mar 14 02:25 scsi-1ATA_VBOX_HARDDISK_VB3ee53981-f184882e -> ../../oracleasm/OCRVOTE3
    will indicate a shared VBox disk.
    ls -l /dev/disk/by-path
    Paths like
    lrwxrwxrwx 1 root root  9 Feb 16 11:43 ip-xx.xx.xx.xx:7260-iscsi-iqn.2010-06.com.purestorage:flasharray.123456qwerty-lun-1 -> ../../sde
    will indicate iSCSI
    iscsiadm -m session -P 1
    may give additional information in the case of iSCSI
    multipath -ll
    may give additional information in the case of HBA.
    P.S.
    Checked against OL5.

  • OCFS2 Manager low priority access

    Hello,
    I'm currently working on using a SAN with a lot of I/O access.
    I'm using a solution with iSCSI, multipath and OCFS2.
    My OCFS2 cluster contains 4 nodes, all with the same configuration, using Debian Squeeze and OCFS2 v1.4.4-3.
    When I write to the same file from all the nodes, I can see that only 3 nodes are writing at a regular interval.
    The node using slot 0 (the OCFS2 manager) doesn't write, or writes really slowly.
    When there are only two active nodes in the cluster, the manager node works well.
    Has someone already had this problem?
    Does something seem wrong to you?
    Sincerely.

    Please do refer to: OCFS2 and SAN Interactions [ID 603038.1]
    There is some mention of your issue at this metalink note.
    Regards,
    Johan Louwers.

  • Nexus 1000v VEM module bouncing between hosts

    I'm receiving these error messages on my N1KV and don't know how to fix them. I've tried removing, rebooting, and reinstalling host B's VEM, but that did not fix the issue. How do I debug this?
    My setup:
    Two physical hosts running ESXi 5.1, the vCenter appliance, and the N1KV, with two system uplinks and two iSCSI uplinks per host. Let me know if you need more output from logs or commands, thanks.
    N1KV# 2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:08 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:09 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:16 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:17 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:28 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:44 N1KV %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable
    N1KV# sh module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          ha-standby
    2    0      Virtual Supervisor Module         Nexus1000V          active *
    3    248    Virtual Ethernet Module           NA                  ok
    Mod  Sw                  Hw     
    1    4.2(1)SV2(1.1a)     0.0                                             
    2    4.2(1)SV2(1.1a)     0.0                                             
    3    4.2(1)SV2(1.1a)     VMware ESXi 5.1.0 Releasebuild-838463 (3.1)     
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    192.168.54.2     NA                                    NA
    2    192.168.54.2     NA                                    NA
    3    192.168.51.100   03000200-0400-0500-0006-000700080009  NA
    * this terminal session
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-1
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ! ports 1-6 connected to physical host A
    interface GigabitEthernet1/0/1
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 1 mode active
    ! ports 7-12 connected to phys host B
    interface GigabitEthernet1/0/7
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 2 mode active

    OK, after deleting the N1KV VMs and vCenter and then reinstalling everything, I got the error again:
    N1KV# 2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:41 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:10 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:11 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:35 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:36 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:59 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:50:00 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    Host A
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 1
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:00
    VEM Packet (Inband) MAC: 00:02:3d:20:02:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:00
    VEM SPAN MAC: 00:02:3d:30:02:00
    Primary VSM MAC : 00:50:56:b6:96:f2
    Primary VSM PKT MAC : 00:50:56:b6:11:b6
    Primary VSM MGMT MAC : 00:50:56:b6:48:c6
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: No
    Host B
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:a8:f5:f0
    Primary VSM PKT MAC : 00:50:56:a8:3c:62
    Primary VSM MGMT MAC : 00:50:56:a8:b4:a4
    Standby VSM CTRL MAC : 00:50:56:a8:30:d5
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    I used the Nexus 1000v Java installer, so I don't know why it keeps assigning the same UUID, nor do I know how to change it.
    Here is the other output you requested:
    N1KV# show vms internal info dvs
      DVS INFO:
    DVS name: [N1KV]
          UUID: [bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3]
          Description: [(null)]
          Config version: [1]
          Max ports: [8192]
          DC name: [Galaxy]
         OPQ data: size [1121], data: [data-version 1.0
    switch-domain 2
    switch-name N1KV
    cp-version 4.2(1)SV2(1.1a)
    control-vlan 1
    system-primary-mac 00:50:56:a8:f5:f0
    active-vsm packet mac 00:50:56:a8:3c:62
    active-vsm mgmt mac 00:50:56:a8:b4:a4
    standby-vsm ctrl mac 0050-56a8-30d5
    inband-vlan 1
    svs-mode L3
    l3control-ipaddr 192.168.54.2
    upgrade state 0 mac 0050-56a8-30d5 l3control-ipv4 null
    cntl-type-mcast 0
    profile dvportgroup-26 trunk 1,51-57,110
    profile dvportgroup-26 mtu 9000
    profile dvportgroup-27 access 51
    profile dvportgroup-27 mtu 1500
    profile dvportgroup-27 capability l3control
    profile dvportgroup-28 access 52
    profile dvportgroup-28 mtu 1500
    profile dvportgroup-28 capability l3control
    profile dvportgroup-29 access 53
    profile dvportgroup-29 mtu 1500
    profile dvportgroup-30 access 54
    profile dvportgroup-30 mtu 1500
    profile dvportgroup-31 access 55
    profile dvportgroup-31 mtu 1500
    profile dvportgroup-32 access 56
    profile dvportgroup-32 mtu 1500
    profile dvportgroup-34 trunk 220
    profile dvportgroup-34 mtu 9000
    profile dvportgroup-35 access 220
    profile dvportgroup-35 mtu 1500
    profile dvportgroup-35 capability iscsi-multipath
    end-version 1.0
          push_opq_data flag: [1]
    show svs neighbors
    Active Domain ID: 2
    AIPC Interface MAC: 0050-56a8-f5f0
    Inband Interface MAC: 0050-56a8-3c62
    Src MAC           Type   Domain-id    Node-id     Last learnt (Sec. ago)
    0050-56a8-30d5     VSM         2         0201      1020.45
    0002-3d40-0202     VEM         2         0302         1.33
    I cannot add Host A to the N1KV; it errors out with:
    vDS operation failed on host 192.168.52.100, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
    Host B (192.168.51.100) was added fine; then I moved a vmkernel to the N1KV, which brought up the VEM and produced the VEM flapping errors.
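    One detail worth noting: the vemcmd output from both hosts reports the same Card UUID (03000200-0400-0500-0006-000700080009), which would explain the VSM repeatedly swapping the two hosts in and out of module 3. A sketch of confirming the duplicate from each ESXi shell:
    ~ # esxcfg-info -u
    03000200-0400-0500-0006-000700080009
    If both hosts return the same value, the UUID comes from the server SMBIOS/BIOS rather than from the 1000v installer, and it has to be made unique before both VEMs can register as separate modules.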

  • PVLANs on Nexus1000v and FI/Nexus7K

    Hello All,
    I'm trying to implement PVLANs in the following scenario:
    desktop1 (vlan 100) == link1 == Nexus1000v == link2 == Fabric Interconnect == link3 == Nexus 7K == link4 == ASA
    desktop2 (vlan 100)
    server1 (vlan100)
    server2 (vlan200)
    Desktops 1 & 2 need to talk to servers 1 & 2 but not to each other, so I'm putting them into an isolated secondary VLAN (on link1; I'm calling each hop just one link for simplicity). Servers 1 & 2 will be in the promiscuous VLAN (link1).
    What I'm not sure about is whether I need to configure secondary VLANs on the Nexus 7K and UCS FI, or would configuring a promiscuous trunk on the Nexus 1K (link2) be enough? The communication between desktops 1 & 2 and server 2 is done via the ASA firewall (subinterfaces).
    So where do I need to span the secondary VLANs to?
    Thanks

  • Xen bridge problem, doesn't allow more than 1 guest running

    Hello,
    We have a curious problem with our Oracle VM Xen bridge configuration. We have a 2-node server pool with a shared OCFS2 repository (this works great: iSCSI, multipath, etc.), HA enabled, and some VM guests.
    The problem is that if we launch 2 or more guests that use the same Xen bridge on the same VM server (xenbr1 on node A, for example), we have intermittent connection issues on those guests. Only 1 guest will respond to SSH, ping, etc. while the others don't. After a while, one of the guests that didn't have a connection will respond, and the one that seemed to be fine won't (as if they take turns using that xenbr).
    The VM Server version is a freshly downloaded and installed 2.2.1. Since we are using it as a test for future projects, the VM Servers aren't patched from ULN. The logs from Xen and from the server itself don't show any information related to network problems.
    We mainly used chapters 6 (Storage) and 7 (Networking) of Roddy Rodstein's Underground Oracle VM Manual as a guide. We have tried creating the Xen bridges manually (as many guides suggest) and also dynamically with a modified version of the default Xen network-bridges script (My Oracle Support Note 730750.1); the results were the same with both methods.
    Do you know if this could be a configuration problem, or something already known, like a bug? (I couldn't find any info on a similar problem.)
    *====Additional Information====*
    Guests Info:
    Paravirtualized OEL 5.6; vm.cfg example below (the guests have the same specs)
    bootloader = '/usr/bin/pygrub'
    disk = ['file:/var/ovs/mount/3C4B132FC4316BE43BB9C81C8EFC8/running_pool/10_testvm01/System.img,xvda,w']
    keymap = 'en-us'
    memory = '4096'
    name = '10_testvm01'
    on_crash = 'restart'
    on_reboot = 'restart'
    uuid = '579cd1c0-b1df-aaa5-1b07-7345ff9490f0'
    vcpus = 2
    vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=password']
    vif = ['mac=00:16:3E:75:AB:B8, bridge=xenbr1']
    NICs:
    Intel 82576 Quad-Port x2 (8 total)
    Brctl show output:
    bridge name bridge id STP enabled interfaces
    xenbr0 8000.001b218f9c50 no eth0
    xenbr1 8000.001b218f9c51 no eth1
    xenbr2 8000.001b21a91158 no eth4
    xenbr3 8000.001b21a91159 no eth5
    The other interfaces are used for iSCSI shared volume.
    In advance, Thanks for any information that you could provide me in this problem.

    Hello Sebastian, thanks for taking the time to look into this.
    In response to the first question: no, that was the brctl output when the VM server didn't have any guests running; my intention was only to show the basic bridge configuration that I made. The following is an example of the same output when there are 2 guests running on xenbr1.
    Node A:
    bridge name     bridge id               STP enabled     interfaces
    xenbr0          8000.001b218f9c60       no              eth0
    xenbr1          8000.001b218f9c61       no              vif8.0
    vif7.0
    eth1
    xenbr2          8000.001b21a91138       no              eth4
    xenbr3          8000.001b21a91139       no              eth5
    Node B:
    bridge name     bridge id               STP enabled     interfaces
    xenbr0          8000.001b21932568       no              eth0
    xenbr1          8000.001b21932569       no              vif6.0
    vif5.0
    eth1
    xenbr2          8000.001b21a86ce8       no              eth4
    xenbr3          8000.001b21a86ce9       no              eth5
    Stopping or starting the iptables service doesn't make any change in the behavior. Actually, I am quite confused about the iptables service: for example, if I stop the service and then start a machine (guest), the iptables service is started again. And as I said, stopping the service one more time doesn't make any difference.
    To the last question, these are the vif lines for my VM guests (those running on xenbr1 only; the same happens for the rest of the bridges, and if you need me to post the rest, just tell me):
    Guest 1: vif = ['mac=00:16:3E:75:AB:B8, bridge=xenbr1']
    Guest 2: vif = ['mac=00:16:3E:7B:49:99, bridge=xenbr1']
    Guest 3: vif = ['mac=00:16:3E:17:98:04, bridge=xenbr1']
    Guest 4: vif = ['mac=00:16:3E:46:92:AF, bridge=xenbr1']
    I hope this helps to get a workaround for this problem. Thanks again for your time!
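    If it helps, a sketch of what I would watch while two guests on xenbr1 are taking turns (interface names taken from the node A output above): run brctl showmacs repeatedly and see whether a guest MAC keeps jumping between bridge ports, or whether duplicate MACs appear:
    # watch -n1 brctl showmacs xenbr1
    A MAC that alternates between vif7.0/vif8.0 and eth1, or the same MAC showing up for two guests, would point at the bridge or the upstream switch rather than at the guests themselves.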
