ESX 5.1 and iSCSI multipathing

Hello.
I have a problem configuring a vSphere cluster with iSCSI multipathing.
Each host in the cluster has four physical interfaces and four vmk interfaces for iSCSI, each mapped to its own physical interface.
The storage is an IBM Storwize V3700 with four configured targets (10.78.2.2 - 10.78.2.5).
Why do I see only 4 paths to the storage? I think there should be 16 (4 interfaces on the storage * 4 interfaces on the host), am I right?
Rescanning the HBAs does not resolve the problem.
P.S. All paths on the storage work properly from another host that is not in the cluster.
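(One thing worth checking, as a sketch only: on ESXi 5.1 this symptom usually means the vmk ports are not bound to the software iSCSI adapter, so only the default vmknic is used and you get one path per target. The adapter name vmhba33 and ports vmk1-vmk4 below are assumptions; substitute your own.)
esxcli iscsi networkportal list --adapter=vmhba33                 # which vmk ports are currently bound?
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4
esxcli storage core adapter rescan --adapter=vmhba33
With all four vmk ports bound and four targets answering, each LUN should show 4 x 4 = 16 paths.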

Actually, I found the reference; it's in the firmware upgrade guide itself:
All connectivity may be lost during firmware upgrades if you have configured both Enable Failover on one
or more vNICs and you have also configured NIC teaming/bonding at the host operating system level. Please
design for availability by using one or the other method, but never both.
Is this still valid? What setting should be used for ESX NIC teaming if we want to use Enable Failover on UCS?

Similar Messages

  • Configure iSCSI multipath in OVM 3.1.1 with storage plug-in not possible?

    I have a configuration with 4 iSCSI paths to the storage system. Everything works fine if the discovery and login
    process is performed manually. Multipath works well with 4 paths.
    # iscsiadm -m session
    tcp: [13] 192.168.10.1:3260,25 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM0CA3P0
    tcp: [14] 192.168.20.1:3260,26 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM0CA3P1
    tcp: [15] 192.168.10.2:3260,27 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM1CA3P0
    tcp: [16] 192.168.20.2:3260,28 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM1CA3P1
    # multipath -ll
    3600000e00d10000000100000000c0000 dm-2 FUJITSU,ETERNUS_DX400
    size=30G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=enabled
    | |- 25:0:0:1 sdg 8:96 active ready running
    | `- 26:0:0:1 sdi 8:128 active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
    |- 27:0:0:1 sdk 8:160 active ready running
    `- 28:0:0:1 sdm 8:192 active ready running
    I want to configure iSCSI multipath in OVM Manager using the specific storage plug-in.
    I proceeded with “Discover SAN Server” using the following parameters:
    Name: Storage-Name
    Storage Type: iSCSI Storage Server
    Storage Plug-in: Fujitsu ETERNUS
    Admin Host: IP-Addr. of storage system
    Admin Username: user name to access storage system
    Admin Password: password
    Access Host (IP) Address:           192.168.10.1
    After this configuration I can see my storage system, and I assigned physical volumes to an access group.
    Now the volumes are available on the OVM storage server, >>>> but I lost the multipath functionality <<<<<.
    # multipath -ll
    3600000e00d1000000010000000120000 dm-1 FUJITSU,ETERNUS_DX400
    size=30G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=50 status=active
    `- 33:0:0:1 sdd 8:48 active ready running
    The reason for this behavior is that only one Access Host (IP) Address can be specified.
    With this IP address a session was established … but only 1 session and not 4 !!
    How can I specify the remaining 3 paths to have proper multipath functionality?
    I know that Citrix XenServer accepts more Access Host (IP) Addresses,
    e.g. 192.168.10.1,192.168.10.2,192.168.20.1,192.168.20.2
    Thanks for help.

    @user12273962
    Yes, you are right. The storage plug-in is for management of the storage within OVM.
    This is working well, but I'm still missing the multipath functionality. The storage plug-in is not responsible
    for establishing multipath; OVM should take care of this.
    @budachst
    Yes, OVM only opened one session and logged in to only one target, because only one Access Host (IP) Address (192.168.10.1)
    can be specified.
    # iscsiadm -m session
    tcp: [21] 192.168.10.1:3260,25 iqn.2000-09.com.fujitsu:storage-system.eternus-dx400:CM0CA3P0
    From my point of view it is not possible to configure the remaining targets, so this is a design problem.
    Any new input and information is welcome.
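    Since manual discovery and login bring up all four sessions (as shown at the top of this post), one possible workaround until the plug-in can take more than one Access Host is to add the remaining portals by hand and let multipathd pick them up. A sketch using the portal addresses from the post:
    iscsiadm -m discovery -t sendtargets -p 192.168.10.2:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.20.2:3260
    iscsiadm -m node -L all      # log in to every discovered target
    multipath -ll                # all four paths should be listed again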

  • OEL 6.5 and ocfs2: partitions do not auto-mount at boot

    Hello.
    I have a problem with OEL 6.5 and ocfs2.
    When I mount the ocfs2 partitions with the mount -a command they all mount and work, but when I reboot no ocfs2 partitions are auto-mounted. There are no error messages in the log. I use DAS FC and iSCSI FC.
    fstab:
    UUID=32130a0b-2e15-4067-9e65-62b7b3e53c72 /some/4 ocfs2 _netdev,defaults 0 0
    #UUID=af522894-c51e-45d6-bce8-c0206322d7ab /some/9 ocfs2 _netdev,defaults 0 0
    UUID=1126b3d2-09aa-4be0-8826-0b2a590ab995 /some/3 ocfs2 _netdev,defaults 0 0
    #UUID=9ea9113d-edcf-47ca-9c64-c0d4e18149c1 /some/8 ocfs2 _netdev,defaults 0 0
    UUID=a368f830-0808-4832-b294-d2d1bf909813 /some/5 ocfs2 _netdev,defaults 0 0
    UUID=ee816860-5a95-493c-8559-9d528e557a6d /some/6 ocfs2 _netdev,defaults 0 0
    UUID=3f87634f-7dbf-46ba-a84c-e8606b40acfe /some/7 ocfs2 _netdev,defaults 0 0
    UUID=5def16d7-1f58-4691-9d46-f3fa72b74890 /some/1 ocfs2 _netdev,defaults 0 0
    UUID=0e682b5a-8d75-40d1-8983-fa39dd5a0e54 /some/2 ocfs2 _netdev,defaults 0 0

    What is the output of:
    # chkconfig --list o2cb
    # chkconfig --list ocfs2
    # cat /etc/ocfs2/cluster.conf
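    If those show o2cb or ocfs2 turned off in the default runlevels, the usual fix is simply to enable them; a minimal sketch (an assumption, so check it against the output of the commands above first):
    chkconfig o2cb on
    chkconfig ocfs2 on
    service o2cb configure     # confirm the cluster name and enable loading on boot
    With both init scripts enabled, the cluster defined in /etc/ocfs2/cluster.conf is brought up before the _netdev entries in fstab are mounted, and the partitions should auto-mount at boot.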

  • VCB/VADP, ESX 4.1 and NetWare vm snapshot backup issue

    Hi!
    If you are running NetWare as a guest in VMware ESX 4.1 and are using backup
    software that uses the snapshot feature to back up the guest's vmdk, then you
    may run into an issue that causes the snapshot to fail. This was just
    documented by VMware on Feb 23 (two days ago), so you may not have seen this
    yet. Here is the URL to the VMware KB:
    http://kb.vmware.com/kb/1029749
    The fix is to install the older v4.0 VMware Tools into the NetWare guest.
    Cheers,
    Ron

    Ron,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Servers without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. The most common TAC cases we have seen for boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify the configuration and troubleshoot from the server to the storage switches to the storage array.
    Before diving into troubleshooting, make sure there is a clear understanding of the topology. This is vital in any troubleshooting scenario. Know what devices you have and how they are connected, how many paths there are, switch/NPV mode, and so on.
    Always try to troubleshoot one path at a time and verify that the setup is in compliance with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at server
    a. Make sure to have a uniform firmware version across all components of UCS
    b. Verify that the VSAN is created and the FC uplinks are configured correctly. VSANs/FCoE VLANs should be unique per fabric
    c. Verify the vHBA configuration at the service profile level: each fabric's vHBA should have a unique VSAN number
    Note down the WWPN of your vHBA. This will be needed in Step 2 for zoning on the SAN switch and in Step 3 for LUN masking on the storage array.
    d. Verify that the boot policy of the service profile is configured to boot from SAN; the boot order and its parameters, such as LUN ID and WWN, are extremely important
    e. Finally, at the UCS CLI, verify the FLOGI of the vHBAs (for NPV mode the command, from NX-OS, is show npv flogi-table)
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so the storage switch has to be in NPIV mode, unless UCS is in FC switch mode)
    b. Verify that the switch port connecting to UCS is up as an F-port and is configured for the correct VSAN
    c. Check whether both the initiator (server) and the target (storage) are logged into the fabric switch (command for MDS/N5k: show flogi database vsan X)
    d. Once it is confirmed that the initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly (command: show fcns database vsan X)
    e. The most important configuration to check on the storage switch is zoning
    Zoning is basically access control between our initiator and targets. The most common design is to configure one zone per initiator and target.
    Zoning requires you to configure a zone, put that zone into your current zoneset, and then ACTIVATE it (command: show zoneset active vsan X)
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is a crucial step on the storage array; it gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (as per Steps 1 & 2),
    the following needs to be verified at the storage array level:
    a. Are the WWPNs of the initiators (the vHBAs of the hosts) visible on the storage array?
    b. If yes, is LUN masking applied?
    c. What LUN number is presented to the host? This is the number we see in the LUN ID of the 'Boot Order' in Step 1.
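    Pulling the switch-side commands from the steps above into one quick sketch (replace X with your VSAN ID):
    show npv flogi-table              # on the UCS FI in end-host (NPV) mode: did the vHBA FLOGI?
    show flogi database vsan X        # on the MDS/N5k: are initiator and target logged into the fabric?
    show fcns database vsan X         # are they registered with the name server?
    show zoneset active vsan X        # is a zone containing both WWPNs active?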
    The document below has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 

  • Migrate old NW cluster to VMware and iSCSI

    That's the job to be done; the question is just which way to go.
    The setup today is:
    2x single-CPU Xeon servers, both connected
    to 2x Promise 15-HD RAID cabinets over 160 MB/s SCSI.
    This install is approximately 5 years old and is getting slow and full.
    The idea is to buy new, fast 6-core Intel servers and run VMware, which makes
    use of the power and makes our life easier...
    One question I'm wondering about, and have a guess regarding the answer
    to, is:
    Should iSCSI-based storage for these servers be assigned and
    supplied through VMware, or should the virtual NetWare server running under
    VMware connect directly to the iSCSI storage?
    Usually, having an additional step on the way means overhead, but
    with regard to support for load-balancing NICs, memory, etc., my
    guess is that the opposite could be true here, meaning that the VMware
    box should handle all storage and NetWare should get its storage through
    VMware. But, again, that's just my guess... any input?
    Another question, of course, is NetWare vs. Linux OES.
    We have long-time experience with NetWare and find our way around it well;
    the Linux part is still something we really don't feel at home
    with. The installs of OES on Linux we've done for labs, tests, etc.
    have always felt rather unfinished. One could think that, being from
    the Novell/NetWare camp, a logical step would be Linux/OES, but...
    even when following the online docs, a basic test setup works
    quite badly: too many manual fixes to get stuff working, too much
    hassle getting registration and updates to work...
    Still, going virtual might be a way to make it easier to switch to
    OES/Linux, since it'll be easier to have a backup/image each time one
    tries an update, fix, etc.
    In the end, the needs are basic: one GroupWise server and one
    file server. Going virtual enables us to migrate other
    resources over time too...

    Thanks 4 the quick reply Massimo,
    Well, iSCSI or not, the other part of the question.
    The time IS probably here to replace NetWare, that much is obvious.
    With the existing setup we've got working backups;
    with our W2k/03/08 we've got working backups and disaster recovery plans.
    Moving forward, using OES, for us at least, seems a more difficult path,
    while any move from NetWare today would probably give us better
    throughput. Using VMware seems like a manageable solution, since
    updates/upgrades and backups could be done easily. Having an image to
    revert to if any update goes wrong is much easier/faster than a
    re-install each time...
    On Mon, 22 Nov 2010 08:54:15 GMT, Massimo Rosen
    <[email protected]> wrote:
    >Hi,
    >
    >[email protected] wrote:
    >>
    >> That's the job to be done,
    >> question is just; which way to go;
    >>
    >> Setup today is;
    >> 2x Single-CPU XeonServers, both connected
    >> to 2xPromise 15HD/Raidkabinetts over 160mb SCSI.
    >>
    >> This install beeing approx 5 years old and is getting slow and full.
    >
    >Hmmmm....
    >
    >> Idea is to buy new, fast 6core Intel servers, run VmWare which makes
    >> use of the power and make our life easier....
    >> One question Im wondering about and have a guess regarding the answer
    >> to is;
    >>
    >> iSCSI based storage for these servers, should it be assigned and
    >> supplied through VmWare or should the virtual NWserver running under
    >> VmWare connect directly to the iSCSI storage ?
    >
    >If your current setup is slow, you shouldn't be using iSCSI at all. It
    >won't be much faster, and iSCSI is becoming increasingly stale
    >currently, unless you have 10GBE. And even then it isn't clear if iSCSI
    >over 10GBE is really much faster than when using 1GB. TCP/IP needs a lot
    >of tuning to achieve that speed.
    >
    >>
    >> In the end, the needs are basic, one Groupwise server and one
    >> FileServer. Going virtual enables us to over time migrate other
    >> resources to....
    >
    >I would *NEVER* put a Groupwise Volume into a VMDK. That said, my
    >suggestion would be OES2, and a RDM at the very least for Groupwise.
    >
    >CU,

  • Fault Tolerance of NFS and iSCSI

    Hello,
    I'm currently designing a new datacenter core environment. In this case there are also Nexus 5548s with FEXs involved. On these FEXs there are some servers which speak NFS and iSCSI.
    While changing the core components there will be a disruption between the servers.
    What is the maximum timeout the NFS or iSCSI protocol can handle while the components are being changed? There may be a disruption of at most 1 second.
    Regards
    Udo
    Sent from Cisco Technical Support iPad App

    JDW1: In case you haven't received the ISO document yet, the relevant section of the cited ISO 11898-2:2003 you want to look at is section 7.6 "Bus failure management", and specifically Table 12 - "Bus failure detection" and Figure 19 - "Possible failures of bus lines".

  • RAC and iSCSI thoughts

    Hi,
    I'm in the early stages of designing a RAC installation and am wondering if iSCSI is a possibility for the shared storage. As far as I have been able to tell, the only certified shared storage systems are all FC based.
    I'm wondering if anyone here has any thoughts/experiences with RAC and iSCSI that they'd like to share.
    regards
    iain

    I built my iSCSI solution using Linux and off-the-shelf RAID solutions. Originally I was just using gig-E cards, but I have since upgraded to accelerated cards (which recognize iSCSI and offload the processing from the host CPU) and it works great.
    There are some really affordable solutions either way these days. Fibre Channel is still more expensive on the drive side, considering that modern SATA drives are catching up in performance.
    I built a 3.2 TB iSCSI network that is shared between 6 servers running a web index of 100 million pages with about 60k queries a day, and it works great. I was able to roll it out on the colo's switches without getting specialized cabinets or Fibre Channel switches, and only sent in upgraded NICs. Sooner or later, as the queries increase, I'll send in a separate gigE switch; for now it's just on its own VLAN.
    For me it was a lot of work since I built the system from scratch. Today lots of vendors offer off-the-shelf components at bargain-basement prices compared to when I ventured into it :)

  • NFS and ISCSI using ip hash load balance policy

    As far as I have known all along, the best practice for iSCSI is to use a single NIC with one standby and "route based on port ID". But I have seen at a client site that NFS and iSCSI are configured to use "route based on IP hash" with multiple NICs, and it has been working all this time. I cannot see that iSCSI does multipathing there. I was told by the sysadmin that it is OK to use that, since both protocols are configured on the same storage and it does not make sense to separate them; his explanation was that if we want separate policies then we should use separate storage, i.e. one for NFS and the other for iSCSI. I do not buy that, but I might be wrong. He pointed to the link below saying that you can use IP hash: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalI.... Is it OK to use "route based on IP hash" for iSCSI as in the link?
    This topic first appeared in the Spiceworks Community

    When you create your uplink port profile you simply use the auto channel command in your config:
    channel-group auto mode on
    This will create a static EtherChannel when two or more ports are added to the uplink port profile from the same host. Assuming your upstream switch config is still set to "mode on" for the EtherChannel config, there's nothing to change.
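    For reference, a minimal sketch of such an uplink port profile on the Nexus 1000V (the profile name and VLAN range are placeholders):
    port-profile type ethernet UPLINK-PP
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      channel-group auto mode on
      no shutdown
      state enabled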
    Regards,
    Robert

  • Windows 7 answer file deployment and iscsi boot

    Hi, I am trying to prepare an image with Windows 7 Enterprise that has been installed, went through "audit" mode, and was then shut down with:
    OOBE+Generalize+Shutdown
    so that I can clone this image and, the next time it boots, it will use the answer file to customize, join the domain, etc.
    The complication is that I am using iSCSI boot for my image, and working within VMware ESX.
    I can install Windows without any issues, get the drivers working properly, reboot, and OOBE on the same machine with no issues.
    The problems come when I clone the VM, and the only part that changes (that I think really matters) is the MAC address of the network card. The new clone, when it comes up after the OOBE reboot, hangs for about 10 minutes and then proceeds without joining the domain.
    Using Panther logs and network traces, I saw that the domain join command was timing out and in fact no traffic was being sent to the DC, so the network was not up. The rest of the answer file customization works fine.
    As a test I brought up this new clone (with the new MAC) in audit mode, and Windows reported that it found and installed drivers for a new device: VMXNET3 Driver 2. So it does in fact consider this a new device.
    Even though it iSCSI boots from this new network card, later in the process it is unable to use it until the driver is reinstalled.
    In my answer file I tried with and without the portion below, but it didn't help:
    <settings pass="generalize">
            <component>
                <DoNotCleanUpNonPresentDevices>true</DoNotCleanUpNonPresentDevices>
                <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
            </component>
        </settings>
    I also tried with the E1000 NIC, but I couldn't get Windows to boot properly after the CD-ROM installation part.
    So my question is: is my only option to use workarounds like post-OOBE scripts for the domain join, etc.?
    Is it possible to let Windows boot and then initiate an extra reboot once the driver is installed, and then allow it to go to the Customize phase?
    thank you!

    Hi,
    This might be caused by the iSCSI boot.
    iSCSI boot is supported only on Windows Server. Client versions of Windows, such as Windows Vista® or Windows 7, are not supported.
    Detailed information, please check:
    About iSCSI Boot
    Best regards
    Michael Shao
    TechNet Community Support

  • OVM 3.1.1 iSCSI multipathing

    I am setting up an OVM 3.1.1 environment at a customer site where iSCSI LUNs are presented from an EMC filer. I have a few questions:
    * What is the proper way to discover a set of iSCSI LUNs when the storage unit has 4 unique IP addresses on 2 different VLANs? If I discover all 4 paths, they appear in the GUI as 4 separate SAN servers, and the LUNs seem to show up scattered across all 4 of them. By my simple logic, if I were to lose access to one of those SAN servers, the LUNs that happen to be presented via that SAN server would disappear and no longer be accessible. I know this isn't the case, however, because multipath -ll on the OVM server shows 4 distinct paths to each LUN I'm expecting to see, and I've verified that multipathing works by downing one of the two NICs allocated to iSCSI: two of the four paths fail, but I can still access the disk just fine. Is this just me not setting things up the right way in the GUI, or is the GUI implemented poorly here and in need of a redesign so that it's clear to both myself AND the customer?
    * Has anyone used the Storage Connect plug-ins for either iSCSI or Fibre Channel storage with OVM? What do they actually do for you, and are they easier to implement than unmanaged storage? Is it worth the hassle?

    Here are the notes I had written down:
    == change iSCSI default timeout in /etc/iscsi/iscsid.conf for any future connections ==
    * change node.session.timeo.replacement_timeout from 120 to 5
    #node.session.timeo.replacement_timeout = 120
    node.session.timeo.replacement_timeout = 5
    == identify iSCSI lun's ==
    # iscsiadm -m session
    tcp: [1] xx.xx.xx.xx:3260,4 iqn.1992-04.com.emc:cx.apm00115000338.b9
    tcp: [2] xx.xx.xx.xx:3260,3 iqn.1992-04.com.emc:cx.apm00115000338.b8
    tcp: [3] xx.xx.xx.xx:3260,1 iqn.1992-04.com.emc:cx.apm00115000338.a8
    tcp: [4] xx.xx.xx.xx:3260,2 iqn.1992-04.com.emc:cx.apm00115000338.a9
    == confirm current active timeout value before the change ==
    cat /sys/class/iscsi_session/session*/recovery_tmo
    120
    120
    120
    120
    == manually change timeout on each iSCSI lun for current active connections ==
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.b9 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.b8 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.a8 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.a9 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    == restart iscsi to make changes take effect ==
    service iscsi stop
    service iscsi start
    NOTE: 'service iscsi restart' and '/etc/init.d/iscsi restart' don't seem to work. Only by stopping and then explicitly starting the iscsi service does it seem to work consistently.
    == restart multipathd ==
    # service multipathd restart
    Stopping multipathd daemon: [  OK  ]
    Starting multipathd daemon: [  OK  ]
    == Verify new timeout value on active sessions ==
    cat /sys/class/iscsi_session/session*/recovery_tmo
    5
    5
    5
    5

  • MP-BGP and MPLS multipath load sharing

    Hi,
    I am trying to PoC MPLS multipath load sharing by using per-PE, per-VRF RDs in the network.
    I have a simple lab setup with AS65000, which consists of SITE1 PE1 & PE2 routers (10.250.0.101 and 10.250.0.102), a route reflector RR in the middle (10.250.0.55), and SITE2 PE1 & PE2 routers (10.250.0.201 and 10.250.0.202). The PE routers only do iBGP peering with the centralized route reflector and pass the route to the 10.1.1.0/24 prefix (learned from a single CE router) with RDs 100:1 and 100:2 for the specific VRF.
    The route reflector receives the routes with multiple RDs, makes copies of these routes in order to compare them locally against its configured RD 55:55, uses these routes, and installs multiple paths into its routing table (all PE routers and the RR have "maximum-paths eibgp 4" configured):
    RR#sh ip bgp vpnv4 all
    BGP table version is 7, local router ID is 10.250.0.55
    Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
                  r RIB-failure, S Stale
    Origin codes: i - IGP, e - EGP, ? - incomplete
       Network          Next Hop            Metric LocPrf Weight Path
    Route Distinguisher: 55:55 (default for vrf VRF-A) VRF Router ID 10.250.0.55
    * i10.1.1.0/24      10.250.0.102             0    100      0 65001 i
    *>i                 10.250.0.101             0    100      0 65001 i
    Route Distinguisher: 100:1
    *>i10.1.1.0/24      10.250.0.101             0    100      0 65001 i
    Route Distinguisher: 100:2
    *>i10.1.1.0/24      10.250.0.102             0    100      0 65001 i
    RR#sh ip route vrf VRF-A
    <output omitted>
         10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
    B       10.1.1.0/24 [200/0] via 10.250.0.102, 00:45:52
                              [200/0] via 10.250.0.101, 00:46:22
    BUT, for some reason the RR doesn't reflect the routes with multiple RDs down to SITE2 PE1 & PE2, its own clients:
    RR#sh ip bgp vpnv4 all neighbors 10.250.0.201 advertised-routes
    Total number of prefixes 0
    RR#sh ip bgp vpnv4 all neighbors 10.250.0.202 advertised-routes
    Total number of prefixes 0
    Here comes RR BGP configuration:
    router bgp 65000
    no synchronization
    bgp router-id 10.250.0.55
    bgp cluster-id 1.1.1.1
    bgp log-neighbor-changes
    neighbor 10.250.0.101 remote-as 65000
    neighbor 10.250.0.101 update-source Loopback0
    neighbor 10.250.0.101 route-reflector-client
    neighbor 10.250.0.101 soft-reconfiguration inbound
    neighbor 10.250.0.102 remote-as 65000
    neighbor 10.250.0.102 update-source Loopback0
    neighbor 10.250.0.102 route-reflector-client
    neighbor 10.250.0.102 soft-reconfiguration inbound
    neighbor 10.250.0.201 remote-as 65000
    neighbor 10.250.0.201 update-source Loopback0
    neighbor 10.250.0.201 route-reflector-client
    neighbor 10.250.0.201 soft-reconfiguration inbound
    neighbor 10.250.0.202 remote-as 65000
    neighbor 10.250.0.202 update-source Loopback0
    neighbor 10.250.0.202 route-reflector-client
    neighbor 10.250.0.202 soft-reconfiguration inbound
    no auto-summary
    address-family vpnv4
      neighbor 10.250.0.101 activate
      neighbor 10.250.0.101 send-community both
      neighbor 10.250.0.102 activate
      neighbor 10.250.0.102 send-community both
      neighbor 10.250.0.201 activate
      neighbor 10.250.0.201 send-community both
      neighbor 10.250.0.202 activate
      neighbor 10.250.0.202 send-community both
    exit-address-family
    address-family ipv4 vrf VRF-A
      maximum-paths eibgp 4
      no synchronization
      bgp router-id 10.250.0.55
      network 10.255.1.1 mask 255.255.255.255
    exit-address-family
    SITE1 PE1 configuration:
    router bgp 65000
    no synchronization
    bgp router-id 10.250.0.101
    bgp log-neighbor-changes
    neighbor 10.250.0.55 remote-as 65000
    neighbor 10.250.0.55 update-source Loopback0
    neighbor 10.250.0.55 soft-reconfiguration inbound
    no auto-summary
    address-family vpnv4
      neighbor 10.250.0.55 activate
      neighbor 10.250.0.55 send-community both
    exit-address-family
    address-family ipv4 vrf VRF-A
      neighbor 10.1.101.2 remote-as 65001
      neighbor 10.1.101.2 activate
      neighbor 10.1.101.2 soft-reconfiguration inbound
      maximum-paths eibgp 4
      no synchronization
      bgp router-id 10.250.0.101
    exit-address-family
    The SITE1 PE2 configuration is similar to SITE1 PE1. They both do eBGP peering with a dual-homed CE router in AS65001, which announces the 10.1.1.0/24 prefix into the VRF-A table.
    My question is this: clearly, the issue is that the RR doesn't reflect any routes to its clients (SITE2 PE1 & PE2) for the 10.1.1.0/24 prefix with RDs 100:1 and 100:2, which don't match its locally configured RD 55:55 for VRF-A, although they are present in its BGP/RIB tables and used for multipathing. Is this expected behavior, or some feature limitation of a specific platform or IOS version? Currently, in this test lab setup, I run IOS 12.4(24)T8 on all the devices.
    Please let me know if any further details are needed to get an idea of why this well-known and widely used feature is not working correctly in my case. Thanks a lot!
    Regards,
    Sergey

    Hi Ashish,
    I tried to remove the VRF and address-family configurations completely from the RR.
    router bgp 65000
    no synchronization
    bgp router-id 10.250.0.55
    bgp cluster-id 1.1.1.1
    bgp log-neighbor-changes
    neighbor 10.250.0.101 remote-as 65000
    neighbor 10.250.0.101 update-source Loopback0
    neighbor 10.250.0.101 route-reflector-client
    neighbor 10.250.0.101 soft-reconfiguration inbound
    neighbor 10.250.0.102 remote-as 65000
    neighbor 10.250.0.102 update-source Loopback0
    neighbor 10.250.0.102 route-reflector-client
    neighbor 10.250.0.102 soft-reconfiguration inbound
    neighbor 10.250.0.201 remote-as 65000
    neighbor 10.250.0.201 update-source Loopback0
    neighbor 10.250.0.201 route-reflector-client
    neighbor 10.250.0.201 soft-reconfiguration inbound
    neighbor 10.250.0.202 remote-as 65000
    neighbor 10.250.0.202 update-source Loopback0
    neighbor 10.250.0.202 route-reflector-client
    neighbor 10.250.0.202 soft-reconfiguration inbound
    no auto-summary
    address-family vpnv4
      neighbor 10.250.0.101 activate
      neighbor 10.250.0.101 send-community both
      neighbor 10.250.0.102 activate
      neighbor 10.250.0.102 send-community both
      neighbor 10.250.0.201 activate
      neighbor 10.250.0.201 send-community both
      neighbor 10.250.0.202 activate
      neighbor 10.250.0.202 send-community both
    exit-address-family
    After this, the RR doesn't accept any routes at all from the S1PE1 & S1PE2 routers, and thus doesn't reflect any routes down to its clients S2PE1 & S2PE2 either:
    S1PE1#sh ip bgp vpnv4 all
    BGP table version is 6, local router ID is 10.250.0.101
    Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
                  r RIB-failure, S Stale
    Origin codes: i - IGP, e - EGP, ? - incomplete
       Network          Next Hop            Metric LocPrf Weight Path
    Route Distinguisher: 100:1 (default for vrf VRF-A) VRF Router ID 10.250.0.101
    *> 10.1.1.0/24      10.1.101.2               0             0 65001 i
    S1PE1#sh ip bgp vpnv4 all neighbors 10.250.0.55 advertised-routes
    BGP table version is 6, local router ID is 10.250.0.101
    Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
                  r RIB-failure, S Stale
    Origin codes: i - IGP, e - EGP, ? - incomplete
       Network          Next Hop            Metric LocPrf Weight Path
    Route Distinguisher: 100:1 (default for vrf VRF-A) VRF Router ID 10.250.0.101
    *> 10.1.1.0/24      10.1.101.2               0             0 65001 i
    Total number of prefixes 1
    S1PE2#sh ip bgp vpnv4 all
    BGP table version is 6, local router ID is 10.250.0.102
    Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
                  r RIB-failure, S Stale
    Origin codes: i - IGP, e - EGP, ? - incomplete
       Network          Next Hop            Metric LocPrf Weight Path
    Route Distinguisher: 100:2 (default for vrf VRF-A) VRF Router ID 10.250.0.102
    *> 10.1.1.0/24      10.1.201.2               0             0 65001 i
    S1PE2#sh ip bgp vpnv4 all neighbors 10.250.0.55 advertised-routes
    BGP table version is 6, local router ID is 10.250.0.102
    Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
                  r RIB-failure, S Stale
    Origin codes: i - IGP, e - EGP, ? - incomplete
       Network          Next Hop            Metric LocPrf Weight Path
    Route Distinguisher: 100:2 (default for vrf VRF-A) VRF Router ID 10.250.0.102
    *> 10.1.1.0/24      10.1.201.2               0             0 65001 i
    Total number of prefixes 1
    RR#sh ip bgp vpnv4 all
    RR#sh ip bgp vpnv4 all neighbors 10.250.0.101 routes
    Total number of prefixes 0
    RR#sh ip bgp vpnv4 all neighbors 10.250.0.102 routes
    Total number of prefixes 0
    Any feedback is appreciated. Thanks.
    Regards,
    Sergey
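    One knob that often comes up in this kind of situation (an assumption on my part, not something confirmed in the thread) is the automatic route-target filter: a router drops VPNv4 routes whose RTs it does not import locally, and on some IOS releases a VPNv4 RR without the relevant VRFs configured needs that filter disabled explicitly. A sketch of the change on the RR:
    router bgp 65000
     no bgp default route-target filter
    After that the RR should retain and reflect VPNv4 routes regardless of its local RT import configuration; whether it also explains the per-RD reflection problem seen with the VRF configured would need to be verified in the lab.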

  • iSCSI multipath issue: all sessions created based only on the first NIC

    Hello everyone. I have an x86 host (X4200) with Solaris 10 U6 and 2 Intel NICs, and the array is an EqualLogic PS5000. The EqualLogic array has only one group IP and only one target port (3260), so after I configured all host NICs and the array group IP in the same subnet, I did:
    1. iscsiadm add discovery-address ARRAY GROUP IP;
    2. iscsiadm modify initiator-node -c 2;
    3. iscsiadm modify discovery -t enable -s enable,
    I found that every iSCSI session was created based on NIC1; only after I disabled NIC1 were all sessions created based on NIC2, not one session based on NIC1 and one session based on NIC2. What is the problem?
    I had tried iscsiadm modify target-param -c NIC1IP.NIC2IP targetname, and it did not help either.
    Does anyone have experience with this? Any reply is very much appreciated.
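    For what it's worth, and as an assumption rather than a confirmed fix: on Solaris 10 the -c option of 'iscsiadm modify initiator-node' also accepts a comma-separated list of local IP addresses instead of a plain session count, which ties one configured session to each address (and therefore each NIC). A sketch with placeholder addresses:
    iscsiadm modify initiator-node -c NIC1-IP,NIC2-IP
    iscsiadm list target -v      # verify that one session now uses each local address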

    Since the user is logging in again from the same machine, I think invalidating the previous session won't work.
    Suppose the user opens a first instance of the Mozilla browser and logs in as, say, 'ashok', whose role is normal user. Now he opens another instance of Mozilla by clicking on the executable and logs in as 'mitch', whose role is admin. Then, after the successful login of 'mitch', the first window displays mitch's menu items instead of ashok's.
    What I am doing is this:
    While creating a new session on login, I first check whether an existing session is available; in the above case this is true.
    session = request.getSession(false); // returns the existing session, or null if there is none
    System.out.println("Session object: " + session);
    if (session != null) {
        System.out.println("Session ID Old: " + session.getId());
        session.invalidate(); // invalidate the existing session (unbinds its attributes)
    }
    session = request.getSession(true); // create a new session
    System.out.println("Session ID New: " + session.getId());
    So control goes inside the if block; there I first invalidate the existing session (this does not destroy the session but only unbinds the information that was stored in it) and then create a new session (which returns the same session that already exists) and save mitch's information. Since the previous instance was using the same session, it now gets mitch's information, so both instances show mitch's information.
    I have not found any way to destroy the existing session so that a new session ID is generated next time.

  • SC 3.2 and iSCSI RAID

    Does SC 3.2 support iSCSI interface to a RAID? To give a real world example: Can I set up a cluster of two 5220 servers using StorageTek 2510 RAID as a shared storage?

    From the pathname of your passwd file, I assume it is on a global filesystem, and not local.
    This global filesystem very probably has an HAStoragePlus resource associated with it.
    Your HA LDom resource must have a "Resource_dependencies" setting on that HAStoragePlus resource. Otherwise the clrs create command must assume that, as the global filesystem has not been mounted yet, the passwd file does not exist. If this is not true for the local node, what about the other node?
    The error message, though, is a bit strange.
    Hartmut
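    If the dependency is indeed missing, a minimal sketch of adding it to an existing resource (the names ldom-rs and hasp-rs are placeholders for your HA LDom and HAStoragePlus resources):
    clrs set -p Resource_dependencies=hasp-rs ldom-rs
    clrs show -v ldom-rs      # verify the dependency is now listed
    That way the LDom resource is validated and started only after the global filesystem holding the passwd file is mounted.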

  • ZFS Mountpoints and iSCSI

    It seems that when I create a ZFS file system with a command like 'zfs create -V 2g mypool/iscsi_vol_0' I can export it with iscsitadm. It also shows a mountpoint of '-' when I do a 'zfs list'.
    -bash-3.00# zfs list
    NAME                       USED  AVAIL  REFER  MOUNTPOINT
    mypool                    4.00G  62.4G    18K  /mypool
    mypool/iscsi_vol_0     2G  62.6G  1.84G  -
    mypool/iscsi_vol_1      18K  10.0G    18K  none
    mypool/iscsi_vol_2     2G  64.4G    30K  -
    If I create a ZFS file system with a command like 'zfs create mypool/iscsi_vol_1 -o quota=10G' it gets mounted, so I issue a 'zfs set mountpoint=none mypool/iscsi_vol_1' and check whether it is mounted with 'ls /mypool/' or 'mount' and it is not, yet I still can't export it:
    # iscsitadm create target -b /dev/zvol/dsk/mypool/iscsi_vol_1 name-tgt1
    iscsitadm: Error Failed to stat(2) backing for 'disk'
    What is the significance of the "-" as a mountpoint that allows the target to be exported via iSCSI?
    Thanks!

    jcasale wrote:
    It seems when I create a ZFS file system with a command like 'zfs create -V 2g mypool/iscsi_vol_0' I can export it with iscsitadm. It also shows a mountpoint of '-' when I do a 'zfs list'.
    Right, the -V makes this a "volume". It's just an empty set of blocks (with a specific size). A normal filesystem allocates space in the pool as necessary, depending on the files that are added. Since it's a set of blocks, you can't access it directly through a mount point. You'd have to put a filesystem on it to do that.
    If I create a ZFS file system with a command like 'zfs create mypool/iscsi_vol_1 -o quota=10G' it gets mounted, so I issue a 'zfs set mountpoint=none mypool/iscsi_vol_1' and check whether it's mounted with 'ls /mypool/' or 'mount' and it's not, yet I still can't export it?
    iSCSI (and SCSI) give access to block devices, not filesystems. So only volumes can be used over iSCSI, not normal ZFS (or other) filesystems.
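    In other words, the iSCSI backing store has to be a zvol rather than a filesystem dataset. A short sketch on Solaris 10 (the size and the iscsi_vol_3 name are just examples):
    zfs create -V 10g mypool/iscsi_vol_3                                  # a volume: 'zfs list' shows '-' as its mountpoint
    iscsitadm create target -b /dev/zvol/dsk/mypool/iscsi_vol_3 tgt3      # export the zvol as an iSCSI target
    zfs set shareiscsi=on mypool/iscsi_vol_3                              # alternatively, let ZFS create the target itself
    A dataset created without -V is a filesystem, not a block device, which is why iscsitadm fails to stat(2) it as a backing store.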
    Darren
