Cisco UCS iSCSI boot question

I am having trouble getting this to work.  I don't control the iSCSI side of the equation, so I am just trying to make sure everything is correct on the UCS side.  When we boot we get an "Initialize error 1".
If I attach to the FI via SSH I am able to ping the target IPs. The SAN administrator says that the UCS has to register itself first (which isn't occurring) before he gives it space. Everything I have seen suggests the LUNs and masking are created prior to boot... is this required?
Thanks,
Joe

UCSM version
2.0(2r)
We have another blade in the other chassis that has ESX installed on the local drive and is using the iSCSI SAN as its datastore, so I know I have connectivity to the SAN.  The storage array is an EMC CX4-120.
adapter 2/1/1 (mcp):1# iscsi_get_config
vnic iSCSI Configuration:
vnic_id: 5
          link_state: Up
       Initiator Cfg:
     initiator_state: ISCSI_INITIATOR_READY
initiator_error_code: ISCSI_BOOT_NIC_NO_ERROR
                vlan: 0
         dhcp status: false
                 IQN: iqn.1992-04.com.cisco:2500
             IP Addr: 192.168.0.109
         Subnet Mask: 255.255.255.0
             Gateway: 192.168.0.2
          Target Cfg:
          Target Idx: 0
               State: INVALID
          Prev State: ISCSI_TARGET_GET_SESSION_INFO
        Target Error: ISCSI_TARGET_LOGIN_ERROR
                 IQN: iqn.1992-04.com.emc:cx.apm00101701431:a2
             IP Addr: 192.168.0.2
                Port: 3260
            Boot Lun: 0
          Ping Stats: Success (19.297ms)
          Target Idx: 1
               State: INVALID
          Prev State: ISCSI_TARGET_GET_SESSION_INFO
        Target Error: ISCSI_TARGET_LOGIN_ERROR
                 IQN: iqn.1992-04.com.emc:cx.apm00101701431:b2
             IP Addr: 192.168.0.3
                Port: 3260
            Boot Lun: 0
          Ping Stats: Success (18.229ms)
adapter 2/1/1 (mcp):2#
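For reference, the output above comes from the adapter CLI. The usual path there from an SSH session to the FI looks like this (prompt names are illustrative; adjust the chassis/blade/adapter IDs to your setup):

    FI-A# connect adapter 2/1/1
    adapter 2/1/1 # connect
    adapter 2/1/1 (top):1# attach-mcp
    adapter 2/1/1 (mcp):1# iscsi_get_config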

Similar Messages

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
Servers without local storage run cooler, and the extra space can be used for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
Thank you for asking this question. Most of the TAC cases we see for boot-from-SAN failures are due to misconfiguration.
So our methodology is to verify the configuration and troubleshoot from the server, to the storage switches, to the storage array.
Before diving into troubleshooting, make sure there is a clear understanding of the topology. This is vital in any troubleshooting scenario. Know what devices you have and how they are connected, how many paths are connected, switch/NPV mode, and so on.
Always troubleshoot one path at a time, and verify that the setup is compliant with the SW/HW interoperability matrix tested by Cisco.
Step 1: Check at the server
a. Make sure you have a uniform firmware version across all components of UCS.
b. Verify that the VSAN is created and the FC uplinks are configured correctly. VSANs/FCoE VLANs should be unique per fabric.
c. Verify the vHBA configuration at the service-profile level - each fabric's vHBA should have a unique VSAN number.
Note down the WWPN of your vHBA. This will be needed in step 2 for zoning on the SAN switch and in step 3 for LUN masking on the storage array.
d. Verify that the boot policy of the service profile is configured to boot from SAN - the boot order and its parameters, such as LUN ID and WWN, are extremely important.
e. Finally, at the UCS CLI, verify the FLOGI of the vHBAs (for NPV mode the command, from nxos, is: show npv flogi-table); see the example below.
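For instance (an illustrative session from the UCSM CLI; fabric A assumed):

    UCS-A# connect nxos a
    UCS-A(nxos)# show npv flogi-table
    (each vHBA should be listed with its WWPN, VSAN, and an assigned FCID)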
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
e. The most important configuration to check on the storage switch is the zoning.
Zoning is basically access control between our initiators and targets. The most common design is to configure one zone per initiator/target pair.
Zoning requires you to configure a zone, put that zone into your current zoneset, and then activate it (command - show zoneset active vsan X); see the example below.
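A sketch on an MDS/N5k (the zone name, VSAN, and WWPNs are hypothetical; the first member is the vHBA initiator, the second the array target port):

    switch# configure terminal
    switch(config)# zone name ESX1_vHBA0_EMC_SPA vsan 100
    switch(config-zone)# member pwwn 20:00:00:25:b5:aa:00:01
    switch(config-zone)# member pwwn 50:06:01:60:3b:24:11:22
    switch(config-zone)# exit
    switch(config)# zoneset name FABRIC_A vsan 100
    switch(config-zoneset)# member ESX1_vHBA0_EMC_SPA
    switch(config-zoneset)# exit
    switch(config)# zoneset activate name FABRIC_A vsan 100
    switch(config)# exit
    switch# show zoneset active vsan 100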
    Step 3: Check at Storage Array
When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
LUN masking is a crucial step on the storage array: it gives a particular host (server) access to a specific LUN.
Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (per Steps 1 and 2),
the following needs to be verified at the storage array:
a. Are the WWPNs of the initiators (the hosts' vHBAs) visible on the storage array?
b. If yes, is LUN masking applied?
c. What LUN number is presented to the host? This is the number we see as the LUN ID in the 'Boot Order' of Step 1.
The document below has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 

  • ISCSI- Adding iSCSI SRs to XS booting via iSCSI [Cisco UCS]

    Hello all-
I am currently running XS 6.0.2 with hotfixes XS602E001 and XS602E002 applied, on Cisco UCS B200-M3 blades. XS seems to be functioning OK, except that I cannot see the subnet assigned to my SAN, and as a result I cannot add any SRs for VM storage.
The subnet and vNIC I am configuring are also used for the iSCSI boot.
The vNIC is on LAN 2 and is set to Native.
The SAN is a LeftHand P4300 connected directly to appliance ports (LAN 2) on the fabrics.
I used the following commands during installation to get the installer to see the LUN:
    echo "InitiatorName=my.lun.x.x.x" > /etc/iscsi/initiatorname.iscsi
    /opt/xensource/installer/init --use_ibft
If I missed anything or more info is needed, please let me know.

    Thanks Padramas,
I have 2 NICs that show up in XenCenter. NIC A is the vNIC associated with VLAN 1 and provides the uplink to the rest of the network. The second NIC in XenCenter matches the MAC of the vNIC used to boot XenServer, so the NIC used for booting the OS is visible in XenCenter.
I have tried many different things on my own to resolve this issue. I believe I have already tried adding a 3rd vNIC to the service profile for VM iSCSI traffic, but I will make another attempt.
When configuring the vNIC for VM iSCSI traffic, do I only need to add the 3rd vNIC, or do I need to create both a vNIC and a second iSCSI vNIC with the overlay vNIC pointing to the newly created (3rd) vNIC? My understanding is that the iSCSI vNICs are only used for booting, but I am not 100% sure.
    Thanks again for the help!

  • ISCSI Booting Win2k8R2 on UCS with EqualLogic

    We've been testing iSCSI booting on UCS with 2.0(1t) and the Palo M81KR with an EqualLogic PS6010. We are connecting through a 10GbE switch via an uplink port. However, we've hit a snag and want to see if anyone else has gotten around this.
    The EQL shows connections from UCS when booting; however, Windows does not have the Palo drivers and doesn't show the target disk. When adding the M81KR driver, the connection gets dropped so Windows still does not show the target disk.
We've tried connecting the array directly to an appliance port on the FI but got an "ENM source pinning failed" error, which is a different issue at this point.
    We've also tried slipstreaming all UCS drivers into the Windows media but that doesn't let us proceed either.
    We've gotten RHEL6 to install so we know the configuration is viable.

    Hello Thomas,
    The EQL shows connections from UCS when booting; however, Windows does not have the Palo drivers and doesn't show the target disk. When adding the M81KR driver, the connection gets dropped so Windows still does not show the target disk.
There is a small trick to make the target disks visible after installing the right drivers. Make sure that after re-mapping the W2K8 ISO image you click the Refresh button on the installer screen and verify the outcome.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/b/os/windows/install/2008-vmedia-install.html#wp1059547
    Driver location in driver ISO image
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/b/os/windows/install/drivers-app.html#wp1014709
    Padma

  • Cisco UCS C220-M3 Server doesn't boot after adding Intel I350 Quad Port NIC

    Hello,
We have purchased two Cisco UCS C220-M3 rack-mount servers (single processor) that have a dual-port I350 LOM. We also purchased two Intel I350 quad-port NICs to expand the available ports on each server from 2 to 6.
The problem we are facing is that the server will not complete the boot process unless we disable either the LOM or the quad-port NIC. We have tried numerous settings in the BIOS but haven't managed to get both working.
The I350 quad-port NICs are from Intel, not from Cisco, and we are running the latest FW/BIOS on the servers as of March 10, 2015.
    We have followed the steps described in the installation guide to install them.
    Could you provide some light in this situation please, what are our options?
P.S. Although the LOMs are I350, the MAC address vendor is Cisco. Is it possible that Cisco servers accept only Cisco-branded expansion cards?
    Thank you,
    George

    Hi Shawn,
    At least I'm not the only one having the same problem!
Are you trying to use the cards with the LOMs enabled?
    Which screen are you referring to in the first paragraph of your message?
    Mine sits in BIOS POST for about 10 minutes and then resets itself....
    We are also waiting for an answer from Cisco, so any findings/news will be posted here.
    Thanks,
    George

  • Cisco UCS FC Direct Attach Question

We are looking at Cisco UCS as a replacement for our existing servers and SAN switches. As I understand it, the fabric interconnect can replace our existing SAN switches, and we will still be able to zone the ports just like we do on our SAN switches today.
    Can someone confirm how using the fabric interconnects as a replacement for our SAN switches will work? I read that the fabric interconnects have to be in switch mode for this to work. How does this affect the other connections we will have to our Ethernet network?
    Thanks.

Q1: The 6120s run in NPV (N_Port Virtualization) mode, which means they act as a proxy for the blade vHBAs and don't participate in FLOGI (fabric logins). When a fabric switch runs in this mode it requires an uplink to a fabric switch for FLOGI and zoning. Because of this you cannot direct-attach the 6120 to an array's front-end fiber ports.
    Q2: Yes you can uplink the 6120 to a Brocade fabric as long as the Brocade supports NPIV.

  • Ask the Expert: Cisco UCS B-Series Latest Version New Features

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco UCS Manager 2.2(1) release, which delivers several important features and major enhancements in the fabric, compute, and operational areas. Some of these features include fabric scaling, VLANs, VIFs, IGMP groups, network endpoints, unidirectional link detection (UDLD) support, support for virtual machine queue (VMQ), direct connect C-Series to FI without FEX, direct KVM access, and several other features.
    Teclus Dsouza is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has over 15 years of total IT experience. He has worked across different technologies and a wide range of data center products. He is an expert in Cisco Nexus 1000V and Cisco UCS products. He has more than 6 years of experience on VMware virtualization products.  
    Chetan Parik is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has seven years of total experience. He has worked on a wide range of Cisco data center products such as Cisco UCS and Cisco Nexus 1000V. He also has five years of experience on VMware virtualization products.
    Remember to use the rating system to let Teclus and Chetan know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through May 9, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Jackson,
Yes, it is possible. Connect the storage array to the fabric interconnects using two 10 Gb links per storage processor: connect each SP to both fabric interconnects, and configure the ports on the fabric interconnects as “Appliance” ports from UCSM; a CLI sketch follows below.
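From the UCSM CLI the equivalent looks roughly like this (a sketch; the fabric, slot, and port numbers are hypothetical):

    UCS-A# scope eth-storage
    UCS-A /eth-storage # scope fabric a
    UCS-A /eth-storage/fabric # create interface 1 20
    UCS-A /eth-storage/fabric/interface* # commit-buffer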
    For more information on how to connect Netapp storage using other protocols like iSCSI or FCOE  please check the url below.
    http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6100-series-fabric-interconnects/whitepaper_c11-702584.html
    Regards
    Teclus Dsouza

  • Monitoring Cisco UCS Manager via HP System Information Manager 7.1 (SIM)

    I am working with a customer to configure HP System Information Manager 7.1 (SIM) to monitor their Cisco UCS Manager.
    The customer is looking to monitor the following:
    - CPU Utilization on manager, blades, servers, etc...
    - Memory utilization
    - Network utilization
    - System inventory
    Alerting is needed for the following:
    - Hardware failures: memory, power supply, drive, etc...
    - Predictive failures
    - Alert messages
I have the list of all the MIBs provided by Cisco but am having the following issues while loading them into HP SIM.
    While loading MIB "CISCO-UNIFIED-COMPUTING-TC-MIB" I get the following error message:
    Line 128: Error defining object: expected a label, found reserved symbol {
    Line in MIB: SYNTAX Gauge32 {
Gauge32 is imported from the SNMPv2-SMI MIB.
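The offending construct looks roughly like this; SMIv2 only allows named numbers on INTEGER, which is why strict parsers reject them on Gauge32 (the TC name and values here are hypothetical):

    -- As shipped (rejected by HP SIM's MIB compiler):
    CucsExampleState ::= TEXTUAL-CONVENTION
        STATUS      current
        DESCRIPTION "Example state TC."
        SYNTAX      Gauge32 { unknown(0), up(1), down(2) }

    -- Flattened workaround: drop the named-number list, keep the base type:
    CucsExampleState ::= TEXTUAL-CONVENTION
        STATUS      current
        DESCRIPTION "Example state TC."
        SYNTAX      Gauge32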
To get past this error I found a version of the MIB that removes all the textual conventions that were causing errors.  I have attached the fixed MIB file to this discussion. With the fixed version of the MIB installed in SIM, everything compiles and installs except the following two MIBs: CISCO-UNIFIED-COMPUTING-NOTIFS-MIB and CISCO-UNIFIED-COMPUTING-CONFORM-MIB.
Questions:
    1. Is there any way to get the CISCO-UNIFIED-COMPUTING-TC-MIB MIB to install correctly into HP SIM?
    2. Is my MIB load order setup correctly?
    3. Has anyone had success getting HP SIM to monitor and alert for Cisco UCS manager?
    MIB Load Order:
    SNMPv2-SMI
    SNMPv2-TC
    SNMP-FRAMEWORK-MIB
    RFC1213-MIB
    IF-MIB
    CISCO-SMI
    CISCO-ST-TC
    ENTITY-MIB
    INET-ADDRESS-MIB
    CISCO-UNIFIED-COMPUTING-MIB
    CISCO-UNIFIED-COMPUTING-TC-MIB
    CISCO-UNIFIED-COMPUTING-FAULT-MIB
    CISCO-UNIFIED-COMPUTING-NOTIFS-MIB
    CISCO-UNIFIED-COMPUTING-AAA-MIB
    CISCO-UNIFIED-COMPUTING-ADAPTOR-MIB
    CISCO-UNIFIED-COMPUTING-BIOS-MIB
    CISCO-UNIFIED-COMPUTING-BMC-MIB
    CISCO-UNIFIED-COMPUTING-CALLHOME-MIB
    CISCO-UNIFIED-COMPUTING-CAPABILITY-MIB
    CISCO-UNIFIED-COMPUTING-COMM-MIB
    CISCO-UNIFIED-COMPUTING-COMPUTE-MIB
    CISCO-UNIFIED-COMPUTING-CONFORM-MIB
    CISCO-UNIFIED-COMPUTING-DCX-MIB
    CISCO-UNIFIED-COMPUTING-DHCP-MIB
    CISCO-UNIFIED-COMPUTING-DIAG-MIB
    CISCO-UNIFIED-COMPUTING-DPSEC-MIB
    CISCO-UNIFIED-COMPUTING-EPQOS-MIB
    CISCO-UNIFIED-COMPUTING-EQUIPMENT-MIB
    CISCO-UNIFIED-COMPUTING-ETHER-MIB
    CISCO-UNIFIED-COMPUTING-EVENT-MIB
    CISCO-UNIFIED-COMPUTING-EXTMGMT-MIB
    CISCO-UNIFIED-COMPUTING-EXTVMM-MIB
    CISCO-UNIFIED-COMPUTING-FABRIC-MIB
    CISCO-UNIFIED-COMPUTING-FC-MIB
    CISCO-UNIFIED-COMPUTING-FCPOOL-MIB
    CISCO-UNIFIED-COMPUTING-FIRMWARE-MIB
    CISCO-UNIFIED-COMPUTING-FLOWCTRL-MIB
    CISCO-UNIFIED-COMPUTING-HOSTIMG-MIB
    CISCO-UNIFIED-COMPUTING-IMGPROV-MIB
    CISCO-UNIFIED-COMPUTING-IMGSEC-MIB
    CISCO-UNIFIED-COMPUTING-IPPOOL-MIB
    CISCO-UNIFIED-COMPUTING-IQNPOOL-MIB
    CISCO-UNIFIED-COMPUTING-ISCSI-MIB
    CISCO-UNIFIED-COMPUTING-LICENSE-MIB
    CISCO-UNIFIED-COMPUTING-LLDP-MIB
    CISCO-UNIFIED-COMPUTING-LSBOOT-MIB
    CISCO-UNIFIED-COMPUTING-LSMAINT-MIB
    CISCO-UNIFIED-COMPUTING-LS-MIB
    CISCO-UNIFIED-COMPUTING-MACPOOL-MIB
    CISCO-UNIFIED-COMPUTING-MAPPINGS-MIB
    CISCO-UNIFIED-COMPUTING-MEMORY-MIB
    CISCO-UNIFIED-COMPUTING-MGMT-MIB
    CISCO-UNIFIED-COMPUTING-NETWORK-MIB
    CISCO-UNIFIED-COMPUTING-NWCTRL-MIB
    CISCO-UNIFIED-COMPUTING-ORG-MIB
    CISCO-UNIFIED-COMPUTING-OS-MIB
    CISCO-UNIFIED-COMPUTING-PCI-MIB
    CISCO-UNIFIED-COMPUTING-PKI-MIB
    CISCO-UNIFIED-COMPUTING-PORT-MIB
    CISCO-UNIFIED-COMPUTING-POWER-MIB
    CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB
    CISCO-UNIFIED-COMPUTING-PROC-MIB
    CISCO-UNIFIED-COMPUTING-QOSCLASS-MIB
    CISCO-UNIFIED-COMPUTING-SOL-MIB
    CISCO-UNIFIED-COMPUTING-STATS-MIB
    CISCO-UNIFIED-COMPUTING-STORAGE-MIB
    CISCO-UNIFIED-COMPUTING-SW-MIB
    CISCO-UNIFIED-COMPUTING-SYSDEBUG-MIB
    CISCO-UNIFIED-COMPUTING-SYSFILE-MIB
    CISCO-UNIFIED-COMPUTING-TOP-MIB
    CISCO-UNIFIED-COMPUTING-TRIG-MIB
    CISCO-UNIFIED-COMPUTING-UUIDPOOL-MIB
    CISCO-UNIFIED-COMPUTING-VM-MIB
    CISCO-UNIFIED-COMPUTING-VNIC-MIB
    References:
    ftp://ftp.cisco.com/pub/mibs/supportlists/ucs/ucs-manager-supportlist.html#_Toc303691433
    http://www.hp.com/wwsolutions/misc/hpsim-helpfiles/simsnmp.pdf

    Please post "debug ccsip messages".
    Based on your debug you are getting "Cause No. 38 - network out of order."
    You may want to bind SIP to an interface that the IP address is defined which Lync points to.
    Chris

  • Cisco UCS architecture

    Hi Everyone,
I am very new to data center studies and I am trying to build up my concepts related to Cisco UCS. These are probably silly questions, but I want to know:
1. What is the difference between a "Service Profile" and a virtual machine? Are they both different names for the same concept?
2. What is the concept behind a virtual switch? Is it used to connect VMs or Service Profiles? If so, is it installed on a server over a VM/SP or something else?
3. I would appreciate it if somebody could share a logical diagram showing the overall concept of an integrated service profile, virtual switch, vNIC, vEth, blade servers, etc.
    Thank you very much.

Please see https://supportforums.cisco.com/thread/2270865?tstart=0 where you will find answers to all of your questions.
1. What is the difference between a "Service Profile" and a virtual machine? Are they both different names for the same concept?
No, not at all. An SP is a new concept introduced with the UCS architecture; it allows you to abstract the hardware of a server. It defines a template and/or a logical server: e.g. number of vHBAs, number of vNICs, MAC addresses, pWWN/nWWN, BIOS version, boot policy/boot order. The values of MAC, pWWN, nWWN, and UUID are taken from predefined pools and are therefore hardware independent. You then associate the SP with a physical blade, which imposes all of the above configuration on the real physical server. The relationship between an SP and a physical server is 1 to 1; if you need 10 ESXi servers, you need 10 SPs.
UCS is OS agnostic; it has no idea what the installed OS is. Therefore there are no OS-specific agents in UCS.
VMs appear in the context of server virtualization and are completely different from SPs.
    see also http://www.youtube.com/watch?v=0YGJlP2q5Go
2. What is the concept behind a virtual switch? Is it used to connect VMs or Service Profiles? If so, is it installed on a server over a VM/SP or something else?
Each hypervisor (Hyper-V, ESXi, Xen, ...) uses a virtual switch (software); it is required to locally switch traffic between two VMs on the same physical host. VMs connect to the virtual switch.
    3. I would appreciate if somebody could share a logical diagram showing overall concept of integrated service profile, virtual switch, VNIC, vEth, blade servers, etc.

  • ISCSI boot support on vic P81e

    Hi,
Is it supported to use the P81e for iSCSI boot on a C200 M2?
The last time I checked it was not supported, and it was said to be on the roadmap. I can't seem to find the release notes for it.
If it is supported now, what version of the C-Series software do I need to use?
    thanks

    According to the support matrix, it's not supported.
    http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/interoperability/matrix/r_hcl_B_rel2-21.pdf
    my 2c
    - even the VIC1225 doesn't show any iSCSI Boot support
- the VIC1225 is a prerequisite for single-wire management, therefore I doubt that there will be a lot of new driver development going on for the P81e; Cisco PM, please clarify!

  • Iscsi booting vmware

I am trying to set up a UCS blade to boot ESXi off an iSCSI drive.  I have the iSCSI boot set up and it looks good, but when I boot ESXi it does not show the iSCSI drive as available to install to.  Any ideas where I should be looking?
Forgot to mention, it's ESXi 5.5 and UCS 2.2.2c.

    Everything looks ok:
    vnic iSCSI Configuration: 
    vnic_id: 17
              link_state: Up
           Initiator Cfg: 
         initiator_state: ISCSI_INITIATOR_READY
    initiator_error_code: ISCSI_BOOT_NIC_NO_ERROR
                    vlan: 0
             dhcp status: false
                     IQN: iqn.1992-08.cisco.com:2501
                 IP Addr: 172.25.30.199
             Subnet Mask: 255.255.255.0
                 Gateway: 172.25.30.1
              Target Cfg: 
              Target Idx: 0
                   State: ISCSI_TARGET_READY
              Prev State: ISCSI_TARGET_DISABLED
            Target Error: ISCSI_TARGET_NO_ERROR
                     IQN: c240freenas.local:ucs
                 IP Addr: 172.25.30.198
                    Port: 3260
                Boot Lun: 0
              Ping Stats: Success (12.127ms)
            Session Info: 
              session_id: 0
             host_number: 0
              bus_number: 0
               target_id: 0

  • Cisco UCS C220 with VIC 1225

    Hello,
We have a Cisco UCS C220 M3 with a VIC 1225 card installed. We connected only one 10G port of the VIC 1225 to a Nexus 5548UP switch, but I do not see this port on the switch, and in CIMC the status for this port shows Link Down. I wanted to do iSCSI boot from a NetApp FAS2240, but since I don't see this port on my Nexus switch I am unable to boot from the NetApp. I tried both ports on the VIC 1225 with the same result. I need your help on this; I know I am missing something very simple but can't figure it out.
    Thanks,
    Salman

    Hi Kenny,
Here are the answers:
    -What slot is this card installed in?
Mezzanine
    -How many CPUs does the server have installed? (1 OR 2 CPUs can make the difference)
    2x 2.90 GHz E5-2690/135W 8C/20MB Cache/DDR3 1600MHz
    -Is the PCIe slot enabled in BIOS? << Let us know if you don't know how to check it from CIMC
I believe I did. Server > BIOS > BIOS Advanced > PCIe (let me know if this is correct).
    -Have you confirmed that the cable is good?
    I have changed four cables. I am using SFP-H10GB-CU2M= Twinax cable.
    -If the switch does not even see the port and CIMC says it is down, have you confirmed the switch port is properly configured?
CIMC says “Link Down” in the VIC adapter General area. The switch interface config has only “switchport mode access” in the default VLAN 1.
    -What is the firmware running on the server and the OS?
We upgraded the firmware with the latest ucs-c220-huu-1.5.4-3.iso.
Let me share with you the complete BoM of the server:
UCSC-C220-M3S       1x   UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, PSU, w/ rail kit
UCS-CPU-E5-2690     2x   2.90 GHz E5-2690/135W 8C/20MB Cache/DDR3 1600MHz
UCS-MR-1X082RY-A    16x  8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
CAB-9K10A-UK        2x   Power Cord 250VAC 10A BS1363 Plug (13 A fuse) UK
UCSC-PCIE-CSC-02    1x   Cisco VIC 1225 Dual Port 10Gb SFP+ CNA
UCSC-PCIE-QSFP      1x   Qlogic QLE8242-CU Dual Port 10 GbE FCoE CNA
UCSC-HS-C220M3      2x   Heat Sink for UCS C220 M3 Rack Server
UCSC-RAIL1          1x   Rail Kit for C220, C22, C24 rack servers

  • Problem on installing Solaris 10 on Cisco UCS-C24-M3S2 rack server.

I've got an issue installing Solaris 10 on a Cisco UCS C24 M3S2 rack server: Solaris 10 doesn't recognize the UCSC-RAID-ROM55 controller.

    Hi dipaktimsina,
From the documentation below, it looks like Solaris doesn't have support for the embedded RAID controller with the ROM55 chip.
http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html
Let me know if you have any questions, and don't forget to rate any useful answers.

  • Windows 7 answer file deployment and iscsi boot

Hi, I am trying to prepare an image with Windows 7 Enterprise that has been installed, taken through "audit" mode, and then shut down with:
OOBE + Generalize + Shutdown
so that I can clone this image and, the next time it boots, have it use the answer file to customize, join the domain, etc.
The complication is that I am using iSCSI boot for my image and working within VMware ESX.
I can install Windows without any issues, get the drivers working properly, reboot, and go through OOBE on the same machine with no issues.
The problems come when I clone the VM, where the only part that changes (that I think really matters) is the MAC address of the network card. The new clone, when it comes up after the OOBE reboot, hangs for about 10 minutes and then proceeds without joining the domain.
Using Panther logs and network traces I saw that the domain-join command was timing out and in fact no traffic was being sent to the DC, so the network was not up. The rest of the answer-file customization works fine.
As a test I brought up this new clone (with the new MAC) in audit mode, and Windows reported that it found and installed drivers for a new device: VMXNET3 Driver 2. So it does in fact consider this a new device.
Even though it iSCSI-boots from this new network card, later in the process it is unable to use the card until the driver is reinstalled.
In my answer file I tried with and without the portion below, but it didn't help:
<settings pass="generalize">
        <component name="Microsoft-Windows-PnpSysprep" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
            <DoNotCleanUpNonPresentDevices>true</DoNotCleanUpNonPresentDevices>
            <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
        </component>
    </settings>
I also tried with the E1000 NIC, but couldn't get Windows to boot properly after the CD-ROM installation part.
So my question is: is my only option to use workarounds like post-OOBE scripts for the domain join, etc.?
Is it possible to let Windows boot, initiate an extra reboot once the driver is installed, and then allow it to go to the Customize phase?
    thank you!

    Hi,
This might be caused by the iSCSI boot.
iSCSI boot is supported only on Windows Server. Client versions of Windows, such as Windows Vista® or Windows 7, are not supported.
    Detailed information, please check:
    About iSCSI Boot
    Best regards
    Michael Shao
    TechNet Community Support

  • Cisco UCS RAID SAS 2008M-8i Mezzanine Card Performance?

    I recently purchased a UCS C220 bundle, which includes a Cisco UCS RAID SAS 2008M-8i Mezzanine Card.  I'm planning on deploying this as a standalone server running XenServer 6.2 in the near future.
I'm happy with the unit and testing is good, except for one aspect: the disk IO throughput seems to fall far short of what I expected. I have a desktop PC with an Intel DZ77GA-70K motherboard as a lab spare, and the disk IO I can achieve from that machine with the same disks exceeds what the C220 can achieve, on a consistent, repeatable basis.
    The testing I am doing is based around two benchmarks:
    1. A 400G file copy between the two machines, over the network back-to-back to note the maximum sustained throughput, and
2. A mix of 'sysbench' runs testing local IO, with sequential and random reads and writes, using the command: "sysbench --test=fileio --file-total-size=150G --file-test-mode=seqrd --init-rng=on --max-time=300 --max-requests=0 run"
For #1, I run a copy to/from an HP MicroServer Gen8, which has 12TB of disk space in a Linux RAID 0 configuration (4x 3TB Seagate drives).  If I copy files between this HP server and the Intel DZ77GA-70K, I can easily saturate the 1G network, achieving a sustained 960 Mbit/s for an hour or more at a time.  If I then take the exact same SATA disks from the DZ77GA-70K, connect them to the UCS box, and do the exact same network copy with the exact same OS, I only get around 400-500 Mbit/s of sustained throughput.
For #2, whose results are entirely local to the C220, sequential reads come in around 105-110 MByte/s, dropping to around 2 MByte/s on random read or write tests.  The enormous drop is no surprise, because random reads/writes are a tough IO load, but I would expect sequential reads to be much better.  I can get consistent sysbench seqrd results of around 300 MByte/s from the MicroServer, for example.
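For reference, each sysbench fileio measurement follows the usual prepare/run/cleanup sequence; the run line is the command quoted above:

    sysbench --test=fileio --file-total-size=150G prepare
    sysbench --test=fileio --file-total-size=150G --file-test-mode=seqrd \
             --init-rng=on --max-time=300 --max-requests=0 run
    sysbench --test=fileio --file-total-size=150G cleanup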
I can consistently replicate this with Red Hat 6.5 and Gentoo (running the latest Linux kernel), as well as with a XenServer 6.2 SP1 hypervisor install on the C220 (tested from the Dom0 domain itself as well as from a Linux guest), all 64-bit.  Jumbo frames are enabled end to end, and the CPU is not bottlenecking.  The latest firmware is installed on all components.  The ucs-cxxx-drivers.1.5.4a.iso image states that for the Red Hat and Xen systems the required drivers are included in the OS, so I don't need to install them separately.  Presumably the Gentoo system has even newer drivers because it has a very new kernel, but alas the throughputs are the same on all of those systems.
I have tried SATA as well as SAS drives, and the test results are practically the same.  All disks in all servers are Seagate 6.0 Gb/s units, and none of the servers swap to disk at any stage.
I am happy with network IO - I can completely saturate the 1G ports easily, and I'm convinced that's not part of the problem here.
What could cause this sort of performance?  The storage card logs in CIMC don't indicate anything is wrong, and none of the OSes report issues of any sort, but something certainly isn't right when I get significantly better performance from a desktop motherboard and a MicroServer than from an enterprise-grade server, testing with the exact same hard drives.
    Questions:
- Is the 2008M-8i card considered a low-end RAID card, or should I be getting reasonable throughput from it?  I was anticipating performance at least as good as a desktop motherboard, but this doesn't seem to be the case.  The RAID card alone costs more than an entire MicroServer or Intel motherboard, so it should run much better, yes?
- What sort of performance should I expect out of this card on a single sequential read or write?
- Can this RAID card run drives just as JBODs, or do all disks have to be initialized in an array (even if just a RAID 0 array with one disk)?  It seems that when drives are added to the server they do not show up to any OS until they are initialized as part of an array, although I haven't delved into the BIOS settings of the card itself (only CIMC so far).
- I recall seeing something about a best practice of having two virtual drives on these cards; what is the impact of running more, given the card certainly allows more to be created (I currently have 4 while testing)?
- I noticed on Cacti graphs, while rebuilding a RAID 1 array, that the CPU ran hotter during the rebuild and cooled down once it completed, which indicates the rebuild was using CPU on the host hardware.  Should this not be entirely transparent to the system if the RAID activity is offloaded to the card, or is an increase in CPU to be expected?
I'm very keen to hear others' experiences with this card, what people have done to get good throughput out of it, or whether I should go back to a whitebox server with an Intel board :-)
    Thanks,
    Reuben

    Hello Reuben,
I reached out to colleagues who are more knowledgeable on this topic, and here is their response.
- Is the 2008M-8i card considered a low-end RAID card, or should I be getting reasonable throughput from it? I was anticipating performance at least as good as a desktop motherboard, but this doesn't seem to be the case. The RAID card as a component is more expensive than an entire MicroServer or Intel motherboard, so it should run much better, yes?
The 2008M-8i is an entry/value card, with expected performance better than software RAID. This card doesn't use memory (cache) in a standard RAID 0/1 configuration.
- What sort of performance should I expect out of this card on a single sequential read or write?
We should expect around 1 GB/s sequential read (refer to http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-computing/whitepaper_c11-728200.html).
- Can this RAID card run drives just as JBODs, or do all disks have to be initialized in an array (even if just a RAID 0 array with one disk)? It seems that when drives are added to the server they do not show up to any OS until they are initialized as part of an array, although I haven't delved into the BIOS settings of the card itself (only CIMC so far).
This card can be used in JBOD mode. To enable JBOD mode you need to use MegaCLI commands; this option is not present in the default configuration (see the sketch below).
Please note: once JBOD mode is enabled, it cannot be reverted back to the default RAID mode setting.
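For reference, the usual MegaCLI commands look like this (the binary name varies by platform, e.g. MegaCli64 on 64-bit Linux, and the enclosure:slot values are hypothetical; verify against the card's documentation before running, since the change is one-way):

    # turn on JBOD support on all adapters (one-way on this card)
    MegaCli -AdpSetProp EnableJBOD 1 -aALL
    # expose a specific drive (enclosure 252, slot 0 here) as a JBOD disk
    MegaCli -PDMakeJBOD -PhysDrv[252:0] -a0
    # confirm the drive now reports JBOD state
    MegaCli -PDList -aALL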
- I recall seeing something about a best practice of having two virtual drives on these cards; what is the impact of running more, given the card certainly allows more to be created?
This doesn't apply to this card, as it does not have any cache. Can you please point us to the document about the best practice of having two virtual drives?
- I noticed on Cacti graphs that the CPU ran hotter while a RAID 1 array was being rebuilt and cooled down once the rebuild had completed. Should this not have been entirely transparent to the system if the RAID activity is offloaded to this card?
Creating/deleting/modifying a RAID volume is a CPU-independent operation.
