Boot SPARC Solaris 10 from SAN

Hi,
Is it possible to boot Solaris 10 from SAN?
Is there any documentation that explains how?

Sure.
docs.sun.com
Emulex cards: www.emulex.com
QLogic: www.qlogic.com
Depending on your array, HBA card, and Solaris version, you can find the specifics in the support area of the HBA vendor sites.
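For a rough idea of what the SPARC side looks like once the HBA's boot FCode and the array presentation are in place, something like the following at the ok prompt (the device path, WWPN and LUN below are placeholders, not taken from a real system; the vendor docs above cover the HBA-specific setup):
ok show-disks                      \ list disk paths, pick the FC (qlc/emlxs) one
ok nvalias sandisk /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/disk@w<target-wwpn>,<lun>
ok setenv boot-device sandisk
ok boot sandisk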

Similar Messages

  • Boot diskless Mac Pro from SAN/NAS/iSCSI

    Hi,
    is there a way to a boot disk-less Mac Pro from a SAN (or NAS with iSCSI)? For reasons of security and ease of administration, we want to deploy a secure centralized storage that literally hosts the entire company in a single redundant rack. I know Windows and most Unixes can do that and wonder if there is a similar solution for Mac OS X.
    Speed is not much of a concern (50 MB/s is OK). Therefore iSCSI over Gigabit Ethernet should be fine. However, I could not find a bootable iSCSI initiator card for the Mac yet. Is there one?
    Any pointers are much appreciated,
    Thanks!

    There should be no performance penalty at all. A hosted iSCSI drive over Gigabit Ethernet is not slower than a local 3.5" eSATA disk. I checked this with the globalSAN iSCSI initiator in a Mac Pro and an OpenFiler (freeware) server and the drives performed around 70-80 MB/s. Of course, if we were talking about a RAID, a local controller would be much faster, but for single disks this is ok.
    The advantage is consolidation, safety and security. All drives of every Mac in an office would be hosted on a highly redundant high-availability server rack (8, 16, or even 32 drives, RAID 6, hot swappable, expandable while running, etc). Nobody would ever notice a drive failure, as it is simply replaced in the rack without downtime.
    (1) Adding a new "drive" of any size to any Mac is just a matter of a minute.
    (2) Hosted drives behave like a physical local disk. You can partition and format them in any way you like, use hard links (Time Machine?), install other operating systems, anything.
    (3) Redundancy and data safety is guaranteed for the entire office.
    (4) Backup is centralized and more reliable.
    (5) Many iSCSI servers support snapshots: You could return to a previous state of your Mac at any time, as if it was a virtual machine.
    (6) If the iSCSI server supports encryption, your data is still safe even if one or more of your Macs (or the server altogether) got stolen.
    Usually this kind of setup is run on a fibre channel network in large data centers with hundreds of disk-less server blades and one huge SAN rack full of disk drives. Using iSCSI over cheap ethernet allows small to medium size businesses to benefit from this consolidation technique without spending the $$$ for an FC solution.
    I agree that this would certainly not make sense for a single Mac, or two. However, once you have more than, say, five Macs and/or Windows machines, storage consolidation might be very useful.
    If only it worked with Macs...
    Thanks for pointing me to another potential solution. I will carefully check that out. However, I can't yet see how it would be possible to have the (read-only) boot image load the iSCSI initiator drivers and then "redirect" the boot process to a hosted iSCSI disk. The thing is, the OS would have to be installed on a writeable iSCSI disk in order to be maintained, updated, and upgraded as usual. All that storage consolidation is pointless if the hosted drives behave like network shares rather than true local disks.

  • Solaris 10 x86 boot from SAN

    Hello
    I am trying to install Solaris 10 x86 on an IBM blade booting from SAN. The blade includes a fibre channel expansion card (qla2312 from QLogic) and no internal drive.
    The installation procedure does not find any bootable device to install Solaris on. I have tried the QLogic driver disk for Solaris 9; the driver is loaded (apparently), but no disk is found (a LUN is configured and presented to the blade, and the BIOS of the FC card can see it).
    Is there any solution?
    thanks in advance

    I just today posted in "Solaris 10 General" about this same topic.
    It appears that only SPARC supports this, but I'm not certain.
    As I stated in my other post, and as you also state, the installer doesn't see the LUN.
    FYI, I was able to do this with RHEL on the same blade and to the same LUN.
    Did you find any solution?

  • Boot from san in solaris 11

    Hi,
    Thanks in advance.
    I have a Solaris 11 setup on SPARC (Sun Fire V250). I want to set up a boot-from-SAN environment in Solaris 11.
    1) In Solaris 10 I used to do the setup using the ufsdump command and the further steps. In Solaris 11 the ufsdump command cannot be used, since the native file system has changed to ZFS.
    2) When I tried to create a boot environment using the beadm utility, I was able to create and activate a BE on the SAN (SCSI) disk. But when I activated the new BE, the status of the new BE changed to "R" while that of the current BE is still "NR" (it should be "N", as per the behaviour of beadm). When I reboot, the system again boots into the same old BE. I tried setting the boot-device in OBP to the new SAN disk, which gives the error "Does not appear to be an executable".
    In Solaris 10 I used lucreate for creating a ZFS BE, but the lucreate command is not present in Solaris 11.
    Can anybody help me create a SAN boot environment with either a UFS or ZFS file system?
    Thanks and Regards
    Maneesh

    Glad to hear that you have other supportable systems that you can try this with.
    881312 wrote:
    1) In solaris 10 I used to do set up using ufsdump command and the further steps. Now in solaris 11 the command ufsdump cannot be used since the native file system has changed to zfs.
    With zfs, the analogs to ufsdump and ufsrestore are 'zfs send' and 'zfs receive'. The process for creating an archive of a root pool and restoring it is documented in the ZFS admin guide at http://docs.oracle.com/cd/E23824_01/html/821-1448/recover-1.html#scrolltoc. Note that instead of sending it to a file and receiving it from the file, you can use a command like "zfs send -R pool1@snap | zfs recv pool2@snap". Read the doc chapters that I mention for actual zfs send and recv options that may be important, as well as other things you need to do to make the other pool bootable.
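    To make that concrete, here is a minimal sketch of the send/receive step, assuming the new root pool on the SAN slice is called rpool2 and the current BE dataset is rpool/ROOT/solaris (both names are assumptions; adjust for your system, and see the guide above for the options that matter in your case):
    # zfs snapshot -r rpool@migrate
    # zfs send -R rpool@migrate | zfs receive -Fdu rpool2
    # zpool set bootfs=rpool2/ROOT/solaris rpool2      <- point the new pool at the BE to boot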
    2) When I tried to create a boot environment using beadm utility, I am able to create and activate BE in the san(SCSI) disk. But when I activated the new BE, status of new BE is changed to "R" and that of the current BE is still "NR" (which should be N, as per the behaviour of beadm). When I reboot the system it is again getting booted in the same old BE. I tried by setting the boot-device in OBP as the new san disk, which is giving error as "Does not appear to be an executable".
    I would have expected this to work better than that - but needing to set boot-device in the OBP doesn't surprise me. By any chance, was the pool on the SAN created using the whole disk (e.g. c3t0d0) instead of a slice (c3t0d0s0)? Root pools need to be created on a slice.
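    For what it's worth, a minimal sketch of preparing such a pool on a slice, assuming the SAN disk is c3t0d0 with an SMI label and a slice 0 covering the disk (device and pool names are placeholders):
    # zpool create -f rpool2 c3t0d0s0
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t0d0s0
    The second command writes the ZFS boot block so OBP can boot from the slice on SPARC (newer Solaris 11 updates also offer bootadm install-bootloader for this).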
    Note that beadm only copies the boot environment. Datasets like <rootpool>/export (mounted at /export) and its descendants are not copied. Also, dump and swap are not created in the new pool. Thus, you may have built dependencies into the system that cross the original and new root pools. You may be better off using a variant of the procedure in the ZFS admin guide I mentioned above to be sure that everything is copied across. On the first boot you will likely have other cleanup tasks, such as:
    - 'zpool export' the old pool so that you don't have multiple datasets (e.g. <oldpool>/export and <newpool>/export) both trying to mount datasets on the same mountpoint.
    - Modify vfstab to point to the new swap device
    - Use dumpadm to point to the new dump device
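    A rough sketch of that first-boot cleanup, assuming the old pool is rpool and the new one is rpool2 (names and sizes are placeholders):
    # zpool export rpool                        <- old root pool, avoids duplicate /export mounts
    # zfs create -V 4G rpool2/swap
    # zfs create -V 2G rpool2/dump
    # swap -a /dev/zvol/dsk/rpool2/swap
    # dumpadm -d /dev/zvol/dsk/rpool2/dump
    plus a matching vfstab swap line such as: /dev/zvol/dsk/rpool2/swap - - swap - no -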
    In solaris 10 i used lucreate for creating zfs BE, but lucreate command is not present solaris 11.
    I think that once you get past this initial hurdle, you will find that beadm is a great improvement. Note that beadm is not really intended to migrate the contents of one root pool to another - it has a more limited scope.
    Can anybody help me creating a SAN boot environment either with ufs or zfs filse system?
    Is there any reason to not just install directly to the SAN device? You shouldn't really need to do the extra step of installing to a non-SAN disk first, thus avoiding the troubles you are seeing.
    Good luck and please let us know how it goes.
    Mike

  • 1) How to Boot from SAN for T4-1 Server with Solaris 11.1 OS on the disk? 2) How to SMI Label/Format a disk while OS Installation in Solaris 11.1?

    FYI: boot from SAN is required for the physical server (T4-1), not OVM.
    1) How to boot from SAN for a T4-1 server with Solaris 11.1 OS on the disk?
    The allocated SAN disks are visible at the ok prompt; below is the output.
    (0) ok show-disks
    a) /pci@400/pci@2/pci@0/pci@f/pci@0/usb@0,2/hub@2/hub@3/storage@2/disk
    b) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/disk
    c) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    d) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/disk
    e) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0/fp@0,0/disk
    f) /pci@400/pci@2/pci@0/pci@4/scsi@0/disk
    g) /pci@400/pci@1/pci@0/pci@4/scsi@0/disk
    h) /iscsi-hba/disk
    q) NO SELECTION
    valid choice: a...h, q to quit: c
    /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk has been selected.
    Type ^Y ( Control-Y ) to insert it in the command line.
    e.g. ok nvalias mydev ^Y
    for creating devalias mydev for /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    (0) ok set-sfs-boot
    set-sfs-boot ?
    We tried selecting a disk and applying set-sfs-boot at the ok prompt.
    Can you please provide detailed prerequisites/steps/procedure to implement this and to start booting from SAN?
    2) How to SMI Label/Format a disk while OS Installation in Solaris 11.1?
    As we know that ZFS is the default filesystem in Solaris 11.
    We have seen in the Oracle documentation that for rpool below are recommended:
    - A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label.
    - Create root pools with slices by using the s* identifier.
    - ZFS applies an EFI label when you create a storage pool with whole disks.
    - In general, you should create a disk slice with the bulk of disk space in slice 0.
    I have seen the solution of changing the labelling using format -e, but then all the data will be lost. What's the way to apply an SMI label/format to the rpool disk during the OS installation itself?
    Please provide me the steps to SMI-label a disk while installing Solaris 11.1 OS.

    Oracle recommends the below things for rpool (that's the reason I wanted to apply an SMI label):
    I have seen in the Oracle documentation that for rpool below are recommended:
    - A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label.
    - Create root pools with slices by using the s* identifier.
    - ZFS applies an EFI label when you create a storage pool with whole disks.
    - In general, you should create a disk slice with the bulk of disk space in slice 0.
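    For reference, a rough sketch of relabeling a disk to SMI from a shell (for example, from the installer's shell option before the pool is created); the device name is a placeholder and relabeling destroys any existing data on the disk:
    # format -e c1t0d0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[1]: 0
    format> partition                <- put the bulk of the space in slice 0, then label
    partition> label
    partition> quit
    format> quit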

  • Difficulty with SAN boot of Solaris 10 (on V245) from HP EVA 4000

    Hi
    This is my first post. I'm trying to get my head around what's preventing me from booting a V245 running Solaris 10 from a LUN presented from an HP EVA 4000.
    The machine boots up OK from a local disk. I've done the LUN masking (presentation) on the EVA and the Brocade zoning - Solaris can see the disk.
    I've formatted it close to the current internal root disk, and used ufsdump/ufsrestore and installboot to copy the contents of / onto the new LUN, which I had mounted as /mnt.
    My problem is I just can't seem to get at the new disk at the ok prompt - I'm never quite sure that I'm booting from the correct disk.
    I believe all the compatibility with o/s qlc card drivers and the EVA are in order.
    The qlc HBA is as follows:
    Sun SG-XPCI2FC-QF2/x6768A 1077 2312 1077 10A and have suitable patch 119139-33
    Here's some output:
    root@bwdnbtsl02# uname -a
    SunOS bwdnbtsl02 5.10 Generic_137111-01 sun4u sparc SUNW,Sun-Fire-V245
    root@bwdnbtsl02#
    root@bwdnbtsl02# luxadm probe
    No Network Array enclosures found in /dev/es
    Found Fibre Channel device(s):
    Node WWN:50001fe15009df00 Device Type:Disk device
    Logical Path:/dev/rdsk/c4t600508B4001066A60000C00002B50000d0s2
    root@bwdnbtsl02#
    Now, down at the ok prompt, probe-scsi-all can see the EVA (HSV) LUNs.
    (Here's a sample of the output)
    ok probe-scsi-all
    /pci@1f,700000/pci@0/SUNW,qlc@2,1
    ************************* Fabric Attached Devices ************************
    Adapter portId - 640100
    Device PortId 640000 DeviceId 1 WWPN 210000e08b8a3e4f
    Device PortId dc0000 DeviceId 2 WWPN 50001fe15009df09
    Lun 0 HP HSV200 6220
    Lun 1 DISK HP HSV200 6220
    Device PortId d20000 DeviceId 3 WWPN 50001fe150092f19
    Lun 0 HP HSV200 6220
    /pci@1f,700000/pci@0/SUNW,qlc@2
    ************************* Fabric Attached Devices ************************
    Adapter portId - 630100
    Device PortId 630000 DeviceId 1 WWPN 210100e08baa3e4f
    Device PortId 780000 DeviceId 2 WWPN 50001fe15009df08
    Lun 0 HP HSV200 6220
    Lun 1 DISK HP HSV200 6220
    Device PortId 6e0000 DeviceId 3 WWPN 50001fe150092f18
    Lun 0 HP HSV200 6220
    So, I attempt to boot with the following:
    ok boot /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/disk@w50001fe15009df08,0:a
    Boot device: /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/disk@w50001fe15009df08,0:a File and args:
    Can't read disk label.
    Can't open disk label package
    Can't open boot device
    {1} ok
    If anyone can put me straight here I'd be most grateful.
    thanks john

    Hi guys,
    I am interested in your question, though I have never tried anything like your setup.
    Maybe the cause is the driver: it may only load once the OS is running, or it may be related to the server model.
    I think you can try the following: when you install the OS, choose the EVA disk as the install target.
    Remember, if the test completes, please let me know the result.
    The following is my contact: [email protected]
    Best regards
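    For anyone else stuck at the same point, one way to cross-check the OBP boot path against what Solaris itself sees (device name taken from the luxadm output above; treat the LUN digit as something to verify, not a definitive answer):
    # ls -l /dev/dsk/c4t600508B4001066A60000C00002B50000d0s0
    The symlink target under /devices mirrors the OBP device path (the leaf appears as ssd@... there but is typed as disk@... at the ok prompt). Note also that the digit after the comma in disk@w<wwpn>,<lun>:a is the LUN number; in the probe-scsi-all output above the bootable DISK shows up as Lun 1, so ...disk@w50001fe15009df08,1:a may be worth trying instead of ,0:a.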

  • How to "boot from SAN" from a LUN which belongs to other solaris Machine.

    Hi all
    I have installed Solaris on a LUN (boot from SAN).
    Then I assigned the same OS LUN to another machine (the hardware is exactly the same), but now the new machine detects the OS and then reboots and kicks off.
    I have tried changing the vfstab settings.
    Can someone help me?
    Thanks in advance.
    sidd.

    disable IDE RAID and ensure SATA is enabled in BIOS
    disconnect any other IDE HDDs before you install Windows to your SATA drive; they can be reconnected again afterwards.
    make sure that the SATA drive is listed first in 'Hard Disk Priority' under 'Boot Order' in BIOS

  • Booting a Zone from SAN?

    I will give details of my case below:
    1. I have one critical server (T4-1 with Oracle Solaris 11.1) in production which boots from SAN and requires DR. (Server at the PRODUCTION site)
    2. I have another server of the same model at the DR location which boots from internal disk and has a small application running in the global zone. (Server at the DR location)
    My requirement: when the production server (which boots from SAN) goes down, I want to boot a zone on the DR-location server from the SAME SAN BOOT device (the one being used by the production server).
    Can this requirement be met? If not, please let me know all the other possibilities, with process and commands.

    Hi.
    You cannot use the boot device of one server as the boot device of a zone on another server.
    You can migrate the current production server to a zone or LDOM and then use zone or LDOM migration.
    Zone migration - via export/import of the zone.
    Migrating a Non-Global Zone to a Different Machine - Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle So…
    LDOMs can migrate without service interruption (Live Migration).
    Migrating Domains - Oracle VM Server for SPARC 2.2 Administration Guide
    Additionally, read about
    About Zone Migrations and the zonep2vchk Tool - Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris…
    zonep2vchk
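    A rough sketch of the cold detach/attach flow, assuming a zone called myzone whose zonepath storage can be presented to the DR server (names and paths are placeholders):
    On the source server:
    # zoneadm -z myzone halt
    # zoneadm -z myzone detach
    Present or copy the zonepath storage to the DR server, then on the target:
    # zonecfg -z myzone
    zonecfg:myzone> create -a /zones/myzone
    zonecfg:myzone> commit
    zonecfg:myzone> exit
    # zoneadm -z myzone attach -u        <- -u updates packages to match the target
    # zoneadm -z myzone boot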

  • Solaris 10 + LUN from SAN

    Hi,
    I need help with Solaris 10 and a SAN. On one server with Solaris 10 I connected a LUN from the SAN, but I don't know exactly how to proceed. On the new server I need to connect a new LUN. Does anybody have a step-by-step manual for this? On Solaris on SPARC I don't have a problem, but on this Solaris 10 server it is a big problem. Can you help me? Thanks.

    I now have this state:
    c0::500601603022435d disk connected configured unknown
    c0::50060160302247b5 disk connected configured unknown
    c0::500601683022435d disk connected configured unknown
    c0::50060168302247b5 disk connected configured unknown
    c1::50060161302247b5 disk connected configured unknown
    c1::500601693022435d disk connected configured unknown
    c1::50060169302247b5 disk connected configured unknown
    In format I have:
    AVAILABLE DISK SELECTIONS:
    0. c0t50060168302247B5d0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w50060168302247b5,0
    1. c0t50060160302247B5d0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w50060160302247b5,0
    2. c0t500601603022435Dd0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w500601603022435d,0
    3. c0t500601683022435Dd0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w500601683022435d,0
    4. c1t50060169302247B5d0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0,1/fp@0,0/disk@w50060169302247b5,0
    5. c1t50060161302247B5d0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0,1/fp@0,0/disk@w50060161302247b5,0
    6. c1t500601693022435Dd0 <drive type unknown>
    /pci@0,0/pci8086,25e4@4/pci1077,139@0,1/fp@0,0/disk@w500601693022435d,0
    7. c2t0d0 <DEFAULT cyl 4373 alt 2 hd 255 sec 63>
    /pci@0,0/pci8086,25e7@7/pci8086,32c@0/pci1028,1f08@8/sd@0,0
    Where is my error?
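    If the LUNs only show <drive type unknown> because they have never been labeled, a minimal sketch (disk number taken from the format output above; this assumes the entries really are data LUNs and not just array controller ports):
    # devfsadm -Cv                      <- clean up and rebuild the /dev links
    # format
    Specify disk (enter its number): 0
    format> fdisk                       <- on x86, create a Solaris partition first
    format> label
    format> quit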

  • Guest domains fail to boot from SAN disk after power-cycle

    Control Domain Environment
    # uname -a
    SunOS s0007 5.10 Generic_139555-08 sun4v sparc SUNW,T5440
    # ./ldm -V
    Logical Domain Manager (v 1.1)
    Hypervisor control protocol v 1.3
    Using Hypervisor MD v 0.1
    System PROM:
    Hypervisor v. 1.7.0 @(#)Hypervisor 1.7.0 2008/12/11 13:42\015
    OpenBoot v. 4.30.0 @(#)OBP 4.30.0 2008/12/11 12:16
    After a power-cycle the guest domains did not boot from the SAN disks; they tried to boot from the network.
    # ./ldm list
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- SP 16 8G 0.5% 4d 20h 46m
    s0500 active -t---- 5000 8 8G 12% 4d 20h 46m
    s0501 active -t---- 5001 8 4G 12% 4d 20h 46m
    s0502 active -t---- 5002 16 8G 6.2% 4d 20h 46m
    s0503 active -t---- 5003 8 2G 12% 4d 20h 46m
    s0504 active -t---- 5004 8 4G 0.0% 4d 20h 46m
    s0505 active -t---- 5005 4 2G 25% 4d 20h 46m
    s0506 active -t---- 5006 4 2G 25% 4d 20h 46m
    s0507 active -t---- 5007 4 2G 25% 4d 20h 46m
    s0508 active -t---- 5008 4 2G 25% 4d 20h 46m
    Connecting to console "s0508" in group "s0508" ....
    Press ~? for control options ..
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    If we reboot the guest domains now, it works.
    It seems the disks are not ready at the time the guest domains boot.
    We see this on systems with many guest domains.
    Is there any logfile, where we could find the reason for this?
    Or is this a known issue?
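    Not an answer to the root cause, but a rough sketch of kicking a stalled guest once the SAN paths are back (domain name and console port are taken from the listing above):
    # telnet localhost 5008             <- attach to the guest console via vntsd
    then stop the network boot attempt (~? at the console lists the control options) and retry boot from the guest's ok prompt, or restart the guest from the control domain:
    # ldm stop-domain s0508
    # ldm start-domain s0508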

    Press F12 on boot to choose CD/DVD as a temporary boot device.
    My hunch is that there's something wrong with the SSD itself, though.
    Good luck.
    Cheers,
    George

  • Mac Pro boot from SAN - Can we do it?

    I am looking to put a Mac Pro in a data center.  I would like to set it up to boot from our SAN.  I've seen very little information on this possibility.  Ideally my configuration would look like this:
    Mac Pro - dual fibre channel adapters - boot from SAN, dual network connection with LACP port bonding, since dual power supply is not a possibility UPS rack mounted with the Pro.
    Is there any documentation, or recommendations on being able to boot OS X Mountain Lion from SAN?

    You can do it wirelessly, but that will take some time... you could also purchase a Thunderbolt to Ethernet adapter or a Thunderbolt to FireWire adapter. That would certainly be faster. You should be able to purchase either adapter at your local Apple Store or at the online Apple Store.
    Good luck,
    Clinton

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Servers without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. The most common TAC cases that we have seen on boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify the configuration and troubleshoot from the server to the storage switches to the storage array.
    Before diving into troubleshooting, make sure there is a clear understanding of the topology. This is vital in any troubleshooting scenario. Know what devices you have and how they are connected, how many paths are connected, the switch/NPV mode, and so on.
    Always try to troubleshoot one path at a time and verify that the setup is in compliance with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at server
    a. make sure to have uniform firmware version across all components of UCS
    b. Verify if VSAN is created and FC uplinks are configured correctly. VSANs/FCoE-vlan should be unique per fabric
    c. Verify at service profile level for configuration of vHBAs - vHBA per Fabric should have unique VSAN number
    Note down the WWPN of your vhba. This will be needed in step 2 for zoning on the SAN switch and step 3 for LUN masking on the storage array.
    d. verify if Boot Policy of the service profile is configured to Boot From SAN - the Boot Order and its parameters such as Lun ID and WWN are extremely important
    e. finally at UCS CLI - verify the flogi of vHBAs (for NPV mode, command is (from nxos) – show npv flogi-table)
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. Most important configuration to check on Storage Switch is the zoning
    Zoning is basically access control for our initiator to targets. The most common design is to configure one zone per initiator and target.
    Zoning will require you to configure a zone, put that zone into your current zoneset, then ACTIVATE it. (command - show zoneset active vsan X)
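    As an illustration only, a minimal MDS/N5k zoning sketch for one initiator/target pair in VSAN 100 (all WWPNs and names below are placeholders, not from a real setup):
    conf t
    zone name ucs-blade1-vhba-a vsan 100
      member pwwn 20:00:00:25:b5:00:0a:01      <- server vHBA
      member pwwn 50:06:01:60:3b:24:12:34      <- storage array target port
    zoneset name fabric-a vsan 100
      member ucs-blade1-vhba-a
    zoneset activate name fabric-a vsan 100
    Then confirm with "show zoneset active vsan 100".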
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is a crucial step on the storage array, which gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (as per Steps 1 & 2),
    the following needs to be verified at the storage array level:
    a. Are the WWPNs of the initiators (vHBAs of the hosts) visible on the storage array?
    b. If the above is yes, then is LUN masking applied?
    c. What LUN number is presented to the host - this is the number that we see as the LUN ID in the 'Boot Order' of Step 1.
    Below document has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 

  • Windows 2008 R2 Very Slow to Boot from SAN

    I have installed Windows 2008 R2 on a blade in a boot from SAN configuration. I loaded the Cisco Palo FCOE drivers during the install and it went through without a hitch.
    After the install, the reboots are painfully slow, like 15 minutes. After boot-up, performance is great and there are no issues. I have tried changing the FC Adapter Policy from Windows to default and even to VMware, but it doesn't make a difference.
    Anyone know if there is a solution to this slow boot time?
    By the way, I have an ESX 4 blade running with boot from SAN and it boots normally.
    I have checked the FC zoning and Navisphere and can't figure out why it takes so long to boot.

    I had to open a case with TAC. Bug ID CSCtg01880 is the issue; a summary is below. The fix is due at the end of May.
    Symptom:
    SAN boot using the Palo CNA for Windows 2008 takes 15~30 minutes every time it boots.
    Conditions:
    This is due to an incorrect bit set in the FC frame.
    Workaround:
    The user needs to wait longer for the OS to come up. Once the OS comes up there will be no service impact.

  • Boot from san with local disk

    I have some B200M3's with local disk.  I would like to configure them to boot from san.  I've setup a service profile template with a boot policy to boot from CD first and then the SAN.  I have a local disk configuration policy to mirror the local disks.   I've zoned the machines so that it presently only sees one path to the storage because I'm installing windows and I don't want it to see the disks funky because of multiple paths to the same disk.  When I boot the machine it sees the disk.  I boot to the Windows 2012R2 iso and load the drivers for the cisco mlom and then the single lun will appear.  The local disk will also appear.  It can't install Windows 2012R2 on the SAN disk only the local disk.  It sees the local disk as disk 0 and the san disk as disk 3.  I don't know how to get the machine to see the san disk as disk 0.  I have the lun (which resides on an vnx5600) as lun 0.  The boot policy is configured to have the san lun as lun 0.  It even appears while booting the san lun appears as lun 0.  The error I'm getting from the windows installer is:  We couldn't install Windows in the location you chose.  Please check your media drive.  Here's more info about what happened: 0x80300001.  Any suggestions to get this to boot from SAN.

    Hi
    During the boot-up process, do you see the WWPN of the target, showing that the VIC can talk to the storage?
    Reboot the server in question; when you see the option to get into the BIOS press F2, then ssh to the primary fabric and run the following commands:
    connect adapter (x/x/x). <--- (chassis #/slot #/adapter#)
    connect
    attach-fls
    lunlist 
    (provide output of last command)
    lunmap 
    (provide output of last command)

  • Boot from SAN, ESX 4.1, EMC CX4

    Please feel free to redirect me to a link if this has already been answered. I've been all over the place and haven't found one yet.
    We have UCS connected (m81kr in the blade) via a Brocade FC switch into an EMC CX4.
    All physical links are good.
    We have a single vHBA in a profile along with the correct target WWN and LUN 0 for SAN booting.
    The Brocade shows both the interconnect and the server WWNs, and they are zoned along with the EMC WWN.
    Default VSAN (1) on the profile.
    What we were expecting to do is boot the server but not into an OS, and then open Connectivity Status on the EMC console and see the server's WWN ready to be manually registered ( a la http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/). We are not seeing this.
    Instead, when booting the blade, it will show on the switch (NPIV is enabled) and can be zoned, but the WWN won't show in Connectivity Status. Once we get the ESX installation media running, then it will show up and we can register and assign the host. That's fine for installing. Therefore, we know there is end-to-end connectivity between the server and the LUN.
    Once we get ESX installed and try to boot from SAN, the server's initiator won't log into the EMC. The server's KVM shows only a blinking cursor or it may drop down a few lines and hang. Connectivity Status shows the initiator still registered but not logged in.
    Are we making assumptions we should not?

    I think we're good all the way down to your comment, "If you get this far and start the ESX install, you'll see this as an available target." Here's where we diverge.
    Here is what we had thought should be possible, from http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/:
    UCS Manager Tasks
    Create a Service Profile Template with x number of vHBAs.
    Create a Boot Policy that includes SAN Boot as the first device and link it to the Template
    Create x number of Service Profiles from the Template
    Use Server Pools, or associate servers to the profiles
    Let all servers attempt to boot and sit at the “Non-System Disk” style message that UCS servers return
    Switch Tasks
    Zone the server WWPN to a zone that includes the storage array controller’s WWPN.
    Zone the second fabric switch as well. Note: For some operating  systems (Windows for sure), you need to zone just a single path during  OS installation so consider this step optional.
    Array Tasks
    On the array, create a LUN and allow the server WWPNs to have access to the LUN.
    Present the LUN to the host using a desired LUN number (typically  zero, but this step is optional and not available on all array models)
    From 1.5 above, that's where we'd hope to see the WWN show up in the storage and we could register the server's WWN and assign it to a storage group. It doesn't show up until the OS starts.
    But if we're trying to boot the OS from the LUN, we're at a catch-22 now. The initiator won't log in until the OS boots, and the OS won't boot until the initiator logs in, unless we're missing some little step.
    What we haven't done is check the local disk configuration policy, so we'll see if that's correct.
    EDIT: OK, when the vHBA BIOS message comes up, it sticks around for about 15 seconds and POST continues. The storage WWN does not show up and the Brocade's Name Servers screen doesn't show the server's HBA. It looks like it's misconfigured somewhere, it's just quite convoluted finding out where. I'll post back if we find it.
    EDIT2: We tried the installation again; the initiator stays logged out until ESX asks where to install and provides the SAN LUN. EMC then shows the initiator logged in.
    The Palo card does support SAN booting, correct?
