Solaris 10 x86 boot from SAN

Hello
I am trying to install Solaris 10 x86 on an IBM blade booting from SAN. The blade includes a Fibre Channel expansion card (QLogic qla2312) and no internal drive.
The installation procedure does not find any bootable device to install Solaris on. I have tried the QLogic driver disk for Solaris 9; the driver is loaded (apparently), but no disk is found (a LUN is configured and presented to the blade, and the BIOS of the FC card can see it).
Is there any solution?
thanks in advance
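(For anyone hitting the same wall, a minimal sketch of checks you can run from a shell on the install media to see whether the QLogic driver really attached and whether any FC disk was discovered; the grep pattern is an assumption, adjust it for your driver package.)
modinfo | grep -i ql        # is the QLogic driver module actually loaded?
prtconf -D | grep -i ql     # did it attach to the HBA instance?
devfsadm -Cv                # rebuild /dev links after the driver loads
echo | format               # list disks; the SAN LUN should appear here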

I just today posted in "Solaris 10 General" about this same topic.
It appears that only SPARC supports this, but I'm not certain.
As I stated in my other post, and as you also state, the installer doesn't see the LUN.
FYI, I was able to do this with RHEL on the same blade and to the same LUN.
Did you find any solution?

Similar Messages

  • Does Solaris support booting from SAN?

    Does anyone know if Solaris 10 supports booting from a SAN fabric LUN, in particular an IBM DS4800?
    The installer doesn't see the LUNs when it comes time to choose a disk to install on, only the internal disks.
    The server is an IBM LS20 blade. I've managed to install Solaris 10 on the internal drives and can see the LUNs on the DS4800.
    I'm using Solaris 10 Express 10/05 x86/64.
    Brian.

    Do you see the LUNs after boot? I'm wondering if there isn't a driver for your controller in the default installation environment. I would assume one could be added (that being one of the big benefits of newboot), especially if you could create a jumpstart server. (It's easier to add a driver to it than to burn a new CD/DVD.)
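    For illustration only, roughly what adding a third-party HBA driver package to a jumpstart image could look like; the paths, package file and package name below are placeholders, not a verified procedure, and the exact miniroot handling depends on the Solaris release:
    mount -F nfs installserver:/export/install /mnt      # the jumpstart image (placeholder path)
    # unpack or locate the boot miniroot for your release, then add the package into it:
    pkgadd -R /path/to/miniroot -d /tmp/qlogic_pkg QLAdrv
    # repack the miniroot if your release requires it, then network-boot the client again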
    Darren

  • 1) How to boot from SAN for a T4-1 server with Solaris 11.1 OS on the disk? 2) How to SMI-label/format a disk during OS installation in Solaris 11.1?

    FYI: boot from SAN is required for the physical server (T4-1), not OVM.
    1) How to Boot from SAN for T4-1 Server with Solaris 11.1 OS on the disk?
    The SAN disks allocated are visible at the ok prompt; below is the output.
    (0) ok show-disks
    a) /pci@400/pci@2/pci@0/pci@f/pci@0/usb@0,2/hub@2/hub@3/storage@2/disk
    b) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/disk
    c) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    d) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/disk
    e) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0/fp@0,0/disk
    f) /pci@400/pci@2/pci@0/pci@4/scsi@0/disk
    g) /pci@400/pci@1/pci@0/pci@4/scsi@0/disk
    h) /iscsi-hba/disk
    q) NO SELECTION
    valid choice: a...h, q to quit: c
    /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk has been selected.
    Type ^Y ( Control-Y ) to insert it in the command line.
    e.g. ok nvalias mydev ^Y
    for creating devalias mydev for /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    (0) ok set-sfs-boot
    set-sfs-boot ?
    We tried selecting a disk and applying set-sfs-boot at the ok prompt.
    Can you please help by providing detailed prerequisites/steps/procedure to implement this and to start booting from SAN?
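    For reference, a minimal sketch of the generic OBP sequence once the correct FC disk path is known (the alias name is illustrative, and on most arrays the full path needs the target WWN and LUN appended, e.g. disk@w<target-wwn>,<lun>:a):
    ok nvalias sandisk /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    ok setenv boot-device sandisk
    ok boot sandisk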
    2) How to SMI-label/format a disk during OS installation in Solaris 11.1?
    As we know, ZFS is the default file system in Solaris 11.
    We have seen in the Oracle documentation that the following are recommended for the rpool:
    - A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label.
    - Create root pools with slices by using the s* identifier.
    - ZFS applies an EFI label when you create a storage pool with whole disks.
    - In general, you should create a disk slice with the bulk of disk space in slice 0.
    I have seen the suggestion that, using format -e, we can change the labelling, but then all the data will be lost. What is the way to apply an SMI label/format on the rpool disks during the OS installation itself?
    Please provide me the steps to SMI-label a disk while installing Solaris 11.1.

    Oracle recommends the items listed above for the rpool (an SMI label rather than EFI, root pools created on slices using the s* identifier, and the bulk of the disk space in slice 0); that is the reason we wanted to apply an SMI label.
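    For what it's worth, a minimal sketch of relabelling a disk to SMI from a shell on the install media before creating the root pool (the device name is an example; format -e prompts interactively for the label type):
    # format -e c0t5000C500ABCD1234d0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type: 0
    format> partition      (then put the bulk of the space in slice 0)
    Once labelled, the installer (or a manual zpool create rpool c0t...d0s0) can use the s0 slice for the root pool.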

  • Boot from SAN in Solaris 11

    Hi,
    Thanks in advance.
    I have a Solaris 11 setup on SPARC (Sun Fire V250). I want to set up a boot-from-SAN environment in Solaris 11.
    1) In Solaris 10 I used to do this setup using the ufsdump command and the further steps. Now in Solaris 11 the ufsdump command cannot be used, since the native file system has changed to ZFS.
    2) When I tried to create a boot environment using the beadm utility, I was able to create and activate a BE on the SAN (SCSI) disk. But when I activated the new BE, the status of the new BE changed to "R" and that of the current BE is still "NR" (which should be N, as per the behaviour of beadm). When I reboot the system it again boots into the same old BE. I tried setting the boot-device in OBP to the new SAN disk, which gives the error "Does not appear to be an executable".
    In Solaris 10 I used lucreate for creating ZFS BEs, but the lucreate command is not present in Solaris 11.
    Can anybody help me create a SAN boot environment with either a UFS or ZFS file system?
    Thanks and Regards
    Maneesh

    Glad to hear that you have other supportable systems that you can try this with.
    881312 wrote:
    1) In Solaris 10 I used to do this setup using the ufsdump command and the further steps. Now in Solaris 11 the ufsdump command cannot be used, since the native file system has changed to ZFS.
    With ZFS, the analogs to ufsdump and ufsrestore are 'zfs send' and 'zfs receive'. The process for creating an archive of a root pool and restoring it is documented in the ZFS admin guide at http://docs.oracle.com/cd/E23824_01/html/821-1448/recover-1.html#scrolltoc. Note that instead of sending it to a file and receiving it from the file, you can use a command like "zfs send -R pool1@snap | zfs recv pool2@snap". Read the doc chapters that I mention for actual zfs send and recv options that may be important, as well as other things you need to do to make the other pool bootable.
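    A minimal sketch of that approach, assuming the existing root pool is rpool and the new SAN pool is rpool2 (names are examples):
    zfs snapshot -r rpool@migrate
    zfs send -R rpool@migrate | zfs recv -Fdu rpool2
    # then follow the ZFS admin guide to make rpool2 bootable
    # (install the boot blocks and set the bootfs property on rpool2)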
    2) When I tried to create a boot environment using the beadm utility, I was able to create and activate a BE on the SAN (SCSI) disk. But when I activated the new BE, the status of the new BE changed to "R" and that of the current BE is still "NR" (which should be N, as per the behaviour of beadm). When I reboot the system it again boots into the same old BE. I tried setting the boot-device in OBP to the new SAN disk, which gives the error "Does not appear to be an executable".
    I would have expected this to work better than that, but needing to set boot-device in the OBP doesn't surprise me. By any chance, was the pool on the SAN created using the whole disk (e.g. c3t0d0) instead of a slice (c3t0d0s0)? Root pools need to be created on a slice.
    Note that beadm only copies the boot environment. Datasets like <rootpool>/export (mounted at /export) and its descendants are not copied. Also, dump and swap are not created in the new pool. Thus, you may have built dependencies into the system that cross the original and new root pools. You may be better off using a variant of the procedure in the ZFS admin guide I mentioned above to be sure that everything is copied across. On the first boot you will likely have other cleanup tasks, such as:
    - 'zpool export' the old pool so that you don't have multiple datasets (e.g. <oldpool>/export and <newpool>/export) both trying to mount datasets on the same mountpoint.
    - Modify vfstab to point to the new swap device
    - Use dumpadm to point to the new dump device
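    A minimal sketch of those cleanup steps, run after the first boot from the new pool and assuming it is named rpool2 (zvol sizes are just examples):
    zpool export rpool                        # get the old pool out of the way
    zfs create -V 4G rpool2/swap
    zfs create -V 2G rpool2/dump
    swap -a /dev/zvol/dsk/rpool2/swap         # and update /etc/vfstab to match
    dumpadm -d /dev/zvol/dsk/rpool2/dump      # point the dump device at the new zvol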
    In Solaris 10 I used lucreate for creating ZFS BEs, but the lucreate command is not present in Solaris 11.
    I think that once you get past this initial hurdle, you will find that beadm is a great improvement. Note that beadm is not really intended to migrate the contents of one root pool to another; it has a more limited scope.
    Can anybody help me create a SAN boot environment with either a UFS or ZFS file system?
    Is there any reason not to just install directly to the SAN device? You shouldn't really need to do the extra step of installing to a non-SAN disk first, thus avoiding the troubles you are seeing.
    Good luck and please let us know how it goes.
    Mike

  • How to "boot from SAN" from a LUN which belongs to other solaris Machine.

    Hi all
    I have installed Solaris on a LUN (boot from SAN).
    Then I assigned the same OS LUN to another machine (the hardware is exactly the same), but the new machine detects the OS, then reboots and kicks off.
    I have tried changing the vfstab settings.
    Can someone help me?
    Thanks in advance.
    sidd.

    disable IDE RAID and ensure SATA is enabled in BIOS
    disconnect any other IDE HDDs before you install Windows to your SATA drive; they can be reconnected again afterwards.
    make sure that the SATA drive is listed first in 'Hard Disk Priority' under 'Boot Order' in BIOS

  • Mac Pro boot from SAN - Can we do it?

    I am looking to put a Mac Pro in a data center.  I would like to set it up to boot from our SAN.  I've seen very little information on this possibility.  Ideally my configuration would look like this:
    Mac Pro - dual Fibre Channel adapters - boot from SAN, dual network connections with LACP port bonding, and, since a dual power supply is not a possibility, a UPS rack-mounted with the Pro.
    Is there any documentation, or recommendations on being able to boot OS X Mountain Lion from SAN?

    You can do it wirelessly, but that will take some time... you could also purchase a Thunderbolt to Ethernet adapter or a Thunderbolt to FireWire adapter. That would certainly be faster. You should be able to purchase either adapter at your local Apple Store or at the online Apple Store.
    Good luck,
    Clinton

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Servers without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. The most common TAC cases that we have seen for boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify configuration and troubleshoot from the server, to the storage switches, to the storage array.
    Before diving into troubleshooting, make sure there is a clear understanding of the topology. This is vital in any troubleshooting scenario: know what devices you have and how they are connected, how many paths are connected, switch/NPV mode, and so on.
    Always try to troubleshoot one path at a time and verify that the setup is compliant with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at server
    a. Make sure you have a uniform firmware version across all components of UCS.
    b. Verify that the VSAN is created and the FC uplinks are configured correctly. VSANs/FCoE VLANs should be unique per fabric.
    c. Verify, at the service-profile level, the configuration of the vHBAs - the vHBA on each fabric should have a unique VSAN number.
    Note down the WWPN of your vHBA. This will be needed in Step 2 for zoning on the SAN switch and in Step 3 for LUN masking on the storage array.
    d. Verify that the boot policy of the service profile is configured to boot from SAN - the boot order and its parameters such as LUN ID and WWN are extremely important.
    e. Finally, at the UCS CLI, verify the FLOGI of the vHBAs (in NPV mode the command, from the NX-OS shell, is: show npv flogi-table).
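    For example, from the UCS Manager CLI (fabric A shown; repeat on fabric B):
    UCS-A# connect nxos a
    UCS-A(nxos)# show npv flogi-table
    Each vHBA WWPN should appear here with the expected VSAN; if it does not, recheck the vHBA/VSAN and FC uplink configuration before moving to Step 2.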
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. The most important configuration to check on the storage switch is the zoning.
    Zoning is basically access control for our initiator to targets. The most common design is to configure one zone per initiator and target.
    Zoning requires you to configure a zone, put that zone into your current zoneset, then ACTIVATE it. (command - show zoneset active vsan X)
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is a crucial step on the storage array; it gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (as per Steps 1 and 2),
    the following needs to be verified at the storage array level:
    a. Are the WWPNs of the initiators (the vHBAs of the hosts) visible on the storage array?
    b. If the above is yes, is LUN masking applied?
    c. What LUN number is presented to the host? This is the number that we see as the LUN ID in the 'Boot Order' of Step 1.
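    Putting Steps 2 and 3 together, the fabric-switch side checks look roughly like this on an MDS/N5k (VSAN 100 is an example):
    show flogi database vsan 100       <- initiator and target both logged into the fabric?
    show fcns database vsan 100        <- both registered with the name server?
    show zoneset active vsan 100       <- initiator and target members of the same active zone?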
    The document below has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 

  • Windows 2008 R2 Very Slow to Boot from SAN

    I have installed Windows 2008 R2 on a blade in a boot from SAN configuration. I loaded the Cisco Palo FCOE drivers during the install and it went through without a hitch.
    After the install, reboots are painfully slow, like 15 minutes. After boot-up, performance is great and there are no issues. I have tried changing the FC Adapter Policy from Windows to default and even to VMware, but it doesn't make a difference.
    Anyone know if there is a solution to this slow boot time?
    By the way, I have an ESX 4 blade running with boot from SAN and it boots normally.
    I have checked FC zoning and Navisphere and can't figure out why it takes so long to boot.

    I had to open a case with TAC. Bug ID CSCtg01880 is the issue; a summary is below. The fix is due at the end of May.
    Symptom:
    SAN boot using the Palo CNA for Windows 2008 takes 15-30 minutes every time it boots.
    Conditions:
    This is due to an incorrect bit set in the FC frame.
    Workaround:
    The user needs to wait longer for the OS to come up. Once the OS comes up there is no service impact.

  • Can Solaris 10 boot from USB devices like USB thumb?

    1. Is there any live CD build for Solaris 10? That is, one which can boot up from CD and run immediately.
    No installation required.
    2. Can Solaris 10 boot from USB devices like USB thumb?

    You can find "live" CD and DVD images at OpenSolaris site: www.genunix.org/distributions/belenix_site. This OSolaris version is booted from CD/DVD drive and can be permanently installed on HD drive. Moreover You can find at this site suitable script for USB thumb preparation. However You should remember that the number of read/writes on USB thumb is limited (ca 400 and slightly more in more expensive versions).
    Good luck.

  • Boot from SAN with local disk

    I have some B200 M3s with local disks. I would like to configure them to boot from SAN. I've set up a service profile template with a boot policy to boot from CD first and then the SAN. I have a local disk configuration policy to mirror the local disks. I've zoned the machines so that each presently sees only one path to the storage, because I'm installing Windows and I don't want it to see the disks oddly because of multiple paths to the same disk. When I boot the machine it sees the disk. I boot to the Windows 2012 R2 ISO and load the drivers for the Cisco mLOM, and then the single LUN appears. The local disk also appears. It can't install Windows 2012 R2 on the SAN disk, only the local disk. It sees the local disk as disk 0 and the SAN disk as disk 3. I don't know how to get the machine to see the SAN disk as disk 0. I have the LUN (which resides on a VNX5600) as LUN 0. The boot policy is configured to have the SAN LUN as LUN 0. The SAN LUN even appears as LUN 0 while booting. The error I'm getting from the Windows installer is: We couldn't install Windows in the location you chose. Please check your media drive. Here's more info about what happened: 0x80300001. Any suggestions to get this to boot from SAN?

    Hi
    During the boot-up process, do you see the WWPN of the target, showing that the VIC can talk to the storage?
    Reboot the server in question. When you see the option to get into the BIOS, press F2, then SSH to the primary fabric interconnect and run the following commands:
    connect adapter x/x/x   <--- (chassis #/slot #/adapter #)
    connect
    attach-fls
    lunlist
    (provide output of last command)
    lunmap
    (provide output of last command)

  • Boot from SAN, ESX 4.1, EMC CX4

    Please feel free to redirect me to a link if this has already been answered. I've been all over the place and haven't found one yet.
    We have UCS connected (m81kr in the blade) via a Brocade FC switch into an EMC CX4.
    All physical links are good.
    We have a single vHBA in a profile along with the correct target WWN and LUN 0 for SAN booting.
    The Brocade shows both the interconnect and the server WWNs, and they are zoned along with the EMC WWN.
    Default VSAN (1) on the profile.
    What we were expecting to do is boot the server, but not into an OS, and then open Connectivity Status on the EMC console and see the server's WWN ready to be manually registered (a la http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/). We are not seeing this.
    Instead, when booting the blade, it will show on the switch (NPIV is enabled) and can be zoned, but the WWN won't show in Connectivity Status. Once we get the ESX installation media running, then it will show up and we can register and assign the host. That's fine for installing. Therefore, we know there is end-to-end connectivity between the server and the LUN.
    Once we get ESX installed and try to boot from SAN, the server's initiator won't log into the EMC. The server's KVM shows only a blinking cursor, or it may drop down a few lines and hang. Connectivity Status shows the initiator still registered but not logged in.
    Are we making assumptions we should not?

    I think we're good all the way down to your comment, "If you get this far and start the ESX install, you'll see this as an available target." Here's where we diverge.
    Here is what we had thought should be possible, from http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/:
    UCS Manager Tasks
    Create a Service Profile Template with x number of vHBAs.
    Create a Boot Policy that includes SAN Boot as the first device and link it to the Template
    Create x number of Service Profiles from the Template
    Use Server Pools, or associate servers to the profiles
    Let all servers attempt to boot and sit at the “Non-System Disk” style message that UCS servers return
    Switch Tasks
    Zone the server WWPN to a zone that includes the storage array controller’s WWPN.
    Zone the second fabric switch as well. Note: For some operating  systems (Windows for sure), you need to zone just a single path during  OS installation so consider this step optional.
    Array Tasks
    On the array, create a LUN and allow the server WWPNs to have access to the LUN.
    Present the LUN to the host using a desired LUN number (typically  zero, but this step is optional and not available on all array models)
    From 1.5 above, that's where we'd hope to see the WWN show up in the storage and we could register the server's WWN and assign it to a storage group. It doesn't show up until the OS starts.
    But if we're trying to boot the OS from the LUN, we're at a catch-22 now. The initiator won't log in until the OS boots, and the OS won't boot until the initiator logs in, unless we're missing some little step.
    What we haven't done is check the local disk configuration policy, so we'll see if that's correct.
    EDIT: OK, when the vHBA BIOS message comes up, it sticks around for about 15 seconds and POST continues. The storage WWN does not show up and the Brocade's Name Servers screen doesn't show the server's HBA. It looks like it's misconfigured somewhere, it's just quite convoluted finding out where. I'll post back if we find it.
    EDIT2: We tried the installation again; the initiator stays logged out until ESX asks where to install and provides the SAN LUN. EMC then shows the initiator logged in.
    The Palo card does support SAN booting, correct?

  • Guest domains fail to boot from SAN disk after power-cycle

    Control Domain Environment
    # uname -a
    SunOS s0007 5.10 Generic_139555-08 sun4v sparc SUNW,T5440
    # ./ldm -V
    Logical Domain Manager (v 1.1)
    Hypervisor control protocol v 1.3
    Using Hypervisor MD v 0.1
    System PROM:
    Hypervisor v. 1.7.0 @(#)Hypervisor 1.7.0 2008/12/11 13:42\015
    OpenBoot v. 4.30.0 @(#)OBP 4.30.0 2008/12/11 12:16
    After a power-cycle the guest domains did not boot from SAN disks, they
    tried to boot from the network.
    # ./ldm list
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- SP 16 8G 0.5% 4d 20h 46m
    s0500 active -t---- 5000 8 8G 12% 4d 20h 46m
    s0501 active -t---- 5001 8 4G 12% 4d 20h 46m
    s0502 active -t---- 5002 16 8G 6.2% 4d 20h 46m
    s0503 active -t---- 5003 8 2G 12% 4d 20h 46m
    s0504 active -t---- 5004 8 4G 0.0% 4d 20h 46m
    s0505 active -t---- 5005 4 2G 25% 4d 20h 46m
    s0506 active -t---- 5006 4 2G 25% 4d 20h 46m
    s0507 active -t---- 5007 4 2G 25% 4d 20h 46m
    s0508 active -t---- 5008 4 2G 25% 4d 20h 46m
    Connecting to console "s0508" in group "s0508" ....
    Press ~? for control options ..
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    Requesting Internet Address for 0:14:4f:fa:b9:1b
    If we reboot the guest domains now, it works.
    It seems the disks are not ready at the
    time the guest domains boot.
    We see this on systems with many guest domains.
    Is there any logfile, where we could find the reason for this?
    Or is this a known issue?

    Press F12 on boot to choose CD/DVD as a temporary boot device.
    My hunch is that there's something wrong with the SSD itself, though.
    Good luck.
    Cheers,
    George

  • UCS Unable to boot from SAN

    I have some blades that I'm unable to boot from SAN. The weird thing is I can see the LUNs and I can install ESXi 5.5 fine. It's only after I reboot that the blades just boot to the BIOS because they don't see any disk. When I try to change the boot order, the only thing I can boot to is UEFI.

    When you see your LUN and can install ESXi, it means that
    - your zoning is correct
    - your LUN masking / mapping is correct
    If boot fails, your boot policy is the problem! Are you sure that the target PWWN points to the controller of the disk subsystem and that the LUN number is correct? We have seen all kinds of weird things, like crossed cables, ...
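    A quick way to confirm this from the blade itself (the same approach as the lunlist reply earlier in this list; chassis/slot/adapter numbers are placeholders) is to check what the VIC actually sees:
    connect adapter 1/1/1
    connect
    attach-fls
    lunlist
    The target WWPN and LUN number reported here must match the SAN boot target entries in the boot policy exactly.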

  • Boot from SAN, yes or no ??

    I see more and more partners and customers that oppose booting from a network (be it FC, FCoE, iSCSI, PXE); they prefer installation on local disk, USB and/or SD. Any feedback from the field?
    One should also be aware that W2012 (R2) with SMB v3 storage (pushed by MSFT of course) doesn't support boot over the network; W2012 has to be installed on a local device.
    Walter.

    Walter,
    The problem with not using boot from SAN is that data management is decentralized. (I'm not talking about performance yet.)
    So far, I have never done a UCS deployment booting from local disks; all boot from SAN (FC, FCoE, iSCSI).
    Is there a better place to keep data than a storage array?
    Using boot from SAN and UCS Service Profiles, you can deliver a solution that can restore any server in minutes without stress.
    Of course, if we are talking about a very small deployment, boot from SAN doesn't make a big impact.

  • Exchange 2013 Servers booting from SAN

    Hi,
    One of my clients has decided to go with NetApp FlexPods for implementing Exchange 2013 in their environment.
    It has also been decided that these servers will boot from SAN and
    Exchange 2013 will also be installed on SAN. 
    We are planning to install Windows 2012 R2 for the OS and E2k13 CU8. A few questions:
    1) Is it supported to boot Windows Server 2012 R2 from SAN, especially if that server is going to be used for E2k13?
    2) Is it supported to install Exchange 2013 CU8 on SAN?
    I researched this and understood that it is supported from a Windows OS perspective. I am more concerned about the whole configuration falling within the support boundary of Exchange 2013. I found an article which talks about the complexity involved in
    troubleshooting and indicates that it is mainly the SAN vendor who would be troubleshooting if you run into any issues.
    https://support.microsoft.com/en-us/kb/305547 - This article is not yet applicable to 2012 R2. It is applicable only up to 2008 R2.
    http://www.enterprisenetworkingplanet.com/datacenter/the-pros-cons-of-booting-from-san.html - Pro & Cons of booting from SAN
    Booting Exchange servers from SAN concerns me a lot, as I have never seen any implementation in Exchange 2003, 2007 & 2010 doing this in my 10 years of experience. I have supported about 10 different enterprise Exchange deployments and never saw
    this. So this is unknown territory for me, hence I'm looking for some best practices, articles and guidance around this, if it is supported by the Microsoft Exchange team to run Windows and Exchange from SAN.
    Thanks
    Siva

    Hello,
    Please see the "Supported storage architectures" section in the following article:
    https://technet.microsoft.com/en-us/library/ee832792%28v=exchg.150%29.aspx
    Thanks,
    Simon Wu
    TechNet Community Support
