Solaris 10: attaching a LUN from a SAN

Hi,
I need help with Solaris 10 and a SAN. On one server running Solaris 10 I am trying to connect a LUN from the SAN, but I am not sure how. On this new server I need to attach a new LUN. Does anybody have a step-by-step manual for this? On Solaris on SPARC I have no problem, but on this Solaris 10 x86 box it seems to be a big problem. Can you help me? Thanks.

This is the current state (output of cfgadm -al):
c0::500601603022435d disk connected configured unknown
c0::50060160302247b5 disk connected configured unknown
c0::500601683022435d disk connected configured unknown
c0::50060168302247b5 disk connected configured unknown
c1::50060161302247b5 disk connected configured unknown
c1::500601693022435d disk connected configured unknown
c1::50060169302247b5 disk connected configured unknown
In format I have:
AVAILABLE DISK SELECTIONS:
0. c0t50060168302247B5d0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w50060168302247b5,0
1. c0t50060160302247B5d0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w50060160302247b5,0
2. c0t500601603022435Dd0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w500601603022435d,0
3. c0t500601683022435Dd0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0/fp@0,0/disk@w500601683022435d,0
4. c1t50060169302247B5d0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0,1/fp@0,0/disk@w50060169302247b5,0
5. c1t50060161302247B5d0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0,1/fp@0,0/disk@w50060161302247b5,0
6. c1t500601693022435Dd0 <drive type unknown>
/pci@0,0/pci8086,25e4@4/pci1077,139@0,1/fp@0,0/disk@w500601693022435d,0
7. c2t0d0 <DEFAULT cyl 4373 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,25e7@7/pci8086,32c@0/pci1028,1f08@8/sd@0,0
Where is my error?
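For reference: the LUNs are actually visible in the output above, and <drive type unknown> in format usually just means the disk has no label yet, not that the connection failed. A minimal sketch of the usual discovery-and-label sequence on Solaris 10 (the controller and WWN below are taken from the output above; adjust them to your devices):

    # rescan and list FC attachment points together with their LUNs
    cfgadm -al -o show_SCSI_LUN

    # configure a controller if its LUNs show as unconfigured
    cfgadm -c configure c0

    # rebuild the /dev and /devices links
    devfsadm -v

    # label the new disk so it can be used
    format
    # select e.g. c0t50060168302247B5d0, then run: label

The 5006016... WWNs look like an EMC CLARiiON; if so, also verify on the array side that the host initiators are registered and the LUN is in the host's storage group, and note that half of the paths will normally be passive unless PowerPath or MPxIO is handling failover.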

Similar Messages

  • Solaris support booting from san ?

    Does anyone know if Solaris 10 supports booting from a SAN fabric LUN? In particular, an IBM DS4800.
    The installer doesn't see the LUNs when it comes time to choose a disk to install on, only the internal disks.
    The server is an IBM LS20 blade. I've managed to install Solaris 10 on the internal drives and can see the LUNs on the DS4800.
    I'm using Solaris 10 Express 10/05 x86/64.
    Brian.

    Do you see the LUNs after boot? I'm wondering if there isn't a driver for your controller in the default installation environment. I would assume one could be added (that being one of the big benefits of newboot), especially if you could create a jumpstart server. (It's easier to add a driver to it than to burn a new CD/DVD.)
    Darren

  • How to "boot from SAN" from a LUN which belongs to other solaris Machine.

    Hi all
    I have installed Solaris on a LUN (boot from SAN).
    Then I assigned the same OS LUN to another machine (the hardware is exactly the same), and the new machine detects the OS, but it reboots and kicks off partway through boot.
    I have tried changing vfstab settings.
    Can someone help me?
    Thanks in advance.
    sidd.

    Disable IDE RAID and ensure SATA is enabled in the BIOS.
    Disconnect any other IDE HDDs before you install Windows to your SATA drive; they can be reconnected afterwards.
    Make sure the SATA drive is listed first under 'Hard Disk Priority' in the BIOS 'Boot Order'.

  • Solaris 10 x86 boot from SAN

    Hello
    I am trying to install Solaris 10 x86 on an IBM blade booting from SAN. The blade includes a Fibre Channel expansion card (QLogic qla2312) and no internal drive.
    The installation procedure does not find any bootable device to install Solaris on. I have tried the QLogic driver disk for Solaris 9; the driver is loaded (apparently), but no disk is found (a LUN is configured and presented to the blade, and the BIOS of the FC card can see it).
    Is there any solution ?
    thanks in advance

    I just today posted in "Solaris 10 General" about this same topic.
    It appears that only SPARC supports this, but I'm not certain.
    As I stated in my other post, and as you also state, the installer doesn't see the LUN.
    FYI, I was able to do this with RHEL on the same blade and to the same LUN.
    Did you find any solution ?

  • Removing LUNs from Solaris 10

    Hi all, I have a 2-node Solaris 10 11/06 cluster running Sun Cluster 3.2. Its shared storage is supplied from an EMC CLARiiON SAN. Some of the LUNs used by the cluster have recently been replaced, so I now have multiple LUNs on this cluster that are no longer needed. But there's a problem.
    Last week I added, then removed, a new LUN to the cluster and one of the nodes in the cluster rebooted. Sun Microsystems analysed the problem and identified the reboot as a known issue (Bug ID is 6518348):
    On Solaris 10 Systems, any SAN interruptions causing a path to SAN devices to be temporarily offlined may cause the system to panic. The SAN interruption may be due to regular SAN switch and/or array maintenance, or as simple as a fibre cable disconnect.
    I need to remove the unused LUNs as the SAN that they live on is being moved.
    Is there any way I can safely remove the unused LUNs without causing or requiring server reboots?
    Thanks in advance,
    Stewart

    Hi Stewart,
    Usually devfsadm -Cv (-C = cleanup, -v = verbose) does a good job.
    If that doesn't work, you can use cfgadm -c unconfigure / cfgadm -c remove on the LUN's attachment point.
    Ex: cfgadm -c unconfigure c2::50060161106023dd; cfgadm -c remove c2::50060161106023dd
    Marco
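    A hedged sketch of a conservative removal order, given the panic bug Stewart mentions (the device and attachment-point names reuse Marco's example and are illustrative only):

        # 1. stop using the LUN first (zpool export / umount / metaset -d ...)
        # 2. offline each path at the driver level before the array unmaps it
        luxadm -e offline /dev/rdsk/c2t50060161106023DDd0s2
        # 3. unconfigure and remove the attachment point on every controller
        cfgadm -c unconfigure c2::50060161106023dd
        cfgadm -c remove c2::50060161106023dd
        # 4. only now unmap the LUN on the array
        # 5. clean up the stale /dev links
        devfsadm -Cv

    Whether the luxadm -e offline step is needed depends on the multipathing stack in use (MPxIO vs. PowerPath), so treat it as an assumption; the key point is that the OS stops touching every path before the array pulls the LUN.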

  • Patches needed to see extra luns from HDS SAN

    Hi,
    I hope you can help. I am having problems at the moment getting two Solaris 9 servers to see more than one LUN from the SAN attached to both machines. Nothing is shown in cfgadm -al.
    The hardware is:
    Two QLogic 2300 FC HBA cards attached to a Hitachi TagmaStore AMS200 array, connected through a Brocade SilkWorm switch. At present I can only see the first LUN assigned and nothing else.
    Any ideas ?
    Cheers,
    Mike

    wdey,
    The link you supplied in your response is the answer.
    You can look at the devices on the FC boot paths, but not while the OS is running.
    I have a test server I could reboot to try what you have in your document, and it does work.
    Too bad Cisco could not create a Linux util RPM that would allow looking down the fabric while running. I'm quite sure Emulex has this kind of util, called HBAnyware, at least for Solaris.
    In any case, thanks for the link to your article, it was VERY good!
    My problem with seeing a shorter list of LUNs on 2 of my FC paths may get solved this afternoon by a change my SAN admin is going to make. If it does, I will post the fix.
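    For what it's worth, on Solaris 9 with the non-Leadville qla2300 driver, LUNs beyond the first usually have to be enumerated explicitly before sd will probe them. A minimal sketch, assuming the LUNs bind through sd.conf (the target numbers are placeholders for your binding):

        # /kernel/drv/sd.conf -- one entry per additional LUN
        name="sd" class="scsi" target=0 lun=1;
        name="sd" class="scsi" target=0 lun=2;

        # then rebuild the device tree
        devfsadm -i sd
        # or, if the driver only probes at boot:
        reboot -- -r

    With the Sun Leadville stack (fp/fcp) none of this is needed; there, cfgadm -c configure cX plus devfsadm is enough, which may explain why one setup sees all LUNs and another does not.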

  • 1) How to Boot from SAN for T4-1 Server with Solaris 11.1 OS on the disk? 2) How to SMI Label/Format a disk while OS Installation in Solaris 11.1?

    FYI: boot from SAN is required for a physical server (T4-1), not OVM.
    1) How to Boot from SAN for T4-1 Server with Solaris 11.1 OS on the disk?
    The allocated SAN disks are visible at the ok prompt; below is the output.
    (0) ok show-disks
    a) /pci@400/pci@2/pci@0/pci@f/pci@0/usb@0,2/hub@2/hub@3/storage@2/disk
    b) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/disk
    c) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    d) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/disk
    e) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0/fp@0,0/disk
    f) /pci@400/pci@2/pci@0/pci@4/scsi@0/disk
    g) /pci@400/pci@1/pci@0/pci@4/scsi@0/disk
    h) /iscsi-hba/disk
    q) NO SELECTION
    valid choice: a...h, q to quit: c
    /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk has been selected.
    Type ^Y (Control-Y) to insert it in the command line.
    e.g. ok nvalias mydev ^Y
    for creating devalias mydev for /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    (0) ok set-sfs-boot
    set-sfs-boot ?
    We tried selecting a disk and applying set-sfs-boot at the ok prompt, without success.
    Can you please provide detailed prerequisites/steps/procedure to implement this and to get boot from SAN working?
    2) How to SMI Label/Format a disk during OS installation in Solaris 11.1?
    As we know, ZFS is the default file system in Solaris 11.
    We have seen in the Oracle documentation that the following is recommended for rpool:
    - A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label.
    - Create root pools with slices by using the s* identifier.
    - ZFS applies an EFI label when you create a storage pool with whole disks.
    - In general, you should create a disk slice with the bulk of disk space in slice 0.
    I have seen the solution of changing the label using format -e, but then all the data is lost. What is the way to apply an SMI label/format to the rpool disk during the OS installation itself?
    Please provide the steps to SMI-label a disk while installing the Solaris 11.1 OS.

    Oracle recommends the following for rpool in its documentation (that's the reason I wanted to apply an SMI label):
    - A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label.
    - Create root pools with slices by using the s* identifier.
    - ZFS applies an EFI label when you create a storage pool with whole disks.
    - In general, you should create a disk slice with the bulk of disk space in slice 0.
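    A hedged sketch of relabeling the target disk to SMI before the installation writes to it, using a shell on the install media (the text installer's menu offers a shell option; the disk name is a placeholder, and relabeling destroys whatever is on that disk):

        # format -e c0t5000CCA012345678d0
        format> label
        [0] SMI Label
        [1] EFI Label
        Specify Label type[1]: 0
        Ready to label disk, continue? y
        format> partition    (put the bulk of the space in slice 0)
        format> quit

    Then exit the shell and start the interactive install, selecting the slice (cXtYdZs0) rather than the whole disk, so the root pool keeps the SMI label and no data-destroying relabel is needed afterwards.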

  • Boot from san in solaris 11

    Hi,
    Thanks in advance.
    I have Solaris 11 set up on SPARC (Sun Fire V250). I want to set up a boot-from-SAN environment in Solaris 11.
    1) In Solaris 10 I used to do the setup using the ufsdump command and the subsequent steps. In Solaris 11 ufsdump cannot be used, since the native file system has changed to ZFS.
    2) When I try to create a boot environment using the beadm utility, I am able to create and activate a BE on the SAN (SCSI) disk. But when I activate the new BE, its status changes to "R" while that of the current BE is still "NR" (it should be "N", per the documented beadm behaviour). When I reboot, the system boots into the same old BE again. I tried setting boot-device in OBP to the new SAN disk, which gives the error "Does not appear to be an executable".
    In Solaris 10 I used lucreate for creating a ZFS BE, but the lucreate command is not present in Solaris 11.
    Can anybody help me create a SAN boot environment with either a UFS or ZFS file system?
    Thanks and Regards
    Maneesh

    Glad to hear that you have other supportable systems that you can try this with.
    881312 wrote:
    1) In solaris 10 I used to do set up using ufsdump command and the further steps. Now in solaris 11 the command ufsdump cannot be used since the native file system has changed to zfs.
    With ZFS, the analogs of ufsdump and ufsrestore are 'zfs send' and 'zfs receive'. The process for creating an archive of a root pool and restoring it is documented in the ZFS admin guide at http://docs.oracle.com/cd/E23824_01/html/821-1448/recover-1.html#scrolltoc. Note that instead of sending it to a file and receiving it from the file, you can use a command like "zfs send -R pool1@snap | zfs recv pool2@snap" (see the sketch below). Read the doc chapters that I mention for the zfs send and recv options that may be important, as well as other things you need to do to make the other pool bootable.
    2) When I tried to create a boot environment using beadm utility, I am able to create and activate BE in the san(SCSI) disk. But when I activated the new BE, status of new BE is changed to "R" and that of the current BE is still "NR" (which should be N, as per the behaviour of beadm). When I reboot the system it is again getting booted in the same old BE. I tried by setting the boot-device in OBP as the new san disk, which is giving error as "Does not appear to be an executable".
    I would have expected this to work better than that, but needing to set boot-device in the OBP doesn't surprise me. By any chance, was the pool on the SAN created using the whole disk (e.g. c3t0d0) instead of a slice (c3t0d0s0)? Root pools need to be created on a slice.
    Note that beadm only copies the boot environment. Datasets like <rootpool>/export (mounted at /export) and its descendants are not copied. Also, dump and swap are not created in the new pool. Thus, you may have built dependencies into the system that cross the original and new root pools. You may be better off using a variant of the procedure in the ZFS admin guide mentioned above to be sure that everything is copied across. On the first boot you will likely have other cleanup tasks, such as:
    - 'zpool export' the old pool so that you don't have multiple datasets (e.g. <oldpool>/export and <newpool>/export) trying to mount on the same mountpoint.
    - Modify vfstab to point to the new swap device.
    - Use dumpadm to point to the new dump device.
    In solaris 10 i used lucreate for creating zfs BE, but lucreate command is not present solaris 11.
    I think that once you get past this initial hurdle, you will find that beadm is a great improvement. Note that beadm is not really intended to migrate the contents of one root pool to another; it has a more limited scope.
    Can anybody help me creating a SAN boot environment either with ufs or zfs file system?
    Is there any reason to not just install directly to the SAN device? You shouldn't really need the extra step of installing to a non-SAN disk first, thus avoiding the troubles you are seeing.
    Good luck and please let us know how it goes.
    Mike
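    A minimal sketch of the send/receive migration Mike describes, assuming the old root pool is rpool, the new SAN pool is rpool2 on slice c3t0d0s0, and the BE dataset is rpool2/ROOT/solaris (all of these names are placeholders; the ZFS admin guide chapter has the authoritative steps):

        # create the new pool on a slice, not a whole disk
        zpool create rpool2 c3t0d0s0

        # snapshot the entire root pool and copy it across
        zfs snapshot -r rpool@migrate
        zfs send -R rpool@migrate | zfs recv -Fdu rpool2

        # make the new pool bootable (SPARC)
        zpool set bootfs=rpool2/ROOT/solaris rpool2
        installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t0d0s0

        # then point boot-device in OBP at the new disk and boot

    Check the real BE name with 'beadm list' and the device with 'format' first; and per the cleanup list above, recreate dump/swap and export the old pool on first boot.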

  • Boot sparc Solaris 10 from SAN

    Hi,
    Is it possible to boot Solaris 10 from SAN?
    Is there any documentation that explains how?

    Sure.
    docs.sun.com
    Emulex cards: www.emulex.com
    QLogic cards: www.qlogic.com
    Depending on your array, card, and Solaris version, you can find the specifics in the support area at the HBA vendor sites.
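    A hedged sketch of the OBP side once the HBA's FCode and the array LUN are set up (the device path and WWN are placeholders; paste the real path from show-disks with Control-Y):

        ok show-disks
        ok nvalias sandisk /pci@1f,700000/SUNW,qlc@2/fp@0,0/disk@w50060160302247b5,0
        ok setenv boot-device sandisk
        ok boot sandisk

    Solaris 10 must already have been installed onto that LUN (with the Leadville fp/fcp stack) for the boot to succeed; the HBA vendor documents cover enabling the card's boot FCode.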

  • Windows 2003 File Share 4 node Cluster: Does Cluster Resources need to be brought offline prior removing / unmapping any LUN's from SAN end?

    Hello All,
    Recently, on a 4-node Windows 2003 file share cluster, we encountered a problem where trying to remove a few shares (that were no longer in use) directly from the SAN end crashed the entire cluster (i.e., other shares also lost their SAN connectivity). I would assume the cluster resources need to be taken offline before removing anything from the SAN end, but I've been advised that these shares were not the root of a disk but 'mount points' created within the share, and hence there is no need to take any cluster resources offline.
    Please can someone comment on the above and provide detailed steps for how we go about reclaiming SAN space from specific shares on a W2003 cluster?
    p.s., let me know if you need any additional information.
    Thanks in advance.

    Hi Alex,
    The problem started when SAN support reclaimed a few storage LUNs by unmapping them from our clustered file servers. When they reclaimed the unused LUNs, other SAN drives also disappeared, causing the unavailability of the file shares.
    Internet access is not enabled on these servers. The servers in question run 64-bit Windows Server 2003 SP2 in a four-node file share cluster. When the unused LUNs were pulled, the entire cluster lost its SAN connectivity, and the Windows cluster service would not start on any of the cluster nodes. To resolve the problem, all four cluster nodes were rebooted, after which the cluster service started on all nodes and resources came online.
    Some of the events at the time of the problem were:
    Event ID     : 57                                                      
    Raw Event ID : 57                                                      
    Record Nr.   : 25424072                                                
    Category     : None                                                    
    Source       : Ftdisk                                                  
    Type         : Warning                                                 
    Generated    : 19.07.2014 10:49:46                                     
    Written      : 19.07.2014 10:49:46                                     
    Machine      : ********                                             
    Message      : The system failed to flush data to the transaction log.   
    Corruption may occur.                                                  
    Event ID     : 1209   
    Raw Event ID : 1209                                                    
    Record Nr.   : 25424002                                                
    Category     : None                                                    
    Source       : ClusDisk                                                
    Type         : Error                                                   
    Generated    : 19.07.2014 10:49:10                                     
    Written      : 19.07.2014 10:49:10                                     
    Machine      : ***********                                             
    Message      : Cluster service is requesting a bus reset for device      
    \Device\ClusDisk0.                                                     
    Event ID     : 15     
    Raw Event ID : 15                                                      
    Record Nr.   : 25412958                                                
    Category     : None                                                    
    Source       : Disk                                                    
    Type         : Error                                                   
    Generated    : 11.07.2014 10:54:53                                     
    Written      : 11.07.2014 10:54:53                                     
    Machine      : *************                                            
    Message      : The device, \Device\Harddisk46\DR48, is not ready for access yet.                                                            
    Let me know if you need any additional info, many thanks.
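    For the original question: a hedged sketch of the usual order on Windows Server 2003, using cluster.exe (the resource name is a placeholder; the point is that the physical disk resource goes offline and is deleted before the LUN is unmapped on the array):

        :: list resources and find the physical disk behind the share
        cluster res

        :: take the disk resource (and its dependent shares) offline
        cluster res "Disk Q:" /offline

        :: remove the resource once nothing depends on it
        cluster res "Disk Q:" /delete

        :: only then have the SAN team unmap/reclaim the LUN

    A volume mounted as a mount point inside another disk is typically still backed by its own physical disk resource, so it needs the same treatment before unmapping.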

  • Oracle 10g RAC Database Migration from SAN to New SAN.

    Hi All,
    Our client has implemented a two-node Oracle 10g R2 RAC on HP-UX v2. The database is on ASM, on an HP EVA 4000 SAN, and is around 1.2 TB.
    The requirement now is to migrate the database and clusterware files to a new SAN (EVA 6400).
    SAN-to-SAN migration can't be done, as the customer didn't get a license for such storage migration.
    My immediate suggestion was to connect the new SAN, present the LUNs, add the disks from the new SAN, wait for the rebalance to complete, and then drop the old disks on the old SAN; see "Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime" (Doc ID 837308.1).
    The client wants us to suggest alternate solutions, as they are worried that presenting LUNs from the old SAN and the new SAN at the same time may cause issues, and that if the rebalance fails it may affect the database. They are also not able to estimate the time to rebalance a 1.2 TB database across disks from two different SANs. The downtime window is only 48 hours.
    One wild suggestion was to:
    1. Connect the New SAN.
    2. Create New Diskgroups on New SAN from Oracle RAC env.
    3. Backup the Production database and restore on the same Oracle RAC servers but on New Diskgroups.
    4. Start the database from new Diskgroup location by updating the spfile/pfile
    5. Make sure everything is fine then drop the current Diskgroups from Old SAN.
    Will the above idea work in a production environment? I think there are a lot of risks in doing the above.
    The customer does not have Oracle RAC in a test environment, so there is no chance of trying out any method.
    Any suggestion is appreciated.
    Rgds,
    Thiru.

    user1983888 wrote:
    The client wants us to suggest alternate solutions, as they are worried that presenting LUNs from the old SAN and the new SAN at the same time may cause issues, and that if the rebalance fails it may affect the database. They are also not able to estimate the time to rebalance a 1.2 TB database across disks from two different SANs. The downtime window is only 48 hours.
    Adding and removing LUNs online is one of the great features of ASM. The rebalance runs while the database stays up. No downtime!!!
    If your customer does not trust ASM, Oracle Support can answer all doubts. For any concern, contact Oracle Support for guidance on the best way to perform this work.
    One wild suggestion was to:
    1. Connect the New SAN.
    2. Create New Diskgroups on New SAN from Oracle RAC env.
    3. Backup the Production database and restore on the same Oracle RAC servers but on New Diskgroups.
    4. Start the database from new Diskgroup location by updating the spfile/pfile
    5. Make sure everything is fine then drop the current Diskgroups from Old SAN.
    ASM supports many terabytes; if you had to migrate three databases of 20 TB each, the backup-and-restore approach described above would be very laborious. Adding and removing LUNs online is exactly the feature that is meant to do this work (see the sketch after this reply).
    Get the approval from Oracle Support and do this work using the ASM rebalance.
    Regards,
    Levi Pereira
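    A minimal sketch of the online add/drop rebalance from Doc ID 837308.1 that Thiru mentions (the diskgroup, device paths, and ASM disk names are placeholders; the old LUNs must stay presented until the rebalance finishes):

        -- as SYSASM on the ASM instance
        ALTER DISKGROUP DATA ADD DISK
          '/dev/rdsk/new_san_disk1',
          '/dev/rdsk/new_san_disk2'
          REBALANCE POWER 8;

        -- watch until EST_MINUTES reaches 0
        SELECT group_number, operation, state, power, est_minutes
          FROM v$asm_operation;

        -- then drop the old-SAN disks; ASM rebalances again
        ALTER DISKGROUP DATA DROP DISK DATA_0000, DATA_0001
          REBALANCE POWER 8;

    The DROP DISK completes only after the data has been moved off the old disks, which is what makes the whole operation safe to do online.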

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Servers without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. Most common TAC cases that we have seen on Boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify configuration and troubleshoot from server to storage switches to storage array.
    Before diving into troubleshooting, make sure there is a clear understanding of the topology. This is vital in any troubleshooting scenario: know what devices you have and how they are connected, how many paths there are, the switch/NPV mode, and so on.
    Always troubleshoot one path at a time, and verify that the setup is compliant with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at server
    a. make sure to have uniform firmware version across all components of UCS
    b. Verify if VSAN is created and FC uplinks are configured correctly. VSANs/FCoE-vlan should be unique per fabric
    c. Verify at service profile level for configuration of vHBAs - vHBA per Fabric should have unique VSAN number
    Note down the WWPN of your vhba. This will be needed in step 2 for zoning on the SAN switch and step 3 for LUN masking on the storage array.
    d. verify if Boot Policy of the service profile is configured to Boot From SAN - the Boot Order and its parameters such as Lun ID and WWN are extremely important
    e. finally, at the UCS CLI, verify the FLOGI of the vHBAs (in NPV mode the command, from NX-OS, is: show npv flogi-table)
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. The most important configuration to check on the storage switch is the zoning.
    Zoning is basically access control from our initiator to targets. The most common design is to configure one zone per initiator-target pair.
    Zoning requires you to configure a zone, put that zone into your current zoneset, then ACTIVATE it (command - show zoneset active vsan X); see the sketch at the end of this post.
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is the crucial step on the storage array that gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (per Steps 1 and 2), the following needs to be verified at the storage array level:
    a. Are the WWPNs of the initiators (the hosts' vHBAs) visible on the storage array?
    b. If so, is LUN masking applied?
    c. What LUN number is presented to the host? This is the number we see as the LUN ID in the 'Boot Order' of Step 1.
    Below document has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 
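    To make step 2e concrete, a hedged sketch of one-initiator-one-target zoning on an MDS or Nexus 5000 (the WWPNs and the VSAN number are placeholders):

        conf t
        zone name Z_SERVER1_ARRAY_SPA vsan 10
          member pwwn 20:00:00:25:b5:00:00:0f
          member pwwn 50:06:01:60:3e:a0:11:22
        zoneset name ZS_FABRIC_A vsan 10
          member Z_SERVER1_ARRAY_SPA
        zoneset activate name ZS_FABRIC_A vsan 10
        end
        show zoneset active vsan 10

    The first pwwn is the server vHBA noted in step 1c and the second is the storage port; repeat per fabric with that fabric's own VSAN number.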

  • Question from SAN Newbie

    Traditionally we've always used NAS based storage for our server load balancing solutions.
    With SAN, is it possible to set up shared storage that can be accessed by multiple servers simultaneously for load-balancing purposes (web/email)? I know you can configure individual storage volumes for use by individual servers, but that is of lesser importance to us.
    Also, which Cisco solution would be required for iSCSI mirroring of SAN data from one site to another? The throughput of the write-through is only on the order of about 50 Mbps.
    Thanks,
    = K

    In a SAN, multiple targets (storage LUNs) can be shared and accessed by multiple initiators (hosts). Clustering software on the hosts, or SCSI reserve methods, is used to lock an area of the disk so that only one write operation is in flight at a time; reads can all proceed concurrently. There are many ways to balance the I/Os to the targets on the SAN. As for mirroring with iSCSI, where there is a local SAN-connected target and an iSCSI-connected target, the I/O is sent to both (a local copy and a remote copy) for backup purposes; the application driving this could be as simple as NTBackup. If you really want to push a backup or replication of the data from SAN to SAN, you would use the FCIP protocol along with a data mover application on the array, or a package like Veritas.

  • Boot from san with local disk

    I have some B200 M3s with local disks. I would like to configure them to boot from SAN. I've set up a service profile template with a boot policy that boots from CD first and then the SAN, and a local disk configuration policy that mirrors the local disks. I've zoned the machines so that each presently sees only one path to the storage, because I'm installing Windows and don't want it to see the disks oddly because of multiple paths to the same disk.
    When I boot the machine it sees the disk. I boot the Windows 2012 R2 ISO and load the drivers for the Cisco mLOM, and then the single LUN appears. The local disk also appears. Windows 2012 R2 can't be installed on the SAN disk, only on the local disk. It sees the local disk as disk 0 and the SAN disk as disk 3, and I don't know how to get the machine to see the SAN disk as disk 0. The LUN (which resides on a VNX5600) is LUN 0, the boot policy is configured with the SAN LUN as LUN 0, and during boot the SAN LUN even appears as LUN 0. The error I'm getting from the Windows installer is: "We couldn't install Windows in the location you chose. Please check your media drive. Here's more info about what happened: 0x80300001." Any suggestions to get this to boot from SAN?

    Hi
    During the boot-up process, do you see the WWPN of the target, showing that the VIC can talk to the storage?
    Reboot the server in question; when you see the option to enter the BIOS, press F2. Then SSH to the primary fabric interconnect and run the following commands:
    connect adapter x/x/x    <--- (chassis #/slot #/adapter #)
    connect
    attach-fls
    lunlist
    (provide output of last command)
    lunmap
    (provide output of last command)

  • Boot from SAN, ESX 4.1, EMC CX4

    Please feel free to redirect me to a link if this has already been answered. I've been all over the place and haven't found one yet.
    We have UCS connected (m81kr in the blade) via a Brocade FC switch into an EMC CX4.
    All physical links are good.
    We have a single vHBA in a profile along with the correct target WWN and LUN 0 for SAN booting.
    The Brocade shows both the interconnect and the server WWNs, and they are zoned along with the EMC WWN.
    Default VSAN (1) on the profile.
    What we were expecting to do is boot the server, but not into an OS, and then open Connectivity Status on the EMC console and see the server's WWN ready to be manually registered (a la http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/). We are not seeing this.
    Instead, when booting the blade, it will show on the switch (NPIV is enabled) and can be zoned, but the WWN won't show in Connectivity Status. Once we get the ESX installation media running, it shows up and we can register and assign the host. That's fine for installing. Therefore, we know there is end-to-end connectivity between the server and the LUN.
    Once we get ESX installed and try to boot from SAN, the server's initiator won't log into the EMC. The server's KVM shows only a blinking cursor, or it may drop down a few lines and hang. Connectivity Status shows the initiator still registered but not logged in.
    Are we making assumptions we should not?

    I think we're good all the way down to your comment, "If you get this far and start the ESX install, you'll see this as an available target." Here's where we diverge.
    Here is what we had thought should be possible, from http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/:
    UCS Manager Tasks
    Create a Service Profile Template with x number of vHBAs.
    Create a Boot Policy that includes SAN Boot as the first device and link it to the Template
    Create x number of Service Profiles from the Template
    Use Server Pools, or associate servers to the profiles
    Let all servers attempt to boot and sit at the “Non-System Disk” style message that UCS servers return
    Switch Tasks
    Zone the server WWPN to a zone that includes the storage array controller’s WWPN.
    Zone the second fabric switch as well. Note: for some operating systems (Windows for sure), you need to zone just a single path during OS installation, so consider this step optional.
    Array Tasks
    On the array, create a LUN and allow the server WWPNs to have access to the LUN.
    Present the LUN to the host using a desired LUN number (typically zero, but this step is optional and not available on all array models)
    From 1.5 above, that's where we'd hope to see the WWN show up in the storage and we could register the server's WWN and assign it to a storage group. It doesn't show up until the OS starts.
    But if we're trying to boot the OS from the LUN, we're at a catch-22 now. The initiator won't log in until the OS boots, and the OS won't boot until the initiator logs in, unless we're missing some little step.
    What we haven't done is check the local disk configuration policy, so we'll see if that's correct.
    EDIT: OK, when the vHBA BIOS message comes up, it sticks around for about 15 seconds and POST continues. The storage WWN does not show up and the Brocade's Name Servers screen doesn't show the server's HBA. It looks like it's misconfigured somewhere, it's just quite convoluted finding out where. I'll post back if we find it.
    EDIT2: We tried the installation again; the initiator stays logged out until ESX asks where to install and provides the SAN LUN. EMC then shows the initiator logged in.
    The Palo card does support SAN booting, correct?
