Guest domains fail to boot from SAN disk after power-cycle

Control Domain Environment
# uname -a
SunOS s0007 5.10 Generic_139555-08 sun4v sparc SUNW,T5440
# ./ldm -V
Logical Domain Manager (v 1.1)
Hypervisor control protocol v 1.3
Using Hypervisor MD v 0.1
System PROM:
Hypervisor v. 1.7.0 @(#)Hypervisor 1.7.0 2008/12/11 13:42
OpenBoot v. 4.30.0 @(#)OBP 4.30.0 2008/12/11 12:16
After a power-cycle, the guest domains did not boot from their SAN disks; they
tried to boot from the network instead.
# ./ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 8G 0.5% 4d 20h 46m
s0500 active -t---- 5000 8 8G 12% 4d 20h 46m
s0501 active -t---- 5001 8 4G 12% 4d 20h 46m
s0502 active -t---- 5002 16 8G 6.2% 4d 20h 46m
s0503 active -t---- 5003 8 2G 12% 4d 20h 46m
s0504 active -t---- 5004 8 4G 0.0% 4d 20h 46m
s0505 active -t---- 5005 4 2G 25% 4d 20h 46m
s0506 active -t---- 5006 4 2G 25% 4d 20h 46m
s0507 active -t---- 5007 4 2G 25% 4d 20h 46m
s0508 active -t---- 5008 4 2G 25% 4d 20h 46m
Connecting to console "s0508" in group "s0508" ....
Press ~? for control options ..
Requesting Internet Address for 0:14:4f:fa:b9:1b
Requesting Internet Address for 0:14:4f:fa:b9:1b
Requesting Internet Address for 0:14:4f:fa:b9:1b
Requesting Internet Address for 0:14:4f:fa:b9:1b
If we reboot the guest domains now, they boot fine. It seems the SAN disks are
not ready at the time the guest domains try to boot.
We see this on systems with many guest domains.
Is there a logfile where we could find the reason for this?
Or is this a known issue?
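A few places to look, assuming a standard control-domain setup (a hedged suggestion, not from the original post). On the control domain, check for virtual disk server (vds) errors around boot time, and confirm the guest's virtual disk backends are bound:
# grep vds /var/adm/messages
# ldm list-bindings s0508
From a stuck guest console, the boot can also be retried manually at the ok prompt:
ok printenv boot-device
ok boot disk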


Similar Messages

  • Cannot Boot From Internal Disk After Failed Firmware Update

    Xserve 1,1 (Late 2006)
    After a failed firmware update the server will not boot from an internal volume.
    I have tried the firmware restoration CD 1.4 without success due to the following:
    When I press the button, the sleep LED gives only the initial three fast blinks, not the following three slower blinks and the final three fast blinks.
    I can boot into Target Disk Mode, NetBoot, and from an installation CD. The volume with the OS (10.5) is recognized by another machine, but the server will not boot from it.
    I have tried reinstalling the OS on another drive, and repairing the drive.
    I would appreciate any suggestions.

    I read somewhere that it might have to do with the fact that it says "S.M.A.R.T. not supported" in DU. Is there any way to fix this? Does it mean I'll need a new drive soon?
    I think something else is afoot. I believe that when you boot from the system/install disc and run DU from there, DU does not recognize SMART reporting.
    Before I recently updated my MBP to 10.5.7, I ran DU from the DVD to verify the disk before applying the update (I'm cautious about those updates). It gave the same report you are seeing. Once I was booted from the hard drive, the SMART status report returned to normal. SMART will typically report "failing", not "not supported", if the drive is going bad.

  • 1) How to Boot from SAN for T4-1 Server with Solaris 11.1 OS on the disk? 2) How to SMI Label/Format a disk while OS Installation in Solaris 11.1?

    FYI: boot from SAN is required for a physical server (T4-1), not OVM.
    1) How to boot from SAN for a T4-1 server with Solaris 11.1 OS on the disk?
    The allocated SAN disks are visible at the ok prompt; below is the output.
    (0) ok show-disks
    a) /pci@400/pci@2/pci@0/pci@f/pci@0/usb@0,2/hub@2/hub@3/storage@2/disk
    b) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/disk
    c) /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    d) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0,1/fp@0,0/disk
    e) /pci@400/pci@2/pci@0/pci@8/SUNW,emlxs@0/fp@0,0/disk
    f) /pci@400/pci@2/pci@0/pci@4/scsi@0/disk
    g) /pci@400/pci@1/pci@0/pci@4/scsi@0/disk
    h) /iscsi-hba/disk
    q) NO SELECTION
    valid choice: a...h, q to quit  c
    /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk has been selected.
    Type ^Y ( Control-Y ) to insert it in the command line.
    e.g. ok nvalias mydev ^Y
    for creating devalias mydev for /pci@400/pci@2/pci@0/pci@a/SUNW,emlxs@0/fp@0,0/disk
    (0) ok set-sfs-boot
    set-sfs-boot ?
    We tried selecting a disk and applying set-sfs-boot at the ok prompt.
    Can you please provide detailed prerequisites/steps/procedure to implement this and to start booting from SAN?
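    (As a hedged aside: the usual OBP flow, once a disk has been chosen with show-disks, is to create a devalias and boot from it. The alias name below is a placeholder, and a boot-from-SAN path typically needs the target WWN appended, e.g. .../disk@wWWPN,LUN:)
    (0) ok show-disks                \ choose the FC disk, then press Ctrl-Y
    (0) ok nvalias sanboot ^Y
    (0) ok setenv boot-device sanboot
    (0) ok boot sanboot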
    2) How to SMI Label/Format a disk while OS Installation in Solaris 11.1?
    As we know, ZFS is the default file system in Solaris 11.
    We have seen in the Oracle documentation that the following is recommended for rpool:
    - A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label.
    - Create root pools with slices by using the s* identifier.
    - ZFS applies an EFI label when you create a storage pool with whole disks.
    - In general, you should create a disk slice with the bulk of disk space in slice 0.
    I have seen the solution of changing the labelling with format -e, but then all the data is lost. What's the way to apply an SMI label/format on the rpool disks during the OS installation itself?
    Please provide the steps to SMI-label a disk while installing Solaris 11.1.

    Oracle recommends the points above for rpool; that's the reason we wanted to apply an SMI label.
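    (A hedged aside: since relabeling destroys the data anyway, the disk is usually given an SMI label before the installer runs. A minimal format -e sketch; the disk selection is a placeholder:)
    # format -e
    Specify disk (enter its number): 0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[1]: 0
    format> partition
    partition> modify          (put the bulk of the space in slice 0)
    partition> label
    partition> quit
    format> quit
    The installer can then be pointed at slice 0 (the s* identifier mentioned above) for the root pool.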

  • Boot from san with local disk

    I have some B200 M3s with local disks. I would like to configure them to boot from SAN. I've set up a service profile template with a boot policy that boots from CD first and then the SAN, plus a local disk configuration policy to mirror the local disks. I've zoned the machines so that each presently sees only one path to the storage, because I'm installing Windows and I don't want it to get confused by multiple paths to the same disk.
    When I boot the machine it sees the disk. I boot to the Windows 2012 R2 ISO and load the drivers for the Cisco mLOM, and then the single LUN appears, as does the local disk. But Windows 2012 R2 will only install on the local disk, not the SAN disk. It sees the local disk as disk 0 and the SAN disk as disk 3, and I don't know how to get the machine to see the SAN disk as disk 0. The LUN (which resides on a VNX5600) is LUN 0, the boot policy is configured with the SAN LUN as LUN 0, and during boot the SAN LUN even appears as LUN 0. The error I'm getting from the Windows installer is: "We couldn't install Windows in the location you chose. Please check your media drive. Here's more info about what happened: 0x80300001." Any suggestions to get this to boot from SAN?
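    (A hedged aside, not from the thread's reply: one common way to confirm which disk Windows Setup actually sees, and to wipe the SAN LUN so Setup will accept it, is the Setup command prompt. The disk number below is from the post and may differ:)
    Press Shift+F10 in Windows Setup to open a command prompt, then:
    diskpart
    DISKPART> list disk       (identify the SAN LUN by its size)
    DISKPART> select disk 3
    DISKPART> clean
    DISKPART> exit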

    Hi
    During the boot-up process, do you see the WWPN of the target, showing that the VIC can talk to the storage?
    Reboot the server in question (when you see the option to enter the BIOS, press F2), then SSH to the primary fabric interconnect and run the following commands:
    connect adapter x/x/x    (chassis #/slot #/adapter #)
    connect
    attach-fls
    lunlist
    (provide output of last command)
    lunmap
    (provide output of last command)

  • Boot from USB disk fails

    goal:
    I would like to boot from a USB pendrive
    specs:
    MSI K8N neo platinum (bios 1.4/1.56)
    Transcend jetflash 256MB
    problem:
    The system does not boot from the disk
    what i tried:
     I made the USB stick bootable in 2 different ways:
      - I used the included software to make it bootable
      - I used mkbt.exe to make a boot record
    I updated the BIOS from 1.4 to 1.56
    I tried to boot with the F11 option
    I set the primary boot order to USB-ZIP, and I also tried USB-FDD and USB-HDD
    I also enabled the USB storage device in the BIOS
    I tried to get the USB stick recognized as a disk drive while booting from another disk, which also failed
    Is there anyone who has an idea how to make it bootable?
    I'm kinda out of ideas

    what more specs do you want?
    specs:
    MSI K8N neo platinum (bios 1.4/1.56)
    Transcend jetflash 256MB
    AMD 64 3200+
    Zalman cooler
    2 512 Kingston memory
    2 Maxtor HD
    2 CD/DVD drives
    ATI 9600 VGA card
    I would like to boot from a pendrive for various reasons, but mostly because I would like to boot into DOS or Linux.

  • UCS Unable to boot from SAN

    I have some blades that I'm unable to boot from SAN. The weird thing is that I can see the LUNs and can install ESXi 5.5 fine. But when I reboot, the blades just boot to the BIOS because they don't see any disk. When I try to change the boot order, the only thing I can boot to is UEFI.

    When you see your LUN and can install ESXi, it means that:
    - your zoning is correct
    - your LUN masking / mapping is correct
    If boot fails, your boot policy is the problem! Are you sure that the target pWWN points to the controller of the disk subsystem, and that the LUN number is correct? We've seen all kinds of weird things, like crossed cables...

  • Solaris 10 x86 boot from SAN

    Hello
    I am trying to install Solaris 10 x86 on an IBM blade booting from SAN. The blade includes a fibre channel expansion card (qla2312 from QLogic) and no internal drive.
    The installation procedure does not find any bootable device to install Solaris on. I have tried the QLogic driver disk for Solaris 9; the driver loads (apparently), but no disk is found (a LUN is configured and presented to the blade, and the BIOS of the FC card can see it).
    Is there any solution ?
    thanks in advance

    I just today posted in "Solaris 10 General" about this same topic.
    It appears that only SPARC supports this, but I'm not certain.
    As I stated in my other post, and as you also note, the installer doesn't see the LUN.
    FYI, I was able to do this with RHEL on the same blade and to the same LUN.
    Did you find any solution?

  • Boot from san in solaris 11

    Hi,
    Thanks in advance.
    I have a Solaris 11 setup on SPARC (Sun Fire V250). I want to set up a boot-from-SAN environment in Solaris 11.
    1) In Solaris 10 I used to do this with the ufsdump command and the subsequent steps. In Solaris 11 ufsdump can no longer be used, since the native file system has changed to ZFS.
    2) When I tried to create a boot environment with the beadm utility, I was able to create and activate a BE on the SAN (SCSI) disk. But after I activated the new BE, its status changed to "R" while the current BE is still "NR" (it should be "N", per the behaviour of beadm). When I reboot, the system boots into the same old BE. I tried setting boot-device in the OBP to the new SAN disk, which gives the error "Does not appear to be an executable".
    In Solaris 10 I used lucreate for creating a ZFS BE, but the lucreate command is not present in Solaris 11.
    Can anybody help me create a SAN boot environment with either a ufs or zfs file system?
    Thanks and Regards
    Maneesh
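    (A hedged sketch of the beadm flow being attempted, assuming a root-capable pool named npool already exists on a slice of the SAN disk; all names are placeholders:)
    # beadm create -p npool sanBE
    # beadm activate sanBE
    # beadm list
    Then, at the OBP, point boot-device at the slice holding the new pool, using the full device path of the SAN disk.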

    Glad to hear that you have other supportable systems that you can try this with.
    881312 wrote:
    > 1) In Solaris 10 I used to do this with the ufsdump command and the subsequent steps. In Solaris 11 ufsdump can no longer be used, since the native file system has changed to ZFS.
    With zfs, the analogs to ufsdump and ufsrestore are 'zfs send' and 'zfs receive'. The process for creating an archive of a root pool and restoring it is documented in the ZFS admin guide at http://docs.oracle.com/cd/E23824_01/html/821-1448/recover-1.html#scrolltoc. Note that instead of sending it to a file and receiving it from the file, you can use a command like "zfs send -R pool1@snap | zfs recv pool2@snap". Read the doc chapters that I mention for actual zfs send and recv options that may be important, as well as other things you need to do to make the other pool bootable.
    > 2) When I tried to create a boot environment with the beadm utility, I was able to create and activate a BE on the SAN (SCSI) disk. But after I activated the new BE, its status changed to "R" while the current BE is still "NR" (it should be "N", per the behaviour of beadm). When I reboot, the system boots into the same old BE. I tried setting boot-device in the OBP to the new SAN disk, which gives the error "Does not appear to be an executable".
    I would have expected this to work better than that - but needing to set boot-device in the OBP doesn't surprise me. By any chance, was the pool on the SAN created using the whole disk (e.g. c3t0d0) instead of a slice (c3t0d0s0)? Root pools need to be created on a slice.
    Note that beadm only copies the boot environment. Datasets like <rootpool>/export (mounted at /export) and its descendants are not copied. Also, dump and swap are not created in the new pool. Thus, you may have built dependencies into the system that cross the original and new root pools. You may be better off using a variant of the procedure in the ZFS admin guide I mentioned above to be sure that everything is copied across. On the first boot you will likely have other cleanup tasks, such as:
    - 'zpool export' the old pool so that you don't have multiple datasets (e.g. <oldpool>/export and <newpool>/export) both trying to mount datasets on the same mountpoint.
    - Modify vfstab to point to the new swap device
    - Use dumpadm to point to the new dump device
    > In Solaris 10 I used lucreate for creating a ZFS BE, but the lucreate command is not present in Solaris 11.
    I think that once you get past this initial hurdle, you will find that beadm is a great improvement. Note that beadm is not really intended to migrate the contents of one root pool to another - it has a more limited scope.
    > Can anybody help me create a SAN boot environment with either a ufs or zfs file system?
    Is there any reason to not just install directly to the SAN device? You shouldn't really need to do the extra step of installing to a non-SAN disk first, thus avoiding the troubles you are seeing.
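    (To make the send/receive route above concrete, a minimal sketch, assuming the source pool is rpool and npool is a pool created on a SAN disk slice such as c3t0d0s0; all names are placeholders:)
    # zfs snapshot -r rpool@migrate
    # zfs send -R rpool@migrate | zfs receive -Fdu npool
    # zpool set bootfs=npool/ROOT/solaris npool
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t0d0s0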
    Good luck and please let us know how it goes.
    Mike

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Servers without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. The most common TAC cases we have seen for boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify the configuration and troubleshoot from the server, to the storage switches, to the storage array.
    Before diving into troubleshooting, make sure you have a clear understanding of the topology. This is vital in any troubleshooting scenario: know what devices you have and how they are connected, how many paths there are, switch/NPV mode, and so on.
    Always troubleshoot one path at a time, and verify that the setup is in compliance with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at the server
    a. Make sure to have a uniform firmware version across all components of UCS.
    b. Verify that the VSAN is created and the FC uplinks are configured correctly. VSANs/FCoE-VLANs should be unique per fabric.
    c. Verify the vHBA configuration at the service profile level - each fabric's vHBA should have a unique VSAN number.
    Note down the WWPN of your vHBA. This will be needed in step 2 for zoning on the SAN switch and in step 3 for LUN masking on the storage array.
    d. Verify that the boot policy of the service profile is configured to boot from SAN - the boot order and its parameters, such as LUN ID and WWN, are extremely important.
    e. Finally, at the UCS CLI, verify the FLOGI of the vHBAs (in NPV mode the command, from NX-OS, is: show npv flogi-table).
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. The most important configuration to check on the storage switch is the zoning (a concrete example follows below).
    Zoning is basically access control from our initiator to its targets. The most common design is one zone per initiator/target pair.
    Zoning requires you to configure a zone, put that zone into your current zoneset, then ACTIVATE it. (command - show zoneset active vsan X)
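    (As a concrete illustration of this step, a minimal MDS/N5k zoning sketch with hypothetical WWPNs and VSAN 100; adapt the names and numbers to your fabric:)
    conf t
    zone name esx01-vhba0 vsan 100
      member pwwn 20:00:00:25:b5:00:00:0a     <- initiator (vHBA)
      member pwwn 50:06:01:60:3b:e0:12:34     <- target (array port)
    zoneset name fabric-A vsan 100
      member esx01-vhba0
    zoneset activate name fabric-A vsan 100
    show zoneset active vsan 100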
    Step 3: Check at Storage Array
    When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is the crucial step on the storage array; it gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (as per steps 1 & 2),
    the following needs to be verified at the storage array level:
    a. Are the WWPNs of the initiators (the hosts' vHBAs) visible on the storage array?
    b. If yes, is LUN masking applied?
    c. What LUN number is presented to the host? This is the number we see in LUN ID in the 'Boot Order' of step 1.
    Below document has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 

  • Boot from SAN, ESX 4.1, EMC CX4

    Please feel free to redirect me to a link if this has already been answered. I've been all over the place and haven't found one yet.
    We have UCS connected (m81kr in the blade) via a Brocade FC switch into an EMC CX4.
    All physical links are good.
    We have a single vHBA in a profile along with the correct target WWN and LUN 0 for SAN booting.
    The Brocade shows both the interconnect and the server WWNs, and they are zoned along with the EMC WWN.
    Default VSAN (1) on the profile.
    What we were expecting to do is boot the server, but not into an OS, and then open Connectivity Status on the EMC console and see the server's WWN ready to be manually registered (a la http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/). We are not seeing this.
    Instead, when booting the blade, it shows up on the switch (NPIV is enabled) and can be zoned, but the WWN won't show in Connectivity Status. Once we get the ESX installation media running, it shows up and we can register and assign the host. That's fine for installing; therefore, we know there is end-to-end connectivity between the server and the LUN.
    Once we get ESX installed and try to boot from SAN, the server's initiator won't log into the EMC. The server's KVM shows only a blinking cursor, or it may drop down a few lines and hang. Connectivity Status shows the initiator still registered but not logged in.
    Are we making assumptions we should not?

    I think we're good all the way down to your comment, "If you get this far and start the ESX install, you'll see this as an available target." Here's where we diverge.
    Here is what we had thought should be possible, from http://jeffsaidso.com/2010/11/boot-from-san-101-with-cisco-ucs/:
    UCS Manager Tasks
    Create a Service Profile Template with x number of vHBAs.
    Create a Boot Policy that includes SAN Boot as the first device and link it to the Template
    Create x number of Service Profiles from the Template
    Use Server Pools, or associate servers to the profiles
    Let all servers attempt to boot and sit at the “Non-System Disk” style message that UCS servers return
    Switch Tasks
    Zone the server WWPN to a zone that includes the storage array controller’s WWPN.
    Zone the second fabric switch as well. Note: for some operating systems (Windows for sure), you need to zone just a single path during OS installation, so consider this step optional.
    Array Tasks
    On the array, create a LUN and allow the server WWPNs to have access to the LUN.
    Present the LUN to the host using a desired LUN number (typically zero, but this step is optional and not available on all array models)
    From 1.5 above, that's where we'd hope to see the WWN show up in the storage and we could register the server's WWN and assign it to a storage group. It doesn't show up until the OS starts.
    But if we're trying to boot the OS from the LUN, we're at a catch-22 now. The initiator won't log in until the OS boots, and the OS won't boot until the initiator logs in, unless we're missing some little step.
    What we haven't done is check the local disk configuration policy, so we'll see if that's correct.
    EDIT: OK, when the vHBA BIOS message comes up, it sticks around for about 15 seconds and POST continues. The storage WWN does not show up, and the Brocade's Name Server screen doesn't show the server's HBA. It looks like it's misconfigured somewhere; it's just quite convoluted finding out where. I'll post back if we find it.
    EDIT2: We tried the installation again; the initiator stays logged out until ESX asks where to install and provides the SAN LUN. EMC then shows the initiator logged in.
    The Palo card does support SAN booting, correct?

  • G5 gets grey screen with logo then goes black, won't boot from any disk

    It won't boot from any disk; I already changed the PRAM battery with no success. I ran the hardware test disk once with a full scan, which reported an error with the video, but now it won't boot at all from any disk. The screen is also choppy (I have a pic). Thinking it's the video/graphics card? Not sure. Need help!

    Hi bribiel, and a warm welcome to the forums!
    From the little info we have, it certainly sounds like the video card, and that can certainly cause a failure to boot.

  • Boot from SAN, yes or no ??

    I see more and more partners and customers who are opposed to booting from a network (be it FC, FCoE, iSCSI, or PXE); they prefer installation on local disk, USB, and/or SD. Any feedback from the field?
    One should also be aware that W2012 (R2) with SMB v3 storage (pushed by MSFT, of course) doesn't support boot over the network; W2012 has to be installed on a local device.
    Walter.

    Walter,
    The problem with not using boot from SAN is data management; it becomes decentralized. (I'm not talking about performance yet.)
    So far, I have never deployed UCS booting from local disks; all have booted from SAN (FC, FCoE, iSCSI).
    Is there a better place to keep data than a storage array?
    Using boot from SAN and UCS Service Profiles, you can deliver a solution that can restore any server in minutes without stress.
    Of course, if we're talking about a very small deployment, boot from SAN doesn't make a big impact.

  • How to "boot from SAN" from a LUN that belongs to another Solaris machine

    Hi all
    I have installed Solaris on a LUN (boot from SAN).
    Then I assigned the same OS LUN to another machine (the hardware is exactly the same), but the new machine detects the OS and then reboots and kicks off.
    I have tried changing vfstab settings.
    Can someone help me?
    Thanks in advance.
    sidd.
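    (A hedged aside: when a boot LUN is moved between machines, the device paths often no longer match. One common recovery, with placeholder device names, is to boot from media, rebuild the device links, and check vfstab:)
    ok boot cdrom -s
    # mount /dev/dsk/c2t0d0s0 /mnt         (the SAN root slice)
    # devfsadm -C -r /mnt                  (rebuild device links under the mounted root)
    # vi /mnt/etc/vfstab                   (check device names match the new paths)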


  • How to boot from rescue disk

    Hi,
    I am running Macbook Pro, snow leopard 10.6.8
    My resuce disk is based on 10.6.4
    I tried to boot from the rescue disk to repair the main disk.
    I failed to boot from the rescue disk, as my rescue disk (a CD) is not being recognized. I tried multiple options, such as holding down the "C", "c", or "option" key while the system restarts. Only the main disk was shown as a bootable option (while holding down the option key during restart), not the rescue disk.
    I verified that my rescue disk is good. In fact, I could boot a Lion MacBook Pro using my rescue disk.
    Any inputs are appreciated.
    Thanks
    gb

    It was 10.6.4 (Snow Leopard). It was the same as the rescue disk version.
    thanks
    gb

  • IMac crashed! Only get partial boot from OSX disk one

    Booting from OSX disk 1 goes as far as a blue screen with no icons or menu. I booted with the D key for the hardware test; the test was OK.
    At a loss what to do next.
    David

    It does seem so, or the internal drive is so messed up it's interfering. Try this:
    Does it boot to Single User Mode (CMD+S keys at bootup)? If so, try:
    /sbin/fsck -fy
    Repeat until it shows no errors fixed.
    (The space between fsck and -fy is important.)
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck...
    http://docs.info.apple.com/article.html?artnum=106214
