LDOM in iSCSI LUN - iSCSI logging in too early

Hi there,
I am having a very strange problem. I'll describe my setup first.
The hardware hosting the LDOM environment does not have enough local disk space for the required number of domains, so to work around this the disks used by the configuration are, in fact, iSCSI LUNs mounted from a NetApp.
This works fine once the domains are up and running, but I am having problems when the host server is rebooted, because the services do not seem to start in a sensible order. The dmesg output at http://www.joshberry.plus.com/dmesg.txt shows the boot sequence for the server.
You can see the iSCSI failing to log in first, with the associated errors where disks are unavailable:
Apr 30 14:55:21 fhw-dataarchive02 iscsi: [ID 286457 kern.notice] NOTICE: iscsi connection(7) unable to
connect to target iqn.1992-08.com.netapp:sn.101204418:vf.c254e0d0-1836-11dd-a621-00a0980aec76 (errno:128)
Lower down in the boot sequence the reason for this becomes obvious:
Apr 30 14:55:26 fhw-dataarchive02 mac: [ID 469746 kern.info] NOTICE: e1000g1 registered
Apr 30 14:55:26 fhw-dataarchive02 e1000g: [ID 766679 kern.info] Intel(R) PRO/1000 Network Connection, Driver Ver. 5.1.8
Apr 30 14:55:26 fhw-dataarchive02 e1000g: [ID 801725 kern.info] NOTICE: pciex8086,105e - e1000g[1] : Adapter copper link is down.
Apr 30 14:55:29 fhw-dataarchive02 e1000g: [ID 801725 kern.info] NOTICE: pciex8086,105e - e1000g[1] : Adapter 1000Mbps full duplex copper link is up.
Here, as far as I can see, the network is still down when the LDOM is initiated and tries to access the iSCSI LUNs, which is never going to work.
So, firstly, have I got the right end of the stick here?
If I am on the right track, my question is: can the order in which the system initialises everything be changed, so that the network is brought up before the guest domains are initialised?
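If the diagnosis is right, one possible fix is to give the iSCSI initiator service an explicit SMF dependency on the physical network, so it is not started until the link is up. This is only a sketch: the service FMRI below is an assumption from a stock Solaris 10 install and differs between releases, so verify yours first.

```shell
# Sketch: make the iSCSI initiator wait for the physical network.
# The FMRI below is an assumption (Solaris 10 style); confirm with:
#   svcs -a | egrep 'iscsi|physical'
ISCSI_SVC=svc:/network/iscsi_initiator:default

svccfg -s "$ISCSI_SVC" <<'EOF'
addpg net-up dependency
setprop net-up/entities = fmri: svc:/network/physical:default
setprop net-up/grouping = astring: require_all
setprop net-up/restart_on = astring: none
setprop net-up/type = astring: service
EOF

svcadm refresh "$ISCSI_SVC"
```

After this, "svcs -d" on the initiator service should list network/physical as a dependency, and the iSCSI login attempts should no longer race the e1000g link.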
Thanks
Josh

Don't use CHAP for discovery.
Enable CHAP afterwards.
Andy
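Andy's two-liner, spelled out as Solaris iscsiadm commands. A sketch only: the discovery address is a placeholder, and your target portal will differ.

```shell
# Sketch of the suggestion above (192.168.0.10 is a placeholder):
# 1. Discover the targets with no CHAP on the initiator node:
iscsiadm modify initiator-node --authentication NONE
iscsiadm add discovery-address 192.168.0.10:3260
iscsiadm modify discovery --sendtargets enable
# 2. Once the targets are visible, enable CHAP for the normal sessions:
iscsiadm modify initiator-node --authentication CHAP
iscsiadm modify initiator-node --CHAP-secret     # prompts for the secret
```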

Similar Messages

  • Boot ISCSI LUN from LDOM

    Hi,
We are exporting bootable LUNs from a 7110 array to LDoms on our blades. The array was rebooted today. Since the reboot we are unable to connect to the hosts booted from the array.
    I did not configure this but have the task of fixing it.....
    We are running LDOM 1.2,REV=2009.06.25.09.48
    So, does the LDOM boot directly from the LUN from the LDOM OBP?
    The syntax in this link causes the below failure;
    http://wikis.sun.com/display/OpenSolarisInfo/Configuring+iSCSI+Boot+for+SPARC+Systems
    Unknown key 'iscsi-target-ip'
    Unknown key 'iscsi-target-name'
    Unknown key 'iscsi-lun'
    The iSCSI LUNs are seen by the host.
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 273>
    /pci@0/pci@0/pci@2/LSILogic,sas@0/sd@0,0
    1. c1t2d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 32 sec 279>
    /pci@0/pci@0/pci@2/LSILogic,sas@0/sd@2,0
    2. c3t600144F04B73EB660000000000000000d0 <SUN-SOLARIS-1 cyl 32766 alt 2 hd 4 sec 736>
    /scsi_vhci/ssd@g600144f04b73eb660000000000000000
    3. c3t600144F04B713A260000000000000000d0 <SUN-SOLARIS-1 cyl 32766 alt 2 hd 4 sec 736>
    /scsi_vhci/ssd@g600144f04b713a260000000000000000
    4. c3t600144F04B9910370000000000000000d0 <SUN-SOLARIS-1 cyl 32766 alt 2 hd 4 sec 1200>
    /scsi_vhci/ssd@g600144f04b9910370000000000000000
    Nothing has changed on the LDOM controller domain.
    I'm really stuck, so any input would be appreciated.

    I'm getting the following error as well. Was there any solution to this, please?
    I'm not using any LDOM, just the h/w platform on a T5140.
    {0} ok boot net Boot device: /pci@500/pci@0/pci@8/network@0 File and args:
    Unknown key 'iscsi-target-ip'
    Unknown key 'iscsi-target-name'
    Manual Configuration: Host IP, boot server and filename must be specified
    ERROR: boot-read fail
    Can't open boot device
    {0} ok
    Edited by: 836793 on Feb 15, 2011 12:36 AM

  • Using iscsi luns as vdiskserverdevices

    Hi
    I am new to LDoms and was wondering if anyone has experience of using iSCSI LUNs (presented by ZFS volumes on an X4500) as vdiskserverdevices. In my case the iSCSI LUNs were presented to the service domain (in this case the control domain) and were correctly visible and accessible from it. However, when presented as vdisks to the guest domains for use as the root disk, the guest domains could not access or see the disks. When a guest domain was booted for a Jumpstart install, one of the first messages was that it couldn't access the vdisks, and this was confirmed during the install process when a boot disk couldn't be found. AFAIK Solaris can't boot off iSCSI LUNs (without iSCSI HBAs), but I figured that hiding the iSCSI LUN behind the vdiskserver would work - no such luck. Any advice on whether this is possible would be most welcome.
    As an unwanted workaround I created a couple of large files (approx 420Gb and 920Gb) and presented these instead as the vdsdevs. This time the smaller 420Gb file couldn't be seen at all by the guest domain, while the larger file showed up as a SUN 920Gb type disk - though while partitioning it, only about 100Gb was shown as the disk size. Any pointers as to what I'm doing wrong here are also most welcome.
    My hardware consists of 2* T5220 running Sol10 08/07 with latest recommended patches and firmware and an X4500 which is being used to provide all (root and data) storage for the guest domains.
    Thanks
    Saurav

    Hi there,
    I am setting up a platform where the guest domains are installed into iSCSI LUNs, and it is working fine.
    I am using LUNs exported from a NetApp rather than ZFS volumes, but that shouldn't matter. They appear like this on my test platform:
    root@solarisvh # iscsiadm list target -S
    Target: iqn.1992-08.com.netapp:sn.********
            Alias: -
            TPGT: 2000
            ISID: 4000002a0000
            Connections: 1
            LUN: 0
                 Vendor:  NETAPP
                 Product: LUN
                 OS Device Name: /dev/rdsk/c4t0d0s2
    And I can add it into a guest domain by running:
    ldm add-vdsdev /dev/rdsk/c4t0d0s2 examplevol1@primary-vds0
    I found that I had to pop in and label the disk (using format) on the first use for it to be picked up, but after that it worked fine and I could boot up and install an OS into the LUN.
    Josh

  • NSS 324 iSCSI lun error

    I have an iSCSI LUN on my NSS 324 device that shows status error, while the target it is mapped to shows connected. The log file shows the following error: [iSCSI] Fail to expand LUN. I cannot enable the LUN and it is not discoverable by my VMware host. How can I resolve this without losing my data?

    Not sure if I understand. Are you mapping the iSCSI LUN to the NSS, or are you mapping the iSCSI LUN to your ESX server?
    Which device is connecting to the LUN?

  • P2v conversion of sap ecc6 host w/ ISCSI Lun

    Hello, I am looking for some feedback here please.
    Background:
    We have a physical SAP host that has iSCSI LUNs (volumes) attached, which are formatted with the NTFS file system. These are then presented to the OS as drive letters L:\ (logs), D:\ (database) and M:\ (backups).
    We want to virtualize the entire machine and create a virtual machine on ESX Virtual Infrastructure 3.  We have an EMC AX4 SAN.
    The question is how to do it?
    There appear to be two ways:
    1.  The first way and perhaps the easier way would be to stop SAP, stop the Database and log off the ISCSI adapters from the OS.  Then virtualize the physical machine (P2V) with only the C:\ drive.  When the machine comes up as a virtual host, it should reattach directly to the existing LUNs through its internal configuration and attach to the existing LUN.  We may have to fiddle around with the drive letters to get them to be as previously L:\ D:\ and M:\ but essentially there is not really anything else to do.  The internal EMC Powerpath should still be working properly.  The ESX Host is already connected and configured to have one physical adapter on the iSCSI VLAN; thus, there should be no additional configuration as long as the VM sees the same network.
    2.  The second way would be to perform the entire P2V migration and create a brand new gigantic LUN on the SAN and virtualize the entire physical host including all the attached drives.  The disadvantages here would be that the entire machine would be on one gigantic LUN and each Drive Letter would be a different file.  (I believe).
    To complicate things we also have a Cluster with three ESX Hosts.  VMotion between Hosts would be no issue under either solution.
    I would like to get feedback on how to proceed and hear about any pros/cons and or any success or failure stories.
    Thank you,
    Pete

    Hi Nadeem
    Save your init<SID>.ora.
    You can try to set processes and sessions parameters in the init<sid>.ora to the following value :
    processes = 800
    sessions =1600
    - Stop your instance
    - Stop your DB
    - Recreate the spfile
    - Start your DB
    - Start your instance
    If this does not solve the problem, revert your changes.
    Otherwise, when you start your instance, do the dw.sap<SID>_DVEBMGSXX processes start and then die, or do they not start at all?
    Best regards.

  • ISCSI LUNs presented as online on new server for new cluster.

    We are building out a new 2012 R2 Hyper-V cluster. The old environment is a mix of Cluster Shared Volume drives and LUNs presented just for a VM itself.
    I had planned on presenting everything in the old environment to the new environment and then just using the cluster migration wizard to move VMs over a few at a time.
    I ran into a problem when I connected my first host to our SAN today. It is in a group that has over 70 LUNs presented to it. Once I connect to the target, the host grinds to a halt. I am not having memory or CPU issues, but disk I/O issues.
    I noticed that the host now sees all 70-plus LUNs and has tried to bring those disks online as well.
    I don't want them online right now. I just need the quorum drive and a few of the newly created LUNs online to finish creating our cluster, so we can start migrations.
    Why is the Host trying to bring these drives online?  As soon as I click on Devices in the iSCSI initiator the program locks up and doesn't respond. 
    Is there a way to set up the target but force the OS not to bring any of those disks online? I removed the favorite target, and there are no items listed in the Volume List under Volumes and Devices. However, if you go to Disk Manager it shows all 70-plus disks, and most of them now show online, except for my newly created LUNs which are not initialized yet.
    Kristopher Turner | Not the brightest bulb but by far not the dimmest bulb.

    Hi KristopherJTurner,
    You can try removing the affected host from the iSCSI target on the SAN first. The iSCSI initiator first logs on to a target that has granted it access; only then can the server start reading and writing to all LUNs that are assigned to that target.
    More information:
    Manage iSCSI Targets
    http://technet.microsoft.com/en-us/library/cc726015.aspx
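    The auto-online behaviour being asked about is governed by the Windows disk SAN policy, which on some server SKUs defaults to onlining shared disks. As a sketch (not from the linked article), it can be set to OfflineShared so newly discovered SAN disks stay offline until brought online deliberately:

```shell
:: Sketch (elevated command prompt, Windows Server 2012 R2):
:: show the current SAN policy
(echo san & echo exit) | diskpart
:: keep newly discovered shared disks offline by default
(echo san policy=OfflineShared & echo exit) | diskpart
```

    The quorum drive and the new LUNs can then be onlined individually in Disk Management.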

  • Add iSCSI LUN to Multiple Hyper-V Cluster Hosts?

    Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?

    Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?
    Here's a good step-by-step guide on how to do everything you want using just PowerShell. Please see:
    Configuring iSCSI storage for a Hyper-V Cluster
    http://www.hypervrockstar.com/qs-buildingahypervcluster_part3/
    This part should be of particular interest to you. See:
    Connect Nodes to iSCSI Target
    Once the target is created and configured, we need to attach the iSCSI initiator in each node to the storage. We will use MPIO to ensure the best performance and availability of storage. When we enable the MS DSM to claim all iSCSI LUNs, we must reboot the node for the setting to take effect. MPIO is utilized by creating a persistent connection to the target for each data NIC on the target server and from all iSCSI initiator NICs on our Hyper-V server. Because our Hyper-V servers are using converged networking, we only have one iSCSI NIC. In our example, resiliency is provided by the LBFO team we created in the last video.
    PowerShell Commands
    Set-Service -Name msiscsi -StartupType Automatic
    Start-Service msiscsi
    # reboot required after claim
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
    $target = Get-IscsiTarget -NodeAddress *HyperVCluster*
    $target | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.10
    You'll find a reference to "Connect-IscsiTarget" PowerShell cmdlet here:
    Connect-IscsiTarget
    https://technet.microsoft.com/en-us/library/hh826098.aspx
    A set of samples on how to control the MSFT iSCSI initiator with PowerShell can be found here:
    Managing iSCSI Initiator with PowerShell
    http://blogs.msdn.com/b/san/archive/2012/07/31/managing-iscsi-initiator-connections-with-windows-powershell-on-windows-server-2012.aspx
    Good luck and happy clustering :)

  • After deleting an iSCSI LUN, vacant space does not become available

    px12-400r, one pool on RAID10 (5,42TB).
    Pool divided into 4 iSCSI LUN (2TB+2TB+1TB+0,42TB).
    After deleting one of them (the 2TB one), this vacant space is shown as free in the diagram on the "System Status" tab (over the web interface), and the "Amount of available space" on the status bar shows 2TB (on the device home page over the web interface),
    but the "Disk Management" tab shows "Allocated/Available" as "5,42TB/0B", and a new volume or iSCSI LUN cannot be added ("In any storage pool does not have enough space to add an iSCSI drive").
    Storage is in AD. The iSCSI LUN was deleted over the web interface under an AD user with storage admin rights.
    Rebooting the storage did not help. What should I do? How can I regain control of this free disk space?

    Hello gusev67
    To be certain that the iSCSI LUN was actually deleted, please try this alternate method to check whether the LUN is still present and, if so, delete it.
    Go to Drive Management and click on the "volumes" section of your storage pool
    Navigate to the iSCSI LUN that should have been deleted/removed
    If it is present there, please click on the iSCSI LUN name to show the iSCSI information overview for the LUN, there click on the "delete" option.
    If that does not seem to resolve the issue  I recommend contacting LenovoEMC support regarding the problem.
    LenovoEMC Contact Information is region specific. Please select the correct link then access the Contact Us at the top right:
    US and Canada: https://lenovo-na-en.custhelp.com/
    Latin America and Mexico: https://lenovo-la-es.custhelp.com/
    EU: https://lenovo-eu-en.custhelp.com/
    India/Asia Pacific: https://lenovo-ap-en.custhelp.com/
    http://support.lenovoemc.com/

  • Can not attach older iSCSI lun to repository

    I am a current Virtual Iron user and have set up OVM to replace my aging system. I have everything set up on some new hardware and up to a point where I want to set up some VMs.
    I have an older VM under Virtual Iron I am no longer using, so I thought this would be a good test subject.
    I assigned both my OVM servers to the volume in my iSCSI san.
    The SAN and volume show correctly within the storage tab.
    I have rescanned for physical disks on each server and found the volume.
    I can not create a VM just using the physical disk... it wants me to use a repository...
    OK, so I tried to create a repository using the existing volume that contains my server... no go... I get this error:
    (10/16/2012 12:38:54:518 PM)
    OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000df4ea0c58186307e] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: OVMS1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500003beb40ffab937285 /dev/mapper/36000eb346b4e2d99000000000000012e 0, Status: OSCPlugin.InvalidValueEx:'The backing device /dev/mapper/36000eb346b4e2d99000000000000012e is not allowed to contain partitions'
    Tue Oct 16 12:38:54 CDT 2012
    Tue Oct 16 12:38:54 CDT 2012] OVMAPI_4010E Attempt to send command: dispatch to server: OVMS1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500003beb40ffab937285 /dev/mapper/36000eb346b4e2d99000000000000012e 0, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.InvalidValueEx:'The backing device /dev/mapper/36000eb346b4e2d99000000000000012e is not allowed to contain partitions'
    Tue Oct 16 12:38:54 CDT 2012
    Tue Oct 16 12:38:54 CDT 2012
    Tue Oct 16 12:38:54 CDT 2012
    Looks like it will not use the volume because something is on it.
    In Virtual Iron, I could use a virtual storage (repository), local disks, or iSCSI luns for my VMs.
    So how am I to use my older server within OVM?

    Ok. Found that someone else was having similar problems....
    Oracle VM Manager 3.1.1: Discovering SAN Servers
    My solution was to scrap the whole thing... and start over with the new beta 3.2.1 (build 258). This has given me the ability to see all my SANs now.
    Edited by: Looney128 on Nov 13, 2012 8:32 AM

  • Install OVM server 3.2.1 on iSCSI LUN

    Hello friends,
    I have a Cisco blade and would like to boot OVM server from an iSCSI LUN. I created a LUN on a SAN and presented the LUN to the blade. When the server boots up, it sees the iSCSI LUN just fine, but when I tried to install OVM server 3.2.1, it did not detect that LUN. I tried to install Oracle Linux 6.3 and it sees the LUN OK. Is there a way to make OVM server see that LUN?
    thanks,
    TD

    I have done too many OEL and OVS installs lately, such that they are blurring together...
    On the OVM 3.2.1 install, watch closely for a place to add support for non-local storage. I remember seeing a small prompt in some of the installs, but it may have been from some of the OEL installs (I have also been doing OEL 4, 5 & 6) and not OVM.
    Sorry, I can't remember right now. If I get a moment to run the install process, I will, and I'll report back if no one else does.

  • Oracle VM 3.1.1: Using iSCSI LUN's as Device-Backed Virtual Machine Disks?

    Will Oracle VM Manager 3.1.1 support the use of iSCSI LUNs as device-backed virtual machine disks? i.e., can iSCSI LUNs be treated like directly-attached storage (DAS) and used as virtual machine disks?
    Eric Pretorious
    Truckee, CA

    user12273962 wrote:
    I don't personally use iSCSI. BUT, I don't see why you can't create physical pass-through attachments to virtual guests using iSCSI LUNs. It is no different than FC or direct-attached storage. Worst case scenario... you can create a repo on the iSCSI LUN and then create virtual disks.
    Oracle VM Getting Started Guide for Release 3.1.1, Chapter 7, "Create a Storage Repository":
    A storage repository is where Oracle VM resources may reside... Resources include virtual machines, templates for virtual machine creation, virtual machine assemblies, ISO files (DVD image files), shared virtual disks, and so on. This would have the effect of using file-backed virtual machine disks, but at least they'd be centrally located on shared storage (and, therefore, the virtual machine could run as an HA guest).
    budachst wrote:
    I have kicked this idea around as well, but I decided to do it slightly differently, by providing the storage repo via multipathed iSCSI and using iSCSI within my guests if needed for better speed or lower disk latency.
    If you want to use iSCSI as the virtual devices for your guests in a clustered server pool, you'd have to grant access to all of your iSCSI targets to any VM server, and I was not feeling comfortable with that. I'd rather grant only the guest direct access to "its" iSCSI target. So I will keep all of my guests' system images on the clustered server pool - and additionally also the virtual disks that don't need high performance - and have the really heavy-duty vdisks as separate iSCSI targets.
    That seems logical, Budy:
    <ul>
    <li>Use a multipathed iSCSI repository for hosting the virtual machine's disk image, and then;
    <li>Utilize iSCSI LUN's for "more performant" parts of the guest's file system (e.g., /var).
    </ul>
    I'm still puzzled about that image, though. i.e., the fourth image from the bottom of Chapter 7.7, "Creating a Virtual Machine" ("To add a physical disk:...") where it's clear that the existing "physical" disks are actually iSCSI LUNs that are attached to the OVM server.
    I suppose that I'll have to configure some iSCSI LUNs and try it out to be certain. Just another tool that I don't have but will need to add to my toolbox.

  • Unable to detect iSCSI LUN size change on Solaris 10

    I resized (grew) an OpenFiler iSCSI LUN, but I cannot get Solaris 10 to pick up the change. Using the type -> autoconfigure in format and relabeling the disk ought to do the trick but it does not work. Does anybody know how to overcome this?

    Hmm, what filesystem do you have on the disk?
    .7/M.
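    For what it's worth, the usual rescan sequence on Solaris 10 looks roughly like this. A sketch only - device names are placeholders, and the sd driver may not re-read the capacity while the LUN is in use:

```shell
# Sketch: coax Solaris 10 into noticing a grown iSCSI LUN
iscsiadm list target -S        # confirm the target and its OS device name
devfsadm -i iscsi              # re-enumerate device nodes for the iSCSI driver
format -e                      # select the disk, then: type -> autoconfigure
                               # and relabel (EFI label needed above 1TB)
```

    If the size still does not change, logging the target out and back in (or a reboot) after the OpenFiler resize may be needed.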

  • iSCSI LUN doesn't reconnect if CHAP authentication is enabled on 7410

    I have a Windows 2008 R2 server as an iSCSI initiator connecting to the Sun storage 7410 NAS. Everything appeared to be working except the iSCSI LUN is unable to reconnect to the initiator as long as I have CHAP enabled (initiator authentication only). I tested the same initiator configuration with another iSCSI target using StarWind and it didn't have the same problem. Any ideas? Thanks

    I have the same issue. Unable to connect from 2008 R2 until I disabled CHAP. We are running 2010Q3p1.

  • ISCSI- Adding iSCSI SRs to XS booting via iSCSI [Cisco UCS]

    Hello all-
    I am currently running XS 6.0.2 with hotfixes XS602E001 & XS602E002 applied, running on Cisco UCS B200-M3 blades. XS seems to be functioning OK, except that I cannot see the subnet assigned to my SAN, and as a result I cannot add any SR for VM storage.
    The subnet and vNIC that I am configuring are also used for the iSCSI boot.
    The vNIC is on LAN 2 and is set to Native
    The SAN is a LeftHand P4300 connected directly to appliance ports (LAN2) on the Fabrics
    Used the following commands during installation to get the installer to see the LUN.
    echo "InitiatorName=my.lun.x.x.x" > /etc/iscsi/initiatorname.iscsi
    /opt/xensource/installer/init --use_ibft
    If i missed anything or more info is needed, please let me know.

    Thanks Padramas,
    I have 2 NICs that show up in XenCenter. NIC A is the vNIC associated with VLAN1 and provides the uplink to the rest of the network. The second NIC in XenCenter matches the MAC of the vNIC used to boot XenServer, so the NIC used for booting the OS is visible in XenCenter.
    I have tried many different things on my own to resolve this issue. I believe I have already tried adding a 3rd vNIC to the service profile for VM iSCSI traffic, but I will make another attempt.
    When configuring the vNIC for VM iSCSI traffic, do I only need to add the 3rd vNIC, or do I need to create both a vNIC and a second iSCSI vNIC with the overlay vNIC pointing to the newly created (3rd) vNIC? My understanding is that the iSCSI vNICs are only used for booting, but I am not 100% sure.
    Thanks again for the help!

  • IChat keeps telling me I've tried logging on too many times...

    Ok... so here's the story:
    I'm visiting my sister, and she has netgear... I tried logging on to iChat, and it went berserk, logging on and off several times in a row. And then a message popped up the next time I logged on saying: "You have attempted to login too often in a short period of time. Wait a few minutes before trying to login again." So... I searched on this website to see what to do about this, and I found that I need to change the port to 443... so I did it... but it won't even let me attempt to log in anymore, and the same error message keeps popping up.
    Is there a way I can reset this so it doesn't keep telling me I've tried to log on too many times? I've tried restarting, and that's just not working. Any help would be greatly appreciated... I'm gonna try turning the computer off and removing and replacing the battery to see if that works...
    thanks

    Long version
    Open the Hard Drive
    Open Users.
    Open your account
    Open Library
    Open Preferences
    Short Version
    Open your Little House icon followed by the Library and then Preferences.
    8:41 PM Friday; January 9, 2009
