Redhat 6.3 on Hyper-V, "SRAT: Hotplug area too small" at boot

Hello Guys,
After installing Red Hat Enterprise Linux 6.3 on Hyper-V, I receive this message when the Linux virtual machine boots:
SRAT: Hotplug area too small
Can anyone help me understand this message and how to resolve it?
Thank you in advance.
Regards
Raymond TODO

Hi Michael,
Thank you for your answer. Just in case you need them, please find below the information you requested in your previous post.
What version of Hyper-V are you running? (2008, 2008 R2, 2012, 2012 R2):
Answer: Windows Server 2008 R2 Hyper-V (Server Core)
How much memory is configured in the VM?:
Answer: 16 GB
And how many vCPUs?
Answer: 4
How did you install RHEL 6.3? From the installation ISO? Or by copying a VHD that already had RHEL 6.3 in it?:
Answer:  From ISO
If from the RHEL 6.3 ISO, then the Linux Integration Services (LIS) are not already installed. If from a cloned VHD, then the LIS might already be installed. So what I'm really wanting to know is if the LIS is installed.
Answer: Yes, I installed the LIS v3.4 after the RHEL 6.3 installation
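For anyone checking the same thing, a quick way to confirm the LIS modules are actually loaded in the guest (a sketch; module names as shipped with LIS 3.4, and the version field depends on the build):
lsmod | grep hv_
modinfo hv_vmbus | grep -i version
The first command should list drivers such as hv_vmbus, hv_netvsc and hv_storvsc; the second reports the installed driver version.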
Thank you again for your assistance.
Regards
Raymond TODO

Similar Messages

  • VMs on Hyper-V 2012 R2 are shutting down automatically

    Hello Guys! 
    I'm having a huge problem in my Hyper-V 2012 R2 environment. Yesterday, 9 of our 40 VMs were shut down automatically, causing downtime for my customers during business hours.
    I checked the Event Viewer (Application / System / Microsoft-Windows-Hyper-V) on both the host and the VMs, but I couldn't locate any log entry related to this unexpected shutdown.
    It occurred twice yesterday, first in the morning and again in the afternoon. I'm afraid this issue will happen again. I've tried to locate some logs to identify the problem, but no luck so far.
    My question: is there anywhere else I can look in this troubleshooting to find the root cause?
    Note: All the VMs are licensed / activated.
    Note 2: The VMs are Win 2008 R2 Std.
    Leandro Soares - MCP/MCSA/MCTS

    Hello Brian!
    The customers don't have access via RDP / ICA. The machines' role is web server.
    In the VMs I can find only the following log:
    Log Name:      System
    Source:        Microsoft-Windows-Kernel-Power
    Date:          24/06/2014 11:09:04
    Event ID:      41
    Task Category: (63)
    Level:         Critical
    Keywords:      (2)
    User:          SYSTEM
    Computer:      xxx
    Description:
    The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
    But on the Hyper-V host I cannot find any error related to this failure, as described in this thread.
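    For reference, one way to dump the most recent host-side Hyper-V worker events from an elevated command prompt (the log name is as it appears on recent Hyper-V hosts; adjust if yours differs):
    wevtutil qe Microsoft-Windows-Hyper-V-Worker-Admin /c:20 /rd:true /f:text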
    Leandro Soares - MCP/MCSA/MCTS

  • Tutorial - How to triple boot OSX, Linux and Windows 8.1 with a shared Data Partition without any third-party Win / OSX software

    This is not a question, but rather a personal guide that has proven to work successfully.
    I would like to thank numerous sources, including Christopher Murphy's suggestions at:
    Re: Repairing Boot Camp after creating new partition
    Before proceeding, there are certain concepts you need to know:
    Why does Boot Camp NOT allow further partitioning of the drive after Windows has been installed?
    Answer: Because of the way Apple configures the Mac to be recognized as a non-UEFI-capable system by Windows.
    Quote from Christopher Murphy based on the above line:
    However, Windows on Macs right now use CSM-BIOS mode in Mac firmware that presents BIOS to Windows rather than EFI. Windows thinks it's on a BIOS computer, and therefore mandates the use of MBR for boot disks, rather than GPT. So that's why we have this hybrid MBR+GPT approach on Mac with Windows on it. You inherit the limitations of MBR, which is four primary partitions.
    So what does this mean?
    It means that OSX + EFI + Recovery HD + Boot Camp partition = 4 primary partitions, so any further attempt to modify the disk will cause boot issues in one system or the other.
    For more info on GPT (GUID Partition Table) disks vs. Master Boot Record (MBR) disks, you may visit: http://msdn.microsoft.com/en-us/library/windows/hardware/dn640535%28v=vs.85%29.aspx
    So, how to overcome it?
    The general guideline is to install ALL GPT-ready OSes first, then create a Data partition, before installing Windows (which, again, does NOT support GPT here due to the firmware configuration by Apple, which end users are not able to modify).
    Interestingly, since the Late 2013 Mac Pro supports only Windows 8 and above, it is not known whether this CSM-BIOS behavior applies to it.
    Do take note that GPT disks in Windows can only be booted when the system meets these 2 requirements:
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn640535%28v=vs.85%29.aspx#gpt_faq_win7_boot
    1) A Windows x64 version (which is a must for newer Macs; if you cannot go to Boot Camp 5, then you need the Windows 7 x86, or 32-bit, version)
    2) A UEFI system. However, Windows sees all Macs as BIOS, or rather non-UEFI, systems (the Late 2013 Mac Pro is possibly an exception; to be determined).
    In short, booting Windows from a GPT disk is not possible on a Mac.
    Summary,
    It is tested that a combination of the following will not work:
    - OSX + Windows + Linux
    - Windows + OSX + Linux
    - Windows + Linux + OSX
    Usually this renders the system un-bootable, or OSX refuses to install because the installer does not recognize such partitions and/or Disk Utility refuses to format the free space. An example screenshot is provided below:
    The error message is shown as
    Title: "Failed to erase volume" Message: "Failed to wipe volume, as an error occurred: MediaKit has reported that the device does not have enough free space to execute the requested operations."
    The second thing is about the preparations we need.
    1) 1X Windows 7 or 8 DVD or USB thumbdrive
    1A) If you use a DVD to install, you will need another thumbdrive to load the Boot Camp drivers for Windows, and newer Macs may also require an external DVD drive
    2) 1X Linux DVD of your choice. Personally I choose Fedora 20.
    So ready? Let's go.
    1. Using Disk Utility, shrink the OSX partition to the size you need. For me, I gave OSX 150GB. Do NOT create any new partition.
    Disk Utility should then show only the OSX partition left at the desired size, with the remaining space left as unused disk space for the moment.
    Note: Click on the top-most item, which should start with the size of your HDD / SSD. Then click "Partition" and specify the desired OSX size. Hit "Apply" after that.
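    If you prefer the Terminal, the same shrink can be done with diskutil (the disk identifier and size below are examples; check yours with "diskutil list" first):
    diskutil list
    sudo diskutil resizeVolume disk0s2 150G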
    2: Download Boot Camp drivers only via Boot Camp Assistant. The USB thumbdrive shall be used later after Linux's installation.
    Boot Camp Assistant should look like this:
    I have only selected "Download latest Windows Support Files from Apple"
    3. Insert Linux DVD, reboot Mac into EFI mode (The left most first "EFI mode").
    Note 1: Before rebooting, please plug in an Ethernet adapter, because the Wi-Fi driver is not installed at this stage.
    Note 2: Thunderbolt adapters must be plugged in before the reboot, as hot-swapping is not supported under Linux. More on this in the tips at the end of this article.
    Note 3: Press and hold "Option" after the screen turns black. Release the Option key once the boot drive picker appears (image below):

    For the part that unfortunately did not make it into the edited images in time (the intermediate steps are missing here), we continue:
    9. Install the Windows Support software from your CD/USB drive to gain full functionality of your computer. Reboot and go to Windows again.
    Note 1: You may choose to eject the disc at this point. Apple SuperDrive users will need to wait until the drivers (i.e. the Boot Camp support files) are installed and the machine rebooted before ejecting is reasonably possible (I failed to figure out how to right-click without the drivers).
    Note 2: Unlike Windows 7 (see KBase article TS4599, "Keyboard/trackpad inoperative, black screen, or alert messages when installing Windows 7"), the USB stick can be plugged in after the Windows installation is done. The Windows 7 issue exists because Windows 7 (and probably the Windows 7 with SP1 DVD) did not ship with built-in USB 3 drivers, since USB 3 had not yet arrived when it was released back in 2009.
    Note 3: Due to the lack of a TPM, BitLocker is not supported without the use of a thumbdrive.
    10. Use Disk Management to determine the drive letter assigned to the DATA partition (DO NOT DELETE and RECREATE the partition, or you can say goodbye to booting Linux and OSX). Disk Management will not let you format it as exFAT / FAT32 graphically.
    Note: You may remove or modify some of the drive letters in Disk Management. However, do NOT remove/modify the drive letter of the 200 MB HFS partition. Doing so will break booting of Linux, and neither Windows nor OSX will be able to do anything about it EXCEPT reinstalling Linux.
    11. Open Command Prompt in Administrator Mode (Important!!), and key in the following command:
    format F: /FS:exFAT
    Give the volume a label after it has formatted successfully, before hitting "Enter" again.
    Note: My Data partition was assigned the letter F. Please adjust "F:" if your Data partition is assigned a different letter.
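    Alternatively, the label can be supplied in the same command via the /V switch (the label "DATA" is just an example):
    format F: /FS:exFAT /V:DATA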
    12. After that, set up your Data partition structure as you like.
    Tip: Minimally create the important folders such as:
    - Music
    - Documents
    - Movie (Videos)
    - Downloads
    - Pictures
    All these folders are commonly used by the 3 OSes. I do NOT recommend relocating /home (OSX and/or Linux) or the user home directory (Windows), either partially or as a whole, because of compatibility issues.
    On a side note, the iTunes Media Library used in OSX and Windows cannot be used interchangeably, due to the hard-coded paths it uses.
    13. Useful troubleshooting in Fedora / Linux:
    With references to these:
    http://chaidarun.com/fedora-mbp
    http://anderson.the-silvas.com/2014/02/14/fedora-20-on-a-macbook-pro-13-late-2013-retina-display/
    http://unencumberedbyfacts.com/2013/08/16/linux-on-a-macbook-pro-101/
    I would like to highlight a few important points:
    1) Wi-Fi driver:
    http://rpmfusion.org/Configuration
    Note 1: The sound driver is installed out of the box. However, the Wi-Fi driver is not.
    Note 2: Install both the free and non-free repositories. By the way, some other software, like VLC, can only be found after the free repository is installed.
    Search for "akmod-wl" in Gnome-Package-Installer in order to install Wi-Fi drivers
    Note 3: For those who do not have an Ethernet adapter and whose Mac does NOT have a built-in Ethernet port, it is recommended to get one, because Fedora 20 does not have good support for iPhone USB tethering. Unsure about Android / BlackBerry / Windows Phone users.
    2) Grub Menu:
    It will show several options to boot into OSX, even claiming the capability to boot it in x86 or x64 mode. However, none of them actually boot except Linux and the rescue entry.
    Hence, it is recommended to remove the items by hand in this file:
    /boot/efi/EFI/fedora/grub.cfg
    Command to be used:
    "sudo gedit /boot/efi/EFI/fedora/grub.cfg"
    Parts to be removed:
    - For any extra kernels, delete the target entry by locating its "menuentry" line under the "/etc/grub.d/10_linux" section and removing everything up to one line above the next "menuentry".
    It is recommended to keep one main kernel and one recovery entry at minimum.
    - For the other OSes, delete all the entries (since none of them work) under the "/etc/grub.d/30_os-prober" section, without removing the lines that start with ###. A sketch of such an entry follows.
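    For orientation, a menuentry block to be removed looks roughly like this (the title is only an example; yours will differ):
    menuentry 'Mac OS X (64-bit) (on /dev/sda2)' --class osx {
            ...
    }
    Delete from the "menuentry" line down to and including its closing brace.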
    Auto Mount exFAT partition:
    - After installing the extra packages for exFAT support (exFAT is not supported by a default Fedora 20 installation), you may wish to edit "/etc/fstab" to mount the exFAT partition at boot time.
    Command to be used:
    "sudo gedit /etc/fstab"
    Add the following line in gedit:
    UUID=702D-912D /run/media/Samuel/DATA                   exfat    defaults        1 2
    Note 1: For the DATA, OSX and Boot Camp partitions, Fedora by default mounts under "/run/media/<username, case-sensitive>/<partition label>".
    Note 2: The UUID is a unique ID. You can find out the UUID as follows:
    Step 1: First determine the DATA partition number:
    "sudo gdisk /dev/sda"
    Step 2: Determine the UUID of this partition number:
    "sudo blkid /dev/sda8"
    Reference 1: http://manpages.courier-mta.org/htmlman5/fstab.5.html
    Reference 2: http://liquidat.wordpress.com/2007/10/15/short-tip-get-uuid-of-hard-disks/
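    Before rebooting, it is worth validating the new fstab entry, since a bad line there can stall the boot:
    "sudo mount -a"
    If the command returns without errors and the partition appears under the mount point, the entry is good.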
    3) Overheating CPU
    Solution is to issue the following command in Linux terminal: su -c "echo -n 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"
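    To make this setting survive a reboot, one option (a sketch, not the only way) is to create /etc/rc.d/rc.local containing the two lines below; Fedora 20's systemd still runs this file at boot when it exists and is executable:
    #!/bin/sh
    echo -n 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
    Then make it executable: su -c "chmod +x /etc/rc.d/rc.local"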
    4) System resumes immediately after suspend
    Solution is to issue the following command in Linux terminal: su -c "echo XHC1 > /proc/acpi/wakeup"
    5) What does not work well out of the box:
    - Both GNOME's and KDE's fonts are too small to be readable out of the box. Additional configuration is needed. (Some of the info can be found under "More Tips" later.)
    - Thunderbolt hotplugging is NOT supported under Windows and Linux so far. The FaceTime HD camera does not work either.
    - The red light in the headphone jack is always on. I have had no luck switching the light off without losing the sound.
    Note 1: It is determined that the module "snd_hda_intel" is used by both cards (HDMI and normal output)
    Note 2: It is also known that blacklisting the module can switch off the red light, at the price of muting the system.
    Note: According to this article, http://support.apple.com/kb/TS1574
    a Mac (except the Mac Pro) needs servicing when there is a red light while the system fails to detect the internal speakers. However, that article does NOT apply to this issue.
    5A) More Tips:
    Install gnome-tweak-tool for more customization
    Search for: "gnome-package" to install:
    Install Gnome Package Installer for advanced package repository
    Install Gnome Package Updater for advanced updates to be installed (Fedora's App Store equivalent might not show the relevant updates)
    14. Verify if disk is still GPT:
    Use Gdisk to determine if the disk is pure GPT:
    http://ubuntuforums.org/showthread.php?t=1742682
    Command: sudo gdisk -l /dev/sda (The entire hard drive)
    You should see the MBR reported as "protective" rather than anything else.
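    The relevant part of the gdisk output should look roughly like this (a sketch of a healthy GPT disk):
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present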
    15. Congrats, the system is ready for triple boot. (I forgot to eject my Windows DVD when the photo was taken)
    Note 1: You cannot set the default startup disk in Linux, due to the lack of a Boot Camp Control Panel in Linux.
    Changing the startup disk in Windows is not recommended either, because the options do not display correctly there.
    For me, I click "Cancel" whenever I am on this tab (feel free to make other Boot Camp adjustments in the other tabs).
    OSX is the only OS I know of that shows the startup disk options correctly.
    Note 2: For some reason, OSX likes to auto-mount the EFI partition every time it boots up. This is not known to cause any issue with ejecting other disks or mounting disks via Disk Utility.
    Note 3: It is not determined if any Firmware or System upgrades will cause issues. It is only known that all 3 OS's regular updates should not be an issue.
    "System updates" here excludes updates of the OSX 10.9.3-to-10.9.4 kind (I installed on an OSX 10.9.4 Mac), and Windows 8.1 to Windows 8.1 Update 1 (my Windows DVD already comes with Update 1).
    "System upgrades" refers to OSX Mavericks to Yosemite, Fedora 20 to Fedora 21, or Windows 8.1 Update 1 to Windows 8.2 / Windows 9, for that matter.
    Note 4: Resetting the SMC and/or PRAM will NOT affect your ability to boot any of the OSes (OSX, Recovery HD, Fedora & Windows 8).
    Yup, that is it!

  • Print dialog font is too small: Acrobat Reader on Linux

    I'm running Acrobat Reader version 7.0.5 on CentOS 4.5 (RedHat Enterprise
    4.5) and find that the fonts used on the Print dialog are too small.
    They are much smaller than the fonts used on labels and other controls
    in the rest of the application, and are very difficult to read. By
    comparison, they seem to be about 1/2 the size of the font used by the
    Reader menu bar (File, Edit, View, ...).
    I've gone through the Edit -> Preferences, but don't see anything which
    will change the size of the fonts used on this dialog.
    Is there a configuration file or some other way that I can increase the
    size of these fonts?
    Thanks,

    The issue occurs on systems running debian-etch, debian-etch'n'half,
    debian-lenny, and gentoo, with xdm as display manager and fvwm as
    window manager. The X server runs with the '-dpi 75' option regardless of
    the screen's physical dimensions. The screen's resolution is 1600x1200.
    To me it looks like acroread is creating its dialog boxes independently
    of the GTK settings in $HOME/.gtkrc-2.0 (apart from the OK/Cancel buttons of
    the printing dialog), and while doing so it makes wrong assumptions about
    the font size, using the screen's dpi setting to calculate the screen's
    physical dimensions. If the X server is running with '-dpi 100', the
    dialog fonts are considerably bigger, but still differ from the settings
    in $HOME/.gtkrc-2.0. For example, with the setting
            style "default"
                    font_name = "Verdana 24"
            class "*" style "default"
    you get neither Verdana nor 24pt fonts anywhere in the acroread
    GUI except for the menu bar and the OK/Cancel buttons of the printing
    dialog, while any other GTK program, like Firefox, Thunderbird, and others,
    appears completely in Verdana 24pt -- which is of course far too big, but
    illustrates the issue very clearly.
    Screenshots (with GTK fonts set to Verdana 24pt and Tahoma 11pt) are
    inserted below. (As you can see, the printing dialog and the advanced printing
    dialog even use different fonts.)
    thank you very much for the very quick reply.

  • Checkpoint not complete; cannot allocate new log; PLZ HELP ME

    Hi all,
    We are working on Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 on a Redhat Linux Server platform.
    We are facing the following problem in the alert.log file :
    Wed Aug 22 02:58:57 2007
    Thread 1 cannot allocate new log, sequence 43542
    Checkpoint not complete
    Current log# 1 seq# 43541 mem# 0: /u01/oradata/DB01/redo01.log
    Thread 1 advanced to log sequence 43542
    Current log# 4 seq# 43542 mem# 0: /u01/oradata/DB01/redo04.log
    Current log# 4 seq# 43542 mem# 1: /u01/oraindx/DB01/redo04.log
    Wed Aug 22 03:00:00 2007
    Thread 1 advanced to log sequence 43543
    Current log# 5 seq# 43543 mem# 0: /u01/oradata/DB01/redo05.log
    Current log# 5 seq# 43543 mem# 1: /u01/oraindx/DB01/redo05.log
    Wed Aug 22 03:01:00 2007
    Thread 1 cannot allocate new log, sequence 43544
    Checkpoint not complete
    Current log# 5 seq# 43543 mem# 0: /u01/oradata/DB01/redo05.log
    Current log# 5 seq# 43543 mem# 1: /u01/oraindx/DB01/redo05.log
    Thread 1 advanced to log sequence 43544
    Current log# 6 seq# 43544 mem# 0: /u01/oradata/DB01/redo06.log
    Current log# 6 seq# 43544 mem# 1: /u01/oraindx/DB01/redo06.log
    Wed Aug 22 03:01:26 2007
    Thread 1 advanced to log sequence 43545
    Current log# 2 seq# 43545 mem# 0: /u01/oradata/DB01/redo02.log
    Thread 1 advanced to log sequence 43546
    Current log# 3 seq# 43546 mem# 0: /u01/oradata/DB01/redo03.log
    Thread 1 advanced to log sequence 43547
    Current log# 1 seq# 43547 mem# 0: /u01/oradata/DB01/redo01.log
    Wed Aug 22 03:01:38 2007
    Thread 1 cannot allocate new log, sequence 43548
    Checkpoint not complete
    Current log# 1 seq# 43547 mem# 0: /u01/oradata/DB01/redo01.log
    I know that this message indicates that Oracle wants to reuse a redo log file, but
    the current checkpoint position is still in that log. In this case, Oracle must
    wait until the checkpoint position passes that log. Because the
    incremental checkpoint target never lags the current log tail by more than 90%
    of the smallest log file size, this situation may be encountered :
    1-if DBWR writes too slowly,
    or
    2-if a log switch happens before the log is completely full,
    or
    3-if log file sizes are too small.
    I read some posts in this forum regarding this error, but sincerely I don't know how to find its exact cause. Maybe I should add new redo log files, or one new redo group? I don't know how to resolve it. :(
    Currently I have 6 redo log files: 3 of them are 5 MB in size and the other 3 are 10 MB.
    Thank you,
    Regards,
    Message was edited by:
    HAGGAR

    1. Make DBWR write more aggressively - as you are on 10g, the parameter I would use is FAST_START_MTTR_TARGET=(how long you want recovery to take, in seconds). The lower that number, the more aggressively DBWR has to write to keep up with the target. The advantage is that by the time LGWR comes to overwrite the redo log file, the chances are that DBWR has already written the "high scn#" (and beyond) from the checkpoint queue.
    The disadvantage is that you will get more I/O to your disks.
    2. Create more redo log file groups - this will give DBWR more time to write before LGWR tries to overwrite a particular redo log file; again, the chances are that the extra (6th or 7th) group will give CKPT enough time to checkpoint completely beyond the "highest scn#" before that group is required again.
    Which one to go for? Well, that's up to you and your setup. If you have an I/O-bound system then 2 would be better for you, as 1 will just increase your I/O problem; however, if physical space is an issue and I/O isn't, then 1 might be better (with the added advantage that instance recovery will also be faster). Sketches of both follow.
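    As a sketch of both options from a SYSDBA session (the group number, member paths and sizes below are examples only; match them to your own layout, and SCOPE=BOTH assumes an spfile):
    sqlplus / as sysdba
    SQL> ALTER DATABASE ADD LOGFILE GROUP 7 ('/u01/oradata/DB01/redo07.log', '/u01/oraindx/DB01/redo07.log') SIZE 100M;
    SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET=300 SCOPE=BOTH;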
    Sorry for the training session, but as with everything to do with Oracle, there is rarely one solution that applies to everyone...
    Gopu

  • Problem with file handles : jdeveloper makes them getting smaller

    Hello,
    Today I downloaded JDeveloper 11.1.2.1 for Linux and installed it on my CentOS 6.2 (RedHat-like) system.
    I didn't read the prerequisites.
    The installation worked fine, but when I tried to modify the JDialog Swing file of a view project, I got an error message telling me there are file save errors and complaining that the file handles are too small.
    I think the two errors are linked.
    I read some stuff on the net and found a tutorial explaining how to get fs.file-max=70000 in sysctl.conf and 30000 when you type ulimit -n, which is also what the prerequisites explain.
    So I ran JDeveloper again and hit the same error.
    I opened a terminal and saw that ulimit -n gives ... 1024.
    It seems that the parameter is reset when I launch JDeveloper.
    Can you see an explanation?
    thanks
    olivier

    The resolution is here: http://publib.boulder.ibm.com/infocenter/dmndhelp/v6rxmx/index.jsp?topic=/com.ibm.wbit.help.install.doc/topics/tincreasehandles.html
    cool!
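    For anyone hitting the same thing on RHEL/CentOS: the per-user limit usually comes from /etc/security/limits.conf, and fs.file-max alone does not raise "ulimit -n". Adding lines like these (the values are examples) and logging in again is the usual fix:
    * soft nofile 30000
    * hard nofile 65535
    Then verify with "ulimit -n" in a fresh shell before launching JDeveloper.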

  • Squid proxy server (redhat) latency issues?

    I need to deploy a proxy server on a network servicing about 200 machines.
    If I virtualize Red Hat Linux to run Squid, will the latency be too high? Bear in mind that the underlying OS and hardware (Server 2008 on RAID-5 and 2 GB of RAM) must also act as a WDS server. If it weren't for the necessity of WDS, I would make
    the whole machine a RHL box.
    My boss thinks that there will be too much latency... I, on the other hand, don't have the experience to know, or the time to build a prototype for testing it.

    I would doubt that the latency will be an issue. Almost every kind of workload is being virtualized these days. Hypervisors in general, and Hyper-V in particular, offer synthetic devices such as NICs and storage controllers, along with custom device
    drivers for those synthetic devices, specifically so that performance and latency can get pretty close to what you would have with physical devices. Virtual appliances for various network functions, such as proxies, firewalls, and load balancers, have been
    created specifically to run in virtual environments, and they achieve the necessary performance and latency. I'm sure there might be specific situations that are particularly sensitive to latency and that could be a problem running virtual, but
    all the mainstream scenarios are pretty much OK.
    Michael Kelley, Lead Program Manager, Open Source Technology Center

  • Linux on Hyper-V - keyboard/mouse frozen intermittently

    Hi all,
    I am running a custom Linux VM on Hyper-V Server 2012. Once my VM boots, it stops responding to keyboard input if left idle for some time. I have the Integration Services installed.
    If left idle for some time, all the VMs in Hyper-V Manager stop responding to keyboard/mouse, except for the
    'Ctrl+PgUp' keys to scroll up. I have tried typing, refreshing, and reconnecting the VM, but to no avail.
    The only workaround is to reset the VM and use it as soon as it boots up.
    This issue happens intermittently.
    I searched the forum, but found nothing useful.
    Thanks in advance,
    Saleem

    Hi Saleem,
    First thing I would suggest is that you try the new LIS 3.5 drivers from here: http://www.microsoft.com/en-us/download/details.aspx?id=41554
    Second, since you are not using a standard distribution such as RedHat, Ubuntu or SUSE you should try to compile the drivers for your distribution before using them. You can download the entire LIS source code from here:
    https://github.com/LIS/LIS3.5/tree/release
    Let me know if the above steps help. You can ignore my comment about the keyboard driver for now as it is not included in the LIS 3.5 release.
    Thanks,
    Abhishek

  • Installation Help for JRE v1.4 on Redhat Linux 7.3

    Hello everyone,
    I would like to install the Java Runtime Environment, and was hoping someone would please be able to give me a step by step guide.
    Thanks

    Hi,
    1. Download the j2sdk-1_4_0_01-linux-i586-rpm.bin file.
    2. If the file is on the Redhat 7.3 system, run "sh j2sdk-1_4_0_01-linux-i586-rpm.bin" in bash to get j2sdk-1_4_0_01-fcs-linux-i386.rpm. (I am assuming this works, since I did it the other way.)
    3. If the file is on a Windows system, you can use WinZip to extract it to j2sdk-1_4_0_01-fcs-linux-i386.rpm.
    4. Place the file on the Redhat 7.3 system. Ensure that Java is not already there with "rpm -qa | grep j2". If it is, uninstall it with "rpm -e <package name>".
    5. Install Java with "rpm -i j2sdk-1_4_0_01-fcs-linux-i386.rpm".
    6. Check that the Java package is now on the system with "rpm -qa | grep j2". You should get "j2sdk-1.4.0_01-fcs".
    7. The Java binaries are located in "/usr/java/j2sdk1.4.0_01/bin".
    8. Set a temporary PATH for the shell: "export PATH=$PATH:/usr/java/j2sdk1.4.0_01/bin" (a tip on making this permanent follows the list).
    9. Run "java" and you should get output showing that the command did try to execute.
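    To make the PATH change permanent for your user (one common approach), append the same line to ~/.bash_profile:
    echo 'export PATH=$PATH:/usr/java/j2sdk1.4.0_01/bin' >> ~/.bash_profile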
    I could not see anything on the Sun site that described these simple steps or something like them. It seems like an oversight by Sun that they are not there, or that they are too hard to find for users like myself.
    Thanks
    Tom

  • Hyper-V Host (Win 8) to XP Virtual Machine File Transfer

    Hello guys, first post here, and I've got a question about transferring files from the host computer to an XP Hyper-V virtual machine. I've researched and tried many methods, but some are too complicated, or, in the case of the floppy, it doesn't have enough
    space. I need to transfer a 30 MB 16-bit application from my host, Windows 8, to a Windows XP virtual machine where I think I can run it. I seem to be "missing some network hardware", so I can't exactly access the internet, but is there still a way
    to transfer an app within a host-to-VM network? I think I set up an external network, but I don't really know how to use it.
    Thanks, and happy new year

    Hi Alec22,
    >>but is there still a way to transfer an app within a host-to-vm network? I think I set up an external network, but don't really know how to use it. 
    I would suggest you use a network share.
    Please create an internal virtual switch and configure an IP for it in the host's "Network Connections", then connect the VM to that internal virtual switch
    (the VM's IP should be in the same subnet as the internal virtual switch). A sketch of the share itself follows.
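    As an example of the share itself (the share name, path and IP below are placeholders; use the IP you gave the internal switch):
    On the host: net share Transfer=C:\Transfer
    In the XP VM: net use Z: \\192.168.137.1\Transfer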
    You can also refer to the following article:
    http://technet.microsoft.com/en-us/library/ee256061(v=WS.10).aspx
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Hyper-V 2012 R2 slow throughput on network / live migrations

    Hi, maybe someone can point me in the right direction. I have 10 servers, 5 Dell R210s and 5 Dell R320s; I have basically converted these servers to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
    Each server is configured with two 1 Gb NICs teamed via a virtual switch. Now when I copy files between server 1 and 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time, the file copy load splits the 100 MB/s throughput between
    the 2 copy processes. I was under the impression that if I copied 2 files to 2 totally different servers, the load would basically be split across the 2 NICs, effectively giving me 2 Gb/s throughput, but this does not seem to be the case. I have played around with TCP/IP
    large send offloads and jumbo packets, and disabled VMQ on the cards (they are Broadcoms :-( ), but none of these settings really seem to make a difference.
    The other issue is that if I live migrate a 12 GB VM running only 2 GB RAM, effectively just an OS, it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP), no real game changers, BUT if I shut
    down the VM and migrate it, it takes just under 3 and a half minutes to move across.
    I am really stumped here. I am busy in a test phase of Hyper-V but can't find any definitive documents relating to this stuff.

    Hi Mark,
    The servers (Hyper-V 2012 R2) are all basically configured with SCVMM 2012 R2, where they all have teamed 1 Gb pNICs in a virtual switch, and then there are vNICs for the VM cloud, live migration etc. The physical network is 2 Netgear GS724T switches, which
    are interlinked; each server's 1st NIC is plugged into switch 1 and the second NIC is plugged into switch 2 (see the image below). The Hyper-V port is set to independent Hyper-V load balancing.
    The R320 servers are running RAID 5 SAS drives; the R210s have 1 TB drives, mirrored. The servers are all using DAS storage; we have not moved to looking at iSCSI, and a SAN is out of the question at the moment.
    I am currently testing between 2x R320s and 2x R210s. I am not copying data to the VMs yet; I am basically testing the transfer between the actual hosts by copying a 4 GB file manually. After testing the live migrations I decided to test the
    transfer rates between the servers first. I have been playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers was running at up to 228 MB/s, i.e. using both NICs, when copying
    the file between the servers, and then a few hours later it was only copying at 50-60 MB/s, but it's now back at 113 MB/s, seemingly using only one NIC.
    I was under the impression that if you copy a file between 2 servers, the NICs could use the combined 2 Gb bandwidth, but after reading many posts they say only one NIC is used; so how did the copies get up to 2 Gb yesterday? Then again, if you copy files between 3 servers,
    each copy would use one NIC, basically giving you 2 Gb/s, but this is again not being seen.
    Regards Keith

  • Online Backup of supported Linux VM on Hyper-V 2012 R2 / SC DPM 2012 R2

    Hi,
    I'm trying to set up a lab environment:
    Win 2012 R2 with Hyper-V
    running 2 Linux Machines:
    Linux2 - CentOS 6.4 with manually installed Linux Integration services 3.4
    Linux3 - CentOS 6.4 without LIS (should be already included in CentOS)
    Another machine running Win 2012 R2 Server with SC DPM 2012 R2
    but both VMs show as "Offline" when trying to back them up via DPM. Tried local Windows Server Backup with the same result.
    I am able to backup the VMs "offline" (pausing the VM, taking snapshot, resume VM) but according to MS, SC DPM 2012 R2 should be able to do Online backups for supported Linux VMs (http://blogs.technet.com/b/virtualization/archive/2013/07/24/enabling-linux-support-on-windows-server-2012-r2-hyper-v.aspx)
    The only things in the EventLog are these:
    A storage device in 'Linux3' loaded but has a different version from the server.  Server version 6.0  Client version 4.2 (Virtual machine ID 4F5CDDD8-B855-41CF-83B2-772C1B99090D). The device will work, but this is an unsupported configuration.
    This means that technical support will not be provided until this problem is resolved. To fix this problem, upgrade the integration services. To upgrade, connect to the virtual machine and select Insert Integration Services Setup Disk from the Action menu.
    Any Ideas ?
    Thanks

    Hi,
    That list would need to come from the Windows Hyper-V group; they are responsible for adding the feature to the integration components for the various Linux OSes. DPM just backs up whatever the Hyper-V writer presents to us: if the guest supports
    online backup, we back it up online; if not, Hyper-V saves the guest before the VSS snapshot is taken and DPM takes the backup from the saved state.
    NEW NOTE ADDED 1-29-14: The Windows group just released "Linux Integration Services Version 3.5 for Hyper-V". The document mentions that some versions of Red Hat and CentOS are now supported for online backup.
    Live virtual machine backup support
    ======================
    RHEL/CentOS 6.0-6.3
    RHEL/CentOS 5.7-5.8
    RHEL/CentOS 5.5-5.6
    ADDTL NOTES: If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored.
    Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to a virtual machine (“pass-through disk”).
    Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
    Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

  • Hyper-V on Win 8.1 Pro - can you use a wireless NIC or does it need to be Ethernet?

    On my desktop with Win 8.1 Pro, I installed Win 8.1 Pro in Hyper-V as we are going over this chapter in school. I got it loaded just fine, but when I start it up I have no internet on the virtual 8.1 Pro OS. My desktop uses a wireless NIC and I'm wondering
    if this is the problem. I was able to get it to work once, but when I shut it down I no longer had an internet connection on the host OS. So I went in and removed the virtual switch, and that did it - now I have internet on my desktop, but when I turned the
    virtual machine back on and created the virtual switch again - no internet on the virtual machine. I have a feeling I'm configuring something wrong - I will probably find out Monday at school, but decided to post here because it is bugging me!
    Any help is greatly appreciated
    S Vender

    FYI  - you should not need to perform the wireless sharing that is outlined in the article in the link provided.
    You should only need to select the wireless adapter as the physical adapter associated with your External Virtual Network.
    The internet sharing that folks do is only necessary in a very small number of cases due to router issues or wireless NIC driver issues.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • Windows 8.1 Hyper-V virtualization rights

    So in Windows 7 Pro we had XP Mode. It was a full copy of XP which was licensed under the Windows 7 Pro license. Do I have the licensing rights to install Windows 7 Pro on my Windows 8/8.1 machine using Hyper-V?
    Quite often my company uses XP Mode to run 16-bit applications, or apps that just seem to work better on a 32-bit system, but I really don't want to have to keep getting Windows 7 machines when 8/8.1 works fine except for these old apps, for which I need an older/32-bit
    system.

    Hi,
    Windows 8 x86 can run 16-bit applications; please refer to the following link.
    http://social.technet.microsoft.com/forums/windows/en-US/d03a56cb-0039-42b7-962d-a389caab99f9/installing-windows-8-32bit
    I don't think you have license rights to Windows 7 Pro under Windows 8 Hyper-V if you don't buy it.
    Please refer to the license agreement here:
    http://download.microsoft.com/Documents/UseTerms/Windows_8_English_ca383862-45cf-467e-97d3-386e0e0260a6.pdf
    Additionally, if you install Windows 7 in Hyper-V, you will not be able to use XP Mode, because XP Mode is itself a virtual machine and Hyper-V is a virtual machine.
    You can't run a virtual machine inside a virtual machine. Please refer to the following link.
    http://social.technet.microsoft.com/forums/windows/en-US/78578f0d-4891-4bbf-8bbb-6b9eb05bc031/want-to-run-xp-mode-on-windows-7-vm-on-hyper-v
    If there are any problems, please let me know.
    Best regards

  • Windows Server 2012 - Hyper-V - Cluster Shared Storage - VHDX unexpectedly gets copied to System Volume Information by "System", Virtual Machines stop responding

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM. This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 Intel SSDSA2CW160G3 160 GB SSDs in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago, with no change in our problem, and we continually update the setup.
    Normal operation:
    Normally this setup works just fine, and we see no real difference in startup, file copy, or LoB application processing speed compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally
    we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem:
    Our problem is that for some reason a random VHDX gets copied to the System Volume Information folder of the Clustered Shared Storage (i.e. C:\ClusterStorage\Volume1\System Volume Information) by "System".
    All VMs stop responding, or respond very slowly, during this copy process; you can, for instance, not send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    This happens at random, not every day, and different VHDX files from different VMs get copied each time. Sometimes it happens during the daytime, which causes a lot of problems, especially when a 200 GB file gets copied (which takes a lot of time).
    What it is not:
    We thought that this was connected to the backup, but the backup had finished 3 hours before the last time this happened, and the backup never uses any of the files in System Volume Information, so it is not the backup.
    An observation:
    When this happened today, I switched on ShadowCopy (previous versions of files), set it to use only 320 MB of storage, and then the copy process stopped and the virtual machines started responding again. This could be unrelated, since there is no way to see
    how much of the VHDX is left to be copied, so it might have finished at the same time as I enabled ShadowCopy (previous versions of files).
    Our question:
    Why is a VHDX copied to System Volume Information when scheduled ShadowCopy (previous versions of files) is switched off? As far as I know, nothing should be copied to this folder when this function is switched off.
    List of VSS Writers:
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Writer name: 'VSS Metadata Store Writer'
       Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06}
       Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93}
       State: [1] Stable
       Last error: No error
    Writer name: 'Performance Counters Writer'
       Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2}
       Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381}
       State: [1] Stable
       Last error: No error
    Writer name: 'System Writer'
       Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
       Writer Instance Id: {7848396d-00b1-47cd-8ba9-769b7ce402d2}
       State: [1] Stable
       Last error: No error
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {8b6c534a-18dd-4fff-b14e-1d4aebd1db74}
       State: [5] Waiting for completion
       Last error: No error
    Writer name: 'Cluster Shared Volume VSS Writer'
       Writer Id: {1072ae1c-e5a7-4ea1-9e4a-6f7964656570}
       Writer Instance Id: {d46c6a69-8b4a-4307-afcf-ca3611c7f680}
       State: [1] Stable
       Last error: No error
    Writer name: 'ASR Writer'
       Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
       Writer Instance Id: {fc530484-71db-48c3-af5f-ef398070373e}
       State: [1] Stable
       Last error: No error
    Writer name: 'WMI Writer'
       Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
       Writer Instance Id: {3792e26e-c0d0-4901-b799-2e8d9ffe2085}
       State: [1] Stable
       Last error: No error
    Writer name: 'Registry Writer'
       Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
       Writer Instance Id: {6ea65f92-e3fd-4a23-9e5f-b23de43bc756}
       State: [1] Stable
       Last error: No error
    Writer name: 'BITS Writer'
       Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
       Writer Instance Id: {71dc7876-2089-472c-8fed-4b8862037528}
       State: [1] Stable
       Last error: No error
    Writer name: 'Shadow Copy Optimization Writer'
       Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Instance Id: {cb0c7fd8-1f5c-41bb-b2cc-82fabbdc466e}
       State: [1] Stable
       Last error: No error
    Writer name: 'Cluster Database'
       Writer Id: {41e12264-35d8-479b-8e5c-9b23d1dad37e}
       Writer Instance Id: {23320f7e-f165-409d-8456-5d7d8fbaefed}
       State: [1] Stable
       Last error: No error
    Writer name: 'COM+ REGDB Writer'
       Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
       Writer Instance Id: {f23d0208-e569-48b0-ad30-1addb1a044af}
       State: [1] Stable
       Last error: No error
    Please note:
    Please only answer our question and do not offer any general optimization tips that do not directly address the issue! We want the problem to go away, not to finish a bit faster!

    Hello Lawrence!
    Thank you for your reply; some comments to help you and others who read this thread:
    First of all, we use Windows Server 2012 and the VHDX format, as I wrote in the headline and in the text of my post. We have not had this problem in similar setups with Windows Server 2008 R2, so the problem seems to have been introduced in Windows Server 2012.
    These posts that you refer to seem to be outdated and/or do not apply to our configuration:
    The post about Dynamic Disks:
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx is only a recommendation for Windows Server 2008 R2 and the VHD format. Dynamic VHDX is indeed recommended by Microsoft when using Windows Server 2012 (please look in the optimization guide
    for Windows Server 2012).
    In fact, if we used fixed VHDX then we would have a bigger problem, since fixed VHDX files are generally larger than dynamic disks, i.e. more data would be copied, and that would take longer = the VMs would be unresponsive for a longer time.
    The post "What's the deal with the System Volume Information folder"
    http://blogs.msdn.com/b/oldnewthing/archive/2003/11/20/55764.aspx is for Windows XP / Windows Server 2003, and some things have changed since then. For instance, in Windows Server 2012, Shadow Copies cannot be controlled by going to Control Panel -> System.
    Instead you right-click on a drive (i.e. a volume, for instance the C: drive/volume) in Computer and then click "Configure Shadow Copies".
    Windows Server 2008 R2 Backup problem
    http://social.technet.microsoft.com/Forums/en/windowsbackup/thread/0fc53adb-477d-425b-8c99-ad006e132336 - This post is about antivirus software trying to scan files used during backup that exist in the System Volume Information folder, and we do not
    have any antivirus software installed on our hosts, as I stated in my post.
    Comment that might help us:
    So according to “System Volume Information” definition, the operation you mentioned is Volume Shadow Copy. Check event viewer to find Volume Shadow Copy related event logs and post them.
    Why?
    Further investigation suggests that a volume shadow copy is somehow created even though the schedule for Shadow Copies is turned off for all drives. This happens at random and we have not found any pattern. Yesterday this operation took almost all available
    disk space (over 200 GB), but all the disk space was released when I turned on scheduled Shadow Copies for the CSV.
    I therefore draw these conclusions:
    The CSV volume has about 600 GB of disk space, and since the Volume Shadow Copy used 200 GB, or about 33% of the disk space, while the default limit is 10%, I conclude that for some reason the unscheduled Volume Shadow Copy did not have any limit (or ignored
    the limit).
    When I turned on the schedule I also changed the limit to the minimum amount, which is 320 MB, and this is probably what released the disk space. That is, the unscheduled Volume Shadow Copy operation was aborted, and it then adhered to the limit and deleted the
    Volume Shadow Copy it had taken.
    I have also set the limit for Volume Shadow Copies for all other volumes to 320 MB by using the "Configure Shadow Copies" Window that you open by right clicking on a drive (volume) in Computer and then selecting "Configure Shadow Copies...".
    It is important to note that setting a limit for Shadow Copy storage and disabling the schedule are two different things! It is possible to have unlimited storage for Shadow Copies while the schedule is disabled; however, I do not know if this was the case
    before I enabled Shadow Copies on the CSV, since I did not look for this.
    I have now defined a 320 MB limit for Shadow Copy storage on all drives, so no VHDX should be copied to System Volume Information, since they are all larger than 320 MB.
    Does this sound about right or am I drawing the wrong conclusions?
    Limits for Shadow Copies:
    Below we list the limits for our two hosts:
    "Primary Host":
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (91%)
    Shadow Copy Storage association
       For volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Shadow Copy Storage volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Shadow Copy Storage association
       For volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Shadow Copy Storage volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (3%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    C:\>cd \ClusterStorage\Volume1
    Secondary host:
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 35,0 MB (10%)
    Shadow Copy Storage association
       For volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Shadow Copy Storage volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 27,3 GB (10%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 6,80 GB (10%)
    C:\>
    There is something strange about the limits on the Secondary host!
    I have not in any way changed the settings on the secondary host, and as you can see, the secondary host has a maximum limit of only 35 MB of storage on the CSV, yet it also shows that this is 10% of the volume. This is clearly not the case, since 10% of 600
    GB = 60 GB!
    The question is: why does it by default set too small a limit (i.e. < 320 MB) on the CSV, and is this the cause of the problem? I.e., is the limit ignored because it is smaller than the smallest amount you can set using the GUI?
    Is the default 35 MB maximum Shadow Copy limit a bug, or is there any logical reason for setting a limit that according to the GUI is too small?
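    For anyone wanting to pin the limit from an elevated prompt instead of the GUI, vssadmin can resize the association directly (the drive letters below are examples; run it per volume):
    vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=320MB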
