Possible To Virtualize?

I have two machines, of the following configuration, which I want to virtualize:
            Sunfire V880         Sun Blade 2500 (Silver)
Processor   5 * UltraSPARC III   2 * UltraSPARC III
RAM         8 GB                 2 GB
HDD         5 * 73.4 GB          2 * 146 GB
OS          Solaris 10           Solaris 10
I couldn't find out whether this is possible. I wanted to try it myself, so I went to download 'Oracle VM for SPARC', but it is not in the list of downloadable items. The download page offers 'Oracle VM for x86' but not 'Oracle VM for SPARC'.
I would be grateful if someone could shed some light on whether these machines can be virtualized using 'Oracle VM for SPARC'. If it is possible, how can I download the 'Oracle VM for SPARC' package?

The systems supported by the current LDoms 3.1 release are listed in the Release Notes:
http://docs.oracle.com/cd/E38405_01/html/E38409/ldomssupportedplatforms.html#scrolltoc
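Note that Oracle VM Server for SPARC (LDoms) runs only on sun4v (CMT) hardware, so UltraSPARC III systems such as the V880 and the Blade 2500 will not appear on that list. A quick way to check a given machine, as a minimal sketch assuming shell access on it:

# Prints the kernel architecture: "sun4v" on CMT systems that can
# host logical domains, "sun4u" on UltraSPARC III boxes like these.
uname -m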
Best regards,
Marcel

Similar Messages

  • Virtualize OS X while booted in Windows 7

    Hey, I was wondering if it is possible to virtualize OS X FROM Boot Camp. I know you can virtualize Windows 7 via the Boot Camp partition using VMware Fusion, but I was wondering if there is a similar solution for Windows to access my OS X partition from within Windows, without having to create a virtual machine, and that allows full read/write access to the partition. Google says practically nothing on this (or I haven't been able to find anything...), so please let me know if this is possible. It would be wonderful to be able to access my OS X partition while using Windows.

    Why not start a new thread? This one is getting close to a year old....
    MacIntouch had a lot of reader input on the subject.
    My take was, all the talk of virtualizing was about an OS X host AND an OS X guest.
    Such as running SL as a guest under Lion.
    You have three vendors that would know more, though under wraps or in the works and in beta and NDA most likely.
    What I don't see happening is running OS X in any fashion under another non-Mac OS (Windows, Linux etc).
    I detest the use of the name "boot camp" it is fuzzy and confuses rather than clarifies.
    People run Windows Server 2008 natively on Mac, then run multiple VMs for Windows.
    VMs are popular, more cost effective, can be moved around, reconfigured, moved to high speed storage when needed, on the fly.
    I don't see a physical OS install being used both natively and in a VM, except with Mac products.
    Paragon can convert VMware VMs (and others, like Oracle's) to or from a physical partition, yes.
    If it happens, it will be on VMware and Parallels as well as on the front page of your news sites, from Computerworld to rumormill most assuredly.

  • BIA running under VMware

    Please don't flame me as I am just trying to understand the licensing, scalability, and cost of this solution.  My understanding of the BIA appliance is that it consists of Linux blades (16 GB of memory each), a shared SAN disk subsystem (GPFS configured), an isolated switched infrastructure, and the application software (TREX).  Speaking to the physical configuration, this could all be virtualized into a custom VMware configuration where you could maintain/guarantee a high level of performance for disk, network, and Linux guest/blade performance.  Now for some of my questions.
    My understanding of the license is per 16GB where 16GB is the max per blade.  This license is for production usage and would be priced different in a dev/test environment.  Is this correct?  To move a dev blade to a production blade center would have a license cost, correct?  Also, if this solution was virtualized, an additional license would be needed when adding new virtual Linux guests to the solution.
    Based on licensing, would it be possible to virtualize the dev/test environment for no or little cost and concentrate the hardware solution to production usage?
    What have people done when they max out their current blade center (10 or 14 blades, depending on HP or IBM)?  Just buy another appliance?  It would seem easier to support a configuration where you could add a VMware guest and just purchase the additional software license.  A win for SAP and the customer!  This would allow an easier scalability path when loads increase.  I haven't heard/read about the processor being a bottleneck so I have left this out, but please let me know if this is an issue.
    I know this is easier said than done, but with virtualization being supported in all other areas of the SAP landscape, it would make sense to allow it to creep into the BIA solution.
    What do you guys think?  Am I crazy for even thinking it possible?
    Thanks,
    Steve

    Hello Steve,
    first about licensing... There are no SAP license fees for the non-productive blades. If you move blades from the dev/qa system to production, then new license fees are due. Please talk to your SAP account executive in this case.
    Next about virtualization... BI Accelerator is all about performance. BIA is highly optimized for parallel processing and for taking advantage of Intel CPUs. VMware or other attempts to virtualize BIA will not come close to the real deal. It's basically a waste of time to even try.
    Nevertheless there is one case where it makes sense. For a training system you would not need good performance and you would be using the "virtual BIA" only with a single training user. We have been thinking about it but as you said, it's not as easy as it sounds.
    Regards,
    Marc
    SAP NetWeaver RIG

  • [Hyper-V] Too Much Hardware Resources?

    Hello,
    I have a Small Business Server 2008 standard (SBS2008) install that is
    virtualized. The virtual was installed and set up on Hyper-V Server 2008 originally 5 years ago. I simply copied the virtual server from 2008 to a new set of hardware running
    Hyper-V Server 2012 R2 with NUMA virtualization enabled. 
    The new hardware is a Dell PowerEdge R520 server with dual Xeon E5-2420 2.2 GHz processors, 96 GB RAM (16384 MB @ 1600 MHz x 6 DIMMs spanned across 2 NUMA nodes), and 6 600 GB Seagate Cheetah 15K RPM SAS 6 Gb/s drives in a RAID 10 configuration on a Dell PERC H710P RAID controller (which is made by LSI), and we know that our disk I/O benchmark tops out at 3.5 Gbps read and write on the file sizes that would typically be read and/or written. The NICs are dual gigabit Broadcom NetXtreme NICs teamed together through PowerShell (http://technet.microsoft.com/en-us/library/jj130847.aspx).
    In short, the new hardware is FAST. Really fast.
    On the SBS2008 install, we initially assigned 18 logical processors and 24 GB of RAM, because we figured we'd throw as many resources at it as possible to ensure it is also fast.
    Up until yesterday, nobody had a problem for approximately two weeks. Then, for no reason, the server began to perform horribly: pings to itself, from itself, would average around 80 ms, spike as high as 300-400 ms, and not go below 10 ms. This should
    always be <1 ms. Pings to the Hyper-V server from a workstation were consistently <1 ms. Pings to the other virtualized server (Server 2008 R2 Enterprise running remote desktop services with 56 GB ram assigned and a lot of processors) were also consistently
    < 1 ms. From Hyper-V to the SBS2008 virtual were also as bad as pinging SBS2008 from itself to itself. Not until I was able to stop Microsoft Exchange and SQL Server did the server start to respond 'normally' again. I then shut down the server and decreased
    ram to 12 GB and logical processors from 18 to 8. I also disabled NUMA virtualization. Since yesterday at 3 pm, the server has been performing great. 
    Questions:
    1. Could it be possible that we had assigned too many logical processors or RAM initially to the SBS2008 virtual guest?
    2. Could it be possible NUMA virtualization was causing these issues?
    3. Is it possible the issue is still not resolved and would present itself again in two weeks?
    4. This SBS2008 virtual guest is a 5-year old install, installed according to Microsoft's best practices guide on SBS2008 installation. Beyond that, could there still be some configuration issue that could result in that kind of poor performance?
    Any other thoughts and suggestions are welcome. The server is again performing fine since the changes mentioned above, but there was no definite resolution discovered, so I have no proof that it is resolved until time tells us otherwise.

    #1 is your likely cause. What you're really assigning is a slice of CPU time, not a true processor.
    Excessively assigning CPU resources to a VM rarely results in better performance and can definitely result in worse performance, for that VM and for others as well. The maximum
    number of supported virtual processors for a 2008 guest is 4.  Going beyond the supported and tested processor count can result in unwanted behavior.
    For other OSes, look here:
    http://technet.microsoft.com/library/dn792027.aspx
    Also, for future Hyper-V questions, a better place to post is the Hyper-V forum, here:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverhyperv
    It's where all the Hyper-V gurus hang out.
    This one is for the product Virtual Server 2005.

  • Solaris 8 VM on T4-1 SPARC server?

    Is it possible to virtualize a Solaris 8 OS as a VM on OVM Server for SPARC 3.x running on a T4-1 server? I'm sure it's not possible, but I have to ask.
    Secondly, legacy zones only work with Solaris 10. Is it possible to make a Solaris 10 VM and then run a legacy zone inside of that (similar to nested VMs)?
    Rob

    Hello,
    You are right.
    Solaris 8 can only be virtualized in a Solaris 10 global zone, as a solaris8 branded zone. So you can create an LDom guest running Solaris 10 and put a branded Solaris 8 zone inside it.
    This doc can help you if you go that route:
    How to Setup Solaris 8 and 9 Containers (Branded Zones) in Solaris 10 (Doc ID 1019682.1)
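    A minimal sketch of what that looks like inside the Solaris 10 guest domain, assuming the Solaris 8 Containers packages are installed, with a flash archive of the old Solaris 8 system at /export/sol8.flar and a zone path of /zones/sol8 (both paths hypothetical):

    # Configure a solaris8 branded zone from the SUNWsolaris8 template.
    zonecfg -z sol8 'create -t SUNWsolaris8; set zonepath=/zones/sol8'

    # Install it from a flash archive of the original Solaris 8 host
    # (-u runs sys-unconfig so the zone gets a fresh identity).
    zoneadm -z sol8 install -u -a /export/sol8.flar

    # Boot the zone and attach to its console.
    zoneadm -z sol8 boot
    zlogin -C sol8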
    Regards
    Eze

  • Replication for High Write SQL Server Environment

    Hello everybody,
    Long-time reader, first-time writer ;-).
    We have been analyzing our SQL environment with our vendor and wanted to reach out to the community for advice.  Our main solution takes in large files daily and performs a series of logical steps that requires heavy writes to disk on import --
    BCP or SSIS is not an option.  We have a bare-metal SQL Server configured that handles the import of the largest files in three hours.  We are looking at possibly using virtualization and SQL 2014 to provide a better disaster recovery
    option.
    We are working through a proof of concept with our hosting provider that has SQL running in a VMWare virtual environment using SAN disk as the main disk.  During our tests, the file that takes three hours to import takes around six hours in the virtual
    environment.  Their engineers are looking at speeding up the SAN, but I would like to have an alternative.
    Could I do something like the following: have a bare-metal server act as the main import server, and then replicate the data to servers that are virtualized?  I believe there is an AlwaysOn feature in SQL 2014 which would provide automatic
    switch-over in case of failure.  How would you suggest I configure my environment to handle this?  Would it make sense to have the bare-metal server kick off a T-log transfer to the virtualized environment after the file is imported?  If so,
    how would that look?
    Thank you very much, in advance, for you advice!!!!!

    >have a bare metal server act as the main import server, and then replicate the data to servers that are virtualized?  I believe there is an always on feature with SQL 2014 which would provide automatic switch over in case of failure.
    Yes.  That would be
    AlwaysOn Availability Groups. In an AG, all the transaction log records written to the primary database's log file are copied to the secondary replicas as well.  The AG can't fail over automatically unless the secondary replica is synchronized.
    In your scenario it will probably take additional time to replicate the log records and apply them on the secondary, so you would not have automatic failover during that window. 
    You could switch the replica from Sync to Async mode during the load window, and then back to Sync mode after.
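    A minimal sketch of that switch, assuming an availability group named AG1 and a secondary replica hosted on SQLVM1 (both names hypothetical), run against the primary via sqlcmd:

    # Before the load window: automatic failover requires sync mode, so
    # drop to manual failover first, then switch the replica to async.
    sqlcmd -S primary -Q "ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLVM1' WITH (FAILOVER_MODE = MANUAL); ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLVM1' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)"

    # After the import: back to synchronous commit, then re-enable
    # automatic failover once the replica is synchronized again.
    sqlcmd -S primary -Q "ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLVM1' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT); ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLVM1' WITH (FAILOVER_MODE = AUTOMATIC)"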
    >Would it make sense to have the bare-metal server kick off a T-log transfer to the virtualized environment after the file is imported?
    You could do this too, with Log Shipping.  The VM would be even farther behind the primary.
    David
    David http://blogs.msdn.com/b/dbrowne/

  • CSS load balancing in both directions.

    Hi all,
    my questions are:
    - is it possible to divide (virtualize) one physical CSS into separate ones?
    and then
    - is it possible to use one virtual CSS for load balancing in one direction, and the other CSS for load balancing in the opposite direction?
    BR
    gg

    It sounds like you need to implement a group rule using 'add service service_name'.
    ie.
    service web1
      ip address 192.168.1.1
      port 80
      active
    service web2
      ip address 192.168.1.2
      port 80
      active
    owner vip
      content web_servers
        vip address 192.168.1.100
        port 80
        protocol tcp
        add service web1
        add service web2
        active
    group web_servers
      vip address 192.168.1.100
      add service web1
      add service web2
      active
    What this should do is NAT any request *initiated* from web1 or web2 to the IP address specified in the group rule. In this case it is 192.168.1.100, the same as the content rule. This is fine, or you can use a different IP. I'm using RFC1918 addresses in this example, as 192.168.1.100 would be natted to some public IP on the firewall in front of the CSS.
    If you wanted to do internal load balancing, or load balance to a service *NOT* within your environment (ie. 3rd party data center), you would simply change 'add service' to 'add destination service' in the group rule.
    James

  • Moving from CSS to ACE

    I'm trying to find documentation on moving from a CSS to the ACE but have not been able to find much on the ACE in general (no books at all). Does anyone have any info on this? We are currently using the CSS for multiple Web and Server farms, and are looking to add SSL in the mix. Trying to decide if we should just offload the SSL to the ACE for now (eventually migrating completely to the ACE) or if we should convert everything over at the same time.
    Any links or book suggestions would be appreciated!

    Hi,
    Here is the official link to ACE documentaton (but you probably have already found this...):
    http://www.cisco.com/en/US/products/ps6906/tsd_products_support_model_home.html
    I don't believe there is a book, as this is a relatively new product. Also, don't hope too much to find a migration guide :)
    You may use some design guides for the CSM module and try to apply parts of them to the ACE (the topology will be similar for ACE and CSM, but with the ACE you additionally have the possibility of virtualization/contexts).
    But pay attention, because ACE and CSM have completely different config command syntax and configuration philosophy!
    I did not quite understand your dilemma regarding migration.
    Personally, I have not yet had a chance to implement SSL offload on ACE, but it sounds logical to move the server farm that will use SSL offload behind ACE, and do SSL termination and load-balancing for that server farm on ACE. Then, gradually you can move other servers behind ACE...
    You will have to decide based on conditions and requirements in your network, and after reading thousands of pages of documentation... ;)
    Good luck!
    Best regards,
    Jasmina

  • VDI Primary on VirtualBox

    Is it possible to virtualize the VDI primary node similar to the Primary Host Virtualized Configuration but using VirtualBox?
    If not, what's the recommended way of creating HA for the primary (other than the secondaries, which don't broker)? If I understand correctly, if the primary goes down there is no node to broker connections, which prevents new users from connecting.

    That helped, but I would still like to know if installing the primary node within VirtualBox instead of ESX is supported.

  • ACE versus CSS.

    Hi all,
    can someone help me decide which is better to use? We are planning to build a data center with a Cat6500, so there is the possibility to insert an ACE module, but the ACE is quite expensive compared to the CSS.
    On the other side, is it possible to virtualize the CSS like the ACE?
    BR
    gg

    You should not be comparing the ACE module with the CSS appliance.
    The CSS appliance can be compared with the ACE 4710 appliance, which is not as expensive as the ACE module. The ACE module is needed when more throughput is required (more than the ACE appliance supports).
    Since there is no active development for the CSS, it's not advisable to deploy CSS in new data centers.
    The CSS cannot be virtualized.
    Syed Iftekhar Ahmed

  • HT1461 Virtual Snow Leopard 10.6.8

    I was curious if it is possible to virtualize Snow Leopard 10.6.8. I am running an iMac that has Boot Camp with Windows 7. I am hoping to virtualize it on the Windows side so that I will not have to constantly switch between Snow Leopard and Windows for remote desktop.
    Thanks

    Would this not work: Microsoft Remote Desktop Connection 2.1.1

  • Mac A1181

    My niece has a Mac laptop, model A1181, and she is at Central Michigan. She needs Office 2010, and she is not sure, if she buys the academic version, whether it will install on her Mac, given that version is PC-based. Office 2010 for the Mac is supposed to be coming out later this year, but she needs it now as school is starting. Thanks for any feedback on this.

    Welcome to Apple Discussions!
    Office 2010 has very few things that can't be done by existing Mac Office applications. http://www.neooffice.org/ has Word, PowerPoint, and Excel compatibility with their free software. That MacBook would appear to be one of the original MacBooks; it may not have enough juice for the Mac version of Office when it comes out. That remains to be seen. However, you could install Windows virtualization on the machine if access to Access is necessary, or if Mac OS X Mail's Exchange compatibility or web-based access to Exchange Server is not a possibility. Virtualization options are documented here*:
    http://www.macmaps.com/macosxnative.html#WINTEL
    - * Links to my pages may give me compensation.

  • Is it possible to sandbox the entire system?

    DISCLAIMER
    This is partly just thinking out loud.
    There may be some completely obvious solution for achieving this that I have not come across.
    My ideas may be flawed.
    I saw the other thread about sandboxing but that had a different focus and went in a different direction than this hopefully will.
    First, by sandboxing I mean the following:
    * let an application see the actual system, but only selectively, e.g. make /usr visible but /home inaccessible
    * intercept all writes to the system
    * let an application see all intercepted writes as though they have actually occurred
    * intercept all network communication and allow the user to approve or deny it, e.g. enable a source download from one site but prevent the application from calling home to another
    * the application cannot escape the sandbox
    * the application should not be able to detect the sandbox
    Is this possible?
    First I thought about using FUSE to mask the entire filesystem, but this would affect all applications and probably wouldn't work on a running system.
    Then I thought about using virtualization. Maybe it would be possible to create a fake base image of the live host system and then add an overlay to that to create a sandboxed virtual clone of the host system. The network connection could probably be handled by the host in that case.
    I don't know if it would be at all possible, though, to create a fake base image of the live host system. I also don't know if it would need to be static or if the image could remain dynamic. In the latter case, it would probably be possible to create the image with FUSE. Using FUSE it might even be possible to forgo the overlay image, as FUSE itself could intercept the writes. There are obvious complexities in that though, such as how to present changes to a file by the host to the guest if the guest has modified it previously. I also have no idea if the guest system could use a clone of the host's file system.
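    The overlay half of that idea, at least, is a standard copy-on-write trick; a minimal sketch with qemu-img, assuming you already have a raw clone of the host system at host-clone.raw (a hypothetical path):

    # Create a qcow2 overlay backed by the (never-modified) base image;
    # all guest writes land in the overlay file.
    qemu-img create -f qcow2 -b host-clone.raw sandbox-overlay.qcow2

    # Boot the sandboxed clone from the overlay.
    qemu-system-x86_64 --enable-kvm -m 2G -drive file=sandbox-overlay.qcow2,format=qcow2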
    Why I would want to do this:
    * "Safely" test-run anything while protecting your system (hide your sensitive data, protect all of your files, control network access)**.
    * Simplified package building: build the application as it's meant to be built in the sandbox, compare the sandbox to the host and then use the differences to build the package***.
    * It would be cool.
    ** Before anyone interjects with the "only run trusted apps" mantra, this would also apply to trusted apps which might contain bugs. Let's face it, most people do not plough through source code of trusted apps before building/installing/running them.
    *** This was prompted by my ongoing installation of SAGE, which is built in the post-install function instead of the PKGBUILD itself due to the complexities of the build process. The general idea is to create a way in which all applications that can be built can be packaged uniformly.

    Are you sure that you can change the permissions of symlinks themselves? I think I've tried to make files read-only via symlinks on a local server but ended up using bindfs because it wasn't possible. Even if you can, symlinking everything that might be necessary for a given environment would not be ideal, plus I don't think symlinks can be used across different filesystems.
    If a real-life human can figure out that he/she is in a chroot and break out of it, then he/she can write a script to do the same. I want a sandbox that could run malicious code with no effect on the system (if that's possible). Also, I think if the chroot idea were truly feasible, makepkg would have been using it for years already, to simply install packages in the chroot as you normally would and then package them. There would also be several sandbox applications that could run applications safely. So far I have yet to find any.
    I admit that I haven't looked into using a chroot in detail, though, and of course I may have missed some application which creates such a setup. Right now I think using per-application namespaces with FUSE seems the most promising, but I won't know until I've finished implementing a test application. If it turns out to be a dead end I'll take a harder look at chroot, but it really doesn't seem able to do what I want.
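    For the namespace part, a minimal sketch of hiding a directory from one process tree using a private mount namespace, assuming util-linux's unshare and an empty directory at /tmp/empty (hypothetical):

    # Needs root; --make-rprivate stops the bind mount from
    # propagating back to the rest of the system.
    mkdir -p /tmp/empty
    sudo unshare -m sh -c 'mount --make-rprivate / && mount --bind /tmp/empty /home && exec sh'

    Inside that shell, /home appears empty while every other process sees it normally. It is only a visibility trick, though: the sandboxed shell still runs as root, so on its own this does not contain malicious code.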

  • Is it possible to install SQL Server 2012 Standard Edition on a Windows Server 2012 Standard Which is installed on a Virtual Machine that is created by Hyper-V?

    Hi
    I recently bought a virtual machine from a data center and installed Windows Server 2012 Standard Edition on it. When I tried to install SQL Server 2012 Standard Edition, the data center administrator refused, telling me it is not possible
    to install any licensed Microsoft software on a Windows instance running in a virtual machine created by Hyper-V. That seemed really strange to me! I want to know if this is true.
    Thanks Everybody

    Hi Farid,
    For SQL Server running on a virtual server, please refer to the following article:
    http://blogs.technet.com/b/peeyusht/archive/2010/03/19/sql-licensing-under-virtualization.aspx
    In addition, for licensing issues we still recommend you contact the MS Activation Center:
    http://www.microsoft.com/licensing/existing-customers/activation-centers.aspx
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Virtualize an existing windows 8.1 installation on arch

    Hi,
    3 days ago I installed Arch and I'm delighted. Originally coming from Windows, I installed Ubuntu on my laptop a few months back. Linux instantly grabbed my attention. So when I decided to buy a new desktop PC I compared various distros, and Arch was the most promising one. I haven't been disappointed; the learning curve is good and the distro is really well documented. But now I have a problem where the documentation can't help.
    I have the following setup:
    - Intel i5 4570s
    - SSD for os
      - sda1: EFI boot partition
      - sda2: ext4 with arch installed
      - sda3: swap
      - sda4: microsoft reserved
      - sda5: ntfs with windows 8.1 installed
    - HDD for data
    - EFI motherboard
    Install order: arch -> windows
    Before I installed Arch I read an article about how to boot an existing Windows installation inside a VM while still being able to boot into it natively, with the two instances staying in sync. Later I found out that VirtualBox isn't capable of booting an EFI Windows installation.
    As I read in another blog, other virtualization software like QEMU can boot an EFI Windows installation with OVMF. So I followed the first article and created a raw image of my Windows disks with
    VBoxManage internalcommands createrawvmdk -filename w8raw.vmdk --rawdisk /dev/sda -partitions 1,4,5 -relative
    and tried to open that image with QEMU via
    qemu-system-x86_64 --enable-kvm -pflash OVMF.fd w8raw.vmdk
    I get the following error
    Unsupported image type 'partitionedDevice'
    It may be noteworthy that the VBoxManage command produced two files
    w8raw.vmdk
    w8raw-pt.vmdk
    Now my questions:
    1) Is it somehow possible to open the created .vmdk file with QEMU?
    2) Is it possible to create such a raw image with QEMU too? How? The documentation didn't help me.
    3) Do you have any other ideas how to achieve this?
    If you need further information I will gladly add it here.

    You have created a vmdk file that points to a real disk. The vmdk file does not contain the data itself, so qemu probably cannot open such a virtual disk. The raw disk is simply /dev/sda in your case; you can pass it as an argument to qemu (provided you have the necessary permissions to access it), but ensure that the mounted partitions (the ones you use in your running Linux) are not accessed in any way by Windows (this should be the case if they are not FAT or NTFS and you have not installed special tools in Windows to read them). VirtualBox can authorize access to individual partitions of a physical disk (by using the VBoxManage command you mention), but I don't think this trick can be exported to qemu.
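    If you do try the whole-disk route, a minimal sketch of what passing /dev/sda straight to qemu could look like (the OVMF path is an assumption; copy OVMF.fd somewhere writable first, since -pflash stores the EFI variables in it, and only boot this while the Linux partitions are unmounted):

    # Boot the physical disk under OVMF firmware; format=raw stops
    # qemu from probing the image format.
    qemu-system-x86_64 --enable-kvm -m 4G \
        -pflash /copy/of/OVMF.fd \
        -drive file=/dev/sda,format=raw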
    You can create a raw image of your disk by using the dd command:
    dd if=/dev/sda of=/some-file.raw
    but do that from a bootable recovery disk, do not mount /dev/sda while running dd (and have an external hard disk that is big enough to contain the whole image of /dev/sda).
    But I doubt you will achieve your real goal. At the very best, Windows will consider that it is running on new hardware within the virtual machine. At best it will ask you to re-register, and it may cease to work when you boot it directly. Some OEM Windows installs do a BIOS/UEFI check to verify they have not been copied and will refuse to work in the virtual machine. You might also expect a whole bunch of problems because the same Windows will be running on two different sets of hardware (virtual and physical). Windows has not been designed to be moved from one machine to another. If you want a virtual Windows, I would strongly suggest installing it from scratch in the virtual machine. VirtualBox has more features (some of them really useful: e.g. the Windows Guest Additions that provide nice integration between the two machines, USB access, etc.) and is easier to manage than qemu. Just put VirtualBox in BIOS mode and install Windows (all Windows versions accept being installed in BIOS mode).
    Last edited by olive (2014-04-13 09:32:49)
