Private Cloud

I created a private cloud using System Center 2012 R2, and I have services in my cloud, but I cannot publish my private cloud to the internet. How can I publish this cloud to the internet?
Thank you for your help.
Best regards

Hi,
Sorry for the delayed reply.
Windows Azure Pack for Windows Server is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your data center. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through
the use of the Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud, consistent with the public Windows Azure experience.
If you want to use Azure Pack, you could ask in:
https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=windowsazurepack
Meanwhile, FIM stands for Forefront Identity Manager; the FIM + Azure AD Connector is used to connect to Azure AD:
http://msdn.microsoft.com/en-us/library/azure/dn783462.aspx
For this, I think you may ask in the Azure AD forums:
https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=WindowsAzureAD
Regards.
Vivian Wang

Similar Messages

  • Can't Delete Private Cloud - tbl_WLC_PhysicalObject not being updated

    I am having issues with our SCVMM instance where I can't delete private clouds, even when they're empty.
    When I right-click the private cloud and click Delete, the Jobs panel says the job finished successfully; however, the private cloud is not deleted.
    After doing some research, I believe it's because entries in the tbl_WLC_PhysicalObject database table are not being updated correctly when a VM is moved from one private cloud to another. After determining the CloudID of the private cloud I am trying to delete, I still see resources assigned to it in the tbl_WLC_PhysicalObject table, even though in the VMM console the private cloud shows up empty.
    For testing purposes, I assigned a VM back to the private cloud I am trying to delete, only to move it out again and gather some tracing/logging. When I moved the VM back out of the private cloud, I had SQL Profiler running in the background, capturing the SQL statements on the DB server. Looking at the "exec dbo.prc_WLC_UpdatePhysicalObject" statement, I see the @CloudID variable is assigned the CloudID of the private cloud the VM is currently assigned to (the private cloud I am trying to delete), NOT the CloudID of the private cloud the VM is being moved to.
    Instead of having the VMM console GUI do the private cloud assignment, I copied the PowerShell commands out so I could run them manually. The script gets four variables ($VM, $OperatingSystem, $CPUType, and $Cloud) and then runs the "Set-SCVirtualMachine" cmdlet. The $Cloud variable does return the proper CloudID of the private cloud I am trying to move the VM to (I ran it separately and echoed $Cloud to look at its value). But when I run the "Set-SCVirtualMachine" cmdlet, the "CloudID" and "Cloud" values in the output are still those of the source private cloud, the one I am moving the VM out of and ultimately want to delete.
    Has anyone run into this? Is something not processing right in the "Set-SCVirtualMachine" cmdlet?

    I have been slowly looking into this, and this is where I am at:
    I built a development SCVMM 2012 R2 instance that mocks our production environment (minus all the VMs; just the networking configuration and all the private clouds have been mocked). Starting from SCVMM 2012 R2 GA, I installed the 4 rollup patches one by one, in order, and at each new patch level I monitored the queries coming in through SQL Profiler as I moved a VM between private clouds and created new VMs within clouds. As I created new VMs and moved VMs between clouds, the "prc_WLC_UpdatePhysicalObject" stored procedure calls all had a value of NULL for the CloudID column, so a CloudID isn't even associated with the physical objects (basically the VHDX files and any mounted ISOs on the VMs).
    I did find out this SCVMM instance was upgraded from SCVMM 2008 (I took over after the 2012 R2 upgrade was completed).
    I am thinking at this point that nothing is wrong with SCVMM 2012 R2 if you build it from scratch with a new DB, and that this might be a deprecated field from SCVMM 2008. The only other thing we did was put in a SAN and move VMs from stand-alone hosts to the new CSVs (a mixture of 2008 R2 and 2012 non-R2 hosts).
    Since we don't have self-service enabled yet, it will be a day's work to rebuild a new instance of SCVMM 2012 R2, migrate the hosts/VMs to it, and start from a clean slate.
    I know the DB structure isn't really published, but does anybody have any other insights into this?
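    If you want to confirm the theory, the unsupported way is to look at the rows directly. A hedged sketch, assuming only the table and column names quoted in this thread (back up the VMM database first, and treat this as read-only diagnostics):

```sql
-- Read-only, unsupported inspection of the VMM database.
-- Get the cloud GUID first from VMM PowerShell:  Get-SCCloud | Select-Object Name, ID
SELECT *
FROM dbo.tbl_WLC_PhysicalObject
WHERE CloudID = '<cloud GUID from Get-SCCloud>';
```

    If rows still reference the deleted cloud's GUID after the VMs were moved out, that matches the behavior described above. Editing the rows by hand is not supported; open a case with Microsoft before touching them.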

  • How to configure DPM2012R2 to use a private cloud

    Hi,
    I have configured a private cloud over several Windows Server 2012 R2 servers. I have the storage and all the required networking done. I then installed DPM 2012 R2 and would like to use the "long term" storage function with the private cloud that I created. Has anyone managed to get this done? The options in DPM 2012 R2 seem to only allow cloud association with Azure. Is it even possible to configure DPM 2012 R2 to use a private cloud deployment?
    Any help with this will be greatly appreciated.
    Kind Regards

    Thanks for the reply:
    How would I be able to do any of the long-term storage to disk with a second DPM server?
    My scenario is as follows:
    The primary DPM server is onsite with short-term backup configured to disk. There are several protection groups to allow for different kinds of backup scenarios, i.e. different times, different types of clients, etc. Then there is an "offsite" DPM server with about 40 TB of available storage that I'd like to use as "long term" storage for the currently protected groups. However, I'm not extremely proficient with DPM, and therefore I'm uncertain how to get this done.
    What needs to be achieved is:
    Onsite, I'd like to keep 14 days (currently configured to do so, so no change required).
    Offsite, I'd like to keep 12 months (1 per month for each month), as well as 1 per year for 2 years, of all protected groups.
    The offsite DPM server has thus far only been configured as Server 2012 R2 Std, and the storage has been allocated to the private cloud. This doesn't need to stay like this, however, and can be "broken" and reconfigured. If there are step-by-step instructions that you could provide to do this, I'd really appreciate it.
    Kind Regards

  • Best way to set up a private cloud for family

    I wish to set up my own private cloud. Is there any Apple device that will help me do so? If not, which devices will be most compatible with a Mac, an iPhone, and an iPad?
    I don't have a server yet. I am looking for an Apple device that can act as storage for all kinds of files for every member of my family.
    Will I be able to access my files from anywhere once I secure my files on a Mac server? If yes, can someone please explain the detailed procedure to do so?

    In isolation, the term "cloud" is effectively meaningless; it has largely become a marketing term designed to open up wallets. It started out as what used to be called timesharing, hosted services, or client-server. The "private" version is now typically used for devices or servers you own. Rather than worrying about the terms, you'll need to decide what services you want now and what services you might grow into, and whether you want gear for the immediate needs or more capable gear you can grow into. Storage is obvious. Probably VPN services. Web (and possibly WebDAV) with a wiki or a content management system, and potentially your own mail server and calendar (CalDAV) server. Once you have sorted out what you want to do and what you want to grow into (and, implicitly, the budget involved), we'll have a better idea of what sort of "cloud" you need.
    If you want to share storage via CIFS/SMB or AFP, then OS X (client) can do that now; there's no need for OS X Server or some other server. For AFP, a Time Capsule can provide local storage. Further along, there's OS X Server, which can handle shared storage as well as DNS and distributed authentication. (Recent versions of OS X Server have had some issues with the VPN server, but Apple has released patches that have supposedly addressed that.)
    Or you can use Network Attached Storage (NAS) devices from various vendors.   This would be similar to the Time Capsule, though many of the available devices are more capable and more capacious.  Synology makes some of the higher-end gear in the home range, and there are many other vendors of NAS devices.
    When working with multiple users where privacy is a concern, traditional file shares can require authentication and access controls, though simple shares will work for smaller configurations. In a larger system, you'll want access tied to the identity of the accessor rather than having everybody configured with passwords all over the place; scattered passwords don't tend to get changed, and security tends to go downhill from there. You might or might not be small enough to avoid these requirements and simply keep or share passwords on multiple devices, but that's something to ponder.
    Remote access requires a network connection, probably involving VPN security (an encrypted network connection) into a gateway firewall device with an embedded VPN server, or using NAT VPN passthrough into some other local VPN server you've installed and configured; it also requires an ISP connection that allows remote access. For simple access, where your ISP allows inbound VPNs, dynamic DNS (to get from name to IP address) plus a VPN server in your gateway device, or in some other box inside your network with NAT VPN passthrough enabled on the gateway, will work. For a bigger group, or when you need more advanced features or typical small-business use, you'd want a static IP, as you'd typically be looking to add mail services and other features. (This also means you'll want a reasonably fast and reliable network link from your ISP, obviously.)
    iOS doesn't particularly do file sharing (not without add-on tools, and many apps are not set up for accessing file shares in any case), so you'll have to look at what the particular apps you're using do support.  Once you know what those support, that'll help determine what can be shared there, and how.
    If you wanted to go gonzo, there are commercial and open-source cloud implementations, but that's going to involve managing a server, and the cloud software.
    Dropbox, SpiderOak, and other such services are common choices for sharing files using hosted services, and Mac Mini Colo offers OS X-based cloud capabilities if you'd prefer a hosted OS X system. Those are options if you don't want to acquire, configure, manage, and maintain the gear and the network access yourself.

  • Installing Project Server 2013 as a private cloud

    hi,
    As far as I know, SharePoint 2013 is designed to support cloud computing. Moreover, I have seen several Microsoft partners that provide Project Server as SaaS.
    Now I was wondering: are there any instructions for launching Project Server as a private cloud?

    Hello Sam-Net
    I'm not sure what your question is.
    If you install SharePoint 2013 on premises and expose it to the cloud, and then want different instances of Project Server running, you would use PowerShell to set up tenancy. This is what some of the partners do for hosting Project Server.
    If you install SharePoint 2013 in Azure, it would be the same answer: you have to set it up using PowerShell, creating tenants.
    Cheers!
    Michael Wharton, MVP, MBA, PMP, MCT, MCTS, MCSD, MCSE+I, MCDBA
    Website http://www.WhartonComputer.com
    Blog http://MyProjectExpert.com contains my field notes and SQL queries
    Thanks Michael,
    It's not just about multi-tenancy. As far as I know, software as a service (SaaS) has several characteristics, such as multi-tenancy, on-demand self-service, rapid elasticity, and so on.
    For example, suppose a customer wants separate servers for the app and front-end tiers. If we add 2 new servers to our farm and run the configuration wizard, all the services in the farm will stop and other customers will be dissatisfied with this situation. On the other hand, if we provide a farm with a large amount of RAM, CPU, and disk and, for example, run 3 instances of PWA (multi-tenancy), how is it possible to limit the amount of hardware for each instance?
    These are just some of the issues we will face when we try to launch Project Server to serve customers as a cloud.
    Please correct me if I'm wrong.
    Best,
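    For the tenancy part specifically, the isolation unit in SharePoint 2013 is the site subscription. A minimal hedged sketch of what the hosters' PowerShell boils down to (server URL, account, and site names are examples only; note this mechanism separates data and services, not hardware):

```powershell
# Run in the SharePoint 2013 Management Shell on a farm server.
# Create a new site subscription (tenant) and a Project Web App site inside it.
$sub = New-SPSiteSubscription
New-SPSite -Url "https://projects.contoso.com/sites/customerA" `
    -SiteSubscription $sub `
    -OwnerAlias "CONTOSO\customerA-admin" `
    -Template "PWA#0"    # Project Web App site template; assumes Project Server 2013 is installed
```

    Per-customer CPU/RAM limits are a different axis: site subscriptions isolate content and service data, so hardware capacity per customer is usually handled at the farm or VM level instead.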

  • Can I use virtual Servers in private cloud for RAC

    Hello to all,
    We are going to install Oracle RAC on two servers.
    But our hardware administrator says to us: "I'll allocate two virtual servers in our private cloud, not two physical (real) servers."
    Do you think it's practical and reasonable to use virtual servers for Oracle RAC in a production environment?
    Which is better for RAC, a physical server or a virtual server?
    Please give your reasons.
    Thanks

    Using virtual machines is officially supported for RAC only in a few cases, which can be found here:
    http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html
    Make sure that you meet these requirements in your private cloud. Some cases, like VMware, are still somewhat supported despite not being on the list.
    Besides this, you should make sure that your 2 virtual machines run on different physical servers in the cloud; otherwise you lose most of the RAC advantage regarding high availability if both virtual servers happen to be running on the same hardware during a crash.
    Virtual servers are used in production environments, but you will have to take greater care with many aspects of RAC compared to physical hardware; e.g., something like VMware's "live migration" can kill a RAC node due to a timeout.
    I would prefer hardware for RAC any time over virtual servers and spare myself the hassle of dealing with all the possible issues arising from virtualization.
    And check Oracle's licensing policy: running an Enterprise Edition RAC on, e.g., a large VMware cluster is insanely expensive, because you pay for every CPU core the RAC COULD run on, i.e. the entire cluster!
    If you must use virtual hardware but don't want to and need an argument against it, use the licensing issue.
    Regards
    Thomas

  • Pre-requisite learning for private cloud

    What is the prerequisite learning to build a career in private cloud?
    Regards, Firoz Ahmad (Cheeeeeerrrrssssss)

    From a Microsoft perspective, it is important to absorb everything related to Hyper-V, networking, storage, and clustering. From there, Virtual Machine Manager is key as the management layer.
    I would recommend reading our whitepaper on how to leverage IaaS with the Cloud OS, available here: http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Azure Pack Private Cloud RDP GateWay Configuration Console Access

    Hello,
    We are in the process of deploying the Azure Pack private cloud and have run into an issue with the configuration of console access. We have been able to successfully configure the console to work internally, but external access is not functional. When a user attempts to connect from Windows 8.1 to the RDP console via the Remote Desktop Services Gateway, they see the error:
    An authentication error has occurred (code 0x607)
    Internal access works flawlessly.
    Environment: Windows Server 2012r2 Data Center, fully updated
    Windows Azure Pack RTM - Distributed Installation - 7 VMs - wapadmapi1, wapadmauth1, wapadmprtl1, waptenauth1, waptenprtl1, waptenpriapi1, waptenpubapi1
    System Center 2012r2 Components: VMM, Service Provider Foundation, SCORCH, SCOM
    SQL Server 2012 Enterprise SP1 running on Server 2012r2
    Hyper-V Failover Cluster 2012r2
    A wildcard certificate has been assigned, and the RDP Gateway/VMM/SPF/Hyper-V have been configured as per http://www.hyper-v.nu/archives/mvaneijk/2013/09/windows-azure-pack-console-connect/ (PS: this article needs to be updated because the Set-SCSPFVMConnectGlobalSettings cmdlet doesn't work on RTM) and http://technet.microsoft.com/en-US/library/dn469415.aspx
    Thanks,
    Rick

    No problem Dennis. As noted in the two articles I referenced above:
    http://www.hyper-v.nu/archives/mvaneijk/2013/09/windows-azure-pack-console-connect/ and http://technet.microsoft.com/en-US/library/dn469415.aspx
    an RDP Gateway is required to connect the external user to console access on the Hyper-V back-end server. When the RDP file is downloaded, it enables users to have more functionality than a standard RDP session, and it also allows access to internal resources, acting similar to a proxy. I have checked the security logs and there is nothing that jumps out at me.
    Here is an excerpt from the logs:
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", met connection authorization policy requirements and was therefore authorized to access the RD Gateway server. The authentication method used was: "Cookie" and connection protocol
    used: "HTTP".
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", met resource authorization policy requirements and was therefore authorized to connect to resource "HV_SERVER_INT_IP".
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", connected to resource "HV_SERVER_INT_IP". Connection protocol used: "HTTP".
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", disconnected from the following network resource: "HV_SERVER_INT_IP". Before the user disconnected, the client transferred 1254 bytes and received 1615 bytes. The client session duration
    was 0 seconds. Connection protocol used: "HTTP".
    I have removed the actual IP addresses from the logs but kept the user credentials that are currently being passed. HV_SERVER_INT_IP = the internal IP of the Hyper-V host, and CLIENT_EXT_IP = the IP of the client attempting to access.
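    For reference, since the SPF cmdlet doesn't work on RTM, the console-connect settings can also be applied on the VMM side with Set-SCVMMServer. A hedged sketch based on the articles referenced above (parameter names may vary between update rollups; server name and certificate path are examples):

```powershell
# Point VMM at the certificate used to sign the generated Console Connect .rdp files.
Set-SCVMMServer -VMMServer "vmm01.contoso.local" `
    -VMConnectGatewayCertificatePath "C:\certs\rdgateway.cer" `
    -VMConnectHostIdentificationMode FQDN `
    -VMConnectTimeToLiveInMinutes 2
```

    If the certificate VMM signs with and the one the RD Gateway trusts don't match, a 0x607-style authentication error at the gateway is a plausible symptom, so that is worth double-checking.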

  • Implementation private cloud in university

    Hi,
    I want to implement a private cloud in my university, but I don't know which software is better for this work.

    Hello Maryam,
    I work for Harvard Business School and we have implemented a private cloud using Hyper-V and System Center products.
    We have found that software-defined networking works well for research, where we cannot spend a lot of time on network infrastructure changes to suit the needs of the researchers.
    We have also found that using System Center is a step up from just using vCenter to control VMware machine provisioning.
    The System Center suite brings much more management capability, and with System Center App Controller we can also manage Windows Azure.
    There are indeed pros and cons to each hypervisor but we have had success with Hyper-v on Windows Server 2012.
    If this has answered your question, please mark it!
    @sharepointmcts
    http://www.sharepointfeed.com

  • VMs on Hyper -V Host Cluster not appearing in Private Cloud in VMM

    Hello All,
    We are running our production VMs (around 70 VMs) on a failover cluster of Hyper-V hosts (let's say ProductionCluster). I created a host group for ProductionCluster in VMM 2012 R2 (with Update Rollup 4 installed) and added all the Hyper-V hosts to this host group.
    Then I configured a private cloud using ProductionCluster, and the job completed successfully. I verified that my cloud appears in the VMs and Services workspace -> Clouds, and that the private cloud library was created, under Library -> Cloud Libraries.
    However, none of our production VMs appears in the VMs pane. I want to assign user roles and services to the private cloud.
    I can see all the VMs when I go to the VMs view of the host group, but not in the private cloud pane.
    Please help and guide me.
    Please help and guide me.
    Thank you.
    Regards
    Hasan Bin Hasib

    >Is it mandatory to associate my VMs with the Cloud?
    No, it's not. You can have VMs 'outside' of your 'Cloud'.
    >Can I associate my running (production) VMs to the Cloud?
    Yes, you can.
    >Will it reboot or change the any properties of VM?
    No. The only change is the value of the 'Cloud' property.
    >Actually, my ultimate goal is to assign delegate control to the VMs' users, and not to give full rights on my Cloud to the VMs' users.
    For every VM created before your cloud was created, you must assign the VM to your cloud if you want your cloud users to see/manage it. For every VM deployed to a cloud (by a user via the WAP portal, or if you select the 'Deploy to a Cloud' option when deploying a VM from the VMM console), the 'Cloud' property will be set automatically.
    http://OpsMgr.ru/
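    The assignment itself is a one-liner in the VMM PowerShell console. A hedged sketch, with example cloud and VM names:

```powershell
# Assign an existing VM to a private cloud so users scoped to that cloud can see it.
$cloud = Get-SCCloud -Name "ProductionCloud"        # example cloud name
$vm    = Get-SCVirtualMachine -Name "ProdVM01"      # example VM name
Set-SCVirtualMachine -VM $vm -Cloud $cloud
```

    As noted above, this only changes the 'Cloud' property; the VM is not rebooted or otherwise modified.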

  • App Controller and Clouds connect to or configure public/private clouds

    Hi
    I was wondering what kinds of private/public clouds App Controller can connect to or configure, and where I can find the list of the public/private clouds it supports.
    Thanks

    App Controller can connect to the following clouds:
    System Center Virtual Machine Manager
    Windows Azure
    Hosting service providers
    I don't have a list of the hosting service providers that have a compatible cloud, although I know a couple have previously posted to this forum. The service provider needs to be running System Center and, in particular, Service Provider Foundation.
    Regards,
    Richard 
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • I forgot my password and email Forgot your private cloud small my daughter and I can not use the device jewelery Is there another way to run the machine?

    I forgot my password and email Forgot your private cloud small my daughter and I can not use the device jewelery Is there another way to run the machine?
    I hope there is a solution to this problem note that I can not use the iPad now

    Your question makes no sense and I'm not sure if you typed very quickly and autocorrect went nuts on you, or if you are not a native English speaker and Google Translate didn't work properly. If you are saying that you forgot the passcode to unlock the iPad, follow the instructions here.
    Forgot passcode for your iPhone, iPad, or iPod touch, or your device is disabled - Apple Support

  • SCOM2012 R2 Move in Private Cloud

    Currently we are using Microsoft System Center Operations Manager 2012 SP1 and planning an upgrade to SCOM 2012 R2.
    In this regard, we need your opinion and recommendation on whether the SCOM management server should be placed inside or outside the private cloud.
    For our understanding, please provide reference links for both scenarios.

    Hi,
    The SCOM server should be placed outside of the private cloud. See the video below.
    Monitoring the Private Cloud
    http://blogs.technet.com/b/privatecloud/archive/2013/08/12/cloud-os-video-series-5-monitoring-the-private-cloud.aspx
    Juke Chou
    TechNet Community Support

  • How to set up a simple network in SCVMM 2012 R2 and Hyper-V for a test lab / private cloud

    How do I set up a simple network in SCVMM 2012 R2 and Hyper-V for a test lab / private cloud?
    I have a domain controller on my laptop (Core i5, 8 GB) and one Hyper-V machine that also runs SCVMM 2012 R2 (Core i3, 16 GB).
    Both are running Windows Server 2012 R2 x64.
    I want to set up an easy test lab with SCVMM 2012 R2 and Hyper-V for my private cloud exams (70-246 & 70-247).
    How do I arrange networking and clustering for this? My network and cluster subnet is 192.168.1.0/24.
    Thanks anyway!

    Also, I have tried to redeploy other test VM networks, VMs, and the PA Network itself and it still doesn't work. 
    Any ideas?
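    For the original lab question, a flat 192.168.1.0/24 subnet only needs one logical network and one VM network in SCVMM. A hedged sketch, run in the VMM PowerShell console (all names are examples; "All Hosts" is the default host group):

```powershell
# Minimal flat lab network: one logical network, one network site, one VM network.
$ln  = New-SCLogicalNetwork -Name "LabNetwork"
$def = New-SCLogicalNetworkDefinition -Name "LabNetwork_Site" `
         -LogicalNetwork $ln `
         -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") `
         -SubnetVLan (New-SCSubnetVLan -Subnet "192.168.1.0/24" -VLanID 0)
New-SCVMNetwork -Name "LabVMNetwork" -LogicalNetwork $ln -IsolationType NoIsolation
```

    After this, associate the logical network with the physical NIC on each Hyper-V host's hardware properties, and attach lab VMs to "LabVMNetwork"; no network virtualization is needed for a single flat subnet.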

  • SGE2010 Private Cloud networking = complexity

    I've been tasked by 4 medical practices with moving their servers into a colocation facility, connecting their remote offices using high-speed fiber (10-100 Mb), and still keeping each practice separate. They are sharing the colo and SAN to reduce their individual cost. Basically, I'm setting up a private cloud with a twist: keeping them separate. I worked all afternoon yesterday and through the night until 5am with no luck getting anything to work. Please help; I'm sure there is something obvious that I'm overlooking.
    Here is where I'm currently at. I'm a server administrator who knows enough networking to be dangerous. Here is what I have so far.
    All switches have been changed to Layer 3
    Practice 1
    Colo network & 32nd St Office
    I was able to connect these two locations with the high-speed fiber using a flat network (VLAN 1).  I will need to break them into different subnets.
    2 - SGE2010s stacked
    IP: 192.1.1.38 /24 (It was already like this.  I'll need to migrate them to 10.13.2.1)
    VLAN: 132
    VLAN IP: 192.168.13.2 /24
    VLAN: 1013
    Port 48 changed to General, admit all (Fiber connection)
    Routing
    192.1.35.0 /24 -> 192.168.13.3
    0.0.0.0 /0 -> 192.1.1.38
    CG office
    1 - SGE2010
    IP: 192.1.35.254 /24 (will migrate later to 10.13.3.1)
    VLAN: 133
    VLAN IP: 192.168.13.3 /24
    VLAN: 1013
    Port 48 changed to General, admit all (Fiber connection)
    Routing
    192.1.1.0 /24 -> 192.168.13.1
    I can't see the remote CG office. What am I missing? Once I get this basic setup working, I'm going to use it as a template for the remaining 9 offices.
    Thanks!

    "Is there any other way I can just create a two-node failover cluster in VMware Workstation without having to use physical servers?"
    Absolutely! You will need at least three VMs. One VM will be your Active Directory domain controller and your shared storage server (not a best practice, but since it seems like you are creating a lab, it will work). Then you will need two VMs to be the nodes of the cluster. On the first machine you will need to set up either iSCSI or SMB; SMB is a lot easier.
    http://blogs.technet.com/b/josebda/archive/2013/08/16/3587652.aspx provides a step-by-step guide for setting up a configuration that is more complex than you want, but it should help you get started. He uses a three-node cluster, but two will do. And he is doing things under Hyper-V, so some things will have to be translated to the VMware environment. Of course, if you use Hyper-V on a Windows 8.1 system instead of VMware Workstation, more will apply directly.
    . : | : . : | : . tim
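    The SMB variant mentioned above boils down to one share on the storage VM that both cluster nodes' computer accounts can write to. A hedged sketch (domain, machine, and path names are examples):

```powershell
# On the storage/DC VM: create the share and grant the cluster nodes' computer
# accounts full access at both the share and NTFS levels.
New-Item -Path "C:\ClusterStorage" -ItemType Directory | Out-Null
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage" `
    -FullAccess "LAB\node1$", "LAB\node2$", "LAB\Administrator"
icacls "C:\ClusterStorage" /grant "LAB\node1$:(OI)(CI)F" "LAB\node2$:(OI)(CI)F"
```

    Granting access to the machine accounts (the `$`-suffixed names) matters because clustered roles access the share as the computer, not as a logged-on user.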
