Secure PRIVATE Cloud Services

Does anyone have experience integrating OS X and iOS devices with Secure Private Cloud Services? Please do not respond with details about iCloud, as it is a Public Cloud Service.

Similar Messages

  • How secure would the cloud services be?

    Hi all
    From what I have heard, some organizations use cloud services for non-sensitive data that is shared across offices, to reduce operational cost.
    However, even with TLS, file encryption and other security technologies, some government agencies and organizations still host their own file servers / VPNs for top-secret or confidential information. Why is that so?
    Could data possibly leak from cloud services (e.g., sabotage by a cloud vendor's employee, or an external attack)? I am interested to hear your opinions on this.
    Have a nice day! 
    Peter

    Remember too the requirements for data security that are implied by securities laws in many countries. The accidental release of, say, financial data prior to any public release creates a disclosure requirement with, in the US, the SEC. That could be extremely embarrassing and even create abnormal trading in a company's stock. I recall an incident some 10 or 15 years ago where the CEO of a major publicly traded company had his laptop stolen from his hotel room. That mandated public disclosure and SEC filings, and made the front page of the Wall Street Journal. Embarrassing indeed, and potentially dangerous as well.
    Consider the increased public relations implications if that type of data were stored on a public cloud service rather than on a private server, either using a private cloud, VPN or both.  You might expect significant questions to be raised if the compromised data were stored on a public cloud.

  • Issue with ADF security enabled App deployed to java cloud services

    Hi,
    Here are the instance details:
    Jdev cloud build:JDEVADF_11.1.1.6.0CLOUD_GENERIC_121118.1600.6229
    Java cloud service version:13.1
    I have created a simple ADF Application & enabled security by editing web.xml:
    <login-config>
      <auth-method>CLIENT-CERT</auth-method>
      <realm-name>default</realm-name>
    </login-config>
    <security-role>
      <description>manager</description>
      <role-name>manager</role-name>
    </security-role>
    Then I tried to deploy this application to Java Cloud Services. Deployment works fine.
    I have two users created in the Identity console, x and y. User x has the manager role enabled and user y does not.
    Now, when I access the deployed ADF application as user 'y', the page is still accessible.
    Since user 'y' does not have the privilege, he should not be able to access this page. Could you please let me know if I am missing something?
    Thanks.

    Hi,
    You may refer to the documentation available at this link: Developing Applications for Oracle Java Cloud Service - Release 13.1
    Please refer to the section: Securing Java EE Applications - Roles and Constraints
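    In short, a <security-role> element only declares the role; without a <security-constraint> that maps URL patterns to that role, the container will not block user 'y'. A minimal sketch (the url-pattern and resource name below are placeholders, not taken from your application) would be:
    <security-constraint>
      <web-resource-collection>
        <web-resource-name>ProtectedPages</web-resource-name>
        <!-- placeholder pattern: protect the pages that should require the manager role -->
        <url-pattern>/faces/*</url-pattern>
      </web-resource-collection>
      <auth-constraint>
        <!-- only callers in the manager role may access the URLs above -->
        <role-name>manager</role-name>
      </auth-constraint>
    </security-constraint>
    The manager role would then need to be mapped to the appropriate users (for example, x) in the Identity console or the application's role mappings so the container can enforce it.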
    Hope this helps
    Regards,
    Santhosh

  • How to add security for Azure Cloud Service?

    Hi,
    We have built some APIs in an Azure cloud service.
    We want to add security for the Azure cloud service.
    How do we add security for an Azure cloud service?

    Hi Santhosh,
    You may add security for your APIs by using:
    Mutual certificate authentication (a rough sketch is shown below)
    OAuth 2.0
    Managed developer accounts
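    As an illustration of the first option only, here is a minimal sketch of an ASP.NET Web API message handler that rejects requests lacking an expected client certificate. The class name, thumbprint, and registration details are assumptions for the example, not part of your service:
    using System.Net;
    using System.Net.Http;
    using System.Security.Cryptography.X509Certificates;
    using System.Threading;
    using System.Threading.Tasks;

    public class ClientCertificateHandler : DelegatingHandler
    {
        // Placeholder thumbprint of the certificate your callers are expected to present.
        private const string ExpectedThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // GetClientCertificate() is the Web API extension that exposes the TLS client certificate, if any.
            X509Certificate2 cert = request.GetClientCertificate();
            if (cert == null || cert.Thumbprint != ExpectedThumbprint)
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Forbidden)
                {
                    ReasonPhrase = "A valid client certificate is required"
                });
            }
            return base.SendAsync(request, cancellationToken);
        }
    }
    The handler would be registered in the Web API configuration; OAuth 2.0 or API Management developer accounts would replace or complement a check like this.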
    Regards,
    Manu Rekhar

  • Network security with Oracle Database Cloud Service

    Does the Oracle Database Cloud service support SSL? Or, any form of network encryption/authentication between a client and the service across the Internet?

    Thank you Rick. I'm intending to use Oracle Database Cloud Service as a "Database-as-a-Service", however I have read that it is actually more of a "Platform-as-a-Service" offering.
    What I would like to do is to interact with the Oracle Database Cloud Service via a local JDBC client. However, from further reading, it looks like the only way to interact with the Oracle Database Cloud Service from a non-Oracle-cloud-based client is via its RESTful web services (which, as you said, support SSL).
    That is to say, I cannot simply connect to the Oracle Database Cloud Service from a local client just through JDBC alone. It looks like I would have to configure my client to make the relevant RESTful web service calls instead, and likewise configure my settings on the Oracle Database Cloud Service to make the necessary translations (from REST to SQL).
    Just to finally clarify, is my above understanding correct?
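    To make the contrast concrete, what I have in mind is issuing HTTPS REST calls rather than opening a JDBC connection. A rough Java sketch follows; the URL path and credentials are placeholders for illustration, not the actual Database Cloud Service API:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class RestQueryClient {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; the real path comes from the service's RESTful web services documentation.
            URL url = new URL("https://myservice-mydomain.db.example.oraclecloud.com/apex/employees/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Basic authentication over SSL; replace with the actual cloud service credentials.
            String auth = Base64.getEncoder().encodeToString("user:password".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Accept", "application/json");

            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // JSON rows instead of a JDBC ResultSet
                }
            }
        }
    }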

  • Best way to set up a private cloud for family

    I wish to set up my own private cloud. Is there any Apple device that will help me do so? If not, which devices would be most compatible with a Mac, an iPhone, and an iPad?
    I don't have a server as yet. I am looking for an Apple device that can act as storage for all kinds of files for every member of my family.
    Will I be able to access my files from anywhere once I secure my files on a Mac server? If yes, can someone please explain the detailed procedure for doing so?

    In isolation, the term "Cloud" is effectively meaningless; it's largely become a marketing term designed to try to open up wallets in an effort toward the proper vacuuming of available funds.  It started out as what used to be timesharing or hosted services, or client server.  The "private" version is now typically used for devices or servers you own.  Better than these terms, you'll need to decide what services you want now, and what services you might grow into — and whether you want to provide gear for the immediate needs, or gear that's more capable and that you can grow into.  Storage is obvious.  Probably VPN services.  Web (and possibly WebDAV) with a Wiki or a content management system, and potentially your own mail server and calendar (CalDAV) server.  Once you have sorted out what you want to do and what you want to grow into (and implicitly also the budget involved here), then we'll have a better idea of what sort of "cloud" you need.
    If you want to share storage via CIFS/SMB or AFP, then OS X (client) can do that now.  No need for OS X Server, or some other server.  For AFP, Time Capsule can provide local storage.  Further along, OS X Server, which can deal with sharing storage, as well as DNS and distributed authentication.  (Recent versions of OS X Server have had some issues with the VPN server, but Apple has released some patches that have supposedly addressed that.)
    Or you can use Network Attached Storage (NAS) devices from various vendors.   This would be similar to the Time Capsule, though many of the available devices are more capable and more capacious.  Synology makes some of the higher-end gear in the home range, and there are many other vendors of NAS devices.
    When working with multiple users where privacy is a concern, the need for authentication and access controls arises even with traditional file shares, though simple shares will work for smaller configurations.  In a larger system, you'll want access tied to the identity of the accessor rather than having everybody configured with passwords all over the place; those passwords don't tend to get changed, nobody wants to log in and change them across multiple devices, and security tends to go downhill from there.  You might or might not be small enough here to avoid those requirements, in which case you can either keep passwords on multiple devices or share passwords, but that's something to ponder.
    Remote access requires a network connection, probably involving a VPN (an encrypted network connection) into a gateway firewall device with an embedded VPN server, or NAT VPN passthrough into some other local VPN server you've installed and configured; it also requires an ISP connection that allows that remote access.  For simple access, and where your ISP allows inbound VPNs, dynamic DNS (to get from name to IP address) plus a VPN server in your gateway device, or in some other box inside your network with NAT VPN passthrough enabled on the gateway, will work.  For a bigger group, for more advanced features, or for typical small-business use, you'd want a static IP, as you'd typically be looking to add mail services and some other features.  (This also means you'll want a reasonably fast and reliable network link from your ISP, obviously.)
    iOS doesn't particularly do file sharing (not without add-on tools, and many apps are not set up for accessing file shares in any case), so you'll have to look at what the particular apps you're using do support.  Once you know what those support, that'll help determine what can be shared there, and how.
    If you wanted to go gonzo, there are commercial and open-source cloud implementations, but that's going to involve managing a server, and the cloud software.
    Dropbox and Spideroak and other such are common choices for sharing files using hosted services, and Mac Mini Colo offers OS X-based cloud capabilities if you'd prefer to use a hosted OS X system.  This if you don't want to acquire and configure and manage and maintain the gear and the network access.

  • Azure Pack Private Cloud RDP GateWay Configuration Console Access

    Hello,
    We are in the process of deploying the Azure Pack Private Cloud and have run into an issue with the configuration of Console Access.  We have been able to successfully configure the Console to work internally, but external access is not functional.  When a user attempts to connect from Windows 8.1 to the RDP Console via the Remote Desktop Services Gateway, they see the error:
    An Authentication error has occurred (code 0x607)
    Internal access works flawlessly.
    Environment: Windows Server 2012r2 Data Center, fully updated
    Windows Azure Pack RTM - Distributed Installation - 7 VMs - wapadmapi1, wapadmauth1, wapadmprtl1, waptenauth1, waptenprtl1, waptenpriapi1, waptenpubapi1
    System Center 2012r2 Components: VMM, Service Provider Foundation, SCORCH, SCOM
    SQL Server 2012 Enterprise SP1 running on Server 2012r2
    Hyper-V Failover Cluster 2012r2
    A wildcard certificate has been assigned, and the RDP Gateway/VMM/SPF/Hyper-V have been configured as per http://www.hyper-v.nu/archives/mvaneijk/2013/09/windows-azure-pack-console-connect/ (PS: this article needs to be updated because the set-scspfvmconnectglobalsettings cmdlet doesn't work on RTM) and http://technet.microsoft.com/en-US/library/dn469415.aspx
    Thanks,
    Rick

    No problem Dennis. As noted in the two articles I referenced above:
    http://www.hyper-v.nu/archives/mvaneijk/2013/09/windows-azure-pack-console-connect/ and http://technet.microsoft.com/en-US/library/dn469415.aspx
    an RDP Gateway is required to connect the external user to the console access on the Hyper-V backend server.  When the RDP file is downloaded it enables users to have more functionality than a standard RDP session, and it also allows access to internal resources, acting similar to a proxy.  I have checked the security logs and there is nothing that jumps out at me.
    Here is an excerpt from the logs:
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", met connection authorization policy requirements and was therefore authorized to access the RD Gateway server. The authentication method used was: "Cookie" and connection protocol
    used: "HTTP".
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", met resource authorization policy requirements and was therefore authorized to connect to resource "HV_SERVER_INT_IP".
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", connected to resource "HV_SERVER_INT_IP". Connection protocol used: "HTTP".
    The user "FedAuthDomain\FedAuthUser", on client computer "CLIENT_EXT_IP", disconnected from the following network resource: "HV_SERVER_INT_IP". Before the user disconnected, the client transferred 1254 bytes and received 1615 bytes. The client session duration
    was 0 seconds. Connection protocol used: "HTTP".
    I have removed the actual IP addresses from the logs but kept the user credentials that are being currently passed.  HV_SERVER_INT_IP = IP of Hyper-V Host, internal and CLIENT_EXT_IP = IP of client who is attempting to access.

  • We have concerns with Adobe Acrobat DC - document cloud services and how it relates to HIPAA.  Has anyone else worked through these issues?

    We have questions relating to the Document Cloud services offered with Acrobat DC.  We are considering disabling the cloud feature to stay in compliance with HIPAA.  Has anyone else worked through these issues?

    Hi mikem82897618,
    You can refer to the following link to learn what can be done with Adobe Document Cloud services when working in Acrobat DC:
    Store Files Online, Share & Access From Anywhere | Adobe Acrobat.com
    Also, visit this link to learn more about how Adobe Document Cloud services comply with HIPAA security standards:
    E-signatures vs digital signatures | eSign services from Adobe
    Please share more details about why you would need to disable the Cloud feature.
    What trouble exactly are you facing, and where?
    Let me know.
    Regards,
    Anubha

  • Is it possible to use both an ILB and an ELB (listening on the same port) in the same Azure cloud service?

    I'm building a test Lync deployment on Azure; yes, I know this is not supported, hence "test".
    Lync Front-End servers expose two sets of web services, one for internal users and one for external ones; they listen on different ports (443 and 4443) on the same servers. When external services are published, you need a reverse proxy or port forwarding in order to map port 443 of a public IP address to port 4443 of the Front-End server(s). When you have multiple Front-End servers in a pool, you also need to load-balance them.
    So, a typical Lync deployment looks like this:
             Internal users
                   |
                  443
                   |
              Internal LB
            (192.168.0.20)
              /         \
           443           443
            |             |
       Lync FE 1      Lync FE 2
    192.168.0.21   192.168.0.22
            |             |
          4443          4443
              \         /
              External LB
          (Public IP Address)
                   |
                  443
                   |
             External Users
    This should be easily replicated in Azure, as it supports both external load balancing and internal load balancing. They are even supported together in the same cloud service, so this configuration should be easy. However, it looks like "should" is the keyword here.
    After creating the external load-balanced endpoint (which listens on external port 443 and forwards to port 4443 on the servers), I'm trying to create an internal load balancer and add internal endpoints to it; however, while the ILB can be created successfully, adding an internal endpoint listening on port 443 and forwarding to port 443 on the servers fails miserably, with an error stating that port 443 is already in use by another endpoint:
    Update-AzureVM : BadRequest : Port 443 is already in use by one of the endpoints in this deployment. Ensure that the port numbers are unique across endpoints within a deployment.
    For reference, my commands are:
    Add-AzureInternalLoadBalancer -InternalLoadBalancerName "LyncILB" -ServiceName "LyncFrontEnd" -SubnetName "LabSubnet" -StaticVNetIPAddress 192.168.0.20
    (This completes successfully)
    Get-AzureVM LYNCFE1 | Add-AzureEndpoint -Name "Https-Int" -Protocol "tcp" -LocalPort 443 -PublicPort 443 -LBSetName "HttpsIntLB" -DefaultProbe -InternalLoadBalancerName "LyncILB"
    (This fails)
    The existing external endpoint is configured as such:
    Get-AzureVM LYNCFE1 | get-azureendpoint
    LBSetName : HttpsExtLB
    LocalPort : 4443
    Name : HTTPS-Ext
    Port : 443
    Protocol : tcp
    Vip :
    ProbePath :
    ProbePort : 4443
    ProbeProtocol : tcp
    ProbeIntervalInSeconds : 15
    ProbeTimeoutInSeconds : 31
    EnableDirectServerReturn : False
    Acl : {}
    InternalLoadBalancerName :
    IdleTimeoutInMinutes :
    LoadBalancerDistribution :
    The error doesn't even make a lot of sense; the external load balancer listens on a public IP address, while the internal load balancer listens on a private IP address in the internal network; there shouldn't be any conflict here... however, it looks like there is one.
    Why doesn't this work? Am I doing something wrong, or is Azure networking just being silly as usual again?

    Hello Massimo Pascucci,
    The issue that you are facing when creating an endpoint with internal loadbalancer is the limitation of not allowing same ports to be listening under a single cloud service. This reason for this is that there is a limitation of only one private IP (Also
    known as the Internal load balanced IP) per cloud service.
    There is also a limitation on the Internal load balancer more than one port to be published per load balancer:
    You can leave your feedback by following the link below:
    https://social.msdn.microsoft.com/Forums/en-US/1805c5a0-3906-4cd6-8561-9802d77e0ae5/is-it-possible-to-use-both-an-ilb-and-an-elb-listening-on-the-same-port-in-the-same-azure-cloud?forum=WAVirtualMachinesVirtualNetwork
    Refer to this article for more information on Internal load balancer:
    http://azure.microsoft.com/blog/2014/05/20/internal-load-balancing/
    Thanks,
    Syed Irfan Hussain

  • Need advice for starting a Managed Cloud Service for Small Businesses

    I hope this is in the right forum.  I have done a lot of research and searching but haven't found anything that specifically answers, in total, what I am wanting to accomplish.  I live in a small town and want to start a Managed Cloud Service for small to small-medium businesses in my area (2-30 users for each business).  I want to market this to have businesses replace their in-house server(s) with virtual ones I would host in a local data center with my own equipment that I would maintain.  I am just starting off, so I don't have any clients I do this for currently, but I get asked about this frequently.  I want to run a 2012 R2 Domain Controller and a Hyper-V 2012 R2 server.  The virtual servers I will host are going to be for AD, RDS, FTP, and files.  Software examples that people are going to be using these virtual servers for are QuickBooks, Sage Accounting, Remote Desktop or RemoteApp, custom CRM or small database software, Office 2013, etc.  No Exchange currently, but I will probably configure something for that in the future (maybe run 1-3 virtually for now if someone asks, but will only do it if the user base is fairly small, ~under 10 users).  I only have 1 static IP to work with, over a 100 Mbps connection up and down.
    For hardware, I am figuring something along the lines of this:
    (1) 1U, single CPU w/2-4 cores, 8GB RAM, 2x73GB SAS 10k RAID 1, dual PSU, running Windows Server 2012 R2 - Domain Controller
    (1) 2U, 2x 8-core Xeon ~2.6GHz, 80GB RAM, 8x600GB SAS 15k in RAID 10 for storage (VHDX files, etc.), RAID 1 small basic drives (or USB stick) for OS, dual PSU, quad GB NIC which I can use for load balancing/teaming, running Hyper-V 2012 R2 - Hyper-V Virtual Server
    (1) GB unmanaged network switch & (1) Cisco 5510 firewall
    Most of my questions are about the best way to configure this.  I am planning on managing my Hyper-V from the physical Domain Controller server.  Each virtual server will have RDS & (possibly) AD services on a single server.
    1) I want to replicate the physical Domain Controller.  Should I get another server or just virtualize the replica in Hyper-V?  I understand that if the Hyper-V host goes down, so does my DC replica.
    2) Should I use my Domain Controller to manage ALL users on each virtual server, by creating separate Organizational Units for each business?
    3) Should I set up my domain controller with Hyper-V management and then make each virtual server a separate domain (Ex. mydomain.local, business1.local, business2.local, etc.), each one completely separate with no connection to any other?  Or should I do subdomains (business1.mydomain.local)?
    4) What I have read is that subdomains are a pain to manage with user rights, etc.  I want to keep each server completely separated from the others over a network connection; I suppose the VLAN options through Hyper-V do this?  I don't want wandering users to stumble upon another business's files (I know they would probably be prompted with a login for that business/domain).
    5) For each virtual server, I want to create an HTTP subdomain of my domain name that points to that server (Ex: business1.mydomain.com, business2.mydomain.com, etc.).  I want them to be able to access only their RemoteApps, or to be able to type that address into their Remote Desktop program as the host name.  This would be for viewing the RemoteApp login page and RemoteApps for that business over HTTP/S through a browser.
    6) If I do not have separate DCs in each virtual server and my main DC manages each one, is there a way to connect up each company's RemoteApps using a single site that only shows what they are assigned, based on their login?  (Ex. http://login.mydomain.com, which then shows that user what they are assigned on their own virtual server)
    7) Since each business will use the same ports for RemoteApp (443) & RDC (3389 unless I change it), how would I set up the subdomains to point to their correct server and not overlap or mess with any of the other servers, since it's all over 1 static WAN IP for all servers?  That's why I figured setting up IIS subdomains would solve this.
    8) For backups or Hyper-V replication, is it better to have software that backs up the ENTIRE Hyper-V server (Acronis Advanced Backup for Hyper-V) as well as replication, or just backups?  Or should I do separate file backups on each virtual server with a replica?  Can a replica be a slower server since it's just a backup? (Ex. 1x 8-core, 80GB, 8x600GB 10k SAS)
    9) For the servers that will be using FTP, can I again rely on the subdomains to determine which server to connect to on port 21, without changing each FTP server's port?  I just want each business/person to type in the subdomain for their business and have it connect to their assigned FTP directory over port 21.
    10) If the physical DC manages DNS for all virtual servers, can I forward subdomain requests to the proper virtual server so they connect to the correct RemoteApp screen, etc.?  Again, all I have is 1 IP.
    I hope all of these questions make sense.  I just want every business to be independent of the others on the Hyper-V host, each on their own virtual server, all without changing default ports on each server, each server running RDS, (possibly) AD, (a few) FTP, and all over a common single WAN IP.  I am hoping subdomains (possibly managed through IIS on the physical DC) will redirect users to their appropriate virtual server.

    If you really want to run your own multi-tenant service provider cloud, Microsoft has defined the whole setup needed.
    They call it the Infrastructure as a Service Product Line Architecture.  You can find the full documentation here -
    http://blogs.technet.com/b/yuridiogenes/archive/2014/04/17/infrastructure-as-a-service-product-line-architecture.aspx
    There are several different ways of configuring and installing it.  Here is a document I authored that provides step-by-step instructions for deploying into a Cisco UCS and EMC VSPEX environment -
    http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_mspc_fasttrack40_phase1.pdf
    This document contains the basic infrastructure required to manage a private cloud.  I will soon be publishing a document to add the Windows Azure Pack components onto the above configuration.  That is what would more easily provide a multi-tenant experience with an Azure look and feel.  It is not Azure, but the Azure Pack is a series of applications, some of which came from Azure, that provides Azure-like capabilities, only in a service provider type of environment.
    Whether you use my document or not (which has actually corrected errors found in the Microsoft documentation), you should take a look at it to see what it takes to put something like this up, if you are really serious about it.  It is not a small undertaking.  It requires a lot of moving pieces to be coordinated.  Yes, my document is designed to scale to a large environment, but you need the components that are there.  No need re-inventing the wheel.  Microsoft's documentation is based on a lot of real hands-on experience from their consulting organization, which has been doing this for customers for years.  This one is also known as Fast Track 4.  I've done 2 (2008 R2) and 3 (2012) as well, and it just keeps getting more complicated based on customer demands and expectations.
    Good luck!
    tim

  • What are Azure limitations for Websockets in Cloud Services (web and worker role)?

    A WebSocket server is to be built on the Azure platform with on-premises connections, and we have questions regarding limitations for WebSockets in Azure Cloud Services (web and worker roles).
    WebSockets can be configured for Web Sites and those limitations are understood, but Azure Websites is not an option.
    Nevertheless, it is planned to run a web service (without UI - no web site) as a cloud service which has secure WebSocket (WSS) connections to on-premises machines.  The WebSocket protocol is enabled for IIS8 on the cloud service web and worker roles.  Azure Service Bus Relay is not an option.
    Questions:
    1) Are Websockets supported for Azure Cloud services web and worker roles? we assume yes
    2) What are potential limitations from Azure side to support concurrent Websocket connections? We are aware that CPU, memory etc are limitations, but are there additional limitations from MS Azure side? 
     

    Hi,
    As far as I know, Azure cloud service web and worker roles do support WebSockets; users connect to the role through the input endpoint declared in the service definition. If we use an Azure cloud service, I think we can monitor metrics such as CPU, memory, etc., and scale the cloud service on those metrics to keep the WebSockets working; refer to
    http://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-scale/ for more information about how to scale a cloud service. A rough sketch of such an endpoint declaration follows.
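    For illustration only (this is a sketch; the role name, certificate name, and startup script are assumptions, not from your deployment), a ServiceDefinition.csdef along these lines declares an HTTPS input endpoint for WSS traffic and a startup task that enables the IIS WebSocket feature:
    <ServiceDefinition name="WssCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="WssWebRole" vmsize="Small">
        <Sites>
          <Site name="Web">
            <Bindings>
              <!-- bind the site to the HTTPS endpoint declared below -->
              <Binding name="HttpsIn" endpointName="HttpsIn" />
            </Bindings>
          </Site>
        </Sites>
        <Endpoints>
          <!-- wss:// connections arrive on 443 like any other TLS traffic -->
          <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
        </Endpoints>
        <Certificates>
          <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
        </Certificates>
        <Startup>
          <!-- enable-websockets.cmd (assumed) would run:
               dism /online /enable-feature /featurename:IIS-WebSockets -->
          <Task commandLine="enable-websockets.cmd" executionContext="elevated" taskType="simple" />
        </Startup>
      </WebRole>
    </ServiceDefinition>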
    Regards

  • Can't Delete Private Cloud - tbl_WLC_PhysicalObject not being updated

    I am having issues with our SCVMM instance where I can't delete private clouds, even if they're empty.
    When I right-click the private cloud and click Delete, the "Jobs" panel says it finished successfully; however, the private cloud is not deleted.
    After doing some research, I believe it's because entries in the tbl_WLC_PhysicalObject database table are not being updated correctly when a VM is moved from one private cloud to another.  After determining the "CloudID" of the private cloud I am trying to delete, I still see resources assigned to that private cloud in the tbl_WLC_PhysicalObject table, even though from the VMM Console the private cloud shows up empty.
    For testing purposes, I assigned a VM back to the private cloud I am trying to delete, only to move it out again and gather some tracing/logging.  When I moved the VM back out of the private cloud, I had SQL Profiler running in the background, capturing the SQL statements on the DB server.  Looking at the "exec dbo.prc_WLC_UpdatePhysicalOBject" statement, I see the @CloudID variable is assigned the "CloudID" of the private cloud the VM is currently assigned to (the private cloud I am trying to delete), and NOT the CloudID of the private cloud the VM is being moved to.
    Instead of having the VMM Console GUI do the private cloud assignment/change, I copied the PowerShell commands out so I could run them manually (a sketch of the equivalent commands is below).  The script gets 4 variables ($VM, $OperatingSystem, $CPUType, and $Cloud), and then runs the "Set-SCVirtualMachine" cmdlet.  The $Cloud variable does return the proper "CloudID" of the private cloud I am trying to move the VM to (I ran it separately and then ran an ECHO $Cloud to look at its value).  When I run the "Set-SCVirtualMachine" cmdlet, the output has values for "CloudID" and "Cloud", and these are still the values of the source private cloud (the one I am moving the VM out of and ultimately want to delete).
    Has anyone run into this?  Is something not processing right in the "Set-SCVirtualMachine" cmdlet?
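    For reference, the manual move looks roughly like this (a sketch only; the VM and cloud names are placeholders, and the original script also set $OperatingSystem and $CPUType, which are omitted here):
    # Placeholders: replace with the actual VM and target cloud names.
    $VM    = Get-SCVirtualMachine -Name "TestVM01"
    $Cloud = Get-SCCloud -Name "TargetPrivateCloud"

    # Reassign the VM to the target private cloud.
    Set-SCVirtualMachine -VM $VM -Cloud $Cloud

    # In the problem scenario described above, the returned object (and
    # tbl_WLC_PhysicalObject) still reports the source cloud's CloudID.
    (Get-SCVirtualMachine -Name "TestVM01").Cloud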

    I have been slowly looking into this, and this is where I am at:
    I built a development SCVMM 2012 R2 instance that mocks our production environment (minus all the VMs; just the networking configuration and all the private clouds have been mocked).  From there, I started at SCVMM 2012 R2 GA and, one by one, installed the 4 rollup patches in order.  At each new patch level, I monitored the queries coming in through SQL Profiler as I moved a VM between private clouds and created new VMs within clouds.  As I created new VMs and moved VMs between clouds, the calls to the stored procedure "prc_WLC_UpdatePhysicalOBject" all have a value of NULL for the CloudID column, so a CloudID isn't even associated with the physical objects (basically the VHDX files and any mounted ISOs I have on the VMs).
    I did find out this SCVMM instance was upgraded from SCVMM 2008 (I took over after the 2012 R2 upgrade was completed).
    I am thinking at this point that nothing is wrong with SCVMM 2012 R2 if you build and recreate it from scratch with a new DB.  I am thinking this might be a deprecated field from SCVMM 2008.  The only other thing we did was put in a SAN and move VMs from stand-alone hosts to the new CSVs (a mixture of 2008 R2 and 2012 non-R2 hosts).
    At this point, since we don't have Self-Service enabled yet, it will be a day's work to rebuild a new instance of SCVMM 2012 R2, migrate the hosts/VMs to it, and start from a clean slate.
    I know the DB structure isn't really published, but does anybody have any other insights into this?

  • Applying security on a service

    Hello,
    We are working on UCM 10gR3 10.1.3.3.156 with LinkManager 8 (build 21).
    We created a service retrieving all the documents that reference one document.
    This service is called in a custom Site Studio fragment. An anonymous user shouldn't be able to see content on which he does not have at least READ permission.
    What is happening is that if there is a result the user doesn't have access to, the login page appears.
    We would like our service to retrieve only results on which the user has at least READ access, instead of showing a login page (as the service "GET_SEARCH_RESULTS" does).
    Does anyone have an idea?
    Here is our service (custom service definition):
    Name: GET_DOC_TO_LINK
    Attributes: SearchService 33 StandardResults null null null
    Actions: 5:QdocLinksPerso:DOCUMENTS_LINK::null
    Here is our query (query definition):
    Name: QdocLinksPerso
    Query:
    SELECT M.dDocName FROM ManagedLinks M, Revisions R
    WHERE M.dDocName=R.dDocName
    AND R.dReleaseState='Y'
    AND (R.dDocType='ART' OR R.dDocType='BRV')
    AND upper(M.dLkResource)=?
    AND M.dLkResourceType='doc'
    ORDER BY dDocName
    Parameters: dDocTest varchar

    We created a service using the LinkManager tables in order to find all the documents referencing (by a link) one specific document.
    The problem is that the resultset created by this service always returns all the documents referencing that document, even those on which the user doesn't have READ privileges; in that case, the user is asked to log in and can't access the page (Permission denied).
    On the contrary, GET_SEARCH_RESULTS creates a resultset which takes the user's rights into account, and that service doesn't include documents the user can't read in the resultset.
    Based on the service I described in my first message (GET_DOC_TO_LINK), I would like this service to retrieve only results accessible by the user (eliminating "private" content from the resultset).

  • Direct Traffic to one node in Cloud Service

    Is there a way to make a cloud service (web role) always use one of the nodes (there are two in this scenario) and only direct traffic to the other one if the first one gets shut down?  We have a piece of legacy software we use in our cloud service and we are waiting on it being updated so it can work across the load balancer, but currently it does not.  So we only have one node in the service.  I would like to have two, and see if we can always hit node 1, for example, and then hit node 2 if node 1 is down.  I am assuming, maybe wrongly, that this would still be within SLA, and that if maintenance was happening then both nodes would not get shut down.  Is it possible to maybe have a web role class which controls the load balancer traffic?
    Thanks
    Eamonn

    Hi Eamonn,
    Thanks for your posting!
    >>Is there a way to make a cloud service (web role) always use one of the nodes, there are two in this scenario,...
    Based on my experience, if we have one or more instances of a VM (web or worker role), the traffic will be distributed amongst the instances.  We aren't allowed to specify which instance.  Please see David's post (http://stackoverflow.com/a/12726613).
    >> I would like to have two and see if we can always hit node 1 for example, and then hit node 2 if 1 is down. I am assuming, maybe.....
    For this requirement, I suggest you try these steps:
    1. Create a web role that exposes a web service, or create a web role using a WCF service.
    2. Create a service method that returns the instance information, like this:
    // Requires: using System.Web.Services; using Microsoft.WindowsAzure.ServiceRuntime;
    [WebMethod]
    public ReturnResult ReverseString(string value)
    {
        ReturnResult rr = new ReturnResult();
        // Echo the value back and report which role instance handled the call.
        rr.ReturnString = value;
        rr.HostName = RoleEnvironment.CurrentRoleInstance.Id;
        return rr;
    }

    // Result class returned to the caller.
    public class ReturnResult
    {
        private String returnString;
        public String ReturnString
        {
            get { return returnString; }
            set { returnString = value; }
        }

        private String hostName;
        public String HostName
        {
            get { return hostName; }
            set { hostName = value; }
        }
    }
    3. Host this service in your Azure cloud service.
    4. Create a new project, whether it is a web page or a console application.
    5. Use the thread pool or a BackgroundWorker to send concurrent requests (a rough client sketch follows below).
    6. Check the results list and host names.
    After doing this, you can see which instance served each request.
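    As an illustration of steps 4-6, here is a minimal console sketch; the service URL is a placeholder, and it assumes an endpoint that simply returns RoleEnvironment.CurrentRoleInstance.Id as plain text (calling the .asmx method above directly would instead go through a generated proxy):
    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Net;
    using System.Threading.Tasks;

    class LoadBalanceProbe
    {
        static void Main()
        {
            var hosts = new ConcurrentBag<string>();
            // Fire many concurrent requests and record which role instance answered each one.
            Parallel.For(0, 100, i =>
            {
                using (var client = new WebClient())
                {
                    hosts.Add(client.DownloadString("http://myservice.cloudapp.net/instance"));
                }
            });
            foreach (var group in hosts.GroupBy(h => h))
                Console.WriteLine("{0}: {1} responses", group.Key, group.Count());
        }
    }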
    If you'd like to customize the Azure load balancer, I recommend you refer to these documents:
    http://msdn.microsoft.com/en-us/library/azure/jj151530.aspx
    http://blogs.msdn.com/b/kdot/archive/2013/06/29/implementing-custom-load-balancing-with-sticky-connections-on-windows-azure.aspx
    Regards,
    Will
