Best practices for securing communication to internet-based SCCM clients?

What type of SSL certs does the community think should be used to secure traffic from internet-based SCCM clients? Should 3rd-party SSL certs be used? When taking an inventory of the clients' configuration, for example, in order to run reports later, how will the data be protected in transit?

From a technical perspective, it doesn't matter where the certs come from, as there is no difference whatsoever. A cert is a cert is a cert. The certs are *not* what provide the protection; they simply enable the use of SSL to protect the data in transit
and also provide an authentication mechanism.
From a logistics and cost perspective, though, there is a huge difference. You may not be aware, but *every* client in IBCM requires its own unique client authentication certificate. This will get very expensive very quickly, and it is a recurring cost because
certs expire (most commercial cert vendors rarely offer certs valid for more than 3 years). Also, deploying certs from a 3rd party is not a trivial endeavor -- you more or less run into chicken-and-egg issues here. With an internal Microsoft PKI, if designed
properly, there is zero recurring cost and deployment to internal systems is trivial. There is still certainly some cost and overhead involved, but it is dwarfed by what comes with using a third-party CA for IBCM certs.
Jason | http://blog.configmgrftw.com | @jasonsandys
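The recurring-cost point above comes down to certificate validity windows. As a rough illustration (not ConfigMgr tooling - just the stdlib `ssl` helper for parsing a cert's notAfter field; the date used is a made-up example), here is how you might compute the remaining lifetime of a client authentication cert:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a cert's notAfter date (OpenSSL text format)."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    return (expires - now).days

# Example: a cert issued at the 3-year commercial maximum mentioned above.
now = datetime(2015, 1, 1, tzinfo=timezone.utc)
print(days_until_expiry("Jan 01 00:00:00 2018 GMT", now))  # 1096
```

Run against a fleet's cert inventory, a check like this turns expiry into a planned renewal rather than a surprise loss of client communication.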

Similar Messages

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top level service that is defined as business process is secure itself through OWSM and username and passwords, but what is the best practice for security establishment for each low level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram
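    The network-layer rule described above (allow access to the app server only from the Gateway) is ultimately an allow-list check. A minimal sketch of the idea in Python - the subnet and names are invented for illustration, and in practice this enforcement belongs in a firewall rule, not application code:

```python
from ipaddress import ip_address, ip_network

# Hypothetical gateway address range - substitute your real gateway hosts.
GATEWAY_HOSTS = ip_network("10.0.5.0/29")

def allowed(source_ip: str) -> bool:
    """Reject direct calls that bypass the security gateway."""
    return ip_address(source_ip) in GATEWAY_HOSTS

print(allowed("10.0.5.3"))   # True  - request came via the gateway range
print(allowed("10.0.9.12"))  # False - direct access, bypassing enforcement
```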

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to get the best practice for securing a point-to-multipoint wireless bridge link. Link point A to B, C, & D; and B, C, & D back to A. Which authentication method and configuration included in the Aironet 1410 IOS are best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on 1400 should help you
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm

  • Best practice for securing confidential legal documents in DMS?

    We have a requirement to store confidential legal documents in DMS and are looking at options to secure access to those documents. We are curious to know: what is the best practice, and how are other companies doing it?
    TIA,
    Margie
    Perrigo Co.

    Hi,
    The standard practice for such scenarios is to use the 'authorization' concept. You can give every user authorization to create, change, or display these confidential documents. In this way, you can control access authorization. The SAP DMS system monitors how you work and prevents you from displaying or changing originals if you do not have the required authorization.
    The link below will provide you with an improved understanding of the authorization concept and its application in DMS:
    http://help.sap.com/erp2005_ehp_04/helpdata/en/c1/1c24ac43c711d1893e0000e8323c4f/frameset.htm
    Regards,
    Pradeepkumar Haragoldavar
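    The authorization concept above boils down to checking a user's allowed activities (create/change/display) per document type before acting. A toy sketch of that check - the users and document types are invented, and this is not SAP's actual authorization-object mechanism:

```python
# Hypothetical authorization table: user -> document type -> activities.
AUTHORIZATIONS = {
    "margie": {"legal": {"create", "change", "display"}},
    "intern": {"legal": {"display"}},
}

def authorized(user: str, doc_type: str, activity: str) -> bool:
    """True if the user holds the activity for this document type."""
    return activity in AUTHORIZATIONS.get(user, {}).get(doc_type, set())

print(authorized("intern", "legal", "display"))  # True
print(authorized("intern", "legal", "change"))   # False
```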

  • Best practices for realtime communication between background tasks and main app

    I am developing (in fact, porting to a WinRT Universal App) an application connecting to Bluetooth medical devices. In order to support background connectivity, it seems best to use background tasks triggered by a device connection. However, some of these
    devices provide a stream of data which has to be passed to the main app in real time when it is active - i.e. to show an ECG on the screen. So my task ideally should receive and store data all the time (both background and foreground) and additionally make
    the main app receive it live when it is in the foreground.
    My question is: how do I make the background task pass real-time data to the app when it is active? The documentation talks about using storage, but it does not seem optimal for real-time messaging. Looking for best practices and advice. The platform is Windows 8.1
    and Windows Phone 8.1.

    Hi Michael,
    Windows Phone apps have resource quotas. To prevent this from interfering with real-time communication functionality, background tasks using the ControlChannelTrigger and PushNotificationTrigger receive guaranteed resource quotas for every running task. You can
    find more information at
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh977056(v=win.10).aspx - see the "Background task resource guarantees for real-time communication" section. ControlChannelTrigger is not supported on Windows Phone, so have a look at the PushNotificationTrigger
    class:
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.applicationmodel.background.pushnotificationtrigger.aspx.
    Regards,
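    Platform specifics aside, the store-always/display-when-active pattern from the question can be sketched with a plain thread-safe queue. This is illustrative Python, not the WinRT API (where the background work would run under a trigger such as PushNotificationTrigger):

```python
import queue
import threading

# The background reader pushes samples onto a thread-safe queue; the
# foreground UI drains it whenever it becomes active.
samples = queue.Queue()

def background_task(readings):
    for r in readings:      # e.g. an incoming ECG sample stream
        samples.put(r)      # store every sample, foreground or not

def drain_for_display():
    """Called by the foreground app: take everything buffered so far."""
    shown = []
    while not samples.empty():
        shown.append(samples.get())
    return shown

t = threading.Thread(target=background_task, args=([1.2, 1.4, 1.1],))
t.start()
t.join()
displayed = drain_for_display()
print(displayed)  # [1.2, 1.4, 1.1]
```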

  • Best practices for mass reimaging? Having unmanaged duplicate clients with mismatched Resource IDs. (2012 R2)

    We reimage about 5000-8000 clients each summer. Last summer we were still at RTM, but have since moved on to R2. We build images (Win7 SP1) in vsphere and capture via capture media TS. Last year's image didn't get SCCM client uninstalled prior to capture
    so we would have random issues with computers going to unmanaged status and some that would show up without a client cert. To avoid this issue we stripped the client out prior to capture this year.
    I believe this is how we handled the reimage process last year as well, but I am not positive on that. We were also dealing with a lot of new laptops last summer, whereas they obviously have existing records this summer. Since SCCM replaces the wired MAC
    address with the wireless MAC (laptops), we can't just toss these into an OSD collection because they won't pick up the OSD advertisement / PXE. (Is there any workaround for this?) Since this is the case, we are blowing away each client's AD account and DDR in
    SCCM, then doing a mass import of hostname and wired MAC into SCCM, dumping them into the appropriate OSD collection, and they image unless they happen to pick up last year's PXE deployment that first has to be cleared, or unless they had a motherboard replaced
    and our MAC database didn't get updated. We did the mass import a week ago and the manual entries are listed with hostname and MAC and entry date of 7/9/2014. This week we started imaging. Almost immediately after reimaging (at which time the AD record is
    created upon rejoining domain) we see a second account show up in SCCM from AD Discovery with dates of 7/14/2014 and 7/15/2014. Neither account is managed or shows that SCCM client is installed, but it shows a site code.
    The manual entry lists an agent name, agent site, wired MAC, name, NetBIOS name, Resource ID of 167xxxxx, assigned site, and CCM GUID.
    The AD Discovery record shows agent name, agent site, domain, IPv4 and IPv6 addresses, name, NetBIOS name, primary group ID, domain, Resource ID of 20971xxxxx, resource name and type, SID, assigned sites, container name, UAC, etc.
    Why won't these records merge and show up as being properly managed? I am not yet sure if they fix themselves after one record or the other is deleted. Obviously this process isn't working well and it removes the clients from their direct membership collections
    and AD groups. I'd think that all of this could be avoided if we just had the wired MAC persist in the DDR.

    Last year's image didn't get SCCM client uninstalled prior to capture so we would have random issues with computers going to unmanaged status and some that would show up without a client cert. To avoid this issue we stripped the client out prior to capture.
    There is no reason or need to do this. There is no correlation between the two as long as the client agent was properly prepared (which does happen with capture media, although you should strongly consider using a build-and-capture task sequence). Clients
    are perfectly capable of living within an image -- I do it all the time, and it is a common practice.
    Since SCCM replaces the wired MAC address with the wireless MAC (laptops) we can't just toss these into an OSD collection because it won't pickup the OSD advertisement / PXE. (Is there any workaround for this?)
    This is not correct and thus also unnecessary, as ConfigMgr will use the MAC address *or* the SMBIOS GUID of the system to determine targeting during OSD. The SMBIOS GUID is an immutable unique ID set by the OEM that is also part of the resource record in ConfigMgr.
    Jason | http://blog.configmgrftw.com
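    The MAC-or-SMBIOS-GUID targeting rule above can be sketched as follows. This is an illustrative model only - not ConfigMgr's actual record-matching code - and the records and GUID are made up:

```python
# A machine matches a resource record if either the MAC address or the
# SMBIOS GUID matches (mirroring the targeting rule described above).
def records_match(a: dict, b: dict) -> bool:
    same_mac = a.get("mac") and a["mac"] == b.get("mac")
    same_guid = a.get("smbios_guid") and a["smbios_guid"] == b.get("smbios_guid")
    return bool(same_mac or same_guid)

manual_entry = {"mac": "00:11:22:33:44:55", "smbios_guid": None}
discovered   = {"mac": "66:77:88:99:AA:BB",  # wireless MAC replaced wired
                "smbios_guid": "4C4C4544-0042-3010-8057-B9C04F593032"}
reimaged     = {"mac": "00:11:22:33:44:55",
                "smbios_guid": "4C4C4544-0042-3010-8057-B9C04F593032"}

print(records_match(manual_entry, reimaged))  # True  (wired MAC matches)
print(records_match(discovered, reimaged))    # True  (SMBIOS GUID matches)
```

    The practical consequence: a manual import keyed on wired MAC and a discovered record keyed on SMBIOS GUID can both still target the same physical machine during OSD.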

  • Best Practice For Secure File Sharing?

    I'm a newbie to both OS X Server and file-sharing protocols, so please excuse my ignorance...
    My client would like to share folders in the most secure way possible; I was considering that what might be the best way would be for them to VPN into the server and then view the files through the VPN tunnel; my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN (i.e. from inside of the internal network)... I don't see any options in Server Admin to restrict users in that way....
    I'm not afraid of the command line, FYI, I just don't know if this is:
    1. Possible!
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    Thanks for any suggestions!

    my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN
    Simple - don't expose your server to the outside world.
    As long as you're running on a NAT network behind some firewall or router that's filtering traffic, no external traffic can get to your server unless you setup port forwarding - this is the method used to run, say, a public web server where you tell the router/firewall to allow incoming traffic on port 80 to get to your server.
    If you don't setup any port forwarding, no external traffic can get in.
    There are additional steps you can take - such as running the software firewall built into Mac OS X to tell it to only accept network connections from the local network, but that's not necessary in most cases.
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    VPN should take care of most of your concerns - at least as far as the file server is concerned. I'd be more worried about what happens to the files once they leave the network - for example have you ensured that the remote user's local system is sufficiently secured so that no one can get the documents off his machine once they're downloaded?

  • Best practice for secure zone various access

    I am setting up a new site with a secure zone.
    There will be a secure zone. Once logged in, users will have access to search and browse medical articles/resources
    This is how an example may go:
    The admin user signs up Doctor XYZ to the secure zone.
    The Doctor XYZ is a heart specialist, so he only gets access to web app items that are classified as "heart".
    However, he may also be given access to other items, eg: "lung" items.
    Or, even all items. It will vary from user to user.
    Is there any way to separate areas within the secure zone and give access to those separate areas (without having to give access to individual items - which will be a pain because there will be hundreds of records; and also without having the user log out and log into another secure area)?


  • Best Practices for securing VTY lines?

    Hi all,
    The thread title makes this sound like a big post but it's not. 
    If my router has, say, 193 VTY lines as a maximum, but by default the running-config has only a portion of those mentioned, should I set any configs I do on all lines, or just on the lines sh run shows? Example:
    sh run on a router I have with the default config has:
    line vty 0 4
    access-class 23 in
    privilege level 15
    login local
    transport input telnet ssh
    line vty 5 15
    access-class 23 in
    privilege level 15
    login local
    transport input telnet ssh
    Yet, I have the option of configuring up to 193 VTY lines:
    Router(config)#line vty ?
      <0-193>  First Line number
    It seems lines 16-193 still exist in memory, so my concern is that they are potentially exposed somehow to exploits or whatnot. So my practice is to do any configs using vty 0 193 to ensure universal configuration. But, by "enabling" the extra lines, am I using more memory, and how secure is this against somebody trying to, say, connect 193 times to my router simultaneously? Does it increase the likelihood of success of a DoS attack, for example?

    Hi guys, thanks for the replies and excellent information. I'm excited to look at the IOS Hardening doc and the other stuff too.
    Just to clarify, I don't actually use the default config, I only pasted it from a new router just to illustrate the default VTY line count. 
    I never use telnet from inside or outside; anything snooping a line will pick up the cleartext, as you both know of course. SSH is always version 2, etc.
    I was considering doing a console server from the inside as the only access method - which I do have set up, but I have to remote to it. It's just that with power outages at times, the console PC won't come back up (no BIOS setting to return to previous state, no WOL solution in place), so now I have both that plus the SSH access. I have an ACL on both the VTY lines themselves as well as a ZBFW ACL governing SSH - perhaps a bit redundant in some ways, but oh well; if there's a zero-day out there for turning off the ZBFW, I might still be protected.
    Regretfully I haven't learned about AAA yet - that I believe is in my CCNA Security book, but first I need to get other things learned.
    And with regard to logging in general, both enabling the right kind and monitoring it properly, that's a subject I need to work on big time. I still get port 25 outbound sometimes from a spam bot, but by the time I manually do my sh logging | i :25 I have missed it (due to cyclic logging with a buffer at 102400). Probably this would be part of that CCNA Security book as well.
    So back to the number of VTY lines. I will see what I can do to reduce the line count. I suppose something like "no line vty 16 193" might work; if not, it'll take some research.
    But if an attacker wants to jam up my VTY lines so I can't connect in, once they've fingerprinted the unit a bit to find out that I don't have an IPS running for example, wouldn't it be better that they have to jam up 193 lines simultaneously (with, I presume, 193 source IPs) instead of 16? Or am I just theorizing too much here? It's not that this matters much; anybody who cares enough to hack this router will get a surprise when they find out there's nothing worth the effort on the other side. But this is more so I can be better armed for future deployments. Anyway, I will bookmark the info from this thread and am looking forward to reading it.
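    One way to reason about the "which lines are actually configured" question is to audit the config programmatically and see which VTY line numbers no `line vty` stanza covers. A rough sketch - the sample config is the default shown earlier, and the parsing is deliberately simplistic:

```python
import re

# Default config from the question: two stanzas covering lines 0-15.
CONFIG = """\
line vty 0 4
 transport input ssh
line vty 5 15
 transport input ssh
"""

def configured_vty_lines(config: str) -> set:
    """Return the set of VTY line numbers covered by `line vty X Y` stanzas."""
    covered = set()
    for lo, hi in re.findall(r"^line vty (\d+) (\d+)$", config, re.M):
        covered.update(range(int(lo), int(hi) + 1))
    return covered

covered = configured_vty_lines(CONFIG)
uncovered = set(range(194)) - covered
print(max(covered), len(uncovered))  # 15 178 -> lines 16-193 are uncovered
```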

  • Best Practices for Securing Oracle e-Business Suite -Metalink Note 189367.1

    Ok, we have reviewed our financials setup against the titled Metalink document. But we want to focus on security and configuration specific to the Accounts Payable module of Oracle Financials. Can you point me in the direction of any useful documents for this or give me some pointers?


  • Best practices for securely storing environment properties

    Hi All,
    We have a legacy security module that is included in many different applications. Historically, the settings (such as database/LDAP username and password) were stored directly in the files that use them. I'm trying to move towards a more centralized and secure method of storing this information, but need some help.
    First of all, I'm struggling a little bit with proper scoping of these variables. If another application does a cfinclude on one of the assets in this module, these environment settings must be visible to the asset, but preferably not visible to the 'calling' application.
    Second, I'm struggling with the proper way to initialize these settings. If other applications run a cfinclude on these assets, the Application.cfm in the local directory of the script that's included does not get processed. I'm left with running an include statement in every file, which I would prefer to avoid if at all possible.
    There are a ton (>50) of applications using this code, so I can't really change the external interface. Should I create a component that returns the private settings and then set the 'public' settings with Server scope? Right now I'm using Application scope for everything because of a basic misunderstanding of how the Application.cfm files are processed, and that's a mess.
    We're on ColdFusion 7.
    Thanks!
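    Language aside, the "component that returns the private settings" idea amounts to keeping credentials behind an accessor so only the security module itself can read them, while callers see only public settings. A rough Python analogy - the hosts, passwords, and function names are invented, and ColdFusion's scoping rules differ:

```python
# Credentials live in a factory closure; callers only ever receive the
# public settings and a narrow credential-checking function.
def make_settings():
    _private = {"db_password": "s3cret", "ldap_password": "hunter2"}
    public = {"db_host": "db01.example.com", "ldap_url": "ldap://dir01"}

    def get_public():
        return dict(public)  # hand out a copy, never the originals

    def db_credentials_valid(password):
        # the only code path that touches the private dict
        return password == _private["db_password"]

    return get_public, db_credentials_valid

get_public, db_credentials_valid = make_settings()
print("db_password" in get_public())   # False
print(db_credentials_valid("s3cret"))  # True
```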

    Hi,
    Thank you for posting in Windows Server Forum.
    As per my research, we can create a script for patching the server, and you have 2 servers for each role. If these are the primary and backup servers respectively, then you can update each server separately and redirect the traffic to the other server. After
    completing this for one server, you can perform the same steps for the other server, because as far as I know we need to restart the server once for the patching update to apply successfully.
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best practice for class communication

    Hi all,
    I'm trying to get my head around how to communicate through scope. I hope/think this is a pretty basic question but I'm pretty stuck in the AS2 mentality.
    To make a simple example, say I have 2 classes, ClassA and ClassB. ClassA loads an instance of ClassB:
    package {
      public class ClassA {
         private var _classB:ClassB;
         public function ClassA() {
             _classB = new ClassB();
         }
      }
    }
    class ClassB {}
    Now I want ClassB to communicate with ClassA. It's easy for ClassA to invoke a method on ClassB - _classB.somePublicMethod(); - but how does ClassB communicate back up to ClassA?
    I know one method is making a custom event and adding a listener in ClassA that binds to ClassB, while having ClassB dispatch that custom event. Is there any easier way I'm not aware of? Some kind of parent/super/etc. method of talking to the class that instantiated ClassB?
    edit:
    In case it matters or changes the approach someone would recommend, what I'm trying to do is make some touchscreen-esque scrolling functionality - the same stuff built into any smartphone. If you touch the screen and drag, the touchscreen in certain contexts knows you want to scroll the information you're looking at. If you just tap something, however, it knows that you want to interact with what you tapped. That's what I'm trying to figure out how to do in AS3.
    I have a touchscreen and I have navigation that doesn't fit all on the same screen. I'm trying to allow the user to scroll the navigation up/down to see all of the options but also let the user tap a nav item to let them select it. So I'm making a reusable class that's pretty abstract on the concept of allowing me to point to an object and say this object can be clicked, but if the user drags they intend to scroll a display.
    The actual end use of this is, as a personal learning exercise, I'm trying to duplicate a doodle/paint program my 3yr old son has in flash. He has a touchscreen laptop and he can scroll through a long list of artwork he can touch and it paints it on the screen. I'm trying to mimic that functionality where I try to determine if someone is trying to drag/scroll a list or select something in the list.
    That said, in that context ClassA is the painting app and ClassB is a reusable class that's applied to a navigation area whose job is to inform ClassA whether the user intends to drag or select something. I make my nav items in ClassA and then instantiate ClassB. Because I need to 'wait' until ClassB tells ClassA what the user is doing, it's not a return-value type of situation. I have to wait for ClassB to figure out if the person is trying to click or drag, then communicate that back to ClassA so it knows how to handle it.

    I will definitely use an event. I've never made a custom event, but the top Google search (always a blog) has good comments on the approach, so I'm going with it.
    Anyone think that approach is bad/outdated/refineable?
    edit:
    Man, it's just one of those days. This is all working fine and well but I can honestly say in no project have I ever needed to make a custom event and I've been using flash since the early 90s with nothing but telltarget.....
    I do have one question, because I (admittedly) spent a freaking hour (*sigh*) on trying to figure out why I'd dispatch an event and it wasn't picked up.
    A quick pseudo example:
    package {
      public class ClassA {
        private var _classB:ClassB = new ClassB();
        public function ClassA() {
            addEventListener(CustomEvent.WHATEVER, _doSomething);
        }
        // _doSomething func......
      }
    }
    class ClassB extends Sprite {
      // somewhere inside ClassB:
      // parent.dispatchEvent(new CustomEvent(CustomEvent.WHATEVER, { foo:"bar" }));
    }
    class CustomEvent extends Event {
      public static const WHATEVER:String = "whatever";
      public var params:Object;
      public function CustomEvent(type:String, params:Object, bubbles:Boolean = false, cancelable:Boolean = false) {
            super(type, bubbles, cancelable);
            this.params = params;
      }
      // clone/toString overrides.....
    }
    Is it better semantics to do it that way with parent.dispatchEvent(), or should I have done _classB.addEventListener(...) and then this.dispatchEvent() in ClassB?
    What screwed me up for an hour was that I was using this.dispatchEvent() instead of parent.dispatchEvent(), and the event was never seen in the parent. It (in hindsight) makes obvious sense that I need to dispatch the event in the scope of whatever is looking for the event, but somehow that wasn't really explained to me in the tutorials (like the one I linked). Their examples made the event, listener, and dispatcher in the same place. I'm dispatching the event from a separate class, so it didn't occur to me I needed to send that event back to the scope the listener existed in... Oy vey...
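    The scope point in the last paragraph - dispatch where the listener lives, or listen directly on the object doing the dispatching - is the core of the observer pattern. A language-neutral sketch in Python (illustrative names, not the Flash/AS3 API), where the parent listens directly on the child so the dispatch scope question never arises:

```python
class Dispatcher:
    """Minimal event dispatcher: listeners registered per event type."""
    def __init__(self):
        self._listeners = {}

    def add_listener(self, event_type, fn):
        self._listeners.setdefault(event_type, []).append(fn)

    def dispatch(self, event_type, payload=None):
        for fn in self._listeners.get(event_type, []):
            fn(payload)

class ClassB(Dispatcher):
    def finish_gesture(self, kind):
        # the child dispatches on itself (this.dispatchEvent equivalent)
        self.dispatch("whatever", {"gesture": kind})

received = []
b = ClassB()
b.add_listener("whatever", received.append)  # parent listens on the child
b.finish_gesture("drag")
print(received)  # [{'gesture': 'drag'}]
```

    Listening on the child (`_classB.addEventListener(...)` in AS3 terms) tends to be the cleaner choice, since the child then needs no knowledge of its parent.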

  • Best Practice for starting & stopping HA msg nodes?

    Just set up a cluster and was trying to start-msg ha and getting an error about the watcher not being started. Does that have to be started separately? I figured start-msg ha would do both.
    For now I set this up in the startup script. Will the SMF messaging.xml work with HA? What's the right way to do this?
    /opt/sun/comms/messaging64/bin/start-msg watcher && /opt/sun/comms/messaging64/bin/start-msg ha
    -Ray

    ./imsimta version
    Sun Java(tm) System Messaging Server 7.3-11.01 64bit (built Sep 1 2009)
    libimta.so 7.3-11.01 64bit (built 19:54:45, Sep 1 2009)
    Using /opt/sun/comms/messaging64/config/imta.cnf (not compiled)
    SunOS szuml014aha 5.10 IDR142154-02 sun4v sparc SUNW,T5240
    Sun Cluster 3.2. And we are following the ZFS doc. I haven't actually restarted the box yet; I'm just doing configs and testing still, and noted that.
    szuml014aha# ./start-msg
    Warning: a HA configuration is detected on your system,
    use the HA start command to properly start the messaging server.
    szuml014aha# ./start-msg ha
    Connecting to watcher ...
    Warning: Cannot connect to watcher
    Critical: FATAL ERROR: shutting down now
    job_controller server is not running
    dispatcher server is not running
    sched server is not running
    imap server is not running
    purge server is not running
    store server is not running
    szuml014aha# ./start-msg watcher
    Connecting to watcher ...
    Launching watcher ... 11526
    szuml014aha# ./start-msg ha
    Connecting to watcher ...
    Starting store server .... 11536
    Checking store server status ...... ready
    Starting purge server .... 11537
    Starting imap server .... 11538
    Starting sched server ... 11540
    Starting dispatcher server .... 11543
    Starting job_controller server .... 11549
    Also, I read the recommendations in the ZFS / messaging doc:
    http://wikis.sun.com/display/CommSuite/Best+Practices+for+Oracle+Communications+Messaging+Exchange+Server
    If I split the messages and indices, will there be any issues should I need to imsbackup and imsrestore the messages to a different environment without the indices and messages split?
    -Ray
    Edited by: Ray_Cormier on Jul 22, 2010 7:27 PM

  • Enterpise Best Practices for iPad

    Is anyone aware of any documentation identifying best practices for securely deploying iPads in an enterprise environment?

    There is some information out there, though not as much as I think we are typically used to for enterprise environments. (It is a consumer device, and Apple is a consumer-driven company, and I don't fault them for that one bit.)
    Here is some documentation from Apple:
    http://www.apple.com/support/ipad/enterprise/
    Also, Jamf Software has some information regarding their Casper suite.
    We don't use it yet at my workplace, but I have heard good things about them.
    http://www.jamfsoftware.com/solutions/mobile-device-management
    Edit:
    And welcome to the forums!
    Message was edited by: tibor.moldovan

  • Best practice for OSB to OSB communication

    Cross posting this message:
    I am currently in a project where we have two OSB that have to communicate. The OSBs are located in different security zones ("internal" and "secure"). All communication on a network level must be initiated from the secure zone to the internal zone. The message flow should be initated from the internal zone to the secure zone. Ideally we should establish a tcp connection from the secure zone to the internal zone, and then use SOAP over HTTP on this pre-established connection. Is this at all possible?
    Our best approach now, is to establish a jms-queue in the internal zone and let both OSBs connect to this. All communication between the zone is then done over JMS. This is an approach that would work, but is it the only solution?
    Can the t3/t3s protocol be used to achieve our goal? I.e., to have synchronous communication over a pre-established connection (that is established in the opposite direction of the communication)?
    Is there any other approach that might work?
    What is considered best practice for sending messages from a OSB to another OSB in a more secure zone?
    Edited by: hilmersen on 11.jun.2009 00:59
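    The JMS-queue approach from the question can be sketched with in-process queues standing in for the JMS destinations. This is purely illustrative - real JMS is brokered and networked, and the function names are invented - but it shows why the pattern satisfies the zone rule: the secure zone only ever *initiates* reads against the internal zone's queues:

```python
import queue

# Both buses talk only to queues hosted in the internal zone, so no
# connection is ever initiated from internal -> secure.
request_q, reply_q = queue.Queue(), queue.Queue()

def internal_osb_send(payload):
    request_q.put(payload)               # internal zone enqueues a request

def secure_osb_poll():
    msg = request_q.get()                # read initiated *from* the secure zone
    reply_q.put({"handled": msg["id"]})  # reply goes back via a second queue

internal_osb_send({"id": 42, "body": "order"})
secure_osb_poll()
reply = reply_q.get()
print(reply)  # {'handled': 42}
```

    The trade-off versus t3 or a pre-established HTTP tunnel is latency and the extra broker to operate, in exchange for a clean fit with the firewall direction rule.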

    Hi,
    In my experience in a live project, we have used secured communication (https) between internal service bus and DMZ/external service bus.
    We also used two way SSL with customers.
    The ports were also secured by firewall in between them.
    If you would like more details, please email [email protected]
    Ganapathi.V.Subramanian[VG]
    Sydney, Australia
    Edited by: Ganapathi.V.Subramanian[VG] on Aug 28, 2009 10:50 AM
