Remote Control Best Practices

Hello. I am new to the world of Mac. I have a bunch of Windows XP machines at home but just bought a MacBook White for my daughter for college. I would like to be able to provide tech support while she is away so I have to find a remote control solution that will do the trick.
I will not have a Mac at the house (at least not right away) so I am hoping to find something that will allow me to access her Mac over the Internet using a Windows-based client.
So I have two basic questions:
1) What is my best solution for this? I read that VNC might work, but is that the best solution? If so, is there a VNC host already embedded in Mac OS, or do I have to find one (and where)?
2) If I end up getting a MacBook Air, what Apple-based solution is best? ARD? What would I have to install/purchase on both the MacBook and the Air?
Thanks for your suggestions and patience with a Windows guy who appreciates Mac.
-Rick

I have been very successful using TeamViewer from Mac to Mac, Mac to Windows and some Windows to Mac. It's free to use within limits: you have to buy it if you're using it professionally, but for what you're talking about, it's free. http://teamviewer.com
If you've tried it before, download it again, because they just released the 4.0 version the other day. I've gotten through many firewalls with it, and even some dual NAT situations.
If you get a Mac, you can also use the screen sharing function of iChat, but I find it to be a lot less reliable than TeamViewer, especially through corporate firewalls.

Similar Messages

  • Remoting Security: Best Practice

    I am exploring Remoting and I am curious about security best practice. By default, Enable-PSRemoting will configure an HTTP listener that listens on all addresses. Initially I thought this address was the address of the computer making
    the remoting request, but it isn't; it's the address on the local machine that is doing the listening. My reason for thinking it was the controller machine's IP was that I thought I might want to limit successful remote requests to just that one machine. From
    a security standpoint this seemed better than letting any machine initiate a remote session. I know that the remote session is limited by the permissions of the initiating user, so any real threat would mean I have already been breached anyway. But still,
    I wonder if there is a way, and value, in limiting remoting to a subset of machines?
    Or is the default here really fine from a security standpoint as well?
    Thanks!
    Gordon

    It is most secure to configure remoting and restrict it using Group Policy. GP will let you define subnets for both ends of the conversation network-wide.
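    If Group Policy isn't available, a rough per-machine approximation is to scope the built-in WinRM firewall rules to specific source addresses. A minimal sketch, assuming Windows 8 / Server 2012 or later for the NetSecurity cmdlets, the English-locale rule group name, and an example management subnet of 10.0.0.0/24:
    # Enable remoting (creates the WinRM listener and its firewall rules).
    Enable-PSRemoting -Force
    # Scope the WinRM inbound rules so only hosts on the management subnet
    # (example value; substitute your own) can reach the listener.
    Get-NetFirewallRule -DisplayGroup 'Windows Remote Management' |
        Set-NetFirewallRule -RemoteAddress 10.0.0.0/24
    This narrows who can reach the listener at the firewall level, but it is not a substitute for the network-wide control that Group Policy gives you.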
    ¯\_(ツ)_/¯

  • Remote site best practice

    I have a main site with CCM 4.1 in a full Cisco environment, working perfectly. This main site has two external links: an E1 for PSTN and 6 Mbps broadband on fiber.
    I have a remote site with about 10 users and a symmetric 2 Mbps broadband link.
    I configured a plain IPsec VPN between the remote router (a 2621 running 12.2(17)) and the PIX at the main site (6.2).
    IP phones of the remote site work fine.
    However, at times, audio is lost. I have QoS on the 2621, and "show policy-map int fa 0/0" does not show dropped packets.
    Is this the best configuration?
    Should I use the 2621 as an MGCP gateway instead? An H.323 gateway?
    What are the main criteria for these choices?
    Thank you.

    I have less than 20 ms between routers and 22 ms within the IPsec tunnel.
    That's good enough for me.

  • OWB Change Management/Version Control Best Practice

    Hi
    I am about to start developing a data warehouse using OWB 10g R2, and I've been doing quite a lot of research into the various deployment/change management/version control techniques that can be used, but am still unsure which is the best to use.
    We will have 2-3 developers working on the project, and will be deploying from Development, to Test, to Production (each will have a separate repository). We want to be able to easily identify changes made between 1 release and the next to have a greater degree of control and awareness of what goes into each release. We also wish to use a source control system to track changes (we'll probably use SVN, but I don't think that the actual SCS tool makes a big difference to our decision at this point).
    The options available (that I'm aware of), are:
    1. Full MDL export/import.
    2. Snapshot MDL export/import.
    3. Manual coding of everything using OMB Plus.
    I am loath to use the full MDL export/import functionality since it will be difficult, if not impossible, to identify easily the changes made between 1 release and the next.
    The snapshot MDL export/import functionality is a little better at comparing releases, but it's still difficult to see exactly what has changed between 1 version and the next - particularly when a change to a transformation has been made. It also doesn't cope that well with tracking individually made changes to different components of the model.
    The manual coding using OMB Plus seems like the best option at the moment, though I keep thinking "What's the point of using a GUI tool, if I'm just going to code everything in scripts anyway?".
    I know that you can create OMB Plus code generation scripts to create your 'creation' scripts, but the code generation of the Alteration scripts seems that it would be more complicated than just writing the Alteration scripts manually.
    Any thoughts anyone out there has would be much appreciated.
    Thanks
    Liffey

    Well, you can also do per-object MDL exports and then manage those in your version control system. With a proper directory structure it would be fairly simple to code an OMB+ Script that scans a release directory tree and imports the objects one by one. I have done this before, although if you are using OWB as the primary metadata location for database objects then you have to come up with some way to manage object dependency order issues.
    The nice thing about this sort of system is that a patch can be easily shipped with only those objects that need to be updated.
    And if you force developers to put object-level MDL into your version control system then your system should also have pretty reporting on what objects were changed for a release and why.
    At my current job we do full exports of the project MDL and have a deployment script that drops the pre-existing deployed version of the project before importing and deploying the new version, which also works quite well - although, as you note, the tracking of what has changed in a release then needs to be carefully managed elsewhere.
    But we don't deploy any of our physical database objects through OWB. Those are deployed from Designer, and our patch script applies all physical changes first before we replace the mappings from the OWB project. We don't even bother synching the project metadata for tables / views / etc. at deployment. If the OWB project's metadata for database objects is not in sync with Designer, then we wind up with deployment errors. But on the whole it works pretty well.

  • Best practice for using remote control under limited rights?

    Hi. We are getting ready to take admin rights away from our users and make them standard users. We plan to utilize Zen for most of our in-scope applications so that we can allow users to install supported software. There is usually no problem in that case because Zen can elevate to System access during the install. However, we know that there are applications out there that a user may want to install that are not packaged in Zen. Also, in the event that a system setting needs to be changed, we will have to have a method for supporting this. In either case, the user will call our help desk. Unfortunately, the user will not have enough rights to do the install or system change even if the help desk associate remote controls the PC. What is the best practice for handling this situation in a NetWare/ZENworks environment where users only have limited access?
    I was thinking of three possibilities:
    1.) The obvious one is to send a technician over to log in using local admin credentials to install the software or perform the change. (Drawback - not very efficient, because a desktop tech would have to get over to the user's PC to perform the work.)
    2.) Have the help desk engineer log out of the machine through remote control and log back in as local admin to install the software or perform the change. (Drawback - not very convenient, and time-consuming.)
    3.) Have the help desk engineer use the "run as" command, or even create a Zen application object that could be executed to provide temporary rights for installing software or making system changes. Aaron Margosis of Microsoft writes about this quite a bit in his "Non-Admin" WebLog (see its Table of Contents). (Drawback - some software or settings will not work properly using this technique.)
    The last one that I didn't list was creating a new application object. I did not factor this one in because it isn't always applicable to system changes, and we really don't want to be making app objects for every out-of-scope app that exists in the user community. We typically only make them for widely used and supported apps.
    Your feedback is appreciated.
    Thanks

    Originally Posted by spond
    Joshbilsky,
    how about
    4) use the remote execute option to remotely launch an app as admin?
    Shaun Pond
    That's probably an option that we will make available. I wasn't sure how some things would work under the SYSTEM context vs. local admin.
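    As a rough illustration of the SYSTEM-vs-local-admin difference in option 3, here is a hedged PowerShell sketch that launches an install under a named admin account rather than the SYSTEM context (the share path and package name are hypothetical examples):
    # Prompt for the local admin credentials to run the install under.
    $cred = Get-Credential
    # Launch the installer under those credentials instead of SYSTEM, so
    # the install sees that user's profile and registry hive (HKCU).
    Start-Process -FilePath 'msiexec.exe' `
        -ArgumentList '/i \\server\share\app.msi /qb' `
        -Credential $cred -Wait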

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, if they are away from the office or make a big oops mistake, it doesn't ever hit the server, and 2) you can lock a file to avoid conflicts, and 3) if you don't lock the file and a conflict (two simultaneous edits) occur, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things -- and many people ended up with critical files that should be shared on their own hard drives. So now I'm setting up a fileshare for them, which they will use in addition to the subversion repository.
    I guess I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups and mirroring (rsync; Second Copy for Windows users) I should be able to allow a) local copies on users' hard drives, b) control for conflicts (locking, conflict identification), and c) keeping old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the fileshare. Is there any way people can lock a file on the fileshare, to indicate to others that they are working on it and that others shouldn't be modifying it? Is there a way to use Unix (user-group-other) permissions, e.g. "chmod oa-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (Windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything, so I only lose sleep about backups regarding version history, not backups of the most recent file version.)
    John Abraham
    [email protected]

  • Query: Best practice SAN switch (network) access control rules?

    Dear SAN experts,
    Are there generic SAN (MDS) switch access control rules that should always be applied within the SAN environment?
    I have a specific interest in network-based access control rules/CLI-commands with respect to traffic flowing through the switch rather than switch management traffic (controls for traffic flowing to the switch).
    Presumably one would want to provide SAN switch demarcation between initiators and targets using VSANs, zoning (and LUN zoning for fine-grained access control, plus defense in depth with storage device LUN masking), IP ACLs, and Read-Only Zones (or LUNs).
    In a LAN environment controlled by a (gateway) firewall, there are (best practice) generic firewall access control rules that should be instantiated regardless of enterprise network IP range, TCP services, topology etc.
    For example, the blocking of malformed TCP flags or the blocking of inbound and outbound IP ranges outlined in RFC 3330 (and RFC 1918).
    These firewall access control rules can be deployed regardless of the IP range or TCP service traffic used within the enterprise. Of course, there are also firewall access control rules that should be implemented as best practice that require specific IP addresses and ports to suit the network in which they are deployed. For example, rate limiting as a DoS preventative may require knowledge of the server IP and port number of the hosted service that is being DoS-protected.
    So my question is, are there generic best practice SAN switch (network) access control rules that should also be instantiated?
    regards,
    Will.

    Hi William,
    That's a pretty wide net you're casting there, but I'll do my best to give you some insight into the matter.
    Speaking pure Fibre Channel, your only real way of controlling which nodes can access which other nodes is zones.
    For zones there are a few best practices:
    * Default Zone: Don't use it, unless you're running FICON.
    * Single Initiator zones: One host, many storage targets. Don't put 2 initiators in one zone or they'll try logging into each other, which at best will give you a performance hit, at worst will bring down your systems.
    * Don't mix zoning types: You can zone on WWN, on port, and Cisco NX-OS will give you a plethora of other options, like on device alias or LUN zoning. Don't use different types of these in one zone.
    * Device alias zoning is definitely recommended, with Enhanced Zoning and Enhanced DA enabled, since it will make replacing HBAs a heck of a lot less painful in your fabric.
    * LUN zoning is being deprecated, so avoid it. You can achieve the same effect on any modern array by doing LUN masking.
    * Read-Only zones exist, but again, any modern array should be able to make a LUN read-only.
    * QoS on Zoning: Isn't really an ACL method, more of a congestion control.
    VSANs are a way to separate your physical fabric into several logical fabrics. There's one huge distinction here from VLANs: as a rule of thumb, you should put things that you want to talk to each other in the same VSAN. FC has no concept of a broadcast domain the way Ethernet does, so VSANs don't serve as isolation for that. Routing in Fibre Channel (IVR, or Inter-VSAN Routing) is possible, but quickly becomes a pain if you use it a lot or structurally. Keep IVR for exceptions; use VSANs for logical units of hosts and storage that belong to each other. A good example would be to put each of 2 remote datacenters in its own VSAN, create a third VSAN for the ports on the arrays that provide replication between the DCs, and use IVR to give management hosts inband access to all arrays.
    When using IVR, maintain a manual and minimal topology. IVR tends to become very complex very fast, and auto topology doesn't help with this.
    Traditional IP ACLs (permit this proto to that dest on such a port and deny other combinations) are very rare on management interfaces, since those are usually connected to already separated segments. The same goes for Fibre Channel over IP links (which connect to Ethernet interfaces in your storage switch).
    They are quite logical to use, and work just the same on an MDS as on a traditional Ethernet switch, when you want to use IP over FC (not to be confused with FC over IP). But then you'll logically be using your switch as an L2/L3 device.
    I'm personally not an IP guy, but here's a quite good guide to setting up IP services in an FC fabric:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ipsvc.html
    To protect your san from devices that are 'slow-draining' and can cause congestion, I highly recommend enabling slow-drain policy monitors, as described in this document:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/int/nxos/intf.html#wp1743661
    That's a very brief summary of the most important access-control-related best practices that come to mind. If any of this isn't clear to you or you require more detail, let me know. HTH!

  • Best Practices for Remote Data Communication?

    Hello all
    I am developing a full-fledged website in Flex 3.4 and Zend Framework (PHP). I am using the Zend_AMF class in the Zend Framework for communicating data to the remote server.
    I will be communicating with the database in the following ways...
    get data from server
    send form data to server
    send requests to server to get data in response
    Right now I have created just a simple login form which sends two fields, username and password, to a method in the service class on the remote server.
    Here is a little peek into how I did that...
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
      <mx:RemoteObject id="loginService" fault="faultHandler(event)" source="LoginService" destination="dest">
        <mx:method name="doLogin" result="resultHandler(event)" />
      </mx:RemoteObject>
      <mx:Script>
        <![CDATA[
          import mx.rpc.events.ResultEvent;
          import mx.rpc.events.FaultEvent;
          import mx.controls.Alert;
          // Show a welcome message on a successful login result.
          private function resultHandler(event:ResultEvent):void {
            Alert.show("Welcome " + txtUsername.text + "!!!");
          }
          // Handle remoting faults (referenced by the fault attribute above).
          private function faultHandler(event:FaultEvent):void {
            Alert.show("Login failed: " + event.fault.faultString);
          }
        ]]>
      </mx:Script>
      <!-- Login Panel -->
      <mx:VBox>
        <mx:Box>
          <mx:Label text="LOGIN"/>
        </mx:Box>
        <mx:Form>
          <mx:FormItem>
            <mx:Label text="Username"/>
            <mx:TextInput id="txtUsername"/>
          </mx:FormItem>
          <mx:FormItem>
            <mx:Label text="Password"/>
            <mx:TextInput id="txtPassword" displayAsPassword="true" width="100%"/>
          </mx:FormItem>
          <mx:FormItem>
          <mx:Button label="Login" id="loginButton" click="loginService.doLogin(txtUsername.text, txtPassword.text)"/>
          </mx:FormItem>
        </mx:Form>
      </mx:VBox>
    </mx:Application>
    This works fine. But if I create a complicated form which has many fields, it would be almost unbearable to send each field as an argument of the function.
    Another method that can be used is HTTPService, which supports XML-like requests and responses.
    I want to ask: what are the best practices in Flex for remote data communication on a large scale? Maybe using some classes or objects which store data? Can somebody guide me on how to approach storing data?
    Thanks and Regards
    Vikram

    Oh yes, I have studied Cairngorm, though I haven't really applied it. I thought that it helps in separating the data model, presentation and business logic into various layers.
    Although what I am looking for is something about data models, maybe?
    Thanks and Regards
    Vikram

  • Best Practices for Setting up a Windows 2012 R2 STD Domain Controller in a Remote Site

    So I'm looking for an article or writeup similar to the "Adding Domain Controllers in Remote Sites" TechNet article, but for Windows Server 2012 R2 STD. Here is my scenario:
    1. I want to set up the domain controller at Site A, where the primary domain controller is located. The primary domain controller is Windows Server 2008 R2.
    2. Once the DC is set up, I plan on leaving it on our network for a few days before shipping it to remote Site B for installation.
    Other key items:
    1. The remote Site B will have a different IP range than Site A, but will be connected to Site A via a single VPN tunnel. All the DCs that replicate with each other are in the same domain.
    2. The 2012 DC that I set up for Site B (same domain, same forest) will be a DHCP, DNS, and WSUS server, all replicating to the primary DC at Site A.
    Questions:
    1. What items can I set up while it's at Site A without affecting or conflicting with the existing network and domain controller? Can I set up a scope once the DHCP role is added?
    2. All of our DCs replicate through Sites and Services. Do I have to manually add this to our primary DC for the new DC going to remote Site B? Or does this happen automatically when I promote the DC?
    All in all, I'm just looking for a list of best practices for 2012 or a step-by-step guide. Any help would be appreciated.

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
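    Once the shipped DC is reconnected at Site B, it's worth confirming that replication has resumed before relying on it. A minimal sketch, assuming the ActiveDirectory PowerShell module (RSAT) is installed; the DC name is a hypothetical example:
    # Check inbound replication status for the relocated DC.
    Get-ADReplicationPartnerMetadata -Target 'SITEB-DC01' |
        Select-Object Server, Partner, LastReplicationSuccess, LastReplicationResult
    # Forest-wide replication summary with the classic tool.
    repadmin /replsummary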
    Regards.
    Vivian Wang

  • Best Practice for SRST deployment at a remote site

    What is the best practice for an SRST deployment at a remote site? Should a separate router, such as a 3800 series, be deployed for telephony in addition to another router for data? Is there a need for two different devices?

    Hi Brian,
    This is typically done all on one ISR router at the remote site :) There are two flavors of SRST. Here is the feature comparison:
    SRST Fallback
    This feature enables routers to provide call-handling support for Cisco Unified IP phones if they lose connection to remote primary, secondary, or tertiary Cisco Unified Communications Manager installations or if the WAN connection is down. When Cisco Unified SRST functionality is provided by Cisco Unified CME, provisioning of phones is automatic and most Cisco Unified CME features are available to the phones during periods of fallback, including hunt groups, call park and access to Cisco Unity voice messaging services using SCCP protocol. The benefit is that Cisco Unified Communications Manager users will gain access to more features during fallback without any additional licensing costs.
    Comparison of Cisco Unified SRST and Cisco Unified CME in SRST Fallback Mode
    Cisco Unified CME in SRST Fallback Mode
    • First supported with Cisco Unified CME 4.0: Cisco IOS Software 12.4(9)T
    • IP phones re-home to Cisco Unified CME if Cisco Unified Communications Manager fails. CME in SRST allows IP phones to access some advanced Cisco Unified CME telephony features not supported in traditional SRST
    • Support for up to 240 phones
    • No support for Cisco VG248 48-Port Analog Phone Gateway registration during fallback
    • Lack of support for alias command
    • Support for Cisco Unity® unified messaging at remote sites (Distributed Exchange or Domino)
    • Support for features such as Pickup Groups, Hunt Groups, Basic Automatic Call Distributor (BACD), Call Park, softkey templates, and paging
    • Support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0 on same computer
    • No support for secure voice in SRST mode
    • More complex configuration required
    • Support for digital signal processor (DSP)-based hardware conferencing
    • E-911 support with per-phone emergency response location (ERL) assignment for IP phones (Cisco Unified CME 4.1 only)
    Cisco Unified SRST
    • Supported since Cisco Unified SRST 2.0 with Cisco IOS Software 12.2(8)T5
    • IP phones re-home to SRST router if Cisco Unified Communications Manager fails. SRST allows IP phones to have basic telephony features
    • Support for up to 720 phones
    • Support for Cisco VG248 registration during fallback
    • Support for alias command
    • Lack of support for features such as Pickup Groups, Hunt Groups, Call Park, and BACD
    • No support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0
    • Support for secure voice during SRST fallback
    • Simple, one-time configuration for SRST fallback service
    • No per-phone emergency response location (ERL) assignment for SCCP Phones (E911 is a new feature supported in SRST 4.1)
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/prod_qas0900aecd8028d113.html
    These SRST hardware-based restrictions are very similar to the number of phones supported with CME. Here is the actual breakdown:
    Cisco 880 SRST Series Integrated Services Router: up to 4 phones
    Cisco 1861 Integrated Services Router: up to 8 phones
    Cisco 2801 Integrated Services Router: up to 25 phones
    Cisco 2811 Integrated Services Router: up to 35 phones
    Cisco 2821 Integrated Services Router: up to 50 phones
    Cisco 2851 Integrated Services Router: up to 100 phones
    Cisco 3825 Integrated Services Router: up to 350 phones
    Cisco Catalyst® 6500 Series Communications Media Module (CMM): up to 480 phones
    Cisco 3845 Integrated Services Router: up to 730 phones
    *The number of phones supported by SRST has been changed to multiples of 5 starting with Cisco IOS Software Release 12.4(15)T3.
    From this excellent doc:
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/data_sheet_c78-485221.html
    Hope this helps!
    Rob

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    We are setting up a new system using B2B, ESB and BPEL. The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party check box and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html

  • Best practice concerning embedding script in report vs. controlling from Java

    Hi,
    I'm faced (probably not the only one) with adding some intelligence to my reports. In a prior post I was curious about displaying/hiding sections based on conditions found in the bean/POJO.
    Is there a best practice concerning embedding logic in the report in the form of formula(s), vs. using Java to get or create a field and then creating a formula on the fly? I suspect the answer has something to do with truly dynamic fields, and perhaps a little bit of both Java and script.
    Anyone on staff care to try answering?
    Peter

    Hi,
    log into your SAP ERP system using the SAP GUI and choose in the SAP Menu the following path:
    SAP Menu -> Accounting -> Controlling -> Cost Center Controlling -> Environment -> Set Controlling Area.
    Set the desired controlling area for your user there (DO NOT FORGET TO CLICK ON THE DISKETTE ICON) and try again.
    Regards,
    Stratos

  • Running Best Practice Analyzer on remote 2008 R2 domain controllers

    Hello PowerShell World,
    I'll start out by first mentioning that I am a PowerShell rookie, so I gladly welcome any input to help me improve or work more efficiently. Anyway, I recently used PowerShell to run the Best Practices Analyzer for DNS on all of our domain controllers.
    The way I went about it was pretty tedious and inefficient, but it still got the job done through a series of one-liners that exported the report to a UNC path as follows:
    Enable-PSRemoting -Force (I logged into all of the domain controllers individually and ran this before running the one-liners below from my workstation)
    New-PSSession -Name <Session Name> -ComputerName <Hostname>
    Enter-PSSession -Name <Session Name>
    Import-Module bestpractices
    Invoke-BPAModel Microsoft/Windows/DNSServer
    Get-BPAResult Microsoft/Windows/DNSServer | Select ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help | Sort Category | Export-CSV \\server\share\BPA_DNS_SERVERNAME.csv
    I'm looking to do this again, but for the Directory Services Best Practices Analyzer, without having to individually enable remoting on the domain controllers, and also to provide a list of servers for the script to run against.
    Thanks in advance for all your help!

    What do you mean by "without having to individually enable remoting"?
    You cannot remote without enabling remoting. You only need to enable remoting once; it is a configuration change. If you have done it once you do not need to do it again.
    Here is how to run it from a list of DCs.
    $sb = {
        Import-Module BestPractices
        # Run the DNS BPA scan and export the results per computer.
        Invoke-BPAModel Microsoft/Windows/DNSServer
        Get-BPAResult Microsoft/Windows/DNSServer |
            Select-Object ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help |
            Sort-Object Category |
            Export-Csv "\\server\share\BPA_DNS_$env:COMPUTERNAME.csv"
        # Run the Directory Services BPA scan the same way.
        Invoke-BPAModel Microsoft/Windows/DirectoryServices
        # etc...
    }
    ForEach ($dc in $listofDCs) {
        Invoke-Command -ScriptBlock $sb -ComputerName $dc
    }
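    If you'd rather not maintain $listofDCs by hand, one option (assuming the ActiveDirectory module from RSAT is available where you run this) is to build the list from AD itself:
    # Enumerate every DC in the domain instead of hard-coding a list.
    Import-Module ActiveDirectory
    $listofDCs = Get-ADDomainController -Filter * |
        Select-Object -ExpandProperty HostName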
    ¯\_(ツ)_/¯

  • What is the best practice for remotely managing a bank of switches over POTS

    I need to be able to have a back door into several Catalyst switches and an ASA.
    What is the best practice for accessing them remotely?

    Just place a modem on any console port. Ideally you would use a terminal server, but one is not always really needed.
