DNS best practice in a local Windows 2012 domain network?

Hello.
We have a small local domain network in our office. Which is the best practice for DNS: set up a DNS server in our network that forwards to public DNS servers, or use public DNS servers directly on all computers, including the server?
Thanks.
Selim

Hi Selim,
Definitely the first option: set up a DNS server in your network that forwards to public DNS servers, and configure all computers, including the server, to use that local DNS.
An even better practice would be for this local DNS server to forward to a standalone DNS server in a DMZ, which in turn queries the public DNS servers.
A centralized DNS server uses its cache to answer repeated queries, which means faster response times and less internet usage for repeated lookups.
The additional DNS layer also helps protect your internal DNS data from attackers out on the internet.
Using the internal DNS server on all computers also lets you host intranet websites and reach them directly. Moreover, when you are on an AD domain, the computers' DNS must be configured properly for AD authentication to work.
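A rough sketch of the first option on Windows Server 2012, assuming the DNS Server role runs on your existing server (the forwarder and client addresses below are only examples):
    # On the server: install the DNS role if it is not already present
    Install-WindowsFeature DNS -IncludeManagementTools
    # Forward anything the local server cannot answer to public resolvers
    Add-DnsServerForwarder -IPAddress 8.8.8.8, 8.8.4.4
    # On each computer (or via DHCP), point DNS at the internal server only
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.10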
Regards,
Satyajit

Similar Messages

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V virtualized AD DC that will be running on a new WinSrv 2012 r2 host server? (This will be for a brand new network setup: new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non-virtual and virtual volumes/partitions/drives,
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),
    IDE vs SCSI and drivers, both non-virtual and virtual, and the booting thereof,
    disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district that, after close to 20 years on Novell networks, has decided to design and build a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 r2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of it was pretty solid with little ambiguity, except for the information on virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to doing this under Server 2008 r2, and we haven't really seen all that much on Server 2012 r2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Direct Access DNS resolution local domain network

    Hey guys,
    Some information about my test environment...
    My DirectAccess server and my DC are based on Windows Server 2012 R2. The DirectAccess server has one NIC. Port 443 requests are forwarded through a firewall to the DirectAccess server. DirectAccess was configured with the built-in wizard.
    On the client side I am using Windows 8.1 x64.
    Now to my problem...
    If I do a ping or a gpupdate when I am not connected to my local company network, the server responds and gpupdate/ping work fine. As soon as I am connected to my local company network I am not able to do a gpupdate or a ping (DNS resolution error).
    But I am able to use nslookup to query names.
    Anyone a suggestion where the problem could be?

    Hi,
    It seems that this problem is caused by the Network Location Server (NLS).
    Does the client know that it is connected to the local network?
    When the client connects to the local network, it should show "Connected to network locally or through VPN".
    Also, we can use the command below to verify this:
    netsh dns show state
    The machine location should be "Inside corporate network" when the client is connected to the local network.
    If the client doesn't know that it is inside the corporate network, please check whether the client can access the Network Location Server.
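    If that check fails, one quick way to test whether the client can reach the NLS from inside the LAN (the URL below is a placeholder; use the HTTPS URL configured in your DirectAccess deployment):
        # Hypothetical NLS URL - replace with the one from your DirectAccess configuration
        Invoke-WebRequest -Uri "https://nls.corp.example.com" -UseBasicParsing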
    Best Regards.
    Steven Lee
    TechNet Community Support

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I have been testing some failover scenarios with 4 Nexus 7000 switches, with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDCs: Admin, F2 and M2.
    All ports on the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPCs are connected on the M2 cards, configured in the M2 VDC.
    We have 2 Nexus switches representing each "site".
    In one site we have a vPC domain "100".
    The vPC peer link is connected on ports E1/3 and E1/4 in port-channel 100.
    The peer-keepalive is configured to use the management ports. Both sups are patched into our 3750s (this will eventually be on an out-of-band management switch).
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also the third vPC that connects the 4 Nexus's together. (po172)
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures etc..
    ONLY the management vlan (100,101) is allowed on the port-channel between the 3750's, so vlan 900 spanning tree shouldnt have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a sinlge) M2 we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)# track 101 list boolean or
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of this port-channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking whether these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, causing the track to fail?
    I see most people are monitoring upstream layer 3 ports that connect back to a core. What about what we are doing: monitoring upstream (the 3750s) and downstream layer 2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to be down, for example if the local M2 card failed, so that the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port-channel 101?
    We saw minimal outages using this design. When reloading the M2 modules, usually 1-3 pings were lost between the laptops in the different sites across the stretched VLAN. Obviously no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The VLAN that you are testing will have a link blocked between the two 3750s' port-channel if the root is on the Nexus vPC pair.
    Logically your topology is like this:
          Nexus pair
          /        \
    3750-1--------3750-2
    Since you have this triangle setup, one of the links will be in blocking state for any VLAN configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the tracking feature's purpose is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna

  • DNS best practice question

    Hello,
    We currently have an issue regarding DNS in a multiple-domain forest.
    First of all, in the forest there are 5 domains (names changed):
    dom1.domain.org
    sub.dom1.domain.org
    dom2.domain.org
    dom1.url.de
    dom.de
    As you can see, a forest full of domains that don't match ;-)
    We also have multiple sites, and as per network requirements, replication goes through the domain dom1.domain.org.
    All other Domains replicate only with this one.
    The DNS is currently set up as follows:
    Each domain controller holds its own domain as a primary AD-integrated zone in DNS (domain-wide replication).
    All the others are set up as forest-wide AD-integrated stub zones.
    At each startup we get Event 4515 on the DCs, saying that a zone is available twice.
    So, I have to troubleshoot this infrastructure now.
    Can you tell me what the best practice is here to set up DNS correctly, with as little replication traffic as possible?
    Best regards

    By default, the DNS zone replication scope is domain-wide, except for the _msdcs zone, which should replicate forest-wide. Beyond that, the replication scope can be chosen according to your business requirements.
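    A minimal sketch for checking this on a 2012 DC with the DnsServer PowerShell module (the _msdcs zone name below just follows the OP's dom1.domain.org example; substitute your own forest root):
        # List the replication scope of every AD-integrated zone on this DC
        Get-DnsServerZone | Where-Object IsDsIntegrated |
            Select-Object ZoneName, ReplicationScope
        # Move the forest root's _msdcs zone to forest-wide replication
        Set-DnsServerPrimaryZone -Name "_msdcs.dom1.domain.org" -ReplicationScope Forest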
    Regards~Biswajit

  • DNS best practices for hub and spoke AD Architecture?

    I have an Active Directory Forest with a forest root such as joe.co and the root domain of the same name, and root DNS servers (Domain Controllers) dns1.joe.co and dns2.joe.co
    I have child domains with names in the form region1.joe.co, region2.joe.co and so on, with DNS servers dns1.region1.joe.co and so on.
    Each region has distributed offices that may have a DC in them, with servers named in the form dns1branch1.region1.joe.co.
    Overall my DNS tests out okay, but I want to get the general guidelines for setting up new DCs correct.
    Configuration:
    The root DC/DNS server dns1.joe.co's adapter settings point DNS to itself, then to the two other root-domain DNS/DCs, dns2.joe.co and dns3.joe.co.
    The other root-domain DNS/DCs' adapter settings point to the root server dns1.joe.co, then to themselves (e.g. dns2.joe.co), and then to 127.0.0.1.
    The regional domains have a root DNS server, dns1.region1.joe.co, whose adapter points to the root server dns1.joe.co and then to itself.
    The additional regional-domain DNS/DCs' adapter settings point to dns1.region1.joe.co, then to themselves, then to dns1.joe.co.
    What would you do to correct this topology (and settings) or improve it?
    Thanks in advance
    just david

    Hi,
    According to your description, my understanding is that you need suggestions about your DNS topology.
    In theory, there is no obvious problem. Besides namespace and server planning for DNS, zone placement also needs to be considered. If you place a DNS server in each domain and subdomain, check whether the DNS traffic will affect network performance.
    Besides, fault tolerance and security are also necessary.
    We usually recommend that:
    A DC with DNS should point to another DNS server as primary and to itself as secondary or tertiary. It should not point to itself as primary, due to the various DNS islanding and performance issues that can occur. And when referencing a DNS server on itself, a DNS client should always use a loopback address and not a real IP address (see the short sketch after the links below). For detailed information you may reference:
    What is Microsoft's best practice for where and how many DNS servers exist? What about for configuring DNS client settings on DC’s and members?
    http://blogs.technet.com/b/askds/archive/2010/07/17/friday-mail-sack-saturday-edition.aspx#dnsbest
    How To Split and Migrate Child Domain DNS Records To a Dedicated DNS Zone
    http://blogs.technet.com/b/askpfeplat/archive/2013/12/02/how-to-split-and-migrate-child-domain-dns-records-to-a-dedicated-dns-zone.aspx
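    As a rough illustration of that client-setting guidance (the interface alias and addresses are placeholders; 10.1.0.10 stands in for a partner DC such as dns1.region1.joe.co):
        # On a regional DC: partner DC first, loopback address last
        Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
            -ServerAddresses 10.1.0.10, 127.0.0.1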
    Best Regards,
    Eve Wang

  • OS X Server DNS Best practice?

    Hello,
    I am having a little trouble with my OS X Server DNS.
    I have set up server.example.com and that works fine but now from my internal network I cannot get to:
    example.com or
    www.example.com
    example.com is a website I have set up on a remote webserver.
    My records currently look like this.
    Primary Zone: example.com
    server.example.com - machine
    server.example.com - nameserver
    Reverse Zone: 50.168.192.in-addr.arpa
    192.168.50.25 - reverse mapping
    server.portalpie.com - nameserver
    My webserver for example.com has an IP of something like 175.117.174.19
    How would I get example.com and www.example.com to point to 175.117.174.19?
    Thanks

    tsumugari wrote:
    Hello, DNS for simple name resolution works correctly for internal and external names. Internally it is .lan and externally .fr.
    I think there is perhaps an SRV entry to add.
    Please do not use .LAN as your top-level domain.  That's not a valid top-level domain right now, but it's also not reserved for this sort of use.  Either use a real and registered domain, or a subdomain of a real and registered domain, or — if you squat in a domain or try to use a TLD that's not registered — expect to have problems as new top-level domains are added.  At the rate that the new TLDs are coming online from ICANN, I'd expect to see .LAN get allocated and used, too.    .GURU, .RIP, .PLUMBING and dozens of other new top-level domains are already online, and probably thousands more are coming online.  
    SRV records are not related to accessing the Internet, those are service records which some applications use to access certain network services; they're a way to locate a target server and a port for specific applications — CalDAV does use an SRV record, but that's not related to the original posting's issues.   If you're having issues similar to the OP, then access your server and launch Terminal.app from Applications > Utilities and verify local DNS with the (harmless, diagnostic) command-line command:
    sudo changeip -checkhostname
    Enter your administrative password.  That command might show a one-time informational message about the use of sudo, and will usually then show some network configuration information about your server, and then an indication that no problems were found, or some indications of issues.  If there are errors reported, your IP network or your local DNS is not configured correctly — I'm here assuming a NAT network.
    I usually do this DNS set-up in a couple of steps.  First, get private DNS services configured and working.  This is always the first step, right after assigning the IP addresses.   It's just too convenient not to have DNS running on your local LAN, once you get to the point of having and running a server.   Then for external access for (for instance) web services, get port-forwarding working at the firewall/NAT/gateway box working; get your public static IP address mapped to the server's internal, private, static IP address.  Then get the public DNS configuration to resolve your external domain name to your public static IP address.
    My preference is to use separate DNS domains or a domain and a subdomain inside and outside.  Using real and registered domains, and not using any domains associated with a dynamic DNS provider — that's possible, but a little more tricky to configure.  This internal and external domain usage simplifies certain steps, and it avoids having to deal with cases where — for instance — some of your services have public IP addresses — such as a mail server you might be using — and other services might be entirely private.  If you have one domain (or subdomain) be public and one be private, then you don't have to track external IP address changes in your private DNS services; public DNS has just your public stuff, and your private domain (or subdomain) has just your private stuff.  Also obviously easy to tell what's inside your firewall, and what's outside, using this. 
    If you're thinking of running a publicly-accessible mail server, you'll need additional steps in the public DNS.
    Little of the above probably makes sense, so here's a write-up on configuring DNS on OS X Server.   All of the Server.app stuff works about the same for general DNS setup, too.  More recent Server.app is usually more flexible and capable than the older Server.app stuff, though.
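    Back to the original question: to make example.com and www.example.com resolve to the external web server from inside the LAN, the internal example.com zone needs records for the bare domain and the www host pointing at the public address. Roughly, expressed as plain zone data (in Server.app these would be added as host/machine records; the IP is the one quoted in the question):
        example.com.      IN  A  175.117.174.19
        www.example.com.  IN  A  175.117.174.19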

  • Best practice converting local laptop accounts to Mobile Accounts with PHD

    Hi,
    what is the best practice to convert local laptop users (with different UIDs than their network accounts) to mobile accounts? Especially when the local directory should not be synced in whole (just Documents and Library). Client and server are 10.5; network accounts are on NFS.
    I tried creating the mobile account with a minimal network directory (Library etc.) and then moving the original folders into place, but this didn't work out (the sync info was overwritten somewhere...).
    Christian

    I think your best bet is to copy the home folder off the laptop to the user share on the server. Then with WGM create the same user and apply all the permissions of the network user to the copied folder.
    Once you have that, create your settings for the PHD and then go to the laptop. There you will set up the laptop and bind it to the directory, then have that user log in (you might want to do this on a LAN, not AirPort), and it will move all the data across to that laptop; since the network user (same as the local one) owns that folder, everything should work. If the password is the same, then OS X should fix the login and keychain passwords, so saved forms or email passwords would show up.
    I did this same thing for 20 OS 10.4 client laptops. Took me a while to get all of this in place, but it will spare you the running around...
    hope that helps

  • Best Practice Question on Heartbeat Network

    After running 3.0.3 for a few weeks in production, we are wondering if we set up our heartbeat/servers correctly.
    We have 2 servers in our production server pool. Our LAN, a 192.168.x.x network, has the virtual IP of the cluster (heartbeat), the 2 main IP addresses of the servers, and a NIC assigned to each guest. All of this has been configured on the same network. Over the weekend I wanted to separate the heartbeat onto a new network, but when trying to add to the pool I received:
    Cannot add server: ovsx.mydomain.com, to pool: mypool. Server Mgt IP address: 192.168.x.x, is not on same subnet as pool VIP: 192.168.y.y
    Currently I only have one router, which translates our WAN to our LAN of 192.168.x.x. I thought the heartbeat would be strictly internal and would not need to be routed anywhere, just set up as a separate VLAN, and this is why I created 192.168.y.y. I know that the servers can have multiple IP addresses, and I have 3 networks added to my OVM servers: 192.168.x.x, 192.168.y.y and 192.168.z.z. The y and z networks are not pingable from anything but the servers themselves or one of the guests that I have assigned that network to. I cannot ping them directly from our office network, even through the VPN, which only gives us access to 192.168.x.x.
    I guess I can change my Server Mgt IP away from 192.168.x.x to 192.168.y.y, but can I do that without reinstalling the VM server? How have others structured their networks, especially relating to the heartbeat?
    Is there any documentation or guide that describes how to set up the networks properly relating to the heartbeat?
    Thanks for any help!!

    Hello user,
    In order to change your environment, what you could do is go to the Hardware tab -> Network. Within here you can create new networks and also change, via the Edit this Network pencil icon, which networks should manage which roles (i.e. Virtual Machine, Cluster Heartbeat, etc.).
    In my past experience, I've had issues changing the cluster heartbeat once it has been set. If you have issues changing it via the OVM Manager, one thing you could do is change it manually via the /etc/ocfs2/cluster.conf file. Also, if it does let you change it via the OVM Manager, verify it within cluster.conf to ensure it actually made your change; that is where it is being set. However, doing it manually can be tricky, because OVM has a tendency to revert its changes back to their original state, say after a reboot. Of course, I'm not even sure they support you manually making that change.
    Ideally, when setting up an OVM environment, best practice would be to separate your networks as much as possible, i.e. public network, private network, management network, cluster heartbeat network, and a live migration network if you do a lot of live migrating (otherwise you can probably place it with, say, the management network).
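    For orientation only, a rough sketch of the kind of stanzas /etc/ocfs2/cluster.conf contains (the node names, addresses and cluster name here are made up, and the file OVM generates will differ in detail; the heartbeat network is the one whose addresses appear in the node stanzas):
        node:
                ip_port = 7777
                ip_address = 192.168.y.1
                number = 0
                name = ovs1
                cluster = examplecluster
        node:
                ip_port = 7777
                ip_address = 192.168.y.2
                number = 1
                name = ovs2
                cluster = examplecluster
        cluster:
                node_count = 2
                name = examplecluster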
    Hope that helps,
    Roger

  • Autodiscover best practice over multiple domains

    Hi guys,
    Domain A is an SME with Exchange 2007; we recently installed 2010 and are working in a mixed mode. All working well.
    A company took over my company, which looks after Domain A. A domain migration was done into Domain B; however, the mail environment was left "as is" for now in the original domain.
    So all objects are in a single domain (the parent company), but we still have the old Domain A with the users' mailboxes.
    Domain B is now about to deploy a 2010 infrastructure. Currently I use an SRV record on the split-brain internal DNS to sort out Autodiscover. I assume that when Domain B publishes their SCP into the domain that all the users sit in, my users (who still have their mail on Domain A) will pick up the SCP and get errors...
    So everyone is in domainb.com at the moment, but a set of users have their mailboxes on domaina.org, and their SMTP addresses are still domaina.org too. When domainb.com rolls out their Ex2010 CAS and publishes an SCP, how can I prevent my users from getting tripped up?
    How would/do you implement Autodiscover so that different mail users in a single domain get pointed to their local CAS if using the local domain's email, but get pushed to a trusted domain if using a different email domain?
    Ideas welcome! :)
    Thanks - Steve

    Hi Steve,
    Domain A was migrated into Domain B, and the legacy Domain A still exists, right?
    “We have forwarders in domainb to resolve requests to domaina, and we have a stub in domaina to point back to domainb.”
    How did you do this?
    By the way, what's your goal? And do you want the users to log on via the new CAS server?
    Wendy Liu
    TechNet Community Support
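    For reference, the SRV-record approach Steve describes can be created on a Windows DNS server roughly like this (the zone, target host and priority/weight values are illustrative; point the record at whichever CAS should serve the domaina.org mailboxes):
        # Publish _autodiscover._tcp.domaina.org pointing at the CAS over port 443
        Add-DnsServerResourceRecord -Srv -ZoneName "domaina.org" -Name "_autodiscover._tcp" `
            -DomainName "cas.domaina.org" -Priority 0 -Weight 0 -Port 443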

  • Best practices for periodic reboot of MS Windows 2003 Server

    Hi there,
    We are a 4-month-old v9.3.1 Essbase environment running on an 8-CPU MS Windows 2003 Enterprise Edition server, and we run data loads twice daily as well as full outline restructures overnight. Data loads are executed both via SQL retrieves (i.e., from a view set up on another server) and via data file loads after clearing the outline and rebuilding it.
    Over time, I have noticed that the performance of the server degrades markedly and that calculation scripts take longer. Funnily enough, the server's performance goes back to what I would expect after a full reboot.
    My questions are as follows:
    1. Is it typically best practice to reboot MS Windows 2003 servers when dealing with a heavily accessed environment?
    2. If yes, is it mentioned anywhere in the Essbase manuals that MS Windows servers ought to be rebooted on a periodic basis in order to perform at their optimal best?
    3. Does Microsoft recommend such a practice of rebooting their servers on a periodic basis? I looked throughout their Knowledge Base but couldn't find any mention of it, despite the fact that a periodic reboot obviously boosts the performance of MS Windows servers.
    Thanks in advance for your responses/recommendations
    J


  • Any applicable/recommended Group Policy settings (Local & Domain) for configuring a Windows 8.1 "gold master image" for a collection

    Happy Friday everybody -
    I'm working on implementing Microsoft RDS 2012\VDI for the folks here at work. I've read a lot of articles online on VDI and RDS 2012, and have a working model that performs somewhat satisfactorily. I haven't seen much online about steps I could take in Local Group Policy on my Windows 8.1 'gold image', or for that matter in domain-level Group Policy, that could help create a better, more reliable/robust Windows 2012 VDI environment.
    Anybody out there got any information or opinions or advice on Group Policy settings for VDI environments?
    Thanks again, everyone!
    Adrian
    anr

    Hi Adrian,
    Thank you for posting in Windows Server Forum.
    Regarding your issue, you can refer to the articles below for detailed information:
    1. Group Policy Best Practices for VDI Environments
    2. Some Basic Group Policy Settings for VDI
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Looking for Some Examples / Best Practices on User Profile Customization in RDS 2012 R2

    We're currently running RDS on Windows 2008 R2. We're controlling users' desktops largely with Group Policy. We're using Folder Redirection to configure their Start Menus as well.
    We've installed a Server 2012 R2 RDS box and all the applications that users will need. Should we follow the same customization steps for 2012 R2 that we used on 2008 R2? I would love to see some articles from someone who has customized a user profile/desktop in 2012 R2, to see what's possible.
    Orange County District Attorney

    Hi Sandy,
    Here are some related articles below for you:
    Easier User Data Management with User Profile Disks in Windows Server 2012
    http://blogs.msdn.com/b/rds/archive/2012/11/13/easier-user-data-management-with-user-profile-disks-in-windows-server-2012.aspx
    User Profile Best Practices
    http://social.technet.microsoft.com/wiki/contents/articles/15871.user-profile-best-practices.aspx
    Since you want to customize user profile, here is another blog for you:
    Customizing Default users profile using CopyProfile
    http://blogs.technet.com/b/askcore/archive/2010/07/28/customizing-default-users-profile-using-copyprofile.aspx
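    If you end up using User Profile Disks from the first article, a minimal sketch of enabling them on a 2012 R2 session collection (the collection name and share path are placeholders):
        # Run on or against the RD Connection Broker
        Set-RDSessionCollectionConfiguration -CollectionName "MyCollection" `
            -EnableUserProfileDisk -MaxUserProfileDiskSizeGB 20 -DiskPath "\\fileserver\UPD$"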
    Best Regards,
    Amy

  • Best Practice for Showing Videos in a Windows Application

    I am building a Windows 8 application that has multiple articles. Each article will have anywhere from 0-3 videos associated with it. I was wondering where I could find some guidelines, best practices, or examples about how best to do this from a design standpoint, so that the app is in line with the Windows application guidelines.
    Any help would be appreciated!
    Thanks!

    Hi bcp1978,
    >>I was wondering where I could find some guidelines, best practices or examples about the how best to do this from a design stand point, so that app is in line with the Windows Application guidelines
    To design a Windows app, we need to understand Microsoft's design principles first:
    https://msdn.microsoft.com/library/windows/apps/hh781237.aspx - use these principles as you plan your app, and let them guide your design and development choices.
    Microsoft also provides a page of design guidelines:
    https://dev.windows.com/en-us/design

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM 2012 SP1/R2. I have read this blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best practice process, for dealing with expired updates.
    On one hand I was hoping to keep the software update groups intact, to have a history of what was deployed, but I also want to keep things clean and avoid the issues down the road that I used to have in 2007 with expired updates.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
