Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both
versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know
each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused. 

Supporting two task sequences is way easier than supporting two shares. Two shares mean two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
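For what it's worth, a minimal sketch of such a wrapper might look like this (the subfolder layout and silent-install switches are illustrative assumptions, not our exact script):
' install.vbs - architecture-aware install wrapper (sketch).
Option Explicit
Dim shell, arch, base, cmd, rc
Set shell = CreateObject("WScript.Shell")
' The System environment reports AMD64 on 64-bit Windows, even under 32-bit cscript.
arch = shell.Environment("System")("PROCESSOR_ARCHITECTURE")
' Folder this script lives in, with a trailing backslash.
base = Left(WScript.ScriptFullName, InStrRev(WScript.ScriptFullName, "\"))
If UCase(arch) = "AMD64" Then
    cmd = """" & base & "x64\setup.exe"" /quiet"
Else
    cmd = """" & base & "x86\setup.exe"" /quiet"
End If
rc = shell.Run(cmd, 0, True)   ' run hidden, wait for the installer to finish
WScript.Quit rc                ' hand the installer's exit code back to MDT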
Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the vbscript wrapper and it's a standard process, I believe our way is easier. YMMV.
Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact, I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, task sequence, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

Similar Messages

  • Best practice for managing unofficial user repo with git?

    G'day Archers,
    (I hope this is the right subforum for my question -- feel free to move.)
    I am currently playing around with setting up an unofficial repo. So far so good, except I am trying to make sure I do things "properly".
    I am currently deploying my website according to this tip I've seen recommended. I.e. on the server, set up a bare repo in say ~/src/www.git with a post-receive hook that states something like
    GIT_WORK_TREE=$HOME/www_public git checkout -f
    where ~/www_public is the apache directory root for the website.
    Doing it this way, I can push and pull changes between my desktop, the server and my laptop and the .git files don't end up visible to the public.
    Now I want to add my repo to this system, but repo-add creates ".tar.gz" files. According to github help, these are better off ignored by git.
    One solution that occurs to me is to add a pre-commit that decompresses the db into plain text and a post-receive that recompresses to a form that pacman will understand. However, I wonder if there is a simpler solution that I'm missing? I notice that Debian seems to have a tool that will do what I want, for Debian users. Does something equivalent exist for Archlinux, or do any Archers who run unofficial repos have any tips for me based on their experience/mistakes/successes of the past?
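    For context, a slightly fuller version of that hook might look like this (a sketch of the setup described above; the master-only check is an addition, adjust to taste):
    #!/bin/sh
    # ~/src/www.git/hooks/post-receive -- deploy the pushed content into the
    # public web root, but only when the master branch is updated.
    while read oldrev newrev refname; do
        if [ "$refname" = "refs/heads/master" ]; then
            GIT_WORK_TREE="$HOME/www_public" git checkout -f master
        fi
    done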


  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS.  I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment.  Which brings to light several questions in my mind:
    1. Which department(s) in a company should “own” the BPC application?
    2. If both, what’s SAP’s recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP’s “Best Practice”?
    We are currently evaluating the best way to manage the system across multiple departments; however, there is no real business ownership of the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.


  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1 TB USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch, just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library, and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott


  • Best practices for 2 x DNS servers with 2 x sites

    I am curious if someone can help me with best practices for my DNS servers.  Let me give my network layout first.
    I have 1 site with 2 x Windows 2012 Servers (1 GUI - 10.0.0.7, the other Core - 10.0.0.8); the 2nd site, connected via VPN, has 2 x Windows 2012 R2 Servers (1 GUI - 10.2.0.7, the other Core - 10.2.0.8). All 4 servers are promoted to DCs and have DNS services running.
    Here goes my questions:
    Site #1
    DC-01 - NIC IP address for DNS server #1 set to 10.0.0.8, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
    DC-02 - NIC IP address for DNS server #1 set to 10.0.0.7, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
    Site #2
    DC-01 - NIC IP address for DNS server #1 set to 10.2.0.8, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
    DC-02 - NIC IP address for DNS server #1 set to 10.2.0.7, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
    Under DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > Name Servers, should I have all of my other DNS servers, or should I have my WAN DNS servers? In a single-server scenario I always put my WAN DNS server, but I'm a bit unsure in this scenario.
    Under DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > General > Type, should all servers be set to Active Directory-Integrated > Primary Zone? Should any of these be set to Secondary Zone?
    Under DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > Zone Transfers, should I allow zone transfers?
    Would the following questions be identical to the Forward Lookup Zone mydomain.local as well?

    Site1
    DC1: Primary 10.0.0.7. Secondary 10.0.0.8. Tertiary 127.0.0.1
    DC2: Primary 10.0.0.8.  Secondary 10.0.0.7. Tertiary 127.0.0.1
    Site2
    DC1: Primary 10.2.0.7.  Secondary 10.2.0.8. Tertiary 127.0.0.1
    DC2: Primary 10.2.0.8.  Secondary 10.2.0.7. Tertiary 127.0.0.1
    The DCs should automatically register in _msdcs. Do not register external DNS servers in _msdcs or it will lead to issues. Yes, I recommend all zones be set to AD-integrated. No need to allow zone transfers, as AD replication will take care of this for you. Same for mydomain.local.
    Hope this helps.

  • Best practices for managing Movies (iPhoto, iMovie) to IPhone

    I am looking for some basic recommendations and best practices on managing the syncing of movies to my iPhone. Most of my movies either come from a digital camcorder into iMovie or from a digital camera into iPhoto.
    Issues:
    1. If I do an export or a share from iPhoto, iMovie, or QuickTime, what formats should I select? I've seen 3gp and m4v.
    2. When I add a movie to iTunes, where is it stored? I've seen some folder locations like iMovie Sharing/iTunes. Can I copy them directly there or should I always add to library in iTunes?
    3. If I want to get a DVD I own into a format for the iPhone, how might I do that?
    Any other recommendations on best practices are welcome.
    Thanks
    mek

    1. If you type "iphone" or "ipod" into the help feature in imovie it will tell you how.
    "If you want to download and view one of your iMovie projects to your iPod or iPhone, you first need to send it to iTunes. When you send your project to iTunes, iMovie allows you to create one or more movies of different sizes, depending on the size of the original media that’s in your project. The medium-sized movie is best for viewing on your iPod or iPhone."
    2. Mine appear under "Movies", which is where iMovie puts them automatically.
    3. If you mean movies purchased on DVD, then copying them is illegal and cannot be discussed here.
    From the terms of use of this forum:
    "Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise ‘work around’ digital rights management software or hardware. This includes conversations about ‘ripping’ DVDs or working around FairPlay software used on the iTunes Store."

  • Best Practice for managing variables for multiple environments

    I am very new to Java WebDynPro and have a question concerning our deployments to Sandbox, Development, QA, and Production environments.
    What is the 'best practice' people use so that information specific to each environment is not hard-coded in your Java WebDynPro code?
    I could put the value in a properties file, but how do I make that variant? Otherwise I'd still have to change the property file for each environment and re-deploy. I know there are some configurations on the Portal but am not sure if that will work in my instance.
    For example, I have a URL that varies based on my environment. I don't want to hard-code and re-compile for each environment. I'd prefer to get that information on the fly by knowing which environment I'm running in and loading the appropriate URL.
    So far the only thing I've found that comes close to telling me where I'm running is a Parameter Map, but the 'key' in the map is the URL rather than the value, and I suspect there's a cleaner way to get something like that. I used Eclipse's autosense in NetWeaver to discover some of the things available in my web context. Here's the code I used to get that map:
    TaskBinder.getCurrentTask().getWebContextAdapter().getRequestParameterMap();
    In the forum there is an example that gets the IP address of the site you're serving from. It sounds like it is going to be or has been deprecated (it works on my system right now), and I would really rather have something like the DNS name, not something like an IP that could change. Here's that code:
    String remoteHost = TaskBinder.getCurrentTask().getWebContextAdapter().getHttpServletRequest().getRemoteHost();
    Thanks in advance for any clues you can throw my way -
    Greg

    Hi Greg:
     I suggest you check the "Software Change Management Guide"; in it you can find an explanation of the best practices for working with a development infrastructure.
    this is the link :
    http://help.sap.com/saphelp_erp2005/helpdata/en/83/74c4ce0ed93b4abc6144aafaa1130f/frameset.htm
    Now, if you can get the IP of your server or the name of your site, you can do the following:
    HttpServletRequest request = ((IWebContextAdapter) WDWebContextAdapter.getWebContextAdapter()).getHttpServletRequest();
    String server_name = request.getServerName();
    String remote_address = request.getRemoteAddr();
    String remote_host = request.getRemoteHost();
    You only need to add servlet.jar under your project properties > Build Path > Libraries.
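    Building on that, one way to avoid hard-coding the URL is to key a bundled properties file by server name (a sketch; the file name, property keys and host-naming convention are assumptions, not a NetWeaver standard):
    import java.io.InputStream;
    import java.util.Properties;
    public class EnvironmentConfig {
        // Resolve an environment-specific URL from environment.properties, e.g.
        //   dev.myservice.url=http://dev-backend:8080/service
        public String resolveServiceUrl(String serverName) throws Exception {
            Properties props = new Properties();
            InputStream in = getClass().getResourceAsStream("/environment.properties");
            props.load(in);
            in.close();
            String env = "prod";                           // default environment
            if (serverName.startsWith("dev")) env = "dev";
            else if (serverName.startsWith("qa")) env = "qa";
            return props.getProperty(env + ".myservice.url");
        }
    }
    You would then call it with the server name from the request, e.g. resolveServiceUrl(request.getServerName()).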
    Good Luck
    Josué Cruz

  • Best Practices for Integrating UC-5x0's with SBS 2003/8?

    Almost all of Cisco's SBCS market is the small and medium business space. Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. It will be critical, in order for Cisco to be considered as a purchase option, that the UC-5x0 integrates well into these networks.
    To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs, and no best practices or other documentation. If I am wrong, please correct me.
    I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
    Some of the challenges include:
    1. Where should the UC-540 be placed: as the gateway for QoS, or behind a validated UC-5x0 router/security appliance combination?
    2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
    3. Which device should handle DNS?
    My documentation (and I recommend that any Cisco lab/best practice guidance include it as well) will assume the following real-world scenario, the same one that applies to a majority of my SMB clients:
    1. A UC-540 device utilizing SIP for the cost savings
    2. High Speed Internet with 5 static routable IP addresses
    3. An existing Microsoft Small Business Server 2003/8
    4. An additional line-of-business application or terminal server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
    5. An employee who teleworks from various places that provide a seat and a network jack which is not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones, because we will not have administrative access to any third party's VPN/firewall.
    Your thoughts are appreciated.

    Progress Report;
    The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
    DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
    Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
    To review, this configuration assumes the following;
    1. The UC540 has a public IP address of 4.2.2.2
    2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
    3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
    First, back up your current configuration via the CCA.
    Next, telnet into the UC540, log in, edit, and cut and paste the following to 1:1 NAT the 2 additional public IP addresses:
    ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
    ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
    ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
    ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
    ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
    ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
    ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
    ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
    ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
    Next, you will need to amend your UC540's default ACL.
    First, copy your existing rules as I have done below (in bold), and paste them into Notepad.
    Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back, along with the additional rules for your SBS and LOB server (mine in bold), as follows:
    int fas 0/0
    no ip access-group 104 in
    no access-list 104
    access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
    access-list 104 remark SDM_ACL Category=1
    access-list 104 permit tcp any host 4.2.2.3 eq 25 log
    access-list 104 permit tcp any host 4.2.2.3 eq 80 log
    access-list 104 permit tcp any host 4.2.2.3 eq 443 log
    access-list 104 permit tcp any host 4.2.2.3 eq 987 log
    access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
    access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
    access-list 104 permit tcp any host 4.2.2.4 eq 80 log
    access-list 104 permit tcp any host 4.2.2.4 eq 443 log
    access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
    access-list 104 permit udp host 116.170.98.142 eq 5060 any
    access-list 104 permit udp host 116.170.98.143 any eq 5060
    access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
    access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
    access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
    access-list 104 permit udp host 116.170.98.142 eq domain any
    access-list 104 permit udp host 116.170.98.143 eq domain any
    access-list 104 permit icmp any host 4.2.2.2 echo-reply
    access-list 104 permit icmp any host 4.2.2.2 time-exceeded
    access-list 104 permit icmp any host 4.2.2.2 unreachable
    access-list 104 permit udp host 192.168.10.1 eq 5060 any
    access-list 104 permit udp host 192.168.10.1 any eq 5060
    access-list 104 permit udp any any range 16384 32767
    access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 104 deny   ip host 255.255.255.255 any
    access-list 104 deny   ip host 0.0.0.0 any
    access-list 104 deny   ip any any log
    int fas 0/0
    ip access-group 104 in
    Lastly, save to memory
    wr mem
    One final note - if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you were getting Error 721 and/or Error 800...you will need to use the following commands to add to ACL 104;
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre any any
    I'm hoping there may be a better way to allow VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info when and if I discover one.
    Thanks to Vijay in Cisco TAC for the guidance.

  • Best practices for setting up RDS pool, with regards to profiles /appdata

    All,
    I'm working on a network with four physical sites and currently using a single pool of 15 RDS servers with one broker. We're having a lot of issues with the current deployment, and are rethinking our strategy. I've read a lot of conflicting information on how
    to best deploy such a service, so I'd love some input.
    Features and concerns:
    1. Users connect to the pool from intranet only.
    2. There are four sites, each with a somewhat different local infrastructure. Many users are connecting to the RDS pool via thin clients, although some locations have workstations in place.
    3. Total user count that needs to be supported is ~400, but it is not evenly distributed - some sites have more than others.
    4. Some of the users travel from one site to another, so that would need to be accounted for with any plans that involve carving up the existing pool into smaller groups.
    5. We are looking for a load-balanced solution - using a different pool for each site would be acceptable as long as it takes #4 and #7-8 into account.
    6. User profile data needs to be consistent throughout: My Docs, Outlook, IE favorites, etc.
    7. Things such as cached IE passwords (for SharePoint), Outlook settings and other user customization need to be carried over as well.
    8. As such, something needs to account for the information in AppData/Roaming, /LocalLow and /Local between these RDS servers.
    9. Ideally, the less you have to cache during each logon the better, in order to reduce login times.
    I've almost never heard anything positive about using roaming profiles, but is this one of those rare exceptions? Even if we do that, I don't believe that covers the information in <User>/AppData/*  (or does it?), so what would be the best
    way to make sure that gets carried over between sessions inside the pool or pools?
    The current solution involves using 3rd party apps, registry hacks, GPOs and a mashup of other things and is generally considered to be a poor fit for the environment. A significant rework is expected and acceptable. Thinking outside the box is fine!
    I would relish any advice on the best solutions for deployment! Thank you!

    Hi Ben,
    Thank you for posting in Windows Server Forum.
    Please check the blogs and documents below, which help explain the basic requirements and walk through setting up the new environment.
    1. Remote Desktop Services Deployment Guide
    (Doc)
    2. Step by Step Windows 2012 R2 Remote Desktop Services –
    Part 1, 2,3 & 4
    3.Deploying a 2012 / 2012R2 Remote Desktop Services (RDS) farm
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best practice for management of Portal personalisation

    Hi
    I've been working in our sandpit client, figuring out all the portal personalisation and backend/frontend customisation for ESS (with some great help from SDN). However, I'm not clear on what best practice is when doing this for 'real'.
    I've read in places that objects to be personalised should be copied first. Elsewhere I've read about creating a new portal desktop for changes to the default framework page, etc.
    What I'm not clear on is the strategy for how you manage and control all the personalisation changes. For example, if you copy an iView and personalise it, where do you copy it to? Do you create "Z" versions of all the folders of the standard content, or something else?
    Do you copy everything at the lowest possible object level, or the highest?
    What are the implications of applying support or enhancement packs?
    We're going pretty vanilla to begin with, in that there will just be a single ESS role, so things should be fairly simple from that point of view, but I've trawled all around SDN without being able to find clear guidance on the overall best practice process for personalisation.
    Can anyone help?
    Regards
    Phil
    Edited by: Phil Martin on Feb 15, 2010 6:47 PM

    Hi Phil,
    To keep things organized I would create a new folder (so don't use the desktop one). The new folder should sit under Portal Content. In that folder create a folder for ESS, then inside that one for iViews (e.g. COMPANY_XYZ >>> ESS >>> iViews), then copy via delta link the iViews you want from the standard SAP folders.
    Then you will have to create another folder for your own roles (e.g. ESS >>> Roles). Inside this folder create a new role and add your iViews (again via delta link). Or you could just copy one of the standard SAP roles into this folder and not bother with copying the iViews. What you do depends on how many changes you intend to make; sometimes, if the changes are only small, it is easier to copy the entire SAP role (via delta link) and make your changes.
    You should end up with something like this in your PCD:
    Portal Content
         COMPANY_XYZ
              ESS
                   iViews
                        iView1
                        iView2 etc..
                   Roles
                        Role1
                        Role2 etc...
    Then don't forget to assign your new roles to your users or groups.
    Hope that makes sense.
    BRgds,
    Simon
    Edited by: Simon Kemp on Feb 16, 2010 3:56 PM

  • Best practice for a distributed BI4 deployment?

    Hello Experts,
    Need suggestion on the following?
    We plan to deploy the BI4 software across 6 Servers (1 node on each server)
    We plan to dedicate 2 servers for the Management/Intelligent tier and 4 servers for the processing Tier (Webi, crystal and dashboards)
    WebApp and Database will be hosted on separate servers and will not be part of the above 6 servers.
    Option 1:
    -Install BI4 on the first server (Start a new deployment)
    -Then on the remaining 5 servers, use the option (Expand an existing deployment)
    In this approach a CMS will be created on all 6 nodes; however, since our requirement is to have a CMS on only 2 nodes (stopping it on the remaining 4 servers after install), is this a suggested approach? Since the CMSes poll each other a lot, we feel this might not be the best approach. Is there a possibility of a negative impact on the overall platform if such an approach is taken?
    Or should we consider installing the BI4 software independently on each server, clustering the first 2 nodes with all default services, and then adding the other 4 nodes to this cluster (but creating only processing servers on those 4 nodes)?
    Look forward to your recommendations!
    Cheers,
    Vikram

    Hello,
    maybe you want to check out the SAP BI Pattern Book?!
    SAP Business Intelligence Platform Pattern Books - Business Intelligence (BusinessObjects) - SCN Wiki
    Regards
    -Seb.

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24*7 systems; I've never worked with Clusterware and RAC before.
    Two space problems:
    1) Very large listener_scan2.log in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4 GB used up)
    Welcome advice on the best way to manage these in the short term (i.e. delete manually), and on the recommended and safest long-term practice (ADRCI maybe? Not sure how it works with scan listeners).
    Welcome advice and commands that could be used to safely clean these up and put a robust mechanism in place for logfile management in RAC and Clusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n days. Once you remove an active file, it should be recreated and logging will continue without interruption.
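    For the short-term cleanup, something along these lines run from cron would keep both directories in check (a sketch; the server name in the path and the retention periods are assumptions, so check them against your audit policy first):
    #!/bin/sh
    # Sketch: housekeep the scan listener trace and alert logs.
    LOGDIR=/u01/11.2.0/grid/log/diag/tnslsnr/myserver/listener_scan2
    # Compress alert XML logs older than 7 days; remove archives older than 90.
    find "$LOGDIR/alert" -name 'log_*.xml' -mtime +7 -exec gzip {} \;
    find "$LOGDIR/alert" -name 'log_*.xml.gz' -mtime +90 -exec rm {} \;
    # Truncate (rather than delete) the active trace file so the listener keeps writing.
    cat /dev/null > "$LOGDIR/trace/listener_scan2.log"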

  • JSF best practice for managed bean population

    hi;
    consider a simple session-scoped managed bean with one attribute which is calculated dynamically.
    There seem to be 3 ways of populating that attribute (that I could think of):
    1. have the bean's get<attribute> method get the data and populate it
    2. have the constructor of the bean get the data and populate it
    3. have an action get the data, and set it from the action code (maybe use a valueBinding to get the bean, if the bean which needs population is not the same bean that holds the action)
    Option 1 seems OK for situations where I need my data to be calculated for every valueRef to that attribute, but is an overhead for data that need not be calculated every time.
    Option 2 will ensure that the data is calculated only once per session, but it still looks kinda strange to do it that way.
    With option 3, it seems you populate the dynamic content of the next page, which is intuitive; but some places in the page might be common content with dynamic data, which has nothing to do with navigation or the next page to be displayed.
    Is there a best practice for how to populate a managed bean's data?
    Do different cases fit different ways?
    Any thoughts are appreciated.

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.
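    A common compromise between options 1 and 2 is lazy initialization inside the getter, so the value is computed only once per session but not before it is first referenced (a sketch; the bean and lookup method are hypothetical):
    import java.util.ArrayList;
    import java.util.List;
    // Session-scoped managed bean with a lazily computed attribute.
    public class CustomerBean {
        private List customers;   // cached after the first valueRef
        public List getCustomers() {
            if (customers == null) {
                customers = loadCustomers();   // computed once, on first access
            }
            return customers;
        }
        private List loadCustomers() {
            // placeholder for the real lookup (DB, web service, ...)
            return new ArrayList();
        }
    }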

  • Any "best practices" for managing a 1.3TB iPhoto library?

    Does anyone have any "best practices" or suggestions for managing and dealing with a large iPhoto library?  I currently have a 1.3 TB library.  This is made up of anything shot in the past 8 years culminating with the past 2 years being 5D Mark II images.  This also includes a big dose of 1080P video shot with the camera.  These are only our family photos so I would hate to break up the library into years as that would really hurt a lot of iPhotos "power".
    It runs fine in day to day use, but I recently tried to upgrade to iPhoto 11 and it crashes repeatedly during the upgrading library process.
    (I have backups, so no worries there.)
    I just know with Lion and iPhoto 9 being a bit old my upgrade day is coming and I'm not sure what my path is.

    If you have both versions of iPhoto on your Mac then try the following: while running iPhoto 09, create a new test library and import a few photos into it. Then try to convert that test library with iPhoto 11. If it converts OK, then your big library is causing the problem.
    If that's the case you can try rebuilding your working library as follows: make a temporary backup copy of the library, then launch iPhoto with the Command+Option keys held down and rebuild the library.
    select the options identified in the screenshot.
    Once rebuilt, try converting it to iPhoto 11.
    NOTE:  if you already have a backup copy of your library you don't need to make the temporary backup copy.
    OT
