Best Practice for Laptop in Field, Server at Home

I'm sure this is a common workflow. If somebody has a link to the solution, please pass it along here.
I keep all my images on a server at home. I would like to keep that as the repository for all my images.
I also have a laptop that I will use in the field to edit work in Lightroom. When I get back home I would like to move those images onto my home server (or do it from the field via VPN), but I would like the catalog to keep the editing settings attached to the images (which would then live on the server and be deleted from the laptop). Then if I open them on my desktop (accessed from the server), I'd like all the editing I've already done to show up. And if I go back to the MacBook (working from images on the server), all the edits should still be available.
Is this possible? Can anybody give me an idea of the best way to solve this?

Version 1.1 will add much of the fundamental database functionality that is currently missing from V1, and should make the database a practical tool.
If you are doing individual jobs that you need to carry around between your laptop and your desktop, I recommend using multiple individual databases. You can copy (in actuality, move) the database between your laptop and your desktop much more easily because the size remains manageable, and you can even do slideshows and sorting without access to the originals. Using one big database is impractical because the thumbnail folders get so humongous.
It is a rather kludgy workaround, but it sure beats not being able to share a database between a laptop and a desktop system.
Another option is to keep the Lightroom databases on a removable hard drive, and just use that as your 'main' storage, with your backups on your real main storage. If you keep your originals on the same drive, you can do all your work this way, although you may have to 'find' your folders with your originals every time you move between the different systems.
Again, even when using a removable drive, using small separate databases seems to be the only way to go for now.
The XMP path is a terrible workaround IMHO, since the databases get all out of sync between systems, requiring lots of maintenance, and not everything transfers back and forth.

Similar Messages

  • Best Practice(s) for Laptop in Field, Server at Home? (Lightroom 3.3)

    Hi all!
    I just downloaded the 30-day evaluation of Lightroom, now trying to get up to speed. My first task is to get a handle on where the files (photos, catalogs, etc.) should go, and how to manage archiving and backups.
    I found a three-year-old thread titled "Best Practice for Laptop in Field, Server at Home" and that describes my situation, but since that thread is three years old, I thought I should ask again for Lightroom 3.3.
    I tend to travel with my laptop, and I'd like to be able to import and adjust photos on the road. But when I get back home, I'd like to be able to move selected photos (or potentially all of them, including whatever adjustments I've made) over to the server on my home network.
    I gather I can't keep a catalog on the server, so I gather I'll need two Lightroom catalogs on the laptop: one for pictures that I import to the laptop, and another for pictures on the home server -- is that right so far?
    If so, what's the best procedure for moving some/all photos from the "on the laptop catalog" to the "on the server catalog" -- obviously, such that I maintain adjustments?
    Thanks kindly!  -Scott

    Hi TurnstyleNYC,
    Yes, I think we have the same set-up.
    I only need 1 LR-catalog, and that is on the laptop.
    It points to the images wherever they are stored: initially on the laptop; later on I move some of them (once I am fairly done with developing) within LR by drag & drop onto the network storage. The catalog on the laptop then always knows where they are.
    I can still continue to work on the images on the network storage (slightly slower than on laptop's hard drive) if I still wish to.
    While travelling, I can also work on metadata / keywording, although without access to my home network the images themselves are offline for develop work.
    2 separate catalogs would be very inconvenient, as I would always have to remember whether I had already moved some images, and collections mixing images on the laptop with images on the network would not be possible.
    Remember: a LR catalog is just a database with entries about images and the pointer to their storage location.
    You can open only 1 DB of this sort at a time.
    There is no technical reason for limiting the size of a LR catalog - I have read of people with several hundred thousand images in one.
    The only part of this setup that really keeps growing on my laptop is the previews folder "<catalog name> Previews.lrdata". I render standard previews so that I can do most of the work for offline images while travelling.
    The catalog itself, "<catalog name>.lrcat", grows much more slowly. It is now 630 MB for 60,000+ images, whereas the previews folder is 64 GB.
    So yes, I dedicate quite a chunk of my laptop hard disk to that. I could define standard previews somewhat smaller, to fit the laptop's screen resolution, but then when working at home with a bigger external monitor LR would constantly re-render for the size difference, which is why I have defined the standard preview size for my external monitor. That may turn out to be the weakness of my setup long-term.
    That is all that is needed in terms of Lightroom setup.
    What you need additionally, to cover potential drive failure, is not a Lightroom matter but *usual common backup sense*, guided by the question "what can be recreated after a failure, and at what effort?" Therefore I do not back up the previews, but I do back up the images themselves very thoroughly, as well as the catalog/catalog backups, and for convenience my LR presets.
    Message was edited by: Cornelia-I: sorry, initially I had written "1:1-previews", but "standard previews" is correct.

  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I have seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes is up (and the quorum disk is available).
    I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V cluster R2 SP1 with CSV. I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been "Paused", and yes, everything was just fine. I then did Windows Update and restarted, no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software etc. During this PSP update, node nr 02 suddenly failed. That node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Does a change in "Network Connections" on nodes in "Pause" mode affect other nodes in the cluster?
    The networks are listed as "Up" during Pause mode, so the only thing I can think of is that during PSP's driver/software update, NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Pause" mode, then I stop the cluster service (and set it to disabled), making sure nothing can affect the cluster.
    Anders

  • 2K8 - Best practice for setting the DNS server list on a DC/DNS server for an interface

    We have been referencing the article 
    "DNS: DNS servers on <adapter name> should include their own IP addresses on their interface lists of DNS servers"
    http://technet.microsoft.com/en-us/library/dd378900%28WS.10%29.aspx but there are some parts that are a bit confusing. In particular is this statement:
    "The inclusion of its own IP address in the list of DNS servers improves performance and increases availability of DNS servers. However, if the DNS server is also a domain controller and it points only to itself for name resolution, it can become an island and fail to replicate with other domain controllers. For this reason, use caution when configuring the loopback address on an adapter if the server is also a domain controller. The loopback address should be configured only as a secondary or tertiary DNS server on a domain controller."
    The paragraph switches from using the term "its own IP address" to "loopback address". This is confusing because technically they are not the same: loopback addresses are 127.0.0.1 through 127.255.255.255. The resolution section then goes on and adds the "loopback address" 127.0.0.1 to the list of DNS servers for each interface.
    In the past we always set up DCs to use their own IP address as the primary DNS server, not 127.0.0.1. Based on my experience and reading the article, I am under the impression we could use the following setup.
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  127.0.0.1
    I guess the secondary and tertiary addresses could be swapped based on the article. Is there a document that provides clearer guidance on how to set up the DNS server list properly on Windows 2008 R2 DC/DNS servers? I have seen some other discussions that talk about the pros and cons of using another DC/DNS as the primary. MS should have clear guidance on this somewhere.

    Actually, my suggestion, which seems to be the mostly agreed method, is:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    The tertiary more than likely won't be hit (besides being superfluous, the list will reset back to the first one) due to the client-side resolver algorithm's time-out process, as I mentioned earlier. Here's a full explanation of how it works and why:
    This article discusses:
    WINS NetBIOS, Browser Service, Disabling NetBIOS, & Direct Hosted SMB (DirectSMB).
    The DNS Client Side Resolver algorithm.
    If one DC or DNS goes down, does a client logon to another DC?
    DNS Forwarders Algorithm and multiple DNS addresses (if you've configured more than one forwarder)
    Client side resolution process chart
    http://msmvps.com/blogs/acefekay/archive/2009/11/29/dns-wins-netbios-amp-the-client-side-resolver-browser-service-disabling-netbios-direct-hosted-smb-directsmb-if-one-dc-is-down-does-a-client-logon-to-another-dc-and-dns-forwarders-algorithm.aspx
    DNS Client side resolver service
    http://technet.microsoft.com/en-us/library/cc779517.aspx 
    The DNS Client Service Does Not Revert to Using the First Server in the List in Windows XP
    http://support.microsoft.com/kb/320760
    Ace Fekay
    MVP, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
    I agree with this proposed solution as well:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    One thing to note, in this configuration the Best Practice Analyzer will throw the error:
    The network adapter Local Area Connection 2 does not list the loopback IP address as a DNS server, or it is configured as the first entry.
    Even if you add the loopback address as a Tertiary DNS address the error will still appear. The only way I've seen this error eliminated is to add the loopback address as the second entry in DNS, so:
    Primary DNS:  The assigned IP of another DC (i.e. 192.168.1.6)
    Secondary DNS: 127.0.0.1
    Tertiary DNS:  empty
    I'm not comfortable not having the local DC/DNS address listed so I'm going with the solution Ace offers.
    Opinion?

  • Best Practices for Packaging and Deploying Server-Specific Configurations

    We have some server-specific properties that vary for each server. We'd
    like to have these properties collected together in their own properties
    file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as an
    ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored
    external to the ear on the server itself, so that the production folks can
    configure each server's properties at the server. But it appears that an
    application can't access a file external to the ear, or at least we can't
    figure out the magic to do it. If there is a way to do this, please let me
    know how.
    Or do we have to build a unique ear for each server? This is possible, of
    course, but we'd prefer to build one deployment package (ear), and then
    ship that off to each server that is already configured for its specific
    environment. We have some audit requirements where we need to ensure that
    an ear that has been tested by QA is the very same ear that has been
    deployed, but if we have to build one for each server, this is not
    possible.
    Any help or pointers would be most appreciated. If this is an old issue,
    my apologies, would you please point me to any previous material to read?
    I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA
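
    One common pattern for this, sketched below (the property name, path, and file name are made up for illustration, and this is not tied to any particular app server): have the server's startup script pass a JVM system property that points at an external configuration directory, and let the application load its server-specific properties from there at runtime. The same tested ear can then be shipped to every server unchanged; only the external file differs per machine.

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public class ExternalConfig {

            /**
             * Loads server-specific settings from a directory that lives outside the ear.
             * The directory is supplied by the server's startup script, for example:
             *   java -Dconfig.dir=/opt/myapp/conf ...
             */
            public static Properties load() throws IOException {
                String configDir = System.getProperty("config.dir", ".");
                Properties props = new Properties();
                InputStream in = new FileInputStream(configDir + "/server.properties");
                try {
                    props.load(in);
                } finally {
                    in.close();
                }
                return props;
            }
        }

    Because the file is read at runtime rather than at build time, this also keeps the environment values late-binding, which is the concern raised in the reply below.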

    The one drawback to this is that you have to go all the way back to Ant and the build system to make changes. You really want these env variables to be late binding.
    cheers
    mbg
    "Sai S Prasad" <[email protected]> wrote in message
    news:[email protected]...
    Paul,
    I have a similar situation in our project and I don't create ear files specific to the environment. I do the following:
    1) Create a .properties file for every environment with the same attribute names but different values in it. For example, I have phoenix.properties.NT, phoenix.properties.DEV, phoenix.properties.QA, phoenix.properties.PROD.
    2) Use Ant to compile, package and deploy the ear file.
    I have a .bat file on NT and a .sh for Solaris that in turn calls ant.bat or ant.sh respectively. To the wrapper batch file or shell script you can pass the name of the environment. The wrapper batch file will copy the appropriate properties file to "phoenix.properties". In the Ant build.xml I always refer to phoenix.properties, which is available all the time, whatever the environment.
    It works great and I can't think of any other flexible way. Hope that helps.

  • Best practice for deploying the license server

    Was wondering if there is a best practice guideline or a rule of thumb out there for deploying the license server. For instance, is it better to have one license server that all your products (dev, QA, prod) connect to, or is it better to have a license server for each deployment, i.e. one for dev, one for QA, etc.?


  • Best Practice for General User File Server HA/Failover

    Hi All,
    Looking for some general advice or documentation on recommended approaches to file storage. If you were in our position, how would you approach adding more robustness to our setup?
    We currently run a single 2012 R2 VM with around 6TB of user files and data. We deduplicate the volume and use quotas.
    We need a solution that provides better redundancy than a single VM. If that VM goes offline, how do we maintain user access to the files?
    We use DFS to publish file shares to users and machines.
    Solutions I have researched with potential draw backs:
    Create a guest VM cluster and use a Continuously Available File Share (not SOFS)
     - This would leave us without support for deduplication (we get around 50% savings atm and space is tight).
    Create a second VM and add it as a secondary DFS folder target, configure replication between the two servers
     - Is this the preferred enterprise approach to share availability? How will hosting user shares (documents etc...) cope in a replication environment?
    Note: we have run a physical clustered file server in the past with great results except for the ~5 mins downtime when failover occurs.
    Any thoughts on where I should be focusing my efforts?
    Thanks

    If you care about performance and real failover transparency then a guest VM cluster is the way to go (compared to DFS, of course). I don't get your point about "no deduplication": you can still use dedupe inside your VM, just make sure you "shrink" the VHDX from time to time to give space back to the host file system. See:
    Using Guest Clustering for High Availability
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Super-fast Failovers with VM Guest Clustering in Windows Server 2012 Hyper-V
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
    Can't shrink vhdx file after applying deduplication
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/533aac39-b08d-4a67-b3d4-e2a90167081b/cant-shrink-vhdx-file-after-applying-deduplication?forum=winserver8gen
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Best practice for JSON-REST client server programming

    I used SOAP quite a bit a while back, but now on a new project I have to get a handle on JSON-REST communication.
    Basically I have the following resource on the server side
    import org.json.JSONObject;
    import org.restlet.resource.Get;
    import org.restlet.resource.ServerResource;

    /**
     * Resource which has only one representation.
     */
    public class UserResource extends ServerResource {

         User user1 = new User("userA", "secret1");
         User user2 = new User("userB", "secret2");
         User user3 = new User("userC", "secret3");

         @Get
         public String represent() {
              // Serialize the model object to JSON before sending it to the client.
              return user1.toJSONobject().toString();
         }

         public static class User {
              private String name;
              private String pwd;

              public User(String name, String pwd) {
                   this.name = name;
                   this.pwd = pwd;
              }

              public JSONObject toJSONobject() {
                   JSONObject jsonRepresentation = new JSONObject();
                   jsonRepresentation.put("name", name);
                   jsonRepresentation.put("pwd", pwd);
                   return jsonRepresentation;
              }
         }
    }

    and my mapping defined as

         <servlet>
              <servlet-name>RestletServlet</servlet-name>
              <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
              <init-param>
                   <param-name>org.restlet.application</param-name>
                   <param-value>firstSteps.FirstStepsApplication</param-value>
              </init-param>
         </servlet>
         <!-- Catch all requests -->
         <servlet-mapping>
              <servlet-name>RestletServlet</servlet-name>
              <url-pattern>/user</url-pattern>
         </servlet-mapping>

    and I have a test client as follows

         HttpClient httpclient = new DefaultHttpClient();
         try {
              HttpGet httpget = new HttpGet("http://localhost:8888/user");
              // System.out.println("executing request " + httpget.getURI());
              // Create a response handler that returns the body as a String
              ResponseHandler<String> responseHandler = new BasicResponseHandler();
              String responseBody = httpclient.execute(httpget, responseHandler);
              // Rebuild the User object from the received JSON
              JSONObject obj = new JSONObject(responseBody);
              String name = obj.getString("name");
              String pwd = obj.getString("pwd");
              UserResource.User user = new UserResource.User(name, pwd);
         } finally {
              httpclient.getConnectionManager().shutdown();
         }

    Everything works fine and I can retrieve my User object on the client side.
    What I would like to know is:
    Is this how the server side typically works: do you need to implement a method to convert your model class to a JSON object for sending to the client?
    On the client side, do you need to implement code that knows how to build a User object from the received JSON object?
    Basically, are there any frameworks available I could leverage to do this work?
    Also, what would I need to do on the server side to allow a client to request a specific user using a URL like localhost:8888/user/user1?
    I know a mapping like /user/* would direct the request to the correct Resource on the server side, but how would I pass the "user1" parameter to the Resource?
    Thanks
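
    As a rough sketch of the second part of the question (an untested outline based on the snippet above, not a verified Restlet recipe): widen the servlet url-pattern from /user to /user/*, attach the resource to a URI template such as /user/{name} in the Restlet application's router, and read the {name} segment as a request attribute inside the resource. For the object-mapping part, libraries such as Jackson or Gson are commonly used to convert POJOs to and from JSON instead of hand-written toJSONobject() methods.

         import org.restlet.Application;
         import org.restlet.Restlet;
         import org.restlet.routing.Router;

         public class FirstStepsApplication extends Application {
              @Override
              public Restlet createInboundRoot() {
                   Router router = new Router(getContext());
                   // "{name}" is captured from the URL, e.g. /user/user1 gives attribute name = "user1"
                   router.attach("/user/{name}", UserResource.class);
                   return router;
              }
         }

    and inside UserResource the captured value can be read like this:

         @Get
         public String represent() {
              // The URI template variable is exposed as a request attribute.
              String requestedName = (String) getRequest().getAttributes().get("name");
              // Look up the matching user here instead of always returning user1.
              return user1.toJSONobject().toString();
         }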


  • Best practice for Admin viewing contents of network homes

    How are you viewing the contents of your users' network home directories in the gui?
    Is there a better way than logging in locally as root? I'd like to do this over AFP if possible.
    Can I make a HomeAdmins group and propagate that group to have read access to all users' home folders? How about for new homes that are subsequently created?
    Thanks,
    b.

    You probably know this already, but:
    1. Nothing bad should happen if you change the group owner of your home directories unless you're using the current group ownership for something important.
    2. If you set the setgid bit on the root directory of the sharepoint and it is owned by the admin group then new folders created within should have the group owner you want. There are various ways to ensure the home directories would have the proper permissions.

  • Best Practice for a Print Server

    What is the best practice for a print server serving over 25 printers, 10 of which are colour lasers and the rest black and white lasers?
    Hardware
    At the moment we have one server, a 2GHz Dual G5 with 4GB RAM and an Xserve RAID. The server is also our main Open Directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for an optimal print server.
    Thanks

    Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
    Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put tiger on that.
    Good luck!
    -Gregg

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U, never heard of this until now and we have zero documentation on how to set it up. I was given the task to look at best practice for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with sub-categories like programs/courses, and within that you have things like lecture audio/video files that students can download/view in iTunes. So where are these files hosted? Is it on your own server or on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll. My question is what is the best method to configure them so they meet our needs of "high availability in active/active mode, load balancing, and server scaling". Originally I was thinking about using a 3rd-party load balancing device to meet these needs, but I was told there is no budget for it, so this is not going to happen. I know there is IP failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): given that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage), if we go with DNS round robin and put the 2 servers in active/active mode, can both servers access a common shared network space? Or is this not possible, so each server must have its own storage space, and therefore I must use something like rsync to make sure the contents on both servers are identical? Should I use Xsan or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    What is the best practice for having a server pool, i.e.
    1) having a single large server pool consisting of "n" number of guest VMs, or
    2) having multiple small server pools consisting of a smaller number of guest VMs?
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is Official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have up to 20 servers in it: OCFS2 starts to strain after that.

  • BEST PRACTICE FOR AN EFFICIENT SEARCH FACILITY

    Good Morning,
    Whilst in training, our trainer said that the best efficiency from SharePoint Search would come from installing the Search Facility on a separate server (hardware).
    Not sure how to have this process done.
    Your advice and recommendation would be greatly appreciated.
    thanks a mil.
    NRH

    Hi,
    You can create a dedicated search server that hosts all search components, the query and index roles, and crawl, all on one physical server.
    Here are some articles for your reference:
    Best practices for search in SharePoint Server 2010:
    http://technet.microsoft.com/en-us//library/cc850696(v=office.14).aspx
    Estimate performance and capacity requirements for SharePoint Server 2010 Search:
    http://technet.microsoft.com/en-us/library/gg750251(v=office.14).aspx
    Below is a similar post for your reference:
    http://social.technet.microsoft.com/Forums/en-US/be5fcccd-d4a3-449e-a945-542d6d917517/setting-up-dedicated-search-and-crawl-servers?forum=sharepointgeneralprevious
    Best regards
    Wendy Li
    TechNet Community Support

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 R2 Hyper-V virtualized AD DC that will be running on a new WinSrv 2012 R2 host server? (This will be for a brand new network setup: new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non-virtual and virtual volumes/partitions/drives,
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),
    IDE vs SCSI and drivers, both non-virtual and virtual, and the booting thereof,
    disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non-essential additional info:
    We are a small-to-midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undergo this project, and most of the information was pretty solid with little ambiguity, except for information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to doing this under Server 2008 R2, and we haven't really seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best practice for "Quantity" field in Asset Master

    Hi
    I want to know what the best practice is for the "Quantity" field in the asset master: should it be display-only or a required field in asset master creation?
    Initially I made this field a required entry, so the user entered a quantity of 1. At the time of posting F-90, he entered the quantity again, so the quantity in the asset master got increased. Hence I decided to make that field display-only in asset master creation.
    Now I have made that field display-only in asset master creation, and at the time of posting F-90 the quantity field does not appear at all. I checked my field status group for the posting key as well as the GL account; it is an optional field. In spite of that, the user is able to make an entry in F-90. Now the quantity field is '0' in the asset master even though there is some value in the asset.
    Please advise what the best practice is regarding the quantity field: should it be open in the asset master or should it be display-only?

    Hi:
               SAP standard does not recommend that you update the quantity field in asset master data. Just leave the Qty field blank and mention the unit of measure as EA. When you post an acquisition through F-90 or MIGO, this field will get updated in the asset master data automatically. Hope this will help you.
    Regards

  • Best practice for install oracle 11g r2 on Windows Server 2008 r2

    Dear all,
    May I know what the best practice is for installing Oracle 11g R2 on Windows Server 2008 R2? Should I create a special Windows account for the Oracle database installation? What permissions should I grant on the folders where Oracle is installed and where the database-related files are located (datafiles, controlfiles, etc.)?
    Just grant Full Control to Administrators and System and remove permissions for all other accounts?
    Also, how should I configure Windows Firewall to allow clients to connect to the database?
    Thanks for your help.

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira
