Microsoft best practices for patching a Cluster server

Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I'll list what I've seen online; the third one was very good:
Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
Failover Clusters in Windows Server 2008 R2
http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
Patching Windows Server Failover Clusters
http://support.microsoft.com/kb/174799/i

Hi Vincent!
I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes is up (and the quorum disk is available).
I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V R2 SP1 cluster with CSV. I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been "Paused", and yes, everything was just fine. I then ran Windows Update and restarted; no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software, etc. During this PSP update, node nr 02 suddenly failed. That node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
Do changes in "Network Connections" on nodes in "Paused" mode affect other nodes in the cluster?
The networks are listed as "Up" during Paused mode, so the only thing I can think of is that during PSP's driver/software update, NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Paused" mode, then I stop the cluster service (and set it to Disabled), making sure nothing can affect the cluster.
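In script form, that sequence looks roughly like this (a sketch only, run on the node itself; the node name is a placeholder, and the FailoverClusters cmdlets assume Windows Server 2008 R2 or later):

import subprocess

NODE = "node07"  # placeholder node name

def ps(cmd):
    # Run a PowerShell command and fail loudly on error.
    subprocess.run(["powershell.exe", "-Command", cmd], check=True)

# Pause the node first (SCVMM maintenance mode handles the live migrations).
ps(f"Import-Module FailoverClusters; Suspend-ClusterNode -Name {NODE}")

# Then stop the cluster service and disable it so vendor tooling
# (driver/firmware updates) cannot disturb the cluster while it runs.
ps("Stop-Service -Name ClusSvc")
ps("Set-Service -Name ClusSvc -StartupType Disabled")

# ... run PSP / vendor updates here, rebooting as needed ...

# Afterwards, re-enable and resume the node.
ps("Set-Service -Name ClusSvc -StartupType Automatic")
ps("Start-Service -Name ClusSvc")
ps(f"Resume-ClusterNode -Name {NODE}")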
Anders

Similar Messages

  • Best Practices for patching Exchange 2010 servers.

    Hi Team,
    Looking for best practices on patching Exchange Server 2010,
    like precautions, steps, and pre- and post-patching checks.
    Thanks. 

    Are you referring to Exchange updates? If so:
    http://technet.microsoft.com/en-us/library/ff637981.aspx
    Install the Latest Update Rollup for Exchange 2010
    http://technet.microsoft.com/en-us/library/ee861125.aspx
    Installing Update Rollups on Database Availability Group Members
    Key points:
    Apply in role order
    CAS, HUB, UM, MBX
    If you have CAS roles in an array or load-balanced, they should all be at the same SP/RU level, so coordinate the Exchange updates and add/remove nodes as needed so you do not run for an extended time with different Exchange levels in the same array.
    All the DAG nodes should be at the same rollup/SP level as well. See the above link on how to accomplish that.
    If you are referring to Windows Updates, then I typically follow the same install pattern:
    CAS, HUB, UM, MBX
    With Windows updates, however, I tend not to worry about suspending activation on the DAG members; rather, I simply move the active mailbox copies, apply the update, and reboot if necessary.
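    A sketch of that pattern for one DAG member (assuming the Exchange 2010 snap-in is available on the box; the server name is a placeholder):

    import subprocess

    SERVER = "MBX1"  # placeholder DAG member name

    def exch(cmd):
        # Load the Exchange 2010 snap-in, then run the command.
        full = f"Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010; {cmd}"
        subprocess.run(["powershell.exe", "-Command", full], check=True)

    # Move all active database copies off the server before patching it.
    exch(f"Move-ActiveMailboxDatabase -Server {SERVER} -Confirm:$false")

    # ... apply Windows updates on the server and reboot if necessary ...

    # Check copy health before moving on to the next DAG member.
    exch(f"Get-MailboxDatabaseCopyStatus -Server {SERVER}")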

  • 2K8 - Best practice for setting the DNS server list on a DC/DNS server for an interface

    We have been referencing the article 
    "DNS: DNS servers on <adapter name> should include their own IP addresses on their interface lists of DNS servers"
    http://technet.microsoft.com/en-us/library/dd378900%28WS.10%29.aspx but there are some parts that are a bit confusing.  In particular is this statement
    "The inclusion of its own IP address in the list of DNS servers improves performance and increases availability of DNS servers. However, if the DNS server is also a domain
    controller and it points only to itself for name resolution, it can become an island and fail to replicate with other domain controllers. For this reason, use caution when configuring the loopback address on an adapter if the server is also a domain controller.
    The loopback address should be configured only as a secondary or tertiary DNS server on a domain controller.”
    The paragraph switches from using the term "its own IP address" to "loopback address". This is confusing because technically they are not the same: loopback addresses are 127.0.0.1 through 127.255.255.255. The resolution section then
    goes on and adds the "loopback address" 127.0.0.1 to the list of DNS servers for each interface.
    In the past we always setup DCs to use their own IP address as the primary DNS server, not 127.0.0.1.  Based on my experience and reading the article I am under the impression we could use the following setup.
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  127.0.0.1
    I guess the secondary and tertiary addresses could be swapped based on the article.  Is there a document that provides clearer guidance on how to setup the DNS server list properly on Windows 2008 R2 DC/DNS servers?  I have seen some other discussions
    that talk about the pros and cons of using another DC/DNS as the Primary.  MS should have clear guidance on this somewhere.

    Actually, my suggestion, which seems to be the mostly agreed method, is:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    The tertiary more than likely won't be hit (besides being superfluous, the list will reset back to the first one) due to the client-side resolver algorithm's timeout process, as I mentioned earlier. Here's a full explanation of how
    it works and why:
    This article discusses:
    WINS NetBIOS, Browser Service, Disabling NetBIOS, & Direct Hosted SMB (DirectSMB).
    The DNS Client Side Resolver algorithm.
    If one DC or DNS goes down, does a client logon to another DC?
    DNS Forwarders Algorithm and multiple DNS addresses (if you've configured more than one forwarders)
    Client side resolution process chart
    http://msmvps.com/blogs/acefekay/archive/2009/11/29/dns-wins-netbios-amp-the-client-side-resolver-browser-service-disabling-netbios-direct-hosted-smb-directsmb-if-one-dc-is-down-does-a-client-logon-to-another-dc-and-dns-forwarders-algorithm.aspx
    DNS Client side resolver service
    http://technet.microsoft.com/en-us/library/cc779517.aspx 
    The DNS Client Service Does Not Revert to Using the First Server in the List in Windows XP
    http://support.microsoft.com/kb/320760
    Ace Fekay
    MVP, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
    I agree with this proposed solution as well:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    One thing to note, in this configuration the Best Practice Analyzer will throw the error:
    The network adapter Local Area Connection 2 does not list the loopback IP address as a DNS server, or it is configured as the first entry.
    Even if you add the loopback address as a Tertiary DNS address the error will still appear. The only way I've seen this error eliminated is to add the loopback address as the second entry in DNS, so:
    Primary DNS:  The assigned IP of another DC (i.e. 192.168.1.6)
    Secondary DNS: 127.0.0.1
    Tertiary DNS:  empty
    I'm not comfortable not having the local DC/DNS address listed so I'm going with the solution Ace offers.
    Opinion?
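    For what it's worth, the agreed-on order can be applied from a script with netsh; a sketch below, using the example addresses from this thread (the adapter name varies per server):

    import subprocess

    ADAPTER = "Local Area Connection"  # adapter name varies per server
    SELF_IP = "192.168.1.5"            # this DC's own address
    PARTNER = "192.168.1.6"            # another DC/DNS server

    def run(args):
        subprocess.run(args, check=True)

    # Primary: the DC's own IP. Secondary: the partner DC. No tertiary.
    run(["netsh", "interface", "ip", "set", "dns",
         f"name={ADAPTER}", "source=static", f"addr={SELF_IP}"])
    run(["netsh", "interface", "ip", "add", "dns",
         f"name={ADAPTER}", f"addr={PARTNER}", "index=2"])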

  • Best Practices for Packaging and Deploying Server-Specific Configurations

    We have some server-specific properties that vary for each server. We'd
    like to have these properties collected together in their own properties
    file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as an
    ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored
    external to the ear on the server itself, so that the production folks can
    configure each server's properties at the server. But it appears that an
    application can't access a file external to the ear, or at least we can't
    figure out the magic to do it. If there is a way to do this, please let me
    know how.
    Or do we have to build a unique ear for each server? This is possible, of
    course, but we'd prefer to build one deployment package (ear), and then
    ship that off to each server that is already configured for its specific
    environment. We have some audit requirements where we need to ensure that
    an ear that has been tested by QA is the very same ear that has been
    deployed, but if we have to build one for each server, this is not
    possible.
    Any help or pointers would be most appreciated. If this is an old issue,
    my apologies, would you please point me to any previous material to read?
    I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA

    The one drawback to this is you have to go all the way back to Ant and the
    build system to make changes. You really want these env variables to be
    late-binding.
    cheers
    mbg
    "Sai S Prasad" <[email protected]> wrote in message
    news:[email protected]...
    Paul,
    I have a similar situation in our project and I don't create ear files specific
    to the environment. I do the following:
    1) Create a .properties file for every environment with the same attribute names
    but different values in it. For example, I have phoenix.properties.NT, phoenix.properties.DEV,
    phoenix.properties.QA, phoenix.properties.PROD.
    2) Use Ant to compile, package and deploy the ear file.
    I have a .bat file on NT and a .sh for Solaris that in turn calls ant.bat or
    ant.sh respectively. To the wrapper batch file or shell script, you can pass
    the name of the environment. The wrapper batch file will copy the appropriate
    properties file to "phoenix.properties". In the Ant build.xml, I always refer
    to phoenix.properties, which is always available with the right values for the environment.
    It works great and I can't think of any other flexible way. Hope that helps.

  • Best practice for deploying the license server

    Was wondering if there is a best-practice guideline or a rule of thumb out there for deploying the license server. For instance, is it better to have one license server that all your products (dev, QA, prod) connect to, or is it better to have a license server for each deployment, i.e. one for dev, one for QA, etc.?


  • Best Practices for patch/rollback on Windows?

    All,
    I have been working on BO XI with UNIX for some time now and while I am pretty comfortable with managing it on UNIX, I am not too sure about the "best practices" when it comes to Windows.
    I have a few specific questions:
    1) What is the best way to apply a patch or Service Pack to BO XI R2 in a Windows environment without risking system corruption?
    - It is relatively easier on UNIX because you don't have to worry about registry entries, and you can even perform multiple installations on the same box as long as you use different locations and ports.
    2) What should be the ideal "rollback" strategy in case an upgrade/patch install fails and corrupts the system?
    I am sure I will have some follow-up questions, but if someone can get the discussion rolling with these for now, I would really appreciate it!
    Is there any documentation available around these topics on the boards some place?
    Cheers,
    Sarang

    This is unofficial, but it is what usually applies if you run into a disabled system as a result of a patch and the removal/rollback does NOT work (in other words, you are still down).
    You should have made complete backups of your FRS, CMS DB, and any customizations in your environment.
    Remove the base product and any separate products that share registry keys (i.e. Crystal Reports).
    Remove the leftover directories (for XIR2 this is boinstall\business objects\*).
    Remove the primary registry keys (HKEY_LOCAL_MACHINE\Software\Business Objects\* & HKEY_CURRENT_USER\Software\Business Objects\*).
    Remove any legacy keys (i.e. Crystal*).
    Remove any patches from the registry (look in Control Panel and search for the full patch name).
    Then reinstall the product (test),
    add back any customizations,
    reinstall either the latest patch (the one prior to the update) or the newest patch (if needed),
    and restore the FRS and CMS DB.
    There are a few modifications to these steps and you should leave room to add more (if they improve your odds at success).
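    The backup step can be scripted ahead of time; a rough sketch (both paths are placeholders for your install, and the CMS database itself should be backed up with your database vendor's tools):

    import shutil
    import subprocess

    # Placeholder locations -- adjust to your environment.
    FRS_DIR    = r"C:\Program Files\Business Objects\FileStore"
    BACKUP_DIR = r"D:\BO_backups\filestore"

    # Copy the File Repository Server content somewhere safe.
    shutil.copytree(FRS_DIR, BACKUP_DIR)

    # Export the primary registry hive as a record of customizations.
    subprocess.run(["reg", "export", r"HKLM\Software\Business Objects",
                    r"D:\BO_backups\bo_hklm.reg", "/y"], check=True)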
    Regards,
    Tim

  • What are best practices for rolling out cluster upgrade?

    Hello,
    I am looking for your input on approaches to implementing a Production RAC upgrade without having a spare set of RAC servers. We have a 2-node RAC 11.1 database in Production. We are planning to upgrade to 11.2 but only have a single database that the pre-Production integration can be verified on. Our concern is that the integration may behave differently on RAC vs. the single database instance. How does everybody else approach this problem?

    You want to test a RAC upgrade on a non-RAC database. If you ask me that is a risk, but it depends on many things:
    Application configuration - If your application is configured for RAC, FAN etc., you cannot test it on non-RAC systems.
    Cluster upgrade - If your standalone database is RAC One Node, you can probably test your cluster upgrade there. If you have a non-RAC database then you will not be able to test the cluster upgrade or CRS.
    Database upgrade - There are differences when you upgrade a RAC vs. a non-RAC database which you will not be able to test.
    I think the best way for you is to convert your standalone database to a RAC One Node database and test on it. That will take you close to a multi-node RAC.

  • Best Practices for Patching RDS Environment Computers

    Our manager has tasked us with creating a process for patching our RDS environment computers with no disruption to users if possible. This is our environment:
    2 Brokers configured in HA Active/Active Broker mode
    2 Web Access servers load balanced with a virtual IP
    2 Gateway servers load balanced with a virtual IP
    3 session collections, each with 2 hosts
    Patching handled through Configuration Manager
    Our biggest concern is the gateway/hosts. We do not want to terminate existing off campus connections when patching. Are there any ways to ensure users are not using a particular host or gateway when the patch is applied?
    Any real world ideas or experience to share would be appreciated.
    Thanks,
    Bryan

    Hi,
    Thank you for posting in Windows Server Forum.
    As per my research, we can create a script for patching the servers, and you have 2 servers for each role. If these are primary and backup servers respectively, then you can update each server separately and bypass the traffic to the other server. After
    completing the first server, you can just perform the same steps for the other one. As far as I know, the server needs to be restarted once for the patching update to apply successfully.
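    A sketch of draining one session host before Configuration Manager patches it (the host name is a placeholder; "change logon" is built into RD Session Host servers, and the remoting helper is a stand-in for however you reach the box):

    import subprocess

    HOST = "RDSH01"  # placeholder session host name

    def remote(host, cmd):
        # Stand-in helper: run a command on a remote host via PowerShell remoting.
        subprocess.run(["powershell.exe", "-Command",
                        f"Invoke-Command -ComputerName {host} -ScriptBlock {{ {cmd} }}"],
                       check=True)

    # Stop new connections; existing sessions keep running until users log off.
    remote(HOST, "change logon /drain")

    # ... wait for sessions to end, let Configuration Manager patch and reboot ...

    # Allow new connections again.
    remote(HOST, "change logon /enable")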
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best Practices for patching Sun Clusters with HA-Zones using LiveUpgrade?

    We've been running Sun Cluster for about 7 years now, and I for
    one love it. About a year ago, we started consolidating our
    standalone web servers into a 3-node cluster using multiple
    HA-Zones. For the most part, everything about this configuration
    works great! One problem we're having is with patching. So far,
    the only documentation I've been able to find that talks about
    patching clusters with HA-Zones is the following:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi2g0
    Sun Cluster System Administration Guide for Solaris OS
    How to Apply Patches in Single-User Mode with Failover Zones
    This documentation works, but has two major drawbacks:
    1) The nodes/zones have to be patched in Single-User Mode, which
    translates to major downtime to do patching.
    2) If there are any problems during the patching process, or
    after the cluster is up, there is no simple back out process.
    We've been using a small test cluster to try out LiveUpgrade
    with HA-Zones. We've worked out most of the bugs, but we are
    still in a position of patching our HA-Zoned clusters based on
    home-grown steps, and not anything blessed by Oracle/Sun.
    How are others patching Sun Cluster nodes with HA-Zones? Has
    anyone found or been given Oracle/Sun documentation that lists
    the steps to patch Sun Clusters with HA-Zones using LiveUpgrade?
    Thanks!

    Hi Thomas,
    there is a blueprint that deals with this problem in much more detail. It is based on configurations that use ZFS throughout, i.e. for root and the zone roots, but it should be applicable to other environments as well: "Maintaining Solaris with Live Upgrade and Update On Attach" (http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach)
    Unfortunately, due to some redirection work in the joint Sun and Oracle network, access to the blueprint is currently not available. If you send me an email with your contact data I can send you a copy via email. (You'll find my address on the web)
    Regards
    Hartmut
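    For reference, the home-grown Live Upgrade cycle this thread alludes to usually boils down to the following (a sketch only; the boot-environment name, patch directory and patch IDs are placeholders, and on a cluster each node still needs the cluster-specific preparation described in the documentation above):

    import subprocess

    BE = "patched-be"               # alternate boot environment name
    PATCH_DIR = "/var/tmp/patches"  # unpacked patches to apply
    PATCHES = ["123456-01"]         # placeholder patch IDs

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Clone the running boot environment (simple on a ZFS root).
    run(["lucreate", "-n", BE])

    # Patch the inactive BE while the node keeps serving.
    run(["luupgrade", "-t", "-n", BE, "-s", PATCH_DIR] + PATCHES)

    # Activate the patched BE, then reboot with init 6 (not "reboot").
    run(["luactivate", BE])
    run(["init", "6"])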

  • Best practice for patching (Feb 2014 CU) of Project Server 2010

    Hi,
    Please advise.
    Thanks
    srabon

    Hi,
    My current environment is,
    # VM -1 (Project Server 2010) Enterprise Edition
    - Microsoft SharePoint and Project Server 2010 SP1
    - Configuration Database Version 14.0.6134.5000
    - Patch installed - KB2767794
    # VM -2 (SQL Server 2008 R2)
    Now my plan is,
    - Taking Snapshot for the VM-1
    - Should I also take a VM snapshot for VM-2?
    - Taking Farm Backup
    - Taking /pwa site collection backup
    For your information, I do see in your article that MS says I may have SP1 or SP2 before running this patch, but you mentioned I have to have SP2 as well?
    Fyi....
    Prerequisites
    To install this cumulative update, you must have one of the following products installed:
    Microsoft Project Server 2010 Service Pack 1 (SP1) or Service Pack 2 (SP2)
    Microsoft SharePoint Server 2010 Service Pack 1 (SP1) or Service Pack 2 (SP2)
    Microsoft SharePoint Foundation 2010 Service Pack 1 (SP1) or Service Pack 2 (SP2)
    Please advise if I miss something in my plan.
    Thanks
    srabon
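    A sketch of the backup portion of that plan using the SharePoint 2010 cmdlets (the backup share is a placeholder; the /pwa URL follows the thread):

    import subprocess

    def sp(cmd):
        # Load the SharePoint snap-in, then run the command (on VM-1).
        full = f"Add-PSSnapin Microsoft.SharePoint.PowerShell; {cmd}"
        subprocess.run(["powershell.exe", "-Command", full], check=True)

    # Full farm backup.
    sp(r"Backup-SPFarm -Directory \\backupserver\spbackup -BackupMethod Full")

    # Separate backup of the /pwa site collection.
    sp(r"Backup-SPSite http://projectserver/pwa -Path \\backupserver\spbackup\pwa.bak")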

  • Best Practice for Laptop in Field, Server at Home

    I'm sure this is a common workflow. If somebody has a link to the solution, please pass it along here.
    I keep all my images on a server at home. I would like to keep that as the repository for all my images.
    I also have a laptop that I am going to use in the field and edit work in lightroom. When I get back home I would like to dump those images on my home server (or do it from the field via vpn), but I would like the library to keep the editing settings attached to the images (which would now be on the server and deleted from the laptop). Then if I open them on my desktop (accessed from the server), I'd like all that editing I've already done to show up. Then if I go back to the macbook (from images on the server) all the edits are available.
    Is this possible? Can anybody give me an idea about the best way to solve this?

    Version 1.1 will be adding much of the fundamental database functionality that is currently missing from V1, and should make the database a practical tool.
    If you are doing individual jobs that you need to carry around between your laptop and your desktop, I recommend using multiple individual databases. You can copy (in actuality move) the database between your laptop and your desktop much easier because the size remains manageable, and you can even do slideshows and sorting without access to the originals. Using one big database is impractical because the thumbnail folders get so humongous.
    It is a rather kludgy workaround, but it sure beats not being able to share a database between a laptop and a desktop system.
    Another option is to keep the Lightroom databases on a removable hard drive, and just use that as your 'main' storage, with your backups on your real main storage. If you keep your originals on the same drive, you can do all your work this way, although you may have to 'find' your folders with your originals every time you move between the different systems.
    Again, even when using a removable drive, using small separate databases seems to be the only way to go for now.
    The XMP path is a terrible workaround IMHO, since the databases get all out of sync between systems, requiring lots of maintenance, and not everything transfers back and forth.

  • Best Practice for General User File Server HA/Failover

    Hi All,
    Looking for some general advice or documentation on recommended approaches to file storage.  If you were in our position, how would you approach adding more robustness to our setup?
    We currently run a single 2012 R2 VM with around 6TB of user files and data.  We deduplicate the volume and use quotas.
    We need a solution that provides better redundancy than a single VM.  If that VM goes offline, how do we maintain user access to the files?
    We use DFS to publish file shares to users and machines.
    Solutions I have researched, with potential drawbacks:
    Create a guest VM cluster and use a Continuously Available File Share (not SOFS)
     - This would leave us without support for deduplication. (We get around 50% savings atm and space is tight.)
    Create a second VM and add it as a secondary DFS folder target, configuring replication between the two servers
     -  Is this the preferred enterprise approach to share availability?  How will hosting user shares (documents etc...) cope in a replication environment?
    Note: we have run a physical clustered file server in the past with great results, except for the ~5 mins of downtime when failover occurs.
    Any thoughts on where I should be focusing my efforts?
    Thanks

    If you care about performance and real failover transparency then a guest VM cluster is the way to go (compared to DFS, of course). I don't get your point about "no deduplication": you can still use dedupe inside your VM, you just have to make sure you "shrink" the VHDX from time to time to give space back to the host file system. See:
    Using Guest Clustering for High Availability
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Super-fast Failovers with VM Guest Clustering in Windows Server 2012 Hyper-V
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
    Can't shrink VHDX file after applying deduplication
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/533aac39-b08d-4a67-b3d4-e2a90167081b/cant-shrink-vhdx-file-after-applying-deduplication?forum=winserver8gen
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
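    That shrink cycle would look roughly like this from the Hyper-V host (a sketch; the path is a placeholder, Optimize-VHD comes with the Hyper-V module on 2012+, and the VHDX must be detached or mounted read-only first):

    import subprocess

    VHDX = r"D:\VMs\fileserver\data.vhdx"  # placeholder path

    def ps(cmd):
        subprocess.run(["powershell.exe", "-Command", cmd], check=True)

    # Inside the guest beforehand: run the dedup garbage-collection job and
    # defragment so free blocks become reclaimable. Then, with the disk offline:
    ps(f"Optimize-VHD -Path '{VHDX}' -Mode Full")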

  • Best practice for JSON-REST client server programming

    I used SOAP quite a bit a while back, but now on a new project I have to get a handle on JSON-REST communication.
    Basically I have the following resource on the server side:
    import org.json.JSONObject;
    import org.restlet.resource.Get;
    import org.restlet.resource.ServerResource;

    /** Resource which has only one representation. */
    public class UserResource extends ServerResource {

         User user1 = new User("userA", "secret1");
         User user2 = new User("userB", "secret2");
         User user3 = new User("userC", "secret3");

         @Get
         public String represent() {
              return user1.toJSONobject().toString();
         }

         public static class User {
              private String name;
              private String pwd;

              public User(String name, String pwd) {
                   this.name = name;
                   this.pwd = pwd;
              }

              public JSONObject toJSONobject() {
                   JSONObject jsonRepresentation = new JSONObject();
                   jsonRepresentation.put("name", name);
                   jsonRepresentation.put("pwd", pwd);
                   return jsonRepresentation;
              }
         }
    }

    and my mapping is defined as
         <servlet>
              <servlet-name>RestletServlet</servlet-name>
              <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
              <init-param>
                   <param-name>org.restlet.application</param-name>
                   <param-value>firstSteps.FirstStepsApplication </param-value>
              </init-param>
         </servlet>
         <!-- Catch all requests -->
         <servlet-mapping>
              <servlet-name>RestletServlet</servlet-name>
              <url-pattern>/user</url-pattern>
          </servlet-mapping>
    and I have a test client as follows:
          HttpClient httpclient = new DefaultHttpClient();
          try {
               HttpGet httpget = new HttpGet("http://localhost:8888/user");
               // Create a response handler that returns the body as a String
               ResponseHandler<String> responseHandler = new BasicResponseHandler();
               String responseBody = httpclient.execute(httpget, responseHandler);
               // Rebuild the User from the received JSON
               JSONObject obj = new JSONObject(responseBody);
               String name = obj.getString("name");
               String pwd = obj.getString("pwd");
               UserResource.User user = new UserResource.User(name, pwd);
               System.out.println("got user: " + name);
          } finally {
               httpclient.getConnectionManager().shutdown();
          }

    Everything works fine and I can retrieve my User object on the client side.
    What I would like to know is
    Is this how the server side typically works? You need to implement a method to convert your model class to a JSON object for sending to the client,
    and on the client side you need to implement code that knows how to build a User object from the received JSON object.
    Basically, are there any frameworks available I could leverage to do this work?
    Also, what would I need to do on the server side to allow a client to request a specific user using a URL like localhost:8888/user/user1?
    I know a mapping like /user/* would direct the request to the correct Resource on the server side, but how would I pass the "user1" parameter to the Resource?
    Thanks


  • Best Practice for a Print Server

    What is the best practice for a print server serving over 25 printers, 10 of which are colour lasers and the rest black-and-white lasers?
    Hardware
    At the moment we have one server, a 2GHz dual G5 with 4GB RAM and an Xserve RAID. The server is also our main Open Directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

    Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
    Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg

  • Best practice for server configuration for iTunes U

    Hello all. I'm completely new to iTunes U; I'd never heard of it until now, and we have zero documentation on how to set it up. I was given the task of looking at best practices for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within those you have things like lecture audio/video files that students can download/view in iTunes. So where are these files hosted? Is it on your own server or is it on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll. My question is, what is the best method to configure them so they meet our need for "high availability in active/active mode, load balancing, and server scaling"? Originally I was thinking about using a 3rd-party load-balancing device to meet these needs, but I was told there is no budget for it, so this is not going to happen. I know there is IP failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): given that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage), if we go with DNS round robin and put the 2 servers in active/active mode, can both servers access a common shared network space? Or is this not possible, so each server must have its own storage space and I must use something like rsync to make sure contents on both servers are identical? Should I use Xsan, or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    What is the best practice for having server pools, i.e.
    1) having a single large server pool consisting of "n" guest VMs, or
    2) having multiple small server pools, each consisting of a smaller number of guest VMs?
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have at most 20 servers in it: OCFS2 starts to strain after that.

    I have this code in my JSPDynpage      public void onSaveButtonClicked (Event event) throws PageException            DropdownListBox dListMain = (DropdownListBox) getComponentByName("mydropdown");           DropdownListBox dListEthnicityNew = (Dropdo