Best Practice for General User File Server HA/Failover

Hi All,
Looking for some general advice or documentation on recommended approaches to file storage.  If you were in our position, how would you approach adding more robustness to our setup?
We currently run a single 2012 R2 VM with around 6TB of user files and data.  We deduplicate the volume and use quotas.
We need a solution that provides better redundancy than a single VM.  If that VM goes offline, how do we maintain user access to the files?
We use DFS to publish file shares to users and machines.
Solutions I have researched, with potential drawbacks:
Create a guest VM cluster and use a Continuously Available File Share (not SOFS)
 - This would leave us without support for deduplication. (We get around 50% savings at the moment and space is tight.)
Create a second VM and add it as a secondary DFS folder target, and configure replication between the two servers
 -  Is this the preferred enterprise approach to share availability?  How will user shares (documents, etc.) cope in a replicated environment?
Note: we have run a physical clustered file server in the past with great results, except for the ~5 minutes of downtime when a failover occurs.
Any thoughts on where I should be focusing my efforts?
Thanks

If you care about performance and real failover transparency, then a guest VM cluster is the way to go (compared to DFS, of course). I don't follow your point about "no deduplication": you can still use dedupe inside the VM, you just have to "shrink" the VHDX from time to time to give the reclaimed space back to the host file system. See:
Using Guest Clustering for High Availability
http://technet.microsoft.com/en-us/library/dn440540.aspx
Super-fast Failovers with VM Guest Clustering in Windows Server 2012 Hyper-V
http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
Can't shrink VHDX file after applying deduplication
http://social.technet.microsoft.com/Forums/windowsserver/en-US/533aac39-b08d-4a67-b3d4-e2a90167081b/cant-shrink-vhdx-file-after-applying-deduplication?forum=winserver8gen
Hope this helped :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I've seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes are up (and the quorum disk is available).
    I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V cluster (R2 SP1 with CSV). I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been "Paused", and yes, everything was just fine. I then did Windows Update and restarted, no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software, etc. During this PSP update, node nr 02 suddenly failed. This node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Does a change in "Network Connections" on nodes in "Pause" mode affect other nodes in the cluster?
    The networks are listed as "Up" during Pause mode, so the only thing I can think of is that during PSP's driver/software update, NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Pause" mode and then stop the Cluster service (and set it to Disabled), making sure nothing can affect the cluster.
    Anders

  • In the Beginning it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
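    Purely as an illustration of the "model the ERP tables as classes" idea (not from the original posts; the table name, column names and query are invented), a minimal sketch might look like this:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical wrapper for one ERP "table"; each instance represents one row. */
    public class CustomerRecord {
        private final String customerId;
        private final String name;

        private CustomerRecord(String customerId, String name) {
            this.customerId = customerId;
            this.name = name;
        }

        public String getCustomerId() { return customerId; }
        public String getName()       { return name; }

        /** Loads all rows through the third-party ODBC/JDBC driver (connection supplied by the caller). */
        public static List<CustomerRecord> loadAll(Connection conn) throws SQLException {
            List<CustomerRecord> rows = new ArrayList<CustomerRecord>();
            PreparedStatement ps = conn.prepareStatement("SELECT CUST_ID, CUST_NAME FROM CUSTOMER");
            try {
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    rows.add(new CustomerRecord(rs.getString("CUST_ID"), rs.getString("CUST_NAME")));
                }
            } finally {
                ps.close();
            }
            return rows;
        }
    }
    A code generator could emit one such class per table, but as the reply above notes, the data model should be driven by business need rather than by a mechanical one-class-per-table mapping.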

  • Best practice for storing user's generated file?

    Hi all,
    I have this web application where the user draws an image in an applet and can then send the image via MMS.
    I wonder what the best practice is for storing the user's image before sending the MMS.
    Message was edited by:
    tomdog

    java.util.prefs
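    Not from the thread above, and only a hedged sketch of one common alternative to preference storage: staging the drawn image as a temporary file before it is handed to the MMS-sending step (the class, method and file prefix are invented for illustration):
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class MmsImageStaging {
        /** Writes the user's drawn image to a temp file so the MMS step can read it later. */
        public static File stageForMms(BufferedImage drawnImage) throws IOException {
            File tmp = File.createTempFile("mms-image-", ".png");
            ImageIO.write(drawnImage, "png", tmp);   // PNG keeps the drawing lossless
            return tmp;                              // caller should delete it after sending
        }
    }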

  • Best practices for multiple users on a network?

    We have a centralized server storing all of our media, and a Final Share system allowing about 10 clients to connect.  The details of the system aren't important, it mounts just like a local drive on each client and bandwidth is higher than FW800.
    My question is what is the best way to handle cache files, sidecar files, autosave, etc. when passing edits between editors?  What should be stored locally, what can be placed on the server?  Of course all media has to be placed on the server, but can everything else sit alongside the footage? 
    The reason I ask is we started by pointing everything at the server, but now are having the good old Serious Error whenever two editors (in different project files) are referencing the same media.  We'd love to resolve this... assistant editors can't log footage if doing so causes the editor's project file to lock up. 
    Thanks!
    P.S. I first tried calling Adobe phone support about this... not making that mistake again.  The most apathetic customer service rep I think I've ever spoken with told me they wouldn't provide support because our Serious Error was the result of network issues and not their fault, and then that we should be storing our media files locally.  I didn't see it necessary to mention that local storage on 10 machines isn't that viable with almost 50TB of data.

    klonaton wrote:
    The reason I ask is we started by pointing everything at the server, but now are having the good old Serious Error whenever two editors (in different project files) are referencing the same media.  We'd love to resolve this... assistant editors can't log footage if doing so causes the editor's project file to lock up.
    Hi Klonaton.
    Are you using two Premiere projects, or one Prelude project (for logging) and one Premiere project (for editing)? I'll have to check tomorrow to be absolutely sure, but we do sometimes have simultaneous logging and editing with, respectively, Prelude and Premiere over our small network, with no issues.
    Changes to metadata and comment markers created in Prelude instantly appear and update in Premiere. Subclips don't, and have to be sent from Prelude to Premiere on the same computer (or the media has to be imported again inside Premiere). I think that's normal considering how subclips work a bit differently in Prelude and Premiere.
    All files are on a Thunderbolt RAID drive shared on the network through an iMac. The Media Cache and Media Cache DB are on the network shared drive too, and common to all users and computers.
    Also I don't get if crashes happen when your two projects are running simultaneously or not (in the latter case, that's a huge problem).

  • Best Practices for Packaging and Deploying Server-Specific Configurations

    We have some server-specific properties that vary for each server. We'd
    like to have these properties collected together in their own properties
    file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as an
    ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored
    external to the ear on the server itself, so that the production folks can
    configure each server's properties at the server. But it appears that an
    application can't access a file external to the ear, or at least we can't
    figure out the magic to do it. If there is a way to do this, please let me
    know how.
    Or do we have to build a unique ear for each server? This is possible, of
    course, but we'd prefer to build one deployment package (ear), and then
    ship that off to each server that is already configured for its specific
    environment. We have some audit requirements where we need to ensure that
    an ear that has been tested by QA is the very same ear that has been
    deployed, but if we have to build one for each server, this is not
    possible.
    Any help or pointers would be most appreciated. If this is an old issue,
    my apologies, would you please point me to any previous material to read?
    I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA

    The one drawback to this is that you have to go all the way back to Ant and the build system to make changes. You really want these env variables to be late binding.
    cheers
    mbg
    "Sai S Prasad" <[email protected]> wrote in message
    news:[email protected]...
    Paul,
    I have a similar situation in our project and I don't create ear files specific to the environment. I do the following:
    1) Create a .properties file for every environment with the same attribute names but different values. For example, I have phoenix.properties.NT, phoenix.properties.DEV, phoenix.properties.QA, phoenix.properties.PROD.
    2) Use Ant to compile, package and deploy the ear file.
    I have a .bat file on NT and a .sh for Solaris that in turn calls ant.bat or ant.sh respectively. You can pass the name of the environment to the wrapper batch file or shell script, which then copies the appropriate properties file to "phoenix.properties". In the Ant build.xml, I always refer to phoenix.properties, which is always available, with contents that match the environment.
    It works great and I can't think of any other way that is as flexible. Hope that helps.
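    To illustrate the "late binding" point above, here is a minimal sketch (the system property name and default path are assumptions, not from the original posts) of reading a server-specific properties file from a location supplied at startup, so the same ear can be shipped unchanged to every environment:
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ServerConfig {
        /**
         * Started with, e.g.:  java -Dserver.config=/opt/app/conf/server.properties ...
         * so the ear itself never needs to contain environment-specific values.
         */
        public static Properties load() throws IOException {
            String path = System.getProperty("server.config", "server.properties");
            Properties props = new Properties();
            FileInputStream in = new FileInputStream(path);
            try {
                props.load(in);
            } finally {
                in.close();
            }
            return props;
        }
    }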

  • 2K8 - Best practice for setting the DNS server list on a DC/DNS server for an interface

    We have been referencing the article 
    "DNS: DNS servers on <adapter name> should include their own IP addresses on their interface lists of DNS servers"
    http://technet.microsoft.com/en-us/library/dd378900%28WS.10%29.aspx but there are some parts that are a bit confusing.  In particular is this statement
    "The inclusion of its own IP address in the list of DNS servers improves performance and increases availability of DNS servers. However, if the DNS server is also a domain
    controller and it points only to itself for name resolution, it can become an island and fail to replicate with other domain controllers. For this reason, use caution when configuring the loopback address on an adapter if the server is also a domain controller.
    The loopback address should be configured only as a secondary or tertiary DNS server on a domain controller.”
    The paragraph switches from using the term "its own IP address" to "loopback" address.  This is confusing because technically they are not the same: loopback addresses are 127.0.0.1 through 127.255.255.255. The resolution section then goes on and adds the "loopback address" 127.0.0.1 to the list of DNS servers for each interface.
    In the past we always setup DCs to use their own IP address as the primary DNS server, not 127.0.0.1.  Based on my experience and reading the article I am under the impression we could use the following setup.
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  127.0.0.1
    I guess the secondary and tertiary addresses could be swapped based on the article.  Is there a document that provides clearer guidance on how to setup the DNS server list properly on Windows 2008 R2 DC/DNS servers?  I have seen some other discussions
    that talk about the pros and cons of using another DC/DNS as the Primary.  MS should have clear guidance on this somewhere.

    Actually, my suggestion, which seems to be the mostly agreed method, is:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    The tertiary entry more than likely won't be hit (besides being superfluous, the list will reset back to the first entry) due to the client-side resolver algorithm's timeout process, as I mentioned earlier. Here's a full explanation of how it works and why:
    This article discusses:
    WINS NetBIOS, Browser Service, Disabling NetBIOS, & Direct Hosted SMB (DirectSMB).
    The DNS Client Side Resolver algorithm.
    If one DC or DNS goes down, does a client logon to another DC?
    DNS Forwarders Algorithm and multiple DNS addresses (if you've configured more than one forwarders)
    Client side resolution process chart
    http://msmvps.com/blogs/acefekay/archive/2009/11/29/dns-wins-netbios-amp-the-client-side-resolver-browser-service-disabling-netbios-direct-hosted-smb-directsmb-if-one-dc-is-down-does-a-client-logon-to-another-dc-and-dns-forwarders-algorithm.aspx
    DNS Client side resolver service
    http://technet.microsoft.com/en-us/library/cc779517.aspx 
    The DNS Client Service Does Not Revert to Using the First Server in the List in Windows XP
    http://support.microsoft.com/kb/320760
    Ace Fekay
    MVP, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
    I agree with this proposed solution as well:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    One thing to note, in this configuration the Best Practice Analyzer will throw the error:
    The network adapter Local Area Connection 2 does not list the loopback IP address as a DNS server, or it is configured as the first entry.
    Even if you add the loopback address as a Tertiary DNS address the error will still appear. The only way I've seen this error eliminated is to add the loopback address as the second entry in DNS, so:
    Primary DNS:  The assigned IP of another DC (i.e. 192.168.1.6)
    Secondary DNS: 127.0.0.1
    Tertiary DNS:  empty
    I'm not comfortable not having the local DC/DNS address listed so I'm going with the solution Ace offers.
    Opinion?

  • Best practice for end user menu pages

    Hi
    here is my goal :
    I want to add a link on the end user menu offering the user the ability to reset some LDAP fields (for example, to reset default values for mail settings after the user made a wrong customization).
    I saw that all links on the end user menu point to JSP pages. Must I do the same, or can I achieve my goal with only workflows and forms in the BPE?
    Is there a good example or a best-practice description of such a customization somewhere?
    Thanks a lot

    Do you mean adding additional tabs to the account information (the defaults being Identity, Assignments, Security, and Attributes)? So you want an additional tab, such as 'LDAP attributes', after Attributes, once you have assigned the LDAP resource to that user, right?
    Cheers,
    Kaushal Shah

  • On best practices for minimizing user impact for db/dw migrations

    Hi Everybody!
    Our department will be undertaking the migration of our ODS and Data Warehouse to Oracle 10g in the coming months, and I wanted to query this group in anticipation of any good tips, DOs and DON'Ts, and best practices that you might want to share on how to minimize user impact, especially when some of the queries that different departments use have no known author and would need to be migrated to a different database dialect. Our organization is a large one, and therefore efficacy in communicating the benefits of our project and handling a large number of user questions will be key items in our conversion plan.
    Thanks a lot to all those who can contribute to this thread, hopefully it will become a good way to record the expertise of this group's members on this very specific project category.
    -Ignacio

    BTW, it is not clear what you want to migrate from. Another DB, or simply another Oracle version?
    OK, anyway, speaking about data migration strategy, there is at least one valuable article:
    http://www.dulcian.com/papers/The%20Complete%20Data%20Migration%20Methodology.html
    Speaking about technical execution, you can look at my article "Data migration from old to new application: an experience" at http://www.gplivna.eu/papers/legacy_app_migration.htm
    Neither of them focuses on data warehouses, though.
    Gints Plivna
    http://www.gplivna.eu

  • Best practice for deploying the license server

    Was wondering if there is a best practice guideline or a rule of thumb out there for deploying the license server. For instance, is it better to have one license server and all your products connect to that, dev, QA, prod. Or is it better to have a license server for each deployment, i.e. one for dev one for QA.. etc.


  • Best Practice for storing user preferences

    Is there something like a best practice or guideline for storing user preferences for a desktop application, such as window position, layout settings, etc.?

    java.util.prefs
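    As a small illustration of the java.util.prefs suggestion (the key names are arbitrary), saving and restoring a window position could look like this:
    import java.util.prefs.Preferences;

    public class WindowPrefs {
        private static final Preferences PREFS = Preferences.userNodeForPackage(WindowPrefs.class);

        /** Remember the main window's position. */
        public static void savePosition(int x, int y) {
            PREFS.putInt("window.x", x);
            PREFS.putInt("window.y", y);
        }

        /** Restore it, falling back to defaults on first run. */
        public static int[] loadPosition() {
            return new int[] { PREFS.getInt("window.x", 100), PREFS.getInt("window.y", 100) };
        }
    }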

  • Best practices for managing jar files

    just wondering what anyone's thoughts are on managing jar files. I have read that instead of adding the jar files to your CLASSPATH environment variable, you can just extend the jdk. This is accomplished by inserting the jar files into the C:\jdk1.3.1_01\jre\lib\ext directory. I suspect that if the latter option is truly feasible, it would be preferred to adding entries to the CLASSPATH variable for every jar file you want to use. You could end up with hundreds of entries into the CLASSPATH variable if you have that many jar files in use.
    Any thoughts, comments, or links to where I can find any info on this?...
    thanks for any information.....

    In my company, we use scripts (on UNIX systems) to handle this problem. The following script iterates over the jar files in a given directory:
    for i in ${YOUR_LIB_DIR}/*.jar
    do
    YOUR_CLASSPATH=${YOUR_CLASSPATH}:$i
    done
    With this simple script, you can use any directory in the same way as the ext\ directory (just copy the jar files into it), and you still have the flexibility of maintaining jar files in different directories, say one per project.
    But I don't know whether .bat/.cmd scripts on Windows can do the same.
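    If it helps to check what actually ends up on the path with either approach, a tiny Java program can print the standard system properties (java.ext.dirs applies to the pre-Java 9 extension mechanism discussed above):
    public class ShowClasspath {
        public static void main(String[] args) {
            System.out.println("java.class.path = " + System.getProperty("java.class.path"));
            System.out.println("java.ext.dirs   = " + System.getProperty("java.ext.dirs"));
        }
    }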

  • Best Practice for Laptop in Field, Server at Home

    I'm sure this is a common workflow. If somebody has a link to the solution, please pass it along here.
    I keep all my images on a server at home. I would like to keep that as the repository for all my images.
    I also have a laptop that I am going to use in the field and edit work in lightroom. When I get back home I would like to dump those images on my home server (or do it from the field via vpn), but I would like the library to keep the editing settings attached to the images (which would now be on the server and deleted from the laptop). Then if I open them on my desktop (accessed from the server), I'd like all that editing I've already done to show up. Then if I go back to the macbook (from images on the server) all the edits are available.
    Is this possible? Can anybody give me an idea about the best way to do solve this.

    Version 1.1 will be adding much of the fundamental database functionality that is currently missing from V1, and should make the database a practical tool.
    If you are doing individual jobs that you need to carry around between your laptop and your desktop, I recommend using multiple individual databases. You can copy (in actuality move) the database between your laptop and your desktop much easier because the size remains manageable, and you can even do slideshows and sorting without access to the originals. Using one big database is impractical because the thumbnail folders get so humongous.
    It is a rather kludgy workaround, but it sure beats not being able to share a database between a laptop and a desktop system.
    Another option is to keep the Lightroom databases on a removable hard drive, and just use that as your 'main' storage, with your backups on your real main storage. If you keep your originals on the same drive, you can do all your work this way, although you may have to 'find' your folders with your originals every time you move between the different systems.
    Again, even when using a removable drive, using small separate databases seems to be the only way to go for now.
    The XMP path is a terrible workaround IMHO, since the databases get all out of sync between systems, requiring lots of maintenance, and not everything transfers back and forth.

  • Best practice for JSON-REST client server programming

    I used SOAP quite a bit a while back, but now on a new project I have to get a handle on JSON-REST communication.
    Basically I have the following resource on the server side:
    import org.json.JSONObject;
    import org.restlet.resource.Get;
    import org.restlet.resource.ServerResource;

    /** Resource which has only one representation. */
    public class UserResource extends ServerResource {
         User user1 = new User("userA", "secret1");
         User user2 = new User("userB", "secret2");
         User user3 = new User("userC", "secret3");

         @Get
         public String represent() {
              return user1.toJSONobject().toString();
         }

         public static class User {
              private String name;
              private String pwd;

              public User(String name, String pwd) {
                   this.name = name;
                   this.pwd = pwd;
              }

              public JSONObject toJSONobject() {
                   JSONObject jsonRepresentation = new JSONObject();
                   jsonRepresentation.put("name", name);
                   jsonRepresentation.put("pwd", pwd);
                   return jsonRepresentation;
              }
         }
    }
    and my mapping is defined as follows:
         <servlet>
              <servlet-name>RestletServlet</servlet-name>
              <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
              <init-param>
                   <param-name>org.restlet.application</param-name>
                   <param-value>firstSteps.FirstStepsApplication </param-value>
              </init-param>
         </servlet>
         <!-- Catch all requests -->
         <servlet-mapping>
              <servlet-name>RestletServlet</servlet-name>
              <url-pattern>/user</url-pattern>
          </servlet-mapping>
    and I have a test client as follows:
              HttpClient httpclient = new DefaultHttpClient();
              try {
                   HttpGet httpget = new HttpGet("http://localhost:8888/user");
                   // System.out.println("executing request " + httpget.getURI());
                   // Create a response handler
                   ResponseHandler<String> responseHandler = new BasicResponseHandler();
                   String responseBody = httpclient.execute(httpget, responseHandler);
                   JSONObject obj = new JSONObject(responseBody);
                   String name = obj.getString("name");
                   String pwd = obj.getString("pwd");
                   UserResource.User user = new UserResource.User(name, pwd);
                   user.notify();
          }
    Everything works fine and I can retrieve my User object on the client side.
    What I would like to know is:
    Is this how the server side typically works: do you need to implement a method to convert your model class to a JSON object for sending to the client?
    On the client side, do you need to implement code that knows how to build a User object from the received JSON object?
    Basically, are there any frameworks available that I could leverage to do this work?
    Also, what would I need to do on the server side to allow a client to request a specific user using a URL like localhost:8888/user/user1?
    I know a mapping like /user/* would direct the request to the correct Resource on the server side but how would I pass the "user1" parameter to the Resource?
    Thanks
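    Not part of the original post, but as a hedged sketch of how the /user/user1 question is typically handled in Restlet 2.x (the application class name and URL template are assumptions): attach the resource with a URL template in the application, then read the matched segment from the request attributes in the resource.
    import org.restlet.Application;
    import org.restlet.Restlet;
    import org.restlet.routing.Router;

    public class FirstStepsApplication extends Application {
        @Override
        public Restlet createInboundRoot() {
            Router router = new Router(getContext());
            // "{user}" becomes a request attribute named "user" on the matched resource.
            router.attach("/user/{user}", UserResource.class);
            return router;
        }
    }
    Inside UserResource.represent(), the value can then be read with something like:
        String requestedUser = (String) getRequest().getAttributes().get("user");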


  • Best practice for handling original files once movie is complete?

    So I'm taking movies from my Canon S5IS (and other cameras in the past) and making projects in iMovie, sharing in the Media Browser, importing into iDVD, and burning to DVD.
    I can't help but wonder if I might need the original footage one day. Do most people keep their original files for future media (replacement for DVD) which I realize would require recreation of the movies that were created in 2008 with iMovie (with title screens, transitions, etc.)? Or do most people delete the originals with the feeling that DVD will be a suitable way to watch home movies for the foreseeable future?
    I just can't figure out what to do. I don't want to burn dozens of DVDs of raw footage, only to have keep up with them in a safe deposit box and have to deal with the anxiety of having to recreate movies one day (which is daunting enough now...unbelievably daunting to think about the exponential growth as time progresses).
    Hope this makes sense. Reading that DVD movies are not suitable for editing due to the codec has made me realize I need to think this through before destroying all these originals as I finish with them.
    Thanks in advance!
    -John

    If any of your cams are miniDV, then you simply need to keep the original tapes; tape is still the safest long-term archiving solution when stored properly.
    Other cams that use flash memory, hard drives, or even DVDs do not offer the security that tape does. If you want to save those types of files, the best option is to store them on one or two external hard drives, bearing in mind those drives could fail at any time. Back up your backup in that case.
    Another nice thing about miniDV cams is that you can export your finished movie back to a tape also, using iMovie HD6, and have safe copies of original and finished material.
    Message was edited by: Forest Mccready
