Best practice for JSON-REST client server programming

I used SOAP quite a bit a while back, but now on a new project I have to get a handle on JSON-REST communication.
Basically I have the following resource on the server side:
import org.json.JSONObject;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

/**
 * Resource which has only one representation.
 */
public class UserResource extends ServerResource {

     User user1 = new User("userA", "secret1");
     User user2 = new User("userB", "secret2");
     User user3 = new User("userC", "secret3");

     @Get
     public String represent() {
          return user1.toJSONObject().toString();
     }

     public static class User {

          private String name;
          private String pwd;

          public User(String name, String pwd) {
               this.name = name;
               this.pwd = pwd;
          }

          public JSONObject toJSONObject() {
               JSONObject jsonRepresentation = new JSONObject();
               jsonRepresentation.put("name", name);
               jsonRepresentation.put("pwd", pwd);
               return jsonRepresentation;
          }
     }
}

and my mapping is defined as
     <servlet>
          <servlet-name>RestletServlet</servlet-name>
          <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
          <init-param>
               <param-name>org.restlet.application</param-name>
               <param-value>firstSteps.FirstStepsApplication</param-value>
          </init-param>
     </servlet>
     <!-- Catch all requests -->
     <servlet-mapping>
          <servlet-name>RestletServlet</servlet-name>
          <url-pattern>/user</url-pattern>
     </servlet-mapping>

and I have a test client as follows
          HttpClient httpclient = new DefaultHttpClient();
          try {
               HttpGet httpget = new HttpGet("http://localhost:8888/user");
               // Create a response handler that returns the body as a String
               ResponseHandler<String> responseHandler = new BasicResponseHandler();
               String responseBody = httpclient.execute(httpget, responseHandler);
               // Rebuild the User object from the received JSON
               JSONObject obj = new JSONObject(responseBody);
               String name = obj.getString("name");
               String pwd = obj.getString("pwd");
               UserResource.User user = new UserResource.User(name, pwd);
               System.out.println("Retrieved user: " + user.toJSONObject());
          } finally {
               // Release the underlying connection resources
               httpclient.getConnectionManager().shutdown();
          }

Everything works fine and I can retrieve my User object on the client side.
What I would like to know is:
Is this how the server side typically works? Do you need to implement a method to convert your model class to a JSON object for sending to the client?
And on the client side, do you need to implement code that knows how to build a User object from the received JSON object?
Basically, are there any frameworks available I could leverage to do this work?
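For illustration, here is a minimal sketch of the kind of mapping a binding library such as Jackson could take over on both sides (this assumes Jackson 2.x on the classpath; the no-arg constructor and getters/setters are additions Jackson needs, not part of the code above):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonSketch {

     // Plain bean: Jackson needs a no-arg constructor plus getters/setters.
     public static class User {
          private String name;
          private String pwd;
          public User() { }
          public User(String name, String pwd) { this.name = name; this.pwd = pwd; }
          public String getName() { return name; }
          public void setName(String name) { this.name = name; }
          public String getPwd() { return pwd; }
          public void setPwd(String pwd) { this.pwd = pwd; }
     }

     public static void main(String[] args) throws Exception {
          ObjectMapper mapper = new ObjectMapper();
          // Server side: bean -> JSON, replacing the hand-written toJSONObject()
          String json = mapper.writeValueAsString(new User("userA", "secret1"));
          // Client side: JSON -> bean, replacing the hand-written parsing code
          User user = mapper.readValue(json, User.class);
          System.out.println(json + " -> " + user.getName());
     }
}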
Also, what would I need to do on the server side to allow a client to request a specific user using a URL like localhost:8888/user/user1?
I know a mapping like /user/* would direct the request to the correct Resource on the server side, but how would I pass the "user1" parameter to the Resource?
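For reference, a sketch of one common way to wire this up in Restlet 2.x (the FirstStepsApplication class name is taken from the web.xml above; the exact route and attribute handling are assumptions, not the poster's code). With the servlet mapping widened to /user/*, the application attaches a URI template and the resource reads the variable:

import org.restlet.Application;
import org.restlet.Restlet;
import org.restlet.routing.Router;

public class FirstStepsApplication extends Application {

     @Override
     public Restlet createInboundRoot() {
          Router router = new Router(getContext());
          // "{user}" is a URI template variable: /user/user1 -> attribute "user" = "user1"
          router.attach("/user/{user}", UserResource.class);
          return router;
     }
}

Inside UserResource, the value can then be read as a request attribute, e.g.:

     String userName = getAttribute("user"); // "user1" for /user/user1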
Thanks


Similar Messages

  • Microsoft best practices for patching a Cluster server

Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I've seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

Hi Vincent!
I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes is up (and the quorum disk is available).
I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V cluster R2 SP1 with CSV. I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in "Failover Cluster Manager" to check that the nodes had been "Paused", and yes, everything was just fine. I then did Windows Update and restarted, no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software, etc. During this PSP update, node nr 02 suddenly failed. This node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
Do changes in "Network Connections" on nodes in "Pause" mode affect other nodes in the cluster?
The networks are listed as "Up" during Pause mode, so the only thing I can think of is that during PSP's driver/software update, NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Pause" mode, then stop the cluster service (and change it to disabled), making sure nothing can affect the cluster.
Anders

  • 2K8 - Best practice for setting the DNS server list on a DC/DNS server for an interface

    We have been referencing the article 
    "DNS: DNS servers on <adapter name> should include their own IP addresses on their interface lists of DNS servers"
http://technet.microsoft.com/en-us/library/dd378900%28WS.10%29.aspx but there are some parts that are a bit confusing. In particular is this statement:
    "The inclusion of its own IP address in the list of DNS servers improves performance and increases availability of DNS servers. However, if the DNS server is also a domain
    controller and it points only to itself for name resolution, it can become an island and fail to replicate with other domain controllers. For this reason, use caution when configuring the loopback address on an adapter if the server is also a domain controller.
    The loopback address should be configured only as a secondary or tertiary DNS server on a domain controller.”
The paragraph switches from using the term "its own IP address" to "loopback" address. This is confusing because technically they are not the same: loopback addresses are 127.0.0.1 through 127.255.255.255. The resolution section then
goes on and adds the "loopback address" 127.0.0.1 to the list of DNS servers for each interface.
In the past we always set up DCs to use their own IP address as the primary DNS server, not 127.0.0.1. Based on my experience and reading the article, I am under the impression we could use the following setup.
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  127.0.0.1
    I guess the secondary and tertiary addresses could be swapped based on the article.  Is there a document that provides clearer guidance on how to setup the DNS server list properly on Windows 2008 R2 DC/DNS servers?  I have seen some other discussions
    that talk about the pros and cons of using another DC/DNS as the Primary.  MS should have clear guidance on this somewhere.

    Actually, my suggestion, which seems to be the mostly agreed method, is:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
The tertiary more than likely won't be hit (besides being superfluous, the list will reset back to the first one) due to the client-side resolver algorithm timeout process, as I mentioned earlier. Here's a full explanation of how it works and why:
    This article discusses:
    WINS NetBIOS, Browser Service, Disabling NetBIOS, & Direct Hosted SMB (DirectSMB).
    The DNS Client Side Resolver algorithm.
    If one DC or DNS goes down, does a client logon to another DC?
    DNS Forwarders Algorithm and multiple DNS addresses (if you've configured more than one forwarders)
    Client side resolution process chart
http://msmvps.com/blogs/acefekay/archive/2009/11/29/dns-wins-netbios-amp-the-client-side-resolver-browser-service-disabling-netbios-direct-hosted-smb-directsmb-if-one-dc-is-down-does-a-client-logon-to-another-dc-and-dns-forwarders-algorithm.aspx
DNS Client side resolver service
http://technet.microsoft.com/en-us/library/cc779517.aspx
    The DNS Client Service Does Not Revert to Using the First Server in the List in Windows XP
    http://support.microsoft.com/kb/320760
    Ace Fekay
    MVP, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
    I agree with this proposed solution as well:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    One thing to note, in this configuration the Best Practice Analyzer will throw the error:
    The network adapter Local Area Connection 2 does not list the loopback IP address as a DNS server, or it is configured as the first entry.
    Even if you add the loopback address as a Tertiary DNS address the error will still appear. The only way I've seen this error eliminated is to add the loopback address as the second entry in DNS, so:
    Primary DNS:  The assigned IP of another DC (i.e. 192.168.1.6)
    Secondary DNS: 127.0.0.1
    Tertiary DNS:  empty
    I'm not comfortable not having the local DC/DNS address listed so I'm going with the solution Ace offers.
    Opinion?

  • What is the best practice for uninstalling only some CS4 programs (Win 7 PC)

    I recently upgraded from CS4 to CS5.5 and wanted to free up some hard drive space on my Windows 7 PC. I wanted to uninstall only a few programs though from CS4, such as Photoshop, Illustrator, Flash and Bridge. What is the best way to do this keeping in mind licensing, deactivating and properly removing components? I have the original installation disk for CS4 if needed. Thanks for any help!

Best practice: Uninstall everything (including your CS5.5), run the Creative Suite Cleaner Tool, then reinstall the components you need from both editions. CS4 may have a repair/change configuration mode, but I'd strongly advise against using it, as it will do more damage than good, so take the long way round. It's also the only way to not bust up file associations when uninstalling CS4...
    Mylenium

  • Best Practices for Packaging and Deploying Server-Specific Configurations

    We have some server-specific properties that vary for each server. We'd
    like to have these properties collected together in their own properties
    file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as an
    ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored
    external to the ear on the server itself, so that the production folks can
    configure each server's properties at the server. But it appears that an
    application can't access a file external to the ear, or at least we can't
    figure out the magic to do it. If there is a way to do this, please let me
    know how.
    Or do we have to build a unique ear for each server? This is possible, of
    course, but we'd prefer to build one deployment package (ear), and then
    ship that off to each server that is already configured for its specific
    environment. We have some audit requirements where we need to ensure that
    an ear that has been tested by QA is the very same ear that has been
    deployed, but if we have to build one for each server, this is not
    possible.
    Any help or pointers would be most appreciated. If this is an old issue,
    my apologies, would you please point me to any previous material to read?
    I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA

The one drawback to this is you have to go all the way back to Ant and the
build system to make changes. You really want these env variables to be
late binding.
cheers
mbg
"Sai S Prasad" <[email protected]> wrote in message
news:[email protected]...
Paul,
I have a similar situation in our project and I don't create ear files specific
to the environment. I do the following:
1) Create a .properties file for every environment with the same attribute names
but different values in it. For example, I have phoenix.properties.NT, phoenix.properties.DEV,
phoenix.properties.QA, phoenix.properties.PROD.
2) Use Ant to compile, package and deploy the ear file
I have a .bat file on NT and a .sh for Solaris that in turn calls ant.bat or
ant.sh respectively. To the wrapper batch file or shell script, you can pass
the name of the environment. The wrapper batch file will copy the appropriate
properties file to "phoenix.properties". In the Ant build.xml, I always refer
to phoenix.properties, which is available all the time depending on the environment.
It works great and I can't think of any other flexible way. Hope that helps.
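On Paul's question of reading a file external to the ear: one common approach is to pass the file's location to each server's JVM and load it with java.util.Properties, so the same ear runs everywhere. A minimal sketch; the system property name and path are made up for illustration:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ServerConfig {

     // Each server starts the JVM with e.g. -Dapp.config=/opt/myapp/server.properties
     public static Properties load() throws IOException {
          String path = System.getProperty("app.config");
          Properties props = new Properties();
          try (InputStream in = new FileInputStream(path)) {
               props.load(in);
          }
          return props;
     }
}

This keeps the tested ear byte-identical across environments; only the JVM argument differs per server.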

  • Best practice for deploying the license server

    Was wondering if there is a best practice guideline or a rule of thumb out there for deploying the license server. For instance, is it better to have one license server and all your products connect to that, dev, QA, prod. Or is it better to have a license server for each deployment, i.e. one for dev one for QA.. etc.


  • Best Practice for Laptop in Field, Server at Home

    I'm sure this is a common workflow. If somebody has a link to the solution, please pass it along here.
    I keep all my images on a server at home. I would like to keep that as the repository for all my images.
    I also have a laptop that I am going to use in the field and edit work in lightroom. When I get back home I would like to dump those images on my home server (or do it from the field via vpn), but I would like the library to keep the editing settings attached to the images (which would now be on the server and deleted from the laptop). Then if I open them on my desktop (accessed from the server), I'd like all that editing I've already done to show up. Then if I go back to the macbook (from images on the server) all the edits are available.
Is this possible? Can anybody give me an idea about the best way to solve this?

    Version 1.1 will be adding much of the fundamental database functionality that is currently missing from V1, and should make the database a practical tool.
    If you are doing individual jobs that you need to carry around between your laptop and your desktop, I recommend using multiple individual databases. You can copy (in actuality move) the database between your laptop and your desktop much easier because the size remains manageable, and you can even do slideshows and sorting without access to the originals. Using one big database is impractical because the thumbnail folders get so humongous.
    It is a rather kludgy workaround, but it sure beats not being able to share a database between a laptop and a desktop system.
    Another option is to keep the Lightroom databases on a removable hard drive, and just use that as your 'main' storage, with your backups on your real main storage. If you keep your originals on the same drive, you can do all your work this way, although you may have to 'find' your folders with your originals every time you move between the different systems.
    Again, even when using a removable drive, using small separate databases seems to be the only way to go for now.
    The XMP path is a terrible workaround IMHO, since the databases get all out of sync between systems, requiring lots of maintenance, and not everything transfers back and forth.

  • Best Practice for General User File Server HA/Failover

    Hi All,
Looking for some general advice or documentation on recommended approaches to file storage. If you were in our position, how would you approach adding more robustness to our setup?
We currently run a single 2012 R2 VM with around 6TB of user files and data. We deduplicate the volume and use quotas.
We need a solution that provides better redundancy than a single VM. If that VM goes offline, how do we maintain user access to the files?
We use DFS to publish file shares to users and machines.
Solutions I have researched, with potential drawbacks:
Create a guest VM cluster and use a Continuously Available File Share (not SOFS)
 - This would leave us without support for de-duplication. (We get around 50% savings atm and space is tight.)
Create a second VM and add it as a secondary DFS folder target, configuring replication between the two servers
 - Is this the preferred enterprise approach to share availability? How will hosting user shares (documents etc...) cope in a replication environment?
Note: we have run a physical clustered file server in the past with great results, except for the ~5 mins downtime when failover occurs.
Any thoughts on where I should be focusing my efforts?
Thanks

If you care about performance and real failover transparency then a guest VM cluster is the way to go (compared to DFS, of course). I don't get your point about "no deduplication": you can still use dedupe inside your VM, you just have to make sure you "shrink" the VHDX from time to time to give space back to the host file system. See:
Using Guest Clustering for High Availability
http://technet.microsoft.com/en-us/library/dn440540.aspx
Super-fast Failovers with VM Guest Clustering in Windows Server 2012 Hyper-V
http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
Can't shrink VHDX file after applying deduplication
http://social.technet.microsoft.com/Forums/windowsserver/en-US/533aac39-b08d-4a67-b3d4-e2a90167081b/cant-shrink-vhdx-file-after-applying-deduplication?forum=winserver8gen
Hope this helped :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Best Practice for a Print Server

What is the best practice for having a print server serving over 25 printers, 10 of which are colour lasers and the rest black-and-white lasers?
    Hardware
    At the moment we have one server 2Ghz Dual G5 with 4GB Ram and xserve RAID. The server is also our main Open directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac Mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U, never heard of this until now and we have zero documentation on how to set it up. I was given the task to look at best practice for setting up the server for iTunes U, and I need your help.
*My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within that you have things like lecture audio/video files, and students can download/view them on iTunes. So where are these files hosted? Is it on your own server or on Apple's server? Where & how do you manage the content?
*2nd question:* We have two Xserve(s) sitting in our server room ready to roll. My question is: what is the best method to configure them so they meet our need for "high availability in active/active mode, load balancing, and server scaling"?
Originally I was thinking about using a 3rd-party load balancing device to meet these needs, but I was told there is no budget for it, so this is not going to happen. I know there is IP Failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active.
My question now is (this may be related to question 1): say that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage). If we go with DNS round robin and put the 2 servers in active/active mode, can both servers access a common shared network space? Or is this not possible, and each server must have its own storage space, so I must use something like rsync to make sure contents on both servers are identical? Should I use Xsan or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

Raja Kondar wrote:
what is the best practice for having server pools, i.e.
1) having a single large server pool consisting of "n" number of guest VMs
2) having multiple small server pools each consisting of a smaller number of guest VMs
I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is official best practice, but it is a simpler configuration.
Keep in mind that a server pool should probably have up to 20 servers in it: OCFS2 starts to strain after that.

  • BEST PRACTICE FOR AN EFFICIENT SEARCH FACILITY

    Good Morning,
Whilst in training, our trainer said that the way to get the most efficiency from SharePoint Search is to install the search facility on a separate server (separate hardware).
    Not sure how to have this process done.
    Your advice and recommendation would be greatly appreciated.
    thanks a mil.
    NRH

Hi,
You can create a dedicated search server that hosts all search components (the query and index roles, and crawl) on one physical server.
Here are some articles for your reference:
    Best practices for search in SharePoint Server 2010:
    http://technet.microsoft.com/en-us//library/cc850696(v=office.14).aspx
    Estimate performance and capacity requirements for SharePoint Server 2010 Search:
    http://technet.microsoft.com/en-us/library/gg750251(v=office.14).aspx
    Below is a similar post for your reference:
    http://social.technet.microsoft.com/Forums/en-US/be5fcccd-d4a3-449e-a945-542d6d917517/setting-up-dedicated-search-and-crawl-servers?forum=sharepointgeneralprevious
    Best regards
    Wendy Li
    TechNet Community Support

  • Best practice for install oracle 11g r2 on Windows Server 2008 r2

    Dear all,
May I know what the best practice is for installing Oracle 11g R2 on Windows Server 2008 R2? Should I create a special Windows account for the Oracle database installation? What permissions should I grant on the folders where Oracle is installed and where the database-related files are located (datafiles, controlfiles, etc.)?
Just grant Full Control for Administrators and System and remove permissions for all other accounts?
Also, how should I configure Windows Firewall to allow clients to connect to the database?
    Thanks for your help.

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure. This is being deployed in an enterprise environment. I have questions around the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand: what is the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI? Also, how can I generate an NCS_DOWN alarm in my lab? Doing "NCS stop" does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of ps -ef when logged in to the shell as root.
    Thanks guys!

Amihan_Zerrudo wrote:
1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc. and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
You should rather look to the functional requirements instead of costs. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do so. If it just needs to be request scoped (e.g. single page form data), then keep it request scoped.
2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it strain my server resources? Right now I am using the Sun Glassfish Server.
It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect that a webserver running on a Pentium 500MHz with 256MB of memory can flawlessly serve 100 simultaneous users in the same second. But you may expect that it can serve 100 users per 24 hours.
3.) Can you suggest best practices in memory management given the architecture I described above?
Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to do so. You should rather let the hardware depend on the application requirements, not let the application depend on the hardware specs.
4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun Glassfish Server take care of that or will I have to purchase a powerful server?
Glassfish is just application server software; it is not server hardware. Your concerns are rather hardware related.

  • Design Pattern for multithreaded client server program

    I asked this question in another post, but with other stuff, so I'll distill this one.
I am creating a multi-threaded client-server program (just for learning - a chat program at this point). I built the server and client in Swing, and I'm wondering what the best design pattern is for this setup. Right now all the Swing stuff is in the MyServer class. In that class I have a loop accepting client connections to the serverSocket and creating a new MyServerThread (threaded client connection).
The problem is that all the work of creating input streams and interacting with the server is done in the MyServerThread class, but I want that text to be written up to the Swing objects, which are in the MyServer class. So right now I pass the MyServer object into the MyServerThread class, but I'm not sure that is really the most robust thing to do. Does anybody have any suggestions as to how this should be done? If somebody has an article they'd like to point to, I'll check it out too. But if it's just the run-of-the-mill multithreaded client-server article, I've read a lot of those and most don't specifically address my question.

Thanks for the reply Kaj, and I think I'll keep my design for now, since it's just quick and dirty. I read about the MVC concept a while ago and I'll revisit it again when I get more serious. But I have a question: why should I be using a callback interface, why an interface at all? And then make MyServer implement that interface... why not just pass MyServer to the thread object? Or is there something down the line that I did not foresee?
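To illustrate what the callback suggestion buys you, here is a minimal sketch (all names are made up, not from the original posts): the worker thread depends only on a small interface, and the Swing class implements it and hops onto the Event Dispatch Thread:

import javax.swing.SwingUtilities;

// The worker thread knows only this interface, not the Swing class.
interface MessageListener {
     void onMessage(String text);
}

class ClientHandler implements Runnable {

     private final MessageListener listener;

     ClientHandler(MessageListener listener) {
          this.listener = listener;
     }

     @Override
     public void run() {
          // ... read lines from the client socket, then publish each one:
          listener.onMessage("hello from a client");
     }
}

// The Swing side implements the interface and marshals updates onto the EDT.
class ChatServerUi implements MessageListener {

     @Override
     public void onMessage(String text) {
          SwingUtilities.invokeLater(() -> {
               // e.g. textArea.append(text + "\n");
               System.out.println("UI update: " + text);
          });
     }

     void acceptLoop() {
          new Thread(new ClientHandler(this)).start();
     }
}

The advantage over passing MyServer directly is that the thread class can be tested or reused with any listener, and the UI class can change freely without touching the networking code, which is the decoupling the reply was pointing at.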

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC, that will be running on a new WinSrv 2012 r2 host server.   (This
    will be for a brand new network setup, new forest, domain, etc.)
    Specifically, your best practices regarding:
the sizing of non-virtual and virtual volumes/partitions/drives,
the use of sysvol, logs, & data volumes/drives on hosts & guests,
RAID levels for the host and the guest(s),
IDE vs SCSI, drivers (both non-virtual and virtual), and booting thereof,
disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non essential additional info:
We are a small to midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 r2 servers with as much Hyper-V virtualization as possible.
During the last few weeks we have been able to find most of the information we need to undergo this project, and most of the information was pretty solid with little ambiguity, except for the information regarding virtualizing the DCs, which has been a bit inconsistent.
Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Srvr 2008 r2, and we haven't really seen all that much on Srvr 2012 r2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.
