Best practice for GWT testing?

I was wondering what the best way is to test my GWT client logic, which calls into my server and processes the results. I don't want to have a server running for these tests; that will come under my functional/performance testing. Is there a way to mock a server?
Specifically, I have code like:
String url = "http://localhost:8080/myApp/login";
RequestBuilder builder = new RequestBuilder( RequestBuilder.POST, url );
builder.sendRequest( "My data String", new MyResponseHandler() );
How should I test this without a service being available at the given URL, and instead have a way to send specific data back to the client code under test?

Does the unit testing support of GWT not work for you?
http://code.google.com/webtoolkit/doc/latest/tutorial/JUnit.html
If not, running a server might be the only option you have. Perhaps JBoss Arquillian can help you there:
http://www.jboss.org/arquillian
Arquillian supports Jetty as a container, for example, which boots really fast.
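One way to avoid both a running server and GWTTestCase is to hide the RequestBuilder call behind a small interface and inject a fake in plain JUnit tests. The sketch below is only an illustration; Gateway, FakeGateway and LoginClient are hypothetical names, not part of GWT:

// Hypothetical abstraction over the HTTP call (each top-level type in its own file).
public interface Gateway {
    void post(String url, String data, Callback callback);

    interface Callback {
        void onSuccess(String body);
        void onError(Throwable error);
    }
}

// The production implementation would delegate to com.google.gwt.http.client.RequestBuilder.
// The fake used in plain JUnit tests returns canned data synchronously, so no server is needed:
public class FakeGateway implements Gateway {
    private final String cannedResponse;

    public FakeGateway(String cannedResponse) {
        this.cannedResponse = cannedResponse;
    }

    public void post(String url, String data, Callback callback) {
        callback.onSuccess(cannedResponse);
    }
}

// In a test, the client logic under test gets the fake injected:
//     LoginClient client = new LoginClient(new FakeGateway("{\"loggedIn\":true}"));
//     client.login("My data String");
//     assertTrue(client.isLoggedIn());

The same idea works with a mocking library (Mockito, GwtMockito) instead of a hand-rolled fake; the key point is that the code under test never touches RequestBuilder directly.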

Similar Messages

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we have done hardware sizing.
    We have three environments (Test/Development, Training, Production).
    However, the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me?)
    Also, could I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for EBS Testing

    Hello Friends - We are doing EBS configuration, which includes search strings as well. These changes are applicable to around 40-50 bank accounts.
    Should I ask the bank to send us test bank statements for all these accounts?
    Can you please share best practices for testing an EBS setup?
    Thanks

    Hello!
    You don't have to test EBS for each bank account. The best approach is to identify the typical bank statement cases and test those. For instance, if you have bank statements from five banks with several bank accounts in each bank, you only need to test one bank statement for one bank account from each bank. Similarly, if you have different types of bank accounts (e.g. current account, deposit account, transfer account, etc.), you will have different operation types in the bank statements for these accounts, so you also have to test bank statements from the different account types.
    To sum up, test typical bank statements from each bank and, if applicable, bank statements from each account type.
    Hope this will help you!
    Best regards!

  • Best practice for creating test classes

    I am quite new to Java, using NetBeans 6.1 with JDK 1.6. For some of the functions I created I want to write test classes that run automated tests to check consistency when I make changes.
    I read something about this topic and got confused by the different approaches depending on the JDK/NetBeans version used, and I also found very different-looking examples on the net.
    Now I am very confused about how to continue. Any hint would be helpful.
    Thanks,
    Martin.

    georgemc wrote:
    Can you expand on that? I suspect you're talking about JUnit < 4 vs JUnit 4, since JUnit 4 can only work with Java versions from 5 onwards.
    Yes, I am talking about JUnit - in NetBeans the default was 3.8.2 if I remember right, and I installed 4.1.
    The problem is where to start: creating a JUnit Test, a Test for Existing Class, or a Test Suite.
    When I try to create tests for an existing class, it shows me something like this when using 4.1:
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class TestsTest {

        public TestsTest() {
        }

        @Before
        public void setUp() {
        }

        @After
        public void tearDown() {
        }
    }
    but something like this when using 3.8.2:
    import junit.framework.TestCase;

    public class TestsTest extends TestCase {

        public TestsTest(String testName) {
            super(testName);
        }

        protected void setUp() throws Exception {
            super.setUp();
        }

        protected void tearDown() throws Exception {
            super.tearDown();
        }
    }
    So the two versions seem to work completely differently. The older version seems more understandable to me as a beginner. Is it "safe/stable" to use the newer JUnit already? How is the support for the newer version in NetBeans - I mean, should I expect problems if I don't go with the older one?
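    For orientation, a complete JUnit 4 test is not much more than that generated skeleton. Here is a minimal sketch, assuming a hypothetical Calculator class with an add(int, int) method:
    import static org.junit.Assert.assertEquals;
    import org.junit.Before;
    import org.junit.Test;

    public class CalculatorTest {

        private Calculator calculator;   // hypothetical class under test

        @Before
        public void setUp() {
            calculator = new Calculator();
        }

        @Test
        public void addReturnsTheSumOfItsArguments() {
            assertEquals(5, calculator.add(2, 3));
        }
    }
    JUnit 4 on JDK 1.6 is generally safe to use, and its runner can still execute old 3.8-style TestCase subclasses, so mixing both styles during a transition is usually not a problem.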

  • Best Practices for unit testing an Xlet?

    I'd like to be able to unit test an Xlet using JUnit, but have been stumped by how this can be done correctly.
    My first inclination is that I should mock the XletContext, but that seems to get out of control quickly.
    Does anyone have any tips on this they can point me to?
    Thanks.

    Hi Jean-Paul Smit,
    Here is a good article about how to unit test a SQL Server 2008 database using Visual Studio 2010; please see:
    http://blogs.msdn.com/b/atverma/archive/2010/07/28/how-to-unit-test-sql-server-2008-database-using-visual-studio-2010.aspx
    Thanks,
    Eileen
    TechNet Subscriber Support
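    Coming back to the original Xlet question: rather than hand-rolling a full XletContext implementation, a mocking framework can generate one for you. Below is a minimal sketch, assuming the javax.tv.xlet API with JUnit 4 and Mockito on the classpath; MyXlet is a hypothetical Xlet that reads a property during initXlet():
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.never;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import javax.tv.xlet.XletContext;
    import org.junit.Test;

    public class MyXletTest {

        @Test
        public void initXletReadsItsConfigProperty() throws Exception {
            // Mockito supplies the XletContext, so no real Xlet runtime is required.
            XletContext context = mock(XletContext.class);
            when(context.getXletProperty("configUrl")).thenReturn("http://example.com/config");

            MyXlet xlet = new MyXlet();      // hypothetical Xlet under test
            xlet.initXlet(context);

            // The Xlet should not destroy itself when the property is present.
            verify(context, never()).notifyDestroyed();
        }
    }
    If you are targeting the javax.microedition.xlet API instead, the same pattern applies; only the imported XletContext interface changes.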

  • Best practice for test reports location -multiple installers

    Hi,
    What is the recommended best practice for saving test reports with multiple installers of different applications?
    For example, if I have three different TestStand installers - Installer1, Installer2 and Installer3 - and I want to save the test reports of each installer at:
    1. C:\Reports\Installer1\TestReportfilename
    2. C:\Reports\Installer2\TestReportfilename
    3. C:\Reports\Installer3\TestReportfilename
    How could I do this programmatically so that all reports end up in the proper folder when the TestStand installers are deployed to a test PC?
    Thanks,
    Frank

    There's no single recommended best practice for what you're suggesting. The example here shows how to programmatically modify a report path, and this Knowledge Base article describes how you can change a report's file path based on test results.
    -Mike
    Applications Engineer
    National Instruments

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set up an OracleVM farm, and I have a question about "best practice" for configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (NetApp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA-capable VMs using OCFS2 (on top of NFS).
    I'm trying to decide between two possible configurations. The first keeps physical separation between the management/storage networks and the DomU networks; the second just trunks everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 R2 Hyper-V virtualized AD DC that will be running on a new WinSrv 2012 R2 host server? (This will be for a brand new network setup: new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non-virtual and virtual volumes/partitions/drives,
    the use of sysvol, logs, and data volumes/drives on hosts and guests,
    RAID levels for the host and the guest(s),
    IDE vs. SCSI, the drivers involved (both non-virtual and virtual), and booting from them,
    disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need for this project, and most of it was pretty solid with little ambiguity, except for the information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to doing this under Server 2008 R2; we haven't really seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best practice for a deployment (EAR containing WAR/EJB) in a production environment

    Hi there,
    I'm looking for some hints regarding the best practice for deployment in a production environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) in the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development but also for the production system.
    However, I found some hints in some books, and those authors prefer static deployment for the production system.
    My questions now:
    Could anybody provide me with some links to whitepapers regarding best practice for deployment into a production system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of this kind of deployment)?
    THX in advance
    -Martin


  • Best Practice for Customization of ESS 50.4

    Hi ,
    We have implemented ESS 50.4 on EP 6.0 SP14 and R/3 4.6C. I want to know what the best practice is for minor modifications of ESS transactions. For example, I need to hide the Change button on the Personal Information screen.
    Please let me know.
    PS : Guaranteed award points
    Aneez

    @Aneez
       "Best Practice" is just going to be good ole' ITS custom development. All the "old" ESS services are all ITS based. What can not be done through config is then done by developing custom version of the ESS services. For what you describe (ie. the typical "hide a button" scenario) it is simply a matter of:
    (1) create custom version(ie. "Z" version) of the standard service. The service file will still call the same backend transaction via the ITS parameter ~transaction.
    (2) Since you are NOT making changes that require anything changed on the backend transaction (such as adding new fields, changing business logic, etc) you are lucky to ONLY have to change the web templates. Locate the web template in your new custom service file that corresponds to the screen in the transaction where the "CHANGE" button appears. The ITS naming convention for web templates is <sapprogramname>_<screennumber>.
    (3) After locating the web template that corresponds to your needed screen, simply locate in the HTMLb where the "CHANGE" button code is and comment it out. Just that easy!
    (4) Publish your new customized service and test it out directly through ITS. ie. via the direct URL to it: http://<yourdomain>/scripts/wgate/<yourservice>!
    (5) once you see that it works, you can then make an iView for it in your portal (or simply change the iView you have to now point to your custom ITS service.
    LOTS and LOTS more info on ITS development all around this site and in the ITS sepcific forum.
    Hope this helps!
    Award points or save them...I really don't care. I think the points system here is one of the dumbest ideas since square wheels. =)

  • Best practice for use of spatial operators

    Hi All,
    I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results which lie within a given geometry - for example, selecting parish boundaries within a county.
    Our boundary data is highly detailed, commonly containing upwards of 50,000 vertices for a county-sized polygon.
    I've currently been experimenting with queries such as:
    select
    from
    uk_ward a,
    uk_county b
    where
    UPPER(b.name) = 'DORSET COUNTY' and
    sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
    However, the speed is unacceptable, especially as most implementations of the toolkit will be web-based. The query above takes around a minute to return.
    Any comments or thoughts on best practice for using Oracle Spatial in this way will be warmly welcomed. I'm looking for a solution which is as quick and efficient as possible.

    Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results from the execution plan, run in SQL*Plus:
    Elapsed: 00:01:24.81
    Execution Plan
    Plan hash value: 598052089
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
    |* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
    | 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
    |* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
    Predicate Information (identified by operation id):
    2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
    4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+inside')='TRUE')
    Statistics
    20431 recursive calls
    60 db block gets
    22432 consistent gets
    1156 physical reads
    0 redo size
    2998369 bytes sent via SQL*Net to client
    1158 bytes received via SQL*Net from client
    17 SQL*Net roundtrips to/from client
    452 sorts (memory)
    0 sorts (disk)
    125 rows processed
    The wards table has 7545 rows, the county table has 207.
    We are currently on release 10.2.0.3.
    All I want to do with this is generate results which fall within a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
    I'm also looking through the forums now for tuning topics...

  • Best practice for RAC connections

    I've got a question about what people consider best practice for setting up high-availability connection pools to a RAC cluster. Now that you can specify the fail-over logic right in the thin connection string, it seems like there are three options:
    A) Use OCI connections and allow the fail-over logic to be maintained in the TNSNAMES.ORA file.
    B) Use simple thin connections with multi-pools and let WebLogic maintain the fail-over logic.
    C) Use simple thin connections with fail-over logic in the connection string.
    Thanks,
    Rodger...

    If you need XA, then follow the WebLogic documentation. If not, then
    you have much more freedom. The thin driver can be configured to
    use the tnsnames.ora file if that helps you. WebLogic much prefers the
    thin driver to the OCI-based one, which can kill a JVM with OCI bugs.
    If you do driver-level failover, each failed connection will cost a test and a replacement. If you use multipools, WLS can be configured to flush a whole pool when it finds a bad connection, and also to fail over at the pool level right then, so application delay is minimized.
    Joe
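    For reference, option C boils down to a TNS-style descriptor embedded in the thin URL. Here is a minimal sketch with placeholder host names (rac-node1, rac-node2) and a placeholder service name; in WebLogic you would normally put this URL into the connection pool/data source configuration rather than call DriverManager yourself:
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RacThinFailoverDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder hosts and service name; adjust for your cluster.
            String url = "jdbc:oracle:thin:@(DESCRIPTION="
                    + "(ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521)))"
                    + "(CONNECT_DATA=(SERVICE_NAME=myservice)))";
            Connection con = DriverManager.getConnection(url, "scott", "tiger");
            try {
                System.out.println("Connected via: " + con.getMetaData().getURL());
            } finally {
                con.close();
            }
        }
    }
    Whether connection-level failover or a multipool flush is the better fit depends on how much per-connection retry delay your application can tolerate, as Joe describes above.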

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating child UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the child UIComponents so the content scales properly.
    Right now I am trying to loop over the children, calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practice for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are best practices available for implementation, migration and upgrade, but I was unable to find one for a productive landscape.
    Thanks.

    Hi Siva,
    What best practice are you looking for? If you can be more specific in your question, we can provide an appropriate response.
    From my Basis experience, some of the best practices are:
    1) The productive landscape should have high availability for the business. For this you may set up DR, HA, or both.
    2) It should have backups configured, for which a restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be highly controlled. Any transport to production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested before go-live and then enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practice for OSB, OER and OSR

    Hi,
    I would like to know what are the best practices for using OSB, OER and OSR together in a run-time environment for a simple WSDL service.
    From how I see it:
    1) Create service-definition in OER
    2) Create proxy and business service in OSB
    3) Use harvester to harvest the proxy and business services in OER
    4) Use Exchange Utility to publish the proxy service to OSR
    Then the client can discover the proxy service and use OSB to invoke the operations of the service.
    Kindly correct me where I am wrong.
    Some questions:
    1) Do we ever publish business service to OSR? Or is it only meant for proxy services?
    2) Is synching of OSB and OSR only meant for cases where we are not using OER?
    Looking forward...

    Hi Anuj,
    Thanks for the links, they have been helpful.
    I understand now that OSR is only meant to contain proxy services. The synch facility between OSR and OSB exists so that, when you are not using OER, you can publish proxy services from OSB to OSR. What I didn't understand was why there was an option to publish a proxy service back to OSB and why it ended up as a business service. From the link you provided, it is mentioned that this case is for multi-domain OSBs, where one OSB wants to use another OSB's service. It is clear now.
    Some more questions:
    1) At design time, no endpoints are generated in OER for proxy services. So how do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar
