Connection pool testing for dealing with database restarts

How do you configure the app server to test JDBC connections before handing them to the application? The app server needs to accommodate database restarts while the app is running.
We have an EAR file built with Ant (we use Eclipse 3.x to develop the app), and we use the Deploy Tool to install it on the engine at 6.40 SP15. We have set up a datasource in the project options, but I don't see how to configure it so that the engine tests JDBC connections before giving them to the application. We need to guard against database restarts; other app servers I have used have an option that takes care of this detail. I cannot get the Deploy Tool help to appear, due to an invalid URL being requested by the tool.

Craig,
you are right: in Visual Admin, it appears, any options to test a pool-served connection are absent.
Remedy: in the Additional tab,
- check Expiration Control and set
- Connection Lifetime to, say, 60 (seconds) and
- Cleanup Thread to, say, 30.
This way you can't fully avoid catching a bad connection, but you diminish the likelihood.
Regards
Gregor
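
Since the 6.40 engine apparently exposes no test-on-borrow option, one application-side workaround is to validate each connection with a cheap query before using it. A minimal sketch, assuming a plain javax.sql.DataSource; the wrapper class and the test query are illustrative, not engine API:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.sql.DataSource;

    // Wraps the engine's DataSource and validates every connection with a
    // cheap test query before handing it out, so a connection killed by a
    // database restart is discarded instead of reaching business code.
    public final class ValidatingDataSource {
        private final DataSource ds;
        private final String testQuery; // e.g. "SELECT 1 FROM DUMMY" - adjust per database

        public ValidatingDataSource(DataSource ds, String testQuery) {
            this.ds = ds;
            this.testQuery = testQuery;
        }

        public Connection getConnection() throws SQLException {
            for (int attempt = 0; attempt < 2; attempt++) {
                Connection con = ds.getConnection();
                try {
                    Statement st = con.createStatement();
                    st.executeQuery(testQuery); // throws if the connection is dead
                    st.close();
                    return con;
                } catch (SQLException dead) {
                    try { con.close(); } catch (SQLException ignore) { /* discard */ }
                }
            }
            throw new SQLException("No valid connection after retry");
        }
    }

Business code would then call this wrapper's getConnection() instead of hitting the engine's DataSource directly.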

Similar Messages

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what is best practice for dealing with data retrieved via JDBC as RecordSets, without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications, since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; they are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
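
    A minimal sketch of such a readPermissions method (the table and column names are hypothetical, just to make the join concrete):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical schema: users, user_roles, role_permissions, permissions.
    public class UserDAO {
        private final Connection con;

        public UserDAO(Connection con) { this.con = con; }

        // All permission names granted to a user through any of the user's roles.
        public Set<String> readPermissions(int userId) throws SQLException {
            String sql = "SELECT DISTINCT p.name "
                       + "FROM user_roles ur "
                       + "JOIN role_permissions rp ON rp.role_id = ur.role_id "
                       + "JOIN permissions p ON p.id = rp.permission_id "
                       + "WHERE ur.user_id = ?";
            Set<String> perms = new HashSet<String>();
            PreparedStatement ps = con.prepareStatement(sql);
            try {
                ps.setInt(1, userId);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    perms.add(rs.getString(1));
                }
            } finally {
                ps.close(); // also closes the ResultSet
            }
            return perms;
        }
    }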

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set MRP to "Reorder Point Planning". This way you can avoid petty-cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, neither planned nor unplanned maintenance will be delayed.
    3. By doing GI against a reservation, quantity can be tracked against the order and the equipment.
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • What's the best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it's difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long-form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and don't have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot I pull it out of the bin and put it "away" That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be one the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that that "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Creative Zen Micro Photo not tested for use with Windows so can't install it! Please help

    Heya! Basically, when installing the software for the MicroPhoto, it stops, saying that as my device hasn't been tested for use with Windows it can't install, for fear of corrupting my laptop! Can anyone help me overcome this? I installed it ages ago and it let me have the option of continuing the installation, but I had to uninstall it, and upon reinstalling it does not give me the option to continue. It just cancels the installation. Can someone please help me, as this is driving me nuts! Why is it that technology is continuously mucking up?!
    Lucy

    Are you running Win XP SP2? If so, this is a standard warning about unsigned drivers, but it should still give you the option to "continue anyway" to complete the installation.
    Also, make sure you have Windows Media Player 10, as this is needed for Windows XP (drivers) to recognize your MicroPhoto.

  • Oracle connection pool problem (DBCP bound with JOTM)

    Hi,
    my web server is Tomcat 5.0.18, and I want to provide connection pooling and transaction management in my current system.
    I know that in Tomcat 5.0 we can use DBCP to provide the database connection pool service; to add transaction management to my system, I adopted JOTM.
    Now, my problem is this:
    to use DBCP, I set the datasource factory to org.apache.commons.dbcp.BasicDataSourceFactory; the connection pool then works, but the transaction management offered by JOTM fails.
    To make JOTM's transaction management succeed, I have to set the datasource factory to org.objectweb.jndi.DataSourceFactory, but then the connection pool offered by Tomcat fails.
    It seems these two datasource factories conflict.
    What can I do? I don't want to use the connection pool offered by JOTM.
    Can anyone help me?
    Thanks in advance

    Hi,
    I don't know the solution for JOTM, but you could try this JTA and its connection pools:
    http://www.atomikos.com ships a JTA that integrates with Tomcat and also provides JDBC connection pooling. There is a GUI control panel so that you don't have to know the XML details for Tomcat's config files.
    Best,
    Guy
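
    Whichever JTA you choose, the servlet-side pattern is roughly the same. A sketch, assuming the standard java:comp JNDI names and a hypothetical jdbc/MyDS resource from a typical Tomcat setup:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    // Two updates under one JTA transaction: both commit or both roll back.
    public class TransferService {
        public void transfer(int from, int to, long amount) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDS");
            utx.begin();
            Connection con = null;
            try {
                con = ds.getConnection(); // borrowed from the pool
                PreparedStatement ps = con.prepareStatement(
                        "UPDATE account SET balance = balance + ? WHERE id = ?");
                ps.setLong(1, -amount); ps.setInt(2, from); ps.executeUpdate();
                ps.setLong(1, amount);  ps.setInt(2, to);   ps.executeUpdate();
                ps.close();
                utx.commit();    // both updates become visible together
            } catch (Exception e) {
                utx.rollback();  // any failure undoes both updates
                throw e;
            } finally {
                if (con != null) con.close(); // return to the pool
            }
        }
    }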

  • Connection pooling problem in TopLink with JBoss4.0.3

    Hi All
    I am using JBoss 4.0.3 SP1 with TopLink 10g (9.0.4.5) and Sybase as the database. I am implementing JBoss connection pooling with Sybase, and I have made all the changes in the JBoss mapping files.
    Now, to enable connection pooling in TopLink, I have set the <uses-external-connection-pooling> attribute in the sessions.xml file to true, and I have added a <datasource> attribute corresponding to the datasource defined in JBoss. It works fine, but when I delete the username, password, etc. attributes from sessions.xml, I am no longer able to get the session from the datasource.
    Can anyone tell me what to do, or where I am making a mistake? Any help on this will be appreciated.

    I tried that code also, but the project object that I am getting is null on this line:
    DatabaseLogin login = (DatabaseLogin)mySession.getProject().getLogin();
    mySession is an object of the oracle.toplink.sessions.Session interface, but when I call this method it returns a null Project object, so I get a NullPointerException. I have tried this many ways, but I always get the NullPointerException in the setLogin() method of Session. This is the stack trace of the error:
    java.lang.NullPointerException
    at oracle.toplink.publicinterface.Session.setLogin(Session.java:2871)
    at com.test.TestCon.executeQurey(TestCon.java:91)
    at org.apache.jsp.Test_jsp._jspService(org.apache.jsp.Test_jsp:55)
    at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
    at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:322)
    at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
    at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
    at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:81)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
    at org.jboss.web.tomcat.security.CustomPrincipalValve.invoke(CustomPrincipalValve.java:39)
    at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:159)
    at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:59)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
    at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
    at java.lang.Thread.run(Unknown Source)
    Can you please tell me where I am wrong, or any other way to fix this problem?

  • Connection pool driver for Oracle 8i

    Dear all,
    I am testing the 6 beta.
    I can't create a connection pool with the driver that works in 5.1.
    Any suggestion or example to show how to do it?
    Thanks a lot!
    Regards,
    King

    Hi,
    Thanks for the quick reply.
    As I said, I'm using the OracleConnectionCacheImpl class to handle connection pooling within the code rather than configuring it in the web server. Now that we are using ojdbc5.jar with Oracle 11g, I need an equivalent class to handle the connection pool rather than configuring it in the web server or using OracleDataSource. If OracleDataSource is the connection-pooling class equivalent to OracleConnectionCacheImpl from classes12.jar, then can you please provide example connection code?
    Thanks.
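
    If OracleDataSource does turn out to be the route, a minimal sketch using the implicit connection cache that ojdbc5/11g ships; the URL and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.Properties;
    import oracle.jdbc.pool.OracleDataSource;

    // In-code replacement for OracleConnectionCacheImpl: an OracleDataSource
    // with the implicit connection cache enabled, no app-server config needed.
    public class PoolHolder {
        private static OracleDataSource ods;

        public static synchronized Connection get() throws SQLException {
            if (ods == null) {
                ods = new OracleDataSource();
                ods.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder
                ods.setUser("scott");                               // placeholder
                ods.setPassword("tiger");                           // placeholder
                ods.setConnectionCachingEnabled(true);  // implicit connection cache
                Properties cache = new Properties();
                cache.setProperty("MinLimit", "2");     // pool sizing
                cache.setProperty("MaxLimit", "20");
                cache.setProperty("ValidateConnection", "true");
                ods.setConnectionCacheProperties(cache);
            }
            return ods.getConnection();                 // borrowed from the cache
        }
    }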

  • Connection pooling - looking for some advice

    Hi all,
    I wish to implement a JDBC connection pool. I have read through previous posts and have also read the tutorial on connection pooling.
    Some things are still not quite clear. I am only asking for advice because my deadline is approaching - it is still at a respectable distance, but I don't have time to make mistakes (I have made enough already :-)).
    Setting a minimum number of connections makes sense but I don't know why you would want/need to set a maximum number - wouldn't this mean that (potentially) 1 or more clients would simply have to wait for a connection to be freed by the pool ? I am only working with a small database (I am expecting approx. 200 users).
    Also when I run short of connections (I have to create a new connection for the client) I wish to execute a background process that will create say 5 new connections whenever I create just 1. From my poor knowledge of threads I take it using a thread would not make any difference as the client would have to wait for this thread to end. Is this possible ?
    I would be glad for any advice (I am not looking for source code as I have quite a good idea as to how to proceed).
    Thanks,
    BadLands

    Answer to your first question:
    "Setting a minimum number of connections makes sense but I don't know why you would want/need to set a maximum number - wouldn't this mean that (potentially) 1 or more clients would simply have to wait for a connection to be freed by the pool? I am only working with a small database (I am expecting approx. 200 users)."
    One basic principle of connection pooling: we implement it to save resources and reduce overhead on the database, and the main point is to reuse connection objects that have already been created.
    There are two parameters, the minimum and the maximum number of connections.
    When the server starts, it creates the minimum number of connection objects (say 5) and puts them in the pool, so whenever a client comes it won't create another connection but will assign one from the pool.
    Now, according to you, if there were no maximum: once those 5 connection objects are in use, each arriving client would cause a new connection to be created, and connections would keep being created as clients come - so there would be no point to pooling at all.
    What happens when the initial clients have used a connection object and no longer want it? The connection pool puts that connection object back into the pool, and when the next client comes it is given this already-created connection object; that is how reusability is achieved.
    Hope this is clear. (A minimal sketch of this min/max behaviour follows.)
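
    To make the min/max behaviour concrete, a minimal, self-contained pool sketch (illustration only; a real pool would also validate and expire connections):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Pre-creates MIN connections, grows on demand up to MAX, and once MAX is
    // reached makes callers wait for a connection to be returned to the pool.
    public class SimplePool {
        private final BlockingQueue<Connection> idle = new LinkedBlockingQueue<Connection>();
        private final AtomicInteger total = new AtomicInteger(0);
        private final String url, user, password;
        private final int max;

        public SimplePool(String url, String user, String password,
                          int min, int max) throws SQLException {
            this.url = url; this.user = user; this.password = password; this.max = max;
            for (int i = 0; i < min; i++) {          // warm the pool
                idle.add(open());
                total.incrementAndGet();
            }
        }

        private Connection open() throws SQLException {
            return DriverManager.getConnection(url, user, password);
        }

        public Connection borrow(long timeoutMs) throws SQLException, InterruptedException {
            Connection c = idle.poll();              // fast path: reuse an idle one
            if (c != null) return c;
            if (total.incrementAndGet() <= max) {
                try {
                    return open();                   // still below the cap: grow
                } catch (SQLException e) {
                    total.decrementAndGet();
                    throw e;
                }
            }
            total.decrementAndGet();                 // at the cap: wait for a return
            c = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (c == null) throw new SQLException("Pool exhausted; timed out waiting");
            return c;
        }

        public void release(Connection c) {          // client hands the connection back
            idle.add(c);
        }
    }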

  • Creating connection pools dynamically and binding them to a TxDataSource

    Hi!
    Is there a way I can create a dynamic connection pool and associate it with a DataSource using WebLogic? I know I can't create a datasource dynamically, so are there any workarounds to bind a dynamically created connection pool to a configured datasource?
    thanks,
    Srinivas

    Ken Yeung wrote:
    My application is required to support many different connections to different databases. I'll need to set up potentially hundreds of connection pools (and associated datasources). I was thinking of creating them dynamically through the app as needed, but I'm wondering if I can get away with creating all of them upfront (and setting the initial connections to 0). This way I can set all the database properties via the WL console instead of hardcoding them in the app. I would like to know if there's a significant cost to creating datasources and connection pools upfront (even though they're initially not used). Please let me know how you would approach this. Thanks.

    You can certainly create them ahead of time. There is no significant overhead from simply having the pools. However, if you set them up to do periodic refresh, they will involve some cycles.

  • Strategies for dealing with large forms

    I've created a dynamic form that is about 30 pages long. Although performance in Reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
    What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
    Many thanks
    Alex

    Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have done work with very large forms before, and I have maintained each page in a separate XDP and then brought them together once all the kinks are worked out. I don't know why you would be losing formatting information.
    There is a technique that I have used where you use dynamic subforms to show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it's a very complex approach, and I have come to think that it's too complex for comfort. The more complex your approach is the more likely it is to fail in a future version of Reader.

  • Tips for dealing with a large channel count on cRIO

    Hello, I have a very simple application that takes an analog input from a thermistor using an AI module (9205) and, based on the value of the input, sends out a true/false signal using a digital output module (9477). Each cRIO chassis will have close to 128 channels, with the code being exactly the same for each channel.
    I wonder if anyone has any tips for how I can do this so that I don't have to copy and paste each section of code 128 times. Obviously that would be a nightmare if the code ever had to be changed. I'm sure there is a way to make a function or a class, but being new to graphical programming I can't think of a good way to do this. I looked for a way to dynamically select a channel but can't seem to find anything; if I could select the channel dynamically, I'm guessing I could create a subVI and do it that way. Any tips or help would be greatly appreciated.

    There isn't a way to dynamically choose a channel at runtime. In order for the VI to compile successfully, the compiler must be able to statically determine which channel is being read or written in the I/O Node at compile time. However, that doesn't mean you can't write a reusable subVI. If you right-click on the FPGA I/O In terminal of the I/O Node and create a constant or control, you should be able to reuse the same logic for all of your channels. The attached screenshot should illustrate the basics of what this might look like. If you right-click the I/O control/constant and select "Configure I/O Type...", you can configure the interface the I/O Item must support in order for it to be selectable from the control. While this helps single-source some of the logic, you will still eventually need 128 I/O constants somewhere in your FPGA VI hierarchy.
    I should also mention that if each channel being read from the 9205 is contained in a separate subVI or I/O Node, you will also incur some execution time overhead due to the scanning nature of the module.  You mentioned you are reading temperature signals so the additional execution time may not be that important to you.  If it is, you may want to look at the IO Sample Method.  You can find more information and examples on how to use this method in the LV help.  Using the IO Sample Method does allow you to dynamically choose a channel at runtime and is generally more efficient for high channel counts.  However, it's also a lot more complicated to use than the I/O Node.
    You also mentioned concerns about the size of arrays and the performance implications of using a single for loop to iterate across your data set. That's the classic design trade-off when dealing with FPGAs. If you want to perform as much in parallel as possible, you'll need to store all 128 data points from the 9205 modules at once, process the data in parallel using 128 instances of the same circuit, and then output a digital value based on the result. If you're using fixed-point data types, that's 128 x 26 bits for just the I/O data from the 9205. While this will yield the fastest execution times, the resulting VI may be too large to fit on your target. Conversely, you could use the IO Sample Method to read each channel one at a time, process the data using the same circuit, and then output a digital value. This strategy will use the least amount of logic on the FPGA but will also take the longest to execute. Of course, there are all sorts of options you could create in between these two extremes. Without knowing more about your requirements, it's hard to advise which end of the spectrum you should shoot for. Anyway, hopefully this will give you some ideas on where to get started.
    Attachments:
    IO Constant.JPG (31 KB)

  • Browser application test for Firefox with JLP

    How different is Firefox (en-US) once the Japanese Language Pack (JLP) add-on is installed?
    I'm planning a browser test for Firefox under the following conditions:
    Operating System / Windows 7 (Japanese)
    Browser / Firefox Setup 10.0.2, 11.0, 12.0 each
    *For each Firefox version, the JLP is added on from ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/10.0.2/win32/xpi/ja.xpi (in the case of 10.0.2)
    Testing covers not just whether the layout looks good, but also whether encoding through the database functions well.
    I'd really appreciate it if someone would tell me the impact of adding the JLP to Firefox (en-US).

    cor-el, thanks for your quick answer! It makes it clear that the JLP only affects string settings and shortcut keys (keyboard behavior).
    I'm trying to test several browsers from the viewpoint of the following:
    1. JavaScript
    2. VMware Remote Console (VMRC) plug-in
    3. HTML5 File Reader API
    It seems that nothing should differ when checking whether these work once Firefox has the JLP xpi file added on.
    I'd really appreciate it if you could give me any extra comments about testing under these conditions.

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with Ant? I'm not an Ant expert at all. The Ant script would have "build-test-ear" and "build-prod-ear" targets which would swap in the correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding - I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John
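
    For the programmatic route mentioned above, a rough sketch of externalizing the configuration name so Test vs. Prod is a property rather than a code change (the AM and configuration names here are hypothetical):

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    // Picks the AM configuration (e.g. AppModuleLocal for Test, AppModuleProd
    // for Prod) from a system property instead of hardcoding it.
    public class AmAccess {
        public static ApplicationModule acquire() {
            String config = System.getProperty("am.config", "AppModuleLocal");
            return Configuration.createRootApplicationModule(
                    "model.services.AppModule", config);
        }

        public static void release(ApplicationModule am) {
            Configuration.releaseRootApplicationModule(am, true);
        }
    }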

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables and return them as Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    "Would you use one SQL statement to retrieve all of the data for display?" Yes.
    "...and use logic to avoid printing the redundant part of the data?" No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However, I prefer to store the objects in a bean class with attributes other than Strings - i.e. one object, with a collection attribute to hold the related "many" records. (A sketch of that mapping follows.)
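
    A minimal sketch of that shape, with hypothetical Order/OrderLine names: one joined query, and the redundant parent columns are absorbed while building the beans, so the JSP never has to skip them:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // One joined query; one Order bean per "one"-side row; a collection
    // attribute gathers the "many"-side rows.
    public class OrderDao {
        public List findAll(Connection con) throws SQLException {
            String sql = "SELECT o.id, o.customer, l.item, l.qty "
                       + "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id "
                       + "ORDER BY o.id";
            Map byId = new LinkedHashMap();              // order id -> Order bean
            Statement st = con.createStatement();
            try {
                ResultSet rs = st.executeQuery(sql);
                while (rs.next()) {
                    Integer id = new Integer(rs.getInt("id"));
                    Order o = (Order) byId.get(id);
                    if (o == null) {                     // first row for this order
                        o = new Order(id.intValue(), rs.getString("customer"));
                        byId.put(id, o);
                    }
                    String item = rs.getString("item");  // null if the order has no lines
                    if (item != null) {
                        o.getLines().add(new OrderLine(item, rs.getInt("qty")));
                    }
                }
            } finally {
                st.close(); // also closes the ResultSet
            }
            return new ArrayList(byId.values());
        }
    }

    class Order {
        private final int id;
        private final String customer;
        private final List lines = new ArrayList();      // OrderLine beans
        Order(int id, String customer) { this.id = id; this.customer = customer; }
        public int getId() { return id; }
        public String getCustomer() { return customer; }
        public List getLines() { return lines; }
    }

    class OrderLine {
        private final String item;
        private final int qty;
        OrderLine(String item, int qty) { this.item = item; this.qty = qty; }
        public String getItem() { return item; }
        public int getQty() { return qty; }
    }
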
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining how backwards the existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets
