JDBC best practice

Hi,
I am creating a simple desktop application for my own purposes. There will be only one user (me) and I need to use a database. Let's say I have various events (button clicks etc.) that trigger actions on my database (update/select/delete). Should I a) open/close a connection for each of my queries, b) use a singleton to create one connection instance and close it at the end of my app's lifecycle, c) use connection pooling (like the BasicDataSource class from Apache DBCP), or d) other?
I know that you almost always have to use connection pooling with multiple users, but I'm interested in the situation where there is only one user. I don't like the first ("a") solution because connecting to the database takes too much time.
And I don't see the difference between the "b" and "c" solutions when there is only one user, because if I understand correctly, calling connection.close() on a pooled connection won't close it but only return it to the pool, which seems the same as a singleton (one open connection).
And when is solution "a" applicable? If it has to connect to the database before each query (or batch of queries), it doesn't look suitable for most cases.
Thanks

jonasnas wrote:
Hi,
I am creating a simple desktop application for my own purposes. There will be only one user (me) and I need to use a database. Let's say I have various events (button clicks etc.) that trigger actions on my database (update/select/delete). Should I a) open/close a connection for each of my queries, b) use a singleton to create one connection instance and close it at the end of my app's lifecycle, c) use connection pooling (like the BasicDataSource class from Apache DBCP), or d) other?

Depends on the application and usage.
Do you do something like poll the database every minute? Then perhaps a pool is advisable.
Does the app/user only do queries and then an update once an hour? Then a pool is a waste of time.
I know that you almost always have to use connection pooling with multiple users, but I'm interested in the situation where there is only one user. I don't like the first ("a") solution because connecting to the database takes too much time.

And you know how long it takes to connect to the database because you have actually timed it?
Excluding WANs and flaky networks, a connection is going to take quite a bit less than 1 second. And with a good network it is probably going to be less than 100 ms.
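
For what it's worth, options b) and c) end up looking almost identical in code. Below is a minimal sketch of option c), assuming Apache Commons DBCP2; the URL, credentials, and the events table are placeholders. With setMaxTotal(1) it behaves much like the singleton in b), except that the pool can validate and replace a broken connection.

import org.apache.commons.dbcp2.BasicDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class EventDao {
    // one pool for the whole application; close() returns connections to it
    private static final BasicDataSource POOL = new BasicDataSource();
    static {
        POOL.setUrl("jdbc:oracle:thin:@localhost:1521:ORCL"); // placeholder URL
        POOL.setUsername("user");                             // placeholder
        POOL.setPassword("password");                         // placeholder
        POOL.setMaxTotal(1); // a single-user desktop app needs very few connections
    }

    public int countEvents() throws SQLException {
        // try-with-resources hands the connection back to the pool rather than closing it
        try (Connection con = POOL.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM events");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}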

Similar Messages

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and return them in RecordSets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per each instance of the bean and then close the Recordset
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like had a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    Would you use one SQL statement to retrieve all of the data for display?
    Yes.
    and use logic to avoid printing the redundant part of the data?
    No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than strings - ie one object, with a collection attribute to hold the related "many" records (see the sketch after this message).
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree: in terms of best practices, what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining about how backwards the existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error-prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets
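
    A rough sketch of that one-object-with-a-collection mapping: one query over the join, and a new parent bean is started only when the "one"-side key changes. All table, column, and class names here (orders, order_items, Order) are made up for the example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class OrderDao {
        public List<Order> findAll(Connection con) throws SQLException {
            String sql = "SELECT o.id, o.customer, i.description "
                       + "FROM orders o LEFT JOIN order_items i ON i.order_id = o.id "
                       + "ORDER BY o.id";
            Map<Integer, Order> byId = new LinkedHashMap<>();
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    int id = rs.getInt("id");
                    Order order = byId.get(id);
                    if (order == null) {        // first row for this parent
                        order = new Order(id, rs.getString("customer"));
                        byId.put(id, order);
                    }
                    String item = rs.getString("description");
                    if (item != null) {         // LEFT JOIN: a parent may have no items
                        order.getItems().add(item);
                    }
                }
            }
            return new ArrayList<>(byId.values());
        }
    }

    class Order {
        private final int id;
        private final String customer;
        private final List<String> items = new ArrayList<>();
        Order(int id, String customer) { this.id = id; this.customer = customer; }
        List<String> getItems() { return items; }
    }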

  • Best practice for opening/closing JDBC connection

    I've written a program that accesses a database, and I'd like to know the best practice for opening and closing that connection. For example, should I use a try {} finally {} block,
    Connection con = null;
    try {
        Class.forName(driverClassName);                         // load driver
        con = DriverManager.getConnection(url, user, password); // create connection
        Statement stmt = con.createStatement();                 // create statement object
        String sql = "SELECT ...";                              // sql statement to execute
        ResultSet rs = stmt.executeQuery(sql);                  // execute statement
    } finally {
        if (con != null) con.close();                           // close connection
    }
    Or should I split the code into separate methods, maybe an init() method that loads the driver and makes the connection, an execute() method that creates the statement and executes it, and finally a cleanUp() method that closes the connection?
    So, which would be the best way to do this?

    Hello,
    your idea seems OK to me. However, there are a couple of points to consider:
    1. Do you just want to execute one SQL query? Or will your program execute several? If the latter case is possible, then you do not want to close your connection between queries. Opening and closing a connection to a database takes 'a lot of time', relatively speaking. In this case, it is better to save the connection and reuse it every time that you need it.
    2. Do not forget to close the statements and result sets that you create or get back from JDBC. Depending on the database (eg Oracle) that you are using, you can run out of cursors, and your application will stop.
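
    To make point 2 concrete, here is a small sketch: the long-lived connection stays open between queries, while try-with-resources closes the statement and result set (and with them the server-side cursor) after every query. The emp table is just an example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CursorSafeQuery {
        // con is a long-lived Connection that is reused across calls
        static void printNames(Connection con) throws SQLException {
            try (PreparedStatement ps = con.prepareStatement("SELECT ename FROM emp");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            } // statement and result set close here; the connection stays open
        }
    }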

  • Oracle JDBC (10g) reading clobs -- best practices

    What is the better approach using oracle 10g to save clobs:
    #1) This:
    PreparedStatement pstmt = conn.prepareStatement( ... );
    //Create the clob for insert (Clobs appears to be a custom helper class)
    Clobs clobs = new Clobs();
    CLOB tempClob = clobs.CreateTemporaryCachedCLOB(conn);
    java.io.Writer writer = tempClob.getCharacterOutputStream();
    writer.write(Description);
    writer.flush();
    writer.close();
    #2) Or this:
    OraclePreparedStatement pstmt = (OraclePreparedStatement) conn.prepareStatement( ... );
    pstmt.setStringForClob( ... );
    According to my notes, it is #2.
    What is the better approach to read clobs:
    #1) Stream the clob
    //Get character stream to retrieve clob data
    Reader instream = ClobIn.getCharacterStream();
    //Create temporary buffer for read
    char[] buffer = new char[10];
    //Length of characters read
    int length = 0;
    //Accumulate the contents (the original += on a char[] would not work)
    StringBuilder contents = new StringBuilder();
    //Fetch data
    while ((length = instream.read(buffer)) != -1) {
        contents.append(buffer, 0, length);
    }
    //Close input stream
    instream.close();
    //Empty LOB
    ClobIn.empty_lob();
    #2) Or this:
    Simply use rs.getString() to get your clob contents. This will return the entire clob and will not truncate.
    I'm just confused about the best practices for performance/memory allocation, and I keep reading people saying different things.
    Reposted in JDBC forum

    Check chapter 16 of "PL/SQL Programming", by Oracle Press, for a starter.
    Then have a look at this link - I found it helpful: http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/advanced/LOBSample/LOBSample.java.html
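
    For reference, a minimal sketch of the #2 write path using the Oracle JDBC extension; the docs table and its columns are placeholders.

    import java.sql.Connection;
    import oracle.jdbc.OraclePreparedStatement;

    public class ClobWriter {
        static void save(Connection conn, int id, String description) throws Exception {
            OraclePreparedStatement pstmt = (OraclePreparedStatement) conn.prepareStatement(
                    "INSERT INTO docs (id, body) VALUES (?, ?)");  // placeholder SQL
            pstmt.setInt(1, id);
            pstmt.setStringForClob(2, description); // the driver binds the string as a CLOB
            pstmt.executeUpdate();
            pstmt.close();
        }
    }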

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on Windows Server OS and providing the database access via a Named ODBC connection (eg. "APP_DATA".)
    This made it easy to manage as all the Report Developers had a standard System DSN called "APP_DATA" which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the "defacto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some info.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with the Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that will be the same across all environments.
    The database name will be resolved differently depending on the environment, and will therefore access a different database.
    The second option is the possibility to change the connection in .rpt files in an automated way, for example with the schedule manager. This tool is an additional web application to deploy that can change the connection settings of thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in a row.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but for large volumes you will see the difference.
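
    To illustrate the name-resolution idea (the exact entries below are assumptions, not from this thread): every environment's tnsnames.ora defines the same alias, each pointing at its own server, so the reports only ever reference the alias.

    # tnsnames.ora on the DEV server; TEST/UAT and PROD define the same
    # alias with their own HOST, so nothing in the report has to change
    APP_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dev-db.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = ORCL))
      )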

  • Best practice - caching objects

    What is the best practice when many transactions require a persistent object that does not change?
    For example, in an ASP model supporting many organizations, the organization is required by many persistent objects in the model. I would rather look the organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed, the organization can no longer be part of new transactions with other persistence managers. Aside from looking it up for every transaction, is there a better solution?
    Thanks in advance
    Gary

    The problem with using object id fields instead of PC object references in your object model is that it makes your object model less useful and intuitive. Taken to the extreme (replacing all object references with their IDs), you will end up with objects like rows in a JDBC result set. Plus, if you use a PM per HTTP request it will not do you any good, since the organization data won't be in the PM anyway, so it might even be slower (no optimizations such as Kodo batch loads). So we do not do it.
    What you can do:
    1. Do nothing special; just use the JVM-level or distributed cache provided by Kodo. You will not need to access the database to get your organization data, but the object creation cost in each PM is still there (do not forget that the cache we are talking about is a state cache, not a PC object cache). Good because it is transparent.
    2. Designate a single application-wide PM for all your read-only big things - lookup screens etc. Use a PM per request for the rest. Not transparent - affects your application design.
    3. If a large portion of your system is read-only, use PM pooling. We did this pretty successfully; it is the approach we use. The requirement is to be able to recognize all PCs which are updateable and evict/makeTransient those when the PM is returned to the pool (Kodo has a nice extension in PersistenceManagerImpl for removing all managed objects of a certain class) so you do not have stale data in your PM. You can use Apache Commons Pool to do the pooling, and make sure your PM pool is able to shrink. It is transparent and increases performance considerably (see the sketch after this message).
    "Gary" <[email protected]> wrote in message
    news:[email protected]...
    >
    What is the best practice when many transactions requires a persistent
    object that does not change?
    For example, in a ASP model supporting many organizations, organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary
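
    A minimal sketch of option 3 under stated assumptions: it uses the standard javax.jdo interfaces and Apache Commons Pool 2 (this thread predates Pool 2, so the pooling API shown is an assumption), and evictAll() stands in for the more selective per-class eviction described above.

    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import org.apache.commons.pool2.BasePooledObjectFactory;
    import org.apache.commons.pool2.PooledObject;
    import org.apache.commons.pool2.impl.DefaultPooledObject;
    import org.apache.commons.pool2.impl.GenericObjectPool;

    public class PmPoolFactory extends BasePooledObjectFactory<PersistenceManager> {
        private final PersistenceManagerFactory pmf;

        public PmPoolFactory(PersistenceManagerFactory pmf) { this.pmf = pmf; }

        @Override
        public PersistenceManager create() { return pmf.getPersistenceManager(); }

        @Override
        public PooledObject<PersistenceManager> wrap(PersistenceManager pm) {
            return new DefaultPooledObject<>(pm);
        }

        @Override
        public void passivateObject(PooledObject<PersistenceManager> p) {
            // evict cached state before the PM goes back to the pool, so the
            // next borrower does not see stale data (coarser than the
            // per-class eviction described above)
            p.getObject().evictAll();
        }

        public static GenericObjectPool<PersistenceManager> newPool(PersistenceManagerFactory pmf) {
            return new GenericObjectPool<>(new PmPoolFactory(pmf));
        }
    }

    Borrow with pool.borrowObject() and return with pool.returnObject(pm); GenericObjectPool's idle-eviction settings let the pool shrink when load drops.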

  • Best Practices?

    I have an application which has several EJBs that make JDBC connections.
    What is considered best practice for passing parameters like class, url,
    username and password to these EJBs? The basic structure is a servlet which
    uses one class that makes all the calls to the EJBs. I've come up with a
    couple ideas for passing these JDBC parameters around. Have I missed any
    thing and what are the pros/cons of each?
    1. Read <init-param> from WEB-INF/web.xml for the servlet. Pass these to
    the helper/wrapper class, which then passes them to all of the EJBs.
    2. Specify all the parameters in META-INF/ejb-jar.xml for each EJB.
    3. Somehow specify these parameters on a global or application level on the
    application server and have all EJBs use these.
    #1 seems like it should work, but also seems awkward, as business logic is
    now on the web side. Also, database connection data would be passed across
    the firewall between the web and app server. Is this a security risk?
    #2 will work, but changes to our JDBC pool would require changing the
    META-INF/ejb-jar.xml for each EJB.
    #3 sounds the best to me, as no database information is passed through
    between the app and web server and we would only have to change the
    parameters in one location to get all EJBs to connect to the new JDBC
    pool. How can this be implemented? (See the sketch after this message.)
    On a related note, what's the difference between a <context-param> and
    an <env-entry> in WEB-INF/web.xml? We are using <context-param> to read a
    URL for our JSPs for image locations, while we are using <env-entry> to read
    in the URL to our application server.
    Thanks,
    Eric

    I believe you already posted this question in a different newsgroup. Please
    only post in one newsgroup.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    << Tangosol Server: How Weblogic applications are customized >>
    << Download now from http://www.tangosol.com/download.jsp >>
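
    Regarding how #3 could be implemented: application servers typically expose the connection pool as a JNDI DataSource, which each EJB looks up by name. A minimal sketch, where "jdbc/AppPool" is a placeholder for whatever name the pool is registered under:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class AppDataSource {
        // "jdbc/AppPool" is a placeholder JNDI name
        public static Connection open() throws NamingException, SQLException {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/AppPool");
            return ds.getConnection();
        }
    }

    With this, no driver class, URL, or password appears in web.xml or ejb-jar.xml; repointing every EJB at a new pool is a single change in the server configuration.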
    "Eric Fenderbosch" <[email protected]> wrote in message
    news:3b86afae$[email protected]..
    I have an application which has several EJBs that make JDBC connections.
    What is considered best practice for passing parameters like class, url,
    username and password to these EJBs? The basic structure is a servletwhich
    uses one class that makes all the calls to the EJBs. I've come up with a
    couple ideas for passing these JDBC parameters around. Have I missed any
    thing and what are the pros/cons of each?
    1. Read <init-param> from WEB-INF/web.xml for the servlet. Pass these to
    the helper/wrapper class, which then passes them to all of the EJBs.
    2. Specify all the parameters in META-INF/ejb-jar.xml for each EJB.
    3. Somehow specify these parameters on a global or application level onthe
    application server and have all EJBs use these.
    #1 seems like it should work, but also seems awkward, as business logic is
    now on the web side. Also, database connection data would be passedacross
    the firewall between the web and app server. Is this a security risk?
    #2 will work, but changes to our JDBC pool would require changing the
    META-INF/ejb-jar.xml for each EJB.
    #3 sounds the best to me, as no database information is passed though
    between the app and web server and we would only have to change the
    parameters in one location to get all EJBs to now connect to the new JDBC
    pool. How can this be implemented?
    On a related note, what's the difference between a <contex-param> and
    <env-entry> in the WEB-INF/web.xml? We are using <contex-param> to read a
    URL for our JSPs for image locations, while we are using <env-entry> toread
    in the URL to our application server.
    Thanks,
    Eric

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what the best practice is for dealing with data retrieved via JDBC as RecordSets without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications, since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    i.e. there is a many-to-many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables (see the sketch below).
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
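
    A minimal sketch of that readPermissions method, with assumed table and column names (users, user_roles, role_permissions, permissions):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class UserDAO {
        private final Connection con;
        public UserDAO(Connection con) { this.con = con; }

        public List<String> readPermissions(int userId) throws SQLException {
            // joins across the two link tables to collect the user's permissions
            String sql =
                "SELECT DISTINCT p.name FROM permissions p " +
                "JOIN role_permissions rp ON rp.permission_id = p.id " +
                "JOIN user_roles ur ON ur.role_id = rp.role_id " +
                "WHERE ur.user_id = ?";
            List<String> permissions = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        permissions.add(rs.getString(1));
                    }
                }
            }
            return permissions;
        }
    }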

  • Data access best practice

    The Oracle web site has an article discussing 9iAS best practices. Predefining column types in the select statement is one of the topics. The details follow.
    3.5.5 Defining Column Types
    Defining column types provides the following benefits:
    (1) Saves a roundtrip to the database server.
    (2) Defines the datatype for every column of the expected result set.
    (3) For VARCHAR, VARCHAR2, CHAR and CHAR2, specifies their maximum length.
    The following example illustrates the use of this feature. It assumes you have
    imported the oracle.jdbc.* and java.sql.* interfaces and classes.
    //ds is a DataSource object
    Connection conn = ds.getConnection();
    PreparedStatement pstmt = conn.prepareStatement("select empno, ename, hiredate from emp");
    //Avoid a roundtrip to the database and describe the columns
    ((OraclePreparedStatement) pstmt).defineColumnType(1, Types.INTEGER);
    //Column #2 is a VARCHAR, we need to specify its max length
    ((OraclePreparedStatement) pstmt).defineColumnType(2, Types.VARCHAR, 12);
    ((OraclePreparedStatement) pstmt).defineColumnType(3, Types.DATE);
    ResultSet rset = pstmt.executeQuery();
    while (rset.next()) {
        System.out.println(rset.getInt(1) + "," + rset.getString(2) + "," + rset.getDate(3));
    }
    pstmt.close();
    Since I'm new to 9iAS, I'm not sure whether it's true that 9iAS really does an extra roundtrip to the database just for the data types of the columns and then another roundtrip to get the data. Can anyone confirm this? Besides, the above example uses Oracle proprietary APIs.
    Is there any way to trace the db activity on the application server side without using an enterprise monitoring tool? WebLogic can dump all db activity to a log file so that it can be reviewed.
    thanks!

    Dear Srini,
    Data-level security is not an issue for me at all. I have already implemented it, and so far not a single bug has been caught in testing.
    It's about object-level security, and that for 6 different types of users demanding different reports, i.e. the columns and detailed drill-downs are different.
    Again, these 6 types of users can be read-only users or power users (who can do ad hoc analysis), maybe BICONSUMER and BIAUTHOR.
    So I need help regarding that, as we have to make a decision soon.
    thanks,
    Yogen

  • Best Practice Internet Security with ADO / OraMTS / OraOLEDB and 9i?

    Hi people,
    I have the following scenario to support and I URGENTLY need some information regarding the security model vs performance envelope of these platforms.
    We currently are developing a web-application using IE 5.0^ as our browser, IIS 5.0 as our server, ASP (JScript) as our component glue, custom C++ COM+ middle tier components using ADO / Oracle OLE DB to talk to a Solaris based Oracle 9i instance.
    Now it comes to light from the application requirements that the system should, if at all possible, be supporting Virtual Private Databases for subscribers [plus we need to ease backend data service development and row-level security combined with fine grained audit seems the way to go].
    How does one use Oracle's superior row-level security model in this situation?
    How does one get the MS middle tier to authenticate with the database given that our COM+ ADO components are all required to go through ONE connection string? [Grrrr]
    Can we somehow give proxy rights to this identity so that it can "become" and authenticate with an OID/LDAP as an "Enterprise User"? If so, how?
    I have seen a few examples of JDBC and OCI middle-tier authentication but how does one achieve the same result as efficiently as possible from the MS platform?
    It almost appears, due to connection pooling that each call to the database on each open connection could potentially be requiring a different application context - how does one achieve this efficiently?
    If this is not the way to go - how could it work?
    What performance tradeoffs do we have using this architecture? (And potentially how will we migrate to .Net on the middle tier?)
    As you can see, my questions are both architectural and technical. So, are there any case studies, white papers or best practice monographs on this subject that are available to either Technet members or Oracle Partners?
    Alternatively, anyone else come up against this issue before?
    Thanks for your attention,
    Lachlan Pitts
    Developer DBA (Oracle)
    SoftWorks Australia Pty Ltd

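
    One technique the JDBC/OCI middle-tier examples mentioned above typically rely on is proxy authentication: the middle tier authenticates once as a pool account, then opens a lightweight proxy session per call so that row-level security sees the real user. A minimal JDBC sketch under assumed details (the host, account names, and the required ALTER USER ... GRANT CONNECT THROUGH grant are placeholders):

    import java.sql.DriverManager;
    import java.util.Properties;
    import oracle.jdbc.OracleConnection;

    public class ProxyAuthSketch {
        public static void main(String[] args) throws Exception {
            // the middle tier connects once as the shared application account
            OracleConnection conn = (OracleConnection) DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "app_pool_user", "secret");
            // then switches identity per call without a new physical connection
            Properties props = new Properties();
            props.put(OracleConnection.PROXY_USER_NAME, "end_user");
            conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, props);
            // ... queries here run as end_user, so VPD policies apply ...
            conn.close(OracleConnection.PROXY_SESSION); // ends only the proxy session
        }
    }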

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on a 2-node Sun Cluster where the Oracle database is running.
    When I'm performing a DB backup, the backup job should not fail if node1 fails. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify cluster IP in OSB, so that OSB always looks for Cluster IP only instead of physical IPs of each node.
    Explanation:
    Whether it is a 2-node or a 4-node cluster, when the cluster software is installed on these nodes we have to configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    We have to specify this cluster IP whether it is an RMAN backup or an application JDBC connection; failing over to another node is the job of the cluster IP. So wherever we install a cluster configuration, we have to specify the CLUSTER IP in all the failover places.
    Hope it helps..
    Thanks
    LaserSoft

  • How to establish the connection - Best Practice

    Following is my code for database connection
    import java.sql.Connection;
    import java.sql.DriverManager;
    import oracle.jdbc.driver.OracleDriver;
    public class DBConnect {
        private static Connection connection = null;
        static {
            try {
                DriverManager.registerDriver(new OracleDriver());
                String url = "jdbc:oracle:thin:@dbserver:1521:ORCL";
                connection = DriverManager.getConnection(url, "user", "password");
            } catch (Exception e) {
                e.printStackTrace(); // at minimum, do not swallow the exception
            }
        }
        private DBConnect() {
        }
        public static synchronized Connection getConnection() {
            return connection;
        }
    }
    Tell me the best practice to establish the connection.

    First, handle your exceptions properly.
    Second, you should not normally create static database connections.
    Third, hardcoding connection data like that is not a good idea.
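
    A sketch along the lines of all three points, under assumptions: exceptions propagate to the caller, connections are opened per use rather than held in a static field, and credentials come from an external db.properties file (the url/user/password keys are an assumed layout). Modern JDBC 4 drivers register themselves, so no registerDriver call is needed.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Properties;

    public class Db {
        private final Properties props = new Properties();

        public Db(String propertiesPath) throws IOException {
            try (FileInputStream in = new FileInputStream(propertiesPath)) {
                props.load(in); // expects url, user, password entries
            }
        }

        public Connection open() throws SQLException { // caller decides how to handle failure
            return DriverManager.getConnection(
                    props.getProperty("url"),
                    props.getProperty("user"),
                    props.getProperty("password"));
        }
    }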

  • Best practice in database

    Dan,
    I would appreciate help with the following query:
    What is the best practice in development and deployment for the use of a database:
    1. Creating an external SQL data source? If so, please indicate whether the DataSource is created in the WebLogic Server.
    2. Creating a remote JDBC connection?
    Thanks and Best Regards,

    Hi,
    Others will have different ideas that are probably more useful, but I personally like "green field" opportunities like you're describing.
    One thing you have to figure out is what technology you want to develop and maintain your components in. Once built, they can be exposed as web services, Java POJOs, EJBs, .NET assemblies and databases which Oracle BPM can consume. Pick a technology that your team is most comfortable with.
    A best practice preference would be to use a Service Bus as the intermediary layer between Oracle BPM and the components consumed if you own one. If you don't, Oracle BPM will need to consume the components directly.
    I'd use Oracle BPM for what it was intended for. Sometimes I see the architecture "flipped" where the customer wants a third party UI to drive instances through the process via the API. While this will work, it's a lot of extra work to rebuild what Oracle BPM does a good job of OOTB.
    Dan

  • Eclipse / Workshop dev/production best practice environment question.

    I'm trying to set up an ODSI development and production environment. After a bit of trial and error, and support from the group here (OK, Mike, thanks again), I've been able to connect to Web Service and relational database sources and such. My Windows 2003 server has 2 GB of RAM. With the Admin domain, Managed Server, and Eclipse running I'm in the 2.4 GB range. I'd love to move the Eclipse bit off of the server, develop dataspaces there, and publish them to the remote server. When I add the Remote Server in Eclipse and try to add a new data service, I get a "Dataspace projects cannot be deployed to a remote domain" error message.
    So, is the best practice to run everything locally (admin server, Eclipse/Workshop), get everything working, and then configure the same JDBC (or whatever) connections on the production server and deploy the locally created dataspace to the production box using the Eclipse that's installed on the server? I've read some posts/articles about a scripting capability that can perhaps do the configuration and deployment, but I'm really in baby-steps mode and probably need the UI for now.
    Thanks in advance for the advice.

    you'll want 4GB.
    - mike

  • JMS Best Practice

    We are starting to use JMS more and more heavily for back end sync and replication of changes.
    What I want to know is how should I use connections and sessions? These processes are always running, but they seem to time out.
    Should I be opening new connections each time, or just sessions, or even just the senders or publishers? I haven't done much benchmarking to see if this is bad, but taking the idea from JDBC, we have a JMSBroker that manages the connections.
    I also notice that there is no obvious way to check whether things are open using the API, so I am forced to try an operation, catch the exception, and re-connect if required. This causes loops and weird conditions when there is a serious issue (like the JMS server being down, or a network problem), as the system thinks it's just a timed-out connection.
    Any thought on this, or pointers to JMS Best Practices?
    Steve

    I think best practices would be to instantiate a single Connection, a single Session, and finally a single MessageConsumer. If you are sending messages, do the same but in the end create a single MessageProducer. If you are consuming and producing, make sure to use the same Session for creating both consumer and producer so both receiving and sending are all wrapped in a single transaction. XA can also be used if you want a third party to manage the JMS transaction with say a database transaction.
    You can also register an ExceptionListener on the Connection which may get around the timeout issue.
    Of course, as I write this I realize that some of this assumes an ideal world. If your JMS provider has some odd timeout behavior and you don't have a high volume of messages, just keep the administrative objects (ConnectionFactories, Destinations) around and recreate everything else for each batch of messages sent. Do what works now, leave the optimizations for when they are needed.
    Dwayne
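
    A minimal sketch of the single-Connection/single-Session receiving side Dwayne describes, assuming the ConnectionFactory and Destination come from JNDI (lookup omitted); the ExceptionListener replaces the try-catch-reconnect probing from the question.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    public class Receiver {
        public static MessageConsumer start(ConnectionFactory factory,
                                            Destination queue) throws JMSException {
            Connection connection = factory.createConnection();
            connection.setExceptionListener(e -> {
                // called asynchronously when the provider detects a problem,
                // e.g. the JMS server going down; reconnect logic belongs here
                System.err.println("JMS connection failed: " + e.getMessage());
            });
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            connection.start(); // begin message delivery
            return consumer;
        }
    }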
