What's the better approach?

Hi, I was just making a program that accesses a DB through JDBC, and I got myself into this dilemma:
What's the better approach for making a connection to a DB?
Approach #1 (use of the singleton pattern):
import java.sql.*;

public class DBConnection {
    private ResultSet rs;
    private Connection conn;
    private PreparedStatement ps;
    private static boolean singleton = false;

    private DBConnection() throws Exception {
        Class.forName("driverPath").newInstance();
        conn = DriverManager.getConnection("url", "user", "pass");
        singleton = true;
    }

    public static DBConnection getInstance() throws Exception {
        //Only create a connection if one hasn't already been created; otherwise return null.
        if (singleton)
            return null;
        return new DBConnection();
    }

    protected void finalize() throws Throwable {
        //close the connection and release resources...
        singleton = false;
    }

    //Methods to make DB queries and stuff.
}

Approach #2 (make a connection only when doing queries):
import java.sql.*;
import java.util.ArrayList;

public class DBConnection {
    private ResultSet rs;
    private Connection conn;
    private PreparedStatement ps;

    public DBConnection() throws Exception {
        Class.forName("driverPath").newInstance();
    }

    //Just some random method to access the DB
    public ArrayList<Row> selectAllFromTable() {
        ArrayList<Row> returnValue = new ArrayList<Row>();
        try {
            conn = DriverManager.getConnection("url", "user", "pass");
            //make queries and fill the arraylist with rows from the table
        } catch (Exception ex) {
            returnValue = null;
            ex.printStackTrace();
        } finally {
            try {
                if (ps != null) ps.close();
                if (rs != null) rs.close();
                if (conn != null) conn.close();
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }
        return returnValue;
    }
}

I know these classes probably don't even compile and I don't handle the exceptions properly; I'm just trying to make a point about how to manage the connection.
So, which is the better approach in your opinion? #1? #2? Neither?
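For completeness, here's a rough sketch of a third option: don't keep the Connection in a field at all, but open and close it per call with try-with-resources (Java 7+), so everything gets closed even when a query throws. The URL, credentials, table and column names below are placeholders, not working values.

import java.sql.*;
import java.util.ArrayList;

public class DBQueries {
    private static final String URL = "url";
    private static final String USER = "user";
    private static final String PASS = "pass";

    //Open the connection, statement and result set per call;
    //try-with-resources closes all three even if an exception is thrown.
    public ArrayList<String> selectAllFromTable() throws SQLException {
        ArrayList<String> rows = new ArrayList<String>();
        try (Connection conn = DriverManager.getConnection(URL, USER, PASS);
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM some_table");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                rows.add(rs.getString("name"));
            }
        }
        return rows;
    }
}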

Hi,
I'm resurrecting this thread to ask: is this approach OK?
I'm trying to make a single MySql JDBC connection accessible throughout the model.
I'm planning to use it in a Swing application. Whilst I realise Swing apps are inherently multi-threaded, everything I plan to do can (I think) be done within the constraint that all access to the model happens on the EDT, and the user will just have to wear any unresponsiveness.
package datatable.utils;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

abstract public class MySqlConnection {
     public static final String URL = "jdbc:mysql://localhost/test";
     public static final String USERNAME = "keith";//case sensitive
     private static final String PASSWORD = "chewie00";//case sensitive
     private static final Connection theConnection;

     static {
          String driverClassName = "com.mysql.jdbc.Driver";
          try {
               Class.forName(driverClassName);
               theConnection = DriverManager.getConnection(URL, USERNAME, PASSWORD);
          } catch (Exception e) {
               throw new DAOException("Failed to register JDBC driver class \""+driverClassName+"\"", e);
          }
     }

     public static Connection get() {
          return(theConnection);
     }
}

Is there a better solution, short of c3p0 (which I played with, but couldn't work out how to configure)?
Thanx guys (and uj),
keith.
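
Edit: for anyone who lands here later, programmatic c3p0 configuration looks roughly like the sketch below. Treat it as an assumption-laden example rather than a recipe: the pool sizes are arbitrary and the driver/URL/credentials simply mirror the class above.

import java.sql.Connection;
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class MySqlPool {
    private static final ComboPooledDataSource POOL = new ComboPooledDataSource();

    static {
        try {
            POOL.setDriverClass("com.mysql.jdbc.Driver");
            POOL.setJdbcUrl("jdbc:mysql://localhost/test");
            POOL.setUser("keith");
            POOL.setPassword("chewie00");
            POOL.setMinPoolSize(1);  //arbitrary sizes for a small Swing app
            POOL.setMaxPoolSize(5);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    //Borrow a connection per operation; closing it just returns it to the pool.
    public static Connection get() throws SQLException {
        return POOL.getConnection();
    }
}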

Similar Messages

  • What is the better approach to populate a Ref Cursor into an ADF Business Component

    Hi All,
    I have a requirement where I get the search results by joining more than 10 tables for my search criteria.
    What is the best approach to populating the results into ADF BCs?
    I have to go with a Stored Procedure approach, where I pass in the search criteria and get a cursor back.
    If I populate the VO programmatically, then filtering at the column level is a challenge.
    Can anyone please help me? Thanks in advance.
    Thanks,
    Balaji Mucheli.

    The number of tables to be joined for a VO's query shouldn't matter - if the query is valid and fixed.
    Are you saying that you have logic to decide which tables to join to create the correct query?  I admit that would be a difficult thing to do, other than with a programmatic VO.  However, there are two possible solutions for doing this without a programmatic VO.
    Instead of your procedure returning a REF CURSOR, you can create an object type (CREATE TYPE) and a nested table of that type.  Then you have a function that returns an instance of the nested table.  The query in the VO is SELECT my_column... FROM TABLE(CAST(my_table_function(:bind_variable) AS my_table_type)).  You may have trouble getting the VO to accept this query as valid SQL, but it IS valid and it DOES work.
    Create a VO to be the master VO - it should have all the attributes that you intend to use, but the query can be anything, even a SELECT from DUAL.  Then create other VOs that subclass the master VO - one for each major variation of the query.  When you use the VO, use the data control for the master VO to create your pages, but change the pageDef to make the VO name a variable (with expression language).  Then you can set that variable to the correct child VO for the query that needs to execute.

  • Simulating a multiple-line text entry - what is the better approach?

    I'm brand new to Captivate but hail from a past Authorware background. I'm creating an exercise where the user must add 5 individual terms to a list.
    I'd like to accomplish this with one text box, but it looks like I am going to have to use 5 separate slides and swap things such that on entry 2 I show entry 1, on entry 3 I show entries 1 and 2, etc.
    Also, I need to switch the text in the entry box. I'm going to research the manual on this, as I am sure I am missing something.
    Thanks

    You will need 5 Text Entry Boxes, but not necessarily on 5 slides. It is possible to time each TEB on the timeline. Let the user give the first item in a first TEB at the start of the timeline. The standard action when confirming the entry (with a button or a shortcut like Enter) is 'Continue', so the slide will continue and get to the second TEB. The user gives the second entry; after confirming, the slide continues and you get to the third TEB.
    I hope this is not too confusing. Perhaps this screenshot will explain more:

  • What are the different approaches to do Fault Handling?

    What are the different approaches to do Fault Handling?

    For uploading data to a non-SAP system we have two methods:
    1) If you know the BAPI, you can use LSMW.
    2) BDC.
    But you mentioned there are so many records. The best approach is to upload all the records to AL11 using an XI interface, then write a BDC/LSMW program.
    It's better to go for LSMW; before that, try to find a BAPI.
    If you are unable to find a BAPI, you have to create one and use it in LSMW.
    After that you have to schedule the LSMW program in the background, then create a job for it and release it from SM37, then monitor it through BD87.
    If you want to go through it, I will help you.
    If it is useful to you, please give points.
    Saimedha

  • What are the better load/performance testing tools available for Flex Application with BlazeDS RO?

    My application is designed with Flex 3, ActionScript 3, and BlazeDS Remote Objects.
    I tried OPENSTA, but I can't do dynamic parameterization in its generated scripts because the responses of the calls are binary values, and we also can't get the response using the SCL language.
    While testing with OPENSTA and HttpService, I could do the dynamic parameterization and got the response.
    Can you give me information about the questions below?
    Can we do dynamic parameterization with OPENSTA for Flex Remote Objects?
    And what are the better load/performance tools available for Flex Remote Objects?

    Your approach is fine, depending on how many and what type of CFCs you are talking about. If they are "singletons" - that is, only one instance of each CFC is needed to be in memory and can be reused/shared from multiple parts of your application - caching them in the application scope is common.  Just make sure they are thread safe ("var" or local.* all your method variables).
    You might consider taking advantage of a dependency injection framework, such as DI/1 (part of the FW/1 MVC framework), ColdSpring, or WireBox (a module of the ColdBox platform that can be used independently).  They have mechanisms for handling and caching singletons.  Then you wouldn't have to go to the application scope to get your CFC instances.
    -Carl V.

  • What is a better approach, batch or prepared statement?

    Hi,
    In my application, the user account property will be retrieved from the DB and the last login time will be updated after a successful sign-on. There are two query statements in this procedure: a select and an update. What is a better approach in JDBC to deal with the two queries: batch or prepared statement?
    Thanks.

    Hi, Duffy,
    Thanks for your input.
    I would like to use PreparedStatement with batch update. From my reading, however, it seems it is not suitable to use PreparedStatement with batch update; at least, I haven't seen a single example in the tutorial or article sections of this site. I am not sure whether the following is valid or not:
    stmt = connection.prepareStatement(INSERT_SMT_QUERY_STR);
    stmt.setLong(1, details.getRegardID());
    stmt.setLong(2, details.getWriterID());
    stmt.setInt(3, details.getVisibility());
    stmt.setString(4, details.getSubject().trim());
    stmt.setString(5, details.getContent().trim());
    stmt.addBatch() <-- what goes into here?
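    For what it's worth, batching with PreparedStatement does work: addBatch() takes no arguments - it queues the parameter values you have just set - and executeBatch() sends all the queued sets at once. A rough sketch under assumptions (the INSERT statement, table and column names, and the MessageDetails type with its getters are made up to mirror the snippet above):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class MessageDao {
        //Hypothetical insert statement, for illustration only.
        private static final String INSERT_SMT_QUERY_STR =
            "INSERT INTO messages (regard_id, writer_id, visibility, subject, content) VALUES (?, ?, ?, ?, ?)";

        public void insertAll(Connection connection, List<MessageDetails> batch) throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(INSERT_SMT_QUERY_STR);
            try {
                for (MessageDetails details : batch) {
                    stmt.setLong(1, details.getRegardID());
                    stmt.setLong(2, details.getWriterID());
                    stmt.setInt(3, details.getVisibility());
                    stmt.setString(4, details.getSubject().trim());
                    stmt.setString(5, details.getContent().trim());
                    stmt.addBatch();      //queue this parameter set; nothing goes "into" the call
                }
                stmt.executeBatch();      //send all the queued inserts in one go
            } finally {
                stmt.close();
            }
        }
    }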

  • What's the best approach to work with Excel, csv files

    Hi gurus. I have a question for you. In your experience, what's the best approach to working with Excel or csv files that have to be uploaded through DataServices to your datawarehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so that a set of reports can be generated from the datawarehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front-end to upload the Excel files directly to Data Services. The end user needs to keep track of which person uploaded the files for a given month.
    1. The end user should place their 4 Excel files in a shared directory that will be seen by DataServices.
    2. DataServices will execute a certain scheduled job that will read the four files and upload them to the datawarehouse at a determined time, let's say at 9:00pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00pm? Is it possible for the end user to execute some kind of action (outside the DataServices environment) so DataServices "could know" that it has to process those files right now, instead of waiting for the night schedule?
    Is there a way for DS to track who uploaded those files?
    Would it be better to build a front-end for the end user so they can upload their four files directly to the datawarehouse?
    Waiting for your comments to resolve this dilemma.
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture input files automatically. You could use the file_exists() or wait_for_file() option to do that. Schedule the job to run every few minutes and, if the file exists, run the load. This could be done by using a certain file name with a date and timestamp etc., or, after running, move the old files to an archive and have DS wait for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun

  • What is the better way to change the name of a site with its paths and links?

    Hi!
    I'm using Dreamweaver CS4 and I have a site already configured in Dreamweaver.
    I need to change the name of the site and, at the same time, all the path names and all the links.
    The site has about 50 pages, so I would like to know the best steps for changing the name of the site with the fewest problems.
    I read the information about 'Manage Sites' but it's not clear to me regarding links and path names!
    Thank you

    Just use site manager to change the site name. Nothing complicated with that.
    What exactly do you mean by changing path names? Do you want to change the name of folders in the site? If so, just change them within DW and it will prompt if you want to update links.

  • What's the best approach for handling about 1300 connections in Oracle.

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we then have to maintain all data in another schema. That means we need to update two schemas for a given session - one schema for the user and another schema for all data - and there may be update problems.)
    OR
    2. Using a single schema for all users.
    Note: All users may access the same tables, and there may be many more records than in the previous case.
    Which is the best case?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • What's the better order to install OEM + AS + DB

    Hi everybody.
    What's the better order to install on RHEL4 Update2:
    - Enterprise Manager Control Grid (10gR2)
    - Database (10gR2)
    - Application Server (10gR3)
    And what about $ORACLE_HOME? How should it be set up? Does each installation have a separate $ORACLE_HOME?
    Tks!

    Rafael,
    Save the Grid Control for the last installation. The order of Database and Application Server doesn't matter; however, I would personally install the database first.
    Please read all the Installation Guides before starting any installations, and follow all prerequisites stated for RHEL4.
    The Installation Guides state that all products must be installed in separate ORACLE_HOMEs. As a matter of fact, the Oracle Installer doesn't allow installing any of these products into an existing ORACLE_HOME.
    Regards,
    Martin

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate re-start options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any builtin features/best approaches in OWB to implement the above?
    Does runtime audit tables help us to build re-start process?
    If not, do we need to maintain our own tables (custom) to maintain such data?
    How did our forum members handled above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow monitor: open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
    On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean. All mappings run in Set based mode, so in case of error all table updates are rolled back; there are several exceptions, though - for example, multiple target tables in a mapping without correlated commit, or an error in post-mapping - so you must carefully analyze the results of the error.
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce the re-cycling of failed rows to zero (0). These settings guarantee only two possible results for a mapping: SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for large volumes Set based mode is the preferred processing mode. With this mode you get the full range of enterprise features of the Oracle database: parallel query, parallel DML, nologging, etc.
    Oleg

  • How to map journal fields and what is the better process type

    /Journal/JournalSuspenseCostCentre     NULL
    /Journal/JournalBalancingCentre     Lookup from Organisation ID
    /Journal/JournalMultiCompany     'N'
    /Journal/JournalBatchNumber     NULL
    /Journal/JournalNumTransactions     Total number of /Journal/JournalLine transactions
    /Journal/JournalBaseDRTotal     Sum of /Journal/JournalLine/JournalLineBaseValue - Debit Values only
    /Journal/JournalBaseCRTotal     Sum of /Journal/JournalLine/JournalLineBaseValue - Credit Values only
    How do I map the journal fields, and which is the better process type, IDocs or proxies? Please let me know.
    Journal Line
    Multiple journal lines per header:
    Schema Element     Data
    /Journal/JournalLine/JournalLineCostCentre     Bank account control Cost Centre
    /Journal/JournalLine/JournalLineAccount     Bank account control Account Code
    /Journal/JournalLine/JournalLineMoneyTotal     Transaction Line Amount
    /Journal/JournalLine/JournalLineVolume     NULL
    /Journal/JournalLine/JournalLineDescription     Payee Name
    /Journal/JournalLine/JournalLineChequeBookReference     NULL
    /Journal/JournalLine/JournalLineMatchField     Cheque Number
    NB Contra accounting entries should be posted to:
    Schema Element     Data
    /Journal/JournalLine/JournalLineCostCentre     Bank account control Cost Centre
    /Journal/JournalLine/JournalLineAccount     Bank account control Account Code
    /Journal/JournalLine/JournalLineMoneyTotal     Transaction Line Amount * -1
    /Journal/JournalLine/JournalLineDescription     Payee Name
    /Journal/JournalLine/JournalLineMatchField     Cheque Number
    /Journal/JournalPeriod     Current General Ledger Period
    /Journal/JournalYear     Current General Ledger Year

    It looks like you are new to PI.
    You have to develop the scenario end to end: create the source and target data types (not required if you have XSDs), then use graphical mapping (message mapping) to map the source and target structures.
    Search SDN for an end-to-end scenario and you will understand it easily.
    Regards,
    Raj

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of  about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using.  Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012.  Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • Newbie: What is the best approach to integrate BO Enterprise into web app

    Hi
    1. I am very new to Business Objects and .NET. I need to know what's the best approach when integrating BO into my web app, i.e. which SDK do I use?
    For now I want to provide very basic viewing functionality for the following reports:
    -> Crystal Reports
    -> Web Intellegence Reports
    -> PDF Reports
    2. Where do I find a standalone install for the Business Objects Enterprise XI .NET providers?
    I only managed to find the wssdk, but I can't find the others. Business Objects Enterprise XI does not want to install on my machine (development) - it installed fine on the server - so I was hoping I could find a standalone install.

    To answer question one: you can use the Enterprise .NET SDK for each, though for viewing Webi documents it is much easier to use the opendocument method of URL reporting.
    The Crystal Reports and PDF instances can be viewed easily using the SDK.
    Here is a link to the Developer Library:
    [http://devlibrary.businessobjects.com/]
    VB.NET XI Samples:
    [http://support.businessobjects.com/communityCS/FilesAndUpdates/bexi_vbnet_samples.zip.asp]
    C# XI Samples:
    [http://support.businessobjects.com/communityCS/FilesAndUpdates/bexi_csharp_samples.zip.asp]
    Other samples:
    [https://boc.sdn.sap.com/codesamples]
    I answered the provider question on your other thread.
    Good luck!
    Jason

  • What is the best approach to process data on a row by row basis?

    Hi Gurus,
    I need to code a stored proc to process sales_orders into Invoices. I think I must do a row-by-row operation, but if possible I don't want to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check the credit limit
        if over credit limit -> insert row into log_table; process next order
        check for overdue
        if there is an overdue invoice -> insert row into log_table; process next order
        check all order_items for stock availability
        if there is an item without enough stock -> insert row into log_table; process next order
        if all checks above are passed:
            create Invoice (header + details)
    end_for
    What is the best approach to process data on a row-by-row basis like the above?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there. You'll be sending many more SQL statements to the database than needed. The advice is to use SQL, and if that's not possible or too complex, use PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
    Regards,
    Rob.
