What is the best practice to get database connection?

What are the best practices to follow for database connections?

A driver can be loaded explicitly with the Class.forName method. For example, the following statement loads Sun's JDBC-ODBC bridge driver:
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
Then the getConnection method of the DriverManager class establishes a connection to the data source:
Connection con = DriverManager.getConnection(url);
This statement connects to the data source specified by the url object. If the connection succeeds, it returns a Connection object, con. All later operations on this data source go through con.
Executing query statements. This article covers queries based on a Statement object. Executing an SQL query first requires creating a Statement object. The following statement creates a Statement object named guo:
Statement guo = con.createStatement();
On a Statement object, the executeQuery method executes a query. Its parameter is a String object containing an SQL SELECT statement, and its return value is a ResultSet object:
ResultSet result = guo.executeQuery("SELECT * FROM A");
This statement returns all rows of table A in result.
The ResultSet object must be processed before the query results can be shown to the user. A ResultSet holds the table returned by the query, which contains all of the results. A ResultSet must be processed row by row, but the columns within each row can be read in any order. The getXXX methods of ResultSet convert the SQL data types in the result set to Java data types.
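Putting those steps together, here is a minimal end-to-end sketch using try-with-resources (the JDBC URL and table name are placeholders; with JDBC 4.0 and later drivers, the explicit Class.forName call is usually unnecessary):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcQueryDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:odbc:myDataSource"; // placeholder URL for your driver
        // try-with-resources closes the connection, statement and result
        // set automatically, even if an exception is thrown.
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM A")) {
            while (rs.next()) {
                // getXXX methods convert SQL types to Java types; here the
                // first column of each row is read as a String.
                System.out.println(rs.getString(1));
            }
        }
    }
}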

Similar Messages

  • What is the best practice to get thumbnail of photo which I get from cameraUI

    Hi,
I'm building a photo sharing application on Android and I want to upload a smaller version of the photo just after I receive it from the cameraUI.
    What is the best practice to resize the photo and upload it? At the moment I don't resize it, and the file size is big (1 MB) on the HTC DESIRE HD.
    Is it possible to resize the image and keep the EXIF data ?
    Thanks,
    Nimrod.

    Yep,
    Media Manager is the way to go.
Read the manual about it.
    It is all explained in there.
    Rienk
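    If you end up resizing in code rather than with a tool, here is a rough sketch of the idea in Android Java (note this is not the AIR cameraUI API the question uses, and the class name, quality setting, and choice of EXIF tags are illustrative assumptions):
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.media.ExifInterface;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class PhotoShrinker {
        // Decode the captured file at half resolution, re-encode it as a
        // smaller JPEG, then copy selected EXIF tags onto the new file,
        // since re-encoding discards the original EXIF block.
        public static void shrink(String srcPath, String dstPath) throws IOException {
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inSampleSize = 2; // decode at 1/2 width and 1/2 height
            Bitmap small = BitmapFactory.decodeFile(srcPath, opts);
            FileOutputStream out = new FileOutputStream(dstPath);
            small.compress(Bitmap.CompressFormat.JPEG, 80, out);
            out.close();
            ExifInterface src = new ExifInterface(srcPath);
            ExifInterface dst = new ExifInterface(dstPath);
            String orientation = src.getAttribute(ExifInterface.TAG_ORIENTATION);
            if (orientation != null) {
                dst.setAttribute(ExifInterface.TAG_ORIENTATION, orientation);
            }
            dst.saveAttributes();
        }
    }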

  • What's the best practice to get App Module on Jdev 10.1.3 using Struts/ADF?

    Hi,
    I read Mr. Muench's post stating that the best way to execute an App. Module method is to make the method part of the client interface and then drag and drop it on a form as a button ...
In another post Muench says that another way to get the App Module is to use the getDataProvider() method; I tried it and found that this worked:
// standard event handler interface
    // on a PageController class
    public class MyFormPageController extends PageController {
        public void onUpdate(PageLifecycleContext ctx) {
            AMServImpl am =
                (AMServImpl) ctx.getBindingContext().getDefaultDataControl().getDataProvider();
            am.myMethod( ... );
            am.getTransaction().commit();
        }
    }
I really like the second option because it is closer to what I used to do in 10.1.2 with an event handler receiving a DataActionContext parameter. Besides, it is not clear to me how, in the former drag-and-drop method, the form inputs are assigned to the method parameters ....
    I would like to know however what is the best way and why ?
    Any comments ?
    -OM
    Message was edited by:
    omar71

    According to JSR 227, which ADFm is implementing, the view/controller portions of your application aren't really supposed to be touching the business services, or even the data controls, at all. They're supposed to do all model manipulation entirely through the databindings.
    This is just another bit of "code separation"--much like MVC code separation--that should make the application a bit easier to maintain. Your application module could change dramatically--maybe even be replaced by an EJB session bean--and all you have to do is change the .dcx file and your page model files, rather than searching through your code. Maintainability and readability is the advantage here; I don't think there could possibly be any slowdown by calling ctx.getBindingContext().getDefaultDataControl().getDataProvider(), since that's what the bindings would do anyway.
    Well, of course there's the "ADF supports going this way declaratively", which is not to be sneezed at as an advantage.
    As to how to put the form values in to the method binding: Look up the <af:setActionListener> tag (e.g., using full-text search in the help). You can nest that into your commandButton or commandLink tags and use it to set method parameters.
    Best,
    Avrom

  • What is the best practice on mailbox database size in exchange 2013

    Hi, 
does anybody have any links to good sites that give some pros/cons when it comes to mailbox database sizes in Exchange 2013? I've tried to google it, but haven't found any good answers. I would like to know whether or not I really need more than 5 mailbox databases in my Exchange environment.

    Hi
   As far as I know, 2 TB is the recommended maximum database size for Exchange 2013 databases.
       If you have any feedback on our support, please click
    here
    Terence Yu
    TechNet Community Support

  • What is the best practice for PXI controller, connect to the company network and install antivirus? Special Subnet?

    I need your suggestions and common practices. 

    Hello TomMex,
    Thanks for posting. If what you are looking for are suggestions for how to use your PXI controller in regards to some of the issues you mentioned, then here are my suggestions. For networking purposes, you can consider your PXI controller the same as any other computer; you should be able to connect it to your network just fine and it will be able to see other computers and devices that are on the same subnet. Antivirus software in general should be fine for your system until you want to install new NI software, at which point you may want to disable it to avoid issues during installation. Does this answer your question? Let me know, thanks!
    Regards,
    Joe S.

  • Database Log File becomes very big, What's the best practice to handle it?

The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server, so I would appreciate advice on the best practice to handle this issue.
    Should I shrink the database?
    I know a bigger hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If this is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get transactional file back in normal shape:
1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command shrinks the file to 10 GB (a recommended size for high-transaction systems).
    >
    Finke Xie wrote:
    > Should I Shrink the Database? .
    "NEVER SHRINK DATA FILES", shrink only log file
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush
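    If you would rather script these steps than run them by hand, a minimal JDBC sketch might look like this (the connection string, database name, logical log file name, and backup path are all placeholders you must replace):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LogMaintenance {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection string; substitute your own server and credentials.
            String url = "jdbc:sqlserver://localhost;databaseName=MyDatabase;integratedSecurity=true";
            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement()) {
                // 1) Back up the transaction log first, so the inactive part
                //    of the log can be truncated.
                stmt.execute("BACKUP LOG MyDatabase TO DISK = 'D:\\backup\\MyDatabase.trn'");
                // 2) Then shrink only the log file (never the data files),
                //    here to 10240 MB = 10 GB as suggested above.
                stmt.execute("DBCC SHRINKFILE('MyDatabase_log', 10240)");
            }
        }
    }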

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on 2 Node Sun Cluster where the oracle database is running.
When I'm performing a DB backup, my DB backup job should not fail if node1 fails. What is the best practice to achieve this?

    Hi,
Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify the cluster IP in OSB, so that OSB always looks for the cluster IP instead of the physical IPs of each node.
    Explanation :
Whether it is a 2-node or 4-node setup, when the cluster software is installed on these nodes we have to configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP is what we have to specify, whether for an RMAN backup or an application JDBC connection. Failing over to another node is the job of the cluster IP, so wherever we have a cluster configuration, specify the cluster IP in all the failover places.
    Hope it helps..
    Thanks
    LaserSoft

  • What is the best methodology to handle database schema changes after an application has been deployed?

    Hi,
    VS2013, SQL Server 2012 Express LocalDB, EF 6.0, VB, desktop application with an end user database
    What is a reliable method to follow when there is a schema change for an end user database used by a deployed application?  In other words, each end user has their own private data, but the database needs to be expanded for additional features, etc. 
    I list here the steps it seems I must consider.  If I've missed any, please also inform:
    (1) From the first time the application is installed, it should have already moved all downloaded database files to a separate known location, most likely some sub-folder in <user>\App Data.
    (2) When there's a schema change, the new database file(s) must also be moved into the location in item (1) above.
    (3) The application must check to see if the new database file(s) have been loaded, and if not, transfer the data from the old database file(s) to the new database file(s).
    (4) Then the application can operate using the new schema.
    This may seem basic, but for those of us who haven't done it, it seems pretty complicated.  Item (3) seems to be the operative issue for database schema changes.  Existing user data needs to be preserved, but using the new schema.  I'd like
    to understand the various ways it can be done, if there are specific tools created to handle this process, and which method is considered best practice.
(1) Should we handle the transfer with a 'one-time use' application method, i.e., do it in application code?
    (2) Should we handle the transfer using some type of 'one-time use' SQL script? If this is the best way, can you provide some guidance on the alternatives for how to perform this in SQL, and where to learn/see examples?
    (3) Some other method?
    Thanks.
    Best Regards,
    Alan

    Hi Uri,
    Thank you kindly for your response.  Also thanks to Kalman Toth for showing the right forum for such questions.
    To clarify the scenario, I did not mean to imply the end user 'owns' the schema.  I was trying to communicate that in my scenario, an end user will have loaded their own private data into the database file originally delivered with the application. 
    If the schema needs to be updated for new application features, the end user's data will of course need to be preserved during the application upgrade if that upgrade includes a database schema change.
    Although I listed step 3 as transferring the data, I should have made more clear I was trying to express my limited understanding of how this process "might work", since at the present time I am not an expert with this.  I suspected my thinking
    is limited and someone would correct me.
    This is basically the reason for my post; I am hoping an expert can point me to what I need to learn about to handle database schema changes when application upgrades are deployed.  For example, if an SQL script needs to be created and deployed
    then I need to learn how to do that.  What's the best practice, or most reliable/efficient way to make sure the end user's database is changed to the new schema after the upgraded application is deployed?  Correct me if I'm wrong on this,
    but updating the end user database will have to be handled totally within the deployment tool or the upgraded application when it first starts up.
    If it makes a difference, I'll be deploying application upgrades initially using Click Once from Visual Studio, and eventually I may also use Windows Installer or Wix.
    Again, thanks for your help.
    Best Regards,
    Alan
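    For what it's worth, one common pattern for step (3) is to have the upgraded application check a schema version number at startup and apply any pending migration scripts in order. Here is a hedged sketch in Java (the thread uses VB, but the pattern is language-neutral; the SchemaVersion table and the migration statements are invented for illustration):
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaMigrator {
        // Hypothetical migration scripts, one per schema version step.
        private static final String[] MIGRATIONS = {
            /* 1 -> 2 */ "ALTER TABLE Customer ADD Email NVARCHAR(200) NULL",
            /* 2 -> 3 */ "CREATE TABLE AuditLog (Id INT IDENTITY PRIMARY KEY, Note NVARCHAR(400))"
        };

        // Read the current version from a one-row SchemaVersion table and
        // apply every migration past it, bumping the version each time.
        public static void upgrade(Connection con) throws SQLException {
            try (Statement stmt = con.createStatement()) {
                int version;
                try (ResultSet rs = stmt.executeQuery("SELECT Version FROM SchemaVersion")) {
                    version = rs.next() ? rs.getInt(1) : 1;
                }
                for (int v = version; v <= MIGRATIONS.length; v++) {
                    stmt.execute(MIGRATIONS[v - 1]);
                    stmt.executeUpdate("UPDATE SchemaVersion SET Version = " + (v + 1));
                }
            }
        }
    }
    Run at first startup after an upgrade, this preserves existing user data because the migrations alter the schema in place instead of replacing the database file.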

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns).  Currently, what I do is to select * into a #temp table from the join
    of daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table. 
    Would it be possible to simplify this process by using an Instead Of trigger in the final table and just insert directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
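    To make the MERGE suggestion concrete, here is a hedged sketch run through JDBC (the table and column names are invented, since the original DDL was never posted):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class DailyCaptureLoader {
        // Insert only the rows from DailyCapture whose two-column key does
        // not already exist in FinalData; no #temp table or pre-delete pass
        // is needed because MERGE runs as one atomic statement.
        public static void loadNewRows(Connection con) throws SQLException {
            String merge =
                "MERGE FinalData AS tgt "
              + "USING (SELECT DISTINCT key1, key2, payload FROM DailyCapture) AS src "
              + "ON tgt.key1 = src.key1 AND tgt.key2 = src.key2 "
              + "WHEN NOT MATCHED BY TARGET THEN "
              + "  INSERT (key1, key2, payload) VALUES (src.key1, src.key2, src.payload);";
            // If duplicate keys in DailyCapture can carry different non-key
            // values, deduplicate the source first (e.g. with ROW_NUMBER())
            // so MERGE sees exactly one source row per key.
            try (PreparedStatement ps = con.prepareStatement(merge)) {
                ps.executeUpdate();
            }
        }
    }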

  • What is the best practice for full browser video to achieve the highest quality?

    I'd like to get your thoughts on the best way to deliver full-browser (scale to the size of the browser window) video. I'm skilled in the creation of the content but learning to make the most out of Flash CS5 and would love to hear what you would suggest.
    Most of the tutorials I can find on full browser/scalable video are for earlier versions of Flash; what is the best practice today? Best resolution/format for the video?
    If there is an Adobe guide to this I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
I like the full-screen video effect they have on the "Sounds of Pertussis" web site; this is exactly what I'm trying to create, but I'm not sure of the best way to approach it. Any hints/tips you can offer would be great!
    Thanks in advance!

    Use the little squares over your video to mask the quality. Sounds of Pertussis is not full screen video, but rather full stage. Which is easier to work with since all the controls and other assets stay on screen. You set up your html file to allow full screen. Then bring in your video (netstream or flvPlayback component) and scale that to the full size of your stage  (since in this case it's basically the background) . I made a quickie demo here. (The video is from a cheapo SD consumer camera, so pretty poor quality to start.)
In AS3 it would look something like:
import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;
    stage.align = StageAlign.TOP_LEFT;
    stage.scaleMode = StageScaleMode.NO_SCALE;
    // load video
    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var ns:NetStream = new NetStream(nc);
    var vid:Video = new Video(656, 480); // size of the video
    this.addChildAt(vid, 0);
    vid.attachNetStream(ns);
    // path to your video file
    ns.play("content/GS.f4v");
    var netClient:Object = new Object();
    netClient.onMetaData = function(info:Object):void {}; // avoid async errors from metadata callbacks
    ns.client = netClient;
    // add a listener for resizing of the stage so we can scale our assets
    stage.addEventListener(Event.RESIZE, resizeHandler);
    stage.dispatchEvent(new Event(Event.RESIZE));
    function resizeHandler(e:Event = null):void
    {
        // determine current stage size
        var sw:int = stage.stageWidth;
        var sh:int = stage.stageHeight;
        // scale the video to the stage size
        vid.width = sw;
        vid.height = sh;
        // don't scale the video smaller than a certain size
        if (vid.height < 480)
            vid.height = 480;
        if (vid.width < 656)
            vid.width = 656;
        // match the smaller scale property (x or y) to the larger one so the size stays proportional
        (vid.scaleX > vid.scaleY) ? vid.scaleY = vid.scaleX : vid.scaleX = vid.scaleY;
    }
    // add an event listener for the full screen button
    fullScreenStage_mc.buttonMode = true;
    fullScreenStage_mc.mouseChildren = false;
    fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);
    function goFullStage(event:MouseEvent):void
    {
        //vid.fullScreenTakeOver = false; // keeps the flvPlayback component from going full screen if you use it instead
        if (stage.displayState == StageDisplayState.NORMAL)
            stage.displayState = StageDisplayState.FULL_SCREEN;
        else
            stage.displayState = StageDisplayState.NORMAL;
    }

  • What are the best practices for generating an EPS logo from InDesign?

Our customer is running into technical issues with the logo we sent them, which was exported from InDesign. Images were not embedded and fonts were missing. I was able to embed the images and fonts. However, we DO NOT want them to be able to make any text changes. So after exporting an EPS, I opened the file in Adobe Illustrator and converted all the text to outlines. I hope this works. But I just wanted to ask what the best practices are for doing this.
    The client needs the logo with a transparent background, images embedded and type in outlines. Also, they need some space around the text. When I exported the EPS, the file was cropped right up to the edge of the type.

    It sounds like you are pretty far from "best practice" with regard to logo design and delivery.
    These days, the very use of the EPS format should be considered bad practice, and some other terms in your post, (i.e., 'images,' 'missing fonts'), make it sound like there is not a seasoned logo designer involved.
    That said, you probably already got the advice you need to get out of the immediate jam. However, without proper logo design, you and the client will soon be facing other problems. You should be delivering a 100% vector graphic in single-color (black) and corporate-color(s) versions, with no live font data, that has been test-scaled to very small and very large sizes; ensuring it will work at postage-stamp size and on the side of a truck or building, with specific spot color(s) and proportions that will enable it to be offset printed, embroidered and screen-printed on apparel, and cut into signage materials and decals.

  • What is the best practice for BADI?

    Hi all, this is my first post.
    I've seen many BADI examples here at SDN and elsewhere where after defining the badi at SE18, and the implementation at SE19, you just create a Z program and call the badi and its method. No brainer. I know how to use the new badi of 'get badi' and 'call badi' instead of the classic exit handler. So I do know how to call a badi properly, in a Z program.
    However, SAP's intention of BADI is to replace the traditional user exit. My question is how do you guys use badi in replacement of user exit?
User exits have 'call customer-function' hooks where you put your code in Z includes, without touching standard SAP programs. Where/how do I link standard programs to call my BADI? Even if I implement a standard BADI, my implementation is a Z. And a standard BADI and its methods are, well, standard. How do I call the Z stuff?
    1. If I use CMOD, call customer-function, and a Z include to call my BADI and its method, how does that replace the user exit if I'm still using CMOD?
    2. I've seen people add includes in a standard function pool, where the Z include calls the BADI. But doing so is a modification, which I thought is to be avoided.
    So my question is: what is the best practice out there? How do you use BADIs for enhancement in place of user exits: in combination with CMOD, by adding includes to standard programs, or by other methods?

    Hi Shawn,
    Welcome to SDN
    First thing, you got the whole concept of BADI partially wrong.
BADIs are like user exits; the difference is that they use ABAP OO and offer some more functionality.
    Just as you find CALL CUSTOMER-FUNCTION in the code, in the same way there are exit handlers for BADIs.
    User exits are not completely removed. They are still there and will be there. It's just the extra flexibility you get with BADIs.
    Regards,
    Atish

  • What is the best practice for AppleScript deployment on several machines?

    Hi,
    I am developing some AppleScripts for my colleagues at work and I don't want to visit each of them to deploy my AppleScript on their Macs.
    So, what is the best practice for AppleScript deployment on several machines?
    Is there an installer created by the Automator available?
    I would like to have something like an App to run which puts all my AppleScript relevant files into the right place onto a destination Mac.
    Thanks in advance.
    Regards,

There's really no 'right place' to put AppleScripts. Folder action scripts need to go in ~/Library/Scripts/Folder Action Scripts (or /Library/Scripts/Folder Action Scripts), anything you want to appear in the script menu needs to go in ~/Library/Scripts (or /Library/Scripts), and script applications should probably go in the Applications folder, but otherwise scripts can be placed anywhere. Conventional places to put them are in ~/Library/Scripts or in a subfolder of ~/Library/Application Support if they are run by an application. The more important issue is to make sure you generalize the scripts: use the path to command to get local paths rather than hard-coding them, test that any applications or Unix executables you call are present on the machine, and use script bundles rather than plain scripts if your scripts have private resources.
    You can write a quick installer script if you want to make sure scripts go where you want them. A skeleton version looks like this:
    set scriptsFolder to path to scripts folder from user domain
    set scriptsToExport to path to resource "xxx.scpt" in directory "yyy"
    tell application "Finder"
      duplicate scriptsToExport to scriptsFolder with replacing
    end tell
    say "Scripts are installed"
Save this as a script application, then open the application package, create a folder called "yyy" in the resources folder, and copy your script "xxx.scpt" into it. Other people can run the app to install the script.

  • What are the best practice for CQ5.5 configuration?

    Hello,
What are the best practices for a CQ5.5 configuration that handles high availability?
    Last time I had an issue on the server: after I uploaded 2 GB of DAM assets, the server was not able to start and kept giving an error regarding Tar persistence.
    So kindly let me know the best Apache Felix configuration.
    Thanks in advance...
    Regards,
    Satish

    Hi,
    A DAM upload, regardless of the size of the assets, never should result in TarPM problems, unless you run into an OOM, which left the repository in an unclean state. So if you regularly do DAM uploads of that size, you should check the Garbage Collection logs and probably adjust the heapsize if necessary. You might want to limit the number of concurrent running workflows to keep the memory consumption a bit lower.
To your question: HA in the traditional sense you cannot achieve with a single box, even with optimized settings. For an author use case you would need clustering.
    Jörg

  • What is the best practice in order to create flow in a single maintenance plan?

    Hi All,
What is the best practice for the order of steps in a single maintenance plan?
    1st Check Database Integrity (Check DB)
    2nd Rebuild Index 
    or 
    1st Rebuild Index
    2nd Check Database Integrity (Check DB)
    Grateful to your time and support. Regards, Shiva

    Use the Maintenance Plan Wizard to create a maintenance plan:
    "This topic describes how to create a single server or multiserver maintenance plan using the Maintenance Plan Wizard in SQL Server 2012. The Maintenance Plan Wizard creates a maintenance plan that Microsoft SQL Server Agent can run on a regular
    basis. This allows you to perform various database administration tasks, including backups, database integrity checks, or database statistics updates, at specified intervals."
    LINK: http://technet.microsoft.com/en-us/library/ms191002.aspx
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
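    As for the ordering asked about above, a common line of reasoning is to run the integrity check before the index rebuild, so corruption is caught before time is spent rebuilding. A minimal JDBC sketch of that sequence (the connection string, database, and table names are placeholders):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class NightlyMaintenance {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:sqlserver://localhost;databaseName=MyDb;integratedSecurity=true";
            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement()) {
                // 1) Integrity check first: if the database is corrupt,
                //    rebuilding its indexes is wasted work.
                stmt.execute("DBCC CHECKDB('MyDb') WITH NO_INFOMSGS");
                // 2) Then rebuild the indexes of a (placeholder) table.
                stmt.execute("ALTER INDEX ALL ON dbo.MyTable REBUILD");
            }
        }
    }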
