Closing the database connection after report generation in a server application

I searched these forums and the internet for a definitive answer on getting the Crystal SDK for Java to close the JDBC connection after it has generated a report.  We have been using the Crystal Reports SDK to generate reports within our JEE application, built on the Spring framework, for the past two years.  It works well, especially if you prepare views in the database for your reports.
From what I can tell, once you have used ReportClientDocument to create your report, you call the close() method to release resources associated with report generation, but this does not close the JDBC database connection.
Further research suggests that if you are using the CrystalReportViewer you can call its dispose method to close the database connection.  We are not using JSP or that class, so that does us little good.
Finally, I found a post saying one could call ((AdvancedReportDocument)reportClientDocument.getReportSource()).dispose().  This doesn't drop the connection either.
Each report actually opens three connections according to SQL Server, and each report reuses the connections it has open, so for 50 reports we could theoretically have up to 150 connections.  We explained to our client that those connections remain inactive; however, this is unacceptable to them, as they want to minimize the number of connections left open to their database.
If anyone can post any further information on this issue, it is much appreciated.

Yes, another team member found the issue.  Quite embarrassing, really, that I didn't see it: I was looking for the answer within Crystal's libraries, and it had nothing to do with Crystal.
The developer who wrote the helper code for using Crystal first opened a connection to the live production database's data source and read that connection's information for the report. Next he set that connection information in the report template's PropertyBag and ran the report. However, he forgot to close the connection he had used to look up the connection info, leaking a connection on every report and steadily using up all the available connections.
I'm glad you inquired.  I forgot to post the resolution here.
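
For anyone who finds this later, the fix amounts to closing the lookup connection as soon as its details have been copied into the report. A minimal sketch of the corrected pattern, assuming a container-managed DataSource; the applyConnectionInfo helper stands in for our PropertyBag code and is not the actual project code (try-with-resources needs Java 7; a finally block does the same on older JDKs):

import java.sql.Connection;
import javax.sql.DataSource;
import com.crystaldecisions.sdk.occa.report.application.ReportClientDocument;

public class ReportConnectionHelper {

    public static void generate(DataSource dataSource, ReportClientDocument doc) throws Exception {
        // The bug: this lookup connection was opened to read the live database's
        // connection details but never closed. try-with-resources closes it
        // even if reading the metadata fails.
        try (Connection lookup = dataSource.getConnection()) {
            String url = lookup.getMetaData().getURL();
            String user = lookup.getMetaData().getUserName();
            applyConnectionInfo(doc, url, user); // copies the values into the template's PropertyBag
        }
        try {
            // ... run/export the report here ...
        } finally {
            doc.close(); // releases Crystal's report-generation resources (but not pooled JDBC connections)
        }
    }

    // Illustrative stub: the real helper sets these values in the report's PropertyBag.
    private static void applyConnectionInfo(ReportClientDocument doc, String url, String user) {
    }
}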

Similar Messages

  • How to correctly close the database connection after report generation

    Hello
    I have a problem with the database connection staying alive after the report is created and closed. How do I properly close the connection to the database?
    Best regards
    Edited by: punkers84 on Jun 17, 2011 10:38 AM

    That's what I am doing... after viewing the report, I call the close method in the window-closing event of the container window, but the connection is still open. I had a lot of other issues with my JDBC driver, but after downgrading to an older version those issues were resolved. Only this one is still there! Is there any other way to close the connection (like using dbcontroller, etc.)?

  • Set the Database connection in report at runtime in VB 6.0

    Hi,
    I have a report created in CR XI that connects to an Oracle server. The report works fine if I have the same service name in the tnsnames.ora file that I used while creating the report.
    But I don't want that. I want to pass a recordset directly to the report and have it use the connection that was used to create the recordset, but it doesn't do that and I get a "Failed to open the connection" error.
    This is the way I have the code:
    Dim crApp As New CRAXDRT.Application
    Dim lrpt As CRAXDRT.Report
    Set lrpt = crApp.OpenReport(ReportPath) ' ReportPath assumed: lrpt must be opened before use
    AdoRecordset.Open ReportQueryString
    lrpt.Database.SetDataSource AdoRecordset
    Can someone tell me how I can make this work?
    Thanks,
    Reena

    Hi Reena,
    If you have a code-related issue,
    post your question in the development forum:
    Business Objects SDK Application Development
    Link is here:
    [https://www.sdn.sap.com/irj/sdn/businessobjects-sdk-forum]
    Regards,
    Shweta

  • Changing the Database connection after deployment.

    Dear All,
    I have deployed an application and want to change the connection to the live database, so how do I change it?
    Do I have to redeploy the application? Both DBs are Oracle.
    -Thanks

    Hi Santosh,
    As Frank said, you can connect your application using a JDBC data source, so you can change your connection settings in WLS. You can find the documentation below:
    Section 9.3.1 http://download.oracle.com/docs/cd/E14571_01/web.1111/b31974/bcservices.htm#sm0203
    For Weblogic Server : Section 35.8 http://download.oracle.com/docs/cd/E12839_01/web.1111/b31974/deployment_topics.htm#CHDFJADJ
    Regards,
    Suganth.G

  • My iPad 3 loses the 3G connection after closing the magnetic cover.

    My iPad 3 loses the 3G connection after closing the magnetic cover.  When the cover is opened, the iPad shows the connection is active; however, it does not connect but keeps searching for emails etc.  I then have to reboot.

    Have you tried disabling then re-enabling the 3G setting on your iPad after waking it?
    It may be that as it goes to sleep the iPad loses the 3G connection (you may be in a low-coverage area) and then has problems finding it again; this can happen occasionally with WiFi.
    If you go into Settings, then Mobile Data, and toggle the On/Off switch Off, then On again, see if that brings the connection back without restarting the iPad.

  • Unable to open database connection after application has been running for several days

    Has anyone experienced a similar error to this:
    After about 3 days of use our application starts to report errors opening
    the database connection. By that time we've had thousands of transactions
    happen. Oracle is only showing a few open connections to our iAS host so
    the error seems to indicate the connection pool. I'm configured for 64
    connections (the default). We are using the Oracle native driver on iAS
    SP3, Solaris.
    The iAS ksvradmin monitor gives errors when trying to see how many open
    connections it has (verified bug in SP3), so I can't get any info from iAS
    on the connection pool.
    Thanks in advance,
    Rodger Ball
    Sr. Engineer
    Business Wire

    iAS6 SP3 and earlier cannot detect dead connections. If connections become
    stale, iAS does not detect this and will hand out these stale connections.
    I don't know enough about your problem, but you can check for this.
    hope this helps,
    -James
    "Rodger Ball" <[email protected]> wrote in message
    news:9suucb$[email protected]..
    Has anyone experienced a similar error to this:
    After about 3 days of use our application starts to report errors opening
    the database connection. By that time we've had thousands of transactions
    happen. Oracle is only showing a few open connections to our iAS host so
    the error seems to indicate the connection pool. I'm configured for 64
    connections (the default). We are using the Oracle native driver on iAS
    SP3, Solaris.
    The iAS ksvradmin monitor gives errors when trying to see how many open
    connections it has (verified bug in SP3), so I can't get any info from iAS
    on the connection pool.
    Thanks in advance,
    Rodger Ball
    Sr. Engineer
    Business Wire

  • Why do I get a class conflict between the Prepare SQL.vi and the Get Column Name.vi with the SQL Toolkit compatibility VIs from the Database Connectivity Toolkit?

    I have done extensive programming with the SQL Toolkit with LabVIEW versions through 6.1. My customer now wants to upgrade to Windows 7, so I am trying to upgrade to LabVIEW 2009 (my latest purchased version) using the Database Connectivity Toolkit and the SQL Toolkit compatibility VIs. Everything seemed to be going okay with the higher-level SQL operations, but I ran into trouble with the Get Column Name.vi.
    The pictures below show the problem. The original SQL Toolkit connected the Prepare SQL.vi to the Get Column Name.vi with a cluster of two references, one for the connection and one for the SQL. The new compatibility VIs have a class conflict in the wire, because Prepare SQL.vi outputs a cluster with connection and command references, but Get Column Name.vi expects a cluster with connection and recordset references.
    How do I resolve this conflict?
    Thank You.
    Dan

    I've never worked with the old version of the toolkit, so I don't know how it did things, but looking inside the SQL prep VI, it only generates a command, and the Get Column Name VI wants a recordset. I'm not super familiar with all the internals of ADO, but my understanding is that this is standard: you only have the columns after you execute the command and get the recordset back. What you can apparently do here is insert the Execute Prepared SQL VI in the middle, and that will return what you need.
    I'm not sure why it worked before. Maybe the execute was hidden inside the prep VI, or maybe you can get the column names out of the command object before execution. In general, I would recommend switching to the newer VIs.
    Try to take over the world!

  • Problem in getting the database connection from a connection pool

    Hi all,
    I am facing a problem getting a database connection from a connection pool created on WebLogic Server 8.1.
    I am using Oracle database 8.1.7.
    I have configured my connection pool, data source, and JNDI in WebLogic.
    In my Java program I have the following code to retrieve the connection:
    import java.sql.Connection;
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    class jdbcshp1 {
        public static void main(String[] args) {
            Connection connection = null;
            try {
                Hashtable ht = new Hashtable();
                ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory"); // want to get rid of this
                ht.put(Context.PROVIDER_URL, "t3://localhost:7001"); // want to get rid of this
                // Get a context for the JNDI lookup
                Context ctx = new InitialContext(ht);
                javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("myjndi1");
                // Create a connection object
                connection = ds.getConnection();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The above code is working fine, but the two ht.put statements are creating a problem.
    The problem is that after the application is packaged into a WAR file it can be deployed
    on any machine, or on a different port on the same machine. My application fails if it is deployed on
    a WebLogic server that is at a different port.
    Is there any way I can get rid of those ht.put statements, or any other way to solve the problem?
    any help is appreciated.
    Thanks in advance
    Pooja.

    Hi all,
    Firstly, thanks for your replies.
    I have even seen some code which uses the Context constructor without any parameters and works fine;
    I don't understand why it's not working for my code.
    When I remove those ht.put calls and use the Context constructor without any parameters, I get an error:
    Context ctx = new InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup ("ocjndi");
    connection = ds.getConnection();
    The error is as follows:
    javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
    The above error forces me to include that code, but if the port number is changed the code will not work. Please let me know if some setting has to be made.
    I appreciate all your valuable help.
    Thanks once again.
    Pooja.
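
    For what it's worth, the standard way to drop those hard-coded values in a standalone client is a jndi.properties file on the client classpath; a no-argument new InitialContext() reads it automatically, so each deployment can ship its own copy without recompiling. A minimal sketch, with placeholder host and port:

    # jndi.properties (on the client classpath; values are per-deployment)
    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
    java.naming.provider.url=t3://yourhost:7001

    Inside the server (e.g. code running in a deployed WAR), the no-argument constructor already works, because the container supplies the environment.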

  • When using the Database Connectivity Toolset, reads and writes with long binary fields are incompatible.

    I am trying to write LabVIEW Variants to long binary fields in a .mdb file using the Database Connectivity Toolset. I get errors when trying to convert the field back to a variant after reading it back from the database.
    I next tried flattening the variant before writing it and ultimately wound up doing the following experiments:
    1) If I use DB Tools Insert Data to write an ordinary string and read it back using DB Tools Select Data, the string is converted from ASCII to Unicode.
    2) If I use DB Tools Create Parameterized Query to do an INSERT INTO or an UPDATE operation, specifying that the data is BINARY, and then read it back using DB Tools Select Data, the length of the string is prepended to the string itself as a big-endian four-byte integer.
    I can't think of any way to do a parameterized read, although the mechanism exists to return data via parameters.
    Presuming that this same problem affects Variants when they are written to the database and read back, I could see why I get an error. At least with flattened strings I have the option of discarding the length bytes from the beginning of the string.
    Am I missing something here?

    David,
    You've missed the point. When a data item is flattened to a string, the first four bytes of the string are expected to be the total length of the string in big-endian binary format. What is happening here is that preceding this four-byte length code is another copy of the same four bytes. If an ordinary string, "abcdefg", is used in place of the flattened data item, it will come back as <00><00><00><07>abcdefg. Here I've used <nn> to represent a byte in hexadecimal notation. This problem has nothing to do with flattening and unflattening data items. It has only to do with the data channel consisting of writing to and reading from the database.
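    To make the byte layout concrete, here is a small standalone sketch of discarding that extra prefix (in Java rather than LabVIEW, purely to illustrate the byte order; the class and method names are mine):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class StripPrefixDemo {
        // Removes the extra big-endian 4-byte length prefix added by the database
        // round trip; any length header inside the payload itself is left untouched.
        static byte[] stripLengthPrefix(byte[] fromDb) {
            ByteBuffer buf = ByteBuffer.wrap(fromDb); // ByteBuffer is big-endian by default
            int declaredLength = buf.getInt();        // first four bytes: payload length
            byte[] payload = new byte[declaredLength];
            buf.get(payload);
            return payload;
        }

        public static void main(String[] args) {
            // Ron's example: "abcdefg" comes back as <00><00><00><07>abcdefg
            byte[] fromDb = {0, 0, 0, 7, 'a', 'b', 'c', 'd', 'e', 'f', 'g'};
            System.out.println(new String(stripLengthPrefix(fromDb), StandardCharsets.US_ASCII));
        }
    }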
    I am attaching three files that you can use to demonstrate the problem. The VI file contains an explanation of the problem and instructions for installing and operating the demonstration.
    Ron Martin
    Attachments:
    TestLongBinaryFields.vi ‏132 KB
    Sample.UDL ‏1 KB
    Sample.mdb ‏120 KB

  • The Database Connection could not be found

    Hi guys,
    we did a completely new installation of EPM 11.1.2.1 for a testing environment,
    mirroring our production system 1:1.
    But on TEST we have a problem creating database connections:
    Tools > Database Connection Manager > New Database Connection > Fails:
    8001: The Database Connection could not be found: -15da10bd_1378bcfb29f_-7d2c
    Any ideas?
    I am an admin and have all privileges.
    The problem exists for Essbase connections and HFM, too.
    Regards,
    Bernd

    Are you using a NAS share for storing reporting and analysis files? I mean the RM1 folder?

  • Name of the database connection

    When I create a new data source in JSC and then deploy my application, I find that JSC creates a database connection called, in my case:
    jdbc/decsis_RaveGenerated_1088606994
    Why the RaveGenerated plus the number? I would just want it to be called jdbc/decsis. Can this be changed so that it always stays the same, without the extra characters? Thank you.
    Franklin Angulo

    No, that is not how it works... An example: I took my .war application to another company, then installed an application server with the following database connections:
    Connection Pool: DecsaPool
    Resource: jdbc/decsis
    When I run my application, it gives me an error saying it cannot find jdbc/decsis_RaveGenerated10906100xxxx. So I go to my application server and, under Resources, create a new one with the name the error asks for, and make it refer to the DecsaPool connection pool. Then my application works fine.
    I found that I could modify the sun-web.xml after my application builds, to change that name to just jdbc/decsis. But when I clean my project that information disappears, and when I build again I have to modify sun-web.xml again. So is there a way to change this so it never generates the RaveGenerated part at all? Thank you.
    Franklin Angulo
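
    For reference, the mapping being edited by hand lives in a resource-ref entry in sun-web.xml; a minimal sketch using the names from this thread (the open question above, the IDE regenerating it on every clean build, still stands):

    <sun-web-app>
      <resource-ref>
        <!-- logical name the application code looks up -->
        <res-ref-name>jdbc/decsis_RaveGenerated_1088606994</res-ref-name>
        <!-- actual JNDI name of the data source on the server -->
        <jndi-name>jdbc/decsis</jndi-name>
      </resource-ref>
    </sun-web-app>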

  • Where do I put the .jar file for the database connection?

    I am trying to connect to an Oracle 11g database. I see under the ColdFusion settings summary that the Java version is 1.7.0_55, so I downloaded the ojdbc7.jar file from Adobe.com. Now the question is: where do I put it so that ColdFusion can access it?
    Of course I need to set up the database connection (I still don't have the connection string), but I first need to know where to put the ojdbc7.jar file.
    Thanks.
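
    To the best of my knowledge (verify against your ColdFusion version's documentation), JDBC driver jars dropped into cf_root/cfusion/lib are picked up after a ColdFusion service restart, and on CF10 and later an application can instead load them per-application via this.javaSettings in Application.cfc. A hypothetical sketch, with a made-up path:

    // Application.cfc -- loadPaths is a CF10+ feature; the folder below is an example
    component {
        this.name = "myApp";
        this.javaSettings = { loadPaths = ["/opt/jdbc-drivers"], reloadOnChange = false };
    }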

    jasonwryan wrote: What does the man page say?
        The program has builtin defaults and temperature thresholds but users can specify their own settings in configuration files /etc/default/i8kmon and ~/.i8kmon. The daemon defines 4 states with different fan speeds ({0 0}, {1 0}, {1 1}, {2 2}) and for each state are defined the temperature thresholds which cause the switching to a higher or lower state. Furthermore each state can have different thresholds for operation on ac power or battery. For example the following configuration:
    I've put the following file in /etc/default/i8kmon:
    # Run as daemon, override with --daemon option
    set config(daemon) 0
    # Automatic fan control, override with --auto option
    set config(auto) 1
    # Report status on stdout, override with --verbose option
    set config(verbose) 1
    # Status check timeout (seconds), override with --timeout option
    set config(timeout) 1
    # Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
    set config(0) {{-1 0} -1 50 -1 50}
    set config(1) {{-1 1} 50 70 50 70}
    set config(3) {{-1 2} 70 128 70 128}
    But it doesn't seem to do anything. I'm running a Dell Inspiron 3521 with no dedicated GPU, so it only has one fan. I was somehow able to get i8kmon to work in Ubuntu by putting this config in /etc/i8kmon, but I can't get it to work in Arch. Putting it in /etc/i8kmon or /etc/default/i8kmon doesn't do anything.

  • Timeout: Tuxedo kills the service but not the database connection

    Hi all,
    I am experiencing performance problems on my system due to an inefficient SQL statement and improper timeout handling in Tuxedo.
    A service is using a "problematic" SQL statement (we will tune it, but that is not the main problem). Sixty seconds after the execution starts, Tuxedo kills the service for a timeout.
    At this point I would like Tuxedo to also notify the DB2 database so that it stops processing the SQL. Instead, the SQL continues running on the database (even if the service is killed), and this produces a gradual slowdown in performance.
    In the UBBCONFIG, we are using the following timeout configuration:
    *RESOURCES
    SCANUNIT 5
    SANITYSCAN 6
    BLOCKTIME 12
    *SERVICES
    DEFAULT: SVCTIMEOUT=45
    service1 SVCTIMEOUT=60 TRANTIME=60
    service2 SVCTIMEOUT=60 TRANTIME=60
    Note: not all the services are listed in the *SERVICES section, and we are using the default NOTIFY as well as an OPENINFO.
    Can you please help me find a configuration that kills both the service and the SQL running on the database?
    Thanks in advance,
    Benedetto

    Hi Benedetto.
    First of all, Tuxedo doesn't kill services, it kills servers. Your UBBCONFIG file specifies three timeouts, BLOCKTIME, SVCTIMEOUT, and TRANTIME.
    BLOCKTIME specifies how long a Tuxedo API that needs a response will wait for that response. If the response isn't received in that period of time, Tuxedo will return TPETIME to the caller. As with any failure, if the request was part of a transaction, the transaction is marked rollback only. Note, this timeout does not affect the request, whether sitting in a server's IPC queue or currently executing in a server.
    SVCTIMEOUT is a much more severe timeout and determines how long Tuxedo will allow a service implementation to execute. If a service implementation doesn't reply within the SVCTIMEOUT period, Tuxedo will issue an OS level KILL request to kill the process. If the server is marked restartable, Tuxedo will then try to restart the server assuming none of the restart limits have been reached. Killing the server causes the request to be lost within the server so the caller will stay blocked until BLOCKTIME is reached at which point the above actions will take place.
    TRANTIME is the amount of time Tuxedo allows a transaction to remain active and viable. When this period expires, Tuxedo will mark the transaction as timed out with the only option being rollback. As well, Tuxedo aborts any API requests that would normally cause messaging to occur, i.e., making a tpcall() within a timed out transaction will fail without any attempt to call the service.
    So in your case, the issue is partly that you have the values of your timeouts in somewhat reverse order. Typically we see BLOCKTIME being the smallest value, with TRANTIME typically larger than BLOCKTIME, and SVCTIMEOUT larger still, although there are good reasons for exceptions to this guideline. Part of the reasoning is that killing a server is a significant thing, and it is usually best to try to let the server complete whatever it is doing, even if the work has been timed out due to BLOCKTIME or TRANTIME, since the cost of killing and restarting a server is significant.
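    To make that ordering concrete, here is a sketch of the UBBCONFIG from the question rearranged along those lines (values are illustrative, not tuned recommendations; note that BLOCKTIME is counted in SCANUNIT multiples, so 12 here means 60 seconds):

    *RESOURCES
    SCANUNIT 5
    SANITYSCAN 6
    # blocked callers get TPETIME first: 12 * SCANUNIT = 60 seconds
    BLOCKTIME 12
    *SERVICES
    # TRANTIME exceeds the block time; SVCTIMEOUT, the server kill, is largest of all
    DEFAULT: SVCTIMEOUT=180
    service1 SVCTIMEOUT=180 TRANTIME=90
    service2 SVCTIMEOUT=180 TRANTIME=90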
    Tuxedo will notify the database of the transaction status when the application finally issues a tpcommit() or tpabort() request, but not until then. However, if SVCTIMEOUT is hit, killing the server should cause the database connection to be lost.
    If you could describe the behavior you are seeing and post the relevant portions of your ULOG, we can try to make some sense of what is happening.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • Workspace Error 8001: The Database Connection could not be found

    We had to reboot all our Windows servers recently, and since then all the database connections in the database manager in Workspace are gone.
    I can see all the existing reports but cannot use any of them, as I don't have any connections.
    Also, I cannot create any new connections.
    I get the error "8001: The Database Connection could not be found: 13cbd63_12b5e2a8b73_-7f8f" when I try to create a new connection.
    Not sure what the problem is.
    Any suggestions?

    Hi Pat,
    It looks like your BI+ relational DB is corrupted; try these steps:
    1. De-register BI+ from Shared Services (Config Utility->Hyperion Reporting and Analysis > Deregister from Shared Services)
    2. Configure against the backup database (Drop and recreate tables option).
    3. Re-register BI+ with Shared Services.
    4. Re-deploy BI+.
    5. Test Workspace and add new connections.
    Regards.

  • FMS closing the TCP connection forcibly

    Hi,
    I am developing a live encoder program to publish audio and video content to FMS. After establishing the TCP connection with FMS, when I start sending commands and data, FMS closes the TCP connection forcibly. Can anyone explain why this is happening?
    Thanks and Regards,
    Vishwanath

    Hi,
    If you use Wireshark you can see what is happening on the wire and capture what FMS is sending back to your live encoder program (if anything). It is a very useful tool.
    By the way, what version of FMS are you using? Have you tried FMLE 3.2 as a benchmark, to make sure you have a working FMS server ready for your own live encoder program?
    I hope this helps with your debugging.
