Concurrent sessions not being released in CRS2008

We have a servlet that connects to Crystal Reports Server 2008 using the RAS Java API to open unmanaged reports.
We have 5 CALs, and the connection type of the Guest user is configured as Concurrent User in Crystal Reports Server. We run the reports from our web application with the same user logged on. We were able to run about 2-3 reports successfully, but once the total session count reached 5, every attempt fails at the very beginning of ReportAppSession.initialize(). The error message logged on the Crystal Reports Server is:
ErrorLog 2010  1  7 16:29:25.187 5164 3432 (:46) (..\cdtsagent.cpp:3303): CDTSagent::doOneRequest reqId=154:CSResultException thrown.   ErrorSrc:"Analysis Server" FileName:"..\cdtsagent2.cpp" LineNum:448 ErrorCode:-2147217397 ErrorMsg:"" DetailedErrorMsg:""     ErrorSrc:"COM" FileName:"..\cdtsagent2.cpp" LineNum:443 ErrorCode:-2147210992 ErrorMsg:"All of your system's 5 Concurrent Access Licenses are in use at this time or your system's license key has expired. Try again later or contact your administrator to obtain additional licenses. (FWB 00014)" DetailedErrorMsg:""
We are using Tomcat and have tried the following configuration in the web.xml of infoviewapp and cmcapp, but with no luck:
(1) Locate the pattern "logontoken.enabled" and change the value from 'true' to 'false':
<context-param>
<param-name>logontoken.enabled</param-name>
<param-value>false</param-value>
</context-param>
(2) Make sure these lines are uncommented:
<listener>
<listener-class>com.businessobjects.sdk.ceutils.SessionCleanupListener</listener-class>
</listener>
A past thread mentioned that we might try various SDK code offerings to manage sessions. Could you provide some sample code, using the CRS SDK or CMS configuration, to release the sessions?
Here is the code:
ReportClientDocument lo_ReportClientDoc = null;
try {
    ReportAppSession reportAppSession = new ReportAppSession();
    reportAppSession.createService("com.crystaldecisions.sdk.occa.report.application.ReportClientDocument");
    reportAppSession.setReportAppServer("myCRServer");
    // This is where the exception is thrown.
    reportAppSession.initialize();
    lo_ReportClientDoc = new ReportClientDocument();
    lo_ReportClientDoc.setReportAppServer(reportAppSession.getReportAppServer());
    lo_ReportClientDoc.open(asReportName, OpenReportOptions._openAsReadOnly);
    ReportServerControl control = new ReportServerControl();
    control.setReportSource(lo_ReportClientDoc.getReportSource());
} catch (Exception exc) {
    System.out.println(exc);
} finally {
    // Close the document when done with it so its server-side resources are released
    // instead of being held until the session times out.
    if (lo_ReportClientDoc != null) {
        try { lo_ReportClientDoc.close(); } catch (Exception ignore) { }
    }
}

The recommendation is to publish the report to the server and use managed reporting.
You'd have more control over the EnterpriseSession that way.
Unmanaged RAS does use the Guest account for logon, and you don't have any control over the EnterpriseSession at all.
It would be better to upgrade the CAL licensing if you require additional users.
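For illustration only, here is a rough sketch of the managed approach, based on the BusinessObjects Enterprise Java SDK samples. Verify the class names, the "RASReportFactory" service name and the exact openDocument overload against your SDK version; the CMS name, credentials and report name below are placeholders.
import java.util.Locale;
import com.crystaldecisions.sdk.framework.CrystalEnterprise;
import com.crystaldecisions.sdk.framework.IEnterpriseSession;
import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
import com.crystaldecisions.sdk.occa.infostore.IInfoStore;
import com.crystaldecisions.sdk.occa.report.application.IReportAppFactory;
import com.crystaldecisions.sdk.occa.report.application.ReportClientDocument;
public class ManagedReportSample {
    public static void main(String[] args) throws Exception {
        // Log on with a named or concurrent user instead of Guest.
        IEnterpriseSession enterpriseSession = CrystalEnterprise.getSessionMgr()
                .logon("user", "password", "cmsname:6400", "secEnterprise");
        ReportClientDocument doc = null;
        try {
            // Look up the published (managed) report in the CMS repository.
            IInfoStore infoStore = (IInfoStore) enterpriseSession.getService("InfoStore");
            IInfoObjects results = infoStore.query(
                    "SELECT * FROM CI_INFOOBJECTS WHERE SI_NAME='MyReport' AND SI_KIND='CrystalReport' AND SI_INSTANCE=0");
            IInfoObject reportObject = (IInfoObject) results.get(0);
            // Open it through the RAS report factory tied to this EnterpriseSession.
            IReportAppFactory factory = (IReportAppFactory) enterpriseSession.getService("RASReportFactory");
            doc = factory.openDocument(reportObject, 0, Locale.getDefault());
            // ... view or export the report here ...
        } finally {
            if (doc != null) {
                try { doc.close(); } catch (Exception ignore) { }
            }
            // Logging off explicitly releases the CAL held by this session.
            enterpriseSession.logoff();
        }
    }
}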
Sincerely,
Ted Ueda

Similar Messages

  • Portal session not being terminated. browser "unload" event

    This line of code is in the portallauncher.default and eventually causes the problem:
    EPCM.subscribeEvent("urn:com.sapportals.portal:browser", "unload", releaseProducerSessions);
    releaseProducerSessions eventually calls a portal component,
    WSRPSessionRelease, which is causing the problem.
    When we upgraded from EP 6.0 to NW 2004, users started receiving the NetWeaver login screen when they logged out and logged back in, in the same browser. We think this error occurs because NW 2004 implements Web Services Remote Portal functionality.
    We are using SiteMinder as a third-party session management tool.
    What we found was that the SiteMinder session was being killed but the Portal session was not. Therefore, when users logged back in they would see the generic NetWeaver login screen, and they could actually just hit "enter" and continue to the portal.
    On a successful logoff, users clicked the logoff button, the DSM terminator was called, killing the portal session; then a form was submitted redirecting the users to the SiteMinder logoff page, which logged the users off SiteMinder.
    When the logoff failed, we found that after the DSM terminator was called and before the page was redirected, a portal component (WSRPSessionRelease) was called, which in turn RECREATED the portal session. So the user never actually gets logged off from the portal.
    We found that the WSRPSessionRelease component is tied to a "browser" "unload" event when the portallauncher.default component is first loaded. This is the same component that is called when the user clicks the "X" to force-close the browser.
    The WSRPSessionRelease component is not called before the redirect to the SiteMinder logoff page every time. Sometimes this component is called after the redirect, and in that case the logoff is successful.
    The component is:
    irj/servlet/prt/portal/prtroot/com.sap.portal.wsrp.coreconsumer.WSRPSessionRelease

    Hello Michael,
    The 'log off' issue is a known issue with the Portal since EP 6.
    We faced a similar issue, and SAP suggests redirecting the 'log off' link to another, non-SAP site...like your company intranet site.
    This will help the session to break.
    There are 1-2 SAP Notes on this as well.
    Hope this helps.
    Regards,
    Ritu

  • Portal session not being terminated

    When we upgraded from EP 6.0 to NW 2004, users started receiving the
    NetWeaver login screen when they logged out and logged back in, in the
    same browser. We think this error occurs because NW 2004 implements Web
    Services Remote Portal functionality.
    We are using SiteMinder as a third-party session management tool.
    What we found was that the SiteMinder session was being killed but the
    Portal session was not. Therefore, when users logged back in they would
    see the generic NetWeaver login screen, and they could actually just
    hit "enter" and continue to the portal.
    On a successful logoff, users clicked the logoff button, the DSM terminator
    was called, killing the portal session; then a form was submitted
    redirecting the users to the SiteMinder logoff page, which logged the
    users off SiteMinder.
    When the logoff failed, we found that after the DSM terminator was called
    and before the page was redirected, a portal component
    (WSRPSessionRelease) was called, which in turn recreated the
    portal session. So the user never actually gets logged off from the
    portal.
    We found that the WSRPSessionRelease component is tied to
    a "browser" "unload" event when the portallauncher.default component is
    first loaded. This is the same component that is called when the
    user clicks the "X" to force-close the browser.
    The WSRPSessionRelease component is not called before the redirect to the
    SiteMinder logoff page every time. Sometimes this component is called
    after the redirect, and in that case the logoff is successful.
    The component is:
    irj/servlet/prt/portal/prtroot/com.sap.portal.wsrp.coreconsumer.WSRPSessionRelease

    Hi Michael, we are facing the same error. Have you found a solution?
    Thanks in advance and best regards

  • Session not being clean up by JRun

    My application is using iPlanet Web Server and the JRun 3.02 application server. I am having a problem with active sessions not getting cleaned up by the app server. When the user goes through the application and finishes the process, I invalidate the session by calling session.invalidate(). I have also set a 30-minute timeout value in the JRun global.properties file to invalidate the session if the user starts but does not finish going through the application. However, the active session count in the JRun log doesn't seem to go down. After a few days, I run out of sessions and the application hangs. I keep a few objects on the session, including a pretty big 'pdfObject' that I use to create a PDF document on the fly.
    Any idea why JRun is not able to clean up the sessions after the 30-minute timeout has passed? Does the fact that I have stored objects on the session prevent JRun from invalidating and cleaning up the session?
    Thanks in advance.

    Hi afikru
    According to the Servlet specification, session.invalidate() should unbind any objects associated with the session. However, I'm not conversant with the JRun application server, so I can only provide some pointers here to help you out.
    Firstly, try locating some documentation specific to your application server which may throw some light on why this may be happening.
    Secondly, I'd suggest running the server within a profiling tool so that you can see which objects are being created and how many of them. Try explicitly running the garbage collector and see if the session count comes down.
    Keep me posted on your progress.
    Good Luck!
    Eshwar R.
    Developer Technical Support
    Sun microsystems
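    A simple way to verify whether the container really invalidates and cleans up the sessions is to bind a small marker object that implements HttpSessionBindingListener: its valueUnbound() is called when the session is invalidated or times out. This is a rough sketch only (the class name is made up), and it assumes the Servlet 2.2 API that JRun 3.x provides:
    import javax.servlet.http.HttpSessionBindingEvent;
    import javax.servlet.http.HttpSessionBindingListener;
    public class SessionTracker implements HttpSessionBindingListener {
        public void valueBound(HttpSessionBindingEvent event) {
            // Called when the tracker is added to a session.
            System.out.println("Tracker bound to session " + event.getSession().getId());
        }
        public void valueUnbound(HttpSessionBindingEvent event) {
            // Called when the session is invalidated or times out. If this never
            // shows up in the log, the container is not actually destroying the session.
            System.out.println("Tracker unbound from session " + event.getSession().getId());
        }
    }
    In the servlet, bind the tracker when the session is created, and drop the large objects explicitly before invalidating:
    session.setAttribute("tracker", new SessionTracker());
    // ... later, when the user finishes ...
    session.removeAttribute("pdfObject");
    session.invalidate();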

  • Applets and memory not being released by Java Plug-in

    Hi.
    I am experiencing a strange memory-management behavior of the Java Plug-in with Java Applets. The Java Plug-in seems not to release memory allocated for non-static member variables of the applet-derived class upon destroy() of the applet itself.
    I have built a simple "TestMemory" applet, which allocates a 55-megabyte byte array upon init(). The byte array is a non-static member of the applet-derived class. With the standard Java Plug-in configuration (64 MB of max JVM heap space), this applet executes correctly the first time, but it throws an OutOfMemoryException when pressing the "Reload / Refresh" browser button or when pressing the "Back" and then the "Forward" browser buttons. In my opinion, this is not the expected behavior. When the applet is destroyed, the non-static byte array member should be automatically invalidated and collected. Shouldn't it?
    Here is the complete applet code:
    // ===================================================
    import java.awt.*;
    import javax.swing.*;
    public class TestMemory extends JApplet {
      private JLabel label = null;
      private byte[] testArray = null;
      // Construct the applet
      public TestMemory() {
      }
      // Initialize the applet
      public void init() {
        try {
          // Initialize the applet's GUI
          guiInit();
          // Instantiate a 55 MB array
          // WARNING: with the standard Java Plug-in configuration (i.e., 64 MB of
          // max JVM heap space) the following line of code runs fine the FIRST time the
          // applet is executed. Then, if I press the "Back" button on the web browser,
          // then press "Forward", an OutOfMemoryException is thrown. The same result
          // is obtained by pressing the "Reload / Refresh" browser button.
          // NOTE: the OutOfMemoryException is not thrown if I add "testArray = null;"
          // to the destroy() applet method.
          testArray = new byte[55 * 1024 * 1024];
          // Do something on the array...
          for (int i = 0; i < testArray.length; i++) {
            testArray[i] = 1;
          }
          System.out.println("Test Array Initialized!");
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
      // Component initialization
      private void guiInit() throws Exception {
        setSize(new Dimension(400, 300));
        getContentPane().setLayout(new BorderLayout());
        label = new JLabel("Test Memory Applet");
        getContentPane().add(label, BorderLayout.CENTER);
      }
      // Start the applet
      public void start() {
        // Do nothing
      }
      // Stop the applet
      public void stop() {
        // Do nothing
      }
      // Destroy the applet
      public void destroy() {
        // If the line below is uncommented, the OutOfMemoryException is NOT thrown
        // testArray = null;
      }
      // Get Applet information
      public String getAppletInfo() {
        return "Test Memory Applet";
      }
    }
    // ===================================================
    Everything works fine if I set the byte array to "null" upon destroy(), but does this mean that I have to manually set all of the applet's member variables to null upon destroy()? I believe this should not be a requirement for non-static members...
    I am able to reproduce this problem on the following PC configurations:
    * Windows XP, both JRE v1.6.0 and JRE v1.5.0_11, both with MSIE and with Firefox
    * Linux (Sun Java Desktop), JRE v1.6.0, Mozilla browser
    * Mac OS X v10.4, JRE v1.5.0_06, Safari browser
    Your comments would be really appreciated.
    Thank you in advance for your feedback.
    Regards,
    Marco.

    Hi Marco,
    my guess as to why the JPI would keep references around, if it does keep them, is that it is probably an implementation side effect. A lot of things are cached in the name of performance, and it is easy to leave things lying around in a cache. Maybe the page with its associated images/applets is kept in the browser cache until the browser needs some memory; if the browser memory manager is not cooperating with the JPI/JVM memory manager, the browser is not out of memory (so it does not release its caches) while the JVM may be out of memory. Thus the browser indirectly keeps a reference that it really does not need. This reference could be indirect, through some 'applet context' or whatever the browser uses to interact with the JPI. I don't really know any of these details, I am just imagining what must or could be going on there. Browsers are amazingly complicated beasts.
    This behaviour that you are observing, whether its origin is something like I speculated or not, is not nice, but I would not expect it to be fixed even if you filed a bug report. I guess we are left with releasing all significant memory structures in destroy(). A simple way to code this is not to store anything in the member fields of the applet but in a separate class; then all one has to do is null that one reference from the applet to that class in the destroy() method, and everything will be released when necessary. This way it is not easy to forget to release things.
    Hey, here is a simple, imaginary way in which the browser could cause this problem:
    The browser, of course, needs a reference to the applet; call it m_Applet here. Presume the following helper function:
    Applet instantiateAndInit(Class appletClass) throws Exception {
        Applet applet = (Applet) appletClass.newInstance();
        applet.init();
        return applet;
    }
    When the browser sees the applet tag it instantiates and inits the new applet as follows:
    m_Applet = instantiateAndInit(appletClass);
    As you can readily see, the second time the instantiation occurs, m_Applet holds the reference to the old applet until after the new instance is created and initialized. This would not cause a memory leak, but it would mean that twice the memory needed by the applet is required to avoid an OutOfMemory. I guess it is not fair to call this sort of thing a bug, but it is questionable design. In real life it is probably not this blatant, but it could happen. You could test this, if you like, by allocating less than 32 MB in your init(). If you then do not run out of memory, it is an indication that there are at most two instances of your applet around, and thus it could well be something like I've speculated here.
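    For what it's worth, here is a minimal sketch of the "separate holder class" idea (purely illustrative, names invented):
    // Keep all large state in one holder object instead of in applet fields.
    class AppletState {
      byte[] testArray = new byte[55 * 1024 * 1024];
    }
    public class TestMemoryHolder extends javax.swing.JApplet {
      private AppletState state;
      public void init() {
        state = new AppletState();
      }
      public void destroy() {
        // Dropping this single reference makes everything the holder owns collectable,
        // even if the plug-in keeps the applet instance itself alive for a while.
        state = null;
      }
    }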
    br Kusti

  • Java Threads not being released after loggin off

    Hello everyone,
    We are seeing a weird problem in our PI 7.0 box.
    Once I log out from the XI box (both the ABAP and Java stacks), my Basis team still sees Java threads open against my ID. Our system does not seem to be releasing Java threads.
    Is this a known problem? What are the remediation steps?

    Hi,
    Are there existing HTTP sessions only?
    The following might be helpful.
    http://help.sap.com/saphelp_nw73/helpdata/en/c7/5ee440ba994fa3b187ff2f050cfe7c/content.htm
    http://wiki.sdn.sap.com/wiki/display/ERPHCM/Sessionnotendingafterlogoff
    Regards,
    Varun

  • Connections not being released from jdbc pool on WLS 5.1 sp8

    I am using WLS 5.1 on RedHat Linux kernel 2.4
    The database is oracle 8.1.6 and I am using the thin jdbc driver as shown in the config below.
    weblogic.jdbc.connectionPool.bobePool=\
    url=jdbc:oracle:thin:@localhost:1521:<sid>,\
    driver=oracle.jdbc.driver.OracleDriver,\
    loginDelaySecs=1,\
    initialCapacity=10,\
    maxCapacity=22,\
    capacityIncrement=5,\
    allowShrinking=true,\
    shrinkPeriodMins=3,\
    testConnsOnReserve=true,\
    testTable=dual,\
    refreshTestMinutes=5,\
    props=user=<username>;password=<password>
    The problem I am facing is that some connections remain active
    after the SQL query has finished. The number of open connections
    keeps piling up until the limit of 18 connections is reached, and
    at that point WebLogic hangs.
    Has someone else faced this problem...and found a solution??
    Regards,
    Anish Srivastava
    System Analyst
    Baazee.com

    Hi Karen,
    WebLogic doesn't close connections on "time-out". There are some reasons
    for this. The most important one is that even if you had such
    functionality, in the case of a connection leak all the available
    connections can be trashed in a matter of milliseconds, so it wouldn't
    help at all. WebLogic does close leaked connections when they are
    garbage collected, but it's easy to see that that doesn't help much either.
    So you need to make sure that all the JDBC objects are closed properly,
    for example like in this code:
    try {
        // ... run the query and process the results here ...
    } finally {
        try { resultSet.close(); } catch (SQLException se) {}
        try { preparedStatement.close(); } catch (SQLException se) {}
        try { connection.close(); } catch (SQLException se) {}
    }
    Also, it's possible that you simply don't have enough connections.
    You could try to increase the size of the pool.
    Regards,
    Slava Imeshev
    "Karen Law" <[email protected]> wrote in message
    news:[email protected]...
    Closing the DB connections is important. But are there any parameters to be
    set in the WebLogic configuration to release a connection when it
    hasn't been closed for a long time? Say, setting a connection timeout?
    Many thx!
    Karen
    "Deyan D. Bektchiev" <[email protected]> wrote in message
    news:[email protected]...
    Do you close the DB connections in a finally block after you've finished
    using them?
    If not, your application is leaking the connections, and sometimes GC is
    not able to free those, or does not free them in a timely manner.
    --dejan
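    Just to make the pattern discussed here concrete, below is a minimal, self-contained example of using a pooled connection and guaranteeing it goes back to the pool. It assumes the pool is exposed through a DataSource bound in JNDI; the JNDI name "bobeDataSource" and the SQL are placeholders:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    public class PoolExample {
        public void runQuery() throws Exception {
            DataSource ds = (DataSource) new InitialContext().lookup("bobeDataSource");
            Connection connection = null;
            PreparedStatement preparedStatement = null;
            ResultSet resultSet = null;
            try {
                connection = ds.getConnection();
                preparedStatement = connection.prepareStatement("SELECT 1 FROM dual");
                resultSet = preparedStatement.executeQuery();
                while (resultSet.next()) {
                    // ... process the row ...
                }
            } finally {
                // The finally block runs on both the success and the exception path,
                // so the connection is always returned to the pool.
                if (resultSet != null) try { resultSet.close(); } catch (SQLException ignore) {}
                if (preparedStatement != null) try { preparedStatement.close(); } catch (SQLException ignore) {}
                if (connection != null) try { connection.close(); } catch (SQLException ignore) {}
            }
        }
    }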

  • Safety Stock Not Being Released To Cover New Sales Orders Or Forecast

    Hi,
    Does anyone know of a way to get safety stock at the lower levels of the BOM released for upper-level sales order or forecast demand, without having to adjust the effective date out to a future date?
    Example:
    A = Finish goods item with two weeks lead time
    B= component of A and it has two weeks lead time
    C= component of B and has 10 weeks lead time.
    Have safety stock hard coded with an effective date of today for 100,000 units.
    I have 100,000 in inventory and I have planned orders to cover all my demands and future safety stock requirements for the next ten weeks.
    I receive a new sales order for 50,000 units with a customer request date five weeks out.
    I am getting ATP dates 14 weeks out to assign to the new sales order, instead of being allowed to use 50,000 units from the safety stock to schedule it in week five.
    Even if I schedule the sales order for five weeks out, because I have gone down two levels to confirm that I have safety stock that could be used, planned orders would not get generated at the next two levels, because the demand date on the safety stock is earlier than the demand date for the new order.
    Is there any resolution that anyone has found that addresses this issue?
    We have set a demand priority rule on the ASCP plan options tab to "Sales Order Priority" where we have set criteria for number one to "Sales Orders & MDS Entries Priority" and number two to "Schedule Date".
    We have found that adjusting the supply and demand dates out to cover the full 14 week period does reduce the safety stock at the lower levels, but it also creates additional starts and inventory to be put into the line, which makes this approach not feasible.
    Does anyone have any solutions to this problem? We do not build safety stock for the sake of building safety stock. We want it to be used for any sales order or forecast demand that comes within the planning horizon.
    Appreciate any inputs.
    Regards,
    Dave

    Hi Dave,
    Try this note's suggestion; it may help you resolve your issue.
    How to Avoid Getting Safety Stock Replenishment Too Early in an ASCP Plan [ID 301629.1]
    In order to line up the safety stock supply with the changes in safety stock, a combination of settings must be used:
    the profile MSC: Use FIFO Pegging = Yes
    together with the plan option
    Peg Supplies by Demand Priority checked.
    The profile MSC: Use FIFO Pegging = Yes will do the following:
    For all demands and supplies, it proceeds item by item and
    pegs supplies to demands on a daily basis. Daily supplies
    and demands are not sorted. When supplies or demands
    on a given date are used up, it picks from supplies or
    demands on the next date. The unpegged supplies are
    pegged to excess.
    Peg Supplies by Demand Priority, combined with the profile above, will prevent the system from planning ahead for safety stock.
    Also, no safety stock smoothing is set up.
    To implement the solution, please execute the following steps:
    1. Set the profile MSC: Use FIFO Pegging = Yes, and
    2. Under the plan settings, on the Main tab, set the following options:
    Enable Pegging ON
    Peg Supplies by Demand Priority ON
    Warm Regards
    Sivaraman.G

  • Nokia Lumia not being released in USA

    So the Nokia Lumia phones, the saviours of Nokia, the new phones from Nokia featuring the new Windows 7.5 OS are here! And THIS is what Nokia is pushing out to show they are still in the game, to claw back that lost market share! WE ARE BACK!
    Oh but we aren't releasing them in the USA.
    What?!?!?
    I hate to use internet slang, but I cannot think of a more suitable and succinct term than EPIC FAIL.
    Seriously Nokia, am I sitting here watching you commit suicide?
    Is Stephen Elop a secret Agent Provocateur from MS sent inside to kill the beast from within, cripple them or make them go bust, so that Microsoft can buy all of Nokia's sweet, sweet patents on the cheap, so they can increase their patent protection racket that brings in most of Microsoft's money these days?
    I am completely aghast at all of this. Are Nokia's board of directors in on it? How can they allow this self-harm to continue?

    Everything I've read says these new phones will be released in the US. But my understanding is they will be released in Europe first.
    One theory is that Nokia's brand has taken such a beating in the States that they don't want to risk bugs/product imperfections knocking them down even further. But European consumers have a more positive impression of Nokia and are willing to overlook such kinks, and to be more patient while Nokia corrects the problems...sort of like beta testers. So by the time the devices roll out to US customers in 2012, the major problems will be corrected, and Nokia will have a better chance of being successful in the US.

  • DB Connections not being released when using Weblogic Datasource

    I am using Kodo-JDO 2.5.3 and Weblogic 8.1.
    I have JDO running as a JCA connector and I have a simple stateless session
    bean persisting a simple object. My problem is that every time I call my
    session bean to persist an object it grabs a connection from the Weblogic
    connection pool and never returns it. So if I have configured a maximum of
    50 connections in the pool, on my 51st call to the session bean I will get
    an error saying it cannot acquire a connection from the pool. (error pasted
    below)
    I have configured my JDO parameters as follows:
    ConnectionRetainMode=persistence-manager (also tried 'transaction')
    TransactionMode=xa
    ConnectionFactoryName=ERDataSource
    ConnectionFactory2Name=NonXADataSource
    At the end of every call to the SessionBean I perform a
    persistenceManager.close(); and a persistenceManager=null;
    Any ideas why connections are not getting re-used?
    Exception I am receiving:
    java.sql.SQLException: Internal error: Cannot obtain XAConnection
    weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool MyJDBC Connection Pool-1 to allocate to applications, please increase the size of the pool and retry..
    at com.solarmetric.kodo.impl.jdbc.runtime.SQLExceptions.throwDataStore(SQLExceptions.java:64)
    at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getSQLExecutionManager(JDBCStoreManager.java:722)
    at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.setPersistenceManager(JDBCStoreManager.java:133)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.initialize(PersistenceManagerImpl.java:173)
    at com.solarmetric.kodo.ee.EEPersistenceManager.initialize(EEPersistenceManager.java:50)
    at com.solarmetric.kodo.impl.jdbc.ee.EEPersistenceManagerFactory.newPersistenceManager(EEPersistenceManagerFactory.java:107)
    at com.solarmetric.kodo.runtime.PersistenceManagerFactoryImpl.getPersistenceManager(PersistenceManagerFactoryImpl.java:204)
    at com.solarmetric.kodo.runtime.PersistenceManagerFactoryImpl.getPersistenceManager(PersistenceManagerFactoryImpl.java:136)
    at com.solarmetric.kodo.impl.jdbc.ee.JDOConnectionFactory.getPersistenceManager(JDOConnectionFactory.java:161)
    at com.mslv.osa.infrastructure.ossj.app.JVTSessionBean.getPersistenceManager(JVTSessionBean.java:308)
    at com.mslv.osa.infrastructure.system.app.SystemJVTSessionBean.createSystemProperty(SystemJVTSessionBean.java:882)
    at com.mslv.osa.infrastructure.system.app.SystemJVTSessionBean_toe7tm_EOImpl.createSystemProperty(SystemJVTSessionBean_toe7tm_EOImpl.java:1536)
    at com.mslv.osa.infrastructure.system.app.SystemJVTSessionBean_toe7tm_EOImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:407)
    at weblogic.rmi.cluster.ReplicaAwareServerRef.invoke(ReplicaAwareServerRef.java:108)
    at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:356)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:353)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:123)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:351)
    at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:178)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:151)
    Glen

    A couple of suggestions that might do the trick:
    1. Upgrade to 2.5.4
    2. Leave the ConnectionRetainMode to its default value (on-demand).
    3. Make sure you always close your Query results and Extent iterators.
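    To illustrate point 3, here is a minimal sketch of the close-everything pattern with the JDO 1.x API (the persistent class name is a placeholder):
    import java.util.Collection;
    import java.util.Iterator;
    import javax.jdo.Extent;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Query;
    public class JdoCleanupExample {
        public void listObjects(PersistenceManagerFactory pmf) {
            PersistenceManager pm = pmf.getPersistenceManager();
            try {
                Query query = pm.newQuery(MyPersistentClass.class);
                Collection results = (Collection) query.execute();
                for (Iterator it = results.iterator(); it.hasNext();) {
                    Object obj = it.next();
                    // ... use obj ...
                }
                // Release the query results (and the connection backing them).
                query.closeAll();
                Extent extent = pm.getExtent(MyPersistentClass.class, false);
                for (Iterator it = extent.iterator(); it.hasNext();) {
                    Object obj = it.next();
                    // ... use obj ...
                }
                // Close the extent's iterators as well.
                extent.closeAll();
            } finally {
                // Closing the PersistenceManager lets its connection go back to the pool.
                pm.close();
            }
        }
    }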

  • Connections not being released from pool with Weblogic 5.1 sp 8

    I am using WLS 5.1 on RedHat Linux kernel 2.4
    The database is oracle 8.1.6 and I am using the thin
    jdbc driver as shown in the config below.
    weblogic.jdbc.connectionPool.bobePool=\
    url=jdbc:oracle:thin:@localhost:1521:<sid>,\
    driver=oracle.jdbc.driver.OracleDriver,\
    loginDelaySecs=1,\
    initialCapacity=10,\
    maxCapacity=22,\
    capacityIncrement=5,\
    allowShrinking=true,\
    shrinkPeriodMins=3,\
    testConnsOnReserve=true,\
    testTable=dual,\
    refreshTestMinutes=5,\
    props=user=<username>;password=<password>
    The problem I am facing is that some connections remain active
    after the SQL query has finished. The number of open connections
    keeps piling up until the limit of 18 connections is reached, and
    at that point WebLogic hangs.
    Has someone else faced this problem...and found a solution??
    Regards,
    Anish Srivastava
    System Analyst
    Baazee.com

    I agree with Sree. We had the same kind of problem before, but I attribute this mostly
    to the application's handling of the connections rather than to the container's management
    of the pools. Programmers most often neglect to explicitly reclaim resources (close
    them, in the case of DB connections) when exceptions occur. Even though they keep a finally block
    at the end of every business method, my experience says it is safer to close
    them in the exception catch block too. I don't know whether some JVMs defer the execution of
    finally blocks or containers override the behavior.
    Comments?
    -Chandra
    "Sree Bodapati" <[email protected]> wrote:
    Hello Anish,
    There are two things here:
    1. Make sure connections are closed (in a finally block). If the connection
    objects are not declared at the method level, the application code may
    be overwriting and losing references to them, so watch out for that.
    2. Never try to get a connection in an infinite loop in the application;
    with enough clients, that will take all the available execute threads on
    the server and can cause this issue.
    One other config change I would suggest: set refreshTestMinutes=99999;
    testConnsOnReserve should be sufficient to ensure the application gets a
    good connection.
    hth
    sree
    "Anish Srivastava" <[email protected]> wrote in message
    news:[email protected]...
    I am using WLS 5.1 on RedHat Linux kernel 2.4
    The database is oracle 8.1.6 and I am using the thin
    jdbc driver as shown in the config below.
    weblogic.jdbc.connectionPool.bobePool=\
    url=jdbc:oracle:thin:@localhost:1521:<sid>,\
    driver=oracle.jdbc.driver.OracleDriver,\
    loginDelaySecs=1,\
    initialCapacity=10,\
    maxCapacity=22,\
    capacityIncrement=5,\
    allowShrinking=true,\
    shrinkPeriodMins=3,\
    testConnsOnReserve=true,\
    testTable=dual,\
    refreshTestMinutes=5,\
    props=user=<username>;password=<password>
    The problem I am facing is that some connections remain active
    after the SQL query has finished. The no. of open connections
    keep on piling up till the limit of 18 connections is reached and
    ultimately at that point weblogic hangs.
    Has someone else faced this problem...and found a solution??
    Regards,
    Anish Srivastava
    System Analyst
    Baazee.com
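    On the question of whether finally can be skipped: the Java language guarantees that a finally block runs whether the try block completes normally or throws, so closing connections in the catch block as well is redundant. A tiny standalone illustration:
    public class FinallyDemo {
        public static void main(String[] args) {
            try {
                riskyCall();
            } catch (RuntimeException e) {
                System.out.println("caught: " + e.getMessage());
            }
        }
        static void riskyCall() {
            try {
                throw new RuntimeException("boom");
            } finally {
                // Printed even though the exception propagates out of the method.
                System.out.println("finally ran; resources would be closed here");
            }
        }
    }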

  • Instance not being released from task flow

    I have created a simple BPM workflow (start msg -> human task1 ->end). Used the auto-build feature in Jdev (11.1.1.4) to create the form. Message creates the instance. I am able to open the form and approve, but then the instance doesn't leave the taskflow (becomes stale in EM). How do I get this resolved?
    Using SOA 11.1.1.4, Windows 2008 R2, XE.

    We discovered that JDeveloper (or some process along the way) was not updating the roles in EM. We found old roles, and several roles that didn't have users/groups associated with them, even though our organization in JDeveloper did have the users/groups.
    We removed all the roles associated with this project and then redeployed. Issue resolved.

  • IPhone core data - fetched managed objects not being autoreleased on device (fine on simulator)

    I'm currently struggling with a core data issue with my app that defies (my) logic. I'm sure I'm doing something wrong but can't see what. I am doing a basic executeFetchRequest on my core data entity, but the array of managed objects returned never seems to be released ONLY when I run it on the iPhone, under the simulator it works exactly as expected. This is despite using an NSAutoreleasePool to ensure the memory footprint is minimised. I have also checked with Instruments and there are no leaks, just ever increasing allocations of memory (by '[NSManagedObject(_PFDynamicAccessorsAndPropertySupport) allocWithEntity:]'). In my actual app this eventually leads to a didReceiveMemoryWarning call. I have produced a minimal program that reproduces the problem below. I have tried various things such as faulting all the objects before draining the pool, but with no joy. If I provide an NSError pointer to the fetch no error is returned. There are no background threads running.
    +(natural_t) get_free_memory {
        mach_port_t host_port;
        mach_msg_type_number_t host_size;
        vm_size_t pagesize;
        host_port = mach_host_self();
        host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
        host_page_size(host_port, &pagesize);
        vm_statistics_data_t vm_stat;
        if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS) {
            NSLog(@"Failed to fetch vm statistics");
            return 0;
        }
        /* Stats in bytes */
        natural_t mem_free = vm_stat.free_count * pagesize;
        return mem_free;
    }
    - (void)viewDidLoad {
        [super viewDidLoad];
        // Set up the edit and add buttons.
        self.navigationItem.leftBarButtonItem = self.editButtonItem;
        UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(insertNewObject)];
        self.navigationItem.rightBarButtonItem = addButton;
        [addButton release];
        // Obtain the Managed Object Context
        NSManagedObjectContext *context = [(id)[[UIApplication sharedApplication] delegate] managedObjectContext];
        // Check the free memory before we start
        NSLog(@"INITIAL FREEMEM: %d", [RootViewController get_free_memory]);
        // Loop around a few times
        for (int i = 0; i < 20; i++) {
            // Create an autorelease pool just for this loop
            NSAutoreleasePool *looppool = [[NSAutoreleasePool alloc] init];
            // Check the free memory each time around the loop
            NSLog(@"FREEMEM: %d", [RootViewController get_free_memory]);
            // Create a minimal request
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"TestEntity" inManagedObjectContext:context];
            // 'request' released after fetch to minimise use of autorelease pool
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            [request setEntity:entityDescription];
            // Perform the fetch
            NSArray *array = [context executeFetchRequest:request error:nil];
            [request release];
            // Drain the pool - should release the fetched managed objects?
            [looppool drain];
        }
        // Check the free memory at the end
        NSLog(@"FINAL FREEMEM: %d", [RootViewController get_free_memory]);
    }
    When I run the above on the simulator I get the following output (which looks reasonable to me):
    2011-06-06 09:50:28.123 renniksoft[937:207] INITIAL FREEMEM: 14782464
    2011-06-06 09:50:28.128 renniksoft[937:207] FREEMEM: 14807040
    2011-06-06 09:50:28.135 renniksoft[937:207] FREEMEM: 14831616
    2011-06-06 09:50:28.139 renniksoft[937:207] FREEMEM: 14852096
    2011-06-06 09:50:28.142 renniksoft[937:207] FREEMEM: 14872576
    2011-06-06 09:50:28.146 renniksoft[937:207] FREEMEM: 14897152
    2011-06-06 09:50:28.149 renniksoft[937:207] FREEMEM: 14917632
    2011-06-06 09:50:28.153 renniksoft[937:207] FREEMEM: 14938112
    2011-06-06 09:50:28.158 renniksoft[937:207] FREEMEM: 14962688
    2011-06-06 09:50:28.161 renniksoft[937:207] FREEMEM: 14983168
    2011-06-06 09:50:28.165 renniksoft[937:207] FREEMEM: 14741504
    2011-06-06 09:50:28.168 renniksoft[937:207] FREEMEM: 14770176
    2011-06-06 09:50:28.174 renniksoft[937:207] FREEMEM: 14790656
    2011-06-06 09:50:28.177 renniksoft[937:207] FREEMEM: 14811136
    2011-06-06 09:50:28.182 renniksoft[937:207] FREEMEM: 14831616
    2011-06-06 09:50:28.186 renniksoft[937:207] FREEMEM: 14589952
    2011-06-06 09:50:28.189 renniksoft[937:207] FREEMEM: 14610432
    2011-06-06 09:50:28.192 renniksoft[937:207] FREEMEM: 14630912
    2011-06-06 09:50:28.194 renniksoft[937:207] FREEMEM: 14651392
    2011-06-06 09:50:28.197 renniksoft[937:207] FREEMEM: 14671872
    2011-06-06 09:50:28.200 renniksoft[937:207] FREEMEM: 14692352
    2011-06-06 09:50:28.203 renniksoft[937:207] FINAL FREEMEM: 14716928
    However, when I run it on an actual iPhone 4 (4.3.3) I get the following result:
    2011-06-06 09:55:54.341 renniksoft[4727:707] INITIAL FREEMEM: 267927552
    2011-06-06 09:55:54.348 renniksoft[4727:707] FREEMEM: 267952128
    2011-06-06 09:55:54.702 renniksoft[4727:707] FREEMEM: 265818112
    2011-06-06 09:55:55.214 renniksoft[4727:707] FREEMEM: 265355264
    2011-06-06 09:55:55.714 renniksoft[4727:707] FREEMEM: 264892416
    2011-06-06 09:55:56.215 renniksoft[4727:707] FREEMEM: 264441856
    2011-06-06 09:55:56.713 renniksoft[4727:707] FREEMEM: 263979008
    2011-06-06 09:55:57.226 renniksoft[4727:707] FREEMEM: 264089600
    2011-06-06 09:55:57.721 renniksoft[4727:707] FREEMEM: 263630848
    2011-06-06 09:55:58.226 renniksoft[4727:707] FREEMEM: 263168000
    2011-06-06 09:55:58.726 renniksoft[4727:707] FREEMEM: 262705152
    2011-06-06 09:55:59.242 renniksoft[4727:707] FREEMEM: 262852608
    2011-06-06 09:55:59.737 renniksoft[4727:707] FREEMEM: 262389760
    2011-06-06 09:56:00.243 renniksoft[4727:707] FREEMEM: 261931008
    2011-06-06 09:56:00.751 renniksoft[4727:707] FREEMEM: 261992448
    2011-06-06 09:56:01.280 renniksoft[4727:707] FREEMEM: 261574656
    2011-06-06 09:56:01.774 renniksoft[4727:707] FREEMEM: 261148672
    2011-06-06 09:56:02.290 renniksoft[4727:707] FREEMEM: 260755456
    2011-06-06 09:56:02.820 renniksoft[4727:707] FREEMEM: 260837376
    2011-06-06 09:56:03.334 renniksoft[4727:707] FREEMEM: 260395008
    2011-06-06 09:56:03.825 renniksoft[4727:707] FREEMEM: 259932160
    2011-06-06 09:56:04.346 renniksoft[4727:707] FINAL FREEMEM: 259555328
    The amount of free memory reduces each time round the loop in proportion to the managed objects I fetch e.g. if I fetch twice as many objects then the free memory reduces twice as quickly - so I'm pretty confident it is the managed objects that are not being released. Note that the entities that are being fetched are very basic, just two attributes, a string and a 16 bit integer. There are 1000 of them being fetched in the examples above. The code I used to generate them is as follows:
    // Create test entities
    for (int i = 0; i < 1000; i++) {
        id entity = [NSEntityDescription insertNewObjectForEntityForName:@"TestEntity" inManagedObjectContext:context];
        [entity setValue:[NSString stringWithFormat:@"%d", i] forKey:@"name"];
        [entity setValue:[NSNumber numberWithInt:i] forKey:@"value"];
    }
    if (![context save:nil]) {
        NSLog(@"Couldn't save");
    }
    If anyone can explain to me what is going on I'd be very grateful! This issue is the only one holding up the release of my app. It works beautifully on the simulator!!
    Please let me know if there's any more info I can supply.

    Update: I modified the above code so that the fetch (and looppool etc.) take place when a timer fires. This means that the fetches aren't blocked in viewDidLoad.
    The result of this is that the issue happens exactly as before, but the applicationDidReceiveMemoryWarning is fired as expected:
    2011-06-08 09:54:21.024 renniksoft[5993:707] FREEMEM: 6131712
    2011-06-08 09:54:22.922 renniksoft[5993:707] Received memory warning. Level=2
    2011-06-08 09:54:22.926 renniksoft[5993:707] applicationDidReceiveMemoryWarning
    2011-06-08 09:54:22.929 renniksoft[5993:707] FREEMEM: 5615616
    2011-06-08 09:54:22.932 renniksoft[5993:707] didReceiveMemoryWarning
    2011-06-08 09:54:22.935 renniksoft[5993:707] FREEMEM: 5656576

  • Transports not automatically released SDMJ

    Hi-
    I am at the step where I confirm a successful test in my ChaRM scenario. According to the SAP process for normal corrections (SDMJ), at this point the status changes to Consolidated and the original transport request should be released automatically. The transport is not being released.
    Any thoughts?

    There is one possible reason: check whether all of your transports of copies have been imported. Once you select "Confirm successful test", the system first changes the status to "Consolidated" and then tries to release the transport. Before releasing the actual transport, the system checks whether all transports of copies have been successfully imported into QA. If one of the requests is still waiting for import, the transport will not be released. Please check the Display/Close message window; your CR will show a status message there. Hope this helps you understand.
    Thanks
    Jignesh

  • Release notes for 2.16 state that there was a fix for alerts not being modal. We are using 3.0.6 and are experiencing the same issue; was there a regression of the modal fix? What version needs to be used to make sure that alert messages are modal?

    Release notes for 2.16 state that there was a fix for alerts not being modal. We are using 3.0.6 and are experiencing the same issue; was there a regression of the modal fix? What version needs to be used to make sure that alert messages are modal?

    We are trying to determine why alert boxes are not modal.
    The fix states it's for Firefox 2.0 - 3.7a1pre.
    We are using Firefox 3.0.6, not the current version, 3.6.13.
    Add-on release notes 2.16.1
    https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/versions/
    Repeating 2.16 release notes since 2.16 was not made available for more than a couple of hours
    Fixed bug whereby some alert boxes weren't properly parented/owned. This led to some alerts not being properly modal
    with respect to the window/dialog that issued the alert.
