Problem closing sockets when using OCI8 JDBC connections

Hi,
We have a Java thread that maintains a socket connection with a Telnet client. We are finding that this thread is unable to close this connection successfully with Socket.close(). The reason close() fails appears to be that the same thread is creating a JDBC connection (using the OCI8 driver) via another Java object. If we don't create the JDBC connection within this thread, the socket closes correctly.
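For reference, a minimal sketch of the pattern described above; the host, port, TNS alias and credentials are placeholders, and the actual Telnet and JDBC work is elided:

    import java.net.Socket;
    import java.sql.Connection;
    import java.sql.DriverManager;

    // Hypothetical reconstruction of the failing pattern: the thread that owns the
    // Telnet socket also opens an OCI8 JDBC connection before closing the socket.
    public class TelnetWorker implements Runnable {
        public void run() {
            try {
                Socket telnet = new Socket("telnet-host", 23);       // placeholder host/port
                // ... exchange data with the Telnet client ...

                Class.forName("oracle.jdbc.driver.OracleDriver");
                Connection db = DriverManager.getConnection(
                        "jdbc:oracle:oci8:@ORCL", "scott", "tiger"); // placeholder TNS alias/credentials
                // ... JDBC work via another object ...
                db.close();

                telnet.close(); // this is the call that fails once the OCI8 connection has been created
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }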
Has anybody else experienced such problems?
We believe that this problem might be related to BugRat report #391.
Cheers.
Max

Sounds like a mismatch between the JDBC drivers and the installed Oracle client.
The JDBC drivers with JDeveloper 9.0.3 should be the 9.0.1.3 versions (included with the iAS Admin Client and iAS 9.0.2).
Also note, the JDBC driver used by OC4J is "classes12dms.jar", not "classes12.jar".
Hope this helps,
Rob

Similar Messages

  • Advanced Queue - using the JDBC connection

    Advanced Queue using the JDBC connection gives us an error when sending 32K messages.
    Is it true that RAW datatypes using the Java AQ API do indeed have a limit of 32K in 8.1.6?
    The workaround is to use the Java AQ API with object payloads (BLOBs).
    Has anyone used any other solution?
    Thanks

    There's a JDBC forum that's probably more germane to this question-- I'd suggest reposting it there.
    Justin Cave
    ODBC Development

  • iTunes closed unexpectedly when using BeoSoundPlugin

    Hi,
    After installing iTunes 10 a few days ago I am not able to open my iTunes anymore. If I try to, I get a "Problem report about iTunes" saying that iTunes closed unexpectedly when using BeoSoundPlugin. I have never experienced any problems with this before. I don't think I have any BeoSound software installed anymore. I am simply not able to enter iTunes. I have tried to uninstall iTunes and install it again, but without any success.
    Does anybody know what to do? I have attached some of the text from the Problem report below.
    Process: iTunes [422]
    Path: /Applications/iTunes.app/Contents/MacOS/iTunes
    Identifier: com.apple.iTunes
    Version: 10.0 (10.0)
    Build Info: iTunes-10006701~1
    Code Type: X86 (Native)
    Parent Process: launchd [79]
    PlugIn Path: /Users/jesperstrunge/Library/iTunes/iTunes Plug-ins/BeoSoundPlugin.bundle/Contents/MacOS/BeoSoundPlugin
    PlugIn Identifier: com.bang-olufsen.itunes.2
    PlugIn Version: ??? (1.0.0f1)

    Hi Jesper, and welcome to the forums!
    The answer, if there is one, seems to be contained here (scroll down):
    http://beophile.com/?page_id=1091
    You may still have parts of BeoSound left on your Mac. Either use Spotlight to find them or have a look in the most obvious places:
    Home/Library/Input Managers
    Hard Disk/library/Input Managers
    Hard Disk/Library/Application Support
    Also, try repairing permissions.

  • I am having problems sending emails when using apps

    I am having problems sending emails when using apps. I don't receive any emails when I try to send documents such as PDFs or pictures. I tried sending to my other email account, but I don't get any email. What is wrong?

    System Preferences > Network > your-connection-medium > (Assist me) > (Diagnostics)
    This sometimes provides additional helpful information, sometimes not so much.

  • When attempting to use the Lightroom external editor to edit a photo in Photoshop Elements 10, the photo does not open / appear on the Photoshop Elements screen

    When attempting to use the Lightroom external editor program to edit a photo in Photoshop Elements 10, the photo does not open / appear on the Photoshop Elements screen. I don't have any problem with this when using Photoshop Elements 6 or Photoshop CS. I'm using a Mac with Mountain Lion OS. Any solutions?

    Adobe now hides the real editor - the one that looks like the editor is not it - you want the editor hidden in the support folder - see http://forums.adobe.com/message/3955558#3955558 for details
    LN

  • Problem with SDO_relate when using polygons with holes.

    I'm having a problem with SDO_RELATE. I'm trying to extract all elements from a point table (bdtq_batim_p) that are inside a specific polygon from another table (SDA_MUNIC_SS). The spatial indexes for both tables have been rebuilt and the data in both tables is valid.
    When I do a count on the query, I know the answer should be 1422 elements (Counted in ArcGIS). However, sdo_relate gives a smaller number of elements in the result set.
    The query :
    SELECT count(distinct t.identifiant) FROM bdtq_batim_p t, SDA_MUNIC_SS s WHERE s.mus_co_geo = '48015' and sdo_relate( t.SHAPE,s.SHAPE,'mask=anyinteract querytype=window') = 'TRUE'
    returns 282 elements. Queries with mask=inside, SDO_ANYINTERACT() and SDO_INSIDE() all give the same result.
    I did a test with the following query and the result is 1422 (which is the correct result).
    SELECT count(distinct t.identifiant) FROM bdtq_batim_p t, SDA_MUNIC_SS s WHERE s.mus_co_geo = '48015' and SDO_WITHIN_DISTANCE( t.SHAPE,s.SHAPE,'distance=0') = 'TRUE';
    It's important to note that the polygon (from SDA_MUNIC_SS) used in this query has holes in it. I have the same problem with all the polygons from the SDA_MUNIC_SS table that have holes. For polygons without holes, the results of the two queries are the same.
    My questions are:
    Why are the results of the two queries different? A query with a distance of 0 should always return the same result as a query with ANYINTERACT.
    Is there a known problem with SDO_RELATE when using a polygon with holes in it?
    Do you have any idea how to solve my problem?

    Since I don't have much control over the version of Oracle and the patches we use on the system, we used a workaround that detects the polygons with holes and uses the SDO_WITHIN_DISTANCE(t.SHAPE, s.SHAPE, 'distance=0') = 'TRUE' operator in those cases. We saw a slight decline in performance, but it now returns the right results. When the system is patched, we'll go back to the original version and see if the problem is solved.
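    A sketch of that workaround in JDBC (table and column names are taken from this thread; detecting holes via the interior-ring element types 2003/2005 in SDO_ELEM_INFO is an assumption on my part, not something stated above):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class HoleAwareCount {

        // Count the points inside the municipality, switching operators when the
        // municipality polygon has holes (the case where SDO_RELATE misbehaved here).
        static int countPoints(Connection conn, String musCoGeo) throws SQLException {
            String predicate = polygonHasHoles(conn, musCoGeo)
                ? "SDO_WITHIN_DISTANCE(t.SHAPE, s.SHAPE, 'distance=0') = 'TRUE'"
                : "SDO_RELATE(t.SHAPE, s.SHAPE, 'mask=anyinteract querytype=window') = 'TRUE'";
            String sql = "SELECT COUNT(DISTINCT t.identifiant) "
                       + "FROM bdtq_batim_p t, SDA_MUNIC_SS s "
                       + "WHERE s.mus_co_geo = ? AND " + predicate;
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, musCoGeo);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        }

        // Assumed hole test: interior polygon rings carry SDO_ETYPE 2003 (2005 for
        // compound interior rings) in SDO_ELEM_INFO. Scanning the raw varray values
        // is a rough check; it could in theory also match an offset value.
        private static boolean polygonHasHoles(Connection conn, String musCoGeo) throws SQLException {
            String sql = "SELECT COUNT(*) FROM SDA_MUNIC_SS s, TABLE(s.SHAPE.SDO_ELEM_INFO) e "
                       + "WHERE s.mus_co_geo = ? AND e.COLUMN_VALUE IN (2003, 2005)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, musCoGeo);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1) > 0;
                }
            }
        }
    }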

  • CS4 Dynamic Link to Encore doesn't work most of the time. Encore stops operating after opening and periodically Premiere Pro stops working.

    CS4 Dynamic Link to Encore doesn't work most of the time. Encore stops operating after opening, and periodically Premiere Pro stops working. I'm told that there has been a problem with CS4 when using Dynamic Link to go to Encore and build CD's. Is there a way around this? Is there a patch to correct it?

    To build CD's???
    What problem does Encore have with DL?
    If DL is not working properly for you, the way around this is to export from Premiere to either MPEG2-DVD for DVD or Blu-ray H.264 for BD discs, and import the files into Encore.

  • Connection closed error when using binding

    Hi,
    I am running WLS 7.0.1.0 with TopLink 9.0.3 as the persistence layer for
    EJB1.1 beans with CMP. When I use bind parameters I get a connection
    closed exception the second time the query is invoked.
    This is the query I see on server console when it is invoked the first
    time. This query returns the expected results:
    [TopLink]: ServerSession(91035)--Connection(887977)--SELECT LAST_CHANGED_ID, ALIAS_NAME, REFERENCE_ID, ALIAS_TYPE, REFERENCE_QUAL_CODE, ALIAS_QUAL_CODE, TLINK_VERSION, LAST_CHANGED_DATE, DELETED_FLAG FROM GLOBAL_ALIAS WHERE (DELETED_FLAG =
    bind => [N]
    However, when I run the same query the second time I get the
    following error:
    [TopLink Error]: ServerSession(91035)--Connection(0)--null--EXCEPTION
    [TOPLINK-4002] (TopLink (WLS CMP) - 9.0.3.1 (Build 426)):
    oracle.toplink.exceptions.DatabaseException
    EXCEPTION DESCRIPTION: java.sql.SQLException: Connection has already been closed.
    INTERNAL EXCEPTION: java.sql.SQLException: Connection has already been closed.
    ERROR CODE: 0
    Then when I execute the query again the third time I see the following sql query and it works fine.
    [TopLink]: ServerSession(91035)--Connection(889939)--SELECT LAST_CHANGED_ID, ALIAS_NAME, REFERENCE_ID, ALIAS_TYPE, REFERENCE_QUAL_CODE, ALIAS_QUAL_CODE, TLINK_VERSION, LAST_CHANGED_DATE, DELETED_FLAG FROM GLOBAL_ALIAS WHERE (DELETED_FLAG =
    bind => [N]
    Does WebLogic close its connection to the database pool after each
    transaction? Is there something that needs to be done on the database?
    Any help will be greatly appreciated.
    Thanks in Advance,
    Anup.

    Hi. Our transaction coordinator does prevent any code from using a
    pool connection that was part of a transaction, after that transaction
    ends. Applications must obtain and use pool connections totally
    within or totally outside a UserTransaction, to prevent unintended
    or unclear interference of the transaction contents and/or locking.
    Joe Weinstein
    PS: Also post any followup to the ejb newsgroup, because this issue
    is more at the EJB level than JDBC.
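    A minimal sketch of that rule, assuming a JTA UserTransaction and a transaction-aware DataSource looked up from JNDI (the JNDI names and the query are placeholders drawn from this thread): reserve the pooled connection after begin() and release it before commit(), rather than holding it across the transaction boundary.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TxScopedQuery {
        public void run() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("myTxDataSource");          // placeholder JNDI name

            utx.begin();
            try (Connection con = ds.getConnection();                           // reserved inside the tx
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT LAST_CHANGED_ID FROM GLOBAL_ALIAS WHERE DELETED_FLAG = ?")) {
                ps.setString(1, "N");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // ... use the row ...
                    }
                }
            }                                                                    // released before commit
            utx.commit();
            // Holding the same Connection past commit() and reusing it in the next
            // transaction is the pattern that produces "Connection has already been closed".
        }
    }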

  • I have a problem with my Mac OS X Lion when using Remote Desktop Connection or VPN over Wi-Fi.

    I know it's a Wi-Fi configuration problem, since it all works if I connect to the internet with a cable.

    Need some more details; are you getting any error messages? If you go to System Preferences > Network, what does it say under AirPort?

  • What about session memory when using BEA Weblogic connection pooling?

    Hi,
    consider a web application allowing database connections via a BEA WebLogic 8.1 application server. The app server is pooling the Oracle connections. The Oracle database is running in dedicated server mode.
    How are the database requests from the web app served by the connection pool from BEA?
    1) Does one Oracle session serve more than one request simultaneously?
    2) Does BEA serialize the requests, which means that a session from the pool is always serving only one request at a time?
    If (1) is true, then what about the session memory of Oracle sessions? I understand that things like package global variables are being stored in this session-private memory. If (1) is true, the PL/SQL programmer has the same situation as with programming an Oracle database in "shared server" mode, that is, he should not use package global variables etc.
    Thankful for any ideas...
    Message was edited by:
    Xenofon

    Xenofon Grigoriadis wrote:
    Hi,
    consider a web application, using BEA between client and an Oracle Database (v9i). BEA is pooling the Oracle connections. The Oracle database is running in dedicated server mode.
    How are the database requests from the web app being served by the connection pool from BEA?
    1) Does one Oracle session serve more than one request simultaneously?
    No.
    2) Or does BEA serialize the requests, which means that a session from the pool is always serving only one request at a time?
    Reading "Configuring and Using WebLogic JDBC" from weblogic8.1 documentation, I read:
    "... Your application "borrows" a connection from the pool, uses it, then returns it to the pool by closing it...."
    What do you mean by returning the connection by closing it? Tbe server will either return the connection to the pool or close it...When application code does typical jdbc code, it obtains
    a connection via a WebLogic DataSource, which reserves an
    unused pooled connection and passes it (transparently wrapped)
    to the application. The application uses it, and then closes
    it. WebLogic intercepts the close() call via the wrapper, and
    puts the DBMS connection back into the WebLogic pool.
    The reason why I as an Oracle programmer ask this is because every session (=connection) in Oracle has its own dedicated, private memory for things like global PL/SQL variables. Now I want to figure out if you have to be careful in programming your databases when one Oracle session (=connection) is serving many WebLogic requests.
    It is serving many requests, but always serially. Do note, however, that we also transparently cache/pool prepared and callable statements with the connection, so repeat uses of the connection will be able to get already-made statements when they call prepareStatement() and prepareCall(). These long-lived statements will each require a DBMS-side cursor.
    Thankful for any ideas or practical experience...
    Message was edited by:
    mk637
    Joe
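    A minimal sketch of the borrow/use/close cycle described above (the JNDI name and query are placeholders). The close() at the end of the try block does not end the physical Oracle session; the WebLogic wrapper intercepts it and returns the connection to the pool, along with whatever PL/SQL package state and cached prepared statements that session has accumulated:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PooledQueryExample {
        public void query() throws Exception {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/MyPool");          // placeholder JNDI name

            try (Connection con = ds.getConnection();                        // "borrow" a pooled connection
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT COUNT(*) FROM dual");                           // may be served from the statement cache
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                // ... use the result ...
            }   // the wrapped close() puts the same Oracle session back into the pool
        }
    }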

  • When using the Database Connectivity Toolset, reads and writes with long binary fields are incompatible.

    I am trying to write LabVIEW Variants to long binary fields in a .mdb file using the Database Connectivity Toolset. I get errors when trying to convert the field back to a variant after reading it back from the database.
    I next tried flattening the variant before writing it and ultimately wound up doing the following experiments:
    1) If I use DB Tools Insert Data to write an ordinary string and read it back using a DB Tools Select Data, the string is converted from ASCII to Unicode.
    2) If I use DB Tools Create Parameterized Query to do an INSERT INTO or an UPDATE operation, specifying that the data is BINARY, then read it back using a DB Tools Select Data,
    the length of the string is prepended to the string itself as a big-endian four-byte integer.
    I can't think of any way to do a parameterized read, although the mechanism exists to return data via parameters.
    Presuming that this same problem affects Variants when they are written to the database and read back, I could see why I get an error. At least with flattened strings I have the option of discarding the length bytes from the beginning of the string.
    Am I missing something here?

    David,
    You've missed the point. When a data item is flattened to a string, the first four bytes of the string are expected to be the total length of the string in big-endian binary format. What is happening here is that preceding this four-byte length code is another copy of the same four bytes. If an ordinary string, "abcdefg", is used in place of the flattened data item, it will come back as <00><00><00><07>abcdefg. Here I've used <nn> to represent a byte in hexadecimal notation. This problem has nothing to do with flattening and unflattening data items. It has only to do with the data channel consisting of writing to and reading from the database.
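    For illustration only (in Java rather than LabVIEW), a minimal sketch of skipping the extra big-endian length prefix described above:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;

    public class StripLengthPrefix {
        // The bytes read back from the long binary field start with an extra
        // big-endian 4-byte length (e.g. 00 00 00 07 for "abcdefg"), followed by
        // the original data.
        static byte[] stripPrefix(byte[] fromDatabase) throws IOException {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(fromDatabase));
            int length = in.readInt();          // readInt() reads big-endian
            byte[] payload = new byte[length];
            in.readFully(payload);              // the original data, without the prefix
            return payload;
        }
    }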
    I am attaching three files that you can use to demonstrate the problem. The VI file contains an explanation of the problem and instructions for installing and operating the demonstration.
    Ron Martin
    Attachments:
    TestLongBinaryFields.vi ‏132 KB
    Sample.UDL ‏1 KB
    Sample.mdb ‏120 KB

  • Redirect JDBC URL when using dynamic JDBC credentials so it is not hardcoded

    I have taken over an application that uses row-level security and ADF (using
    dynamic JDBC Credentials). I have been able to set the internal_connection to
    a JDBCDatasource, but cannot set the Connection Type in the Oracle Business
    Component Configuration to a JDBCDatasource. When I do, I receive errors that
    tables are not found. When I set the value back to a JDBC URL, everything
    works fine again.
    I am looking for a solution where the userid and password are not hardcoded in the BC4J.xcfg, or a way to redirect this information, as we change our system passwords every ninety days. Otherwise, I will have to redeploy the application every ninety days.
    I did not create this application, but I am sure that you could simply follow
    the "How to Support Dynamic JDBC Credentials" article. From that point, you
    will probably be where I am, where I have the internal_connection set to a
    JDBCDataSource and working properly, but cannot set the Connection Type to
    anything where the userid and password will not be hardcoded or cause failure.
    I wanted to let you know that I have found the updated How to Support Dynamic JDBC Credentials (http://www.oracle.com/technology/products/jdev/howtos/bc4j/howto_dynamic_jdbc.html) and was going to run through the "Advanced: Supporting Dynamic JDBC URLs" section, but once I was done keying in env.remove(ConnectionStrategy.DB_CONNECT_STRING_PROPERTY); I received a deprecation message on DB_CONNECT_STRING_PROPERTY. (Note: I am coding in JDeveloper 10.1.3, so it may have been deprecated as of that release, but the ADF libraries for JDeveloper 10.1.3 are on our Oracle 10gAS 10.1.2 server.)
    I thought maybe this would resolve my issue, but I can't be sure as the
    deprecation message leads me to believe that this solution may not be viable in
    the future.
    UPDATE
    =======
    The article you are referencing is definitely an older version.
    There is a newer article for 10g at:
    http://www.oracle.com/technology/products/jdev/howtos/10g/dynamicjdbchowto.html
    Please see if that helps.
    I have already reviewed this article.
    In fact, I have reviewed many versions of this document. I have not seen one
    created yet for 10.1.3 though (especially without JSF as our 10.1.2 AS server
    will not support it). I need to find an example or documentation that shows
    how we can keep from having the JDBC URL stored in the BC4J.xcfg or a way to
    use dynamic JDBC credentials with a JDBCDataSource. We do not want to store the userid and password in the application; rather, we would like to set up something that can be configured from the application server.
    I think we need to use the dynamic JDBC credentials because we are using the
    row-level security, where we setup a database context for the user and only
    allow certain records of a database table to be returned to the browser based
    on that context.
    Might there be a way to still use the JDBCDataSource?

    I understand that the user provides the userid and password and that these values are set up using the Configuration class.
    However, when I am to deploy the ADF Business Module with my application, I have to specify either a JDBC URL or a JDBC DataSource in the Oracle Business Component Configuration.
    When I use JDBC DataSource, the code does not work properly, almost like the user's credentials are not used for the connection (I get errors like table or view does not exist).
    When I use the JDBC URL, the bc4j.xcfg stores a reference in the JDBCName attribute to a ConnectionDefinition in the same file. It is in this tag of the bc4j.xcfg where the userid, sid, and password (encrypted) is stored and used when retrieving the initial context of the ADF business components.
    It is these values that I want to have stored elsewhere, so that the application does not have to be redeployed in order for the password (or SID, or other connection information) to be changed.
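    This is not the ADF ConnectionStrategy API, but a hypothetical sketch of the general idea being asked for: keep the JDBC URL and credentials in a file outside the deployed archive, located via a system property set on the application server, so that a password change does not require redeploying the application. The property name and file path are invented for illustration:

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ExternalCredentials {
        static Connection connect() throws Exception {
            Properties props = new Properties();
            // e.g. start the server JVM with -Ddb.config=/opt/app/conf/db.properties (hypothetical path)
            try (FileInputStream in = new FileInputStream(System.getProperty("db.config"))) {
                props.load(in);
            }
            return DriverManager.getConnection(
                    props.getProperty("jdbc.url"),
                    props.getProperty("jdbc.user"),
                    props.getProperty("jdbc.password"));
        }
    }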

  • Empty universe list when using a 3-tier connection

    Hello,
    I am having a problem in WebI/DeskI with the 3-tier connection mode.
    If I want to create a new report, the list of universes is empty. When modifying a report, the only available universes are those that are already used in one of its queries.
    When using a direct connection to the databases, the whole list of universes is shown.
    I didn't do the original installation, but I wasn't able to reproduce the problem when installing a new server.
    This problem occurred in SP2.7 and applying 6.3 didn't solve it.
    Does anyone have any leads?
    Best regards,

    I checked the error logs of the WebI Rich Client and I found this one:
    accessRepoProxy.cpp:176:void __thiscall repoProxyAccessImp::ShowError(const class bo_utf8string &,const class bo_utf8string &,int): TraceLog message 1
    2014/06/03 20:47:34.107|>>|E| | 4352|7712| |||||||||||||||Error! repoProxyAccessImp::getAllAvailUniverseList() : [repo_proxy 36] UniverseFacade::getAllLightUniverseList - (com.crystaldecisions.sdk.exception.SDKException$PropertyNotFound: La propriété portant l'ID SI_FILES n'existe pas dans l'objet) [repo_bridge - BridgeUniverseFacade::getAllLightUniverseList]
    <The property with ID SI_FILES doesn't exist in the object>

  • Error when using OCI8 driver

    Hi,
    We are facing a strange situation when using the OCI8 driver in one of our Java applications.
    An error (ORA-00972: identifier is too long) is raised when using INSERT/SELECT on some tables, but the same code works fine when the thin driver is used.
    We are told that this is a known error and that there are some workarounds for it.
    Can someone help ?
    Thanks

    Note that you probably want to post this in the JDBC forum. Your question appears to relate to the JDBC driver that uses the OCI API, not to working with the OCI API yourself.
    Justin
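    For reference, the two connect-string forms being compared in this thread (the TNS alias, host, port, SID and credentials are placeholders); the same code can switch between the OCI and thin drivers just by changing the URL:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DriverComparison {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");

            // OCI8 driver: goes through the installed Oracle client libraries.
            Connection oci = DriverManager.getConnection(
                    "jdbc:oracle:oci8:@MYDB", "scott", "tiger");
            oci.close();

            // Thin driver: pure Java, no Oracle client installation needed.
            Connection thin = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:MYDB", "scott", "tiger");
            thin.close();
        }
    }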

  • Adworker using wrong JDBC connect string during upgrade to 12.1.3

    Our env
    APPS - 12.1.2 - running on SLES 11
    DB 11.2.0.3 - running on IBM System z
    We cloned PROD to a new host (instance was previously on DB host - debsdb01 and app host debsap03)
    The clone was made to debsdb04 - and app was reconfigured on debsap03
    I got the app to start up and run before taking it down to apply the 12.1.3 patch.
    However, in the process of the 12.1.3 patch, I see the following entry in the ADworker log files.
    JDBC connect string from AD_APPS_JDBC_URL is
    (DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=hostname1.companyname.com)(PORT=1549)))(CONNECT_DATA=(SID=DEV3)))
    whereas it should be
    JDBC connect string from AD_APPS_JDBC_URL is
    (DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=hostname4.companyname.com)(PORT=1535)))(CONNECT_DATA=(SID=DEV3)))
    If I do an echo $AD_APPS_JDBC_URL on debsap03, it returns
    JDBC connect string from AD_APPS_JDBC_URL is
    (DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=hostname4.companyname.com)(PORT=1535)))(CONNECT_DATA=(SID=DEV3)))
    Where are the ADWORKERS getting the old value of the DB host name?
    Any help would be appreciated.
    Thanks
    Edited by: 864641 on Sep 16, 2012 7:32 AM

    I believe I figured out what the issue was.
    I was (am) running the patch session from a VNC session that was established during the previous version of this instance - so the previous connect string was being used.
    Once I killed the old VNC session and established a new one, I was able to retrieve the correct value for $AD_APPS_JDBC_URL from the command prompt.
    Yes - the value of the $AD_APPS_JDBC_URL was showing the old value in the old VNC session.
    Thanks for all of your input, especially on a Saturday, when I thought I would have to wait until Monday to get this figured out.
