(statement cache size = 0) == clear statement cache ?
Hi
I ran this test with WLS 8.1. I set the cache size to 5, and I call a servlet
which invokes a stored procedure to get the statement cached. I then recompile
the proc, set the statement cache size to 0 and re-execute the servlet.
The result is:
java.sql.SQLException: ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "CCDB_APPS.MSSG_PROCS" has been invalidated
ORA-04065: not executed, altered or dropped package "CCDB_APPS.MSSG_PROCS"
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
which seems to suggest that even though the cache size has been set to 0, previously
cached statements are not cleared.
Rgs
Erik
Galen Boyer wrote:
On Fri, 05 Dec 2003, [email protected] wrote:
Galen Boyer wrote:
On 14 Nov 2003, [email protected] wrote:
Hi
I ran this test with WLS 8.1. I set the cache size to 5,
and I call a servlet which invokes a stored procedure to get
the statement cached. I then recompile the proc, set the
statement cache size to 0 and re-execute the servlet.
The result is:
java.sql.SQLException: ORA-04068: existing state of packages
has been discarded ORA-04061: existing state of package
"CCDB_APPS.MSSG_PROCS" has been invalidated
ORA-04065: not executed, altered or dropped package
"CCDB_APPS.MSSG_PROCS" ORA-06508: PL/SQL: could not find
program unit being called ORA-06512: at line 1
which seems to suggest even though the cache size has been set to
0, previously cached statements are not cleared.

This is actually an Oracle message. Do the following test.
Open two sqlplus sessions. In one, execute the package.
Then, in the other, drop and recreate that package. Then, go
to the previous window and execute that same package. You
will get that error. Now, in that same sqlplus session,
execute that same line one more time and it goes through. In
short, in your above test, execute your servlet twice and I
bet on the second execution you have no issue.

Hi. We did some testing offline, and verified that even a
standalone java program: 1 - making and executing a prepared
statement (calling the procedure), 2 - waiting while the
procedure gets recompiled, 3 - re-executing the prepared
statement gets the exception, BUT ALSO, 4 - closing the
statement after the failure, and making a new identical
statement, and executing it will also get the exception! Joe

I just had the chance to test this within weblogic and not just
sqlplus.

Note, I wasn't using SQL*Plus; I wrote a standalone program
using Oracle's driver...
MY SCENARIO:
I had one connection only in my pool. I executed a package.
Then, went into the database and recompiled that package. Next
execution from app found this error. I then subsequently
executed the same package from the app and it was successful.

And this was with the cache turned off, correct?
What the application needs to do is catch that error and within
the same connection, resubmit the execution request. All
connections within the pool will get invalidated for that
package's execution.

Have you tried this? Did you try to re-use the statement you had,
or did you make a new one?
Maybe Weblogic could understand this and behave this way for
Oracle connections?

It's not likely that we will be intercepting all exceptions
coming from a DBMS driver to find out whether it's a particular
failure, and then know that we can/must clear the statement cache.
Note also that even if we did, as I described, the test program I
ran did try to make a new statement to replace the one that
failed, and the new statement also failed.
In your case, you don't even have a cache. Would you verify
in your code, what sort of inline retry works for you?
Joe
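Joe's question about an inline retry can be sketched as follows. This is a hypothetical helper, not WebLogic code: the ORA-04068 message check and the retry-once policy are assumptions drawn from this thread, and the main method stubs out the database call instead of using a real connection.

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

// Hypothetical sketch of the inline-retry pattern discussed above.
// ORA-04068 invalidates session package state once after a recompile;
// the same call normally succeeds on the next attempt, so we retry
// exactly once on that specific error.
public class RecompileRetry {
    public static <T> T callWithRetry(Callable<T> call) throws Exception {
        try {
            return call.call();
        } catch (SQLException e) {
            String msg = e.getMessage();
            if (msg != null && msg.contains("ORA-04068")) {
                return call.call();  // one retry on the same connection
            }
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        // Stub standing in for a CallableStatement execution: it fails once
        // with ORA-04068 and then succeeds, mimicking the behavior observed
        // in sqlplus and in the standalone test.
        final int[] attempts = {0};
        String result = callWithRetry(() -> {
            if (attempts[0]++ == 0) {
                throw new SQLException(
                    "ORA-04068: existing state of packages has been discarded");
            }
            return "ok";
        });
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

Note that, as Joe observed, a retry with a freshly prepared statement also failed in his standalone test, so whether retrying the same statement or a new one works may depend on the driver version; treat this as a starting point to verify, not a guaranteed fix.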
Similar Messages
-
JBoss getting SQLException: Closed Statement with prepared-statement-cache-size
My first post in this forum, hope to get a quick resolution :)
I am using Jboss 4.0.0 on Oracle 9.2.0.4.0
In order to improve the app performance, I had specified prepared-statement-cache-size as 50 as follows:
<datasources>
<local-tx-datasource>
<jndi-name>jdbc/sct</jndi-name> <connection-url>jdbc:oracle:thin:@confidential:1560:sct1</connection-url>
<user-name>Confidential</user-name>
<password>Confidential</password>
<min-pool-size>10</min-pool-size>
<max-pool-size>120</max-pool-size> <prepared-statement-cache-size>50</prepared-statement-cache-size>
<exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
<idle-timeout-minutes>5</idle-timeout-minutes>
<track-statements>true</track-statements>
<new-connection-sql>select sysdate from dual</new-connection-sql>
<check-valid-connection-sql>select sysdate from dual</check-valid-connection-sql>
</local-tx-datasource>
</datasources>
After doing this, I start getting the following exception:
java.sql.SQLException: Closed Statement
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:222)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:285)
at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:5681)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:409)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:366)
at org.jboss.resource.adapter.jdbc.CachedPreparedStatement.executeQuery(CachedPreparedStatement.java:57)
at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:296)
at com.ge.sct.SiteText.getSiteTextFromDB(SiteText.java:292)
Thanks in Advance
Bhavin

Hello,
I am also facing the same error. Somewhere just now I read:
We were getting this error on JBoss / Oracle. The fix was setting the following to 0 in oracle-ds.xml:
<prepared-statement-cache-size>0</prepared-statement-cache-size>
Ref: http://www.jpox.org/servlet/forum/viewthread?thread=1108
Maybe you can try this. I am also still looking for a solution; I will try the above and let you know if it works.
Regards,
Rajesh -
Why does performance decrease as cache size increases?
Hi,
We have some very serious performance problems with our
database. I have been trying to help by tuning the cache size,
but the results are the opposite of what I expect.
To create new databases with my data set, it takes about
8200 seconds with a 32 Meg cache. Performance gets worse
as the cache size increases, even though the cache hit rate
improves!
I'd appreciate any insight as to why this is happening.
32 Meg does not seem like such a large cache that it would
strain some system limitation.
Here are some stats from db_stat -m
Specified a 128 Meg cache size - test took 16076 seconds
160MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
160MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
34M Requested pages found in the cache (93%)
2405253 Requested pages not found in the cache
36084 Pages created in the cache
2400631 Pages read into the cache
9056561 Pages written from the cache to the backing file
2394135 Clean pages forced from the cache
2461 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
40048 Current total page count
40048 Current clean page count
0 Current dirty page count
16381 Number of hash buckets used for page location
39M Total number of times hash chains searched for a page (39021639)
11 The longest hash chain searched for a page
85M Total number of hash chain entries checked for page (85570570)
Specified a 64 Meg cache size - test took 10694 seconds
80MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
80MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
31M Requested pages found in the cache (83%)
6070891 Requested pages not found in the cache
36104 Pages created in the cache
6066249 Pages read into the cache
9063432 Pages written from the cache to the backing file
5963647 Clean pages forced from the cache
118611 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
20024 Current total page count
20024 Current clean page count
0 Current dirty page count
8191 Number of hash buckets used for page location
42M Total number of times hash chains searched for a page (42687277)
12 The longest hash chain searched for a page
98M Total number of hash chain entries checked for page (98696325)
Specified a 32 Meg cache size - test took 8231 seconds
40MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
40MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
26M Requested pages found in the cache (70%)
10M Requested pages not found in the cache (10812846)
35981 Pages created in the cache
10M Pages read into the cache (10808327)
9200273 Pages written from the cache to the backing file
9335574 Clean pages forced from the cache
1498651 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
10012 Current total page count
10012 Current clean page count
0 Current dirty page count
4099 Number of hash buckets used for page location
47M Total number of times hash chains searched for a page (47429232)
13 The longest hash chain searched for a page
118M Total number of hash chain entries checked for page (118218066)
vmstat says that a few minutes into the test, the box is
spending 80-90% of its time in iowait. That worsens as
the test continues.
System and test info follows
We have 10 databases (in 10 files) sharing a database
environment. We are using a hash table since we expect
data accesses to be pretty much random.
We are using the default cache type: a memory mapped file.
Using DB_PRIVATE did not improve performance.
The database environment created with these flags:
DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL
The databases are opened with only the DB_CREATE flag.
There is only one process accessing the db. In my tests,
only one thread accesses the db, doing only writes.
We do not use transactions.
My data set is about 550 Meg of plain ASCII text data.
13 million inserts and 2 million deletes. Key size is
32 bytes, data size is 4 bytes.
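As a sanity check on these numbers, a rough sizing calculation (my own arithmetic, assuming 4KB pages and ignoring BDB per-page overhead and the deletes) shows that every cache size tried is far smaller than the randomly accessed working set:

```java
// Rough sizing of the data set described above: 13M records, 32-byte keys,
// 4-byte values, 4KB pages. BDB page overhead and the 2M deletes are
// ignored; this is only an order-of-magnitude check.
public class CacheSizing {
    public static void main(String[] args) {
        long records = 13_000_000L;
        long recordBytes = 32 + 4;                 // key + data per record
        long rawBytes = records * recordBytes;     // ~446 MiB of raw key/data
        long pageSize = 4096;
        long pagesTouched = rawBytes / pageSize;   // lower bound on distinct pages

        System.out.println("raw MiB:  " + rawBytes / (1024 * 1024));
        System.out.println("pages:    " + pagesTouched);
        // Largest cache tried (128 MB nominal) holds ~32k 4KB pages -- well
        // under the ~114k pages a hashed (random) access pattern will touch,
        // so the workload misses heavily and stays disk-bound at every size.
        System.out.println("pages in 128MB cache: " + 128L * 1024 * 1024 / pageSize);
    }
}
```

Since a hash access pattern has no locality to exploit, one plausible reading of the results is that once the working set exceeds the cache at every size, a larger BDB cache mostly competes with the OS page cache for the same 1 GB of RAM rather than saving I/O.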
BDB 4.6.21 on linux.
1 Gig of RAM
Filesystem = ext3 page size = 4K
The test system is not doing anything else while I am
testing.

Surprising result: I tried closing the db handles with DB_NOSYNC and performance
got worse. Using a 32 Meg cache, it took about twice as long to run my test:
15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
Here is some data from db_stat -m when using DB_NOSYNC:
40MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
40MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
26M Requested pages found in the cache (70%)
10M Requested pages not found in the cache (10811882)
44864 Pages created in the cache
10M Pages read into the cache (10798480)
7380761 Pages written from the cache to the backing file
3452500 Clean pages forced from the cache
7380761 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
10012 Current total page count
5001 Current clean page count
5011 Current dirty page count
4099 Number of hash buckets used for page location
47M Total number of times hash chains searched for a page (47428268)
13 The longest hash chain searched for a page
118M Total number of hash chain entries checked for page (118169805)
It looks like not flushing the cache regularly is forcing a lot more
dirty pages (and fewer clean pages) from the cache. Forcing a
dirty page out is slower than forcing a clean page out, of course.
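The db_stat numbers quoted above support this reading. A quick computation, with figures copied from the two 32 Meg runs (DB->close(0) at 8231 seconds vs DB->close(DB_NOSYNC) at 15800 seconds):

```java
// Comparing the two 32MB-cache runs quoted above: DB->close(0) vs
// DB->close(DB_NOSYNC). Figures are copied from the db_stat -m output.
public class DirtyEvictions {
    public static void main(String[] args) {
        long dirtyForcedSync = 1_498_651L;   // DB->close(0) run
        long dirtyForcedNoSync = 7_380_761L; // DB_NOSYNC run
        long extraDirty = dirtyForcedNoSync - dirtyForcedSync;

        // Every dirty eviction forces a write before the cache frame can be
        // reused, so ~5.9M extra forced writes on the eviction path plausibly
        // accounts for the roughly 2x slowdown (8231s -> 15800s).
        System.out.println("extra dirty evictions: " + extraDirty);
        System.out.printf("slowdown factor: %.2f%n", 15800.0 / 8231.0);
    }
}
```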
Is this result reasonable?
I suppose I could try to sync less often than I have been, but more often
than never to see if that makes any difference.
When I close or sync one db handle, I assume it flushes only that portion
of the dbenv's cache, not the entire cache, right? Is there an API I can
call that would sync the entire dbenv cache (besides closing the dbenv)?
Are there any other suggestions?
Thanks,
Eric -
Does buffer cache size matter during the imp process?
Hi,
sorry for a maybe naive question, but I can't imagine why Oracle needs a buffer cache (larger = better) during inserts only (imp process with no index creation).
As far as I know, the insert is done via the PGA area (direct insert).
Please clarify for me .
DB is 10.2.0.3 if that matters :).
Regards.
Greg -
Suggest buffer cache size check
Hi experts,
please suggest how large the buffer cache should be, and please tell me how to calculate this.
Note: on database running huge select with where clause.
>
SQL> show sga
Total System Global Area 536870912 bytes
Fixed Size 1220408 bytes
Variable Size 117440712 bytes
Database Buffers 411041792 bytes
Redo Buffers 7168000 bytes
>
>
SGA_ADVISOR
SQL> column c1 heading 'Cache Size (m)' format 999,999,999,999
SQL> column c2 heading 'Buffers' format 999,999,999
SQL> column c3 heading 'Estd Phys|Read Factor' format 999.90
SQL> column c4 heading 'Estd Phys| Reads' format 999,999,999,999
SQL>
SQL> select
  2    size_for_estimate c1,
  3    buffers_for_estimate c2,
  4    estd_physical_read_factor c3,
  5    estd_physical_reads c4
  6  from
  7    v$db_cache_advice
  8  where
  9    name = 'DEFAULT'
 10  and
 11    block_size = (SELECT value FROM V$PARAMETER
 12                  WHERE name = 'db_block_size')
 13  and
 14    advice_status = 'ON';
Estd Phys Estd Phys
Cache Size (m) Buffers Read Factor Reads
36 4,491 1.02 1,768,088,631
72 8,982 1.01 1,751,858,036
108 13,473 1.01 1,745,807,886
144 17,964 1.00 1,742,684,545
180 22,455 1.00 1,740,606,287
216 26,946 1.00 1,739,127,030
252 31,437 1.00 1,737,935,545
288 35,928 1.00 1,736,936,513
324 40,419 1.00 1,736,098,119
360 44,910 1.00 1,735,368,624
Estd Phys Estd Phys
Cache Size (m) Buffers Read Factor Reads
392 48,902 1.00 1,734,775,608
396 49,401 1.00 1,734,701,493
432 53,892 1.00 1,734,086,804
468 58,383 1.00 1,733,466,505
504 62,874 1.00 1,732,871,083
540 67,365 1.00 1,732,300,725
576 71,856 1.00 1,731,737,930
612 76,347 1.00 1,731,204,779
648 80,838 1.00 1,730,669,455
684 85,329 1.00 1,730,117,349
Estd Phys Estd Phys
Cache Size (m) Buffers Read Factor Reads
720 89,820 .98 1,703,583,925
21 rows selected.
Dictionary Cache Hit Ratio : 99.92% Value Acceptable.
Library Cache Hit Ratio : 98.22% Increase SHARED_POOL_SIZE parameter to bring value above 99%
DB Block Buffer Cache Hit Ratio : 60.53% Increase DB_BLOCK_BUFFERS parameter to bring value above 89%
Latch Hit Ratio : 99.72% Value acceptable.
Disk Sort Ratio : 0.00% Value Acceptable.
Rollback Segment Waits : 0.00% Value acceptable.
Dispatcher Workload : 0.00% Value acceptable.
>
Edited by: 928992 on Oct 18, 2012 2:31 PM
Edited by: 928992 on Oct 18, 2012 3:04 PM

I am displaying my test db's buffer cache size (11.2.0.1 on a Windows box):
SQL> show parameter db_cache_size;
NAME TYPE VALUE
db_cache_size big integer 0
SQL> select name, current_size, buffers, prev_size, prev_buffers from v$buffer_pool;
NAME CURRENT_SIZE BUFFERS PREV_SIZE PREV_BUFFERS
DEFAULT 640 78800 0 0
SQL> select name,bytes from v$sgainfo where name='Buffer Cache Size';
NAME BYTES
Buffer Cache Size *671088640*
SQL> show sga;
Total System Global Area 1603411968 bytes
Fixed Size 2176168 bytes
Variable Size 922749784 bytes
*Database Buffers 671088640 bytes*
Redo Buffers 7397376 bytes
SQL> select * from v$sga;
NAME VALUE
Fixed Size 2176168
Variable Size 922749784
*Database Buffers 671088640*
Redo Buffers 7397376
SQL> show parameter sga_target;
NAME TYPE VALUE
sga_target big integer 0
SQL>

Regards
Girish Sharma
Edited by: Girish Sharma on Oct 18, 2012 2:51 PM
Oracle and OS Info added. -
hi all,
can anyone please guide me on how to clear the cache from the application when the logout tab is clicked.
thanks
regards
vally.s

You keep posting the same question over and over expecting different answers, or something; can you explain this?
clear the cache
how to clear the cache
Re: how to clear the session when logout button is click
Scott -
ADO Recordset Cache Size Breaking SQL Reads
I've got a C++ application that uses ADO/ODBC to talk to various databases using SQL.
In an attempt to optimize performance, we modified the Cache Size parameter on the Recordset object from the default Cache Size of 1 to a slightly larger value. This has worked well for SQL Server and Access databases to increase the performance of our SQL reads.
However, talking to our Oracle 8i (8.1.6 version) database, adjusting the Cache Size causes lost records or lost fields.
We've tried the same operation using a VB application and get similar results, so it's not a C++ only problem.
For the VB app, changing the cursor type from ForwardOnly to Dynamic does affect the problem, but neither works correctly. With a ForwardOnly cursor the string fields start coming back NULL after N+1 reads, where N is the Cache Size parameter. With a Dynamic cursor, whole records get dropped instead of just string fields: for example, with a Cache Size of 5, the 2nd, 3rd, 4th and 5th records are not returned.
In our C++ application, the symptom is always lost string fields, regardless of these two cursor types.
I've tried updating the driver from 8.01.06.00 to the latest 8.01.66.00 (8.1.6.6) but this didn't help.
Is anybody familiar with this problem? know any workarounds?
Thanks
[email protected] -
Clearing my cache,why do I need to do this & consequences
Hi, I am new to iPad, I have an ipad4. I am not great on techy stuff.
The Admin of a site (one of my Spanish expat forums) suggests I clear my cache as his site does not show as it should on my iPad; it shows fine on his. I don't have this problem with other sites so I'm loath to do what he says. Won't clearing my cache lose me everything on iPad? I need advice please.

On iPad you can delete history, cookies and data. Tap Settings > Safari. Tap to clear cookies, history, and data. This is the same as clearing the cache.
No, clearing the cache does not delete everything on your iPad.
Keep in mind, not all websites render the same on an iOS device such as your iPad as they will on a Mac. Two different operating systems. Mac OS X vs iOS. -
java.sql.SQLException: Statement cache size has not been set
All,
I am trying to create a lightweight SQL layer. It uses JDBC to connect to the database via WebLogic. When my application connects to the database using JDBC alone (outside of WebLogic) everything works fine. But when the application goes via WebLogic I can run Statement objects successfully, yet when I try to run PreparedStatements I get the following error:
java.sql.SQLException: Statement cache size has not been set
at weblogic.rjvm.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:108)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:138)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_812_WLStub.prepareStatement(Unknown Source)
I have checked the StatementCacheSize and it is 10. Is there any other setting that needs to be implemented for this to work? Has anybody seen this error before? Any help will be greatly appreciated.
Thanks.

Pooja Bamba wrote:
I just noticed that I did not copy the jdbc log fully earlier. Here is the log:
JDBC log stream started at Thu Jun 02 14:57:56 EDT 2005
DriverManager.initialize: jdbc.drivers = null
JDBC DriverManager initialized
registerDriver: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
Oracle Jdbc tracing is not available in a non-debug zip/jar file
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
registerDriver: driver[className=weblogic.jdbc.jts.Driver,weblogic.jdbc.jts.Driver@c0a150]
registerDriver: driver[className=weblogic.jdbc.pool.Driver,weblogic.jdbc.pool.Driver@17dff15]
SQLException: SQLState(null) vendor code(17095)
java.sql.SQLException: Statement cache size has not been set
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)

Hi. Ok. This is an Oracle driver bug/problem. Please show me the pool's definition
in the config.xml file. I'll bet you're defining the pool in an unusual way. Typically
we don't want any driver-level pooling to be involved. It is superfluous to the functionality
we provide, and can also conflict.
Joe
at oracle.jdbc.driver.OracleConnection.prepareCallWithKey(OracleConnection.java:1037)
at weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:353)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:144)
at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
SQLException: SQLState(null) vendor code(17095) -
Hi all,
while using some dynamic stored procedures I get the following error:
[BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
I'm using WL8.1 and Sql Server 2000.
The stored procedure contains two different queries where the table name is a stored
procedure parameter.
The first time it works great; after that I always have this error.
Reading the BEA docs I found:
There may be other issues related to caching prepared statements that are not
listed here. If you see errors in your system related to prepared statements,
you should set the prepared statement cache size to 0, which turns off prepared
statement caching, to test if the problem is caused by caching prepared statements.
If I set the prepared statement cache size to 0 everything works great, but that does
not seem like the best way.
Should we expect Bea to solve this problem?
Or whatever else solution?
such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
dynamically ?
thks in advance
Leonardo

Caching works well for DML, and that's what it is supposed to do. But it looks
like you are doing DDL, which means your tables might be getting
created/dropped/altered, which effectively invalidates the cache. So you
should try to turn the cache off.
"leonardo" <[email protected]> wrote in message
news:40b1bb75$1@mktnews1...
>
>
Hi all,
while using some dinamyc store procedures I get in the following error:
[BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
I'm using WL8.1 and Sql Server 2000.
Store procedure contains two different queries where table name is a store procedure's
parameter.
The first time it works great, after that I always have this error:
Reading bea doc's I found
There may be other issues related to caching prepared statements that are not
listed here. If you see errors in your system related to prepared statements,
you should set the prepared statement cache size to 0, which turns off prepared
statement caching, to test if the problem is caused by caching prepared statements.
If I set prepared statement cache size to 0 everything works great but that does
not seem the better way.
Should we expect Bea to solve this problem?
Or whatever else solution?
such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
dynamically ?
thks in advance
Leonardo -
In Tools, Options, there is a checkbox that states "ask me before clearing cache". It was checked by Firefox, not me. I do not want to be asked. How do I get that box unchecked?
You are using a very old version of Firefox that is no longer supported.
You should upgrade to the latest Firefox 3.6 version from [http://www.mozilla.com]. Let me know if this fixes the issue. -
Statement cache size - application changes without restart
Hi, I would like to ask how I can disable the statement cache without a restart.
If I set the statement cache size to "0" and push the "Apply changes" button, I get the
message "No restarts are necessary". Does this mean that the statement cache is
flushed? This is a production environment and I would like to make sure
about it.
Thank you in advance.
Vladislav Rames, WLS 10.3.4

Yes, setting the statement cache size is dynamic. A running server will close all cached
statements and do no more caching, as soon as you set the cache size to zero. -
hi,
can anyone tell me what the statement cache size is? I am using Oracle 11g Release 2 and I receive the following error frequently in my alert log.
ORA-03137: TTC protocol internal error : [12333] [10] [84] [101] [] [] [] []
I read an article which said that if the statement cache size value is non-zero, change it to zero. Is that the solution for this problem? Where can I see the value of the statement cache size?

Hi,
You can refer to the Oracle doc below:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jdbc_admin/jdbc_datasources.html
Statement Cache Type—The algorithm that determines which statements to store in the statement cache. See Statement Cache Algorithms.
Statement Cache Size—The number of statements to store in the cache for each connection. The default value is 10. See Statement Cache Size.
http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/jdbc30/StmtCacheSample/Readme.html
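To show what the LRU algorithm mentioned above does, here is a toy illustration in plain Java (my own sketch, not WebLogic's implementation; the SQL keys and cache size are made up). A LinkedHashMap in access order gives LRU eviction directly:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of an LRU statement cache: holds at most `capacity` entries
// per connection and evicts the least-recently-used statement when a new
// one is cached. LinkedHashMap in access-order mode gives this directly.
public class LruStatementCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruStatementCache(int capacity) {
        super(16, 0.75f, true);  // true = access order, i.e. LRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;  // evict once we exceed the cache size
    }

    public static void main(String[] args) {
        // Cache size 2, as if the statement cache size were set to 2.
        LruStatementCache<String, String> cache = new LruStatementCache<>(2);
        cache.put("SELECT 1", "stmt1");
        cache.put("SELECT 2", "stmt2");
        cache.get("SELECT 1");           // touch: SELECT 1 is now most recent
        cache.put("SELECT 3", "stmt3");  // evicts SELECT 2, the LRU entry
        System.out.println(cache.keySet());  // [SELECT 1, SELECT 3]
    }
}
```

In a real statement cache the evicted entry's underlying prepared statement would also be closed at the point of eviction.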
HTH
- Pavan Kumar N -
Unable to set Oracle driver statement cache size to 300
Hi Friends,
when I start the thread pool worker using threadpoolworker.cmd I get the following error:
The root LoggedException was: Unable to set Oracle driver statement cache size to 300
at com.splwg.shared.common.LoggedException.raised(LoggedException.java:65)
at com.splwg.base.support.sql.OracleFunctionReplacer.setOracleCacheSize(OracleFunctionReplacer.java:232)
at com.splwg.base.support.sql.OracleFunctionReplacer.initializeConnectionForNewSession(OracleFunctionReplacer.java:207)
at com.splwg.base.support.context.FrameworkSession.initialize(FrameworkSession.java:225)
at com.splwg.base.support.context.FrameworkSession.<init>(FrameworkSession.java:194)
at com.splwg.base.support.context.ApplicationContext.createSession(ApplicationContext.java:417)
at com.splwg.base.support.context.ApplicationContext.createThreadBoundSession(ApplicationContext.java:461)
at com.splwg.base.support.context.SessionExecutable.doInReadOnlySession(SessionExecutable.java:96)
at com.splwg.base.support.context.SessionExecutable.doInReadOnlySession(SessionExecutable.java:79)
at com.splwg.base.support.context.ApplicationContext.initialize(ApplicationContext.java:211)
at com.splwg.base.support.context.ContextFactory.buildContext(ContextFactory.java:114)
at com.splwg.base.support.context.ContextFactory.buildContext(ContextFactory.java:90)
at com.splwg.base.support.context.ContextFactory.createDefaultContext(ContextFactory.java:498)
at com.splwg.base.api.batch.StandaloneExecuter.setupContext(StandaloneExecuter.java:258)
at com.splwg.base.api.batch.StandaloneExecuter.run(StandaloneExecuter.java:129)
at com.splwg.base.api.batch.StandaloneExecuter.main(StandaloneExecuter.java:357)
at com.splwg.base.api.batch.AbstractStandaloneRunner.invokeStandaloneExecuter(AbstractStandaloneRunner.java:403)
at com.splwg.base.api.batch.AbstractStandaloneRunner.run(AbstractStandaloneRunner.java:134)
at com.splwg.base.api.batch.ThreadPoolWorker.run(ThreadPoolWorker.java:24)
at com.splwg.base.api.batch.ThreadPoolWorker.main(ThreadPoolWorker.java:17)
Can anyone tell me what exactly this error means?
shyam.
-
Seagate 1TB Solid State Hybrid Drive SATA 6Gbps 64MB Cache 2.5-Inch ST1000LM014
Hello again. Is the MacBook Pro 13-inch (early 2011) compatible with the Seagate 1TB Solid State Hybrid Drive SATA 6Gbps 64MB Cache 2.5-Inch ST1000LM014, and if so, how much faster would it be than a regular HDD?
If the drive is no thicker than 12.5 mm, it should fit and work in your model.
It will also work fine in place of the optical drive, should you decide to replace the main HDD with an SSD and put the hybrid drive in the space occupied by the optical drive. The optical drive bay provides 6.0 Gb/s support.