Cache size question...

howdy,
just a quick one.
i've noticed the new Firefox (1.5) has an option in Preferences to limit the cache size; however, there seems to be no such option in Safari.
:: is there a method to set the cache size in safari? ::
any info is greatly appreciated.
cheers...
ryan

ryan,
I am Terminally challenged, so I use CLIX, which has both a Caches Off and a Caches On command for Safari.
I do not see a command for setting a cache limit.
Caches Off: rm -fr ~/Library/Caches/Safari; ln -s /dev/null ~/Library/Caches/Safari
Caches On: rm ~/Library/Caches/Safari
;~)
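
For anyone who wants to try those CLIX entries by hand, here is the same pair of commands as they would be typed in Terminal, with comments on what each step does (just an annotated restating of the commands above, not a new method):

# "Caches Off": delete Safari's cache folder, then symlink it to /dev/null
# so that nothing written there is ever kept
rm -fr ~/Library/Caches/Safari
ln -s /dev/null ~/Library/Caches/Safari

# "Caches On": remove the symlink so Safari can recreate a real cache folder
rm ~/Library/Caches/Safari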

Similar Messages

  • Question about Berkeley DB "cache size"

    quote:
    Set the size of the shared memory buffer pool, that is, the size of the cache.
    The cache should be the size of the normal working data set of the application, with some small amount of additional memory for unusual situations. (Note: the working set is not the same as the number of pages accessed simultaneously, and is usually much larger.)
    The default cache size is 256KB, and may not be specified as less than 20KB. Any cache size less than 500MB is automatically increased by 25% to account for buffer pool overhead; cache sizes larger than 500MB are used as specified. The current maximum size of a single cache is 4GB. (All sizes are in powers-of-two, that is, 256KB is 2^18 not 256,000.)
    The database environment's cache size may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_cachesize", one or more whitespace characters, and the cache size specified in three parts: the gigabytes of cache, the additional bytes of cache, and the number of caches, also separated by whitespace characters. For example, "set_cachesize 2 524288000 3" would create a 2.5GB logical cache, split between three physical caches. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
    This method configures a database environment, including all threads of control accessing the database environment, not only the operations performed using a specified Environment handle.
    This method may not be called after the environment has been opened. If joining an existing database environment, any information specified to this method will be ignored.
    This method may be called at any time during the life of the application.
    Parameters:
    cacheSize The size of the shared memory buffer pool, that is, the size of the cache.
    The question:
    I have a host with 16GB of memory in total.
    I don't understand what this part of the document means.
    What is the maximum cache size that can be set?
    4GB? 16GB?
    Or cacheCount (4) × 4GB = 16GB?
    My Email: [email protected]

    What version of Berkeley DB are you using?
    I'm a little confused about what you are quoting. Most of your quote seems to be from DB_ENV->set_cachesize(), but set_cachesize does not have a parameter named cacheSize. The parameters for set_cachesize are gbytes, bytes and ncache.
    You use set_cachesize to specify the logical cache that you can optionally split into more than one physical region. The maximum size of the logical cache is 4GB and there is only one logical cache. You specify the total size of the logical cache with the gbytes and bytes parameters. If you set ncache to a value greater than 1, you split this logical cache into separate physical regions. So, for example, if you specify (gbytes=2, bytes=0, ncache=2) you will have a logical cache of 2GB that internally is split into 2 separate physical regions of 1GB each.
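    To make the gbytes/bytes/ncache split concrete, here is a minimal sketch using the Berkeley DB Java API (com.sleepycat.db); the environment home path is hypothetical, and the cache must be configured before the environment is opened:

    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import java.io.File;

    public class CacheSetup {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setAllowCreate(true);
            config.setInitializeCache(true);               // DB_INIT_MPOOL
            config.setCacheSize(2L * 1024 * 1024 * 1024);  // gbytes=2, bytes=0
            config.setCacheCount(2);                       // ncache=2: two 1GB regions
            Environment env = new Environment(new File("/path/to/env"), config);
            // ... open and use databases here ...
            env.close();
        }
    }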
    You can read more about the memory pool cache in the Reference Guide sections "Selecting a cache size" and "Configuring the memory pool".
    If you have other Berkeley DB questions that are not specific to replication, you should direct them to the general Berkeley DB forum where you will have the benefit of a wider set of Berkeley DB experts:
    Berkeley DB
    Paula Bingham
    Oracle

  • Jinitiator Question - JAR Cache Size Location

    Using Jinitiator 1.3.1.22 - on Windows 2000 Pro
    Does anyone know where this setting is stored on the PC when set in the Java Console? I have looked at the properties121222 file in the .jinit folder under the user profile in Docs and Settings, and it isn't in there!

    Thanks Francois... I know that bit. I want to know where the value for the default is actually derived from for my installation. According to the Forms Services Deployment Guide, 'The default cache size for Oracle JInitiator is 20000000. This is set for you when you install Oracle JInitiator.'
    If you override the default in the Jini Configuration Panel, its value will appear in a text file in the user's .jinit folder. Where is it held before that?!
    When Jini is installed at my workplace, the default for the JAR cache is 50MB. No one knows why, or how it differs from what the documentation states! This is what I am trying to get to the bottom of!

  • Trying to change the cache size of FF3.6 from 75MB to a larger size, it only applies on a per-session basis. I checked about:config and the changes have applied, but when I restart FF it has reset itself to 75 :(

    as per the question, I tried to up the cache from 75MB to 300MB but it resets after I restart Firefox; I have tried various cache sizes but to no avail.
    -=EDIT=-
    it must be something to do with the profile, as when I set up a new profile in the manager, the cache size problem no longer appears. But now, how to repair my profile?

    OK, nothing in that text file helped, but the original file it was based on pointed me in the direction that it might be an extension. The only extensions I have are NoScript and FasterFox Lite...
    I have now traced the fault to FasterFox. If you are not familiar with FasterFox, it speeds up internet connections in Firefox. Several of the options are presets, but when I selected Custom it gave me the option of a cache setting, which was set to 75MB.
    I have now changed that cache setting in FasterFox to 300MB and it is now persistent in Firefox on restart.
    Hopefully this information will be helpful to other people in the future who suffer the same problem.
    Thanks for your help TonyE, it's greatly appreciated
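    For later readers: the setting FasterFox was overriding corresponds to the about:config preference browser.cache.disk.capacity, whose value is in KB. As a sketch, pinning a 300MB cache from a profile's user.js would look like the line below (assuming no extension is managing the same preference):

    user_pref("browser.cache.disk.capacity", 307200); // 300MB, expressed in KB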

  • Can't increase cache size; though I set it to 500MB it shows 27.65MB, what should I do??

    I am unable to increase the cache size. Whatever I put in the setting, it says the max cache limit is 27.65MB. I have 3GB RAM and a 200GB hard disk.

    Mark the question as solved. Please!
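    In case a later reader hits the same wall: an odd cap like 27.65MB usually comes from Firefox's automatic "smart sizing" of the disk cache. A hedged sketch of the about:config changes typically involved (the preference names are real Firefox preferences; that they cure this exact cap is an assumption):

    browser.cache.disk.smart_size.enabled = false    (stop Firefox from auto-sizing the cache)
    browser.cache.disk.capacity = 512000             (desired size in KB; 512000 KB = 500MB)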

  • BerkeleyDB cache size and Solaris

    I am having problems trying to scale up an application that uses BerkeleyDB 4.4.20 on Sun SPARC servers running Solaris 8 and 9.
    The application has 11 primary databases and 7 secondary databases.
    In different instances of the application, the size of the largest primary database ranges only from 2MB to 10MB, but those will grow rapidly over the course of the semester.
    The servers have 4-8 GB of RAM and 12-20 GBytes of swap.
    Succinctly, when the primary databases are small, the application runs as expected.
    But as the primary databases grow, the following, counterintuitive phenomenon
    occurs. With modest cache sizes, the application starts up, but throws
    std::exceptions of "not enough space" when it attempts to delete records
    via a cursor. The application also crashes randomly returning
    RUN_RECOVERY. But when the cache size is increased, the application
    will not even start up; instead, it fails and throws std::exceptions which say there
    is insufficient space to open the primary databases.
    Here is some data from a server that has 4GB RAM with 2.8 GBytes free
    (according to "top") when the data was collected:
    DB_CONFIG (set_cachesize)    db_stat -m (Pool / Ind. Cache)    Result
    0 67108864 1                 80 MB / 8 KB                      Starts, but crashes and can't delete by cursor because of insufficient space
    0 134217728 1                160 MB / 8 KB                     Same as the case above
    0 268435456 1                320 MB / 8 KB                     Doesn't start; says there is not enough space to open a primary database
    0 536870912 1                512 MB / 16 KB                    Doesn't start; same error, though it mentions a different primary database than before
    1 073741884 1                1GB 70MB / 36 KB                  Doesn't start; same error, again naming a different primary database
    2 147483648 1                2GB 140MB / 672 KB                Doesn't start; same error, again naming a different primary database
    I should also mention that the application is written in Perl and uses
    the Sleepycat::Db Perl module to interface with the BerkeleyDB C++ API.
    Any help on how to interpret this data and, if the problem is the
    interface with Solaris, how to tweak that, will be greatly appreciated.
    Sincerely,
    Bill Wheeler, Department of Mathematics, Indiana University, Bloomington.

    Having found answers to my questions, I think I should document them here.
    1. On the matter of the error message "not enough space": this message apparently originates from Solaris. When a process (e.g., an Apache child) requests additional (virtual) memory (via either brk or mmap) such that the total (virtual) memory allocated to the process would exceed the system limit (set via the setrlimit(2) interface), the Solaris kernel rejects the request and returns the error ENOMEM. Somewhat cryptically, the text for this error is "not enough space" (in contrast, for instance, to "not enough virtual memory").
    Apparently, when the BerkeleyDB cache size is set too large, a process
    (e.g., an Apache child) that attempts to open the environment and databases
    may request a total memory allocation that exceeds the system limit.
    Then Solaris will reject the request and return the ENOMEM error.
    Within Solaris, the only solutions are apparently
    (i) to decrease the cache size, or
    (ii) to increase the system limit via setrlimit (e.g., through the shell's ulimit).
    2. On the matter of the DB_RUNRECOVERY errors, the cause appears
    to have been the use of the DB_TXN_NOWAIT flag in combination with
    code that was mishandling some of the resulting, complex situations.
    Sincerely,
    Bill Wheeler

  • Does buffer cache size matter during an imp process?

    Hi,
    sorry for a maybe naive question, but I can't imagine why Oracle needs the buffer cache (larger = better) during inserts only (an imp process with no index creation).
    As far as I know, the insert is done via the PGA area (direct insert).
    Please clarify for me.
    The DB is 10.2.0.3 if that matters :).
    Regards.
    Greg

    Surprising result: I tried closing the db handles with DB_NOSYNC and performance
    got worse. Using a 32 Meg cache, it took about twice as long to run my test:
    15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
    Here is some data from db_stat -m when using DB_NOSYNC:
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10811882)
    44864 Pages created in the cache
    10M Pages read into the cache (10798480)
    7380761 Pages written from the cache to the backing file
    3452500 Clean pages forced from the cache
    7380761 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    5001 Current clean page count
    5011 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47428268)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118169805)
    It looks like not flushing the cache regularly is forcing a lot more
    dirty pages (and fewer clean pages) from the cache. Forcing a
    dirty page out is slower than forcing a clean page out, of course.
    Is this result reasonable?
    I suppose I could try to sync less often than I have been, but more often
    than never to see if that makes any difference.
    When I close or sync one db handle, I assume it flushes only that portion
    of the dbenv's cache, not the entire cache, right? Is there an API I can
    call that would sync the entire dbenv cache (besides closing the dbenv)?
    Are there any other suggestions?
    Thanks,
    Eric
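    For reference, the two close modes being compared map onto the Berkeley DB Java API as sketched below (the original test presumably used the C or C++ API; the helper names here are hypothetical):

    import com.sleepycat.db.Database;

    public class CloseModes {
        // db is an open com.sleepycat.db.Database handle
        static void closeWithFlush(Database db) throws Exception {
            db.close(false);  // like DB->close(0): flush dirty pages, then close
        }
        static void closeNoSync(Database db) throws Exception {
            db.close(true);   // like DB->close(DB_NOSYNC): leave dirty pages behind,
                              // pushing the flushing cost onto later cache eviction
        }
        static void periodicSync(Database db) throws Exception {
            db.sync();        // flush this database's dirty pages without closing
        }
    }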

  • Problems setting cache size

    Hi,
    I hope I'm in the right category for this question:
    I'm reading in a CSV table and putting it in one primary database (DB_QUEUE) and several secondary databases (DB_BTREE). I'm using Berkeley DB 4.7 with C++. For the primary database I leave the standard cache size; for the secondary databases I want to use 16MB:
    unsigned long cache_byte = 1024UL * 1024 * 16;   // 16MB
    sec[a]->set_cachesize(0, cache_byte, 1);         // gbytes=0, bytes=16MB, ncache=1
    (sec[*] are the secondary databases, and the cache size is set before opening the databases.)
    The problem is that when I run the program it allocates more and more memory, but it should just use a little more than a × 16MB.
    Can somebody help me?
    Can somebody help me ?

    Welcome to the forums!
    You might get a better/faster response in the Berkeley DB Forum
    Berkeley DB
    HTH
    Srini

  • Time measuring and cache size

    Hi,
    This has been posted in C forum, but not much activity there.
    I have two questions.
    1. Is it possible to obtain the level 1 and 2 cache sizes from within a C/C++ program? (You can do that with fpversion on the command line.)
    2. If I have a multi-threaded program, then I want to meassure the time
    taken from within. Now I use getrusage. However, it includes the time for all child threads. How do I get the time for the main thread. The command line tool times seem to be able to do that. I do not want wall
    clock time but CPU time. This is possible on SGI.
    Thanks in advance.
    Erling

    quote:
    1. Is it possible to obtain the level 1 and 2 cache sizes from within a C/C++ program? You can do that with fpversion on the command line.
    Yes!

  • BDB cache size settings

    Hi,
    I have one question about bdb cache.
    If I set je.maxMemory=1073741824 in je.properties to limit bdb cache to 1G, is it possible that the real size of bdb cache is larger than 1G in a long period?
    Thanks,
    Yu

    To add to what Linda said, you are correct that it is not a good idea to make the JE cache size too large relative to the heap size. Some extra room is needed for three reasons:
    1) JE's calculations of memory size are approximate.
    2) Your application may use variable amounts of memory, in addition to JE.
    3) Java garbage collection will not perform well in some circumstances, if there is not enough free space.
    The last reason is a large variable. To find out how this impacts performance, and how much extra room you need, you'll really have to go through a Java GC tuning exercise using a system that is similar to your production machine, and do a lot of testing.
    I would certainly never use less than 10 or 20% free space, and with large heaps where GC is active, you will probably need more free space than that.
    --mark
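    As an illustration of the setting under discussion, je.maxMemory can also be set programmatically; a minimal sketch with the JE API (the 1GB figure is from the question, the environment path is hypothetical):

    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import java.io.File;

    public class JeCache {
        public static void main(String[] args) {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setAllowCreate(true);
            cfg.setCacheSize(1073741824L);  // same effect as je.maxMemory=1073741824
            // alternatively: cfg.setConfigParam(EnvironmentConfig.MAX_MEMORY, "1073741824");
            Environment env = new Environment(new File("/path/to/env"), cfg);
            // ... use the environment, leaving the heap headroom described above ...
            env.close();
        }
    }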

  • Page & cache size performance tuneup

    Hi
    I am doing a performance evaluation of BDB. Please help me find answers to the queries below.
    1. Page size: do I need to choose the page size based on my XML document size? Is there any relation (formula) between page size and XML document size to get optimum memory usage?
    2. Cache size: does the cache size need to be equal to or larger than the document size to minimize query response time? Could you please suggest an optimum cache size for a 1MB XML document?
    3. I have started with BDB XML version 2.3.10, but I read in this forum that there are performance improvements in newer versions. What version should I use for my evaluation? Is the latest (4.6.21) the most stable?
    4. Are there any other parameters (other than page and cache size) I need to tune to get optimum memory usage and minimal CPU utilization?
    Is there any reference document where I can get more details on BDB performance?
    Thanks,
    Santhosh

    Hi Santhosh,
    It's hard to give solid suggestions without knowing more about your application, what you are measuring, and what your performance requirements are. What language are you implementing in?
    Is query response time most important, or document insertion or updates?
    I am going to request that you respond to this Performance Questionnaire and answer as many questions as you can at this time. Send the questionnaire to me at Ron dot Cohen at Oracle.
    http://forums.oracle.com/forums/ann.jspa?annID=426
    In addition to the information requested, you can see from the questionnaire that the utility db_stat -m is useful for looking at a number of things, including the effectiveness of your cache size.
    Have you taken any measurements yet? I would suggest going with the default page size but using a cache size larger than the default. I don't know how much real memory you have, but for a first measurement you could try a cache size of 100MB-500MB (or larger), depending on your workload and how much memory you have available. I am not recommending that as a final cache size, just giving you a number to start with.
    http://tinyurl.com/2mfn6f
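    As an aside, once you pick a starting number it can go in the environment's DB_CONFIG file using the same syntax quoted earlier in this thread; for example, a single 256MB cache region (the size is purely illustrative):

    set_cachesize 0 268435456 1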
    You will likely find a lot of improvements in performance can be obtained by your indexing strategy. This may be where you get the best results. You may want to spend some time reviewing that and the documentation on indexes:
    http://tinyurl.com/2522sc
    Also, take a look in the same document at the indexing sections.
    Berkeley DB XML 2.3 (Berkeley DB 4.5.20) should be fine to start (though you may have read on this forum about the speed improvements in Berkeley DB XML 2.4 which is currently in test mode).
    Please do respond to the survey, send it to me and we will try to help you further.
    Ron

  • Cache Size error

    We have a few users that occasionally receive the following:
    OLAP_error (1200601): Not enough memory for formula execution. Set MAXFORMULACACHESIZE configuration parameter to [2112]KB and try again.
    Our Essbase admin is suggesting that rather than increase MAXFORMULACACHESIZE, we reduce the maximum number of rows that are allowed to be returned. Thoughts on that?
    2 other questions:
    Are there any issues with increasing MAXFORMULACACHESIZE to a much larger number than what the error message recommends (let's say 9000KB for the sake of this discussion)? In the DBAG I think it says it will only use what is needed.
    Are there any issues with setting the maximum rows allowed to be returned to a very high number (such as 1 million rows, to reflect the max number of rows Excel can handle)?

    The answer to both of your questions is "No": there won't be any problem if you change the cache size, nor in increasing the row limit. But in practical conditions there will be no reports in any financial organization retrieving a million rows, so it is better to split the workbook for faster retrieval and better performance.
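    For reference, the parameter lives in essbase.cfg on the server, and Essbase must be restarted for essbase.cfg changes to take effect. A sketch of the simple global form, using the 9000KB figure discussed above (scoping the setting to a particular application/database is left out here):

    MAXFORMULACACHESIZE 9000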

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it out, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    Support doc is saying to change your config file so those settings can be made available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • In EJB3 entities, what is the equiv. of key-cache-size for PK generation?

    We have an oracle sequence which we use to generate primary keys. This sequence is set to increment by 5.
    e.g.:
    create sequence pk_sequence increment by 5;
    This is so WebLogic doesn't need to query the sequence on every entity bean creation; it only needs to query the sequence once every 5 creations.
    With CMP2 entity beans and automatic key generation, this was configured simply by having the following in weblogic-cmp-rdbms-jar.xml:
    <automatic-key-generation>
    <generator-type>Sequence</generator-type>
    <generator-name>pk_sequence</generator-name>
    <key-cache-size>5</key-cache-size>
    </automatic-key-generation>
    This works great: the IDs created are 10, 11, 12, 13, 14, 15, 16, etc., and WebLogic only needs to hit the sequence once per 5 IDs.
    However, we have been trying to find the equivalent with the EJB3-style JPA entities:
    We've tried
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "native(Sequence=pk_sequence, Increment=5, Allocate=5)")
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "pk_sequence", allocationSize = 5)
    But with both configurations, the autogenerated IDs are 10, 15, 20, 25, 30, etc. WebLogic seems to be getting a new value from the sequence every time.
    Am I missing anything?
    We are using WebLogic 10.3.
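    For comparison, the standard JPA way to pair that generator with an entity id is sketched below (the entity and field names are hypothetical; allocationSize is meant to match the sequence's INCREMENT BY 5):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.SequenceGenerator;

    @Entity
    public class Widget {
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SW_ENTITY_SEQUENCE")
        @SequenceGenerator(name = "SW_ENTITY_SEQUENCE",
                           sequenceName = "pk_sequence",
                           allocationSize = 5)  // should equal the sequence INCREMENT BY
        private Long id;

        public Long getId() { return id; }
    }

    With allocationSize equal to the sequence increment, a provider that implements the spec's allocation scheme can hand out all five intermediate ids per sequence fetch, which is the behavior key-cache-size gave under CMP2.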

    If you are having a problem, it is not clear what it is from what you have said. If you have suggestions for improving some shortcomings you see in Flash CC, then you should submit them to:
    Adobe - Wishlist & Bug Report
    http://www.adobe.com/cfusion/mmform/index.cfm?name=wishform

  • Java.sql.SQLException: Statement cache size has not been set

    All,
    I am trying to create a lightweight SQL layer. It uses JDBC to connect to the database via WebLogic. When my application connects to the database using JDBC alone (outside of WebLogic) everything works fine. But when the application goes via WebLogic, I am able to run Statement objects successfully, but when I try to run PreparedStatements I get the following error:
    java.sql.SQLException: Statement cache size has not been set
    at weblogic.rjvm.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:108)
    at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:138)
    at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_812_WLStub.prepareStatement(Unknown Source)
    I have checked the StatementCacheSize and it is 10. Is there any other setting that needs to be changed for this to work? Has anybody seen this error before? Any help will be greatly appreciated.
    Thanks.

    Pooja Bamba wrote:
    I just noticed that I did not copy the jdbc log fully earlier. Here is the log:
    JDBC log stream started at Thu Jun 02 14:57:56 EDT 2005
    DriverManager.initialize: jdbc.drivers = null
    JDBC DriverManager initialized
    registerDriver: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    Oracle Jdbc tracing is not avaliable in a non-debug zip/jar file
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    registerDriver: driver[className=weblogic.jdbc.jts.Driver,weblogic.jdbc.jts.Driver@c0a150]
    registerDriver: driver[className=weblogic.jdbc.pool.Driver,weblogic.jdbc.pool.Driver@17dff15]
    SQLException: SQLState(null) vendor code(17095)
    java.sql.SQLException: Statement cache size has not been set
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)
         at oracle.jdbc.driver.OracleConnection.prepareCallWithKey(OracleConnection.java:1037)
         at weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
         at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
         at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_WLSkel.invoke(Unknown Source)
         at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
         at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:353)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:144)
         at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
         at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    SQLException: SQLState(null) vendor code(17095)

    Hi. Ok. This is an Oracle driver bug/problem. Please show me the pool's definition in the config.xml file. I'll bet you're defining the pool in an unusual way. Typically we don't want any driver-level pooling to be involved. It is superfluous to the functionality we provide, and can also conflict.
    Joe
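    For completeness, the driver-level cache this error (vendor code 17095) refers to can be sized explicitly on the connection. A minimal sketch against the newer oracle.jdbc API (and note Joe's point above: with WebLogic's own pooling and statement cache in play, you normally should not need driver-level caching at all):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import oracle.jdbc.OracleConnection;

    public class DriverCacheDemo {
        static void enableImplicitCache(Connection conn) throws Exception {
            OracleConnection oconn = (OracleConnection) conn;  // unwrap to the Oracle type
            oconn.setImplicitCachingEnabled(true);  // turn implicit statement caching on
            oconn.setStatementCacheSize(10);        // a size must be set for caching to work
            PreparedStatement ps = oconn.prepareStatement("SELECT 1 FROM dual");
            ps.close();  // with implicit caching on, this returns the statement to the cache
        }
    }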
