Cache Size error
We have a few users that occasionally receive the following:
OLAP_error (1200601): Not enough memory for formula execution. Set MAXFORMULACACHESIZE configuration parameter to [2112]KB and try again.
Our Essbase admin suggests that, rather than increasing MAXFORMULACACHESIZE, we reduce the maximum number of rows allowed to be returned. Thoughts on that?
2 other questions:
Are there any issues with increasing the MAXFORMULACACHESIZE to a much larger number than what the error message recommends? (let's say 9000KB for the sake of this discussion). In the DBAG I think it says it will only use what is needed.
Are there any issues with setting the maximum rows allowed to be returned to a very high number (such as 1 million rows to reflect that max number of rows excel can handle)?
The answer to both of your questions is "no". There won't be any problem if you change the cache size or increase the row limit. In practice, though, no report in a financial organization retrieves a million rows, so it is better to split the workbook for faster retrieval and better performance.
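For reference, the parameter lives in essbase.cfg on the server and its value is in KB; the 9000 figure below is simply the number from this discussion, not a recommendation, and the Essbase server must be restarted for essbase.cfg changes to take effect:

```
MAXFORMULACACHESIZE 9000
```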
Similar Messages
-
Hi,
Our environment is Essbase 11.1.2.2, working with the Essbase, EAS, and Shared Services components. One of our users tried to run the calc script of one application and hit this error.
Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
I searched Google and found that we need to add something to the essbase.cfg file, like below.
1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
Possible Problems
Analytic Services could not lock enough blocks to perform the calculation.
Possible Solutions
Increase the number of blocks that Analytic Services can allocate for a calculation:
Set the maximum number of blocks that Analytic Services can allocate to at least 500.
If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
Stop and restart Analytic Server.
Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting.
Determine the block size.
Set the data cache size.
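As a rough illustration of that sizing rule, the minimum data cache is the block size multiplied by the CALCLOCKBLOCKHIGH setting. The block size below is a made-up example value, not one taken from this thread; read the real value from the database statistics:

```python
# Hypothetical block size in bytes; check your database statistics for the real value.
block_size_bytes = 24_000
calclockblock_high = 500  # value the support doc recommends

# The data cache must be able to hold every block that can be locked at once.
min_data_cache_kb = block_size_bytes * calclockblock_high / 1024
print(f"minimum data cache: {min_data_cache_kb:.0f} KB")
```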
Actually, our server config file (essbase.cfg) does not have the data below:
CalcLockBlockHigh 2000
CalcLockBlockDefault 200
CalcLockBlocklow 50
So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file when the problem is with one application's calc script? Please guide me on how to proceed.
Regards,
Naveen
Your calculation needs to hold more blocks in memory than your current setup allows.
From the docs (quoting so I don't have to write it, not to be a smarta***):
CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
Example
If the essbase.cfg file contains the following settings:
CALCLOCKBLOCKHIGH 500
CALCLOCKBLOCKDEFAULT 200
CALCLOCKBLOCKLOW 50
then you can use the following SET LOCKBLOCK setting commands in a calculation script:
SET LOCKBLOCK HIGH;
means that Essbase can fix up to 500 data blocks when calculating one block.
Support doc is saying to change your config file so those settings can be made available for any calc script to use.
On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply standard growth or a recent change that has had an unexpectedly significant impact. -
Java.sql.SQLException: Statement cache size has not been set
All,
I am trying to create a lightweight SQL layer. It uses JDBC to connect to the database via WebLogic. When my application connects to the database using JDBC alone (outside of WebLogic), everything works fine. But when the application goes via WebLogic, Statement objects run successfully, yet when I try to run PreparedStatements I get the following error:
java.sql.SQLException: Statement cache size has not been set
at weblogic.rjvm.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:108)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:138)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_812_WLStub.prepareStatement(Unknown Source)
I have checked the StatementCacheSize and it is 10. Is there any other setting that needs to be changed for this to work? Has anybody seen this error before? Any help will be greatly appreciated.
Thanks.
Pooja Bamba wrote:
I just noticed that I did not copy the jdbc log fully earlier. Here is the log:
JDBC log stream started at Thu Jun 02 14:57:56 EDT 2005
DriverManager.initialize: jdbc.drivers = null
JDBC DriverManager initialized
registerDriver: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
Oracle Jdbc tracing is not avaliable in a non-debug zip/jar file
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
registerDriver: driver[className=weblogic.jdbc.jts.Driver,weblogic.jdbc.jts.Driver@c0a150]
registerDriver: driver[className=weblogic.jdbc.pool.Driver,weblogic.jdbc.pool.Driver@17dff15]
SQLException: SQLState(null) vendor code(17095)
java.sql.SQLException: Statement cache size has not been set
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)
Hi. OK. This is an Oracle driver bug/problem. Please show me the pool's definition
in the config.xml file. I'll bet you're defining the pool in an unusual way. Typically
we don't want any driver-level pooling to be involved. It is superfluous to the functionality
we provide, and can also conflict.
Joe
at oracle.jdbc.driver.OracleConnection.prepareCallWithKey(OracleConnection.java:1037)
at weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:353)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:144)
at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
SQLException: SQLState(null) vendor code(17095) -
Hi all,
while using some dynamic stored procedures I get the following error:
[BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
I'm using WL8.1 and Sql Server 2000.
The stored procedure contains two different queries where the table name is a stored-procedure parameter.
The first time it works great, after that I always have this error:
Reading the BEA docs, I found:
There may be other issues related to caching prepared statements that are not
listed here. If you see errors in your system related to prepared statements,
you should set the prepared statement cache size to 0, which turns off prepared
statement caching, to test if the problem is caused by caching prepared statements.
If I set the prepared statement cache size to 0 everything works great, but that does not seem like the best approach.
Should we expect Bea to solve this problem?
Or whatever else solution?
such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
dynamically ?
thks in advance
Leonardo
Caching works well for DML, and that's what it is supposed to do. But it looks like you are doing DDL, which means your tables might be getting created/dropped/altered, which effectively invalidates the cache. So you should try turning the cache off.
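The thread itself mentions JDBCConnectionPoolMBean.setPreparedStatementCacheSize(); the equivalent static setting is the pool's PreparedStatementCacheSize attribute in config.xml. A minimal sketch — the pool name is illustrative, and the other attributes (DriverName, URL, and so on) are omitted as deployment-specific:

```
<JDBCConnectionPool
    Name="sqlserverPool"
    PreparedStatementCacheSize="0"
/>
```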
"leonardo" <[email protected]> wrote in message
news:40b1bb75$1@mktnews1...
>
>
Hi all,
while using some dinamyc store procedures I get in the following error:
[BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
I'm using WL8.1 and Sql Server 2000.
Store procedure contains two different queries where table name is a storeprocedure's
parameter.
The first time it works great, after that I always have this error:
Reading bea doc's I found
There may be other issues related to caching prepared statements that arenot
listed here. If you see errors in your system related to preparedstatements,
you should set the prepared statement cache size to 0, which turns offprepared
statement caching, to test if the problem is caused by caching preparedstatements.
If I set prepared statement cache size to 0 everything works great butthat does
not seem the better way.
Should we expect Bea to solve this problem?
Or whatever else solution?
such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
dynamically ?
thks in advance
Leonardo -
Java 1.4.2 default cache size problems
Hello all -
I am delivering panoramic image Virtual Travel® tours via Flash and the Java PTviewer applet, e.g. http://www.taj-mahal.net
We want to deliver much larger & "full screen" panoramas as well. This all works fine on PCs (sigh), but not on Macs.
On the Mac, I am seeing "hanging" during loading of panos, and traced this to the fact that Windows typically has a 60 MB Java cache. I thought this was the same for the Mac, but ....
I note that my Java 1.4.2 Plug-In Control Panel shows that only 50 MB is allocated. Since I have not adjusted this, I wonder...
a) Is the default Java cache for Mac OS X set at 50 MB after installation? Would most typical Mac users then have only 50 MB of Java cache?
b) Is there a user-friendly way to adjust the Java cache size up to 60 MB on the Mac? Some application or AppleScript or some such? If so, I can post an error message when (poor) Mac users try to view our large panoramas.
( I want to avoid trying to explain how to find and use the Java Plug-In control panel.... )
Thanks for your help !
William
G4 dual 1.25GHz ( London, UK )
Where does that prerequisite come from?
On linux x86, Oracle provides the j2se jdk/jre for OracleAS itself, meaning OracleAS Forms & Reports services 9.0.4 does not require you to install a JDK. I think that actually the requirement is that you should not install any JDK. Of course, some other software might require you to install 1.4.2 java sdk.
You can find the AS 9.0.4 docu libraries here:
http://www.oracle.com/technology/documentation/appserver10g.html
(see docu library B13597_05, Platform tab for installation guides) -
Best size of procedure cache size?
here is my dbcc memusage output:
DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.
Memory Usage:
Meg. 2K Blks Bytes
Configured Memory: 14648.4375 7500000 15360000000
Non Dynamic Structures: 5.5655 2850 5835893
Dynamic Structures: 70.4297 36060 73850880
Cache Memory: 13352.4844 6836472 14001094656
Proc Cache Memory: 85.1484 43596 89284608
Unused Memory: 1133.9844 580600 1189068800
So is the proc cache too small? I could move the 1133 MB of unused memory to the proc cache, but many suggest that the proc cache should be 20% of total memory. I am not sure whether that should be 20% of max memory or of total named cache memory.
Hi
Database size: 268288.0 MB
Procedure Cache size is ..
1> sp_configure 'procedure cache size'
2> go
Parameter Name Default Memory Used Config Value Run Value Unit Type
procedure cache size 7000 3362132 1494221 1494221 Memory pages(2k) dynamic
1> sp_monitorconfig 'procedure cache size'
2> go
Usage information at date and time: May 15 2014 11:48AM.
Name Num_free Num_active Pct_act Max_Used Reuse_cnt Instance_Name
procedure cache size 1101704 392517 26.27 787437 746136 NULL
1> sp_configure 'total logical memory'
2> go
Parameter Name Default Memory Used Config Value Run Value Unit Type
total logical memory 73728 15624170 7812085 7838533 memory pages(2k) read-only
An ASE expert told me that the parameter 'Reuse_cnt' should be zero.
Please advise whether I need to increase the procedure cache, with an explanation.
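For reference, the 20% guideline can be checked against the sp_configure output above; ASE reports those values in 2 KB pages, and the numbers below are the ones from this thread:

```python
PAGE_KB = 2  # ASE reports these counters in 2 KB pages

def pages_to_mb(pages: int) -> float:
    """Convert 2 KB server pages to megabytes."""
    return pages * PAGE_KB / 1024

proc_cache_pages = 1494221      # 'procedure cache size' run value
total_logical_pages = 7812085   # 'total logical memory' config value

proc_cache_mb = pages_to_mb(proc_cache_pages)
total_mb = pages_to_mb(total_logical_pages)
ratio = proc_cache_pages / total_logical_pages

print(f"proc cache: {proc_cache_mb:.0f} MB of {total_mb:.0f} MB ({ratio:.0%})")
```

By this measure the proc cache is already roughly 19% of total logical memory, so the nonzero Reuse_cnt is the stronger hint that it is undersized.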
Thanks
Rajesh -
(statement cache size = 0) == clear statement cache ?
Hi
I ran this test with WLS 8.1. I set to the cache size to 5, and I call a servlet
which invokes a stored procedure to get the statement cached. I then recompile
the proc, set the statement cache size to 0 and re-execute the servlet.
The result is:
java.sql.SQLException: ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "CCDB_APPS.MSSG_PROCS" has been invalidated
ORA-04065: not executed, altered or dropped package "CCDB_APPS.MSSG_PROCS"
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
which seems to suggest even though the cache size has set to 0, previously cached
statements are not cleared.
Rgs
Erik
Galen Boyer wrote:
On Fri, 05 Dec 2003, [email protected] wrote:
Galen Boyer wrote:
On 14 Nov 2003, [email protected] wrote:
Hi
I ran this test with WLS 8.1. I set to the cache size to 5,
and I call a servlet which invokes a stored procedure to get
the statement cached. I then recompile the proc, set the
statement cache size to 0 and re-execute the servlet.
The result is:
java.sql.SQLException: ORA-04068: existing state of packages
has been discarded ORA-04061: existing state of package
"CCDB_APPS.MSSG_PROCS" has been invalidated
ORA-04065: not executed, altered or dropped package
"CCDB_APPS.MSSG_PROCS" ORA-06508: PL/SQL: could not find
program unit being called ORA-06512: at line 1
which seems to suggest even though the cache size has set to
0, previously cached statements are not cleared.
This is actually an Oracle message. Do the following test.
Open two sqlplus sessions. In one, execute the package.
Then, in the other, drop and recreate that package. Then, go
to the previous window and execute that same package. You
will get that error. Now, in that same sqlplus session,
execute that same line one more time and it goes through. In
short, in your above test, execute your servlet twice and I
bet on the second execution you have no issue.
Hi. We did some testing offline, and verified that even a
standalone java program: 1 - making and executing a prepared
statement (calling the procedure), 2 - waiting while the
procedure gets recompiled, 3 - re-executing the prepared
statement gets the exception, BUT ALSO, 4 - closing the
statement after the failure, and making a new identical
statement, and executing it will also get the exception! Joe
I just had the chance to test this within weblogic and not just sqlplus.
Note, I wasn't using SQL*Plus; I wrote a standalone program
using Oracle's driver...
MY SCENARIO:
I had one connection only in my pool. I executed a package.
Then, went into the database and recompiled that package. Next
execution from app found this error. I then subsequently
executed the same package from the app and it was successful.
And this was with the cache turned off, correct?
What the application needs to do is catch that error and within
the same connection, resubmit the execution request. All
connections within the pool will get invalidated for that
package's execution.
Have you tried this? Did you try to re-use the statement you had,
or did you make a new one?
Maybe Weblogic could understand this and behave this way for
Oracle connections?
It's not likely that we will be intercepting all exceptions
coming from a DBMS driver to find out whether it's a particular
failure, and then know that we can/must clear the statement cache.
Note also that even if we did, as I described, the test program I
ran did try to make a new statement to replace the one that
failed, and the new statement also failed.
In your case, you don't even have a cache. Would you verify
in your code, what sort of inline retry works for you?
Joe -
When I put Firefox in offline mode and then click on pages saved in history, it can't load any pages or images. I set the cache size to 250 MB, but the problem is the same: it saves history for two months but can't load the pages.
Hi there,
When I inspect your site in browser tools, I'm getting 404 errors from your page:
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (jquery-2.0.3.min.map, line 0)
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (edge.4.0.0.min.map, line 0)
BarnardosIreland wrote:
I would have thought that publishing should give a complete package that doesn't need any further edits to the code and can just be directly ftp'ed to the web - is this correct?
In general, you are correct - but also your server does need to be properly configured (and those errors above lead me to think it may not be) to serve the file types that you're uploading - but it could be something else entirely. Can you zip up your composition folder, upload it to your Creative Cloud files, set it to share, and then post a link here so I can download it? If you'd rather not share it publicly, can you PM me with a link to your composition files?
Thanks,
Joe -
Individual cache size too large: maximum is 10TB
Hi All,
When application is trying to open environment the following error occur:
individual cache size too large: maximum is 10TB
All BDB parameters are default. Environment created with the following flags: DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_THREAD | DB_REGISTER | DB_RECOVER;
BDB version 4.8.24. OS Debian Linux 2.6.26-2-amd64
What can cause this error?
Modification of cache size in DB_CONFIG does not help.
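For reference, an explicit cache override in the environment's DB_CONFIG file takes gigabytes, bytes, and the number of cache regions; the 256 MB value below is purely illustrative:

```
set_cachesize 0 268435456 1
```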
Thanks
Edited by: user3578137 on Dec 17, 2009 7:54 AM
No, cache parameters are not set; they are default. So it is strange.
Here is the code:
env_flags = DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG |
DB_INIT_MPOOL | DB_THREAD | DB_REGISTER | DB_RECOVER;
rc = db_env_create(&dbenv, 0);
rc = dbenv->set_lk_detect(dbenv, DB_LOCK_MINWRITE);
dbenv->set_event_notify(dbenv, db_event_callback);
rc = dbenv->open(dbenv, work_dir, env_flags, 0);
error checks are omitted.
It works fine on a machine with Fedora x64 and 1 GB RAM; the error occurs on Debian x64 with 4 GB RAM. -
Hi, All,
I am encountering an Essbase data cache full error while running some calc scripts. I have already try to set the data cache size to 800MB to 1.6GB, and index cache to around 100MB. It is quite a large BSO Essbase, with one dimension over 1000 members, another about 2500 members, and the last one about 3000 members.
I have three similar scripts and each is for different entities to aggregate on some of those dimensions.
For example, I started by unloading the app/db and running calc script 1 for Entity1, which completed successfully. However, when I continued with calc script 2 for Entity2, it showed the "data cache full" error. After I unloaded the app/db and then ran calc script 2 again, the calc script completed with no errors.
I am running on Essbase 11.1.1.3 on AIX platform 32-bit Essbase.
Has anyone encountered this before? Is it a problem with Essbase RAM handling, in this case?
Thanks in advance
Thank you John,
We have found that it is the entity dimension that should be responsible to this problem.
I remember we encountered this kind of problem before when we aggregated an application whose entity dimension hierarchy mixed shared and stored instances of the same level 0 members. To put it simply, there are three members under the "Entity" dimension member, which represent different views of entity hierarchies of the same level 0 members. The first one has stored level 0 entity members while the other two have shared ones. At that time, our client added another hierarchy with shared level 0 members, but they did not put this tree under the "Entity" dimension member directly; rather, they put it under the first child of "Entity", which is the one with stored level 0 members.
It is a little bit confusing to describe the situation only by text. Anyway, at that time, the first hierarchy had both stored and shared instances of the same group of level 0 members, and the data cache was always full when aggregating. After we moved the fourth hierarchy to another tree, so that under that hierarchy the level 0 members were all shared instances, the aggregation worked flawlessly.
I wondered why this happened and consider this is related to detailed calculation logic of Essbase. May you shed some light on this topic? Thank you with all my heart!
Warm Regards,
John -
Increase the size of the cache using the cache.size= number of pages ?
Hi All,
I am getting this error when I do load testing.
I have Connection pool for Sybase database that I am using in my JPD. I am using Database control of weblogic to call the Sybase Stored procedure.
I got following exception when I was doing load testing with 30 concurrent users.
Any idea why this exception is coming ?
thanks in advance
Hitesh
javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
hitesh Chauhan wrote:
Hi All,
I am getting this error when I do load testing.
I have Connection pool for Sybase database that I am using in my JPD. I am using Database control of weblogic to call the Sybase Stored procedure.
I got following exception when I was doing load testing with 30 concurrent users.
Any idea why this exception is coming ?
thanks in advance
Hitesh
Hi. Please note below: the stack trace and exception are coming from the
Pointbase DBMS, nothing to do with Sybase. It seems to be an issue
with a configurable limit for PointBase, that you are exceeding.
Please read the PointBase configuration documents, and/or configure
your MDBs to use Sybase.
Joe
>
javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178) -
hi,
Can anyone tell me what the statement cache size is? I am using Oracle 11g Release 2, and I frequently receive the following error in my alert log.
ORA-03137: TTC protocol internal error : [12333] [10] [84] [101] [] [] [] []
I read an article saying that if the statement cache size value is non-zero, you should change it to zero. Is that the solution to this problem? Where can I see the value of the statement cache size?
Hi,
You can refer to the Oracle doc below:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jdbc_admin/jdbc_datasources.html
Statement Cache Type—The algorithm that determines which statements to store in the statement cache. See Statement Cache Algorithms.
Statement Cache Size—The number of statements to store in the cache for each connection. The default value is 10. See Statement Cache Size. http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/jdbc30/StmtCacheSample/Readme.html
HTH
- Pavan Kumar N -
BerkeleyDB cache size and Solaris
I am having problems trying to scale up an application that uses BerkeleyDB 4.4.20 on Sun SPARC servers running Solaris 8 and 9.
The application has 11 primary databases and 7 secondary databases.
In different instances of the application, the size of the largest primary database ranges only from 2 MB to 10 MB, but those will grow rapidly over the course of the semester.
The servers have 4-8 GB of RAM and 12-20 GBytes of swap.
Succinctly, when the primary databases are small, the application runs as expected.
But as the primary databases grow, the following, counterintuitive phenomenon
occurs. With modest cache sizes, the application starts up, but throws
std::exceptions of "not enough space" when it attempts to delete records
via a cursor. The application also crashes randomly returning
RUN_RECOVERY. But when the cache size is increased, the application
will not even start up; instead, it fails and throws std::exceptions which say there
is insufficient space to open the primary databases.
Here is some data from a server that has 4GB RAM with 2.8 GBytes free
(according to "top") when the data was collected:
DB_CONFIG set_cachesize   db_stat -m Pool   Ind. Cache   Result
0 67108864 1              80 MB             8 KB         Starts, but crashes and can't delete by cursor because of insufficient space
0 134217728 1             160 MB            8 KB         Same as case above
0 268435456 1             320 MB            8 KB         Doesn't start; says there is not enough space to open a primary database
0 536870912 1             512 MB            16 KB        Doesn't start; not enough space to open a primary database (mentions a different primary database than before)
1 073741884 1             1 GB 70 MB        36 KB        Doesn't start; not enough space to open a primary database (different primary database than previously)
2 147483648 1             2 GB 140 MB       672 KB       Doesn't start; not enough space to open a primary database (different primary database than previously)
I should also mention that the application is written in Perl and uses
the Sleepycat::Db Perl module to interface with the BerkeleyDB C++ API.
Any help on how to interpret this data and, if the problem is the
interface with Solaris, how to tweak that, will be greatly appreciated.
Sincerely,
Bill Wheeler, Department of Mathematics, Indiana University, Bloomington.
Having found answers to my questions, I think I should document them here.
1. On the matter of the error message "not enough space", this message
apparently originates from Solaris. When a process (e.g., an Apache child)
requests additional (virtual) memory (via either brk or mmap) such that the
total (virtual) memory allocated to the process would exceed the system limit
(set by the setrlimit command), then the Solaris kernel rejects the request
and returns the error ENOMEM . Somewhat cryptically, the text for this error
is "not enough space" (in contrast, for instance, to "not enough virtual
memory").
Apparently, when the BerkeleyDB cache size is set too large, a process
(e.g., an Apache child) that attempts to open the environment and databases
may request a total memory allocation that exceeds the system limit.
Then Solaris will reject the request and return the ENOMEM error.
Within Solaris, the only solutions are apparently
(i) to decrease the cache size or
(ii) to increase the system limit via the setrlimit call.
2. On the matter of the DB_RUNRECOVERY errors, the cause appears
to have been the use of the DB_TXN_NOWAIT flag in combination with
code that mishandled some of the resulting complex situations.
Sincerely,
Bill Wheeler -
Hello,
I'd like to get the type of information in version 8 that I can get in version 9 through v$db_cache_advice in order to determine the size that the buffer cache should be. I've found sites that say you can set db_block_lru_extended_statistics to populate v$recent_bucket, but they say there is a performance hit. Can anyone tell me qualitatively how much of a performance hit this causes (obviously it would only be run this way for a short period of time), and whether or not this is really the best/right way to do this?
Thanks.

Actually, ours is a bank database.
Our database size is 400 GB.
Last month they got an ORA-00604 error,
and the production database hung for 15 minutes; the issue resolved itself automatically after 15 minutes.
At that time the complete buffer cache was flushed out and all Oracle processes were terminated.
Because of that, they increased the buffer cache size. -
Unable to set Oracle driver statement cache size to 300
Hi Friends,
when i am starting thread pool worker using threadpoolworker using threadpoolworker.cmd i am getting the error as follows
The root LoggedException was: Unable to set Oracle driver statement cache size to 300
at com.splwg.shared.common.LoggedException.raised(LoggedException.java:65)
at com.splwg.base.support.sql.OracleFunctionReplacer.setOracleCacheSize(OracleFunctionReplacer.java:232)
at com.splwg.base.support.sql.OracleFunctionReplacer.initializeConnectionForNewSession(OracleFunctionReplacer.java:207)
at com.splwg.base.support.context.FrameworkSession.initialize(FrameworkSession.java:225)
at com.splwg.base.support.context.FrameworkSession.<init>(FrameworkSession.java:194)
at com.splwg.base.support.context.ApplicationContext.createSession(ApplicationContext.java:417)
at com.splwg.base.support.context.ApplicationContext.createThreadBoundSession(ApplicationContext.java:461)
at com.splwg.base.support.context.SessionExecutable.doInReadOnlySession(SessionExecutable.java:96)
at com.splwg.base.support.context.SessionExecutable.doInReadOnlySession(SessionExecutable.java:79)
at com.splwg.base.support.context.ApplicationContext.initialize(ApplicationContext.java:211)
at com.splwg.base.support.context.ContextFactory.buildContext(ContextFactory.java:114)
at com.splwg.base.support.context.ContextFactory.buildContext(ContextFactory.java:90)
at com.splwg.base.support.context.ContextFactory.createDefaultContext(ContextFactory.java:498)
at com.splwg.base.api.batch.StandaloneExecuter.setupContext(StandaloneExecuter.java:258)
at com.splwg.base.api.batch.StandaloneExecuter.run(StandaloneExecuter.java:129)
at com.splwg.base.api.batch.StandaloneExecuter.main(StandaloneExecuter.java:357)
at com.splwg.base.api.batch.AbstractStandaloneRunner.invokeStandaloneExecuter(AbstractStandaloneRunner.java:403)
at com.splwg.base.api.batch.AbstractStandaloneRunner.run(AbstractStandaloneRunner.java:134)
at com.splwg.base.api.batch.ThreadPoolWorker.run(ThreadPoolWorker.java:24)
at com.splwg.base.api.batch.ThreadPoolWorker.main(ThreadPoolWorker.java:17)
can any one tell me the exact error
shyam.