What is the best procedure cache size?
Here is my dbcc memusage output:
DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.
Memory Usage:
                          Meg.        2K Blks          Bytes
Configured Memory:      14648.4375    7500000    15360000000
Non Dynamic Structures:     5.5655       2850        5835893
Dynamic Structures:        70.4297      36060       73850880
Cache Memory:           13352.4844    6836472    14001094656
Proc Cache Memory:         85.1484      43596       89284608
Unused Memory:           1133.9844     580600     1189068800
So, is my proc cache too small? I could move the 1133 MB of unused memory into the proc cache, but many have suggested that the proc cache should be about 20% of total memory.
I am not sure whether that means 20% of max memory or of total named cache memory.
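As a quick sanity check, the 20% rule of thumb can be compared against the dbcc memusage figures above. This is only a sketch: the page counts come straight from the output, but whether 20% is actually appropriate still depends on the workload.

```java
// Compare the current proc cache against the 20%-of-configured-memory
// rule of thumb, using the 2K-page counts from the dbcc memusage output.
public class ProcCacheCheck {
    static long twentyPercent(long configuredPages) {
        return configuredPages / 5;
    }
    public static void main(String[] args) {
        long configured = 7_500_000L; // Configured Memory, in 2K pages
        long procCache  = 43_596L;    // Proc Cache Memory, in 2K pages
        System.out.println("current proc cache = " + procCache + " pages");
        System.out.println("20% rule of thumb  = " + twentyPercent(configured) + " pages");
    }
}
```

At roughly 85 MB against a 14 GB server, the configured proc cache is far below the 20% figure; whether that matters depends on the reuse statistics discussed below.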
Hi
Database size: 268288.0 MB
Procedure cache size is:
1> sp_configure 'procedure cache size'
2> go
Parameter Name Default Memory Used Config Value Run Value Unit Type
procedure cache size 7000 3362132 1494221 1494221 Memory pages(2k) dynamic
1> sp_monitorconfig 'procedure cache size'
2> go
Usage information at date and time: May 15 2014 11:48AM.
Name Num_free Num_active Pct_act Max_Used Reuse_cnt Instance_Name
procedure cache size 1101704 392517 26.27 787437 746136 NULL
1> sp_configure 'total logical memory'
2> go
Parameter Name Default Memory Used Config Value Run Value Unit Type
total logical memory 73728 15624170 7812085 7838533 memory pages(2k) read-only
An ASE expert told me that the 'Reuse_cnt' parameter should ideally be zero.
Please suggest whether I need to increase the procedure cache, with an explanation.
Thanks
Rajesh
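One way to read the sp_monitorconfig numbers above, sketched below. The interpretation of a non-zero Reuse_cnt as cache churn follows the expert advice quoted in the post; the arithmetic itself uses only the figures from the output.

```java
// Interpret the sp_monitorconfig output: Max_Used relative to the configured
// value shows peak utilisation, and a non-zero Reuse_cnt means procedure
// cache objects were flushed out to make room for new ones.
public class ProcCacheUsage {
    static long peakUsePercent(long maxUsed, long configValue) {
        return 100 * maxUsed / configValue;
    }
    public static void main(String[] args) {
        long configValue = 1_494_221L; // procedure cache size, 2K pages
        long maxUsed     = 787_437L;   // Max_Used
        long reuseCnt    = 746_136L;   // Reuse_cnt (ideally 0)
        System.out.println("peak use    = " + peakUsePercent(maxUsed, configValue) + "%");
        System.out.println("reuse count = " + reuseCnt + " (non-zero suggests undersizing)");
    }
}
```

Peak use just over 50% with a large Reuse_cnt suggests the cache has been flushing objects at some point, which is the usual argument for growing it.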
Similar Messages
-
In EJB3 entities, what is the equiv. of key-cache-size for PK generation?
We have an oracle sequence which we use to generate primary keys. This sequence is set to increment by 5.
e.g.:
create sequence pk_sequence increment by 5;
This is so weblogic doesn't need to query the sequence on every entity bean creation, it only needs to query the sequence every 5 times.
With CMP2 entity beans and automatic key generation, this was configured simply by having the following in weblogic-cmp-rdbms-jar.xml:
<automatic-key-generation>
<generator-type>Sequence</generator-type>
<generator-name>pk_sequence</generator-name>
<key-cache-size>5</key-cache-size>
</automatic-key-generation>
This works great: the IDs created are 10, 11, 12, 13, 14, 15, 16, etc., and weblogic only needs to hit the sequence once every 5 inserts.
However, we have been trying to find the equivalent with the EJB3-style JPA entities:
We've tried
@SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "native(Sequence=pk_sequence, Increment=5, Allocate=5)")
@SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "pk_sequence", allocationSize = 5)
But with both configurations, the autogenerated IDs are 10, 15, 20, 25, 30, etc - weblogic seems to be getting a new value from the sequence every time.
Am I missing anything?
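For reference, the standard JPA mapping for a pre-allocated sequence looks like the sketch below. The Widget entity is hypothetical; the key points are that allocationSize should equal the sequence's INCREMENT BY, and that the JPA provider must support hi/lo-style allocation for it to actually save round trips.

```java
// Hypothetical entity; pk_sequence is assumed to be created with INCREMENT BY 5.
@Entity
public class Widget {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SW_ENTITY_SEQUENCE")
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE",
                       sequenceName = "pk_sequence",
                       allocationSize = 5) // must match the sequence's INCREMENT BY
    private Long id;
}
```

If the IDs advance by 5 on every insert, the provider is consuming a full sequence value per row instead of allocating a block, which points at the provider's sequence strategy rather than the annotation itself.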
We are using weblogic 10.3. -
How to increase partial cache size on Lookup stuck at 4096 MB SSIS 2012
Hello,
After converting from SSIS 2008 to SSIS 2012, I am facing major performance slowdown while loading fact data.
When we used 2008, one file used to take around 2 hours on average; after converting to 2012, it took 17 hours to load one file.
This is the current scenario: we load data into Staging, select everything from Staging (28 million rows), and use a lookup for each dimension. I believe it is taking a very long time due to one dimension table which has 89 million rows.
With the lookup, we are currently using partial cache because full cache caused the system to run out of memory.
Does anyone know how to increase the partial cache size on 64-bit? I am stuck at 4096 MB and cannot increase it. In 2008, I had a 200,000 MB partial cache size.
Any ideas with how to make this faster?
Thanks in advance!
Hi Sarora,
First of all, you may want to re-think if that huge dimension is really a dimension or you can re-design your model so you lower its cardinality.
Then, why are you using partial cache?
Partial cache will end up querying your huge dimension directly many times; potentially as many times as the No Cache mode.
If you can afford to load the whole dimension into memory, it will be MUCH better in terms of performance. If you do load the whole dimension, try querying just the surrogate key and business key columns; you'll find the amount of memory used is much lower.
I am not sure how much memory you have available or how much memory the Lookup used to take, but with the amounts you mentioned you were probably loading the whole table into memory. You can roughly estimate the amount of memory the Lookup will take in full cache mode using:
(ColumnWidth1 + ColumnWidth2 + ... + ColumnWidthN) * NumberOfRows
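As a worked example of that rule of thumb (the column widths below are illustrative assumptions, not figures from the thread):

```java
// Rough full-cache memory estimate for a Lookup: sum of the cached column
// widths times the number of rows in the reference table.
public class LookupCacheEstimate {
    static long estimateBytes(long rows, int... columnWidths) {
        long rowWidth = 0;
        for (int w : columnWidths) rowWidth += w;
        return rowWidth * rows;
    }
    public static void main(String[] args) {
        // 89 million rows, assuming a 4-byte surrogate key and a 16-byte business key
        long bytes = estimateBytes(89_000_000L, 4, 16);
        System.out.println(bytes / (1024 * 1024) + " MB");
    }
}
```

With only the key columns cached, even the 89-million-row dimension comes out under 2 GB in this sketch, which is usually far less than caching every column.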
Regards.
Pau -
Jboss getting SQLException: Closed Statement prepared-statement- cache-size
My first post in this forum , hope to get a quick resolution :)
I am using Jboss 4.0.0 on Oracle 9.2.0.4.0
In order to improve the app performance, I had specified prepared-statement-cache-size as 50, as follows:
<datasources>
  <local-tx-datasource>
    <jndi-name>jdbc/sct</jndi-name>
    <connection-url>jdbc:oracle:thin:@confidential:1560:sct1</connection-url>
    <user-name>Confidential</user-name>
    <password>Confidential</password>
    <min-pool-size>10</min-pool-size>
    <max-pool-size>120</max-pool-size>
    <prepared-statement-cache-size>50</prepared-statement-cache-size>
    <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
    <idle-timeout-minutes>5</idle-timeout-minutes>
    <track-statements>true</track-statements>
    <new-connection-sql>select sysdate from dual</new-connection-sql>
    <check-valid-connection-sql>select sysdate from dual</check-valid-connection-sql>
  </local-tx-datasource>
</datasources>
After doing this, I started getting the following exception:
java.sql.SQLException: Closed Statement
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:222)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:285)
at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:5681)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:409)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:366)
at org.jboss.resource.adapter.jdbc.CachedPreparedStatement.executeQuery(CachedPreparedStatement.java:57)
at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:296)
at com.ge.sct.SiteText.getSiteTextFromDB(SiteText.java:292)
Thanks in Advance
Bhavin
Hello,
I am also facing the same error. Somewhere just now I read:
We were getting this error on JBoss / Oracle. The fix was setting the following to 0 in oracle-ds.xml:
<prepared-statement-cache-size>0</prepared-statement-cache-size>
Ref: http://www.jpox.org/servlet/forum/viewthread?thread=1108
Maybe you can try this. I am also still looking for a solution; I will try the above and let you know if I succeed.
Regards,
Rajesh -
To optimize the calculation script I have set the compression type in my cube to RLE (the calculation script previously ran in about 6 minutes; now it takes 2 minutes, and the data file exported using DATAEXPORT is the same).
The maximum index cache is set to 4097152 KB (i.e. 3.9 GB). Is it OK to set the index cache so high even though my index file size is less than 1 GB?
1) How do I arrive at 36000000 KB as the maximum value for the data cache? What factors do I need to take into consideration?
2) Data cache maximum: 36000000 KB
A data cache of 36000000 KB (i.e. 34.33 GB) - is that a practical approach?
Regards
Shenna
Hi,
Index Cache:
The doc suggests 1 MB of index cache for buffered I/O and 10 MB of index cache for direct I/O!
While you can use this recommendation to start with, you're the right person to arrive at the actual figure by doing some trials relevant to your environment.
Data Cache:
Again, the doc suggests: data cache = 0.125 * the value of the data file cache size,
where the suggested data file cache size = combined size of all essn.pag files, if possible; otherwise as large as possible. (The data file cache setting is not used if Essbase is set to use buffered I/O.)
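The two rules of thumb above can be combined into a small sketch; the 12 GB of essn.pag files below is an illustrative assumption, not a figure from the thread.

```java
// Data file cache ~ combined essn.pag size; data cache ~ 0.125 x that.
public class EssbaseCacheSizing {
    static long dataCacheKb(long combinedPagKb) {
        return combinedPagKb / 8; // 0.125 x the data file cache
    }
    public static void main(String[] args) {
        long pagKb = 12L * 1024 * 1024; // assume 12 GB of essn.pag files, in KB
        System.out.println("suggested data cache = " + dataCacheKb(pagKb) + " KB");
    }
}
```

Measured against that formula, a 34 GB data cache would correspond to well over 200 GB of page files, which is one way to judge whether the configured value is realistic.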
It's prudent to do trials independently for each of the caches!
It's worth reading all the posts of the thread @ Understanding Buffered I/O and Direct I/O to understand experts' opinions !
Best of luck :)
- Natesh -
Hi all,
while using some dynamic stored procedures I get the following error:
[BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
I'm using WL8.1 and SQL Server 2000.
The stored procedure contains two different queries where the table name is a stored procedure parameter.
The first time it works great; after that I always get this error.
Reading the BEA docs I found:
There may be other issues related to caching prepared statements that are not
listed here. If you see errors in your system related to prepared statements,
you should set the prepared statement cache size to 0, which turns off prepared
statement caching, to test if the problem is caused by caching prepared statements.
If I set the prepared statement cache size to 0, everything works great, but that does not seem like the better way.
Should we expect Bea to solve this problem?
Or whatever else solution?
such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
dynamically ?
thks in advance
Leonardo
Caching works well for DML; that's what it is supposed to do. But it looks
like you are doing DDL, which means your tables might be getting
created/dropped/altered, which effectively invalidates the cache. So you
should try to turn the cache off.
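A sketch of turning the cache off programmatically via the MBean mentioned later in the thread. This assumes you already have a reference to the pool's JDBCConnectionPoolMBean; setting the same value in the console or config.xml is equivalent.

```java
// Hypothetical: disable prepared statement caching for a WLS 8.1 pool.
// poolMBean is assumed to have been looked up from the WebLogic MBean server.
poolMBean.setPreparedStatementCacheSize(0); // 0 turns statement caching off
```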
-
New FAQ Entry on JVM Parameters for Large Cache Sizes
I've posted a new [FAQ entry|http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60] on JVM parameters for large cache sizes. The text of it is as follows:
What JVM parameters should I consider when tuning an application with a large cache size?
If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64), the -server option, and setting your heap and stack sizes with -Xmx and -Xms. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and to avoid excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. This can be enabled with -XX:+UseConcMarkSweepGC.
Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC.
Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
You may also want to refer to the following articles:
* Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
* The most complete list of -XX options for Java 6 JVM
* My Favorite Hotspot JVM Flags
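Pulling the suggested starting points together, a launch command might look like the following. The heap and new-generation sizes (and the MyApp class name) are illustrative and must be tuned for your application.

```shell
java -d64 -server \
  -Xms8g -Xmx8g \
  -XX:+UseConcMarkSweepGC \
  -XX:+DisableExplicitGC \
  -XX:NewSize=1024m -XX:MaxNewSize=1024m \
  -XX:CMSInitiatingOccupancyFraction=55 \
  MyApp
```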
Edited by: Charles Lamb on Oct 22, 2009 9:13 AM
First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially given the comment that it happens only for larger tables):
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
indicates the Driver or the DB abends the connection due to a timeout.
Check out the wait_timeout mysql variable on the server and increase it. -
(statement cache size = 0) == clear statement cache ?
Hi
I ran this test with WLS 8.1. I set the cache size to 5, and I call a servlet
which invokes a stored procedure to get the statement cached. I then recompile
the proc, set the statement cache size to 0 and re-execute the servlet.
The result is:
java.sql.SQLException: ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "CCDB_APPS.MSSG_PROCS" has been invalidated
ORA-04065: not executed, altered or dropped package "CCDB_APPS.MSSG_PROCS"
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
which seems to suggest that even though the cache size has been set to 0, previously cached
statements are not cleared.
Rgs
Erik
Galen Boyer wrote:
On Fri, 05 Dec 2003, [email protected] wrote:
Galen Boyer wrote:
On 14 Nov 2003, [email protected] wrote:
Hi
I ran this test with WLS 8.1. I set to the cache size to 5,
and I call a servlet which invokes a stored procedure to get
the statement cached. I then recompile the proc, set the
statement cache size to 0 and re-execute the servlet.
The result is:
java.sql.SQLException: ORA-04068: existing state of packages
has been discarded ORA-04061: existing state of package
"CCDB_APPS.MSSG_PROCS" has been invalidated
ORA-04065: not executed, altered or dropped package
"CCDB_APPS.MSSG_PROCS" ORA-06508: PL/SQL: could not find
program unit being called ORA-06512: at line 1
which seems to suggest that even though the cache size has been set to
0, previously cached statements are not cleared.
This is actually an Oracle message. Do the following test.
Open two sqlplus sessions. In one, execute the package.
Then, in the other, drop and recreate that package. Then, go
to the previous window and execute that same package. You
will get that error. Now, in that same sqlplus session,
execute that same line one more time and it goes through. In
short, in your above test, execute your servlet twice and I
bet on the second execution you have no issue.
Hi. We did some testing offline, and verified that even a
standalone java program: 1 - making and executing a prepared
statement (calling the procedure), 2 - waiting while the
procedure gets recompiled, 3 - re-executing the prepared
statement gets the exception, BUT ALSO, 4 - closing the
statement after the failure, and making a new identical
statement, and executing it will also get the exception! Joe
I just had the chance to test this within weblogic and not just
sqlplus.
Note, I wasn't using SQL*Plus; I wrote a standalone program
using Oracle's driver...
MY SCENARIO:
I had one connection only in my pool. I executed a package.
Then, went into the database and recompiled that package. Next
execution from app found this error. I then subsequently
executed the same package from the app and it was successful.
And this was with the cache turned off, correct?
What the application needs to do is catch that error and within
the same connection, resubmit the execution request. All
connections within the pool will get invalidated for that
package's execution.
Have you tried this? Did you try to re-use the statement you had,
or did you make a new one?
Maybe Weblogic could understand this and behave this way for
Oracle connections?
It's not likely that we will be intercepting all exceptions
coming from a DBMS driver to find out whether it's a particular
failure, and then know that we can/must clear the statement cache.
Note also that even if we did, as I described, the test program I
ran did try to make a new statement to replace the one that
failed, and the new statement also failed.
In your case, you don't even have a cache. Would you verify
in your code, what sort of inline retry works for you?
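For reference, the inline retry being discussed might look like the sketch below. The procedure name is hypothetical, and ORA-04068 maps to JDBC error code 4068; this is not a tested, production-ready pattern.

```java
// Hedged sketch: on ORA-04068 (package state discarded), re-prepare the
// statement on the same connection and retry once.
static void callWithRetry(java.sql.Connection con) throws java.sql.SQLException {
    String sql = "{call CCDB_APPS.MSSG_PROCS.SOME_PROC()}"; // hypothetical procedure
    java.sql.CallableStatement cs = con.prepareCall(sql);
    try {
        cs.execute();
    } catch (java.sql.SQLException e) {
        if (e.getErrorCode() != 4068) throw e; // only retry the package-state error
        cs.close();
        cs = con.prepareCall(sql); // fresh statement on the same connection
        cs.execute();
    } finally {
        cs.close();
    }
}
```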
Joe -
Increase the size of the cache using the cache.size= number of pages ?
Hi All,
I am getting this error when I do load testing.
I have Connection pool for Sybase database that I am using in my JPD. I am using Database control of weblogic to call the Sybase Stored procedure.
I got following exception when I was doing load testing with 30 concurrent users.
Any idea why this exception is coming ?
thanks in advance
Hitesh
javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
hitesh Chauhan wrote:
Hi All,
I am getting this error when I do load testing.
I have Connection pool for Sybase database that I am using in my JPD. I am using Database control of weblogic to call the Sybase Stored procedure.
I got following exception when I was doing load testing with 30 concurrent users.
Any idea why this exception is coming ?
thanks in advance
Hitesh
Hi. Please note: the stack trace and exception are coming from the
Pointbase DBMS, nothing to do with Sybase. It seems to be an issue
with a configurable limit for PointBase, that you are exceeding.
Please read the PointBase configuration documents, and/or configure
your MDBs to use Sybase.
Joe
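If you do need to raise the PointBase limit itself, the error message points at the cache.size property, which is typically set in the PointBase configuration file. The value below is illustrative only.

```
# pointbase.ini (illustrative value; size is in pages)
cache.size=8192
```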
-
Hello,
I'd like to get the type of information in version 8 that I can get in version 9 through v$db_cache_advice in order to determine the size that the buffer cache should be. I've found sites that say you can set db_block_lru_extended_statistics to populate v$recent_bucket, but they say there is a performance hit. Can anyone tell me qualitatively how much of a performance hit this causes (obviously it would only be run this way for a short period of time), and whether or not this is really the best/right way to do this?
Thanks.
Actually, ours is a bank database.
Our Database size is 400GB.
Last month they got an ORA-00604 error,
so the production database hung for 15 minutes; the issue resolved itself automatically after 15 minutes.
At that time the complete buffer cache was flushed out and all Oracle processes were terminated.
Because of that, they increased the buffer cache size. -
JRE 1.6.0_05 Plugin settings ignore cache size
Hi!
I just tried to reduce the cache size of my Java plugin; however, the setting is ignored by the plugin. I set the size to 50 MB, but the plugin still caches temporary files in the default folder (C:\Dokumente und Einstellungen\mas\Anwendungsdaten\Sun\Java\Deployment\cache ...), currently about 100 MB.
Does anybody know why, or is this just a bug?
Is it okay that the plugin caches all files of an applet several times, although it's always the same applet (same version, same host, same everything...)?
Thank you in advance!
Best regards
Marc
Hey lynchmob,
Try these steps to correct the problem, you need to be logged in as an administrator:
1. Go to the group policy editor. You can get there by typing gpedit.msc into the Run dialog.
2. Navigate to computer configuration->administrative templates->windows components->internet explorer.
3. Disable "make proxy settings per-machine (rather than per-user)".
4. Login with the user account and go to Internet Options.
5. Go to the Connection tab.
6. Click on the Lan Settings button.
7. You may notice that the proxy settings are not correct. Change the proxy settings to be whatever is required for your proxy.
8. Configure Java to use browser proxy settings.
9. Open the java console.
10. Set debug level to 5.
11. Press 'p' to reload proxy settings. Use the trace messages to verify correct proxy settings were loaded. -
SBS2011 (Exchange 2010 SP2) - limiting cache size doesn't appear to work
Hi All,
Hoping for some clarification here, or extra input at least. I know there are other posts about this topic such as
http://social.technet.microsoft.com/Forums/en-US/smallbusinessserver/thread/5acb6e29-13b3-4e70-95d9-1a62fc9304ac but these have been
incorrectly marked as answer in my opinion.
To recap the issue: the Exchange 2010 store.exe process uses a lot of memory. So much, in fact, that it has a negative performance impact on the server (sluggish access to the desktop etc.). You can argue about this all day - it's by design
and shouldn't be messed with etc but the bottom line is that it does use too much memory and it does need tweaked. I know this because if you simply restart the Information Store process (or reboot the server) it frees up the memory and the performance
returns (until its cache is fully rebuilt that is). I have verified this on 4 different fresh builds of SBS2011 over the last 6 months. (all on servers with 16GB RAM)
I have scoured the internet for information on limiting how much memory exchange uses to cache the information store and most articles point back to the same two articles (http://eightwone.com/2011/04/06/limiting-exchange-2010-sp1-database-cache/
and
http://eightwone.com/2010/03/25/limiting-exchange-2010-database-cache) that deal with exchange 2010 and exchange 2010 SP1, notably not exchange 2010 SP2. Ergo most articles are out of date since exchange 2010 SP2 has been released since these articles
were posted.
When testing with our own in house SBS2011 server (with exchange 2010 SP2) I have found that specifying the min, max and cache sizes in ADSIEDIT has varying results that are not in line with the results documented in the articles I mentioned above.
I suspect the behaviour of these settings has changed with the release of exchange 2010 SP2 (as it did between the initial release and SP1).
Specifically here's what I have found using ADSIEDIT;
If you set the msExchESEParamCacheSize to a value - it doesn't have any effect.
If you set the msExchESEParamCacheSizeMax to a value - it doesn't have any effect.
If you set the msExchESEParamCacheSizeMin to a value - it always locks the store.exe process to using exactly this value.
I have also tested using combinations of these settings with the result that the size and max size values are always ignored (and the store.exe process uses the maximum available amount of memory - thus causing the performance degradation) but as soon as
you specify the min value it locks it to this value and it doesn't change.
As a temporary solution on our in-house SBS2011 I have set the min value to 4GB and it appears to be running fine (only 15 mailboxes though).
Anyone got some input on this? Thank you for your time.
I concur with Erin. I'm seeing the same behaviour across all SBS2011 boxes, whether running SP1, SP2 or SP3.
If a minimum value is set, the store cache size barely rises above the minimum. I have one server with 32GB RAM. Store.exe was using 20GB of RAM, plus all the other Exchange services which total 4GB+. That left virtually no free RAM and trying to do
anything else on the server was sluggish at best.
All the advice is that setting a maximum alone has no effect and a minimum must be set too. But when set, the store cache size barely rises above the minimum. I have set a 4GB minimum and 16GB max, but 5 days later it's still using only slightly more than
4GB and there's 8GB free. Now the server as a whole is responsive, but doing anything with Exchange is sluggish.
Just saying leave Exchange to manage itself is not an answer. The clue is in the name - Small Business Server. It's not Exchange Only Server - there are other tasks an SBS must handle, so leaving Exchange to run rampant is not an option. Besides, there are
supposedly ways to manage the Exchange cache size - they just don't appear to work!
I'm guessing nobody has an answer to this so the only solution is to effectively fix the cache size to a sensible value by setting min and max to the same value.
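One detail worth double-checking when setting these attributes: the msExchESEParamCacheSize* values are expressed in ESE pages (32 KB per page in Exchange 2010), not bytes. A small conversion sketch, using the 4 GB and 16 GB targets discussed above:

```java
// Convert a byte target into ESE pages (Exchange 2010 uses 32 KB pages).
public class EseCachePages {
    static long toPages(long bytes) {
        return bytes / (32 * 1024);
    }
    public static void main(String[] args) {
        System.out.println("4 GB  -> " + toPages(4L << 30) + " pages");
        System.out.println("16 GB -> " + toPages(16L << 30) + " pages");
    }
}
```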
Adam@Regis IT -
Page & cache size performance tuneup
Hi
I am doing performance evaluation on BDB. Please help me in find answer to below queries.
1. Page size: do I need to choose the page size based on my XML document size? Is there any relation (formula) between page size and XML document size to get optimum memory usage?
2. Cache size: does the cache size need to be equal to or more than the document size to minimize query response time? Could you please suggest an optimum cache size for a 1MB XML document?
3. I started with BDB XML version 2.3.10, but I read in this forum that there have been performance improvements since then. What version should I use for my evaluation? Is the latest (4.6.21) the best (most stable)?
4. Are there any other parameters (other than page and cache size) I need to tune to get optimum memory usage and minimal CPU utilization?
Is there any reference document where I can get more details on BDB performance?
Thanks,
Santhosh
Hi Santhosh,
It’s hard to give solid suggestions without knowing more about your application, what you are measuring and what your performance requirements are. What language are you implementing in?
Is query response time most important, or document insertion or updates?
I am going to request that you respond to this Performance Questionnaire and answer as many questions as you can at this time. Send the questionnaire to me at Ron dot Cohen at Oracle.
http://forums.oracle.com/forums/ann.jspa?annID=426
In addition to the information requested, you can see from the questionnaire that the utility
db_stat -m is useful for looking at a number of things, including the effectiveness of the cache size you have.
Have you taken any measurements yet? I would suggest going with the default pagesize but using a cachesize larger than the default. I don’t know how much real memory you have but for a first measurement you could try a cachesize of 100MB-500MB (or larger) depending on your workload and how much memory you have available. I am not recommending that as a final cache size, just giving you a number to start with.
http://tinyurl.com/2mfn6f
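With the Berkeley DB Java API, the starting cache size suggested above might be applied along these lines. This is only a sketch: the environment home directory and the 256 MB figure are illustrative assumptions, and the rest of the environment setup depends on your application.

```java
// Sketch: size the Berkeley DB environment cache before opening it.
EnvironmentConfig envConf = new EnvironmentConfig();
envConf.setAllowCreate(true);
envConf.setInitializeCache(true);
envConf.setCacheSize(256L * 1024 * 1024); // 256 MB starting point
Environment env = new Environment(new java.io.File("dbxml_home"), envConf);
```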
You will likely find a lot of improvements in performance can be obtained by your indexing strategy. This may be where you get the best results. You may want to spend some time reviewing that and the documentation on indexes:
http://tinyurl.com/2522sc
Also, take a look in the same document at the indexing sections.
Berkeley DB XML 2.3 (Berkeley DB 4.5.20) should be fine to start (though you may have read on this forum about the speed improvements in Berkeley DB XML 2.4 which is currently in test mode).
Please do respond to the survey, send it to me and we will try to help you further.
Ron -
Hi,
Our environment is Essbase 11.1.2.2, and we work with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error.
Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
I have done some Googling and found that we need to add something to the essbase.cfg file, like below.
1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
Possible Problems
Analytic Services could not lock enough blocks to perform the calculation.
Possible Solutions
Increase the number of blocks that Analytic Services can allocate for a calculation:
Set the maximum number of blocks that Analytic Services can allocate to at least 500.
If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
Stop and restart Analytic Server.
Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting.
Determine the block size.
Set the data cache size.
Actually, in our server config file (essbase.cfg) we don't have the below data added:
CalcLockBlockHigh 2000
CalcLockBlockDefault 200
CalcLockBlocklow 50
So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
Regards,
Naveen

Your calculation needs to hold more blocks in memory than your current setup allows.
From the docs (quoting so I don't have to retype it, not to be a smarta***):
CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
Example
If the essbase.cfg file contains the following settings:
CALCLOCKBLOCKHIGH 500
CALCLOCKBLOCKDEFAULT 200
CALCLOCKBLOCKLOW 50
then you can use the following SET LOCKBLOCK setting commands in a calculation script:
SET LOCKBLOCK HIGH;
means that Essbase can fix up to 500 data blocks when calculating one block.
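Putting the two pieces together: once the cfg tiers are in place, a calc script just opts into a tier before its consolidation. A minimal sketch of what that looks like (CALC ALL is a placeholder for whatever the real script actually consolidates):

```
/* Ask for the HIGH tier -- up to 500 fixed blocks with the cfg above */
SET LOCKBLOCK HIGH;

/* The actual consolidation; CALC ALL stands in for the real script body */
CALC ALL;
```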
The support doc is saying to change your config file so that those settings are available for any calc script to use.
On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to normal database growth or a recent change that has had an unexpectedly large impact. -
java.sql.SQLException: Statement cache size has not been set
All,
I am trying to create a lightweight SQL layer. It uses JDBC to connect to the database via WebLogic. When my application connects to the database using JDBC alone (outside of WebLogic), everything works fine. But when the application goes via WebLogic, Statement objects run successfully, yet when I try to run PreparedStatements I get the following error:
java.sql.SQLException: Statement cache size has not been set
at weblogic.rjvm.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:108)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:138)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_812_WLStub.prepareStatement(Unknown Source)
I have checked the StatementCacheSize and it is 10. Is there any other setting that needs to be in place for this to work? Has anybody seen this error before? Any help will be greatly appreciated.
Thanks.

Pooja Bamba wrote:
I just noticed that I did not copy the jdbc log fully earlier. Here is the log:
JDBC log stream started at Thu Jun 02 14:57:56 EDT 2005
DriverManager.initialize: jdbc.drivers = null
JDBC DriverManager initialized
registerDriver: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
DriverManager.getDriver("jdbc:oracle:oci:@devatl")
trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
Oracle Jdbc tracing is not avaliable in a non-debug zip/jar file
(the same DriverManager.getDriver("jdbc:oracle:oci:@devatl") trace repeats four more times)
registerDriver: driver[className=weblogic.jdbc.jts.Driver,weblogic.jdbc.jts.Driver@c0a150]
registerDriver: driver[className=weblogic.jdbc.pool.Driver,weblogic.jdbc.pool.Driver@17dff15]
SQLException: SQLState(null) vendor code(17095)
java.sql.SQLException: Statement cache size has not been set
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)

Hi. OK. This is an Oracle driver bug/problem. Please show me the pool's definition
in the config.xml file. I'll bet you're defining the pool in an unusual way. Typically
we don't want any driver-level pooling to be involved. It is superfluous to the functionality
we provide, and can also conflict.
Joe
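For reference, a WebLogic 8.1-era pool definition in config.xml typically carries the statement cache settings directly on the JDBCConnectionPool element. A sketch along those lines (the pool name, target, and connection properties are placeholders, not values taken from this thread; only the URL echoes the log above):

```xml
<JDBCConnectionPool
    Name="oraclePool"
    Targets="myserver"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:oci:@devatl"
    Properties="user=scott"
    InitialCapacity="5"
    MaxCapacity="20"
    StatementCacheSize="10"
    StatementCacheType="LRU"/>
```

If the pool instead layers an Oracle driver-level cached/pooled datasource underneath, that is the kind of "unusual" definition Joe is asking about, since WebLogic's own pooling and statement caching are meant to do that job.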
at oracle.jdbc.driver.OracleConnection.prepareCallWithKey(OracleConnection.java:1037)
at weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:353)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:144)
at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
SQLException: SQLState(null) vendor code(17095)