Page & cache size performance tune-up

Hi
I am doing a performance evaluation of BDB. Please help me find answers to the queries below.
1. Page size: Do I need to choose the page size based on my XML document size? Is there any relation (formula) between page size and XML document size that gives optimum memory usage?
2. Cache size: Does the cache size need to be equal to or larger than the document size to minimize query response time? Could you please suggest an optimum cache size for a 1MB XML document?
3. I have started with BDB version 2.3.10, but I read in this forum that there are some performance improvements in newer versions. Which version should I use for my evaluation? Is the latest (4.6.21) the best (most stable)?
4. Are there any other parameters (other than page and cache size) I need to tune to get optimum memory usage and minimal CPU utilization?
Is there any reference document where I can get more details on BDB performance?
Thanks,
Santhosh

Hi Santhosh,
It’s hard to give solid suggestions without knowing more about your application, what you are measuring and what your performance requirements are. What language are you implementing in?
Is query response time most important, or document insertion or updates?
I am going to request that you respond to this Performance Questionnaire and answer as many questions as you can at this time. Send the questionnaire to me at Ron dot Cohen at Oracle.
http://forums.oracle.com/forums/ann.jspa?annID=426
In addition to the information requested, you can see from the questionnaire that the utility
db_stat -m is useful for looking at a number of things, including how effective your cache size is.
Have you taken any measurements yet? I would suggest going with the default page size but using a cache size larger than the default. I don't know how much real memory you have, but for a first measurement you could try a cache size of 100MB-500MB (or larger), depending on your workload and how much memory you have available. I am not recommending that as a final cache size, just giving you a number to start with.
http://tinyurl.com/2mfn6f
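For illustration only (not part of the original reply): a minimal sketch, using the Berkeley DB C API that underlies BDB XML, of setting the cache size on the environment before opening it. The 512MB figure and the "./env" directory are placeholder assumptions, not recommendations.

#include <stdio.h>
#include <stdlib.h>
#include <db.h>

int
main(void)
{
    DB_ENV *dbenv;
    int ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return (EXIT_FAILURE);
    }

    /* 512MB cache in a single region; must be called before DB_ENV->open(). */
    if ((ret = dbenv->set_cachesize(dbenv, 0, 512 * 1024 * 1024, 1)) != 0) {
        dbenv->err(dbenv, ret, "set_cachesize");
        dbenv->close(dbenv, 0);
        return (EXIT_FAILURE);
    }

    /* "./env" is a placeholder environment directory. */
    if ((ret = dbenv->open(dbenv, "./env",
        DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN, 0)) != 0) {
        dbenv->err(dbenv, ret, "DB_ENV->open");
        dbenv->close(dbenv, 0);
        return (EXIT_FAILURE);
    }

    /* An XmlManager and containers would be created on top of dbenv here. */

    dbenv->close(dbenv, 0);
    return (EXIT_SUCCESS);
}

The same setting can also be made without recompiling by putting a set_cachesize line in the environment's DB_CONFIG file.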
You will likely find that a lot of performance improvement can be obtained through your indexing strategy. This may be where you get the best results. You may want to spend some time reviewing that and the documentation on indexes:
http://tinyurl.com/2522sc
Also, take a look in the same document at the indexing sections.
Berkeley DB XML 2.3 (Berkeley DB 4.5.20) should be fine to start with (though you may have read on this forum about the speed improvements in Berkeley DB XML 2.4, which is currently in testing).
Please do respond to the survey, send it to me and we will try to help you further.
Ron

Similar Messages

  • Why does performance decrease as cache size increases?

    Hi,
    We have some very serious performance problems with our
    database. I have been trying to help by tuning the cache size,
    but the results are the opposite of what I expect.
    To create new databases with my data set, it takes about
    8200 seconds with a 32 Meg cache. Performance gets worse
    as the cache size increases, even though the cache hit rate
    improves!
    I'd appreciate any insight as to why this is happening.
    32 Meg does not seem like such a large cache that it would
    strain some system limitation.
    Here are some stats from db_stat -m
    Specified a 128 Meg cache size - test took 16076 seconds
    160MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    160MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    34M Requested pages found in the cache (93%)
    2405253 Requested pages not found in the cache
    36084 Pages created in the cache
    2400631 Pages read into the cache
    9056561 Pages written from the cache to the backing file
    2394135 Clean pages forced from the cache
    2461 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    40048 Current total page count
    40048 Current clean page count
    0 Current dirty page count
    16381 Number of hash buckets used for page location
    39M Total number of times hash chains searched for a page (39021639)
    11 The longest hash chain searched for a page
    85M Total number of hash chain entries checked for page (85570570)
    Specified a 64 Meg cache size - test took 10694 seconds
    80MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    80MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    31M Requested pages found in the cache (83%)
    6070891 Requested pages not found in the cache
    36104 Pages created in the cache
    6066249 Pages read into the cache
    9063432 Pages written from the cache to the backing file
    5963647 Clean pages forced from the cache
    118611 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    20024 Current total page count
    20024 Current clean page count
    0 Current dirty page count
    8191 Number of hash buckets used for page location
    42M Total number of times hash chains searched for a page (42687277)
    12 The longest hash chain searched for a page
    98M Total number of hash chain entries checked for page (98696325)
    Specified a 32 Meg cache size - test took 8231 seconds
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10812846)
    35981 Pages created in the cache
    10M Pages read into the cache (10808327)
    9200273 Pages written from the cache to the backing file
    9335574 Clean pages forced from the cache
    1498651 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    10012 Current clean page count
    0 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47429232)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118218066)
    vmstat says that a few minutes into the test, the box is
    spending 80-90% of its time in iowait. That worsens as
    the test continues.
    System and test info follows
    We have 10 databases (in 10 files) sharing a database
    environment. We are using a hash table since we expect
    data accesses to be pretty much random.
    We are using the default cache type: a memory mapped file.
    Using DB_PRIVATE did not improve performance.
    The database environment created with these flags:
    DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL
    The databases are opened with only the DB_CREATE flag.
    There is only one process accessing the db. In my tests,
    only one thread accesses the db, doing only writes.
    We do not use transactions.
    My data set is about 550 Meg of plain ASCII text data.
    13 million inserts and 2 million deletes. Key size is
    32 bytes, data size is 4 bytes.
    BDB 4.6.21 on linux.
    1 Gig of RAM
    Filesystem = ext3, page size = 4K
    The test system is not doing anything else while I am
    testing.

    Surprising result: I tried closing the db handles with DB_NOSYNC and performance
    got worse. Using a 32 Meg cache, it took about twice as long to run my test:
    15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
    Here is some data from db_stat -m when using DB_NOSYNC:
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10811882)
    44864 Pages created in the cache
    10M Pages read into the cache (10798480)
    7380761 Pages written from the cache to the backing file
    3452500 Clean pages forced from the cache
    7380761 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    5001 Current clean page count
    5011 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47428268)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118169805)
    It looks like not flushing the cache regularly is forcing a lot more
    dirty pages (and fewer clean pages) from the cache. Forcing a
    dirty page out is slower than forcing a clean page out, of course.
    Is this result reasonable?
    I suppose I could try to sync less often than I have been, but more often
    than never to see if that makes any difference.
    When I close or sync one db handle, I assume it flushes only that portion
    of the dbenv's cache, not the entire cache, right? Is there an API I can
    call that would sync the entire dbenv cache (besides closing the dbenv)?
    Are there any other suggestions?
    Thanks,
    Eric
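    A hedged note (not from the original thread): the C API does provide an environment-wide flush, DB_ENV->memp_sync(), and an incremental writer, DB_ENV->memp_trickle(), which can keep a supply of clean pages available between closes. A minimal sketch, assuming dbenv is an already-opened DB_ENV handle:

    #include <stdio.h>
    #include <db.h>

    /*
     * Trickle-write dirty pages until at least 20% of the cache is clean,
     * then flush every remaining dirty page in the environment cache.
     * The 20% figure is an arbitrary example value.
     */
    static int
    flush_env_cache(DB_ENV *dbenv)
    {
        int nwrote, ret;

        if ((ret = dbenv->memp_trickle(dbenv, 20, &nwrote)) != 0) {
            dbenv->err(dbenv, ret, "memp_trickle");
            return (ret);
        }
        printf("trickle wrote %d pages\n", nwrote);

        if ((ret = dbenv->memp_sync(dbenv, NULL)) != 0) {
            dbenv->err(dbenv, ret, "memp_sync");
            return (ret);
        }
        return (0);
    }

    Calling DB_ENV->memp_sync() with a NULL LSN flushes modified pages for all files in the pool, which is the environment-wide sync asked about above; DB->sync() or DB->close() only flush pages belonging to that one database handle.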

  • When I put Firefox in offline mode and then click on pages saved in history, it can't load any pages or images. I set the cache size to 250MB but the problem is the same: it saves history for two months, but can't load pages.

    When I put Firefox in offline mode and then click on pages saved in history, it can't load any pages or images. I set the cache size to 250MB but the problem is the same: it saves history for two months, but can't load pages.

    Hi there,
    When I inspect your site in browser tools, I'm getting 404 errors from your page:
    [Error] Failed to load resource: the server responded with a status of 404 (Not Found) (jquery-2.0.3.min.map, line 0)
    [Error] Failed to load resource: the server responded with a status of 404 (Not Found) (edge.4.0.0.min.map, line 0)
    BarnardosIreland wrote:
    I would have thought that publishing should give a complete package that doesn't need any further edits to the code and can just be directly ftp'ed to the web - is this correct?
    In general, you are correct - but your server also needs to be properly configured (and those errors above lead me to think it may not be) to serve the file types that you're uploading - but it could be something else entirely. Can you zip up your composition folder, upload it to your Creative Cloud files, set it to share, and then post a link here so I can download it? If you'd rather not share it publicly, can you PM me a link to your composition files?
    Thanks,
    Joe

  • Increase the size of the cache using cache.size=<number of pages>?

    Hi All,
    I am getting this error when I do load testing.
    I have a connection pool for a Sybase database that I am using in my JPD. I am using the WebLogic Database control to call the Sybase stored procedure.
    I got the following exception when I was doing load testing with 30 concurrent users.
    Any idea why this exception is coming?
    Thanks in advance,
    Hitesh
    javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
    java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
         at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
         at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
         at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
         at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
         at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
         at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
         at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)

    hitesh Chauhan wrote:
    Hi All,
    I am getting this error when I do load testing. I have a connection pool for a Sybase database that I am using in my JPD, and I am using the WebLogic Database control to call the Sybase stored procedure. I got the following exception when I was doing load testing with 30 concurrent users. Any idea why this exception is coming?
    Thanks in advance,
    Hitesh

    Hi. Please note that the stack trace and exception above are coming from the PointBase DBMS and have nothing to do with Sybase. It appears to be a configurable limit for PointBase that you are exceeding. Please read the PointBase configuration documents, and/or configure your MDBs to use Sybase.
    Joe

  • Question of Berkeley DB "cache size"

    quote:
    Set the size of the shared memory buffer pool, that is, the size of the cache.
    The cache should be the size of the normal working data set of the application, with some small amount of additional memory for unusual situations. (Note: the working set is not the same as the number of pages accessed simultaneously, and is usually much larger.)
    The default cache size is 256KB, and may not be specified as less than 20KB. Any cache size less than 500MB is automatically increased by 25% to account for buffer pool overhead; cache sizes larger than 500MB are used as specified. The current maximum size of a single cache is 4GB. (All sizes are in powers-of-two, that is, 256KB is 2^18 not 256,000.)
    The database environment's cache size may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_cachesize", one or more whitespace characters, and the cache size specified in three parts: the gigabytes of cache, the additional bytes of cache, and the number of caches, also separated by whitespace characters. For example, "set_cachesize 2 524288000 3" would create a 2.5GB logical cache, split between three physical caches. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
    This method configures a database environment, including all threads of control accessing the database environment, not only the operations performed using a specified Environment handle.
    This method may not be called after the environment has been opened. If joining an existing database environment, any information specified to this method will be ignored.
    This method may be called at any time during the life of the application.
    Parameters:
    cacheSize The size of the shared memory buffer pool, that is, the size of the cache.
    The question:
    I have a host with 16GB of total memory.
    I don't understand what this documentation means.
    What is the maximum cache size that can be set?
    4GB? 16GB?
    Or cacheCount (4) * 4GB = 16GB?
    My Email: [email protected]

    What version of Berkeley DB are you using?
    I'm a little confused about what you are quoting. Most of your quote seems to be from DB_ENV->set_cachesize(), but set_cachesize does not have a parameter named cacheSize. The parameters for set_cachesize are gbytes, bytes and ncache.
    You use set_cachesize to specify the logical cache, which you can optionally split into more than one physical region. The maximum size of the logical cache is 4GB and there is only one logical cache. You specify the total size of the logical cache with the gbytes and bytes parameters. If you set ncache to a value greater than 1, you split this logical cache into separate physical regions. So, for example, if you specify (gbytes=2, bytes=0, ncache=2) you will have a logical cache of 2GB that internally is split into 2 separate physical regions of 1GB each.
    You can read more about the memory pool cache in the Reference Guide sections "Selecting a cache size" and "Configuring the memory pool".
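    As an illustration (a sketch added here, not part of the original reply), the DB_CONFIG line quoted in the question, "set_cachesize 2 524288000 3", corresponds to this C call made on the environment handle before DB_ENV->open():

    #include <db.h>

    /*
     * A 2.5GB logical cache (2GB + 500MB) split across 3 physical regions,
     * matching the DB_CONFIG example "set_cachesize 2 524288000 3".
     * dbenv is assumed to be a DB_ENV handle that has not been opened yet.
     */
    static int
    configure_cache(DB_ENV *dbenv)
    {
        /* gbytes = 2, bytes = 524288000, ncache = 3 */
        return (dbenv->set_cachesize(dbenv, 2, 524288000, 3));
    }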
    If you have other Berkeley DB questions that are not specific to replication, you should direct them to the general Berkeley DB forum where you will have the benefit of a wider set of Berkeley DB experts:
    Berkeley DB
    Paula Bingham
    Oracle

  • Does buffer cache size matters during imp process ?

    Hi,
    Sorry, maybe a naive question, but I can't imagine why Oracle needs the buffer cache (larger = better) during inserts only (an imp process with no index creation).
    As far as I know, the insert is done via the PGA area (direct insert).
    Please clarify this for me.
    DB is 10.2.0.3 if that matters :).
    Regards.
    Greg


  • Index file increase with no corresponding increase in block numbers or Pag file size

    Hi All,
    Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
    I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
    The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB.  We expect this to grow fairly rapidly over the next 12 months or so.
    After performing a simple Agg of aggregating sparse dimensions, the Index file sits at 1.6GB.
    When I then perform a dense restructure, the index file reduces to 0.6GB.  The PAG file remains around 12GB (a minor reduction of 0.4GB occurs).  The number of blocks remains exactly the same.
    If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
    If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
    Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
    Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
    I have tried running the Aggs using parallel calcs, and also as in series (ie single thread) and get exactly the same results.
    I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it.  At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
    After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1. 
    I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
    http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
    However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right?  But I am getting more than 160% growth (1.6GB / 0.6GB).
    And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (ie only 1 Scenario & Version)
    The Index file growth in itself is not a problem.  But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st.  And with the expected growth of the model, this will likely get much worse.
    Anyone have any explanation as to what is occurring, and how to prevent it...?
    Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.

    alan.d wrote:
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
    I haven't tried Direct I/O for quite a while, but I never got it to work properly. Not exactly the same issue that you have, but it would spawn tons of .pag files in the past. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it does the same thing.
    Sabrina

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and hit this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like the following.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added.
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, then it is worth investigating whether this is simply due to standard growth or a recent change that has had an unexpectedly significant impact.

  • Client cache size reverts to default

    (SCCM 2012 SP1 with CU5)
    I've used a powershell script to set the client cache size to 20GB, using get-wmiobject and the Put() function.  I've deployed this to a test collection of PCs using a CI with remediation, and it's both checking and remediating correctly.
    However, while the 'compliance' is reported at near 100%, when manually checking the clients, many have reset to 5120MB. This affects both Windows 7 and 8.x based clients.
    To take one client specifically, at the time of writing this the CI reports it ran and remediated at 11:45am - it is now 8pm, and the cache is back to 5120.
    Triggering the CI to evaluate again sets the cache to 20GB (verified via get-wmiobject), but I'm sure if I check again at some point tomorrow it will have reverted to 5120 again.
    I can't see any evidence in any client log that indicates what could be causing this.  I've looked into how I can see what has made the change in WMI via the Analytic/Debug logs in Event Viewer, but the tracing available needs to be run at the
    time the change is made - and I don't know exactly when that is.
    Any ideas?!
    Thanks, Bob
    Edit: The CI is set to evaluate every day at 9am. Client push properties on the primary site now include SMSCACHESIZE=20480, but this is a recent change - when the existing clients were installed this was not present, so they installed at 5120MB.

    Changing the value directly in WMI is unsupported. You need to use the UIResource.UIResourceMgr COM object to perform this task on the client side:
    http://msdn.microsoft.com/en-us/library/cc145211.aspx
    PowerShell examples are available at
    http://www.david-obrien.net/2013/02/07/how-to-configure-the-configmgr-client/
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Querie regarding cache sizes

    To optimize the calculation script I have set the compression type in my cube to RLE (previously the calculation script ran in about 6 minutes; now it takes 2 minutes, and the data file exported using DATAEXPORT is the same).
    The maximum index cache is set to 4097152 KB (i.e. about 3.9 GB). Is it OK to set the index cache so high even though my index file size is less than 1 GB?
    1) How do I arrive at the maximum value of 36000000 KB for the data cache? What factors do I need to take into consideration?
    2) Data cache maximum: 36000000 KB.
    The data cache is 36000000 KB (i.e. about 34.3 GB). Is that a practical approach?
    Regards
    Shenna

    Hi,
    Index Cache:
    The doc suggests 1 MB of index cache for buffered I/O and 10 MB of index cache for direct I/O.
    While you can use this recommendation to start with, you're the right person to arrive at the actual figure by doing some trials relevant to your environment.
    Data Cache:
    Again, the doc suggests: data cache = 0.125 * the value of the data file cache size,
    where the suggested data file cache size is the combined size of all essn.pag files, if possible; otherwise as large as possible. The data file cache setting is not used if Essbase is set to use buffered I/O.
    It's prudent to do trials independently for each of the caches!
    It's worth reading all the posts of the thread @ Understanding Buffered I/O and Direct I/O to understand experts' opinions !
    Best of luck :)
    - Natesh

  • Solaris 8 cyclical page cache

    I have read Sun's article about the Solaris 8 cyclical page cache, but I still don't understand how it works.
    It does not mention how the kernel divides memory between the I/O page buffer cache and applications.
    Is it on a first-come, first-served basis?
    Is there any way to size the I/O page buffer cache?
    When an application needs a page and none are available, does it take one from the I/O buffer cache?
    Does that mean the I/O page cache will be 0 in size if an application requires all physical memory?
    In short, what is the algorithm?
    I see my database application swapping its executable/data parts heavily. It looks like this is due to the kernel page cache competing with the app for memory. So is the problem still not fixed after the introduction of "priority paging" in Solaris 2.7 and the "cyclical page cache" in Solaris 8?
    Thanks for any feedback.

    It will gradually use all memory except for a meg or so as page cache.
    That's what Solaris does: if there's any memory that's not being used for actual work, it figures it might as well use it as page cache.
    But the page cache will be released immediately if anything else wants the memory.
    So you can think of the page cache pretty much as free memory. In fact, most Solaris tools simply report the page cache as part of free memory.
    So why would you want to limit the page cache?

  • Best size of procedure cache size?

    here is my dbcc memusage output:
    DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.
    Memory Usage:
                                       Meg.           2K Blks                Bytes
          Configured Memory:     14648.4375           7500000          15360000000
    Non Dynamic Structures:         5.5655              2850              5835893
         Dynamic Structures:        70.4297             36060             73850880
               Cache Memory:     13352.4844           6836472          14001094656
          Proc Cache Memory:        85.1484             43596             89284608
              Unused Memory:      1133.9844            580600           1189068800
    So is the proc cache too small? I could put the unused 1133 MB into the proc cache, but many have suggested that the proc cache should be 20% of total memory.
    I'm not sure whether that means 20% of max memory or of total named cache memory.

    Hi
    Database size: 268288.0 MB
    Procedure Cache size is ..
    1> sp_configure 'procedure cache size'
    2> go
    Parameter Name                 Default     Memory Used  Config Value    Run Value         Unit                            Type
    procedure cache size           7000          3362132        1494221           1494221         Memory pages(2k)     dynamic
    1> sp_monitorconfig 'procedure cache size'
    2> go
    Usage information at date and time: May 15 2014 11:48AM.
    Name                      Num_free    Num_active  Pct_act Max_Used    Reuse_cnt   Instance_Name
    procedure cache size          1101704      392517  26.27       787437      746136 NULL
    1> sp_configure 'total logical memory'
    2> go
    Parameter Name           Default     Memory Used   Config Value      Run Value           Unit                         Type
    total logical memory        73728    15624170           7812085             7838533      memory pages(2k)     read-only
    I learned from an ASE expert that the parameter 'Reuse_cnt' should be zero.
    Please advise, with an explanation, whether I need to increase the procedure cache.
    Thanks
    Rajesh

  • Report (not Page) caching to improve report loading times

    We are trying out Crystal Reports Server 2008 as a replacement for Crystal Reports XI R2 for an ASP.NET web application. Running some tests, we found that the same report loaded from the server was far slower than loading it locally using the CR XI SDK.
    Since the Clone/Refresh strategy on the standalone SDK is several times faster than reportAppFactory.OpenDocument on the server platform, I think the server is loading the report from scratch each time I execute the OpenDocument method. So what I am searching for is: is there a way to set up or tweak report caching on Crystal Reports Server 2008? What I need is not page caching but caching of the whole report, which uses a different database each time it is executed (implying different data for each execution).
    Some insight on the problem:
    As I posted in a previous thread, we are moving away from CR XI due to a limit on the number of active instances supported by the SDK (74 instances). We reach this limit because of a workaround for improving report execution times: leaving instances of each report active, so the next execution just recycles the active instance, greatly reducing the overall process time. Therefore, introducing Crystal Server is a setback for this module (due to its current performance).
    Part of the problem resides in the complexity of the report. Using the standalone SDK the report takes several seconds to load from disk, and several seconds to execute. Regrettably, we are not at liberty to change the reports' structure, so optimizing it is beyond possibility for now.
    Thanks in advance,
    Gustavo

    Hello Maggie,
    >> How can I get an all-encompassing CSV file of the main report (Page 2014)?
    This might be possible using the advanced print server configuration, with BI Publisher, using the same technique that is being used to print master-details reports (which is a type of a multi-region report) - http://www.oracle.com/technology/products/database/application_express/html/configure_printing.html . The standard print server configuration only supports reports with a single region. If you have BIP in your organization, that’s great. Otherwise, CSV files don’t warrant it.
    The only other option, I can see, is to create the CSV file manually, using the technique described in the following Blog entry, by Scott Spendolini - http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html .
    Regards,
    Arie.

  • Default cache size reduced to 350 MB (from 1024 MB)

    Why was the default size of the automatic cache management reduced from 1024 MB to 350 MB ?
    How does that improve Firefox ?

    That is a consequence of this bug:
    bug 709297 - Limit disk cache size until/unless "clear recent data" can be done async (https://bugzilla.mozilla.org/show_bug.cgi?id=709297)
    (Please do not comment in bug reports: https://bugzilla.mozilla.org/page.cgi?id=etiquette.html)

  • Swapping and Database Buffer Cache size

    I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for SQL data would not seem to be a problem - unless it is the proportion of the database buffer cache to the rest of the SGA that matters.

    Well, I am always a defender of a large DB buffer cache. Setting a bigger db buffer cache alone will not in any way hurt Oracle performance.
    However, as the buffer cache grows, the time to determine which blocks need to be cleaned increases. Therefore, at a certain point the benefit of a larger cache is offset by the time needed to keep it synced to disk. Beyond that point, increasing the buffer cache size can actually hurt performance. That's the reason why Oracle has checkpoints.
    A checkpoint performs the following three operations:
    1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
    It's the DBWR process that writes all modified database blocks back to the datafiles.
    2. The latest SCN is written (updated) into the datafile header.
    3. The latest SCN is also written to the controlfiles.
    The following events trigger a checkpoint.
    1. Redo log switch
    2. LOG_CHECKPOINT_TIMEOUT has expired
    3. LOG_CHECKPOINT_INTERVAL has been reached
    4. DBA requires so (alter system checkpoint)
