Global cache

Hi Experts
I want to create a global cache for a query to improve its performance. In RSRT we have the options of
Read Mode
         A Query to read all data at once
         X Query to read data during navigation
         H Query to read when you navigate or expand hierarchies
Cache Mode
           1 main memory cache without swapping
           2 main memory cache with swapping
           3 persistent cache per application server
           4 persistent cache across application servers
Persistence Mode
           1 flat file
           2 cluster table
           3 transparent table(BLOB)
Optimization Mode
      0 query will be optimised after generation
Could you tell me which mode I need to choose for the query and how it affects performance?
I have set one of these options and executed the query, then rerun it many times, but it still takes the same amount of time to run. Could you tell me whether it is caching or not, and what the procedure is to set up a query for caching?

Files are only loaded into the system cache when they are specifically imported into it using:
javaws -import -system <url>
/Andy

Similar Messages

  • Top time events showing global cache buffer busy waits

    Can anyone guide me to find the root cause of global cache buffer busy waits?

    "Segments by Global Cache Buffer Busy" output from an AWR report.
    And let us know how many CPUs you have If you don't want to reveal the names of the objects, then change them (but do it in a way that means if two indexes are for the same table then it's visible). The distribution of waits is significant.
    How many of the indexes in the "segments" list are based on an Oracle sequence? Check the CACHE size of those sequences; it probably ought to be at least 1,000 (see the sketch below).
    See note below about producing readable output.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           

  • How does Insert work on a global cache group?

    Hi all, I'm doing some tests on how many transactions per second TimesTen can process.
    With a normal "direct" configuration I reached 5200 transactions per second on my machine (OS: Windows, a normal workstation).
    Now I'm using global cache groups because we need more than one data source, and they have to stay in sync with each other.
    And as I read in the guide, global cache groups are perfect for this purpose.
    After configuring the two environments with different TimesTen databases (those machines are SUN servers, much better than my workstation :P), I tried a simple test
    of insert on a single node.
    But I reached only 1500 as the maximum number of transactions per second.
    The 5200 value from the test on my workstation was with a normal dynamic cache group, not a global one. So I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
    Some questions:
    1) Before the insert is done on Oracle, does the cache group run some query against the other global cache group to avoid conflicts on the primary key?
    2) Is any operation performed from one global cache to the others when a statement is sent?
    The two global caches are otherwise working well, locking and changing ownership of a cache instance, so no problems detected so far about "how they have to work" :).
    The problem is only that we need the global cache to be faster :P, at least the 5200 transactions per second I reached on my workstation.
    Thanks in advance for any suggestion.
    P.S.: I don't know much about the server configuration (some version of Solaris), but they are good machines anyway :).

    Okay, the rows here are quite large so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes. These are totally inadequate for this kind of workload. You should increase both to a larger value. For this kind of workload typical values would be in the range of 256 MB to 1024 MB for both log buffer and log file size. If you are using 32-bit TimesTen you may be constrained in how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen you have no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
    [my_ds]
    LogBufMB=1024
    LogFileSize=1024
    For this change to take effect you need to shutdown (unload from memory) and restart (load back into memory) the datastore.
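    Not part of the original reply, but for reference: with a managed DSN the unload/reload step can typically be done with ttAdmin (using the DSN name my_ds from the example above; check the available options for your TimesTen release):
    ttAdmin -ramUnload my_ds
    ttAdmin -ramLoad my_ds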
    Secondly, it's hard to be sure from your example code, but it looks like maybe you are re-preparing the INSERT each time you execute it? If that is the case, this is very expensive and unnecessary. You only need to prepare once and then you can execute many times, as follows:
    insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        connection.commit();
    }
    This should improve performance noticeably. If you can get away with only committing every 'N' inserts you will see a further uplift. For example:
    int COMMIT_INTVL = 100;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        if ((i % COMMIT_INTVL) == 0) {
            connection.commit();
        }
    }
    connection.commit();
    And lastly, the fastest way of all is to use JDBC batch operations; see the JDBC documentation about batch operations. That will improve insert performance still more.
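    As a rough sketch (not in the original reply), a batched version of the same loop could look like this; the table, columns, and getSequence() helper are the poster's, and the batch size is an arbitrary choice:
    // Hypothetical sketch: same insert, using JDBC batching (addBatch/executeBatch)
    int BATCH_SIZE = 256;
    insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.addBatch();
        if ((i % BATCH_SIZE) == 0) {
            insPs.executeBatch();   // send the accumulated inserts in one round trip
            connection.commit();
        }
    }
    insPs.executeBatch();           // flush any remaining rows
    connection.commit();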
    Lastly, a word of caution. Although you will probably be able to easily achieve more than 5000 inserts per second into TimesTen, TimesTen may not be able to push the data to Oracle at this rate; the rate of push to Oracle is likely to be significantly slower. Thus if you are executing a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog will build up (in the TT transaction logs on disk) of data waiting to be pushed to Oracle.
    This kind of setup is not really suited to support sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
    Chris

  • AE CS6's Global Cache and Premiere Pro

    Not sure if this is an AE or Premiere question...
    Is Premiere Pro CS6 aware of AE's Global Cache? If I have a comp that is cached in AE, and I use Dynamic Link to import that comp into Premiere Pro, does it know that it's already rendered to the cache? Or does Premiere Pro tell AE to re-render it from scratch?

    It's not that Premiere Pro knows anything about the cache. Rather, when Premiere Pro asks After Effects to render the frames and send them over Dynamic Link, the headless version of After Effects can retrieve the frames from the cache if they're there. This does, in fact, make Dynamic Link workflows much faster in CS6.
    Lutz, what bug are you referring to?
    BTW, make sure that you've installed the latest updates (choose Help > Updates). The After Effects updates include a lot of fixes relating to the cache.

  • Global Caching Strategies

    Hi
    We are designing an application which does not have a web front end, and we need some
    suggestions on how to do global caching.
    The environment is a cluster of 2 Weblogic 7.02 SP2 app servers.
    We need to cache some application-wide data - we thought of using JNDI as a global
    cache but later realised that this would not work in a clustered environment.
    The other option is to use a global application cache which would have to be maintained
    on both servers and on any other instances added to the cluster - this cache is not
    STATIC - rather, it is updated at runtime as requests come in off a JMS queue.
    Therefore it needs to be a truly global cache, i.e. we cannot just maintain the same
    read-only cache on all servers in the cluster. Another option would be to use
    a stateless bean with JDBC / an entity bean to talk to a database, or a DAO talking
    to LDAP.
    Can anyone provide suggestions ?
    Thanks in advance.

    If you need to manage state efficiently in a WebLogic cluster, I suggest you
    evaluate our Coherence product:
    http://www.tangosol.com/coherence.jsp
    You can share data and manage the concurrent access to it from all nodes in
    the cluster, and it provides data replication and load balancing without any
    single points of failure. Sites like http://www.theserverside.com use it to
    cluster effectively.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Ghulam Shaikh" <[email protected]> wrote in message
    news:3e9d869c$[email protected]..
    >
    Hi
    We are designing an application which does not have a web front end andneed some
    suggestions on how to do global caching
    The environment is a cluster of 2 Weblogic 7.02 SP2 app servers.
    We need to cache some application wide data - we throught of using JNDI asa global
    cache but later realised that this would not work in a clusteredenvironment.
    >
    The other option is to use a global application cache which would have tobe maintained
    on both servers and any other instances added to the cluster - this cacheis not
    STATIC - rather it is updated at runtime as requests come in off an JMSqueue.
    >
    Therefore it needs to be a truly global cache i.e we cannot maintain thesame
    read only cache on all servers in the cluster. Another option would be touse
    an stateless bean with JDBC / entity bean to talk to a database or a DAOtalking
    to LDAP.
    Can anyone provide suggestions ?
    Thanks in advance.

  • Global Cache and Information Broadcasting

    I've been reading about the Reporting Agent which was in previous versions of BW, and how, when you scheduled a query from it, it would store the result in the global cache. This meant that subsequent runs of the report would be a lot quicker.
    It struck me that this would be a very quick and cost-effective way of speeding up our slower queries (rather than laying down a lot of money on an accelerator).
    However, in BI7 we don't have the same functionality anymore - it's been replaced by Information Broadcasting, which seems a lot more suited to distributing pre-run reports than what I had in mind.
    Is there a way in BI7 to schedule reports to run and store the results in cache memory?

    Hi,
      Even in Information Broadcasting there is an option to fill the OLAP cache. You can use this option. The Reporting Agent is still available in BI7: you can use transaction REPORTING_AGENT to access it, but note that this works only for 3.x queries and not 7.0 queries.
    See the article below (by me) to understand how to fill the OLAP cache using IB:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f048c590-31a4-2c10-8599-bd01fabb93d4
    Regards,
    Raghavendra.

  • EM Alert: Critical: Metrics "Global Cache Average Current Get Time" is at X

    I frequently get this notification email alert from the enterprise manager. What does this mean and should I ignore it or work on it?
    Name=<sid>_<instancename>
    Type=Database Instance
    Host=<servername>.<domainname>
    Metric=Global Cache Average Current Block Request Time (centi-seconds)
    Timestamp=Jan 27, 2009 9:27:20 PM GST
    Severity=Critical
    Message=Metrics "Global Cache Average Current Get Time" is at 1.46154
    Rule Name=XXProd
    Rule Owner=SYS
    [email protected]

    Those metrics show the effect of accessing blocks in the global cache and maintaining cache coherency.
    Basically, the response time for Cache Fusion transfers is determined by the messaging time and
    processing time imposed by the physical interconnect components, the IPC protocol, and the
    GCS protocol.
    Therefore, inter-instance performance issues can be caused by:
    1) under-configured network settings at the OS level
    2) dropped packets, retransmits, or cyclic redundancy check (CRC) errors
    3) a large number of processes in the run queue waiting for CPU
    4) a high value for DB_FILE_MULTIBLOCK_READ_COUNT
    Note also that poor SQL or bad optimization paths can cause additional block gets via the interconnect, as can not using ASSM for locally managed tablespaces, or not using CACHE NOORDER for sequences, and so on.
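    As a rough sketch (not part of the original reply) of how you might watch these averages yourself, assuming the usual gv$sysstat statistic names on 10g/11g (verify the names and units on your release):
    -- Approximate average global cache CR / current get time per instance (centiseconds per block)
    SELECT inst_id,
           SUM(CASE WHEN name = 'gc cr block receive time' THEN value END) /
           NULLIF(SUM(CASE WHEN name = 'gc cr blocks received' THEN value END), 0)      AS avg_cr_get_cs,
           SUM(CASE WHEN name = 'gc current block receive time' THEN value END) /
           NULLIF(SUM(CASE WHEN name = 'gc current blocks received' THEN value END), 0) AS avg_current_get_cs
    FROM   gv$sysstat
    WHERE  name IN ('gc cr block receive time', 'gc cr blocks received',
                    'gc current block receive time', 'gc current blocks received')
    GROUP  BY inst_id;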

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache grid nodes. As we know, a global cache group is dynamic.
    The cache group can be dynamically loaded by primary key or foreign key, as I understand it.
    There are three records in the Oracle base table; one record is loaded in node A, and the other two records in node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I select count(*) in Node A or Node B, the result respectively is 1 and 2.
    The questions are:
    How can I get the real count, 3?
    Is it reasonable to run this kind of query against a global cache group table?
    One idea I have is to create another read-only node for aggregate queries, but that seems weird.
    Thanks very much.
    Regards,
    Nesta
    Edited by: user12240056 on Dec 2, 2009 12:54 AM

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then maybe you would adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
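    For illustration only (the cache group, table, and column names here are made up, and you should check the exact LOAD CACHE GROUP options for your TimesTen release), the pre-load described above might look roughly like this:
    -- Hypothetical: pull the rows you are about to touch into the local grid node first
    LOAD CACHE GROUP my_global_cg
    WHERE my_table.region_id = 42
    COMMIT EVERY 256 ROWS;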
    I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris

  • CS6 Global Cache tips?

    Very excited about CS6's new global hash cache.
    The new "Cache Comp in Background" raises a couple questions in my mind:
    For every frame I see in my comp window, AE has cached that frame as well as the frame of each layer and pre-comp (and pre-pre-comp) to disk?
    Does RAM Preview now also write the global cache to disk as well as load frames into RAM? (Could it be thought of as "Cache Comp in Foreground"?)  If so, does spacebar playback do the same caching as RAM Preview?
    If I RAM Preview "Comp A" at half res, and then at full res, does AE store the global cache for the same comps at both resolutions?
    So, if I RAM Preview my final comp at full res, is there any need to use the "Cache Comp in Background" command?

    Cache Comp in Background makes a point-in-time copy of your comp and renders it to disk.
    Spacebar playback reads from disk into memory. If you have a slow disk subsystem you may get slower-than-realtime playback.
    RAM Preview will try to render as many frames as it can until it runs low on memory. If you have Render Multiple Frames Simultaneously enabled, you might see "switching to foreground process"; this is because it ran low on RAM and is only rendering on one process, but it will keep rendering to disk until it hits the upper limit of your media cache.
    Hope it helps.

  • Express to Global Cache IR repeater

    I want to have a global cache IR send signals from Cinemar DVD Lobby to my Sony DVD changer. The Global Cache can get input from ethernet and convert it to IR.
    My computer is in the office, the DVD player in the living room.
    Can I connect the Global Cache to the Airport Express and send the data wirelessly from my computer? Or is the ethernet port on the Express only for connecting a modem?
    I need the Express to receive and send to my Airport Base Station and send the info out to the controller via the ethernet port. Make sense?

    To activate the Ethernet port on the AirPort Express (AX), it must be wirelessly connected to a network using WDS.
    You can find directions for configuring your AirPort Extreme base station (AEBS) and AX in KB 107454, AirPort Extreme and Express: Using WDS to create a network from multiple base stations.

  • Global cache question

    Hi,
    I have a question about the global cache (11g).
    Are blocks shared across all nodes when one instance requests them (a select), or only when there are transactions?
    I think in both cases, correct?

    In a document I read about cache synchronization:
    "In an Oracle RAC environment, when users execute queries from different instances, instead of the DBWR process having to retrieve data from the I/O subsystem every single time, data is transferred (traditionally) over the interconnect from one instance to another. (In Oracle Database 11g Release 2, the new "bypass reader" algorithm used in the cache fusion technology bypasses data transfer when large numbers of rows are being read and instead uses the local I/O subsystem from the requesting instance to retrieve data.) This provides considerable performance benefits, because latency of retrieving data from an I/O subsystem is much higher compared to transferring data over the network. Basically, network latency is much lower compared to I/O latency."
    So blocks are shared across all instances for queries that retrieve small numbers of rows.

  • Global cache in clustered environment

    We have a clustered, external-facing portal application with four servers, and each server has two nodes. We want to use IcacheService to store some objects in a cache, but this cache is specific to each JVM and cannot be shared across the servers.
    Is there any global cache mechanism in SAP through which we can share the objects across all the servers in a clustered environment?
    Thanks in advance.
    Ram

    Hi Rambhupal,
    Would you share any solution/information for the same as we have a similar requirement.
    Regards,
    Melwyn

  • Getting *Metrics "Global Cache Average CR Get Time" is at* Alerts

    Hi ,
    I am getting *** Metrics "Global Cache Average CR Get Time" is at *** kind of alerts from grid control from my RAC prod DB.
    Can anybody tell me what I should do about these critical alerts? And will they affect the performance of the applications running in my DB?

    This metric refers to the average time necessary to get blocks from the Global cache.
    By itself this does not mean anything.
    Are your applications running on this DB experiencing performance issues?
    If not, you might consider:
    increasing the thresholds for this specific metric (use Monitoring Templates for this)
    disabling this metric (nullify the thresholds)
    regards
    Rob
    http://oemgc.wordpress.com

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. Within our environment, with a dynamic number of active projects, this causes a problem because the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager to dynamically configure the cache sizes of all active BDB environments depending on the given global cache size.
    Like Federico proposed the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that the EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough Cache-RAM available we could even set a larger cache but I do not know if that really makes sense.
    If Cache-Memory is getting tight loading another BDB environment means decreasing cache sizes for the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine if there are any BDB environments that do not use their cache one could query each cache utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
    Are there any comments to this plan? Is there perhaps a better solution or even an implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach
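    A minimal sketch of the resize step described above (not a full cache manager), using the JE calls already named in this post; the class and method names are illustrative:
    // Hypothetical helper around EnvironmentMutableConfig.setCacheSize and EnvironmentStats
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;
    import com.sleepycat.je.EnvironmentStats;
    public class JeCacheResizer {
        // Apply a new cache budget (in bytes) to an already-open environment.
        public static void setCacheBytes(Environment env, long bytes)
            throws DatabaseException {
            EnvironmentMutableConfig mc = env.getMutableConfig();
            mc.setCacheSize(bytes);
            env.setMutableConfig(mc); // takes effect without reopening the environment
        }
        // Fraction of the current cache budget that is actually holding btree data.
        public static double utilization(Environment env) throws DatabaseException {
            EnvironmentStats stats = env.getStats(null);
            return (double) stats.getCacheDataBytes() / (double) stats.getCacheTotalBytes();
        }
    }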

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
     * See the file LICENSE for redistribution information.
    * Copyright (c) 2005-2006
    *      Oracle Corporation.  All rights reserved.
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */
    package com.sleepycat.je.util;
    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;
    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
    * straightforward for two reasons. There is some fixed overhead for each btree
    * internal node, so tree fanout and degree of node sparseness impacts memory
    * consumption. In addition, JE compresses some of the internal nodes where
    * possible, but compression depends on on-disk layouts.
    * DbCacheSize is an aid for estimating cache sizes. To get an estimate of the
    * in-memory footprint for a given database, specify the number of records and
    * record characteristics and DbCacheSize will return a minimum and maximum
    * estimate of the cache size required for holding the database in memory.
    * If the user specifies the record's data size, the utility will return both
    * values for holding just the internal nodes of the btree, and for holding the
    * entire database in cache.
    * Note that "cache size" is a percentage more than "btree size", to cover
    * general environment resources like log buffers. Each invocation of the
    * utility returns an estimate for a single database in an environment.  For an
    * environment with multiple databases, run the utility for each database, add
    * up the btree sizes, and then add 10 percent.
    * Note that the utility does not yet cover duplicate records and the API is
    * subject to change release to release.
    * The only required parameters are the number of records and key size.
    * Data size, non-tree cache overhead, btree fanout, and other parameters
    * can also be provided. For example:
    * $ java DbCacheSize -records 554719 -key 16 -data 100
    * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
    * overhead=10%
    *    Cache Size      Btree Size  Description
    *    30,547,440      27,492,696  Minimum, internal nodes only
    *    41,460,720      37,314,648  Maximum, internal nodes only
    *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
    *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
    * Btree levels: 3
    * This says that the minimum cache size to hold only the internal nodes of the
    * btree in cache is approximately 30MB. The maximum size to hold the entire
     * database in cache, both internal nodes and datarecords, is 125Mb.
     */
    public class DbCacheSize {
        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();
        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
            "--------------  --------------  -----------";
        //   12345678901234  12345678901234
        //                 12
        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;
        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;
        public DbCacheSize (long records,
                   int keySize,
                   int dataSize,
                   int nodeMax,
                   int density,
                   long overhead) {
         this.records = records;
         this.keySize = keySize;
         this.dataSize = dataSize;
         this.nodeMax = nodeMax;
         this.density = density;
         this.overhead = overhead;
        public long getMinCacheSizeInternalNodesOnly() {
         return minInCacheSize;
        public long getMaxCacheSizeInternalNodesOnly() {
         return maxInCacheSize;
        public long getMinBtreeSizeInternalNodesOnly() {
         return minInBtreeSize;
        public long getMaxBtreeSizeInternalNodesOnly() {
         return maxInBtreeSize;
        public long getMinCacheSizeWithData() {
         return minInCacheSizeWithData;
        public long getMaxCacheSizeWithData() {
         return maxInCacheSizeWithData;
        public long getMinBtreeSizeWithData() {
         return minInBtreeSizeWithData;
        public long getMaxBtreeSizeWithData() {
         return maxInBtreeSizeWithData;
        public int getNLevels() {
         return nLevels;
        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
    String val = null;
    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
    i += 1;
    val = args[i];
    if (name.equals("-records")) {
    if (val == null) {
    usage("No value after -records");
    try {
    records = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (records <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-key")) {
    if (val == null) {
    usage("No value after -key");
    try {
    keySize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (keySize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-data")) {
    if (val == null) {
    usage("No value after -data");
    try {
    dataSize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (dataSize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-nodemax")) {
    if (val == null) {
    usage("No value after -nodemax");
    try {
    nodeMax = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (nodeMax <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-density")) {
    if (val == null) {
    usage("No value after -density");
    try {
    density = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (density < 1 || density > 100) {
    usage(val + " is not betwen 1 and 100");
    } else if (name.equals("-overhead")) {
    if (val == null) {
    usage("No value after -overhead");
    try {
    overhead = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (overhead < 0) {
    usage(val + " is not a non-negative integer");
    } else if (name.equals("-measure")) {
    if (val == null) {
    usage("No value after -measure");
    measureDir = new File(val);
    } else if (name.equals("-measurerandom")) {
    measureRandom = true;
    } else {
    usage("Unknown arg: " + name);
    if (records == 0) {
    usage("-records not specified");
    if (keySize == 0) {
    usage("-key not specified");
         DbCacheSize dbCacheSize = new DbCacheSize
              (records, keySize, dataSize, nodeMax, density, overhead);
         dbCacheSize.caclulateCacheSizes();
         dbCacheSize.printCacheSizes(System.out);
    if (measureDir != null) {
    measure(System.out, measureDir, records, keySize, dataSize,
    nodeMax, measureRandom);
    } catch (Throwable e) {
    e.printStackTrace(System.out);
    private static void usage(String msg) {
    if (msg != null) {
    System.out.println(msg);
    System.out.println
    ("usage:" +
    "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
    "\n -records <count>" +
    "\n # Total records (key/data pairs); required" +
    "\n -key <bytes> " +
    "\n # Average key bytes per record; required" +
    "\n [-data <bytes>]" +
    "\n # Average data bytes per record; if omitted no leaf" +
    "\n # node sizes are included in the output" +
    "\n [-nodemax <entries>]" +
    "\n # Number of entries per Btree node; default: 128" +
    "\n [-density <percentage>]" +
    "\n # Percentage of node entries occupied; default: 80" +
    "\n [-overhead <bytes>]" +
    "\n # Overhead of non-Btree objects (log buffers, locks," +
    "\n # etc); default: 10% of total cache size" +
    "\n [-measure <environmentHomeDirectory>]" +
    "\n # An empty directory used to write a database to find" +
    "\n # the actual cache size; default: do not measure" +
    "\n [-measurerandom" +
    "\n # With -measure insert randomly generated keys;" +
    "\n # default: insert sequential keys");
    System.exit(2);
    private void caclulateCacheSizes() {
    int nodeAvg = (nodeMax * density) / 100;
    long nBinEntries = (records * nodeMax) / nodeAvg;
    long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
    long nInNodes = 0;
         long lnSize = 0;
    for (long n = nBinNodes; n > 0; n /= nodeMax) {
    nInNodes += n;
    nLevels += 1;
    minInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, true);
    maxInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, false);
         minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
         maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);
    if (dataSize > 0) {
    lnSize = records * calcLnSize(dataSize);
         maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
         maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                  overhead);
         minInBtreeSizeWithData = minInBtreeSize + lnSize;
         minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                  overhead);
    private void printCacheSizes(PrintStream out) {
    out.println("Inputs:" +
    " records=" + records +
    " keySize=" + keySize +
    " dataSize=" + dataSize +
    " nodeMax=" + nodeMax +
    " density=" + density + '%' +
    " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
    out.println();
    out.println(HEADER);
    out.println(line(minInBtreeSize, minInCacheSize,
                   "Minimum, internal nodes only"));
    out.println(line(maxInBtreeSize, maxInCacheSize,
                   "Maximum, internal nodes only"));
    if (dataSize > 0) {
    out.println(line(minInBtreeSizeWithData,
                   minInCacheSizeWithData,
                   "Minimum, internal nodes and leaf nodes"));
    out.println(line(maxInBtreeSizeWithData,
                   maxInCacheSizeWithData,
    "Maximum, internal nodes and leaf nodes"));
    } else {
    out.println("\nTo get leaf node sizing specify -data");
    out.println("\nBtree levels: " + nLevels);
    private int calcInSize(int nodeMax,
                   int nodeAvg,
                   int keySize,
                   boolean lsnCompression) {
    /* Fixed overhead */
    int size = MemoryBudget.IN_FIXED_OVERHEAD;
    /* Byte state array plus keys and nodes arrays */
    size += MemoryBudget.byteArraySize(nodeMax) +
    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));
    /* LSN array */
         if (lsnCompression) {
         size += MemoryBudget.byteArraySize(nodeMax * 2);
         } else {
         size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
    (nodeMax * MemoryBudget.LONG_OVERHEAD);
    /* Keys for populated entries plus the identifier key */
    size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);
    return size;
    private int calcLnSize(int dataSize) {
    return MemoryBudget.LN_OVERHEAD +
    MemoryBudget.byteArraySize(dataSize);
    private long calculateOverhead(long btreeSize, long overhead) {
    long cacheSize;
    if (overhead == 0) {
    cacheSize = (100 * btreeSize) / 90;
    } else {
    cacheSize = btreeSize + overhead;
         return cacheSize;
    private String line(long btreeSize,
                   long cacheSize,
                   String comment) {
    StringBuffer buf = new StringBuffer(100);
    column(buf, INT_FORMAT.format(cacheSize));
    column(buf, INT_FORMAT.format(btreeSize));
    column(buf, comment);
    return buf.toString();
    private void column(StringBuffer buf, String str) {
    int start = buf.length();
    while (buf.length() - start + str.length() < COLUMN_WIDTH) {
    buf.append(' ');
    buf.append(str);
    for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
    buf.append(' ');
    private static void measure(PrintStream out,
    File dir,
    long records,
    int keySize,
    int dataSize,
    int nodeMax,
    boolean randomKeys)
    throws DatabaseException {
    String[] fileNames = dir.list();
    if (fileNames != null && fileNames.length > 0) {
    usage("Directory is not empty: " + dir);
    Environment env = openEnvironment(dir, true);
    Database db = openDatabase(env, nodeMax, true);
    try {
    out.println("\nMeasuring with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
    printStats(out, env,
    "Stats for internal and leaf nodes (after insert)");
    db.close();
    env.close();
    env = openEnvironment(dir, false);
    db = openDatabase(env, nodeMax, false);
    out.println("\nPreloading with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    preloadRecords(out, db);
    printStats(out, env,
    "Stats for internal nodes only (after preload)");
    } finally {
    try {
    db.close();
    env.close();
    } catch (Exception e) {
    out.println("During close: " + e);
    private static Environment openEnvironment(File dir, boolean allowCreate)
    throws DatabaseException {
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(allowCreate);
    envConfig.setCachePercent(90);
    return new Environment(dir, envConfig);
    private static Database openDatabase(Environment env, int nodeMax,
    boolean allowCreate)
    throws DatabaseException {
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setAllowCreate(allowCreate);
    dbConfig.setNodeMaxEntries(nodeMax);
    return env.openDatabase(null, "foo", dbConfig);
    private static void insertRecords(PrintStream out,
    Environment env,
    Database db,
    long records,
    int keySize,
    int dataSize,
    boolean randomKeys)
    throws DatabaseException {
    DatabaseEntry key = new DatabaseEntry();
    DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
    BigInteger bigInt = BigInteger.ZERO;
    Random rnd = new Random(123);
    for (int i = 0; i < records; i += 1) {
    if (randomKeys) {
    byte[] a = new byte[keySize];
    rnd.nextBytes(a);
    key.setData(a);
    } else {
    bigInt = bigInt.add(BigInteger.ONE);
    byte[] a = bigInt.toByteArray();
    if (a.length < keySize) {
    byte[] a2 = new byte[keySize];
    System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
    a = a2;
    } else if (a.length > keySize) {
    out.println("*** Key doesn't fit value=" + bigInt +
    " byte length=" + a.length);
    return;
    key.setData(a);
    OperationStatus status = db.putNoOverwrite(null, key, data);
    if (status == OperationStatus.KEYEXIST && randomKeys) {
    i -= 1;
    out.println("Random key already exists -- retrying");
    continue;
    if (status != OperationStatus.SUCCESS) {
    out.println("*** " + status);
    return;
    if (i % 10000 == 0) {
    EnvironmentStats stats = env.getStats(null);
    if (stats.getNNodesScanned() > 0) {
    out.println("*** Ran out of cache memory at record " + i +
    " -- try increasing the Java heap size ***");
    return;
    out.print(".");
    out.flush();
    private static void preloadRecords(final PrintStream out,
    final Database db)
    throws DatabaseException {
    Thread thread = new Thread() {
    public void run() {
    while (true) {
    try {
    out.print(".");
    out.flush();
    Thread.sleep(5 * 1000);
    } catch (InterruptedException e) {
    break;
    thread.start();
    db.preload(0);
    thread.interrupt();
    try {
    thread.join();
    } catch (InterruptedException e) {
    e.printStackTrace(out);
    private static void printStats(PrintStream out,
    Environment env,
    String msg)
    throws DatabaseException {
    out.println();
    out.println(msg + ':');
    EnvironmentStats stats = env.getStats(null);
    out.println("CacheSize=" +
    INT_FORMAT.format(stats.getCacheTotalBytes()) +
    " BtreeSize=" +
    INT_FORMAT.format(stats.getCacheDataBytes()));
    if (stats.getNNodesScanned() > 0) {
    out.println("*** All records did not fit in the cache ***");

  • Global Cache Average critical message at "idle nodes" in RAC

    We receive lots of critical messages such as Metrics "Global Cache Average Current Get Time" is at 1.125
    and Metrics "Global Cache Average CR Get Time" is at 1.07843 from the EM Oracle metrics.
    We have a 4-node 11.1 RAC on Red Hat. However, the two nodes that display these critical messages do not host the application; they are the secondary nodes in failover mode.
    I also do not see any user sessions on these nodes in gv$session/v$session. I cannot understand why these "idle" nodes got critical messages.
    Thanks for explaining
    Jin

    Check this: http://download.oracle.com/docs/cd/B19306_01/em.102/b25986/oracle_database.htm#sthref902
