EM Alert: Critical: Metrics "Global Cache Average Current Get Time" is at X

I frequently get this notification email alert from Enterprise Manager. What does it mean, and should I ignore it or act on it?
Name=<sid>_<instancename>
Type=Database Instance
Host=<servername>.<domainname>
Metric=Global Cache Average Current Block Request Time (centi-seconds)
Timestamp=Jan 27, 2009 9:27:20 PM GST
Severity=Critical
Message=Metrics "Global Cache Average Current Get Time" is at 1.46154
Rule Name=XXProd
Rule Owner=SYS
[email protected]

These metrics show the effect of accessing blocks in the global cache and maintaining cache coherency.
Basically, the response time for Cache Fusion transfers is determined by the messaging and
processing time imposed by the physical interconnect components, the IPC protocol, and the
GCS protocol.
Inter-instance performance issues can therefore be caused by:
1) Under-configured network settings at the OS level
2) Dropped packets, retransmits, or cyclic redundancy check (CRC) errors
3) A large number of processes in the run queue waiting for CPU
4) A high value for DB_FILE_MULTIBLOCK_READ_COUNT
Note also that poor SQL or bad optimizer paths can cause additional block gets via the interconnect, as can not using ASSM for locally managed tablespaces or not using CACHE and NOORDER for sequences. A quick sanity check is sketched below.
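
To see whether the underlying times are actually high, you can compute the same averages directly from the instance statistics. A minimal sketch, assuming the 10g+ statistic names (on 9i the equivalent statistics start with 'global cache' instead of 'gc'):

    -- Average global cache CR / current block receive times per instance.
    -- The time statistics are recorded in centiseconds, hence the * 10 for ms.
    select cur_t.inst_id,
           (cr_t.value  / nullif(cr_b.value,  0)) * 10 "AVG GC CR RECEIVE (ms)",
           (cur_t.value / nullif(cur_b.value, 0)) * 10 "AVG GC CURRENT RECEIVE (ms)"
    from   gv$sysstat cr_b, gv$sysstat cr_t, gv$sysstat cur_b, gv$sysstat cur_t
    where  cr_b.name  = 'gc cr blocks received'
    and    cr_t.name  = 'gc cr block receive time'
    and    cur_b.name = 'gc current blocks received'
    and    cur_t.name = 'gc current block receive time'
    and    cr_b.inst_id  = cr_t.inst_id
    and    cr_t.inst_id  = cur_b.inst_id
    and    cur_b.inst_id = cur_t.inst_id;

Averages that stay at a few milliseconds or more point at the interconnect or CPU starvation; occasional spikes on a near-idle instance are often just the effect of averaging over very few transfers.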

Similar Messages

  • Getting *Metrics "Global Cache Average CR Get Time" is at* Alerts

    Hi,
    I am getting *** Metrics "Global Cache Average CR Get Time" is at *** alerts from Grid Control for my RAC production DB.
    Can anybody tell me what I should do about these critical alerts? And will they affect the performance of the applications running in my DB?

    This metric refers to the average time needed to get blocks from the global cache.
    By itself this does not mean anything.
    Are the applications running on this DB experiencing performance issues?
    If not, you might consider:
    increasing the thresholds for this specific metric (use Monitoring Templates for this), or
    disabling this metric (nullify the thresholds).
    Regards,
    Rob
    http://oemgc.wordpress.com

  • Global cache average current block request

    Hi all experts,
    I am getting this alert frequently through 12c EM. Please tell me how I can suppress it so that I don't get it in the future.
    Name=<sid>_<instancename>
    Type=Database Instance
    Host=<servername>.<domainname>
    Metric=Global Cache Average Current Block Request Time (centi-seconds)
    Timestamp xxxxxxxxxxxxxxxxxxxxxxxx
    Severity=Critical
    Message=Metrics "Global Cache Average Current Get Time 62" is at 1.46154
    Rule Name=XXProd
    Rule Owner=SYS

    This metric needs to be disabled in EM at the target level for each instance. You can disable the "Global Cache Statistics" metrics in two different ways:
    Navigate to the instance, then click the Oracle Database drop-down -> Monitoring -> All Metrics -> (expand) Global Cache Statistics
    -> Global Cache Average CR Block Request Time (centi-seconds)
    -> Global Cache Average Current Block Request Time (centi-seconds)
    -> Global Cache Blocks Corrupt
    -> Global Cache Blocks Lost
    a. At the top level (Global Cache Statistics), click the [Modify] button, then the (*) Disable radio button to disable the entire group.
    b. To disable only an individual metric (e.g. Global Cache Average CR Block Request Time (centi-seconds)), click the Modify Thresholds button and delete the values. Note that empty thresholds will disable alerts for that metric.
    The same can be done by navigating to the cluster database instance -> Oracle Database drop-down -> Monitoring -> Metric and Collection Settings -> Global Cache Statistics.

  • Global Cache Average critical message at "idle nodes" in RAC

    We receive lots of critical messages such as Metrics "Global Cache Average Current Get Time" is at 1.125
    and Metrics "Global Cache Average CR Get Time" is at 1.07843 from the EM Oracle metrics.
    We have a 4-node 11.1 RAC on Red Hat. However, the two nodes that display the critical messages do not serve the application; they are secondary nodes in failover mode.
    I also do not see any user sessions on these nodes in gv$session/v$session. I cannot understand why these "idle" nodes get critical messages.
    Thanks for explaining,
    Jin

    Check this: http://download.oracle.com/docs/cd/B19306_01/em.102/b25986/oracle_database.htm#sthref902

  • Avg global enqueue get time (ms)

    I noticed the Avg global enqueue get time (ms) is 214.5. This is as high as I have seen it; what does it mean? The following flush times are also really high:
    Avg global cache cr block flush time (ms): 15.7
    Avg global cache current block flush time (ms): 12.9
    Avg message sent queue time is showing 942.0 ms
    Here are the top events:

    Event                         Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
    cursor: pin S wait on X         414   16,367          39535       88.4  Concurrency
    kksfbc child completion          44    1,799          40887        9.7  Other
    DB CPU                                    240                      1.3
    IPC send completion sync     77,702       25              0         .1  Other
    external table write      2,434,678       20              0         .1  User I/O
    The users are stating that everything is now running many times slower than what we saw in the lab. I verified the interconnect is going across the private interfaces (not public). What else should I be looking at?
    Thanks in advance!!

    Hi Mike,
    I got something for you. Please check this.
    -- GLOBAL CACHE LOCK PERFORMANCE
    -- This shows the average global enqueue get time.
    -- Typically AVG GLOBAL LOCK GET TIME should be 20-30 milliseconds. The elapsed
    -- time for a get includes the allocation and initialization of a new global
    -- enqueue. If the average global enqueue get (global cache get time) or average
    -- global enqueue conversion times are excessive, then your system may be
    -- experiencing timeouts. See the 'WAITING SESSIONS', 'GES LOCK BLOCKERS',
    -- 'GES LOCK WAITERS', and 'TOP 10 WAIT EVENTS ON SYSTEM' sections if the
    -- AVG GLOBAL LOCK GET TIME is high.
    set numwidth 20
    column "AVG GLOBAL LOCK GET TIME (ms)" format 9999999.9
    select b1.inst_id,
           (b1.value + b2.value) "GLOBAL LOCK GETS",
           b3.value "GLOBAL LOCK GET TIME",
           (b3.value / (b1.value + b2.value) * 10) "AVG GLOBAL LOCK GET TIME (ms)"
    from   gv$sysstat b1, gv$sysstat b2, gv$sysstat b3
    where  (b1.name = 'global lock sync gets' and
            b2.name = 'global lock async gets' and
            b3.name = 'global lock get time' and
            b1.inst_id = b2.inst_id and b2.inst_id = b3.inst_id)
       or  (b1.name = 'global enqueue gets sync' and
            b2.name = 'global enqueue gets async' and
            b3.name = 'global enqueue get time' and
            b1.inst_id = b2.inst_id and b2.inst_id = b3.inst_id);
    Thanks,
    Keyur

  • Settings for Generic Alert Log Metric, OEM GC 10204

    OS: AIX. Grid Control version: 10204
    Agent:10204, Repository DB:10204
    This is the setting on a 9208 DB I was testing for the Generic Alert Log Error metric (I had to replace [ square brackets with { curly brackets in the post).
    Warning Metric: ORA-*
    Critical Metric: ORA-0*(60{0-9}|1157|1562|1628|1650|1653|1654|1655|1656|4031|7445|16014|29740){^0-9}
    The warning metric (ORA-*) is set as a catch-all for anything not matched by the critical metric. I am not sure whether there is an order in which the warning vs. critical expressions are evaluated, but so far this has worked for our purposes.
    In GC 10204, we can add an ORA- error to the 'Alert Log Filter Expression', which prevents an alert from being sent out if matched. So if there is a particular ORA-0600 error we do not want sent out as critical, we can add it to the filter to be ignored. In the Advanced Settings GC example, we want to ignore "ORA-00600: internal error code, arguments {qerfxFetch_01}, {}, {}, {}, {}, {}, {}", so we would add it to the filter as ".*ORA-00600:.*\{qerfxFetch{^\}}*\}.*".
    My question: instead of ignoring the error "ORA-00600: internal error code, arguments {qerfxFetch_01}, {}, {}, {}, {}, {}, {}", I want to downgrade it to a warning. Can anyone suggest a way of doing this? I have tried adding {.*ORA-00600:.*\{qerfxFetch{^\}}*\}.*}|ORA-* to the warning metric, but this does not work; the alert still comes out as critical.
    All suggestions are welcome.

    Hi,
    You did not mention the database version. In 11g, the generic alert log metric will not work. For 10g, you can set this from the Metric and Policy Settings page by setting the warning and critical thresholds for this metric to "ORA-*", and then create a notification rule for the metric.
    For 11g, you need to check MOS note 949858.1.
    Salman

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup, and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. In our environment, with a dynamic number of active projects, this causes a problem: the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager that dynamically configures the cache sizes of all active BDB environments within a given global cache size.
    As Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether there are any BDB environments that do not use their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
    Are there any comments on this plan? Is there perhaps a better solution or even an existing implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*-
     * See the file LICENSE for redistribution information.
     *
     * Copyright (c) 2005-2006
     *      Oracle Corporation.  All rights reserved.
     *
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */
    package com.sleepycat.je.util;
    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;
    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
     * straightforward for two reasons. There is some fixed overhead for each btree
     * internal node, so tree fanout and degree of node sparseness impacts memory
     * consumption. In addition, JE compresses some of the internal nodes where
     * possible, but compression depends on on-disk layouts.
     *
     * DbCacheSize is an aid for estimating cache sizes. To get an estimate of the
     * in-memory footprint for a given database, specify the number of records and
     * record characteristics and DbCacheSize will return a minimum and maximum
     * estimate of the cache size required for holding the database in memory.
     * If the user specifies the record's data size, the utility will return both
     * values for holding just the internal nodes of the btree, and for holding the
     * entire database in cache.
     *
     * Note that "cache size" is a percentage more than "btree size", to cover
     * general environment resources like log buffers. Each invocation of the
     * utility returns an estimate for a single database in an environment.  For an
     * environment with multiple databases, run the utility for each database, add
     * up the btree sizes, and then add 10 percent.
     *
     * Note that the utility does not yet cover duplicate records and the API is
     * subject to change release to release.
     *
     * The only required parameters are the number of records and key size.
     * Data size, non-tree cache overhead, btree fanout, and other parameters
     * can also be provided. For example:
     *
     * $ java DbCacheSize -records 554719 -key 16 -data 100
     * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
     * overhead=10%
     *
     *    Cache Size      Btree Size  Description
     *    30,547,440      27,492,696  Minimum, internal nodes only
     *    41,460,720      37,314,648  Maximum, internal nodes only
     *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
     *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
     *
     * Btree levels: 3
     *
     * This says that the minimum cache size to hold only the internal nodes of the
     * btree in cache is approximately 30MB. The maximum size to hold the entire
     * database in cache, both internal nodes and data records, is 125MB.
     */
    public class DbCacheSize {

        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();

        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
            "--------------  --------------  -----------";
        //   12345678901234  12345678901234
        //                 12

        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;

        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;

        public DbCacheSize(long records,
                           int keySize,
                           int dataSize,
                           int nodeMax,
                           int density,
                           long overhead) {
            this.records = records;
            this.keySize = keySize;
            this.dataSize = dataSize;
            this.nodeMax = nodeMax;
            this.density = density;
            this.overhead = overhead;
        }

        public long getMinCacheSizeInternalNodesOnly() {
            return minInCacheSize;
        }

        public long getMaxCacheSizeInternalNodesOnly() {
            return maxInCacheSize;
        }

        public long getMinBtreeSizeInternalNodesOnly() {
            return minInBtreeSize;
        }

        public long getMaxBtreeSizeInternalNodesOnly() {
            return maxInBtreeSize;
        }

        public long getMinCacheSizeWithData() {
            return minInCacheSizeWithData;
        }

        public long getMaxCacheSizeWithData() {
            return maxInCacheSizeWithData;
        }

        public long getMinBtreeSizeWithData() {
            return minInBtreeSizeWithData;
        }

        public long getMaxBtreeSizeWithData() {
            return maxInBtreeSizeWithData;
        }

        public int getNLevels() {
            return nLevels;
        }

        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;

                /* Parse the command-line arguments. */
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
                    String val = null;
                    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
                        i += 1;
                        val = args[i];
                    }
                    if (name.equals("-records")) {
                        if (val == null) {
                            usage("No value after -records");
                        }
                        try {
                            records = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (records <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-key")) {
                        if (val == null) {
                            usage("No value after -key");
                        }
                        try {
                            keySize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (keySize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-data")) {
                        if (val == null) {
                            usage("No value after -data");
                        }
                        try {
                            dataSize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (dataSize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-nodemax")) {
                        if (val == null) {
                            usage("No value after -nodemax");
                        }
                        try {
                            nodeMax = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (nodeMax <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-density")) {
                        if (val == null) {
                            usage("No value after -density");
                        }
                        try {
                            density = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (density < 1 || density > 100) {
                            usage(val + " is not between 1 and 100");
                        }
                    } else if (name.equals("-overhead")) {
                        if (val == null) {
                            usage("No value after -overhead");
                        }
                        try {
                            overhead = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (overhead < 0) {
                            usage(val + " is not a non-negative integer");
                        }
                    } else if (name.equals("-measure")) {
                        if (val == null) {
                            usage("No value after -measure");
                        }
                        measureDir = new File(val);
                    } else if (name.equals("-measurerandom")) {
                        measureRandom = true;
                    } else {
                        usage("Unknown arg: " + name);
                    }
                }
                if (records == 0) {
                    usage("-records not specified");
                }
                if (keySize == 0) {
                    usage("-key not specified");
                }

                DbCacheSize dbCacheSize = new DbCacheSize
                    (records, keySize, dataSize, nodeMax, density, overhead);
                dbCacheSize.caclulateCacheSizes();
                dbCacheSize.printCacheSizes(System.out);

                if (measureDir != null) {
                    measure(System.out, measureDir, records, keySize, dataSize,
                            nodeMax, measureRandom);
                }
            } catch (Throwable e) {
                e.printStackTrace(System.out);
            }
        }

        private static void usage(String msg) {
            if (msg != null) {
                System.out.println(msg);
            }
            System.out.println
                ("usage:" +
                 "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
                 "\n -records <count>" +
                 "\n # Total records (key/data pairs); required" +
                 "\n -key <bytes> " +
                 "\n # Average key bytes per record; required" +
                 "\n [-data <bytes>]" +
                 "\n # Average data bytes per record; if omitted no leaf" +
                 "\n # node sizes are included in the output" +
                 "\n [-nodemax <entries>]" +
                 "\n # Number of entries per Btree node; default: 128" +
                 "\n [-density <percentage>]" +
                 "\n # Percentage of node entries occupied; default: 80" +
                 "\n [-overhead <bytes>]" +
                 "\n # Overhead of non-Btree objects (log buffers, locks," +
                 "\n # etc); default: 10% of total cache size" +
                 "\n [-measure <environmentHomeDirectory>]" +
                 "\n # An empty directory used to write a database to find" +
                 "\n # the actual cache size; default: do not measure" +
                 "\n [-measurerandom]" +
                 "\n # With -measure insert randomly generated keys;" +
                 "\n # default: insert sequential keys");
            System.exit(2);
        }

        private void caclulateCacheSizes() {
            int nodeAvg = (nodeMax * density) / 100;
            long nBinEntries = (records * nodeMax) / nodeAvg;
            long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
            long nInNodes = 0;
            long lnSize = 0;

            /* Count internal nodes at each btree level. */
            for (long n = nBinNodes; n > 0; n /= nodeMax) {
                nInNodes += n;
                nLevels += 1;
            }
            minInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, true);
            maxInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, false);
            minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
            maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);

            if (dataSize > 0) {
                lnSize = records * calcLnSize(dataSize);
                maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
                maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                                           overhead);
                minInBtreeSizeWithData = minInBtreeSize + lnSize;
                minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                                           overhead);
            }
        }

        private void printCacheSizes(PrintStream out) {
            out.println("Inputs:" +
                        " records=" + records +
                        " keySize=" + keySize +
                        " dataSize=" + dataSize +
                        " nodeMax=" + nodeMax +
                        " density=" + density + '%' +
                        " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
            out.println();
            out.println(HEADER);
            out.println(line(minInBtreeSize, minInCacheSize,
                             "Minimum, internal nodes only"));
            out.println(line(maxInBtreeSize, maxInCacheSize,
                             "Maximum, internal nodes only"));
            if (dataSize > 0) {
                out.println(line(minInBtreeSizeWithData,
                                 minInCacheSizeWithData,
                                 "Minimum, internal nodes and leaf nodes"));
                out.println(line(maxInBtreeSizeWithData,
                                 maxInCacheSizeWithData,
                                 "Maximum, internal nodes and leaf nodes"));
            } else {
                out.println("\nTo get leaf node sizing specify -data");
            }
            out.println("\nBtree levels: " + nLevels);
        }

        private int calcInSize(int nodeMax,
                               int nodeAvg,
                               int keySize,
                               boolean lsnCompression) {

            /* Fixed overhead */
            int size = MemoryBudget.IN_FIXED_OVERHEAD;

            /* Byte state array plus keys and nodes arrays */
            size += MemoryBudget.byteArraySize(nodeMax) +
                    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));

            /* LSN array */
            if (lsnCompression) {
                size += MemoryBudget.byteArraySize(nodeMax * 2);
            } else {
                size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
                        (nodeMax * MemoryBudget.LONG_OVERHEAD);
            }

            /* Keys for populated entries plus the identifier key */
            size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);

            return size;
        }

        private int calcLnSize(int dataSize) {
            return MemoryBudget.LN_OVERHEAD +
                   MemoryBudget.byteArraySize(dataSize);
        }

        private long calculateOverhead(long btreeSize, long overhead) {
            long cacheSize;
            if (overhead == 0) {
                cacheSize = (100 * btreeSize) / 90;
            } else {
                cacheSize = btreeSize + overhead;
            }
            return cacheSize;
        }

        private String line(long btreeSize,
                            long cacheSize,
                            String comment) {
            StringBuffer buf = new StringBuffer(100);
            column(buf, INT_FORMAT.format(cacheSize));
            column(buf, INT_FORMAT.format(btreeSize));
            column(buf, comment);
            return buf.toString();
        }

        private void column(StringBuffer buf, String str) {
            int start = buf.length();
            while (buf.length() - start + str.length() < COLUMN_WIDTH) {
                buf.append(' ');
            }
            buf.append(str);
            for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
                buf.append(' ');
            }
        }

        private static void measure(PrintStream out,
                                    File dir,
                                    long records,
                                    int keySize,
                                    int dataSize,
                                    int nodeMax,
                                    boolean randomKeys)
            throws DatabaseException {

            String[] fileNames = dir.list();
            if (fileNames != null && fileNames.length > 0) {
                usage("Directory is not empty: " + dir);
            }

            Environment env = openEnvironment(dir, true);
            Database db = openDatabase(env, nodeMax, true);
            try {
                out.println("\nMeasuring with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                insertRecords(out, env, db, records, keySize, dataSize,
                              randomKeys);
                printStats(out, env,
                           "Stats for internal and leaf nodes (after insert)");

                /* Reopen and preload to measure internal nodes only. */
                db.close();
                env.close();
                env = openEnvironment(dir, false);
                db = openDatabase(env, nodeMax, false);

                out.println("\nPreloading with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                preloadRecords(out, db);
                printStats(out, env,
                           "Stats for internal nodes only (after preload)");
            } finally {
                try {
                    db.close();
                    env.close();
                } catch (Exception e) {
                    out.println("During close: " + e);
                }
            }
        }

        private static Environment openEnvironment(File dir,
                                                   boolean allowCreate)
            throws DatabaseException {

            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(allowCreate);
            envConfig.setCachePercent(90);
            return new Environment(dir, envConfig);
        }

        private static Database openDatabase(Environment env, int nodeMax,
                                             boolean allowCreate)
            throws DatabaseException {

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(allowCreate);
            dbConfig.setNodeMaxEntries(nodeMax);
            return env.openDatabase(null, "foo", dbConfig);
        }

        private static void insertRecords(PrintStream out,
                                          Environment env,
                                          Database db,
                                          long records,
                                          int keySize,
                                          int dataSize,
                                          boolean randomKeys)
            throws DatabaseException {

            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
            BigInteger bigInt = BigInteger.ZERO;
            Random rnd = new Random(123);

            for (int i = 0; i < records; i += 1) {
                if (randomKeys) {
                    byte[] a = new byte[keySize];
                    rnd.nextBytes(a);
                    key.setData(a);
                } else {
                    bigInt = bigInt.add(BigInteger.ONE);
                    byte[] a = bigInt.toByteArray();
                    if (a.length < keySize) {
                        /* Left-pad the key to the requested size. */
                        byte[] a2 = new byte[keySize];
                        System.arraycopy(a, 0, a2, a2.length - a.length,
                                         a.length);
                        a = a2;
                    } else if (a.length > keySize) {
                        out.println("*** Key doesn't fit value=" + bigInt +
                                    " byte length=" + a.length);
                        return;
                    }
                    key.setData(a);
                }
                OperationStatus status = db.putNoOverwrite(null, key, data);
                if (status == OperationStatus.KEYEXIST && randomKeys) {
                    i -= 1;
                    out.println("Random key already exists -- retrying");
                    continue;
                }
                if (status != OperationStatus.SUCCESS) {
                    out.println("*** " + status);
                    return;
                }
                if (i % 10000 == 0) {
                    EnvironmentStats stats = env.getStats(null);
                    if (stats.getNNodesScanned() > 0) {
                        out.println("*** Ran out of cache memory at record " +
                                    i +
                                    " -- try increasing the Java heap size ***");
                        return;
                    }
                    out.print(".");
                    out.flush();
                }
            }
        }

        private static void preloadRecords(final PrintStream out,
                                           final Database db)
            throws DatabaseException {

            /* Print progress dots while the preload runs. */
            Thread thread = new Thread() {
                public void run() {
                    while (true) {
                        try {
                            out.print(".");
                            out.flush();
                            Thread.sleep(5 * 1000);
                        } catch (InterruptedException e) {
                            break;
                        }
                    }
                }
            };
            thread.start();
            db.preload(0);
            thread.interrupt();
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace(out);
            }
        }

        private static void printStats(PrintStream out,
                                       Environment env,
                                       String msg)
            throws DatabaseException {

            out.println();
            out.println(msg + ':');
            EnvironmentStats stats = env.getStats(null);
            out.println("CacheSize=" +
                        INT_FORMAT.format(stats.getCacheTotalBytes()) +
                        " BtreeSize=" +
                        INT_FORMAT.format(stats.getCacheDataBytes()));
            if (stats.getNNodesScanned() > 0) {
                out.println("*** All records did not fit in the cache ***");
            }
        }
    }

  • Generic Alert log metric

    I'm trying to figure out how to set up the generic alert log metric to run a corrective action based on a warning.
    When I try to work with the generic alert log metric, it provides a time/line number field which has an "all others" entry with a warning and a critical expression.
    If I try to add a new line with a % in the time/line number and a warning expression, it won't let me add it.
    What am I doing wrong here? Obviously, since this is the alert log, I will not know the time or line number of when I will receive this error, unless I'm reading something wrong.

    Regarding "if I try to add a new line with a % in the time/line number and warning expression it won't let me add it": try "%" (in quotes).

  • Top time events showing global cache buffer busy waits

    Can anyone guide me to finding the root cause of global cache buffer busy waits?

    "Segments by Global Cache Buffer Busy" output from an AWR report.
    And let us know how many CPUs you have If you don't want to reveal the names of the objects, then change them (but do it in a way that means if two indexes are for the same table then it's visible). The distribution of waits is significant.
    How many of the indexes in the "segments" are based on an Oracle Sequence ? Check the values for the CACHE size of those sequences they probably ought to be at least 1,000.
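    A minimal sketch of that check (DBA_SEQUENCES is the standard dictionary view; the sequence name in the ALTER is hypothetical, and 1,000 is the suggestion above, not a universal rule):

        -- Sequences with small caches: candidates for index-block contention in RAC.
        select sequence_owner, sequence_name, cache_size
        from   dba_sequences
        where  cache_size < 1000
        order by cache_size;

        -- Increase the cache for a hot sequence:
        alter sequence app_owner.order_seq cache 1000;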
    See note below about producing readable output.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           

  • How does INSERT work on a global cache group?

    Hi all, I'm doing some tests of how many transactions per second TimesTen can process.
    With a normal "direct" configuration I reached 5200 transactions per second on my machine (a normal Windows workstation).
    Now I'm using global cache groups, because we need more than one datasource and they have to be in sync with each other.
    As I read in the guide, global cache groups are perfect for this purpose.
    After configuring the two environments with different TimesTen databases (those machines are SUN servers, much better than my workstation :P), I tried a simple insert test on a single node.
    But I reached only 1500 as the maximum number of transactions per second.
    The 5200 value from testing on my workstation was with a normal dynamic cache group, not global. So I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
    Some questions:
    1) Before the insert is done on Oracle, does the cache group run a query against the other global cache group to avoid conflicts on the primary key?
    2) Is any operation performed from one global cache to the others when a statement is sent?
    The two global caches are otherwise working well, locking and changing ownership on a cache instance, so no problems detected so far with "how they have to work" :).
    The problem is only that we need the global cache to do it much faster :P, at least the 5200 transactions per second I reached on my workstation.
    Thanks in advance for any suggestions.
    P.S.: I don't know much about the server configuration (Solaris, some version) but they are good machines :).

    Okay, the rows here are quite large, so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes; these are totally inadequate for this kind of workload and you should increase both. For this kind of workload, typical values would be in the range of 256 MB to 1024 MB for both the log buffer and the log file size. If you are using 32-bit TimesTen you may be constrained in how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen you have no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
    [my_ds]
    LogBufMB=1024
    LogFileSize=1024
    For this change to take effect you need to shut down (unload from memory) and restart (load back into memory) the datastore.
    Secondly, it's hard to be sure from your example code, but it looks like you may be preparing the INSERT each time you execute it. If so, that is very expensive and unnecessary. You only need to prepare once, and then you can execute many times, as follows:
    insPs = connection.prepareStatement(
        "Insert into test.transactions (ID_, NUMBE, SHORT_CODE, REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
    }
    connection.commit();
    This should improve performance noticeably. If you can get away with committing only every 'N' inserts, you will see a further uplift. For example:
    int COMMIT_INTVL = 100;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        if ((i % COMMIT_INTVL) == 0) {
            connection.commit();
        }
    }
    connection.commit();
    And lastly, the fastest way of all is to use JDBC batch operations; see the JDBC documentation about batch operations. That will improve insert performance still more.
    A word of caution, though. Although you will probably be able to achieve more than 5000 inserts per second into TimesTen fairly easily, TimesTen may not be able to push the data to Oracle at that rate; the rate of push to Oracle is likely to be significantly slower. Thus, if you are executing a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog will build up (in the TT transaction logs on disk) of data waiting to be pushed to Oracle.
    This kind of setup is not really suited to sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
    Chris

  • How to get the EJB "Cached Beans Current Count" in Command-Line

    I use the WebLogic console to monitor the EJB "Cached Beans Current Count" property on the monitoring tab, but I want to get the result on the command line using weblogic.Admin.
    What arguments should be used?
    I can get the EJB properties using:
    java -cp weblogic.jar weblogic.Admin -url t3://svr:7001 -username weblogic -password weblogic GET -mbean mydomain:Application=myear,Name=myejb.jar,Type=EJBComponent
    How can I get the "Cached Beans Current Count"?

    Hello,
    This should do the trick
    $JAVA_HOME/bin/java -cp $CLASSPATH weblogic.Admin -url <server_url> -username <username> -password <password> GET -pretty -type EJBCacheRuntime -property CachedBeansCurrentCount
    Also see http://e-docs.bea.com/wls/docs81/admin_ref/cli.html for more info.
    Cheers,
    Hoos

  • EM Alert: Critical: PROD.test.edu.au - Failed to connect to database instance

    Hi experts,
    Today I got this alert from OEM Grid during the RMAN backup of the PROD database. When I try to connect to the database, I can get into the DB successfully without any issues.
    But I am not sure why this error came up, or the reason behind it. Please advise.
    From: Oracle OEM [mailto:[email protected]]
    Sent: Tuesday, February 22, 2011 11:21 PM
    To: DBA_GROUP
    Subject: EM Alert: Critical:PROD.test.edu.au - Failed to connect to database instance: ORA-00257: archiver error. Connect internal only, until freed. (DBD ERROR: OCISessionBegin).
    Target Name=PROD.test.edu.au
    Target type=Database Instance
    Host=delhi.win.test.edu.au
    Occurred At=Feb 22, 2011 11:20:33 PM EST
    Message=Failed to connect to database instance: ORA-00257: archiver error. Connect internal only, until freed. (DBD ERROR: OCISessionBegin).
    Severity=Critical
    Acknowledged=No
    Notification Rule Name=Database Availability and Critical States
    Notification Rule Owner=SYSMAN
    When I check the alert log file I can see the following:
    Tue Feb 22 23:01:36 2011
    Starting control autobackup
    Control autobackup written to DISK device
    handle '/oracledb/rman_backup/PROD/control_c-197342269-20110222-01'
    Tue Feb 22 23:22:42 2011
    Errors in file /oracle/admin/PROD/udump/hubprd_ora_27069.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01438: value larger than specified precision allowed for this column
    ORA-06512: at line 2
    Please let me know the cause of these issues and a solution to overcome them.
    Regards,
    Salai

    It would appear that your archive log destination is full. If it is full, then try to get some more free space there, either by deleting old archive logs (after ensuring they are backed up) or by specifying a different location.
    I don't see any mention of a flash recovery area in the log message, but you may also want to confirm whether you have exhausted the space allocated to the FRA.
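    A quick way to check the recovery area usage (both views are standard once a recovery area is configured):

        -- Overall limit vs. usage, and what is consuming the space.
        select space_limit, space_used, space_reclaimable, number_of_files
        from   v$recovery_file_dest;

        select file_type, percent_space_used, percent_space_reclaimable
        from   v$flash_recovery_area_usage;

    If the archive destination is a plain filesystem rather than the FRA, check the free space at the OS level and remove archive logs through RMAN, in line with your backup policy, rather than deleting them directly.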

  • Global cache

    Hi experts,
    I want to create a global cache for a query to improve its performance. In RSRT we have the following options:
    Read Mode
             A  Query to read all data at once
             X  Query to read data during navigation
             H  Query to read when you navigate or expand hierarchies
    Cache Mode
               1  Main memory cache without swapping
               2  Main memory cache with swapping
               3  Persistent cache per application server
               4  Persistent cache across each application server
    Persistence Mode
               1  Flat file
               2  Cluster table
               3  Transparent table (BLOB)
    Optimization Mode
          0  Query will be optimized after generation
    Could you tell me which mode I need to choose for the query, and how each one affects it?
    I have set one of these options, executed it, and then rerun my query many times, but it still takes the same amount of time to run. Could you tell me whether it is caching or not, and what the procedure is to set up the query for caching?

    Files are only loaded into the system cache when specifically imported into the system cache using:
    javaws -import -system <url>
    /Andy

  • EM Alert: Critical

    Dear Legends,
    In our RAC we have been facing this error since yesterday; we verified that it is because trace files are being generated every 10 seconds.
    Symptom: EM Alert: Critical: Filesystem /u02 has 4.99% available space, fallen below warning (20) or critical (5) threshold.
    We do not generate any traces manually.
    Q: Is there a way to find out how the traces are being generated?
    Q: If not, how do we proceed with this error? Can we check with TKPROF/EXPLAIN PLAN?
    Any kind of help would be appreciated much.
    Thanks,
    Karthik

    Hi,
    You can run the queries below to check whether any trace/event parameters are set.
    Events:
        SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.gv$parameter2  WHERE  UPPER(name) ='EVENT' AND  isdefault='FALSE';
    Trace Events:
        SELECT (translate(value,chr(13)||chr(10),' ')) from sys.gv$parameter2 WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE';
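    If this is 11g or later, V$DIAG_INFO shows where the traces are being written, which makes it easier to see which process is producing them (the NAME values below are the documented ones):

        -- Locate the ADR trace and alert directories for this instance.
        select name, value
        from   v$diag_info
        where  name in ('Diag Trace', 'Diag Alert', 'Default Trace File');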
    HTH

  • AE CS6's Global Cache and Premiere Pro

    Not sure if this is an AE or Premiere question...
    Is Premiere Pro CS6 aware of AE's Global Cache? If I have a comp that is cached in AE, and I use Dynamic Link to import that comp into Premiere Pro, does it know that it's already rendered to the cache? Or does Premiere Pro tell AE to re-render it from scratch?

    It's not that Premiere Pro knows anything about the cache. Rather, when Premiere Pro asks After Effects to render the frames and send them over Dynamic Link, the headless version of After Effects can retrieve the frames from the cache if they're there. This does, in fact, make Dynamic Link workflows much faster in CS6.
    Lutz, what bug are you referring to?
    BTW, make sure that you've installed the latest updates (choose Help > Updates). The After Effects updates include a lot of fixes relating to the cache.
