Global cache in clustered environment

We have a clustered, external-facing portal application with four servers, and each server has two nodes. We want to use the ICacheService to store some objects in a cache, but this cache is specific to each JVM and cannot be shared across the servers.
Is there any global cache mechanism in SAP through which we can share the objects across all the servers in a clustered environment?
Thanks in advance.
Ram

Hi Rambhupal,
Could you share any solution or information on this? We have a similar requirement.
Regards,
Melwyn

Similar Messages

  • Cache Seeding in OBI 11g in Clustered Environment on Unix Box

    Hi all,
    I want to do cache seeding for a couple of dashboards in OBI 11g.
    Steps taken:
    1. Global cache is ON
    2. Global shared path is SET
    3. Global cache size = 3 GB
    I am using agents for cache seeding, and they are scheduled after the data load is complete.
    The agent runs successfully early in the morning and delivers a mail.
    But when I run the reports from OBI, it generates the SQL again without hitting the cache.
    It might be due to the clustered environment (2 nodes).
    Please suggest some technique to achieve this cache seeding.
    Thanks in advance

    GLOBAL_CACHE_STORAGE_PATH must point to a network share; all cluster nodes share the same location.
    Check the below setting:
    GLOBAL_CACHE_STORAGE_PATH = "directory name" SIZE;
    e.g.: GLOBAL_CACHE_STORAGE_PATH = "C:\cache" 250 MB;
    CLUSTER_AWARE_CACHE_LOGGING turns on logging for the cluster caching feature. It is used only for troubleshooting. The default is NO.
    Example: CLUSTER_AWARE_CACHE_LOGGING = NO;
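    For reference, a minimal sketch of the relevant [CACHE] entries in NQSConfig.INI for a two-node cluster; the share path and entry count below are illustrative, and the path must be a network share that every node resolves identically:

    [CACHE]
    ENABLE = YES;
    # Illustrative network share; both cluster nodes must see the same path.
    GLOBAL_CACHE_STORAGE_PATH = "/obi_share/global_cache" 3 GB;
    MAX_GLOBAL_CACHE_ENTRIES = 1000;
    CLUSTER_AWARE_CACHE_LOGGING = NO;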

  • Data Caching in a Clustered Environment

    I want to cache read-only reference/code table data that will run in a clustered WLS6 environment. It's a JSP application and I am storing a complete HTML Select Control per reference/code table in the cache. The question is where to cache it? I was going to put it in the ServletContext (JSP "application" implicit object), but the ServletContext is not replicated. I considered using JNDI, but there are problems with duplicate name errors when another server that didn't originally bind the object tries to look up, change and rebind the object. I guess JMS Multicasting is an option, but I don't want to implement JMS just for an application data cache.
    Any suggestions for a simple reference/code table read-only caching strategy that will work in a clustered WLS6 environment?
              

    If the data is strictly read-only, and you do not have to worry about cache integrity, then look at the WebLogic JSP cachetag:
    http://www.weblogic.com/docs51/classdocs/API_jsp.html#cachetag
    You can use it to cache both the output and the calculation results (variables calculated inside the cache tag).
    The scenario will be exactly the same for non-clustered and clustered cases - using multicast to broadcast small invalidation messages (so the data can be refreshed from the database) is OK, but replicating application data is not (and you definitely do not want to use JNDI for this purpose).
    BTW, the initial CacheTag implementation in 5.1 (supposedly) had a 'cluster' scope and I assume it was multicasting fresh data after a cache miss - there is no such scope in the 6.0 implementation.
    If you still want replication you can look at JavaGroups:
    http://sourceforge.net/projects/javagroups/
    (distributedhashtable example).
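    To make the invalidation-message idea concrete, here is a minimal sketch (not from the original thread) of broadcasting cache invalidations with JGroups; the channel name and cache structure are illustrative, using the classic JChannel/ReceiverAdapter API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    public class InvalidatingCache extends ReceiverAdapter {

        private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();
        private final JChannel channel;

        public InvalidatingCache() throws Exception {
            channel = new JChannel();           // default UDP/multicast protocol stack
            channel.setReceiver(this);
            channel.connect("ref-data-cache");  // every cluster node joins this group
        }

        // Called by JGroups when any node broadcasts an invalidation.
        @Override
        public void receive(Message msg) {
            cache.remove((String) msg.getObject()); // next read re-fetches from the DB
        }

        // After updating the database, broadcast just the key, not the data.
        public void invalidate(String key) throws Exception {
            cache.remove(key);
            channel.send(new Message(null, key));   // null destination = whole group
        }

        public Object get(String key) {
            return cache.get(key);   // on a miss the caller reloads from the database
        }

        public void put(String key, Object value) {
            cache.put(key, value);   // local only; peers load lazily on demand
        }
    }

    Note that only small keys cross the wire, matching the advice above: invalidate everywhere, replicate nowhere.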
              Olsen <[email protected]> wrote:
              > Cameron,
              > Thanks for the reply. However, as I stated below, I am not interested in
              > JMS, nor an EJB solution to the problem. It really is not that complicated of
              > a concept and I know a solution or two (ServletContext, JNDI), but none that works
              > in a WLS6 clustered environment.
              > Any other ideas???
              > Thanks...
              > "Cameron Purdy" <[email protected]> wrote:
              >>Dimitri had a clever (as ever) solution using JMS to maintain cache
              >>integrity:
              >>
              >>explanation at
              >>http://dima.dhs.org/misc/readOnlyUpdates.html
              >>
              >>d/l from
              >>http://dima.dhs.org/misc/readOnlyUpdates.jar
              >>
              >>--
              >>Cameron Purdy
              >>Tangosol, Inc.
              >>http://www.tangosol.com
              >>+1.617.623.5782
              >>WebLogic Consulting Available
              >>
              >>
              >>"Olsen" <[email protected]> wrote in message
              >>news:[email protected]...
              >>>
              >>> I want to cache read-only reference/code table data that will run in
              >>a
              >>clustered
              >>> WLS6 environment. It's a JSP application and I am storing a complete
              >>HTML
              >>Select
              >>> Control per reference/code table data in the cache. The question is
              >>where
              >>to
              >>> cache it? I was going to put it in the ServletContext (JSP "application"
              >>implicit
              >>> object), but the ServletContext is not replicated. I considered using
              >>JNDI, but
              >>> there are problems with duplicate name errors when another server who
              >>doesn't
              >>> originally bind the object tries to lookup, change and rebind the object.
              >>I guess
              >>> JMS Multicasting is an option, but I don't want to implement JMS just
              >>for
              >>an application
              >>> data cache.
              >>> Any suggestions for a simple reference/code table read-only caching
              >>strategy that
              >>> will work in a clustered WLS6 environment?
              >>
              >>
              Dimitri
              

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. In our environment with dynamic numbers of active projects this causes a problem, because the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager to dynamically configure the cache sizes of all active BDB environments depending on the given global cache size.
    Like Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing cache sizes for the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine if there are any BDB environments that do not use their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
    Are there any comments on this plan? Is there perhaps a better solution or even an implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach
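    As a rough illustration of the plan above, here is a minimal sketch (an assumption, not Stefan's implementation; the class and the even-split policy are hypothetical) built on the JE APIs he mentions:

    import java.util.List;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;

    public class GlobalCacheManager {

        private final long globalCacheBytes;

        public GlobalCacheManager(long globalCacheBytes) {
            this.globalCacheBytes = globalCacheBytes;
        }

        // Naive policy: split the global budget evenly across active environments.
        // A real manager would weight the split by each environment's utilization,
        // e.g. EnvironmentStats.getCacheDataBytes() vs. getCacheTotalBytes().
        public void rebalance(List<Environment> activeEnvs) throws DatabaseException {
            long perEnv = globalCacheBytes / activeEnvs.size();
            for (Environment env : activeEnvs) {
                EnvironmentMutableConfig config = env.getMutableConfig();
                config.setCacheSize(perEnv);  // bytes; applied without reopening the env
                env.setMutableConfig(config);
            }
        }
    }

    Regarding the timing question: shrinking a cache triggers eviction, which may not release memory instantly, so it is probably safest not to assume the freed bytes are available to another environment immediately.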

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
     * See the file LICENSE for redistribution information.
     *
     * Copyright (c) 2005-2006
     *      Oracle Corporation.  All rights reserved.
     *
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */
    package com.sleepycat.je.util;

    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;

    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
     * straightforward for two reasons. There is some fixed overhead for each btree
     * internal node, so tree fanout and degree of node sparseness impacts memory
     * consumption. In addition, JE compresses some of the internal nodes where
     * possible, but compression depends on on-disk layouts.
     *
     * DbCacheSize is an aid for estimating cache sizes. To get an estimate of the
     * in-memory footprint for a given database, specify the number of records and
     * record characteristics and DbCacheSize will return a minimum and maximum
     * estimate of the cache size required for holding the database in memory.
     * If the user specifies the record's data size, the utility will return both
     * values for holding just the internal nodes of the btree, and for holding the
     * entire database in cache.
     *
     * Note that "cache size" is a percentage more than "btree size", to cover
     * general environment resources like log buffers. Each invocation of the
     * utility returns an estimate for a single database in an environment.  For an
     * environment with multiple databases, run the utility for each database, add
     * up the btree sizes, and then add 10 percent.
     *
     * Note that the utility does not yet cover duplicate records and the API is
     * subject to change release to release.
     *
     * The only required parameters are the number of records and key size.
     * Data size, non-tree cache overhead, btree fanout, and other parameters
     * can also be provided. For example:
     *
     * $ java DbCacheSize -records 554719 -key 16 -data 100
     * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
     * overhead=10%
     *
     *    Cache Size      Btree Size  Description
     *    30,547,440      27,492,696  Minimum, internal nodes only
     *    41,460,720      37,314,648  Maximum, internal nodes only
     *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
     *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
     *
     * Btree levels: 3
     *
     * This says that the minimum cache size to hold only the internal nodes of the
     * btree in cache is approximately 30MB. The maximum size to hold the entire
     * database in cache, both internal nodes and data records, is 125MB.
     */
    public class DbCacheSize {

        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();

        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
            "--------------  --------------  -----------";
        //   12345678901234  12345678901234
        //                 12
        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;

        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;

        public DbCacheSize(long records,
                           int keySize,
                           int dataSize,
                           int nodeMax,
                           int density,
                           long overhead) {
            this.records = records;
            this.keySize = keySize;
            this.dataSize = dataSize;
            this.nodeMax = nodeMax;
            this.density = density;
            this.overhead = overhead;
        }

        public long getMinCacheSizeInternalNodesOnly() {
            return minInCacheSize;
        }

        public long getMaxCacheSizeInternalNodesOnly() {
            return maxInCacheSize;
        }

        public long getMinBtreeSizeInternalNodesOnly() {
            return minInBtreeSize;
        }

        public long getMaxBtreeSizeInternalNodesOnly() {
            return maxInBtreeSize;
        }

        public long getMinCacheSizeWithData() {
            return minInCacheSizeWithData;
        }

        public long getMaxCacheSizeWithData() {
            return maxInCacheSizeWithData;
        }

        public long getMinBtreeSizeWithData() {
            return minInBtreeSizeWithData;
        }

        public long getMaxBtreeSizeWithData() {
            return maxInBtreeSizeWithData;
        }

        public int getNLevels() {
            return nLevels;
        }

        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;

                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
                    String val = null;
                    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
                        i += 1;
                        val = args[i];
                    }
                    if (name.equals("-records")) {
                        if (val == null) {
                            usage("No value after -records");
                        }
                        try {
                            records = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (records <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-key")) {
                        if (val == null) {
                            usage("No value after -key");
                        }
                        try {
                            keySize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (keySize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-data")) {
                        if (val == null) {
                            usage("No value after -data");
                        }
                        try {
                            dataSize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (dataSize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-nodemax")) {
                        if (val == null) {
                            usage("No value after -nodemax");
                        }
                        try {
                            nodeMax = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (nodeMax <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-density")) {
                        if (val == null) {
                            usage("No value after -density");
                        }
                        try {
                            density = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (density < 1 || density > 100) {
                            usage(val + " is not between 1 and 100");
                        }
                    } else if (name.equals("-overhead")) {
                        if (val == null) {
                            usage("No value after -overhead");
                        }
                        try {
                            overhead = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (overhead < 0) {
                            usage(val + " is not a non-negative integer");
                        }
                    } else if (name.equals("-measure")) {
                        if (val == null) {
                            usage("No value after -measure");
                        }
                        measureDir = new File(val);
                    } else if (name.equals("-measurerandom")) {
                        measureRandom = true;
                    } else {
                        usage("Unknown arg: " + name);
                    }
                }

                if (records == 0) {
                    usage("-records not specified");
                }
                if (keySize == 0) {
                    usage("-key not specified");
                }

                DbCacheSize dbCacheSize = new DbCacheSize
                    (records, keySize, dataSize, nodeMax, density, overhead);
                dbCacheSize.calculateCacheSizes();
                dbCacheSize.printCacheSizes(System.out);

                if (measureDir != null) {
                    measure(System.out, measureDir, records, keySize, dataSize,
                            nodeMax, measureRandom);
                }
            } catch (Throwable e) {
                e.printStackTrace(System.out);
            }
        }

        private static void usage(String msg) {
            if (msg != null) {
                System.out.println(msg);
            }
            System.out.println
                ("usage:" +
                 "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
                 "\n -records <count>" +
                 "\n # Total records (key/data pairs); required" +
                 "\n -key <bytes> " +
                 "\n # Average key bytes per record; required" +
                 "\n [-data <bytes>]" +
                 "\n # Average data bytes per record; if omitted no leaf" +
                 "\n # node sizes are included in the output" +
                 "\n [-nodemax <entries>]" +
                 "\n # Number of entries per Btree node; default: 128" +
                 "\n [-density <percentage>]" +
                 "\n # Percentage of node entries occupied; default: 80" +
                 "\n [-overhead <bytes>]" +
                 "\n # Overhead of non-Btree objects (log buffers, locks," +
                 "\n # etc); default: 10% of total cache size" +
                 "\n [-measure <environmentHomeDirectory>]" +
                 "\n # An empty directory used to write a database to find" +
                 "\n # the actual cache size; default: do not measure" +
                 "\n [-measurerandom" +
                 "\n # With -measure insert randomly generated keys;" +
                 "\n # default: insert sequential keys");
            System.exit(2);
        }

        private void calculateCacheSizes() {
            int nodeAvg = (nodeMax * density) / 100;
            long nBinEntries = (records * nodeMax) / nodeAvg;
            long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
            long nInNodes = 0;
            long lnSize = 0;

            for (long n = nBinNodes; n > 0; n /= nodeMax) {
                nInNodes += n;
                nLevels += 1;
            }

            minInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, true);
            maxInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, false);
            minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
            maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);

            if (dataSize > 0) {
                lnSize = records * calcLnSize(dataSize);
                maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
                maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                                           overhead);
                minInBtreeSizeWithData = minInBtreeSize + lnSize;
                minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                                           overhead);
            }
        }

        private void printCacheSizes(PrintStream out) {
            out.println("Inputs:" +
                        " records=" + records +
                        " keySize=" + keySize +
                        " dataSize=" + dataSize +
                        " nodeMax=" + nodeMax +
                        " density=" + density + '%' +
                        " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
            out.println();
            out.println(HEADER);
            out.println(line(minInBtreeSize, minInCacheSize,
                             "Minimum, internal nodes only"));
            out.println(line(maxInBtreeSize, maxInCacheSize,
                             "Maximum, internal nodes only"));
            if (dataSize > 0) {
                out.println(line(minInBtreeSizeWithData,
                                 minInCacheSizeWithData,
                                 "Minimum, internal nodes and leaf nodes"));
                out.println(line(maxInBtreeSizeWithData,
                                 maxInCacheSizeWithData,
                                 "Maximum, internal nodes and leaf nodes"));
            } else {
                out.println("\nTo get leaf node sizing specify -data");
            }
            out.println("\nBtree levels: " + nLevels);
        }

        private int calcInSize(int nodeMax,
                               int nodeAvg,
                               int keySize,
                               boolean lsnCompression) {

            /* Fixed overhead */
            int size = MemoryBudget.IN_FIXED_OVERHEAD;

            /* Byte state array plus keys and nodes arrays */
            size += MemoryBudget.byteArraySize(nodeMax) +
                    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));

            /* LSN array */
            if (lsnCompression) {
                size += MemoryBudget.byteArraySize(nodeMax * 2);
            } else {
                size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
                        (nodeMax * MemoryBudget.LONG_OVERHEAD);
            }

            /* Keys for populated entries plus the identifier key */
            size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);

            return size;
        }

        private int calcLnSize(int dataSize) {
            return MemoryBudget.LN_OVERHEAD +
                   MemoryBudget.byteArraySize(dataSize);
        }

        private long calculateOverhead(long btreeSize, long overhead) {
            long cacheSize;
            if (overhead == 0) {
                cacheSize = (100 * btreeSize) / 90;
            } else {
                cacheSize = btreeSize + overhead;
            }
            return cacheSize;
        }

        private String line(long btreeSize,
                            long cacheSize,
                            String comment) {
            StringBuffer buf = new StringBuffer(100);
            column(buf, INT_FORMAT.format(cacheSize));
            column(buf, INT_FORMAT.format(btreeSize));
            column(buf, comment);
            return buf.toString();
        }

        private void column(StringBuffer buf, String str) {
            int start = buf.length();
            while (buf.length() - start + str.length() < COLUMN_WIDTH) {
                buf.append(' ');
            }
            buf.append(str);
            for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
                buf.append(' ');
            }
        }

        private static void measure(PrintStream out,
                                    File dir,
                                    long records,
                                    int keySize,
                                    int dataSize,
                                    int nodeMax,
                                    boolean randomKeys)
            throws DatabaseException {

            String[] fileNames = dir.list();
            if (fileNames != null && fileNames.length > 0) {
                usage("Directory is not empty: " + dir);
            }

            Environment env = openEnvironment(dir, true);
            Database db = openDatabase(env, nodeMax, true);
            try {
                out.println("\nMeasuring with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
                printStats(out, env,
                           "Stats for internal and leaf nodes (after insert)");

                db.close();
                env.close();
                env = openEnvironment(dir, false);
                db = openDatabase(env, nodeMax, false);

                out.println("\nPreloading with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                preloadRecords(out, db);
                printStats(out, env,
                           "Stats for internal nodes only (after preload)");
            } finally {
                try {
                    db.close();
                    env.close();
                } catch (Exception e) {
                    out.println("During close: " + e);
                }
            }
        }

        private static Environment openEnvironment(File dir, boolean allowCreate)
            throws DatabaseException {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(allowCreate);
            envConfig.setCachePercent(90);
            return new Environment(dir, envConfig);
        }

        private static Database openDatabase(Environment env, int nodeMax,
                                             boolean allowCreate)
            throws DatabaseException {
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(allowCreate);
            dbConfig.setNodeMaxEntries(nodeMax);
            return env.openDatabase(null, "foo", dbConfig);
        }

        private static void insertRecords(PrintStream out,
                                          Environment env,
                                          Database db,
                                          long records,
                                          int keySize,
                                          int dataSize,
                                          boolean randomKeys)
            throws DatabaseException {

            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
            BigInteger bigInt = BigInteger.ZERO;
            Random rnd = new Random(123);

            for (int i = 0; i < records; i += 1) {
                if (randomKeys) {
                    byte[] a = new byte[keySize];
                    rnd.nextBytes(a);
                    key.setData(a);
                } else {
                    bigInt = bigInt.add(BigInteger.ONE);
                    byte[] a = bigInt.toByteArray();
                    if (a.length < keySize) {
                        byte[] a2 = new byte[keySize];
                        System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
                        a = a2;
                    } else if (a.length > keySize) {
                        out.println("*** Key doesn't fit value=" + bigInt +
                                    " byte length=" + a.length);
                        return;
                    }
                    key.setData(a);
                }
                OperationStatus status = db.putNoOverwrite(null, key, data);
                if (status == OperationStatus.KEYEXIST && randomKeys) {
                    i -= 1;
                    out.println("Random key already exists -- retrying");
                    continue;
                }
                if (status != OperationStatus.SUCCESS) {
                    out.println("*** " + status);
                    return;
                }
                if (i % 10000 == 0) {
                    EnvironmentStats stats = env.getStats(null);
                    if (stats.getNNodesScanned() > 0) {
                        out.println("*** Ran out of cache memory at record " + i +
                                    " -- try increasing the Java heap size ***");
                        return;
                    }
                    out.print(".");
                    out.flush();
                }
            }
        }

        private static void preloadRecords(final PrintStream out,
                                           final Database db)
            throws DatabaseException {

            Thread thread = new Thread() {
                public void run() {
                    while (true) {
                        try {
                            out.print(".");
                            out.flush();
                            Thread.sleep(5 * 1000);
                        } catch (InterruptedException e) {
                            break;
                        }
                    }
                }
            };
            thread.start();
            db.preload(0);
            thread.interrupt();
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace(out);
            }
        }

        private static void printStats(PrintStream out,
                                       Environment env,
                                       String msg)
            throws DatabaseException {

            out.println();
            out.println(msg + ':');

            EnvironmentStats stats = env.getStats(null);

            out.println("CacheSize=" +
                        INT_FORMAT.format(stats.getCacheTotalBytes()) +
                        " BtreeSize=" +
                        INT_FORMAT.format(stats.getCacheDataBytes()));

            if (stats.getNNodesScanned() > 0) {
                out.println("*** All records did not fit in the cache ***");
            }
        }
    }

  • Global Caching Strategies

    Hi
    We are designing an application which does not have a web front end, and we need some suggestions on how to do global caching.
    The environment is a cluster of 2 WebLogic 7.02 SP2 app servers.
    We need to cache some application-wide data - we thought of using JNDI as a global cache but later realised that this would not work in a clustered environment.
    The other option is to use a global application cache which would have to be maintained on both servers and any other instances added to the cluster - this cache is not STATIC - rather, it is updated at runtime as requests come in off a JMS queue.
    Therefore it needs to be a truly global cache, i.e. we cannot maintain the same read-only cache on all servers in the cluster. Another option would be to use a stateless bean with JDBC / an entity bean to talk to a database, or a DAO talking to LDAP.
    Can anyone provide suggestions?
    Thanks in advance.

    If you need to manage state efficiently in a WebLogic cluster, I suggest you
    evaluate our Coherence product:
    http://www.tangosol.com/coherence.jsp
    You can share data and manage the concurrent access to it from all nodes in
    the cluster, and it provides data replication and load balancing without any
    single points of failure. Sites like http://www.theserverside.com use it to
    cluster effectively.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Ghulam Shaikh" <[email protected]> wrote in message
    news:3e9d869c$[email protected]..
    >
    > Hi
    > We are designing an application which does not have a web front end and need some suggestions on how to do global caching
    > The environment is a cluster of 2 Weblogic 7.02 SP2 app servers.
    > We need to cache some application wide data - we thought of using JNDI as a global cache but later realised that this would not work in a clustered environment.
    >
    > The other option is to use a global application cache which would have to be maintained on both servers and any other instances added to the cluster - this cache is not STATIC - rather it is updated at runtime as requests come in off a JMS queue.
    >
    > Therefore it needs to be a truly global cache i.e. we cannot maintain the same read only cache on all servers in the cluster. Another option would be to use a stateless bean with JDBC / entity bean to talk to a database or a DAO talking to LDAP.
    > Can anyone provide suggestions ?
    > Thanks in advance.
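    For reference, a minimal sketch (not from this thread; the cache name is illustrative) of the Coherence approach Cameron describes - a NamedCache behaves like a java.util.Map that every JVM joining the cluster sees:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class SharedAppCache {
        public static void main(String[] args) {
            // Every JVM that executes this joins the cluster and sees the same cache.
            NamedCache cache = CacheFactory.getCache("app-data");
            cache.put("rates", "updated-from-JMS-listener"); // visible cluster-wide
            System.out.println(cache.get("rates"));
            CacheFactory.shutdown();
        }
    }

    Because updates are made through the shared map, this avoids the "maintain the same cache on every server" problem: the JMS listener on any one node can update the entry and all nodes observe it.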

  • Deploying Webapps in clustered environment

    Hi All,
    We have started the installation of BI 4.1 SP3. We have 3 application servers on 3 machines (with only one CMS). We have a separate server for Tomcat.
    We have installed Tomcat too on the separate machine.
    We did create the BIlaunchpad.properties, CmcApp.properties and opendocument.properties files in the webapps folder (under BOE).
    The properties of the files look like below:
    [email protected]:6400
    cms.default=USHPEWSAPP743:6400
    Still, we are not able to access the CMS and Launchpad.

    Is your configuration any different in the clustered environment? Are you using cache-coordination/synchronization in TopLink?
    Any idea of what the application is doing that leads to the server running out of memory?
    You may wish to use a memory profiler such as JProfiler or JProbe on the server to determine the cause of the memory leak.

  • How Insert Work on global cache group?

    Hi all, I'm doing some tests of how many transactions per second TimesTen can process.
    With a normal "direct" configuration I reached 5200 transactions per second on my machine (OS: Windows, a normal workstation).
    Now I'm using global cache groups because we need more than one DataSource, and they have to be in sync with each other.
    As I read in the guide, global cache groups are perfect for this purpose.
    After configuring the 2 environments with different TimesTen databases (those machines are SUN servers, much better than my workstation :P), I tried a simple insert test on a single node.
    But I reached only 1500 as the maximum value of transactions per second.
    The 5200 value when testing on my workstation was with a normal Dynamic Cache Group, not Global. So I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
    Some questions:
    1) Before the insert is done on Oracle, does the cache group run some query against the other global cache group to avoid conflicts on the primary key?
    2) Is there any operation performed from one global cache to the others when a statement is sent?
    The 2 global caches are otherwise working well, locking and changing ownership of a cache instance, so no problems detected at the moment about "how they have to work" :).
    The problem is only that we need the global cache to go faster :P - at least the 5200 transactions per second I reached on my workstation.
    Thanks in advance for any suggestion.
    P.S.: I don't know much about the server configuration (OS: Solaris, some version) but anyway, good machines :).

    Okay, the rows here are quite large so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes. These are totally inadequate for this kind of workload. You should increase both to a larger value. For this kind of workload, typical values would be in the range of 256 MB to 1024 MB for both log buffer and log file size. If you are using 32-bit TimesTen you may be constrained on how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen you have no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
    [my_ds]
    LogBufMB=1024
    LogFileSize=1024
    For this change to take effect you need to shutdown (unload from memory) and restart (load back into memory) the datastore.
    Secondly, it's hard to be sure from your example code, but it looks like maybe you are preparing the INSERT each time you execute it? If that is the case this is very expensive and unnecessary. You only need to prepare once and then you can execute many times, as follows:
    insPs = connection.prepareStatement("Insert into test.transactions (ID_,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
    }
    connection.commit();
    This should improve performance noticeably. If you can get away with only committing every 'N' inserts you will see a further uplift. For example:
    int COMMIT_INTVL = 100;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        if ((i % COMMIT_INTVL) == 0) {
            connection.commit();
        }
    }
    connection.commit();
    And lastly, the fastest way of all is to use JDBC batch operations; see the JDBC documentation about batch operations. That will improve insert performance still more.
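    For reference, a minimal sketch of that batch variant, reusing insPs and getSequence() from the snippets above (BATCH_SIZE is illustrative):

    int BATCH_SIZE = 256;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.addBatch();                  // queue the row client-side
        if ((i % BATCH_SIZE) == 0) {
            insPs.executeBatch();          // one round trip for the whole batch
            connection.commit();
        }
    }
    insPs.executeBatch();                  // flush any remainder
    connection.commit();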
    Lastly, a word of caution. Although you will probably be able to easily achieve more than 5000 inserts per second into TimesTen, TimesTen may not be able to push the data to Oracle at this rate. The rate of push to Oracle is likely to be significantly slower. Thus if you are executing a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog will build up (in the TT transaction logs on disk) of data waiting to be pushed to Oracle.
    This kind of setup is not really suited to supporting sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
    Chris

  • WebLogic clustering Environment ...

    Hi,
    I have a clustering environment set up (and see several issues) and was wondering if anyone else has the same kind of setup or is seeing similar issues.
    1) Have a WebLogic proxy on a Win2000 box.
    2) Have two WebLogic instances on another Win2000 box.
    3) Have two WebLogic instances on a third Win2000 box.
    4) Have the DB and 2 app servers on an HPUX box.
    Questions:
    1) Does anyone have this kind of environment running successfully?
    2) Our stuff runs but we see several problems:
    --- Sometimes we see a blank page which comes back with the actual page upon hitting refresh.
    --- Several images don't show up. It tries to look for images in the proxy server cache, but does not find them there. They are in fact in the cache directory on one of the WebLogic instances.
    --- I log in as one user and then log in as another user; it still brings up the page for the previous user.
    Can anyone throw some light on these issues or this config?
    Any help / effort is greatly appreciated.
    Thanks
    Bharat
              

    If you have a significant site (heavy enough traffic or an HA requirement to use >1 proxy) then I suggest putting the static stuff on the proxy or even outside of the WL loop altogether. Generally speaking, it is fine to put it on the proxy, though, like on Apache.
    Here's the problem, though -- most apps hard-code the relative locations of the images. I suggest using a JSP tag for "image path" that uses property (config) information to deref the location of the files. That way you can dev on a single workstation and deploy to a single server, or a cluster, or a cluster with a set of proxies out front.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com
    +1.617.623.5782
    WebLogic Consulting Available
    "Joe" <[email protected]> wrote in message news:3a7a0b6f$[email protected]..
    >
    > I want to ask where should I put the image files in a clustered environment. In the database as a BLOB object or in the Web Server??????
    >
    > If I put images in the web server then it seems to be difficult to maintain if the web site is multilingual. But if I put images in the database then the internal network traffic increases.
    >
    > What is the common practice for this issue??????
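    A minimal sketch (illustrative only; the property name imageBase is hypothetical) of the kind of "image path" tag Cameron suggests above:

    import java.io.IOException;
    import javax.servlet.jsp.JspException;
    import javax.servlet.jsp.tagext.TagSupport;

    public class ImagePathTag extends TagSupport {
        public int doStartTag() throws JspException {
            // Configured per environment, e.g. -DimageBase=http://proxyhost/images
            String base = System.getProperty("imageBase", "/images");
            try {
                pageContext.getOut().print(base);
            } catch (IOException e) {
                throw new JspException(e.getMessage());
            }
            return SKIP_BODY;
        }
    }

    A page would then emit something like <img src="<app:imagePath/>/logo.gif">, so the same JSPs work whether the images live on the local server, the proxy, or a separate static-content host.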
              

  • Global cache question

    Hi,
    I've a question about the global cache (11g).
    Are blocks shared across all nodes when one instance requests them (select), or only when there are transactions?
    I think in both cases, correct?

    In a doc I read about cache synchronization:
    "In an Oracle RAC environment, when users execute queries from different instances, instead of the DBWR process having to retrieve data from the I/O subsystem every single time, data is transferred (traditionally) over the interconnect from one instance to another. (In Oracle Database 11g Release 2, the new "bypass reader" algorithm used in the cache fusion technology bypasses data transfer when large numbers of rows are being read and instead uses the local I/O subsystem from the requesting instance to retrieve data.) This provides considerable performance benefits, because latency of retrieving data from an I/O subsystem is much higher compared to transferring data over the network. Basically, network latency is much lower compared to I/O latency."
    So blocks are shared across all instances in the case of queries that retrieve small numbers of rows.

  • Adminui - Error 500 in clustered environment

    Hi, I'm Alessandro from Italy, and I have a problem in a clustered environment. This is my topology:
    2 servers Red Hat AS4
    2 WebSphere Application Server 6.1 (fix 23) + 1 WebSphere Deployment Manager 6.1 (fix 23)
    2 IBM HTTP Server (fix 23)
    On this cluster I've deployed LiveCycle ES 8.2.1 SP2 using the Configuration Manager without any problem (all steps validated etc. etc.)
    After the deploy I've re-generated the XML plugin, and set up the Gemfire locator (on both nodes).
    Now I have this problem:
    If I try to connect to the adminui directly on a node, all works fine (http://hostname1:9080/adminui or http://hostname2:9080/adminui)
    If I try to connect to the adminui using the HTTP server (http://hostname1/adminui or http://hostname2/adminui) I get an error 500 and this exception in the SystemOut.log:
    [5/20/09 10:13:30:188 CEST] 00000090 ServletWrappe I   SRVE0242I: [LiveCycle8] [/adminui] [FilterProxyServlet]: Initialization successful.
    [5/20/09 10:13:37:726 CEST] 0000008f ServletWrappe E   SRVE0068E: Uncaught exception thrown in one of the service methods of the servlet: Faces Servlet. Exception thrown : java.lang.NullPointerException
        at com.sun.faces.application.ViewHandlerImpl.convertViewId(ViewHandlerImpl.java:881)
        at com.sun.faces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:142)
        at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:87)
        at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:239)
        at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:118)
        at javax.faces.webapp.FacesServlet.service(FacesServlet.java:198)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1143)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1084)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:145)
        at com.adobe.framework.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:173)
        at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:190)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)
        at com.adobe.idp.um.auth.filter.PortalSSOFilter.doFilter(PortalSSOFilter.java:113)
        at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:190)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)
        at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:832)
        at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:679)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:587)
        at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:481)
        at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:90)
        at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:748)
        at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1466)
        at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:119)
        at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:458)
        at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:387)
        at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:102)
        at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
        at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
        at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
        at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:136)
        at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:196)
        at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:751)
        at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:881)
        at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1473)
    [5/20/09 10:13:37:736 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl initialize FFDC0009I: FFDC opened incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_0.txt
    [5/20/09 10:13:37:777 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl resetIncidentStream FFDC0010I: FFDC closed incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_0.txt
    [5/20/09 10:13:37:782 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl open FFDC0009I: FFDC opened incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_1.txt
    [5/20/09 10:13:37:797 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl resetIncidentStream FFDC0010I: FFDC closed incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_1.txt
    [5/20/09 10:13:37:801 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl open FFDC0009I: FFDC opened incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_2.txt
    [5/20/09 10:13:37:834 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl resetIncidentStream FFDC0010I: FFDC closed incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_2.txt
    [5/20/09 10:13:37:840 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl open FFDC0009I: FFDC opened incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_3.txt
    [5/20/09 10:13:37:842 CEST] 0000008f ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl resetIncidentStream FFDC0010I: FFDC closed incident stream file /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/ffdc/STAMPE_TEST_1_0000008f_09.05.20_10.13.37_3.txt
    [5/20/09 10:13:37:844 CEST] 0000008f WebApp        E   [Servlet Error]-[Faces Servlet]: java.lang.NullPointerException
        at com.sun.faces.application.ViewHandlerImpl.convertViewId(ViewHandlerImpl.java:881)
        at com.sun.faces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:142)
        at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:87)
        at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:239)
        at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:118)
        at javax.faces.webapp.FacesServlet.service(FacesServlet.java:198)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1143)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1084)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:145)
        at com.adobe.framework.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:173)
        at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:190)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)
        at com.adobe.idp.um.auth.filter.PortalSSOFilter.doFilter(PortalSSOFilter.java:113)
        at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:190)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)
        at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)
        at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:832)
        at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:679)
        at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:587)
        at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:481)
        at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:90)
        at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:748)
        at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1466)
        at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:119)
        at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:458)
        at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:387)
        at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:102)
        at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
        at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
        at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
        at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:136)
        at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:196)
        at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:751)
        at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:881)
        at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1473)
    But if I turn off one node, all works fine...
    Any ideas?
    Thanks in advance!
    Alessandro
    P.S. I've attached the ffdc related to the exception

    Hi, thank you for your answer. This is my plugin xml:
    <Config ASDisableNagle="false" AcceptAllContent="false" AppServerPortPreference="HostHeader" ChunkedResponse="false" FIPSEnable="false" HTTPMaxHeaders="300" IISDisableNagle="false" IISPluginPriority="High" IgnoreDNSFailures="false" RefreshInterval="60" ResponseChunkSize="64" SSLConsolidate="false" TrustedProxyEnable="false" VHostMatchingCompat="false">
    <Log LogLevel="Error" Name="/opt/IBM/WebSphere/Plugins/logs/http_plugin.log"/>
    <Property Name="ESIEnable" Value="true"/>
    <Property Name="ESIMaxCacheSize" Value="1024"/>
    <Property Name="ESIInvalidationMonitor" Value="false"/>
    <Property Name="ESIEnableToPassCookies" Value="false"/>
    <VirtualHostGroup Name="default_host">
    <VirtualHost Name="*:9080"/>
    <VirtualHost Name="*:80"/>
    <VirtualHost Name="*:9443"/>
    <VirtualHost Name="*:5060"/>
    <VirtualHost Name="*:5061"/>
    <VirtualHost Name="*:443"/>
    <VirtualHost Name="ls001s41-01-was.rmasede.grma.net:9080"/>
    <VirtualHost Name="ls001s41-01-was.rmasede.grma.net:80"/>
    <VirtualHost Name="ls001s41-01-was.rmasede.grma.net:9443"/>
    <VirtualHost Name="ls001s41-01-was.rmasede.grma.net:5060"/>
    <VirtualHost Name="ls001s41-01-was.rmasede.grma.net:5061"/>
    <VirtualHost Name="ls001s41-01-was.rmasede.grma.net:443"/>
    <VirtualHost Name="ls001s42-01-was.rmasede.grma.net:9082"/>
    <VirtualHost Name="ls001s42-01-was.rmasede.grma.net:80"/>
    <VirtualHost Name="ls001s42-01-was.rmasede.grma.net:9443"/>
    <VirtualHost Name="ls001s42-01-was.rmasede.grma.net:5060"/>
    <VirtualHost Name="ls001s42-01-was.rmasede.grma.net:5061"/>
    <VirtualHost Name="ls001s42-01-was.rmasede.grma.net:443"/>
    <VirtualHost Name="*:9082"/>
    </VirtualHostGroup>
    <ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="STAMPE_TEST_Cluster" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server CloneID="1448fbbu8" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" MaxConnections="-1" Name="ls001s41-01-wasNode01_STAMPE_TEST_1" ServerIOTimeout="0" WaitForContinue="false">
    <Transport Hostname="ls001s41-01-was" Port="9080" Protocol="http"/>
    <Transport Hostname="ls001s41-01-was" Port="9443" Protocol="https">
    <Property Name="keyring" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.kdb"/>
    <Property Name="stashfile" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.sth"/>
    </Transport>
    </Server>
    <Server CloneID="145c74c57" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" MaxConnections="-1" Name="ls001s42-01-wasNode01_STAMPE_TEST_2" ServerIOTimeout="0" WaitForContinue="false">
    <Transport Hostname="ls001s42-01-was" Port="9082" Protocol="http"/>
    <Transport Hostname="ls001s42-01-was" Port="9443" Protocol="https">
    <Property Name="keyring" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.kdb"/>
    <Property Name="stashfile" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.sth"/>
    </Transport>
    </Server>
    <PrimaryServers>
    <Server Name="ls001s41-01-wasNode01_STAMPE_TEST_1"/>
    <Server Name="ls001s42-01-wasNode01_STAMPE_TEST_2"/>
    </PrimaryServers>
    </ServerCluster>
    <ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="STAMPEJMS_TEST_Cluster" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server CloneID="144nk4i4n" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" MaxConnections="-1" Name="ls001s41-01-wasNode01_STAMPEJMS_TEST_1" ServerIOTimeout="0" WaitForContinue="false">
    <Transport Hostname="ls001s41-01-was" Port="9081" Protocol="http"/>
    <Transport Hostname="ls001s41-01-was" Port="9444" Protocol="https">
    <Property Name="keyring" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.kdb"/>
    <Property Name="stashfile" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.sth"/>
    </Transport>
    </Server>
    <Server CloneID="144nlgs3s" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" MaxConnections="-1" Name="ls001s42-01-wasNode01_STAMPEJMS_TEST_2" ServerIOTimeout="0" WaitForContinue="false">
    <Transport Hostname="ls001s42-01-was" Port="9081" Protocol="http"/>
    <Transport Hostname="ls001s42-01-was" Port="9444" Protocol="https">
    <Property Name="keyring" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.kdb"/>
    <Property Name="stashfile" Value="/opt/IBM/WebSphere/Plugins/etc/plugin-key.sth"/>
    </Transport>
    </Server>
    <PrimaryServers>
    <Server Name="ls001s41-01-wasNode01_STAMPEJMS_TEST_1"/>
    <Server Name="ls001s42-01-wasNode01_STAMPEJMS_TEST_2"/>
    </PrimaryServers>
    </ServerCluster>
    <ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="dmgr_ls001s40-01-dmrCellManager01_Cluster" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
    <Server ConnectTimeout="0" ExtendedHandshake="false" MaxConnections="-1" Name="ls001s40-01-dmrCellManager01_dmgr" ServerIOTimeout="0" WaitForContinue="false"/>
    <PrimaryServers>
    <Server Name="ls001s40-01-dmrCellManager01_dmgr"/>
    </PrimaryServers>
    </ServerCluster>
    <UriGroup Name="default_host_STAMPE_TEST_Cluster_URIs">
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-lcvalidator/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/datamanagerservice/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adminui/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/AACComponent/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/TrustStoreComponent/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/um/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/umscheduler/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/umstartup/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/umcache/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/repository/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/repository/*.jsp"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/repository/*.jsv"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/repository/*.jsw"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/repository/j_security_check"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/repository/ibm_security_logout"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/workflow-scheduler/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/cache-controller/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/dsc/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/soap/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/bootstrap"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/success"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/failure"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/fetchTasks"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/*.jsp"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/*.jsv"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/*.jsw"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/j_security_check"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-bootstrapper/ibm_security_logout"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-bootstrapper/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-bootstrapper/*.jsp"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-bootstrapper/*.jsv"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-bootstrapper/*.jsw"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-bootstrapper/j_security_check"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-lcm-bootstrapper/ibm_security_logout"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/CoreSystemConfigComponent/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DocumentManager/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DocumentManager/*.jsp"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DocumentManager/*.jsv"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DocumentManager/*.jsw"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DocumentManager/j_security_check"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DocumentManager/ibm_security_logout"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/remoting/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/aac_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/aac_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/aac_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/aac_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/coresystem_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/coresystem_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/coresystem_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/coresystem_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/processmgmt_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/processmgmt_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/processmgmt_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/processmgmt_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/truststore_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/truststore_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/truststore_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/truststore_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/um_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/um_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/um_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/um_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/OutputService/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/OutputAdmin/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/adobe-forms-cacheService/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/XMLFMCallBackService/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/output_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/output_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/output_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/output_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/pdfg-adminui/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/pdfg-ipp/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/pdfg_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/pdfg_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/pdfg_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/pdfg_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/DctmConnectorAdmin/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/documentum_admin_en/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/documentum_admin_de/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/documentum_admin_fr/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/documentum_admin_ja/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/servicesnatives-2/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/xmlformservice/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/convertpdfservice/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/ps2pdf/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/openOffice2pdf/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/img2pdf/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/html2pdf/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/colorProfile/*"/>
    </UriGroup>
    <Route ServerCluster="STAMPE_TEST_Cluster" UriGroup="default_host_STAMPE_TEST_Cluster_URIs" VirtualHostGroup="default_host"/>
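    <!-- Request metrics and all filter entries below are disabled (enable="false"); they are inactive placeholders -->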
    <RequestMetrics armEnabled="false" loggingEnabled="false" rmEnabled="false" traceLevel="HOPS">
    <filters enable="false" type="URI">
    <filterValues enable="false" value="/snoop"/>
    <filterValues enable="false" value="/hitcount"/>
    </filters>
    <filters enable="false" type="SOURCE_IP">
    <filterValues enable="false" value="255.255.255.255"/>
    <filterValues enable="false" value="254.254.254.254"/>
    </filters>
    <filters enable="false" type="JMS">
    <filterValues enable="false" value="destination=aaa"/>
    </filters>
    <filters enable="false" type="WEB_SERVICES">
    <filterValues enable="false" value="wsdlPort=aaa:op=bbb:nameSpace=ccc"/>
    </filters>
    </RequestMetrics>
    </Config>

  • JMX in clustered environment

              Hi all,
              I am a little confused about how WebLogic behaves in the following situation:
              - I have one admin server and 4 managed servers in a clustered environment
              - I deploy an application on all the servers in the cluster
              The application (servlet-based) registers an MBean with the admin server.
              The question I would like to ask is the following:
              if my app is deployed in a cluster with 4 servers, does it mean that I will
              have at least 4 registrations of the same MBean? Because in that case I'll
              have to handle exceptions when the same ObjectName has already been registered.
              Can anyone clarify this for me?
              Thanks in advance and regards
              marco
              

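    For illustration, a minimal sketch of the defensive registration Marco describes, using the standard javax.management API. The domain "myapp", the "server" key, and the use of the platform MBeanServer are all hypothetical choices for the example; the point is that qualifying the ObjectName with the local server's name gives each of the four managed servers a distinct registration:

    import java.lang.management.ManagementFactory;
    import javax.management.InstanceAlreadyExistsException;
    import javax.management.JMException;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ClusterSafeRegistrar {
        // Register an MBean under a name qualified with the local server's
        // name, so four managed servers yield four distinct ObjectNames.
        public static void register(Object mbean, String serverName) throws JMException {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("myapp:type=AppStatus,server=" + serverName);
            try {
                mbs.registerMBean(mbean, name);
            } catch (InstanceAlreadyExistsException e) {
                // Same name already registered (e.g. on redeploy): replace it.
                mbs.unregisterMBean(name);
                mbs.registerMBean(mbean, name);
            }
        }
    }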

  • Working in clustered environment

    Are there any special concerns I should know about KODO JDO when running in a clustered environment???

    Can you post some details about the cluster? Do you mean you will be
    using EJBs, or that you will be running multiple separate instances of
    Kodo on different machines? If the latter, then you will either want to
    use pessimistic locking, or use some other mechanism to make sure that
    your cache doesn't have stale data.
    karim qazi <[email protected]> wrote:
    Are there any special concerns I should know about KODO JDO when running in a clustered environment???
    --
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com
    Kodo Java Data Objects Full featured JDO: eliminate the SQL from your code
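
    To make the pessimistic-locking suggestion concrete, a minimal sketch using only the standard javax.jdo API; the properties file name is a placeholder, and how the lock translates into database locks depends on the Kodo configuration, so treat this as an illustration rather than Kodo-specific advice:

    import java.io.FileInputStream;
    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Transaction;

    public class PessimisticAccess {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileInputStream("kodo.properties")); // placeholder path
            PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
            PersistenceManager pm = pmf.getPersistenceManager();
            Transaction tx = pm.currentTransaction();
            // Datastore (non-optimistic) transaction: rows stay locked while
            // read, so another JVM cannot hand you stale cached state.
            tx.setOptimistic(false);
            tx.begin();
            // ... load and modify persistent objects here ...
            tx.commit();
            pm.close();
        }
    }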

  • Deploying in a clustered environment

    We are using TopLink 10.1.3 in our application and deploying the application in a clustered environment. The application runs fine on a single node, but when deployed in a clustered environment with 2 nodes of JBoss application server (4.0.3 sp1) we encounter an OutOfMemory problem.
    Can anyone tell us whether there are any specific settings that need to be made for TopLink when deploying the application in a clustered environment?
    An early response will be highly appreciated.
    Thanks,

    Is your configuration any different in the clustered environment? Are you using cache-coordination/synchronization in TopLink?
    Any idea of what the application is doing that leads to the server running out of memory?
    You may wish to use a memory profiler such as JProfiler or JProbe on the server to determine the cause of the memory leak.
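
    One common culprit in this situation is unbounded growth of the TopLink session cache (identity maps). As a hedged illustration only (the class and method names follow the oracle.toplink descriptor API; verify them against your 10.1.3 install), a descriptor amendment method that switches one class to a size-bounded soft cache:

    import oracle.toplink.descriptors.ClassDescriptor;

    public class CacheAmendment {
        // Registered as an "amendment method" for the mapped class.
        public static void amendOrder(ClassDescriptor descriptor) {
            // A soft-cache/weak identity map keeps a fixed-size MRU core and
            // lets the garbage collector reclaim the rest, bounding per-node
            // memory use instead of caching every object ever read.
            descriptor.useSoftCacheWeakIdentityMap();
            descriptor.setIdentityMapSize(500);
        }
    }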

  • Singleton behaviour in clustered environment

    Hi,
              I have a very basic question. In a WebLogic clustered
              environment, is a Singleton replicated? In other words, if the
              mySingleton on node A in a cluster is updated with a piece of data,
              will this show in the mySingleton in node B?
              Thanks
              Ciao
              Ferruccio
              

    No. If you need this type of functionality, use Coherence.
              http://www.tangosol.com/coherence.jsp
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com/coherence.jsp
              Tangosol Coherence: Clustered Replicated Cache for Weblogic
              "Ferruccio" <[email protected]> wrote in message
              news:[email protected]..
              > Hi,
              > I have a very basic question. In a WebLogic clustered
              > environment, is a Singleton replicated? In other words, if the
              > mySingleton on node A in a cluster is updated with a piece of data,
              > will this show in the mySingleton in node B?
              >
              > Thanks
              >
              > Ciao
              > Ferruccio
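    For reference, a minimal sketch of the Coherence approach Cameron points to; the cache name and key here are made up for the example. State that would have lived in the singleton goes into a NamedCache instead, so an update on node A is visible on node B:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class SharedState {
        public static void main(String[] args) {
            // The named cache is shared cluster-wide; a put on one node is
            // visible to a get on any other node.
            NamedCache cache = CacheFactory.getCache("my-singleton-state");
            cache.put("lastUpdate", new java.util.Date());
            System.out.println(cache.get("lastUpdate"));
            CacheFactory.shutdown();
        }
    }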
              

  • Design Considerations for a Repository Manager in a clustered environment

    Hi,
    I am currently building a repository manager for a backend document management system.  Our portal is installed in a load-balanced clustered environment, and we are creating the following functionality in the repository manager -
      -  Browse content
      -  Read/Write document Metadata 
      -  Upload/download documentation
      -  Reserve/Unreserve documentation
      -  Search
    How will a clustered environment affect the implementation of the repository manager?  Would I need to check for a clustered installation in any way in my code?  Currently I do not see why this would be necessary, but I'm not entirely sure, and I need this manager to work in a clustered environment.
    I could see this being an issue if we were caching information about the documents in the repository.  However, we are not.  We will only be caching user information, which can be stored separately in a single cache on each server.
    Thanks for your help,
    Scott

    Hi Scott!
    There might be two reasons for knowing whether other cluster nodes are running or not and for communicating with them:
    - synchronizing access to the backend
    - synchronizing caches
    If your repository manager must ensure serialized access to the backend, you will have to synchronize cluster-wide. You don't have to if your backend can handle parallel access on its own (like a web server).
    If you cache data in your repository manager and you want to update the caches of the repository managers on the other cluster nodes (perhaps because you don't get events for changed data from your backend), you must consider sending messages between cluster nodes. You don't have to if the expiration time of your cache entries is shorter than the required timeliness of the data.
    Kind Regards, Dirk
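
    If cross-node cache synchronization does turn out to be necessary, a minimal sketch of the message-based approach Dirk describes, using plain JMS; the JNDI names are placeholders, and the local cache map is a stand-in for whatever per-node cache the repository manager actually keeps:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import javax.jms.*;
    import javax.naming.InitialContext;

    // Broadcasts cache-invalidation keys to all cluster nodes over a JMS topic.
    public class CacheInvalidationBus implements MessageListener {

        private static final Map LOCAL_CACHE = Collections.synchronizedMap(new HashMap());

        private final TopicSession pubSession;
        private final TopicPublisher publisher;

        public CacheInvalidationBus(String factoryJndi, String topicJndi) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup(factoryJndi);
            Topic topic = (Topic) ctx.lookup(topicJndi);
            TopicConnection conn = tcf.createTopicConnection();
            pubSession = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            publisher = pubSession.createPublisher(topic);
            // A second session for the asynchronous consumer: every node,
            // including the sender, subscribes and evicts on notification.
            TopicSession subSession = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            subSession.createSubscriber(topic).setMessageListener(this);
            conn.start();
        }

        // Called on the node that changed a document.
        public void invalidate(String documentId) throws JMSException {
            publisher.publish(pubSession.createTextMessage(documentId));
        }

        // Called on every node: evict the local entry; it is reloaded lazily.
        public void onMessage(Message message) {
            try {
                LOCAL_CACHE.remove(((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }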
