ObjectName conventions for multi-classloader, clustered environments

I've been reading a number of JMX documents, including these best-practice articles [1] [2], but I've run into a question about the practical application of the ObjectName conventions in a multi-classloader and/or clustered environment.
The general suggestion is to use a singleton name that's unique within an MBeanServer (environment), for example my.counter:name=CounterPool. This is fine when running one instance of an object within a JVM, but what about multi-classloader environments where an object could be loaded multiple times, causing multiple registrations under the same name, which would obviously fail? The best example of this is code running in a Web container that gets deployed in two separate applications (WARs).
What's the best practice for naming an MBean that uses the platform MBeanServer but could be loaded multiple times by multiple class loaders?
[1] http://java.sun.com/products/JavaManagement/best-practices.html
[2] http://www-128.ibm.com/developerworks/java/library/j-jtp09196/index.html

That's certainly an interesting question!
If an MBean can be created by two different apps running in the same JVM, then the question is whether it reflects something about each app individually or some global state. For example, if the my.counter:name=CounterPool MBean sometimes represents the counter pool of the Fred app and sometimes of the Jim app, then its name should reflect that. You might imagine adding an extra key to the ObjectName, like my.counter:name=CounterPool,app=Fred. (You should also have a type key, by the way.) Or, you could imagine that each app has its own domain, like Fred/my.counter:name=CounterPool.
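For example, a quick sketch of both naming schemes (the appName value is hypothetical; in a Web container it might be derived from the context path):

import javax.management.ObjectName;

public class NamingExamples {
    public static void main(String[] args) throws Exception {
        String appName = "Fred";  // hypothetical app identifier

        // Extra key property identifying the app:
        ObjectName perApp = new ObjectName(
            "my.counter:type=CounterPool,name=CounterPool,app=" + appName);

        // Or a per-app domain:
        ObjectName perDomain = new ObjectName(
            appName + "/my.counter:type=CounterPool,name=CounterPool");

        System.out.println(perApp);
        System.out.println(perDomain);
    }
}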
This can be tricky if the MBeans are being created by a library. I think a library that could be used independently by different apps in the same JVM should not create an MBean with a hardcoded ObjectName unless the behaviour of that MBean is the same no matter what app registers it. If the MBean reflects the state of the library, then that state will be different for the copy of the library in each app, so registering a single MBean is wrong.
If the MBean reflects global state (like the hostname or the filesystem or whatever, which will be the same for each app), then it does seem reasonable to use a fixed name. In that case, the library could try to do the following. Suppose the name of the MBean is d:k=v.
* Register an MBeanServerNotification listener on the MBeanServerDelegate using an MBeanServerNotificationFilter, to be informed when the d:k=v MBean is unregistered.
* Try to register d:k=v. If this fails with InstanceAlreadyExistsException, then somebody else got there first.
* When you get an MBeanServerNotification saying that d:k=v has been unregistered (the other app went away), you register your own d:k=v. Again, if you get InstanceAlreadyExistsException you can ignore it.
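In code, that protocol might look roughly like this (a minimal sketch; the SharedMBeanRegistrar class and its error handling are illustrative, but the javax.management calls are standard):

import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.JMException;
import javax.management.MBeanServer;
import javax.management.MBeanServerNotification;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;
import javax.management.relation.MBeanServerNotificationFilter;

// Hypothetical helper: registers d:k=v when the name is free, and tries
// again whenever the current owner unregisters it.
public class SharedMBeanRegistrar {

    public static void install(final Object mbean, final ObjectName name)
            throws JMException {
        final MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Be told when the d:k=v MBean (and only it) is unregistered.
        MBeanServerNotificationFilter filter =
            new MBeanServerNotificationFilter();
        filter.enableObjectName(name);

        NotificationListener listener = new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MBeanServerNotification.UNREGISTRATION_NOTIFICATION
                        .equals(n.getType())) {
                    // The other app went away; try to take over the name.
                    tryRegister(server, mbean, name);
                }
            }
        };
        server.addNotificationListener(
            new ObjectName("JMImplementation:type=MBeanServerDelegate"),
            listener, filter, null);

        tryRegister(server, mbean, name);
    }

    private static void tryRegister(MBeanServer server, Object mbean,
                                    ObjectName name) {
        try {
            server.registerMBean(mbean, name);
        } catch (InstanceAlreadyExistsException e) {
            // Somebody else got there first; ignore and wait for the
            // unregistration notification.
        } catch (JMException e) {
            throw new RuntimeException(e);
        }
    }
}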

Similar Messages

  • Best method for state machine design for multi-dimensional clustered data

    I have an application that is collecting analog tag data (1000 points) and displaying it on a graph. In each VI, I may collect data for as many as 32 channels, but only one channel at a time. Usually this is collected in numeric order, but maybe not all the time. I am also saving an "expected profile" to a file for each channel. I display both expected and actual data on the graph and select which channel to view with a ring control. My question is about how to design the state machine for the best memory usage/execution speed. If I use a shift register with a cluster datatype that holds both 1000-point arrays, plus some statistical data like max/min/pass/fail/test limits for 32 channels, won't all of the unbundle/bundle functions use a ton of memory? Right now I am writing the cluster data to memory tags for each channel. I use the ring control to determine which tag to read. I have the tags grouped by channel into several groups for actual data, expected data, results data, etc. What design methods would provide the best function using a shift register, queues and notifiers, or memory tags in the tag engine? I also use Citadel to read the max and average as a trend display on each channel and to select a specific 1000-point dataset to view if the max and average are out of limits. Does anyone have similar applications?

    It sounds as if you will be working with a lot of data here, so I can understand your concerns about memory management. If you were to unbundle and bundle data in your application, you are correct in saying that it will require more memory. It is somewhat difficult to get the overall picture of what you need to set up, but from what I can gather, it sounds as if you could have an array of 32 elements, each element a cluster of your data as well as the statistical data, and then you could index through that array to determine what to display on the graph. The shift registers will be reusing the same memory over and over, so there should not be a problem there. Queues wouldn't necessarily help you in any way in this situation. Depending on how many memory tags you are using, you could increase the amount of memory required by Citadel and the tag engine, but if you need to write your cluster data to the database, then there is not much you can do about it. Ultimately it sounds like a pretty involved application that will require a fair amount of memory regardless, to ensure smooth functionality.
    In general, questions about LabVIEW architecture and memory management are typically better answered if they are posted to the general LabVIEW discussion forum area. But I hope that this has addressed some of your concerns. Have a good day.
    Patrick R.

  • SQL Server 2012 Multi-Site clustering with 2 nodes for HA and DR

    Usually we set up a 2-node production cluster for local HA and 1 or 2 nodes in another data centre for DR.
    Given that we have the option to set up multi-site / multi-subnet clustering from SQL 2008 R2/2012, I am planning to use just 2 nodes, 1 in the production data centre and 1 in the DR data centre, with 2 or 3 instances. This will act as both the HA and DR solution.
    I would like to know whether this solution is sound, along with any disadvantages, best practices, etc. By implementing this I can save some cost on physical servers.
    The following will be configured:
    * Will be using different subnets, with the quorum on a different server using "Node and File Share Majority"
    * All virtual IPs will be registered for the virtual name, and SubnetDelay / SubnetThreshold will be modified accordingly
    * All nodes on the same domain
    * SAN disk with replication to the DR site

    SQL 2008 R2 doesn't support multi-subnet clustering; you would still need third-party components such as a stretch VLAN and disk replication. SQL 2012 is the first version to support multi-subnet clustering without using a VLAN, but you would still need disk replication hardware/software.
    Taken from my book:
    Since nodes are often located in two different data centers at geographically dispersed locations, there is no shared storage between the nodes in a multi-site cluster. Clustering across two different data centers provides a higher level of availability and protection at the storage level, as we have more than a single copy of the data.
    For SAN replication technology implemented in such clusters, the main activity is to keep data replicated between the sites. Typically, if we have nodes on two different sites, we would have two different network infrastructures and the nodes would be in different subnets. In such cases, if we are on a SQL Server version before 2012, we need to use third-party VLAN (Virtual LAN) technology so that one IP address travels between the two sites. This is called wide-IP. Companies hesitate with this solution because of the need to buy a third-party solution to deploy the VLAN. Using VLAN technology means the same IP address would fail over to the remote site in case of a local site disaster. Network administrators might consider this an overhead to maintenance and an extra piece of the networking component that needs to be secured.
    With SQL Server 2012 we do not need to use stretch VLAN technology, but SAN replication is still needed for multi-site clustering. The OS version for this can be Windows Server 2008 R2 and above. In this deployment, we can have a SQL virtual network name with an "OR" dependency on two different IP addresses, one representing each subnet. With the "OR" dependency, if either IP1 or IP2 is online, the network name is usable. This is an Enterprise Edition-only feature.
    Another option you could consider, without using third-party solutions, is an AlwaysOn Availability Group. I have written details about it in my book.
    Balmukund Lakhani

  • Adapter configurations in clustered environments

    Could you please let me know if any additional configurations/changes in the WSDL files are required for the Oracle adapters in a clustered environment.
    The Apps adapters, AQ adapters and DB adapters in our processes have stopped working after being migrated to a clustered environment.
    Any inputs on the same would be greatly appreciated.
    Thanks a lot in advance!

    An inbound DB adapter WSDL generated for a single node cannot be used in a cluster. At least for the DB adapter released with OSB 10.3.1 Maintenance Pack 1, we have to create a separate inbound WSDL in JDeveloper if it is to be deployed to a cluster. There is no way to modify an inbound WSDL generated for a single node to make it work in a cluster.
    For a cluster, these steps are required on the inbound side:
      1. Distributed Polling – when generating the WSDL in JDeveloper, set the Distributed Polling option. This option is required in an Oracle Service Bus cluster.
      2. UseDirectlySQL – when generating the WSDL in JDeveloper, if you are using Distributed Polling.
      3. Set <toplink:lock-mode>lock</toplink:lock-mode> in the TopLink file.
    Outbound WSDLs should work without any issues in a cluster.
    Are you using adapters bundled with OSB?
    Cheers
    Manoj

  • Extending Screens for Multi-Select in the LightSwitch HTML Client

    Hi,
    I read Mike Droney's article
    Extending Screens for Multi-Select in the LightSwitch HTML Client,
    but I just want to understand the code. What is the ‘__isSelected’ property? Where does it come from?
    Why does contentItem.value.details have an ‘__isSelected’ property?
    Is the value of the contentItem not the screen?
    And also, how can I implement the ‘Can Execute Code’ only if one or more check boxes are checked?
    Thanks

    The '__isSelected' property is a private member of the msls.ContentItem class, related to the backing data for the selected item. That is to say, it would be a private member if JavaScript actually had encapsulation and information hiding like a typical object-oriented language. I like to reference David Herman's description from his book Effective JavaScript:
    "Often, JavaScript programmers resort to coding conventions rather than any absolute enforcement mechanism for private properties. For example, some programmers use naming conventions such as prefixing or suffixing private property names with an underscore character (_). This does nothing to enforce information hiding, but it suggests to well-behaved users of an object that they should not inspect or modify the property so that the object can remain free to change its implementation."
    ...which means that it's generally not recommended to directly get or set backing data properties like __isSelected; instead, work with the public property 'selectedItem', although direct access may work fine in certain cases like this one.
    To make _canExecute fire only when an item in the list is selected to enable a button method, try:
    return (screen.Contacts.selectedItem !== null);

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE Environment uses its own cache. In our environment, with a dynamic number of active projects, this causes a problem because the optimal cache configuration within a given memory frame depends on the JE Environments in use, but there is no way to define a global JE cache for ALL JE Environments.
    Our "plan of attack" is to implement a Global-Cache-Manager to dynamically configure the cache sizes of all active BDB environments depending on the given global cache size.
    Like Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether any BDB environments are not using their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
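    As a rough illustration of the rebalancing idea, here is a minimal sketch (the GlobalCacheManager name and the utilization-weighted policy are illustrative assumptions, not an existing JE facility; getStats(), getMutableConfig()/setMutableConfig() and setCacheSize() are the real JE API):

    import java.util.List;

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;

    // Hypothetical manager: divides one global budget among all open
    // environments, weighted by how much cache each one is actually using.
    public class GlobalCacheManager {

        private final long globalCacheBytes;

        public GlobalCacheManager(long globalCacheBytes) {
            this.globalCacheBytes = globalCacheBytes;
        }

        /** Re-divide the global budget among the given open environments. */
        public void rebalance(List<Environment> envs) throws DatabaseException {
            long[] used = new long[envs.size()];
            long totalUsed = 0;
            for (int i = 0; i < envs.size(); i++) {
                used[i] = envs.get(i).getStats(null).getCacheDataBytes();
                totalUsed += used[i];
            }
            for (int i = 0; i < envs.size(); i++) {
                // Idle environments get an equal share until they show
                // demand; a real implementation would enforce a minimum.
                long share = (totalUsed > 0)
                    ? (globalCacheBytes * used[i]) / totalUsed
                    : globalCacheBytes / envs.size();
                EnvironmentMutableConfig config = envs.get(i).getMutableConfig();
                config.setCacheSize(share);
                envs.get(i).setMutableConfig(config);
            }
        }
    }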
    Are there any comments to this plan? Is there perhaps a better solution or even an implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
     * See the file LICENSE for redistribution information.
     *
     * Copyright (c) 2005-2006 Oracle Corporation. All rights reserved.
     *
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */
    package com.sleepycat.je.util;

    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;

    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
     * straightforward for two reasons. There is some fixed overhead for each
     * btree internal node, so tree fanout and degree of node sparseness
     * impacts memory consumption. In addition, JE compresses some of the
     * internal nodes where possible, but compression depends on on-disk
     * layouts.
     *
     * DbCacheSize is an aid for estimating cache sizes. To get an estimate of
     * the in-memory footprint for a given database, specify the number of
     * records and record characteristics and DbCacheSize will return a
     * minimum and maximum estimate of the cache size required for holding the
     * database in memory. If the user specifies the record's data size, the
     * utility will return both values for holding just the internal nodes of
     * the btree, and for holding the entire database in cache.
     *
     * Note that "cache size" is a percentage more than "btree size", to cover
     * general environment resources like log buffers. Each invocation of the
     * utility returns an estimate for a single database in an environment.
     * For an environment with multiple databases, run the utility for each
     * database, add up the btree sizes, and then add 10 percent.
     *
     * Note that the utility does not yet cover duplicate records and the API
     * is subject to change release to release.
     *
     * The only required parameters are the number of records and key size.
     * Data size, non-tree cache overhead, btree fanout, and other parameters
     * can also be provided. For example:
     *
     * $ java DbCacheSize -records 554719 -key 16 -data 100
     * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
     * overhead=10%
     *
     *    Cache Size      Btree Size  Description
     *    30,547,440      27,492,696  Minimum, internal nodes only
     *    41,460,720      37,314,648  Maximum, internal nodes only
     *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
     *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
     *
     * Btree levels: 3
     *
     * This says that the minimum cache size to hold only the internal nodes
     * of the btree in cache is approximately 30MB. The maximum size to hold
     * the entire database in cache, both internal nodes and data records, is
     * 125MB.
     */
    public class DbCacheSize {

        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();

        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
            "--------------  --------------  -----------";
        //   12345678901234  12345678901234
        //                 12

        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;

        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;

        public DbCacheSize(long records,
                           int keySize,
                           int dataSize,
                           int nodeMax,
                           int density,
                           long overhead) {
            this.records = records;
            this.keySize = keySize;
            this.dataSize = dataSize;
            this.nodeMax = nodeMax;
            this.density = density;
            this.overhead = overhead;
        }

        public long getMinCacheSizeInternalNodesOnly() {
            return minInCacheSize;
        }

        public long getMaxCacheSizeInternalNodesOnly() {
            return maxInCacheSize;
        }

        public long getMinBtreeSizeInternalNodesOnly() {
            return minInBtreeSize;
        }

        public long getMaxBtreeSizeInternalNodesOnly() {
            return maxInBtreeSize;
        }

        public long getMinCacheSizeWithData() {
            return minInCacheSizeWithData;
        }

        public long getMaxCacheSizeWithData() {
            return maxInCacheSizeWithData;
        }

        public long getMinBtreeSizeWithData() {
            return minInBtreeSizeWithData;
        }

        public long getMaxBtreeSizeWithData() {
            return maxInBtreeSizeWithData;
        }

        public int getNLevels() {
            return nLevels;
        }

        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;

                /* Parse the command line. */
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
                    String val = null;
                    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
                        i += 1;
                        val = args[i];
                    }
                    if (name.equals("-records")) {
                        if (val == null) {
                            usage("No value after -records");
                        }
                        try {
                            records = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (records <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-key")) {
                        if (val == null) {
                            usage("No value after -key");
                        }
                        try {
                            keySize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (keySize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-data")) {
                        if (val == null) {
                            usage("No value after -data");
                        }
                        try {
                            dataSize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (dataSize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-nodemax")) {
                        if (val == null) {
                            usage("No value after -nodemax");
                        }
                        try {
                            nodeMax = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (nodeMax <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-density")) {
                        if (val == null) {
                            usage("No value after -density");
                        }
                        try {
                            density = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (density < 1 || density > 100) {
                            usage(val + " is not between 1 and 100");
                        }
                    } else if (name.equals("-overhead")) {
                        if (val == null) {
                            usage("No value after -overhead");
                        }
                        try {
                            overhead = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (overhead < 0) {
                            usage(val + " is not a non-negative integer");
                        }
                    } else if (name.equals("-measure")) {
                        if (val == null) {
                            usage("No value after -measure");
                        }
                        measureDir = new File(val);
                    } else if (name.equals("-measurerandom")) {
                        measureRandom = true;
                    } else {
                        usage("Unknown arg: " + name);
                    }
                }

                if (records == 0) {
                    usage("-records not specified");
                }
                if (keySize == 0) {
                    usage("-key not specified");
                }

                DbCacheSize dbCacheSize = new DbCacheSize
                    (records, keySize, dataSize, nodeMax, density, overhead);
                dbCacheSize.calculateCacheSizes();
                dbCacheSize.printCacheSizes(System.out);

                if (measureDir != null) {
                    measure(System.out, measureDir, records, keySize, dataSize,
                            nodeMax, measureRandom);
                }
            } catch (Throwable e) {
                e.printStackTrace(System.out);
            }
        }

        private static void usage(String msg) {
            if (msg != null) {
                System.out.println(msg);
            }
            System.out.println
                ("usage:" +
                 "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
                 "\n   -records <count>" +
                 "\n      # Total records (key/data pairs); required" +
                 "\n   -key <bytes>" +
                 "\n      # Average key bytes per record; required" +
                 "\n   [-data <bytes>]" +
                 "\n      # Average data bytes per record; if omitted no leaf" +
                 "\n      # node sizes are included in the output" +
                 "\n   [-nodemax <entries>]" +
                 "\n      # Number of entries per Btree node; default: 128" +
                 "\n   [-density <percentage>]" +
                 "\n      # Percentage of node entries occupied; default: 80" +
                 "\n   [-overhead <bytes>]" +
                 "\n      # Overhead of non-Btree objects (log buffers, locks," +
                 "\n      # etc); default: 10% of total cache size" +
                 "\n   [-measure <environmentHomeDirectory>]" +
                 "\n      # An empty directory used to write a database to find" +
                 "\n      # the actual cache size; default: do not measure" +
                 "\n   [-measurerandom]" +
                 "\n      # With -measure insert randomly generated keys;" +
                 "\n      # default: insert sequential keys");
            System.exit(2);
        }

        /* Public so the sizes can be computed through the API as well as
         * from main(). */
        public void calculateCacheSizes() {

            int nodeAvg = (nodeMax * density) / 100;
            long nBinEntries = (records * nodeMax) / nodeAvg;
            long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
            long nInNodes = 0;
            long lnSize = 0;

            for (long n = nBinNodes; n > 0; n /= nodeMax) {
                nInNodes += n;
                nLevels += 1;
            }

            minInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, true);
            maxInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, false);
            minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
            maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);

            if (dataSize > 0) {
                lnSize = records * calcLnSize(dataSize);
                maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
                maxInCacheSizeWithData =
                    calculateOverhead(maxInBtreeSizeWithData, overhead);
                minInBtreeSizeWithData = minInBtreeSize + lnSize;
                minInCacheSizeWithData =
                    calculateOverhead(minInBtreeSizeWithData, overhead);
            }
        }

        private void printCacheSizes(PrintStream out) {

            out.println("Inputs:" +
                        " records=" + records +
                        " keySize=" + keySize +
                        " dataSize=" + dataSize +
                        " nodeMax=" + nodeMax +
                        " density=" + density + '%' +
                        " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
            out.println();
            out.println(HEADER);
            out.println(line(minInBtreeSize, minInCacheSize,
                             "Minimum, internal nodes only"));
            out.println(line(maxInBtreeSize, maxInCacheSize,
                             "Maximum, internal nodes only"));
            if (dataSize > 0) {
                out.println(line(minInBtreeSizeWithData,
                                 minInCacheSizeWithData,
                                 "Minimum, internal nodes and leaf nodes"));
                out.println(line(maxInBtreeSizeWithData,
                                 maxInCacheSizeWithData,
                                 "Maximum, internal nodes and leaf nodes"));
            } else {
                out.println("\nTo get leaf node sizing specify -data");
            }
            out.println("\nBtree levels: " + nLevels);
        }

        private int calcInSize(int nodeMax,
                               int nodeAvg,
                               int keySize,
                               boolean lsnCompression) {

            /* Fixed overhead */
            int size = MemoryBudget.IN_FIXED_OVERHEAD;

            /* Byte state array plus keys and nodes arrays */
            size += MemoryBudget.byteArraySize(nodeMax) +
                    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));

            /* LSN array */
            if (lsnCompression) {
                size += MemoryBudget.byteArraySize(nodeMax * 2);
            } else {
                size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
                        (nodeMax * MemoryBudget.LONG_OVERHEAD);
            }

            /* Keys for populated entries plus the identifier key */
            size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);

            return size;
        }

        private int calcLnSize(int dataSize) {
            return MemoryBudget.LN_OVERHEAD +
                   MemoryBudget.byteArraySize(dataSize);
        }

        private long calculateOverhead(long btreeSize, long overhead) {
            long cacheSize;
            if (overhead == 0) {
                cacheSize = (100 * btreeSize) / 90;
            } else {
                cacheSize = btreeSize + overhead;
            }
            return cacheSize;
        }

        private String line(long btreeSize,
                            long cacheSize,
                            String comment) {

            StringBuffer buf = new StringBuffer(100);
            column(buf, INT_FORMAT.format(cacheSize));
            column(buf, INT_FORMAT.format(btreeSize));
            column(buf, comment);
            return buf.toString();
        }

        private void column(StringBuffer buf, String str) {

            int start = buf.length();
            while (buf.length() - start + str.length() < COLUMN_WIDTH) {
                buf.append(' ');
            }
            buf.append(str);
            for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
                buf.append(' ');
            }
        }

        private static void measure(PrintStream out,
                                    File dir,
                                    long records,
                                    int keySize,
                                    int dataSize,
                                    int nodeMax,
                                    boolean randomKeys)
            throws DatabaseException {

            String[] fileNames = dir.list();
            if (fileNames != null && fileNames.length > 0) {
                usage("Directory is not empty: " + dir);
            }

            Environment env = openEnvironment(dir, true);
            Database db = openDatabase(env, nodeMax, true);
            try {
                out.println("\nMeasuring with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                insertRecords(out, env, db, records, keySize, dataSize,
                              randomKeys);
                printStats(out, env,
                           "Stats for internal and leaf nodes (after insert)");

                db.close();
                env.close();
                env = openEnvironment(dir, false);
                db = openDatabase(env, nodeMax, false);

                out.println("\nPreloading with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                preloadRecords(out, db);
                printStats(out, env,
                           "Stats for internal nodes only (after preload)");
            } finally {
                try {
                    db.close();
                    env.close();
                } catch (Exception e) {
                    out.println("During close: " + e);
                }
            }
        }

        private static Environment openEnvironment(File dir,
                                                   boolean allowCreate)
            throws DatabaseException {

            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(allowCreate);
            envConfig.setCachePercent(90);
            return new Environment(dir, envConfig);
        }

        private static Database openDatabase(Environment env, int nodeMax,
                                             boolean allowCreate)
            throws DatabaseException {

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(allowCreate);
            dbConfig.setNodeMaxEntries(nodeMax);
            return env.openDatabase(null, "foo", dbConfig);
        }

        private static void insertRecords(PrintStream out,
                                          Environment env,
                                          Database db,
                                          long records,
                                          int keySize,
                                          int dataSize,
                                          boolean randomKeys)
            throws DatabaseException {

            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
            BigInteger bigInt = BigInteger.ZERO;
            Random rnd = new Random(123);

            for (int i = 0; i < records; i += 1) {
                if (randomKeys) {
                    byte[] a = new byte[keySize];
                    rnd.nextBytes(a);
                    key.setData(a);
                } else {
                    bigInt = bigInt.add(BigInteger.ONE);
                    byte[] a = bigInt.toByteArray();
                    if (a.length < keySize) {
                        byte[] a2 = new byte[keySize];
                        System.arraycopy(a, 0, a2, a2.length - a.length,
                                         a.length);
                        a = a2;
                    } else if (a.length > keySize) {
                        out.println("*** Key doesn't fit value=" + bigInt +
                                    " byte length=" + a.length);
                        return;
                    }
                    key.setData(a);
                }
                OperationStatus status = db.putNoOverwrite(null, key, data);
                if (status == OperationStatus.KEYEXIST && randomKeys) {
                    i -= 1;
                    out.println("Random key already exists -- retrying");
                    continue;
                }
                if (status != OperationStatus.SUCCESS) {
                    out.println("*** " + status);
                    return;
                }
                if (i % 10000 == 0) {
                    EnvironmentStats stats = env.getStats(null);
                    if (stats.getNNodesScanned() > 0) {
                        out.println("*** Ran out of cache memory at record " +
                                    i + " -- try increasing the Java heap" +
                                    " size ***");
                        return;
                    }
                    out.print(".");
                    out.flush();
                }
            }
        }

        private static void preloadRecords(final PrintStream out,
                                           final Database db)
            throws DatabaseException {

            /* Print progress dots while the preload runs. */
            Thread thread = new Thread() {
                public void run() {
                    while (true) {
                        try {
                            out.print(".");
                            out.flush();
                            Thread.sleep(5 * 1000);
                        } catch (InterruptedException e) {
                            break;
                        }
                    }
                }
            };
            thread.start();
            db.preload(0);
            thread.interrupt();
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace(out);
            }
        }

        private static void printStats(PrintStream out,
                                       Environment env,
                                       String msg)
            throws DatabaseException {

            out.println();
            out.println(msg + ':');

            EnvironmentStats stats = env.getStats(null);

            out.println("CacheSize=" +
                        INT_FORMAT.format(stats.getCacheTotalBytes()) +
                        " BtreeSize=" +
                        INT_FORMAT.format(stats.getCacheDataBytes()));

            if (stats.getNNodesScanned() > 0) {
                out.println("*** All records did not fit in the cache ***");
            }
        }
    }
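    Driven through the API instead of main(), usage might look like this (a minimal sketch; the figures are the example values from the class comment, and it relies on calculateCacheSizes() being public as above):

    import com.sleepycat.je.util.DbCacheSize;

    public class CacheBudgetExample {
        public static void main(String[] args) {
            // Example values from the DbCacheSize class comment.
            DbCacheSize sizer = new DbCacheSize(554719L, 16, 100, 128, 80, 0L);
            sizer.calculateCacheSizes();
            System.out.println("Max cache, internal and leaf nodes: " +
                               sizer.getMaxCacheSizeWithData());
            System.out.println("Btree levels: " + sizer.getNLevels());
        }
    }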

  • Multi-tier clustering

              Hi,
              I am trying to configure multi-tier clustering.
              Configured 2 clusters each with one machine and one WLS.
              Set up the JNDI lookup on machine 1 as below:
              Hashtable ht = new Hashtable();
              ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
              ht.put(Context.PROVIDER_URL, "t3://machine2:port");
              Deployed the web app on one machine and the EJBs on the other.
              Machine1 has all the necessary home and remote interfaces of
              the EJBs on Machine2.
              Webapp gives me the following error:
              javax.naming.CommunicationException [Root exception is weblogic.rmi.UnmarshalException:
              Unmarshalling return
               - with nested exception:
              [java.lang.ClassNotFoundException: class com.edocs.enrollment.user.AccountBeanHomeImpl_ServiceStub
              previously not found]]
              Anyone know what I am missing?
              Thanks,
              Sreeni
              

              I tried that. It got past that error and then gave a JNDI link error.
              My EJB JNDI name is "edx/ejb/PWCConfig". It gives the whole name
              as the remaining name. I did not set the provider URL for this
              bean as it is not exposed to the web app.
              My understanding was that having the home and remote interfaces
              would be good enough.
              Appreciate any help.
              Thanks,
              Sreeni
              Chris Palmer <[email protected]> wrote:
              >The lookup looks ok. Make sure that machine1 has not only
              >the home & remote interfaces, but also the
              >stubs generated by ejbc, on its classpath (or in the
              >WEB-INF/lib directory for the web app). (Try
              >just including the whole jar file produced by ejbc).
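              For reference, a complete lookup along those lines might look
              like this (a minimal sketch; the JNDI name is the one from this
              thread, and the commented narrow() call assumes a hypothetical
              PWCConfigHome interface whose ejbc-generated stub must be on
              the web app's classpath):

              import java.util.Hashtable;
              import javax.naming.Context;
              import javax.naming.InitialContext;

              public class LookupSketch {
                  public static void main(String[] args) throws Exception {
                      Hashtable ht = new Hashtable();
                      ht.put(Context.INITIAL_CONTEXT_FACTORY,
                             "weblogic.jndi.WLInitialContextFactory");
                      ht.put(Context.PROVIDER_URL, "t3://machine2:port");
                      Context ctx = new InitialContext(ht);
                      Object ref = ctx.lookup("edx/ejb/PWCConfig");
                      // Then narrow to the home interface, e.g.:
                      // PWCConfigHome home = (PWCConfigHome)
                      //     javax.rmi.PortableRemoteObject.narrow(
                      //         ref, PWCConfigHome.class);
                      System.out.println("Looked up: " + ref);
                  }
              }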

  • What should be the naming convention for indexes?

    hi,
    I have created some SPs and functions, and named them like uspGetReportNo and udfGetReportNo respectively.
    Please tell me some prevalent prefixes for clustered and non-clustered indexes.
    yours sincerely

    I use 
    PK_ for primary keys
    UK_ for unique keys
    IX_ for non clustered non unique indexes
    UX_ for unique indexes
    Also recommended naming conventions
    PKC_ Primary Key, Clustered
    PKNC_ Primary Key, Non Clustered
    NCAK_ Non Clustered, Unique
    CAK_ Clustered, Unique
    NC_ Non Clustered
    Hope this Helps
    Thanks,
    Nihar

  • Mkfs: bad value for nbpi: must be at least 1048576 for multi-terabyte, nbpi

    Hi, guys!
    *1. I have a big FS (8 TB) on UFS which contains a lot of small files ~ 64B-1MB.*
    -bash-3.00# df -h /mnt
    Filesystem Size Used Avail Use% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8.0T 4.3T 3,7T 54% /mnt
    *2. But today I noticed errors like this in dmesg: "ufs: [ID 682040 kern.notice] NOTICE: /mnt: out of inodes"*
    -bash-3.00# df -i /mnt
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8753024 8753020 4 100% /mnt
    *3. So, I decided to remake the file system with new parameters:*
    -bash-3.00# mkfs -m /dev/rdsk/c10t600000E00D000000000201A400020000d0s0
    mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=8192,cgsize=143,free=1,rps=1,nbpi=997778,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=y /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    -bash-3.00#
    -bash-3.00# mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=1024,cgsize=143,free=1,rps=1,nbpi=512,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=f /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    *4. I got some warnings about the inode threshold:*
    -bash-3.00# mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=1024,cgsize=143,free=1,rps=1,nbpi=512,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=n /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    mkfs: bad value for nbpi: must be at least 1048576 for multi-terabyte, nbpi reset to default 1048576
    Warning: 2128 sector(s) in last cylinder unallocated
    /dev/rdsk/c10t600000E00D000000000201A400020000d0s0: 17165172656 sectors in 2793811 cylinders of 48 tracks, 128 sectors
    8381432.0MB in 19538 cyl groups (143 c/g, 429.00MB/g, 448 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
    32, 878752, 1757472, 2636192, 3514912, 4393632, 5272352, 6151072, 7029792,
    7908512,
    Initializing cylinder groups:
    super-block backups for last 10 cylinder groups at:
    17157145632, 17158024352, 17158903072, 17159781792, 17160660512, 17161539232,
    17162417952, 17163296672, 17164175392, 17165054112
    *5. And my inode count didn't change:*
    -bash-3.00# df -i /mnt
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8753024 4 8753020 1% /mnt
    I found http://wesunsolve.net/bugid.php/id/6595253, which describes a bug in mkfs with no workaround. Is ZFS what I need now?

    Well, to fix the bug you referred to, you can apply patch 141444-01 or 141445-01.
    However, that bug only concerns an irrelevant error message from mkfs; fixing it will not solve your problem as such.
    It seems that the minimum value for nbpi on a multi-terabyte filesystem is 1048576, hence you won't be able to create a filesystem with more inodes.
    The things to try would be either to create two UFS filesystems, or to go with ZFS, which is the future anyway ;-)
    .7/M.

  • In OBIEE Mobile Apps Designer there is no option for multi-select prompts? The navigation page gives an option only for single select? Is there a workaround for this?

    In OBIEE Mobile Apps Designer there is no option for multi-select prompts? The navigation page gives an option only for single select? Is there a workaround for this?

    Nic, for me the iTunes window looks like this when I connect my iPad 3:
    I select the iPad in the "Devices" section of the sidebar (use "View > Show Sidebar" if the sidebar is hidden).
    Click the "Apps" tab in the "Devices" pane.
    Scroll all the way down in the "Devices" pane to the "File Sharing" "Apps" section.
    Then I click "GarageBand" to select the documents in the right panel.
    Which part is different for you? Perhaps you could post a screenshot?
    Regards
    Léonie

  • The software I use to play video calls for multi-display, but my MacBook Pro has only a single display. Is there a way to get a multi-display setup added to my computer?

    The software I use to play DJ music videos calls for a multi-display setup, but my MacBook Pro has only a single display. Is there any way to get a multi-monitor display added to my computer?

    If the monitor can be detected, your Displays preferences should show it in its Arrangement panel. You may have to play around with your Display panel to get the desired resolution, and you may also have to click "Detect Displays" and "Gather Windows" to see both sets of preferences on your main screen.

  • ABAP Mapping :: for multi files

    Dear Experts,
    We are doing an IDoc-to-file interface using ABAP mapping.
    This is a 1:n mapping, i.e. the receiver message interface is 0..unbounded.
    We have achieved the 1:1 mapping, but when I test for multi, I get an error in MONI saying:
    Parsing error after multi mapping. Expected Message<i> instead of Item
    Item is the name of the node that has to be created multiple times.
    Has anyone done multi mapping in ABAP? Any idea why this error occurs... maybe we are missing something.
    Any idea as to how we can progress?
    Thanks in advance
    Regards
    Shobha

    Hi,
    Surely you can use an ABAP mapping for this.
    It sounds like your problem is that you're not using the correct output structure for multi mapping.
    As with any type of multi mapping, your structure should reflect this. Your target payload must thus have the following structure:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
         <ns0:Message1>
              <unboundedPayload/>
         </ns0:Message1>
    </ns0:Messages>
    The <unboundedPayload> element above should of course be replaced with your actual payload; I believe 'Item' in your case, if that is in fact the root node of your actual payload.
    Regards,
    Daniel

  • Handling Tab Delimited File generation in non-unicode for multi byte lang

    Hi,
    Requirement:
    We are generating a tab-delimited file in different languages (single-byte and multi-byte) and placing the files on the application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems with generating the tab-delimited file for multi-byte languages like Russian, Japanese, Chinese, etc.
    I am currently using DATA: d_tab TYPE x VALUE '09', but it doesn't work for multi-byte languages, and I can't see the tab-delimited file at the application server path.
    Any thoughts about how to proceed on this issue? Please let me know.
    Thanks & Regards,
    Pavan

    >
    Pavan Ravikanti wrote:
    > Thanks for your answer but do you reckon cl_abap_char_utilities will be a work around for data: d_tab type X VALUE '09' .
    > Pavan.
    On a non-Unicode system the X variant works, but not on a Unicode system; there you must use the class. On the other hand, you can use the class on a non-Unicode system too, and your character variable will always be correct (one byte/two bytes depending on the system your report is running on).
    What you are planning to do is to put a file with a larger set of possible characters into a system that has a smaller set of characters. That cannot work.
    What you can do is build up a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages, but not a text file in Unicode. You have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP no longer supports multi-code-page systems, and such systems will result in much more work when converting the system to Unicode. Even non-Unicode systems will not be maintained by SAP in the near future.
    What you encounter here are the problems that Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that can't be solved.

  • Conventions for XML-files

    Hi!
    I'm working on the text of a Bible translation that will be quite an extensive file when it's finished. The publisher has requested that I make an XML file in addition to the print-ready InDesign files they will use themselves. This is because there will be other publishers and companies that will want to use their translation (or parts of it), and it's inconvenient to send them InDesign files. An XML file with all of the paragraph styles, character styles and footnotes would be more convenient.
    Now, I've been trying to find information on how such an XML file should look. Are there any code conventions for it? Most of the issues on this forum concern importing XML files for use in InDesign, but what I'm wondering is what I must think of when exporting XML files that other people will use.
    Also, is it possible to create inline XML tags based on character styles (such as bold, italic, superscript, etc.)?
    I'm using CS6 and Mac OS X Lion.
    EDIT: Another thing just struck me. If anyone thinks that XML is not the best format for this, I'd very much like to hear any other suggestions.

    InDesign has a native "Export XML" option, but that requires that you tag each separate item as an "XML item". Some of this can be automated using "Map Styles to Tags". I don't have any experience with it, positive or negative, but if you exclusively used paragraph and character styles to lay out your text it should be straightforward.
    The fastest way to get re-usable text in a not-too-difficult-to-parse file format is to export as Tagged Text. It's not really proper XML, but it may be Close Enough; if the above route is too complicated, try this (and send your publisher a test file).
    SimonLinden wrote:
    I've been trying to find information on how such a XML should look. Are there any code conventions for it? Most of the issues on this forum concerns importing xml files for use in indesign - but what I'm wondering is what I must think of when exporting XML files that other people will use?
    There are no "conventions". XML itself has only some very basic requirements (the way of writing element and attribute tags, correct nesting, some special characters that may not be used in plain text; that sort of thing). So you can make up your own set of tags or, so as not to confuse your receiving party, use a well-known set of tags such as XHTML (i.e., <p> is for Paragraph, <i> is for Italic, and so on) or DocBook. There are also Scripture-specific XML schemas; http://ebible.org/usfx/ is one found with a quick Google.

  • An application for multi-channel measurements

    Does NI have a software solution for multi-channel measurements? I mean systems for measurements, tests and monitoring which contain numerous DAQ devices with thousands of sensors.
    I suppose the software for such system should have the following features:
    Instrument control
    Sensor management (type, s/n, accuracy, calibration data, next calibration date, measurement limits, etc.)
    Data acquisition
    Storing data in databases
    Data visualisation and analysis
    Report generation
    Tools for creating custom user interfaces / data visualisations for monitoring
    As far as I know, DIAdem is great for data analysis, visualisation and report generation, but it's not suitable for the other tasks. With LabVIEW you can do anything, but it's not an "out-of-the-box" solution.
    Just to clarify what I'm talking about, here's an application that seems to fit the description: the HBM catman. Has anyone worked with it? Do you know of any analogues for it?

    Just to add to Hooovahh's comments.
    NI has flat-out stated that they do not want to make turn-key solutions. That would take away from their being able to make tools for people to create the solutions. That is why they have alliance partners. These partners take the tools made by NI and make really cool stuff. My latest project was a software package that helped a technician build a jet engine correctly so that the turbine blades do not come out and destroy the engine (just slightly important). I have also done some test systems for spacecraft avionics.
    So if you are really serious about this, I highly recommend finding an Alliance Partner to help you out.  If you want, give me a PM and I can work on getting you and a few people on my side to discuss your requirements and proceed from there.
