Behaviour of JMX in clustering

          hi all,
          i want to write my Custom MBean to monitor some part of my web applications
          (such as logging variables), and i wanted to know what is the behaviour of JMX
          in a cluster.
          Suppose that i write my CustomMBean to monitor properties of some components of
          my webapplication.
          If i use my MBean to change the properties of one component, will the change
          affect the whole cluster if the webapp is deployed in the cluster?
          thanx in advance and regards
          marco
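In general, a custom MBean deployed with the webapp is registered separately on each server in the cluster, so a change made through one server's copy does not propagate to the others (WebLogic does propagate changes made through the admin server's configuration MBeans, but not through arbitrary custom MBeans). One hedged sketch of doing it cluster-wide is to fan the same call out to every node; the MBean and attribute names below are made up for illustration:

```java
import java.util.List;
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ClusterWideSet {
    // Example MBean with one writable attribute (names are hypothetical).
    public interface LogConfigMBean {
        String getLevel();
        void setLevel(String level);
    }
    public static class LogConfig implements LogConfigMBean {
        private volatile String level = "INFO";
        public String getLevel() { return level; }
        public void setLevel(String level) { this.level = level; }
    }

    /**
     * A custom MBean deployed with a webapp is a separate instance on each
     * server, so changing an attribute on one node does not touch the others.
     * To change it cluster-wide, repeat the call against each node's MBean
     * server connection (remote connections would come from JMXConnectorFactory).
     */
    public static void setEverywhere(List<? extends MBeanServerConnection> nodes,
                                     ObjectName name, Attribute attr) throws Exception {
        for (MBeanServerConnection node : nodes) {
            node.setAttribute(name, attr); // same ObjectName, one server at a time
        }
    }
}
```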
          

Thanks for your response Patrick.
Your suggestion should work too, but I am curious as to why 'NOT IN' is not working now.
As I mentioned earlier, '<>' or 'NOT EXISTS' does work, but if I have to make this change then I
would have to do it across the entire codebase, just to be sure that I do not encounter this issue again.
Tarun

Similar Messages

  • JMX in clustering

              hi all,
              i was wondering one thing. When you deploy a web application, BEA's MBeans allow
              you to change values in the web.xml descriptor files. i have changed some values
              thru the console, and i have noticed that if the change is done while the
              user is logged in, the user's session will be invalidated and the user has to re-login.
              What happens in a clustered environment? is it so that the session will be invalidated
              for all the servers in the cluster?
              What happens if, using the plain old approach instead of JMX, i shut down the servers
              that are part of the cluster one at a time while the user is logged in? will the user
              lose the session in this case too?
              regards
              marco
              

    My reference for WebSphere 5.1 is the "IBM WebSphere Application Server V5.1 System Management and Configuration" redbook, number SG24-6195-01. It has a section, from about page 780 onwards, about JMX in the context of administration and scripting.
    Presumably there's a similar reference for WS 5.0.

  • JMX in clustered environment

              hi all,
              i am a little confused about how weblogic 'behaves' in the following situation:
              - i have one adminserver and 4 managed servers in a clustered environment
              - i deploy an application on all the servers in the cluster
              the application (servlet based) registers an MBean with the adminserver.
              the question that i would like to ask is the following:
              if my app is deployed in a cluster with 4 servers, does it mean that i will have
              at least 4 registrations of the same MBean? because in this case i'll have to
              handle exceptions in case the same objectname has already been registered.
              can anyone clarify this for me?
              thanx in advance and regards
              marco
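On the 4-registration question: if all four managed servers register with the same MBean server under the same ObjectName, all but the first will get an InstanceAlreadyExistsException. A common way out is to put the server's name into the ObjectName as an extra key, so each instance registers under a distinct name. A minimal sketch, with made-up names:

```java
import javax.management.*;

public class MBeanNaming {
    // Minimal MBean used only for the illustration.
    public interface DemoMBean { String getServerName(); }
    public static class Demo implements DemoMBean {
        private final String serverName;
        public Demo(String serverName) { this.serverName = serverName; }
        public String getServerName() { return serverName; }
    }

    /**
     * Register one MBean per managed server by putting the server's name into
     * the ObjectName, so four servers yield four distinct names instead of
     * four colliding registrations of the same name.
     */
    public static ObjectName register(MBeanServer mbs, String serverName) throws JMException {
        ObjectName name = new ObjectName("myapp:type=Demo,server=" + serverName);
        try {
            mbs.registerMBean(new Demo(serverName), name);
        } catch (InstanceAlreadyExistsException e) {
            // e.g. redeployment on the same server: replace the old instance.
            mbs.unregisterMBean(name);
            mbs.registerMBean(new Demo(serverName), name);
        }
        return name;
    }
}
```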
              

    Is your configuration any different in the clustered environment? Are you using cache-coordination/synchronization in TopLink?
    Any idea of what the application is doing that leads to the server running out of memory?
    You may wish to use a memory profiler such as JProfiler or JProbe on the server to determine the cause of the memory leak.

  • GetConnection on clustered DS not failing over as documentation indicates

    Hi,
    (Weblogic 7 SP2 )
    I have a Connection pool and datasource both targeted to a cluster.
    The cluster is working as a cluster, HTTP session failover is working
    for instance, as is context failover. However, ds.getConnection() does
    not failover to another DS when the machine hosting the ds is down. I
    don't expect connection failover, I just expect DS failover as indicated
    by the docs.
    The JDBC documentation indicates that the datasource is cluster aware
    (see the quotes below). However, I do not observe this behaviour.
    I have a client connecting to the cluster using the cluster address.
    If I look up the datasource, then in a loop do
    Connection ccc = ds.getConnection();
    ccc.close();
    Thread.sleep(500);
    (with exception handling omitted)
    Then I expect that when the server to which the ds is connected (the
    server on which it was looked up, which is determined by the context
    which I used to look it up, which is determined by round robin) goes
    down, that the getConnection() will failover to another machine.
    That is, I expected getConnection() to return a connection from another
    pool to which the datasource was targeted. This does not happen.
    Instead I get a connection error:
    weblogic.rmi.extensions.RemoteRuntimeException: Unexpected Exception
    - with nested exception:
    [java.rmi.ConnectException: Could not establish a connection with
    -4447598948888936840S:10.0.10.10:[7001,7001,7002,7002,7001,7002,-1]:10.0.10.10:7001,10.0.10.14:7001:my2:ServerA,
    java.rmi.ConnectException: Destination unreachable; nested exception is:
         java.net.ConnectException: Connection refused: connect; No available
    router to destination]
    The datasource is after all a cluster-aware stub?
    What is the intended behaviour?
    Thanks,
    Q     
    "Clustered JDBC eases the reconnection process: the cluster-aware nature
    of WebLogic data sources in external client applications allow a client
    to request another connection from them if the server instance that was
    hosting the previous connection fails."
    "External Clients Connections—External clients that require a database
    connection perform a JNDI lookup and obtain a replica-aware stub for the
    data source. The stub for the data source contains a list of the server
    instances that host the data source—which should be all of the Managed
    Servers in the cluster. Replica-aware stubs contain load balancing logic
    for distributing the load among host server instances."
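Until the stub fails over as documented, a client can get similar resilience by treating the whole lookup-plus-getConnection sequence as one retriable action: on failure, throw away the stale context and stub and redo the JNDI lookup against the cluster address, which round-robins to a live server. The retry helper below is a generic sketch, not a WebLogic API:

```java
import java.util.concurrent.Callable;

public class FailoverRetry {
    /**
     * Run an action that builds everything from scratch (new InitialContext on
     * the cluster address, fresh DataSource lookup, then getConnection) and
     * retry on failure, so a stale stub pointing at a dead server is replaced
     * by a freshly looked-up one on the next attempt.
     */
    public static <T> T withRetry(Callable<T> action, int attempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // e.g. ConnectException: rebuild context + stub next time
            }
        }
        throw last;
    }
}
```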

    This is a bug in WL 7 (& 8).
    "QQuatro" <[email protected]> wrote in message
    news:[email protected]...

  • Query about clustering unrelated large amounts of data together vs. keeping it separate.

    I would like to ask the talented enthusiasts who frequent the developer network to tell me if I have understood how LabVIEW deals with clusters. A generic description of a situation involving clusters, and what I believe LabVIEW does, is given below. An example of this type of situation, generating the Fibonacci sequence, is attached to illustrate what I am saying.
    A description of the general situation:
    A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use and then the modified values bundled back into the cluster for later use.
    What I think Labview does:
    As the original cluster is going into the unbundle (to get original variable values) and the bundle (to update stored variable values) a duplicate of the entire cluster is made before picking out the individual values chosen to be unbundled. This means that if the cluster also contains a large amount of unrelated data then processor time is wasted duplicating this data.
    If on the other hand this large amount of data is kept separate then this would not happen and no processor time is wasted.
    In the attached file the good method does have the array (large amount of unrelated data) within the cluster and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead, I believe at least one duplicate is made.
    Am I correct in thinking that this is the behaviour LabVIEW uses with clusters? (I expected LabVIEW to duplicate only the variable values chosen in the unbundle code object. As this choice is fixed at compile time, it would seem to me that the compiler should be able to recognise that the other cluster variables are never used.)
    Is there a way of keeping the efficiency of using many separate variables (potentially ~50) whilst keeping the ease of using a single cluster variable over separate variables?
    The attachment:
    A vi that generates the Fibonacci sequence (the I32 used wraps at about the 44th value, so values at that point and later are wrong) is attached. The calculation is iterative, using a for loop. Two variables are needed to perform the iteration; they are stored in a cluster (and passed from iteration to iteration within the cluster). To provide the large amount of unrelated data, a large array of reasonably sized strings is provided.
    The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to have the array separate from the other pieces of data, even if it passes through the for loop (no massive overhead).
    Try replacing the array shift registers with tunnels in the good case and see if you can repeat my observation that using tunnels causes overhead in comparison to shift registers whenever there is no other reason to duplicate the array.
    I am running LabVIEW 7 on Windows 2000 with sufficient memory so that the page file is not used in this example.
    Thank you all very much for your time and for sharing your Labview experience,
    Richard Dwan
    Attachments:
    Fibonacci_test.vi ‏71 KB

    > That is an interesting observation you have made and seems to me to be
    > quite inexplicable. The trick is interesting but not practical for me
    > to use in developing a large piece of software. Thanks for your input
    > - I think I'll be contacting technical support for an explanation
    > along with some other anomalies involving large arrays that I have
    > spotted.
    >
    The deal here is that the bundle and unbundle nodes must be very careful
    when they are swapping elements around. This used to make copies in the
    normal cases, but that has been improved. The reason that the sequence
    affects it is that it affects the algorithm so that it orders the
    element movement so that the algorithm succeeds in avoiding a copy.
    Another, more obvious way is to use a regular bundle and unbundle, not
    the named variety. These tend to have an easier time in the algorithm also.
    Technically, I'd report the diagram to tech support to see if the named
    bundle/unbundle case can be handled as well. In the meantime, you can
    leave the data unbundled, as in the faster version.
    Greg McKaskle

  • Q: Hidden clustered jmx execute functionality?

    Hello,
    In my dev environment I have a cluster of two servers plus a node manager.
    I have deployed an ear to the cluster using the console. The ear has a
    servlet that registers an MBean.
    The interesting thing is that with a third-party MC4J JMX utility I can
    execute a JMX method on serverA and somehow WebLogic 8.1SP3 replicates
    calling this method on serverB.
    I can verify this because MC4J shows an attribute has changed on serverB but
    I never called the JMX method on serverB.
    I've Google'd and read the WLS docs and haven't found this described so I'm
    not sure what the limitations are.
    Any pointers to documentation or comments on this behaviour would be most
    welcome. It seems too good to be true, and I'm likely missing something if
    I can't find documentation about such a powerful feature.
    Cheers.

    Hello Mark,
    It indeed seems too good to be true. Perhaps it's a bug, or shall I call it a feature :-).
    I will try to reproduce this in my environment and see what I find.
    Thanks,
    -satya

  • Is there a difference in behaviour between clustered systems vs. standalone

    OpenVMS, MessageQ (could not find another place to ask :().
    Suppose 2 nodes, same DMQ bus.
    Node 1: Application X starts up, binds to group Y (not yet existing and so creates connection)
    Node 2: Application X starts up, binds to group Y.
    If node1 and node2 are in one VMS cluster, Node2::X would now be unable to do anything since group Y is now owned by Node1::X, and so Node2::X will get into a wait state, becoming active only if, for whatever reason, Node1::X gives up.
    But what happens if node1 and node2 are stand-alone systems, only connected using a DECnet (or TCP/IP) connection:
    1: Node2::X waits until Node1::X stops and group Y is released (as if the systems were clustered)
    2: Node2::X will start up and access group Y, so it does not enter a wait state. If so, can this be forced to a wait (like (1))?
    (You may assume that the DMQ configuration is correct)

    Most people either use one app or the other, and have very limited knowledge of the one they don't use.  But from what I can find from a quick look round the web, they seem to be the same.

  • Behaviour of Service Broker during clustered SQL Server failover

    Hi, 
    I have 3 instances of SQL Server 2005 hosted on a 3-node cluster, using Polyserve clustering.  Each instance is nominally hosted on its own node in the cluster.
    I have configured Service Broker to route messages from INST-A (on NODE-1) to INST-C (on NODE-3), using TCP and a NetBIOS name.  This NetBIOS name obviously uses the machine name of the node, rather than the virtual machine name of the instance.
    Under normal conditions, this works, messages sent from INST-A to INST-C are received and processed.
    However, should INST-C failover (for example to NODE-1), the route created to INST-C from INST-A is no longer valid; INST-C is now on a different node to that specified in the route.
    Service Broker stops, unable to put messages onto the queue on INST-C, so backing them up in sys.transmission_queue on INST-A.  To fix, I have to update the route as appropriate.
    We have thousands of messages being sent every minute.  While a delay in sending them during an actual failover is expected, would it not also be expected to recover itself and process any backlog without manual intervention?
    Is there something I'm missing in my configuration?  Or do I need to set up some other means to automatically update the route upon failover?  In which case, how can I programmatically determine the node to which it has failed-over?
    Configuration scripts:
    Target Endpoint:
    CREATE ENDPOINT ServiceBrokerTargetEndpoint
    STATE = STARTED
    AS TCP(LISTENER_PORT = xxxx)
    FOR SERVICE_BROKER(AUTHENTICATION = WINDOWS, ENCRYPTION = REQUIRED);
    CREATE ROUTE RouteToTargetService
    WITH SERVICE_NAME = 'ServiceBrokerTargetService', ADDRESS = 'TCP://INST-C:xxxx';
    INST-A and INST-C are both SQL 2005 Enterprise edition.
    Thanks for any assistance.
    Simon

    Hi David, 
    Thanks for correctly guessing our longer term plans.  It's good to know this won't be a problem in the future.
    However, that doesn't answer my original question, which is a bit more pressing than waiting until we have migrated.  If it can't be done, it can't be done, and we'll just bear that in mind when responding to failover events.  But I would prefer
    a definitive "no, it can't" rather than a speculative "no".  
    Regards, 
    Simon

  • JMS Clustering : Load Balancing expected Behaviour

    Hi All,
              I have a cluster with 2 managed servers, A and B. The ConnectionFactory is deployed to the cluster and Server B hosts the JMS Server. Destinations on the JMS Server are not distributed, but their JNDI names are replicated across the cluster. Both load balancing and server affinity are enabled on the ConnectionFactory (I hope these attributes are required only if the destinations are distributed).
              An application containing MDBs and EJBs is deployed to the cluster, and onMessage the MDB looks up a Facade and makes calls on it. An external java client sets up the InitialContext based on the cluster address and starts sending messages to the destination.
              What should be the expected behaviour in this scenario? According to my understanding,
              - Even though the ConnectionFactory is deployed across the cluster, since the physical destinations are available only on the weblogic server hosting the JMS Server (Server B), the actual message handling (MDB invocation) would be done only there.
              - When the MDBs are invoked on Server B, they perform a lookup for the Facade. Because of the collocation optimisation, the replica-aware stub used would be the one on Server B, and thereafter all the method processing should be done on Server B.
              Is this correct? But this would also mean that no load balancing would happen because of the collocation optimisation? Do i need to use a distributed destination to enable load balancing in this scenario?
              Any help would be greatly appreciated..
              thanks,
              Josh


  • Singleton behaviour in clustered environment

    Hi,
              I have a very basic question. In a WebLogic clustered
              environment, is a Singleton replicated? In other words, if the
              mySingleton on node A in a cluster is updated with a piece of data,
              will this show in the mySingleton in node B?
              Thanks
              Ciao
              Ferruccio
              

    No. If you need this type of functionality, use Coherence.
              http://www.tangosol.com/coherence.jsp
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com/coherence.jsp
              Tangosol Coherence: Clustered Replicated Cache for Weblogic

  • MDB behaviour in Clustered environment

    Hi, I am a bit confused with regards to how a message delivery will behave in a clustered environment on a app server.
    As far as a Queue is concerned, I am clear that a message will be delivered to one and only one of the MDBs.
    But for a Topic, how does load balancing affect the Topic subscriptions? Does the Topic then behave in the same manner as the Queue?
    I have a J2EE app that is running in a clustered env (can be any app server). It has one MDB. I need to send messages to all the application instances at the same time. I am not sure if this solution will work in a clustered environment with JMS load balancing, and I am afraid that it will misbehave. Hence the question :)

    Delivery to topics is based on subscriptions. One copy of a message is delivered to each subscription. A subscription is identified (in most providers) by the combination of the client id and subscription name. In a clustered environment, each node of the cluster will have a different client id and therefore get a different copy of a message.
    Some JMS providers allow for something called group or shared subscriptions where the client id is removed from the subscription identifier and replaced by a group id. This allows two nodes in a cluster to share a subscription to a topic allowing whichever client is up to consume exactly one copy of a message among all nodes in a cluster.
    I have to admit that I haven't gotten around to implementing such a feature in my own JMS implementation, but it is on the list.
    Dwayne
    ============
    http://dropboxmq.sourceforge.net
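Dwayne's identification rule can be sketched as a toy model: one copy of each message goes to every distinct subscription identifier. When each cluster node has its own client id you get one copy per node; when the nodes share a group id, they collapse onto one subscription. This is only an illustration of the rule, not any particular provider's API:

```java
import java.util.*;

public class TopicModel {
    /**
     * Toy model of topic delivery: one copy of each message per distinct
     * subscription identifier. With classic durable subscriptions the id is
     * clientId + "/" + subscriptionName (distinct per cluster node, so every
     * node gets a copy); with shared/group subscriptions the clientId part is
     * replaced by a group id, so all nodes map onto the same subscription.
     */
    public static Map<String, List<String>> deliver(List<String> subscriptionIds,
                                                    List<String> messages) {
        Map<String, List<String>> inboxes = new LinkedHashMap<>();
        for (String id : subscriptionIds) {
            inboxes.computeIfAbsent(id, k -> new ArrayList<>());
        }
        for (String m : messages) {
            for (List<String> inbox : inboxes.values()) {
                inbox.add(m); // each distinct subscription gets its own copy
            }
        }
        return inboxes;
    }
}
```

With ids "nodeA/sub" and "nodeB/sub" a message is delivered twice (once per node); with both nodes using "group1/sub" it is delivered once in total.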

  • Can single JMX manage two clusters

    Hi
    Just want to check that is that possible to manage 2 cluster on same machine with one JMX node..
    Thanks,
    Gaurav

    No.  The management/JMX node is a member of the cluster; as such it can't be a member of another cluster.

  • Unexpected behaviour with clusters inside of while loop with shift register

    Colleagues,
    I just would like to post here a small but important bug (I guess this is a bug), which was found by a coworker this morning. Just would like to share the knowledge, and probably this will save some debugging time for you...
    So, the problem is that you can get the wrong content of the cluster in some cases. The cluster is used inside of a while loop with a shift register (we are using this construction for functional globals), and after a bundle operation you can get data which are not expected:
    See also attached code for details (LabVIEW 8.6.1, WinXP Prof SP3).
    best regards,
    Andrey.
    PS
    Bug report already sent to NI.
    Message Edited by Andrey Dmitriev on 10-16-2008 12:30 PM
    Attachments:
    BugwithClusters.png ‏15 KB
    BugwithClusters.zip ‏10 KB

    Thanks Andrey for bringing this to our attention!
    The "Show Buffer Allocations" tool reveals that LV is not processing the code in the right order.
    Under ideal conditions, all of the data should be manipulated using buffer "A". But as this demo shows, the data is being processed in the wrong order.
    The previously posted workaround gets around this by forcing the array operation to happen first, but this results in two additional buffers, "C" and "D", and then copies this back into "B".
    Using an "Always Copy" is another workaround that uses a separate buffer "F" to hold the data being moved from the first cluster to the second inside "E".
    I think you won a shiny new CAR* Andrey!
    Ben
    CAR = Corrective Action Report
    Message Edited by Ben on 10-16-2008 08:05 AM
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
    Attachments:
    Cluster_Bug.PNG ‏57 KB

  • Behaviour of static variables in clustered environment

    HI,
              I have a class "XYZ" with a static variable "number".
              This holds some information vital to the application.
              Say there are two weblogic servers in a cluster.
              Now what will happen if a server(say server1) in a cluster goes down.
              Will the other server(say server2) which takes up the work of server1 ,
              have the same value for the static variable "number" of class "XYZ" ??
              Regards
              Suchak Jani
              

    FWIW ... only thing to remember is that if two servers bind the same name at
              the same time, they will both see their binding (meaning that each one will
              not see the other's binding). Best way to work around is to bind a
              semaphore (e.g. use the server's bind address as a JNDI name) and then scan
              for others (using a timeout) to make sure that only one server binds.
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com
              +1.617.623.5782
              WebLogic Consulting Available
              "suchak jani" <[email protected]> wrote in message
              news:[email protected]...
              > Hi Cameron
              >
              > As Always , Great to hear from you !
              > Congratulations on winning the Award !
              > May God Bless You !
              >
              > Yes i guess i will try JNDI binding .
              > If the application dies it is not a problem
              > if the information is lost.
              >
              > Thank you so much.
              >
              > Regards
              > Suchak Jani
              >
              >
              >
              > Cameron Purdy wrote:
              >
               > > No. Statics are scoped by the instance of Class that represents their
               > > storage, and thus by class loader, so you are not even guaranteed that
               > > there will be only one instance of that static variable on one of those
               > > two Weblogic servers.
               > >
               > > > I have a class "XYZ" with a static variable "number".
               > > > This holds some information vital to the application.
               > >
               > > I believe you need to look for different options for holding information
               > > that is vital to the application. For example, entity EJBs were designed
               > > to maintain such information.
               > >
               > > Failover (in Weblogic) only applies to HttpSessions. (In 6.0, it also
               > > applies to stateful session EJBs.) Failover does not imply a single
               > > synchronization point, which is what it sounds like you want. If the
               > > data is non-transactional, you may be able to use a Weblogic workspace
               > > or JNDI binding to share the data across the cluster; however, I know
               > > the JNDI information is lost on server death and I am not sure about
               > > workspaces.
              > >
              > > Peace,
              > >
              > > --
              > > Cameron Purdy
              > > Tangosol, Inc.
              > > http://www.tangosol.com
              > > +1.617.623.5782
              > > WebLogic Consulting Available
              

  • ObjectName conventions for multi-classloader, clustered environments

    I've been reading a number JMX documents, including these best practice articles [1] [2], but I've run into a question about the practical application of the ObjectName conventions within a multi-classloader and/or clustered environment.
    The general suggestion is to use a singleton name that's unique within an MBeanServer (environment). For example, my.counter:name=CounterPool. This is fine when running one instance of an object within a JVM, but what about multi-classloader environments where an object could be loaded multiple times, causing multiple registrations, which would obviously fail? The best example of this would be code running in a Web container that gets deployed to two separate applications (WARs).
    What's the best practice for naming an MBean that uses the platform MBeanServer, but could be loaded multiple times by multiple class loaders?
    [1] http://java.sun.com/products/JavaManagement/best-practices.html
    [2] http://www-128.ibm.com/developerworks/java/library/j-jtp09196/index.html

    That's certainly an interesting question!
    If an MBean can be created by two different apps running in the same JVM, then the question is whether it reflects something about each app individually or some global state. For example, if the my.counter:name=CounterPool MBean sometimes represents the counter pool of the Fred app and sometimes of the Jim app, then its name should reflect that. You might imagine adding an extra key to the ObjectName, like my.counter:name=CounterPool,app=Fred. (You should also have a type key, by the way.) Or, you could imagine that each app has its own domain, like Fred/my.counter:name=CounterPool.
    This can be tricky if the MBeans are being created by a library. I think a library that could be used independently by different apps in the same JVM should not create an MBean with a hardcoded ObjectName unless the behaviour of that MBean is the same no matter what app registers it. If the MBean reflects the state of the library, then that state will be different for the copy of the library in each app, so registering a single MBean is wrong.
    If the MBean reflects global state (like the hostname or the filesystem or whatever, which will be the same for each app), then it does seem reasonable to use a fixed name. In that case, the library could try to do the following. Suppose the name of the MBean is d:k=v.
    * Register an MBeanServerNotification listener on the MBeanServerDelegate using an MBeanServerNotificationFilter, to be informed when the d:k=v MBean is unregistered.
    * Try to register d:k=v. If this fails with InstanceAlreadyExistsException, then somebody else got there first.
    * When you get an MBeanServerNotification saying that d:k=v has been unregistered (the other app went away), you register your own d:k=v. Again, if you get InstanceAlreadyExistsException you can ignore it.
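The three steps above can be sketched against a plain MBeanServer. This is only a sketch (the MBean, its attribute, and the domain/key are hypothetical, and there is no synchronization between the listener and the initial registration attempt), but it shows the listen-then-register dance:

```java
import javax.management.*;

public class SharedNameOwner implements NotificationListener {
    // Hypothetical "global state" MBean; names are made up for illustration.
    public interface StateMBean { String getOwner(); }
    public static class State implements StateMBean {
        private final String owner;
        public State(String owner) { this.owner = owner; }
        public String getOwner() { return owner; }
    }

    private final MBeanServer mbs;
    private final ObjectName name;
    private final String id;

    public SharedNameOwner(MBeanServer mbs, ObjectName name, String id) throws JMException {
        this.mbs = mbs;
        this.name = name;
        this.id = id;
        // Step 1: listen on the MBeanServerDelegate for unregistration of d:k=v.
        MBeanServerNotificationFilter filter = new MBeanServerNotificationFilter();
        filter.disableAllObjectNames();
        filter.enableObjectName(name);
        filter.enableType(MBeanServerNotification.UNREGISTRATION_NOTIFICATION);
        mbs.addNotificationListener(MBeanServerDelegate.DELEGATE_NAME, this, filter, null);
        // Step 2: try to claim the name; losing the race is not an error.
        tryRegister();
    }

    private void tryRegister() throws JMException {
        try {
            mbs.registerMBean(new State(id), name);
        } catch (InstanceAlreadyExistsException raceLost) {
            // Somebody else got there first; we'll try again when it is freed.
        }
    }

    @Override
    public void handleNotification(Notification n, Object handback) {
        // Step 3: the previous owner went away; attempt to take over the name.
        try {
            tryRegister();
        } catch (JMException ignored) {
        }
    }
}
```

Whichever copy registers first owns the name; when it unregisters, a surviving listener re-registers its own MBean under the same name.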
