JMX monitoring a Coherence front cache running on IBM WebSphere

I have a question regarding JMX monitoring of a Coherence front cache running in an IBM WebSphere v6.1 JVM (Java 1.5).
My setup is that I have three Coherence nodes (back cache), each running 10 JVMs, for 30 JVMs in total. Via JMX, all 30 JVMs are set up to be monitored from one of them, which is configured as the JMX server for the other 29 instances. The other 29 JVMs set -Dtangosol.coherence.management.remote=true at startup to indicate that their MBean tree should be managed centrally. This works fine, but when I try to extend this to also manage the front cache (running on WebSphere on a separate node in the tier above) by adding the same flag (-Dtangosol.coherence.management.remote=true), it does not appear in the MBean tree the way the other 30 Coherence cache (Sun) JVMs do.
Are there other things I have to consider to get this working? There are no firewalls between the nodes, so that is not the problem. How much of IBM WebSphere's (non-standard) JMX functionality is required to get this to work?
/Jonas

How is the WebSphere node connected to the cluster? Is it using TCMP? Extend?
Local, Distributed, Replicated, Near, Overflow, External and Optimistic cache statistics appear in the Coherence JMX server. However, near and local caches created on Extend nodes do not appear. Therefore, setting -Dtangosol.coherence.management.remote=true on an Extend client will not register the near or local caches.
Thanks,
Everett
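
For reference, a typical split of the management flags in a Coherence 3.x cluster looks like the following (property names per the standard Coherence management documentation; which JVM hosts the MBean server is a deployment choice):

On the single JVM acting as the central JMX server:
    -Dtangosol.coherence.management=all
On every other node, including the WebSphere front-cache JVM:
    -Dtangosol.coherence.management.remote=true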

Similar Messages

  • Invalidate EJB cache using JMS ibm.websphere.ejbpersistence.InvalidateTCF

    Hi,
    I work with WSAD 5.1, and I need to manually invalidate some EJB entities in the cache. After looking for information in the WebSphere documentation, it seems that I have to use JMS to do this.
    I create a PMCacheInvalidationRequest object, and it seems that I have to publish it.
    I found some indications in the WebSphere documentation, based on a topicConnectionFactoryJNDIName named "com.ibm.websphere.ejbpersistence.InvalidateTCF".
    Has anyone already worked with this? Can someone help me understand whether I have to configure something in my server?
    Thanks.

    Please,
    has nobody used the PMCacheInvalidationRequest API with JMS? Or is my message not clear?
    Thanks
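
    A minimal sketch of the JMS side, assuming the topic connection factory name quoted from the documentation above. The topic JNDI name below is a placeholder, using an ObjectMessage (rather than serialized bytes) is an assumption the WebSphere documentation should confirm, and building the PMCacheInvalidationRequest itself is left to the caller:
    // Hedged sketch: publishing an invalidation request over JMS.
    // "com.ibm.websphere.ejbpersistence.InvalidateTCF" is the factory name from
    // the WebSphere docs quoted above; the topic name below is a placeholder,
    // and the ObjectMessage format is an assumption, not a documented fact.
    import java.io.Serializable;
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class PmCacheInvalidator {
        public static void publish(Serializable invalidationRequest) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory tcf = (TopicConnectionFactory)
                    ctx.lookup("com.ibm.websphere.ejbpersistence.InvalidateTCF");
            Topic topic = (Topic) ctx.lookup("jms/InvalidateTopic"); // placeholder JNDI name
            TopicConnection conn = tcf.createTopicConnection();
            try {
                TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                TopicPublisher publisher = session.createPublisher(topic);
                publisher.publish(session.createObjectMessage(invalidationRequest));
            } finally {
                conn.close();
            }
        }
    }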

  • How to get Oracle Coherence caching running within CQ5

    I've been trying to get Oracle Coherence running within CQ5 and I'm having problems with the setup.  First, I've been successful in creating an OSGi bundle jar from coherence.jar and it does deploy successfully through the Felix OSGi console.  But when I try to execute code using the Coherence classes, I get RMI marshalling run-time errors.  I've resolved the other run-time errors that have to do with the Import-Package definition in manifest.mf for the bundle.  I believe that the root problem that I'm facing is the deployment of the files tangosol-coherence-override.xml and the additional xml file that it points to.  I don't believe that these two files are being found within CQ5 correctly.  I've tried placing these files in the following locations:
    1.  Within coherence.jar itself (at the root directory)
    2.  In the CQ5 application's install folder
    3.  In the CQ5 application's src/main/resources folder
    Has anyone ever deployed Oracle Coherence in CQ5 or can give me a suggestion on what to try?  I appreciate any help or ideas.
    Thanks!  - Charlie

    There are 3 options, described below. The steps may vary slightly based on the CQ and Oracle Coherence versions; the details below are for CQ 5.4 and Coherence 3.7.1.
    Option 1:-  Oracle recommended way
    *     Create a basic Web application directory
    /Sample.jsp
    /WEB-INF/web.xml
    /WEB-INF/classes/tangosol-coherence-override.xml
    /WEB-INF/lib/coherence.jar
    *     jar -cvf hello.war *
    *     Deploy the hello.war file by going to cq servlet engine at http://<host>:<port>/admin
    *     Access your sample with http://<host>:<port>/<contextroot>/Sample.jsp
    Option 2:-  Combination of Option 1 & Option 3
    *      Stop the cq
    *      Place coherence.jar at <CQ_Install_Dir>/crx-quickstart/server/runtime/0/_/WEB-INF/lib/
    *      Place tangosol-coherence-override.xml and all additional xml files at <CQ_Install_Dir>/crx-quickstart/server/runtime/0/_/WEB-INF/classes/
    *      Modify <CQ_Install_Dir>/crx-quickstart/launchpad/sling.properties to add below properties
    sling.bootdelegation.com.tangosol.net.cache=com.tangosol.net.cache
    sling.bootdelegation.com.tangosol.net=com.tangosol.net
    sling.bootdelegation.com.tangosol.util=com.tangosol.util
    *       Start the cq
    *       Verify with a JSP script that puts an entry into the cache and reads it back, something like:
        <%@ page import="com.tangosol.net.CacheFactory, com.tangosol.net.NamedCache" %>
        <%
            // Swap in the classloader that loaded the Coherence classes so the cache
            // configuration on the classpath resolves, then restore it afterwards.
            String key = "k2";
            String value = "Hello World from cq!";
            ClassLoader oldLoader = Thread.currentThread().getContextClassLoader();
            ClassLoader newLoader = com.tangosol.net.CacheFactory.class.getClassLoader();
            Thread.currentThread().setContextClassLoader(newLoader);
            CacheFactory.ensureCluster();
            NamedCache cache = CacheFactory.getCache("hello-example");
            cache.put(key, value);
            out.println((String) cache.get(key));
            CacheFactory.shutdown();
            Thread.currentThread().setContextClassLoader(oldLoader);
        %>
    Option 3 :-   The one you are trying, creating an OSGi bundle jar from coherence.jar, and the preferred approach as well. I haven't tried this approach, but here are some thoughts.
    *    From some Oracle seminars I remember that Oracle had plans to provide an OSGi jar for Coherence. Try to get that instead of building one yourself.
    *    If it is not available, then as a workaround I can think of trying to put the XML file in the repository or bundle and load it in a Java class with something like the following (see the sketch after this list):
    getClass().getClassLoader().getResourceAsStream("tangosol-coherence-override.xml")
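
    A rough sketch of that workaround, assuming the override file is visible on the bundle classpath. "tangosol.coherence.override" is the standard Coherence 3.x operational override property, and the context-classloader swap mirrors the JSP in Option 2; this has not been verified inside CQ5:
    // Hedged sketch: point Coherence at an override file resolved from the
    // bundle classpath before the first CacheFactory call.
    ClassLoader oldLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(
                com.tangosol.net.CacheFactory.class.getClassLoader());
        // The property is resolved against the classpath, so packaging the file
        // inside the bundle should be enough for Coherence to find it.
        System.setProperty("tangosol.coherence.override", "tangosol-coherence-override.xml");
        com.tangosol.net.CacheFactory.ensureCluster();
    } finally {
        Thread.currentThread().setContextClassLoader(oldLoader);
    }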

  • EES running in IBM Websphere

    Has anyone successfully configured EES with IBM WAS 4.x? If so, I would appreciate any tips. Has anyone developed web applications that can use EES as an EJB and have your application EJBs talk to EES (EJB) and deliver reporting information? If so, any tips are appreciated. Thanks.

    Hi "npatil", I ask in that servant wanted to carry out the implementation of the EDS as a EJB, since that the application server executes in N platforms, doesn't mean that the EDS either like alone program, as EJB, as servlet or as a object CORBA, it can be connected to Essbase, the verification point resides in that the classes that make the bridge process, as I mention it "jcole", they use a characteristic of Java called JNI (Java Native Interface), therefore the interaction between your classes of Java and the server of Essbase, carries out it in fact a dll group (win32) or so libraries (unix) of the one it run-cheats of Essbase, therefore you have the restrictive of platform compatibility and version of where it executes the version of Essbase that do you want to use, I recommend you to verify on that platform of operating system implemented your development in Java. Currently in one of our clients has several developments of business applications in WebLogic 6.0 and 6.1 in Solaris 8 (SunOS 5.8), it is planned to carry out an automation system and monitor for its servers of Essbase, the proposal is to use the JAPI of Essbase, the problem resides in that the version of the operating system in the development environment is Solaris 2.6 (SunOS 5.6) in this the run-time of Essbase version 6.5.1 it doesn't execute, therefore, even if the developing, implementing and making the deployment of all the component ones in a satisfactory way, to the moment of run your application any call toward the server didn't execute.For example the versions of AIX for Essbase 6.5.1 are AIX 4.3.3 and 5LOmar Aecio Garcia RamirezBusiness Intelligence ArchitectOmniSys S.A. de [email protected]

  • * IBM Websphere Vs Oracle's Weblogic server *

    IBM Websphere Vs Oracle's Weblogic server
    Can ODI run on IBM Websphere?
    How feasible is it to shift ODI from Oracle Web logic to IBM Websphere?
    What challenges , efforts required in ODI code to do so?
    Please suggest.
    Thanks in Advance.
    Regards,
    Dinesh.

    1. Well, for your first question, I believe you can.
    2. I believe it will go fine because you just need one application server to deploy your EE agent.
    3. I have never tried IBM WebSphere as an application server for ODI, but it should not be as complicated as you are thinking. It is better to do R&D in your local system first. Also, I am not sure about the issues you will face in the future (for example PermGen errors, issues in clustering, etc.).
    Thanks

  • IBM Websphere to ActiveDirectory ( Win 2003 ) LDAP SSL.

    I am trying to connect to a Windows 2003 AD LDAP from WebSphere Application Server.
    I have installed the Windows 2003 certificates into the local key store.
    I used ikeyman from WebSphere. The Win2k3 certificates were in .arm format (that is how the Win2k3 admin gave them to me). I successfully installed the certificates in the local keystore and pointed to that keystore when the LDAP connection happens.
    I am getting a MalformedURLException: cannot parse url ldaps://xx.xx.x.x:636
    Not an LDAP url.
    At the same time I also tried with the Sun JDK; it shows another error:
    default context init failed: java.security.cert.CertificateParsingException: java.io.IOException: subject key, Unknown key spec: Invalid RSA modulus size.
    Please help me. I want this program to run from the IBM WebSphere environment.
    Please find my code below.
    Thanks in advance.
    import java.util.Hashtable;
    import javax.naming.*;
    import javax.naming.ldap.*;
    import javax.naming.directory.*;
    import java.io.*;
    public class Test {
        public static void main(String args[]) {
            // String userName = "CN=Renjith\\, Vasudevan";
            String userName = null;
            String test = ",OU=xx,OU=xx,DC=xx,DC=xxm";
            String newPassword = "xxx";
            String oldPassword = "xx";
            Hashtable env = new Hashtable();
            // Hard coded values - will be moved to properties file.
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            // env.put(Context.PROVIDER_URL, "ldap://X.X.X.X:389");
            env.put(Context.PROVIDER_URL, "ldaps://X.X.X.X:636");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            // env.put(Context.SECURITY_PRINCIPAL, "[email protected]");
            env.put(Context.SECURITY_PRINCIPAL, "[email protected]");
            env.put(Context.SECURITY_CREDENTIALS, "xxxx");
            // env.put(Context.SECURITY_PROTOCOL, "ssl");
            String keystore = "C:\\j2sdk1.4.2_04\\jre\\lib\\security\\cacerts";
            System.setProperty("javax.net.ssl.trustStore", keystore);
            System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
            try {
                // Create the initial directory context
                LdapContext ctx = new InitialLdapContext(env, null);
                // The following code is only for getting the correct dn - the hardcoded dn had some tabbing/char problem.
                // Renjith - begin
                SearchControls constraints = new SearchControls();
                constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
                String[] strAttributes = { "sAMAccountName", "memberOf" };
                // String FILTER = "(&(objectClass=user))";
                String FILTER = "(&(objectClass=user)(sAMAccountName=prrev))";
                String searchBase = "OU=xx,OU=xx,DC=infores,DC=xx";
                constraints.setReturningAttributes(strAttributes);
                NamingEnumeration results = ctx.search(searchBase, FILTER, constraints);
                System.out.println("results : " + results);
                while (results != null && results.hasMore()) {
                    SearchResult sr = (SearchResult) results.next();
                    String dn = sr.getName();
                    // String dn = ((Context) sr.getObject()).getNameInNamespace();
                    if (dn.indexOf("Renjith") != -1) {
                        System.out.println("Distinguised Name : " + dn);
                        // System.out.println("Charg" + dn.toCharArray());
                        userName = dn + test;
                        break;
                    }
                }
                // Renjith - end.
                // Setting the password is an LDAP modify operation
                ModificationItem[] mods = new ModificationItem[2];
                String oldQuotedPassword = "\"" + oldPassword + "\"";
                byte[] oldUnicodePassword = oldQuotedPassword.getBytes("UTF-16LE");
                String newQuotedPassword = "\"" + newPassword + "\"";
                byte[] newUnicodePassword = newQuotedPassword.getBytes("UTF-16LE");
                mods[0] = new ModificationItem(DirContext.REMOVE_ATTRIBUTE,
                        new BasicAttribute("unicodePwd", oldUnicodePassword));
                mods[1] = new ModificationItem(DirContext.ADD_ATTRIBUTE,
                        new BasicAttribute("unicodePwd", newUnicodePassword));
                System.out.println("Trying to reset Password for: " + userName);
                // Perform the update
                ctx.modifyAttributes(userName, mods);
                System.out.println("Reset Password for: " + userName);
                ctx.close();
            } catch (NamingException e) {
                e.printStackTrace();
                System.out.println("Problem resetting password: " + e);
            } catch (UnsupportedEncodingException e) {
                System.out.println("Problem encoding password: " + e);
            }
        }
    }

    The first error you described, "malformed URL", is possibly due to the fact that your JRE version 1.4 does not support the ldaps URL.
    If using 1.4, then you must use the following syntax:
    env.put(Context.PROVIDER_URL, "ldap://servername:636");
    If using 1.5, then it supports the syntax:
    env.put(Context.PROVIDER_URL, "ldaps://servername:636");
    I can't comment on the other error message you receive; however, I am concerned about two things. One is that in your sample code you are using a "null" user name, and secondly, I have no idea what certificate you have installed. I do not recall seeing a Windows CA cert with the extension of .arm. Normally the Root CA exported trust cert has the extension of .cer.
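
    A minimal sketch of the 1.4-compatible variant (plain ldap:// URL on the SSL port, with SSL requested via SECURITY_PROTOCOL as in the commented-out line of the original code; the principal and password are placeholders):
    Hashtable env = new Hashtable();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    // JRE 1.4 does not understand the ldaps:// scheme, so use ldap:// on the
    // SSL port and request SSL explicitly.
    env.put(Context.PROVIDER_URL, "ldap://servername:636");
    env.put(Context.SECURITY_PROTOCOL, "ssl");
    env.put(Context.SECURITY_AUTHENTICATION, "simple");
    env.put(Context.SECURITY_PRINCIPAL, "user@domain");   // placeholder
    env.put(Context.SECURITY_CREDENTIALS, "password");    // placeholder
    LdapContext ctx = new InitialLdapContext(env, null);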

  • Near Cache JMX Monitoring

    Hi,
    Can the JMX monitoring capabilities of Coherence be used to monitor the hit-rate of a near cache, i.e. by adding the JMX command line options to the extend client - to determine how effective it is?
    Thanks

    Hi dxfelcey,
    You can get to the NearCache statistics by querying the CacheMBean(s) with the key attribute "tier=front". For example, the following query will return a set of ObjectNames for all MBeans representing instances of the NearCache called "test" across the cluster:
    MBeanServer mbs = MBeanHelper.findMBeanServer();
    Set setNames = mbs.queryNames(new ObjectName("Coherence:type=Cache,name=test,tier=front,*"), null);
    For more information please see the Registry Javadoc here:
    http://download.oracle.com/otn_hosted_doc/coherence/330/com/tangosol/net/management/Registry.html
    and the Coherence Wiki page here:
    http://wiki.tangosol.com/display/COH33UG/Managing+Coherence+using+JMX
    Regards,
    Gene
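
    Building on that query, a short hedged sketch that reads the hit statistics from each front-tier CacheMBean when run inside the JVM hosting the Coherence MBean server. The attribute names (CacheHits, CacheMisses, HitProbability) are taken from the CacheMBean description in the Registry Javadoc linked above, so verify them against your Coherence version:
    // Sketch only: iterate the front-tier CacheMBeans and print hit statistics.
    import java.util.Iterator;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import com.tangosol.net.management.MBeanHelper;

    public class FrontCacheHitRate {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = MBeanHelper.findMBeanServer();
            Set setNames = mbs.queryNames(
                    new ObjectName("Coherence:type=Cache,name=test,tier=front,*"), null);
            for (Iterator iter = setNames.iterator(); iter.hasNext(); ) {
                ObjectName name = (ObjectName) iter.next();
                // CacheHits/CacheMisses are counters since the last statistics reset;
                // HitProbability is roughly hits divided by total gets.
                long hits   = ((Long) mbs.getAttribute(name, "CacheHits")).longValue();
                long misses = ((Long) mbs.getAttribute(name, "CacheMisses")).longValue();
                double prob = ((Double) mbs.getAttribute(name, "HitProbability")).doubleValue();
                System.out.println(name + ": hits=" + hits + ", misses=" + misses
                        + ", hitProbability=" + prob);
            }
        }
    }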

  • Specifying coherence-cache-config.xml for multiple clusters

    Hi,
    I am running two cache clusters (Cluster A and B, which hold different cache types). We have a web application that needs to communicate with both clusters, and we have two coherence-cache-config-g.xml files, one for each cluster.
    Where do we specify the two coherence-cache-config.xml files, one for each of these clusters, in the coherence.jar that we deploy on the web app server?
    Please provide some inputs...
    thanks in advance,
    - G.

    Hi G,
    You can define a path to the cache configuration descriptor in your operational configuration override file (tangosol-coherence-override.xml) or specify it with the system property "tangosol.coherence.cacheconfig".
    Please see this Wiki page for details:
    http://wiki.tangosol.com/display/COH32UG/configurable-cache-factory-config
    Regards,
    Gene
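
    For example, a node can be pointed at a specific descriptor at startup (the property name is as above; the file name is only an illustration, and note that the default CacheFactory in a given JVM reads a single descriptor this way):
    -Dtangosol.coherence.cacheconfig=coherence-cache-config-clusterA.xml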

  • Looking for some advice on CEP HA and Coherence cache

    We are looking for some advice or recommendation on CEP architecture.
    We need to build a CEP application that conforms to the following:
    • HA with no loss of events or duplicate events when failing over to the backup server.
    • We have some aggregative rules that needs to see all events.
    • Events are XMLs with size of 3KB-50KB. Not all elements are needed for the rules but they are there for other systems that come after the CEP (the customer services).
    • The XML elements that the CEP needs are in varying depth in the XML.
    Running the EPN on a single thread is not fast enough for the required throughput, mainly because of the network latency to the JMS and the heavy task of parsing the XML. Because of that, we are looking for a solution that will read the messages from the JMS in parallel (multi-threaded) but will keep the same order of events between the Primary and Secondary CEPs.
    One idea that came to our minds is to use Coherence cache in the following way:
    • On the CEP inbound use a distributed queue and not topic (at the CEP outbound it is still topic).
    • On the CEPs side use a Coherence cache that runs on the CEPs JVMs (since we already have a Coherence cluster for HA).
    • Both CEPs read from the queue using multi threading (10 reading threads – total of 20 threads) and putting it to the Coherence cache.
    • The Coherence cache is publishing the events to both CEPs on a single thread.
    The EPN looks something like this:
    JMS adapter (multi threaded) -> replicated cache on both CEPs -> event bean -> HA adapter -> channel -> processor -> ….
    Does this sound reasonable to you?
    Are we overshooting here? Is there a simpler solution for our needs?
    Is there a best practice for such requirements?
    Thanks

    Hi,
    Just to make it clear:
    We do not parse the XML on the event bean after the Coherence. We do it on the JMS adapter on multiple threads in order to utilize all the server resources (CPUs) and then we put it in the replicated cache.
    The requirements from our application are:
    - There is an aggregative query that needs to "see" all events (this means that we need to pass all events thru a single processor and we cannot partition them to several processors).
    - Because this is a HA solution the events on both CEPs (primary and secondary) needs to be at the same order when reaching the HA inbound adapter and the processor.
    - A single-threaded JMS adapter does not read the messages from the JMS fast enough, mainly because it takes time to parse the XML into an event.
    - Using a multi-threaded adapter, or many single-threaded adapters with message selectors, will create a situation where the order of events on both CEPs is not the same at the processor inbound.
    This is why we needed a mediator, so we can read in multiple threads that parse the XMLs in parallel without worrying about the order of messages, and on the other hand publish all the messages on a single thread to the processors on both CEPs from this shared mediator (we use a replicated cache that runs on both JVMs).
    We use a queue instead of a topic because if we read the messages from a topic on both CEPs they will be stored twice in the Coherence replicated cache. But if we use a queue, when server 1 reads a message and puts it in the Coherence replicated cache, server 2 will not read it because it was removed from the queue.
    If I understand correctly you are suggesting replacing the JMS adapter with an event bean that will read the messages from the JMS directly?
    Are you also suggesting that we will not use a replicated cache but instead a stand alone cache on each server? In this case how do we keep the same order of events on both CEPs (on both caches)?

  • Verify whether the session data is kept in the Coherence caches

    I have successfully combined the MapViewer application with WebLogic and Oracle Coherence*Web.
    How can I verify whether the session data of the MapViewer application is kept in the Coherence caches or not?
    All output shows that MapViewer, the WebLogic server and Coherence are running well.
    Are all of the following steps right?
    The procedure is as the following:
    1. Create a WebLogic domain: Map_domain.
    2. Start the WebLogic domain Map_domain by running startWebLogic.sh script.
    3. Install Coherence.jar as a library on WebLogic.
    4. Copy the coherence.jar in the WAR's WEB-INF/lib directory.
    5. Create a reference to the shared library by modifying the weblogic.xml in the web application's WEB-INF directory
    and add the following contents:
    <weblogic-web-app>
         <library-ref>
              <library-name>coherence-web-spi</library-name>
              <specification-version>1.0.0.0</specification-version>
              <implementation-version>1.0.0.0</implementation-version>
              <exact-match>false</exact-match>
         </library-ref>
     </weblogic-web-app>
     6. Install coherence-web-spi.war as a WebLogic library.
    7. Install the MapViewer as a WebLogic application.
    8. Start a Coherence cache server using the cmd file web-cache-server.cmd and then start MapViewer application.
    The content of web-cache-server.cmd file:
    @echo off
    @rem This will start a cache server
    setlocal
    :config
    @rem specify the Coherence installation directory
    set coherence_home=F:\coherence
    @rem specify the JVM heap size
    set memory=256m
    :start
    if not exist "%coherence_home%\lib\coherence.jar" goto instructions
    if "%java_home%"=="" (set java_exec=java) else (set java_exec=%java_home%\bin\java)
    :launch
     set java_opts=-Xms%memory% -Xmx%memory%
     "%java_exec%" -server -showversion %java_opts% ^
       -cp %coherence_home%\lib\coherence.jar;%coherence_home%\lib\coherence-web-spi.war ^
       -Dtangosol.coherence.management.remote=true ^
       -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml ^
       -Dtangosol.coherence.session.localstorage=true ^
       com.tangosol.net.DefaultCacheServer %1
    goto exit
    :instructions
    echo Usage:
    echo   ^<coherence_home^>\bin\cache-server.cmd
    goto exit
    :exit
    endlocal
     @echo on

    Any opinions are welcome.

  • Implementing Oracle DCN with Coherence Cache in a weblogic 10 app server

    I am trying to implement DCN (Database Change Notification) on Oracle to notify a listener of a DB event so that I can update the Coherence cache.
    I followed the tutorial here and it is working fine using a sample program with a main method to execute the listener class and keep it running.
    My question is how this notification and listener would be implemented in a production environment, since my local test only ran a main method to keep the listener running. What technology should be used to keep the listener always running in the background and receiving the notifications from the database?
    Would a [weblogic startup class|http://docs.oracle.com/cd/E13222_01/wls/docs81/ConsoleHelp/startup_shutdown.html] work for this purpose?
    We are using Weblogic 10 as our app server.

    That's a very simple question with (many) potentially complex answers. I think that first you need to study the information on TimesTen to understand what it is and what it does. Then you need to relate that to your current performance bottleneck (I assume you have analysed it). If your bottleneck is database access, then maybe TimesTen can help you.
    Please bear in mind that TimesTen is not a 'transparent' drop-in performance booster. To implement TimesTen and to realise a significant performance improvement, you will almost certainly need to make changes to both the application and the overall architecture. The cost/difficulty of doing that also needs to be factored in.
    Chris
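
    On the original question of keeping the listener alive inside the application server: besides a WebLogic startup class, a portable alternative is a ServletContextListener that registers the DCN listener on deployment. A hedged sketch, with the actual DCN registration left as a placeholder since it depends on the tutorial's oracle.jdbc registration code:
    // Sketch only: bootstrap/tear down the DCN listener with the web app lifecycle.
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class DcnBootstrapListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            // Register the database change notification listener here, exactly as
            // the standalone main method in the tutorial does (placeholder call).
            registerDcnListener();
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // Unregister so the database stops delivering notifications.
            unregisterDcnListener();
        }

        private void registerDcnListener()   { /* placeholder for the tutorial's registration code */ }
        private void unregisterDcnListener() { /* placeholder for the corresponding unregistration */ }
    }
    The class would also need to be declared as a <listener> in web.xml (or annotated with @WebListener on Servlet 3.0 and later).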

  • How to Test coherence cache configuration

    Hi,
    I have configured Coherence using the two config XMLs below. I started out by trying to configure a distributed cache scheme, but I am not sure if it has come up correctly. This configuration works fine from a caching point of view, and it even does the clustering, but my only doubt is: how can I test whether it is actually a distributed cache or a replicated cache?
    coherence-cache-config.xml
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>dist-ABCCache</cache-name>
                   <scheme-name>ABC-distributed-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--
    Distributed caching scheme.
    -->
              <distributed-scheme>
                   <scheme-name>ABC-distributed-cache-scheme</scheme-name>
                   <lease-granularity>member</lease-granularity>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server1</address>
                                  <port>####</port>
                             </local-address>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    tangosol-coherence-override.xml
    <coherence>
         <cluster-config>
              <member-identity>
                   <cluster-name>MyCluster</cluster-name>
              </member-identity>
              <unicast-listener>
                   <well-known-addresses>
                        <socket-address id="1">
                             <address>server1</address>
                             <port>####</port>
                             <port-auto-adjust>false</port-auto-adjust>
                        </socket-address>
                        <socket-address id="2">
                             <address>server2</address>
                             <port>####</port>
                             <port-auto-adjust>false</port-auto-adjust>
                        </socket-address>                    
                   </well-known-addresses>
              </unicast-listener>
              <multicast-listener>
                   <time-to-live system-property="tangosol.coherence.ttl">4</time-to-live>
                   <join-timeout-milliseconds>3000</join-timeout-milliseconds>
              </multicast-listener>
              <packet-publisher>
                   <packet-delivery>
                        <timeout-milliseconds>30000</timeout-milliseconds>
                   </packet-delivery>
              </packet-publisher>
              <service-guardian>
                   <timeout-milliseconds system-property="tangosol.coherence.guard.timeout">35000
                   </timeout-milliseconds>
              </service-guardian>
         </cluster-config>
         <logging-config>
              <severity-level system-property="tangosol.coherence.log.level">9</severity-level>
              <character-limit system-property="tangosol.coherence.log.limit">0</character-limit>
         </logging-config>
    </coherence>

    user1945969 wrote:
    "Thanks for your answer, but I also wanted to know if there is any way I can verify that by the data in the cluster?"
    You can start up the [command line application|http://coherence.oracle.com/pages/viewpage.action?pageId=16684] or write a quick class to display the information for that particular cache.
    "I mean, can I check what data is present in each cluster member?"
    I would suggest taking a look via JMX. In this case, you would want to look at the ServiceMBean, CacheMBean and StorageManagerMBean MBeans (take a look at the Registry for more information).
    "Another reason why I am not so confident whether this scheme is distributed or not is that in my config xml I do not have any backing map scheme configured, so how is Coherence going to do the backups in this case?
    <backing-map-scheme>
         <local-scheme/>
    </backing-map-scheme>"
    You do have a "backing map" configured; it will just use the defaults.
    Coherence always manages the backups automatically, transparently and dynamically for you. When using the partitioned cache (i.e. "distributed-scheme"), Coherence will place the backup in a storage-enabled node on a separate physical machine from the primary.
    Rob
    :Coherence Team:
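
    As a companion to the "quick class" suggestion, a hedged sketch that joins the cluster with the same configuration and prints the service type behind the cache mapping; for the configuration above it should report a partitioned (DistributedCache) service rather than a ReplicatedCache one, though the exact type strings should be checked against your Coherence version:
    // Sketch only: join the cluster and inspect the service backing the cache.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CheckCacheTopology {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("dist-ABCCache");
            // For a <distributed-scheme> this is expected to be "DistributedCache".
            System.out.println("service type: "
                    + cache.getCacheService().getInfo().getServiceType());
            System.out.println("size: " + cache.size());
            CacheFactory.shutdown();
        }
    }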

  • About using OSB JMX Monitoring API

    hi Experts,
    My customer is using OSB 11.1.1.7. They are trying to use the JMX Monitoring API to get statistics, but have the following issues:
    1. For a non-SOAP service, the serviceDomainMbean.getBusinessServiceStatistics method always returns an error when typeFlag is set to ResourceType.WEBSERVICE_OPERATION.value(); the reason is that there is no WS operation for this service. The customer wants to know how to detect that a service has no WS operations, so they can avoid requesting WS-operation statistics for those kinds of services.
    2. Is there a way to get statistics for the current aggregation interval? The API seems to work with the period since the last reset.
    3. Furthermore, how do we get statistics for a specific period? For example, the customer wants to query statistics from 05/01 to 05/03, or from 05/03 12:00 to 05/03 13:00.
    Thanks for the help.
    Best regards

    Davinder Singh wrote:
    "Hi,
    I have an application deployed on WebLogic Server 8.1. It has some MBeans exposed for management purposes. These need to be accessed from another web application running on another WebLogic instance on a different machine. For this, the managed application has a connector server and the management application is trying to connect to it through a connector client.
    On the managed application side, I am getting NoSuchMethodError: javax.management.MBeanServer.getClassLoaderRepository()
    I am not using the WebLogic implementation of JMX (MBean Server).
    My guess about the error: WebLogic implements JMX version 1.0 and I am using the JMX remoting API (the RI from Sun), which requires JMX 1.2.
    Is there a way I can make WebLogic use JMX 1.2?
    Thanx,
    Davinder"
    I don't think there is a way to do this. But I might be wrong.

  • Start a new Coherence cache server

    Hi experts,
    I have an application configured with Oracle Coherence running on the WebLogic server. Can I start a new Coherence cache server from a command prompt so that this newly created cache server joins the existing cluster already running on the WebLogic server? Can you please guide me through this or point me to the relevant documentation?
    Thanks for your time.
    Cheers,

    You just need to run...
    java <parameters> -cp <your class path> com.tangosol.net.DefaultCacheServer
    where...
    <parameters> is the same set of -D parameters you have given the application inside WebLogic.
    <your class path> is all the jar files in your application, including coherence.jar (but you don't need any EAR or web app stuff, obviously).
    Presumably you have configured the system properties for your application in WebLogic to set the multicast address (-Dtangosol.coherence.clusteraddress
    and -Dtangosol.coherence.clusterport), cache config (-Dtangosol.coherence.cacheconfig), etc. If you have not, then you really should! You basically run DefaultCacheServer with the same parameters (see the example below).
    When I have used WebLogic and Coherence in the past, the usual configuration is to run a number of storage-enabled cache server nodes (which is what you are asking about) and then run the WebLogic nodes as storage-disabled cluster members. This is much more efficient, as it lets the storage-enabled nodes concentrate on storing data and the WebLogic nodes concentrate on your application.
    JK
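
    Putting that together, the launch command looks roughly like the following (a sketch only: the heap settings and placeholders are illustrative, and the -D values should match whatever the WebLogic servers already use):
    java -server -Xms512m -Xmx512m ^
         -Dtangosol.coherence.clusteraddress=<same as the WebLogic app> ^
         -Dtangosol.coherence.clusterport=<same as the WebLogic app> ^
         -Dtangosol.coherence.cacheconfig=<same cache configuration file> ^
         -cp <application jars>;coherence.jar ^
         com.tangosol.net.DefaultCacheServer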
