Coherence 1.2.1rc available

Tangosol Coherence provides replicated and distributed data storage for
WebLogic clusters, and also supports HTTP session replication in WebLogic 7.
The 1.2.1 release adds failover/failback to the partitioned distributed
cache. Download a free evaluation from:
http://www.tangosol.com/coherence.jsp
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for WebLogic

Warning: I upgraded a couple of hours ago here in CA and things aren't exactly working.
Maybe it's just my setup here (OS X 10.4.10), but I'm scrambling to get the phone working again without restoring to the old iPhone version. (I synced up before the upgrade, so I have all settings etc., except a large number of pictures that will be erased by a restore.)
After upgrading to v1.1.2 (via iTunes), the phone now only offers an "Emergency call" option, both on the iPhone's default slide-to-unlock screen and on the only functional screen that follows, above the number keypad (rotating through other languages as "Appel d'urgence" and "Notruf"). "No service" is also new with the upgrade, so even the "Emergency call" option is null and void.
In other words: back up all photos and movies from the phone, and sync up with your Mac, before upgrading to v1.1.2. If anyone has tips on how to restore without losing photos, please hit me back.
Thanks.

Similar Messages

  • Creating sub-cluster within a Coherence cluster

    Hi all,
     Does Coherence support the creation of 'sub-clusters' within a larger Coherence cluster, such that certain caches can be configured to run only on these sub-clusters, while other caches run on the entire Coherence cluster as usual?
     E.g., suppose my application consists of 3 WebSphere clusters (under the same cell), each consisting of 2 WebSphere server instances. Each WebSphere cluster has a specific functional responsibility (e.g., one cluster handles the UI, one handles core processing functionality, and the third handles links with external legacy systems). Since the functionality itself is 'partitioned', it's possible that certain data managed by a particular WAS cluster should only be managed within that cluster and not across all 6 WAS instances.
     So, in this case, suppose I do have an 'outer' Coherence cluster of all 6 WAS instances (and some caches are configured to be accessible to all 6 WAS instances, since the data managed in these caches is needed by all of them). Can I configure a smaller Coherence cluster to be available only on, say, 2 of the WebSphere instances (say the WAS cluster that handles legacy links), and configure certain caches to be available only on this smaller sub-cluster?
    regards,
    Sanjeev.

    I am quite confused about the purpose of the service-name. How would you tie down a cache to a particular service? In the context of the above example, the requirement seems to be:
    CacheA should be spread over the UI cluster.
    CacheB should be spread over the legacy cluster.
    CacheC should be spread over the global cluster.
    Are you suggesting something like the following:
     Cache config file on a UI node:
    <cluster-config>
       <caching-scheme-mapping>
          <cache-mapping>
             <cache-name>CacheA</cache-name>
             <scheme-name>ui</scheme-name>
          </cache-mapping>
          <cache-mapping>
             <cache-name>CacheC</cache-name>
             <scheme-name>global</scheme-name>
          </cache-mapping>
       </caching-scheme-mapping>
       <caching-schemes>
           <distributed-scheme>
              <scheme-name>ui</scheme-name>
              <service-name>ui</service-name>
           </distributed-scheme>
           <distributed-scheme>
              <scheme-name>global</scheme-name>
              <service-name>global</service-name>
           </distributed-scheme>
        </caching-schemes>
     </cluster-config>
     Cache config file on a legacy node:
    <cluster-config>
       <caching-scheme-mapping>
          <cache-mapping>
             <cache-name>CacheB</cache-name>
             <scheme-name>legacy</scheme-name>
          </cache-mapping>
          <cache-mapping>
             <cache-name>CacheC</cache-name>
             <scheme-name>global</scheme-name>
          </cache-mapping>
       </caching-scheme-mapping>
       <caching-schemes>
           <distributed-scheme>
              <scheme-name>legacy</scheme-name>
              <service-name>legacy</service-name>
           </distributed-scheme>
           <distributed-scheme>
              <scheme-name>global</scheme-name>
              <service-name>global</service-name>
           </distributed-scheme>
        </caching-schemes>
     </cluster-config>
     The basic question seems to be: how do you control the nodes over which a cache is spread, purely from the cache name?
     Also, the 3.2 <role-name> feature seems to address this requirement. How does that play versus a service-name?
     My requirement is similar (needing to control the nodes over which different caches are spread), but I do not quite understand how the service-name would be used to satisfy this example. Could you please explain via cache configurations for this example?
    Thanks
    Ghanshyam

  • Coherence MBean not registering

    Hi,
     We are using Coherence 3.3.1 with WLS 9.1 on Solaris 10. We start Tangosol by calling DefaultCacheServer.start() in a WLS Application Lifecycle listener. When I open up JConsole, I see the Coherence MBean.
     We removed the WLS Application Lifecycle listener and replaced it with a SpringContextListener and WSSpringServlet:
     -- Pre Start is called in contextInitialized() of the SpringContextListener
     -- Post Start is called in init() of the WSSpringServlet
     -- Post Stop is called in contextDestroyed() of the SpringContextListener (a minimal sketch of this wiring follows below)
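     For reference, a minimal sketch of that listener wiring (an illustration only, assuming a plain javax.servlet ServletContextListener; apart from DefaultCacheServer.start() and CacheFactory.shutdown(), the names here are not taken from this thread):
     import javax.servlet.ServletContextEvent;
     import javax.servlet.ServletContextListener;
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.DefaultCacheServer;
     // Hypothetical listener illustrating the startup/shutdown hooks described above.
     public class CoherenceLifecycleListener implements ServletContextListener {
         public void contextInitialized(ServletContextEvent event) {
             // Start the Coherence cache services when the web application is deployed.
             DefaultCacheServer.start();
         }
         public void contextDestroyed(ServletContextEvent event) {
             // Leave the cluster and release resources when the web application is undeployed.
             CacheFactory.shutdown();
         }
     }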
     Now when I start the server, I do not see the Coherence MBean anymore.
     I tried downloading the jmx-console app from the Oracle website, and it couldn't find the MBeans either, meaning that the Coherence MBeans were not available.
     Anyone know why this is happening, or am I doing something wrong?
    My TANGOSOL OPTIONS:
    -Dtangosol.coherence.cacheconfig=coherence-cache-config.xml -Dtangosol.coherence.clusteraddress=xxx.xxx.xxx.xxx -Dtangosol.coherence.clusterport=9001 -Dtangosol.coherence.ttl=2 -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.management.readonly=true -Dtangosol.coherence.log=log4j -Dtangosol.coherence.log.level=5 -Dtangosol.coherence.log.limit=4096 -Dtangosol.coherence.replicated.request.timeout=120000 -Dtangosol.coherence.optimistic.request.timeout=120000 -Dtangosol.coherence.hibernate.cacheconfig=coherence-cache-config.xml -Dtangosol.coherence.hibernate.lockattemptmillis=60000 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9003 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
    Thanks,
    Tim
    Edited by: user10967321 on Apr 20, 2010 1:56 PM

    After digging through the configuration, I found the following:
     Earlier we extracted all the jars (including coherence.jar and tangosol.jar) to a folder, and that folder was added to the classpath during server startup. We also had the jars in the ear. Since we had the ApplicationLifecycle implemented, we did not see any issues.
     The moment we removed the ApplicationLifecycle and changed it to a context listener and servlet, there were class conflicts, since the jars were both in the external folder on the classpath and inside the ear.
     So we removed the jars from the external folder and removed that folder from the classpath during server startup.
     Now Coherence starts fine with the application using DefaultCacheServer.start(), and I am able to put/get entries from the cache.
     The only problem is that the MBeans don't show up in JConsole or any JMX viewer.
     I am not sure if we have to have coherence.jar and tangosol.jar explicitly defined on the classpath?
    Any ideas?
    Edited by: Tim 2010 on Apr 21, 2010 10:43 AM

  • How to properly size Coherence? (3.6)

     My customer needs to set up a new Coherence environment and needs to understand the sizing of the systems in terms of CPU and memory.
    Is there any tool that can help?
    Thank you
    Chiara

    Hi Chiara,
     In order to get a better understanding of sizing in terms of CPU and memory, I would suggest your customer take a look at the Coherence Best Practices document available at http://coherence.oracle.com/display/COH35UG/Best+Practices
     Also, in order to achieve maximum performance, your customer should take a look at the Performance Tuning guide available at http://coherence.oracle.com/display/COH35UG/Performance+Tuning
    If your customer is going to use Coherence*Extend, then the Best Practices for Coherence Extend document available at http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appbestextend.htm would be useful too.
    Finally, to monitor the Coherence cluster, your customer can use JMX tools as explained at http://coherence.oracle.com/display/COH35UG/How+to+Manage+Coherence+Using+JMX
    Hope it helps.
    -Cris

  • Lightroom 4 missing RAW files on import

     After upgrading from Lightroom 3 I'm having a problem importing my raw files. Most of the time Lightroom 4 will only recognize some of my .CR2 files, and only the JPEGs for the rest of them. The weird thing is that it's not my card reader: when I access the card through Finder I can see all of my files. It's only in Lightroom that I can't access all the files.
     It's very random which CR2s will and will not be recognized at any given time. Sometimes it's only one day of a shoot, sometimes it's completely random photos spread across different days. I always shoot RAW+JPEG on a Canon 60D. For now I just keep plugging and unplugging my card until all the RAWs show up for an import, but this is annoying and time consuming. Anybody know what's going on?

    Suggest you try the 4.1RC (available here: http://labs.adobe.com/downloads/lightroom4-1.html) as there were some bugs in this area which were fixed in the RC.

  • Do these two tools use the same algorithm?

     Do Oracle In-Memory Database (TimesTen) and Oracle Coherence use the same algorithm?
     Are both of them in-memory databases?
    If not, what is the difference?
    Edited by: qck on Aug 11, 2009 8:17 PM

     No, they are different tools and they use different algorithms.
     Oracle TimesTen In-Memory Database is a memory-resident relational database. Deployed in the application tier as an embedded database, it operates on databases that fit entirely in physical memory using standard SQL interfaces. The TimesTen libraries are also embedded within applications, eliminating context switching and unnecessary network operations, further improving performance. TimesTen is designed with the knowledge that data resides in main memory, so it can take more direct routes to data, reducing the length of the code path and simplifying algorithms and structure.
    Oracle Coherence is a JCache-compliant in-memory distributed data grid solution for clustered applications and application servers. Organizations can predictably scale mission-critical applications by using Oracle Coherence to provide fast and reliable access to frequently used data. Oracle Coherence enables customers to push data closer to the application for faster access and greater resource utilization. By automatically and dynamically partitioning data in memory across multiple servers, Oracle Coherence enables continuous data availability and transactional integrity, even in the event of a server failure. Oracle Coherence is a shared infrastructure that combines data locality with local processing power to perform real-time data analysis, in-memory grid computations, and parallel transaction and event processing.
     In my opinion, Oracle Coherence is more powerful. It is mostly co-located across multiple servers' memories, and it is not only for databases (by integrating with Hibernate or TopLink) but also for other applications.
    Edited by: jetq on Aug 11, 2009 11:20 PM

  • Custom session validator

    Hi All!
     Is it possible to create a custom session validation mechanism in Coherence which checks session availability in a third-party system (Siebel in our case) and caches data assigned to this session from external Web Services?
    Thank you very much :)

     Coherence supports HTTP session caching via Coherence*Web.
     In terms of doing something similar for some other session context, such as Siebel, it may or may not be doable, depending upon the nature of the session object you are caching. You would probably need to use standard Coherence API mechanisms and application logic to manage the session state and relate it to other entities.
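     There is no Siebel-specific API in Coherence, so any concrete example is only a sketch. Assuming a hypothetical validator that keeps already-verified session data in a named cache and falls back to the external system on a miss (the cache name, SiebelSessionClient, and its lookupSession method are illustrative placeholders, not Coherence or Siebel APIs), the application logic could look roughly like this:
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     public class SessionValidator {
         private final NamedCache sessions = CacheFactory.getCache("validated-sessions");
         private final SiebelSessionClient siebel; // hypothetical client for the third-party check
         public SessionValidator(SiebelSessionClient siebel) {
             this.siebel = siebel;
         }
         // Returns the cached session data, validating against the external system on a cache miss.
         public Object validate(String sessionId) {
             Object data = sessions.get(sessionId);
             if (data == null) {
                 data = siebel.lookupSession(sessionId); // ask the third-party system (Siebel in this thread)
                 if (data != null) {
                     sessions.put(sessionId, data);      // cache the validated session data
                 }
             }
             return data;
         }
     }
     // Minimal placeholder so the sketch is self-contained.
     interface SiebelSessionClient {
         Object lookupSession(String sessionId);
     }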

  • Coherence-eclipselink.jar available in TopLink 11.1.1.0.1 but not in 11.1.1.1.0

    Hi,
     I am following this link for integrating Coherence with Oracle TopLink: http://www.oracle.com/technology/products/ias/toplink/doc/11110/grid/tlgug001.htm
     The jar files mentioned are available in TopLink 11.1.1.0.1 but not in the later version.
     Has the implementation or the roadmap for Coherence-TopLink integration changed?
    Thanks!
    Amor

     Just noticed: the toplink-grid.jar in release 11.1.1.1.0 is the Coherence integration with TopLink.

  • com.oracle.coherence.environment.extensible.ExtensibleEnvironment not available anymore in Coherence Incubator

    We are migrating from incubator 11.x to 12.2.0.
    I noticed that com.oracle.coherence.environment.extensible.ExtensibleEnvironment is not available anymore.
     We used it to load custom cache config filenames based on standard Coherence environment properties.
     Something like:
     If local storage enabled
          load cache-config-a.xml
     else
          load cache-config-b.xml
     Setting up the factory and getting a cache was like:
    ConfigurableCacheFactory fctry = new ExtensibleEnvironment(cacheConfigName, this.getClass().getClassLoader());
    NamedCache namedCache = fctry.ensureCache(myCacheName, this.getClass().getClassLoader());
     This allowed us to have auto-discovery of the cache config filename on the client side (thus minimizing JVM settings and simplifying dev environment setup). Note that we are not using GARs on either the client side or the Coherence server side.
     What would be the simplest alternative with Coherence 12.1.2 / Incubator 12.2 without the com.oracle.coherence.environment.extensible.ExtensibleEnvironment class?
    Tks
    Message was edited by: 962259

    Here is the alternative I found:
     // Let's say I want to programmatically load custom-named cache-config-a.xml and pof-config-a.xml
     XmlElement xmlConfig = XmlHelper.loadFileOrResource(
         "cache-config-a.xml", "Cache Configuration from: cache-config-a.xml", null);
     ExtensibleConfigurableCacheFactory.Dependencies coherenceDependencies =
         ExtensibleConfigurableCacheFactory.DependenciesHelper.newInstance(xmlConfig, null, "pof-config-a.xml");
     ConfigurableCacheFactory fctry = new ExtensibleConfigurableCacheFactory(coherenceDependencies);
     NamedCache namedCache = fctry.ensureCache(myCacheName, this.getClass().getClassLoader());
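     Building on that snippet, one way to reproduce the if/else from the question is to pick the configuration name from the standard local-storage system property before building the factory (that property appears elsewhere in this thread; the selection logic itself is only an illustration, not a recommended Coherence pattern):
     import com.tangosol.net.ConfigurableCacheFactory;
     import com.tangosol.net.ExtensibleConfigurableCacheFactory;
     import com.tangosol.run.xml.XmlElement;
     import com.tangosol.run.xml.XmlHelper;
     public class CacheFactoryBootstrap {
         public ConfigurableCacheFactory createFactory() {
             // Mirror the pseudocode from the question: storage-enabled members load one
             // configuration, storage-disabled clients load another.
             boolean storageEnabled = Boolean.parseBoolean(
                     System.getProperty("tangosol.coherence.distributed.localstorage", "true"));
             String cacheConfigName = storageEnabled ? "cache-config-a.xml" : "cache-config-b.xml";
             XmlElement xmlConfig = XmlHelper.loadFileOrResource(
                     cacheConfigName, "Cache Configuration from: " + cacheConfigName, null);
             ExtensibleConfigurableCacheFactory.Dependencies deps =
                     ExtensibleConfigurableCacheFactory.DependenciesHelper.newInstance(
                             xmlConfig, null, "pof-config-a.xml");
             return new ExtensibleConfigurableCacheFactory(deps);
         }
     }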

  • Can you cluster Coherence over data centers?

    We're currently running two separate Coherence clusters in different data centers. One is prod, the other DR.
    Would it be possible to cluster the nodes from each of these to create one cluster spanning both data centers? Then in a failover scenario the data would already be available.
     I know Coherence nodes heartbeat to one another to retain cluster membership and that there is a TTL setting to determine packet life. Would having nodes in different data centers result in heartbeats being missed or TTLs killing packets?
    Has anyone had any success with this?

     Coherence performance is related to the latency between nodes. Having one cluster spread over 2 data centers could harm performance (some timeouts might have to be changed to prevent nodes in data center A from claiming that a node in data center B is out of reach/possibly dead).
     When you lose network connectivity between the 2 data centers (note I'm not saying "if you lose connectivity"; it WILL happen), welcome to the "split brain" world, with each half of the grid believing the other is dead and claiming to be the "master". And thus, if you have data replicated on N nodes, the masters/backups are redistributed all over each data center, harming performance for a few minutes (the timing depending of course on many parameters...). And of course the data will no longer be synchronized between the 2 data centers. The quorum has to be thought about, and so on...
     I might be wrong, but I'd rather have 2 separate clusters. I believe 12.1 has new features to replicate data from the master grid to the DR one, but I have not been through all the new documentation.

  • OIM 11g High Availability Deployment

    Hi Experts,
     I'm deploying OIM 11g in a high availability schema, following the Oracle docs: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF. I have successfully installed and configured OIM & SOA in a WebLogic domain on 'OIMHOST1'. To propagate the configuration from 'OIMHOST1' to 'OIMHOST2', I packed (using pack.sh) the domain on 'OIMHOST1' and unpacked (using unpack.sh) it on 'OIMHOST2', then updated the NodeManager by executing setNMProps.sh and finally started the NodeManager. To test that everything is fine, and following the documentation, I'm trying to perform the following steps, but I'm not succeeding.
     I MUST SAY THAT I'M RUNNING ON A SINGLE STANDARD EDITION DB INSTANCE AND NOT RAC AS MENTIONED IN THE ORACLE DOCS. PLEASE CLARIFY IF RAC IS REQUIRED; FOR NOW I'M IN A DEVELOPMENT ENVIRONMENT, SO I THINK RAC IS NOT REQUIRED FOR NOW. PLEASE CLARIFY.
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>1&
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console.
     Here it's not possible to start the AdminServer on OIMHOST2. First of all, it looks like the boot.properties file under WLS_OIM_DOMAIN_HOME/servers/AdminServer/security is not valid; the first time I try to execute the startWebLogic.sh script, it asks for username/password. I updated boot.properties (vi boot.properties) and manually set a cleartext username and password; this time the startWebLogic.sh script passed this stage, but fails with:
    <Error> <util.install.help.BuildMasterHelpSet> <BEA-000000> <IOException ioe java.io.IOException: No such file or directory>
    <Error> <oracle.adf.share.config.ADFMDSConfig> <BEA-000000> <MDSConfigurationException encountered in parseADFConfigurationMDS-01330: unable to load MDS configuration document
    MDS-01329: unable to load element "persistence-config"
    MDS-01370: MetadataStore configuration for metadata-store-usage "writeable" is invalid.
    MDS-00503: The metadata path "/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain/sysman/mds" does not contain any valid directories.
     I have verified that this "mds" directory does not exist on OIMHOST2, as reported by the IOException, but it does exist on OIMHOST1. From here it's not possible for me to follow Oracle's documentation, so I tested this by starting the AdminServer on OIMHOST1 and starting the WLS_SOA2 and WLS_OIM2 managed servers from the OIMHOST1 AdminServer console. I have tested 2 ways:
     1. All managed servers on OIMHOST1 are shut down; in this case, the managed servers on OIMHOST2 work as expected.
     2. All managed servers on OIMHOST1 are RUNNING; in this case, I first started the SOA2 managed server and then started the OIM2 managed server. When it finishes the boot process, the following message appears in the server's output:
    <Warning> <org.quartz.impl.jdbcjobstore.JobStoreCMT> <BEA-000000> <This scheduler instance (servername.domainname1304128390936) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.>
    Start the WLS_SOA2 managed server using the WebLogic Administration Console.
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started.
    8.9.3.9 Validate the Oracle Identity Manager Instance on OIMHOST2
    Validate the Oracle Identity Manager Server instance on OIMHOST2 by bringing up the Oracle Identity Manager Console using a web browser.
    The URL for the Oracle Identity Manager Console is:
    http://oimvhn2.mycompany.com:14000/oim
    Log in using the xelsysadm password.
     Your help is highly appreciated.
    Regards
    Juan

    Hi Vaasu,
     I have succeeded in deploying OIM in HA; right now my customer and I are working on the installation of the web tier. Now I have a better understanding of HA concepts and the way WebLogic works (really nice, but a little tricky).
     All the magic of HA is in properly configuring the network interfaces on each Linux box (in our case). First of all, you need to create 2 new floating IPs on each Linux box (google "how to create a virtual IP in Linux" if you don't know how): clone and modify your 'eth0' network script to create the virtual IPs.
     Follow the procedure in the HA guide: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
    create DB schemas with RCU
    install weblogic
    install SOA
    patch SOA
    install IAM
    ---if you are working on a virtual machine is good idea to take a snapshot here---
     Create and configure the WebLogic domain (pay special attention when configuring the cluster); see step 13 of 8.9.3.2, Creating and Configuring the WebLogic Domain for OIM and SOA on OIMHOST1. Here you need to configure:
    For the oim_server1 entry, change the entry to the following values:
    Name: WLS_OIM1
     Listen Address: the IP that is configured on eth0:1 of Linux box1
    Listen Port: 14000
    For the soa_server1 entry, change the entry to the following values:
    Name: WLS_SOA1
     Listen Address: the IP configured on eth0:2 of Linux box1
    Listen Port: 8001
    For the second OIM Server, click Add and supply the following information:
    Name: WLS_OIM2
    Listen Address: the IP configured on eth0:1 of Linux box2
    Listen Port: 14000
    For the second SOA Server, click Add and supply the following information:
    Name: WLS_SOA2
    Listen Address: the IP configured on eth0:2 of Linux box2
    Listen Port: 8001
    Click Next.
     On Step 16, ensure you are using the UNIX tab to configure the machines; also ensure that for machine1 you use the IP configured on the eth0 interface of Linux box1, and the same for machine2.
     Please confirm you have performed 8.9.3.3.2, Update Node Manager on OIMHOST1.
     If everything is OK you should be able to start the AdminServer as described in the guide.
     Configure OIM: 8.9.3.4.2, Running the Oracle Identity Management Configuration Wizard. In my case I don't need LDAP sync, so I skipped this section. If you configure OIM properly, then you must perform 8.9.3.5, Post-Configuration Steps for the Managed Servers.
     Restart the AdminServer, then from the WebLogic console start OIM and SOA. If Node Manager is properly configured, SOA and OIM should run properly. Update the deployment mode and Coherence as described in the guide, and verify that OIM runs perfectly on Linux box1.
    Propagate OIM from Linux box1 to Linux box2 as described in the guide, using pack and unpack (you MUST use the same filesystem directory structure on both Linux boxes)
    Update and start NodeManager as described in the guide
     VERY IMPORTANT OBSERVATION
     The guide says:
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    JUAN OBSERVATION:
     IT IS NOT POSSIBLE TO START OR STOP THE ADMINSERVER ON HOST2, SINCE THE ADMIN SERVER WAS CONFIGURED TO LISTEN ON THE IP ADDRESS OF THE eth0 INTERFACE ON HOST1, SO IT'S NOT POSSIBLE TO RUN IT ON HOST2. I THINK AN ADDITIONAL PROCEDURE SHOULD BE FOLLOWED TO CONFIGURE THE ADMINSERVER IN HA IN AN ACTIVE-PASSIVE MODE.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>1& -----NOT APPLICABLE
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console. -----NOT APPLICABLE
    Start the WLS_SOA2 managed server using the WebLogic Administration Console. ----START SOA2 FROM THE CONSOLE RUNNING ON HOST1, IT DOESN'T MATTER
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started. ------ START OIM2 FROM THE CONSOLE RUNNING ON HOST1
     HERE YOU SHOULD BE ABLE TO LOG IN TO THE OIM2 SERVER AS DESCRIBED IN THE GUIDE; YOU DON'T NEED TO EXECUTE THE config.sh SCRIPT. THIS SHOULD WORK AS DESCRIBED.
     Server migration should work straightforwardly if you have configured the floating IPs as described. I have not configured persistence yet, since my customer does not have the skills to share storage.
    I hope this helps, and feel free to comment or complement.
     By the way, do you know how to set up a valid SSL certificate on Windows 2003 Server? I need it to test an Exchange 2007 integration I'm trying to do.
    Regards
    Juan

  • Error while starting Resin 3.0.14 after installing Coherence evaluation

    I am getting the following error...
     java.lang.SecurityException: The necessary license to perform the operation is not available; a license for "com.tangosol.run.xml.SimpleParser@e3570c" is required.
     As far as I can see, the license file is in tangosol.jar, which is in the app server's lib directory.
     Do I need to place the license file anywhere else?
     I have the Jive Forums application installed and I am trying to cluster it with two instances of Resin.
     Would there be any conflict with the license that comes with Jive, which might be causing the problem?
    Please help

    Hi Jon,
     I am trying to set up a standard clustered installation of Jive.
     I have been able to do HTTP clustering of two Resin servers with the Jive application, but I get the following error while accessing Jive.
    type: com.caucho.jsp.JspLineParseException
    com.caucho.jsp.JspLineParseException: /accountbox.jsp:1: extends `com.tangosol.coherence.servlet.api22.JspServlet' conflicts with previous value of extends `com.tangosol.coherence.servlet.api22.JspServlet'. Check the .jsp and any included .jsp files for conflicts.
         at com.caucho.jsp.java.JspNode.error(JspNode.java:1249)
         at com.caucho.jsp.java.JspNode.error(JspNode.java:1240)
         at com.caucho.jsp.java.JspDirectivePage.addAttribute(JspDirectivePage.java)
         at com.caucho.jsp.java.JavaJspBuilder.attribute(JavaJspBuilder.java:338)
         at com.caucho.jsp.JspParser.parseDirective(
    Thanks in anticipation

  • EM for Coherence - Cluster upgrade

    Hi Gurus,
     I noticed that there is a "Coherence Node Provisioning" process in EM12c, and it says "You can also update selected nodes by copying configuration files and restarting the nodes." Will EM internally check the service HA status before updating (stopping/starting) each node? "NODE-SAFE" should be the minimum HA status criterion to meet to ensure there is no data loss.
    Thanks in advance
    Hysun

     Thanks for your hints, but that didn't work either. Maybe it's because the metaset uses the disks' DID names, and those are not available when the node is not booted as part of the cluster.
     What I hope will work is this:
     - deactivate the zone's resource group
     - make a backup of the non-global zone's root
     - restore the backup to a temporary filesystem on the node's boot disk
     - mount the temporary filesystem as the zone's root (via vfstab)
     - upgrade this node including the zone
     - reboot as part of the cluster (the zone should not start because of autoboot=false and the RG being deactivated)
     - acquire access to the zone's shared disk resource
     - copy the content of the zone's root back to its original place
     - activate the zone's resource group
     - upgrade the other node
     - and of course backups, backups and even more backups at the right moments :-)
    I will test this scenario as soon as I can find the time for it. If I am successful I will post again.
    Regards, Paul

  • Coherence not working on AIX

    Hi,
     I installed Coherence, even the new 3.7.1.6 full distribution, but when it starts it raises an exception about a socket:
    bash-3.2# ./coherence.sh
    ./coherence.sh[11]: pushd: not found.
    ./coherence.sh[15]: popd: not found.
    ** Starting storage disabled console **
    java version "1.6.0"
    Java(TM) SE Runtime Environment (build pap6460sr10fp1-20120321_01(SR10 FP1))
    IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr10fp1-20120202_101568 (JIT enabled, AOT enabled)
    J9VM - 20120202_101568
    JIT - r9_20111107_21307ifx1
    GC - 20120202_AA)
    JCL - 20120320_01
    2013-05-02 15:39:29.755/1.990 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/u02/app/oracle/coherence_3.7.1/coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2013-05-02 15:39:29.948/2.182 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "jar:file:/u02/app/oracle/coherence_3.7.1/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2013-05-02 15:39:29.950/2.184 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
    2013-05-02 15:39:29.974/2.208 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2013-05-02 15:39:31.987/4.221 Oracle Coherence GE 3.7.1.0 <Warning> (thread=main, member=n/a): PreferredUnicastUdpSocket failed to set receive buffer size to 1444 packets (1.99MB); actual size is 42%, 609 packets (863KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.
    2013-05-02 15:39:31.987/4.221 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /192.168.1.74:8088 using SystemSocketProvider
    2013-05-02 15:39:31.991/4.225 Oracle Coherence GE 3.7.1.0 <Error> (thread=main, member=n/a): Error while starting cluster: (Wrapped) java.net.SocketException: The socket name is not available on this system.
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:269)
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:40)
    at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
    at com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
    at com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:10)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
    at com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:25)
    at com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:600)
    at com.tangosol.net.CacheFactory.main(CacheFactory.java:827)
    Caused by: java.net.SocketException: The socket name is not available on this system.
    at java.net.PlainDatagramSocketImpl.socketSetOption(Native Method)
    at java.net.PlainDatagramSocketImpl.setOption(PlainDatagramSocketImpl.java:408)
    at java.net.MulticastSocket.setInterface(MulticastSocket.java:424)
    at com.tangosol.coherence.component.net.socket.udpSocket.MulticastUdpSocket.instantiateDatagramSocket(MulticastUdpSocket.CDB:33)
    at com.tangosol.coherence.component.net.socket.UdpSocket.open(UdpSocket.CDB:12)
    at com.tangosol.coherence.component.net.Cluster$SocketManager.bindSockets(Cluster.CDB:129)
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:36)
    ... 13 more
    Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:600)
    at com.tangosol.net.CacheFactory.main(CacheFactory.java:827)
    Caused by: (Wrapped) java.net.SocketException: The socket name is not available on this system.
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:269)
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:40)
    at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
    at com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
    at com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:10)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
    at com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:25)
    at com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
    ... 5 more
    Caused by: java.net.SocketException: The socket name is not available on this system.
    at java.net.PlainDatagramSocketImpl.socketSetOption(Native Method)
    at java.net.PlainDatagramSocketImpl.setOption(PlainDatagramSocketImpl.java:408)
    at java.net.MulticastSocket.setInterface(MulticastSocket.java:424)
    at com.tangosol.coherence.component.net.socket.udpSocket.MulticastUdpSocket.instantiateDatagramSocket(MulticastUdpSocket.CDB:33)
    at com.tangosol.coherence.component.net.socket.UdpSocket.open(UdpSocket.CDB:12)
    at com.tangosol.coherence.component.net.Cluster$SocketManager.bindSockets(Cluster.CDB:129)
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:36)
    ... 13 more
    Any help?
    K.

    Hi,
     From your original post it looks like you are using the coherence.sh script file, so to set JVM properties you need to edit this script. Specifically, towards the bottom of this file you will see a line like this:
     JAVA_OPTS="-Xms$MEMORY -Xmx$MEMORY -Dtangosol.coherence.distributed.localstorage=$STORAGE_ENABLED $JMXPROPERTIES"
     ...you need to edit this line to read:
     JAVA_OPTS="-Djava.net.preferIPv4Stack=true -Xms$MEMORY -Xmx$MEMORY -Dtangosol.coherence.distributed.localstorage=$STORAGE_ENABLED $JMXPROPERTIES"
     JK

  • Coherence SimpleParser class is not thread safe?

     Coherence has a very convenient XML utility class, which we use a lot within our Coherence-related applications.
     But we encountered a mysterious lock-up (maybe deadlock?) issue and identified that it might be that the com.tangosol.run.xml.SimpleParser class is not thread safe.
     We are using Tomcat 6 and Spring 2.0.6.
     One of the webapps has 2 beans which implement the InitializingBean interface.
     Bean A's afterPropertiesSet() method uses the com.tangosol.run.xml.XmlHelper.loadXml method to parse an XML file.
     Bean B's afterPropertiesSet() method acts as a TCP Extend client and retrieves some data from a Coherence cluster, and I believe Coherence will also use its XML utility class when parsing the configuration files.
     We encounter a Tomcat lock-up (it never finishes the webapp deployment process during startup) randomly, like 1 out of 2 or 3 tries.
     Using JConsole and connecting to Tomcat, we can see that the main thread is stuck in the SimpleParser class. Here is the thread dump.
    Name: main
    State: RUNNABLE
    Total blocked: 156 Total waited: 0
    Stack trace:
    com.tangosol.run.xml.SimpleParser.instantiateDocument(SimpleParser.java:150)
    com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:115)
    - locked com.tangosol.run.xml.SimpleParser@f10c77
    com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:71)
    com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:84)
    com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:109)
    org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1201)
    org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1171)
    org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:425)
    org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:251)
    org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:156)
    - locked java.util.concurrent.ConcurrentHashMap@dee55c
    org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:248)
    org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:160)
    org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:287)
    org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352)
    - locked java.lang.Object@d21555
    org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:244)
    org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:187)
    org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:49)
    org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3830)
    org.apache.catalina.core.StandardContext.start(StandardContext.java:4337)
    - locked org.apache.catalina.core.StandardContext@1c64ed8
    org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
    - locked java.util.HashMap@76a6d9
    org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
    org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
    org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:825)
    org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:714)
    org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
    org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
    org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
    org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
    org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
    - locked org.apache.catalina.core.StandardHost@1c42c4b
    org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
    - locked org.apache.catalina.core.StandardHost@1c42c4b
    org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
    - locked org.apache.catalina.core.StandardEngine@37fd24
    org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
    org.apache.catalina.core.StandardService.start(StandardService.java:516)
    - locked org.apache.catalina.core.StandardEngine@37fd24
    org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    - locked [Lorg.apache.catalina.Service;@1cc55fb
    org.apache.catalina.startup.Catalina.start(Catalina.java:566)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    java.lang.reflect.Method.invoke(Unknown Source)
    org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
    org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
     After we added the depends-on attribute to force bean B to wait for bean A to finish initialization, we no longer encountered the lock-up during Tomcat startup.
     We suspect that maybe the SimpleParser class is not thread safe and can cause a potential deadlock issue.
    Edited by: user639604 on Jun 22, 2009 10:36 AM

     While it doesn't show up as a deadlock, I believe it probably is one, as evidenced by these two threads:
    "Timer-0" prio=10 tid=0xcb9a2800 nid=0x454b in Object.wait() [0xcb6e0000..0xcb6e10a0]
       java.lang.Thread.State: RUNNABLE
         at com.tangosol.run.xml.SimpleParser.instantiateDocument(SimpleParser.java:150)
        at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:115)
         - locked <0xf44e52f0> (a com.tangosol.run.xml.SimpleParser)
         at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:71)
         at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:99)
         at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:129)
         at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:95)
         at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:72)
         at com.tangosol.util.ExternalizableHelper.<clinit>(ExternalizableHelper.java:4466)
         at com.evidentsoft.opcache.coherence.OPCacheCoherenceStorage.retrieve(OPCacheCoherenceStorage.java:341)
         at com.evidentsoft.opcache.coherence.OPCacheCoherenceStorage.retrieve(OPCacheCoherenceStorage.java:420)
         at com.evidentsoft.opcache.OPCacheManager.find(OPCacheManager.java:68)
         at com.evidentsoft.logserver.coherence.ClusterDetector.detectNewClusters(ClusterDetector.java:97)
         at com.evidentsoft.logserver.coherence.ClusterDetector.access$000(ClusterDetector.java:19)
         at com.evidentsoft.logserver.coherence.ClusterDetector$1.run(ClusterDetector.java:67)
         at java.util.TimerThread.mainLoop(Unknown Source)
         at java.util.TimerThread.run(Unknown Source)
    "main" prio=10 tid=0x08059000 nid=0x4539 in Object.wait() [0xf7fd0000..0xf7fd11f8]
       java.lang.Thread.State: RUNNABLE
         at com.tangosol.run.xml.SimpleParser.instantiateDocument(SimpleParser.java:150)
         at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:115)
         - locked <0xf44ecd90> (a com.tangosol.run.xml.SimpleParser)
         at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:71)
         at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:84)
         at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:109)
         at com.evidentsoft.coherence.util.ClusterConfigurator.generateConfigFile(ClusterConfigurator.java:319)
         at com.evidentsoft.coherence.util.ClusterConfiguratorProxy.afterPropertiesSet(ClusterConfiguratorProxy.java:51)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1201)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1171)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:425)
         at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:251)
         at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:156)
         - locked <0xd65efb88> (a java.util.concurrent.ConcurrentHashMap)
         at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:248)
         at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:160)
         at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:287)
         at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352)
         - locked <0xd65efc28> (a java.lang.Object)
         at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:244)
         at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:187)
         at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:49)
         at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3830)
         at org.apache.catalina.core.StandardContext.start(StandardContext.java:4337)
         - locked <0xd6092f60> (a org.apache.catalina.core.StandardContext)
         at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
         - locked <0xd54ff278> (a java.util.HashMap)
         at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
         at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
         at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:825)
         at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:714)
         at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
         at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
         at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
         at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
         at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
         - locked <0xd54ff1e8> (a org.apache.catalina.core.StandardHost)
         at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
         - locked <0xd54ff1e8> (a org.apache.catalina.core.StandardHost)
         at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
         - locked <0xd4fa60b8> (a org.apache.catalina.core.StandardEngine)
         at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
         at org.apache.catalina.core.StandardService.start(StandardService.java:516)
         - locked <0xd4fa60b8> (a org.apache.catalina.core.StandardEngine)
         at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
         - locked <0xd4f17ea0> (a [Lorg.apache.catalina.Service;)
         at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
          at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
     The reason it isn't showing up as a deadlock in the thread dump is that the ExternalizableHelper static initializer isn't completing, so the other thread (blocking it) is waiting indefinitely on that class to become available.
    Peace,
    Cameron Purdy | Oracle Coherence
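     As an aside, if the class-initialization race described above is indeed the culprit, one hedged workaround (an illustration only, not an official fix) is to force ExternalizableHelper to finish initializing on a single thread during startup, before any other thread touches the Coherence XML utilities:
     // Illustrative workaround only: force ExternalizableHelper's static initializer to run
     // (and complete) on the startup thread before other threads use Coherence's XML helpers.
     public class CoherenceWarmup {
         public static void preloadExternalizableHelper() {
             try {
                 Class.forName("com.tangosol.util.ExternalizableHelper");
             } catch (ClassNotFoundException e) {
                 throw new IllegalStateException("Coherence classes not on the classpath", e);
             }
         }
     }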
