Clustering in author env

In my company we are using 2 publish servers for internet and 2 publish servers for intranet, plus one author server.
We have around 10 intranet sites and 10 internet sites. We are planning to migrate 80 more sites to internet and 100 to intranet.
Content size is approx 20GB for internet and intranet; after migration it will increase to 3 times that.
The number of authors will be 500+; currently it is 100.
I am thinking of upgrading the CQ publish instances to a 64-bit OS, using SAN disks, and increasing memory and CPU. The business also wants to make the author env resilient, so I am planning to use shared data store clustering in the author env and multiple publish servers (no clustering).
My question is: will I be able to use replication and reverse replication in this case? Please let me know your view on this approach. Do you suggest using shared nothing clustering in the author env?
The high-level architecture of my solution is as follows (except for the author cluster with shared data store):
http://dev.day.com/content/docs/en/crx/current/administering/cluster/_jcr_content/par/image_0.img.png/1350922018297.png
Regards,
Manish

Whether to use a shared data store or shared nothing really depends on why you are clustering, and on whether or not your shared data storage has its own built-in redundancy.
If you are implementing the cluster to ensure high availability and disaster recovery, then you need to decide whether or not the shared data store is a single point of failure. Usually the shared data store is mounted on something like a SAN or some other remote-mounted storage system. If this storage system is itself highly available, then you can use the shared data store to reduce total disk usage and still have a highly available environment.
If, on the other hand, the data storage mechanism isn't highly available, then you would want to consider the shared nothing cluster in order to eliminate a single point of failure. Keep in mind that if you just mount both instances' data stores on the same SAN, then you still have a single point of failure and you may as well go with a shared data store.
No matter which clustering option you select, you will still be able to use replication and reverse replication. The replication services implement a cluster-aware interface that ensures they only run on the master node of the cluster.

Similar Messages

  • Getting SOAP Exception in clustered WL8.1 env...need help

    I have an application that uses jakarta Axis v1.4 to access a WebService as a client from an EJB. It works fine on WLS v7 and on v8.1.6 when running on my desktop. However, when I deploy the app to the test servers (running in a managed cluster) I get the following stack trace:
    AxisFault
    faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException
    faultSubcode:
    faultString: javax.xml.soap.SOAPException: java.lang.ClassCastException: javax.xml.soap.MimeHeaders$MatchingIterator
    faultActor:
    faultNode:
    faultDetail:
    {http://xml.apache.org/axis/}stackTrace:javax.xml.soap.SOAPException: java.lang.ClassCastException: javax.xml.soap.MimeHeaders$MatchingIterator
    at org.apache.axis.message.MessageElement.addTextNode(MessageElement.java:1396)
    at org.apache.axis.message.SOAPHandler.addTextNode(SOAPHandler.java:148)
    at org.apache.axis.message.SOAPHandler.endElement(SOAPHandler.java:112)
    at org.apache.axis.encoding.DeserializationContext.endElement(DeserializationContext.java:1087)
    at weblogic.apache.xerces.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:585)
    at weblogic.apache.xerces.impl.XMLNamespaceBinder.handleEndElement(XMLNamespaceBinder.java:898)
    at weblogic.apache.xerces.impl.XMLNamespaceBinder.endElement(XMLNamespaceBinder.java:644)
    at weblogic.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1008)
    at weblogic.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1469)
    at weblogic.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:329)
    at weblogic.apache.xerces.parsers.DTDConfiguration.parse(DTDConfiguration.java:525)
    at weblogic.apache.xerces.parsers.DTDConfiguration.parse(DTDConfiguration.java:581)
    at weblogic.apache.xerces.parsers.XMLParser.parse(XMLParser.java:152)
    at weblogic.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1175)
    at weblogic.xml.jaxp.WebLogicXMLReader.parse(WebLogicXMLReader.java:135)
    at weblogic.xml.jaxp.RegistryXMLReader.parse(RegistryXMLReader.java:152)
    at javax.xml.parsers.SAXParser.parse(SAXParser.java:345)
    at org.apache.axis.encoding.DeserializationContext.parse(DeserializationContext.java:227)
    at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:696)
    at org.apache.axis.Message.getSOAPEnvelope(Message.java:435)
    at org.apache.axis.handlers.soap.MustUnderstandChecker.invoke(MustUnderstandChecker.java:62)
    at org.apache.axis.client.AxisClient.invoke(AxisClient.java:206)
    at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
    at org.apache.axis.client.Call.invoke(Call.java:2767)
    at org.apache.axis.client.Call.invoke(Call.java:2443)
    at org.apache.axis.client.Call.invoke(Call.java:2366)
    at org.apache.axis.client.Call.invoke
    Could anyone shed any light as to what may be causing this error to occur?

    I am having the same problem. Have you resolved this??
    Thanks,
    David

  • Help in setting up a clustered scalable JMS env

    Hello,
    I have two separate machines.
    I have created a cluster spanning the two machines, with 2 managed servers.
    The cluster is working well.
    I am experimenting with JMS and I want to utilize the two machines for scalability reasons.
    I have created one JMS server and targeted it to the managed server on the first machine, with JDBC persistence.
    I have created one connection factory and targeted it to the cluster.
    I have created one Queue and targeted it to the JMS server.
    If I send a message to the Queue via the first machine, will I be able to read it from a client running on the second machine?
    Another confusion I have is the meaning of targeting the connection factory to a JMS server, a managed server, or a cluster; I do not get how to decide which one to use. I read many WebLogic documents, which tend to describe steps, but not much about the concepts, architecture, etc. Your help is appreciated.
    Ammar

    With the above setup, you should be able to send a message to the Queue via the first machine and read it from a client running on the second machine, because destinations are bound into the replicated cluster-wide JNDI tree and are therefore visible cluster-wide. Note, however, that JMS destinations (Queue/Topic) are "pinned" services: after they are registered, they are available only from the host with which they are registered, and they do not provide transparent failover or load balancing. If the individual server that hosts a pinned service fails, the client cannot fail over to another server.
    You can target connection factories to one or more JMS server, to one or more WebLogic Server instances, or to a cluster.
    JMS server(s) — You can target connection factories to one or more JMS servers along with destinations. You can also group a connection factory with standalone queues or topics in a subdeployment targeted to a specific JMS server, which guarantees that all these resources are co-located to avoid extra network traffic. Another advantage of such a configuration would be if the targeted JMS server needs to be migrated to another WebLogic server instance, then the connection factory and all its connections will also migrate along with the JMS server’s destinations. However, when standalone queues or topics are members of a subdeployment, a connection factory can only be targeted to the same JMS server.
    Weblogic server instance(s) — To establish transparent access to JMS destinations from any server in a domain, you can target a connection factory to multiple WebLogic Server instances simultaneously.
    Cluster — To establish cluster-wide, transparent access to JMS destinations from any server in a cluster, you can target a connection factory to all server instances in the cluster, or even to specific servers within the cluster.
    As you mentioned, that you are targeting scalability, below info might be helpful.
    1. Load balancing of destinations across multiple servers in the cluster. You can establish load balancing of destinations across multiple servers in the cluster by configuring multiple JMS servers and targeting them to the defined WebLogic Servers. Each JMS server is deployed on exactly one WebLogic Server instance and handles requests for a set of destinations.
    For details refer: http://download.oracle.com/docs/cd/E13222_01/wls/docs90/jms_admin/advance_config.html#1067265
    2. Distribution of application load across multiple JMS servers through connection factories, thus reducing the load on any single JMS server and enabling session concentration by routing connections to specific servers.
    -Akshay
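    As a concrete illustration of the subdeployment targeting described above, a WebLogic JMS module descriptor can group a connection factory with a queue so that both are co-located on the same JMS server. This is only a sketch: the resource names, JNDI names, and the OrderSub subdeployment are hypothetical, and the exact schema/namespace varies by WebLogic version.

    ```xml
    <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
      <!-- Connection factory and queue share one subdeployment, so they are
           co-located on the same JMS server and migrate together. -->
      <connection-factory name="OrderCF">
        <jndi-name>jms/OrderCF</jndi-name>
        <sub-deployment-name>OrderSub</sub-deployment-name>
      </connection-factory>
      <queue name="OrderQueue">
        <jndi-name>jms/OrderQueue</jndi-name>
        <sub-deployment-name>OrderSub</sub-deployment-name>
      </queue>
    </weblogic-jms>
    ```

    The OrderSub subdeployment would then be targeted at exactly one JMS server in the console; per the note above, the connection factory in such a subdeployment can only be targeted to that same JMS server.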

  • Configuring JMS over a Weblogic Cluster

    I have a setup of 2 Weblogic managed servers configured as part of a cluster.
    1) The requirement is to send requests to another system and receive responses using JMS as interface.
    2) The request could originate from either of the Managed Servers. So the corresponding response should reach the managed server which originated the request.
    3) The external system (to which requests are sent) should not be aware of how many managed servers are in the cluster (not a must have requirement)
    How should JMS be configured to meet these requirements?

    Refer:- Re: help in setting up a clustered scalable JMS env
    -Akshay

  • EJB and Tools

    Hi there,
    3 questions here:
    First, can CMP call a stored procedure from a database, or only BMP, or both? Please show me some code for whichever.
    Secondly, can a session bean call a stored procedure? Any code?
    Last, are there any tools or IDEs that have source code control and work with WebLogic servers?
    Thanks a lot
    Neo

    Some further explanation please...
    1. What are these methods doing inside my EJBean?
    - ejbPassivate(): As the name suggests, this method helps in passivation, i.e. persistence of EJB state.
    The container invokes this method on an instance when the container decides to disassociate the instance from an entity object identity, and to put the instance back into the pool of available instances. The ejbPassivate() method gives the instance the chance to release any resources that should not be held while the instance is in the pool. (These resources typically had been allocated during the ejbActivate() method.)
    For session beans in advanced cases, a session object's conversational state may contain open resources, such as open sockets and open database cursors. A container cannot retain such open resources when a session bean instance is passivated. A developer of such a session bean must close and open the resources in the ejbPassivate and ejbActivate notifications.
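    The close-on-passivate / reopen-on-activate pattern just described can be sketched as a plain-Java skeleton. This is only an illustration: the class is a POJO standing in for a session bean, the container would normally invoke these callbacks, and the stream here stands in for an open socket or database cursor.

    ```java
    import java.io.ByteArrayOutputStream;

    public class MailerBean {
        // Non-serializable conversational resource (stands in for an open
        // socket or database cursor that cannot survive passivation).
        private transient ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        // Called by the container before the instance is swapped out.
        public void ejbPassivate() {
            buffer = null; // release the open resource
        }

        // Called by the container after the instance is swapped back in.
        public void ejbActivate() {
            buffer = new ByteArrayOutputStream(); // reacquire the resource
        }

        public boolean isResourceOpen() {
            return buffer != null;
        }
    }
    ```

    A real bean would implement javax.ejb.SessionBean; the point is only that anything non-serializable is released in ejbPassivate() and rebuilt in ejbActivate().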
    - setSessionContext(SessionContext ctx): since my code in the Remote interface does not call this method, which also passes a parameter ctx?
    setSessionContext is invoked by the container, and the ctx it requires is supplied by the container, so you do not need to pass the ctx parameter yourself.
    The bean's container calls the setSessionContext method to associate a session bean instance with its context maintained by the container. Typically, a session bean instance retains its session context as part of its conversational state.
    The context supplied by the container helps you to identify the properties in the deployment descriptor, i.e. your next question.
    - Where is this referred to in the EJBean -> cntx.lookup("java:comp/env/jdbc/mydatabase"); where can I see the reference?
    You can use these statements anywhere you need to access the database. Suppose you need to fire a procedure after you have created a record; you can write these statements in the ejbPostCreate method.
    2. Inside my ejb-jar.xml file, which is a deployment descriptor, what do these tags mean?
    - <env-entry>, <env-entry-name>, <env-entry-type>, <env-entry-value>
    env-entry stands for environment entries. These are like environment variables which you need to access at runtime.
    Each env-entry element describes a single environment entry. The env-entry element consists of an optional description of the environment entry, the environment entry name relative to the java:comp/env context, the expected Java type of the environment entry value (i.e., the type of the object returned from the JNDI lookup method), and an optional environment entry value.
    An environment entry is scoped to the enterprise bean whose declaration contains the env-entry element. This means that the environment entry is inaccessible from other enterprise beans at runtime, and that other enterprise beans may define env-entry elements with the same env-entry-name without causing a name conflict.
    The environment entry values may be one of the following Java types: String, Character, Integer, Boolean, Double, Byte, Short, Long, and Float.
    If the Bean Provider provides a value for an environment entry using the env-entry-value element, the value can be changed later by the Application Assembler or Deployer. The value must be a string that is valid for the constructor of the specified type that takes a single String parameter, or, for java.lang.Character, a single character.
    For example:
    <env-entry>
    <description>Return the developer of the bean</description>
    <env-entry-name>author</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>Neo</env-entry-value>
    </env-entry>
    Here description tag is optional.
    <env-entry-name> specifies the property name, which in our case is author.
    <env-entry-type> specifies the data type of the property, i.e. String.
    <env-entry-value> is the value for the property, i.e. Neo.
    You can imagine this as a Java statement like
    String author = "Neo"; // Return the developer of the bean.
    TQ
    Neo
    Hope this explanation was helpful to you
    abhishek
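    The type-conversion rule quoted above (the env-entry-value string is fed to the declared type's single-String constructor, with java.lang.Character as the special case) can be sketched with plain reflection. This only mirrors the spec rule for illustration; a real container's mechanics differ.

    ```java
    import java.lang.reflect.Constructor;

    public class EnvEntryDemo {
        // Materialize an <env-entry-value> string as the declared <env-entry-type>.
        static Object convert(String type, String value) {
            try {
                if ("java.lang.Character".equals(type)) {
                    return Character.valueOf(value.charAt(0)); // single character per spec
                }
                Constructor<?> c = Class.forName(type).getConstructor(String.class);
                return c.newInstance(value); // e.g. new Integer("42")
            } catch (Exception e) {
                throw new IllegalArgumentException("bad env-entry: " + type + "=" + value, e);
            }
        }

        public static void main(String[] args) {
            System.out.println(convert("java.lang.String", "Neo"));  // the example above
            System.out.println(convert("java.lang.Integer", "42"));
        }
    }
    ```

    With the <env-entry> example above, a JNDI lookup of java:comp/env/author would return the String "Neo" produced by exactly this kind of conversion.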

  • HWF UI work list not loading in BPM worklist page (on clustered env)

    hello - Please reply directly as I am not on this alias.
    My SOA project (BPEL+HWF) is successfully deployed on the clustered env.
    My BPEL process runs fine. But my HWF_UI taskflow fails to load on the browser (IE and Firefox, same error)
    I can see the composite & UI ear(enterprise app) on EMConsole & AdminConsole, and all configs/ settings looks ok.
    During execution, the work item pops up on the BPM worklist, but when I click on the work-item, the HWF_UI component is not loading....
    Not Found
    The requested URL /workflow/DOO_Simulation_ProjectUI/faces/adf.task-flow was not found.
    How do I overcome this or get some meaningful error info? The log files do not tell much.
    Caused By: oracle.adf.controller.ControllerException: ADFC-12002: The ADF Controller is unable to pop the top-level ADF unbounded task flow from the page flow stack.
    at oracle.adfinternal.controller.state.PageFlowStack.pop(PageFlowStack.java:184)
    Another question:
    How do I test if my HWF task flow component (UI) is installed & working / executing properly? any error logs/ error level settings?
    Edited by: ssondur on Apr 5, 2013 4:17 PM

    It is deploying correctly, and the worklist app displays my instance. But when I click on my worklist item, the UI does not load or display. Below are the browser error and logfile error messages.
    -----Browser error-----
    Not Found
    The requested URL /workflow/DOO_Simulation_ProjectUI/faces/adf.task-flow was not found.
    ---Log file message---------
    Caused By: oracle.adf.controller.ControllerException: ADFC-12002: The ADF Controller is unable to pop the top-level ADF unbounded task flow from the page flow stack.
    at oracle.adfinternal.controller.state.PageFlowStack.pop(PageFlowStack.java:187)
    No customization to env or app. This is a user-defined (but standard) BPEL process deployed. The same app works fine on my local laptop & SOA env.

  • OpenScript playback doesn't work in the clustered env

    We have a clustered WebCenter env. Here we record our flow in OpenScript. If we play it back as the recorded user, it works fine.
    However, if we randomize the users, it fails with the following two errors:
    1. Failed to solve variable web.input.wccontextURL_1 using path .//INPUT[@name='wc.contextURL']/@value
    2. Failed to solve variable afrloop3
    ** This issue is not seen if we go through the browser multiple times with different users.
    ** However, if we run this in a non-clustered environment, it works fine.
    ** Also, if we bring down one of the managed servers, the script works fine.
    Can you please help?

    I don't think play() is the problem.  That error occurs when an object the code is trying to target does not exist as far as the code is concerned.  If that is the only code you have in the file, then you have a problem with "b2" not being seen by the code in frame 1.
    Did you assign "b2" as the instance name via the Properties panel, or is that the library name of the button?
    Are both the code and the button in frame 1 of the same timeline?  The way you said it can give the impression the code is in frame 1 inside the button.

  • Database Poller issues in Clustered env

    Hi,
    I am running Oracle BPEL 10.1.3.4 on top of Weblogic version 9.x in a clustered env. I created a simple database poller BPEL process (for target table in Oracle database) with below parameters:
    Max Raise Size: 1
    Max Transaction Size: 2
    Polling Interval:30
    Distributed Polling is enabled
    All other BPEL engine and domain level parameters have default values.
    In my clustered env, two instances of the database poller BPEL process are running on different nodes. Now I populated my table with 1000 records in one go. According to the parameters I expect:
    - In each polling interval, a max of 2 records will be processed by the database poller BPEL process running on a single node, and there will be 2 instances of this BPEL process since max raise size is 1. I expect the same for the database poller BPEL process running on the other node. So overall 4 records should be processed in each polling interval and 4 instances of BPEL processes should be visible in the BPEL console. But in actuality it is always 2.
    Am I missing something here? Won't the load balancer distribute records to the BPEL processes on both nodes equally?
    Also, when I raise max transaction size to a higher value, e.g. 20, then after processing nearly 1/3 of the records the BPEL process stops picking up any further records. Is there any known issue where the adapter stops picking up further database records if the transaction size limit is higher?
    thanks
    Ankit
    Edited by: AnkitAggarwal on 22-Feb-2010 03:36
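    The expectation in the question reduces to simple arithmetic. This sketch only encodes the expected numbers; whether the load actually spreads across both nodes is exactly what is in question.

    ```java
    public class PollerMath {
        // BPEL instances raised per node per poll = transaction size / raise size
        static int instancesPerNode(int maxTransactionSize, int maxRaiseSize) {
            return maxTransactionSize / maxRaiseSize;
        }

        // Records fetched per polling interval across the cluster
        static int recordsPerInterval(int maxTransactionSize, int nodes) {
            return maxTransactionSize * nodes;
        }

        public static void main(String[] args) {
            // Max Transaction Size = 2, Max Raise Size = 1, two cluster nodes
            System.out.println(recordsPerInterval(2, 2) + " records per interval");
            System.out.println(instancesPerNode(2, 1) * 2 + " instances per interval");
        }
    }
    ```

    That is, 4 records and 4 instances per interval are expected if both nodes poll; the observed 2 suggests only one node is winning the distributed-polling lock each interval.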

    Hi Ankit,
    I have an Oracle BPEL 11.1.1.2.0 env clustered over 2 nodes.
    I am facing a similar issue. In my scenario I have a BPEL process (with DB poller) deployed on the cluster, and I am connected to the DB via a multi data source (MDS-1) with only one datasource (DS-1) configured in it.
    So whenever I update the table which is being polled, in some cases I have TWO instances and in some cases ONE instance running. My requirement is to have only one instance running every time the DB poller is initiated.
    Kindly help me out.
    Thank You
    Best Regards
    Prasanth

  • Best Practices: Clustered Author Environment

    Hello,
    We are setting our CQ 5.5 infrastructure in 3 datacenters with ultimately an Authoring instance in each (total of three).  Our plan was to Cluster the three machines using “Share Nothing” and each would replicate to the Publish instances in all data centers.  To eliminate confusion within our organization, I’d like to create a single URL resource for our Authors so they wouldn’t have to remember to log into 3 separate machines?
    So instead of providing cqd1.acme.com, cqd2.acme.com, cqd3.acme.com, I would distribute something like “cq5.acme.com” which would resolve to one of the three author instances.  While that’s certainly possible by putting a web server/load balancer in front of the three, I’m not so sure that’s even a best practice for supporting internal users.
    I’m wondering what have other multi-datacenter companies done (or what does Adobe recommend) to solve this issue, did you:
    Only give one destination and let the other two serve as backups? (this appears to defeat the purpose of clustering)
    Place a web server/load balancer in front of each machine and distribute traffic that way?
    Do nothing, e.g., provide all 3 author URLs and let the end-user choose the one closest to them geographically?
    Something else???
    It would be nice if there was a master UI an author could use that communicated with the other author machines in a way that's transparent to the end-user, so if Auth01 went down, the UI would continue to work with the remaining machines without the end-user (author) even knowing the difference (e.g., not having to change machines).
    Any thoughts would be greatly appreciated.

    Day's documentation (for CRX 2.3) states in part, "whenever a write operation is received by a slave instance, it is redirected to the master instance ..."  So, all writes will always go to the master, regardless of which instance you hit.
    Day's documentation also states, "Perhaps surprisingly, clustering can also benefit the author environment because even in the author environment the vast majority of interactions with the repository are reads. In the usual case 97% of repository requests in an author environment are reads, while only 3% are writes."
    This being the case, it seems the latency of hitting a remote author would far outweigh other considerations. If I were you, New2CQ, I would probably have my users hit the instance that's nearest to them (in terms of network latency, etc.) regardless of whether it's a master or a slave.

  • JMS bridge is not working in clustered env

              We have set up a JMS bridge between WLS7SP3 and WLS8.1. It works very well in
              stand alone server env (testing env). However, we cannot get it to work on clustered
              env (preprod env). Anyone has experienced working with clustered env? If so,
              please help!
              Thanks.
              

    I forgot to say, we are using WLS8.1 SP1
              "Pete Inman" <[email protected]> wrote in message
              news:[email protected]...
              > If you are in a clusterd environment and you deploy a bridge to the WHOLE
              > cluster it does not work and will not find the adapter. If you deploy to
              the
              > INDIVIDUAL cluster members it will work.
              >
              > We have a cluster with 4 managed servers, deploy to whole cluster - no
              > bridge working, deploy to Server1,2,3,4 bridges work fine.
              >
              > I have a case logged with BEA on this topic.
              >
              > "Tom Barnes" <[email protected]> wrote in message
              > news:[email protected]...
              > > "Not working" is too little information. I suggest
              > > that you start with the messaging bridge FAQ. There is
              > > a link to it here:
              > >
              > > http://dev2dev.bea.com/technologies/jms/index.jsp
              > >
              > > Then post with traces, exceptions, configuration, etc, if
              > > you are still having trouble.
              > >
              > > Tom, BEA
              > >
              

  • How to deploy jetspeed 2.1 in clustered env.

    Please suggest how to deploy Jetspeed 2.1 in a clustered env.
    thanks in advance
    abhiagar

    Start Oracle EM, browse to your .war file, then in the class loading configuration uncheck everything except oracle.jdbc.
    Hope it works for you.

  • Serialization Exception with a MailSession in a WLS 9.1 Clustered Env.

    I have a WLS 9.1 cluster of 2 machines. My EJBs are targeted to both
    machines in the cluster, and I have a single WLS MailSession that is
    targeted to both machines in the cluster.
    One of the EJBs is responsible for sending emails. Every so often I get a
    NotSerializableException when the EJB looks up the mail session. After some
    diagnosis it appears that every once in a while the JNDI lookup for the mail
    session returns the MailSession from the opposite machine in the cluster:
    IE: the EJB running on server 1, gets the MailSession from server 2, and
    vice-versa.
    When the same EJB is running in a non clustered env, there are no problems.
    And the problem in the cluster is intermittent - sometimes the lookup
    succeeds and the EJB gets a handle to the local MailSession object, and
    sometimes it fails.
    Has anyone else experienced this problem? Is this a known problem? If so,
    are there any patches for it?
    Thanks,
    Brett
    The exception is:
    javax.naming.ConfigurationException [Root exception is java.rmi.MarshalException: error marshalling return; nested exception is:
    java.io.NotSerializableException: javax.mail.Session]
    at weblogic.jndi.internal.ExceptionTranslator.toNamingException(ExceptionTranslator.java:46)
    at weblogic.jndi.internal.ExceptionTranslator.toNamingException(ExceptionTranslator.java:78)
    at weblogic.jndi.internal.WLContextImpl.translateException(WLContextImpl.java:421)
    at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:377)
    at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:362)
    at javax.naming.InitialContext.lookup(InitialContext.java:351)
    at com.certapay.ejb.Mailer.send(Mailer.java:55)

    I am having the exact same problem (albeit accessing the mail session through a Spring wrapper). Has anyone got this to work? If not, looks like a bug in BEA :-).
    Anuj
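    The root cause is that javax.mail.Session does not implement Serializable, so when a JNDI lookup resolves to the MailSession bound on the opposite cluster node, the object must be marshalled across the wire and fails. A stdlib-only illustration (the inner class is a hypothetical stand-in for the mail session):

    ```java
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.NotSerializableException;
    import java.io.ObjectOutputStream;
    import java.io.UncheckedIOException;

    public class MarshalDemo {
        static class MailSessionLike { } // not Serializable, like javax.mail.Session

        // A remote JNDI lookup must serialize the bound object to ship it to the
        // caller; this checks whether that marshalling would succeed.
        static boolean canMarshal(Object o) {
            try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
                out.writeObject(o);
                return true;
            } catch (NotSerializableException e) {
                return false; // surfaces as java.rmi.MarshalException in the cluster
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        public static void main(String[] args) {
            System.out.println(canMarshal("plain string"));        // serializable: ok
            System.out.println(canMarshal(new MailSessionLike())); // fails to marshal
        }
    }
    ```

    This is why the lookup only breaks intermittently: it works whenever JNDI happens to return the local (in-JVM) MailSession, where no serialization is needed.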

  • Uploading & Processing of CSV file fails in clustered env.

              We have a csv file which is uploaded to the weblogic application server, written
              to a temporary directory and then manipulated before being written to the database.
              This process works correctly in a single server environment but fails in a clustered
              environment.
              The file gets uploaded to the server into the temporary directory without problem.
              The processing starts. When running in a cluster the csv file is replicated
              to a temporary directory on the secondary server as well as the primary.
              The manipulation process is running but never finishes and the browser times out
              with the following message:
              Message from the NSAPI plugin:
              No backend server available for connection: timed out after 30 seconds.
              Build date/time: Jun 3 2002 12:27:28
              The server which is loading the file and processing it writes this to the log:
              17-Jul-03 15:13:12 BST> <Warning> <HTTP> <WebAppServletContext(1883426,fulfilment,/fu
              lfilment) One of the getParameter family of methods called after reading from
              the Serv
              letInputStream, not merging post parameters>.
              

    Anna Bancroft wrote:
              This doesn't make sense. Who is replicating the file? How long does it
              take to process the file? Which plugin are you using as the proxy to the
              cluster?
              -- Prasad
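The "getParameter family of methods called after reading from the ServletInputStream" warning points at an ordering bug: once the request body has been consumed as a raw stream, the container can no longer parse POST parameters from it. The sketch below illustrates the principle with a simplified stand-in `Request` class (not the servlet API itself, so it stays self-contained); the behavior mirrors what WebLogic warns about.

```java
import java.io.*;
import java.util.*;

// Simplified stand-in for a servlet request whose body can be read only
// once, mirroring why WebLogic warns when getParameter() is called after
// the ServletInputStream has been consumed.
public class OneShotBodyDemo {
    static class Request {
        private final InputStream body;
        private Map<String, String> params; // parsed lazily from the body
        private boolean bodyConsumed;

        Request(byte[] rawBody) { this.body = new ByteArrayInputStream(rawBody); }

        InputStream getInputStream() { bodyConsumed = true; return body; }

        String getParameter(String name) throws IOException {
            if (params == null) {
                if (bodyConsumed) {
                    // The body bytes are gone, so POST parameters cannot be
                    // parsed any more ("not merging post parameters").
                    params = Collections.emptyMap();
                } else {
                    params = parse(new String(getInputStream().readAllBytes()));
                }
            }
            return params.get(name);
        }

        private static Map<String, String> parse(String form) {
            Map<String, String> m = new HashMap<>();
            for (String pair : form.split("&")) {
                String[] kv = pair.split("=", 2);
                m.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
            return m;
        }
    }

    public static void main(String[] args) throws IOException {
        // Correct order: read parameters before touching the stream.
        Request ok = new Request("action=upload".getBytes());
        System.out.println("param first: " + ok.getParameter("action"));

        // Wrong order: consume the stream, then ask for a parameter.
        Request bad = new Request("action=upload".getBytes());
        bad.getInputStream().readAllBytes();
        System.out.println("stream first: " + bad.getParameter("action"));
    }
}
```

In a real servlet, the fix is the same idea: either read the form parameters before consuming the input stream, or parse the multipart body yourself and do not call `getParameter` afterwards.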
              

  • Automation failing in OSM clustered env with "unread block data"

    One of our customers is getting the following exceptions while trying to place orders in the clustered environment. The same issue has also been reported by two others and is discussed in the communities (https://communities.oracle.com/portal/server.pt?open=514&objID=187443&mode=2&threadid=367195)
    <04-Jun-2012 11:20:11,369 ICT AM> <ERROR> <message.ClusterMessageHandlerBean> <ExecuteThread: '37' for queue: 'oms.automation'> <Failed to process cluster request for order ID [100739]>
    java.lang.IllegalStateException: unread block data
         at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2376)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1360)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
         at weblogic.rmi.extensions.server.CBVInputStream.readObject(CBVInputStream.java:64)
         at weblogic.rmi.internal.ServerRequest.copy(ServerRequest.java:261)
         at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:166)
         at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:222)
         at com.mslv.oms.security.base.OMSRequestBalancer_y7pdy3_EOImpl_1033_WLStub.routeRequestToRemoteJMSDestination(Unknown Source)
         at com.mslv.oms.automation.plugin.l.a(Unknown Source)
         at oracle.communications.ordermanagement.cluster.message.ClusterMessageHandlerBean.onMessage(Unknown Source)
         at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
         at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
         at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4659)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:4345)
         at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3821)
         at weblogic.jms.client.JMSSession.access$000(JMSSession.java:115)
         at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5170)
         at weblogic.work.ExecuteRequestAdapter.execute(ExecuteRequestAdapter.java:21)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
    <Jun 4, 2012 11:20:11 AM ICT> <Error> <oms> <BEA-000000> <message.ClusterMessageHandlerBean: Failed to process cluster request for order ID [100739]
    java.lang.IllegalStateException: unread block data
         at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2376)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1360)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
         Truncated. see log file for complete stacktrace
    >

    The logs clearly indicate that OSM is running in legacy mode; "OMSRequestBalancer_y7pdy3_EOImpl_1033_WLStub.routeRequestToRemoteJMSDestination" would not appear at all in optimized mode.
    Please verify the following settings:
    1)     Studio – optimized mode.
    2)     Cartridge target version 7.0.3

  • WebCache in clustered env gives a blank page with HTTP error code 508

    Hi All,
    I have Oracle WebCache in clustered mode, acting as a load balancer for Oracle Application Server.
    Oracle Application Server is hosting a custom J2EE application.
    Scenario 1:
    Only one WebCache node is up.
    If I try to access the web application through a browser, the page is displayed successfully.
    Scenario 2:
    Both WebCache nodes are up and running.
    Problem statement:
    If I try to access the web application through a browser, I get a blank page with HTTP error code 508 in the background.
    The problem seems to me to be a session-persistence problem.
    As far as I know, in a clustered environment developers must be aware that the HTTP session can run in multiple JVMs. The session attributes should be kept consistent in each JVM; otherwise, the same input may produce different results when the application runs on different nodes, because the user-related data is inconsistent between the two nodes.
    The precondition is that all shared attributes can be serialized and deserialized. When you put a Java object into a session and want it shared across all nodes, declare that Java object as implementing the Serializable interface.
    I want to know: is my understanding correct, i.e. the code is causing the problem and the application source needs to be checked, or is there some configuration missing on WebCache?
    I would appreciate any pointers.
    Regards
    Ak
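The serialization requirement described above can be demonstrated in isolation. The sketch below (a minimal illustration, not the poster's application code; the `UserProfile` class is a made-up example attribute) round-trips an object through Java serialization, which is essentially what session replication does when it copies attributes to another JVM. A `Serializable` attribute survives the trip; a plain `Object` fails with `NotSerializableException`.

```java
import java.io.*;

public class SessionAttributeDemo {
    // A session attribute that is safe to replicate: it implements Serializable.
    static class UserProfile implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        UserProfile(String name) { this.name = name; }
    }

    // Round-trip an object through serialization, as a session-replication
    // mechanism would when copying attributes to another JVM.
    static Object roundTrip(Object attr) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(attr);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        UserProfile copy = (UserProfile) roundTrip(new UserProfile("ak"));
        System.out.println("replicated name = " + copy.name);

        // A non-serializable attribute fails at replication time.
        try {
            roundTrip(new Object());
            System.out.println("unexpected success");
        } catch (NotSerializableException e) {
            System.out.println("NotSerializableException, as expected");
        }
    }
}
```

A quick audit of every `session.setAttribute(...)` call in the application, checking that each stored type (and everything it references) is `Serializable`, is one way to test the hypothesis before digging into WebCache configuration.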

    To confirm this, did you try running a page that doesn't contain a session object, and see if you can use those pages?
    If so, work on making your application cluster-safe.
    Also, why don't you try creating a VirtualHost and registering it in WebCache, so that WebCache will receive these requests and serve them?
    Hope this helps.
    Greetings.
