OBPM Enterprise Deployment on WLS - No Cluster, But Load Balanced

All,
Does anyone know of any gotchas when deploying BPM to WLS on two separate nodes that share the same directory but are not clustered? The system is load balanced by an F5; basically we are talking about a hot server/cold server deployment.
When we deploy projects, they default to the hot server even if the cold server is specified as the deployment target.
Anyone done this before?
TIA,
IGS

Hi,
Sorry, but I could not completely understand your architecture.
Are you talking about the Workspace (not clustered, but load balanced)? That is supported.
Or are you trying to load balance the engine (a single engine with two or more nodes)?
If so, I wouldn't recommend doing that.
Let me explain why.
The engine uses the queue to balance the work among the different nodes (that's why you have to configure a distributed queue and disable server affinity on the connection factory).
Furthermore, the engine has an internal synchronization mechanism among the nodes to avoid inter-node locking. If your engine nodes are not in a cluster, that mechanism is disabled and overall engine performance will be significantly degraded.
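For reference, both of those settings are ordinary WebLogic JMS configuration. Below is a rough online-WLST sketch of what Ariel describes; the admin URL, credentials, module and destination names are made up for illustration, and the subdeployment/targeting of the module to the cluster is omitted.
connect('weblogic', '<password>', 't3://adminhost:7001')
edit()
startEdit()
# JMS module assumed to already exist and be targeted at the cluster
cd('/JMSSystemResources/EngineJMSModule/JMSResource/EngineJMSModule')
# Uniform distributed queue so all cluster members share the engine work
udq = create('EngineQueue', 'UniformDistributedQueue')
udq.setJNDIName('jms/engineQueue')
# Connection factory with server affinity disabled, load balancing enabled
cf = create('EngineCF', 'ConnectionFactory')
cf.setJNDIName('jms/engineCF')
lb = cf.getLoadBalancingParams()
lb.setLoadBalancingEnabled(True)
lb.setServerAffinityEnabled(False)
save()
activate()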
I'm not sure if I have answered your question. If not, please add more details of your configuration.
Hope this helps,
Ariel

Similar Messages

  • CF8/JRun4 Cluster for Load Balancing

    Does anyone have an example of how to set up a CF8/JRun4
    cluster for load balancing?
    I have three servers:
    x004 - Linux - Apache2 (10.0.0.54,10.1.0.54)
    x020 - Linux - JRun4/CF8 (10.0.0.70,10.1.0.70)
    x021 - Linux - JRun4/CF8 (10.0.0.71,10.1.0.71)
    Every server in our network has two network cards. One
    network card is attached to 10.0.x.x which has a gateway to the
    internet and runs at 100Mbps and is firewalled, and the other is
    attached to 10.1.x.x which runs at 1Gbps and is internal with no
    gateway. I'm trying to set it up so web traffic arrives on
    10.0.0.54 into Apache and mod_jrun20 bootstraps a cluster named
    STST using 10.1.0.54 which consists of STST_x020 coldfusion server
    running on x020 and STST_x021 running on x021. I want the
    communications between JRun4 on x020 and x021 to occur on the
    10.1.x.x network and eventhough JRun and ColdFusion will only use
    the 10.1.x.x network I still need the 10.0.x.x network card
    attached for other purposes which require a gateway. I have
    installed JRun4/CF8 about 10 times already and it seems I have no
    control over what network JRun4 clusters on... sometimes it will
    communicate on one, sometimes the other and without being able to
    set which network is being used there always seems to be "network
    error" on at least one of the two CF8 servers. I was able to get
    everything working fine by disabling the network cards on the
    10.0.x.x network and re-installing everything... but as soon as I
    added the network cards back the whole thing was broken again.
    How is this supposed to work? Most of the examples are either
    no clustering or clustering on the same machine with Apache running
    on the same box... I don't see any clustering across machines
    examples.
    How do I install a connector on a web server which doesn't
    have JRun on it and get wsconfig to connect to a multi-machine
    cluster when wsconfig only accepts a single IP address as a host
    and the cluster is not listed?
    How do I get JRun to bind to a specific network card?
    Does this work if I choose a J2EE server other than JRun?
    Any help anyone can provide is greatly appreciated. I'm
    getting close to giving up which means staying on the non-clustered
    environment and figuring out how we can deal with scalability by
    switching to something else.

    The article at
    http://www.adobe.com/go/1e8e9170
    is specific to configuring two or more cluster nodes that reside on
    separate networks, e.g. 10.0.1.0/24 and 10.0.2.0/24. (The article
    doesn't state it, but you can only use unicast peers if your
    cluster nodes host a single instance of JRun or multiple instances
    of JRun in the same cluster domain. When performing unicast
    discovery, JRun looks for all Jini groups and not just the cluster
    group.)
    Anyhow, that's not your problem. The most likely cause is that you
    haven't enabled the jrun.servlet.jrpp.JRunProxyService service. I'm
    most familiar with the Windows version of JRun, but I'm assuming
    the directory structure is similar across platforms. In
    <jrun_root>/servers/<name>/SERVER-INF/jrun.xml, set the
    deactivated attribute of the jrun.servlet.jrpp.JRunProxyService
    service to false and restart JRun. You should now see JRun
    listening on the appropriate port. (The default for the first
    manually created instance is 51000.) You can limit the proxy
    service to a single interface using the interface attribute.
    If you have enabled the proxy service, verify your security
    settings in <jrun_root>/lib/security.properties. It's usually
    best to limit access to specific hosts. Comment out the
    jrun.subnet.restriction parameter and set the jrun.trusted.hosts to
    the IP address of your web server, e.g. 10.1.0.54.
    Forcing all JRun processes/services to listen on a single
    interface isn't difficult, but it does require modifying quite a
    few configuration files by hand. If you need assistance with that,
    I can elaborate.
    Configuring the JRun module under Apache is pretty
    straightforward. If you're not using virtual hosts, it's very
    simple. If you are using virtual hosts, it's still simple, but your
    JRun configuration can be virtual host-specific.
    On your Apache server, you'll want to create a directory
    structure for the JRun module. I'll assume
    /opt/jrun/lib/wsconfig/1, but you can use anything you want. Once
    the directory structure is created, extract the appropriate JRun
    module from wsconfig.jar to the new directory. You're most likely
    interested in the Apache 2.0 module,
    wsconfig.jar/connectors/apache/intel-linux/prebuilt/mod_jrun20.so.
    Let's assume you've extracted the module to
    /opt/jrun/lib/wsconfig/1/mod_jrun20.so. Your Apache service account
    should have read, write, and execute permissions on the
    /opt/jrun/lib/wsconfig/1 directory.
    The JRun module configuration is normally appended to your
    current httpd.conf file by wsconfig. Here's a sample configuration:
    LoadModule jrun_module
    "/opt/jrun/lib/wsconfig/1/mod_jrun20.so"
    <IfModule mod_jrun20.c>
    JRunConfig Verbose false
    JRunConfig Apialloc false
    JRunConfig Ssl false
    JRunConfig Ignoresuffixmap false
    JRunConfig Serverstore
    "/opt/jrun/lib/wsconfig/1/jrunserver.store"
    JRunConfig Bootstrap 10.1.0.70:51000
    #JRunConfig Errorurl <optionally redirect to this URL on
    errors>
    #JRunConfig ProxyRetryInterval 600
    #JRunConfig ConnectTimeout 30
    #JRunConfig RecvTimeout 30
    #JRunConfig SendTimeout 30
    AddHandler jrun-handler .jsp .jws .cfm .cfml .cfc .cfr
    .cfswf
    </IfModule>
    You may also want to update your DirectoryIndex directive
    with an appropriate index page, e.g. index.cfm.
    After the first request to a page handled by the JRun module
    is received, the module will query the bootstrap server,
    10.1.0.70:51000, for a list of cluster peers. If you've configured
    your cluster correctly, a line similar to following will be written
    to /opt/jrun/lib/wsconfig/1/jrunserver.store:
    proxyservers=10.1.0.70:51000;10.1.0.71:51000
    You can create/edit this file manually as well.
    Unfortunately, the bootstrap option only accepts one server. If
    your bootstrap server is down, the JRun module will use the values
    in jrunserver.store directly, if the file exists.
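    If you want to seed that store file from a script instead, it is just a one-line properties file. A trivial Python sketch, using the path and addresses already shown in this thread:
    store_path = '/opt/jrun/lib/wsconfig/1/jrunserver.store'
    f = open(store_path, 'w')
    f.write('proxyservers=10.1.0.70:51000;10.1.0.71:51000\n')   # semicolon-separated host:port pairs
    f.close()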
    Here's a complete list of JRun module options:
    metrics *
    debugger *
    ssl *
    verbose
    traceflags
    serverstore
    bootstrap
    errorurl
    apialloc
    ignoresuffixmap
    proxyretryinterval
    connecttimeout
    recvtimeout
    sendtimeout
    sslcalist
    Options flagged with an asterisk can only be configured at
    the Apache server level. All other options can be configured at the
    server level and/or the virtual host level. The usage of these
    options is in the JRun documentation, and the JRun module source
    code is included in wsconfig.jar. Keep in mind that versions of the
    JRun module shipped prior to ColdFusion 8 were coded to assign the
    connecttimeout and sendtimeout options to the socket connection
    timeout. Whichever option appeared last in your configuration ended
    up as the final value. This has been fixed in ColdFusion 8 and
    presumably the next release of the JRun updater.
    I think that's a good start. If you need more information or
    can't find what you need in the JRun or ColdFusion documentation,
    let me know.
    If you're looking for resiliency, I highly recommend
    expanding your configuration to include a second web server and a
    hardware load-balancer (preferably one that supports redundancy via
    multiple paths and devices, e.g. devices from Cisco, F5, or Foundry
    Networks). Often, however, running Apache on the ColdFusion
    server(s) provides adequate performance, and round-robin DNS
    records coupled with the ability to update DNS quickly in the event
    of a failure may be all you need for load-balancing and
    failover.

  • OCS on a cluster with Load balancing and fail safe environment

    Dear all,
    I want to ask if there is any document or hints on how to do an OCS R2 installation on 3 servers with the RAC option (clustered Fail Safe). How can I install OCS on a cluster with load balancing and a fail-safe environment?
    Please, I need your help.
    Thank you,
    [email protected]

  • How to setup Adobe Media Server Professional x 2 run as cluster for load balance?

    How to setup Adobe Media Server Professional x 2 run as cluster for load balance?

    Hi,
    Welcome to adobe forums,
    Please refer to these help files in order to set up AMS as a cluster:
    https://helpx.adobe.com/adobe-media-server/config-admin/load-balancing.html
    https://helpx.adobe.com/adobe-media-server/tech-overview/scaling-server.html
    Let me know if you need any help.
    Regards,
    Puspendra

  • Cluster not load-balancing, ideas?

    I've been struggling to identify why my JMS producers are not load-balancing against a remote cluster.
              I've ruled out the producer as being the problem (I see the same non-load-balancing behavior regardless of what I use to create messages - Hermes, ALSB, simple Java producer...) I also don't think the JMS Connection Factory config is the problem, judging by the help I've received from folks over on the jms forum.
              I believe something is wrong with our cluster setup because in addition to the problem I just mentioned, we also are not seeing JNDI entries propagate to all managed servers - for example, if I create one jms queue on m1, that queue does not appear in the jndi tree on m2.
              I've been trying to find any documentation on what settings I should look at to verify the cluster configuration. If I go through the WLS console and look at the Cluster settings, I see both managed servers there, is there some other place that the configuration could be messed up?
              Added 6/11, 9:30 am:
              We're focusing on multicast now as the most likely problem. Can anyone tell me whether clusters on the same multicast address but different ports will interfere with each other? It looks like the infrastructure team has set up 5 clusters like that (same multicast address in each cluster, but different ports).
              We've got a ticket open with BEA but it's been two weeks now and nothing except requests for more information.
              Any ideas/help are much appreciated!
              Meghan
              Edited by pietila at 06/11/2008 7:38 AM

    Meghan Pietila wrote:
              > I've been struggling to identify why my JMS producers are not load-balancing against a remote cluster.
              >
              > I've ruled out the producer as being the problem (I see the same non-load-balancing behavior regardless of what I use to create messages - Hermes, ALSB, simple Java producer...) I also don't think the JMS Connection Factory config is the problem, judging by the help I've received from folks over on the jms forum.
              >
              > I believe something is wrong with our cluster setup because in addition to the problem I just mentioned, we also are not seeing JNDI entries propagate to all managed servers - for example, if I create one jms queue on m1, that queue does not appear in the jndi tree on m2.
              >
              > I've been trying to find any documentation on what settings I should look at to verify the cluster configuration. If I go through the WLS console and look at the Cluster settings, I see both managed servers there, is there some other place that the configuration could be messed up?
              >
              > Added 6/11, 9:30 am:
              > We're focusing on multicast now as the most likely problem. Can anyone tell me whether clusters on the same multicast address but different ports will interfere with each other? It looks like the infrastructure team has set up 5 clusters like that (same multicast address in each cluster, but different ports).
              >
              > We've got a ticket open with BEA but it's been two weeks now and nothing except requests for more information.
              >
              > Any ideas/help are much appreciated!
              >
              > Meghan
              >
              > --
              > Edited by pietila at 06/11/2008 7:38 AM
              You could be right. I think we have had problems where the same IP but
              different ports were used for multicast. This is on 8.1 though.
              I think as a rule, it's best to have a different ip and port for each
              cluster.
              Also - can you be sure that no one else is using the multicast addresses
              on the network for anything else - we had someone bring up a test
              cluster using our addresses which caused a few issues and took a while
              to find! We also have security cameras that use multicast; if
              they are using the same address/port, that can cause issues!
              We're using 239.192.1.4:8001 for one cluster and 239.192.1.3:7001 for
              the other - I think it's best to keep those as different as you can.
              In 8.1, there is also the multicast monitor utility - there's a support
              pattern on e-support on how to diagnose it. I've found this useful in
              the past when I've suspected a cluster issue.
              https://support.bea.com/application_content/product_portlets/support_patterns/wls/MulticastErrorsPattern.html
              Check also that you're using a valid range for the address - we weren't
              for a while and had odd problems from time to time.
              There are also cluster debug flags available which you'll see listed in
              the support document.
              Are you seeing dropped multicast packets?
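              If you want a quick, WebLogic-independent check that a given
              multicast group/port actually carries traffic between the two
              boxes, here is a rough Python sketch (run it with 'send' on one
              node and with no argument on the other; the group/port are just
              the example values from this thread). WebLogic also ships the
              utils.MulticastTest utility for the same purpose.
              import socket, struct, sys

              GROUP, PORT = '239.192.1.4', 8001

              def receive():
                  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
                  s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                  s.bind(('', PORT))
                  # join the multicast group on all interfaces
                  mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
                  s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
                  while True:
                      data, addr = s.recvfrom(1024)
                      print('received %r from %s' % (data, addr))

              def send():
                  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
                  s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
                  s.sendto(b'hello from ' + socket.gethostname().encode(), (GROUP, PORT))

              if __name__ == '__main__':
                  send() if 'send' in sys.argv[1:] else receive()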
              Hope that helps.
              Pete

  • OCS 10g Cluster Installation  - Load Balancing

    Hi all,
    Has anybody successfully installed and configured an OCS 10g cluster, with load balancing?
    I'm trying to install an OCS 10g cluster with a two-node server setup, using Oracle Web Cache as the load balancer, but without success. Any hints?
    Regards
    Lanang

    Just found out that Oracle Web Cache supports HTTP and HTTPS only, no LDAP traffic yet. That is why the cluster node installation failed. I'm trying iptables NAT for the LDAP traffic, and HTTP will use Web Cache.
    Regards
    Din

  • Question about Cluster/DSync/Load Balance

    According to the iPlanet admin doc, the primary server is
    the "manager" for data sync. Is there any impact on
    load balancing when the iAS runs as primary or backup?
    Will the primary kxs get the request first and do the dispatching?
    Thanks.
    Heng

    First of all, let's discuss load balancing.
    The type of load balancing you are using will determine which process manages the load balancing. If you are using Response time (per server or per component response time) or round robin (regular or weighted) the web connector does the load balancing. If you are using User Defined (iAS based) load balancing then the kxs process becomes involved with load balancing of requests since the "Load Balancing System" is part of the kxs process.
    Now for Dsync and how it impacts load balancing.
    When a server is in a sync primary or sync backup role, it is doing more work. For the sync primary, the extra work is making sure the backup has the latest Dsync data and processing requests from the other servers in the cluster about the distributed data. All state/session information is updated/created/deleted on the sync primary; when this happens, the sync primary immediately updates the sync backup(s) with this new information. As you can guess, managing the Dsync information and making the updates to the sync backups causes extra processing on the sync primary, so it will impact the overall performance of the machine (whether in server load or response time). All lookups of state/session information are done on the sync primary only, so the more lookups/updates you have, the more impact on the server.
    The sync backup(s) also have the extra work of managing their copy of the Dsync data, which will impact server performance, but to a lesser degree than the sync primary.
    Ultimately the extra overhead involved does have an impact on load balancing due to the extra load on the sync primary and sync backups.
    Hope that helps,
    Chris Buzzetta

  • Weblogic cluster software load balancer

    Hi,
    We are currently using a WebLogic domain as a proxy plug-in for a high-availability test, as explained in this blog: http://andrejusb.blogspot.com/2009/04/weblogic-load-balancing-for-oracle-adf.html.
    It's working fine for the POC project, but what software load balancer would you recommend for a production environment on Linux? (Assume that we don't have a hardware load balancer.)
    - Oracle active-passive OHS web-tier clustering
    - Open source Linux software (e.g. HAProxy and KeepAlived), as explained here: http://biemond.blogspot.com/2010/04/high-availability-load-balancer-for.html
    - Any other software load balancer
    I would appreciate if anybody can provide some recommendations/links etc.
    Thanks
    Alex

    Hi Alex,
    Yes you should never use HttpClusterServlet, not even for fun ;-)
    We use mod_wl (Web Server Plug-In) for Apache for several customers and that works fine.
    Check this:
    http://docs.oracle.com/cd/E23943_01/web.1111/e14395/toc.htm
    Regards Peter

  • Distributed HA cluster with load-balancing and failover: advice?

    My workplace has a Xeon Xserve, which acts as our primary external server, with an attached ActiveStorage XRAID. We have just purchased a second Xserve/XRAID set to act as a mirror, which we will colocate. Both have Leopard Server installed, along with an array of additional software.
    What we want to do is have both servers load-balance between the two, with failover in case of a server or XRAID fault. I plan on using RSYNC to mirror static files between the two, and I'm looking into PostgreSQL replication and load-balancing solutions for our database. I gather that Apache supports web-server failover and load-balancing, as well. But, that still leaves the actual host and network setup to arrange.
    Does Leopard server support such a thing? The only information I found on IP failover instructs the user to place the two servers on the same subnet, directly connected via ethernet cable; obviously, this would not work in my case.
    Ideally, what we'd end up with is a situation in which the two systems kept each other in sync, both in static files and database data, and load-balanced between themselves; in cases of failure, the remaining system would transparently assume all duties until the other was restored, at which time they would resynchronize.
    Any suggestions on how I could arrange such a thing?

    Interesting. Does this DNS-based approach support session tracking, though? I would need to have a user directed to just one of the two servers for the duration of their session, to avoid having to synchronize temporary files and such.
    You can't have it both ways. You need to build tolerance into the app.
    At the simplest level where you run all traffic to one site and use the second site as a failover/standby site you'll be OK most of the time - all users will go to the same server and their sessions will be intact.
    However, under any failover situation (your primary site is down for some reason), there is going to be some level of session traffic that is going to switch over to the other site. If your site depends on sessions then you're going to need to tolerate this kind of situation - your app will need to be able to fail gracefully if a user comes in with an invalid session cookie.
    Note, though, that this may be less of an issue than you at first think - all DNS clients will cache DNS data for whatever TTL you set. This means that if a user looks up your site name and you return an IP address with a 30 minute TTL, then that user is going to use the same IP address for the next 30 minutes and isn't going to ask the server again. This should negate most chance of a user suddenly switching from one server location to the other in mid-session.
    The trick is setting the DNS TTL low enough to effect a failover, yet not so low that you impact performance - e.g. you don't want the user to perform a DNS lookup on every page load. You may find that 10 minutes is appropriate. Just bear in mind that this affects how long a user could see your site 'down' before the failover DNS kicks in. Clearly you don't want to set the DNS TTL to a day, since that may prevent the user from switching to the secondary site for 24 hours, by which time, hopefully, the primary site is back up anyway.
    The 'right' TTL value may take some analysis on your traffic to see how long a typical user 'session' is. If the average user spends 20 minutes on your site, then it would make sense to set your TTL to somewhere around 20 minutes to give the best chance of their entire session staying on the same server.
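    As a quick sanity check on the TTL you are actually handing out today, here is a small sketch using the third-party dnspython package (the hostname is a placeholder):
    import dns.resolver

    answer = dns.resolver.resolve('www.example.com', 'A')   # dnspython >= 2.0; use query() on older versions
    print('A records:', [r.address for r in answer])
    print('TTL handed to clients: %d seconds' % answer.rrset.ttl)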

  • OAM 11g integration with Kerberos on cluster with load-balanced virtualhost

    Hello!
    I need to make a Kerberos integration with OAM.
    I found the following note about OAM 11g: WNA Configuration for HA Clusters [ID 1365888.1] (https://support.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?_afrLoop=223640518878014&type=DOCUMENT&id=1365888.1&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=14ehvbh4z2_61).
    "In an OAM Clustered environment, the OAM Principal for WNA must be the same on all tiers i.e. the load-balanced virtualhost for the OAM cluster.
    Therefore each OAM managed server will reference the same keytab file, generated for Principal HTTP/<virtualhost.domain>, and the keytab file will be in the same location on all OAM managed servers.
    For example: ${DOMAIN_HOME}/domains/${DOMAIN_NAME}/config/fmwconfig/oam/<keytab filename>.
    After copying the keytab file to the same directory on all OAM managed server machines, complete the configuration of the Kerberos authentication module in OAM Administration Console (/oamconsole).
    The AdminServer will ensure that the oam-config.xml file on all OAM managed server tiers in the cluster is updated with this configuration."
    The question is: when I generate oam.keytab with the following command, what is the server name that I must put in the command? The virtual host (load-balanced), Node1, or Node2?
    ktpass -princ HTTP/<servername>@DOMAIN -pass XXXXXXX -mapuser DOMAIN\user -out oam.keytab
    Thanks in advance and best regards!
    PS: Sorry if my english is not clear.

    David,
    Your principal name should be the SSO LB URL (i.e. sso.mycompany.com):
    ktpass -princ HTTP/sso.mycompany.com@DOMAIN -pass XXXXXXX -mapuser DOMAIN\user -out oam.keytab
    Also make sure sso.mycompany.com has reverse DNS configured correctly.
    You can check using the dig command:
    ping sso.mycompany.com
    Whatever the IP address is:
    dig -x <IP-ADDRESS>
    Check the reverse DNS answer section; there should be one record:
    ;; ANSWER SECTION:
    1.1.1.1.in-addr.arpa. 3600 IN PTR sso.mycompany.com.
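    If dig is not handy, the same forward and reverse lookups can be done from Python's standard library (the hostname below is just the example used in this thread):
    import socket

    ip = socket.gethostbyname('sso.mycompany.com')    # forward (A) lookup
    host, aliases, addrs = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
    print(ip, '->', host)                             # should map back to the load-balanced SSO host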
    Let me know if you have more questions.
    Thanks
    Saurabh

  • Cluster without load balancer?

    Can two FMS interactive servers work this way:
    1. they both serve the same VOD flv file existing on both
    machines
    2. when one server gets X users connected, the next user (X+1) is
    routed to the next server
    3. there is no hardware for the load balancer
    Is this possible and if it is, how?

    Yes.... it's possible, but there's nothing built-in to FMS to
    handle it. You need to write your own application to do it.
    I like to handle this by building a little app that polls the
    admin service of each involved server once a second or so and
    retrieves stats about the application instances I need to keep
    track of (we'll call it the "load balancer app"). In this
    application, I include functions to loop through the stats data for
    each server and determine which is the most logical one to send the
    next client to.
    On the client side, I first connect to the load balancer app,
    providing an identifier for the application I want to connect to as
    an argument in the connect() call. The load balancer makes the best
    server determination, and returns the host name of the target
    server. The client then disconnects from the load balancer, and
    connects to the target host.
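    The selection logic itself can be very small. Here is a rough Python sketch of the "pick the least-loaded server" idea described above; in FMS you would implement this inside the load balancer app and fill the stats from each server's admin service rather than from a hard-coded dictionary:
    def pick_target(server_stats):
        # server_stats maps hostname -> current connection count
        return min(server_stats, key=server_stats.get)

    stats = {'fms01.example.com': 412, 'fms02.example.com': 305}   # sample poll results
    print(pick_target(stats))   # -> fms02.example.com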

  • Problem when using f:invoke or f:invokeUrl over OBPM Enterprise

    Hi All,
    I am installing a new environment of Oracle BPM 10.3.1 Enterprise for WebLogic (using WebLogic Server 10.3.2), and I have created a small project to test the functionality of the OBPM tag library, invoking it from a custom JSP. I tested my process in OBPM Studio with a JSP that uses f:field and f:invoke and it works fine, but when I published and deployed the project on OBPM Enterprise, an exception is thrown by the f:invoke (the f:field works correctly, because if I test only that, it shows me the attributes of the BPM object). This is the stack trace of the exception:
    <09-oct-2012 19H42' CDT> <Error> <HTTP> <BEA-101362> <[ServletContext@22416257[app:08-workspace-XAFDIDS.ear module:workspace path:/workspace spec-version:2.5]] could not deserialize the request scoped attribute with name: "agreement"
    java.lang.ClassNotFoundException: xobject.Com.Posadas.BPM.DAT.Model.Objects.AgreementRequest: This error could indicate that a component was deployed on a cluster member but not other members of that cluster. Make sure that any component deployed on a server that is part of a cluster is also deployed on all other members of that cluster
    at weblogic.j2ee.ApplicationManager.loadClass(ApplicationManager.java:218)
    at weblogic.j2ee.ApplicationManager.loadClass(ApplicationManager.java:85)
    at weblogic.common.internal.WLObjectInputStream.resolveClass(WLObjectInputStream.java:61)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496)
    Truncated. see log file for complete stacktrace
    >
    (The same ClassNotFoundException for xobject.Com.Posadas.BPM.DAT.Model.Objects.AgreementRequest is logged three more times; the repeated blocks are identical to the one above.)
    ================Oracle® BPM - WorkSpace================
    fuego.lang.ComponentExecutionException: No se ha podido ejecutar correctamente la tarea.
    Motivo: 'fuego.web.execution.exception.InternalForwardException: Se ha producido un error inesperado durante el proceso de reenvío interno.'.
    at fuego.web.execution.InteractiveExecution.setExecutionError(InteractiveExecution.java:307)
    at fuego.web.execution.InteractiveExecution.process(InteractiveExecution.java:166)
    at fuego.web.execution.impl.WebInteractiveExecution.process(WebInteractiveExecution.java:54)
    at fuego.web.papi.TaskExecutor.processRedirect(TaskExecutor.java:239)
    at fuego.web.papi.TaskExecutor.execute(TaskExecutor.java:104)
    at fuego.workspace.servlet.ExecutorServlet.doAction(ExecutorServlet.java:117)
    at fuego.workspace.servlet.BaseServlet.doPost(BaseServlet.java:229)
    at fuego.workspace.servlet.BaseServlet.doGet(BaseServlet.java:220)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at fuego.workspace.servlet.AuthenticatedServlet.service(AuthenticatedServlet.java:83)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:499)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:248)
    at fuego.web.execution.servlet.ServletExternalContext.forwardInternal(ServletExternalContext.java:197)
    at fuego.web.execution.servlet.ServletExternalContext.processAction(ServletExternalContext.java:110)
    at fuego.workspace.execution.WorkspaceInteractiveExecution.dispatchComponentExecution(WorkspaceInteractiveExecution.java:98)
    at fuego.web.execution.InteractiveExecution.invokePrepare(InteractiveExecution.java:351)
    at fuego.web.execution.InteractiveExecution.process(InteractiveExecution.java:192)
    at fuego.web.execution.impl.WebInteractiveExecution.process(WebInteractiveExecution.java:54)
    at fuego.web.execution.InteractiveExecution.process(InteractiveExecution.java:223)
    at fuego.web.papi.TaskExecutor.runApplicationTask(TaskExecutor.java:349)
    at fuego.web.papi.TaskExecutor.execute(TaskExecutor.java:95)
    at fuego.workspace.servlet.ExecutorServlet.doAction(ExecutorServlet.java:117)
    at fuego.workspace.servlet.BaseServlet.doPost(BaseServlet.java:229)
    at fuego.workspace.servlet.BaseServlet.doGet(BaseServlet.java:220)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at fuego.workspace.servlet.AuthenticatedServlet.service(AuthenticatedServlet.java:83)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at fuego.web.filter.SingleThreadPerSessionFilter.doFilter(SingleThreadPerSessionFilter.java:64)
    at fuego.web.filter.BaseFilter.doFilter(BaseFilter.java:63)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at fuego.web.filter.CharsetFilter.doFilter(CharsetFilter.java:48)
    at fuego.web.filter.BaseFilter.doFilter(BaseFilter.java:63)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: fuego.web.execution.exception.InternalForwardException: Se ha producido un error inesperado durante el proceso de reenvío interno.
    at fuego.web.execution.servlet.ServletExternalContext.redirectView(ServletExternalContext.java:131)
    at fuegoblock.net.web.NewJspController.forward(NewJspController.java:98)
    at fuegoblock.net.web.NewJspController.service(NewJspController.java:50)
    at fuego.web.execution.servlet.ServletRedirector$ControllerRedirector.forward(ServletRedirector.java:197)
    at fuego.web.execution.servlet.ServletRedirector.redirect(ServletRedirector.java:58)
    at fuego.web.papi.TaskExecutor.processRedirect(TaskExecutor.java:224)
    ... 47 more
    Caused by: javax.servlet.ServletException: javax.servlet.jsp.JspException: MethodUrl Tag: Object 'agreement' null or not a BPMObject
    at weblogic.servlet.jsp.PageContextImpl.handlePageException(PageContextImpl.java:417)
    at jsp_servlet._webroot._schema_45_415478435_45_1144662710._21._customjsp._launchagreement.__showagreements._jspService(__showagreements.java:787)
    at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:499)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:248)
    at fuego.web.execution.servlet.ServletExternalContext.redirectView(ServletExternalContext.java:128)
    ... 52 more
    Caused by: javax.servlet.jsp.JspException: MethodUrl Tag: Object 'agreement' null or not a BPMObject
    at fuego.taglib.tags.fo.InvokeUrlTag.evaluateAttributes(InvokeUrlTag.java:168)
    at fuego.taglib.tags.fo.InvokeUrlTag.doStartTag(InvokeUrlTag.java:47)
    at jsp_servlet._webroot._schema_45_415478435_45_1144662710._21._customjsp._launchagreement.__showagreements._jsp__tag95(__showagreements.java:4588)
    at jsp_servlet._webroot._schema_45_415478435_45_1144662710._21._customjsp._launchagreement.__showagreements._jspService(__showagreements.java:738)
    ... 60 more
    Error inesperado.:No se ha podido ejecutar correctamente la tarea.
    Motivo: 'fuego.web.execution.exception.InternalForwardException: Se ha producido un error inesperado durante el proceso de reenvío interno.'.
    * Unable to render the error:
    java.lang.IllegalStateException: Cannot forward a response that is already committed
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:122)
    at fuego.workspace.servlet.BaseServlet.forward(BaseServlet.java:55)
    at fuego.workspace.servlet.BaseServlet.forward(BaseServlet.java:67)
    at fuego.workspace.servlet.BaseServlet.errorHandler(BaseServlet.java:325)
    at fuego.workspace.servlet.BaseServlet.doPost(BaseServlet.java:232)
    at fuego.workspace.servlet.BaseServlet.doGet(BaseServlet.java:220)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at fuego.workspace.servlet.AuthenticatedServlet.service(AuthenticatedServlet.java:83)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:499)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:248)
    at fuego.web.execution.servlet.ServletExternalContext.forwardInternal(ServletExternalContext.java:197)
    at fuego.web.execution.servlet.ServletExternalContext.processAction(ServletExternalContext.java:110)
    at fuego.workspace.execution.WorkspaceInteractiveExecution.dispatchComponentExecution(WorkspaceInteractiveExecution.java:98)
    at fuego.web.execution.InteractiveExecution.invokePrepare(InteractiveExecution.java:351)
    at fuego.web.execution.InteractiveExecution.process(InteractiveExecution.java:192)
    at fuego.web.execution.impl.WebInteractiveExecution.process(WebInteractiveExecution.java:54)
    at fuego.web.execution.InteractiveExecution.process(InteractiveExecution.java:223)
    at fuego.web.papi.TaskExecutor.runApplicationTask(TaskExecutor.java:349)
    at fuego.web.papi.TaskExecutor.execute(TaskExecutor.java:95)
    at fuego.workspace.servlet.ExecutorServlet.doAction(ExecutorServlet.java:117)
    at fuego.workspace.servlet.BaseServlet.doPost(BaseServlet.java:229)
    at fuego.workspace.servlet.BaseServlet.doGet(BaseServlet.java:220)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at fuego.workspace.servlet.AuthenticatedServlet.service(AuthenticatedServlet.java:83)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at fuego.web.filter.SingleThreadPerSessionFilter.doFilter(SingleThreadPerSessionFilter.java:64)
    at fuego.web.filter.BaseFilter.doFilter(BaseFilter.java:63)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at fuego.web.filter.CharsetFilter.doFilter(CharsetFilter.java:48)
    at fuego.web.filter.BaseFilter.doFilter(BaseFilter.java:63)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    I don't know what is missing in the OBPM Enterprise installation or configuration; I have followed the process and recommendations. Does anyone know what is happening, or has this ever happened to anyone else?
    Thanks,
    César

    Can someone help me?
    I would really appreciate any help.
    Thanks in advance to all.

  • Deployments are in Active state on Cluster but in Prepared state on server

    Hi,
    We initially deployed our application (Oracle Identity Manager 11gR2) on a standalone managed server and then converted it into a cluster using WLST commands.
    readDomain('/app/oracle/Middleware_OAM/user_projects/domains/oim_domain')
    setDistDestType('JRFWSAsyncJmsModule', 'UDD')
    setDistDestType('OIMJMSModule', 'UDD')
    create('oim_cluster','Cluster')
    assign('Server','oim_server1','Cluster','oim_cluster')
    updateDomain()
    The cluster is created fine and we can start the application successfully and access it. However, the problem is that all the deployments are in 'Active' state under the cluster but in 'Prepared' state under the server oim_server1 (though the application is accessible on the managed server port). Please advise what we are missing and what could be wrong.
    Thanks

    Hi,
    1. When you try to access the application page, does it come up?
    2. Use the WLST command below to get the actual state of the deployed application:
    The deploy command returns a WLSTProgress object that you can access to check the status of the command. The WLSTProgress object is captured in a user-defined variable, in this case, progress.
    wls:/mydomain/serverConfig/Servers> progress= deploy(appName='businessApp',path='c:/myapps/business',createplan='true')
    The previous example stores the WLSTProgress object returned in a user-defined variable, in this case, progress. You can then use the progress variable to print the status of the deploy command. For example:
    wls:/mydomain/serverConfig/Servers> progress.printStatus()
    Current Status of your Deployment:
    Deployment command type: deploy
    Deployment State       : completed
    Deployment Message     : null
    wls:/mydomain/serverConfig/Servers>
    3. Restart all the servers, instances, related DB and check if it helps.
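    If the deployments turn out to still be targeted at oim_server1 rather than the cluster, a rough offline-WLST sketch for retargeting them (the deployment name 'oim' is only an example; check the console for the real names):
    readDomain('/app/oracle/Middleware_OAM/user_projects/domains/oim_domain')
    # move the application's target from the single server to the new cluster
    unassign('AppDeployment', 'oim', 'Target', 'oim_server1')
    assign('AppDeployment', 'oim', 'Target', 'oim_cluster')
    updateDomain()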
    Thanks,
    Sharmela

  • DDBeanCreateException Deploying to WLS 10.0

    Hi All,
    I would very much appreciate any help on the following problem.
    I used WorkSpace 1.1 to develop my app and tested it on WLS 10.0 from within it. In WorkSpace the server runs on port 7021 and it has ALSB as well on the same port.
    I created EARs and tried deploying them to another WLS 10.0 instance that was used by ALSB, outside of WorkSpace. All modules deployed fine.
    Then I created a standalone WLS 10.0 server and tried deploying my modules on it. The server by default is on port 7001. Deployment failed with this error:
    javax.enterprise.deploy.model.exceptions.DDBeanCreateException: [J2EE Deployment SPI:260142]The descriptor at 'META-INF/annotation-manifest.xml' in module 'XXX.war' is not recognized, and could not be parsed.
    I found this article in Chris Hogue's blog (http://dev2dev.bea.com/blog/hogue/archive/2006/09/changing_annota.html).
    He recommends adding the WebLogic J2EE weblogic-controls-1.0 library to the EAR. I already have the weblogic-controls-10.0 library attached. The only thing that jumps out at me is the difference in ports. I manually changed references in annotation.xml from port 7021 to port 7001, but it did not help.
    Thoughts?? Thanks in advance!

    Try deploying weblogic-controls-1.0.ear as a library directly in the WLS console instead of adding it to your project EAR file.
    If you still get the error, try deploying beehive-controls-1.0.ear and beehive-netui-1.0.war as well. This is how I resolved this error in WLS 9.2, and I hope it works for WLS 10 too. No changes in annotation-manifest.xml were needed for my app.

  • Enterprise Deployment Reference Topology

    I have seen the latest enterprise deployment reference topology specified in the following Enterprise Deployment Guide:
    http://download.oracle.com/docs/cd/E10291_01/core.1013/e10294/toc.htm
    Had a few follow up questions/points:
    1. Is there a reason why BPEL and ESB are clubbed together into one container?
    2. Is there a reason why the OWSM gateway warrants a separate ORACLE_HOME on the same box, as opposed to having it sit in a container of its own?
    3. Has anyone tried to fit the registry into this same topology? I presume it would be just another container dedicated to it in the same ORACLE_HOME as that of BPEL.
    4. It is great to see a reference topology with all of the detailed steps. But it would help to understand the rationale behind this recommendation as well.
    Appreciate any feedback you may have.

    Hi
    A little help:
    1 - BPEL and ESB (runtime) are together in the same container because of the native integration between them. This way BPEL can call ESB using JCA instead of SOAP. The same applies to ESB calling BPEL.
    2 - It's a good question. I don't know if there's a technical reason for the OWSM gateway being in a separate ORACLE_HOME, but I was told that the gateway component will have a different architecture in release 11g, so having it in a separate ORACLE_HOME could ease its migration process in the future. I'm not sure if this is accurate information.
    Now, if you install the gateway on a separate machine (optionally in a separate DMZ), then it makes more sense to have this distributed topology.
    3 - I'm just working on a cluster production install, which includes the service registry. We decided to just add another OC4J container to the same ORACLE_HOME as BPEL.
    4 - I believe there are many reasons behind the reference topology. I can name a few:
    a) The distributed topology is necessary for security reasons. For example, HTTP servers on a separate DMZ.
    b) Having a separate OC4J container for each product, or group of products (like BPEL and ESB), is a good approach so you can allocate the right amount of memory and JVMs for that specific product or group.
    c) Most of the complexity behind configuring the reference topology is related to its high-availability purpose. Some components can be active/active, while others must be configured as active/passive.
    d) In order to achieve the performance benefits of item b and the HA benefits of item c, you have to install the SOA Suite components one by one, using their specific install media.
    Regards
    Denis
    Message was edited by:
    [email protected]
