Session Affinity WLS8.1

Our setup includes .NET client communicating (via a .NET-to-J2EE bridge) with a session facade. Now, as long as the .NET client retains the client-side proxy (of the session facade), all requests are routed to the original node -- regardless of load.
For our purposes, this is desirable behavior. The problem is, I'm having a hard time explaining why this is so, especially in high-load situations when one node is idle while the other is being hammered. Is it a config issue, i.e. can it be turned off/on?
Also, where would I find tracing/logging information on the session cookies -- or whatever means is used to route the request to the appropriate server?
Thanks...and pardon the lengthy post.

          Try URL rewriting,
          and check that cookies are enabled in your browser.
          fear
          Mark Swanson <[email protected]> wrote:
          >Hello,
          >
          >I have a Session used between several JSP pages and it works fine. However,
          >when I submit the results of a form to a servlet the Session becomes
          >null.
          >I've spent hours on this and haven't been able to find out why this is.
          >
          >JSP Code:
          >System.out.println("*** jsp sessionID:" + session.getId()); // works
          >
          >JSP calls servlet using this path:
          >(/webapp/servlet/ZZZServlet)
          >request.getContextPath() + "/servlet/ZZZServlet";
          >
          >Servlet calls this:
          >
          >HttpSession session = request.getSession(false);
          > if (session == null)
          > Logger.info(LID + "ZZZServlet HTTPSESSION FAILED");
          >
          >The session is always null. It's a 10-line JSP and a 20-line servlet.
          >
          >Does anyone see anything wrong? Any suggestions?
          >
          >Thanks.
          >
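The URL-rewriting suggestion above matters because, when the browser doesn't send the session cookie back, `request.getSession(false)` in the servlet returns null; the container can instead carry the session ID inside URLs produced by `response.encodeURL(...)`. A rough stand-alone sketch of what that rewriting amounts to (the class below is illustrative, not the servlet API):

```java
// Hypothetical sketch of container URL rewriting; not the servlet API itself.
public class UrlRewriter {

    // Mimics what response.encodeURL(url) does: if the client did not return
    // a session cookie, the session ID is appended to the URL path so the
    // next request can still be tied to the same HttpSession.
    public static String encodeURL(String url, String sessionId, boolean cookiesEnabled) {
        if (cookiesEnabled || sessionId == null) {
            return url; // the cookie carries the session; no rewrite needed
        }
        return url + ";jsessionid=" + sessionId;
    }
}
```

If the JSP builds the servlet link as a plain string (as in the post above), the container never gets a chance to rewrite it; running every link through `encodeURL` is what keeps the session alive when cookies are off.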
          

Similar Messages

  • What is session affinity?

    What is session affinity in weblogic?

    Session affinity means that requests coming from the same client should always be served by the same WLS server instance that processed that client's earlier requests. Only if that server is not able to process the request will it be handled by another available cluster node.
    Example:
    By default, WLS session affinity is ON. Suppose I have one application deployed on a cluster containing 2 managed servers.
    In this case, even if the cluster follows a round-robin algorithm to serve incoming requests, when the first request from Client-A comes in it will be served by managed server MS1 (the primary session is created there), and the replicated session (secondary session) will be created on MS2.
    If the same Client-A sends a second request, round-robin alone would ideally send it to MS2; but because session affinity is enabled by default, the WLS cluster will first check where the primary session was created for this client, and will redirect the request to MS1, because the primary session was created there.
    Only if MS1 is not able to process the request will it be sent to MS2 for processing.
    Thanks
    Jay SenSharma
    http://jaysensharma.wordpress.com (WebLogic Wonders Are Here)
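    The primary/secondary routing decision described above can be sketched as follows. Class and method names are hypothetical; the real logic lives in WebLogic's proxy plug-ins and replica-aware stubs:

```java
import java.util.Set;

// Hypothetical sketch of sticky routing with primary/secondary failover.
public class AffinityRouter {

    // Servers currently reachable in the cluster.
    private final Set<String> liveServers;

    public AffinityRouter(Set<String> liveServers) {
        this.liveServers = liveServers;
    }

    // With affinity on, always prefer the server holding the primary session,
    // regardless of load; fail over to the secondary replica only when the
    // primary is down.
    public String route(String primary, String secondary) {
        if (liveServers.contains(primary)) {
            return primary;
        }
        if (liveServers.contains(secondary)) {
            return secondary;
        }
        throw new IllegalStateException("no replica of the session is available");
    }
}
```

    This is also why one node can sit idle while another is hammered: load balancing only applies to session-less requests, never to requests that already belong to a session.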

  • WL Clustering - session affinity not working

    Hello,
    I have a cluster set up in WL 9.2 with two nodes. Our requirement is that a session, once established with Node1 in the cluster, should always stay connected to it; it should not move to Node2 during its lifetime.
    In other words, I want session affinity established.
    What parameter should I change to achieve this?
    WL 9.2 is on Solaris 10 and the web server used is iPlanet.

    Hi,
    Set the Default Load Algorithm to round-robin-affinity in the Admin Console, under Cluster > General tab. Try this and let me know what happens. Thank you.

  • OHS Routing to JVM responsible for creating HttpSession (Session Affinity)

    Hello,
    I am using OAS 10.1.3.1 and have a plan in place to create an HttpSession for a just-authenticated end user in an OC4J instance/JVM of my application's choosing. What I then would like to happen is for every subsequent HTTP request initiated by that end user, until such time as they log out or their session times out, that OHS route their requests to that single OC4J instance/JVM that initially created an HttpSession on their behalf. This is simply session affinity.
    What I am wondering is: do I simply get this behavior by default if I specify no select method (e.g. random, round robin, etc.) in mod_oc4j.conf? Are there combinations of select method (e.g. roundrobin:local) or other mod_oc4j directives that could result in different behavior than this simple session affinity I seek?
    Thanks,
    Doug

    Hello
    Within 10.1.3, OC4Js announce their mount point(s) in the notifications they send out, and mod_oc4j dynamically adjusts its routing table using this information. This eliminates the need for static mount point configuration and enables mod_oc4j to update its mount point configuration dynamically. So routing information is handed to OHS via OPMN.
    regards
    Michel
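    For reference, a minimal mod_oc4j.conf sketch. The directive names below are as I recall them from the 10.1.3 documentation, so treat them as assumptions and verify against your release; the mount target name is a placeholder. Requests that already carry a session are routed to the OC4J that created it regardless of the select method, which only picks a target for the first, session-less request:

```apache
<IfModule mod_oc4j.c>
    # roundrobin:local prefers OC4J instances co-located with this OHS for
    # the first request; subsequent requests follow session affinity.
    Oc4jSelectMethod roundrobin:local
    Oc4jMount /myapp/* home
</IfModule>
```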

  • Stateless with Session Affinity and set jbo.ampool.maxpoolsize = 1

    Hi,
    I would like to know more about "Stateless with Session Affinity".
    If I set
    jbo.ampool.maxpoolsize = 1
    jbo.ampool.minavailablesize = 1
    jbo.ampool.maxavailablesize = 1
    in application module configuration
    and have two browser clients at different computers.
    If, at one browser, I update some values and post the changed data to the database without committing (some updates use JDBC prepared statements through the database connection obtained with the following code snippet in the application module implementation):
    private Connection getCurrentConnection() throws SQLException {
        PreparedStatement st = getDBTransaction().createPreparedStatement("commit", 1);
        Connection conn = st.getConnection();
        st.close();
        return conn;
    }
    will the other web client see these posted but uncommitted changes, given the single application module and the single database connection (ignoring the PS_TXN connection)?
    If the answer is "no", can someone explain why BC4J can do this?

    Hi,
    It's not clear to me from your post what you are trying to test. Generally speaking, every HttpSession (~browser) is associated with a pooled AM that is associated with a single transaction context that is associated with a single "application" JDBC connection.
    HttpSession->AM->Txn->Connection
    The session affinity features of the BC4J framework maintain this logical dependency for you without necessarily having to tie up the physical resources at each layer.
    Now, if one session posts but does not commit then the other sessions will not "see" those changes (different AMx, Txns, Connections). Once a session does commit then those changes will be visible to other sessions. However, if those other sessions have already queried/cached the modified data then those sessions will not "see" it until their caches are refreshed. Please see the documentation for a description of BC4J caching and refreshing caches. The standard technique to refresh a VO cache is to invoke executeQuery.
    Hope this helps,
    JR

  • Set IP Affinity in a Server Cluster

    I have a SharePoint server with two web front ends.
    I've been informed that I need to set the two servers to IP affinity or 'session stickyness' in TMG to resolve an issue I am having with certain web parts. I think I have set this up but is there anyway I can test if it is working correctly?

    Hi,
    According to your description, it seems that you have grouped the two web servers into a farm, right? In these scenario, TMG treats all the Web servers in the farm as a single entity and use session affinity (cookie-based load balancing) or IP address affinity
    (source IP address-based load balancing) to implement the load balancing algorithm. In your case, you can choose source-IP based load balancing when specify server farm. You can publish a SharePoint site in a server farm by using the SharePoint Site Publishing
    Rule Wizard, then you can access the site and check the TMG real time logging to see if load balancing is working as you expected.
    More information:
    server farms
    Best regards,
    Susie

  • Apache plugin won't do sticky sessions

    Hi,
    I'm trying to use the Apache plugin, with Apache 1.3.26, front-ending 2 WL6.0SP2 servers, and I can't get session affinity to work.
    The plugin gets loaded properly, and "works" in the sense that it is load-balancing requests;
    unfortunately, sticky sessions don't.
    Here is the relevant config:
    <IfModule mod_weblogic.c>
    WebLogicCluster 10.2.255.35:7070,10.2.255.50:7070
    DebugConfigInfo ON
    CookieName SID
    Debug ALL
    </IfModule>
    And a dump of the plug-in debug output for a single session; you can see it's going to the 2 servers:
    ================New Request:
    Wed Sep 25 08:23:29 2002 Connected to 10.2.255.35:7070
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[X-WebLogic-Force-Cookie]=[true]
    Wed Sep 25 08:23:29 2002 Hdrs from WLS:[Set-Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to client:[Set-Cookie]=[SID=PZHUuWPR; path=/]
    ================New Request:
    Wed Sep 25 08:23:29 2002 Init: availcookie=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Connected to 10.2.255.35:7070
    Wed Sep 25 08:23:29 2002 Hdrs from clnt:[Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[X-WebLogic-Force-Cookie]=[true]
    ================New Request:
    Wed Sep 25 08:23:29 2002 Init: availcookie=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Connected to 10.2.255.50:7070
    Wed Sep 25 08:23:29 2002 Hdrs from clnt:[Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[X-WebLogic-Force-Cookie]=[true]
    ================New Request:
    Wed Sep 25 08:23:29 2002 Init: availcookie=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Connected to 10.2.255.50:7070
    Wed Sep 25 08:23:29 2002 Hdrs from clnt:[Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[Cookie]=[SID=PZHUuWPR; path=/]
    Wed Sep 25 08:23:29 2002 Hdrs to WLS:[X-WebLogic-Force-Cookie]=[true]
    The weird part is that I'm seeing this behavior with the iPlanet plugin as well.
    Has anyone had any luck setting this up?
    thanks,
    jm.

    You can use the console to edit the deployment descriptor for the webapp.
    Or you can modify the weblogic.xml and add the following:
    <session-descriptor>
      <session-param>
        <param-name>PersistentStoreType</param-name>
        <param-value>Replicated</param-value>
      </session-param>
    </session-descriptor>
    Some useful links:
    http://e-docs.bea.com/wls/docs61/webapp/weblogic_xml.html#1014231
    http://e-docs.bea.com/wls/docs61/webapp/sessions.html#100659
    http://e-docs.bea.com/wls/docs61/cluster/servlet.html
    Regards,
    Eric
    "Pancday Pac" <[email protected]> wrote in message
    news:[email protected]...
    Eric,
    I also met this problem.
    How do I "have replicated sessions set for the webapp"? All I found is the CookieName, which I didn't change and just left at the default value.
    "Eric Gross" <[email protected]> wrote in message
    news:[email protected]...
    It looks like you have properly setup clustering on WebLogic itself.
    You need to make sure you have replicated sessions set for the webapp in
    question.
    Regards,
    Eric

  • Session Problem With Cluster Environment

    Issue:
    To test our web application, we built a simple cluster environment containing 2 nodes, a web server, and an admin server. If we start the related servers and nodes in the following order, the web server cannot maintain the correct session ID and mirror it across the nodes: the first request goes to the first node (Node1) and the second request goes to the second (Node2), but Node2 does not have a valid session ID, so our deployed application gets a Session Time Out message.
    1.     Start WebLogic Application server
    2.     Started the nodes.
    3.     First start the Webserver
    However, when we change the start sequence as follows, the issue will disappear.
    1.     First start the WebServer
    2.     Start WebLogic Application server
    3.     Lastly started the nodes.
    Question:
    Why do different start sequences affect session affinity in the cluster environment?
    Thanks for any responses!

    Hmmm... Never thought about it.
    Somehow I always start the environment in this sequence:
    - Admin Server
    - WebServer
    - Cluster
    I assume you are using the Apache HTTP server with the WebLogic proxy plug-in as your WebServer.
    Here you can configure
    - WebLogicCluster - which is a static starting point for the server list, and
    - DynamicServerList - enables that WebLogic automatically adds new servers to the server list when they become part of the cluster.
    What I can think of is that the last option is set to OFF, in which case, when new servers are added to the cluster, the plug-in cannot proxy requests to them.
    This could explain why the start-up sequence matters: when you first start the WebServer and then the cluster nodes, the static list does its work.
    When, on the other hand, the cluster nodes are started before the WebServer, the starting point for the server list is not created, because somehow the WebServer is not receiving notifications.
    One remark is in order: if you did not alter the DynamicServerList property, the default is ON.
    Hope the above makes some sense to you. (http://download.oracle.com/docs/cd/E12840_01/wls/docs103/plugins/index.html)
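    For what it's worth, the two directives discussed above would look like this in the plug-in config (hosts and ports are placeholders):

```apache
<IfModule mod_weblogic.c>
    # Static starting point for the server list.
    WebLogicCluster node1:7001,node2:7001
    # ON (the default) lets the plug-in learn about servers that join the
    # cluster after it starts, so the start-up order stops mattering.
    DynamicServerList ON
</IfModule>
```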

  • Helps for WLP9.2 in a cluster:  deploy errors, propagation errors

    I wrote a document for our portal project's production support that details how to fix several recurring problems with WLP9.2 (GA) in a cluster. I'm sharing this with the community as a help.
    Email me if you'd like updates, have additional scenarios/fixes, or would like the Word doc version.
    Curt Smith, Atlanta WLP consultant, [email protected]; put HELP WLP in the subject please.
    1     WLP 9.2 Environment
    The cluster environment where these symptoms are frequently seen is described by the following:
    1.     Solaris 9 on 4-CPU SPARC boxes, 16 GB RAM. The VM is given -Xmx1034m.
    2.     Bea private patch: RSGT, fixed cluster deployments.
    3.     Bea private patch: BG74, fixed session affinity, changed the session tracking cookie name back to the standard name: JSESSIONID.
    4.     IBM UDB (DB2) using the BEA DB2 driver. FYI: I don't feel the problems described below have any relationship to the DB used for a portal.
    2     Common problems and their fixes
    2.1     Failed deploy (Install) of a new portal application via the cluster console.
    Problem symptoms:
    After clicking Finish (or) Commit, the console displays that there were errors. All errors require this procedure.
    Fix steps:
    1.     Shut down the whole cluster. Using BEA's shutdown script takes too long, or doesn't work if the Admin is down or hung.
    a.     Find the PID to kill: lsof -C | grep <listen port number>
    b.     kill <pid>
    c.     Run kill again and if the pid is still running then do: kill -9 <pid>
    2.     Admin server:
    a.     Remove file: <domain>/config/config.lok
    b.     Edit file: <domain>/config/config.xml
    c.     Remove all elements of: <app-deployment> ... </app-deployment>
    d.     Start the Admin server
    e.     Make sure it comes up in the running state. Use the console: Servers > admin > Control > Resume if needed.
    3.     Both Managed servers:
    a.     cd <domain>/config
    b.     rm -rf *
    This forces the re-download of the cleaned-up config.xml.
    c.     cd <domain>/servers/<managed_name>
    d.     rm -rf tmp cache stage data
    This cleans up stale or jammed sideways deploys. Be sure to delete all four directories: tmp, cache, stage and data.
    e.     Start each managed server.
    f.     Make sure the managed instance comes up in the running state and not admin. Go to server-<instance>-control-Resume to set the run state to running.
    You can now use the console to Install your applications.
    2.2     The portal throws framework / container exceptions on one managed instance.
    Problem symptoms:
    If you see exceptions from the classloader about a framework class not found, a serialization failure or error, etc. In general the symptom is that the container is not stable, not running correctly, or not making sense, or your application works on one managed instance but not on the other.
    Fix steps:
    1.     Shut down the problem managed instance. Using BEA's shutdown script takes too long, or doesn't work if the Admin is down or hung.
    a.     Find the PID to kill: lsof -C | grep <listen port number>
    b.     kill <pid>
    c.     Run kill again and if the pid is still running then do: kill -9 <pid>
    2.     Perform these clean up steps on one or both managed instances:
    a.     cd <domain>/config
    b.     rm -rf *
    This forces the re-download of the cleaned-up config.xml.
    c.     cd <domain>/servers/<managed_name>
    d.     rm -rf tmp cache stage data
    This cleans up stale or jammed sideways deploys. Be sure to delete all four directories: tmp, cache, stage and data.
    e.     Start each managed server.
    f.     Make sure the managed instance comes up in the running state and not admin. Go to server-<instance>-control-Resume to set the run state to running.
    3.     The libraries and applications should auto deploy as the managed instance comes up. Once the managed instance goes into the running state, or you Resume into the running state. Your application should be accessible. Sometimes it takes a few seconds after going into the running state for all applications to be instantiated.
    2.3     Content propagation fails on the commit step.
    Problem symptoms:
    In the log of the managed instance you specified in the propagation ant script, you'll see exceptions regarding not being able to create or instantiate a dynamic delegated role.
    There is an underlying bug / robustness issue with WLP9.2 (GA) where periodically you can't create delegated roles either with the PortalAdmin or via the propagation utility.
    Important issue:
    This procedure was supplied by BEA; it will remove your custom/created roles from the internal LDAP and the portal DB. This will leave your cluster in the new-installation state with just the default users and roles: weblogic and portaladmin. The implication is that you'll have to boot your cluster with console user weblogic / weblogic. You can then add back your secure console user/password, but you'll have to do this over and over as propagations fail. The observed failure rate is once every 2-3 weeks if you do propagations daily.
    Note:
    The following assumes that you left the default console user weblogic's password at the default: weblogic. The following procedure deletes the local LDAP but leaves the rows in the DB users table, including the SHA-1 hashed passwords. The procedure should still work if you changed the password for weblogic, but it probably won't work if you try to substitute your secure console user/pw, because there will be no delegated authorization roles mapping to your custom console user. You might experiment with this scenario.
    Fix steps:
    1.     Shut down the whole cluster. Using BEA's shutdown script takes too long, or doesn't work if the Admin is down or hung.
    a.     Find the PID to kill: lsof -C | grep <listen port number>
    b.     kill <pid>
    c.     Run kill again and if the pid is still running then do: kill -9 <pid>
    2.     Admin server:
    a.     Remove directory: <domain>/servers/AdminServer/data/ldap
    b.     Run this SQL script after you edit it for your schema:
    delete from yourschema.P13N_DELEGATED_HIERARCHY;
    delete from yourschema.P13N_ENTITLEMENT_POLICY;
    delete from yourschema.P13N_ENTITLEMENT_RESOURCE;
    delete from yourschema.P13N_ENTITLEMENT_ROLE;
    delete from yourschema.P13N_ENTITLEMENT_APPLICATION;
    commit;
    c.     Start the Admin server
    d.     Make sure it comes up in the running state. Use the console: Servers > admin > Control > Resume if needed.
    3.     Both Managed servers:
    a.     cd <domain>/servers/<managed_name>
    b.     rm -rf data
    This forces the re-download of the LDAP directory.
    c.     Start each managed server.
    d.     Make sure the managed instance comes up in the running state and not admin. Go to server-<instance>-control-Resume to set the run state to running.
    2.4     The enterprise portal DB fails and needs to be restored OR switch DB instances
    Restoring a portal DB or switching existing DBs are similar scenarios. The issue you'll face with WLP9.2, since it now uses a JDBC authenticator to authenticate and authorize the console/boot user, is that you first need to be able to connect to the DB before the admin and managed instances can boot. If you haven't properly encrypted the DB user's password in the <domain>/config/jdbc/*.xml files, then you'll not be able to boot the admin server, since you won't be able to create a JDBC connection to the DB. The boot messages are not clear as to what the failure is.
    You'll need to know an Admin-role user and password that is in the DB you want to connect to, to put into boot.properties and, on the managed instances, in their boot.properties or startManaged scripts. Don't forget that the managed instances have local credentials, which is new for 9.2. They are in the startManaged script in clear text or in a local boot.properties.
    Note:
    The passwords in the DB are SHA-1 hashed and there is no SHA-1 hash generator tool, so you can't change a password via SQL, but you can move the password from one DB to another. This is possible because the domain encryption salt is not used to generate the SHA-1 string; as it turns out, the SHA-1 string is compatible with all 9.2 cluster domains, i.e. the domain DES3 salt has nothing to do with password verification. The same password's SHA-1 string taken from different domains, or even the same domain but different users, will be different; this is just randomization put into the algorithm, yet every domain will be able to validate the given password against the DB's SHA-1 string. Because of this, I've not had any problem moving DB instances between clusters, especially if I've given up on security and use weblogic/weblogic as the console user and this user/pw is in every DB.
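    The behavior described above (the same password hashing to different stored strings, yet verifiable by any domain) is characteristic of salted SHA-1 where the salt is stored alongside the digest. A generic illustration, not WebLogic's actual storage format:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

// Generic salted-SHA-1 demo; the stored string carries its own salt, so no
// domain-specific secret is needed to verify it.
public class SaltedSha1Demo {

    private static byte[] digest(String password, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            md.update(password.getBytes(StandardCharsets.UTF_8));
            md.update(salt);
            return md.digest();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Stored form: base64(20-byte digest || salt). A fresh random salt makes
    // the stored string differ every time, even for the same password.
    public static String store(String password) {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        byte[] d = digest(password, salt);
        byte[] out = Arrays.copyOf(d, d.length + salt.length);
        System.arraycopy(salt, 0, out, d.length, salt.length);
        return Base64.getEncoder().encodeToString(out);
    }

    // Verification re-derives the digest from the embedded salt.
    public static boolean verify(String password, String stored) {
        byte[] decoded = Base64.getDecoder().decode(stored);
        byte[] salt = Arrays.copyOfRange(decoded, 20, decoded.length);
        return Arrays.equals(Arrays.copyOfRange(decoded, 0, 20), digest(password, salt));
    }
}
```

    This is why the stored string can be copied between DBs: anything that knows the layout can re-derive and compare the digest.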
    Steps:
    The assumption for restoring a DB is that the DB has been restored but it's an older version and doesn't have the console user/pw that is in boot.properties. At this point swapping a DB is the same as restoring an old version of the portal DB.
    1.     Edit the console user and password as clear text into: <domain>/servers/AdminServers/security/boot.properties
    This is where you may give in and use weblogic/weblogic.
    2.     Set the correct DB access password encryption in the jdbc/*.xml files.
    a.     cd <domain>/bin
    b.     . ./setDomainEnv.sh, or if you're on Windows just run: setDomainEnv.cmd
    c.     java weblogic.security.Encrypt <the_db_password>
    d.     Edit the returned string into every <domain>/config/jdbc/*.xml
    e.     Make sure the *.xml files point to the correct DB host, port, schema, DB name, DB user.
    3.     Start the Admin server. It should come up. If it doesn't, it has to be that it cannot create a connection to the DB, which depends on the *.xml files having the correct user and DES3-encrypted password.
    4.     Edit the new console user/password on the managed instance bin/startManaged script or the local boot.properties.
    5.     Start the managed instances.
    2.5     Install patches on a host that does not have internet access
    The short description is to run Smart Update on a host that does have internet access. Fetch the downloaded patch jar and xml out of the <domain>/utils/bsu/cache_dir directory. Manually apply the patch to your non-internet-accessible hosts.

    I wrote a document for our portal project's production support that details how to fix several recuring problems with WLP9.2 (GA) in cluster. I'm sharing this with the community as a help.
    Email me if you'd like updates or have additional scenarios / fixes or you'd like the word doc version.
    Curt Smith, Atlanta WLP consultant, [email protected], put HELP WLP as a part of the subject please.
    1     WLP 9.2 Environment
    The cluster environment where these symptoms are frequently seen is described by the following:
    1.     Solaris 9 on 4 cpu sparc boxes. 16Gb Ram. The VM is given ?Xmx1034m.
    2.     Bea private patch: RSGT, fixed cluster deployments.
    3.     Bea private patch: BG74, fixed session affinity, changed the session tracking cookie name back to the standard name: JSESSIONID.
    4.     IBM UDB (DB2) using the Bea DB2 driver. FYI: the problems described below I don?t feel have any relationship to the DB used for a portal.
    2     Common problems and their fixes
    2.1     Failed deploy (Install) of a new portal application via the cluster console.
    Problem symptoms:
    After clicking Finish (or) Commit the console displays that there where errors. All errors require this procedure.
    Fix steps:
    1.     Shut down the whole cluster. Using Bea?s shutdown script takes too long, or doesn?t work if the Admin is down or hung.
    a.     Find the PID to kill: lsof ?C | grep <listen port number>
    b.     kill <pid>
    c.     Run kill again and if the pid is still running then do: kill -9 <pid>
    2.     Admin server:
    a.     Remove file: <domain>/config/config.lok
    b.     Edit file: <domain>/config/config.xml
    c.     Remove all elements of: <app-deployment> ? </app-deployment>
    d.     Start the Admin server
    e.     Make sure it comes up in the running state. Use the console: servers ? admin ? control - resume if needed.
    3.     Both Managed servers:
    a.     cd <domain>/config
    b.     rm ?rf *
    This forces the re-down load of the cleaned up config.xml.
    c.     cd <domain>/servers/<managed_name>
    d.     rm ?rf tmp cache stage data
    This cleans up stale or jammed sideways deploys. Be sure to delete all three directorys: tmp, cache, stage and data.
    e.     Start each managed server.
    f.     Make sure the managed instance comes up in the running state and not admin. Go to server-<instance>-control-Resume to set the run state to running.
    You can now use the console to Install your applications.
    2.2     The portal throws framework / container exceptions on one managed instance.
    Problem symptoms:
    If you see exceptions from the classloader re a framework class not found, serialization failure or error etc. In general the symptom is that the container is not stable, running correctly, not making sense or your application works on one managed instance but not on the other.
    Fix steps:
    1.     Shut down the problem managed instance. Using Bea?s shutdown script takes too long, or doesn?t work if the Admin is down or hung.
    a.     Find the PID to kill: lsof ?C | grep <listen port number>
    b.     kill <pid>
    c.     Run kill again and if the pid is still running then do: kill -9 <pid>
    2.     Perform these clean up steps on one or both managed instances:
    a.     cd <domain>/config
    b.     rm ?rf *
    This forces the re-down load of the cleaned up config.xml.
    c.     cd <domain>/servers/<managed_name>
    d.     rm ?rf tmp cache stage data
    This cleans up stale or jammed sideways deploys. Be sure to delete all three directorys: tmp, cache, stage and data.
    e.     Start each managed server.
    f.     Make sure the managed instance comes up in the running state and not admin. Go to server-<instance>-control-Resume to set the run state to running.
    3.     The libraries and applications should auto deploy as the managed instance comes up. Once the managed instance goes into the running state, or you Resume into the running state. Your application should be accessible. Sometimes it takes a few seconds after going into the running state for all applications to be instantiated.
    2.3     Content propagation fails on the commit step.
    Problem symptoms:
    In the log of the managed instance you specified in the propagation ant script you'll see exceptions about not being able to create or instantiate a dynamic delegated role.
    There is an underlying bug / robustness issue in WLP 9.2 (GA) where you periodically can't create delegated roles, either with the PortalAdmin or via the propagation utility.
    Important issue:
    This procedure, supplied by BEA, removes your custom / created roles from the internal LDAP and the portal DB. It leaves your cluster in the fresh-installation state with just the default users and roles: weblogic and portaladmin. The implication is that you'll have to boot your cluster with console user weblogic / weblogic. You can then add back your secure console user/password, but you'll have to do this over and over as propagations fail. The observed failure rate is about once every 2-3 weeks if you do propagations daily.
    Note:
    The following assumes that you left the default console user weblogic's password as the default: weblogic. The following procedure deletes the local LDAP but leaves the rows in the DB users table, including the SHA-1 hashed passwords. It should still work if you changed the password for weblogic, but it probably won't work if you try to substitute your secure console user/pw, because there will be no delegated authorization roles mapping to your custom console user. You might experiment with this scenario.
    Fix steps:
    1.     Shut down the whole cluster. Using BEA's shutdown script takes too long, and doesn't work if the Admin server is down or hung.
    a.     Find the PID to kill: lsof -C | grep <listen port number>
    b.     kill <pid>
    c.     Run kill again; if the pid is still running, use: kill -9 <pid>
    2.     Admin server:
    a.     Remove directory: <domain>/servers/AdminServer/data/ldap
    b.     Run this SQL script after you edit it for your schema:
    delete from yourschema.P13N_DELEGATED_HIERARCHY;
    delete from yourschema.P13N_ENTITLEMENT_POLICY;
    delete from yourschema.P13N_ENTITLEMENT_RESOURCE;
    delete from yourschema.P13N_ENTITLEMENT_ROLE;
    delete from yourschema.P13N_ENTITLEMENT_APPLICATION;
    commit;
    c.     Start the Admin server
    d.     Make sure it comes up in the running state. Use the console: Servers > AdminServer > Control > Resume if needed.
    3.     Both Managed servers:
    a.     cd <domain>/servers/<managed_name>
    b.     rm -rf data
    This forces a re-download of the embedded LDAP directory.
    c.     Start each managed server.
    d.     Make sure the managed instance comes up in the running state and not admin. Go to Servers > <instance> > Control > Resume to set the run state to running.
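The kill-then-escalate shutdown in step 1 can also be sketched as a helper function. The five-second grace period is an assumption; pick whatever delay suits your JVMs:

```shell
#!/bin/sh
# Sketch of step 1: try a graceful kill first, escalate to -9 only if needed.

stop_by_pid() {
    pid=$1
    kill "$pid" 2>/dev/null || return 0   # process already gone
    # Give the JVM a short grace period to exit cleanly (5s is an assumption).
    for _ in 1 2 3 4 5; do
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null            # still alive: force it
}
```

Combined with the lsof lookup above, this replicates steps 1a-1c for every instance on a host.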
    2.4     The enterprise portal DB fails and needs to be restored OR switch DB instances
    Restoring a portal DB and switching existing DBs are similar scenarios. Because WLP 9.2 now uses a JDBC authenticator to authenticate and authorize the console / boot user, you must first be able to connect to the DB before the admin and managed instances can boot. If you haven't properly encrypted the DB user's password in the <domain>/config/jdbc/*.xml files, you will not be able to boot the Admin server, since it cannot create a JDBC connection to the DB. The boot messages are not clear about what the failure is.
    You'll need an Admin-role user and password that exist in the DB you want to connect to, to put into boot.properties on the Admin server and, on the managed instances, into their boot.properties or startManaged scripts. Don't forget that the managed instances keep local credentials, which is new for 9.2: they are in the startManaged script in clear text or in a local boot.properties.
    Note:
    The passwords in the DB are SHA-1 hashed and there is no SHA-1 hash generator tool, so you can't change a password via SQL, but you can move a password from one DB to another. This is possible because the domain encryption salt is not used to generate the SHA-1 string; as it turns out, the SHA-1 string is compatible with all 9.2 cluster domains, i.e. the domain DES3 salt has nothing to do with password verification. The SHA-1 string for the same password taken from different domains, or even from different users in the same domain, will differ (this is just randomization built into the algorithm), yet every domain can validate the given password against the DB's SHA-1 string. Because of this, I've had no problem moving DB instances between clusters, especially since I've given up on security and use weblogic/weblogic as the console user, with that user/password in every DB.
    Steps:
    The assumption when restoring a DB is that the DB has been restored but it's an older version and doesn't have the console user/pw that is in boot.properties. At that point, swapping in a different DB is the same as restoring an old version of the portal DB.
    1.     Edit the console user and password as clear text into: <domain>/servers/AdminServer/security/boot.properties
    This is where you may give in and use weblogic/weblogic.
    2.     Set the correct DB access password encryption in the jdbc/*.xml files.
    a.     cd <domain>/bin
    b.     . ./setDomainEnv.sh (on Windows, just run: setDomainEnv.cmd)
    c.     java weblogic.security.Encrypt <the_db_password>
    d.     Edit the returned string into every <domain>/config/jdbc/*.xml
    e.     Make sure the *.xml files point to the correct DB host, port, schema, DB name, and DB user.
    3.     Start the Admin server. It should come up. If it doesn't, the cause is a failure to create a connection to the DB, which depends on the *.xml files having the correct user and DES3-encrypted password.
    4.     Edit the new console user/password into each managed instance's bin/startManaged script or its local boot.properties.
    5.     Start the managed instances.
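Step 2d's edit can be scripted. This is a sketch under stated assumptions: it assumes the JDBC descriptors store the secret in a <password-encrypted> element (verify that against your own <domain>/config/jdbc/*.xml first), and the directory argument is hypothetical.

```shell
#!/bin/sh
# Sketch: splice the string returned by `java weblogic.security.Encrypt <pw>`
# into every JDBC descriptor. The <password-encrypted> element name is an
# assumption; check your *.xml files before running. The encrypted string is
# assumed to contain no sed-special characters (|, &, \).

patch_jdbc_password() {
    jdbc_dir=$1    # e.g. <domain>/config/jdbc (hypothetical)
    enc=$2         # the encrypted string from weblogic.security.Encrypt
    for f in "$jdbc_dir"/*.xml; do
        [ -f "$f" ] || continue
        sed -i.bak \
            "s|<password-encrypted>[^<]*</password-encrypted>|<password-encrypted>$enc</password-encrypted>|" \
            "$f"
    done
}
```

The .bak copies left behind by sed give you an easy rollback if the Admin server still refuses to boot.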
    2.5     Install patches on a host that does not have internet access
    The short description is to run Smart Update on a host that does have internet access. Fetch the downloaded patch jar and xml out of the <domain>/utils/bsu/cache_dir directory, then manually apply the patch on your hosts without internet access.
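Assuming the patch jar and its descriptor XML are the only files you need from cache_dir, staging them for the offline host might look like the sketch below. The patch id and every path are assumptions; substitute your own.

```shell
#!/bin/sh
# Sketch: stage Smart Update patch files for a host without internet access.
# Paths and the patch id are assumptions.

stage_patches() {
    src=$1    # cache_dir on the internet-connected host
    dst=$2    # cache_dir (or a transfer directory) for the offline host
    mkdir -p "$dst"
    cp "$src"/*.jar "$src"/*.xml "$dst"/
}

# On the offline host, apply the patch with the Smart Update CLI, e.g.
# (patch id ABCD is hypothetical):
#   cd <BEA_HOME>/utils/bsu
#   ./bsu.sh -install -patch_download_dir=<BEA_HOME>/utils/bsu/cache_dir \
#            -patchlist=ABCD -prod_dir=<BEA_HOME>/weblogic92
```

In practice you would copy the staged directory to the offline host with whatever transfer mechanism your network allows (scp, removable media, etc.).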

  • How to achieve BC4J stateful management in a web app?

    I have a web application developed with JSP, Struts as the view / controller layer and BC4J as the model layer.
    I am not using the complete ADF framework. Just plain struts, JSP and BC4J.
    What I want to do is to have one Struts action set a query condition on a view object and execute the query, then release the application module. In another Struts action I want to get the same application module back with the view object's query still set.
    Pseudo for action 1:
        AppModule am = null;
        try {
            am = (AppModule) Configuration.createRootApplicationModule(
                    "test.bc.AppModule", "AppModuleLocal");
            // ... get view object
            // ... build and perform the query on the view object
        } finally {
            Configuration.releaseRootApplicationModule(am, true);
        }
    Pseudo for action 2:
        AppModule am = null;
        try {
            am = (AppModule) Configuration.createRootApplicationModule(
                    "test.bc.AppModule", "AppModuleLocal");
            // ... get view object with the query still set
            // ... use the results
        } finally {
            Configuration.releaseRootApplicationModule(am, true);
        }
    I cannot make this work. If I put the AM from action 1 in the session object and retrieve it from the session in action 2, it works. But I need to release the app module in action 1, since I have no guarantee that the user will ever run action 2.
    I have read Steve Muench's paper "Understanding Application Module Pooling Concepts and Configuration Parameters".
    This document says:
    "ADF application modules provides the ability to snapshot and reactivate their pending state to XML (stored on the file system or in the database), and the ADF application module pooling mechanism leverages this capability to deliver a "managed state" option to web application developers that simplifies building applications like the example just given.
    As a performance optimization, when an instance of an AM is returned to the pool in "managed state" mode, the pool keeps track that the AM is referenced by that particular session. The AM instance is still in the pool and available for use, but it would prefer to be used by the same session that was using it last time because maintaining this so-called "session affinity" improves performance."
    I have set my BC4J configuration to:
    release mode = Stateful
    jbo.passivationstore = database
    jbo.server.internal_connection = my JDBC connection
    But the appmodule returned in action2 is not the same as the one returned in action1.
    I am using JDeveloper 9.0.5.2 and deploying to OC4J 9.0.4 standalone.
    I guess I have to use some other calls for retrieving / releasing the AM, or set some properties on it, but I have not found any documentation for this.
    Any tips on how to achieve this bc4j session state to work in a plain struts , bc4j environment is appreciated.

    The Configuration.createRootApplicationModule() and companion releaseRootApplicationModule() APIs are not for use in web applications. They are designed for use by simple command-line console programs.
    Using the BC4J application module pool programmatically in a web environment should be done like the JDeveloper 9.0.3/9.0.4 "Pooling" sample illustrates.
    In the JDev 9.0.3 or 9.0.4 version, see the sample project under ./BC4J/samples/Pooling
    Hope this helps.

  • Proxy error when running reports in Infoview

    We are running XI 3.1 sp4 and are getting a Proxy error on some reports when running in PDF mode. Any ideas?
    Here's the error:
    Proxy Error
    The proxy server received an invalid response from an upstream server.
    The proxy server could not handle the request GET /AnalyticalReporting/ViewAsPDF/Disposition Analysis Report.pdf.
    Reason: Error reading from remote server
    Apache Server at preprod.nalts.com Port 443

    Hi Sylvia,
    It seems that the reverse proxy you are using (Apache) could be having problems with the users' sessions. I have seen the same behaviour with other load balancers. If you have multiple WAS instances in the backend, make sure each user's session stays on the same WAS (session affinity).
    Regards,
    Julian

  • Exchange 2010 to Exchange 2013 Migration and Architect a resilient and high availability exchange setup

    Hi,
    I currently have a single Exchange 2010 server that has all the roles, supporting about 500 users. I plan to upgrade to 2013 and move to a four-server HA Exchange setup (a CAS array with 2 CAS servers and one DAG with 2 mailbox servers). My goal is to plan out the transition in steps with no downtime. Email is most critical for my company.
    Exchange 2010 is running SP3 on a Windows server, with a separate server for archiving. In the new setup, rather than having a separate server for archiving, I am just going to put the archives on a separate partition.
    Here is what I have planned so far.
    1. Build out four Servers. 2 CAS and 2 Mailbox Servers. Mailbox Servers have 4 partitions each. One for OS. Second for DB. Third for Logs and Fourth for Archives.
    2. Prepare AD for exchange 2013.
    3. Install the Exchange roles: CAS on two servers and Mailbox on two servers. Add a DAG. Someone suggested using an odd number of DAG members, i.e. 3 or 5. Is that a requirement?
    4. I am using a third party load balancer for CAS array instead of NLB so I will be setting up that.
    5. Do the post-install steps to ready the new CAS. While doing this, can I use the same parameters as assigned on Exchange 2010, e.g. the same webmail URL for Outlook Anywhere, OAB, etc.?
    6. Once this is done, I plan to move a few mailboxes as a test to the new mailbox servers / DAG.
    7. Test Outlook setups on the new servers, with inbound and outbound email tests.
    Once this is done, I can migrate over and point all my MX records to the new servers.
    Please let me know your thoughts and what I am missing. I'd like to solidify a flowchart of all the steps I need to perform before I start the migration.
    thank you for your help in advance

    Hi,
    okay, you can use 4 virtual servers. But there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtual! You could install 2 multi-role servers and, if the company grows, install another multi-role server, and so on. It's much simpler, better, and less expensive.
    The CAS array is only an Active Directory object, nothing more. The load balancer controls which CAS a user's session terminates on. You can read more at
    http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx Also, no session affinity is required.
    First, build the complete Exchange 2013 architecture. High availability for your data is a DAG and for your CAS you use a load balancer.
    On channel 9 there is many stuff from MEC:
    http://channel9.msdn.com/search?term=exchange+2013
    Migration:
    http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
    Additional information:
    http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
    Hope this helps :-)

  • Important conceptual question about Application Module, Maximum Pool Size

    Hello everyone,
    We have a critical question about the Application Module default settings (taking the DB connections from a DataSource)
    I know that on the Web it is generally suggested that each request must end with either a commit or a rollback when executing PL/SQL blocks "directly" on the DB, without the framework's BC/ViewObject/Entity services intervening.
    Now, for various reasons, we developed our applications on the assumption that each web session would reference exactly one DB session (opened by whichever instance was taken from the AM pool) for the whole duration of the session, so that the changes made by each web session to its DB session would never interfere with the changes made by other web sessions to other DB sessions.
    In other words, because of that assumption we often implemented a sort of "transaction" that opens and closes (with either commit or rollback) a DB session not within a single HTTP request, but across many HTTP requests.
    As a concrete example think of this scenario:
    1. The user presses the "Insert" button. An HTTP request is fired. The action listener runs and ends up inserting rows into a table via a PL/SQL block (not via the ViewObject API).
    2. No commit or rollback is issued yet after the PL/SQL block above.
    3. Finally the user presses a "Commit" or "Rollback" button, firing the call to the appropriate AM method.
    These three requests make up what I called a "transaction".
    From the documentation it's clear that there is no guarantee that the couple AM istance + DB session is the same during all the requests.
    This means that, during step 2, another user might reference the same "pending" AM / DB session for his own needs and somehow "steal" the work done via PL/SQL after step 1. (This happens because sessions taken from the pool are always rolled back by default.)
    Now my question is:
    Suppose we set the "Maximum Pool Size" parameter to a very large number (though still lower than the maximum number of concurrent users):
    Is there any guarantee that all the requests will be isolated in that case?
    I hope the problem is clear.
    Let me know if you want more details.

    Thanks for the answers.
    If I am right, then from all your answers about resource availability it follows that even if the framework always gives us back the same AM instance from the AM pool (by following the session-affinity criteria), there is no "connection affinity" with the connections from the DataSource. The "same AM instance" might take a new DB connection from the DataSource's connection pool if necessary. If that happens, we get the same problems as taking a new AM instance (i.e. not following session affinity) in the first place: each time a new connection is taken (either via a new AM instance or via the same AM instance plus a new DB connection), the corresponding DB session is rolled back by default, clearing any pending transactions we might have performed earlier with direct PL/SQL calls that bypass the AM services, so that the new HTTP request starts with a clean DB session.

  • WebSphere Custering and JSF Backing Beans

    Hello All,
    Can anyone enlighten me on exactly how backing bean state is maintained by JSF?
    In what scope does JSF store the backing beans? (session perhaps)
    Does it make sense to use JSF without stateful backing beans?
    What would happen if a client post did not find its corresponding backing bean on the application server calling it?
    Background:
    In a clustered WebSphere environment, a client post may be directed to any one of the machines in the cluster. If the client post gets directed to a machine other than the one that originally sent the client the response, the page will not find its corresponding backing beans.
    It is possible to serialize session beans in WebSphere and have each machine share its memory with all the boxes in the cluster. However, this practice is not permitted in some environments.
    Any thoughts? Thanks in advance.

    I am probably not an expert on the subject, but here are some thoughts:
    1. A backing bean can have application, session, request, or none scope. The fewer application- or session-scoped beans the better (from the server's perspective), but there are practical reasons why you would want to use them in many cases.
    2. JSF supports a client-side state saving mode, but keep in mind the overhead involved (serialization, de-serialization, bandwidth, ...).
    3. You probably need to read about WebSphere's load balancing capability. From what I know it is quite sophisticated. People typically use session affinity to force requests initiated by the same session to be served by the same app server (enforced by the network; hardware/software tools are available to do so), of course with a failover mechanism to ensure high availability.
    Hope that helps!

  • Load balancing Exchange Server 2013 in coexistence

    I am trying to load balance our Exchange 2013 CU7 environment with Citrix Netscaler, as the first step in migrating our users from 2010 SP3 to 2013 CU7. Every once in a while I experience a looped login when trying to log in to OWA: you enter your username and password and it loops right back to the login page, stopping you from ever logging in.
    This made me think it was something with session affinity on the load balancer; however, I thought that was no longer needed for 2013. So I'm wondering if, since I'm in coexistence with 2010, it is still required.
    I have no users on 2013 yet, so all my test accounts are on 2010.
    My environment:
    3x Exchange 2013 CU7 CAS/MBX combined
    3x Exchange 2010 CAS
    3x Exchange 2010 MBX
    2x Citrix Netscaler 10.5 VPX
    Does anyone know the proper way to configure this? Information on Netscaler for this would be great but it doesn't have to be related to Netscaler. I'm just looking for the proper session affinity values for Exchange 2013 with 2010 coexistence.

    Hello. Perhaps you have problems with the Exchange CAS.
    I suggest the following test plan.
    1. Check operation with the Exchange Server 2013 troubleshooting / test cmdlets:
    Get-webservicesvirtualdirectory
    Get-oabvirtualdirectory
    Get-owavirtualdirectory
    Get-ecpvirtualdirectory
    Get-ActiveSyncVirtualDirectory
    Get-AutodiscoverVirtualDirectory
    Test-ServiceHealth 
    Test-MapiConnectivity  
    Test-OutlookConnectivity
    Test-OutlookWebServices 
    Test-WebServicesConnectivity 
    Test-EcpConnectivity 
    Test-ActiveSyncConnectivity 
    Test-PowerShellConnectivity 
    2. Take each CAS out of load balancing in turn and, on a test machine, point the HOSTS file at that CAS. If every CAS answers correctly, then check the Netscaler.
    MCITP, MCSE. Regards, Oleg
