HTTP Replica

          Hi guys,
          I installed WLS 6.1 and configured it for clustering.
          I now have a single cluster with two managed servers.
          I want to test HTTP in-memory session replication. How do I test
          the HTTP replica?
          Please help me with how to proceed.
          Thanks,
          BG
          

I would try creating a WebApp that made use of session information (such as
          one that you logged in to), and then taking down the server that you were
          interacting with. If the in-memory session replication is working, you
          should still be logged in when your request fails over to the other node.
          Good luck!
          George Murnock
          BEA Systems, Inc.
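A minimal client-side sketch of that test (the servlet URL and the exact JSESSIONID layout here are assumptions, not from this thread): keep a cookie jar, poll the webapp through the proxy, and print the session id with its routing suffix stripped. Kill the primary server mid-run; if in-memory replication is working, the core id should not change.

```java
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpCookie;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;

// Sketch of the failover test from the client side. Assumptions: the probe
// URL, and that WebLogic appends JVM routing info to the session cookie
// after '!' (assumed layout: sessionid!primaryJvm!secondaryJvm), so the
// part before the first '!' identifies the session itself.
public class FailoverProbe {
    // Strip the routing suffix so ids can be compared across failover.
    static String coreSessionId(String jsessionid) {
        int bang = jsessionid.indexOf('!');
        return bang < 0 ? jsessionid : jsessionid.substring(0, bang);
    }

    public static void main(String[] args) throws Exception {
        CookieManager jar = new CookieManager();
        CookieHandler.setDefault(jar);  // HttpURLConnection now uses the jar
        String url = args.length > 0 ? args[0] : "http://proxyhost/replicaTest";  // assumed
        for (int i = 0; i < 20; i++) {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            c.getResponseCode();  // performs the GET, populating the cookie jar
            for (HttpCookie ck : jar.getCookieStore().get(URI.create(url))) {
                if (ck.getName().equalsIgnoreCase("JSESSIONID")) {
                    System.out.println(i + ": core id = " + coreSessionId(ck.getValue()));
                }
            }
            Thread.sleep(3000);
        }
    }
}
```

Run it, watch a few iterations, then force-kill the primary managed server; a changed core id (or a redirect back to the login page) means the session was lost rather than failed over.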
          "BG" <[email protected]> wrote in message news:[email protected]..
          >
          > Hi Guys,
          > I installed WLS 6.1 , I configured for Clustering.
          > Now I am having Single cluster with two managdserver.
          > Now I want to test the HTTP memory replication , how to I test
          > the Http Replica?
          >
          > Please help me how to proceed further...
          >
          > Thanks
          > BG
          >
          >
          

Similar Messages

  • Viewing files in Browser

    Dear all,
    We are running OAS 10.1.2 on Solaris 10.
    We have a location in OAS to store PDF documents. We are facing a strange issue opening the PDF documents through a web browser, in either IE or Firefox.
    For example, the URL
    http://replica.mov.cm:7778/medp3/faces/med/pdf/EN-1000426256.pdf
    gives a 404 when opened, while the same document with the capitalized extension .PDF opens fine:
    http://replica.mov.cm:7778/medp3/faces/med/pdf/EN-1000426256.PDF
    Both files exist on the server.
    Any idea?
    Kai
    Edited by: KaiS on Jul 13, 2010 10:14 AM

    It could be a case-sensitivity issue.
    To debug, what I would try is creating a folder under htdocs/ and putting a PDF file in there. The URL would be something like this:
    http://test.myhost:7777/USTEST/Install PHP5 on IAS.pdf
    so put the file in USTEST.
    I'm running a very similar version and this works on mine.
    Let me know if this makes sense, and whether the debug works when you try it.
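The CASE hunch can be checked mechanically: Solaris filesystems are case-sensitive, so EN-1000426256.pdf and EN-1000426256.PDF are two distinct files, and pairs like that can confuse case-insensitive links or handlers. A small sketch (the directory path is an assumption) that flags filename pairs differing only in case, so every affected document can be found in bulk:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: scan a directory and report names that collide when compared
// case-insensitively, i.e. likely extension-case mismatches like
// "x.pdf" vs "x.PDF" on a case-sensitive filesystem.
public class CaseMismatchScan {
    public static List<String> collisions(String[] names) {
        Map<String, String> seen = new HashMap<>();  // lowercased name -> original
        List<String> out = new ArrayList<>();
        for (String n : names) {
            String key = n.toLowerCase();
            if (seen.containsKey(key) && !seen.get(key).equals(n)) {
                out.add(seen.get(key) + " <-> " + n);  // differ only by case
            }
            seen.put(key, n);
        }
        return out;
    }

    public static void main(String[] args) {
        File dir = new File(args.length > 0 ? args[0] : ".");  // assumed document dir
        String[] names = dir.list();
        if (names != null) {
            for (String c : collisions(names)) System.out.println(c);
        }
    }
}
```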

  • How can I flag HTTP session attributes to not be replicated?

              WLS 5.1 SP8 - in-memory replication of a stateful servlet. Is
              there any way to flag data that has been, or is being, loaded into
              an HTTP session so that it is not replicated? It's a long story,
              but we have some data loaded into a hashtable (more specifically
              a Properties object) stored in our session. With the hashtable
              already loaded into the HTTP session, WebLogic does not detect
              when we add values to the table, and therefore does not replicate
              the changes. That is OK, since WLS is working as designed. To
              get around this we load the hashtable back into the session every
              time we add a value to it. Yup, I know that's ugly. Anyway,
              what I'm trying to find out is whether there is an attribute we
              can set to indicate not to replicate this or that data,
              something along the lines of an attribute on a per-hashtable-value
              basis. For example, I load value A into my hashtable, then I
              need to put my hashtable into the session to get it replicated.
              Next I load value B into the hashtable, and again, to get it
              replicated, I have to load the entire hashtable back into the
              session. The problem here is that the entire table gets
              replicated. Does anyone know if I can set an attribute on value
              A, when I'm adding value B, so as to not replicate value A when
              I reload the table? Of course our goal is to not store so much
              data in the session, but I'm trying to find a workaround until
              that is completed.
              Thanks,
              David
              

              Where can I find documentation on the details of how WebLogic
              decides what will be replicated in the HttpSession object (for
              example, that it only replicates attributes which are set or
              updated using setAttribute())?
              Prasad Peddada <[email protected]> wrote:
              >Robert,
              >
              >It is true that we replicate only when you call setAttribute in
              >the case of servlets.
              >
              >In the case of EJB it is slightly different. EJB sends diffs
              >across the wire. It doesn't replicate the entire state with
              >every request.
              >
              >-- Prasad
              >
              >Chris Palmer wrote:
              >
              >> I think Viresh was referring to modifying part of one attribute
              >> causing the whole of that attribute (i.e. the hashtable) to be
              >> replicated.
              >>
              >> Is it actually true though that the behaviour would be different
              >> in a stateful session EJB? I had assumed that the WHOLE EJB
              >> state would be replicated each time (i.e. after each
              >> invocation), without even the benefit of having a mechanism
              >> such as setAttribute() to flag the attributes that had
              >> changed...
              >>
              >> Chris
              >>
              >> Robert Patrick wrote:
              >>
              >>> Huh? Since when does it always replicate the entire
              >>> HttpSession? We were told that it uses a hook in the
              >>> setAttribute() call to determine which attributes have changed
              >>> and only replicates those attributes. I know this to be the
              >>> case because if I retrieve a previously stored attribute from
              >>> the HttpSession and modify it, my changes do not get
              >>> replicated unless I call setAttribute again...
              >>>
              >>> Robert
              >>>
              >>> Viresh Garg wrote:
              >>>
              >>>> Servlet sessions don't work on diffs, so no matter what you
              >>>> do, the entire Hashtable will be replicated. If you want the
              >>>> optimization of replicating only the diff (the stuff that has
              >>>> changed between two updates), consider using a stateful
              >>>> session bean and making the hashtable part of its
              >>>> conversational state.
              >>>>
              >>>> The solution that you suggested on your own is not ugly, as
              >>>> the same solution is used by many customers that I know of.
              >>>> This is particularly a problem for people who use JavaBeans
              >>>> and the setProperty directive in JSP. There also, in our
              >>>> auto-generated code, we put the JavaBean back in the HTTP
              >>>> session when a setter is called, to enforce replication.
              >>>>
              >>>> Keep in mind that whatever we do for replication, we want to
              >>>> support it using ONLY the standard servlet API; we don't want
              >>>> to introduce any WLS-specific API to achieve this, and so
              >>>> putting the value back in the session is the only way.
              >>>>
              >>>> Viresh Garg
              >>>> Principal Developer Relations Engineer
              >>>> BEA Systems
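The "put it back every time" workaround discussed above can be wrapped up so application code no longer has to remember the extra setAttribute call. This is only a sketch: the session is modeled by a plain Map so it runs outside a container; in a real webapp, rebind() would call session.setAttribute(name, this), and the session reference would need to be transient since the table itself must be serializable to replicate.

```java
import java.util.Hashtable;
import java.util.Map;

// Sketch of the discussed workaround: a Hashtable subclass that re-binds
// itself into the session on every mutation, so WebLogic's setAttribute
// hook sees each change. HttpSession is modeled here by a plain Map
// (a stand-in) so the sketch runs outside a container.
public class SelfRebindingTable extends Hashtable<String, Object> {
    private final Map<String, Object> session;   // stand-in for HttpSession
    private final String attributeName;
    int rebinds = 0;                             // visible for the usage example

    public SelfRebindingTable(Map<String, Object> session, String attributeName) {
        this.session = session;
        this.attributeName = attributeName;
    }

    private void rebind() {
        session.put(attributeName, this);  // real code: session.setAttribute(attributeName, this)
        rebinds++;
    }

    @Override
    public synchronized Object put(String key, Object value) {
        Object old = super.put(key, value);
        rebind();
        return old;
    }

    @Override
    public synchronized Object remove(Object key) {
        Object old = super.remove(key);
        rebind();
        return old;
    }
}
```

Note that each put still ships the whole table (as Viresh says, servlet sessions don't diff); the wrapper only removes the chance of forgetting the manual re-put.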
              

  • FATAL: replicated HTTP session specified but clustering not enabled for webapp: /BV

              Hi,
              I tried to start up WebLogic Server 6.1 using in-memory
              replication, and I got this error:
              <Error> <HTTP> <FATAL: replicated HTTP session specified but clustering not
              enabled for webapp: /bv>
              I set <param-name>PersistentStoreType</param-name><param-value>replicated</param-value>
              in weblogic.xml.
              And I specified in my console:
              Deployment -> Web Applications -> bv -> Target cluster -> bvcluster.
              I have a 2-node cluster environment on Solaris.
              Each node has 1 managed server joining this cluster.
              What am I missing? Please let me know.
              Thanks.
              hiromu
              

              Thank you for the reply.
              The target cluster for my web application is my cluster.
              When I checked the target servers as well, there was a server
              that is not a member of the cluster. I deleted that server name,
              and the error is gone.
              // hiromu
              >Interesting. You shouldn't get this if PersistentStoreType is
              >"replicated". I just tried this and it works as expected.
              >Did you redeploy the webapp after making the above change in
              >weblogic.xml?
              >--
              >Kumar
              

  • Replicated HTTP sessions: ClassCastException

    Has anyone else seen this with WLS 6.1 SP1 when trying to run in-memory
              replicated HTTP sessions?
              -peter
              <Oct 18, 2001 12:01:26 PM EDT> <Error> <HTTP>
              <[WebAppServletContext(7569280,v21,/v21)] Servlet failed with Exception
              java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
              Start server side stack trace:
              java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
              at weblogic.servlet.internal.session.SessionData.getContext(SessionData.java:270)
              at weblogic.servlet.internal.session.ReplicatedSessionData.becomeSecondary(ReplicatedSessionData.java:178)
              at weblogic.cluster.replication.WrappedRO.<init>(WrappedRO.java:34)
              at weblogic.cluster.replication.ReplicationManager$wroManager.create(ReplicationManager.java:352)
              at weblogic.cluster.replication.ReplicationManager.create(ReplicationManager.java:1073)
              at weblogic.cluster.replication.ReplicationManager_WLSkel.invoke(Unknown Source)
              at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:296)
              at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:265)
              at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              End server side stack trace
              at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
              at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:134)
              at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
              at $Proxy88.create(Unknown Source)
              at weblogic.cluster.replication.ReplicationManager.trySecondary(ReplicationManager.java:870)
              at weblogic.cluster.replication.ReplicationManager.createSecondary(ReplicationManager.java:825)
              at weblogic.cluster.replication.ReplicationManager.register(ReplicationManager.java:393)
              at weblogic.servlet.internal.session.ReplicatedSessionData.<init>(ReplicatedSessionData.java:119)
              at weblogic.servlet.internal.session.ReplicatedSessionContext.getNewSession(ReplicatedSessionContext.java:193)
              at weblogic.servlet.internal.ServletRequestImpl.getNewSession(ServletRequestImpl.java:1948)
              at weblogic.servlet.internal.ServletRequestImpl.getSession(ServletRequestImpl.java:1729)
              at jsp_servlet.__login._jspService(__login.java)
              at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2456)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              >
              

    When do you see this? While shutting down one of the servers?
              Make sure your clustered servers are uniform, i.e. the webapp
              you have deployed is deployed on all servers and
              PersistentStoreType is set to "replicated" on all servers.
              --Vinod.
              

  • E-mails with attachments, mostly photos, are replicating themselves in my inbox, 100 or more at a time. What is causing this and how do I fix it?

    It is an intermittent but regular-enough-to-be-really-annoying occurrence: when someone sends an e-mail with a photo or picture document attached, it replicates in my inbox, sometimes 100 or more copies at a time. I delete all of them, empty the trash, and also go to the Yahoo web version of my e-mail, delete the offending e-mail from there, and empty the trash. That sometimes worked. Now that process isn't working, and this particular e-mail just filled my inbox with over 100 copies of itself, even after the other steps were taken. I can't get any incoming mail.
    This log jam of replicated e-mails prevents incoming and new mail from reaching the inbox.
    Needless to say, I am not too happy about it. What is causing it, and how do I fix it, other than switching to Outlook as my e-mail client? :0) This, coupled with the constant "not responding" each time I try to send an e-mail or delete items from the "sent" or inbox folders, is really driving me nuts. We have a crappy internet connection, and that is likely part of it, but the "not responding" is not a new thing with Thunderbird, either.
    Thanks in advance for any assistance you can provide. I dislike the web version of Yahoo and don't want to have to switch to that, as it just doesn't meet my e-mail needs. I would like to get Thunderbird fixed.

    I have not seen your error in either my IE or my Chrome. I do note that you are using a "strict" Document Type Declaration.
    You might try a transitional DTD, which should be more forgiving than the strict one:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    Other than that, since I don't see the error either, I don't see anything in particular that would cause the menu to drop vertically instead of horizontally.
    Beth

  • B1BC BR02 SP01 and 2007B PL09: still not replicating the item and BP

    Paulo,
    1. Since you are using 2007B, could you confirm that you have PL08, which is the supported patch for that version? I AM ON PL09
    2. Are you using B1BC SP01? YES
    3. In the screenshot 'Replicaçao Fonte Castro5.JPG' I see that the B1 Company name contains spaces. Note that the manual is clear about not using spaces in that field. OK, I WILL MAKE THE CORRECTION
    4. Do you have any error message in the MessageLog? NO
    5. Did you enter the data in the head office database and then restore that database as the branches? YES, AS INSTRUCTED BY CARLOS HERNANDEZ
    6. Since you have 2007B, did you remove the B1i that ships with SAP B1 and then reinstall the add-on and the B1i that comes with B1BC? I REMOVED B1BC VERSION 2.7 FROM THE DATABASES AND THEN INSTALLED VERSION 2.9. IF BY B1i YOU MEAN THE USER, I DELETED THE USER; WHEN I INSTALLED, IT CREATED B1i1. I REBUILT THE DATABASES FROM SCRATCH AND RAN THE INSTALLATION, AND THE B1i USER WAS CREATED AUTOMATICALLY.
    I WOULD ALSO LIKE TO KNOW HOW TO CONFIGURE B1I INSIDE SAP. THE TABLE IN THE 2005B TRAINING SHOWS LOCALHOST - 8080 - SA - SA PASSWORD. IS IT STILL LIKE THAT, OR HAS ANYTHING CHANGED FOR SAP 2007B WITH B1BC SP01?
    Even after making all the corrections and using Copy Express on all the databases, the item and BP are still not replicating. What else should I do to get them replicated?

    Paulo,
    I tried with the branches empty (no data), and it did not replicate. I believe the problem is related to the MessageLog address: when I open http://localhost:8080/Messagelog, it does not show whether there was an error or a success. When I installed on my own machine, that path did show the events. Where is that table created, and at what point? I believe this is the issue.
    Which databases do I need to have or create in order to use B1BC? I know the landscape database is created during installation; is any other needed? The old version asked for the creation of an Event database.
    I am now reinstalling B1BC, to see whether something went wrong during installation.
    Jaqueline Martins

  • Open Directory Replica Over VPN

    Hey All,
    I've got two servers: one in the office running as our Open Directory Master, and one that I've placed in a remote data centre as our new web/e-mail box, which I'm hoping to make an OD replica before I move those services out to it.
    After a lot of blood/sweat/tears/coffee I was able to get it connected back to the office over site-to-site VPN with our Linksys RV082 in the office and using raccoon on the remote Tiger Server with the help of s2svpnadmin.
    I've got DNS configured on both and can ping back and forth, resolve back and forth, the VPN tunnel is running quite beautifully as if they were right beside each other on the same switch.
    The remote is on the 192.168.4.x subnet and our internal is on the local 192.168.1.x subnet. Really works well.
    But...
    When I try to make the remote box a replica of our OD Master, things seem to go well, but shortly after the initial replication finishes, the remote box reverts to standalone mode and I can't log in to it using any directory users. (The local OD Master stays humming along just fine.)
    I've found this post that mentions a very similar situation:
    http://discussions.apple.com/thread.jspa?threadID=1173913&tstart=221
    Basically it appears that the Directory Service doesn't like to talk over Tiger Server's own VPN implementation.
    I tried replicating the issue on a remote client's Tiger xServe connecting to their SonicWall and I was able to replicate over to them just fine and it sticks, so it makes me think it's definitely something about the VPN service on Tiger Server.
    This remote box is in a data centre so I want to avoid having to buy and install a dedicated hardware device to solve this problem if I can (not even sure if they'd let me). It seems silly that they wouldn't have tested this configuration as I have to expect that it would be a common one.
    Any help or insight you could offer would be invaluable! Thanks!

    Hey Leif,
    The remote box has a public IP and then I've created an internal duplicate running at 192.168.4.1 with itself as the 'router/gateway'. This seems to work.
    I can ping 'to' the remote box from the office side over the VPN tunnel by pinging '192.168.4.1'.
    And from the remote box I can ping back to the office but only after I add a route:
    route add -net 192.168.1.0/24 192.168.4.1
    ...on the remote machine.
    After that I can get traffic back and forth. It seems to work perfectly.
    I can connect using just about any service I want over the VPN, ex. AFP and things work as if the box was in the office, it's nice.
    My OD Master on the local side is also my Primary DNS Server, the remote box doubles as a Secondary DNS Slave.
    I use views in my DNS to handle both private and public traffic (we're a small business so getting the most out of our gear is important), I can ask both boxes about themselves in both public and private views and they respond correctly.
    Box A: (In The Office)
    (Internal)
    boxa.domain.com has address 192.168.1.170
    170.1.168.192.in-addr.arpa domain name pointer boxa.domain.com.
    (External)
    boxa.domain.com has address 215.25.xx.xx
    xx.xx.25.215.in-addr.arpa domain name pointer boxa.domain.com.
    (Testing Localhost)
    localhost has address 127.0.0.1
    1.0.0.127.in-addr.arpa domain name pointer localhost.
    Box B: (In The Datacentre)
    (Internal)
    boxb.domain.com has address 192.168.4.1
    1.4.168.192.in-addr.arpa domain name pointer boxb.domain.com.
    (External)
    boxb.domain.com has address 216.46.xx.xx
    xx.xx.46.216.in-addr.arpa domain name pointer boxb.domain.com.
    (Testing Localhost)
    localhost has address 127.0.0.1
    1.0.0.127.in-addr.arpa domain name pointer localhost.
    I'm convinced it's something on the remote box as I can get the replication to work reliably when trying another box whose VPN is handled by a dedicated device. I've seen posts like this one:
    http://blog.aaronmarks.com/?p=31
    That seem to discuss similar issues.

  • In-memory replication of http session is not working in BEA7 cluster

              Hi everyone,
              I have 3 managed servers in a BEA 7.0 SP4 cluster. Client
              requests are sent through an Apache web server. I have given the
              cluster address as the URL in Apache's httpd.conf, which sends
              client requests for dynamic pages such as JSPs and servlets to
              the WebLogic cluster.
              Load balancing is working fine. I verified this from the log
              files of all 3 servers: all 3 are receiving different client
              requests, so load balancing works.
              Now I want to achieve failover. I do not think I should use the
              proxy plug-in for this; I believe the cluster itself will handle
              failover, provided I make the HTTP session memory-replicated.
              I updated weblogic.xml with the following entry:
              <session-descriptor>
              <session-param>
              <param-name>PersistentStoreType</param-name>
              <param-value>replicated</param-value>
              </session-param>
              </session-descriptor>
              I guess this is sufficient to make the HTTP session
              cluster-aware.
              But when I shut down server1, the user connected to server1 is
              kicked out of the session and lands on the login page through
              server2 or server3, which are running fine.
              Could anyone help me make the HTTP session cluster-aware? Does
              this mean I have to go for the WLS proxy (HttpClusterServlet) to
              achieve failover for the HTTP session?
              BTW, for your info, I am using setAttribute() and getAttribute()
              while manipulating the session.
              Thanks in advance.
              

              Hi Ryan,
              Thanks for ur valuable input.
              I can see failover working.
              But, I can not continue with the same session in my application.
              I printed session Ids before and after failover, I found both are different.
              I guess session replication is a responsibility of weblogic/apache plugin.
              If not please let me know which all settings I should do to make failover working?
              Thanks again.
              Plad
              "ryan upton" <ryanjupton at learningvoyage dot com> wrote:
              >Plad,
              >
              >Are you trying to gracefully shut down the server? If you are then the
              >problem that you say you can't identify is simply the server's default
              >behavior which is to wait for all non-replicated sessions to be dropped
              >or
              >timed out before killing the process. Try forcing the shutdown: kill
              >-9 the
              >PID or CTRL-C if you started the server from the command line. You can
              >also
              >check the ``Ignore Sessions During Shutdown" checkbox under the server's
              >control tab in the admin console, this should allow you to shut down
              >gracefully without waiting for session timeout. BTW your sequence is
              >off
              >in #5 below, the replication doesn't occur upon failure, the replication
              >has
              >already happened once you created the session object on the first server,
              >I
              >think maybe you're confusing replication with failover.
              >
              >~RU
              >
              >"Plad" <[email protected]> wrote in message
              >news:[email protected]...
              >>
              >> Hi,
              >> I have 2 managed servers in a cluster.
              >>
              >> 1. I have got a DNS name configured which maps to these 2 managed server's
              >IP
              >> addresses.
              >> 2. I can browse my site using this DNS name.
              >> In HTTPD.conf I have :
              >>
              >> ServerName dev.a.b.net
              >>
              >> <IfModule mod_weblogic.c>
              >> WebLogicCluster 10.1.38.232:7023,10.1.34.51:7023
              >> MatchExpression *.*
              >> </IfModule>
              >>
              >> LoadModule weblogic_module modules/mod_wl_20.so
              >>
              >> 3. I have added the session descriptor in weblogic.xml, and also
              >> enabled the proxy plugin in the weblogic console.
              >>
              >> 4. I tested accessing my application using the DNS URL after shutting
              >> down each managed server alternately. I can access the application.
              >>
              >> 5. Now, problem comes when I access a managed server1 , keeping server2
              >down.
              >> I am able to access my application.
              >> Now, I start the server2.
              >> (Here I am supposing that replication should occur)
              >> Then I am shutting down server1.
              >> But, this time the server log shows me following:
              >>
              >>
              >> 9:58:51 AM GMT+05:30 NOTICE Web application(s) chlist still have
              >non-replicated
              >> sessions after 2 minutes of initiating SUSPEND. Waiting for non-replicated
              >sessions
              >> to finish.
              >> 10:00:51 AM GMT+05:30 NOTICE Web application(s) chlist still have
              >non-replicated
              >> sessions after 4 minutes of initiating SUSPEND. Waiting for non-replicated
              >sessions
              >> to finish.
              >>
              >> I am unable to make out where the problem is.
              >> Can it be a problem of license? Is there a special cluster license
              >> for weblogic8?
              >>
              >> Hoping to get replies.
              >> Thanx.
              >> Plad
              >>
              >> "ryan upton" <ryanjupton at learningvoyage dot com> wrote:
              >> >See my reply to your first post, but I've also added a few comments
              >here.
              >> >
              >> >"jyothi" <[email protected]> wrote in message
              >> >news:[email protected]...
              >> >>
              >> >> I guess only someone from the BEA support team can answer both your
              >> >> question and mine. As far as I know, we do not need to do any setup
              >> >> on the Apache side regarding the cluster other than mentioning the
              >> >> cluster address as the URL when contacting WLS from Apache.
              >> >>
              >> >> I hope someone from BEA will help us. I do not think that we need
              >> >> to go for the WLS proxy plug-in using HttpClusterServlet to get
              >> >> session replication. I strongly feel that the cluster itself should
              >> >> be able to manage the fail-over of http sessions provided we put the
              >> >> "PersistentStoreType" entry in weblogic.xml for session replication.
              >> >>
              >> >
              >> >The cluster does handle the management of Sessions. The clustered
              >> >applications still create the Session objects and the cluster manages
              >> >them
              >> >as per your deployment descriptor settings (replicated, JDBC, File)
              >however
              >> >the proxy has to be aware of which server the client has an affinity
              >> >for
              >> >(only with replicated sessions) and it does that by reading a cookie
              >> >passed
              >> >back from the server that handled the initial request and created
              >the
              >> >primary session object. The proxy has a list of both the primary
              >and
              >> >secondary server locations from this cookie that it can use to failover
              >> >the
              >> >request if the primary server fails. Clusters _DO NOT_ fail over, nor
              >> >do they load balance; that's the job of your proxy, whether you're
              >> >using the HTTPClusterServlet, the WLS plug-in, or a more sophisticated
              >> >hardware load balancer such as F5's BIG-IP.
              >> >
              >> >> jyothi
              >> >>
              >> >
              >> >~RU
              >> >
              >> >
              >>
              >
              >
              
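The "PersistentStoreType" entry this thread keeps referring to lives in the web application's weblogic.xml. A minimal sketch of that descriptor for in-memory replication, assuming the element names from the WLS 6.x-8.x weblogic.xml DTD (the rest of the deployment descriptor is omitted):

```xml
<weblogic-web-app>
  <session-descriptor>
    <session-param>
      <param-name>PersistentStoreType</param-name>
      <!-- "replicated" enables in-memory session replication across the cluster -->
      <param-value>replicated</param-value>
    </session-param>
  </session-descriptor>
</weblogic-web-app>
```

With this in place the session is copied to a secondary server, but as Ryan notes above, failing the request over to that secondary is still the proxy's job, not the cluster's.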

  • The changes made in the WAD web template is not replicating on web browser

    Hi,
    We are using BI 7.0 WAD. If I create a web template and run it, it works fine.
    But if I change anything in the existing web template and run it again, it gives the previous result; the changes are not reflected.
    The changes made in the WAD web template are not replicating to the web browser result.
    I went to transaction SMICM, chose "Goto" from the top menu, then HTTP Server Cache -> Invalidate -> Global in System. That did not solve it either.
    thanks

    Clear your browser cache also and see if the changes are visible.....
    Arun

  • Questions on InitialContext and replica-aware stub caching

    Hi All,
    We have a cluster deployed with some stateless EJB session beans. Currently we only cache the InitialContext object in the client code, and I have several questions:
    1. In the current case, if we call lookup() to get a replica-aware stub, which server returns the stub object: the same server we got the InitialContext from, or is it load-balanced to a different server every time we call the lookup method?
    2. Should we just cache the stub? Is it thread-safe? If it is, how does the stub handle concurrent requests from client threads: in parallel or in sequence?
    One more question: when we call new InitialContext(), it takes a long time to return a timeout exception if the servers are not reachable. How can we set a timeout for this case?

    809364 wrote:
    You can set the timeout value programmatically by using
    weblogic.jndi.Environment.setRequestTimeout().
    Refer: http://docs.oracle.com/cd/E12839_01/apirefs.1111/e13941/weblogic/jndi/Environment.html
    or
    set the REQUEST_TIMEOUT in the weblogic.jndi.WLContext
    Refer: http://docs.oracle.com/cd/E11035_01/wls100/javadocs/weblogic/jndi/WLContext.html
    Hi, I tried setting the parameters before, but it only works for stub lookup and ejb call timeout, not for the creation of InitialContext. And any idea for my 2nd question?
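For reference, a minimal sketch of the standard JNDI Hashtable form of this setup. The factory class name and the "weblogic.jndi.requestTimeout" key (the string behind WLContext.REQUEST_TIMEOUT) are taken from the WLContext javadoc linked above; the t3 URL is a placeholder. Note that, as the poster found, this bounds lookups and calls, not the initial connection itself:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class JndiEnv {
    // Builds the JNDI environment for a clustered lookup. The provider URL is a
    // placeholder; "weblogic.jndi.requestTimeout" is the property key behind
    // weblogic.jndi.WLContext.REQUEST_TIMEOUT.
    static Hashtable<String, Object> buildEnv() {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7001");
        env.put("weblogic.jndi.requestTimeout", 5000L); // milliseconds
        return env;
    }

    public static void main(String[] args) {
        // new InitialContext(buildEnv()) would contact the cluster; here we only
        // show the resulting environment.
        System.out.println(buildEnv());
    }
}
```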

  • Unable to recreate replica as I cannot delete the xml files for the old VM on Hyper-v Cluster

    My setup
    There are 2 sites each running a Hyper-v 2012-R2 cluster comprising of 5 nodes each.
    All the machines are in a HA state and stored on the shared clustered resource.
    Initially we had replication working, replicating all the servers from Site A to Site B.
    There were some replication issues which required me to stop replication and redo it; in doing so I first removed replication in Failover Cluster Manager on Site A and did the same on Site B.
    On one of the servers I had to remove the VM, although the VM config and disks still remain on the Site B cluster.
    I was trying to recreate the replica, but it always fails with the following error:
    Hyper-V failed to enable replication for virtual machine xxxxxxxx: operation aborted (0x80004004).....
    I followed an article that suggested stopping the Hyper-V Virtual Machine Management service and then deleting the file.
    When I tried to stop the service it would restart automatically, and I was unable to delete the file. Setting the service to disabled kept it stopped, but I was still unable to delete the files. I also tried FileAssassin, which can help delete locked files, but it could not delete them either while the service was running or stopped, nor in its additional mode where it deletes the file on the next reboot.
    The article https://social.technet.microsoft.com/Forums/windowsserver/en-US/4c4f2535-81ee-4a21-a920-b5632de2be37/hyperv-old-vm-folders-cannot-be-deleted?forum=winserverhyperv mentions stopping another service, the Hyper-V Image Management service, but I am unable to find it on a 2012 R2 server.
    I have also tried putting the additional nodes in a paused, drain-roles state, leaving only one node active, but it still will not let me delete the VM config files; without removing those files I cannot re-initiate the replication.
    Any assistance would be greatly appreciated.
    Any assistance would be greatly appreciated.

    Can you storage-migrate any other VMs or resources off of the CSV that contains the XML file in question? If so, you may be able to take that CSV offline, remove it from available storage, re-add it as available storage, and add it back to CSV volumes. Maybe that will clear your lock.

  • Partner functions missing in the replicated order

    Hi Experts,
    I have created an order in ECC which replicated to CRM with some errors: the partner functions are missing. Could you help me solve this problem? In the links below you can see a detailed analysis of the error.
    1)http://s657.photobucket.com/albums/uu298/vallabhaneni/?action=view&current=orderecc.jpg
    2)http://s657.photobucket.com/albums/uu298/vallabhaneni/?action=view&current=orderincrm.jpg
    Thanks & Regards
    V.Srinivas

    Hi V. Srinivas,
    For order replication from the ECC system to the CRM system, the configuration of both systems must match.
    Make sure your partner function determination in ECC and CRM is maintained the same way; otherwise, check the determination of the sub-partner functions (Ship-to Party, Bill-to Party, ...) in ECC and CRM.
    The main problem in your order is that after replication from the ECC system, the sub-partner functions are not getting determined.
    Check the partner determination procedure in CRM for the sub-partners and try to determine them from your main partner function, e.g. Sold-to Party.
    Hope it solves your query.
    Revert back for further queries.
    Thanks and Regards,
    RK.

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single node of Coherence with a replicated cache. But when I try to add an object to it I get the exception below. However, I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I could be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--
    Replicated caching scheme.
    -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    Edited by: user1945969 on Jun 5, 2010 4:16 PM

    By default it should have used FIXED as the unit-calculator, but from the trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> in your cache config for the replicated cache?
    Or try inserting an object (both key and value) which implements Binary.
    Check the unit-calculator part on this link
    http://wiki.tangosol.com/display/COH35UG/local-scheme
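One other thing worth noting: the stack trace shows the extend proxy deserializing the request with plain Java serialization (ExternalizableHelper.readSerializable -> ObjectInputStream) and failing with ClassNotFoundException, which usually means the value class's jar is missing from the proxy/cache server's classpath. A minimal sketch of the round-trip the proxy performs; CachedValue here is a hypothetical stand-in for com.test.abc.pkg.RRSCachedObject:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCheck {
    // Hypothetical value class standing in for com.test.abc.pkg.RRSCachedObject.
    public static class CachedValue implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String data;
        public CachedValue(String data) { this.data = data; }
    }

    // Round-trips the value through Java serialization, as the extend proxy does
    // when converting the request Binary back into an object. readObject() is
    // the step that throws ClassNotFoundException when the class is not on the
    // deserializing JVM's classpath.
    public static CachedValue roundTrip(CachedValue v) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(v);
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (CachedValue) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(new CachedValue("hello")).data);
    }
}
```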

  • Calling a method on all replicas of a bean in one shot

              Hi,
              Is there any property by which one can make a replica-aware stub direct
              a method call to all instances of the bean in a cluster? The requirement
              is to update data cached in all instances of a bean in the cluster by
              invoking a method on each instance of the bean.
              Regards,
              Sreeja
              

              You can try to think about the issue in another way.
              Hope this link can help you:
              http://www.weblogic.com/docs51/classdocs/API_jndi.html#exactlyOnce
              If you do want to do it your way, try using a JMS topic: one server can
              publish an update message to the topic and all the other servers
              subscribe to it.
              Good luck.
              Tao Zhang
              "Sreeja" <[email protected]> wrote:
              >
              >Hi,
              >Is there any properties by which one can make a replica aware stub direct a method call to all
              >instances of the bean in a cluster? The requirement is to update data cached in all instances
              >of a bean in the cluster by invoking a method on all instances of the bean.
              >Regards,
              >Sreeja
              
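Tao's JMS-topic suggestion amounts to a publish/subscribe broadcast: one member publishes the cache-update message and every member, as a subscriber, applies it locally. A minimal in-process sketch of that pattern (plain Java standing in for a real JMS topic and the bean instances):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class CacheBroadcast {
    // In-process stand-in for a JMS Topic: every subscriber sees every message.
    public static class Topic {
        private final List<Consumer<String>> subscribers = new ArrayList<>();
        public void subscribe(Consumer<String> s) { subscribers.add(s); }
        public void publish(String msg) { subscribers.forEach(s -> s.accept(msg)); }
    }

    // Each "bean instance" keeps a local cache and refreshes it when a
    // message arrives on the topic.
    public static class BeanInstance {
        public String cached = "old";
        public BeanInstance(Topic t) { t.subscribe(msg -> cached = msg); }
    }

    public static void main(String[] args) {
        Topic updates = new Topic();
        List<BeanInstance> cluster = List.of(new BeanInstance(updates),
                new BeanInstance(updates), new BeanInstance(updates));
        updates.publish("new-value");                        // one publisher...
        cluster.forEach(b -> System.out.println(b.cached));  // ...all replicas updated
    }
}
```

In a real cluster the Topic would be a JMS destination each server subscribes to at startup, so the broadcast reaches every JVM rather than just objects in one process.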
