WebLogic 11g Fail-over Cluster

Hi,
I'm using WebLogic Server 11g (10.3.6). I have installed ATG and Commerce Reference Store on the same machine as WebLogic (Endeca has a separate server). In addition I have an Oracle DB server and an Apache server.
I did the following things:
*I configured one physical machine (WM1) with a WebLogic domain. On the other physical machine (WM2) I installed WebLogic.
*I configured ATG and Commerce Reference Store on WM1 (using cim.sh).
*I configured the Endeca app for WM1.
*I am using WebLogic for the production environment. I created 3 managed servers according to cim.sh: production, publishing and staging.
I want to create a fail-over cluster with WM1 and WM2.
Is it now possible to create a fail-over cluster?
Please give me instructions, suggestions or a guide to configure a cluster for this environment.
Thanks
Nish.

Try looking at the cluster log information; you should see an event that describes the error that causes the cluster resource to fail.
Regards, Samir Farhat, Infrastructure and Virtualization Consultant || Virtualization, Cloud, Azure. Follow and ask here: https://buildwindows.wordpress.com
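
For reference, a minimal WLST (online) sketch of the kind of cluster configuration being asked about, assuming the admin server runs on WM1 and Node Manager is available on both machines. The admin URL, credentials, cluster, server and machine names below are placeholders, not values from this thread; the ATG side (session replication in each application's weblogic.xml, and pointing the Apache plug-in at both servers) would still have to be configured separately.

# Hypothetical WLST sketch: create a cluster, add a second managed server on WM2,
# and put both servers into the cluster.
connect('weblogic', 'welcome1', 't3://wm1.example.com:7001')
edit()
startEdit()

cd('/')
cmo.createCluster('production_cluster')

# New managed server on the second machine (WM2 must be defined as a Machine
# with Node Manager running, otherwise remote start will not work)
cmo.createServer('production2')
cd('/Servers/production2')
cmo.setListenAddress('wm2.example.com')
cmo.setListenPort(7003)
cmo.setMachine(getMBean('/Machines/WM2'))
cmo.setCluster(getMBean('/Clusters/production_cluster'))

# Add the existing managed server (on WM1) to the same cluster
cd('/Servers/production')
cmo.setCluster(getMBean('/Clusters/production_cluster'))

save()
activate()
disconnect()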

Similar Messages

  • Which role do I need DFS or File server on fail over cluster server 2012 R2?

    What I want to achieve is to share all my user data files in a central location and have them highly available all the time, whether it's a general share or folder redirection data. BUT I'm a bit confused: I have a fail-over cluster set up
    on Server 2012, and now I would like to add DFS as a role, but then we have another role called File Server that virtually does the same thing as DFS, meaning it creates a namespace share that can be accessed even if one of the nodes goes down. My thinking is
    that DFS does the replication between two physical locations, while a fail-over cluster works slightly differently, and with File Server it pretty much does the same thing except for replicating data from one drive to another. Now what do you suggest I do, or
    did I get the concept wrong like a noob?

    DFS and Failover Clustering for file shares provide a similar end result for file access, but they are significantly different implementations.
    Clustering provides high availability to files by presenting shared access to a set of files served from a cluster.  With 2012 R2 Microsoft added the ability to create a Scale-Out File Server that even allows all nodes of the cluster to serve access to
    the files for a higher level of performance and other great things.  Bottom line with Failover Clusters for files is that there is a single copy of the file presented from the cluster.
    DFS, on the other hand, provides high availability to files by keeping multiple copies of the file, making a copy in two or more locations and presenting a namespace that allows access to the file through any of the network paths.  DFS works very
    well for files that are primarily read-only.  When you get into a situation where there is a lot of updating of the shared files, DFS is not a very good solution.  There are ways to implement DFS for read/write files, but it generally requires a
    good knowledge of how the files are used and how you want to manage them.
    The key to answering your question comes in your first sentence: "I want to share all my user data files in a central location and to be highly available all the time".  My initial reaction to this is that central location means Failover Cluster
    - there is only a single copy of the file.  However, "all the time" can be compromised by network failures to the central site.  Remote sites would not have access if they can't access the central site.  DFS provides the ability to
    have copies remotely, but then if you allow updating at multiple sites, you have to manage the merging of the changes, among other things.
    . : | : . : | : . tim

  • What hardware is required to setup Fail over cluster using windows 2003 enterprise edition.

    I want to set up a fail-over cluster...I have already installed an HP 350 G6 server in my environment. Now I want to know which hardware I may require to set up a failover cluster for a stateful application, and secondly, can my existing server be utilized?

    AN Update:
    The Oracle Universal Installer shows the following in the screen before the error appears:
    Starting Oracle Universal Installer...
    No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
    Preparing to launch Oracle Universal Installer from D:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\OraInstall2011-03-02_04-25-26PM. Please wait ... Oracle Universal Installer, Version 10.1.0.6.0 Production
    Copyright (C) 1999, 2007, Oracle. All rights reserved.
    ...............................................................Val: 0
    Val: 0
    Val: 0
    Val: 2
    Val: 0
    Val: 0
    Val: 0
    Val: 2
    Val: 0
    Val: 0
    Val: 0
    Val: 0
    Val: 0
    Val: 0
    Val: 2
    Val: 0
    Val: 0
    Val: 0
    Val: 0
    Val: 2
    Val: 0
    Val: 0
    path: D:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\OraInstall2011-03-02_04-25-26PM\jre\bin;.;D:\WINDOWS\system32;D:\WINDOWS;D:\StageR12\startCD\Disk1\rapidwiz\unzip\NT;D:\MVS\VC\bin;D:\cygwin\bin;D:\WINDOWS\system32;D:\WINDOWS;D:\WINDOWS\System32\Wbem
    toload is D:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\OraInstall2011-03-02_04-25-26PM\WindowsGPortQueries.dll
    100% Done.
    Copying files in progress (Wed Mar 02 16:25:59 IST 2011)
    .................................................Val: 0
    . 79% Done.
    Copy successful
    Setup in progress (Wed Mar 02 16:26:05 IST 2011)
    .....Oracle JAAS [Wed Mar 02 16:26:28 IST 2011]: exception: 9
    opmnctl: opmn started
    Please help me.
    Thanks and regards,
    Adm

  • Is my installation of SQL Server Fail Over cluster correct?

    I made a 2-node SQL Server 2012 fail-over cluster but had some problems during installation, so I wanted to know if the steps I performed below are correct.
    Hardware
    Node1 192.168.1.10
    Node2 192.168.1.11
    Added following entries in DNS
    cluster.domain.local 192.168.1.12 (for Windows Cluster)
    msdtc.domain.local 192.168.1.13 (for MSDTC)
    sql.domain.local 192.168.1.14 (for SQL Server Cluster)
    Cluster Storage
    Disk1 (for Quorum)
    Disk2 (for MSDTC)
    Disk3 (for SQL Server)
    Now comes the installation. I am performing all these steps as DOMAIN ADMIN.
    1. First I installed clustering role on both nodes
    2. Then I ran fail over validation wizard on Node1 adding both nodes which went fine (there were some warnings)
    3. Then I made a Windows Cluster on Node1 using these two nodes. I gave the name and IP to this cluster which I wrote above i.e. cluster.domain.local 192.168.1.12
    4. The cluster was created and both nodes are UP.
    Now I want to ask a question here. Is it best practice to perform the above operation using DOMAIN ADMIN? Or if I use a standard domain user account with local admin rights, will it work? If not, then exactly what rights are required to perform this operation?
    5. Then I installed "Application Server" role on both Node1 and Node2 and also added "Distributed Transaction" feature
    6. Then I right clicked on Windows Cluster I created and added a new role/feature which is "DTC"
    7. I gave it the same name which I wrote above i.e. msdtc.domain.local 192.168.1.13
    8. MSDTC was created, but when it tried to bring its service up, it threw an error. Upon investigation it turned out the Windows cluster cluster.domain.local didn't have proper rights to create some objects in AD. I didn't know what rights to give, so I gave it full
    permission, and after that, when I created MSDTC again, the service came up fine.
    So I want to know what rights cluster.domain.local requires to create MSDTC.
    Am I doing good so far?

    Hello,
    >>Then I made a Windows Cluster on Node1 using these two nodes. I gave the name and IP to this cluster which I wrote above i.e. cluster.domain.local 192.168.1.10
    I suppose this IP was a physical node IP; the Windows cluster IP was 192.168.1.12, and I suppose you must have given that IP as the Windows cluster IP. .10 and .11 are the physical nodes in the cluster, but .12 is the cluster IP. Correct me if I am wrong.
    Did you do a failover and failback to check whether the cluster is configured correctly or not? If not, please do it.
    >>Then I ran fail over validation wizard on Node1 adding both nodes which went fine (there were some warnings)
    Please remove the warnings as well; they might cause issues. I'm not sure it matters every time, but make sure the cluster validation is free of errors and warnings.
    >>Now I want to ask a question here. Is it best practice to perform the above operation using DOMAIN ADMIN?
    You can do it with a domain admin account, as this is required to create the Cluster Name Object (CNO) in the domain, and a local account might not have that right, so I would say it's OK.
    >>I gave it the same name which I wrote above i.e. msdtc.domain.local
    192.168.1.11
    Again, this IP is the node 2 IP; how can you give it to MSDTC? Use the link below for reference:
    http://blogs.msdn.com/b/cindygross/archive/2009/02/22/how-to-configure-dtc-for-sql-server-in-a-windows-2008-cluster.aspx

  • How to add a cloud machine as a node to existing windows fail over cluster having on-premise node in Windows server 2008 R2

    Hi All,
    We have a Windows fail-over cluster with one Windows machine on the local network as one of its nodes.
    I want to add a virtual cloud machine available on Microsoft Azure as another node to this existing cluster.
    Please suggest how to do this.
    Thanking all in advance,
    Raghvendra

    Before you even start working on the SQL side, you will need to create a Windows Server 2008 R2 cluster with no shared storage.  You can actually test that in-house.  Create a VM running 2008 R2 and cluster it with your physical (from your description,
    I am assuming physical) 2008 R2 machine. Create it with a file share witness for quorum. Then configure your environment to see that it works as expected.
    Once you know how to configure the cluster between physical and VM with a file share witness, build it out to Azure.  The location of the FSW gets to be an interesting choice.  To have a FSW in Azure means that you will need another VM in Azure to
    host the file share, meaning you have two quorum votes in Azure and one in-house.  Or, you could create a file share witness on an in-house system, giving you two quorum votes in-house and one in Azure.
    In the FSW in Azure scenario, if you have a loss of the in-house server, automatic failover occurs because two quorum votes exist in Azure.  With FSW in-house, depending on the loss you have in-house, you might have to force quorum to get the Azure
    single-node cluster to run.  Loss of access to Azure reverses those scenarios.  Neither one is optimal, but it does provide some level of recoverability.
    . : | : . : | : . tim

  • Backup & Restore Fail-over Cluster

    I am asking for the best practice for backing up and restoring a SQL fail-over cluster with an Active-Active solution.

    Hi Sir,
    Here is an article regarding backing up and recovering the cluster configuration:
    http://blogs.msdn.com/b/clustering/archive/2008/01/20/7176982.aspx
    Best Regards,
    Elton Ji

  • Weblogic not failing over correctly

    Hi,
    I've got two WebLogic servers in a cluster receiving requests from a
    WebLogic proxy server. To demonstrate and test failover I've written
    three servlets: Login, Logout, and Test2. They perform session
    tracking, i.e. Login creates a session, Test2 tests it and Logout kills
    the session. The failover is very inconsistent at best. Shutting down
    one of the servers in the cluster causes the other one to fail over OK,
    but turning it back on and shutting down the second one doesn't work.
    Also the Test2 servlet doesn't see the session when it is loaded for the
    first time. Reloading causes it to work properly. Why?
    Could you look through these weblogic.properties files and see if they
    are configured correctly? One is from the cluster servers and the other
    is from the proxy server.
    These are in the weblogic root directory. I don't have any properties
    files in the cluster-wide or server-specific directories. I am not
    using a shared file system.
    I am using WebLogic 5.1 SP1 on NT and JDK 1.2.2.
    Thank you
    Timur M.
    [weblogic.properties]
    [weblogic.properties]

    The mistakes I found so far are the following.
    You have to register HttpClusterServlet to proxy all the servlet/JSP requests to the WebLogic
    cluster.
    ** PROXY PROPERTIES **
    weblogic.httpd.register.cluster=weblogic.servlet.internal.HttpClusterServlet
    weblogic.httpd.initArgs.cluster=defaultServers=jer:7001|tmaltaric:7001
    weblogic.httpd.defaultServlet=cluster
    ** WEBLOGIC SERVER PROPERTIES **
    weblogic.httpd.register.Login=Login
    weblogic.httpd.register.Logout=Logout
    blah blah blah
    Hope this helps.
    - Prasad

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail from that node and restart to another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.
    whenever you dismount disks from one node and mount them on another;
    So (if both conditions are true) you might be interested in this story,
    then please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends putting OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for DATA, REDO, FRA etc. of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    The first consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc.);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups per node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should get the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances);
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work, since ASM has parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups:
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and other diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    Also we found in the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, instances are single-node here, and not RAC type).
    That's all.
    Did anyone run into similar problems?
    We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redo log files also be in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore a Full backup plus Archived Redo Logs plus Online Redo Logs (otherwise you will stop at the latest Archived).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at your will; also you might create additional scripts in case of additional resources you might need (Oracle Agents, backup agents, file systems, monitoring tools, etc.)
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution
    - then they told us to use RAC One Node, which actually has some better features; in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in case of a running transaction), but we already have a few fail-over clusters, we cannot change them all
    So we plan to move OCR and voting disks onto block devices (we think that the other solution, which needs a Shared File System, will take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but it is discouraged (and also doesn't work with OUI or ASMCA).
    Anyway that's the simplest approach; this is a non-Prod cluster, we'll start here and if everything is fine, after a while we'll do it also on the Prod ones.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • Multiple types of database and fail over clustering

    Hi,
    I have a few questions here.
    1) Can I have 2 types of databases (e.g. OLTP and OLAP) running at the same time on the same machine?
    2) Can I implement a cross fail-over cluster in this situation? Meaning I have 2 machines with OLAP and OLTP database instances installed on them (replicas of each other), the 1st machine running OLTP and the 2nd running OLAP. In the situation where one of the machines fails, the passive instance on the other machine takes over (back to the situation in question 1).
    Thanks
    Regards
    Lai Ling

    Dear All,
    My problem is solved by disabling antivirus.
    thanks for the support
    Sunil
    SUNIL PATEL SYSTEM ADMINISTRATOR

  • Two witnesses in a SQL Server fail-over group

    Is it possible to have two witnesses in a SQL Server Always on Availability Group Fail Over Cluster? Our goal is to have redundant witnesses in an Azure availability set.
    Thanks,
    Mike

    AlwaysOn uses Windows Failover Clustering for quorum.  See, e.g., Understanding Quorum Configurations in a Failover Cluster.
    You can do this, but with Dynamic Quorum it's probably not helpful.  If you lose your witness vote, the cluster will adjust the quorum requirements.
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Http cluster servlet not failing over when no answer received from server

    I am using WebLogic 5.1.0 SP9. I have a WebLogic server proxying all requests to
    a WebLogic cluster using the HttpClusterServlet.
    When I kill the WebLogic process servicing my request, I see the next request
    get failed over to the secondary server and all my session information has been
    replicated. In short I see the behavior I expect.
    However, when I either disconnect the primary server from the network or just
    switch this server off, I just get a message back
    to the browser - "unable to connect to servers".
    I don't really understand why the behaviour should be different. I would expect
    both to fail over in the same manner. Does the cluster servlet only handle TCP
    reset failures?
    Has anybody else experienced this or have any ideas?
    Thanks
    r.troon

    I think I might have found the answer......
    The AD objects for the clusters had been moved from the Computers OU into a newly created OU. I'm suspecting that the cluster node computer objects didn't have perms to the cluster object within that OU and that was causing the issue. I know I've seen cluster
    object issues before when moving to a new OU.
    All has started working again for the moment so I now just need to investigate what permissions I need on the new OU so that I can move the cluster object in.

  • Is there a way to config WLS to fail over from a primary RAC cluster to a DR RAC cluster?

    Here's the situation:
    We have two Oracle RAC clusters, one in a primary site, and the other in a DR site
    Although they run active/active using some sort of replication (Oracle Streams? not sure), we are being asked to use only the one currently being used as the primary to prevent latency & conflict issues
    We are using this only for read-only queries.
    We are not concerned with XA
    We're using WebLogic 10.3.5 with MultiDatasources, using the Oracle Thin driver (non-XA for this use case) for instances
    I know how to set up MultiDatasources for an individual RAC cluster, and I have been doing that for years.
    Question:
    Is there a way to configure MultiDatasources (mDS) in WebLogic to allow for automatic failover between the two clusters, or does the app have to be coded to failover from an mDS that's not working to one that's working (with preference to a currently labelled "primary" site).
    Note:
    We still want to have load balancing across the current "primary" cluster's members
    Is there a "best practice" here?

    Hi Steve,
    There are 2 ways to connect WLS to an Oracle RAC:
    1. Use the Oracle RAC service URL, which contains the details of all the RAC nodes and their respective IP addresses and DNS names.
    2. Connect to the primary cluster as you are currently doing and use an MDS to load-balance/fail over between multiple nodes in the primary RAC (if applicable).
        In case of a primary RAC node failure and a switch to the DR RAC nodes, use WLST scripts to change the connection URL and restart the application to remove any old connections.
        Such DB fail-over tests can be conducted in a test/reference environment to set up the required log monitoring and the subsequent steps to measure the timelines.
    Thanks,
    Souvik.
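    As a rough sketch of the WLST approach mentioned in point 2 (all names, credentials and the URL below are placeholders, not values from this thread; with a MultiDatasource each member data source would need the same change):

    # Hypothetical WLST (online) script: repoint a data source at the DR RAC and restart the app.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()
    # Navigate to the driver parameters of the member data source and change its URL
    cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCDriverParams/myDS')
    cmo.setUrl('jdbc:oracle:thin:@dr-scan.example.com:1521/MYSERVICE')
    save()
    activate()
    # Bounce the application so stale connections to the old primary are dropped
    stopApplication('myApp')
    startApplication('myApp')
    disconnect()

    In practice a script like this would be driven by whatever monitoring detects the primary outage, as Souvik suggests testing in a reference environment first.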

  • Weblogic Admin server fail over

    Hi,
    Please let me know if there is official documentation from Oracle for admin server fail-over for versions 8.x, 9.x & 10.x?

    I am not sure there is such a thing as WebLogic Admin Server failover.
    For managed server failover please read:
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/cluster/failover.html

  • Fail over is not happening in Weblogic JSP Server

    Hi..
    We have 6 WebLogic instances running as application servers (EJB) and 4 WebLogic
    instances running as web servers (JSP). We have configured one cluster for the EJB
    servers and one cluster for the JSP servers. On the front end we are using four Apache
    servers to proxy requests to the WebLogic JSP cluster. In my httpd.conf file I
    have configured the WebLogic cluster. I can see the requests are going to
    all the servers and believe the cluster is working fine in terms of load balancing
    (round-robin). The clients are accessing the servers using CSS (Cisco Load Balancer).
    But when we test fail-over in the cluster, we are facing problems. Let me
    explain the scenarios of the fail-over test:
    1. The load was generated by the Load Generator.
    2. When the load is there, we shut down one Apache server; even though there were
    some failed transactions, the servers immediately become stable. So fail-over is
    happening at this stage.
    3. When I shut down one EJB instance, again after some failed transactions, the
    transactions become stable.
    4. But when I shut down one JSP instance, the transactions immediately fail and
    are not able to fail over to another JSP server, and the number of failed transactions
    increases.
    So I guess there is some problem in the proxy plug-in configuration, such that
    when I shut down one JSP server, requests are still being sent to that JSP server
    by the Apache proxy plug-in.
    I have read various queries posted in the newsgroups and found some information
    about configuring session and cookie information in the weblogic.xml file. Also
    I'm not sure what configurations need to be done in the weblogic.xml
    and httpd.conf files. Kindly help me resolve the problem. I would appreciate
    your response.
    ===============================================================
    My httpd.conf file plug-in configuration:
    ###WebLogic Proxy Directives. If proxying to a WebLogic Cluster see WebLogic Documentation.
    <IfModule mod_weblogic.c>
    WebLogicCluster X.X.X.X1:7001,X.X.X.X2:7001,X.X.X.X3:7001,X.X.X.X4:7001
    MatchExpression *.jsp
    </IfModule>
    <Location /apollo>
    SetHandler weblogic-handler
    DynamicServerList ON
    HungServerRecoverSecs 600
    ConnectTimeoutSecs 40
    ConnectRetrySecs 2
    </Location>
    ==============================================================
    Thanks in advance,
    Siva.

    Hi,
    I can see that bug 13703600 already got fixed in 12.1.2, but if you still have the same problem please raise a ticket with Oracle support.
    Regards,
    Kal

  • Help : Cluster Fail over Test - Could not establish a connection

    Hi All,
    I'm trying to do a cluster fail-over test with two WebLogic 8.1 SP2 instances in a cluster.
    During that testing, I'm restarting the instance which is handling my request, to make sure the session is replicated smoothly to the other instance, so that I can continue accessing my application without any interruption. But when I restart the instance, I'm getting the following exception:
              Error 500--Internal Server Error
              java.rmi.ConnectException: Could not establish a connection with 8909815174098071019S:dappsn03:[8201,8201,-1,-1,8201,-1,-1,0,0]:dappsn03-04:TNL:tnl1_81dappsn03, java.rmi.ConnectException: Destination unreachable; nested exception is:
                   java.net.ConnectException: Connection refused; No available router to destination
                   at weblogic.rjvm.RJVMImpl.getOutputStream(RJVMImpl.java:316)
                   at weblogic.rjvm.RJVMImpl.getRequestStream(RJVMImpl.java:488)
                   at weblogic.rjvm.RJVMImpl.getOutboundRequest(RJVMImpl.java:584)
                   at weblogic.rmi.internal.BasicRemoteRef.getOutboundRequest(BasicRemoteRef.java:91)
                   at weblogic.rmi.internal.activation.ActivatableRemoteRef.invoke(ActivatableRemoteRef.java:69)
                   at com.sns.pfk.ejb.PfkSessionBean_mz6mqm_EOImpl_812_WLStub.getPortalRecord(Unknown Source)
                   at com.sns.pfk.servlet.PfkMainServlet.getInfofromSB(Unknown Source)
                   at com.sns.pfk.servlet.PfkMainServlet.doActionDisplay(Unknown Source)
                   at com.sns.pfk.servlet.PfkMainServlet.doGet(Unknown Source)
                   at com.sns.pfk.servlet.PfkMainServlet.doPost(Unknown Source)
                   at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
                   at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
                   at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:971)
                   at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:402)
                   at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:305)
                   at weblogic.servlet.internal.RequestDispatcherImpl.include(RequestDispatcherImpl.java:607)
                   at weblogic.servlet.internal.RequestDispatcherImpl.include(RequestDispatcherImpl.java:400)
                   at com.sns.ana.ui.servlet.AuthorisationBaseServlet.service(AuthorisationBaseServlet.java:109)
                   at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
                   at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:971)
                   at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:402)
                   at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:305)
                   at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6350)
                   at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
                   at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
                   at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3635)
                   at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2585)
                   at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    Buddies, has anyone hit this issue before? Please shed some light to help escape this hiccup.
    With Regards
    -SHAN

    Hi,
    Thanks to those who spent time reading this thread. This problem was due to some missing entries in weblogic-ejb.xml. It got fixed with support from BEA.
    With Regards
    -SHAN
