Failover is not working in clustering

We installed the infrastructure on one system and added two instances, app1.mycompany.com and app2.mycompany.com, to it.
For load balancing we are using Web Cache.
We configured origin servers, site definitions, and site-to-server mappings.
The two instances show up in the cluster; we can see them in the health monitor under the Up/Down parameter of the Web Cache administrator console.
We deployed the same EAR to both instances.
But when I bring one instance down, say app1.mycompany.com, the health monitor does not show DOWN for host app1.mycompany.com, and the same applies when it comes back UP.
It does not show the change immediately when I am testing failover.
Is Web Cache load balancing round-robin based?
When I bring one of the instances down, session replication does not happen properly; sometimes a "session expired" message appears.
When both instances are up and a user accesses the application, all requests go to one instance, and if that instance goes down, "session expired" appears.
I think failover is not working in the cluster.
I checked the replication properties and added the <distributable/> tag on both instances.
On the Web Cache console page, what does session binding do? I have not configured anything for it.

Why are you using Web Cache?
Web Cache will certainly work, but its more common role is to act as a simple load balancer in front of HTTP servers, not OC4J instances.
What I'd do is simplify your setup to verify you have the servers configured correctly.
That means using the Oracle HTTP Server, which will be part of your cluster, as the common routing point. OHS and mod_oc4j are session-state aware and know about all the OC4J instances. If an OC4J instance dies for some reason, mod_oc4j knows to which other OC4J instance(s) the request can be routed to pick up the replicated session state.
Once you have verified that failover is working on the back end, you can configure another OHS instance and position Web Cache in front of them to act as a request router and failover handler for when an OHS instance is inactive.
The Enterprise Deployment Guide offers some guidance on typical architectures and is well worth a read.
cheers
-steve-
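
For session state to survive an instance failure, the web module must be marked <distributable/> in web.xml and every object placed in the HttpSession must be serializable; otherwise replication quietly fails and the user gets "session expired" after a failover. A minimal sketch of the servlet side (class and attribute names are illustrative, not taken from the original application):

    // web.xml for this module should also declare <distributable/> so the container replicates the session.
    import java.io.IOException;
    import java.io.Serializable;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Everything stored in a replicated session must implement Serializable.
    class CartState implements Serializable {
        private static final long serialVersionUID = 1L;
        int itemCount;
    }

    public class CartServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            HttpSession session = req.getSession(true);
            CartState state = (CartState) session.getAttribute("cartState");
            if (state == null) {
                state = new CartState();
            }
            state.itemCount++;
            // Call setAttribute again after each change so the container knows to replicate the update.
            session.setAttribute("cartState", state);
            resp.getWriter().println("Items: " + state.itemCount + " (session " + session.getId() + ")");
        }
    }

If the application stores anything non-serializable in the session, replication to the second instance cannot work no matter how Web Cache or OHS routes the requests.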

Similar Messages

  • Sun Access Manager 2005Q1 session failover is not working

    Hi all,
    I am using Sun Access Manager 2005Q1, Message Queue 2005Q1, Sun Directory Server 5.2, BerkeleyDB 4.2.52, and a Radware hardware load balancer with sticky sessions.
    I have configured Message Queue and BerkeleyDB and both are running without any error.
    I am following http://docs.sun.com/source/817-7644/ch5_scenarios.html#wp41008 for session failover.
    Simple failover is working fine, but session failover is not working.
    Has anybody done session failover with Sun Access Manager 2005Q1? I have been trying to resolve this issue for the last two months.
    Please help; it is urgent.

    It works fine in 2005Q4 after applying patch 120954, if I am not mistaken. But 2005Q4 and 2005Q1 are probably different in terms of session failover (site configuration etc.)
    1. Stop both AM servers
    2. Set logging to debug mode in AMConfig.properties.
    3. Delete / move everything in /var/opt/SUNWam/debug
    4. tail -f /var/opt/SUNWam/debug/amSession
    5. Post that file here... you should be able to see if session failover is enabled etc....
    hope this helps.

  • Exchange 2010 DAG Failover does not work

    Hi Experts,
    I have an Exchange 2010 setup in a DAG environment. We have 2 MBX servers in the main site and 1 MBX server in the DR site, all part of one DAG. We have 2 HUB/CAS servers in the main site and 1 HUB/CAS server in the DR site.
    Recently we had to do our BCP test for audit purposes. We had issues doing failover to the DR site; below is the error we faced.
    Please advise urgently on the possible causes and resolution steps, as we need to do this test again on the coming weekend.
    "EvictDagClusterNode got exception Microsoft.Exchange.Cluster.Replay.AmClusterEvictWithoutCleanupException: An Active Manager operation failed. Error An error occurred while attempting a cluster operation. Error: Evict node 'sme-ho-mbx01' returned without the node being fully cleaned up. Please run cluster.exe node <NodeName> /forcecleanup to complete clean up for this node.. ---> System.ComponentModel.Win32Exception: The wait operation timed out"
    So basically one of the MBX servers was not evicted from the cluster, which is why failover did not work.
    Would appreciate some urgent thoughts on a possible resolution.
    regards
    abubakar
    Md.Abubakar Noorani IT Systems Engineer Serco Ltd.

    Hi,
    Yes, you can run Stop-DatabaseAvailabilityGroup without shutting down the Mailbox server. During a DAG failover to the DR site, the Stop-DatabaseAvailabilityGroup cmdlet should be run against all servers in the primary datacenter. If a Mailbox server is unavailable but Active Directory is operating in the primary datacenter, the Stop-DatabaseAvailabilityGroup command with the ConfigurationOnly parameter must be run against all servers in this state in the primary datacenter.
    And please note that the Stop-DatabaseAvailabilityGroup cmdlet can be run against a DAG only when the DAG is configured with a DatacenterActivationMode value of DagOnly.
    Based on the error message, it seems that you should run cluster.exe node <NodeName> /forcecleanup against the specified node in the main site. Have you tried this and checked the result?
    Best regards,
    Belinda
    Belinda Ma
    TechNet Community Support

  • DbControl not working in clustered environment

    Hi all,
    I have a situation here with Oracle DB Control - it is not working in a clustered environment. It works on one node but fails on the other. Any suggestions?
    Thanks in advance !

    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/emca.htm
    In this article Oracle has given an example with RAC; that's why I asked the question.
    Exactly, with RAC. If you specify the -cluster option, you should have a running cluster, shouldn't you?

  • JMS bridge is not working in clustered env

    We have set up a JMS bridge between WLS7SP3 and WLS8.1. It works very well in a stand-alone server environment (testing env). However, we cannot get it to work in a clustered environment (preprod env). Has anyone got this working in a clustered environment? If so, please help!
    Thanks.

    I forgot to say, we are using WLS8.1 SP1.
    "Pete Inman" <[email protected]> wrote in message news:[email protected]...
    > If you are in a clustered environment and you deploy a bridge to the WHOLE cluster, it does not work and will not find the adapter. If you deploy to the INDIVIDUAL cluster members it will work.
    > We have a cluster with 4 managed servers: deploy to the whole cluster - no bridge working; deploy to Server1,2,3,4 - bridges work fine.
    > I have a case logged with BEA on this topic.
    > "Tom Barnes" <[email protected]> wrote in message news:[email protected]...
    > > "Not working" is too little information. I suggest that you start with the messaging bridge FAQ. There is a link to it here:
    > > http://dev2dev.bea.com/technologies/jms/index.jsp
    > > Then post with traces, exceptions, configuration, etc., if you are still having trouble.
    > > Tom, BEA
    > > jyang wrote:
    > > > We have set up a JMS bridge between WLS7SP3 and WLS8.1. It works very well in a stand-alone server environment (testing env). However, we cannot get it to work in a clustered environment (preprod env). Has anyone got this working in a clustered environment? If so, please help!
    > > > Thanks.

  • NXT Flatten to String Not Working with Clusters and Arrays

    Hello,
    My name is Joshua and I am from the FIRST Tech Challenge Team 4318, Green Machine. We are trying to write a program that will write to a configuration file and read it back. The idea is that we will be able to write to a config file from our computer that will be read by our autonomous program when it runs. This will define what the autonomous program does.
    The easiest way to do this seems to be flattening a data structure to a string, saving it to a file, and then reading back and unflattening it. The issue is that the flatten to string and unflatten from string VIs don't seem to work with arrays and clusters on the NXT. It does work when running on the computer. We've tried arrays, clusters, clusters in arrays and arrays in clusters, none seem to work. Thinking it was something to do with reading the string from a file, we tried bypassing the file functionality, still not working. It does work with basic data types though, such as strings and numbers.
    No error is thrown from what we can tell. All you get is a blank data structure out of the unflatten VI.
    The program attached is a test program I've been working on to get this functionality to work. It will display the hex content of what is going into the file, coming out of the file, and then the resulting data from the unflatten string, as well as any errors that have been thrown. The data type we are using simulates what we would like to store. There is also a file length in and out counter. The out file is a little larger because the NXT write file VI adds a new line character on to the end (thus the use of the strip white space VI). This character was corrupting even basic data types saved to file.
    I would like to know if there is a problem with what we are doing, or if it is simply not possible to flatten arrays on the NXT. Please ask if you have any questions about the code. Thank you in advance!
    Joshua
    Attachments:
    ReadableTest.vi 20 KB

    Hi jfireball,
    This is a very interesting situation. Take a look at what kbbersch said. I also urge you to post in the FTC Forums. You posted your question to the general LabVIEW forums, but by posting to the FTC Forums, you will have access to others that are using the NXT hardware.
    David B.
    Applications Engineer
    National Instruments

  • IP SLA failover config not working, need urgent help (Cisco 2911 K9 router)

    Hi,
    I am setting up a failover WAN for one of my clients, and everything seems to be configured correctly, but it is not working. For the tracks I am using the Google DNS IPs 8.8.8.8 and 8.8.4.4; if I ping 8.8.8.8 from the router it responds, but 8.8.4.4 does not. I think that because 8.8.4.4 does not ping, the router does not fail over when the primary GigabitEthernet0/0 goes down.
    Not sure what I am doing wrong. Please find the config details below:
    -------------------------------------------config-----
    username admin privilege 15 password 7 XXXXX
    redundancy
    track 10 ip sla 1 reachability
     delay down 5 up 5
    track 20 ip sla 2 reachability
     delay down 5 up 5
    interface GigabitEthernet0/0
     ip address 122.160.79.18 255.0.0.0
     ip nat outside
     ip virtual-reassembly
     duplex auto
     speed auto
    interface GigabitEthernet0/1
     ip address 182.71.34.71 255.255.255.248
    ip nat outside
     ip virtual-reassembly
     duplex auto
     speed auto
    interface GigabitEthernet0/2
     description $ES_LAN$
     ip address 200.200.201.1 255.255.255.0
     ip nat inside
     ip virtual-reassembly
     duplex auto
     speed auto
    ip forward-protocol nd
    no ip http server
    no ip http secure-server
    ip nat inside source route-map giga0 interface GigabitEthernet0/0 overload
    ip nat inside source route-map giga0 interface GigabitEthernet0/0 overload
    ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/0 track 10
    ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/1 track 20
    ip route 8.8.4.4 255.255.255.255 GigabitEthernet0/1 permanent
    ip route 8.8.8.8 255.255.255.255 GigabitEthernet0/0 permanent
    ip sla 1  
     icmp-echo 8.8.8.8 source-interface GigabitEthernet0/0
     frequency 10
    ip sla schedule 1 life forever start-time now
    ip sla 2  
     icmp-echo 8.8.4.4 source-interface GigabitEthernet0/1
     frequency 10
    ip sla schedule 2 life forever start-time now
    access-list 100 permit ip any any
    access-list 101 permit ip any any
    route-map giga0 permit 10
     match ip address 100
     match interface GigabitEthernet0/0
    route-map giga1 permit 10
     match ip address 101
     match interface GigabitEthernet0/1
    control-plane
    ------------------------------------------config end

    Hello,
    As Richard Burts correctly states, the NAT configuration is not right. But the ICMP echo request for the IP SLA is traffic generated by the router itself with a source interface specified, so there shouldn't be any NAT operation on it at all, should there? I am using IP SLA for two WAN connections too, but I can't recall ever having seen an entry for the ICMP operation in the output of sh ip nat trans.
    To me the static route configuration looks wrong too. As far as I remember, it is necessary to specify a next-hop address (subnet/mask via x.x.x.x) on multi-access broadcast networks like Ethernet; otherwise the subnet appears as directly connected in the routing table. The configuration "ip route subnet mask <outgoing interface>" only works correctly for point-to-point links. With the configuration above I would say no routing is possible at all, except for "real" directly attached networks. Vibs said it is possible to reach the Google DNS 8.8.8.8 but not the second one, 8.8.4.4; I verified that 8.8.4.4 usually answers ICMP echo requests.
    My guess is that the next hop for the Gig0/0 interface has proxy ARP enabled, but the next hop for the Gig0/1 interface does not.
    kind regards
    Lukasz

  • ACS Failover is not working

    We are running primary and secondary ACS 4.0 servers on appliances, configured for automatic replication between them every 6 hours. When the primary server goes offline because of a network issue, the secondary is supposed to authenticate, but that is not happening. Hence we are forced to use the local accounts configured on the network devices to log in and make configuration changes. Please note all our devices are configured to use both the primary and secondary ACS servers.
    Has anyone in this group come across such a problem?

    Sudipto
    There could be several things that cause your problem.
    My first question would be whether the network devices and the backup server are correctly configured for each other. If you change the configuration of some network device, removing the definition of the primary ACS server so that the only server configured is the backup, does the network device authenticate with the backup?
    My second question would be when there is a network issue with the primary server is it possible that the network issue also impacts connectivity to the backup server? Can you check the logs on the backup server and see whether it received authentication requests? If it did receive authentication requests what was its response (were they authenticated or denied)?
    My third question is whether the network devices are attempting to failover. The best way to determine this would be from the output of some debugs. I suggest that on the router you configure debug aaa authentication and debug tacacs authentication (or radius if you are using radius instead of tacacs) . If you could post the debug output, taken when the problem is going on, it would help us to analyze your problem.
    I have had some experience with certain failure modes on the ACS server in which the network devices would not fail over to the backup. I had a TAC case on this which resulted in a bugID. I am aware of several other bugIDs for similar issues where failover did not occur on remote devices due to certain failure modes on the server. But in these cases there was connectivity to the server and the server was sending a response which was not expected by the remote network device. From your description it sounds like there is no connectivity, so I assume it is not the same issue.
    If you can answer the questions that I listed and provide the debug output I hope that we can help to resolve your issue.
    HTH
    Rick

  • Session-failover-enabled not working in iWS6 with a FileStore

    I'm trying to use a FileStore to implement session persistence using IWSSessionManager. I have the following in my web-apps.xml:
    <web-app uri="/Banking" dir="c:/java/online">
      <session-manager class='com.iplanet.server.http.session.IWSSessionManager'>
        <init-param>
          <param-name>session-data-store</param-name>
          <param-value>com.iplanet.server.http.session.FileStore</param-value>
        </init-param>
        <init-param>
          <param-name>session-data-dir</param-name>
          <param-value>c:/iplanet/servers/SessionData</param-value>
        </init-param>
        <init-param>
          <param-name>session-failover-enabled</param-name>
          <param-value>false</param-value>
        </init-param>
      </session-manager>
    </web-app>
    I'm seeing the following exception in my log:
    [12/Jun/2002:10:10:56] info ( 320): java.io.NotSerializableException: com.iplanet.server.http.servlet.WebApplication
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1148)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at java.io.ObjectOutputStream.outputClassFields(ObjectOutputStream.java:1827)
    at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:480)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1214)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at java.io.ObjectOutputStream.outputClassFields(ObjectOutputStream.java:1827)
    at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:480)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1214)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at java.util.Hashtable.writeObject(Hashtable.java:764)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.io.ObjectOutputStream.invokeObjectWriter(ObjectOutputStream.java:1864)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1210)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at com.iplanet.server.http.session.IWSHttpSession.writeObject(IWSHttpSession.java:764)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.io.ObjectOutputStream.invokeObjectWriter(ObjectOutputStream.java:1864)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1210)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at com.iplanet.server.http.session.FileStore.save(FileStore.java:167)
    at com.iplanet.server.http.session.IWSSessionManager.update(IWSSessionManager.java:499)
    at com.iplanet.server.http.servlet.NSHttpServletRequest.closeInputStream (NSHttpServletRequest.java:612)
    at com.iplanet.server.http.servlet.NSServletRunner.servicePostProcess(NSServletRunner.java:857)
    at com.iplanet.server.http.servlet.NSServletRunner.invokeServletService(NSServletRunner.java:942)
    at com.iplanet.server.http.servlet.WebApplication.service(WebApplication.java:1065)
    at com.iplanet.server.http.servlet.NSServletRunner.ServiceWebApp(NSServletRunner.java:959)
    Any ideas what's wrong?
    I should note that I don't think it is because I am storing non-serializable things in the session attributes. I think this because originally I was getting an exception that said that a specific attribute wasn't serializable. I changed the class definition of the class I was storing in that attribute to include "implements java.io.Serializable" and that problem went away.
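
    The trace shows the serialization walk reaching com.iplanet.server.http.servlet.WebApplication, which suggests some session attribute still holds a reference to a container object (possibly indirectly, through a Hashtable field, which also appears in the trace). One way to find the culprit is to test each attribute's serializability yourself; a hedged sketch (this helper is not part of the iPlanet API, it is only illustrative):

        import java.io.ByteArrayOutputStream;
        import java.io.ObjectOutputStream;
        import java.util.Enumeration;
        import javax.servlet.http.HttpSession;

        public final class SessionAudit {
            // Logs any session attribute whose object graph cannot be serialized,
            // which is what FileStore needs in order to persist the session.
            public static void audit(HttpSession session) {
                Enumeration names = session.getAttributeNames();
                while (names.hasMoreElements()) {
                    String name = (String) names.nextElement();
                    Object value = session.getAttribute(name);
                    try {
                        ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
                        out.writeObject(value);
                        out.close();
                    } catch (Exception e) {
                        System.out.println("Attribute '" + name + "' is not serializable: " + e);
                    }
                }
            }
        }

    Calling SessionAudit.audit(request.getSession()) from a servlet just before the response completes should point at the attribute whose object graph drags in the WebApplication instance.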

    Hi Sija,
    Can I have the detailed scenario of your cluster configuration?
    You say you are going to start the cluster package manually; if so, please make sure you have the same copies of the start and instance profiles from node A on node B. That means you need to maintain two startup profiles and two instance profiles, one set for each node. In a normal situation it will pick the node A profile to start the database on node A, but in a failover situation it should not pick the node A profile; it should pick node B's profiles.
    Just make a copy from node A, rename the profiles accordingly for node B, and then try to restart.
    Regards
    Nick Loy

  • CTIOS failover is not working

    Hi,
    In case one CTIOS server fails, the agents are not switching to the other CTIOS server. I am using UCCE and CTIOS version 8.0.
    Thanks and Regards,
    Ashfaque

    Check out this post for the invalid message header:
    https://supportforums.cisco.com/thread/270670
    They mention it taking 60 seconds, but I'm wondering if it doesn't occur faster in your new 8.x version...  If you failover to the peer, I bet you'll see the same message.
    You're still getting a CONNECTION REFUSED error, which I've only seen related to invalid port configurations.  Make sure all your 42027/42028/43027/43028 settings are correct (it can get VERY confusing, for some reason).
    Here's an excellent doc that I often refer people to: https://supportforums.cisco.com/docs/DOC-1390.  There's a graphic in the section "Install CTI Server (CG or CTI PG)" that will help you visualize what's going on.  The only problem with the graphic is that the CG1 representation is a little misleading- It should have CG1A (on your PG1A server) using ports 42027 and 42028, and CG1B (on your PG1B server) using ports 43027 and 43028.
    Cheers,

  • RV042 Failover does not work properly in certain WAN1 signal condition

        Our RV042 has cable modem in WAN1 and ADSL in WAN2; it is set in smart link backup mode.
    In certain cases of WAN1 signal loss, RV042 seems not to detect this condition. Consequently it does not switch automatically to WAN2.
    One way to get it to switch is to disconnect WAN1 modem power (manually in situ), then WAN2 assumes as active link.
    We conclude that, in the mentioned cases, although WAN1 signal is not good enough to provide internet service, RV042 makes a wrong decision and determines WAN1 is ok.
    Is there a way to have a correct switchover for these cases?
    May be with a firmware fix, or an internal user programming/setting, or different router model- or a combination of these elements, or any other solution you can provide us.

    Eduardo,
    It sounds like you have the device set up properly; however, under the System Management tab are the settings for how it detects a disconnect.
    If WAN1 and WAN2 are set to check against the default gateway, the router pings the gateway, and as long as it gets a reply from the modem it stays connected.
    You might not have Internet connectivity, but the router thinks you do because it can ping the modem. If you uncheck this and set it to a remote host instead, for example
    WAN1 to www.google.com and WAN2 to www.yahoo.com, then it has to get all the way out to the Internet and resolve Internet names. If it can't,
    it starts the failover process.

  • Unity Connection 8.5.1 directory replication not working between clusters

    Last night we deleted 10 users from one cluster (there are 10 networked together), with the intention of moving them to another site on another cluster.   The users deleted without error, however when we did a search on other clusters the users were still appearing as remote.  We have seen this before and ran the command line to delete the globalusers from each cluster.    We then proceeded to add the users to the destination cluster without issue.  This morning I have noted that none of the other 9 clusters are aware of these new users.  I did push out the directory under intrasite links however it is still in the "in progress" state after 12 hours. 
    Can anyone tell me what service is tied to this function or how I can restart this process or fix this?  I will open a TAC ticket as I suspect we may need some additional steps as this is not the first directory replication issue we have seen over the years.

    Thanks Rob, I did know about this bug and have dealt with it before.  I have been able to workaround this for years now (these clusters deployed 2011), however I have never experienced an issue where the added users did not replicate out to other servers.   Another bullet in my list of reasons why we need to upgrade these versions.

  • Failover not working while connecting to Oracle 11g database

    Hi,
    We have a J2EE application that connects to Oracle 11g database.
    The connection URL we are using is as below
    jdbc:oracle:thin:@(DESCRIPTION =(SDU=32768)(ADDRESS=(PROTOCOL=TCP)(HOST=abc.hostname.com)(PORT=1525))(ADDRESS=(PROTOCOL=TCP)(HOST= xyz.hostname.com)(PORT=1525))(CONNECT_DATA=(SERVICE_NAME=PQR)))
    The issue we are facing is that database failover is not working. The application only connects to the first host in the TNS entry. Every time there is a failure connecting to the first host, manual steps are required to swap the hosts.
    This started happening after we upgraded the Oracle DB from 9i to 11g.
    We are using the 9i client jar to connect to 11g. Could this be causing the problem?
    Thanks In Advance.
    -Tara

    889517 wrote:
    Yes, you are right. Nothing else was updated. The application still works as expected except for the failover.
    If that is correct then I seriously doubt it has anything to do with Java; it would be something to do with Oracle and/or the network infrastructure.
    If not, then it is some small problem with the driver. You can try updating the driver, but I wouldn't expect a fix.
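
    For connect-time failover with the thin driver, the addresses are normally wrapped in an ADDRESS_LIST with FAILOVER=ON (and LOAD_BALANCE set as desired). A hedged sketch reusing the hosts and service name from the post, assuming a current ojdbc driver on the classpath (credentials are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class FailoverConnect {
            public static void main(String[] args) throws SQLException {
                // ADDRESS_LIST with FAILOVER=ON lets the driver try the next address
                // when the first host is unreachable at connect time.
                String url = "jdbc:oracle:thin:@(DESCRIPTION=(SDU=32768)"
                        + "(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)"
                        + "(ADDRESS=(PROTOCOL=TCP)(HOST=abc.hostname.com)(PORT=1525))"
                        + "(ADDRESS=(PROTOCOL=TCP)(HOST=xyz.hostname.com)(PORT=1525)))"
                        + "(CONNECT_DATA=(SERVICE_NAME=PQR)))";
                try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
                    System.out.println("Connected via: " + conn.getMetaData().getURL());
                }
            }
        }

    Connect-time failover only covers new connections; sessions that are already established need TAF or application-level retry, which is configured separately.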

  • Failover not working in a cluster environment asks Relogin to Application

    Hi,
    I have set up a cluster on WebLogic and deployed a CRM (EJB/EAR) application.
    When I stop the managed server that is handling a request, the other managed server picks up the request, but it asks for a relogin.
    I think failover is not working properly.
    Could you please help?
    Thanks,
    Vishal

    Hi Kalyan,
    Please See below Diagnostics:
    Managed Server1:
    ===================================================
    Date: Tue Sep 04 18:33:12 IST 2012
    DdCompnt=11,SessionContext=7,CurrJvmHeapMB=235,Bos=2,JForm=1,MaxBoRowCount=493,JApp=13,JField=1,JDdField=1,JSession=6,Sessions=7,JBaseBo=2,Misc=74,JPTrace=1,LpmErrors=1,Forms=1,Active Sessions=6,JDdTable=1021,MaxJvmHeapMB=467,MaxBos=18,BoRowCount=33,JError=1,JFields=1,UtlRecs=175,FlexAttr=-5,UserContext=3
    Managed Server2:
    ===================================================
    Date: Tue Sep 04 18:33:20 IST 2012
    DdCompnt=11,SessionContext=8,CurrJvmHeapMB=226,JForm=1,JApp=13,JSession=7,Sessions=8,Misc=148,JPTrace=1,Forms=1,Active Sessions=7,JDdTable=1124,MaxJvmHeapMB=472,UtlRecs=81,FlexAttr=-5,UserContext=4
    Free Java Heap Memory: 258400 KB
    Total Java Heap Memory: 495779 KB
    I tried opening many sessions. They are assigned randomly, but when I kill Managed Server 2, its corresponding sessions stop working.
    System Error error executing LPoms.getUserContext()
    Also I found following warning in Weblogic logs.
    <Sep 4, 2012 6:26:15 PM IST> <Warning> <Socket> <BEA-000402> <There are: 5 active sockets, but the maximum number of socket reader threads allowed by the configuration is: 4. The configuration may need altered.>
    Thanks for your immediate help.

  • JMS is not working properly in a clustered environment

    Hi all,
    I am using the application server OC4J 10.1.3.1.0 Enterprise Edition. My application is a standalone application (thick client).
    We are using the following jndi.properties:
    java.naming.factory.initial=com.evermind.server.rmi.RMIInitialContextFactory
    java.naming.provider.url=opmn:ormi://172.16.1.38:6005:group/Security,opmn:ormi://172.16.1.38:6006:deceval_group/Security
    java.naming.security.principal=oc4juser
    java.naming.security.credentials=oc4juser
    oracle.j2ee.rmi.loadBalance=lookup
    We have two application servers in a cluster topology; as you can see above, we have used one instance from one application server and one from the other.
    I have seen that there is one JMS server for every instance. At runtime the application uses one application server's services, say OPMN port 6005, but when the application connects to the other application server, say OPMN port 6006, JMS does not work properly when I send a message.
    As we have a clustered environment, the message must be propagated to all the applications that use the jndi.properties above.
    If I keep only one application server's OPMN entry, say
    java.naming.provider.url=opmn:ormi://172.16.1.38:6005:group/Security
    then it works perfectly.
    Can you please provide a solution ASAP?
    Thanks in advance,
    Manu
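
    For reference, this is roughly how a client would resolve and use a connection factory with those properties; a hedged sketch in which the JNDI names jms/MyTopicConnectionFactory and jms/MyTopic are placeholders. Note that with oracle.j2ee.rmi.loadBalance=lookup each lookup may be routed to a different instance, so if each instance hosts its own local JMS destination rather than a shared or propagated one, messages published through one instance will not be seen by consumers attached to the other, which would match the behaviour described above.

        import java.util.Properties;
        import javax.jms.Session;
        import javax.jms.Topic;
        import javax.jms.TopicConnection;
        import javax.jms.TopicConnectionFactory;
        import javax.jms.TopicPublisher;
        import javax.jms.TopicSession;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class JmsClusterClient {
            public static void main(String[] args) throws Exception {
                Properties env = new Properties();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
                env.put(Context.PROVIDER_URL, "opmn:ormi://172.16.1.38:6005:group/Security,"
                        + "opmn:ormi://172.16.1.38:6006:deceval_group/Security");
                env.put(Context.SECURITY_PRINCIPAL, "oc4juser");
                env.put(Context.SECURITY_CREDENTIALS, "oc4juser");
                env.put("oracle.j2ee.rmi.loadBalance", "lookup"); // each lookup may pick a different instance

                Context ctx = new InitialContext(env);
                // JNDI names below are placeholders; use the ones configured in the OC4J JMS provider.
                TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup("jms/MyTopicConnectionFactory");
                Topic topic = (Topic) ctx.lookup("jms/MyTopic");

                TopicConnection connection = tcf.createTopicConnection();
                TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                TopicPublisher publisher = session.createPublisher(topic);
                publisher.publish(session.createTextMessage("hello from the cluster"));
                connection.close();
            }
        }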

    Dear Aravindth,
    (.*?) means: select the content between where you start and where you end, matching as little as possible.
    For example, <month>(.*?)</month> captures whatever lies between a <month> start tag and the nearest following </month> tag.
    (?) Match zero or one occurrences. Equivalent to {0,1}.
    (*) Match zero or more occurrences. Equivalent to {0,}.
    (+) Match one or more occurrences. Equivalent to {1,}.
    (.) (Dot). Match any character except newline or another Unicode line terminator.
    (.*?) therefore means: match any character except a newline or another Unicode line terminator, zero or more times, as few times as possible (non-greedy).
    Please refer to the site below:
    http://www.javascriptkit.com/jsref/regexp.shtml
    Thanks & Regards
    T.R.Harihara SudhaN
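
    As a quick illustration of the non-greedy behaviour described above (the sample input string is made up):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class NonGreedyDemo {
            public static void main(String[] args) {
                Pattern p = Pattern.compile("<month>(.*?)</month>");
                Matcher m = p.matcher("<month>Jan</month><month>Feb</month>");
                while (m.find()) {
                    // Non-greedy: each match stops at the first closing tag,
                    // so this prints "Jan" and then "Feb", not "Jan</month><month>Feb".
                    System.out.println(m.group(1));
                }
            }
        }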
