Broker/Cluster Message Persistence and Replication

I am evaluating the Sun JMS MQ implementation and I am trying to determine whether messages sent to a broker within a cluster are replicated to the other brokers and, consequently, to those brokers' persistent message stores.
I want an HA solution that has no single point of failure within a site (and ultimately across sites, but let's focus on a single site to start with). I need replication to ensure that if Broker-A goes down the message is still available and can be consumed via Broker-B, without relying on Broker-A being restarted.
There is a statement within the Technical Overview document that might suggest that what I am looking for is not possible. It says...
"Note that broker clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster but may lose some data while they are reconnected to the alternate broker."
If I am not asking too much already, any case studies, blueprints, etc. would be nice. But simple answers to this question will also earn my deep thanks.
Regards
Paul

"Note that broker clusters provide service
availability but not data availability. If one broker
in a cluster fails, clients connected to that broker
can reconnect to another broker in the cluster but
may lose some data while they are reconnected to the
alternate broker."Yes, that's clustering. Messages are stored-and-forwarded. If the broker where the message is stored goes down, it is not available until the broker restarts.
What you are looking for is high availability, where messages from the active instance are synchronously replicated to a standby instance. In the failover case the JMS clients transparently reconnect and operation continues.
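As a minimal client-side sketch of that reconnect pattern, using only the standard javax.jms API (the ConnectionFactory is assumed to be supplied by whichever provider is in use; destination and consumer setup is omitted):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Session;

// Sketch of a client that notices a broker failure and re-creates its connection.
// A provider with true HA would do this (and the state handover) internally.
public class ReconnectingConsumer {

    private final ConnectionFactory factory;
    private volatile Connection connection;

    public ReconnectingConsumer(ConnectionFactory factory) {
        this.factory = factory;
    }

    public void start() throws JMSException {
        connection = factory.createConnection();
        // Fires when the connection to the broker is lost unexpectedly.
        connection.setExceptionListener(e -> {
            System.err.println("Connection lost: " + e.getMessage());
            reconnect();
        });
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create consumers on 'session' and process messages here ...
        connection.start();
    }

    private void reconnect() {
        while (true) {
            try {
                start();                 // re-create connection, session, consumers
                return;
            } catch (JMSException retry) {
                try { Thread.sleep(2000); } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}

Note that even with this in place, messages persisted on the failed broker stay unavailable until that broker's store comes back, which is exactly the limitation quoted above.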
I don't know if Sun IMQ provides that but I do know that SwiftMQ does that very well:
http://www.swiftmq.com/products/harouter/introduction/index.html
Full docs start here:
http://www.swiftmq.com/products/harouter/index.html
-- Andreas

Similar Messages

  • Broker/Cluster Message Replication


    Thank you for the response. I think you have confirmed my fears, but I will ask a few more questions, just to make sure I am 100% clear...
    Can two brokers share the exact same JDBC message store? From what you are saying we could do that, but Broker-B would have to be passive. To be clear, could both brokers be active and using the same store at the same time?
    I am baffled as to how guaranteed message order can be provided if a broker and its store are unavailable. Is it that new messages are processed out of order, i.e. new messages are processed ignoring the failed broker's messages? Are there known issues with guaranteed message order when a broker fails?
    Thank you again for the response.
    Regards
    Paul.

  • Cluster only enqueue and replication

    Dear SDNers:
    We are using SUN cluster technology to implement mission-critical Java systems.
    Limited by hardware and budget, we plan to cluster the enqueue server and replication server ONLY.
    Therefore JC00 will not be clustered.
    Do you think this will meet all HA requirements? I doubt it, but management insists...
    Please advise.
    Thanks a lot!

    Hi Joy,
    Though this won't be a completely highly available environment, by clustering the enqueue server and ERS you can achieve partial HA.
    Your SAP locks will be safe if you cluster the enqueue server and also implement ERS.
    But make sure that you cluster the enqueue server along with the message server; that is the best approach.
    Your JC (CI) and DB can run together, whereas the enqueue and message servers (SCS/ASCS) can be clubbed together. In this scenario your ERS will be running on the failover node. If the enqueue server crashes, it will switch to the failover node, acquire all locks from the ERS, and your SAP locks will be safe.
    However, in the above scenario your DB and CI will not be protected in case of a hardware failure.
    Cheers !!!
    Ashish

  • Session migration and replication

    Hi All,
    I am having a hard time configuring my application for HTTP session migration. Our WebLogic setup consists of two managed servers running in the same cluster. Each server has an Ehcache that stores some user information, with the session ID as the key and an info object as the value. If a server needs a restart, we want to move the updated info object from the cache on the server being restarted to the other managed server in the same cluster.
    I browsed through many documents online. Most of them explain session replication but not migration, so I followed the replication approach (I don't want real-time sync of the HTTP session; I want it to migrate if something goes wrong with one of the managed servers).
    However, I could not achieve this after following the steps to configure the feature. I would appreciate it a lot if someone could help me figure out the issue.
    Here is what I did.
    1) Weblogic.xml
    <session-descriptor>
        <persistent-store-type>replicated_if_clustered</persistent-store-type>
    </session-descriptor>
    2) An implementation class of the session listener interfaces (HttpSessionActivationListener, HttpSessionListener, HttpSessionAttributeListener, HttpSessionBindingListener). Excerpt: only the relevant callbacks are shown, and Log is our own logging utility:
    public class UserCacheMigrationListener implements HttpSessionActivationListener,
            HttpSessionListener, HttpSessionAttributeListener, HttpSessionBindingListener {
        // (other required listener methods omitted)
        public void sessionDidActivate(HttpSessionEvent sessionEvent) {
            // NEVER GETS CALLED. Here I would check whether the session has an attribute named
            // 'CACHE_ELEMENT'; if it does, this is a migration onto the current managed server.
            Log.info(UserCacheMigrationListener.class, "inside sessionDidActivate");
        }
        public void sessionWillPassivate(HttpSessionEvent sessionEvent) {
            // NEVER GETS CALLED. Here I would set the 'CACHE_ELEMENT' attribute so that it is
            // available to the target managed server when its sessionDidActivate is called.
            Log.info(UserCacheMigrationListener.class, "inside sessionWillPassivate");
        }
        public void sessionCreated(HttpSessionEvent sessionEvent) {
            // THIS GETS CALLED! and I set the following attribute:
            sessionEvent.getSession().setAttribute(UserCacheMigrationListener.class.getName(), this);
        }
        public void sessionDestroyed(HttpSessionEvent sessionEvent) {
            // THIS GETS CALLED!
        }
        public void valueBound(HttpSessionBindingEvent event) {
            // THIS GETS CALLED! Which means the setAttribute in sessionCreated succeeded.
        }
    }
    In the above code, the setAttribute call inside sessionCreated(..) successfully sets the attribute on the session. This is apparent because valueBound(..) is called when the session is created. But why are the sessionWillPassivate/sessionDidActivate methods never called??? (A sketch of an attribute-level activation listener follows the config.xml excerpt below.)
    3.) An entry in web.xml for this listener.
         <listener>
              <listener-class>com.xyz.UserCacheMigrationListener</listener-class>
         </listener>
    4) From the WebLogic config.xml, I am copying all the relevant parts to describe the setup as fully as I can:
    <server>
    <name>AdminServer</name>
    <ssl>
    <enabled>false</enabled>
    </ssl>
    <listen-address>localhost</listen-address>
    <network-access-point>
    <name>AdminChannel</name>
    <protocol>t3</protocol>
    <listen-address>localhost</listen-address>
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>false</tunneling-enabled>
    <outbound-enabled>false</outbound-enabled>
    <enabled>true</enabled>
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
    </network-access-point>
    <data-source>
    <rmi-jdbc-security xsi:nil="true"></rmi-jdbc-security>
    </data-source>
    </server>
    <server>
    <name>Node1</name>
    <ssl>
    <enabled>false</enabled>
    </ssl>
    <machine>DevMachine</machine>
    <listen-port>7002</listen-port>
    <cluster>DevCluster</cluster>
    <replication-group>devGroup1</replication-group>
    <preferred-secondary-group>devGroup2</preferred-secondary-group>
    <web-server>
    <keep-alive-secs>500</keep-alive-secs>
    <post-timeout-secs>120</post-timeout-secs>
    </web-server>
    <listen-address>localhost</listen-address>
    <jta-migratable-target>
    <user-preferred-server>Node1</user-preferred-server>
    <cluster>DevCluster</cluster>
    </jta-migratable-target>
    <data-source>
    <rmi-jdbc-security xsi:nil="true"></rmi-jdbc-security>
    </data-source>
    </server>
    <server>
    <name>Node2</name>
    <ssl>
    <enabled>false</enabled>
    </ssl>
    <machine>DevMachine</machine>
    <listen-port>7003</listen-port>
    <cluster>DevCluster</cluster>
    <replication-group>devGroup2</replication-group>
    <preferred-secondary-group>devGroup1</preferred-secondary-group>
    <listen-address>localhost</listen-address>
    <network-access-point>
    <name>Node2Channel</name>
    <protocol>t3</protocol>
    <listen-address>localhost</listen-address>
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>true</tunneling-enabled>
    <outbound-enabled>false</outbound-enabled>
    <enabled>true</enabled>
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
    </network-access-point>
    <jta-migratable-target>
    <user-preferred-server>Node2</user-preferred-server>
    <cluster>DevCluster</cluster>
    </jta-migratable-target>
    <data-source>
    <rmi-jdbc-security xsi:nil="true"></rmi-jdbc-security>
    </data-source>
    </server>
    <cluster>
    <name>DevCluster</name>
    <cluster-messaging-mode>unicast</cluster-messaging-mode>
    </cluster>
    <machine>
    <name>DevMachine</name>
    <node-manager>
    <nm-type>Plain</nm-type>
    </node-manager>
    </machine>
    <migratable-target>
    <name>Node1 (migratable)</name>
    <notes>This is a system generated default migratable target for a server. Do not delete manually.</notes>
    <user-preferred-server>Node1</user-preferred-server>
    <cluster>DevCluster</cluster>
    </migratable-target>
    <migratable-target>
    <name>Node2 (migratable)</name>
    <notes>This is a system generated default migratable target for a server. Do not delete manually.</notes>
    <user-preferred-server>Node2</user-preferred-server>
    <cluster>DevCluster</cluster>
    </migratable-target>
    ------------------------------------------------------------------------------------------------------------------
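    One thing worth checking, as mentioned above: per the Servlet spec, sessionWillPassivate/sessionDidActivate are delivered to objects that are bound into the session as attributes and that implement HttpSessionActivationListener (and are Serializable), not to listeners registered in web.xml; WebLogic can also only replicate attributes it can serialize. A minimal sketch along those lines (class name, attribute key, and payload are illustrative, not part of the original setup):
    import java.io.Serializable;
    import javax.servlet.http.HttpSession;
    import javax.servlet.http.HttpSessionActivationListener;
    import javax.servlet.http.HttpSessionEvent;

    // Hypothetical session attribute that carries the cache entry across servers.
    // Because it is Serializable and implements HttpSessionActivationListener, the
    // container can notify it when the session leaves one JVM and arrives in another.
    public class MigratableCacheEntry implements HttpSessionActivationListener, Serializable {

        private static final long serialVersionUID = 1L;

        private final String cachedInfo;   // whatever needs to survive the move

        public MigratableCacheEntry(String cachedInfo) {
            this.cachedInfo = cachedInfo;
        }

        public void sessionWillPassivate(HttpSessionEvent event) {
            // Called on the server the session is leaving: snapshot local (Ehcache) state here.
        }

        public void sessionDidActivate(HttpSessionEvent event) {
            // Called on the server the session arrives at: repopulate the local cache from cachedInfo.
        }

        public String getCachedInfo() {
            return cachedInfo;
        }

        // Typical binding, e.g. from a servlet or a sessionCreated callback:
        public static void bind(HttpSession session, String info) {
            session.setAttribute("CACHE_ELEMENT", new MigratableCacheEntry(info));
        }
    }
    With an attribute like this in the session, the passivation/activation callbacks should fire on the attribute itself when WebLogic replicates or migrates the session, rather than on the web.xml-registered listener.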

    Hi,
    So you want to migrate sessions from one server to another.
    Here are some links that should help you:
    http://docs.oracle.com/cd/E15051_01/wls/docs103/cluster/migration.html
    http://www.oracle.com/technetwork/middleware/weblogic/messaging/wlasm-1853193.pdf
    Let me know the status if you need any further help on this issue.
    Regards,
    Kal

  • Cannot add multiple members of a failover cluster to a DFSR replication group

    Server 2012 RTM. I have two physical servers, in two separate data centers 35 miles apart, with a GbE link over metro fibre between them. Both have large (10TB+) local RAID storage arrays, but given the physical separation there is no shared physical storage.
    The hosts need to be in a Windows failover cluster (WSFC), so that I can run high-availability VMs and SQL Availability Groups across these two hosts for HA and DR. VM and SQL application data storage uses a SOFS (scale-out file server) network share on separate servers.
    I need to be able to use DFSR to replicate multi-TB user data file folders between the two local storage arrays on these two hosts for HA and DR. But when I try to add the second server to a DFSR replication group, I get the error:
    The specified member is part of a failover cluster that is already a member of the replication group. You cannot add multiple members for the same cluster to a replication group.
    I'm not clear why this has to be a restriction. I need to be able to replicate files somehow for HA & DR of the 10TB+ of file storage. I can't use a clustered file server for file storage, as I don't have any shared storage on these two servers. Likewise, I can't run an HA single DFSR target for the same reason (no shared storage), and in any case this doesn't solve the problem of replicating files between the two hosts for HA & DR. DFSR is the solution for replicating file storage across servers with non-shared storage.
    Why would there be a restriction against using DFSR between multiple hosts in a cluster, as long as you are not trying to replicate folders on a shared storage target accessible to both hosts (which would obviously be a problem)? As long as you are not replicating folders in c:\ClusterStorage, there should be no conflict.
    Is there a workaround or alternative solution?

    Yes, I read that series, but it doesn't address the issue. The article is about making a DFSR target highly available, and that won't help me here.
    I need to be able to use DFSR to replicate files between two different servers, with those servers being in a WSFC for the purpose of providing other clustered services (Hyper-V, SQL Availability Groups, etc.). DFSR should not interfere with this, but it is being blocked between nodes in the same WSFC for a reason that is not clear to me.
    This is a valid use case, and I can't see an alternative solution when you only have two physical servers. Windows needs to be able to provide HA, DR, and replication of everything: VMs, SQL, and file folders. But it seems that this artificial barrier is forcing us to choose between clustered services and DFSR between nodes. I can't see any rationale for blocking DFSR between cluster nodes, especially those without shared storage.
    Perhaps this blanket block should be changed to a more selective block at the DFSR folder level, not the node level.

  • Messaging Server and Calendar Server Mount points for SAN

    Hi! Jay,
    We are planning to configure "JES 05Q4" Messaging and Calendar Servers on 2 v490 Servers running Solaris 9.0, Sun Cluster, Sun Volume Manager and UFS. The Servers will be connected to the SAN (EMC Symmetrix) for storage.
    I have the following questions:
    1. What are the SAN mount points to be setup for Messaging Server?
    I was planning to have the following on SAN:
    - /opt/SUNWmsgsr
    - /var/opt/SUNWmsgsr
    - Sun Cluster (Global Devices)
    Are there any other mount points that need to be on the SAN for Messaging to be configured on Sun Cluster?
    2. What are the SAN mount points to be setup for Calendar Server?
    I was planning to have the following on SAN:
    - /opt/SUNWics5
    - /var/opt/SUNWics5
    - /etc/opt/SUNWics5
    3. What are the SAN mount points to be setup for Web Server (v 6.0) for Delegated Admin 1.2?
    - /opt/ES60 (Planned location for Web Server)
    Delegated Admin will be installed under /opt/ES60/ida12
    Directory server will be on its own cluster. Are there any other storage needs to be considered?
    Also, is there a good document that walks through, step by step, how to install Messaging, Calendar and Web Server on a 2-node Sun Cluster?
    The installation document doesn't do a good job, or at least I am seeing a lot of gaps.
    Thanks

    Hi,
    There are basically two choices:
    a) Have local binaries on the cluster nodes (e.g. 2 nodes), which means there will be two sets of binaries, one on each node in your case.
    When you configure the software, you then point the data directory to a cluster filesystem, which does not necessarily have to be global, but it must be mountable on both nodes.
    The advantage of this method is that during patching and similar system maintenance activities the downtime is minimal.
    The disadvantage is that you have to maintain two sets of binaries, i.e. patch twice.
    The suggested filesystems could be, for example:
    /opt for local binaries
    /SJE/SUNWmsgr for data (used during the configure step)
    This will mean installing the binaries twice.
    b) Have a single copy of the binaries on a clustered filesystem.
    This was the norm in the iMS 5.2 era, and Sun would recommend it, though I have seen type a) for iMS 5.2 as well.
    This means there should be no configuration files on a local filesystem; everything related to iPlanet goes on the clustered filesystem.
    I have not come across type b) post SUN ONE, i.e. 6.x. It seems 6.x has to keep some files on the local filesystem anyway, so b) is either not possible or needs some special configuration.
    So maybe you should try a).
    The sequence, after the cluster framework is ready, would be:
    1) Install the binaries on both sides
    2) Install the agent on one side
    3) Switch the filesystem resource to one node
    4) Configure the software with the clustered FS
    5) Switch the filesystem resource to the other node and use the configuration (useconfig) of the first node.
    Cheers--

  • Integration Broker Application Messages Issue

    Hi Gurus,
    We are having an issue with the Integration Broker Application Messages.
    Here's the issue: when we run Dynrole, it processes application messages. All messages go to Done status, but when checking the instance, the footer comes first where the header should come first. This happens from time to time; we only run 8 application messages at a time (header and footer included). BTW, failover is enabled on our server.
    We are trying to find a solution on this, please help us.
    Thanks,
    Red

    Hi,
    Did it get resolved? If yes, please share the solution here if possible.
    Cheers,
    Elvis.

  • Unable to start Message Server and Dispatcher

    Hi
    When I try to start the J2EE engine, the message server and dispatcher do not start. In the developer trace of the message server I found the following error:
    [Thr 5076] Fri Nov 16 17:37:39 2007
    [Thr 5076] *** ERROR => MsSRead: NiBufReceive (rc=NIECONN_BROKEN) [msxxserv.c   9163]
    [Thr 5076] *** ERROR => MsSClientHandle: MsSRead C1 (sapep_QN7_00), MSEINTERN [msxxserv.c   3778]
    [Thr 5076] MsSExit: received SIGINT (2)
    [Thr 5076] ***LOG Q02=> MsSHalt, MSStop (Msg Server 5100) [msxxserv.c   5334]
    In the default trace I found the following error:
    1.5#00111120E5260012000000020000147000043BBD4424E5AA#1191583984968#com.sap.engine.services.httpserver.dispatcher##com.sap.engine.services.httpserver.dispatcher#######OrderedChannel for p4 service##0#0#Error##Plain###Failure in session communication between current dispatcher and server with ID 9661150. Sending notification message for disconnected client failed.
    com.sap.engine.frame.cluster.message.DestinationNotAvailableException: Participant 9,661,150 is not available.
         at com.sap.engine.core.cluster.impl6.session.SessionConnectorImpl.send(SessionConnectorImpl.java:181)
         at com.sap.engine.core.cluster.impl6.ClusterManagerImpl.ss_send(ClusterManagerImpl.java:2502)
         at
    Thanks & Regards
    Sowmya


  • Failover Zones / Containers with Sun Cluster Geographic Edition and AVS

    Hi everyone,
    Is the following solution supported/certified by Oracle/Sun? I did find some docs saying it is, but I cannot find concrete technical information yet...
    * Two sites with a 2-node cluster in each site
    * 2x Failover containers/zones that are part of the two protection groups (1x group for SAP, other group for 3rd party application)
    * Sun Cluster 3.2 and Geographic Edition 3.2 with Availability Suite for SYNC/ASYNC replication over TCP/IP between the two sites
    The Zones and their application need to be able to failover between the two sites.
    Thanks!
    Wim Olivier

    Fritz,
    Obviously, my colleagues and I, in the Geo Cluster group build and test Geo clusters all the time :-)
    We have certainly built and tested Oracle (non-RAC) configurations on AVS. One issue you do have, unfortunately, is that of zones plus AVS (see my Blueprint for more details: http://wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition). Consequently, you can't build the configuration you described. The alternative is to sacrifice zones for now and wait for the fixes to RG affinities (no idea on the schedule for this feature), or find another way to do this, probably hand-crafted.
    If you follow the OHAC pages (http://www.opensolaris.org/os/community/ha-clusters/) and look at the endorsed projects, you'll see that there is a Script-Based Plug-in on the way (for OHACGE) that I'm writing. So if you are interested in playing with the OHACGE source or the SCXGE binaries, you might see that appear at some point. Of course, these aren't supported solutions.
    Regards,
    Tim
    ---

  • How the broker cluster determine which broker to be connected

    Currently there are four server instances in a GlassFish cluster. By default GlassFish uses an MQ broker cluster to provide JMS services. The question is: when a client uses the JMS connection factory to create a connection, what policy does the broker cluster use to determine which broker the client is connected to?
    MQ's documentation says that the connection is created using the imqAddressListBehavior and imqAddressList properties.
    In fact, I found that the connection distribution is 350/200/50/200 (broker3, broker4, broker1, broker2);
    imqAddressList = broker3,broker4,broker1,broker2
    imqAddressListBehavior = priority
    Can anyone tell me what policy the broker cluster uses to route connections?
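    For what it's worth, here is a sketch of how those two properties are set programmatically on the MQ client connection factory (broker host names and ports are placeholders; inside GlassFish the factory is normally configured through the JMS service rather than in code). With imqAddressListBehavior=PRIORITY the client works through the address list in order and only falls through to the next entry when a broker is unreachable, so connections tend to pile onto the first reachable broker; a more even spread usually comes from RANDOM, or from each GlassFish instance being handed a differently ordered list.
    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.sun.messaging.ConnectionConfiguration;
    import com.sun.messaging.ConnectionFactory;

    // Sketch: configuring the MQ client-side address list directly.
    public class AddressListExample {

        public static void main(String[] args) throws JMSException {
            ConnectionFactory cf = new ConnectionFactory();

            // Ordered list of brokers the client may connect to (placeholder hosts/ports).
            cf.setProperty(ConnectionConfiguration.imqAddressList,
                    "mq://broker3:7676,mq://broker4:7676,mq://broker1:7676,mq://broker2:7676");

            // PRIORITY = try the addresses strictly in list order;
            // RANDOM   = start from a randomly chosen address, which spreads
            //            connections more evenly across the brokers.
            cf.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM");

            Connection connection = cf.createConnection();
            System.out.println("Connected via: " + connection.getMetaData().getJMSProviderName());
            connection.close();
        }
    }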

    Hi,
    The brokers are started by the GlassFish node agent process when I start a node agent. All of the brokers are behind the firewall.
    Can anyone share some documentation about this subject?
    Thanks a lot!

  • Is DB/SE supported for message persistence through OEMS?

    Hi all,
    I'm in the process of designing a high-availability configuration for ESB.
    We will use Oracle Database RAC for the metadata repository and for message persistence through Oracle Enterprise Messaging Service (OEMS) as well.
    I'm considering using Oracle Database Standard Edition (SE) for clustering (although the licensing policy for SE depends not only on the number of processor sockets but also on the MCMs since the 1st of May!).
    Does anyone know if the database standard edition is officially supported for message persistence through OEMS?
    Thanks in advance,
    Jeroen van Schaijk

    Hi Scott,
    As far as I know, the only difference in the part numbers is:
    370-xxxx Sun StorEdge 3510 FC Array (Non-RoHS)
    371-xxxx Sun StorEdge 3510 FC Array RoHS
    hth
    Gerhard

  • ASA 8.0 VPN cluster with WEBVPN and Certificates

    I'm looking for advice from anyone who has implemented or tested ASA 8.0 in a VPN cluster using WebVPN and the AnyConnect client. I have a standalone ASA configured with a public certificate for SSL as vpn.xxxx.org, which works fine.
    According to the config docs for 8.0, you can use a FQDN redirect for the cluster so that certificates match when a user is sent to another ASA.
    Has anyone done this? It looks like each box will need 2 certificates, the first being vpn.xxxx.org and the second being vpn1.xxxx.org or vpn2.xxxx.org depending on whether this is ASA1 or ASA2. I also need DNS forward and reverse entries, which is no problem.
    I'm assuming the client gets presented the appropriate certificate based on the http GET.
    Has anyone experienced any issues with this? Things to look out for migrating to a cluster? Any issues with replicating the configuration and certificate to a second ASA?
    Example: Assuming ASA1 is the current virtual cluster master and is also vpn1.xxxx.org. ASA 2 is vpn2.xxxx.org. A user browses to vpn.xxxx.org and terminates to ASA1, the current virtual master. ASA1 should present the vpn.xxxx.org certificate. ASA1 determines that it has the lowest load and redirects the user to vpn1.xxxx.org to terminate the WebVPN session. The user should now be presented a certificate that matches vpn1.xxxx.org. ASA2 should also have the certificate for vpn.xxxx.org in case it becomes the cluster master during a failure scenario.
    Thanks,
    Mark

    There is a bug associated with this issue: CSCsj38269. Apparently it is fixed in the interim release 8.0.2.11, but when I upgraded to 8.0.3 this morning the bug was still there.
    Here are the details:
    Symptom:
    ========
    ASA 8.0 load-balancing cluster with WebVPN.
    When connecting with a web browser to the load-balancing IP address or FQDN, the certificate sent to the browser is NOT the certificate from the trustpoint assigned for load balancing with the "ssl trust-point vpnlb-ip" command.
    Instead, it uses the SSL trust-point certificate assigned to the interface.
    This generates a certificate warning in the browser, as the URL entered in the browser does not match the CN (common name) in the certificate.
    Other than the warning, there is no functional impact if the end user accepts the warning and proceeds.
    Condition:
    =========
    webvpn with load balancing is used
    Workaround:
    ===========
    1) downgrade to latest 7.2.2 interim (7.2.2.8 or later)
    Warning: configs are not backward compatible.
    2) upgrade to 8.0.2 interim (8.0.2.11 or later)

  • [svn:bz-trunk] 21327: Updated the sample destination config to show the new "none" value for cluster-message-routing

    Revision: 21327
    Author:   [email protected]
    Date:     2011-06-02 08:51:22 -0700 (Thu, 02 Jun 2011)
    Log Message:
    Updated the sample destination config to show the new "none" value for cluster-message-routing
    Modified Paths:
        blazeds/trunk/resources/config/messaging-config.xml

    Thanks, Carlo, for your reply.
    I have read the link again and you are correct that by using the preferred command together with localhost under the POTS dial-peer, I can now select the correct path for my outbound calls. I'm just not very strong with dial-peers and translation rules at the moment.
    I will try this solution during the weekend and let you know. But it would have been better if there were a sample configuration for this option.

  • I have an iPhone 5. I have no idea of the iOS version on it. Recently I broke my iPhone screen and it's showing nothing. It was never synced to iTunes or to any PC. When I tried to connect the iPhone to iTunes on my PC it was asking for the passcode, which

    I have an iPhone 5. I have no idea of the iOS version on it. Recently I broke my iPhone screen and it's showing nothing. It was never synced to iTunes or to any PC. When I tried to connect the iPhone to iTunes on my PC, it asked for the passcode, which I had forgotten ages ago. Please suggest possible ways to recover my iPhone.

    Since the screen doesn't work, it will be challenging to get it working, even through iTunes. You will need to ask yourself whether you want to invest more money in it or get a new one, especially since the contents will be lost because you don't know the passcode. Apple will gladly replace the screen for a cost that represents a significant portion of the price of a new phone. Once the screen is done, you may or may not be eligible for the free iPhone 5 Battery Replacement Program.
    Or, if you decide to salvage it and are strong of heart and firm of hand, you can replace the screen yourself for less. See here, and note they carry kits with all the needed tools and parts.

  • Problem with internet. When I open System Preferences, Network, a message drops down: 'Your network settings have been changed by another application'. I click OK, but it drops the message again and again, preventing me from doing anything about the setting.

    Problem with internet. When I open System Preferences > Network, a message drops down: 'Your network settings have been changed by another application.' I click OK, but it drops the message again and again, preventing me from doing anything about the settings.

    A Fix for "Your network preferences have been changed by another application" Error 
    In the Library/Preferences/SystemConfiguration/ folder delete the following:
    com.apple.airport.preferences.plist
    NetworkInterfaces.plist
    preferences.plist
    com.apple.nat.plist
    You will have to re-configure all your network settings since deleting.
    (10.4.10)
    Use Software Update to update your OS to last version of Tiger.  Install all the other updates that goes along w/it.
