JMQ cluster and unstable connections

Hello all.
I have a few architectural questions about building an OpenMQ message-passing infrastructure between multiple offices which do not always have on-line internet connections. We also need to distribute the MQ mesh configuration info.
From the scale of my questions it seems that I or our developers don't fully understand MQ: I think many of our problems and/or solution ideas (below) should already be addressed within the MQ middleware, not by us from outside it.
The potential client currently has a (relatively ugly) working solution which they wanted to revise for simplification, if possible, but the matter is not urgent and answers are welcome in any timeframe :)
I'd welcome any insights, ideas and pointers as to why our described approach may be plain wrong :)
To sum this post up, here's my short questionnaire:
1) What is a good/best way to distribute MQ mesh config when not all nodes are available simultaneously?
2) What are the limitations on number of brokers and queues in one logical mesh?
3) Should we aim for separate "internal" and "external" MQ networks, or can they be combined into one large net?
4) Should we aim for a partial solution external to OpenMQ (such as integration with SMTP for messaging, or SVN for config distribution), or can this quest be solved within OpenMQ functionality?
5) Can a clustered broker be forced to fully start without available master broker connection?
6) Are broker clusters inherently a local-network concept, or is there some standard solution (pattern) for geographically dispersed MQ clusters?
7) How can we enforce pushing of messages from one broker to another? Are priority assignments available for certain brokers and "their" queues?
Detailed ramblings follow below...
We are thinking about implementing JMQ in a geographically dispersed project, where it will be used for asynchronous communications connecting application servers in different branch offices with a central office. The problematic part is that the central and especially the branch offices are not expected to be always on-line - hence the MQ: whenever a connection is available, queued messages (requests, responses, etc.) are to be pushed to the other side's MQ broker. And if all goes well with the project, there may eventually be hundreds of such branch offices, more than one central office for failover, and a mesh of MQ interconnection agreements.
The basic idea is simple: an end-user of the app server in a branch generates a request; this request is passed via message queue to another branch or to a central office; another app server processes it to generate a response, and the answer is queued back to the requesting app server. Some time after the initial request, the end-user would see on his web page that the request's status has been updated with a response value. A branch office's app server and MQ broker may be an appliance server distributed as a relatively unmaintained "black box".
During the POC we configured several JMQ broker instances in this manner and it worked. From what I gather from our developers, each branch office's request and response queues are separate destinations in the system; requests (from a certain branch) may be subscribed to by any node, and responses (to a certain branch) may be submitted by any node. This may be restricted by passwords and/or certificate-based SSL tunnels, for example (suggestions welcome, though).
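To illustrate the per-branch destinations described above, here is a minimal sketch of the naming convention we have in mind (the queue-name pattern and branch IDs are hypothetical examples, not our actual config; a JMS client would pass such names to session.createQueue()):

```java
// Hypothetical naming scheme for the per-branch request/response destinations.
// Any authorized node may consume a branch's requests or produce its responses;
// access would then be restricted per destination via broker ACLs and/or SSL.
final class BranchDestinations {
    private BranchDestinations() {}

    /** Queue carrying requests originating from the given branch. */
    static String requestQueue(String branchId) {
        return "branch." + branchId + ".requests";
    }

    /** Queue carrying responses addressed back to the given branch. */
    static String responseQueue(String branchId) {
        return "branch." + branchId + ".responses";
    }
}
```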
However, we also wanted to simplify spreading the configuration of the MQ network by designating "master brokers" (as per the JMQ docs), which keep track of the config, with every other broker downloading the cluster config from its master. Perhaps this was wrong on our side, and a better idea is available to avoid manually reconfiguring each MQ broker whenever another broker or a queue destination is added?
The problem here is that an "MQ cluster" seems to be a local-network-oriented concept. When the master broker is in the central office and the interconnection is not up, branch brokers loop indefinitely waiting for a connection to the master and reject client connections (the published JMS port remains 0, with appropriate comments in the log files). In this case the branch office cannot function until its JMQ broker connects to the central office, updates the MQ config, and permits client connections to itself.
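For reference, the relevant part of each branch broker's config.properties in our POC looked roughly like this (host names invented); as far as I can tell, as long as imq.cluster.masterbroker is set and the master is unreachable, the broker will not publish its JMS port:

```properties
# Conventional (non-HA) broker cluster - hypothetical hosts.
imq.cluster.brokerlist=central.example.com:7676,branch01.example.com:7676
# Master broker holding the cluster configuration change record.
# While this host is unreachable, the branch broker keeps retrying and
# rejects client connections (the published JMS port remains 0).
imq.cluster.masterbroker=central.example.com:7676
```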
We are also not certain (and it seems to be a popular question on Google, too) how to force a queued message to be pushed to the other side - to the broker "nearest" to the target app server. Can this be done within the OpenMQ config, or does it require an MQ client application to read and relay such messages somehow? For example, when a branch office's "request" queue has a message and a connection to the central office comes online, this request data should end up in the central office's broker. A message which physically remains in the branch office broker when the interconnection goes offline is of little use to the central app server...
I was thinking along the lines of different-priority brokers for certain destinations, so that messages would automatically flow from farther brokers to nearer ones - like water flowing from higher ground to lower ground in an aqueduct. It would then be possible to easily implement transparent routing between branch offices (available at non-intersecting times) via the central office (always up).
How many brokers and destinations can be interconnected at all (practically, or theoretically/hardcoded)?
Possibly, there are other means to do some or all of this?
Ideas we've discussed internally include:
* Multiple networks of MQ brokers:
Have an "internal" broker (cluster) in each branch office which talks to the app server, and a separate "external" broker which is clustered with the central office's "master broker". Some branch office application transfers messages between two brokers local to its branch. Thus the local appserver works okay, and remote queuing works whenever network is available.
Possibly, the central office should also have separate internal and external broker setups?
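A minimal sketch of the relay application this idea would require, with the brokers abstracted behind a tiny interface so the store-and-forward logic is visible (the interface and class names are hypothetical, not OpenMQ API; a real implementation would wrap javax.jms sessions and move each message inside a transaction so nothing is lost if the link drops mid-transfer):

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Hypothetical abstraction over a broker destination - not OpenMQ API. */
interface Endpoint {
    String poll();          // next pending message, or null if none
    void send(String msg);  // enqueue a message
}

/** In-memory stand-in for a broker destination, for demonstration only. */
class InMemoryEndpoint implements Endpoint {
    private final Queue<String> q = new ArrayDeque<>();
    public String poll() { return q.poll(); }
    public void send(String msg) { q.add(msg); }
}

class BranchRelay {
    /** Drain the local "internal" queue into the remote "external" one
     *  whenever the link is up; returns the number of messages moved. */
    static int relay(Endpoint local, Endpoint remote) {
        int moved = 0;
        String msg;
        while ((msg = local.poll()) != null) {
            remote.send(msg); // real code: consume + send in one transaction
            moved++;
        }
        return moved;
    }
}
```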
* Multi-tiered net of MQ brokers:
Perhaps there can be "clusters of clusters" - with "external" tier-1 brokers directly serving as master brokers for local "internal" tier-2 clusters? Otherwise this is the "multiple networks of MQ brokers" idea above, minus the extra app relaying messages between the two local brokers.
* Multi-protocol implementation of MQ+SMTP(+POP3/IMAP)
Many of our questions are solvable with SMTP. That is, we can send messages to a mailbox residing on a specific server (local to each office); local app-server clients retrieve them by POP3 from the local mailbox server and then submit responses over SMTP. This is approximately how the client solves this task today.
We don't really want to reinvent the wheel, but maybe this approach can also be applied to JMQ (async traffic not over the MQ protocol, but over SMTP - like SOAP-over-SMTP vs. SOAP-over-HTTP web services)?
* HTTP/RCS-based config file:
The OpenMQ config allows the detailed configuration file to be available on the local filesystem or on a web server. It is possible to fetch the config file from the central office whenever the connection is up (wget, svn/cvs, etc.) and restart the branch broker.
Why is this approach good or bad? Advocates welcome :)
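If the config-file route is taken, OpenMQ can apparently fetch the shared cluster config from a URL itself via the imq.cluster.url property, which might remove the need for an external wget/SVN step (the fragment below uses invented host names; as far as I understand, the file is read at broker startup, so a restart is still needed after changes):

```properties
# In each broker's config.properties - point at one shared cluster config file:
imq.cluster.url=http://central.example.com/mq/cluster.properties

# cluster.properties, maintained centrally (e.g. under SVN) and served over HTTP:
# imq.cluster.brokerlist=central.example.com:7676,branch01.example.com:7676
# imq.cluster.masterbroker=central.example.com:7676
```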
Thanks for reading up to the end,
and thanks in advance for any replies,
//Jim Klimov


Similar Messages

  • Windows 2008 R2 cluster and FTP-connection

    Hi,
    We have a Windows 2008 R2 server cluster. There is a resource which is connected to by a third party
    via FTP. When they establish the connection, an IP address is configured to a certain node.
    When we manage the cluster, it might happen that the resource is left on the other node, and the next
    time they establish the FTP connection it says it can't connect. I've made similar usernames with
    similar passwords on both nodes.
    Is there a possibility to configure the environment so that the FTP connection does not depend on which
    node the resource is on?
    Br, Petteri

    Hi Peter Castle,
    When you want to keep the session on one server you must use the Single Affinity option.
    The related KB:
    Specifying the Affinity and Load-Balancing Behavior of the Custom Port Rule
    http://technet.microsoft.com/en-us/library/cc759039(v=ws.10).aspx
    Hope this helps.

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail. I am very surprised by the expertise of the MS engineers. They had no idea what a converged network even was. I had way more
    experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone, including our consultants, says my setup is correct.
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team, then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks, and a port profile with the uplink defined as the team and
    networks selected. Created a logical switch and associated the port profile. When I deploy the logical switch and create virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live
    migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run get-vmnetworkadaptervlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard
    switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note; if you create the teaming configuration outside of VMM, and then import the hosts to VMM, then VMM will not recognize the configuration. 
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • How to have RAC accept Forms and Reports connection to non specific node?

    Hi all,
    I have posted this in the Forms forum w/ no result, hoping someone here can help;
    Oracle Forms and Reports services, not the entire app server. Version 10.1.2.0.2, one server
    Oracle database, RAC Version 11.1.0.6.0, 3 nodes
    Instance=RMSTEST
    Nodes = RMSTEST1, RMSTEST2, RMSTEST3
    I can TNSPING RMSTEST w/o issue
    When I have userid in formsweb.cfg = user/pw@RMSTEST we are being prompted when logging into forms and reports to supply the password for the RMSTEST instance, enter the password and we are in.
    So we changed this in formsweb.cfg = user/pw@RMSTEST? (?= any of the nodes 1-3) and the password prompt is no longer being requested.
    What do I need to do to have Forms and Reports connect to the RMSTEST cluster directly for it to determine which node to connect to instead of having an unbalanced node(s)?
    Thanks,
    Steve

    This is what I have;
    RMSTEST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.111)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.112)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.113)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = RMSTEST)
        )
      )
    LISTENERS_RMSTEST =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.111)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.112)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.113)(PORT = 1521))
      )
    RMSTEST3 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.113)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = RMSTEST)
          (INSTANCE_NAME = RMSTEST3)
        )
      )
    RMSTEST2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.112)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = RMSTEST)
          (INSTANCE_NAME = RMSTEST2)
        )
      )
    RMSTEST1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.12.111)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = RMSTEST)
          (INSTANCE_NAME = RMSTEST1)
        )
      )

  • "unstable connection" when using my External Bluray burner!

    I have a Mac Pro Quad (2008) and a Pioneer BDR-205BK in an external caddy.
    I have a connection from one of the hidden internal SATA ports, mounted as eSATA on the rear of the tower. My Blu-ray burner has two methods of connection - USB and eSATA.
    In both modes the drive works fine, reading and playing films. When I try to burn, I get an "unstable connection" error and the drive stops responding. I have no idea what's going on!
    I am trying to burn with Toast v10, which has Blu-ray support. As I mentioned, prior to burning the drive shows up as expected and all seems well.
    I am running OS X 10.5 - latest patch...
    Please can someone offer some advice....

    I have heard of people getting this error on SL and running old versions of Toast... but it looks like you don't have either of those issues!
    Have you tried burning a regular DVD with the Finder? Same results? Do you only get the error when trying to burn Blu-ray in Toast?
    When I installed my BD drive I had issues with the drive not responding; if I restarted, the drive would work again, then stop responding again after about 5-10 minutes.
    To fix that issue I went to System Preferences > Energy Saver and unchecked "Put hard disk(s) to sleep when possible".

  • Using ASM in a cluster, and new snapshot feature

    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using listener, although the instance name is part of the database file path. How does this work please?
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    As of 11g release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own diskgroup, and I will still need to use alter database begin backup, correct?
    Thanks!

    Markus Waldorf wrote:
    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    Well, you are missing the point that it entirely depends on the architecture you are going to use. If you use ASM on a single node, it is available right there. If it is installed for a RAC system, an ASM instance runs on each node of the cluster and manages the storage lying on the shared storage. The ASMB process is responsible for exchanging messages, taking responses and pushing the information back to the RDBMS instance.
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using a listener, although the instance name is part of the database file path. How does this work please?
    A listener is not needed, Markus, as its job is to create server processes, which is NOT the job of the ASM instance. The ASM instance connects the client database to itself immediately when the first request comes from that database to do any operation over the disk group. As I mentioned above, ASMB then carries the request/response tasks back and forth.
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    Each node has its own ASM instance running locally. In the case of RAC, the ASM SID is +ASMn, where n is 1, 2, ..., up to the number of nodes that are part of the cluster.
    As of 11g release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own diskgroup, and I will still need to use alter database begin backup, correct?
    You are probably talking about the ACFS snapshot feature of 11.2 ASM. This is not a backup of the disk group; it is more like an OS-level backup of the mount point created over ASM's ACFS mount point. Oracle provides this feature so that you can back up, say, an Oracle home running on an ACFS mount point; in the case of an OS-level failure, such as someone deleting a folder from that mount point, you can get it back from the ACFS snapshot. For the disk group itself, the only backup available is the metadata backup (and restore), but that does NOT bring the database's data back either. For database-level backups you would still need RMAN.
    HTH
    Aman....

  • Cluster and ConnectionPooling

    Hi,
    I need your help.
    I have a WLS cluster with 2 instances, and my webApp uses connection pooling.
    When the application looks up a connection in the pool it gets one, but the
    connection is never released.
    In my opinion, the webApp gets the connection and then loses the reference
    needed to release it.
    How do I have to configure my cluster to work well with that stuff?
    Thanks a lot,
    Markus.

    What if the code between db.connect() and db.disconnect() throws an
    exception? In that case the disconnect() would never happen and you
    have basically lost a connection from your pool. Could it be that in the
    clustered version the code between those calls throws an exception
    that the non-clustered version does not throw?
    Regards
    alex
    "M. Hammer" wrote:
    Here it is...
    But I don't think that you'll see anything in the source.
    My WebApp works well standalone, but not in a CLUSTER.
    By the way, when I add a new WLS instance to a standalone running one,
    connections are used and never released.
    To be exact, the connection is opened in the partner instance of the
    requested one.
    So I think that the cluster makes the connection-pool request to both
    Instance 1 and Instance 2, and then on one connection the reference is lost.
    Hope you know what I mean. ;o)
    public class MyServlet extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            Db db = mySession.getDb();
            if (mySession.useConnectionPooling && !mySession.useStaticDb) {
                db.connect();
            }
            // ... work with the connection ...
            if (mySession.useConnectionPooling && !mySession.useStaticDb) {
                db.disconnect();
            }
        }
    }
    "Michael Reiche" <[email protected]> schrieb im Newsbeitrag
    news:[email protected]...
    Ok, that was the code that connects. Now the code that uses it and
    disconnects.
    Mike
    "M. Hammer" <[email protected]> wrote in message
    news:[email protected]...
    Ok, here it is...
    public boolean connect() {
        Hashtable jndi_env = new Hashtable();
        Context context = null;
        DataSource ds = null;
        if (driverRegistered) {
            if (isConnected()) {
                disconnect();
            }
            if (useConnectionPooling) {
                try {
                    if (activateSettings) {
                        // set JNDI settings from the app property file
                        jndi_env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_factory_initial);
                        jndi_env.put(Context.PROVIDER_URL, JNDI_provider_url);
                        jndi_env.put(Context.SECURITY_PRINCIPAL, JNDI_security_principal);
                        jndi_env.put(Context.SECURITY_CREDENTIALS, JNDI_security_credentials);
                    }
                    if (activateSettings) {
                        context = new InitialContext(jndi_env);
                    } else {
                        context = new InitialContext();
                    }
                    ds = (DataSource) context.lookup(myDataSource);
                    connection = ds.getConnection();
                } catch (SQLException ex_sql) {
                    return false;
                } catch (NamingException ex_nam) {
                    return false;
                }
            } else {
                try {
                    connection = DriverManager.getConnection(url, login, password);
                } catch (SQLException ex) {
                    return false;
                }
            }
        } else {
            return false;
        }
        return true;
    }
    "Michael Reiche" <[email protected]> schrieb im Newsbeitrag
    news:[email protected]...
    Can you post the code that gets the connection, uses it, and returns
    it
    to
    the pool?
    Mike
    "M. Hammer" <[email protected]> wrote in message
    news:[email protected]...
    Hi,
    I need your help.
    I have a WLS-Cluster with 2 instances and my webApp uses connection pooling.
    By looking up for connections in the pool the application get one,
    but
    it
    will never be released.
    In my opinion, the webApp get the connection and looses the
    reference
    to
    release it.
    How do I have to configure my cluster to work well with that stuff ?
    Thanks a lot,
    Markus.
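    The failure mode alex describes - an exception between connect() and disconnect() leaking the connection - is conventionally closed off with try/finally. A self-contained sketch (the Db class here is a stand-in with invented behavior; only the pattern matters):

```java
/** Stand-in for the poster's Db wrapper, so the pattern is self-contained. */
class Db {
    boolean connected;
    void connect() { connected = true; }
    void disconnect() { connected = false; }
    void doWork() { throw new RuntimeException("query failed"); }
}

class SafeRelease {
    /** Always releases the connection, even when the work in between throws. */
    static boolean useConnection(Db db) {
        db.connect();
        try {
            db.doWork();
            return true;
        } catch (RuntimeException e) {
            return false;        // the exception no longer leaks the connection
        } finally {
            db.disconnect();     // runs on both the normal and the error path
        }
    }
}
```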

  • RedHat Enterprise Cluster and Cisco IGMP Snooping/Querying

    Has anyone else had any experience with IGMP Snooping/Querying and RedHat Enterprise Cluster?
    We have been experiencing a large amount of problems with this functionality.
    We are running IGMP Querying in our environment and we recently set up a second querier.
    Here's the steps we took
    Existing querier:  192.168.3.248
    Everything was running fine.
    Added a new querier on a different switch: 192.168.3.247
    At this point, all of our RedHat Enterprise Clusters fenced themselves and needed to be restarted in order to restore
    access.  In order to restart the RedHat Enterprise Clusters, the physical servers must be rebooted.
    Are there any known issues with RedHat Enterprise Clustering and Cisco Switches (3750
    series)?  I would expect the querier change to be seamless, but it does not seem that this
    is the case. 

    Hi,
    In our organization we have a Red Hat cluster with 2 Cisco switches (Model: Cisco WS-C2960S-24TD-L, Version: "flash:/c2960s-universalk9-mz.122-55.SE3/c2960s-universalk9-mz.122-55.SE3.bin").
    - We are using an HP c7000 chassis and the servers are on the chassis. There are 2 services, IC & Med. Each server runs one service as primary and the other as secondary.
    - The two cluster switches are connected to each other with an EtherChannel trunk (1+1) link. These 2 switches are also connected to our management switch for server-admin access to the HP chassis via the OA port. The Red Hat system has the cluster LAN (pri & sec) and the OA LAN (01 & 02 of the HP chassis) connected to the cluster switches. The management VLAN is 501 - 172.31.10.0/24.
    Problem:
    When ClusterSW01 goes down, the cluster shifts to ClusterSW02 with the cluster secondary LAN and OA2. But when the ClusterSW01 switch comes up again, the communication breaks and the cluster doesn't come up.
    I was thinking this is either STP or IGMP, though I'm not sure. As these are production systems, we couldn't do much more testing either.
    If you have faced any such issue, have experience with it, or know what the problem might be, kindly share it with me.
    Thanks,
    Adnan

  • 6398 ERRORS Cache cluster is down, restart the cache cluster and Retry. Collection was modified; enumeration operation may not execute.

    Recently I started getting these 6398 errors with a SharePoint 2013 single farm and haven't been able to fix them with any Google results. Everything seems to run fine. They usually appear overnight, averaging 5-9 event log entries daily. See
    errors below.
    Get-CacheHost
    always reports UP
    Have already tried: 
    -  Upgrading to AppFabric CU5
    -  Restart-Service AppFabricCachingService
    -  Clear Configuration Cache
    - habaneroconsulting Distributed Cache Bug
    - nobadthing Unexpected exception in feedcacheservice
    - mmman itgroove fixing the appfabric cache cluster
    - Microsoft unable to start appfabriccachingservice
    - dhasalprashantsharepoint lets-troubleshoot-sharepoint
    ...and many other readings. (Sorry, I can't post links even though I've verified my account repeatedly.)
    Errors:
    The Execute method of job definition Microsoft.Office.Server.UserProfiles.LMTRepopulationJob (ID 414cbbe9-cdb1-4f7a-beed-85fbfd8a10c7) threw an exception. More information is included below.
    Unexpected exception in FeedCacheService.IsRepopulationNeeded: Cache cluster is down, restart the cache cluster and Retry.
    The Execute method of job definition Microsoft.Office.Server.UserProfiles.LMTRepopulationJob (ID 414cbbe9-cdb1-4f7a-beed-85fbfd8a10c7) threw an exception. More information is included below.
    Unexpected exception in FeedCacheService.IsRepopulationNeeded: Connection to the server terminated,check if the cache host(s) is running .
    The Execute method of job definition Microsoft.SharePoint.Administration.SPProductVersionJobDefinition (ID 95aee52d-88a1-4355-b6b6-9d43d753414e) threw an exception. More information is included below.
    Collection was modified; enumeration operation may not execute.

    Hi,
    Please firstly go to Central Administration > Application Management > Manage Services on server, make sure the Distributed Cache service has been started on all servers.
    If you execute Get-CacheHost and Get-Cache in the SharePoint 2013 Management Shell, do they return the expected information? If they return a red error message, please refer to the article below to remove the SPDistributedCacheServiceInstance and add it back:
    http://blogs.technet.com/b/saantil/archive/2013/03/31/distributed-cache-in-sharepoint-2013-quot-unexpected-exception-in-feedcacheservice-isrepopulationneeded-cache-cluster-is-down-restart-the-cache-cluster-and-retry-quot.aspx
    Regards,
    Rebecca Tu
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected]

  • Edit RemoteApp and Desktop Connections

    As the title suggests, we want to change/edit the top left text of the RDWeb page:
    As you can see, we have changed the upper text (using PowerShell). I have scoured the internet looking for how to do this, to no avail. How can I change the text that says "RemoteApp and Desktop Connections"?

    You can edit the file "C:\Windows\Web\RDWeb\Pages\en-US\Default.aspx" on the RDWeb server. It contains all of the text strings available on the site except for the Work Resources title, which can be changed with PowerShell. If the RDWeb server is in an NLB cluster, remember that the file will need to be changed on all the servers in the cluster.

  • AP541N cluster and VLAN

    Hi.
    Simple but not obvious question.
    I've added a separate Wi-Fi network for guests with VLAN ID 300. Now I have 2 more access points. They are in a cluster, but only one is connected to the SLM2008 smart switch.
    Do I need to connect all of them to the smart switch? I do not understand how clustering and VLANs work together.

    Hello Tomasz,
    Yes, you need to connect all the APs to the switch (same bridged network). Clustering only makes all your APs act as one single entity (you don't have to connect to the second AP in a cluster separately; the same wireless configuration will apply).
    Refer to the Clustering section of the manual below for further details:
    http://www.cisco.com/en/US/docs/wireless/access_point/csbap/AP541N/administration/guide/AP541Nadmin.pdf#page139
    Hope this helps,
    Vijay
    Please rate useful posts.
    Sent from Cisco Technical Support iPad App

  • WoW + Airport = unstable connection

    I play WoW a lot and recently have experienced an issue with my wireless connection dropping during play. WoW will stutter and freeze (cursor still moves, nothing else responsive) for anywhere from 10 seconds to 5 minutes. Eventually it will unfreeze but my wireless connection will be dead. I have to turn off the airport and turn it back on.
    Also, this seems to usually only happen while flying or traveling on a mount. Seems many others with Macs are having this problem on the WoW forums. The Blizzard people have investigated and seem to think the problem is the airport driver rather than WoW itself.
    Anyone else play WoW and have this problem?

    Although I have not experienced that, my connection is also really bad and unstable via AirPort (always 3 red bars).
    Try this update:
    http://www.apple.com/support/downloads/airportextremeupdate2007004.html

  • Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch

    Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch
    Hi,
    Currently the customer has a 3-node cluster connected to the SE3510 via the Sun StorEdge[TM] Network Fibre Channel Switch (SAN_Box Manager), running Sun Cluster 3.x with disksets. The customer wants to decommission the system but still wants to access the 3510 data on a NEW system.
    Initially, I removed one of the HBA cards from one of the cluster nodes and inserted it into the NEW system, which is able to detect the 2 LUNs from the SE3510 but not able to mount the file system. After some checking, I decided to follow the steps from SunSolve Info ID 85842, as shown below:
    1. Turn off all resource groups
    2. Turn off all device groups
    3. Disable all configured resources
    4. Remove all resources
    5. Remove all resource groups
    6. metaset -s <setname> -C purge
    7. Boot to non-cluster mode: boot -sx
    8. Remove all the reservations from the shared disks
    9. Shut down all the systems
    Now I am not able to see the two LUNs from the NEW system with the format command. cfgadm -al shows the following:
    Ap_Id  Type       Receptacle  Occupant    Condition
    c4     fc-fabric  connected   configured  unknown
    1. Is it possible to get the data back and mount it accordingly?
    2. Does any configuration need to be done on the SE3510 or the SAN Manager?
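    One quick way to spot why LUNs show up in cfgadm but not in format(1M) is to filter the cfgadm -al listing for attachment points that are connected but unconfigured. This is only a sketch: the second Ap_Id in the sample below is a made-up example of a fabric device entry, not taken from the output above.

    ```shell
    # Sample cfgadm -al style output; the disk Ap_Id is hypothetical.
    cfgadm_output=$(cat <<'EOF'
    Ap_Id        Type       Receptacle  Occupant      Condition
    c4           fc-fabric  connected   configured    unknown
    c4::50020f23 disk       connected   unconfigured  unknown
    EOF
    )

    # Print a suggested cfgadm command for every attachment point that is
    # connected but not yet configured (skipping the header line).
    echo "$cfgadm_output" | awk 'NR > 1 && $3 == "connected" && $4 == "unconfigured" {
      print "Needs: cfgadm -c configure " $1
    }'
    ```

    Any Ap_Id it reports would then typically be brought online with cfgadm -c configure before re-running format.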

    First, you will probably need to change the LUN masking on the SE3510 and probably the zoning on the switches to make the LUN available to another system. You'll have to check the manual for this as I don't have these commands committed to memory!
    Once you can see the LUNs on the new machine, you will need to re-create the metaset using the commands that you used to create it on the Sun Cluster. As long as the partitioning hasn't changed from the default, you should get your data back intact. I assume you have a backup if things go wrong?!
    Tim
    ---

  • WLS 6.1 - targeting pools to BOTH cluster and servers?

    Hello everybody,
    I'm currently debugging JDBC issues on an existing WLS 6.1 environment.
    I see that the connection pools and their data sources are targeted to BOTH a cluster AND each of the individual WLS instances that compose THAT cluster.
    In your opinion, is that a correct configuration?
    The WLS documentation states that a pool/source pair may actually be targeted to both a cluster and an individual server, but it does not specify anything about the individual server belonging or not to that cluster.
    Can anyone help, please?
    TIA!!!
    Paola R.


  • Pathetic speeds and unreliable Connection

    Hi
    I have been having pathetic speeds and an unstable connection on my BT broadband since the start of 2010, which I suspect coincides with the rollout of 21CN at my local exchange. I mainly use my broadband to talk to my family over Skype, but since the start of 2010 the quality of calls has gradually gone downhill, and by August the connection had become almost impossible to use: it would randomly disconnect, and the audio and video quality on Skype became utterly pathetic.
    On talking to the call centre, I was told that my router was faulty and that, since my contract had run out in August, I needed to sign up to a new contract to receive a replacement router. I signed up to a new one-year contract on 2nd September. After the usual hassle of a few weeks' delay, and arguing with the call centre to get the router sent (they cancelled the router delivery after making the sale), the broadband was no better when the router eventually arrived. On complaining further to BT, I was told there was a fault at the exchange which they would fix, after which the broadband speed would go up. Guess what: nothing happened to the speed.
    In 2009 the typical speed on my connection, according to speed tests, was 5.5 Mbps down and 500 kbps up; now I was stuck with 3 Mbps and 270 kbps. Then things took a turn for the worse: in the last week of October the download speed dropped to 50 kbps, and on checking with the BT speed test I found my IP profile was limited to 65 kbps. I called BT to complain again, and this time they sent an engineer to my house. While doing his tests, he casually told me that the line doesn't have any problems, but that even though I should be connected at ADSL2+, none of his checks confirmed this was the case: the switches may be ADSL2+ but the backend is not connected. I don't know what to make of this, except perhaps to say that BT is cheating its customers.
    Please feel free to make suggestions to help me. My router stats are:
    ADSL line status
    Connection information
    Line state Connected
    Connection time 0 days, 0:43:43
    Downstream 4,188 Kbps
    Upstream 612 Kbps
    ADSL settings
    VPI/VCI 0/38
    Type PPPoA
    Modulation ITU-T G.992.5
    Latency type Interleaved
    Noise margin (Down/Up) 14.1 dB / 4.2 dB
    Line attenuation (Down/Up) 52.0 dB / 29.4 dB
    Output power (Down/Up) 18.8 dBm / 12.4 dBm
    Loss of Framing (Local) 39
    Loss of Signal (Local) 80
    Loss of Power (Local) 0
    FEC Errors (Down/Up) 3152 / 3
    CRC Errors (Down/Up) 64 / 2147480000
    HEC Errors (Down/Up) nil / 89761
    Error Seconds (Local) 2
    and the latest BT Speed Tester results are as follows:
    Thanks
    Zuhair
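    For what it's worth, the ADSL stats above can be given a rough sanity check with a throwaway script like the one below. The thresholds are rule-of-thumb assumptions on my part, not BT's official criteria; a noise margin well above the usual 6 dB target is often a sign that the line has been banded after repeated drops, which would fit the capped IP profile described above.

    ```shell
    # Values copied from the router stats posted above.
    noise_margin_down=14.1   # dB, from "Noise margin (Down/Up)"
    attenuation_down=52.0    # dB, from "Line attenuation (Down/Up)"

    # Rule-of-thumb checks (assumed thresholds, not official BT criteria).
    awk -v nm="$noise_margin_down" -v att="$attenuation_down" 'BEGIN {
      if (nm > 9)
        print "Noise margin raised above the usual 6 dB target - line has likely been banded"
      if (att > 45)
        print "High attenuation - long line, so ADSL2+ gains would be modest anyway"
    }'
    ```

    With the figures posted, both checks fire, which is consistent with the DLM having stabilised the line at a lower sync rate after the October drops.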

    Hi Zuhair,
    Thanks for posting. I can check what's going on with the connection. Drop me an email at the address in my profile with your account details and a link to this post for reference.
    Cheers
    David
    BTCare Community Mod
    If we have asked you to email us with your details, please make sure you are logged in to the forum, otherwise you will not be able to see our ‘Contact Us’ link within our profiles.
    We are sorry but we are unable to deal with service/account queries via the private message(PM) function so please don't PM your account info, we need to deal with this via our email account :-)
