ESA C160 and C170 in one cluster?

  Hello Community,
is there anything I should take care of if I want to run a C160 and a C170 in one management cluster?

Raph,
In order to successfully create a cluster, both appliances (ESA - Email Security Appliance) must be running the exact same version and build. So, to answer your question: no, that will not work.
It is interesting, though, that your devices cannot run the same version.
Either send us the serial number (only the digits after the hyphen will do) or open a TAC case and ask for assistance.
I hope this helps and if it does, please mark the question as answered.
Regards,
-Valter

Similar Messages

  • Connection problems with ESA C160 and WSA S160

    Currently I have deployed ESA C160 and WSA S160 devices in a network, but I cannot remotely connect to the devices.
    I have installed a Cisco 2811 terminal server with octal cable connections and cannot seem to get terminal access.
    I have also connected the management interface to a local switch and provisioned VLANs on subnet 192.168.42.X to allow for access, but no connection attempt succeeds.
    I am wondering if there is a specific cable configuration or connection which will allow me to access the appliances for configuration.
    Any help is appreciated!

    Hi,
    Are you attempting a remote connection to the serial ports via the 2811? I may be misunderstanding your post.
    The serial ports are 9600 baud, 8N1. Typically you will use a null-modem cable for the connection.
    For the network, you should be able to connect to the management interface; SSH and HTTPS should be enabled by default. If you connect directly to this port using a crossover cable, can you establish a connection?
    If the network connection is failing, I would first start with the serial port so you can verify that the configuration is as you expect it to be, meaning the IP address and the services enabled. If everything checks out in the configuration, I would next test using a crossover cable on the same subnet. If that works, I would then connect the appliance to a switch and test from there. The biggest questions that come up are: can you route to the appliance over the network, and can you resolve the host over the network?
    Christopher C Smith
    CSE
    Cisco IronPort Customer Support
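    As a quick sanity check of the network path described above, a short Python sketch can confirm whether the SSH and HTTPS services on the management interface answer at all; the IP below is a placeholder on the 192.168.42.x subnet mentioned in the question:

    # Probe the ESA management interface for the services that should be enabled by default.
    import socket

    MGMT_IP = "192.168.42.10"  # hypothetical management address - adjust to your subnet

    for name, port in (("SSH", 22), ("HTTPS", 443)):
        try:
            with socket.create_connection((MGMT_IP, port), timeout=5):
                print(f"{name} (port {port}): reachable")
        except OSError as exc:
            print(f"{name} (port {port}): NOT reachable ({exc})")

    If neither port answers, the problem is routing or the appliance configuration rather than the client.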

  • Update issues when ESA Virtual replacing C170 Appliance in Cluster Config

    I have opened a TAC ticket on this one but was curious if any others experienced the same issue.
    I have C170s in Centralized ClusterConfig. I recently learned about the Virtual ESAs after reading about the EOL for C170s in a few years, and I think the Virtual ESAs will add a lot of flexibility. The only issue I've noticed so far when trying to join Virtual ESAs to our cluster is with updates.
    The first Virtual ESA I brought up, I was able to update it initially so it could join the cluster; I thought maybe I had messed up the network config somewhere. So after messing with it over the weekend and opening a TAC case with Cisco, I thought I would try configuring the second Virtual ESA. Sure enough, updates were working with no errors. I hooked it up enough to do some quick testing to make sure the listeners were working. Feeling pretty good about it, I joined the cluster. Everything copied over configuration-wise; I also set up a new ClusterGroup for the Virtual ESAs so I could customize the listeners and interfaces. Before I got too crazy, I quickly realized that updates stopped working on the second virtual appliance.
    So I am just curious whether there are some configuration compatibility issues between the appliance hardware and the Virtual ESAs that we should be aware of. I found some great information on the forums about forcing updates and reading the tail of the updater_logs, which produced the following:
    Info: Dynamic manifest fetch failure: Received invalid update manifest response
    I found the fix for non-cluster configured Virtuals for this Update error:
    http://www.cisco.com/c/en/us/support/docs/security/email-security-appliance/118065-maintainandoperate-esa-00.html
    But this does not work for clusterconfig.
    So is my best course of action to:
    1. Run clusterconfig on one of my virtuals,
    2. Remove the virtual from the clusterconfig after the config is migrated,
    3. Apply the CLI fix to the post-cluster-config virtual so it points to the right update servers,
    4. Create a new cluster with the now fully updating Virtual-Uno ESA,
    5. Join the remaining virtuals to the newly created cluster and phase out the old physical cluster?
    Obviously I left out all the fine details about MX records, IP addresses, centralized reporting, and spam and outbreak reporting. I just want to make sure I'm not missing something; maybe I should tear down the old clusterconfig first and set it to point to the update servers in the article above. Then I can phase out my old physicals later on down the line as they break down over time, and avoid configuring two clusters for every rule change.

    So it looks like I have found the answer to my own question. The fix in the following article does apply to a Virtual ESA in a cluster.
    http://www.cisco.com/c/en/us/support/docs/security/email-security-appliance/118065-maintainandoperate-esa-00.html
    Some things I'd still like to figure out are whether this change will stick, and whether new virtual nodes will pick up the incorrect update URL when I join them to the cluster. I made the changes and all my hosts seem to be updating fine. I will wait and see how well they do over the next few days and let them bake in a little before I push e-mail through them.
    Step by Step how it looks with a cluster config from the CLI:
    (Machine esa1.yourcompany.com)> updateconfig
    Service (images):                              Update URL:
    Feature Key updates                            http://downloads.ironport.com/asyncos
    RSA DLP Engine Updates                         Cisco IronPort Servers
    PXE Engine Updates                             Cisco IronPort Servers
    Sophos Anti-Virus definitions                  Cisco IronPort Servers
    IronPort Anti-Spam rules                       Cisco IronPort Servers
    Outbreak Filters rules                         Cisco IronPort Servers
    Timezone rules                                 Cisco IronPort Servers
    Enrollment Client Updates (used to fetch certificates for URL Filtering)
                                                   Cisco IronPort Servers
    Cisco IronPort AsyncOS upgrades                Cisco IronPort Servers

    Service (list):                                Update URL:
    RSA DLP Engine Updates                         Cisco IronPort Servers
    PXE Engine Updates                             Cisco IronPort Servers
    Sophos Anti-Virus definitions                  Cisco IronPort Servers
    IronPort Anti-Spam rules                       Cisco IronPort Servers
    Outbreak Filters rules                         Cisco IronPort Servers
    Timezone rules                                 Cisco IronPort Servers
    Enrollment Client Updates (used to fetch certificates for URL Filtering)
                                                   Cisco IronPort Servers

    Service (list):                                Update URL:
    Cisco IronPort AsyncOS upgrades                Cisco IronPort Servers
    Update interval: 5m
    Proxy server: not enabled
    HTTPS Proxy server: not enabled
    Choose the operation you want to perform:
    - SETUP - Edit update configuration.
    - CLUSTERSET - Set how updates are configured in a cluster
    - CLUSTERSHOW - Display how updates are configured in a cluster
    []>dynamichost
    Enter new manifest hostname:port
    [update-manifests.ironport.com:443]>update-manifests.sco.cisco.com:443
    Choose the operation you want to perform:
    - SETUP - Edit update configuration.
    - CLUSTERSET - Set how updates are configured in a cluster
    - CLUSTERSHOW - Display how updates are configured in a cluster
    []> 
    (Machine esa1.yourcompany.com)> commit

  • SAP Installation in Cluster for ECC Ehp4 SR1 in one cluster

    Dear Experts,
    The platform is Windows 2008, SQL Server 2008, ECC Ehp4 SR1.
    In our project, we are implementing ECC 6.0 Ehp4 SR1. Prior to this release, we used to have the ABAP & JAVA (dual) stack in a single installation. But going through the installation guides and posts on SDN, I understood that ABAP and JAVA now have to be installed separately with different SIDs.
    We have installed the development system with separate SIDs for the ABAP stack and the JAVA stack on the same box. Similarly, we did it for the quality system also.
    Now, we need to install the production system, which will be in a cluster. Earlier we have done cluster installations where the ABAP & JAVA stack would come in the same cluster. But now, since ABAP and JAVA have to be installed with separate SIDs, can I install both the ABAP and JAVA stacks in the same cluster with two nodes, or do we need separate clusters for ABAP and JAVA? As of now, we only have two nodes available. If we need to go with separate clusters for ABAP and JAVA, we need to get two additional nodes.
    So, let me know whether we can use the existing two nodes to create a cluster and install ABAP and JAVA with different SIDs on it. I have gone through the installation guide and my understanding is that it can be done. Hence let me know whether I can go with a single cluster, and if so, what the advantages and disadvantages would be.
    Thanks & Regards,
    Sharath

    Hi Sharath Babu,
    The ASCS & SCS instances must be installed and configured to run on the two MSCS nodes in one MSCS cluster. Of course, ESR is also along the same lines.
    In brief, for each SAP system you have to install one central instance and at least one dialog instance.
    For example, your local instance on both nodes should have the items listed below:
    <drive>:\usr\sap\<SID>\SYS
    <drive>:\usr\sap\<SID>\ASCS20
    <drive>:\usr\sap\<SID>\SCS10
    And the folders above are junction points to the central instance, which is built on the SAN drive with a similar set of folders:
    <drive>:\usr\sap\<SID>\SYS
    <drive>:\usr\sap\<SID>\ASCS20
    <drive>:\usr\sap\<SID>\SCS10
    Regards
    Sekhar
    Edited by: sekhar on Nov 27, 2009 10:25 AM

  • What's the easiest way to select all my files from all the folders on a hard drive and place into one folder?

    Hi there,
    I have about 30,000 images all in hundreds of folders. I'm wondering what the easiest way is to get them all into one folder so I can select them and convert them all from .psd, .tiff, etc. to .jpg. The reason I'm doing this is that the folder structure is such a mess that I'm just going to import them all into Aperture to sort everything. But the .tiffs and .psds are 100 MB each, so I want to scale them down to .jpgs that are only 4 or 5 MB before I even think about importing them into Aperture.
    I tried doing a search for "kind is image" and it shows them all, but a ton of them have the same name, so when I try to select all and move them into one folder it tells me I can skip each one or stop copying.
    Any thoughts or ideas on this?
    Thanks,
    Caleb
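    A minimal Python sketch of the flatten-and-convert step described above, assuming the Pillow imaging library is installed; the source and destination paths are placeholders, and duplicate file names are disambiguated with a counter instead of being skipped:

    # Walk every subfolder, convert images to JPEG, and place the results in one
    # flat folder, renaming on name collisions instead of skipping.
    import os
    from PIL import Image  # Pillow; note its PSD support is read-only and limited

    SRC = "/Volumes/PhotoDrive"           # placeholder source drive
    DST = "/Volumes/PhotoDrive/flat_jpg"  # placeholder output folder
    os.makedirs(DST, exist_ok=True)

    for root, _dirs, files in os.walk(SRC):
        if os.path.abspath(root).startswith(os.path.abspath(DST)):
            continue  # don't re-scan the output folder
        for name in files:
            base, ext = os.path.splitext(name)
            if ext.lower() not in (".psd", ".tif", ".tiff", ".png"):
                continue
            out = os.path.join(DST, base + ".jpg")
            counter = 1
            while os.path.exists(out):  # handle duplicate file names
                out = os.path.join(DST, f"{base}_{counter}.jpg")
                counter += 1
            with Image.open(os.path.join(root, name)) as im:
                im.convert("RGB").save(out, "JPEG", quality=85)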

    Hi russelldav,
    one note on your data handling:
    When each of the 50 participants sends the same 60 "words", you don't need 3000 global variables to store them!
    You can reorganize those data into a cluster for each participant, and use an array of clusters to keep all the data in one "block".
    You can initialize this array at the start of the program for the maximum number of participants; there is no need to (dynamically) add or delete elements from this array...
    Edited:
    When all "words" have the same representation (I16?), you can use a 2D array instead of an array of clusters...
    Message Edited by GerdW on 10-26-2007 03:51 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
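    GerdW's advice is LabVIEW-specific, but the data layout is easy to see in a rough Python analogue; the array sizes come from the post, everything else is illustrative:

    # Rough analogue of "array of clusters" vs. a plain 2D array, preallocated
    # once for the maximum number of participants (no dynamic resizing).
    import numpy as np

    MAX_PARTICIPANTS = 50
    WORDS_PER_PARTICIPANT = 60

    # Option 1: one record ("cluster") per participant
    participants = [{"id": i, "words": [0] * WORDS_PER_PARTICIPANT}
                    for i in range(MAX_PARTICIPANTS)]

    # Option 2: a single 2D array when every "word" shares one representation (e.g. int16)
    words = np.zeros((MAX_PARTICIPANTS, WORDS_PER_PARTICIPANT), dtype=np.int16)

    # Storing the 60 values received from participant 7 (placeholder data):
    words[7, :] = np.arange(WORDS_PER_PARTICIPANT, dtype=np.int16)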

  • Show the company I work for that we should have one cluster

    The company I work for has several projects where we are going to use RAC. The issue I'm having is that they want each database to be in its own cluster hardware. I'm trying to show them that we should cluster all the hardware together and create small RAC databases in the one cluster.
    Does anyone know of someone who has written a good doc on how and why you should have one cluster versus multiple clusters?
    Here is an example of what they are thinking of doing (note this is all separate hardware; cluster1 will not be clustered with cluster2). I'm trying to show them that if we cluster it all together we get more power and may not need so much hardware:
    Project 1
    cluster1_dev
    2 nodes
    cluster2_qa
    4 nodes
    cluster3_prd
    4 nodes
    Project 2
    cluster2_dev
    2 nodes
    cluster2_qa
    4 nodes
    cluster2_prd
    4 nodes
    What I’m proposing is the following setup:
    Cluster_dev
    3 nodes
    Project1 instance cluster on all 3 nodes
    Project2 instance cluster on all 3 nodes
    Cluster_qa
    6 nodes
    Project1 instance cluster on all 6 nodes
    Project2 instance cluster on all 6 nodes
    Cluster_prd
    6 nodes
    Project1 instance cluster on all 6 nodes
    Project2 instance cluster on all 6 nodes

    I am always amazed when RAC is chosen without anyone having spelled out what types of failures are supposed to be covered.
    Maybe you really don't need RAC...
    Moreover, RAC is IMHO really not two-node oriented but grid oriented.
    A lot of small servers in the same cluster is, for me, the way to go, or at least the way to think.
    The Services feature was introduced to allow one RAC DB to be shared by multiple applications.

  • Reg: how to join more than one cluster table into one

    Hi gurus
    How can I join more than one cluster table into one?
    amk

    Hi,
    You cannot join cluster tables.
    The best way is to select from the header table and then select from the item table using FOR ALL ENTRIES of the header table.
    regards,
    Advait
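    The two-step pattern Advait describes (read the header table first, then read the item table only for the header keys found) can be sketched generically; here is a rough Python/sqlite3 analogue, not ABAP, with made-up table and column names:

    # Generic sketch of "select from header, then select items for all entries of header".
    # Table/column names (doc_header, doc_item, doc_no) are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE doc_header (doc_no TEXT PRIMARY KEY, created TEXT);
        CREATE TABLE doc_item   (doc_no TEXT, item_no INTEGER, material TEXT);
    """)

    # Step 1: read the header table
    headers = conn.execute(
        "SELECT doc_no FROM doc_header WHERE created >= ?", ("2024-01-01",)
    ).fetchall()
    keys = [row[0] for row in headers]

    # Step 2: read the item table only for the header keys found
    # (guard against an empty key list; in ABAP, FOR ALL ENTRIES on an empty
    #  internal table would select every row)
    if keys:
        placeholders = ",".join("?" * len(keys))
        items = conn.execute(
            f"SELECT doc_no, item_no, material FROM doc_item WHERE doc_no IN ({placeholders})",
            keys,
        ).fetchall()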

  • Easiest way getting one cluster node "clean" (without messages) in conv clu

    hi *,
    I would like to know if any of you have (production) experience with MQ when it comes to troubleshooting.
    For the case where something really bad happens to one of your brokers, like its file store getting bigger and bigger (no matter why, or e.g. [something similar to this|http://forums.sun.com/thread.jspa?threadID=5334175&tstart=0]),
    how do you deal with this normally?
    Until now we do not have any clustered JMS servers in our production system, since our JMS clients are always configured to access single points (single JMS servers). Nowadays we do it like this:
    turn off all producers to the JMS server
    wait some time till all messages have been consumed
    stop the JMS server
    delete its file store
    boot up the JMS server again (clean)
    start the producers again
    Since we are planning to roll out a conventional cluster with 15 nodes (brokers) and with several cross-referencing clients (clients are configured to use up to 3-5 servers to ensure they are always served), we do not know how we would do the same for one cluster node of this cluster.
    We cannot do exactly what I mentioned above, since I do not know which clients are bound to which server at any given time.
    Any idea?
    regards chris

    hi linda,
    thanks for your feedback.
    We will implement it like you suggested, by scripting a drain scenario with imqcmd.
    One thing I do not understand about your last post:
    when you say "We'll look @ adding something that removes all consumers on a service in the next release (to make this easier)",
    how would this feature help me drain a broker?
    Ideally for me it would work like this:
    1) quiesce the broker
    2) kill producers (force them to fail over)
    3) wait till all messages are gone
    4) kill consumers (force them to fail over)
    5) stop the broker.
    So my "feature request" would be killing connections based on what they are (consumers / producers / maybe all).
    Do I have to log an enhancement request for this to make your life easier?
    regards chris
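    For reference, a drain script along the lines Chris mentions could be wired up from Python by shelling out to imqcmd. The subcommands used below (quiesce bkr, list dst, shutdown bkr) and the connection options are assumptions to be verified against your Message Queue release and imqcmd documentation:

    # Hypothetical drain skeleton around imqcmd; verify subcommands and options
    # against your MQ version before relying on this.
    import subprocess
    import time

    BROKER_OPTS = ["-b", "mqhost1:7676", "-u", "admin", "-f"]  # placeholder connection details

    def imqcmd(*args: str) -> str:
        """Run one imqcmd invocation and return its output."""
        result = subprocess.run(["imqcmd", *args, *BROKER_OPTS],
                                check=True, capture_output=True, text=True)
        return result.stdout

    # 1) quiesce the broker so no new client connections land on it
    print(imqcmd("quiesce", "bkr"))

    # 2-3) producers fail over elsewhere; poll the destinations until they are empty
    #      (output parsing is left out here, as it is version specific)
    print(imqcmd("list", "dst"))
    time.sleep(60)

    # 4-5) once drained, take the broker down (the file store can then be cleaned offline)
    print(imqcmd("shutdown", "bkr"))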

  • Migrate virtual machine from one cluster to another 2012 r2 SCVMM

    The process of migrating a virtual machine from one cluster to another involves deleting the source VM/VHD/VHDX files. Is there a way to keep these files on the source after the migration is complete? We want to keep the files there just in case there are issues and we want to turn the virtual machine back up on the source.

    Hi,
    For this issue, I think you should ask in:
    http://social.technet.microsoft.com/Forums/en-US/home?category=virtualmachinemanager
    Thanks for your understanding.
    Regards.
    Vivian Wang

  • O Cluster Algorithm giving only one cluster

    hello,
    I am using the Oracle 10g O-Cluster algorithm for clustering. I have 790 rows of data with 18 attributes, all of float data type.
    I am using ODM to create the clustering, but it results in the creation of only one cluster.
    Does anybody know what the actual problem is?
    Please reply soon.
    thanks in advance

    It is possible for OCluster (OC) to return a single cluster if it did not find any natural clusters in the data. That is, it did not find any split in the data that would separate the data well into two groups. The number-of-clusters setting in OC is not the desired number of clusters but the maximum number of clusters. One way to validate this is to look at the histograms of your 18 attributes and see if there are clear peaks and valleys. In order for OC to find a split you should see at least two peaks with a reasonably deep valley (the actual criterion is statistical) in one of the attributes. If you don't see that, then OC will not create more than one cluster. I am assuming that you are using ODMR and the data preparation steps suggested by ODMR.
    You can also try to increase the sensitivity parameter. This will allow OC to accept clusters that it would not otherwise consider significant.
    If you still want to partition your data even though OC did not find any clusters, then you may try KM. KM will split the data into K clusters as long as there is enough data for that.
    --Marcos
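    To do the per-attribute check Marcos suggests (look for at least two peaks separated by a reasonably deep valley), one option is to plot a histogram for each of the 18 columns. A minimal Python sketch, assuming the 790 rows can be loaded into a pandas DataFrame from a hypothetical CSV export:

    # Plot one histogram per attribute to eyeball whether any column is clearly
    # multi-modal (peaks and valleys), which is what O-Cluster needs to find a split.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("cluster_input.csv")  # placeholder: 790 rows x 18 float attributes

    fig, axes = plt.subplots(nrows=6, ncols=3, figsize=(12, 16))
    for ax, col in zip(axes.flat, df.columns):
        ax.hist(df[col].dropna(), bins=30)
        ax.set_title(col, fontsize=8)
    fig.tight_layout()
    plt.show()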

  • ServletContextImpl not serializable exception when one cluster is restarted.

    Hello,
    I am using WLS 5.1 SP8 with the Apache plug-in and in-memory session replication. There is one proxy server and two clusters: A and B. I started clusters A and B together and each cluster could "see" the other. I shut down cluster A and my session continued working fine on cluster B. However, when I restart cluster A, I get the following stack trace from cluster B:
    Mon Apr 02 12:04:56 PDT 2001:<I> <Cluster> Adding server 6345049800752707933S172.17.17.77:[80,80,443,443,80,-1] to cluster view
    Mon Apr 02 12:04:57 PDT 2001:<E> <Kernel> ExecuteRequest failed.
    java.io.NotSerializableException: weblogic.servlet.internal.ServletContextImpl
        at java.lang.Throwable.fillInStackTrace(Native Method)
        at java.lang.Throwable.fillInStackTrace(Compiled Code)
        at java.lang.Throwable.<init>(Compiled Code)
        at java.lang.Exception.<init>(Compiled Code)
        at java.io.IOException.<init>(Compiled Code)
        at java.io.ObjectStreamException.<init>(ObjectStreamException.java:29)
        at java.io.NotSerializableException.<init>(NotSerializableException.java:31)
        at java.io.ObjectOutputStream.outputObject(Compiled Code)
        at java.io.ObjectOutputStream.writeObject(Compiled Code)
        at java.io.ObjectOutputStream.outputClassFields(Compiled Code)
        at java.io.ObjectOutputStream.defaultWriteObject(Compiled Code)
        at java.io.ObjectOutputStream.outputObject(Compiled Code)
        at java.io.ObjectOutputStream.writeObject(Compiled Code)
        at java.util.Hashtable.writeObject(Compiled Code)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Compiled Code)
        at java.io.ObjectOutputStream.invokeObjectWriter(Compiled Code)
        at java.io.ObjectOutputStream.outputObject(Compiled Code)
        at java.io.ObjectOutputStream.writeObject(Compiled Code)
        at weblogic.common.internal.WLObjectOutputStreamBase.writeObject(Compiled Code)
        at weblogic.servlet.internal.session.ReplicatedSession.writeExternal(ReplicatedSession.java:74)
        at weblogic.common.internal.WLObjectOutputStreamBase.writePublicSerializable(Compiled Code)
        at weblogic.common.internal.WLObjectOutputStreamBase.writeObjectBody(Compiled Code)
        at weblogic.common.internal.WLObjectOutputStreamBase.writeObject(Compiled Code)
        at weblogic.common.internal.WLObjectOutputStreamBase.writeObjectWL(Compi
        at weblogic.rmi.extensions.AbstractOutputStream2.writeObject(Compiled Code)
        at weblogic.rmi.extensions.AbstractOutputStream.writeObject(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager_WLStub.create(ReplicationManager_WLStub.java:86)
        at weblogic.cluster.replication.ReplicationManager.createSecondary(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager.checkHosts(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager.clusterMembersChanged(ReplicationManager.java:582)
        at weblogic.cluster.MemberStash$ClusterMembersChangeDeliverer.execute(MemberStash.java:207)
        at weblogic.kernel.ExecuteThread.run(Compiled Code)
    --------------- nested within: ------------------
    weblogic.rmi.MarshalException: error marshalling arguments
    - with nested exception:
    [java.io.NotSerializableException: weblogic.servlet.internal.ServletContextImpl]
        at java.lang.Throwable.fillInStackTrace(Native Method)
        at java.lang.Throwable.fillInStackTrace(Compiled Code)
        at java.lang.Throwable.<init>(Compiled Code)
        at java.lang.Exception.<init>(Compiled Code)
        at java.io.IOException.<init>(Compiled Code)
        at weblogic.common.T3Exception.<init>(T3Exception.java:47)
        at weblogic.rmi.RemoteException.<init>(RemoteException.java:41)
        at weblogic.rmi.MarshalException.<init>(MarshalException.java:31)
        at weblogic.cluster.replication.ReplicationManager_WLStub.create(ReplicationManager_WLStub.java:90)
        at weblogic.cluster.replication.ReplicationManager.createSecondary(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager.checkHosts(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager.clusterMembersChanged(ReplicationManager.java:582)
        at weblogic.cluster.MemberStash$ClusterMembersChangeDeliverer.execute(MemberStash.java:207)
        at weblogic.kernel.ExecuteThread.run(Compiled Code)
    --------------- nested within: ------------------
    weblogic.rmi.extensions.RemoteRuntimeException: Undeclared checked exception - with nested exception:
    [weblogic.rmi.MarshalException: error marshalling arguments
     - with nested exception:
    [java.io.NotSerializableException: weblogic.servlet.internal.ServletContextImpl]]
        at java.lang.Throwable.fillInStackTrace(Native Method)
        at java.lang.Throwable.fillInStackTrace(Compiled Code)
        at java.lang.Throwable.<init>(Compiled Code)
        at java.lang.Exception.<init>(Compiled Code)
        at java.lang.RuntimeException.<init>(RuntimeException.java:47)
        at weblogic.utils.NestedRuntimeException.<init>(NestedRuntimeException.java:23)
        at weblogic.rmi.extensions.RemoteRuntimeException.<init>(RemoteRuntimeException.java:22)
        at weblogic.cluster.replication.ReplicationManager_WLStub.create(ReplicationManager_WLStub.java:108)
        at weblogic.cluster.replication.ReplicationManager.createSecondary(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager.checkHosts(Compiled Code)
        at weblogic.cluster.replication.ReplicationManager.clusterMembersChanged(ReplicationManager.java:582)
        at weblogic.cluster.MemberStash$ClusterMembersChangeDeliverer.execute(MemberStash.java:207)
        at weblogic.kernel.ExecuteThread.run(Compiled Code)
    I looked everywhere in our code and we don't store the ServletContext object anywhere at the session level. Can someone give me some hints as to what the problem is?
    Thanks for your help in advance.
    Vincent

    You probably have a non-serializable object in your session.
    You can have only serializable data in the session.
    Cheers,
    -- Prasad

  • IronPort ESA - HA and Dual Homing

    Hello, I have a customer that wants to do an HA and dual-homing implementation. I want to ask what the best way is to implement HA for the IronPort ESA. As far as I know, the cluster configuration is only used so that policy can be distributed equally. And what about the dual-homing scenario? Is it supported with IronPort, and how does it work?
    Regards
    Alkuin Melvin

    What exactly do you mean by multi-homing? IronPort email appliances support configuration of multiple interfaces (physical or VLAN), to which you can then attach listeners (SMTP processes). You could thus configure your servers to receive or send email on multiple IP addresses, depending on your network config.

  • One cluster node shows error

    Hi,
    This is an FTP sender channel, and in RWB I see one cluster node as red (error) and the other as green.
    The error is an unknown host exception. Files are being pulled by the other node, but the first node shows an error.
    Any guess why this would happen?
    Thanks!

    >>> Files are being pulled by the other node but the first node shows an error.
    This is fine. If you observe the error timestamp, it might not be the latest, unlike the one on the node that is processing.
    AFAIK, at a given point your sender channel will point to only one instance.
    So whenever (random behavior) the channel points to the other instance, this should go away.
    If you notice different behavior, then you should configure the advanced-mode parameter "clusterSyncMode".

  • Hi. I am using a Time Capsule for a few PCs. I have made 5 different accounts to access the Time Capsule, but in Windows, when I enter the account name and password for one account, I cannot access the other accounts, because Windows saves the username

    Hi. I am using a Time Capsule for a few PCs. I have made 5 different accounts to access the Time Capsule, but in Windows, when I enter the account name and password for one account, I cannot access the other accounts, because Windows saves the username. How can I prevent this from happening? I really need to access all my accounts and don't want it to save the credentials automatically.

    Why have 5 accounts if you need to access all of them? Just have one account.
    Sorry, I cannot follow why you would even use the PC to control the Time Capsule. Apple has not kept the Windows version of the utility up to date, so they keep making it harder and harder to run Windows with Apple routers.

  • HT201250 Can I partition my external hard drive and use one partition for Time Machine and the other one for data that I may want to use on different computers?

    I have this question. I've just bought an external drive, specifically a Seagate GoFlex Desk 3 TB.
    I want to know whether it is advisable to make a partition exclusively for Time Machine and leave another one where I can put music, photos, videos, etc. that I may need to use or copy to another computer.
    Maybe half and half: 1.5 TB for Time Machine and 1.5 TB for data.
    I have an internal hard drive of 500 GB (499.25 GB) in my macbook pro.
    Any recommendation?

    As I said, yes. Be sure your Time Machine partition has at least 1 TB for backups.
    1. Open Disk Utility in your Utilities folder.
    2. After DU loads, select your hard drive (this is the entry with the manufacturer's ID and size) from the list on the left side. Click on the Partition tab in the DU main window.
    3. Under the Volume Scheme heading, set the number of partitions from the drop-down menu to two (2). Click on the Options button, set the partition scheme to GUID, then click on the OK button. Set the format type to Mac OS Extended (Journaled). Click on the Partition button and wait until the process has completed.
