WLS6.0 Cluster DNS

          I've found only broken pieces of information in the WLS6.0 documentation about
          setting up a cluster DNS name (so that you can use it as the URL for a JNDI
          connection). One place suggests using the cluster's DNS name; another place says
          that the DNS name for the cluster can be defined by the weblogic.cluster.name
          property (it didn't say how), and that it is a command-line property which is set
          when the cluster is started. I'm sure it is not the "Name" of a cluster in the Admin
          console. Maybe it is set up in the hosts file?
          Thanks,
          Lieyong
          

Hi,
          Yes, you could set up your hosts file as
          192.168.1.11 myCluster
          192.168.1.12 myCluster
          192.168.1.13 myCluster
          and use "t3://myCluster:7001" to look up your clustered EJB.
          Peace,
          Lynch
          "Lieyong Fu" <[email protected]> ¼¶¼g©ó¶l¥ó news:3b8e7dcf$[email protected]..
          >
          > I've got only broken pieces of information about setting up a cluster DNS
          name
          > (so that you can use it as url for JNDI connection) from WLS6.0
          documentations.
          > One place it says (actually suggested) to use cluster's DNS name, the
          other place
          > says that the DNS name for the cluster can be defined by the
          weblogic.cluster.name
          > property (Didn't say how ?) and it is a command-line property which is set
          when
          > the cluster is started. I'm sure it is not the "Name" of a cluster in the
          Admin
          > console. May be it is set up in the host file?
          > Thanks,
          > Lieyong
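
          For reference, a minimal client-side sketch of the lookup Lynch describes, assuming weblogic.jar
          is on the classpath, that the alias myCluster resolves to all member addresses (via DNS or the
          hosts entries above), and that the EJB home is bound under the hypothetical JNDI name
          "ejb/MyClusteredHome":

          import java.util.Hashtable;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          import javax.naming.NamingException;

          public class ClusterLookupClient {
              public static void main(String[] args) throws NamingException {
                  Hashtable env = new Hashtable();
                  // WLInitialContextFactory expands the cluster DNS name into the list
                  // of member addresses and load-balances lookups across them.
                  env.put(Context.INITIAL_CONTEXT_FACTORY,
                          "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://myCluster:7001");

                  Context ctx = new InitialContext(env);
                  Object home = ctx.lookup("ejb/MyClusteredHome"); // hypothetical JNDI name
                  System.out.println("Looked up: " + home);
              }
          }

          If you cannot touch DNS or hosts files, a comma-separated address list such as
          "t3://192.168.1.11,192.168.1.12,192.168.1.13:7001" works in the same place. Note that plain
          hosts files often resolve only the first entry for a given name, so a real DNS entry is the
          more reliable way to get the full address list.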
          

Similar Messages

  • WLS6.0-cluster

              hi all,
              I happened to see your contributions to the news groups and thought you would
              help me understand a particular scenario.
              We have 2 solaris boxes..(each one – 2 cpu).
              Is it ok if we run the admin server on one box and run 2 server instances on the
              other box.
              Now we want to create a cluster of the 2 server instances running on box 2.
              Through the admin console running on box 1, we created the 'machine'
              entry for the box2 and then created entries for the two server instances.
              We then created a new cluster and assigned the 2 server instances to this cluster.
              We defined the cluster address, multicast addr and other entries.
              In the admin server's domain we create 2 start scripts to start the servers
              as managed servers requesting their config info from the admin server.
              Now how do we deploy a .EAR file among this cluster...does selecting the targets
              for the .ear as the cluster simply deploy the file?? Do we need to have matching
              directory entries on the second box running the 2 server instances??
              Since the 2 servers are on the same machine..how do we create separate 'applications'
              directories?? OR do we have just one 'applications' directory on that
              machine hosting these 2 servers.
              Our .EAR file has 1 web archive and 4 jar files.
              Does clustering mean deploying this .EAR file among the 2 server instances?? Or
              should we consider intelligently placing the .ear contents among different nodes
              of the cluster??
              Please help me understand this better..
              Thanks & regards
              jyothi
              


  • EJB lookup in a cluster (DNS names to servers mapping)

              I am using WebLogic 6.0 to set up an object cluster. Currently I have 2 physical
              machines, each running one object server.
              I have a servlet in the web tier (it is not part of the cluster) that looks up
              the EJBs.
              From the documentation, I understand the InitialContext lookup should use one DNS name
              that maps to both object servers.
              My question is: how can I set that up?
              - I tried the following, I am not sure if this is right
              I created a virtualhost with
              name: MyVirtualHost
              virtual host names : MyCluster (this is name of the cluster I had setup )
              Targets : MyCluster
              I tried looking up with
              t3//MyVirtualHost:7001 and t3//MyCluster:7001
              No luck in both the cases !!!!!
              Note : I restarted my weblogic servers after
              creating the virtual host. But I did not reboot my machine ( don't know if this
              is required )
              Please help !!!!
              Thanks
              Abi -
              

              You need to add an entry in your Domain Name Server that includes your host1 and host2, and
              use that name in your JNDI lookup. Or you can do:
              t3://host1,host2:7001
              Both ways should work, and will load-balance across the two JNDI trees.
              Jim Zhou.
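
              A minimal sketch of Jim's second option from the servlet tier (assuming weblogic.jar is on
              the classpath; the class name and JNDI name here are made up):

              import java.util.Hashtable;
              import javax.naming.Context;
              import javax.naming.InitialContext;
              import javax.naming.NamingException;

              public final class ClusterContextFactory {
                  // Either a DNS name that maps to both hosts, or an explicit
                  // comma-separated address list, can be used as the provider URL.
                  private static final String CLUSTER_URL = "t3://host1,host2:7001";

                  private ClusterContextFactory() {}

                  public static Context create() throws NamingException {
                      Hashtable env = new Hashtable();
                      env.put(Context.INITIAL_CONTEXT_FACTORY,
                              "weblogic.jndi.WLInitialContextFactory");
                      env.put(Context.PROVIDER_URL, CLUSTER_URL);
                      return new InitialContext(env);
                  }
              }

              The servlet would then call ClusterContextFactory.create().lookup(...) for the home and
              narrow the result with PortableRemoteObject.narrow(...).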
              "Abinesh Puthenpurackal" <[email protected]> wrote:
              >
              >I am using weblogic6.0 to setup a object clusters. Currently I have 2
              >physical
              >machine each running one object server.
              >
              >I have a servlet in the web tier (it is not part of the cluster) that
              >lookups
              >the EJB's.
              >
              >From the documentation, I understand intialContext lookup should use
              >one DNS name
              >that maps to both the object servers.
              >
              >My question is how can I setup that ?
              > - I tried the following, I am not sure if this is right
              > I created a virtualhost with
              > name: MyVirtualHost
              > virtual host names : MyCluster (this is name of the cluster I had
              >setup )
              > Targets : MyCluster
              >
              >I tried looking up with
              >t3//MyVirtualHost:7001 and t3//MyCluster:7001
              >
              >No luck in both the cases !!!!!
              >
              >Note : I restarted my weblogic servers after
              >creating the virtual host. But I did not reboot my machine ( don't know
              >if this
              >is required )
              >
              >Please help !!!!
              >
              >Thanks
              >Abi -
              >
              

  • Windows 2008r2 cluster dns issue

    We are running GW 8.0.2 with no hot patches. The system is running on a Windows 2008 R2 two-node cluster. When we fail over the GWIA from node 1 to node 2 we can no longer send email to the Internet. We receive just fine, and IMAP and POP connections are working. We have checked the firewall settings and they are the same for both nodes. We get "450 host unknown" errors for all email going out. We think this is a DNS issue, but can't find where the problem may be. We have checked the registry DNS nameserver entries on both nodes and they match.
    Thanks,
    Bill

    On the GWIA, is the "bind exclusive" option set on the addressing tab?
    The GWIA will send on the primary address of the box unless this box
    is checked - but it will listen on all IP addresses (so inbound will
    work, but sending will not).
    T

  • 2008 R2 failover cluster dns

    Hello,
    Wondering if there are any best practices when it comes to setting up DNS. Options are either dynamic DNS or adding static A records for the nodes and the cluster name. Also, I understand that there should be no DNS servers specified in the TCP/IP properties of the heartbeat
    NIC.
    Thanks

    Hi,
    please check "Network infrastructure and domain account requirements for a failover Cluster":
    http://technet.microsoft.com/en-us/library/ff182359(v=ws.10).aspx#BKMK_Account_Infrastructure
    The cluster should automatically register DNS records for each cluster node and for the cluster name.
    And yes, you do not need to specify DNS servers for the heartbeat, live migration or CSV NICs.
    Hope that helps
    Regards
    Sebastian

  • WLS6.1 Cluster Exception

              Hi,
              I am using an 8 server Weblogic 6.1 Cluster on AIX. It gives the following exception
              when I try to access the application.
              <Dec 10, 2001 5:13:43 PM CST> <Error> <HTTP> <[WebAppServletContext(1564960562,wise,/wise)]
              Servlet failed with Exception java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
              Start server side stack trace: java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
              at weblogic.servlet.internal.session.SessionData.getContext(SessionData.java:270)
              at weblogic.servlet.internal.session.ReplicatedSessionData.becomeSecondary(ReplicatedSessionData.java:178)
              at weblogic.cluster.replication.WrappedRO.<init>(WrappedRO.java:34) at weblogic.cluster.replication.ReplicationManager$wroManager.create(ReplicationManager.java:352)
              at weblogic.cluster.replication.ReplicationManager.create(ReplicationManager.java:1073)
              at weblogic.cluster.replication.ReplicationManager_WLSkel.invoke(Unknown Source)
              at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java(Compiled Code))
              at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java(Compiled
              Code)) at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java(Compiled
              Code)) at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java(Compiled Code))
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120) End server side stack
              trace
              at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
              at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java(Compiled Code))
              at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java(Compiled Code)) at $Proxy118.create(Unknown
              Source) at weblogic.cluster.replication.ReplicationManager.trySecondary(ReplicationManager.java(Compiled
              Code)) at weblogic.cluster.replication.ReplicationManager.createSecondary(ReplicationManager.java(Compiled
              Code)) at weblogic.cluster.replication.ReplicationManager.register(ReplicationManager.java:393)
              at weblogic.servlet.internal.session.ReplicatedSessionData.<init>(ReplicatedSessionData.java:119)
              at weblogic.servlet.internal.session.ReplicatedSessionContext.getNewSession(ReplicatedSessionContext.java:193)
              at weblogic.servlet.internal.ServletRequestImpl.getNewSession(ServletRequestImpl.java:1948)
              at weblogic.servlet.internal.ServletRequestImpl.getSession(ServletRequestImpl.java:1729)
              at jsp_servlet.__Login._jspService(__Login.java:86) at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2456)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java(Compiled Code)) at
              weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              Any idea what this is related to?
              Thanks for any help
              Sowjanya
              

    hi,
              i am also getting similar error..
              any clues on how to resolve this problem?
              cc to Sowjanya: appreciate it if you could let me know how you solved it.
              cheers
              jagdip
              

  • Automating the creation of a HDinsight cluster

    Hi,
    I am trying to automate the creation of a HDinsight cluster using Azure Automation to execute a powershell script (the script from the automation gallery). When I try and run this (even without populating any defaults), it errors with the following error:
    "Runbook definition is invalid. In a Windows PowerShell Workflow, parameter defaults may only be simple value types (such as integers) and strings. In addition, the type of the default value must match the type of the parameter."
    The script I am trying to run is:
    <#
     This PowerShell script was automatically converted to PowerShell Workflow so it can be run as a runbook.
     Specific changes that have been made are marked with a comment starting with “Converter:”
    #>
    <#
    .SYNOPSIS
      Creates a cluster with specified configuration.
    .DESCRIPTION
      Creates a HDInsight cluster configured with one storage account and default metastores. If storage account or container are not specified they are created
      automatically under the same name as the one provided for cluster. If ClusterSize is not specified it defaults to create small cluster with 2 nodes.
      User is prompted for credentials to use to provision the cluster.
      During the provisioning operation which usually takes around 15 minutes the script monitors status and reports when cluster is transitioning through the
      provisioning states.
    .EXAMPLE
      .\New-HDInsightCluster.ps1 -Cluster "MyClusterName" -Location "North Europe"
      .\New-HDInsightCluster.ps1 -Cluster "MyClusterName" -Location "North Europe"  `
          -DefaultStorageAccount mystorage -DefaultStorageContainer myContainer `
          -ClusterSizeInNodes 4
    #>
    workflow New-HDInsightCluster99 {
     param (
         # Cluster dns name to create
         [Parameter(Mandatory = $true)]
         [String]$Cluster,
         # Location
         [Parameter(Mandatory = $true)]
         [String]$Location = "North Europe",
         # Blob storage account that new cluster will be connected to
         [Parameter(Mandatory = $false)]
         [String]$DefaultStorageAccount = "tavidon",
         # Blob storage container that new cluster will use by default
         [Parameter(Mandatory = $false)]
         [String]$DefaultStorageContainer = "patientdata",
         # Number of data nodes that will be provisioned in the new cluster
         [Parameter(Mandatory = $false)]
         [Int32]$ClusterSizeInNodes = 2,
         # Credentials to be used for the new cluster
         [Parameter(Mandatory = $false)]
         [PSCredential]$Credential = $null
     )
     # Converter: Wrapping initial script in an InlineScript activity, and passing any parameters for use within the InlineScript
     # Converter: If you want this InlineScript to execute on another host rather than the Automation worker, simply add some combination of -PSComputerName, -PSCredential, -PSConnectionURI, or other workflow common parameters as parameters of the InlineScript
     inlineScript {
      $Cluster = $using:Cluster
      $Location = $using:Location
      $DefaultStorageAccount = $using:DefaultStorageAccount
      $DefaultStorageContainer = $using:DefaultStorageContainer
      $ClusterSizeInNodes = $using:ClusterSizeInNodes
      $Credential = $using:Credential
      # The script has been tested on Powershell 3.0
      Set-StrictMode -Version 3
      # Following modifies the Write-Verbose behavior to turn the messages on globally for this session
      $VerbosePreference = "Continue"
      # Check if Windows Azure Powershell is avaiable
      if ((Get-Module -ListAvailable Azure) -eq $null) {
          throw "Windows Azure Powershell not found! Please make sure to install them from ..."
      }
      # Create storage account and container if not specified
      if ($DefaultStorageAccount -eq "") {
          $DefaultStorageAccount = $Cluster.ToLowerInvariant()
          # Check if account already exists then use it
          $storageAccount = Get-AzureStorageAccount -StorageAccountName $DefaultStorageAccount -ErrorAction SilentlyContinue
          if ($storageAccount -eq $null) {
              Write-Verbose "Creating new storage account $DefaultStorageAccount."
              $storageAccount = New-AzureStorageAccount –StorageAccountName $DefaultStorageAccount -Location $Location
          } else {
              Write-Verbose "Using existing storage account $DefaultStorageAccount."
      # Check if container already exists then use it
      if ($DefaultStorageContainer -eq "") {
          $storageContext = New-AzureStorageContext –StorageAccountName $DefaultStorageAccount -StorageAccountKey (Get-AzureStorageKey $DefaultStorageAccount).Primary
          $DefaultStorageContainer = $DefaultStorageAccount
          $storageContainer = Get-AzureStorageContainer -Name $DefaultStorageContainer -Context $storageContext -ErrorAction SilentlyContinue
          if ($storageContainer -eq $null) {
              Write-Verbose "Creating new storage container $DefaultStorageContainer."
              $storageContainer = New-AzureStorageContainer -Name $DefaultStorageContainer -Context $storageContext
          } else {
              Write-Verbose "Using existing storage container $DefaultStorageContainer."
      if ($Credential -eq $null) {
          # Get user credentials to use when provisioning the cluster.
          Write-Verbose "Prompt user for administrator credentials to use when provisioning the cluster."
          $Credential = Get-Credential
          Write-Verbose "Administrator credentials captured.  Use these credentials to login to the cluster when the script is complete."
      # Initiate cluster provisioning
      $storage = Get-AzureStorageAccount $DefaultStorageAccount
      New-AzureHDInsightCluster -Name $Cluster -Location $Location `
            -DefaultStorageAccountName ($storage.StorageAccountName + ".blob.core.windows.net") `
            -DefaultStorageAccountKey (Get-AzureStorageKey $DefaultStorageAccount).Primary `
            -DefaultStorageContainerName $DefaultStorageContainer `
            -Credential $Credential `
            -ClusterSizeInNodes $ClusterSizeInNodes
      }
     }
     Many thanks
    Brett

    Hi,
    it appears that [PSCredential]$Credential = $null is the problem: in a PowerShell Workflow, parameter
    defaults may only be simple value types and strings, so a PSCredential default is rejected. I get the
    same error; let me check further on it and get back to you.
    Best,
    Amar

  • JNDI lookup in a cluster

    Hi,
              The WL documentation contains the following:
              "When clients obtain an initial JNDI context by supplying the cluster DNS
              name, weblogic.jndi.WLInitialContextFactory obtains the list of all
              addresses that are mapped to the DNS name"
              Should clients (e.g. an EJB) be concerned about retrying the call to get the
              initial context? In other words, can this call fail if one of the servers in
              the cluster fails?
              After the initial context is obtained, it seems that the lookup should
              always work (since WL will take care of individual server failures and retry
              the lookup if needed).
              It is not clear whether the call to get the initial context is guaranteed to
              succeed (as long as one server in the cluster is up, of course)... Any
              information would be appreciated.
              Thanks,
              Philippe
              

              Hello Philippe,
              I had posted a similar question but now can't find it...got lost I suppose. Anyway,
              I wanted to add in my findings on this. I have a stateful session object running
              in a clustered setup. This stateful object has Home references to multiple stateless
              beans. When I create a failover my stateful object does its failover properly.
              But, if I don't perform a new Home lookup for the stateless objects needed I receive
              the following error:
              ####<Nov 9, 2001 2:00:06 PM CST> <Error> <> <gwiz> <testServer1> <ExecuteThread:
              '9' for queue: 'default'> <> <> <000000> <<TestDeliveryActionHandler>Problem occured
              when trying to do a save and goto. java.rmi.NoSuchObjectException: Activation
              failed with: java.rmi.NoSuchObjectException: Unable to locate EJBHome: 'GBTestManagerHome'
              on server: 't3://10.1.17.3:7001
              When I perform a lookup during the ejbActivate() method to get a new Home reference,
              all seems to work OK. My question though is: is this correct? From what I have
              read, I had the impression that the deserialized Home reference should be
              able to locate a new reference in the cluster without having to perform a lookup
              again.
              Any advice from anyone is greatly appreciated,
              Rich
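
              For what it's worth, a rough sketch of the workaround Rich describes: re-acquire the
              stateless bean's home during ejbActivate() instead of trusting the stub that was
              serialized with the stateful bean. The JNDI name and the use of the generic EJBHome
              type are just for illustration; a real bean would narrow to its concrete home interface.

              import javax.ejb.EJBException;
              import javax.ejb.EJBHome;
              import javax.ejb.SessionBean;
              import javax.ejb.SessionContext;
              import javax.naming.InitialContext;
              import javax.rmi.PortableRemoteObject;

              public class StatefulWorkerBean implements SessionBean {
                  private SessionContext ctx;
                  // Marked transient so the (possibly dead) stub is not carried over on failover.
                  private transient EJBHome managerHome;

                  public void setSessionContext(SessionContext sc) { this.ctx = sc; }

                  public void ejbActivate() {
                      // After failover the old stub may point at the dead server,
                      // so look the home up again on activation.
                      managerHome = lookupHome("GBTestManagerHome");
                  }

                  public void ejbPassivate() { managerHome = null; }
                  public void ejbRemove() {}
                  public void ejbCreate() {}

                  private EJBHome lookupHome(String jndiName) {
                      try {
                          Object ref = new InitialContext().lookup(jndiName);
                          return (EJBHome) PortableRemoteObject.narrow(ref, EJBHome.class);
                      } catch (Exception e) {
                          throw new EJBException("Home lookup failed: " + e);
                      }
                  }
              }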
              "Philippe Fajeau" <[email protected]> wrote:
              >Hi,
              >
              >The WL documentation contains the following:
              >
              >"When clients obtain an initial JNDI context by supplying the cluster
              >DNS
              >name, weblogic.jndi.WLInitialContextFactory obtains the list of all
              >addresses that are mapped to the DNS name"
              >
              >Should clients (e.g. an EJB) be concerned about retrying the call to
              >get the
              >Initial Context. In other words, can this call fail if one of the servers
              >in
              >the cluster fails?
              >
              >After the intital contect is obtained, it seems that the lookup should
              >always work (since WL will take care of individual server failures and
              >retry
              >the lookup in needed).
              >
              >Not clear whether the call to get the initial context is guaranteed to
              >succeed (as long as one server in the cluster is up, of course)... Any
              >information would be appreciated.
              >
              >Thanks,
              >
              >Philippe
              >
              >
              

  • Session Bean Client Hangs when one Server in Cluster Fails

    We are testing several failure scenarios and one has come up that concerns us.
    Some background: We're running a WLS6.1 cluster on two separate machines. We
    start a test client consisting of 50 active threads and let them start calling
    into a session bean. After a couple of minutes we pull the network plug out of one
    of the machines to simulate an uncontrolled crash of the machine. Once the plug
    is pulled the clients hang and, of more concern, any new clients that we start up
    also hang. Has anyone successfully solved this problem?

    When we kill one of the Weblogic instances in the cluster none of the clients fail.
    All of our clients fail over to the remaining servers. It's pulling the network
    plug out of the server that causes everything to hang. Not just our test client,
    but the other servers in the cluster hang. The control panel doesn't respond
    at all either. We currently have a support case open with BEA #348184 about this.
    We've gotten a prompt response in which we were asked to modify our configuration
    by deploying our beans to each individual server rather than the cluster. We
    did this, but the results so far have not changed.
    Thanks for the feedback,
    Howard.
    "Ade Barkah" <[email protected]> wrote:
    We haven't encountered something like that, so it could be a setup problem.
    Can you verify that the t3 url hostname the client threads use resolves to the
    ip addresses of each machine in the cluster? Are all machines in the cluster
    listening at the same port number? Also, does it matter if you kill one of the
    weblogic processes instead of pulling the plug? (i.e., if you leave the network
    layer up?)
    Check also that your threads aren't simply blocking each other when the server
    goes down. E.g. start multiple test client processes with one thread each just
    to test.
    What we notice is (with round-robin cluster policy), as we bring down one of
    the servers, the clients will continue to work on the second server, but will
    slow down between method invocations as they still attempt to connect to the
    downed server. After a short period of time (~30 seconds) the clients will fully
    switch to the second machine and processing continues at full speed again, until
    the downed machine is brought back up, at which point work is distributed evenly
    again.
    Also, when the first server is brought down, some of the clients may terminate
    with a PeerGoneException (or something similar to that.) So unless your threads
    are catching exceptions, they might terminate as well.
    regards,
    -Ade
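
    As a rough illustration of Ade's last point (all names, the URL and the retry policy here are
    invented), a client worker that catches the failure and retries rather than silently dying or
    blocking forever:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class RetryingClientWorker implements Runnable {
        private static final String URL = "t3://server1,server2:7001"; // both cluster members
        private static final int MAX_ATTEMPTS = 3;

        public void run() {
            for (int call = 0; call < 1000; call++) {
                int attempt = 0;
                while (true) {
                    try {
                        Context ctx = createContext();
                        Object home = ctx.lookup("ejb/TestSessionHome"); // hypothetical name
                        // ... narrow the home, create the bean and invoke it here ...
                        break;
                    } catch (Exception e) {
                        // PeerGoneException, ConnectException, etc. surface here when a
                        // server dies mid-call; retry a few times instead of terminating.
                        if (++attempt >= MAX_ATTEMPTS) {
                            System.err.println("Giving up after " + attempt + " attempts: " + e);
                            return;
                        }
                    }
                }
            }
        }

        private Context createContext() throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, URL);
            return new InitialContext(env);
        }
    }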
    "howard spector" <[email protected]> wrote in message news:[email protected]...
    We are testing several failure scenarios and one has come up that concernsus.
    Some background: Were running a WLS6.1 cluster on two separate machines.We
    start a test client consisting of 50 active threads and let them startcalling
    into a session bean. After a couple minutes we pull the network plugout of one
    of the machines to simulate an uncontrolled crash of the machine. Once the plug
    is pulled the clients hang and of more concern any new clients thatwe startup
    also hang. Has anyone successfully solved this problem?

  • Servlet cluster problem

    I am having trouble with a wls6.1 cluster. I am trying to write a pdf out
              via a servlet. When I run the following code with the cluster turned
              off I have no problems. If I turn it on the servlet is returning no
              data. I am including the servlet and the stack trace in case someone
              can help. GenericFileObject.getTheFile returns a byte array.
              Jeff
              public void service(HttpServletRequest request, HttpServletResponse response)
                      throws ServletException, IOException {
                  DataOutputStream activityreportOut =
                          new DataOutputStream(response.getOutputStream());
                  try {
                      HttpSession session = request.getSession(true);
                      response.setContentType("application/pdf");
                      String fileid = request.getParameter("fileid");
                      String type = request.getParameter("type");
                      byte[] buffer;
                      ClientFacadeHome cfhome = (ClientFacadeHome)
                              EJBHomeFactory.getInstance().getBeanHome(Constants.CLASS_CLIENT_FACADE,
                                      Constants.JNDI_CLIENT_FACADE);
                      ClientFacade cf = cfhome.create();
                      GenericFileObject file = (GenericFileObject) cf.getFile(fileid, type);
                      buffer = (byte[]) file.getThefile();
                      activityreportOut.write(buffer);
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
                  activityreportOut.flush();
              }
              java.io.IOException: Broken pipe
              at java.net.SocketOutputStream.socketWrite(Native Method)
              at java.net.SocketOutputStream.write(Unknown Source)
              at weblogic.servlet.internal.ChunkUtils.writeChunkTransfer(ChunkUtils.java:189)
              at weblogic.servlet.internal.ChunkUtils.writeChunks(ChunkUtils.java:165)
              at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:248)
              at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:306)
              at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:197)
              at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:121)
              at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:155)
              at java.io.DataOutputStream.write(Unknown Source)
              at java.io.FilterOutputStream.write(Unknown Source)
              at com.bi.micardis.security.clientaction.ActivityAndScriptServlet.service(ActivityAndScriptServlet.java:41)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2456)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              

              "Subir Das" <[email protected]> wrote:
              >So, Applet#1 will always talk to the servlet hosted by WLInstance#1 and
              >Applet#2
              >will always talk to the servlet hosted by WLInstance#2.
              This statement is not entirely true.
              Suppose WLInstance#1 were to be brought down (for whatever reason); Applet#1 will
              then talk to the servlet hosted by WLInstance#2.
              Server pinning can be modified by different load-balancing algorithms, configurable
              via containers (or hardware).
              So don't count on which servlet instance your applet is going to be served by.
              Instead, consider giving a second look to the design of the servlet data structure
              (object):
              1. Read from the data store, if it has been persisted.
              2. If the data is client-related, then consider sticking the data into the session, which
              would then replicate to other WL instances.
              3. Stateless EJB in a cluster? Don't know much about this (yet).
              My 2 cents. Good luck.
              Rama
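
              To illustrate Rama's second point, a small sketch (class and attribute names invented)
              of keeping per-client state in the HttpSession, where it gets replicated, rather than
              in servlet instance fields, which stay on one server:

              import java.io.IOException;
              import javax.servlet.ServletException;
              import javax.servlet.http.HttpServlet;
              import javax.servlet.http.HttpServletRequest;
              import javax.servlet.http.HttpServletResponse;
              import javax.servlet.http.HttpSession;

              public class VisitCounterServlet extends HttpServlet {
                  // No instance fields for per-client data: they would live in one
                  // server's servlet instance only and not follow the client on failover.

                  public void doGet(HttpServletRequest req, HttpServletResponse resp)
                          throws ServletException, IOException {
                      HttpSession session = req.getSession(true);
                      Integer count = (Integer) session.getAttribute("visit.count");
                      count = (count == null) ? new Integer(1) : new Integer(count.intValue() + 1);
                      // Attribute values must be serializable for in-memory replication.
                      session.setAttribute("visit.count", count);
                      resp.setContentType("text/plain");
                      resp.getWriter().println("Visits in this session: " + count);
                  }
              }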
              

  • The requested object does not exist. (Exception from HRESULT: 0x80010114)

    Hello,
    I have a 3 node cluster that is setup as active, active, passive.  Two of the nodes report the following error when trying to connect to the cluster:
    One of the active nodes is successfully able to connect to the "Cluster" while the other two are not.  The objects do exist in AD and the virtual cluster name has full rights over itself.  I turned on DNS Client event logging and receive
    the following messages on nodes that are able to connect to the cluster:
    DNS FQDN Query operation for the name "ClusterNodeC" and for the type 28 is completed with result 0x251D
    DNS Cache lookup operation for the name "ClusterNodeC"
    and for the type 28 is completed with result 0x251D
    DNS Cache lookup is initiated for the name
    "ClusterNodeC" and for the type 28 with query options 0x40026010
    Any help or direction would be greatly appreciated.
    Thanks,
    zWindows

    Hi,
    Sorry for the delay in reply.
    Please try the following steps first:
    • Open PowerShell as Administrator.
    • Go to Start --> Run and type wbemtest.exe.
    • Click Connect.
    • In the namespace text box type "root" (without quotes).
    • Click Connect.
    • Click Enum Instances…
    • In the Class Info dialog box enter Superclass Name as "__ProviderHostQuotaConfiguration" (without quotes) and press OK. Note: the Superclass name includes a double underscore at the front.
    • In the Query Result window, double-click "__ProviderHostQuotaConfiguration=@"
    • In the Object Editor window, double-click HandlesPerHost.
    • In the Value dialog, type in 8192.
    • Click Save Property.
    • Click Save Object.
    • Under Properties, find the property "MemoryPerHost" (or any other ones you need to modify) and double-click it.
    • Change the value from 512 MB, which is 536870912, to 1 GB, which is 1073741824.
    • Click Save Property.
    • Click Save Object.
    • Close Wbemtest.
    • Restart the computer.
    And if all nodes are Windows server 2012, install the following update rollup as well:
    Windows RT, Windows 8, and Windows Server 2012 update rollup: August 2013
    http://support.microsoft.com/KB/2862768
    If you have any feedback on our support, please send to [email protected]

  • Questions in Lync 2013 HADR

    Hi Team,
    One of our customers raised the following query:
    In our scenario, we want Active/Active High availability between different geolocations with RPO=0 and RTO near zero (seconds).
    Questions:
    1. Isn’t this possible with pool pairing and database availability AlwaysOn synchronous commit?
    2. What is the bandwidth needed between both sites?
    3. Do you think to achieve Active/Active high availability (RPO=0, RTO=+/-0) for Lync between 2 datacenters we should go with the following scenario:
    --> Storage: virtualization (stretched LUNs)
    -->Compute: Hyper-v Clustering (failover cluster)
    -->DNS: Global Datacenter Server Load Balancer
    4. What is the RTO and RPO in your proposed solution?
    Please advise. Many Thanks.

    1) No.  Pool pairing doesn't automatically fail over, therefore it does not meet the requirements.  Also, HA within a pool isn't supported across geographic locations, so I don't believe this requirement can be met within the supported model. 
    It's possible if you have a solid enough pipe between the locations with very low latency that you could go unsupported with the old Metropolitan Site Resiliency model:
    https://technet.microsoft.com/en-us/library/gg670905(v=ocs.14).aspx but not supported in 2013.
    2) This can't be answered easily, it depends on what they're doing and using. How many users, how much archived data... the SQL mirroring will be quite a bit, as well as the shared presence data on front ends.  Will they use video between sites?  
    Too many questions to get any kind of reliable answer.
    3) If RTO/RPO is this critical, then I'm assuming it's voice.  If it's not, then a short outage should be more tolerable.  If it is voice, do not leave the supported model... just don't.  You don't want to be in that
    situation when systems are down and it's your phone.  No live migrations, just what's supported via TechNet and virtualization whitepapers.
    4) My proposed solution would be HA pools in both datacenters, built big enough it's unlikely to go down.   If the site does go down, pool failover can happen in a reasonable amount of time, perhaps 15 minutes if you're well prepared,
    but phones could potentially stay online during this time. 
    -Anthony
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question please click "Mark As Answer".
    SWC Unified Communications
    This forum post is based upon my personal experience and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Looking up JMS destinations with clustered WLS

              From scanning the postings, it appears that in a clustered WLS environment, the
              JMS servers are not clustered. As a result, the JMS destinations must be unique
              across all of the WLS in the cluster. In addition, there is no failover available
              when a JMS server goes down.
              With that stated, what I want to know is:
              When establishing a JMS connection with a JMS server in a WLS cluster, do I need
              to know the JNDI URL for each specific JMS server that is managing the destination(s)
              I wish to pub/sub?
              Or, is there a 'global' JNDI tree that I can reference and the clustered WLS behind
              the scenes will route me to the appropriate JMS server?
              If resolving the URL is a manual process, I will need to keep track of which destinations
              reside on which JMS servers. This adds an additional maintenance point that I
              would like to avoid if possible.
              Thanks,
              Bob.
              

    One can use a connection factory to establish a connection to a particular
              destination (queue/topic). Connection factories are clustered, so one doesn't
              need to have knowledge of a particular WLS instance.
              "Neal Yin" <[email protected]> wrote in message
              news:[email protected]...
              > Although there is only one JMS server instance, you can lookup it from
              > anywhere in a cluster.
              > In another words, JNDI tree is global in a WLS cluster. Just give cluster
              > DNS name in your
              > URL, you will be fine.
              >
              > -Neal
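
              A small sketch of what Neal suggests (the connection factory and queue JNDI names are
              made up): look up the clustered connection factory and the destination through the
              cluster-wide JNDI tree, giving the cluster DNS name in the URL.

              import java.util.Hashtable;
              import javax.jms.Queue;
              import javax.jms.QueueConnection;
              import javax.jms.QueueConnectionFactory;
              import javax.naming.Context;
              import javax.naming.InitialContext;

              public class JmsClusterLookup {
                  public static void main(String[] args) throws Exception {
                      Hashtable env = new Hashtable();
                      env.put(Context.INITIAL_CONTEXT_FACTORY,
                              "weblogic.jndi.WLInitialContextFactory");
                      // Any member can serve the lookup: the JNDI tree is cluster-wide,
                      // even though the JMS server itself runs on a single instance.
                      env.put(Context.PROVIDER_URL, "t3://myCluster:7001");
                      Context ctx = new InitialContext(env);

                      QueueConnectionFactory qcf =
                              (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // hypothetical
                      Queue queue = (Queue) ctx.lookup("jms/MyQueue");                        // hypothetical
                      QueueConnection connection = qcf.createQueueConnection();
                      // ... create sessions, senders/receivers against the queue as usual ...
                      connection.close();
                  }
              }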
              

  • Entity remote in clustered http session ?

              We put an entity remote object into an HTTP session which is replicated
              across a cluster of 3 Linux machines (each with 1 WebLogic 6.1 SP2).
              HTTP session replication works fine: the entity is accessible on every
              machine in this cluster.
              However, there seems to be no failover for the entity remote object in this
              case: I kill the server where the entity was originally created, and the
              web application just reports a "10.5.1 500 Internal Server Error".
              The server log of the machine where the current request was executed
              says:
              1.
              Removing solarium jvmid:-995414765884053603S:192.168.145.41:
              [2357,2357,7002,7002,2357,7002,-1]:beacluster.hybris.de:hybr
              is:solarium from cluster view due to PeerGone
              ( solarium is the killed server; beacluster the cluster DNS name )
              2.
              Removing -995414765884053603S:192.168.145.41:[2357,2357,7002
              ,7002,2357,7002,-1]:beacluster.hybris.de:hybris:solarium to
              the cluster
              3.
              [WebAppServletContext(729829,dummyweb,/dummyweb)] Servlet fa
              iled with IOException
              java.rmi.ConnectException: Unable to get direct or routed connection to:
              '-995414765884053603S:192.168.145.41:[2357,2357,7002,7002,2357,7002,-1]:beacluster.hybris.de:hybris:solarium'
                   at
              weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
                   at
              weblogic.rmi.cluster.EntityRemoteRef.privateInvoke(EntityRemoteRef.java:144)
                   at
              weblogic.rmi.cluster.EntityRemoteRef.invoke(EntityRemoteRef.java:115)
                   at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
                   at $Proxy86.getText(Unknown Source)
                   at jsp_servlet.__dummyEntity._jspService(__dummyEntity.java:148)
                   at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
                   at
              weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
                   at
              weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
                   at
              weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2495)
                   at
              weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2204)
                   at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              thanks in advance,
              axel grossmann.
              

    HTTP session is only replicated to the secondary server, not all servers in
              the cluster.
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              Clustering Weblogic? You're either using Coherence, or you should be!
              Download a Tangosol Coherence eval today at http://www.tangosol.com/
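
              One common workaround (not from this thread; all names below are invented): keep only
              serializable identifiers in the replicated session, namely the home's JNDI name and the
              entity's primary key, and re-find the entity remote on each request or after a failure,
              rather than relying on the stored stub outliving its server.

              import javax.ejb.EJBHome;
              import javax.naming.InitialContext;
              import javax.rmi.PortableRemoteObject;
              import javax.servlet.http.HttpSession;

              public final class SessionEntityHelper {
                  private SessionEntityHelper() {}

                  // Store only lightweight, serializable identifiers in the session ...
                  public static void remember(HttpSession session, String homeJndiName, Object primaryKey) {
                      session.setAttribute("entity.homeJndiName", homeJndiName);
                      session.setAttribute("entity.primaryKey", primaryKey);
                  }

                  // ... and re-acquire the home on demand, so a stub pinned to a dead
                  // server is never pulled back out of the session.
                  public static EJBHome lookupHome(HttpSession session) throws Exception {
                      String jndiName = (String) session.getAttribute("entity.homeJndiName");
                      Object ref = new InitialContext().lookup(jndiName);
                      return (EJBHome) PortableRemoteObject.narrow(ref, EJBHome.class);
                  }

                  public static Object primaryKey(HttpSession session) {
                      return session.getAttribute("entity.primaryKey");
                  }
              }

              The JSP would then narrow the returned home to its concrete home interface and call
              findByPrimaryKey(primaryKey(session)) instead of calling through a remote stored in
              the session.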
              "Axel Großmann" <[email protected]> wrote in message
              news:[email protected]...
              >
              > we put a entity remote object into a http session which is replicated
              > across a cluster of 3 linux machines ( each with 1 weblogic 6.1 SP2).
              >
              > http session replication works fine: the entity is accessible on every
              > machine in this cluster.
              >
              > it seems that there is no failover for entity remote object in this
              > case: i kill the server where the entity was originally created. the
              > web application just reports an "10.5.1 500 Internal Server Error".
              >
              > the server log of the machine where the current request was executed
              > says:
              >
              > 1.
              > Removing solarium jvmid:-995414765884053603S:192.168.145.41:
              >
              > [2357,2357,7002,7002,2357,7002,-1]:beacluster.hybris.de:hybr
              > is:solarium from cluster view due to PeerGone
              >
              > ( solarium is the killed server; beacluster the cluster DNS name )
              >
              > 2.
              > Removing -995414765884053603S:192.168.145.41:[2357,2357,7002
              > ,7002,2357,7002,-1]:beacluster.hybris.de:hybris:solarium to
              > the cluster
              >
              > 3.
              > [WebAppServletContext(729829,dummyweb,/dummyweb)] Servlet fa
              > iled with IOException
              > java.rmi.ConnectException: Unable to get direct or routed connection to:
              >
              '-995414765884053603S:192.168.145.41:[2357,2357,7002,7002,2357,7002,-1]:beac
              luster.hybris.de:hybris:solarium'
              > at
              >
              weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.
              java:85)
              > at
              >
              weblogic.rmi.cluster.EntityRemoteRef.privateInvoke(EntityRemoteRef.java:144)
              > at
              > weblogic.rmi.cluster.EntityRemoteRef.invoke(EntityRemoteRef.java:115)
              > at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
              > at $Proxy86.getText(Unknown Source)
              > at jsp_servlet.__dummyEntity._jspService(__dummyEntity.java:148)
              > at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
              > at
              >
              weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
              :265)
              > at
              >
              weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
              :200)
              > at
              >
              weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletCo
              ntext.java:2495)
              > at
              >
              weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java
              :2204)
              > at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
              > at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              >
              > thanks in advance,
              > axel grossmann.
              

  • URGENT!!!! clustering help needed

    Hi,
              I am trying to start up a cluster set up by someone else.
              Two sun boxes w/ 1 instance of wl on each.(wl60sp1)
              The admin server is defined as part of the cluster. (name is nemesis,
              minefield is the managed)
              I start up the admin server, it cannot deploy the defaultWebApp of the
              managed server...which, I guess, is because the managed server is not
              running yet....
              After the admin server starts, I run the startManagedWebLogic.sh script w/
              the vars
              SERVER_NAME , ADMIN_URL set in the file...
              it seems to start up okay, but then it can't get the DefaultLogHandle; it starts
              to read the config and then
              poo:
              <Adding
              server -5198242724643762591S:10.1.1.228:[7001,7001,7002,7002,7001,7002,-1]:l
              ocalhost to cluster view>
              ####<May 2, 2001 12:44:13 PM CDT> <Debug> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '11' for queue: 'default'> <> <> <000000> <dropped fragment
              from foreign domain/cluster domainhash=-324669072 clusterhash=1228358286>
              ####<May 2, 2001 12:44:13 PM CDT> <Info> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '11' for queue: 'default'> <> <> <000111> <Adding server
              6587229607407573383S:10.1.0.141:[7001,7001,7002,7002,7001,7002,-1]:10.1.0.14
              3 to cluster view>
              ####<May 2, 2001 12:44:13 PM CDT> <Debug> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '11' for queue: 'default'> <> <> <000000> <dropped fragment
              from foreign domain/cluster domainhash=1113319721 clusterhash=-548483879>
              ####<May 2, 2001 12:44:15 PM CDT> <Info> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '11' for queue: 'default'> <> <> <000111> <Adding server
              5536649879841736387S:10.1.0.145:[7001,7001,7002,7002,7001,7002,-1]:10.1.0.14
              8 to cluster view>
              ####<May 2, 2001 12:44:15 PM CDT> <Info> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '11' for queue: 'default'> <> <> <000115> <Lost 2 multicast
              message(s)>
              ####<May 2, 2001 12:44:15 PM CDT> <Info> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '9' for queue: 'default'> <> <> <000127> <Adding
              5536649879841736387S:10.1.0.145:[7001,7001,7002,7002,7001,7002,-1]:10.1.0.14
              8 to the cluster>
              ####<May 2, 2001 12:44:15 PM CDT> <Error> <Cluster> <minefield> <nemesis>
              <ExecuteThread: '8' for queue: 'default'> <system> <> <000123> <Conflict
              start: You tried to bind an object under the name
              weblogic.transaction.coordinators.nemesis in the jndi tree. The object you
              have bound weblogic.transaction.internal.CoordinatorImpl from 10.1.0.146 is
              non clusterable and you have tried to bind more than once from two or more
              servers. Such objects can only deployed from one server.>
              

    OK, first things first. The Admin Server:
              - Should not be a member of the cluster
              - Should not be a part of the cluster dns alias
              - Should not be a target for any of the cluster object deployments
              So, given the following list:
              "Two sun boxes w/ 1 instance of wl on each.(wl60sp1)
              The admin server is defined as part of the cluster. (name is nemesis,
              minefield is the managed)"
              Is "nemesis" one of the two boxes? If it, does is have mulitple IP's associated
              with it? The reason why I'm asking is, there are two basic ways Weblogic can be
              clustered:
              1. Use a single machine which may be multi-homed (has more than one ip-address
              associated with it)
              2. Use more than one machine, with a single ip address for each machine.
              So, if you have scenario 2, your cluster is going to consist of one admin box
              and one cluster box. If that is the case, you might want to consider
              re-architecting your cluster configuration.
              Mike Wiles
              Emerald Solutions
              http://www.emeraldsolutions.com
              To reply to me via E-mail, you will need to pull the "_WEEDS_" :)
