Clustering with HttpClusterServlet

I have created a domain on my machine with an admin server and two more instances that form a cluster. When I deploy an application on the cluster instances and connect to either of them directly, it runs fine. However, when I try to access the application through the HttpClusterServlet deployed on the admin server, I get Error 404--Not Found:
          From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
          10.4.5 404 Not Found
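A 404 from the proxy usually means HttpClusterServlet is not registered and mapped in the proxy web app's web.xml. As a sketch (the host:port:sslport values are placeholders; WLS 6.x used the defaultServers init parameter, later releases call it WebLogicCluster):

```xml
<!-- Sketch: registering HttpClusterServlet in the proxy web app's web.xml.
     The host:port:sslport values are placeholders for the clustered servers. -->
<servlet>
  <servlet-name>HttpClusterServlet</servlet-name>
  <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
  <init-param>
    <param-name>defaultServers</param-name>
    <param-value>host1:7001:7002|host2:7001:7002</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>HttpClusterServlet</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>
```

Mapping the servlet to / makes it the default servlet for the web app; extensions such as *.jsp and *.html can be mapped the same way if only those should be proxied.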
          

Sorry I meant to send this to the ejb newsgroup.
          dan
          dan benanav wrote:
          > Do any vendors provide for clustering with automatic failover of entity
          > beans? I know that WLS does not. How about Gemstone? If not is there
          > a reason why it is not possible?
          >
          > It seems to me that EJB servers should be capable of automatic failover
          > of entity beans.
          >
          > dan
          

Similar Messages

  • Proxying with HttpClusterServlet

    Hi,
              I'm using a multihomed NT Server with WLS 6.0. I have one domain configured
              with an admin server and two managed servers which are clustered. I
              configured a second domain with one server and a web app deployed with
              HttpClusterServlet (I extracted the HttpClusterServlet classes from
              weblogic.jar and deployed them in the web app). I edited web.xml, adding
              HttpClusterServlet and the associated parameter (defaultServers) as described in
              the docs.
              I'm testing load balancing of a servlet deployed on both managed servers.
              Load balancing works fine as I see the servlet execution alternating from
              one server to the next.
              My problem is duplicating this set up on a second NT Server. With the
              exception of IP addresses, hardware and the WLS 6.0 installation, the boxes
              are identical. But, on this box the servlet is not load balanced. The
              servlet always runs on the first server in the list. I have to admit I'm
              stumped! Short of reinstalling WLS, I have tried everything I can think of
              without success.
              My question is how can I debug HttpClusterServlet to figure out why the
              servlet is not being load balanced?
              Your help would be much appreciated.
              Mike
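    One way to see what the proxy thinks the cluster looks like is the DebugConfigInfo init parameter of HttpClusterServlet (documented for WLS 6.x); a sketch of the web.xml fragment:

    ```xml
    <!-- Inside the HttpClusterServlet <servlet> element of the proxy's web.xml -->
    <init-param>
      <param-name>DebugConfigInfo</param-name>
      <param-value>ON</param-value>
    </init-param>
    ```

    With this ON, requesting any proxied URL with the query string ?__WebLogicBridgeConfig makes the proxy dump its dynamic and static server lists, as in the follow-up below.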
              

              Ok, I did find and set DebugConfigInfo to ON. When I run the servlet via the
              proxy I get the following results:
              Server with load balancing not working:
              Query String: __WebLogicBridgeConfig
              Dynamic Server List:
              Host: xxx.yy.zz.179 Port: 7019
              Static Server List:
              Host: xxx.yy.zz.179 Port: 7019
              Host: xxx.yy.zz.186 Port: 7019
              CookieName: JSESSIONID
              Server with load balancing working:
              Query String: __WebLogicBridgeConfig
              Dynamic Server List:
              Host: xxx.yy.zz.75 Port: 7019
              Host: xxx.yy.zz.74 Port: 7019
              Static Server List:
              Host: xxx.yy.zz.74 Port: 7019
              Host: xxx.yy.zz.75 Port: 7019
              CookieName: JSESSIONID
              Obviously, the second server is not in the dynamic server list. Any ideas????
              Mike
              

  • SAP ECC 6.0 installation in windows 2008 clustering with db2 ERROR DB21524E

    Dear Sir,
    I am installing SAP ECC 6.0 on Windows 2008 clustering with DB2.
    I got an error in the phase 'Configure the database for MSCS'. The error is DB21524E 'FAILED TO CREATE THE RESOURCE DB2 IP PRD' THE CLUSTER NETWORK WAS NOT FOUND.
    DB2_INSTANCE=DB2PRD
    DB2_LOGON_USERNAME=iil\db2prd
    DB2_LOGON_PASSWORD=XXXX
    CLUSTER_NAME=mscs
    GROUP_NAME=DB2 PRD Group
    DB2_NODE=0
    IP_NAME = DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    IP_NETWORK=public
    NETNAME_NAME=DB2 NetName PRD
    NETNAME_VALUE=dbgrp
    NETNAME_DEPENDENCY=DB2 IP PRD
    DISK_NAME=Disk M::
    TARGET_DRVMAP_DISK=Disk M
    Please help me run the db2mscs utility to create the resource; I am already running late with this installation.
    Best regards,
    Manjunath G
    Edited by: Manjug77 on Oct 29, 2009 2:45 PM

    Hello Manjunath.
    This looks like a configuration problem.
    Please check if IP_NETWORK is set to the name of your network adapter and
    if your IP_ADDRESS and IP_SUBNET are set to the correct values.
    Note:
    - IP_ADDRESS is a new IP address that is not used by any machine in the network.
    - IP_NETWORK is optional
    If you still get the same error, debug your db2mscs.exe call.
    See the answer from Adam Wilson:
    Can you run the following and check the output:
    db2mscs -f <path>\db2mscs.cfg -d <path>\debug.txt
    I suspect you may see the following error in the debug.txt:
    Create_IP_Resource fnc_errcode 5045
    Error 5045 is a Windows error meaning ERROR_CLUSTER_NETWORK_NOT_FOUND: Windows could not find an MSCS network with the name given in IP_NETWORK ("public" in your file). The IP_NETWORK parameter must name an existing MSCS network, so run the Cluster Admin GUI and expand Cluster Configuration -> Networks to see which MSCS networks are available and whether "public" is one of them.
    Note that IP_NETWORK is optional and can be commented out; in that case, the first MSCS network detected by the system is used.
    Best regards,
    Hinnerk Gildhoff
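    Putting that advice together, a db2mscs.cfg along these lines might be worth a try (values taken from the original post; IP_NETWORK simply omitted so that the first MSCS network detected by the system is used, and the stray extra colon in DISK_NAME removed):

    ```
    DB2_INSTANCE=DB2PRD
    DB2_LOGON_USERNAME=iil\db2prd
    DB2_LOGON_PASSWORD=XXXX
    CLUSTER_NAME=mscs
    GROUP_NAME=DB2 PRD Group
    DB2_NODE=0
    IP_NAME=DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    NETNAME_NAME=DB2 NetName PRD
    NETNAME_VALUE=dbgrp
    NETNAME_DEPENDENCY=DB2 IP PRD
    DISK_NAME=Disk M:
    TARGET_DRVMAP_DISK=Disk M
    ```

    Then run db2mscs -f db2mscs.cfg -d debug.txt and check debug.txt if it still fails.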

  • Exporting data clusters with type version

    Hi all,
    let's assume we are saving some ABAP data as a cluster to the database using the EXPORT ... TO DATABASE functionality, e.g.
    EXPORT VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    Some days later, the data can be imported
    IMPORT VBAK TO LS_VBAK VBAP TO LT_VBAP FROM DATABASE INDX(QT) ID 'TEST'.
    Some months or years later, however, the IMPORT may crash: Since it is the most normal thing in the world that ABAP types are extended, some new fields may have been added to the structures VBAP or VBAK in the meantime.
    The data are not lost, however: Using method CL_ABAP_EXPIMP_UTILITIES=>DBUF_IMPORT_CREATE_DATA, they can be recovered from an XSTRING. This will create data objects apt to the content of the buffer. But the component names are lost - they get auto-generated names like COMP00001, COMP00002 etc., replacing the original names MANDT, VBELN, etc.
    So a natural question is how to save the type info ( = metadata) for the extracted data together with the data themselves:
    EXPORT TYPES FROM LT_TYPES VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    The table LT_TYPES should contain the meta type info for all exported data. For structures, this could be a DDFIELDS-like table containing the component information. For tables, additionally the table kind, key uniqueness and key components should be saved.
    Actually, LT_TYPES should contain persistent versions of CL_ABAP_STRUCTDESCR, CL_ABAP_TABLEDESCR, etc. But it seems there is no serialization provided for the RTTI type info classes.
    (In an optimized version, the type info could be stored in a separate cluster, and being referenced by a version number only in the data cluster, for efficiency).
    In the import step, the LT_TYPES could be imported first, and then instances for these historical data types could be created as containers for the real data import (here, I am inventing a class zcl_abap_expimp_utilities):
    IMPORT TYPES TO LT_TYPES FROM DATABASE INDX(QT) ID 'TEST'.
    DATA(LO_TYPES) = ZCL_ABAP_EXPIMP_UTILITIES=>CREATE_TYPE_INFOS( LT_TYPES ).
    assign lo_types->data_object('VBAK')->* to <LS_VBAK>.
    assign lo_types->data_object('VBAP')->* to <LT_VBAP>.
    IMPORT VBAK TO <LS_VBAK> VBAP TO <LT_VBAP> FROM DATABASE INDX(QT) ID 'TEST'.
    Now the data can be recovered with their historical types (i.e. the types they had when the export statement was performed) and processed further.
    For example, structures and table-lines could be mixed into the current versions using MOVE-CORRESPONDING, and so on.
    My question: Is there any support from the standard for this functionality: Exporting data clusters with type version?
    Regards,
    Rüdiger
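    As a sketch of the export half of this idea: for DDIC structures, the component list can at least be captured via RTTI, since get_ddic_field_list( ) returns a DDFIELDS-like table. Storing it next to the data in the cluster is, as said, home-grown rather than standard functionality:

    ```abap
    * Sketch: capture the DDIC field list of VBAK via RTTI and export it
    * alongside the data, so a later import can reconstruct the old layout.
    DATA(lo_struct) = CAST cl_abap_structdescr(
        cl_abap_typedescr=>describe_by_data( ls_vbak ) ).
    DATA(lt_types) = lo_struct->get_ddic_field_list( ).

    EXPORT types FROM lt_types
           vbak  FROM ls_vbak
           vbap  FROM lt_vbap
           TO DATABASE indx(qt) ID 'TEST'.
    ```

    This covers flat DDIC structures; table kind and key information would still have to be collected separately, as described above.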

    The IMPORT statement works fine if the target internal table has all fields of the source internal table, plus some additional fields at the end (something like an append structure of VBAK).
    Here is the snippet used.
    TYPES:
    BEGIN OF ty,
      a TYPE i,
    END OF ty,
    BEGIN OF ty2.
            INCLUDE TYPE ty.
    TYPES:
      b TYPE i,
    END OF ty2.
    DATA: lt1 TYPE TABLE OF ty,
          ls TYPE ty,
          lt2 TYPE TABLE OF ty2.
    ls-a = 2. APPEND ls TO lt1.
    ls-a = 4. APPEND ls TO lt1.
    EXPORT table = lt1 TO MEMORY ID 'ZTEST'.
    IMPORT table = lt2 FROM MEMORY ID 'ZTEST'.
    I guess the IMPORT statement would behave fine if the current VBAK has more fields than the older VBAK.

  • Problem with clustering with JBoss server

    Hi,
    This is a humble request to the experienced folks here.
    I am new to clustering. My objective is to attain clustering with load balancing and/or failover in JBoss. I have two JBoss servers running on two different IP addresses, which form my cluster. I could successfully perform farm (all/farm) deployment in my cluster.
    I believe that if clustering is enabled and one of the servers (s1) goes down, then the other (s2) will serve the requests coming to s1. Am I correct? Or is that true only in the case of "failover clustering"? If it is correct, what do I have to do to achieve it?
    As I am new to the topic, can anyone explain how a simple application (say, getting a value from a user and storing it in the database; assume everything is in a WAR file) can be deployed with load balancing and failover support, rather than going into clustering EJBs or anything harder to understand?
    Kindly help me in this matter. At least give me some hints and I will learn from there, because I couldn't find a step-by-step procedure explaining which configuration files have to be changed (and how) to achieve this, nor any books that go beyond the usual theoretical concepts.
    Thanking you in advance
    with respect
    abhirami

    Hi,
    In this scenario you can use a load balancer instead of failover clustering.
    I would suggest you create an Apache proxy to redirect requests across the JBoss instances.
    Regards,
    Kathir
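    A sketch of that suggestion using Apache's mod_proxy_balancer (hostnames, ports, and the context path are placeholders; mod_jk is the other common choice for JBoss):

    ```apache
    # Load-balance requests across two JBoss instances with sticky sessions.
    # Requires mod_proxy, mod_proxy_http, and mod_proxy_balancer to be loaded.
    <Proxy "balancer://jbosscluster">
        BalancerMember "http://192.168.1.10:8080" route=node1
        BalancerMember "http://192.168.1.11:8080" route=node2
    </Proxy>
    ProxyPass        "/myapp" "balancer://jbosscluster/myapp" stickysession=JSESSIONID
    ProxyPassReverse "/myapp" "balancer://jbosscluster/myapp"
    ```

    With sticky sessions, a user keeps hitting the same node while it is up; if it goes down, the balancer fails the requests over to the other member.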

  • 3 Node hyper-V 2012 R2 Failover Clustering with Storage spaces on one of the Hyper-V hosts

    Hi,
    We have 3x Dell R720s, each with 5x 600GB 3.5" 15K SAS drives and 128 GB RAM. I was wondering if I could set up Hyper-V 2012 R2 failover clustering with these three, with the shared storage for the CSV provided by one of the Hyper-V hosts with Storage Spaces installed (is Storage Spaces supported on Hyper-V?). Or I could use 2-node failover clustering and the third machine as a standalone Hyper-V, or Server 2012 R2 with Hyper-V and Storage Spaces.
    Each server comes with a quad-port 1G and a dual-port 10G NIC, so I can dedicate the 10G NICs to iSCSI.
    We don't have a SAN or a 10G switch, so it would be a crossover cable connection between the servers.
    Most of the VMs would be non-HA. Exchange 2010, SharePoint 2010 and SQL Server 2008 R2 would be the only VMs running as HA VMs. The CSV for the Hyper-V failover cluster would be provided by Storage Spaces.

    I thought I was trying to do just that, with 8x 600GB RAID-10 using the hardware RAID controller (on the 3rd server) and creating CSVs out of that space so as to provide better storage performance for the HA VMs.
    1. Storage server: 8x 600GB RAID-10 (for CSVs to house all HA VMs running on the other 2 servers). It may also run some local VMs that have very little disk I/O.
    2. Hyper-V-1: will act as the primary Hyper-V host for the Exchange and database server HA VMs (the VHDXs would be stored on the storage server's CSVs on top of the 8x 600GB RAID-10). May also run some non-HA VMs using the local 2x 600GB in RAID-1.
    3. Hyper-V-2: will act as the Hyper-V host when the above HA VMs fail over to it (when Hyper-V-1 is down for any reason). May also run some non-HA VMs using the local 2x 600GB in RAID-1.
    The single point of failure for the HA VMs (the non-HA VMs are non-HA, so it's OK if they are down for some time) is the storage server. The Exchange servers here are DAG peers of the Exchange servers at the head office, so if the storage server's mainboard goes down (disk failure is mitigated by RAID; other components such as RAM or the mainboard may still fail, but their failure rate is relatively low), the local Exchange servers would be down but Exchange clients would still be able to do their email-related tasks using the HO Exchange servers.
    Also, the servers are under 4-hour mission-critical support, including entire server replacement within the 4-hour window.
    If you're OK with your shared storage being a single point of failure, then sure, you can proceed the way you've listed. However, you'll still route all VM-related I/O over Ethernet, which is obviously slower than running VMs from DAS (with or without a virtual SAN LUN-to-LUN replication layer), as DAS has higher bandwidth and lower latency. Also, with your scenario you exclude one host from your hypervisor cluster, so running VMs on a pair of hosts instead of three gives you much worse performance and resilience: with 1 of 3 physical hosts lost, the cluster would still be operable, whereas with 1 of 2 lost, all VMs booted on a single node could give you inadequate performance. So make sure your hosts are generously overprovisioned, as every single node should be able to handle ALL the workload in case of disaster. Good luck and happy clustering :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
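    For what it's worth, the cluster itself (in any of the variants above) is created with the FailoverClusters PowerShell module; node names, cluster name, and IP below are placeholders:

    ```powershell
    # Validate the candidate nodes, create the failover cluster,
    # then promote a clustered disk to a Cluster Shared Volume for the HA VMs.
    Test-Cluster -Node HV1, HV2
    New-Cluster -Name HVCLUSTER -Node HV1, HV2 -StaticAddress 192.168.16.50
    Add-ClusterSharedVolume -Name "Cluster Disk 1"
    ```

    Test-Cluster's validation report is also what Microsoft looks at for support, so it is worth running before New-Cluster.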

  • CSA 5.1 Agent Installation on Microsoft Clusters with Teamed Broadcom NICs

    I'm searching all over Cisco.com for information on installing CSA 5.1 agent on Microsoft Clusters with Teamed Broadcom NICs, but I can't find any information other than "this is supported" in the installation guide.
    Does anyone know if there is a process or procedure that should be followed to install this? For example, some questions that come to mind are:
    - Do the cluster services need to be stopped?
    - Should the cluster be broken and then rebuilt?
    - Is there any documentation indicating this configuration is approved by Microsoft?
    - Are there case studies or other documentation on previous similar installations and/or lessons learned?
    Thanks in advance,
    Ken

    Ken, you might just end up being the case study! Do you have a non-production cluster to test with?
    If not, and you've already completed pilot testing, you probably have an idea of what you want to do with the agent. Do you have to stop the cluster for other software installations? I guess you might ask MS about breaking the cluster, since it's their cluster.
    The only caveat I've seen with teamed NICs is when the agent tries to contact the MC it may timeout a few times. You could probably increase the polling time if this happens.
    I'd create an agent kit that belongs to a group in test mode with minimal or no policies attached to test first and install it on one of the nodes. If that works ok you could gradually increase the policies and rules until you are comfortable that it is tuned correctly and then switch to protect mode.
    Hope this helps...
    Tom S

  • Clustering with wl6.1 - Problems

    Hi,
              After reading a bit about clustering with weblogic 6.1 (and thanks to
              this list), I have done the following:
              1. Configure machines - two boxes (Solaris and Linux).
              2. Configure servers - weblogic 6.1 running on both at port 7001.
              3. Administration server is Win'XP. Here is the snippet of config.xml
              on the Administration Server:
              <Server Cluster="MyCluster" ListenAddress="192.168.1.239"
              Machine="dummy239" Name="wls239">
              <Log Name="wls239"/>
              <SSL Name="wls239"/>
              <ServerDebug Name="wls239"/>
              <KernelDebug Name="wls239"/>
              <ServerStart Name="wls239"/>
              <WebServer Name="wls239"/>
              </Server>
              <Server Cluster="MyCluster" ListenAddress="192.168.1.131"
              Machine="dummy131" Name="wls131">
              <Log Name="wls131"/>
              <SSL Name="wls131"/>
              <ServerDebug Name="wls131"/>
              <KernelDebug Name="wls131"/>
              <ServerStart Name="wls131"
              OutputFile="C:\bea\wlserver6.1\.\config\NodeManagerClientLogs\wls131\startserver_1029504698175.log"/>
              <WebServer Name="wls131"/>
              </Server>
              Problems:
              1. I can't figure out how I set the "OutputFile" parameter for the
              server "wls131".
              2. I have NodeManager started on 131 listening on port 5555. But when
              I try to start server "wls131" from the Administration Server, I get
              the following error:
              <Aug 16, 2002 6:56:58 AM PDT> <Error> <NodeManager> <Could not start
              server 'wls131' via Node Manager - reason: '[SecureCommandInvoker:
              Could not create a socket to the NodeManager running on host
              '192.168.1.131:5555' to execute command 'online null', reason:
              Connection refused: connect. Ensure that the NodeManager on host
              '192.168.1.131' is configured to listen on port '5555' and that it is
              actively listening]'
              Any help will be greatly appreciated.
              TIA,
              

    I have made some progress:
              1. The environment settings on 131 were missing. I executed setEnv.sh
              to setup the required environment variables.
              2. nodemanager.hosts (on 131) had the following entries earlier:
                   # more nodemanager.hosts
                   127.0.0.1
                   localhost
                   192.168.1.135
              I changed it to:
                   #more nodemanager.hosts
                   192.168.1.135
              3. The Administration Server (135) did not have any listen Address
              defined (since it was working without it), but since one of the errors
              thrown by NodeManager on 131 was - "could not connect to
              localhost:70001 via HTTP", I changed the listen Address to
              192.168.1.135 (instead of the null).
              4. I deleted all the logs (NodeManagerInternal logs on 131) and all
              log files on NodeManagerClientLogs on 135.
              5. Restarted Admin Server. Restarted NodeManager on 131.
              NodeManagerInternalLogs on 131 has:
                   [root@]# more NodeManagerInternal_1029567030003
                   <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting
              listenAddress to '1
                   92.168.1.131'>
                   <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting
              listenPort to '5555
                   '>
                   <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting WebLogic
              home to '/
                   home/weblogic/bea/wlserver6.1'>
                   <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting java
              home to '/home
                   /weblogic/jdk1.3.1_03'>
                   <Aug 17, 2002 12:20:33 PM IST> <Info>
              <[email protected]:5555> <SecureSo
                   cketListener: Enabled Ciphers >
                   <Aug 17, 2002 12:20:33 PM IST> <Info>
              <[email protected]:5555> <TLS_RSA_
                   EXPORT_WITH_RC4_40_MD5>
                   <Aug 17, 2002 12:20:33 PM IST> <Info>
              <[email protected]:5555> <SecureSo
                   cketListener: listening on 192.168.1.131:5555>
                   And the wls131 logs contain:
                   [root@dummy131 wls131]# more config
                   #Saved configuration for wls131
                   #Sat Aug 17 12:24:42 IST 2002
                   processId=18437
                   savedLogsDirectory=/home/weblogic/bea/wlserver6.1/NodeManagerLogs
                   classpath=NULL
                   nodemanager.debugEnabled=false
                   TimeStamp=1029567282621
                   command=online
                   java.security.policy=NULL
                   bea.home=NULL
                   weblogic.Domain=domain
                   serverStartArgs=NULL
                   weblogic.management.server=192.168.1.135\:7001
                   RootDirectory=NULL
                   nodemanager.sslEnabled=true
                   weblogic.Name=wls131
                   The error generated for the client (131) was:
                   [root@dummy131 wls131]# more wls131_error.log
                   The WebLogic Server did not start up properly.
                   Exception raised:
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.
                   DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   Reason: Fatal initialization exception
                   and the output on the admin server (135) is:
                   <Aug 17, 2002 12:24:42 PM IST> <Info>
              <[email protected]:5555> <BaseProcessControl: saving process
              id of Weblogic Managed server 'wls131', pid: 18437>
                   Starting WebLogic Server ....
                   Connecting to http://192.168.1.135:7001...
                   <Aug 17, 2002 12:24:50 PM IST> <Emergency> <Configuration Management>
              <Errors detected attempting to connect to admin server at
              192.168.1.135:7001 during initialization of managed server (
              192.168.1.131:7001 ). The reported error was: <
              weblogic.security.acl.DefaultUserInfoImpl > This condition generally
              results when the managed and admin servers are using the same listen
              address and port.>
                   <Aug 17, 2002 12:24:50 PM IST> <Emergency> <Server> <Unable to
              initialize the server: 'Fatal initialization exception
                   Throwable: weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   '>
                   The WebLogic Server did not start up properly.
                   Exception raised:
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   Reason: Fatal initialization exception
               6. Now, from the client (131) error, I thought it was something to do
               with security. So I tried to start WebLogic manually (connected as the
               same user). Curiously enough, it does start (it threw some errors for
               some EJBs, but I got the final message):
                   <Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
              <ListenThread listening on port 7001>
                   <Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
              <SSLListenThread listening on port 7002>
                   <Aug 17, 2002 12:30:40 PM IST> <Notice> <WebLogicServer> <Started
              WebLogic Admin Server "myserver" for domain "mydomain" running in
              Production Mode>
              7. As you can see the domain on the client (131) is "mydomain". But
              shouldn't the Admin server be 192.168.1.135, since this is what I have
              configured for the NodeManager? Or is it that the error occurs because
              the Admin server Node Manager is configured to work with is 135, while
              in the default scripts the admin server is itself? I'm confused :-)
              Help, anyone?
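              For what it's worth, the stock startWebLogic.sh boots the local server as the admin server of its own domain ("mydomain" here), which matches what you saw in step 6. To boot wls131 as a managed server of the admin server on 135 (which is what Node Manager effectively does for you), the 6.1 managed-server script takes the server name and the admin URL; the domain directory below is an assumption about your layout:

              ```
              # Start wls131 as a managed server of the admin server on 192.168.1.135,
              # rather than as its own admin server. Path is an assumed install layout.
              cd /home/weblogic/bea/wlserver6.1/config/mydomain
              ./startManagedWebLogic.sh wls131 http://192.168.1.135:7001
              ```

              If this works by hand but not via Node Manager, the remaining difference is usually the credentials and environment Node Manager passes to the managed server.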
              

  • Clustering with UC560/540

    Hi,
    Is it possible to configure UC560 and UC540s kept in different locations as Publisher and Subscribers and there by avail the centralised deployment advandages?
    I have one client with one UC560 at the head office and two UC540s at two different branch offices. I want to configure the extension mobility feature so all users can access their profile at any site. They have VPN over a normal internet connection, not an MPLS connection. Can anybody help me with this?

  • BM clustering with load balancing

    I want to implement BM clustering with load balancing according to the AppNote written by Steve Aitken from March 25, 2005.
    It's clear that I need to use two private addresses (from the example, these are 10.10.10.10 and 10.10.10.11). However, I'm not sure what the IP addresses 10.10.10.1 and 10.10.10.2 are used for.
    The existing BM servers have two NICs: the first is defined as private and the second as public, connected directly to the Internet (they are on different subnets).
    Sinisa

    Originally Posted by phxazcraig
    In article <[email protected]>, Tnelson 2000
    wrote:
    > I've set this up per the appnote and aren't able to get out through any
    > of the ip addresses. I get a 504 Gateway Time out error. I also noticed
    > that the cluster master ip address is different, 10.10.10.12, for
    > example. Do you know what I need to look at to verify I have this
    > configured correctly?
    >
    What do you mean "aren't able to get out through any of the ip
    addresses"?
    Do the addresses show up in any of the proxy nodes with display secondary
    ipaddress? Does the proxy console option 17 show the server listening on
    those addresses?
    Is the gateway timeout error a BorderManager (or Windows) error? If
    BMgr, then check that BMgr has a correct default gateway, DNS is working
    (option 4 on proxy console screen) and try dropping filters for a test.
    Craig Johnson
    Novell Knowledge Partner
    *** For a current patch list, tips, handy files and books on
    BorderManager, go to Craig Johnson Consulting - BorderManager, NetWare, and More ***
    Got it working. I noticed that my dns setting in BM2 didn't coincide with settings in BM1. So, I made them the same and reinitialized the system on both servers. Of course, when I did that, it added the secondary IP addresses. So, I'm really not sure what was stopping it from working before, unless I have something misconfigured that's preventing the secondary addresses from loading. Go figure.

  • Multiple Clusters with same computers

    Is there a way to set up multiple clusters with the same computers?
    For example
    Cluster A has computers 1, 2
    Cluster B has computers 1, 2, 3, 4, 5
    The reason behind this is that computers 1 and 2 are always available, but 3, 4, 5 are used during the day. I'd like to have a day cluster and a night cluster that I can just select as needed.

    Hi Jake, I was playing with this type of thing when FCS1 first came out, using some old Powerbooks.
    Yes, you can sort of do this using UNMANAGED SERVICES, but I stand to be corrected for Compressor3.app.
    In FCS2 / Compressor 3, if you use *MANAGED SERVICES* for the service nodes assigned to a defined, specific cluster (you would have used the Apple Qmaster utility.app), then I believe those service nodes are dedicated to that cluster.
    Sure, you can make "INSTANCES" of compressor, for example, and dedicate the "renderer" instances to a particular cluster. Using this as an example... let's say...
    HOST1 = MACPRO8CORE with 4 instances of compressor (as C1, C2, C3, C4 via 4 virtual clusters) and 8 instances of renderer (R1, R2, ... R7 & R8).
    HOST2 = MACBOOKPRO with 2 instances of compressor (as C1 & C2 with 2 virtual clusters) and 2 instances of renderer (R1, R2).
    HOST3 = PowerbookG4 with 1 instance of compressor (as C1) and 1 instance of renderer (R1).
    Only as an example:
    ClusterA: Host1[C1, C2, C3, R1, R2, R4, R5], Host2[C2]
    ClusterB: Host1[C4, R3, R6, R7], Host2[C2, R1, R2], Host3[C1, R1]
    In fact I just tried it...
    However, as I thought, for managed services be assured that you cannot SHARE a service node between two or more clusters. I stand to be corrected.
    Messy, but it seems to work; although useless with the Powerbook, you'd agree.
    However, depending on your workflow and commercial needs (say, business priorities for a specific client), for best results use *UNMANAGED SERVICES*...
    Simply hook everything up over GbE, with all the usual tweaks such as "NEVER COPY SOURCE" and MOUNTING all source targets and compressor work files over HFS on the GbE subnet (and many other tweaks), and treat the setup as one huge bucket.
    Use priority on the batch submission. It's not too smart, but at least you have some control over the queues.
    The resource manager in QMASTER is not so smart: despite a service node going idle, it does not seem to redirect work from one node to another for load balancing.
    I have only tried this with SEGMENTED TRANSCODING.
    Rendering (Shake) works great and seems simpler. My time is spent with multipass segmented transcoding, where I want H.264 from DVCPROHD and don't want to wait all day for it, especially where I have tweaked the timing in compressor a bit.
    Try it out.
    BTW, as many attest on this forum, QMASTER/COMPRESSOR can be a bugger to fix if it plays up, and it has for me as well.
    Post your results.
    HTH.
    w

  • How do i send an array of clusters with variable size over TCP/IP?

    Hi,
    I'm trying to send an array of clusters with variable size over TCP/IP, but I'm facing the following problems:
    1) I need to accept the size of the array data from the user and increase the size dynamically.
    I'm doing this using a property node, but how do I convey the new size to my TCP Read?
    2) I need to wire an input to the 'bytes to read' terminal of TCP Read,
    but the number of bytes to read changes dynamically.
    How do I ensure the correct number of bytes are read and reflected on the client side?
    3) Is there any way I can use global variables over a network such that their values are updated just as they would be on one computer?
    It will be a great help if someone posts a solution!
    Thank you...

    twilightfan wrote:
    Altenbach,
    ... xml string. ...number of columns that I'm varying using property nodes... I solved these problems by using a local variable as the type input ...o TCP read is creating a problem.... the second TCP read gets truncated data because
    it's no longer just the first four bytes that specify the length of the data; it could be more, as my array of clusters can be pretty huge.
    Instead of writing long and complicated sentences that make little sense, why don't you simply show us your code? 
    What does any of this have to do with xml strings? I don't see how using a local variable as type input changes anything. The user cannot interact with "property nodes", just with controls. Please clarify. Once the array of clusters is flattened to a string, you only have one size that describes the size of the data, no matter how huge it is (as long as it is within the limits of I32). Similarly, you read the string of that same defined length and form the array of clusters from it. How big are the strings? What is your definition of "huge"?
    Here is my earlier code, but now dealing with an array of clusters. Not much of a change. Since you have columns, you want 2D. Add as many dimensions as you want, but make sure that the control, diagram constant, and indicator all match.
    The snippet shows a 1D array, while the attached VI shows the same for a 2D array. Same difference.
    Message Edited by altenbach on 01-31-2010 01:13 AM
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    FlattenArrayOfClusters.vi ‏12 KB
    ioclusters3MOD.png ‏25 KB
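    The two-read pattern described above (read a fixed 4-byte size header first, then exactly that many bytes of flattened data) is standard length-prefix framing. As a language-neutral illustration outside LabVIEW, a minimal Python sketch of the same idea might look like the following; the big-endian 4-byte prefix matches LabVIEW's convention when it prepends sizes to flattened data, and the helper names here are hypothetical:

    ```python
    import socket
    import struct

    def send_message(sock: socket.socket, payload: bytes) -> None:
        # Prepend the flattened data with its length as a big-endian 4-byte
        # integer (the same convention LabVIEW uses when prepending sizes).
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        # TCP is a byte stream: a single recv() may return fewer bytes than
        # requested, so loop until exactly n bytes have arrived.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed before message completed")
            buf += chunk
        return buf

    def recv_message(sock: socket.socket) -> bytes:
        # First TCP read: the fixed 4-byte size header.
        (length,) = struct.unpack(">I", recv_exact(sock, 4))
        # Second TCP read: exactly 'length' bytes of flattened cluster data.
        return recv_exact(sock, length)
    ```

    Because the receiver always reads the fixed header before the variable-length body, the payload can grow or shrink per message with no separate size negotiation; unflattening back to the array of clusters happens only after the full message has been received.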

  • Array of Clusters with Graph - Y Scale Change Event

    Hello everyone
    I have an Array of Clusters with a Graph inside each cluster. I need to trigger an event when the user types a new value on the graph's Y scale and hits "Enter" to apply the change. Any ideas how to trigger that?
    I am not considering the "mouse enter" event because I have other events linked with that already.
    Thanks
    Dan07
    Solved!
    Go to Solution.

    dan07 wrote:
    I think that I was not clear. Sorry. Let's think about two arrays of clusters with graphs: array of clusters 1 and array of clusters 2. Both of them have their own graphs inside the clusters.
    I want to do the same thing that you told me to do, but with two arrays of clusters instead of one. Changing the scale range of any graph in array of clusters 1 will trigger case A of the event structure (just an example), and changing the scale range of any graph in array of clusters 2 will trigger case B of the event structure (just an example again).
    Then try the following. Dynamic Event Registration nodes are expandable:

  • Linux Clustering with oracle

    Hi All,
    Please forward good PDF documents and links describing Red Hat Linux clustering with Oracle.
    Please help...
    Shiju

    > Hi All,
    > Please forward good PDF documents and links describing Red Hat Linux
    > clustering with Oracle.
    Check the following link:
    http://www.oracle.com/pls/db102/to_pdf?pathname=install.102%2Fb15660.pdf&remark=portal+%28Getting+Started%29
    Virag
