Clustering with wl 6.1 - Questions
Hi,
Sorry for the duplicate post, but I realized I need to start a new thread.
Regards,
Manav.
==================
Hi,
My setup is:
Admin: 192.168.1.135:7001 (XP). Cluster address: 192.168.1.239, 192.168.1.71
managed server 1: 192.168.1.239:7001 (SunOS)
managed server 2: 192.168.1.71:7001 ('98)
I have a sample SLSB deployed on the server. The deployment descriptors are
configured as:
<stateless-session-descriptor>
  <stateless-clustering>
    <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
    <stateless-bean-load-algorithm>random</stateless-bean-load-algorithm>
    <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
  </stateless-clustering>
</stateless-session-descriptor>
A test client accesses the cluster by specifying the JNDI URL
"t3://192.168.1.71,192.168.1.239:7001" and works properly.
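For reference, the lookup such a test client performs can be sketched roughly as below. This is only a sketch with a modern JDK: the JNDI name "MessageBean" and the bare Object return are placeholders (a real client would narrow to its home interface), while weblogic.jndi.WLInitialContextFactory and the comma-separated cluster address form are standard WebLogic usage.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusterClient {

    // Joins the cluster members into a single cluster-aware t3 URL,
    // e.g. "t3://192.168.1.71,192.168.1.239:7001".
    static String clusterUrl(String[] hosts, int port) {
        return "t3://" + String.join(",", hosts) + ":" + port;
    }

    // Looks up a (hypothetical) home bound under "MessageBean". Running this
    // requires weblogic.jar on the classpath and a live cluster, so it is
    // deliberately not invoked from main().
    static Object lookupHome(String providerUrl) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, providerUrl);
        return new InitialContext(env).lookup("MessageBean");
    }

    public static void main(String[] args) {
        // Builds the same URL form used in this post.
        String url = clusterUrl(
                new String[] {"192.168.1.71", "192.168.1.239"}, 7001);
        System.out.println(url);
    }
}
```

With this URL form the initial context is cluster-aware, which is what lets the client fail over to the surviving member when one server is killed.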
1. If I kill one of the managed servers, the requests are load balanced to
the other server. But the behaviour is erratic: after one server dies, the
client actually executes faster!?! And no, it's independent of which managed
server I kill.
2. How do I monitor how many instances were created on each managed server?
Does the WL console show this somewhere?
3. I have not done anything special to set up the JNDI tree. I'm a little
hazy on this (I am reading documents on the same). Any pointers? I'm still
trying to grasp the need (I understand the bit about synchronizing the JNDI
tree in the cluster to keep all servers aware of the EJBs deployed on them,
and on others in the cluster, but I'm sure there's more to it that I don't
know about).
4. I find that when the managed servers join the cluster, any error/info
messages are not stored in their respective logs. Is this the right
behaviour? Are these logs stored on the admin server instead? (Is this what
the log file wl-domain.log is used for?)
5. How do I monitor the deployed EJBs through the console? Does it tell me
the number of creations/destructions, method calls, etc.? How do I determine
whether the pool is adequate or needs to be increased?
6. What is a realm? How is it helpful?
7. What is a replication group? How does it help me in clustering?
The more I read, the more I keep getting confused. I don't expect the gurus
to answer all the questions, but any help will be appreciated.
With Warm Regards,
Manav.
Similar Messages
-
Clustering with wl 6.1 - Questions Part II
Again, here's the setup:
Admin: 192.168.1.135:7001 (XP). Cluster address:192.168.1.239, 192.168.1.71
managed server 1: 192.168.1.239:7001 (SunOS)
managed server 2: 192.168.1.71:7001 ('98)
I have a single SLSB deployed on the cluster. The deployment descriptor
has been configured for stateless clustering. The bean has a single method
(returnMessage(String msg)) that accepts a string and returns it.
A standalone Java client accesses the cluster by specifying the JNDI URL
"t3://192.168.1.71,192.168.1.239:7001" and works properly. The Java client
does a JNDI lookup once, caches the home reference, and calls home.create in
an infinite loop.
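The stall the client sees during failover can be made visible by timing each call. The sketch below is server-independent: the Runnable is a stand-in for the cached home.create() plus the business method, and the 5-second threshold is an arbitrary choice, not anything from this thread.

```java
import java.util.concurrent.TimeUnit;

public class CallTimer {

    // Invokes `call` repeatedly, timing each invocation and counting those
    // that stall longer than `thresholdMillis` (e.g. the roughly 4 x 30 s
    // freeze seen when a cluster member drops off the network).
    static int countStalls(Runnable call, int iterations, long thresholdMillis) {
        int stalls = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            call.run();
            long elapsedMs =
                    TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            if (elapsedMs > thresholdMillis) {
                stalls++;
                System.out.println("call " + i + " stalled for " + elapsedMs + " ms");
            }
        }
        return stalls;
    }

    public static void main(String[] args) {
        // A no-op stands in for the remote call; against a live cluster you
        // would wrap something like home.create().returnMessage("ping") here.
        int stalls = countStalls(() -> { }, 100, 5000);
        System.out.println("stalls: " + stalls);
    }
}
```

Logging per-call latency like this also makes the "executes faster after one server dies" observation from the earlier thread measurable rather than anecdotal.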
1. After reading one of the posts here, I physically pulled the network
cable of one of the machines in the cluster (192.168.1.71). The Java client
was stuck for about 4 * 30 secs. The managed server whose network cable I
yanked out kept showing the following message every 30 secs (as the docs say):
<25/08/2002 13:52:21> <Error> <Cluster> <Multicast socket receive error:
java.io.InterruptedIOException: Receive timed out
java.io.InterruptedIOException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive(Native Method)
at java.net.DatagramSocket.receive(DatagramSocket.java:392)
at weblogic.cluster.FragmentSocket.receive(FragmentSocket.java:145)
at
weblogic.cluster.MulticastManager.execute(MulticastManager.java:293)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
just the way it should be (hopefully)! Now, the real fun started when I put
the cord back. The admin server should have detected that the managed
server was back, right (the console should show the managed server as
"Running" in the "Servers" tab)? Wrong, it doesn't! Why?
2. I restarted the weblogic server on 192.168.1.71 and sure enough, it
joined the cluster. After it joined the cluster, the last line of
"weblogic.log" is:
####<Aug 25, 2002 2:15:53 PM PDT> <Info> <Management> <DUMMY135> <wlsdomain>
<ExecuteThread: '8' for queue: 'default'> <system> <> <140009>
<Configuration changes for domain saved to the repository.>
And the "wl-domain.log" ("domain" is the name of my domain) has the last
entry as:
####<Aug 25, 2002 1:43:56 AM PDT> <Notice> <WebLogicServer> <dummy71>
<dummy71> <ExecuteThread: '14' for queue: 'default'> <system> <> <000332>
<Started WebLogic Managed Server "dummy71" for domain "domain" running in
Development Mode>
The last few lines on the console window of the managed server (dummy71)
show:
Starting Cluster Service ....
<25/08/2002 14:13:55> <Notice> <WebLogicServer> <ListenThread listening on
port 7001, ip address 192.168.1.71>
<25/08/2002 14:13:56> <Notice> <Cluster> <Listening for multicast messages
(cluster wlcluster) on port 7001 at address 237.0.0.1>
<25/08/2002 14:13:56> <Notice> <WebLogicServer> <Started WebLogic Managed
Server "dummy71" for domain "domain" running in Development Mode>
2.1 I presume the "Configuration changes for domain saved to the
repository" message indicates WebLogic changed "config.xml"?
2.2 Why would it update config.xml?
2.3 Where is the entry in the logs to indicate that dummy71 joined the
cluster?
2.4 dummy71 joined the cluster at 14:13:56, while the admin server has
the entry at 1:43:56 AM PDT. Why is a time zone mentioned in
"wl-domain.log"? Further investigation revealed that the time zone on the
admin server was Pacific (US). I changed it to GMT+5:30 (so it's the same on
both) and restarted WebLogic on dummy71. Again, the logs mention PDT on the
admin server, while there is no mention of the time zone on the managed
server. Strange! Finally, the system clock on the admin server shows 3:01 PM,
while WebLogic is about an hour behind in a different time zone (the last
entry in the logs now has the timestamp Aug 25, 2002 2:02:14 AM PDT).
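On the time-zone puzzle in 2.4: each server stamps its log entries in that JVM's default time zone (which can be overridden with the standard user.timezone system property), so entries that look hours apart can denote nearby instants. A JDK-only sketch, nothing WebLogic-specific, converting the admin server's PDT stamp to UTC:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class LogTimeCheck {

    // Parses a timestamp in the wl-domain.log format and re-renders it in
    // UTC; "1:43:56 AM PDT" is 08:43:56 UTC, since PDT is UTC-7.
    static String toUtc(String stamp) {
        try {
            SimpleDateFormat in =
                    new SimpleDateFormat("MMM d, yyyy h:mm:ss a z", Locale.US);
            Date instant = in.parse(stamp);
            SimpleDateFormat out =
                    new SimpleDateFormat("yyyy-MM-dd HH:mm:ss 'UTC'", Locale.US);
            out.setTimeZone(TimeZone.getTimeZone("UTC"));
            return out.format(instant);
        } catch (ParseException e) {
            throw new IllegalArgumentException(stamp, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(toUtc("Aug 25, 2002 1:43:56 AM PDT"));
    }
}
```

Comparing both servers' stamps after normalizing to one zone like this is usually enough to tell clock skew apart from a mere display difference.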
3. Finally, I had modified my config.xml to add the following:
<ServerDebug DebugCluster="true"
DebugClusterAnnouncements="true"
DebugClusterFragments="true" DebugClusterHeartbeats="true"
Name="dummy239"/>
I added the lines above for both servers in the cluster to take a look
at the chatter between the admin server and the managed servers
(specifically the "heartbeat" every 30 seconds, and the synchronization of
clustered services, with emphasis on EJB and JNDI). Are the entries above
correct and in the right place? If yes, which log files do I check for this?
Can anyone tell me a sample message to look for (assuming this chatter is
recorded in weblogic.log or wl-domain.log, the two log files I have) so I
can verify the same in my logs (and probably learn something from them
:-) ).
With Warm Regards,
Manav.
BTW - when I said "bad test", I didn't mean you should not do it, but rather
that it is a "worst case scenario" (and something that BEA should be testing).
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Manavendra Gupta" <[email protected]> wrote in message
news:[email protected]...
> Thanks for the reply Cameron. I conducted the same test by physically
> shutting down one of the machines (which I believe simulates a crash) and
> got the same results.
>
> And since I'm rather new to weblogic i've been really keen on getting the
> answers to my queries :-)
>
> --
> With Warm Regards,
> Manav.
>
> "Cameron Purdy" <[email protected]> wrote in message
> news:[email protected]...
> > Plugging and unplugging a network cable is a very bad test. Some of the
> > JVM implementations do not re-establish multicast traffic if you do that,
> > or they re-establish it only in one direction (it could be a bug in the
> > OS or socket libs too). We implemented a solution for this exact problem,
> > but it took a lot of research and coding.
> >
> > A better test would be to have a computer plugged to a switch plugged to
> > a switch plugged to the other computer, then unplug the connection
> > between the switches.
> >
> > Peace,
> >
> > Cameron Purdy
> > Tangosol, Inc.
> > http://www.tangosol.com/coherence.jsp
> > Tangosol Coherence: Clustered Replicated Cache for Weblogic
-
Exporting data clusters with type version
Hi all,
let's assume we are saving some ABAP data as a cluster to the database using the EXPORT TO ... functionality, e.g.
EXPORT VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
Some days later, the data can be imported
IMPORT VBAK TO LS_VBAK VBAP TO LT_VBAP FROM DATABASE INDX(QT) ID 'TEST'.
Some months or years later, however, the IMPORT may crash: Since it is the most normal thing in the world that ABAP types are extended, some new fields may have been added to the structures VBAP or VBAK in the meantime.
The data are not lost, however: using method CL_ABAP_EXPIMP_UTILITIES=>DBUF_IMPORT_CREATE_DATA, they can be recovered from an XSTRING. This creates data objects matching the content of the buffer. But the component names are lost - they get auto-generated names like COMP00001, COMP00002, etc., replacing the original names MANDT, VBELN, etc.
So a natural question is how to save the type info ( = metadata) for the extracted data together with the data themselves:
EXPORT TYPES FROM LT_TYPES VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
The table LT_TYPES should contain the meta type info for all exported data. For structures, this could be a DDFIELDS-like table containing the component information. For tables, additionally the table kind, key uniqueness and key components should be saved.
Actually, LT_TYPES should contain persistent versions of CL_ABAP_STRUCTDESCR, CL_ABAP_TABLEDESCR, etc. But it seems there is no serialization provided for the RTTI type info classes.
(In an optimized version, the type info could be stored in a separate cluster, and being referenced by a version number only in the data cluster, for efficiency).
In the import step, the LT_TYPES could be imported first, and then instances for these historical data types could be created as containers for the real data import (here, I am inventing a class zcl_abap_expimp_utilities):
IMPORT TYPES TO LT_TYPES FROM DATABASE INDX(QT) ID 'TEST'.
DATA(LO_TYPES) = ZCL_ABAP_EXPIMP_UTILITIES=>CREATE_TYPE_INFOS( LT_TYPES ).
assign lo_types->data_object('VBAK')->* to <LS_VBAK>.
assign lo_types->data_object('VBAP')->* to <LT_VBAP>.
IMPORT VBAK TO <LS_VBAK> VBAP TO <LT_VBAP> FROM DATABASE INDX(QT) ID 'TEST'.
Now the data can be recovered with their historical types (i.e. the types they had when the export statement was performed) and processed further.
For example, structures and table-lines could be mixed into the current versions using MOVE-CORRESPONDING, and so on.
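That MOVE-CORRESPONDING step - copying by component name between an old and a new version of a structure - can be illustrated generically. The sketch below uses Java maps as stand-ins for the ABAP structures; it is not any SAP API, just the name-matching idea.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CorrespondingMove {

    // Copies values from `source` into `target` for every field name that
    // exists in both, leaving target-only fields (newly added columns)
    // untouched - the same idea as ABAP's MOVE-CORRESPONDING between
    // historical and current type versions.
    static Map<String, Object> moveCorresponding(Map<String, Object> source,
                                                 Map<String, Object> target) {
        Map<String, Object> result = new LinkedHashMap<>(target);
        for (Map.Entry<String, Object> e : source.entrySet()) {
            if (result.containsKey(e.getKey())) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // "Historical" record, as recovered with its exported type.
        Map<String, Object> oldVbak = new LinkedHashMap<>();
        oldVbak.put("MANDT", "100");
        oldVbak.put("VBELN", "0000004711");

        // Current structure version with a field added after the export.
        Map<String, Object> newVbak = new LinkedHashMap<>();
        newVbak.put("MANDT", null);
        newVbak.put("VBELN", null);
        newVbak.put("NEW_FIELD", "default");

        System.out.println(moveCorresponding(oldVbak, newVbak));
    }
}
```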
My question: Is there any support from the standard for this functionality: Exporting data clusters with type version?
Regards,
Rüdiger
The IMPORT statement works fine if the target internal table has all the fields of the source internal table, plus some additional fields at the end, something like an append structure of VBAK.
Here is the snippet used.
TYPES:
  BEGIN OF ty,
    a TYPE i,
  END OF ty,
  BEGIN OF ty2.
    INCLUDE TYPE ty.
TYPES:
    b TYPE i,
  END OF ty2.

DATA: lt1 TYPE TABLE OF ty,
      ls  TYPE ty,
      lt2 TYPE TABLE OF ty2.

ls-a = 2. APPEND ls TO lt1.
ls-a = 4. APPEND ls TO lt1.

EXPORT table = lt1 TO MEMORY ID 'ZTEST'.
IMPORT table = lt2 FROM MEMORY ID 'ZTEST'.
I guess the IMPORT statement would behave fine if the current VBAK has more fields than the older VBAK. -
CSA 5.1 Agent Installation on Microsoft Clusters with Teamed Broadcom NICs
I'm searching all over Cisco.com for information on installing CSA 5.1 agent on Microsoft Clusters with Teamed Broadcom NICs, but I can't find any information other than "this is supported" in the installation guide.
Does anyone know if there is a process or procedure that should be followed to install this? For example, some questions that come to mind are:
- Do the cluster services need to be stopped?
- Should the cluster be broken and then rebuilt?
- Is there any documentation indicating this configuration is approved by Microsoft?
- Are there case studies or other documentation on previous similar installations and/or lessons learned?
Thanks in advance,
Ken
Ken, you might just end up being the case study! Do you have a non-production cluster to test with?
If not, and you have already completed pilot testing, you probably have an idea of what you want to do with the agent. Do you have to stop the cluster for other software installations? I guess you might ask MS about breaking the cluster, since it's their cluster.
The only caveat I've seen with teamed NICs is that when the agent tries to contact the MC it may time out a few times. You could probably increase the polling time if this happens.
I'd create an agent kit that belongs to a group in test mode with minimal or no policies attached, to test first, and install it on one of the nodes. If that works OK, you could gradually increase the policies and rules until you are comfortable that it is tuned correctly, and then switch to protect mode.
Hope this helps...
Tom S -
SAP ECC 6.0 installation in windows 2008 clustering with db2 ERROR DB21524E
Dear Sir.,
I am installing SAP ECC 6.0 on Windows 2008 clustering with DB2.
I got an error in the phase "Configure the database for MSCS". The error is: DB21524E 'FAILED TO CREATE THE RESOURCE DB2 IP PRD' THE CLUSTER NETWORK WAS NOT FOUND.
DB2_INSTANCE=DB2PRD
DB2_LOGON_USERNAME=iil\db2prd
DB2_LOGON_PASSWORD=XXXX
CLUSTER_NAME=mscs
GROUP_NAME=DB2 PRD Group
DB2_NODE=0
IP_NAME = DB2 IP PRD
IP_ADDRESS=192.168.16.27
IP_SUBNET=255.255.0.0
IP_NETWORK=public
NETNAME_NAME=DB2 NetName PRD
NETNAME_VALUE=dbgrp
NETNAME_DEPENDENCY=DB2 IP PRD
DISK_NAME=Disk M::
TARGET_DRVMAP_DISK=Disk M
Please help me, since I am already running late with this installation, to run the db2mscs utility to create the resource.
Best regards,
Manjunath G
Edited by: Manjug77 on Oct 29, 2009 2:45 PM
Hello Manjunath,
This looks like a configuration problem.
Please check if IP_NETWORK is set to the name of your network adapter and
if your IP_ADDRESS and IP_SUBNET are set to the correct values.
Note:
- IP_ADDRESS is a new IP address that is not used by any machine in the network.
- IP_NETWORK is optional
If you still get the same error, debug your db2mscs.exe call.
See the answer from Adam Wilson:
Can you run the following and check the output:
db2mscs -f <path>\db2mscs.cfg -d <path>\debug.txt
I suspect you may see the following error in the debug.txt:
Create_IP_Resource fnc_errcode 5045
If you see fnc_errcode 5045: error 5045, which is a Windows error, means
ERROR_CLUSTER_NETWORK_NOT_FOUND. This error occurs because Windows
couldn't find the "public network" indicated by IP_NETWORK.
Windows couldn't find an MSCS network called "public network". The
IP_NETWORK parameter must be set to an MSCS network, so run the
Cluster Admin GUI and expand Cluster Configuration -> Networks to
view all the MSCS networks that are available and check whether "public
network" is one of them.
However, the parameter IP_NETWORK is optional and can be commented
out. In that case the first MSCS network detected by the system is used.
Best regards,
Hinnerk Gildhoff -
I need help with Changing my Security Questions, I have forgotten them.
It's simple: I tried buying a Gym Buddy application and I had to answer my security questions... which I have forgotten. I made this a while ago, so I probably entered something stupid and fast; I really regret it now. When I'm coming to this...
Hello Adrian,
The steps in the articles below will guide you in setting up your rescue email address and resetting your security questions:
Rescue email address and how to reset Apple ID security questions
http://support.apple.com/kb/HT5312
Apple ID: All about Apple ID security questions
http://support.apple.com/kb/HT5665
If you continue to have issues, please contact our Account Security Team as outlined in this article for assistance with resetting the security questions:
Apple ID: Contacting Apple for help with Apple ID account security
http://support.apple.com/kb/HT5699
Thank you for using Apple Support Communities.
Best,
Sheila M. -
Problem with clustering with JBoss server
Hi,
Its a HUMBLE REQUEST TO THE EXPERIENCED persons.
I am new to clustering. My objective is to attain clustering with load balancing and/or failover in JBoss server. I have two JBoss servers running on two different IP addresses, which form my cluster. I could successfully perform farm (all/farm) deployment
in my cluster.
I do believe that if clustering is enabled and one of the servers (s1) goes down, then the other (s2) will serve the requests coming to s1. Am I correct? Or is that true only in the case of "failover clustering"? If it is correct, what are all the things I have to do to achieve it?
As I am new to the topic, can anyone explain how a simple application (say, getting a value from a user and storing it in the database; assume everything is there in a WAR file) can be deployed with load balancing and failover support, rather than going into clustering EJBs or anything difficult to understand?
Kindly help me in this matter. At least give me some hints and I'll learn from that, because I couldn't find a step-by-step procedure explaining which configuration files are to be changed (and how) to achieve this. Also I couldn't find books explaining this, rather than the usual theoretical concepts.
Thanking you in advance
with respect
abhirami
Hi,
In this scenario you can use a load balancer instead of failover clustering.
I would suggest you create an Apache proxy to redirect requests to the JBoss instances.
Rgds
kathir -
PDF prints with little boxes and question marks
I prepared a document in Microsoft Publisher 360 and created a pdf with creative cloud.
Looks great and prints great for me.
I emailed it to a few people and it looks great on their screens. However, when they print it,
it comes out with little boxes and question marks.
If I email them the publisher file it is fine.
How can I get them the PDF file without all the odd markings?
You probably used a font that is not on their system and did not embed the font in the PDF when you created it. Don't feel bad; we all have done it. Just make sure to embed your fonts in the Joboptions settings before you recreate the PDF.
-
3 Node hyper-V 2012 R2 Failover Clustering with Storage spaces on one of the Hyper-V hosts
Hi,
We have 3x Dell R720s with 5x 600 GB 3.5" 15K SAS and 128 GB RAM each. I was wondering if I could set up Hyper-V 2012 R2 failover clustering with these three, with the shared storage for the CSV being provided by one of the Hyper-V hosts with Storage Spaces installed (is Storage Spaces supported on Hyper-V?). Or I can use 2-node failover clustering and the third one as standalone Hyper-V, or Server 2012 R2 with Hyper-V and Storage Spaces.
Each server comes with quad-port 1G and dual-port 10G NICs, so I can dedicate the 10G NICs to iSCSI.
I don't have a SAN or a 10G switch, so it would be a crossover cable connection between the servers.
Most of the VMs would be non-HA. Exchange 2010, SharePoint 2010 and SQL Server 2008 R2 would be the only VMs running as HA VMs. CSV for the Hyper-V failover cluster would be provided by Storage Spaces.
I thought I was trying to do just that, with 8x 600 GB RAID-10 using the H/W RAID controller (on the 3rd server) and creating CSVs out of that space so as to provide better storage performance for the HA VMs.
1. Storage server: 8x 600 GB RAID-10 (for CSVs to house all HA VMs running on the other 2 servers). It may also run some local VMs that have very little disk I/O.
2. Hyper-V-1: will act as the primary Hyper-V host for the 2x Exchange and database server HA VMs (the VHDXs would be stored on the storage server's CSVs on top of the 8x 600 GB RAID-10). May also run some non-HA VMs using the local 2x 600 GB in RAID-1.
3. Hyper-V-2: will act as the Hyper-V host when the above HA VMs fail over to it (when Hyper-V-1 is down for any reason). May also run some non-HA VMs using the local 2x 600 GB in RAID-1.
The single point of failure for the HA VMs (the non-HA VMs are non-HA, so it's OK if they are down for some time) is the storage server. The Exchange servers here are DAG peers to the Exchange servers at the head office, so in case the storage server mainboard goes down (disk failure is mitigated using RAID; other components such as RAM or the mainboard may still go, but their failure rate is relatively low), the local Exchange servers would be down, but Exchange clients will still be able to do their email-related tasks using the HO Exchange servers.
Also they are under 4hr mission critical support including entire server replacement within the 4 hour period.
If you're OK with your shared storage being a single point of failure, then sure, you can proceed the way you've listed. However, you'll still route all VM-related I/O over Ethernet, which is obviously slower than running VMs from DAS (with or without a virtual SAN LUN-to-LUN replication layer), as DAS has higher bandwidth and smaller latency. Also, with your scenario you exclude one host from your hypervisor cluster, so running VMs on a pair of hosts instead of three would give you much worse performance and resilience: with 1 of 3 physical hosts lost, the cluster would still be operable, while with 1 of 2 lost, all VMs booted on a single node could give you inadequate performance. So make sure your hosts are heavily underprovisioned, as every single node should be able to handle ALL the workload in case of disaster. Good luck and happy clustering :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Clustering with wl6.1 - Problems
Hi,
After reading a bit about clustering with weblogic 6.1 (and thanks to
this list), I have done the following:
1. Configure machines - two boxes (Solaris and Linux).
2. Configure servers - weblogic 6.1 running on both at port 7001.
3. Administration server is Win'XP. Here is the snippet of config.xml
on the Administration Server:
<Server Cluster="MyCluster" ListenAddress="192.168.1.239"
Machine="dummy239" Name="wls239">
<Log Name="wls239"/>
<SSL Name="wls239"/>
<ServerDebug Name="wls239"/>
<KernelDebug Name="wls239"/>
<ServerStart Name="wls239"/>
<WebServer Name="wls239"/>
</Server>
<Server Cluster="MyCluster" ListenAddress="192.168.1.131"
Machine="dummy131" Name="wls131">
<Log Name="wls131"/>
<SSL Name="wls131"/>
<ServerDebug Name="wls131"/>
<KernelDebug Name="wls131"/>
<ServerStart Name="wls131"
OutputFile="C:\bea\wlserver6.1\.\config\NodeManagerClientLogs\wls131\startserver_1029504698175.log"/>
<WebServer Name="wls131"/>
</Server>
Problems:
1. I can't figure out how to set the "OutputFile" parameter for the
server "wls131".
2. I have NodeManager started on 131 listening on port 5555. But when
I try to start server "wls131" from the Administration Server, I get
the following error:
<Aug 16, 2002 6:56:58 AM PDT> <Error> <NodeManager> <Could not start
server 'wls131' via Node Manager - reason: '[SecureCommandInvoker:
Could not create a socket to the NodeManager running on host
'192.168.1.131:5555' to execute command 'online null', reason:
Connection refused: connect. Ensure that the NodeManager on host
'192.168.1.131' is configured to listen on port '5555' and that it is
actively listening]'
Any help will be greatly appreciated.
TIA,
I have made some progress:
1. The environment settings on 131 were missing. I executed setEnv.sh
to set up the required environment variables.
2. nodemanager.hosts (on 131) had the following entries earlier:
# more nodemanager.hosts
127.0.0.1
localhost
192.168.1.135
I changed it to:
#more nodemanager.hosts
192.168.1.135
3. The Administration Server (135) did not have any ListenAddress
defined (since it was working without one), but since one of the errors
thrown by NodeManager on 131 was "could not connect to localhost:7001
via HTTP", I changed the ListenAddress to 192.168.1.135 (instead of
null).
4. I deleted all the logs (NodeManagerInternal logs on 131) and all
log files on NodeManagerClientLogs on 135.
5. Restarted Admin Server. Restarted NodeManager on 131.
NodeManagerInternalLogs on 131 has:
[root@]# more NodeManagerInternal_1029567030003
<Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting listenAddress to '192.168.1.131'>
<Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting listenPort to '5555'>
<Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting WebLogic home to '/home/weblogic/bea/wlserver6.1'>
<Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting java home to '/home/weblogic/jdk1.3.1_03'>
<Aug 17, 2002 12:20:33 PM IST> <Info> <NodeManager@192.168.1.131:5555> <SecureSocketListener: Enabled Ciphers>
<Aug 17, 2002 12:20:33 PM IST> <Info> <NodeManager@192.168.1.131:5555> <TLS_RSA_EXPORT_WITH_RC4_40_MD5>
<Aug 17, 2002 12:20:33 PM IST> <Info> <NodeManager@192.168.1.131:5555> <SecureSocketListener: listening on 192.168.1.131:5555>
And the wls131 logs contain:
[root@dummy131 wls131]# more config
#Saved configuration for wls131
#Sat Aug 17 12:24:42 IST 2002
processId=18437
savedLogsDirectory=/home/weblogic/bea/wlserver6.1/NodeManagerLogs
classpath=NULL
nodemanager.debugEnabled=false
TimeStamp=1029567282621
command=online
java.security.policy=NULL
bea.home=NULL
weblogic.Domain=domain
serverStartArgs=NULL
weblogic.management.server=192.168.1.135\:7001
RootDirectory=NULL
nodemanager.sslEnabled=true
weblogic.Name=wls131
The error generated for the client (131) was:
[root@dummy131 wls131]# more wls131_error.log
The WebLogic Server did not start up properly.
Exception raised:
java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl
<<no stack trace available>>
--------------- nested within: ------------------
weblogic.management.configuration.ConfigurationException:
weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
[java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl]
at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
at weblogic.management.Admin.start(Admin.java:381)
at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
at weblogic.Server.main(Server.java:35)
Reason: Fatal initialization exception
and the output on the admin server (135) is:
<Aug 17, 2002 12:24:42 PM IST> <Info>
<NodeManager@192.168.1.131:5555> <BaseProcessControl: saving process
id of Weblogic Managed server 'wls131', pid: 18437>
Starting WebLogic Server ....
Connecting to http://192.168.1.135:7001...
<Aug 17, 2002 12:24:50 PM IST> <Emergency> <Configuration Management>
<Errors detected attempting to connect to admin server at
192.168.1.135:7001 during initialization of managed server (
192.168.1.131:7001 ). The reported error was: <
weblogic.security.acl.DefaultUserInfoImpl > This condition generally
results when the managed and admin servers are using the same listen
address and port.>
<Aug 17, 2002 12:24:50 PM IST> <Emergency> <Server> <Unable to
initialize the server: 'Fatal initialization exception
Throwable: weblogic.management.configuration.ConfigurationException:
weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
[java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl]
java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl
<<no stack trace available>>
--------------- nested within: ------------------
weblogic.management.configuration.ConfigurationException:
weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
[java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl]
at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
at weblogic.management.Admin.start(Admin.java:381)
at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
at weblogic.Server.main(Server.java:35)
'>
The WebLogic Server did not start up properly.
Exception raised:
java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl
<<no stack trace available>>
--------------- nested within: ------------------
weblogic.management.configuration.ConfigurationException:
weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
[java.lang.ClassCastException:
weblogic.security.acl.DefaultUserInfoImpl]
at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
at weblogic.management.Admin.start(Admin.java:381)
at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
at weblogic.Server.main(Server.java:35)
Reason: Fatal initialization exception
6. From the client (131) error, I thought it was something to do
with security, so I tried to start weblogic manually (connected as the
same user). Curiously enough, it does start (it threw some errors for
some EJBs, but I got the final message):
<Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
<ListenThread listening on port 7001>
<Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
<SSLListenThread listening on port 7002>
<Aug 17, 2002 12:30:40 PM IST> <Notice> <WebLogicServer> <Started
WebLogic Admin Server "myserver" for domain "mydomain" running in
Production Mode>
7. As you can see, the domain on the client (131) is "mydomain". But
shouldn't the Admin server be 192.168.1.135, since that is what I have
configured for the NodeManager? Or does the error occur because the
Admin server the Node Manager is configured to work with is 135, while
in the default scripts the admin server is the box itself? I'm confused :-)
Help, anyone?
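For what it's worth, point 7 matches how the stock 6.1 scripts behave: started without being told otherwise, they bring the box up as its own admin server. A managed server started by hand has to be pointed at the admin server explicitly. A rough command-line sketch (paths, server name, and credentials here are placeholders for this thread's setup, not a definitive recipe):

```shell
# Sketch only: run on the 131 box from the domain's config directory,
# after sourcing setEnv.sh; substitute your own admin credentials.
java \
  -Dweblogic.Name=wls131 \
  -Dweblogic.management.server=http://192.168.1.135:7001 \
  -Dweblogic.management.username=system \
  -Dweblogic.management.password=CHANGEME \
  weblogic.Server
```

If that starts wls131 as a managed server of 135 while Node Manager still fails, the problem is likely in what Node Manager passes along (domain name, user, or password) rather than in the server itself.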
-
Hi,
Is it possible to configure a UC560 and UC540s kept in different locations as Publisher and Subscribers, and thereby avail of the centralised deployment advantages?
I have one client with one UC560 at the Head Office and two UC540s at two different branch offices. I want to configure the extension mobility feature for all the users to access their profiles at any site. They have VPN over a normal internet connection, not an MPLS connection. Can anybody help me on this?
-
Sorry, I meant to send this to the ejb newsgroup.
dan
dan benanav wrote:
> Do any vendors provide for clustering with automatic failover of entity
> beans? I know that WLS does not. How about Gemstone? If not is there
> a reason why it is not possible?
>
> It seems to me that EJB servers should be capable of automatic failover
> of entity beans.
>
> dan
-
BM clustering with load balancing
I want to implement BM clustering with load balancing according to AppNote written by Steve Aitken from March 25, 2005.
It's clear that I need to use two private addresses (from the example, these are 10.10.10.10 and 10.10.10.11). However, I'm not sure what IP addresses 10.10.10.1 and 10.10.10.2 are used for.
The existing BM servers have two NICs: the first with a private address and the second with a public address connected directly to the Internet (they are in different subnets).
Sinisa
Originally Posted by phxazcraig
In article <[email protected]>, Tnelson 2000
wrote:
> I've set this up per the appnote and aren't able to get out through any
> of the ip addresses. I get a 504 Gateway Time out error. I also noticed
> that the cluster master ip address is different, 10.10.10.12, for
> example. Do you know what I need to look at to verify I have this
> configured correctly?
>
What do you mean "aren't able to get out through any of the ip
addresses"?
Do the addresses show up in any of the proxy nodes with display secondary
ipaddress? Does the proxy console option 17 show the server listening on
those addresses?
Is the gateway timeout error a BorderManager (or Windows) error? If
BMgr, then check that BMgr has a correct default gateway, DNS is working
(option 4 on proxy console screen) and try dropping filters for a test.
Craig Johnson
Novell Knowledge Partner
*** For a current patch list, tips, handy files and books on
BorderManager, go to Craig Johnson Consulting - BorderManager, NetWare, and More ***
Got it working. I noticed that my DNS setting on BM2 didn't coincide with the settings on BM1, so I made them the same and reinitialized the system on both servers. Of course, when I did that, it added the secondary IP addresses. So I'm really not sure what was stopping it from working before, unless I have something misconfigured that's preventing the secondary addresses from loading. Go figure. -
Clustering with HttpClusterServlet
I have created a domain on my machine in which I have configured an admin server and two more instances that are part of a cluster. When I connect to either of the cluster instances directly after deploying an application on them, it runs fine. However, when I try to access the HttpClusterServlet deployed on the admin server, I get "Error 404--Not Found".
From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
10.4.5 404 Not Found
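A 404 from the proxy usually means nothing is mapped to the path being requested. As a hedged sketch (the servlet class and parameter names changed between WebLogic releases, so verify both against your release's documentation; managed1/managed2 and the ports are placeholders), the proxy web application's web.xml registers HttpClusterServlet as the default servlet roughly like this:

```xml
<!-- Sketch only: verify servlet-class and param-name for your WLS release;
     managed1/managed2 are placeholder host names. -->
<servlet>
  <servlet-name>HttpClusterServlet</servlet-name>
  <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
  <init-param>
    <param-name>WebLogicCluster</param-name>
    <param-value>managed1:7001|managed2:7001</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>HttpClusterServlet</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>
```

Also check that the web app containing the servlet is the proxy server's default web application; otherwise requests to "/" never reach it, and the 404 comes from the container rather than the cluster.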
-
Multiple Clusters with same computers
Is there a way to set up multiple clusters with the same computers.
For example
Cluster A has computers 1, 2
Cluster B has computers 1, 2, 3, 4, 5
The reason behind this is that computers 1 and 2 are always available, but 3, 4, 5 are used during the day. I'd like to have a day cluster and a night cluster that I can just select as needed.
Hi Jake, I was playing with this type of thing when FCS1 first came out, using some old PowerBooks.
Yes, you can kind of do this using UNMANAGED SERVICES, but I will stand corrected for Compressor3.app.
In FCS2 / Compressor 3, if you use *MANAGED SERVICES* for the service nodes made available through a defined specific cluster (you would have used Apple Qmaster utility.app), then I believe those service nodes are dedicated to that cluster.
Sure, you can make "INSTANCES" of Compressor and dedicate the "renderer" instances to a particular cluster. As an example, say:
HOST1 = MACPRO8CORE with 4 instances of Compressor (C1, C2, C3, C4 via 4 virtual clusters) and 8 instances of renderer (R1, R2, ... R7, R8).
HOST2 = MACBOOKPRO with 2 instances of Compressor (C1, C2 with 2 virtual clusters) and 2 instances of renderer (R1, R2).
HOST3 = PowerBookG4 with 1 instance of Compressor (C1) and 1 instance of renderer (R1).
Only as an example:
ClusterA: Host1[C1,C2,C3, R1,R2,R4,R5], Host2[C2]
ClusterB: Host1[C4, R3,R6,R7], Host2[C2,R1,R2], Host3[C1,R1]
In fact, I just tried it...
However, as I thought, for managed services be assured that you cannot SHARE a service node between two or more clusters. I will stand corrected.
Messy, but it seems to work, though useless with the PowerBook, you'd agree.
However, depending on your workflow and commercial needs (say, business priorities for a specific client), for best results use *UNMANAGED SERVICES*...
Simply hook everything up over GbE, with all the usual tweaks such as "NEVER COPY SOURCE" and MOUNTING all source targets and Compressor work files over HFS on the GbE subnet (and many other tweaks), and treat the setup as one huge bucket.
Use priority on the batch submission. It's not too smart, but at least you have some control over the queues.
The resource manager in Qmaster is not so smart: despite a service node going idle, it does not seem to redirect work from one node to another for load balancing.
I have only tried this with SEGMENTED TRANSCODING.
Rendering (Shake) works great and seems simpler. My time goes into multipass segmented transcoding, where I want H.264 from DVCPROHD and don't want to wait all day for it, especially when I have tweaked the timing in Compressor a bit.
Try it out.
BTW, as many attest on this forum, Qmaster/Compressor can be a bugger to fix if it plays up, and it has for me as well.
Post your results.
HTH.
w