Networking: problems serving multiple clients
hi all
I'm writing a simple client-server system, with a multithreaded server, in order to serve multiple clients.
The clients' requests to connect to the server arrive at a port (e.g. 1025), and then the server, through a method, returns another port number to the client, and the communication between them continues through the new port.
It all works fine, but when I tried, with 2 computers, to start two clients at the "same time" (with a gap of a few milliseconds), my system "crashes".
I think the problem is due to the second request arriving while the server is communicating the new port to the first client.
Is there a way to "queue" the requests arriving at port 1025 of my server?
If I wasn't clear I can post some code.
Thanks in advance
sandro
Yes, the code I posted does nothing more than listen for incoming connections and create a new Thread which gets the Socket created by the accept to play with. This will happen for any incoming connection on the right port and will always be handled the same way.
As you'll see in the code I posted, there is some time between ServerSocket.accept returning a Socket and ServerSocket.accept being called again. This time shouldn't be too long, to be sure the server socket is listening for incoming connections when they arrive, so don't do too much inside the loop. If your system has to handle a lot of connections simultaneously you might have to optimise this by doing things like having a few ClientThreads created already, to save the time of creating a new Thread. This becomes more important if your ClientThread is complex and slow to create. But when handling fewer than, say, 25 clients you should be fine with this.
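A minimal sketch of that pattern (class, method, and message names are illustrative, not from the original post). Note that the backlog argument to ServerSocket is what queues connection requests that arrive while the server is between accept() calls, which addresses the original question:

```java
import java.io.*;
import java.net.*;

public class MultiThreadedServer {
    private final ServerSocket serverSocket;

    public MultiThreadedServer(int port) throws IOException {
        // Backlog of 50: the OS queues up to 50 pending connections
        // that arrive while we are between accept() calls.
        serverSocket = new ServerSocket(port, 50);
    }

    public int getPort() { return serverSocket.getLocalPort(); }

    // Accept loop: accept() blocks; each accepted Socket is handed
    // to a new thread so the loop returns to accept() quickly.
    public void serve() throws IOException {
        while (!serverSocket.isClosed()) {
            Socket client = serverSocket.accept();
            new Thread(() -> handle(client)).start();
        }
    }

    private void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line = in.readLine();   // read one request line
            out.println("echo:" + line);   // reply to this client only
        } catch (IOException ignored) {
        }
    }

    public static void main(String[] args) throws IOException {
        new MultiThreadedServer(1025).serve();
    }
}
```

With this shape, two clients connecting milliseconds apart are both served: the second sits in the listen backlog until accept() runs again.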
Similar Messages
-
Problem with multiple client numbers from a view
Hi Gurus,
I have a problem with a view.
I created the view with a UNION ALL statement:
=====================================
Create view vw_benifits
as
SELECT
Client_num, -- can have multiple values like 200,201,250
PERNR,
OBJPS,
ENDDA,
BEGDA,
AEDTM,
UNAME,
COB_MNTH_AMT
FROM
STG_SAP_PA9211_TB
UNION ALL
SELECT
null, -- no client number for legacy data
PERNR,
OBJPS,
ENDDA,
BEGDA,
AEDTM,
UNAME,
COB_MNTH_AMT
from
LEG_STG_SAP_PA9211_TB;
==============================
The second table (LEG_STG_SAP_PA9211_TB) contains legacy data. The first table contains data for multiple clients (i.e., client_num can be 201, 202, 250, and so on).
Now if the users query the view they will only get that client's data.
E.g. select * from vw_benifits where client_num=250 returns only client 250's data. But I want to include the legacy data as well.
I don't want to propose
select * from vw_benifits where client_num in (250,NULL), since the users will be confused (and a NULL in an IN list never matches anyway; it would have to be client_num=250 OR client_num IS NULL).
Is there any other way to do this? My requirement is:
If they query
select * from vw_benifits where client_num=250, the data should include all the records satisfying client_num=250 plus the records from the legacy data. The view needs to be created like that.
Appreciate your help
Deepak
Hi, thanks for the suggestion.
But I am not sure this will work for me; my users may not be able to use it since they don't know Oracle.
I want to hide those details from them.
They may just issue a statement like this
select * from vw_benifits where client_num =250
Or
select * from vw_benifits where client_num =400 . But both times I need to show them the data from the legacy table.
Deepak -
Problem using multiple Client Certificates
Hi folks, I had (mistakenly) posted an earlier version of this question to the crypto forum.
My problem is that I have multiple client certs in my keystore, but only one is being used as the selected certificate for client authentication for all connections. So, one connection works fine, and the rest fail because the server doesn't like the client cert being presented.
I have been trying to get the JSSE to select the proper client certificate by making use of the chooseClientAlias method. (init the SSL context with a custom key manager that extends X509ExtendedKeyManager and implements the inherited abstract method X509KeyManager.chooseClientAlias(String[], Principal[], Socket))
But still no luck: the JSSE is not calling into my version of chooseClientAlias, and it just keeps presenting the same client certificate.
No clue why, any thoughts on how to get the JSSE to call my version of chooseClientAlias?
Thanks!
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(createCustomKeyManagers(Keystore, KeystorePassword),
createCustomTrustManagers(Keystore, KeystorePassword),null);
SSLSocketFactory factory = sslContext.getSocketFactory();
URL url = new URL(urlString);
URLConnection conn = url.openConnection();
urlConn = (HttpsURLConnection) conn;
urlConn.setSSLSocketFactory(factory);
BufferedReader rd = new BufferedReader(new InputStreamReader(urlConn.getInputStream()));
String line;
while ((line = rd.readLine()) != null) {
System.out.println(line); }
public class CustomKeyManager extends X509ExtendedKeyManager {
    private X509ExtendedKeyManager defaultKeyManager;
    private Properties serverMap;

    public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) {
        SocketAddress socketAddress = socket.getRemoteSocketAddress();
        String hostName = ((InetSocketAddress) socketAddress).getHostName().toUpperCase();
        String alias = null;
        if (serverMap.containsKey(hostName)) {
            alias = serverMap.getProperty(hostName);  // hostName is already upper-cased
            if (alias != null && alias.length() == 0) {
                alias = null;
            }
        } else {
            alias = defaultKeyManager.chooseClientAlias(keyType, issuers, socket);
        }
        return alias;
    }
}
Topic was correctly answered by ejp in the crypto forum,
namely: javax.net.ssl.X509KeyManager.chooseClientAlias() is called if there was an incoming CertificateRequest, according to the JSSE source code. If there's an SSLEngine it calls javax.net.ssl.X509ExtendedKeyManager.chooseEngineClientAlias() instead.
You can create your own SSLContext with your own X509KeyManager, get its socketFactory, and set that as the socket factory for HttpsURLConnection.
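Building on ejp's answer, here is a sketch of a key manager that covers both entry points, so the per-host alias is chosen whether JSSE goes through a plain Socket or an SSLEngine. The class name, the delegate/Properties layout, and the upper-cased hostname keys are illustrative assumptions, not from the original post:

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.security.Principal;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import java.util.Properties;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.X509ExtendedKeyManager;

// Picks a client-certificate alias per target host. Both chooseClientAlias
// (plain sockets) and chooseEngineClientAlias (SSLEngine paths) route
// through the same lookup, which is the distinction ejp pointed out.
public class PerHostKeyManager extends X509ExtendedKeyManager {
    private final X509ExtendedKeyManager delegate; // default JSSE key manager
    private final Properties hostToAlias;          // HOSTNAME -> keystore alias

    public PerHostKeyManager(X509ExtendedKeyManager delegate, Properties hostToAlias) {
        this.delegate = delegate;
        this.hostToAlias = hostToAlias;
    }

    // Shared lookup; package-visible so it can be exercised directly.
    String aliasFor(String hostName) {
        String alias = hostToAlias.getProperty(hostName.toUpperCase());
        return (alias == null || alias.isEmpty()) ? null : alias;
    }

    @Override
    public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) {
        String host = ((InetSocketAddress) socket.getRemoteSocketAddress()).getHostName();
        String alias = aliasFor(host);
        return alias != null ? alias : delegate.chooseClientAlias(keyType, issuers, socket);
    }

    @Override
    public String chooseEngineClientAlias(String[] keyType, Principal[] issuers, SSLEngine engine) {
        String alias = aliasFor(engine.getPeerHost());
        return alias != null ? alias : delegate.chooseEngineClientAlias(keyType, issuers, engine);
    }

    // Remaining methods simply delegate to the default key manager.
    @Override public String[] getClientAliases(String keyType, Principal[] issuers) { return delegate.getClientAliases(keyType, issuers); }
    @Override public String chooseServerAlias(String keyType, Principal[] issuers, Socket socket) { return delegate.chooseServerAlias(keyType, issuers, socket); }
    @Override public String[] getServerAliases(String keyType, Principal[] issuers) { return delegate.getServerAliases(keyType, issuers); }
    @Override public X509Certificate[] getCertificateChain(String alias) { return delegate.getCertificateChain(alias); }
    @Override public PrivateKey getPrivateKey(String alias) { return delegate.getPrivateKey(alias); }
}
```

Wire an instance of this into SSLContext.init exactly as in the snippet above, in place of the default key managers.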
Edited by: wick123 on Mar 5, 2008 10:26 AM -
Bizarre networking problem with multiple 8.1 GA systems -- RT and Pro
Hi everyone,
I'm having a particularly bizarre problem with the networking on two Windows 8.1 machines, one running Pro and one running RT. Both demonstrate the same behavior.
In both cases, networking (wireless most of the time, although I'm not sure it matters) works fine for some period of time. Eventually it just stops working. Resetting the adapter, or flipping in and out of Airplane mode will bring it back again. The issues
happen on at least 3 wireless access points. In most cases, the networking actually appears to be up -- the limited access warning icon isn't up, although occasionally it does realize the networking isn't working.
Here's the bizarre part -- on both systems, local IP connectivity is fine. I can communicate perfectly well between devices on the same internal network, wired or wifi. During the time the networking isn't working on one or the other 8.1 system, other systems
on the network running Windows 8 (and other systems) work fine. There's no correlation between one 8.1 system stopping working and the other.
On *both* systems, when the networking appears to be failing, a traceroute actually gets 4-5 (or more) hops before going out to lunch. Other systems on the same internal network will cruise along to the same point on the route that is failing on the 8.1
machines and keep right on happily continuing the trace.
So that's the bizarre thing -- something on two 8.1 machines is doing *something* to the packets that is causing them to not route properly to a target system 10-15 hops away, the same routers handling the same kind of traffic from other systems behind my
firewall work fine, and whatever is happening to the packets that is causing issues resolves itself for some period of time if I reset the adapter. (The two systems are a MacBook Air and a Surface RT, so there's no drivers in common between them, either.)
I'm completely at a loss. Anyone see anything like this before, or have any thoughts on what to check?
(And, please, if you're one of those Microsoft support forum people who skim questions for keywords and cut-n-paste back a generic reply without any actual understanding of the problem -- like so many replies on here -- please do not waste my time. I am
trying to find someone who knows Windows 8.1 networking, not someone who knows just enough English to use ctrl-c/ctrl-v.)
Hi George,
There is a known issue with Cisco devices that caused some Windows 8 clients to be disconnected from wifi. I understand your Windows 8 clients work fine while the issue occurs on the Windows 8.1 clients. However, I think the KB article is worth a shot:
http://support.microsoft.com/kb/2749073
Best regards,
Alex Du
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread -
Multiple network location servers possible?
As we all know the network location server is an important part of any Direct Access deployment to ensure that DA clients can know whether they are connected directly to the internal LAN or connecting from external via DA.
I have seen discussion about deploying the network location server (simple blank IIS/Apache web site) in an NLB configuration but is there any way to have multiple network location servers for high availability reasons? During the DA configuration
process you can only input a single DNS record for the NLS, so it does not appear possible. Has anyone found a way to do this?
Hi,
Yes, it's good practice to make the NLS highly available: a single FQDN backed by NLB or an HLB as the high-availability solution. The major problem arises when DirectAccess clients connected to the LAN cannot reach the Network Location Server: they consider themselves connected to the Internet rather than the LAN and try to activate DirectAccess. In that situation, if users can disable DirectAccess (so no force tunneling), they can work around the problem. Once the NLS is back online, the computer automatically changes the firewall profile back to domain.
BenoitS - Simple by Design http://danstoncloud.com/blogs/simplebydesign/default.aspx -
Hello All,
we have created shared folders on multiple client machines in a domain environment, on 2 different OSes (XP, Vista, etc.).
For some days we have been facing a problem: the shared folder is accessible by host name, but at the same time, from the same computer, accessing it by IP asks for credentials, and even after typing the correct credentials again and again we are unable to access it. If I re-share the folder then we can access it, but when the system is restarted the same problem occurs again.
I have checked IP, DNS, gateway and more; everything is fine.
Please advise.
Pankaj Kumar
Hi,
According to your description, my understanding is that the same shared folder can be accessed by name, but can't be accessed by IP address and asks for credentials.
Please try to enable the option below on the device which has the shared folder:
Besides, check the Advanced Sharing settings of the shared folder and confirm whether there are any limiting settings.
Best Regards,
Eve Wang
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected] -
Hi,
I have a question on using EJB / or RMI servers with CORBA clients using
RMI-IIOP transport, which in theory should work, but in practice has few
glitches.
Basically, I have implemented a very simple server, StockTrader, which looks up a symbol and returns a 'Stock' object. In the first example, I simplified the 'Stock' object to be a mere java.lang.String, so that lookup would simply return the 'symbol'.
Then I have implemented the above, as an RMI-IIOP server (case 1) and a
CORBA server (case 2) with respective clients, and the pair of
client-servers work fine as long as they are CORBA-to-CORBA and RMI-to-RMI.
But the problem arises when I tried using the RMI server (via IIOP) with the
CORBA client, when the client tries to narrow the object ref obtained from
the naming service into the CORBA idl defined type (StockTrader) it ends up
with a class cast exception.
This is what I did to achieve the above results:
[1] Define an RMI interface StockTrader.java (extending java.rmi.Remote)
with the method,
public String lookup( String symbol) throws RMIException;
[2] Implement the StockTrader interface (on a PortableRemoteObject derived
class, to make it IIOP compliant), and then the server to register the stock
trader with COS Naming service as follows:
String homeName =....
StockTraderImpl trader =new StockTraderImpl();
System.out.println("binding obj <" + homeName + ">...");
java.util.Hashtable ht =new java.util.Hashtable();
ht.put("java.naming.factory.initial", args[2]);
ht.put("java.naming.provider.url", args[3]);
Context ctx =new InitialContext(ht);
ctx.rebind(homeName, trader);
[3] Generate the RMI-IIOP skeletons for the Implementation class,
rmic -iiop stock.StockTraderImpl
[4] generate the IDL for the RMI interface,
rmic -idl stock.StockTraderImpl
[5] Generate IDL stubs for the CORBA client,
idlj -v -fclient -emitAll StockTraderImpl.idl
[6] Write the client to use the IDL-defined stock trader,
String serverName =args[0];
String symList =args[1];
StockClient client =new StockClient();
System.out.println("init orb...");
ORB orb =ORB.init(args, null);
System.out.println("resolve init name service...");
org.omg.CORBA.Object objRef
=orb.resolve_initial_references("NameService");
NamingContext naming =NamingContextHelper.narrow(objRef);
... define a naming component etc...
org.omg.CORBA.Object obj =naming.resolve(...);
System.out.println("narrow objRef: " + obj.getClass() + ": " + obj);
StockTrader trader =StockTraderHelper.narrow(obj);
[7] Compile all the classes using Java 1.2.2
[8] start tnameserv (naming service), then the server to register the RMI
server obj
[9] Run the CORBA client, passing it the COSNaming service ref name (with
which the server obj is registered)
The CORBA client successfully finds the server obj ref in the naming
service, the operation StockTraderHelper.narrow() fails in the segment
below, with a class cast exception:
org.omg.CORBA.Object obj =naming.resolve(...);
StockTrader trader =StockTraderHelper.narrow(obj);
The <obj> returned by the naming service turns out to be of the type:
class com.sun.rmi.iiop.CDRInputStream$1
The same type is returned when the stock trader object is registered in a CORBA server (as opposed to an RMI server), but in that case the narrow works correctly with no casting exceptions.
Any ideas / hints very welcome.
thanks in advance,
-hari
On the contrary... all that is being said is that we needed to provide clearer examples/documentation in the 5.1.0 release. There will be no difference between the product as found in the service pack and the product found in 5.1.1. That is, the only substantive difference will be that 5.1.1 will also include the examples.
"<=one way=>" wrote:
With reference to your and other messages, it appears that one should not
expect that WLS RMI-IIOP will work in a complex real-life system, at least
not now. In other words, support for real-life CORBA clients is not an
option in the current release of WLS.
TIA
"Eduardo Ceballos" <[email protected]> wrote in message
news:[email protected]...
We currently publish an IDL example, even though the IDL programming model in Java is completely non-functional, in anticipation of the support needs of users who need to use IDL to talk to the Weblogic server, generically. This example illustrates the simplest connectivity; it does not address how to integrate CORBA and EJB, a broad topic, fraught with peril, imo. I'll note in passing that, to my knowledge, none of the other vendors attempt this topic either, a point which is telling, if all the less happy to hear.
For the record then, what is missing from our distribution wrt RMI-IIOP are an RMI-IIOP example, an EJB-IIOP example, and an EJB-C++ example. In this you are correct; better examples are forthcoming.
Still, I would not call our RMI-IIOP implementation fragile. I would say that customers have an understandably hard time accepting that the IDL programming model is busted; busted in the sense that there are no C++ libraries to support the EJB model, and busted in the sense that there is simply no support in Java for an IDL interface to an EJB. Weblogic has nothing to do with it being busted, although we are trying to help our customers deal with it in productive ways.
For the moment, what there is is an RMI (over IIOP) programming model, an inherently Java-to-Java programming model, and true to that, we accept and dispatch IIOP requests into RMI server objects. The way I look at it is this: it's just a protocol, like HTTP, or JRMP; it's not IDL and it has practically nothing to do with CORBA.
ST wrote:
Eduardo,
Can you give us more details about the comment below:
"I fear that as soon as the call to narrow succeeds, the remaining application will fail to work correctly because it is too difficult to use an idl client in java to work."
It seems to me that Weblogic's RMI-IIOP is a very fragile implementation. We don't need a "HelloWorld" example, we need a concrete serious example (fully tested and seriously documented) that works, so that we can get a better idea on how to integrate CORBA and EJB.
Thanks,
Said
"Eduardo Ceballos" <[email protected]> wrote in message
news:[email protected]...
Please post request to the news group...
As I said, you must separate the idl related classes (class files and java files) from the rmi classes... in the rmic step, you must set a new target (as you did), emit the java files into that directory (it's not clear you did this), then remove all the rmi class files from the class path... if you need to compile more classes at that point, copy the java files to the idl directory if you must, but you can not share the types in any way.
I fear that as soon as the call to narrow succeeds, the remaining application will fail to work correctly because it is too difficult to use an idl client in java to work.
Harindra Rajapakshe wrote:
Hi Eduardo,
Thanks for the help. That is the way I compiled my CORBA client, by separating the IDL-generated stubs from the RMI ones, but still I get a CORBA.BAD_PARAM upon narrowing the client proxy to the interface type.
Here's what I did;
+ Define the RMI interfaces, in this case a StockTrader interface.
+ Implement the RMI interface by extending javax.rmi.PortableRemoteObject, making it IIOP compliant
+ Implement an RMI server, and compile using JDK 1.2.2
+ Use the RMI implementation to generate CORBA idl, using the RMI-IIOP plugin utility rmic;
rmic -idl -noValueMethods -always -d idl stock.StockTraderImpl
+ Generate Java mappings to the IDL generated above, using the RMI-IIOP plugin util,
idlj -v -fclient -emitAll -tf src stocks\StockTrader.idl
This creates source for the package stock and also the org.omg.CORBA.* package, presumably IIOP type marshalling
+ Compile all classes generated above using JDK 1.2.2
+ Implement the client (CORBA) using the classes generated above, NOT the RMI proxies.
+ Start the RMI server, with the stockTrader server obj
+ Start tnameserv
+ Start the CORBA client
Then the client errors when trying to narrow the obj ref from the naming service into the CORBA IDL defined interface using,
org.omg.CORBA.Object obj =naming.resolve(nn);
StockTrader trader =StockTraderHelper.narrow(obj); // THIS ERRORS..!!!
throwing a CORBA.BAD_PARAM exception.
any ideas..?
Thanks in advance,
-hari
----- Original Message -----
From: Eduardo Ceballos <[email protected]>
Newsgroups: weblogic.developer.interest.rmi-iiop
To: Hari Rajapakshe <[email protected]>
Sent: Wednesday, July 26, 2000 4:38 AM
Subject: Re: problem using CORBA clients with RMI/EJB servers..!!!???
Please see the post on June 26, re Errors compiling... somewhere in there, I suspect, you are referring to the rmi class file when you are obliged to completely segregate these from the idl class files.
-
Accessing the same stateful session bean from multiple clients in a clustered environment
I am trying to access the same stateful session bean from multiple
clients. I also want this bean to have failover support so we want to
deploy it in a cluster. The following description is how we have tried
to solve this problem, but it does not seem to be working. Any
insight would be greatly appreciated!
I have set up a cluster of three servers. I deployed a stateful
session bean with in memory replication across the cluster. A client
obtains a reference to an instance of one of these beans to handle a
request. Subsequent requests will have to use the same bean and could
come from various clients. So after using the bean the first client
stores the handle to the bean (actually the replica aware stub) to be
used by other clients to obtain the bean. When another client retrieves the handle, gets the replica-aware stub, and makes a call to the bean, the request seems to go unpredictably to any of the three servers rather than to the primary server hosting that bean. If the call goes to the primary server, everything seems to work fine: the session data is available and it gets backed up on the secondary server. If it happens to go to the secondary server, a bean that has the correct session data services the request but gives the error
<Failed to update the secondary copy of a stateful session bean from
home:ejb20-statefulSession-TraderHome>. Then any subsequent requests
to the primary server will not reflect changes made on the secondary
and vice versa. If the request happens to go to the third server that
is not hosting an instance of that bean then the client receives an
error that the bean was not available. From my understanding I thought
the replica aware stub would know which server is the primary host for
that bean and send the request there.
Thanks in advance,
Justin
If 'allow-concurrent-call' does exactly what you need, then you don't have a problem,
do you?
Except of course if you switch ejb containers. Oh well.
Mike
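For reference, the setting under discussion lives in weblogic-ejb-jar.xml. A sketch of roughly where it sits is below; the bean name is illustrative, and the element names (taken from the WebLogic deployment-descriptor DTD of that era) should be verified against your release:

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>TraderBean</ejb-name>
    <stateful-session-descriptor>
      <!-- permits concurrent calls to one stateful session bean instance -->
      <allow-concurrent-calls>true</allow-concurrent-calls>
      <stateful-session-clustering>
        <!-- in-memory replication across the cluster, as in this thread -->
        <replication-type>InMemory</replication-type>
      </stateful-session-clustering>
    </stateful-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

As Mike notes, this is container-specific: the EJB spec itself does not allow concurrent calls to a stateful session bean, so relying on this descriptor ties the design to WebLogic.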
"FBenvadi" <[email protected]> wrote:
>I've got the same problem.
>I understand from you that concurrent access to a stateful session bean is not allowed, but there is a token in weblogic-ejb-jar.xml called 'allow-concurrent-call' that does exactly what I need.
>What do you mean by 'you'll get a surprise when you go to production'?
>I need to understand because I can still change the design.
>Thanks, Francesco
>[email protected]
>
>"Mike Reiche" <[email protected]> wrote in message
>news:[email protected]...
>>
>> Get the fix immediately from BEA and test it. It would be a shame to wait until December only to get a fix - that doesn't work.
>>
>> As for stateful session bean use - just remember that concurrent access to a stateful session bean is not allowed. Things will work fine until you go to production and encounter some real load - then you will get a surprise.
>>
>> Mike
>>
>> [email protected] (Justin Meyer) wrote:
>> >I just heard back from WebLogic Tech Support and they have confirmed that this is a bug. Here is their reply:
>> >
>> >There is some problem in failover of stateful session beans when it's run from a java client. However, it is fixed now.
>> >
>> >The fix will be in SP2 which will be out by December.
>> >
>> >Mike,
>> >Thanks for your reply. I do in fact believe we are correctly using a stateful session bean; however it may have been misleading from my description of the problem. We are not accessing the bean concurrently from 2 different clients. The second client will only come into play if the first client fails. In this case we want to be able to reacquire the handle to our stateful session bean and call it from the secondary client.
>> >
>> >Justin
>> >
>> >"Mike Reiche" <[email protected]> wrote in message news:<[email protected]>...
>> >> You should be using an entity bean, not a stateful session bean for this application.
>> >>
>> >> A stateful session bean is intended to keep state (stateful) for the duration of a client's session (session).
>> >>
>> >> It is not meant to be shared by different clients - in fact, if you attempt to access the same stateful session bean concurrently - it will throw an exception.
>> >>
>> >> We did your little trick (storing/retrieving handle) with a stateful session bean on WLS 5.1 - and it did work properly - not as you describe. Our sfsb's were not replicated as yours are.
>> >>
>> >> Mike
>> >>
>> >> [email protected] (Justin Meyer) wrote:
>> >> >I am trying to access the same stateful session bean from multiple
>> >> >clients. I also want this bean to have failover support so we want
>> >to
>> >> >deploy it in a cluster. The following description is how we have
>tried
>> >> >to solve this problem, but it does not seem to be working. Any
>> >> >insight would be greatly appreciated!
>> >> >
>> >> >I have set up a cluster of three servers. I deployed a stateful
>> >> >session bean with in memory replication across the cluster. A client
>> >> >obtains a reference to an instance of one of these beans to handle
>> >a
>> >> >request. Subsequent requests will have to use the same bean and
>could
>> >> >come from various clients. So after using the bean the first client
>> >> >stores the handle to the bean (actually the replica aware stub)
>to
>> >be
>> >> >used by other clients to be able to obtain the bean. When another
>> >> >client retrieves the handle gets the replica aware stub and makes
>> >a
>> >> >call to the bean the request seems to unpredictably go to any of
>the
>> >> >three servers rather than the primary server hosting that bean.
>If
>> >the
>> >> >call goes to the primary server everything seems to work fine the
>> >> >session data is available and it gets backed up on the secondary
>> >> >server. If it happens to go to the secondary server a bean that
>has
>> >> >the correct session data services the request but gives the error
>> >> ><Failed to update the secondary copy of a stateful session bean
>from
>> >> >home:ejb20-statefulSession-TraderHome>. Then any subsequent requests
>> >> >to the primary server will not reflect changes made on the secondary
>> >> >and vice versa. If the request happens to go to the third server
>that
>> >> >is not hosting an instance of that bean then the client receives
>an
>> >> >error that the bean was not available. From my understanding I
>thought
>> >> >the replica aware stub would know which server is the primary host
>> >for
>> >> >that bean and send the request there.
>> >> >
>> >> >Thanks in advance,
>> >> >Justin
>>
>
>
-
WDS 2012 R2 - Cannot PXE multiple clients at the same time
Hello All,
This is my first post on here so I apologize if this is the wrong place. I work for a school district and we are implementing WDS 2012 R2. We've been extremely satisfied with the speeds and ease of use through unattend files. However, for
the past month I've been looking for a possible answer to a problem that has plagued us from day one of implementation.
So here's the problem:
I have a stand alone WDS server which is not a domain controller and is not our DHCP server. I have IP helpers and broadcast forwarders setup on the network. As well as option 66 and 67 in DHCP. So far so good right!
Well that's partially right. When we boot one client at a time to the WDS server. Everything works as intended. We can TFTP the necessary files from the WDS server. Everything boots up and we're off and running.
However, if we boot up two or more clients at the same time. The WDS server never responds to the traffic. The clients get their DHCP information. They start the referral and download from the WDS server, but get no response. I'm
really hoping that someone on here would have some insight of something I can try. I've about exhausted my list of peers and contacts. They're all stumped as well and were smart enough to stay with 2008 WDS.
I would prefer to stick with 2012 R2 since it's setup and working for the most part. With only this one hiccup.
Thanks in advance for any guidance!
Hello Daniel,
I appreciate the reply and apologize for taking so long to get back to this. Things have been a little hectic over here.
I have tried everything on this forum and I am still unsuccessful in PXE booting multiple clients at the same time.
Multicast is enabled on the server, and it works for the clients. However, as stated in the original post, I cannot boot multiple machines at the same time. I can start them from the image selection screen around the same time, though, so
that appears to be working fine. -
How to control one server with multiple clients via TCP/IP
I want to control a single server with multiple clients. Only one client would be active at a time, so there would be no conflict. I want to use TCP/IP. So far, I have programmed a cluster that passes data back to the server with no problems. The challenge comes in when a second client is added to the mix. I haven't been able to figure out how to turn each client on, send the appropriate data, and then turn it off so it doesn't keep sending the same data to the server.
Here are the things that I have considered and did some preliminary testing on, but don't really know how to implement:
1. Send a numeric on the front of the cluster packet that tells the server that data is on the way.
2. Send a boolean on the front of the cluster packet to somehow turn the server TCP/IP on.
The problem I have found is that LabVIEW TCP/IP doesn't like to be turned on and off. If it doesn't get the data it expects, it goes into a reset mode and that kills the response time.
Any help?
You should consider implementing a set of simple one-byte commands that can be sent back and forth between the server and the clients. You can base all of these ideas off the example in the Example Finder under Networking >> TCP and UDP called Multiple Connections - Server.
You will have two loops in the server VI: one to wait for new connections, and one to send and receive data from the existing connections. For instance, after one of the clients connects, it can request control of the server to send data to it by sending the character "R" for request. Every time the send/receive loop of the Server executes, the first thing it can do is to check all the existing connections to see if any of the clients have sent a control request ("R"). If so, it will create a buffer (array) of control requests. This could be in the form of Connection IDs or indexes in the array for a particular Connection ID. Your choice.
After the server receives a request for control, if it is not already under control by another client, it can send a response to the first client on the control-request list. For instance, the server could send the first client an "S" command for send. Note that after the clients send their control request, they should execute a TCP Read and wait indefinitely for the server to respond with the one-byte "S" command. Then, once the client in control is finished sending data to the server, it can send the character "X" telling the server to release it from control.
The example I mentioned above already does a similar thing. Note how when a client wants to disconnect, they send the letter "Q". You can see this in the Multiple Connections - Client VI. The Server then checks each individual connection to see if it's received this one-byte command, and if it has, it closes the connection to the client. This is what you would want to implement, but instead of having just one command, you'll have to distinguish between a few and build up a buffer of control requests.
Finally, if a client does decide to disconnect in your application, they could send the command "Q" just like the example above. At this point, close the connection and remove that Connection ID from the array of connections. You will also have to handle the case that this client was in the request control waiting line when it disconnected, in which case you need to delete it from that array as well.
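Since LabVIEW block diagrams can't be pasted here, the bookkeeping above can be sketched in a text language instead. This is a rough Java model (the class and method names are my own invention, not NI API calls) of the control handshake: queue "R" requests, grant control to one client at a time, release on "X", and clean up on "Q". In a real server, the point where control is granted is where you would TCP-write the "S" byte to the newly promoted client.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the one-byte command arbitration described above.
public class ControlArbiter {
    private final Deque<String> waiting = new ArrayDeque<>(); // clients that sent "R"
    private String inControl = null;                          // client currently granted control

    public void onCommand( String clientId, char cmd ) {
        switch ( cmd ) {
            case 'R':                              // request control: join the queue
                waiting.add( clientId );
                break;
            case 'X':                              // finished sending: release control
                if ( clientId.equals( inControl ) )
                    inControl = null;
                break;
            case 'Q':                              // disconnect: leave the queue and release
                waiting.remove( clientId );
                if ( clientId.equals( inControl ) )
                    inControl = null;
                break;
        }
        grantNext();
    }

    // If nobody holds control, promote the next waiting client.
    // (This is where the real server would send that client the "S" byte.)
    private void grantNext() {
        if ( inControl == null && !waiting.isEmpty() )
            inControl = waiting.poll();
    }

    public String controller() { return inControl; }
}
```

The same state machine works whether the transport is LabVIEW TCP VIs or anything else; the key point is that the server connection stays open the whole time and only these tiny commands flow over it, so nothing has to be "turned on and off".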
This will definitely work for you, but it will take some work. Best of luck!
Jarrod S.
National Instruments -
ISE Could not locate Network Device or AAA Client
When authenticating using 802.1x and MAB, I receive an authentication failure with the error 11007 (Could not locate Network Device or AAA Client). The root cause that ISE spits back at me is "Could not find the network device or the AAA Client while accessing NAS by IP during authentication." I did pretty much everything by the book, except instead of using a loopback interface I used a vlan with a defined ip address. Could this be causing the problem?
Here is the config of the port that I'm testing on:
interface GigabitEthernet1/0/9
switchport access vlan 9
switchport mode access
switchport voice vlan 8
ip access-group ACL-ALLOW in
srr-queue bandwidth share 1 30 35 5
queue-set 2
priority-queue out
authentication event fail action next-method
authentication event server dead action reinitialize vlan 4
authentication event server dead action authorize voice
authentication host-mode multi-auth
authentication open
authentication order dot1x mab
authentication priority dot1x mab
authentication port-control auto
authentication violation restrict
mab
mls qos trust device cisco-phone
mls qos trust cos
dot1x pae authenticator
dot1x timeout tx-period 10
auto qos voip cisco-phone
spanning-tree portfast
service-policy input AUTOQOS-SRND4-CISCOPHONE-POLICY
end
I can ping both the vlan and the endpoint from the ISE. As far as allowing ISE to speak snmp and RADIUS to the NAD, I have enabled it on the NAD config inside the ISE. I have also double-checked the snmp and radius shared passwords.
I have gotten MAB authentication to work but I am still getting the same error for dot1x authentication. Here are some of the configs on the switch.
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
aaa accounting dot1x default start-stop group radius
aaa server radius dynamic-author
aaa session-id common
ip radius source-interface TenGigabitEthernet1/0/1
radius-server attribute 6 on-for-login-auth
radius-server attribute 6 support-multiple
radius-server attribute 8 include-in-access-req
radius-server attribute 25 access-request include
radius-server dead-criteria time 5 tries 3
radius-server host 10.10.10.47 auth-port 1812 acct-port 1813 test username test key 7 097940581F5412162B464D
radius-server vsa send accounting
radius-server vsa send authentication
dot1x system-auth-control
authentication order dot1x mab
authentication priority dot1x mab
dot1x pae authenticator
dot1x timeout tx-period 10 -
Default Domain Policy Not Applying Settings to Servers or Clients
I have 2008 R2 DC's with a functioning level of 2003. Our domain servers are a mix of 2003, 2008, 2008 R2, and 2012 and our clients are a mix of Windows 7 Pro and Windows 8.1 Pro.
I recently made a change to the Default Domain Policy located at Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options
For the Security Policy setting called: Network security: Configure encryption types allowed for Kerberos
The change was to enable DES because of a specific need I have with an application I work with, but enabling DES and leaving the other options such as AES unselected caused other applications to stop working right. I decided to revert the change
back to "Not Defined", but that change was not reflected on the servers even after running the gpupdate /force command.
In order to keep the application working that broke, we enabled all of the encryption levels such as DES, AES, etc. on the server that's running the application via its Local Security Policy as a temporary fix.
Now, I want to make sure all servers receive the settings from the Default Domain Policy and have their Local Security Policies reflect the "Not Defined" setting, but it's not applying. It seems like the settings worked when I first applied them, but
when I try to remove them it does not work.
If I change the setting directly on the Local Security Policy on the server or clients it shows "No minimum" instead of "Not Defined" which I've heard can be fixed by identifying the registry entry for that setting and deleting it...so
help with the location and how to identify that key would also be helpful.
My goal is not to manually have to change servers and clients to revert back to their default settings...I want the Domain policy to apply and override the servers and client's Local Security Policy.
Any help with this would be greatly appreciated and thank you in advance.
refer:
http://technet.microsoft.com/en-us/library/jj852180(v=ws.10).aspx
We needed to implement a similar scenario a few years ago (when we introduced Windows 7 into our estate).
We had an SAP/NetWeaver implementation which always worked on WinXP, but failed on Win7.
We had to enable the DES ciphers, since those were disabled by default in Win7. We discovered that we also needed to enable all the other ciphers (those which are enabled by default[not configured]).
i.e., when we changed the setting from "Not Configured", enabled DES, and left the RC4/AES ciphers untouched, the RC4/AES ciphers ended up with a status of disabled.
So we had to set the DES ciphers to Enabled, and also set the RC4/AES ciphers to Enabled - this gave us the "resultant" enablement of the default stuff plus the needed addition of DES.
When you set a GP setting "back to Not Configured", depending upon the setting *AND* the individual Windows feature itself - one of two things will happen:
a) the feature will "revert" to default behaviour
b) the feature will retain the current configured behaviour but becomes un-managed
In classic Group Policy terms, condition (b) above is often referred to as "tattooing", i.e., the last GP setting remains in effect even though GPMC/RSOP/etc does not reveal that to be the case.
(This is also a really good example of why not to do this sort of stuff in the DDP. It could have borked your whole domain :)
What I'd suggest, is that you re-enable your ciphers for KRB settings again - this time, enable all the ciphers that would normally be "default", let that replicate around, and allow time for domain members to action it.
Then, set the setting back to Not Configured. This way, the "last" settings issued by GP will be those you want to remain as the "legacy".
Note: the GP settings reference spreadsheet has this to say:
Network security: Configure encryption types allowed for Kerberos
This policy setting allows you to set the encryption types that Kerberos is allowed to use.
If not selected, the encryption type will not be allowed. This setting may affect compatibility with client computers or services and applications. Multiple selections are permitted.
This policy is supported on at least Windows 7 or Windows Server 2008 R2.
Don
(Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable.
This helps the community, keeps the forums tidy, and recognises useful contributions. Thanks!) -
Hello,
This server code accepts only one client connection at a time. However, I have several lines that are specifically for the server to accept more than one client. What do I need to do in addition for the server to recognize that it can accept more than one client at a time?
import java.io.EOFException;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;
import javax.swing.SwingUtilities;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutorService;
import javax.swing.JFrame;
public class ServerTest {
   public static void main( String args[] ) {
      Server application = new Server();
      application.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
      application.runServer();
   }
}

class Server extends JFrame {
   private JTextField enterField;
   private JTextArea displayArea;
   private ObjectOutputStream output;
   private ObjectInputStream input;
   private ServerSocket server;
   private Socket connection;
   private int counter = 1;
   private ExecutorService serverExecutor;
   private MultiServer clients[];

   public Server() {
      super( "Server" );
      enterField = new JTextField();
      enterField.setEditable( false );
      enterField.addActionListener(
         new ActionListener() {
            public void actionPerformed( ActionEvent event ) {
               sendData( event.getActionCommand() );
               enterField.setText( "" );
            }
         }
      );
      add( enterField, BorderLayout.NORTH );
      displayArea = new JTextArea();
      add( new JScrollPane( displayArea ), BorderLayout.CENTER );
      setSize( 300, 150 );
      setVisible( true );
   }

   public void runServer() {
      serverExecutor = Executors.newCachedThreadPool();
      try {
         server = new ServerSocket( 12345, 100 );
         while ( true ) {
            try {
               waitForConnection();
               getStreams();
               processConnection(); // blocks here, so only one client is served at a time
            }
            catch ( EOFException eofException ) {
               displayMessage( "\nServer terminated connection" );
            }
            finally {
               closeConnection();
               counter++;
            }
         }
      }
      catch ( IOException ioException ) {
         ioException.printStackTrace();
      }
   }

   private void waitForConnection() throws IOException {
      displayMessage( "Waiting for connection\n" );
      connection = server.accept();
      serverExecutor.execute( new MultiServer( server, connection ) );
      displayMessage( "Connection " + counter + " received from: " + connection.getInetAddress().getHostName() );
   }

   private void getStreams() throws IOException {
      output = new ObjectOutputStream( connection.getOutputStream() );
      output.flush();
      input = new ObjectInputStream( connection.getInputStream() );
      displayMessage( "\nGot I/O streams\n" );
   }

   private void processConnection() throws IOException {
      String message = "Connection successful";
      sendData( message );
      setTextFieldEditable( true );
      serverExecutor.execute( new MultiServer( server, connection ) );
      do {
         try {
            message = ( String ) input.readObject();
            displayMessage( "\n" + message );
         }
         catch ( ClassNotFoundException classNotFoundException ) {
            displayMessage( "\nUnknown object type received" );
         }
      } while ( !message.equals( "CLIENT>>> TERMINATE" ) );
   }

   private void closeConnection() {
      displayMessage( "\nTerminating connection\n" );
      setTextFieldEditable( false );
      try {
         output.close();
         input.close();
         connection.close();
      }
      catch ( IOException ioException ) {
         ioException.printStackTrace();
      }
   }

   private void sendData( String message ) {
      try {
         output.writeObject( "SERVER>>> " + message );
         output.flush();
         displayMessage( "\nSERVER>>> " + message );
      }
      catch ( IOException ioException ) {
         displayArea.append( "\nError writing object" );
      }
   }

   private void displayMessage( final String messageToDisplay ) {
      SwingUtilities.invokeLater(
         new Runnable() {
            public void run() {
               displayArea.append( messageToDisplay );
            }
         }
      );
   }

   private void setTextFieldEditable( final boolean editable ) {
      SwingUtilities.invokeLater(
         new Runnable() {
            public void run() {
               enterField.setEditable( editable );
            }
         }
      );
   }
}

class MultiServer extends Thread {
   public MultiServer( ServerSocket servers, Socket connection ) {
      servers = server; // bug: "server" is undefined here, and neither parameter is ever stored
   }

   public void run() { System.out.print( "Yes" ); }
}

Check out the "Supporting Multiple Clients" bit here: http://java.sun.com/docs/books/tutorial/networking/sockets/clientServer.html
Start a Thread for each client. I'd recommend you create a class:
class Client {
    Socket socket;
    ObjectOutputStream out;
    ObjectInputStream in;
    ...any other client-specific data you need...
    public void sendMessage(String s) { ...send to this client...; }
    ...any other client-specific methods you need...;
}

Create one of those when accept() gets a new connection, then start the thread for that client.
You may want to keep a LinkedList that contains all the Client objects. E.g. if you want to send a message to all clients you'd loop over the LinkedList and send to each in turn. Synchronize the list appropriately. Removing clients from the list when they close down can be interesting :-) so be careful. -
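For what it's worth, here is a minimal, self-contained sketch of that pattern (the class name TinyEchoServer and the echo behaviour are mine, not from the tutorial): one thread blocks in accept() and hands each new Socket straight to a worker, so a second client connecting milliseconds later gets served immediately instead of waiting for the first to finish.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class TinyEchoServer {
    private final ServerSocket server;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public TinyEchoServer( int port ) throws IOException {
        server = new ServerSocket( port );
    }

    public int port() { return server.getLocalPort(); }

    // Accept loop: hand each Socket to the pool immediately, so accept()
    // is back listening before the next client arrives.
    public void start() {
        pool.execute( () -> {
            try {
                while ( !server.isClosed() ) {
                    Socket client = server.accept();
                    pool.execute( () -> handle( client ) );
                }
            } catch ( IOException ignored ) { } // thrown when the server is closed
        } );
    }

    // Per-client worker: echo each line back with a prefix.
    private void handle( Socket client ) {
        try ( BufferedReader in = new BufferedReader(
                  new InputStreamReader( client.getInputStream() ) );
              PrintWriter out = new PrintWriter( client.getOutputStream(), true ) ) {
            String line;
            while ( ( line = in.readLine() ) != null )
                out.println( "SERVER>>> " + line );
        } catch ( IOException ignored ) { }
    }

    public void stop() throws IOException { server.close(); pool.shutdownNow(); }

    // Quick demo: two clients talk to the server in turn.
    public static void main( String[] args ) throws Exception {
        TinyEchoServer s = new TinyEchoServer( 0 ); // 0 = any free port
        s.start();
        for ( int i = 1; i <= 2; i++ ) {
            try ( Socket c = new Socket( "localhost", s.port() );
                  PrintWriter out = new PrintWriter( c.getOutputStream(), true );
                  BufferedReader in = new BufferedReader(
                      new InputStreamReader( c.getInputStream() ) ) ) {
                out.println( "hello " + i );
                System.out.println( in.readLine() ); // prints SERVER>>> hello 1, then SERVER>>> hello 2
            }
        }
        s.stop();
    }
}
```

To add the broadcast idea from the post, you would give the server a synchronized List of the Client objects, append to it inside the accept loop, and loop over it in sendMessage-style code, removing entries as connections close.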
Network Problem AFP/SAMBA - MAC OSX 10.4.8 (Server & Desktop)
Hi all,
I don't know if someone has already posted this problem.
We have many Apple desktops (around 200: G5 and G4, no Intel), and with the last two updates of 10.4 (10.4.7 and 10.4.8) we have many problems with the server volumes.
The servers run Linux as the OS and Fullpress for the filesystem.
The Linux version is: Red Hat Enterprise Linux 3.
The Samba version is: 3.0.14a
The Fullpress version is: 15.01
10.4.7
In AFP sharing we see some folders with locks set from old OS 9 (with releases of 10.4 before 10.4.7 we didn't see them).
10.4.8 (desktop & server)
The network speed is very slow. If we copy from server to server, or from server to desktop, the speed averages 500-700 KB/s on a 100 Mb or 1000 Mb network connection (LAN).
If we copy (with the same network setup in the client) from a Unix server with Helios, the speed is much faster (2 GB from Fullpress -> 2 hours; 2 GB from Helios -> 3 minutes).
The problem is, if we don't update to 10.4.7 or 10.4.8, an earlier release such as 10.4.5 copies the same file to server or desktop much faster (2 GB from Fullpress in about 5 minutes).
How can we fix this?
(sorry for my english)
All Mac OS X (10.4.8) All System
"Maybe changing the protocol is a solution (but I can't change).
The "problem" is, for example, if you have an online databank with images/XPress/InDesign (for me, with Fullpress and Helios), and if we change the protocol we "destroy" the resources of the files and something goes wrong in the DB (many, many problems).
We must use Fullpress and AppleShare (years and years of data stored), and a change is not very easy."
Well, not necessarily. I'm sure your system could be transitioned to something new. You have all the data; it's just a matter of converting to another system. I doubt it's impossible - just REALLY expensive.
"For Fullpress (I'm not sure) I think I've heard talk about Bonjour, but I'm not sure, and I don't know the difference from AppleShare (which "Fullpress" says is an old protocol).
Some time ago I also had 6 Silicon machines with Fullpress, and OS X didn't have problems, but the OS was at version 10.4.5, so... Helios (in AppleShare and SMB) doesn't have problems."
Well, technically I believe AppleShare was basically the old AFP protocol over AppleTalk; this is pre-OS X (so OS 9.x and earlier). AppleTalk was the old Apple networking protocol, similar to what Bonjour is now. However, AFP now runs over TCP/IP, and 3.0 functions more like NFS or SMB. You can access it with Bonjour networking or standard TCP/IP. I can't imagine this Fullpress software not supporting that from here on out. I suspect most bureaus are all but rid of non-OS X Macs, so it would make sense they'd let AppleShare support die while moving to afpovertcp, SMB, and possibly WebDAV and NFS.
"Now I think maybe it is a problem with Linux and how it manages the connection, but something changed in the clients (upgrade) and not in the servers, so....."
This is most likely the case, or something very similar. -
How can I fix my network problems?
Recently, I've been having some network problems. To be a little more detailed, I'll start from what usually happens.
Every now and then, my network connection will stop responding, or lag quite a bit. Sometimes when I'm playing World of Warcraft in a dungeon, I suddenly experience lag: everything I do is delayed, and the latency between my computer and the WoW servers spikes. After having to quit the game, I try to go on the internet and I can't do anything, because it takes incredibly long and most of the time I don't get anywhere near the website I typed in--including Google.
However, after I restart my Mac, the network connection works again! What is going on? It's been happening pretty much most days at some time of the day. I've been trying to tackle the problem with diagnostics and going into the preferences to see if there's something wrong, but I'm not sure. Do you guys have any particular recommendations to avoid this?
If not, I guess I'll just have to continue to restart my Mac every time it happens.
With internet connections it's a daisy chain: you have your computer and whatever you're connecting with, Ethernet or Wi-Fi, then the router and/or modem. Any of these can lag or drop a connection. When it's a router or modem, unplugging it for a few seconds resets it. When you restart your computer you are kind of doing that; I've been told by my internet provider to do just that. Then you have your internet provider themselves, who can have slowdowns during certain times of the day, not to mention the server you might be playing an online game through. You can also have an internet line coming into your house that is going bad. I've gone through this: done multiple bandwidth tests, replaced modems, and had the line to the house replaced. It's kind of a process of elimination.