Connected Environments
Hi,
We have connected two environments. Environment A has an application with an environment-visible service object (EVSO). At a certain moment this SO is triggered to create an anchored object and publish it with the NameService; applications in environments A and B can then access the anchored object.
When environment B is offline and the partition containing the EVSO in environment A has just been started, the first attempt to register the anchored object with the NameService fails. The error says:
USER ERROR: Failed to create object (/sub/tst) in a remote
environment AF5C47B0-61F5-11D2-B9FC-627494AEAA77.
Class: qqsp_SystemResourceException
The UUID in this error belongs to environment B. The second attempt succeeds, even though environment B is still offline.
Can anyone explain the cause of this error? Is this normal behaviour?
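[Editor's note: the behaviour described (first registration fails while a connected environment is offline, a second attempt succeeds) suggests a simple retry wrapper as a workaround. A Python sketch, since the original code is Forte TOOL; the function and exception names here are illustrative stand-ins for ObjectLocationMgr.RegisterObject and qqsp_SystemResourceException:]

```python
# Sketch: retry a name-service registration that is known to fail once
# when a connected environment is offline. All names are illustrative;
# the real calls are Forte TOOL (ObjectLocationMgr.RegisterObject).

class SystemResourceException(Exception):
    """Stands in for qqsp_SystemResourceException."""

def register_with_retry(register, name, obj, attempts=2):
    """Call register(name, obj), retrying on SystemResourceException."""
    last_error = None
    for _ in range(attempts):
        try:
            return register(name, obj)
        except SystemResourceException as e:
            last_error = e  # first attempt may fail while env B is offline
    raise last_error
```

This papers over the symptom rather than explaining it, but it matches the observed two-attempt behaviour.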
Pascal Rottier
STP - MSS Support & Coordination Group
Philip Morris Europe
e-mail: [email protected]
Phone: +49 (0)89-72472530
+++++++++++++++++++++++++++++++++++
Origin IT-services
Desktop Business Solutions Rotterdam
e-mail: [email protected]
Phone: +31 (0)10-2428100
+++++++++++++++++++++++++++++++++++
Don't meddle in the affairs of dragons
'cause you're crunchy and taste good with ketchup
Similar Messages
-
Re: Connecting environments via modem
John,
The circumstance surrounding your question (namely a modem) suggests the use of 'disconnected clients' or 'nomadic clients', so I'll throw that out in case you are not aware of it.
Through the use of the DistObjectMgr class and the -fs switch, you can have a client come up in standalone mode, connecting to a service object only on demand. There is an example of how to do this in /install/examples/frame/nomad.pex.
Lee Wei
At 10:06 AM 5/19/98 +1000, [email protected] wrote:
I was just wondering if it is possible to have connected environments over a modem, and whether there are any issues I should address if this is done. For instance, should service objects be session or message dialog duration? Do I need to do anything special for failover?
The connection won't be there all of the time, so should we just use external connections or connected environments?
Also, what is the best way to connect via modem? Can Forte handle the dial-up features, or do we need to wrap an external application?
Any suggestions would be appreciated,
Any suggestions would be appreciated,
John Twomey - CSC Australia
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
Hello,
Is there anybody on this mailing list who has experience in using Forte connected environments, particularly with named anchored objects?
We are attempting to use named anchored objects to communicate between two Forte connected environments and are having a few problems with the NameService.
When the application starts up, we register the anchored objects. However, when we check the NameService entries, some (not all) of the anchored objects are shown as unavailable and cannot be found using BindObject. If we deregister the object and register it again, it becomes available and can be used.
No exceptions are raised when the objects are registered, but the object cannot be found.
We are running Forte release 3.0.G.2 on a Windows NT server and Windows 95 clients.
Any Thoughts?
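[Editor's note: the workaround described in this post (deregister and re-register any entry that shows as unavailable) can be folded into startup as a verify loop. A Python sketch; the fake `service` object and its methods stand in for the Forte NameService calls (RegisterObject, BindObject, DeregisterObject):]

```python
# Sketch: register anchored objects, verify each with a lookup, and
# deregister/re-register any entry that comes back unavailable.
# `service` stands in for the Forte NameService; names are illustrative.

def register_and_verify(service, entries, max_rounds=2):
    """entries: dict of name -> object. Returns names still unavailable."""
    for name, obj in entries.items():
        service.register(name, obj)
    pending = set(entries)
    for _ in range(max_rounds):
        for name in list(pending):
            if service.bind(name) is not None:
                pending.discard(name)
            else:
                service.deregister(name)               # the workaround from the post:
                service.register(name, entries[name])  # re-registering makes it visible
    return pending
```

Anything still in the returned set after the retry rounds genuinely failed to register.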
David McPaul
Lumley Technology
-
Re: Connected Environments and Fail/Over
Hi,
Yes, the SO in P1 is session-duration, and I catch the DistributedAccessException and retry the call in the exception handler, so that Forte directs me to the next available replicate. But I still get a DistributedAccessException on the retry from my P2 server partition, while the client partitions reconnect successfully.
After further investigation, the difference between the P2 server partition and the client partitions was that the clients had -fns ServerA:5000;ServerB:5000 in their shortcut. After removing this option, the clients fail on the retry just like P2 does, which proves that the -fns option is not used only at partition startup but has a greater meaning behind the scenes.
The next step was thus to add the -fns option to the P2 server partition, but then, when retrying the call from the exception handler, the partition either hangs or terminates with the following error:
WARNING: Task [6443A488-9C05-11D1-A703-A8262ADEAA77:0x1bc, 6] (cm.Recv)
terminated while still holding mutex(es).
Locks were cancelled - shared data may be corrupted.
Cancelled mutex: do.NsClient (0x166fd38)
FATAL ERROR: Internal mutex corrupted - terminating partition
Any thoughts ?
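[Editor's note: the -fns flag takes a semicolon-separated list of name servers (ServerA:5000;ServerB:5000), and the behaviour above suggests the runtime walks that list again on reconnect, not only at startup. A Python sketch of such a walk; the `connect` callable is a hypothetical stand-in for whatever the Forte runtime does internally:]

```python
# Sketch: parse an -fns style list and try each name server in order,
# as the client retry behaviour described above suggests Forte does.

def parse_fns(value):
    """'ServerA:5000;ServerB:5000' -> [('ServerA', 5000), ('ServerB', 5000)]"""
    pairs = []
    for entry in value.split(";"):
        host, _, port = entry.partition(":")
        pairs.append((host, int(port)))
    return pairs

def connect_any(fns_value, connect):
    """Return the first successful connection, or re-raise the last error."""
    last = None
    for host, port in parse_fns(fns_value):
        try:
            return connect(host, port)
        except ConnectionError as e:
            last = e  # this name server is down; fall through to the next
    raise last
```

A partition without the -fns list has only one address to retry against, which would explain why P2 failed where the clients succeeded.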
Vincent Figari
You don't need to buy Internet access to use free Internet e-mail.
Get completely free e-mail from Juno at http://www.juno.com
Or call Juno at (800) 654-JUNO [654-5866]
-
Urgent help : Nomadic computing / connected env(long)
Hi,
I need some urgent help on the following problem.
Platform : NT 3.51 as the Env Mgr, server and clients.
Forte : 2.0.E.1
The test case is simple: multiple (let's say 3) Forte environments, each an NT machine (though that shouldn't be significant), of which one is the "home" environment, Env0, and the others are remote environments, Env1 and Env2. I need to connect to Env1 and Env2 from Env0 on a completely dynamic basis, i.e. I have a list of NS addresses (IP & port) and the particular one is selected by the user. "Connect" means anything from the very basic (invoke a method on a service) to the more sophisticated (inspect and change instruments, and invoke commands on all system agents and custom agents).
I started off with the new nomadic stuff, as per the Forum and technote (#?). I installed my server partition on Env1 and Env2 and started the client with the -fnoins flag. Then I dynamically set the environment variable FORTE_NS_ADDRESS (on NT it's under HKEY_CURRENT_USER in the registry) to the new environment, Env1. When I invoked the method, it fired off the partition in the right environment. Then I changed FORTE_NS_ADDRESS to Env2 and did the task.part.DistObjLocationMgr.ReleaseConnection(EnvMgr) thing. This was fine. The question now was: how do I "connect" to the newly set environment? I tried everything from simply calling my server partition, to server...task.part.GetEnvMgr, to registering anchored objects on the blessed ObjLocationMgr. It just didn't connect to the new environment: it stayed connected to Env1!
Then I got onto connected environments. Here I also tried a couple of things. Most significantly, connecting to Env1 and setting the environment search path also did the trick, but again, disconnecting from Env1 seems troublesome. Q: how do I separate the original Env0 and Env1? Forte seems to think that it's all one big environment now, and gave strange errors when I disconnected from Env1, connected to Env2, and tried to set the environment search path to Env2. Another Q: is there a way to dynamically set the environment search path for an SO? All I could do was set the NS environment search path.
Anyway, my apologies for this long, dragged-out e-mail, but any ideas will help... I'm getting desperate!
Anton van Niekerk
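[Editor's note: the flow Anton describes (pick an NS address from a list, release the current connection, then point FORTE_NS_ADDRESS at the new environment) can be modelled as a small switcher that always releases before reconnecting. A Python sketch; every name here is illustrative, since the real mechanism is the FORTE_NS_ADDRESS variable plus DistObjLocationMgr.ReleaseConnection:]

```python
# Sketch: dynamic environment selection. The user picks one of several
# name-service addresses; the old connection must be released before the
# new address takes effect. All names are illustrative.

class EnvironmentSwitcher:
    def __init__(self, connector):
        self.connector = connector  # callable: (host, port) -> connection
        self.current = None

    def switch(self, host, port):
        if self.current is not None:
            self.current.release()  # analogue of ReleaseConnection(EnvMgr)
            self.current = None
        self.current = self.connector(host, port)
        return self.current
```

The point of the sketch is the ordering: release-then-connect, which appears to be exactly the step that was hard to achieve in the runtime described above.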
-
How can two Forte installations communicate - exact requirement given below
To me, this clearly looks like a design issue. Here are two options that I can think of:
1) You can achieve this through SQL, i.e. the SQL service at location 1 can allow you to query the database at location 2. Your service object uses the user's criteria to decide which database to go to. This, however, is inefficient, since the DBSession is not on the same machine as the database.
2) Create a new layer (umbrella) which maintains a list of services that register with it. This could be a separate application, and can be in a separate environment too. Your "database" service objects should also be separate applications; they register themselves with this layer. The remaining part of your app should now be a separate application which uses the "layer" to get to the appropriate database. These different applications can talk to each other using reference partitions across connected environments.
    Location 1                            Location 2
      app 1                                 app 2
        |                                     |
  (query the layer)                   (query the layer)
        |                                     |
  -------------------- service layer --------------------
        |                                     |
  (register self)                     (register self)
        |                                     |
  database services 1                 database services 2
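[Editor's note: option 2 above, the umbrella layer, reduces to a registry that services register with and applications query. A minimal Python sketch; class names and the routing key are illustrative, not Forte APIs:]

```python
# Sketch of the umbrella layer: database services register themselves,
# and applications ask the layer for the right service by location key.

class ServiceLayer:
    def __init__(self):
        self.services = {}

    def register(self, key, service):
        """Called by each database service at startup ("register self")."""
        self.services[key] = service

    def lookup(self, key):
        """Called by applications ("query the layer")."""
        return self.services[key]

class DatabaseService:
    def __init__(self, location):
        self.location = location

    def query(self, criteria):
        # in the real design this would run SQL against the local database
        return f"rows from {self.location} matching {criteria}"
```

An app at location 2 can then reach location 1's data purely through the layer, which is the point of the design: neither app needs to know where the other's database lives.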
I am sure there are more solutions to this problem. Others??
Ravi Kalidindi
Born Info Svcs Group
-----Original Message-----
From: Rajiv Srivastava [SMTP:[email protected]]
Sent: Tuesday, February 09, 1999 11:45 AM
To: 'Kalidindi, Ravi CWT-MSP'
Subject: RE: [email protected]
Thanks for your reply.
The actual requirement is: a Forte application running at location 1 on local production data, and the same application running at location 2 with its own local data (both have their own separate resources). Network connectivity is available.
Now I want to fire a query either way to retrieve some information, i.e. I should be able to retrieve some data, based on some criteria, from location 1 while sitting at location 2, and vice versa.
I think that clearly says what I want.
-
RE: Named anchored objects
Albert,
In my case I was using a named anchored object to get a handle to an actual
service object. My named object that I registered in the name service was
an intermediary to which I did not maintain a connection. So I have not
explicitly tested what you are asking.
However, I too was not using a hard coded reference to the SO, and fail over
and load balancing worked fine. The functions of fail over and load
balancing are not done by the service object but by the name service, proxy
and router. Since you are getting a proxy back any time you do a lookup in
the name service I would think that fail over should work with any anchored
object that is registered in the name service. When you do a RegisterObject
call you will notice that one of the arguments is the session duration,
which implies to me that fail over will be handled the same as for service
objects.
Load balancing adds another wrinkle. Load balancing is handled by a router.
You must get a proxy to the router and not a proxy to an instance of the
object that the router is doing the load balancing for. In the latter
scenario you will be bypassing the router. If you are creating, anchoring
and registering your objects dynamically you will not have a router so you
will not be able to load balance! This applies even if the objects are
instantiated within partitions that are load balanced because you will still
be getting proxies back to a particular instance of the anchored objects.
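[Editor's note: Sean's distinction can be made concrete: a router proxy fans calls out across replicates, while a proxy to one instance always hits that instance. A minimal Python sketch; purely illustrative, since Forte's router is internal to the runtime:]

```python
# Sketch: why binding to an instance bypasses load balancing. A router
# proxy round-robins over replicates; an instance proxy does not.

class Instance:
    def __init__(self, name):
        self.name = name
        self.hits = 0

    def call(self):
        self.hits += 1
        return self.name

class RouterProxy:
    def __init__(self, instances):
        self.instances = instances
        self.next = 0

    def call(self):
        inst = self.instances[self.next % len(self.instances)]
        self.next += 1  # round-robin across replicates
        return inst.call()
```

Holding a reference to one `Instance` directly is the situation Sean warns about: the router never sees those calls, so they cannot be balanced.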
There are ways to accomplish load balancing between objects that you
register yourself. However, the best solution will vary depending on the
actual problem trying to be solved. If you would like to discuss this
further, include a little more detail about the scenario you need to
implement and I will give you what I know.
BTW, what I have outlined above also applies to getting references via a system agent.
Sean
Cornice Consulting, Inc.
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Albert Dijk
Sent: Friday, July 03, 1998 11:01 AM
To: [email protected]
Subject:
Alex, David, Jez, Sean,...
My question about both solutions (using the NameService and agents) is: if I reach a remote service object using either BindObject or an agent, do fail-over and load-balancing work the same way as they normally do when using a hard-coded reference to the SO?
Albert Dijk
From: Sean Brown[SMTP:[email protected]]
Reply To: [email protected]
Sent: Thursday, June 25, 1998 6:55 AM
To: Ananiev, Alex; [email protected]
Subject: RE: multiple named objects with the same name and
interface
Alexander,
I cannot comment on the speed difference because I never tested it. But I will say that we looked at the agent solution at a client site before, and I will give the same warning I gave them. If you go the agent direction, you are now using agents for a purpose for which they were not intended. Even though it technically works, as soon as you start using a piece of functionality in a way the developer did not intend, you run the risk of forward-compatibility problems. By this I mean that since agents were not originally intended to be used to look up service/anchored object references, it may not work in the future, because it is not likely to be given consideration in any future design.
As we all know, programmers are always stretching the bounds of the tools they use, and you may have a good reason (i.e. performance). I just wanted to let you know the possible risk.
One final note on a limitation of using system agents to obtain references to anchored objects: you cannot access agents across environments. So, if you have connected environments and need to get references to services in another environment for fail-over or whatever, you will not be able to do it with agents.
Just some thoughts!
Sean
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Ananiev, Alex
Sent: Wednesday, June 24, 1998 12:14 PM
To: '[email protected]'
Subject: RE: multiple named objects with the same name and interface
David,
The problem with dynamic binding is that in this case you have to keep the reference to the service object somewhere. You don't want to call "bindObject" every time you need to use this service object; "bind" is a time-consuming operation, even in the same partition. Keeping a reference could be undesirable if your object could be moved across partitions (e.g. a business object).
The alternative solution is to use agents. You can create a custom agent, make it a subagent of the active partition agent, and use it as a placeholder for whatever service you need. "FindSubAgent" works much faster than "bindObject" (we verified that), and an agent is "user-visible" by its nature.
Alexander
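[Editor's note: the trade-off Alexander describes (bindObject is expensive, but holding a reference pins you to a copy that may move) can be split by memoizing the bind and invalidating on demand. A Python sketch; `bind_object` is an illustrative stand-in for Forte's ObjectLocationMgr.BindObject:]

```python
# Sketch: memoize the expensive bind so it is paid once per partition,
# while still allowing invalidation when the object moves.

class BindCache:
    def __init__(self, bind_object):
        self.bind_object = bind_object  # stand-in for BindObject
        self.cache = {}

    def get(self, name):
        if name not in self.cache:  # pay the bind cost only once
            self.cache[name] = self.bind_object(name)
        return self.cache[name]

    def invalidate(self, name):
        self.cache.pop(name, None)  # e.g. after the object moves partitions
```

This does not match FindSubAgent's speed claim one way or the other; it just shows that "bind every time" and "keep a raw reference forever" are not the only two options.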
From: "Sean Brown" <[email protected]>
Date: Wed, 24 Jun 1998 09:12:55 -0500
Subject: RE: multiple named objects with the same name and interface
David,
I actually determined it through testing. In my case I did not want this to happen and was trying to determine why it was happening. It makes sense if you think about it: Forte is trying to avoid making a remote method invocation if it can.
Now, for anything more complex than "look locally first and, if none is found, give me any remote instance you can find", you will need to do more work. Using a naming scheme like Jez suggests below works well.
- -----Original Message-----
From: Jez Sygrove [mailto:[email protected]]
Sent: Wednesday, June 24, 1998 4:34 AM
To: [email protected]; 'David Foote'
Cc: [email protected]
Subject: RE: multiple named objects with the same name and interface
David,
there's a mechanism used within SCAFFOLDS that allows the location of the 'nearest' SO when more than one is available.
It involves registering each duplicated SO under three dynamically built names. The names include the partition, the node, or the environment name.
When wishing to locate the nearest SO, the BO builds an SO name using its own partition and asks the name service for that. If there is an SO registered under that name, then it must be in the same partition and all is well: no cross-partition calls. If not, then the BO builds the name using its node and asks the name service for that. This means that if there is an SO outside the BO's partition but still on the same node, then this can be used. Again, relatively 'local'. If neither of these works, then the BO has to resort to an environment-wide search.
It may be that this approach could be adapted/adopted; I like its ingenuity.
Cheers,
Jez
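[Editor's note: the SCAFFOLDS scheme Jez describes fits in a few lines. Each duplicated SO is registered under three generated names (partition, node, environment), and lookup tries the most local name first. A Python sketch; the dotted name format and the plain dict registry are illustrative, not the actual SCAFFOLDS implementation:]

```python
# Sketch of the three-name registration scheme: partition-local, then
# node-local, then environment-wide lookup. Name format is illustrative.

def names_for(so_name, partition, node, environment):
    return [f"{so_name}.{partition}",
            f"{so_name}.{node}",
            f"{so_name}.{environment}"]

def register_all(registry, so_name, so, partition, node, environment):
    """Register one SO copy under all three of its generated names."""
    for name in names_for(so_name, partition, node, environment):
        registry.setdefault(name, so)  # first copy wins the shared env name

def nearest(registry, so_name, partition, node, environment):
    """Try partition-local, then node-local, then environment-wide."""
    for name in names_for(so_name, partition, node, environment):
        if name in registry:
            return registry[name]
    return None
```

The caller builds the lookup names from its *own* partition and node, which is what makes the scheme return the nearest copy without any router involvement.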
From: David Foote[SMTP:[email protected]]
Reply To: David Foote
Sent: 24 June 1998 03:17
To: [email protected]
Cc: [email protected]
Subject: RE: multiple named objects with the same name and
interface
Sean,
First, thank you for your response. I have wondered about this for a long time.
I looked at the documentation for ObjectLocationManager, and on page 327 of the Framework Library and AppletSupport Library Guide, in describing the BindObject method, Forte says:
"The name service allows more than one anchored object (from different partitions) to be registered in the name service under the same registration name. When you invoke the BindObject method with a request for a name that has duplicate registration entries, the BindObject method finds an entry corresponding to an active partition, skipping any entries that do not. If no such active partition is found, or if the requested name is not found in the name service registry, a RemoteAccessException will be raised when the BindObject method is invoked."
My question is: how did you discover that in the case of duplicate registrations the naming service will return the local object if one exists? This is not apparent from the documentation I have quoted. Is it documented elsewhere? Or did you determine it empirically?
David N. Foote,
Consultant
----Original Message Follows----
David,
First I will start by saying that this can be done by using named anchored objects and registering them yourself in the name service. There is documentation on how to do this, and by default you will get most of the behavior you desire. When you do a lookup in the name service (the BindObject method), it will first look in the local partition, see if there is a local copy, and give you that copy. By anchoring the object and manually registering it in the name service, you are programmatically creating your own SO without defining it as such in the development environment. BTW, in response to your item number 1: this should be the case there as well. If your "mobile" object is in the same partition where the service object it is calling resides, you should get a handle to the local instance of the service object.
Here is the catch: if you make a BindObject call and there is no local copy, you will get a handle to a remote copy, but you cannot be sure which one! It ends up as more or less a random selection. Off the top of my head, and without going to the doc, I am pretty sure that when you register an anchored object you cannot limit its visibility to "User".
Sean
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of David Foote
Sent: Monday, June 22, 1998 4:51 PM
To: [email protected]
Subject: multiple named objects with the same name and interface
All,
More than once, I have wished that Forte allowed you to place named objects with the same name in more than one partition. There are two situations in which this seems desirable:
1) Objects that are not distributed but are mobile (passed by value to remote objects) cannot safely reference a service object unless it has environment visibility, but this forces the overhead of a remote method call when it might not otherwise be necessary. If it were possible to place a copy of the same service object (with user visibility) in each partition, the overhead of a remote method call could be avoided. This would only be useful for a service object whose state could be safely replicated.
2) My second scenario also involves mobile objects referencing a service object, but this time I would like the behavior of the called service object to differ with the partition from which it is called. This could be accomplished by placing service objects with the same name and the same interface in each partition, but varying the implementation with the partition.
Does anyone have any thoughts about why this would be a good thing or a bad thing?
David N. Foote
Consultant
Alexander Ananiev
Claremont Technology Group
916-558-4127
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive
<URL:<a href="http://pinehurst.sageit.com/listarchive/">http://pinehurst.sageit.com/listarchive/</a>>
>
>
>
Alexander Ananiev
Claremont Technology Group
916-558-4127
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:<a href=
"http://pinehurst.sageit.com/listarchive/">http://pinehurst.sageit.com/listarchive/</a>>
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:<a href=
"http://pinehurst.sageit.com/listarchive/">http://pinehurst.sageit.com/listarchive/</a>>Albert,
In my case I was using a named anchored object to get a handle to an actual
service object. My named object that I registered in the name service was
an intermediary to which I did not maintain a connection. So I have not
explicitly tested what you are asking.
However, I too was not using a hard coded reference to the SO, and fail over
and load balancing worked fine. The functions of fail over and load
balancing are not done by the service object but by the name service, proxy
and router. Since you are getting a proxy back any time you do a lookup in
the name service I would think that fail over should work with any anchored
object that is registered in the name service. When you do a RegisterObject
call you will notice that one of the arguments is the session duration,
which implies to me that fail over will be handled the same as for service
objects.
Load balancing adds another wrinkle. Load balancing is handled by a router.
You must get a proxy to the router and not a proxy to an instance of the
object that the router is doing the load balancing for. In the latter
scenario you will be bypassing the router. If you are creating, anchoring
and registering your objects dynamically you will not have a router so you
will not be able to load balance! This applies even if the objects are
instantiated within partitions that are load balanced because you will still
be getting proxies back to a particular instance of the anchored objects.
There are ways to accomplish load balancing between objects that you
register yourself. However, the best solution will vary depending on the
actual problem trying to be solved. If you would like to discuss this
further, include a little more detail about the scenario you need to
implement and I will give you what I know.
BTY what I have outlined above also applies to getting references via a
system agent.
Sean
Cornice Consulting, Inc.
-----Original Message-----
From: [email protected]
[<a href="mailto:[email protected]">mailto:[email protected]]On</a> Behalf Of Albert Dijk
Sent: Friday, July 03, 1998 11:01 AM
To: [email protected]
Subject:
Alex, David, Jez, Sean,...
My question about both solutions (using Nameservice and agents) is:
If I reach a remote service object using either a BindObject or an agent, do
fail-over and load-balancing work the same way as they normally do when
using a hard coded reference to the SO.
Albert Dijk
From: Sean Brown[SMTP:[email protected]]
Reply To: [email protected]
Sent: Thursday, June 25, 1998 6:55 AM
To: Ananiev, Alex; [email protected]
Subject: RE: multiple named objects with the same name and
interface
Alexander,
I can not comment on the speed difference because I never tested it.
But, I
will say that we looked at the agent solution at a client sight
before. I
will give the same warning I gave them. If you go the agent direction
you
are now using agents for a purpose that they were not intended. Even
though
it technically works, as soon as you start using a piece of
functionality in
a way the developer did not intend it to be used you run the risk of
forward
compatibility problems. By this I mean, since agents were not
originally
intended to be used to look up service / anchored object references,
it may
not work in the future because it is not likely to be given
consideration in
any future design.
As we all know, programmers are always stretching the bounds of the
tools
they use and you may have a good reason (i.e. performance). I just
wanted to
let you know the possible risk.
One final note on a limitation of using system agents to obtain
references
to anchored objects. You can not access agents across environments.
So, if
you have connected environments and need to get references to services
in
another environment for fail-over or whatever, you will not be able to
do it
with agents.
Just some thoughts!
Sean
-----Original Message-----
From: [email protected]
[<a href="mailto:[email protected]]On">mailto:[email protected]]On</a> Behalf Of Ananiev, Alex
Sent: Wednesday, June 24, 1998 12:14 PM
To: '[email protected]'
Subject: RE: multiple named objects with the same name and interface
David,
The problem with dynamic binding is that in this case you have to keep
the reference to the service object somewhere. You don't want to call
"bindObject" every time you need to use this service object; "bind" is a
time-consuming operation, even on the same partition. Keeping the reference
could be undesirable if your object could be moved across partitions
(e.g. a business object).
The alternative solution is to use agents. You can create a custom agent,
make it a subagent of an active partition agent, and use it as a
placeholder for whatever service you need. "FindSubAgent" works much
faster than "bindObject" (we verified that), and an agent is "user-visible"
by its nature.
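The caching trade-off Alex describes can be sketched outside Forte: resolve the name once with an expensive lookup, keep the reference, and drop it when the holder moves partitions or the service fails over. All names below (ServiceCache, slow_lookup) are illustrative stand-ins, not Forte APIs.

```python
class ServiceCache:
    """Lazily binds a named service and memoizes the reference."""

    def __init__(self, lookup):
        self._lookup = lookup      # expensive resolver, a BindObject-style call
        self._cache = {}           # name -> resolved reference

    def get(self, name):
        if name not in self._cache:
            self._cache[name] = self._lookup(name)   # paid only once per name
        return self._cache[name]

    def invalidate(self, name):
        # Needed when the holder moves partitions or the service fails over.
        self._cache.pop(name, None)


calls = []

def slow_lookup(name):
    calls.append(name)             # stand-in for a time-consuming bind
    return f"ref:{name}"

cache = ServiceCache(slow_lookup)
cache.get("/glob/obj1")
cache.get("/glob/obj1")            # second call hits the cache
```

The invalidate hook is where the "object moved across partitions" concern shows up: a cached reference is only safe while the binding it memoizes stays valid.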
Alexander
From: "Sean Brown" <[email protected]>
Date: Wed, 24 Jun 1998 09:12:55 -0500
Subject: RE: multiple named objects with the same name and interface
David,
I actually determined it through testing. In my case I did not want this to
happen and was trying to determine why it was happening. It makes sense if
you think about it: Forte is trying to avoid making a remote method
invocation if it can.
Now, for anything more complex than "look locally first and, if none is
found, give me any remote instance you can find", you will need to do more
work. Using a naming scheme like Jez suggests below works well.
Sean
-----Original Message-----
From: Jez Sygrove [mailto:[email protected]]
Sent: Wednesday, June 24, 1998 4:34 AM
To: [email protected]; 'David Foote'
Cc: [email protected]
Subject: RE: multiple named objects with the same name and interface
David,
there's a mechanism used within SCAFFOLDS that allows the
location of the 'nearest' SO when more than one is available.
It involves registering each duplicated SO under three dynamically built
names. The names include the partition, the node or the environment name.
When wishing to locate the nearest SO, the BO builds an SO name using its
own partition and asks the name service for that.
If there is an SO registered under that name, then it must be in the same
partition and all is well. No cross-partition calls.
If not, then the BO builds the name using its node and asks the name
service for that. This means that if there is an SO outside the BO's
partition but still on the same node, then this can be used. Again,
relatively 'local'.
If neither of these works, then the BO has to resort to an environment-wide
search.
It may be that this approach could be adapted / adopted; I like its
ingenuity.
Cheers,
Jez
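The three-name fallback Jez describes can be sketched as follows. The registry dictionary and the name format used here are assumptions for illustration, not the actual SCAFFOLDS code.

```python
def location_names(partition, node, environment, so_name="mySO"):
    """Most-local-first list of registration names for one SO."""
    return [f"/{environment}/{node}/{partition}/{so_name}",
            f"/{environment}/{node}/{so_name}",
            f"/{environment}/{so_name}"]

def register(registry, so, partition, node, environment):
    """Register one SO instance under all three derived names."""
    for name in location_names(partition, node, environment):
        registry.setdefault(name, so)  # earlier instances keep the wider names

def find_nearest(registry, partition, node, environment):
    """Partition-local first, then same node, then environment-wide."""
    for name in location_names(partition, node, environment):
        if name in registry:
            return registry[name]
    raise LookupError("no instance registered in this environment")

registry = {}
register(registry, "SO-on-node2", partition="p9", node="node2", environment="envA")
# A BO in p1/node1 has no local copy, so the environment-wide name matches.
nearest = find_nearest(registry, partition="p1", node="node1", environment="envA")
```

The point of the scheme is that each probe is a cheap exact-name lookup, so "nearest" is decided by which of the three names exists, not by scanning all registrations.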
From: David Foote[SMTP:[email protected]]
Reply To: David Foote
Sent: 24 June 1998 03:17
To: [email protected]
Cc: [email protected]
Subject: RE: multiple named objects with the same name and
interface
Sean,
First, thank you for your response. I have wondered about this for a
long time.
I looked at the documentation for ObjectLocationManager, and on page 327
of the Framework Library and AppletSupport Library Guide, in describing
the BindObject method, Forte says:
"The name service allows more than one anchored object (from different
partitions) to be registered in the name service under the same
registration name. When you invoke the BindObject method with a request
for a name that has duplicate registration entries, the BindObject
method finds an entry corresponding to an active partition, skipping any
entries that do not. If no such active partition is found, or if the
requested name is not found in the name service registry, a
RemoteAccessException will be raised when the BindObject method is
invoked."
My question is: how did you discover that in the case of duplicate
registrations the naming service will return the local object if one
exists? This is not apparent from the documentation I have quoted. Is
it documented elsewhere? Or did you determine it empirically?
David N. Foote,
Consultant
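The resolution order that the quoted documentation plus Sean's testing suggest can be sketched like this; the entry layout and names are invented for illustration:

```python
class RemoteAccessError(Exception):
    """Stand-in for Forte's RemoteAccessException."""

def resolve(entries, local_partition):
    """entries: list of (partition_id, is_active, ref) under one name."""
    # 1) a copy in the calling partition avoids any remote call (Sean's finding)
    for part, active, ref in entries:
        if part == local_partition and active:
            return ref
    # 2) otherwise any active partition will do -- effectively a random pick
    for part, active, ref in entries:
        if active:
            return ref
    # 3) no active registration left (the documented exception case)
    raise RemoteAccessError("no active partition registered under this name")

entries = [("p2", True, "remote-copy"), ("p1", True, "local-copy")]
picked = resolve(entries, local_partition="p1")
```

Step 1 is the empirically observed behaviour; steps 2 and 3 are what the quoted manual text states explicitly.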
----Original Message Follows----
David,
First I will start by saying that this can be done by using named anchored
objects and registering them yourself in the name service. There is
documentation on how to do this, and by default you will get most of the
behavior you desire. When you do a lookup in the name service (BindObject
method), it will first look in the local partition, see if there is a
local copy, and give you that copy. By anchoring the object and manually
registering it in the name service you are programmatically creating your
own SO without defining it as such in the development environment.
BTW, in response to your item number 1: this should be the case there as
well. If your "mobile" object is in the same partition where the service
object he is calling resides, you should get a handle to the local instance
of the service object.
Here is the catch: if you make a bind object call and there is no local
copy, you will get a handle to a remote copy, but you cannot be sure which
one! It ends up as more or less a random selection. Off the top of my head,
and without going to the doc, I am pretty sure that when you register an
anchored object you cannot limit its visibility to "User".
Sean
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of David Foote
Sent: Monday, June 22, 1998 4:51 PM
To: [email protected]
Subject: multiple named objects with the same name and interface
All,
More than once, I have wished that Forte allowed you to place named
objects with the same name in more than one partition. There are two
situations in which this seems desirable:
1) Objects that are not distributed, but are mobile (passed by value to
remote objects), cannot safely reference a Service Object unless it has
environment visibility, but this forces the overhead of a remote method
call when it might not otherwise be necessary. If it were possible to
place a copy of the same Service Object (with user visibility) in each
partition, the overhead of a remote method call could be avoided. This
would only be useful for a service object whose state could be safely
replicated.
2) My second scenario also involves mobile objects referencing a Service
Object, but this time I would like the behavior of the called Service
Object to differ with the partition from which it is called.
This could be accomplished by placing Service Objects with the same name
and the same interface in each partition, but varying the implementation
with the partition.
Does anyone have any thoughts about why this would be a good thing or a
bad thing?
David N. Foote
Consultant
Alexander Ananiev
Claremont Technology Group
916-558-4127
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
-
RE: (forte-users) Named Anchored Obj-Environment Failover
I did some playing around with this stuff as well. I can tell you a few
things.
1) The search path option of connected environments only works for SO's, not
for named anchors.
2) When EnvA creates a directory "/glob", which contains object "obj1", then
EnvA owns directory "/glob", even after restarting environments. If EnvB
tries to add a subdirectory to "/glob" or inserts its own objects into this
path, the situation becomes unstable. It doesn't immediately produce an
error, but things go wrong anyway. Is this a bug or expected behaviour? I
don't know; I just learned not to do this. Every environment must place its
named anchors in its own tree. Directories can't be shared.
3) I think the relative name "glob/obj1" should work, but only if you set
the ObjectLocationMgr to start looking at the root. By default, it will
start looking in its own environment basepath. But I don't have any
experience with this.
Pascal Rottier
Atos Origin Nederland (BAS/West End User Computing)
Tel. +31 (0)10-2661223
Fax. +31 (0)10-2661199
E-mail: [email protected]
++++++++++++++++++++++++++++
Philip Morris (Afd. MIS)
Tel. +31 (0)164-295149
Fax. +31 (0)164-294444
E-mail: [email protected]
-----Original Message-----
From: Master Programmer [mailto:[email protected]]
Sent: Monday, January 08, 2001 11:13 PM
To: [email protected]
Subject: (forte-users) Named Anchored Obj-Environment Failover
Hi to all,
We connect from EnvA to EnvB, giving the user directory parameter as /, and
set the Environment Search Path as EnvA:EnvB. In both environments we start
and register named anchored objects under the same name, '/glob/obj1'.
From a client we connect to EnvA and bind to '/glob/obj1'. When we shut
down the EnvA partition, it fails over to EnvB. Then we restart the EnvA
partition, restart/rebind the client, and try to use the object. We see
that it is using the EnvB object, although we started the primary
environment's object again. It is not using the search path. Once we shut
down the secondary environment, it starts using the primary environment's
object.
When we try to use a relative path when binding the object (first
parameter 'glob/obj1', no leading slash), trying the 3rd parameter of the
bind function or just using the environment search path, it is not able to
find the object. From the nsls command I figured out that under the root
directory the names
/forte/UUID of EnvA/node
/site
/UUID of EnvB
/glob/obj1
are available. When we use the relative path (without a slash), is it
trying to find /glob/obj1 under /forte/UUID of EnvA, while we are
registering the name under the root?
What is the reason for this odd behaviour, or is this a bug?
Any answer will be appreciated,
For the archives, go to: http://lists.xpedior.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: [email protected] -
RE: Accessing multiple Env from single Client-PC
Look in the "System Management Guide" under connected environments, page
72. This will allow services in your primary environment to find
services in your connected environment. However, there is a bug
reported on this feature which is fixed in 2F4 for the HP and H1 for all
other servers. The following is from Forte:
The connected environments bug that was fixed in 2F4 is #24282. The
problem was in the nodemgr/name server source code and caused the
following to occur:
Service1 is in connected envs A and B.
Client has env A as primary, B as secondary.
Envmgr A dies before the client has ever made a call to Service1.
After env A is gone, the client makes a call to Service1, which causes
Envmgr B to seg fault.
You should upgrade your node manager/env manager nodes to 2F4. The 2F2
development and runtime clients are fully compatible with 2F4 servers.
Kal Inman
Andersen Windows
From: Inho Choi[SMTP:[email protected]]
Sent: Monday, April 21, 1997 2:04 AM
To: [email protected]
Subject: Accessing multiple Env from single Client-PC
Hi, All!
Does anybody have any idea how to access multiple environments from a
single client-PC? I have to have multiple environments because each
environment resides on a geographically remote node, and network bandwidth
and reliability are not good enough to include all the systems in a single
environment.
Using the Control Panel for doing this is not easy for those who are not
familiar with Windows. The end-users tend to use just a single application
to access all necessary services.
I could consider two options for doing this:
1. Make some DOS batch command file to switch between environments:
   copying back and forth between environment repositories, setting up
   forte.ini to change FORTE_NS_ADDRESS, and then invoking the proper
   client partition (ftexec).
2. Duplicate the necessary services among the environments.
But these two options have many drawbacks in terms of system
management (option 1), performance (option 2) and others.
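The forte.ini rewrite in option 1 can be sketched as a small helper that a batch step could call before launching ftexec. The ini layout shown is a minimal assumption; a real forte.ini has more sections and settings.

```python
def switch_environment(ini_text, ns_address):
    """Return ini_text with its FORTE_NS_ADDRESS line replaced."""
    out = []
    for line in ini_text.splitlines():
        if line.strip().upper().startswith("FORTE_NS_ADDRESS"):
            out.append(f"FORTE_NS_ADDRESS={ns_address}")  # point at the new env
        else:
            out.append(line)                               # keep everything else
    return "\n".join(out)

# Hypothetical two-line forte.ini, switched from site A's name server to site B's.
ini = "FORTE_ROOT=c:\\forte\nFORTE_NS_ADDRESS=siteA:5000\n"
switched = switch_environment(ini, "siteB:5000")
```

The same idea works from a DOS batch file with a helper executable; the essential step is just rewriting FORTE_NS_ADDRESS before the client partition starts.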
Does anybody have a good idea how to implement this? Any suggestion would be
appreciated.
Inho Choi, Daou Tech., Inc.
email: [email protected]
phone: +82-2-3450-4696 -
RE: Production Environment Definition
Brad,
We use connected environments so that we do not have a single point of
failure.
We use multiple environments and connect them together in a star topology
for reliability of service. Our servers (23 in total) sit out at branches
in the back of beyond and the WAN connections between the servers are
unreliable. One needs a reliable connection to the Name Service which sits
on each Environment Manager. We have thus created 23 connected
environments with an Environment Manager on each LAN. Connected
environments are still a bit buggy but Tech Support is currently working on
fixing the last of the problems. We are still on ver 2H15 for this reason.
Disadvantages of this topology are that making distributions takes a long
time, because referenced partitioning cannot be scripted in fscript and
econsole only connects to one environment at a time.
There is a Forté consultant in Denver called Pieter Pretorius who has had a
lot of experience with our connected environments. It may be worth
chatting to him.
Regards,
Richard Stobart
Technical Consultant for Forté
E-mail [email protected]
Quick-mail: [email protected]
Voice: (+ 27 83) 269 1942
(+27 11) 456 2238
Fax: (+ 27 83) 8269 1942
-----Original Message-----
From: Brad Wells [SMTP:[email protected]]
Sent: Tuesday, February 10, 1998 11:52 PM
To: 'Forte Users - Sage'
Subject: Production Environment Definition
Hello again,
We are just starting to look at what it will take to set up a production
Forte environment. I have some general questions regarding
considerations that may affect the environment definition and thought
maybe some of the more experienced users could share some thoughts on the
following:
1) What factors lead to the creation of multiple production environments?
a. How many environments should you use in a production situation?
b. Do people create separate environments for separate business units?
c. Are there performance improvements to be had by restricting the
number of server and client nodes included in a single environment?
d. How do the performance benefits of multiple environments compare to
the additional complexity of managing and maintaining multiple connected
environments?
The initial need is for an environment that will service approximately 50
clients and contain a couple of server nodes (database and service
related). However, as the environment grows, it could easily grow to a
size of 600 clients encompassing approximately 15-20 server nodes.
At this point in time, there is no need for the failover support of
connected environments, but this is something we will need to add as the
environment absorbs applications with high reliability needs. Should the
environments be setup and connected right away or can this be easily
added on an "as needed" basis? What other recommendations would you
make?
Has anyone taken advantage of Forte consulting services in defining the
production environment? Were you satisfied with the results of the
service?
Thanks.
Bradley Wells
[email protected]
Strong Capital Management, Inc
http://www.strong-funds.com
On Tue, 10 Feb 98 13:52:00 PST Brad Wells <[email protected]>
writes:
At this point in time, there is no need for the failover support of
connected environments, but this is something we will need to add as the
environment absorbs applications with high reliability needs. Should the
environments be set up and connected right away, or can this be easily
added on an "as needed" basis? What other recommendations would you
make?
From the Forte Systems Management point of view, you can add them "as
needed" fairly easily.
Now from the application source code point of view, implementing Fail/Over
support is a different story... You will need to check your SO's dialog
durations, handle DistributedAccessExceptions, "warm up" your distributed
references for F/O, design a solution for restoring global transient data,
do lots of testing, etc.
So implementing Fail/Over is not only related to systems-management issues;
it can have some influence on your application(s) source code.
Hope this helps,
Vincent Figari
You don't need to buy Internet access to use free Internet e-mail.
Get completely free e-mail from Juno at http://www.juno.com
Or call Juno at (800) 654-JUNO [654-5866] -
Hi everybody,
I have a little problem with a distributed reference partition.
We work with Forte 30L2 and some connected environments, each
corresponding to a node (a server):
1. ENV-DEVELOP: environment that contains the repositories in which
developers are coding applications.
2. ENV-TEST: environment that contains repositories where some users and
the help desk test our applications.
3. ENV-DISTRIB: super-environment that contains the repositories from
which I distribute, then deploy, the versions for each connected
environment.
4. ENV-PRODUCTION: environment which contains some SOs that communicate
with our mainframes.
5-xx. ENVxx: our remote environments which use our distributed
applications.
The normal flow of our development is:
The developers work in ENV-DEVELOP.
Once done, I export a pex or cex into ENV-TEST.
Once OK, I export the pex or cex into ENV-DISTRIB, from where I distribute
the final applications into the ENVxx environments.
Evidently, the versions differ between the steps of this flow.
The only way we can communicate with our mainframe is via ENV-PRODUCTION,
where I have sockets only available from this server. On this server I
have one SO which comes from ENV-DISTRIB (the only way I can distribute
the app).
Let's say that this reference partition plan's SO name is
MF_central.MF_online(CL1).
Now, my problem arises when I have to update and test this reference
partition with the different versions of our client applications
(different environments and repositories).
What I mean is:
Install a new version of MF_central.MF_online(CL2) on ENV-PRODUCTION (from
ENV-DISTRIB).
The source of this plan is the same in ENV-DEVELOP, ENV-TEST and
ENV-DISTRIB.
But I cannot communicate... The scope of this class is not the same.
The only way to run the client app properly is when I distribute the
client app in the same environment from which I've distributed the
reference partition. The goal is to join this with all the versions of our
application.
Thank you very much for your knowledge!
You can't do this with a trigger. And you really, really don't want to.
As you've discovered, Oracle performs a number of checks to determine whether an insert is valid before any triggers are executed. It is going to discover that there is no appropriate partition before your trigger runs. Even if you somehow worked around that problem, it's likely that you'd end up in a rather problematic position from a lock standpoint since you'd be trying to do DDL on the object in the autonomous transaction while doing an insert likely from a stored procedure that would itself be made invalid because of the DDL in another-- that's quite likely to cause an error anyway. And that's before we get into the effects of doing DDL on other sessions running at the same time.
Why do you believe that you need to add partitions dynamically? If you are doing ETL into a data warehouse, it makes sense to add any partitions you need at the beginning of the load before you start inserting the data, not in a trigger. If this is some sort of OLTP application, it doesn't make sense to do DDL while the application is running. I could see potentially having a default partition and then having a nightly job that does a split partition to add whatever list partitions you need but that would strike me as a corner case at best. If you really don't know what the possible values are and users are constantly creating new ones, list partitioning seems like a poor choice to me-- wouldn't a hash partition make more sense?
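Justin's "add the partitions at the beginning of the load" suggestion can be sketched as a small helper that emits the missing ADD PARTITION statements before any rows are inserted. The table, the value set, and the partition-naming convention are hypothetical; the DDL string follows Oracle's ALTER TABLE ... ADD PARTITION syntax for list partitioning.

```python
def partition_ddl(table, incoming_values, existing_values):
    """One ADD PARTITION statement per list value not yet covered."""
    stmts = []
    for value in sorted(set(incoming_values) - set(existing_values)):
        stmts.append(
            f"ALTER TABLE {table} ADD PARTITION p_{value.lower()} "
            f"VALUES ('{value}')"
        )
    return stmts

# Before the ETL insert: EMEA already has a partition, APAC does not yet.
ddl = partition_ddl("sales", ["EMEA", "APAC", "EMEA"], ["EMEA"])
# Execute each statement (e.g. via a DB driver) before loading the rows.
```

Running this as a distinct pre-load step avoids exactly the problem described above: no DDL happens inside the transaction that does the inserts.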
Justin -
BE5k to BE6k Migration tar file fails
Hi, we are upgrading from a CUCM BE5k to a BE6k (CUCM 9.1.2).
Our existing server is an HP MCS server and we purchased the Cisco UCS C220 server. We are following the below guide, which outlines creating the server and then importing/exporting the tar files from the old server into the new CUCM. Because this is a new server, I'm bringing the new one up in parallel and then I will cut over.
When I run the import, it fails. Attached is the job scheduler failure. The log files have some info; however, how much do I need to manipulate this info, if I can? Has anyone run into this issue with the TAR files failing?
Thanks,
Mike
There is no migration path for CUCM off of a BE5k. You must build the new CUCM cluster from scratch. If you're patient, you can use BAT Import/Export to carry over most of the data; however, it'll require a lot of massaging in Excel as the columns will be different between the export from your older version and the import on 9.1(2). Note that you can carry CXN mailboxes over using COBRAS, but you must be on CXN 7.x or newer.
If you have installed Cisco Unified Communications Manager Business Edition 5000 on an MCS-7828 server, and you decide that you need to migrate to separate Cisco Unified Communications Manager and Cisco Unity Connection environments for increased scalability and capacity, you can reuse that MCS-7828 server to run Cisco Unified Communications Manager in a MCS-7825 cluster. Although you can reuse the server, you must reenter your data on the server manually. You must also obtain another server to run Cisco Unity Connection.
http://www.cisco.com/en/US/docs/voice_ip_comm/cucmbe/install/8_6_1/install/cmins861.html#wp795012
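The column "massaging" step mentioned above can be sketched as a simple header remap between the old export and the new import format. The column names in COLUMN_MAP below are placeholders; the real headers differ by version and must be taken from the actual BAT templates.

```python
import csv
import io

# Hypothetical mapping: old-export header -> header the new import expects.
COLUMN_MAP = {"Device Name": "DEVICE NAME", "Directory Number": "DN"}

def remap(export_text):
    """Rewrite a CSV export so its columns match the import template."""
    reader = csv.DictReader(io.StringIO(export_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow({new: row[old] for old, new in COLUMN_MAP.items()})
    return out.getvalue()

converted = remap("Device Name,Directory Number\nSEP001122334455,2001\n")
```

Scripting the remap beats hand-editing in Excel once the file has more than a handful of rows, and it makes the column mapping auditable.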
Please remember to rate helpful responses and identify helpful or correct answers. -
Referencing Service Objects after FailOver
I have a service object Manager1SO in partition1, that calls a Start method on
another service object WorkerSO in partition3.
Manager1SO then monitors WorkerSO by registering for the RemoteAccessEvent on
WorkerSO.
When partition2 is brought offline, Manager1SO catches the RemoteAccessEvent on
WorkerSO successfully, and calls the Start method on WorkerSO again.
This seems to work a few times in a single environment, but after a while the
call to the Start method seems to hang.
When I attempt this kind of processing on 2 connected environments using
failover, the Manager1SO in partition1 in the environment on which WorkerSO
failed from cannot reference WorkerSO at all (hangs).
The Manager2SO in partition2 in the environment on which WorkerSO has failed
over to, references it okay i.e. the Start method completes.
If I restart Manager1SO, it then references WorkerSO in its environment rather
than the environment it failed over to.
I know this is very light on information, but any help would be appreciated.
Regards,
Moris Mihailidis
Consulting & Technology Department
CSC
570 St. Kilda Road, Melbourne VIC 3004
Ph: 61-3-95364675 Email: [email protected] -
Dear Forte-users,
here is a tricky question.
I'm checking the Forte' functionality that allows you to put multiple
addresses in FORTE_NS_ADDRESS, so that a client can connect to an
alternative environment in case of failover.
I have deployed my application in two alternative environments, called
AEnv and BEnv respectively.
I've taken care to deploy the application in exactly the same way in
the two environments, that is, with the same partition numbering, so that
the application is deployed correctly to the client.
I've started two environment managers:
start_nodemgr -e AEnv -fns srv1:5000 -fnd srv1Node
start_nodemgr -e BEnv -fns srv2:6000 -fnd srv2Node
onto two different machines.
I've set my client FORTE_NS_ADDRESS to srv1:5000;srv2:6000 so that when
srv1 crashes, my client "should" automatically connect to srv2.
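As a concrete illustration of that client-side setting (the value is taken from the message above; shown here in sh syntax, whereas on the Windows 95 client it would be a SET statement):

```shell
# Semicolon-separated list of name servers the client tries at startup,
# in order: srv1 first, then srv2 if srv1 cannot be reached.
FORTE_NS_ADDRESS="srv1:5000;srv2:6000"
export FORTE_NS_ADDRESS
```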
My application consists of three partitions: a client partition and two
server partitions. The two server partitions are replicated for failover
and are distributed to two different logical nodes. In environment AEnv
the two logical nodes correspond to two different machines, whilst in
BEnv the two logical nodes are on the same physical machine.
The platforms in use are: PC client (Windows 95) and AIX servers.
After all these setup operations, I started my tests.
To simulate a hardware crash of srv1, I killed all Forte processes
(ftexec and all nodemgr) while my application was running, then I made a
new request to a server partition.
What I expected from Forte was that my application would connect to the
alternative environment and that the server partitions would be
autostarted in order to satisfy my client request.
What happens instead is that I receive a DistributedAccessException and
I have to exit the application. If I then run the application again,
this time Forte correctly connects to BEnv and autostarts the server
partitions.
I've also tried starting the server partitions in BEnv manually from
econsole, but with the same result.
In order to understand this behaviour, I've set the trc:do:35 flag.
Reading the trace messages, I've seen that on a fresh execution of the
application there is at a certain point an "InitiateNsBind", a
"ClientCreateEnvironment" and some "RegisterPartition" messages. In the
trace generated in the DistributedAccessException case, I noted that
Forte correctly initiates a bind to BEnv, but after that there is a "Got
NAMESERVERAVAIL event" message and no "ClientCreateEnvironment" or
"RegisterPartition" messages.
So I think it tries to use old references in the newly connected
environment.
Thanks in advance for reading to the end!
If anyone has any idea or opinion, please let me know.
Regards,
Cristina.
Thank you, Don.
I know that I have to handle the exception in the case of transaction or
session duration, and I did in fact handle the DistributedAccessException.
I think my problem could be connected to what you said toward the end:
Specifying multiple name server addresses in FORTE_NS_ADDRESS only applies
to initial startup by the client. If the primary name server can't
be found, the client will try the other addresses in FORTE_NS_ADDRESS.
Once the client is connected, it uses the environment search path for
failover.
because I haven't used the search path.
And
Fourth - have you "warmed up" the application by talking to all service
objects in all partitions? Doing this will ensure that the client has a
list of all service objects in all partitions, in all connected
environments.
Yes, my request to the server partition involves all Service Objects, so my
client has a list of all of them.
I will try setting the environment search path.
Greetings,
Cristina.
[email protected] on 02/09/97 18.16.19
To: Cristina Tomacelli/CSI/IT
cc: [email protected]
Subject: Re: Environment Failover
Cristina,
I'm assuming that you have connected the two environments. Keep in mind,
when connecting environments, that one environment is the "master", and
others are "subordinates." Thus, if Aenv was the master environment, you
should connect Benv to it.
Secondly, have you set the environment search path? For example:
@AEnv:@BEnv
Third, unless you're using message dialog duration, you will either have to
catch the AbortException (for transaction duration) or the
RemoteAccessException (for session duration). If you receive either of these
exceptions, you will have to retry your message.
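That retry could be sketched in TOOL roughly as follows (a sketch only: the service object and method names are made up, and a single retry is shown for simplicity; this is the session-duration case, where RemoteAccessException is raised):

```
-- Sketch: retry a service-object call after a failover-related
-- exception. MyServiceSO and DoWork are illustrative names only.
-- For transaction duration, catch AbortException instead.
begin
    MyServiceSO.DoWork();
exception
    when e : RemoteAccessException do
        -- The first call failed because its partition went away;
        -- retrying gives Forte a chance to re-resolve the reference
        -- in the failover environment.
        MyServiceSO.DoWork();
end;
```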
Fourth - have you "warmed up" the application by talking to all service
objects in all partitions? Doing this will ensure that the client has a
list of all service objects in all partitions, in all connected
environments.
Specifying multiple name server addresses in FORTE_NS_ADDRESS only applies
to initial startup by the client. If the primary name server can't be
found, the client will try the other addresses in FORTE_NS_ADDRESS. Once
the client is connected, it uses the environment search path for failover.
Hope this helps,
Don
At 04:34 PM 9/1/97 +0100, Cristina Tomacelli wrote: