Referencing Service Objects after FailOver
I have a service object, Manager1SO, in partition1 that calls a Start method on
another service object, WorkerSO, in partition3.
Manager1SO then monitors WorkerSO by registering for the RemoteAccessEvent on
WorkerSO.
When partition2 is brought offline, Manager1SO catches the RemoteAccessEvent on
WorkerSO successfully and calls the Start method on WorkerSO again.
This works a few times in a single environment, but after a while the call to
the Start method hangs.
When I attempt this kind of processing on 2 connected environments using
failover, the Manager1SO in partition1 of the environment that WorkerSO failed
over from cannot reference WorkerSO at all (it hangs).
The Manager2SO in partition2 of the environment that WorkerSO has failed over
to references it fine, i.e. the Start method completes.
If I restart Manager1SO, it then references WorkerSO in its own environment
rather than the environment it failed over to.
I know this is very light on information, but any help would be appreciated.
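Not a Forte-specific answer, but the hang described above can be made survivable by bounding the remote call with a timeout. Here is a minimal Python sketch of the pattern (the name `call_with_timeout` is mine, not any Forte API); the TOOL analogue would be to issue the Start call from a separate task and abandon it after a deadline:

```python
import concurrent.futures

def call_with_timeout(fn, timeout_s=5.0, retries=3):
    """Invoke fn in a worker thread; give up if a call hangs.

    A hung remote call then surfaces as a TimeoutError instead of
    blocking the calling manager indefinitely.
    """
    for attempt in range(retries):
        # Fresh executor per attempt so a hung worker from a previous
        # try cannot queue-block the next one.
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            pass  # this attempt hung; abandon it and retry
        finally:
            pool.shutdown(wait=False)
    raise TimeoutError("call did not complete after %d attempts" % retries)
```

With this in place, a WorkerSO that never comes back produces a visible error the manager can react to, rather than a silent hang.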
Regards,
Moris Mihailidis
Consulting & Technology Department
CSC
570 St. Kilda Road, Melbourne VIC 3004
Ph: 61-3-95364675 Email: mmihailicsc.com.au
Similar Messages
-
Hey guys,
I am pretty sure my subject is kinda confusing. Sorry about that. Here is what happened.
1. 4510R with Supervisor V 1000BaseX: switched over to the standby Sup, then reseated the formerly active Sup; once the reseat completed, switched again to get the reseated Sup up and running as the active Sup.
2. A simple maintenance that was supposed to cause no outage, and it did not cause any outage.
3. However, what I did not notice was that even though the voice vlan was configured as 2353, the phones were using vlan 453.
4. The change had been made 2 weeks prior to this maintenance: the voice vlans were previously 453 and were all changed to 2353. Configs were saved.
5. However, after the maintenance, the running config showed voice vlan 2353, but checking the mac addresses on the interfaces showed them in vlan 453.
6. The fix was to remove the config and re-add it; that fixed it.
Has anyone else experienced this issue? What really happened there?
Software version: 15.0(2)SG5
#sh module
Chassis Type : WS-C4510R
Power consumed by backplane : 40 Watts
Mod Ports Card Type Model
---+-----+--------------------------------------+------------------+-----------
1 2 Supervisor V 1000BaseX (GBIC) WS-X4516
2 2 Supervisor V 1000BaseX (GBIC) WS-X4516
3 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V
5 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V
6 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V
7 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V
8 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V
9 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V
Configs were saved many times prior to the maintenance; I did a "write mem".
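For what it's worth, the remove-and-re-add fix described in step 6 would look roughly like this on an affected access port (the interface name here is illustrative, not from the poster's config):

```
conf t
 interface GigabitEthernet3/1
  no switchport voice vlan
  switchport voice vlan 2353
 end
write memory
```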
-
Service Object Init References
Has anyone come up with a good workaround to allow Service Objects to
reference other service objects in their init methods or during application
startup? Since we can't specify the order in which Service Objects start,
is there a way we can execute some code once all Service Objects have come
online?
Will this idea work?
Start a task in the init method that loops until the referenced service object
is not NIL, then reference the needed SO. For example:
while true do
  if LogMgrSO = NIL then
    task.delay(100);  // not there yet; wait 100 ms and poll again
  else
    exit;             // SO reference is non-NIL; safe to use it
  end if;
end while;
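The same polling idea, sketched in Python with two additions the TOOL loop above lacks: an overall deadline so the task cannot spin forever, and a readiness probe, since (as discussed later in this thread) a non-NIL reference only proves the SO is instantiated, not that it has finished starting. All names are illustrative:

```python
import time

def wait_for_service(get_service, is_ready, timeout_s=30.0, poll_s=0.1):
    """Poll until the service reference is non-None AND reports ready.

    get_service() returns the service reference or None; is_ready(svc)
    is whatever application-level "started properly" check you have.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        svc = get_service()
        if svc is not None and is_ready(svc):
            return svc
        time.sleep(poll_s)
    raise TimeoutError("service did not become ready in %.1fs" % timeout_s)
```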
Eric Rasmussen
Project Manager
Online Resources & Communications Corporation
(703)394-5128
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
Hi,
Sorry to answer so late! I left some TOOL code on the mailing list on that
subject, maybe a year ago.
You may have to consider several different cases:
1) Is a service started just because it is instantiated (<> NIL)?
2) Forte ensures that the first services to be started in a partition are the
DBSession and DBResource managers.
3) A local service and a distributed service are not treated in exactly the
same way.
4) The init() method runs in a specific way: the allocations are made at the
end.
1) When a service is not NIL, that only means it is instantiated. Your
initialization sequence may not have ended, and the service is not guaranteed
to have started properly. This matters if you need to load a cache, for
instance. I would recommend testing that a service is ON (for DBSessions, for
instance) and adding (if possible) a state attribute to determine that a
service has started properly.
2) This holds only inside the same partition on the same machine. If you have
to synchronize with resources external to the partition, you will need to
treat them like other services.
3) A local service will be NIL and then instantiated; its classname will be
the same as in the workshop. A distributed service (more precisely, a service
that is not in the same partition) will have a different classname
(Classname+Proxy). So the external service's proxy may be instantiated while
the SO itself is not, and you will get a DistributedAccessException.
4) The init() method may not be the best location for synchronization if you
need, for instance, an array to store your dependencies. I would instead use a
start task on an InitService() method to avoid that problem.
Options:
- A dependency could be optional: after a certain number of tries you can
abort synchronization on that service.
- You can use synchronization on "cold" and "hot" startup of services.
- You can develop a service agent that has instruments to see dependencies
and states, and commands to stop/start services.
- The delay you use should be different for each service you are waiting for.
- The order of dependencies matters (put mandatory dependencies first, then
optional ones).
- A service is not only a Service Object; it could also be just a reference
to an instance through a container, for example.
- Some kind of autoStart: should I start all my services at the beginning of
the application, or could I start some services at the first call? The latter
works if you use your own application protocol and your services sit inside
service managers.
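The options above (mandatory vs. optional dependencies, per-service delays, ordering) can be condensed into a small startup synchronizer. This Python sketch is illustrative only; `resolve` stands in for whatever naming/lookup mechanism the environment provides:

```python
import time

def start_with_dependencies(deps, resolve, timeout_s=10.0, poll_s=0.05):
    """Wait for named dependencies before starting a service.

    deps is a list of (name, mandatory) pairs; mandatory ones are
    checked first, as suggested above.  resolve(name) returns the
    dependency or None.  A mandatory dependency that never appears
    raises; an optional one is skipped after its timeout.
    """
    resolved = {}
    for name, mandatory in sorted(deps, key=lambda d: not d[1]):
        deadline = time.monotonic() + timeout_s
        svc = resolve(name)
        while svc is None and time.monotonic() < deadline:
            time.sleep(poll_s)
            svc = resolve(name)
        if svc is None:
            if mandatory:
                raise RuntimeError("mandatory dependency %r not available" % name)
            continue  # optional: abort synchronization on this one
        resolved[name] = svc
    return resolved
```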
Remarks:
Those concepts have been tested successfully on a framework from R2 to R3 of
Forte. With them, you can start the application without knowing whether the
database is running; the application will wait for the database to be mounted.
Another advantage of the synchronization is that the naming of the services is
resolved at the beginning of the application. You can then stop the
environment manager and the application will still work (for the clients that
were already started, of course). You can also imagine transferring your
partitions from one node to another at run-time.
Hope this helps,
Daniel Nguyen
Freelance Forte Consultant
Stephen McHenry wrote:
>
At 11:04 AM 10/1/98 -0700, John Jamison wrote:
begin
  while true do
    begin
      ..attempt "remote" SO reference..
      exit;  // exits the "while true do" loop on success
    exception
      when e : UsageException do
        // same partition, SO not yet initialized: you get a NIL object exception
        task.errormgr.clear;
      when e : DistributedAccessException do  // or RemoteAccessException
        // SO in a different partition: you get this error instead
        task.errormgr.clear;
    end;
    // pause before retrying
    event loop
      aTimer : Timer = new(tickInterval = 5*1000);  // 5 seconds - adjust to taste
      aTimer.isActive = true;
      when aTimer.Tick do
        aTimer.isActive = false;
        exit;
    end event;
  end while;
end;
One of the problems I see with all of these "catch the exception and try
again" schemes is that they fail to take into account that the SO you are
calling may, in fact, never appear (due to some sort of problem, of course)
and then you never exit this loop. It's a "liveness" problem with this
approach. So, be sure to add some alternate way out after 1 minute (or
whatever your particular threshold is) and raise an exception yourself.
Always gotta think about what happens if something goes wrong... ;-)
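Stephen's liveness fix applied to John's loop, sketched in Python rather than TOOL: retry on the "not up yet" exceptions, but convert persistent failure into an explicit error once a deadline passes. All names here are illustrative:

```python
import time

class ServiceUnavailableError(Exception):
    pass

def call_until_up(attempt_call, retriable=(ConnectionError,),
                  timeout_s=60.0, delay_s=5.0):
    """Retry attempt_call() on retriable errors, but never forever.

    After timeout_s we stop catching and raise, so a service that never
    appears becomes a visible failure instead of a silent infinite loop.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            return attempt_call()
        except retriable as exc:
            if time.monotonic() >= deadline:
                raise ServiceUnavailableError(
                    "service still unreachable after %.0fs" % timeout_s) from exc
            time.sleep(delay_s)  # the 5-second timer from the TOOL version
```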
Stephen
Stephen McHenry | Advanced Software Tech
305 Vineyard Town Ctr #251, Morgan Hill, CA 95037, USA
[email protected] | (408) 776-2720 x210 | http://www.softi.com
Design Consulting | Training courses offered: Distributed Object-Oriented
Analysis & Design, Intro to Object Technology, Advanced OO Design
-
Using Failover for DB Manager Service Objects
Michelin Tire Corporation
At Michelin, we are trying to implement a failover
service object using Oracle 7.3 on RS6000 platform (AIX 4.1.4).
We understand that we need to use HACMP (Clustering) and
Oracle's parallel server. This way the DB Service objects
on two different computer nodes can access the same database.
Has anyone used this configuration? If so, have you had any problems?
and how well does it work?
We would appreciate any information on this subject.
Thanks in advance,
Thomas Sams
Tommy Sams wrote:
>
> [...]
At CSI, we have planned to use HACMP with Forte failover to provide a
high-availability architecture for one of our customers.
There is a lot to consider, some of it related to the HACMP configuration,
some to Forte mechanisms.
In particular, we use HACMP to manage the RDBMS backup, and Forte's
capabilities to deal with partition/environment/node-manager failover. We have
not taken Oracle Parallel Server into account at the moment (although it could
be a good solution), because we don't really need to access Oracle from 2
different nodes at the same time, but "just" to have a "realtime" DBMS backup
in case of a primary server fault.
The architecture we chose is based on three AIX servers (1 application
server, 1 main DBMS server, 1 backup server (Oracle failover, environment
failover)).
We started testing it using Forte 2.0.h, but we realized that more complete
functionality will be offered by the 3.0 KEEP_ALIVE features, so I can give
you more feedback in the near future.
What type of solution do you have in mind?
Regards
Fabrizio Barbero
Barbero Fabrizio
CSI-PIEMONTE
Cso Unione Sovietica 216
10134 Torino ITALY
tel: +39 11 3168515
fax: +39 11 3168212
e-mail: [email protected] -
Service Object events and LockMgr
Hi folks,
We're currently looking at strategies for dealing with the simultaneous
updates to the database from multiple clients (concurrency
management). That is when two (or more) clients load the same object to
edit it, then make different changes and save them to the database.
We have a copy of a Forté document (from the "Patterns" course, I
think) which describes three methods of dealing with this:
1) Lock the database table row as soon as a client selects it for editing,
and hold the lock until it is saved.
2) Immediately before 'saving' check that the database hasn't changed
(either by reading what's there before updating, or by using a huge
'where' clause that contains all unchanged fields)
3) The Forté "LockMgr" pattern, which uses a service object with notifier
proxies to allow locking and updating notification between the clients.
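Option 2 can be made cheap by carrying a version column instead of comparing every unchanged field. A small sketch using SQLite (the `customer` table and `version` column are assumptions for illustration, not from the original post):

```python
import sqlite3

def optimistic_update(conn, row_id, expected_version, new_name):
    """Option 2 above, using a version column instead of a huge WHERE clause.

    The UPDATE only matches if nobody changed the row since we read it;
    rowcount == 0 means another client got there first, so the caller
    should reload and retry (or tell the user).
    """
    cur = conn.execute(
        "UPDATE customer SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, row_id, expected_version))
    conn.commit()
    return cur.rowcount == 1
```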
Option 3 is obviously the most robust method, but it requires a fair
amount of coding and could also be a bottleneck for database reads and
writes.
But I have another option for which I was looking for opinions. What if
we had a "Change Event manager" which broadcast an event every time
a change is made to the database. Each business class would have its
own event. If the event had the object's primary key as a parameter, then
clients editing that particular object type could check to see if the object
currently on screen is the one that changed. That way you could disable
the 'save' until they had refreshed their on-screen data.
It's not particularly elegant, but it's reasonably simple to implement. It
also handles changes sent across our WAN from other database servers.
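The "Change Event manager" idea above, sketched in Python (class and method names are invented for illustration): each editor checks the broadcast primary key against the object it has on screen and disables Save on a match.

```python
class ChangeEventManager:
    """Broadcasts one change event per business class, keyed by primary key."""

    def __init__(self):
        self._subscribers = {}  # class name -> list of callbacks

    def subscribe(self, class_name, callback):
        self._subscribers.setdefault(class_name, []).append(callback)

    def broadcast_change(self, class_name, primary_key):
        for cb in self._subscribers.get(class_name, []):
            cb(primary_key)


class EditorSession:
    """A client editing one object of one business class."""

    def __init__(self, manager, class_name, primary_key):
        self.primary_key = primary_key
        self.save_enabled = True
        manager.subscribe(class_name, self._on_change)

    def _on_change(self, changed_pk):
        if changed_pk == self.primary_key:  # our on-screen object changed
            self.save_enabled = False       # force a refresh before saving
```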
But this option is only worthwhile if you can replicate the "Change Event
manager" SO and still register for an event on the client. Can clients
register for SO events and receive an event generated by any of the SO's
replicates? Or when you register for an SO's event do you register for
only one instance of the SO?
Thanks in advance for any answers.
Cheers,
Duncan Kinnear,
McCarthy and Associates, Email: [email protected]
PO Box 764, McLean Towers, Phone: +64 6 834 3360
Shakespeare Road, Napier, New Zealand. Fax: +64 6 834 3369
Providing Integrated Software to the Meat Processing Industry for over 10 years
Hi,
I just wonder exactly how this Lock Manager can be implemented. Do you mean
that you are going to cache every object that is instantiated from the
database? Or do you just cache the object id, primary key, etc.?
Frankly speaking, I wouldn't attempt to deal with this kind of concurrency
coding myself, as the database vendor has spent years of coding to do just
this.
Regards.
Dimitar Gospodinov wrote:
Hello Duncan,
Wednesday, July 28, 1999, 10:31:46 AM, you wrote:
DK> Hi folks,
DK> We're currently looking at strategies for dealing with the simultaneous
DK> updates to the database from multiple clients (concurrency management).
DK> [...]
I would recommend the following approach (assuming, of course, that you do
not have some special requirements :) ):
1. Have a LockManager that synchronizes all clients in their attempts to
modify/delete objects in your application.
2. Each client, when it attempts to modify/delete some object, must LOCK it
using the services provided by the LockManager.
3. The requested operation can be performed only after successful locking.
4. If a lock cannot be obtained (for example, if the object is already locked
by some other client), then the operation is aborted.
The details of this pattern depend on your needs. :)
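A minimal in-memory sketch of steps 1-4 in Python (not Forte code; a real LockManager SO would also need lease timeouts so a crashed client cannot hold locks forever):

```python
import threading

class LockManager:
    """Try-lock semantics per object id, shared by all clients."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._owners = {}  # object id -> client id

    def acquire(self, object_id, client_id):
        """Return True if client_id now holds the lock, else False (step 4)."""
        with self._mutex:
            holder = self._owners.get(object_id)
            if holder is None or holder == client_id:
                self._owners[object_id] = client_id
                return True
            return False  # already locked by another client: abort

    def release(self, object_id, client_id):
        """Release only if client_id actually owns the lock."""
        with self._mutex:
            if self._owners.get(object_id) == client_id:
                del self._owners[object_id]
```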
Hope this helps.
Best regards,
Dimitar mailto:[email protected]
-
Server 2012 File Server Cluster Shadow Copies Disappear Some Time After Failover
Hello,
I've seen similar questions posted on here before however I have yet to find a solution that worked for us so I'm adding my process in hopes someone can point out where I went wrong.
The problem: After failover, shadow copies are only available for a short time on the secondary server. Before the task to create new shadow copies happens the shadow copies are deleted. Failing back shows them missing on the primary server as
well when this happens.
We have a 2 node (hereafter server1 and server2) cluster with a quorum disk. There are 8 disk resources which are mapped to the cluster via iScsi. 4 of these disks are setup as storage and the other 4 are currently set up as shadow copy volumes
for their respective storage volume.
Previously we weren't using separate shadow copy volumes and seeing the same issue described in the topic title. I followed two other topics on here that seemed close and then setup the separate shadow copy volumes however it has yet to alleviate the
issue. These are the two other topics :
Topic 1: https://social.technet.microsoft.com/Forums/windowsserver/en-US/ba0d2568-53ac-4523-a49e-4e453d14627f/failover-cluster-server-file-server-role-is-clustered-shadow-copies-do-not-seem-to-travel-to?forum=winserverClustering
Topic 2: https://social.technet.microsoft.com/Forums/windowsserver/en-US/c884c31b-a50e-4c9d-96f3-119e347a61e8/shadow-copies-missing-after-failover-on-2008-r2-cluster
After reading both of those topics I did the following:
1) Add the 4 new volumes to the cluster for shadow copies
2) Made each storage volume dependent on it's shadow copy volume in FCM
3) Went to the currently active node directly and opened up "My Computer", I then went to the properties of each storage volume and set up shadow copies to go to the respective shadow copy volume drive letter with correct size for spacing, etc.
4) I then went back to FCM and right clicked on the corresponding storage volume and choose "Configure Shadow Copy" and set the schedule for 12:00 noon and 5:00 PM.
5) I noticed that on the nodes the task was created and that the task would failover between the nodes and appeared correct.
6) Everything appears to failover correctly, all volumes come up, drive letters are same, shadow copy storage settings are the same, and 4 scheduled tasks for shadow copy appear on the current node after failover.
Thinking everything was set up according to best practice, I did some testing by changing file contents throughout the day, making sure that previous versions were created as scheduled on server1. I then rebooted server1 to simulate failure. Server2 picked up the role within about 10 seconds and files were available. I checked, and I could still see previous versions after failover for the files that were created on server1. Unfortunately that didn't last: the next day before noon I was going to make more changes to files, to verify not only that we could see the shadow copies created while server1 owned the file server role, but also that the copies created on server2 would be visible on failback. I was disappointed to discover that the shadow copies were all gone, and failing back didn't produce them either.
Does anyone have any insight into this issue? I must be missing a switch somewhere or perhaps this isn't even possible with our cluster type based on this: http://technet.microsoft.com/en-us/library/cc779378%28v=ws.10%29.aspx
Now here's an interesting part: shadow copies on 1 of our 4 volumes have been retained from both nodes throughout the testing, but I can't figure out what makes it different. I do suspect that perhaps the "Disk #s" in Computer Management / Disk Management need to be the same between servers. For example, on server1 the disk # for cluster volume 1 might be "Disk4" while on server2 the same volume might be "Disk7"; however, I think operations like this and shadow copy are based on the disk GUID, so perhaps this shouldn't matter.
Edit: I checked the disk numbers and see no correlation between what I'm seeing in shadow copy and what is happening to the numbers. All other items (quotas, etc.) fail over and work correctly despite these diffs:
Disk Numbers on Server 1:
Format: "shadow/storerelation volume = Disk Number"
aHome storage1 = 16
aShared storage2 = 09
sHome storage3 = 01
sShared storage4 = 04
aHome shadow1 = 10
aShared shadow2 = 11
sHome shadow3 = 02
sShared shadow4 = 05
Disk numbers on Server 2:
aHome storage1 = 16 (SAME)
aShared storage2 = 04 (DIFF)
sHome storage3 = 05 (DIFF)
sShared storage4 = 08 (DIFF)
aHome shadow1 = 10 (SAME)
aShared shadow2 = 11 (SAME)
sHome shadow3 = 06 (DIFF)
sShared shadow4 = 09 (DIFF)
Thanks in advance for your assistance/guidance on this matter!
Hello Alex,
Thank you for your reply. I will go through your questions in order as best I can, though I'm not the backup expert here.
1) "Did you see any event ID when the VSS fail?
please offer us more information about your environment, such as what type backup you are using the soft ware based or hard ware VSS device."
I saw a number of events on inspection. Interestingly enough, the event ID 60 issues did not occur on the drive where shadow copies did remain after the two reboots. I'm putting my event notes in a code block to try to preserve formatting/readability.
I've written down events from both server 1 and 2 in this code block, documenting the first reboot causing the role to move to server 2 and then the second reboot going back to server 1:
JANUARY 2
9:34:20 PM - Server 1 - Event ID: 1074 - INFO - Source: User 32 - Standard reboot request from explorer.exe (Initiated by me)
9:34:21 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the running state."
9:34:21 PM - Server 1 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
\Device\HarddiskVolumeShadowCopy49
F:
T:
The locale specific resource for the desired message is not present"
9:34:21 PM - Server 1 - Event ID 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
\Device\HarddiskVolumeShadowCopy1
H:
V:
The locale specific resource for the desired message is not present"
***The above event repeats with only the number changing, drive letters stay same, citing VolumeShadowCopy# numbers 6, 13, 18, 22, 27, 32, 38, 41, 45, 51,
9:34:21 PM - Server 1 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
\Device\HarddiskVolumeShadowCopy4
E:
S:
The locale specific resource for the desired message is not present"
***The above event repeats with only the number changing, drive letters stay same, citing VolumeShadowCopy# numbers 5, 10, 19, 21, 25, 29, 37, 40, 46, 48, 48
9:34:28 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Legacy Network Service service entered the stopped state."
9:34:28 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the stopped state.""
9:34:29 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Client Service service entered the stopped state."
9:34:30 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Discovery Framework service entered the stopped state."
10:44:07 PM - Server 2 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the running state."
10:44:08 PM - Server 2 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Microsoft Software Shadow Copy Provider service entered the running state."
10:45:01 PM - Server 2 - Event ID: 48 - ERROR - Source: bxois - "Target failed to respond in time to a NOP request."
10:45:01 PM - Server 2 - Event ID: 20 - ERROR - Source: bxois - "Connection to the target was lost. The initiator will attempt to retry the connection."
10:45:01 PM - Server 2 - Event ID: 153 - WARN - Source: disk - "The IO operation at logical block address 0x146d2c580 for Disk 7 was retried."
10:45:03 PM - Server 2 - Event ID: 34 - INFO - Source: bxois - "A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name."
JANUARY 3
At around 2:30 I reboot Server 2, seeing that shadow copy was missing after previous failure. Here are the relevant events from the flip back to server 1.
2:30:34 PM - Server 2 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
\Device\HarddiskVolumeShadowCopy24
F:
T:
The locale specific resource for the desired message is not present"
2:30:34 PM - Server 2 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
\Device\HarddiskVolumeShadowCopy23
E:
S:
The locale specific resource for the desired message is not present"
We are using Symantec NetBackup. The client agent is installed on both server1 and 2. We're backing them up based on the complete drive letter for each storage volume (this makes recovery easier). I believe this is what you would call "software
based VSS". We don't have the infrastructure/setup to do hardware based snapshots. The drives reside on a compellent san mapped to the cluster via iScsi.
2) "Confirm the following registry is exist:
- HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VSS\Settings"
The key is there, however the DWORD value is not, would that mean that the
default value is being used at this point? -
Service objects inside libraries (WAS: Interfaces in Forte - has anyone used them?)
The following message is actually not about interfaces, but libraries:
> From: Jeanne Hesler <[email protected]>
> To: [email protected] <[email protected]>
> Date: Thursday, July 30, 1998 11:12 AM
> Subject: RE: Interfaces in Forte - has anyone used them?
>>
> Just to clarify a few things:
>>
1) Just to be 100% correct -- it is actually Libraries that are loaded and
not Interfaces. The distinction is important because a library could
potentially implement many interfaces (or provide many implementations for a
single interface).
2) The code in a Library may reference a service object, but it may not
define a service object. Of course any SO's referenced by the library
must already be known to the loading partition. It is OK to have code like
this in a library:
MySO.doSomething();
The documentation is a little vague on this point, but I have confirmed that
this is true through Tech Support and by experimentation.
Actually you CAN define and use service objects inside libraries
(compiled or interpreted), with two restrictions:
1) You cannot define two service objects in a library in different
projects and have one call the other. If you need that, both
service objects must be in the same project.
2) If a service object is defined and used only by the library (i.e., it is
never referenced directly by application code), then in order to be able to
partition the application you will need to create a dummy method inside the
application which references this service object (you do not need
to execute this method - it just has to be in the code).
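Restriction 2 might be satisfied by something as small as the following TOOL sketch (LibraryLogSO, AppAnchor, and Ping are hypothetical names; the method never has to run):

```
-- Dummy method defined in the application, not the library.  Its only
-- purpose is to make the library-defined service object visible to the
-- partitioning workshop so the application can be partitioned.
-- Ping() stands in for any existing method on the SO's class.
method AppAnchor.ReferenceLibrarySOs
begin
  LibraryLogSO.Ping();
end method;
```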
WBR,
Nickolay Sakharov.
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
The way stateful Web services are currently handled is through the use of cookies ... once your stub invokes a stateful Web service, a cookie is created which routes subsequent requests back to the Web service.
In your scenario, the problem is that one client has created Web service 1, and now Web service 2 would like to be able to use that state. That really isn't possible unless you engineer a solution yourself ... you would need to somehow set the cookie on your Web service 2 client to that of the original client to Web service 1. State tends to be based around an individual client rather than multiple clients sharing that state.
There are numerous ways around this but you would be engineering around the issue ... the easiest is to write the state out somewhere so that it can be shared.
This section of the doc gives a brief overview:
http://download-west.oracle.com/docs/cd/A97688_06/generic.903/b10004/javaservices.htm
Lastly be aware there is a bug with timeouts in stateful Web services in Oracle9iAS 9.0.3 that has been fixed in 9.0.4. I can't find the thread here that documents it but when I track it down I will post the link so you can see the workaround.
Mike. -
RE: (forte-users) User-visible service object
This solution will cause network traffic for all method calls on the
environment visible SO. This overhead is not incurred when calling methods
on a user visible SO in the same partition. Depending on the frequency of
calls and the volume of data being passed in and out, this could be
significant overhead.
We have successfully implemented the following.
Create a second User Visible SO based on the same class. Then you will be
able to partition the one SO into the client partition and the second into
the server partition.
For example, assume the underlying class is named MessageService then define
your SO's as
ClientMessageService -> MessageService
ServerMessageService -> MessageService
Andy
-----Original Message-----
From: Amin, Kamran [mailto:kamran.aminlendware.com]
Sent: Wednesday, August 23, 2000 10:17 PM
To: 'Duncan Kinnear'; kamranaminyahoo.com
Subject: RE: (forte-users) User-visible service
object
Duncan,
Make the user visible service object to an
environment visible
service object. This way the client and any service object
on the server
can access it.
ka
-----Original Message-----
From: Duncan Kinnear [mailto:duncanmccarthy.co.nz]
Sent: Wednesday, August 23, 2000 7:47 PM
To: kamranaminyahoo.com
Subject: (forte-users) User-visible service object
Hi folks!
We've got a user-visible service object that handles initialisation of and
access to the message catalog.
This works well on the client, but we would like to use the same
mechanism (and even the same service object) on the server so that
service objects on the server have access to their message catalog on
the server.
I was hoping that if we referenced this user-visible service object in
both the client and the server code, it would partition a copy in
each of the client and server partitions. However, we cannot get this
user-visible service object duplicated on the server. If we drag and drop
it onto the server partition in the partition workshop, it disappears from
the client partition!
Anybody got any idea how we could do this?
Cheers,
Duncan Kinnear,
McCarthy and Associates, Email: duncanMcCarthy.co.nz
PO Box 764, McLean Towers, Phone: +64 6 834 3360
Shakespeare Road, Napier, New Zealand. Fax: +64 6 834 3369
Providing Integrated Software to the Meat Processing Industry for over 10 years
I would try going to the "lowest common denominator" between WindowsNT and
Windows95 - DOS. Both windowing OS's sort of have their roots in DOS, or at
least both are capable of opening a DOS session.
Therefore, from a DOS prompt type "set" to view the environment variables for
both OS types. Look for a common variable between the two that stores the
userID. If you can find one of these your application will be that much more
portable between these two Windows mutations.
I used "set" on my NT and found my userID assigned to a few variables. I haven't
done this on a Windows95 machine in quite some time, but if the machine is on
the network it should have at least one environment variable with the userID.
I'm just guessing that DOS has a variable to store the userID that will be
common to both machines.
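In TOOL, Kelsey's suggestion might look like this (a sketch only; which variable names a given Windows 95/98 machine actually defines is an assumption you would verify with "set", and UserUtil is an invented class name):

```
-- Try the NT variable first, then a possible networked-95/98 fallback.
method UserUtil.GetUserName : string
begin
  value : string = task.Part.OperatingSystem.GetEnv('USERNAME');  -- NT
  if value = '' then
    value = task.Part.OperatingSystem.GetEnv('USER');  -- possible 95/98 variable
  end if;
  return value;
end method;
```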
Good luck....
Kelsey Petrychyn
SaskTel Technical Analyst
ITM - Technology Solutions - Distributed Computing
Tel (306) 777 - 4906, Fax (306) 359 - 0857
Internet:kelsey.petrychynSasktel.sk.ca
Quality is not job 1. It is the only job!
"Olivier Andrieux" <oandrieuxaxialog.fr> on 07/19/2000 09:12:41 AM
To: forte-userslists.xpedior.com
cc: (bcc: Kelsey Petrychyn/SaskTel/CA)
Subject: (forte-users) user name
Hi
I use this command to catch the username:
task.part.operatingsystem.getenv('username')
with NT, there is no problem
but with windows95 or 98 the command doesn't find the username.
Thanks in advance.
Olivier Andrieux
Axialog
Lille
-
Re: Events in Service Objects
At 12:32 AM 8/20/97 EDT, Gerard R. Connolly wrote:
>
The default partitioning in Forte does some strange things, in particular its
placement of user-visible service objects in the client partition no matter
where they are referenced and used.
Do you have any sense about whether this relates to how well-structured the
projects are when you go to do the partitioning? I.e., if you had a tool
which helped in analyzing the way in which projects referenced each other,
and could use this to create an ideal project structure before you did the
partitioning, would the default partitioning be more likely to conform to
the desired end partitioning?
=========================================================================
Thomas Mercer Hursh, Ph.D email: [email protected]
Computing Integrity, Inc. sales: 510-233-9329
550 Casey Drive - Cypress Point support: 510-233-9327
Point Richmond, CA 94801-3751 fax: 510-233-6950 -
Hi Andrew...
Service Objects, are, in essence, the central concept of any Forte
application, so I think that every participant in this forum can
probably offer some type of insight on using Forte Service Objects in
their application :).
Any type of shared application functionality is a candidate for
inclusion in a service object. The definition given on page 156 of the
Forte Programming Guide states that "[a] service object is a named
object that represents an existing external resource, a Forte shared
business service that is shared by multiple users, or a service that is
replicated to provide failover or load-balancing. The service object
contains information needed by the service as well as operations that
the service can perform."
Service objects are, imho, the most important concept for building a
successful Forte application, and by far the least well understood. I
suggest spending a good amount of time familiarizing yourself with the
Service Object concept, and then understanding the Service Object
properties (dialog duration, visibility, replication, search paths,
etc.) so that you can build a scalable, robust, and fault-tolerant
system to meet your user requirements. Forte is an incredibly powerful
tool when you understand all of the possibilities you have, especially
those related to Service Objects.
In order to get a better understanding of Forte Service objects and how
they relate to the design and development of your particular system, I
highly suggest participating in the Forte Object-Oriented Analysis and
Design (OOAD) course. You can get more information on the course and
register on-line by visiting the Forte Education website:
http://www.forte.com/Educate/index.htm. Additionally, you can take a
look at the Forte manuals, especially Chapter 8 of the Forte Programming
Guide and throughout the Guide to the Forte Workshops manual.
To answer one of your other questions... You can pass references to
objects from a service object back to a client in Forte (you can also,
for that matter, pass copies of objects - it depends upon your needs).
Again, to get a better understanding of these concepts, I suggest taking
the Forte OOAD class.
I hope this helps! Please let me know if you have any further questions
- I'd be happy to help.
-Katie
Andrew Lowther wrote:
>
We are currently in the process of rearchitecting our software systems
around Forte.
Could anyone tell me what experiences they have had with building a
system using Forte Service objects in a multi-tiered system?
It seems to us that these are intended to be used as high-value
facade-like interfaces which serve as an entry point to the underlying
business object model. Is this correct?
Can we pass a remote object reference back to a client for its
subsequent use? If not, does this mean that we have to build a local,
client-based object model to hold the data returned from the service
object methods?
Any other assistance you can give will be very much appreciated.
Thanks
Andrew Lowther
--
Katie Carty
Senior Consultant
Forte Software, Inc.
http://www.forte.com
4801 Woodway Drive, Suite 300E
Houston, Texas 77056
vmail: (510) 986-3802
email: [email protected]
**************************************************
Andrew,
We at Per-Se Technologies have developed an approach to alleviate many
"pains" with using service objects. Some things you will soon discover is
that although service objects provide fail-over, load balancing, etc., they
also,
1. Eat up valuable developer time because it takes time to repartition and
start partitions in development mode. In development mode (i.e., running
from the workshops), each developer gets their own copy of all partitions!
2. Limit the use of compiled libraries due to service object references in
TOOL code.
3. Consume valuable server resources because each developer has their own
copy of the partitions.
We have several alternatives to address all of the above problems. I am
currently working on converting a large application so that:
1. All developers share a single set of service objects/partitions. Each
developer doesn't have to wait while their copies of the partitions come
up. Therefore, development time is more fully utilized and server
resources are dramatically freed up.
2. Service objects are completely decoupled from an application.
Therefore, we can compile as much as possible.
You asked some other questions as well: You should always isolate SO
references as much as possible. We do this by using a facade. You can
pass a remote obj. reference to a client for future use.
Take care!
Dustin Breese
Supervising Technical Specialist
Per-Se Technologies
From: Andrew Lowther <[email protected]>
Date: Tue, 24 Feb 1998 16:24:31 -0000
Subject: Service Objects
-
Creating standby DB after failover
Hi,
I have performed a failover to my standby DB; now I need to re-create the standby DB for the new production.
But there is some confusion, because previously, in my production, db_name and db_unique_name were the same, say
test1. And the db_name and db_unique_name for the standby were test1 and test2, and I created the standby that way, using test1 and test2 in log_archive_config.
After the failover the scenario changed: the production now has two different values, db_name test1 and db_unique_name test2, and I need to create a standby from this. It makes me confused - how will I create the standby? What will be the db_unique_name for the new standby??
Please help me...
regards,
user8983130 wrote:
thanks.
we use db_unique_name in log_archive_config, OK???
and in log_archive_dest_state_2 we use the service name.. should it be necessary to be the same as the db_unique_name???
DB_NAME should be the same across the primary and physical standby databases.
For DB_UNIQUE_NAME, choose a different name for each database.
Service names - whatever you use, either in FAL_CLIENT/FAL_SERVER (or) DEST_n - have no relation to DB_NAME/DB_UNIQUE_NAME; it's just the service name, however you want to call it.
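To make the naming concrete: after the failover, both databases keep DB_NAME=test1 and only DB_UNIQUE_NAME differs. A possible sketch, assuming the old primary is rebuilt as the new standby and given test1 as its DB_UNIQUE_NAME (any unused name would do), and with test1_srv as a hypothetical net service name:

```sql
-- On both databases (DB_NAME=test1 everywhere):
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(test2,test1)';

-- On the new primary (DB_UNIQUE_NAME=test2), ship redo to the new standby:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=test1_srv ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=test1';
```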
HTH. -
Unplumb net0 in public network, the HA NFS service didn't failover
I have two nodes forming a 4.1 cluster. I set up the NFS service on the cluster, as below.
root@sgh28h13:~# scstat -g
-- Resource Groups and Resources --
Group Name Resources
Resources: resource-group-1 sgh28cluster global_Sym_R5_1G_d110-rs nfs-global-Sym_R5_1G-d110-admin-rs
-- Resource Groups --
Group Name Node Name State Suspended
Group: resource-group-1 sgh28h13 Online No
Group: resource-group-1 sgh28h17 Offline No
-- Resources --
Resource Name Node Name State Status Message
Resource: sgh28cluster sgh28h13 Online Online - LogicalHostname online.
Resource: sgh28cluster sgh28h17 Offline Offline
Resource: global_Sym_R5_1G_d110-rs sgh28h13 Online Online
Resource: global_Sym_R5_1G_d110-rs sgh28h17 Offline Offline
Resource: nfs-global-Sym_R5_1G-d110-admin-rs sgh28h13 Online Online - Service is online.
Resource: nfs-global-Sym_R5_1G-d110-admin-rs sgh28h17 Offline Offline
The NFS service is on sgh28h13 originally. sgh28h13 has one interface, net0, in the public network for this service. So I used "ifconfig unplumb net0" to shut net0 down so the NFS service could fail over to the other node. But the service is still online on sgh28h13 after I shut down net0 and sc_ipmp0. I don't know why the NFS service didn't fail over. Could somebody help?
root@sgh28h13:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok sc_ipmp0 --
net1 ip ok -- --
net2 ip ok -- --
sc_ipmp0 ipmp ok -- --
sc_ipmp0/static1 static ok -- 10.103.117.103/22
Message edited by user9111646
Hi.
IPMP was not triggered by the change, or the configuration is wrong. You changed the network configuration, so change the IPMP configuration too;
otherwise the cluster is not triggered by this event.
If you want to test failover for lost network connections, initiate link down from the switch side or just disconnect the cable.
Regards. -
We use a user-visible service object to manage the positioning and
cascading of our client windows, whether modal or not. Before each
window performs an Open(), it registers its reference and title
with the SO. If there are other active windows of the same type,
the title is modified to include a colon and count number, as in
normal Windows applications. After the window performs the Close()
method, it de-registers itself with the SO. Whilst open, the window
can also call a method on the SO to cascade all active windows.
(Every window has a Window\Cascade menu item for this purpose.)
One of the reasons for using the SO to implement cascading, was to
enable the user to cascade only the Forte application windows. If a
non-Forte window gets in the way, you can use the Forte cascade to
bring the Forte windows in front of it again !
As an aside, the only windows that we have that are modal, are very
small input windows that don't need much in the way of behaviour.
As most of our windows are started using START TASKs, all windows
need to keep references to child windows that they have
instantiated, in order to perform orderly shutdowns, iconising and
re-opening.
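A rough TOOL shape for the handshake described above (WindowTrackerSO, OrderWindow, and all the method names are invented for illustration):

```
-- Hypothetical Display method on one of our window classes.
method OrderWindow.Display
begin
  -- Register: the SO hands back a possibly decorated title, e.g. 'Orders:2'.
  myTitle : string = WindowTrackerSO.RegisterWindow(self, 'Orders');
  -- ... apply myTitle as the window caption, then:
  self.Open();
  -- ... event loop runs here; a Window\Cascade menu item would call
  -- WindowTrackerSO.CascadeAll() ...
  self.Close();
  WindowTrackerSO.DeregisterWindow(self);
end method;
```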
Have fun !
Justin Levis
Hydro-Electric Commission
Hobart, Tasmania
Australia
Please read the information posted @
http://msdn.microsoft.com/en-us/library/windows/desktop/aa969540%28v=vs.85%29.aspx
Desktop Window Manager
The desktop composition feature, introduced in Windows Vista, fundamentally changed the way applications display pixels on the screen. When desktop composition is enabled, individual windows no longer draw directly to the screen or primary display
device as they did in previous versions of Windows. Instead, their drawing is redirected to off-screen surfaces in video memory, which are then rendered into a desktop image and presented on the display.
Desktop composition is performed by the Desktop Window Manager (DWM). Through desktop composition, DWM enables visual effects on the desktop
as well as various features such as glass window frames, 3-D window transition animations, Windows Flip and Windows Flip3D, and high resolution support.
The Desktop Window Manager runs as a Windows service. It can be enabled and disabled through the Administrative Tools Control Panel item, under Services, as Desktop Window Manager Session Manager.
Many of the DWM features can be controlled or accessed by an application through the DWM APIs. The following documentation describes the features and requirements of the DWM APIs.
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. -
Certs did not working after failover
We had to fail over our primary ACE to the secondary because the primary ACE crashed and we had to replace it. After we failed over, certain certs stopped working.
To fix this problem I had to remove the cert from the ssl-proxy service and re-add it to make it work. Has anybody run into a problem like this? Why would this fail after failover to the secondary ACE?
Oracle-User wrote:
Hi All
We are facing issues on FNDWRR.exe after we failover to DR apps servers. We are not able to retrieve concurrent job logs using web browser. Apache log file shows "Premature end of script headers" error message.
[Sat May 25 19:51:21 2013] [error] [client 10.64.224.134] [ecid: 1369507879:10.166.3.16:3282:0:13,0] Premature end of script headers: /d01/app/ebs/gap/apps_st/comn/webapps/oacore/html/bin/FNDWRR.exe
Any idea what can be causing this issue?
Thanks in advance!
Did AutoConfig complete successfully?
Please relink FNDWRR.exe manually or via adadmin and check then.
Thanks,
Hussein -
Service Objects with Dialog duration
Hi Forte`ans,
I am trying to listen to an event from a service object which has a
dialog duration of Message. The service object is configured for
failover.
I get an exception ( not an error message ) saying :
SYSTEM ERROR: Invalid attempt to register for an event on an object of
class (CKBaseServiceMgrProxy) which has a dialog duration of
message. The
semantics of message duration do not guarantee that the same object
instance will service each message, which is in conflict with the
semantics of event registration (which requires that the same object
instance to which the event is registered for generates the event;
these are two separate actions). To disable this restriction,
restart this process with cfg:do:4 specified.
If I make the dialog duration of the SO Session, it works without
screaming.
Does this mean I cannot listen to events from such an SO (failover
enabled with Message duration)? Is it because the event loop may still
point to the failed SO and Forte wants to avoid such situations???
Can somebody throw some light on this..?
Thanks
Ajith Kallambella M
International Business Corporation.
We ran into this same problem when converting an application from R1 to
R2. In R1, you were allowed to do this. However, Forte won't
guarantee, even in a non-replicated, non-failover partition, that it
won't swap objects under certain situations unless the dialog duration
is session. If this happened, you would lose your registration and not
even know it. The recommended solution is for the client partition to
pass a reference to an object anchored in its partition to the service
object in the remote partition. The service object can then post events
on the anchored object, which is guaranteed to be there during the life
of that client partition. The logger flag was designed for backwards
compatibility. It's not really recommended, but it's not supposed to
have much overhead if you do use it. We already had a client
notification architecture in place, so we re-worked our application to
use it in the cases where we had been using direct registrations. Hope
this helps -- Chris
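The anchored-callback pattern Chris describes might be sketched roughly like this in TOOL (all class, method, and event names are invented for illustration):

```
-- Client partition: ClientCallback is a hypothetical class whose
-- instances are anchored in the client partition, so the reference
-- is guaranteed to live for the life of the client.
callback : ClientCallback = new;
RemoteServiceSO.Subscribe(callback);   -- hand the reference to the SO

event loop
  when callback.JobFinished do
    -- safe: the registration is on the locally anchored object,
    -- not on the message-duration service object's proxy
end event;

-- Inside the service object (server partition), when work completes:
post subscriber.JobFinished();
```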
Chris Kelly, IS Architect
Andersen Windows
From:
[email protected][SMTP:[email protected].
net.in]
Sent: Thursday, September 18, 1997 1:44 PM
To: [email protected]
Subject: Service Objects with Dialog duration