DFSR Failover for 2 VM File Servers
Current Setup:
2 VM Windows 2012 R2 Servers: File01 and File02 (File01 is set as Primary)
DFSR setup between the 2
How can I configure the 2 DFS servers for high availability even though they are VMs on a FA cluster? If File01 goes down, I need my users to still be able to access the namespace and fail over to File02; when File01 comes back online, File02 should replicate the changes back to File01.
I assumed having DFSR configured between the 2 would handle this, but I must have missed something: I shut down File01 to test whether it would switch over to File02, and it did not.
Any suggestions would be greatly appreciated. I know I am on the right track, but I am missing something here.
Hi,
When a client first connects to one of your file servers, it caches the referral; this prevents the client from repeatedly sending requests to the DC.
Because of that cache, failover will not occur immediately. You can shorten the cache duration in DFS Management, but access will still fail briefly while the failover happens.
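A note on shortening that cache: on 2012 R2 the referral time-to-live can be lowered with the DFSN PowerShell module (a sketch; the namespace and folder paths below are hypothetical):

```powershell
# Lower the referral time-to-live (default 300 s) so clients re-query
# the namespace sooner after a target goes offline.
Set-DfsnRoot -Path '\\contoso.com\Files' -TimeToLiveSec 60
# Folder-level referrals can be shortened the same way.
Set-DfsnFolder -Path '\\contoso.com\Files\Shared' -TimeToLiveSec 60
```

A very low TTL increases referral traffic to the namespace servers, so treat 60 seconds as a starting point, not a recommendation.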
If you have any feedback on our support, please send to [email protected]
Similar Messages
-
Migrating branch office 2008 R2 file servers to 2012 R2 with DFSR complications
We have 1 head office file server and 4 branch office servers that replicate back to head office using DFSR. We would like to move to 2012 R2 to take advantage of the deduplication features etc. What's the best way to do this? Do we do a re-install as per this article:
http://blogs.technet.com/b/askds/archive/2010/09/10/replacing-dfsr-member-hardware-or-os-part-5-reinstall-and-upgrade.aspx
leaving the data disks untouched, simply attaching them again, and then doing the same for each branch office? We cannot afford to replicate the entire data set across the WAN links again.
Anyone had any experience doing this? Any advice appreciated.
Hi,
Do you want to migrate the branch office 2008 R2 file servers to new 2012 R2 servers, or upgrade the existing servers in place?
If you upgrade in place, you need to do a re-install as per the article mentioned, and the entire data set will be replicated again.
If you migrate the file servers to new servers, here is our suggested procedure for migrating a DFS-R branch server:
1. Add a new branch server.
2. Wait until the initial sync is finished (event 4104 for every replicated folder).
3. Wait for AD replication.
4. Remove the old branch server as usual.
To avoid a large synchronization over the network, pre-seed the data by using a backup program to copy the data from the old branch server to the new one; this should lower the replication traffic during initial replication.
Get out and push! Getting the most out of DFSR pre-staging
http://blogs.technet.com/b/askds/archive/2008/02/12/get-out-and-push-getting-the-most-out-of-dfsr-pre-staging.aspx
How to use the Backup program to prestage data before DFSR synchronization in Windows Server
http://support.microsoft.com/kb/947726
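As a sketch of that pre-seeding step (server names, paths, and the log file below are hypothetical; robocopy is a common alternative to the backup program described in the KB above):

```powershell
# Copy the data with security descriptors intact, excluding DFSR's
# private working folder, before adding the new server to the
# replication group.
robocopy \\OldBranch\D$\Data \\NewBranch\D$\Data /E /COPYALL /XD DfsrPrivate /R:1 /W:1 /LOG:C:\preseed.log
```

Preserving ACLs (/COPYALL) matters here: DFSR hashes include security descriptors, so a copy that drops them defeats the purpose of pre-seeding.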
Regards,
Mandy
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
Does FEP 2010 offer protection for NAS file servers?
Hello,
We are in the process of rolling out FEP 2010 and wanted to know if it has the capability to scan NAS file servers?
Thanks,
Tom
Tom Martin Email: [email protected]
Could Microsoft Forefront Endpoint Protection scan a NAS drive?
We have a NAS drive (EMC back-end) with network shares via Windows Server. We are using FEP 2010 with SCCM 2007. Today we have an Expiro virus/malware headache: it has infected some network shares, and we don't know yet how far it goes. Is there an easy way to deal with this?
The problem with Expiro is that it mutates itself under different names; the latest one was seen as Expiro.gen!S.
Thanks for any suggestions. -
Re: Failover for SO's with context
Right, delivery of events is not guaranteed by Forte, even though it is reasonable to rely on it in the case of two Forte servers on a LAN.
I would not go towards a solution for securing event delivery by an acknowledgement mechanism (ack event or shared object notifier), because of increased complexity and performance overhead.
On the other hand, a second simple security level can be provided by enabling your mirror/backup SO to be refreshed at will, by letting it get a snapshot of the current transient data to be mirrored, so you can:
- Start your partitions in any order (the mirror partition will first take a snapshot of the transient data, then register for mirror events)
- Start and stop the mirror partition at will, without disrupting the application
Then, if you do not trust event delivery, you can reinitialize your mirror periodically (say every 12 hours) to minimize the risk of losing transient data events.
Again, this solution is suited to low volumes of transient data.
I guess what Chad means by journaling is writing to a log file any event (in a large sense) happening on data from its initial value. Then, if you need to restore state, you re-play the events from the initial value. This is a common solution in the banking area, where you need to back up not only values but also events on the values. I do not know how this can be applied to a generic mechanism with Forte, but it may be a good way to explore, although probably more complex to implement with Forte than the Backup SO / Events pattern.
Hope this helps,
Vincent Figari
On Fri, 13 Feb 1998 10:39:03 -0600 Chad Stansbury <[email protected]> writes:
Actually, since events (let alone distributed events) are not 'guaranteed delivery' in Forte, I would hesitate to use events as a mechanism for mirroring your data - unless, of course, you really don't require an industrial-strength failover strategy. This would also apply to asynchronous messaging, unless you are careful to register for exception events (which, again, aren't guaranteed delivery) and have a mechanism to handle said asynchronous exception events. I also know that Forte will retry certain tasks when the service object they are sent to fails completely (like a NIL object exception), but I don't know enough about the internal workings of Forte to know under which conditions this will occur.
I think that the most common method for a truly industrial-strength, guaranteed-delivery mechanism is journaling... which I know very little about, but it is something that you should be able to look up and study if that's what you require.
Again, if you don't care about the (admittedly small) chance of an asynchronous call failing, then the suggestions that Vincent has already made are good ones.
From: [email protected]
To: [email protected]
Cc: [email protected]
Sent: 2/13/98 9:13:17 AM
Subject: Re: Failover for SO's with context
Steven,
The pattern choice between an external resource and an SO depends on the type of transient data you want to back up. The external resource is probably better suited to high volumes of data. We have implemented the 'Backup SO' pattern because our transient data volumes are rather low (which I guess must be the most common case for global, transient data).
Whichever you choose:
- Be sure to enforce encapsulation for updating the transient data, in order to guarantee that any modification to your transient data is duplicated on the backup SO or the external resource.
- Regarding performance, the CPU cost is fairly low for your 'regular' application if you take care to:
  * use asynchronous tasks to update the external resource, or
  * use events to notify the backup SO.
Now, it is true that you will have network overhead when using events, as your backup SO should be isolated in a remote partition on a remote server. That is one good argument for selecting the Backup SO pattern for low volumes of transient data.
If you choose the 'Backup SO' pattern, you will also have to be careful to send only clones, never a distributed reference, to your Backup SO.
Anyway, the Backup SO pattern works fairly well for low volumes of data, but it requires a lot of testing and a good understanding of events and communication across partitions.
Hope this helps,
Vincent Figari
On Fri, 13 Feb 1998 09:24:57 +0100 Steven Arijs <[email protected]> writes:
We're going to implement a failover scenario for our application. Unfortunately, we also have to replicate the state of our failed service objects.
I've browsed the Forte site and found a TechNote concerning this (TechNote 11074). In this TechNote they talk about a service object that is responsible for updating all backup service objects when needed.
It seems to me that when I implement it that way, I will be creating a lot of overhead, i.e. I will be doing a lot of stuff several times. What will the effect on my performance be?
The approach with the least performance loss would be to use an external resource that is updated. But what if this external resource also fails?
Has anyone already implemented a failover scenario for service objects with state?
Any help would be appreciated.
Steven Arijs
([email protected])
You don't need to buy Internet access to use free Internet e-mail.
Get completely free e-mail from Juno at http://www.juno.com
Or call Juno at (800) 654-JUNO [654-5866]
-
Cluster servers (file servers)
Hello,
We are looking to use Windows 2012 servers as file servers. We want high availability, so we like the concept of the cluster: the ability to take one server down for maintenance and still be able to provide files to end users. I have read scattered information around the net (links below). We also want to incorporate the deduplication feature offered by Windows 2012. Our storage will be from Amazon S3 (iSCSI volume using gateway-cache). Our Windows 2012 servers will be virtual machines.
http://clusteringformeremortals.com/2012/12/31/windows-server-2012-clustering-step-by-step/
http://technet.microsoft.com/en-us/library/gg232621%28v=ws.10%29.aspx
From the information provided, what should I be aware of before I take this journey of building and testing? Our client machines are mostly Windows 7.
Thanks ahead,
TT
That's a pretty straightforward task. See the links below on how to create a failover file server:
Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
Create a Clustered File Server
http://technet.microsoft.com/en-us/library/cc753969.aspx
HA File Server for SMB NAS
http://www.starwindsoftware.com/configuring-ha-file-server-for-smb-nas
(ignore StarWind; replace it with your S3 shared storage - the Windows-related config part of the PDF is the same for all storage)
Windows 7 does not support SMB 3.0, so there is no Transparent Failover, which requires SMB 3.0 and up. I'd strongly suggest upgrading your clients to Windows 8 if you can. See:
Which SMB version?
http://blogs.technet.com/b/josebda/archive/2013/10/02/windows-server-2012-r2-which-version-of-the-smb-protocol-smb-1-0-smb-2-0-smb-2-1-smb-3-0-or-smb-3-02-you-are-using.aspx
Windows 7 and SMB3
http://windowsitpro.com/windows-7/smb-30-windows-7
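As a quick way to see which dialect your clients actually negotiate (a sketch assuming the SMB cmdlets that ship with Windows Server 2012 and later, run on the file server):

```powershell
# List active SMB sessions and the negotiated protocol dialect per
# client. Windows 7 clients will show 2.1 or lower; 3.0 or higher is
# required for Transparent Failover.
Get-SmbSession | Select-Object ClientComputerName, Dialect
```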
Good luck!
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Sync two 10.6 file servers
Hi everyone,
this might be a very basic question, but I haven’t seen a good guide on how to accomplish this, hence the question.
I would like to use 2 absolutely identical file servers (no other services, only AFP) with one of them being a backup that is not in use unless the first one crashes, burns, gets stolen, etc. The servers will be a Mac mini Server each with both having a 12 TB FireWire RAID attached. They need to be in separate locations, but Gigabit Ethernet is available to connect them.
In order to be clear: I thought of syncing both the system and the data on the RAID.
Do you have any recommendations on how to best accomplish this? Is there a best practice? Is rsync what I need here?
Thanks
Björn
There are many, many elements to your question, and syncing the data is the least (and easiest?) part of the equation.
For one, what's your failover model? Do you want the failover to happen instantly, with no disruption to the users? or do you mind if the users get disconnected and have to reconnect to their shares?
Or maybe you want failover to only happen manually? (i.e. only when you know the primary server is going to be down for a while). This is common because the cost of failback (i.e. resynching the 'backup' data to the primary server) is time consuming and could take longer than the primary server would be offline, anyway - if it'll take 2 hours to sync your data back then there's no point in failing over if your server is going to be back in 10 minutes.
Then there's the volume of data and, more importantly, the rate of change. Even if you have 10 TB of data, there may be only a few megabytes that change daily and need to be kept in sync. That will have a big impact on your replication strategy.
While on that subject, how much tolerance do you have for the servers being out of sync? If you need them to be real-time then you don't have the equipment for this - real-time replication of filesystems is a tricky (and expensive) task. If you want to sync daily, or even a few times a day, then that's easier, with the cost being a few hours' lost work should an unexpected failover happen. That may or may not be viable for you.
Either way, I would not recommend Retrospect for this (or even for regular backups). A simple rsync shell script can replicate the data between the two servers; it's largely the frequency and volume that you have to consider. -
Active Directory domain migration with Exchange 2010, System Center 2012 R2 and File Servers
Greeting dear colleagues!
I got a task to migrate our existing Active Directory domain to a new forest and a brand new domain.
I have a single domain with Forest/Domain level 2003 and two DC (2008 R2 and 2012 R2). My domain contains Exchange 2010 Organization, some System Center components (SCCM, SCOM, SCSM) and File Servers with mapped "My Documents" user folders. Domain
has about 1500 users/computers.
Do you think it is really possible to migrate such a domain to a new one with minimal downtime and user interruption? Maybe someone has already done something like that before? Please write about it here; I promise I won't ask you for full instructions, maybe only some small questions :)
Now I'm studying the ADMT manual, for sure.
Thanks in advance,
Dmitriy Titov
Best regards, Dmitriy Titov
Hi Dmitriy,
I got a task to migrate our existing Active Directory domain to a new forest and a brand new domain.
Do you think it is really possible to migrate such a domain to a new one with minimal downtime and user interruption?
As far as I know, during an inter-forest migration, user and group objects are cloned rather than moved, which means they can still access resources in the source forest; they can even access resources after the migration is completed. You can ask users to switch domains as soon as the new domain is ready.
Therefore, there shouldn't be a huge downtime/interruption.
More information for you:
ADMT Guide: Migrating and Restructuring Active Directory Domains
https://technet.microsoft.com/en-us/library/cc974332(v=ws.10).aspx
Best Regards,
Amy
Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
[email protected] -
Hi everybody,
I have a strange problem with Mount-DiskImage command.
Environment: Windows server 2012 without any updates.
All scripts signed as it was in Hanselman's blogpost
http://www.hanselman.com/blog/SigningPowerShellScripts.aspx
The first script (script1) executes on one machine (server1); it then copies another script (script2) to the remote server (server2) and runs script2 in a PS-Session. Both are signed, and the certificates are present on both servers.
In the script I tried:
Import-Module Storage
$mountVolume = Mount-DiskImage -ImagePath $ImageSourcePath -PassThru
where $ImageSourcePath is a network path to an ISO image. But I am getting an exception.
Exception Text:
Cannot process Cmdlet Definition XML for the following file:
C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\Disk.cdxml. At line:138 char:17
+ $__cmdletization_objectModelWrapper = Microsoft.PowerShell.Utili ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Executable script code found in signature block.
At line:139 char:17
+ $__cmdletization_objectModelWrapper.Initialize($PSCmdlet, $scrip ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Executable script code found in signature block.
At line:143 char:21
When I looked into C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\Disk.cdxml, I didn't understand what had happened, because line 138 was an XML comment.
Any ideas?
Hi,
I suggest referring to the following links:
http://blogs.msdn.com/b/san/archive/2012/09/21/iso-mounting-scenarios.aspx
http://blogs.technet.com/b/heyscriptingguy/archive/2012/10/15/oct-15-blog.aspx
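If the links don't resolve it, one extra diagnostic worth trying (a suggestion, not from the articles above) is to inspect the signature state of the module file named in the error:

```powershell
# Check whether PowerShell considers the shipped cdxml file's
# signature block valid on this server.
Get-AuthenticodeSignature 'C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\Disk.cdxml' |
    Format-List Status, StatusMessage
```

If the status is anything other than Valid, the file (or the machine's trust configuration) is the problem rather than your own script signatures.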
Best Regards,
Vincent Wu
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. -
I want to move all my file servers from Aws to Azure
Hi All,
I want to move my file servers (each 1 TB in size, 5 in total) from AWS to Azure. Please suggest best practices for doing this, and if possible provide links.
Thanks
Hi,
I would suggest you look at this video
http://channel9.msdn.com/Shows/TechNet+Radio/TechNet-Radio-How-to-Migrate-Your-Virtual-Machines-from-Amazon-Web-Services-to-Windows-Azure
Tune in as Keith Mayer demos how to quickly and easily migrate your AWS virtual machines to Windows Azure Infrastructure Services.
[Edit] modify the video link.
Hope this helps
-
I have MS Visual Studio Express 2013. It has worked fine for many months, and then suddenly (I have made no configuration changes or added new programs) when I try to publish I get the message:
Unable to create the Web site 'ftp://ftp.xx.xx/xxx.org.uk/www/htdocs'. The components for communicating with FTP servers are not installed.
(I have replaced actual name with x's).
I had a similar problem some months ago and found that saving all files, closing VS 2013 and re-starting the program fixed the problem. This time it has not.
I am at a loss to know how to take this forwards. I do not use IIS.
Any help would be appreciated.
Michael.
Hi Michael,
For web site development, you use VS 2013 Express for Web, am I right? We have to make sure that it is not a VS version issue.
Since you said it worked well before, did you install any other add-ins or tools in your VS IDE, such as Xamarin?
Maybe you could disable or remove all add-ins in your VS IDE and test again.
Please also install VS 2013 Update 4 on your side.
Best Regards,
Jack
-
Network Load Balancing and failover for AFP Sharing
Dear all,
Somebody kindly taught me to use round-robin DNS to perform network load balancing; that worked, but not the failover.
I have 4 xserve and want to do the load balancing and failover at the same time.
I have read the IP failover document and set it up successfully, but does anyone know whether it is possible to do IP failover for more than 2 servers?
For example, with 4 servers serving AFP at the same time, maybe I have 1 extra server to do the IP failover for those 4 servers.
As I understand it, IP failover requires FireWire for heartbeat detection, but one Xserve only has 2 FireWire ports. Can I set up IP failover using only an Ethernet port and an IP address? And is it possible to detect a downed server and fail over to any server once the failure has been detected?
I believe a load balancer may be the best solution, but its cost is too high.
Thanks in advance!
Karllee
Well, you have 2 options here:
Software load balancing:
A request comes in to foo.com -> the WS7u2 instance hosting foo.com is configured to run as a reverse proxy. This server sends any incoming request to one of the four back-end Web Server 7 instances handling your requests.
Hardware load balancing (this you need to invest in):
A request comes to the hardware load balancer that answers for foo.com -> it sends requests to the four WS7 servers hosting your application.
You could try out how software load balancing works for you before you invest in hardware load balancing.
Here are more instructions on configuring WS7 + reverse proxy (software load-balancing configuration):
- install WS7 on foo.com
- create a new configuration (choose port 80, disable java -
Setup failover for a distributed cache
Hello,
For our production setup we will have 4 app servers, one clone per app server, so there will be 4 clones in a cluster. And we will have 2 JVMs for our distributed cache, one being a failover; both will be in the cluster.
How would I configure the failover for the distributed cache?
Thanks
user644269 wrote:
Right - so each of the near cache schemes defined would need to have the back-map high-units set to where it could take on 100% of the data.
Specifically, the near-scheme/back-scheme/distributed-scheme/backing-map-scheme/local-scheme/high-units value (take a look at the [Cache Configuration Elements|http://coherence.oracle.com/display/COH34UG/Cache+Configuration+Elements]).
There are two options:
1) No expiry - in this case you would have to size the storage-enabled JVMs so that an individual JVM could store all of the data.
or
2) Expiry - in this case you would set high-units to a value that you determine. If you want it to store all the data, it needs to be set higher than the total number of objects you will store in the cache at any given time; or you can set it lower, with the understanding that once high-units is reached, Coherence will evict some data from the cluster (i.e. remove it from the "cluster memory").
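For orientation, a sketch of where that high-units element sits in a cache configuration file (the scheme names and unit counts here are made up for illustration):

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <!-- Option 2 above: cap the backing map and let Coherence
               evict once the limit is reached. Set it higher than the
               total object count to avoid eviction in practice. -->
          <high-units>100000</high-units>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </back-scheme>
</near-scheme>
```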
user644269 wrote:
Other than that, there is no configuration needed to ensure that these JVMs act as a failover in the event one goes down.
Correct - data fault tolerance is on by default (set to one level of redundancy).
:Rob:
Coherence Team -
Configure directory server failover for Delegated Admin schema 2
Hello,
I am using Delegated Admin for Schema 2 on the Solaris 9 SPARC platform.
I want to configure directory server failover for Delegated Admin.
Unfortunately I haven't found any pointers on how to do this.
Can anyone help me?
Regards,
Shujaat Nazir
Senior System Engineer
Cyber Internet Services, Pakistan
http://www.cyber.net.pk
Different product.
Schema 1 used the old iPlanet Delegated Admin.
Schema 2 uses Delegated Admin, based on Identity Server.
As far as I know, failover is not in this product. -
Order of DC + Exchange + File servers
I have a newbie general question regarding restarting servers.
Which order is best to restart them for maintenance purposes? All running 2003 server
Domain Controller handling DNS
Exchange server handling DHCP
File Servers
Should I wait to restart the DC last?
It doesn't matter what order you restart them in. Obviously you'll experience an outage when rebooting any of these servers, so you need to make sure each server is completely back online before taking another one down.
You don't need to save the DC for last, but life would be 10x better if you stood up a second one. Directory Services is pretty important to always have online. A second DC would let you reboot either one without breaking DNS and authentication, and it can also reduce the amount of time it takes to bring a DC back online.
Stand-alone DCs take significantly longer to come online if DNS is not available. So if you have the hardware, it's worth making another.
- If you have found my post to be helpful, or the answer, please mark it appropriately. Thank you.
Chris Ream -
Weblogic7/examples/clustering/ejb Automatic failover for idempotent methods ?
This one should be easy since it is from the examples folder of BEA WebLogic 7 about clustering.
Ref : \bea7\weblogic007\samples\server\src\examples\cluster\ejb
I am referring to the cluster example provided with the weblogic server 7.0
on windows 2000.
I deployed Admin server and 2 managed server as described in document.
Everything works fine as shown by the example. I get load balancing and
failover both. Too Good.
Client.java uses a while loop to manage the failover, so on an exception it will go through the loop again.
I understand from the documentation that a stateless session EJB will provide automatic failover for idempotent stateless beans.
Case: idempotent failover (automatic)
If methods are written in such a way that repeated calls to the same method do not cause duplicate updates, the method is said to be "idempotent." For idempotent methods, WebLogic Server provides the stateless-bean-methods-are-idempotent deployment property. If you set this property to "true" in weblogic-ejb-jar.xml, WebLogic Server assumes that the method is idempotent and will provide failover services for the EJB method, even if a failure occurs during a method call.
Now I made 2 changes to the code.
1. I added the following to the weblogic-ejb-jar.xml of the Teller stateless EJB:
<stateless-clustering>
<stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
<stateless-bean-load-algorithm>random</stateless-bean-load-algorithm>
<stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
</stateless-clustering>
So I should get the automatic failover.
2. I also added a break statement in the catch around line 230 in Client.java:
catch (RemoteException re) {
    System.out.println(" Error: " + re);
    // Replace teller, in case that's the problem
    teller = null;
    invoke = false;
    break;
}
So that the client program does not loop again and again.
Now I compile, restart all three servers, and redeploy the application (just to be sure).
I start my client and I get automatic load balancing between the servers, which makes me happy.
But failover...?
I kill one of the managed application servers in the cluster at a particular test fail point.
I expect the exception to be handled automatically by the error/failover handler in the home/remote stub, but the client program fails and terminates.
1. What is wrong with the code?
2. Does automatic failover with idempotent methods also have to be handled by coding a similar while loop for the stateless EJB?
Your help will be appreciated ASAP.
Let me know if you need anything more from my system. But I am sure this will be very easy, as it is from the sample code...
Thanks
Sorry I meant to send this to the ejb newsgroup.
dan
dan benanav wrote:
> Do any vendors provide for clustering with automatic failover of entity
> beans? I know that WLS does not. How about Gemstone? If not is there
> a reason why it is not possible?
>
> It seems to me that EJB servers should be capable of automatic failover
> of entity beans.
>
> dan