Disaster Recovery - Active Directory
Hello!
I have a client whose domain controller server has totally crashed, and the customer has no backup. It is now not possible to log into the employees' computers.
I have now set up a new domain controller. How can I log onto all of the employees' computers and join them to the new domain? We do not have the usernames and passwords for the local users.
(PS: Do you know of any tool that can easily do the job for me, so I don't have to do it manually on 50 machines?)
Hopefully the users were local admins and you can still log on with the old domain ID and password; if so, cached credentials should let you in. Once logged on, you should be able to change the machine over to the new domain. Otherwise you will need the local administrator password and will have to log on that way.
Otherwise you can try the below to see if you can reset local admin passwords:
http://www.petri.co.il/forgot_administrator_password.htm#
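If cached credentials do get you in, the rejoin itself can be scripted rather than clicked through on each machine. A minimal sketch, with placeholder domain and account names, assuming a recent PowerShell and that the joining account has rights to add computers to the new domain:

```powershell
# Run elevated on each workstation: join the new domain and reboot into it
$cred = Get-Credential "NEWDOMAIN\Administrator"
Add-Computer -DomainName "newdomain.local" -Credential $cred -Restart
```

On PowerShell 3.0 and later, Add-Computer also takes -ComputerName and -LocalCredential, so once you have a working local admin password you could loop over all 50 machines from a single console instead of visiting each desk.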
Paul Bergson
MVP - Directory Services
MCITP: Enterprise Administrator
MCTS, MCT, MCSE, MCSA, Security, BS CSci
2012, 2008, Vista, 2003, 2000 (Early Achiever), NT4
Twitter @pbbergs http://blogs.dirteam.com/blogs/paulbergson
Please no e-mails, any questions should be posted in the NewsGroup.
This posting is provided AS IS with no warranties, and confers no rights.
Similar Messages
-
Disaster Recovery: Active/Passive Mail Flow
Hello. I am planning to deploy an active/passive server environment for Exchange 2013. Here's what I plan to achieve:
EXCHANGE-A - on production site, active (all mail flow goes through here), member of DAG
EXCHANGE-B - on disaster recovery site, passive (mail flow goes through here ONLY when activated), member of DAG
I have tried disabling all receive connectors on EXCHANGE-B, however that causes OWA to delay sending messages for around 20 seconds (messages stuck in Drafts folder before being sent out). I have also tried disabling Exchange Transport services on EXCHANGE-B
but to no avail.
Is it possible to achieve an active/passive mail flow server environment? What is the best way to achieve this? I appreciate any leads on this matter.
Hi,
Shadow redundancy is configured on the transport config, and any change applied to the transport config parameters gets applied to all the transport servers in the organisation; it is not specific to the server level.
Command to disable the shadow redundancy feature:
Set-TransportConfig -ShadowRedundancyEnabled $false
Note: the above command will disable the shadow redundancy feature for your entire organisation.
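If shadow redundancy only needs to be off temporarily during the DR test, it may help to capture the current value first so it can be restored afterwards (a sketch using standard Exchange Management Shell cmdlets; nothing server-specific is assumed):

```powershell
# Inspect the current organisation-wide setting before changing it
Get-TransportConfig | Format-List ShadowRedundancyEnabled

# Re-enable shadow redundancy once the passive-site test is over
Set-TransportConfig -ShadowRedundancyEnabled $true
```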
Thanks & Regards S.Nithyanandham -
MBAM bitlocker-protected removable drives recovery keys saved on sql database not active directory
Hi Guys
I need help in saving BitLocker-protected removable drive keys in the SQL database instead of Active Directory.
I have tried to play around with the policy and I am not winning. Currently my GPO "Choose how BitLocker-protected removable drives can be recovered" has only the "Allow data recovery agent" option chosen, and I have left all the AD DS options unticked.
Please point me in the right direction on how to achieve this. I want all my keys in a SQL database so the users can recover the keys themselves using the MBAM helpdesk website.
Under Client Management, define your endpoint URLs. You can see the help and the description section for that particular policy. Copy and paste the URL, removing the port number, and replace the name of the server with that of your MBAM web server.
Also, disable or don't configure the policy "Choose how BitLocker-protected removable drives can be recovered".
This will save your recovery keys to the MBAM databases.
Gaurav Ranjan -
Active Directory Operation on Disaster Recovery Drill
Hi everybody,
I need some advice from you guys. I have to be ready for a DR drill in the upcoming week. My concern is this scenario: we have a PDC in HQ, DCs (GC) in some branches, and also a DC (GC) in the DR site. When we do the DR operation we disconnect the network line in HQ and swing all system apps, including AD, to the DR site. This also involves the branch networks, which will then point to the DR site.
The testing we have to do in the DR drill covers user authentication, join-domain sessions, replication and GPO, and we have to make sure all the tests succeed or we get scolded.. 8)
So what is the best practice, from the Microsoft site or the old timers here, for making this AD operation at the DR site work like normal?
Hope I'll have some input from you guys, thanks..
Hello,
for the branch office, make sure that you have at least one DC / DNS / GC server. When a disaster occurs at the HQ, you then have to change the IP configuration settings of the client computers in the branch office so that they point to this DC as their primary DNS server. That way, all should be okay with domain authentication, group policy application, and so on.
Now, for the FSMO roles: if you don't seize them you will get problems like
You are unable to perform changes on the AD schema
Time sync problems
If the HQ DCs are unrecoverable then I recommend seizing the FSMO roles on DCs of the Branch Office. If not, you can:
Wait until these DCs are back
Seize the FSMO roles, and when the HQ DCs come back, force their demotion using dcpromo /forceremoval. Note that once the roles are seized, the HQ DCs should never come back online before demotion
Also, after these changes, you have to make sure that there are at least two DC / DNS / GC servers in the branch office. If not, add a new one. That way, you will reduce the risk of losing your domain.
Before a disaster occurs, you have to make sure that AD replication is working correctly.
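A sketch of the seizure itself, using the Active Directory module available from Windows Server 2008 R2 onwards ("DR-DC1" is a placeholder for the surviving DR-site DC; -Force turns a graceful transfer into a seizure, so use it only when the HQ DCs are truly unrecoverable):

```powershell
# Seize all five FSMO roles onto the DR-site domain controller
Import-Module ActiveDirectory
Move-ADDirectoryServerOperationMasterRole -Identity "DR-DC1" -Force `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster
```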
This
posting is provided "AS IS" with no warranties or guarantees , and confers no rights.
Microsoft Student Partner 2010 / 2011
Microsoft Certified Professional
Microsoft Certified Systems Administrator: Security
Microsoft Certified Systems Engineer: Security
Microsoft Certified Technology Specialist: Windows Server 2008 Active Directory, Configuration
Microsoft Certified Technology Specialist: Windows Server 2008 Network Infrastructure, Configuration
Microsoft Certified Technology Specialist: Windows Server 2008 Applications Infrastructure, Configuration
Microsoft Certified Technology Specialist: Windows 7, Configuring
Microsoft Certified IT Professional: Enterprise Administrator -
ACTIVE/ACTIVE Disaster Recovery configuration
If I have two separate 10.2.0 RAC databases in two separate geographical locations and each database is receiving updates and sending the changes to the other database via Streams, how would you configure a disaster recovery solution for this?
In this scenario, each database is intended to be a copy of the other. It is an ACTIVE/ACTIVE type of setup.
Do you need also have a data guard database for each of these databases to support disaster recovery?
Thanks for your feedback.
Hello Sergio,
To get to the Ironport documentation, please do the following:
1) Go to www.cisco.com
2) Log in with your CCO id and password
3) Select Support
4) On the resulting page, under Product Name, select Security
5) You should see "Email Security" and "Web Security" options there, which will bring you to the documents.
For WSA the doc guides are here http://www.cisco.com/en/US/customer/products/ps10164/products_user_guide_list.html
For The ESA the doc guides are here http://www.cisco.com/en/US/customer/products/ps10154/products_user_guide_list.html
Regards,
Eric -
This is the replication status for the following directory partition on this directory server.
Directory partition:
DC=ForestDnsZones,DC=shankarpack,DC=com
This directory server has not received replication information from a number of directory servers within the configured latency interval.
Latency Interval (Hours):
24
Number of directory servers in all sites:
1
Number of directory servers in this site:
1
The latency interval can be modified with the following registry key.
Registry Key:
HKLM\System\CurrentControlSet\Services\NTDS\Parameters\Replicator latency error interval (hours)
To identify the directory servers by name, use the dcdiag.exe tool.
You can also use the support tool repadmin.exe to display the replication latencies of the directory servers. The command is "repadmin /showvector /latency <partition-dn>".
Sir, I mean that the secondary domain server is down due to a system motherboard issue, so please guide me on how to remove all settings of the secondary domain controller from the primary domain (shankarpack.com).
errors are :
Active Directory Domain Services could not resolve the following DNS host name of the source domain controller to an IP address. This error prevents additions, deletions and changes in Active Directory Domain Services from replicating between one or more
domain controllers in the forest. Security groups, group policy, users and computers and their passwords will be inconsistent between domain controllers until this error is resolved, potentially affecting logon authentication and access to network resources.
Source domain controller:
AVS1
Failing DNS host name:
f0c8f1a9-50fd-4785-8ca4-29b1d824b251._msdcs.shankarpack.com
NOTE: By default, only up to 10 DNS failures are shown for any given 12 hour period, even if more than 10 failures occur. To log all individual failure events, set the following diagnostics registry value to 1:
Registry Path:
HKLM\System\CurrentControlSet\Services\NTDS\Diagnostics\22 DS RPC Client
User Action:
1) If the source domain controller is no longer functioning or its operating system has been reinstalled with a different computer name or NTDSDSA object GUID, remove the source domain controller's metadata with ntdsutil.exe, using the steps outlined
in MSKB article 216498.
2) Confirm that the source domain controller is running Active Directory Domain Services and is accessible on the network by typing "net view \\<source DC name>" or "ping <source DC name>".
3) Verify that the source domain controller is using a valid DNS server for DNS services, and that the source domain controller's host record and CNAME record are correctly registered, using the DNS Enhanced version of DCDIAG.EXE available on http://www.microsoft.com/dns
dcdiag /test:dns
4) Verify that this destination domain controller is using a valid DNS server for DNS services, by running the DNS Enhanced version of DCDIAG.EXE command on the console of the destination domain controller, as follows:
dcdiag /test:dns
5) For further analysis of DNS error failures see KB 824449:
http://support.microsoft.com/?kbid=824449
Additional Data
Error value:
11004 The requested name is valid, but no data of the requested type was found.
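For reference, the metadata cleanup mentioned in User Action step 1 above can be sketched as an interactive ntdsutil session (the server name and menu selections are placeholders for your environment; on Windows Server 2008 and later you can instead simply delete the failed DC's object from Active Directory Users and Computers, which runs the cleanup for you):

```
C:\> ntdsutil
ntdsutil: metadata cleanup
metadata cleanup: connections
server connections: connect to server PRIMARYDC
server connections: quit
metadata cleanup: select operation target
select operation target: list domains
select operation target: select domain 0
select operation target: list sites
select operation target: select site 0
select operation target: list servers in site
select operation target: select server 1
select operation target: quit
metadata cleanup: remove selected server
```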
This server is the owner of the following FSMO role, but does not consider it valid. For the partition which contains the FSMO, this server has not replicated successfully with any of its partners since this server has been restarted. Replication errors are
preventing validation of this role.
Operations which require contacting a FSMO operation master will fail until this condition is corrected.
FSMO Role: DC=shankarpack,DC=com
User Action:
1. Initial synchronization is the first early replications done by a system as it is starting. A failure to initially synchronize may explain why a FSMO role cannot be validated. This process is explained in KB article 305476.
2. This server has one or more replication partners, and replication is failing for all of these partners. Use the command repadmin /showrepl to display the replication errors. Correct the error in question. For example, there may be problems with IP connectivity, DNS name resolution, or security authentication that are preventing successful replication.
3. In the rare event that all replication partners being down is an expected occurrence, perhaps because of maintenance or a disaster recovery, you can force the role to be validated. This can be done by using NTDSUTIL.EXE to seize the role to the same server. This may be done using the steps provided in KB articles 255504 and 324801 on http://support.microsoft.com.
The following operations may be impacted:
Schema: You will no longer be able to modify the schema for this forest.
Domain Naming: You will no longer be able to add or remove domains from this forest.
PDC: You will no longer be able to perform primary domain controller operations, such as Group Policy updates and password resets for non-Active Directory Domain Services accounts.
RID: You will not be able to allocate new security identifiers for new user accounts, computer accounts or security groups.
Infrastructure: Cross-domain name references, such as universal group memberships, will not be updated properly if their target object is moved or renamed. -
Hi,
I'm looking at designing an ADFS solution to accommodate 4,500 users, providing SSO with a web application.
I'm thinking of using 2 2012 R2 ADFS servers on my internal network and 2 2012 R2 Web Application Proxies in my DMZ, then load balancing the connections using an F5.
What I'm not sure about is how to achieve DR in another data center. For example is active-active in 2 different data centres supported? I was thinking of replicating my ADFS servers across the Data Centres, then simply performing a failover in the event
of a disaster, but I'm not sure how well that would work in reality.
It would be great to hear from someone who's already done this.
Thanks
IT Support/Everything
Hi,
To provide Office 365 Single Sign-On with integration of on-premises AD and Windows Azure, it is recommended to use the on-premises environment for active use and Azure for business continuity. In case of a disaster, failover between the on-premises infrastructure and the hosted infrastructure is a manual operation.
It is not recommended to set up a cross-premises, high-availability (active/active) configuration.
For more detailed information please refer to these articles below:
White paper: Office 365 Adapter - Deploying Office 365 single sign-on using Azure Virtual Machines
http://technet.microsoft.com/library/dn509539.aspx
Deployment scenario: Directory integration components in Azure for disaster recovery
http://technet.microsoft.com/en-US/library/dn509536.aspx
To get more efficient assistance, I suggest you refer to Azure and ADFS forums below:
Azure Active Directory Forum
https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=WindowsAzureAD
Claims based access platform (CBA), code-named Geneva Forum
http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=Geneva
Best Regards,
Amy -
SharePoint 2010 backup and restore to test SharePoint environment - testing Disaster recovery
We have a production SharePoint 2010 environment with one Web/App server and one SQL server.
We have a test SharePoint 2010 environment with one server (Sharepoint server and SQL server) and one AD (domain is different from prod environment).
Servers are Windows 2008 R2 and SQL 2008 R2.
Versions are the same on prod and test servers.
We need to set up a test environment with the exact same setup as production - we want to test disaster recovery.
We performed a backup of the farm on PROD and wanted to restore it on our new server in the test environment. The backup completed successfully with no errors.
We then tried to restore the whole farm from that backup in the test environment using Central Administration, but we got the message: restore failed with errors.
We chose the NEW CONFIGURATION option during the restore, and we set new database names...
Some of the errors are:
FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified user or domain group was not found.
Warning: Cannot restore object User Profile Service Application because it failed on backup.
FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
Verbose: Starting object: WSS_Content_IT_Portal.
Warning: [WSS_Content_IT_Portal] A content database with the same ID already exists on the farm. The site collections may not be accessible.
FatalError: Object WSS_Content_IT_Portal failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified component exists. You must specify a name that does not exist.
Warning: [WSS_Content_Portal] The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
RESTORE DATABASE is terminating abnormally.
FatalError: Object Portal - 80 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
ArgumentException: The IIS Web Site you have selected is in use by SharePoint. You must select another port or hostname.
FatalError: Object Access Services failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: Object parent could not be found. The restore operation cannot continue.
FatalError: Object Secure Store Service failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified component exists. You must specify a name that does not exist.
FatalError: Object PerformancePoint Service Application failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: Object parent could not be found. The restore operation cannot continue.
FatalError: Object Search_Service_Application_DB_88e1980b96084de984de48fad8fa12c5 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
Aborted due to error in another component.
Could you please help us to resolve these issues?
I'd totally agree with this. Full-fledged functionality isn't the aim of DR; the aim is getting the functional parts of your platform back up before too much time and money is lost.
Anything I can add would be a repeat of what Jesper has wisely said, but I would very much encourage you to look at these two resources: -
DR & back-up book by John Ferringer for SharePoint 2010
John's back-up PowerShell Script in the TechNet Gallery
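As a rough sketch of the piecemeal content-recovery approach (rather than a full-farm restore), content databases restored in SQL Server can be attached to the test farm individually; the database, server and URL names below are placeholders:

```powershell
# Check a restored content database for missing features/orphans first
Test-SPContentDatabase -Name "WSS_Content_IT_Portal" -WebApplication "http://portal"

# Attach it; the site collections inside become available in the test farm
Mount-SPContentDatabase -Name "WSS_Content_IT_Portal" -DatabaseServer "SQLTEST01" -WebApplication "http://portal"
```

Service applications (User Profile, Secure Store, Search) generally have to be recreated or restored one by one in the target farm rather than carried over by a full-farm restore into different hardware.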
Steven Andrews
SharePoint Business Analyst: LiveNation Entertainment
Blog: baron72.wordpress.com
Twitter: Follow @backpackerd00d
My Wiki Articles:
CodePlex Corner Series
Please remember to mark your question as "answered" if this solves (or helps) your problem. -
SharePoint 2013 Search - Disaster Recovery Restore
Hello,
We are setting up a new SharePoint 2013 with a separate Disaster Recovery farm as a hot-standby. In a DR scenario, we want to restore all content and service app databases to the new farm, then fix any configuration issues that might arise due to changes
in server names, etc...
The issue we're running into is the search service components are still pointing to the production servers even though they're in the new farm with completely different server names. This is expected, so we're preparing a PowerShell script to remove
then re-create the search components as needed. The problem is that all the commands used to apply the new search topology won't function because they can't access the administration component (very frustrating). It appears we're in a chicken &
egg scenario - we can't change the search topology because we don't have a working admin component, but we can't fix the admin component because we can't change the search topology.
The scripts below are just some of the things we've tried to fix the issue:
$sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
$local = Get-SPEnterpriseSearchServiceInstance -Local;
$topology = New-SPEnterpriseSearchTopology -SearchApplication $sa;
New-SPEnterpriseSearchAdminComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchCrawlComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchIndexComponent -SearchTopology $topology -SearchServiceInstance $local -IndexPartition 0 -RootDirectory "D:\SP_Index\Index";
$topology.Activate();
We get this message:
Exception calling "Activate" with "0" argument(s): "The search service is not able to connect to the machine that
hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a'
in search application 'Search Service Application' is in a good state and try again."
At line:11 char:1
+ $topology.Activate();
+ ~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : InvalidOperationException
Also, same as above with
$topology.BeginActivate()
We get no errors but the new topology is never activated. Attempting to call $topology.Activate() within the next few minutes will result in an error saying that "No modifications to the search topology can be made because previous changes are
being rolled back due to an error during a previous activation".
Next I found a few methods in the object model that looked like they might do some good:
$sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
$topology = Get-SPEnterpriseSearchTopology -SearchApplication $sa -Active;
$admin = $topology.GetComponents() | ? { $_.Name -like "admin*" }
$topology.RecoverAdminComponent($admin,"server1");
This one really looked like it worked. It took a few seconds to run and came back with no errors. I can even get the active list of components and it shows that the Admin component is running on the right server:
Name ServerName
AdminComponent1 server1
ContentProcessingComponent1
QueryProcessingComponent1
IndexComponent1
QueryProcessingComponent3
CrawlComponent0
QueryProcessingComponent2
IndexComponent2
AnalyticsProcessingComponent1
IndexComponent3
However, I'm still unable to make further changes to the topology (getting the same error as above when calling $topology.Activate()), and the service application in central administration shows an error saying it can't connect to the admin component:
The search service is not able to connect to the machine that hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again.
Lastly, I tried to move the admin component directly:
$sa.AdminComponent.Move($instance, "d:\sp_index")
But again I get an error:
Exception calling "Move" with "2" argument(s): "Admin component was moved to another server."
At line:1 char:1
+ $sa.AdminComponent.Move($instance, "d:\sp_index")
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : OperationCanceledException
I've checked all the most common issues - the service instance is online, the search host controller service is running on the machine, etc... but I can't seem to get this database restored to a different farm.
Any help would be appreciated!
Thanks for the response, Bhavik.
I did ensure the instance was started:
Get-SPEnterpriseSearchServiceInstance -Local
TypeName : SharePoint Server Search
Description : Index content and serve search queries
Id : e9fd15e5-839a-40bf-9607-6e1779e4d22c
Server : SPServer Name=ROYALS
Service : SearchService Name=OSearch15
Role : None
Status : Online
But after attempting to set the admin component I got the results below.
Before setting the admin component:
Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
IndexLocation : E:\sp_index\Office Server\Applications
Initialized : True
ServerName : prodServer1
Standalone :
After setting the admin component:
Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
IndexLocation :
Initialized : False
ServerName :
Standalone :
It's shown this status for a few hours now so I don't believe it's still provisioning. Also, the search service administration is still showing the same error:
The search service is not able to connect to the machine that hosts the administration component. Verify that the administration
component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again. -
This is for information to help others
KEYWORDS:
- Sharing EFS encrypted files over a personal lan wlan wifi ap network
- Access denied on create new file / new fold on encrypted EFS network file share remote mapped folder
- transfer encryption keys / certificates
- set trusted delegation for user + computer for EFS encrypted files via Kerberos
- Windows Active Directory vs network file share
- Setting up WinDAV server on Windows 7 Pro / Ultimate
It has been a long painful road to discover this information.
I hope sharing it helps you.
Using EFS on Windows 7 pro / ultimate is easy and works great. See
here and
here
So too is opening + editing encrypted files over a peer-to-peer Windows 7 network.
HOWEVER, creating a new file / new folder over a peer-to-peer Windows 7 network won't work (unless you follow the steps below).
Typically, it is only discovered as an issue when a home user wants to use synchronisation software between their home computers which happens to have a few folders encrypted using windows EFS. I had this issue trying to use GoodSync.
Typically an "Access Denied" error message is thrown when \\clientpc tries to create a new folder / new file in an encrypted folder on a remote file share \\fileserver.
Why is there such an EFS drama when a network is involved?
Assume a home peer-to-peer network with 2pc: \\fileserver and \\clientpc
When a \\clientpc tries to create a new file or new folder on a \\fileserver (remote computer) it fails. In a terribly simplified explanation it is because the process on \\fileserver that is answering the network requests is a process working for a user on
another machine (\\clientpc), and that \\fileserver process doesn't have access to an encryption certificate (as it isn't a user). Active Directory gets around this by using Kerberos so the process can impersonate a \\fileserver user and then use their certificate
(on behalf of the \\clientpc's data request).
This behaviour is confusing, as a \\clientpc can open or edit an existing EFS-encrypted file or folder, it just can't create a new file or folder. Opening and editing an encrypted file over a network file share is possible because the encrypted file / folder already has an encryption certificate assigned, so it is clear which certificate is required to open/edit the file. Creating a new file/folder requires a certificate to be assigned, and a process has no profile or certificates assigned.
Solutions
There are two main approaches to solve this:
1) SOLVE by setting up an Active Directory (efs files accessed through file shares)
EFS operations occur on the computer storing the files.
EFS files are decrypted then transmitted in plaintext to the client's computer
This makes use of kerberos to impersonate a local user (and use their certificate for encrypt + decrypt)
2) SOLVE by setting up WebDAV (efs files accessed through web folders)
EFS operations occur on the client's local computer
EFS files remain encrypted during transmission to the client's local computer where it is decrypted
This avoids active directory domains, roaming or remote user profiles and having to be trusted for delegation.
BUT it is a pain to set up, and most online WebDAV server setup guides are not written for home peer-to-peer networks and do not contain details on how to set up WebDAV for EFS file serving.
READ BELOW, as this guide does.
Create new encrypted file / folder on a network file share - via Active Directory
It is easily possible to sort this out on a domain-based (corporate) Active Directory network. It is well documented. See
here. However, the problem is that on a normal Windows 7 install (i.e. home peer-to-peer), setting the server up as part of an Active Directory domain is complicated, time consuming and bulky, adds burden to the operation of the \\fileserver computer, adds network complexity, and is generally a pain for a home user. Don't. Use WebDAV.
Although this info is NOT for setting up EFS on an active directory domain [server],
for those interested here is the gist:
Use the Active Directory Users and Computers snap-in to configure delegation options for both users and computers. To trust a computer for delegation, open the computer’s Properties sheet and select Trusted for delegation. To allow a user
account to be delegated, open the user’s Properties sheet. On the Account tab, under Account Options, clear the The account is sensitive and cannot be delegated check box. Do not select The account is trusted for delegation. This property is not used with
EFS.
NB: decrypted data is transmitted over the network in plaintext, so reduce the risk by enabling IP Security to use Encapsulating Security Payload (ESP), which will encrypt the transmitted data.
Create new encrypted file / folder on a network file share - via WebDAV
For home users it is possible to make it all work.
Even better, the functionality is built into windows (pro + ultimate) so you don't need any external software and it doesn't cost anything. However, there are a few hotfixes you have to apply to make it work (see below).
Setting up a wifi AP (for those less technical):
a) START ... CMD
b) type (no quotes): "netsh wlan set hostednetwork mode=allow ssid=MyPersonalWifi key=12345678 keyUsage=persistent" (the key must be 8-63 characters)
c) type (no quotes): "netsh wlan start hostednetwork"
Set up a WebDAV server on Windows 7 Pro / Ultimate
-----ON THE FILESERVER------
1 click START and type "Turn Windows Features On or Off" and open the link
a) scroll down to "Internet Information Services" and expand it.
b) put a tick in: "Web Management Tools" \ "IIS Management Console"
c) put a tick in: "World Wide Web Services" \ "Common HTTP Features" \ "WebDAV Publishing"
d) put a tick in: "World Wide Web Services" \ "Security" \ "Basic Authentication"
e) put a tick in: "World Wide Web Services" \ "Security" \ "Windows Authentication"
f) click ok
g) run HOTFIX - ONLY if NOT running Windows 7 / windows 8
KB892211 here ONLY for XP + Server 2003 (made in 2005)
KB907306 here ONLY for Vista, XP, Server 2008, Server 2003 (made in 2007)
2 Click START and type "Internet Information Services (IIS) Manager"
3 in IIS, on the left under "connections" click your computer, then click "WebDAV Authoring Rules", then click "Open Feature"
a) on the right side, under Actions, click "Enable WebDAV"
4 in IIS, on the left under "connections" click your computer, then click "Authentication", then click "Open Feature"
a) select "Anonymous Authentication" and click "Disable"
b) select "Windows Authentication" and click "Enable"
NB: Some Windows 7 clients will not connect to a WebDAV share using Basic Authentication.
This can be fixed by changing a registry value:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters]
BasicAuthLevel=2
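If you prefer, the registry change above can be applied by importing a .reg file (a sketch built from the key and value given above):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters]
; 2 = allow Basic Authentication over both HTTP and HTTPS
"BasicAuthLevel"=dword:00000002
```

Restart the WebClient service (or reboot) after importing so the change takes effect.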
c) click "Windows Authentication", then click "Advanced Settings"
set Extended Protection to "Required"
NB: Extended Protection enhances Windows Authentication with two security mechanisms to reduce "man in the middle" attacks
5 in IIS, on the left under "connections" click your computer, then click "Authorization Rules", then click "Open Feature"
a) on the right side, under Actions, click "Add Allow Rule"
b) set this to "all users". This will control who can view the "Default Site" through a web browser
NB: It is possible to specify a group (eg Administrators is popular) or a user account. However, if this is not set to "all users", the specified group/user account is the one that must be used to log in from the
clientpc.
NB: Any user account specified here has to exist on the server. There is a bug in that usernames specified here are not validated on input.
6 in IIS, on the left under "connections" click your computer, then click "Directory Browsing", then click "Open Feature"
a) on the right side, under Actions, click "Enable"
HOTFIX - double escaping
7 in IIS, on the left under "connections" click your computer, then click "Request Filtering", then click "Open Feature"
a) on the right side, under Actions, click "Edit Feature Settings"
b) tick the box "Allow double escaping"
*THIS IS VERY IMPORTANT* if your filenames or foldernames contain characters like "+" or "&"
Such folders will appear blank with no subdirectories, and such files will not be readable, unless this is ticked.
This is safe, by the way. Unticked (the default), it filters out requests that might possibly be misinterpreted by buggy code (eg code that double-decodes, or builds URLs via string concatenation without proper encoding). But any such bug would need to be in IIS's basic
file serving, which has been rigorously tested by Microsoft, so this is very unlikely. It's safe to "Allow double escaping".
8 in IIS, on the left under "connections" right click "Default Web Site", then click "Add Virtual Directory"
a) set the Alias to something sensible eg "D_Drive", set the physical path
b) it is essential you click "connect as" and set
this to a local user (on fileserver),
if left as "pass through authentication" a client won't be able to create a new file or folder in an encrypted efs folder (on fileserver)
NB: the user account selected here must have the required EFS certificates installed.
See
here and
here
NB: Sharing the root of a drive as a virtual directory (eg D:\ as "D_Drive") often can't be opened on clientpcs.
This is due to Windows setting all drive roots as hidden "administrative shares". Grrr.
The workaround is to create an NTFS symbolic link on the \\fileserver
e.g. to share the entire contents of "D:\",
on fileserver browse to site path (iis default this to c:\inetpub\wwwroot)
in cmd in this folder create an NTFS symbolic link to "D:\"
so in cmd type "cd c:\inetpub\wwwroot"
then in cmd type "mklink /D D_Drive D:\"
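The two commands above as a single sketch (assuming the default IIS site path; adjust if yours differs):

```bat
@echo off
rem Create an NTFS symbolic link inside the IIS site root pointing at D:\
rem (run from an elevated command prompt)
cd /d c:\inetpub\wwwroot
mklink /D D_Drive D:\
```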
NB: WebDAV will open this using a \\fileserver local user account, so double check the local NTFS permissions for that local account (the one clients will log in with)
NB: If a clientpc can see files but gets an error on opening them, on the clientpc click START, type "Manage Network Passwords", delete any "windows credentials" for the fileserver being used, then restart the
clientpc
9 in IIS, on the left under "connections" click on "WebDAV Authoring Rules", then click "Open Feature"
a) click "Add authoring rules". Control access to this folder by selecting "all users" or "specified groups" or "specified users", then control whether they can read/write/source
b) if some rules already exist, review the existing allow or deny rules.
Take care to review not only the "allow access to" settings
but also the "permissions" (read/write/source)
NB: this can be set here for all added virtual directories, or can be set under each virtual directory
10 Open your firewall software and/or your router. Make an exception for port 80 and 443
a) In Windows Firewall with Advanced Security click Inbound Rules, click New Rule
choose Port, enter "80, 443" (no speech marks), follow through to completion. Repeat for outbound.
NB: take care over your choice to untick "Public"; this can cause issues if no gateway is specified on the network (ie computer-to-computer with no router). See "Other problems+fixes"
below, specifically "Can't find server due to network location"
b) Repeat firewall exceptions on each client computer you expect to access the webDAV web folders on
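The inbound rule from step (a) can also be created from the command line (a sketch; the rule name here is arbitrary):

```bat
rem Allow inbound TCP 80 and 443 through Windows Firewall
netsh advfirewall firewall add rule name="WebDAV HTTP/HTTPS in" dir=in action=allow protocol=TCP localport=80,443
rem And the matching outbound rule:
netsh advfirewall firewall add rule name="WebDAV HTTP/HTTPS out" dir=out action=allow protocol=TCP localport=80,443
```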
HOTFIX - MAJOR ISSUE - fix KB959439
11 To fully understand this read "WebDAV HOTFIX: RAW DATA TRANSFERS" below
a) On Windows 7 you need only change one tiny registry value:
- click START, type "regedit", open link
-browse to [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MRxDAV\Parameters]
-on the EDIT menu click NEW, then click DWORD Value
-Type "DisableEFSOnWebDav" to name it (no speech marks)
-on the EDIT menu, click MODIFY, type 1, then click OK
-You MUST now restart this computer for the registry change to take effect.
b) On Windows Server 2008 / Vista / XP you'll FIRST need to
download Windows6.0-KB959439 here. Then do the above step.
NB: Microsoft will ask for your email. They don't care about licence key legality; it is more to keep you updated if they modify that hotfix
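The registry change in step (a) as a .reg file, if you prefer importing to editing by hand (a sketch built from the key and value given above):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MRxDAV\Parameters]
; 1 = disable raw EFS transfers over WebDAV, per step (11)
; a restart is still required for this to take effect
"DisableEFSOnWebDav"=dword:00000001
```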
12 To test on local machine (eg \\fileserver) and deliberately bypass the firewall.
a) make sure WebClient Service is running
(click START, type "services" and open, scroll down to WebClient and check its status)
b) Open your internet software. Go to the address "http://localhost" or "http://localhost:80"
It should show the default "IIS7" image.
If not, as firewall and port blocking are bypassed (using localhost) it must be a webDAV server setting. Check "Authorization Rules" are set to "Allow All Users"
c) for one of the "virtual directories" you added (8), add its "alias" onto "http://localhost/"
e.g. http://localhost/D_drive
If nothing is listed, check "Directory Browsing" is enabled
13 To test on local machine or a networked client and deliberately try and access through the firewall or port opening of your router.
a) make sure WebClient Service is running
(click START, type "services" and open, scroll down to WebClient and check its status)
b) open your internet software. Go to the address "http://&lt;computer&gt;" or "http://&lt;computer&gt;:80".
eg if your server's computer name is "fileserver" go to "http://fileserver:80"
It should show the default "IIS7" image. If not, check firewall and port blocking.
Any issue here (ie if (12) works but (13) doesn't) will indicate a possible firewall issue or router port blocking issue.
c) for one of the "virtual directories" you added (8), add its "alias" onto "http://<computername>:80/"
eg if the alias is "C_drive" and your server's computer name is "fileserver" go to "http://fileserver:80/C_drive"
A directory listing of files should appear.
--- ON EACH CLIENT ----
HOTFIX - improve upload + download speeds
14 Click START and type "Internet Options" and open the link
a) click the "Connections" tab at the top
b) click the "LAN Settings" button at the bottom right
c) untick "Automatically detect settings"
HOTFIX - remove 50mb file limit
15 On Windows 7 you need only change one tiny registry value:
a) click START, type "regedit", open link
b) browse to [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters]
c) click on "FileSizeLimitInBytes"
d) on the EDIT menu, click MODIFY, type "ffffffff", then click OK (no quotes)
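The same change as a .reg file. Note that ffffffff is hex for 4294967295 bytes, so this raises the limit to just under 4 GB (the maximum the DWORD allows) rather than removing it entirely:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters]
; 0xffffffff = 4294967295 bytes (~4 GB), the DWORD maximum
"FileSizeLimitInBytes"=dword:ffffffff
```

Restart the WebClient service after the change.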
HOTFIX - remove prompt for user+pass on opening an office or pdf document via WebDAV
16 On each clientpc click START, type "Internet Options" and open it
a) click on "Security" (top) and then "Custom level" (bottom)
b) scroll right to the bottom and under "User Authentication" select "Automatic logon with current username and password"
SUCH an easy fix. SUCH an annoying problem on a clientpc
NB: this is only an issue if the file is opened through Windows Explorer. If it is opened through the "open" dialogue of the software itself, it doesn't happen. This is because a WebDAV mapped drive is considered a "web folder" by Windows
Explorer.
TEST SETUP
17 On the client use the normal "map network drive"
e.g. server= "http://fileserver:80/C_drive", tick reconnect at logon
e.g. CMD: net use * "http://fileserver:80/C_drive"
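A slightly fuller sketch of the map command (the drive letter and user name here are hypothetical; the /user account must exist on the fileserver, as set up in step (8)):

```bat
rem Map Z: to the WebDAV share and reconnect at logon; prompts for the password
net use Z: "http://fileserver:80/C_drive" /user:fileserver\webdavuser /persistent:yes
rem To remove the mapping again:
rem   net use Z: /delete
```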
If it doesn't work, check the "WebDAV Authoring Rules" and check the NTFS permissions for these folders. Check that on the fileserver the selected impersonation user that the client is logging in with (clientpc
"manage network passwords") has NTFS permissions.
18 Test that EFS is now working over the network
a) On a clientpc, map network drive to http://fileserver/
b) navigate to a folder you know is encrypted with EFS on the \\fileserver
c) create a new folder, create a new file.
IF it throws an error, check carefully you mapped to the WebDAV and not file share
i.e. mapped to "http://fileserver" not "\\fileserver"
Check that on clientpc the required efs certificate is installed. Then check carefully on clientpc what user account you specified during the map drive process. Then check on the \\fileserver this
account exists and has the required EFS certificate installed for use. If necessary, on clientpc click START, type "Manage Network Passwords" and delete the windows credentials currently in the vault.
d) on the clientpc (through a WebDAV mapped folder) open an encrypted file, edit it, save it, close it. On the \\fileserver now check that the file is readable and not gobbledygook
e) on the clientpc copy an encrypted EFS file into a folder (via a WebDAV mapped folder) you know is not encrypted on the \\fileserver. Now check on the \\fileserver computer that the file is readable and not gobbledygook (ie the
clientpc decrypted it then copied it).
If this fails, it is likely that in IIS on the fileserver one of the shared virtual directories is set to "pass through authentication" when it should be set to "connect as"
If the file is not readable, check step (11) and that you restarted the \\fileserver computer.
19 Test that clients don't get the VERY annoying prompt when opening an Office or PDF doc
a) on clientpc in windows explorer browse to a mapped folder you know is encrypted and open an office file and then PDF.
If a prompt for user+pass appears, check hotfix (16)
20 Consider setting up a recycling bin for this mapped drive, so files are sent to recycling bin not permanently deleted
a) see the last comment at the very bottom of
this page:
Points to consider:
- NB: WebDAV runs on the \\fileserver under a local user account, so double check the local NTFS permissions for that local account and adjust file permissions accordingly. If the local account doesn't have permission, the WebDAV / web folder share won't
either.
- CONSIDER: IP Security (IPSec) or Secure Sockets Layer (SSL) to protect files during transport.
MORE INFO: HOTFIX: RAW DATA TRANSFERS
More info on step (11) above.
Because files remain encrypted during the file transfer and are decrypted by EFS locally, both uploads to and downloads from web folders are raw data transfers. This is an advantage, as intercepted data is useless. It is also a massive disadvantage, as
it can cause unexpected results. IT MUST BE FIXED or you could be in deep deep water!
Consider using \\clientpc to access a webfolder on \\fileserver and copying an encrypted EFS file (over the network) to a web folder on \\fileserver that is not encrypted.
Doing this locally would automatically decrypt the file first then copy the decrypted file to the non-encrypted folder.
Doing this over the network to a web folder will copy the raw data, ie skip the decryption stage, and result in the encrypted EFS file being raw-copied to the non-encrypted folder. When viewed locally this file will not be recognised as encrypted (no encryption
file flag, not green in Windows Explorer) but it will be unreadable, as its contents are still encrypted. It is now not possible to read this file locally. It can only be viewed on the \\clientpc
There is a fix:
It is implemented above; see (11)
Microsoft's support page on this is excellent and short. Read "problem description" of "this microsoft webpage"
Other problems + fixes
PROBLEM: Can't find server due to network location.
This one took me a long time to track down to "network location".
Win 7 uses network locations "Home" / "Work" / "Public".
If no gateway is specified in the IP address, the network is set to "unidentified" and so receives "Public" settings.
This is a disaster for remote file share access as typically "network discovery" and "file sharing" are disabled under "Public"
FIX = either set IP address manually and specify a gateway
FIX = or force "unidentified" network locations to assume "home" or "work" settings -
read here or
here
FIX = or change the "Public" "advanced network settings" to turn on "network discovery" and "file sharing" and "Password Protected Sharing". This is safe as it will require a windows
login to gain file access.
PROBLEM: Deleting files on network drive permanently deletes them, there is no recycling bin
By changing the location of "My Contacts" or similar to the root directory of your mapped drive, it will be added to recycling bin locations
Read
here (I've posted a batch script to automatically make the required reg files)
I really hope this helps people. I hope the keywords + long title give it the best chance of being picked up in web searches.

What probably happens is that processes are using those mounts, and those processes are not killed before the mounts are unmounted. Is there anything that uses those mounts?
-
Disaster Recovery set-up for SharePoint 2013
Hi,
We are migrating our SP2010 application to SP2013. It would be on-premise setup using virtual environment.
To handle disaster recovery situation, it has been planned to have two identical farms (active, passive) hosted in two different datacenters.
I have prior knowledge of disaster recovery only at the content DB level.
My Question is how do we make two farm identical and how do we keep Database of both the farm always in sync.
Also if a custom solution is pushed into one of the farm, how does it replicate to the other farm.
Can someone please help me in understanding this D/R situation.
Thanks,
Rahul

Metalogix Replicator will replicate content, but nothing below the Web Application level (you'd still have to configure the SAs, Central Admin settings, deploy solutions, etc.).
While AlwaysOn is a good choice, do remember that ASync AO is not supported across all of SharePoint's databases (see
http://technet.microsoft.com/en-us/library/jj841106.aspx). LogShipping is a good choice, especially with content databases as they can be placed in a read only mode on the DR farm for an
active crawl to be completed against them.
Trevor Seward
Follow or contact me at...
  
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
I installed an Active Directory Domain Controller (ADDC) on Windows Server 2008 Enterprise and now cannot log in to SQL Server 2008 R2. Before installing ADDC I could log on to SQL Server 2008 R2 successfully; after installing ADDC, logging on no longer succeeds.
I have uninstalled ADDC but I still can't log in to SQL Server 2008 R2.
Please help me, this is a disaster!
I think the SQL Server 2008 R2 account has been lost!

Hello,
I strongly recommend you post the detailed error message you receive when you try to connect to the SQL Server instance; it's useful for us to do further investigation.
Microsoft recommends that you do not install SQL Server 2008 R2 on a domain controller, there are some limitations:
You cannot run SQL Server services on a domain controller under a local service account or a network service account.
After SQL Server is installed on a computer, you cannot change the computer from a domain member to a domain controller. You must uninstall SQL Server before you change the host computer to a domain controller.
After SQL Server is installed on a computer, you cannot change the computer from a domain controller to a domain member. You must uninstall SQL Server before you change the host computer to a domain member.
SQL Server failover cluster instances are not supported where cluster nodes are domain controllers.
SQL Server Setup cannot create security groups or provision SQL Server service accounts on a read-only domain controller. In this scenario, Setup will fail.
On Windows Server 2003, SQL Server services can run under a domain account or a local system account.
So, I would suggest you try to open up Windows Services list and changed the account for SQL Server service.
Regards,
Elvis Long
TechNet Community Support -
Disaster Recovery For SAP ECC 6.0 On Oracle
Hi All,
This is our infrastructure
Windows 2003 Server
SAP ECC 6.0
Oracle 10
Legato Networker Client / Library
Actually our Backup Strategy is to do an Online Backup Every Night from monday to saturday. We want to test our backup by doing a Restore. We are assuming a complete loss of the system including hardware.
What we do:
1. Install the SAP System on a new host with the same hardware characteristic of the source system.
2. Install & Configure the legato utility.
3. Copy the E:\oracle\MIS\sapbackup\ directory from the source system to the test system.
4. Them we put the database in mount mode.
5. Execute the command: brrestore -b bdyxwoqv.fnf -m full (bdyxwoqv.fnf-> Full Online Backup that was executed without problems).
It gives the following errors:
BR0386E File 'F:\ORACLE\MIS\SAPDATA2\SR3_10\SR3.DATA10' reported as not found by
backup utility
BR0386E File 'F:\ORACLE\MIS\SAPDATA3\SR3700_9\SR3700.DATA9' reported as not foun
d by backup utility
BR0280I BRRESTORE time stamp: 2008-10-01 17.45.19
BR0279E Return code from 'backint -u MIS -f restore -i E:\oracle\MIS\sapbackup\.
rdyybrzp.lst -t file -p E:\oracle\MIS\102\database\initMIS.utl': 2
BR0374E 0 of 63 files restored by backup utility
BR0280I BRRESTORE time stamp: 2008-10-01 17.45.19
BR0231E Backup utility call failed
BR0406I End of file restore: rdyybrzp.rsb 2008-10-01 17.45.19
BR0280I BRRESTORE time stamp: 2008-10-01 17.45.19
BR0404I BRRESTORE terminated with errors
Since this is a new SAP system it never will find the SAPDATA files because they where on the source system and this is a new test system.
We found the following note:
96848 Disaster recovery for SAP R/3 on Oracle
But this note is for SAP R/3 no for SAP ECC!
It explain that you have to install your SAP system with the System Copy Method (is this the only way??).
2.) Installation of the R/3 System
The installation of SAP software contains the software installation of the database. The initial SAP database should be created again but the SAP data should not be loaded.
Install the SAP system with the Oracle-specific system copy method, which is based on backup/restore. This method is described in the system copy guide for your Product/Release. Refer to Note 659509 for products that are based on Web AS.
3.) Modification of the installation
Above all, you must take into account the mounted file systems at the time of the loss. If necessary create new SAPDATA directories (mount points). These generally identify a disk or a logical storage area (logical volume).
Are we working in the right way? maybe there is a formal procedure to do a Restore from an Online backup when you complete loss your system.
Please some tips.
Best Regards,
Erick Ilarraza

Hi Eric,
Thanks a lot for your reply, I will follow the Note 96848 Disaster recovery for SAP R/3 on ORACLE point 5.
5.) Restore profile and log files; as you said, in case of disaster we will lose our "source" system.
On the other hand to configure the legato client we take care of the name of the server, we configured the .sap, .cfg and .utl files so in theory the restore will be done on the new test system since we run the brrestore command from that system.
Officially there is no documentation from SAP on doing a restore from an online backup with SAP ECC / SAP NetWeaver (ABAP / ABAP + Java). Do you only have the 96848 note??? I found the following information:
http://help.sap.com/saphelp_nw70/helpdata/en/65/cade3bd0c8545ee10000000a114084/frameset.htm
But there is not a official procedure like a System Copy Guide or Installation Guide.
Best Regards,
Erick Ilarraza -
Datafiles Mismatch in production & disaster recovery server
Datafiles Mismatch in production & disaster recovery server though everything in sync
We performed all the necessary prerequisites & processes related to the DRS switchover successfully. Our DRS server is constantly kept up to date through log application in mount state. When we opened the DRS database in mount state it opened successfully; however, when we issued the command
Recover database
It dumped with foll errors:
ora-01122 database file 162 failed verification check
ora-01110 data file 162 </oracle/R3P/sapdata7/r3p*****.data>
ora-01251 unknown file header version read for file 162
as I do not remember the exact name now
However, upon comparing the same file on production (which was now in a shutdown state), the file sizes were mismatched. So, with the database at DRS shut down, we went ahead and copied the datafile from production to DRS through the RCP utility of AIX 5.3 (our operating system is AIX 5.3).
Though this remote copy was a success,
we again started the database in mount state & issued the command
Recover database
But it still dumped the same error, now for another datafile on DRS.
I would appreciate it if somebody could point out why the datafile sizes mismatched, despite our DRS being constantly updated from production through continuous log application. Before this activity the logs were in sync on both systems, and the origlogs, mirrorlogs & controlfiles were pulled from the production system as-is; in fact the whole structure was a replica of the production system.
Details
SAP version :- 4.7
WAS :- 620
Database :- Oracle 9.2
Operating system :- AIX 5.3

I am in the process of DR planning.
My present setup :
O/S : HP-UX 11.31
SAP ECC 6
Oracle 10g Database.
I know about Data Guard and I have implemented it for our other applications, but not for SAP.
I have some doubts and I request you to please guide me on the same.
1. SAP license key -- it requires a hardware key. So will I have to generate a license key for another hardware key, ie for the machine hosting the standby database?
2. The database on the standby site will be in managed recovery mode. Will I have to make any changes to any other SAP-related files on the standby host? (Some I know of, such as that when I upgrade the SAP kernel, the same upgrade has to be done on the standby host.)
Will you give me some link to documentation for DR -
Active Directory: user has admin rights when logs in for the first time
I have an Xserve server running OS X Server 10.5.8 and am trying to host _open and active directory_ for both Mac and PC machines. The open directory works fine, but what happens on the active directory side is that when a user logs in from a Windows machine he/she can access all the other users' folders. In other words, he/she almost has *admin rights*. Is this normal, or are there some settings that I can look into to fix this?
Details: The first time a user logs in, his only effect on the server is the password change. What this means is that his changes don't get uploaded to the server. It is only the second time the user logs in, from ANOTHER computer, that the server starts saving his profile. Also, after the second login the user doesn't have admin rights anymore.
Thanks,
MR

If you've just changed your login password in Recovery mode, follow these instructions. Otherwise, see below.
At some point, you may have reset your keychain to default in Keychain Access. That action would have caused your login keychain to be renamed.
Back up all data before proceeding.
In Keychain Access, delete the login keychain from the keychain list. Choose Delete References when prompted, not Delete References & Files.
Triple-click anywhere in the line below on this page to select it, then copy the text to the Clipboard by pressing the key combination command-C:
~/Library/Keychains
In the Finder, select
Go ▹ Go to Folder...
from the menu bar, paste into the box that opens (command-V), and press return. A folder will open. Rename the file "login.keychain" in that folder to something like "login-old.keychain". Rename the file "login_renamed_1.keychain" to "login.keychain". You can then close the folder.
Back in Keychain Access, select
File ▹ Add Keychain...
from the menu bar. Add back the file now named "login.keychain". If any of your needed keychain items are missing from it, also add back the file you named "login-old.keychain". I suggest you transfer any needed items from that keychain to the login keychain, then delete it. The transfers are made by drag-and-drop in Keychain Access. You'll need to enter your password for each item transferred.