SQL Server high availability failover trigger
Hello,
We are implementing SQL Server 2012 Availability Groups (AG). Our secondary databases are not accessible, in order to save on licensing.
We have a lot of issues concerning monitoring, backup and SSIS. They all come down to the fact that these tools need basic information from the secondary, which is not accessible. We are implementing SSIS, which is supported on AGs, but the SSISDB catalog is encrypted.
Backup problem
The secondary instance knows nothing about the backups made on the primary instance, so after a failover, differential backups fail.
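One commonly suggested mitigation (a sketch only; database and share names here are hypothetical) is to guard every backup job with sys.fn_hadr_backup_is_preferred_replica and back up to a share that all replicas can reach, so the same job can be deployed unchanged on every node and the backup files remain visible after a failover:

```sql
-- Returns 1 only on the replica selected by the AG's backup preferences,
-- so the job runs where appropriate and exits quietly everywhere else.
IF sys.fn_hadr_backup_is_preferred_replica(N'YourDatabase') = 1
BEGIN
    BACKUP DATABASE [YourDatabase]
        TO DISK = N'\\backupshare\YourDatabase.dif'  -- share reachable from every replica
        WITH DIFFERENTIAL;
END
```

Note that differential backups can only be taken on the primary replica; secondaries support copy-only full and log backups.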
SSIS problem:
There is a blog post (http://blogs.msdn.com/b/mattm/archive/2012/09/19/ssis-with-alwayson.aspx) that suggests creating a job that checks whether the replica status has changed from secondary to primary. If so, you can decrypt and re-encrypt the SSISDB master key.
This job has to be executed every minute, which is way too much effort for an event that only happens once in a while. There are a few other problems with this solution: the phrase "USE SSISDB" has to be included in a job step, and that job step fails because the secondary is not accessible.
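The role check that blog post describes could be sketched roughly like this (the master-key password placeholder and the exact re-encryption step are assumptions based on that post, not a tested implementation):

```sql
-- First step of the maintenance job: do nothing unless this replica
-- is currently PRIMARY for SSISDB, so the step no longer fails on a
-- non-readable secondary.
IF (SELECT ars.role_desc
    FROM sys.dm_hadr_availability_replica_states ars
    JOIN sys.availability_databases_cluster adc
        ON ars.group_id = adc.group_id
    WHERE ars.is_local = 1
      AND adc.database_name = N'SSISDB') = N'PRIMARY'
BEGIN
    -- Re-attach the SSISDB database master key to this instance's
    -- service master key after a failover.
    EXEC sp_executesql N'
        USE SSISDB;
        OPEN MASTER KEY DECRYPTION BY PASSWORD = N''<ssisdb-master-key-password>'';
        ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;';
END
```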
Monitoring problems:
We use Microsoft tooling for monitoring: SCOM. SCOM does not recognize a non-readable secondary and tries to log in continuously.
There are a few solutions that I can think of:
- a built-in SQL Server failover trigger
- a special status for the secondary database.
Failover trigger:
We would like a built-in failover trigger, instead of a time-based job, that starts a few standard maintenance actions only at the time of (or directly after) a failover. Because right now our HA cluster is not really highly available until:
- SSISDB works and is accessible after failover
- Backup information is synchronised
- SCOM monitoring skips the secondary database (SCOM produces loads of login failures)
Does anyone have any suggestion on how to fix this?
No built-in trigger can achieve your requirement.
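That said, a commonly used approximation (a sketch with hypothetical alert and job names, not a turnkey solution) is a SQL Agent alert on error 1480, which is raised when a replica changes role, so a post-failover maintenance job (fix SSISDB encryption, sync backup info, and so on) starts on the role change instead of polling every minute:

```sql
USE msdb;
EXEC dbo.sp_add_alert
    @name       = N'AG role change',
    @message_id = 1480,   -- 'The availability replica changed roles'
    @severity   = 0,
    @job_name   = N'PostFailoverMaintenance';  -- job created beforehand
```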
Similar Messages
-
Hello All.
I have a question regarding OC4J and HTTP Server high availability.
I want to do something like Figure 3-1 of the Oracle Application Server High Availability Guide 10.1.2. See this link:
http://download-east.oracle.com/docs/cd/B14099_11/core.1012/b14003/midtierdesc.htm#CIHCEDFC
What I have now is the following:
Three hosts
Two of them are an OAS 10.1.2 which we already configured the Cluster and deployed our applications (used this tutorial: http://www.oracle.com/technology/obe/obe_as_1012/j2ee/deploy/j2eecluster/farmcluster.htm)
Let's say these nodes are:
- host1
- host2
The other one is a standalone Oracle WebCache (which will act as the load balancer). We will call this
- hostwc3
We have already configured WebCache as the load balancer and it is working just fine. We also configured session replication successfully and it works great with our applications.
What we have not clear is the following:
When a client tries to visit http://hostwc3/application/, the load balancer routes him to, say, http://host1/application/, and the browser's URL no longer shows the virtual server (the WebCache server) but the actual Apache address (host1) that is serving him. If we "kill" the ENTIRE host1 (Apache, OC4J, etc.), clients will perceive the outage, and if they press F5 they will try to reach an Apache that is no longer up and running. The expected behavior is that the browser NEVER shows the actual Apache URL, so that when an Apache goes down the client is not disconnected (as already happens with an OC4J failure) and always works against the "virtual web server".
I came up with some ideas but I want you Guys to give me an advice:
- In WebCache, do not route to Apache for load balancing, but route to OC4J directly (is this possible?)
- Configure an HTTP Server cluster; this means we would need a "virtual name" for the two Apaches. Is this possible? How?
- Use Apache's rewrite mode. Is this a good idea?
- Any other idea for fixing the Apache single point of failure?
According to Figure 3-1 (link above) we can have HTTP Server in a cluster, but I have no idea how to manage or configure it.
Thanks in advance for any help!
You cannot point Outlook Anywhere to your DAG cluster IP address. It must be pointed to the actual IP address of either server.
For no extra cost DNS round robin is the best you will get, but it does have some drawbacks as it may give the IP address of a server you have taken down for maintenance or the server has an issue.
You could look to implement a load balancer but again if you are doing this for high availability then you want more than one load balancer in the cluster - otherwise you've just moved your single point of failure.
Having your existing NAT and just remembering to update it to point to the other server during maintenance may suit your needs for now.
If you can go into more detail about what the high availability your business is looking to achieve and the budget we can suggest the best method to meet those needs for the price point.
Have a great day
Oliver
Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi -
MS DTC not coming online on SQL Server 2008 R2 failover cluster
Dear Experts,
On a SQL Server 2008 R2 failover cluster, MS DTC cluster service is not coming online. It fails with below error message.
"The DTC cluster resource's log file path was originally configured at: E:. Attempting to change that to: M:. This indicates a change in the path of the DTC cluster resource's dependent disk resource. This is not supported. The error code returned: 0x8000FFFF".
From Component Services, under the clustered DTCs, we can see in the log file properties that it is configured for the E drive. The 'Transaction List' and 'Transaction Statistics' are empty. When I try to change the log file path to point to the M drive, I get this warning message:
"An MSDTC log file already exists in the selected directory. Resetting an existing MS DTC log file may cause your databases and other transactional resource managers to become inconsistent. Please review the MS DTC Administrator's manual before proceeding. Do you wish to reset the existing MS DTC log?"
Could you please advise whether it is safe to proceed past this warning, given that the 'Transaction List' and 'Transaction Statistics' are empty, or whether it would cause any other issue.
Thanks,
MMTauseef
Did you try using WMI (Win32_Share)?
$Path = "location where to create the folder"
$Shares = [WMICLASS]'Win32_Share'      # WMI class for managing shares
$ShareName = 'ShareName'
New-Item -ItemType Directory -Path $Path
$Shares.Create($Path, $ShareName, 0, 255, $ShareName)   # type 0 = disk share, max 255 connections
$Acl = Get-Acl $Path                   # cmdlet for getting the access control list
$Access = New-Object System.Security.AccessControl.FileSystemAccessRule("Username", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$Acl.AddAccessRule($Access)
Set-Acl $Path $Acl
Thanks, Azam. When you see answers, please mark as answer if helpful and vote as helpful. -
SQL Server Agent available on Azure?
I would like to delete records from an database table on Azure when the table reaches a certain size.
I would normally do this by scheduling a stored procedure job using SQL Server Agent.
Is the SQL Server Agent available on databases held on Azure?
Should I use an alternative approach?
Thanks
Stew
Hello,
Please take a look at:
Scheduling a job on SQL Azure
We can execute SQL procedures on Azure based on our needs (schedule configuration):
Create a mobile service.
Create a scheduler. Mention the database to be used for this.
Configure the scheduler frequency.
Click on the Script tab. The script can be written in JavaScript or .NET, and should contain the code to run a proc.
I used the code below to run the dbo.ExecuteDataRequest procedure on an Azure database.
function Execute_Process_Request() {
    console.log("Executing ExecuteDataRequest...");
    mssql.query('EXEC dbo.ExecuteDataRequest', {
        success: function(results) {
            console.log("Finished the Process Request job.");
        },
        error: function(err) {
            console.log("error is: " + err);
        }
    });
}
Find the newly added user that runs the scheduler (under the Logins folder of Security on the Azure DB server). You need to grant that user execute permission on the database where the job runs.
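The grant could look like this (the login name here is hypothetical; use the one the mobile service actually created):

```sql
-- Run in the target Azure database as an administrator.
CREATE USER [SchedulerUser] FROM LOGIN [SchedulerUser];
GRANT EXECUTE ON OBJECT::dbo.ExecuteDataRequest TO [SchedulerUser];
```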
Ahsan Kabir Please remember to click Mark as Answer and Vote as Helpful on posts that help you. This can be beneficial to other community members reading the thread. http://www.aktechforum.blogspot.com/ -
Kerberos Configuration Manager for SQL Server is available
This thread describes the Microsoft Kerberos Configuration Manager diagnostic tool for SQL Server. This tool is available for download from the Microsoft Download Center:
Download the package now.
About Kerberos Configuration Manager
The Kerberos Configuration Manager for SQL Server is a diagnostic tool that helps troubleshoot Kerberos related connectivity issues with SQL Server, SQL Server Reporting Services (SSRS), and SQL Server Analysis Services (SSAS). It can perform the following
functions:
Collect information on operating systems and Microsoft SQL Server instances that are installed on a server.
Report on all Service Principal Name (SPN) and delegation configurations on the server.
Identify potential problems in SPNs and delegations.
Fix potential SPN problems.
More information
This tool helps troubleshoot the following exceptions:
401
Note: This error message is for http errors, SSRS errors, and some other similar errors.
Login failed for user 'NTAUTHORITY\ANONYMOUS'
Login failed for user '(null)'
Login failed for user ''
Cannot generate SSPI Context
For more information about Kerberos Configuration Manager, go to the following related blogs:
Released: Kerberos Configuration Manager for SQL Server
Kerberos Configuration Manager updated for Reporting Services
Kerberos Configuration Manager updated for Analysis Services and SQL Server 2014
Reference from the following KB article: Kerberos Configuration Manager for SQL Server is available
Elvis Long
TechNet Community Support
Thanks for posting, Elvis. Can you post this to the SQL Security forum too?
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
RBS Licensing on a SQL Server Standard 2012 Failover Cluster
I am planning a SharePoint 2013 installation which will primarily be used for document storage. We are considering using SharePoint RBS with FILESTREAM on SQL Server 2012 Standard, and as far as I can see this works without any issue on our non-production environments - i.e. we enable FILESTREAM and RBS in SharePoint and SQL and everything works as expected. In non-production we have single SQL Server nodes.
However, in production we have planned to have a 2 node SQL 2012 Standard failover cluster.
On this page, it indicates
"To run RBS on a remote server, you must be running SQL Server 2008 R2 Enterprise on the server that is running SQL Server where the
metadata is stored in the database."
My question is : Am I entitled to use RBS with Filestream on a SQL 2012 Standard Failover Cluster, or is SQL Enterprise required.
If enterprise is required, we will have to remove the RBS. I have reviewed the links below, but cannot see a definitive answer from a licensing perspective
http://social.technet.microsoft.com/Forums/en-US/76e86936-b7ee-4571-aa02-f45b80867515/which-edition-for-sql-server-2008-r2-with-sharepoint-2010-no-foundation?forum=sharepointadminprevious
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/2b04979f-c619-48a4-b0e4-3add00345fb0/sql-server-2008-edition-comparisons-in-terms-of-high-availability-rbs?forum=sqldisasterrecovery
Hi antrimcoaster,
Indeed, according to Microsoft's official documentation, all editions (Express, Standard, Enterprise...) of SQL Server 2008 and SQL Server 2008 R2 support RBS (Remote Blob Store). However, as your post notes, if you run RBS on a remote server, we recommend the Enterprise edition, especially if you want to deploy Remote BLOB Storage with SQL Server 2012 in a production environment. Enterprise edition is generally needed for mirroring, larger RAM/CPU limits, custom RBS providers, and multi-node clusters.
If you have already set up and configured RBS successfully with SQL Server 2012 Standard, and now want to upgrade to SQL Server 2012 Enterprise, I recommend you reconfigure your blob store; you can review the following similar article.
https://community.dynamics.com/ax/b/axdilip/archive/2012/10/22/configuring-and-implementing-sharepoint-2010-rbs-remote-blob-storage-with-sql-server-2012-part-1.aspx
In addition, for more detailed information regarding to the license issue, please call 1-800-426-9400, Monday through Friday, 6:00 A.M. to 6:00 P.M. (Pacific Time) to speak directly to a Microsoft licensing specialist. For international customers, please
use the Guide to Worldwide Microsoft Licensing Sites to find contact information in your locations.
Thanks,
Sofiya Li
If you have any feedback on our support, please click
here.
Sofiya Li
TechNet Community Support -
SQL Server 2012 availability groups
Hi, I'm new to SQL 2012 and availability groups. We have it set up for SharePoint 2013 for an intranet and all is well. I migrated some ASP.NET applications from SQL 2005 and created a new group for them. It all seemed fine and was synced up OK, but lately I'm having to manually resume the data movement via Management Studio. The databases seem to be in a paused state, although the database is up as far as the applications are concerned.
Should I have completed some kind of database upgrade prior to moving over to 2012? Is there anything I can do or check? Is there a way of automating the resume of data movement rather than doing it manually?
Thanks!
Suspending and resuming an AlwaysOn secondary database does not directly affect the availability of the primary database. However, suspending a secondary database can impact redundancy and failover capabilities for the primary database until the suspended secondary database is resumed. This is in contrast to database mirroring, where the mirroring state is suspended on both the mirror database and the principal database until mirroring is resumed. Suspending an AlwaysOn primary database suspends data movement on all the corresponding secondary databases, and redundancy and failover capabilities cease for that database until the primary database is resumed.
You can obviously automate this: all you need to do is check whether the status is suspended and then resume it with a T-SQL command. How about writing a job that checks this every 5 to 30 minutes, depending on your requirement, and executes the T-SQL command to fix it?
The command would be:
The Command would be:
ALTER DATABASE database_name SET HADR RESUME
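A minimal sketch of such a job step, assuming it runs on each replica and should only touch databases suspended locally:

```sql
-- Build and execute a RESUME statement for every AG database
-- that is currently suspended on this replica.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER DATABASE ' + QUOTENAME(DB_NAME(database_id))
             + N' SET HADR RESUME;'
FROM sys.dm_hadr_database_replica_states
WHERE is_local = 1
  AND is_suspended = 1;

EXEC sp_executesql @sql;
```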
Sarabpreet Singh Anand
SQL Server MVP Blog ,
Personal website
This posting is provided "AS IS" with no warranties, and confers no rights.
Please remember to click "Mark as Answer" and "Vote as Helpful" on posts that help you. This can be beneficial to other community members reading the thread. -
How can I make the JCo server implement "high availability" functionality? The SAP server which makes calls to the JCo server is HA-aware, so if there is a failover, the SAP server switches over to the other instance, but the JCo server keeps sending the message "Server unavailable". Is there a solution for this problem?
Thanks.
A single appliance does not necessarily mean a single point of failure; an appliance with hardware redundancy can handle failures and provide high availability, if configured well.
Does Symantec BrightMail Appliance provide such redundancy configuration?
You will have to ask their support or in a Symantec Forum.
Twitter!: Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied. -
Mac OS X Server High Availability
I'm getting two Xserves and a Vdeck for storage. The server will have the basic functions of file and network services. I would like to implement a high-availability solution; in the Microsoft world, this is called a failover cluster.
The files and settings will be on the storage. I need the second server to take over automatically if the first fails.
Can anyone help me?
There is an IP failover feature on Xserves. Check this article: http://docs.info.apple.com/article.html?path=ServerAdmin/10.5/en/c3fs29.html
It is for Leopard (10.5) Server, though I think not much has changed.
Kostas -
Upgrading from SQL Server 2012 Standard to SQL Server 2014 Standard Failover Cluster
Goal: To upgrade my default instance from SQL Server 2012 to SQL Server 2014 in a failover cluster.
Given:
1) Operating System Windows 2012 R2
2) Two virtual machines in a cluster with SQL Server as a guest cluster resource. The two VMs are called APPS08 and APPS09. They are our development environment, which is set up similarly to our production environment.
Problem: When running the SQL Server 2014 upgrade, I started on the VM that was not running the instance. I then moved on to upgrading the node that was running the instance. As soon as the installer attempted to fail over the running instance, an install error occurred saying that it could not fail over. After many install attempts I consistently received the error: "The SQL Server failover cluster instance name 'Dev01' already exists as a cluster resource." Opening Failover Cluster Manager, there is no record of a DEV01.
New strategy: Create a SQL cluster called DEV07. At the end of the install I get "Resource for instance 'MSSQLSERVER' should not exist." Neither I nor my Windows 2012 guy understands what resource the installer may be referring to.
We do not see anything out of the ordinary.
Any suggestions as to what resource may be seeing the default instance would be greatly appreciated.
Hi PSCSQLDBA,
As your description, you want to upgrade the default instance in SQL Server cluster.
>> 'SQL Server failover cluster instance name 'Dev01' already exists as cluster resource'
This error could occur when there is a previously used instance name which may not be removed completely.
To work around the issue, please use one of the ways below.
1. At a command prompt, type "cluster res". This command lists all the resources, including orphan resources. To delete an orphan resource, type "cluster res <resource name> /delete".
For more information about the process, please refer to the article:
http://gemanjyothisqlserver.blogspot.in/2012/12/sql-2008r22012-cluster-installation.html
2. Delete DNS entries, and force a replication of DNS to the other DNS servers.
For more information about the process, please refer to the article:
http://jon.netdork.net/2011/06/07/failed-cluster-install-and-name-already-exists/#solution
>> 'Resource for instance 'MSSQLSERVER' should not exist'
This error could occur when you already have MSSQLSERVER as a resource in the cluster, which may not be removed completely. To work around the issue, you could rebuild the SQL Server cluster node.
Regards,
Michelle Li -
How is the impact when a sql server cluster instance failover to a downlevel version node?
Hi,
I have a one-instance, two-node SQL 2012 cluster; node1's SQL 2012 version is 11.0.5522, node2's is 11.0.3000, and node1 is active.
When SQL Server fails over from node1 to node2, it succeeds. I want to know whether this down-level failover impacts the database (data or usage)?
Many thanks.
I would not call this a correct configuration, although you can do it. The downside: suppose node1, which is on the higher version, has a bug fixed by a patch; when you fail over to the lower version, that bug might resurface. I have never used nodes with different versions, nor do I recommend it.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Articles -
SQL Server Alwayson availability groups automation
Hi all,
I configured SQL Server AlwaysOn availability groups on Windows Server 2012 and it was successful.
I've installed and successfully configured our SQL Server 2012 AlwaysOn 2-node Windows cluster servers. I've got AlwaysOn working great, and our front-end servers for the intranet will be using SharePoint 2013. The glitch is that SharePoint 2013 is configured to add databases automatically to our SQL Server 2012 back end, but NOT to AlwaysOn. As we know, "we must manually find, select, back up and then add those new databases individually to get them into AlwaysOn."
But wait; that can be quite a task, constantly checking the SQL Server back-end servers to see what databases were created, then having to add them into AlwaysOn, 24/7!
I'm looking for an automated script or process that will check for new databases, back those new databases up in FULL mode (for being added to AlwaysOn, of course), then add those databases to AlwaysOn, all automatically.
Requirements:
Every newly created or added database should be fully backed up once to the shared location with an automated script.
Newly created databases should be added to the AlwaysOn group and to the available replica automatically with a T-SQL script.
Regards,
SQL LOVER.
Awaiting responses.
Kindly help with the below request:
Newly created databases should be added to the AlwaysOn group and to the available replica automatically with a T-SQL script, and restoration should be performed.
There is no out-of-the-box solution for this. You may want to create a SQL job (PowerShell) which detects newly created databases and performs the steps to add a new database to the AG:
1. take full backup.
2. Take log backup.
3. restore then on secondary.
4. Add database to AG
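The four steps above could be sketched in T-SQL as follows (AG name, database name, and share path are hypothetical):

```sql
-- On the primary replica:
BACKUP DATABASE [NewDb] TO DISK = N'\\share\NewDb.bak';             -- 1. full backup
BACKUP LOG      [NewDb] TO DISK = N'\\share\NewDb.trn';             -- 2. log backup
ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [NewDb];               -- 4. add to AG

-- On each secondary replica:
RESTORE DATABASE [NewDb] FROM DISK = N'\\share\NewDb.bak' WITH NORECOVERY;  -- 3. restore
RESTORE LOG      [NewDb] FROM DISK = N'\\share\NewDb.trn' WITH NORECOVERY;
ALTER DATABASE [NewDb] SET HADR AVAILABILITY GROUP = [MyAG];        -- join the AG
```

A detection job would wrap this, iterating over databases in sys.databases that are not yet in sys.availability_databases_cluster.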
Balmukund Lakhani
Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
This posting is provided "AS IS" with no warranties, and confers no rights.
My Blog |
Team Blog | @Twitter
| Facebook
Author: SQL Server 2012 AlwaysOn -
Paperback, Kindle -
Exchange server 2013 CAS server high availability
Hi
I have Exchange Server 2010 SP3 (2 MBX, 2 Hub/CAS) servers.
I am planning to migrate to Exchange Server 2013 (2 CAS servers and 2 MBX servers).
I don't want all traffic going through a single server, so I am keeping the roles separate.
In Exchange 2010 I achieved Hub/CAS high availability through NLB.
How do I achieve this in Exchange 2013?
Please share your suggestions, with documentation if possible...
Here ya go:
http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx
Load balancing
and
http://technet.microsoft.com/en-us/office/dn756394
Even though it says 2010, it applies to 2013 vendors as well.
Twitter!: Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied. -
Is there any MS Sql Server connector available for Flex Air.?
hi,
I want to connect my Flex AIR application with MS SQL Server directly, without any server-side scripting. I found a MySQL connector, which is available at http://code.google.com/p/assql/downloads/list. Is there a similar connector available? Please share it.
thanks
karthy
I have exactly the same question. So far the fas_mssql_connector.asp file is placed in the 'wwwroot' folder of the 'inetpub' folder of the IIS server. As hostname I use '[MyServer]', since it should run on the same machine, and the username and password are correct.
The fas_MSsql_Clean file is set up so that the name of the database is specified and the URL to the ASP file is set as: http://[MyServer]/fas_mssql_connector.asp
My SQL query looks like this:
private function getDbData():void
{
    mssqlQuery("Select * from Tomat", "getDataO3");
}
And the MXML document like this:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" minWidth="955" minHeight="600">
<mx:Script source="Actionscript/MSsql.as"/>
<mx:Script source="Actionscript/SqlQueries.as"/>
<mx:DataGrid x="148" y="95" id="dgData">
<mx:columns>
<mx:DataGridColumn headerText="Column 1" dataField="col1"/>
<mx:DataGridColumn headerText="Column 2" dataField="col2"/>
<mx:DataGridColumn headerText="Column 3" dataField="col3"/>
</mx:columns>
</mx:DataGrid>
<mx:Button x="148" y="264" label="Get data" id="btnGet" click="getDbData()"/>
</mx:Application>
However, the application stays busy for a long time and I do not receive a single piece of data. Any help with this?
Thanks in advance. -
Mac OS X Server, High Availability, Mac Mini
Hi All,
Currently running Mac OS X Server on a Mac mini, I'm looking for ways to make that Mac mini "server" a high-availability system for my office.
The first thing I will change is to install the operating system on an external FireWire two-disk RAID 1 drive: http://www.lacie.com/us/products/product.htm?pid=10967
Then I'm thinking about how to make sure the rest of the Mac mini keeps running all the time as well.
That's why I'm wondering if it's possible to double everything: two Minis with one RAID hard drive each.
Each complete set would be linked to create a fail-over system: if one set fails, the other one can take over with exactly the same data.
A bit like what has been done here:
http://homepage.mac.com/pauljlucas/personal/macmini/cluster.html but with OS X Server instead of Linux.
Hi lulu62-
I am working on a similar project with Minis, so I will be keeping an eye on your site.
In addition to Camelot's suggestions, do not forget a robust UPS with automatic voltage regulation, capable of supporting all of your gear for any expected downtime.
The Mini is excellent because you can power it for a long time on a healthy-sized UPS.
Luck-
-DP