List of OOTB Reports for SQL Server and Windows Server
Hi,
I am struggling to find a list of OOTB (out-of-the-box) reports from Operations Manager for SQL Server and Windows Server. Ideally I need the names of the reports and a list of the counters that they report on.
Can anybody give me an idea where I can find this?
Thanks,
Pete
Peter Carter http://sqlserverdownanddirty.blogspot.com/
Hi Pete,
I don't believe that there are any Windows or SQL Server specific reports out of the box from SCOM.
However, when you import the Windows (either Server or Client, I'm not sure which you are referring to) and/or the SQL Server management packs, the accompanying documentation lists the reports that will be imported along with them and thus made available.
Similar Messages
-
Cannot find a report for SQL Server monitoring
We installed a SCOM 2012 environment recently and imported the SQL Server Management packs successfully. SQL Servers are being monitored.
However, in the Reporting section, I cannot find a group for SQL Server reporting. All the others show up (Active Directory, Application Monitoring, Web Monitoring, etc.), just none for SQL Server.
Please help.
Hi,
I assume that you are missing some SQL Server management packs. Would you please re-download the management pack here and import it:
https://www.microsoft.com/en-hk/download/details.aspx?id=10631
After it imports successfully, you should see it under the Reporting workspace.
If you still cannot see it, please check the Operations Manager event logs for more information to help troubleshoot this issue.
Regards,
Yan Li
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] -
Report with SQL Server database
Hi,
I am trying to create a report for a SQL Server database.
The first time it was developed successfully, but when I tried to run it the next time, it threw this error:
REP-4100 : Failed to execute data source
JDBCPDS -62008 : SQL Server Error:
S1000 [Microsoft SQL Server Driver] Connection is busy with results for another hstmt
Now, today, when I tried to develop a new report, it shows two new errors:
1.
REP-4100 : Failed to execute data source
JDBCPDS -62008 : SQL Server Error:
S0002[Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name
2.
REP-4100 : Failed to execute data source
JDBCPDS -62008 : SQL Server Error:
08S01[Microsoft][ODBC SQL Server Driver]Communication link failure
Srini,
Yes, you can use Report Builder with SQL Server through the JDBC driver.
When you define the report data source, you just have to set the JDBC connection info.
Patrick. -
Increase Performance and ROI for SQL Server Environments
May 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low, often on the order of just 20%. This is usually the result of I/O bottlenecks: server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. The NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016, less than a year away. With support for Microsoft Windows Server 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
Test Configuration Components
Details
SQL Server 2014 servers
Fujitsu RX300
Server operating system
Microsoft Windows 2012 R2 Standard Edition
SQL Server database version
Microsoft SQL Server 2014 Enterprise Edition
Processors per server
Two 6-core Xeon E5-2630 processors at 2.30 GHz
Fibre channel network
8Gb FC with multipathing
Storage controller
AFF8080 EX
Data ONTAP version
Clustered Data ONTAP® 8.3.1
Drive number and type
48 SSD
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
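As a rough cross-check, the quoted multipliers are internally consistent. A minimal sketch (the 20 ms baseline is the latency threshold from the test description above; the server counts come from the test configuration):

```python
# Sanity-check of the quoted multipliers. The 20 ms figure is the point to
# which the legacy array's read latency was driven during baselining; the
# improvement factors are the ones reported in the text.

baseline_latency_ms = 20.0
latency_reduction = 20          # "overall 20x reduction in read latency"
iops_improvement = 4            # "greater than 4x improvement in IOPS"

flash_latency_ms = baseline_latency_ms / latency_reduction
print(f"Expected flash read latency: ~{flash_latency_ms:.0f} ms")  # ~1 ms

# Consolidation: if each server sustains 4x the IOPS, the original aggregate
# workload of 10 servers fits on 10 / 4 = 2.5 servers, so running it on 5
# leaves headroom, consistent with the result above.
servers_needed = 10 / iops_improvement
assert servers_needed <= 5
```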
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
Value
Analysis Results
ROI
65%
Net present value (NPV)
$950,000
Payback period
six months
Total cost reduction
More than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration
$40,000
Additional savings due to nondisruptive operations benefits (not included in ROI)
$90,000
Source: NetApp, 2015
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
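A back-of-the-envelope sketch of that licensing arithmetic, using the core counts from Table 1 and the 10-to-5 server consolidation. The per-core license price is a hypothetical placeholder, not a Microsoft or NetApp figure:

```python
# Back-of-the-envelope sketch of the licensing math described above.
# PRICE_PER_CORE is a placeholder assumption; plug in your own pricing.

CORES_PER_SERVER = 2 * 6          # two 6-core Xeon E5-2630s per server (Table 1)
PRICE_PER_CORE = 7000             # hypothetical per-core license cost (USD)

def licensing_cost(servers: int) -> int:
    """Total SQL Server core-licensing cost for a given server count."""
    return servers * CORES_PER_SERVER * PRICE_PER_CORE

before = licensing_cost(10)       # HDD-based configuration: 10 servers
after = licensing_cost(5)         # All Flash FAS configuration: 5 servers
savings = before - after

print(f"Licensing before: ${before:,}")
print(f"Licensing after:  ${after:,}")
print(f"Savings:          ${savings:,} ({savings / before:.0%})")
```

Swap in your negotiated per-core pricing to reproduce the 50% licensing reduction for your own environment.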
Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
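Those ratios translate directly into physical capacity. A minimal sketch, assuming an illustrative 500 GB database (the size is not from the test results):

```python
# What the measured compression ratios mean in physical space.
# The 500 GB logical database size is an illustrative assumption.

def physical_size(logical_gb: float, ratio: float) -> float:
    """Physical capacity consumed after inline compression at ratio:1."""
    return logical_gb / ratio

def space_savings(ratio: float) -> float:
    """Fraction of capacity saved, e.g. 1.5:1 -> about 33%."""
    return 1 - 1 / ratio

for ratio in (1.5, 1.8):          # test data set and production data set
    print(f"{ratio}:1 -> {physical_size(500, ratio):.0f} GB physical, "
          f"{space_savings(ratio):.0%} saved")
```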
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
Quick Links
Tech OnTap Community
Archive
PDF -
SAP Crystal Report using SQL Server Authentication and Windows Authentication
I'm a beginner with SAP Crystal Reports, version for Visual Studio 2010.
My ingredients are:
1. Windows 7 Ultimate Service Pack 1
2. SQL Server 2008 Standard Edition
3. Visual Studio 2010 Pro
4. SAP Crystal Reports, version for Visual Studio .NET
I created a report named customersByCity.rpt using OLE DB (ADO) -> Microsoft OLE DB Provider for SQL Server, supplying Server, User ID, Password, and Database. I assume this means my report uses SQL Server Authentication.
Then, my ASP.NET files as following
//ASP.NET
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="viewCustomersByCity.aspx.cs" Inherits="viewCustomersByCity" %>
<%@ Register Assembly="CrystalDecisions.Web, Version=13.0.2000.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"
Namespace="CrystalDecisions.Web" TagPrefix="CR" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div><asp:Label ID="lblMsg" runat="server" BackColor="Yellow" ForeColor="Black"></asp:Label>
<CR:CrystalReportViewer ID="CrystalReportViewer1" runat="server" AutoDataBind="true"></CR:CrystalReportViewer>
</div>
</form>
</body>
</html>
//code-behind
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Collections;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;
public partial class viewCustomersByCity : System.Web.UI.Page
{
    private const string PARAMETER_FIELD_NAME = "city";
    private ReportDocument customersByCityReport;

    private void ConfigureCrystalReports()
    {
        ConnectionInfo connectionInfo = new ConnectionInfo();
        connectionInfo.ServerName = @"WKM1925-PCWKM1925";
        connectionInfo.DatabaseName = "Northwind";
        connectionInfo.UserID = "sa";
        connectionInfo.Password = "sysadmin25";
        SetDBLogonForReport(connectionInfo);
    }

    private void SetDBLogonForReport(ConnectionInfo connectionInfo)
    {
        // Apply the same logon info to every table used by the report.
        TableLogOnInfos tableLogOnInfos = CrystalReportViewer1.LogOnInfo;
        foreach (TableLogOnInfo tableLogOnInfo in tableLogOnInfos)
        {
            tableLogOnInfo.ConnectionInfo = connectionInfo;
        }
    }

    private void SetCurrentValuesForParameterField(ReportDocument reportDocument, ArrayList arrayList)
    {
        // Build the discrete values and apply them to the "city" parameter.
        ParameterValues currentParameterValues = new ParameterValues();
        foreach (object submittedValue in arrayList)
        {
            ParameterDiscreteValue parameterDiscreteValue = new ParameterDiscreteValue();
            parameterDiscreteValue.Value = submittedValue.ToString();
            currentParameterValues.Add(parameterDiscreteValue);
        }
        ParameterFieldDefinitions parameterFieldDefinitions = reportDocument.DataDefinition.ParameterFields;
        ParameterFieldDefinition parameterFieldDefinition = parameterFieldDefinitions[PARAMETER_FIELD_NAME];
        parameterFieldDefinition.ApplyCurrentValues(currentParameterValues);
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        customersByCityReport = new ReportDocument();
        string reportPath = Server.MapPath("customersByCity.rpt");
        customersByCityReport.Load(reportPath);
        ConfigureCrystalReports();

        ArrayList arrayList = new ArrayList();
        arrayList.Add("paris");
        arrayList.Add("Madrid");
        arrayList.Add("Marseille");
        arrayList.Add("Buenos Aires");
        arrayList.Add("Sao Paulo");

        ParameterFields parameterFields = CrystalReportViewer1.ParameterFieldInfo;
        SetCurrentValuesForParameterField(customersByCityReport, arrayList);
        CrystalReportViewer1.ReportSource = customersByCityReport;
    }
}
1st scenario
At runtime, a dialog box keeps appearing, asking me to supply Server, User ID, Password, and Database. Once all the information is supplied, my report displays the data as expected.
2nd scenario
I change my report to use OLE DB (ADO) -> Microsoft OLE DB Provider for SQL Server with Integrated Security checked. I only choose Server and Database, so I assume I am using Windows Authentication.
At runtime, there is no dialog box as above. My report displays the data as expected. Really cool.
It looks like there is some problem when the report uses SQL Server Authentication, but when it uses Windows Authentication, it's fine.
I'm looking for comments. Please help me.
Hello,
MS SQL Server 2008 requires you to install the MS Client Tools for 2008.
Once they are installed, update all of your reports to use SQL Native Client 10 as the OLE DB driver.
Then try again; if it still fails, search this forum, which has lots of sample logon code.
Don -
Hi,
I have installed the x64 SQL Server 2008 R2 Express with default settings and ran MBSA 2.3 (using default settings too). It shows three SQL Server instances: MSSQL10_50.SQLEXPRESS, SQLEXPRESS, and SQLEXPRESS (32-bit). For the first, the authentication mode is Windows; for the other two, it is mixed. Here https://social.msdn.microsoft.com/Forums/sqlserver/en-US/03e470dc-874d-476d-849b-c805acf5b24d/sql-mbsa-question-on-folder-permission?forum=sqlsecurity a question about such multiple instances was asked, and the answer was that "MSSQL10.TEST_DB is the instance ID for the SQL Server Database Engine of the instance, TEST_DB", so in my case it seems that MSSQL10_50.SQLEXPRESS is the instance ID for the SQL Server Database Engine of the SQLEXPRESS instance.
I have two questions:
1) How can it be that a SQL Server Database Engine instance has a different authentication mode than the corresponding SQL Server instance?
2) Why is a 32-bit instance reported, although I installed only the 64-bit version?
Also, this https://social.technet.microsoft.com/Forums/security/en-US/6b12c019-eaf0-402c-ab40-51d31dce968f/mbsa-23-reporting-sql-32bt-instance-is-running-in-mixed-mode-when-it-is-set-to-integrated?forum=MBSA question seems to be related to this issue, but there is no answer :(.
Upd: Tried on a clean Windows 8 installation and on Windows 7, with the same result.
Because I DO NOT want the three people who will have access to the production SQL Server to also have access to the primary host, ProductionA. Since I have to allow them to RDC into the box to manage SQL Server, I figured, why not create a separate VM for each one of them that they can RDC into instead?
Does this make any sense?
Any tips are greatly appreciated. The main reason for doing this is that I need to isolate each of the three people who will be accessing the box, and at the same time keep them off of the primary ProductionA.
Thanks for your help.
M
Hello M,
Since you don't want the three of them to have access to production machine A, you can install the SQL Server client tools instead. By client I mean SQL Server Management Studio (SSMS) on their local desktops; then create a login for each of them in SQL Server. Open the port on which your SQL Server is running for the three machines so that they can connect. With SSMS installed on each machine, each of them can connect to SQL Server from their own machine.
I would also advise you to be cautious about granting the sysadmin privilege to all three of them; first note down what tasks they will perform, and then decide what rights to grant.
Your option will also work, but you would need to create three VMs for that, which is a more tedious task.
Hope this helps.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
Use rptproj SSRS projects for SQL Server 2008 R2 in VS 2010 (and/or, better, VS 2012)
In my company I use VS 2008 and SQL Server 2008 R2, and I have rptproj projects in VS 2008.
Each rptproj project has several .rdl files.
I would like to use VS 2010 or VS 2012 with SSRS and SQL Server 2008 R2.
SSDT was introduced with SQL Server 2012, and I suspect it is not possible to migrate rptproj projects from VS 2008 to VS 2010 / VS 2012.
Is there any alternative solution?
I have seen this, but I'm confused by it:
http://stackoverflow.com/questions/12503976/how-to-edit-ssrs-2008r2-reports-in-visual-studio-2012/16112721#16112721
Iko says:
You can now use Visual Studio 2010 to edit .rptproj report projects and .rdl reports.
You need VS10 SP1, then install the Data Tools for VS10, followed by the installation of SQL Server Express 2012 with Reporting Services and Data Tools.
Reference: http://stackoverflow.com/a/14599850/206730
But I'm confused about it.
www.kiquenet.com/profesional
Hi Kiquenet,
According to your description, you installed VS 2008 and SQL Server 2008 R2, and create Reporting Services projects. Now you want to use VS 2010 or VS 2012 to open and manage the reports.
SQL Server Data Tools - Business Intelligence for Visual Studio 2012 supports versions of SQL Server 2012 or lower. We can download Microsoft SQL Server Data Tools - Business Intelligence for Visual Studio 2012 directly from https://www.microsoft.com/en-us/download/details.aspx?id=36843, then select SQL Server Data Tools - Business Intelligence for Visual Studio 2012 and SQL Client Connectivity SDK as new shared features to install.
We can open the projects in both Visual Studio 2012 and Visual Studio 2010. For local mode only (that is, when not connected to SQL Server), we won't get the design-time experience for controls that are associated with the viewer in Visual Studio 2008, but the project will function correctly at runtime. If we add a feature that's specific to Visual Studio 2012, the report schema is upgraded automatically and you can no longer open the project in Visual Studio 2008.
If you have any more questions, please feel free to ask.
Thanks,
Wendy Fu
Wendy Fu
TechNet Community Support -
Reviewing Windows NT Rights and Privileges Granted for SQL Server Service Accounts
Hi Folks,
I am an experienced .NET apps developer who has been tasked with writing a bunch of technical controls for all the SQL Server instances on a domain.
So for the last month I have been diving in the deep end learning Powershell, dba and infrastructure tasks. This is still a work in progress, so be kind to me.. ;o)
So the task I am stuck on is described in the section on 'Reviewing Windows NT Rights and Privileges Granted for SQL Server Service Accounts' http://technet.microsoft.com/en-us/library/ms143504(v=sql.105).aspx
I have not been able to find cmdlets that give me this information. I have found some executables that come frustratingly close, like NTRights.exe. It lets me specify a computer name, which is great, but it only seems to let you grant or deny rights, not
list them!
Any help with this would be very much appreciated as I am firmly stuck. As per comments above also bear in mind that up until around 1.5 months ago I had never used powershell / knew very much at all about SQL server admin etc. Feeling much more comfortable
with them now, but much less so with Active Directory/ windows permission structures etc so please can I ask anyone kind enough to reply to try and keep the acronyms down as much as humanly possible.. ;o)
Cheers
Kieron

Hi Kieron,
Take a look at this module, it makes permissions much easier to work with than what's currently available:
https://gallery.technet.microsoft.com/scriptcenter/PowerShellAccessControl-d3be7b83
Don't retire TechNet! -
(Don't give up yet - 13,085+ strong and growing) -
Do we need to format data and log files with 64k cluster size for sql server 2012?
Do we need to format data and log files with 64k cluster size for sql server 2012?
Does this best practice still apply to SQL Server 2012 and 2014?

Yes. The extent size of SQL Server data files and the maximum log block size have not changed in the new versions, so the guidance remains the same.
Microsoft SQL Server Storage Engine PM -
Hello everyone.
I'm using SQL Server 2014, and writting on some C++ app to query and modify the database. I use the ODBC API.
I'm stuck on inserting an SQL_NUMERIC_STRUCT value into the database, if the corresponding database-column has a scale set.
For test-purposes: I have a Table named 'decTable' that has a column 'id' (integer) and a column 'dec' (decimal(5,3))
In the code I basically do:
1. Connect to the DB, get the handles, etc.
2. Use SQLBindParameter to bind a SQL_NUMERIC_STRUCT to a query with parameter markers. Note that I do include the information about precision and scale, something like: SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_NUMERIC, SQL_NUMERIC, 5, 3, &numStr,
sizeof(cbNum), &cbNum);
3. Prepare a Statement to insert values, something like: SQLPrepare(hstmt, L"INSERT INTO decTable (id, dec) values(?, ?)", SQL_NTS);
4. Set some valid data on the SQL_NUMERIC_STRUCT
5. Call SQLExecute to execute. But now I get the error:
SQLSTATE: 22003; nativeErr: 0 Msg: [Microsoft][ODBC Driver 11 for SQL Server]Numeric value out of range
I don't see what I am doing wrong. The same code works fine against IBM DB2 and MySQL. I also have no problem reading a SQL_NUMERIC_STRUCT using SQLBindCol(..) and the various SQLSetDescField calls to define the scale and precision.
Is there a problem in the ODBC Driver of the SQL Server 2014?
For completeness, here is a working c++ example:
// InsertNumTest.cpp
// Create the database objects using the following script
// (a table decTable with an id and a decimal(5,3) column):
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[decTable](
    [id] [int] NOT NULL,
    [dec] [decimal](5, 3) NULL,
    CONSTRAINT [PK_decTable] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
// Then create an odbc DSN entry that can be used:
#define DSN L"exOdbc_SqlServer_2014"
#define USER L"exodbc"
#define PASS L"testexodbc"
// system
#include <iostream>
#include <tchar.h>
#include <windows.h>
// odbc-things
#include <sql.h>
#include <sqlext.h>
#include <sqlucode.h>
void printErrors(SQLSMALLINT handleType, SQLHANDLE h)
{
    SQLSMALLINT recNr = 1;
    SQLRETURN ret = SQL_SUCCESS;
    SQLSMALLINT cb = 0;
    SQLWCHAR sqlState[5 + 1];
    SQLINTEGER nativeErr;
    SQLWCHAR msg[SQL_MAX_MESSAGE_LENGTH + 1];
    while (ret == SQL_SUCCESS || ret == SQL_SUCCESS_WITH_INFO)
    {
        msg[0] = 0;
        ret = SQLGetDiagRec(handleType, h, recNr, sqlState, &nativeErr, msg, SQL_MAX_MESSAGE_LENGTH + 1, &cb);
        if (ret == SQL_SUCCESS || ret == SQL_SUCCESS_WITH_INFO)
        {
            std::wcout << L"SQLSTATE: " << sqlState << L"; nativeErr: " << nativeErr << L" Msg: " << msg << std::endl;
        }
        ++recNr;
    }
}
void printErrorsAndAbort(SQLSMALLINT handleType, SQLHANDLE h)
{
    printErrors(handleType, h);
    getchar();
    abort();
}
int _tmain(int argc, _TCHAR* argv[])
{
    SQLHENV henv = SQL_NULL_HENV;
    SQLHDBC hdbc = SQL_NULL_HDBC;
    SQLHSTMT hstmt = SQL_NULL_HSTMT;
    SQLHDESC hdesc = SQL_NULL_HDESC;
    SQLRETURN ret = 0;

    // Connect to DB
    ret = SQLAllocHandle(SQL_HANDLE_ENV, NULL, &henv);
    ret = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, SQL_IS_INTEGER);
    ret = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
    ret = SQLConnect(hdbc, (SQLWCHAR*)DSN, SQL_NTS, USER, SQL_NTS, PASS, SQL_NTS);
    if (!SQL_SUCCEEDED(ret))
    {
        printErrors(SQL_HANDLE_DBC, hdbc);
        getchar();
        return -1;
    }
    ret = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    // Bind id as parameter
    SQLINTEGER id = 0;
    SQLINTEGER cbId = 0;
    ret = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER, 0, 0, &id, sizeof(id), &cbId);
    if (!SQL_SUCCEEDED(ret))
        printErrorsAndAbort(SQL_HANDLE_STMT, hstmt);

    // Bind numStr as Insert-parameter
    SQL_NUMERIC_STRUCT numStr;
    ZeroMemory(&numStr, sizeof(numStr));
    SQLINTEGER cbNum = 0;
    ret = SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_NUMERIC, SQL_NUMERIC, 5, 3, &numStr, sizeof(cbNum), &cbNum);
    if (!SQL_SUCCEEDED(ret))
        printErrorsAndAbort(SQL_HANDLE_STMT, hstmt);

    // Prepare statement
    ret = SQLPrepare(hstmt, L"INSERT INTO decTable (id, dec) values(?, ?)", SQL_NTS);
    if (!SQL_SUCCEEDED(ret))
        printErrorsAndAbort(SQL_HANDLE_STMT, hstmt);

    // Set some data and execute
    id = 1;
    SQLINTEGER iVal = 12345;
    memcpy(numStr.val, &iVal, sizeof(iVal));
    numStr.precision = 5;
    numStr.scale = 3;
    numStr.sign = 1;
    ret = SQLExecute(hstmt);
    if (!SQL_SUCCEEDED(ret))
        printErrorsAndAbort(SQL_HANDLE_STMT, hstmt);
    getchar();
    return 0;
}

This post might help:
http://msdn.developer-works.com/article/12639498/SQL_C_NUMERIC+data+incorrect+after+insert
If that doesn't solve it, try increasing the decimal precision on the SQL Server table; if you still have the same problem after that, change the column to an nvarchar and inspect the actual value that is sent through the ODBC connector for SQL Server.
What licenses and how many should be taken for SQL Server 2012 Standard Edition
Hi,
Could you help me please with my problem: what licenses (per core or Server + CAL) should I take for SQL Server 2012 Standard Edition for the following environment:
SQL Server will be installed on a VM with 1 core.
Physical processor is Intel Xeon CPU E7-4830, 2.13 GHz, 2131 Mhz.
A few people will have access to the VM, but SQL Server is used by a web application with about 200 users from Active Directory.
Great thanks in advance,
Lena

Hello,
Since the question is a license issue, you can call 1-800-426-9400, Monday through Friday, 6:00 A.M. to 6:00 P.M. (Pacific Time) to speak directly
to a Microsoft licensing specialist. You can also visit the following site for more information and support on licensing issues:
http://www.microsoft.com/licensing/mla/default.aspx
Hope this helps.
Regards,
Alberto Morillo
SQLCoffee.com -
How to Custom Report using sql server report builder for SCCM 2012 SP1
Hi ,
I am new to databases. If I want to create a manual report using SQL Server Report Builder for SCCM 2012 SP1, what steps should I take?
I want to create a report in which the computer name, total disk space, and physical disk serial number appear together. I already added the class (physical disk serial no.) in the hardware inventory classes; refer to the snapshot.

Hi,
Here is a guide on how to create custom reports in Configuration Manager 2012, it is a great place to start, change to the data you want to display instead.
http://sccmgeekdiary.wordpress.com/2012/10/29/sccm-2012-reporting-for-dummies-creating-your-own-ssrs-reports/
Regards,
Jörgen
-- My System Center blog ccmexec.com -- Twitter
@ccmexec -
Question : Service Accounts for SQL Server 2012
Hello,
I am planning to create AD accounts for SQL Server 2012 services that will be installed on Windows 2012 server.
I was reading the following
Configure Windows Service Accounts and Permissions
and
Windows Privileges and Rights
Is there a recommendation / document that lists the association of SQL Server services with Active Directory service accounts and the privileges required for installation and for starting the services?
Isn't it recommended to create a separate account for every service, and shouldn't these be domain rather than local accounts?
I hope to hear soon what industry standards are being followed for production systems.
Thank you very much in advance.
Regards
Nikunj

From MSDN:
Each service in SQL Server represents a process or a set of processes to manage authentication of SQL Server operations with Windows. Each service can be configured to use its own service account. This facility is exposed
at installation. SQL Server provides a special tool, SQL Server Configuration Manager, to manage the services configuration.
When choosing service accounts, consider the principle of least privilege. The service account should have exactly the privileges that it needs to do its job and no more privileges. You also need to consider account isolation; the service accounts should
not only be different from one another, they should not be used by any other service on the same server. Do not grant additional permissions to the SQL Server service account or the service groups.
From Glen Berry's Blog:
You should request that a dedicated domain user account be created for use by the SQL Server service. This should just be a regular, domain account with no special rights on the domain. You do not need or want this account to be a local admin on the machine
where SQL Server will be installed. The SQL Server setup program will grant the necessary rights on the machine to that account during installation.
You will also want a separate, dedicated domain user account for the SQL Server Agent service. If you are going to be installing and using other SQL Server related services such as SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS),
or SQL Server Analysis Services (SSAS), you will want dedicated domain accounts for each service. The reason you want separate accounts for each service is because they require different rights on the local machine, and having separate accounts is both more
secure and more resilient, since a problem with one account won’t affect all of the SQL Server Services.
Depending on your organization, getting these domain accounts created could take anywhere from minutes to weeks to complete, so make sure to allow time for this. For each one of these accounts, you will need their logon credentials for the SQL Server setup
program. You are going to want to make sure that the accounts don’t have a temporary password that must be changed during the next login. If they are set up that way, make sure to change them to use a strong password, and record this information in a secure
location.
Please Mark This As Answer if it solved your issue
Please Mark This As Helpful if it helps to solve your issue
Thanks,
Shashikant -
Sql server 2005 reports with sql server 2014
Can I upload SSRS 2005 reports as-is to SSRS 2014 and use them without doing anything extra or changing anything?
I can use SSRS 2005 reports as-is with SSRS 2012, but for SSRS 2014 I am not sure.
h2007

Yes, you can upgrade them; the RDL schema will be upgraded.
Report definition (.rdl) files are automatically upgraded in the following ways:
When you open a report in Report Designer in SQL Server Data Tools (SSDT), the report definition is upgraded to the currently supported RDL schema. When you specify a SQL Server 2008 or SQL Server 2008 R2 report server in the project properties, the report
definition is saved in a schema that is compatible with the target server.
When you upgrade a Reporting Services installation to a SQL Server 2014 Reporting Services (SSRS) installation, existing reports and snapshots that have been published to a report server are compiled and automatically upgraded to the new schema the first
time they are processed. If a report cannot be automatically upgraded, the report is processed using the backward-compatibility mode. The report definition remains in the original schema.
Reports are not upgraded when you upload a report definition file directly to the report server or SharePoint site. Upgrading a report definition in SQL Server Data Tools is the only way to upgrade the .rdl file.
After a report is upgraded locally or on the report server, you might notice additional errors, warnings, and messages. This is the result of changes to the internal report object model and processing components, which cause messages to appear when underlying
problems in the report are detected. For more information, see Reporting Services Backward Compatibility.
For more information about new features for SQL Server 2014 Reporting Services (SSRS), see
What's New (Reporting Services).
Check the below Microsoft link for more information
http://msdn.microsoft.com/en-us/library/ms143674.aspx
Mudassar -
Integrating Reports from SQL Server Reporting Services
Dear Experts,
I am given the URL to a report from SQL Server Reporting Services. The URL looks like
[http://<report_server_host>/ReportServer/Pages/ReportViewer.aspx?/<category>/<report>&<list_of_parameter_value_pairs>]
When this URL is executed, the user will be prompted for user name and password.
And I am given the following requirements:
- Integrate the report into SAP NetWeaver Portal
- The report has to be passed parameters with values generated at runtime and which must be hidden (because it is sensitive data)
Initially, I have been working on two options:
1. using Application Integrator (portal application: com.sap.portal.appintegrator.sap, portal component: Generic)
2. using URL iView
With the Application Integrator, however, I got the following error:
The operation you are attempting on item "/" is not allowed for this item type. (rsWrongItemType)
I cannot find a solution to this. It looks like a URL which contains "/" after "?" is not appropriate for the Application Integrator.
So I turn my focus away from the Application Integrator, but to using the URL iView instead. And in order to hide the parameters passed, I make the following settings in the URL iView:
Request Method = POST
Fetch Mode = Server-Side
But then I got the following error:
Failed to fetch the content. The response status was 401.
Message:
(I have defined an HTTP system and used it in the URL iView.)
Does anyone know how to fix the errors (either with the Application Integrator or the URL iView)?
Your answers will be much appreciated!
Regards
Sakprasat Sum

Hi
Was a solution for this found?
I am trying to achieve the same thing, create an iView that links to an existing SQL Reporting Services report.
Cheers,
Andrew