MS SQL Server SelectMethod Performance
I was having problems connecting to a MS SQL Server 2000 database using data sources.
With transactional operations, JDBC throws me the following exception:
java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for JDBC]Can't start manual transaction mode because there are cloned connections.
I found on the Internet that this issue would be solved by setting the DataSource property selectMethod=cursor, but many people say this configuration has performance implications.
Can anyone tell me more about this issue?
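For reference, a minimal sketch of where the property goes. The URL form below follows the documented `jdbc:microsoft:sqlserver` style of the Microsoft SQL Server 2000 JDBC driver; the host, port, and database names are placeholders:

```java
// Sketch: building a connection URL with SelectMethod=cursor for the
// Microsoft SQL Server 2000 JDBC driver. With the default
// SelectMethod=direct, the driver clones a connection for each concurrent
// statement, which is what triggers the "cloned connections" error in
// manual transaction mode.
public class SelectMethodUrl {
    static String buildUrl(String host, int port, String database) {
        return "jdbc:microsoft:sqlserver://" + host + ":" + port
                + ";DatabaseName=" + database
                + ";SelectMethod=cursor"; // server-side cursors instead of cloned connections
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("dbhost", 1433, "mydb"));
    }
}
```

The performance concern people raise is real: with `cursor`, each fetch is a server round trip, so large result sets read noticeably slower than in the default `direct` mode. It is a trade-off, not a free fix.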
Can you pinpoint a particular TopLink query tied to the "FETCH_APICURSOR*" call in the app and post how it is being created?
My guess is that the application is specifying the TopLink query object to return a cursor or stream and not closing it in all cases, or keeping them open for a long period. Did you say they were leaking, or is it just that a large number are open at a time, leading to performance problems?
Streams and cursors are described in the TopLink docs here:
http://docs.oracle.com/cd/E21764_01/web.1111/b32441/qryadv.htm#CJGJBHGJ
or the 10g docs here:
http://sqltech.cl/doc/oas10gR3/web.1013/b13593/qryadv010.htm
If this is the case, you might want to use a different strategy such as pagination instead of cursors, described here:
http://docs.oracle.com/cd/E17904_01/web.1111/b32441/optimiz.htm#CHDIBGFE
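As a sketch of the pagination alternative: instead of holding a cursor open, fetch fixed-size windows of rows. In TopLink you would hand the window to `ReadAllQuery.setFirstResult()` and `setMaxRows()` (check your version's docs for whether maxRows is a page size or an absolute row cutoff); the window arithmetic itself is shown runnable below, with the TopLink calls only named in comments:

```java
// Sketch: computing the (firstResult, maxRows) window for page-at-a-time
// reads. In TopLink these values would go to ReadAllQuery.setFirstResult()
// and ReadAllQuery.setMaxRows(); no cursor stays open between pages.
public class PageWindow {
    final int firstResult; // zero-based index of the first row to return
    final int maxRows;     // number of rows in the page

    PageWindow(int page, int pageSize) {
        this.firstResult = page * pageSize;
        this.maxRows = pageSize;
    }

    public static void main(String[] args) {
        PageWindow w = new PageWindow(3, 50); // fourth page, 50 rows per page
        System.out.println(w.firstResult + "," + w.maxRows);
    }
}
```

If you do keep cursored streams, the other fix is simply to close them in a finally block in every code path.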
Best Regards,
Chris
Similar Messages
-
Microsoft SQL Server 2012 Performance Dashboard compatible with windows server 2012?
Is Microsoft SQL Server 2012 Performance Dashboard compatible with Windows Server 2012? I only see Windows Server 2008r2 as the most recent version in the System Requirements list. Thanks!
As per the download documentation, the supported Windows versions are
Windows 7, Windows Server 2008 R2, Windows Server 2008 Service Pack 2, and Windows Vista Service Pack 2.
I cannot see any mention of Windows Server 2012 in the system requirements.
See System Requirements
-
SQL Server 2005 performance decreases with DB size while SQL Server 2012 is fine
Hi,
We have a C# Windows service running that polls some files and inserts/updates some fields in the database.
The service was tested on a local dev machine with SQL Server 2012 running, and performance was quite decent with any number of records. Later the service was moved to a test stage environment where SQL Server 2005 is installed. At that point the database
was still empty and the service ran just fine, but after some 500k records were written, performance problems came to light. After some more tests we found that, basically, database operation performance in SQL Server 2005 decreases in
direct correlation with the database size. Here are some testing results:
Run#                     1        2        3        4        5
DB size (records)     520k     620k     720k     820k     920k

SQL Server 2005
TotalRunTime       25:25.1  32:25.4  38:27.3  42:50.5  43:51.8
Get1               00:18.3  00:18.9  00:20.1  00:20.1  00:19.3
Get2               01:13.4  01:17.9  01:21.0  01:21.2  01:17.5
Get3               01:19.5  01:24.6  01:28.4  01:29.3  01:24.8
Count1             00:19.9  00:18.7  00:17.9  00:18.7  00:19.1
Count2             00:44.5  00:45.7  00:45.9  00:47.0  00:46.0
Count3             00:21.7  00:21.7  00:21.7  00:22.3  00:22.3
Count4             00:23.6  00:23.9  00:23.9  00:24.9  00:24.5
Process1           03:10.6  03:15.4  03:14.7  03:21.5  03:19.6
Process2           17:08.7  23:35.7  28:53.8  32:58.3  34:46.9
Count5             00:02.3  00:02.3  00:02.3  00:02.3  00:02.1
Count6             00:01.6  00:01.6  00:01.6  00:01.7  00:01.7
Count7             00:01.9  00:01.9  00:01.7  00:02.0  00:02.0
Process3           00:02.0  00:01.8  00:01.8  00:01.8  00:01.8

SQL Server 2012
TotalRunTime       12:51.6  13:38.7  13:20.4  13:38.0  12:38.8
Get1               00:21.6  00:21.7  00:20.7  00:22.7  00:21.4
Get2               01:38.3  01:37.2  01:31.6  01:39.2  01:37.3
Get3               01:41.7  01:42.1  01:35.9  01:44.5  01:41.7
Count1             00:20.3  00:19.9  00:19.9  00:21.5  00:17.3
Count2             01:04.5  01:04.8  01:05.3  01:10.0  01:01.0
Count3             00:24.5  00:24.1  00:23.7  00:26.0  00:21.7
Count4             00:26.3  00:24.6  00:25.1  00:27.5  00:23.7
Process1           03:52.3  03:57.7  03:59.4  04:21.2  03:41.4
Process2           03:05.4  03:06.2  02:53.2  03:10.3  03:06.5
Count5             00:02.8  00:02.7  00:02.6  00:02.8  00:02.7
Count6             00:02.3  00:03.0  00:02.8  00:03.4  00:02.4
Count7             00:02.5  00:02.9  00:02.8  00:03.4  00:02.5
Process3           00:21.7  00:21.0  00:20.4  00:22.8  00:21.5
One more thing: it's not the Process2 table that constantly grows in size but the Process1 table, which gains almost 100k records each run.
After that, SQL Server 2005 was also installed on a dev machine just to test things, and we got exactly the same results. Both the SQL Server 2005 and 2012 instances are installed using default settings with no changes at all. The same goes for the databases created for the service.
So the question is: why are there such huge differences between the performance of SQL Server 2005 and 2012? Maybe there are some settings that are set by default in a SQL Server 2012 database that need to be set manually in 2005?
What else can I try to test? The main problem is that the production SQL Server will be updated god-knows-when, and we can't just wait for that.
Any suggestions/advice are more than welcome.
Hi,
It is not clear to me exactly what you are doing, but now we have a better understanding of ONE of your tables, and it is obvious that you will get worse results as the data grows. Your table looks like one built automatically by an ORM such as Entity Framework, and its DDL probably does not match your needs. For example, if your select query filters on a column other than [setID], then you have no index for it, and the server probably has to scan the entire table to find the records you need.
A forum is a suitable place for general questions, but less so for advice about a specific system (as I mentioned before, we are not familiar with your system). For example, the fact that you have no index except the one on [setID] can indicate a problem. Ultimately, optimizing the system will require investigating it more thoroughly (at which point a forum is no longer the appropriate place, but we're not there yet). Another point: we can now see that you are using a [timestamp] column, which implies you filter on it when selecting data. If so, a better DDL might be a clustered index on that column, plus a nonclustered index on [setID] if it is needed at all.
What is obvious is that the next step is to check whether this DDL fits your specific needs (as I mentioned before).
The step after that is to understand what you do with this table: (1) which query becomes slow on a bigger data set, and (2) whether you are using an ORM (object-relational mapping, such as Entity Framework code-first), and if so, which one.
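To make the suggestion concrete, a hedged sketch of the DDL being described. The table and column names ([Process1], [timestamp], [setID]) are taken from the thread and may not match the real schema; review before running:

```java
// Sketch: the index DDL suggested above, held as strings so the statements
// can be reviewed before executing them (e.g. via JDBC Statement.execute()).
// Table and column names are assumptions from the thread, not a real schema.
public class SuggestedIndexes {
    static String clusteredOnTimestamp() {
        return "CREATE CLUSTERED INDEX IX_Process1_ts ON dbo.Process1 ([timestamp])";
    }

    static String nonclusteredOnSetId() {
        return "CREATE NONCLUSTERED INDEX IX_Process1_setID ON dbo.Process1 ([setID])";
    }

    public static void main(String[] args) {
        System.out.println(clusteredOnTimestamp());
        System.out.println(nonclusteredOnSetId());
    }
}
```

Whether a clustered index on [timestamp] is right depends entirely on the actual filter columns, which is exactly the investigation the reply recommends.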
-
SQL Server Express Performance Limitations With OGC Methods on Geometry Instances
I will front-load my question: I am wondering if any of the feature restrictions in SQL Server Express cause performance limitations/reductions with OGC methods on geometry instances, e.g., STIntersects? I have spent time reading
various documents about the different editions of SQL Server, including Features Supported by the Editions of SQL Server 2014, but nothing is jumping out at me. The
limited information on spatial features in that document implies spatial is the same across all editions. I am hoping this is wrong.
The situation.... I have roughly 200,000 tax parcels within 175 taxing districts. As part of a consistency check between what is stored in tax records for the taxing district and what is identified spatially, I set up a basic point-in-polygon query
to identify the taxing district spatially and then count the number of parcels within each taxing district. Surprisingly, the query took 66 minutes to run. As I pointed out, this is being run on a test machine with SQL Server Express.
Some specifics.... I wrote the query a few different ways and compared the execution plans, and the optimizer always chose the same plan, which is good I guess since it means it is doing its job. The execution plans show a 'Clustered Index Seek
(Spatial)' being used, costing only 1%. Coming in at 75% cost is a Filter, which appears to be connected to the STIntersects predicate. I brute-forced alternate execution plans using hints, but they only turned out worse, which I guess is also
good since it means the optimizer did choose a good plan. I experimented with changing the spatial index parameters, but the impact of the options I tried was never that much. I ended up going with "Geometry Auto Grid" with 16 cells
per object.
So, why do I think 66 minutes is excessive? Because I loaded the same data sets into PostgreSQL/PostGIS, used a default spatial index, and the same query ran in 5 minutes. Same machine, same data: SQL Server Express is 13x slower than
PostgreSQL. That is why I think 66 minutes is excessive.
Our organization is mostly an Oracle and SQL Server shop. Since more of my background and experience is with MS databases, I prefer to work with SQL Server. I really do want to understand what is happening here. Is there something I can
do differently to get more performance out of SQL Server? Does spatial run slower on Express versus Standard or Enterprise? Given I did so little tuning in PostgreSQL, I still can't understand the results I am seeing.
I may or may not be able to strip the data down enough to be able to send it to someone.
Tessellating the polygons (tax districts) is the answer!
Since my use of SQL Server Express was brought up as possibly contributing to the slow runtime, the first thing I did was download an evaluation version of Enterprise Edition. The runtime on Enterprise Edition dropped from 66 minutes to 57.5 minutes.
A reduction of 13% isn't anything to scoff at, but the total runtime was still 11x longer than in PostgreSQL. Although Enterprise Edition had 4 cores available to it, it never really spun up more than one when executing the query, so it doesn't seem
to have parallelized the query much, if at all.
You asked about polygon complexity. Overall, a majority are fairly simple but there are some complex ones with one really complex polygon. Using the complexity index discussed in the reference thread, the tax districts had an average complexity
of 4.6 and a median of 2.7. One polygon had a complexity index of 120, which was skewing the average, as well as increasing the runtime I suspect. Below is a complexity index breakdown:
Index    NUM_TAX_DIST
1                   6
<2                 49
<3                 44
<4                 23
<5                 11
<6                  9
<7                  9
<8                  4
<9                  1
<10                 4
>=10               14
Before trying tessellation, I tweaked the spatial indexes in several different ways, but the runtimes never changed by more than a minute or two. I reset the spatial indexes to "geometry auto grid @ 32" and tried out your tessellation functions
using the default of 5,000 vertices. Total runtime: 2.3 minutes, a 96% reduction and twice as fast as PostgreSQL! Now that is more like what I was expecting before I started.
I tried different thresholds, 3,000 and 10,000 vertices, but the runtimes were slightly slower, 3.5 and 3.3 minutes respectively. A threshold of 5,000 definitely seems to be a sweet spot for the dataset I am using. As the thread you referenced
discussed, SQL Server spatial functions like STIntersects appear to be sensitive to the number of vertices of polygons.
After reading your comment, it reminded me of some discussions with Esri staff about ArcGIS doing the same thing in certain circumstances, but I didn't go as far as thinking to apply it here. So, thanks for the suggestion and code from another post.
Once I realized the SRID was hard coded to 0 in tvf_QuarterPolygon, I was able to update the code to set it to the same as the input shape, and then everything came together nicely. -
Effect of BitLocker on SQL server database performance?
We have a corporate requirement to ensure data is encrypted at rest. We are running SQL Server 2014 with a 1.5TB database, mostly read activity.
I am considering using BitLocker to meet the requirement. I have searched the forums and can't seem to find anyone who has had direct experience with using BitLocker to protect data drives running SQL. Does anyone have any real world experience
with this, particularly experience with the effect on the performance of SQL?
BTW - our preference is to use BitLocker and not TDE, as TDE seems much more complex in its administration which introduces some risk for our team of IT generalists.
TIA.
Hi BArmand,
The performance cost depends on how strong we want the BitLocker protection to be.
BitLocker supports two levels of cipher strength: 128-bit and 256-bit. Longer encryption keys provide an enhanced level of security. However, longer keys can cause slower encryption
and decryption of data. On some computers, using longer keys might result in noticeable performance degradation.
Generally it imposes a single-digit percentage performance overhead.
If the hardware configuration of our server is up to date, there won't be a noticeable effect on performance.
Here is some detailed information on BitLocker:
How Strong Do You Want the BitLocker Protection? :
https://technet.microsoft.com/en-us/library/ee706531(WS.10).aspx
BitLocker Drive Encryption :
https://technet.microsoft.com/en-us/library/cc731549%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
Best Regards,
Leo
-
MS SQL Server 2008 performance problem
We use TopLink 10.1.3.5 to connect to MS SQL Server 2008.
What we are seeing is that when a query is being run by TopLink a lot of cursors open up and remain open. Our database CPU usage goes up and it affects the whole application.
Our DBA took a look at it and said the database shows FETCH_APICURSOR* being used for select statements.
Is there a way to tell TopLink not to use cursors for queries?
Thanks.
Can you pinpoint a particular TopLink query tied to the "FETCH_APICURSOR*" call in the app and post how it is being created?
My guess is that the application is specifying the TopLink query object to return a cursor or stream and not closing it in all cases, or keeping them open for a long period. Did you say they were leaking, or is it just that a large number are open at a time, leading to performance problems?
Streams and cursors are described in the TopLink docs here:
http://docs.oracle.com/cd/E21764_01/web.1111/b32441/qryadv.htm#CJGJBHGJ
or the 10g docs here:
http://sqltech.cl/doc/oas10gR3/web.1013/b13593/qryadv010.htm
If this is the case, you might want to use a different strategy such as pagination instead of cursors, described here:
http://docs.oracle.com/cd/E17904_01/web.1111/b32441/optimiz.htm#CHDIBGFE
Best Regards,
Chris -
B1 and SQL Server 2005 performance with 3,000,000 invoice-lines per year
I would like to know if SAP Business One with SQL Server 2005 could work with the following volumes:
- 40,000 Business Partners
- 40,000 Invoices per month
- Approx. 3,000,000 invoice-lines per year
Of course it will be necessary to change some forms in B1.
What do you think?
Do you know any B1 customer working with that amount of data?
> Hi,
>
> I think a good SQL2005 tuning (done by a good DBA)
> will improve performance. Number of records like that
> shouldn't hurt that kind of DB engine...
Hi,
I'm sure that MSSQL 2005 can handle the amount of records and transactions in question. Even MSSQL 2000 can do it. However, any DB engine can be brought to its knees by the combination of a 2-tier application architecture and badly designed queries. B1 is a case in point. I wouldn't go into such a project without decent preliminary load testing and an explicit commitment of support from the SAP B1 dev team.
I have heard of implementation projects where B1 simply couldn't handle the amount of data. I've also participated in some presales cases for B1 where we decided not to take a project because we saw that B1 couldn't handle the amount of data (while the other features of B1 would have been more than enough for the customer). The one you're currently looking at seems like one of those.
Henry -
MS SQL Server 7 - Performance of Prepared Statements and Stored Procedures
Hello All,
Our team is currently tuning an application running on WL 5.1 SP 10 with a MS
SQL Server 7 DB that it accesses via the WebLogic jConnect drivers. The application
uses Prepared Statements for all types of database operations (selects, updates,
inserts, etc.) and we have noticed that a great deal of the DB host's resources
are consumed by the parsing of these statements. Our thought was to convert many
of these Prepared Statements to Stored Procedures with the idea that the parsing
overhead would be eliminated. In spite of all this, I have read that because
of the way that the jConnect drivers are implemented for MS SQL Server, Prepared
Statements are actually SLOWER than straight SQL because of the way that parameter
values are converted. Does this also apply to Stored Procedures? If anyone
can give me an answer, it would be greatly appreciated.
Thanks in advance!
Joseph Weinstein <[email protected]> wrote:
>
>
Matt wrote:
Hello All,
Our team is currently tuning an application running on WL 5.1 SP 10 with a MS
SQL Server 7 DB that it accesses via the WebLogic jConnect drivers. The application
uses Prepared Statements for all types of database operations (selects, updates,
inserts, etc.) and we have noticed that a great deal of the DB host's resources
are consumed by the parsing of these statements. Our thought was to convert many
of these Prepared Statements to Stored Procedures with the idea that the parsing
overhead would be eliminated. In spite of all this, I have read that because
of the way that the jConnect drivers are implemented for MS SQL Server, Prepared
Statements are actually SLOWER than straight SQL because of the way that parameter
values are converted. Does this also apply to Stored Procedures? If anyone
can give me an answer, it would be greatly appreciated.
Thanks in advance!
Hi. Stored procedures may help, but you can also try MS's new free type-4 driver,
which does use DBMS optimizations to make PreparedStatements run faster.
Joe
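For illustration, a sketch of the JDBC call-escape syntax involved in moving a PreparedStatement to a stored procedure call. The procedure name below is hypothetical; only the `{call ...}` escape form is standard JDBC:

```java
// Sketch: building the standard JDBC {call ...} escape string for a stored
// procedure with N parameters. The result would be passed to
// Connection.prepareCall(), and parameters bound exactly as with a
// PreparedStatement (setString, setInt, etc.).
public class CallSyntax {
    static String call(String procedure, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procedure).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    public static void main(String[] args) {
        // Hypothetical procedure with two parameters.
        System.out.println(call("update_order_status", 2));
    }
}
```

Whether the stored procedure actually avoids the per-statement parsing overhead depends on the driver, which is exactly why the type-4 driver suggestion above is worth testing side by side.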
Thanks Joe! I also wanted to know if setting the statement cache (assuming that
this feature is available in WL 5.1 SP 10) will give a boost for both Prepared Statements
and stored procs called via Callable Statements. Pretty much all of the Prepared
Statements that we are replacing are executed from entity bean transactions.
Thanks again -
Increase Performance and ROI for SQL Server Environments
May 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
Test Configuration Components    Details
SQL Server 2014 servers          Fujitsu RX300
Server operating system          Microsoft Windows 2012 R2 Standard Edition
SQL Server database version      Microsoft SQL Server 2014 Enterprise Edition
Processors per server            2 x 6-core Xeon E5-2630 at 2.30 GHz
Fibre channel network            8Gb FC with multipathing
Storage controller               AFF8080 EX
Data ONTAP version               Clustered Data ONTAP 8.3.1
Drive number and type            48 SSD
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
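The licensing arithmetic behind that 50% claim is simple to sketch. The per-core price below is a made-up placeholder purely for illustration (the article quotes no prices); only the server and core counts come from the text:

```java
// Sketch: per-core licensing cost before and after halving the server count.
// The per-core price is a hypothetical placeholder, not a Microsoft quote.
public class LicenseMath {
    static long licenseCost(int servers, int coresPerServer, long pricePerCore) {
        return (long) servers * coresPerServer * pricePerCore;
    }

    public static void main(String[] args) {
        long perCore = 7_000L;                       // hypothetical per-core price
        long before = licenseCost(10, 12, perCore);  // 10 servers x (2 x 6-core CPUs)
        long after  = licenseCost(5, 12, perCore);   // same workload on 5 servers
        System.out.println(before + " -> " + after); // cost halves with the server count
    }
}
```

Whatever the actual per-core price, cost scales linearly with licensed cores, so halving the servers halves the licensing bill.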
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
Value                                          Analysis Results
ROI                                            65%
Net present value (NPV)                        $950,000
Payback period                                 Six months
Total cost reduction                           More than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration    $40,000
Additional savings due to nondisruptive
  operations benefits (not included in ROI)    $90,000
Source: NetApp, 2015
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
Quick Links
Tech OnTap Community
Archive
PDFMay 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
SQL Server 2014 servers: Fujitsu RX300
Server operating system: Microsoft Windows 2012 R2 Standard Edition
SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
Processors per server: 2 six-core Intel Xeon E5-2630 at 2.30 GHz
Fibre Channel network: 8Gb FC with multipathing
Storage controller: AFF8080 EX
Data ONTAP version: Clustered Data ONTAP® 8.3.1
Drive number and type: 48 SSDs
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through Fibre Channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
ROI: 65%
Net present value (NPV): $950,000
Payback period: six months
Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration: $40,000
Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
Source: NetApp, 2015
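The largest single line item behind these numbers is the halving of per-core licenses when the server count drops from 10 to five. A minimal back-of-envelope sketch of that licensing arithmetic, using a hypothetical per-core price (actual SQL Server 2014 Enterprise pricing depends on your agreement with Microsoft), might look like:

```sql
-- Hypothetical licensing arithmetic only; not the full Realize ROI model.
DECLARE @servers_before int   = 10,
        @servers_after  int   = 5,
        @cores_per_srv  int   = 12,    -- 2 sockets x 6 cores (Table 1)
        @price_per_core money = 7000;  -- hypothetical per-core price

SELECT (@servers_before - @servers_after) * @cores_per_srv
           AS cores_no_longer_licensed,
       (@servers_before - @servers_after) * @cores_per_srv * @price_per_core
           AS licensing_savings;
```

The full ROI model also folds in storage costs, power, space, and administration, so the real figure differs from this sketch.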
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
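A quick sketch of what those ratios mean in raw capacity, using a hypothetical logical database size (the size below is illustrative, not from the test):

```sql
-- Space implied by the measured inline compression ratios.
DECLARE @logical_gb decimal(10,1) = 1000.0;  -- hypothetical database size

SELECT @logical_gb / 1.5 AS physical_gb_at_1_5_to_1,  -- test data set
       @logical_gb / 1.8 AS physical_gb_at_1_8_to_1;  -- production data set
```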
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
SQL Server 2000 std Report Performance Issue
Dear All,
I have a VB-based desktop application with a back-end MS SQL Server 2000 database on an IBM x5650 server machine with Intel Xeon 2.7 GHz (24 CPUs) and 24 GB RAM.
There are two things I need help with:
Recently we upgraded SQL Server from 2000 Personal Edition to 2000 Standard Edition. A problem appeared with one of the reports in the application. The report previously took almost 30 minutes on SQL 2000 Personal Edition, but after the upgrade to Standard Edition we cannot view the report in under 3 hours, and sometimes it doesn't appear even after several hours.
Secondly, for a quick test I installed Personal Edition on a simple PC rather than a server machine; its specs are a Core i5 and 4 GB of RAM. The same report is generated in only 15 minutes from the application with this desktop machine as the DB server.
Please help me out. I have gone through all the SQL Server and system performance logs of my server machine; everything is normal, but the report is taking too long, and I can only generate that report quickly on Personal Edition.
Is the difference due to the faster Core i5 processor in the desktop machine, or is there some other issue behind this?
Your prompt response is highly appreciated.
Regards,
Rashid Ali

Hello,
SQL Server 2000 has been out of support since 2013. Please upgrade to SQL Server 2012 to get better performance and support.
Thanks for your understanding and support.
Regards,
Fanny Liu
Fanny Liu
TechNet Community Support -
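As an aside on the thread above: when the same report runs 12x faster on a much weaker machine, a common first step is to compare execution statistics between the two servers and refresh statistics and indexes on the slow one. A hedged sketch, valid on SQL Server 2000, with hypothetical table names:

```sql
-- Compare I/O and CPU cost of the report's main query on both machines.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- (run the report's main query here and compare logical reads / CPU time)

-- Refresh optimizer statistics, which can be stale after a migration:
EXEC sp_updatestats;

-- Check fragmentation and rebuild indexes on the report's base tables:
DBCC SHOWCONTIG ('ReportBaseTable');  -- hypothetical table name
DBCC DBREINDEX ('ReportBaseTable');
```

This is a diagnostic sketch, not a guaranteed fix; the edition change itself should not cause a 6x slowdown.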
Required info on SQL Server Performance Issue Analysis and Troubleshoot way
Dear All,
I am going to prepare simple documentation of steps for SQL Server performance issue analysis and troubleshooting methods. I am struggling to write this documentation since we have different checklists (network latency, disk latency, memory/processor pressure, SQL query tuning, etc.) to validate once an application performance issue is reported by a customer. So I am looking for expert documents or links.
Your input will help for document preparation in better way.
Thanks in advance.

Hi,
Recommendations and Guidelines on configuring disk partitions for SQL Server
http://support.microsoft.com/kb/2023571
Disk and File Layout for SQL Server
https://blogs.technet.com/b/dataplatforminsider/archive/2012/12/19/disk-and-file-layout-for-sql-server.aspx
Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure
http://www.packtpub.com/article/sql-server-2012-implementing-physical-database-strusture
Database Mirroring Best Practices and Performance Considerations
http://technet.microsoft.com/en-us/library/cc917681.aspx
Hope the information helps.
Tracy Cai
TechNet Community Support -
Server performance slow using sql server 2005
Dear Team,
Kindly assist me in solving a performance issue.
The server is very slow and transactions are taking more time.
SAP version details:
- SAP ECC 6.0 SR3
- DB: SQL Server 2005
Performance is very slow; it is taking a long time to execute transactions.
Appreciate for quick response.
Thanks & Regards
Kumarvyas

Dear Team,
In T- code: DB13
Space overview
Found an error: "ERROR CONDITION EXISTS - CHECK FILES TAB".
In the Files tab, under Data files:
Exception in red for <SID>DATA1, <SID>DATA2, <SID>DATA3
Freepct: 0, 0, 0
(Free pct: free percentage of space in a SQL Server file)
How do I extend the data files in SQL Server 2005?
Regards
Kumar -
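For the question above about extending data files, SQL Server 2005 lets you grow an existing file or add a new one with ALTER DATABASE. A hedged sketch follows; the database name, logical file names, and path are hypothetical, so for an SAP system substitute the real <SID> names (visible in sys.database_files):

```sql
-- Grow an existing data file:
ALTER DATABASE SID
MODIFY FILE (NAME = 'SIDDATA1', SIZE = 20480MB);

-- Or add a new data file:
ALTER DATABASE SID
ADD FILE (NAME = 'SIDDATA4',
          FILENAME = 'D:\SQLDATA\SIDDATA4.ndf',
          SIZE = 10240MB,
          FILEGROWTH = 1024MB);
```

After extending the files, DB13 should show free space again in the Files tab.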
What is the difference in Webintelligence while using SAP BW or MS SQL Server
Hi Experts,
What is the main difference between using SAP BW and MS SQL Server in SAP BO when creating reports with Web Intelligence and Crystal Reports? I want to know mostly the report-level differences.

Hi,
I have not used Crystal Reports with MS SQL Server, but for WebI I can tell you:
SAP BW is an OLAP layer.
MS SQL Server is a database.
You can edit your Web Intelligence query SQL code with MS SQL Server, but not with SAP BW.
Combined queries are disabled with SAP BW but available with SQL Server.
Performance-wise, a SAP BW query is better because it calculates everything in SAP BW and gives us the required output.
Migrating from SQL Server 7.0 To Oracle 8i (SQLServer Jobs/ Schedules)
Hi there,
How do you schedule jobs in Oracle in a similar way to SQL Server 7.0? So far I have only been able to find DBMS_JOB with start, enable, and disable.
Apart from this I have a question on maintainence procedures.
I run the following steps in SQL Server for performance benefits.
1) Rebuild INdexes
2) Database Shrink
3) Recompile Stored Procedures
4) Recompute Statistics
5) Make an entry in a table to indicate that these 4 tasks have been completed.
Do these things matter in Oracle or does the database take care of it on its own? How do you get a similar maintenance schedule up and running in Oracle 8i?
Please reply.
Thanks and Regards,
Ajay

Hi,
Version 1.2.5.0.0 will be available for free download from this site within the next two weeks.
It will migrate both schema and data from a SQL Server 7.0 database.
Please mail [email protected] with your request.
Regards
John -
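To the scheduling and maintenance questions in the thread above: on Oracle 8i the scheduling mechanism is the DBMS_JOB package, and most of the SQL Server maintenance steps have direct equivalents. A hedged sketch with hypothetical object names:

```sql
-- Schedule a nightly maintenance job with DBMS_JOB (Oracle 8i):
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'my_maintenance_proc;',        -- hypothetical procedure
    next_date => SYSDATE,
    interval  => 'TRUNC(SYSDATE) + 1 + 2/24');  -- daily at 02:00
  COMMIT;
END;
/

-- Rough equivalents of the SQL Server maintenance steps:
ALTER INDEX my_index REBUILD;               -- rebuild indexes
ANALYZE TABLE my_table COMPUTE STATISTICS;  -- recompute statistics (8i)
ALTER PROCEDURE my_proc COMPILE;            -- recompile a stored procedure
```

There is no direct equivalent of "database shrink" in 8i; individual datafiles are resized with ALTER DATABASE DATAFILE ... RESIZE, and Oracle otherwise manages space within tablespaces on its own.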
Import data from SQL Server into MS Word document for Mail Merge purpose ?
Hi,
Is it possible to import contacts from SQL Server into MS Word for mail-merge purposes? And if retrieving data from MS Excel, can we update the data in the Excel sheet without opening it?
Note: remember that when you open a Word document already set up for mail merge, it asks you to run the query to return all records from the Excel sheet it is connected to.
Khurram

Word and the current data-source dialog do not really give you any help with that.
You either have to create a View in SQL Server that performs the query you need and then connect to that, or you have to create the correct query manually (or perhaps using some other query tool that can help you) and then use VBA to connect using that query.
For example, if you have been through the connection process once (connecting to a single table) then you will have a .odc (Office Data Connection file) which has the info. needed to connect to the correct server and database. It's a text file with some
HTML and XML inside. You can copy/rename it. Let's say it is called "c:\a\myodc.odc" Then in VBA you can use something like
ActiveDocument.OpenDataSource Name:="c:\a\myodc.odc", _
SQLStatement:="put your SQL statement in here, and if it is long,...", _
SQLStatement1:="put the second part in here"
You get a maximum of either 255 or around 511 characters in the SQL statement, and Word tends to impose some syntax requirements that Transact-SQL does not, so e.g. you may need to quote all your table names.
You can also use an empty .odc file and provide connection info in the Connection:= parameter of OpenDataSource.
As background, until Word 2000, by default you would use MS Query to create your SQL query, and MS Query does have facilities that can help you build your query (a bit like the ones in MS Access). That may still be possible (it is a bit harder to find the MS
Query option now, and I am not sure it works with the latest versions of Word). MS Query only works for ODBC queries, and they do not always work correctly when you actually issue the query using ODBC from Word, because of a Word problem to do with Unicode
fields in SQL Server. But you could probably still use MS Query to help you construct your SQL. (It's probably easier to do that in Excel, though).
Peter Jamieson