Class specific sequence factory for SQL server?
Hi Guys,
I would like to use a Kodo SequenceFactory to generate class-sensitive
primary keys; in other words, to have a sequence for each table (class). We
are using application identity, Kodo 2.4.2, and SQL Server as the database.
I have seen references to ClassSequenceFactory in the newsgroup, but they
seemed to focus on Oracle, and I was a bit fuzzy on what I need to do
to get this working with SQL Server.
Would you have an example/cookbook? (Tight deadline, lots of stress, not
much time to experiment right now :-().)
Thank you,
Petr
You can use ClassDBSequenceFactory. Note that you MUST download the
patch I posted earlier on this newsgroup:
Re: Endless loop problem with ClassDBSequenceFactory
You can see how to configure it here:
http://www.solarmetric.com/docs/2.5.0b1/docs/ref_guide_conf_kodo.html#com.solarmetric.kodo.impl.jdbc.SequenceFactoryClass
Note that you should probably set the TableName property in
SequenceFactoryProperties, because the two tables (the default sequence
factory's and this class-sensitive one's) have the same name but are not
structurally compatible.
Note that you cannot use ClassSequenceFactory out of the box, as SQL Server
does not support native sequencing. You may be able to approximate this by
creating a stored procedure that updates a separate table and returns the
new value, and writing a SequenceFactory implementation that calls the
stored procedure.
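A rough sketch of that approximation (the table, column, and procedure names here are hypothetical, not part of Kodo):

```sql
-- Hypothetical per-class sequence table: one row per class/table name.
CREATE TABLE JDO_SEQUENCE (
    CLASS_NAME VARCHAR(255) NOT NULL PRIMARY KEY,
    NEXT_VALUE BIGINT NOT NULL
);
GO

-- Atomically increment the row for one class and return the new value.
CREATE PROCEDURE GET_NEXT_SEQUENCE
    @className VARCHAR(255),
    @nextValue BIGINT OUTPUT
AS
BEGIN
    UPDATE JDO_SEQUENCE
       SET @nextValue = NEXT_VALUE = NEXT_VALUE + 1
     WHERE CLASS_NAME = @className;
END
GO
```

A custom SequenceFactory implementation would then invoke the procedure through a JDBC CallableStatement.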
Petr Bulanek wrote:
Hi Guys,
I would like to use Kodo SequenceFactory to generate the Class sensitive
primary keys. In other words, to have a sequence for each table(class). We
are using Application identity, Kodo 2.4.2 and SQL Server as a database.
I have seen references to ClassSequenceFactory in the news group, however
they seemed to focus on Oracle and I was a bit fuzzy on what do I need to
do to get this working with SQL Server.
Would you have an example/cook book? (Tight deadline, lots of stress, not
much time to experiment, right now :-()).
Thank you,
Petr
Stephen Kim
[email protected]
SolarMetric, Inc.
http://www.solarmetric.com
Similar Messages
-
Driver class name value for SQL Server JDBC driver
if i am using SQL Server JDBC driver to connect to DB using STRUTS DBCP
what will be the value for the property="driverClassName"
& value for property="url"
eg., <data-source type="org.apache.commons.dbcp.BasicDataSource" >
<set-property property="driverClassName" value="org.gjt.mm.mysql.Driver" />
<set-property property="url" value="jdbc:mysql://10.1.0.8/omega4105" />
<set-property property="username" value="iris"/>
<set-property property="password" value="iris"/>
</data-source>
For SQL Server 2000:
com.microsoft.jdbc.sqlserver.SQLServerDriver
For SQL Server 2005:
com.microsoft.sqlserver.jdbc.SQLServerDriver -
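Putting that together for the Struts data-source (host, port, database name, and credentials below are placeholders; the URL prefix differs between the two driver generations):

```xml
<!-- SQL Server 2000 driver: note the jdbc:microsoft:sqlserver:// URL prefix -->
<data-source type="org.apache.commons.dbcp.BasicDataSource">
  <set-property property="driverClassName"
                value="com.microsoft.jdbc.sqlserver.SQLServerDriver" />
  <set-property property="url"
                value="jdbc:microsoft:sqlserver://localhost:1433;DatabaseName=mydb" />
  <set-property property="username" value="user" />
  <set-property property="password" value="secret" />
</data-source>
```

For the 2005 driver, use com.microsoft.sqlserver.jdbc.SQLServerDriver with a URL of the form jdbc:sqlserver://localhost:1433;databaseName=mydb. -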
Missing Stored Procedure for SQL Server
Hello,
I'm trying to create the standard TestStand 4 schema for SQL Server. I get the following error when I run multiple numeric test cases:
An error occurred calling 'LogResults' in 'ITSDBLog' of 'zNI TestStand Database Logging'
An error occurred executing a statement.
Schema: SQL Server Stored Proc (NI).
Statement: MEAS_NUMERICLIMIT.
Could not find stored procedure 'InsertStepMeasNumericLimit'.
Description: Could not find stored procedure 'InsertStepMeasNumericLimit'.
Number: -2147217900
NativeError: 2812
SQLState: 42000
Reported by: Microsoft OLE DB Provider for SQL Server
Source: TSDBLog
-2147217900; User-defined error code.
Step 'Log Results to Database' of sequence 'Log To Database' in 'Database.seq'
My process for creating the database is to create a .sql file from the TestStand database configuration window. I have selected SQL Server Stored Procedure. I then build the database by running the script in the Execute SQL window of the Database viewer. I don't get any errors when creating the database or the stored procedures. I have noticed that a stored procedure, called InsertMeasNumericLimit does exist. When I rename this procedure to InsertStepMeasNumericLimit, the sequence with the multiple numeric executes without error, and my data is in the database.
Now to my questions:
1. Am I doing something wrong in my procedure to create the database schema, that would cause this?
2. Have the NI schemas or DB calls in TS been changed since 3.5 and validated?
3. Has anyone else run across this problem, and what others can I expect to encounter with the SQL server stored procedure schema?
Thanks in advance for the help,
BCE
bce,
1. What you are doing is correct. There is a bug with the SQL Server Stored Procedure schema.
2. The schemas haven't changed since 3.5. Were you able to go through the same process in TS 3.5 without running into this error?
3. It looks like you have found the way to solve this problem. A few other step types have similar errors, specifically the IVI steps. Until the bug is fixed, you can use Validate to determine whether the statement and column information for the selected schema matches the tables and columns in the database. Alternatively, you can look at the file below to find the correct spelling of the stored procedures.
<TestStand 4.0>\Components\NI\Models\TestStandModels\Database\SQL Server Create Stored Proc Result Tables.sql
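Until the script generation is fixed, the rename workaround you found can be scripted (verify both names against the .sql file above first; this assumes the generated name and the expected name from the error message):

```sql
-- Rename the generated procedure to the name TestStand's schema actually calls.
EXEC sp_rename 'InsertMeasNumericLimit', 'InsertStepMeasNumericLimit';
```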
Regards,
Song Du
Systems Software
National Instruments R&D -
Increase Performance and ROI for SQL Server Environments
May 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks: server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. The NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
Test Configuration Components
Details
SQL Server 2014 servers
Fujitsu RX300
Server operating system
Microsoft Windows 2012 R2 Standard Edition
SQL Server database version
Microsoft SQL Server 2014 Enterprise Edition
Processors per server
Two 6-core Xeon E5-2630 at 2.30 GHz
Fibre channel network
8Gb FC with multipathing
Storage controller
AFF8080 EX
Data ONTAP version
Clustered Data ONTAP® 8.3.1
Drive number and type
48 SSD
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
Value
Analysis Results
ROI
65%
Net present value (NPV)
$950,000
Payback period
six months
Total cost reduction
More than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration
$40,000
Additional savings due to nondisruptive operations benefits (not included in ROI)
$90,000
Source: NetApp, 2015
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
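As a back-of-the-envelope check on the licensing math above (the per-core price below is a hypothetical placeholder, not an actual Microsoft quote):

```java
public class LicenseSavings {
    public static void main(String[] args) {
        int serversBefore = 10, serversAfter = 5;  // consolidation described above
        int coresPerServer = 2 * 6;                // two 6-core Xeon E5-2630s per server
        double perCorePrice = 7000.0;              // hypothetical per-core license price (USD)

        double costBefore = serversBefore * coresPerServer * perCorePrice;
        double costAfter = serversAfter * coresPerServer * perCorePrice;

        // Halving the server count halves the per-core licensing cost.
        System.out.printf("Before: $%,.0f  After: $%,.0f  Savings: $%,.0f%n",
                costBefore, costAfter, costBefore - costAfter);
    }
}
```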
Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
Quick Links
Tech OnTap Community
Archive
PDF -
Hi,
I'm trying to execute some SQL queries and I just don't understand what's wrong.
I'm using Tomcat and SQL Server, but when I try to execute a query with an INNER JOIN statement, Tomcat raises a SQLException. At first I thought there was a problem with the database connection, but I realized that a simple query against one table works pretty well. Then I found some problems with JDBC:ODBC, so I installed the JDBC driver for SQL Server 2000 and tested the same simple query, and it works. So I came to a conclusion: INNER JOIN or JOIN statements can't be used in JDBC. Please, somebody tell me I'm wrong and give me a hand...
I'm using Tomcat 4, JDK 1.4, and SQL Server 2000.
The error occurs when executeQuery() is called, not prepareStatement().
// Class.forName() alone registers the driver; the newInstance() call is unnecessary.
Driver driverInstance = (Driver) Class.forName(driver).newInstance();
Connection conn = DriverManager.getConnection(DSN, user, password);
PreparedStatement stmt = conn.prepareStatement(query);
ResultSet rs = stmt.executeQuery(); // <-- exception is raised here
So much thanks in advance.
That's exactly what I think: the driver is raising the exception, but I don't know why. I tested the same query with INNER JOIN directly from SQL Query Analyzer and it works perfectly. My problem isn't SQL, but JSP and JDBC, because I'm a newbie on these issues.
Common sense tells me the problem may lie in the SQL Server drivers, because I run the same pages on JRun through jdbc:odbc and they work well, but for now I just depend on Tomcat.
I've installed the SQL Server drivers for JDBC but find they don't work fully... could it be the version of the JDK I've installed? What version do I need?
(I'm running Tomcat 4 with JDK 1.4 & SQL Server 2000 on W2K)
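To see the real cause, print the full SQLException chain (SQLState, vendor code, and any chained exceptions) rather than just the message; a minimal helper sketch (the example exception values are illustrative, not from your server):

```java
import java.sql.SQLException;

public class SqlErrors {
    // Walk the exception chain, collecting SQLState and vendor code for each link.
    public static String describe(SQLException e) {
        StringBuilder sb = new StringBuilder();
        for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
            sb.append("SQLState=").append(cur.getSQLState())
              .append(" vendorCode=").append(cur.getErrorCode())
              .append(" message=").append(cur.getMessage())
              .append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Illustrative chained exception, similar to what a driver might raise.
        SQLException root = new SQLException("Incorrect syntax near 'JOIN'.", "42000", 156);
        root.setNextException(new SQLException("Statement could not be prepared.", "42000", 8180));
        System.out.print(describe(root));
    }
}
```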
thanks for reply. -
SQL Server Radio - A new podcast for SQL Server professionals
Hi everyone,
We (Guy Glantser and Matan Yungman) have recently launched a new podcast for SQL Server professionals, called "SQL Server Radio".
We release two shows a month, and basically talk about everything around SQL Server and the SQL Server community. On one show of each month, we juggle around and talk about interesting blog posts, forum discussions, news, events and other things that happen
in the SQL Server world. We expand about each of those items and add from our own experience. On the second show of the month, we go more in-depth. Sometimes we bring a guest, and sometimes we talk about a specific topic like SQL Server 2014, conferences or
professional development.
So if you love SQL Server and love learning by hearing, whether you're on your way to work, jogging or just chilling and
want to learn in a fun way, check us out on
http://www.SQLServerRadio.com
Two shows are already online. -
OGG for SQL Server - Extract stops capturing - Bug?
Hi, all,
I've found a problem with OGG for SQL Server where the Extract stops capturing data after the transaction log is backed up. I've looked for ways to reconfigure OGG to avoid the problem but couldn't find any reference to options to workaround this problem. It seems to be a bug to me.
My Extract configuration is as follows:
EXTRACT ext1
SOURCEDB mssql1
TRANLOGOPTIONS NOMANAGESECONDARYTRUNCATIONPOINT
EOFDELAY 60
EXTTRAIL dirdat/e1
TABLE dbo.TestTable;
I'm using the EOFDELAY parameter for testing purposes only, since it's easy to reproduce the scenario that causes the issue when the extract polling is configured with longer intervals.
When the transaction log backup runs, SQL Server marks all the virtual logs that are older than the primary and secondary truncation points as inactive (status = 0). These virtual logs can then be reused if required. They still contain change records, though, and OGG can read from them if required, before they are overwritten. This situation will never occur if we are not using SQL Replication and the Extract is configured with the parameter MANAGESECONDARYTRUNCATIONPOINT.
However, I'm trying to simulate a scenario where OGG is used alongside SQL Replication and the Extract is configured with the NOMANAGESECONDARYTRUNCATIONPOINT option. The situation that I've reproduced, and that caused the Extract to stop capturing, is the following sequence of events:
1. The Extract reads the transaction log and captures changes up to LSN X.
2. More changes are made to the database and the LSN is incremented.
3. The Log Reader reads the transaction log, captures changes up to LSN X+Y, and advances the secondary truncation point to that LSN.
4. A transaction log backup occurs, backing up all the active virtual logs, advancing the primary truncation point to an LSN greater than X+Y, and marking all the virtual logs with LSNs <= X+Y as inactive (status = 0).
5. Changes continue to happen in the database, consuming all the available inactive virtual logs and overwriting them.
6. The Extract wakes up again to capture more changes.
At this point, the changes between LSNs X and X+Y are no longer in the transaction log, but they are available in the backups. From what I understood of the documentation, the Extract should detect that situation and retrieve the changes from the transaction log backups. This, however, is not happening, and the Extract becomes stuck. It still polls the transaction log at the configured interval, querying the log state with DBCC LOGINFO, but it doesn't move forward anymore.
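The race reduces to a comparison between the Extract's read checkpoint and the oldest LSN still present in the reused log; a toy sketch (LSNs simplified to plain integers, nothing here is actual OGG code):

```java
public class LogRace {
    // True when the LSN range the extract still needs has been overwritten,
    // so the missing changes can only come from transaction log backups.
    static boolean mustReadFromBackup(long extractCheckpointLsn, long oldestLsnInLog) {
        return extractCheckpointLsn < oldestLsnInLog;
    }

    public static void main(String[] args) {
        long extractCheckpoint = 100; // step 1: extract captured up to LSN X
        long oldestAfterReuse = 151;  // step 5: virtual logs up to X+Y (150) overwritten
        // Step 6: the extract wakes up and should fall back to the backups.
        System.out.println(mustReadFromBackup(extractCheckpoint, oldestAfterReuse)); // prints "true"
    }
}
```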
If I stop and restart the Extract I can see from the trace that it does the right thing upon startup. It realises that it requires information that's missing from the logs, query MSDB for the available backups, and mine the backups to get the required LSNs.
I would've thought the Extract should do the same during normal operation, without the need for a restart.
Is this a bug or the normal operation of the Extract? Is there a way to configure it to avoid this situation without using NOMANAGESECONDARYTRUNCATIONPOINT?
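For anyone wanting to watch the same sequence of events from the SQL Server side, these are standard diagnostics (a sketch; MyDatabase is a placeholder for the captured database):

```sql
USE MyDatabase;

-- Virtual log file map: Status = 2 means active, 0 means inactive/reusable
DBCC LOGINFO;

-- What, if anything, is currently holding log truncation (e.g. REPLICATION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- Log backups and the LSN ranges they contain -- what the Extract
-- queries msdb for on restart (type 'L' = transaction log backup)
SELECT backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = 'MyDatabase' AND type = 'L'
ORDER BY backup_start_date;
```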
The following is the state of the Extract once it gets stuck. The last replicated change occurred at 2012-07-09 12:46:50.370000. All the changes after that, and there are many, were not captured until I restarted the Extract.
GGSCI> info extract ext1, showch
EXTRACT EXT1 Last Started 2012-07-09 12:32 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:54 ago)
VAM Read Checkpoint 2012-07-09 12:46:50.370000
LSN: 0x0000073d:00000aff:0001, Tran: 0000:000bd922
Current Checkpoint Detail:
Read Checkpoint #1
VAM External Interface
Startup Checkpoint (starting position in the data source):
Timestamp: 2012-07-09 11:41:06.036666
LSN: 0x00000460:00000198:0004, Tran: 0000:00089b02
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Timestamp: 2012-07-09 12:46:50.370000
LSN: 0x0000073d:00000afd:0004, Tran: 0000:000bd921
Current Checkpoint (position of last record read in the data source):
Timestamp: 2012-07-09 12:46:50.370000
LSN: 0x0000073d:00000aff:0001, Tran: 0000:000bd922
Write Checkpoint #1
GGS Log Trail
Current Checkpoint (current write position):
Sequence #: 14
RBA: 28531192
Timestamp: 2012-07-09 12:50:02.409000
Extract Trail: dirdat/e1
CSN state information:
CRC: D2-B6-9F-B0
CSN: Not available
Header:
Version = 2
Record Source = A
Type = 8
# Input Checkpoints = 1
# Output Checkpoints = 1
File Information:
Block Size = 2048
Max Blocks = 100
Record Length = 20480
Current Offset = 0
Configuration:
Data Source = 5
Transaction Integrity = 1
Task Type = 0
Status:
Start Time = 2012-07-09 12:32:29
Last Update Time = 2012-07-09 12:50:02
Stop Status = A
Last Result = 400
Thanks!
Andre

It might be something simple (or maybe not), but the best/fastest way to troubleshoot this would be to have Oracle (GoldenGate) support review your configuration. There are a number of critical steps required to allow GG to interoperate with MS's capture API. (I doubt this is it, but is your TranLogOptions on one line? It looks like you have it on two, the way it's formatted here.)
Anyway, GG support has seen it all, and can probably wrap this up quickly. (And if it was something simple -- or even a bug -- do post back here & maybe someone else can benefit from the solution.)
Perhaps someone else will be able to provide a better answer, but for the most part, troubleshooting this (i.e., SQL Server) via a forum tends to be a bit like doing brain surgery blindfolded. -
How to establish a trusted connection with JDBC for SQL SERVER 2000
Hi! I am using JDK 1.4 and Eclipse 3.3.
I created a servlet in Eclipse with the built-in Tomcat.
When I ran it, it worked perfectly, as it was supposed to.
In this servlet I connect to a SQL Server 2000 database using the JDBC-ODBC bridge driver.
But when I tried to deploy the servlet on Tomcat 5.5 manually on the same machine, it gave me an error saying
[Microsoft][SQLServer JDBC Driver][SQLServer]Login failed
for user 'sa'
I searched around some post and found that ok ,I need trusted connection
But I have 2 Questions
1). Why was I able to connect to the SQL Server from Eclipse, but not from the servlet I deployed manually on Tomcat?
2). How do I create a trusted connection with JDBC for SQL Server 2000?
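On question 2: with the JDBC-ODBC bridge, Windows integrated authentication is requested through the ODBC connection-string attribute Trusted_Connection=yes rather than a user/password pair. A minimal sketch (the server and database names are placeholders):

```java
// Sketch only: builds a trusted-connection URL for the JDBC-ODBC bridge.
// "MyServer" and "MyDb" are placeholder names; Trusted_Connection=yes is
// the ODBC attribute that switches from SQL logins to the Windows account
// of the running process.
public class TrustedUrl {
    static String buildUrl(String server, String database) {
        return "jdbc:odbc:Driver={SQL Server};Server=" + server
             + ";Database=" + database + ";Trusted_Connection=yes";
    }

    public static void main(String[] args) {
        String url = buildUrl("MyServer", "MyDb");
        System.out.println(url);
        // With a reachable server this URL would be used as:
        // Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        // Connection c = DriverManager.getConnection(url); // no user/password
    }
}
```

One caveat: the Windows identity used is that of the process, so a servlet running under the Tomcat service account may fail where the same URL works from Eclipse running under your own account.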
Thanks for your help in advance.

Hi duffymo, QussayNajjar, dvohra09!
Thank for help.
The ideas are really great.
I am trying generate reports for my company.
When I used eclipse the code worked perfectly.
below is code which I used
out.println("Calling For Class Name<br>");
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
out.println("Calling For Class Name success Now calling database <br>");
1). jdbcConnection = DriverManager.getConnection("jdbc:odbc:SQLJasper");
2). jdbcConnection = DriverManager.getConnection("jdbc:odbc:Driver={SQL Server};Server=ServerName;Database=tempdb");
3). jdbcConnection = DriverManager.getConnection("jdbc:odbc:Driver={SQL Server};Server=ServerName;Database=tempdb","UID=UserName","Password=Password");
out.println("connecting to database success<br>");
I tried connecting to the database in these three ways.
In the 1st I tried using a DSN name.
The next two are self-explanatory for an expert like you.
I used the 2nd variant to connect in Eclipse and it worked fine.
I'm not an expert in Java, I'm just doing some research on JasperReports.
My best guess is that Eclipse is using some library files of which I have no clue.
Thanks for your help, I appreciate it.
Once again, thanks a billion.
Sorry for the messy writing. -
Has anybody used the microsoft JDBC 2.0 driver for sql server 2000?
Hi,
Has anybody used the JDBC 2.0 driver for SQL Server 2000, downloadable from the Microsoft website? When I try using it with WL 6.1 SP1 it says it can't load the driver. When I try viewing the class file from the jar using the jar utility, it gives an unknown Zip format error. Does anybody have a solution for this? If anybody has managed to work with this Microsoft driver, I would be grateful if they could provide me with a solution.
Thanks
Thomas

Hello Thomas,
You may want to download and install the driver again.
Here's a sample XML tag from the config.xml:
<JDBCConnectionPool
DriverName="com.microsoft.jdbc.sqlserver.SQLServerDriver"
InitialCapacity="3" MaxCapacity="12" Name="MSpool"
Password="{3DES}fUz1bxR0zDg=" Properties="user=uid"
Targets="myserver"
URL="jdbc:microsoft:sqlserver://mydbserver:1433"/>
Ensure that you follow the instructions from Microsoft. For using the 2000 driver you will need Install_dir/lib/msbase.jar and Install_dir/lib/msutil.jar, in addition to Install_dir/lib/mssqlserver.jar, in the CLASSPATH.
hth
sree -
Error deploying JDBC driver for SQL Server 2005
Hi all
I'm trying to deploy a JDBC driver for MS SQL Server 2005 (downloaded [here|http://www.microsoft.com/downloads/details.aspx?familyid=C47053EB-3B64-4794-950D-81E1EC91C1BA&displaylang=en]). When I try to deploy it according to the instructions found [here|http://help.sap.com/saphelp_nwce10/helpdata/en/51/735d4217139041e10000000a1550b0/frameset.htm] it fails.
The error in the logs doesn't give any useful information, though. It only says: Error occurred while deploying component ".\temp\dbpool\MSSQL2005.sda".
Has anyone else deployed a driver for SQL Server 2005, or perhaps have any suggestions of what else I could try?
Thanks
Stuart

Hi Vladimir
That's excellent news! Thanks for the effort you've put into this. I'm very impressed with how seriously these issues are dealt with, specifically within the Java EE aspects of SAP.
May I make two suggestions on this topic:
1. Given that NWA has such granular security permissions, please could the security error be shown when it is raised? This would help immediately identify that the problem isn't actually a product error, but rather a missing security permission (and thus save us time and reduce your support calls).
2. Please could the role permissions be clearly documented (perhaps they already are, and I just couldn't find the docs?) so we know what is and isn't included in the role. The name is very misleading, as a "superadmin" is generally understood to have no limitation on their rights - so clear documentation on what is in-/excluded would be most helpful.
On a related topic, I came across another issue like this that may warrant your attention (while you're already looking into NWA security issues). I logged a support query about it (ref: 0120025231 0000753421 2008) in case you can retrieve details there (screenshots, logs, etc.). It's basically a similar security constraint when trying to create a Destination. I'm not sure if this is something you would like to include as standard permissions within the NWA_SUPERADMIN role or not, but I think it's worth consideration.
Thanks again for your help!
Cheers
Stuart -
Security Update for SQL Server 2005 SP3 (KB2494113) failed
Here is the Error, Can't get the update install, please help.
KB Number: KB2494113
Machine: MJA01
OS Version: Server 4.0 Service Pack 1 (Build 7601)
Package Language: 1033 (ENU)
Package Platform: x86
Package SP Level: 3
Package Version: 4060
Command-line parameters specified:
Cluster Installation: No
Prerequisites Check & Status
SQLSupport: Passed
Products Detected Language Level Patch Level Platform Edition
SQL Server Database Services 2005 (BKUPEXEC) ENU SP3 2005.090.4035.00 x86 EXPRESS
SQL Server Tools and Workstation Components 2005 ENU SP2 9.2.3042 x86 EXPRESS
Products Disqualified & Reason
Product Reason
SQL Server Tools and Workstation Components 2005 The product instance SQL Tools does not have prerequisite update 4035 installed. Update 4060 is dependent on prerequisite update 4035. Exit setup and refer to the Knowledge Base article to find the prerequisite
patch. Install the prerequisite and rerun the installation.
Processes Locking Files
Process Name Feature Type User Name PID
Product Installation Status
Product : SQL Server Database Services 2005 (BKUPEXEC)
Product Version (Previous): 4035
Product Version (Final) :
Status : Failure
Log File : C:\Program Files (x86)\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQL9_Hotfix_KB2494113_sqlrun_sql.msp.log
SQL Express Features :
Error Number : 29528
Error Description : MSP Error: 29528 The setup has encountered an unexpected error while Setting Internal Properties. The error is: Fatal error during installation.
Product : SQL Server Tools and Workstation Components 2005
Product Version (Previous): 3042
Product Version (Final) :
Status : NA
Log File :
SQL Express Features :
Error Description : The product instance SQL Tools does not have prerequisite update 4035 installed. Update 4060 is dependent on prerequisite update 4035. Exit setup and refer to the Knowledge Base article to find the prerequisite
patch. Install the prerequisite and rerun the installation.
Summary
One or more products failed to install, see above for details
Exit Code Returned: 29528

Hi Bhanu,
I have uninstalled SQL Server Tools and Workstation Components 2005.
But I am still unable to update SQL Server with KB2494113. Any more ideas?
This is the Summary log.
Time: 09/10/2014 11:09:48.218
KB Number: KB2494113
Machine: MJA01
OS Version: Server 4.0 Service Pack 1 (Build 7601)
Package Language: 1033 (ENU)
Package Platform: x86
Package SP Level: 3
Package Version: 4060
Command-line parameters specified:
Cluster Installation: No
Prerequisites Check & Status
SQLSupport: Passed
Products Detected Language Level Patch Level Platform Edition
SQL Server Database Services 2005 (BKUPEXEC) ENU SP3 2005.090.4035.00 x86 EXPRESS
Products Disqualified & Reason
Product Reason
Processes Locking Files
Process Name Feature Type User Name PID
Product Installation Status
Product : SQL Server Database Services 2005 (BKUPEXEC)
Product Version (Previous): 4035
Product Version (Final) :
Status : Failure
Log File : C:\Program Files (x86)\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQL9_Hotfix_KB2494113_sqlrun_sql.msp.log
SQL Express Features :
Error Number : 29528
Error Description : MSP Error: 29528 The setup has encountered an unexpected error while Setting Internal Properties. The error is: Fatal error during installation.
Summary
One or more products failed to install, see above for details
Exit Code Returned: 29528
Here is what is logged in the SQL9_Hotfix_KB2494113_sqlrun_sql.msp.log:
Property(S): CommonFilesFolder.D9BC9C10_2DCD_44D3_AACC_9C58CAF76128 = C:\Program Files (x86)\Common Files\
MSI (s) (B8:F0) [11:09:39:826]: Product: Microsoft SQL Server 2005 Express Edition - Update 'GDR 4060 for SQL Server Database Services 2005 ENU (KB2494113)' could not be installed. Error code 1603. Additional information is available in the log file C:\Program
Files (x86)\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQL9_Hotfix_KB2494113_sqlrun_sql.msp.log.
MSI (s) (B8:F0) [11:09:39:827]: Windows Installer installed an update. Product Name: Microsoft SQL Server 2005 Express Edition. Product Version: 9.3.4035.00. Product Language: 1033. Manufacturer: Microsoft Corporation. Update Name: GDR 4060 for SQL Server Database
Services 2005 ENU (KB2494113). Installation success or error status: 1603.
MSI (s) (B8:F0) [11:09:39:828]: Note: 1: 1729
MSI (s) (B8:F0) [11:09:39:828]: Product: Microsoft SQL Server 2005 Express Edition -- Configuration failed.
MSI (s) (B8:F0) [11:09:39:829]: Windows Installer reconfigured the product. Product Name: Microsoft SQL Server 2005 Express Edition. Product Version: 9.3.4035.00. Product Language: 1033. Manufacturer: Microsoft Corporation. Reconfiguration success or error
status: 1603.
MSI (s) (B8:F0) [11:09:39:829]: Attempting to delete file C:\Windows\Installer\63906e.msp
MSI (s) (B8:F0) [11:09:39:829]: Unable to delete the file. LastError = 32
MSI (s) (B8:F0) [11:09:40:092]: Deferring clean up of packages/files, if any exist
MSI (s) (B8:F0) [11:09:40:092]: Attempting to delete file C:\Windows\Installer\63906e.msp
MSI (s) (B8:F0) [11:09:40:094]: MainEngineThread is returning 1603
MSI (s) (B8:6C) [11:09:40:097]: RESTART MANAGER: Session closed.
MSI (s) (B8:6C) [11:09:40:097]: No System Restore sequence number for this installation.
=== Logging stopped: 9/10/2014 11:09:39 ===
MSI (s) (B8:6C) [11:09:40:098]: User policy value 'DisableRollback' is 0
MSI (s) (B8:6C) [11:09:40:098]: Machine policy value 'DisableRollback' is 0
MSI (s) (B8:6C) [11:09:40:098]: Incrementing counter to disable shutdown. Counter after increment: 0
MSI (s) (B8:6C) [11:09:40:098]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (B8:6C) [11:09:40:098]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (B8:6C) [11:09:40:098]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
MSI (s) (B8:6C) [11:09:40:098]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
MSI (s) (B8:6C) [11:09:40:098]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (s) (B8:6C) [11:09:40:099]: Restoring environment variables
MSI (s) (B8:6C) [11:09:40:099]: Destroying RemoteAPI object.
MSI (s) (B8:30) [11:09:40:099]: Custom Action Manager thread ending.
MSI (c) (FC:60) [11:09:40:100]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (c) (FC:60) [11:09:40:101]: MainEngineThread is returning 1603
Thanks,
Alice -
I have used SSRS a lot, years ago, with SQL Server 2000 and SQL Server 2005. I have written external assemblies, etc. But now I have to do this with SQL Server 2008 (R2 -- which I realize is way behind the times already, but ...). In SQL Server 2000 and 2005 there was a tab for the data source to the left of the design tab, which was to the left of the preview tab. How do I get to the data source window in SQL Server 2008 (R2)?
I see the Datasource Explorer. But where can I get to the data source window to edit my queries and so forth for SQL Server 2008 (R2)?
Thanks
Rich P

I think I found the answer to my question --- just right-click on the Data Sources or Datasets for editing connections and dataset queries. I'm guessing it gets even fancier with SQL Svr 2012 - 2014. Man, that's the one thing
about coding platforms -- you let it go for a few years and come back, and everything has changed (well, a lot of things). Now I need to figure out how to add an external assembly to SSRS 2008 (R2).
Rich P -
After installed SP1 for SQL Server 2012, can no longer export to csv
After installing SP1 today via Windows Update, I am no longer able to export data to csv using the SQL Server Import and Export wizard. I get the following error message:
"Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider."
"Column "col1": Source data type "200" was not found in the data type mapping file."...
(The above line repeats for each column)
The work-around I have is to manually map each column in the "Edit Mappings..." option from the "Configure Flat File Destination" page of the wizard. It is an extreme inconvenience to have to edit the mappings and change each column to "string [DT_STR]" type from "byte stream [DT_BYTES]" type each time I want to export to csv. I did not have to do this before installing SP1; it worked perfectly for months, with hundreds of exports, prior to this update, with no need to modify mappings.
I am running Windows 7 64-bit, SQL Server 2012 Express edition. Again, just yesterday from Windows Update, I installed SQL Server 2012 Service Pack 1 (KB2674319), followed by Update Rollup for SQL Server 2012 Service Pack 1 (KB2793634). This situation was not occurring before these updates were installed, and I noticed it immediately after they were installed (and of course I restarted my computer after the updates).
In SSMS I just now created a test DB and table to provide a step-by-step with screenshots.
Here is the code I ran to create the test DB and table:
CREATE DATABASE testDB;
GO
USE testDB;
GO
CREATE TABLE testTable (
id int,
lname varchar(50),
fname varchar(50),
address varchar(50),
city varchar(50),
state char(2),
dob date
);
GO
INSERT INTO testTable VALUES
(1,'Smith','Bob','123 Main St.','Los Angeles','CA','20080212'),
(2,'Doe','John','555 Rainbow Ln.','Chicago','IL','19580530'),
(3,'Jones','Jane','999 Somewhere Pl.','Washington','DC','19651201'),
(4,'Jackson','George','111 Hello Cir.','Dallas','TX','20010718');
GO
SELECT * FROM testTable;
Results look good:
id lname fname address city state dob
1 Smith Bob 123 Main St. Los Angeles CA 2008-02-12
2 Doe John 555 Rainbow Ln. Chicago IL 1958-05-30
3 Jones Jane 999 Somewhere Pl. Washington DC 1965-12-01
4 Jackson George 111 Hello Cir. Dallas TX 2001-07-18
In Object Explorer, I right-click on the [testDB] database, choose "Tasks", then "Export Data..." and the SQL Server Import and Export Wizard appears. I click Next to leave all settings as-is on the "Choose a Data Source" page, then on the "Choose a Destination"
page, under the "Destination" drop-down I choose "Flat File Destination" then browse to the desktop and name the file "table_export.csv" then click Next. On the "Specify Table Copy or Query" page I choose "Write a query to specify the data to transfer" then
click Next. I type the following SQL statement:
SELECT * FROM testTable;
When clicking the "Parse" button I get the message "This SQL statement is valid."
On to the next page, "Configure Flat File Destination" I try leaving the defaults then click Next. This is where I am getting the error message (see screenshot below):
Then going to the "Edit Mappings..." option on the "Configure Flat File Destination" page, I see that all columns which were defined as varchar in the table are showing as type "byte stream [DT_BYTES]", size "0". The state column, which is defined as char(2), shows correctly, however, with type "string [DT_STR]", size "2" (see screenshot below):
So what I have to do is change the type for the lname, fname, address and city columns to "string [DT_STR]", then I am able to proceed with the export successfully. Again, this just started happening after installing these updates. As you can imagine, this
is very frustrating, as I do a lot of exports from many tables, with a lot more columns than this test table.
Thanks for your help. -
Hi,
I have installed the x64 SQL Server 2008 R2 Express with default settings and ran MBSA 2.3 (using default settings too). It shows three SQL Server instances: MSSQL10_50.SQLEXPRESS, SQLEXPRESS and SQLEXPRESS (32-bit). For the first, the authentication mode is Windows; for the other two, it is mixed. Here https://social.msdn.microsoft.com/Forums/sqlserver/en-US/03e470dc-874d-476d-849b-c805acf5b24d/sql-mbsa-question-on-folder-permission?forum=sqlsecurity a question
about such multiple instances was asked, and the answer is that "MSSQL10.TEST_DB
is the instance ID for the SQL Server Database Engine of the instance, TEST_DB", so in my case, it seems that MSSQL10_50.SQLEXPRESS is the instance ID for SQL Server Database Engine of the SQLEXPRESS instance.
I have two questions:
1) How can it be that SQL Server DB Engine instance has different authentication mode than corresponding SQL Server Instance?
2) Why is a 32-bit instance reported although I installed only the 64-bit version?
Also, this https://social.technet.microsoft.com/Forums/security/en-US/6b12c019-eaf0-402c-ab40-51d31dce968f/mbsa-23-reporting-sql-32bt-instance-is-running-in-mixed-mode-when-it-is-set-to-integrated?forum=MBSA question seems to be related to this
issue, but there is no answer :(.
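A quick way to cross-check what MBSA reports is to ask each instance directly (a sketch; run it once per instance):

```sql
-- 1 = Windows authentication only, 0 = mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only,
       SERVERPROPERTY('InstanceName') AS instance_name,
       SERVERPROPERTY('Edition') AS edition;
```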
Upd: Tried on a clean Windows 8 installation and on Windows 7, with the same result.

Because I DO NOT want the three people who will be having access to the production SQL Server to also have access to the primary host ProductionA. Since I have to allow them to RDC into the box to manage the SQL Server, I figure why not create
a separate VM for each one of them and they can RDC into those instead.
Does this make any sense?
Any tips are greatly appreciated. The main reason for doing this is because the three people who will be accessing the box, I need to isolate each one of them and at the same time keep them off of the primary ProductionA.
Thanks for your help.
M
Hello M,
Since you don't want the 3 guys to have access to production machine A, you can install the SQL Server client -- by client I mean SQL Server Management Studio (SSMS) -- on their local desktops and then create logins for them in SQL Server. Open the port on which your SQL Server is running for the three machines so that they can connect. Now, with SSMS installed on each machine, each of them can connect to SQL Server from their own machine.
I would also like you to be cautious about giving the sysadmin privilege to all three of them; first note down what tasks they will do, and then decide what rights to provide.
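To expand on that suggestion, a minimal sketch of a scoped login instead of sysadmin (the login name, password, and database name are placeholders):

```sql
-- Create a server login and map it into the target database
CREATE LOGIN app_user1 WITH PASSWORD = 'Str0ng!Passw0rd1';  -- placeholders
USE ProductionDB;
CREATE USER app_user1 FOR LOGIN app_user1;

-- Grant only what the task actually needs, e.g. read/write access
EXEC sp_addrolemember 'db_datareader', 'app_user1';
EXEC sp_addrolemember 'db_datawriter', 'app_user1';
```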
Your option will also work, but you would need to create 3 VMs for that, which is the more tedious task.
Hope this helps.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
Hi,
I have installed Cumulative Update 12 for SQL Server 2008 R2 SP2 on my SharePoint instances. This was to resolve a known issue faced with the instance, and CU12 helped resolve it. My company is rather strict regarding security hotfixes, but I am not sure whether this particular hotfix [Security Hotfix (KB2977319)] is required if the instance has CU12 applied.
I tested this on a lab server; the installation ran fine, and the summary log stated that the KB was applied. But the build number did not change, hence the doubt.
Overall summary:
Final result: Passed
Exit code (Decimal): 0
Exit message: Passed
Start time: 2014-09-06 10:31:21
End time: 2014-09-06 10:55:49
Requested action: Patch
Instance SPNTSQLTRN overall summary:
Final result: Passed
Exit code (Decimal): 0
Exit message: Passed
Start time: 2014-09-06 10:48:08
End time: 2014-09-06 10:55:45
Requested action: Patch
Package properties:
Description: SQL Server Database Services 2008 R2
ProductName: SQL2008
Type: RTM
Version: 10
SPLevel: 2
KBArticle: KB2977319
KBArticleHyperlink: http://support.microsoft.com/?kbid=2977319
PatchType: QFE
AssociatedHotfixBuild: 0
Platform: x64
PatchLevel: 10.52.4321.0
ProductVersion: 10.52.4000.0
GDRReservedRange: 10.50.4001.0:10.50.4199.0;10.50.4200.0:10.50.4250.0
PackageName: SQLServer2008-KB2977319-x64.exe
Installation location: e:\ac2af22d88ee645b5b32b5c178\x64\setup\
Please let me know whether I need to apply the hotfix on top of CU12. Thanks in advance.
John S

Yes, you must install the security update described in KB 2977319; it is important for SQL Server to be patched with this security update. Without it, an attacker could compromise your system and gain control over it.
Please mark this reply as the answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it.
My Technet Articles
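On the build-number doubt: the applied level can also be verified from the instance itself and compared against the PatchLevel line in the setup summary (a quick check, not specific to this KB):

```sql
SELECT SERVERPROPERTY('ProductVersion') AS product_version,  -- compare with PatchLevel in the setup log
       SERVERPROPERTY('ProductLevel') AS sp_level,
       SERVERPROPERTY('Edition') AS edition;
```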