SCCM DCM CI for SQL Infra

Need to get an SCCM DCM CI created to check the following configuration on SQL Server infrastructure:
1. Determine the list of users/groups in the sysadmin role.
2. Is any group/user in the sysadmin role also a member of the local Administrators group on the host server? (If so, Non-Compliant.)
3. Is BUILTIN\Administrators present on the host, and is it a member of sysadmin?

This can be done using PowerShell (or any other script type you prefer).
Just return "Compliant" or "Non-Compliant" depending on the result.
I don't have examples for the specific scenarios, but this is how it works:
If (Test-Whatever -eq $true) {
    Write-Host "Compliant"
}
Else {
    Write-Host "Non-Compliant"
}
Under Compliance Rules you configure it like this:
"The value returned by the specified script: Equals 'Compliant'".
Ronni Pedersen | Microsoft MVP - ConfigMgr | Blogs:
www.ronnipedersen.com/ and www.SCUG.dk/ | Twitter
@ronnipedersen

Similar Messages

  • Does SCCM 2007 R3 support SQL 2014? If not, is there any mitigation plan for this?

    Hi ,
    As part of a database migration to SQL 2014, I wanted to check whether SCCM 2007 R3 supports SQL 2014 or not.
    I have a link for SCCM 2012 which says SQL 2014 is supported, but only in upgrade mode.
    Is there any other approach instead of upgrading to SCCM 2012?
    Please let us know.
    http://windowsitpro.com/configuration-manager/configmgr-2012-supported-sql-server-2014-only-if-you-upgrade
    Regards
    Sudam Bisi
    CTS

    I doubt CM07 will support SQL 2014, particularly when mainstream support has ended already.
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ

  • Are new versions of SP for SQL supported for SCCM 2007 while in extended support

    With SCCM 2007 R3 in extended support, are new service pack versions for SQL, such as 2008 SP3, tested and supported?  We see that TechNet has not been updated for this version.  We have tested it in the lab and it works great, but we wanted to verify supportability
    before upgrading production.

    Configuration Manager 2007 supports Microsoft SQL Server 2008 R2 SP1 and Microsoft SQL Server 2008 SP3.
    System Center Configuration Manager 2007 SP2, R2 and R3 now support Microsoft SQL Server 2008 R2 SP1 and Microsoft SQL Server 2008 SP3 as a Configuration Manager 2007 site database.
    You can also refer to the link below:
    http://blogs.technet.com/b/configmgrteam/archive/2011/09/08/configuration-manager-support-announcements-for-august-2011.aspx
    Please remember, if you see a post that helped you, please click "Vote As Helpful", and if it answered your question, please click "Mark As Answer".
    Mai Ali | My blog: Technical | Twitter: Mai Ali

  • Unable to install MBAM 2.5 with SCCM integration on named SQL instance

    Hi,
    In production I am installing MBAM 2.5 with SCCM integration, but during the installation it asks for the SCCM Reporting Services point, which it is unable to recognize.
    In my lab environment I have a default SQL instance with Reporting installed, and in that case it works fine.
    Can someone let me know why this issue is occurring?
    In the event log the error is:  Description = "user domain\username is not able to get the lock at this time. Error: 0x40480732"

    For the integration part, you need to install the MBAM SCCM Integration feature on the SCCM server. For compliance reporting in the integration, MBAM uses the Reporting Services point for SCCM.
    When installing the SCCM integration feature, you need to provide the name of the reporting server where the Reporting Services point role is installed for SCCM. But it only provides an option for the server, not for the instance. We have two SQL instances:
    one for the CAS and the other for the primary site. We need to run the installer on the CAS and hence need to connect to the SQL instance for the CAS.
    We have also tried the PowerShell commands, but that is not working either.
    Is there any way to install the MBAM integration feature on a named SQL instance? As far as I know, it automatically picks the instance, but that is not happening for me.
    Gaurav Ranjan

  • Reporting Service Error Code 7403, on SCCM 2012 R2 with SQL 2012 SP1 CU6

    Dear All,
    I am facing an issue installing the Reporting Services point on SCCM 2012 R2 with SQL 2012 SP1 CU6 and am getting error message ID 7403. Please let me know whether SQL 2012 SP1 CU6 is supported with SCCM 2012 R2, and if you have any solution for it.
    Error Message: The report server service is not running on Reporting Service Point server "SCCM2012"; start the service to enable reporting.

    I am getting the error messages below when trying to browse both sites.
    1) For Reports:
    The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. (rsReportServerDisabled)
    Keyset does not exist (Exception from HRESULT: 0x80090016)
    2) For ReportServer:
    Reporting Services Error
    The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. (rsReportServerDisabled)
    Keyset does not exist (Exception from HRESULT: 0x80090016)
    SQL Server Reporting Services
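    A hedged sketch of the two options the error text itself offers, using the rskeymgmt.exe utility that ships with Reporting Services. The install path assumes the SQL Server 2012 default; the key file and password are hypothetical. Restoring a backed-up key is strongly preferable, since deleting encrypted content wipes stored credentials:
    # Preferred: restore a previously backed-up symmetric key
    & "$env:ProgramFiles\Microsoft SQL Server\110\Tools\Binn\rskeymgmt.exe" -a -f 'C:\Backup\rskey.snk' -p 'KeyPassword'
    # Last resort: delete all encrypted content so the report server can create a new key
    & "$env:ProgramFiles\Microsoft SQL Server\110\Tools\Binn\rskeymgmt.exe" -d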

  • SCCM Report- Count of SQL Servers installed in my enviroment

    Hello everyone,
    I need to give a count of the SQL Servers installed in my organization for auditing.
    I want the count of SQL Servers edition-wise, for example:
    SOFTWARE                                         COUNT
    Microsoft SQL Server 2008 Express    -    1000
    Microsoft SQL Server 2010 Standard   -    525
    There are SQL Servers from 2005 to 2012, including all editions, in my organization.
    Please, can anyone help me?
    We have SCCM 2007 in my environment.

    I am getting this error when I pull the report:
    Invalid object name 'v_gs_sql_property0'.
    Error Number: -2147217865
    Source: Microsoft OLE DB Provider for SQL Server
    Native Error: 208
    REPORT SQL QUERY
    select sys1.Netbios_name0,
    max(Case sql.PropertyName0 when 'SKUName' then
       sql.PropertySTRValue0 end) as [SQL08 Type]
    ,max(Case sql.PropertyName0 when 'SPLEVEL' then
       sql.PropertyNUMValue0 end) as [SQL08 Service Pack]  
    ,max(Case sql.PropertyName0 when 'VERSION' then
       sql.PropertySTRValue0 end) as [SQL08 Version]
    ,max(Case sql.PropertyName0 when 'FILEVERSION' then
       sql.PropertySTRValue0 end) as [SQL08 CU Version]
    ,max(Case sql.PropertyName0 when 'INSTANCEID' then
       sql.PropertySTRValue0 end) as [SQL08 InstanceID]
    ,max(Case sql2.PropertyName0 when 'SKUName' then
       sql2.PropertySTRValue0 end) as [SQL05 Type]
    ,max(Case sql2.PropertyName0 when 'SPLEVEL' then
       sql2.PropertyNUMValue0 end) as [SQL05 Service Pack]
    ,max(Case sql2.PropertyName0 when 'VERSION' then
       sql2.PropertySTRValue0 end) as [SQL05 Version]
    ,max(Case sql2.PropertyName0 when 'FILEVERSION' then
       sql2.PropertySTRValue0 end) as [SQL05 CU Version]
    ,max(Case sql2.PropertyName0 when 'INSTANCEID' then
       sql2.PropertySTRValue0 end) as [SQL05 InstanceID]
    from v_r_system sys1
    left join v_gs_sql_property0 sql on sys1.resourceid=sql.ResourceID
    left join v_gs_sql_property_legacy0 sql2 on sys1.ResourceID=sql2.ResourceID
    where sql.PropertyName0 in ('instanceid','SKUNAME','SPLevel','version','fileversion')
    or
    sql2.PropertyName0 in ('instanceid','SKUNAME','SPLevel','version','fileversion')
    group by sys1.Netbios_name0
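    Since this went unanswered here: v_GS_SQL_PROPERTY0 is not a default SCCM 2007 view; it only gets created once hardware inventory has been extended (via sms_def.mof) to collect the SQL property classes, so the "Invalid object name" error typically means that extension is missing on the site. Once the inventory data exists, a hedged sketch of the edition-wise count, run directly against the site database from PowerShell (the server name, database name, and availability of Invoke-Sqlcmd are all assumptions):
    $q = "SELECT sql.PropertySTRValue0 AS [Software], COUNT(DISTINCT sys1.ResourceID) AS [Count] " +
         "FROM v_R_System sys1 JOIN v_GS_SQL_PROPERTY0 sql ON sys1.ResourceID = sql.ResourceID " +
         "WHERE sql.PropertyName0 = 'SKUNAME' GROUP BY sql.PropertySTRValue0 ORDER BY [Count] DESC"
    # Hypothetical site server and site database names
    Invoke-Sqlcmd -ServerInstance 'SITESERVER' -Database 'SMS_ABC' -Query $q | Format-Table -AutoSize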

  • Security Update for SQL Server 2008 R2 Service Pack 2 (KB2977320)

    So I have been trying to install this update on one of our servers since it is showing up in our MBSA scans, but it has been failing on one of our machines with error code 0x80070643.
    I downloaded the update and tried to install it manually. The installer returned: "There are no SQL Server instances or shared features that can be updated on this computer." If that is true, then why is SCCM still pushing the update to the machine, and why
    is MBSA showing that it needs the update? Any advice would be helpful. Thanks!

    Hi David,
    Based on my understanding, error code 0x80070643 was thrown when you installed KB2977320 as flagged by MBSA, and the installation also failed when you downloaded and installed the update manually.
    Error code 0x80070643 can be caused if the MSI software update registration has become corrupted, or if the .NET Framework installation on the computer has become corrupted. To fix this, we can try these methods:
    Fix MSI software update registration corruption issues
    Repair the .NET Framework
    Uninstall and reinstall the .NET Framework
    For more information, please refer to this link:
    http://support.microsoft.com/kb/976982.
    Besides, the security update is released for SQL Server 2008 R2 Service Pack 2, so please check the version of SQL Server you are using. For how to check the SQL Server version, please refer to this article: How to tell what SQL Server version you are running
    (http://www.mssqltips.com/sqlservertip/1140/how-to-tell-what-sql-server-version-you-are-running/). Then download the Security Update for SQL Server 2008 R2 Service Pack 2 from this link (http://www.microsoft.com/en-us/download/details.aspx?id=43957) and retry installing it manually.
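    To check what is actually installed before retrying, here is a hedged sketch (using the registry layout written by SQL Server setup) that lists the database engine instances registered on the machine with their version and edition; 2008 R2 SP2 builds start at 10.50.4000. If the key is absent, the installer's "no instances" message is accurate and the detection on the SCCM/MBSA side is what needs attention:
    $key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'
    if (Test-Path $key) {
        # Properties named PS* are PowerShell metadata, not SQL instances
        (Get-ItemProperty $key).PSObject.Properties |
            Where-Object { $_.Name -notmatch '^PS' } |
            ForEach-Object {
                $setup = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.Value)\Setup"
                '{0} : version {1}, {2}' -f $_.Name, $setup.Version, $setup.Edition
            }
    } else {
        'No SQL Server database engine instances are registered on this computer.'
    }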
    Best regards,
    Qiuyun Yu

  • BIDS or SQL data Tools for SQL 2012 SP1 database

    I want to start customising reports from my Win 7 workstation. The SCCM server runs SQL 2012 SP1.
    Do I need to install SQL Server Data Tools on the workstation, or can I still use BIDS (Visual Studio) from the SQL 2008 media?
    Also, I have saved some custom report .RDL files from previous assignments. Can I convert these to point to this site's database for use in SQL 2012?
    Ian Burnell, London (UK)

    Thanks Garth - so if I install 2008 BIDS on my W7 workstation then I will be able to connect to the CM2012/SQL 2012 database?
    Yes you will be able to do that. I do it all the time for SQL 2005 or 2008 to SQL 2012 and SQL 2014. :-)
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers:      Fujitsu RX300
    Server operating system:      Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version:  Microsoft SQL Server 2014 Enterprise Edition
    Processors per server:        2 six-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network:        8Gb FC with multipathing
    Storage controller:           AFF8080 EX
    Data ONTAP version:           Clustered Data ONTAP® 8.3.1
    Drive number and type:        48 SSDs
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI:                      65%
    Net present value (NPV):  $950,000
    Payback period:           six months
    Total cost reduction:     more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration:  $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI):  $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.

  • GeoRaptor 3.0 for SQL Developer 3.0 and 2.1 has now been released

    Folks,
    I am pleased to announce that, after 5 months of development and testing, a new release of GeoRaptor for SQL Developer 2.1 and 3.0 is now available.
    GeoRaptor for SQL Developer 3 is available via the SQL Developer Update centre. GeoRaptor 3 for SQL Developer 2.1 is being made available
    via a download from the GeoRaptor website.
    No release notes have been compiled as the principal developer (oops, that's me!) is currently busy doing real work for a change (another 3 weeks), earning a living
    and keeping the wolves at bay. More extensive notes (with images) will be compiled when I get back. (Unless anyone is offering! See next.)
    We are still looking for people to:
    1. Provide translations of the English dialog menus etc.
    2. Write more extensive user documentation. If you use a particular part of GeoRaptor a lot and think
    you have found out all its functionality and quirks, contact us and offer to write a few pages of
    documentation on it. (Open Office or Microsoft Word is fine.) Easiest way to do this is to simply
    make screen captures and annotate with text.
    3. Conduct beta testing.
    Here are the things that are in the new release.
    New functionality:
    Overhaul of Validation Functionality.
    1. User can specify own validation SELECT SQL as long as it returns three required columns. The SQL is thus totally editable.
    2. Validation update code now allows user to associate a PL/SQL function with an error number which is applied in the UPDATE SQL.
    3. UPDATE SQL can use WHERE clause of validation SELECT SQL (1) to update specific errors.
       NOTE: The generated UPDATE statement can be manually edited. It is NEVER run by GeoRaptor. To run any UPDATE, copy the statement
       to the clipboard and run in an appropriate SQL Worksheet session within SQL Developer.
    4. Main validation table allows:
       a. Sorting (click on column header) and
       b. Filtering.
       c. Copying to Clipboard via right mouse click sub menu of:
          - Geometry's SDO_ELEM_INFO array constructor.
          - SDO_GEOMETRY constructor
          - Error + validation string.
       d. Access to Draw/Zoom functions which were previously buttons.
       e. Added a new right mouse click menu "Show Feature's Individual Errors" that gathers up all the errors
          it can process - along with the ring / element that is host to the error (if it can) - and displays
          them in the Attribute/Geometry tabs at the bottom of the Map Window (where "Identify" places its results).
          The power of this will be evident to all those who have wanted a way of stepping through errors in a geometry.
       f. Selected rows can now be deleted (select rows: press <DELETE> key or right mouse click>Delete).
       g. Table now has only one primary key column, and has a separate error column holding the actual error code.
       h. Right mouse click menu entry added to the table menu to display a description of the error in the new column (drawn from the Oracle documentation).
       i. Optimisations added to improve performance for large error lists.
    5. Functionality now has its own validation layer that is automatically added to the correct view.
       Access to layer properties via button on validation dialog or via normal right mouse click in view/layer tree.
    Improved Rendering Options.
    1. Linestring colour can now be random or drawn from column in database (as per Fill and Point colouring)
    2. Marking of SDO_GEOMETRY objects overhauled.
       - Ability to mark or LABEL vertices/points of all SDO_GEOMETRY types with coordinate identifier and
         option {X,Y} location. Access is via Labelling tab in layer>properties. Thus, coordinate 25 of a linestring
         could be shown as: <25> or {x,y} or <25> {x,y}
       - There is a nice "stacked" option where the coordinate {x,y} can be written one line below the id.
       - For linestrings and polygons the <id> {x,y} label can be oriented to the angle between the vectors or
         edges that come in, and go out of, a vertex. Access is via "Orient" tick box in Labelling tab.
       - Uses Tools>Preferences>GeoRaptor>Visualisation>SDO_ORDINATE_ARRAY bracket around x,y string.
    3. Start point of linestring/polygon and all other vertices can be marked with user selectable point marker
       rather than previously fixed markers.
    4. Can now set a NULL point marker by selecting "None" for point marker style pulldown menu.
    5. Positioning of the arrow for linestring/polygons has extra options:
       * NONE
       * START    - All segments of a line have the arrow positioned at the start
       * MIDDLE   - All segments of a line have the arrow positioning in the middle.
       * END      - All segments of a line have the arrow positioning in the END.
       * END_ONLY - Only the last segment has an arrow and at its end.
    ScaleBar.
    1. A new graphic ScaleBar option has been added for the map of each view.
       For geographic/geodetic SRIDs distances are currently shown in meters;
       For all SRIDs an attempt is made to "adapt" the scaleBar units depending
       on the zoom level. So, if you zoom right in you might get the distance shown
       as mm, and as you zoom out, cm/m/km as appropriate.
    2. As the scaleBar is drawn, a 1:<DENOMINATOR> style MapScale value is written
       to the map's right-most status bar element.
    3. ScaleBar and MapScale can be turned off/on in View>Properties right mouse
       click menu.
    Export Capabilities.
    1. The ability to export a selection from a result set table (i.e. the result of
       executing an ad-hoc SQL SELECT statement) to GML, KML, or SHP/TAB (TAB adds a
       TAB file "wrapper" over SHP) has been added.
    2. Ability to export table/view/materialised view to GML, KML, SHP/TAB also
       added. If no attributes are selected when exporting to a SHP/TAB file, GeoRaptor
       automatically adds a field that holds a unique row number.
    3. When exporting to KML:
       * one can optionally export attributes.
       * Web sensitive characters < > & etc for KML export are replaced with &gt; &lt; &amp; etc.
       * If a column in the SELECTION or table/view/Mview equals "name" then its value is
         written to the KML tag <name> and not to the list of associated attributes.
         - Similarly for "description" -> <description> AND "styleUrl" -> <styleUrl>
    4. When exporting to GML one can optionally export attributes in FME or OGR "flavour".
    5. Exporting Measured SDO_GEOMETRY objects to SHP not supported until missing functionality
       in GeoTools is corrected (working with GeoTools community to fix).
    6. Writing PRJ and MapInfo CoordSys is done by pasting a string into appropriate export dialog box.
       Last value pasted is remembered between sessions which is useful for users who work with a single SRID.
    7. Export directory is remembered between sessions in case a user uses a standard export directory.
    8. Result sets containing MDSYS.SDO_POINT and/or MDSYS.VERTEX_TYPE can also be written to GML/KML/SHP/TAB.
       Example:
       SELECT a.geom.sdo_point as point
         FROM (SELECT sdo_geometry(2002,null,sdo_point_type(1,2,null),sdo_elem_info_array(1,2,1),sdo_ordinate_array(1,1,2,2)) as geom
                 FROM DUAL) a;
       SELECT mdsys.vertex_type(a.x,a.y,a.z,a.w,a.v5,a.v6,a.v7,a.v8,a.v9,a.v10,a.v11,a.id) as vertex
         FROM TABLE(mdsys.sdo_util.getVertices(mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(1,1,2,2)))) a;
    9. A dialog appears at the end of each export which details (e.g. totals) what was exported when the exported recordset/table contains more
       than one shape type. For example, if you export only points (e.g. 2001/3001) from a table that also contains multipoints (e.g. 2005/3005), then
       the number of points exported and the number of multipoints skipped will be displayed.
    10. SHP/TAB export is "transactional". If you set the commit interval to 100 then only 100 records are held in memory before writing.
        However, this does not currently apply to the associated DBASE records.
    11. SHP/TAB export supports dBase III, dBase III + Memo, dBase IV and dBase IV + Memo.
        Note: Memo allows text columns > 255 characters to be exported. Non-Memo formats do not and any varchar2 columns will be truncated
        to 255 chars. Some GIS packages support MEMO eg Manifold GIS, some do not.
    12. Note: GeoRaptor does not ensure that the SRID of SDO_GEOMETRY data exported to KML is in the correct Google projection.
        Please read the Oracle documentation on how to project your data if this is necessary. An example is:
        SELECT OBJECTID,
               CODIGO as name,
               NOME as description,
               MI_STYLE,
               SDO_CS.TRANSFORM(shape,'USE_SPHERICAL',4055) as shape
          FROM MUB.REGIONAL;
    13. NOTE: The SHP exporter uses the Java Topology Suite (JTS) to convert from SDO_GEOMETRY to the ESRI Shape format. JTS does not handle
        circular curves in SDO_GEOMETRY objects; you must "stroke" them using sdo_util.arc_densify(). See the Oracle documentation on how
        to use this.
    Miscellaneous.
    1. Selection View - Measurement has been modified so that the final result only shows those geometry
       types that were actually measured.
    2. In Layer Properties the Miscellaneous tab has been removed because the only elements in it were the
       Geometry Output options which have now been replaced by the new GML/KML/etc export capabilities.
    3. Shapefile import's user-entered tablename is now checked for Oracle naming convention compliance.
    4. Identify based on SDO_NN has been removed from GeoRaptor given the myriad problems that it seems to create across versions
       and partitioned/non-partitioned tables. Instead SDO_WITHIN_DISTANCE is now used with the actual search distance (see circle
       in map display): everything within that distance is returned.
    5. Displaying/Not displaying embedded sdo_point in line/polygon (Jamie Keene), is now controlled by
       a preference.
    6. New View Menu options to switch all layers on/off
    7. Tools/Preferences/GeoRaptor layout has been improved.
    8. If Identify is called on a geometry a new right mouse click menu entry has been added called "Mark" which
       has two sub-menus called ID and ID(X,Y) that will add the labeling to the selected geometry independently of
       what the layer is set to being.
    9. Two new methods for rendering an SDO_GEOMETRY object in a table or SQL recordset have been added: a) Show geometry as ICON
       and b) Show geometry as THUMBNAIL. When the latter is chosen, the actual geometry is shown in an image _inside_ the row/column cell it occupies.
       In addition, the existing textual methods for visualisation: WKT, KML, GML etc have been collected together with ICON and THUMBNAIL in a new
       right mouse click menu.
    10. Tables/Views/MViews without spatial indexes can now be added to a Spatial View. To stop large tables from killing rendering, a new preference
        has been added "Table Count Limit" (default 1,000) which controls how many geometry records can be displayed. A table without a spatial
        index will have its layer name rendered in italics and will write a warning message in red to the status bar on each redraw. Adding an index
        while the layer exists will be recognised by GeoRaptor during drawing, and the layer will be switched across to normal rendering.
    Some Bug Fixes.
    * Error in manage metadata related to getting metadata across all schemas
    * Bug with no display of rowid in Identify results fixed;
    * Some fixes relating to where clause application in geometry validation.
    * Fixes bug with scrollbars on view/layer tree not working.
    * Problem with the spatial networks fixed. Actions for spatial networks can now only be done in the
      schema of the current user, as it could happen that a user opens the tree for another schema that
      has the same network as in the user's schema. Dropping a network drops only the network of the currently connected user.
    * Recordset "find sdo_geometry cell" code has been modified so that it now appears only if a suitable geometry object is
      in a recordset.  Please note that there is a bug in SQL Developer (2.1 and 3.0) that causes SQL Developer to not
      register a change in selection from a single cell to a whole row when one left clicks at the left-most "row number"
      column that is not part of the SELECT statements user columns, as a short cut to selecting a whole row.  It appears
      that this is a SQL Developer bug so nothing can be done about it until it is fixed. To select a whole row, select all
      cells in the row.
    * Copy to clipboard of SDO_GEOMETRY with M and Z values no longer leaves an extraneous "," at the end.
    * Column based colouring of markers fixed
    * Bunch of performance improvements.
    * Plus (happily) others that I can't remember!
    If you find any bugs, register a bug report at our website.
    If you want to help with testing, contact us at our website.
    My thanks for help in this release to:
    1. John O'Toole
    2. Holger Labe
    3. Sandro Costa
    4. Marco Giana
    5. Luc van Linden
    6. Pieter Minnaar
    7. Warwick Wilson
    8. Jody Garnett (GeoTools bug issues)
    Finally, at the Washington User Conference I explained the willingness of the GeoRaptor team to work
    towards some sort of integration of our "product" with the new Spatial extension that has just been released in SQL
    Developer 3.0. Nothing much has come of that initial contact, and I hope more will come of it.
    In the end, it is you, the real users who should and will decide the way forward. If you have ideas, wishes etc,
    please contact the GeoRaptor team via our SourceForge website, or start a "wishlist" thread on this forum
    expressing ideas for future functionality and integration opportunities.
    regards
    Simon
    Edited by: sgreener on Jun 12, 2011 2:15 PM

    Thank you for this.
    I have been messing around with this for the last few days, and I really love the feature to pinpoint the validation errors on the map.
    It has always been so annoying to try to pinpoint these errors using some other GIS software while doing your SQL.
    I have stumbled on a few bugs:
    1. In "Validate geometry column" dialog checking option "Use DimInfo" actually still uses value entered in tolerance text box.
    I found this because in my language settings , is the decimal operators
    2. In "Validate geometry column" dialog textboxs showing sql, doesn't always show everything from long lines of text (clipping text from right)
    3. In "Validate geometry column" dialog the "Create Update SQL" has few bugs:
    - if you have selected multiple rows from results and check the "Use Selected Geometries" the generated IN-clause in SQL with have same rowid (rowid for first selected result) for all entries
    Also the other generated IN clause in WHERE-clause is missing separator if you select more than one corrective function
    4. "Validate geometry column" dialog stays annoyingly top most when using "Create Update SQL" dialog

  • Issue With Report Builder After Installing SP3 for SQL 2008 R2

    Hello.  We are experiencing an issue with Report Builder 3.0 since installing SP3 for SQL 2008 R2 over the weekend.  You can no longer launch Report Builder from the Reporting Services URL (http://dicomweb/ReportServer/ReportBuilder/ReportBuilder_3_0_0_0.application). The ClickOnce deployment log shows:
          Server  : Microsoft-HTTPAPI/2.0
          X-AspNet-Version: 2.0.50727
     Application url   : http://dicomweb/ReportServer/ReportBuilder/RptBuilder_3/MSReportBuilder.exe.manifest
          Server  : Microsoft-HTTPAPI/2.0
          X-AspNet-Version: 2.0.50727
    IDENTITIES
     Deployment Identity  : ReportBuilder_3_0_0_0.application, Version=10.50.6000.34, Culture=neutral, PublicKeyToken=c3bce3770c238a49, processorArchitecture=x86
     Application Identity  : MSReportBuilder.exe, Version=10.50.6000.34, Culture=neutral, PublicKeyToken=c3bce3770c238a49, processorArchitecture=x86, type=win32
    APPLICATION SUMMARY
     * Online only application.
     * Trust url parameter is set.
    ERROR SUMMARY
     Below is a summary of the errors, details of these errors are listed later in the log.
     * Activation of http://dicomweb/ReportServer/ReportBuilder/ReportBuilder_3_0_0_0.application resulted in exception. Following failure messages were detected:
      + File, Microsoft.ReportingServices.ComponentLibrary.Controls.dll, has a different computed hash than specified in manifest.
    COMPONENT STORE TRANSACTION FAILURE SUMMARY
     No transaction error was detected.
    WARNINGS
     There were no warnings during this operation.
    OPERATION PROGRESS STATUS
     * [10/15/2014 9:35:56 AM] : Activation of http://dicomweb/ReportServer/ReportBuilder/ReportBuilder_3_0_0_0.application has started.
     * [10/15/2014 9:36:29 AM] : Processing of deployment manifest has successfully completed.
     * [10/15/2014 9:36:29 AM] : Installation of the application has started.
     * [10/15/2014 9:36:31 AM] : Processing of application manifest has successfully completed.
     * [10/15/2014 9:36:35 AM] : Found compatible runtime version 2.0.50727.
     * [10/15/2014 9:36:35 AM] : Detecting dependent assembly Sentinel.v3.5Client, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=msil using Sentinel.v3.5Client, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a,
    processorArchitecture=msil.
     * [10/15/2014 9:36:35 AM] : Detecting dependent assembly System.Data.Entity, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=msil using System.Data.Entity, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089,
    processorArchitecture=msil.
     * [10/15/2014 9:36:35 AM] : Request of trust and detection of platform is complete.
    ERROR DETAILS
     Following errors were detected during this operation.
     * [10/15/2014 9:36:43 AM] System.Deployment.Application.InvalidDeploymentException (HashValidation)
      - File, Microsoft.ReportingServices.ComponentLibrary.Controls.dll, has a different computed hash than specified in manifest.
      - Source: System.Deployment
      - Stack trace:
       at System.Deployment.Application.ComponentVerifier.VerifyFileHash(String filePath, Hash hash)
       at System.Deployment.Application.ComponentVerifier.VerifyFileHash(String filePath, HashCollection hashCollection)
       at System.Deployment.Application.ComponentVerifier.VerifyComponents()
       at System.Deployment.Application.DownloadManager.DownloadDependencies(SubscriptionState subState, AssemblyManifest deployManifest, AssemblyManifest appManifest, Uri sourceUriBase, String targetDirectory, String group, IDownloadNotification
    notification, DownloadOptions options)
       at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp)
       at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
       at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
       at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)
    COMPONENT STORE TRANSACTION DETAILS
     No transaction information is available.

    Hi DMcGarveyt,
    This is a known issue in SSRS 2008 R2 SP3. Microsoft has published an article addressing it and is not planning to release a fix for this defect, but there are workarounds. Please refer to the link below:
    Report Builder of SQL Server 2008 R2 Service Pack 3 does not launch.
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Can we use xml Publisher reporting for sql* Plus in EBS

    Hello All,
    The current report is designed as a SQL*Plus executable report and its output is in txt format. The requirement now is to have the output in Excel format.
    So, is it possible to use XML Publisher reporting and produce the Excel output from a Word template designed in MS Word, as we do for RDF reports? (I have converted a few RDF reports to XML Publisher reports, in EBS and standalone as well.)
    Will the same procedure suit SQL*Plus reports too, or is there any workaround to achieve this?
    Thanks and Regards
    Balaji.

    Hi
    Thanks for the reply.
    I tried the following:
    1. Changed the output to XML in the concurrent program.
    2. Ran the same report, but I am getting the following error in the output file:
    The XML page cannot be displayed
    Cannot view XML input using style sheet. Please correct the error and then click the Refresh button, or try again later.
    Invalid at the top level of the document. Error processing resource
    Other reports which use Oracle Reports (RDF) as the source generate the XML as expected.
    So my question is whether we can use a SQL*Plus report executable and generate XML in the concurrent program.
    If anyone has used SQL*Plus reports for XML Publisher reporting, please let me know, so that if it is possible I can check whether my SQL needs some validation or tuning.
    Thanks in advance
    Balaji.

  • How to access the datasource window in SSRS for sql server 2008 R2 for writing my query without having to go through the wizard?

    I have used SSRS a lot, years ago, with SQL Server 2000 and SQL Server 2005. I have written external assemblies, etc. But now I have to do this with SQL Server 2008 (R2 -- which I realize I am way behind the times on already, but ...). In SQL Server 2000 and
    2005 there was a tab for the data source to the left of the design tab, which in turn was to the left of the preview tab. How do I get to the data source window in SQL Server 2008 (R2)?
    I see the Datasource Explorer, but where can I get to the data source window to edit my queries and so forth for SQL Server 2008 (R2)?
    Thanks
    Rich P

    I think I found the answer to my question --- just right-click on the Data Sources or Datasets node to edit connections and dataset queries.  I'm guessing it gets even fancier with SQL Server 2012-2014.  Man, that's the one thing
    about coding platforms -- you let it go for a few years and come back, and everything has changed (well, a lot of things).  Now I need to figure out how to add an external assembly to SSRS 2008 (R2).
    Rich P

  • WARNING:Could not lower the asynch I/O limit to 224 for SQL direct I/O. It is set to -1

    Hi,
    We recently upgraded a single-instance database to 10.2.0.5 patch set 4 on Windows Server 2003 64-bit.
    After upgrading we are seeing a large number of warnings in the trace files (bdump):
    WARNING:Could not lower the asynch I/O limit to 224 for SQL direct I/O. It is set to -1
    I know this is bug 9772888 and Oracle suggests upgrading to 11.2.x, but we can't upgrade any time soon.
    According to Oracle it is a harmless warning and can be ignored, but these traces are consuming more than 500 MB of space per day, and we are unable to delete the trace files because Oracle continuously keeps them open for writing.
    It is a production database, so kindly suggest a solution other than an upgrade.
    The trace files are sid_sxxx_XXX.trc.
    If we set max_dump_file_size to XXXM or G, will it resolve our problem?
    Or is there any way to flush the warnings to a new file and delete the old one?
    Please suggest.
    Thanks
    shah

    Duplicate thread.
    I suggest you don't multipost.
    I also suggest you upgrade from Windows 2003 (10 years old and probably desupported)
    and 10gR2 (desupported for a few years)
    Sybrand Bakker
    Senior Oracle DBA
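    Setting the upgrade advice aside, a hedged housekeeping sketch for the disk-space symptom: remove bdump trace files older than a few days and quietly skip any file Oracle still holds open. The path is hypothetical (take it from background_dump_dest); this does nothing about bug 9772888 itself, and max_dump_file_size would only cap the size of each individual trace file, not stop new ones:
    $bdump = 'D:\oracle\admin\ORCL\bdump'   # hypothetical; see background_dump_dest
    Get-ChildItem -Path $bdump -Filter '*.trc' |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-3) } |
        ForEach-Object {
            # Files Oracle still has open for writing will fail to delete; skip them
            Remove-Item $_.FullName -ErrorAction SilentlyContinue
        }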

  • After installed SP1 for SQL Server 2012, can no longer export to csv

    After installing SP1 today via Windows Update, I am no longer able to export data to CSV using the SQL Server Import and Export Wizard. I get the following error messages:
    "Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider."
    "Column "col1": Source data type "200" was not found in the data type mapping file."...
    (The above line repeats for each column.)
    The workaround I have is to manually map each column in the "Edit Mappings..." option on the "Configure Flat File Destination" page of the wizard. It is an extreme inconvenience to have to edit the mappings and change
    each column from "byte stream [DT_BYTES]" type to "string [DT_STR]" type each time I want to export to CSV. I did not have to do this before installing SP1; it worked perfectly for months, with hundreds of exports prior to this update and
    no need to modify the mapping.

    I am running Windows 7 64-bit, SQL Server 2012 Express edition. Again, just yesterday from Windows Update, I installed SQL Server 2012 Service Pack 1 (KB2674319), followed by Update Rollup for SQL Server 2012 Service Pack 1 (KB2793634). This situation was
    not occurring before these updates were installed, and I noticed it immediately after they were installed (and of course I restarted my computer after the updates).
    In SSMS I just now created a test DB and table to provide a step-by-step with screenshots.
    Here is the code I ran to create the test DB and table:
    CREATE DATABASE testDB;
    GO
    USE testDB;
    GO
    CREATE TABLE testTable
    (
        id int,
        lname varchar(50),
        fname varchar(50),
        address varchar(50),
        city varchar(50),
        state char(2),
        dob date
    );
    GO
    INSERT INTO testTable VALUES
    (1,'Smith','Bob','123 Main St.','Los Angeles','CA','20080212'),
    (2,'Doe','John','555 Rainbow Ln.','Chicago','IL','19580530'),
    (3,'Jones','Jane','999 Somewhere Pl.','Washington','DC','19651201'),
    (4,'Jackson','George','111 Hello Cir.','Dallas','TX','20010718');
    GO
    SELECT * FROM testTable;
    Results look good:
    id  lname    fname   address            city         state  dob
    1   Smith    Bob     123 Main St.       Los Angeles  CA     2008-02-12
    2   Doe      John    555 Rainbow Ln.    Chicago      IL     1958-05-30
    3   Jones    Jane    999 Somewhere Pl.  Washington   DC     1965-12-01
    4   Jackson  George  111 Hello Cir.     Dallas       TX     2001-07-18
    In Object Explorer, I right-click on the [testDB] database, choose "Tasks", then "Export Data..." and the SQL Server Import and Export Wizard appears. I click Next to leave all settings as-is on the "Choose a Data Source" page, then on the "Choose a Destination"
    page, under the "Destination" drop-down I choose "Flat File Destination" then browse to the desktop and name the file "table_export.csv" then click Next. On the "Specify Table Copy or Query" page I choose "Write a query to specify the data to transfer" then
    click Next. I type the following SQL statement:
    SELECT * FROM testTable;
    When clicking the "Parse" button I get the message "This SQL statement is valid."
    On to the next page, "Configure Flat File Destination" I try leaving the defaults then click Next. This is where I am getting the error message (see screenshot below):
    Then going to the "Edit Mappings..." option on the "Configure Flat File Destination" page, I see that all columns which were defined as varchar in the table are showing as type "byte stream [DT_BYTES]", size "0"; the state column, which is defined as char(2),
    shows correctly, however, with type "string [DT_STR]", size "2" (see screenshot below):
    So what I have to do is change the type for the lname, fname, address and city columns to "string [DT_STR]", and then I am able to proceed with the export successfully. Again, this just started happening after installing these updates. As you can imagine, this
    is very frustrating, as I do a lot of exports from many tables, with a lot more columns than this test table.
    Thanks for your help.
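    Until a fix arrives, a hedged workaround sketch that sidesteps the wizard's type mapping entirely: run the query from PowerShell and write the CSV there. It assumes the sqlps module that ships with SQL Server 2012 (for Invoke-Sqlcmd) and the default Express instance name; the database and output path are taken from the repro above:
    Import-Module sqlps -DisableNameChecking
    # The explicit column list keeps Invoke-Sqlcmd's DataRow metadata out of the CSV
    Invoke-Sqlcmd -ServerInstance '.\SQLEXPRESS' -Database 'testDB' -Query 'SELECT * FROM testTable;' |
        Select-Object id, lname, fname, address, city, state, dob |
        Export-Csv -Path "$env:USERPROFILE\Desktop\table_export.csv" -NoTypeInformation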
