JDriver for SQL Server 6.5 Broken Pipe

hi:
I am using jDriver for SQL Server 6.5 and use prepared statements to execute my stored procedures from a stateless session EJB. At times I get the following error message; can anyone help me with what I should do?
2002.06.06 at 03:49:43 PM PDT : weblogic.jdbcbase.mssqlserver4.TdsException: I/O exception while talking to the server, java.io.IOException: Broken pipe
at java.lang.Throwable.fillInStackTrace(Native Method)
at java.lang.Throwable.fillInStackTrace(Compiled Code)
at java.lang.Throwable.<init>(Compiled Code)
at java.lang.Exception.<init>(Compiled Code)
at java.sql.SQLException.<init>(SQLException.java:82)
at weblogic.jdbcbase.mssqlserver4.TdsException.<init>(TdsException.java:40)
at weblogic.jdbcbase.mssqlserver4.TdsStatement.cancel(TdsStatement.java:446)
at weblogic.jdbcbase.mssqlserver4.TdsStatement.getMoreResults(Compiled Code)
at weblogic.jdbcbase.mssqlserver4.TdsStatement.execute(Compiled Code)
at weblogic.jdbcbase.mssqlserver4.TdsStatement.execute(Compiled Code)
at weblogic.jdbcbase.jts.Statement.execute(Compiled Code)
at weblogic.jdbc20.rmi.internal.PreparedStatementImpl.execute(Compiled Code)
at weblogic.jdbc20.rmi.SerialPreparedStatement.execute(Compiled Code)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrarDAO.insertException(Compiled Code)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrar.registerLoan(Unknown Source)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrar.register(Compiled Code)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrarEOImpl.register(LoanRegistrarEOImpl.java:56)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrarEOImpl_ServiceStub.register(Compiled Code)
at com.pmi.zeus.mom.LoanInputQueueListener.messageInputProcessor(LoanInputQueueListener.java:86)
at com.pmi.zeus.mom.LoanInputQueueListener.onMessage(LoanInputQueueListener.java:59)
at weblogic.jms.client.JMSMessageConsumer.run(Compiled Code)
at weblogic.jms.client.JMSSession.run(JMSSession.java:342)
at weblogic.jms.server.JMSServerSession.execute(JMSServerSession.java:44)
at weblogic.kernel.ExecuteThread.run(Compiled Code)
2002.06.06 at 03:49:43 PM PDT : java.sql.SQLException: I/O exception while talking to the server, java.io.IOException: Broken pipe
at java.lang.Throwable.fillInStackTrace(Native Method)
at java.lang.Throwable.fillInStackTrace(Compiled Code)
at java.lang.Throwable.<init>(Compiled Code)
at java.lang.Exception.<init>(Compiled Code)
at java.sql.SQLException.<init>(SQLException.java:82)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrar.register(Compiled Code)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrarEOImpl.register(LoanRegistrarEOImpl.java:56)
at com.pmi.zeus.ejb.LoanRegistration.LoanRegistrarEOImpl_ServiceStub.register(Compiled Code)
at com.pmi.zeus.mom.LoanInputQueueListener.messageInputProcessor(LoanInputQueueListener.java:86)
at com.pmi.zeus.mom.LoanInputQueueListener.onMessage(LoanInputQueueListener.java:59)
at weblogic.jms.client.JMSMessageConsumer.run(Compiled Code)
at weblogic.jms.client.JMSSession.run(JMSSession.java:342)
at weblogic.jms.server.JMSServerSession.execute(JMSServerSession.java:44)
at weblogic.kernel.ExecuteThread.run(Compiled Code)
thanks,
balaji.

Balaji Venkataraman wrote:
> hi:
> I am using jDriver for SQL Server 6.5 and use prepared statements to execute my stored procedures from a stateless session EJB. At times I get the following error message; can anyone help me with what I should do?
> [stack trace quoted above omitted]

That means the driver-DBMS connection has broken somehow. It can be either a bug in our driver or a bug in the DBMS. Is there any indication in the DBMS logs of client thread failures that result in a session being killed?
If you can isolate the SQL and JDBC that cause this, we may be able to debug it. Is the DBMS a 6.5 DBMS?
Joe
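To isolate the SQL and JDBC outside the EJB container, a standalone test along the following lines exercises the same prepared-statement path. This is only a sketch: the driver class name and URL form are as recalled from the jDriver documentation, and the database name, host, credentials, and procedure name are placeholders, not taken from the post.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Standalone repro sketch, run outside WebLogic: exercise the same
    // prepared-statement path against the same stored procedure.
    // URL, credentials, and procedure name are placeholders.
    public class BrokenPipeRepro {
        public static void main(String[] args) throws Exception {
            Class.forName("weblogic.jdbc.mssqlserver4.Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:weblogic:mssqlserver4:MYDB@dbhost:1433", "user", "password");
            try {
                PreparedStatement ps = con.prepareStatement("{call insert_exception(?, ?)}");
                ps.setInt(1, 42);
                ps.setString(2, "test row");
                boolean hasResultSet = ps.execute();  // the trace fails in execute()/getMoreResults()
                System.out.println("execute() returned " + hasResultSet);
                ps.close();
            } catch (SQLException e) {
                // A broken pipe here, with no app server involved, points at the driver/DBMS link.
                e.printStackTrace();
            } finally {
                con.close();
            }
        }
    }

If the failure reproduces from this kind of harness, the problematic SQL and JDBC calls are already isolated for the driver team.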

Similar Messages

  • HotFix4 for SQL Server 2005 doesn't work

    I am having problems getting a machine running SQL Server 2008 R2 (Express) to connect to a SQL  Server 2005 (Full Standard Edition) DB on another machine via the LAN. It times out when connecting from a C# application. SSMS on the SQL 2008 machine
    connects fine, but SSMS on the 2005 machine (connecting to a copy of the same DB on the 2008 machine) says it isn't supported and I need a hotfix.
    I downloaded the hotfix link  from Microsoft, but when I paste it into the Browser I get a DNS name failure for the Hotfix4 server. I don't want to paste the full link here because it is not made public by MS and you need to request it. However, the
    first part of the URL is: http://hotfix4.microsoft.com/SQL%20Server%202005...etc.
    I have checked out all of the obvious things in both instances (2005 AND 2008)... Both instances are configured to accept remote connections, both have TCP/IP and named pipes protocols enabled, both are running fine for local connections, both have the SQL
    Browser enabled and the Computer Browser Service running on the network.
    As the hot fix doesn't install, I am kind of out of ideas.
    Any suggestions?
    Thanks,
    Pete
    I used to write COBOL... now I can do anything.

    SQL 2005 is out of mainstream support:
    SQL Server 2005 SP4: mainstream (hotfix) support ended 04/12/2011; extended support runs until 04/12/2016.
    Technical support continues until 04/12/2016, but mainstream (hotfix) support ended as of 04/12/2011. Options for hotfix support after 04/12/2011:
    - Continue with self-help
    - Upgrade to the latest supported service pack for SQL Server 2005, SQL Server 2008 or SQL Server 2008 R2
    - Extended hotfix support agreement
    SP4 and CU3 are still available for download, if this helps...
    http://support.microsoft.com/kb/2507769
    http://www.microsoft.com/en-US/download/details.aspx?id=7218
    But you should update your SQL Server... SQL 2014 is the latest version.

  • Com.evermind.server.http.HttpIOException: Broken pipe

    I am getting this error at my server. Can anybody help me solve it? I am using Oracle 9iAS, and this is the full error that I get:
    com.evermind.server.http.HttpIOException: Broken pipe
    at java.lang.Throwable.fillInStackTrace(Native Method)
    at java.lang.Throwable.fillInStackTrace(Compiled Code)
    at java.lang.Throwable.<init>(Compiled Code)
    at java.lang.Exception.<init>(Compiled Code)
    at java.io.IOException.<init>(Compiled Code)
    at com.evermind.server.http.HttpIOException.<init>(Compiled Code)
    at com.evermind.server.http.EvermindServletOutputStream.write(Compiled Code)
    at com.evermind.server.http.EvermindServletOutputStream.write(Compiled Code)
    at com.evermind.server.http.EvermindServletOutputStream.write(Compiled Code)
    at com.evermind.server.http.HttpApplication.include(Compiled Code)
    at com.evermind.server.http.FileRequestDispatcher.forwardInternal(Compiled Code)
    at com.evermind.server.http.FileRequestDispatcher.forward(Compiled Code)
    at Premier.RegistrationServlet.createHTML(Compiled Code)
    at Premier.RegistrationServlet.doGet(Compiled Code)
    at javax.servlet.http.HttpServlet.service(Compiled Code)
    at javax.servlet.http.HttpServlet.service(Compiled Code)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(Compiled Code)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(Compiled Code)
    at com.evermind.server.http.HttpRequestHandler.processRequest(Compiled Code)
    at com.evermind.server.http.HttpRequestHandler.run(Compiled Code)
    at com.evermind.util.ThreadPoolThread.run(Compiled Code)

    I suggest you recompile your servlet RegistrationServlet with the debug option; that way you will get the line in your source code that triggered the exception instead of (Compiled Code). From then on it will be easier for you to start understanding what's going on.
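    As an aside, a broken pipe raised inside EvermindServletOutputStream.write typically just means the client closed the connection before the response finished being written. The sketch below shows one way to treat that as a non-fatal event; the class and method names mirror the trace only for illustration and are not the original code.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch only: treat a broken pipe while writing the response as a client
    // disconnect rather than an application failure. Names are illustrative.
    public class RegistrationServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                resp.setContentType("text/html");
                resp.getWriter().println(buildHtml(req));  // stands in for createHTML()
                resp.getWriter().flush();
            } catch (IOException e) {
                // Broken pipe: the browser went away mid-response; log and move on.
                log("Client aborted request: " + e.getMessage());
            }
        }

        private String buildHtml(HttpServletRequest req) {
            return "<html><body>registration complete</body></html>";
        }
    }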

  • Cumulative update 5 for SQL Server 2012 SP2 is released

    Cumulative update 5 for SQL Server 2012 SP2 is released
    https://support.microsoft.com/en-us/kb/3037255
    MDS related fix is:
    FIX: The notification email link is broken when a special character is part of the URL in SQL Server 2012 MDS
    http://support.microsoft.com/en-us/kb/3036201

    Hello,
    The following blog shows the latest builds (updates) available for each version of SQL Server.
    http://sqlserverbuilds.blogspot.com/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Java.sql.SQLException: Io exception: Broken pipe

    To whom it may concern:
    Can anyone help me with my problem? We have a Java application (Loading2Staging) that calls a series of stored procedures in Oracle. This job takes about 30 hours to complete. We have other Java applications running and calling Oracle on the same machine and database. All of the applications use Java connection pooling. I'm encountering a problem with this application: when Loading2Staging is running and other jobs start to establish connections, Loading2Staging fails with this error: java.sql.SQLException: Io exception: Broken pipe
    The procedures in Loading2Staging take a long time to complete. So my question is: is the job failing because of its inactivity, or do other applications steal the pooled connections assigned to it that it needs to run smoothly?
    If that's the case, could I create a dedicated connection or a dedicated connection pool to prevent jobs from stealing connections from each other?
    This is a very urgent issue. Your immediate response will be much appreciated.
    Thank you.
    wletmp5

    Having exactly the same problem, does anybody know where the problem is?
    Thanks
    PD
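    A common defensive pattern for this kind of failure, whatever ultimately kills the session (idle timeout, a firewall, or the pool handing out a dead connection), is to take a fresh connection per unit of work and retry once when the error looks like a dropped link. This is only a rough sketch, assuming a javax.sql.DataSource-backed pool; the procedure name is a placeholder.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Sketch: take a fresh pooled connection per step and retry once if the
    // physical link turns out to be dead. The procedure name is a placeholder.
    public class ResilientStep {
        private final DataSource pool;

        public ResilientStep(DataSource pool) {
            this.pool = pool;
        }

        public void runStep(int stepNo) throws SQLException {
            for (int attempt = 1; attempt <= 2; attempt++) {
                try (Connection con = pool.getConnection();
                     CallableStatement cs = con.prepareCall("{call load_to_staging_step(?)}")) {
                    cs.setInt(1, stepNo);
                    cs.execute();
                    return; // success
                } catch (SQLException e) {
                    String msg = e.getMessage();
                    boolean linkFailure = msg != null && msg.contains("Broken pipe");
                    if (!linkFailure || attempt == 2) {
                        throw e; // not a link failure, or retry already used
                    }
                    // Otherwise the connection was stale; loop and get a fresh one from the pool.
                }
            }
        }
    }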

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    - SQL Server 2014 servers: Fujitsu RX300
    - Server operating system: Microsoft Windows 2012 R2 Standard Edition
    - SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    - Processors per server: 2 x 6-core Xeon E5-2630 at 2.30 GHz
    - Fibre channel network: 8Gb FC with multipathing
    - Storage controller: AFF8080 EX
    - Data ONTAP version: Clustered Data ONTAP® 8.3.1
    - Drive number and type: 48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    - ROI: 65%
    - Net present value (NPV): $950,000
    - Payback period: six months
    - Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    - Savings on power, space, and administration: $40,000
    - Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
    Quick Links
    Tech OnTap Community
    Archive
    PDF

  • How to access the datasource window in SSRS for sql server 2008 R2 for writing my query without having to go through the wizard?

    I used SSRS a lot years ago with SQL Server 2000 and SQL Server 2005. I have written external assemblies, etc. But now I have to do this with SQL Server 2008 (R2 -- which I realize puts me way behind the times already, but ...). In SQL Server 2000 and 2005 there was a tab for the datasource to the left of the design tab, which was to the left of the preview tab. How do I get to the datasource window in SQL Server 2008 (R2)?
    I see the datasource explorer, but where can I get to the datasource window to edit my queries and so forth for SQL Server 2008 (R2)?
    Thanks
    Rich P

    I think I found the answer to my question -- just right-click on Data Sources or Datasets to edit connections and dataset queries. I'm guessing it gets even fancier with SQL Server 2012 - 2014. Man, that's the one thing about coding platforms -- you let them go for a few years and come back, and everything has changed (well, a lot of things). Now I need to figure out how to add an external assembly to SSRS 2008 (R2).
    Rich P

  • After installed SP1 for SQL Server 2012, can no longer export to csv

    After installing SP1 today via Windows Update, I am no longer able to export data to csv using the SQL Server Import and Export wizard. I get the following error message:
    "Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider."
    "Column "col1": Source data type "200" was not found in the data type mapping file."...
    (The above line repeats for each column)
    The work-around I have is to manually map each column in the "Edit Mappings..." option from the "Configure Flat File Destination" page of the wizard. It is an extreme inconvenience to have to edit the mappings and change each column from "byte stream [DT_BYTES]" type to "string [DT_STR]" type every time I want to export to csv. I did not have to do this before installing SP1; it worked perfectly for months with hundreds of exports prior to this update, with no need to modify the mappings.

    I am running Windows 7 64-bit, SQL Server 2012 Express edition. Again, just yesterday from Windows Update, I installed SQL Server 2012 Service Pack 1 (KB2674319), followed by Update Rollup for SQL Server 2012 Service Pack 1 (KB2793634). This situation was
    not occurring before these updates were installed, and I noticed it immediately after they were installed (and of course I restarted my computer after the updates).
    In SSMS I just now created a test DB and table to provide a step-by-step with screenshots.
    Here is the code I ran to create the test DB and table:
    CREATE DATABASE testDB;
    GO
    USE testDB;
    GO
    CREATE TABLE testTable (
        id int,
        lname varchar(50),
        fname varchar(50),
        address varchar(50),
        city varchar(50),
        state char(2),
        dob date
    );
    GO
    INSERT INTO testTable VALUES
    (1,'Smith','Bob','123 Main St.','Los Angeles','CA','20080212'),
    (2,'Doe','John','555 Rainbow Ln.','Chicago','IL','19580530'),
    (3,'Jones','Jane','999 Somewhere Pl.','Washington','DC','19651201'),
    (4,'Jackson','George','111 Hello Cir.','Dallas','TX','20010718');
    GO
    SELECT * FROM testTable;
    Results look good:
    id    lname    fname    address    city    state    dob
    1    Smith    Bob    123 Main St.    Los Angeles    CA    2008-02-12
    2    Doe    John    555 Rainbow Ln.    Chicago    IL    1958-05-30
    3    Jones    Jane    999 Somewhere Pl.    Washington    DC    1965-12-01
    4    Jackson    George    111 Hello Cir.    Dallas    TX    2001-07-18
    In Object Explorer, I right-click on the [testDB] database, choose "Tasks", then "Export Data..." and the SQL Server Import and Export Wizard appears. I click Next to leave all settings as-is on the "Choose a Data Source" page, then on the "Choose a Destination"
    page, under the "Destination" drop-down I choose "Flat File Destination" then browse to the desktop and name the file "table_export.csv" then click Next. On the "Specify Table Copy or Query" page I choose "Write a query to specify the data to transfer" then
    click Next. I type the following SQL statement:
    SELECT * FROM testTable;
    When clicking the "Parse" button I get the message "This SQL statement is valid."
    On to the next page, "Configure Flat File Destination" I try leaving the defaults then click Next. This is where I am getting the error message (see screenshot below):
    Then, going to the "Edit Mappings..." option on the "Configure Flat File Destination" page, I see that all columns which were defined as varchar in the table are showing as type "byte stream [DT_BYTES]", size "0"; the state column, which is defined as char(2), shows correctly however with type "string [DT_STR]", size "2" (see screenshot below):
    So what I have to do is change the type for the lname, fname, address and city columns to "string [DT_STR]"; then I am able to proceed with the export successfully. Again, this just started happening after installing these updates. As you can imagine, this is very frustrating, as I do a lot of exports from many tables, with a lot more columns than this test table.
    Thanks for your help.

  • Is it possible to have different authentication mode for SQL Server Database Engine and corresponding SQL Server instance?

    Hi,
    I have installed the x64 SQL Server 2008 R2 Express with default settings and run MBSA 2.3 (using default settings too). It shows three SQL Server instances: MSSQL10_50.SQLEXPRESS, SQLEXPRESS and SQLEXPRESS (32-bit). For the first, the authentication mode is Windows; for the other two it is mixed. Here https://social.msdn.microsoft.com/Forums/sqlserver/en-US/03e470dc-874d-476d-849b-c805acf5b24d/sql-mbsa-question-on-folder-permission?forum=sqlsecurity a question
    about such multiple instances was asked, and the answer is that "MSSQL10.TEST_DB
    is the instance ID for the SQL Server Database Engine of the instance, TEST_DB", so in my case it seems that MSSQL10_50.SQLEXPRESS is the instance ID for the SQL Server Database Engine of the SQLEXPRESS instance.
    I have two questions:
    1) How can it be that SQL Server DB Engine instance has different authentication mode than corresponding SQL Server Instance?
    2) Why is a 32-bit instance reported although I installed only the 64-bit version?
    Also, this https://social.technet.microsoft.com/Forums/security/en-US/6b12c019-eaf0-402c-ab40-51d31dce968f/mbsa-23-reporting-sql-32bt-instance-is-running-in-mixed-mode-when-it-is-set-to-integrated?forum=MBSA question seems to be related to this
    issue, but there is no answer :(.
    Upd: Tried on clean Windows 8 installation and Windows 7 with the same result.

      Because I DO NOT want the three people who will be having access to the production SQL Server to also have access to the primary host ProductionA.  Since I have to allow them to RDC into the box to manage the SQL Server, I figure why not create
    a separate VM for each one of them and they can RDC into those instead.
    Does this make any sense?
    Any tips are greatly appreciated.  The main reason for doing this is because the three people who will be accessing the box, I need to isolate each one of them and at the same time keep them off of the primary ProductionA.
    Thanks for your help.
    M
    Hello M,
    Since you don't want the three of them to have access to production machine A, you can install the SQL Server client tools instead. By "client" I mean SQL Server Management Studio (SSMS): install it on their local desktops and create logins for them in SQL Server. Open the port your SQL Server is listening on for their three machines so that they can connect. With SSMS installed, each of them can connect to SQL Server from their own machine.
    I would also ask you to be cautious about giving the sysadmin privilege to all three of them; first note down what tasks they will perform and then decide what rights to grant.
    Your option will also work, but you would need to create three VMs for that, which is the more tedious task.
    Hope this helps
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
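    If it helps to verify the setup, a small JDBC connection test run from each workstation can confirm that the opened port and the new logins actually work before handing SSMS over. This assumes the Microsoft JDBC driver is on the classpath; the host name, port, database, and credentials below are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Connectivity smoke test: can this workstation reach the SQL Server
    // instance over the opened port with its new login? All values are placeholders.
    public class ConnectionTest {
        public static void main(String[] args) {
            String url = "jdbc:sqlserver://ProductionA:1433;databaseName=master";
            try (Connection con = DriverManager.getConnection(url, "appUser1", "secretPassword")) {
                System.out.println("Connected: " + con.getMetaData().getDatabaseProductVersion());
            } catch (SQLException e) {
                System.out.println("Connection failed: " + e.getMessage());
            }
        }
    }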

  • Do I need to install Security Hotfix (KB2977319) after Cumulative Update 12 for SQL Server 2008 R2 SP2

    HI,
    I have installed Cumulative Update 12 for SQL Server 2008 R2 SP2 on my SharePoint instances. This was to resolve a known  issue faced with the instance. CU12 helped resolve the issue. My company is rather strict regarding security hotfixes. But I am
    not sure if this particular hotfix [Security Hotfix (KB2977319)] is required if the instance has CU12 applied.
    I tested this on a lab server; the installation ran fine and the summary log stated that the KB was applied, but the build number did not change. Hence the doubt.
    Overall summary:
      Final result:                  Passed
      Exit code (Decimal):           0
      Exit message:                  Passed
      Start time:                    2014-09-06 10:31:21
      End time:                      2014-09-06 10:55:49
      Requested action:              Patch
    Instance SPNTSQLTRN overall summary:
      Final result:                  Passed
      Exit code (Decimal):           0
      Exit message:                  Passed
      Start time:                    2014-09-06 10:48:08
      End time:                      2014-09-06 10:55:45
      Requested action:              Patch
    Package properties:
      Description:                   SQL Server Database Services 2008 R2
      ProductName:                   SQL2008
      Type:                          RTM
      Version:                       10
      SPLevel:                       2
      KBArticle:                     KB2977319
      KBArticleHyperlink:            http://support.microsoft.com/?kbid=2977319
      PatchType:                     QFE
      AssociatedHotfixBuild:         0
      Platform:                      x64
      PatchLevel:                    10.52.4321.0
      ProductVersion:                10.52.4000.0
      GDRReservedRange:              10.50.4001.0:10.50.4199.0;10.50.4200.0:10.50.4250.0
      PackageName:                   SQLServer2008-KB2977319-x64.exe
      Installation location:         e:\ac2af22d88ee645b5b32b5c178\x64\setup\
    Please inform if I need to apply the hotfix on CU12. Thanks in advance.
    John S

    Yes, you must install the security update mentioned in KB 2977319; it is important for SQL Server to be patched with this security update. Without it, an attacker could compromise your system and gain control over it.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Articles

  • Will Cumulative Update 12 for SQL Server 2008R2 SP2 fix: Warning: Unable to Verify TimeStamp for Path\ProgramName?

    Hello,
    I had a server crash recently and our outsourced hosting tech support suggested applying Cumulative Update 12 for SQL Server 2008 R2 SP2 to fix the issue. The exception from our dump file, "Warning: Unable to Verify TimeStamp for Path\ProgramName", is not in the list of hotfixes for this CU. Do you know if this will fix the issue? The CU warns not to apply it if your issue is not addressed by the CU. Here is a portion of the dump file with the relevant error:
    This dump file has an exception of interest stored in it.
    The stored exception information can be accessed via .ecxr.
    (7e8.2ab4): Unknown exception - code 000042ac (first/second chance not available)
    ntdll!NtWaitForSingleObject+0xa:
    00000000`777412fa c3              ret
    0:240> .sympath srv*c:\Websymbols*http://msdl.microsoft.com/download/symbols;
    Symbol search path is: srv*c:\Websymbols*http://msdl.microsoft.com/download/symbols
    Expanded Symbol search path is: srv*c:\websymbols*http://msdl.microsoft.com/download/symbols
    0:240> .reload /f
    .Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for sqlservr.exe
    ..........Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlos.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for sqlos.dll
    ...............Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\opends60.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for opends60.dll
    .......Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\BatchParser.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for BatchParser.dll
    ....Unable to load image C:\Program Files\Microsoft SQL Server\100\Shared\instapi10.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for instapi10.dll
    *** ERROR: Module load completed but symbols could not be loaded for instapi10.dll
    ..Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Resources\1033\sqlevn70.rll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for sqlevn70.rll
    *** ERROR: Module load completed but symbols could not be loaded for sqlevn70.rll
    Press ctrl-c (cdb, kd, ntsd) or ctrl-break (windbg) to abort symbol loads that take too long.
    Run !sym noisy before .reload to track down problems loading symbols.
    .................Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\ftimport.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for ftimport.dll
    .Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\msfte.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for msfte.dll
    *** ERROR: Module load completed but symbols could not be loaded for msfte.dll
    ...........Unable to load image C:\Windows\System32\sqlncli10.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for sqlncli10.dll
    ...Unable to load image C:\Windows\System32\1033\SQLNCLIR10.RLL, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for SQLNCLIR10.RLL
    *** ERROR: Module load completed but symbols could not be loaded for SQLNCLIR10.RLL
    ..Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\xpsqlbot.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for xpsqlbot.dll
    .Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\XPStar.DLL, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for XPStar.DLL
    .Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlscm.dll, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for sqlscm.dll
    ...*** ERROR: Module load completed but symbols could not be loaded for odbcint.dll
    ...Unable to load image C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Resources\1033\XPStar.RLL, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for XPStar.RLL
    Thank you,
    Steve -

    The error message appears to be related to a debug session and not to come from the actual crash.
    So all I know is that you had a server crash. I don't even know exactly what that means. Did Windows bite the dust? Or was it only SQL Server?
    Assuming the latter, I would expect the SQL Server errorlog to have some information (and that would be ERRORLOG.1 or earlier, since the server has been restarted), but if SQL Server died the output may be incomplete.
    There may also be dump files, but as I rarely look into these, I am not sure how to interpret them. But I am quite confident that "Unable to verify TimeStamp..." is not the reason SQL Server went down.
    I would suggest the following course:
    *  If the server is not critical, do nothing. As long as it has only happened once, it has only happened once.
    *  If the server is critical, open a case with Microsoft if you are not able to figure out the reason yourself. The key here is "Unknown exception - code 000042ac".
    *  If it happens again, you should absolutely open a case. An important thing here is whether the stack dump is identical or something different. If the stack dumps are identical, you may have hit a bug in SQL Server or the OS, and applying CUs or OS
    fixes could help if it is a known issue. If the stack dump is something else, you have ghosts in the machine - that is, bad hardware.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Deploy JDBC driver for SQL server 2005 on PI 7.1

    How to deploy JDBC driver for SQL server 2005 on PI 7.1
    We are in SAP NetWeaver 7.1 Oracle 10G
    Third party system is  SQL server 2005
    There are different JDBC driver versions available to download for SQL Server 2005.
    I am not sure which version is applicable for our PI 7.1 SP level. Also, the JMS adapter needs to be deployed along with this.
    Please help

    Hi,
    Hope this How-to Guide helps you:
    How To Install and Configure External Drivers for the JDBC & JMS Adapters from
    www.sdn.sap.com/irj/sdn/howtoguides
    Regards,
    Karthick.
    Edited by: Karthick Srinivasan on Apr 13, 2009 4:07 PM

  • Deploying JDBC driver for SQL Server 2005 on PI 7.1

    How to Deploy JDBC driver for SQL Server 2005 on PI 7.1 on Windows 2003 server
    We are in NW PI 7.1 and third party db is sql server 05.
    I found the How-to Guide for SAP NetWeaver '04 but was unable to find one for NW PI 7.1.
    Can anyone help me with this? (I am looking for a step-by-step procedure guide.)
    Regards
    Mahesh

    Hi,
    Check these:
    Re: Installing JDBC Drivers for PI 7.1
    how to deploy MS Sql Server 2005 and 2008 jdbc driver
    There is mention of an SAP Note in the first link, or refer to this note: [831162|https://service.sap.com/sap/support/notes/831162]
    Regards,
    Abhishek.

  • *****Error in Microsoft JDBC drivers for SQL Server 2000****

    hi guys,
    I am getting the following error in my application. The error seems to have been thrown by the Microsoft JDBC driver for SQL Server 2000.
    The application tries to execute the following query when the error is thrown:
    SELECT getDate(); // getDate is a function which returns the current date and time. The error is thrown occasionally; other times the same query executes correctly.
    Can anyone help with this one?
    The error is:
    java.lang.NullPointerException
         at com.microsoft.jdbc.base.BaseImplStaticCursorResultSet.setupTempFiles(Unknown Source)
         at com.microsoft.jdbc.base.BaseImplStaticCursorResultSet.<init>(Unknown Source)
         at com.microsoft.jdbc.base.BaseStatement.chainInServiceImplResultSets(Unknown Source)
         at com.microsoft.jdbc.base.BaseStatement.getNextResultSet(Unknown Source)
         at com.microsoft.jdbc.base.BaseStatement.commonGetNextResultSet(Unknown Source)
         at com.microsoft.jdbc.base.BaseStatement.executeQueryInternal(Unknown Source)
         at com.microsoft.jdbc.base.BaseStatement.executeQuery(Unknown Source)
         at com.sanderson.tallyman.util.TallymanDB.executeQuery(Unknown Source)
         at com.sanderson.tallyman.util.TallymanDB.getCurrentDate(Unknown Source)
         at com.sanderson.tallyman.operations.interfaces.RecordUpdateControl.updateRecord(Unknown Source)
         at com.sanderson.tallyman.operations.interfaces.DebtInterfaceControl.processUpdate(Unknown Source)
         at com.sanderson.tallyman.operations.interfaces.DebtInterfaceControl.processInterface(Unknown Source)
         at com.sanderson.tallyman.operations.interfaces.InterfaceHandler$ProcessRecord.run(Unknown Source)
         at java.lang.Thread.run(Unknown Source)
    rgds,

    Hi,
    Did you ever get an answer to this? I am also having this problem.
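    One detail that may help whoever hits this: the failing frame, BaseImplStaticCursorResultSet.setupTempFiles, is the 2000 driver setting up temp files for a client-side static cursor result set. It may be worth experimenting with the driver's SelectMethod connection property and with the permissions and free space of the temp directory the driver writes to. The snippet below is only a sketch for such an experiment; the host, database name, and credentials are placeholders, and nothing here is a confirmed fix.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    // Experiment sketch: run the same query with an explicit SelectMethod setting.
    // Host, database name, and credentials are placeholders.
    public class GetDateCheck {
        public static void main(String[] args) throws Exception {
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            Properties props = new Properties();
            props.put("user", "appUser");
            props.put("password", "secret");
            props.put("SelectMethod", "cursor");  // try "cursor" vs. the default "direct"
            Connection con = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=tallyman", props);
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT getDate()");
            if (rs.next()) {
                System.out.println("Server time: " + rs.getTimestamp(1));
            }
            rs.close();
            st.close();
            con.close();
        }
    }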

  • Why can't i use "INNER JOIN" in a query for SQL Server with JDBC??????

    Hi,
    I'm trying to execute some SQL queries and I just don't understand what's wrong.
    I'm using Tomcat and SQL Server to do this, but when I try to execute a query with an INNER JOIN statement Tomcat raises a SQL exception... At first I thought there was a problem with the database connection, but I realized that a simple query against one table works pretty well. Then I found out about some problems with JDBC:ODBC, so I installed the JDBC driver for SQL Server 2000 and tested the same simple query, and it works... so I came to a conclusion: INNER JOIN or JOIN statements can't be used through JDBC... please, somebody tell me I'm wrong and give me a hand...
    I'm using Tomcat 4 and JDK 1.4 with SQL Server 2000.
    The error occurs when executeQuery() is called... not prepareStatement()... ??????
    Driver DriverRecResult = (Driver)Class.forName(driver).newInstance();
    Connection ConnRecResult = DriverManager.getConnection(DSN,user,password);
    PreparedStatement StatementRecResult = ConnRecResult.prepareStatement(query);
    ResultSet RecResult = StatementRecResult.executeQuery(); // <---- exception raised here
    Thanks so much in advance,

    That's exactly what I think: the driver is raising the exception, but I don't know why... I tested the same query with INNER JOIN directly from SQL Query Analyzer and it works perfectly, so my problem isn't the SQL but the JSP and JDBC, because I'm a newbie with these issues.
    Common sense tells me the possible problem lies in the SQL Server drivers, because I run the same pages on JRun through jdbc:odbc and they work well, but for now I just depend on Tomcat.
    I've installed the SQL Server drivers for JDBC but I find they don't work fully... could it be the version of the JDK I've installed? What version do I need?
    (I'm running Tomcat 4 with JDK 1.4 & SQL Server 2000 on W2K)
    Thanks for the reply.
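    For what it's worth, JDBC itself places no restriction on join syntax; a prepared statement containing an INNER JOIN goes through the SQL Server 2000 driver exactly like a single-table query, so the exception almost certainly comes from the driver setup or the SQL text rather than from the join. A minimal, self-contained check looks like the sketch below; the URL, credentials, and table/column names are invented for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Minimal check that an INNER JOIN runs through JDBC like any other query.
    // URL, credentials, and table/column names are invented for illustration.
    public class JoinTest {
        public static void main(String[] args) throws Exception {
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=shop",
                    "user", "password");
            PreparedStatement ps = con.prepareStatement(
                    "SELECT o.order_id, c.name "
                    + "FROM orders o INNER JOIN customers c ON o.customer_id = c.customer_id "
                    + "WHERE o.order_id = ?");
            ps.setInt(1, 1001);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " / " + rs.getString(2));
            }
            rs.close();
            ps.close();
            con.close();
        }
    }

    If this runs but the application's query does not, comparing the exact SQL strings and the driver registration between the two is usually the fastest way to find the difference.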
