Deployment and Licensing of SQL Server

Can I use SQL Server Express in a production environment? If not, which edition should I use, how can I arrange unlimited deployment licensing, and what is the cost of that product?

Yes, you can use the Express Edition in a production environment, and it is also free for redistribution; see
http://www.microsoft.com/web/platform/database.aspx
Olaf Helper
[ Blog] [ Xing] [ MVP]
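A quick way to confirm which edition and build an instance is actually running (standard SERVERPROPERTY calls; nothing here is specific to any one edition):

    SELECT  SERVERPROPERTY('Edition')        AS Edition,         -- e.g. 'Express Edition'
            SERVERPROPERTY('EngineEdition')  AS EngineEdition,   -- 4 = Express
            SERVERPROPERTY('ProductVersion') AS ProductVersion,
            SERVERPROPERTY('ProductLevel')   AS ProductLevel;    -- RTM or SP level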

Similar Messages

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    Test Configuration Components: Details
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: 2 x 6-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSDs
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    Value: Analysis Results
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
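    To make the licensing arithmetic behind these savings concrete, here is a minimal T-SQL sketch using the core counts from Table 1 (2 sockets x 6 cores per server, 10 servers cut to 5); the per-core price is a placeholder assumption, not a quoted Microsoft figure:

    -- Illustrative only: core counts taken from the test configuration above;
    -- @price_per_core is an assumed placeholder, not an official list price.
    DECLARE @servers_before   int   = 10,
            @servers_after    int   = 5,
            @cores_per_server int   = 2 * 6,     -- 2 sockets x 6 cores
            @price_per_core   money = 7000;      -- assumption

    SELECT  @servers_before * @cores_per_server AS core_licenses_before,   -- 120
            @servers_after  * @cores_per_server AS core_licenses_after,    --  60
            (@servers_before - @servers_after) * @cores_per_server * @price_per_core
                                                AS estimated_license_savings;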
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.


  • When Deploying SSIS Packages to SQL Server using SSMS Import feature, is the package configuration file deployed at the same time?

    Hello, everyone,
    I have an SSIS package with an XML package configuration file. I deployed it to the SQL Server using the import feature in SSMS (in Stored Packages, right-click MSDB, select Import, select the package to deploy from the file system). My question is:
    Is the XML package configuration file also deployed? If so, in which folder is it stored so that I can change the values in it? If not, is setting up the Configurations tab when scheduling an Agent job the way to manually bring the configuration file in?
    Your help and information is much appreciated.
    Regards

    Thank you Arthur for your reply. I appreciate it.
    A followup questions:
    I used a direct setting for the configuration file path on C:\, but it needs to be on D:\ on the remote server. I can't create the D:\ path because I don't have a D:\ drive on my computer (I understand I could use an indirect setting to change the path on the remote server, but that's not the concern here).
    My question is: since the configuration is not deployed, can I make up for it by setting up the Configurations tab when creating the Agent job step? That is, copy the configuration file to the D:\ drive on the remote server and, in the Configurations tab, add the configuration file on D:\ to the configuration. Will it work? Will the package still look for the original config file on C:\, and if it doesn't find it there, will it cause any error?
    Thank you in advance for your help.
    Regards
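    For what it's worth, a minimal hedged sketch of the Agent-job approach described in this question: the step below runs the MSDB-stored package and points it at the configuration file copied to D:\ via the dtexec-style /CONFIGFILE switch (job, package, server, and path names are all placeholders). In SSIS 2008 a missing design-time configuration path normally produces a load-time warning rather than a failure, but verify that in your environment.

    -- Hypothetical sketch: run the MSDB-stored package with an explicit
    -- configuration file on D:\ (all names and paths are placeholders).
    DECLARE @cmd nvarchar(4000) =
            N'/SQL "\MyPackage" /SERVER "RemoteServer" '
          + N'/CONFIGFILE "D:\SSISConfig\MyPackage.dtsConfig" /REPORTING E';

    EXEC msdb.dbo.sp_add_jobstep
         @job_name  = N'Load nightly data',      -- existing job (placeholder)
         @step_name = N'Run package with D:\ config',
         @subsystem = N'SSIS',
         @command   = @cmd;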

  • Deploy JDBC driver for SQL server 2005 on PI 7.1

    How to deploy JDBC driver for SQL server 2005 on PI 7.1
    We are on SAP NetWeaver 7.1 with Oracle 10g.
    The third-party system is SQL Server 2005.
    There are different JDBC versions available to download for SQL Server 2005.
    I am not sure which version is applicable for the PI 7.1 SP level. Also, the JMS Adapter needs to be deployed along with this.
    Please help

    Hi,
    Hope this How-to Guide helps you:
    How To Install and Configure External Drivers for the JDBC & JMS Adapters from
    www.sdn.sap.com/irj/sdn/howtoguides
    Regards,
    Karthick.
    Edited by: Karthick Srinivasan on Apr 13, 2009 4:07 PM

  • Deploying JDBC driver for SQL Server 2005 on PI 7.1

    How do I deploy the JDBC driver for SQL Server 2005 on PI 7.1 on a Windows 2003 server?
    We are on NW PI 7.1 and the third-party DB is SQL Server 2005.
    I found the How-to Guide for SAP NetWeaver '04 but am unable to find one for NW PI 7.1.
    Can anyone help me with this? (I am looking for a step-by-step guide.)
    Regards
    Mahesh

    Hi,
    Check these:
    Re: Installing JDBC Drivers for PI 7.1
    how to deploy MS Sql Server 2005 and 2008 jdbc driver
    Mention of a SAP Note in the first link or refer this note [831162|https://service.sap.com/sap/support/notes/831162]
    Regards,
    Abhishek.

  • Error deploying JDBC driver for SQL Server 2005

    Hi all
    I'm trying to deploy a JDBC driver for MS SQL Server 2005 (downloaded [here|http://www.microsoft.com/downloads/details.aspx?familyid=C47053EB-3B64-4794-950D-81E1EC91C1BA&displaylang=en]). When I try to deploy it according to the instructions found [here|http://help.sap.com/saphelp_nwce10/helpdata/en/51/735d4217139041e10000000a1550b0/frameset.htm] it fails.
    The error in the logs doesn't give any useful information though. It only says Error occurred while deploying component ".\temp\dbpool\MSSQL2005.sda".
    Has anyone else deployed a driver for SQL Server 2005, or perhaps have any suggestions of what else I could try?
    Thanks
    Stuart

    Hi Vladimir
    That's excellent news! Thanks for the effort you've put into this. I'm very impressed with how seriously these issues are dealt with, specifically within the Java EE aspects of SAP.
    May I make two suggestions on this topic:
    1. Given that NWA has such granular security permissions, please could the security error be shown when it is raised? This would help immediately identify that the problem isn't actually a product error, but rather a missing security permission (and thus save us time and reduce your support calls).
    2. Please could the role permissions be clearly documented (perhaps they already are, and I just couldn't find the docs?) so we know what is and isn't included in the role. The name is very misleading, as a "superadmin" is generally understood to have no limitation on their rights - so clear documentation on what is in-/excluded would be most helpful.
    On a related topic, I came across another issue like this that may warrant your attention (while you're already looking into NWA security issues). I logged a support query about it (ref: 0120025231 0000753421 2008) in case you can retrieve details there (screenshots, logs, etc.). It's basically a similar security constraint when trying to create a Destination. I'm not sure if this is something you would like to include as standard permissions within the NWA_SUPERADMIN role or not, but I think it's worth consideration.
    Thanks again for your help!
    Cheers
    Stuart

  • What is the difference between 32-bit and 64-bit SQL Server memory management?

    What is the difference between 32-bit and 64-bit SQL Server memory management?
    Thanks
    Shashikala

    This is the basic difference; check if it helps:
    A 32-bit CPU running 32-bit software (also known as the x86 platform) is so named because it is based on an architecture that can manipulate values that are up to 32 bits in length. This means a 32-bit memory pointer can store a value between 0 and 4,294,967,295 to reference a memory address, which equates to a maximum addressable space of 4GB on 32-bit platforms.
    A 64-bit pointer, on the other hand, has a limit of 18,446,744,073,709,551,616, a number so large that in memory/storage terminology it equates to 16 exabytes. You don't come across that term very often, so to put the scale in more commonly used measurements: 16 exabytes = 16,384 petabytes = 16,777,216 terabytes (roughly 17 million TB) = 17,179,869,184 gigabytes (roughly 17 billion GB).
    As you can see, that is vastly larger than the 4GB virtual address space usable on 32-bit systems; it is so large, in fact, that no current hardware comes close to using it all. Because of this, 64-bit Windows systems of this era expose a 44-bit virtual address space of 16TB, which was regarded as more than enough for the foreseeable future and is logically split into an 8TB range for user mode and 8TB for kernel mode. Each 64-bit process running on an x64 platform can therefore address up to 8TB of VAS.
    Please click the Mark as answer button and vote as helpful if this reply solves your problem
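    To see what a given instance actually reports, a small hedged example (SQL Server 2012 and later column names; 2008/2008 R2 used physical_memory_in_bytes and virtual_memory_in_bytes instead):

    -- Logical CPUs plus the physical memory and user-mode virtual address
    -- space that SQLOS reports for this instance.
    SELECT  cpu_count,
            physical_memory_kb / 1024 AS physical_memory_mb,
            virtual_memory_kb  / 1024 AS virtual_memory_mb
    FROM    sys.dm_os_sys_info;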

  • I have enabled both the traces 1204 and 1222 in sql server 2008 but not able to capture deadlocks

    In Application error:
    2014-09-06 12:04:10,140 ERROR [org.apache.catalina.core.ContainerBase] Servlet.service() for servlet DoMethod threw exception
    javax.servlet.ServletException: java.lang.Exception: [ErrorCode] 0010 [ServerError] [DM_SESSION_E_DEADLOCK]error:  "Operation failed due to DBMS Deadlock error." 
    And
    I have enabled both trace flags 1204 and 1222 in SQL Server 2008, but I am still not getting any deadlock information in the SQL Server error log.
    Thanks
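    A quick sanity check worth running (standard DBCC commands): the flags must be enabled globally to reach the error log, either as -T startup parameters or with the -1 argument, and DBCC TRACESTATUS confirms whether they really are on. On SQL Server 2008 the built-in system_health Extended Events session also records deadlock graphs, which is another place to look.

    -- Enable the deadlock trace flags globally and confirm their status.
    DBCC TRACEON (1204, 1222, -1);
    DBCC TRACESTATUS (1204, 1222, -1);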

    Thanks Shanky. Believe it or not, right now I am in the middle of migrating 400,000 (4 lakh) documents in Documentum on the production site, and the process from Documentum is halting due to this error, yet I am not able to find any deadlock on the SQL Server side while the deadlock error keeps appearing in Documentum:
    [ Date ] 2014-09-06 17:54:48,192 [ Priority ] INFO [ Text 3 ] [STDOUT] 17:54:48,192 ERROR [http-0.0.0.0-9080-3] com.bureauveritas.documentum.commons.method.SystemSessionMethod - ERROR 100: problem occured when connecting repository! DfServiceException::
    THREAD: http-0.0.0.0-9080-3; MSG: [DM_SESSION_E_DEADLOCK]error: "Operation failed due to DBMS Deadlock error."; ERRORCODE: 100; NEXT: null at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForDeadlock(NetwiseDocbaseRpcClient.java:276)
    at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForObject(NetwiseDocbaseRpcClient.java:647) at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$8.evaluate(DocbaseConnection.java:1292) at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1055)
    at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForObject(DocbaseConnection.java:1284) at com.documentum.fc.client.impl.docbase.DocbaseApi.authenticateUser(DocbaseApi.java:1701) at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.authenticate(DocbaseConnection.java:418)
    at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.open(DocbaseConnection.java:128) at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:97) at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:60)
    at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionFactory.newDocbaseConnection(DocbaseConnectionFactory.java:26) at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.getDocbaseConnection(DocbaseConnectionManager.java:83)
    at com.documentum.fc.client.impl.session.SessionFactory.newSession(SessionFactory.java:29) at com.documentum.fc.client.impl.session.PrincipalAwareSessionFactory.newSession(PrincipalAwareSessionFactory.java:35) at com.documentum.fc.client.impl.session.PooledSessionFactory.newSession(PooledSessionFactory.java:47)
    at com.documentum.fc.client.impl.session.SessionManager.getSessionFromFactory(SessionManager.java:111) at com.documentum.fc.client.impl.session.SessionManager.newSession(SessionManager.java:64) at com.documentum.fc.client.impl.session.SessionManager.getSession(SessionManager.java:168)
    at com.bureauveritas.documentum.commons.method.SystemSessionMethod.execute(Unknown Source) at com.documentum.mthdservlet.DfMethodRunner.runIt(Unknown Source) at com.documentum.mthdservlet.AMethodRunner.runAndReturnStatus(Unknown Source) at com.documentum.mthdservlet.DoMethod.invokeMethod(Unknown
    Source) at com.documentum.mthdservlet.DoMethod.doPost(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:173)
    at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:182) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:241) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619) (Log Path:SID CER > vmsfrndc01docpp > JMS_Log > server ,Host: vmsfrndc01docpp ,Apps: [SID CER, SID IVS])
    Thanks

  • How to select from an Oracle 8i database and insert into a SQL Server 2005 database

    Hi, how can I select from Oracle 8i and insert into SQL Server 2005?
    Source DB: Oracle 8i
    Target DB: SQL Server 2005
    I need to select one table's data from Oracle 8i and insert it into SQL Server 2005.
    Thanks

    Thanks, Khan.
    Is there any query (OPENQUERY) available for that?
    Regards..
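    For illustration, a hedged sketch of the OPENQUERY route (the linked-server name, provider, credentials, and table and column names are all placeholders, and the Oracle client plus OLE DB provider must be installed on the SQL Server machine):

    -- Create a linked server pointing at the Oracle 8i source (placeholder names).
    EXEC sp_addlinkedserver
         @server     = N'ORA8I',
         @srvproduct = N'Oracle',
         @provider   = N'OraOLEDB.Oracle',
         @datasrc    = N'ORA8I_TNS_ALIAS';

    EXEC sp_addlinkedsrvlogin
         @rmtsrvname  = N'ORA8I',
         @useself     = 'false',
         @rmtuser     = N'oracle_user',
         @rmtpassword = N'********';

    -- Pull the Oracle table across and insert into the SQL Server 2005 target.
    INSERT INTO dbo.TargetTable (Col1, Col2)
    SELECT Col1, Col2
    FROM   OPENQUERY(ORA8I, 'SELECT col1, col2 FROM source_schema.source_table');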

  • How to install and configure the SQL Server

    Hi All,
    We have to install SQL Server on a new server because the old server crashed and needs to be upgraded. Please advise how to install and configure SQL Server to run SAP Business One 8.8 successfully, and which parts we have to pay attention to.
    Kind Rgds,
    Steve

    Hi,
    Try this solution:
    The step-by-step installation guide can be found in the documentation included in the installation media. (\Documentation\SystemSetup\AdministratorGuide_SQL.pdf).
    Below are some important parts that you should pay attention to during the installation process.
    Resolution
    Collation setting: it must be set to SQL_Latin1_General_CP1_CI_AS, even if the company DB is in a non-English locale. The company DB will be created with the corresponding collation settings automatically.
    Instance and TCP port: it is recommended to run SBO on the default instance and TCP port 1433. Otherwise, some optional components such as B1i may not work properly.
    Native Client: SQL Server Native Client should be installed on every client machine to enable the ODBC connection to the DB server.
    Rgds,
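    After the installation, two quick checks for the points above (standard queries; the port query reports a value only for TCP connections):

    -- Confirm the server collation required by SAP Business One.
    SELECT SERVERPROPERTY('Collation') AS ServerCollation;  -- expect SQL_Latin1_General_CP1_CI_AS

    -- Confirm the TCP port of the current connection (1433 for the default instance).
    SELECT local_tcp_port
    FROM   sys.dm_exec_connections
    WHERE  session_id = @@SPID;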

  • Loop Through Excel Files and Load into SQL Server Table

    I'm following the example here.
    https://www.youtube.com/watch?v=_B83CPqX-N4
    I'm pretty sure I did all the steps, but for some reason my project is not running.  I'm thinking there is a 32-bit or 64-bit issue lurking in here somewhere.
    Here's my error message.
    SSIS package "C:\Users\Ryan\Documents\Visual Studio 2010\Projects\Loop through Multiple Excel sheets and load them into a SQL Table\Integration Services Project1\Package.dtsx" starting.
    Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.
    Error: 0xC0209303 at Package, Connection manager "Excel Connection Manager": The requested OLE DB provider Microsoft.ACE.OLEDB.12.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode. Error code: 0x00000000.
    An OLE DB record is available.  Source: "Microsoft OLE DB Service Components"  Hresult: 0x80040154  Description: "Class not registered".
    Error: 0xC001002B at Package, Connection manager "Excel Connection Manager": The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine.
    For more information, see http://go.microsoft.com/fwlink/?LinkId=219816
    Error: 0xC020801C at Data Flow Task, Excel Source [2]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0209303. 
    There may be error messages posted before this with more information on why the AcquireConnection method call failed.
    Error: 0xC0047017 at Data Flow Task, SSIS.Pipeline: Excel Source failed validation and returned error code 0xC020801C.
    Error: 0xC004700C at Data Flow Task, SSIS.Pipeline: One or more component failed validation.
    Error: 0xC0024107 at Data Flow Task: There were errors during task validation.
    SSIS package "C:\Users\Ryan\Documents\Visual Studio 2010\Projects\Loop through Multiple Excel sheets and load them into a SQL Table\Integration Services Project1\Package.dtsx" finished: Failure.
    The program '[5392] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).
    I have 32-bit Excel and 64-bit SQL Server.  Is that the issue?  Can someone tell me what's wrong here?
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Sa-weeettttttt!!  Thanks.  I figured that's what it was.  For the benefit of others, this link shows you exactly how to make the modification recommended above.
    http://help.pragmaticworks.com/dtsxchange/scr/FAQ%20-%20How%20to%20run%20SSIS%20Packages%20using%2032bit%20drivers%20on%2064bit%20machine.htm
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

  • How to find the next consecutive node in a binary tree and insert it in SQL Server

    I need to find the next node in this tree and insert it in SQL Server ...
    dilip kumar

    It depends on your data structure. How do you store this tree structure in the database? What's the database design? BTW, you can see this article, which includes a related sample:
    Custom Sort in Acyclic Digraph
    T-SQL e-book by TechNet Wiki Community
    My Blog
    My Articles
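    To give the question something concrete to build on, here is a minimal hedged sketch of one common way to store a binary tree in SQL Server, an adjacency list with a left/right marker (all names are made up), plus a query for the children of a node:

    -- One possible layout: each row points at its parent and records whether
    -- it is the left or right child (all names are placeholders).
    CREATE TABLE dbo.TreeNode
    (
        NodeId   int         NOT NULL PRIMARY KEY,
        ParentId int         NULL REFERENCES dbo.TreeNode (NodeId),
        Position char(1)     NULL CHECK (Position IN ('L', 'R')),
        Payload  varchar(50) NULL
    );

    INSERT INTO dbo.TreeNode (NodeId, ParentId, Position, Payload)
    VALUES (1, NULL, NULL, 'root'),
           (2, 1, 'L', 'left child'),
           (3, 1, 'R', 'right child');

    -- Children of node 1, left before right.
    SELECT NodeId, Position, Payload
    FROM   dbo.TreeNode
    WHERE  ParentId = 1
    ORDER BY Position;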

  • How to get a list of values used in the WHERE filter of stored procedures and functions in SQL Server

    How can I get a list of values (one or more) used in the WHERE filter of stored procedures and functions in SQL Server?
    How can I get a list of values as shown (highlighted) in the sample stored procedure below?
    ALTER PROC [dbo].[sp_LoanInfo_Data_Extract] AS
    SELECT   [LOAN_ACCT].PROD_DT,
                  [LOAN_ACCT].ACCT_NBR, 
                  [LOAN_NOTE2].OFCR_CD, 
                  [LOAN_NOTE1].CURR_PRIN_BAL_AMT, 
                  [LOAN_NOTE2].BR_NBR
    INTO #Table1
    FROM
                    dbo.[LOAN_NOTE1],
                    dbo.[LOAN_NOTE2],
                    dbo.[LOAN_ACCT]
    WHERE
                    [LOAN_ACCT].PROD_DT = [LOAN_NOTE1].PROD_DT
                    and
                    [LOAN_ACCT].ACCT_NBR = [LOAN_NOTE1].ACCT_NBR
                    and
                    [LOAN_NOTE1].PROD_DT = [LOAN_NOTE2].PROD_DT
                    and
                    [LOAN_NOTE1].MSTR_ACCT_NBR = [LOAN_NOTE2].MSTR_ACCT_NBR
                    and
                    [LOAN_ACCT].PROD_DT = '2015-03-10'
                    and
                    [LOAN_ACCT].ACCT_STAT_CD IN
    ('A','D')
                    and
                    [LOAN_NOTE2].LOAN_STAT_CD IN
    ('J','Z')
    Lenfinkel

    Hi LenFinkel,
    May I know what the purpose of this requirement is? As Olaf said, you may parse the T-SQL code (or the execution plan), which is not that easy.
    I have noticed that the condition values in your stored procedure (SP) are hard coded, and among them there is a date value; I believe some day you will have to alter the SP when that date expires. So why not declare three parameters for the SP instead of hard coding the values?
    For the multi-valued filters you can use a table-valued parameter; then there is no problem getting the values (see the sketch after this reply).
    If you could elaborate your purpose, we may help to find better workaround.
    Eric Zhang
    TechNet Community Support
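    To illustrate the table-valued parameter suggestion, a minimal hedged sketch based on the procedure above (the type name and parameter types are guesses, and the temp-table step is omitted):

    -- Table type to carry the status codes (placeholder names).
    CREATE TYPE dbo.StatusCodeList AS TABLE (StatusCd char(1) NOT NULL PRIMARY KEY);
    GO

    CREATE PROCEDURE dbo.usp_LoanInfo_Data_Extract
        @ProdDt      date,
        @AcctStatCds dbo.StatusCodeList READONLY,
        @LoanStatCds dbo.StatusCodeList READONLY
    AS
    SELECT  a.PROD_DT, a.ACCT_NBR, n2.OFCR_CD, n1.CURR_PRIN_BAL_AMT, n2.BR_NBR
    FROM    dbo.LOAN_ACCT  AS a
    JOIN    dbo.LOAN_NOTE1 AS n1 ON n1.PROD_DT = a.PROD_DT  AND n1.ACCT_NBR = a.ACCT_NBR
    JOIN    dbo.LOAN_NOTE2 AS n2 ON n2.PROD_DT = n1.PROD_DT AND n2.MSTR_ACCT_NBR = n1.MSTR_ACCT_NBR
    WHERE   a.PROD_DT = @ProdDt
    AND     a.ACCT_STAT_CD  IN (SELECT StatusCd FROM @AcctStatCds)
    AND     n2.LOAN_STAT_CD IN (SELECT StatusCd FROM @LoanStatCds);
    GO

    -- Caller supplies the values instead of hard coding them.
    DECLARE @acct dbo.StatusCodeList, @loan dbo.StatusCodeList;
    INSERT INTO @acct VALUES ('A'), ('D');
    INSERT INTO @loan VALUES ('J'), ('Z');
    EXEC dbo.usp_LoanInfo_Data_Extract @ProdDt = '2015-03-10',
                                       @AcctStatCds = @acct,
                                       @LoanStatCds = @loan;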

  • One of my servers has 12 cores but SQL Server shows 24. How is that possible?

    Hello  everyone,
    I'm confused: in SQL Server 2008 R2, when I check the number of cores in the SQL Server properties, it shows double what the OS shows (the OS reports 12 cores).
    One of my friends says 1 core = 2 processors.
    What is happening behind that?
    Thanks in advance...

    Hi,
    Are you the same person as Baraiya Krit, who posted the same question below? I have observed that questions posted by you and Baraiya are always the same. If that is the case, both threads should be merged, and you should take care not to post the same question under different profiles; otherwise I will have to report it as spam and your profile may be deleted. If that is the case, you are wasting people's time.
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/505a2871-a7fd-487e-b39d-788ff78b6c95/one-of-server-core-12-and-showing-in-sql-server-24-how-it-possible?forum=sqldatabaseengine
    If you are from the same firm but are different people, why not post a single question?
    Moderators please have a look
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Articles
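    For what it's worth, the doubling described here is usually hyper-threading: the operating system tool is counting physical cores while SQL Server reports logical processors. A standard query to compare the figures:

    -- Logical CPUs seen by SQL Server and the socket picture behind them.
    SELECT  cpu_count,                           -- logical processors (e.g. 24)
            hyperthread_ratio,                   -- logical CPUs exposed per physical socket
            cpu_count / hyperthread_ratio AS physical_socket_count
    FROM    sys.dm_os_sys_info;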

  • How to ignore CHANGE_TRACKING in the deployment script of a SQL Server Database Project (VS2013)

    Hello,
    I have a "Project Database SQL Server" in SSDT VS2013 and my scenario is as follows:
    My project database has not enabled the feature of [CHANGE_TRACKING].
    I have N production databases, and there are a few in particular that are characteristic of [CHANGE_TRACKING]
    enabled.
    When I publish the project database against a target
    base has not enabled the feature of [CHANGE_TRACKING] no problem, everything works as expected.
    The problem is when I publish
    the project database against a database that has ENABLED feature tracking changes.
    I want the VS to make the comparison
    of schemes in the publication, ignore this feature, and NOT generate deployment scripts with instructions like:
    'Alter table dbo.table DISABLE CHANGE_TRACKING'
    'Alter database name SET CHANGE_TRACKING = OFF'
    It is possible to somehow customize the build-deployment to avoid this outcome?
    I hope the collaboration of you lot.
    Thank you very much.
    Estyfen

    Hi, thanks for the response.
    The configuration "Do not alter Change Data Capture objects" is on, but even so, I do not get the expected results. When I run the generated publication script, it still does a "DISABLE CHANGE_TRACKING" on the tables that have "track changes" activated in the destination database.
    Staying tuned for your help.
    Thank you very much.
    Estyfen
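    Not a fix for the publish behaviour itself, but a quick hedged way to see up front which target databases and tables actually have change tracking enabled (standard catalog views), so you know which targets the generated DISABLE statements would touch:

    -- Databases on the instance with change tracking enabled.
    SELECT  d.name, ctd.retention_period, ctd.retention_period_units_desc, ctd.is_auto_cleanup_on
    FROM    sys.change_tracking_databases AS ctd
    JOIN    sys.databases AS d ON d.database_id = ctd.database_id;

    -- Tables tracked in the current database.
    SELECT  OBJECT_SCHEMA_NAME(ctt.object_id) AS schema_name,
            OBJECT_NAME(ctt.object_id)        AS table_name
    FROM    sys.change_tracking_tables AS ctt;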
