Config for BIG server (Solaris)
Our app is currently running pretty well on 2-cpu NT servers. But we need
more performance to support more users on one instance of WL (clustering
isn't yet an option), so we are trying a Big Solaris server. We're looking
for any pointers on how to configure WebLogic/JVM for a big server, esp.
Solaris. I've read over some of the Sun docs but am still not sure.
Server: 8-processor, 6G memory, Solaris 2.7, JDK 1.3.0_03, WL 5.1 sp9
Heap size: should we set -Xms and -Xmx to 4.5G on this box? (was 780M on 1G NT)
Most things say to set the heap "large" but too big can cause problems.
What's as big as possible with 8 processors?
#execute threads: should we multiply because of the number of processors?
We had been using 30 on 2-processor NT - should we now use like 100?
There are some other JVM params that we may play with as well, including the
HotSpot options that are listed on the WL site. Should we set -XX:NewSize
(and -XX:MaxNewSize) to 128M, 256M, or higher?
Additional info: our application has many EJBs (Entity and Stateless
Session) being accessed by client applets and JSPs. Our total number of
users will be in the hundreds, even under heavy loads, but they cause a lot
of activity in the server. Any advice on these and other parameters is
appreciated.
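For illustration, here is a sketch of the kind of startup line being asked about (the values are candidates floated in this thread, not recommendations, and it assumes the usual WL 5.1 launch via its weblogic.Server class):
java -server -Xms2048m -Xmx2048m -XX:NewSize=256m -XX:MaxNewSize=256m weblogic.Server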
Yes, I think 2GB memory for one JVM would be too much even with eight CPUs.
In most cases, I would use 512m per instance, but I am sure there are several
good folks in this newsgroup using more than that. They can share their
experience.
.raja
"Joe Herbers" <[email protected]> wrote in message
news:[email protected]...
So even with 8 processors, you think 2G is too big? We've initially
configured the JDK to use incremental garbage collection so that we wouldn't
see large pauses, though I'm not sure how much of a performance hit we take
for that (I suppose we should try to benchmark it).
We've got 6G on this box so I hate not to use it, but if others have seen
this not work, then we'll try a smaller heap size (1G?). However, a doc on
the Sun site says these two things: "Unless you have problems with pauses,
try granting as much memory as possible to the JVM." "Be sure to increase
the memory as you increase the number of processors, since allocation can be
parallelized, but GC is not parallel." I guess we should monitor GC times
at 2G to see how it's doing.
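For reference, watching GC times needs only the verbose GC flag that JDK 1.3 already supports (a sketch; the 2G heap is the value under test above):
java -verbose:gc -Xms2048m -Xmx2048m weblogic.Server
Each collection then prints its pause time and before/after heap occupancy, which is enough to tell whether full GCs at 2G pause longer than the users can tolerate.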
"Scott Simpson" <[email protected]> wrote in message
news:[email protected]...
You really don't want to make your heap so large. This is a very bad idea.
When the system garbage collects, it will have to go through gigabytes of
memory and your server will stop. Making your heap size too large is just as
bad as making it too small.
"Deyan D. Bektchiev" <[email protected]> wrote in message
news:[email protected]...
Joe,
Your first problem would be memory, since 1.3.0 has a limitation of 2GB of heap
per JVM (it's actually a little bit less; 2010m worked for us), so you might
consider running more than one instance if memory would be a problem. I'm not
sure if they've lifted that to 4GB in 1.3.1, but it is going to be for sure
for 1.4.0.
The number of execute threads solely depends on your application: if you
have a lot of I/O then you'd want to increase the number of execute threads,
while if you are mainly using CPU cycles and most threads do not block you
might consider even reducing the number of execute threads.
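For illustration, in WebLogic 5.1 this count is assumed to live in weblogic.properties (30 is the poster's current value, not a recommendation):
weblogic.system.executeThreadCount=30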
Best way to find out what is best for you -- benchmark it.
--deyan
Similar Messages
-
Hardware Config for MI server!
Hi All,
I need information regarding the hardware configuration for the MI server which should deploy SAP Netweaver Mobile 7.1 and support xMAU 3.0 SP5.
Can anyone throw some light on this.
Thanks in advance!
Kanwar
Hi Kanwar,
Check this link in SMP for MAU specific information.
https://websmp105.sap-ag.de/~form/sapnet?_SCENARIO=01100035870000000202&_SHORTKEY=01100035870000694050
For general MI queries, please check the MI FAQ section https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cc489997-0901-0010-b3a3-c27a7208e20a
Regards
Ajith -
My Yahoo on Portal 6 for Application Server 7 on Solaris
Hi everyone, I'm trying to install the My Yahoo provider on Portal 6 for SUN ONE Application Server 7 on Solaris 8. So far it has not been successful. It seems that it only wants to look for web server components and cannot handle the Application server installation.
On my first trial the install log said:
DEPLOY_BASEDIR not found in /var/sadm/pkg/SUNWps/pkginfo
so I added the Base dir for the Portal installation.
On the second trial the log said:
PASSPHRASE not found in /var/sadm/pkg/SUNWps/pkginfo
so I added the passphrase for the adadmin user to the file
On the third trial it seemed to install most of the components but returned a not installed message and the log says:
Error: Cannot determine WS_ADMINPASSWD.
At this point I can see the provider listed as a service but no properties show when I try to display it.
I have 2 questions:
1. How can I modify the scripts or pkginfo files to make this work?
2. Does anyone know how to define a proxy in the APP server 7 web container so that it can fetch the content should the provider install eventually? As with the script the instructions only address a web server installation.
TIA
Hi,
I am getting this error because of the JDBC driver. I have an Oracle 9.2.0.4 database but there is no Oracle 9.2.0.4 client for Solaris 9 x86. I am trying to test it using the Oracle 10g client, hence the error. Does anyone know of any workarounds for this? -
Sender mail adapter config for MS Exchange Server
Dear All Gurus,
Need your advice on configuring a sender mail adapter (mail-to-file scenario) for Exchange server. I have read a lot of SCN threads and other articles and was not able to find the exact solution for this. PI version: 7.3.
Thank you all in advance...
Hi,
please check the below links.
http://wiki.scn.sap.com/wiki/display/XI/Step+by+Step+Mail+To+File+Scenario - Mail to File
http://www.riyaz.net/sap/xipi-configuring-the-sender-mail-adapter/90/
Regards
srinivas -
Hi All,
We're having SAP ERP ECC 6.0, and are going to implement DMS soon. Can anyone guide me about server configuration required for the same.
Can we use 32bit servers for DMS and integrate it with existing 64bit servers for ERP?
Ashish
Hi,
DMS is part of ECC 6.0; you will only need a content server to store DMS documents. It is possible to use a 32-bit content server which talks with SAP (your 64-bit install) through HTTP protocols. You need to plan one server for the content server; on the same server you can install the TREX search engine, provided the RAM permits it.
Anirudh, -
Xmx over 3G for a server app on x86 Solaris is buggy for 32bit app jre15?
Hi folks,
I am sorry for the cross-list, but a bit desperate for feedback.
One of my colleagues says that he thinks an -Xmx of 3000MB (but less than 4GB) is buggy for a server-side application.
I have searched the bug tracker and the forums and I can't find any reference to this.
Does anyone know if server-side applications can misbehave given such a high memory parameter when run in 32-bit mode?
My server app start up parameters are:
-server
${MIN_MEMORY:+-Xms$MIN_MEMORY}
${MAX_MEMORY:+-Xmx$MAX_MEMORY}
-Xloggc:${LOG_DIR}/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+UseConcMarkSweepGC
where both MIN_MEMORY and MAX_MEMORY = 2500MB
using:
Java 1.5.0_09
on a
SunOS 5.10 Generic_120012-14 i86pc i386 i86pc
Please provide any feedback.
j.
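For what it's worth, a quick empirical check of whether a 32-bit VM will accept a given heap is to start it with the candidate size and exit immediately (a sketch; 3000m is the value under suspicion):
java -Xms3000m -Xmx3000m -version
If the VM cannot reserve that much contiguous address space, it fails up front with a "Could not reserve enough space for object heap" style error rather than misbehaving later.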
-
High availability for file server and exchange with 2 physical servers
Dear Experts,
I have 2 physical servers with local disks only. I want to set up the below on the same hardware with high availability; please advise the best possible options. We will be using Windows 2012 R2 Server.
1. Domain controller
2. Exchange 2013
As of now I am thinking of setting up below:
1. Install Hyper-v on both and create 3 VM on each as
-On Host A- 1 VM for DC, 1 VM for File server with DFS namespace and replication for file server HA and 1 VM for Exchange 2013 with CAS/MBX with DAG and DNS RR for Exchange HA
-On Host B - 1 VM for ADC, 1 VM for File server DFS member for above and 1 VM for Exchange 2013 CAS/MBX with DAG member
I have read on internet about new features called scale out file server (SoFS) in Windows 2012 Server but not sure that will be preferred for file sharing.
Any advise will be highly appreciated..
Thanks for the help in advance..
Best regards,
DFS is by far not the best way to implement any sort of file server, because a) failover is not fully transparent and does not always happen (say, not on copy), b) DFS cannot replicate open files, so if you edit a big file and have the node rebooted you're going to lose ALL transactions/updates you've applied, and c) it actually slows down the config. See:
DFS for failover
http://help.globalscape.com/help/wafs3/using_microsoft_dfs_for_failover.htm
DFS FAQ
http://technet.microsoft.com/library/cc773238(WS.10).aspx
(check "open files" point here)
DFS Performance
http://blogs.technet.com/b/filecab/archive/2009/08/22/windows-server-dfs-namespaces-performance-and-scalability.aspx
SoFS a) requires shared storage to run, and you don't have one, b) does not support generic workloads
(only Hyper-V and SQL Server), and c) technically makes sense to expand a SAS JBOD or existing FC SAN to numerous Hyper-V clients over 10 GbE without the need to invest money into SAS switches and HBAs, FC HBAs, and new FC port licenses. Making a long story short:
SoFS is NOT YOUR CASE.
SoFS Overview
http://technet.microsoft.com/en-us/library/hh831349.aspx
http://www.aidanfinn.com/?p=12786
For now you need to find some shared storage to be a back end for your hypervisor config (a SAS JBOD from the supported list, or a virtual SAN from multiple vendors, for example StarWind, see below; make sure you review ALL the vendors) and then you create a failover
SMB 3.0 share for your file server workload. See:
Clustered Storage Spaces over SAS JBOD
http://technet.microsoft.com/en-us/library/jj822937.aspx
Virtual SAN from inexpensive SATA and no SAS or FC
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
Failover SMB File Server in Windows Server 2012 R2
http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
Fault tolerant file server on just a pair of nodes
http://www.starwindsoftware.com/ns-configuring-ha-file-server-for-smb-nas
For Exchange you use the SMB share from above for a file share witness and use a DAG. See:
Exchange DAG
Good luck! Hope this helped :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Increase Performance and ROI for SQL Server Environments
May 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
SQL Server 2014 servers: Fujitsu RX300
Server operating system: Microsoft Windows 2012 R2 Standard Edition
SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
Processors per server: 2 × 6-core Xeon E5-2630 at 2.30 GHz
Fibre channel network: 8Gb FC with multipathing
Storage controller: AFF8080 EX
Data ONTAP version: Clustered Data ONTAP® 8.3.1
Drive number and type: 48 SSD
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
ROI: 65%
Net present value (NPV): $950,000
Payback period: six months
Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration: $40,000
Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
Source: NetApp, 2015
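To make the licensing arithmetic concrete (core counts taken from Table 1; list prices are not given here, so this shows only the core reduction behind the claimed 50% licensing savings):
10 servers × 2 sockets × 6 cores = 120 licensed cores
5 servers × 2 sockets × 6 cores = 60 licensed cores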
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
Quick Links
Tech OnTap Community
Archive
PDF
-
Set up Search Service App For SharePoint server 2013 on Windows server 2012 R2 not working
Hi all,
I installed SharePoint Server 2013 on Windows Server 2012 R2 using VirtualBox. I created a DC (domain controller) server with a domain set up on one VM, and it has SQL Server 2012 SP1 installed. Then SharePoint 2013 on another VM was set up to access
the DC server. Everything seems to work except the Search Service App, which cannot be successfully set up. The creation process for the Search Service App says Successful and 4 search databases were created and look fine. But when I navigate to the Search Service App
admin page, it gives this error info:
System status: The search service is not able to connect to the machine that hosts the administration component. Verify that the administration component '386f2cd6-47ca-4b3a-aeb5-d9116772ef16' in search application 'Search Service Application 1' is in
a good state and try again.
Search Application Topology: Unable to retrieve topology component health states. This may be because the admin component is not up and running.
From event viewer, I see following errors:
(1) Error From source: SharePoint Server
Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance
(b7c72eb8-cbaf-435e-b4c9-963cb6e4e745).
Reason: The object you are trying to create already exists. Try again using a different name.
Technical Support Details:
System.Runtime.InteropServices.COMException (0x80040D02): The object you are trying to create already exists. Try again using a different name.
at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()
at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean
isAdministrationServiceJob)
(2) Error From source: SharePoint Server Search
Could not access the Search database. A generic error occurred while trying to access the database to obtain the schema version info.
Context: Application '386f2cd6-47ca-4b3a-aeb5-d9116772ef16'
(3) Warning from source: SharePoint Server Search
A database error occurred. Source: .Net SqlClient Data Provider Code: 8169 occurred 0 time(s) Description: Error ordinal: 1 Message:
Conversion failed when converting from a character string to uniqueidentifier., Class: 16, Number: 8169, State: 2 at
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
(4) Error From source: SharePoint Server
Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance
(b7c72eb8-cbaf-435e-b4c9-963cb6e4e745).
Reason: The gatherer application could not be mounted because the search administration database schema version does not match the expected backwards compatibility schema version. The database might not have been upgraded.
Technical Support Details:
System.Runtime.InteropServices.COMException (0xC0041235): The gatherer application could not be mounted because the search administration database schema version does not match the expected backwards compatibility schema version. The database might not have
been upgraded.
Since separate DC and SharePoint servers do not work, I installed SharePoint 2013 on the DC server (so the DC server has everything on it now), but it gives exactly the same result. Later I installed SharePoint 2013 SP1 and still have the same problem with the Search
Service app. I spent two weeks trying all suggestions available from the Web and Google, but SharePoint Search Service simply does not work. Config and other databases work, so why does Search Service have this issue, seemingly related to the search DB?
Could anybody please help out? You deserve a top SharePoint consultant award if you could find a solution. I am so frustrated and so tired of this issue. This also seems to be an SP setup issue.
Thanks a lot.
Using the new Search Service App wizard to create an SSA always reports success. I could delete the existing SSA and recreate it with no problem. It says successful, but when I open the Search Admin page from CA, it gives me the errors mentioned.
Now I used the following PS script for creating the SSA, from Max Mercher, but it stays at the last step in the following script:
Add-PsSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$IndexLocation = "C:\Search" #Location must be empty, will be deleted during the process!
$SearchAppPoolName = "SSAPool"
$SearchAppPoolAccountName = "mydomain\admin"
$SearchServiceName = "SSA"
$SearchServiceProxyName = "SSA Proxy"
$DatabaseServer = "W12R2DC1"
$DatabaseName = "SSA"
$spAppPool = Get-SPServiceApplicationPool -Identity $SearchAppPoolName -ErrorAction SilentlyContinue
if (!$spAppPool) {
    $spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose
}
$ServiceApplication = Get-SPEnterpriseSearchServiceApplication -Identity $SearchServiceName -ErrorAction SilentlyContinue
if (!$ServiceApplication) {
    # process stays at the following step forever, already one hour now.
    $ServiceApplication = New-SPEnterpriseSearchServiceApplication -Name $SearchServiceName -ApplicationPool $spAppPool.Name -DatabaseServer $DatabaseServer -DatabaseName $DatabaseName
}
Account mydomain\admin is a farm managed account and domain admin account, in the WG_ADMIN role. It is in all SQL Server roles and is DBO. I see the search DBs are already on the SQL Server. From Event Viewer, I got the following errors in sequence:
(1) Crawler:Content Plugin under source Crawler:Content Plugin
Content Plugin can not be initialized - list of CSS addresses is not set.
(2) Warning for SharePoint Server Search
A database error occurred. Source: .Net SqlClient Data Provider Code: 8169 occurred 0 time(s) Description: Error ordinal: 1 Message: Conversion failed when converting from a character string to uniqueidentifier., Class: 16, Number: 8169, State: 2
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
(3) Error for SharePoint Server Search
Could not access the Search database. A generic error occurred while trying to access the database to obtain the schema version info.
Context: Application 'cbc5a055-996b-44a7-9cbc-404322f9cfdf'
(4) Error for SharePoint Server
Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (b7c72eb8-cbaf-435e-b4c9-963cb6e4e745).
Reason: The gatherer application could not be mounted because the search administration database schema version does not match the expected backwards compatibility schema version. The database might not have been upgraded.
(5) Error Shared Services for SharePoint Server Search
Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (b7c72eb8-cbaf-435e-b4c9-963cb6e4e745).
Reason: The object you are trying to create already exists. Try again using a different name.
Technical Support Details:
System.Runtime.InteropServices.COMException (0x80040D02): The object you are trying to create already exists. Try again using a different name.
at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()
at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob)
The above errors keep being generated. The last step of the SSA creation stays there forever. Any clue what is really going on? Thanks. -
Dear all,
I'm a new Basis admin and now I'm working on a big ERP project. I have a doubt about the config for the Production client.
In SCC4 we must set the client role to Production and "No changes allowed" for objects. But in production we sometimes need to do Open and Close Period, or make changes following business requirements, ... This is not allowed in the Production client.
How do we configure the Production client to cover these requirements?
Do we need a config client for maintaining the Production client? Example: the Production client is 500 and the config client is 100. When we need to Open or Close Period or change anything, we do it in 100 and transport the request to 500.
Thank you very much.
Regards,
Thanh.
Do not use text message language, the next time your thread will be deleted.
Read the "Rules of Engagement"
Edited by: Juan Reyes on Dec 1, 2010 11:06 AM
You can customize transactions to be executable although the setting in SCC4 is "productive"; this is accomplished by using transaction SOBJ:
Note 1497640 - Open and close periods in productive client
You can theoretically put every customizing view there and make it "executable" in a production system.
Markus -
How to use the same services-config for the local and remote servers.
My Flex project works fine using the below, but when I upload my Flash file to the server it doesn't work. All the relative paths and files are the same, except the remote one is a Linux server.
<?xml version="1.0" encoding="UTF-8"?>
<services-config>
<services>
<service id="amfphp-flashremoting-service"
class="flex.messaging.services.RemotingService"
messageTypes="flex.messaging.messages.RemotingMessage">
<destination id="amfphp">
<channels>
<channel ref="my-amfphp"/>
</channels>
<properties>
<source>*</source>
</properties>
</destination>
</service>
</services>
<channels>
<channel-definition id="my-amfphp" class="mx.messaging.channels.AMFChannel">
<endpoint uri="http://localhost/domainn.org/amfphp/gateway.php" class="flex.messaging.endpoints.AMFEndpoint"/>
</channel-definition>
</channels>
</services-config>
I think the problem is the line
<endpoint uri="http://localhost/domainn.org/amfphp/gateway.php" class="flex.messaging.endpoints.AMFEndpoint"/>
but I'm not sure how to use the same services-config for the local and remote servers.
paul.williams wrote:
You are confusing "served from a web-server" with "compiled on a web-server". Served from a web-server means you are downloading a file from the web-server; it does not necessarily mean that the file has been generated / compiled on the server.
The server.name and server.port tokens are replaced at runtime (i.e. on the client when the swf has been downloaded and is running), not compile time (i.e. while the mxmlc / ant / web-tier compiler is running). You do not need to compile on the server to take advantage of this.
Hi Paul,
In Flex, there is a feature that lets the developer bake all of the services-config.xml configuration information into the swf file, with
-services=path/to/services-config.xml
If services-config.xml has tokens in it and the user has not specified an additional
-context-root
and this swf file is not served from a web app server (like Tomcat, for example), then it will not work.
Flash Player has no possible way to replace the token values of the services-config.xml file during runtime if that services-config.xml file has been baked into the swf file during compilation.
For example, during development you can launch your swf file in your browser with the file:// protocol and still be able to access BlazeDS services if
-services=path/to/services-config.xml
has been specified during compilation.
I don't know a better way to explain this, but in summary there are two places where you can tell the swf about the service configuration:
1) pass the -services=path/to/services-config.xml parameter to the compiler; this way you tell the swf file up front about all that good stuff,
or 2) you put that file on the web server (in this case, yes, you should have replacement tokens in that file) and they will be replaced at runtime.
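As a concrete illustration of option 2, the channel definition from the original post would use the standard replacement tokens instead of a hardcoded host (a sketch; the amfphp path is carried over from the post above):
<channel-definition id="my-amfphp" class="mx.messaging.channels.AMFChannel">
<endpoint uri="http://{server.name}:{server.port}/amfphp/gateway.php" class="flex.messaging.endpoints.AMFEndpoint"/>
</channel-definition>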
Our setup is that we have two databases; a SQL Server 2008 database and an Oracle database (11g). I've got the oracle MTS stuff installed and the Oracle MTS Recovery Service is running. I have DTC configured to allow distributed transactions. All access to the Oracle tables takes place via views in the SQL Server database that go against Oracle tables in the linked server.
(With regard to DTC config: Checked-> Network DTC Access, Allow Remote Clients, Allow Inbound, Allow Outbound, Mutual Authentication (tried all 3 options), Enable XA Transactions and Enable SNA LU 6.2 Transactions. DTC logs in as NT AUTHORITY\NetworkService)
Our app is an ASP.NET MVC 4.0 app that calls into a number of WCF services to perform database work. Currently the web app and the WCF service share the same app pool (not sure if it's relevant, but just in case...)
Some of our services are transactional, others are not.
Each WCF service that is transactional has the following attribute on its interface:
[ServiceContract(SessionMode=SessionMode.Required)]
and the following attribute on the method signatures in the interface:
[TransactionFlow(TransactionFlowOption.Allowed)]
and the following attribute on every method implementations:
[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
In my data access layer, all the transactional methods are set up as follows:
using (IDbConnection conn = DbTools.GetConnection(_configStr, _connStr))
{
    using (IDbCommand cmd = DbTools.GetCommand(conn, "SET XACT_ABORT ON"))
    {
        cmd.ExecuteNonQuery();
    }
    using (IDbCommand cmd = DbTools.GetCommand(conn, sql))
    {
        // ... Perform actual database work ...
    }
}
Services that are transactional call transactional DAL code. The idea was to keep the stuff that needs to be transactional (a few cases) separate from the stuff that doesn't need to be transactional (~95% of the cases).
There ought not be cases where transactional and non-transactional WCF methods are called from within a transaction (though I haven't verified this and this may be the cause of my problems. I'm not sure, which is part of why I'm asking here.)
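For context, the client-side pattern that flows a transaction into those WCF methods looks roughly like this (a sketch with hypothetical proxy names; requires System.Transactions):
using (var scope = new TransactionScope(TransactionScopeOption.Required))
{
    transactionalProxy.UpdateRecords();   // method marked [TransactionFlow(TransactionFlowOption.Allowed)]: the transaction flows to the service
    nonTransactionalProxy.ReadLookups();  // hypothetical non-transactional call made inside the same scope
    scope.Complete();                     // vote to commit; disposing without Complete() rolls back
}
Logging Transaction.Current.TransactionInformation at the top of each service method would be a cheap way to verify whether the mixed case described above actually occurs.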
As I mentioned before, in most cases, this all works fine.
Periodically, and I cannot identify what initiates it, I start getting errors. And once they start, pretty much everything starts failing for a while. Eventually things start working again. Not sure why... This is all in a test environment with a single user.
Sometimes the error is:
Unable to start a nested transaction for OLE DB provider "OraOLEDB.Oracle" for linked server "ORACLSERVERNAME". A nested transaction was required because the XACT_ABORT option was set to OFF.
This message, I'm guessing is happening when I have non-transactional stuff within transactions, as I'm not setting XACT_ABORT in the non-transactional code (that's totally doable, if that will fix my issue).
Most often, however, the error is this:
System.Data.SqlClient.SqlException (0x80131904): The operation could not be performed because OLE DB provider "OraOLEDB.Oracle" for linked server "ORACLSERVERNAME" was unable to begin a distributed transaction.
Now, originally we only had transactions on SQL Server tables and that all worked fine. It wasn't until we added transaction support for some of the Oracle tables that things started failing. I know the Oracle transactions work. And as I said, most of the time, everything is just hunky dorey and then sometimes it starts failing and keeps failing for a while until it decides to stop failing and then it all works again.
I noticed that our transactions didn't seem to have a DistributedIdentifier set, so I added the EnsureDistributed() method from this blog post: http://www.make-awesome.com/2010/04/forcibly-creating-a-distributed-net-transaction/
Instead of a hardcoded Guid (which seemed to cause a lot of problems), I have it generating a new Guid for each transaction and that seems to work, but it has not fixed my problem. I'm wondering if the lack of a DistributedIdentifier is indicative of some other underlying problem. I've never dealt with an environment quite like this before, so I'm not sure what is "normal".
I've also noticed that the DistributedIdentifier doesn't get passed to WCF. From the client, I have a DistributedIdentifier and a LocalIdentifier in Transaction.Current.TransactionInformation. In the WCF server, however there is only a LocalIdentifier set and it is a different Guid from the client side (which makes sense, but I would have expected the DistributedIdentifier to go across).
So I changed the way the code above works and instead, on the WCF side, I call a method that calls Transaction.Current.EnlistDurable() with the DummyEnlistmentNotification class from the link above (though with a unique Guid for each transaction instead of the hardcoded guid in the link). I now have a DistributedIdentifier on the server side, but it still doesn't fix the problem.
It appears that when I'm in the midst of transactions failing, even after I shut down IIS, I'm unable to get the DTC service to shut down and restart. If I go into Component Services and change the security settings, for example, and hit Apply or OK, after a bit of a wait I get a dialog that says, "Failed to restart the MS DTC service. Please examine the event log for further details."
In the eventlog I get a series of events:
1 (from MSDTC): "The MS DTC service is stopping"
2 (From MSSQL$SQLEXPRESS): "The connection has been lost with Microsoft Distributed Transaction Coordinator (MS DTC). Recovery of any in-doubt distributed transactions
involving Microsoft Distributed Transaction Coordinator (MS DTC) will begin once the connection is re-established. This is an informational
message only. No user action is required."
-- Followed by these 3 identical messages
3 (from MSDTC Client 2): 'MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system GCOVA38.'
4 (from MSDTC Client 2): 'MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system GCOVA38.'
5 (from MSDTC Client 2): 'MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system GCOVA38.'
6 (From MSDTC 2): MSDTC started with the following settings: Security Configuration (OFF = 0 and ON = 1):
Allow Remote Administrator = 0,
Network Clients = 1,
Transaction Manager Communication:
Allow Inbound Transactions = 1,
Allow Outbound Transactions = 1,
Transaction Internet Protocol (TIP) = 0,
Enable XA Transactions = 1,
Enable SNA LU 6.2 Transactions = 1,
MSDTC Communications Security = Mutual Authentication Required, Account = NT AUTHORITY\NetworkService,
Firewall Exclusion Detected = 0
Transaction Bridge Installed = 0
Filtering Duplicate Events = 1
This makes me wonder if there's something maybe holding a transaction open somewhere?
The statement was executed from SQL Server (installed versions: SQL Server 2008 64-bit Standard Edition SP1 and Oracle 11g 64-bit client), DTC enabled.
Below is the actual sql statement issued
SET XACT_ABORT ON
BEGIN TRAN
insert into XXX..EUINTGR.UPLOAD_LWP
  ([ALTID], [GRANT_FROM], [GRANT_TO], [NO_OF_DAYS], [LEAVENAME], [LEAVEREASON], [FROMHALFTAG],
   [TOHALFTAG], [UNIT_USER], [UPLOAD_REF_NO], [STATUS], [LOGINID], [AVAILTYPE], [LV_REV_ENTRY])
values ('IS2755', '2010-06-01', '2010-06-01', '.5', 'LWOP', 'PERSONAL', 'F', 'F', 'EUINTGR',
   '20101', 1, 1, 0, 'ENTRY')
rollback TRAN
OLE DB provider "ORAOLEDB.ORACLE" for linked server "XXX" returned message "New transaction cannot enlist in the specified transaction coordinator. ".
Msg 7391, Level 16, State 2, Line 3
The operation could not be performed because OLE DB provider "ORAOLEDB.ORACLE" for linked server "XXX" was unable to begin a distributed transaction.
We are able to execute the above statement successfully without using a transaction. We need to run the statement with a transaction. -
Is SBS 2011 the right choice for replacing Server 2003 Terminal Server?
Reading license options has done my head in so going to ask for recommendations here if SBS is best option, as every option seems to have pros/cons.
Need to upgrade a 2003 Server which has 2 local users and average of 10 remote users (max 15 users). They use remote desktop to connect with everything on a single server (on its last legs now). Server acts as DC, runs SQL Server 2008 R2 Express
(need to upgrade from Express to SQL server Std shortly due to size), Microsoft Office, Myob, and a couple of proprietary applications.
I'm thinking SBS 2011 is the best option, as the premium add-on will include SQL Server licenses, so it's looking the most cost effective?
So the questions;
In an ideal world what would you suggest as a replacement for the current setup? (As is, with apps installed on server and remote users using Remote Desktop, or can SBS 2011 offer better solution?)
Can you install Microsoft Office 2003 under SBS 2011 for all users to use with existing licenses? Cost of upgrading server and O/S will probably rule out upgrading Office as well at the moment (though that's the long term plan).
or.. Best way of running Office and app for SQL Server on local PC's with VPN link? (may have issues with response times for some locations). I gather DirectAccess won't be an option as needs multiple servers and clients are Windows 7, but just
Win 7 Pro, not Enterprise.
Appreciate any advice.... this is for a small business with a few locations that is just getting big enough to require an upgrade, but small enough that funds as always are limited. Realise what I'm asking about maintaining current setup using
Remote Desktop is probably not best practice, but is it most cost effective given the small size?
Hi:
There is no easy answer, and a lot depends on what you do with your email. SBS made email very easy, but without it, the decisions are harder and more costly. O365 and other hosted email systems are low cost of entry but they add up over time.
Pop3 is cheap, but not very robust or convenient.
SQL introduces another wrinkle. SQL express is free and is good for databases up to 10 GB. If you are well within that limit I would do that and not incur the expense of SQL Server. If you need SQL server, the PAO is the best way to
acquire it, but you will need a physical box to install it, unless you do Hyper V as below.
SBS does not allow for remote access to itself to run programs. Only for admin purposes. Full stop.
In an ideal world I would suggest SBS 2011 and Server 2012 running as VMs on one beefy server. Install Server 2012 in the Hyper-V role only, then install SBS 2011 as VM1 and Server 2012 again as VM2. The license for Server 2012 allows 3 installs
of the same license, one as host and two guests. If you need a third server you can install the same copy of Server 2012 again as a second Server 2012 VM. Note that all three installs must be on the same box. This third VM could also be the Server 2008
from the PAO, but iirc the SQL version that comes with the PAO only checks that there is an SBS in the domain, not what OS it is being installed on.
So, for the cost of one pretty beefy server, one copy of SBS 2011, one copy of the Server 2012 and, if needed the PAO and RDS CALs and Office CALs you should be covered.
Don't forget a robust firewall to protect all of this.
Larry Struckmeyer[SBS-MVP] If your question is answered, please mark the response as the answer so that others can benefit. -
Inbound Adapter / 'Sink' failing for Content Server on Subscription Client
I am running into an issue in configuring the Outbound - Inbound Adapters for Connection Server / Subscription Client against Content Server. The details are below. Would sincerely appreciate if anyone can help with this.
I have installed an Oracle UCM setup together with two Content Server instances (a Contribution instance and a Live instance), a Connection Server, and a Subscription Client - all within the same virtual machine on top of Windows XP SP2 and an Oracle 10g database.
I have further configured the Outbound Adapter on the Connection Server and the Inbound Adapter on the Subscription Client. All installation and configuration has been done following the step-by-step process from the relevant installation guides.
We have tested the connectivity between the Connection Server and the Subscription Client for a simple file-based content source and it works fine - i.e., files added to a directory registered as a source on connection server are retrieved and sent to the subscription client and received there successfully / dropped in the specified target client directory.
However, this transfer fails for the Content Server Outbound-Inbound Adapter connectivity. We have an archive in Content Server registered as a "CNSArchive" and exporting successfully to the CNS server. The registered outbound adapter is successfully able to receive these updates, which are visible through the Connection Server interface. The subscription client is also apparently able to receive these updates correctly and writes them to the specified folder/directory on the client; however, it fails to "batch load" these and reports a failure writing to the Content Sink.
What we would like is for the ability to establish end-to-end connectivity between the contribution content server instance and the live content server using the connection server + outbound adapter on the sending end together with the subscription client + inbound adapter together on the receiving end. This is failing.
I have double checked the cns.oracle.config (connection server config), siclone.config (subscription client config), and the source content server's configuration (config.cfg), and all seems to be well. Not really sure therefore what is causing this.
Here is a snippet of the error trace that keeps showing up in the subscription client logs ...
[May 18 21:18:34] VERBOSE: scheduler: waiting to run job ICE Connection in 0:00:33.000 ...
[May 18 21:18:34] VERBOSE: replicator: response: 200 OK for Url:http://192.168.131.65:8891/42/E%3a%2fapps%2foracle%2fucm%2fserver%2fweblayout%2fgroups%2fpublic%2fdocuments%2fucmdocs%2fcpseven.pdf
[May 18 21:18:34] INFO: replicator: opened connection http://192.168.131.65:8891/42/E%3a%2fapps%2foracle%2fucm%2fserver%2fweblayout%2fgroups%2fpublic%2fdocuments%2fucmdocs%2fcpseven.pdf in 0:00:00.130
[May 18 21:18:34] ERROR: replicator: ContentSink reported failure to add item.
[May 18 21:18:34] ERROR: replicator: Telling all sinks to rollback changes
[May 18 21:18:34] ERROR: replicator: crawl failed
[May 18 21:18:34] VERBOSE: scheduler: job completed:ICE Request: 43
Don't know if anyone is still having problems with this, but just in case...
I was getting a similar error. I had an Event Viewer record (it wasn't a Warning or an Error, just an Information) reading:
You are running a version of Microsoft SQL Server 2000 or Microsoft SQL Server 2000 Desktop Engine (also called MSDE) that has known security vulnerabilities when used in conjunction with this version of Windows. To reduce your computer's vulnerability to certain virus attacks, the TCP/IP and UDP network ports of Microsoft SQL Server 2000, MSDE, or both have been disabled. To enable these ports, you must install a patch, or the most recent service pack for Microsoft SQL Server 2000 or MSDE from http://www.microsoft.com/sql/downloads/default.asp
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
I installed SQL Server 2000 SP4 and it corrected the problem. -
Enable logging for FTP server in 10.8
Can't figure out how to enable 10.6-style logging for FTP server in 10.8. It used to be configurable via Server Admin allowing you to choose what to log and so on and the logs would go to /Library/Logs. Now I only see some irrelevant stuff in /var/log/system.log and I want to have all the transfers logged on this server. Tried setting parameters via serveradmin ftp:setting = param but it wouldn't appear in the list afterwards no matter what (just copied those off a working 10.6 server).
As a bonus, I can't figure out where the settings for logging are stored on a 10.6 server either. There's certainly no mention of a file named FTP.transfer.log. Weird!
So to help anyone who finds themselves in the same boat and wants to enable ftpd logging on OS X 10.8 proper:
All operations assume root which is obtained by issuing 'sudo su' or just prefixing every command with 'sudo'.
First you edit the ftpd's launchd plist file:
pico /Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/com.apple.ftpserver.plist
Find the section titled 'ProgramArguments'. Amend it so it looks like this:
<key>ProgramArguments</key>
<array>
<string>ftpd</string>
<string>-ll</string>
<string>-r</string>
<string>-n</string>
<string>-d</string>
<string>-c</string>
<string>/Library/Server/FTP/Config</string>
<string>-L</string>
<string>/var/log/ftpd.log</string>
</array>
Second you edit /etc/asl.conf. This is necessary to make '-d' (debug) switch work since by default syslog won't log any debug-level messages at all.
pico /etc/asl.conf
At the end of the file add the following:
# ftpd verbose logging
? [= Facility ftp] [<= Level debug] file /var/log/ftpdv.log
Third important step is to reboot the server. ftpd's plist configuration is cached somewhere so restarting just the ftp server alone won't work.
Now what the added switches do:
-ll enables some logging (PUT and GET commands only, despite what the man page leads you to believe) to a file specified later by -L /path/to/file
-d enables debug mode which logs everything - including server's response. May be too verbose but at least you get a log of every single command issued to the server by your clients.
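After the reboot, a quick way to confirm the setup is to make an FTP connection and watch both of the log files configured above:
tail -f /var/log/ftpd.log /var/log/ftpdv.log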