Nexus 1000V: performance on vNIC for VXLAN traffic: massive drops over 340,000 pps
Hi,
over the past few days we have run a lot of performance tests (in terms of pps) in our virtualized data center, based on ESXi 5.5, Nexus 1000V (SV2.2), vSphere 5.5, UCS B200 blades, and UCS Manager 2.0.4a, using VXLAN and VLAN traffic.
After tuning:
the RX ring buffer on the Ethernet adapter (ethtool -G, rx 4096) on the guest machines
the RX buffer and RSS in the UCS Ethernet adapter policy, as mentioned in https://supportforums.cisco.com/discussion/11646156/esx-vmnic-receive-discards-ucs (after a reboot of the UCS servers)
the drops seemed to be reduced, so we tried to stress the blade with 300,000 pps and more (not that much if you compare it to a hardware device).
We sent 600,000 pps from one VEM (originating from two VMs, max 300,000 pps each, in the same VXLAN) toward another VEM (two other VMs behind it, in the same VXLAN, receiving the packets), passing through the Nexus 1000V uplink on a vNIC dedicated to VXLAN. This traffic was VXLAN encapsulated/decapsulated.
The receiving vNIC (dedicated to VXLAN) experienced a lot of RX drops (40%).
On ESXi we used the ethtool -S command to look at the statistics, and rx_no_bufs kept increasing!
Using esxtop (network view), we see the transmitting blade/VEM at 550,000 pps in TXpps on the VXLAN vNIC, but the receiving VEM's RXpps stops at 348,000!
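As a quick sanity check on those esxtop counters, the implied receive-side loss can be computed as:

```python
# Counters taken from the esxtop figures described above
tx_pps = 550_000  # TXpps on the transmitting VEM's VXLAN vNIC
rx_pps = 348_000  # RXpps where the receiving VEM tops out
loss = (tx_pps - rx_pps) / tx_pps
print(f"receive-side loss: {loss:.1%}")
```

That is roughly 37%, which lines up with the ~40% RX drop seen on the receiving VXLAN vNIC.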
We repeated the same test using the VLAN uplink (not the VXLAN one), with the VMs transmitting and receiving over a VLAN instead of a VXLAN.
In this test, 600,000 pps were forwarded between the two VEMs with no problem. At 750,000 pps we saw a small drop in %DRPTX (from esxtop) on the blade transmitting the traffic, but nothing as worrying as the VXLAN drops.
Thanks
Federica
Similar Messages
-
Speed - Massive drop over the last 4 days - Ongoin...
Hi,
Since the middle of the week my broadband connection speed has become unbearable! Here's a snapshot of a recent speed test:
Normally my connection isn't the best and averages around 1.5-2.0 Mb, however it's now like having dial-up again!
Line state: Connected
Connection time: 0 days, 0:16:44
Downstream: 1,312 Kbps
Upstream: 448 Kbps
VPI/VCI: 0/38
Type: PPPoA
Modulation: ITU-T G.992.1
Latency type: Interleaved
Noise margin (Down/Up): 21.9 dB / 11.0 dB
Line attenuation (Down/Up): 41.0 dB / 22.5 dB
Output power (Down/Up): 18.3 dBm / 12.5 dBm
Loss of Framing (Local): 65
Loss of Signal (Local): 33
Loss of Power (Local): 0
FEC Errors (Down/Up): 0 / 0
CRC Errors (Down/Up): 4 / 2147480000
HEC Errors (Down/Up): nil / 0
Error Seconds (Local): 45
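Incidentally, the upstream CRC figure above sits only a few thousand below the 32-bit signed-counter limit, which usually points to a counter wrap or reporting glitch rather than two billion genuine errors. A quick check:

```python
crc_up = 2_147_480_000   # upstream CRC error count from the stats above
int32_max = 2**31 - 1    # 2,147,483,647, the 32-bit signed counter limit
gap = int32_max - crc_up
print(f"counter is {gap} short of the 32-bit limit")
```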
I've been on the phone to the helpline (which was a mission, seeing as I had already tried using the master socket, changing filters, etc.), yet trying to explain that I had already done that was mission impossible!
The first time I called, they said to wait 4 hours, as they were doing something to the line; then over the next 7-10 days my speed would go up/down, similar to what it does when you first join.
After about 8/9 hours I tried loading a webpage and the speed was actually slightly worse! I called again, as the first person made out that my internet would be usable again that night. The second person I spoke to said that it was a problem affecting my whole dialling code! Which is a lie, as my neighbour's BT Broadband seems to be fine!
They have said that it could take 10 days to settle down, however it's still just as dire. Will it actually improve, or does something else need to be done by BT?
It would be great if BT could deliver a basic broadband service to ALL its customers before rolling out ever better improvements!
Any help would be greatly appreciated!
Do you have phone extension sockets in your home? If so, you should connect to the test socket if possible, as this will eliminate any noise problems caused by your internal wiring. You need to leave it connected 24/7 with no resets.
With an attenuation of 41 dB and a good line you should get 5/6 Mb:
http://www.kitz.co.uk/adsl/max_speed_calc.php
also check this post for some suggestions
http://community.bt.com/t5/BB-in-Home/Poor-Broadband-speed/m-p/14217#M8397
-
Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster
We are in the process of setting up a two node 2012 R2 Hyper-V Cluster and will be using the Replica feature to make copies of some of the hosted VM's to an off-site, standalone Hyper-V server.
We have planned to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
Cheers for now
Russell
Sam,
Thanks for the prompt response, presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)
Cheers for now
Russell
Yep.
In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent/dynamic/active-active), on top of which we provision vNICs for the parent partition (host OS), as well as guest VMs.
Sam Boutros, Senior Consultant, Software Logic, KOP, PA -
Ignoring TCP handshake & Sequence Numbers for STT Traffic
Hi,
I have to pass STT traffic through a Cisco ASA (details on STT are here http://tools.ietf.org/html/draft-davie-stt).
STT traffic looks like TCP traffic (i.e. it uses IP protocol 6 and is sent to a specific destination port) but is stateless. It doesn't perform a TCP handshake; TCP flags are used differently, and the same goes for sequence numbers.
Is there any way to disable the regular TCP handshake and sequence number checks? I saw that there might be a way to handle the handshake side with the embryonic connection limit, but I'm not sure about the sequence numbers.
Assume ASA 8.6.
Thanks,
Ben
Hi,
You can configure TCP state bypass for this traffic only; for the rest, the firewall will still check the TCP state of the packets. Here is the doc:
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a0080b2d922.shtml
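A minimal sketch of what that configuration looks like, assuming the IANA-assigned STT port 7471 and placeholder names (adapt the ACL, class/policy names, and interface to your deployment):

```
! Match STT traffic (TCP to port 7471) and bypass TCP state tracking for it
access-list STT_ACL extended permit tcp any any eq 7471
class-map STT_CLASS
 match access-list STT_ACL
policy-map STT_POLICY
 class STT_CLASS
  set connection advanced-options tcp-state-bypass
service-policy STT_POLICY interface inside
```

TCP state bypass disables both the handshake/state checks and sequence-number tracking for the matched class, which is what the stateless STT flows need.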
Hope that helps.
Thanks,
Varun Rao
Security Team,
Cisco TAC -
Perform VENDOR EVALUATION for MORE THAN ONE VENDOR at a time
Hello all,
Please guide me to any process where I can perform Vendor Evaluation for MORE THAN ONE vendor AT A TIME.
At my location there are around a thousand vendors to be evaluated, and it is difficult to perform the evaluation process one by one.
(ME61/ME62/ME63)
Detailed replies with various possibilities would be highly appreciated.
Thanks & Regards,
Joy Ghosh
Vendor evaluation for some thousand vendors at the same time has been in SAP since long before LSMW was developed. The purpose of LSMW is to load data from a legacy system; of course you can (mis-)use it for a lot of other things.
But you should not always use LSMW if you are too lazy to go through the SAP standard menu to find a transaction like ME6G.
There you define a job that runs the RM06LBAT report.
You first have to define a selection variant for this report. This can be done in SE38 by entering the report name, selecting Variant, clicking Display, then entering a name for the variant and clicking Create. -
Increase Performance and ROI for SQL Server Environments
May 2015
Explore
The Buzz from Microsoft Ignite 2015
NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
Hot topics at the NetApp booth included:
OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
These tools give you greater flexibility for managing and protecting important business applications.
Chris Lemmons
Director, EIS Technical Marketing, NetApp
If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
Source: NetApp, 2015
Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
Test Methodology
To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
Table 1) Components used in testing.
Test Configuration Components
Details
SQL Server 2014 servers
Fujitsu RX300
Server operating system
Microsoft Windows 2012 R2 Standard Edition
SQL Server database version
Microsoft SQL Server 2014 Enterprise Edition
Processors per server
2 x 6-core Xeon E5-2630 at 2.30 GHz
Fibre channel network
8Gb FC with multipathing
Storage controller
AFF8080 EX
Data ONTAP version
Clustered Data ONTAP® 8.3.1
Drive number and type
48 SSD
Source: NetApp, 2015
The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
Source: NetApp, 2015
In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
The All Flash FAS system still had additional headroom under this load.
Calculating the Savings
Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
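As a rough sketch of the licensing arithmetic behind that claim (per-core license prices are not given in the article, so only the core counts are computed):

```python
cores_per_server = 2 * 6              # two 6-core Xeon E5-2630 per server (Table 1)
servers_before, servers_after = 10, 5 # consolidation described above
cores_before = servers_before * cores_per_server
cores_after = servers_after * cores_per_server
saving = 1 - cores_after / cores_before
print(f"licensed cores: {cores_before} -> {cores_after} ({saving:.0%} reduction)")
```

Halving the server count halves the licensed core count, which is where the 50% licensing reduction comes from under per-core licensing.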
Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
Value
Analysis Results
ROI
65%
Net present value (NPV)
$950,000
Payback period
six months
Total cost reduction
More than $1 million saved over a 3-year analysis period compared to the legacy storage system
Savings on power, space, and administration
$40,000
Additional savings due to nondisruptive operations benefits (not included in ROI)
$90,000
Source: NetApp, 2015
The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
Maximum SQL Server 2014 Performance
In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
Data Reduction and Storage Efficiency
In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
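A compression ratio translates into capacity savings as 1 - 1/ratio; a small sketch using the ratios above:

```python
def space_saved(ratio: float) -> float:
    """Fraction of raw capacity saved at a given compression ratio (1.5 means 1.5:1)."""
    return 1 - 1 / ratio

# The two ratios measured in the tests described above
print(f"1.5:1 -> {space_saved(1.5):.0%} saved, 1.8:1 -> {space_saved(1.8):.0%} saved")
```

So the measured 1.5:1 and 1.8:1 ratios correspond to roughly one-third and a bit under half of the raw capacity saved, respectively.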
Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
A Better Way to Run Enterprise Applications
The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
Quick Links
Tech OnTap Community
Archive
PDF -
Can't add a new vNIC for service profile
Hello,
I'm trying to add an additional vNIC for Service Profile.
B200-M3, VIC1240+expander, IOM 2208XP, 4 links between IOM and FI 6248, firmware 2.2(1f).
At the moment 7 vNICs are configured for this service profile. Service profile is associated with blade server.
When I try to add a new vNIC (it will be 8th vNIC), I see the error (see the attachment).
For some reasons separate vNICs are more useful for Hyper-V servers :)
All vNICs are created via vNIC templates. All vNICs are configured as VMQ; only iSCSI is configured as Dynamic.
As I read the spec, the VIC 1240 can support up to 250 vNICs. What is wrong? How can I fix it?
I have seen the VMQ limit being reached when applying the VMQ policy to the vNIC template. Can you try not selecting the VMQ policy for these new vNICs? If this works, you might be reaching the VMQ limits as described below:
Number of VMQs:
128 is the maximum number per vNIC
256 is the maximum number per blade -
Administration port - network channel for admin traffic
I am trying to configure a separate channel for administration traffic on WebLogic. I followed the Oracle docs and configured SSL, a domain-wide admin port, the server listen addresses, and an 'admin' channel.
The issue is that admin traffic is not happening through the newly created channel.
The L2 network is not getting used. I can't see any activity in the monitoring tab of the new channel. Also, netstat shows that ports 9101/9102 are being used on 192.168.100.218 and not on 10.254.252.849.
I also tried setting the newly created channel's weight to 51, but no luck.
Is JMX connectivity related to the admin channel?
Any help is highly appreciated. Thanks.
Ipconfig:
Admin: adminserver701.mycompany.internal, 192.168.100.238, 10.254.252.808
Managed: appserver701.mycompany.internal, :192.168.100.218, 10.254.252.849
Domain wide admin port: 9101
Admin:
Listen address –> adminserver701.mycompany.internal
Channel –> admin -> 10.254.252.808/9101
Startup -> -Dweblogic.admin.ListenAddress=admin://10.254.252.808:9101
Managed:(appserver701)
Listen address –> appserver701.mycompany.internal
Admin port override: 9102
Channel –> admin -> 10.254.252.849/9102
Startup -> -Dweblogic.admin.ListenAddress=admin://10.254.252.849:9102
AdminServer Logs:
####<Feb 18, 2013 1:53:33 PM EST> <Info> <JMX> <adminserver701.mycompany.internal> <soa_as> <[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361159613346> <BEA-149512> <JMX Connector Server started at service:jmx:iiop://adminserver701.mycompany.internal:9101/jndi/weblogic.management.mbeanservers.runtime .>
####<Feb 18, 2013 1:53:33 PM EST> <Info> <JMX> <adminserver701.mycompany.internal> <soa_as> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361159613353> <BEA-149512> <JMX Connector Server started at service:jmx:iiop://adminserver701.mycompany.internal:9101/jndi/weblogic.management.mbeanservers.edit .>
####<Feb 18, 2013 1:53:33 PM EST> <Info> <JMX> <adminserver701.mycompany.internal> <soa_as> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361159613367> <BEA-149512> <JMX Connector Server started at service:jmx:iiop://adminserver701.mycompany.internal:9101/jndi/weblogic.management.mbeanservers.domainruntime .>
####<Feb 18, 2013 1:53:36 PM EST> <Notice> <Server> <adminserver701.mycompany.internal> <soa_as> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361159616699> <BEA-002613> <Channel "DefaultAdministration" is now listening on 192.168.100.238:9101 for protocols admin, ldaps, https.>
####<Feb 18, 2013 1:53:36 PM EST> <Notice> <Server> <adminserver701.mycompany.internal> <soa_as> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361159616700> <BEA-002613> <Channel "Channel-0" is now listening on 10.254.252.808:9101 for protocols admin, ldaps, https.>
####<Feb 18, 2013 1:55:12 PM EST> <Notice> <Server> <adminserver701.mycompany.internal> <soa_as> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <cd259038c7dcf5a8:-26ac3ba0:13ceb6f767d:-8000-000000000000001a> <1361159712920> <BEA-002613> <Channel "Default" is now listening on 192.168.100.238:7001 for protocols iiop, t3, ldap, snmp, http.>
####<Feb 18, 2013 1:55:12 PM EST> <Notice> <Server> <adminserver701.mycompany.internal> <soa_as> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <cd259038c7dcf5a8:-26ac3ba0:13ceb6f767d:-8000-000000000000001a> <1361159712920> <BEA-002613> <Channel "DefaultSecure" is now listening on 192.168.100.238:7002 for protocols iiops, t3s, ldaps, https.>
ManagedServer Logs:
####<Feb 18, 2013 2:54:19 PM EST> <Info> <JMX> <appserver701.mycompany.internal> <adp_ms01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361163259911> <BEA-149512> <JMX Connector Server started at service:jmx:iiop://appserver701.mycompany.internal:9102/jndi/weblogic.management.mbeanservers.runtime .>
####<Feb 18, 2013 2:54:20 PM EST> <Notice> <Server> <appserver701.mycompany.internal> <adp_ms01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361163260350> <BEA-002613> <Channel "Channel-0" is now listening on 10.254.252.849:9102 for protocols admin, CLUSTER-BROADCAST-SECURE, ldaps, https.>
####<Feb 18, 2013 2:54:20 PM EST> <Notice> <Server> <appserver701.mycompany.internal> <adp_ms01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1361163260350> <BEA-002613> <Channel "DefaultAdministration" is now listening on 192.168.100.218:9102 for protocols admin, CLUSTER-BROADCAST-SECURE, ldaps, https.>
####<Feb 18, 2013 2:54:58 PM EST> <Notice> <Server> <appserver701.mycompany.internal> <adp_ms01> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <d3208ed6c2482016:-46ac5fed:13ceba69a8e:-7ffe-000000000000000e> <1361163298045> <BEA-002613> <Channel "DefaultSecure" is now listening on 192.168.100.218:7102 for protocols iiops, t3s, CLUSTER-BROADCAST-SECURE, ldaps, https.>
####<Feb 18, 2013 2:54:58 PM EST> <Notice> <Server> <appserver701.mycompany.internal> <adp_ms01> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <d3208ed6c2482016:-46ac5fed:13ceba69a8e:-7ffe-000000000000000e> <1361163298045> <BEA-002613> <Channel "Default" is now listening on 192.168.100.218:7101 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.>
AdminServer logs update while starting managed:
####<Feb 18, 2013 2:54:57 PM EST> <Info> <JMX> <adminserver701.mycompany.internal> <soa_as> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <cd259038c7dcf5a8:-26ac3ba0:13ceb6f767d:-8000-0000000000000162> <1361163297488> <BEA-149506> <Established JMX Connectivity with adp_ms01 at the JMX Service URL of service:jmx:admin://appserver701.mycompany.internal:9102/jndi/weblogic.management.mbeanservers.runtime.>
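As a side note, the channel-to-address bindings can be pulled straight out of log excerpts like these; a minimal shell sketch, assuming the fixed wording of the BEA-002613 message (the sample line is abbreviated from the AdminServer log above):

```shell
# Extract "channel-name listen-address" from a BEA-002613 log line.
line='<BEA-002613> <Channel "Channel-0" is now listening on 10.254.252.808:9101 for protocols admin, ldaps, https.>'
binding=$(printf '%s\n' "$line" |
  sed -n 's/.*Channel "\([^"]*\)" is now listening on \([^ ]*\) .*/\1 \2/p')
printf '%s\n' "$binding"   # Channel-0 10.254.252.808:9101
```

Run against the full log file, this gives a quick map of which channel ended up bound to which interface/port.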
Admin Server :
[oracle@adminserver701 bin]$ netstat -an | grep 9101
tcp 0 0 10.254.252.808:9101 0.0.0.0:* LISTEN
tcp 0 0 192.168.100.238:9101 0.0.0.0:* LISTEN
tcp 0 0 192.168.100.238:9101 192.168.100.218:59038 ESTABLISHED
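The netstat output can be reduced to just the addresses that are actually listening; a minimal awk sketch over the rows quoted above (the sample text is pasted in rather than taken from a live system):

```shell
# Keep only LISTEN rows and print the local-address part of column 4.
netstat_out='tcp 0 0 10.254.252.808:9101 0.0.0.0:* LISTEN
tcp 0 0 192.168.100.238:9101 0.0.0.0:* LISTEN
tcp 0 0 192.168.100.238:9101 192.168.100.218:59038 ESTABLISHED'
listeners=$(printf '%s\n' "$netstat_out" |
  awk '$6 == "LISTEN" { split($4, a, ":"); print a[1] }')
printf '%s\n' "$listeners"
```

This shows the port is bound on both 10.254.252.808 and 192.168.100.238, while the one ESTABLISHED JMX connection came in via 192.168.100.238.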
I am wondering if the JMX connectivity is using the server listen address (adminserver701.mycompany.internal), which by default resolves to 192.168.100.238. Is there a way to force JMX to use 10.254.252.808?
Hi,
For the first question, the answer is no. With the administration port you enable SSL between the admin server and the Node Manager/managed servers; you can still use the web console.
For the second question, you can use ANT or the WLS scripting tool; you can find more details on dev2dev.bea.com.
Jin -
Performance Tuning Certification for Application Developer
Hi,
Can you please advise if there is any Oracle Performance Tuning certification for an Application Developer and Oracle 9i to 10G migration certification? If yes, can you please let me know its Oracle examination number?
I have already passed 1Z0-007 and 1Z0-147 in Oct 2008.
Thanks in advance,
Sandeep Kumar
... if there is any Oracle Performance Tuning certification for an Application Developer ...
There is no performance-tuning certification for application developers. There is one for the DBA track.
Also -- to answer your question in the other thread:
So, to clear this certification, do I need to re-appear for all 3 exams as they were given in 2008, or is just 1Z0-146 enough?
Having passed 1Z0-007 and 1Z0-147, if you pass 1Z0-146 you will gain the certification: Oracle Advanced PL/SQL Developer Certified Professional -
Performance Tuning Guidelines for Windows Server 2008 R2 mistake?
Hi all,
I'm reading the "Performance Tuning Guidelines for Windows Server 2008 R2" and I think there is some kind of error. On page 53 there is a sub-chapter, "I/O Priorities", that explains how to handle I/O priorities.
In the guide it is said that in the registry I have to have a key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceClasses\{Device GUID}\DeviceParameters\Classpnp\
where {Device GUID}, I think, is the value I get in Device Manager under one of my disks -> Properties -> Details -> "Device Class GUID".
The trouble is that under the key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceClasses\ I have no subkey matching my disk's device class GUID! Maybe I'm drunk, but I've checked every subkey under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceClasses\
and none of them has a subkey named "DeviceParameters"!
So, am I wrong?
Hi Andrea,
To find the "Device Parameters" key, please try the steps below:
1. Find this registry key and note the DeviceInstance value:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\DeviceClasses\
2. Find the device instance registry key under "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum", which contains information about the devices on the system, and get the device interface GUID:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USB\<hardware id>\<instance id>\Device Parameters
I hope this helps. -
DPM encountered an error while performing an operation for \\?\Volume{fc1f0549-9865-4349-b4df-8596a19ad7fe}\FolderPath\Work\ on FileServer (ID 2033 Details: The system cannot find the path specified (0x80070003))
Can I know what this means? It fails the entire job.
The backup was halfway through; it had already taken about 3 TB of data when it failed.
Hi Oneoa,
Please verify the antivirus exceptions. Look at these two links for more information:
http://technet.microsoft.com/en-us/library/ff399439.aspx
http://support.microsoft.com/kb/928840
I hope this solves your problem, please post your progress so I may assist you further.
Best Regards
Robert Hedblom
MVP DPM
Check out my DPM blog @ http://robertanddpm.blogspot.com -
Perform an export for both ABAP & JAVA in one shot with WAS 640 SAPinst ...
Hello,
I have to perform an OS/DB migration of SAP XI 3.0 (WAS 6.40 SR1), an ABAP + Java add-in configuration.
I would like to know if SAPinst for WAS 6.40 (and SR1) can perform an export for both the ABAP and Java stacks in one shot, as seems possible with the NW 7.0 SAPinst version. None of the WAS 6.40 OS/DB migration guides mentions this possibility.
If not, could you confirm that I have to perform first an export (with R3load) of the ABAP stack, then an export (with JLOAD) of the Java one?
And then apply the same approach for the installation, with the R3load and JLOAD export files?
Thanks in advance,
Bernard Accoce
SAP Basis Consultant - TeamWork in Toulouse (France)
Hi Bernard,
No, both exports cannot be executed in one single step. You have to execute the procedure twice, once for ABAP and once for Java, as you already mentioned yourself.
The same applies for the import.
Kind regards,
Mark -
Dear SAP Gurus,
We are implementing TREX version 7.10.50 for Talent Management ECC 6.0 - EHP 5.
I'd like to ask you a question regarding the ESH_ADM_INDEX_ALL_SC program,
which is used to create search connectors for TREX and perform initial indexing for all search connectors.
As we know, we can perform indexing using the ESH_COCKPIT transaction code or ESH_ADM_INDEX_ALL_SC.
If I perform indexing using ESH_COCKPIT, all search connectors are indexed (the "searchable" column is checked and the status changes to "Active" for all search connectors).
However, if I perform indexing using ESH_ADM_INDEX_ALL_SC, not all search connectors are indexed.
I've traced the ESH_ADM_INDEX_ALL_SC program using the ST01 transaction code and found these errors:
- rscpe__error 32 at rscpu86r.c(6;742) "dest buffer overflow" (,)
- rscpe__error 32 at rscpc (20;12129) "convert output buffer overflow"
- rscpe__error 128 at rstss01 (1;178) "Object not found"
Please kindly help me to solve this issue,
Thank you very much
Regards,
Bobbi
Hi Luke,
Please find below the connectors and their status after running ESH_ADM_INDEX_ALL_SC:
HRTMC AES Documents Prepared
HRTMC AES Elements Prepared
HRTMC AES Templates Prepared
HRTMC Central Person Prepared
HRTMC Functional Area Prepared
HRTMC Job Prepared
HRTMC Job Family Prepared
HRTMC Org Unit Prepared
HRTMC Person Active
HRTMC Position Prepared
HRTMC Qualification Active
HRTMC Relation C JF 450 Active
HRTMC Relation C Q 031 Active
HRTMC Relation CP JF 744 Active
HRTMC Relation CP P 209 Active
HRTMC Relation CP Q 032 Active
HRTMC Relation CP TB 743 Active
HRTMC Relation FN Q 031 Active
HRTMC Relation JF FN 450 Active
HRTMC Relation JF Q 031 Active
HRTMC Relation P Q 032 Active
HRTMC Relation S C 007 Active
HRTMC Relation S CP 740 Active
HRTMC Relation S JF 450 Active
HRTMC Relation S O 003 Active
HRTMC Relation S O Area of Responsibility Active
HRTMC Relation S P 008 Active
HRTMC Relation S Q 031 Active
HRTMC Relation S S Manager Active
HRTMC Relation SC JF FN Active
HRTMC Structural authority Active
HRTMC Talent Group Prepared
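For lists like this, the connectors stuck in "Prepared" can be filtered out mechanically; a hypothetical awk sketch, using three sample rows copied from the list above in place of the full output:

```shell
# Print only connectors whose last column is not "Active",
# stripping the status column to leave just the connector name.
connectors='HRTMC AES Documents Prepared
HRTMC Person Active
HRTMC Talent Group Prepared'
stuck=$(printf '%s\n' "$connectors" |
  awk '$NF != "Active" { sub(/ [^ ]*$/, ""); print }')
printf '%s\n' "$stuck"
```

This quickly narrows the investigation to the seven connectors that never moved past "Prepared".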
As suggested by OSS, we implemented SAP Note 1058533.
We kindly need your help.
Thank you very much
Regards
Bobbi -
Using Designer to perform reverse engineering for Adabas entities
Hi Experts,
The customer will migrate from Adabas to an Oracle database. Is it possible to use Designer to reverse-engineer the Adabas entities?
Thanks for your help in advance.
Queenie
If there is an ODBC driver for Adabas, it MAY be possible, though I have never had an Adabas database to try it on. I know that Adabas isn't natively a relational database using SQL, which is what Designer's Design Capture utility expects, so it will work through ODBC or not at all.
-
I have v10.6.8 and my iMac performs slowly; it spins the beachball for very long periods and feels like I'm back on dial-up. What can I do to speed things up?
How large is your HD, and how much free space do you have left?
Check out the following & do the necessary:
User Tip: Why is my computer slow?
What to do when your computer is too slow
Speeding up your Mac