RAC Interconnect Transfer rate vs NIC's Bandwidth
Hi Guru,
I need some clarification on RAC interconnect terminology: the difference between "private interconnect transfer rate" and "NIC bandwidth".
We have 11gR2 RAC with multiple databases.
So we need to find out what the current resource status is.
We have two physical NICs in each node: 8 Gb is for the public network and 2 Gb is for the private network (interconnect).
Technically, that gives us 4 Gb of private network bandwidth.
If I look at the "Private Interconnect Transfer rate" through OEM or IPTraf (a Linux tool), it shows 20-30 MB/s.
There is no issue at all at this moment.
Please correct me if I am wrong.
The transfer rate should be fine up to 500 MB/s or 1 GB/s, because the current NIC capacity is 4 Gb. Does that make sense?
I'm sure there are multiple things to consider, but I'm kind of stumped on the whole transfer rate vs. bandwidth question. Is there any way to calculate what a typical transfer rate would be?
Or, how do I say our interconnect is good enough, based on the transfer rate?
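As a back-of-the-envelope sanity check (the 4 Gbit capacity and 20-30 MB/s observation are the figures above; line rate is a theoretical ceiling, so real interconnect traffic will peak lower), the conversion from NIC bandwidth to a maximum transfer rate can be sketched in shell arithmetic:

```shell
# Convert a NIC line rate in Gbit/s to a theoretical ceiling in MB/s,
# then see what fraction of it the observed transfer rate uses.
link_gbit=4        # private interconnect capacity (Gbit/s)
observed_mb=30     # transfer rate reported by OEM / IPTraf (MB/s)

max_mb=$(( link_gbit * 1000 / 8 ))      # 1000 Mbit per Gbit, 8 bits per byte
pct=$(( observed_mb * 100 / max_mb ))   # observed load as a percentage

echo "theoretical ceiling: ${max_mb} MB/s"
echo "observed: ${observed_mb} MB/s (~${pct}% of capacity)"
```

On those numbers a 4 Gbit interconnect tops out around 500 MB/s, so 20-30 MB/s is only a few percent of capacity, while a sustained 1 GB/s would exceed it.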
Another question is ....
In our case, how do I set up the warning threshold and critical threshold for "Private Interconnect Transfer rate" in OEM?
Any comments will be appreciated.
Please advise.
Interconnect performance sways more to latency than bandwidth, IMO. In simplistic terms, memory is shared across the interconnect. What is important for accessing memory: the size of the pipe, or the speed of the pipe?
A very fast small pipe will typically perform significantly better than a large and slower pipe.
Even the size of the pipe is not that straightforward. The standard IP MTU size is 1500 bytes. You can run jumbo and super-jumbo frame MTU sizes on the interconnect, where, for example, an MTU of 65 KB is significantly larger than a 1500-byte MTU. That means significantly more data can be transferred over the interconnect at a much reduced overhead.
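To illustrate on Linux (eth1 here is a placeholder for your private interconnect NIC; verify that every NIC and switch port on the interconnect path supports the same frame size before changing anything), checking and raising the MTU looks roughly like:

```shell
# Show the current MTU on the private interconnect NIC
ip link show eth1 | grep -o 'mtu [0-9]*'

# Raise it to the common 9000-byte jumbo size; this must match the
# switch-port configuration on every hop of the interconnect
ip link set dev eth1 mtu 9000
```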
Personally, I would not consider Ethernet (GigE included) for the Interconnect. Infiniband is faster, more scalable, and offers an actual growth path to 128Gb/s and higher.
Oracle also uses Infiniband (QDR/40Gb) for their Exadata Database Machine product's Interconnect. Infiniband also enables one to run Oracle Interconnect over RDS instead of UDP. I've seen Oracle reports to the OFED committee saying that using RDS in comparison with UDP, reduced CPU utilisation by 50% and decreased latency by 50%.
I also do not see the logic of having a faster public network and a slower Interconnect.
IMO there are two fundamental components in RAC that determine the speed and performance achievable: the speed, performance, and scalability of the I/O fabric layer and of the interconnect layer.
And Exadata btw uses Infiniband for both these critical layers. Not fibre. Not GigE.
Similar Messages
-
RAC, Interconnect and CloudControl (a nice to have problem :-) )
Hi,
someone changed the network mask for one interconnect interface (eth3). eth3 was up and running; the interconnect IP address moved to the second interconnect interface, here eth1. Everything kept working fine, and after the network was changed back, the eth3 interface got its interconnect IP address back and all is fine. But there was no incident in Cloud Control, no warning that something was going wrong. A user-defined metric on GV_$CLUSTER_INTERCONNECTS is no solution; the view shows old values. cluvfy is useful in this case.
best regards
Thomas
Hi,
I think there are two options, cluvfy and oifcfg, to check the network configuration.
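For example (assuming the Grid Infrastructure tools are on the PATH; these are standard commands, but verify the exact syntax against your version's documentation):

```shell
# List the interfaces registered with Clusterware and their roles
# (public vs. cluster_interconnect)
oifcfg getif

# Verify node connectivity across all cluster nodes, including the
# private interconnect subnet
cluvfy comp nodecon -n all -verbose
```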
regards -
Teamed NICs for RAC interconnect
Hi there,
We have an Oracle 10g RAC with two nodes. There is only one NIC for the RAC interconnect in each server.
Now we want to add one redundant NIC to each server for the RAC interconnect as well.
Could you please point me to some documents about teamed NICs for the RAC interconnect?
Your help is greatly appreciated!
Thanks,
Scott
Search around for NIC bonding. The exact process will depend on your OS.
For Linux, see Metalink note 298891.1 - Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver
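As a rough sketch of the kind of setup that note describes (all device names and the IP address below are placeholders; follow the note itself for your exact OS release), an active-backup bond of two private NICs looks like:

```shell
# /etc/modprobe.conf -- load the bonding driver in active-backup mode,
# checking link state every 100 ms
alias bond0 bonding
options bonding mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
# carries the private interconnect IP (address is a placeholder)
DEVICE=bond0
IPADDR=192.168.100.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- enslave the physical
# NIC (create the equivalent ifcfg-eth2 for the second NIC)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Active-backup (mode 1) gives redundancy without requiring switch support; other modes can aggregate bandwidth but need matching switch configuration.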
Regards,
Greg Rahn
http://structureddata.org -
Oracle RAC Interconnect, PowerVM VLANs, and the Limit of 20
Hello,
Our company has a requirement to build a multitude of Oracle RAC clusters on AIX using Power VM on 770s and 795 hardware.
We presently have 802.1q trunking configured on our Virtual I/O Servers and have currently consumed 12 of the 20 allowed VLANs for a virtual Ethernet adapter. We have read the Oracle RAC FAQ on Oracle Metalink, and it seems to discourage sharing these interconnect VLANs between different clusters. This puts us in a scalability bind: IBM limits VLANs to 20, and Oracle says there is a one-to-one relationship between VLANs, subnets, and RAC clusters. We must assume we have a fixed number of network interfaces available and that we absolutely have to leverage virtualized network hardware in order to build these environments. "Add more network adapters to VIO" isn't an acceptable solution for us.
Does anyone know if Oracle can afford any flexibility which would allow us to host multiple Oracle RAC interconnects on the same 802.1q trunk VLAN? We will independently guarantee the bandwidth, latency, and redundancy requirements are met for proper Oracle RAC performance, however we don't want a design "flaw" to cause us supportability issues in the future.
We'd like it very much if we could have a bunch of two-node clusters all sharing the same private interconnect. For example:
Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
Cluster 2, node 1: 192.168.16.4 / 255.255.255.0 / VLAN 16
Cluster 2, node 2: 192.168.16.5 / 255.255.255.0 / VLAN 16
Cluster 3, node 1: 192.168.16.6 / 255.255.255.0 / VLAN 16
Cluster 3, node 2: 192.168.16.7 / 255.255.255.0 / VLAN 16
Cluster 4, node 1: 192.168.16.8 / 255.255.255.0 / VLAN 16
Cluster 4, node 2: 192.168.16.9 / 255.255.255.0 / VLAN 16
etc.
Whereas the concern is that Oracle Corp will only support us if we do this:
Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
Cluster 2, node 1: 192.168.17.2 / 255.255.255.0 / VLAN 17
Cluster 2, node 2: 192.168.17.3 / 255.255.255.0 / VLAN 17
Cluster 3, node 1: 192.168.18.2 / 255.255.255.0 / VLAN 18
Cluster 3, node 2: 192.168.18.3 / 255.255.255.0 / VLAN 18
Cluster 4, node 1: 192.168.19.2 / 255.255.255.0 / VLAN 19
Cluster 4, node 2: 192.168.19.3 / 255.255.255.0 / VLAN 19
Which eats one VLAN per RAC cluster.
Thank you for your answer!
I think I roughly understand the argument behind a 2-node RAC and a 3-node or greater RAC. We, unfortunately, were provided with two physical pieces of hardware to virtualize to support production (and two more to support non-production) and as a result we really have no place to host a third RAC node without placing it within the same "failure domain" (I hate that term) as one of the other nodes.
My role is primarily as a system engineer, and, generally speaking, our main goals are eliminating single points of failure. We may be misusing 2-node RACs to eliminate single points of failure since it seems to violate the real intentions behind RAC, which is used more appropriately to scale wide to many nodes. Unfortunately, we've scaled out to only two nodes, and opted to scale these two nodes up, making them huge with many CPUs and lots of memory.
Other options, notably the active-passive failover cluster we have in HACMP or PowerHA on the AIX / IBM Power platform, are unattractive because the standby node does no useful work yet must consume CPU and memory resources so that it is prepared for a failover of the primary node. We use HACMP / PowerHA with Oracle and it works nicely; however, Oracle RAC, even in a two-node configuration, drives load on both nodes, unlike an active-passive clustering technology.
All that aside, I am posing the question to both IBM and our Oracle DBAs (who will ask Oracle Support). Typically the answers we get vary widely depending on the experience and skill level of the support personnel we reach on both the Oracle and IBM sides... so on a suggestion from a colleague (Hi Kevin!) I posted here. I'm concerned that the answer from Oracle Support will unthinkingly be "you can't do that, my script says to tell you the absolute most rigid interpretation of the support document", while all the time the same document talks of the use of NFS and/or iSCSI storage. *eye roll*
We have a massive deployment of Oracle EBS, and honestly the interconnect doesn't even touch 100 Mbit speeds, even though the configuration has been checked multiple times by Oracle and IBM, and with the knowledge that Oracle EBS is supposed to heavily leverage RAC. I haven't met a single person who doesn't look at our environment and suggest jumbo frames. It's a joke at this point... comments like "OMG YOU DON'T HAVE JUMBO FRAMES" and/or "OMG YOU'RE NOT USING INFINIBAND WHATTA NOOB" are commonplace when new DBAs are hired. I maintain that the utilization numbers don't support this.
I can tell you that we have 8 Gb Fibre Channel storage and 10 Gb network connectivity. I would sooner assume a bottleneck in the storage infrastructure first. But alas, I digress.
Mainly I'm looking for a real-world answer to this question. Aside from violating every last recommendation and making Oracle support folk gently weep at the suggestion, are there any issues with sharing interconnects between RAC environments that will prevent its functionality and/or reduce its stability?
We have rapid spanning tree configured, as far as I know, and our network folks have tuned the timers razor thin. We have Nexus 5k and Nexus 7k network infrastructure. The typical issues you'd find with standard spanning tree really don't affect us because our network people are just that damn good. -
Hi -
I have recently moved from VMware ESX to Hyper-V and love it! However, the network adapter shows 10 Gig and the virtual switch shows 10 Gig. My actual host adapter speed is set to 1 Gig full. I am attached to a Cisco 4510 switch. I have tried disabling certain TCP settings like RSS, TCP offloading, Chimney, and DisableTaskOffload (set to enabled for TCP params). I can generally transfer from VM to VM or VM to host at ungodly speeds. But when I transfer from a VM to a system or server outside of the VM environment, network speed is right at 100 Mb. I have tested this with Bandwidth Meter Pro. Is there something in the host server I am missing? I have not noticed any QoS or anything... Please help! ;-)
Hi Ron,
The speed of file transfers between VM and host, or VM and VM, really depends on the route the communication uses and the speed of the processor.
According to the architecture of the Hyper-V, a VM to VM file transfer will go VSC --->VMBus--->VSP--->VMswitch--->VSP--->VMBus--->VSC. Think of the VSP as a port on the VMSwitch.
As to your question, the file transfer rate on a VM to VM file transfer will be primarily determined by the speed of the processor as the entire event takes place in software. The VMswitch will report its speed as 10Gbps, but the actual rate is completely dependent on the speed of the processors and the load on the machine. It can be much faster than 10Gbps, or it can be quite a bit slower.
For "external" virtual networks, we only use the physical NIC if the traffic is between a virtual NIC and an external node. VM to VM traffic is still only bound by CPU. In other words, some connections are bound by the physical NIC and some only by the CPU.
If the host and the VM are attached to the same Hyper-V switch, then they are just like two VMs attached to the same Hyper-V switch. You can think of the Hyper-V like a physical switch, but it is actually a software switch with ports just like a physical switch has. In this case the data transfer speed is limited only by how fast the Hyper-V switch software can execute the instructions. This may be slower or faster than the nominal 10Gb/s. Since no hardware is involved, the hardware doesn't affect the speed of the transfer.
If the host and the VM are attached to different Hyper-V switches, or the host is directly connected to the network (i.e., not connected to a Hyper-V switch), then the VM to Host communication will go via the external network, the physical network that joins the VM’s NIC to the host’s NIC. In this case the data transfer rate is limited by the slower of the Hyper-V switch(es) involved, the NICs involved, and the physical network components traversed by the data. Without knowing the details of the data loads, the hardware and software components, the host processors, memory, etc., it isn’t possible to determine what the limiting factor will be.
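A practical way to see which path is the limit is to measure each hop with a throughput tool such as iperf rather than trusting the reported link speed (the receiver address below is a placeholder for a machine outside the VM environment):

```shell
# On the external machine (the receiver), start a listening server:
iperf -s

# Inside the VM, send traffic to it for 30 seconds and report the
# achieved throughput; compare with a VM-to-VM run on the same host
iperf -c 192.168.1.50 -t 30
```

If VM-to-VM numbers are high but VM-to-external tops out near 100 Mb, the limit is somewhere on the physical path (NIC negotiation, switch port, or duplex setting) rather than in the Hyper-V switch.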
We ran file copy tests between VMs, first on an External and then on a Private virtual network, and noticed a significant improvement on Private, where the copy took ~30 minutes vs. about 4 hours on External.
Best regards,
Vincent Hu -
How to find the max data transfer rate (disk speed) supported by the mobo?
I plan on replacing my current HDD with a new and bigger HDD.
For this I need to know the max data transfer rate (disk speed) that my mobo will support. However, dmidecode is not telling me that. Am I missing something?
Here's dmidecode:
# dmidecode 2.11
SMBIOS 2.5 present.
80 structures occupying 2858 bytes.
Table at 0x000F0450.
Handle 0xDA00, DMI type 218, 101 bytes
OEM-specific Type
Header and Data:
DA 65 00 DA B2 00 17 4B 0E 38 00 00 80 00 80 01
00 02 80 02 80 01 00 00 A0 00 A0 01 00 58 00 58
00 01 00 59 00 59 00 01 00 75 01 75 01 01 00 76
01 76 01 01 00 05 80 05 80 01 00 D1 01 19 00 01
00 15 02 19 00 02 00 1B 00 19 00 03 00 19 00 19
00 00 00 4A 02 4A 02 01 00 0C 80 0C 80 01 00 FF
FF 00 00 00 00
Handle 0xDA01, DMI type 218, 35 bytes
OEM-specific Type
Header and Data:
DA 23 01 DA B2 00 17 4B 0E 38 00 10 F5 10 F5 00
00 11 F5 11 F5 00 00 12 F5 12 F5 00 00 FF FF 00
00 00 00
Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
Vendor: Dell Inc.
Version: A17
Release Date: 04/06/2010
Address: 0xF0000
Runtime Size: 64 kB
ROM Size: 4096 kB
Characteristics:
PCI is supported
PNP is supported
APM is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
EDD is supported
Japanese floppy for Toshiba 1.2 MB is supported (int 13h)
3.5"/720 kB floppy services are supported (int 13h)
Print screen service is supported (int 5h)
8042 keyboard services are supported (int 9h)
Serial services are supported (int 14h)
Printer services are supported (int 17h)
ACPI is supported
USB legacy is supported
BIOS boot specification is supported
Function key-initiated network boot is supported
Targeted content distribution is supported
BIOS Revision: 17.0
Handle 0x0100, DMI type 1, 27 bytes
System Information
Manufacturer: Dell Inc.
Product Name: OptiPlex 755
Version: Not Specified
UUID: 44454C4C-5900-1050-8033-C4C04F434731
Wake-up Type: Power Switch
SKU Number: Not Specified
Family: Not Specified
Handle 0x0200, DMI type 2, 8 bytes
Base Board Information
Manufacturer: Dell Inc.
Product Name: 0PU052
Version:
Handle 0x0300, DMI type 3, 13 bytes
Chassis Information
Manufacturer: Dell Inc.
Type: Space-saving
Lock: Not Present
Version: Not Specified
Asset Tag:
Boot-up State: Safe
Power Supply State: Safe
Thermal State: Safe
Security Status: None
Handle 0x0400, DMI type 4, 40 bytes
Processor Information
Socket Designation: CPU
Type: Central Processor
Family: Xeon
Manufacturer: Intel
ID: 76 06 01 00 FF FB EB BF
Signature: Type 0, Family 6, Model 23, Stepping 6
Flags:
FPU (Floating-point unit on-chip)
VME (Virtual mode extension)
DE (Debugging extension)
PSE (Page size extension)
TSC (Time stamp counter)
MSR (Model specific registers)
PAE (Physical address extension)
MCE (Machine check exception)
CX8 (CMPXCHG8 instruction supported)
APIC (On-chip APIC hardware supported)
SEP (Fast system call)
MTRR (Memory type range registers)
PGE (Page global enable)
MCA (Machine check architecture)
CMOV (Conditional move instruction supported)
PAT (Page attribute table)
PSE-36 (36-bit page size extension)
CLFSH (CLFLUSH instruction supported)
DS (Debug store)
ACPI (ACPI supported)
MMX (MMX technology supported)
FXSR (FXSAVE and FXSTOR instructions supported)
SSE (Streaming SIMD extensions)
SSE2 (Streaming SIMD extensions 2)
SS (Self-snoop)
HTT (Multi-threading)
TM (Thermal monitor supported)
PBE (Pending break enabled)
Version: Not Specified
Voltage: 0.0 V
External Clock: 1333 MHz
Max Speed: 5200 MHz
Current Speed: 2666 MHz
Status: Populated, Enabled
Upgrade: Socket LGA775
L1 Cache Handle: 0x0700
L2 Cache Handle: 0x0701
L3 Cache Handle: Not Provided
Serial Number: Not Specified
Asset Tag: Not Specified
Part Number: Not Specified
Core Count: 2
Core Enabled: 2
Thread Count: 2
Characteristics:
64-bit capable
Handle 0x0700, DMI type 7, 19 bytes
Cache Information
Socket Designation: Not Specified
Configuration: Enabled, Not Socketed, Level 1
Operational Mode: Write Back
Location: Internal
Installed Size: 32 kB
Maximum Size: 32 kB
Supported SRAM Types:
Other
Installed SRAM Type: Other
Speed: Unknown
Error Correction Type: None
System Type: Data
Associativity: 8-way Set-associative
Handle 0x0701, DMI type 7, 19 bytes
Cache Information
Socket Designation: Not Specified
Configuration: Enabled, Not Socketed, Level 2
Operational Mode: Varies With Memory Address
Location: Internal
Installed Size: 6144 kB
Maximum Size: 6144 kB
Supported SRAM Types:
Other
Installed SRAM Type: Other
Speed: Unknown
Error Correction Type: Single-bit ECC
System Type: Unified
Associativity: <OUT OF SPEC>
Handle 0x0800, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: PARALLEL
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: DB-25 female
Port Type: Parallel Port PS/2
Handle 0x0801, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: SERIAL1
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: DB-9 male
Port Type: Serial Port 16550A Compatible
Handle 0x0802, DMI type 126, 9 bytes
Inactive
Handle 0x0803, DMI type 126, 9 bytes
Inactive
Handle 0x0804, DMI type 126, 9 bytes
Inactive
Handle 0x0805, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB1
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0806, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB2
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0807, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB3
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0808, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB4
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0809, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB5
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080A, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB6
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080B, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB7
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080C, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB8
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080D, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: ENET
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: RJ-45
Port Type: Network Port
Handle 0x080E, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: MIC
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x080F, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: LINE-OUT
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x0810, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: LINE-IN
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x0811, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: HP-OUT
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x0812, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: MONITOR
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: DB-15 female
Port Type: Video Port
Handle 0x090A, DMI type 9, 13 bytes
System Slot Information
Designation: SLOT1
Type: x1 Proprietary
Current Usage: In Use
Length: Long
Characteristics:
PME signal is supported
Handle 0x0901, DMI type 126, 13 bytes
Inactive
Handle 0x0902, DMI type 9, 13 bytes
System Slot Information
Designation: SLOT2
Type: 32-bit PCI
Current Usage: Available
Length: Long
ID: 2
Characteristics:
5.0 V is provided
3.3 V is provided
PME signal is supported
Handle 0x0903, DMI type 126, 13 bytes
Inactive
Handle 0x0904, DMI type 126, 13 bytes
Inactive
Handle 0x0905, DMI type 126, 13 bytes
Inactive
Handle 0x0906, DMI type 126, 13 bytes
Inactive
Handle 0x0907, DMI type 126, 13 bytes
Inactive
Handle 0x0908, DMI type 126, 13 bytes
Inactive
Handle 0x0A00, DMI type 10, 6 bytes
On Board Device Information
Type: Video
Status: Disabled
Description: Intel Graphics Media Accelerator 950
Handle 0x0A02, DMI type 10, 6 bytes
On Board Device Information
Type: Ethernet
Status: Enabled
Description: Intel Gigabit Ethernet Controller
Handle 0x0A03, DMI type 10, 6 bytes
On Board Device Information
Type: Sound
Status: Enabled
Description: Intel(R) High Definition Audio Controller
Handle 0x0B00, DMI type 11, 5 bytes
OEM Strings
String 1: www.dell.com
Handle 0x0D00, DMI type 13, 22 bytes
BIOS Language Information
Language Description Format: Long
Installable Languages: 1
en|US|iso8859-1
Currently Installed Language: en|US|iso8859-1
Handle 0x0F00, DMI type 15, 29 bytes
System Event Log
Area Length: 2049 bytes
Header Start Offset: 0x0000
Header Length: 16 bytes
Data Start Offset: 0x0010
Access Method: Memory-mapped physical 32-bit address
Access Address: 0xFFF01000
Status: Valid, Not Full
Change Token: 0x00000018
Header Format: Type 1
Supported Log Type Descriptors: 3
Descriptor 1: POST error
Data Format 1: POST results bitmap
Descriptor 2: System limit exceeded
Data Format 2: System management
Descriptor 3: Log area reset/cleared
Data Format 3: None
Handle 0x1000, DMI type 16, 15 bytes
Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 8 GB
Error Information Handle: Not Provided
Number Of Devices: 4
Handle 0x1100, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_1
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Manufacturer: AD00000000000000
Handle 0x1101, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_3
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Handle 0x1102, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_2
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Handle 0x1103, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_4
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Handle 0x1300, DMI type 19, 15 bytes
Memory Array Mapped Address
Starting Address: 0x00000000000
Ending Address: 0x000FDFFFFFF
Range Size: 4064 MB
Physical Array Handle: 0x1000
Partition Width: 1
Handle 0x1400, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00000000000
Ending Address: 0x0007FFFFFFF
Range Size: 2 GB
Physical Device Handle: 0x1100
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 1
Interleaved Data Depth: 1
Handle 0x1401, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00080000000
Ending Address: 0x000FDFFFFFF
Range Size: 2016 MB
Physical Device Handle: 0x1101
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 1
Interleaved Data Depth: 1
Handle 0x1402, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00000000000
Ending Address: 0x0007FFFFFFF
Range Size: 2 GB
Physical Device Handle: 0x1102
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 2
Interleaved Data Depth: 1
Handle 0x1403, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00080000000
Ending Address: 0x000FDFFFFFF
Range Size: 2016 MB
Physical Device Handle: 0x1103
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 2
Interleaved Data Depth: 1
Handle 0x1410, DMI type 126, 19 bytes
Inactive
Handle 0x1800, DMI type 24, 5 bytes
Hardware Security
Power-On Password Status: Enabled
Keyboard Password Status: Not Implemented
Administrator Password Status: Enabled
Front Panel Reset Status: Not Implemented
Handle 0x1900, DMI type 25, 9 bytes
System Power Controls
Next Scheduled Power-on: *-* 00:00:00
Handle 0x1B10, DMI type 27, 12 bytes
Cooling Device
Type: Fan
Status: OK
OEM-specific Information: 0x0000DD00
Handle 0x1B11, DMI type 27, 12 bytes
Cooling Device
Type: Fan
Status: OK
OEM-specific Information: 0x0000DD01
Handle 0x1B12, DMI type 126, 12 bytes
Inactive
Handle 0x1B13, DMI type 126, 12 bytes
Inactive
Handle 0x1B14, DMI type 126, 12 bytes
Inactive
Handle 0x2000, DMI type 32, 11 bytes
System Boot Information
Status: No errors detected
Handle 0x8100, DMI type 129, 8 bytes
OEM-specific Type
Header and Data:
81 08 00 81 01 01 02 01
Strings:
Intel_ASF
Intel_ASF_001
Handle 0x8200, DMI type 130, 20 bytes
OEM-specific Type
Header and Data:
82 14 00 82 24 41 4D 54 01 01 00 00 01 A5 0B 02
00 00 00 00
Handle 0x8300, DMI type 131, 64 bytes
OEM-specific Type
Header and Data:
83 40 00 83 14 00 00 00 00 00 C0 29 05 00 00 00
F8 00 4E 24 00 00 00 00 0D 00 00 00 02 00 03 00
19 04 14 00 01 00 01 02 C8 00 BD 10 00 00 00 00
00 00 00 00 FF 00 00 00 00 00 00 00 00 00 00 00
Handle 0x8800, DMI type 136, 6 bytes
OEM-specific Type
Header and Data:
88 06 00 88 5A 5A
Handle 0xD000, DMI type 208, 10 bytes
OEM-specific Type
Header and Data:
D0 0A 00 D0 01 03 FE 00 11 02
Handle 0xD100, DMI type 209, 12 bytes
OEM-specific Type
Header and Data:
D1 0C 00 D1 78 03 07 03 04 0F 80 05
Handle 0xD200, DMI type 210, 12 bytes
OEM-specific Type
Header and Data:
D2 0C 00 D2 F8 03 04 03 06 80 04 05
Handle 0xD201, DMI type 126, 12 bytes
Inactive
Handle 0xD400, DMI type 212, 242 bytes
OEM-specific Type
Header and Data:
D4 F2 00 D4 70 00 71 00 00 10 2D 2E 42 00 11 FE
01 43 00 11 FE 00 0F 00 25 FC 00 10 00 25 FC 01
11 00 25 FC 02 12 00 25 FC 03 00 00 25 F3 00 00
00 25 F3 04 00 00 25 F3 08 00 00 25 F3 0C 07 00
23 8F 00 08 00 23 F3 00 09 00 23 F3 04 0A 00 23
F3 08 0B 00 23 8F 10 0C 00 23 8F 20 0E 00 23 8F
30 0D 00 23 8C 40 A6 00 23 8C 41 A7 00 23 8C 42
05 01 22 FD 02 06 01 22 FD 00 8C 00 22 FE 00 8D
00 22 FE 01 9B 00 25 3F 40 9C 00 25 3F 00 09 01
25 3F 80 A1 00 26 F3 00 A2 00 26 F3 08 A3 00 26
F3 04 9F 00 26 FD 02 A0 00 26 FD 00 9D 00 11 FB
04 9E 00 11 FB 00 54 01 23 7F 00 55 01 23 7F 80
5C 00 78 BF 40 5D 00 78 BF 00 04 80 78 F5 0A 01
A0 78 F5 00 93 00 7B 7F 80 94 00 7B 7F 00 8A 00
37 DF 20 8B 00 37 DF 00 03 C0 67 00 05 FF FF 00
00 00
Handle 0xD401, DMI type 212, 172 bytes
OEM-specific Type
Header and Data:
D4 AC 01 D4 70 00 71 00 03 40 59 6D 2D 00 59 FC
02 2E 00 59 FC 00 6E 00 59 FC 01 E0 01 59 FC 03
28 00 59 3F 00 29 00 59 3F 40 2A 00 59 3F 80 2B
00 5A 00 00 2C 00 5B 00 00 55 00 59 F3 00 6D 00
59 F3 04 8E 00 59 F3 08 8F 00 59 F3 00 00 00 55
FB 04 00 00 55 FB 00 23 00 55 7F 00 22 00 55 7F
80 F5 00 58 BF 40 F6 00 58 BF 00 EB 00 55 FE 00
EA 00 55 FE 01 40 01 54 EF 00 41 01 54 EF 10 ED
00 54 F7 00 F0 00 54 F7 08 4A 01 53 DF 00 4B 01
53 DF 20 4C 01 53 7F 00 4D 01 53 7F 80 68 01 56
BF 00 69 01 56 BF 40 FF FF 00 00 00
Handle 0xD402, DMI type 212, 152 bytes
OEM-specific Type
Header and Data:
D4 98 02 D4 70 00 71 00 00 10 2D 2E 2D 01 21 FE
01 2E 01 21 FE 00 97 00 22 FB 00 98 00 22 FB 04
90 00 11 CF 00 91 00 11 CF 20 92 00 11 CF 10 E2
00 27 7F 00 E3 00 27 7F 80 E4 00 27 BF 00 E5 00
27 BF 40 D1 00 22 7F 80 D2 00 22 7F 00 45 01 22
BF 40 44 01 22 BF 00 36 01 21 F1 06 37 01 21 F1
02 38 01 21 F1 00 39 01 21 F1 04 2B 01 11 7F 80
2C 01 11 7F 00 4E 01 65 CF 00 4F 01 65 CF 10 D4
01 65 F3 00 D5 01 65 F3 04 D2 01 65 FC 00 D3 01
65 FC 01 FF FF 00 00 00
Handle 0xD403, DMI type 212, 157 bytes
OEM-specific Type
Header and Data:
D4 9D 03 D4 70 00 71 00 03 40 59 6D 17 01 52 FE
00 18 01 52 FE 01 19 01 52 FB 00 1A 01 52 FB 04
1B 01 52 FD 00 1C 01 52 FD 02 1D 01 52 F7 00 1E
01 52 F7 08 1F 01 52 EF 00 20 01 52 EF 10 21 01
52 BF 00 22 01 52 BF 40 87 00 59 DF 20 88 00 59
DF 00 E8 01 66 FD 00 E9 01 66 FD 02 02 02 53 BF
00 03 02 53 BF 40 04 02 53 EF 00 05 02 53 EF 10
06 02 66 DF 00 07 02 66 DF 20 08 02 66 EF 00 09
02 66 EF 10 17 02 66 F7 00 18 02 66 F7 08 44 02
52 BF 40 45 02 52 BF 00 FF FF 00 00 00
Handle 0xD800, DMI type 126, 9 bytes
Inactive
Handle 0xDD00, DMI type 221, 19 bytes
OEM-specific Type
Header and Data:
DD 13 00 DD 00 01 00 00 00 10 F5 00 00 00 00 00
00 00 00
Handle 0xDD01, DMI type 221, 19 bytes
OEM-specific Type
Header and Data:
DD 13 01 DD 00 01 00 00 00 11 F5 00 00 00 00 00
00 00 00
Handle 0xDD02, DMI type 221, 19 bytes
OEM-specific Type
Header and Data:
DD 13 02 DD 00 01 00 00 00 12 F5 00 00 00 00 00
00 00 00
Handle 0xDE00, DMI type 222, 16 bytes
OEM-specific Type
Header and Data:
DE 10 00 DE C1 0B 00 00 10 05 19 21 01 00 00 01
Handle 0x7F00, DMI type 127, 4 bytes
End Of Table
hdparm also does not tell me the max data transfer rate (disk speed) of my current drive, although this link: www.wdc.com/en/library/sata/2879-001146.pdf says that it is 3.0 Gb/s.
and here's hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: WDC WD800JD-75JNC0
Firmware Revision: 06.01C06
Standards:
Supported: 6 5 4
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
CHS current addressable sectors: 16514064
LBA user addressable sectors: 156250000
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 76293 MBytes
device size with M = 1000*1000: 80000 MBytes (80 GB)
cache/buffer size = 8192 KBytes
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec'd by Standard, with device specific minimum
R/W multiple sector transfer: Max = 16 Current = 8
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
Automatic Acoustic Management feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* SMART error logging
* SMART self-test
* Gen1 signaling speed (1.5Gb/s)
* Host-initiated interface power management
* SMART Command Transport (SCT) feature set
* SCT Long Sector Access (AC1)
* SCT LBA Segment Access (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
not supported: enhanced erase
Checksum: correct
Last edited by Inxsible (2011-03-27 04:40:49)
I just checked my BIOS and my current setting is set to IDE, although it also mentions that the default should be AHCI. Currently I have a dual boot of Windows 7 (I need it for tax software) and Arch.
So I guess, when I get the new HDD, I will first set it to AHCI and then install the OSes on it, see if NCQ helps any, and if not, turn it back and re-install (if I have to). I am planning to have Windows only in VirtualBox on the new drive.
Anyhoo, while I was in the BIOS I found two things which I had questions about :
1) Under Onboard Devices --> Integrated NIC, my setting is currently "On w/PXE" and it says the default should be just "On". Would it be OK to change it back to On, since it's a single machine and it's not booting an OS from any server? I just don't want to have to re-install anything now, since I will be doing that on the new HDD.
2) How would I know whether my BIOS would support a 64 bit OS in Virtualbox? I checked some setting under Virtualization, but they weren't very clear.
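One quick check that sidesteps the BIOS menus entirely (a sketch, Linux-only: VirtualBox needs VT-x/AMD-V to run 64-bit guests, and the CPU advertises those as the vmx or svm flag - though the BIOS can still have the feature disabled even when the flag is present):

```python
# VirtualBox needs hardware virtualisation (Intel VT-x = "vmx", AMD-V = "svm")
# to run 64-bit guests; /proc/cpuinfo lists the flags the CPU exposes.
flags = open("/proc/cpuinfo").read()
print("vmx" in flags or "svm" in flags)
```

If this prints True but VirtualBox still refuses to offer 64-bit guests, the feature is most likely switched off in that Virtualization sub-section of the BIOS.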
I will edit this post and let you know exactly what settings were present under the Virtualization sub-section. -
What is max internal hard drive transfer rate compatible with a 2007 MacBook Pro?
In attempt to upgrade my mid-2007 MacBook Pro (Intel Core 2 Duo), I bought a Seagate 750GB listed as compatible with my computer on MacSales.com . . .
Seagate Momentus XT ST750LX003 750GB 7200 RPM 32MB Cache 2.5" SATA 6.0Gb/s Solid State Hybrid Drive -Bare Drive
Installation appeared to go well and I ran an extended hardware test with no issues found. However, the drive is listed at 5.46 TB (would be nice) and I get an input/output error when attempting to partition. I can initiate an erase, but after the time I would suspect it would take to erase the drive, I get an input/output error - it seems to appear after the first 750 GB of the '5.5 TB' was erased. At all times, the drive is not recognized when I attempt to install OS X from original discs.
I suspect the 6.0 GB/s transfer capacity of the hard drive is not compatible with the MacBook Pro. The drive came with 4 jumper pins but no jumper and no label diagram to set a slower transfer rate. I called Seagate, Newegg, OWC, and Apple, but no one has compatibility info for my MacBook Pro. To them, it appears I am running the first tests of this new technology with an 'older' MacBook Pro.
My MBP has had no issues - I'd like to keep her going with the optimum internal hard drive capacity, but don't necessarily want to set up a test bench in my house (though my kids would enjoy destroying it) and pay several shipping and restocking fees to test new hard drives.
The original drive had a 1.5 Gb/s transfer rate. Does anyone know the maximum transfer rate compatible with a 2007 MacBook Pro? 3.0 Gb/s? 1.5 Gb/s? Thank you.
No spinning hard drive will transfer data faster than 60-80 MB a second. The XT models have a flash storage area used when reading and writing data that can make the drive appear faster in some situations. That flash memory is only 8 or 16 GB in size, I forget which.
The drive is rated to work on a 6 Gb/s SATA bus, but it certainly cannot transfer data that fast. It should be backward compatible with slower buses.
Your drive is 750 GB in size. Not sure where you are getting this 5.5 TB (that is 5.5 terabytes, which is 5500 gigabytes; your drive is under 1 TB).
What are you using to partition the drive? Disk Utilities from the original install DVD?
You need to install on your old drive, update it from the Apple website, then clone it to the new drive. The version of OS X you are using may not function correctly with that large a drive. Or get yourself a copy of Snow Leopard (retail disc, $29 from Apple) and do all the partitioning and installing with that version of OS X. -
FTP/SFTP/FISH (etc) slow file transfer rate over LAN
Hi everyone,
I have a problem with transferring files over my home network that has been bothering me for quite some time.
I have an 802.11n router which should provide transfer rates up to 150 Mbps (AFAIK). When I download files from the Internet, a 3 MB/s transfer rate is no problem.
However, when receiving or sending data over LAN, the transfer rate is much slower (1.8 MB/s).
My rough guess (after reading some papers on the topic) is that TCP is causing this (its flow-control feature, to be exact), since the TCP max window size is too small on Linux by default.
So, setting TCP max window size to a greater number should solve this.
I tried putting this:
# increase TCP max buffer size setable using setsockopt()
# 16 MB with a few parallel streams is recommended for most 10G paths
# 32 MB might be needed for some very long end-to-end 10G or 40G paths
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
# (only change the 3rd value, and make it 16 MB or more)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# recommended to increase this for 10G NICS
net.core.netdev_max_backlog = 30000
# these should be the default, but just to be sure
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
in /etc/sysctl.conf but to no avail.
So either there is no problem with the max window size setting, or the Linux kernel ignores it (maybe because /proc is no longer supported?).
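One way to sanity-check whether the window size is even the limiting factor is the bandwidth-delay product - the number of bytes that must be in flight to keep the link full. A sketch (the 5 ms RTT here is an assumed figure for a local wireless hop; measure yours with ping):

```python
# Bandwidth-delay product: bytes that must be "in flight" to fill the pipe.
def bdp_bytes(bandwidth_bps, rtt_s):
    return int(bandwidth_bps / 8 * rtt_s)

# Assumed figures: a 150 Mbit/s 802.11n link and a ~5 ms wireless round trip.
print(bdp_bytes(150_000_000, 0.005))  # 93750
```

That is only slightly above the 87380-byte default in tcp_rmem, so at LAN round-trip times the window is probably not the bottleneck - which would point at wireless airtime, CPU, or the SFTP/FISH encryption overhead instead.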
Thanks for any neat ideas.
Last edited by Vena (2012-06-01 21:48:14)
Bump? No ideas whatsoever?
-
Is it possible to achieve 10 MHz output transfer rate on 6534 device?
I want to achieve a 10 MHz output transfer rate on a 6534 device. I need to continuously generate non-repeating data. But when I execute the CVI sample program DOdoubleBufPatternGen653x, I cannot exceed a 1 MHz transfer rate (iPgTB = -3, iReqInt = 20); I get a -10843 error. Parameter oldDataStop is set to 1. Each half-buffer contains 100,000 points; the buffer, 200,000. My system is a Pentium IV 2 GHz with 512 MB RAM. What should I change to achieve a 10 MHz transfer rate? Is it possible in this mode?
Greetings Alexey,
It should be possible, but it may take some extra thought and effort on your part due to the limitations and/or use case of your system. I would first recommend the NI tutorial "Maximizing the Performance of the NI 6534 Digital I/O Device" available online at . Based upon the error you are getting, it sounds like you are not getting sufficient PCI bus bandwidth. Are you perhaps reading this data from disk (that would roughly double the PCI bus bandwidth required, or halve the transfer rate you can expect to achieve) or using other PCI devices that are contending for bandwidth (potentially video cards, sound cards, network cards, drive controllers...)?
Some of these PCI devices need substantial bandwidth; others will grab the bus periodically for small transfers (sound cards, seemingly unused network cards forwarding broadcast packets...), and these could potentially interfere with your ability to stream data continuously at high rates. You might try disabling unused devices in your Device Manager. If you continue to have difficulties, please forward us more details about your system, what it is doing, and how your program works (are you streaming data from disk or the network, or generating it on the fly?).
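For a rough feel of the numbers involved, a sketch (the 4 bytes/update assumes all 32 lines of the 6534 are in use, and the ~100 MB/s figure is a typical sustained rate for a 32-bit/33 MHz PCI bus, not a measured one):

```python
# Sustained PCI bandwidth needed for pattern generation at a given update rate.
# Assumption: 32 data lines driven, i.e. 4 bytes per update.
def pci_mb_per_s(rate_hz, bytes_per_update=4, from_disk=False):
    mb = rate_hz * bytes_per_update / 1e6
    # Streaming the pattern from disk crosses the PCI bus twice.
    return mb * 2 if from_disk else mb

print(pci_mb_per_s(10_000_000))                  # 40.0 MB/s
print(pci_mb_per_s(10_000_000, from_disk=True))  # 80.0 MB/s
```

80 MB/s of the ~100 MB/s a shared 32-bit/33 MHz bus can realistically sustain leaves very little headroom for any other PCI traffic, which is consistent with the advice above about disabling competing devices.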
Sincerely,
Jeremiah Cox
Platform Software Product Support Engineer
National Instruments
-
Solaris 10 u5 Samba slow transfer rates?
Hi!
I've installed Solaris 10 x86 (Core2Duo - x64) server, with Samba over ZFS RAID-Z. Samba is a part of Active Directory Domain. I've managed to join it to domain, to get the users and groups from A.D. and to translate them to Unix IDs. Everything works really good. Samba is installed from the packages from Solaris 10 DVD.
Only problem I have is the performance :( It's disastrous!
On a 100 Mbit Realtek NIC, Samba can manage around 4 MB/s if the log level is set very high (10). If I lower it to 0, transfer rates go up to 7.5-8.5 MB/s and fluctuate in that interval.
On the same network, there is a Debian Samba server, and transfer rates go high as 10.5-11.0MB/s.
Next test I did was switching to Gbit interface. That increased transfer rates up to 25 MB/s, but that is still 5 times slower than the theoretical limit.
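As an aside, the arithmetic behind that "5 times slower" figure (a sketch; 1 Gbit/s is taken at its nominal rate):

```python
# How much of the nominal link speed each measured rate actually uses.
def link_utilisation(mb_per_s, link_mbit):
    return mb_per_s * 8 / link_mbit   # MB/s -> Mbit/s, then divide by link speed

# Figures from the post, on the 1 Gbit NIC:
print(link_utilisation(25, 1000))   # 0.2  - the "5x slower" SUNW case
print(link_utilisation(45, 1000))   # 0.36 - CSW, capped by the source disk
```

Note that even the "good" 45 MB/s case uses barely a third of the link, so the NIC itself is never the constraint here - the interesting gap is purely between the two Samba builds.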
So, the next thing I tried was switching to Blastwave (CSW) Samba instead of SUNW Samba... My transfer rates went back to normal immediately! It was a bit of a shock for me... I could transfer about 10 MB/s on the 100 Mbit interface, and around 45 MB/s on the 1 Gbit interface. 45 MB/s is the theoretical limit of the workstation hard drive I was doing transfers from.
Sun-packaged (SUNW) Samba is 3.0.28, patched today to the latest patchlevel, and CSW uses 3.0.23. I used CSW Samba with the exact same smb.conf file. The only problem is that I never managed to connect CSW Samba to ADS on my network :( So I gave up on that, and I'm facing a dilemma: managers request full speed from the Samba server (comparable to Linux/Windows shares), but I just can't connect to the domain with the CSW package.
So I'm asking you guys - any ideas what could be the problem with SUNW Samba and performance? Is it just a 3.0.28 vs 3.0.23 issue, or what? Why is there such a big difference in transfer rates? :(
Please help!
OK, here goes my smb.conf:
[global]
workgroup = MYCOMPANY
realm = MYCOMPANY.LOCAL
server string = server4 (Samba, Solaris 10)
security = ADS
map to guest = Bad User
obey pam restrictions = Yes
password server = server1.mycompany.local
passdb backend = tdbsam
log file = /var/samba/log/log.%m
max log size = 50
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE
load printers = No
local master = No
domain master = No
dns proxy = No
idmap uid = 10000-90000
idmap gid = 10000-90000
winbind separator = +
winbind enum users = Yes
winbind enum groups = Yes
winbind use default domain = Yes
[share]
comment = Share on ZFS Raid-Z
path = /tank/share
force user = local_user
force group = users
read only = No
guest ok = Yes
vfs objects = zfsacl -
802.3ad (mode=4) bonding for RAC interconnects
Is anyone using 802.3ad (mode=4) bonding for their RAC interconnects? We have five Dell R710 RAC nodes and we're trying to use the four onboard Broadcom NetXtreme II NICs in an 802.3ad bond with src-dst-mac load balancing. Since we have the hardware to pull this off, we thought we'd give it a try and achieve some extra bandwidth for the interconnect, rather than deploying the traditional active/standby interconnect using just two of the NICs. Has anyone tried this config, and what was the outcome? Thanks.
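For reference, here is roughly what a mode=4 bond looks like on RHEL-style systems. This is a sketch only - the device names, addresses, and file paths are examples, not your actual config, and the switch ports must also be configured as an LACP port-channel with a matching hash policy:

```
# /etc/modprobe.conf (or a file under /etc/modprobe.d/ on newer releases)
alias bond1 bonding
options bond1 mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer2

# /etc/sysconfig/network-scripts/ifcfg-bond1  (private interconnect)
DEVICE=bond1
IPADDR=192.168.10.11      # example interconnect address
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat for each slave NIC)
DEVICE=eth2
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

One design note: with src-dst-mac hashing, any single flow between two given nodes still rides one physical NIC, so 802.3ad raises the aggregate bandwidth across node pairs rather than the peak rate of any one cache-fusion conversation.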
I don't, but maybe these documents might help:
http://www.iop.org/EJ/article/1742-6596/119/4/042015/jpconf8_119_042015.pdf?request-id=bcddc94d-7727-4a8a-8201-4d1b837a1eac
http://www.oracleracsig.org/pls/apex/Z?p_url=RAC_SIG.download_my_file?p_file=1002938&p_id=1002938&p_cat=documents&p_user=nobody&p_company=994323795175833
http://www.oracle.com/technology/global/cn/events/download/ccb/10g_rac_bp_en.pdf
Edited by: Hub on Nov 18, 2009 10:10 AM -
Limit Hyper-V Replication Data Transfer Rate
Hello,
I need to limit the data transfer rate for Hyper-V replication because the link between the two datacenters has a bandwidth limitation.
Please suggest.
You can use the New-NetQosPolicy cmdlet to set the throttling limits -
http://technet.microsoft.com/en-us/library/hh967468.aspx. Based on the destination port (the port on which the replica server has been configured to receive replication traffic - maybe it's port 80 in your case) or the destination subnet, you can specify a throttling value (-ThrottleRateActionBitsPerSecond) or assign a weight (-MinBandwidthWeightAction).
E.g.: New-NetQosPolicy "Replica traffic to 8080" -DestinationPort 8080 -ThrottleRateActionBitsPerSecond 100000
or check the link to Thomas blog
http://www.thomasmaurer.ch/2013/12/throttling-hyper-v-replica-traffic/ -
Dedicated switches needed for RAC interconnect or not?
Currently working on an Extended RAC cluster design implementation, I asked the network engineer for dedicated switches for the RAC interconnects.
Here is a little background:
There are 28 RAC clusters over 2X13 physical RAC nodes with separate Oracle_Home for each instance with atleast 2+ instances on each RAC node. So 13 RAC nodes will be in each site(Data-Center). This is basically an Extended RAC solution for SAP databases on RHEL 6 using ASM and Clusterware for Oracle 11gR2. The RAC nodes are Blades in a c7000 enclosure (in each site). The distance between the sites is 55+ kms.
Oracle recommends InfiniBand (20 Gbps) as the network backbone, but here DWDM will be used with 2x10 Gbps links for the RAC interconnect between the sites. There will be a separate 2x1 Gbps redundant link for the production network and 2x2 Gbps FC (Fibre Channel) redundant links for the SAN/storage network (ASM traffic will go here). There will be separate switches for the public/production network and the SAN network.
Oracle recommends dedicated switches (which will give acceptable latency/bandwidth) with switch redundancy to route the dedicated/non-routable VLANs for the RAC interconnect (private/heartbeat/global cache transfer) network. Since the DWDM interlinks are 2x10 Gbps - do I still need the dedicated switches?
If yes, then how many?
Your inputs will be greatly appreciated.. and help me take a decision.
Many Thanks in advance..
Abhijit
Absolutely agree.. the chances of overload in an HA (RAC) solution and ultimate RAC node eviction are very high (with very high latency), and for exactly this reason I even suggested inexpensive switches to route the VLANs for the RAC interconnect. The ASM traffic will get routed through the 2x2 Gbps FC links through SAN directors (1 in each site).
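For what it's worth, a quick sanity check on the latency floor that the 55 km span imposes regardless of switch choice (a sketch; ~5 µs/km is the usual one-way propagation figure for light in fibre):

```python
# Light in fibre travels at roughly 5 microseconds per km (one way), so the
# site-to-site distance alone sets a hard floor under interconnect RTT.
def fibre_rtt_ms(distance_km, us_per_km=5.0):
    return 2 * distance_km * us_per_km / 1000

print(fibre_rtt_ms(55))  # 0.55
```

That ~0.55 ms is added to every cache-fusion round trip before any switch, NIC, or protocol overhead, which is why latency (not the 2x10 Gbps of bandwidth) is the number to watch in an extended cluster at this distance.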
Suggested the network folks to use Up-links from the c7000 enclosure and route the RAC VLAN through these inexpensive switches for the interconnect traffic. We have another challenge here: HP has certified using VirtualConnect/Flex-Fabric architecture for Blades in c7000 to allocate VLANs for RAC interconnect. But this is only for one site, and does not span Production/DR sites separated over a distance.
Btw, do you have any standard switch model to select from.. and how many to go for a RAC configuration of 13 Extended RAC clusters with each cluster hosting 2+ RAC instances to host total of 28 SAP instances.
Many Thanks again!
Abhijit -
There are three AEBSs: one with a modem, one without a modem but with more Ethernet ports, and one with gigabit Ethernet ports.
Do they all have the same Bluetooth transfer rate, and do the two without the gigabit Ethernet ports have the same Ethernet transfer rates?
None of the AirPorts are Bluetooth devices so there would be no "Bluetooth transfer rate" to speak of. However, if you mean 802.11x wireless, then the 802.11g AirPort Extreme Base Stations (AEBS) all operate at a maximum bandwidth of 54 Mbps. The new 802.11n AirPort Extreme Base Station (AEBSn), can have a maximum bandwidth of over 300 Mbps depending on which radio mode it is running in.
For Ethernet, both of the 802.11g (& the 1st generation 802.11n) base station have 10/100 Mbps ports. The newest 802.11n version has 10/100/1000 Mbps ports. -
Low Transfer Rate On 2960-S Switch
Hi,
I have 2 PCs (1gbps lan card each ) connected to Cisco 2960-S switch.
The bandwidth of all ports on 2960-S is set to 1gbps manually.
However, whenever I transfer any file between the two, I don't get a transfer rate of more than 12 MBps (96 Mbps).
Theoretically it should be 125 MBps, so why am I getting such a low transfer rate?
Regards.
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
Bandwidth is set manually, eh? Have you checked for a duplex mismatch?
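One way to read the numbers in the question (a sketch; 12 MB/s is the rate reported above):

```python
# A sustained 12 MB/s file copy corresponds to ~96 Mbit/s on the wire -
# suspiciously close to a saturated Fast Ethernet link, which is exactly what
# you would see if one side actually negotiated 100 Mbit/s instead of 1 Gbit/s.
observed_mb_per_s = 12
observed_mbit_per_s = observed_mb_per_s * 8
print(observed_mbit_per_s)  # 96
```

So besides duplex, check the actual negotiated speed on both switch ports (e.g. in the interface status output) and on the PCs' NICs, and remember that the disk at either end can also cap a file copy well below wire speed.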