Link Aggregation, 10.4.8 Server, D-Link problems.

I am having problems with Link Aggregation on a Mac Pro running OS X Server 10.4.8, connecting to a D-Link DGS-3224TGR. The switch has LACP active on both ports of the trunk, and the ports are grouped, enabled, and set to LACP. When both en0 and en1 are connected they show green in Network Status, but I have no real connectivity beyond the switch; I can connect only to the web GUI of the switch. (Before you point fingers at the switch:) if I unplug en1, everything begins to work well, but if I reconnect en1 I get green lights on both and still no connectivity beyond the switch. If I unplug en0 and leave en1 in, it still will not work. As long as only en0 is connected, to port 1 OR port 2 of the trunk, it works. Once en1 is plugged in, or en1 is used alone, it does not work.
BTW, I am setting a static IP for bond0.
One other note as I test en1 solo to the switch: there was a previous post concerning Link Aggregation where the person found no real fix, but he mentioned ports on the Mac going to 6.5.6.0 as the IP. He seemed to believe the problem was in the switch, but while I was setting up en1 with the server's IP, en0 showed this 6.5.6.0 address. Hmm.
Anyone with info on any special settings needed on the D-Link switch, or settings for other switches related to this, please post. I really believe the problem to be OS X related, but ...

Someone made a post that made me look at netstat. It was suggested that not only do the IPs on en0 & en1 need to be the same, but also the MAC addresses. On my Mac Pro, en0 & en1 retain different MAC addresses. Here's the readout.
<pre>osxserver:~ root# netstat -i
Name   Mtu    Network      Address                    Ipkts  Ierrs    Opkts  Oerrs  Coll
lo0    16384  <Link#1>                               301253      0   301252      0     0
lo0    16384  localhost    ::1                       301253      -   301252      -     -
lo0    16384  localhost    fe80::1                   301253      -   301252      -     -
lo0    16384  127          localhost                 301253      -   301252      -     -
gif0*  1280   <Link#2>                                    0      0        0      0     0
stf0*  1280   <Link#3>                                    0      0        0      0     0
en0    1500   <Link#4>     00:17:f2:00:10:12        3117722      0  3747823      0     0
en1    1500   <Link#5>     00:17:f2:00:10:13              0      0        0      0     0
fw0*   494    <Link#6>     00:16:cb:ff:fe:6b:7b:5a        0      0        0      0     0
bond0  1500   <Link#7>     00:17:f2:00:10:12        3107388      0  3709075      0     0
bond0  1500   osxserver.l  fe80::217:f2ff:fe        3107388      -  3709075      -     -
bond0  1500   192.168.1    osxserver.qchro          3107388      -  3709075      -     -
osxserver:~ root# netstat -r
Routing tables

Internet:
Destination            Gateway             Flags  Refs     Use  Netif  Expire
default                routerlan.qchron.n  UGSc     45   11340  bond0
127                    localhost           UCS       0       0  lo0
localhost              localhost           UH       32  296823  lo0
169.254                link#7              UCS       0       0  bond0
192.168.1              link#7              UCS      17       0  bond0
ns2.mycompany.com      0.50.e4.9e.ef.29    UHLW      0    2977  bond0    1034
osxserver.mycompany.c  localhost           UHS       1    1776  lo0
rays.mycompany.com     0.3.93.a4.c2.e6     UHLW      1    7541  bond0     762
markw.mycompany.com    0.7.e9.db.eb.62     UHLW      0    7314  bond0     723
edit120.mycompany.com  0.d.93.62.c5.18     UHLW      2   25439  bond0     994
edit121.mycompany.com  0.14.51.0.92.36     UHLW      1   55664  bond0     304
edit122.mycompany.com  0.d.93.62.5e.f8     UHLW      5   19449  bond0    1044
edit123.mycompany.com  0.30.65.b8.5a.14    UHLW      1  164064  bond0    1200
edit125.mycompany.com  0.d.93.64.a2.c2     UHLW      3    8418  bond0       4
edit126.mycompany.com  0.11.24.7e.a2.9c    UHLW      4   16093  bond0    1124
edit127.mycompany.com  0.d.93.61.6.38      UHLW      6  238746  bond0     854
art135.mycompany.com   0.a.95.be.2f.88     UHLW      3  382196  bond0     730
art136.mycompany.com   0.a.95.ae.96.10     UHLW      4  139562  bond0     672
art137.mycompany.com   0.a.95.ae.94.a      UHLW      3  328845  bond0     866
art138.mycompany.com   0.a.95.ba.f4.4a     UHLW      0      30  bond0     624
mikespowerbook.qch     0.3.93.a4.7a.1c     UHLW      2    5320  bond0     506
192.168.1.223          0.a.95.bb.c.78      UHLW      0     142  bond0     546
routerlan.mycompany.c  0.d.bd.a1.89.98     UHLW     46     605  bond0     928

Internet6:
Destination            Gateway             Flags  Netif  Expire
localhost              localhost           UH     lo0
localhost                                  Uc     lo0
localhost              link#1              UHL    lo0
                       link#7              UC     bond0
osxserver.local        0.17.f2.0.10.12     UHL    lo0
ff01::                 localhost           U      lo0
ff02::%lo0             localhost           UC     lo0
ff02::%bond0           link#7              UC     bond0
</pre>

Similar Messages

  • Link Aggregation and OS X Server

    Hello.
    I'm moving my servers (3 x Intel Xserves running 10.5.6) to a new location with all new network hardware (wiring, switches, routers, etc.) and I'm now considering using link aggregation on the Xserves (the previous switches didn't support it while the new ones do). But since I'm working with servers which are already set up, I'm concerned I might "break" services by moving to link aggregation, specifically that services which are bound to a specific network connection/port will fail when setting up link aggregation (for example, an Open Directory setup as OD Master).
    Does anyone have any experience with this? I've read that with OD you have to revert to a Standalone setup before setting up link aggregation, but I've only seen this mentioned once on other forums (and have never seen it confirmed anywhere).
    Anyway, if anyone has any experience/advice/etc. it'd be much appreciated!
    Regards,
    Kristin.

    To add to that question: do I just have to set up link aggregation in the Network preferences of the Xserves, or do I also have to configure the switches (through their administration interfaces) for link aggregation?

  • Intel, Link Aggregation, 10.4.8 Server

    Trying to narrow down a problem, to see if it's just the Intel Ethernet chipset or maybe just the Mac Pro. Someone with a G5 PowerMac Xserve running Mac OS X (10.4.8) posted an ifconfig that was correct in my open issue topic under Server/Networking. I was hoping to find an Intel user with Link Aggregation active to any switch who could post their ifconfig -b bond0 results.
    The problem appears to be the MAC address setup on the Mac Pro via network setup. Here is my ifconfig.
    <pre>osxserver:~ root# ifconfig -b bond0
    bond0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    inet6 fe80::217:f2ff:fe00:1012%bond0 prefixlen 64 scopeid 0x7
    inet 192.30.40.2 netmask 0xffffff00 broadcast 192.30.40.255
    ether 00:17:f2:00:10:12
    media: autoselect (1000baseT <full-duplex,flow-control>) status: active
    supported media: autoselect
    bond key: 0x0001 interfaces: en0 (selected) en1 (unselected)
    bond interface: en0 priority: 0x8000 state: 0xbd partner system: 0x0001,00:0f:3d:f8:12:8f key: 0x0013 port: 0x0013 priority: 0x0001 state: 0x37
    bond interface: en1 priority: 0x8000 state: 0x05 partner system: 0x0001,00:0f:3d:f8:12:8f key: 0x0013 port: 0x0014 priority: 0x0001 state: 0x37</pre>
    See my "ether" address: it DOES NOT match the MAC addresses assigned to the bonded en0 & en1, as yours does.
    <pre>osxserver:~ root# networksetup -listBonds
    interface name: bond0
    user-defined-name: bond0
    devices: en0, en1
    </pre>
    Please does anyone know what CLI commands will fix this?
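    For anyone else poking at this, Darwin's ifconfig can also build and inspect a bond by hand. A minimal sketch, assuming en0/en1 are the ports to bond and run as root (the IP below is made up, and the settings do not persist across reboots):
    <pre># create the bond and add the two ports to it
    ifconfig bond0 create
    ifconfig bond0 bonddev en0
    ifconfig bond0 bonddev en1
    # give the address to bond0 only, never to en0/en1 (192.168.1.2 is an example)
    ifconfig bond0 inet 192.168.1.2 netmask 255.255.255.0
    # check LACP negotiation; the "partner system" fields should be non-zero
    ifconfig -b bond0</pre>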

    I'm having problems too with Link Aggregation, on an Intel Xserve with 10.4.8. In contrast to your ifconfig, my output shows that the bond seems to have lost its Ethernet address completely:
    <pre>xserve1:~ isdadmin$ ifconfig -b bond0
    bond0: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 9000
    ether 00:17:f2:93:7d:f8
    media: autoselect status: inactive
    supported media: autoselect
    bond key: 0x0001 interfaces: en0 (unselected) en1 (unselected)
    bond interface: en0 priority: 0x8000 state: 0x45 partner system: 0x0000,00:00:00:00:00:00 key: 0x0000 port: 0x0000 priority: 0x0000 state: 0x00
    bond interface: en1 priority: 0x8000 state: 0x45 partner system: 0x0000,00:00:00:00:00:00 key: 0x0000 port: 0x0000 priority: 0x0000 state: 0x00</pre>
    On the last two lines there are only zeros. The status panel in System Preferences states that the switch does not seem to support 802.3ad, though it does, and it is activated for the connected ports.
    I hope that if it is a system fault it will be corrected in 10.4.9.
      Mac OS X (10.4.8)  

  • Mac Mini Server Link aggregation - Thunderbolt or USB 3 gigabit ethernet adaptors

    I am setting up a late 2012 Mac Mini as a file server with Server 2.2. It has a Promise Pegasus R4 RAID and LaCie 4TB drives daisy-chained via the Thunderbolt connection. Four users on Mac Pros will connect to the server to access these hard drives via gigabit Ethernet.
    I imagine the gigabit Ethernet will be the bottleneck, so I'm now looking at link aggregation. Not a problem on the Mac Pros, but the Mac Mini will require an adaptor to get a second gigabit port. From reading this forum I understand the Apple Thunderbolt to Gigabit adaptor will work, but I'm concerned that it will need to be fitted third in line after the R4 and LaCie drives. Might the 10 Gbps bandwidth Thunderbolt has become another bottleneck, with all three working off the same port?
    An option would be to use one of the USB 3 ports with this adaptor http://www.ebuyer.com/412005-startec...edium=products
    I believe it works with OS X, but I have no speed information, nor do I know whether OS X link aggregation will work using it.
    Any thoughts on the above would be appreciated and recommendations on a suitable Network Switch with LACP support welcome.

    At present Mountain Lion Server cannot use an LACP bond, in my experience at least. http://www.small-tree.com/kb_results.asp?ID=59 describes how LACP bonds do not show up in the Server Admin GUI on Mountain Lion.
    Does anyone know how to do it, or the location and name of the plist file to configure the network interface in ML Server?
    regards
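    For what it's worth, bonds on OS X are normally stored in /Library/Preferences/SystemConfiguration/preferences.plist (under a VirtualNetworkInterfaces key, as far as I know), and networksetup can manage one from the command line. A sketch, with the bond and device names as examples:
    <pre>sudo networksetup -createBond "bond0" en0 en1
    sudo networksetup -listBonds
    # peek at where the configuration is stored
    sudo plutil -p /Library/Preferences/SystemConfiguration/preferences.plist | grep -A 5 VirtualNetworkInterfaces</pre>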

  • X Serve and link aggregation - how to get more speed

    Based on the thread http://discussions.apple.com/thread.jspa?threadID=1583184
    is there any news with Snow Leopard Server and bonding 2 or 4 NICs on an Xserve to get true load balancing? Can you recommend specific Ethernet switches?
    Thanks
    Kostas

    Why do you think Snow Leopard makes any difference?
    Link Aggregation is an IEEE standard. Apple can't arbitrarily change their implementation of it.
    As for switches, I've used it successfully on a variety of HP, Cisco and Foundry switches, although since it's a standard protocol almost any switch should work.

  • Mac OS X Server v10.7 does not show the ethernet link aggregated interface I created in Server Hardware Network Dialogue window. Are link aggregated ethernet connections not supported in Lion Server?

    Mac OS X Server v10.7 does not show the Ethernet link aggregated interface I created. Does Lion Server support Ethernet link aggregated interfaces?

    Thanks for responding Cold--
    Hardware: Mac Pro, 3.0 GHz quad-core Xeon
    I read the link, but it still does not explain why the aggregated dual Ethernet interface does not show up in the Network tab of the hardware section of Lion Server. I was able to see it on the network, and it looks to be using the single static IP that I assigned. My concern is whether this is supported and whether it will allow for failover and double the performance of a single network interface.
    Any thoughts?
    Thanks again!

  • How can I set up link aggregation correctly?

    I have an Enterprise T5220 server running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3), and until recently I was getting up to, and sometimes over, 1.5 Gb/s as desired. However, something has happened recently such that I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server, no changes were made to the switch it's connected to (Nortel Passport 8600 Series), and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail (LACP enabled/disabled, different policies, etc.). I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side, but I can get more details if needed.
    In order to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me; I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables, and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152 and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
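    For reference, the tweaks described above correspond to ndd settings like these; a sketch for Solaris 10 (not persistent across reboots unless added to a startup script):
    <pre>ndd -set /dev/tcp tcp_max_buf 2097152
    ndd -set /dev/tcp tcp_xmit_hiwat 524288
    ndd -set /dev/tcp tcp_recv_hiwat 524288</pre>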
    Unfortunately, however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between the two ports.
    Problem with the server? Switch? Aggregation software? All of the above? At any rate, I seem to be missing something. Any help regarding this issue would be greatly appreciated!
    Regards,
    sundy
    Output of several commands on the T5220:
    <pre>uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220

    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843 mtu 1500 index 2
            inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
            ether 0:14:4f:ec:bc:1e

    dladm show-dev:
    e1000g0  link: unknown  speed: 0     Mbps  duplex: half
    e1000g1  link: unknown  speed: 0     Mbps  duplex: half
    e1000g2  link: up       speed: 1000  Mbps  duplex: full
    e1000g3  link: up       speed: 1000  Mbps  duplex: full

    dladm show-link:
    e1000g0  type: non-vlan  mtu: 1500  device: e1000g0
    e1000g1  type: non-vlan  mtu: 1500  device: e1000g1
    e1000g2  type: non-vlan  mtu: 1500  device: e1000g2
    e1000g3  type: non-vlan  mtu: 1500  device: e1000g3
    aggr1    type: non-vlan  mtu: 1500  aggregation: key 1

    dladm show-aggr:
    key: 1 (0x0001) policy: L4  address: 0:14:4f:ec:bc:1e (auto)
    device   address           speed      duplex  link  state
    e1000g2  0:14:4f:ec:bc:1e  1000 Mbps  full    up    attached
    e1000g3                    1000 Mbps  full    up    attached

    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4  address: 0:14:4f:ec:bc:1e (auto)  LACP mode: active  LACP timer: short
    device   activity  timeout  aggregatable  sync  coll  dist  defaulted  expired
    e1000g2  active    short    yes           yes   yes   yes   no         no
    e1000g3  active    short    yes           yes   yes   yes   no         no

    dladm show-aggr -s:
    key: 1   ipackets  rbytes  opackets  obytes  %ipkts  %opkts
    Total    464982722061215050501612388529872161440848661
    e1000g2  30677028844072327428231142100939796617960694  66.0  59.5
    e1000g3  15821243372049177622000967520476 64822888149   34.0  40.5</pre>

    sundy.liu wrote:
    Unfortunately, however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between the two ports.
    Problem with the server? Switch? Aggregation software? All of the above? At any rate, I seem to be missing something. Any help regarding this issue would be greatly appreciated!

    If you're only running a single stream, that's all you'll see. Teaming/aggregating doesn't make one stream go faster.
    If you ran two streams simultaneously, then you should see a difference between a single 1G interface and an aggregate of two 1G interfaces.
    Darren

  • Link aggregation

    Hello,
    I have a new Mac Pro specifically dedicated as a security camera server and am wondering if I should implement the use of two Ethernet ports and/or link aggregation. Here is the rest of the story...
    My Mac Pro 3.06 GHz 12-core Intel Xeon computer coordinates twelve network megapixel security cameras. The network configuration now is that the Mac Pro is simply linked into the only network, the same one everything else in my home is on, via one main 10/100/1000 switch. This same network also hosts four other wired Macs, two HDMI-Cat6-HDMI channels, and a variety of other wired/wireless items that need Internet access. A brief test shows that my new Mac Pro does the job just fine under this plan. Isn't that what a network switch is supposed to do: juggle multiple data streams without them colliding or interfering with one another? Regardless, I haven't tried to take any diagnostic readings or done any comparisons. I have further found little information from Apple on the use of two Ethernet ports.
    So, any suggestions here? Maybe it would be good to have all of my cameras on one network with the Mac Pro, since it is the one that coordinates all the video data. However, downstream access to all of that data via the main household network and the Internet would be restricted, unless I can use both networks at the same time. Like I said, I am finding little information to even start designing a network with these two Ethernet ports.
    More microchips than sense,
    Dr. Z.

    Link Aggregation uses a slightly different protocol. It is different enough that the Mac will only commit both its Ethernet ports when the equipment you are connecting to explicitly supports Link Aggregation Protocol. (Certain high-end switches do this, but most consumer equipment does not.)
    The Mac can use such an aggregated link once established, but it does not do load balancing unless there are multiple virtual connections. If you have only one data stream, it will be routed over one side of the aggregate link and will not benefit from having the other side present, unless other connections to other places were using the same aggregate link.
    So I think that if you are taking advantage of Gigabit Ethernet, you are doing fine. Link Aggregation is available, but it is really solving a problem you do not have in a way that does not benefit you.
    Have you checked the actual speed in Network Utility to make sure you really are connecting at Gigabit speeds?
    I sometimes set these up with manual speed so that they connect quickly at the speed I specify (with flow control) instead of auto-speed.
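    On OS X that kind of manual setting can also be made from the command line with Darwin's ifconfig media syntax. A sketch, assuming en0 and a gigabit link:
    <pre># force 1000baseT full-duplex with flow control instead of autonegotiation
    sudo ifconfig en0 media 1000baseT mediaopt full-duplex,flow-control
    # verify what was negotiated
    ifconfig en0 | grep media</pre>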

  • Link Aggregation ... almost there!

    Hi all
    After struggling with Link Aggregation from Mac OS X Server to Extreme X450 switches, we are almost there. We've now managed to get a live working link where Ethernet 1 and 2 are green and Bond0 shows both links as active, and finally the Bond0 interface picks up a DHCP address.
    So that's great, but there is no network connection, which is weird because it got an IP address.
    Do we have to route the traffic over one of the other interfaces or something?
    Any suggestions at all?
    Cheers
    C

    Camelot wrote:
    The first, or at least - most obvious, problem is that you have IP addresses assigned to each of en0 and en1.
    This should not be the case. Only the bond0 network should have an IP address assigned.
    The other interfaces should not be configured at all. That's almost certainly the issue, since your machine has three IP addresses in the same subnet: one on each of en0, en1 and bond0. It's no wonder things are confused.
    Thanks, that now works a treat!
    I was hoping you could help with another set of ports, again being configured for Link Aggregation. We have tried to set it up in exactly the same way, but again it's not working. ifconfig returns the following:
    <pre>lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
        inet6 ::1 prefixlen 128
    gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
    stf0: flags=0 mtu 1280
    en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        inet6 fe80::219:e3ff:fee7:5706%en0 prefixlen 64 scopeid 0x4
        inet 169.254.102.66 netmask 0xffff0000 broadcast 169.254.255.255
        ether 00:19:e3:e7:57:07
        media: autoselect (1000baseT <full-duplex,flow-control>) status: active
        supported media: autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
    en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        inet6 fe80::219:e3ff:fee7:5707%en1 prefixlen 64 scopeid 0x5
        inet 169.254.102.66 netmask 0xffff0000 broadcast 169.254.255.255
        ether 00:19:e3:e7:57:07
        media: autoselect (1000baseT <full-duplex,flow-control>) status: active
        supported media: autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
    fw0: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 2030
        lladdr 00:1b:63:ff:fe:6e:6c:8a
        media: autoselect <full-duplex> status: inactive
        supported media: autoselect <full-duplex>
    bond0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 00:19:e3:e7:57:07
        media: autoselect status: inactive
        supported media: autoselect
        bond interfaces: en1 en0</pre>
    When I compared this to the working Link Aggregation ifconfig output, I noticed this one has the line "media: autoselect status: inactive" as opposed to active. Could this be the cause, and how do I rectify it?
    Thanks

  • Mac Pro - Link Aggregation Group

    Hi community, 
    does anyone know in what mode the virtual interface feature LAG (Link Aggregation Group) on a Mac Pro operates?
    The LACP modes are: On / Active / Passive / Off 
    And what driver mode is used?
    The driver modes are: Round-robin / Active-backup / XOR (balance-xor) / Broadcast / Adaptive transmit load balancing (balance-tlb) / Adaptive load balancing (balance-alb)
    There is nothing to adjust.
    Kind regards,
    Nils

    Link Aggregation only works when the device at the other end of the Links understands the commands of Link Aggregation Protocol. These tend to be Industrial-Strength Switches and Routers, not the affordable ones most Homeowners buy.
    The constraints of the other device, listed in its manual, will determine the best choices.
    Mac OS X 10.6 Server Admin: Setting Up Link Aggregation in Mac OS X Server
    OS X Mountain Lion: Combine Ethernet ports

  • Link Aggregation, revisited again

    I did all this research to find out how LACP Link Aggregation actually functions in OS X Server, and found many varying opinions out there (there is precious little in the OS X documentation; it essentially says set it up and you are good to go :). It seems there is disagreement on whether OS X (properly configured with an LACP switch) will actually spread out traffic dynamically, or whether it only spreads traffic out when one link becomes saturated.
    So I did a test, configured en0 and en1 as a link aggregate called Fat Pipe, and both show green (Cisco switch with LACP for those ports). However, no matter how many concurrent read/writes to different servers, I can't confirm that there is any improvement at all, which leads me to believe that 'true' load balancing is not happening.
    Also, I have read 2 differing opinions on how the bonded NICs should appear in Network prefs: configured as "Off", or as DHCP with self-assigned 169.x.x.x address.
    Anyone with info, it would be much appreciated. At this point I'm almost inclined to just give each NIC its own IP, and manually assign some users to connect with one IP and some with the other... doesn't seem as efficient as aggregation, though.
    Thanks

    I did all this research to find out how LACP Link Aggregation actually functions in OS X Server, and found many varying opinions out there
    There's no disagreement as far as I'm aware
    It's a function of link aggregation (which is an IEEE standard) and therefore not subject to the whims of Apple's implementation.
    The low-down is that both links are used, and specifically which link is used is based on the MAC addresses of the source and destination hosts.
    Moreover, there is no concept of link saturation or failover when one is full; it's entirely possible for one link to be saturated and all the other links to be completely idle, if that's the way the MAC addresses run.
    A simplified view of the algorithm makes it easy to understand: for a 2-link trunk using en0 and en1, the system looks at the MAC addresses of the source and destination hosts. If they're both odd it uses en0; if they're both even it also uses en0; if they're different (one is odd and one is even) then it uses en1. Therefore, it's entirely possible that all traffic will use one link if all the MAC addresses are odd (or even).
    The upshot is that you will never exceed the single-link speed (e.g. 1 Gbps if using 1 Gbps links) to any single host on the network, so the transfer rate between two devices will be capped at that. However, if a second host initiates a connection, and if that second host's MAC address causes its traffic to transmit over the other link, then the second host's transfers won't be impacted by the first host's transfer.
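    To make that simplified odd/even picture concrete, here is a toy shell sketch of the selection rule (an illustration only; the hash an actual implementation uses may differ):
    <pre># low bit of the last octet of each MAC picks the link (example values)
    src=0x12; dst=0x8f
    link=$(( (src ^ dst) & 1 ))   # 0 -> en0, 1 -> en1
    echo "this flow would use en$link"</pre>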
    I can't confirm that there is any improvement at all, which leads me to believe that 'true' load balancing is not happening.
    In the real world, it's unlikely that you'll get a single host pumping 1 Gbps of traffic into the network, so you only really see the effect when you have multiple hosts all talking at the same time. A reasonable test would be to time simultaneous transfers to/from the server, then pull one of the links and see if things slow down.
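    A concrete way to run that test, assuming iperf is installed on the server and on two clients (the hostname is an example):
    <pre># on the server
    iperf -s
    # on client A and client B at the same time
    iperf -c server.example.com -t 30
    # repeat with one bond member unplugged and compare the totals</pre>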
    Also, I have read 2 differing opinions on how the bonded NICs should appear in Network prefs: configured as "Off", or as DHCP with self-assigned 169.x.x.x address.
    The links don't appear at all in any of my servers running with trunks. The only interface I see is the trunk.
    If I use ifconfig in the terminal I see the underlying links as 'active', but they don't have any IP address assigned to them.
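    A quick sketch of how to confirm the same on any box with a trunk (device names are examples):
    <pre>ifconfig en0 | egrep "status|inet "
    ifconfig en1 | egrep "status|inet "
    # expect "status: active" but no inet line on the members; only bond0 carries the IP</pre>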

  • HP 1910-24g and link aggregation

    Hi!
    I need to know if these two switches are capable of link aggregation.
    I have an ESXi server, and I want to configure it for fault tolerance and load balancing with NIC teaming, connecting one NIC to each switch, so that if a switch fails the server continues working without interruption.
    Is it possible to do this, or do the switches support link aggregation only between ports of the same switch?
    Thanks!!

    Masterx81,
    This is the consumer products forum.
    You want the HP Enterprise Business Community for server questions.
    I am guessing this is the specific area you require for ESXi VMware.
    Also here... HP1910 Port Aggregation over 2 or more switches?

  • Link aggregation - performance overhead?

    Does anybody know if Solaris link aggregation incurs any performance degradation compared with non-redundant network connections?
    We've recently upgraded a client system and have enabled link aggregation to bind two interfaces (bge) to a logical aggregated interface.
    Apart from the server hardware upgrade, which brought a change from ce to bge interfaces, this is the only other significant network change.
    On this heavily used system, database network performance has degraded significantly, and as this is the only significant network change, I'm wondering whether link aggregation could be a cause.
    Reading through Sunsolve articles and man pages, there doesn't appear to be anything categorically stating aggregation imposes a cost overhead.
    However, I'd be interested to hear from others if they've experienced this.
    For what it's worth, this is Solaris 10 10/09 on an Ultrasparc VII.

    So many considerations when you talk about performance: what system hardware is in use? How many interfaces? Is this a CoolThreads "T" server (where single-threaded programs don't run as well as multi-threaded ones)? What do you mean by "database network performance has degraded"? Does the DB need to be tuned or adjusted for this? If using add-on cards, are they in the most optimal slot (on some platforms), and do they stay within the recommended number of interface boards supported by the system? A different storage array or config for your DB?
    When cases like these come in, there typically is a discussion that needs to take place around those questions, as well as what/when/where and expectations like those described here: http://blogs.sun.com/hippy/entry/what_s_the_answer_to
    Also understand that link aggregation does not necessarily mean that you will see an even balancing of the load between 2 or more aggregated interfaces. The default policy is "L4", which decides the outbound interface by TCP and UDP info if found in the packet, not by simple source/destination or MAC address hashing, although you can set those with -P if you want. You can also set multiple policies (-P L2,L3).
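    Changing the policy on an existing Solaris 10 aggregation is a one-liner with dladm; a sketch for aggregation key 1 (the key shown in the dladm output above):
    <pre># switch the outbound-port selection policy from L4 to L2+L3 hashing
    dladm modify-aggr -P L2,L3 1
    dladm show-aggr</pre>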
    So it /could/ be related to the aggregation change you made, but you also upgraded this DB machine from some other system, so there ARE other differences. Were you using link aggregation on the previous system, or Sun Trunking? What type of CPU did that other system have?
    that's all I can think of, but hopefully you get the idea.

  • Link aggregation and DNS

    Brand new Intel Xserve.
    Link aggregation turned on with two network cards works great.
    DNS is running, but when testing it doesn't resolve its own address.
    With link aggregation off, DNS works.
    What goes wrong?
    thanks

    You mean if the DNS server is running on the same machine you can no longer use it if bonding the interfaces?
    I only know I saw a problem when I used 127.0.0.1 as the nameserver address in Network settings on the bonded interface. Users that always get the same "MAC-address-locked" static IP from the DHCP server running on the same machine got that as the nameserver IP (from the machine's Network settings) instead of the ones in the DHCP config.
    (The DHCP server config also needed to be altered to serve on the bond0 interface.)
    And I think the note in /etc/named.conf "to only use 127.0.0.1" means you should only allow alterations of DNS server settings from that IP, and maybe not that you should use it as the IP in Network settings.

  • Link aggregation straddling two switches?

    Hey guys, I'm back with more questions about link aggregation. I figured out that I do have to manually configure both of my switches to support it. Now, though, I'm stuck trying to figure out the best way to implement it. I have a Netgear FSM726 and a Linksys EF24G2. Both are 24-port 100BT switches with 2 gigabit ports on them. They are currently set up like this: the Xserve runs with one gigabit port going into one of the gigabit ports on the Linksys. The other gigabit port on the Linksys runs into the Netgear to join the two together. That leaves one open gigabit port on the Netgear.
    So in order to set up link aggregation I'd have to use two gigabit ports on one of the switches, or use two 100BT ports. Alternatively, I was thinking that if I set up link aggregation on the Xserve and then just ran each of the two lines into one gigabit port on each switch, it might work without having to do any configuring on the switches? Will that cause any problems with network traffic?
    If I go with the gigabit-ports-on-one-switch idea, as far as I can see, I'd have to join the two switches with a 100BT connection instead of the current gigabit line. I'm not even sure if that matters, really. So which way is the better way to go? Also, if I go with using the gigabit ports on one switch, can I use two open 100BT ports to join into the other switch for increased bandwidth? Thanks for helping out here.

    Steve has it right. Link aggregation only works between two devices (e.g. a server and a switch, or two switches). You cannot link three devices (a server and two switches) using a single link aggregation. That's because of how the traffic flows over the link.
    Your best solution depends on the traffic patterns on your network - where are the clients that are hitting this server?
    If you have a dozen clients that hit the server a lot, plus more clients that don't hit it much (or at all), plus printers, etc., you could use two of the gigabit ports on one switch as a link aggregate to the server and plug the busy clients into that switch, then plug the other clients into the other switch, using a 100Base-T link (or multiple 100Base-T links) to connect the two switches together.
    This may or may not be viable, in which case a separate gigabit switch to connect the server and the two existing switches may be the best solution.
