Link aggregation between fabrics?

Hi all!
I have a question: is it possible to aggregate links, with LACP or similar, between UCS 6200 fabric interconnects?
I have two buildings (they are very close, with a couple of walls between them) and I would like to install a UCS blade server chassis in each, plus one fabric interconnect attached to each chassis.
This is the topology,
Switch 2960 for LAN                  Switch 2960 for LAN
          |                                    |
          |                                    |
Fabric Interconnect ============== Fabric Interconnect
                     10Gb LACP
          |                                    |
          |                                    |
UCS Blade Server Chassis             UCS Blade Server Chassis
Thanks in advance.

Hello David,
I assume the links between the FIs you are referring to are the cluster (L1 and L2) ports.
These ports need to be directly connected, and the maximum distance is limited by the Ethernet cable, i.e. 100 meters.
The FI configures these ports as a bond internally, so no additional configuration is needed.
Padma
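
If you want to verify the state of those cluster links once the pair is up, the FI's local management CLI reports it. A quick sketch (the UCS-A prompt name is illustrative):

    UCS-A# connect local-mgmt
    UCS-A(local-mgmt)# show cluster extended-state

The extended-state output includes the status of the L1/L2 links on both fabric interconnects.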

Similar Messages

  • Link aggregation between NAS and a switch: the Macs have very slow access...

    Hello,
    in my Office we're working with Macs and PCs and all the data is on a NAS.
    Here is our configuration:
    NAS <-link1->Switch<-Link2->Macs or PC.
    Macs are connected with the AFP protocol (because SMB is very slow).
    We want to use link aggregation between the NAS and the switch (with the 802.3ad protocol), but when we do, all the Macs have very slow access to the NAS, while everything is fine for the PCs.
    What can we do? Is there a problem with Mac OS X and link aggregation?
    Thank you for your help.
    Nicolas

    Sorry, not sure what the question is exactly.
    For one, you must have an Xserve, or Ethernet cards capable of jumbo frames; I assume the switch and NAS are capable?
    Possible clues...
    http://docs.info.apple.com/article.html?path=ServerAdmin/10.4/en/c3ha3.html
    http://discussions.apple.com/thread.jspa?threadID=1715388&tstart=0
    http://www.macnn.com/articles/04/06/21/link.aggregation.for.macs/
    http://www.smallnetbuilder.com/content/view/30556/53/
    http://www.afp548.com/forum/viewtopic.php?showtopic=8309
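
    On the jumbo frames angle: if every device in the path (Mac, switch, NAS) supports them, the MTU can be raised from the Mac's shell. A minimal sketch; the interface name en0 is just an example:

        sudo ifconfig en0 mtu 9000

    If any hop in the path can't handle 9000-byte frames, leave the MTU at the default 1500.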

  • Can't get link aggregation working on SRW2048

    Hello
    We are trying to set up link aggregation between 2 nodes in our cluster. They are 64-bit nodes running openSUSE 11.1 and are connected by Gigabit Ethernet. We have an SRW2048 switch.
    The problem is that we are not able to see any performance improvement in network bandwidth after the configuration. We seem to have configured the nodes correctly: ifconfig shows something like this on both nodes, where eth4 and eth5 are slaves to bond0:
    bond0     Link encap:Ethernet  HWaddr 00:1E:68:78:F9:84  
          inet addr:192.168.1.198  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:68ff:fe78:f984/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:40443248 errors:442 dropped:0 overruns:0 frame:442
          TX packets:30955485 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:31836352423 (30361.5 Mb)  TX bytes:31997996320 (30515.6 Mb)
    eth4      Link encap:Ethernet  HWaddr 00:1E:68:78:F9:84  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:6213 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15477741 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1290977 (1.2 Mb)  TX bytes:16000747443 (15259.5 Mb)
          Interrupt:246 Base address:0xe000 

eth5      Link encap:Ethernet  HWaddr 00:1E:68:78:F9:84  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:40437035 errors:442 dropped:0 overruns:0 frame:442
          TX packets:15477744 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:31835061446 (30360.2 Mb)  TX bytes:15997248877 (15256.1 Mb)
          Interrupt:247 Base address:0x4000
    Starting the network services shows
     bond0     
    bond0     enslaved interface: eth5
    bond0     enslaved interface: eth4
    bond0     (DHCP) . . IP/Netmask: '192.168.1.198' / '255.255.255.0'
    So it seems that the client-side configuration is correct. The bond has been configured with the default mode, balance-rr.
    On the switch side, we have grouped the right ports to form LAG groups and have enabled LACP on them. Running some trusted TCP benchmarks yields the same results as the original configuration without link aggregation.
    I feel we are missing some configuration on the switch side.
    Can anybody point out what we are doing wrong?
    Thanks,
    K**bleep**ij

    If you think this is a misconfiguration on the switch side, please try resetting the switch and then reconfiguring it. You may also seek assistance from Cisco/Linksys tech support so that they can guide you step by step in real time.
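
    One mismatch worth ruling out here (an assumption on my part, not something confirmed in the thread): balance-rr is not an LACP mode, so if LACP is enabled on the switch LAG, the Linux bond generally needs mode 802.3ad instead. On openSUSE that would look roughly like this (a sketch following the sysconfig ifcfg conventions):

        # /etc/sysconfig/network/ifcfg-bond0
        BOOTPROTO='dhcp'
        BONDING_MASTER='yes'
        # 802.3ad is the bonding driver's LACP mode; balance-rr would need a static LAG
        BONDING_MODULE_OPTS='mode=802.3ad miimon=100'
        BONDING_SLAVE0='eth4'
        BONDING_SLAVE1='eth5'

    Also note that a single TCP stream never exceeds one member link's speed, so benchmark with several parallel streams.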

  • SG300-10mp Fibre Link aggregation

      Hi,
    I have 2 SG300-10MP switches between which I am trying to create link redundancy over the two fibre ports.
    On the web interface, when I go into the LAG settings, ports g9 and g10, which are the fibre ones, don't show up.
    How do I create the link aggregation between them? Or perhaps I'm doing the wrong thing... All I require is for the two switches to function as normal, but should one fibre port become faulty or the cable break, the other port should take over straight away with no latency, or at least very minimal latency.
    Please help!                

    Hello Ashley,
    I think I see where the confusion is here.
    When you are looking at the LAG settings page, the 8 entries you see listed represent the 8 possible LAGs that the switch can handle; the page actually looks exactly the same even on a 24-port switch. You are seeing the LAGs themselves, not your individual switchports.
    If you go to LAG management, you should be able to select one of those 8 LAGs and click the edit button. The window that pops up should allow you to add both fiber ports to the LAG. I don't have one of these in front of me right now, but since the SFP slots on the 300s are combo ports, they will probably be numbered as whatever ethernet port is combo'ed to the SFP slot.
    Simply add the ports you would like to the LAG, and check LACP if you are using it on the other switch (you cannot change this setting later; you would have to delete the LAG first).
    This way you have the redundancy you are looking for, with the added benefit of having more throughput available between those two switches, since the traffic will be load-balanced across the LAG.
    A LAG will also fail over much faster than two redundant links, since it won't have to go through the Spanning Tree process every time a link state changes.
    Like I said I don't have one of these in front of me right now, so if I messed something up let me know and I will try it in the lab tomorrow.
    Let me know if you need any more information,
    Christopher Ebert
    Senior Network Support Engineer - Cisco Small Business Support Center
    *Please rate helpful posts*
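
    For reference, the Sx300 series also exposes this through its CLI; a hedged sketch, assuming 1.x firmware port naming where the combo ports appear as gi9 and gi10 ("mode auto" is the LACP option, "mode on" would make a static LAG):

        switch(config)# interface range gi9-10
        switch(config-if-range)# channel-group 1 mode auto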

  • HP 1910-24g and link aggregation

    Hi!
    I need to know if these two switches are capable of link aggregation.
    I have an ESXi server, and I want to configure it for fault tolerance and load balancing with NIC teaming, connecting one NIC to each switch, so that if a switch fails the server keeps working without interruption.
    Is it possible to do this, or do the switches support link aggregation only between ports of the same switch?
    Thanks!!

    Masterx81,
    This is the consumer products forum.
    You want the HP Enterprise Business Community  for server questions.
    I am guessing this is the specific area you require for ESXi/VMware.
    Also here... HP1910 Port Aggregation over 2 or more switches?
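
    For what it's worth, if the goal is just failover and load balancing across two independent switches, ESXi's default NIC teaming needs no LAG on the switches at all; only static LAG/IP-hash setups require switch-side configuration. A hedged sketch of the host side (esxcli as in ESXi 5.x; the vSwitch and vmnic names are placeholders):

        esxcli network vswitch standard policy failover set \
            --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1

    With the default 'route based on originating virtual port' policy, each switch just sees ordinary independent links, so a switch failure only costs the paths through that switch.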

  • How can I set up link aggregation correctly?

    I have a Enterprise T5220 server, running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3) and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently to where I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server and no changes were made to the switch that it's connected to (Nortel Passport 8600 Series) and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail. (LACP enabled/disabled, different policies, etc.) I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side but I can get more details if needed.
    In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? The switch? The aggregation software? All of the above? At any rate, I seem to be missing something. Any help regarding this issue would be greatly appreciated!
    Regards,
    sundy
    Output of several commands on the T5220:
    uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
    ether 0:14:4f:ec:bc:1e
    dladm show-dev:
    e1000g0 link: unknown speed: 0 Mbps duplex: half
    e1000g1 link: unknown speed: 0 Mbps duplex: half
    e1000g2 link: up speed: 1000 Mbps duplex: full
    e1000g3 link: up speed: 1000 Mbps duplex: full
    dladm show-link:
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    aggr1 type: non-vlan mtu: 1500 aggregation: key 1
    dladm show-aggr:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto)
    device     address            speed      duplex  link  state
    e1000g2    0:14:4f:ec:bc:1e   1000 Mbps  full    up    attached
    e1000g3                       1000 Mbps  full    up    attached
    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
    device     activity  timeout  aggregatable  sync  coll  dist  defaulted  expired
    e1000g2    active    short    yes           yes   yes   yes   no         no
    e1000g3    active    short    yes           yes   yes   yes   no         no
    dladm show-aggr -s:
    key: 1 ipackets rbytes opackets obytes %ipkts %opkts
    Total 464982722061215050501612388529872161440848661
    e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
    e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5
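
    (For reference, the TCP tuning described in the post would be applied with ndd; a sketch using the values quoted above:

        ndd -set /dev/tcp tcp_max_buf 2097152
        ndd -set /dev/tcp tcp_xmit_hiwat 524288
        ndd -set /dev/tcp tcp_recv_hiwat 524288

    These settings don't persist across reboots unless added to a startup script.)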

    sundy.liu wrote:
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between the two ports. Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!

    If you're only running a single stream, that's all you'll see. Teaming/aggregating doesn't make one stream go faster.
    If you ran two streams simultaneously, then you should see a difference between a single 1G interface and an aggregate of two 1G interfaces.
    Darren
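
    A quick way to see Darren's point in practice is to benchmark with parallel streams instead of one; a sketch with iperf (the host name is a placeholder):

        iperf -s                      # on the T5220
        iperf -c t5220 -P 4 -t 30     # from a client: 4 parallel TCP streams

    With an L4 policy, different TCP port pairs can hash onto different member links, so parallel streams are what actually exercise the aggregate.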

  • SGE2010 stacking versus link aggregation

    Can someone provide an answer regarding stacking the SGE2010 switches versus link aggregation if greater than 1 Gb connectivity is required between individual switches? Currently have several switches in a stack configuration but would like to increase the bandwidth between some or all of the switches. Does stacking support a link aggregation configuration? If so what ports can be used and how should the link aggregation be configured in conjunction with the stacking?

    Hi Jacqueline,
    On the SGE2010, the default stacking ports are 24/GBIC 3 and 48/GBIC 4; that's two ports.
    If you want a LAG between SGE2010s, my suggestion, and others may disagree, would be to turn off the default stacking mode and enable standalone mode.
    Then create a LAG between the SGE2010 switches.
    regards dave

  • Link Aggregation, revisited again

    I did all this research to find out how LACP link aggregation actually functions in OS X Server, and found many varying opinions out there (there is precious little in the OS X documentation; it essentially says set it up and you are good to go). It seems there is disagreement on whether OS X (properly configured with an LACP switch) will actually spread out traffic dynamically, or whether it only spreads out traffic when one link becomes saturated.
    So I did a test: I configured en0 and en1 as a link aggregate called Fat Pipe, and both show green (Cisco switch with LACP for those ports). However, no matter how many concurrent reads/writes to different servers I run, I can't confirm that there is any improvement at all, which leads me to believe that 'true' load balancing is not happening.
    Also, I have read 2 differing opinions on how the bonded NICs should appear in Network prefs: configured as "Off", or as DHCP with self-assigned 169.x.x.x address.
    Anyone with info, would be much appreciated. At this point I'm almost inclined to just give each NIC their own IP, and just manually assign some users to connect with one IP, and some with the other...doesn't seem as efficient as aggregation though.
    Thanks

    I did all this research to find out how LACP Link Aggregation actually functions in OS X Server, and found many varying opinions out there
    There's no disagreement as far as I'm aware.
    It's a function of link aggregation (which is an IEEE standard) and therefore not subject to the whims of Apple's implementation.
    The low-down is that both links are used, and specifically which link is used is based on the MAC addresses of the source and destination hosts.
    Moreover, there is no concept of link saturation or failover when one is full - it's entirely possible for one link to be saturated and all the other links to be completely idle if that's the way the MAC addresses run.
    A simplified view of the algorithm makes it easy to understand - for a 2-link trunk using en0 and en1, the system looks at the MAC addresses of the source and destination hosts. If they're both odd it uses en0, if they're both even it uses en0, and if they're different (one is odd and one is even) then it uses en1. Therefore, it's entirely possible that all traffic will use one link if all the MAC addresses are odd (or even).
    The upshot is that you will never exceed the single link speed (e.g. 1 Gbps if using 1 Gbps links) to any single host on the network, so the transfer rate between two devices will be capped at that. However, if a second host initiates a connection, and if that second host's MAC address causes its traffic to transmit over the other link, then the second host's transfers won't be impacted by the first host's transfer.
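
    A toy sketch of that parity rule (the addresses and the exact hash are illustrative; real implementations vary, but the flavor is the same):

        # pick a link by XORing the low bits of the two MAC addresses
        src_low=0x1e; dst_low=0x84            # last octets of two example MACs
        link=$(( (src_low ^ dst_low) & 1 ))   # 0 -> en0, 1 -> en1
        echo "en$link"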
    I can't confirm that there is any improvement at all, which leads me to believe that 'true' load balancing is not happening.
    In the real world, it's unlikely that you'll get a single host pumping 1gbps of traffic into the network, so you only really see the effect when you have multiple hosts all talking at the same time. A reasonable test would be to time simultaneous transfers to/from the server, then pull one of the links and see if things slow down.
    Also, I have read 2 differing opinions on how the bonded NICs should appear in Network prefs: configured as "Off", or as DHCP with self-assigned 169.x.x.x address.
    The links don't appear at all in any of my servers running with trunks. The only interface I see is the trunk.
    If I use ifconfig in the terminal I see the underlying links as 'active', but they don't have any IP address assigned to them.

  • Link aggregation straddling two switches?

    Hey guys, I'm back with more questions about link aggregation. I figured out that I do have to manually configure both of my switches to support it. Now, though, I'm stuck trying to figure out the best way to implement it. I have a Netgear FSM726 and a Linksys EF24G2. Both are 24-port 100BT switches with 2 Gigabit ports on them. They are currently set up like this: the Xserve runs with one gigabit port going into one of the gigabit ports on the Linksys. The other gigabit port on the Linksys runs into the Netgear to join the two together. That leaves one open gigabit port on the Netgear.
    So in order to set up link aggregation I'd have to use two gigabit ports on one of the switches, or use two 100BT ports. Alternatively, I was thinking that if I set up link aggregation on the Xserve and then just ran each of the two lines into one gigabit port on each switch, it might work without having to do any configuring on the switches? Will that cause any problems with network traffic?
    If I go with the gigabit-ports-on-one-switch idea, as far as I can see, I'd have to join the two switches with a 100BT connection instead of the current gigabit line. I'm not even sure if that matters really. So which way is the better way to go? Also, if I go with using the gigabit ports on one switch, can I use two open 100BT ports to join it to the other switch for increased bandwidth? Thanks for helping out here.

    Steve has it right. Link aggregation only works between two devices (e.g. a server and a switch, or two switches). You cannot link three devices (a server and two switches) using a single link aggregation. That's because of how the traffic flows over the link.
    Your best solution depends on the traffic patterns on your network - where are the clients that are hitting this server?
    If you have a dozen clients that hit the server a lot, plus more clients that don't hit it much (or at all), plus printers, etc., you could use two of the gigabit ports on one switch as a link aggregate to the server and plug the busy clients into that switch, then plug the other clients into the other switch, using a 100base-T link (or multiple 100base-T links) to connect the two switches together.
    This may or may not be viable, in which case a separate gigabit switch to connect the server and the two existing switches may be the best solution.

  • Port-channel Problem between Fabric Interconnect and N7K vPC

    Dear all,
    I have a problem with a port-channel uplink between a Fabric Interconnect and an N7K using vPC.
    This is my network topology for the UCS deployment:
    On the N7K I have configured vPC for the red link and the green link. On Fabric Interconnect A I have configured a port-channel with Port 1 and Port 2 as members, whose uplink is the red link. On Fabric Interconnect B I have configured a port-channel with Port 1 and Port 2 as members, whose uplink is the green link.
    The show interface port-channel output on the N7K looks good; every port-channel is up and has all its members. But on the Fabric Interconnect side, when I look in UCS Manager, the status of the port-channel on Fabric A and Fabric B is faulted with Additional Info: No operational member, although all links are up and the port-channel is enabled in UCS Manager. When I look at the properties of Port 1 and Port 2 in the port-channel, I see the membership status is: individual. This means the port-channel is not up and has no members in this configuration. I want to use the port-channel for load balancing and to increase the uplink bandwidth to 20 Gig. I don't understand why this happens.
    Please help me resolve this problem. I have attached screen captures from UCS Manager showing the status of the port-channel and its member ports.
    Can anyone help me resolve this? Thank you very much. Please see the attached items for more detail about the fault.
    Thanks,
    Trung.

    Thanks Matthew very much,
    I have resolved this problem. The cause was a mismatched port-channel protocol between the N7K and the Fabric Interconnect. The Fabric Interconnect always uses LACP, but the N7K was using port-channel mode on, which is why the port-channel failed. I configured LACP for the port-channel on the N7K, and that resolved the problem.
    Thanks,
    Trung.
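
    For anyone hitting the same fault: the fix Trung describes corresponds to switching the N7K side from a static port-channel to LACP. A minimal NX-OS sketch, assuming port-channel 1 with members Eth1/1-2 (the interface numbers are illustrative):

        N7K(config)# interface ethernet 1/1-2
        N7K(config-if-range)# channel-group 1 mode active

    'mode active' runs LACP, matching the Fabric Interconnect's behavior, whereas 'mode on' (static) leaves the FI ports stuck in the 'individual' state described above.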

  • Link aggregation query

    Hi,
    I've recently taken over a network which has 4 Linksys switches - 3 * SRW2048 and 1 * SRW2024.
    One of the SRW2048 devices has 2 LAGs set up (of 4 ports each) connecting to the other 2 SRW2048s (each with a single LAG). So far, so good.
    However, between the 'main' SRW2048 and the SRW2024 there are 4 ethernet cables, but no link aggregation is set up. Everything seems to be working OK, but I'm wondering if this is an 'OK setup'? If so, where does it rate in performance terms between having just one connecting cable and having all 4 with link aggregation?
    Thanks for any help
    Michael

    Hi Michael,
    It could be that spanning tree is blocking three of those active links.
    Might I suggest you save the configurations to your PC, so they can be restored if needed.
    I think it's a great idea to add the four switch ports to a new Link Aggregation (LAG) group on each switch.
    Make sure, on both switches, that you click 'save settings'.
    A LAG provides link redundancy and load sharing between the switches, so I personally love the idea of using Link Aggregation (LAG).
    regards Dave

  • Link Aggregation with LACP off

    Hi,
    How does link aggregation in Solaris 10 work on the switch side if LACP is off?
    If it's flip-flopping between two ports on the switch, who's responsible for updating the ARP table in the switch?
    Thanks

    I haven't tried it with a Cisco but you don't have any other/old network/interface settings left on the OS X server when trying LACP do you?
    There should only be the bond0 "interface" (with IP, netmask and so on) when trying LACP.
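
    Back on the Solaris 10 side of the question: with LACP off, dladm builds a static aggregation, and the switch must be configured with a matching static LAG; there is no protocol to negotiate membership or detect a misconfiguration. A sketch of both variants (device names are illustrative):

        dladm create-aggr -d e1000g2 -d e1000g3 1               # LACP off (static)
        dladm create-aggr -l active -d e1000g2 -d e1000g3 1     # LACP active

    As for the table in the switch: it's the MAC forwarding table (not ARP) that matters here, and with a static LAG the switch treats all member ports as one logical port, so learned addresses don't flip-flop between them.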

  • Link Aggregation on ML-1000-2 Card

    We are trying to bundle two STM-4 circuits on ML-1000-2 cards residing inside a Cisco ONS 15454 at both ends to achieve 1.2G. The ONS software version is 7. We tried RPR (SPR) and link aggregation using a POS channel; all sorts of other configurations from the Cisco web site did not work either. We are able to bundle the interfaces using RPR, but the traffic goes out on POS0 and returns on POS1. When we use a POS channel, the port-channel bundles the two POS interfaces, but only one POS interface carries traffic at a time.
    Has anyone successfully tried this scenario before? Please help.

    It sounds like you are trying to balance the traffic between two ML cards connected together:
    ML POS0----POS1 ML
    ML POS1----POS0 ML
    On the ML cards, traffic that comes in on a port-channel and is flooded to the RPR will all go out only one POS port (not load-balanced). The POS port selection is based on the mapping of member 0 of the port-channel, which can vary when ethernet ports change state.

  • SRW224G4 and 2960 Link Aggregation LACP problem

    Hello
    We have Linksys SRW224G4 and Cisco 2960 switches.
    We made a link aggregation of two ports between them, and all is working except that when I run a continuous ping across it, one ping fails every 90 seconds. I found the problem: when I connect to the SRW224G4 by telnet, press CTRL+Z, enter lcli, and log in, I get a console. Every 90 seconds I get these messages:
    01-Jan-2000 22:14:49 %TRUNK-W-PORTREMOVED: Port g2 removed from ch1
    01-Jan-2000 22:14:53 %TRUNK-W-PORTREMOVED: Port g1 removed from ch1
    01-Jan-2000 22:14:53 %LINK-W-Down:  ch1
    01-Jan-2000 22:14:53 %TRUNK-I-PORTADDED: Port g1 added to ch1
    01-Jan-2000 22:14:53 %LINK-I-Up:  ch1
    01-Jan-2000 22:14:56 %TRUNK-I-PORTADDED: Port g2 added to ch1
    So after this, one ping fails, then all is good for the next 90 seconds, and then the same group of messages appears again and one more ping fails.
    I have tried changing every option on the Linksys and nothing helps; I still get this link down/up and one failed ping.
    Has anyone had the same problem? Can you help me somehow?
    Thank you
    Hrvoje Pastar
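
    One thing worth ruling out (an assumption, not something confirmed in this thread): a periodic flap like this can come from a channel-mode mismatch between the two sides. It may be worth checking that the 2960 side is running LACP rather than a static channel; an IOS sketch (port numbers are illustrative):

        2960(config)# interface range gigabitEthernet 0/1 - 2
        2960(config-if-range)# channel-group 1 mode active

    'mode active' negotiates LACP with the SRW's LACP-enabled LAG; 'mode on' is static and can produce exactly this kind of periodic port removal from the channel.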


  • Fedora and SRW2024 link aggregation

    Usually I am reluctant to criticize products, but this time I decided to get it off my chest.
    For people considering the purchase of a Linksys SRW2024, I would suggest thinking twice before making this step.
    Let's cut to the chase... Recently I built iSCSI storage with 2x 1Gbit NICs (Intel 1000PT) for my ESX server, and the only missing ingredient was a decent gigabit switch. Eventually I got the SRW2024, as according to the Linksys specs it supports VLANs, link aggregation, etc. After configuring the Fedora distro for link bonding, I was unable to ping any of the hosts on the network (172.16.192.0/24). I checked the status of the individual NICs and the configuration on the switch a million times, but I couldn't find anything out of the ordinary. To eliminate issues with NIC bonding on the Fedora system, I used my old DES3226 Layer 2 switch... and guess what? It worked just fine.
    The next obvious step was to upgrade the firmware from 1.2.1 to 1.2.2 (hardware version 1.2), but yet again... NO LUCK.
    My question to Linksys Support, or anyone who has come across the same issue, is: are you aware of the issue, and if so, did you find a solution? I have found a number of users having similar problems!

    Hi, the topic is quite old, but I am facing the same problem. We have three SRW2024 switches. The LACP between the switches is working, but I want to use 802.3ad (Ethernet link aggregation) with two Xserves and can't get it working. Linksys Support is unable to help, because they ignore Apple products and call it an Apple problem. Which firmware do you use on your SRW2024? Many thanks in advance. Regards, svenc
