Purpose
This article provides information on the concepts, limitations, and sample configurations of link aggregation, NIC Teaming, Link Aggregation Control Protocol (LACP), and EtherChannel connectivity between ESXi/ESX and physical network switches, particularly Cisco and HP.
Resolution
Note: There are a number of requirements that must be considered before implementing any form of link aggregation. For related information on these requirements, see ESXi/ESX host requirements for link aggregation (1001938).
Link aggregation concepts:
- EtherChannel: This is a link aggregation (port trunking) method used to provide fault-tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links to create a logical Ethernet link with additional failover links. For additional information on Cisco EtherChannel, see the EtherChannel Introduction by Cisco.
- LACP or IEEE 802.3ad: The Link Aggregation Control Protocol (LACP) is included in the IEEE 802.3ad specification as a method to control the bundling of several physical ports into a single logical channel. LACP allows a network device to negotiate automatic bundling of links by sending LACP packets to the peer (a directly connected device that also implements LACP). For additional information on LACP, see the Link Aggregation Control Protocol whitepaper by Cisco.
Note: LACP is only supported in vSphere 5.1, using vSphere Distributed Switches (VDS) or the Cisco Nexus 1000v.
- EtherChannel vs. 802.3ad: EtherChannel and the IEEE 802.3ad standard are very similar and accomplish the same goal. There are few differences between the two beyond the fact that EtherChannel is Cisco proprietary and 802.3ad is an open standard.
- For additional information on EtherChannel implementation, see the Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches article from Cisco.
EtherChannel supported scenarios:
- One IP to many IP connections. (Host A making two connection sessions to Host B and C)
- Many IP to many IP connections. (Hosts A and B making multiple connection sessions to Hosts C, D, and so on)
Note: One IP to one IP connections over multiple NICs are not supported. (A single connection session from Host A to Host B uses only one NIC)
- Compatible with all ESXi/ESX VLAN configuration modes: VST, EST, and VGT. For more information on these modes, see VLAN Configuration on Virtual Switch, Physical Switch, and Virtual Machines (1003806).
- Supported Cisco configuration: EtherChannel Mode ON (enable EtherChannel only)
- Supported HP configuration: Trunk Mode
- Supported switch Aggregation algorithm: IP-SRC-DST (short for IP-Source-Destination)
- Supported Virtual Switch NIC Teaming mode: IP HASH
Note: The only load balancing option for vSwitches or vDistributed Switches that can be used with EtherChannel is IP HASH:
- Do not use beacon probing with IP HASH load balancing.
- Do not configure standby or unused uplinks with IP HASH load balancing.
- VMware only supports one EtherChannel per vSwitch or vNetwork Distributed Switch (vDS).
- Lower-end Cisco switch models may have MAC-SRC-DST set by default and may require additional configuration. For more information, see the Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches article from Cisco.
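To illustrate why the scenarios above behave as they do, here is a simplified sketch of an IP-SRC-DST style hash (not VMware's or Cisco's exact implementation): the uplink is chosen by XOR-ing the 32-bit source and destination IPv4 addresses and taking the result modulo the number of uplinks. A single source/destination pair always produces the same hash, which is why a one IP to one IP session uses only one NIC, while traffic to multiple destinations can spread across uplinks. All addresses below are examples.

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Simplified IP-SRC-DST style hash: XOR the two 32-bit IPv4
    addresses, then take the result modulo the uplink count."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % num_uplinks

# One IP to one IP: every packet of the session hashes to the same uplink.
assert pick_uplink("10.0.0.1", "10.0.0.2", 2) == pick_uplink("10.0.0.1", "10.0.0.2", 2)

# One IP to many IPs: different destinations can land on different uplinks.
uplinks = {pick_uplink("10.0.0.1", dst, 2) for dst in ("10.0.0.2", "10.0.0.3", "10.0.0.4")}
print(uplinks)
```

Because the hash is deterministic per address pair, adding NICs to the team never increases throughput for a single session; it only spreads independent sessions across links.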
This is a Cisco EtherChannel sample configuration:
interface Port-channel1
switchport
switchport access vlan 100
switchport mode access
no ip address
!
interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
no ip address
channel-group 1 mode on
!
ESX Server and Cisco switch sample topology and configuration:
Run this command to verify EtherChannel load balancing mode configuration:
Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-dst-ip
mpls label-ip
EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
IPv4: Source XOR Destination IP address
IPv6: Source XOR Destination IP address
MPLS: Label or IP
Switch# show etherchannel summary
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
Number of channel-groups in use: 2
Number of aggregators: 2
Group Port-channel Protocol Ports
------+-------------+-----------+--------------------------
1 Po1(SU) - Gi1/15(P) Gi1/16(P)
2 Po2(SU) - Gi1/1(P) Gi1/2(P)
Switch# show etherchannel protocol
Channel-group listing:
-----------------------
Group: 1
----------
Protocol: - (Mode ON)
Group: 2
----------
Protocol: - (Mode ON)

HP switch sample configuration
This configuration is specific to HP switches:
- HP switches support only two modes of LACP: ACTIVE and PASSIVE.
Note: LACP is only supported in vSphere 5.1 with vSphere Distributed Switches and on the Cisco Nexus 1000V.
- Set the HP switch port mode to TRUNK to accomplish static link aggregation with ESXi/ESX.
- TRUNK Mode of HP switch ports is the only supported aggregation method compatible with ESXi/ESX NIC teaming mode IP hash.
To configure a static portchannel in an HP switch using ports 10, 11, 12, and 13, run this command:
conf
trunk 10-13 Trk1 Trunk

To verify your portchannel, run this command:
ProCurve# show trunk
Load Balancing
Port | Name Type | Group Type
---- + --------- + ----- -----
10 | 100/1000T | Trk1 Trunk
11 | 100/1000T | Trk1 Trunk
12 | 100/1000T | Trk1 Trunk
13 | 100/1000T | Trk1 Trunk

Configuring load balancing within the vSphere/VMware Infrastructure Client
To configure vSwitch properties for load balancing:
- Click the ESXi/ESX host.
- Click the Configuration tab.
- Click the Networking link.
- Click Properties.
- Click the virtual switch in the Ports tab and click Edit.
- Click the NIC Teaming tab.
- From the Load Balancing dropdown, choose Route based on ip hash.
- Verify that there are two or more network adapters listed under Active Adapters.
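On ESXi 5.x hosts, the same teaming policy can also be applied from the host command line with esxcli; this is a sketch assuming a standard vSwitch named vSwitch0 (adjust the switch name for your environment):

```
# Set the teaming policy of vSwitch0 to IP hash (switch name is an example)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash

# Confirm the active failover and load balancing policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```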
- ESX/ESXi running on a blade system does not require IP hash load balancing if an EtherChannel exists between the blade chassis and the upstream switch. IP hash is required only if an EtherChannel exists between the blade and the internal chassis switch, or if the blade is operating in network pass-through mode with an EtherChannel to the upstream switch. For more information on these scenarios, contact your blade hardware vendor.
Additional Information
For more information, see NIC teaming using EtherChannel leads to intermittent network connectivity in ESXi (1022751).
LACP is supported on vSphere ESXi 5.1 on VMware vDistributed Switches only. For more information, see Enabling or disabling LACP on an Uplink Port Group using the vSphere Web Client (2034277) and the What's New in VMware vSphere 5.1 - Networking white paper.