Anyone who’s managed switches over the years knows that the Spanning Tree Protocol (STP) is both the best and worst thing to ever happen to the data center at Layer 2 of the OSI model.
On the plus side, Spanning Tree is what first allowed us to build redundant paths into our switching infrastructure, making the data center far more resilient to outages than ever before. Anyone who has lived through a “broadcast storm” knows the full value of Spanning Tree in a traditional switching environment. We’ve also seen many improvements to Spanning Tree over the years that make it converge faster and work more efficiently (e.g., Rapid Spanning Tree, Bridge Assurance, and many others).
On the flip side, this resiliency comes at a steep price. To guarantee there are no loops in the switching infrastructure (still referred to as “bridging loops”… a hold-over from the modern switch’s predecessor, the bridge), Spanning Tree blocks every additional link beyond the first active path between two switches. While we want the extra bandwidth and redundancy, Spanning Tree simply sees those additional connections as loops and disables them. If the active link fails, it will kick into action and select a new path, but that reconvergence can take 30 to 50 seconds with classic 802.1D, an unreasonable amount of time by today’s data center standards.
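To make that trade-off concrete, here is a toy sketch (not the real 802.1D state machine; the link names and the 30-second figure are simplifying assumptions) of what Spanning Tree does with several links between the same two switches: one link forwards, the rest sit blocked, and a failure costs a convergence delay before a blocked link takes over.

```python
# Toy illustration of STP's trade-off: redundant links exist for resiliency,
# but only one forwards traffic, and failover pays a convergence delay.

CONVERGENCE_DELAY_SECONDS = 30  # rough classic-802.1D reconvergence, assumed here


class RedundantLinks:
    def __init__(self, link_names):
        # The "best" link forwards; every other link is blocked, not used.
        self.forwarding = link_names[0]
        self.blocked = list(link_names[1:])

    def usable_bandwidth(self, per_link_gbps):
        # Only the single forwarding link carries traffic; blocked links add nothing.
        return per_link_gbps

    def fail_forwarding_link(self):
        # On failure, an alternate is unblocked -- but only after reconvergence.
        print(f"{self.forwarding} failed; reconverging (~{CONVERGENCE_DELAY_SECONDS}s outage)")
        self.forwarding = self.blocked.pop(0)
        print(f"{self.forwarding} now forwarding")


uplinks = RedundantLinks(["Eth1/1", "Eth1/2", "Eth1/3"])
print("Usable bandwidth:", uplinks.usable_bandwidth(10), "Gbps of 30 Gbps installed")
uplinks.fail_forwarding_link()
```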
With the introduction of the Cisco Unified Computing System (UCS), we are no longer bound by Spanning Tree’s restrictive management of our uplink Ethernet connections. By default, the UCS system’s main components, the Fabric Interconnects, operate in End-Host mode. In this mode, the UCS system looks to the north-bound LAN switches like one big server with a lot of ports. As long as a company has a standards-compliant network, it can simply “drop” the UCS solution into its existing infrastructure. Because the existing network sees UCS as just another host, the switches don’t believe they’re connecting to another switch, and therefore no switching changes are needed.
While End-Host mode is worthy of its own blog series, it works by making the Fabric Interconnect logically invisible to the south-bound blades in the Cisco 5108 chassis. Through the “pinning” of a blade’s network adapter (vNIC) to one of the Fabric Interconnect’s uplink ports or port-channels, the blade effectively sees the LAN switch as its next hop. Any traffic that leaves the blade for the LAN cloud exits through that pinned uplink, and the return traffic comes back in on the same uplink. As far as the blade is concerned, its vNIC is connected directly to the LAN. Additionally, as long as the operating system installed on the UCS blades understands and handles multipathing, it can achieve not only a direct logical connection to the LAN cloud but also an active/active connection via both Fabrics A and B.
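To picture how pinning behaves, here is a minimal sketch that assumes a simple round-robin assignment; UCS Manager’s actual pinning algorithm, and all names used here (fabric “A”, uplink IDs, vNIC names), are illustrative assumptions, not the product’s implementation.

```python
# A minimal sketch of dynamic pinning: each vNIC is tied to one uplink, all of
# its LAN-bound traffic uses that uplink, and an uplink failure triggers
# re-pinning to a surviving uplink (or OS-level failover to the other fabric).


class FabricInterconnect:
    def __init__(self, fabric_id, uplinks):
        self.fabric_id = fabric_id
        self.uplinks = list(uplinks)   # border ports toward the LAN switches
        self.pin_table = {}            # vNIC -> pinned uplink
        self._next = 0

    def pin(self, vnic):
        # Every frame this vNIC sends toward the LAN exits this one uplink,
        # so upstream switches learn its MAC on that uplink only.
        uplink = self.uplinks[self._next % len(self.uplinks)]
        self._next += 1
        self.pin_table[vnic] = uplink
        return uplink

    def uplink_failed(self, uplink):
        self.uplinks.remove(uplink)
        for vnic, pinned in list(self.pin_table.items()):
            if pinned == uplink:
                if self.uplinks:
                    self.pin(vnic)               # re-pin to a surviving uplink
                else:
                    self.pin_table[vnic] = None  # OS fails over to the other fabric


fi_a = FabricInterconnect("A", ["Eth1/17", "Eth1/18"])
for vnic in ["esx01-vnic0", "esx02-vnic0", "esx03-vnic0"]:
    print(vnic, "pinned to", fi_a.pin(vnic))
fi_a.uplink_failed("Eth1/17")
print(fi_a.pin_table)
```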
End-Host mode allows us to use the UCS infrastructure components (the Fabric Interconnects and IOMs) to transparently manipulate the flow of traffic between the blades’ network adapters and the LAN switches. And because the blade’s logical next hop is not the Fabric Interconnect but the north-bound switch, no bridging loops can exist within the UCS architecture! A Fabric Interconnect in End-Host mode never forwards a frame received on one uplink back out another uplink, so there is simply no path for a frame to circle back, and no need for Spanning Tree.
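The sketch below, a deliberately simplified view with made-up port names, captures the forwarding rule that makes the loop impossible: a frame arriving on an uplink is only ever handed to server ports, never flooded back out another uplink toward the LAN.

```python
# Simplified End-Host mode forwarding rule: uplink-to-uplink forwarding never
# happens, so even with every uplink active there is no loop to break.

def forward(ingress_port, port_roles):
    """Return the set of port roles a frame from ingress_port may be sent to."""
    role = port_roles[ingress_port]
    if role == "uplink":
        # Frames from the LAN only go down to servers, never back toward the LAN.
        return {"server"}
    if role == "server":
        # Server traffic may reach other local servers or exit its pinned uplink.
        return {"server", "uplink"}
    return set()


ports = {"Eth1/17": "uplink", "Eth1/18": "uplink", "Eth1/1": "server"}
print(forward("Eth1/17", ports))  # {'server'} -> nothing loops back to the LAN
print(forward("Eth1/1", ports))   # may go to servers or out the pinned uplink
```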
Again, End-Host mode is the default mode for the UCS architecture and the best-practice recommendation for operation within your data center. Reclaiming the simultaneous use of all of our network uplinks by reducing or eliminating Spanning Tree is just one part of Cisco’s larger data center design philosophy around the Nexus Operating System (NX-OS). Other emerging NX-OS features, such as the TRILL-based FabricPath, are likewise designed to eliminate the effects of Spanning Tree within the data center. It’s never been a more exciting time to dive into Cisco’s data center offerings… stay tuned for many more posts on these amazing technologies!
Recommended Courses
DCUCD v4.0 - Data Center Unified Computing Design
DCUCI v4.0 - Data Center Unified Computing Implementation