Multicast Implementation with Virtual Port Channels and FabricPath
PIM Sparse Mode and its derivatives are supported in Cisco NX-OS. This white paper explains how multicast has been implemented on the Nexus platform to provide optimal performance in both virtual PortChannel (vPC) and FabricPath environments.
This paper assumes the reader has a working understanding of multicast, vPC, and FabricPath. As a reminder, both vPC and FabricPath allow all links to forward data: the blocked ports of Spanning Tree Protocol are avoided, while loop avoidance is still provided.
First let's do a quick review of the function and benefits of vPCs.
vPCs are an extension of port channels. Traditional port channel technology does not allow the members of a port group to terminate on more than one device; vPCs virtualize two Nexus switches so that they appear as one logical switch to the downstream device.
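As an illustration, a minimal vPC configuration on each Nexus peer might look like the following sketch. The interface numbers, domain ID, and keepalive addresses are hypothetical, chosen only to show the structure:

```
! On each vPC peer (hypothetical interface numbers and addresses)
feature vpc
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
!
interface port-channel1
  switchport mode trunk
  vpc peer-link          ! carries state synchronization between the peers
!
interface port-channel20
  switchport mode trunk
  vpc 20                 ! same vPC number on both peers; connects the downstream switch
```

The downstream switch sees port-channel 20 as an ordinary port channel terminating on a single device, even though its member links land on two physical Nexus switches.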
This provides two major benefits: use of all available uplink bandwidth with loop avoidance, and high availability for the downstream switches (as shown in the illustration above). vPCs are widely deployed on Cisco Nexus switches in data center environments.
Cisco NX-OS supports PIM-SM (Protocol Independent Multicast-Sparse Mode) only. This paper looks at how vPC manages multicast traffic. NX-OS synchronizes multicast forwarding state on both vPC peers, and the group membership information learned through IGMP snooping is likewise shared between the peers. The PIM process in vPC mode ensures that only one of the vPC peers actively forwards multicast data downstream to the receivers.
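For reference, enabling PIM-SM on an NX-OS switch amounts to enabling the feature and configuring sparse mode on the relevant Layer 3 interfaces. The rendezvous point address and interface below are hypothetical placeholders:

```
feature pim
ip pim rp-address 10.1.1.1 group-list 224.0.0.0/4   ! hypothetical static RP
!
interface Vlan100
  ip pim sparse-mode     ! PIM-SM is the only PIM mode NX-OS supports
```

With both vPC peers configured this way, each peer builds multicast state, but only one of them is elected to forward a given multicast flow toward the receivers, as described above.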