In any operational data center, deep visibility into network traffic is critical for operations and application troubleshooting.
In this white paper, we discuss how to achieve optimal visibility into any type of application traffic with Cisco Application Centric Infrastructure, or ACI.
Cisco ACI is a data center networking technology. The hardware components of a Cisco ACI data center are simple: one or more devices from each of the following three categories, cabled together, create an ACI fabric:
A spine is a Cisco Nexus 9000 switch that acts like the backplane of the fabric. Although a fabric can run with a single spine, two to six spines are typically deployed for fault tolerance and load balancing. End node devices such as servers and firewalls are never connected directly to spines, and spines are never cabled to each other.
A leaf is a Cisco Nexus 9000 switch that acts like a line card in a modular chassis. End node devices such as servers and firewalls are connected directly to leaves over Ethernet. Every leaf must be cabled to every spine, and leaves are never cabled to each other.
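The cabling rules above are strict enough to be checked mechanically. The sketch below is a hypothetical validator (device names are illustrative, not part of any ACI tool):

```python
# Validate a proposed cabling plan against the spine-leaf rules:
# every leaf connects to every spine; no leaf-leaf or spine-spine links.

def validate_fabric(spines, leaves, links):
    """links is a set of frozensets, each holding two device names."""
    errors = []
    for a, b in (tuple(link) for link in links):
        if a in spines and b in spines:
            errors.append(f"spines {a} and {b} must not be cabled together")
        if a in leaves and b in leaves:
            errors.append(f"leaves {a} and {b} must not be cabled together")
    for leaf in leaves:
        for spine in spines:
            if frozenset((leaf, spine)) not in links:
                errors.append(f"missing link between {leaf} and {spine}")
    return errors

spines = {"spine1", "spine2"}
leaves = {"leaf1", "leaf2"}
links = {frozenset(("leaf1", "spine1")), frozenset(("leaf1", "spine2")),
         frozenset(("leaf2", "spine1"))}
print(validate_fabric(spines, leaves, links))
# One rule is violated above: leaf2 is not cabled to spine2.
```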
All networking configuration is always performed on the Application Policy Infrastructure Controller, or APIC. The APIC is a dedicated Cisco server cabled to one or two leaves. While a single APIC will work, multiple APICs are typically deployed for management fault tolerance, and their contents are automatically replicated to one another using the principle of sharding. Production traffic in the data center never passes through the APIC: ACI separates the control plane from the data plane.
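The APIC exposes its configuration model through a REST API, so everything described below can also be done programmatically. A session is opened by posting credentials to the aaaLogin endpoint; the sketch below only builds and prints the login payload (the hostname and credentials are placeholders, and the actual POST is shown as a comment):

```python
import json

def build_login_payload(username, password):
    # Shape of the JSON body the APIC aaaLogin endpoint expects.
    return {"aaaUser": {"attributes": {"name": username, "pwd": password}}}

apic_url = "https://apic.example.com/api/aaaLogin.json"  # placeholder hostname
payload = build_login_payload("admin", "password")
print(json.dumps(payload))
# In practice, something like:
#   requests.post(apic_url, json=payload, verify=False)
# returns a token used as a cookie on subsequent configuration requests.
```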
Creating an Allow-list model
If an allow list isn’t created in a new ACI installation, then no network traffic will be allowed for any device connected to any leaf, and the entire data center would be unusable.
The first task to create an allow list is to create a tenant. Multiple tenants can exist within an ACI fabric. By default, all tenants are fully isolated from other tenants.
Inside the tenant, create an application profile. Multiple application profiles can exist in any tenant. An application profile should represent actual applications that communicate with each other in the data center. A common three-tiered application example is with web servers, application servers and databases. The web servers often need to communicate to the application servers, and the application servers need to communicate to the databases.
For each type of application, an endpoint group, or EPG, can be created in the application profile. By default, all new endpoints connected to the leaf or leaves have full connectivity to each other if they are in the same EPG.
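The tenant, application profile, and EPG hierarchy above maps directly onto the JSON object tree that the APIC REST API accepts (classes fvTenant, fvAp, and fvAEPg). The sketch below builds that tree for the three-tiered example; all names are illustrative:

```python
import json

# One tenant containing one application profile with three EPGs, in the
# nested JSON form posted to the APIC (e.g. to /api/mo/uni.json).
tenant = {
    "fvTenant": {
        "attributes": {"name": "Prod"},
        "children": [
            {"fvAp": {
                "attributes": {"name": "ThreeTierApp"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": epg}}}
                    for epg in ("Web", "App", "DB")
                ],
            }}
        ],
    }
}
print(json.dumps(tenant, indent=2))
```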
All devices in one EPG are totally isolated from other EPGs by default unless there is a contract between the EPGs. A contract is like a “virtual cable” in that it is always connected at both ends, to two different ACI objects such as EPGs. A contract is then logically connected to a filter. A filter is very similar to a traditional extended access list, except that it contains no IP addressing; the entire ACI allow list is not based on IP addresses.
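Contracts and filters are objects in the same tree (classes vzBrCP, vzSubj, vzFilter, vzEntry). The sketch below builds a filter matching TCP port 1433 and a contract whose subject references it; the App EPG would then consume and the DB EPG provide this contract (via fvRsCons and fvRsProv, not shown). Names and port are illustrative:

```python
import json

# A filter matching TCP/1433: note there is no IP addressing anywhere.
sql_filter = {
    "vzFilter": {
        "attributes": {"name": "sql"},
        "children": [
            {"vzEntry": {"attributes": {
                "name": "tcp-1433", "etherT": "ip", "prot": "tcp",
                "dFromPort": "1433", "dToPort": "1433"}}}
        ],
    }
}
# A contract whose subject points at the filter by name.
contract = {
    "vzBrCP": {
        "attributes": {"name": "app-to-db"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": "sql-subj"},
                "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "sql"}}}
                ],
            }}
        ],
    }
}
print(json.dumps([sql_filter, contract]))
```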
A basic principle of Layer 2 switching is that when traffic passes between two or more devices on a switch, the switch does not forward that traffic to a third device that wants to view the packets. The solution is Switched Port Analyzer, or SPAN, also known generically as port mirroring or port monitoring.
While SPAN can send traffic from any source port to a destination port on the same switch, to view the traffic at a distant destination Cisco uses Encapsulated Remote Switched Port Analyzer, or ERSPAN. ERSPAN uses a Generic Routing Encapsulation (GRE) tunnel to carry all of the mirrored traffic to any chosen remote IP address.
Configuring ACI SPAN
In early releases of Cisco ACI, seeing all traffic between the spines and leaves required in-line wire taps. While this was very effective for viewing all ACI traffic, the taps added significant hardware cost. In all recent versions of ACI, physical wire taps are no longer needed: SPAN sessions can be configured per tenant, entirely in software. The ACI tenant contains an application profile, and the application profile contains endpoint groups, or EPGs. As shown here, the source and destination SPAN ports are based on the EPGs.
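Tenant-scoped SPAN is itself expressed as objects under the tenant. The sketch below outlines the shape of such a configuration: one source group pointing at an EPG and one ERSPAN destination group pointing at a remote analyzer IP. The span: class and attribute names (spanSrcGrp, spanRsSrcToEpg, spanDestGrp, spanRsDestEpg) are our best-effort reading of the object model and should be verified against the ACI release in use; all names and addresses are illustrative:

```python
import json

# Source group: mirror both directions of traffic for the Web EPG.
span_src = {
    "spanSrcGrp": {
        "attributes": {"name": "web-span", "adminSt": "enabled"},
        "children": [
            {"spanSrc": {
                "attributes": {"name": "web-src", "dir": "both"},
                "children": [
                    {"spanRsSrcToEpg": {"attributes": {
                        "tDn": "uni/tn-Prod/ap-ThreeTierApp/epg-Web"}}}
                ],
            }}
        ],
    }
}
# Destination group: the ERSPAN GRE tunnel endpoint running Wireshark.
span_dest = {
    "spanDestGrp": {
        "attributes": {"name": "analyzer"},
        "children": [
            {"spanDest": {
                "attributes": {"name": "wireshark-host"},
                "children": [
                    {"spanRsDestEpg": {"attributes": {
                        "ip": "10.1.1.100",
                        "tDn": "uni/tn-Prod/ap-ThreeTierApp/epg-Mon"}}}
                ],
            }}
        ],
    }
}
print(json.dumps([span_src, span_dest]))
```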
ACI SPAN Wireshark Packet Decode
After the ACI SPAN session is configured to a destination IP address, you can start Wireshark or any other protocol analyzer on the computer at the destination IP. Note in the packet capture example shown here that the received SPAN traffic contains inner and outer Ethernet and IP headers: one pair for the GRE tunnel and the other for the original traffic being captured. In this example, the original traffic was an ICMP ping test, carried via ERSPAN to the remote destination running Wireshark.
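The inner/outer layering that Wireshark decodes can be illustrated with a short, simplified sketch: an outer Ethernet + IPv4 + GRE wrapper around the original (inner) Ethernet + IPv4 frame. Real ERSPAN also inserts an ERSPAN header after GRE, which this sketch omits; all addresses are synthetic:

```python
import struct

def ipv4_header(src, dst, proto):
    # Minimal 20-byte IPv4 header (no options, checksum left zero).
    return struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, proto, 0,
                       bytes(src), bytes(dst))

def eth_header(ethertype):
    # 14-byte Ethernet header with zeroed MAC addresses.
    return b"\x00" * 12 + struct.pack("!H", ethertype)

GRE_PROTO = 47  # IP protocol number for GRE
outer = eth_header(0x0800) + ipv4_header([10, 0, 0, 1], [10, 0, 0, 2], GRE_PROTO)
gre = struct.pack("!HH", 0, 0x6558)  # basic GRE carrying transparent Ethernet
inner = eth_header(0x0800) + ipv4_header([192, 168, 1, 1], [192, 168, 1, 2], 1)  # ICMP
packet = outer + gre + inner

# Walk the layers the way a decoder would.
outer_proto = packet[14 + 9]                # protocol field of outer IP header
inner_start = 14 + 20 + 4                   # skip outer Ethernet + IP + GRE
inner_proto = packet[inner_start + 14 + 9]  # protocol field of inner IP header
print(outer_proto, inner_proto)             # 47 (GRE) and 1 (ICMP)
```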
Security in any data network improves considerably when Cisco ACI is deployed with a well-architected allow list. SPAN in ACI is performed in software, in the tenant under the EPG, which provides excellent visibility into all traffic flows in an ACI data center at no added hardware cost.
These Global Knowledge Cisco classes provide a comprehensive view of Cisco ACI.
DCACI - Implementing Cisco Application Centric Infrastructure v1.0
DCACIO - Cisco Application Centric Infrastructure Operations and Troubleshooting v4.2
DCACIA - Implementing Cisco Application Centric Infrastructure – Advanced v1.0
DCAUI - Implementing Automation for Cisco Data Center Solutions v1.1