Spine and leaf was designed to be very deterministic in terms of bandwidth and latency. It is also built with faster links: usually 10/25 Gb/s down toward the servers and 40/100 Gb/s between the leaf and spine layers.
In this type of architecture we actually create a partial mesh in the infrastructure: every leaf connects to every spine, but leaves do not connect to each other and neither do spines.
If we are going to connect our traditional network or core to this newly built architecture, we connect through a leaf, the same way we would plug in a server, a firewall, or another Layer 4-7 device.
The benefit of this architecture is that a lot of local routing and switching can be done right at the leaf. If I want to reach a service or device on any other leaf, I can do so in two hops or less.
In terms of scalability, if we want to add more capacity we simply add more leaves and plug them in. If we want more bandwidth or availability, we simply add more spines.
Spine layer (made up of switches that perform routing) is the backbone of the network, where every leaf switch is interconnected with each and every spine switch.
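The mesh and the two-hop property above can be sketched as a small graph model. This is an illustrative sketch only (the switch names and helper function are made up, not part of any fabric API):

```python
# Hypothetical sketch: model a leaf-spine fabric and verify that any leaf
# reaches any other leaf in two hops (leaf -> spine -> leaf).
from itertools import product

spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

# Full mesh between the two layers: every leaf links to every spine,
# but leaves never link to leaves and spines never link to spines.
links = set(product(leaves, spines))

def two_hop_paths(src_leaf, dst_leaf):
    """Return every leaf -> spine -> leaf path between two leaves."""
    return [(src_leaf, s, dst_leaf)
            for s in spines
            if (src_leaf, s) in links and (dst_leaf, s) in links]

# Each spine offers one equal-cost path, which is where the
# equal-cost multi-pathing (ECMP) of the fabric comes from.
print(two_hop_paths("leaf1", "leaf3"))
```

Note that adding a spine to the `spines` list immediately adds one more equal-cost path between every pair of leaves, which matches the scale-out behavior described above.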
Leaf layer consists of access switches that connect to devices like servers, firewalls, load balancers, and edge routers.
North-south traffic – is client to server traffic, between the data center and the rest of the network (anything outside the data center).
East-west traffic – refers to traffic within a data center — i.e. server to server traffic.
Border leaf refers to the leaf switches that provide connectivity to networks outside the fabric. In dual-site designs, transit leaf switches are the leaves that connect to spine switches on both sites; there are no special requirements and no additional configuration required for transit leaf switches.
Performance Optimized Datacenter (POD): a 40-foot shipping container with up to 22 racks of servers inside, pre-installed and ready to go. Container data centers were popularized by vendors like Sun Microsystems and Rackable Systems and are considered part of the modern data center.
Application Centric Infrastructure (ACI) – is Cisco's comprehensive SDN architecture.
If Leaf1 doesn't have information about the destination, it uses the spine as a proxy: it encapsulates the packet in a VXLAN header using its own VTEP (Leaf1) as the source, but uses the spine proxy IP as the destination VTEP. The traffic is sent to a spine node; the spine has the complete endpoint information, so it resolves the egress leaf for the packet, changes the destination VTEP to Leaf2 (the egress leaf), and forwards the packet toward the destination host. The result is that the destination is two hops or less away from the source.
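The spine-proxy lookup described above can be sketched in a few lines. This is a simplified illustration only; the table names, VTEP addresses, and host names are made up for the example:

```python
# Hypothetical sketch of VXLAN spine-proxy forwarding.
SPINE_PROXY_VTEP = "10.0.0.100"   # anycast proxy address on the spines

leaf1_endpoint_table = {"host-a": "10.0.0.1"}   # Leaf1 only knows local hosts
spine_endpoint_table = {                        # spines hold the full mapping
    "host-a": "10.0.0.1",   # behind Leaf1 (VTEP 10.0.0.1)
    "host-b": "10.0.0.2",   # behind Leaf2 (VTEP 10.0.0.2)
}

def ingress_leaf_forward(dst_host):
    """Leaf1 encapsulates; unknown destinations are sent to the spine proxy."""
    dst_vtep = leaf1_endpoint_table.get(dst_host, SPINE_PROXY_VTEP)
    return {"src_vtep": "10.0.0.1", "dst_vtep": dst_vtep, "dst_host": dst_host}

def spine_proxy_forward(packet):
    """The spine resolves the real egress leaf and rewrites the destination VTEP."""
    if packet["dst_vtep"] == SPINE_PROXY_VTEP:
        packet["dst_vtep"] = spine_endpoint_table[packet["dst_host"]]
    return packet

pkt = spine_proxy_forward(ingress_leaf_forward("host-b"))
print(pkt["dst_vtep"])  # the packet now heads to Leaf2's VTEP, 10.0.0.2
```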
“Generally speaking, the spine nodes only connect to leaf nodes and leaf nodes only connect to spines (within the backbone of the fabric). This architecture offers equal-cost multi-pathing, non-blocking links, predictable latency, etc.
The main difference between a Clos (leaf-spine) topology and a hierarchical topology (3 tier – core/aggregation/access) is the massive bandwidth you can have. Since every leaf is connected to every spine, a leaf's transport capacity is the sum of its links to every spine.
So, if you have 4 leaves with 40 Gb/s links to the spines, and 2 spines, then every leaf has 80 Gb/s of transport capacity into the fabric (adding the 40 Gb/s per link, per spine).
Also, there is the case of deployments where the spines are connected as border devices, as seen in multi-pod deployments. Usually, the rule of thumb is to connect only leaves to spines; nothing else goes there. But with this implementation you can merge 2 pods and connect them via the spines. I believe this started to be supported in ACI version 2.2, and the spine/leaf switches must support the IPN (Inter-Pod Network), which is only supported on 2nd-generation spines/leaves.”
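The capacity arithmetic in the quote above is easy to check. A quick sketch (the link speed and counts are just the example numbers, not a recommendation):

```python
# Per-leaf uplink capacity in a leaf-spine fabric:
# one uplink per spine, so capacity = link speed * number of spines.
link_gbps = 40    # each leaf-to-spine link, in Gb/s
num_spines = 2
num_leaves = 4

per_leaf_capacity = link_gbps * num_spines    # 40 Gb/s * 2 spines = 80 Gb/s
total_fabric_links = num_leaves * num_spines  # every leaf links to every spine

print(per_leaf_capacity, total_fabric_links)  # 80 8
```

Adding a third spine would raise every leaf's capacity to 120 Gb/s without touching the existing links, which is the scale-out property the notes describe.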
VXLAN extends VLANs, or Layer 2 traffic, within the data center, while OTV extends Layer 2 traffic across data centers.
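As a concrete picture of what VXLAN adds on the wire, here is a minimal sketch of the 8-byte VXLAN header defined in RFC 7348 (OTV uses a different, Cisco-specific encapsulation, not shown):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348:
    flags (0x08 = valid-VNI bit set), 24 reserved bits,
    a 24-bit VNI, and a final reserved byte."""
    flags = 0x08
    # "!B3xI": flags byte, 3 zero (reserved) bytes, then the VNI
    # shifted left 8 bits so its low reserved byte is zero.
    return struct.pack("!B3xI", flags, vni << 8)

# VNI 5000 identifies one virtual Layer 2 segment inside the data center.
print(vxlan_header(5000).hex())
```

The 24-bit VNI field is why VXLAN scales to roughly 16 million segments versus the 4096 IDs available to traditional VLANs.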