My question is about traditional vs. new-era data center network architecture.
I saw an article saying we are moving toward a 2-tier architecture in our data center infrastructure, since according to the article we now have a lot of east-west traffic and the traditional 3-tier design is insufficient for it.
The DC 2-tier setup was:
N9k platform <– Core
N9k w/ N2k <– Agg/Access
We used to see 2-tier/collapsed-core architecture in small-to-medium campuses.
The data center uses a Clos fabric made up of spine nodes and leaf nodes. The Layer 2 and Layer 3 outside links (connecting to domains outside of the Clos fabric), as well as the firewalls and other service insertion, are typically done on a leaf node (although I think there are cases where some of those things are supported on the spine).
Generally speaking, spine nodes only connect to leaf nodes and leaf nodes only connect to spines (within the backbone of the fabric). This architecture offers equal-cost multi-pathing, non-blocking links, predictable latency, etc.
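To make the equal-cost multi-pathing part concrete, here is a minimal sketch of how a leaf spreads flows across its spine uplinks: hash the flow's 5-tuple and pick one of the equal-cost paths. Real switches do this in hardware, and the hash function and uplink names below are illustrative assumptions, not any vendor's actual algorithm.

```python
# ECMP flow placement sketch: same flow -> same spine (no packet reordering),
# different flows spread across all spines.
import hashlib

SPINE_UPLINKS = ["spine1", "spine2"]  # one equal-cost path per spine (assumed names)

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    # Hash the flow 5-tuple; the modulo selects one equal-cost uplink.
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.md5(flow).hexdigest(), 16)
    return SPINE_UPLINKS[digest % len(SPINE_UPLINKS)]

print(pick_uplink("10.0.0.1", "10.0.1.5", "tcp", 49152, 443))
```

Because the hash is deterministic, every packet of a given flow maps to the same spine, which is why ECMP balances per flow rather than per packet.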
The main difference between them is the massive bandwidth a Clos (leaf-spine) topology offers compared to a hierarchical topology (3-tier: core/aggregation/access). Since every leaf is connected to every spine, each leaf's transport capacity into the fabric is the sum of its links to all the spines.
So, if you have 4 leaves with 40 Gbps links to the spines, and 2 spines, then every leaf has 80 Gbps of uplink capacity (40 Gbps per link, per spine), and the fabric as a whole has 320 Gbps of aggregate leaf-to-spine capacity (80 Gbps × 4 leaves).
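The capacity arithmetic above can be written as a quick back-of-the-envelope calculation (the link speed, spine count, and leaf count are the example figures from the text; the function names are mine):

```python
# Leaf-spine fabric capacity, assuming every leaf has one uplink to every
# spine and all links run at the same speed.

def leaf_uplink_gbps(link_gbps: int, spines: int) -> int:
    """Uplink capacity of a single leaf: one link per spine."""
    return link_gbps * spines

def fabric_leaf_capacity_gbps(link_gbps: int, spines: int, leaves: int) -> int:
    """Aggregate leaf-to-spine capacity across the whole fabric."""
    return leaf_uplink_gbps(link_gbps, spines) * leaves

print(leaf_uplink_gbps(40, 2))              # 80 Gbps per leaf
print(fabric_leaf_capacity_gbps(40, 2, 4))  # 320 Gbps fabric-wide
```

This also shows why a Clos fabric scales horizontally: adding a spine raises every leaf's uplink capacity by one link's worth, without re-cabling the hierarchy.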
Also, there are deployments where the spines are connected as border devices; this is seen in Multi-Pod deployments. Usually, the rule of thumb is to connect only leaves to spines, nothing else goes there. But with this newer implementation you can merge 2 pods and connect them via the spines. I believe it started to be supported in ACI version 2.2, and the spine/leaf switches must support the IPN (Inter-Pod Network), which is only supported on 2nd-generation spines/leaves.