Tier 2-3 & Clos Fabric

2 Tier over 3 Tier Architecture

My question about traditional and new-era DC network architecture.

I saw an article saying we are moving toward a 2 tier architecture in our data center infrastructure, since according to the article we now have a lot of east-west traffic and the traditional 3 tier design is insufficient for it.

DC 2 Tier setup was:
N9k Platform <– Core
N9k w/ N2k <–Agg/Access

We used to see the 2 tier/collapsed core architecture in small to medium campuses.

The data center uses a Clos fabric made up of spine nodes and leaf nodes. The Layer 2 and Layer 3 outside links (connecting to domains outside of the Clos fabric), as well as the firewalls and other service insertion, are typically done on a leaf node (although I think there are cases where some of those things are supported on the spine).

Generally speaking, the spine nodes only connect to leaf nodes and leaf nodes only connect to spines (within the backbone of the fabric). This architecture offers equal-cost multipathing, non-blocking links, predictable latency, etc.

The main difference between them is the massive bandwidth you can have with a Clos (leaf-spine-leaf) topology compared to a hierarchical topology (3 tier – core/aggregation/access). Since every leaf is connected to every spine, each leaf's transport capacity into the fabric is the sum of its links to all of the spines.

So, if you have 4 leaves with 40 Gb/s links to the spines, and 2 spines, then every leaf has 80 Gb/s of transport capacity into the fabric (adding the 40 Gb/s per link, per spine), for an aggregate of 320 Gb/s across the 4 leaves.
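To make the arithmetic concrete, here is a minimal sketch (my own helper, not from the quoted post) that computes the per-leaf uplink capacity and the aggregate uplink capacity of the fabric:

```python
# Minimal sketch, assuming one uplink from every leaf to every spine.
def fabric_capacity(num_leaves: int, num_spines: int, link_gbps: int):
    """Return (per-leaf uplink Gb/s, total uplink Gb/s across the fabric)."""
    per_leaf = num_spines * link_gbps   # one uplink per spine
    total = num_leaves * per_leaf       # summed over all leaves
    return per_leaf, total

# The example from the text: 4 leaves, 2 spines, 40 Gb/s uplinks.
per_leaf, total = fabric_capacity(num_leaves=4, num_spines=2, link_gbps=40)
print(per_leaf)  # 80  -> each leaf has 80 Gb/s toward the spines
print(total)     # 320 -> aggregate uplink capacity across all 4 leaves
```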

Also, spines connected as border devices are seen in multipod deployments. Usually, the rule of thumb is to connect only leaves to the spines; nothing else goes there. But with this newer implementation you can merge 2 pods and connect them via the spines. I believe it started to be supported in ACI version 2.2, and the spine/leaf switches must support the IPN (only supported on 2nd-generation spines/leaves).

https://learningnetwork.cisco.com/thread/112163

Spine-Leaf Summary

With the latest trends in the data center, traffic patterns have shifted because of virtualization and new application architectures. This new traffic trend is called “east-west,” which means the majority of the traffic and bandwidth is actually between nodes within the data center, such as when migrating a virtual machine from one node to another or with application clustering.

A few benefits of the spine-leaf design:
1. The design scales horizontally through the addition of spine switches, which adds availability and bandwidth, something a spanning-tree network cannot do.
2. It uses routing with equal-cost multipathing so that all links stay active, giving higher availability during link failures (see the sketch below).
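As a rough illustration of point 2, here is a toy sketch (my own code, not a vendor implementation) of per-flow ECMP: a hash of the 5-tuple selects one of the active spine uplinks, so all links carry traffic, and a failed link simply means flows are spread over the remaining ones:

```python
import hashlib

def pick_uplink(flow_5tuple: tuple, active_uplinks: list) -> str:
    """Hash the flow's 5-tuple onto one of the currently active uplinks."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).hexdigest()
    return active_uplinks[int(digest, 16) % len(active_uplinks)]

uplinks = ["to-spine1", "to-spine2", "to-spine3", "to-spine4"]
flow = ("10.1.1.10", "10.2.2.20", 6, 49152, 443)   # src, dst, proto, sport, dport

print(pick_uplink(flow, uplinks))        # all four uplinks are usable
print(pick_uplink(flow, uplinks[:-1]))   # spine4 link down: flows re-hash over 3 links
```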

Basic Spine and Leaf Architecture

Spine and Leaf was designed to be very deterministic as far as bandwidth and latency. It is also built with higher speeds, usually 10–25 Gb/s down toward the servers and 40–100 Gb/s between the spine and leaf.

In this type of architecture we actually create a partial mesh in the infrastructure: every leaf connects to every spine, but leaves do not connect to other leaves and spines do not connect to other spines.
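A small sketch (the device names are made up) of which links exist in that mesh: every leaf-to-spine pair is a link, and nothing else is:

```python
from itertools import product

spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

# All fabric links are leaf<->spine pairs.
fabric_links = set(product(leaves, spines))

def connected(a: str, b: str) -> bool:
    """True only for leaf-to-spine links, matching the description above."""
    return (a, b) in fabric_links or (b, a) in fabric_links

print(connected("leaf1", "spine2"))   # True  - every leaf reaches every spine
print(connected("leaf1", "leaf2"))    # False - no leaf-to-leaf links
print(connected("spine1", "spine2"))  # False - no spine-to-spine links
```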

If we are going to connect our traditional network or core to this newly built architecture, we actually connect through a leaf, the same way I would plug in a server, a firewall, or another Layer 4–7 device.

The benefit of this architecture is that I can actually do a lot of my local routing and switching at the leaf. If I want to get to a service or device on any other leaf, I can do so in 2 hops or less.

In terms of scalability, if we want to add more port capacity we simply add more leaves and plug them in. If we want to add more bandwidth or availability, we simply add more spines.
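A hedged sketch of that scaling trade-off: the leaf oversubscription ratio is simply downlink bandwidth divided by uplink bandwidth, so adding spines (more uplinks) lowers it. The port counts and speeds below are example assumptions, not from the original article:

```python
def leaf_oversubscription(server_ports: int, server_gbps: int,
                          num_spines: int, uplink_gbps: int) -> float:
    """Downlink bandwidth divided by uplink bandwidth for one leaf."""
    downlink = server_ports * server_gbps
    uplink = num_spines * uplink_gbps
    return downlink / uplink

# 48 x 10G server ports, 2 spines with 40G uplinks -> 6:1 oversubscribed.
print(leaf_oversubscription(48, 10, num_spines=2, uplink_gbps=40))   # 6.0
# Adding two more spines (4 x 40G uplinks) halves the ratio to 3:1.
print(leaf_oversubscription(48, 10, num_spines=4, uplink_gbps=40))   # 3.0
```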

The spine layer (made up of switches that perform routing) is the backbone of the network, where every leaf switch is interconnected with each and every spine switch.
The leaf layer consists of access switches that connect to devices like servers, firewalls, load balancers, and edge routers.

North-south traffic – client-to-server traffic between the data center and the rest of the network (anything outside the data center).
East-west traffic – traffic within the data center, i.e. server-to-server traffic.

Border leaf refers to leaf switches that connect the fabric to networks outside of it (the Layer 3 outside links). In stretched-fabric designs, transit leaf switches provide connectivity between two sites by connecting to spine switches in both sites; there are no special requirements and no additional configuration required for transit leaf switches.

Performance Optimized Data Centers (PODs): a POD is a 40-foot shipping container with up to 22 racks of servers inside, pre-installed and ready to go. Container data centers were popularized by vendors like Sun Microsystems and Rackable Systems and are considered part of the modern data center.

Application Centric Infrastructure (ACI) – Cisco's comprehensive SDN architecture for the data center.

If leaf 1 doesn’t have information about the destination, it is going to use the spine as a proxy. It encapsulates the packet into a VXLAN header using its own VTEP as the source (leaf 1), but uses the spine proxy IP as the destination VTEP. The traffic is sent to the spine node; the spine has the complete endpoint information, so it resolves the egress leaf location for the packet, changes the destination VTEP to leaf 2 (the egress leaf), and forwards the packet toward the destination host. The result is still 2 hops or less from source to destination.
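Here is a simplified sketch of that forwarding decision (my own pseudologic and made-up addresses, not actual ACI code): the ingress leaf falls back to the spine proxy VTEP when it does not know the destination endpoint, and the spine rewrites the destination VTEP to the real egress leaf:

```python
SPINE_PROXY_VTEP = "10.0.0.100"                     # hypothetical proxy address

leaf1_endpoint_table = {"10.1.1.20": "10.0.0.2"}    # dest host -> egress leaf VTEP
spine_endpoint_table = {"10.1.1.30": "10.0.0.3"}    # spine holds the full mapping

def leaf_encapsulate(dst_host: str, my_vtep: str = "10.0.0.1") -> dict:
    """Ingress leaf: pick the destination VTEP, falling back to the spine proxy."""
    dst_vtep = leaf1_endpoint_table.get(dst_host, SPINE_PROXY_VTEP)
    return {"src_vtep": my_vtep, "dst_vtep": dst_vtep, "inner_dst": dst_host}

def spine_proxy(packet: dict) -> dict:
    """Spine proxy: resolve the real egress leaf and rewrite the destination VTEP."""
    packet["dst_vtep"] = spine_endpoint_table[packet["inner_dst"]]
    return packet

pkt = leaf_encapsulate("10.1.1.30")       # unknown on leaf 1 -> sent to the proxy
print(pkt["dst_vtep"])                    # 10.0.0.100 (spine proxy)
print(spine_proxy(pkt)["dst_vtep"])       # 10.0.0.3 (egress leaf), still 2 hops
```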

VXLAN extends VLANs (Layer 2 traffic) within the data center, while OTV extends Layer 2 traffic between data centers.
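For context on what “extending Layer 2” means on the wire, here is a minimal sketch of the VXLAN encapsulation itself: the original frame is wrapped in an 8-byte VXLAN header carrying a 24-bit VNI, inside a UDP/IP packet (header layout per RFC 7348; the frame bytes below are placeholders):

```python
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit set) + 24-bit VNI."""
    flags = 0x08 << 24                      # only the "VNI valid" flag set
    return struct.pack("!II", flags, vni << 8)

inner_frame = bytes(60)                     # placeholder Ethernet frame
vxlan_packet = vxlan_header(vni=10010) + inner_frame

print(len(vxlan_packet))                    # 68 = 8-byte header + inner frame
```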

3 & 2 Tier Architecture

Network Requirements:

  • Small network: Provides services for up to 200 devices.
  • Medium-size network: Provides services for 200 to 1,000 devices.
  • Large network: Provides services for 1,000+ devices.

 

3 Tier Architecture – divided into 3 areas: Core, Distribution, and Access.

Core:
– In charge of fast routing, to get traffic as quickly as possible from one distribution switch to another.
– Gateway to the Internet or other sites.
– The core layer also provides scalability and fast convergence.

Distribution:
– A multilayer switch capable of doing routing, with high capacity, high port speed, and high port density.
– A layer that aggregates the server access layer, using switches to segment workgroups and isolate network problems in a data center environment.

Access:
– We can use Layer 2 switches because forwarding is based on MAC addresses rather than routing.
– A layer that is used to grant user access to network devices.

Traditional 3 Tier Architecture:

Cisco portfolio switches that can be used in the core and distribution layers of an enterprise network, as per the design requirements, include:

  • Cisco Nexus 7000 Series
  • Cisco Catalyst 6800 Switch
  • Cisco Catalyst 6500 Switch
  • Cisco Catalyst 4500-X Switches
  • Cisco 3850 Switches
  • Meraki MS400 series Switches
Cisco access switches used widely in networks are:
  • Cisco 4500 E Switch
  • Cisco 3850 Switch
  • Cisco 3600 Switch
  • Cisco 2960 X/XR/L Switches
  • Meraki MS series switches

 

2 Tier Architecture

The three-tier hierarchical design maximizes performance, network availability, and the ability to scale the network design.

However, many small enterprise networks do not grow significantly larger over time. Therefore, a two-tier hierarchical design where the core and distribution layers are collapsed into one layer is often more practical. A “collapsed core” is when the distribution layer and core layer functions are implemented by a single device. The primary motivation for the collapsed core design is reducing network cost, while maintaining most of the benefits of the three-tier hierarchical model.

A second article says that we are moving toward a 2 tier architecture:

“In the last few years, applications have been changing inside of the data center, and there is a lot more east-west traffic inside of the data center. So it becomes very inefficient to have your traffic move through the network up to the aggregation layer and then come back down to the access layer to be routed, or in some cases switched, in the environment. It also starts to become very inefficient if you have to integrate Layer 4–7 services such as firewalls and load balancers; the most common place to do this in previous architectures was at the aggregation layer. So, what we are starting to see is that we are moving toward a 2 tier architecture where we keep our data center core but actually collapse our aggregation and access layers, so that we can do local routing and switching for the servers and so that we can basically add services wherever we need to, because we are building a network that is very fast east-to-west inside the data center.”

DC 2 Tier setup:

N9k Platform <– Core

N9k w/ N2k <–Agg/Access