22-Aug-24

Multi-Fabric Design

In many cases, a single fabric design suffices. In others, a multi-fabric design is more appropriate. Some reasons to move to multiple fabrics include:

  • Site segmentation by fabric: i.e., one or more fabrics per site

  • Functional segmentation by fabric: for example, using one fabric for data center and a second fabric for campus within a single site

  • Scale requirements: in extreme cases, a single fabric may not be able to accommodate the required network scale.

iBGP is used within each fabric, and eBGP is used to connect fabrics to one another, forming connectivity between border switches. Border switches in each fabric are typically configured as a VSX pair to allow for active-active high availability. Connectivity between border switches is established over Layer 3 routed links with ECMP.
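As a minimal sketch, the eBGP EVPN peering between the border switches of two fabrics might look like the following on an AOS-CX border switch. The ASNs, loopback addresses, and neighbor values are illustrative assumptions, not values from a validated deployment:

    ! Fabric 1 border switch (illustrative ASN and addressing)
    router bgp 65101
        bgp router-id 10.250.1.1
        ! eBGP session to the Fabric 2 border loopback
        neighbor 10.250.2.1 remote-as 65102
        neighbor 10.250.2.1 update-source loopback 0
        neighbor 10.250.2.1 ebgp-multihop 2
        address-family l2vpn evpn
            neighbor 10.250.2.1 activate
            neighbor 10.250.2.1 send-community extended

Peering between loopbacks keeps the EVPN session independent of any single physical link between the borders.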

Fabrics within a region, in the same Aruba Central UI group, can be optimized by designating a border switch in one fabric as a border leader and establishing Multi-Fabric connectivity between border leaders.

Border leaders effectively serve as eBGP route reflectors for two or more fabrics, typically within a single site or region. This design is covered in detail in the next section.

Multiple borders within a fabric can be configured without VSX, resulting in an active-standby design in which only one border carries data-plane traffic, based on BGP EVPN routing updates. A VSX-based active-active configuration is always recommended.

Within a Multi-Fabric network, network segments (VLANs, VRFs, and User Roles) can be extended across fabrics, allowing for consistent Group Based Policy throughout the deployment.

A simple two-fabric design is shown below:

Simple 2 Fabric Design

Transport between the border switches of each fabric can be Metro Ethernet, dark fiber, or similar. It is important to use a jumbo MTU (larger than 1550 bytes) to accommodate the VXLAN header encapsulation between sites. VXLAN encapsulation ensures that macro- and micro-segmentation are retained within and across fabrics.
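As an illustration, an inter-fabric transport interface on a border switch could be configured with a jumbo MTU as in the sketch below; the port number and addressing are hypothetical:

    interface 1/1/49
        no shutdown
        description Inter-fabric transport to remote border
        ! Jumbo frame size leaves headroom for the ~50-byte VXLAN overhead
        mtu 9198
        ip mtu 9100
        ip address 192.168.255.0/31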

With a Routed-Access fabric, all intra-fabric traffic between two endpoints uses a single VXLAN tunnel because the fabric devices are fully meshed. In the Scaled-Access design, intra-fabric traffic might traverse two to three VXLAN tunnels.

For inter-fabric traffic, the border node of each fabric terminates the local VXLAN tunnels and rebuilds a VXLAN tunnel to the remote border. Client traffic between fabrics might traverse three to five VXLAN tunnels to reach its destination, depending on whether the individual fabrics use a Routed-Access or Scaled-Access design.

When building the underlay between border switches in multiple fabrics, OSPF is preferred but not mandated. If all switches are in the same Central UI group, the underlay network workflow may not be ideal for inter-fabric connectivity, because a separate OSPF area might be required. Alternatively, eBGP can be used to configure the inter-fabric underlay manually with MultiEdit.
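A minimal sketch of a manually configured eBGP underlay of this kind, as it might be entered through MultiEdit, is shown below; the ASNs, neighbor address, and advertised loopback are illustrative assumptions:

    ! Fabric 1 border: IPv4 underlay peering toward the Fabric 2 border
    router bgp 65101
        neighbor 192.168.255.1 remote-as 65102
        address-family ipv4 unicast
            neighbor 192.168.255.1 activate
            ! Advertise the local VTEP loopback so remote fabrics can reach it
            network 10.250.1.1/32

Advertising the loopback /32s gives each border reachability to the remote VTEP addresses that the overlay VXLAN tunnels use as endpoints.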

Aruba Central release 2.5.8 supports the Multi-Fabric EVPN workflow to orchestrate BGP EVPN VXLAN overlays between fabrics across sites.

Scaling Multi-Fabric Networks Using Border Leaders

Up to 32 fabrics can be connected to one another as a full mesh between borders in this design. If a larger scale is required, the network can be split into multiple logical groups or regions using border leader switches in a hierarchical design. Border leader switches have fully meshed eBGP connections to all other border switches in their region or group and are fully meshed to all border leader switches in other regions.

The drawing below shows a simplified representation of a network with four sites or regions with one to four fabrics in each, shown as the outer ovals. The dashed lines represent the VXLAN tunnels over eBGP between each border switch pair. In the center of the drawing, the border leaders for each site/region are shown as fully meshed with eBGP and VXLAN.

Border Leaders

As with the underlay within a site or region containing multiple fabrics, the underlay network connecting the border leader switches between regions should use OSPF or eBGP routing with jumbo MTU enabled across the deployment.
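For reference, a point-to-point OSPF underlay link between border leaders might look like the following sketch; the process number, area, port, and addressing are assumptions for illustration:

    router ospf 1
        area 0.0.0.0
    interface 1/1/50
        no shutdown
        mtu 9198
        ip address 192.168.254.0/31
        ip ospf 1 area 0.0.0.0
        ip ospf network point-to-point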

The CX8325 or CX9300 are the recommended switch models for border leaders.

The following table shows the validated scale for recommended Border Leader options:

| | 8325 | 9300 |
| --- | --- | --- |
| Fabric Role | Border Leader | Border Leader |
| VTEPs per Fabric (Standalone or VSX logical VTEP pair) | 64 | 64 |
| Sites (Number of VSX border-leader VTEPs) | 32 | 32 |
| Fabrics across sites (number of VSX border-VTEPs, VXLAN full mesh) | 32 | 32 |
| L3 IPv4 Routes across all VRFs and all sites (including host routes) | 45,000 | 45,000 |
| L3 IPv4 Routes (prefix routes) | 2,000 | 2,000 |
| L3 IPv6 Routes across all VRFs and all sites (including host routes) | 28,000 | 28,000 |
| L3 IPv6 Routes (prefix routes) | 2,000 | 2,000 |
| Overlay hosts (MAC / ARP / ND) across sites | Local Site | Remote Site |
| MAC: 9,000 | MAC: 18,000 | MAC: 13,000 |
| IPv4 ARP: 6,000 | IPv4 ARP: 10,000 | IPv4 ARP: 7,000 |
| IPv6 ND: 4,500 | IPv6 ND: 2,000 | IPv6 ND: 5,500 |
| VLANs local to the fabric | 512 | 512 |
| Stretched VLANs with all fabrics | 386 | 386 |
| VRFs shared with all fabrics | 16 | 16 |

It is important to observe the maximum number of BGP peers supported by each switch model used as a border or border leader switch.
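As a quick check, BGP peer counts and session state on a border or border leader can be reviewed with a summary command such as the one below (output format varies by AOS-CX release):

    switch# show bgp l2vpn evpn summary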

Mobility Gateways in Multi-Fabric Design

Mobility gateways can be deployed individually within each fabric, or shared gateways can be used to cover multiple fabrics. In either case, static VXLAN tunnels must be configured between stub switches and gateways.
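A hedged sketch of such a static VXLAN tunnel on a stub switch follows; the VNI, VLAN, and VTEP addresses are illustrative assumptions:

    interface vxlan 1
        source ip 10.250.1.10
        no shutdown
        vni 10100
            vlan 100
            ! Static tunnel endpoint pointing at the gateway cluster VTEP
            vtep-peer 10.251.0.20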

Multi Fabric with Gateways

Wireless SSIDs configured in tunnel mode carry traffic from the AP to the mobility gateway, whether the gateway is in the same fabric or a different one. For APs onboarded in the underlay and located in a different fabric than their gateway, routing must be provided between the AP subnets and gateway subnets of the different fabrics, independent of the overlay.

One advantage of using shared mobility gateway clusters in multi-fabric networks is seamless roaming across all fabrics served by the gateway cluster.

Network-Access Policies in Multi-Fabric Networks

Extending VXLAN across multiple fabrics makes it possible to implement global User Roles and consistent policies throughout the network for both wired and wireless users.

Group-Based Policies are enforced as follows:

  • Traffic between wired clients is enforced at the destination egress switch interface.

  • Traffic between wireless clients is enforced at the Mobility Gateway cluster.

  • Traffic from wired to wireless clients is enforced at the Mobility Gateway cluster.

  • Traffic from wireless to wired clients is enforced at the destination egress switch interface.

  • Access policies with a non-User-Role destination are enforced at the source ingress interface.

CX10000 as Border Switch

If the CX10000 switch is used as a border switch, it is important to prevent L2 VNI and L3 VNI traffic from being redirected to the Pensando Elba module. This can be accomplished using the uplink-to-uplink command in the config-dsm context.
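A minimal sketch of that setting, assuming the config-dsm context is entered with the dsm command (exact arguments may vary by AOS-CX release, so consult the CLI reference):

    switch(config)# dsm
    switch(config-dsm)# uplink-to-uplink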

The CX10000 is not recommended for use as a border leader switch. The CX8325 or CX9300 are recommended for this role.

