12-Sep-24

NetConductor Design

Central NetConductor is an edge-to-cloud network and security framework designed to create dynamic and consistent security policy across a modern enterprise network of CX switches, AOS-10 Gateways, AOS-10 APs and EdgeConnect SD-WAN Gateways. Intelligent overlays built on highly available underlays are tied to a policy-based micro-segmentation model across the entire network infrastructure using consistent, secure global roles.

Role-based policies abstracted directly from the underlying network enable flexible and simplified security definition, application, and enforcement. Policy definition is facilitated by full underlay automation, comprehensive overlay orchestration, “single-pane-of-glass” management and monitoring views, plus a rich array of powerful, complementary services.

This guide provides basic details for common NetConductor design considerations. For more in-depth technical guidance, refer to the NetConductor Architecture Guide.


NetConductor Solution Components

This section outlines the components of the NetConductor solution.

NetConductor Overview

Management Plane

Central serves as the management plane for NetConductor. Central contains workflow-based functions for configuration, management, and visibility of fabric networks.

Certain components of NetConductor, such as the EdgeConnect SD-WAN and HPE Aruba Networking SSE integrations, are managed from their respective dashboards.

Control Plane

NetConductor’s distributed overlay fabric uses standards-based MP-BGP and the EVPN address family, including options for Layer 2 and Layer 3 overlays as needed for network control.

The solution delivers consistency in user experience for both wired and wireless users, with a high degree of commonality in configurations and protocols between the campus and data center.

Data Plane

For centralized fabrics, GRE encapsulation using user-based tunnels serves as the data plane. For the distributed overlay fabric, standards-based VXLAN encapsulation provides the data plane. VXLAN enables the use of both VRF-based macro segmentation and role-based micro segmentation, using fields from the VXLAN header.

Policy Plane

The Global Policy Manager (GPM) component of the NetConductor solution provides a single point of management for creating and configuring consistent policies across the campus, branch, and data center. GPM provides the flexibility to define role-based, application-based, IP-based and port-based policies. The use of a role decouples policy from the IP network and enables consistent policy enforcement across wired and wireless networks, as well as between locations.

A role is typically, although not always, assigned following an 802.1X authentication event. A NAC solution, such as HPE Aruba Networking ClearPass, is used to onboard users and devices to the network and assign a role, to which GPM ties a policy.

The policy is enforced within the data-plane of campus, data center, and branch networks. Roles and role-based policies can be propagated and connected across sites using several different transports, including the EdgeConnect SD-Branch and EdgeConnect SD-WAN solutions.

NetConductor policy

Zero Trust Framework

User Role and Policy Design

A user role simply represents a group of users or devices that share a common set of access policies. A role is assigned when a new user or device is brought onto the network.

Historically, roles and their accompanying policies were defined separately for each platform in the network and required manual creation in multiple places. Now, user roles are defined in HPE Aruba Networking Central Global Policy Manager, and the centralized configuration can be applied consistently across all network infrastructure.
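To make the concept concrete, the sketch below models a role as a named set of rules in Python. It is an illustration only, not the GPM object model or API; the role and rule names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyRule:
    """One rule in a role-based policy (illustrative fields, not GPM objects)."""
    dst_role: str         # destination role the rule applies to
    action: str           # "permit" or "deny"
    service: str = "any"  # application or port, e.g. "tcp/554"

@dataclass
class UserRole:
    """A group of users or devices that share a common set of access policies."""
    name: str
    rules: List[PolicyRule] = field(default_factory=list)

# Define the role once; conceptually, the same definition is then applied
# across switches, gateways, and access points.
camera_role = UserRole(
    name="ip-camera",
    rules=[
        PolicyRule(dst_role="video-server", action="permit", service="tcp/554"),
        PolicyRule(dst_role="any", action="deny"),
    ],
)
print(camera_role)
```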

User roles are assigned to users and devices using a network access control (NAC) solution such as ClearPass, CloudAuth, or a third-party solution.

Detailed architecture and design information for NetConductor Global Policy Manager is available in the Policy Design Validated Solution Guide.

Network Access Control

To provide secure access to the network, the Network Access Control (NAC) solution is a critical component that includes authentication, authorization, accounting, and device profiling features to ensure that only known and authorized clients are connected, then placed dynamically in the appropriate network segment.

HPE Aruba Networking includes two options for Network Access Control (NAC) that are compatible with the NetConductor solution.

  • ClearPass Policy Manager is an on-premise solution offering the most complete and flexible features available while integrating with AI-powered Client Insights for device profiling.
  • HPE Aruba Networking Central includes CloudAuth which provides a core set of NAC features for a cloud-only solution. CloudAuth integrates with popular Cloud-based authentication sources such as Microsoft Entra ID, Google Auth, or Okta.

Third-party NAC solutions also can be used for secure onboarding, as long as the NAC server’s authorization policy supports the ‘Aruba-User-Role’ RADIUS Vendor-Specific Attribute (VSA).
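As a rough illustration of what the NAC server returns, the sketch below packs an ‘Aruba-User-Role’ value into a RADIUS Vendor-Specific Attribute (RFC 2865, attribute type 26). The vendor ID (14823) and vendor attribute number (1) reflect the commonly published Aruba dictionary values and are assumptions here; confirm them against your RADIUS dictionary.

```python
import struct

def aruba_user_role_vsa(role: str) -> bytes:
    """Encode a RADIUS Vendor-Specific Attribute carrying a user role string."""
    VENDOR_ID = 14823    # Aruba vendor ID (assumed from the public RADIUS dictionary)
    VENDOR_TYPE = 1      # Aruba-User-Role (assumed string sub-attribute number)
    value = role.encode()
    sub_attr = struct.pack("!BB", VENDOR_TYPE, 2 + len(value)) + value  # type, length, value
    payload = struct.pack("!I", VENDOR_ID) + sub_attr                   # vendor ID + sub-attribute
    return struct.pack("!BB", 26, 2 + len(payload)) + payload           # attribute 26 wrapper

# Bytes a NAC server would include in its Access-Accept to assign the role.
print(aruba_user_role_vsa("employee").hex())
```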

CX switches, access points and gateways support Colorless Ports and role-based micro-segmentation. Colorless Ports enable consistent configuration for ports using 802.1X authentication to assign roles and policies. Clients can connect to any switch port or access point across the enterprise. A role is assigned at authentication or following profiling and, based on the authorization profile associated with the assigned ‘Aruba-User-Role’, the client can be placed automatically in an authorized network segment.

In addition, the NetConductor overlay carries the assigned user role as a value in the VXLAN header. All traffic to or from an endpoint connected to the fabric is evaluated against the role-associated policy to provide a role-based, micro-segmentation solution that follows the user anywhere, seamlessly.

Monitoring and Visibility

HPE Aruba Networking Central includes AI-powered Client Insights for advanced visibility and granular profiling. Client Insights uses native infrastructure telemetry from access points, switches, and gateways, along with controlled, crowd-sourced Machine Learning, to validate fingerprints and provide precise classification capabilities. Client/Endpoint classification enables organizations to label devices and assign the ‘Aruba-User-Role’ from ClearPass Policy Manager or CloudAuth as they connect to the network.

Client Insights enables continuous monitoring of clients and, when paired with ClearPass Policy Manager, provides closed loop, end-to-end access control with automated policy enforcement. If the client has been compromised or acts suspiciously, the client can be quarantined by issuing a dynamic change-of-authorization to disconnect or change the User Role with limited access for test, repair, or replacement.

Macro vs. Micro Segmentation (VRF and/or Subnet vs. User Role)

Traditionally, segmentation of networks depended on placing the end user or device in a subnet with other similar users or devices, then using ACLs to permit or deny traffic based on source and destination IPs and port numbers. NetConductor replaces the need for subnets with User Roles and role-based policies that are easier to manage. In both cases, it may be appropriate to use multiple VRFs to isolate large blocks of users/devices that never have reason to connect directly with one another. Usage varies, but some typical examples include guest networks, data center management networks, and production plant or lab equipment that rely on older, less secure operating systems.

Campus Overlay Fabrics

Overlay networks provide a mechanism to deploy flexible topologies that meet the demands of ever changing endpoints and applications. By fully decoupling user data traffic and associated policy from the physical topology of the network, overlays enable on-demand deployment of L2 or L3 services. Overlays also make it simpler to carry device or user role information across the network without requiring all devices in the path to understand or manage the roles. Overlay networks are virtual networks, built on an underlay network. The underlay should be designed for stability, scalability, and predictability.

NetConductor provides the flexibility to choose between centralized overlays or distributed overlays to address a range of traffic engineering and policy enforcement requirements. User Based Tunneling is a centralized overlay architecture that provides simplified operations and advanced security features on gateway devices. EVPN-VXLAN is a distributed overlay architecture that enables dynamic tunnels between switches anywhere on a campus, providing consistent and continuous policy enforcement across the network. Both overlay models support the “Colorless Ports” feature, which enables automated client onboarding and access control for ease of operations.

Centralized Campus Fabrics

User-Based Tunneling (UBT) is a centralized overlay fabric that tunnels some or all user traffic to a centralized gateway cluster where policy is enforced using services such as firewalling, DPI, application visibility, and bandwidth control. UBT tunnels traffic selectively based on user or device roles. Tunnels can originate from APs and/or switches.

A centralized campus fabric is easy to deploy; it provides a consistent experience for both wired and wireless users, and it can implement a wide variety of routing and security functions at the gateways.

Centralized fabric works best for small- to medium-size branch/campus locations where most traffic is north-south, destined to an external data center or the Internet. This model also enables migration options for customers to adopt fabrics and role-based segmentation in a phased manner while using existing third-party and legacy switch infrastructure at the core and aggregation layers. This model is not recommended when there is a high degree of east-west traffic originating and terminating between endpoints within the campus, since traffic to the gateway cluster can become bottlenecked.

Wired and wireless authentication traffic to the NAC service originates from the same gateway cluster. Consider the load placed on the cluster when supporting large WLAN and UBT deployments.

Distributed Campus Fabrics

Distributed fabrics enable policy enforcement anywhere in the network, as an alternative to a centralized approach. The distributed fabric uses a suite of open-standard protocols to create a dynamic network fabric that extends Layer 2 connectivity over an existing physical network and Layer 3 underlay. Ethernet VPN (EVPN), a BGP-driven protocol, provides the control plane. Virtual Extensible LAN (VXLAN), a common network virtualization tunneling protocol that expands the number of Layer 2 broadcast domains from the roughly 4,000 available with traditional VLANs to 16 million, provides the data plane.
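A quick check of the ID-space claim: a traditional VLAN ID is 12 bits, while a VNI is 24 bits.

```python
# 12-bit VLAN ID space vs. 24-bit VXLAN Network Identifier (VNI) space.
vlan_ids = 2 ** 12   # 4096 values (minus reserved IDs in practice)
vni_ids = 2 ** 24    # 16,777,216 values
print(f"VLAN IDs: {vlan_ids:,}  VNIs: {vni_ids:,}")
```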

The benefits of a distributed fabric are: efficient Layer 2 extension across Layer 3 boundaries; anycast gateways that ensure consistent first-hop routing services; and end-to-end segmentation using VXLAN Group-Based Policy (VXLAN-GBP) to propagate policy. The distributed fabric works best in large campus/data center environments. It also is recommended when there is a high amount of east-west traffic because it forwards that traffic efficiently.

A fabric is composed of an underlay network and one or more overlay networks. The underlay network represents the physical network infrastructure of the fabric. In the NetConductor solution, a routed underlay is recommended in the majority of cases. With a routed underlay, all inter-switch links are configured as routed and routes are distributed using an interior gateway protocol, enabling Equal Cost Multipath (ECMP) routing. The NetConductor underlay wizard configures point-to-point routed links, using OSPF as the routing protocol. One or more overlay networks can be layered on top of the underlay network.
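As a planning illustration, the sketch below carves /31 point-to-point subnets for routed inter-switch links out of a dedicated underlay block using Python’s ipaddress module. The block, link names, and use of /31s are assumptions for the example; the Central underlay workflow derives its own addressing.

```python
import ipaddress

# Hypothetical underlay block and link list; the workflow chooses its own plan.
underlay_block = ipaddress.ip_network("10.255.0.0/24")
links = ["core1-access1", "core1-access2", "core2-access1", "core2-access2"]

for name, subnet in zip(links, underlay_block.subnets(new_prefix=31)):
    a, b = subnet  # the two usable addresses of a /31 point-to-point link (RFC 3021)
    print(f"{name}: {a} <-> {b}  ({subnet})")
```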

EVPN enables a control plane database across the entire campus to provide segmentation and seamless roaming across the network, with advertisement of MAC addresses, MAC/IP bindings, and IP prefixes. The solution uses symmetric IRB with distributed anycast gateways to discover and advertise remote fabric devices and to advertise MAC addresses and MAC/IP bindings with EVPN type 2 and type 5 routes. With the help of a Route Distinguisher (RD), a unique number prepended to the advertised address within the VRF, the campus fabric can support overlapping IP and MAC addresses across different tenants.

The use of MP-BGP with EVPN address families between virtual tunnel endpoints (VTEPs) provides a standards-based, highly scalable control plane for sharing endpoint reachability information with native support for multi-tenancy. For many years, service providers have used MP-BGP to offer secure Layer 2 and Layer 3 VPN services on a very large scale. An iBGP design with route reflectors simplifies design by eliminating the need for a full mesh of BGP peerings across the full set of switches containing VTEPs. BGP peering is required only between VTEP terminating switches (access, stub, and border) and the core.

BGP control plane constructs include:

  • Address Family (AF): MP-BGP enables the exchange of network reachability information for multiple address types by categorizing them into address families (IPv4, IPv6, L3VPN, etc.). The Layer 2 VPN address family (AFI=25) and the EVPN subsequent address family (SAFI=70) advertise IP and MAC address information between MP-BGP speakers. The EVPN address family contains reachability information for establishing VXLAN tunnels between VTEPs.
  • Route Distinguisher (RD): A route distinguisher enables MP-BGP to carry overlapping Layer 3 and Layer 2 addresses within the same address family by prepending a unique value to the original address. The RD is only a number with no inherent meaningful properties. It does not associate an address with a route or bridge table. The RD value supports multi-tenancy by ensuring that a route announced for the same address range via two different VRFs can be advertised in the same MP-BGP address family.
  • Route Target (RT): Route targets are MP-BGP extended communities used to associate an address with a route or bridge table. In an EVPN-VXLAN network, importing and exporting a common VRF route target into the MP-BGP EVPN address family establishes Layer 3 reachability for a set of VRFs defined across a number of VTEPs. Layer 2 reachability is shared across a distributed set of L2 VNIs by importing and exporting a common route target in the L2 VNI definition. Additionally, Layer 3 routes can be leaked between VRFs using the IPv4 address family by importing route targets into one VRF that are exported by other VRFs.
  • Route Reflector (RR): To optimize the process of sharing reachability information between VTEPs, the use of route reflectors at the core enables simplified iBGP peering. This design allows all VTEPs to have the same iBGP peering configuration, eliminating the need for a full mesh of iBGP neighbors.

This campus design uses two Layer 3 connected core switches as iBGP route reflectors. The number of destination prefixes and overlay networks consumes physical resources in the form of forwarding tables and should be considered when designing the network. Refer to the Design Recommendations section of the NetConductor Architecture Guide for scaling guidelines when designing the fabric network.
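The toy model below illustrates the route-target behavior described above: a VRF installs a prefix only if it imports the route target with which the prefix was exported, which is also the mechanism behind selective inter-VRF leaking. The route-target values and VRF names are hypothetical; this is a conceptual sketch, not a BGP implementation.

```python
# Prefixes advertised into the EVPN/IPv4 address families, tagged with the
# route target (RT) each VRF exported them with.
advertised = [
    ("10.1.10.0/24", "65001:10"),  # exported by VRF "corp" on a remote VTEP
    ("10.1.20.0/24", "65001:20"),  # exported by VRF "iot" on a remote VTEP
    ("10.1.99.0/24", "65001:99"),  # shared-services prefix leaked to both VRFs
]

# Route targets each local VRF imports.
vrf_imports = {
    "corp": {"65001:10", "65001:99"},
    "iot":  {"65001:20", "65001:99"},
}

for vrf, imports in vrf_imports.items():
    installed = [prefix for prefix, rt in advertised if rt in imports]
    print(f"VRF {vrf} installs: {installed}")
```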

VXLAN encapsulates Layer 2 Ethernet frames in UDP packets. These VXLAN tunnels provide both Layer 2 and Layer 3 virtualized network services to connected endpoints. A VTEP is the function within a switch that handles the origination or termination of VXLAN tunnels. Similar to a traditional VLAN ID, a VXLAN Network Identifier (VNI) identifies an isolated Layer 2 segment in a VXLAN overlay topology. Symmetric Integrated Routing and Bridging (IRB) enables the overlay networks to support contiguous Layer 2 forwarding and Layer 3 routing across VTEPs.

Note: Configure jumbo frames on all underlay links in the fabric to allow transport of additional encapsulation.

VXLAN networks comprise two key virtual network constructs: Layer 2 VNI and Layer 3 VNI. The relationship between an L2VNI, L3VNI, and VRF is described below:

  • L2VNIs are analogous to VLANs and, on AOS-CX, are configured using VLANs. An L2VNI bridges Layer 2 traffic between endpoints attached to different VTEPs.
  • L3VNIs are analogous to VRFs and route traffic between the subnets of L2VNIs across VTEPs.
    • Multiple L2VNIs can exist within a single VRF.

An overlay network is implemented using Virtual Extensible LAN (VXLAN) tunnels that provide both Layer 2 and Layer 3 virtualized network services to endpoints connected to the campus. The VXLAN Network Identifier (VNI) associates tunneled traffic with the correct corresponding Layer 2 VLAN or Layer 3 route table so the receiving VTEP can forward the encapsulated frame appropriately. The Symmetric Integrated Routing and Bridging (IRB) capability allows the overlay networks to support contiguous Layer 2 forwarding and Layer 3 routing across VTEPs.

A VTEP encapsulates a frame in the following headers (a packing sketch of the VXLAN header follows the figure below):

  • IP header: IP addresses in the header can be VTEPs or VXLAN multicast groups in the transport network. Intermediate devices between the source and destination forward VXLAN packets based on this outer IP header.

  • UDP header for VXLAN: The default VXLAN destination UDP port number is 4789.

  • VXLAN header: VXLAN information for the encapsulated frame.

    • 8-bit VXLAN Flags: The first bit signals that a Group Policy ID has been set on the packet and the fifth bit signals that the VNI is valid. All other bits are reserved and set to “0”.
    • 8-bit Reserved field
    • 16-bit VXLAN Group Policy ID: The group ID identifies the role-based policy enforced on tunneled traffic.
    • 24-bit VXLAN Network Identifier: Specifies the virtual network identifier (VNI) of the encapsulated frame.
    • 8-bit Reserved field

    vxlan_packet_header
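The sketch below packs the 8-byte VXLAN header with the Group-Based Policy extension as described above. It is a byte-layout illustration only, not how the switch ASIC builds the packet; the example VNI and group ID are arbitrary, and the outer Ethernet, IP, and UDP (destination port 4789) headers are omitted.

```python
import struct

def vxlan_gbp_header(vni: int, group_policy_id: int) -> bytes:
    """Pack a VXLAN header with the Group-Based Policy (GBP) extension."""
    flags = 0x8800                                      # G bit (GBP ID present) + I bit (VNI valid)
    word1 = (flags << 16) | (group_policy_id & 0xFFFF)  # flag bits + 16-bit group policy ID
    word2 = (vni & 0xFFFFFF) << 8                       # 24-bit VNI + 8 reserved bits
    return struct.pack("!II", word1, word2)

# Example: L2VNI 10010 carrying group/role ID 100.
print(vxlan_gbp_header(vni=10010, group_policy_id=100).hex())  # 8800006400271a00
```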

Wireless infrastructure in the distributed campus fabric uses AOS 10 gateways and access points (APs) provisioned with tunneled WLAN SSIDs. Wireless client traffic is GRE/IPsec encapsulated from the access points to the gateway to accommodate large roaming campus domains. The gateway encapsulates data traffic in VXLAN, inserts a role ID into the header, and forwards the packet into the campus fabric.

Authentication and role assignment for the wireless clients occur on the APs; however, authentication traffic to ClearPass or other authentication providers is proxied by the gateway cluster for wireless clients. Wired client authentication is sourced directly from switches to which wired clients are attached. ClearPass Policy Manager assigns the User Role based on the results of authentication. This role assignment can enable dynamic network assignment and role-based policy enforcement across wired and wireless infrastructure.

The solution allows enterprise-level definition of universal user roles and role-based policies that can be applied for both wired and wireless clients. User Roles and policies are defined in Central one time; there is no need to create separate policies for different types of network devices. Policies are provisioned to fabric devices and enforced at the destination egress point for role-to-role policies and at the source ingress point for all other policies.

A key advantage of the distributed fabric design compared to a centralized fabric is the distributed policy enforcement capability at any point within the campus. In addition, user traffic does not require forwarding to a centralized cluster for policy enforcement.

A Distributed Fabric is formed by assigning personas to various devices in the network. The list below describes the purpose of each persona.

  • Route Reflector (RR): Core switches are configured as BGP route reflectors (the RR persona) to share EVPN reachability information. This reduces the number of peering sessions required across the fabric.
  • Stub: Wireless aggregation switches are configured with the stub persona to extend policy enforcement to wireless gateways, which support only static VXLAN tunnels. The aggregation switches carry GPID values from the campus fabric VXLAN tunnels forward into static VXLAN tunnels configured between the aggregation switches and the gateways. Edge switches can also host the stub persona (stub+edge) when extending the fabric beyond the edge to an extended edge switch, which also supports only static VXLAN tunnels.
  • Border: Internet edge switches use the border persona to provide connectivity between the fabric and services outside the fabric.
  • Edge: The edge persona is applied to access switches that provide primary VXLAN tunnel ingress/egress and policy enforcement for endpoint traffic into or out of the fabric. Clients connect directly to the edge switch.
  • Extended Edge: The extended edge persona is used to extend the fabric beyond the access layer. Static VXLAN tunnels are orchestrated between the edge and extended edge switches. The extended edge switches perform authentication, authorization, VLAN and role assignment, and group-based policy enforcement. Clients connect directly to the extended edge switch.
  • Intermediate Devices: Wired aggregation switches are underlay devices with no fabric persona assigned. They do not run a VTEP and must support jumbo frames.

All devices in a NetConductor distributed campus fabric require Advanced Central subscriptions.


Fabric Design

The fabric design is built with a routed underlay that connects all switches within the fabric. A routed underlay eliminates the need for Layer 2 links; spanning-tree is not used to block redundant Layer 2 paths. A full Layer 3 underlay enables using all links actively with ECMP routing, and it allows for extremely fast convergence in case of link failures.

In the fabric overlay, all network devices fully participate in the EVPN-VXLAN fabric except extended edge devices, which use static VXLAN. Anycast gateways for segments are distributed across the access switches.

A single-fabric design is recommended when there are 256 or fewer edge devices and no more than 16 VRFs. If the design exceeds 256 edge nodes or 16 VRFs, use a multi-fabric design to achieve higher scale.

Distributed Fabric Overview

Underlay Network

Central NetConductor can automate deployment of the underlay network using an intent-based workflow. Network operators also can use MultiEdit to provision the underlay network for greenfield deployments and for brownfield deployments that require specialized network configurations and topologies.

Central Connectivity

How the network infrastructure connects to Central is a critical consideration when deploying NetConductor.

All switches supported for use with NetConductor have dedicated management network ports. In-band management also is supported and most often is required for campus networks where multiple buildings and closets make Out-Of-Band-Management (OOBM) impractical.

Using in-band management presents challenges, since connectivity to the Internet/Central is required both before and after configuration of the underlay network using NetConductor. Typically, this is accomplished using the default VLAN (1) initially because the connected ports use this by default. Extra care is required for VSX pairs, since the default route used for initial Central connectivity remains active due to the ISL connection and the default administrative distance for this route is lower than the default OSPF route distance. As a result, the initial default route should be assigned a higher administrative distance (such as 120) to ensure that the OSPF default route prevails after completion of the underlay workflow.
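The snippet below illustrates the route-selection logic behind that recommendation: the candidate default route with the lowest administrative distance wins, so raising the static route’s distance above OSPF’s default of 110 lets the OSPF-learned default take over. The distance values shown are typical defaults used for illustration; verify them for your platform.

```python
# Candidate default routes after the underlay workflow completes.
candidate_defaults = [
    {"source": "static default used for initial Central connectivity", "distance": 120},
    {"source": "OSPF-learned default from the underlay", "distance": 110},
]

# Lowest administrative distance wins. At the usual static default distance of 1,
# the static route would have remained active instead.
best = min(candidate_defaults, key=lambda route: route["distance"])
print("Active default route:", best["source"])
```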

In some cases, manual configuration of port speed or breakout settings for transceivers or DAC cables may be required.

Workflow Deployment

The NetConductor “Underlay Networks” workflow automates deployment of the underlay network, including configuration of point-to-point routed connections between switches in the fabric and single area OSPF configuration.

If in-band management is used, take care to ensure that the connection to Central is not lost when inter-switch connections are reconfigured as routed ports/connections. One option is to configure a management VLAN with a static SVI and name server defined. As a best practice, DNS servers should be configured at the group level to ensure that connectivity to Central is maintained after the underlay wizard is completed, especially if the initial DNS assignment uses DHCP.

Overlays

Overlay networks, which are analogous to VRFs, enable multiple instances of the routing table to coexist within the same switch hardware. Overlay networks provide macro-segmentation within a fabric, while roles provide micro-segmentation. Overlay segments, which are analogous to VLANs, are created to host endpoints within the fabric.

Within NetConductor deployments, separating the segments into VRFs is recommended when complete isolation is required for a given set of overlay segments. Common uses for dedicated overlay networks include full isolation between IT, OT, and guest networks. When this type of isolation is not required, placing all overlay segments into a single overlay network is recommended.

Multiple overlay networks can be built on top of the underlay network. Each overlay exists as a separate VRF with a unique L3 VNI, Route Target, and Route Distinguisher.

NetConductor provides a fabric overlay network workflow that automates building overlay networks and overlay segments and configuring all necessary components of the overlay. The wizard configures elements such as iBGP with EVPN on all switches, assigns fabric personas, orchestrates VXLAN tunnels, builds loopback interfaces used for DHCP relay and EVPN-VXLAN control plane messaging, and more. Within each overlay network, multiple Layer 2 and Layer 3 network segments can be created to stretch subnets effectively over the selected switches within the fabric.

L2 Only Overlay Segments

Within the NetConductor solution, an L2-only overlay segment is used when the default gateway for a subnet must exist on a device outside the fabric, such as a third-party router or firewall, or when a gateway is not required at all for the segment. Traffic to other endpoints in the same subnet remains within the fabric, but traffic destined for the Internet or other internal subnets exits the fabric from a border switch to the device with the gateway, which then handles required routing or filtering decisions. Although this option is available, distributed anycast segments are preferred in most cases.

Layer 2 Segment

Distributed Anycast Segments

When using this option, a distributed anycast gateway with common gateway IP and MAC address is created on all the edge switches in the fabric to provide a default gateway. Traffic destined outside the source subnet is routed at the source switch. In conjunction with role-based policies, this enables decentralized routing and policy enforcement, which can increase network performance and decrease costs. Use of Layer 3 segments is recommended whenever possible.

Layer 3 Segment

The image above shows an example of VRFs used to isolate corporate VNIs from IoT VNIs. Traffic traversing the fabric is VXLAN-encapsulated, carrying the VNI information to ensure that segmentation is maintained across all fabric-enabled switches within the site.

Wireless Design

To ensure optimal functionality, mobility gateways and access points must run AOS version 10 firmware. For enhanced reliability and performance, mobility gateways should be deployed in clusters designed to handle varying levels of throughput and client volume. Depending on the specific model, a cluster can support up to 12 gateways, ensuring high availability across the network. Mobility gateways are connected to the fabric stub nodes via Layer 2 MC-LAG. The mobility gateway onboarding VLAN is part of the overlay network segment on the WLAN aggregation stub switches.

Access points connected to access switches operate within the overlay network, establishing a secure control plane connection with the mobility gateways. This design facilitates efficient management and communication between access points and the centralized mobility infrastructure. Distributed-anycast overlay segments for access points should not exceed a /23 subnet (512 access points) across the entire fabric network. For large deployments, multiple distributed-anycast segments should be provisioned within the same overlay network.
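A quick sizing check for that guidance, using Python’s ipaddress module; the prefixes shown are placeholders, not recommended addressing.

```python
import ipaddress

# A /23 distributed-anycast AP segment provides roughly 512 addresses.
ap_segment = ipaddress.ip_network("10.20.0.0/23")
print(f"{ap_segment}: {ap_segment.num_addresses - 2} usable host addresses")

# Larger deployments: provision several /23 segments in the same overlay network.
for segment in ipaddress.ip_network("10.20.0.0/21").subnets(new_prefix=23):
    print("AP segment:", segment)
```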

The overlay segment used for the gateways and APs requires reachability to the Internet/Cloud for the access points and mobility gateways to register with Central. Ensure that proper routing is configured on the border to allow appropriate routes into the fabric for infrastructure onboarding. Access points and mobility gateways must be onboarded on the same overlay network.

Note: While placing APs into the overlay is recommended, APs can also operate in the underlay.

Wireless SSIDs configured in tunnel mode facilitate the tunneling of client traffic from access points to mobility gateways. This enables centralized control and monitoring of wireless traffic flows.

Integration of the wireless gateways into the fabric is achieved through static VXLAN tunnels from each mobility gateway to the fabric stub switches. Additionally, anycast gateways for wireless clients are configured at the stub switches, further optimizing network performance and scalability. These anycast gateways enhance localized traffic handling and reduce centralized processing overhead. Stub switches also play a vital role in relaying segmentation details such as Role and L2VNI from static VXLAN tunnels to the rest of the EVPN-VXLAN fabric.

Distributed Fabric Overview

External Connectivity

Clients connected to overlay networks require reachability to external resources such as the data center, the existing network, firewalls, WAN, and SD-WAN networks. A guest overlay network may require access only to selected services such as DHCP, DNS, and NAC to onboard clients and reach the Internet. A building management system overlay network may require access only to certain servers hosted in the data center and shared services, with traffic inspected by a firewall.

External connectivity configured between border devices and upstream external devices provides a method to connect the campus fabric to the rest of the network. The external device can be a router, switch, firewall, SD-WAN or WAN device, or a feature-rich Metro network device. Depending on the capabilities of the upstream external device, connectivity can be achieved with any of the methods mentioned below and must be configured using MultiEdit.

SD-WAN Connectivity

The Central NetConductor solution extends the ability to stretch segmentation (VRFs, User Roles, and policy) across geographically distributed enterprise deployments with sites interconnected via any WAN or SD-WAN environment. This allows enterprise-level definition of global User Roles and segmentation policies by carrying the role and VRF information in the SD-WAN fabric, simplifying and standardizing security policy construction and enforcement.

The solution supports Multi-Site deployments interconnected through:

  • HPE Aruba Networking EdgeConnect SD-Branch fabric
  • HPE Aruba Networking EdgeConnect SD-WAN fabric

SD-WAN Connectivity

IP Connectivity

In many situations, the fabric must connect to a traditional IP network. In this case, a dedicated interface (physical port, VLAN, or sub-interface) for each overlay network extends the segmentation to the external device via either a trunk or a routed point-to-point link. Any routing protocol can be used, but multi-VRF OSPF or MP-BGP is recommended. The border node installs EVPN route type 2/5 host routes from the fabric along with prefixes learned from external connectivity. By default, the border node advertises both host and prefix routes to the external network. Use a route map to filter out host entries during redistribution from BGP EVPN to OSPF/MP-BGP on the border node. Inter-VRF route leaking can be configured on the external device to filter and advertise only the required prefixes/default route for each overlay segment.
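A conceptual version of that filtering step is sketched below: host routes (/32s) learned from EVPN are dropped and only summarizable prefixes are redistributed toward the external network. This models the intent of the route map, not its CLI syntax, and the prefixes are illustrative.

```python
import ipaddress

# Routes present on the border node: EVPN host routes plus segment prefixes.
border_routes = [
    "10.1.10.25/32",  # EVPN type-2 host route
    "10.1.10.0/24",   # overlay segment prefix
    "10.1.20.0/24",
]

# Redistribute only non-host prefixes to the external network.
redistributed = [p for p in border_routes if ipaddress.ip_network(p).prefixlen < 32]
print("Advertised externally:", redistributed)
```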

IP Connectivity

Multi Fabric

Multi-Fabric design enables interconnection of fabrics over a high-speed WAN technology such as Metro Ethernet or Dark Fiber. In this design, multiple fabrics are deployed independently and interconnected with an end-to-end EVPN-VXLAN solution to extend segmentation.

In many cases, a single fabric design suffices. In others, a multi-fabric design is more appropriate. Some reasons to move to multiple fabrics include:

  • Site segmentation by fabric: i.e., one or more fabrics per site

  • Functional segmentation by fabric: for example, using one fabric for data center and a second fabric for campus within a single site

  • Scale requirements: in very high scale designs, a single fabric may not be able to accommodate the required network size.

The most common use for multi-fabric is interconnecting a fabric-enabled campus and a data center or interconnecting multiple buildings in a campus where a fabric is deployed in each building, such as a college campus.

iBGP is used within each fabric and eBGP is used to connect between fabrics at the border switches. Border switches at each fabric are typically configured as a VSX pair to allow for active-active high availability. Connectivity between border switches is established using Layer 3 links and ECMP.

Within a multi-fabric network, VLANs, VRFs, and User Roles can be extended across the fabrics, allowing for consistent policy throughout the deployment. This means that user-based policy can be applied consistently across sites. These policy constructs are transported directly in the data plane using headers in the VXLAN packet.

A simple two fabric design is shown below:

Multifabric

Transport between the border switches of each fabric can be Metro Ethernet, dark fiber, or another high-speed WAN technology able to support frame sizes greater than 1550 bytes. This jumbo frame support is necessary to accommodate the VXLAN header encapsulation between the sites. VXLAN encapsulation ensures that macro- or micro-segmentation is retained within and across fabrics.
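The arithmetic behind the 1550-byte figure, assuming an IPv4 outer header and an untagged inner frame:

```python
# VXLAN carries the full inner Ethernet frame inside UDP/IP, so the transport
# IP MTU must exceed the standard 1500 bytes.
inner_frame = 1500 + 14        # inner IP MTU + inner Ethernet header
vxlan, udp, outer_ip = 8, 8, 20
required_transport_mtu = inner_frame + vxlan + udp + outer_ip
print(required_transport_mtu)  # 1550 bytes, matching the guidance above
```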

When building the underlay between border switches in multiple fabrics, OSPF is recommended. Alternatively, eBGP can be used to configure the inter-fabric underlay manually with MultiEdit. Each fabric should be organized as a separate Central UI group. Among other reasons, this simplifies the creation of multiple OSPF areas if needed.

Central supports the “Multi-Fabric EVPN” workflow to orchestrate EVPN-VXLAN overlays between the fabrics across sites.


