31-Oct-23

AFC EVPN-VXLAN Configuration

Configuration of an Aruba ESP Data Center fabric using a spine-and-leaf topology is best performed with the Aruba Fabric Composer (AFC) Guided Setup process. AFC automates switch provisioning, underlay link and routing configuration, overlay configuration, and integration with VMware vCenter.

Physical Topology Overview

The diagram below illustrates the physical links and hardware that comprise the primary data center. AFC is used to configure a routed underlay and EVPN-VXLAN overlay for the topology.

AFC Process Overview

AFC’s Guided Setup automates configuration following these steps:

  • Switch discovery: Discover and inventory data center switches in Aruba Fabric Composer.
  • AFC Fabric creation: Define the logical construct within AFC that identifies a fabric.
  • Switch assignment: Assign roles to fabric switches.
  • NTP and DNS configuration: Assign NTP and DNS servers to fabric switches.
  • VSX configuration: Create VSX-redundant ToR leaf pairs.
  • Leaf/Spine configuration: Assign IP addresses to leaf/spine links.
  • Underlay configuration: Establish OSPF underlay to support EVPN-VXLAN overlay data plane and control plane.
  • Overlay configuration: Establish BGP peerings to enable EVPN overlay control plane and VXLAN tunnel endpoints for overlay data plane.
  • EVPN configuration: Establish Layer 2 EVPN mapping of VLANs to VXLAN Network Identifiers (VNIs).

When the Guided Setup is complete, additional configuration is required for compute host onboarding and external fabric connectivity:

  • Layer 3 services within overlays.
  • Multi-chassis LACP LAG configuration for host connectivity.
  • Routing between the data center and campus.

For additional details on the Guided Setup steps, refer to the “Guided Setup > Guided Setup Configuration Options” section of the Aruba Fabric Composer User Guide.

Plan the Deployment

Before starting the guided setup, plan ahead and identify a naming convention and address scheme with values that can accommodate the current deployment size and leave room for growth. A consistent approach in the physical and logical configurations improves the management and troubleshooting characteristics of the fabric.

This section provides sample values and rationale. Adjust values and formats as needed to best accommodate the current and projected sizes of the fabric.

Naming Conventions

AFC supports the execution of operations on a single switch or on a selected group of switches.

Establish a switch naming convention that indicates the switch type, role, and location to simplify identification and increase efficiency when operating production-scale fabrics. Configure switch names before importing them into AFC.

Example values used in this guide:

| Switch Name | Fabric Role | Description |
| --- | --- | --- |
| RSVDC-FB1-SP1 | Spine | Fabric #1, Spine #1 |
| RSVDC-FB1-LF1-1 | Leaf | Fabric #1, VSX Leaf Pair #1, Member #1 |
| RSVDC-FB1-LF1-2 | Leaf | Fabric #1, VSX Leaf Pair #1, Member #2 |
| RSVDC-FB1-SA1 | Server Access (Sub Leaf) | Fabric #1, VSF Server Access Stack #1 |

Note: VSF stacks used in the server access role comprise two or more switches. The stack operates as a single logical switch with a single control plane. It is not possible to differentiate between stack members using a unique hostname.

The Guided Setup prompts for a Name Prefix on some steps. Name prefixes are logical names used within AFC. Choose descriptive names to make it easy to monitor, edit, and execute operations. The procedures below include examples of effective names.

Underlay Connectivity and Addressing

Point-to-point connections between spine and leaf switches are discovered and automatically configured for IP connectivity using /31 subnets within a single network range. AFC supports addressing up to 128 links in a fabric using a /24 subnet mask. The maximum number of links in a fabric is determined by the aggregate port count of the spine switches.
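As a quick sanity check of the link-addressing math, the /31 capacity of a block can be computed with Python's `ipaddress` module. The block values are the examples used in this guide:

```python
import ipaddress

def p2p_link_capacity(block: str) -> int:
    """Count the /31 point-to-point subnets a block can provide."""
    return len(list(ipaddress.ip_network(block).subnets(new_prefix=31)))

# A /24 yields 128 /31 links, matching the stated AFC capacity.
print(p2p_link_capacity("10.255.0.0/24"))  # 128

# The /23 block used in this guide doubles the headroom.
print(p2p_link_capacity("10.255.0.0/23"))  # 256
```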

Another network range is provided to create:

  • A /32 loopback address on each switch, used as the router ID for OSPF and BGP.
  • A /31 transit VLAN between ToR switch pairs to ensure data plane continuity in case of host link failure.
  • A /31 point-to-point interface between ToR switch pairs to transmit keep-alive messages for VSX peer loss detection.

AFC creates each of these subnet types automatically from a single network range provided during the VSX setup process. If VSX is not used, the network range is provided during the underlay configuration process.

Example values used in this guide are:

| Purpose | Description | Example |
| --- | --- | --- |
| Leaf-spine IP address block | An IPv4 address block used to create /31 point-to-point Layer 3 links between leaf and spine switches. | 10.255.0.0/23 |
| Routed loopback, VSX transit VLAN, and VSX keep-alive interface IP address block | An IPv4 address block used to allocate a unique /32 loopback address for each switch, a /31 VSX keep-alive point-to-point connection, and a /31 transit routed VLAN between redundant ToRs. | 10.250.0.0/23 |
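The second address pool serves three interface types. The sketch below illustrates how /32 loopbacks and /31 links can all be drawn from a single /23; the switch and VSX-pair counts are assumptions for illustration, and AFC's actual allocation order is internal to the tool:

```python
import ipaddress

pool = ipaddress.ip_network("10.250.0.0/23")
slices = pool.subnets(new_prefix=31)  # carve the pool into /31 chunks

switch_count = 8    # assumed: 2 spines + 6 leaves
vsx_pair_count = 3  # assumed: 2 leaf pairs + 1 border leaf pair

# One /32 router-ID loopback per switch (one address out of a /31 chunk),
# then a /31 keep-alive link and a /31 transit VLAN per VSX pair.
loopbacks = [next(slices)[0] for _ in range(switch_count)]
keepalives = [next(slices) for _ in range(vsx_pair_count)]
transit_vlans = [next(slices) for _ in range(vsx_pair_count)]

print(loopbacks[0], keepalives[0], transit_vlans[0])
```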

Overlay Connectivity and Addressing

The overlay network is created using VXLAN tunnels established between VXLAN Tunnel Endpoints (VTEPs) on the leaf switches in the fabric. The loopback addresses assigned to establish route peerings are unique per switch and cannot be used as a VTEP IP when using VSX. A single logical VTEP per rack is defined by creating a dedicated /32 loopback interface common to both ToR peer switches. The interfaces are assigned automatically from a single subnet scope provided during the overlay guided setup.

| Purpose | Description | Example |
| --- | --- | --- |
| VTEP IP address block | An IPv4 address block used to allocate a VXLAN tunnel endpoint (VTEP) loopback address (/32) for each ToR switch pair. | 10.250.2.0/24 |

A VXLAN Network Identifier (VNI) is a numerical value that identifies network segments within the fabric’s overlay topology. The VNI is carried in the VXLAN header, enabling switches in the fabric to identify the overlay to which a frame belongs and apply the correct policy to it.

When configuring the overlay topology, a Layer 3 VNI represents the routed component of the overlay. Each Layer 3 VNI maps to a VRF. A Layer 2 VNI represents the bridged component of the overlay. Each Layer 2 VNI maps to a VLAN ID. Multiple Layer 2 VNIs can be associated to a single VRF.

Plan your VNI numbering scheme in advance to ensure that values do not overlap. Example values used in this guide are:

| VNI Type | Description | Example |
| --- | --- | --- |
| L2 VNI | VLAN ID + 10,000 | VLAN 100 == L2 VNI 10100, VLAN 200 == L2 VNI 10200 |
| L3 VNI | Overlay # + 100,000 | Overlay 1 == L3 VNI 100001, Overlay 2 == L3 VNI 100002 |
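The numbering convention above can be expressed as two one-line helpers, useful when pre-computing VNI values for an addressing plan:

```python
def l2_vni(vlan_id: int) -> int:
    """This guide's convention: L2 VNI = VLAN ID + 10,000."""
    return vlan_id + 10_000

def l3_vni(overlay_number: int) -> int:
    """This guide's convention: L3 VNI = overlay number + 100,000."""
    return overlay_number + 100_000

print(l2_vni(100), l2_vni(200))  # 10100 10200
print(l3_vni(1), l3_vni(2))      # 100001 100002
```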

Internal BGP (iBGP) is used to share overlay reachability information between leaf switches. Layer 3 and Layer 2 information associated to a local switch’s VNIs is advertised with its associated VTEP to other members of the fabric. Two of the spines operate as BGP route reflectors. All leaf switches are clients of the two route reflectors.

MAC Address Best Practices

A Locally Administered Address (LAA) should be used when AFC requires entry of a MAC address, such as when configuring the Active Gateway for a Distributed SVI. An LAA is a MAC address in one of the four formats shown below:

x2-xx-xx-xx-xx-xx 
x6-xx-xx-xx-xx-xx 
xA-xx-xx-xx-xx-xx 
xE-xx-xx-xx-xx-xx

Each x position can contain any valid hex value. It is often helpful to encode the associated VLAN ID in the hex positions so that gateway MAC addresses are self-describing. For more details on the LAA format, see the IEEE tutorial guide.
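As an illustration of the format, the sketch below builds an x2-style LAA that embeds a VLAN ID in the low-order octets and validates the LAA/unicast bits. The 02:00:00 prefix and VLAN placement are hypothetical conventions chosen for this example, not AFC requirements:

```python
def vlan_laa(vlan_id: int) -> str:
    """Build an x2-format LAA embedding a VLAN ID (hypothetical scheme).

    The 02:00:00:00 prefix and placement of the VLAN ID in the last two
    octets are illustrative choices only.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("invalid VLAN ID")
    return "02:00:00:00:{:02x}:{:02x}".format(vlan_id >> 8, vlan_id & 0xFF)

def is_laa_unicast(mac: str) -> bool:
    """Locally-administered bit set, individual/group (multicast) bit clear."""
    first_octet = int(mac.split(":")[0], 16)
    return first_octet & 0b11 == 0b10

print(vlan_laa(100))                  # 02:00:00:00:00:64
print(is_laa_unicast(vlan_laa(100)))  # True
```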

AFC Prerequisite Configuration Summary

The following items must be configured before building an AFC-based fabric.

  • Physically cable all switches in the topology. VSX pairs, VSF stacks, and leaf-spine links must be fully connected to support AFC’s fabric automation.
  • Configure VSF stacking for server access switches. When optional server access switches are present, VSF auto-stacking must be configured when the switches are at their default configuration. VSF configuration guidance is available in the Aruba Support Portal. Enable split detection after the stack is formed.
  • Assign management interface IP addresses. A DHCP scope using MAC address reservations for each switch can be used in place of manual IP address assignment. When using DHCP, MAC address reservations ensure that each switch is assigned a consistent IP address.
  • Assign switch hostnames. Assigning unique hostnames using a naming convention allows administrators to identify a switch and its role quickly during setup and future troubleshooting.

Fabric Initialization

Configuration of an Aruba ESP Data Center fabric using a spine-and-leaf topology is best performed using the AFC Guided Setup process. To return to Guided Setup, select it in the menu bar at the top right of the AFC user interface.

Discover Switches on the Network

The first procedure adds switches to the AFC device inventory. An orderly naming convention for switch host names should be implemented before continuing with this procedure in order to simplify switch selection in the following steps.

Step 1 On the Guided Setup menu, select SWITCHES.

Step 2 In the Discover Switches window, enter the following switch information and click APPLY.

  • Switches: < OOBM IP addresses for fabric switches >
  • admin Switch Password: < password created during switch initialization >
  • Confirm admin Switch Password: < password created during switch initialization >
  • afc_admin Account Password: < new password for the afc_admin account >
  • Confirm afc_admin Account Password: < new password for the afc_admin account >

Note: Switch IP addresses can be entered in a comma-separated list or in one or more ranges. If the IP addresses provided include devices not supported by AFC or switches with different credentials, a “Discovery Partially Successful” warning message appears after the import.
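When preparing the discovery list offline, a range entry can be expanded to individual addresses for cross-checking against the switch inventory. The range syntax AFC accepts may differ from the "start-end" form assumed here:

```python
import ipaddress

def expand(entry: str) -> list[str]:
    """Expand a 'start-end' IPv4 range or a single address into a list.

    Offline planning helper only; the exact range syntax AFC accepts is
    an assumption in this sketch.
    """
    if "-" in entry:
        start, end = (ipaddress.IPv4Address(p.strip()) for p in entry.split("-"))
        return [str(ipaddress.IPv4Address(i)) for i in range(int(start), int(end) + 1)]
    return [entry.strip()]

entries = "10.2.120.10-10.2.120.12, 10.2.120.20"
switches = [ip for part in entries.split(",") for ip in expand(part)]
print(switches)  # ['10.2.120.10', '10.2.120.11', '10.2.120.12', '10.2.120.20']
```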

This step creates a new afc_admin account on all the switches for API access from AFC.

Step 3 Review the list of imported switches in the Maintenance > Switches window and verify that the health status of each switch is HEALTHY, BUT… Hovering over the health status value of an individual switch provides additional details.

Create a Fabric

A fabric is created in AFC for collective configuration of switch members. The name is internal to AFC operations and is not tied to a specific switch configuration. AFC supports the configuration of both data and management fabrics.

Step 1 On the Guided Setup menu, select FABRIC.

Step 2 Define a unique logical name, set the Type to Data, specify a time zone, and click APPLY.

Add Switches to the Fabric

Switches must be added to a fabric before they can be configured. When adding a switch to a fabric, a role is declared. In the following steps, begin by adding spine switches. Leaf switches can then be added more easily as a group.

Step 1 On the Guided Setup menu, choose the fabric created in the previous step under Selected Fabric and click ASSIGN SWITCH TO FABRIC.

Step 2 Assign switches to the fabric grouped by role. Assign the following values for spine switches, then click ADD.

  • Fabric: RSVDC-FB1
  • Switches: < All spine switches >
  • Role: Spine
  • Force LLDP Discovery: checked
  • Initialize Ports: checked
  • Exclude this switch from association to any Distributed Services Manager: unchecked

Note: Checking Initialize Ports enables all switch ports for use in LLDP neighbor discovery. Split port configuration is performed in the previous Switch Initialization procedure to allow proper port initialization by AFC. The MTU of the physical ports is also adjusted to 9198 to support the jumbo frames required to accommodate VXLAN encapsulation overhead.

Checking Force LLDP Discovery prompts AFC to use LLDP neighbor information to discover link topology between spine-and-leaf switches and ToR VSX pairs dynamically.
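The 9198-byte MTU exists to absorb VXLAN encapsulation overhead. The arithmetic below shows the standard overhead for an IPv4 underlay; an inner 802.1Q tag would add 4 bytes, and an IPv6 outer header 20 more:

```python
# Standard VXLAN encapsulation overhead added on top of the original frame:
overhead = {
    "outer Ethernet header": 14,
    "outer IPv4 header": 20,
    "outer UDP header": 8,
    "VXLAN header": 8,
}
total_overhead = sum(overhead.values())
print(total_overhead)  # 50

# Room left for the encapsulated host frame at the 9198-byte fabric MTU:
print(9198 - total_overhead)  # 9148
```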

Step 3 Repeat the steps above for VSF server access switch stacks. Verify that all server access switch stacks are listed with the Sub Leaf role selected and click ADD.

Note: This step is optional. It is required only when server access switches are present in the topology. Each VSF switch stack has a single entry that represents all switch members of the stack. This example implementation contains a single VSF stack.

Step 4 Repeat the previous step for border leaf switches with the Border Leaf role selected and click ADD.

Step 5 Repeat the previous step for the remaining leaf switches with the Leaf role selected and click ADD.

Note: Leaf switches typically comprise the majority of switches in a fabric. When the smaller sets of switches are assigned first, use SELECT ALL to capture all remaining leaf switches.

Step 6 Scroll through the list of switches to verify role assignments to ensure successful configuration of the fabric. After adding all switches to the fabric with the correct role, click APPLY.

Step 7 Guided Setup displays the list of switches in the Maintenance > Switches window. Switch status should sync in a few seconds. Verify that all switches in the fabric are listed as HEALTHY in green.

Configure Infrastructure Split Ports

This process is necessary only when using links between fabric switches that require split port operation. The most common case is using a CX 9300 in the spine role to increase rack capacity of a fabric. CX 9300 spine ports are set to operate in 2 x 100 Gbps mode in this example deployment.

Step 1 On the Configuration menu, select Ports > Ports.

Step 2 On the Ports page, select both spine switches in the Switch field.

Step 3 Filter displayed ports by entering Invalid in the regex field below the Reason column heading and click the Apply Table Filters icon.

Note: Invalid speed is displayed in the Reason column when there is a mismatch between a physical port’s configured operation and an attached Active Optical Cable’s (AOC’s) physical split configuration. No error message is displayed when using a standard 400 Gbps transceiver before defining split port operation.

Step 4 Click the box at the top of the selection column to select all the displayed ports on both spine switches matching the search criteria.

Step 5 On the ACTIONS menu, select QSFP Transform > Split > 2x 100.

Step 6 When prompted to confirm the split operation, click OK.

Note: The split ports are enabled by AFC for use in LLDP neighbor discovery, and the MTU of the split ports is adjusted to 9198 to support jumbo frames for VXLAN encapsulation.

Configure NTP for the Fabric

Modern networks require accurate, synchronized time. The NTP wizard is used to enter NTP server hosts and associate them with all fabric switches. The NTP servers must be reachable from the data center management LAN. The AFC CLI Command Processor shows the time synchronization status of each switch. At the completion of this procedure, the data center switches have the date and time synchronized with the NTP servers.

Step 1 On the Guided Setup menu, select NTP CONFIGURATION.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Entries page, enter a valid hostname or IP address and optional NTP authentication information, then click ADD.

Step 4 Repeat the step above for each NTP server in the environment.

Step 5 After all NTP servers have been added, click NEXT.

Step 6 On the Application page, select the name of the fabric in the Fabric field and click NEXT.

Step 7 On the Summary page, verify that the information is entered correctly and click APPLY.

Step 8 On the Configuration > Network > NTP page, click the radio button for the NTP config applied to an individual switch, click the ACTIONS menu on the right, and click Delete.

Note: AFC dynamically creates switch-level objects that reconcile configuration performed by an administrator directly on the switch. A switch-level configuration object has higher precedence than AFC objects defined at the fabric level. Default NTP configuration may be reconciled into a switch-level object; in that case, the switch-level NTP configuration objects must be deleted for the fabric-level configuration to apply. If per-switch reconciled configuration is not present, omit Steps 8, 9, and 10.

Step 9 In the Delete confirmation window, click OK.

Step 10 Repeat steps 8 and 9 to remove reconciled NTP configuration for all switches.

Step 11 In the menu bar at the top right of the AFC display, click the CLI Commands icon and select Show Commands.

Step 12 On the CLI Command Processor page, enter the following values, then click RUN.

  • Fabrics: RSVDC-FB1
  • Commands: show ntp status

Note: Multiple commands are supported in the Commands field in a comma-separated list. CLI commands can be saved for future re-use by clicking the ADD button. When typing a command in the Saved Commands field, preconfigured and saved commands appear in a list. Select a command in the list to add it to the Commands field.

Step 13 Verify that the output for each switch displays an NTP server IP address with stratum level, poll interval, and time accuracy information.

Note: NTP synchronization can take several minutes to complete. If a hostname was used instead of an IP address, complete the next step to configure DNS for the fabric before NTP verification.

Configure DNS for the Fabric

Use the DNS wizard to enter DNS host details and associate them with all fabric switches. The DNS servers must be reachable from the data center management LAN.

At the completion of this procedure, the data center switches can resolve DNS hostnames to IP addresses.

Step 1 On the Guided Setup menu, select DNS CONFIGURATION.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Settings page, enter the Domain Name. Enter a valid DNS server IP address in the Name Servers field. Press the TAB or ENTER key to complete the server entry.

Step 4 Create additional entries as needed. After all required DNS servers are entered, click NEXT.

Step 5 On the Application page, select the name of the fabric in the Fabrics field and click NEXT.

Step 6 On the Summary page, verify that the information is entered correctly and click APPLY.

Configure VSX on Leaf Switches

VSX enables a pair of ToR leaf switches to appear as a single logical switch to downstream hosts using multi-chassis link aggregation. VSX improves host availability in case of switch failure or maintenance downtime. AFC automatically identifies VSX switch pairs and configures them with the values supplied in the VSX wizard. Resource Pool wizards create IP and MAC address objects. The AFC CLI Command Processor verifies VSX operational status.

The diagram below highlights leaf and border leaf VSX pairs created in this procedure.

Note: Use of a non-uplink port for keep-alive messages between VSX peers is recommended to maximize fabric capacity.

Step 1 On the Guided Setup menu, select VSX CONFIGURATION.

Step 2 On the Create Mode page, leave Automatically generate VSX Pairs selected and click NEXT.

Step 3 On the Name page, enter a Name Prefix and Description, then click NEXT.

Step 4 On the Inter-Switch Link Settings page, leave the default values and click NEXT.

Step 5 On the Keep Alive Interfaces page, select Point-to-Point as the Interface Mode. Click ADD to launch the Resource Pool wizard.

Note: The Resource Pool wizard is launched in this step to create an object representing the IPv4 address range used for underlay loopback interfaces on all switches, VSX keep-alive interfaces, and routed transit VLAN interfaces on VSX pairs. A resource pool is a reusable object that ensures consistency and reduces errors when adding switches to the fabric in the future.

Step 6 Resource Pool wizard: On the Name page, enter a Name and Description for the IPv4 address pool, then click NEXT.

Step 7 Resource Pool wizard: On the Settings page, enter an IPv4 address block in the Resource Pool field and click NEXT.

Note: This IPv4 address block is used to allocate IP addresses to loopback interfaces (/32) for all fabric switches, VSX keep-alive point-to-point interfaces (/31), and routed transit VLAN interfaces on VSX pairs (/31). Use a block large enough to support addressing these interfaces across the entire fabric.

Step 8 Resource Pool wizard: On the Summary page, verify the IP address pool information and click APPLY. The Resource Pool wizard closes and returns to the main VSX Configuration workflow.

Step 9 On the Keep Alive Interfaces page, verify that the new IPv4 Address Resource Pool is selected and click NEXT.

Step 10 On the Keep Alive Settings page, leave the default values and click NEXT.

Step 11 On the Options page, enter the value 600 for the Linkup Delay Timer field. Click ADD to launch the Resource Pool wizard.

Note: Setting a 600-second Linkup Delay Timer is recommended on CX 10000 switches that use firewall policy, to ensure that policy and state have synchronized before traffic is forwarded on a multi-chassis LAG. The same value is applied to all switches in this example fabric.

Step 12 Resource Pool wizard: On the Name page, enter a Name and Description for the system MAC address pool. Click NEXT.

Step 13 Resource Pool wizard: On the Settings page, enter a MAC address range to be used for the VSX system MAC addresses, then click NEXT.

Step 14 Resource Pool wizard: On the Summary page, verify the system MAC address pool information and click APPLY. The Resource Pool wizard closes and returns to the main VSX Configuration workflow.

Step 15 On the Options page, verify that the new MAC Address Resource Pool is selected and click NEXT.

Step 16 On the Summary page, verify the complete set of VSX settings and click APPLY.

Step 17 Guided Setup displays the list of VSX pairs in the Configuration > Network > VSX window. Review the information to verify that the VSX pairs created are consistent with physical cabling.

Note: VSX Health status in AFC can update slowly. Click the Refresh button in the upper right of the Configuration > Network > VSX window to refresh switch status manually.

Step 18 On the menu bar at the top right of the AFC display, click the CLI Commands icon and select Show Commands.

Step 19 On the CLI Command Processor page, enter the following values, then click RUN.

  • Switches: < All leaf switches >
  • Commands: show vsx status

Step 20 Verify that each switch has both Local and Peer information populated with the following values:

  • ISL channel: In-Sync
  • ISL mgmt channel: operational
  • Config Sync Status: In-Sync
  • NAE: peer_reachable
  • HTTPS Server: peer_reachable
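These expected values can be captured in a small checklist. The sketch below compares parsed `show vsx status` fields against the healthy values listed above; scraping the CLI output into a dictionary is left out, and the field names are taken from this step:

```python
EXPECTED_VSX = {
    "ISL channel": "In-Sync",
    "ISL mgmt channel": "operational",
    "Config Sync Status": "In-Sync",
    "NAE": "peer_reachable",
    "HTTPS Server": "peer_reachable",
}

def vsx_issues(status: dict) -> list:
    """List fields in a parsed 'show vsx status' that deviate from healthy."""
    return [f"{field}: got {status.get(field)!r}, want {want!r}"
            for field, want in EXPECTED_VSX.items()
            if status.get(field) != want]

healthy = dict(EXPECTED_VSX)
degraded = {**EXPECTED_VSX, "Config Sync Status": "Out-Of-Sync"}
print(vsx_issues(healthy))        # []
print(len(vsx_issues(degraded)))  # 1
```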

Configure Layer 3 Leaf-to-Spine Connections

AFC automatically identifies leaf-to-spine connections and configures them with the values supplied in the Leaf-Spine wizards. A resource pool is created to assign IP addresses to routed leaf and spine interfaces using /31 subnets. At the completion of this procedure, IP addresses are assigned to all interfaces required to support deployment of the OSPF fabric underlay.

Step 1 On the Guided Setup menu, select L3 LEAF-SPINE CONFIGURATION to start the Leaf-Spine workflow.

Step 2 On the Create Mode page, leave Automatically generate Leaf-Spine Pairs selected and click NEXT.

Step 3 On the Name page, enter a Name Prefix and Description, then click NEXT.

Step 4 On the Settings page, click ADD to launch the Resource Pool wizard.

Step 5 Resource Pool wizard: On the Name page, enter a Name and Description for the IPv4 address pool, then click NEXT.

Step 6 Resource Pool wizard: On the Settings page, enter an IPv4 address block in the Resource Pool field and click NEXT.

Note: Use a subnet distinct from other subnets used in the overlay networks. The assigned subnet is used to configure routed ports between fabric switches. Use a block large enough to accommodate anticipated fabric growth.

Step 7 Resource Pool wizard: On the Summary page, verify the IP address pool information and click APPLY. The Resource Pool wizard closes and returns to the main Leaf-Spine Configuration workflow.

Step 8 On the Settings page, verify that the new IPv4 Address Resource Pool is selected and click NEXT.

Step 9 On the Summary page, verify that the information is correct and click APPLY.

Step 10 Guided Setup displays the list of leaf-to-spine links in the Configuration > Network > Leaf-Spine window. Review the information to verify that the leaf-spine links created are consistent with physical cabling.

Configure Server Access Switch Connectivity

AFC refers to server access switches as sub leaf switches. Compute and storage hosts are typically attached directly to leaf switches. Server access switches serve two primary purposes: they provide a transition strategy for connecting existing server infrastructure to an EVPN-VXLAN fabric, and they offer an economical way to support a large number of 1 Gbps connected hosts. Server access switches extend Layer 2 services from the leaf, but do not participate directly in underlay routing or overlay virtualization mechanisms.

The following procedure establishes an MC-LAG between a VSX leaf pair and a downstream VSF server access switch stack. The LAGs defined on both sets of switches are 802.1Q trunks that allow all VLANs.

The diagram below highlights the server access MC-LAG created in this procedure.

Step 1 On the Configuration > Network > Leaf-Spine page, click SUBLEAF-LEAF.

Step 2 On the ACTIONS menu, select Add.

Step 3 When prompted to continue, click OK.

Step 4 Review the leaf and server access MC-LAG information. Verify that the values in the Leaf LAG Status and SubLeaf LAG Status columns are up.

Configure Underlay Network Routing

The Aruba ESP Data Center spine-and-leaf design uses OSPF as the underlay routing protocol. The AFC Underlay Configuration wizard creates a transit VLAN between redundant ToRs to support routing adjacency, assigns IP addresses to loopback and transit VLAN interfaces, and creates underlay OSPF configuration. OSPF shares the loopback0 IP addresses for later use in establishing overlay routing. The AFC CLI Command Processor verifies OSPF adjacencies.

At the completion of this procedure, a functional underlay for the data center fabric is complete. The diagram below illustrates the assigned loopback IP addresses and the links where OSPF adjacencies are formed between leaf and spine switches.

Step 1 On the Guided Setup menu, select UNDERLAYS to start the Underlay Configuration workflow.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Underlay Type page, leave the default OSPF selection and click NEXT.

Step 4 On the Settings page, set the Transit VLAN to 3999. Leave other settings at their defaults and click NEXT.

Note: Enter a VLAN ID that cannot be confused easily with other VLANs within the network.

Step 5 On the Max Metric page, enter the value 600 in the On Startup field. Leave other settings at their defaults and click NEXT.

Note: It is recommended to set a 600-second OSPF On Startup max metric value for CX 10000 switches using firewall policy in a VSX pair to ensure that policy and state have synchronized before fabric traffic is forwarded to the switch VTEP. The same value is applied to all switches in this example fabric.

Step 6 On the Summary page, verify that the information is entered correctly and click APPLY to create the OSPF configuration.

Step 7 In the menu bar at the top right of the AFC display, click the CLI Commands icon and select Show Commands.

Step 8 On the CLI Command Processor page, enter the following values, then click RUN.

  • Fabrics: RSVDC-FB1
  • Commands: show ip ospf neighbors

Step 9 Verify that each spine switch shows an OSPF neighbor adjacency in the “FULL” state for all leaf switches. Verify that all leaf VSX pairs show an OSPF neighbor adjacency in the “FULL” state between themselves over the routed transit VLAN in addition to an adjacency in the “FULL” state with each spine.
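The expected adjacency counts follow directly from the topology: spines peer with every leaf, and each VSX leaf peers with every spine plus its VSX partner over the routed transit VLAN. The helper below encodes that rule for an assumed example fabric of two spines and six leaves in VSX pairs:

```python
def expected_full_neighbors(role: str, spines: int = 2, leaves: int = 6,
                            vsx_member: bool = True) -> int:
    """Expected count of FULL OSPF adjacencies per switch.

    The default spine and leaf counts are assumptions for this example
    topology, not values taken from AFC.
    """
    if role == "spine":
        return leaves                       # one adjacency per leaf
    return spines + (1 if vsx_member else 0)  # spines + VSX peer over transit VLAN

print(expected_full_neighbors("spine"))  # 6
print(expected_full_neighbors("leaf"))   # 3
```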

Configure Overlay Network Routing

The Aruba ESP Data Center spine-and-leaf design uses iBGP as the control plane for the fabric overlay. BGP provides a mechanism to dynamically build VXLAN tunnels and share host reachability across the fabric using the EVPN address family. VTEP interfaces are the VXLAN encapsulation and decapsulation points for traffic entering and exiting the overlay. VSX leaf pairs share the same VTEP IP address.

Use the AFC Overlay Configuration wizard to implement iBGP EVPN peerings using a private ASN and to establish VXLAN VTEPs. VTEP IP addresses are assigned as a switch loopback using a resource pool. iBGP neighbor relationships are verified using the AFC CLI Command Processor.

The diagram below illustrates the iBGP EVPN address family peerings established using loopback interfaces between leaf switches and the two spines operating as iBGP route reflectors, and the loopbacks added that function as VTEPs.

Step 1 From the Guided Setup menu, select OVERLAYS to start the Overlay Configuration workflow.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Overlay Type page, leave iBGP selected and click NEXT.

Step 4 On the iBGP Settings page, enter the following settings, then click NEXT.

  • Spine-Leaf ASN: 65001
  • Route Reflector Servers: < Select two spine switches >
  • Leaf Group Name: RSVDC-FB1-LF
  • Spine Group Name: RSVDC-FB1-RR

Note: Use a 2-byte ASN in the private range 64512-65534 to keep the switch configuration easy to read. A 4-byte ASN is also supported.

Step 5 On the IPv4 Network Address page, click ADD to launch the Resource Pool wizard.

Step 6 Resource Pool wizard: On the Name page, enter a Name and Description, then click NEXT.

Step 7 Resource Pool wizard: On the Settings page, enter an IPv4 address block in the Resource Pool field and click NEXT.

Note: This IPv4 address block is used to configure loopback addresses on all leaf switches for VXLAN VTEPs. A VSX leaf pair uses the same IP loopback address on both switches.

Step 8 Resource Pool wizard: On the Summary page, verify the VTEP IP address pool information and click APPLY. The Resource Pool wizard closes and returns to the main Overlay Configuration workflow.

Step 9 On the IPv4 Network Address page, verify that the new IPv4 Address Resource Pool is selected and click NEXT.

Step 10 On the Overlay Configuration Settings page, leave the default values and click NEXT.

Step 11 On the Summary page, verify that the iBGP information is correct, then click APPLY.

Step 12 In the menu bar at the top right of the AFC UI, click the CLI Commands icon and select Show Commands.

Step 13 On the CLI Command Processor page, enter the following values, then click RUN.

  • Switches: < Select both route reflector spine switches >
  • Commands: show bgp l2vpn evpn summary

Step 14 Verify that both route reflectors show an L2VPN EVPN neighbor relationship in the “Established” state for all leaf switches.

Configure Overlay VRFs

The Aruba ESP Data Center uses overlay VRFs to provide the Layer 3 virtualization and macro segmentation required for flexible and secure data centers. VRFs are distributed across all leaf switches. A VRF instance on one switch is associated to the same VRF on other leaf switches using a common L3 VNI and EVPN route-target, binding them together into one logical routing domain. VRFs are commonly used to segment networks by tenants and business intent.

Use the Virtual Routing & Forwarding workflow to create overlay network VRFs and associate a VRF with an L3 VNI and EVPN route-target. The VNI and route target for each set of overlay VRFs must be unique to preserve traffic separation.

This guide uses a production VRF and development VRF as an example of route table isolation. TCP/IP hosts in one VRF are expected to be isolated from hosts in the other VRF. The diagram below illustrates the logical VRF overlay across all leaf switches.

Note: The diagram above depicts the border leaf switches at the same horizontal level as all other leaf switches. This placement of the border leaf pair is a cosmetic preference for easier depiction of virtualization across leaf switches. The deployed topology is consistent with previous diagrams, but without the pictorial emphasis of the special role of the border leaf handling data center north/south traffic.

Hosts attached to server access switches can be connected to subnets in either VRF by VLAN extension from the leaf switch, but the server access switches do not contain their own VRF definition.

Step 1 On the left menu, select VRF. If VRF does not appear in the left pane, select Configuration > Routing > VRF from the top menu.

Step 2 On the ACTIONS menu on the right, select Add.

Step 3 On the Name page, enter a Name and Description, then click NEXT.

Step 4 On the Scope page, uncheck Apply the VRF to the entire Fabric and all Switches contained within it. Select the VSX leaf pairs in the Switches field, then click NEXT.

Note: When a high number of leaf switches are present, click the SELECT ALL button to select all switches, then de-select spine and server access switches. Spine and server access switches do not participate in overlay virtualization and do not possess VTEPs, so overlay VRFs should not be configured on them.

Step 5 On the Routing page, enter an L3 VNI and click NEXT.

Note: Refer to the “Overlay Connectivity and Addressing” section above for a VNI numbering reference.

Step 6 On the Virtual Routing & Forwarding Route Targets page, assign the following settings to add an EVPN route-target to the VRF, then click ADD.

  • Route Target Mode: Both
  • Route Target Ext-Community: 65001:100001
  • Address Family: EVPN

Note: Setting Route Target Mode to Both exports local switch VRF routes to BGP with the Route Target Ext-Community value assigned as the route-target, and imports BGP routes advertised by other switches with the same route-target value into the local VRF route table.

For Route Target Ext-Community, enter the private autonomous system number used in the “Configure Overlay Network Routing” procedure and the L3 VNI, separated by a colon. The L3 VNI is used in the BGP route target for logical consistency with the VXLAN L3 VNI. The complete route-target value uniquely identifies a set of VRFs.
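The resulting switch configuration can be sketched as follows. This is an illustrative AOS-CX snippet rather than literal AFC output; verify names and values against the generated configuration:

```
! Route Target Mode "Both" corresponds to matching import and export statements
vrf PROD-DC-VRF
    route-target export 65001:100001 evpn
    route-target import 65001:100001 evpn

! The L3 VNI is bound to the VRF under the VTEP interface
interface vxlan 1
    vni 100001
        vrf PROD-DC-VRF
```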

Step 7 Verify that the Route Targets information is correct and click NEXT.

Step 8 On the Summary page, verify that the complete set of VRF information is correct and click APPLY.

Step 9 Repeat this procedure for each additional overlay VRF.

Configure Overlay VLANs and SVIs

The Aruba ESP Data Center uses one or more VLANs within each VRF to provide host connectivity. VLAN SVIs provide IP addressing within the fabric. The AFC IP Interface workflow creates consistent VLANs across all leaf switches within an overlay VRF. The workflow assigns an SVI IP address, a virtual gateway address, and a locally administered virtual MAC address to the VLAN interface on each leaf switch. Aruba Active Gateway permits the SVI IP and virtual gateway to be used on VSX leaf pairs.

The creation of VLANs and SVIs in this step is a prerequisite to binding the VLANs across racks into logically contiguous Layer 2 domains in the next procedure. At the end of this procedure, each VLAN's broadcast domain is scoped to its VSX pair.

The diagram below illustrates the creation of VLANs in each ToR VSX leaf pair.

Step 1 Confirm that the view is set to Configuration/Routing/VRF, then click the • • • symbol next to PROD-DC-VRF and select IP Interfaces.

Note: The • • • symbol is a shortcut to most options in the ACTIONS menu. This shortcut method is available in many AFC contexts. The IP Interfaces context also can be viewed by clicking the PROD-DC-VRF radio button and selecting IP Interfaces on the ACTIONS menu.

Step 2 On the Configuration/Routing/VRF/PROD-DC-VRF page, select the right ACTIONS menu below IP INTERFACES and click Add.

Step 3 On the IP Interfaces page, assign the following values, then click NEXT.

  • Type: SVI
  • VLAN: 101
  • Switches: < Select all leaf switches >
  • IPv4 Subnetwork Address: 10.5.101.0/24
  • IPv4 Addresses: 10.5.101.1
  • Active Gateway IP Address: 10.5.101.1
  • Active Gateway MAC Address: 02:00:0A:05:00:01

Note: The SELECT ALL button selects all switches assigned to the VRF where the SVI interface will be created.

The range provided for IPv4 Addresses and the Active Gateway IP Address must be from the same network range as the IPv4 Subnetwork Address. The IPv4 Addresses field value is used to assign an IP address to each SVI interface. AOS-CX 10.09 and above supports assigning the same IP address as both the SVI interface and the active gateway. This maximizes the number of IPs available to assign to attached network hosts.

The active gateway is a virtual IP address serving as the default gateway locally on each switch. The active gateway MAC is a virtual MAC address associated with the active gateway IP.
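For reference, the SVI created on each leaf switch resembles the AOS-CX sketch below, using the VLAN 101 values from Step 3. This is an illustrative sketch, not AFC output:

```
interface vlan 101
    vrf attach PROD-DC-VRF
    ip address 10.5.101.1/24
    ! The virtual MAC must be set before the active gateway IP
    active-gateway ip mac 02:00:0A:05:00:01
    active-gateway ip 10.5.101.1
```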

Step 4 On the Name page, enter a Name and Description, then click NEXT.

Note: Including the associated VLAN ID and overlay VRF in the Name can be helpful during management operations.

Step 5 On the Summary page, verify that the information is entered correctly and click APPLY.

Step 6 Repeat the procedure to create an additional overlay subnet in the production VRF using the following values:

| Name | Description | Type | VLAN | Switches | IPv4 Subnetwork Address | IPv4 Addresses | Active Gateway IP Address | Active Gateway MAC Address |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DB-V102-PROD-DC | Production database SVI/VLAN 102 DC overlay | SVI | 102 | < All leaf switches > | 10.5.102.0/24 | 10.5.102.1 | 10.5.102.1 | 02:00:0A:05:00:01 |

Step 7 Repeat the procedure to create additional overlay subnets in the development VRF using the following values:

| Name | Description | Type | VLAN | Switches | IPv4 Subnetwork Address | IPv4 Addresses | Active Gateway IP Address | Active Gateway MAC Address |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WEB-V201-DEV-DC | Development web app SVI/VLAN 201 in DC overlay | SVI | 201 | < All leaf switches > | 10.6.201.0/24 | 10.6.201.1 | 10.6.201.1 | 02:00:0A:06:00:01 |
| DB-V202-DEV-DC | Development database SVI/VLAN 202 in DC overlay | SVI | 202 | < All leaf switches > | 10.6.202.0/24 | 10.6.202.1 | 10.6.202.1 | 02:00:0A:06:00:01 |

Configure EVPN Instances

An EVPN instance joins each previously created VLAN across leaf switches into a combined broadcast domain. This procedure defines two key attributes that logically bind each VLAN across the leaf switches. A VNI is assigned to each VLAN; MP-BGP EVPN host advertisements associate host MACs with VNI values for VXLAN tunneling. An auto-assigned route target per VLAN is also defined. The VLAN route-target associates a MAC address with the appropriate VLAN on remote switches for the purpose of building bridge table MAC reachability. Route targets are included in MP-BGP EVPN host advertisements.

The EVPN wizard maps VLAN IDs to L2 VNI values. A prefix value is provided for automatic generation of route targets. The EVPN wizard also creates an EVPN instance to associate route-targets with VLANs. When using iBGP for the overlay control plane protocol, route targets can be automatically assigned. A resource pool is used to assign the EVPN system MAC addresses.

At the completion of this procedure, distributed L2 connectivity across switches in the fabric is established. Aruba Active Gateway permits the same IP address to be used on all leaf switches in the fabric. The diagram below illustrates the logical binding of VLANs across racks into logically contiguous broadcast domains.

Step 1 On the Guided Setup menu, select EVPN CONFIGURATION to start the EVPN workflow.

Step 2 On the Introduction page, review the guidance and click NEXT.

Note: The prerequisites noted above were completed in previous steps.

Step 3 On the Switches page, leave Create EVPN instances across the entire Fabric and all Switches contained within it selected and click NEXT.

Step 4 On the Name page, enter a Name Prefix and Description, then click NEXT.

Step 5 On the VNI Mapping page, enter one or more VLANs and a Base L2VNI, then click NEXT.

Note: The Base L2VNI value is added to each VLAN ID to generate a unique L2 VNI associated to each VLAN automatically.
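For example, with a Base L2VNI of 10000 (an assumed value for illustration), VLAN 101 maps to VNI 10101 and VLAN 102 maps to VNI 10102. The resulting AOS-CX configuration resembles this sketch:

```
! L2 VNI = Base L2VNI (10000) + VLAN ID
interface vxlan 1
    vni 10101
        vlan 101
    vni 10102
        vlan 102

! Per-VLAN EVPN instances with auto-derived route targets (iBGP overlay)
evpn
    vlan 101
        rd auto
        route-target import auto
        route-target export auto
```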

Step 6 On the Settings page, click ADD to launch the Resource Pool wizard.

Step 7 Resource Pool wizard: On the Name page, enter a Name and Description, then click NEXT.

Step 8 Resource Pool wizard: On the Settings page, enter a MAC address range for System MAC Addresses in the Resource Pool field and click NEXT.

Step 9 Resource Pool wizard: On the Summary page, verify that the System MAC information is correct and click APPLY. The Resource Pool wizard closes and returns to the main EVPN Configuration workflow.

Step 10 On the Settings page, verify that the MAC Address Resource Pool just created is selected, set the Route Target Type to AUTO, and click NEXT.

Note: EVPN route targets can be set automatically by switches only when using an iBGP overlay.

Step 11 On the Summary page, verify that the information is correct and click APPLY.

The Guided Setup is now complete.

Host Port Configuration

Use this section to configure the Port Groups and LACP Host LAG ports.

Configure Port Groups

The SFP28 ports in the Aruba 8325-48Y8C switches (JL624A and JL625A) are organized into four groups of 12 ports each, and the SFP28 ports on the Aruba 10000-48Y6C switches (R8P13A and R8P14A) are organized into 12 groups of four ports each. The SFP28 ports for both switch models default to a speed of 25 Gb/s and must be set manually to 10 Gb/s, if required.

For additional details, find the complete Aruba 8325 or 10000 Switch Series Installation and Getting Started Guide on the Aruba Support Portal. Go to the section: Installing the switch > Install transceivers > Interface-Group operation.
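AFC applies the speed change per interface group rather than per port, which is why all ports in a group change together. On the switch CLI, the equivalent setting resembles the sketch below; the group number is an example only:

```
! Set all ports in interface group 1 to 10 Gb/s
system interface-group 1 speed 10g
```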

Step 1 On the Configuration menu, select Ports > Ports.

Step 2 Select all switches that require port speed changes in the Switch field.

Step 3 Enter mismatch in the Reason column’s regex field and click the Apply table filters icon.

Note: This step is optional. Cabling must be complete before this step so the switch can generate a speed mismatch status used for filtering.

Step 4 Select a single port in the group to be changed. On the right ACTIONS menu, select Edit.

Note: Edit on the ACTIONS menu is available only when a single switch port is selected.

Step 5 On the Ports page, select the Speed tab. Select the appropriate value in the Speed dropdown, then click APPLY.

Note: Observe the full list of ports affected by the speed change. Ensure that this is the correct speed setting for all listed ports.

Step 6 Repeat the procedure for additional leaf switch ports requiring speed changes. Be sure to make changes to corresponding ports on VSX-paired switches supporting MC-LAGs.

Note: The displayed port list of mismatched transceiver speeds is updated dynamically. It may be necessary to toggle the select all/deselect all checkbox in the upper left column to deselect the previously selected port after the update hides it from view.

Multiple LACP MC-LAG Configuration

The Aruba ESP Data Center uses LACP link aggregation groups to provide fault tolerance and efficient bandwidth utilization to physical hosts in the data center. The Link Aggregation Group wizard configures multi-chassis LAGs and LACP on fabric switches. Use the AFC CLI Command Processor to verify LAG interface state for LAG connected hosts.

Multiple MC-LAG creation can be applied to one or more switches using the AFC wizard. It enables quick setup of MC-LAGs across all leaf switches when leaf switch models and cabling are consistent across the fabric. For example, a single pass of the Link Aggregation Groups wizard can configure all host leaf ports for MC-LAG given the following conditions:

  • All leaf switches contain the same number of host facing ports.
  • On all leaf switches, port 1/1/1 is assigned to LAG 1, port 1/1/2 is assigned to LAG 2, etc.
  • No more than one port per switch requires assignment to an individual MC-LAG.
  • VLANs assigned to all MC-LAGs are consistent.

Configuration is also required on the connected hosts. Host-side configuration varies by server platform and operating system and is not presented in this guide. Refer to the appropriate technical documentation for attached devices and operating systems.
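A host-facing multi-chassis LAG generated by the wizard resembles the following AOS-CX sketch on each VSX member. The LAG number, VLAN IDs, and member port are example values:

```
interface lag 1 multi-chassis
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed 101-102
    lacp mode active
    ! Fallback keeps a single link usable before the host negotiates LACP
    lacp fallback

interface 1/1/1
    no shutdown
    lag 1
```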

Step 1 On the Configuration menu, select Ports > Link Aggregation Groups.

Step 2 On the right ACTIONS menu, select Add.

Step 3 On the Create Mode page, select Create multiple MLAGs for selected VSX Pairs and click NEXT.

Step 4 On the Settings page, enter a Name Prefix and LAG Number Base, then click NEXT.

Note: If individual hostname assignments are required per MC-LAG in place of a more general prefix-based naming convention, choose Create a single LAG/MLAG in the previous step to provide a unique name per LAG.

LAG index values are numbered sequentially beginning with the LAG Number Base.

Step 5 On the Ports page, select one or more VSX-pairs of switches in the VSX Pairs field, enter the ports that are physically cabled for MC-LAG operation in Ports, then click VALIDATE.

Step 6 If AFC can validate that MC-LAG port configuration is consistent with LLDP neighbor data, a success message is presented.

Note: Validation of hypervisor host connections requires previous assignment of physical host ports to LACP LAGs. Validation is intended to verify that the requested configuration is consistent with cabling to attached hosts. Validation is not required to continue MC-LAG creation if attached hosts are not yet configured or present.

Step 7 On the Ports page, click NEXT.

Step 8 On the LACP Settings page, check Enable LACP Fallback, leave other LACP settings at their default values, and click NEXT.

Step 9 On the VLANs page, modify the untagged Native VLAN number if necessary, enter the tagged VLAN IDs in the VLANs field, then click NEXT.

Step 10 On the Summary page, confirm that the information is entered correctly and click APPLY to create the LAGs.

Step 11 In the menu bar at the top right of the AFC display, click the CLI Commands icon and select Show Commands.

Step 12 On the CLI Command Processor page, enter the following values, then click RUN.

  • Switches: < Select all switches with newly configured LAGs >
  • Commands: show lacp interfaces

Step 13 When a host is connected to the LAG, verify that each port assigned to one of the host LAGs created in this procedure has a State of “ALFNCD” for its local interfaces and “PLFNCD” for its partner interfaces. The Forwarding State should be “Up” for local interfaces.

Note: A combination of VSX peer LAG interfaces and VSX multi-chassis LAG interfaces to hosts may be included in the command output. Multi-chassis interfaces are denoted with (mc) after the LAG name. The Actor is the switch where the command was run; the Partner is the host at the other end of the LAG. The State column shows the expected values for a switch set to Active LACP mode and a host set to Passive LACP mode with a healthy LAG running.

Single LACP MC-LAG Configuration

Individual LAGs are assigned for the following conditions:

  • A unique LAG name is required in AFC.
  • Assigned VLANs are unique to the LAG.
  • More than one port per switch is assigned to an MC-LAG to increase capacity.

Step 1 On the Configuration menu, select Ports > Link Aggregation Groups.

Step 2 On the right ACTIONS menu, select Add.

Step 3 On the Create Mode page, leave Create a single LAG/MLAG selected and click NEXT.

Step 4 On the Settings page, enter a Name, Description, and LAG Number. Click NEXT.

Note: Consider using a Name that identifies the host and where it is connected.

Step 5 On the Ports page, select a VSX-pair of switches from the LAG Switch Member dropdown.

Step 6 Click the Switch View mode icon to identify ports more easily.

Step 7 Click the port icons to add them as members of the link aggregation group and click NEXT.

Note: A checkmark appears on the newly selected ports. The diamond icon appears on ports not currently available for a new LAG group assignment.

Select ports on both VSX paired switches to create a multi-chassis LAG.

Step 8 On the LACP Settings page, leave Enable LACP Fallback selected, leave other settings at their defaults, and click NEXT.

Note: Aruba CX switches default to “Active” mode to ensure that LACP can be established regardless of the LACP configuration of the host platform. Using the default settings is recommended. Click the box next to one or both switch names to modify default values.

Step 9 On the VLANs page, modify the untagged Native VLAN number if necessary, enter tagged VLAN IDs in the VLANs field, then click NEXT.

Step 10 On the Summary page, confirm that the information is entered correctly and click APPLY to create the LAGs.

Step 11 Repeat the procedure for each individual LAG connection in the fabric.

Step 12 In the menu bar at the top right of the AFC UI, click the CLI Commands icon and select Show Commands.

Step 13 On the CLI Command Processor page, enter the following values, then click RUN.

  • Switches: < Select all switches with host LAGs to verify >
  • Commands: show lacp interfaces

Step 14 When a host is connected to the LAG, verify that each port assigned to one of the host LAGs on a VSX pair has a State of "ALFNCD" for its local interfaces and "PLFNCD" for its partner interfaces. Verify that all interfaces in a LAG defined on a VSF stack have a State of "ALFNCD" when connected to a host. The Forwarding State should be "Up" for local interfaces.

Configure the Border Leaf

The border leaf is the ToR switch pair that connects the data center fabric to other networks such as a campus, WAN, or DMZ.

When connecting overlay networks to external networks, segmentation is preserved by establishing a distinct Layer 3 connection for each data center overlay VRF. A firewall is often used between the fabric hosts and an external network for policy enforcement, but this is not a requirement. A firewall also can be configured to permit traffic between VRFs based on policy. When connecting multiple overlay VRFs that require route table separation to be preserved upstream, the firewall must support VRFs or device virtualization.

The following diagram illustrates the topology and BGP peerings for connecting the production overlay VRF to an active/passive pair of upstream firewalls.

**Border Leaf Topology Production VRF**

An MC-LAG is used between the border leaf switch pair and each upstream firewall. This strategy provides network path redundancy to each firewall. When using an active/passive firewall pair, traffic is forwarded only to the active upstream firewall. Detailed firewall configuration is outside the scope of this document.

Each MC-LAG between the border leaf switches and the firewalls is an 802.1Q trunk, where one VLAN per VRF is tagged on the LAG. Tagging the same VLANs on both LAGs supports the active/passive operation of the firewall. Using VLAN tags when only one overlay VRF is present supports adding overlay VRFs in the future without additional cabling or changing port roles from access to trunk.
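On each border leaf switch, the firewall-facing trunk therefore resembles the sketch below, using the LAG number and VRF VLAN IDs from the procedures that follow. This is an illustrative AOS-CX sketch, not literal AFC output:

```
interface lag 251 multi-chassis
    no shutdown
    no routing
    vlan trunk native 1
    ! One tagged VLAN per overlay VRF (2021 = PROD, 2022 = DEV)
    vlan trunk allowed 2021-2022
    lacp mode active
```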

MP-BGP EVPN advertisements share host routes (/32 IPv4 and /128 IPv6) inside the data center. EVPN host routes are commonly filtered from connections outside the data center. Only network prefixes containing overlay hosts are shared externally; these can be redistributed connected routes or routes learned within the fabric from EVPN type-5 route advertisements.

In this sample implementation, each overlay VRF on the border leaf switches learns a default route and a campus summary route from the firewalls. The border leaf shares learned external routes with other leaf switches by advertising a type-5 EVPN route.

The following diagram illustrates additional elements required when adding external connectivity to the development overlay VRF. The same set of physical links between the border leaf and the firewalls is used to connect both production and development overlay VRFs. A development VRF VLAN is tagged on the previously configured MC-LAG trunks between the border leaf switches and the firewalls to support an additional set of BGP peerings with the firewall.

**Border Leaf Topology Multiple VRFs**

Note: When using an Aruba CX 10000 in the border leaf role, physical ports connecting to external networks must be configured with the access persona.

Configure External Routing VLAN SVIs

In the configuration steps below, VLAN SVIs are created for use in eBGP peerings between the border leaf switches and the upstream active/passive firewall pair.

Step 1 On the Configuration menu, select Routing > VRF.

Step 2 On the Configuration > Routing > VRF page, click the • • • symbol left of PROD-DC-VRF and select IP Interfaces.

Step 3 On the right ACTIONS menu of the IP Interfaces tab, select Add to launch the IP Interfaces wizard.

Step 4 On the IP Interfaces page, enter the following values and click NEXT.

  • Type: SVI
  • VLAN: 2021
  • Switches: < Select the border leaf VSX pair object >
  • IPv4 Subnetwork Address: 10.255.2.0/29
  • IPv4 Addresses: 10.255.2.1-10.255.2.2
  • Active Gateway IP Address: < blank >
  • Active Gateway MAC Address: < blank >
  • Enable VSX Shutdown on Split: < unchecked >
  • Enable Local Proxy ARP: < unchecked >

Step 5 On the Name page, enter a Name and Description, then click NEXT.

Step 6 On the Summary page, review the interface settings and click APPLY.

Step 7 Repeat this procedure to create an additional VLAN and SVI interface for DEV-DC-VRF. In step 2, select DEV-DC-VRF, then create an SVI with the following values:

| Name | Description | Type | VLAN | Switches | IPv4 Subnetwork Address | IPv4 Addresses |
| --- | --- | --- | --- | --- | --- | --- |
| DEV-DC-BORDER-LF to FW | Border leaf DEV-DC-VRF uplink to external FW cluster | SVI | 2022 | < Border leaf VSX pair object > | 10.255.2.8/29 | 10.255.2.9-10.255.2.10 |

Create Border Leaf to Firewall MC-LAGs

A VSX-based MC-LAG is created from the border leaf switches to each individual firewall in the active/passive cluster.

Step 1 On the Configuration menu, select Ports > Link Aggregation Groups.

Step 2 On the right ACTIONS menu, select Add.

Step 3 On the Create Mode page, leave Create a single LAG/MLAG selected and click NEXT.

Step 4 On the Settings page, enter the following values and click NEXT.

  • Name: RSVDC-BL to EXT-FW1
  • Description: MC-LAG from border leaf switches to FW1 in firewall cluster
  • LAG Number: 251

Step 5 On the Ports page, select the border leaf VSX object from the LAG Switch Member dropdown.

Step 6 Click the Switch View mode icon to identify ports more easily.

Step 7 Click the port icons connected to the first firewall to add them as members of the link aggregation group and click NEXT.

Note: A checkmark appears on the newly selected ports. The diamond icon appears on ports not currently available for a new LAG group assignment.

Step 8 On the LACP Settings page, leave all settings at their defaults and click NEXT.

Step 9 On the VLANs page, enter the VLAN ID for each VRF previously created to connect to external networks in the VLANs field, and click NEXT.

Step 10 On the Summary page, review the link aggregation settings and click APPLY.

Step 11 Repeat the procedure to create an additional MC-LAG on ports connecting the VSX border leaf switch pair to the second firewall using the following settings:

| Name | Description | LAG Number | Ports | LACP Settings | VLANs |
| --- | --- | --- | --- | --- | --- |
| RSVDC-BL to EXT-FW2 | MC-LAG from border leaf switches to FW2 in firewall cluster | 252 | 23 (on each switch member) | < Leave all defaults > | 2021-2022 |

Configure Host Filter Prefix List

Host routes and point-to-point link prefixes should not be advertised to external networks. The following procedure creates a prefix list used in route policy to filter /31 and /32 IPv4 prefix advertisements.
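The resulting prefix list resembles the AOS-CX sketch below, assuming the name PL-HOST-P2P referenced later in the route map procedure. A 0.0.0.0/0 prefix with GE 31 matches any IPv4 prefix with a length of /31 or /32, regardless of address:

```
! Matches all /31 and /32 IPv4 prefixes for use as a deny condition
ip prefix-list PL-HOST-P2P seq 10 permit 0.0.0.0/0 ge 31
```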

Step 1 On the Configuration menu, select Routing > Route Policy.

Step 2 Click the PREFIX LISTS tab. On the right ACTIONS menu, select Add.

Step 3 On the Settings page, enter a Name and Description, then click NEXT.

Note: The Name value defines the name of the prefix list in AFC and on the switch.

Step 4 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.

Step 5 On the Entries page, enter the following non-default values and click ADD.

  • Action: Permit
  • Prefix: 0.0.0.0/0
  • GE: 31

Step 6 Click NEXT.

Step 7 On the Summary page, review the prefix list settings and click APPLY.

Configure Campus AS Path List

An internal BGP peering will be established between the border leaf pair to create a routed backup path to the upstream firewall. IP prefixes learned in the fabric should not be advertised in the overlay BGP peering between the border leaf pair to avoid a routing loop. The following procedure creates an AS path list that matches only prefix advertisements sourced from the upstream firewall and campus routers.

Step 1 Click the AS PATH LISTS tab. On the right ACTIONS menu, select Add.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.

Step 4 On the Entries page, enter the following values and click ADD.

  • Sequence: 10
  • Description: permit campus originated advertisements
  • Action: Permit
  • Regex: ^65501 65000$

Note: The Regex field value matches BGP advertisements originated by the campus AS (65000) that are received by the RSVDC fabric border leaf via the firewall AS (65501). Routes advertised by the campus that are received from other external AS numbers are not accepted.

Step 5 On the Entries page, enter the following values and click ADD.

  • Sequence: 20
  • Description: permit firewall originated advertisements
  • Action: Permit
  • Regex: ^65501$

Note: The Regex field value matches BGP advertisements originated by the firewall AS. In this example topology, the default route is originated by the firewall.
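Together, the two entries produce an AS path list resembling this AOS-CX sketch; the list name ALLOWED-EXT-AS matches the name referenced in the internal route map procedure, and the exact syntax may vary by AOS-CX release:

```
! Routes originated in the campus AS (65000) via the firewall AS (65501)
ip aspath-list ALLOWED-EXT-AS seq 10 permit ^65501 65000$
! Routes originated by the firewall AS itself (e.g., the default route)
ip aspath-list ALLOWED-EXT-AS seq 20 permit ^65501$
```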

Step 6 Click NEXT.

Step 7 On the Summary page, verify the AS path list settings and click APPLY.

Configure Firewall Route Map

The following procedure creates a route map that will be applied outbound to external BGP peers. The route map policy filters host and point-to-point prefixes using the previously created host filter prefix list.
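The two route map entries created below resemble this AOS-CX sketch. The deny entry removes host and point-to-point prefixes, and the trailing permit entry with no match conditions allows all remaining routes. The route map name RM-EXT-OUT matches the name used later in the BGP peering procedure:

```
! Entry 1: deny prefixes matched by the host/P2P prefix list
route-map RM-EXT-OUT deny seq 10
    match ip address prefix-list PL-HOST-P2P

! Entry 2: permit everything else
route-map RM-EXT-OUT permit seq 20
```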

Step 1 On the Configuration > Routing > Route Policy page, click the ROUTE MAPS tab. On the right ACTIONS menu, select Add.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.

Step 4 On the Entries page, click the right ACTIONS menu, and select Add to launch the Route Map Entries wizard.

Step 5 Route Map Entries wizard: On the Settings page, enter the following non-default values and click NEXT.

  • Description: filter host and P2P prefixes
  • Action: Deny

Step 6 Route Map Entries wizard: On the Match Attributes page, enter the following values and click NEXT.

  • Attributes: Match IPv4 Prefix List
  • Match IPv4 Prefix List: PL-HOST-P2P

Step 7 Route Map Entries wizard: On the Set Attributes page, click NEXT.

Step 8 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.

Step 9 Create a second route map sequence. On the Entries page, click the right ACTIONS menu, and select Add.

Step 10 Route Map Entries wizard: On the Settings page, set the Action field to Permit and click NEXT.

Step 11 Route Map Entries wizard: On the Match Attributes page, click NEXT.

Step 12 Route Map Entries wizard: On the Set Attributes page, click NEXT.

Step 13 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.

Step 14 On the Entries page, click NEXT.

Step 15 On the Summary page, review the route map settings and click APPLY.

Configure Internal Border Leaf Route Map

The following procedure creates a route map that will be applied to the BGP peering between the border leaf switches. The route map only permits advertising prefixes originated by the campus AS number or the upstream firewall AS number.
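The resulting route map resembles the sketch below: the permit entry matches campus- and firewall-originated routes via the AS path list, and the explicit deny blocks everything else. The match syntax is an assumption for illustration; verify against the generated configuration:

```
route-map RM-PERMIT-CAMPUS permit seq 10
    match aspath-list ALLOWED-EXT-AS

route-map RM-PERMIT-CAMPUS deny seq 20
```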

Step 1 On the right ACTIONS menu of the ROUTE MAPS tab, select Add.

Step 2 On the Name page, enter a Name and Description, then click NEXT.

Step 3 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.

Step 4 On the Entries page, click the right ACTIONS menu, and select Add to launch the Route Map Entries wizard.

Step 5 Route Map Entries wizard: On the Settings page, enter the following non-default values and click NEXT.

  • Description: allow campus and firewall ASNs
  • Action: Permit

Step 6 Route Map Entries wizard: On the Match Attributes page, enter the following values and click NEXT.

  • Attributes: Match AS Path List
  • Match AS Path List: ALLOWED-EXT-AS

Step 7 Route Map Entries wizard: On the Set Attributes page, click NEXT.

Step 8 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.

Step 9 Create a second route map sequence. On the Entries page, click the right ACTIONS menu, and select Add.

Step 10 Route Map Entries wizard: On the Settings page, set the Action field to Deny and click NEXT.

Step 11 Route Map Entries wizard: On the Match Attributes page, click NEXT.

Step 12 Route Map Entries wizard: On the Set Attributes page, click NEXT.

Step 13 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.

Step 14 On the Entries page, click NEXT.

Step 15 On the Summary page, review the route map settings and click APPLY.

Configure Border Leaf BGP Peerings

The following procedure configures the eBGP peerings between the border leaf switches and the upstream firewalls with a route map applied to filter host routes and point-to-point link prefixes. A single BGP peering is defined to the upstream firewalls, which is established only with the active firewall in the active/passive pair.
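At the end of this procedure, the BGP configuration on the first border leaf switch resembles the sketch below. This is an illustrative AOS-CX sketch assembled from the values in the steps that follow, not literal AFC output:

```
router bgp 65001
    vrf PROD-DC-VRF
        ! eBGP to the active firewall with BFD fall-over
        neighbor 10.255.2.3 remote-as 65501
        neighbor 10.255.2.3 fall-over bfd
        ! iBGP to the VSX peer border leaf
        neighbor 10.255.2.2 remote-as 65001
        address-family ipv4 unicast
            neighbor 10.255.2.3 activate
            neighbor 10.255.2.3 route-map RM-EXT-OUT out
            neighbor 10.255.2.2 activate
            neighbor 10.255.2.2 route-map RM-PERMIT-CAMPUS out
```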

Step 1 On the left navigation menu, click BGP.

Step 2 On the Configuration > Routing > BGP page, click the PROD-DC-VRF radio button. On the right ACTIONS menu, select Edit.

Step 3 On the Settings page, check Enable BGP on PROD-DC-VRF and click APPLY.

Step 4 On the Configuration > Routing > BGP page, click the • • • symbol left of PROD-DC-VRF and select Switches.

Step 5 On the SWITCHES tab, click • • • next to RSVDC-FB1-LF1-1 and select Neighbors.

Step 6 On the right ACTIONS menu of the NEIGHBORS tab, select Add.

Step 7 On the Settings page, enter the following non-default values and click NEXT.

  • Neighbor AS Number: 65501
  • IP Address: 10.255.2.3
  • Route Map Out IP: RM-EXT-OUT
  • Enable Bidirectional Forwarding Detection (BFD) Fall Over: < checked >

Step 8 On the Name page, enter a Name and Description, then click NEXT.

Step 9 On the Summary page, review the BGP neighbor settings and click APPLY.

Step 10 Repeat steps 6-9 to add an iBGP peering between the border leaf switches in the production VRF with the following settings:

| Name | Description | Neighbor ASN | IP Addresses | Route Map IP Out |
| --- | --- | --- | --- | --- |
| PROD-DC-VRF LF1-1 to LF1-2 | PROD VRF peering between border leaf switches | 65001 | 10.255.2.2 | RM-PERMIT-CAMPUS |

Note: A BGP peering is defined per overlay VRF between border leaf switches to share routes learned from external peers. This avoids the possibility of discarding overlay traffic in the case where only one of the border leaf switches has an active external peering relationship. This may occur due to misconfiguration or administratively disabling one of the external peerings.

Step 11 In the top left current context path, click PROD-DC-VRF.

Step 12 On the SWITCHES tab, click • • • next to RSVDC-FB1-LF1-2 and select Neighbors.

Step 13 Repeat steps 6 to 9 to create additional BGP peerings on RSVDC-FB1-LF1-2 with the following settings:

| Name | Description | Neighbor ASN | IP Addresses | Route Map IP Out | BFD |
| --- | --- | --- | --- | --- | --- |
| PROD-DC-VRF LF1-2 to FW | BGP peering from LF1-2 PROD VRF to FW cluster | 65501 | 10.255.2.3 | RM-EXT-OUT | < checked > |
| PROD-DC-VRF LF1-2 to LF1-1 | PROD VRF peering between border leaf switches | 65001 | 10.255.2.1 | RM-PERMIT-CAMPUS | < unchecked > |

Step 14 Repeat this procedure for each overlay VRF network that requires external connectivity. Reachability between overlay VRFs is governed by policy at the upstream firewall. Strict overlay route table separation can be maintained by connecting to discrete VRFs or virtual firewall contexts on the upstream firewall.

Verify Border Leaf Routing

Step 1 In the top-left current context path, click BGP.

Note: To display information and the current state of an individual BGP peering, click the expansion icon (>) at the beginning of the row for each BGP peer definition. After a BGP peering is defined, the AFC web page may require a refresh to display the expansion icon.

Step 2 Click • • • next to PROD-DC-VRF and select Neighbors Summary.

Step 3 In the NEIGHBORS SUMMARY window, verify that each peering displays Established in the State column.

Step 4 Repeat steps 1-3 for each overlay VRF.

Step 5 In the menu bar at the top right of the AFC display, click the CLI Commands icon and select Show Commands.

Step 6 On the CLI Command Processor page, enter the following values, then click RUN.

  • Switches: < Select all leaf switches >
  • Commands: show ip route bgp vrf PROD-DC-VRF

Step 7 Verify that there is a default route and campus summary route learned on all leaf switches in the production VRF. The border leaf switch routes use the upstream firewall IP as a next hop. The remaining leaf switches use a next hop of the border leaf Anycast VTEP, learned via BGP EVPN type-5 advertisements.
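For example, the production VRF route table on a non-border (server) leaf might contain entries along these lines. The campus summary prefix (10.1.0.0/16) and the border leaf Anycast VTEP address (10.250.2.1) shown here are hypothetical placeholders, and the output formatting is simplified; actual `show ip route bgp` output varies by AOS-CX release.

```
RSVDC-FB1-LF1-3# show ip route bgp vrf PROD-DC-VRF

0.0.0.0/0      via 10.250.2.1  bgp  (EVPN type-5, border Anycast VTEP)
10.1.0.0/16    via 10.250.2.1  bgp  (EVPN type-5, border Anycast VTEP)
```

On the border leaf switches themselves, the same prefixes should instead show the firewall address (10.255.2.3 in this example) as the next hop, because they are learned directly over the external eBGP peering.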

Step 8 Repeat steps 6-7 for each overlay VRF.

VMware vSphere Integration

VMware vSphere integration enables VMware host and virtual machine visualization within AFC. This procedure also enables automated provisioning of VLANs on switch ports based on how the vSwitches and VMs are set up.

Step 1 On the Configuration menu, select Integrations > VMware vSphere.

Step 2 On the right ACTIONS menu, click Add to start the VMware vSphere wizard.

Step 3 On the Host page, assign the following settings:

  • Name: Example-vSphere1
  • Description: Example vSphere Integration
  • Host: rsvdc-vcenter.example.local
  • Username: administrator@example.local
  • Password: < password >
  • Validate SSL/TLS certificates for Aruba Fabric Composer: < unchecked >
  • Enable this configuration: < checked >

Note: Host is the resolvable hostname or IP address of the vCenter server.
Username is the name of an administrator account on the vCenter server.
Password is the password for the administrator account on the vCenter server.

Step 4 Click VALIDATE to verify that the provided credentials are correct. A green success message appears at the bottom right. Click NEXT.

Step 5 On the Aruba Fabric page, choose one of the two options below and enter a VLAN Range. Check Automated PVLAN provisioning for ESX hosts directly connected to the fabric and enter its VLAN range. Check Automated Endpoint Group Provisioning, then click NEXT.

  • If the hosts are directly connected from the NIC to the switch, select Automated VLAN provisioning for ESX hosts directly connected to the fabric.
  • If host infrastructure is HPE Synergy or another chassis with an integrated switch solution, select Automated VLAN provisioning for ESX hosts connected through intermediate switches.

Note: Automated PVLAN provisioning for ESX hosts directly connected to the fabric is a prerequisite for microsegmentation automations built into AFC.

Automated Endpoint Group Provisioning enables assigning VMs dynamically to firewall policy using VM tags. The IP addresses used in the policy are updated dynamically if a VM IP changes or the set of VMs associated with a tag changes.

For additional details on all options, refer to the Aruba Fabric Composer User Guide.

Step 6 On the vSphere page, click the checkbox for Discovery protocols and click NEXT.

Caution: If Discovery protocols is not enabled, the VMware integration cannot display virtual switches correctly.

Step 7 On the Summary page, confirm that the information is entered correctly and click APPLY.

Step 8 Go to Visualization > Hosts.

Step 9 Select the checkbox next to the Name of an ESXi host to add it to the visualization window.

Step 10 Verify the connectivity displayed from the hypervisor layer to the leaf switches.



© Copyright 2022 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein. Aruba Networks and the Aruba logo are registered trademarks of Aruba Networks, Inc. Third-party trademarks mentioned are the property of their respective owners. To view the end-user software agreement, go to Aruba EULA.