EVPN-VXLAN Configuration
Configuring an HPE Aruba Networking data center fabric is best performed using the HPE Aruba Networking Fabric Composer guided setup process. Fabric Composer automates switch provisioning, underlay link and routing configuration, overlay configuration, and integration with VMware vCenter.
Table of contents
- EVPN-VXLAN Configuration
- Physical Topology Overview
- HPE Aruba Networking Fabric Composer Process
- Plan the Deployment
- HPE Aruba Networking Fabric Composer Prerequisites
- Fabric Initialization
- Discover Switches on the Network
- Create a Fabric
- Assign Switches to the Fabric
- Configure Switch Profile
- Configure Infrastructure Split Ports
- Configure NTP for the Fabric
- Configure DNS for the Fabric
- Configure VSX on Leaf Switches
- Configure Layer 3 Leaf-to-Spine Connections
- Configure Server Access Switch Links
- Configure Underlay Network Routing
- Configure Overlay Network Routing
- Configure Overlay VRFs
- Configure Overlay VLANs and SVIs
- Configure EVPN Instances
- Host Port Configuration
- Configure the Border Leaf
- Configure Overlay Test Loopbacks
- Configure Overlay IP Multicast
- VMware vSphere Integration
Physical Topology Overview
The diagram below illustrates the physical links and hardware that comprise the primary data center in this guide. Fabric Composer is used to configure a routed underlay and EVPN-VXLAN overlay for the topology.
HPE Aruba Networking Fabric Composer Process
Fabric Composer’s Guided Setup automates configuration following these steps:
- Switch discovery: Discover and inventory data center switches in Fabric Composer.
- Fabric Composer fabric creation: Define the logical construct that identifies a fabric within Fabric Composer.
- Switch assignment: Assign roles to fabric switches.
- NTP and DNS configuration: Assign NTP and DNS servers to fabric switches.
- VSX configuration: Create VSX-redundant ToR leaf pairs.
- Leaf/Spine configuration: Assign IP addresses to leaf/spine links.
- Underlay configuration: Establish OSPF underlay to support the EVPN-VXLAN overlay data plane and control plane.
- Overlay configuration: Establish BGP peerings to enable the EVPN overlay control plane and VXLAN tunnel endpoints for the overlay data plane.
- EVPN configuration: Establish Layer 2 EVPN mapping of VLANs to VXLAN Network Identifiers (VNIs).
When the Guided Setup is complete, additional configuration is required for host onboarding, external fabric connectivity, testing, and multicast:
- Layer 3 services within overlays.
- Multi-chassis LACP LAG configuration for host connectivity.
- Routing between the data center and campus.
- Overlay loopbacks for testing reachability to directly connected hosts and resources both inside and outside the fabric.
- PIM-SM and IGMP to support overlay multicast services.
For additional details on the Guided Setup steps, refer to the “Guided Setup” section of the HPE Aruba Networking Fabric Composer User Guide.
Plan the Deployment
Before starting the guided setup, plan ahead and develop a naming convention and address scheme with values that can accommodate the current deployment size and leave room for growth. Using a consistent approach in the physical and logical configurations improves the management and troubleshooting characteristics of the fabric.
This section provides sample values and rationale. Adjust the values and formats as needed to accommodate the current and projected sizes of the fabric effectively.
Naming Conventions
Fabric Composer supports the execution of operations on a single switch or on a selected group of switches.
Establish a switch naming convention that indicates the switch type, role, and location to simplify identification and increase efficiency when operating production-scale fabrics. Configure switch names before importing them into Fabric Composer.
Example values used in this guide:
Switch Name | Fabric Role | Description |
---|---|---|
RSVDC-FB1-SP1 | Spine | Fabric #1, Spine #1 |
RSVDC-FB1-LF1-1 | Leaf | Fabric #1, VSX Leaf Pair #1, Member #1 |
RSVDC-FB1-LF1-2 | Leaf | Fabric #1, VSX Leaf Pair #1, Member #2 |
RSVDC-FB1-LF3-SA1 | Server Access (Sub Leaf) | Fabric #1, VSF Server Access Stack #1 attached to Leaf Pair #3 |
Note: VSF stacks used in the server access role contain two or more switches. The stack operates as a single logical switch with a single control plane. It is not possible to differentiate between stack members using a unique hostname.
The Guided Setup prompts for a Name Prefix on some steps. Name prefixes are logical names used within Fabric Composer. Choose a descriptive name to make it easy to monitor, edit, and execute operations. The procedures below include examples of effective names that can be used.
Underlay Connectivity and Addressing
Point-to-point connections between spine-and-leaf switches are discovered and configured automatically for IP connectivity using /31 subnets within a single network range. Fabric Composer supports addressing up to 128 links inside a fabric using a /24 subnet mask. The maximum number of links on a fabric is determined by the aggregate port count of the spine switches.
Another network range is provided to create:
- A /32 loopback address on each switch, used as the router ID for OSPF and BGP.
- A /31 transit VLAN between ToR switch pairs to ensure data plane continuity in case of host link failure.
- A /31 point-to-point interface between ToR switch pairs to transmit keep-alive messages for VSX peer loss detection.
Fabric Composer creates each of these subnet types automatically from a single network range provided during the VSX setup process. If VSX is not used, the network range is provided during the underlay configuration process.
Example values used in this guide are:
Purpose | Description | Example |
---|---|---|
Leaf-Spine IP address block | An IPv4 address block used to create /31, point-to-point layer 3 links between leaf and spine switches. | 10.255.0.0/23 |
Routed loopback, VSX transit VLAN, and VSX Keep-Alive Interface IP address block | An IPv4 address block used to allocate a unique loopback address (/32) for each switch, VSX keep-alive point-to-point connections (/31), and transit routed VLANs between redundant ToRs (/31) | 10.250.0.0/23 |
Overlay Connectivity and Addressing
The overlay network is created using VXLAN tunnels established between VXLAN tunnel endpoints (VTEPs) on the leaf switches in the fabric. Loopback addresses assigned to establish route peerings are unique per switch and cannot be used as a VTEP IP when using VSX. A single logical VTEP per rack is defined by creating a dedicated /32 loopback interface common to both ToR peer switches. The interfaces are assigned automatically from a single subnet scope provided during the overlay guided setup.
Purpose | Description | Example |
---|---|---|
VTEP IP address block | An IPv4 address block used to allocate VXLAN tunnel endpoint (VTEP) loopback addresses (/32) for each ToR switch pair | 10.250.2.0/24 |
A Virtual Network Identifier (VNI) is a numerical value that identifies network segments within the fabric’s overlay topology. The VNI is carried in the VXLAN header to enable switches in the fabric to identify the overlay to which a frame belongs and apply the correct policy to it.
When configuring the overlay topology, a Layer 3 VNI represents the routed component of the overlay. Each Layer 3 VNI maps to a VRF. A Layer 2 VNI represents the bridged component of the overlay. Each Layer 2 VNI maps to a VLAN ID. Multiple Layer 2 VNIs can be associated to a single VRF.
Plan your VNI numbering scheme in advance to ensure that values do not overlap. Example values used in this guide are:
VNI Type | Description | Example |
---|---|---|
L2 VNI | VLAN ID + 10,000 | VLAN 100 = L2 VNI 10100, VLAN 200 = L2 VNI 10200 |
L3 VNI | Overlay # + 100,000 | Overlay 1 = L3 VNI 100001, Overlay 2 = L3 VNI 100002 |
Internal BGP (iBGP) is used to share overlay reachability information between leaf switches. Layer 3 and Layer 2 information associated to a local switch’s VNIs is advertised with its associated VTEP to other members of the fabric. Two of the spines operate as BGP route reflectors. All leaf switches are clients of the two route reflectors.
A unique loopback IP address is assigned to each overlay VRF for testing and troubleshooting.
Overlay VLAN switched virtual interface (SVI) and Active Gateway IP address assignments are not unique on leaf switches. A ping test to a directly attached host is not supported when the SVI/AG IP is the traffic source, because the ping response may be sent to a switch that did not originate the ping. Using a unique source IP from a loopback allows the attached switch to source a ping test to the host.
Additionally, the SVI/AG IP address for each VLAN is present on all leaf switches in a fabric. A unique loopback IP provides a source IP address for testing reachability within the fabric and to external hosts. It also provides a unique destination IP to test individual switch reachability within an overlay.
Plan an IP block per overlay VRF large enough that a unique IP address can be assigned to each leaf switch in the fabric. A single maskable block allows summarizing route advertisements to external networks.
Purpose | Description | Example |
---|---|---|
VRF 1 IP address block | An IPv4 address block used to assign loopback IPs for reachability testing in VRF 1 | 10.250.4.0/24 |
VRF 2 IP address block | An IPv4 address block used to assign loopback IPs for reachability testing in VRF 2 | 10.250.5.0/24 |
Each VSX pair requires an overlay transit VLAN to share routed reachability of the loopback IP addresses assigned from the IP block above. Assign a single maskable block of IP addresses for each overlay VRF, where /31 blocks can be assigned to the transit VLANs.
Purpose | Description | Example |
---|---|---|
VRF 1 transit VLAN IP address block | An IPv4 address block used to assign /31 overlay transit VLAN subnets in VRF 1 | 10.255.4.0/24 |
VRF 2 transit VLAN IP address block | An IPv4 address block used to assign /31 overlay transit VLAN subnets in VRF 2 | 10.255.5.0/24 |
MAC Address Best Practice
A Locally Administered Address (LAA) should be used when Fabric Composer requires entry of a MAC address for the switch virtual MAC, a VSX system-MAC, or an Active Gateway MAC for a distributed SVI. An LAA is a MAC in one of the four formats shown below:
x2-xx-xx-xx-xx-xx
x6-xx-xx-xx-xx-xx
xA-xx-xx-xx-xx-xx
xE-xx-xx-xx-xx-xx
The x positions can contain any valid hex value. For more details on the LAA format, see the IEEE tutorial guide.
An active gateway IP distributes the same gateway IP across all leaf switches in a fabric to support gateway redundancy and VM movement across racks. An active gateway MAC associates a virtual MAC address with an active gateway IP. Only a small number of unique virtual MAC assignments can be configured per switch. The same active gateway MAC address should be reused for each active gateway IP assignment.
HPE Aruba Networking Fabric Composer Prerequisites
The following items must be configured before building a Fabric Composer-based fabric.
- Physically cable all switches in the topology. VSX pairs, VSF stacks, and leaf-spine links must be fully connected to support Fabric Composer’s automation.
- Configure VSF stacking for server access switches. When optional server access switches are present, VSF auto-stacking must be configured while the switches are in their default configuration. VSF configuration guidance is available on the HPE Networking Support Portal. Enable split detection after the stack is formed.
- Assign management interface IP addresses. A DHCP scope using MAC address reservations for each switch can be used in place of manual IP address assignment. When using DHCP, MAC address reservations ensure that each switch is assigned a consistent IP address.
- Assign switch hostnames. Assigning unique hostnames using a naming convention helps administrators identify a switch and its role quickly during setup and future troubleshooting.
Fabric Initialization
Configuring an HPE Aruba Networking data center fabric using a spine-and-leaf topology is best performed using the Fabric Composer guided setup process. To return to guided setup at any time, simply select it in the menu bar at the top right of the Fabric Composer user interface.
Discover Switches on the Network
The first procedure adds switches to the Fabric Composer device inventory. An orderly naming convention for switch host names should be implemented before continuing with this procedure in order to simplify switch selection in the following steps.
Step 1 On the Guided Setup menu, select SWITCHES.
Step 2 In the Discover Switches window, enter the following switch information and click APPLY.
- Switches: < OOBM IP addresses for fabric switches >
- admin Switch Password: < password created during switch initialization >
- Service Account Password: < new password for the afc_admin account >
- Confirm Service Account Password: < new password for the afc_admin account >
Note: Switch IP addresses can be entered in a comma-separated list or in one or more ranges. If the IP addresses provided include devices not supported by Fabric Composer or switches with different credentials, a “Discovery Partially Successful” warning message appears after the import.
This step creates a new afc_admin account on all the switches for API access from Fabric Composer.
Step 3 Review the list of imported switches in the Maintenance > Switches window and verify that the health status of each switch is HEALTHY, BUT… Hovering over the health status value of an individual switch provides additional details.
Create a Fabric
A fabric container is created in Fabric Composer for collective configuration of a group of switches. The fabric name is internal to Fabric Composer operations and is not tied to configuration elements on a switch. Fabric Composer supports the configuration of spine and leaf, Layer 2 two-tier, and management networks. All topologies assign switches to a Fabric Composer internal fabric for configuration and management.
The fabric in this guide is used to implement a spine-and-leaf routed network with an EVPN-VXLAN overlay.
Step 1 On the Guided Setup menu, select FABRIC.
Step 2 Define a unique logical name, set the Type to Data, specify a time zone, and click APPLY.
Assign Switches to the Fabric
Switches must be added to a fabric before they can be configured. When adding a switch to a fabric, a role is declared. In the following steps, begin by adding spine switches. Leaf switches can then be added more easily as a group.
Step 1 On the Guided Setup menu, verify that the fabric created in the previous step appears under Selected Fabric and click ASSIGN SWITCH TO FABRIC.
Step 2 Assign switches to the fabric grouped by role. Assign the following values for spine switches, then click ADD.
- Fabric: RSVDC-FB1
- Switches: < All spine switches >
- Role: Spine
- Force LLDP Discovery: checked
- Initialize Ports: checked
- Exclude this switch from association to any Distributed Services Manager: unchecked
Note: Checking Initialize Ports enables all switch ports for use in LLDP neighbor discovery. Split port configuration is performed in the previous Switch Initialization procedure to allow proper port initialization by Fabric Composer. The MTU of the physical ports also is adjusted to 9198 to support jumbo frames that accommodate VXLAN encapsulation overhead.
Checking Force LLDP Discovery prompts Fabric Composer to use LLDP neighbor information to discover link topology between spine-and-leaf switches and ToR VSX pairs dynamically.
Step 3 Repeat the steps above for VSF server access switch stacks. Verify that all server access switch stacks are listed with the Sub Leaf role selected and click ADD.
Note: This step is optional. It is required only when server access switches are present in the topology. Each VSF switch stack has a single entry that represents all switch members of the stack. This example implementation contains a single VSF stack.
Step 4 Repeat the previous step for border leaf switches with the Border Leaf role selected and click ADD.
Step 5 Repeat the previous step for the remaining leaf switches with the Leaf role selected and click ADD.
Note: Leaf switches typically comprise the majority of switches in a fabric. When the smaller groups of switches are assigned first, use SELECT ALL to capture all remaining leaf switches.
Step 6 Scroll through the list of switches to verify role assignments and ensure successful configuration of the fabric. After adding all switches to the fabric with the correct role, click APPLY.
Step 7 Guided Setup displays the list of switches in the Maintenance > Switches window. Switch status should sync in a few seconds. Verify that all switches in the fabric are listed as HEALTHY in green.
Configure Switch Profile
The switch profile optimizes hardware resources for a switch’s role in the network. Most switches are assigned a leaf role by default. The following procedure assigns the spine profile to the spine switches in the network.
Note: When using IPsec or NAT on a CX 10000 border leaf, the border leaf switch profile must be changed to Spine. East-west policy enforcement for hosts attached to the border leaf is not supported after this change.
Step 1 On the Maintenance > Switches page, click a checkbox to select one of the spine switches.
Step 2 Click the ACTIONS menu on the right, and select Change Profile.
Step 3 Select Spine in the New Profile field dropdown, check Reboot switch after changing profile, and click Apply.
Note: When selecting Spine in the New Profile field, the Configured Profile value changes dynamically from Leaf to Spine.
Step 4 Repeat the procedure for each spine switch.
Configure Infrastructure Split Ports
This process is necessary only when using links between fabric switches that require split port operation. The most common case is using a CX 9300 in the spine role to increase rack capacity of a fabric. In this sample deployment, CX 9300-32D spine ports are set to operate in 2 x 100 Gbps mode.
Step 1 On the Configuration menu, select Ports > Ports.
Step 2 On the Ports page, select both spine switches in the Switch field.
Note: Typing a value in the Switch field filters selectable switch names to those containing that value in their name. Following the naming convention in this guide, only the spine switches are displayed for selection by typing sp in the Switch field.
Step 3 Filter displayed ports by entering Invalid in the regex field below the Reason column heading and click the Apply Table Filters icon.
Note: Invalid speed is displayed in the Reason column when there is a mismatch between a physical port’s configured operation and an attached Active Optical Cable’s (AOC’s) physical split configuration. No error message is displayed when using a standard 400 Gbps transceiver before defining split port operation.
Step 4 Click the box at the top of the selection column to select all the displayed ports on both spine switches that match the search criteria.
Step 5 On the ACTIONS menu, select QSFP Transform > Split > 2x 100.
Step 6 When prompted to confirm the split operation, click OK.
Note: The split ports are enabled by Fabric Composer for use in LLDP neighbor discovery, and the MTU of the split ports is adjusted to 9198 to support jumbo frames for VXLAN encapsulation.
The Confirm prompt indicates that a reboot is required, but a reboot is not required to enable split ports.
Configure NTP for the Fabric
Modern networks require accurate, synchronized time. The NTP wizard is used to enter NTP server hosts and associate them with all fabric switches. The NTP servers must be reachable from the data center management LAN. The Fabric Composer CLI Command Processor shows the time synchronization status of each switch. At the completion of this procedure, the date and time are synchronized between the data center switches and the NTP servers.
Step 1 On the Guided Setup menu, select NTP CONFIGURATION.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Entries page, enter a valid hostname or IP address and optional NTP authentication information, then click ADD.
Step 4 Repeat the step above for each NTP server in the environment.
Step 5 After all NTP servers have been added, click NEXT.
Step 6 On the Application page, select the name of the fabric in the Fabric field and click NEXT.
Step 7 On the Summary page, verify that the information is entered correctly and click APPLY.
Step 8 On the Configuration > Network > NTP page, click the radio button for the NTP config applied to an individual switch, click the ACTIONS menu on the right, and click Delete.
Note: Fabric Composer dynamically creates switch level objects that reconcile configuration performed by an administrator directly on the switch. A switch level configuration object has a higher precedence than Fabric Composer objects defined at the fabric level. At this time, the default NTP config is reconciled in a switch level configuration object. In this case, it is necessary to delete switch level NTP configuration objects to apply the fabric level config. If per-switch reconciled config is not present, omit steps 8, 9, and 10.
Step 9 In the Delete confirmation window, click OK.
Step 10 Repeat steps 8 and 9 to remove reconciled NTP configuration for all switches.
Step 11 In the menu bar at the top right of the Fabric Composer display, click the CLI Commands icon and select Show Commands.
Step 12 On the CLI Command Processor page, enter the following values, then click RUN.
- Fabrics: RSVDC-FB1
- Commands: show ntp status
Note: Multiple commands are supported in the Commands field in a comma-separated list. CLI commands can be saved for future reuse by clicking the ADD button. When typing a command in the Saved Commands field, preconfigured and saved commands appear in a list. Select a command in the list to add it to the Commands field.
Step 13 Verify that the output for each switch displays an NTP server IP address with stratum level, poll interval, and time accuracy information.
Note: NTP synchronization can take several minutes to complete. If a hostname was used instead of an IP address, complete the next step to configure DNS for the fabric before NTP verification.
Configure DNS for the Fabric
Use the DNS wizard to enter DNS host details and associate them with all fabric switches. The DNS servers must be reachable from the data center management LAN.
At the completion of this procedure, the data center switches can resolve DNS hostnames to IP addresses.
Step 1 On the Guided Setup menu, select DNS CONFIGURATION.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Settings page, enter the Domain Name. Enter a valid DNS server IP address in the Name Servers field. Press the TAB or ENTER key to complete the server entry.
Step 4 Create additional entries as needed. After all required DNS servers are entered, click NEXT.
Step 5 On the Application page, select the name of the fabric in the Fabrics field and click NEXT.
Step 6 On the Summary page, verify that the information is entered correctly and click APPLY.
Configure VSX on Leaf Switches
VSX enables a pair of ToR leaf switches to appear as a single logical switch to downstream hosts using multi-chassis link aggregation. VSX improves host availability in case of switch failure or maintenance downtime. Fabric Composer automatically identifies VSX switch pairs and configures them with the values supplied in the VSX wizard. Resource Pool wizards create IP and MAC address objects. The Fabric Composer CLI Command Processor verifies VSX operational status.
The diagram below highlights leaf and border leaf VSX pairs created in this procedure.
Note: Use of a non-uplink port for keep-alive messages between VSX peers is recommended to maximize fabric capacity.
Step 1 On the Guided Setup menu, select VSX CONFIGURATION.
Step 2 On the Create Mode page, leave Automatically generate VSX Pairs selected and click NEXT.
Step 3 On the Name page, enter a Name Prefix and Description, then click NEXT.
Step 4 On the Inter-Switch Link Settings page, leave the default values and click NEXT.
Step 5 On the Keep Alive Interfaces page, select Point-to-Point as the Interface Mode. Click ADD to launch the Resource Pool wizard.
Note: The Resource Pool wizard is launched in this step to create an object representing the IPv4 address range used for underlay loopback interfaces on all switches, VSX keep-alive interfaces, and routed transit VLAN interfaces on VSX pairs. A resource pool is a reusable object that ensures consistency and reduces errors when adding switches to the fabric in the future.
Step 6 Resource Pool wizard: On the Name page, enter a Name and Description for the IPv4 address pool, then click NEXT.
Step 7 Resource Pool wizard: On the Settings page, enter an IPv4 address block in the Resource Pool field and click NEXT.
Note: This IPv4 address block is used to allocate IP addresses to loopback interfaces (/32) for all fabric switches, VSX keep-alive point-to-point interfaces (/31), and routed transit VLAN interfaces on VSX pairs (/31). Use a block large enough to support addressing these interfaces across the entire fabric.
Step 8 Resource Pool wizard: On the Summary page, verify the IP address pool information and click APPLY. The Resource Pool wizard closes and returns to the main VSX Configuration workflow.
Step 9 On the Keep Alive Interfaces page, verify that the new IPv4 Address Resource Pool is selected and click NEXT.
Step 10 On the Keep Alive Settings page, leave the default values and click NEXT.
Step 11 On the Options page, enter the value 600 for the Linkup Delay Timer field. Click ADD to launch the Resource Pool wizard.
Note: It is recommended to set a 600-second Linkup Delay Timer value on CX 10000 switches using firewall policy to ensure that policy and state have synchronized before forwarding traffic attached on a multi-chassis LAG.
Step 12 Resource Pool wizard: On the Name page, enter a Name and Description for the system MAC address pool. Click NEXT.
Step 13 Resource Pool wizard: On the Settings page, enter a MAC address range to be used for the VSX system MAC addresses, then click NEXT.
Step 14 Resource Pool wizard: On the Summary page, verify the system MAC address pool information and click APPLY. The Resource Pool wizard closes and returns to the main VSX Configuration workflow.
Step 15 On the Options page, verify that the new MAC Address Resource Pool is selected and click NEXT.
Step 16 On the Summary page, verify the complete set of VSX settings and click APPLY.
Step 17 Guided Setup displays the list of VSX pairs in the Configuration > Network > VSX window. Review the information to verify that the VSX pairs created are consistent with physical cabling.
Note: VSX Health status in Fabric Composer can update slowly. Click the Refresh button in the upper right of the Configuration > Network > VSX window to refresh the switch status manually.
Step 18 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 19 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < All leaf switches >
- Commands: show vsx status
Step 20 Verify that each switch has both Local and Peer information populated with the following values:
- ISL channel: In-Sync
- ISL mgmt channel: operational
- Config Sync Status: In-Sync
- NAE: peer_reachable
- HTTPS Server: peer_reachable
Configure Layer 3 Leaf-to-Spine Connections
Fabric Composer automatically identifies leaf-to-spine connections and configures them with the values supplied in the Leaf-Spine wizards. A resource pool is created to assign IP addresses to routed leaf and spine interfaces using /31 subnets. At the completion of this procedure, IP addresses are assigned to all interfaces required to support deployment of the OSPF fabric underlay.
Step 1 On the Guided Setup menu, select L3 LEAF-SPINE CONFIGURATION to start the Leaf-Spine workflow.
Step 2 On the Create Mode page, leave Automatically generate Leaf-Spine Pairs selected and click NEXT.
Step 3 On the Name page, enter a Name Prefix and Description, then click NEXT.
Step 4 On the Settings page, click ADD to launch the Resource Pool wizard.
Step 5 Resource Pool wizard: On the Name page, enter a Name and Description for the IPv4 address pool, then click NEXT.
Step 6 Resource Pool wizard: On the Settings page, enter an IPv4 address block in the Resource Pool field and click NEXT.
Note: Use a subnet distinct from other subnets used in the overlay networks. The assigned subnet is used to configure routed ports between fabric switches. Use a block large enough to accommodate anticipated fabric growth.
Step 7 Resource Pool wizard: On the Summary page, verify the IP address pool information and click APPLY. The Resource Pool wizard closes and returns to the main Leaf-Spine Configuration workflow.
Step 8 On the Settings page, verify that the new IPv4 Address Resource Pool is selected and click NEXT.
Step 9 On the Summary page, verify that the information is correct and click APPLY.
Step 10 Guided Setup displays the list of leaf-to-spine links in the Configuration > Network > Leaf-Spine window. Review the information to verify that the leaf-spine links created are consistent with physical cabling.
Configure Server Access Switch Links
Fabric Composer refers to server access switches as subleaf switches. Compute and storage hosts are typically attached directly to leaf switches. Server access switches are primarily used to achieve two objectives: they provide a transition strategy to connect existing server infrastructure into an EVPN-VXLAN fabric, and they provide an economical strategy to support a large number of 1 Gbps connected hosts. Server access switches extend Layer 2 services from the leaf, but do not participate directly in underlay routing or overlay virtualization mechanisms.
The following procedure establishes an MC-LAG between a VSX leaf pair and a downstream VSF server access switch stack. The LAGs defined on both sets of switches are 802.1Q trunks that allow all VLANs.
The diagram below highlights the server access MC-LAG created in this procedure.
Step 1 On the Configuration > Network > Leaf-Spine page, click SUBLEAF-LEAF.
Step 2 On the ACTIONS menu, select Add.
Step 3 When prompted to continue, click OK.
Step 4 Review the leaf and server access MC-LAG information. Verify that the values in the Leaf LAG Status and SubLeaf LAG Status columns are up.
Note: The status field values may take a few minutes to populate and may require a screen refresh.
Configure Underlay Network Routing
The HPE Aruba Networking data center spine-and-leaf design uses OSPF as the underlay routing protocol. The Fabric Composer Underlay Configuration wizard creates a transit VLAN between redundant ToRs to support routing adjacency, assigns IP addresses to loopback and transit VLAN interfaces, and creates underlay OSPF configuration. OSPF shares the loopback0 IP addresses for later use in establishing overlay routing. The Fabric Composer CLI Command Processor verifies OSPF adjacencies.
At the completion of this procedure, a functional underlay for the data center fabric is complete. The diagram below illustrates the assigned loopback IP addresses and the links where OSPF adjacencies are formed between leaf and spine switches.
Step 1 On the Guided Setup menu, select UNDERLAYS to start the Underlay Configuration workflow.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Underlay Type page, leave the default OSPF selection and click NEXT.
Step 4 On the Settings page, set the Transit VLAN to 3999. Leave other settings at their defaults and click NEXT.
Note: Enter a VLAN ID that cannot be confused easily with other VLANs within the network.
Step 5 On the Max Metric page, enter the value 600 in the On Startup field. Leave other settings at their defaults and click NEXT.
Note: It is recommended to set a 600-second OSPF On Startup max metric value for CX 10000 switches using firewall policy in a VSX pair to ensure that policy and state have synchronized before fabric traffic is forwarded to the switch VTEP. The same value is applied to all switches in this sample fabric.
Step 6 On the Summary page, verify that the information is entered correctly and click APPLY to create the OSPF configuration.
Step 7 In the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 8 On the CLI Command Processor page, enter the following values, then click RUN.
- Fabrics: RSVDC-FB1
- Commands: show ip ospf neighbors
Step 9 Verify that each spine switch shows an OSPF neighbor adjacency in the “FULL” state for all leaf switches. Verify that all leaf VSX pairs show an OSPF neighbor adjacency in the “FULL” state between themselves over the routed transit VLAN in addition to an adjacency in the “FULL” state with each spine.
Configure Overlay Network Routing
The HPE Aruba Networking data center uses iBGP as the control plane for the fabric overlay within a single fabric. BGP provides a mechanism to build VXLAN tunnels dynamically and share host reachability across the fabric using the L2VPN EVPN address family. VTEP interfaces are the VXLAN encapsulation and decapsulation points for traffic entering and exiting the overlay. VSX leaf pairs share the same anycast VTEP IP address.
Use the Fabric Composer Overlay Configuration wizard to implement iBGP peerings using a private ASN and to establish VXLAN VTEPs. VTEP IP addresses are assigned as a switch loopback using a resource pool. iBGP neighbor relationships are verified using the Fabric Composer CLI Command Processor.
The diagram below illustrates the iBGP L2VPN EVPN address family peerings established using loopback interfaces between leaf switches and the two spines operating as iBGP route reflectors.
Step 1 From the Guided Setup menu, select OVERLAYS to start the Overlay Configuration workflow.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Overlay Type page, leave iBGP selected and click NEXT.
Step 4 On the iBGP Settings page, enter the following settings, then click NEXT.
- Spine-Leaf ASN: 65001
- Route Reflector Servers: < Select two spine switches >
- Leaf Group Name: RSVDC-FB1-LF
- Spine Group Name: RSVDC-FB1-RR
Note: Use a 2-byte ASN in the private range of 64512-65534 for an easy-to-read switch configuration. A 4-byte ASN is supported.
Step 5 On the IPv4 Network Address page, click ADD to launch the Resource Pool wizard.
Step 6 Resource Pool wizard: On the Name page, enter a Name and Description, then click NEXT.
Step 7 Resource Pool wizard: On the Settings page, enter an IPv4 address block in the Resource Pool field and click NEXT.
Note: This IPv4 address block is used to configure loopback addresses on all leaf switches for VXLAN VTEPs. Each member of a VSX leaf pair uses the same IP loopback address.
Step 8 Resource Pool wizard: On the Summary page, verify the VTEP IP address pool information and click APPLY. The Resource Pool wizard closes and returns to the main Overlay Configuration workflow.
Step 9 On the IPv4 Network Address page, verify that the new IPv4 Address Resource Pool is selected and click NEXT.
Step 10 On the Overlay Configuration Settings page, leave the default values and click NEXT.
Step 11 On the Summary page, verify that the iBGP information is correct, then click APPLY.
Step 12 In the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 13 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select both route reflector spine switches >
- Commands: show bgp l2vpn evpn summary
Step 14 Verify that both route reflectors show an L2VPN EVPN neighbor relationship in the “Established” state for all leaf switches.
Configure Overlay VRFs
An EVPN-VXLAN data center uses overlay VRFs to provide the Layer 3 virtualization and macro segmentation required for flexible and secure data centers. VRFs are distributed across all leaf switches. A VRF instance on one switch is associated to the same VRF on other leaf switches using a common L3 VNI and EVPN route-target, binding them together into one logical routing domain. VRFs are commonly used to segment networks by tenants and business intent.
Use the Virtual Routing & Forwarding workflow to create overlay network VRFs and associate a VRF with an L3 VNI and EVPN route-target. The VNI and route target for each set of overlay VRFs must be unique to preserve traffic separation.
This guide uses a production VRF and development VRF as an example of route table isolation. TCP/IP hosts in one VRF are expected to be isolated from hosts in the other VRF. The diagram below illustrates the logical VRF overlay across all leaf switches.
Note: The diagram above depicts the border leaf switches at the same horizontal level as all other leaf switches. This placement of the border leaf pair is a cosmetic preference that makes it easier to depict virtualization across leaf switches. The deployed topology is consistent with previous diagrams, but without pictorial emphasis on the border leaf’s special role in handling data center north/south traffic.
Hosts attached to server access switches can be connected to subnets in either VRF by VLAN extension from the leaf switch, but the server access switches do not contain their own VRF definition.
Step 1 On the left menu, select VRF. If VRF does not appear in the left pane, select Configuration > Routing > VRF from the top menu.
Step 2 On the ACTIONS menu on the right, select Add.
Step 3 On the Name page, enter a Name and Description, then click NEXT.
Step 4 On the Scope page, uncheck Apply the VRF to the entire Fabric and all Switches contained within it. Select the VSX leaf pairs in the Switches field, then click NEXT.
Note: When a large number of leaf switches is present, click the SELECT ALL button to select all switches, then deselect spine and server access switches. Spine and server access switches do not participate in overlay virtualization and do not possess VTEPs, so overlay VRFs should not be configured on them.
Step 5 On the Routing page, enter the following values to create a Layer 3 VNI and BGP route distinguisher:
- L3 VNI: 100001
- Route Distinguisher: loopback1:1
Note: Refer to the “Overlay Connectivity and Addressing” section above for a VNI numbering reference. The Layer 3 VNI associates routes in an EVPN-VXLAN overlay with a VRF.
The integer value in the Route Distinguisher should correlate to the VNI value without the addition of its 100,000 prefix for easier troubleshooting. The integer must be unique for each VRF.
Step 6 On the Virtual Routing & Forwarding Route Targets page, assign the following settings to add an EVPN route-target to the VRF, then click ADD.
- Route Target Mode: Both
- Route Target Ext-Community: 65001:100001
- Address Family: EVPN
Note: Setting Route Target Mode to Both exports local switch VRF routes to BGP with the Route Target Ext-Community value assigned as the route target and imports BGP routes into the local VRF route table advertised by other switches with the same value.
For Route Target Ext-Community, enter the private autonomous system number used in the “Configure Overlay Network Routing” procedure and the L3 VNI, separated by a colon. The L3 VNI is used in the BGP route target for logical consistency with the VXLAN L3 VNI. The complete route target value uniquely identifies a set of VRFs.
Step 7 Verify that the Route Targets information is correct and click NEXT.
Step 8 On the Summary page, verify that the complete set of VRF information is correct and click APPLY.
Step 9 Repeat this procedure for each additional overlay VRF.
Configure Overlay VLANs and SVIs
One or more VLANs within each VRF provide host connectivity. VLAN SVIs provide IP addressing within the fabric. The Fabric Composer IP Interface workflow creates consistent VLANs across all leaf switches within an overlay VRF. The workflow assigns an SVI IP address, a virtual gateway address, and a locally administered virtual MAC address to the VLAN interface on each leaf switch. Aruba Active Gateway permits the SVI IP and virtual gateway to be used on VSX leaf pairs.
The creation of VLANs and SVIs in this step is a prerequisite to binding the VLANs across racks into logically contiguous Layer 2 domains in the next procedure. At the end of this procedure, each VLAN’s broadcast domain is scoped to each VSX pair.
CX 10000 switches positioned in a border leaf role support east-west policy for attached hosts, when the switches are assigned a leaf switch profile. When IPsec or NAT features are enabled, the CX 10000 border leaf must be configured with a spine switch profile. After assigning the CX 10000 a spine switch profile, east-west policy enforcement is no longer supported, and directly attaching hosts to the border leaf is not recommended. This limitation does not apply to other switch models in the border leaf role. In this guide, CX 10000 switches positioned at the border leaf are not configured with host VLANs to support enabling IPsec in a separate procedure.
The diagram below illustrates the creation of VLANs on ToR VSX leaf pairs, except the CX 10000 border leaf.
Step 1 Confirm that the view is set to Configuration > Routing > VRF, then click the • • • symbol next to PROD-DC-VRF and select IP Interfaces.
Note: The • • • symbol is a shortcut to most options in the ACTIONS menu. This shortcut method is available in many Fabric Composer contexts. The IP Interfaces context also can be viewed by clicking the PROD-DC-VRF radio button and selecting IP Interfaces on the ACTIONS menu.
Step 2 On the Configuration > Routing > VRF > PROD-DC-VRF page, select the right ACTIONS menu below IP INTERFACES and click Add.
Step 3 On the IP Interfaces page, assign the following values, then click NEXT.
- Type: SVI
- VLAN: 101
- Switches: < Select all leaf switches, except CX 10000 border leaf>
- IPv4 Subnetwork Address: 10.5.101.0/24
- Switch Addresses: 10.5.101.1
- Active Gateway IP Address: 10.5.101.1
- Active Gateway MAC Address: 02:00:0A:05:00:01
Note: The SELECT ALL button selects all switches assigned to the VRF where the SVI interface will be created.
The range provided for IPv4 Addresses and the Active Gateway IP Address must be from the same network range as the IPv4 Subnetwork Address. The IPv4 Addresses field value is used to assign an IP address to each SVI interface. AOS-CX 10.09 and above supports assigning the same IP address as both the SVI interface and the active gateway. This maximizes the number of IPs available to assign to attached network hosts.
Note: The Active Gateway IP address is not supported as a source IP address when using the ping command. When assigning the same IP address to both the Active Gateway and VLAN SVI, the ping command must specify a unique source interface or IP address, such as a loopback assigned to the same VRF, to verify reachability.
For example: # ping 10.5.101.11 vrf PROD-DC-VRF source loopback11
Step 4 On the Name page, enter a Name and Description, then click NEXT.
Note: Including the associated VLAN ID and overlay VRF in the Name can be helpful during management operations.
Step 5 On the Summary page, verify that the information is entered correctly and click APPLY.
Step 6 Repeat the procedure to create an additional overlay subnet in the production VRF using the following values:
Name | Description | Type | VLAN | Switches | IPv4 Subnetwork Address | IPv4 Addresses | Active Gateway IP Address | Active Gateway MAC Address |
---|---|---|---|---|---|---|---|---|
DB-V102-PROD-DC | Production database SVI/VLAN 102 DC overlay | SVI | 102 | < All non-border leaf switches > | 10.5.102.0/24 | 10.5.102.1 | 10.5.102.1 | 02:00:0A:05:00:01 |
Step 7 Repeat the procedure to create additional overlay subnets in the development VRF using the following values:
Name | Description | Type | VLAN | Switches | IPv4 Subnetwork Address | IPv4 Addresses | Active Gateway IP Address | Active Gateway MAC Address |
---|---|---|---|---|---|---|---|---|
WEB-V201-DEV-DC | Development web app SVI/VLAN 201 in DC overlay | SVI | 201 | < All non-border leaf switches > | 10.6.201.0/24 | 10.6.201.1 | 10.6.201.1 | 02:00:0A:06:00:01 |
DB-V202-DEV-DC | Development database SVI/VLAN 202 in DC overlay | SVI | 202 | < All non-border leaf switches > | 10.6.202.0/24 | 10.6.202.1 | 10.6.202.1 | 02:00:0A:06:00:01 |
Note: Host connectivity can be extended to border leaf switches, when not using IPsec or NAT services on CX 10000 switches in the border leaf role.
Configure EVPN Instances
An EVPN instance joins each previously created VLAN across leaf switches into a combined broadcast domain. This procedure defines two key attributes to logically bind each VLAN across the leaf switches. A VNI is assigned to each VLAN. MP-BGP associates host MACs to VNI values in its EVPN host advertisements to support VXLAN tunneling. An auto-assigned route target per VLAN also is defined. The VLAN route-target associates a MAC address with the appropriate VLAN at remote switches for the purpose of building bridge table MAC reachability. Route targets are included in MP-BGP EVPN host advertisements.
The Fabric Composer EVPN wizard maps VLAN IDs to L2 VNI values. A prefix value is provided for automatic generation of route targets. The EVPN wizard also creates an EVPN instance to associate route targets with VLANs. When using iBGP for the overlay control plane protocol, route targets can be assigned automatically. A resource pool is used to assign the EVPN system MAC addresses.
At the completion of this procedure, distributed L2 connectivity across leaf switches in the fabric is established, with the exception of the border leaf. Aruba Active Gateway permits the same IP address to be used on all leaf switches in the fabric for a VLAN. Overlay reachability between the border leaf and other leaf switches is routed. The diagram below illustrates the logical binding of VLANs across leaf racks into logically contiguous broadcast domains.
Step 1 On the Guided Setup menu, select EVPN CONFIGURATION to start the EVPN workflow.
Step 2 On the Introduction page, review the guidance and click NEXT.
Note: The prerequisites noted above were completed in previous steps.
Step 3 On the Switches page, uncheck Create EVPN instances across the entire Fabric and all Switches contained within it, select all leaf switches except the border leaf, and click NEXT.
Step 4 On the Name page, enter a Name Prefix and Description, then click NEXT.
Step 5 On the VNI Mapping page, enter one or more VLANs and a Base L2VNI, then click NEXT.
Note: The Base L2VNI value is added to each VLAN ID to generate a unique L2 VNI associated to each VLAN automatically.
Step 6 On the Settings page, click ADD to launch the Resource Pool wizard.
Step 7 Resource Pool wizard: On the Name page, enter a Name and Description, then click NEXT.
Step 8 Resource Pool wizard: On the Settings page, enter a MAC address range for System MAC Addresses in the Resource Pool field and click NEXT.
Step 9 Resource Pool wizard: On the Summary page, verify that the System MAC information is correct and click APPLY. The Resource Pool wizard closes and returns to the main EVPN Configuration workflow.
Step 10 On the Settings page, verify that the MAC Address Resource Pool just created is selected, set the Route Target Type to AUTO, and click NEXT.
Note: EVPN route targets can be set automatically by switches only when using an iBGP overlay.
Step 11 On the Summary page, verify that the information is correct and click APPLY.
Step 12 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 13 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all non-border leaf switches >
- Commands: show interface vxlan vtep
Step 14 Verify that the output for each switch displays a remote VTEP to each non-border leaf switch pair for each VLAN.
The Guided Setup is now complete.
Host Port Configuration
Use this section to configure the Port Groups and LACP Host LAG ports.
Configure Port Groups
The SFP28 ports on the Aruba 10000-48Y6C switches (R8P13A and R8P14A) are organized into 12 groups of four ports each. The SFP28 default port speed is 25 Gbps and must be set manually to 10 Gbps, if required. Port groups can be configured on CX 8325, 8360, 10000, and 9300S series switches.
For additional details, find the Installation and Getting Started Guide for a specific switch model on the Aruba Support Portal. Go to the section: Installing the switch > Install transceivers > Interface-Group operation.
The following procedure configures a set of ports for 10 Gbps operation.
Step 1 On the Configuration menu, select Ports > Ports.
Step 2 Select all switches that require port speed changes in the Switch field.
Step 3 Enter mismatch in the Reason column’s regex field and click the Apply table filters icon.
Note: This step is optional. Cabling must be complete before this step so the switch can generate a speed mismatch status used for filtering.
Step 4 Select an individual port in the port-group to be changed. On the right ACTIONS menu, select Edit.
Note: The Edit option on the ACTIONS menu is available only when a single switch port is selected.
Step 5 On the Ports page, select the Speed tab. Select the appropriate value in the Speed dropdown, then click APPLY.
Note: Observe the full list of ports affected by the speed change. Ensure that this is the correct speed setting for all listed ports.
Step 6 Repeat the procedure for additional leaf switch ports requiring speed changes. Be sure to make changes to corresponding ports on VSX-paired switches supporting MC-LAGs.
Note: The displayed port list of mismatched transceiver speeds is updated dynamically. It may be necessary to toggle the select all/deselect all checkbox in the upper left column to deselect the previously selected port after the update hides it from view.
Multiple LACP MC-LAG Configuration
LACP link aggregation groups provide fault tolerance and efficient bandwidth utilization to physical hosts in the data center. The Link Aggregation Group wizard configures multi-chassis LAGs and LACP on fabric switches. Use the Fabric Composer CLI Command Processor to verify LAG interface state for LAG connected hosts.
Multiple MC-LAG creation can be applied to one or more switches using the Fabric Composer wizard. It enables quick setup of MC-LAGs across all leaf switches, when leaf switch models and cabling are consistent across the fabric. For example, a single pass of the Link Aggregation Groups wizard can configure all host leaf ports for MC-LAG given the following conditions:
- All leaf switches contain the same number of host facing ports.
- No more than one port per switch requires assignment to an individual MC-LAG.
- VLANs assigned to all MC-LAGs are consistent.
Configuration also is required on the connected hosts. Configuration varies by server platform and operating system and is not presented in this guide. Refer to the appropriate technical documentation for attached devices and operating systems.
Step 1 On the Configuration menu, select Ports > Link Aggregation Groups.
Step 2 On the right ACTIONS menu, select Add.
Step 3 On the Create Mode page, select Create multiple MLAGs for selected VSX Pairs and click NEXT.
Step 4 On the Settings page, enter a Name Prefix, LAG Number Base, then click NEXT.
Note: If individual host-based name assignments are required per MC-LAG in place of a more general prefix-based naming convention, choose Create a single LAG/MLAG in the previous step to provide a unique name per LAG.
LAG index values are numbered sequentially beginning with the LAG Number Base. The wizard will not complete if a LAG value is already in use on any switch target. The server access/subleaf MC-LAG configured from the RSVDC-FB1-LF3-1 and RSVDC-FB1-LF3-2 VSX pair to the downstream RSVDC-FB1-LF3-SA1 switch uses LAG index 1 on all three switches. LAG Number Base 11 is chosen to avoid a conflict with the existing LAG.
Step 5 On the Ports page, select one or more VSX-pairs of switches in the VSX Pairs field, enter the ports that are physically cabled for MC-LAG operation in Ports, then click VALIDATE.
Step 6 If Fabric Composer can validate that MC-LAG port configuration is consistent with LLDP neighbor data, a success message is presented.
Note: Validation of hypervisor host connections requires previous assignment of physical host ports to LACP LAGs. Validation is intended to verify that the requested configuration is consistent with cabling to attached hosts. It is not required to continue the process of MC-LAG creation, if attached hosts are not configured or present.
Fabric Composer configures an MC-LAG using port 1/1/1 on the RSVDC-FB1-LF2-1 and RSVDC-FB1-LF2-2 switches, and a second MC-LAG using port 1/1/1 on the RSVDC-FB1-LF3-1 and RSVDC-FB1-LF3-2 switches, when using the values above. Specifying multiple ports will create additional MC-LAGs with corresponding port numbers between the VSX switch pairs.
Step 7 On the Ports page, click NEXT.
Step 8 On the LACP Settings page, check Enable LACP Fallback, leave other LACP settings at their default values, and click NEXT.
Step 9 On the VLANs page, modify the untagged Native VLAN number if necessary, enter the tagged VLAN IDs in the VLANs field, then click NEXT.
Step 10 On the Summary page, confirm that the information is entered correctly and click APPLY to create the LAGs.
Step 11 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 12 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all switches with newly configured LAGs >
- Commands: show lacp interfaces
Step 13 When a host is connected to the LAG, verify that each port assigned to one of the host LAGs created in this procedure has a State of “ALFNCD” for its local interfaces and “PLFNCD” for its partner interfaces. The Forwarding State should be “Up” for local interfaces.
Note: A combination of VSX peer LAG interfaces and VSX multi-chassis LAG interfaces to hosts may be included in the command output. As shown above, the multi-chassis interfaces are denoted with (mc) after the LAG name. The Actor is the switch where the command was run. The Partner is the host at the other end of the LAG. The State column shows the expected values for a switch set to Active LACP mode and a host set to Passive LACP mode with a healthy LAG running.
Single LACP MC-LAG Configuration
Individual LAGs are assigned for the following conditions:
- A unique LAG name is required in Fabric Composer.
- Assigned VLANs are unique to the LAG.
- More than one port per switch is assigned to an MC-LAG to increase capacity.
Step 1 On the Configuration menu, select Ports > Link Aggregation Groups.
Step 2 On the right ACTIONS menu, select Add.
Step 3 On the Create Mode page, leave Create a single LAG/MLAG selected and click NEXT.
Step 4 On the Settings page, enter a Name, Description, and LAG Number. Click NEXT.
Note: Consider using a Name that identifies the host and where it is connected.
Step 5 On the Ports page, select a VSX-pair or a VSF stack of switches from the LAG Switch Member dropdown.
Step 6 Click the Switch View mode icon to identify ports more easily.
Step 7 Click the port icons to add them as members of the link aggregation group and click NEXT.
Note: A checkmark appears on the newly selected ports. The diamond icon appears on ports not currently available for a new LAG group assignment.
Select ports on both VSX switches or multiple VSF switches to create a functional multi-chassis LAG.
Step 8 On the LACP Settings page, leave the settings at their defaults, and click NEXT.
Note: Enable LACP Fallback is auto-selected for VSX pairs. LACP fallback is not a valid option on VSF stacks.
CX switches default to “Active” mode to ensure that LACP can be established regardless of the LACP configuration of the host platform. Using the default settings is recommended. Click the box next to one or both switch names to modify default values.
Step 9 On the VLANs page, modify the untagged Native VLAN number if necessary, enter tagged VLAN IDs in the VLANs field, then click NEXT.
Step 10 On the Summary page, confirm that the information is entered correctly and click APPLY to create the LAGs.
Step 11 Repeat the procedure for each individual LAG connection in the fabric.
Step 12 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 13 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select the switch LAG configured for the new LAG >
- Commands: show lacp interfaces
Step 14 When a host is connected to the LAG, verify the port status for each member of the LAG. On a VSX pair, the local interface State should be ALFNCD and the partner interface should be PLFNCD. Verify that all interfaces in a LAG defined on a VSF stack have a State of ALFNCD when connected to a host. The Forwarding State should be “Up” for local interfaces.
Note: When assigning VLANs to a LAG on a server access (sub-leaf) switch, Fabric Composer automatically creates VLAN configuration for VLANs not present on the switch.
Configure the Border Leaf
The border leaf is the ToR switch pair that connects the data center fabric to other networks such as a campus, WAN, or DMZ.
When connecting overlay networks to external networks, segmentation is preserved by establishing a distinct Layer 3 connection for each data center overlay VRF. A firewall is often placed between the fabric hosts and an external network for policy enforcement, but this is not a requirement. A firewall can also be configured to permit traffic between VRFs based on policy. When connecting multiple overlay VRFs that require preserving route table separation upstream, the firewall must support VRFs or device virtualization.
The following diagram illustrates the topology and BGP peerings for connecting the production overlay VRF to an active/passive pair of upstream firewalls.
An MC-LAG is used between the border leaf switch pair and each upstream firewall. This strategy provides network path redundancy to each firewall. When using an active/passive firewall, traffic is forwarded only to the active upstream firewall. Detailed firewall configuration is outside the scope of this document.
Each MC-LAG between the border leaf switches and the firewalls is an 802.1Q trunk, where one VLAN per VRF is tagged on the LAG. Tagging the same VLANs on both LAGs supports the active/passive operation of the firewall. Using VLAN tags when only one overlay VRF is present supports adding overlay VRFs in the future without additional cabling or changing port roles from access to trunk.
MP-BGP EVPN advertisements share host routes inside the data center (/32 IPv4 and /128 IPv6). EVPN host routes are commonly filtered from advertisements sent outside the data center. In the following example, only network prefixes containing overlay hosts are shared; these prefixes can be redistributed connected routes or prefixes learned within the fabric from Type 5 EVPN route advertisements.
In this sample implementation, each overlay VRF on the border leaf switches learns a default route and a campus summary route from the firewalls. The border leaf shares learned external routes with other leaf switches by advertising a type-5 EVPN route.
The following diagram illustrates additional elements required when adding external connectivity to the development overlay VRF. The same set of physical links between the border leaf and the firewalls is used to connect both production and development overlay VRFs. A development VRF VLAN is tagged on the previously configured MC-LAG trunks between the border leaf switches and the firewalls to support an additional set of BGP peerings with the firewall.
Note: When using an Aruba CX 10000 in the border leaf role, physical ports connecting to external networks must be configured with the access persona.
Configure External Routing VLAN SVIs
In the configuration steps below, VLAN SVIs are created for use in eBGP peerings between the border leaf switches and the upstream active/passive firewall pair.
Step 1 On the Configuration menu, select Routing > VRF.
Step 2 On the Configuration > Routing > VRF page, click the • • • symbol to the left of PROD-DC-VRF and select IP Interfaces.
Step 3 On the right ACTIONS menu of the IP Interfaces tab, select Add to launch the IP Interfaces wizard.
Step 4 On the IP Interfaces page, enter the following values and click NEXT.
- Type: SVI
- VLAN: 2021
- Switches: < Select the border leaf VSX pair object >
- IPv4 Subnetwork Address: 10.255.2.0/29
- IPv4 Addresses: 10.255.2.1-10.255.2.2
- Active Gateway IP Address: < blank >
- Active Gateway MAC Address: < blank >
- Enable VSX Shutdown on Split: < unchecked >
- Enable VSX Active Forwarding: < unchecked >
- Enable Local Proxy ARP: < unchecked >
Step 5 On the Name page, enter a Name and Description, then click NEXT.
Step 6 On the Summary page, review the interface settings and click APPLY.
Step 7 Repeat this procedure to create an additional VLAN and SVI interface for DEV-DC-VRF. In step 2, select DEV-DC-VRF, then create an SVI with the following values:
Name | Description | Type | VLAN | Switches | IPv4 Subnetwork Address | IPv4 Addresses |
---|---|---|---|---|---|---|
DEV-DC-BORDER-LF to FW | Border leaf DEV-DC-VRF uplink to external FW cluster | SVI | 2022 | < Border leaf VSX pair object > | 10.255.2.8/29 | 10.255.2.9-10.255.2.10 |
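For reference, the resulting external routing SVI on one border leaf switch is expected to look similar to the sketch below, using the PROD-DC-VRF values above; its VSX peer receives 10.255.2.2/29, and the DEV-DC-VRF SVI on VLAN 2022 follows the same pattern with the 10.255.2.8/29 addresses. The description mirrors the DEV-DC-VRF example in the table, and the exact rendering generated by Fabric Composer may differ.
vlan 2021
interface vlan2021
    vrf attach PROD-DC-VRF
    description Border leaf PROD-DC-VRF uplink to external FW cluster
    ip address 10.255.2.1/29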
Create Border Leaf to Firewall MC-LAGs
A VSX-based MC-LAG is created to each individual firewall in the active/passive cluster from the border leaf switches.
Step 1 On the Configuration menu, select Ports > Link Aggregation Groups.
Step 2 On the right ACTIONS menu, select Add.
Step 3 On the Create Mode page, leave Create a single LAG/MLAG selected and click NEXT.
Step 4 On the Settings page, enter the following values and click NEXT.
- Name: RSVDC-BL to EXT-FW1
- Description: MC-LAG from border leaf switches to FW1 in firewall cluster
- LAG Number: 251
Step 5 On the Ports page, select the border leaf VSX object from the LAG Switch Member dropdown.
Step 6 Click the Switch View mode icon to identify ports more easily. Click the port icons connected to the first firewall to add them as members of the link aggregation group and click NEXT.
Note: A checkmark appears on the newly selected ports. The diamond icon appears on ports that are not currently available for a new LAG group assignment.
Step 7 On the LACP Settings page, leave all settings at their defaults and click NEXT.
Step 8 On the VLANs page, enter the VLAN ID for each VRF previously created to connect to external networks in the VLANs field, and click NEXT.
Step 9 On the Summary page, review the link aggregation settings and click APPLY.
Step 10 Repeat the procedure to create an additional MC-LAG on ports connecting the VSX border leaf switch pair to the second firewall using the following settings:
Name | Description | LAG Number | Ports | LACP Settings | VLANs |
---|---|---|---|---|---|
RSVDC-BL to EXT-FW2 | MC-LAG from border leaf switches to FW2 in firewall cluster | 252 | 12 (on each switch member) | < Leave all defaults > | 2021-2022 |
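The trunk toward the second firewall is expected to resemble the following sketch, assuming the table's port value of 12 refers to interface 1/1/12 on each border leaf switch and the native VLAN is left at its default. The MC-LAG toward FW1 is identical except for the LAG number and member ports.
interface lag 252 multi-chassis
    no shutdown
    description MC-LAG from border leaf switches to FW2 in firewall cluster
    vlan trunk native 1
    vlan trunk allowed 2021,2022
    lacp mode active
interface 1/1/12
    no shutdown
    lag 252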
Configure Host Filter Prefix List
Host routes and point-to-point link prefixes should not be advertised to external networks. The following procedure creates a prefix list used in route policy to filter /31 and /32 IPv4 prefix advertisements.
Step 1 On the Configuration menu, select Routing > Route Policy.
Step 2 Click the PREFIX LISTS tab. On the right ACTIONS menu, select Add.
Step 3 On the Settings page, enter a Name and Description, then click NEXT.
Note: The Name value defines the name of the prefix list in Fabric Composer and on the switch.
Step 4 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.
Step 5 On the Entries page, enter the following non-default values and click ADD.
- Action: Permit
- Prefix: 0.0.0.0/0
- GE: 31
Step 6 Click NEXT.
Step 7 On the Summary page, review the prefix list settings and click APPLY.
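The resulting prefix list contains a single entry that matches any IPv4 prefix of length /31 or /32. On the border leaf switches it is expected to look similar to the following line, assuming the list is named PL-HOST-P2P as referenced later in this guide and default sequence numbering is used.
ip prefix-list PL-HOST-P2P seq 10 permit 0.0.0.0/0 ge 31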
Configure Campus AS Path List
An internal BGP peering is established between the border leaf pair to create a routed backup path to the upstream firewall. IP prefixes learned in the fabric should not be advertised in the overlay BGP peering between the border leaf pair to avoid a routing loop. The following procedure creates an AS path list that matches only prefix advertisements sourced from the upstream firewall and campus routers.
Step 1 Click the AS PATH LISTS tab. On the right ACTIONS menu, select Add.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.
Step 4 On the Entries page, enter the following values and click ADD.
- Sequence: 10
- Description: permit campus originated advertisements
- Action: Permit
- Regex: ^65501 65000$
Note: The Regex field value matches BGP advertisements originated by the campus AS (65000) that are received by the RSVDC fabric border leaf via the firewall AS (65501). Routes advertised by the campus that are received from other external AS numbers are not accepted.
Step 5 On the Entries page, enter the following values and click ADD.
- Sequence: 20
- Description: permit firewall originated advertisements
- Action: Permit
- Regex: ^65501$
Note: The Regex field value matches BGP advertisements originated by the firewall AS. In this example topology, the default route is originated by the firewall.
Step 6 Click NEXT.
Step 7 On the Summary page, verify the AS path list settings and click APPLY.
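On the border leaf switches, the AS path list is expected to render similarly to the sketch below, assuming the list is named ALLOWED-EXT-AS as referenced later in this guide. Depending on the AOS-CX version, a regular expression containing a space may need to be quoted when entered at the CLI.
ip aspath-list ALLOWED-EXT-AS seq 10 permit ^65501 65000$
ip aspath-list ALLOWED-EXT-AS seq 20 permit ^65501$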
Configure Firewall Route Map
The following procedure creates a route map that will be applied outbound to external BGP peers. The route map policy filters host and point-to-point prefixes using the previously created host filter prefix list.
Step 1 On the Configuration > Routing > Route Policy page, click the ROUTE MAPS tab. On the right ACTIONS menu, select Add.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.
Step 4 On the Entries page, click the right ACTIONS menu and select Add to launch the Route Map Entries wizard.
Step 5 Route Map Entries wizard: On the Settings page, enter the following non-default values and click NEXT.
- Description: filter host and P2P prefixes
- Action: Deny
Step 6 Route Map Entries wizard: On the Match Attributes page, enter the following values and click NEXT.
- Attributes: Match IPv4 Prefix List
- Match IPv4 Prefix List: PL-HOST-P2P
Step 7 Route Map Entries wizard: On the Set Attributes page, click NEXT.
Step 8 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.
Step 9 Create a second route map sequence. On the Entries page, click the right ACTIONS menu, and select Add.
Step 10 Route Map Entries wizard: On the Settings page, set the Action field to Permit and click NEXT.
Step 11 Route Map Entries wizard: On the Match Attributes page, click NEXT.
Step 12 Route Map Entries wizard: On the Set Attributes page, click NEXT.
Step 13 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.
Step 14 On the Entries page, click NEXT.
Step 15 On the Summary page, review the route map settings and click APPLY.
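The resulting outbound route map denies prefixes matched by the host filter prefix list and permits everything else. A sketch of the expected switch configuration follows, assuming the route map is named RM-EXT-OUT as referenced later in this guide and default sequence numbers of 10 and 20.
route-map RM-EXT-OUT deny seq 10
    match ip address prefix-list PL-HOST-P2P
route-map RM-EXT-OUT permit seq 20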
Configure Internal Border Leaf Route Map
The following procedure creates a route map that is applied to the BGP peering between the border leaf switches. The route map permits advertising only prefixes originated by the campus AS number or the upstream firewall AS number.
Step 1 On the right ACTIONS menu of the ROUTE MAPS tab, select Add.
Step 2 On the Name page, enter a Name and Description, then click NEXT.
Step 3 On the Scope page, select the two border leaf switches in the Switches field, then click NEXT.
Step 4 On the Entries page, click the right ACTIONS menu, and select Add to launch the Route Map Entries wizard.
Step 5 Route Map Entries wizard: On the Settings page, enter the following non-default values and click NEXT.
- Description: allow campus and firewall ASNs
- Action: Permit
Step 6 Route Map Entries wizard: On the Match Attributes page, enter the following values and click NEXT.
- Attributes: Match AS Path List
- Match AS Path List: ALLOWED-EXT-AS
Step 7 Route Map Entries wizard: On the Set Attributes page, click NEXT.
Step 8 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.
Step 9 Create a second route map sequence. On the Entries page, click the right ACTIONS menu, and select Add.
Step 10 Route Map Entries wizard: On the Settings page, set the Action field to Deny and click NEXT.
Step 11 Route Map Entries wizard: On the Match Attributes page, click NEXT.
Step 12 Route Map Entries wizard: On the Set Attributes page, click NEXT.
Step 13 Route Map Entries wizard: On the Summary page, review the route map entry settings and click APPLY.
Step 14 On the Entries page, click NEXT.
Step 15 On the Summary page, review the route map settings and click APPLY.
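The internal route map permits only prefixes whose AS path matches the campus AS path list and denies all other advertisements. It is expected to render similarly to the sketch below, assuming the route map is named RM-PERMIT-CAMPUS as referenced later in this guide, default sequence numbers, and that the running AOS-CX release uses the match aspath-list form.
route-map RM-PERMIT-CAMPUS permit seq 10
    match aspath-list ALLOWED-EXT-AS
route-map RM-PERMIT-CAMPUS deny seq 20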
Configure Border Leaf BGP Peerings
The following procedure configures the eBGP peerings between the border leaf switches and the upstream firewalls with a route map applied to filter host routes and point-to-point link prefixes. A single BGP peering is defined to the upstream firewalls, which is established only with the active firewall in the active/passive pair.
Step 1 On the left navigation menu, click BGP. Click the PROD-DC-VRF radio button. On the right ACTIONS menu, select Edit.
Step 2 On the Settings page, check Enable BGP on PROD-DC-VRF, check Redistribute Loopback, and click APPLY.
Note: The loopback interfaces that are redistributed are created in the Assign Unique Overlay Loopbacks procedure later in this guide.
Step 3 On the Configuration > Routing > BGP page, click the • • • symbol to the left of PROD-DC-VRF and select Switches.
Step 4 On the SWITCHES tab, click • • • next to RSVDC-FB1-LF1-1 and select Neighbors.
Step 5 On the right ACTIONS menu of the NEIGHBORS tab, select Add.
Step 6 On the Settings page, enter the following non-default values and click NEXT.
- Neighbor AS Number: 65501
- IP Address: 10.255.2.3
- IPv4 Route Map In: RM-PERMIT-CAMPUS
- IPv4 Route Map Out: RM-EXT-OUT
- Enable Bidirectional Forwarding Detection (BFD) Fall Over: < checked >
Step 7 On the Name page, enter a Name and Description, then click NEXT.
Step 8 On the Summary page, review the BGP neighbor settings and click APPLY.
Step 9 Repeat steps 5 to 8 to add an iBGP peering between the border leaf switches in the production VRF with the following non-default settings:
Name | Description | Neighbor ASN | IP Addresses | IPv4 Route Map Out |
---|---|---|---|---|
PROD-DC-VRF LF1-1 to LF1-2 | PROD VRF peering between border leaf switches | 65001 | 10.255.2.1 | RM-PERMIT-CAMPUS |
Step 10 In the top left current context path, click PROD-DC-VRF.
Step 11 On the SWITCHES tab, click • • • next to RSVDC-FB1-LF1-2 and select Neighbors.
Step 12 Repeat steps 6 to 9 to create additional BGP peerings on RSVDC-FB1-LF1-2 with the following settings:
Name | Description | Neighbor ASN | IP Addresses | IPv4 Route Map In | IPv4 Route Map Out | BFD |
---|---|---|---|---|---|---|
PROD-DC-VRF LF1-2 to FW | BGP peering from LF1-2 PROD VRF to FW cluster | 65501 | 10.255.2.3 | RM-PERMIT-CAMPUS | RM-EXT-OUT | < checked > |
PROD-DC-VRF LF1-2 to LF1-1 | PROD VRF peering between border leaf switches | 65001 | 10.255.2.2 | | RM-PERMIT-CAMPUS | < unchecked > |
Step 13 Repeat this procedure for each overlay VRF network that requires external connectivity. Reachability between overlay VRFs is governed by policy at the upstream firewall. Strict overlay route table separation can be maintained by connecting to discrete VRFs or virtual firewall contexts on the upstream firewall.
Name | Description | Neighbor ASN | IP Addresses | IPv4 Route Map In | IPv4 Route Map Out | BFD |
---|---|---|---|---|---|---|
DEV-DC-VRF LF1-1 to FW | BGP peering from LF1-1 DEV VRF to FW cluster | 65501 | 10.255.2.11 | RM-PERMIT-CAMPUS | RM-EXT-OUT | < checked > |
DEV-DC-VRF LF1-1 to LF1-2 | DEV VRF peering between border leaf switches | 65001 | 10.255.2.9 | | RM-PERMIT-CAMPUS | < unchecked > |
DEV-DC-VRF LF1-2 to FW | BGP peering from LF1-2 DEV VRF to FW cluster | 65501 | 10.255.2.11 | RM-PERMIT-CAMPUS | RM-EXT-OUT | < checked > |
DEV-DC-VRF LF1-2 to LF1-1 | DEV VRF peering between border leaf switches | 65001 | 10.255.2.10 | | RM-PERMIT-CAMPUS | < unchecked > |
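Taken together, the neighbor values configured in this procedure translate, on RSVDC-FB1-LF1-1 in the PROD VRF, into configuration similar to the following abridged sketch (router-id, EVPN address family, and loopback redistribution omitted). The overlay AS number 65001 is taken from the iBGP neighbor tables above, and the exact command rendering, including the BFD fall-over keyword, may differ by AOS-CX version.
router bgp 65001
    vrf PROD-DC-VRF
        neighbor 10.255.2.1 remote-as 65001
        neighbor 10.255.2.3 remote-as 65501
        neighbor 10.255.2.3 fall-over bfd
        address-family ipv4 unicast
            neighbor 10.255.2.1 activate
            neighbor 10.255.2.1 route-map RM-PERMIT-CAMPUS out
            neighbor 10.255.2.3 activate
            neighbor 10.255.2.3 route-map RM-PERMIT-CAMPUS in
            neighbor 10.255.2.3 route-map RM-EXT-OUT out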
Verify Border Leaf Routing
Step 1 In the top-left current context path, click BGP.
Note: To display information and the current state of an individual BGP peering, click the expansion icon (>) at the beginning of the row for each BGP peer definition. After a BGP peering is defined, the Fabric Composer web page may require a refresh to display the expansion icon.
Step 2 Click • • • next to PROD-DC-VRF and select Neighbors Summary.
Step 3 In the NEIGHBORS SUMMARY window, verify that each peering displays Established in the State column.
Step 4 Repeat steps 1 to 3 for each overlay VRF.
Step 5 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 6 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all leaf switches >
- Commands: show ip route bgp vrf PROD-DC-VRF
Step 7 Verify that there is a default route and campus summary route learned on all leaf switches in the production VRF. The border leaf switch routes use the upstream firewall IP as a next hop. The remaining leaf switches use a next hop of the border leaf Anycast VTEP, learned via BGP EVPN type-5 advertisements.
Note: The prefixes advertised into an overlay fabric vary based on the environment. A default route is often the only learned prefix required. The campus summary route is used in the Validation Solution Guide’s multifabric configuration.
Step 8 Repeat steps 6 to 7 for each overlay VRF.
Configure Overlay Test Loopbacks
A unique loopback IP per switch in each overlay VRF is required to verify connectivity to directly attached hosts and reachability through the overlay.
Sourcing a ping from a switch to one of its directly attached hosts is a common method to verify reachability at the point of attachment. When a VSX leaf pair provides redundant links to attached hosts, the return data path from the host may not use the same link as the originating traffic. By default, a switch sources the ping from the SVI IP address of the VLAN connecting the downstream host. The same VLAN SVI IP address is configured on both VSX member switches to conserve IP address space. If the response to a ping originated by one member of the VSX pair is received by the other member, the response is dropped, because the switch receiving the response has no state for the ping conversation.
Sourcing a ping from a unique IP address that is present only on one of the VSX switch pair members resolves the issue, when combined with a static route to share reachability of the unique IP between the VSX pair members. On any leaf switch, one loopback IP address in a VRF can be used to test reachability to all locally attached hosts in all subnets associated locally with that VRF. If a response is received by the non-originating member of the VSX pair, the destination IP address is not local to the VSX member, so the route table is consulted and the static route is used to forward the ping response to the originating member of the pair. A unique IP loopback per VRF per switch is required for full testing capability.
A similar problem exists when sourcing a ping from a switch to verify overlay reachability. Up to this point, the only IP interfaces configured in the overlay are VLAN SVIs. For each VLAN, the same SVI IP address is assigned to all leaf switches. Sourcing a ping with a VLAN SVI to an IP host connected to another leaf in the fabric results in the ping response being dropped at the remote host’s point of attachment, which also owns the destination IP address of the ping response. Even when a ping response can be received by the originating VTEP, it is not guaranteed to arrive on the originating switch. A VSX pair of switches represents a single logical VTEP. If the response to a ping originated by one member of the VSX pair is received by the other member, the response is dropped for the same reason noted above.
Sourcing a ping from an IP address that is unique to an individual switch resolves the reachability problem in the overlay as well.
The following procedure configures a transit VLAN in each overlay VRF between leaf switch VSX pairs, a unique loopback on each switch in both the PROD and DEV VRFs, and a static route using the transit VLAN for loopback reachability between the pair. The loopback address can be used to test overlay reachability to directly attached hosts and remote IP destinations.
The following procedure configures the required elements for one VSX leaf pair.
Configure Overlay Transit VLAN
Each redundant pair of ToR switches requires a transit VLAN in the overlay VRF to enable routed reachability to its VSX partner’s IP loopback address in the same VRF.
Step 1 On the Configuration menu, select Routing > VRF.
Step 2 Click • • • next to PROD-DC-VRF and select IP Interfaces.
Step 3 On the right ACTIONS menu, select Add.
Step 4 On the Interface Type page, assign the following non-default values and click NEXT.
- VLAN: 3001
- Switches: < Select a VSX leaf switch pair >
- IPv4 Subnetwork Address: < Assign a /31 block of addresses >
- IPv4 Addresses: < Assign the range of two IP addresses that comprise the /31 subnet >
Step 5 On the Name page, assign a Name and Description, then click NEXT.
Step 6 On the Summary page, review the transit VLAN settings and click APPLY.
Assign Unique Overlay Loopbacks
Step 1 On the right ACTIONS menu of the Configuration > Routing > VRF > PROD-DC-VRF page, select Add.
Step 2 On the Interface Type page, enter the following non-default values:
- Type: Loopback
- Loopback Name: < An unused loopback interface value >
- Switch: < Select an individual switch in the VSX pair where a transit VLAN was created >
Step 3 On the IPv4 Addresses page, enter a /32 host address and click NEXT.
Step 4 On the Name page, enter a Name and Description, then click NEXT.
Step 5 On the Summary page, review the loopback interface settings and click APPLY.
Step 6 Repeat steps 1 to 5 to assign a loopback interface to the VSX partner switch with the following non-default values:
Name | Description | Type | Loopback Name | Switch | Primary IPv4 Network Address |
---|---|---|---|---|---|
LF1-2 PROD LOOPBACK | Unique overlay loopback IP address in the PROD VRF for LF1-2 | Loopback | Loopback11 | RSVDC-FB1-LF1-2 | 10.250.4.0/32 |
Configure Static Route for VSX Routed Loopback Reachability
Step 1 On the Configuration > Routing > VRF > PROD-DC-VRF page, click IP STATIC ROUTES. On the right ACTIONS menu, select Add.
Step 2 On the Route page, enter the following values and click NEXT.
- Destination Prefix: < Host IP prefix of VSX peer’s overlay loopback interface >
- Next Hop Address: < IP address of VSX peer’s transit VLAN interface >
- Switch: < Individual VSX member target for static route >
Step 3 On the Name page, enter a Name and Description, then click NEXT.
Step 4 On the Summary page, review the static route settings and click APPLY.
Step 5 Repeat steps 1 to 4 to create a static route to the VSX peer’s loopback interface using the following non-default values:
Name | Description | Destination | Next Hop Address | Switches |
---|---|---|---|---|
to-LF1-1-PROD-loopback | Static route to LF1-1 PROD VRF loopback IP | 10.250.4.1/32 | 10.255.4.1 | RSVDC-FB1-LF1-2 |
Repeat the Configure Overlay Test Loopbacks procedure for each VRF in the overlay on each VSX leaf pair in the network. For standalone leaf switches, perform only the Assign Unique Overlay Loopbacks steps in this section for each VRF in the overlay.
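On RSVDC-FB1-LF1-2, the combination of transit VLAN, unique loopback, and static route in the PROD VRF is expected to resemble the following sketch. The transit subnet 10.255.4.0/31, with LF1-1 holding 10.255.4.1, is inferred from the static route table above; adjust the values to the /31 block actually assigned.
vlan 3001
interface vlan3001
    vrf attach PROD-DC-VRF
    description Overlay transit VLAN for PROD VRF
    ip address 10.255.4.0/31
interface loopback 11
    vrf attach PROD-DC-VRF
    ip address 10.250.4.0/32
ip route 10.250.4.1/32 10.255.4.1 vrf PROD-DC-VRF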
Configure Loopback Summary Route
A summary static route for the collective set of loopback interfaces is configured on the border leaf to advertise loopback reachability to external networks. This static route points to null. A summary route is created for each VRF in the overlay.
Step 1 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Configuration Editor.
Step 2 On the Configuration Editor page, select the two border leaf switches in the Switch field.
Step 3 Enter the summary static route configuration for the loopback addresses in the same section where the previously created static routes appear on both switch tabs, and click VALIDATE ALL.
ip route 10.250.4.0/24 nullroute vrf PROD-DC-VRF
ip route 10.250.5.0/24 nullroute vrf DEV-DC-VRF
Note: Fabric Composer 7.0.X and previous versions require creating static null routes using Fabric Composer’s Configuration Editor or the switch CLI.
If the configuration is valid, a Success message is presented.
Step 4 Click APPLY ALL.
Note: Create Checkpoint before Apply is selected by default. Fabric Composer creates a checkpoint on the switch to restore the switch configuration back to its state prior to the change.
If the checkpoint was successfully created, the Success message indicates the time of its creation.
If the configuration is applied successfully, a Success message is presented.
Redistribute Static Routes on the Border Leaf
The loopback summary route created in the previous procedure is redistributed into BGP for advertisement to campus. Redistribution is applied to each VRF.
Step 1 On the Configuration menu, select Routing > BGP.
Step 2 On the Configuration > Routing > BGP page, click • • • next to PROD-DC-VRF and select Switches.
Step 3 Click the radio button for the RSVDC-FB1-LF1-1 border leaf switch. On the right ACTIONS menu, select Edit.
Step 4 Click the REDISTRIBUTE ROUTES tab, click the Redistribute Static Routes option, and click APPLY.
If the BGP configuration is updated, a Success message is presented.
Step 5 Repeat steps 3 and 4 to redistribute the static route on the second border leaf switch.
Step 6 Click BGP in the path at the top of the main configuration window and repeat steps 2 to 5 for each overlay VRF.
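The redistribution enabled above corresponds to switch configuration similar to the following sketch for the PROD VRF; the DEV VRF receives the same statement under its own VRF context. This is an illustrative rendering that reuses the overlay AS 65001 from the BGP peering tables earlier in this section.
router bgp 65001
    vrf PROD-DC-VRF
        address-family ipv4 unicast
            redistribute static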
Verify Overlay Test Loopback
On each leaf switch, ping a directly connected host in the overlay using the loopback interface in the host’s VRF as a ping source to verify the overlay test loopback is working.
ping 10.5.101.121 vrf PROD-DC-VRF source loopback11
Configure Overlay IP Multicast
Protocol Independent Multicast–Sparse Mode (PIM-SM) is configured to build multicast route state between VTEPs within the data center and to external networks. PIM-SM is required for both sources and listeners in the data center. Internet Group Management Protocol (IGMP) manages known multicast listener state on data center leaf switches. IGMP snooping is configured to optimize Layer 2 forwarding of multicast traffic to only ports with interested listeners on leaf and server access switches.
In this guide, the PIM-SM rendezvous point (RP) is located outside the data center fabric in the campus network. The RP is learned with PIM-SM’s Bootstrap Router (BSR) mechanism.
Configuration of multicast can be done at the command line of the switch or using Fabric Composer’s Configuration Editor. Enter the configuration in code blocks in the procedures below to enable multicast in the EVPN-VXLAN overlay. The code blocks may include existing configuration to set context and existing descriptions to assist the reader.
Configure Overlay PIM Multicast
The configuration examples in each step should be applied to all leaf switches, except where noted that configuration is only applied to border leaf switches.
Step 1 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Configuration Editor.
Step 2 In the Switch field, select the border leaf and all other leaf switches.
Note: The spine switches do not require multicast configuration for the overlay. Layer 2 server access switches are configured in a separate procedure in this guide. If there are only a few spine and server access switches, it may be faster to click SELECT ALL to select all switches in the fabric, then deselect the spine and server access switches.
Step 3 Enable PIM routing in each overlay VRF.
router pim vrf PROD-DC-VRF
enable
register-source loopback11
router pim vrf DEV-DC-VRF
enable
register-source loopback12
Note: In the configuration above, the register-source command instructs PIM to send register messages to the RP using the unique overlay loopback IP configured in the previous Assign Unique Overlay Loopbacks procedure. This ensures that register-stop messages originated by the RP reach the individual leaf switch that originated the PIM register messages.
When entering configuration in Fabric Composer’s Configuration Editor, new configuration must be entered below any references to other configuration elements. In this example, VRF names and loopback interfaces are referenced in the PIM router configuration, which requires placing the new PIM router config after those elements are defined in the existing configuration.
Step 4 Enable PIM for each unique overlay loopback interface by adding ip pim-sparse enable in each configuration stanza. Do not modify existing configuration.
interface loopback 11
ip pim-sparse enable
interface loopback 12
ip pim-sparse enable
Step 5 Enable PIM on overlay VLAN SVIs. This includes adding PIM to all data center host VLANs and overlay transit VLANs. On the border leaf, this includes campus routed interfaces.
On the border leaf, add the following configuration lines:
interface vlan2021
description Border leaf PROD-DC-VRF uplink to external FW cluster
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan2022
description Border leaf DEV-DC-VRF uplink to external FW cluster
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan3001
description Overlay transit VLAN for PROD VRF
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan3002
description Overlay transit VLAN for DEV VRF
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
On all other leaf switches, add the following configuration lines:
interface vlan101
description Production web app SVI/VLAN 101 in DC overlay
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan102
description Production database SVI/VLAN 102 DC overlay
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan201
description Development web app SVI/VLAN 201 in DC overlay
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan202
description Development database SVI/VLAN 202 in DC overlay
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan3001
description Overlay transit VLAN for PROD VRF
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
interface vlan3002
description Overlay transit VLAN for DEV VRF
ip pim-sparse enable
ip pim-sparse vsx-virtual-neighbor
Note: The border leaf switches in our example are dedicated to the border leaf function and do not include overlay host VLANs. When overlay VLANs are present on border leaf switches, configure PIM on those VLAN interfaces.
Step 6 Click VALIDATE ALL.
Step 7 Click APPLY ALL.
Verify Overlay PIM
Step 1 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 2 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all leaf switches >
- Commands: show ip pim neighbor brief all-vrfs
Step 3 Review the output to verify that the following PIM neighbor adjacencies are established:
- Each VRF logical L3 VNI interface has a PIM neighbor relationship with each other VTEP in the EVPN-VXLAN fabric.
- Each VRF overlay transit VLAN has a PIM neighbor adjacency.
- Each host facing VLAN has a PIM neighbor adjacency on all VSX redundant leaf switches.
- Two PIM adjacencies are formed on the border leaf VLAN that supports external routed connectivity. One adjacency is with the peer VSX switch and the second is with the external firewall.
Note: Overlay PIM adjacencies formed between logical L3 VNI interfaces take longer to establish than PIM adjacencies between switches. It may take a minute for the logical adjacencies to form.
The RP in the fabric is learned from PIM BSR messages received by the border leaf switches from the external network.
Step 4 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all leaf switches >
- Commands: show ip pim rp-set all-vrfs
Step 5 Review the output to verify that the campus RP is learned in all overlay VRFs on all leaf switches.
Redistribute Local SVI into EVPN
Redistribute local SVI interfaces into EVPN instances on leaf switches to distribute system-MAC values. This ensures proper distribution of IGMP querier information throughout the fabric.
Step 1 On the Configuration menu, select Routing > EVPN.
Step 2 On the right ACTIONS menu, select Settings.
Step 3 On the EVPN Settings Page, assign the following values and click APPLY.
- Enable ARP Suppression: < checked >
- Redistribute Local MAC Address: < unchecked >
- Redistribute Local SVI: < checked >
- Apply the EVPN Settings across the entire Fabric and all Switches contained within it: < unchecked >
- Switches: < Select leaf switches containing EVPN mapped overlay VLANs >
Note: ARP suppression was enabled when originally creating the EVPN instance. Checking Enable ARP Suppression when changing EVPN settings is required, because existing settings will be overwritten using the values specified on the EVPN Settings page after clicking APPLY.
When routed multicast is not performed in the overlay, select Redistribute Local MAC Address to enable system-MAC propagation.
Configure Overlay IGMP and IGMP Snooping
IGMP is configured on all leaf switches, and IGMP snooping is configured on both leaf switches and server access switches.
Step 1 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Configuration Editor.
Step 2 In the Switch field, select all leaf switches with data center overlay VLANs and all server access switches.
Note: The border leaf switches in our example are dedicated to the border leaf function and do not include overlay host VLANs. Configure IGMP on border leaf switches, when overlay host VLANs are present.
Step 3 Enable IGMP on all overlay VLAN interfaces on each leaf switch.
interface vlan 101
ip igmp enable
interface vlan 102
ip igmp enable
interface vlan 201
ip igmp enable
interface vlan 202
ip igmp enable
Step 4 Enable IGMP snooping on all overlay VLANs for both leaf switches and server access switches.
vlan 101
ip igmp snooping enable
vlan 102
ip igmp snooping enable
vlan 201
ip igmp snooping enable
vlan 202
ip igmp snooping enable
Step 5 Click VALIDATE ALL.
Step 6 Click APPLY ALL.
Verify Overlay IGMP and IGMP Snooping
On all leaf and server access switches, start a multicast listener for a multicast group with an active source, then use the following procedure to verify IGMP and IGMP snooping optimizations.
Step 1 On the menu bar at the top right of the Fabric Composer window, click the CLI Commands icon and select Show Commands.
Step 2 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all leaf switches with overlay VLANs >
- Commands: show ip igmp groups all-vrfs
Step 3 Verify that the multicast group is learned on each switch on the VLAN corresponding with the attached listening hosts.
Step 4 On the CLI Command Processor page, enter the following values, then click RUN.
- Switches: < Select all leaf switches with overlay VLANs and server access switches >
- Commands: show ip igmp snooping groups
Step 5 Verify that IGMP snooping has state for all VLANs with a listener.
VMWare vSphere Integration
VMware vSphere integration enables VMware host and virtual machine visualization within Fabric Composer. This procedure also enables automated switch port provisioning of VLANs based on how the vSwitches and VMs are set up.
Step 1 On the Configuration menu, select Integrations > VMware vSphere.
Step 2 On the right ACTIONS menu, click Add to start the VMware vSphere wizard.
Step 3 On the Host page, assign the following settings:
- Name: Example-vSphere1
- Description: Example vSphere Integration
- Host: rsvdc-vcenter.example.local
- Username: administrator@example.local
- Password: < password >
- Validate SSL/TLS certificates for Aruba Fabric Composer: < unchecked >
- Enable this configuration: < checked >
Note: Host is the resolvable hostname or IP address of the vCenter server.
Username is the name of an administrator account on the vCenter server.
Password is the password for the administrator account on the vCenter server.
Step 4 Click VALIDATE to verify that the provided credentials are correct. A green success message appears at the bottom right. Click NEXT.
Step 5 On the Aruba Fabric page, choose one of the two options below and enter a VLAN Range. Check Automated PVLAN provisioning for ESX hosts directly connected to the fabric and enter a VLAN range. Check Automated Endpoint Group Provisioning, then click NEXT.
- If the hosts are directly connected from the NIC to the switch, select Automated VLAN provisioning for ESX hosts directly connected to the fabric.
- If host infrastructure is HPE Synergy or another chassis with an integrated switch solution, select Automated VLAN provisioning for ESX hosts connected through intermediate switches.
Note: Automated PVLAN provisioning for ESX hosts directly connected to the fabric is a prerequisite for microsegmentation automations built into Fabric Composer.
Automated Endpoint Group Provisioning enables assigning VMs dynamically to firewall policy using VM tags. The IP addresses used in the policy are modified dynamically in the future, if a VM IP changes or the VMs associated with the tag change.
For additional details on all options, refer to the HPE Aruba Networking Fabric Composer User Guide.
Step 6 On the vSphere page, click the checkbox for Discovery protocols and click NEXT.
Caution: If Discovery protocols is not enabled, the VMware integration cannot display virtual switches correctly.
Step 7 On the Summary page, confirm that the information is entered correctly and click APPLY.
Step 8 Go to Visualization > Hosts.
Step 9 Select the checkbox next to the Name of an ESXi VM host to add it to the visualization window.
Step 10 Verify the connectivity displayed from the hypervisor layer to the leaf switches.