As the most critical point of connectivity in a campus LAN, the network core is designed for simplicity and reliability. Relative to the rest of the network, the core provides high-speed, high-bandwidth Layer 3 connectivity between the various aggregation points across the campus.
The network core also provides services aggregation functions when needed. Deciding where to locate network services, such as gateway devices, depends on the number of access aggregation switches and where user applications are hosted. Refer to the ESP Campus Design Validated Solution Guide for further discussion.
The following procedures describe the creation of a core switch configuration in CLI format. The switch configuration can be created offline in a text editor and pasted into MultiEdit, or typed directly into the MultiEdit window of a UI group in Central. Switches in the group receive the configuration when they synchronize with Central.
The figure below shows the standalone core switches in the Aruba ESP Campus.
The base configuration of the switch was previously described in the Switch Group Configuration section of this guide. The following procedure completes the switch configuration using the Aruba Central MultiEdit tool, a CLI-based configuration editor built into Central.
Step 1 Go to Central and log in using administrator credentials.
Step 2 On the Aruba Central Account Home page, launch the Network Operations app.
Step 3 In the filter dropdown, select an aggregation switch Group name. On the left menu, select Devices.
Step 4 In the upper right of the Switches page, select Config.
Step 5 In the upper left of the Switches page, move the slider right to enable MultiEdit.
Step 6 Select the devices for editing. In the lower right window, click EDIT CONFIG.
In the following procedure, Open Shortest Path First (OSPF) routing is configured and neighbor relationships are established between aggregation and core switches by configuring point-to-point IP links using /30 subnets. Then, Protocol Independent Multicast-Sparse Mode (PIM-SM) routing is enabled on the same links to ensure that multicast streams coming from the core can flow to the access VLANs. Loopback interfaces are created for the routers.
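The /30 point-to-point addressing can be illustrated with a short script using Python's standard ipaddress module. The 172.18.103.0/30 subnet here matches the core-to-aggregation interface example later in this section; this is a sanity-check sketch, not part of the switch configuration:

```python
import ipaddress

# Each core-to-aggregation link uses a /30, which yields exactly two
# usable host addresses: one for each end of the point-to-point link.
link = ipaddress.ip_network("172.18.103.0/30")
hosts = list(link.hosts())

print(len(hosts))          # 2
print(hosts[0], hosts[1])  # 172.18.103.1 172.18.103.2
```

With one address per router, a /30 leaves no spare addresses to invite accidental OSPF adjacencies on the link.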
The figure below can be used as a reference point for the implemented configuration.
Note: After pasting a configuration in the MultiEdit window, right-click any device-specific values. A Modify Parameters window appears on the right to allow input of individual device values when entering configuration for multiple devices.
Step 1 Configure the global OSPF routing instance with area 0 and enable passive-interface default to avoid unwanted OSPF adjacencies. Use a pre-allocated loopback IP address as the router-id. When a chassis has redundant management modules, enable graceful-restart.
router ospf 1
    area 0
    passive-interface default
    router-id 10.0.0.1
    redistribute bgp
    graceful-restart restart-interval 30
Step 2 Configure multicast routing globally.
router pim
    enable
Step 3 Configure OSPF on the loopback interface. Create the loopback 0 interface and configure the IP address using the router ID from the earlier step. Enable OSPF with area 0.
interface loopback 0
    ip address 10.0.0.1/32
    ip pim-sparse enable
    ip ospf 1 area 0
Step 4 Create a new loopback interface with the Anycast IP address. Enable PIM-SM and OSPF.
interface loopback 1
    ip address 10.0.0.100/32
    ip pim-sparse enable
    ip ospf 1 area 0
In the following procedure, PIM-SM is associated with the loopback 1 interface. The core is then configured as a rendezvous point (RP) candidate and a bootstrap router (BSR) candidate using the loopback 1 IP address as the source interface. Next, MSDP and PIM-SM are enabled on the loopback 0 interface.
Step 1 Configure the RP and BSR candidate source IP interface using loopback 1. Set the RP-candidate group prefix and the BSR-candidate priority.
router pim
    enable
    rp-candidate source-ip-interface loopback1
    rp-candidate group-prefix 224.0.0.0/4
    bsr-candidate source-ip-interface loopback1
    bsr-candidate priority 1
Note: The RP candidate group prefix should be adjusted based on the IP design of the local network.
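As an aside (not switch configuration), the scoping rule in the note can be checked with Python's ipaddress module: any RP candidate group prefix must fall inside the IPv4 multicast range 224.0.0.0/4. The example prefixes below are illustrative only:

```python
import ipaddress

multicast = ipaddress.ip_network("224.0.0.0/4")

# Illustrative prefixes: the full multicast range, the organization-local
# scope (RFC 2365), and a unicast range that would be invalid for PIM.
for prefix in ("224.0.0.0/4", "239.0.0.0/8", "10.0.0.0/8"):
    net = ipaddress.ip_network(prefix)
    print(prefix, net.subnet_of(multicast))  # True, True, False
```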
Step 2 Configure MSDP globally. The MSDP peer is the IP address of the loopback 0 interface on the adjacent core switch. The local loopback 0 interface is the connect-source.
Example: Core 1 Switch
router msdp
    enable
    ip msdp peer 10.0.0.2
        connect-source loopback0
        enable
Example: Core 2 Switch
router msdp
    enable
    ip msdp peer 10.0.0.1
        connect-source loopback0
        enable
At the bottom right of the MultiEdit window, click Save.
Step 3 In a Remote Console window, type the command show ip msdp summary, then press ENTER. The output shown below indicates that MSDP is running from Core 1 to Core 2.
Next, each physical interface connected to an aggregation switch is configured for OSPF and PIM-SM routing.
Step 1 Configure OSPF and PIM-SM on the physical interfaces. Configure a large IP MTU, turn off OSPF passive mode, set the OSPF network to point-to-point, and enable OSPF using the router process and area.
interface 1/1/1
    description CORE_TO_AGG1
    no shutdown
    ip mtu 9198
    ip address 172.18.103.2/30
    no ip ospf passive
    ip ospf network point-to-point
    ip ospf 1 area 0
    ip pim-sparse enable
Step 2 Repeat the previous step for each interface between the core and aggregation switches.
Example: Core 1 Switch
| Core 1 IP Address | Subnet | Peer Device |
|---|---|---|
Example: Core 2 Switch
| Core 2 IP Address | Subnet | Peer Device |
|---|---|---|
Many campuses have a locally attached data center. In this arrangement, routing must be established between the two networks so that clients in the campus can access applications in the data center. In the OWL Corp. campus, BGP is used to peer with the data center border to learn the routes needed by clients.
Step 1 Create VLANs and SVIs for peering between the campus core and data center border. Each VLAN SVI becomes the BGP neighbor and participates in OSPF for the campus.
vlan 2011
    name DC1_FB1_PROD_LF1-1
vlan 2013
    name DC1_FB1_PROD_LF2-1
...
interface vlan 2011
    description DC1_FB1_PROD_LF1-1
    ip mtu 9198
    ip address 172.18.100.63/31
    ip ospf 1 area 0.0.0.0
    ip ospf passive
interface vlan 2013
    description DC1_FB1_PROD_LF2-1
    ip mtu 9198
    ip address 172.18.100.67/31
    ip ospf 1 area 0.0.0.0
    ip ospf passive
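The /31 addressing on these SVIs follows RFC 3021: a /31 point-to-point link has no separate network or broadcast address, so both addresses in the subnet are usable. A quick illustration with Python's ipaddress module, using the VLAN 2011 peering subnet from the configuration above:

```python
import ipaddress

# RFC 3021: a /31 yields exactly two usable addresses, with no
# network/broadcast overhead -- ideal for routed point-to-point peerings.
link = ipaddress.ip_network("172.18.100.62/31")
print([str(h) for h in link.hosts()])  # ['172.18.100.62', '172.18.100.63']
```

The campus core holds the .63 address, leaving .62 for the data center border leaf, which is why .62 appears as the BGP neighbor in the peering configuration.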
Step 2 Configure the physical interfaces connected to the data center border to trunk the VLANs created above.
interface 1/3/5
    description RSVDC-FB1-LF1-1
    no shutdown
    mtu 9198
    no routing
    vlan trunk native 1
    vlan trunk allowed 2011
interface 1/3/6
    description RSVDC-FB1-LF1-2
    no shutdown
    mtu 9198
    no routing
    vlan trunk native 1
    vlan trunk allowed 2013
Step 3 Configure the BGP router to peer with the routers running on the data center border switches.
router bgp 65000
    bgp router-id 10.0.0.1
    neighbor 172.18.100.62 remote-as 65001
    neighbor 172.18.100.62 fall-over bfd
    neighbor 172.18.100.66 remote-as 65001
    neighbor 172.18.100.66 fall-over bfd
    address-family ipv4 unicast
        neighbor 172.18.100.62 activate
        neighbor 172.18.100.62 default-originate
        neighbor 172.18.100.66 activate
        neighbor 172.18.100.66 default-originate
    exit-address-family
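Both autonomous system numbers in this example (65000 for the campus core, 65001 for the data center) come from the 16-bit private-use ASN range 64512-65534 defined in RFC 6996, so the eBGP peering stays internal to the organization. A trivial check:

```python
# RFC 6996: 64512-65534 is reserved for private-use 16-bit ASNs.
PRIVATE_16BIT_ASNS = range(64512, 65535)

for asn in (65000, 65001):
    print(f"AS{asn} private: {asn in PRIVATE_16BIT_ASNS}")  # both True
```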
Step 4 At the bottom right of the MultiEdit window, click Save.
Central provides a remote console capability that allows for CLI access on any managed switch. Use this to run CLI show commands at validation steps throughout this guide.
Step 1 On the left menu, select Tools.
Step 2 On the Console tab, assign the following settings, then select Create New Session.
Device Type: Switch
Switch: Device name
Step 3 In the Remote Console window, type the command show bgp ipv4 unicast summary, then press ENTER. The output shown below indicates healthy BGP sessions to the data center border switches.
Step 4 In the Remote Console window, type the command show ip route bgp, then press ENTER. The output shown below lists the routes learned from the data center border switches.
In the OWL Corp. campus, Internet service is provided through a firewall running OSPF. The core switches peer with the firewall using OSPF to learn the default route.
Step 1 Configure the interface connected to the firewall on each core switch.
interface 1/3/11
    description RSVCP-INET
    no shutdown
    mtu 9198
    routing
    ip mtu 9000
    ip address 192.168.8.9/31
    ip ospf 1 area 0.0.0.0
    no ip ospf passive
    ip ospf network point-to-point
Note: Devices in the group automatically synchronize the new configuration. Synchronization status is updated on the Configuration Status page. Process step execution can be observed by clicking Audit Trail on the left menu. Verification of OSPF routing is performed during aggregation switch deployment.