Design Principles
Before going through the configuration steps, it's important to understand the design principles upon which this integration is based.
Service Orchestration
Cloud Connect Service
HPE Aruba Networking Central is a multi-tenant cloud platform serving thousands of customers. As such, it's composed of many microservices handling all sorts of tasks.
Cloud Connect is a horizontally scalable service that takes care of establishing communications with third-party (cloud) solutions. It does so by using partner APIs to define sites/locations, find the nearest point of presence, and automatically establish IPsec tunnels. It can also provide advanced capabilities such as defining third-party options or establishing routing neighborships.
In the case of the integration with HPE Aruba Networking SSE, Cloud Connect orchestrates the connectivity between edge devices (Microbranch APs and SD-Branch Gateways) and SSE nodes distributed around the world. It does so by using APIs to define tunnels, locations, and sublocations in the SSE dashboard, and then leveraging the Overlay Tunnel Orchestrator (another service in Central) to build tunnels to the closest SSE nodes.
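A minimal sketch of this orchestration flow is shown below. Every function name here (register_site, nearest_pops, orchestrate_tunnel) is a hypothetical stand-in; the actual partner APIs and orchestrator interfaces are internal to Central and the SSE platform.

```python
# Illustrative sketch of the Cloud Connect orchestration flow.
# All API names are hypothetical; the real interfaces are internal
# to HPE Aruba Networking Central and the SSE platform.

def onboard_edge_device(device, sse_api, tunnel_orchestrator):
    # 1. Define the site/location (and sublocation) in the SSE
    #    dashboard through the partner API.
    location = sse_api.register_site(
        name=device.site_name, sublocation=device.name)

    # 2. Ask the partner API for the SSE points of presence closest
    #    to the device's public egress IP.
    pops = sse_api.nearest_pops(public_ip=device.public_ip, count=2)

    # 3. Hand the device and PoP details to the Overlay Tunnel
    #    Orchestrator, which pushes tunnel configuration down to the
    #    Microbranch AP or SD-Branch Gateway.
    for pop in pops:
        tunnel_orchestrator.orchestrate_tunnel(
            device=device, peer=pop, location=location)
```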
IPsec tunnel establishment
As described, Cloud Connect builds IPsec tunnels between the SD-Branch gateways and the nearest of HPE Aruba Networking's Points of Presence (PoPs). This is done to preserve data privacy, leverage IKEv2 for authentication, and allow more flexibility (traversing NAT/PAT boundaries, sourcing traffic from dynamically assigned IP addresses, etc.). The Phase 1 and Phase 2 parameters used in this integration are listed below:
| Parameter | Phase 1 | Phase 2 |
| --- | --- | --- |
| Encryption | AES-256 | AES-256 |
| Integrity | SHA2-256 | SHA2-256 |
| Authentication | FQDN / PSK / IP | N/A |
| Key Exchange Method | Diffie-Hellman | Diffie-Hellman |
| Diffie-Hellman Group | 14 | N/A |
| NAT Traversal | Enabled | N/A |
| Dead Peer Detection (DPD) | Enabled | N/A |
| Perfect Forward Secrecy (PFS) | N/A | DH Group 14 |
| VPN Type | Policy-based VPN | Policy-based VPN |
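As a reading aid, the suites in the table map onto the compact "encryption-integrity-dhgroup" proposal notation used by strongSwan and similar IPsec stacks. The snippet below is purely illustrative; the gateway configuration itself is orchestrated automatically.

```python
# The negotiated suites from the table above, expressed in the
# compact proposal notation used by strongSwan-style IPsec stacks.
# DH group 14 corresponds to the 2048-bit MODP group (RFC 3526).
IKE_PROPOSAL = "aes256-sha256-modp2048"  # Phase 1 (IKEv2 SA)
ESP_PROPOSAL = "aes256-sha256-modp2048"  # Phase 2 (Child SA); the
                                         # trailing DH group enables
                                         # PFS with group 14

print(f"ike={IKE_PROPOSAL} esp={ESP_PROPOSAL}")
```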
Service Chaining
Policy-Based Routing
Once tunnels are established, the next step is to make sure that relevant traffic is sent through them. Aruba Gateways use policy-based routing (PBR) to determine which traffic flows are sent through the Secure Web Gateway.
The following parameters can be taken into consideration when determining the traffic types to be sent to the SWG (a conceptual sketch follows the list):
· VLAN/User Role: PBR policies can be applied to user roles or VLANs.
· Stateful firewall attributes: protocol, source/destination address, and source/destination port.
· FQDN: ArubaOS supports creating "netservices" based on FQDN, which can be used to build PBR policies.
· Application/Application Group: thanks to caching capabilities in the DPI engine, Gateways support the first-packet classification needed to route traffic based on applications or application groups.
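The sketch below shows how such a PBR lookup works conceptually: the first matching rule wins, and its next-hop points the flow at the SSE tunnels, a DC VPN, or the local Internet breakout. The rule set, field names, and helper are illustrative examples, not the ArubaOS implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flow:
    role: str              # user role assigned by AAA
    vlan: int
    proto: str
    dst_port: int
    fqdn: Optional[str]    # matched via FQDN-based netservices
    app: Optional[str]     # from DPI first-packet classification

# Illustrative policy mirroring the figure below: cameras are
# full-tunneled to the DC, guests break out locally, and
# HTTP/HTTPS goes to the SSE tunnels.
PBR_RULES = [
    {"match": lambda f: f.role == "camera", "next_hop": "dc-vpn"},
    {"match": lambda f: f.role == "guest", "next_hop": "local-internet"},
    {"match": lambda f: f.proto == "tcp" and f.dst_port in (80, 443),
     "next_hop": "sse-tunnels"},
]

def pbr_lookup(flow: Flow) -> str:
    # First matching rule wins; unmatched flows follow the
    # regular routing table.
    for rule in PBR_RULES:
        if rule["match"](flow):
            return rule["next_hop"]
    return "global-routing-table"

print(pbr_lookup(Flow("employee", 10, "tcp", 443, None, "web")))
# -> sse-tunnels
```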
The following figure illustrates how Gateways selectively redirect traffic to the SSE. In this example, cameras are full-tunneled to the DC, guest traffic is sent directly to the Internet, and HTTP/HTTPS traffic from employees and IoT devices is sent to the Internet through HPE Aruba Networking's Security Service Edge.
Uplink Load-Balancing and Dynamic Path Steering on Branch Gateways
Aruba Branch Gateways (BGWs) support uplink load-balancing: the Branch Gateway simply sets up a tunnel from every WAN interface to each SSE node. To ensure traffic symmetry, all traffic that enters the Secure Web Gateway through a given tunnel is guaranteed to return (egress) through that same tunnel.
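In other words, the tunnel set is the cross-product of WAN uplinks and SSE nodes, with return traffic pinned to the ingress tunnel. A toy illustration follows; the uplink and PoP names are made up.

```python
from itertools import product

uplinks = ["INET", "MPLS", "LTE"]           # example WAN interfaces
sse_nodes = ["pop-eu-west", "pop-us-east"]  # example nearest PoPs

# One tunnel per (uplink, SSE node) pair.
tunnels = [f"{up}->{node}" for up, node in product(uplinks, sse_nodes)]

# Symmetry: the ingress tunnel is recorded per flow, and return
# traffic egresses through that same tunnel.
flow_table = {}

def ingress(flow_id: str, tunnel: str) -> None:
    flow_table[flow_id] = tunnel

def egress(flow_id: str) -> str:
    return flow_table[flow_id]   # same tunnel back, by design

ingress("flow-1", tunnels[0])
assert egress("flow-1") == tunnels[0]
```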
Moreover, the Aruba Branch Gateway can select the WAN circuit used by each traffic flow based on rich policies such as those built for PBR. The routing engine (global routing table or PBR) provides a set of next-hops, and the DPS engine selects the optimal path. On top of that, Branch Gateways can monitor the different WAN circuits to steer traffic to the optimal path based on the SLAs set for each application. To do so, they send synthetic probes to the tunnel monitoring IP addresses provided by Aruba to measure loss, latency, and jitter over the tunnels.
An example workflow would look like this:
Step 1 ClearPass (or another AAA server) assigns the role “PoS” to the device.
Step 2 The firewall classifies the session as “Payment”.
Step 3 The routing for a PoS device using the "Payment" application states that the next hop is a certain Axis SSE node, and that the available paths are, for example, INET and LTE.
Step 4 Because the traffic is classified as “Payment”, it’s handled by the DPS policy “Payment”. This policy has INET as the preferred path, as well as an SLA that has to be met.
Step 5 If the measured values for INET meet the SLA for the "Payment" policy, the session goes through the tunnel established over the INET uplink. If at any point the INET measurements fall out of SLA, the Gateway steers the session to another active tunnel that meets the SLA. If no circuit meets the SLA, the system chooses the one that deviates the least from the configured SLA.
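The selection logic in Steps 4-5 can be sketched as follows. The SLA thresholds and probe measurements are made-up numbers and the helpers are illustrative; the real DPS engine and its probing mechanics are internal to the gateway.

```python
# SLA-based path selection as described in Steps 4-5: prefer the
# policy's preferred path while it meets the SLA, otherwise any
# compliant path, otherwise the least-deviating path.
SLA = {"loss_pct": 1.0, "latency_ms": 100.0, "jitter_ms": 30.0}

measurements = {   # from synthetic probes to the tunnel monitoring IPs
    "INET": {"loss_pct": 2.5, "latency_ms": 80.0, "jitter_ms": 10.0},
    "LTE":  {"loss_pct": 0.5, "latency_ms": 120.0, "jitter_ms": 25.0},
}

def meets_sla(m: dict) -> bool:
    return all(m[k] <= SLA[k] for k in SLA)

def deviation(m: dict) -> float:
    # Sum of relative overshoots; 0.0 means fully compliant.
    return sum(max(0.0, (m[k] - SLA[k]) / SLA[k]) for k in SLA)

def select_path(preferred: str) -> str:
    if meets_sla(measurements[preferred]):
        return preferred
    compliant = [p for p, m in measurements.items() if meets_sla(m)]
    if compliant:
        return compliant[0]
    # No path meets the SLA: pick the least-deviating one.
    return min(measurements, key=lambda p: deviation(measurements[p]))

# Here INET violates the loss SLA and LTE violates latency, so the
# least-deviating path (LTE) is selected.
print(select_path("INET"))   # -> LTE
```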