Centralized Layer 2 (CL2) is an extension of the previously introduced Remote Access Point (RAP) concept. CL2 forwarding provides flexible options:
- All user traffic can be tunneled entirely to the data center.
- Only selected user traffic can be tunneled to the data center, while the remainder is forwarded locally.
CL2 is supported for both wireless and wired clients. In CL2 mode, the Microbranch AP acts neither as a DHCP server nor as a gateway for clients; the DHCP server and default gateway reside in the data center, so client DHCP requests are tunneled to the data center. CL2 thereby extends the corporate VLAN, or broadcast domain, to remote branches.
Common usage for CL2 includes, but is not limited to:
- Remote deployments that must perform security policy checks at the data center
- Remote deployments that require VLAN extension and DHCP scopes from the data center to the branches
By default, the AP follows its own routing table to forward traffic, so user traffic is sent via the AP's default gateway, out the AP's WAN uplink toward the ISP network.
In addition, the Overlay Route Orchestrator (ORO), which dynamically advertises data center routes to APs, plays no role in CL2. Therefore, when using CL2, a Policy-Based Routing (PBR) policy must be defined to redirect or forward user traffic to the data center. The PBR policy action “forward to cluster” is designed specifically to enable CL2 mode to redirect traffic to VPNC clusters.
After the user is authenticated, the VLAN configured for CL2 is assigned to the client. Two options are available for handling the user traffic flow, that is, the AP's decision to forward either all user traffic or only a selected subset of it to the data center:
- Split-tunnel: The AP tunnels only the user traffic destined for resources at the data center, while all other traffic can be NATed locally to the AP WAN uplink (Internet or cellular).
- Full-tunnel: The AP tunnels all the user traffic to the data center.
Split tunneling optimizes traffic flow by directing only corporate traffic back to the data center through the secure IPsec tunnel, while Internet application traffic can be bridged locally to the AP WAN uplink using source NAT with the AP uplink IP. This ensures that non-corporate Internet traffic does not incur the overhead of a round trip to the data center VPNCs, which reduces traffic on the WAN link and minimizes latency for voice/video applications such as Zoom and Teams.
By default, all user traffic is NATed locally to the AP WAN uplink, which does not allow access to corporate resources. To allow access to internal resources with CL2, enable split tunneling by configuring a Policy-Based Routing (PBR) policy with two or more rules. Traffic matching a PBR rule with the action “forward to cluster” is securely tunneled to the UDG (User Designated Gateway). Traffic that does not match any PBR rule is source-NATed with the AP uplink's IP and sent out the uplink.
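The split-tunnel decision described above can be sketched in pseudocode-style Python. This is a minimal illustration of first-match PBR evaluation, not the actual AP implementation; the rule model, names, and prefixes are assumptions for the example.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Hypothetical model of a PBR rule: match on destination prefix, with an
# action of "forward-to-cluster" (tunnel to the UDG) or anything else.
@dataclass
class PbrRule:
    dst_prefix: str
    action: str

def forwarding_decision(dst_ip: str, rules: list[PbrRule]) -> str:
    """Evaluate rules in order; first match wins. Traffic matching a
    "forward-to-cluster" rule is tunneled to the UDG; unmatched traffic
    is source-NATed to the AP uplink IP and bridged locally."""
    for rule in rules:
        if ip_address(dst_ip) in ip_network(rule.dst_prefix):
            if rule.action == "forward-to-cluster":
                return "tunnel-to-UDG"
            return "local-NAT"
    return "local-NAT"

# Split-tunnel example: only a corporate prefix is forwarded to the cluster.
split_rules = [PbrRule("10.0.0.0/8", "forward-to-cluster")]
print(forwarding_decision("10.1.2.3", split_rules))      # tunnel-to-UDG
print(forwarding_decision("142.250.68.46", split_rules))  # local-NAT
```

In full-tunnel mode the same evaluation applies; the policy simply contains a catch-all rule so that every destination matches and is tunneled.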
In full-tunnel mode, the Microbranch AP forwards all user traffic securely via the IPsec tunnel to the VPNC clusters at the data center instead of using its own routing table to make routing decisions. Full tunneling may be required to perform additional security checks at the data center and/or to provide centralized access for all user traffic. Typical usage includes networks for banking, insurance, and similar businesses that require scrutinizing user traffic at the data center for added security.
To configure full-tunnel in CL2 Microbranch deployments, a Policy-Based Routing (PBR) policy is created first with a rule stating that any user traffic to any destination must be forwarded to the cluster through the secure IPsec tunnel. Traffic matching any PBR rule with the action “forward to cluster” is securely tunneled to the UDG (User Designated Gateway).
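A full-tunnel policy reduces to a single any-destination rule, as the following self-contained Python sketch shows. The rule format and names are illustrative assumptions, not Aruba configuration syntax.

```python
from ipaddress import ip_address, ip_network

# Hypothetical sketch: in full-tunnel mode the PBR policy is one catch-all
# rule (any destination -> forward to cluster), so all traffic is tunneled.
full_tunnel_rules = [("0.0.0.0/0", "forward-to-cluster")]

def action_for(dst_ip: str, rules) -> str:
    for prefix, action in rules:
        if ip_address(dst_ip) in ip_network(prefix):
            return action
    return "local-NAT"  # default behavior when nothing matches

corporate = action_for("10.1.2.3", full_tunnel_rules)
internet = action_for("142.250.68.46", full_tunnel_rules)
# Both resolve to "forward-to-cluster": every destination matches 0.0.0.0/0.
```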
Note: By default, all user traffic is sent to the AP’s WAN uplink, so data center resources cannot be accessed. PBR rules must be configured to send authorized user traffic to the data center to access internal resources.
Unlike DL3 overlay deployments, where the routing table in the AP (populated by ORO) determines the VPNC on which client traffic terminates, in CL2 the AP receives a bucket map from the data center that maps clients to a VPNC, also known as the UDG (User Designated Gateway).
Any time a client sends traffic to the data center, the AP checks its bucket map, determines the client’s UDG, and forwards the traffic through the pre-established IPsec tunnel to the UDG/VPNC assigned to the client. This helps with load balancing in addition to assigning clients to a specific UDG/VPNC in the data center cluster.
The screenshot below displays the bucket map that the AP receives from the data center. The client (in the Station list) connected to the AP is assigned to the UDG/VPNC with index 1 and IP 172.30.28.33. Traffic from the client destined for the data center is sent via the secure IPsec tunnel to that UDG/VPNC.
This setup can also be observed on the Client Details page in the Aruba Central user interface. The UDG where the client traffic is tunneled and the UDG IP are displayed in the screenshot below.