This section covers the major aspects of hub design: physical versus virtual hubs, cloud and MSP integration, gateway provisioning, and routing design for multiple active data centers.
When designing the hub site, the first major decision is determining whether the hub is physical or virtual. The organization must consider the following factors:
- Physical gateways - On-premises gateways are generally required for organizations with an on-premises data center site.
- Virtual gateways - Virtual environments can be used by organizations that use a cloud provider for Infrastructure as a Service (IaaS) or other private cloud workloads.
Cloud provider integration is deployed primarily in two ways: directly within the cloud provider or through a managed service provider (MSP) that peers with the cloud provider, as follows:
- Cloud Integration - Aruba gateways can be deployed directly within Azure, Google Cloud, or AWS for cloud connectivity that provides SD-WAN capabilities such as Route Orchestration, FEC, DPS, VIA, and Microbranch connectivity.
- MSP Integration - Aruba gateways can be deployed within an MSP or colocation facility where the MSP has direct low-latency connectivity to a cloud provider. This hybrid approach provides the benefit of having the physical infrastructure and cloud connectivity in the same location.
Aruba gateways can be used to peer directly with a transit gateway to establish connectivity between VPC/VNETs. This is not recommended because organizations will lose the following SD-WAN capabilities:
- Reverse Path Pinning ensures that traffic always returns through the path of origin, enabling Branch Gateways (BGWs) to perform uplink load-balancing and Dynamic Path Steering.
- Forward Error Correction protects critical traffic flows from potential network issues between the branches and the cloud, especially when traversing the Internet.
- Tunnel Orchestration automates the process of establishing IPsec tunnels from all BGWs to all relevant VPNCs (including the vGW).
- Orchestrated Routing automates the exchange of routes across the SD-WAN.
- End-to-End Visibility enables single-source visualization and monitoring of the entire SD-WAN network using a single application (Aruba Central).
When integrating with a cloud provider, it is important to deploy a virtual gateway to ensure the high performance and stability that SD-WAN provides.
For more details, see Aruba's cloud integration guides.
Zero Touch Provisioning (ZTP) is the preferred method to deploy gateways that obtain IP addressing dynamically from an Internet Service Provider (ISP). The gateway connects to the Internet service, obtains an IPv4 address, then communicates with Aruba Central to obtain the configuration.
There may be circumstances when a gateway requires additional configuration before it can communicate with Central. The gateway may require:
- Static addressing
- Point-to-Point Protocol over Ethernet (PPPoE) credentials to initiate the Internet service
- Specific VLAN configuration
For these deployments, Aruba offers a One Touch Provisioning (OTP) feature for gateways. OTP can use the serial console or the web user interface.
This method is recommended when a VPNC or BGW requires more advanced configurations that may require a specific VLAN ID or trunk configuration. Generally, when deploying a hub site, OTP is needed due to its placement. The OTP feature is available only for gateways in their factory default state and cannot be accessed after a gateway has received its configuration from Central.
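The ZTP-versus-OTP decision above reduces to a simple check. The sketch below is illustrative only (the flag names are hypothetical, not an Aruba API): any requirement that prevents the gateway from reaching Central over plain DHCP addressing calls for OTP.

```python
def requires_otp(static_addressing: bool, pppoe: bool, vlan_config: bool) -> bool:
    """Return True when a gateway needs One Touch Provisioning.

    A factory-default gateway that can reach Aruba Central over a DHCP
    Internet uplink can use ZTP; any condition that blocks that first
    connection requires OTP via the serial console or web UI.
    """
    return any((static_addressing, pppoe, vlan_config))

# A hub-site VPNC on a tagged uplink VLAN needs OTP:
print(requires_otp(static_addressing=True, pppoe=False, vlan_config=True))  # True

# A branch gateway on a DHCP Internet uplink can use ZTP:
print(requires_otp(False, False, False))  # False
```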
The SD-WAN Orchestrator brings up active tunnels to multiple headend gateways, and the routes from the BGWs are advertised to those gateways. The overlay routes use weighted costs to select one gateway over another. Weighted costs are set per group; creating two branch groups with inverse priorities enables traffic load-balancing across the headend gateways.
On the northbound LAN interface side, the overlay cost is automatically translated into the dynamic routing protocols as follows:
- OSPF: Direct translation into External Type 1 and External Type 2 cost
- BGP: Direct translation into the Multi-Exit Discriminator (MED)
- BGP: Automatic prepending of Autonomous System (AS) numbers to ensure routing symmetry
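As a rough sketch of the translation above (the attribute names are illustrative, not an Aruba API), the overlay cost maps into LAN-side protocol attributes roughly like this:

```python
def lan_side_attributes(overlay_cost: int, protocol: str, preferred: bool) -> dict:
    """Illustrative mapping of an overlay cost into LAN-side routing attributes."""
    if protocol == "ospf":
        # Direct translation into the external (Type 1 / Type 2) metric.
        return {"external_metric": overlay_cost}
    if protocol == "bgp":
        # Direct translation into MED; the less-preferred gateway also
        # prepends its AS number so return traffic stays symmetric.
        return {"med": overlay_cost, "as_path_prepend": not preferred}
    raise ValueError(f"unsupported protocol: {protocol}")

print(lan_side_attributes(20, "bgp", preferred=False))
# {'med': 20, 'as_path_prepend': True}
```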
The figure below shows a headend site with redundancy and two gateways.
Aruba supports multiple active data centers to provide branch locations with easy access to resources in different locations. Tunnels are built to all data centers using the method described above.
To minimize the number of routes in the data center gateways, Aruba recommends summarizing the branch site routes advertised into the data centers and data center routes into the branch.
Proper IP address planning is required across the entire organization so that the number of subnets at each branch falls within an easily summarized bit boundary. If three or four subnets are currently in use at a location, plan for a minimum of eight summarized subnets to allow for future expansion without adding new summaries. A good rule of thumb is to use a network range with a /21 (255.255.248.0) mask for each branch location, which allows for eight /24 subnets at each branch.
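The /21-per-branch arithmetic can be verified with Python's standard `ipaddress` module (the 10.8.0.0/21 block is just an example range):

```python
import ipaddress

# One /21 per branch (mask 255.255.248.0) yields eight /24 subnets.
branch_block = ipaddress.ip_network("10.8.0.0/21")
subnets = list(branch_block.subnets(new_prefix=24))

print(len(subnets))              # 8
print(subnets[0], subnets[-1])   # 10.8.0.0/24 10.8.7.0/24
```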
At the data centers, the routes also should be summarized to stay within route table constraints on the branch gateways. In most cases, it is preferable to use a single supernet route for each data center location. If this is not possible, use as few summary routes as possible. Also consider creating a single summary route to cover all of the branch locations.
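When branch blocks are allocated from one contiguous range, the single-summary recommendation can also be illustrated with the `ipaddress` module (example addresses only): four contiguous /21 branch blocks collapse into one /19 summary.

```python
import ipaddress

# Four contiguous /21 branch blocks drawn from one parent range.
branches = [ipaddress.ip_network(f"10.8.{i * 8}.0/21") for i in range(4)]

# They collapse into a single summary route the data centers can advertise.
summary = list(ipaddress.collapse_addresses(branches))
print(summary)  # [IPv4Network('10.8.0.0/19')]
```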
Another recommendation is setting the DC preference for each branch to the closest hub location. The secondary and tertiary locations use lower DC preferences, so the closest data center is always preferred. This enables the branches closest to a particular data center to use it as a regional hub for reaching other branches in the area.
Aruba always recommends allowing branch-to-branch communications through the data center, even when planning to use branch mesh, as discussed below. This enables the closest data center to act as a backup path between branches if the branch mesh tunnel is not available.
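A minimal sketch of the preference ordering described above, assuming hypothetical data center names and a higher preference value winning: the closest hub gets the highest value, and the secondary and tertiary hubs follow in descending order.

```python
def ordered_hubs(dc_preference: dict) -> list:
    """Rank hubs by DC preference, highest value (closest hub) first."""
    return sorted(dc_preference, key=dc_preference.get, reverse=True)

# Example: a branch nearest the east-coast data center.
prefs = {"dc-east": 200, "dc-west": 150, "dc-central": 100}
print(ordered_hubs(prefs))  # ['dc-east', 'dc-west', 'dc-central']
```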
The figure below shows the summary routes and DC preference with multiple active data centers.
To maintain traffic symmetry when using multiple data centers, the SD-WAN Orchestrator automatically sets different routing costs for the different VPNCs in increments of 10. The smart redistribution feature between VPNCs in different data centers works the same way as with two redundant VPNCs in the same data center, as discussed in the previous section.
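Under the increments-of-10 rule above, the orchestrator's cost assignment can be sketched as follows (the VPNC names and starting value are hypothetical; only the distinct-costs-in-steps-of-10 behavior comes from the text):

```python
def vpnc_costs(vpncs: list, base: int = 10, step: int = 10) -> dict:
    """Assign each VPNC a distinct routing cost in increments of 10."""
    return {name: base + i * step for i, name in enumerate(vpncs)}

print(vpnc_costs(["dc1-vpnc1", "dc1-vpnc2", "dc2-vpnc1"]))
# {'dc1-vpnc1': 10, 'dc1-vpnc2': 20, 'dc2-vpnc1': 30}
```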