Aruba SD-Branch has supported deploying vGWs in GCP since the introduction of version 126.96.36.199-188.8.131.52. This allows customers to connect a vGW (or two for HA) to a VPC to facilitate communications with the workloads inside that VPC. The challenge with this approach, however, is that to reach multiple VPCs, the vGW would have to establish IPsec tunnels with the VPN Gateway in each such VPC and then peer with the Cloud Router in that VPC. This complicates the design and negatively impacts performance.
To address these challenges and provide a cleaner architecture, GCP introduced the Network Connectivity Center (NCC). Google NCC simplifies communications across and into the cloud environment by providing a global network service to handle all networking needs. It not only facilitates connectivity between workloads in VPCs across the world, but also provides an easy mechanism for external networks, such as Dedicated Interconnects or SD-WAN fabrics, to connect to it.
When using Google NCC, vGWs can be connected to a transit VPC that then peers with other VPCs in the same region as well as in other regions, simplifying branch-to-cloud communications and enabling branch-to-branch communications through the GCP backbone network.
As described in the GCP documentation, Router Appliances (RAs), in this case the vGWs, can be defined as NCC spokes to connect to the corresponding Cloud Routers. That enables the SD-WAN and cloud infrastructure to route traffic back and forth naturally, using the same mechanisms that would apply in more “traditional” environments such as an on-premises Data Center.
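As a rough sketch, registering a vGW as a Router Appliance spoke and peering it with a Cloud Router could look like the following gcloud commands. All names, project paths, IP addresses, and ASNs are placeholders, and the exact spoke-creation syntax and flags vary across gcloud releases, so treat this as illustrative rather than authoritative:

```shell
# Create an NCC hub (resource names below are illustrative placeholders).
gcloud network-connectivity hubs create ncc-hub

# Register the vGW VM as a Router Appliance spoke on the hub.
# (Newer gcloud releases may use a "spokes linked-router-appliances" subgroup instead.)
gcloud network-connectivity spokes create vgw-spoke \
  --hub=ncc-hub \
  --region=us-central1 \
  --router-appliance=instance=projects/my-proj/zones/us-central1-a/instances/vgw-1,ip=10.1.0.10

# Add a Cloud Router interface in the transit VPC subnet.
gcloud compute routers add-interface transit-cr \
  --region=us-central1 \
  --interface-name=ra-if-0 \
  --ip-address=10.1.0.2 \
  --subnetwork=transit-subnet

# Configure a BGP session from the Cloud Router to the vGW instance.
gcloud compute routers add-bgp-peer transit-cr \
  --region=us-central1 \
  --peer-name=vgw-1-peer \
  --interface=ra-if-0 \
  --peer-ip-address=10.1.0.10 \
  --peer-asn=65010 \
  --instance=vgw-1 \
  --instance-zone=us-central1-a
```

A second vGW for HA would be added the same way, as an additional router-appliance instance on the spoke with its own BGP peering.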
This simplifies the connectivity of SD-Branch, Microbranch and VPN clients into the VPCs deployed in GCP, as it provides for dynamic routing between the two environments.
When integrating vGWs into GCP using NCC, vGWs declared as NCC spokes dynamically exchange routing prefixes with the Cloud Routers using Border Gateway Protocol (BGP). This gives network administrators the flexibility to dynamically exchange routes with a great deal of control over which routing information is shared, by making use of route-map policies on the Aruba vGWs.
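The filtering effect of such an outbound route-map can be modeled in a few lines of Python. This is not Aruba CLI, just a toy model of the permit/implicit-deny logic; the prefixes used are hypothetical:

```python
import ipaddress

def apply_route_map(prefixes, permit_supernets):
    """Toy model of an outbound route-map: advertise only prefixes that
    fall inside one of the permitted supernets (implicit deny otherwise)."""
    permitted = [ipaddress.ip_network(s) for s in permit_supernets]
    advertised = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        if any(net.subnet_of(s) for s in permitted):
            advertised.append(p)
    return advertised

# Only branch LAN summaries inside 10.10.0.0/16 are shared with the
# Cloud Router; the stray lab prefix is filtered out.
print(apply_route_map(
    ["10.10.1.0/24", "10.10.2.0/24", "192.168.99.0/24"],
    ["10.10.0.0/16"],
))  # -> ['10.10.1.0/24', '10.10.2.0/24']
```

On the actual vGW, the equivalent control is expressed as a route-map applied to the BGP neighbor, but the match-and-permit semantics are the same.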
When defining the BGP topology, GCP recommends reserving one BGP Autonomous System (AS) number for NCC and the Cloud Routers, and then using a different BGP AS for the vGWs in each region. This enables communication between regions while at the same time preventing routing loops.
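The loop-prevention rationale follows directly from standard eBGP behavior: a router rejects any route whose AS_PATH already contains its own ASN. A minimal sketch, with illustrative ASNs (65000/65010/65020 are assumptions, not values from the source):

```python
def accepts_route(local_asn, as_path):
    """Standard eBGP loop prevention: reject any route whose AS_PATH
    already contains the local ASN."""
    return local_asn not in as_path

# Cloud Routers/NCC share one ASN; vGWs get a distinct ASN per region.
CLOUD_ASN, VGW_US, VGW_EU = 65000, 65010, 65020  # illustrative ASNs

# A prefix learned from the US vGWs and re-advertised via the Cloud
# Routers is accepted by the EU vGWs (no ASN overlap in the path)...
print(accepts_route(VGW_EU, [CLOUD_ASN, VGW_US]))   # True

# ...but if every region reused the same vGW ASN, the receiving vGWs
# would see their own ASN in the path and drop the route.
print(accepts_route(VGW_US, [CLOUD_ASN, VGW_US]))   # False
```

This is why per-region vGW AS numbers allow inter-region reachability through the hub while a single shared vGW AS would silently black-hole those routes.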
When integrating vGWs with GCP Cloud Routers using NCC, high availability is achieved through traditional routing mechanisms. In this particular case, since both vGWs will most likely advertise the same routing prefixes, the active/standby traffic paths are determined by the standard BGP path-selection mechanism. At the same time, when building an SD-WAN topology with Aruba SD-Branch, the Orchestrator provides a very simple mechanism to establish preference across nodes advertising the same prefixes: a given Branch Gateway or Microbranch connected to multiple hubs has its SD-WAN overlay routing cost assigned by the Orchestrator according to the DC preference, in increments of 10 for each subsequent headend gateway.
Once this DC preference is set, Aruba vGWs automatically translate the SD-WAN routing costs into the corresponding BGP routing costs. When eBGP is used (as is the case between the Aruba vGWs and the Cloud Router), the Aruba vGWs translate the SD-WAN overlay routing cost into BGP attributes by incrementally prepending their own Autonomous System number to the AS_PATH.
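The end-to-end effect can be sketched in Python. The 10-point cost increments come from the text above, but the exact cost-to-prepend mapping and the ASN are assumptions for illustration; real BGP best-path selection also weighs other attributes that are ignored here:

```python
def overlay_cost(headend_index):
    """SD-WAN overlay cost assigned by the Orchestrator: increments of 10
    for each subsequent headend gateway (indexing scheme is illustrative)."""
    return 10 * (headend_index + 1)

def as_path_for(local_asn, cost):
    """Assumed cost-to-BGP translation: each 10-point cost step becomes
    one prepend of the vGW's own ASN, so lower preference -> longer path."""
    prepends = cost // 10
    return [local_asn] * prepends

def best_path(paths):
    """Simplified eBGP selection: shortest AS_PATH wins
    (local-pref, MED, and other tie-breakers ignored)."""
    return min(paths, key=len)

primary = as_path_for(65010, overlay_cost(0))  # first headend  -> [65010]
backup  = as_path_for(65010, overlay_cost(1))  # second headend -> [65010, 65010]
print(best_path([backup, primary]))            # shorter AS_PATH: primary vGW wins
```

Because the standby vGW advertises the same prefixes with a longer AS_PATH, the Cloud Router prefers the primary vGW and automatically fails over when its advertisement is withdrawn.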