
Google NCC Integration

As described earlier, by integrating SD-Branch with the GCP Network Connectivity Center (NCC), customers can peer the Aruba vGWs in GCP with the regional Cloud Routers to enable communication between both environments. This provides branch-to-cloud as well as branch-to-branch communication through a flexible and dynamic mechanism. Achieving this is as simple as:

Step 1 Deploying Aruba vGWs in GCP

Step 2 Enabling the NCC Hub and defining the vGWs as Spokes

Step 3 Defining Cloud Routers in every region

Step 4 Peering the Cloud Routers with the Aruba vGWs using Dynamic Routing (BGP)

Detailed steps on how to configure this environment are provided below. For ease of documentation, these are described using gcloud commands, but alternative mechanisms (including GUI-based configuration) are described in detail in the NCC documentation.

Note: While gcloud commands can be run from the web console, installing the Google Cloud SDK is recommended for a better user experience, as it enables the gcloud CLI tool to manage Google Cloud resources from the local machine.
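
For reference, a minimal first-time setup with the SDK typically looks like the following (the project ID reuses the one from the examples in this guide):

  gcloud auth login
  gcloud config set project sd-branch-ncc-testing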


Aruba Virtual Gateway Setup

Virtual Gateway Deployment

The first step is to create the Aruba vGWs in GCP, as described earlier in this guide. The general recommendation is that vGWs be attached to different subnets of the same global VPCs. In the specific case of the LAN interface of the vGW (the one attached to VLAN 4092), this becomes a hard requirement when using Google NCC. The following diagram represents how global VPCs should be assigned to the different network interfaces of a vGW when integrating with NCC.

VPC using global VPCs

The list of router appliances (Aruba vGWs in this case) can easily be obtained using the following command:

gcloud compute instances list

The output can optionally be filtered with the --zones flag, with the variable representing:

  • ZONE : the zone(s) where the vGW(s) were deployed, as a comma-separated list (for example, --zones=us-west1-b).

An example would be:

Home$ gcloud compute instances list
NAME           ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP                                      EXTERNAL_IP    STATUS
vgw-ncc-01     us-west1-b      n2-standard-4               10.128.0.2,10.128.1.2,10.128.2.2,10.128.3.2      35.212.223.10  RUNNING
vgw-ncc-uk-01  europe-west2-c  n2-standard-4               10.128.16.7,10.128.17.7,10.128.18.7,10.128.19.2  35.214.21.100  RUNNING

BGP configuration of the Aruba vGW

Google recommends configuring the router appliance before the Cloud Router, so the next step is the BGP configuration of the Aruba vGWs. This can be done by going to the group containing the vGWs and then to Routing > BGP in the advanced configuration menu. Once there, the following configurations can be made:

Step 1 Enable BGP and set an Autonomous System. As described above, the vGWs in every region should be in their own unique private AS, which should be different from the one used by the NCC service.

Step 2 Create the corresponding route-maps and attach them to the BGP neighbors.

Step 3 Configure the primary and standby Cloud Router as BGP neighbors in every vGW.

Step 4 Redistribute Overlay routes (learned from the SD-WAN) into BGP and vice versa. As described in the “design” section, the route redistribution is done in a way that helps ensure traffic symmetry.

BGP Neighbors

BGP Redistribution

Overlay Redistribution

Note: Aruba Gateways follow RFC 8212, which mandates that a route-map be applied to eBGP neighbors; otherwise, the implicit “deny” is applied. An inbound route-map must therefore be applied to each eBGP neighbor in order to learn prefixes from it.

NCC Initial Setup

The vGWs deployed in GCP need to integrate with the Network Connectivity Center, which enables dynamic routing between the SD-WAN and the Cloud Routers. This requires bringing up the global NCC Hub (which may already be in place before the deployment) and defining the vGWs as NCC Spokes.

Create the NCC Hub

The first step will be to create the NCC Hub. This “hub” is a global resource that will behave as our “backbone as a service”. As such, there should only be one hub per GCP project. The code sample would be the following:

  gcloud network-connectivity hubs create NAME \
    --description=DESCRIPTION \
    --labels=KEY=VALUE

The variables above would represent:

  • NAME: the name of the new hub
  • DESCRIPTION: optional text that describes the hub
  • KEY: the key in the key-value pair for the optional label text
  • VALUE: the value in the key-value pair for the optional label text

An example would be the following:

Home$ gcloud network-connectivity hubs create hub-ncc-uk --description="NCC Hub UK"
Create request issued for: [hub-ncc-uk]
Waiting for operation [projects/sd-branch-ncc-testing/locations/global/operations/operation-1629332960069-5c9dea44e47d8-a816636a-e287f102] to complete...done.
Created hub [hub-ncc-uk].
Home$
Home$
Home$ gcloud network-connectivity hubs list
NAME        DESCRIPTION
hub-ncc     NCC Hub 
Home$

Additional details as well as other configuration options are described in the GCP documentation.

Add vGWs as spokes to the NCC Hub

After the NCC Hub has been created, the next step is to add the Aruba vGWs as “Router appliance spokes” to it. Unlike other types of spokes (VLAN, VPN), these aren’t associated with a single location outside Google Cloud, as the Aruba vGW will bring connections from anywhere in the SD-WAN network. It’s worth noting that spokes with site_to_site_data_transfer enabled must belong to the same VPC network.

For redundancy or scalability purposes, multiple vGWs may be needed for a specific deployment. They should all be tied to the same NCC spoke, and they should reside in the same region as that spoke (see the multi-appliance sketch after the example below).

The code sample to create NCC spokes (and associate them to the vGWs) is:

  gcloud network-connectivity spokes create NAME \
    --hub=HUB_NAME \
    --description=DESCRIPTION \
    --router-appliance=ROUTER_APPLIANCE_DETAILS \
    --region=REGION \
    --labels=KEY=VALUE

With the variables above representing the following:

  • NAME: the name of the spoke
  • HUB_NAME: the name of the hub, in URI format, that you are attaching the spoke to—for example, projects/myproject/locations/global/hubs/us-west-to-uk
  • DESCRIPTION: optional text that describes the spoke—for example, us-vpn-spoke
  • ROUTER_APPLIANCE_DETAILS: the URI and IP address of the router appliance instance to add to the spoke
  • REGION: the Google Cloud region where the spoke is located—for example, us-west1
  • KEY: the key in the key-value pair for the optional label text
  • VALUE: the value in the key-value pair for the optional label text

The ROUTER_APPLIANCE_DETAILS variable should follow the format below:

instance="https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME",ip="INTERNAL_IP_ADDRESS"

An example would be the following:

Home$ gcloud network-connectivity spokes create ncc-spokes-vgw-uk-01 --hub=hub-ncc-uk --description="NCC Spoke for UK Region" --router-appliance=instance="https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/zones/europe-west2-c/instances/vgw-ncc-uk-01",ip="10.128.19.2" --region=europe-west2
Create request issued for: [ncc-spokes-vgw-uk-01]
Waiting for operation [projects/sd-branch-ncc-testing/locations/europe-west2/operations/operation-1629400259100-5c9ee4fa3fd64-a84dd719-b0b41f41] to complete...done.
Created spoke [ncc-spokes-vgw-uk-01].
Home$
Home$ gcloud network-connectivity spokes list --region=europe-west2
NAME                  REGION        HUB         TYPE              RESOURCE COUNT  DESCRIPTION
ncc-spokes-vgw-uk-01  europe-west2  hub-ncc-uk  Router appliance  1               NCC Spoke for UK Region
Home$
Home$ gcloud network-connectivity spokes list --region=us-west1
NAME               REGION    HUB         TYPE              RESOURCE COUNT  DESCRIPTION
ncc-spokes-vgw-01  us-west1  hub-ncc-01  Router appliance  1
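
For redundancy, a second vGW in the same region can be attached to the same spoke by repeating the --router-appliance flag. The sketch below is illustrative only; the second instance name, zone, and IP address are hypothetical:

  gcloud network-connectivity spokes create ncc-spokes-vgw-uk \
    --hub=hub-ncc-uk \
    --description="NCC Spoke for UK Region" \
    --router-appliance=instance="https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/zones/europe-west2-c/instances/vgw-ncc-uk-01",ip="10.128.19.2" \
    --router-appliance=instance="https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/zones/europe-west2-b/instances/vgw-ncc-uk-02",ip="10.128.19.3" \
    --region=europe-west2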

Additional details as well as other configuration options are described in the GCP documentation.

Creating and configuring the Cloud Router

After the spokes have been created, the Cloud Routers for every region can be defined. These should be in the same region where the router appliances (the Aruba vGWs) are deployed. For simplicity, Google recommends using the same ASN on all Cloud Routers, and a unique ASN on each router appliance, except for appliances that form an HA pair, which share one.

Create Cloud Router

Cloud Routers are created with the following commands:

  gcloud compute routers create NAME \
      --region=REGION \
      --network=NETWORK \
      --asn=ASN \
      --project=PROJECT_ID

With the variables representing the following:

  • NAME: the name of the Cloud Router—for example, cloud-router-a
  • REGION: the region that contains the Cloud Router—for example, us-west1
  • NETWORK: the VPC network that contains the Cloud Router—for example, network-a
  • ASN: the autonomous system number (ASN) for the Cloud Router—this ASN must be a 16-bit or 32-bit private ASN as defined in RFC 6996—for example, 65000
  • PROJECT_ID: the project ID for the Cloud Router—for example, my-project

As an example:

Home$ gcloud compute routers create router-ncc-uk-01 --region=europe-west2 --network=vpc-vgw-lan --asn=65000 --project=sd-branch-ncc-testing
Creating router [router-ncc-uk-01]...done.
NAME              REGION        NETWORK
router-ncc-uk-01  europe-west2  vpc-vgw-lan
Home$
Home$ gcloud compute routers list
NAME              REGION        NETWORK
router-ncc-01     us-west1      vpc-vgw-lan
router-ncc-uk-01  europe-west2  vpc-vgw-lan
Home$

Additional details as well as other configuration options are described in the GCP documentation.

Add Interfaces to the Cloud Router

Each Cloud Router supports up to 128 BGP peering sessions, and a given Cloud Router can have up to two interfaces (for redundancy purposes). In a typical HA scenario, the two vGWs would each connect to both interfaces of the Cloud Router, with a topology like the one below:

Cloud Router interfaces

To create two redundant interfaces, use the following commands:

Step 1 Create the first router interface:

gcloud compute routers add-interface NAME \
    --interface-name=INTERFACE_NAME \
    --ip-address=IP_ADDRESS \
    --subnetwork=SUBNET \
    --region=REGION \
    --project=PROJECT_ID

Step 2 Create the redundant interface:

gcloud compute routers add-interface NAME \
    --interface-name=INTERFACE_NAME \
    --ip-address=IP_ADDRESS \
    --subnetwork=SUBNET \
    --redundant-interface=REDUNDANT_INTERFACE \
    --region=REGION \
    --project=PROJECT_ID

With the variables representing the following:

  • NAME: the name of the Cloud Router to update—for example, cloud-router-a
  • INTERFACE_NAME: the name of the interface—for example, router-appliance-interface-0 or router-appliance-interface-1
  • IP_ADDRESS: the RFC 1918 internal IP address to use for the interface—for example, 10.0.1.5 or 10.0.1.6
  • SUBNET: the subnet where the internal IP address resides—for example, subnet-a-1
  • REDUNDANT_INTERFACE: the redundant Cloud Router interface that peers with the same router appliance instance as the primary interface—for example, router-appliance-interface-0
  • REGION: the Google Cloud region where the Cloud Router resides—for example, us-west1
  • PROJECT_ID: the project ID for the Cloud Router—for example, my-project

An example would be the following output:

Home$ gcloud compute routers add-interface router-ncc-uk-01 --interface-name=router-interface-0 --ip-address=10.128.19.10 --subnetwork=subnet-vgw-lan-uk --region=europe-west2 --project=sd-branch-ncc-testing
Updated [https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/routers/router-ncc-uk-01].
Home$
Home$ gcloud compute routers add-interface router-ncc-uk-01 --interface-name=router-interface-1 --ip-address=10.128.19.11 --subnetwork=subnet-vgw-lan-uk --redundant-interface=router-interface-0 --region=europe-west2 --project=sd-branch-ncc-testing
Updated [https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/routers/router-ncc-uk-01].
Home$
Home$ gcloud compute routers describe router-ncc-uk-01 --region=europe-west2
bgp:
  advertiseMode: DEFAULT
  asn: 65000
  keepaliveInterval: 20
creationTimestamp: '2021-08-19T13:35:03.266-07:00'
id: '8698182270885002904'
interfaces:
- ipRange: 10.128.19.10/27
  name: router-interface-0
  privateIpAddress: 10.128.19.10
  redundantInterface: router-interface-1
  subnetwork: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/subnetworks/subnet-vgw-lan-uk
- ipRange: 10.128.19.11/27
  name: router-interface-1
  privateIpAddress: 10.128.19.11
  redundantInterface: router-interface-0
  subnetwork: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/subnetworks/subnet-vgw-lan-uk
kind: compute#router
name: router-ncc-uk-01
network: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
region: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2
selfLink: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/routers/router-ncc-uk-01
Home$

Additional details as well as other configuration options are described in the GCP documentation.

BGP Configuration of the Cloud Router

The last step of the Cloud Router configuration is to define the BGP neighbors, in this case the Aruba vGWs. As described above, Google recommends that all GCP Cloud Routers be part of the same AS, with the router appliances (Aruba vGWs) in different ASes. For this type of deployment, Aruba recommends having the vGWs of a region in the same AS, and using a different Autonomous System for every region.

Note: BGP communicates over TCP port 179, so this port should be allowed by the firewall rules of the VPC.
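
As an illustrative sketch, such a rule could be created as follows (the rule name is hypothetical; the network and source range reuse the examples in this guide):

  gcloud compute firewall-rules create allow-bgp-cloud-router \
      --network=vpc-vgw-lan \
      --allow=tcp:179 \
      --source-ranges=10.128.19.0/27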

For every peer router, first configure the BGP peer for the primary interface:

gcloud compute routers add-bgp-peer NAME \
    --peer-name=PEER_NAME \
    --interface=INTERFACE \
    --peer-ip-address=PEER_IP_ADDRESS \
    --peer-asn=PEER_ASN \
    --instance=ROUTER_APPLIANCE \
    --instance-zone=ROUTER_APPLIANCE_ZONE \
    --region=REGION

And then for the secondary interface:

gcloud compute routers add-bgp-peer NAME \
    --peer-name=PEER_NAME \
    --interface=INTERFACE \
    --peer-ip-address=PEER_IP_ADDRESS \
    --peer-asn=PEER_ASN \
    --instance=ROUTER_APPLIANCE \
    --instance-zone=ROUTER_APPLIANCE_ZONE \
    --region=REGION

With the variables representing:

  • NAME: the name of the Cloud Router to update
  • PEER_NAME: the name of the BGP peering session to establish with the router appliance instance
  • INTERFACE: the name of the interface for this BGP peer
  • PEER_IP_ADDRESS: the internal IP address of the peer router (the router appliance instance)—this address must match the primary internal IP address for the VM’s primary network interface (nic0)
  • PEER_ASN: the BGP autonomous system number (ASN) for this BGP peer—this ASN must be a 16-bit or 32-bit private ASN as defined in RFC 6996
  • ROUTER_APPLIANCE: the name of the VM acting as the router appliance instance
  • ROUTER_APPLIANCE_ZONE: the zone where the VM acting as the router appliance instance is located
  • REGION: the region where the VM acting as the router appliance instance is located

An example of the entire configuration would look like the following:

Home$
Home$ gcloud compute routers add-bgp-peer router-ncc-uk-01 --peer-name=vgw-ncc-uk-01-int0 --interface=router-interface-0 --peer-ip-address=10.128.19.2 --peer-asn=65011 --instance=vgw-ncc-uk-01 --instance-zone=europe-west2-c --region=europe-west2
Creating peer [vgw-ncc-uk-01-int0] in router [router-ncc-uk-01]...done.
Home$
Home$ gcloud compute routers add-bgp-peer router-ncc-uk-01 --peer-name=vgw-ncc-uk-01-int1 --interface=router-interface-1 --peer-ip-address=10.128.19.2 --peer-asn=65011 --instance=vgw-ncc-uk-01 --instance-zone=europe-west2-c --region=europe-west2
Creating peer [vgw-ncc-uk-01-int1] in router [router-ncc-uk-01]...done.
Home$
Home$ gcloud compute routers describe router-ncc-uk-01 --region=europe-west2
bgp:
  advertiseMode: DEFAULT
  asn: 65000
  keepaliveInterval: 20
bgpPeers:
- enable: 'TRUE'
  interfaceName: router-interface-0
  ipAddress: 10.128.19.10
  name: vgw-ncc-uk-01-int0
  peerAsn: 65011
  peerIpAddress: 10.128.19.2
  routerApplianceInstance: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/zones/europe-west2-c/instances/vgw-ncc-uk-01
- enable: 'TRUE'
  interfaceName: router-interface-1
  ipAddress: 10.128.19.11
  name: vgw-ncc-uk-01-int1
  peerAsn: 65011
  peerIpAddress: 10.128.19.2
  routerApplianceInstance: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/zones/europe-west2-c/instances/vgw-ncc-uk-01
creationTimestamp: '2021-08-19T13:35:03.266-07:00'
id: '8698182270885002904'
interfaces:
- ipRange: 10.128.19.10/27
  name: router-interface-0
  privateIpAddress: 10.128.19.10
  redundantInterface: router-interface-1
  subnetwork: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/subnetworks/subnet-vgw-lan-uk
- ipRange: 10.128.19.11/27
  name: router-interface-1
  privateIpAddress: 10.128.19.11
  redundantInterface: router-interface-0
  subnetwork: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/subnetworks/subnet-vgw-lan-uk
kind: compute#router
name: router-ncc-uk-01
network: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
region: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2
selfLink: https://www.googleapis.com/compute/beta/projects/sd-branch-ncc-testing/regions/europe-west2/routers/router-ncc-uk-01
Home$

Additional details as well as other configuration options are described in the CGP documentation.

Configuration for Redundant vGWs

When building an SD-WAN topology with Aruba SD-Branch, the Orchestrator provides a very simple mechanism to establish the preference across nodes advertising the same prefixes. Branch Gateways have active paths to both vGWs in the same region, and both Aruba vGWs advertise the same routing prefixes to the Cloud Router. The routing cost for the paths through the two vGWs is determined when configuring the DC preference for a given branch group:

DC Preference

To ensure a symmetric communication flow, Aruba Gateways advertise routes that look less preferable to the upstream router (in this case the GCP Cloud Router) by incrementally prepending their own Autonomous System. Advertising a longer AS_PATH from the secondary Aruba vGW ensures that the Cloud Router chooses the route advertised by the primary vGW as the best path.

Once this DC preference has been set, the Aruba vGWs automatically translate the SD-WAN routing costs (lower for the VPNCs higher in the DC preference) into the corresponding BGP routing cost. Since eBGP is used between the Aruba vGWs and the Cloud Router, the vGWs translate the SD-WAN overlay routing cost into BGP metrics by prepending their own Autonomous System incrementally. For example:

  • An Overlay cost of 10 is translated to AS-Path prepend = 0
  • An Overlay cost of 20 is translated to AS-Path prepend = 1
  • An Overlay cost of 30 is translated to AS-Path prepend = 2
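
Putting these together as an illustration: assuming the regional vGW AS of 65011 used in this guide, a DC preference that assigns the primary vGW an overlay cost of 10 and the secondary a cost of 20 would result in the Cloud Router seeing:

  Primary vGW   (cost 10, prepend 0): AS_PATH = 65011        <- best path
  Secondary vGW (cost 20, prepend 1): AS_PATH = 65011 65011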

In summary, to configure a redundant vGW, simply repeat the steps described above (deploy the vGW, add it to the NCC spoke, and configure BGP between the vGW and the Cloud Router) for the secondary vGW. The SD-WAN Orchestrator in Aruba Central automatically handles HA and path symmetry.

Validating the peering between vGW and Cloud Router

After the BGP configuration is done on both the Cloud Router and the Aruba vGW, they should start advertising network prefixes to one another.

The routes advertised by the Aruba vGW can be monitored from the vGW details page, by going to Routing > BGP and then into the corresponding BGP neighbor:

vGW Advertised Routes

These routes can also be read from the Cloud Router by issuing the following command:

gcloud compute routers get-status NAME \
    --region=REGION \
    --project=PROJECT_ID

With the variables representing the following:

  • NAME: the name of the Cloud Router to query—for example, cloud-router-a
  • REGION: the Google Cloud region where the Cloud Router resides—for example, us-west1
  • PROJECT_ID: the project ID for the Cloud Router—for example, my-project

An example would be the following (truncated) output:

Home$ gcloud compute routers get-status router-ncc-us  --region=us-west1
kind: compute#routerStatusResponse
result:
  bestRoutes:
  - creationTimestamp: '2021-09-09T11:37:30.083-07:00'
    destRange: 10.127.31.80/32
    kind: compute#route
    network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
    nextHopIp: 10.128.3.2
    priority: 0
  - creationTimestamp: '2021-09-09T13:29:54.496-07:00'
    destRange: 10.127.32.121/32
    kind: compute#route
    network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
    nextHopIp: 10.128.19.2
    priority: 334
  - creationTimestamp: '2021-09-09T11:37:30.083-07:00'
    destRange: 10.127.29.0/26
    kind: compute#route
    network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
    nextHopIp: 10.128.3.2
    priority: 0
  ...
  ...
  - creationTimestamp: '2021-09-10T00:49:21.597-07:00'
    destRange: 10.127.31.80/32
    kind: compute#route
    network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
    nextHopIp: 10.128.3.2
    priority: 0
  bgpPeerStatus:
  - advertisedRoutes:
    - destRange: 10.128.10.0/24
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.10
      priority: 100
    ...
    ...
    - destRange: 10.128.19.0/27
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.10
      priority: 434
    - creationTimestamp: '2021-09-09T07:29:42.127-07:00'
      destRange: 10.127.18.0/24
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.10
      priority: 334
    - creationTimestamp: '2021-09-09T07:29:42.127-07:00'
      destRange: 10.127.32.121/32
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.10
      priority: 334
    ipAddress: 10.128.3.10
    name: vgw-ncc-01-int0
    numLearnedRoutes: 3
    peerIpAddress: 10.128.3.2
    state: Established
    status: UP
    uptime: 22 hours, 36 minutes, 3 seconds
    uptimeSeconds: '81363'
  - advertisedRoutes:
    - destRange: 10.128.10.0/24
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.11
      priority: 100
    ...
    ...
    - destRange: 10.128.11.0/27
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.11
      priority: 452
    - creationTimestamp: '2021-09-03T01:50:56.223-07:00'
      destRange: 10.127.32.121/32
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.11
      priority: 334
    - creationTimestamp: '2021-09-03T01:50:56.223-07:00'
      destRange: 10.127.18.0/24
      kind: compute#route
      network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan
      nextHopIp: 10.128.3.11
      priority: 334
    ipAddress: 10.128.3.11
    name: vgw-ncc-01-int1
    numLearnedRoutes: 3
    peerIpAddress: 10.128.3.2
    state: Established
    status: UP
    uptime: 9 hours, 21 minutes, 22 seconds
    uptimeSeconds: '33682'
  network: https://www.googleapis.com/compute/v1/projects/sd-branch-ncc-testing/global/networks/vpc-vgw-lan

Conversely, the vGW should also be learning routes from the Cloud Router. This can also be verified from the BGP Details tab by selecting “Routes” in the drop-down.

vGW Learned routes

NCC integration use-cases

Creating the NCC hub, defining the vGWs as NCC spokes, and connecting them to the Cloud Routers enables several use cases. While not all of them can be described in detail, the most interesting ones certainly deserve additional focus.

Branch to Cloud Connectivity

As discussed in the Reference Architectures, providing connectivity between the SD-WAN and the cloud environments is the primary use-case for the NCC integration. The steps to achieve this are the following.

Configuration Steps

Establish VPC Peering

Once the vGW is integrated with NCC and exchanging routes with the Cloud Router, the next step is to use the LAN VPC of the vGW as a transit VPC and peer it with other VPCs. This can be done from Networking > VPC Network > VPC Network Peering. When doing so, it’s important to allow both importing and exporting custom routes.

Create VPC Peering

It’s important to note that VPC peering has to be configured in both directions: vpc-01 will only accept the peering from vpc-02 if it has a peering toward vpc-02 itself. Until that happens, the peering remains inactive, and no communication takes place between the two.

VPC Peering - Bidirectional
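
The same two peerings can be created from the CLI; a hedged sketch, where the peering names and VPC names (vpc-01, vpc-02) are illustrative:

  gcloud compute networks peerings create peer-vpc01-to-vpc02 \
      --network=vpc-01 \
      --peer-network=vpc-02 \
      --import-custom-routes \
      --export-custom-routes

  gcloud compute networks peerings create peer-vpc02-to-vpc01 \
      --network=vpc-02 \
      --peer-network=vpc-01 \
      --import-custom-routes \
      --export-custom-routes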

Add Custom Routes to Cloud Router

The second step is to add custom routes to the Cloud Router table so they are advertised via BGP. This can easily be done from Networking > Hybrid Connectivity > Cloud Routers, going to the router where the custom route should be added. Once there, it’s as simple as enabling the option to advertise custom routes (as well as all other subnets), and then adding the subnet range that should be advertised.

Advertise Custom Routes
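
The equivalent gcloud configuration would look like the following sketch (the advertised range is hypothetical; the router name reuses the earlier examples):

  gcloud compute routers update router-ncc-uk-01 \
      --region=europe-west2 \
      --advertisement-mode=CUSTOM \
      --set-advertisement-groups=ALL_SUBNETS \
      --set-advertisement-ranges=10.200.0.0/24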

Validation Steps

If the NCC integration is functioning properly, the custom routes will automatically be advertised to the vGWs. As usual, that can easily be validated from the routing tab of the gateway details page in Aruba Central:

Custom Routes Advertised to vGW

This alone should be enough to ensure the network is routable. For more in-depth information about how traffic traverses GCP, the whole data flow can also be visualized from GCP. This makes it easy to check that traffic is taking the right path, that no firewall rules are blocking the flow, and so on. It can be viewed by running a Network Connectivity Test from the Network > Network Intelligence page.

Branch to Cloud traffic flow
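
Such a test can also be created from the CLI; a hedged sketch, where the test name and endpoint addresses are hypothetical:

  gcloud network-management connectivity-tests create branch-to-cloud-test \
      --source-ip-address=10.127.18.10 \
      --destination-ip-address=10.128.10.5 \
      --protocol=TCP \
      --destination-port=443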

Branch to Branch Connectivity

The other key use case for the SD-WAN integration with NCC is enabling branch-to-branch communication through the GCP backbone. After creating the NCC Hub (and spokes) and connecting the Aruba vGWs to the GCP Cloud Routers, the network should be ready to carry any branch-to-branch communication over the Google Cloud backbone. To validate this, we take two branches: one in Redwood City, California and one in Madrid, Spain.

Branch to branch Connectivity

The communication flow can be easily verified by looking at Aruba Central as well as GCP.

Validation steps

Tunnels and routing

Even though its detailed description is outside the scope of this document, Aruba Central orchestrates the connectivity between SD-Branch or Microbranch deployments and the vGWs deployed in GCP. In doing so, Aruba Central provides complete visibility of the branch and WAN infrastructure, displaying how Branch Gateways are connected to one (or two, for HA) Aruba vGWs in a given GCP region. The branch topology, including communication with the corresponding hubs, can be found by navigating to the corresponding site and then to Overview > Topology.

Orchestrated Branch to Cloud connectivity

Once the SD-WAN overlay is up, the Aruba vGWs will automatically learn routing prefixes from the branch gateways. This information can be obtained from the Routing > Overlay section in the vGW details page:

vGW-Overlay-Routes

The vGW should then advertise those prefixes to the Cloud Router. This is also visible in the vGW details page, by going to Routing > BGP and then into the corresponding BGP neighbor:

vGW Advertised Routes

At the same time, the vGW would also be learning routes from other regions from the Cloud Router. This can be verified from the BGP Details tab by selecting “Routes” in the drop-down. Note how the AS Path of the prefixes learned by the vGW includes the AS defined for NCC (65000) as well as the AS defined for the UK region (65011), indicating the origin of those routes.

vGW Learned routes

These routes are finally advertised to the SD-WAN Orchestrator. This can, once again, be verified from the gateway details page by going to the Overview > Routing > Overlay page, and into Routes Advertised:

vGW-Advertised-Routes

Data Flow

GCP also provides very valuable information to validate and/or troubleshoot the deployment. Given that the traffic traverses several firewall rules (those that apply to the ingress and egress of the transit VPC), perhaps the most useful tool is the Network Connectivity Test that can be performed from the Network > Network Intelligence page. In the example below, the entire communication path between a host connected to a Branch Gateway in California, US and another one connected to a Branch Gateway in Madrid, Spain is clearly displayed:

Network Connectivity Test


