Planning
Planning a Gateway Cluster
Each cluster can support a specific number of tunneled clients and tunneling devices. The Gateway series, model, and number of cluster nodes determine each cluster’s capacity. When planning a cluster, the primary consideration is the number of Gateways required to meet the base client, device, and tunnel capacity needs, plus the number of Gateways required for redundancy.
Cluster Capacity
A cluster’s capacity is the maximum number of tunneled clients and tunneling devices each cluster can serve. This includes each AP and UBT switch/stack that establishes tunnels to a cluster and each wired or wireless client device that is tunneled to the cluster.
For each Gateway series, HPE Aruba Networking publishes the maximum number of clients and devices supported per Gateway and per cluster. The maximum number of cluster nodes that can be deployed per Gateway series is also provided. This information and other considerations such as uplink types and uplink bandwidth are used to select a Gateway model and the number of cluster nodes that are required to meet the base capacity needs.
Once your base capacity needs are met, you can then determine the number of additional nodes that are needed to provide redundant capacity to accommodate maintenance events and failures. The additional nodes added for redundancy are not dormant during normal operation and will carry user traffic. Additional nodes can be added as needed up to the maximum supported cluster size for the platform.
7000 / 9000 Series – Gateway Scaling
Scaling | 7005 | 7008 | 7010 | 7024 | 7030 | 9004 | 9012 |
---|---|---|---|---|---|---|---|
Max Clients / Gateway | 1,024 | 1,024 | 2,048 | 2,048 | 4,096 | 2,048 | 4,096 |
Max Clients / Cluster | 4,096 | 4,096 | 8,192 | 8,192 | 16,384 | 8,192 | 16,384 |
Max Devices / Gateway | 64 | 64 | 128 | 128 | 256 | 128 | 256 |
Max Devices / Cluster | 256 | 256 | 512 | 512 | 1,024 | 512 | 1,024 |
Max Tunnels / Gateway | 5,120 | 5,120 | 5,120 | 5,120 | 10,240 | 5,120 | 5,120 |
Max Cluster Size | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes |
7200 Series – Gateway Scaling
Scaling | 7205 | 7210 | 7220 | 7240XM | 7280 |
---|---|---|---|---|---|
Max Clients / Gateway | 8,192 | 16,384 | 24,576 | 32,768 | 32,768 |
Max Clients / Cluster | 98,304 | 98,304 | 98,304 | 98,304 | 98,304 |
Max Devices / Gateway | 1,024 | 2,048 | 4,096 | 8,192 | 8,192 |
Max Devices / Cluster | 2,048 | 4,096 | 8,192 | 16,384 | 16,384 |
Max Tunnels / Gateway | 12,288 | 24,576 | 49,152 | 98,304 | 98,304 |
Max Cluster Size | 12 Nodes | 12 Nodes | 12 Nodes | 12 Nodes | 12 Nodes |
9100 / 9200 Series – Gateway Scaling
Scaling | 9114 | 9240 Base | 9240 Silver | 9240 Gold |
---|---|---|---|---|
Max Clients / Gateway | 10,000 | 32,000 | 48,000 | 64,000 |
Max Clients / Cluster | 60,000 | 128,000 | 192,000 | 256,000 |
Max Devices / Gateway | 4,000 | 4,000 | 8,000 | 16,000 |
Max Devices / Cluster | 8,000 | 8,000 | 16,000 | 32,000 |
Max Tunnels / Gateway | 40,000 | 40,000 | 80,000 | 160,000 |
Max Cluster Size | 6 Nodes | 6 Nodes | 6 Nodes | 6 Nodes |
Maximum Cluster Capacity
Each cluster can support a maximum number of clients and devices that cannot be exceeded. The number of cluster nodes required to reach a cluster’s maximum client or device capacity will vary by Gateway series and model. In some cases the maximum number of clients and devices for a cluster can only be reached by ignoring any high availability requirements and running with no redundancy.
Gateway series | Gateway model | Nodes to reach max cluster client capacity |
---|---|---|
7000 | All | 4 Nodes |
9000 | All | 4 Nodes |
7200 | 7205 | 12 Nodes |
7200 | 7210 | 6 Nodes |
7200 | 7220 | 4 Nodes |
7200 | 7240XM / 7280 | 3 Nodes |
9100 | All | 6 Nodes |
9200 | All | 4 Nodes |
Gateway series | Gateway model | Nodes to reach max cluster device capacity |
---|---|---|
7000 | All | 4 Nodes |
9000 | All | 4 Nodes |
7200 | All | 2 Nodes |
9100 | All | 2 Nodes |
9200 | All | 2 Nodes |
When a cluster’s maximum client or device capacity has been reached, adding more cluster nodes does not provide any additional client or device capacity; a cluster cannot support more clients or devices than the stated maximum for the Gateway series or model. Each additional node beyond that point still adds forwarding and uplink capacity for client traffic, as well as client and device capacity for failover.
What Consumes Capacity
Each tunneled client and tunneling device consumes resources within a cluster. Each Gateway model can support a specific number of clients and devices that directly correlates to the available processing, memory resources and forwarding capacity for each platform. HPE Aruba Networking tests and validates each platform at scale to determine these limits.
With AOS 10, Gateway scaling capacities have changed from those published for AOS 8. These new capacities should be considered when evaluating a Gateway series or model for deployment with AOS 10. Because AP management and control are no longer provided by Gateways, the number of supported devices and tunnels has increased.
Client Capacity
Each tunneled client device (unique MAC) consumes one client resource within a cluster and counts against the cluster’s published client capacity. For each Gateway series and model, HPE Aruba Networking provides the maximum number of clients that can be supported per Gateway and per homogeneous cluster. Each Gateway model and cluster cannot support more clients than the stated maximum.
When determining client capacity needs for a cluster, consider all tunneled clients that are connected to Campus APs, Microbranch APs, and UBT switches. Each tunneled client consumes one client resource within the cluster. Clients that need to be considered include:
- WLAN clients connected to Campus APs.
- WLAN clients connected to Microbranch APs implementing Centralized Layer 2 (CL2) forwarding.
- Wired clients connected to tunneled downlink ports on APs.
- Wired clients connected to UBT ports.
{: .note } Only tunneled clients that terminate in a cluster need to be considered. WLAN and wired clients connected to Campus APs, Microbranch APs or UBT ports that are bridged by the devices are excluded. WLAN and wired clients connected to Microbranch APs implementing Distributed Layer 3 (DL3) forwarding may also be excluded.
Each AP and active UBT port establishes GRE tunnels to each cluster node. The bucket map published by the cluster leader determines each tunneled client’s UDG and S-UDG assignment. A client’s UDG assignment determines which GRE tunnel the AP or UBT switch uses to forward the client’s traffic. If the client’s UDG fails, the client’s traffic is transitioned to the GRE tunnel associated with the client’s assigned S-UDG.
The number of tunneled clients does not influence the number of GRE tunnels that APs or UBT switches establish to the cluster nodes. Each AP and active UBT port will establish one GRE tunnel to each cluster node regardless of the number of tunneled client devices the WLAN or UBT port is servicing. The number of WLAN and wired port profiles also does not influence the number of GRE tunnels. The GRE tunnels are shared by all the profiles that terminate within a cluster.
The figure below depicts the client resource consumption for a 4-node 7240XM cluster supporting 60K tunneled clients. A four-node 7240XM cluster can support a maximum of 98K clients and each node can support a maximum of 32K clients. In this example, each client is assigned a UDG and S-UDG using the cluster’s published bucket map, with assignments distributed across the four cluster nodes. Each cluster node in this example is allocated 15K UDG sessions and 15K S-UDG sessions during normal operation.
Device Capacity
Each tunneling device consumes one device resource within a cluster and counts against the cluster’s published device capacity. For each Gateway series and model, HPE Aruba Networking provides the maximum number of devices that can be supported per Gateway and per homogeneous cluster. Each Gateway model and cluster cannot support more devices than the stated maximum.
When determining device capacity for a cluster, you need to consider all devices that are tunneling client traffic to the cluster. Each device that is tunneling client traffic to a cluster consumes a device resource within the cluster. Devices that need to be considered include:
-
Campus APs
-
Microbranch APs
-
UBT Switches
Each AP and UBT switch that is tunneling client traffic to a cluster establishes IPsec tunnels to each cluster node for signaling, messaging, and bucket map distribution. The cluster leader determines each AP’s DDG and S-DDG assignment, which are load balanced based on each cluster node’s capacity and load. For UBT switches, the admin configuration determines each UBT switch’s SDG assignment while the cluster leader determines the S-SDG assignment. UBT switches implement a PAPI control channel to the SDG node for signaling, messaging, and bucket map distribution.
The figure below depicts the device resource consumption for a 4-node 7240XM cluster supporting 8K APs. A four-node 7240XM cluster can support a maximum of 16K devices and each node can support a maximum of 8K devices. In this example, each AP is assigned a DDG and S-DDG by the cluster leader, with assignments distributed across the four cluster nodes. Each cluster node in this example is allocated 2K DDG sessions and 2K S-DDG sessions during normal operation.
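The per-node session arithmetic in the two examples above can be reproduced with a short sketch. The even split across nodes is a simplification for illustration; in practice the cluster leader balances assignments based on each node’s capacity and load.

```python
# Sketch of the per-node session split from the examples above. The even
# distribution is an assumption for illustration only.

def sessions_per_node(total_sessions, cluster_nodes):
    """Primary and standby sessions each node carries, assuming an even split."""
    primary = total_sessions / cluster_nodes   # UDG or DDG sessions per node
    standby = total_sessions / cluster_nodes   # S-UDG or S-DDG sessions per node
    return primary, standby

# 60,000 tunneled clients on a 4-node 7240XM cluster
print(sessions_per_node(60_000, 4))   # (15000.0, 15000.0) -> 15K UDG + 15K S-UDG per node
# 8,000 APs on the same 4-node cluster
print(sessions_per_node(8_000, 4))    # (2000.0, 2000.0)   -> 2K DDG + 2K S-DDG per node
```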
Tunnel Capacity
APs and UBT switches establish IPsec and/or GRE tunnels to each cluster node. APs will only establish tunnels to a cluster when a WLAN or wired-port profile is configured for mixed or tunnel forwarding, and a cluster is selected as the primary or secondary cluster. UBT switches will only tunnel to the cluster that is configured as the primary or secondary IP as part of the switch configuration.
The following types of tunnels will be established:
- Campus APs – IPsec and GRE tunnels
- Microbranch APs (CL2) – IPsec and GRE tunnels. GRE tunnels are encapsulated in IPsec.
- UBT Switches – GRE tunnels
The tunnels from Campus APs and Microbranch APs are orchestrated by Central while the GRE tunnels from UBT switches are initiated based on admin configuration. Each tunnel from an AP or UBT switch consumes tunnel resources on each Gateway within a cluster. Unlike client and device capacity that is evaluated per cluster, tunnel capacity is evaluated per Gateway.
The number of tunnels that a device establishes to each Gateway in a cluster varies by device type. During normal operation, APs establish 2 x IPsec tunnels (SPI-in and SPI-out) per Gateway for DDG sessions and 1 x GRE tunnel per Gateway for UDG sessions. The number of IPsec tunnels periodically increases to 4 x IPsec tunnels per Gateway during re-keying (5 tunnels total). Microbranch APs configured for CL2 forwarding consume the same number of tunnels as Campus APs; the main difference is that each GRE tunnel is encapsulated in IPsec.
Tunnel consumption for a Campus AP is depicted in the figure below. In this example the AP has established 2 x IPsec tunnels and 1 x GRE tunnel to each Gateway in the cluster. The 2 additional IPsec tunnels that are periodically established to each Gateway for re-keying are also shown. Worst case, each AP will establish a total of 5 tunnels to each Gateway in the cluster during re-keying.
For WLAN only deployments, calculating tunnel consumption per Gateway is not required, as the maximum number of devices supported per Gateway already factors in the worst-case maximum of 5 tunnels per AP. Because the maximum number of devices per Gateway is a hard limit, APs can never establish more tunnels than a Gateway can support.
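To illustrate this, the rough sketch below (assuming the worst-case 5 tunnels per AP described above) shows that even a 7240XM loaded to its 8,192-device maximum stays well under its published 98,304-tunnel limit.

```python
# Worst-case AP tunnel consumption per Gateway: 2 x IPsec + 1 x GRE in steady
# state, plus 2 additional IPsec tunnels during re-keying = 5 per AP.
TUNNELS_PER_AP_WORST_CASE = 5

def ap_tunnels_per_gateway(ap_count):
    return ap_count * TUNNELS_PER_AP_WORST_CASE

# A 7240XM at its per-Gateway device maximum of 8,192 APs
print(ap_tunnels_per_gateway(8_192))           # 40,960 tunnels
print(ap_tunnels_per_gateway(8_192) < 98_304)  # True - within the 7240XM tunnel limit
```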
The number of GRE tunnels established to each cluster node per UBT switch or stack varies based on the UBT version and the number of UBT ports. For both UBT versions, 1 x GRE tunnel is established per UBT port to each Gateway in the cluster; these tunnels are used for UDG sessions. The total number of UBT ports therefore influences the total number of GRE tunnels established to each cluster node.
When UBT version 1.0 is deployed, two additional GRE tunnels are established from each UBT switch or stack to their SDG/S-SDG cluster nodes. These additional GRE tunnels are used to forward broadcast and multicast traffic destined to clients similar to how DDG tunnels are used on APs. Each UBT switch or stack configured for UBT version 1.0 will therefore consume two additional GRE tunnels per cluster.
Tunnel consumption for a UBT switch with two active UBT ports is depicted in the figure below. In this example the UBT switch is configured for UBT version 1.0 and has established 1 x GRE tunnel to each of its SDG and S-SDG Gateways for broadcast / multicast traffic destined to clients. Additionally, each active UBT port has established 1 x GRE tunnel to each Gateway for UDG sessions. If all 48 ports were active in this example, a total of 49 x GRE tunnels would be established to each of the SDG and S-SDG Gateways (48 to each of the other cluster nodes). Note that the number of clients per UBT port does not influence the GRE tunnel count but does count against the cluster’s client capacity.
Because tunnel consumption for UBT deployments is variable, it is important to understand the UBT version that will be implemented, the total number of UBT switches or stacks, and the total number of UBT ports. For UBT version 1.0, each switch / stack will consume 2 x GRE tunnels per cluster and each UBT port will consume 1 x GRE tunnel per Gateway in the cluster for UDG sessions. For UBT version 2.0, each UBT port will consume 1 x GRE tunnel per Gateway in the cluster for UDG sessions.
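The per-switch arithmetic can be sketched as follows, under the rules above. The `is_sdg_node` flag is purely illustrative and distinguishes a switch’s SDG / S-SDG Gateways, which carry the extra UBT 1.0 broadcast / multicast tunnel, from the remaining cluster nodes.

```python
# GRE tunnels a single UBT switch or stack establishes to one Gateway.
def ubt_switch_tunnels_to_gateway(active_ubt_ports, ubt_version="2.0", is_sdg_node=False):
    tunnels = active_ubt_ports            # 1 x GRE per active UBT port (UDG sessions)
    if ubt_version == "1.0" and is_sdg_node:
        tunnels += 1                      # broadcast / multicast tunnel (UBT 1.0 only)
    return tunnels

# 48-port UBT 1.0 switch with all ports active
print(ubt_switch_tunnels_to_gateway(48, "1.0", is_sdg_node=True))   # 49 on the SDG / S-SDG nodes
print(ubt_switch_tunnels_to_gateway(48, "1.0", is_sdg_node=False))  # 48 on the other cluster nodes
```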
For mixed WLAN and UBT switch deployments, the number of tunnels consumed by the APs and UBT switches combined can potentially exceed a Gateway’s tunnel capacity. It is therefore important to calculate the total number of tunnels needed to support your deployment, as each Gateway in the cluster will terminate tunnels from both APs and UBT switches.
Determining Capacity
To successfully determine a cluster’s base capacity requirements, a good understanding of the environment is needed. Each Gateway model is designed to support a specific number of clients, devices and tunnels, and can forward a specific amount of encrypted and unencrypted traffic. The number of cluster nodes you deploy in a cluster will determine the total number of clients and devices that can be supported during normal operation and during maintenance or failure events.
Base Capacity
A successful cluster design starts by gathering requirements which will influence the Gateway model and number of cluster nodes you deploy. Once the base capacity has been determined, additional nodes can then be added to the base cluster as redundant capacity.
To determine a cluster’s base capacity requirements, the following information needs to be gathered:
- Total Tunneled Clients – The total number of client devices that will be tunneled to the cluster. This includes wireless clients, clients connected to wired AP ports, and wired clients connected to UBT ports. Each unique client MAC address counts as one client.
- Total Tunneling Devices – The total number of devices that are establishing tunnels to the cluster. This includes Campus APs, Microbranch APs, and UBT switches. Each AP and UBT switch / stack counts as one device.
- Total UBT Ports – If UBT is deployed, the total number of UBT ports across all switches and stacks must be known.
- UBT Version – The UBT version determines whether additional GRE tunnels are established to the cluster from each UBT switch or stack for broadcast / multicast traffic destined to clients. This can be significant if the total number of UBT switches or stacks is high.
- Traffic Forwarding – The minimum aggregate amount of user traffic that the cluster must forward. This helps with Gateway model selection.
- Uplink Ports – The types of Ethernet ports needed to connect each Gateway to its respective switching layer and the number of uplink ports that need to be implemented.
Determining the number of clients and devices that need to be supported by a cluster is a straightforward process. Each tunneled client (wired and wireless) will consume one client resource within the cluster. Each AP and UBT switch or stack that is tunneling client traffic to a cluster will consume one device resource within that cluster. A Gateway model and number of nodes can then be selected to meet the client and device capacity needs. The primary goal is to deploy the minimum number of cluster nodes required to meet your base client and device capacity needs.
When evaluating client and device capacities to select a Gateway, the best practice is to use 80% of the published Gateway and cluster scaling numbers so that your base cluster design includes 20% headroom for future expansion. Designing a cluster at 100% scale is not recommended as there will be no capacity left to support additional clients or devices after the initial deployment.
The general approach for selecting a Gateway model and determining the minimum number of nodes required to meet the base capacity needs starts with the scaling tables above. These tables provide the maximum number of clients and devices supported per Gateway and per cluster and can narrow the choice of Gateways to a specific series or model.
For example, if your base cluster needs to support 50,000 clients and 5,000 APs, the 7000 and 9000 series Gateways can be quickly eliminated, as can the 7205 and 7210 models. The remaining options are reduced to the 7220, 7240XM, 7280, and 9240 Base models.
Using 80% scaling numbers, the minimum number of nodes required to meet the client and device capacity requirements can be calculated and evaluated for each Gateway model. For each model, the maximum clients and devices supported per platform are captured and the 80% values determined. The number of nodes required to meet the client requirement will often differ from the number required to meet the device requirement; a given Gateway model may need 2 nodes for client capacity but only 1 node for device capacity.
This is demonstrated below, where the 80% client and device capacities for each Gateway model are listed in the per-node columns. Each value is multiplied out to determine how many nodes are required to meet the 50,000 client and 5,000 AP requirement. Using the 7220 as an example, a minimum of 3 nodes is required to meet the client capacity requirement (19,660 x 3 = 58,980), while a minimum of 2 nodes is required to meet the device capacity requirement (3,277 x 2 = 6,554).
Other Gateway models require a minimum of 1 or 2 nodes to meet the above client and device capacity requirements. As such the 7220 can be excluded from consideration as 3 nodes are required to meet the capacity needs vs. 2 nodes for other models.
Model | 80% client capacity per node | Min nodes (clients) | Cluster client capacity | 80% device capacity per node | Min nodes (devices) | Cluster device capacity |
---|---|---|---|---|---|---|
7220 | 19,660 | 3 | 58,980 | 3,277 | 2 | 6,554 |
7240XM | 26,214 | 2 | 52,428 | 6,554 | 1 | 6,554 |
7280 | 26,214 | 2 | 52,428 | 6,554 | 1 | 6,554 |
9240 Base | 25,600 | 2 | 51,200 | 3,200 | 2 | 6,400 |
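The node counts in the table above can be reproduced with a short sketch, assuming 80% of the published per-Gateway maximums and rounding up to whole nodes:

```python
import math

def min_nodes(required, per_gateway_max, derate=0.8):
    """Minimum cluster nodes needed for a requirement at 80% of the published maximum."""
    return math.ceil(required / (per_gateway_max * derate))

# 50,000 clients and 5,000 APs evaluated against the 7220 and 7240XM
print(min_nodes(50_000, 24_576), min_nodes(5_000, 4_096))   # 3 2  (7220)
print(min_nodes(50_000, 32_768), min_nodes(5_000, 8_192))   # 2 1  (7240XM)
```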
The next step is to evaluate the number of uplink ports and port types needed to connect the Gateways to their respective core / aggregation layer switches. As a best practice, each Gateway should connect to a redundant switching layer using a minimum of two ports in a LACP configuration. Each Gateway model is available with different Ethernet port configurations supporting different speeds; copper, SFP, SFP+, SFP28, and QSFP+ interface options are detailed in the datasheets.
In the above example, the 7240XM, 7280, and 9240 Base models all support a minimum of four SFP+ ports, and any of them can be selected if 10Gbps uplinks are required. If higher speed uplinks such as 25Gbps or 40Gbps are needed, the 7240XM can be excluded.
In parallel, the forwarding performance of each Gateway model needs to be considered. The maximum amount of traffic that each Gateway model can forward is provided in the published datasheets. Each Gateway model can forward a specific amount of user traffic and the number of nodes in the cluster determines the aggregate throughput of the cluster. For example, a 9240 base Gateway can provide up to 20Gbps of forwarding capacity. A 2-node 9240 base cluster will offer an aggregate forwarding capacity of 40Gbps (2 x 20Gbps).
If more aggregate forwarding capacity is required, a different Gateway model and uplink type might be selected. For example, a 7280 Gateway connected using QSFP+ interfaces can provide up to 80Gbps of forwarding capacity per Gateway; a 2-node 7280 cluster offers an aggregate forwarding capacity of 160Gbps (2 x 80Gbps).
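The aggregate forwarding arithmetic is a simple multiplication of the datasheet forwarding capacity by the node count, sketched below with the figures used in this example:

```python
def aggregate_forwarding_gbps(cluster_nodes, per_gateway_gbps):
    """Aggregate cluster forwarding capacity from the per-Gateway datasheet figure."""
    return cluster_nodes * per_gateway_gbps

print(aggregate_forwarding_gbps(2, 20))   # 40 Gbps  - 2-node 9240 Base cluster
print(aggregate_forwarding_gbps(2, 80))   # 160 Gbps - 2-node 7280 cluster
```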
In the above example, both the 9240 base and 7280 series Gateways meet the base capacity requirements with a 2-node cluster. The ultimate decision as to which Gateway model to use will likely come down to uplink port preference based on the port types that are available on the switching layer and aggregate forwarding capacity requirements. Additional nodes can be added to the base cluster design if more uplink and aggregate forwarding capacity is required.
The above example captured the methodology used to select a Gateway model and determine the minimum cluster size for a wireless LAN only deployment and did not evaluate tunnel capacity. Because a Gateway cannot support more APs than its maximum device capacity, a Gateway’s tunnel capacity cannot be exceeded in a wireless LAN only deployment.
When UBT is deployed, the number of clients and devices will influence your base cluster client and device capacity requirements while the UBT version and total number of UBT ports will influence tunnel capacity requirements. As the total number of UBT switches or stacks and UBT ports are variable, additional validation will be required to ensure that tunnel capacity on a selected Gateway model is not exceeded:
- UBT version 1.0 – Each UBT switch or stack will consume 2 x GRE tunnels to the cluster for broadcast / multicast traffic destined to clients. Additionally, each UBT port will consume 1 x GRE tunnel to each Gateway in the cluster.
- UBT version 2.0 – Each UBT port will consume 1 x GRE tunnel to each Gateway in the cluster.
Expanding on the previous example, let’s assume the base cluster needs to support 50,000 clients, 4,500 APs, 512 UBT switches / stacks, and 12,288 UBT ports, with UBT version 2.0 implemented. The total number of clients and devices remains roughly the same, but we have now introduced additional GRE tunnels to support the UBT ports.
We have already determined that a 2-node cluster using 7240XM, 7280, or 9240 Base Gateways can meet the base client and device capacity needs. The next step is to calculate tunnel consumption. Each AP will establish up to 5 tunnels to each Gateway and each UBT port will establish 1 tunnel to each Gateway. With simple multiplication and addition, we can determine the total number of tunnels required:
- AP Tunnels / Gateway: 5 x 4,500 = 22,500
- UBT Port Tunnels / Gateway: 12,288
For this example, a total of 34,788 tunnels per Gateway is required. We can determine the maximum tunnel capacity for each Gateway model and calculate the 80% tunnel scaling number. The number of required tunnels is then subtracted to determine the remaining number of tunnels for each model.
This is demonstrated in the table below, which shows that the tunnel capacity requirement can be met by the 7240XM and 7280 Gateways but not by the 9240 Base Gateway. The 9240 Base would not be a good choice for this mixed wireless LAN / UBT deployment unless a separate cluster is deployed.
Model | 80% tunnel capacity per Gateway | Required | Remaining |
---|---|---|---|
7240XM | 78,643 | 34,788 | 43,855 |
7280 | 78,643 | 34,788 | 43,855 |
9240 Base | 32,000 | 34,788 | -2,788 |
If UBT version 1.0 were deployed in the above example, two additional GRE tunnels would be consumed per UBT switch or stack per cluster. In this example, 1,024 additional GRE tunnels would be established from the 512 UBT switches to different Gateways within the cluster, based on the SDG/S-SDG assignments. To estimate the additional per-Gateway tunnel consumption for UBT version 1.0, divide the total number of additional tunnels by the number of base cluster nodes. For a 2-node base cluster, 512 additional tunnels would be consumed per Gateway.
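The per-Gateway tunnel check for this mixed WLAN / UBT example can be sketched as follows, assuming the worst-case 5 tunnels per AP, 1 GRE tunnel per UBT port, and the UBT 1.0 broadcast / multicast tunnels averaged across the base cluster nodes as described above:

```python
def tunnels_required_per_gateway(aps, ubt_ports, ubt_switches=0,
                                 ubt_version="2.0", cluster_nodes=2):
    tunnels = aps * 5 + ubt_ports                       # worst-case AP tunnels + 1 per UBT port
    if ubt_version == "1.0":
        tunnels += (2 * ubt_switches) / cluster_nodes   # SDG / S-SDG tunnels, averaged per node
    return tunnels

required = tunnels_required_per_gateway(4_500, 12_288)  # 34,788 with UBT 2.0
for model, max_tunnels in {"7240XM": 98_304, "7280": 98_304, "9240 Base": 40_000}.items():
    print(model, max_tunnels * 0.8 - required)          # positive = fits within 80% tunnel capacity

# With UBT 1.0 on a 2-node base cluster, 512 switches add 512 tunnels per Gateway
print(tunnels_required_per_gateway(4_500, 12_288, ubt_switches=512,
                                   ubt_version="1.0", cluster_nodes=2))   # 35,300.0
```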
Redundant Capacity
Once a base cluster design has been determined, additional nodes can be added to provide redundant capacity. Each additional node added to a base cluster provides additional forwarding capacity, uplink capacity, and redundant client and device capacity to accommodate maintenance and failure events. It’s important to note that the nodes added to your base cluster are not dormant; they support client and device sessions and forward traffic during normal operation.
The number of additional nodes that you add to your base cluster for redundant capacity will be influenced by your tolerance for how many cluster nodes can be lost before client or device capacity is impacted. Your cluster design may include as many redundant nodes as the maximum cluster size for the Gateway series supports.
Minimum redundancy is provided by adding one redundant node to the base cluster. This is referred to as N+1 redundancy, where the cluster can sustain the loss of a single node without impacting clients or devices. An N+1 redundancy model is typically employed for base clusters consisting of a single node but may also be used to provide redundancy for base clusters with multiple nodes. The following is an example of an N+1 redundancy model where one additional node is added to each base cluster:
The maximum number of redundant nodes that you add to your base cluster will typically be less than or equal to the number of nodes in the base cluster. The only limitation is the maximum number of cluster nodes the Gateway series can support.
When the number of redundant nodes equals the number of base cluster nodes, maximum redundancy is provided. This is referred to as 2N redundancy (also known as N+N redundancy) where the cluster can sustain the loss of half its nodes without impacting clients or devices. 2N redundancy is typically employed in mission critical environments where continuous operation is required. The cluster nodes may reside within the same datacenter or be distributed between datacenters when bandwidth and latency permits. The 2N redundancy model is depicted below where three redundant nodes are added to a three-node base cluster design:
Most cluster designs will not include more redundant nodes than the base cluster unless additional forwarding, uplink, or firewall capacity is required. Your cluster design may include a single redundant node for N+1 redundancy, as many redundant nodes as base nodes for 2N redundancy, or something in between.
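As a rough sketch of these redundancy models, where `max_cluster_size` is the platform limit from the scaling tables:

```python
def cluster_size(base_nodes, redundant_nodes, max_cluster_size):
    """Total cluster size; redundant_nodes = 1 for N+1, base_nodes for 2N."""
    total = base_nodes + redundant_nodes
    if total > max_cluster_size:
        raise ValueError("exceeds the maximum cluster size for the Gateway series")
    return total

print(cluster_size(2, 1, 12))   # 3-node cluster (N+1) - tolerates the loss of 1 node
print(cluster_size(3, 3, 12))   # 6-node cluster (2N)  - tolerates the loss of 3 nodes
```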
MultiZone
One main architectural change in AOS 10 is that WLAN and wired-port profiles in an AP configuration group can terminate on different clusters. This capability is referred to as MultiZone and is supported by Campus APs using profiles configured for mixed or tunnel forwarding and Microbranch APs with profiles configured for Centralized Layer 2 (CL2) forwarding.
MultiZone has various applications within an enterprise network. The most common use is segmentation where different classes of traffic are tunneled to different points within the network. For example, trusted traffic from an employee WLAN is tunneled to a cluster located in the datacenter while untrusted traffic from a guest/visitor WLAN is tunneled to a cluster located in a DMZ behind a firewall. Other common uses include departmental access and multi-tenancy.
When planning for capacity for a MultiZone deployment, the following considerations need to be made:
- Each AP will consume a device resource on each cluster it is tunneling client traffic to.
- Each AP will establish IPsec and GRE tunnels to each cluster node for each cluster it is tunneling client traffic to.
- Each tunneled client will consume a client resource on the cluster it is tunneled to.
- Each AP can tunnel to a maximum of twelve Gateways across all clusters.
MultiZone is enabled when WLAN or wired-port profiles configured for mixed or tunnel forwarding are provisioned to terminate on separate clusters within the Central instance. When enabled, APs establish IPsec and GRE tunnels to each cluster node in each cluster. As with a single cluster implementation, the APs establish 3 tunnels to each cluster node during normal operation and 5 tunnels during re-keying.
DDG and S-DDG sessions are allocated in each cluster by each cluster leader that also publishes the bucket map for their respective cluster. Each tunneled client is allocated a UDG and S-UDG session in their respective cluster based on the bucket map for that cluster.
Tunnel consumption for a MultiZone AP deployment is depicted below. In this example an AP is configured with three WLAN profiles, where two profiles terminate on an employee cluster and one terminates on a guest cluster. The AP establishes IPsec and GRE tunnels to each cluster, is assigned DDG sessions in each cluster, and receives a bucket map from each cluster. Clients connected to WLAN A or WLAN B are assigned UDG sessions in the employee cluster while clients connected to WLAN C are assigned UDG sessions in the guest cluster.
Capacity planning for a MultiZone deployment follows the methodology described in previous sections where the base capacity for each cluster is designed to support the maximum number of tunneling devices and tunneled clients that terminate in each cluster. Additional nodes are then added for redundant capacity.
As mixed and tunneled WLAN and wired-port profiles can be distributed between multiple configuration groups in Central, a good understanding of the total number of APs that are assigned to profiles terminating in each cluster is required. Device capacity and tunnel consumption may be equal across clusters if profiles are common between all APs and configuration groups or unequal if different profiles are assigned to APs in each configuration group.
For example, if WLAN A, WLAN B, and WLAN C in this illustration are assigned to 1,000 APs in configuration group A and only WLAN A and WLAN B are assigned to 1,000 APs in configuration group B, 2,000 device resources would be consumed in the employee cluster while 1,000 device resources would be consumed in the guest cluster. Worst-case tunnel consumption would be 10,000 per Gateway in the employee cluster and 5,000 per Gateway in the guest cluster.
An understanding of the maximum number of tunneled clients per cluster across all WLANs is also required and this will typically vary between clusters. For example, the employee cluster may be designed to support a maximum of 10,000 employee devices while the guest cluster may be designed to support a maximum of 2,000 guest or visitor devices. In this case WLAN A and WLAN B would consume 10,000 client resources on the employee cluster while WLAN C would consume 2,000 client resources on the guest cluster.
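A rough sketch of the per-cluster resource consumption described in this MultiZone example, assuming the worst-case 5 tunnels per AP per Gateway:

```python
def multizone_cluster_consumption(aps_tunneling, tunneled_clients):
    """Resources consumed in one cluster of a MultiZone design."""
    return {
        "device_resources": aps_tunneling,          # one per AP tunneling to the cluster
        "client_resources": tunneled_clients,       # one per tunneled client
        "tunnels_per_gateway": aps_tunneling * 5,   # worst case during re-keying
    }

# Employee cluster: 2,000 APs (groups A and B), up to 10,000 employee devices
print(multizone_cluster_consumption(2_000, 10_000))
# Guest cluster: 1,000 APs (group A only), up to 2,000 guest devices
print(multizone_cluster_consumption(1_000, 2_000))
```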