
HPE Aruba Networking Data Center Reference Architectures

HPE Aruba Networking data center reference architectures support high-availability computing racks using redundant top-of-rack (ToR) switches in EVPN-VXLAN overlay and traditional topologies.


EVPN-VXLAN Spine and Leaf

The HPE Aruba Networking EVPN-VXLAN solution is built on a physical spine-and-leaf topology, which optimizes performance and provides a horizontally scalable design that accommodates data center growth. The Layer 3 links between spine and leaf switches enable adding spine capacity without disrupting existing network components. A data center can start with two spine switches, and then add spine switches in the future when additional capacity is required. The figure below shows the reference architecture with two spine switches and dual-ToR switches.

**Spine and Leaf: Dual Top of Rack**

Certain application environments do not require high availability at the individual computing host. In this case, a single ToR switch per rack provides a more cost-effective data center network. In this type of implementation, host positioning and non-switch redundancy mechanisms must be considered, because a ToR switch under maintenance affects connectivity to all computing hosts in the rack. Spine and leaf deployments can include a mix of both single and dual ToR racks.

Two-Tier

The Two-Tier topology physically resembles a spine-and-leaf design with two spines. Fault tolerance is achieved using multi-chassis Layer 2 link aggregation between the core and access layers, in contrast to the Layer 3 links used in a spine-and-leaf solution. The VSX feature enables upgrading and removing individual switches without disrupting other network components. The core size is fixed at two switches, which makes upgrading physical links and aggregation bundles the primary methods of increasing bandwidth capacity between access and core switches.

**Two-Tier Solution**

Reference Architecture Components Selection

The following section provides guidance for hardware selection based on computing host, availability, and bandwidth requirements.

HPE Aruba Networking CX Data Center Switch Overview

The HPE Aruba Networking CX portfolio offers six 1U fixed-configuration data center switch models.

  • The CX 8325 model offers high ToR port density for 10 and 25 Gbps connected hosts.
  • The CX 10000 adds enhanced features along with the same ToR port density.
  • The CX 8100 offers high ToR port density for small and medium data centers with 1 and 10 Gbps connected hosts.
  • The CX 9300 offers the highest throughput capacity and the most flexibility in a 1U form-factor.
  • The CX 9300S offers high throughput ToR capacity for 100 and 200 Gbps connected hosts.
  • The CX 8360 model offers a variety of port configurations for small and medium sized topologies.

The CX 10000 distributed services switch (DSS) provides services beyond traditional switching that should be weighed when selecting a ToR switch. In addition to inline stateful firewall enforcement and enhanced traffic visibility, it includes IPsec encryption services, DDoS protection, and NAT.

All models offer the following data center switching capabilities:

  • High-speed, fully distributed architecture with line-rate forwarding
  • High availability and in-service ToR upgrades with VSX
  • Cloud-native and fully programmable modern operating system built on a microservices architecture
  • Error-free network configuration with software-defined orchestration tools
  • Distributed analytics and guided troubleshooting to provide full visibility and rapid issue resolution
  • Hot-swappable and redundant load-sharing fans and power supplies
  • Front-to-back and back-to-front cooling options for different data center designs
  • Jumbo frame support for 9198 byte frames
  • Advanced Layer 2 and Layer 3 features to support an EVPN-VXLAN overlay
  • Distributed active gateways to support host mobility

The HPE Aruba Networking CX 6300 offers an economical Layer 2 ToR for racks with a high number of 1 Gbps connected hosts.

EVPN-VXLAN Solution Switches

The HPE Aruba Networking reference architecture for an EVPN-VXLAN data center includes switches in two roles: spine and leaf.

Spine Switches

The EVPN-VXLAN architecture is built around spine switches with high-density, high-speed ports. The primary function of spine switches is to provide high-speed routed capacity between tunnel endpoints for VXLAN encapsulated traffic. When choosing a spine switch, primary design considerations are:

  • Port density
  • Port speeds
  • Maximum routes in the BGP RIB

HPE Aruba Networking 1U switches support a range of data center fabric sizes, offering 400 Gbps, 100 Gbps, and 40 Gbps connections to leaf switches.

The CX 9300-32D offers the greatest spine capacity and flexibility in the 1U switch lineup.

  • When using a CX 9300S-32C8D leaf switch, a maximum of eight CX 9300-32D spines can connect up to 32 leaf racks in a single ToR switch topology or 16 leaf racks in a dual ToR switch topology using 400 Gbps links. This configuration targets high-speed compute and AI applications using 100 and 200 Gbps connected hosts.
  • When using the CX 9300-32D as both spine and leaf switches, it supports up to 32 leaf racks in a single ToR switch topology or up to 16 leaf racks in a dual ToR switch topology using 400 Gbps links over single-mode or multimode fiber optic cable. This configuration supports 400/200/100-Gbps leaf connected compute and AI applications.
  • Using the CX 9300-32D as both spine and leaf switches supports extreme horizontal spine scaling. A single ToR topology supports up to 16 spines, and a dual ToR topology supports up to 15 spines, delivering a respective non-oversubscribed fabric capacity of 6.4 Tbps or 6.0 Tbps to each leaf rack (see the arithmetic sketch after this list).
  • The CX 9300-32D spine can double (64 single ToR/32 dual ToR) or quadruple (128 single ToR/64 dual ToR) the number of leaf racks supported over its physical port count when using breakout cabling combined with 100 Gbps connections to CX 8xxx and CX 10000 leaf switches. Single-mode transceivers and fiber are required to support four leaf switches per spine port. Two leaf switches per spine port are supported over multimode fiber or when using AOCs.
  • The CX 9300-32D spine can support a mix of 400 Gbps links to service leaf racks and 100 Gbps links to standard computing racks to alleviate centralized service congestion points. A CX 9300-32D based spine also provides an upgrade path from 100 Gbps to 400 Gbps for up to 32 leaf switches by replacing CX 8xxx leaf switches with CX 9300 or 9300S switches.
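The capacity figures above are straightforward port arithmetic. The sketch below reproduces them in Python; it is illustrative only (the helper functions are invented here, not a vendor tool), with port counts and link speeds taken from the bullets above.

```python
def fabric_capacity_tbps(spines: int, link_gbps: int) -> float:
    """Non-oversubscribed fabric capacity delivered to one leaf rack,
    assuming one uplink from each leaf switch to every spine."""
    return spines * link_gbps / 1000

print(fabric_capacity_tbps(16, 400))  # 16 spines, single ToR: 6.4 Tbps
print(fabric_capacity_tbps(15, 400))  # 15 spines, dual ToR: 6.0 Tbps


def leaf_rack_capacity(spine_ports: int, breakout: int = 1, tors_per_rack: int = 1) -> int:
    """Leaf racks one spine can serve, with optional breakout cabling."""
    return spine_ports * breakout // tors_per_rack

print(leaf_rack_capacity(32))                               # 400G direct: 32 single ToR racks
print(leaf_rack_capacity(32, breakout=2, tors_per_rack=2))  # 2x breakout (multimode/AOC): 32 dual ToR racks
print(leaf_rack_capacity(32, breakout=4, tors_per_rack=2))  # 4x breakout (single-mode): 64 dual ToR racks
```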

The CX 8325 and CX 8360 offer cost-effective, high-speed spine capacity using 40/100 Gbps links.

  • The CX 8325 can support up to 32 leaf racks in a single ToR switch topology or up to 16 computing racks in a dual ToR switch topology.
  • The CX 8360 can support up to 12 leaf racks in a single ToR switch topology or up to six computing racks in a dual ToR switch topology.

The table below summarizes the spine SKUs available and their corresponding leaf rack capacity.

| SKU | Description | Maximum Leaf Rack Capacity |
|--------|-------------|----------------------------|
| R9A29A | 9300-32D: 32-port 400 GbE QSFP-DD, front-to-back airflow | 400G to CX 9300/9300S leaf: 32 single ToR / 16 dual ToR<br>100G to CX 8xxx/10000 leaf (single-mode fiber): 128 single ToR / 64 dual ToR (400G eDR4 to 4 x 100G FR1)<br>100G to CX 8xxx/10000 leaf (multimode fiber or AOC): 64 single ToR / 32 dual ToR (400G SR8 to 2 x 100G SR4 or AOC breakout cable) |
| R9A30A | 9300-32D: 32-port 400 GbE QSFP-DD, back-to-front airflow | 400G to CX 9300/9300S leaf: 32 single ToR / 16 dual ToR<br>100G to CX 8xxx/10000 leaf (single-mode fiber): 128 single ToR / 64 dual ToR (400G eDR4 to 4 x 100G FR1)<br>100G to CX 8xxx/10000 leaf (multimode fiber or AOC): 64 single ToR / 32 dual ToR (400G SR8 to 2 x 100G SR4 or AOC breakout cable) |
| JL626A | 8325-32C: 32-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | 32 single ToR / 16 dual ToR |
| JL627A | 8325-32C: 32-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | 32 single ToR / 16 dual ToR |
| JL708C | 8360-12C v2: 12-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | 12 single ToR / 6 dual ToR |
| JL709C | 8360-12C v2: 12-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | 12 single ToR / 6 dual ToR |

The table below lists the optics that support CX 9300 spine connectivity over structured cabling:

| SKU | Description | Comments |
|--------|-------------|----------|
| R9B41A | 400G QSFP-DD MPO-16 SR8 100m MMF Transceiver | Supports 400G connections between CX 9300 switches over multimode optical fiber. Supports 2 x 100G connections in breakout mode to CX 8xxx/10000 switches using 100G QSFP28 MPO SR4 transceivers (JL309A). |
| R9B42A | 400G QSFP-DD MPO-12 eDR4 2km SMF Transceiver | Supports 400G connections between CX 9300 switches over single-mode optical fiber. Supports 4 x 100G connections in breakout mode to CX 8xxx/10000 switches using 100G QSFP28 LC FR1 transceivers (R9B63A). |
| JL309A | 100G QSFP28 MPO SR4 MMF Transceiver | When installed in CX 8xxx/10000, supports a 100G connection to CX 9300 400G SR8 (R9B41A) in breakout mode. |
| R9B63A | 100G QSFP28 LC FR1 SMF 2km Transceiver | When installed in CX 8xxx/10000, supports a 100G connection to CX 9300 400G eDR4 (R9B42A) in breakout mode. |

The table below lists the available AOC breakout cables for connecting CX 9300 spines to CX 8xxx/10000 leaf switches:

| SKU | Description |
|--------|-------------|
| R9B60A | 3m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B58A | 7m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B62A | 15m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B61A | 30m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B59A | 50m 200G QSFP-DD to 2x QSFP28 100G AOC |

Leaf Switches

The HPE Aruba Networking data center reference architecture primarily uses six models as 1U data center ToR switches.

  • The CX 8325 series and CX 10000 switches support high-density host racks using 1 GbE / 10 GbE / 25 GbE ports.
  • The CX 9300-32D in a leaf role is intended to connect 100 GbE, 200 GbE, and 400 GbE high-throughput hosts to a CX 9300-32D spine using 400 Gbps links.
  • The CX 9300S supports 100 GbE and 200 GbE high-throughput hosts to a CX 9300-32D spine. It also can be optimized for 25 GbE connected hosts. Additionally, the 9300S provides secure border leaf options using high-speed MACsec interfaces.
  • The CX 8100 offers high ToR port density for small and medium data centers with 1 GbE and 10 GbE host ports.
  • The CX 8360 series offers a variety of models that support 1GbE / 10 GbE RJ45 ports, and flexible variations of 1 GbE, 10 GbE, 25 GbE, and 50 GbE modular transceiver ports.

The CX 10000 distributed services switch (DSS) adds inline firewall features typically provided by dedicated firewall appliances attached to a services leaf or by VM hypervisors attached to leaf switches. The CX 10000 also offers IPsec encryption between data centers, NAT, DDoS protection, and enhanced telemetry services. The CX 10000 switch should be selected when these features are required by downstream hosts or to meet other data center goals. DSS features are not available on other CX switch models. A mix of DSS and non-DSS ToR leaf switch models can connect to a common spine.

Redundant ToR designs require at least four uplink ports for a two-spine switch topology. A minimum of two ports connect to spine switches and two additional ports are members of a high-speed VSX ISL. The CX 9300S is an exception that can connect all eight 400 Gbps uplink ports to spine switches, when using 200 Gbps ports for the VSX ISL. A non-redundant ToR design requires at least two high-speed uplink ports for a two-spine topology.
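The uplink-port requirement reduces to a simple check. The following is an illustrative Python sketch (not a vendor tool) under the assumptions stated above: one link per spine, plus a two-port VSX ISL when the rack uses redundant ToR switches.

```python
def min_uplink_ports(spines: int, redundant_tor: bool, isl_ports: int = 2) -> int:
    """Minimum high-speed ports a leaf switch must reserve for fabric connectivity."""
    return spines + (isl_ports if redundant_tor else 0)

print(min_uplink_ports(2, redundant_tor=True))   # 4: two spine links plus two VSX ISL members
print(min_uplink_ports(2, redundant_tor=False))  # 2: spine links only
```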

The table below summarizes the leaf SKUs available and their corresponding supported designs.

| SKU | Description | Rack Design | Spine Design |
|--------|-------------|-------------|--------------|
| R8P13A | 10000-48Y6C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 6-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR | 2–4 switches |
| R8P14A | 10000-48Y6C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 6-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR | 2–4 switches |
| JL624A | 8325-48Y8C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 8-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR | 2–6 switches |
| JL625A | 8325-48Y8C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 8-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR | 2–6 switches |
| S0F82A | 9300S-32C8D: 32-port QSFP28 100G, 8-port QSFP-DD 400G, front-to-back airflow | High-density / Dual or Single ToR | 400G 9300-32D spine: 2–8 switches |
| S0F84A | 9300S-32C8D: 32-port QSFP28 100G, 8-port QSFP-DD 400G, back-to-front airflow | High-density / Dual or Single ToR | 400G 9300-32D spine: 2–8 switches |
| R9A29A | 9300-32D: 32-port 100/200/400 GbE QSFP-DD, 2-port 10G SFP+, front-to-back airflow | High-density / Dual ToR | 9300-32D spine: 2–15 switches |
| | | High-density / Single ToR | 9300-32D spine: 2–16 switches |
| R9A30A | 9300-32D: 32-port 100/200/400 GbE QSFP-DD, 2-port 10G SFP+, back-to-front airflow | High-density / Dual ToR | 9300-32D spine: 2–15 switches |
| | | High-density / Single ToR | 9300-32D spine: 2–16 switches |
| JL704C | 8360-48Y6C v2: 48-port with up to 22 ports of 50GbE, 44-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR | 2 switches |
| JL705C | 8360-48Y6C v2: 48-port with up to 22 ports of 50GbE, 44-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR | 2 switches |
| JL706C | 8360-48XT4C: 48-port 100M / 1GbE / 10GbE BASE-T, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR | 2 switches |
| JL707C | 8360-48XT4C: 48-port 100M / 1GbE / 10GbE BASE-T, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR | 2 switches |
| R9W90A | 8100-48XF4C: 48-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR | 2 switches |
| R9W91A | 8100-48XF4C: 48-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR | 2 switches |
| R9W92A | 8100-40XT8XF4C: 40-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 8-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR | 2 switches |
| R9W93A | 8100-40XT8XF4C: 40-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 8-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR | 2 switches |
| R9W86A | 8100-24XF4C: 24-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Medium-density / Dual ToR | 2 switches |
| R9W87A | 8100-24XF4C: 24-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Medium-density / Dual ToR | 2 switches |
| R9W88A | 8100-24XT4XF4C: 24-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 4-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Medium-density / Dual ToR | 2 switches |
| R9W89A | 8100-24XT4XF4C: 24-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 4-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Medium-density / Dual ToR | 2 switches |
| JL700C | 8360-32Y4C v2: 32-port with up to 12 ports of 50GbE, 28-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Medium-density / Dual ToR | 2 switches |
| JL701C | 8360-32Y4C v2: 32-port with up to 12 ports of 50GbE, 28-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Medium-density / Dual ToR | 2 switches |
| JL710C | 8360-24XF2C v2: 24-port 1/10 GbE SFP/SFP+, 2-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Medium-density / Single ToR | 2 switches |
| JL711C | 8360-24XF2C v2: 24-port 1/10 GbE SFP/SFP+, 2-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Medium-density / Single ToR | 2 switches |
| JL702C | 8360-16Y2C v2: 16-port 1/10/25 GbE SFP/SFP+/SFP28, 2-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Low-density / Single ToR | 2 switches |
| JL703C | 8360-16Y2C v2: 16-port 1/10/25 GbE SFP/SFP+/SFP28, 2-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Low-density / Single ToR | 2 switches |

Note: Three CX 9300S-32C8D bundles are Trade Agreements Act (TAA) compliant, with the same capabilities listed in the table above:

  • S0F81A (front-to-back airflow)
  • S0F83A (back-to-front airflow)
  • S0F87A (back-to-front airflow and DC power supplies)

Server Access Switches

CX 6300 and CX 8100 switches can be used to extend VLANs from a leaf switch to adjacent racks. This strategy provides an economical solution for connecting a rack with a high number of low-speed connected hosts. CX 6300 server access switches are typically connected to CX 8325 or CX 10000 leaf switches. CX 6300 models support both built-in and modular power supplies.

| SKU | Description | Power Supplies |
|--------|-------------|----------------|
| JL663A | 6300M: 48-port 10/100/1000Base-T, 4-port 1/10/25/50 GbE SFP/SFP+/SFP28/SFP56, port/side-to-power airflow | Modular/Redundant |
| JL762A | 6300M: 48-port 10/100/1000Base-T, 4-port 1/10/25/50 GbE SFP/SFP+/SFP28/SFP56 Bundle, back-to-front/side airflow | Modular/Redundant |
| JL664A | 6300M: 24-port 10/100/1000Base-T, 4-port 1/10/25/50 GbE SFP56, port/side-to-power airflow | Modular/Redundant |
| JL658A | 6300M: 24-port 1/10 GbE SFP/SFP+, 4-port 1/10/25 GbE SFP/SFP+/SFP28, port/side-to-power airflow | Modular/Redundant |
| JL667A | 6300F: 48-port 10/100/1000Base-T, 4-port 1/10/25/50 GbE SFP/SFP+/SFP28/SFP56, port/side-to-power airflow | Built-in/Non-Redundant |
| JL668A | 6300F: 24-port 10/100/1000Base-T, 4-port 1/10/25/50 GbE SFP/SFP+/SFP28/SFP56, port/side-to-power airflow | Built-in/Non-Redundant |
| R9W90A | 8100-48XF4C: 48-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Modular/Redundant |
| R9W91A | 8100-48XF4C: 48-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Modular/Redundant |
| R9W92A | 8100-40XT8XF4C: 40-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 8-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Modular/Redundant |
| R9W93A | 8100-40XT8XF4C: 40-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 8-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Modular/Redundant |
| R9W86A | 8100-24XF4C: 24-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Modular/Redundant |
| R9W87A | 8100-24XF4C: 24-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Modular/Redundant |
| R9W88A | 8100-24XT4XF4C: 24-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 4-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Modular/Redundant |
| R9W89A | 8100-24XT4XF4C: 24-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 4-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Modular/Redundant |

EVPN-VXLAN Architecture Capacity Planning

The following section provides capacity planning guidance for the HPE Aruba Networking data center spine-and-leaf reference architecture.

Bandwidth Calculations

A spine-and-leaf network design provides maximum flexibility and throughput in a data center implementation. To achieve the greatest level of performance, a spine-and-leaf topology can be designed for zero oversubscription of bandwidth, so that the fabric itself never becomes the bottleneck: the bandwidth available to hosts equals the bandwidth between leaf and spine switches.

A significant advantage of a spine-and-leaf design is the ability to add capacity as needed, simply by adding spine switches or increasing the speed of the uplinks between leaf and spine switches. A rack with 40 dual-homed servers with 10 GbE NICs could theoretically generate a total load of 800 Gbps. For that server density, a 1:1 (non-oversubscribed) fabric could be built with four spine switches using 4 x 100 GbE links on each. In practice, most spine-and-leaf topologies are built with server-to-fabric oversubscription ratios between 2:1 and 6:1.
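The worked example can be verified with a few lines of Python. This is only a sketch of the arithmetic above (the function is illustrative), using the 40-server, dual-homed 10 GbE rack and the four-spine fabric from the paragraph.

```python
def oversubscription_ratio(servers: int, nics_per_server: int, nic_gbps: int,
                           uplinks: int, uplink_gbps: int) -> float:
    """Server-to-fabric oversubscription for one rack:
    host-facing bandwidth divided by fabric uplink bandwidth."""
    return (servers * nics_per_server * nic_gbps) / (uplinks * uplink_gbps)

# 40 dual-homed 10 GbE servers can offer 800 Gbps. Four spines with one
# 100 GbE link to each of the rack's two ToR switches give 8 x 100 GbE.
print(oversubscription_ratio(40, 2, 10, uplinks=8, uplink_gbps=100))  # 1.0 -> non-oversubscribed
print(oversubscription_ratio(40, 2, 10, uplinks=4, uplink_gbps=100))  # 2.0 -> a typical 2:1 design
```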

Network and Compute Scaling

The HPE Aruba Networking data center reference architecture provides capacity for most deployments. Distributed gateways and symmetric IRB forwarding optimize fabric capacity consumption. Total fabric capacity can be increased incrementally by adding spines to accommodate growing host compute requirements over time. The CX 10000 DSS switch enables policy enforcement without changing spine-and-leaf traffic optimizations.

The border leaf is typically the node with the highest control plane load since it handles both internal and external connections. Route summarization is a good practice to reduce the redistribution of IP prefixes among domains. Both CX 10000 and 9300S switches support secure border leaf capabilities to external networks and between fabrics.

The HPE Aruba Networking data center reference architecture was tested thoroughly in an end-to-end solution environment that incorporates best-practice deployment recommendations, applications, and load profiles that represent production environments.

Refer to the product data sheets on HPE Aruba Networking Campus Core and Aggregation Switches for detailed specifications not included in this guide.

Two-Tier Solution Switches

The HPE Aruba Networking reference architecture for a Two-Tier data center includes switches in two roles: core and access.

Core Switches

The Two-Tier architecture is built around a pair of core switches with high-density, high-speed ports. The core switches provide fast Layer 2 switching between data center computing racks and all Layer 3 functions for the data center, including IP gateway services, routing between subnets, routed connectivity outside of the data center, and multicast services. The primary design considerations when choosing a core switch are:

  • Port density
  • Port speeds
  • MAC address table size
  • ARP table size
  • IPv4/IPv6 route table size

HPE Aruba Networking 1U switch models support a full range of small to large data center core options.

The CX 9300-32D offers the most capacity and flexibility in the core role of the 1U switch lineup.

  • When using the CX 9300-32D in both core and access roles, it supports up to 28 computing racks in a single ToR switch topology or up to 14 computing racks in a dual ToR switch topology using 400 Gbps links over single-mode or multimode fiber optic cable.
  • A CX 9300-32D core can double (56 single ToR/28 dual ToR) or quadruple (112 single ToR/56 dual ToR) the number of supported access racks when using breakout cabling combined with 100 Gbps connections to CX 8xxx and CX 10000 access switches. Single-mode transceivers and fiber are required to support four access switches per core port. Two access switches per core port are supported over multimode fiber or when using AOCs.

CX 8325 and CX 8360 offer cost-effective, high-speed core capacity using 40/100 Gbps links.

  • The CX 8325 can support up to 28 access racks in a single ToR switch topology or up to 14 access racks in a dual ToR switch topology.
  • The CX 8360 can support up to eight access racks in a single ToR switch topology or up to four access racks in a dual ToR switch topology.

The table below summarizes the core switch SKUs available and their corresponding access rack capacity, assuming two core ports are consumed per core switch for redundant external connectivity in addition to the two VSX ISL ports.
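The capacity values in the table follow from subtracting those reserved ports; a minimal illustrative sketch (the helper is invented here, not a vendor tool) under the assumptions just stated:

```python
def access_rack_capacity(core_ports: int, external_ports: int = 2,
                         isl_ports: int = 2, tors_per_rack: int = 1) -> int:
    """Access racks one core switch can serve after reserving ports
    for external connectivity and the VSX ISL."""
    return (core_ports - external_ports - isl_ports) // tors_per_rack

print(access_rack_capacity(32))                   # 8325-32C / 9300-32D: 28 single ToR racks
print(access_rack_capacity(32, tors_per_rack=2))  # 14 dual ToR racks
print(access_rack_capacity(12))                   # 8360-12C: 8 single ToR racks
print(access_rack_capacity(12, tors_per_rack=2))  # 4 dual ToR racks
```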

| SKU | Description | Maximum Access Rack Capacity |
|--------|-------------|------------------------------|
| JL626A | 8325-32C: 32-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | 28 single ToR / 14 dual ToR |
| JL627A | 8325-32C: 32-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | 28 single ToR / 14 dual ToR |
| JL708C | 8360-12C v2: 12-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | 8 single ToR / 4 dual ToR |
| JL709C | 8360-12C v2: 12-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | 8 single ToR / 4 dual ToR |
| R9A29A | 9300-32D: 32-port 400 GbE QSFP-DD, front-to-back airflow | 400G to CX 9300/9300S access: 28 single ToR / 14 dual ToR<br>100G to CX 8xxx/10000 access (single-mode fiber): 112 single ToR / 56 dual ToR (400G eDR4 to 4 x 100G FR1)<br>100G to CX 8xxx/10000 access (multimode fiber or AOC): 56 single ToR / 28 dual ToR (400G SR8 to 2 x 100G SR4 or AOC breakout cable) |
| R9A30A | 9300-32D: 32-port 400 GbE QSFP-DD, back-to-front airflow | 400G to CX 9300/9300S access: 28 single ToR / 14 dual ToR<br>100G to CX 8xxx/10000 access (single-mode fiber): 112 single ToR / 56 dual ToR (400G eDR4 to 4 x 100G FR1)<br>100G to CX 8xxx/10000 access (multimode fiber or AOC): 56 single ToR / 28 dual ToR (400G SR8 to 2 x 100G SR4 or AOC breakout cable) |

The table below lists the optics that support CX 9300 core connectivity over structured cabling:

| SKU | Description | Comments |
|--------|-------------|----------|
| R9B41A | 400G QSFP-DD MPO-16 SR8 100m MMF Transceiver | Supports 400G connections between CX 9300/9300S series switches over multimode optical fiber. Supports 2 x 100G connections in breakout mode to CX 8xxx/10000 switches using 100G QSFP28 MPO SR4 transceivers (JL309A). |
| R9B42A | 400G QSFP-DD MPO-12 eDR4 2km SMF Transceiver | Supports 400G connections between CX 9300/9300S series switches over single-mode optical fiber. Supports 4 x 100G connections in breakout mode to CX 8xxx/10000 switches using 100G QSFP28 LC FR1 transceivers (R9B63A). |
| JL309A | 100G QSFP28 MPO SR4 MMF Transceiver | When installed in CX 8xxx/10000, supports a 100G connection to CX 9300 400G SR8 (R9B41A) in breakout mode. |
| R9B63A | 100G QSFP28 LC FR1 SMF 2km Transceiver | When installed in CX 8xxx/10000, supports a 100G connection to CX 9300 400G eDR4 (R9B42A) in breakout mode. |

The table below lists the available AOC breakout cables for connecting a CX 9300-32D core to CX 8xxx/10000 access switches:

| SKU | Description |
|--------|-------------|
| R9B60A | 3m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B58A | 7m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B62A | 15m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B61A | 30m 200G QSFP-DD to 2x QSFP28 100G AOC |
| R9B59A | 50m 200G QSFP-DD to 2x QSFP28 100G AOC |

Access Switches

The HPE Aruba Networking data center reference architecture includes six access switch models. All models are 1U ToR switches.

  • The CX 8325 series and CX 10000 switches support high-density racks using 1 GbE / 10 GbE / 25 GbE host ports.
  • The CX 8360 series offers a variety of models supporting 1GbE / 10 GbE RJ45 ports, and flexible variations of 1 GbE, 10 GbE, 25 GbE, and 50 GbE modular transceiver ports.
  • The CX 8100 series offers a cost effective model for 1 GbE / 10 GbE connected hosts.
  • The CX 9300-32D in an access role is intended to connect 100 GbE and 200 GbE high-throughput hosts to a CX 9300-32D core layer using 400 Gbps links.
  • The CX 9300S supports 100 GbE and 200 GbE high-throughput hosts to a CX 9300-32D core, but it also can be optimized for 25 GbE connected hosts.

The CX 10000 distributed services switch (DSS) adds inline firewall features typically provided by dedicated firewall appliances attached to the core or VM hypervisors attached to access switches. The CX 10000 switch should be selected when these features are required by downstream hosts, or to meet other data center goals. DSS features are not available on other CX switch models. A mix of DSS and non-DSS switches connected to a common core is supported.

The table below summarizes the access switch SKUs available.

| SKU | Description | Rack Design |
|--------|-------------|-------------|
| R8P13A | 10000-48Y6C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 6-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| R8P14A | 10000-48Y6C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 6-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| JL624A | 8325-48Y8C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 8-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| JL625A | 8325-48Y8C: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 8-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| R9A29A | 9300-32D: 32-port 100/200/400 GbE QSFP-DD, 2-port 10G SFP+, front-to-back airflow | High-density / Dual ToR |
| R9A30A | 9300-32D: 32-port 100/200/400 GbE QSFP-DD, 2-port 10G SFP+, back-to-front airflow | High-density / Dual ToR |
| S0F82A | 9300S-32C8D: 32-port QSFP28 100G, 8-port QSFP-DD 400G, front-to-back airflow | High-density / Dual ToR |
| S0F84A | 9300S-32C8D: 32-port QSFP28 100G, 8-port QSFP-DD 400G, back-to-front airflow | High-density / Dual ToR |
| R9W90A | 8100-48XF4C: 48-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| R9W91A | 8100-48XF4C: 48-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| R9W92A | 8100-40XT8XF4C: 40-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 8-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| R9W93A | 8100-40XT8XF4C: 40-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 8-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| R9W86A | 8100-24XF4C: 24-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| R9W87A | 8100-24XF4C: 24-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| R9W88A | 8100-24XT4XF4C: 24-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 4-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| R9W89A | 8100-24XT4XF4C: 24-port 100M / 1GbE / 2.5GbE / 5GbE / 10GbE BASE-T, 4-port 1/10 GbE SFP/SFP+, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| JL704C | 8360-48Y6C v2: 48-port with up to 22 ports of 50GbE, 44-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| JL705C | 8360-48Y6C v2: 48-port with up to 22 ports of 50GbE, 44-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| JL706C | 8360-48XT4C v2: 48-port 100M / 1GbE / 10GbE BASE-T, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | High-density / Dual ToR |
| JL707C | 8360-48XT4C v2: 48-port 100M / 1GbE / 10GbE BASE-T, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | High-density / Dual ToR |
| JL700C | 8360-32Y4C v2: 32-port with up to 12 ports of 50GbE, 28-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Medium-density / Dual ToR |
| JL701C | 8360-32Y4C v2: 32-port with up to 12 ports of 50GbE, 28-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 10/25 GbE SFP+/SFP28 with MACsec, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Medium-density / Dual ToR |
| JL710C | 8360-24XF2C v2: 24-port 1/10 GbE SFP/SFP+, 2-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Medium-density / Single ToR |
| JL711C | 8360-24XF2C v2: 24-port 1/10 GbE SFP/SFP+, 2-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Medium-density / Single ToR |
| JL702C | 8360-16Y2C v2: 16-port 1/10/25 GbE SFP/SFP+/SFP28, 2-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | Low-density / Single ToR |
| JL703C | 8360-16Y2C v2: 16-port 1/10/25 GbE SFP/SFP+/SFP28, 2-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | Low-density / Single ToR |

Note: Three CX 9300S-32C8D bundles are Trade Agreements Act (TAA) compliant, with the same capabilities listed in the table above:

  • S0F81A (front-to-back airflow)
  • S0F83A (back-to-front airflow)
  • S0F87A (back-to-front airflow and DC power supplies)

Out-of-Band Management Switches

The HPE Aruba Networking data center reference architecture uses a management LAN built on dedicated switching infrastructure to ensure reliable connectivity to data center infrastructure for automation, orchestration, and traditional management access. The table below lists the recommended switch models.

| SKU | Description | Host Ports |
|--------|-------------|------------|
| JL667A | CX 6300F 48-port 1 GbE and 4-port SFP56 Switch | 48 |
| JL668A | CX 6300F 24-port 1 GbE and 4-port SFP56 Switch | 24 |
| JL663A | CX 6300M 48-port 1 GbE and 4-port SFP56 Switch | 48 |
| JL664A | CX 6300M 24-port 1 GbE and 4-port SFP56 Switch | 24 |
| JL724A | 6200F 24G 4SFP+ Switch | 24 |
| JL726A | 6200F 48G 4SFP+ Switch | 48 |
| JL678A | 6100 24G 4SFP+ Switch | 24 |
| JL676A | 6100 48G 4SFP+ Switch | 48 |

Aruba Fabric Composer

HPE Aruba Networking’s Aruba Fabric Composer (AFC) is offered as a self-contained ISO or virtual machine OVA and can be installed in both virtual and physical host environments as a single instance or as a high-availability, three-node cluster. AFC can manage EVPN-VXLAN spine-and-leaf fabrics and Two-Tier topologies. AFC is available as an annual per-switch software subscription.

| SKU | Description | Supported Switches |
|----------|-------------|--------------------|
| R7G99AAE | Aruba Fabric Composer Device Management Service Tier 4 Switch 1 year Subscription E-STU | 9300, 10000, 8360, 8325, 6400, 8400 |
| R7H00AAE | Aruba Fabric Composer Device Management Service Tier 4 Switch 3 year Subscription E-STU | 9300, 10000, 8360, 8325, 6400, 8400 |
| R7H01AAE | Aruba Fabric Composer Device Management Service Tier 4 Switch 5 year Subscription E-STU | 9300, 10000, 8360, 8325, 6400, 8400 |
| R8D18AAE | Aruba Fabric Composer Device Management Service Tier 3 Switch 1 year Subscription E-STU | 6300 |
| R8D19AAE | Aruba Fabric Composer Device Management Service Tier 3 Switch 3 year Subscription E-STU | 6300 |
| R8D20AAE | Aruba Fabric Composer Device Management Service Tier 3 Switch 5 year Subscription E-STU | 6300 |

The AFC solutions overview provides additional information.

Pensando Policy and Services Manager

The Pensando Policy and Services Manager (PSM) runs as a virtual machine OVA on a host. PSM requires vCenter for installation. It is deployed as a high-availability, quorum-based cluster of three VMs.

PSM supports CX 10000 series switches. Management of PSM is integrated into AFC.

PSM can be downloaded from the HPE Networking Support Portal. Entitlement to PSM is included by adding the following required SKU when purchasing a CX 10000 switch.

| SKU | Description |
|----------|-------------|
| R9H25AAE | CX 10000 Base Services License |

NetEdit

HPE Aruba Networking’s NetEdit software runs as a VM OVA on a host. NetEdit is available from the HPE Networking Support Portal.

Ordering information for NetEdit is provided at the end of this data sheet.

Reference Architecture Physical Layer Planning

The following section provides guidance for planning the physical layer of data center switches.

Cables and Transceivers

Refer to the following documents to ensure that supported cables and transceivers are selected when planning physical connectivity inside the data center:

HPE Server Networking Transceiver and Cable Compatibility Matrix

HPE Aruba Networking ArubaOS-Switch and ArubaOS-CX Transceiver Guide

Interface Groups

For ToR configurations that require server connectivity at multiple speeds, it is important to note that setting the speed of a port might require adjacent ports to operate at that same speed.

CX 8325 and CX 10000 host-facing ports have a default speed of 25 GbE. Changing the speed to 10 GbE affects groups of 12 ports on the CX 8325 and groups of four ports on the CX 10000. Some CX 8360 switches use interface groups, while others support individual port speed settings without affecting adjacent ports. CX 9300-32D switches allow individual ports to operate at different speeds. The CX 9300S 400 Gbps ports support individual speed settings, while the remaining 100G and 200G ports can be assigned two speed modes in interface groups of four.
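Because a speed change applies to an entire interface group, it helps to know which neighboring ports a change drags along. The sketch below is illustrative Python only (not a vendor tool); the group sizes come from the paragraph above.

```python
def interface_group(port: int, group_size: int) -> list[int]:
    """Return the 1-based port numbers that share a speed setting with `port`."""
    start = ((port - 1) // group_size) * group_size + 1
    return list(range(start, start + group_size))

# Changing port 7 to 10 GbE also affects:
print(interface_group(7, group_size=12))  # CX 8325: ports 1-12
print(interface_group(7, group_size=4))   # CX 10000: ports 5-8
```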

The following diagram illustrates 9300S port groups:

**CX 9300S Interface Groups**

Split Ports

Split ports enable an individual high-speed interface to establish multiple lower speed links using active optical breakout cables or optical transceivers.

The CX 9300-32D can split an individual 400 Gbps port into 4 x 100 Gbps, 2 x 100 Gbps, or 2 x 200 Gbps links.

The CX 9300S supports two split interface profile modes that optimize split port capabilities for 100 Gbps or 25 Gbps operational requirements. The default profile (profile 1) optimizes 100 Gbps operation. In this mode, the eight 400 Gbps ports can be split into 4 x 100 Gbps, 2 x 100 Gbps, or 2 x 200 Gbps links, and eight 200 Gbps ports can be split into 2 x 100 Gbps links.

The following diagram illustrates split port operation on the CX 9300S using split interface profile 1 with interface-groups 3 and 6 set to 200 Gbps operation:

**CX 9300S Split Interface Profile 1**

Note: Currently shipping HPE Aruba Networking 200G to 2 x 100G AOC split cables support only QSFP-DD interfaces. They are supported in the CX 9300S 400G interfaces, but not in the 200G QSFP28/56 interfaces. Future cabling options will support 200G to 2 x 100G split operation on CX 9300S 200G ports.

When a CX 9300S 200G port group is set to 40 Gbps operation in split interface profile 1, those ports support 2 x 10 Gbps split operation. Split interface profile 2 is recommended when optimizing the 9300S for 25 Gbps or 10 Gbps operation.

The CX 9300S split interface profile 2 optimizes 25 Gbps operation: six 200/100/40 Gbps ports can be split into 4 x 25 Gbps links. The number of 400 Gbps ports supporting split operation is reduced to four when using split interface profile 2.

The following diagram illustrates split port operation on the CX 9300S using split interface profile 2 with interface-groups 4 and 5 set to 200 Gbps operation:

**CX 9300S Split Interface Profile 2**

Note: When the CX 9300S 200G ports in interface-group 4 or 5 are set to 40 Gbps operation (depicted in green in the diagram above), ports within that group only support 4 x 10 Gbps or 2 x 10 Gbps split operation.

The CX 9300S requires a reboot to switch between split interface port profiles.
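As a compact summary of the two profiles described above, the following condensed, illustrative Python captures the trade-off (verify exact per-port split options against the transceiver guide before planning cabling):

```python
# CX 9300S split capabilities per profile, condensed from this section.
SPLIT_PROFILES = {
    1: {  # default; optimizes 100 Gbps operation
        "400G QSFP-DD ports (8)": ["4 x 100G", "2 x 100G", "2 x 200G"],
        "200G ports (8)": ["2 x 100G"],
    },
    2: {  # optimizes 25/10 Gbps operation
        "400G QSFP-DD ports (4 splittable)": ["split supported"],
        "200/100/40G ports (6)": ["4 x 25G"],
    },
}

for profile, groups in SPLIT_PROFILES.items():
    print(f"Split interface profile {profile} (reboot required to change profiles):")
    for ports, options in groups.items():
        print(f"  {ports}: {', '.join(options)}")
```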

The QSA28 network adapter (845970-B21) supports 25 Gbps and 10 Gbps optics in QSFP28 ports and 10 Gbps optics in QSFP+ ports. The QSA28 can be used with the CX 9300S to enable lower port speed operation on ports that do not support split operation or have split operation disabled due to the port profile selection.

Most other platforms can split a 40/100 Gbps port into four lower-speed connections (4 x 10 Gbps or 4 x 25 Gbps).

Refer to the HPE Aruba Networking ArubaOS-Switch and ArubaOS-CX Transceiver Guide when selecting supported breakout cables, adapters, and transceivers.

Media Access Control Security (MACsec)

MACsec is a standard defined in IEEE 802.1AE that extends standard Ethernet to provide frame-level encryption on point-to-point links. This feature is typically used in environments where additional layers of data confidentiality are required or where it is impossible to physically secure the network links between systems.

MACsec can be used to encrypt communication between switches within a data center, between two physically separate data center locations over a data center interconnect (DCI), or between switches and attached hosts.

The table below details MACsec support in the HPE Aruba Networking switch portfolio:

| SKU | Description | Number of MACsec Ports |
|--------|-------------|------------------------|
| S0F82A | 9300S-32C8D: 32-port QSFP28 100G, 8-port QSFP-DD 400G, front-to-back airflow | 16 QSFP+/QSFP28<br>A future firmware upgrade will add 8 x QSFP-DD (400 GbE) ports and 8 x QSFP28/56 ports |
| S0F84A | 9300S-32C8D: 32-port QSFP28 100G, 8-port QSFP-DD 400G, back-to-front airflow | 16 QSFP+/QSFP28<br>A future firmware upgrade will add 8 x QSFP-DD (400 GbE) ports and 8 x QSFP28/56 ports |
| JL704C | 8360-48Y6C v2: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 6-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | 4 SFP+/SFP28, 2 QSFP+/QSFP28 |
| JL705C | 8360-48Y6C v2: 48-port 1/10/25 GbE SFP/SFP+/SFP28, 6-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | 4 SFP+/SFP28, 2 QSFP+/QSFP28 |
| JL700C | 8360-32Y4C v2: 32-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 40/100 GbE QSFP+/QSFP28, front-to-back airflow | 4 SFP+/SFP28 |
| JL701C | 8360-32Y4C v2: 32-port 1/10/25 GbE SFP/SFP+/SFP28, 4-port 40/100 GbE QSFP+/QSFP28, back-to-front airflow | 4 SFP+/SFP28 |

Scale Validation

HPE Aruba Networking’s test lab performs multidimensional scale validation of data center architectures. A comprehensive, solution-level test case for each architecture is implemented using recommended best practices.

The validated scale values below represent specific test cases and are not intended to indicate the maximum achievable scale for a specific architecture. The test case is intended to provide a sample reference of achievable scale across multiple switch resources, in contrast to unidimensional data sheet values that specify maximum values for a feature in isolation. Each customer environment is unique and may require optimizing resources in a different manner.

Topology architectures are connected to a high performance testing platform that generates large-scale client traffic.

Spine and Leaf with EVPN-VXLAN Overlay

The spine-and-leaf/EVPN-VXLAN data center was validated using CX 8325-32C spine switches and CX 10000-48Y6C leaf switches.

The following diagram illustrates the HPE Aruba Networking test lab’s topology (simulated racks not depicted).

**HPE Aruba Networking NTL S&L Test Topology**

The underlay uses IPv4 routed-only ports between spine and leaf switches and a single OSPF area to share loopback and VTEP reachability. The testing environment consists of three physical racks with redundant leaf switches and 13 simulated racks to support a total of 16 overlay VTEPs. The testing platform simulates non-redundant leaf switches, resulting in a lower number of underlay OSPF adjacencies than when using a purely physical setup, which does not affect EVPN-VXLAN overlay scale testing parameters.

Layer 2 and Layer 3 overlay scalability were tested. Sixty-four VRFs were defined, each with five VLANs [three standard VLANs, an isolated private VLAN (PVLAN), and a primary PVLAN]. Dual-stacked VLAN SVIs were defined on standard VLANs and primary PVLANs. HPE Aruba Networking’s Active Gateway feature provided a dual-stacked, distributed Layer 3 gateway on each leaf switch. Both ARP and ND suppression were enabled.

Two VLAN SVIs per VRF were defined on each border leaf to connect to a pair of external firewalls. Bidirectional Forwarding Detection (BFD) was enabled on external BGP peerings for fast routing failure detection.

Hardware and Firmware

The following switch models and firmware versions were tested in the designated roles:

| Switch Role | Switch Model | Firmware Version | Mode | Forwarding Profile |
|-------------|--------------|------------------|------|--------------------|
| Spine | 8325-32C | 10.13.1000 | Standalone | Spine |
| Leaf | 10000-48Y6C | 10.13.1000 | VSX | Leaf |
| Border Leaf | 10000-48Y6C | 10.13.1000 | VSX | Leaf |

Note: The internal switch architecture of the 10000-48Y6C is based on the 8325-48Y8C. Validated values for the 10000-48Y6C also apply to the 8325-48Y8C.

Switch Scale Configuration

The following per-switch configuration values established Layer 3 and Layer 2 scale for the testing environment.

| Feature | Spine | Leaf | Border Leaf |
|---------|-------|------|-------------|
| Underlay OSPF Areas | 1 | 1 | 1 |
| Underlay OSPF Interfaces | 19 | 3 | 3 |
| Underlay BGP Peers | 19 | 2 | 2 |
| Overlay VRFs | N/A | 64 | 64 |
| Overlay VLANs (including one transit VLAN per VRF) | N/A | 387 | 515 |
| Overlay Primary PVLANs | N/A | 64 | 64 |
| Overlay Isolated PVLANs (one per primary) | N/A | 64 | 64 |
| Overlay BGP Peers to External Networks | N/A | N/A | 128 |
| BGP IPv4 Route Maps (In + Out) | 0 | 0 | 128 |
| BGP IPv6 Route Maps (In + Out) | 0 | 0 | 128 |
| VXLAN EVPN L3 VNIs | N/A | 64 | 64 |
| VXLAN EVPN L2 VNIs | N/A | 256 | 256 |
| Dual-stack overlay external-facing SVIs | N/A | N/A | 128 |
| Dual-stack overlay host SVIs | N/A | 256 | 256 |
| SVIs with DHCPv4 Relay | N/A | 255 | 255 |
| SVIs with DHCPv6 Relay | N/A | 255 | 255 |
| Dual-stack Aruba Active Gateway SVIs | N/A | 256 | 256 |
| Unique Active Gateway virtual MACs | N/A | 1 | 1 |
| Host MC-LAG | N/A | 48 | 48 |

Multidimensional Dynamic Table Values

The following table values were populated during the solution test.

| Feature | Spine | Leaf | Border Leaf |
|---------|-------|------|-------------|
| Underlay OSPF Neighbors | 19 | 3 | 3 |
| MAC | N/A | 38339 | 38651 |
| IPv4 ARP | 19 | 37288 | 37543 |
| IPv6 ND | N/A | 26374 | 26758 |
| IPv4 Routes (Underlay + Overlay) | 608 | 37066/1250* | 37080/1250* |
| IPv6 Routes (Overlay) | N/A | 26694/640* | 26848/656* |
| Underlay BGP Peers | 19 | 2 | 2 |
| Loop Protect Interfaces | N/A | 6976 | 5568 |

Note: *The AOS-CX “show ip route” and “show ipv6 route” command outputs include /32 and /128 EVPN host routes, which do not consume a route table entry. In the table above, the first value represents the number of displayed routes when using a show route command. The second number represents the number of actual route entries consumed in the route table during the test.

Two-Tier Architecture

The Two-Tier data center was validated using CX 8360-12C core switches and two types of server access switches: 8360-48XT4C v2 and 8100-40XT8XF4C. A total of four server access racks were connected to the VSX-redundant core.

The following diagram illustrates the HPE Aruba Networking test lab’s topology (simulated access racks not depicted).

**HPE Aruba Networking NTL Two-Tier Test Topology**

Four VRFs were defined, with 128 VLANs assigned per VRF (127 server facing VLANs and one transit VLAN). HPE Aruba Networking’s Active Gateway feature provided host gateway redundancy on the core switches.

OSPFv2 and OSPFv3 were used for IPv4 and IPv6 routing on a transit VLAN between core switches and external firewalls. BFD was enabled for fast OSPF neighbor failure detection.

PIM-SM, IGMP, and MLD were enabled on core routed interfaces. IGMP and MLD snooping were enabled on server access switches.

MSTP was enabled with a single instance.

Hardware and Firmware

The following switch models and firmware versions were tested in the designated roles:

| Switch Role | Switch Model | Firmware Version | Mode | Forwarding Profile |
|-------------|--------------|------------------|------|--------------------|
| Core | 8360-12C | 10.13.1000 | VSX | Aggregation-Leaf |
| Server Access | 8360-48XT4C v2 | 10.13.1000 | VSX | Aggregation-Leaf |
| Server Access | 8100-40XT8XF4C | 10.13.1000 | VSX | N/A |

Configured Test Scale

The following per-switch configuration values established Layer 3 and Layer 2 scale for the testing environment.

| Feature | Core | Server Access (8360) | Server Access (8100) |
|---------|------|----------------------|----------------------|
| VRFs | 4 | N/A | N/A |
| ACL Routed VLAN IPv4 Ingress Entries | 4096 | N/A | N/A |
| ACL Routed VLAN IPv6 Ingress Entries | 4096 | N/A | N/A |
| OSPF Areas | 1 | N/A | N/A |
| OSPF Interfaces | 8 | N/A | N/A |
| Dual-stack PIM Interfaces | 516 | N/A | N/A |
| VLANs | 516 | 512 | 512 |
| VLAN SVIs (dual-stack) | 512 | N/A | N/A |
| SVIs with DHCPv4 Relay | 511 | N/A | N/A |
| SVIs with DHCPv6 Relay | 511 | N/A | N/A |
| Active Gateway virtual IPs (dual-stack) | 512 | N/A | N/A |
| Active Gateway virtual MACs | 1 | N/A | N/A |
| Host MC-LAG | N/A | 48 | 48 |

Multidimensional Dynamic Table Values

The following table values were populated during the solution test.

| Feature | Core | Server Access (8360) | Server Access (8100) |
|---------|------|----------------------|----------------------|
| MAC | 25109 | 25600 | 25600 |
| IPv4 ARP | 25109 | N/A | N/A |
| IPv6 ND | 49685 | N/A | N/A |
| IPv4 IGMPv3 Groups | 1024 | 256 | 256 |
| IPv4 Multicast Routes | 2036 | N/A | N/A |
| IPv6 MLDv2 Groups | 268 | 67 | 67 |
| IPv6 Multicast Routes | 240 | N/A | N/A |
| PIM-SM Neighbors | 516 | N/A | N/A |
| IPv4 Routes | 16471 | N/A | N/A |
| IPv6 Routes | 5528 | N/A | N/A |
| Dual-stack OSPF Neighbors | 8 | N/A | N/A |
| OSPF BFD Neighbors | 8 | N/A | N/A |

