The first step for deploying a data center is the physical installation of the switches and computing hosts.
Table of contents
- Initialize Fabric Components
- Switch Installation
- Physical Cabling
- Out-of-Band Management
- Switch Initialization
- Download Aruba Fabric Composer
- Install Aruba Fabric Composer
- Download AMD Pensando Policy and Services Manager
- Install AMD Pensando Policy and Services Manager
- Configure the AMD PSM Cluster
Verify the airflow configuration for the products to be installed to ensure that they support the cooling design for the data center. If required, an optional air duct kit is available for Aruba data center top-of-rack (ToR) switches to redirect hot air away from servers inside the rack.
Before installing switches, download the Aruba Installation Guide for the specific models and review it before installing and deploying the switches. Carefully review requirements for power, cooling, and mounting to ensure that the data center environment is outfitted adequately for safe, secure operations.
Step 1 Open a web browser and navigate to the Aruba Support Portal at https://asp.arubanetworks.com/.
Step 2 On the Support Portal page, select the Software & Documents tab.
Step 3 On the Software & Documents tab, select Switches.
Step 4 Select the filter options on the left.
- File Type: Document
- Product: Aruba Switches
- File Category: Installation Guide
Step 5 Download the Installation Guide version for the switch model to be installed.
Step 6 Complete the physical installation of switches in the racks.
Note: Spine switches can be installed centrally, in middle-of-row or end-of-row locations depending on cabling requirements and space availability. The key consideration is cable distance and the types of media used between leaf and spine switches.
Leaf switches should be installed top-of-rack (ToR) in high-density environments or middle-of-row in low-density environments.
Consistent port selection across racks and in the spine switches increases the ease of configuration management, monitoring, reporting, and troubleshooting tasks in the data center.
Breakout cables are numbered consistently with their split port designation on the switch.
Document all connections.
Ensure that distance limitations are observed for your preferred host connection media and between switches.
Refer to the “Data Center Design” section for guidance on cabling design options for the installation.
The illustrations below show the port configuration on two types of 48-port ToR switches. Redundant ToR switch pairs must be the same model.
Ports on an Aruba CX 8325-48Y8C:
Ports on an Aruba CX 10000-48Y6C:
In a redundant ToR configuration, the first two uplink ports should be allocated to interconnect redundant peers (ports 49-50 on 8325-48Y8C and 10000-48Y6C switches), which provides physical link redundancy and sufficient bandwidth to accommodate a spine uplink failure on one of the switches.
Two links between redundant peers are sufficient for most deployments, unless the design may result in high traffic utilization of the inter-switch links under normal operating conditions, such as when many hosts in a rack are single-homed to only one of the redundant switches.
Additional uplink ports should be allocated to connect spine switches (ports 51-56 on an 8325-48Y8C and ports 51-54 on a 10000-48Y6C).
The highest numbered non-uplink port should be reserved as a heartbeat link between a ToR redundant pair.
The number of spine switches should match the number of leaf-to-spine links required on each ToR, providing a fully meshed, Clos switch topology.
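The uplink rule above reduces to simple arithmetic: each ToR needs one uplink per spine, and the balance of host-facing to spine-facing bandwidth determines the oversubscription ratio. The sketch below illustrates this math; the port counts and speeds are example values, not requirements from this guide.

```python
# Sketch: leaf-to-spine sizing math for a Clos fabric. The 48 x 25G host
# ports and 100G uplinks below are illustrative assumptions.

def oversubscription(host_ports, host_gbps, uplinks, uplink_gbps):
    """Ratio of total host-facing bandwidth to total spine-facing bandwidth."""
    return (host_ports * host_gbps) / (uplinks * uplink_gbps)

# One uplink per spine: a 6-spine design means 6 uplinks on each ToR.
spines = 6
ratio = oversubscription(host_ports=48, host_gbps=25,
                         uplinks=spines, uplink_gbps=100)
print(f"{ratio}:1")  # 1200G down / 600G up = 2.0:1
```

Lower ratios (more spines or faster uplinks) reduce contention for east-west traffic at the cost of more uplink ports per ToR.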
Follow a similar approach when using lower-density ToR designs. Before deploying ToR configurations that require server connectivity at multiple speeds, review the switch guide to determine if adjacent ports are affected.
Configuration steps for changing port speeds are covered later in this guide. Refer to the “Data Center Design” section for guidance on port speed groups on the different hardware platforms.
The illustration below shows the port configuration on an 8325 32-port spine switch.
In a dual ToR configuration, a spine switch must be connected to each switch in the redundant ToR pair in each rack. A 32-port spine switch supports up to 16 racks in this design. Use the same port number on each spine switch to connect to the same leaf switch to simplify switch management and documentation. For example, assign port 1 of each spine switch to connect to the same leaf switch.
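The consistent port-mapping convention described above can be captured as a small cabling-plan generator. This is a sketch only; the rack and leaf naming below is an illustrative assumption, not a convention defined in this guide.

```python
# Sketch: derive a spine-port-to-leaf cabling plan for a dual-ToR design on a
# 32-port spine. Every spine uses the same port number for the same leaf, so
# one plan applies to all spines. Rack/leaf names are hypothetical examples.

def spine_port_plan(spine_ports=32, tors_per_rack=2):
    plan = {}
    for port in range(1, spine_ports + 1):
        rack = (port - 1) // tors_per_rack + 1       # two consecutive ports per rack
        member = (port - 1) % tors_per_rack + 1      # leaf 1 or leaf 2 in the pair
        plan[port] = f"rack{rack:02d}-leaf{member}"
    return plan

plan = spine_port_plan()
print(plan[1], plan[2], plan[32])
# rack01-leaf1 rack01-leaf2 rack16-leaf2 -> 32 ports cover 16 dual-ToR racks
```

Generating the plan once and applying it to every spine keeps cabling documentation and troubleshooting consistent across the fabric.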
In a VXLAN spine-and-leaf design, a pair of leaf switches serves as the single entry and exit point to the data center. This is called the border leaf, but it does not require dedication to only border leaf functions. It may provide services leaf functions and, in some cases, provide connectivity to directly attached data center workloads. Cabling the border leaf can vary among deployments, depending on how the external network is connected and if services such as firewalls and load balancers are connected.
After all switches are physically installed with appropriate power and networking connections, continue to the next procedure.
For the Aruba ESP Data Center spine-and-leaf design, use of a dedicated management LAN for the data center is strongly recommended.
A dedicated management LAN on separate physical infrastructure ensures reliable connectivity to data center infrastructure for automation, orchestration, and traditional management access. The management LAN provides access to Aruba Fabric Composer (AFC), Aruba NetEdit, and AMD Pensando Policy and Services Manager (PSM) applications. Ensure that the host infrastructure needed for those applications also can be connected to the management LAN or is reachable from the management LAN.
Deploy management LAN switches top-of-rack with switch and host management ports connected. Plan for an IP subnet with enough capacity to support all management addresses in the data center. DNS and NTP services for the fabric should be reachable from the out-of-band management network.
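Subnet capacity planning for the management LAN can be checked with a short calculation. The sketch below uses Python's standard ipaddress module; the device counts and headroom figure are illustrative assumptions.

```python
# Sketch: verify a candidate OOB management subnet holds every managed device
# with growth headroom. Counts below are hypothetical examples.
import ipaddress

def subnet_fits(subnet, devices, headroom=0.25):
    net = ipaddress.ip_network(subnet)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return usable >= devices * (1 + headroom)

# e.g. 40 fabric switches + 12 mgmt switches + 20 hosts/appliances = 72 devices
print(subnet_fits("172.16.104.0/24", devices=72))  # True: 254 usable >= 90
print(subnet_fits("172.16.104.0/26", devices=72))  # False: 62 usable < 90
```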
Configuration steps for the management LAN are not covered in this guide. For design assistance, refer to the ESP Data Center Volume 1 Design Guide.
Go to the Aruba Support Portal at https://asp.arubanetworks.com/ and, using the download steps noted above in “Switch Installation,” download the AOS-CX Fundamentals Guide for the version of the operating system you plan to run.
Note: Refer to the operating system release notes and consult with an Aruba Networks SE or TAC team member for assistance with determining and selecting the version.
The “Initial Configuration” section of each Fundamentals Guide presents detailed instructions for connecting to the switch console port. After connecting to the console port, follow the steps below.
Step 1 Enable power to the switch by connecting power cables to the switch power supplies.
Step 2 Log in with the username admin and an empty password.
Step 3 Enter a new password for the admin account.
Note: The “Initial Configuration” section of the Fundamentals Guide provides detailed instructions for logging into the switch the first time.
Step 4 Confirm that all CX 10000 switches in the fabric are running AOS-CX version 10.11 for compatibility with PSM version 1.54.5-T-2 used in this guide.
Step 5 Confirm that all other switches are running AOS-CX 10.09 or later for compatibility with AFC 6.4.1 used in this guide.
Step 6 If the switch was previously configured, reset it to the factory default configuration. Aruba Fabric Composer (AFC) requires a factory default configuration for orchestration during the fabric configuration process.
8325# erase all zeroize
This will securely erase all customer data and reset the switch to factory defaults.
This will initiate a reboot and render the switch unavailable until the zeroization is complete.
This should take several minutes to one hour to complete.
Continue (y/n)? y
Step 7 Configure 6300M VSF stacks using the Aruba AOS-CX VSF Guide.
Note: VSF stacks should be configured on 6300 switches before making any other configuration changes after zeroization.
Step 8 Configure the switch hostname.
Note: It is important to use a canonical naming scheme to easily identify the function of each switch. The hostname scheme above uses <physical location>-<fabric identifier>-<role and unique VSX pair identifier>-<VSX pair member id> to identify the correct fabric and role when using AFC for fabric configuration. When using this scheme for switches that are not in a VSX pair, the number in the role field is sufficient for unique identification (i.e., RSVDC-FB1-SP1).
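A naming scheme like the one described in the note is easy to validate in inventory or automation tooling. The sketch below checks names against the <physical location>-<fabric identifier>-<role and id>[-<VSX member id>] pattern; the specific role tokens (SP, LF, BL) are assumptions for illustration.

```python
# Sketch: validate switch hostnames against the canonical scheme described
# above. Role abbreviations (SP/LF/BL) are hypothetical examples.
import re

HOSTNAME_RE = re.compile(
    r"^(?P<loc>[A-Z]+)-(?P<fabric>FB\d+)-(?P<role>(?:SP|LF|BL)\d+)(?:-(?P<member>\d+))?$"
)

def parse_hostname(name):
    """Return the hostname's fields, or None if it does not match the scheme."""
    m = HOSTNAME_RE.match(name)
    return m.groupdict() if m else None

print(parse_hostname("RSVDC-FB1-SP1"))    # spine: no VSX member id needed
print(parse_hostname("RSVDC-FB1-LF1-2"))  # leaf: member 2 of VSX pair LF1
```

Rejecting non-conforming names early prevents mismatched fabric and role assignments later when AFC discovers the switches.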
Step 9 Configure the Switch Management Interface. By default, the management interface uses DHCP for its configuration. DHCP reservations can be used to assign a consistent IP address, default gateway, and nameserver. Static IP configuration eliminates dependence on DHCP service availability.
interface mgmt
    no shutdown
    ip static 172.16.104.101/24
    default-gateway 172.16.104.1
    nameserver 172.16.1.98
Note: Based on the existing IP address management process, determine a subnet to be used for the management LAN, where out-of-band (OOB) management ports on your switches are connected. Aruba Fabric Composer must be reachable from this network. The “Initial Configuration” section of the Fundamentals Guide provides detailed instructions for configuring the management interface.
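When many switches need static management addressing, the stanza shown above can be rendered per switch from a small inventory. This is a minimal sketch; the template mirrors the example configuration, and the addresses are the same illustrative values used above.

```python
# Sketch: render per-switch management-interface stanzas from an inventory,
# mirroring the static configuration example above. Values are examples.

TEMPLATE = """interface mgmt
    no shutdown
    ip static {ip}/{prefix}
    default-gateway {gw}
    nameserver {dns}"""

def mgmt_config(ip, prefix=24, gw="172.16.104.1", dns="172.16.1.98"):
    return TEMPLATE.format(ip=ip, prefix=prefix, gw=gw, dns=dns)

for host_ip in ("172.16.104.101", "172.16.104.102"):
    print(mgmt_config(host_ip))
    print()
```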
Step 10 When spines use breakout cabling, optionally configure split ports with the appropriate number of child interfaces and connection speeds, then confirm the operational port change.
interface 1/1/1-1/1/3
    split 2 100g
Note: Typically, a spine uses a consistent split port strategy. An interface range is used to assign the same split configuration to multiple ports. The confirm parameter in the split configuration statement disables the operational warning. For example, split 2 100g confirm.
Split interfaces also can be configured in AFC.
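For planning and documentation, it helps to enumerate the child interfaces a split creates. AOS-CX names split children with a colon suffix on the parent port (e.g., 1/1/1:1); the sketch below generates those names for the example range above.

```python
# Sketch: list the child interfaces produced by splitting parent ports.
# AOS-CX split children use a colon suffix (e.g., 1/1/1:1). The port range
# matches the 'split 2 100g' example above.

def split_children(parent, ways):
    """Child interface names for a parent port split <ways> ways."""
    return [f"{parent}:{n}" for n in range(1, ways + 1)]

for port in ("1/1/1", "1/1/2", "1/1/3"):
    print(port, "->", split_children(port, 2))
# 1/1/1 -> ['1/1/1:1', '1/1/1:2'], and so on for each parent port
```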
Step 1 Navigate to https://asp.arubanetworks.com/.
Step 2 On the menu at the top of the page, select Software & Documents.
Step 3 On the menu on the left under Product, select Show All.
Step 4 In the Product menu, select “Aruba Fabric Composer”, and click Apply.
Step 5 In the search results, select the latest OVA version and download it to your computer. AFC 6.5.0 or above is required to manage AMD Pensando’s Policy and Services Manager version 1.54.5-T-2 used in this guide.
On the second page of the Aruba Support Portal search results, find the “Aruba Fabric Composer Install Guide.” Review the installation considerations to ensure that adequate host resources are available.
Step 1 Select the OVA file using the Deploy OVF Template workflow in vCenter.
Note: Refer to the Aruba Fabric Composer Release Notes available on the Aruba Support Portal for minimum host requirements.
Step 2 Proceed with selecting the appropriate vSphere resources for your environment and accept the license agreement.
Step 3 Complete the Customize template form in the next-to-last step of the pre-deployment workflow. See below for the types of information required.
Step 4 Verify all settings and power on the new virtual machine. Wait several minutes for the system to initialize and for the application to become available.
Step 5 Open a web browser and connect to AFC at the previously configured IP address.
Note: The software version is not displayed and login is not allowed while the system is initializing.
Step 6 On the Fabric Composer page, enter the following default credentials, and click LOGIN.
Step 7 Enter the current and new password and click APPLY.
Step 1 On the Maintenance menu, select Licenses.
Step 2 On the ACTIONS menu in the Maintenance/Licenses pane, select ADD.
Step 3 On the License page, paste the JSON license string in the License field and click APPLY.
Step 4 Review the installed license to verify that the Start Date, End Date, Quantity, and Tier values display as expected.
Note: AFC supports automating two tiers of switches (Tier 3 and Tier 4). The datasheet for each switch model contains the licensing tier required for AFC.
Refer to the Aruba Fabric Composer Installation Guide available on the Aruba Support Portal. In the “Deploying High Availability for Aruba Fabric Composer” section, review the installation requirements and ensure that adequate host resources are available. Follow the steps provided to deploy the HA cluster.
When using the firewall capabilities of the CX 10000 switch in a data center, AMD Pensando Policy and Services Manager (PSM) VMs must be installed on a network that is accessible by AFC and switch management interfaces.
Step 1 Navigate to https://asp.arubanetworks.com/.
Step 2 On the menu at the top of the page, select Software & Documents.
Step 3 In the Search Files field at the top, type Pensando.
Step 4 In the search results, select the latest OVA version and download it to your computer.
In the Aruba Support Portal search results, find the Pensando Policy and Services Manager for Aruba CX 10000: User Guide. Review the “PSM Installation” section and ensure that adequate host resources are available. PSM requires a minimum of three VM instances for a production deployment.
Step 1 Select the OVA file using the Deploy OVF Template workflow within vCenter and click NEXT.
Step 2 Choose the appropriate options in Select a compute resource and proceed through Review details.
Step 3 On the Configuration page, click the radio button for Production and click NEXT.
Step 4 Proceed with selecting the appropriate storage and network resources for the deployment.
Step 5 Complete the Customize template form using the example below.
Step 6 Complete the VM creation workflow.
Step 7 Create additional PSM VMs as needed.
Note: Additional VMs can be created by importing the OVA again or by cloning the first VM as a template as described in the “Installing OVA on ESXi” section of the Pensando Policy and Services Manager for Aruba CX 10000: User Guide.
Step 1 In vCenter, log in to one of the Pensando PSM VM consoles.
Password: < Specified during VM creation process >
Step 2 At the VM console, bootstrap the PSM cluster with the bootstrap_PSM.py utility using the following command-line switch/value pairs followed by a space-delimited list of IP addresses for all cluster members.
- -enablerouting: < No value required >
- -distributed_services_switch: < No value required >
- -autoadmit: False
- -clustername: < User supplied cluster name >
- -domain: < Domain name >
- -ntpservers: < Comma-separated list of NTP servers >
bootstrap_PSM.py -enablerouting -distributed_services_switch -autoadmit False -clustername FB1_PSM -domain example.local -ntpservers 172.16.1.98,172.16.1.99 172.16.104.51 172.16.104.52 172.16.104.53
Note: The -autoadmit command line switch is set to True by default. This automatically enables any Distributed Services Switch to join PSM. When a strict admission policy to PSM is required, set this command line switch to False.
Step 3 When prompted, read and accept the End User License Agreement.
Step 4 Verify that the PSM cluster bootstrap completes successfully.
Step 5 On the VM console, enter the following to generate a PSM security token.
/usr/pensando/bin/psmctl get node-token --psm-ip localhost --psm-port 443 --audience "*" --token-output ~/dse-tok
Note: The token can be used for disaster recovery and backup purposes. Store it with other sensitive network credentials.
Step 6 When prompted, enter the following default credentials:
User name: admin
Step 7 Open a web browser and connect to PSM at one of the configured VM IP addresses.
Step 8 On the AMD Pensando login page, enter the following default credentials and click SIGN IN.
Step 9 Go to System > Cluster and verify that each PSM VM is listed under Nodes in the Cluster Detail pane with the following values.
Step 10 Go to Admin > User Management, mouse-over the admin user, and click the Change password icon.
Step 11 Enter the old and new passwords and click Save changes.
Note: Changing the password on one VM updates all cluster members.