
Services

Many of the services that provide normal operations for AOS-10 are running within HPE Aruba Networking Central and their operation is not necessarily apparent. This section describes those services and how they work.

1 - AirGroup

AirGroup provides advanced functionality for multicast DNS and SSDP based network devices.

In today’s interconnected and mobile-centric world, seamless communication and interaction between devices are essential. HPE Aruba Networking’s AirGroup service is designed to bridge the gap between diverse devices and the services they offer, enhancing the user experience within enterprise networks. At its core, it leverages zero-configuration networking to facilitate the discovery and use of multicast DNS (mDNS) and Simple Service Discovery Protocol (SSDP) services. These services encompass a wide range of functionality, including Apple® AirPrint, AirPlay, Google Cast streaming, and Amazon Fire TV integration, all of which are integral to the modern digital workplace.
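To make the protocol side concrete, the following minimal sketch uses the third-party python-zeroconf library (an assumption for illustration, not part of AirGroup) to advertise an AirPrint-style _ipp._tcp service the way a network printer would. These are exactly the kinds of mDNS announcements AirGroup listens for; the names and addresses are illustrative.

```python
import socket
from zeroconf import ServiceInfo, Zeroconf  # pip install zeroconf

# Advertise a hypothetical AirPrint-capable printer. The service type
# _ipp._tcp is one of the service IDs associated with AirPrint.
info = ServiceInfo(
    "_ipp._tcp.local.",
    "Example Printer._ipp._tcp.local.",
    addresses=[socket.inet_aton("192.0.2.10")],  # documentation-range address
    port=631,
    properties={"rp": "ipp/print", "note": "2nd floor"},
)

zc = Zeroconf()
zc.register_service(info)  # broadcasts the mDNS service advertisement
try:
    input("Advertising; press Enter to stop...")
finally:
    zc.unregister_service(info)
    zc.close()
```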

AirGroup simplifies the management of diverse devices, each with its own set of services, enabling users to seamlessly access these services from their mobile devices, laptops, and more, all within an enterprise network environment. Whether it’s sharing a presentation via AirPlay, streaming content through Google Cast, or enjoying Amazon Fire TV services, AirGroup empowers users to be more productive and efficient.

This service is not limited to wireless connections; it seamlessly integrates wired and wireless devices, making it a comprehensive solution for modern network environments. Whether you’re in a bustling corporate office, a dynamic educational institution, or any enterprise setting, AirGroup enhances the functionality of your network by enabling devices to communicate effortlessly.

Key Features

  • Service Discovery: Aruba AirGroup simplifies the process of discovering and accessing services and resources available on the network across layer 2 domains. It enables devices to automatically detect and connect to services such as printers, file servers, media devices, and other resources without the need for complex configurations.

  • Device Isolation and Security: AirGroup ensures that devices can only discover and communicate with other devices within the same security and policy domain. This isolation prevents unauthorized access to sensitive information and enhances network security and privacy.

  • User Role and VLAN Based Access Control: Administrators can implement user role and VLAN based access control policies using AirGroup. This allows them to define specific rules and permissions for different user groups in different VLANs, ensuring that users have appropriate access to services and resources based on their roles and VLANs.

  • Enhanced User Experience: By streamlining service discovery and enabling seamless communication between devices, Aruba AirGroup improves the overall user experience. Users can effortlessly access shared resources and collaborate effectively, boosting productivity and satisfaction.

  • Easy Configuration and Management: AirGroup is easy to configure and manage through the Aruba Central management platform. Administrators can use a user-friendly interface to set up and monitor service discovery and communication settings efficiently.

1.1 - Architecture of the AirGroup Technology

Deep dive into AirGroup architecture for AOS 10 operations

In transitioning to AOS 10, the AirGroup service has undergone a significant architectural overhaul to meet the dynamic needs of modern enterprise networks. In AOS 8, its centralized model struggled to cope with the growing number of mDNS/SSDP devices. Recognizing the need for change, Aruba reengineered AirGroup, shifting away from the single-central-system approach. In the new design, the AirGroup server cache is distributed to every AP in the network. This shift enables AirGroup to efficiently handle the increasing device population and its evolving behaviors while achieving exceptional performance and scalability.

In this revamped architecture, AirGroup operates as a distributed model, dividing functionality between APs and the AirGroup Service in Aruba Central. This innovative approach ensures AirGroup remains efficient and adaptable to meet the evolving demands of modern enterprise networks.

The New AOS 10 Architecture

  • Diverse Service Advertisement Frequencies - Various mDNS/SSDP devices relay service advertisement frames at intervals ranging from 5 seconds to 2 minutes or more. To ensure that a single or a subset of servers with aggressive advertisement tactics do not monopolize the system, the new AirGroup architecture can manage service advertisement frequency effectively.

  • Variable Client Query Frequencies - Applications like YouTube and Netflix constantly scan for new servers through mDNS/SSDP query frames, especially during video streaming. As queries generally outnumber service advertisements (20% advertisements vs. 80% queries), prompt query response times are essential without causing delays in processing service advertisements.

  • Proliferation of Unsupported Service Advertisements - The Bonjour protocol permits new applications to define and advertise services. As the usage of Wi-Fi BYOD devices continues to grow, the quantity of services being advertised and queried naturally fluctuates. Within this fresh design, each AP is equipped to intelligently filter and drop unsupported service advertisement and query packets originating from devices. This approach ensures that the responsiveness of AirGroup services remains undisturbed, even in the face of numerous unsupported services being advertised and queried by devices.

  • Ease of Serviceability - The new AirGroup architecture prioritizes ease of serviceability, including configuration as well as the provision of MRT Dashboards, APIs, alerts, and visibility into the state of APs.

  • Horizontally Scalable Architecture - Given the inherent nature of cloud-based deployments, the AirGroup service scales horizontally by simply adding pods.

AirGroup Operations in AOS 10

AirGroup on each AP in AOS 10 serves as a protocol-aware proxy cache for service discovery protocols such as mDNS and SSDP. It intercepts and decodes these protocol packets from the L2 header, storing essential information in a cache.
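As an illustration of what “protocol-aware” means here, the sketch below parses the fixed 12-byte header that every mDNS message carries and uses the QR flag to distinguish client queries from server advertisements. This is a generic illustration of the technique, not Aruba’s implementation.

```python
import struct

def classify_mdns(payload: bytes) -> str:
    """Classify a raw mDNS UDP payload as a query or an advertisement.

    The first 12 bytes of every DNS/mDNS message form a fixed header;
    the QR bit (top bit of the flags field) is 0 for queries and 1 for
    responses, i.e. service advertisements.
    """
    if len(payload) < 12:
        raise ValueError("truncated mDNS message")
    _txid, flags, qdcount, _an, _ns, _ar = struct.unpack("!6H", payload[:12])
    return "advertisement" if flags >> 15 else f"query ({qdcount} question(s))"
```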

In the AirGroup architecture of AOS 10, two components collaborate to support mDNS/SSDP functionality:

  • AirGroup Service in Aruba Central

  • AP mDNS module in every AP

Devices that periodically broadcast their mDNS/SSDP capabilities on the network are referred to as AirGroup servers, while devices searching for these services or capabilities are known as AirGroup users. This distinction leads to two distinct packet flows within the AirGroup application:

  • Query packet flow

  • Advertisement packet flow

Message flow of AirGroup within AOS 10

When an AP boots up, its mDNS process receives the AirGroup configuration, which originates from Aruba Central. This configuration encompasses AirGroup’s enable/disable status, service enable/disable status, disallow-role/vlan per service, and allowed role/vlan per service.

Each AP maintains two types of AirGroup server caches:

  • Discover Cache: This cache stores AirGroup servers directly connected to the AP. It facilitates sending delta discover cache updates to AirGroup service in Central and ensures cache coherency during AP-Central connection downtime.

  • Central Cache: The AirGroup service in Central processes cache updates from each AP, applies policies, and sends cache sync messages to the AP and its neighboring APs. The mDNS process on the AP uses these updates to construct the Central cache database. This cache contains only the synchronized cache from the AirGroup service. All AirGroup client queries are answered from this cache after applying configuration policies and per-server global policies.

The mDNS process in the AP primarily handles three processes:

  • Advertisement process

  • Query process

  • Cache synchronization with AirGroup service in Central

The Advertisement process manages new AirGroup server advertisements or updates (cache add/update), server disconnects (cache delete), and the dropping of suppressed services. Each AP’s mDNS process actively listens on a RAW socket, capturing mDNS/SSDP server advertisement packets. These packets are evaluated against the configured policies, including AirGroup service enabling/disabling, disallowed/allowed client roles, and disallowed/allowed VLANs. After policy assessment, the server entry is either updated or added to the AP Discover Cache table.

For new servers, all cache records from the packet are sent to the AirGroup Service in Central. For updates, only the delta updates are relayed. If advertised services are unsupported or disallowed in specific VLANs or client roles, the AP takes direct action by dropping the packet. Simultaneously, information about the suppressed service is forwarded to the AirGroup service in Central to maintain visibility.
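The advertisement path described above can be modeled in a few lines. This is a simplified sketch; names such as discover_cache, central_link, and the policy fields are illustrative assumptions rather than Aruba APIs.

```python
def handle_advertisement(pkt, config, discover_cache, central_link):
    """Simplified model of the AP advertisement path described above."""
    # 1. Drop unsupported or disabled services outright, but report the
    #    suppression to Central so visibility is maintained.
    if pkt.service_id not in config.enabled_services:
        central_link.report_suppressed(pkt.service_id, pkt.server_mac)
        return
    # 2. Apply the per-service role/VLAN policies.
    policy = config.policies[pkt.service_id]
    if pkt.vlan in policy.disallowed_vlans or pkt.role in policy.disallowed_roles:
        central_link.report_suppressed(pkt.service_id, pkt.server_mac)
        return
    # 3. New server: send all records; known server: send only the delta.
    if pkt.server_mac not in discover_cache:
        discover_cache[pkt.server_mac] = list(pkt.records)
        central_link.send_cache_update(pkt.server_mac, pkt.records)
    else:
        delta = [r for r in pkt.records if r not in discover_cache[pkt.server_mac]]
        if delta:
            discover_cache[pkt.server_mac].extend(delta)
            central_link.send_cache_update(pkt.server_mac, delta)
```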

The subsequent diagrams illustrate the specific workflows for cache entry creation and updates, cache entry deletion, and the handling of unsupported or disallowed service packets.

Advertisement processing on AP – cache addition and update

Advertisement processing on AP – cache deletion

Advertisement processing on AP – updates of dropped packets for disallowed or suppressed services

Query Process

During the query process of APs, when a device seeks mDNS/SSDP services offered by another device, the mDNS process on the AP accesses its Central cache. This cache is built and continuously updated with AirGroup server/service records synchronized from the AirGroup service in Central.

For each incoming query, the mDNS process applies a set of policies, including service configurations (enabling/disabling), disallowed/allowed VLANs, and/or disallowed/allowed client roles. After this filtering process, the mDNS process consults the Central cache for records corresponding to the requested service IDs. If cached records are found, it assembles response packets with these records as the payload and subsequently dispatches them as unicast packets to the querying client.

In cases where all records cannot fit within a single packet, they are transmitted in successive packets. It’s important to note that there is a predefined hard limit, currently set at 150, for the maximum number of records that can be sent in response to any query. Future iterations may allow for configurability in this regard.
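A minimal sketch of this response-assembly step is shown below. The 150-record hard limit comes from the text above; the per-packet capacity is an illustrative assumption, since in practice it is bounded by the MTU.

```python
MAX_RECORDS_PER_QUERY = 150   # hard limit noted above; not yet configurable
MAX_RECORDS_PER_PACKET = 25   # illustrative assumption; really MTU-bound

def build_responses(records):
    """Yield successive unicast response payloads for one client query."""
    capped = records[:MAX_RECORDS_PER_QUERY]
    for i in range(0, len(capped), MAX_RECORDS_PER_PACKET):
        yield capped[i:i + MAX_RECORDS_PER_PACKET]
```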

Query processing on AP

Cache Synchronization

The AirGroup service in Central plays a pivotal role in cache synchronization:

  • It processes the Discover cache updates received from each AP. Following the application of relevant policies, the AirGroup service dispatches cache synchronization messages to both the respective AP and all neighboring APs.

  • The mDNS process on the APs then processes these cache synchronization updates to construct the Central Cache database. This database exclusively comprises cache entries that have been synchronized from the AirGroup service in Central. All client queries draw from this cache, with all configured policies and per-server global policies applied.

  • In scenarios involving configuration changes or during roaming events, the AirGroup Service sends synchronization updates to all neighboring APs, ensuring the Central cache remains current and up to date.

To maintain cache coherency and consistency:

  • Each AP calculates the crc64 checksum for all Service Identifier counter IDs in both the Discover and Central cache databases. This checksum, along with the Discover cache checksum and Central cache checksum, is included in the periodic checksum messages transmitted to the AirGroup service in Central.

  • The checksum is routinely updated with each central cache synchronization message received, reinforcing the cache’s integrity. This process is especially valuable in scenarios involving connectivity disruptions and aids in recovering from cache losses during connection down/up events.

Ultimately, this design approach solidifies the Central Cache as the definitive and authoritative source of information, ensuring a robust and reliable service discovery environment.
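The sketch below illustrates how such a periodic cache fingerprint could be computed. Only the use of a crc64 checksum over the cache databases comes from the text above; the CRC-64 variant and the record serialization are assumptions for illustration.

```python
CRC64_POLY = 0x42F0E1EBA9EA3693  # ECMA-182 polynomial (assumed variant)
MASK64 = 0xFFFFFFFFFFFFFFFF

def crc64(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-64 used to fingerprint a cache database."""
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            crc = ((crc << 1) ^ CRC64_POLY if crc & (1 << 63) else crc << 1) & MASK64
    return crc

def cache_checksum(cache: dict) -> int:
    """Fold every service record into one deterministic checksum.

    Sorting the keys makes the result independent of insertion order,
    so an AP and Central can compare fingerprints without exchanging
    the full cache contents.
    """
    crc = 0
    for service_id in sorted(cache):
        crc = crc64(f"{service_id}:{cache[service_id]}".encode(), crc)
    return crc
```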

AP cache synchronization with AirGroup service in Central

1.2 - Configuration of AirGroup for an AOS 10 Environment

Configuration elements for the AirGroup service including user and server policies, wired and wireless devices, custom services and license requirements.

AirGroup configuration policies are a pivotal component in managing and controlling service discovery within the network. These policies provide administrators with the flexibility to define how AirGroup functions and ensure that it aligns with the specific requirements and security standards of the organization.

Here are some key aspects of AirGroup configuration policies:

Enabling AirGroup Services

AirGroup services can be managed at both the Central Global level and the AP group level. Settings configured at the group level take precedence over the settings at the Global level.

Administrators have the flexibility to selectively enable or disable specific services. This capability empowers organizations to customize their network environment, accommodating essential services while effectively managing and mitigating potential security risks or unnecessary services.

For instance, suppose you have 7 predefined AirGroup services enabled globally, but in a specific AP group you only wish to enable AirPrint, AirPlay, and Google Cast. In this scenario, you can disable the remaining four services at the AP group level, allowing for precise control over service availability within that specific group.

The following two screen captures demonstrate how to enable the AirGroup service at the Global level and how to subsequently disable the DLNA media service at the AP group level, effectively superseding the Global level configuration.

Enable AirGroup service at Global level

Disable DLNA Media service at 'AOS 10 AP' group

User Role and VLAN-Based Policies

AirGroup configuration policies offer granular control by allowing the application of policies based on user roles and VLAN assignments. This precise control mechanism ensures that specific services are exclusively accessible to authorized users or devices within designated network segments. This approach not only bolsters network security but also facilitates the isolation of services as necessary. Both role and VLAN-based policies provide the option to either “allow service” or “deny service,” granting administrators flexibility in defining access rules.

AirPlay policy restricted to Employee user role on VLAN 100 and 200

Wired and Wireless Servers

In a wireless network, a wireless AirGroup server is automatically positioned by the AP it connects to, becoming visible and accessible to clients within the one-hop RF neighborhood of the server’s AP, provided the AirGroup policies allow it.

However, for wired AirGroup servers, automatic positioning in relation to AP locations does not occur. To enable wired AirGroup servers to be shared with wireless clients in AOS 10, global server policies must be configured.

Within a server policy, specific to a server’s MAC address, administrators can stipulate which user roles are permitted or prohibited. Additionally, administrators need to define a list of APs to which the wired AirGroup server will be visible. As a result, all clients connected to those APs will gain visibility and access to the server. This configuration ensures seamless accessibility to wired AirGroup servers for wireless clients within the network. In the current release, a maximum of 50 APs can be included in the visibility list.

Global server policies also serve another crucial purpose – ensuring the visibility of wireless AirGroup servers when a specific server is located beyond the one-hop RF neighborhood. This situation may arise when there’s a need to allow wireless clients to access a server that is not within the typical range of nearby APs the wireless clients connect to.

For instance, consider a library where there’s only one AirPrint printer available, but some APs are situated beyond the one-hop RF neighborhood of the printer. Consequently, clients connected to these remote APs cannot access the printer. In such scenarios, the solution is to establish a server policy for the printer at the Global level and include all relevant APs in the visibility list within the server policy.

By doing so, the configuration ensures that clients connected to any AP in the list, whether within the immediate RF neighborhood or beyond, can seamlessly access the printer. This flexibility in defining server visibility allows organizations to meet their specific connectivity requirements and provide a consistent user experience.

Global server policy

Leader AP for each wired AirGroup server

In AOS 10, the concept of a Leader AP is crucial for managing wired AirGroup servers. For a wired AirGroup server to be recognized and learned by the APs, the VLANs of the wired servers must be trunked to the switch ports connected to the APs. This ensures that all APs on the same VLAN can detect these wired servers. To avoid the inefficiency of having every AP on the same VLAN send redundant server updates to Central—which would generate excessive duplicate information and waste AP resources and WAN link bandwidth—AOS 10 introduces the Leader AP role for each wired AirGroup server on the same VLAN. Central selects a Leader AP, and only this Leader AP is responsible for sending any further updates about the server after it has been learned.

Each wired AirGroup server has its own Leader AP, and any AP can act as the Leader AP for up to 10 wired servers within the same VLAN. This distributes the Leader AP responsibilities and load across the APs on the VLAN. As noted earlier, every AP maintains two cache tables for AirGroup servers: the Discover Cache, which stores all directly connected wireless servers, and the Central Cache, which contains server entries distributed by Central; the AP uses these entries to service mDNS/SSDP queries. The Leader AP for a wired AirGroup server caches this specific wired server in its Discover Cache table and sends updates for this server to Central. Central then distributes the server information to other APs in the RF neighborhood.
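A simplified model of Leader AP assignment is sketched below. The 10-server-per-AP cap comes from the text above; the least-loaded selection strategy is an illustrative assumption, since Central’s actual election algorithm is not documented here.

```python
MAX_WIRED_SERVERS_PER_LEADER = 10  # per-AP cap within one VLAN

def pick_leader(vlan, candidate_aps, load):
    """Choose a Leader AP for one newly learned wired AirGroup server.

    candidate_aps: APs that can see the server's VLAN.
    load: maps (ap, vlan) -> number of wired servers the AP already leads.
    """
    eligible = [ap for ap in candidate_aps
                if load.get((ap, vlan), 0) < MAX_WIRED_SERVERS_PER_LEADER]
    if not eligible:
        raise RuntimeError("no AP on this VLAN has Leader capacity left")
    leader = min(eligible, key=lambda ap: load.get((ap, vlan), 0))
    load[(leader, vlan)] = load.get((leader, vlan), 0) + 1
    return leader
```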

Wired AirGroup server migration considerations from AOS 8 to AOS 10

In AOS 10, AirGroup operates solely on each AP and not on the gateways. To ensure that all wired AirGroup servers are recognized, the VLANs associated with these servers must be trunked to the switch ports connected to the APs. Therefore, when migrating from an AOS 8 AirGroup network, which is based on Mobility Conductor and Mobility Controller, to AOS 10, it is necessary to remove the wired AirGroup server VLANs from the switch ports connected to the gateways and add them to the switch ports connected to the APs. This allows the mDNS/SSDP packets from the wired servers to be detected by the APs, enabling them to learn these servers and make them visible to clients connected to neighboring APs.

Predefined Services

With an AP Foundation license, 7 predefined services are available: AirPlay, AirPrint, Google Cast, Amazon TV, DIAL, DLNA Print, and DLNA Media. For these 7 predefined services, administrators have the option to disable or suppress specific service IDs that may pose a security risk. This proactive measure prevents these potentially risky services from being discovered or accessed within the network, bolstering security and reducing the attack surface.

Edit service ID of AirPlay

Disable service ID _raop._tcp of AirPlay

Custom Services

Aruba AirGroup encompasses 7 predefined services: AirPlay, AirPrint, Google Cast, Amazon TV, DIAL, DLNA Media, and DLNA Print. However, the custom service feature extends the flexibility of AirGroup by enabling customers to configure additional AirGroup services beyond the 7 predefined ones. This empowers organizations to tailor their service discovery environment to suit their specific needs and applications.

With custom service policies, customers can define and manage unique AirGroup services that are not part of the standard predefined set. This customization allows organizations to integrate specialized services, applications, or devices into their network while still benefiting from AirGroup’s service discovery and access control capabilities.

For example, a company may have proprietary in-house applications or devices that need to be discoverable and accessible by authorized users within their network. By utilizing the custom service feature, administrators can set up policies that govern the visibility and accessibility of these custom services based on user roles, VLAN assignments while maintaining the security and control provided by AirGroup.

In essence, custom service policies within AirGroup empower organizations to expand and adapt their service discovery ecosystem beyond the predefined services, enhancing the network’s versatility and accommodating their specific requirements.

Custom services can be configured exclusively at the Global level, as shown in the following screen capture, which illustrates the manual addition of a custom service. Typically, a single AirGroup service may encompass multiple service IDs, and manually configuring these IDs can be a laborious and error-prone process. To streamline this procedure, the “List” window in the AirGroup section at the Global level offers a comprehensive list of over 140 suppressed services, covering nearly all mDNS/SSDP services available in the market.

Users can conveniently search for and highlight the specific service they wish to add. As a result, the service IDs associated with the selected service are automatically incorporated. When creating a custom service, users need only provide the service name and configure user role/VLAN policies. The following screen capture serves as an illustrative example of how to add a custom service via the Suppress Service list within the “List” window. This feature simplifies the process and enhances the accuracy of custom service configuration within Aruba AirGroup.

Add a custom service via suppressed services list at Global level

Licensing Requirements

Access points have two options for licensing in Central: the AP Foundation license and the AP Advanced license.

In earlier versions of Central, the AP Foundation license only allowed the use of the seven predefined AirGroup services: AirPlay, AirPrint, Google Cast, Amazon TV, DIAL, DLNA Print, and DLNA Media. When originally deployed, the AP Advanced license was required for custom services, but this is no longer the case. Now, the AP Foundation license supports both the seven predefined AirGroup services and custom services.

Monitoring

Aruba AirGroup offers comprehensive monitoring capabilities, enabling administrators to track various aspects of service discovery. This includes monitoring server availability for specific user roles or VLANs, as well as monitoring server and service entries, which provide information about associated VLANs, user roles, and usernames, among other details.


1.3 - Personal Device Visibility and Sharing

Description of the workflow, configuration of Personal wireless AirGroup servers and conversion process to public servers.

Aruba’s AirGroup personal device visibility and sharing feature, once activated in Central, leverages the capabilities of Aruba’s network infrastructure. This allows clients to share various wireless devices, including printers, smart TVs, IoT devices, and more. The streamlined sharing process enhances the client experience, simplifying wireless device discovery and access without the need for intricate setups or additional software. Clients can initiate sharing through the Aruba Cloud Guest Portal, adding further convenience to the process.

Personal devices are exclusively shared with wireless clients authenticated through the UPN (User Principal Name) format. In the current phase, only MPSK AES SSID device owners can share their devices, and the Aruba CloudAuth server serves as the supported authentication server for the MPSK SSID. Sharing a wireless personal device is possible with either MPSK AES or 802.1X authenticated clients, facilitated through the “Manage my devices” portal link hosted by Cloud Guest at the MPSK Wi-Fi password portal. However, this is contingent upon the availability of wireless sharing clients’ user entries in the identity repository utilized by Cloud Auth. For example, if an 802.1X client is authenticated by another RADIUS server, such as HPE Aruba Networking ClearPass, and the same client’s user entry is available in the identity repository used by the Cloud Auth server, then the wireless personal device owner can share with this client.

This feature introduces the concept of “Personal Servers or Devices” and “Public Servers or Devices”:

  • Personal Servers or Devices: Wireless devices associated with a username are classified as “Personal Devices” by default when the Personal AirGroup feature is enabled, with the option to manually change the classification to public.

  • Public Servers or Devices: Devices without a username, or associated with a username in the public server list, are automatically classified as “Public Devices” when the Personal AirGroup feature is enabled at the Global level. When the Personal AirGroup feature is disabled, all AirGroup servers are considered public servers.

Here’s how personal device visibility and sharing typically work in Aruba AirGroup:

  • Device Discovery and Announcement: AirGroup-enabled wireless devices use mDNS or SSDP to announce their presence on the network, providing information about the device and the services it offers.

  • User Identification and Access Control: AirGroup distinguishes wireless personal devices owned by individual users using the UPN format username. Personal devices are automatically accessible by the device owner with the same username or through sharing client lists configured in the Cloud Guest portal.

  • User-Centric Experience: With personal device visibility and sharing, wireless users can easily locate and interact with their own or other clients’ devices, as well as discover shared devices within their authorized scope. This simplifies tasks like printing, streaming, or accessing resources without the need to configure device-specific settings.

  • Security and Privacy: AirGroup ensures secure device sharing and respects user privacy. Administrators can define granular service policies, preventing unauthorized access. User authentication ensures that only sharing clients can share and access their devices.

  • Cross-VLAN Sharing: In segmented VLAN environments, AirGroup facilitates device sharing across different VLANs. This feature is useful when users in different departments or areas need to share resources while maintaining network segregation.

  • User Control and Management: Administrators can centrally manage sharing policies, configuring rules, permissions, and visibility settings based on organizational requirements using user roles, VLANs, and service IDs.

Personal device visibility and sharing in Aruba AirGroup contribute to a collaborative and efficient networking environment, empowering users to interact with both their personal devices and shared resources within the organization.

Workflow

The process of sharing personal devices is compatible with MPSK servers and MPSK/dot1x clients which are authenticated via CloudAuth in Central. Here’s a breakdown of the workflow:

  • The AirGroup server undergoes MPSK authentication with the CloudAuth server in steps 1 to 4. The server’s username is transmitted to the AP through the username Vendor-Specific Attribute (VSA) at step 3.

  • Subsequently, after receiving mDNS advertisement packets at step 5, the AP establishes a Discover cache entry at step 6 for the AirGroup server directly connected to it. The Discover cache update is then forwarded to Central at step 7.

  • If the personal device visibility and sharing feature is active and the server’s email address is not in the list of public server usernames, the AirGroup service in Central fetches the sharing policy for this specific server from the server sharing policy database at step 8.

  • Any device owner can share their AirGroup server via the “Manage my devices” portal hosted by Cloud Guest. The portal page link is conveniently available at the bottom of the MPSK Wi-Fi password portal page, and access instructions are detailed in the accompanying screen captures.

  • At step 9, a Central cache entry is generated for this server, contingent upon its compliance with the AirGroup policy.

  • The Central cache updates are disseminated to neighboring APs, specifically those within a one-hop distance from the AP to which the AirGroup server is connected.

  • Consequently, all Access Points within the RF neighborhood establish a Central cache for this specific server. This cache becomes instrumental in handling future mDNS queries.

Workflow for personal device visibility and sharing

It’s crucial to note that the sharing radius of the AirGroup server’s visibility is confined to a one-hop RF neighborhood. Effective interaction between the client and the AirGroup server is only achievable when both are within the proximity of a single-hop RF neighborhood.

Configuration

  • Enable personal device visibility and sharing at Global level.

  • Open the MPSK management window under the Security -> Authentication & Policy section at the Global level.

  • Copy the MPSK password portal page URL and distribute it to the personal device owners.

  • The personal device owners log into the MPSK password portal and click the “Manage my devices” button, which directs them to the personal device sharing portal page hosted by Cloud Guest.

MPSK clients Wi-Fi password portal and “Manage my device” page link

  • Within the personal device sharing configuration portal, AirGroup server owners can share their devices with other clients or remove sharing access, allowing each device to be shared with a maximum of 8 clients.

Personal device sharing portal

Converting Personal Wireless AirGroup Servers into Public Servers

When a wireless AirGroup server is associated with an AP and authenticated with a username, its initial device visibility type is always set to “Personal.” This is illustrated in the example of the server logged in as conf-room1@abc.com in the following screen capture. However, if there is a requirement to make this wireless AirGroup server a public server, accessible to the broader RF neighborhood, you can follow these steps:

  • In the list window of AirGroup servers at the Global level, locate the server entry that you want to share.

  • Highlight the specific server entry.

  • Click the “+” sign.

  • This action adds the server’s username to the list of public server usernames, as shown in the following screen capture.

  • As a result, the server’s visibility status will change from “Personal” to “Public.” It will now be visible to clients within the same RF neighborhood instead of only being visible to the same user.

By following these steps, you can effectively convert a wireless personal AirGroup server into a public server, expanding its accessibility to clients in the RF neighborhood.

Configuration of converting a wireless personal device into a public server

List of usernames associated with public server

1.4 - Survivability

Description of AirGroup’s survivability mechanisms to handle network outages.

Survivability in Aruba AirGroup is a crucial aspect that ensures uninterrupted service discovery even when APs lose their connection to Aruba Central, the centralized management platform. This feature is vital for maintaining the functionality and accessibility of AirGroup services in scenarios where network connectivity to the central management platform is disrupted. Here’s an overview of how survivability is managed in Aruba AirGroup:

  • Local Central Cache: APs in an Aruba AirGroup deployment maintain a local Central cache of service discovery information. This cache includes essential data about AirGroup services, servers, and associated policies. In the event of a network interruption to Aruba Central, this local Central cache allows APs to continue serving AirGroup services based on the last synchronized information.

  • Service Continuity: The local Central cache enables APs to continue responding to service discovery requests from clients, even when they are unable to communicate with Aruba Central. This ensures that AirGroup services remain accessible to users within the AP’s coverage area, minimizing disruptions.

  • Cache Synchronization: When the network connection to Aruba Central is restored, APs send Discover cache updates and synchronize their Central cache with the latest information from the AirGroup service in Central. This process helps maintain consistency across the network and incorporates any changes or policies made during the network outage.

  • Delta Updates: Aruba AirGroup employs delta update mechanisms to transmit only the changes or updates to the local Central cache, rather than sending the entire cache. This efficient data transfer minimizes bandwidth usage during cache synchronization.

  • Network Resilience: To further enhance survivability, organizations may implement network redundancy and failover mechanisms to reduce the risk of network outages affecting AirGroup operations.

In summary, Aruba AirGroup’s survivability mechanisms ensure that service discovery operations continue seamlessly even in the absence of a connection to Aruba Central. By maintaining a local Central cache and employing efficient synchronization methods, AirGroup enables uninterrupted access to essential services for users, enhancing the reliability and resilience of the network. These features are critical for organizations that prioritize service availability and seamless user experiences.

2 - Roaming and the Key Management Service

Deep dive into how roaming is accomplished in AOS 10 and the Key Management Service (KMS) that helps to enable the process.

The Key Management Service (KMS) is a novel addition to HPE Aruba Networking Wireless Operating System Software 10, designed with the specific purpose of facilitating seamless wireless user roaming and enhancing network performance. Its primary function is to distribute critical information, including the Pairwise Master Key (PMK) or 802.11r R1 key, among neighboring APs. This exchange enables fast roaming, ensuring a smooth and uninterrupted user experience in the wireless network.

In addition to key sharing, KMS serves as a conduit for disseminating crucial user-related data. This includes details such as VLAN assignments, user role information, and, when machine authentication is in use, the authentication state of the user’s device. These data elements collectively form a station record for each user, which plays a pivotal role in the roaming process.

The core responsibility of KMS is to efficiently communicate these station records to neighboring APs, thereby enabling them to provide uninterrupted service as users move between APs. The list of neighboring APs is sourced from the AirMatch service, which plays a complementary role in optimizing wireless network performance.

Both KMS and AirMatch services operate within the broader framework of HPE Aruba Networking Central and work collaboratively to facilitate the key-sharing process.

Workflows

Initial state

In this workflow, we delve into the key stages of how KMS manages and disseminates vital data, such as Pairwise Master Keys (PMKs), 802.11r R1 keys, VLAN assignments, user roles, and authentication states, to create a seamless and secure wireless user experience.

Key Management Service workflow

  1. A wireless user initiates association with an Access Point (AP1) and undergoes 802.1X authentication, resulting in the acquisition of either the Pairwise Master Key (PMK) or the derivation of the R0 key from the master session key, depending on whether or not the 802.11r protocol is enabled.

  2. Subsequently, AP1 transmits the user’s station record to KMS located within HPE Aruba Networking Central. This comprehensive station record contains user-specific details, including the PMK or R0 key, VLAN ID, user role, and machine authentication state (if machine authentication is enabled).

  3. Upon receipt of the user’s station record, KMS stores this information in its cache and simultaneously retrieves the list of neighboring APs associated with AP1 through the AirMatch service.

  4. Leveraging the list of neighboring APs for AP1, KMS accesses the cached user station record, including the PMK or R0 key. If the network employs the 802.11r fast roaming protocol, KMS proceeds to generate R1 keys for each of the neighboring APs. However, if the Opportunistic Key Caching (OKC) roaming protocol is utilized, the R1 key generation step is omitted.

  5. To ensure seamless roaming for the user, KMS disseminates the user’s station record to all neighboring APs connected to AP1. Consequently, when the user later transitions to AP2 or AP3, a full authentication process is not required. AP2 or AP3 already possess the user’s PMK or R1 key, allowing for streamlined four-way key exchange between the user and the respective AP, simplifying and expediting the roaming process.
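For readers who want to see step 4 concretely, the sketch below derives a PMK-R1 following the KDF construction published in IEEE 802.11 (counter-mode KDF with HMAC-SHA-256, where PMK-R1 = KDF-256(PMK-R0, "FT-R1", R1KH-ID || S1KH-ID)). KMS’s internal implementation is not exposed, so treat this as a reference sketch of the standard rather than Aruba’s code.

```python
import hashlib
import hmac
import struct

def kdf_sha256(key: bytes, label: bytes, context: bytes, bits: int = 256) -> bytes:
    """Counter-mode KDF from IEEE 802.11, built on HMAC-SHA-256."""
    out = b""
    i = 1
    while len(out) * 8 < bits:
        data = struct.pack("<H", i) + label + context + struct.pack("<H", bits)
        out += hmac.new(key, data, hashlib.sha256).digest()
        i += 1
    return out[:bits // 8]

def derive_pmk_r1(pmk_r0: bytes, r1kh_id: bytes, s1kh_id: bytes) -> bytes:
    """PMK-R1 = KDF-256(PMK-R0, "FT-R1", R1KH-ID || S1KH-ID)."""
    return kdf_sha256(pmk_r0, b"FT-R1", r1kh_id + s1kh_id)

# Illustrative values: a 32-byte PMK-R0 and 6-byte key-holder MAC addresses.
pmk_r1 = derive_pmk_r1(b"\x00" * 32, b"\xaa" * 6, b"\xbb" * 6)
```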

Bridged user roaming

AOS 10 introduces two distinct user types: bridged users and tunneled users. Bridged users encompass all individuals connected to a bridge-mode SSID. In this configuration, user traffic remains localized within the AP’s network and is not routed through a gateway. For bridged users, the associated VLANs are established on the uplink switches of APs and are permitted by the uplink ports of these APs.

Illustrated below is an example of a bridged user engaging in fast roaming by leveraging the capabilities of KMS.

Bridged user roaming workflow with KMS

  1. Following the initial association with AP1 and the completion of the first-time full authentication, the wireless user eventually transitions to neighboring AP2 during the course of their wireless session.

  2. AP2 promptly updates KMS with the user’s new location, ensuring seamless handoff within the network.

  3. KMS, driven by the user’s movement to AP2, retrieves the list of neighboring APs specific to AP2 from the AirMatch service.

  4. Building upon this list of neighboring APs for AP2, KMS references the cached user station record, which includes PMK or R0 key, and generates R1 keys for each neighboring AP. This process is contingent on the utilization of the 802.11r fast roaming protocol, while the R1 key generation step is omitted if OKC roaming protocol is in use.

  5. KMS commences the distribution of the user station record solely to those neighboring APs of AP2 that do not possess a cache of the user station record. This process avoids redundancy by excluding neighbors common to both AP1 and AP2.

  6. AP2 initiates the synchronization of user sessions by transmitting a broadcast user session sync request message across the user VLAN. This synchronization action pertains to the top 120 user datapath sessions.

  7. The user, now associated with AP2, engages in a four-way key exchange with AP2 as part of the seamless roaming process.

  8. AP2 effectively communicates with AP1, instructing it to clear all entries related to the user, such as datapath entries. Subsequently, the user resumes data transmission through the new access point, AP2, ensuring a smooth and uninterrupted wireless experience.
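The de-duplication in step 5 reduces to a set difference over AirMatch neighbor lists. A minimal sketch, with illustrative names:

```python
def aps_needing_record(neighbors_new_ap, neighbors_old_ap, old_ap):
    """Return the APs that still need the user's station record.

    Neighbors shared by the old and new AP (plus the old AP itself)
    already hold the record, so KMS skips them.
    """
    already_have = set(neighbors_old_ap) | {old_ap}
    return set(neighbors_new_ap) - already_have

# Example: AP1's neighbors were {AP2, AP3}; AP2's neighbors are {AP1, AP3, AP4}.
# Only AP4 still needs the station record.
print(aps_needing_record({"AP1", "AP3", "AP4"}, {"AP2", "AP3"}, "AP1"))  # {'AP4'}
```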

Tunneled user roaming

In the realm of AOS 10, the implementation of a gateway cluster is highly recommended when network scalability becomes a primary concern. As networks grow to encompass a substantial number of APs, typically exceeding 500, or serve a significant client base that surpasses 5000 users, the introduction of a gateway cluster becomes essential. This architectural choice offers a multitude of advantages, including support for large numbers of APs and clients, centralized management of user VLANs, the establishment of unified firewall policies spanning both wireless and wired users, RADIUS proxy capabilities, and more.

With the presence of gateways, wireless users adopt a tunneled user configuration, where all of their network traffic is efficiently tunneled through the gateway cluster. This configuration eliminates the need for individual APs to manage user VLANs, centralizing this function at the gateway level. One notable advantage is that APs no longer need to belong to the same layer 2 domain for smooth client roaming. Consequently, when a tunneled user roams between different APs, their user session synchronization relies on seamless communication with their designated User Designated Gateway (UDG).

Illustrated below is a tunneled user executing fast roaming facilitated by KMS. This approach ensures network scalability while maintaining seamless and uninterrupted user experiences.

Tunneled user roaming workflow with KMS

  1. Following a wireless user’s initial association with AP1 and the completion of full authentication, the user may eventually roam to a neighboring AP2.

  2. AP2 promptly updates KMS with the user’s new location.

  3. KMS, in turn, retrieves the list of neighbor APs associated with AP2 from the AirMatch service.

  4. Leveraging this list, KMS fetches the user station record, encompassing the PMK or R0 key from its cache, and proceeds to generate the R1 keys for each neighboring AP present in the list if 802.11r fast roaming protocol is used for roaming.

  5. KMS initiates the distribution of the user record to the neighboring APs of AP2 that lack the cached user station record. KMS refrains from repeating the station record distribution process for any APs that happen to be neighbors to both AP1 and AP2.

  6. AP2 broadcasts a user session synchronization request message over the user VLAN.

  7. The User Designated Gateway (UDG) forwards this session synchronization message to the user’s original AP, AP1.

  8. AP2 proceeds to synchronize the top 120 user datapath sessions with AP1.

  9. A start accounting notice is dispatched by AP2 to the UDG.

  10. When the UDG receives the accounting start packet, it updates the bridge or user entry to direct traffic to the AP2 tunnel. If the user is the first client from that VLAN on AP2, the multicast group is updated with the client’s VLAN information.

  11. The user embarks on a four-way key exchange with AP2.

  12. AP2 then notifies AP1 to perform cleanup, which includes purging all entries related to the user, such as datapath entries. Following this, the user begins forwarding traffic through AP2.

Non-fast-roaming users

In older versions of AOS 10, user cache synchronization, which included user key information, was exclusively reserved for fast-roaming users like 802.11r users, OKC users, or MPSK users. However, a pressing need arose for cache synchronization among non-fast-roaming users, such as Captive Portal users and MAC authentication users. This need stems from the desire to prevent reauthentication when these users transition from one access point to another. To address this requirement, cache synchronization between neighboring APs was introduced and has been supported from AOS 10.4 onwards.

Cache classification

To optimize cache distribution, cache entries are classified into three distinct types:

  1. Partial Roam Cache: This cache structure exclusively contains essential information necessary during roaming. For non-fast-roaming users, the partial roam cache is synchronized with neighboring APs.

  2. Full Roam Cache: In addition to the data found in the partial roam cache, the full roam cache includes supplementary station-related state information that may not be immediately required during roaming. The full roam cache entry is consistently available in KMS and on the AP to which the client is currently associated.

  3. Key Cache: This specific cache structure is exclusively employed by fast-roaming users. It houses station keys essential for fast roaming, including PMK (Pairwise Master Key), PMKR0, PMKR1 (per-BSSID), and MPSK, alongside comprehensive full roam cache information.
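The three cache types can be pictured as nested records. The field names follow the lists above; the types and defaults are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class PartialRoamCache:
    """Minimum state a neighboring AP needs to admit a roaming client."""
    mac: str
    user_role: str
    vlan: int
    username: str
    essid: str
    seq_num: int

@dataclass
class FullRoamCache(PartialRoamCache):
    """Partial cache plus station state held only by KMS and the home AP."""
    class_id: Optional[str] = None
    multi_session_id: Optional[str] = None
    idle_timeout: int = 300        # illustrative default, seconds
    session_timeout: int = 86400   # illustrative default, seconds

@dataclass
class KeyCache(FullRoamCache):
    """Full cache plus the station keys that make fast roaming possible."""
    pmk: bytes = b""
    pmk_r0: bytes = b""
    pmk_r1_per_bssid: Dict[str, bytes] = field(default_factory=dict)
    mpsk: bytes = b""
```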

Workflows

Initial state

The diagram below provides an overview of the process for creating and synchronizing cache entries among neighboring APs for non-fast-roaming users.

Cache entry creation and synchronization for non-fast-roaming users

  1. The user establishes a connection with the AP and successfully completes the authentication process.

  2. In this step, the AP generates a full roam cache entry. Within this full cache entry, the partial roam cache information includes user-specific details such as user role, user VLAN, username, ESSID, and sequence number. In addition to the partial roam cache, the full cache incorporates various user state attributes like Class ID, multi-session ID, idle/session timeout, and more.

  3. The AP transmits the full roam cache information of the user to KMS.

  4. KMS retrieves the list of neighboring APs associated with this particular AP.

  5. KMS proceeds to distribute the partial cache information of the user to all the neighboring APs linked to the same AP. This ensures that neighboring APs possess the essential cache data for seamless user roaming and authentication.

Roaming

The roaming workflow for non-fast-roaming users closely resembles that of fast-roaming users, with a notable distinction: the complete roam cache is exclusively retained by the AP and KMS, while only a partial roam cache is distributed to neighboring APs.

Illustrated below are the primary steps in the roaming process for non-fast-roaming clients.

Non-fast-roaming user roaming workflow with KMS

  1. The user initiates a roam from AP1 to AP2.

  2. AP2 transmits a roaming notification to KMS.

  3. KMS retrieves the list of neighboring APs for AP2 from the AirMatch service.

  4. KMS dispatches the partial roam cache for this user to the neighboring APs of AP2, excluding those that overlap with AP1. For instance, in this scenario, AP3 is a common neighbor of both AP1 and AP2. Since AP3 already received the partial roam cache when the user initially connected to AP1, KMS only sends the partial roam cache to AP4 at this stage.

  5. AP2 sends a broadcast session synchronization request within the user’s VLAN to AP1 in an underlay scenario, to AP1 via AP2’s UDG in an overlay scenario, or within the default VLAN of the SSID if the cache is unavailable on AP2.

  6. AP1 responds to the session synchronization request by sharing the top 120 user sessions.

  7. AP2 forwards a user move request to AP1.

  8. AP1 acknowledges the move request.

  9. KMS dispatches the user’s complete roam cache to AP2, to which the user has roamed.

  10. AP2 initiates an accounting start message to AP1 in an underlay case, or to AP2’s UDG in an overlay case.

  11. AP1 undertakes user entry cleanup, deletes the user’s full roam cache, and installs the partial roam cache. In an overlay scenario, AP2’s UDG updates the bridge or user entry to direct traffic toward the AP2 tunnel. If the user is the first client on that VLAN on AP2, the multicast group is updated with the client’s VLAN information.

Configuration

To configure fast roaming in AOS 10, follow these steps:

  1. Navigate to the WLANs section and select the specific SSID you want to configure.

  2. Access the Security tab on the AP configuration page.

Fast roaming configuration

By default, 802.11r fast roaming is enabled, while OKC is disabled.

For optimal 802.11r configuration, it is highly recommended to set up the Mobility Domain Identification (MDID). MDID represents a cluster of APs that create a continuous radio frequency space, allowing 802.11r R1 keys for devices to be shared and enabling fast roaming.

Additionally, it is recommended to enable 802.11k. This standard facilitates swift AP discovery for devices searching for available roaming targets by creating an optimized channel list. As the signal strength from the current AP weakens, the device scans for target APs based on this list.

When 802.11k is enabled, 802.11v is automatically activated in the background. 802.11v facilitates BSS (Basic Service Set) transition messages between APs and wireless devices. These messages exchange information to help guide the device to a better AP during the 802.11r fast roaming process.

Verification

Command Line Interface

AP CLI command for checking the PMK or R1 key caching of wireless users of the AP:

show ap pmkcache

APIs

  • Retrieving the neighbor APs list for an AP:

    https://<central-url>/airmatch/ap_nbr_graph/v1/Ap/NeighborList/<AP Serial Number>

  • Retrieving the client record:

    https://<app-url>/keymgmt/v1/keycache/{client_mac}

  • Retrieving the encryption key hash:

    https://<app-url>/keymgmt/v1/keyhash

  • Retrieving the client key synced AP list:

    https://<app-url>/keymgmt/v1/syncedaplist/{client_mac}

  • Retrieving the stats per AP:

    https://<app-url>/keymgmt/v1/Stats/ap/{AP_Serial}

  • Checking on the health of KMS:

    https://<app-url>/keymgmt/health
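A short example of calling two of these endpoints with Python’s requests library is shown below. The bearer-token header reflects Central’s OAuth-based API access and is an assumption here; substitute the base URL and token for your region, and note that the <...> placeholders are kept from the list above.

```python
import requests  # pip install requests

BASE = "https://<central-url>"   # the <central-url>/<app-url> host for your region
TOKEN = "<oauth-access-token>"   # obtained via Central's OAuth flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def get_neighbor_aps(ap_serial: str) -> dict:
    """Fetch the AirMatch neighbor list that KMS uses for key distribution."""
    url = f"{BASE}/airmatch/ap_nbr_graph/v1/Ap/NeighborList/{ap_serial}"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def get_client_record(client_mac: str) -> dict:
    """Fetch the cached station record for one client from KMS."""
    url = f"{BASE}/keymgmt/v1/keycache/{client_mac}"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()
```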

Survivability

Client roaming

In scenarios where connectivity to HPE Aruba Networking Central is lost during a roaming event, the station records and roam cache information of existing users have typically been synchronized among neighboring APs. Consequently, the fast roaming experience for these users remains unaffected.

It is, however, possible that during a network outage, the station records or cache information for new users cannot be synchronized among neighboring APs. In this scenario:

  • For new users who roam during this period, their user devices will undergo full authentication during the roaming event.

  • Despite the full authentication process, these users will continue to enjoy uninterrupted service.

In summary, while connectivity issues with HPE Aruba Networking Central may necessitate full authentication for new users, it does not disrupt their ongoing communications on the network.

Cloud Fallback

In light of the earlier sections detailing user roaming workflows, it is important to highlight that there are two specific steps in which the new AP might not receive a response from the previous AP due to a timeout in the network:

  • Datapath session synchronization: In this phase, the new AP attempts to synchronize datapath sessions with the previous AP.

  • User state cleanup in the previous AP: During this step, the new AP requests the previous AP to clean up user-related information.

To address potential timeouts in these situations, KMS employs the Cloud Fallback mechanism. When a session synchronization or user state cleanup request times out, the new AP communicates with KMS to report the lack of response from the previous AP. KMS then searches the client-AP association table. If a client entry is found, KMS facilitates the communication between both APs, enabling them to coordinate the above-mentioned steps effectively.

3 - IoT Operations

Fundamentals for the IoT offerings in areas of BLE, Zigbee and USB based IoT devices.

Aruba Central supports transporting IoT data over the enterprise WLAN. APs receive data from IoT devices, sending the metadata for these devices to Aruba Central and the IoT data to external servers through IoT Connectors. The IoT Connector aggregates the device data, performs edge compute, and runs business logic on the raw data before sending the metadata and IoT data. The metadata for all IoT devices is displayed in the IoT Operations dashboard in Aruba Central. Partner-developed applications, running on the IoT Connector, can be used to send the IoT data to external servers.

While enabling new capabilities to address real business needs, this proliferation of IoT devices at the edge creates a new set of challenges for IT. IoT devices use a variety of different physical layer connectivity types and communication protocols. Vendor-specific IoT gateways are often required to manage those devices and collect their data. IoT gateways obscure IoT devices on the network, making it difficult—if not impossible—to understand at a granular level what is connected to the network and where device data is going. Security is always front-and-center when it comes to IoT because many IoT devices are fundamentally untrustworthy, and the lack of visibility creates greater risk. IoT Operations within Aruba Central provides a solution to all of these problems.

Aruba’s IoT ecosystem relies mainly on its partner integrations. Aruba provides a transport medium, in the form of its APs and an IoT Connector at the edge, for the data sensed by IoT devices from different vendors, sending that data securely and efficiently to the partners’ backends.

Additionally, Aruba offers BLE based tags and beacons for Meridian based location services. Tags are mainly used for asset tracking and beacons are used for indoor wayfinding and identifying device location. To learn more about Meridian, check out the Meridian Platform documentation.

Solution Components

The IoT Operations consists of the following three solution components:

IoT Dashboard

The IoT dashboard provides a unified view of all your IoT Connectors, the Access Points sending IoT data to these connectors, the Apps that are currently installed, and a comprehensive list of all the IoT devices/sensors that are being heard by your Access Points.

It gives a detailed overview of how your IoT network is performing. The IoT dashboard provides a view of non-Wi-Fi IoT devices that would otherwise be obscured by vendor or device-specific hardware. IT can monitor these devices from the first moment they connect to APs anywhere in the environment and see exactly which AP each device is connected to. Once an appropriate App is installed on the IoT connector, previously unknown devices of that type can be automatically and accurately classified, so network administrators know exactly what the IoT devices are, and where the IoT devices are, with confidence. To learn more about monitoring IoT operations, refer to Monitoring HPE Aruba Networking IoT Operations.

Representation of IoT Ops Home Page

IoT App Store

The IoT app store takes the complexity out of deploying new IoT use cases within the organization. Simply visit the IoT app store—located within Central—and use the store’s intuitive interface to browse ArubaEdge IoT applications certified to integrate seamlessly with our networks. Unlike directory-style marketplaces that simply provide pointers to compatible applications, the IoT app store provides certified applications for immediate download and activation with just a few clicks of the mouse. To install a partner-developed app, refer to Installing a Partner-Developed App.

Using the IoT app store also simplifies the complex and often confusing task of IoT device-application configuration. After the application is installed on the IoT Connector, the AP can be easily configured to securely transport the device’s telemetry data to the appropriate destination, whether that’s an on-premises server or the cloud. From BLE location tags, beacons, and sensors to Zigbee door locks, IoT deployment is simple—so you no longer need to rely on third-party integrators for custom development. To see which IoT apps are supported today, refer to the IoT Operations App Matrix.

IoT Operations App Store

IoT Connector

Intelligent Edge applications that require edge computing of IoT data have historically been some of the most difficult for IT to implement and manage. The challenges are particularly acute when it comes to processing IoT data. In some cases, an IoT Connector at the edge is needed to parse or decode IoT telemetry data from Central-managed APs and make the data available to IoT applications, whether hosted on premises or in the cloud (e.g., Microsoft Azure IoT Hub). In other instances, the AP itself can be used as a connector to securely transport the data.

The IoT Connector aggregates the device data, performs edge-compute, and runs business logic on the raw data before sending the dashboard metadata and IoT data. The IoT connector puts Intelligent Edge applications within reach—allowing IT to accommodate whatever technology transition comes next with the speed and ease of deploying a virtual machine or AP in the existing infrastructure. With the Aruba IoT connector component, it’s easy to provision multiple ArubaEdge IoT applications within the environment—it only takes a few clicks. The IoT connector virtual appliance is added as a new data collector within Central and then installed on the VM instance or AP. The administrator can then enable new IoT connectors through the IoT Operations guided user interface and see connectors in use—all within Central.

Each customer deployment can use a different IoT Connector size based on the scale of the deployment. IoT Operations supports the following scale specifications:

| Parameter | Mini VM | Small VM | Medium VM | DC-2000 |
|---|---|---|---|---|
| APs | 50 | 250 | 1000 | 1000 |
| BLE Devices | 2000 | 5000 | 20000 | 20000 |
| Zigbee Devices | 200 | 500 | 2000 | 2000 |

The IoT Connector sizes have the following resource specifications:

| Parameter | Mini VM | Small VM | Medium VM | DC-2000 |
|---|---|---|---|---|
| CPU (Cores) | 4 | 8 | 24 | 24 |
| Memory (GB) | 4 | 16 | 64 | 64 |
| Storage (GB) | 256 | 256 | 480 | 512 |

Deployment Models

IoT Connector is an integral part of the IoT Operations solution, providing connectivity and edge processing for IoT use cases. These connectors are used to parse or decode IoT telemetry data from Central-managed APs and make the data available to partner IoT applications that are hosted either on premises or in the cloud. There are two deployment types available: virtual machine (VM)-based using VMware ESXi, or Aruba AP-based. For downloading and deploying an IoT Connector, refer to Downloading IoT Connector.

VM based IoT Connector

A VM-based IoT Connector leverages Aruba’s Data Collector architecture and is provisioned within Central. Configuration of the IoT Connector is provided within IoT Operations, using the guided user interface.

IoT Operations Architectural Diagram

AP as IoT Connector

For customers who find it difficult to deploy and manage a separate machine outside of their wireless deployment, Aruba provides the option of using existing AOS 10 APs as IoT Connectors. In this model, the function of the IoT Connector is collapsed into the AP.

However, the scale and capacity of the IoT Connector are lower when it runs on an AP. In this model, only classifier apps such as iBeacon, Eddystone, and Blyott can be used, as the installation of heavy containerized apps like Dormakaba is not supported. Support for container-based apps inside APs is planned for future releases of Aruba Central. For creating an AP-based IoT Connector, refer to Creating AP-based IoT Connector.

IoT Operations Architectural Diagram with AP acting as IoT Connector

Types of IoT Solutions

BLE based

BLE, or Bluetooth Low Energy, based IoT solutions are the most common of all the types of IoT solutions, mainly because BLE as a technology is widespread, easily available, relatively low-effort to implement, and easy to connect. BLE, which was introduced with Bluetooth version 4.0 over a decade ago, has found its way into a variety of different applications and solutions. Today, most BLE-based IoT devices that Aruba supports are based on BLE 5.0.

How it works with IoT Operations

Most BLE devices are designed to broadcast their BLE beacons, consisting of raw data along with their payloads, at pre-defined regular intervals. Once a radio profile is configured and enabled on Aruba APs, the APs start listening to these beacons and transport them to the IoT Connector. Within the IoT Connector, apps are installed to classify these devices, and various filters can be applied to forward only the data that is relevant to the partner backend. For simple classifier apps such as Aruba Devices, iBeacon, Eddystone, Blyott, or Minew, there is no need for a container-based workflow in the backend, which makes such apps very easy and quick to build and deploy. More complex solutions, such as BLE-based door locks that utilize southbound API options, necessitate the use of a container.
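
To make the classifier idea concrete, the sketch below shows the kind of pattern matching a simple classifier app performs on a raw BLE advertisement payload. The layout follows the publicly documented iBeacon format; the function name and example payload are illustrative, not Aruba’s implementation.

```python
# Minimal sketch of BLE classification, assuming the public iBeacon layout:
# Apple company ID 0x004C (little-endian), type 0x02, length 0x15, then a
# 16-byte UUID, 2-byte major, 2-byte minor, and 1-byte measured TX power.

def classify_ibeacon(mfg_data: bytes):
    """Return iBeacon fields if the manufacturer-specific data matches,
    otherwise None (i.e., the beacon belongs to some other device class)."""
    if len(mfg_data) < 25 or mfg_data[0:4] != b"\x4c\x00\x02\x15":
        return None
    return {
        "uuid": mfg_data[4:20].hex(),
        "major": int.from_bytes(mfg_data[20:22], "big"),
        "minor": int.from_bytes(mfg_data[22:24], "big"),
        "tx_power_dbm": int.from_bytes(mfg_data[24:25], "big", signed=True),
    }

# Example: UUID of all zeros, major 1, minor 7, measured power -59 dBm.
payload = bytes.fromhex("4c000215") + bytes(16) + b"\x00\x01\x00\x07\xc5"
print(classify_ibeacon(payload))
```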

Use Cases

High-value asset tracking, location tracking, and indoor navigation and wayfinding are currently the most common use cases for BLE-based solutions in IoT Operations on AOS 10.

Zigbee based

The AP’s built-in IoT radio, which supports 802.15.4 use cases such as Zigbee, is used to provide gateway services that relay Zigbee-based sensor data to a management server. As of today, Aruba mainly supports two smart door lock vendors as far as Zigbee-based solutions are concerned.

This allows an administrator to avoid deploying a network of ZigBee routers and gateways to provide connectivity to each door lock. A single network can handle both Wi-Fi and ZigBee devices. An AP from Aruba provides ZigBee gateway functionality that offers a global standard to connect many types of ZigBee networks to the Internet or to service providers.

ZigBee devices are of three kinds:

ZigBee Coordinator (ZC)—The ZC is the most capable device. It forms the root of the network tree and may bridge to other networks. There is only one ZC in each ZigBee network.

ZigBee Router (ZR)—A ZR runs an application function and may act as an intermediate router that transmits data from other devices.

ZigBee End Device (ZED)—A ZED contains enough functionality to communicate with the parent node (either a ZC or ZR). A ZED cannot relay data from other devices. This relationship allows the ZED to be asleep for a significant amount of time thereby using less battery.

An AP acts as a ZC and forms the ZigBee network. It selects the channel, PAN ID, security policy, and stack profile for a network. A ZC is the only device type that can start a ZigBee network, and each ZigBee network has only one ZC. After the ZC has started a network, it may allow new devices to join the network. It may also route data packets and communicate with other devices in the network. The Aruba solution does not utilize a ZR.
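
The network-formation parameters the AP chooses as ZC can be pictured as a small record, sketched below together with the permit-joining step used during device onboarding. The structure and values are illustrative assumptions, not Aruba configuration syntax.

```python
from dataclasses import dataclass

# Illustrative model of what a ZigBee Coordinator decides when forming a
# network; not Aruba's actual data structures.

@dataclass
class ZigbeeNetwork:
    channel: int           # 802.15.4 channel, typically 11-26 in 2.4 GHz
    pan_id: int            # 16-bit Personal Area Network identifier
    security_policy: str   # e.g., network-key based encryption
    stack_profile: int     # e.g., 2 for ZigBee PRO
    permit_joining: bool = False  # closed by default

    def open_join_window(self) -> None:
        # During onboarding, the ZC is put into permit-joining mode so a
        # ZED (such as a door lock) can join, one device at a time.
        self.permit_joining = True

net = ZigbeeNetwork(channel=15, pan_id=0x1A2B, security_policy="network-key", stack_profile=2)
net.open_join_window()
print(net)
```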

How it works with IoT Operations

Compared to BLE, Zigbee-based solutions differ in that devices need to connect directly to a coordinator. The configuration is generally more time- and labour-intensive, given the one-by-one nature of putting APs into permit-joining mode and connecting each lock.

Zigbee use cases almost always require container-based IoT apps, since edge processing is needed to transform the data in the payload. The two Zigbee-based door lock vendors supported today with IoT Operations require the transport details to be configured while installing the apps, as opposed to configuring a separate transport stream.

Use Cases

Smart Zigbee-based door locks comprise most of the currently supported Zigbee use cases. These are mainly seen in the hospitality industry and in enterprises that use smart buildings and facilities. These solutions provide immense ease of use, with entry via simple NFC-based key cards or even mobile phones, along with appropriate security and very detailed analytics. Features such as remote locking and unlocking of doors, key blocking, counts of how many times a door was locked or unlocked, and whether the latch was engaged are some of the basic smart features offered by these solutions.

USB based

All Aruba APs have a dedicated USB-A slot where supported external devices can be plugged in and powered. One major benefit is that this opens up use cases that are not natively available from the IoT chipset in the APs. Essentially, the AP can support proprietary protocols other than Wi-Fi, BLE, or Zigbee by making use of the USB slot.

One thing to note is that the USB slot is strictly governed by ACLs, so unless a supported vendor’s dongle is plugged in, the slot will not function or allow for connectivity.

How it works with IoT Operations

The vendors that are supported today can be divided into two categories: Ethernet-over-USB and Serial-over-USB. Hanshow and SoluM fall under the Ethernet-over-USB category, and EnOcean falls under the Serial-over-USB category.

Apart from installing the vendor app itself, a transport stream or the ‘AOS8’ app needs to be configured to specify the endpoint details.

Use Cases

Hanshow and SoluM make electronic shelf labels (ESLs), which are widely used in retail and warehouses. ESLs are replacing traditional price tags, which had to be managed manually and were cumbersome, requiring a lot of time, effort, and cost. With ESLs, all of this can be managed digitally through a central management server.

EnOcean’s USB dongle is used in conjunction with a variety of sensors. The EnOcean Alliance is a federation of nearly 400 product vendors, manufacturing more than 5,000 different lighting, temperature, humidity, air quality, security, safety, and power monitoring sensors and actuators.

The table below shows a summary of the available transport services and the corresponding supported server connection types and device class filters:

| IoT Transport Service | IoT Radio Connectivity | IoT Server Connectivity | Device Class Filter |
|---|---|---|---|
| BLE Telemetry | Aruba IoT radio | Telemetry-WebSocket, Telemetry-HTTPS | All BLE device classes |
| BLE Data | Aruba IoT radio | Telemetry-WebSocket, Azure-IoT-Hub | All BLE device classes |
| Serial Data | USB-to-Serial | Telemetry-WebSocket, Azure-IoT-Hub | serial-data |
| Zigbee Data | Aruba IoT radio Gen 2 | Telemetry-WebSocket | zsd |
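
For the Telemetry-WebSocket server connection type, the partner side is essentially a WebSocket endpoint that accepts a connection and consumes telemetry messages. The sketch below shows a minimal receiver using the `websockets` library; the port and the assumed JSON payload with a `deviceMac` field are illustrative only—the real payload format is defined by the transport interface and the app in use.

```python
# Minimal sketch of a partner-side Telemetry-WebSocket endpoint; the port
# and JSON payload shape are assumptions for illustration.
import asyncio
import json

import websockets  # pip install websockets


async def handle_telemetry(websocket):
    async for message in websocket:
        event = json.loads(message)  # real payload format is interface-defined
        print("telemetry for", event.get("deviceMac"), event)


async def main():
    # The IoT Connector's transport stream would be configured with this
    # server's URL (e.g., wss://iot.example.com:9000/telemetry) and a token.
    async with websockets.serve(handle_telemetry, "0.0.0.0", 9000):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```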

SD-Radio

SD-Radio, or SDR, is a new feature that allows IoT partners to load their proprietary firmware onto the built-in IoT radio of the APs and then communicate with their backend server. SDR can be enabled on both internal and external radios. Support for this feature on external radios enables the use of older AP models that do not have a built-in IoT radio.

When a radio is software defined, it can accept new firmware from an IoT app supported in Aruba Central. Once the radio switches to the SDR firmware, the app can communicate with the radio and run its own logic and protocols, which are transparent to Aruba.

Firmware images are stored on Openchanel’s file server. Central pushes the URL and API key to the Connector, and the Connector pushes them to the AP. The AP downloads the image and then starts upgrading. If the file server requires an SSL certificate for the image download, the AP image should embed that certificate in advance. If current APs do not have such a certificate, Central needs to support customer upload of certificates to APs.

Licensing

IoT Operations is available to Aruba Central customers using AOS 10 based APs, with Foundation and/or Advanced AP licenses. Separate licenses are not required for IoT Operations.

IoT Operations utilizes an IoT Connector to receive IoT data from APs and sends IoT device metadata to Aruba Central and IoT data to partner applications. The APs that are assigned to an IoT Connector utilize their IoT radios to act as IoT gateways for myriad IoT devices in the physical environment.

Aruba uses the license tier of the APs assigned to your IoT Connector to determine the user experience. Currently, that user experience is differentiated in the IoT Operations Application Store: you will have access either to all apps in the store or to a subset of them. Regardless of license tier, the supported scale and base functionality of IoT Operations are the same. In the future, Aruba may add new capabilities to IoT Operations which may extend across apps or even be offered independently of the apps themselves. The user experience is currently determined as follows (a compact restatement in code follows the list):

  • When all APs assigned to an IoT Connector have an Advanced AP license, you have access to all apps in the IoT Operations Application Store.

  • When at least one AP assigned to an IoT Connector has a Foundation AP license, you have access to a subset of apps in the IoT Operations Application Store. The apps that are available are shown in full color, while the apps that are unavailable are shaded grey.
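
Under the two bullets above, full app-store access requires that every AP assigned to the connector carries an Advanced license. The helper below is only a restatement of that rule, with hypothetical names.

```python
# Restatement of the licensing rule above; the function and return values
# are hypothetical, not an Aruba API.

def app_store_access(ap_license_tiers: list[str]) -> str:
    """Return 'all-apps' only when every assigned AP is Advanced."""
    if all(tier == "Advanced" for tier in ap_license_tiers):
        return "all-apps"
    return "subset"  # at least one Foundation AP limits the catalog

print(app_store_access(["Advanced", "Advanced"]))    # all-apps
print(app_store_access(["Advanced", "Foundation"]))  # subset
```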

Filters can be used in the IoT Operations App Store user interface to further refine your app search. For more information on HPE Aruba Networking Central licenses, refer to About HPE Aruba Networking Central Licensing.

Key Considerations and Setup

This section describes some of the key considerations and the brief steps involved in a successful implementation of IoT Operations:

  • APs need to run an ArubaOS 10 code version.

  • Configure IoT Radio Profiles and/or Zigbee Service Profiles

    This configuration is required to enable the AP’s IoT radio to listen to nearby BLE or Zigbee sensors. It is done outside of the IoT Operations home page, under the AP configuration > IoT section.

  • Deploy an IoT Connector

    IoT Connectors can be deployed and managed under Organization > Platform Integrations > Data Collectors. From here an OVA file can be downloaded, and the collector can be deployed and eventually registered to your Central account. Once everything is in place, you can start configuring the IoT Connector under the IoT Operations home page.

  • Assign APs to IoT Connector

    APs need to be assigned to a connector for them to transport the IoT data they sense to the connector. Multiple APs can be assigned to one connector, but each AP can be assigned to only one connector. This is done under Applications > Connectors > Gear Icon.

  • Install Apps

    Once inside your IoT Connector context, navigate to Installed Applications > Manage. This presents a list of all the available apps in the IoT Ops app store. To install any of them, simply open the app card and click Install. Most of the apps are classifier apps that don’t require any additional configuration; some apps might require additional transport-related configuration.

  • Create a Transport Stream

    For apps that are just classifier apps and don’t require any additional configuration, a separate transport stream must be configured to send the IoT data to an endpoint. This can be done either by using the ‘AOS8’ app or by creating a transport stream under Connector > Transports.

4 - Tunnel Orchestrator

The workings and survivability of the tunnel orchestrator service in HPE Aruba Networking Central.

The Overlay Tunnel Orchestrator (OTO) service architecture defines the working model of the Tunnel Orchestrator service between Aruba Gateways and APs.

Aruba supports an automated Tunnel Orchestrator for LAN Tunnels service for APs and Gateways deployed in campus WLANs. Based on the location of the devices, the tunnel orchestrator service establishes either GRE tunnels (at the branch site) or IPsec tunnels between Gateways and APs provisioned in an Aruba Central account (this is described in greater detail in the following sections). The tunnel orchestrator service, along with the AP Tunnel Agent and Gateway Tunnel Agent, creates and maintains the tunnels between APs and Gateways.

The Tunnel Orchestrator for LAN Tunnels service can be enabled either globally or on individual device groups. By default, the Tunnel Orchestrator for LAN Tunnels service is enabled for Gateways and AP devices provisioned in an Aruba Central account. The tunnel orchestrator automatically builds a tunnel mode network based on the tunnel endpoint preference that you configure in the WLAN SSID. The tunnel orchestrator selects the Gateway-AP pairs and decides the number of tunnels between the Gateway cluster and APs based on the virtual AP configuration.

Working

The Tunnel Orchestrator service leverages the tunnel orchestration technology of the Aruba SD-WAN solution released in 2018. In ArubaOS 8, the tunnels between gateways and APs were built through the legacy IPsec process: both end devices go through IKE phase 1 and phase 2 to authenticate each other, negotiate the authentication method and timers, and generate the encryption keys used for data traffic.

In ArubaOS 10, each gateway and AP pair does not directly go through IKE phase 1 and phase 2 to establish an IPsec tunnel between the gateway and the AP. Instead, the whole tunnel setup function is moved to the Tunnel Orchestrator service in Aruba Central which is responsible for generating the session keys and SPIs for the gateway and AP pair. These session keys are used to control traffic encryption between the AP and the gateway.

The different parts of the legacy IKE phase 1 process, such as authentication and encryption, timer negotiation, SPI pair generation, and encryption key generation, are skipped in the Tunnel Orchestrator service because all the gateways and APs registered and subscribed in Aruba Central are treated as trusted entities. Secondly, the encryption policy and timers used by Aruba gateways and APs are hardcoded, so no negotiation is required. In ArubaOS 10, the tunnel orchestrator completely takes over the job of the IKE process and generates the keys and SPIs used for the IPsec tunnels. The Tunnel Orchestrator service not only simplifies the configuration model and device software, but also increases the performance and scalability of the whole network.
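
Conceptually, the orchestrator replaces the per-pair IKE exchange with centrally generated keying material that is pushed to both ends. The sketch below illustrates that idea only; the SPI and key sizes, record layout, and push mechanism are assumptions, not Aruba’s wire format.

```python
# Conceptual sketch of centrally orchestrated keying: for each AP-gateway
# pair, Central generates SPIs and a session key and pushes them to both
# ends, so no IKE negotiation runs between the devices themselves.
import os
import secrets

def orchestrate_tunnels(aps, gateway_cluster):
    tunnels = []
    for ap in aps:
        for gw in gateway_cluster:  # one tunnel set per AP-gateway pair
            tunnels.append({
                "ap": ap,
                "gateway": gw,
                "spi_in": secrets.randbits(32),   # one SPI per direction
                "spi_out": secrets.randbits(32),
                "session_key": os.urandom(32),    # e.g., a 256-bit key
                "lifetime_hours": 36,             # AP-Gateway IPsec SA lifetime
            })
    return tunnels  # pushed to both endpoints over their Central connection

plan = orchestrate_tunnels(["AP-1"], ["GW-1", "GW-2"])
print(len(plan), "tunnel records orchestrated")
```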

AP Tunnel Agent and Gateway Tunnel Agent

The AP Tunnel Agent (ATA) and Gateway Tunnel Agent are the tunnel management modules in APs and Gateways respectively. They are responsible for handling all GRE and IPsec tunnel configurations and maintaining the status in APs and Gateways. ATA and Gateway Tunnel Agent provide the following functions:

  • Register the information of APs and Gateways with tunnel orchestrator service.

  • Receive Gateway cluster and tunnel information and distribute to other processes.

  • Create and maintain IPsec and GRE tunnels and survivability status.

Use Cases of Tunnel Orchestrator

In ArubaOS 10, there are two scenarios where the tunnel orchestrator orchestrates tunnels for a pair of end points:

APs and Gateways in a Campus Network

When a tunnel or mixed mode SSID is configured, the tunnel orchestrator orchestrates an IPsec tunnel and a GRE tunnel between each AP in the AP group and each gateway member in the gateway cluster. For example, in the case of one AP and two gateways in a cluster, as shown in the following diagram, the tunnel orchestrator orchestrates one IPsec tunnel and one GRE tunnel between the AP and the first gateway, and one IPsec tunnel and one GRE tunnel between the AP and the second gateway. The IPsec tunnel is used for control traffic between the AP and the gateway, such as bucketmap and nodelist updates. The GRE tunnel is used for user data traffic from all the configured SSIDs on the AP. The GRE tunnel is not encrypted by the IPsec tunnel, to avoid double encryption and performance degradation; the security of user traffic is guaranteed by the encryption method used in the SSID.

Micro Branch APs and Gateways for Remote Offices and Teleworkers

In a Micro Branch deployment, the GRE tunnel is encrypted by the IPsec tunnel. Since the data traffic between the AP and gateway goes through the WAN network, extra encryption becomes necessary.

GRE and IPsec Tunnels in Different Modes

Tunnel Orchestrator Workflow for Creation of Tunnel or Mixed WLAN SSIDs

By now, we know that tunnels are created between gateways and APs as soon as a tunneled SSID or a mixed SSID is created. At SSID creation time, the user needs to select the gateway cluster to which the APs will form tunnels. All the wireless client traffic on that SSID is then tunneled to the gateway cluster.

The process is automated to the extent that when a new AP or a gateway is added to the existing groups, Tunnel Orchestrator service will automatically build all the relevant new tunnels between all the devices.

If we were to look at the step-by-step workflow of the entire orchestration process when a new tunnel or mixed mode SSID is to be configured, it would include the following:

  1. Tunnel SSID or mixed mode SSID is configured, and the service configuration module notifies the Tunnel Orchestrator.

  2. The Tunnel Orchestrator queries the device configuration module about all the gateway members in the cluster on which the SSID terminates.

  3. The Tunnel Orchestrator queries the group management module about all the APs in the AP group.

  4. The Tunnel Orchestrator generates SPI and encryption keys of the IPsec tunnels and the GRE tunnels for each pair of AP and gateway member in the gateway cluster.

  5. All the tunnel details are pushed to the gateways and APs.

Tunnel Orchestrator Workflow

Key Considerations

To allow APs and Gateways to automatically establish tunnel modes, ensure that the following configuration tasks are completed:

  • Aruba Gateways are onboarded to a group in Aruba Central.

  • Aruba APs are provisioned in Aruba Central.

  • Aruba Gateways and APs are upgraded to ArubaOS 10.0.0.0 or a later software version.

  • A WLAN SSID with the tunnel forwarding mode is configured on the APs. When you create a new SSID, you must select the primary cluster name or Gateway where you want to terminate the tunneled traffic of the SSID. Optionally, you can select a backup cluster that is used when the primary cluster goes down completely. The APs establish tunnels with the Gateways in a Gateway cluster.

  • If the overlay IPsec tunnels initiated by APs to a VPN Concentrator use NAT traversal, the UDP 4500 port is enabled.

IPsec SA Rekeying

IPsec SA is created with a finite lifetime to ensure security. The AP-Gateway IPsec SA lifetime is 36 hours. The rekeying process ensures that there is no data loss during rekeying. Before the SA expires, the Tunnel Orchestrator performs rekeying for all the IPsec SAs of AP-Gateway pairs.

The following workflow explains the Tunnel Orchestrator IPsec SA rekeying process (a condensed sketch follows the list):

  1. Twelve hours before the IPsec lifetime expires, the Tunnel Orchestrator starts IPsec SA rekeying and orchestrates new keys for AP-Gateway pairs.

  2. New keys are pushed to the AP-Gateway pairs.

  3. The gateway sets up temporary ingress tunnels in learning mode.

  4. The AP sends a probe encrypted with the new key.

  5. In parallel, all the control traffic is still exchanged between the AP and the gateway through the old IPsec tunnel.

  6. After the gateway successfully decrypts the probe message with the new key, the gateway removes the temporary learning-mode tunnel and installs new ingress and egress tunnels with the new key.

  7. The gateway sends a probe response to the AP.

  8. The AP removes the temporary learning-mode tunnel and installs new ingress and egress tunnels with the new keys.

  9. The old tunnels remain active for 10 seconds for transient traffic.

  10. All the control traffic is switched to the new tunnels.

  11. Both the AP and the gateway age out the old tunnels.
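
The condensed sketch below restates the make-before-break pattern of this workflow: new-key tunnels are validated with a probe before the old-key tunnels are aged out. All names are hypothetical, not Aruba code.

```python
# Condensed, illustrative restatement of the rekey workflow above.
import os

SA_LIFETIME_HOURS = 36         # AP-Gateway IPsec SA lifetime
REKEY_LEAD_HOURS = 12          # rekeying starts this long before expiry
OLD_TUNNEL_GRACE_SECONDS = 10  # old tunnels linger for transient traffic

def rekey_pair(send_probe) -> str:
    """Walk one AP-gateway pair through the steps above; `send_probe`
    stands in for the encrypted probe exchange (steps 4-8)."""
    rekey_starts_at = SA_LIFETIME_HOURS - REKEY_LEAD_HOURS  # hour 24 of the SA
    new_key = os.urandom(32)  # step 1: orchestrator generates new keys
    # The gateway first installs a temporary ingress tunnel in learning mode;
    # control traffic keeps flowing over the old SA in parallel (step 5).
    if send_probe(new_key):
        # Probe decrypted with the new key: both ends replace the learning-mode
        # tunnel with new ingress/egress tunnels, then age out the old ones.
        return f"rekeyed at hour {rekey_starts_at}; old tunnels gone after {OLD_TUNNEL_GRACE_SECONDS}s"
    # If the new keys never validate (e.g., cloud connectivity is lost), the
    # devices fall back to survivability, described in the next section.
    return "survivability fallback"

print(rekey_pair(lambda key: True))
```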

Cloud Survivability for Tunnel Orchestrator

Survivability is a feature that preserves IPsec traffic between Aruba devices whose IPsec tunnels are orchestrated by the cloud and have a definite key expiration time, for cases where cloud connectivity fails for any reason while the devices still have connectivity between them. With survivability, the devices can re-establish IPsec tunnels between themselves using the legacy IKE/IPsec tunnel establishment, based on the tunnel configuration they have already received from the Tunnel Orchestrator.

During the rekey phase, if cloud connectivity is lost, the devices will seamlessly switch over to legacy IPsec tunnels. Survivability can be triggered in the following situations:

  • Either side of the tunnel has no connectivity to Tunnel Orchestrator

  • Tunnel Orchestrator pushes new keys to APs and gateways, but they are not received

  • Tunnel Orchestrator does not push new key to APs or gateways

  • The devices are not able to bring up tunnels using received keys

Cryptomaps for all the received tunnel configuration are created regardless of the initiator/responder role, but on the initiator side the cryptomaps are in a disabled state. After nine retries to bring the rekey tunnel up, the cryptomap on the initiator side is enabled so that the survivability tunnel can be triggered.

Once the survivability tunnel is up, the same process will be started again during rekey.

5 - Live Upgrade

Live Upgrade provides for uninterrupted services when upgrading AOS-10 access points and gateways.

Live Upgrade Services

The Live Upgrade and Firmware Management services in HPE Aruba Networking Central provide seamless upgrade of APs and gateway devices. The Live Upgrade service provides the following functions:

  • Upgrades APs, gateway clusters, or both
  • Allows the selection of the desired device firmware from a list of available versions
  • Allows scheduling of upgrades up to one month in the future
  • Provides visibility of the upgrade progress
  • Allows upgrade of multiple groups in parallel
  • Allows termination of upgrades in the middle of the upgrade process

The Firmware Management service provides the user interface and other existing functions such as firmware compliance and scheduled upgrades. The service also interfaces with the devices to initiate upgrades and receive upgrade status. The Live Upgrade service provides APIs to the Firmware Management service to initiate or abort live upgrades, and implements the logic that orchestrates the upgrade process and the timing of upgrades for the devices.

AP Live Upgrade

The Live Upgrade service enables the upgrade of APs without disrupting the connections of existing clients, and reduces the upgrade duration by running parallel upgrades. The Live Upgrade service interfaces with the AirMatch service to partition all the APs that require an upgrade into multiple smaller sets of APs that can be upgraded in parallel; each batch of APs then reloads sequentially. AirMatch partitions APs based on RF neighborhood data such that neighboring APs are available for clients to roam to when the associated AP is undergoing an upgrade, so all the APs in one AP partition are on the same channel and are not in the same RF neighborhood.

To reduce WAN bandwidth consumption and upgrade duration, a set of APs, known as seed APs, is selected. Only these seed APs download the images directly from the Aruba Activate server; the rest of the APs download the image from their designated seed APs. Seed APs are selected based on the following considerations (a sketch of the selection logic follows the list):

  • A non-seed AP is L2 connected to its designated seed AP.

  • A non-seed AP is of the same platform model as its designated seed AP.

  • Not more than 20 non-seed APs are assigned to a given seed AP.

  • The seed AP is randomly selected from the APs that are L2 connected and have the same model type.
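
Under stated assumptions (the grouping key and batch size come from the bullets above), the selection can be sketched as follows; the data shapes and helper names are illustrative, not the Live Upgrade service’s internals.

```python
import random
from collections import defaultdict

# Sketch of the seed selection criteria above: group APs by L2 subnet and
# platform model, then randomly pick one seed per batch of up to 20
# non-seed APs.
MAX_NON_SEED_PER_SEED = 20

def select_seeds(aps):
    groups = defaultdict(list)
    for ap in aps:
        groups[(ap["subnet"], ap["model"])].append(ap)  # L2-connected, same model

    assignments = {}  # non-seed AP serial -> designated seed AP serial
    for members in groups.values():
        random.shuffle(members)            # seed chosen at random within a group
        batch = MAX_NON_SEED_PER_SEED + 1  # a seed plus its followers
        for i in range(0, len(members), batch):
            seed, *followers = members[i:i + batch]
            for ap in followers:
                assignments[ap["serial"]] = seed["serial"]
    return assignments

aps = [{"serial": f"AP{i}", "subnet": "10.1.1.0/24", "model": "AP-635"} for i in range(5)]
print(select_seeds(aps))  # four non-seed APs mapped to one randomly chosen seed
```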

The following image shows the background process when an AP undergoes the Live Upgrade workflow.

AP Live Upgrade Process

The AP Live Upgrade workflow is as follows:

  1. When the scheduled time arrives, the Firmware Management (FM) service sends the AP group details to the Live Upgrade service. The upgrade time can either be the current time or a scheduled time within a month.

  2. The Live Upgrade service stores the AP list in the AP group and sets all the APs in the list, in the upgrade INIT state.

  3. The Live Upgrade service retrieves the subnet information of the APs from the monitoring module in Aruba Central.

  4. The Live Upgrade service selects seed APs based on seed AP selection criteria, and assigns a designated seed AP for each non-seed AP.

  5. The Live Upgrade service retrieves the AP partition information from the AirMatch service.

  6. The Live Upgrade service initiates an image upgrade of all the seed APs and sends the request to the FM service.

  7. The FM service sends an image upgrade request to all the seed APs.

  8. All the seed APs download the image from the Activate server.

  9. After downloading the image, all the seed APs send upgrade responses and states to the FM service.

  10. The FM service forwards the responses and states of all the seed APs to the Live Upgrade service.

  11. The Live Upgrade service initiates an image upgrade of all the non-seed APs and sends the request to the FM service.

  12. The FM service sends an image upgrade request to all the non-seed APs.

  13. All the non-seed APs download the image from their designated seed APs.

  14. After downloading the image, all the non-seeds APs send upgrade responses and states to the FM service.

  15. The FM service forwards the responses and states of all the non-seed APs to the Live Upgrade service.

  16. The Live Upgrade service initiates the reboot for the first AP partition and sends the request to the FM service.

  17. The FM service forwards the AP reboot request to all the APs in the AP partition.

  18. After rebooting, the APs send reboot responses to the FM service.

  19. The FM service forwards reboot responses and states to the Live Upgrade service.

  20. The remaining AP partitions are rebooted in sequence.

Rebooting Access Points

After all the APs finish downloading the image, the Live Upgrade service starts the AP rebooting process. All the APs in one AP partition reboot at the same time. After the Live Upgrade service receives updates on the successful reboot of all the APs in the partition, the service initiates the reboot process for the next AP partition. This process is repeated until all the AP partitions have rebooted and come up with the new image.
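
This partition-by-partition sequencing is effectively a loop with a completion barrier, sketched below with hypothetical helpers; it is a restatement of the paragraph above, not the service’s API.

```python
# Illustrative restatement of the partition reboot sequencing above.
def reboot_partitions(partitions, send_reboot, wait_for_acks):
    for index, partition in enumerate(partitions, start=1):
        send_reboot(partition)     # all APs in one partition reboot together
        wait_for_acks(partition)   # barrier: next partition only after success
        print(f"partition {index}/{len(partitions)} upgraded")

# Clients can roam during each reboot because AirMatch placed RF neighbors
# in different partitions.
reboot_partitions(
    [["AP-1", "AP-2"], ["AP-3"]],
    send_reboot=lambda aps: None,
    wait_for_acks=lambda aps: None,
)
```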

The following is an example workflow of an AP reboot process:

  1. The Live Upgrade service sends the AP reboot request for a specific AP partition to the FM service.

  2. The request is sent to every AP in the partition.

  3. After receiving the reboot request, each AP disables all its radios.

  4. All the clients associated with the AP are unable to reach the AP and are forced to roam to the neighboring APs.

  5. All the neighboring APs associated with the clients perform a session sync with their previously associated AP.

  6. The AP starts rebooting a few seconds later.

Reboot AP Process

The HPE Aruba Networking Central product documentation provides the steps required to perform or schedule the Live Upgrade service for APs.

Upgrading a Gateway Group

As gateways in a gateway group are much smaller in number, seeding and partition concepts are not necessary. To reduce upgrade duration, all the gateways in a cluster are instructed to download the image together. However, they are reloaded sequentially to avoid disruption. As only one gateway is reloaded at any given time, remaining cluster members together need free capacity for clients of only one gateway.

The cluster UDG (User Designated Gateway) concept ensures that all the users on a gateway are assigned a standby UDG. The user state and datapath sessions are synced to the standby UDG. When a gateway reloads, the cluster failure detection logic detects the failure within a second and moves users to the standby UDG. With multi-version support added, when a gateway reloads with a new firmware version, it rejoins the cluster and can sync the user state and sessions with the cluster members running an older version. Users are not disrupted during the gateway upgrade process.

The gateway Live Upgrade workflow is as follows:

  1. The FM service applies all the configured compliance rules, which include the gateway group names, the firmware version to which the gateway group needs to upgrade, and the time at which the group needs to upgrade. The upgrade time can either be the current time or a scheduled time within a month.

  2. When the scheduled time to upgrade arrives, the FM service sends the gateway group details to the Live Upgrade service.

  3. The Live Upgrade service stores the gateway list in the gateway group and sets all the gateways of the list in the upgrade INIT state.

  4. The Live Upgrade service initiates an image upgrade of all the gateways in the group and sends the request to the FM service.

  5. The FM service sends the image upgrade request to all the gateways.

  6. All the gateways download the image from the Activate server.

  7. After downloading the image, all the gateways send upgrade responses and states to the FM service.

  8. The FM service forwards the responses and states of all the gateways to the Live Upgrade service.

  9. The Live Upgrade service initiates the reboot request to one of the gateways and sends the request to the FM service.

  10. The FM service sends the reboot request to the gateway.

  11. The gateway reboots.

  12. The gateway sends reboot responses and states to the FM service.

  13. The FM service forwards the reboot responses and states to the Live Upgrade service.

  14. The rest of the gateways in the group are cycled through the reboot process in sequence.

The following image explains the Gateway Live Upgrade workflow:

GW Live Upgrade Process

The HPE Aruba Networking Central product documentation provides the steps required to perform or schedule the Live Upgrade service for gateways or gateway clusters.

Upgrading a Device Image

You can upgrade a device or multiple devices under the Firmware tab in Aruba Central. Any configured device group can be upgraded together or individually.

There are two upgrade types:

  1. Live Upgrade

  2. Standard

The default upgrade type is Standard. In this upgrade type, all the devices under the selected group download the images directly from the Aruba Activate server and reboot simultaneously. There is no consideration for reducing disruption to the network.

To perform Live Upgrade, set the upgrade type to Live Upgrade.

Firmware compliance configuration parameters:

  • Groups: You can choose All Groups, a single group, or a combination of groups

  • Firmware version: This is the target firmware version for the selected devices. There are three choices:

    • Choose a specific build number from the drop-down list.

    • Enter a custom build number, for example, 10.0.0.1_74637, and click Check Validity to validate the build number.

    • Set to None. When you set the option to None, the compliance status is set to Not Set. When you want to upgrade a device to a specific firmware version from your FTP server instead of Aruba Central, ensure that the Compliance Status is Not Set for the group to which the device belongs. Otherwise, Aruba Central automatically upgrades the group to the firmware version configured in Aruba Central and overrides the firmware version downloaded from your FTP server.

  • Upgrade type

    • Standard (default type)

    • Live upgrade

  • Upgrade time

    • Now

    • Later date: Firmware upgrade can be scheduled at a future time from the time of upgrade configuration up to one month later.

Refer to the HPE Aruba Networking Central product documentation for the specific steps required to perform or schedule the Live Upgrade service for different devices or device groups.

6 - RAPIDS

WIDS/WIPS services for AOS 10.

Rogue Access Point Intrusion Detection System (RAPIDS) automatically detects and locates unauthorized access points (APs), regardless of your deployment persona, through a patented combination of wireless and wired network scans. RAPIDS uses existing, authorized APs to scan the RF environment for any unauthorized devices in range. RAPIDS also scans your wired network to determine if the wirelessly detected rogues are physically connected. Customers can deploy this solution with “hybrid” APs serving as both APs and sensors or as an overlay architecture where Aruba APs act as dedicated sensors called air monitors (AMs). RAPIDS uses data from both the dedicated sensors and deployed APs to provide the most complete view of your wireless environment. The solution improves network security, manages compliance requirements, and reduces the cost of manual security efforts.

  • Rogue device detection is a core component of wireless security.

  • The RAPIDS rules engine and containment options allow for the creation of a detailed definition of what constitutes a rogue device, and enable quick action on a rogue AP for investigation, restrictive action, or both.

  • Once rogue devices are discovered, RAPIDS can alert a security team to the possible threat and provide the essential information needed to locate and manage the threat.

  • The RAPIDS feature set is included with Foundation subscriptions.

RAPIDS Flow

RAPIDS provides an effective defense against rogues and other forms of wireless intrusion. To accomplish these objectives, RAPIDS will:

  • Perform multiple types of wireless scans.

  • Correlate the results of the various scans to consolidate all available information about identified devices.

  • Classify the discovered devices based on rules that are customized to an organization’s security needs.

  • Generate automated alerts and reports for IT containing key known information about unauthorized devices, including the physical location and switch port whenever possible.

  • Deploy containment mechanisms to neutralize potential threats.

Key Features & Advantages of using RAPIDS

| Feature | Benefit |
|---|---|
| Wireless scanning that leverages existing Access Points and AM sensors | Time and cost savings. Eliminates the need to perform walk-arounds or to purchase additional RF sensors or dedicated servers. |
| Default or custom rules-based threat classification | Time and resource savings. Allows staff to focus on the most important risk mitigation tasks. Comprehensive device classification that’s tailored to the organization means less time spent investigating false positives. |
| Automated alerts | Faster response times. Alerts staff the instant a rogue is detected, reducing reaction time and further improving security. |
| Rogue AP location and switch/port information | Faster threat mitigation. Greatly simplifies the task of securing rogue devices and removing potential threats. |
| Reporting | Reduced regulatory expense. Comprehensive rogue and audit reports help companies comply with various industry standards and regulatory requirements. |
| IDS event management | Single point of control. Provides you with a full picture of network security. Improves security by aggregating data for pattern detection. |
| Manual and automated containment | Continuous security. Improves security by enabling immediate action even when network staff is not present. |

RAPIDS Use Cases

Regulatory compliance is a key motivator that drives many organizations to implement stringent security processes for their enterprise wireless networks. The most common regulations are Payment Card Industry (PCI) Data Security Standard, Health Insurance Portability and Accountability Act (HIPAA) and Sarbanes-Oxley (SOX).

RAPIDS reporting is helpful for compliance audits:

  • PCI DSS requires that all organizations accepting credit or debit cards for purchases protect their networks from attacks via rogue or unauthorized wireless APs and clients. This applies even if the merchant has not deployed a wireless network for its own use.

  • RAPIDS helps retailers and other covered organizations comply with these requirements. RAPIDS also enables companies to set up automated, prioritized alerts that can be emailed to a specified distribution list when rogues are detected.

  • Hospitals use RAPIDS to protect patient data as well as their systems. They need to know if rogues exist on the same network as the critical medical devices used for patient care.

WIDS vs RAPIDS

Wireless Intrusion Detection Service (WIDS) provides additional behavioral information and security for a wireless network by constantly scanning the RF environment for pre-defined wireless signatures. Intrusion detection is built into AOS and uses signature matching logic, as opposed to RAPIDS’ use of rule matching.

  • AOS can trigger alerts, known as WIDS events, based on the configured threat detection level: high, medium, or low.

  • WIDS events can be categorized into two buckets:

    • Infrastructure Detection Events

    • Client Detection Events

RAPIDS consumes WIDS events to present the event information in a clear and intelligible manner, with logging and rogue location information. Security events rarely happen in isolation; an attack will usually generate multiple WIDS events, so RAPIDS merges the reporting of multiple related attacks into a single event to reduce the amount of noise.

  • RAPIDS in Central aggregates WIDS events and provides a method to view which events are getting raised in the environment.

    • Each event has a specific victim MAC address; events are aggregated for each of those victim MACs.

      • Multiple APs reporting the same event.

      • Several attacks against the same MAC.

  • Visibility in the UI, NBAPI, and API streaming

  • Device classification is a combination of cloud processing and edge processing.

    • Aruba access points can discover Rogue access points independently, without intervention from Aruba Central (continuous monitoring).

    • Aruba Central classification takes precedence.

RAPIDS Classifications

RAPIDS ranks classifications in the following hierarchy.

  • Interfering

  • Suspected Rogue

  • Rogue

  • Neighbor (Known Interfering)

  • Manually Contained (DoS)

  • Valid

In the lifecycle of a monitored AP, classifications can only be promoted (i.e., move higher in the list, or left to right in the diagram below) and can never be demoted (i.e., go back down to a lower value).

If a neighbor reaches the “Valid” classification (shown in orange), this is considered a final state, meaning the AP stops applying its own classification algorithms to that neighbor, and this is where it remains (unless it is aged out, or the user manually classifies it as something else).

This same behavior also applies to the custom rules. For example, if a neighbor AP is already classified as Rogue then even if it matches a rule, it will never be demoted to a Suspected Rogue.

RAPIDS Classification Hierarchy


Configuring Rules

After enabling RAPIDS in the UI, a set of three default classification rules takes effect.

For existing RAPIDS customers, these rules are the same rules that were applied in previous releases. A maximum of 32 rules can be configured. All criteria in a single rule use an “AND” operand, which means a rule is applied only if all the criteria in that rule evaluate as a match.

Configuring custom rules

Creating a custom rule

Add one or more conditions to your rule

Classification Criteria

  • Signal - The user can specify a minimum signal strength from -120 to 0 dB
  • Detecting AP Count - The number of detecting APs that can “see” the monitored AP, from 2 to 255
  • WLAN classification - Valid, Interfering, Unsecure, DOS, Unknown, Known Interfering, Suspected Unsecure
  • SSID includes - Pattern for matching against the SSID value of a monitored AP
  • SSID excludes - Pattern for matching against the SSID value of a monitored AP
  • Known valid SSIDs - Match against all known valid SSIDs configured on the customer’s account; supports regular expression matching
  • Plugged into wired network - When there is a managed PVOS/CX switch, a neighbor AP is determined to be plugged into the wired network when its BSSID matches the first 40 bits of a known wired MAC address as reported by the switch
  • Time on network - Minimum number of minutes since the monitored AP was first seen on the network
  • Site - List of site IDs for which this rule applies; if not populated, the rule applies to all sites
  • Band - The radio band of the monitored AP: 802.11 B (2.4 GHz), A (5 GHz), G (2.4 GHz), AG (not used), 6 GHz
  • Valid client MAC match - Match any monitored BSSID against the current valid station cache list; this must be an exact match
  • Encryption - OPEN, WEP, WPA, WPA2, WPA3

Rule ordering matters; rules are evaluated from top to bottom in the custom rule list.

Whenever a match is found, that rule is executed and further rule evaluation stops.

Because of this, it’s important to order your rules from lower classifications to higher classifications.

Manual classification will be respected; if a neighbor AP has already been manually classified by the user then no rules will be evaluated for that AP.

If the classification rule selects a non-final-state classification (i.e., Interfering or Suspected Rogue), AP rogue detection algorithms continue to be applied at the edge, and they could determine that the AP is in fact a rogue and promote the classification to Rogue.
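
Putting these behaviors together (first match wins, manual classification is respected, classifications only promote), rule evaluation can be sketched as below. The structures are illustrative, not Aruba code; the rank values mirror the hierarchy listed earlier.

```python
from dataclasses import dataclass
from typing import Callable, List

# Sketch of custom-rule evaluation combining the behaviors described above.
RANK = {  # mirrors the classification hierarchy listed earlier
    "Interfering": 0,
    "Suspected Rogue": 1,
    "Rogue": 2,
    "Neighbor (Known Interfering)": 3,
    "Manually Contained (DoS)": 4,
    "Valid": 5,
}

@dataclass
class Rule:
    conditions: List[Callable]  # all criteria are AND-ed together
    classification: str

@dataclass
class MonitoredAP:
    ssid: str
    classification: str = "Interfering"
    manually_classified: bool = False

def evaluate(ap: MonitoredAP, rules: List[Rule]) -> str:
    if ap.manually_classified:
        return ap.classification            # manual classification respected
    for rule in rules:                      # top-to-bottom; first match wins
        if all(cond(ap) for cond in rule.conditions):
            if RANK[rule.classification] > RANK[ap.classification]:
                return rule.classification  # promote only, never demote
            return ap.classification
    return ap.classification

# Example: an SSID outside the known valid set is marked Suspected Rogue.
rules = [Rule([lambda ap: ap.ssid not in {"corp-wifi"}], "Suspected Rogue")]
print(evaluate(MonitoredAP(ssid="free-wifi"), rules))
```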

Rogues Panel

The rogues panel provides a lot of detailed information about your wireless environment. Here is an example of what information is provided.

Rogues Panel

7 - AirMatch

AirMatch is HPE Aruba Networking’s next-generation automatic RF planning service.

Running within HPE Aruba Networking Central, AirMatch has the duty of computing an optimal radio frequency (RF) network resource allocation. AirMatch runs on a 24-hour cycle, first collecting RF network statistics and then developing an optimized RF network plan that specifies channel, bandwidth, and EIRP settings for each radio; the plan is deployed once every cycle. As a best practice, the RF plan change should be deployed at the time of lowest network utilization so that radio channel changes have a minimal impact on user experience. In addition to the planning done every 24 hours, AirMatch also reacts to dynamic changes in the RF environment such as channel quality, radar, and high noise events. AirMatch results in a stable network experience with greatly minimized channel and EIRP variations. AirMatch is defined by the following key attributes:

  • A centralized RF optimization service
  • Newly defined information collection and configuration deployment paths
  • Models the network into partitions and then solves the different partitions as a whole
  • Results in optimal channel selection, bandwidth size, radio operating band (for Flex Dual Radio APs), and EIRP plan for the network

If the link between the access points and Central goes down, then features which require the coordination of Central, such as scheduled updates for RF optimization, will be lost. The current RF solution will continue to function and reactive changes resulting from high noise events and radar will still occur.

AirMatch Workflow

The AirMatch workflow occurs using the following steps:

  • APs send RF statistics to Central
  • The AirMatch service in Central calculates the optimal RF solution
    • AirMatch divides the network into separate partitions
    • AirMatch then calculates the optimal channel plan for each partition
    • AirMatch evaluates if the new channel plan for this partition is a sufficient improvement or not
    • If sufficiently improved, AirMatch pushes the solution to the access points at the scheduled time
  • Provides neighboring APs list to the Key Management Service
  • Provides AP partition information to the Live Upgrade Service

AirMatch Configuration

AirMatch was developed to operate with no user input, but instead based on readings taken from the RF network, and as such offers very little in terms of configuration. Constraining the parameters used to help fine tune the behavior is possible, but AirMatch should function correctly without any additional or specific configuration in most cases. Please consult the AOS 10 configuration guide to find this information.

Wireless Coverage Tuning

By default, wireless coverage tuning is set to Balanced. This can be adjusted by configuring a channel plan improvement quality threshold, which ranges from 0% (aggressive) to 16% (conservative). The default Balanced setting represents an 8% quality improvement threshold.

To determine the channel plan improvement index, the average radio conflict metric is computed. For each radio of an AP, channels that overlap with neighbors are identified, and path loss is used to calculate a weighted conflict value: the closer the AP with the overlapping channels, the lower the path loss and, consequently, the higher the conflict. After AirMatch comes up with a new channel plan, its conflict value is compared with that of the currently operating network and an improvement percentage is calculated. If the improvement percentage is greater than or equal to the configured quality threshold (8% by default), the new channel plan is deployed to the APs at the scheduled time.
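
As a worked illustration of this check, the sketch below assumes a simple inverse-path-loss conflict weighting; AirMatch’s exact formula is internal, so treat this as an illustration of the threshold logic rather than the real metric.

```python
# Illustrative computation of the channel-plan improvement check above,
# assuming conflict weight = 1 / path_loss for each same-channel neighbor.
QUALITY_THRESHOLD = 0.08  # Balanced default: 8% improvement required

def conflict(neighbors, plan):
    total = 0.0
    for radio, neighbor, path_loss_db in neighbors:
        if plan[radio] == plan[neighbor]:  # overlapping channel
            total += 1.0 / path_loss_db    # closer neighbor -> higher conflict
    return total

def should_deploy(neighbors, current_plan, new_plan):
    old, new = conflict(neighbors, current_plan), conflict(neighbors, new_plan)
    improvement = (old - new) / old if old else 0.0
    return improvement >= QUALITY_THRESHOLD  # deploy at the scheduled time

neighbors = [("ap1", "ap2", 60.0), ("ap2", "ap1", 60.0)]
print(should_deploy(neighbors, {"ap1": 36, "ap2": 36}, {"ap1": 36, "ap2": 40}))  # True
```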

Channel Quality Aware AirMatch

Channel quality, which is represented as a percentage, is a weighted metric derived from key parameters that can affect the communication quality of a wireless channel, including noise, non-Wi-Fi (interferer) utilization and duty-cycles, and certain types of retries. Note that channel quality is not directly related to Wi-Fi channel utilization, as a higher quality channel may or may not be highly used.

8 - ClientMatch

ClientMatch is HPE Aruba Networking’s advanced service for maintaining peak connectivity for wireless clients.

The ClientMatch service continuously gathers RF performance metrics from mobile devices and uses this information to intelligently improve the client’s experience. Proactive and deterministic, ClientMatch dynamically optimizes Wi-Fi client performance, even while users roam and RF conditions change. If a mobile device moves out of range of an AP or RF interference impedes performance, ClientMatch steers the device to a better AP to maximize Wi-Fi performance.

ClientMatch is aware of each client’s capabilities while also being aware of the Wi-Fi environment; this puts ClientMatch in the best position to maximize the user experience of each device, since it knows which radio each station is most likely to have the best experience on. In doing so, ClientMatch also improves the experience of the entire system, as slow clients on an access point affect the experience of other users as well.

ClientMatch does this by maintaining a list of radios each station can see: essentially a database recording which access point radios have been able to hear the client’s device, and at what signal level. This information is then used, together with a set of rulesets, to enhance the user’s experience.

The main difference between AOS8 and AOS10 regarding ClientMatch is that the orchestration of this feature is now handled by Central in the cloud instead of on a Mobility Conductor.

Move Types

There are two types of moves, deauth moves and BSS Transition Message moves, also known as 802.11v message moves.

Deauth moves function by sending a de-authentication frame to a connected station and then not letting this station associate anywhere but to the desired radio for a duration of time after the frame is sent.

802.11v or BSS Transition Messages are action frames sent by the AP to the station, suggesting the BSSID the station should move to instead. Keep in mind that these frames are not mandatorily obeyed by the station and are sometimes ignored or rejected. If that happens five or more times, ClientMatch triggers a deauth move instead.

Band Steer

When ClientMatch sees an authentication being attempted by a station on the 2.4 GHz radio and the station is known to be 5 GHz or even 6 GHz capable, ClientMatch will not let the client device connect to the 2.4 GHz band, effectively forcing it up to the more efficient bands.

ClientMatch will only attempt to move clients on 2.4 GHz with a signal worse than -45 dBm onto a target radio whose RSSI on 5 or 6 GHz is better than -65 dBm.
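
Those two thresholds can be restated directly as a predicate; the sketch below is only the sentence above in code form, with illustrative names.

```python
# Direct restatement of the band-steer thresholds above; illustrative only.
def band_steer_eligible(rssi_24ghz_dbm: int, target_rssi_dbm: int) -> bool:
    """Steer a 2.4 GHz client only if its current signal is worse than
    -45 dBm and the 5/6 GHz target radio hears it better than -65 dBm."""
    return rssi_24ghz_dbm < -45 and target_rssi_dbm > -65

print(band_steer_eligible(-52, -60))  # True: weak on 2.4 GHz, strong target
print(band_steer_eligible(-40, -60))  # False: still strong on 2.4 GHz
```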

6 GHz band steering can be disabled using the REST API.

Sticky steer

This feature comes into play when, for example, a station associates to an access point within the prime coverage area and then moves away to the edge of the radio’s coverage but does not roam voluntarily. ClientMatch is aware that another access point would be a better option for the station and is able to tell the client to move to the better candidate.

Load Balancing steer

This is a feature that is more frequently used in high-density environments such as auditoriums. Load balancing aims to balance the number of clients on a per-radio basis, so that not all clients are connected to the same radio and the load is instead split across the network.

This move type only uses 802.11v moves and does not attempt deauth moves, as clients do not understand the load component involved in the computation and might see a degradation in signal strength as a negative outcome.

This specific move type can be disabled using the REST API.

MU-MIMO steer

The goal is to steer MU-MIMO-capable stations onto the same radios so that the AP can leverage the MU-MIMO function, as two stations of appropriate types are necessary for an access point to perform MU-MIMO.

Un-steerable clients

Some stations refuse to cooperate with ClientMatch and are therefore put into an un-steerable station list: for 48 hours if there have been three unsuccessful deauth steer attempts, or for 24 hours if the client device ignores more than five consecutive 802.11v moves.
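
The un-steerable bookkeeping reduces to two counters and two timers; the sketch below restates the thresholds above with hypothetical structures.

```python
import time

# Restatement of the un-steerable thresholds above; illustrative structures.
UNSTEERABLE = {}  # client MAC -> epoch time when steering may resume

def record_failed_deauth_steers(mac, failed_attempts):
    if failed_attempts >= 3:                        # three failed deauth steers
        UNSTEERABLE[mac] = time.time() + 48 * 3600  # quarantined for 48 hours

def record_ignored_11v_moves(mac, consecutive_ignores):
    if consecutive_ignores > 5:                     # more than five ignored moves
        UNSTEERABLE[mac] = time.time() + 24 * 3600  # quarantined for 24 hours

def steerable(mac):
    return time.time() >= UNSTEERABLE.get(mac, 0.0)

record_failed_deauth_steers("aa:bb:cc:dd:ee:ff", failed_attempts=3)
print(steerable("aa:bb:cc:dd:ee:ff"))  # False for the next 48 hours
```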

This list of un-steerable clients can be viewed using the REST API. Client devices known to not support being steered by ClientMatch can be added permanently to the un-steerable list by using the REST API.

Disabling ClientMatch

Should ClientMatch be suspected of causing issues in the network, or if there is a desire or requirement to allow the devices to choose the connected access point without attempts for steering, then ClientMatch can be disabled using the REST API.

ClientMatch Monitoring

In the Network Operations app, set the filter to Global.

  1. Under Alerts & Events, click Events. The Events page is displayed.

  2. Click on the CLICK HERE FOR ADVANCED FILTERING toggle to bring down the filtering options.

  3. Click the ClientMatch Steer option.

  4. Click the Filter button on the right.

Event viewer showing ClientMatch events in Central.