Deploying Policy Manager Clusters
A cluster is a logical connection of any combination of Policy Manager hardware or virtual appliances. This section of the deployment guide provides guidance on how to design and deploy Policy Manager clusters, how to complete major tasks such as adding a Subscriber server and deploying a standby Publisher, as well as how to rejoin a down server to the cluster. Finally, the set of cluster-specific CLI commands is included.
Policy Manager Cluster Overview
Policy Manager can be deployed either as a dedicated hardware appliance or a virtual machine running on top of VMware vSphere Hypervisor or Microsoft Hyper-V.
When demand exceeds the capacity of a single instance, or you have a requirement for a High Availability deployment, you have the option of logically joining multiple instances to process the workload from the network. You can logically join physical and virtual instances, and you can also join Policy Manager instances that are dissimilar in size. However, careful planning is required, especially if you plan to use the failover capabilities of the clustering feature. The cluster feature provides shared configuration and databases; however, it does not provide a virtual IP address for the cluster, so failover and redundancy for the Guest captive portal rely on Domain Name System (DNS) lookup or load balancing, and RADIUS clients must define a primary and backup RADIUS server.
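Because the cluster has no virtual IP address, RADIUS failover is the client's responsibility. The following is a minimal sketch of that primary/backup behavior in Python using the pyrad library; the server hostnames, shared secret, and dictionary file path are hypothetical placeholders, not values defined by Policy Manager.

```python
# Minimal sketch: a RADIUS client that tries a primary Policy Manager
# server and fails over to a backup. Assumes the pyrad library and a
# local RADIUS "dictionary" file; hostnames and the shared secret below
# are hypothetical placeholders.
from pyrad.client import Client, Timeout
from pyrad.dictionary import Dictionary
import pyrad.packet

SERVERS = ["cppm-sub1.example.com", "cppm-sub2.example.com"]  # primary, backup
SECRET = b"shared-secret"  # placeholder shared secret

def authenticate(username: str, password: str):
    for host in SERVERS:
        client = Client(server=host, secret=SECRET, dict=Dictionary("dictionary"))
        client.timeout = 3   # seconds to wait per attempt
        client.retries = 2   # attempts before failing over to the next server
        req = client.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                                      User_Name=username)
        req["User-Password"] = req.PwCrypt(password)
        try:
            return client.SendPacket(req)  # Access-Accept or Access-Reject
        except Timeout:
            continue  # server unreachable; try the backup
    raise RuntimeError("no RADIUS server responded")
```

In practice you would configure the equivalent primary and backup server list directly on each network access device rather than in a script; the sketch only illustrates the failover logic the clients must provide.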
Authentication Requests in a Cluster
The typical use case for Policy Manager is to process authentication requests using the policy framework. The policy framework is a selection of services that process authentication requests; it also determines the authentication, authorization, posture, enforcement, and role of the endpoint or end user.
In the context of cluster operations, authentication typically involves a read-only operation from the configuration database. A cluster server receives an authentication request, determines the appropriate policies to apply, and responds appropriately. This does not require a configuration change, and can therefore be scaled across the entire cluster.
Note: Authentication is performed from the server itself to the configured identity store, whether locally (as synchronized by the Publisher; for example, a Guest account) or externally, such as with Microsoft Active Directory.
Logs relevant to each authentication request are recorded separately on each server, using that server’s log database. Centralized reporting is handled by generating a Netevent from the server, which is sent to all Insight servers and recorded in the Insight database (for related information, see Deploying Policy Manager Insight in a Cluster).
Policy Manager Databases
Each Policy Manager server makes use of the following databases:
Configuration database. Contains most of the editable entries that can be seen in the Policy Manager user interface. This includes, but is not limited to:
Administrative user accounts
Local user accounts
Service definitions
Role definitions
Enforcement policies and profiles
Network access devices
Guest accounts
Onboard certificates
Most of the configuration shown within Guest and Onboard
Log database. Contains activity logs generated by typical usage of the system. This includes information shown in Access Tracker and the Event Viewer.
Insight database. Records historical information generated by the Netevents framework. This database is used to generate reports (for related information, see Deploying Policy Manager Insight in a Cluster).
Publisher/Subscriber Model
Policy Manager uses a Publisher/Subscriber model to provide multiple-box clustering. Another term for this model is hub and spoke, where the hub corresponds to the Publisher, and the spokes correspond to the Subscribers.
Figure 1 Publisher and Subscribers in Hub and Spoke Configuration
The Publisher functions as the master controller in a cluster. The Publisher is your central point of configuration, monitoring, and reporting. It is also the central point of database replication. All the databases are managed through the Publisher.
There is at most one active Publisher in this model, and a potentially unlimited number of Subscribers.
The Publisher server has full read/write access to the configuration database. All configuration changes must be made on the Publisher. The Publisher server sends configuration changes to each Subscriber server.
The Subscribers are worker servers. All the AAA load is carried, all RADIUS requests are processed, and all policy decisions are made on the Subscriber servers. Subscriber servers maintain a local copy of the configuration database, and each Subscriber has read-only access to that local copy.
Network Address Translation (NAT) is not supported between the Publisher and Subscriber servers.
What Information Is Replicated?
Multiple entities exist within a Policy Manager server cluster that must be shared to ensure successful operation of the cluster. Only the configuration database is replicated.
A background replication process handles the task of updating the configuration database based on the configuration changes received from the Publisher.
Note: The Log and Insight databases are not replicated across the cluster.
However, certain elements are server-specific and must be configured separately for each server. You can do this directly on the Publisher or individually on each Subscriber server.
Elements Replicated
Cluster replication is delta-based; that is, only changed information is replicated.
The cluster elements that are replicated across all the servers in the cluster are as follows:
All policy configuration elements
All audit data
All identity store data
Guest accounts, endpoints, and profile data
Runtime information
Authorization status, posture status, and roles
Connectivity information, NAS details
Database replication on port 5432 over SSL
Runtime replication on port 443 over SSL
Elements Not Replicated
The following elements are not replicated:
Access Tracker logs and Session logs
Authentication records
Accounting records
System events (Event Viewer data)
System monitoring data
Network Ports That Must Be Enabled
Table 1 lists the network ports that must be opened between the Publisher and the Subscriber servers.
Table 1: Network Ports to Be Enabled
Port | Protocol | Description
---- | -------- | -----------
80   | HTTP     | Internal proxy
123  | UDP      | NTP: Time synchronization
443  | TCP      | HTTPS: Internal proxy and server-to-server service
5432 | TCP      | PostgreSQL: Database replication
Because any Subscriber server can be promoted to be the Publisher server, all port/protocol combinations listed in Table 1 should be:
Bidirectional
Open between any two servers in the cluster
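During deployment, it can be useful to confirm that these ports are actually reachable between cluster members. The sketch below is a hypothetical Python check that tests the TCP ports from Table 1 against a placeholder peer hostname; port 123 is UDP (NTP), so a plain TCP connect test does not cover it.

```python
# Minimal sketch: verify TCP reachability of the cluster ports from
# Table 1 toward a peer cluster member. The peer hostname is a
# hypothetical placeholder. NTP (123/UDP) is connectionless and is
# deliberately skipped, since a TCP connect test does not apply to it.
import socket

PEER = "cppm-peer.example.com"  # any other server in the cluster
TCP_PORTS = {
    80: "HTTP (internal proxy)",
    443: "HTTPS (internal proxy, server-to-server service)",
    5432: "PostgreSQL (database replication)",
}

for port, purpose in TCP_PORTS.items():
    try:
        # Attempt a TCP connection with a short timeout.
        with socket.create_connection((PEER, port), timeout=3):
            print(f"{PEER}:{port} open    - {purpose}")
    except OSError as exc:
        print(f"{PEER}:{port} blocked - {purpose}: {exc}")
```

Run the same check from the peer back toward this server to confirm the ports are open in both directions, as required for Publisher promotion.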
Cluster Scaling Limitations
Due to the design requirements of the cluster Publisher/Subscriber model, various Policy Manager components scale differently (see Table 2).
Table 2: Policy Manager Cluster Scaling Limitations
Component | Scaling Limitation
--------- | ------------------
Authentication capacity | Scales linearly with the number of Subscriber servers. Add more servers as necessary to provide additional capacity to service authentication requests.
Configuration changes (Guest/Onboard) | These changes are centralized, so they do not scale with additional servers. The Publisher must be scaled to support the write traffic from the maximum number of Subscribers that would be active concurrently.
Configuration changes (Policy Manager) | Because the total size of the configuration set is bounded, these changes are assumed to be infrequent and are therefore not a significant limit to scaling.
Insight reports | Because this function is centralized, reporting does not scale with additional servers. Use a separate Insight server sufficient to handle the incoming Netevents traffic from all servers in the cluster. In a very large-scale deployment, the Publisher should not be used as the Insight reporting server.
Logging capacity | Scales linearly with the number of Subscriber servers, as each server handles its own logging operations.
Replication load on Publisher | Scales linearly with the number of Subscriber servers. Replication is efficient because only changed information is sent.