
Optimization Policies Template

Optimization templates apply Optimization policies to appliances.


Priority

  • With this template, you can create rules with a priority from 1000 – 9999. When the template is applied to an appliance, Orchestrator will delete all rules having a priority in that range before applying its policies.

  • If you access an appliance directly, you can create rules with higher priority than Orchestrator rules (1 – 999) and rules with lower priority (10000 – 19999 and 25000 – 65534).

    NOTE: The priority range from 20000 to 24999 is reserved for Orchestrator.

  • When adding a rule, the priority is incremented by ten from the previous rule. The priority can be changed, but this default behavior helps to ensure you can insert new rules without having to change subsequent priorities.
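The increment-by-ten behavior can be sketched as a small helper. This is an illustrative sketch only (`next_priority` is not an Orchestrator API); it assumes the template owns the 1000–9999 range described above.

```python
def next_priority(existing, template_min=1000, template_max=9999, step=10):
    """Suggest a priority for a new rule appended to a template.

    Mirrors the documented default: each new rule gets the previous
    rule's priority plus 10, leaving gaps so later rules can be
    inserted between existing ones without renumbering.
    """
    in_range = [p for p in existing if template_min <= p <= template_max]
    if not in_range:
        return template_min
    return max(in_range) + step

# Rules at 1000, 1010, 1020: the next appended rule defaults to 1030,
# and a rule can still be slotted between two others (e.g. at 1015).
print(next_priority([1000, 1010, 1020]))  # -> 1030
```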

Match Criteria

  • These are universal across all policy maps—Route, QoS, Optimization, NAT (Network Address Translation), and Security.

  • If you expect to use the same match criteria in different maps, you can create an ACL (Access Control List), which is a named, reusable set of rules. For efficiency, create them in Configuration > Templates & Policies > ACLs > Access Lists, and apply them across appliances.

  • The available parameters are Application, Address Map (for sorting by country, IP address owner, or SaaS application), Domain, Geo Location, Interface, Protocol, DSCP, IP/Subnet, Port, and Traffic Behavior.

  • To specify different criteria for inbound versus outbound traffic, select the Source:Dest check box.

Source or Destination

  • An IP address can specify a subnet; for example, 10.10.10.0/24 (IPv4) or fe80::204:23ff:fed8:4ba2/64 (IPv6).

  • To allow any IP address, use 0.0.0.0/0 (IPv4) or ::/0 (IPv6).

  • Ports are available only for the protocols tcp, udp, and tcp/udp.

  • To allow any port, use 0.
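The subnet and "match any" semantics above can be checked with Python's standard `ipaddress` module. This is a sketch of the matching logic, not appliance code; the `matches` helper is illustrative.

```python
import ipaddress

def matches(addr: str, prefix: str) -> bool:
    """Return True if addr falls inside the given prefix (IPv4 or IPv6)."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(prefix)

print(matches("10.10.10.42", "10.10.10.0/24"))          # True
print(matches("192.0.2.1", "0.0.0.0/0"))                # True: 0.0.0.0/0 matches any IPv4
print(matches("fe80::204:23ff:fed8:4ba2", "::/0"))      # True: ::/0 matches any IPv6
```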

Wildcard-based Prefix Matching

  • When using a range or a wildcard, the IPv4 address must be specified in 4-octet dotted decimal format; for example, A.B.C.D.

  • Range is specified using a dash. For example, 128-129.

  • Wildcard is specified as an asterisk (*).

  • Range and Wildcard can both be used in the same address, but an octet can only contain one or the other. For example, 10.136-137.*.64-95.

  • A wildcard can only be used to define an entire octet. For example, 10.13*.*.64-95 is not supported. The correct way to specify this range is 10.130-139.*.64-95.

  • The same rules apply to IPv6 addressing.

  • CIDR notation and a range or wildcard are mutually exclusive in the same address. For example, use either 192.168.0.0/24 or 192.168.0.1-127.

  • These prefix-matching rules apply only to the following policies: Route, QoS, Optimization, NAT, Security, and ACLs.
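The range and wildcard rules above can be expressed as a short matcher. This is a minimal sketch of the documented semantics (literal, dash range, or whole-octet wildcard per octet), not appliance code; the function names are illustrative.

```python
def octet_matches(pattern: str, value: int) -> bool:
    """Match one octet against '*', a dash range 'a-b', or a literal."""
    if pattern == "*":
        return True
    if "-" in pattern:
        lo, hi = (int(x) for x in pattern.split("-"))
        return lo <= value <= hi
    return int(pattern) == value

def wildcard_match(pattern: str, addr: str) -> bool:
    """Match an IPv4 address against a 4-octet wildcard/range pattern.

    Per the rules above, each octet holds a literal, a range, or a
    wildcard; a wildcard covers the entire octet.
    """
    pats = pattern.split(".")
    octs = [int(o) for o in addr.split(".")]
    if len(pats) != 4 or len(octs) != 4:
        raise ValueError("pattern and address must each have 4 octets")
    return all(octet_matches(p, o) for p, o in zip(pats, octs))

print(wildcard_match("10.136-137.*.64-95", "10.136.200.70"))  # True
print(wildcard_match("10.136-137.*.64-95", "10.140.200.70"))  # False
```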

Set Actions Fields

Network Memory – Addresses limited bandwidth. This technology uses advanced fingerprinting algorithms to examine all incoming and outgoing WAN traffic. Network Memory localizes information and transmits only modifications between locations.

  • Maximize Reduction – Optimizes for maximum data reduction at the potential cost of slightly lower throughput and/or some increase in latency. It is appropriate for bulk data transfers such as file transfers and FTP, where bandwidth savings are the primary concern.

  • Minimize Latency – Ensures that Network Memory processing adds no latency. This might come at the cost of lower data reduction. It is appropriate for extremely latency-sensitive interactive or transactional traffic. It is also appropriate when the primary objective is to fully utilize the WAN pipe to increase the LAN-side throughput, as opposed to conserving WAN bandwidth.

  • Balanced – The default setting. It dynamically balances latency and data reduction objectives and is the best choice for most traffic types.

  • Disabled – Turns off Network Memory.

IP Header Compression – Compresses excess protocol headers before transmitting them on a link and uncompresses them to their original state at the other end. Compressing the protocol headers is possible because of the redundancy in header fields of the same packet, as well as in consecutive packets of a packet stream.

Payload Compression – Uses algorithms to identify relatively short byte sequences that are repeated frequently. These are then replaced with shorter segments of code to reduce the size of transmitted data. Simple algorithms can find repeated bytes within a single packet; more sophisticated algorithms can find duplication across packets and even across flows.

TCP Acceleration – Uses techniques such as selective acknowledgments, window scaling, and maximum segment size adjustment to mitigate poor performance on high-latency links.

NOTE: The Slow LAN alert is raised when throughput falls below 80% of the value configured in the TCP Accel Options window.

For more information, see TCP Acceleration Options.
Protocol Acceleration – Provides explicit configuration for optimizing CIFS, SSL, SRDF, Citrix, and iSCSI protocols. In a network environment, it is possible that not every appliance has the same optimization configurations enabled. Therefore, the site that initiates the flow (the client) determines the state of the protocol-specific optimization.

TCP Acceleration Options

TCP acceleration uses techniques such as selective acknowledgment, window scaling, and maximum segment size adjustment to compensate for poor performance on high-latency links.

This feature has a set of advanced options with default values.


CAUTION: Because changing these settings can affect service, it is recommended that you do not modify these without direction from Support.

Adjust MSS to Tunnel MTU – Limits the TCP MSS (Maximum Segment Size) advertised by the end hosts in the SYN segment to a value derived from the Tunnel MTU (Maximum Transmission Unit): TCP MSS = Tunnel MTU – Tunnel Packet Overhead.

This feature is enabled by default so that the maximum value of the end host MSS is always coupled to the Tunnel MSS. If the end host MSS is smaller than the tunnel MSS, the end host MSS is used instead.

A use case for disabling this feature is when the end host uses Jumbo frames.
Auto Reset Flows – NOTE: Whether or not this feature is enabled, the default behavior when a tunnel goes Down is to automatically reset the flows.

If enabled, it resets all TCP flows that are not accelerated, but should be (based on policy and on internal criteria such as a Tunnel Up event).

The internal criteria can also include:

  • Resetting all TCP accelerated flows on a Tunnel Down event.

  • Resetting flows for which TCP acceleration is enabled but no SYN packet was seen (so the flow was either part of WCCP redirection or it already existed when the appliance was inserted in the data path).
Enable Silver Peak TCP SYN option exchange – Controls whether or not Silver Peak forwards its proprietary TCP SYN option on the LAN side. Enabled by default, this feature detects if there are more than two EdgeConnect appliances in the flow’s data path, and optimizes accordingly.

Disable this feature if there is a LAN-side firewall or a third-party appliance that would drop a SYN packet when it encounters an unfamiliar TCP option.
End to End FIN Handling – Fine-tunes TCP behavior during a connection’s graceful shutdown. When this feature is ON (Default), TCP on the local appliance synchronizes the graceful shutdown of the local LAN side with the LAN side of the remote appliance. When this feature is OFF (Default TCP), no such synchronization happens and the two LAN segments at the ends shut down gracefully and independently.
IP Block Listing – If selected, and if the appliance does not receive a TCP SYN-ACK from the remote end within five seconds, the flow proceeds without acceleration and the destination IP address is blocked for one minute.
Keep Alive Timer – Allows changing the Keep Alive timer for TCP connections.

Probe Interval – Time interval in seconds between two consecutive Keep Alive probes.

Probe Count – Maximum number of Keep Alive probes to send.

First Timeout (Idle) – Time interval until the first Keep Alive timeout.
LAN Side Window Scale Factor Clamp – Allows the appliance to present an artificially lowered Window Scale Factor (WSF) to the end host. This reduces the need for memory in scenarios in which many out-of-order packets are being received from the LAN side, because such packets cause heavy buffer utilization and maintenance.
Per-Flow Buffer (Max LAN to WAN Buffer and Max WAN to LAN Buffer) – Clamps the maximum buffer space that can be allocated to a flow, in each direction.
Persist Timer Timeout – Allows TCP to terminate connections that remain in the Persist timeout stage after the configured number of seconds.
Preserve Packet Boundaries – Preserves the packet boundaries end to end. If this feature is disabled, the appliances in the path can coalesce consecutive packets of a flow to use bandwidth more efficiently.

It is enabled by default so that applications requiring packet boundaries to match do not fail.
Route Policy Override – Tries to override asymmetric route policy settings. It emulates auto-opt behavior by using the same tunnel for the returning SYN+ACK as it did for the original SYN packet.

Disable this feature if the asymmetric route policy setting is necessary to correctly route packets. In this case, you might need to configure flow redirection to ensure optimization of TCP flows.
Slow LAN Defense – Resets all flows that consume a disproportionate amount of buffer and have very slow throughput on the LAN side. Owing to a few slower end hosts or a lossy LAN, these flows degrade the performance of all other flows so that no flows see the customary throughput improvement gained through TCP acceleration.

This feature is enabled by default. The number relates indirectly to the amount of time the system waits before resetting such slow flows.
Slow LAN Window Penalty – This setting (OFF by default) penalizes flows that are slow to send data on the LAN side by artificially reducing their TCP receive window. This causes less data to be received and helps to reach a balance with the data sending rate on the LAN side.
WAN Congestion Control – Selects the internal Congestion Control parameter:

Optimized – This is the default setting. This mode offers optimized performance in almost all scenarios.

Standard – In some unique cases, it might be necessary to downgrade to Standard performance to better interoperate with other flows on the WAN link.

Aggressive – Provides aggressive performance and should be used with caution. Recommended mostly for Data Replication scenarios.
WAN Window Scale – The WAN-side TCP window scale factor that is used internally for WAN-side traffic. This is independent of the WAN-side factor advertised by the end hosts.
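The Adjust MSS to Tunnel MTU behavior described above reduces to simple arithmetic. The sketch below is illustrative only; the 100-byte overhead figure is an assumed value for the example, and actual tunnel packet overhead depends on the tunnel configuration.

```python
def clamp_mss(host_mss: int, tunnel_mtu: int, tunnel_overhead: int) -> int:
    """Clamp the end-host advertised MSS to the tunnel-derived MSS.

    Per the table above: TCP MSS = Tunnel MTU - Tunnel Packet Overhead.
    If the end host already advertises a smaller MSS, the host value
    is used instead.
    """
    tunnel_mss = tunnel_mtu - tunnel_overhead
    return min(host_mss, tunnel_mss)

# With a 1500-byte tunnel MTU and an assumed 100 bytes of overhead,
# a host advertising MSS 1460 is clamped to 1400, while a host
# already advertising 1200 is left unchanged.
print(clamp_mss(1460, 1500, 100))  # -> 1400
print(clamp_mss(1200, 1500, 100))  # -> 1200
```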


© Copyright 2022 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein. Aruba Networks and the Aruba logo are registered trademarks of Aruba Networks, Inc. Third-party trademarks mentioned are the property of their respective owners. To view the end-user software agreement, go to Aruba EULA.