Website Global Policy
The Alert Logic Managed Web Application Firewall (WAF) Website Global Policy page includes the sections described below.
To go to the documentation for the previous section of Alert Logic Managed Web Application Firewall (WAF), see Protocol Restrictions. To go to the documentation for the next section in the WAF section, see Web Applications.
To access the Website global policy section in the WAF management interface:
- On the left panel, under Services, click Websites.
- On the Websites page, click the website you want to manage.
- Under WAF, click Policy, and then scroll down to the Website policy section.
If you want to see all the settings on the Policy page, on the upper-right corner, change the Display preset to Advanced.
To save configuration changes or edits you make to any features and options, you must click Save on the lower-right of the section or page where you are making changes. Then click Apply changes on the upper-left corner of the page, and click OK. Your changes are not stored unless you properly save them.
Validate static requests separately
The Static content policy allows requests without parameters based on file extension (for example, .gif) and allowed path characters.
To define a static content policy, enter or edit file extensions and allowed path characters.
- File extension
-
The file extensions are defined as a comma-separated list of values.
- Allowed path characters
-
Allowed path characters are defined by selecting them on a list.
The letter A denotes all international alphanumeric characters and other characters are represented by their glyph, their UTF-8 number and a description.
Because static content is not supposed to have any parameters (hence the denotation "static"), only requests without parameters and with the method GET are validated against this rule.
It is possible to allow static requests in general.
Allow all static requests
Radio button |
If selected, requests without parameters (such as requests for graphic elements, stylesheets, and JavaScript) are allowed in general. Allowing all static requests is faster but less secure, as only input to web applications is inspected when this option is enabled. |
Validate static requests path and extension
Radio button |
If selected, requests without parameters (such as requests for graphic elements, stylesheets, and JavaScript) are validated using the allowed static file extensions and allowed path characters. Default: |
Allowed path characters
List of check boxes |
Allowed path characters are defined by selecting them in the list which appears when activating the button. In the list, the letter A denotes all international alphanumeric characters; other characters are represented by their glyph, their UTF-8 number, and a description.
|
Allowed static file extensions
Input field |
The file extensions are defined as a comma-separated list of values.
|
Validate cookies for static requests
Check box |
Enable / disable validation of cookies for requests for static content. Default: |
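The static content rule above can be sketched in a few lines. This is a minimal illustration, not the product's implementation: the extension list and allowed-character set are hypothetical examples of what an administrator might configure.

```python
import re

# Hypothetical policy values: a comma-separated extension list and a set of
# allowed path characters, as configured in the Static content policy section.
ALLOWED_EXTENSIONS = {"gif", "png", "css", "js"}
ALLOWED_PATH = re.compile(r"^[A-Za-z0-9/._-]+$")

def is_valid_static_request(method: str, path: str, has_params: bool) -> bool:
    """Only parameterless GET requests are validated against this rule."""
    if method != "GET" or has_params:
        return False
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    return ext in ALLOWED_EXTENSIONS and bool(ALLOWED_PATH.match(path))
```

A request such as GET /img/logo.gif passes, while a .php request or a path containing a disallowed character is rejected by this rule (and falls through to the other validation layers).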
URL path validation
The URL regular expressions filter matches URLs without parameters on a proxy global basis. If a request matches any of the defined regular expressions, it will be marked as valid by WAF and forwarded to the back-end server.
For examples of global URL regular expressions, please refer to Examples of global URL regular expressions.
Full match is implied for each regular expression, meaning that each will match from the start to the end of the request (a caret ^ and dollar $ will be appended if not already present).
Negative validation
Check box |
Select or clear to enable validation of the path element of the URL against negative signatures. Paths not matching attack signatures will be allowed. |
Positive validation
Check box |
Select or clear to enable positive validation of the path element of the request URL. Paths matching one of the regular expressions in the list will be allowed. |
Allowed path |
In the list enter one or more regular expressions defining the global path policy.
|
Denied URL paths
The URL regular expressions block filter matches URLs without parameters on a proxy global basis. If a request matches any of the defined regular expressions it will instantly be blocked.
Suppose, for instance, that a global paths policy rule allows all URL paths with the extension ".php" but that you want to block access to all resources in the /admin directory, including subdirectories. To do that, simply add the policy rule "/admin/".
The expressions are matched from left to right. Full match is not implied, but matching always starts at the start of the line. This means that, for instance, the expression /admin will match any URI starting with /admin.
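The left-anchored (but not full-match) semantics of denied paths can be illustrated with a short sketch; the rule list and function name are hypothetical:

```python
import re

# Denied paths match from the start of the path; full match is not implied.
DENIED = [re.compile(p) for p in ["/admin/"]]

def is_denied(path: str) -> bool:
    # re.match anchors at the start of the string, mirroring the
    # start-of-line anchoring described for denied URL paths.
    return any(rx.match(path) for rx in DENIED)
```

With this rule, /admin/users is blocked, but /blog/admin/ is not, because the match must start at the beginning of the path.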
Denied path
Input fields |
In the list enter one or more regular expressions defining the global denied path policy.
|
Add predefined Drop-down list |
Select one of the following predefined regular expressions:
|
Query and Cookie validation
Depending on the web server, the web application technology, and the design of the web applications on the back-end web server, cookie names and values may in some cases be parsed as part of a general request object, with the risk that client request cookies may be used to bypass validation controls. It is therefore recommended that cookies are parsed and validated as an integral part of the client query, that is, as request parameters.
WAF parses cookies, and when learning is enabled, the Learner maps cookie values as global parameters.
Cookie validation enabled
Check box |
If enabled client request cookies will be parsed and validated as request parameters. Default: |
Validation
In the global parameters section, parameters which all or many URLs have in common can be added. For instance, in many CMS systems a URL can be viewed in a printer-friendly version by adding a specific parameter to the URL.
When adding parameters to the list, the parameter name is interpreted by WAF as a regular expression. As with the global URL regular expressions, full match from start to end is implied. The value can be either a regular expression or a predefined input validation class.
Enable global parameter signature based negative matching
Check box |
Select or clear the check box Enable global parameter signature based negative matching to enable signature-based matching of parameter names and corresponding values. When learning is enabled for the website, this option should be enabled, as it ensures that parameters not validated by positive policy rules are validated negatively rather than rejected by default. |
Enable global parameter regexp matching |
Select or clear the check box Enable global parameter regexp matching to enable global parameter regexp matching. |
Name
Input fields |
In the list enter a regular expression matching the parameter name or names you want to match.
|
Type
Drop-down list |
Input validation type.
|
Update
Drop-down list |
Controls how the Learner handles the parameter. When update is set to |
Value
Depends on |
Value for input validation.
When type is |
For examples of specifying global parameters using regular expressions please refer to Examples of global parameters regular expressions.
For more general examples using regular expressions for input validation please refer to Examples of regular expressions for input validation.
Full match is implied for each regular expression, meaning that each will match from the start to the end of the request (a caret ^ and dollar $ will be appended if not already present).
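A global parameter rule can be pictured as a pair of regular expressions, one for the name (full match implied) and one for the value. This sketch is illustrative only; the rule contents are hypothetical:

```python
import re

# Hypothetical global parameter rules: name pattern -> value pattern.
# Full match is implied for both, mirroring the behavior described above.
GLOBAL_PARAMS = {r"print(er)?": r"(yes|no)"}

def validate_param(name: str, value: str) -> bool:
    for name_rx, value_rx in GLOBAL_PARAMS.items():
        if re.fullmatch(name_rx, name):
            return re.fullmatch(value_rx, value) is not None
    return False
```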
Headers validation
Allow only RFC defined headers
Check box |
Enable / disable enforcement of strict HTTP compliant headers. If enabled, WAF will enforce strict HTTP header compliance according to the RFC standards and deny any custom HTTP header sent in the request. Default: |
Input headers validation rules
Check box |
The header validation policy rules allow for enforcing a combination of positive and negative validation rules on either specific named headers or all headers. "All" header rules also apply to specific named headers. For each header policy entry the options are:
|
Attack signatures usage
The use of attack signatures can be enabled or disabled for each request method supported.
The check boxes in the negative filtering column enable or disable the use of attack signatures for validating input. The settings apply only to requests or request parts for which negative filtering is enabled.
Attack Class |
The name of the signature attack class |
HEAD
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
GET
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
POST
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
PUT
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
DELETE
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
MKCOL
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
COPY
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
MOVE
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
PROPFIND
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
PROPPATCH
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
LOCK
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
UNLOCK
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
PATCH
Check box |
Select or clear to enable signature for method Default: Signature dependent. |
Session and CSRF protection
WAF has the ability to protect against session hijacking and CSRF (Cross Site Request Forgery) by:
-
Binding client IPs to session cookies by issuing a validation cookie containing a cryptographic token (a checksum) which validates session id + client IP + a secret for each client request.
-
By binding forms to sessions and verifying the origin of the form through insertion of a form validation parameter containing a cryptographic token which proves that the action formulator (the system issuing the page containing a form with an action) knows a session specific secret.
-
Additionally, idle sessions are timed out in order to prevent users from staying logged in, which would make them vulnerable to CSRF attacks.
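The session-binding idea above (a keyed checksum over session ID + client IP + a secret) can be sketched as follows. This is an illustrative analogue, not the product's actual token algorithm; the function names are hypothetical.

```python
import hashlib
import hmac

def session_token(session_id: str, client_ip: str, secret: bytes) -> str:
    # A keyed checksum binding the session id to the client IP; the
    # attacker cannot forge it without knowing the secret.
    msg = f"{session_id}|{client_ip}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def is_valid(token: str, session_id: str, client_ip: str, secret: bytes) -> bool:
    expected = session_token(session_id, client_ip, secret)
    return hmac.compare_digest(token, expected)
```

A token issued for one client IP fails validation when replayed from another IP, which is the hijacking scenario this mechanism defeats.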
When the web system issues a session cookie, WAF detects it and issues a corresponding session validation cookie. In order to identify the session cookie, it is necessary to enter the name of the cookie containing the session ID, for example, PHPSESSID, JSESSIONID, ASPSESSIONID, or SID.
An easy way to identify the session cookie name for the site you are protecting is to establish a session with the site (by logging in, visiting the site, or whatever actions are necessary to make the site issue a session cookie) and then view the cookies issued for that site in your browser.
- Finding session cookie name in Firefox
-
When a session is established, open the browser's cookie storage view and enter the domain name of the site in the search field.
Session ID name
Input field |
The name of the cookie containing the session identifier. This field value is required to enable session and form (CSRF) protection.
|
Secret for signing checksums
Input field |
A hard to guess string used to generate session cookie validation tokens.
|
Idle session timeout
Input field |
Idle session timeout specifies the maximum duration of an idle session before it is dropped resulting in the user being logged out from the web site.
|
Cookie flags
Add Secure flag to session cookie
Check box |
Add the Secure flag to the session cookie to instruct the user's browser to only send the cookie over an SSL connection. Default: |
Make session cookie HttpOnly
Check box |
Add the HttpOnly flag to the session cookie to instruct the user's browser to make the cookie inaccessible to client-side script. Default: |
Add SameSite flag to session cookie
Checkbox Drop-down option |
Add the SameSite flag to the session cookie with the value none, lax, or strict. Default: |
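The three cookie flags above are appended to the Set-Cookie header when not already present. The flag names (Secure, HttpOnly, SameSite) are standard HTTP; the rewriting function itself is a hypothetical sketch:

```python
def add_cookie_flags(set_cookie: str, secure=True, http_only=True,
                     same_site="Lax") -> str:
    # Append each configured flag only if the header lacks it already.
    lowered = set_cookie.lower()
    parts = [set_cookie]
    if secure and "secure" not in lowered:
        parts.append("Secure")
    if http_only and "httponly" not in lowered:
        parts.append("HttpOnly")
    if same_site and "samesite" not in lowered:
        parts.append(f"SameSite={same_site}")
    return "; ".join(parts)
```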
HSTS - HTTP Strict Transport Security
HSTS is a mechanism enabling websites to declare themselves accessible only via secure connections (HTTPS). The policy is declared by websites via the Strict-Transport-Security HTTP response header field. When HSTS is enabled in WAF, the Strict-Transport-Security header is injected into server responses if it is not already present.
Enable HSTS
Check box |
Add Strict-Transport-Security header to backend server responses if not already present. Default: |
Max age
Check box |
Max age corresponds to the required "max-age" directive in the HSTS policy and specifies the number of days, after the reception of the STS header field, during which the User Agent (browser) regards the web server (from which the HSTS header was received) as a Known HSTS Host. Default: |
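Since the Max age field is given in days while the header's max-age directive is in seconds, the injection described above amounts to something like this sketch (the function is illustrative, not the product's code):

```python
def inject_hsts(headers: dict, max_age_days: int) -> dict:
    # Inject the header only when the backend did not already send it;
    # the max-age directive is expressed in seconds.
    if "Strict-Transport-Security" not in headers:
        headers["Strict-Transport-Security"] = f"max-age={max_age_days * 86400}"
    return headers
```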
Enable session protection
Check box |
Enable / disable validation of session identifiers. If enabled, WAF will issue a validation cookie containing a cryptographic token (a checksum) which validates session ID + client IP + the secret for signing checksums (above) for each client request. The validation cookie is named __PFV__ and is issued whenever WAF detects a Set-Cookie with a cookie name matching the value configured above from the website to protect. Default: |
Session violation action
Check box |
What WAF should do when an invalid session ID is detected (see Session violation actions). Default: |
CSRF protection configuration
Generate request form validation tokens (CSRF protection)
Check box |
Enable / disable generation of request form validation tokens (CSRF protection). If enabled, WAF parses web system responses of type text/* searching for form tags. When form tags are detected, a session-specific checksum validating the form action is inserted as a hidden parameter (named ___pffv___) in the form. To enable request validation for specific applications, go to Web application settings. If configured, the Learner will learn and configure CSRF protection for applications. Default: |
Form violation action
Check box |
What WAF should do when an invalid request is detected (see Form violation actions). Default: |
Request authorization configuration
Enable request authorization
Check box |
Enable / disable request authorization for configured web applications. If enabled, WAF authorizes access to resources based on session validity. Request authorization is only enforced for resources for which this feature is enabled. To enable request authorization for specific applications and other resources, including static files, go to Web application settings. Default: |
Credential stuffing and brute force protection
Credential stuffing and brute force attacks are two of the most common methods cybercriminals use to compromise user accounts. Credential stuffing involves using stolen usernames and passwords to gain unauthorized access to systems, exploiting the fact that many users reuse credentials across platforms. Brute force attacks, on the other hand, rely on systematically trying numerous password combinations to break into accounts. Both attack types can lead to significant data breaches, financial loss, and reputational damage.
Behavioral tracking and CAPTCHA challenges
To detect and mitigate credential stuffing and brute force attacks, Fortra WAF employs a behavioral tracking mechanism on URL paths specified by regular expressions. By continuously monitoring traffic, the WAF establishes a dynamic baseline over a trailing one-week period, segmented with one-hour granularity. This approach enables the system to adapt to normal traffic fluctuations while remaining sensitive to unusual spikes that may indicate an attack.
When the traffic from a client - identified by IP address or netmask - exceeds a configurable threshold relative to the established baseline, the WAF issues a CAPTCHA challenge. This challenge serves as a verification step, requiring human interaction to proceed and effectively deterring automated attack attempts. Through this mechanism, the WAF limits the impact of malicious activity while maintaining access for legitimate users.
Once a source IP (or netmask) triggers a violation, based on tracking its activity on the protected pages, it will be required to complete a CAPTCHA to continue accessing the website serving the protected pages – regardless of what pages it requests.
Median Absolute Deviation (MAD) as a method for detecting abnormal traffic increases
Fortra WAF uses Median Absolute Deviation (MAD) for detecting abnormal increases in traffic load. MAD is a statistical approach for detecting unusual patterns by measuring typical variability around the median. It is especially useful in identifying anomalies due to its resilience to outliers.
To detect abnormal increases in traffic, multiples of MAD are used to set probability-based thresholds:
-
1 MAD typically captures about 50% of normal traffic variations.
-
2 MADs cover around 75%, marking moderately unusual traffic levels.
-
3 MADs capture over 90%, flagging highly unusual spikes.
These thresholds help identify the probability of a traffic spike being abnormal, with higher multiples indicating stronger anomalies. This adaptive approach enables consistent, probability-based detection of unusual traffic while accounting for normal fluctuations.
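The MAD-based thresholding above can be expressed compactly. This is a minimal sketch of the statistical idea, assuming a simple list of per-hour request counts as the baseline; it is not the WAF's internal implementation:

```python
import statistics

def mad(samples):
    """Median Absolute Deviation: the median of |x - median(x)|."""
    med = statistics.median(samples)
    return statistics.median(abs(x - med) for x in samples)

def is_abnormal(current, samples, k=3):
    """Flag traffic more than k MADs above the baseline median.
    k=3 corresponds to the 'highly unusual spike' threshold above."""
    return current > statistics.median(samples) + k * mad(samples)
```

Because MAD is computed from medians rather than means, a few past outliers in the baseline barely move the threshold, which is the resilience property the text describes.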
Configuring credentials stuffing and brute force protection
Credentials stuffing and Brute Force protection tracks activity targeting specific URL paths – typically login forms and other forms where attackers may try to guess secrets by repeatedly submitting the form.
Configuration includes:
-
URL paths to track - Tracking and Enforcement
-
Required increase in activity to trigger a violation - Activation threshold
-
IP Precision – whether to track source IPs by netmask or the actual source IP
When the feature is enabled and configured, Fortra WAF will start baselining traffic patterns on the protected pages and mitigate Credentials Stuffing and Brute Forcing by issuing CAPTCHAs to clients triggering protection.
Tracking and enforcement
In this section, the pages to protect are defined. Note that activity is tracked on the protected pages but once activity exceeds thresholds protection is enforced everywhere on the protected website.
Each URL path is tracked individually with its own baseline. To prevent excessive resource consumption the number of URL paths that can be configured is therefore limited.
URLs |
List of URL paths (protected pages) to track.
|
Activation threshold
The activation threshold defines the threshold for when the CAPTCHA service activates, and the HTTP client is required to solve the CAPTCHA to continue using the website.
The relative increases are based on Median Absolute Deviation (MAD) for the baseline profile for the protected page.
Activation Threshold
Dropdown |
|
Source IP precision
Source IP Precision defines the unit that activity is tracked by, and controls are applied to - the scope of tracking.
Options range from “Off” – group all client source IPs, track activity on a general level, and apply CAPTCHA to all clients when surges are detected – to tracking source IPs individually.
The Source IP precision option should be set to Off if an attack scenario involves credential stuffing or brute force attacks from many source IPs from different subnets. This scenario will activate the control based on a general surge in activity above an automatically calculated high water mark.
As both tracking and baselining are affected by the granularity of tracking, both will be reset when Source IP Precision changes.
Source IP Precision
Dropdown |
|
Source status
When Credentials stuffing and Brute Force protection is enabled, source (as defined by Source IP Precision) activity is tracked on the protected pages (URL paths).
When a source triggers a violation by exceeding limits on the protected pages it is flagged as offensive and gets served a CAPTCHA regardless of what page on the protected website it requests next. Until the CAPTCHA is solved it won’t be able to reach the protected website.
Until the offensive status expires a new CAPTCHA will be served to the source IP tracking unit every time the CAPTCHA expires.
Status Expiration
Input field |
Status expiration defines the time in minutes from when the source IP tracking unit is flagged as offensive until that status is reset.
|
Reset Status DB
Button |
Manually reset status DB. Will reset status for all offending sources. |
Reset Tracking DB
Button |
Manually reset tracking DB. Will cause source tracking to restart data collection. |
Trusted clients - IP whitelisting
List of IP addresses which are trusted / whitelisted. The input and output filters can be configured to be bypassed for the whitelisted addresses.
Whitelist
Input field |
By default, requests originating from any IP address (0.0.0.0/0) are affected when Pass Through Mode is enabled. The whitelist allows for the definition of specific IP addresses or networks for which Pass Through Mode is enabled.
|
IP pass through
IP pass through allows for configuring overriding of filter actions based on the source of the request.
Enable HTTP request blocking bypass for trusted clients
Check box |
Enable / disable HTTP pass through. With Pass Through for trusted clients enabled, all requests are forwarded to the real server, but are otherwise handled the usual way (that is, WAF will learn about the site and log any would-be blocked requests not matching the applied access control list). Default: |
Enable IP network blocking bypass for trusted clients
Check box |
Enable / disable network blocking pass through. When enabled, IP addresses listed as trusted clients are included in the global list of IP addresses that are allowed to bypass network blocking and DoS mitigation controls. Note that the address is not bypassed unless network blocking bypass is allowed in the corresponding global setting. Default: |
Trusted domains
The trusted domains list is a whitelist of domains which is composed of 1) the domain of the website proxy virtual host and the domains of the host names in Virtual host aliases, and 2) a list of other trusted domains which can be entered manually.
The effective list of trusted domains is used in Remote File Inclusion signatures to exempt URLs targeting hosts within the list, and when validating redirects, to allow redirects to hosts within the list.
Effective trusted domains |
This is the effective list of trusted domains, i.e. the automatically generated list of the domain of the website proxy virtual host, the domains of the host names in Virtual host aliases and the manually entered domains (if any). |
Other trusted domains |
Enter additional domains to the list of trusted domains. Domains are separated by newline. |
Include other trusted domains in domains list | When enabled the manually entered domains will be added to the effective trusted domains list. |
Evasion protection
Block multiple and %u encoded requests
Check box |
Enable / disable blocking of multiple (or %u) encoded requests. In an attempt to evade detection, attackers often encode requests multiple times. If enabled, WAF will block requests which, after being decoded, still contain encoded characters. Default: |
Duplicate parameter names
If duplicate parameter names are allowed, wrongly configured web application behavior may result in Alert Logic Managed Web Application Firewall (WAF) not learning the website correctly and may also lead to vulnerabilities that bypass WAF, depending on the target application/web server technology.
An attacker may submit a request to the web application with several parameters with the same name. Depending on the technology, the web application may react in one of the following ways:
-
It may only take the data from the first or the last occurrence of the duplicate parameter
-
It may take the data from all the occurrences and concatenate them in a list or put them in an array
In the case of concatenation, this allows an attacker to distribute the payload of, for instance, an SQL injection attack across several duplicate parameters.
As an example, ASP.NET concatenates duplicate parameters using ',', so /index.aspx?page=22&page=42 would result in the back-end web application parsing the value of the 'page' parameter as page=22,42, while WAF may see it as two parameters with values 22 and 42.
This behavior allows an attacker to distribute an SQL injection attack across the duplicate parameters: /index.aspx?page='select data&page=1 from table would result in the back-end web application parsing the value of the 'page' parameter as 'select data,1 from table, while WAF may see it as two parameters with values 'select data and 1 from table.
By default, when WAF validates parameters negatively, it automatically concatenates the payload of duplicate parameters. The parameter duplication problem mostly exists where a positive application or global rule allows a specific parameter with an input validation rule that leaves room for attacks like the above. In the page example above, the attack would be stopped because the page parameter would be learned as numeric input (an integer), which does not allow text input like in the example. Nevertheless, it is important to configure WAF to mimic the target web application's parsing of requests as closely as possible.
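The ASP.NET-style joining behavior described above can be reproduced in a few lines; this is an illustrative model of the concatenation, with ',' as the join separator:

```python
from urllib.parse import parse_qsl

def join_duplicates(query: str, separator: str = ",") -> dict:
    # Concatenate values of repeated parameter names, mimicking how
    # ASP.NET merges duplicates before the application sees them.
    joined = {}
    for name, value in parse_qsl(query, keep_blank_values=True):
        joined[name] = joined[name] + separator + value if name in joined else value
    return joined
```

Validating the joined value (rather than each fragment separately) is what lets the negative signatures see the full, reassembled payload.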
Block duplicate parameter names
Check box |
Enable / disable blocking of duplicate parameter names. If enabled, WAF blocks requests containing duplicate parameter names. Default: |
Join duplicate parameter names
Check box |
Enable / disable concatenation of duplicate parameters. If enabled, WAF will concatenate the values of the duplicate parameters using the configured join separator (below). Default: |
Join separator
Input field |
Character(s) used for separating concatenated parameter values.
|
The best option is to disallow duplicate parameter names. This may not be practical, though, as the use of duplicate parameters may be intended in some applications; the most prominent example is PHP, which parses parameter names suffixed with [] as an array, so par1[]=22&par1[]=42 becomes array(22,42). If this feature is not in use, block it.
If the application technology is ASP/IIS or ASP.NET/IIS and it is not possible to disallow duplicate parameters, the recommended setting is to join duplicate parameters using ',' as in the join separator example above.
Time restricted access
Access to a website can be restricted on a time basis.
For each weekday enter opening hours.
Opens
Input field |
Time the website opens on the weekday.
|
Closes
Input field |
Time the website closes on the weekday.
|
To specify dates where the website is closed, enter a list of dates in the format mm/dd, separated by whitespace, comma, or semicolon.
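The closed-dates format above (mm/dd, separated by whitespace, comma, or semicolon) can be parsed as in this sketch; the function names are hypothetical:

```python
import re
from datetime import date

def parse_closed_dates(text: str) -> list:
    # Split on whitespace, comma, or semicolon, then read each mm/dd pair.
    return [tuple(int(n) for n in d.split("/"))
            for d in re.split(r"[\s,;]+", text.strip()) if d]

def is_closed(today: date, closed_text: str) -> bool:
    return (today.month, today.day) in parse_closed_dates(closed_text)
```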
Input validation classes
Character classes are useful when you want to use a predeclared set of criteria for WAF input request validation. For example, if you have many HTML forms that use an input field "email", you can define a class with a regular expression that defines what a valid e-mail address is. This class can then be used throughout the entire policy.
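The "email" class from the example above might look like this sketch. The pattern is a deliberately simple illustration (not a full RFC 5322 validator), and the class table is hypothetical:

```python
import re

# Hypothetical input validation classes: name -> regular expression.
# Full match is implied, as for the other policy regular expressions.
CLASSES = {"email": r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"}

def validate(class_name: str, value: str) -> bool:
    return re.fullmatch(CLASSES[class_name], value) is not None
```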
When a class is changed, all affected policy elements are automatically updated to reflect the change.
Rank
Read only |
The class rank when used by the Learner. To change the rank, place the cursor in one of the class's input fields. The rank number will be indented. Use the Move up and Move down buttons in the lower button panel to change the class's rank. |
Name
Input field |
The class' name.
|
Value
Input field |
The class regular expression.
|
X
Button |
Mark class for deletion. When classes are saved the marked classes will be deleted. When deleting classes that are in use in the policy you will be prompted to accept replacement of the deleted classes with existing classes. Learner data samples using deleted classes will be deleted. |
For more information about classes and their corresponding regular expressions, refer to Regular expressions.
Some user input is so complex and unpredictable that, to avoid false positives, positive validation of the input ends up being very general and loose. An example is free-text input fields, which often get mapped to the input validation class "printable", which basically allows all printable characters. It is often better to validate such input negatively, which WAF does by default.
WAF determines whether an input should be validated negatively based on the input validation class rank. By default, the threshold is the class Printable. If a parameter's input is learned/configured to be the class configured as the threshold, the signatures policy is used instead of the class regular expression.
Move up |
Change the rank of the selected class. Select the class by clicking anywhere in the class row. When selected, the class rank number is highlighted and indented. Click Move up to move the class one step upwards. |
Move down |
Change the rank of the selected class. Works as described above. |
Add new | Add new class. When clicked an empty row will appear at the bottom of the class list. Fill out the blanks and place the class in the class hierarchy with the move buttons. |
Use negative checking above and including class rank
Drop-down list |
The class rank above and including which input will be validated negatively.
To disable negative class checking select |
Bot and client automation management
Bot and client automation management establishes a set of rules for acting on requests that match a pre-classified database of well-known bots and well-known automated clients or, alternatively, unknown user agents, and applies several possible controls to the request.
An initial set of rules appears by default, and the groups of bots and automated clients they manage are labeled in the Description field:
Bots - falsified user agent |
Distrust and log or block activity from bots that impersonate other, possibly more trustworthy, bots |
Unknown user agent |
Distrust activity from unknown user agents and verify that a human is driving the session by issuing a CAPTCHA |
Bots - known web scraper |
Distrust and log or block activity from bots that are known web-scrapers |
Hacking tools | Distrust activity from known hacking tools – engage trust-based controls, including erring on the side of detection and engaging more sensitive signatures |
Client user agents - scriptable | Distrust activity from scriptable user agents |
Client tools - automated | Distrust activity from automated client tools |
Bots - no verification data | Distrust bots that do not provide means of verification - or where source IPs are not otherwise known |
Bots - known source IP | Allow and trust bots where the claimed identity matches the source IP |
Adding a new rule defines a group of bots or automated clients based on their general characteristics and defines the controls applied to the new grouping.
Identification and verification of clients
Identification of bots, scriptable user agents, browsers, and other human-driven user agents like email clients is based on the user agent set in the request. This is the claimed identity, which is then verified based on available means of verification, such as actually detected automation behavior (versus expected capability) and source IPs, reverse DNS, or ASN lookup for bots.
When the client session is identified and possibly verified, it is classified as either "bot" or "client", and attributes characterizing the session are set.
Evaluation order
Rules are evaluated in the order they are listed in the rules table and the first rule that matches the combined attributes is applied. First match wins.
As rules can potentially overlap, evaluation order should follow the specificity of attributes, with the more specific rules at the top. For example, the rules allowing verified bots in general overlap and conflict with the rule disallowing web scrapers, as web scraper bots are very likely to be weakly or strongly verified. The web scraper rule is therefore evaluated before the rules allowing verified bots.
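The first-match-wins evaluation described above can be sketched in a few lines. This is an illustrative model only: the rule schema, attribute names, and control names below are invented for the example and are not the WAF's actual configuration format.

```python
# Hypothetical sketch of first-match-wins rule evaluation. A rule is a
# dict of required attribute values plus the controls to apply when it
# matches; rules are evaluated top to bottom and the first match wins.

def match(rule, session_attrs):
    """A rule matches when every attribute it specifies equals the session's value."""
    return all(session_attrs.get(k) == v for k, v in rule["attrs"].items())

def apply_first_match(rules, session_attrs):
    for rule in rules:               # evaluated in listed order
        if match(rule, session_attrs):
            return rule["controls"]  # first match wins; later rules are ignored
    return None

# The more specific "web scraper" rule sits above the broader
# "verified bot" rule, so scrapers are caught even when verified.
rules = [
    {"attrs": {"classification": "web_scraper"},
     "controls": {"violation": "blacklisted"}},
    {"attrs": {"verification": "strong"},
     "controls": {"violation": "none", "trust": "high"}},
]

session = {"classification": "web_scraper", "verification": "strong"}
print(apply_first_match(rules, session))  # → {'violation': 'blacklisted'}
```

Swapping the two rules would let a verified web scraper slip through on the broader rule, which is why specificity should drive the ordering.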
Applying controls
Controls are applied to the session identified by the source IP based on the characteristics/attributes set for the session. The assigned session attributes and controls that can be applied are described in the table below.
Rank | Order rules are evaluated in |
Status | Turn the rule on or off |
UA | Attribute - User-Agent classification |
Classification | Attribute - Functional classification; the available options differ for the bot and client types |
UA Auto Capability | Attribute - User-agent automation capability |
Description | Text field describing the target of a rule |
Verification | Attribute - Outcome of verification of the bot user agent |
Violation | Control - Violation type to apply; determines the action as configured in WAF Operating Mode Definitions |
Trust | Control - Trust score to apply; determines factors like Adaptive Protect Mode and the sensitivity of signatures applied to requests from the source IP |
Challenge | Control - Challenge to apply; intended for bots and automated clients claiming to be human-driven user agents |
User agent report
The user agent report shows configured rules applied to known bots and user agents.
The report can be filtered by adding filter criteria in the row beneath the heading row. Filters are regular expressions but do not have to be enclosed in “/”.
See the filter examples below:
Show all scriptable user agents | Automation = scriptable |
Show all scriptable user agents of type library | Automation = scriptable, Classification = library |
Show all scriptable user agents of type library or feed_reader | Automation = scriptable, Classification = library|feed_reader |
Show all user agents in web scraper family "cf-uc_user_agent" | Family = cf-uc, "Show unique only" option = unchecked |
To filter the report, click the button "Apply Filter".
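The filter behavior described above (regular expressions, no "/" delimiters, one pattern per column) can be approximated with a small sketch. The column names and sample rows are illustrative, not the report's actual schema.

```python
# Illustrative sketch of column filters behaving as regular expressions:
# a row is shown only when every configured filter matches its column.
import re

def filter_rows(rows, filters):
    return [row for row in rows
            if all(re.search(pattern, row.get(col, ""))
                   for col, pattern in filters.items())]

rows = [
    {"Automation": "scriptable", "Classification": "library"},
    {"Automation": "scriptable", "Classification": "feed_reader"},
    {"Automation": "none",       "Classification": "browser"},
]

# "library|feed_reader" behaves as a regex alternation, matching both rows.
print(filter_rows(rows, {"Automation": "scriptable",
                         "Classification": "library|feed_reader"}))
```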
Show unique only
For bots in particular, the same bot with the same fundamental attributes can be represented by different user agents and associated bot names without the “family name” and bot classification changing.
As rules are applied to bot classification attributes a rule that matches one version of a bot also applies to all other versions of that bot. The list is therefore grouped by bot attributes to only show unique samples of how bot rules are applied. To show all versions of a user agent uncheck the option “Show unique only”. A use case for this could be to filter by a specific bot family to see all the user agents in that family.
L7 Source IP and Geolocation based controls
As a two-step procedure, L7 Source IP and Geolocation based controls define how to map facts about connections to the WAF to different ways those connections can be managed.
As a first step, facts about connections to the WAF ("Source Classes") are mapped to Control Groups. Then, Control Groups are mapped to the ways those connections can be managed ("Violation," "Trust," and "Challenge").
Module status
By default, the module for L7 Source IP and Geolocation based controls is disabled. To enable the module, change Module status from Inactive to Active, click Save, and then confirm the configuration change by clicking apply changes.
Source classes
Source Classes are groups of source IPs. Each Source Class represents facts about connections to the WAF, such as the country of origin, passage through an anonymizing proxy like TOR, or past or current observed hostile or anomalous behavior.
Source Class configuration is organized in a table with an expandable section containing unconfigured GeoIP country groups:
Name: Name of the Source Class.
Group: A Control Group is assigned to each Source Class. The Control Group is selected in the Group column in the Source Classes configuration table.
Effective Group: Source Classes not specifically assigned a Control Group will be assigned the Control Group designated as the Default. For each Source Class this is shown in the column Effective Group.
Default Source Class configuration
Class Name | Group | Effective Group |
TOR Exit Nodes: IP addresses recently determined to be TOR Exit Nodes | Dubious Sources | Dubious Sources |
Fortra Threat Intelligence Blacklist: IP addresses recently identified with high confidence as bad actors by Fortra Threat Brain | Blacklisted | Blacklisted |
Anomalous Session IPs: IP addresses with recent session behavior determined to be anomalous for this website (see Session Anomaly Detection, below) | Dubious Clients | Dubious Clients |
RFC1918 private IP addresses: Private IP addresses by design not associated with a country code | Default | No Controls |
Unconfigured GeoIP: IP addresses not associated with a country code | Default | No Controls |
Country: Country of origin. The country classes are hidden until "Show Unconfigured GeoIP* Sources" is clicked | Default | No Controls |
*The country source classes are based on GeoLite2 data created by MaxMind, available from https://www.maxmind.com.
Control Groups
Control Groups each represent a set of ways to treat connections to the WAF. The controls/actions applied to a source IP depend on the Source Class(es) it belongs to.
Violation: Violation determines whether requests that belong to the Control Group will be treated as Violations.
Violation options are None and Blacklisted. When Blacklisted is selected the action configured for that violation is applied to the request.
Violation action is by default Block when in Protect Mode and Log (but do not block) when in Detect Mode. Violation action is configured in “WAF operating mode definitions” in the “Basic operation” section of the Policy configuration page.
Trust: Trust determines whether requests that belong to the Control Group will be treated with lower confidence that the source is benign, for example subjecting them to additional assessment such as applying lower confidence signatures. Note that acting on Trust in this way is a feature of the advanced signature engine, enabled for each website in the “Website global policy” section under “Attack signatures usage.”
Challenge: Challenge determines whether requests that belong to the Control Group will be presented with a CAPTCHA.
Default: The Control Group selected as the Default will be applied to all requests from source IPs that belong to an unconfigured Source Class group.
Default Control Group configuration
Group | Violation | Trust | Challenge | Default |
No Controls | None | No downgrade | None | * |
Blacklisted | Blacklisted | Low | None | |
Dubious Sources: Intended for Source Classes that are considered questionable regardless of detected activity; for example, TOR exit nodes are assigned to this group | None | No downgrade | CAPTCHA | |
Dubious Clients: Intended for Anomalous Session IPs, that is, clients that behave differently and have capabilities different from what the User-Agent header would suggest, and clients that "lie" about their User-Agent, like a bot claiming to be a big search engine but not validating as such | None | No downgrade | CAPTCHA | |
Whitelisting vs Blacklisting source IPs / countries
Changing the default Control Group to Blacklisted offers a means of blocking all traffic by default while allowing only specific Control Groups access to the WAF. For example, setting the default to Blacklisted will block all traffic by default, and then, setting the Control Groups for specific country codes to No Controls will allow access for IPs associated with a specific list of country codes.
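The default-deny ("whitelisting") approach described above can be represented as a simple mapping. The group names follow the tables in this section, but the country codes and the lookup function are invented for illustration.

```python
# Hypothetical sketch of whitelisting by country: the default Control
# Group is Blacklisted, and only selected country Source Classes are
# explicitly mapped to "No Controls".

DEFAULT_GROUP = "Blacklisted"

source_class_groups = {
    "country:US": "No Controls",   # illustrative allowed countries
    "country:CA": "No Controls",
}

def effective_group(source_class):
    # Source Classes without an explicit assignment fall back to the
    # default group (the "Effective Group" column in the tables above).
    return source_class_groups.get(source_class, DEFAULT_GROUP)

print(effective_group("country:US"))  # → No Controls
print(effective_group("country:RU"))  # → Blacklisted (default-deny)
```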
Show/Hide Unconfigured GeoIP sources toggles a list of countries, each configurable as a Source Class.
Multiple Control Group associations
A source IP may be or become associated with multiple Source Classes. For example, it might be a Tor Exit Node associated with the Dubious Sources Control Group and later get added to the Fortra Threat Intelligence Blacklist associated with the Blacklisted Control Group. In cases where more than one Control Group apply, the strictest of each of the controls from each group will be applied.
In the example case where both Dubious Sources and Blacklisted apply, with the configuration above the combination of controls applied will be:
Violation: Blacklisted (Blacklisted: Blacklisted, Dubious Sources: None)
Trust: Low (Blacklisted: Low, Dubious Sources: None)
Challenge: CAPTCHA (Blacklisted: None, Dubious Sources: CAPTCHA)
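The "strictest control wins" merge can be sketched as follows. The strictness orderings assumed below (for example, that CAPTCHA is stricter than None) are inferred from the example, not taken from the WAF's documentation.

```python
# Sketch of merging controls when a source IP belongs to several Source
# Classes: for each control, the strictest value across all applicable
# Control Groups is applied. Orderings run from least to most strict.

VIOLATION = ["None", "Blacklisted"]
TRUST     = ["No downgrade", "Low"]
CHALLENGE = ["None", "CAPTCHA"]

def strictest(order, values):
    # Later position in the ordering = stricter control.
    return max(values, key=order.index)

def merge_groups(groups):
    return {
        "Violation": strictest(VIOLATION, [g["Violation"] for g in groups]),
        "Trust":     strictest(TRUST,     [g["Trust"]     for g in groups]),
        "Challenge": strictest(CHALLENGE, [g["Challenge"] for g in groups]),
    }

dubious_sources = {"Violation": "None", "Trust": "No downgrade", "Challenge": "CAPTCHA"}
blacklisted     = {"Violation": "Blacklisted", "Trust": "Low", "Challenge": "None"}

print(merge_groups([dubious_sources, blacklisted]))
# → {'Violation': 'Blacklisted', 'Trust': 'Low', 'Challenge': 'CAPTCHA'}
```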
Trusted Proxy / X-Forwarded-For
It is common for websites to be deployed behind one or more reverse proxies – such as CDNs and Layer 7 load balancers. In fact, the WAF is a reverse proxy itself.
When a request passes through proxy servers, each proxy appends the source IP from which it received the request to the X-Forwarded-For (XFF for short) request header. The XFF header thus contains a list of IP addresses the request has passed through. When the request arrives at the WAF, its source IP will be that of a reverse proxy, and the original client source IP, the IP the controls should be applied to, will be in the XFF header.
As the XFF header can be forged by a malicious agent, extracting the correct client source IP from the XFF header is based on the concept of “trusted proxies.”
While only proxy servers are supposed to write to the XFF header, as the header is a list that every proxy server it passes through writes to, a client can include an XFF header in the initial request with false IPs. Therefore, only parts of the XFF header IP list can be trusted, namely the IP addresses that were inserted by proxy servers the client making the request cannot be expected to control. Such proxy servers are denoted “Trusted Proxies” in the context of this WAF.
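One common way to realize the trusted-proxy idea is to walk the forwarding chain from the WAF side back toward the client, skipping known trusted proxies; the first untrusted address is taken as the client. This is a sketch of that general technique, not the WAF's actual algorithm, and the proxy IPs are made up.

```python
# Sketch of client-IP extraction from X-Forwarded-For using a set of
# trusted proxies. Rightmost entries were appended by proxies closest to
# the WAF and are the most trustworthy, so we walk right to left.

TRUSTED_PROXIES = {"203.0.113.10", "198.51.100.7"}  # e.g. CDN / L7 LB egress IPs

def client_ip(xff_header, peer_ip):
    # The chain as seen by the WAF: XFF entries plus the connecting peer.
    chain = [ip.strip() for ip in xff_header.split(",") if ip.strip()]
    chain.append(peer_ip)
    # Stop at the first hop the client could not be expected to control.
    for ip in reversed(chain):
        if ip not in TRUSTED_PROXIES:
            return ip
    return peer_ip  # every hop was a trusted proxy

# A client forged the first XFF entry (10.0.0.1), but trusted-proxy
# walking still recovers the real client address.
print(client_ip("10.0.0.1, 192.0.2.50, 203.0.113.10", "198.51.100.7"))
# → 192.0.2.50
```

Note how the forged leading entry is never reached: the walk stops at the first untrusted IP, which is exactly why only the proxy-appended part of the list is trusted.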
If Trusted Proxy / X-Forwarded-For source IP extraction is not configured or enabled a warning that there is a risk of applying controls to the wrong source IPs will be displayed.
Session anomaly detection
Session anomaly detection is a feature that groups user requests into sessions and uses machine learning to create models that represent a website's typical traffic patterns. These models are then utilized to predict if a future session is anomalous or not.
By default, session anomaly detection is disabled. To activate this feature, change Module status from Inactive to Active, click Save, and then confirm the configuration change by clicking apply changes.
A machine learning algorithm determines whether a given session is anomalous. The evaluation is based on a set of features used to calculate a session "score", which is then compared against a trained model.
The session's features include HTTP methods, status codes, the types of resources being requested (e.g., images, JS, HTML, PDF, and dynamic resources like PHP or ASPX), the existence of a Referer header, and whether /robots.txt was requested.
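A toy version of feature extraction and scoring over the kinds of signals the text describes might look like the following. The feature weights, threshold, and linear score are invented stand-ins; the real module uses a trained machine-learning model, not a fixed formula.

```python
# Toy sketch of session scoring: derive features from a session's
# requests, compute a score, and flag the session when the score
# exceeds a threshold. All numbers here are illustrative.

def session_features(requests):
    n = len(requests)
    return {
        "error_ratio": sum(r["status"] >= 400 for r in requests) / n,
        "no_referer_ratio": sum(not r.get("referer") for r in requests) / n,
        "robots_txt": any(r["path"] == "/robots.txt" for r in requests),
    }

def anomaly_score(feat):
    # Higher score = more anomalous (hypothetical linear stand-in for
    # the trained model).
    return (0.5 * feat["error_ratio"]
            + 0.3 * feat["no_referer_ratio"]
            + (0.2 if feat["robots_txt"] else 0.0))

def is_anomalous(requests, threshold=0.5):
    return anomaly_score(session_features(requests)) > threshold

# A crawler-like session: all errors, no Referer, probes robots.txt.
crawler_like = [
    {"status": 404, "path": "/robots.txt", "referer": None},
    {"status": 404, "path": "/admin", "referer": None},
]
print(is_anomalous(crawler_like))  # → True
```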
Model status table
Model | Currently, two models are constructed for sessions: bot and client. A session is assigned to a model for training or prediction based on the User-Agent header extracted from its requests. |
Learning progress | The current percentage of sessions that have been collected to meet model training requirements. |
Anomalies | The count of anomalous sessions that have been predicted (detected). |
State | A model is either in a Training or Predict (detect) state. Predictions for anomalies cannot occur until a model has been fully trained. |
Last modified | The date and time that a model completed training. |
Filesize on disk (bytes) | The size of the serialized model on disk. |
Detection sensitivity slider
The detection sensitivity slider allows users to adjust the threshold used to determine whether a session is anomalous or not. By sliding the control to the left, the sensitivity is decreased, resulting in fewer anomalies being detected. Conversely, sliding the control to the right increases the sensitivity and should detect more anomalies.
Increasing sensitivity effectively “lowers the bar” for when a session is considered anomalous and, consequently, also increases the risk of the model generating false positive anomalies. Conversely, lowering sensitivity raises the bar but also decreases the risk of false positives.
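The slider's "lowering the bar" effect can be pictured as a mapping from sensitivity to a detection threshold. The range endpoints below are invented for illustration; the actual thresholds are internal to the module.

```python
# Hypothetical mapping of the sensitivity slider to a detection
# threshold: higher sensitivity lowers the score a session needs to be
# flagged as anomalous, so more sessions (and more false positives)
# cross the bar.

def threshold(sensitivity, lo=0.2, hi=0.8):
    """sensitivity in [0, 1]; 0 = least sensitive, 1 = most sensitive."""
    return hi - sensitivity * (hi - lo)

print(round(threshold(0.0), 2))  # high bar: fewer anomalies detected
print(round(threshold(1.0), 2))  # low bar: more anomalies detected
```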
It is recommended to leave the slider in the default position for optimal performance.
Detected anomalies
When a session is detected as anomalous, the observation is logged and the client source IP is flagged as an Anomalous Session IP, which can be managed in "L7 Source IP and Geolocation based controls", where different controls, such as blocking, a CAPTCHA challenge, and lowering connection trust, can be applied to the source IP.
Load defaults
The "Load Defaults" button enables users to reset the current session anomaly detection models. Selecting "Load Defaults" erases any existing configurations, replaces them with the original default settings, and deletes any trained models. Additionally, the module status is set back to "Inactive".
Note that this action is permanent and cannot be undone, so take caution when selecting this option.
Normally it is not necessary to reset learning, but it may become necessary if, for instance, website usage patterns change, or substantial changes are made to the website and many redirects are sent to accommodate Google.