Configuring Synthetic Transaction Monitoring
A Synthetic Transaction Monitoring (STM) test lets you check whether a service is up or down, and measure its response time. An STM test can range from something as simple as pinging a service to something as complex as sending and receiving an email or running a nested web transaction.
This section provides the procedures to set up Synthetic Transaction Monitoring tests.
- Create a monitoring definition
- Create an STM test
- Edit a monitoring definition
- Protocol settings for STM tests
Creating a monitoring definition
Follow the procedure below to create a monitoring definition:
- Go to ADMIN > Setup > STM tab.
- Under Step 1: Edit Monitoring Definitions, click New.
- In the Edit Monitor Definition dialog box, enter the information below.
- Name – enter a name that will be used for reference.
- Description – enter a description.
- Frequency – how often the STM test will be performed.
- Protocol – see 'Protocol settings for STM tests' for more information about the settings and test results for specific protocols.
- Timeout – how long the STM test waits for a response before giving up and reporting a failure.
- Click Save.
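The fields above can be pictured as a simple configuration record. The sketch below is illustrative only: the keys mirror the dialog fields, the values are hypothetical, and this is not an actual FortiSIEM API payload.

```python
# Illustrative sketch only: these keys mirror the Edit Monitor Definition
# dialog fields; this is not an actual FortiSIEM API payload.
monitor_definition = {
    "name": "web-portal-check",        # reference name (hypothetical)
    "description": "Checks the portal landing page",
    "frequency_seconds": 300,          # how often the STM test runs
    "protocol": "HTTP(S) - Simple",    # see 'Protocol settings for STM tests'
    "timeout_seconds": 10,             # give up if no response by this time
}

# A sanity check a definition should satisfy: the test must be able to
# time out before the next scheduled run is due.
assert monitor_definition["timeout_seconds"] < monitor_definition["frequency_seconds"]
```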
Creating an STM test
Follow the procedure below to create an STM test:
- Go to ADMIN > Setup > STM tab.
- Under Step 2: Create synthetic transaction monitoring entry by associating host name to monitoring definitions, click New.
- Enter the following information:
- Monitoring Definition – select the monitoring definition created in the previous step.
- Host name or IP/IP Range – enter the host name, IP address, or IP range on which the test will be performed.
- Service Ports – select the port(s) on which the test will be performed.
- Click Test and Save.
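Conceptually, an STM entry pairs a host and port with the pass/fail semantics of a monitoring definition. The Python sketch below is not FortiSIEM code; it is only a minimal illustration of the host/port/timeout semantics a basic reachability test applies.

```python
import socket
import time

def tcp_probe(host: str, port: int, timeout: float = 5.0):
    """Illustrative sketch: attempt a plain TCP connection to host:port.

    Returns (reachable, elapsed_seconds). The test 'fails' if no
    connection is established within the timeout, mirroring the
    single success criterion of a TCP-style STM test.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start
```

For example, `tcp_probe("10.0.0.5", 443, timeout=5.0)` would report whether the service accepted a connection and how long the handshake took.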
Editing a monitoring definition
Follow the procedure below to modify monitor definition settings:
- In the Step 1: Edit Monitoring Definitions dialog box, click the tab for the required action.

Tab | Description |
---|---|
Edit | Modify the selected Monitoring Definition. |
Delete | Delete the selected Monitoring Definition. |
Clone | Duplicate the selected Monitoring Definition. |

- Click Save.
Protocol settings for STM tests
This table describes the settings associated with the various protocols used when creating a monitoring definition.
Protocol | Description | Settings | Notes |
---|---|---|---|
Ping | Checks packet loss and round trip time. | Maximum Packet Loss PCT: tolerable packet loss. Maximum Average Round Trip Time: tolerable round trip time (seconds) from FortiSIEM to the destination and back. If either of these two thresholds is exceeded, then the test is considered failed. | Make sure the device is accessible from the FortiSIEM node from which this test is going to be performed. |
LOOP Email | This test sends an email to an outbound SMTP server and then attempts to receive the same email from a mailbox via IMAP or POP. It also records the end-to-end time. | Timeout: the time limit by which the end-to-end LOOP Email test must complete. Outgoing Settings: these specify the outgoing SMTP server account for sending the email. | Before you set up the test, you will need access credentials for an outbound SMTP account for sending email and an inbound POP/IMAP account for receiving email. |
HTTP(S) - Selenium Script | This test uses a Selenium script to play back a series of website actions in FortiSIEM. | Upload: select the Java file you exported from Selenium. Total Timeout: the script must complete by this time or the test will be considered failed. Step Timeout: each step must complete by this time. | How to export: |
HTTP(S) - Simple | This test connects to a URI over HTTP(s) and checks the response time and expected results. | URL: the URI to connect to Authentication: any authentication method to use when connecting to this URI Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails Contains: an expected string in the test results Does Not Contain: a string that should not be contained in the test results Response Code: an expected HTTP(S) response code in the test results. The default is set to 200 - 204. | |
HTTP(S) - Advanced | This test uses HTTP requests to connect to a URI over HTTP(s), and checks the response time and expected results. | Click + to add an HTTP request to run against a URI. URI: the URI to run the test against. SSL: whether or not to use SSL when connecting to the URI, and the port to connect on. Authentication: the type of authentication to use when connecting to the URI. Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails. Method Type: the type of HTTP request to use. Send Parameters: click + or the Pencil icon to add or edit any parameters for the request. Contains: an expected string in the test results. Does Not Contain: a string that should not be contained in the test results. Response Code: an expected HTTP(S) response code in the test results. The default is set to 200 - 204. Store Variables as Response Data for Later Use: click + or the Pencil icon to add or edit any variable patterns that should be used as data for later tests. | |
TCP | This test attempts to connect to the specified port using TCP. | Timeout: this is the single success criterion. If there is no response within the time specified here, then the test fails. | |
DNS | Checks response time and expected IP address. | Query: the domain name that needs to be resolved. Record Type: the type of record to test against. Result: specify the expected IP address that should be associated with the DNS entry. Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails. | |
SSH | This test issues a command to the remote server over SSH, and checks the response time and expected results. | Remote Command: the command to run after logging on to the system. Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails. Contains: an expected string in the test results. | You will need to have set up an SSH credential on the target server before setting up this test. As an example, you could set Remote Command to ls, and then set Contains to the name of a file that should be returned when that command executes on the target server and directory. |
LDAP | This test connects to the LDAP server, and checks the response time and expected results. | Base DN: an LDAP base DN you want to run the test against Filter: any filter criteria for the Base DN. Scope: any scope for the test Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails. Number of Rows: the expected number of rows in the test results Contains: an expected string in the test results Does Not Contain: a string that should not be contained in the test results. | You will need to have set up an access credential for the LDAP server before you can set up this test |
IMAP | This test checks connectivity to the IMAP service. | Timeout: this is the single success criterion - if there is no response within the time specified here, then the test fails. | |
POP | This test checks connectivity to the POP service. | Timeout: this is the single success criterion - if there is no response within the time specified here, then the test fails. | |
SMTP | This test checks connectivity to the SMTP service. | Timeout: this is the single success criterion - if there is no response within the time specified here, then the test fails. | |
JDBC | This test issues a SQL command over JDBC to a target database, and checks the response time and expected results. | JDBC Type: the type of database to connect to. Database Name: the name of the target database. SQL: the SQL command to run against the target database Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails. Number of Rows: the expected number of rows in the test results. Contains: an expected string in the test results. Does Not Contain: a string that should not be contained in the test results. | |
FTP | This test issues a FTP command to the server and checks expected results. | Anonymous Login: choose whether to use anonymous login to connect to the FTP directory. Remote Directory: the remote directory to connect to. Timeout: this is the primary success criterion - if there is no response within the time specified here, then the test fails. | |
TRACE ROUTE | This test issues a trace route command to the destination and parses the results to create PH_DEV_MON_TRACEROUTE events, one for each hop. | Timeout: if there is no response from the system within the time specified here, then the test fails. Protocol Type: specifies the IP protocol over which trace route packets are sent - current options are UDP, TCP, and ICMP. Max TTL: maximum time to live (hop) value used in outgoing trace route probe packets. Wait Time: maximum time in seconds to wait for a trace route probe response. | For a trace route from source AO to destination D via hops H1, H2, and H3, FortiSIEM generates four hop-by-hop PH_DEV_MON_TRACEROUTE events. First event: source AO, destination H1, Min/Max/Avg RTT, packet loss for this hop. Second event: source H1, destination H2, Min/Max/Avg RTT, packet loss for this hop. Third event: source H2, destination H3, Min/Max/Avg RTT, packet loss for this hop. Fourth event: source H3, destination D, Min/Max/Avg RTT, packet loss for this hop. |
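The pass/fail logic of the HTTP(S) - Simple test above (Timeout as the primary criterion, plus optional Contains, Does Not Contain, and Response Code checks) can be sketched in a few lines of Python. This is an illustration of the semantics, not FortiSIEM's implementation; the function name and return shape are invented for the example.

```python
import time
import urllib.error
import urllib.request

def http_simple_probe(url, timeout=5.0, contains=None, not_contains=None,
                      ok_codes=range(200, 205)):
    """Illustrative sketch of HTTP(S) - Simple test semantics.

    Fails on no response within `timeout`; otherwise passes only if the
    response code is expected (default 200-204), the body contains
    `contains` (if given), and does not contain `not_contains` (if given).
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            code = resp.getcode()
    except urllib.error.HTTPError as e:
        body, code = "", e.code
    except Exception:
        # No response in time (or connection failure): primary criterion fails.
        return {"up": False, "elapsed": time.monotonic() - start, "code": None}
    up = (code in ok_codes
          and (contains is None or contains in body)
          and (not_contains is None or not_contains not in body))
    return {"up": up, "elapsed": time.monotonic() - start, "code": code}
```

For instance, `http_simple_probe("https://portal.example.com/", timeout=10, contains="Welcome")` would fail either on a slow/absent response or on an unexpected page body.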
When an STM test fails, three system rules are triggered, and you can receive an email notification of that failure by creating a notification policy for these rules:
- Service Degraded - Slow Response to STM: Detects that the response time of an end-user monitored service is greater than a defined threshold (average over 3 samples in 15 minutes is more than 5 seconds).
- Service Down - No Response to STM: Detects a service suddenly went down from the up state and is no longer responding to synthetic transaction monitoring probes.
- Service Staying Down - No Response to STM: Detects a service staying down, meaning that it went from up to down and did not come back up, and is no longer responding to end-user monitoring probes.
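The degradation condition above (average over 3 samples in 15 minutes exceeding 5 seconds) can be expressed as a one-line check. This sketch only restates the stated threshold arithmetic; it is not the rule engine's actual implementation.

```python
def service_degraded(response_times_seconds, threshold_seconds=5.0):
    """Illustrative restatement of the 'Service Degraded' rule condition:
    the average response time over the sample window (3 samples in
    15 minutes, per the rule) is greater than the threshold."""
    if not response_times_seconds:
        return False
    avg = sum(response_times_seconds) / len(response_times_seconds)
    return avg > threshold_seconds
```

For example, samples of 6.0, 5.5, and 7.0 seconds average about 6.2 seconds, so the condition holds; samples of 1.0, 2.0, and 3.0 seconds do not trigger it.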