Configuring Synthetic Transaction Monitoring

A Synthetic Transaction Monitoring (STM) test lets you check whether a service is up or down and measure its response time. An STM test can range from something as simple as pinging a service to something as complex as sending and receiving an email or running a nested web transaction.

This section provides the procedures to set up Synthetic Transaction Monitoring tests.

Creating a Monitoring Definition

Follow the procedure below to create a monitoring definition:

  1. Go to ADMIN > Setup > STM tab.
  2. Under Step 1: Edit Monitoring Definitions, click New.
  3. In the Edit Monitor Definition dialog box, enter the information below.
    1. Name – a name that will be used for reference.
    2. Description – a description of the test.
    3. Frequency – how often the STM test will be performed.
    4. Protocol – see 'Protocol Settings for STM Tests' for more information about the settings and test results for specific protocols.
    5. Timeout – the time after which the STM test gives up and is considered failed.
  4. Click Save.
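A monitoring definition is essentially a small record of the fields above. The following Python sketch shows that shape; the field names and values are illustrative assumptions, not a FortiSIEM API.

```python
from dataclasses import dataclass

@dataclass
class MonitorDefinition:
    """Illustrative model of an STM monitoring definition (not a FortiSIEM API)."""
    name: str            # reference name shown in the UI
    description: str
    frequency_secs: int  # how often the STM test is performed
    protocol: str        # e.g. "Ping", "HTTP(S) - Simple", "DNS"
    timeout_secs: int    # give up and mark the test failed after this long

# Hypothetical example definition:
defn = MonitorDefinition(
    name="web-portal-check",
    description="HTTP reachability check for the portal",
    frequency_secs=300,
    protocol="HTTP(S) - Simple",
    timeout_secs=10,
)
```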

Creating an STM Test

Follow the procedure below to create an STM test:

  1. Go to ADMIN > Setup > STM tab.
  2. Under Step 2: Create synthetic transaction monitoring entry by associating host name to monitoring definitions, click New.
  3. Enter the following information:
    1. Monitoring Definition – select the Monitoring Definition created in the previous procedure.
    2. Host name or IP/IP Range – the host name, IP address, or IP range on which the test will be performed.
    3. Service Ports – the port(s) on which the test will be performed.
  4. Click Test and Save.

Editing a Monitoring Definition

Follow the procedure below to modify monitoring definition settings:

  1. In the Step 1: Edit Monitoring Definitions dialog box, click the tab for the required action:

     Tab     Description
     Edit    Modify the selected Monitoring Definition.
     Delete  Delete the selected Monitoring Definition.
     Clone   Duplicate the selected Monitoring Definition.
  2. Click Save.

Protocol Settings for STM Tests

The settings and test results associated with each protocol used when creating monitoring definitions are described below.

Ping

Checks packet loss and round trip time.

Settings:
  • Maximum Packet Loss PCT: the tolerable packet loss percentage.
  • Maximum Average Round Trip Time: the tolerable round trip time (seconds) from FortiSIEM to the destination and back.

If either of these two thresholds is exceeded, the test is considered failed.

Note: Make sure the device is accessible from the FortiSIEM node from which this test will be performed.
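The two-threshold rule for the Ping test can be sketched as a small predicate. This is an illustrative outline of the pass/fail logic described above, not FortiSIEM's implementation; all names are assumptions.

```python
def ping_test_passed(loss_pct: float, avg_rtt_secs: float,
                     max_loss_pct: float, max_avg_rtt_secs: float) -> bool:
    """Hypothetical sketch: the Ping test passes only if BOTH the measured
    packet loss and the average round trip time stay within their thresholds;
    exceeding either one fails the test."""
    return loss_pct <= max_loss_pct and avg_rtt_secs <= max_avg_rtt_secs
```

For example, with a 10% loss threshold and a 0.5 s RTT threshold, a sample with 0% loss and 20 ms RTT passes, while 20% loss fails regardless of RTT.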
LOOP Email

This test sends an email to an outbound SMTP server and then attempts to receive the same email from a mailbox via IMAP or POP. It also records the end-to-end time.

Settings:
  • Timeout: the time limit by which the end-to-end LOOP Email test must complete.
  • Outgoing Settings: specify the outbound SMTP server account for sending the email.
      • SMTP Server: name of the SMTP server.
      • User Name: user account on the SMTP server.
      • Email Subject: content of the subject line in the test email.
  • Incoming Settings: specify the inbound IMAP or POP server account for fetching the email.
      • Protocol Type: choose IMAP or POP.
      • Server: name of the IMAP or POP server.
      • User Name: user account on the IMAP or POP server.
      • Email Subject: content of the subject line in the test email.

Note: Before you set up this test, you must have set up access credentials for an outbound SMTP account for sending email and an inbound POP/IMAP account for receiving email.
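The send-then-fetch loop can be sketched with Python's standard smtplib and imaplib modules. This is a minimal illustration of the mechanism, assuming an IMAP inbox and a unique subject line for matching; it is not FortiSIEM's implementation, and the polling strategy and all names are assumptions.

```python
import email.message
import imaplib
import smtplib
import time
import uuid

def make_subject(prefix: str = "FortiSIEM-STM") -> str:
    # A unique subject lets the fetch side find exactly this test email.
    return f"{prefix}-{uuid.uuid4().hex}"

def loop_email_test(smtp_server: str, from_addr: str, to_addr: str,
                    imap_server: str, user: str, password: str,
                    timeout_secs: float = 120.0, poll_secs: float = 10.0):
    """Hypothetical sketch: send a test email via SMTP, then poll IMAP until
    it arrives. Returns the end-to-end time in seconds, or None if the
    timeout expires (i.e. the test failed)."""
    subject = make_subject()
    msg = email.message.EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = from_addr, to_addr, subject
    msg.set_content("STM loop email test")
    start = time.monotonic()
    with smtplib.SMTP(smtp_server) as smtp:
        # Real deployments may also need smtp.starttls() and smtp.login().
        smtp.send_message(msg)
    while time.monotonic() - start < timeout_secs:
        with imaplib.IMAP4(imap_server) as imap:
            imap.login(user, password)
            imap.select("INBOX")
            status, data = imap.search(None, "SUBJECT", subject)
            if status == "OK" and data[0].split():
                return time.monotonic() - start  # end-to-end delivery time
        time.sleep(poll_secs)
    return None
```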
HTTP(S) - Selenium Script

This test uses a Selenium script to play back a series of website actions in FortiSIEM.

Settings:
  • Upload: select the Java file you exported from Selenium.
  • Total Timeout: the script must complete by this time or the test is considered failed.
  • Step Timeout: each step must complete by this time.

To export the script:
  1. Make sure the Selenium IDE is installed in the Firefox browser.
  2. Open Firefox.
  3. Launch Tools > Selenium IDE. From this point on, Selenium records your actions.
  4. Visit the websites you want to test.
  5. When you are done, stop recording.
  6. Click File > Export Test Case As > Java / JUnit 4 / WebDriver.
  7. Save the file with a .java extension. This is the file you upload to FortiSIEM.
HTTP(S) - Simple

This test connects to a URI over HTTP(S) and checks the response time and expected results.

Settings:
  • URL: the URI to connect to.
  • Authentication: the authentication method to use when connecting to this URI.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
  • Contains: an expected string in the test results.
  • Does Not Contain: a string that should not be contained in the test results.
  • Response Code: an expected HTTP(S) response code in the test results. The default is 200 - 204.
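The criteria above can be sketched with Python's standard urllib. This is an illustrative outline under stated assumptions (UTF-8 body, default 200-204 code range), not FortiSIEM's implementation.

```python
import urllib.error
import urllib.request

def evaluate_http_result(code, body, contains, not_contains, code_range):
    """Apply the success criteria: response code in range, expected string
    present, forbidden string absent."""
    if not (code_range[0] <= code <= code_range[1]):
        return False
    if contains is not None and contains not in body:
        return False
    if not_contains is not None and not_contains in body:
        return False
    return True

def http_simple_check(url, timeout_secs, contains=None, not_contains=None,
                      code_range=(200, 204)):
    """Hypothetical sketch: fetch the URL; no response within the timeout
    means failure. Note that urlopen raises HTTPError for 4xx/5xx codes,
    which this sketch counts as failures."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_secs) as resp:
            return evaluate_http_result(resp.status,
                                        resp.read().decode("utf-8", "replace"),
                                        contains, not_contains, code_range)
    except (urllib.error.URLError, OSError):
        return False
```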


HTTP(S) - Advanced

This test uses HTTP requests to connect to a URI over HTTP(S) and checks the response time and expected results. Click + to add an HTTP request to run against a URI.

Settings:
  • URI: the URI to run the test against.
  • SSL: whether to use SSL when connecting to the URI, and the port to connect on.
  • Authentication: the type of authentication to use when connecting to the URI.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
  • Method Type: the type of HTTP request to use.
  • Send Parameters: click + or the Pencil icon to add or edit parameters for the request.
  • Contains: an expected string in the test results.
  • Does Not Contain: a string that should not be contained in the test results.
  • Response Code: an expected HTTP(S) response code in the test results. The default is 200 - 204.
  • Store Variables as Response Data for Later Use: click + or the Pencil icon to add or edit variable patterns that should be stored as data for later tests.
TCP

This test attempts to connect to the specified port using TCP.

Settings:
  • Timeout: the single success criterion - if there is no response within the time specified here, then the test fails.
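A TCP connectivity check of this kind reduces to a timed connect attempt. A minimal Python sketch, assuming success simply means the TCP handshake completes within the timeout:

```python
import socket

def tcp_check(host: str, port: int, timeout_secs: float) -> bool:
    """Hypothetical sketch of the TCP test: True if a connection to
    host:port completes within the timeout, False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_secs):
            return True
    except OSError:
        return False
```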
DNS

Checks response time and the expected IP address.

Settings:
  • Query: the domain name to be resolved.
  • Record Type: the type of record to test against.
  • Result: the expected IP address that should be associated with the DNS entry.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
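The resolve-and-compare step can be sketched with the system resolver. This is an illustrative A-record check only; a real implementation would use a DNS library to honor the Record Type and Timeout settings, since socket.getaddrinfo exposes neither.

```python
import socket

def dns_check(query: str, expected_ip: str) -> bool:
    """Hypothetical sketch of the DNS test for A records: resolve the name
    via the system resolver and check that the expected IPv4 address is
    among the answers. Resolution failure counts as a failed test."""
    try:
        answers = {info[4][0]
                   for info in socket.getaddrinfo(query, None,
                                                  family=socket.AF_INET)}
    except socket.gaierror:
        return False
    return expected_ip in answers
```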
SSH

This test issues a command to the remote server over SSH and checks the response time and expected results.

Settings:
  • Remote Command: the command to run after logging on to the system.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
  • Contains: an expected string in the test results.

Note: You must set up an SSH credential for the target server before setting up this test. As an example, you could set Remote Command to ls and set Contains to the name of a file that should be returned when that command executes on the target server and directory.
LDAP

This test connects to the LDAP server and checks the response time and expected results.

Settings:
  • Base DN: the LDAP base DN to run the test against.
  • Filter: any filter criteria for the Base DN.
  • Scope: the scope for the test.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
  • Number of Rows: the expected number of rows in the test results.
  • Contains: an expected string in the test results.
  • Does Not Contain: a string that should not be contained in the test results.

Note: You must set up an access credential for the LDAP server before you can set up this test.
IMAP

This test checks connectivity to the IMAP service.

Settings:
  • Timeout: the single success criterion - if there is no response within the time specified here, then the test fails.
POP

This test checks connectivity to the POP service.

Settings:
  • Timeout: the single success criterion - if there is no response within the time specified here, then the test fails.
SMTP

This test checks connectivity to the SMTP service.

Settings:
  • Timeout: the single success criterion - if there is no response within the time specified here, then the test fails.
JDBC

This test issues a SQL command over JDBC to a target database and checks the response time and expected results.

Settings:
  • JDBC Type: the type of database to connect to.
  • Database Name: the name of the target database.
  • SQL: the SQL command to run against the target database.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
  • Number of Rows: the expected number of rows in the test results.
  • Contains: an expected string in the test results.
  • Does Not Contain: a string that should not be contained in the test results.
FTP

This test issues an FTP command to the server and checks expected results.

Settings:
  • Anonymous Login: whether to use anonymous login to connect to the FTP directory.
  • Remote Directory: the remote directory to connect to.
  • Timeout: the primary success criterion - if there is no response within the time specified here, then the test fails.
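The connect, log in, and change-directory sequence can be sketched with Python's standard ftplib. This is an illustrative outline of the mechanism, not FortiSIEM's implementation; the directory-listing step is an assumption about how the result is verified.

```python
import ftplib

def ftp_check(host: str, remote_dir: str, timeout_secs: float,
              anonymous: bool = True, user: str = "",
              password: str = "") -> bool:
    """Hypothetical sketch of the FTP test: connect within the timeout,
    log in (anonymously by default), change to the remote directory, and
    request a listing. Any FTP or network error fails the test."""
    try:
        with ftplib.FTP(host, timeout=timeout_secs) as ftp:
            if anonymous:
                ftp.login()              # anonymous login
            else:
                ftp.login(user, password)
            ftp.cwd(remote_dir)
            ftp.nlst()                   # verify the directory is listable
            return True
    except ftplib.all_errors:
        return False
```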
TRACE ROUTE

This test issues a trace route command to the destination and parses the results to create PH_DEV_MON_TRACEROUTE events, one for each hop.

Settings:
  • Timeout: if there is no response from the system within the time specified here, then the test fails.
  • Protocol Type: the IP protocol over which trace route packets are sent - the current options are UDP, TCP, and ICMP.
  • Max TTL: the maximum time-to-live (hop) value used in outgoing trace route probe packets.
  • Wait Time: the maximum time in seconds to wait for a trace route probe response.

For a trace route from source AO to destination D via hops H1, H2, and H3, FortiSIEM generates four hop-by-hop PH_DEV_MON_TRACEROUTE events:
  • First event: source AO, destination H1, Min/Max/Avg RTT and packet loss for this hop.
  • Second event: source H1, destination H2, Min/Max/Avg RTT and packet loss for this hop.
  • Third event: source H2, destination H3, Min/Max/Avg RTT and packet loss for this hop.
  • Fourth event: source H3, destination D, Min/Max/Avg RTT and packet loss for this hop.
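The hop-by-hop event generation can be sketched by parsing typical Linux traceroute output: each parsed hop becomes one event whose source is the previous hop. A minimal sketch, assuming the common one-line-per-hop output format and illustrative event field names:

```python
import re

# Matches one hop line of typical Linux traceroute output, e.g.
#  " 2  core (10.0.0.1)  5.0 ms  5.0 ms  5.0 ms"
HOP_RE = re.compile(r"^\s*(\d+)\s+\S+\s+\(([\d.]+)\)((?:\s+[\d.]+ ms)+)")

def hop_events(source: str, traceroute_output: str):
    """Hypothetical sketch: turn traceroute output into one event per hop
    segment, chaining each hop's source to the previous hop's address."""
    events = []
    prev = source
    for line in traceroute_output.splitlines():
        m = HOP_RE.match(line)
        if not m:
            continue  # header line or unparseable hop (e.g. "* * *")
        hop_ip = m.group(2)
        rtts = [float(x) for x in re.findall(r"([\d.]+) ms", m.group(3))]
        events.append({
            "source": prev,
            "destination": hop_ip,
            "min_rtt_ms": min(rtts),
            "max_rtt_ms": max(rtts),
            "avg_rtt_ms": sum(rtts) / len(rtts),
        })
        prev = hop_ip
    return events
```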

When an STM test fails, the following three system rules can trigger. You can receive an email notification of the failure by creating a notification policy for these rules:

  • Service Degraded - Slow Response to STM: Detects that the response time of an end-user monitored service is greater than a defined threshold (average over 3 samples in 15 minutes is more than 5 seconds).
  • Service Down - No Response to STM: Detects a service suddenly went down from the up state and is no longer responding to synthetic transaction monitoring probes.
  • Service Staying Down - No Response to STM: Detects a service staying down, meaning that it went from up to down and did not come back up, and is no longer responding to end-user monitoring probes.
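The "Service Degraded" condition above (average over 3 samples exceeds 5 seconds) can be sketched as a simple window check. This is an illustrative simplification: the real rule also scopes the samples to a 15-minute window, which this sketch omits.

```python
def slow_response_triggered(response_times_secs, threshold_secs=5.0,
                            samples=3):
    """Hypothetical sketch of the Service Degraded rule: trigger when the
    average of the last `samples` response times exceeds the threshold.
    With fewer samples than required, the rule cannot trigger."""
    if len(response_times_secs) < samples:
        return False
    window = response_times_secs[-samples:]
    return sum(window) / samples > threshold_secs
```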