Configuring Storage
FortiSIEM provides 3 options for storing event data.
-
ClickHouse
-
EventDB
-
Elasticsearch
This document provides separate configuration steps for three event databases.
Configuring ClickHouse Based Deployments
This section covers the following topics.
ClickHouse Configuration Overview
It may be helpful to review the concepts in ClickHouse Operational Overview and the ClickHouse Sizing Guide. First you need to design your ClickHouse Online Cluster and the role of supervisor and worker nodes. There are 3 cases:
-
Small deployments: All-in-one deployment using Supervisor Virtual Machine or a hardware appliance like FortiSIEM 2000G or 3500G.
-
Medium sized deployments: Supervisor is a member of Keeper Cluster but not the Data Cluster. Workers are members of both Keeper and Data Clusters.
-
Large deployments: Supervisor is not a part of Keeper or Data Clusters. Workers entirely form the Keeper and Data Clusters.
The configuration steps involve:
-
Creating storage on Supervisor and Worker nodes depending on their role.
-
Creating a ClickHouse topology to specify the Supervisor and Worker nodes belonging to Keeper cluster and Data cluster.
Next, you need to configure the Archive, where events will be stored after the Online data stores become full. There are two options:
-
For on-premises deployments, you can use a large Warm disk tier as Archive; or real-time archive to NFS.
-
For AWS Cloud deployments, you can use AWS S3 for Archive.
After configuring the online and archive storage, you need to specify the retention policies. See How ClickHouse Event Retention Works for details.
Information on Online event database usage can be seen at Viewing Online Event Data Usage.
Information on Archive event database usage can be seen at Viewing Archive Data.
Creating ClickHouse Online Storage
Case 1: If your FortiSIEM deployment is a hardware appliance, then the appliance acts both as a Keeper node and a ClickHouse Data Node. Follow these configuration steps:
-
Navigate to ADMIN > License and click Upload to load license. For more information, refer to FortiSIEM Licensing Guide.
-
Navigate to ADMIN > Setup > Storage, and click Online to choose storage.
-
From the Event Database drop-down list, select ClickHouse.
-
The Storage Tiers and the disks will be automatically set for you. If you are running a 2000G appliance, then there will be 2 Storage Tiers and 1 disk in Hot Tier (SSD disks) and 1 disk in Warm Tier (Magnetic Disks). If you are running a 3500G appliance, then there will be 1 Storage Tier and 1 disk in Hot Tier (Magnetic Disks).
2000G Storage Setup for ClickHouse
3500G Storage Setup for ClickHouse
-
Click Test.
-
Once it succeeds, then click Deploy.
-
The system is now ready for use.
Case 2: If your FortiSIEM deployment is an all-in-one Virtual Machine (VM), then the VM acts both as a Keeper node and a ClickHouse Data Node. Follow these configuration steps:
-
Navigate to ADMIN > License and click Upload to load license. For more information, refer to FortiSIEM Licensing Guide.
-
Navigate to ADMIN > Setup > Storage, and click Online to choose storage.
-
From the Event Database drop-down list, select ClickHouse.
- Storage Tiers: [Required] Choose 1.
-
Disk Path: [Required] Click + and add a 200GB disk path. Use one of the following CLI commands to find the disk names.
fdisk -l
or
lsblk
When using lsblk to find the disk name, note that the path will be '/dev/<disk>'. For example, on a KVM installation, the 5th disk (hot) is '/dev/vde' and the 6th disk (warm) is '/dev/vdf'. An illustrative lsblk listing is shown after these steps.
-
Click Test.
-
Once it succeeds, click Deploy.
-
The system is now ready for use.
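For reference, the listing below shows what lsblk output might look like on a KVM-based all-in-one Supervisor. It is purely illustrative: the device names, sizes, and mount points are hypothetical and will differ on your system, so confirm the unmounted data disk(s) on your own VM before entering a Disk Path.
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
NAME   SIZE TYPE MOUNTPOINT
vda     25G disk /
vdb    100G disk /opt
vdc     60G disk /cmdb
vdd     60G disk /svn
vde    600G disk
vdf      3T disk
In a layout like this, the unmounted disks /dev/vde and /dev/vdf would be the candidates for the ClickHouse Hot and Warm tier disk paths.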
Case 3: In this case, your ClickHouse deployment is a cluster deployment. This will involve creating storage for Supervisor and Worker nodes and forming Keeper and Data Clusters. It may be helpful to review the concepts in ClickHouse Operational Overview and the ClickHouse Sizing Guide before proceeding.
First, during the Supervisor node installation, take the following steps to choose ClickHouse as the Online Event Database and set up storage.
-
Navigate to ADMIN > License and click Upload to load license. For more information, refer to FortiSIEM Licensing Guide.
-
Navigate to ADMIN > Setup > Storage, and click Online to choose storage.
-
From the Event Database drop-down list, select ClickHouse.
-
If the Supervisor will be a Keeper node, then a 200GB disk is required. If Supervisor is neither a Keeper node nor a Data Node, then a small disk is still needed to store Query Results.
-
After configuring storage for the Supervisor node, create Worker nodes and add storage. See Adding a Worker Node for details.
Configuring ClickHouse Topology
After configuring storage, you need to set up the ClickHouse topology. This involves:
-
Selecting which Worker nodes belong to the ClickHouse Keeper cluster
-
Choosing the number of shards and the Worker nodes belonging to ClickHouse Cluster in each shard
See Step 2- Create ClickHouse Configuration for details.
Creating ClickHouse Archive Storage
There are two options:
-
For on-premises deployments, you can use a large Warm disk tier as Archive, or you can use real-time archive to NFS.
-
For AWS Cloud deployments, you can use AWS S3 for Archive.
Case 1: To configure real-time archive using NFS, follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Archive, and select NFS.
-
Enter the following parameters:
-
IP/Host: [Required] Select IP or Host and enter the IP address/Host name of the NFS server.
-
Exported Directory: [Required] Enter the file path on the NFS Server which will be mounted.
-
Click Test.
-
If the test succeeds, click Deploy.
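Before clicking Test, you can optionally confirm from the Supervisor that the NFS export is visible. This is a generic NFS check rather than a FortiSIEM command; replace the placeholder server address with your NFS server, and verify that the exported directory you entered above appears in the output.
# list the exports offered by the NFS server
showmount -e <nfs-server-ip>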
Case 2: To configure AWS S3 for Archive, follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Archive, and select AWS S3.
-
For Credential Type, select Environmental Credentials or Explicit Credentials.
-
If Environmental Credentials is selected, you will need to have an Identity and Access Management (IAM) role attached to the FortiSIEM instance. Follow the instructions in Creating IAM Policy for AWS S3 Explicit Credentials to create an IAM policy.
-
If Explicit Credentials is selected, then enter the following information:
-
Access Key ID: Access Key ID required to access the S3 bucket(s)
-
Secret Access Key: The Secret Access Key associated with the Access Key ID to access the S3 bucket(s)
-
For Buckets:
-
In the Bucket field, enter the bucket URL.
-
In the Region field, enter the region. For example, "us-east-1".
Note: To minimize any latency, enter the closest region.
-
If more Buckets are required, click + to add a new row.
-
Click Test.
-
If the test succeeds, click Deploy.
Implementation Notes:
-
AWS S3 buckets MUST be created prior to this configuration.
-
When storing ClickHouse data in AWS S3, Fortinet recommends turning Bucket Versioning off, or suspending it (if it was previously enabled). This is because data in ClickHouse files may change and versioning will keep both copies of data - new and old. With time, the number of stale objects may increase, resulting in higher AWS S3 costs. If versioning was previously enabled for the bucket, Fortinet recommends suspending it and configuring a policy to delete non-current versions.
-
Archive data will NOT be automatically purged by FortiSIEM or ClickHouse.
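If you manage the bucket with the AWS CLI, the sketch below shows one way to check versioning, suspend it, and add a lifecycle rule that expires non-current versions, as recommended above. The bucket name demo-bucket matches the example IAM policy in the next topic; the 7-day window is an arbitrary example, not a FortiSIEM requirement.
# check the bucket's current versioning state
aws s3api get-bucket-versioning --bucket demo-bucket
# suspend versioning if it was previously enabled
aws s3api put-bucket-versioning --bucket demo-bucket --versioning-configuration Status=Suspended
# expire non-current object versions after 7 days (example value)
aws s3api put-bucket-lifecycle-configuration --bucket demo-bucket --lifecycle-configuration '{"Rules":[{"ID":"expire-noncurrent","Status":"Enabled","Filter":{"Prefix":""},"NoncurrentVersionExpiration":{"NoncurrentDays":7}}]}'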
Creating IAM Policy for AWS S3 Explicit Credentials
Take the following steps from your AWS console.
-
From your EC2 Dashboard, select your instance.
-
Navigate to the IAM dashboard.
Note: You can go there by clicking the IAM button, or by clicking on Services and selecting IAM.
-
Click Policies to navigate to the Policies page, and click Create policy.
-
From the Create policy page, click the JSON tab.
-
Paste the following JSON code into the editor to configure your policy.
{ "Version":"2012-10-17", "Statement":[ { "Sid":"VisualEditor0", "Effect":"Allow", "Action":[ "s3:ListStorageLensConfigurations", "s3:ListAccessPointsForObjectLambda", "s3:GetAccessPoint", "s3:PutAccountPublicAccessBlock", "s3:GetAccountPublicAccessBlock", "s3:ListAllMyBuckets", "s3:ListAccessPoints", "s3:PutAccessPointPublicAccessBlock", "s3:ListJobs", "s3:PutStorageLensConfiguration", "s3:ListMultiRegionAccessPoints", "s3:CreateJob" ], "Resource":"*" }, { "Sid":"VisualEditor1", "Effect":"Allow", "Action":"s3:*", "Resource":[ "arn:aws:s3:::demo-bucket", "arn:aws:s3:::demo-bucket/*" ] } ] }
-
Click the Next: Tags button.
Note: Tags do not need to be configured.
-
Click the Next: Review button.
-
On the Create policy page, in the Name field, enter a name for the policy.
-
Click the Create policy button. Your policy has been created.
-
Navigate back to the IAM dashboard and click Roles, and click Create role.
-
For Select trusted entity, select AWS service.
-
Under Use case, select EC2.
-
Click Next, and then click Next again.
-
On the Name, review, and create page, in the Role name field, enter a name for the role.
-
Under Step 2: Add permissions, click the Edit button, and select the policy you created earlier, and click Next.
-
Click Create role.
-
Navigate to the Instances page, select your instance and click the Security tab.
-
Click Actions (located upper left), and select Security > Modify IAM role.
-
Select the role you just created, and click Update IAM role.
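If you prefer to script the policy and role creation instead of using the console, the AWS CLI sketch below creates an equivalent policy and EC2 role and attaches one to the other. The file names, policy and role names, and account ID are placeholders; the final step of associating the role with the FortiSIEM instance can still be done from the console as described above.
# create the policy from the JSON document shown earlier, saved locally as fsm-s3-policy.json
aws iam create-policy --policy-name FortiSIEM-S3-Archive --policy-document file://fsm-s3-policy.json
# create a role that EC2 instances are allowed to assume
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name FortiSIEM-S3-Role --assume-role-policy-document file://ec2-trust.json
# attach the policy to the role (replace <account-id> with your AWS account ID)
aws iam attach-role-policy --role-name FortiSIEM-S3-Role --policy-arn arn:aws:iam::<account-id>:policy/FortiSIEM-S3-Archive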
Configuring EventDB Based Deployment
This section covers the following topics:
EventDB Configuration Overview
EventDB requires a file location for storing events.
-
For all-in-one based deployments, you need to create a disk and enter that disk path in the GUI. (Case 1)
-
For hardware-based deployments, the disk is already created, and you need to enter specific information in the GUI. (Case 1)
-
For cluster-based installations using Workers, you must set up NFS and provide the mount point in the GUI. (Case 2)
You can set up separate Online and Archive EventDB, with separate file locations.
For managing Online and Archive event retention, see How EventDB Event Retention Works.
Information on Online event database usage can be seen at Viewing Online Event Data Usage.
Information on Archive event database usage can be seen at Viewing Archive Data.
Creating EventDB Online Storage
Case 1: If your deployment is an all-in-one node or a hardware appliance, then follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Online, and from the Event Database drop-down list, select EventDB Local Disk.
-
Enter the following information for Disk Name.
-
Hardware appliances: enter “hardware”
-
Software installs: enter the 4th disk name that you configured during FortiSIEM installation. Use the command
fdisk -l
to find the disk name.
-
Click Test.
-
If the test succeeds, click Deploy.
Case 2: If your deployment has Worker nodes, then you must configure the event database on NFS. Make sure you have an NFS server set up, and then follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Online, and from the Event Database drop-down list, select EventDB on NFS.
-
Enter the following parameters:
-
Server IP/Host: [Required] Select IP or Host and enter the IP address/Host name of the NFS server.
-
Exported Directory: [Required] Enter the file path on the NFS Server which will be mounted.
-
Click Test.
-
If the test succeeds, click Deploy.
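As an optional sanity check before clicking Test, you can mount the export manually from the Supervisor and confirm that it is writable. This is a generic NFS test with placeholder values, not part of the FortiSIEM procedure; unmount it when finished, since FortiSIEM manages the permanent mount itself.
# temporarily mount the export and verify write access
mkdir -p /mnt/nfscheck
mount -t nfs <nfs-server-ip>:<exported-directory> /mnt/nfscheck
touch /mnt/nfscheck/.fsm_write_test && rm -f /mnt/nfscheck/.fsm_write_test && echo "export is writable"
umount /mnt/nfscheck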
Creating EventDB Archive Storage
Follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Archive, and select NFS.
-
Enter the following parameters:
-
IP/Host: [Required] Select IP or Host and enter the IP address/Host name of the NFS server.
-
Exported Directory: [Required] Enter the file path on the NFS Server which will be mounted.
-
Click Test.
-
If the test succeeds, click Deploy.
Configuring Elasticsearch Based Deployment
This section covers the following topics:
Elasticsearch Configuration Overview
FortiSIEM supports 3 Elasticsearch deployments:
-
Native Elasticsearch – You deploy your own Elasticsearch (Case 1)
-
AWS Opensearch (previously known as AWS Elasticsearch) (Case 2)
-
Elastic Cloud (Case 1)
Creating Elasticsearch Online Storage
This assumes that you have already deployed Elasticsearch or have an AWS Opensearch or Elastic Cloud account.
Case 1: To configure Native Elasticsearch or Elastic Cloud, follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Online, and from the Event Database drop-down list, select Elasticsearch, and from the ES Service Type drop-down list, select Native or Elastic Cloud depending on your Elasticsearch set up.
-
Enter the following parameters.
-
Org Storage: This is relevant for FortiSIEM Multi-tenant deployments. Select one of the following from the drop-down list.
-
All Orgs in One Index – In this option, events from all Organizations are mixed in every Elasticsearch index. This is the most cost-effective option, since Elasticsearch does not scale well when there are many Organizations with a high events per second rate, and keeping each Organization in a separate index may lead to an excessive number of indices (Elasticsearch has been observed to have a limit of approximately 15K indices per cluster).
-
Each Org in its own Index – In this option, events from each Organization are stored in that Organization's own Elasticsearch index. This is a flexible option that provides event isolation among Organizations, but Elasticsearch does not scale well when there are many Organizations with a high events per second rate, and a separate index per Organization may lead to an excessive number of indices.
-
Custom Org Assignment – In this option, Organizations can be grouped into Groups (maximum 15 allowed). Organizations belonging to the same group have their events in the same index. This is a balanced approach that provides some amount of event isolation, but does not let the number of indices grow excessively. To create and deploy a custom Organization to Group Mapping, follow these steps:
-
Click Edit.
-
In the follow up dialog, click Add.
-
In the Mapping table, select an Organization in the left column and select the mapped Group in the right column. The 15 specific Groups are numbered 50,001-50,015. Any Organization that is not explicitly mapped is mapped to the default Group numbered 50,000. A common use case is to map 15 of your most important customers to the specific Groups and the rest to the default Group. Currently, the number of Groups (15) is fixed and cannot be changed.
-
Click Deploy.
-
Endpoint: Click Edit and enter the following information:
-
URL: Enter Elasticsearch Coordinator node URL.
-
Ingest/Query checkbox: If this Coordinator node is to be used for Ingesting Events then check Ingest. If this Coordinator node is going to be used for Querying events, then check the Query flag. If you have multiple Coordinator nodes, then click + and select the URL and Ingest/Query flags. This flexibility enables FortiSIEM to separate a set of Coordinator nodes for Query and Ingest functionalities.
-
Port: The TCP port for the URL above (set to HTTPS/443 by default)
-
User Name: Enter the username for basic authentication to be used with the URL
-
Password: Enter the password for basic authentication to be used with the URL
-
Shard Allocation:
-
If you set it to Fixed, then you enter the number of Shards, and FortiSIEM will not create new shards, even if a Shard reaches its size limit during an event surge. Set Shard Allocation to Fixed only if you know your system well.
-
If you set it to Dynamic, then you enter the number of Starting Shards (default 5) and FortiSIEM will dynamically adjust the number of shards based on the event rate. This is the recommended method.
-
Replicas: If you set it to N, then there will be N+1 copies of every index in Elasticsearch. The most common value is Replicas = 1. A higher number of replicas can increase query speed and resiliency against failures, but may slow down event ingest and will use more storage space.
-
Event Attribute Template: This defines how FortiSIEM Event Attributes are mapped to Elasticsearch Event Attribute Types. This mapping is used to store events in Elasticsearch. If you set it to Default, then FortiSIEM will use the default mapping. The default mapping maps all (currently 2000+) FortiSIEM Event Attributes and can be a large file. Since this mapping is stored in every index, the global Elasticsearch state also becomes large. It is possible to use a smaller file by including only the FortiSIEM Event Attributes used in your environment. In that case, set this field to Custom and enter the custom mapping file.
-
Click Test.
-
If the test succeeds, click Deploy.
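To troubleshoot a failed Test, you can query the Coordinator node directly from the Supervisor with standard Elasticsearch REST calls. The host, port, user, and password below are placeholders; use -k only if the cluster presents a self-signed certificate.
# the cluster should report green or yellow status
curl -sk -u <user>:<password> "https://<coordinator-host>:443/_cluster/health?pretty"
# list indices with their shard and replica counts
curl -sk -u <user>:<password> "https://<coordinator-host>:443/_cat/indices?v"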
Case 2: To configure AWS Opensearch, follow these steps:
-
Go to ADMIN > Setup > Storage.
-
Click Online, and from the Event Database drop-down list, select Elasticsearch, and from the ES Service Type drop-down list, select Amazon.
-
Enter the following parameters.
-
Endpoint: Click Edit and then enter the following information:
-
URL: Enter the AWS Opensearch URL.
-
Ingest/Query checkbox: If this endpoint is to be used for Ingesting Events, then check Ingest. If this endpoint is going to be used for Querying events, then check the Query flag. If you have multiple endpoints, then click + and select the URL and Ingest/Query flags. This flexibility enables FortiSIEM to use separate sets of endpoints for Query and Ingest functionality.
-
Port: The TCP port for the URL above (set to HTTPS/443 by default).
-
Access Key ID: Enter the Access Key ID for use with this endpoint.
-
Secret Key: Enter the Secret Key to be used with this endpoint.
-
Shard Allocation:
-
If you set it to Fixed, then you enter the number of Shards, and FortiSIEM will not create new shards, even if a Shard reaches its size limit during an event surge. Set Shard Allocation to Fixed only if you know your system well.
-
If you set it to Dynamic, then you enter the number of Starting Shards (default 5) and FortiSIEM will dynamically adjust the number of shards based on the event rate. This is the recommended method.
-
Replicas: If you set it to N, then there will be N+1 copies of every index in Elasticsearch. The most common value is Replicas = 1. A higher number of replicas can increase query speed and resiliency against failures, but may slow down event ingest and will use more storage space.
-
Event Attribute Template: This defines how FortiSIEM Event Attributes are mapped to Elasticsearch Event Attribute Types. This mapping is used to store events in Elasticsearch. If you set it to Default, then FortiSIEM will use the default mapping. The default mapping maps all (currently 2000+) FortiSIEM Event Attributes and can be a large file. Since this mapping is stored in every index, the global Elasticsearch state also becomes large. It is possible to use a smaller file by including only the FortiSIEM Event Attributes used in your environment. In that case, set this field to Custom and enter the custom mapping file.
-
Click Test.
-
If the test succeeds, click Deploy.
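To verify the endpoint and keys outside of FortiSIEM, you can sign a request with the same Access Key ID and Secret Key. The example below relies on curl's built-in SigV4 support (curl 7.75 or later); the region and endpoint are placeholders, and es is the signing service name for OpenSearch domains.
# check cluster health with a SigV4-signed request
curl -s --aws-sigv4 "aws:amz:<region>:es" --user "<access-key-id>:<secret-access-key>" "https://<opensearch-endpoint>/_cluster/health?pretty"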
Creating Archive for Elasticsearch Based Deployments
There are 3 archive options:
Configuring HDFS Archive from Elasticsearch
In this option, the FortiSIEM HDFSMgr process creates Spark jobs to pull events directly from Elasticsearch and store them in HDFS. Follow these steps.
-
Go to ADMIN > Setup > Storage.
-
Click Archive, and select HDFS.
-
Enter the following parameters:
-
Uncheck Real Time Archive.
-
For Spark Master Node:
-
Select IP or Host and enter the IP address or Host name of the Spark Cluster Master node.
-
Set Port to the TCP port number for FortiSIEM to communicate to the Spark Master node.
-
For HDFS Name Node:
-
Select IP or Host and enter the IP address or Host name of the HDFS Name node. This is the machine which stores the HDFS metadata: the directory tree of all files in the file system, and tracks the files across the cluster.
-
Set Port to the TCP port number for FortiSIEM to communicate to the HDFS Name node.
-
Click Test.
-
If the test succeeds, click Deploy.
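If the Test fails, the generic checks below can help confirm that the HDFS Name node and Spark Master are reachable. They assume a host with the Hadoop client installed and use placeholder host names with common default ports (8020 for the Name node RPC port, 8080 for the Spark Master web UI), which may differ in your cluster. The same checks apply to the real-time HDFS option described next.
# list the HDFS root directory through the Name node
hdfs dfs -ls hdfs://<namenode-host>:8020/
# confirm that the Spark Master web UI responds (expect HTTP 200)
curl -s -o /dev/null -w "%{http_code}\n" http://<spark-master-host>:8080/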
Configuring Real-time HDFS Archive from FortiSIEM
In this option, the FortiSIEM HDFSMgr process creates Spark jobs to pull events from the FortiSIEM Supervisor and Worker nodes. Follow these steps.
-
Go to ADMIN > Setup > Storage.
-
Click Archive, and select HDFS.
-
Enter the following parameters:
-
Check Real Time Archive. Set a Start time (in the future) when the real time archive should begin.
-
For Spark Master Node:
-
Select IP or Host and enter the IP address or Host name of the Spark Cluster Master node.
-
Set Port to the TCP port number for FortiSIEM to communicate to Spark Master node.
-
For HDFS Name Node:
-
Select IP or Host and enter the IP address or Host name of the HDFS Name node. This is the machine which stores the HDFS metadata: the directory tree of all files in the file system, and tracks the files across the cluster.
-
Set Port to the TCP port number for FortiSIEM to communicate to HDFS Name node.
-
Click Test.
-
If the test succeeds, click Deploy.
Configuring Real-time Archive to NFS
In this option, FortiSIEM Supervisor and Worker nodes store events in NFS managed by FortiSIEM EventDB. This happens while events are being inserted into Elasticsearch. This approach has no impact on Elasticsearch performance, but events are stored in both Elasticsearch and EventDB and managed independently. Follow these steps.
-
Go to ADMIN > Setup > Storage.
-
Click Archive, and select NFS.
-
Enter the following parameters:
-
IP/Host: [Required] Select IP or Host and enter the IP address/Host name of the NFS server.
-
Exported Directory: [Required] Enter the file path on the NFS Server which will be mounted.
-
Click Test.
-
If the test succeeds, click Deploy.
Changing Event Database
It is highly recommended to choose a specific event storage option and retain it. However, it is possible to switch to a different storage type.
Note: In all cases of changing storage type, the old event data is not migrated to the new storage. Contact FortiSIEM Support if this is needed - some special cases may be supported.
For the following cases, simply choose the new storage type from ADMIN > Setup > Storage.
- Local to Elasticsearch
- NFS to Elasticsearch
- Elasticsearch to Local
The following storage change cases need special considerations:
- Elasticsearch to NFS
- Local to NFS
- NFS to Local
- EventDB to ClickHouse
- Elasticsearch to ClickHouse
- ClickHouse to EventDB
- ClickHouse to Elasticsearch
Elasticsearch to NFS
- Log in to FortiSIEM GUI.
- Select and delete the existing Workers from ADMIN > License > Nodes > Delete.
- Go to ADMIN > Setup > Storage and update the Storage type to EventDB on NFS.
- Go to ADMIN > License > Nodes and add back the Workers deleted in step 2.
Local to NFS
If you are running a single Supervisor, then follow these steps.
- SSH to the Supervisor and stop FortiSIEM processes by running:
phtools --stop all
- Unmount /data by running:
umount /data
- Validate that /data is unmounted by running:
df -h
- Edit /etc/fstab and remove the /data mount location (a sample entry is shown after these steps).
- Log in to the FortiSIEM GUI, go to ADMIN > Setup > Storage and update the Storage type to EventDB on NFS.
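For reference, the /data line to remove from /etc/fstab will look roughly like one of the entries below, depending on whether /data currently points at a local disk or at NFS. The device name, filesystem type, server IP, and export path are placeholders; remove only the line whose mount point is /data.
# local disk entry (Local to NFS case) - device and filesystem type will differ
/dev/sdd    /data    xfs    defaults    0 0
# NFS entry (NFS to Local case) - server and export path will differ
192.168.1.50:/fsiem/events    /data    nfs    defaults    0 0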
If you are running multiple Supervisors in Active-Active cluster, then follow these steps.
-
Log on to Leader.
-
Run steps 1-4 in the single Supervisor case described above.
-
Log on to each Follower and repeat steps 1-4 in the single Supervisor case described above.
-
Log on to Leader, go to ADMIN > Setup > Storage and set the Storage type to EventDB on NFS.
-
Log on to any node and make sure that all processes are up on all Supervisors.
NFS to Local
- SSH to the Supervisor and stop FortiSIEM processes by running:
phtools --stop all
- Unmount /data by running:
umount /data
- Validate that /data is unmounted by running:
df -h
- Edit /etc/fstab and remove the /data mount location.
- Connect the new disk to the Supervisor VM.
- Log in to the FortiSIEM GUI, go to ADMIN > Setup > Storage and update the Storage type to Local Disk.
EventDB to ClickHouse
Assuming you are running FortiSIEM EventDB on a single node deployment (e.g. 2000F, 2000G, 3500G, or a VM), the following steps show how to migrate your event data to ClickHouse.
-
Stop all the processes on Supervisor by running the following command.
phtools --stop all
Note: This will also stop all events from coming into Supervisor.
-
Edit /etc/fstab and remove all /data entries for EventDB.
-
If the same disk is going to be used by ClickHouse (e.g. in hardware Appliances), then copy out events from FortiSIEM EventDB to a remote location. You can bring back the old data if needed (See Step 7).
-
Mount a new remote disk for the appliance, assuming the remote server is ready, using the following command.
# mount -t nfs <remote server ip>:<remote share point> <local path>
-
Copy the data, using the following command.
# rsync -av --progress /data /<local path>
Example:
# rsync -av --progress /data /mnt/eventdb
-
If the same disk is going to be used by ClickHouse (e.g. in hardware Appliances), then delete old data from FortiSIEM, by taking the following steps.
-
Remove the data by running the following command.
# rm -rf /data/*
-
Unmount, by running the following commands.
# note mount path for /data
# umount /data
-
For 2000G, run the following additional command.
# lvremove /dev/mapper/FSIEM2000G-phx_eventdbcache: y
-
For VM based deployments, create new disks for use by ClickHouse by taking the following steps.
-
Edit your Virtual Machine on your hypervisor.
-
Add a new disk to the current disk controller.
-
Run the following in your FortiSIEM Supervisor Shell if the disk is not automatically added.
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan
# lsblk
-
Log into the GUI as a full admin user and change the storage to ClickHouse by taking the following steps.
-
Navigate to ADMIN > Setup > Storage > Online.
-
From the Event Database drop-down list, select ClickHouse.
-
From the Storage Tiers drop-down list, select 1.
-
In the Disk Path field, select the disk path.
Example:
/dev/sde
-
Click Test.
-
Click Deploy.
-
(Optional) Import old events. For appliances they were copied out in Step 3 above. For VMs, they may be mounted remotely. To do this, run the following command from FortiSIEM.
# /opt/phoenix/bin/phClickHouseImport --src [Source Dir] --starttime [Start Time] --endtime [End Time] --host [IP Address of ClickHouse - default 127.0.0.1] --orgid [Organization ID (0 - 4294967295)]
More information on phClickHouseImport can be found here. Note the valid time format:
<time> : "YYYY-MM-DD hh:mm:ss" (note that the quotation marks need to be included)
Example:
phClickHouseImport --src /test/sample --starttime "2022-01-27 10:10:00" --endtime "2022-02-01 11:10:00"
Example with import all organizations:
[root@SP-191 mnt]# /opt/phoenix/bin/phClickHouseImport --src /mnt/eventdb/ --starttime "2022-01-27 10:10:00" --endtime "2022-03-9 22:10:00"
Found 32 days' Data
[█ ] 3% 3/32 [283420]█
-
Log into FortiSIEM GUI and use the ANALYTICS tab to verify events are being ingested.
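In addition to checking the ANALYTICS tab, you can confirm on the Supervisor that ClickHouse is up and writing data parts. The query below reads the standard system.parts table and is not a FortiSIEM tool; the database and table names it returns depend on the FortiSIEM version. The same check applies to the cluster procedure described next.
# confirm that ClickHouse is accepting data
clickhouse-client --query "SELECT database, table, sum(rows) AS rows, count() AS active_parts FROM system.parts WHERE active GROUP BY database, table ORDER BY rows DESC"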
If you are running multiple Supervisors and Workers with EventDB on NFS and want to switch to ClickHouse, take the following steps:
-
Power off all Supervisors, Workers and add new disks for ClickHouse.
-
Power on all Supervisors, Workers.
-
Wait until all processes are up.
-
SSH to Primary Leader Supervisor node.
-
Run the following command.
phtools --stop all
-
Unmount /data by running the following command.
umount /data
-
Validate that /data is unmounted by running the following command.
df -h
-
Edit /etc/fstab and remove the /data mount location.
-
Repeat Step 4 for all Primary Follower Supervisor nodes.
-
SSH to Primary Leader Supervisor node.
-
Configure ClickHouse following the steps in Configuring ClickHouse Based Deployments.
-
Run the following command.
phtools --start all
-
Log into FortiSIEM GUI and use the ANALYTICS tab to verify events are being ingested.
Elasticsearch to ClickHouse
To switch your Elasticsearch database to ClickHouse, take the following steps.
Note: Importing events from Elasticsearch to ClickHouse is currently not supported.
-
Stop all the processes on Supervisor by running the following command.
phtools --stop all
Note: This command will also stop all events from coming into the Supervisor. Make sure the phMonitor process is running.
-
Log into your hypervisor and add disks for ClickHouse by taking the following steps. You can have 2 Tiers of disks with multiple disks in each Tier. You must have at least one Tier 1 disk.
-
Edit your Virtual Machine on your hypervisor.
-
Add a new disk to the current disk controller.
-
Run the following in your FortiSIEM Supervisor Shell if the disk is not automatically added.
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan
# lsblk
-
Set up ClickHouse as the online database by taking the following steps.
-
Navigate to ADMIN > Setup > Storage > Online.
-
From the Event Database drop-down list, select ClickHouse.
-
From the Storage Tiers drop-down list, select 1.
Note: If you wish to have a warm tier or multiple hot tier disks, additional disks are required.
-
Provide the disk path.
-
Click Test.
-
Click Deploy when the test is successful.
Events can now come in.
-
Log into FortiSIEM GUI and use the ANALYTICS tab to verify events are being ingested.
ClickHouse to EventDB
To switch your ClickHouse database to EventDB, take the following steps.
Note: Importing events from ClickHouse to EventDB is currently not supported.
-
Stop all the processes on the Supervisor by running the following command.
phtools --stop all
Note: This will also stop all events from coming into the Supervisor.
-
Stop ClickHouse Service by running the following commands.
systemctl stop clickhouse-server
systemctl stop phClickHouseMonitor
-
Edit phoenix_config.txt on the Supervisor and set enable = false for ClickHouse.
-
Edit /etc/fstab and remove any mount entries that relate to ClickHouse.
-
Unmount data by taking the following step depending on whether you are using a VM (hot and/or warm disk path) or hardware (2000F, 2000G, 3500G).
-
For VM, run the following command.
umount /data-clickhouse-hot-1
If multiple tiers are used, the disks will be denoted by a number.
Example:
/data-clickhouse-hot-2
/data-clickhouse-warm-1
/data-clickhouse-warm-2
-
For hardware, run the following command.
umount /data-clickhouse-hot-1
-
For 2000G, run the following additional commands.
umount /data-clickhouse-warm-1
lvremove /dev/mapper/FSIEM2000Gphx_hotdata : y
-
Delete old ClickHouse data by taking the following steps.
-
Remove old ClickHouse configuration by running the following commands.
# rm -f /etc/clickhouse-server/config.d/*
# rm -f /etc/clickhouse-server/users.d/*
-
Clean up "incident" in psql, by running the following commands.
psql -U phoenix -d phoenixdb
truncate ph_incident;
truncate ph_incident_detail;
-
Configure storage for EventDB by taking the following steps.
-
Set up EventDB as the online database by taking the following steps for Creating EventDB Online Storage (Local Disk) OR Creating EventDB Online Storage (NFS).
-
For EventDB Local Disk configuration, take the following steps.
-
Log into the hypervisor and create a new disk for the VM.
-
Log into the FortiSIEM Supervisor GUI as a full admin user.
-
Navigate to ADMIN > Setup > Storage > Online.
-
From the Event Database drop-down list, select EventDB Local Disk.
-
Target the new local disk.
-
Click Test.
-
Click Deploy.
-
Proceed to Step 11.
-
For EventDB on NFS configuration, take the following steps.
Note: Make sure the remote NFS storage is ready.
-
Log into the hypervisor and create a new disk for the VM.
-
Log into FortiSIEM Supervisor GUI as a full admin user.
-
Navigate to ADMIN > Setup > Storage > Online.
-
From the Event Database drop-down list, select EventDB on NFS.
-
In the IP/Host field, select IP or Host and enter the remote NFS server IP Address or Host name.
-
In the Exported Directory field, enter the share point.
-
Click Test.
-
Click Deploy.
-
Proceed to Step 11.
-
Set up EventDB as the online database, by taking the following steps.
-
Log into the FortiSIEM Supervisor GUI as a full admin user.
-
Navigate to ADMIN > Setup > Storage > Online.
-
From the Event Database drop-down list, select EventDB.
-
Click Test.
-
Click Deploy.
-
Make sure phMonitor process is running. Events can now come in.
-
Verify events are coming in by running Adhoc query in ANALYTICS.
ClickHouse to Elasticsearch
To switch your ClickHouse database to Elasticsearch, take the following steps.
Note: Importing events from ClickHouse to Elasticsearch is currently not supported.
-
Stop all the processes on Supervisor by running the following command.
phtools --stop all
Note: This will also stop all events from coming into the Supervisor.
-
Stop ClickHouse Service by running the following commands.
systemctl stop clickhouse-server
systemctl stop phClickHouseMonitor
-
Edit phoenix_config.txt on the Supervisor and set enable = false for ClickHouse.
-
Edit /etc/fstab and remove any mount entries that relate to ClickHouse.
-
Unmount data by taking the following step depending on whether you are using a VM (hot and/or warm disk path) or hardware (2000F, 2000G, 3500G).
-
For VM, run the following command.
umount /data-clickhouse-hot-1
If multiple tiers are used, the disks will be denoted by a number:
Example:
/data-clickhouse-hot-2
/data-clickhouse-warm-1
/data-clickhouse-warm-2
-
For hardware, run the following command.
umount /data-clickhouse-hot-1
-
For 2000G, run the following additional commands.
umount /data-clickhouse-warm-1
lvremove /dev/mapper/FSIEM2000Gphx_hotdata : y
-
Delete old ClickHouse data by taking the following steps.
-
Remove old ClickHouse configuration by running the following commands.
# rm -f /etc/clickhouse-server/config.d/*
# rm -f /etc/clickhouse-server/users.d/*
-
Clean up "incident" in psql, by running the following commands.
psql -U phoenix -d phoenixdb
truncate ph_incident;
truncate ph_incident_detail;
-
Make sure the phMonitor process is running.
-
Set up Elasticsearch as the online database by taking the following steps.
-
Log into the FortiSIEM Supervisor GUI as a full admin user.
-
Navigate to ADMIN > Setup > Storage > Online.
-
From the Event Database drop-down list, select Elasticsearch.
-
From the ES Service Type drop-down list, select Native, Amazon, or Elastic Cloud.
-
Configure the rest of the fields depending on the ES Service Type you selected.
-
Click Test.
-
Click Deploy.
-
Wait for the JavaQueryServer process to start up.
-
Start new events.
-
Verify events are coming in by running Adhoc query in ANALYTICS.
Changing NFS Server IP
If you are running a FortiSIEM Cluster using NFS and want to change the IP address of the NFS Server, then take the following steps.
Step 1: Temporarily Change the Event Storage Type from EventDB on NFS to EventDB on Local
- Go to ADMIN > License > Nodes and remove all the Worker nodes.
- SSH to the Supervisor and stop FortiSIEM processes by running:
phtools --stop all
- Unmount /data by running:
umount /data
- Validate that /data is unmounted by running:
df -h
- Edit /etc/fstab and remove the /data mount location.
- Attach a new local disk to the Supervisor. It is recommended that it is at least 50-80 GB.
- Go to ADMIN > Setup > Storage > Online.
- Change the storage type to Local Disk and add the local disk's partition (e.g. /dev/sde) to the Disk Name field.
- Click Test to confirm.
- Click Deploy.
Step 2: Change the NFS Server IP Address
This is a standard system administrator operation. Change the NFS Server IP address.
Step 3: Change the Event Storage Type Back to EventDB on NFS
- SSH to the Supervisor and stop FortiSIEM processes by running:
phtools --stop all
- Unmount /data by running:
umount /data
- Validate that /data is unmounted by running:
df -h
- Edit /etc/fstab and remove the /data mount location.
- Go to ADMIN > Setup > Storage > Online.
- Change the storage type to NFS.
- In the Server field, with IP selected, enter the new IP address of the NFS server.
- In the Exported Directory field, enter the correct NFS folder's path.
- Click Test to confirm.
- Click Deploy.
- Go to ADMIN > License > Nodes and add back all the Worker nodes.
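Once the Workers are added back, you can confirm from the Supervisor (and each Worker) that /data is mounted from the new NFS server IP. These are standard Linux checks, not FortiSIEM commands.
# /data should show the new NFS server IP as its source
df -h /data
grep ' /data ' /proc/mounts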