Changing Event Database
It is highly recommended to choose a specific event storage option and retain it. However, it is possible to switch to a different storage type.
Note: In all cases of changing storage type, the old event data is not migrated to the new storage. Contact FortiSIEM Support if this is needed - some special cases may be supported.
For the following cases, simply choose the new storage type from ADMIN > Setup > Storage.
- Local to Elasticsearch
- NFS to Elasticsearch
- Elasticsearch to Local
The following storage change cases need special considerations:
- Elasticsearch to NFS
- Local to NFS
- NFS to Local
- EventDB to ClickHouse
- Elasticsearch to ClickHouse
- ClickHouse to EventDB
- ClickHouse to Elasticsearch
Elasticsearch to NFS
- Log in to FortiSIEM GUI.
- Go to ADMIN > License > Nodes, select the existing Workers, and click Delete.
- Go to ADMIN > Setup > Storage and set the Storage type to NFS server.
- Go to ADMIN > License > Nodes and add back the Workers deleted in step 2.
Local to NFS
If you are running a single Supervisor, then follow these steps.
- SSH to the Supervisor and stop FortiSIEM processes by running:
phtools --stop all
- Unmount /data by running:
umount /data
- Validate that /data is unmounted by running:
df -h
- Edit /etc/fstab and remove /data mount location.
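For example, the local /data entry to remove might look like this (device name and filesystem type are hypothetical):
/dev/sdd1 /data ext4 defaults 0 0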
- Log in to FortiSIEM GUI, go to ADMIN > Setup > Storage and set the Storage type to EventDB on NFS.
If you are running multiple Supervisors in Active-Active cluster, then follow these steps.
- Log on to Leader.
- Run steps 1-4 in the single Supervisor case described above.
- Log on to each Follower and repeat steps 1-4 in the single Supervisor case described above.
- Log on to Leader, go to ADMIN > Setup > Storage and set the Storage type to EventDB on NFS.
- Log on to any node and make sure that all processes are up on all Supervisors.
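For example, you can run the following command from each Supervisor's shell and confirm that every process shows as up:
phstatus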
NFS to Local
- SSH to the Supervisor and stop FortiSIEM processes by running:
phtools --stop all
- Unmount /data by running:
umount /data
- Validate that /data is unmounted by running:
df -h
- Edit /etc/fstab and remove /data mount location.
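For example, the NFS /data entry to remove might look like this (server IP and export path are hypothetical):
192.168.1.100:/export/eventdb /data nfs defaults 0 0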
- Connect the new disk to Supervisor VM.
- Log in to FortiSIEM GUI, go to ADMIN > Setup > Storage and set the Storage type to Local Disk.
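After the deployment completes, you can confirm the new disk is mounted by running:
df -h /data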
EventDB to ClickHouse
- Single Node Deployment
- Single Supervisor with Workers Deployment
- Multiple Supervisors and Workers Deployment
Assuming you are running FortiSIEM EventDB on a single node deployment (e.g. 2000F, 2000G, 3500G, and VMs), follow these steps to migrate your event data to ClickHouse.
- Stop all the processes on Supervisor by running the following command.
phtools --stop all
Note: This will also stop all events from coming into the Supervisor.
- Edit /etc/fstab and remove all /data entries for EventDB.
- If the same disk is going to be used by ClickHouse (e.g. in hardware appliances), then copy out events from FortiSIEM EventDB to a remote location. You can bring back the old data if needed (see Step 7).
- Mount a new remote disk for the appliance, assuming the remote server is ready, using the following command.
# mount -t nfs <remote server ip>:<remote share point> <local path>
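Example (server IP and paths are hypothetical):
# mount -t nfs 192.168.1.100:/export/eventdb /mnt/eventdb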
- Copy the data, using the following command.
# rsync -av --progress /data /<local path>
Example: # rsync -av --progress /data /mnt/eventdb
- If the same disk is going to be used by ClickHouse (e.g. in hardware appliances), then delete old data from FortiSIEM by taking the following steps.
- Remove the data by running the following command.
# rm -rf /data/*
- Unmount, by running the following commands.
# note mount path for /data
# umount /data
- For 2000G, run the following additional command.
# lvremove /dev/mapper/FSIEM2000G-phx_eventdbcache
Enter y when prompted to confirm.
- For VM based deployments, create new disks for use by ClickHouse by taking the following steps.
- Edit your Virtual Machine on your hypervisor.
- Add a new disk to the current disk controller.
- Run the following in your FortiSIEM Supervisor Shell if the disk is not automatically added.
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan
# lsblk
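Example lsblk output (abbreviated and hypothetical; device names and sizes depend on your environment - the newly added disk appears without a mount point):
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  100G  0 disk
└─sda1   8:1    0  100G  0 part /
sde      8:64   0  500G  0 disk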
- Log into the GUI as a full admin user and change the storage to ClickHouse by taking the following steps.
- Navigate to ADMIN > Setup > Storage > Online.
- From the Event Database drop-down list, select ClickHouse.
- From the Storage Tiers drop-down list, select 1.
- In the Disk Path field, select the disk path.
Example: /dev/sde
- Click Test.
- Click Deploy.
- Navigate to ADMIN > Settings > Database > ClickHouse Config, and click Test and then click Deploy.
- (Optional) Import old events. For appliances they were copied out in Step 3 above. For VMs, they may be mounted remotely. To do this, run the following command from FortiSIEM.
# /opt/phoenix/bin/phClickHouseImport --src [Source Dir] --starttime [Start Time] --endtime [End Time] --host [IP Address of ClickHouse - default 127.0.0.1] --orgid [Organization ID (0 – 4294967295)]
More information on phClickHouseImport can be found here.
Note the valid time format:
<time> : "YYYY-MM-DD hh:mm:ss" (notice the quotation marks, they need to be included.)
Example:phClickHouseImport --src /test/sample --starttime "2022-01-27 10:10:00" --endtime "2022-02-01 11:10:00"
Example importing all organizations: [root@SP-191 mnt]# /opt/phoenix/bin/phClickHouseImport --src /mnt/eventdb/ --starttime "2022-01-27 10:10:00" --endtime "2022-03-9 22:10:00"
Found 32 days' Data
[█ ] 3% 3/32 [283420]
- Log into FortiSIEM GUI and use the ANALYTICS tab to verify events are being ingested.
Note: If your Single Node deployment also contains Workers, proceed to the next section for Worker configuration.
If you are running a single Supervisor with Workers, take the following steps for your Workers, after following the prior steps for your Supervisor in Single Node Deployment.
- Navigate to ADMIN > License.
- Click the Nodes tab.
- From the License > Nodes page, take the following steps for each Worker.
- Select a Worker, and click Edit.
- Add disks.
- Click Test.
- Click Save.
- Repeat steps 3.a-3.d for each Worker. Proceed to step 4 after all Workers have been configured.
- Navigate to ADMIN > Settings > Database > ClickHouse Config, and click Test and then click Deploy.
If you are running multiple Supervisors, Workers with EventDB on NFS and want to switch to ClickHouse, take the following steps:
- Power off all Supervisors, Workers and add new disks for ClickHouse.
- Power on all Supervisors, Workers.
- Wait until all processes are up.
- SSH to Primary Leader Supervisor node.
- Run the following command.
phtools --stop all
- Unmount /data by running the following command.
umount /data
- Validate that /data is unmounted by running the following command.
df -h
- Edit /etc/fstab and remove /data mount location.
- Repeat Step 4 for all Primary Follower Supervisor nodes.
- SSH to Primary Leader Supervisor node.
- Configure ClickHouse following the steps in Configuring ClickHouse Based Deployments.
- Run the following command.
phtools --start all
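For example, confirm on each node that processes are back up before checking the GUI:
phstatus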
- Log into FortiSIEM GUI and use the ANALYTICS tab to verify events are being ingested.
Elasticsearch to ClickHouse
To switch your Elasticsearch database to ClickHouse, take the following steps.
Note: Importing events from Elasticsearch to ClickHouse is currently not supported.
- Stop all the processes on Supervisor by running the following command.
phtools --stop all
Note: This command will also stop all events from coming into the Supervisor. Make sure the phMonitor process is running.
- Log into your hypervisor and add disks for ClickHouse by taking the following steps. You can have 3 tiers of disks with multiple disks in each tier. You must have at least one Tier 1 disk.
- Edit your Virtual Machine on your hypervisor.
- Add a new disk to the current disk controller.
- Run the following in your FortiSIEM Supervisor Shell if the disk is not automatically added.
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan
# lsblk
- Set up ClickHouse as the online database by taking the following steps.
- Navigate to ADMIN > Setup > Storage > Online.
- From the Event Database drop-down list, select ClickHouse.
- From the Storage Tiers drop-down list, select 1.
Note: If you wish to have a warm tier or multiple hot tier disks, additional disks are required.
- Provide the disk path.
- Click Test.
- Click Deploy when the test is successful.
- Log into FortiSIEM GUI and use the ANALYTICS tab to verify events are being ingested.
ClickHouse to EventDB
To switch your ClickHouse database to EventDB, take the following steps.
Note: Importing events from ClickHouse to EventDB is currently not supported.
- Stop all the processes on the Supervisor by running the following command.
phtools --stop all
Note: This will also stop all events from coming into the Supervisor.
- Stop the ClickHouse service by running the following commands.
systemctl stop clickhouse-server
systemctl stop phClickHouseMonitor
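You can verify that both services have stopped by running, for example:
systemctl status clickhouse-server
systemctl status phClickHouseMonitor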
- Edit phoenix_config.txt in /opt/phoenix/config on the Supervisor and set enable = false for ClickHouse.
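For example, the edited setting might look like this (the section name shown is hypothetical; locate the actual ClickHouse section in phoenix_config.txt for your release):
[BEGIN phClickHouseMonitor]
enable = false
[END]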
- Edit /etc/fstab and remove any mount entries that relate to ClickHouse.
- Unmount data by taking the following step, depending on whether you are using a VM (hot and/or warm disk path) or hardware (2000F, 2000G, 3500G).
- For VM, run the following command.
umount /data-clickhouse-hot-1
If multiple tiers are used, the disks will be denoted by a number.
Example: /data-clickhouse-hot-2
/data-clickhouse-warm-1
/data-clickhouse-warm-2
- For hardware, run the following command.
umount /data-clickhouse-hot-1
- For 2000G, run the following additional commands.
umount /data-clickhouse-warm-1
lvremove /dev/mapper/FSIEM2000G-phx_hotdata
Enter y when prompted to confirm.
- Delete old ClickHouse data by taking the following steps.
- Remove old ClickHouse configuration by running the following commands.
# rm -f /etc/clickhouse-server/config.d/*
# rm -f /etc/clickhouse-server/users.d/*
- Clean up "incident" in psql, by running the following commands.
psql -U phoenix -d phoenixdb
truncate ph_incident;
truncate ph_incident_detail;
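When finished, exit psql by running:
\q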
- Configure storage for EventDB by taking the following steps.
- Set up EventDB as the online database by taking the following steps for Creating EventDB Online Storage (Local Disk) OR Creating EventDB Online Storage (NFS).
- For EventDB Local Disk configuration, take the following steps.
- Log into the hypervisor and create a new disk for the VM.
- Log into the FortiSIEM Supervisor GUI as a full admin user.
- Navigate to ADMIN > Setup > Storage > Online.
- From the Event Database drop-down list, select EventDB Local Disk.
- Target the new local disk.
- Click Test.
- Click Deploy.
- Proceed to Step 11.
- For EventDB on NFS configuration, take the following steps.
Note: Make sure the remote NFS storage is ready.
- Log into the hypervisor and create a new disk for the VM.
- Log into FortiSIEM Supervisor GUI as a full admin user.
- Navigate to ADMIN > Setup > Storage > Online.
- From the Event Database drop-down list, select EventDB on NFS.
- In the IP/Host field, select IP or Host and enter the remote NFS server IP Address or Host name.
- In the Exported Directory field, enter the share point.
- Click Test.
- Click Deploy.
- Proceed to Step 11.
- Set up EventDB as the online database by taking the following steps.
- Log into the FortiSIEM Supervisor GUI as a full admin user.
- Navigate to ADMIN > Setup > Storage > Online.
- From the Event Database drop-down list, select EventDB.
- Click Test.
- Click Deploy.
- Make sure the phMonitor process is running. Events can now come in.
- Verify events are coming in by running Adhoc query in ANALYTICS.
ClickHouse to Elasticsearch
To switch your ClickHouse database to Elasticsearch, take the following steps.
Note: Importing events from ClickHouse to Elasticsearch is currently not supported.
- Stop all the processes on Supervisor by running the following command.
phtools --stop all
Note: This will also stop all events from coming into the Supervisor.
- Stop the ClickHouse service by running the following commands.
systemctl stop clickhouse-server
systemctl stop phClickHouseMonitor
- Edit phoenix_config.txt in /opt/phoenix/config on the Supervisor and set enable = false for ClickHouse.
- Edit /etc/fstab and remove any mount entries that relate to ClickHouse.
- Unmount data by taking the following step, depending on whether you are using a VM (hot and/or warm disk path) or hardware (2000F, 2000G, 3500G).
- For VM, run the following command.
umount /data-clickhouse-hot-1
If multiple tiers are used, the disks will be denoted by a number:
Example: /data-clickhouse-hot-2
/data-clickhouse-warm-1
/data-clickhouse-warm-2
- For hardware, run the following command.
umount /data-clickhouse-hot-1
- For 2000G, run the following additional commands.
umount /data-clickhouse-warm-1
lvremove /dev/mapper/FSIEM2000G-phx_hotdata
Enter y when prompted to confirm.
- Delete old ClickHouse data by taking the following steps.
- Remove old ClickHouse configuration by running the following commands.
# rm -f /etc/clickhouse-server/config.d/*
# rm -f /etc/clickhouse-server/users.d/*
- Clean up "incident" in psql, by running the following commands.
psql -U phoenix -d phoenixdb
truncate ph_incident;
truncate ph_incident_detail;
- Make sure the phMonitor process is running.
- Set up Elasticsearch as the online database by taking the following steps.
- Log into the FortiSIEM Supervisor GUI as a full admin user.
- Navigate to ADMIN > Setup > Storage > Online.
- From the Event Database drop-down list, select Elasticsearch.
- From the ES Service Type drop-down list, select Native, Amazon, or Elastic Cloud.
- Configure the rest of the fields depending on the ES Service Type you selected.
- Click Test.
- Click Deploy.
- Wait for the JavaQueryServer process to start up.
- Start new events.
- Verify events are coming in by running Adhoc query in ANALYTICS.