
Chapter 14 - Hardware Acceleration

Hardware acceleration overview

Most FortiGate models have specialized acceleration hardware that can offload resource-intensive processing from main processing (CPU) resources. Most FortiGate units include specialized content processors (CPs) that accelerate a wide range of important security processes such as virus scanning, attack detection, and encryption and decryption. (Only selected entry-level FortiGate models do not include a CP processor.) Many FortiGate models also contain security processors (SPs) that accelerate processing for specific security features such as IPS, and network processors (NPs) that offload processing of high-volume network traffic.

Content processors (CP4, CP5, CP6, CP8, and CP9)

Most FortiGate models contain FortiASIC Content Processors (CPs) that accelerate many common resource-intensive, security-related processes. CPs work at the system level, with tasks being offloaded to them as determined by the main CPU. Capabilities of the CPs vary by model. Newer FortiGate units include CP8 and CP9 processors. Older CP versions still in use in currently operating FortiGate models include the CP4, CP5, and CP6.

CP9 capabilities

The CP9 content processor provides the following services:

  • Flow-based inspection (IPS, application control etc.) pattern matching acceleration with over 10Gbps throughput
  • IPS pre-scan
  • IPS signature correlation
  • Full match processors
  • High performance VPN bulk data engine
  • IPsec and SSL/TLS protocol processor
  • DES/3DES/AES128/192/256 in accordance with FIPS46-3/FIPS81/FIPS197
  • MD5/SHA-1/SHA256/384/512-96/128/192/256 with RFC1321 and FIPS180
  • HMAC in accordance with RFC2104/2403/2404 and FIPS198
  • ESN mode
  • GCM support for NSA "Suite B" (RFC6379/RFC6460) including GCM-128/256; GMAC-128/256
  • Key Exchange Processor that supports high performance IKE and RSA computation
  • Public key exponentiation engine with hardware CRT support
  • Primality checking for RSA key generation
  • Handshake accelerator with automatic key material generation
  • True Random Number generator
  • Elliptic Curve support for NSA "Suite B"
  • Sub public key engine (PKCE) to support up to 4096 bit operation directly (4k for DH and 8k for RSA with CRT)
  • DLP fingerprint support
  • TTTD (Two-Thresholds-Two-Divisors) content chunking, with both thresholds and both divisors configurable
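The TTTD chunking listed above splits a byte stream at content-defined boundaries so that DLP fingerprints survive insertions and deletions. The following Python sketch illustrates the algorithm under assumed parameter values and a toy running hash (the CP9's actual parameters and hash function are not documented here):

```python
def tttd_chunks(data: bytes,
                t_min: int = 64,        # lower threshold: never cut before this size
                t_max: int = 256,       # upper threshold: force a cut at this size
                main_div: int = 32,     # main divisor D
                backup_div: int = 16):  # backup divisor D' (matches more often than D)
    chunks = []
    start = 0        # start of the current chunk
    backup = -1      # most recent backup breakpoint seen in this chunk
    h = 0
    for i, b in enumerate(data):
        # Toy running hash; real implementations use a rolling hash
        # over a fixed-size window.
        h = (h * 31 + b) & 0xFFFFFFFF
        size = i - start + 1
        if size < t_min:
            continue                          # too small to cut yet
        if h % backup_div == backup_div - 1:
            backup = i                        # remember a fallback boundary
        if h % main_div == main_div - 1:
            chunks.append(data[start:i + 1])  # main divisor hit: cut here
            start, backup = i + 1, -1
        elif size >= t_max:
            # Forced cut: prefer the remembered backup boundary, else cut here.
            cut = backup if backup >= 0 else i
            chunks.append(data[start:cut + 1])
            start, backup = cut + 1, -1
    if start < len(data):
        chunks.append(data[start:])           # trailing partial chunk
    return chunks

# Chunks reassemble to the original data, and every chunk except the last
# respects both thresholds.
data = bytes(range(256)) * 8
chunks = tttd_chunks(data)
assert b"".join(chunks) == data
assert all(64 <= len(c) <= 256 for c in chunks[:-1])
```

Because boundaries depend on content rather than fixed offsets, inserting bytes near the start of a stream shifts only nearby chunk boundaries, which is what makes chunk-level fingerprinting robust for DLP.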

CP8 capabilities

The CP8 content processor provides the following services:

  • Flow-based inspection (IPS, application control etc.) pattern matching acceleration
  • High performance VPN bulk data engine
  • IPsec and SSL/TLS protocol processor
  • DES/3DES/AES in accordance with FIPS46-3/FIPS81/FIPS197
  • ARC4 in compliance with RC4
  • MD5/SHA-1/SHA256 with RFC1321 and FIPS180
  • HMAC in accordance with RFC2104/2403/2404 and FIPS198
  • Key Exchange Processor supporting high performance IKE and RSA computation
  • Public key exponentiation engine with hardware CRT support
  • Primality checking for RSA key generation
  • Handshake accelerator with automatic key material generation
  • Random Number generator compliance with ANSI X9.31
  • Sub public key engine (PKCE) to support up to 4096 bit operation directly
  • Message authentication module offering a high performance cryptographic engine for calculating SHA256/SHA1/MD5 of data up to 4 GB (used by many applications)
  • PCI express Gen 2 four lanes interface
  • Cascade Interface for chip expansion
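The hardware CRT support listed for the public key exponentiation engine refers to the standard Chinese Remainder Theorem shortcut for RSA private-key operations: two half-size exponentiations mod p and mod q replace one full-size exponentiation mod n. A minimal Python illustration with a toy key (real keys are 2048 bits or more):

```python
def rsa_crt_decrypt(c: int, p: int, q: int, d: int) -> int:
    # Precomputable CRT parameters (normally stored with the private key).
    dp = d % (p - 1)
    dq = d % (q - 1)
    q_inv = pow(q, -1, p)          # q^-1 mod p (Python 3.8+)
    # Two exponentiations with half-size moduli, roughly 4x faster in
    # software than a single pow(c, d, p * q); hardware gets a similar win.
    m1 = pow(c, dp, p)
    m2 = pow(c, dq, q)
    h = (q_inv * (m1 - m2)) % p    # Garner's recombination
    return m2 + h * q

# Toy RSA key: n = 3233 = 61 * 53, e = 17, d = 2753.
p, q, e, d = 61, 53, 17, 2753
m = 65
c = pow(m, e, p * q)
assert rsa_crt_decrypt(c, p, q, d) == pow(c, d, p * q) == m
```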

CP6 capabilities

  • Dual content processors
  • FIPS-compliant DES/3DES/AES encryption and decryption
  • SHA-1 and MD5 HMAC with RFC1321 and FIPS180
  • HMAC in accordance with RFC2104/2403/2404 and FIPS198
  • IPsec protocol processor
  • High performance IPsec engine
  • Random Number generator compliance with ANSI X9.31
  • Key exchange processor for high performance IKE and RSA computation
  • Script Processor
  • SSL/TLS protocol processor for SSL content scanning and SSL acceleration

CP5 capabilities

  • FIPS-compliant DES/3DES/AES encryption and decryption
  • SHA-1 and MD5 HMAC with RFC1321/2104/2403/2404 and FIPS180/FIPS198
  • IPsec protocol processor
  • High performance IPSEC Engine
  • Random Number generator compliant with ANSI X9.31
  • Public Key Crypto Engine supports high performance IKE and RSA computation
  • Script Processor

CP4 capabilities

  • FIPS-compliant DES/3DES/AES encryption and decryption
  • SHA-1 and MD5 HMAC
  • IPSEC protocol processor
  • Random Number generator
  • Public Key Crypto Engine
  • Content processing engine
  • ANSI X9.31 and PKCS#1 certificate support

Determining the content processor in your FortiGate unit

Use the get hardware status CLI command to determine which content processor your FortiGate unit contains. The output looks like this:

get hardware status

Model name: FortiGate-100D

ASIC version: CP8

ASIC SRAM: 64M

CPU: Intel(R) Atom(TM) CPU D525 @ 1.80GHz

Number of CPUs: 4

RAM: 1977 MB

Compact Flash: 15331 MB /dev/sda

Hard disk: 15272 MB /dev/sda

USB Flash: not available

Network Card chipset: Intel(R) PRO/1000 Network Connection (rev.0000)

Network Card chipset: bcm-sw Ethernet driver 1.0 (rev.)

The ASIC version line lists the content processor model number.

Viewing SSL acceleration status

You can view the status of SSL acceleration using the following command:

get vpn status ssl hw-acceleration-status

Acceleration hardware detected: kxp=on cipher=on

Disabling CP offloading for firewall policies

If you want to completely disable offloading to CP processors for test purposes or other reasons, you can do so in security policies. Here are some examples:

For IPv4 security policies:

config firewall policy

edit 1

set auto-asic-offload disable

end

For IPv6 security policies:

config firewall policy6

edit 1

set auto-asic-offload disable

end

For multicast security policies:

config firewall multicast-policy

edit 1

set auto-asic-offload disable

end

Note: Disabling auto-asic-offload also disables NP offloading.

Security processors (SPs)

FortiGate Security Processing (SP) modules, including the SP3 (XLP), XG2, XE2, FE8, and CE4, work at both the interface and system level to increase overall system performance by accelerating specialized security processing. You can configure the SP to favor IPS over firewall processing in hostile, high-traffic environments.

SP processors include their own IPS engine which is similar to the FortiOS IPS engine but with the following limitations:

  • The SP IPS engine does not support SSL deep inspection. When you have SSL deep inspection enabled for a security policy that includes flow-based inspection or IPS, offloading to the SP is disabled and traffic is processed by the FortiGate CPU and CP processors.
  • The SP IPS engine does not support FortiGuard Web Filtering. When you enable flow-based FortiGuard Web Filtering on a FortiGate unit with an SP processor, the SP processor cannot perform FortiGuard lookups and web pages fail to load.

The following security processors are available:

  • The SP3 (XLP) is built into the FortiGate-5101B and provides IPS acceleration. No special configuration is required. All IPS processing, including traffic accepted by IPv4 and IPv6 firewall policies and IPv4 and IPv6 DoS policies, is accelerated by the built-in SP3 processors.
  • The FMC-XG2 is an FMC module with two 10 Gb/s SFP+ interfaces that can be used on FortiGate-3950B and FortiGate-3951B units.
  • The FortiGate-3140B also contains a built-in XG2 using ports 19 and 20.
  • The ADM-XE2 is a dual-width AMC module with two 10Gb/s interfaces that can be used on FortiGate-3810A and FortiGate-5001A-DW systems.
  • The ADM-FE8 is a dual-width AMC module with eight 1Gb/s interfaces that can be used with the FortiGate-3810A.
  • The ASM-CE4 is a single-width AMC module with four 10/100/1000 Mb/s interfaces that can be used on FortiGate-3016B and FortiGate-3810A units.
Note: Traffic is blocked if you enable IPS for traffic passing over inter-VDOM links while that traffic is being offloaded by an SP processor. If you disable SP offloading, traffic flows normally. You can disable offloading in individual firewall policies by disabling auto-asic-offload for those policies. You can also use the following command to disable all IPS offloading:

config ips global

set np-accel-mode none

set cp-accel-mode none

end

SP Processing Flow

SP processors provide an integrated high-performance fast path multilayer solution for both intrusion protection and firewall functions. The multilayered protection starts with anomaly checking at the packet level to ensure that each packet is sound and reasonable. Immediately after that, a sophisticated set of interface-based packet anomaly protection, DDoS protection, policy-based intrusion protection, firewall fast path, and behavior-based methods is employed to shield the rest of the system from DDoS attacks.

The packets then enter an interface/policy-based intrusion protection system, where each packet is evaluated against a set of signatures. The end result is streams of user packets, free of anomalies and attacks, entering the fast path system for unicast or multicast fast path forwarding.

SP processing flow

Displaying information about security processing modules

You can display information about installed SP modules using the CLI command

diagnose npu spm list

For example, for the FortiGate-5101C:

FG-5101C # diagnose npu spm list

Available SP Modules:

 

ID Model       Slot      Interface

0 xh0          built-in  port1, port2, port3, port4,

                        base1, base2, fabric1, fabric2

                        eth10, eth11, eth12, eth13

                        eth14, eth15, eth16, eth17

                        eth18, eth19

You can also use variations of this command to get more information about SP processing. This example shows how to display details about how the module is processing sessions using the SYN proxy.

diagnose npu spm dos synproxy <sp_id>

This is a partial output of the command:

Number of proxied TCP connections :                 0

Number of working proxied TCP connections :         0

Number of retired TCP connections :                 0

Number of valid TCP connections :                   0

Number of attacks, no ACK from client :             0

Number of no SYN-ACK from server :                  0

Number of reset by server (service not supportted): 0

Number of establised session timeout :              0

Client timeout setting :                            3 Seconds

Server timeout setting :                            3 Seconds
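The counters above reflect how a SYN proxy works: the SP completes the TCP handshake on behalf of the server, and only opens the real connection once the client proves liveness by ACKing. The following Python sketch models the idea with SYN cookies; the function names and cookie construction are illustrative, not the SP's actual implementation:

```python
import hashlib

SECRET = b"per-boot-secret"   # illustrative; real cookies also encode a timestamp

def syn_cookie(src: str, dst: str, sport: int, dport: int) -> int:
    """Stateless initial sequence number derived from the connection 4-tuple."""
    data = f"{src}:{sport}->{dst}:{dport}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + data).digest()[:4], "big")

def handle_ack(src: str, dst: str, sport: int, dport: int, ack_no: int) -> bool:
    """Validate the client's final ACK against the cookie sent in our SYN-ACK.

    A valid ACK means the proxy can open the real server connection
    ("valid TCP connections" in the counters above); a spoofed source
    never ACKs ("attacks, no ACK from client")."""
    return ack_no == (syn_cookie(src, dst, sport, dport) + 1) & 0xFFFFFFFF

# A well-behaved client echoes our ISN + 1; anything else is rejected.
isn = syn_cookie("172.16.200.55", "10.1.100.11", 56453, 80)
assert handle_ack("172.16.200.55", "10.1.100.11", 56453, 80, (isn + 1) & 0xFFFFFFFF)
assert not handle_ack("172.16.200.55", "10.1.100.11", 56453, 80, (isn + 2) & 0xFFFFFFFF)
```

Because the cookie is recomputable from the 4-tuple, the proxy keeps no per-SYN state, which is what lets it absorb SYN floods.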

Network processors (NP1, NP2, NP3, NP4, NP4Lite, NP6 and NP6Lite)

FortiASIC network processors work at the interface level to accelerate traffic by offloading traffic from the main CPU. Current models contain NP4, NP4Lite, NP6, and NP6lite network processors. Older FortiGate models include NP1 network processors (also known as FortiAccel, or FA2) and NP2 network processors.

The traffic that can be offloaded, maximum throughput, and number of network interfaces supported by each varies by processor model:

  • NP6 supports offloading of most IPv4 and IPv6 traffic, IPsec VPN encryption, CAPWAP traffic, and multicast traffic. The NP6 has a maximum throughput of 40 Gbps using 4 x 10 Gbps XAUI or Quad Serial Gigabit Media Independent Interface (QSGMII) interfaces or 3 x 10 Gbps and 16 x 1 Gbps XAUI or QSGMII interfaces. For details about the NP6 processor, see NP6 and NP6lite Acceleration and for information about FortiGate models with NP6 processors, see FortiGate NP6 architectures.
  • NP6lite is similar to the NP6 but with a lower throughput and some functional limitations (for example, the NP6lite does not offload CAPWAP traffic). The NP6lite has a maximum throughput of 10 Gbps using 2 x QSGMII and 2 x Reduced Gigabit Media Independent Interface (RGMII) interfaces. For details about the NP6lite processor, see NP6Lite processors and for information about FortiGate models with NP6lite processors, see FortiGate NP6lite architectures.
  • NP4 supports offloading of most IPv4 firewall traffic and IPsec VPN encryption. The NP4 has a capacity of 20 Gbps through 2 x 10 Gbps interfaces. For details about NP4 processors, see NP4 and NP4Lite Acceleration and for information about FortiGate models with NP4 processors, see FortiGate NP4 architectures.
  • NP4Lite is similar to the NP4 but with about half the throughput and some functional limitations.
  • NP2 supports IPv4 firewall and IPsec VPN acceleration. The NP2 has a capacity of 2 Gbps through 2 x 1 Gbps interfaces or 4 x 1 Gbps interfaces.
  • NP1 supports IPv4 firewall and IPsec VPN acceleration with 2 Gbps capacity. The NP1 has a capacity of 2 Gbps through 2 x 1 Gbps interfaces.
  • The NP1 does not support frames greater than 1500 bytes. If your network uses jumbo frames, you may need to adjust the MTU (Maximum Transmission Unit) of devices connected to NP1 ports. Maximum frame size for NP2, NP4, and NP6 processors is 9216 bytes.
  • For both NP1 and NP2 network processors, ports attached to a network processor cannot be used for firmware installation by TFTP.
Note: Sessions that require proxy-based security features (for example, virus scanning, IPS, application control, and so on) are not fast-pathed and must be processed by the CPU. Sessions that require flow-based security features can be offloaded to NP4 or NP6 network processors if the FortiGate supports NTurbo.

Determining the network processors installed on your FortiGate unit

Use either of the following commands to list the NP6 processors in your FortiGate unit:

get hardware npu np6 port-list

diagnose npu np6 port-list

Use either of the following commands to list the NP6lite processors in your FortiGate unit:

get hardware npu np6lite port-list

diagnose npu np6lite port-list

To list other network processors on your FortiGate unit, use the following CLI command.

get hardware npu <model> list

<model> can be legacy, np1, np2 or np4.

The output lists the interfaces that have the specified processor. For example, for a FortiGate-5001B:

get hardware npu np4 list

ID    Model         Slot       Interface

0     On-board                 port1 port2 port3 port4

                               fabric1 base1 npu0-vlink0 npu0-vlink1

1     On-board                 port5 port6 port7 port8

                               fabric2 base2 npu1-vlink0 npu1-vlink1

The npu0-vlink0, npu1-vlink1, and similar interfaces are used for accelerating inter-VDOM links.

How NP hardware acceleration alters packet flow

NP hardware acceleration generally alters packet flow as follows:

  1. Packets initiating a session pass to the FortiGate unit’s main processing resources (CPU).
  2. The FortiGate unit assesses whether the session matches fast path (offload) requirements.
    To be suitable for offloading, traffic must possess only characteristics that can be processed by the fast path. The list of requirements depends on the processor; see NP6 session fast path requirements or NP4 session fast path requirements.
    If the session can be fast pathed, the FortiGate unit sends the session key or IPsec security association (SA) and configured firewall processing action to the appropriate network processor.
  3. Network processors continuously match packets arriving on their attached ports against the session keys and SAs they have received.
  • If a network processor’s network interface is configured to perform hardware accelerated anomaly checks, the network processor drops or accepts packets that match the configured anomaly patterns. These checks are separate from and in advance of anomaly checks performed by IPS, which is not compatible with network processor offloading. See Offloading NP4 anomaly detection.
  • The network processor next checks for a matching session key or SA. If a matching session key or SA is found, and if the packet meets packet requirements, the network processor processes the packet according to the configured action and then sends the resulting packet. This is the actual offloading step. Performing this processing on the NP processor improves overall performance because the NP processor is optimized for this task. As well, overall FortiGate performance is improved because the CPU has fewer sessions to process.
NP network processor packet flow

  • If a matching session key or SA is not found, or if the packet does not meet packet requirements, the packet cannot be offloaded. The network processor sends the data to the FortiGate unit’s CPU, which processes the packet.
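The per-packet decision flow above can be sketched as follows. All names and structures here are illustrative, not FortiOS internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionKey:
    src: str
    dst: str
    sport: int
    dport: int
    proto: int

session_table = {}   # SessionKey -> configured action, installed by the CPU

def np_receive(pkt: dict) -> str:
    # 1. Hardware-accelerated anomaly check (if enabled on the interface).
    if pkt.get("anomaly"):
        return "drop"
    # 2. Match the packet against session keys received from the CPU.
    key = SessionKey(pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    action = session_table.get(key)
    if action is None:
        return "send-to-cpu"   # miss: the CPU processes it (and may install a key)
    return action              # hit: handled entirely on the NP (the offload step)

# The first packet of a session goes to the CPU; once the CPU installs the
# session key, subsequent packets stay on the NP.
pkt = {"src": "172.16.200.55", "dst": "10.1.100.11",
       "sport": 56453, "dport": 80, "proto": 6}
assert np_receive(pkt) == "send-to-cpu"
session_table[SessionKey(**pkt)] = "forward"
assert np_receive(pkt) == "forward"
```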

Encryption and decryption of IPsec traffic originating from the FortiGate can utilize network processor encryption capabilities.

Packet forwarding rates vary by the percentage of offloadable processing and the type of network processing required by your configuration, but are independent of frame size. For optimal traffic types, network throughput can equal wire speed.

NP processors and traffic logging and monitoring

Except for the NP6, network processors do not count offloaded packets; offloaded packets are not logged by traffic logging and are not included in traffic statistics or traffic log reports.

NP6 processors support per-session traffic and byte counters, Ethernet MIB matching, and reporting through messages, resulting in traffic statistics and traffic log reporting.

Accelerated sessions on FortiView All Sessions page

When viewing sessions in the FortiView All Sessions console, NP4- and NP6-accelerated sessions are highlighted with an NP4 or NP6 icon. The tooltip for the icon includes the NP processor type and the total number of accelerated sessions.

You can also configure filtering to display FortiASIC sessions.

NP session offloading in HA active-active configuration

Network processors can improve network performance in active-active (load balancing) high availability (HA) configurations, even though this traffic deviates from general offloading patterns by involving more than one network processor, each in a separate FortiGate unit. No additional offloading requirements apply.

Once the primary FortiGate unit’s main processing resources send a session key to its network processor(s), network processor(s) on the primary unit can redirect any subsequent session traffic to other cluster members, reducing traffic redirection load on the primary unit’s main processing resources.

As subordinate units receive redirected traffic, each network processor in the cluster assesses and processes session offloading independently from the primary unit. Session key states of each network processor are not part of synchronization traffic between HA members.

Configuring NP HMAC check offloading

Hash-based Message Authentication Code (HMAC) checks are offloaded to network processors by default. You can enter the following command to disable this feature:

config system global

set ipsec-hmac-offload disable

end
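The check being offloaded here is the standard RFC 2104 HMAC integrity check on each IPsec packet: the receiver recomputes the Integrity Check Value (ICV) and compares it with the one in the packet. A minimal Python illustration using HMAC-SHA1 truncated to 96 bits, a common IPsec choice (the helper name is illustrative):

```python
import hashlib
import hmac

def verify_icv(auth_key: bytes, packet: bytes, received_icv: bytes) -> bool:
    # HMAC-SHA1-96 as used by IPsec: full HMAC-SHA1, truncated to 96 bits.
    expected = hmac.new(auth_key, packet, hashlib.sha1).digest()[:12]
    # Constant-time compare avoids leaking where the mismatch occurs.
    return hmac.compare_digest(expected, received_icv)

key = b"\x01" * 20
pkt = b"esp payload bytes"
icv = hmac.new(key, pkt, hashlib.sha1).digest()[:12]
assert verify_icv(key, pkt, icv)
assert not verify_icv(key, pkt + b"tampered", icv)
```

Offloading moves exactly this recomputation (a hash over every byte of every packet) off the CPU and onto the NP.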

Software switch interfaces and NP processors

FortiOS supports creating a software switch by grouping two or more FortiGate physical interfaces into a single virtual or software switch interface. All of the interfaces in this virtual switch act like interfaces in a hardware switch in that they all have the same IP address and can be connected to the same network. You create a software switch interface from the CLI using the command config system switch-interface.

The software switch is a bridge group of several interfaces, and the FortiGate CPU maintains the mac-port table for this bridge. As a result of this CPU involvement, traffic processed by a software switch interface is not offloaded to network processors.

Configuring NP accelerated IPsec VPN encryption/decryption offloading

Network processing unit (npu) settings configure offloading behavior for IPsec VPN. Configured behavior applies to all network processors in the FortiGate unit.

config system npu

set enc-offload-antireplay {enable | disable}

set dec-offload-antireplay {enable | disable}

set offload-ipsec-host {enable | disable}

end

 

enc-offload-antireplay {enable | disable}
    Enable or disable offloading of IPsec encryption. This option applies only when replay detection is enabled in the Phase 2 configuration; if replay detection is disabled, encryption is always offloaded. Default: disable.

dec-offload-antireplay {enable | disable}
    Enable or disable offloading of IPsec decryption. This option applies only when replay detection is enabled in the Phase 2 configuration; if replay detection is disabled, decryption is always offloaded. Default: enable.

offload-ipsec-host {enable | disable}
    Enable or disable offloading of IPsec encryption of traffic from the local host (the FortiGate unit). Note: for this option to take effect, the FortiGate unit must have previously sent the security association (SA) to the network processor. Default: disable.

Example

You could configure the offloading of encryption and decryption for an IPsec SA that was sent to the network processor.

config system npu

set enc-offload-antireplay enable

set dec-offload-antireplay enable

set offload-ipsec-host enable

end

Disabling NP acceleration for individual IPsec VPN phase 1s

Use the following command to disable NP offloading for an interface-based IPsec VPN phase 1:

config vpn ipsec phase1-interface

edit phase-1-name

set npu-offload disable

end

Use the following command to disable NP offloading for a policy-based IPsec VPN phase 1:

config vpn ipsec phase1

edit phase-1-name

set npu-offload disable

end

The npu-offload option is enabled by default.

Disabling NP offloading for unsupported IPsec encryption or authentication algorithms

In general, more recent IPsec VPN encryption and authentication algorithms may not be supported by older NP processors. For example, NP4 network processors do not support SHA-256, SHA-384, or SHA-512. IPsec traffic that uses unsupported algorithms is not offloaded and is instead processed by the FortiGate CPU. In addition, this configuration may cause packet loss and other performance issues; if you experience packet loss or performance problems, set the npu-offload option to disable. Future FortiOS versions should prevent selecting algorithms not supported by the hardware.

Disabling NP offloading for firewall policies

Use the following options to disable NP offloading for specific security policies:

For IPv4 security policies:

config firewall policy

edit 1

set auto-asic-offload disable

end

For IPv6 security policies:

config firewall policy6

edit 1

set auto-asic-offload disable

end

For multicast security policies:

config firewall multicast-policy

edit 1

set auto-asic-offload disable

end

Enabling strict protocol header checking disables all hardware acceleration

You can use the following command to cause the FortiGate to apply strict header checking to verify that a packet is part of a session that should be processed. Strict header checking includes verifying the layer-4 protocol header length, the IP header length, the IP version, the IP checksum, IP options, and verifying that ESP packets have the correct sequence number, SPI, and data length. If a packet fails header checking, it is dropped by the FortiGate unit.

config system global

set check-protocol-header strict

end

Enabling strict header checking disables all hardware acceleration. This includes NP, SP, and CP processing.

sFlow and NetFlow and hardware acceleration

NP6 offloading is supported when you configure NetFlow for interfaces connected to NP6 processors.

Configuring sFlow on any interface disables all NP4 and NP6 offloading for all traffic on that interface. As well, configuring NetFlow on any interface disables NP4 offloading for all traffic on that interface.

Checking that traffic is offloaded by NP processors

A number of diagnose commands can be used to verify that traffic is being offloaded.

Using the packet sniffer

Use the packet sniffer to verify that traffic is offloaded. Offloaded traffic is not picked up by the packet sniffer, so if traffic you send through the FortiGate unit does not show up in the sniffer output, you can conclude that it is offloaded.

diag sniffer packet port1 <option>

Checking the firewall session offload tag

Use the diagnose sys session list command to display sessions. If the output for a session includes an npu info field, the session has been offloaded and the field shows details about the offloading. If the output doesn't contain an npu info field, the session has not been offloaded.

diagnose sys session list

session info: proto=6 proto_state=01 duration=34 expire=3565 timeout=3600 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3

origin-shaper=

reply-shaper=

per_ip_shaper=

ha_id=0 policy_dir=0 tunnel=/

state=may_dirty npu

statistic(bytes/packets/allow_err): org=295/3/1 reply=60/1/1 tuples=2

orgin->sink: org pre->post, reply pre->post dev=48->6/6->48 gwy=10.1.100.11/11.11.11.1

hook=pre dir=org act=noop 172.16.200.55:56453->10.1.100.11:80(0.0.0.0:0)

hook=post dir=reply act=noop 10.1.100.11:80->172.16.200.55:56453(0.0.0.0:0)

pos/(before,after) 0/(0,0), 0/(0,0)

misc=0 policy_id=1 id_policy_id=0 auth_info=0 chk_client_info=0 vd=4

serial=0000091c tos=ff/ff ips_view=0 app_list=0 app=0

dd_type=0 dd_mode=0

per_ip_bandwidth meter: addr=172.16.200.55, bps=393

npu_state=00000000

npu info: flag=0x81/0x81, offload=4/4, ips_offload=0/0, epid=1/23, ipid=23/1, vlan=32779/0
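If you need to check offload state across many sessions, the npu info line can be parsed programmatically. This Python sketch works against the field layout shown in the sample output above; the helper name is illustrative:

```python
import re
from typing import Optional

def parse_npu_info(session_text: str) -> Optional[dict]:
    """Return the npu info fields as a dict, or None if the session entry
    has no npu info line (meaning the session was not offloaded)."""
    m = re.search(r"npu info: (.*)", session_text)
    if m is None:
        return None
    fields = {}
    for pair in m.group(1).split(", "):
        key, _, value = pair.partition("=")
        fields[key] = value
    return fields

sample = ("npu_state=00000000\n"
          "npu info: flag=0x81/0x81, offload=4/4, ips_offload=0/0, "
          "epid=1/23, ipid=23/1, vlan=32779/0")
assert parse_npu_info(sample)["offload"] == "4/4"        # values for both directions
assert parse_npu_info("proto=6 proto_state=01") is None  # no npu info: not offloaded
```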

Verifying IPsec VPN traffic offloading

The following commands can be used to verify IPsec VPN traffic offloading to NP processors.

diagnose vpn ipsec status

NP1/NP2/NP4_0/sp_0_0:

null: 0 0

des: 0 0

3des: 4075 4074

aes: 0 0

aria: 0 0

seed: 0 0

null: 0 0

md5: 4075 4074

sha1: 0 0

sha256: 0 0

sha384: 0 0

sha512: 0 0

 

diagnose vpn tunnel list

list all ipsec tunnel in vd 3

------------------------------------------------------

name=p1-vdom1 ver=1 serial=5 11.11.11.1:0->11.11.11.2:0 lgwy=static tun=tunnel mode=auto bound_if=47

proxyid_num=1 child_num=0 refcnt=8 ilast=2 olast=2

stat: rxp=3076 txp=1667 rxb=4299623276 txb=66323

dpd: mode=active on=1 idle=5000ms retry=3 count=0 seqno=20

natt: mode=none draft=0 interval=0 remote_port=0

proxyid=p2-vdom1 proto=0 sa=1 ref=2 auto_negotiate=0 serial=1

src: 0:0.0.0.0/0.0.0.0:0

dst: 0:0.0.0.0/0.0.0.0:0

SA: ref=6 options=0000000e type=00 soft=0 mtu=1436 expire=1736 replaywin=2048 seqno=680

life: type=01 bytes=0/0 timeout=1748/1800

dec: spi=ae01010c esp=3des key=24 18e021bcace225347459189f292fbc2e4677563b07498a07

ah=md5 key=16 b4f44368741632b4e33e5f5b794253d3

enc: spi=ae01010d esp=3des key=24 42c94a8a2f72a44f9a3777f8e6aa3b24160b8af15f54a573

ah=md5 key=16 6214155f76b63a93345dcc9ec02d6415

dec:pkts/bytes=3073/4299621477, enc:pkts/bytes=1667/66375

npu_flag=03 npu_rgwy=11.11.11.2 npu_lgwy=11.11.11.1 npu_selid=4

 

diagnose sys session list

session info: proto=6 proto_state=01 duration=34 expire=3565 timeout=3600 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3

origin-shaper=

reply-shaper=

per_ip_shaper=

ha_id=0 policy_dir=0 tunnel=/p1-vdom2

state=re may_dirty npu

statistic(bytes/packets/allow_err): org=112/2/1 reply=112/2/1 tuples=2

orgin->sink: org pre->post, reply pre->post dev=57->7/7->57 gwy=10.1.100.11/11.11.11.1

hook=pre dir=org act=noop 172.16.200.55:35254->10.1.100.11:80(0.0.0.0:0)

hook=post dir=reply act=noop 10.1.100.11:80->172.16.200.55:35254(0.0.0.0:0)

pos/(before,after) 0/(0,0), 0/(0,0)

misc=0 policy_id=1 id_policy_id=0 auth_info=0 chk_client_info=0 vd=4

serial=00002d29 tos=ff/ff ips_view=0 app_list=0 app=0

dd_type=0 dd_mode=0

per_ip_bandwidth meter: addr=172.16.200.55, bps=260

npu_state=00000000

npu info: flag=0x81/0x82, offload=7/7, ips_offload=0/0, epid=1/3, ipid=3/1, vlan=32779/0

Dedicated Management CPU

The web-based manager and CLI of FortiGate units with NP6 and NP4 processors may become unresponsive when the system is under heavy processing load, because NP6 or NP4 interrupts overload the CPUs and prevent CPU cycles from being used for management tasks. You can resolve this issue by using the following command to dedicate CPU core 0 to management tasks.

config system npu

set dedicated-management-cpu {enable | disable}

end

 

All management tasks are then processed by CPU 0, and NP6 or NP4 interrupts are handled by the remaining CPU cores.

Offloading flow-based content inspection with NTurbo and IPSA

You can use the following command to configure NTurbo and IPSA offloading and acceleration of firewall sessions that have flow-based security profiles. This includes firewall sessions with IPS, application control, CASI, flow-based antivirus, and flow-based web filtering.

config ips global

set np-accel-mode {none | basic}

set cp-accel-mode {none | basic | advanced}

end

 

NTurbo offloads firewall sessions with flow-based security profiles to NPx processors

NTurbo offloads firewall sessions that include flow-based security profiles to NP4 or NP6 network processors. Without NTurbo, or with NTurbo disabled, all firewall sessions that include flow-based security profiles are processed by the FortiGate CPU.

Note: NTurbo can only offload firewall sessions containing flow-based security profiles if the session could otherwise have been offloaded except for the presence of the flow-based security profiles. If something else prevents the session from being offloaded, NTurbo will not offload that session.

Note: Firewall sessions that include proxy-based security profiles are never offloaded to network processors and are always processed by the FortiGate CPU.

NTurbo creates a special data path to redirect traffic from the ingress interface to IPS, and from IPS to the egress interface. NTurbo allows firewall operations to be offloaded along this path, and still allows IPS to behave as a stage in the processing pipeline, reducing the workload on the FortiGate CPU and improving overall throughput.

Note: NTurbo sessions still offload pattern matching and other processes to CP processors, just like normal flow-based sessions.

If NTurbo is supported by your FortiGate unit, you can use the following command to configure it:

config ips global

set np-accel-mode {basic | none}

end

basic enables NTurbo and is the default setting for FortiGate models that support NTurbo. none disables NTurbo. If the np-accel-mode option is not available, your FortiGate does not support NTurbo.

There are some special cases where sessions may not be offloaded by NTurbo, even when NTurbo is explicitly enabled. In these cases the sessions are handled by the FortiGate CPU.

  • NP acceleration is disabled. For example, auto-asic-offload is disabled in the firewall policy configuration.
  • The firewall policy includes proxy-based security profiles.
  • The session requires a FortiOS session helper. For example, FTP sessions cannot be offloaded to NP processors because FTP sessions use the FTP session helper.
  • Interface policies or DoS policies have been added to the ingress or egress interface.
  • Tunneling is enabled. Any traffic to or from a tunneled interface (IPsec, IP-in-IP, SSL VPN, GRE, CAPWAP, and so on) cannot be offloaded by NTurbo.

IPSA offloads flow-based enhanced pattern matching to CPx processors

IPSA offloads enhanced pattern matching operations required for flow-based content processing to CP8 and CP9 Content Processors. IPSA offloads enhanced pattern matching for NTurbo firewall sessions and firewall sessions that are not offloaded to NP processors. When IPSA is turned on, flow-based pattern databases are compiled and downloaded to the content processors from the IPS engine and IPS database. Flow-based pattern matching requests are redirected to the CP hardware reducing the load on the FortiGate CPU and accelerating pattern matching.

If IPSA is supported on your FortiGate unit, you can use the following command to configure it:

config ips global

set cp-accel-mode {advanced | basic | none}

end

basic offloads basic pattern matching. advanced offloads more types of pattern matching resulting in higher throughput than basic mode. advanced is only available on FortiGate models with two or more CP8s or one or more CP9s. If the cp-accel-mode option is not available, then your FortiGate does not support IPSA.

On FortiGates with one CP8, the default cp-accel-mode is basic. Setting the mode to advanced does not change the types of pattern matching that are offloaded.

On FortiGates with two or more CP8s or one or more CP9s the default cp-accel-mode is advanced. You can set the mode to basic to offload fewer types of pattern matching.
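The enhanced pattern matching that IPSA offloads is, conceptually, multi-pattern string matching: many signatures compiled once into an automaton, then every byte of traffic scanned in a single pass. The following Python sketch uses the classic Aho-Corasick construction as an illustration; it is not the CP's actual algorithm or signature format:

```python
from collections import deque

def build(patterns):
    """Compile patterns into an Aho-Corasick automaton (goto/fail/output)."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                      # 1. build the trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto[s][ch] = len(goto)
                goto.append({}); fail.append(0); out.append(set())
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())
    while q:                                # 2. BFS to compute failure links
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            cand = goto[f].get(ch, 0)
            fail[t] = cand if cand != t else 0
            out[t] |= out[fail[t]]          # inherit matches ending here
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan text once, reporting (offset, pattern) for every signature hit."""
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits

g, f, o = build([b"he", b"she", b"his", b"hers"])
assert set(search(b"ushers", g, f, o)) == {(1, b"she"), (2, b"he"), (2, b"hers")}
```

The compile-once, scan-once structure is why downloading compiled pattern databases to the CP (rather than matching each signature separately on the CPU) pays off at high traffic rates.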

Preventing packet ordering problems with NP4, NP6 and NP6lite FortiGates under heavy load

In some cases, when FortiGate units with NP4, NP6, or NP6lite processors are under heavy load, the packets used in the TCP 3-way handshake of some sessions may be transmitted by the FortiGate in the wrong order, causing those TCP sessions to fail.

If you notice TCP sessions failing when a FortiGate with NP4, NP6, or NP6lite processors is very busy, you can enable delay-tcp-npu-session in the firewall policy receiving the traffic. This option resolves the problem by delaying the session to make sure there is time for all of the handshake packets to reach the destination before the session begins transmitting data.

config firewall policy

set delay-tcp-npu-session enable

end