Load balancing among local servers
When your servers are all located in the same place, or close to it, on a single network, both the physical distance and the number of routing hops from the client to each of your servers are approximately the same. Instead of physical distance, the most important factors will be:
the number of simultaneous connections and/or requests that your servers can handle, and
how well that load is distributed among them so that client wait time is minimized (a selection sketch follows this list)
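To make the connection-count criterion concrete, the following minimal sketch (an illustration, not the product's implementation) shows a least connections style of selection: the server currently handling the fewest active connections receives the next request. The Server class, addresses, and counters are assumptions for the example.

```python
# Illustrative sketch of least-connections selection: with local servers,
# distance is roughly equal, so the balancer favors the server currently
# handling the fewest simultaneous connections.
from dataclasses import dataclass

@dataclass
class Server:
    address: str
    active_connections: int = 0

def pick_server(pool: list[Server]) -> Server:
    # Choose the least-loaded server so client wait time stays low.
    chosen = min(pool, key=lambda s: s.active_connections)
    chosen.active_connections += 1
    return chosen

pool = [Server("10.0.0.11"), Server("10.0.0.12"), Server("10.0.0.13")]
for _ in range(5):
    print(pick_server(pool).address)
```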
To configure load balancing among servers
1. If the traffic uses HTTPS, upload certificates for the web sites, CRLs, OCSP certificates, and CA certificates. Also configure certificate validation rules. For details, see “Secure connections (SSL/TLS)”.
2. Define your back-end servers’ health checks (“Monitoring your servers’ responsiveness”); see the health check sketch after this procedure.
3. Define the IP addresses of servers that will receive load balanced traffic (“Defining your pool of back-end servers”).
4. If you want to distribute packets to your server pool based upon application layer (Layer 7) headers or network layer (Layer 3) information, configure content routing rules (“Routing based on the application layer”); see the routing sketch after this procedure.
5. If you want to change packets before forwarding, configure content rewriting (“Rewriting application layer headers”).
6. Configure:
load distribution methods (“Routing based on current load”)
session persistence methods (“Specifying server-side session persistence”)
session timeouts (“Specifying client-side sessions”)
if applicable, selected certificates and certificate verification rules (“Configuring offloading of client-side SSL/TLS sessions”)
7. Configure a virtual server that applies the combination of certificates, pool, health checks, persistence, session timeouts, and load balancing method that you want to use when redistributing sessions destined for its IP address to the pool (“Distributing new sessions among your servers”); a sketch combining distribution, persistence, and timeouts follows this procedure.
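The health checks in step 2 determine which servers are eligible to receive load balanced traffic. The following minimal sketch assumes a hypothetical /status page on each back-end server; it probes over HTTP and filters out unresponsive servers. The addresses, path, and timeout are illustrative only, not the product’s actual monitor configuration.

```python
# Illustrative health check sketch: probe each back-end server's
# hypothetical /status page and keep only the servers that respond.
import urllib.request
import urllib.error

def is_healthy(address: str, timeout: float = 2.0) -> bool:
    # A server is considered healthy if it returns HTTP 200 in time.
    try:
        with urllib.request.urlopen(f"http://{address}/status", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
healthy = [s for s in servers if is_healthy(s)]
print("servers eligible for load balancing:", healthy)
```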
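For step 4, content routing inspects application layer information before choosing a destination. This sketch assumes hypothetical pool names and match rules; it routes on the HTTP Host header and request path, which are common Layer 7 criteria.

```python
# Illustrative Layer 7 content routing sketch: inspect the Host header
# and request path, then pick a back-end pool by name.
def select_pool(host: str, path: str) -> str:
    if host.startswith("api."):
        return "api-pool"       # API traffic goes to its own pool
    if path.startswith("/images/"):
        return "static-pool"    # static content goes to a static pool
    return "default-pool"

print(select_pool("api.example.com", "/v1/users"))      # api-pool
print(select_pool("www.example.com", "/images/a.png"))  # static-pool
```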
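Steps 6 and 7 combine a load distribution method, session persistence, and a session timeout under one virtual server. The sketch below is an assumption for illustration, not the appliance’s implementation: it distributes new sessions round robin and applies source IP persistence with a timeout so that returning clients reach the same server.

```python
# Illustrative virtual server sketch: round-robin distribution for new
# sessions plus source-IP session persistence with a timeout.
import itertools
import time

class VirtualServer:
    def __init__(self, pool: list[str], persistence_timeout: float = 300.0):
        self.rr = itertools.cycle(pool)                    # round-robin distribution
        self.sessions: dict[str, tuple[str, float]] = {}   # client IP -> (server, expiry)
        self.persistence_timeout = persistence_timeout

    def route(self, client_ip: str) -> str:
        now = time.monotonic()
        entry = self.sessions.get(client_ip)
        if entry and entry[1] > now:
            server = entry[0]        # existing session: persist to the same server
        else:
            server = next(self.rr)   # new or expired session: distribute round robin
        self.sessions[client_ip] = (server, now + self.persistence_timeout)
        return server

vs = VirtualServer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print(vs.route("203.0.113.5"))   # new session
print(vs.route("203.0.113.5"))   # same server again (persistence)
print(vs.route("198.51.100.7"))  # next server (round robin)
```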
See also
Enabling traffic & event logs
Reports