Load Balancer optimizes resource usage, reduces latency, and prevents individual infrastructure components from becoming overloaded.
Load Balancer Functions
When scaling horizontally, the service determines how much traffic a new server in the cluster should receive. If there are two servers in the cluster, each of them receives an equal share of the traffic. Load Balancer provides flexibility when adding or removing machines, optimizes the use of IT infrastructure resources, and speeds up request handling.
Load Balancer sends requests only to servers that are online. If a server goes down, the service redistributes its tasks among the remaining IT infrastructure elements. Load Balancer thus eliminates a single point of failure, protecting the system and keeping application availability high.
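A minimal sketch of this behavior, in Python, is shown below. It is not Load Balancer's actual implementation: the server addresses and the `healthy` table are hypothetical, and a real balancer would update health status from live probes rather than a static dictionary.

```python
from itertools import cycle

# Hypothetical backend addresses, used purely for illustration.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# Assume 10.0.0.2 has failed its health check.
healthy = {"10.0.0.1": True, "10.0.0.2": False, "10.0.0.3": True}

rotation = cycle(servers)

def pick_server():
    """Return the next healthy server, skipping any that are offline."""
    for _ in range(len(servers)):
        candidate = next(rotation)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy servers available")

print(pick_server())  # 10.0.0.1
print(pick_server())  # 10.0.0.2 is skipped, so 10.0.0.3
```

Because the failed server is simply skipped, traffic keeps flowing to the remaining machines and no single server becomes a point of failure.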
Functionally, Load Balancer is similar to a reverse proxy, which also acts as an intermediary between the server and the client.
Load Balancer Algorithms
Load Balancer uses the following algorithms (a sketch of the selection logic follows the list):
- Round Robin — sequential distribution of requests across a group of servers;
- Least Connections — a new request goes to the server with the fewest active client connections. The relative computing power of each server is taken into account when comparing connection counts;
- Sticky Sessions — a session persistence mechanism that complements Round Robin and Least Connections. The server that handled a user's request is pinned to that user's session. If the pinned server becomes unavailable, the session continues on another one;
- Hash — distribution of requests based on a given key, for example, IP or URL;
- IP Hash — the client’s IP address is used to determine which server receives the request.
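To make the differences between these strategies concrete, here is a minimal, illustrative sketch in Python. The server names, the connection counts, and the helper functions are hypothetical and stand in for state that a real load balancer tracks from live connection tables.

```python
import hashlib
from itertools import cycle

# Hypothetical backends and per-server connection counts, for illustration only.
servers = ["srv-a", "srv-b", "srv-c"]
active_connections = {"srv-a": 12, "srv-b": 3, "srv-c": 7}
sticky_table = {}  # session id -> pinned server

round_robin = cycle(servers)

def pick_round_robin():
    """Round Robin: hand out requests to servers in a fixed rotation."""
    return next(round_robin)

def pick_least_connections():
    """Least Connections: choose the server with the fewest active connections."""
    return min(servers, key=lambda s: active_connections[s])

def pick_hash(key: str):
    """Hash: a stable key (e.g. a URL) always maps to the same server."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

def pick_ip_hash(client_ip: str):
    """IP Hash: the client's IP address determines the target server."""
    return pick_hash(client_ip)

def pick_sticky(session_id: str):
    """Sticky Sessions: pin a session to the server that first handled it;
    fall back to another server if the pinned one is no longer available."""
    server = sticky_table.get(session_id)
    if server is None or server not in servers:
        server = pick_round_robin()
        sticky_table[session_id] = server
    return server

print(pick_least_connections())     # srv-b (only 3 active connections)
print(pick_ip_hash("203.0.113.7"))  # the same server for every request from this IP
```

The hash-based strategies trade even distribution for consistency: requests with the same key always land on the same server, which is useful when that server holds cached data or session state for the client.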