Load balancing ensures high system availability by distributing the workload across multiple components. Replacing a single component with several load-balanced ones increases reliability through redundancy. The platform uses NGINX for two types of load balancing: TCP and HTTP.
Platform clients can use TCP balancing to distribute requests among databases, mail servers, and other distributed applications with network support. TCP balancing can also be used instead of HTTP balancing when speed matters: it is faster because the balancer forwards raw connections without parsing and processing each HTTP request.
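As an illustration of this difference, in modern NGINX terms the two balancing types correspond to the `http` and `stream` modules (the addresses and ports below are placeholders, not the configuration the platform generates for you):

```nginx
# HTTP balancing: NGINX parses each request before proxying it.
http {
    upstream app_http {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_http;
        }
    }
}

# TCP balancing: raw connections are forwarded without request
# parsing, which is what makes this mode faster.
stream {
    upstream app_tcp {
        server 10.0.0.1:8743;
        server 10.0.0.2:8743;
    }
    server {
        listen 8743;
        proxy_pass app_tcp;
    }
}
```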
The TCP load balancing component receives a connection request from a client application through a network socket and decides which node in the environment receives the request. To distribute requests, the platform uses the round-robin algorithm.
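Round-robin selection itself is simple: nodes are chosen in a fixed cyclic order, so each one receives roughly the same number of new connections. A minimal sketch in Python (the node names are placeholders):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Picks backend nodes in a fixed cyclic order."""

    def __init__(self, nodes):
        # cycle() repeats the node list endlessly in order.
        self._nodes = cycle(nodes)

    def next_node(self):
        # Each new connection gets the next node in the cycle.
        return next(self._nodes)

balancer = RoundRobinBalancer(["node1:8743", "node2:8743", "node3:8743"])
for _ in range(4):
    print(balancer.next_node())
# After node3, selection wraps around to node1 again.
```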
When the connection is established, subsequent requests from the client application continue to go through the same connection to the chosen node. The application cannot determine which instance was selected.
An existing connection is lost only if a problem occurs, such as a temporary network failure. The next time a request is received, a new connection is created, and it can be routed to any instance in the environment.
To set up TCP balancing in your environment, follow these steps:
1. Create an environment with two or more application servers (for example, Tomcat). NGINX will be added automatically. Note that you need to switch on the Public IP for the NGINX node.

2. Click Config for NGINX in your environment.

3. In the opened tab, navigate to tcpmaps > mappings.xml and specify the frontend and backend ports. Save the changes.
The frontend port is the one a user connects to.
The backend port is the one the balancer forwards the request to.
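Conceptually, such a frontend/backend mapping corresponds to an NGINX proxy rule like the following (shown here in `stream`-module syntax purely for illustration; the ports and node addresses are examples, not your environment's actual values):

```nginx
stream {
    upstream backend_nodes {
        # Backend port 8080: where the balancer forwards each connection.
        # Connections are distributed across the nodes round-robin.
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }
    server {
        # Frontend port 8743: the port users connect to.
        listen 8743;
        proxy_pass backend_nodes;
    }
}
```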

4. Restart the NGINX node.
That’s all. Your environment now uses TCP balancing for your application servers.