What is Server Load Balancer and How It Works

Many people may wonder what Server Load Balancer is, how it works, and what the benefits of using it are.

Server Load Balancer Definition

Server Load Balancer, or Server Load Balancing (SLB), is a service that distributes the traffic of high-traffic sites among several servers. Server Load Balancer intercepts traffic destined for a website and reroutes that traffic to backend servers.

Server Load Balancer distributes inbound network traffic across multiple Elastic Compute Service (ECS) instances that act as backend servers based on forwarding rules. You can use Server Load Balancer to improve the responsiveness and availability of your applications.

What is Server Load Balancer?


The Overview of Server Load Balancer

After you add ECS instances that reside in the same region to a Server Load Balancer instance, Server Load Balancer uses virtual IP addresses (VIPs) to virtualize these ECS instances into backend servers in a high-performance, high-availability server pool. Client requests are distributed to the ECS instances based on forwarding rules.
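
As a rough sketch of this behaviour (a toy model, not the actual SLB implementation), a pool of backend addresses sits behind one virtual service address, and a simple round-robin forwarding rule hands each incoming request to the next server in turn. The addresses below are placeholders.

    import itertools

    # Placeholder backend pool fronted by one virtual IP address (VIP).
    BACKEND_POOL = ["192.168.0.10:80", "192.168.0.11:80", "192.168.0.12:80"]

    # A simple round-robin forwarding rule: each request goes to the next backend.
    _next_backend = itertools.cycle(BACKEND_POOL)

    def forward(request_id: str) -> str:
        """Choose a backend for one incoming request and return its address."""
        backend = next(_next_backend)
        print(f"{request_id} -> {backend}")
        return backend

    for i in range(5):
        forward(f"req-{i}")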

Server Load Balancer checks the health status of the ECS instances and automatically removes unhealthy ones from the server pool to eliminate single points of failure (SPOFs). This enhances the resilience of your applications. You can also use Server Load Balancer to defend your applications against distributed denial-of-service (DDoS) attacks.
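
The health-check idea can be sketched along these lines, assuming a plain HTTP probe against each backend (the probe path and addresses are placeholders, not the actual SLB mechanism); backends that fail the probe are dropped from the pool so new requests never reach them.

    import urllib.request

    def is_healthy(address: str, path: str = "/health", timeout: float = 2.0) -> bool:
        """Probe one backend; treat any connection error or non-2xx status as unhealthy."""
        try:
            with urllib.request.urlopen(f"http://{address}{path}", timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except OSError:
            return False

    def prune_unhealthy(pool: list) -> list:
        """Keep only the backends that pass the health probe."""
        return [addr for addr in pool if is_healthy(addr)]

    # The pool shrinks automatically when a server stops answering.
    pool = prune_unhealthy(["192.168.0.10:80", "192.168.0.11:80"])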

The Components of Server Load Balancer

Server Load Balancer consists of three components:

  • Server Load Balancer instances
    A Server Load Balancer instance is the core load-balancing component of SLB. It receives inbound traffic and distributes it to backend servers. To get started with SLB, you must create an SLB instance and add at least one listener and two ECS instances to it.
  • Listeners
    A listener checks for connection requests from clients, forwards requests to backend servers, and performs health checks on backend servers.
  • Backend servers
    ECS instances are used as backend servers in Server Load Balancer to receive and process the distributed requests. ECS instances can be added to the default server group of a Server Load Balancer instance. You can also add multiple ECS instances to VServer groups or primary/secondary server groups after the respective groups are created.

[Figure: Server Load Balancer definition]

The Benefits of Server Load Balancer

  • High scalability
    You can increase or decrease the number of backend servers to adjust the load balancing capacity for your applications.
  • Low costs
    Server Load Balancer can save 60% of load balancing costs compared with traditional hardware solutions.
  • High security
    You can use Server Load Balancer with Alibaba Cloud Security to defend your applications against 5 Gbit/s distributed denial-of-service (DDoS) attacks.
  • High concurrency
    A Server Load Balancer cluster supports hundreds of millions of concurrent connections, and a single Server Load Balancer instance supports tens of millions of concurrent connections.

How Does Server Load Balancer Work?

High availability of the Server Load Balancer architecture

Server Load Balancer instances are deployed in clusters to synchronize sessions and protect backend servers from SPOFs, improving redundancy and ensuring service stability. Layer-4 Server Load Balancer uses the open-source Linux Virtual Server (LVS) and Keepalived software to balance loads, whereas Layer-7 SLB uses Tengine. Tengine, a web server project launched by Taobao, is based on NGINX and adds advanced features dedicated to high-traffic websites.

Requests from the Internet reach an LVS cluster along Equal-Cost Multi-Path (ECMP) routes. In the LVS cluster, each machine uses multicast packets to synchronize sessions with the other machines. At the same time, the LVS cluster performs health checks on the Tengine cluster and removes unhealthy machines from the Tengine cluster to ensure the availability of the Layer-7 Server Load Balancer.

Server Load Balancer best practice:
You can use session synchronization to prevent persistent connections from being affected by server failures within a cluster. However, for short-lived connections, or if session synchronization is not triggered for a connection (the three-way handshake is not completed), server failures in the cluster may still affect user requests. To prevent session interruptions caused by server failures within the cluster, you can add a retry mechanism to the service logic to reduce the impact on user access, as sketched below.
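
A minimal retry wrapper along the lines of that recommendation might look like the following sketch, which uses only the Python standard library; the URL, attempt count, and backoff are placeholders to tune for your own service.

    import time
    import urllib.error
    import urllib.request

    def get_with_retry(url: str, attempts: int = 3, backoff: float = 0.5) -> bytes:
        """Retry a request a few times so that a single dropped connection
        (for example, during a failover inside the SLB cluster) does not
        surface as an error to the caller."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError):
                if attempt == attempts:
                    raise
                time.sleep(backoff * attempt)  # simple linear backoff between attempts

    # body = get_with_retry("http://example-slb-address.example.com/api/ping")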

The high-availability solution with one Server Load Balancer instance

To provide more stable and reliable load balancing services, you can deploy Server Load Balancer instances across multiple zones in most regions to achieve cross-data-center disaster recovery. Specifically, you can deploy a Server Load Balancer instance in two zones within the same region, where one zone acts as the primary zone and the other acts as the secondary zone. If the primary zone suffers an outage, failover is triggered to redirect requests to the servers in the secondary zone within approximately 30 seconds. After the primary zone is restored, traffic is automatically switched back to the servers in the primary zone.

Server Load Balancer best practice:
We recommend that you create Server Load Balancer instances in regions that support primary/secondary deployment for zone-disaster recovery.
You can choose the primary zone for your Server Load Balancer instance based on the distribution of ECS instances. That is, select the zone where most of the ECS instances are located as the primary zone to minimize latency.
However, we recommend that you do not deploy all ECS instances in the primary zone. When you develop a failover solution, you must deploy several ECS instances in the secondary zone to ensure that requests can still be distributed to backend servers in the secondary zone for processing when the primary zone experiences an outage.

[Figure: High availability with one Server Load Balancer instance]

The high-availability solution with multiple Server Load Balancer instances

With a single Server Load Balancer instance, traffic distribution for your applications can still be compromised by network attacks or invalid Server Load Balancer configurations, because these do not trigger failover between the primary zone and the secondary zone. As a result, load-balancing performance is impacted. To avoid this situation, you can create multiple Server Load Balancer instances to form a global load-balancing solution and achieve cross-region backup and disaster recovery. You can also use these instances together with DNS to schedule requests and ensure service continuity.

Server Load Balancer best practice:
You can deploy Server Load Balancer instances and ECS instances in multiple zones within the same region or across different regions, and then use DNS to schedule requests, as sketched below.
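
A client-side sketch of that idea, assuming two made-up regional SLB endpoints: trying regions in order and skipping an unreachable one has the same effect as temporarily removing the failed region from DNS resolution, which a managed DNS scheduling policy would normally handle for you.

    import urllib.request

    # Hypothetical per-region SLB endpoints for the same application.
    # In practice, a DNS record with a scheduling policy (weighted,
    # latency based, or geo based) returns one of these for the domain.
    REGION_ENDPOINTS = [
        "http://slb-region-a.example.com",
        "http://slb-region-b.example.com",
    ]

    def fetch(path: str, timeout: float = 3.0) -> bytes:
        """Try each region in order and return the first successful response."""
        last_error = None
        for endpoint in REGION_ENDPOINTS:
            try:
                with urllib.request.urlopen(endpoint + path, timeout=timeout) as resp:
                    return resp.read()
            except OSError as exc:
                last_error = exc  # region unreachable; fall through to the next one
        raise RuntimeError("no region is reachable") from last_error

    # body = fetch("/api/health")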

[Figure: High availability with multiple Server Load Balancer instances]

The Architecture of Server Load Balancer

Server Load Balancer instances are deployed in clusters to synchronize sessions and protect backend servers from SPOFs, improving redundancy and ensuring service stability. Server Load Balancer supports Layer-4 load balancing of Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic and Layer-7 load balancing of HTTP and HTTPS traffic.

Server Load Balancer forwards client requests to backend servers by using Server Load Balancer clusters and receives responses from backend servers over internal networks.

Server Load Balancer Design

Alibaba Cloud provides Layer-4 (TCP and UDP) and Layer-7 (HTTP and HTTPS) load balancing.

  • Layer-4 Server Load Balancer combines the open-source Linux Virtual Server (LVS) with Keepalived to balance loads and implements customized optimizations to meet cloud computing requirements.
  • Layer-7 Server Load Balancer uses Tengine to balance loads. Tengine is a web server project launched by Taobao. Based on NGINX, Tengine has a wide range of advanced features optimized for high-traffic websites. The sketch after this list illustrates the difference between the two layers.
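
As a rough illustration of that difference (a toy model, not how LVS or Tengine are implemented), a Layer-4 balancer picks a backend from connection metadata alone, while a Layer-7 balancer terminates HTTP and can route on the Host header or URL path. All addresses and hostnames below are placeholders.

    # Toy model of the two layers; addresses and hostnames are placeholders.
    TCP_BACKENDS = ["10.0.0.10:3306", "10.0.0.11:3306"]
    HTTP_BACKENDS = {
        "static.example.com": ["10.0.1.10:80"],
        "api.example.com": ["10.0.1.20:80", "10.0.1.21:80"],
    }

    def layer4_pick(client_ip: str, client_port: int) -> str:
        """Layer 4: choose a backend from connection metadata only,
        without ever inspecting the application payload."""
        return TCP_BACKENDS[hash((client_ip, client_port)) % len(TCP_BACKENDS)]

    def layer7_pick(host_header: str, path: str) -> str:
        """Layer 7: terminate HTTP first, then route on the Host header
        (and optionally the path) before choosing a backend."""
        pool = HTTP_BACKENDS.get(host_header, HTTP_BACKENDS["api.example.com"])
        return pool[hash(path) % len(pool)]

    print(layer4_pick("203.0.113.7", 51234))
    print(layer7_pick("api.example.com", "/v1/orders"))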

[Figure: Server Load Balancer design]

Layer-4 Server Load Balancer runs in a cluster of LVS machines for higher availability, stability, and scalability of load balancing in abnormal cases.

[Figure: LVS cluster deployment]

In an LVS cluster, each machine synchronizes sessions with the other machines via multicast packets. As shown in the figure below, Session A is established on LVS1 and is synchronized to the other LVS machines after the client transfers three data packets to the server. Solid lines indicate the current active connections, while dotted lines indicate that the session requests will be sent to other normally working machines if LVS1 fails or is under maintenance. In this way, you can perform hot updates, machine maintenance, and cluster maintenance without affecting business applications.

[Figure: Session synchronization in an LVS cluster]

The Scenarios of Server Load Balancer

Server Load Balancer (SLB) can be used to improve the availability and reliability of applications with high access traffic.

Balance loads of your applications

You can configure listening rules to distribute heavy traffic among the ECS instances that are attached as backend servers to SLB instances. You can also use the session persistence feature to forward all of the requests from the same client to the same backend ECS instance to enhance access efficiency, as illustrated below.
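
The effect of session persistence can be sketched as a hash of a client identifier onto the backend pool, so repeated requests from the same client always land on the same server. The addresses are placeholders, and real SLB session persistence is configured on the listener rather than implemented like this.

    import hashlib

    # Placeholder backend pool.
    BACKENDS = ["192.168.0.10:80", "192.168.0.11:80", "192.168.0.12:80"]

    def sticky_backend(client_id: str) -> str:
        """Map a client identifier (for example, a source IP address or a
        session cookie value) to a fixed backend by hashing it."""
        digest = hashlib.sha256(client_id.encode()).digest()
        return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

    # The same client always maps to the same backend.
    assert sticky_backend("203.0.113.7") == sticky_backend("203.0.113.7")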

Scale your applications

You can extend the service capability of your applications by adding or removing backend ECS instances to suit your business needs. Server Load Balancer can be used for both web servers and application servers.

Eliminate single points of failure (SPOFs)

You can attach multiple ECS instances to a Server Load Balancer instance. When an ECS instance malfunctions, Server Load Balancer automatically isolates it and distributes incoming requests to the other healthy ECS instances, ensuring that your applications continue to run properly.

Implement zone-disaster recovery (multi-zone disaster recovery)

To provide more stable and reliable load balancing services, Alibaba Cloud allows you to deploy Server Load Balancer instances across multiple zones in most regions for disaster recovery. Specifically, you can deploy a Server Load Balancer instance in two zones within the same region. One zone is the primary zone, while the other zone is the secondary zone. If the primary zone fails or becomes unavailable, the Server Load Balancer instance fails over to the secondary zone in about 30 seconds. When the primary zone recovers, the Server Load Balancer instance automatically switches back to the primary zone.

We recommend that you create a Server Load Balancer instance in a region that has multiple zones for zone-disaster recovery. We recommend that you plan the deployment of backend servers based on your business needs. In addition, we recommend that you add at least one backend server in each zone to achieve the highest load balancing efficiency.

As shown in the following figure, ECS instances in different zones are attached to a single Server Load Balancer instance. In normal cases, the Server Load Balancer instance distributes inbound traffic to ECS instances in both the primary zone (Zone A) and the secondary zone (Zone B). If Zone A fails, the Server Load Balancer instance distributes inbound traffic only to Zone B. This deployment mode helps avoid service interruptions caused by zone-level failures and reduces latency.

[Figure: Backend servers deployed in both zones]

Assume that you deploy all ECS instances in the primary zone (Zone A) and no ECS instances in the secondary zone (Zone B), as shown in the following figure. If Zone A fails, your services will be interrupted because no ECS instances are available in Zone B. This deployment mode achieves low latency at the cost of high availability.

[Figure: Backend servers deployed in the primary zone only]

Geo-disaster recovery

You can deploy Server Load Balancer instances in different regions and attach ECS instances of different zones within the same region to a Server Load Balancer instance. You can use DNS to resolve domain names to the service addresses of Server Load Balancer instances in different regions for global load balancing. When a region becomes unavailable, you can temporarily stop DNS resolution to that region without affecting user access.

[Figure: Geo-disaster recovery with DNS]

Related Service:

Server Load Balancer

Alibaba Cloud Server Load Balancer (SLB) distributes traffic among multiple instances to improve the service capabilities of your applications. You can use SLB to prevent single points of failure (SPOFs) and improve the availability and fault tolerance of your applications.

Related Blog:

Quick Guide to Load Balancing on Alibaba Cloud

This article shares my experience of what I have learned about Alibaba Cloud Server Load Balancer (SLB) from Alibaba Cloud Academy. Although this is not a new topic to me, I took the ACA course to get my fundamentals right, as I believe that is very important. I have also learned a lot of new things, especially topics related to Alibaba Cloud-specific technologies. I gained a deeper understanding of how Alibaba Cloud can help my organization increase its profit by reducing operating costs.

Solving Server Reliability Issues with Server Load Balancer

A Server Load Balancer is a hardware or virtual software appliance that distributes the application workload across an array of servers, ensuring application availability and elastic scale-out of server resources, and supporting health management of backend servers and application systems.

How Can a Server Load Balancer Help Your Website or Application?

Traditionally, we need a web server to provide and deliver services to our customers. Usually, we imagine having a very powerful web server that can do anything we want, such as providing any service and serving as many customers as possible.

However, with only one web server, there are two major concerns. The first is that a single server always has limited capacity. If your business is booming and lots of new users are visiting your website, one day your website will reach its capacity limit and deliver a very unsatisfying experience to your users.

Besides, if you only have one web server, a single point of failure may occur. For example, a power outage or network connection issues may affect your server. If your single server is down, your customers will be totally out of service, and you cannot provide your service anymore. This is the problem you may suffer when you have just one web server, even if it is very powerful.



Source: https://www.alibabacloud.com/blog/what-is-server-load-balancer-and-how-it-works_597089
