Azure Load Balancer includes a few key components. These components can be configured for your subscription with:
- Resource Manager templates
Frontend IP configuration

The frontend IP configuration is the IP address of your Load Balancer. It's the point of contact for clients. These IP addresses can be either:
- Public IP address
- Private IP address

The type of IP address determines the type of load balancer created. Selecting a private IP address creates an internal load balancer. Selecting a public IP address creates a public load balancer.
| | Public load balancer | Internal load balancer |
|---|---|---|
| Frontend IP configuration | Public IP address | Private IP address |
| Description | A public load balancer maps the public IP address and port of incoming traffic to the private IP address and port of the VM, and maps traffic the other way around for the response traffic from the VM. You can distribute specific types of traffic across multiple VMs or services by applying load-balancing rules. For example, you can spread the load of web request traffic across multiple web servers. | An internal load balancer distributes traffic to resources that are inside a virtual network. Access to the frontend IP addresses of the load balancer is restricted to within the virtual network. The frontend IP addresses and virtual networks are never directly exposed to an internet endpoint. This suits internal line-of-business applications that run in the virtual network and are accessed from within it or from on-premises resources. |
| SKUs supported | Basic, Standard | Basic, Standard |
Backend pool

The backend pool is a group of virtual machines, or instances in a virtual machine scale set, that serves incoming requests. To scale cost-effectively and meet high volumes of incoming traffic, computing guidelines generally recommend adding more instances to the backend pool.

Load Balancer reconfigures itself instantly via automatic reconfiguration when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without additional operations. The scope of the backend pool is any virtual machine in a single virtual network.

When you consider how to design your backend pool, design for the least number of individual backend pool resources to reduce the length of management operations. There's no difference in data plane performance or scale.
Health probes

A health probe is used to determine the health status of the instances in the backend pool. When you create a load balancer, configure a health probe for the load balancer to use. This health probe determines whether an instance is healthy and can receive traffic.

You can define the unhealthy threshold for your health probes. When a probe fails to respond, the load balancer stops sending new connections to the unhealthy instances. A probe failure doesn't affect existing connections. The connection continues until:
- The application ends the flow
- An idle timeout occurs
- The VM shuts down

Load Balancer provides different health probe types for endpoints: TCP, HTTP, and HTTPS. Learn more about Load Balancer health probes.

Basic Load Balancer doesn't support HTTPS probes. Basic Load Balancer closes all TCP connections (including established connections).
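The unhealthy-threshold behavior described above can be sketched in a few lines. This is a minimal illustration of the concept, not Azure's implementation; the class and method names are hypothetical.

```python
class HealthProbe:
    """Sketch of a probe with an unhealthy threshold (hypothetical names)."""

    def __init__(self, unhealthy_threshold: int = 2):
        self.unhealthy_threshold = unhealthy_threshold
        self.consecutive_failures = 0

    def record(self, probe_succeeded: bool) -> None:
        # A successful probe response resets the failure count.
        if probe_succeeded:
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1

    @property
    def healthy(self) -> bool:
        # New connections go only to healthy instances; existing
        # connections are not torn down by a probe failure.
        return self.consecutive_failures < self.unhealthy_threshold


probe = HealthProbe(unhealthy_threshold=2)
probe.record(False)
print(probe.healthy)  # True: one failure is still below the threshold
probe.record(False)
print(probe.healthy)  # False: threshold reached, stop sending new connections
```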
Load-balancing rules

A load-balancing rule defines how incoming traffic is distributed to all the instances within the backend pool. A load-balancing rule maps a given frontend IP configuration and port to multiple backend IP addresses and ports.

For example, use a load-balancing rule for port 80 to route traffic from your frontend IP to port 80 of your backend instances.
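The port-80 example can be modeled as a simple mapping. The IP addresses below are hypothetical, and the round-robin iterator is only for illustration; Azure Load Balancer actually distributes traffic per flow with a hash-based algorithm, as described later in this article.

```python
from itertools import cycle

# Illustrative model of a load-balancing rule (hypothetical addresses):
# one frontend (IP, port) mapped to a pool of backend (IP, port) pairs.
rule = {
    "frontend": ("203.0.113.7", 80),
    "backends": [("10.0.0.4", 80), ("10.0.0.5", 80)],
}

# Round-robin here is only for illustration; the real distribution is
# per-flow and hash-based.
next_backend = cycle(rule["backends"])
print(next(next_backend))  # ('10.0.0.4', 80)
print(next(next_backend))  # ('10.0.0.5', 80)
```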
High availability ports

A load-balancing rule configured with 'protocol - all and port - 0' is known as a high availability (HA) ports rule. This rule enables a single rule to load-balance all TCP and UDP flows that arrive on all ports of an internal Standard Load Balancer.
The load-balancing decision is made per flow. This action is based on the following five-tuple of the connection:
- source IP address
- source port
- destination IP address
- destination port
- protocol
HA ports load-balancing rules help you with critical scenarios, such as high availability and scale for network virtual appliances (NVAs) inside virtual networks. The feature can help when a large number of ports must be load-balanced.
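The per-flow, five-tuple decision above can be sketched as a hash over the tuple. The function name, addresses, and SHA-256 hash are illustrative assumptions; Azure's internal algorithm differs, but the property shown is the same: every packet of a given flow maps to the same backend instance.

```python
import hashlib

def pick_backend(five_tuple, backends):
    """Illustrative five-tuple, hash-based flow distribution."""
    # Hash the (src IP, src port, dst IP, dst port, protocol) tuple.
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    # Map the hash onto one of the backend instances.
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]
flow = ("203.0.113.7", 50321, "192.0.2.10", 80, "TCP")

# The decision is per flow: repeating the lookup gives the same backend.
assert pick_backend(flow, backends) == pick_backend(flow, backends)
```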
Inbound NAT rules
An inbound NAT rule forwards incoming traffic sent to a frontend IP address and port combination. The traffic is sent to a specific virtual machine or instance in the backend pool. Port forwarding is done by the same hash-based distribution as load balancing.
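In contrast to a load-balancing rule, an inbound NAT rule targets a single instance. A minimal sketch of that one-to-one mapping, with hypothetical ports and addresses:

```python
# Illustrative sketch of inbound NAT rules (hypothetical addresses):
# each frontend port forwards to exactly one backend instance and port.
nat_rules = {
    50001: ("10.0.0.4", 3389),  # frontend :50001 -> VM1, RDP
    50002: ("10.0.0.5", 3389),  # frontend :50002 -> VM2, RDP
}

def forward(frontend_port):
    # Look up the single VM/port pair this frontend port is NATed to.
    return nat_rules[frontend_port]

print(forward(50001))  # ('10.0.0.4', 3389)
```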
Outbound rules

An outbound rule configures outbound network address translation (NAT) for all virtual machines or instances identified by the backend pool. This rule enables instances in the backend pool to communicate (outbound) to the internet or other endpoints.

Limitations
- Load balancer provides load balancing and port forwarding for specific TCP or UDP protocols. Load-balancing rules and inbound NAT rules support TCP and UDP, but not other IP protocols including ICMP.
- Outbound flow from a backend VM to a frontend of an internal Load Balancer will fail.
- A load balancer rule cannot span two virtual networks. All load balancer frontends and their backend instances must be in a single virtual network.
- Forwarding IP fragments isn’t supported on load-balancing rules. IP fragmentation of UDP and TCP packets isn’t supported on load-balancing rules. HA ports load-balancing rules can be used to forward existing IP fragments. For more information, see High availability ports overview.
- You can have only one public Load Balancer and one internal Load Balancer per availability set.