Dissecting Amazon Elastic Load Balancer (ELB): 18 Facts You Should Know
Load balancing in the cloud differs from traditional load balancing: it must support scalability and elasticity on the fly. Traditional load balancers also require additional maintenance, expertise, and management effort from IT ops, which means additional cost. Over-provisioning load balancers, whether in capacity or in number, causes further unnecessary cost leakage.
Amazon Elastic Load Balancing (ELB) creates new opportunities, as well as real challenges, for DevOps teams in this respect. ELB automatically distributes incoming traffic to your application across the multiple EC2 instances attached to your load balancer. At any time, Elastic Load Balancing detects unhealthy instances in the pool and routes incoming traffic only to the healthy instances until the unhealthy ones are restored.
A quick search for the term “ELB” on the AWS cloud support forums returns 255 results.
There are several software load balancers that can be deployed on an EC2 instance as alternatives, such as HAProxy, Zeus, Nginx, Citrix NetScaler, and aiCache.
“ELB does have a very slow ramp-up and that can be a problem for some applications and services. If your traffic increases and decreases gradually, this probably won’t be an issue but if your traffic is very spikey and you get large increases in the number of requests over a very short period of time, the ramp-up characteristics of ELB could be a problem for you. The only way to know for sure is to test it.” (Mitch Garnaat, answering “How reliable is Amazon ELB vs. HAProxy and other self-hosted solutions?” on Quora)
The ELB is one of the most popular components of the AWS cloud infrastructure. It is a software-based load balancer, and Amazon charges its customers $0.025 per hour per ELB, plus $0.008 per GB of data transferred through it. Are these costs beginning to spiral? Learn how you can Control Your Amazon Data Transfer Costs. Beyond cost considerations, ELBs play a significant role in the performance and availability of online services deployed on the Amazon cloud. Newvem’s AWS cloud analytics service has identified many cases in which AWS users have not configured their Elastic Load Balancers (ELBs) properly and will not achieve the required availability levels during an outage.
8Kmiles are experts in deploying and maintaining efficient AWS environments for customers. Based on the company’s experience, I have compiled the following important facts for AWS customers. If you are just getting started with the AWS cloud, this article can help shorten your learning curve for using ELBs effectively.
Here are some important facts you need to know about the AWS Elastic Load Balancer:
1. Algorithms supported by Amazon ELB - Currently, Amazon ELB supports only the Round Robin (RR) and Session Sticky algorithms. Round Robin can be used for load balancing traffic between:
- Web/App EC2 instances that are designed to be stateless
- Web/App EC2 instances that synchronize state between themselves
- Web/App EC2 instances that synchronize state through a common data store such as Memcached, ElastiCache, or a database
The Session Sticky algorithm can be used for load balancing traffic between:
- Web/App EC2 instances that are designed to be stateful
The current version of ELB does not support Weighted or Least Connections algorithms like other reverse proxies do. We can probably expect these algorithms to be supported in the future.
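To make the two supported strategies concrete, here is a minimal sketch (plain Python, not AWS code) of how round-robin and cookie/session-based sticky selection behave; the backend names and session IDs are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Cycle through backends in order, one request at a time."""
    def __init__(self, backends):
        self._ring = cycle(backends)

    def pick(self):
        return next(self._ring)

class StickyBalancer:
    """Pin each session to the backend that served its first request."""
    def __init__(self, backends):
        self._rr = RoundRobinBalancer(backends)
        self._sessions = {}  # session id -> backend

    def pick(self, session_id):
        if session_id not in self._sessions:
            self._sessions[session_id] = self._rr.pick()
        return self._sessions[session_id]

backends = ["web-1", "web-2", "web-3"]
rr = RoundRobinBalancer(backends)
print([rr.pick() for _ in range(4)])  # ['web-1', 'web-2', 'web-3', 'web-1']

sticky = StickyBalancer(backends)
print(sticky.pick("alice"), sticky.pick("bob"), sticky.pick("alice"))
# web-1 web-2 web-1  -- "alice" always lands on her original backend
```

Note why the stateless/state-synchronized distinction matters: round-robin only works when any backend can serve any request, which is exactly what the stateless or shared-data-store designs above guarantee.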
2. Amazon ELB is not a PAGE CACHE - Amazon ELB is just a load balancer, not to be confused with a page cache server or web accelerator. Web accelerators like Varnish can cache pages, static assets, etc., and can also do RR load balancing to backend EC2 servers. Amazon ELB is designed to balance load efficiently and elastically. For those caching use cases, Amazon ELB can be combined with Amazon CloudFront to deliver the static assets and reduce latency.
3. Amazon ELB can be pre-warmed on request - Amazon ELB can be pre-warmed by raising a request with the AWS Support team, who will pre-warm the load balancers in the ELB tier to handle a sudden flash of traffic. This is advisable for scenarios such as quarterly sales, launch campaigns, and promotions that follow a flash-traffic pattern. Pre-warming cannot, as far as I know, be requested on an hourly or daily basis. It would be a cool feature if the Amazon team made ELB pre-warming configurable (in requests/sec) from the AWS console, like the Amazon DynamoDB console. Click here to find out how to add a new listener to an existing Elastic Load Balancer.
4. Amazon ELB is not designed for sudden load spikes / flash traffic - Amazon ELB is designed to handle very high numbers of concurrent requests per second under a “gradually increasing” load pattern. It is not designed to handle heavy, sudden spikes of load or flash traffic. For example, imagine an e-commerce website whose traffic increases gradually to thousands of concurrent requests/sec over several hours; Amazon ELB can easily handle this traffic pattern. In a RightScale benchmark, Amazon ELB was easily able to handle 20K+ requests/sec under such patterns. By contrast, for use cases like a mass online exam, a GILT-style load pattern, or a 3-hour sales/launch campaign site expecting a sudden spike to 20K+ concurrent requests/sec within a few minutes, Amazon ELB will struggle. If this sudden-spike pattern is infrequent, we can pre-warm the ELB; otherwise, we need to look for alternative load balancers in the AWS infrastructure.
5. Protocols supported by Amazon ELB - Currently, Amazon ELB supports only the following protocols: HTTP, HTTPS (secure HTTP), SSL (secure TCP), and TCP. ELB supports load balancing on the following TCP ports: 25, 80, 443, and 1024-65535. If you need RTMP or an HTTP streaming protocol, you will need to use the Amazon CloudFront CDN in your architecture. Find out how to update the SSL certificate of an ELB.
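As a quick sanity check when scripting listener setup, the allowed listener-port ranges above can be validated before calling the API. This is just a sketch encoding the port list from this section; the function name is my own.

```python
# Listener ports accepted by classic ELB: 25, 80, 443, and 1024-65535.
ALLOWED_PORTS = {25, 80, 443} | set(range(1024, 65536))

def is_valid_elb_listener_port(port: int) -> bool:
    """Return True if classic ELB accepts this TCP port for a listener."""
    return port in ALLOWED_PORTS

print(is_valid_elb_listener_port(443))   # True
print(is_valid_elb_listener_port(8080))  # True
print(is_valid_elb_listener_port(22))    # False (SSH port is not allowed)
```

Validating up front gives a clearer error message than a rejected API call, which matters when listeners are created from automation.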
6. Amazon ELB times out idle connections at 60 seconds - Amazon ELB currently times out a persistent socket connection after 60 seconds if it is kept idle. This is a problem for use cases that generate large files (PDFs, reports, etc.) on the backend EC2 instance, send them back as the response, and keep the connection idle during the entire generation process. To avoid this, you’ll have to send something on the socket every 40 seconds or so to keep the connection active through the ELB.
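One common workaround for the idle timeout is to stream the response and emit a byte of padding whenever the backend has been silent too long. Below is a minimal sketch of that idea: the slow producer runs in a background thread, and the consumer yields whitespace whenever no chunk arrives within the idle limit. The function and its API are my own illustration, not an AWS or framework feature.

```python
import queue
import threading

IDLE_LIMIT = 40  # seconds; stay well under ELB's 60-second idle timeout

def stream_with_keepalive(produce_chunks, idle_limit=IDLE_LIMIT):
    """Run the slow chunk producer in a background thread and yield
    its chunks; while it is silent, yield whitespace padding so the
    load balancer keeps seeing traffic on the connection."""
    q = queue.Queue()
    DONE = object()

    def worker():
        for chunk in produce_chunks():
            q.put(chunk)
        q.put(DONE)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        try:
            item = q.get(timeout=idle_limit)
        except queue.Empty:
            yield b" "  # padding byte: resets the idle timer
            continue
        if item is DONE:
            return
        yield item
```

Note the caveat: the padding must be legal in the response body (leading whitespace is fine for many text formats but would corrupt a PDF), so in practice this is paired with chunked transfer encoding and a format that tolerates it, or the padding is sent on a framing level the client ignores.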
[Newvem analyzes your baseline disaster recovery (DR) status, reflecting how well AWS DR best practices have been implemented, and recommends AWS features and best practices to reach optimal availability, increase outage protection, and recover quickly.]