Amazon Web Services’ EC2 Container Service (ECS) is a container orchestration manager that runs Docker containers on a highly scalable, high-performance platform within AWS. ECS is not overly complex: it leverages familiar AWS resources such as EC2, EBS, ELB, Security Groups, Auto Scaling Groups, and IAM Roles. By integrating these common resources, ECS delivers an efficient container orchestration tool that can create a scalable, cost-saving environment. However, not every organization will find ECS the right solution for its containerization needs. Determining whether ECS is the right orchestration tool for you requires understanding your environment’s requirements.
When looking at taking your containerization operations to the cloud, there are many options for orchestration tooling. Most container managers that work with your on-premises solution will also work in the cloud under an IaaS model, but not all will be the best fit, so it is always worth reviewing your options. Some basic considerations for your requirements gathering are:
Many of these requirements are unique to your enterprise, and most container orchestration tools have a good answer for them. Yet when choosing a cloud solution to extend your containerization operations, there are two major areas to focus on:
Both of these questions impact the operational budget. Any reengineering of deployment processes requires updates to scripts and code while the new process is taught to the technical team. Each time a new solution is developed, additional time must be factored in to rework development environment configurations for production, and needing to do so defeats much of the benefit gained from containerization in the first place.
Additionally, selecting an orchestration tool that does not enable elasticity of the entire containerization infrastructure can waste your budget. For example, an orchestration platform running on AWS may require a fixed number of nodes in a cluster. Containers are deployed to the cluster, and any compute capacity that is not used sits idle while you continue to pay for all of it. As demand increases, more containers are deployed until the cluster is at capacity; then, without manual intervention, demand cannot be met. Choosing an orchestration tool that can scale the cluster as well as the services/containers within it is key to leveraging the cost-effectiveness of the cloud.
AWS has made an effort to limit the amount of reengineering needed to get started with ECS. The Amazon EC2 Container Service CLI (Amazon ECS CLI) simplifies your development process and makes it easy to set up an Amazon ECS cluster and its associated resources, because it supports Docker Compose files. You can apply the same Compose definition used to define a multi-container application on your development machine in production as well. However, all is not perfect: the ECS CLI currently supports only Docker Compose versions 1 and 2. If your development team heavily leverages more recent Docker Compose features, you may need to assess whether reengineering your solutions is worth it.
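As a rough sketch of that workflow, the commands below show how a Docker Compose file can drive both cluster setup and deployment through the ECS CLI. The cluster name, key pair, region, and file name are placeholders, not values from this article, and exact flags vary by ECS CLI version.

```shell
# Sketch: reusing a Docker Compose (v1/v2) file with the Amazon ECS CLI.
# "demo-cluster", "my-keypair", and us-east-1 are placeholder values.

# One-time configuration: point the ECS CLI at a cluster and region.
ecs-cli configure --cluster demo-cluster --region us-east-1

# Provision the cluster's Container Instances (IAM capability required
# because the CLI creates roles, security groups, etc. on your behalf).
ecs-cli up --keypair my-keypair --capability-iam \
  --size 2 --instance-type t2.medium

# Deploy the same docker-compose.yml you run locally as ECS tasks.
ecs-cli compose --file docker-compose.yml up

# Or run it as a long-lived, managed ECS service instead of one-off tasks.
ecs-cli compose --file docker-compose.yml service up
```

The point of the sketch is the last two commands: the Compose file itself is unchanged between the development machine and the cluster.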
When selecting a containerization solution in the cloud, an important factor is support for elasticity of your infrastructure. AWS ECS provides elasticity of the cluster nodes (Container Instances) through Auto Scaling Groups and CloudWatch alarms that trigger an increase or decrease in the number of nodes in the cluster. As demand for your service increases, ECS distributes more containers across the Container Instances to meet it. As memory runs low on the cluster nodes because of the added containers, the Auto Scaling Group can provision additional Container Instances. This offers a performance gain as well as a cost savings: you pay only for what you need, and you never have to manually add nodes to your cluster or watch idle compute resources run up your bill.
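One way to wire up that cluster-level elasticity is a CloudWatch alarm on the cluster's MemoryReservation metric that fires an Auto Scaling policy. A minimal sketch follows; the group name, cluster name, threshold, and periods are illustrative placeholders, not values prescribed by ECS.

```shell
# Sketch: grow the ECS cluster's Auto Scaling group when reserved memory
# runs high. "demo-asg" and "demo-cluster" are placeholder resource names.

# A policy that adds one Container Instance to the cluster's group.
# put-scaling-policy returns the policy ARN, which the alarm will invoke.
POLICY_ARN=$(aws autoscaling put-scaling-policy \
  --auto-scaling-group-name demo-asg \
  --policy-name scale-out-one \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity \
  --query PolicyARN --output text)

# An alarm on the AWS/ECS MemoryReservation metric for this cluster:
# fire when average reserved memory stays above 75% for two 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name demo-cluster-memory-high \
  --namespace AWS/ECS \
  --metric-name MemoryReservation \
  --dimensions Name=ClusterName,Value=demo-cluster \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 75 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "$POLICY_ARN"
```

A mirror-image alarm and a negative scaling adjustment would shrink the group again as containers drain, which is what keeps you from paying for idle nodes.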