AWS Fargate

A serverless compute engine purpose-built for containers, letting you run workloads on Amazon ECS or Amazon EKS without managing EC2 instances

Overview

AWS Fargate is a serverless compute engine that works with Amazon ECS and Amazon EKS. Traditional container operations required you to manage the underlying EC2 instance fleet (cluster) yourself, but Fargate lets you fully delegate infrastructure management to AWS. Developers simply specify the container image, CPU and memory requirements, and networking configuration, and Fargate automatically provisions the appropriate compute resources and runs the containers. It uses a pay-per-task model, billing by the second based on the vCPU and memory allocated to each task. Because there is no need to patch operating systems or manage scaling, operational overhead is dramatically reduced.
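The per-second billing model above can be sketched with a small calculation. The rates below are illustrative only (actual rates vary by region, platform, and architecture; check the AWS Fargate pricing page):

```python
# Sketch of Fargate's pay-per-task billing model.
# The rates are illustrative examples, not authoritative pricing.
VCPU_HOUR_RATE = 0.04048   # USD per vCPU per hour (example rate)
GB_HOUR_RATE = 0.004445    # USD per GB of memory per hour (example rate)

def task_cost(vcpu: float, memory_gb: float, seconds: int) -> float:
    """Cost of one Fargate task, billed by the second on allocated resources."""
    hours = seconds / 3600
    return vcpu * hours * VCPU_HOUR_RATE + memory_gb * hours * GB_HOUR_RATE

# e.g. a 0.5 vCPU / 1 GB task running for 10 minutes
print(round(task_cost(0.5, 1.0, 600), 6))
```

Because billing is driven by allocation rather than a reserved instance fleet, short-lived and bursty tasks pay only for their actual run time.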

Comparison with the EC2 Launch Type and Fargate's Isolation Model

When running containers on ECS, you can choose between two launch types: EC2 and Fargate. With the EC2 launch type, you build and manage your own cluster of EC2 instances and place containers on them. This involves selecting instance types, capacity planning, OS patching, and configuring scaling. With Fargate, all of these tasks are eliminated. Fargate automatically provisions an isolated compute environment based on the CPU and memory requirements specified in the task definition. Each task runs on its own microVM with a dedicated kernel, ensuring strong security isolation even in multi-tenant environments. Unlike Azure Container Instances (ACI), which has limited integration with Azure Kubernetes Service (restricted to Virtual Nodes), Fargate supports both ECS and EKS, enabling serverless execution of Kubernetes workloads as well.
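The task definition mentioned above is where the Fargate launch type is declared. A minimal sketch, shaped like the payload you would pass to the ECS RegisterTaskDefinition API (the family name, container name, and image are placeholder values):

```python
# Minimal Fargate task definition sketch. Fargate requires the awsvpc
# network mode and task-level cpu/memory from the supported combinations.
task_definition = {
    "family": "web-app",                      # placeholder family name
    "requiresCompatibilities": ["FARGATE"],   # Fargate launch type, not EC2
    "networkMode": "awsvpc",                  # mandatory for Fargate tasks
    "cpu": "512",                             # 0.5 vCPU, in CPU units
    "memory": "1024",                         # 1 GB; must pair with the CPU size
    "containerDefinitions": [
        {
            "name": "web",                    # placeholder container name
            "image": "public.ecr.aws/nginx/nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
```

Note that with the EC2 launch type you would instead manage instance capacity yourself and could omit `requiresCompatibilities`; with Fargate, the CPU and memory values here are exactly what drives both provisioning and billing.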

Fargate Spot and Task-Level Cost Optimization

Fargate Spot can reduce costs by up to 70% compared to standard Fargate pricing by using surplus AWS capacity. It is well suited for fault-tolerant workloads such as batch processing, data transformation pipelines, and CI/CD build jobs that can handle interruptions gracefully. Combining Fargate Spot with standard Fargate capacity providers lets you set a ratio - for example, 70% Spot and 30% On-Demand - balancing cost savings with availability guarantees. At the task level, right-sizing CPU and memory allocations based on actual utilization data from Container Insights prevents over-provisioning. Fargate bills by the second with per-vCPU-hour and per-GB-hour rates, so even small reductions in allocated resources compound into meaningful savings across many tasks.
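The 70/30 split described above is expressed as a capacity provider strategy on the ECS service. A sketch of that structure, with a helper to illustrate how relative weights distribute tasks (the helper is for illustration only, not an AWS API):

```python
# Capacity provider strategy splitting tasks roughly 70% Fargate Spot /
# 30% On-Demand Fargate. Weights are relative; "base" reserves a minimum
# number of tasks on that provider before the weighted split applies.
capacity_provider_strategy = [
    {"capacityProvider": "FARGATE", "weight": 3, "base": 2},
    {"capacityProvider": "FARGATE_SPOT", "weight": 7, "base": 0},
]

def expected_split(strategy, tasks_beyond_base):
    """Approximate share of tasks each provider receives past the base."""
    total = sum(s["weight"] for s in strategy)
    return {s["capacityProvider"]: tasks_beyond_base * s["weight"] / total
            for s in strategy}

print(expected_split(capacity_provider_strategy, 10))
# → {'FARGATE': 3.0, 'FARGATE_SPOT': 7.0}
```

Setting a non-zero `base` on the On-Demand provider guarantees a floor of interruption-free tasks even if all Spot capacity is reclaimed.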

Network Design and Hybrid Configurations in Production

Fargate assigns an ENI (Elastic Network Interface) to each task by default, providing native private access to resources within a VPC. Tasks can be placed in private subnets with NAT Gateway access for outbound traffic, or in public subnets with public IP assignment for internet-facing services behind an ALB. Security groups applied at the task level enable fine-grained network access control, restricting traffic to only the ports and sources each service requires. For production environments, a hybrid approach is common: use Fargate for development and staging environments to minimize operational burden, and the EC2 launch type for large-scale production workloads where GPU support, custom AMIs, or maximum cost efficiency in high-density clusters is required. Service mesh integration through App Mesh or ECS Service Connect provides observability and traffic management across Fargate and EC2 tasks within the same cluster.
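The per-task ENI model above is configured through the awsvpc network configuration attached to the service or task. A minimal sketch (subnet and security-group IDs are placeholders):

```python
# awsvpc network configuration for a Fargate task or service.
# IDs below are placeholders, not real resources.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0aaa1111", "subnet-0bbb2222"],  # private subnets
        "securityGroups": ["sg-0ccc3333"],                  # task-level SG
        # DISABLED for private subnets routing outbound via a NAT Gateway;
        # ENABLED for internet-facing tasks in public subnets.
        "assignPublicIp": "DISABLED",
    }
}
```

Because the security group binds to the task's own ENI rather than a shared host, each service can be restricted to exactly the ports and sources it needs.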
