In this tutorial we are going to deploy a containerised Tetris game on EKS using Terraform.
We will run the project from an EC2 instance, which lets us attach appropriate credentials via an IAM role and gives us the freedom to install the tools we will use, which are:
Docker
Terraform
Kubectl
AWS CLI
Step 1: Spinning up an AWS EC2 instance and installing Docker, Terraform, AWS CLI and kubectl.
1- Log in to AWS Management console
2- In the EC2 console, launch a new Ubuntu instance, select a key pair, and attach a security group that allows HTTP and HTTPS traffic, plus SSH if you are not planning to use AWS Session Manager to connect to the instance.
3- Connect to the instance (via SSH or AWS Session Manager).
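If you are connecting over SSH, the command looks like the below; the key file path and public DNS are placeholders for your own values:
ssh -i /path/to/your-key.pem ubuntu@<instance-public-dns>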
4- Install Docker, Terraform, AWS CLI and kubectl as follows:
Create a new file named “install_tools.sh” by running:
nano install_tools.sh
Paste the shell script below into the file; it will install all four tools:
#!/bin/bash

# Update package list and upgrade existing packages
sudo apt update && sudo apt upgrade -y

# Install necessary dependencies
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

# Install Docker
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Allow the current user to run docker without sudo
# (log out and back in for the group change to take effect)
sudo usermod -aG docker $USER

# Install Terraform
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
  gpg --dearmor | \
  sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install -y terraform

# Install AWS CLI (unzip is needed to extract the installer)
sudo apt install -y unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Install kubectl
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Print installed versions
echo "Installed versions:"
docker --version
terraform --version
aws --version
kubectl version --client
echo "Installation complete."
Make the script executable and run it (the script already uses sudo internally, so it does not need to be run with sudo):
chmod +x install_tools.sh
./install_tools.sh
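If docker commands complain about permission denied after the script finishes, the usermod group change has not taken effect in your current shell yet; either log out and back in, or refresh the group in place:
newgrp docker
docker ps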
Step 2: Creating an IAM role for the EC2 instance.
1- Let’s start by creating a new role for our EC2 instance with permissions to EKS, IAM, S3 and EC2:
In the IAM console, click Policies → Create policy, select the JSON tab, and paste the code below:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "eks:CreateCluster", "eks:DescribeCluster", "eks:ListClusters", "eks:UpdateClusterVersion", "eks:DeleteCluster", "eks:ListNodegroups", "eks:CreateNodegroup", "eks:DeleteNodegroup", "eks:DescribeNodegroup", "eks:UpdateNodegroupConfig", "eks:UpdateNodegroupVersion", "eks:TagResource" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "iam:CreateServiceLinkedRole", "iam:CreateRole" ], "Resource": "*", "Condition": { "StringEquals": { "iam:AWSServiceName": [ "eks.amazonaws.com" ] } } }, { "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListAttachedRolePolicies", "iam:CreateRole", "iam:TagRole", "iam:ListRolePolicies", "iam:AttachRolePolicy", "iam:ListInstanceProfilesForRole", "iam:PassRole", "iam:DeleteRole", "iam:DetachRolePolicy" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:DescribeSubnets", "ec2:DescribeSecurityGroups", "ec2:DescribeVpcs", "ec2:DescribeVpcAttribute", "ec2:DescribeAvailabilityZones" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:*", "s3-object-lambda:*" ], "Resource": "*" } ] }
This will grant the EC2 instance permission to manage resources in EKS, EC2, IAM and S3.
Give the policy a name then create it.
Create a new role via IAM → Roles → Create role.
Select AWS service as the trusted entity type and EC2 as the use case.
Attach the newly created policy above, then give the role a name and save it.
Now assign the role to the running EC2 instance: in the EC2 console, tick the running instance, click Actions → Security → Modify IAM role, then select the newly created role:
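Back on the instance, you can confirm the role's credentials are being picked up by asking AWS who you are; the returned ARN should reference the newly attached role:
aws sts get-caller-identity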
Step 3: Building the infrastructure using Terraform.
Create an S3 bucket to store the Terraform state file, and make a note of its name as it will be needed in later steps.
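You can create the bucket from the console or from the instance with the AWS CLI; the bucket name below is a placeholder, so replace it with a globally unique name of your own:
aws s3 mb s3://tetris112233 --region us-east-1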
Terraform files are split into multiple files for better organisation and maintainability. Here's the structure and content of the files:
project_root/
│
├── main.tf
├── variables.tf
├── outputs.tf
├── providers.tf
│
└── modules/
├── eks/
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
│
└── iam/
├── main.tf
├── variables.tf
└── outputs.tf
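To scaffold this layout on the instance, something like the below works (run from an empty project directory):
mkdir -p modules/eks modules/iam
touch main.tf variables.tf outputs.tf providers.tf
touch modules/eks/{main.tf,variables.tf,outputs.tf}
touch modules/iam/{main.tf,variables.tf,outputs.tf}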
Here's the content for each file:
main.tf:
module "iam" {
source = "./modules/iam"
cluster_name = var.cluster_name
}
module "eks" {
source = "./modules/eks"
cluster_name = var.cluster_name
node_group_name = var.node_group_name
desired_node_count = var.desired_node_count
max_node_count = var.max_node_count
min_node_count = var.min_node_count
instance_type = var.instance_type
cluster_role_arn = module.iam.cluster_role_arn
node_group_role_arn = module.iam.node_group_role_arn
}
variables.tf: Make sure to update the bucket name to the S3 bucket you created for the Terraform state file.
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
default = "EKS_CLOUD"
}
variable "node_group_name" {
description = "Name of the EKS node group"
type = string
default = "Node-cloud"
}
variable "desired_node_count" {
description = "Desired number of nodes in the EKS node group"
type = number
default = 1
}
variable "max_node_count" {
description = "Maximum number of nodes in the EKS node group"
type = number
default = 2
}
variable "min_node_count" {
description = "Minimum number of nodes in the EKS node group"
type = number
default = 1
}
variable "instance_type" {
description = "EC2 instance type for the EKS nodes"
type = string
default = "t2.medium"
}
variable "region" {
description = "AWS region"
type = string
default = "us-east-1"
}
variable "s3_bucket_name" {
description = "Name of the S3 bucket for Terraform state"
type = string
default = "tetris112233" #CHANGE TO YOUR BUCKET NAME
}
variable "s3_key" {
description = "S3 key for Terraform state"
type = string
default = "EKS/terraform.tfstate"
}
output "cluster_endpoint" {
description = "Endpoint for EKS control plane"
value = module.eks.cluster_endpoint
}
output "cluster_security_group_id" {
description = "Security group ID attached to the EKS cluster"
value = module.eks.cluster_security_group_id
}
providers.tf: Make sure to update the bucket name to the S3 bucket you created for the Terraform state file.
terraform {
backend "s3" {
bucket = "tetris112233" #CHANGE TO YOUR BUCKET NAME
key = "EKS/terraform.tfstate"
region = "us-east-1"
}
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
provider "aws" {
region = var.region
}
data "aws_vpc" "default" {
default = true
}
data "aws_availability_zones" "available" {
state = "available"
}
data "aws_subnets" "private" {
filter {
name = "vpc-id"
values = [data.aws_vpc.default.id]
}
filter {
name = "availability-zone"
values = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1]]
}
}
resource "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
role_arn = var.cluster_role_arn
vpc_config {
subnet_ids = data.aws_subnets.private.ids
}
tags = {
Name = var.cluster_name
}
}
resource "aws_eks_node_group" "eks_node_group" {
cluster_name = aws_eks_cluster.eks_cluster.name
node_group_name = var.node_group_name
node_role_arn = var.node_group_role_arn
subnet_ids = data.aws_subnets.private.ids
scaling_config {
desired_size = var.desired_node_count
max_size = var.max_node_count
min_size = var.min_node_count
}
instance_types = [var.instance_type]
tags = {
Name = var.node_group_name
}
}
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
}
variable "node_group_name" {
description = "Name of the EKS node group"
type = string
}
variable "desired_node_count" {
description = "Desired number of nodes in the EKS node group"
type = number
}
variable "max_node_count" {
description = "Maximum number of nodes in the EKS node group"
type = number
}
variable "min_node_count" {
description = "Minimum number of nodes in the EKS node group"
type = number
}
variable "instance_type" {
description = "EC2 instance type for the EKS nodes"
type = string
}
variable "cluster_role_arn" {
description = "ARN of the EKS cluster IAM role"
type = string
}
variable "node_group_role_arn" {
description = "ARN of the EKS node group IAM role"
type = string
}
output "cluster_endpoint" {
description = "Endpoint for EKS control plane"
value = aws_eks_cluster.eks_cluster.endpoint
}
output "cluster_security_group_id" {
description = "Security group ID attached to the EKS cluster"
value = aws_eks_cluster.eks_cluster.vpc_config[0].cluster_security_group_id
}
ata "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["eks.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "eks_cluster_role" {
name = "eks-cluster-role-${var.cluster_name}"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
tags = {
Name = "EKS Cluster Role"
}
}
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}
resource "aws_iam_role" "eks_node_group_role" {
name = "eks-node-group-role-${var.cluster_name}"
assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "ecr_readonly_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node_group_role.name
}
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
}
output "cluster_role_arn" {
description = "ARN of the EKS cluster IAM role"
value = aws_iam_role.eks_cluster_role.arn
}
output "node_group_role_arn" {
description = "ARN of the EKS node group IAM role"
value = aws_iam_role.eks_node_group_role.arn
}
This modular structure separates the IAM and EKS resources into their own modules, making the code more organised and easier to maintain. The main configuration file (main.tf) in the root directory simply calls these modules with the necessary variables.
This optimised version includes the following improvements:
Uses data sources to discover the default VPC and its subnets, so no new network resources need to be created.
Separates resources into logical files for better organisation.
Uses variables for customisable values.
Applies consistent tagging across resources.
To use this, navigate to the directory where main.tf is located, then run terraform init, terraform plan, and terraform apply as usual (the full sequence is shown below). Creating the resources will take 5 - 10 minutes.
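For reference, the full sequence from the project root (append --auto-approve to terraform apply if you want to skip the confirmation prompt):
terraform init
terraform plan
terraform apply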
Step 4: EKS Deployment and Service
- First, set your kubectl context to point to the newly created cluster by running:
aws eks update-kubeconfig --name EKS_CLOUD --region us-east-1
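Confirm kubectl can reach the cluster; the node(s) should report Ready after a minute or two:
kubectl get nodes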
Create a deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tetris-deployment
spec:
  replicas: 2 # You can adjust the number of replicas as needed
  selector:
    matchLabels:
      app: tetris
  template:
    metadata:
      labels:
        app: tetris
    spec:
      containers:
        - name: tetris-container
          image: moeahmed11/tetris:latest
          ports:
            - containerPort: 80
Create a service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: tetris-service
spec:
  type: LoadBalancer
  selector:
    app: tetris
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Apply the deployment and service:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
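You can watch the rollout finish and check that both replicas are running:
kubectl rollout status deployment/tetris-deployment
kubectl get pods -l app=tetris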
- Run the below and look for the EXTERNAL-IP of tetris-service; that is the Load Balancer URL:
kubectl get all
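Alternatively, query the service directly; note the load balancer's DNS name can take a few minutes to start resolving after it is provisioned:
kubectl get service tetris-service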
Finally test the deployment by pasting the Load Balancer URL into a browser :)
Step 5: Clean up
Run the commands below to delete the deployment and service, then remove the resources created by Terraform:
kubectl delete service tetris-service
kubectl delete deployment tetris-deployment
terraform destroy --auto-approve
This should take 5 - 10 minutes. Then terminate your working EC2 instance, and verify that the load balancer and EC2 instances created by EKS have been deleted.
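A quick way to double-check from the CLI, assuming us-east-1 (the last command deletes the state bucket and everything in it, so replace the name with your own bucket and only run it once you are completely done):
aws elb describe-load-balancers --region us-east-1
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --region us-east-1
aws s3 rb s3://tetris112233 --force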