AWS-EKS-KARPENTER
This is my first technical post: how to set up Karpenter on EKS. The approach I use is simple: take the module already provided on terraform.io and tweak it a little to fit your needs.
The code is pushed to a repo; here is the link.
I would love to explain it line by line, but that is exhausting, so let's assume you already know Terraform and I'll let ChatGPT explain the code.
The working directory layout looks like this:
AWS-EKS-KARPENTER
├── statebucket
│   ├── main.tf
│   ├── outputs.tf
│   ├── terraform.tfvars
│   └── variables.tf
├── terraform
│   ├── main.tf
│   ├── outputs.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── .gitignore
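The two directories are applied separately: statebucket first, because it creates the S3 bucket that the main configuration uses as its remote state backend, then terraform. A rough sketch of the workflow, assuming AWS credentials are already configured and both terraform.tfvars files are filled in:

```shell
# Bootstrap the remote-state bucket first, then the cluster itself.
# Assumes AWS credentials and terraform.tfvars are already set up.
cd statebucket
terraform init
terraform apply

# Point the backend block in terraform/main.tf at the new bucket, then apply.
cd ../terraform
terraform init
terraform apply
```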
- statebucket/main.tf
###############################################################################
# Provider
###############################################################################
provider "aws" {
  region              = var.region
  allowed_account_ids = [var.aws_account_id]
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

###############################################################################
# S3 Bucket
###############################################################################
resource "aws_s3_bucket" "state" {
  bucket        = "${var.aws_account_id}-bucket-state-file-karpenter"
  force_destroy = true
}
Code Explanation
Provider Configuration
provider "aws" {
  region              = var.region
  allowed_account_ids = [var.aws_account_id]
}
- provider "aws": declares AWS as the cloud provider.
- region: the AWS region to use, taken from the variable var.region.
- allowed_account_ids: restricts usage to a specific AWS account (defined by var.aws_account_id).
Terraform Configuration
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
- Specifies that the provider in use is aws, from the source hashicorp/aws, at version ~> 5.0.
S3 Bucket Resource
resource "aws_s3_bucket" "state" {
  bucket        = "${var.aws_account_id}-bucket-state-file-karpenter"
  force_destroy = true
}
- aws_s3_bucket: creates an S3 bucket in AWS.
- bucket: the bucket name is built dynamically from the variable var.aws_account_id to keep it unique (bucket names must be globally unique across all of AWS).
- force_destroy: allows the bucket to be destroyed along with all of its contents without extra confirmation.
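To make the uniqueness point concrete, here is a quick sketch of how the interpolation resolves; the account ID below is a made-up example:

```shell
# Hypothetical account ID, for illustration only
AWS_ACCOUNT_ID=123456789012
# Mirrors the "${var.aws_account_id}-bucket-state-file-karpenter" interpolation
BUCKET_NAME="${AWS_ACCOUNT_ID}-bucket-state-file-karpenter"
echo "$BUCKET_NAME"
```

Prefixing the account ID is a cheap way to avoid collisions with other AWS customers' bucket names.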
- terraform/main.tf
###############################################################################
# Provider
###############################################################################
provider "aws" {
  region              = var.region
  allowed_account_ids = [var.aws_account_id]
}

# Region for public ECR
provider "aws" {
  region = "us-east-1"
  alias  = "virginia"
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}

provider "kubectl" {
  apply_retry_count      = 5
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

terraform {
  backend "s3" {
    bucket = "XXXXXXXXXXXX-bucket-state-file-karpenter"
    region = "ap-southeast-2"
    key    = "karpenter.tfstate"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }
  }
}

###############################################################################
# Data Sources
###############################################################################
data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.virginia
}

###############################################################################
# VPC
###############################################################################
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.13.0"

  name = "${var.cluster_name}-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.region}a", "${var.region}b", "${var.region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  intra_subnets   = ["10.0.104.0/24", "10.0.105.0/24", "10.0.106.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
    # Tags subnets for Karpenter auto-discovery
    "karpenter.sh/discovery" = var.cluster_name
  }
}

###############################################################################
# EKS
###############################################################################
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.30"

  cluster_endpoint_public_access = true

  cluster_addons = {
    coredns                = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  vpc_id                   = module.vpc.vpc_id
  subnet_ids               = module.vpc.private_subnets
  control_plane_subnet_ids = module.vpc.intra_subnets

  eks_managed_node_groups = {
    karpenter = {
      # Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
      ami_type       = "AL2023_x86_64_STANDARD"
      instance_types = ["m5.large"]

      min_size     = 2
      max_size     = 10
      desired_size = 2

      taints = {
        # This Taint aims to keep just EKS Addons and Karpenter running on this MNG
        # The pods that do not tolerate this taint should run on nodes created by Karpenter
        addons = {
          key    = "CriticalAddonsOnly"
          value  = "true"
          effect = "NO_SCHEDULE"
        },
      }
    }
  }

  # Cluster access entry
  # To add the current caller identity as an administrator
  enable_cluster_creator_admin_permissions = true

  node_security_group_tags = {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = var.cluster_name
  }
}

###############################################################################
# Karpenter
###############################################################################
module "karpenter" {
  source = "terraform-aws-modules/eks/aws//modules/karpenter"

  cluster_name = module.eks.cluster_name

  enable_v1_permissions = true

  enable_pod_identity             = true
  create_pod_identity_association = true

  # Attach additional IAM policies to the Karpenter node IAM role
  node_iam_role_additional_policies = {
    AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }
}

###############################################################################
# Karpenter Helm
###############################################################################
resource "helm_release" "karpenter" {
  namespace           = "kube-system"
  name                = "karpenter"
  repository          = "oci://public.ecr.aws/karpenter"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "1.0.0"
  wait                = false

  values = [
    <<-EOT
    serviceAccount:
      name: ${module.karpenter.service_account}
    settings:
      clusterName: ${module.eks.cluster_name}
      clusterEndpoint: ${module.eks.cluster_endpoint}
      interruptionQueue: ${module.karpenter.queue_name}
    EOT
  ]
}

###############################################################################
# Karpenter Kubectl
###############################################################################
resource "kubectl_manifest" "karpenter_node_pool" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          nodeClassRef:
            name: default
          requirements:
            - key: "karpenter.k8s.aws/instance-category"
              operator: In
              values: ["c", "m", "r"]
            - key: "karpenter.k8s.aws/instance-cpu"
              operator: In
              values: ["4", "8", "16", "32"]
            - key: "karpenter.k8s.aws/instance-hypervisor"
              operator: In
              values: ["nitro"]
            - key: "karpenter.k8s.aws/instance-generation"
              operator: Gt
              values: ["2"]
      limits:
        cpu: 1000
      disruption:
        consolidationPolicy: WhenEmpty
        consolidateAfter: 30s
  YAML

  depends_on = [
    kubectl_manifest.karpenter_node_class
  ]
}

resource "kubectl_manifest" "karpenter_node_class" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1beta1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      amiFamily: AL2023
      role: ${module.karpenter.node_iam_role_name}
      subnetSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      securityGroupSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      tags:
        karpenter.sh/discovery: ${module.eks.cluster_name}
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}

###############################################################################
# Inflate deployment
###############################################################################
resource "kubectl_manifest" "karpenter_example_deployment" {
  yaml_body = <<-YAML
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inflate
    spec:
      replicas: 0
      selector:
        matchLabels:
          app: inflate
      template:
        metadata:
          labels:
            app: inflate
        spec:
          terminationGracePeriodSeconds: 0
          containers:
            - name: inflate
              image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
              resources:
                requests:
                  cpu: 1
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
Code Explanation
Provider Configuration
provider "aws" {
  region              = var.region
  allowed_account_ids = [var.aws_account_id]
}

# Region for public ECR
provider "aws" {
  region = "us-east-1"
  alias  = "virginia"
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}

provider "kubectl" {
  apply_retry_count      = 5
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
- provider "aws": declares AWS as the cloud provider.
- region: the AWS region to use, taken from the variable var.region.
- allowed_account_ids: restricts usage to a specific AWS account (defined by var.aws_account_id).
- provider "helm": configures Helm so Terraform can manage Kubernetes releases.
- provider "kubectl": used to apply Kubernetes manifests directly from Terraform.
Terraform Configuration
terraform {
  backend "s3" {
    bucket = "XXXXXXXXXXXX-bucket-state-file-karpenter"
    region = "ap-southeast-2"
    key    = "karpenter.tfstate"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }
  }
}
- backend "s3": stores the Terraform state file in an S3 bucket (the one created by statebucket).
- required_providers: declares the providers in use along with their versions.
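Note the XXXXXXXXXXXX placeholder: backend blocks cannot interpolate variables, so the account ID has to be written literally (or supplied at init time with `terraform init -backend-config="bucket=..."`). One hedged way to patch it in, sketched against a throwaway file rather than the real main.tf, with a made-up account ID:

```shell
# Backend blocks can't use variables, so the bucket name must be literal.
# Sketch of patching the placeholder; uses temp files for safety.
ACCOUNT_ID=123456789012   # made-up example account ID
printf 'bucket = "XXXXXXXXXXXX-bucket-state-file-karpenter"\n' > /tmp/backend-demo.tf
sed "s/XXXXXXXXXXXX/${ACCOUNT_ID}/" /tmp/backend-demo.tf > /tmp/backend-patched.tf
cat /tmp/backend-patched.tf
```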
Data Sources
data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.virginia
}
- data "aws_ecrpublic_authorization_token": obtains an authorization token for accessing the public AWS Elastic Container Registry (ECR), which is where the Karpenter Helm chart lives.
VPC
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.13.0"

  name = "${var.cluster_name}-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.region}a", "${var.region}b", "${var.region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  intra_subnets   = ["10.0.104.0/24", "10.0.105.0/24", "10.0.106.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
    "karpenter.sh/discovery" = var.cluster_name
  }
}
- module "vpc": creates a VPC using the official Terraform AWS module.
- public_subnet_tags and private_subnet_tags: tag the subnets for use by Kubernetes load balancers and for Karpenter subnet auto-discovery.
EKS Cluster
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.30"

  cluster_endpoint_public_access = true

  cluster_addons = {
    coredns                = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  vpc_id                   = module.vpc.vpc_id
  subnet_ids               = module.vpc.private_subnets
  control_plane_subnet_ids = module.vpc.intra_subnets

  eks_managed_node_groups = {
    karpenter = {
      ami_type       = "AL2023_x86_64_STANDARD"
      instance_types = ["m5.large"]

      min_size     = 2
      max_size     = 10
      desired_size = 2

      taints = {
        addons = {
          key    = "CriticalAddonsOnly"
          value  = "true"
          effect = "NO_SCHEDULE"
        },
      }
    }
  }

  enable_cluster_creator_admin_permissions = true

  node_security_group_tags = {
    "karpenter.sh/discovery" = var.cluster_name
  }
}
- module "eks": creates the EKS cluster along with its managed add-ons.
- eks_managed_node_groups: configures the node group managed by EKS, which hosts Karpenter itself.
- taints: adds a taint that separates critical workloads (add-ons, Karpenter) from everything else.
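Pods that should stay on this managed node group, such as add-ons and Karpenter itself, need a matching toleration; everything else is repelled by the taint and ends up on Karpenter-provisioned nodes. A sketch of what such a toleration looks like, written to a scratch file for illustration:

```shell
# Toleration matching the CriticalAddonsOnly taint above; a pod spec
# without this toleration will not schedule onto the tainted node group.
cat > /tmp/toleration-demo.yaml <<'EOF'
tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
EOF
cat /tmp/toleration-demo.yaml
```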
Karpenter Module
module "karpenter" {
  source = "terraform-aws-modules/eks/aws//modules/karpenter"

  cluster_name = module.eks.cluster_name

  enable_v1_permissions = true

  enable_pod_identity             = true
  create_pod_identity_association = true

  node_iam_role_additional_policies = {
    AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }
}
- module "karpenter": provisions the AWS-side resources Karpenter needs for autoscaling (IAM roles, Pod Identity association, and the interruption queue).
Helm Karpenter Resource
resource "helm_release" "karpenter" {
  namespace           = "kube-system"
  name                = "karpenter"
  repository          = "oci://public.ecr.aws/karpenter"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "1.0.0"
  wait                = false

  values = [
    <<-EOT
    serviceAccount:
      name: ${module.karpenter.service_account}
    settings:
      clusterName: ${module.eks.cluster_name}
      clusterEndpoint: ${module.eks.cluster_endpoint}
      interruptionQueue: ${module.karpenter.queue_name}
    EOT
  ]
}
- resource "helm_release" "karpenter": installs and manages Karpenter via Helm, authenticating to the public ECR registry with the token fetched above.
Karpenter Node Configuration
resource "kubectl_manifest" "karpenter_node_pool" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          nodeClassRef:
            name: default
          requirements:
            - key: "karpenter.k8s.aws/instance-category"
              operator: In
              values: ["c", "m", "r"]
            - key: "karpenter.k8s.aws/instance-cpu"
              operator: In
              values: ["4", "8", "16", "32"]
            - key: "karpenter.k8s.aws/instance-hypervisor"
              operator: In
              values: ["nitro"]
            - key: "karpenter.k8s.aws/instance-generation"
              operator: Gt
              values: ["2"]
      limits:
        cpu: 1000
      disruption:
        consolidationPolicy: WhenEmpty
        consolidateAfter: 30s
  YAML

  depends_on = [
    kubectl_manifest.karpenter_node_class
  ]
}

resource "kubectl_manifest" "karpenter_node_class" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1beta1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      amiFamily: AL2023
      role: ${module.karpenter.node_iam_role_name}
      subnetSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      securityGroupSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      tags:
        karpenter.sh/discovery: ${module.eks.cluster_name}
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
- resource "kubectl_manifest" "karpenter_node_pool": configures the NodePool for Karpenter, constraining which instance categories, CPU sizes, and generations it may launch.
- resource "kubectl_manifest" "karpenter_node_class": defines the EC2NodeClass for Karpenter, which discovers subnets and security groups via the karpenter.sh/discovery tag.
Deployment Example
resource "kubectl_manifest" "karpenter_example_deployment" {
  yaml_body = <<-YAML
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inflate
    spec:
      replicas: 0
      selector:
        matchLabels:
          app: inflate
      template:
        metadata:
          labels:
            app: inflate
        spec:
          terminationGracePeriodSeconds: 0
          containers:
            - name: inflate
              image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
              resources:
                requests:
                  cpu: 1
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
- resource "kubectl_manifest" "karpenter_example_deployment": an example deployment for exercising Karpenter's scaling behavior.
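With replicas set to 0 the deployment does nothing until you scale it; bumping the replica count is the usual way to watch Karpenter launch nodes. A rough sequence, assuming kubectl is already pointed at the new cluster:

```shell
# Scale the inflate deployment up; its pods don't tolerate the
# CriticalAddonsOnly taint, so Karpenter must provision new nodes.
kubectl scale deployment inflate --replicas 5

# Follow the Karpenter controller logs while nodes are launched
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller

# Scale back down; with consolidationPolicy WhenEmpty, empty nodes
# should be removed roughly 30s after the pods are gone
kubectl scale deployment inflate --replicas 0
```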