Let’s Deploy Karpenter to Your EKS Cluster and Configure Scaling the Right Way in Just a Minute! Cloud Scaling Series: EP02
This article is a follow-up to my previous piece on deploying the Cluster Autoscaler. Here, we’ll explore a more advanced scaling strategy using Karpenter.
What is Karpenter?
Karpenter is an open-source project developed by AWS that dynamically manages Kubernetes workloads. It provisions the optimal instance type and size based on demand, ensuring efficient and cost-effective scaling.
Before diving into the steps, let’s first understand how Karpenter operates with the following architecture diagram.
Let’s start by comparing this advanced scaling strategy with the Cluster Autoscaler deployment I explained in my previous article.
As you may recall, the Cluster Autoscaler operates within fixed constraints, adhering to the minimum and maximum node counts specified in each node group. In contrast, Karpenter introduces greater flexibility by allowing us to define various instance types, such as t3.large, t3.medium, and t3.small. Karpenter dynamically selects the best instance type based on demand (replica count) and provisions it accordingly.
How Karpenter Works in Action
Referencing the architecture diagram above, you’ll notice several desired node groups (depicted in blue). These groups are running the minimum number of replicas for each deployment. Now, imagine there’s a sudden surge in request traffic to these deployments. Here’s where Karpenter steps in. It evaluates the demand, identifies the best instance types from the provided list (e.g., t3.micro, t3.small, t3.medium), and provisions them dynamically.
For example:
- Instance selection: Suppose we’ve defined t3.micro, t3.small, and t3.medium in the configuration. Based on the demand, Karpenter chooses t3.small and t3.medium instances for provisioning.
- Reasoning: A t3.small instance has a maximum pod capacity of 11, while a t3.medium can handle up to 17 pods. As shown in the diagram:
  - The t3.small instance is running 9 pods.
  - The t3.medium instance is running 12 pods.
This demonstrates how Karpenter dynamically selects the optimal instance types based on demand, efficiently distributing workloads while staying within the maximum pod limits.
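Those pod capacities aren’t arbitrary: on EKS they come from each instance type’s networking limits, using the formula max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. As a quick sketch (the ENI figures shown for the t3 family are taken from the EC2 instance-type specifications):

```shell
# EKS max-pods formula: ENIs x (IPv4 addresses per ENI - 1) + 2
max_pods() { echo $(( $1 * ($2 - 1) + 2 )); }

max_pods 3 4   # t3.small:  3 ENIs x (4 - 1) + 2 = 11
max_pods 3 6   # t3.medium: 3 ENIs x (6 - 1) + 2 = 17
```

This is why the diagram’s t3.small tops out at 11 pods and the t3.medium at 17.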
On-Demand Scaling with Karpenter
Karpenter’s dynamic scaling capabilities offer unparalleled flexibility, ensuring that workloads are seamlessly transferred to the most suitable instance types as demand fluctuates. I hope this provides you with a clear understanding of how Karpenter manages scaling on-demand. In the next part of this series, I’ll explore more advanced strategies you can implement using Karpenter.
Now, let’s move on to configuring Karpenter in your EKS cluster.
Prerequisites: Karpenter Deployment
Install these tools before proceeding:
- AWS CLI
- kubectl - the Kubernetes CLI
- eksctl (>= v0.191.0) - the CLI for AWS EKS
- helm - the package manager for Kubernetes
Steps to Deploy Karpenter
1. Setup IAM Role for Service Account
Karpenter requires an IAM role for its service account to interact with AWS resources.
1. Create an IAM policy for Karpenter:
aws iam create-policy --policy-name KarpenterControllerPolicy --policy-document file://karpenter-policy.json
Use the Karpenter controller policy document and save it as karpenter-policy.json before running this command.
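For orientation, here is an abbreviated sketch of the kind of permissions karpenter-policy.json grants. The full controller policy in the Karpenter documentation is broader and more tightly scoped; treat this as illustrative, not as the complete document:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateFleet",
        "ec2:CreateLaunchTemplate",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "ec2:CreateTags",
        "ec2:Describe*",
        "ssm:GetParameter",
        "pricing:GetProducts",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
```

These actions let the controller discover instance-type and pricing data, launch and tag nodes, and pass the node role to EC2.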
2. Associate IAM Role with Service Account:
- Associate the IAM OIDC provider with your cluster; this is what allows the eks.amazonaws.com/role-arn annotation on a service account to map to an IAM role.
eksctl utils associate-iam-oidc-provider --region <REGION> --cluster <CLUSTER_NAME> --approve
- Create an IAM role for Karpenter:
eksctl create iamserviceaccount \
--cluster <CLUSTER_NAME> \
--namespace karpenter \
--name karpenter \
--attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/KarpenterControllerPolicy \
--approve
2. Install Karpenter Using Helm
1. Add the Karpenter Helm repository:
helm repo add karpenter https://charts.karpenter.sh
helm repo update
Note that charts.karpenter.sh hosts the older (v1alpha5-era) chart releases; newer Karpenter versions are published as OCI charts at oci://public.ecr.aws/karpenter/karpenter.
2. Create a namespace for Karpenter:
kubectl create namespace karpenter
3. Install Karpenter using Helm: Replace placeholders in the command below with appropriate values:
helm install karpenter karpenter/karpenter \
--namespace karpenter \
--set serviceAccount.create=false \
--set serviceAccount.name=karpenter \
--set clusterName=<CLUSTER_NAME> \
--set clusterEndpoint=<CLUSTER_ENDPOINT> \
--set aws.defaultInstanceProfile=KarpenterNodeInstanceProfile
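If you prefer not to repeat --set flags, the same values can be written to a file and passed to Helm with -f. This is just a sketch: <CLUSTER_NAME> and <CLUSTER_ENDPOINT> are the same placeholders as above (the endpoint can be looked up with aws eks describe-cluster --query 'cluster.endpoint'):

```shell
# Write the Helm values used above to a file; fill in the placeholders first.
cat > karpenter-values.yaml <<'EOF'
serviceAccount:
  create: false
  name: karpenter
clusterName: <CLUSTER_NAME>
clusterEndpoint: <CLUSTER_ENDPOINT>
aws:
  defaultInstanceProfile: KarpenterNodeInstanceProfile
EOF

# Then install with:
# helm install karpenter karpenter/karpenter --namespace karpenter -f karpenter-values.yaml
```

Keeping the values in a file makes upgrades repeatable, since the same file can be reused with helm upgrade.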
3. Configure Karpenter Defaults
Karpenter requires a provisioner to determine how to provision nodes.
1. Create an AWS EC2 instance profile for Karpenter nodes:
aws iam create-instance-profile --instance-profile-name KarpenterNodeInstanceProfile
aws iam add-role-to-instance-profile \
--instance-profile-name KarpenterNodeInstanceProfile \
--role-name <NODE_ROLE>
2. Deploy a Karpenter Provisioner: Create a provisioner.yaml file. In the v1alpha5 API, the allowed instance types are expressed through spec.requirements rather than a provider-level instanceTypes field; the selectors below assume your subnets and security groups are tagged with karpenter.sh/discovery: <CLUSTER_NAME>.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["m5.large", "m5.xlarge"]
  limits:
    resources:
      cpu: 1000
      memory: 1000Gi
  provider:
    subnetSelector:
      karpenter.sh/discovery: <CLUSTER_NAME>
    securityGroupSelector:
      karpenter.sh/discovery: <CLUSTER_NAME>
  ttlSecondsAfterEmpty: 30
Apply the configuration:
kubectl apply -f provisioner.yaml
4. Test Karpenter
1. Deploy a sample workload:
kubectl run busybox --image=busybox --command -- sleep 3600
2. Check if a new node is provisioned:
kubectl get nodes
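Note that a bare busybox pod carries no resource requests, so it may simply land on an existing node without triggering any scaling. To reliably force a scale-up, deploy a workload whose requests exceed spare capacity. A common pattern (the name and replica count here are illustrative) is a pause-image deployment; apply it, then watch kubectl get nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate        # illustrative name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1"   # 1 vCPU per replica makes the pods unschedulable on full nodes
```

Scaling the replicas back down should also let you observe ttlSecondsAfterEmpty in action as empty nodes are removed.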
Verification
1. Ensure Karpenter pods are running:
kubectl get pods -n karpenter
2. Check the logs for the Karpenter controller:
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter
That wraps up the basic configuration and operation of Karpenter for managing your Kubernetes workloads. Unlike the Cluster Autoscaler, Karpenter offers a more dynamic and flexible scaling strategy tailored to the demands of your EKS environment.
In the next episodes, I’ll dive deeper into advanced strategies and explore how Karpenter’s features can further enhance your scaling capabilities.
See you in the next episode! If you have any questions, feel free to drop them in the comment section below!