Introduction
Jenkins is a leading open-source automation server that enables Continuous Integration (CI) and Continuous Delivery (CD). With its vast plugin ecosystem, Jenkins can automate every stage from build and test to deployment, reducing manual errors and speeding up software delivery.
GitLab is not only a Git-based source code repository but also a comprehensive DevOps platform. Its key advantages are tight repository management, built-in webhooks for triggering external pipelines, and powerful project management and code review features that help teams collaborate effectively.
In this article, I will guide you through setting up Jenkins to automatically pull source code from GitLab and deploy to AWS EKS. In summary, the steps are:
- Build a Docker image from the NestJS source code and push it to AWS ECR
- Create an EKS Cluster and deploy that Docker image
- Set up the configuration for Jenkins
- Add a Jenkinsfile to the NestJS project and push it to GitLab
- When you click Build on Jenkins, it runs through the following steps:
  - Pull the latest code from GitLab
  - Build a Docker image from that source code and push it to AWS ECR
  - Update the EKS Cluster with the new Docker image
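Under the hood, those last three build steps boil down to a handful of CLI commands. The sketch below shows the manual equivalents; the registry account, repository, and deployment names are placeholders, and the cloud commands are guarded behind RUN_DEPLOY=1 so the snippet is safe to paste and read first:

```shell
#!/bin/sh
# Placeholder values -- substitute your own.
AWS_REGION="ap-southeast-1"
ECR_REGISTRY="123456789012.dkr.ecr.ap-southeast-1.amazonaws.com"
ECR_REPO_NAME="nestjs-app"
IMAGE_TAG="1"   # Jenkins will use the build number here
IMAGE="${ECR_REGISTRY}/${ECR_REPO_NAME}:${IMAGE_TAG}"

if [ "${RUN_DEPLOY:-0}" = "1" ]; then
  # 1. Authenticate the local Docker client against ECR
  aws ecr get-login-password --region "$AWS_REGION" \
    | docker login --username AWS --password-stdin "$ECR_REGISTRY"

  # 2. Build the image from the NestJS source and push it
  docker build -t "$IMAGE" .
  docker push "$IMAGE"

  # 3. Point the Deployment at the new image and wait for the rollout
  kubectl set image deployment/nestjs-app nestjs-container="$IMAGE"
  kubectl rollout status deployment/nestjs-app
fi
echo "$IMAGE"
```

The Jenkinsfile later in this article runs exactly this sequence, with the values injected from the Jenkins environment instead of hard-coded.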
Prerequisites
Since this article mainly focuses on the Jenkins and Gitlab setup process, I will reuse the NestJS source code from previous articles; you can review them or use your project accordingly.
Detail
First, create a folder to set up the Jenkins deployment, including a .env file with the following content; please replace the necessary values accordingly:
AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
AWS_REGION=ap-southeast-1
EKS_CLUSTER_NAME=<EKS_CLUSTER_NAME>
ECR_REGISTRY=<ECR_REGISTRY>
ECR_REPO_NAME=<ECR_REPO_NAME>
GITLAB_USERNAME=oauth2
GITLAB_API_TOKEN=<GITLAB_API_TOKEN>
GITLAB_REPO_URL=<GITLAB_REPO_URL>
- Because this article deploys to AWS EKS, you must fill in the AWS information first. You can review my earlier AWS articles to see how to create access keys, create an EKS Cluster, and push a Docker image to ECR; those walkthroughs produce the cluster, registry, and repository names you need here.
- For the GitLab information, GITLAB_USERNAME is the default value for token-based authentication, so you don't need to change it; replace the remaining values with those from your GitLab account.
- Note that when creating the GITLAB_API_TOKEN, make sure you grant it at least the following scopes:
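Whichever scopes you select (for cloning over HTTPS, read_repository is typically the minimum; add api if you also want Jenkins to report build status back to GitLab), it is worth verifying the token before wiring it into Jenkins. The token and repository path below are placeholders, and the network call is guarded behind RUN_CHECK=1:

```shell
#!/bin/sh
# Placeholders -- substitute your own token and repository path.
GITLAB_API_TOKEN="glpat-xxxxxxxxxxxxxxxxxxxx"
GITLAB_REPO_PATH="gitlab.com/your-group/your-repo.git"
CLONE_URL="https://oauth2:${GITLAB_API_TOKEN}@${GITLAB_REPO_PATH}"

if [ "${RUN_CHECK:-0}" = "1" ]; then
  # Lists the remote HEAD without cloning; fails if the token lacks read access.
  git ls-remote "$CLONE_URL" HEAD && echo "Token can read the repository"
fi
```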
Before proceeding, check that the K8s resources have been deployed successfully and are running; the next steps depend on it:
kubectl get all
NAME                              READY   STATUS        RESTARTS   AGE
pod/nestjs-app-55cc79b94b-qdkrl   1/1     Terminating   0          59m
pod/nestjs-app-86b6474f5c-v49c5   1/1     Running       0          9s

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE
service/kubernetes       ClusterIP      172.20.0.1     <none>                                                                         443/TCP        27h
service/nestjs-service   LoadBalancer   172.20.67.58   a75749a3a839a4d7b8725e456078d026-1015643982.ap-southeast-1.elb.amazonaws.com   80:32423/TCP   23h

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nestjs-app   1/1     1            1           23h

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nestjs-app-55cc79b94b   0         0         0       20m
replicaset.apps/nestjs-app-86b6474f5c   1         1         1       10m
Next, create the jenkins.yaml file:
jenkins:
  systemMessage: "Jenkins auto config with JCasC for CI/CD."
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "admin_password_change_me"
  globalNodeProperties:
    - envVars:
        env:
          - key: "AWS_REGION"
            value: "${AWS_REGION}"
          - key: "EKS_CLUSTER_NAME"
            value: "${EKS_CLUSTER_NAME}"
          - key: "ECR_REGISTRY"
            value: "${ECR_REGISTRY}"
          - key: "ECR_REPO_NAME"
            value: "${ECR_REPO_NAME}"
          - key: "DEPLOYMENT"
            value: "deployment/nestjs-app"
          - key: "CONTAINER_NAME"
            value: "nestjs-container"
unclassified:
  location:
    url: http://localhost:8080/
credentials:
  system:
    domainCredentials:
      - credentials:
          - aws:
              scope: GLOBAL
              id: "aws-credentials-id"
              accessKey: "${AWS_ACCESS_KEY_ID}"
              secretKey: "${AWS_SECRET_ACCESS_KEY}"
          - usernamePassword:
              scope: GLOBAL
              id: "gitlab-auth-id"
              username: "${GITLAB_USERNAME}"
              password: "${GITLAB_API_TOKEN}"
              description: "Used to clone code from GitLab"
jobs:
  - script: >
      pipelineJob('EKS-Deployment-Pipeline') {
        definition {
          cpsScm {
            scm {
              git {
                remote {
                  url("${GITLAB_REPO_URL}")
                  credentials('gitlab-auth-id')
                }
                branches('*/develop')
              }
            }
            scriptPath('Jenkinsfile')
          }
        }
      }
This configuration uses Jenkins Configuration as Code (JCasC) to automate the Jenkins setup. It defines environment variables for AWS/EKS, initializes the credentials used to connect to AWS and GitLab, and automatically creates a pipeline job that pulls code from GitLab and performs the deployment.
Next, create the Dockerfile:
FROM jenkins/jenkins:lts
USER root

# Create a group matching the host's Docker socket GID so the jenkins user can use the socket
ARG DOCKER_GID=999
RUN groupadd -g ${DOCKER_GID} docker_host || true \
    && usermod -aG ${DOCKER_GID} jenkins || usermod -aG docker_host jenkins

# Install the Docker CLI, AWS CLI v2, and kubectl
RUN apt-get update && apt-get install -y \
        apt-transport-https ca-certificates curl gnupg lsb-release unzip \
    && install -m 0755 -d /etc/apt/keyrings \
    && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
    && chmod a+r /etc/apt/keyrings/docker.asc \
    && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list \
    && apt-get update && apt-get install -y docker-ce-cli \
    && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip && ./aws/install && rm -rf awscliv2.zip aws \
    && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
    && install -m 0755 kubectl /usr/local/bin/kubectl && rm kubectl \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN jenkins-plugin-cli --plugins \
configuration-as-code \
docker-workflow \
blueocean \
gitlab-plugin \
pipeline-aws \
kubernetes-cli \
job-dsl \
ansicolor \
json-path-api \
token-macro \
favorite \
git-client \
scm-api
COPY jenkins.yaml /var/jenkins_home/jenkins.yaml
ENV CASC_JENKINS_CONFIG=/var/jenkins_home/jenkins.yaml
USER jenkins
This Dockerfile builds a custom Jenkins image. It installs the tools Jenkins needs to work with Docker (build images), the AWS CLI (manage cloud resources), and kubectl (control EKS). It also installs the required plugins and loads the jenkins.yaml configuration file into the system.
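Once the image is built and the container is running (it is started under the name jenkins later in this article), you can sanity-check that all three CLIs actually made it into the image. Guarded behind RUN_CHECK=1 since it requires the container to exist:

```shell
#!/bin/sh
# Assumes the Jenkins container is running under the name "jenkins".
if [ "${RUN_CHECK:-0}" = "1" ]; then
  docker exec jenkins aws --version
  docker exec jenkins kubectl version --client
  docker exec jenkins docker version
fi
```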
Creating the docker-compose.yml file:
services:
  jenkins:
    user: root
    build:
      context: .
      args:
        - DOCKER_GID=${DOCKER_GID}
    container_name: jenkins
    env_file: .env
    environment:
      - DOCKER_BUILDKIT=1
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
volumes:
  jenkins_home:
This file defines how to launch the Jenkins container. It connects the container to the host machine's Docker socket (so Jenkins can run docker commands), maps access ports, loads environment variables from the .env file, and ensures Jenkins data is stored persistently via a volume.
Please note that "context: ." tells Compose to build from the Dockerfile above, so place the Dockerfile in the same folder as this docker-compose.yml file.
Creating the setup.sh file:
#!/bin/bash
set -e

echo "--- 1: Get GID of the Docker socket ---"
export DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
echo "Found GID: $DOCKER_GID"

echo "--- 2: Clean up old build (if it exists) ---"
docker-compose down

echo "--- 3: Build and run Jenkins ---"
docker-compose up -d --build

echo "--- Complete! Waiting for Jenkins to start... ---"
echo "Access: http://localhost:8080"
This is a script that automates the entire startup process: it automatically gets the Docker Group ID to avoid permission errors, cleans up old containers, and then builds and runs Jenkins.
Next, use this command to run Jenkins:
chmod +x setup.sh && ./setup.sh
Now you can access Jenkins with the account information from the jenkins.yaml file created earlier.
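If the login page shows the first-run setup wizard instead of the admin account from jenkins.yaml, the JCasC file probably wasn't loaded. One way to check is to look for Configuration-as-Code messages in the container log (guarded behind RUN_CHECK=1; the grep pattern is a heuristic, since JCasC log lines come from io.jenkins.plugins.casc.* classes):

```shell
#!/bin/sh
if [ "${RUN_CHECK:-0}" = "1" ]; then
  # Surfaces the JCasC startup messages, including any parse errors in jenkins.yaml
  docker logs jenkins 2>&1 | grep -i "casc"
fi
```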
It's not over yet; you need to create a Jenkinsfile in the source code project (here, the NestJS project) with the following content:
pipeline {
    agent any

    environment {
        AWS_CRED_ID     = 'aws-credentials-id'
        IMAGE_TAG       = "${env.BUILD_NUMBER}"
        IMAGE_NAME      = "${ECR_REPO_NAME}:${IMAGE_TAG}"
        ECR_REPO_IMAGE  = "${ECR_REGISTRY}/${ECR_REPO_NAME}"
        IMAGE_FULL_PATH = "${ECR_REGISTRY}/${IMAGE_NAME}"
        DOCKER_BUILDKIT = '1'
    }

    stages {
        stage('Hello') {
            steps {
                echo 'Use Jenkinsfile from GitLab to check tool versions'
                sh 'aws --version'
                sh 'kubectl version --client'
                sh 'docker version'
            }
        }
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build & Push Image') {
            steps {
                script {
                    // Log in to ECR, then push both the build-number tag and latest
                    sh "aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ECR_REPO_IMAGE}"
                    def appImage = docker.build("${ECR_REPO_IMAGE}")
                    appImage.push("${IMAGE_TAG}")
                    appImage.push('latest')
                }
            }
        }
        stage('Deploy to EKS') {
            steps {
                withAWS(credentials: "${AWS_CRED_ID}", region: "${AWS_REGION}") {
                    script {
                        sh "aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}"
                        sh "kubectl set image ${DEPLOYMENT} ${CONTAINER_NAME}=${IMAGE_FULL_PATH}"
                        sh "kubectl rollout status ${DEPLOYMENT}"
                    }
                }
            }
        }
    }

    post {
        always {
            sh "docker image prune -f"
        }
    }
}
This Jenkinsfile defines the steps in the CI/CD pipeline: starting by checking tool versions, pulling the latest code, building the Docker image for the NestJS application, then pushing this image to AWS ECR, and finally updating the new image for the EKS Cluster to complete the deployment process.
Then push the NestJS project to GitLab and return to Jenkins. Because the job was already created through the jenkins.yaml configuration, you can press Build immediately without creating a job manually.
The Jenkins build will execute each step we defined in the Jenkinsfile.
You can also review the detailed build history:
The Docker image of the new source code has been successfully built and pushed to AWS ECR.
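To double-check that the cluster is really running the image the pipeline just pushed, you can read the image back off the Deployment (guarded behind RUN_CHECK=1; the deployment and container names match the ones set in jenkins.yaml):

```shell
#!/bin/sh
if [ "${RUN_CHECK:-0}" = "1" ]; then
  # Prints the image currently set on nestjs-container
  kubectl get deployment/nestjs-app \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="nestjs-container")].image}'
  echo
  # Shows the rollout revisions created by each build
  kubectl rollout history deployment/nestjs-app
fi
```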
Check the results using Postman.
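If you prefer the terminal to Postman, a plain curl against the LoadBalancer hostname from the earlier kubectl get all output works just as well (the hostname below is from my cluster, so replace it with your own; guarded behind RUN_CHECK=1):

```shell
#!/bin/sh
# Hostname taken from the EXTERNAL-IP column of "kubectl get all" -- use yours.
ELB_HOST="a75749a3a839a4d7b8725e456078d026-1015643982.ap-southeast-1.elb.amazonaws.com"

if [ "${RUN_CHECK:-0}" = "1" ]; then
  curl -i "http://${ELB_HOST}/"
fi
```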
Happy coding!
See more articles here.