Guide to Setting Up CI/CD for NextJS with Jenkins, Gitlab, and AWS ECS

Introduction

In the previous article, I provided instructions on setting up CI/CD with Jenkins and Gitlab to deploy a NestJS project to AWS EKS. Now, I will walk through a similar setup for a project using the NextJS framework on AWS ECS: Jenkins automatically triggers a build when code is pushed to Gitlab and runs the project's tests before deployment.


A summary of the steps involved will be as follows:

  • Build the Docker image for the NextJS project and push it to AWS ECR.
  • Deploy that Docker image to AWS ECS.
  • Set up Jenkins to connect with Gitlab.
  • When code is pushed to Gitlab, Jenkins will automatically trigger a build that runs the steps defined in the Jenkinsfile, including:
    1. Pulling the new code from Gitlab.
    2. Running tests for the project.
      • If the tests fail, stop the deployment process.
      • If the tests pass, proceed to the next step.
    3. Building the Docker image with the new code and pushing it to AWS ECR.
    4. Deploying to AWS ECS by restarting the service, which will pull the latest Docker image.
    5. Invalidating the CloudFront cache, since the app is served through a CDN.

Prerequisites

As this article focuses on the CI/CD setup, I will reuse the existing project from before and only add the new pieces. You can read it alongside the previous articles for more context.

Before proceeding, you must first deploy successfully to AWS ECS (see my previous articles for instructions on this), so that once this setup is complete, only the Docker image needs to change and everything will work.

Please check to ensure that the NextJS web app has been successfully deployed to AWS ECS.
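As a quick sanity check, you can query the service with the AWS CLI before continuing; the values in angle brackets are placeholders for your own cluster and service names:

```
# Confirm the ECS service is ACTIVE and running the desired number of tasks
aws ecs describe-services \
  --cluster <ECS_CLUSTER_NAME> \
  --services <ECS_SERVICE_NAME> \
  --query 'services[0].[status,runningCount,desiredCount]'
```

If the status is ACTIVE and runningCount equals desiredCount, the web app is deployed and stable.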

Detail

First, still in the Jenkins setup folder, create a .env file with the following content:

AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
AWS_REGION=ap-southeast-1

ECS_CLUSTER_NAME=<ECS_CLUSTER_NAME>
ECS_SERVICE_NAME=<ECS_SERVICE_NAME>
CLOUDFRONT_ID=<CLOUDFRONT_ID>

GITLAB_USERNAME=oauth2
GITLAB_API_TOKEN=<GITLAB_API_TOKEN>
GITLAB_REPO_URL=<GITLAB_REPO_URL>
WEBHOOK_SECRET_TOKEN=<WEBHOOK_SECRET_TOKEN>

  • You need to provide the AWS keys and the ECS and CloudFront information for the Jenkins setup process. These values can be found in the AWS Web Console or in the outputs of an AWS CDK deployment.
  • In addition to the Gitlab information from the previous guide, a WEBHOOK_SECRET_TOKEN is also required. This is a random string (for example, a UUID) that serves as a shared secret for communication between Gitlab and Jenkins.
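For example, one way to generate the secret (an option, not the article's exact method — uuidgen or any random-string generator works just as well):

```shell
# Generate a 64-character random hex string to use as WEBHOOK_SECRET_TOKEN
WEBHOOK_SECRET_TOKEN=$(openssl rand -hex 32)
echo "$WEBHOOK_SECRET_TOKEN"
```

Paste the printed value into the .env file and, later, into the Gitlab webhook form.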


Create a jenkins.yaml file:

jenkins:
  systemMessage: "Jenkins auto config with JCasC for CI/CD."
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "admin_password_change_me"
  globalNodeProperties:
    - envVars:
        env:
          - key: "AWS_REGION"
            value: "${AWS_REGION}"
          - key: "ECS_CLUSTER_NAME"
            value: "${ECS_CLUSTER_NAME}"
          - key: "ECS_SERVICE_NAME"
            value: "${ECS_SERVICE_NAME}"
          - key: "CLOUDFRONT_ID"
            value: "${CLOUDFRONT_ID}"

unclassified:
  location:
    url: http://localhost:8080/

credentials:
  system:
    domainCredentials:
      - credentials:
          - aws:
              scope: GLOBAL
              id: "aws-credentials-id"
              accessKey: "${AWS_ACCESS_KEY_ID}"
              secretKey: "${AWS_SECRET_ACCESS_KEY}"
          - usernamePassword:
              scope: GLOBAL
              id: "gitlab-auth-id"
              username: "${GITLAB_USERNAME}"
              password: "${GITLAB_API_TOKEN}"
              description: "Used to clone code from Gitlab"

jobs:
  - script: >
      pipelineJob('Deployment-Pipeline') {
        definition {
          cpsScm {
            scm {
              git {
                remote {
                  url("${GITLAB_REPO_URL}")
                  credentials('gitlab-auth-id')
                }
                branches('*/develop')
              }
            }
            scriptPath('Jenkinsfile')
          }
        }
        triggers {
          gitlab {
            triggerOnPush(true)
            secretToken("${WEBHOOK_SECRET_TOKEN}")
          }
        }
      }

This file uses Jenkins Configuration as Code (JCasC) to automate system configuration: initializing an admin user, setting global environment variables (AWS, ECS), storing credentials for AWS and Gitlab, and automatically creating a pipeline job to pull code from the develop branch and trigger a build upon a push event from Gitlab.


Next is the docker-compose.yml file:

services:
  jenkins:
    user: root
    build:
      context: .
      args:
        - DOCKER_GID=${DOCKER_GID}
    container_name: jenkins
    env_file: .env
    environment:
      - DOCKER_BUILDKIT=1
      - JAVA_OPTS=-Dhudson.model.DirectoryBrowserSupport.CSP=
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped

volumes:
  jenkins_home:

This file launches Jenkins as a container: it loads the .env file to pass in parameters, exposes the necessary ports, and, most importantly, mounts the docker.sock volume so the Jenkins container can control the host machine's Docker daemon to build other images (Docker-outside-of-Docker).
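The DOCKER_GID build arg should match the group that owns docker.sock on the host, otherwise the jenkins user cannot talk to the daemon. A small sketch of how you might derive it before starting the stack (the 999 fallback is an assumption, matching the Dockerfile default below):

```shell
# Find the GID of the group owning the host's Docker socket,
# falling back to 999 when the socket is absent on this machine
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock 2>/dev/null || echo 999)
echo "DOCKER_GID=$DOCKER_GID"
```

You can then start Jenkins with: DOCKER_GID=$DOCKER_GID docker compose up -d --build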


Create a Dockerfile:

FROM jenkins/jenkins:lts

USER root

ARG DOCKER_GID=999

RUN groupadd -g ${DOCKER_GID} docker_host || true \
    && usermod -aG ${DOCKER_GID} jenkins || usermod -aG docker_host jenkins

RUN apt-get update && apt-get install -y \
    apt-transport-https ca-certificates curl gnupg lsb-release unzip \
    && install -m 0755 -d /etc/apt/keyrings \
    && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
    && chmod a+r /etc/apt/keyrings/docker.asc \
    && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list \
    && apt-get update && apt-get install -y docker-ce-cli \
    && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip && ./aws/install && rm -rf awscliv2.zip aws \
    && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
    && install -m 0755 kubectl /usr/local/bin/kubectl && rm kubectl \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN jenkins-plugin-cli --plugins \
    configuration-as-code \
    docker-workflow \
    blueocean \
    gitlab-plugin \
    pipeline-aws \
    kubernetes-cli \
    job-dsl \
    ansicolor \
    htmlpublisher \
    json-path-api \
    token-macro \
    favorite \
    git-client \
    scm-api

COPY jenkins.yaml /var/jenkins_home/jenkins.yaml
ENV CASC_JENKINS_CONFIG=/var/jenkins_home/jenkins.yaml

USER jenkins

  • This Dockerfile customizes the official Jenkins image: it installs the Docker CLI to build images, installs the AWS CLI and kubectl to interact with cloud services, automatically installs the plugins needed for CI/CD, and loads the jenkins.yaml configuration file created above.
  • Here, I have added the htmlpublisher plugin, which is used to display the test coverage report.


Next, in the NextJS project, add a Jenkinsfile as follows:

pipeline {
    agent any
    environment {
        AWS_CRED_ID = 'aws-credentials-id'
        IMAGE_TAG = "${env.BUILD_NUMBER}"
        IMAGE_NAME = "${ECR_REPO_NAME}:${IMAGE_TAG}"
        ECR_REPO_IMAGE = "${ECR_REGISTRY}/${ECR_REPO_NAME}"
        DOCKER_BUILDKIT = '1'
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Test & Extract Report') {
            steps {
                script {
                    sh "docker build --target test -t ${ECR_REPO_NAME}:test ."
                    sh "docker create --name extract-container ${ECR_REPO_NAME}:test"
                    sh "docker cp extract-container:/app/coverage ./coverage"
                    sh "docker rm extract-container"
                    publishHTML([
                        allowMissing: false,
                        alwaysLinkToLastBuild: true,
                        keepAll: true,
                        reportDir: 'coverage/lcov-report',
                        reportFiles: 'index.html',
                        reportName: 'Code Coverage Report'
                    ])
                }
            }
        }

        stage('Build & Push Image') {
            steps {
                script {
                    sh "aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ECR_REPO_IMAGE}"

                    def appImage = docker.build("${ECR_REPO_IMAGE}")
                    appImage.push("${IMAGE_TAG}")
                    appImage.push('latest')
                }
            }
        }

        stage('Deploy') {
            steps {
                withAWS(credentials: "${AWS_CRED_ID}", region: "${AWS_REGION}") {
                    script {
                        echo "Updating ECS Service: ${ECS_SERVICE_NAME} in Cluster: ${ECS_CLUSTER_NAME}"
                        sh """
                            aws ecs update-service \
                                --cluster ${ECS_CLUSTER_NAME} \
                                --service ${ECS_SERVICE_NAME} \
                                --force-new-deployment \
                                --region ${AWS_REGION}
                        """

                        echo "Waiting for ECS service deployment to stabilize..."
                        sh """
                            aws ecs wait services-stable \
                                --cluster ${ECS_CLUSTER_NAME} \
                                --services ${ECS_SERVICE_NAME} \
                                --region ${AWS_REGION}
                        """
                    }
                }
            }
        }

        stage('Invalidate Cache') {
            steps {
                withAWS(credentials: "${AWS_CRED_ID}", region: "${AWS_REGION}") {
                    sh "aws cloudfront create-invalidation --distribution-id ${CLOUDFRONT_ID} --paths '/*'"
                }
            }
        }
    }

    post {
        always {
            sh "docker image prune -f"
            deleteDir()
        }
    }
}

This Jenkinsfile defines a pipeline of 5 stages: pulling code from Gitlab, running tests and exporting the coverage report, building the Docker image and pushing it to AWS ECR, updating the service on AWS ECS to deploy the latest version, and finally invalidating the CloudFront cache so users immediately see the changes. After each run, it automatically cleans up temporary resources to free disk space.
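Note that the "Test & Extract Report" stage assumes the NextJS project's own Dockerfile has a test build target that writes Jest coverage to /app/coverage. A minimal sketch of what such a stage could look like (stage names and scripts here are assumptions, not the article's actual Dockerfile):

```
# --- sketch only: multi-stage Dockerfile in the NextJS project ---
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# "test" target used by `docker build --target test`;
# assumes Jest is configured with the lcov coverage reporter,
# so coverage/lcov-report/index.html exists for publishHTML
FROM base AS test
RUN npm run test -- --coverage

# default target: production build
FROM base AS build
RUN npm run build
```

Also make sure ECR_REPO_NAME and ECR_REGISTRY are available to the pipeline (for example, as global environment variables), since the environment block references them.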


After starting Jenkins, you can see that the job has been created (as configured in jenkins.yaml). Then, go to Gitlab to add a Webhook with the following information:

  • URL: the URL of the Jenkins job. Since I am using Gitlab.com directly, the webhook URL must be publicly reachable; you can deploy Jenkins on a free host to test and enter the job URL here. The job URL looks like this:

https://jenkin-url/project/{pipeline-name}
https://jenkin-url/project/Deployment-Pipeline

  • Secret token: the value of WEBHOOK_SECRET_TOKEN that you set in the .env file above.
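Once the webhook is saved, you can use Gitlab's built-in "Test" button, or simulate a push event by hand; a hypothetical curl call (the URL is a placeholder) looks like:

```
# Simulate a Gitlab push event against the Jenkins webhook endpoint
curl -X POST "https://<jenkins-url>/project/Deployment-Pipeline" \
  -H "X-Gitlab-Token: <WEBHOOK_SECRET_TOKEN>" \
  -H "X-Gitlab-Event: Push Hook" \
  -H "Content-Type: application/json" \
  -d '{"object_kind": "push", "ref": "refs/heads/develop"}'
```

A 200 response means Jenkins accepted the event and queued the Deployment-Pipeline job.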


The result: when you push code to Gitlab, Jenkins automatically triggers and runs the pipeline steps.



You can view the Code Coverage Report on Jenkins; this is the result of the test stage.



After deploying, the web application will update accordingly.

Happy coding!
