Deploying a NodeJS Server on Google Kubernetes Engine

Introduction to GKE

Google Kubernetes Engine (GKE) is a managed Kubernetes service on Google Cloud Platform that makes deploying Docker images simple and efficient. We only need to provide a little configuration: the number of nodes, the machine types, and the number of replicas to run.

Some Concepts

Cluster

A Cluster is a collection of Nodes where Kubernetes can deploy applications. A cluster includes at least one Master Node and multiple Worker Nodes. The Master Node is used to manage the Worker Nodes.


Node

A Node is a server in the Kubernetes Cluster. Nodes can be physical servers or virtual machines. Each Node runs the Kubernetes node agent (kubelet), which is responsible for communication between the Master Node and the Worker Node, as well as managing the Pods and containers running on it.


Pod

A Pod is the smallest deployable unit in Kubernetes. Each Pod contains one or more containers, typically Docker containers. Containers in the same Pod share a network namespace, meaning they share the same IP address and port space.


So, you can think of a Cluster as a group of machines joined together under the Kubernetes platform. Within this cluster, users can deploy various applications (Pods) and set up auto-scaling based on traffic. Users don't need to worry about which Nodes the applications (Pods) run on; Kubernetes manages that.
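As a small illustration, once a cluster exists and kubectl is configured (we set this up later in the post), you can see this scheduling for yourself with standard kubectl commands:

# List the Nodes (machines) that make up the cluster
kubectl get nodes

# List Pods together with the Node each one was scheduled onto
# (-o wide adds NODE and IP columns to the output)
kubectl get pods -o wide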


Kubernetes

Creating a NodeJS Server and Deploying it on GKE

The summarized process includes the following steps:

1. Create a NodeJS server

2. Build a Docker image

3. Push the Docker image to GCP Artifact Registry

4. Deploy the image from Artifact Registry to GKE


1. Create a NodeJS server

Here, I'll use the Express framework to create a simple NodeJS server as follows:

import express from 'express'

const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send('This is NodeJS Typescript Application! Current time is ' + Date.now())
})

app.listen(port, () => {
  console.log(`Server is listening on port ${port}`)
})

You can deploy using either JavaScript or TypeScript. If you need to know how to set up a project using TypeScript, you can refer to this article.


2. Build a Docker image

First, let's create a Dockerfile with the following content:

FROM node:20-alpine

# Create app directory
WORKDIR /app

COPY package*.json /app

# Install dependencies
RUN npm install

COPY . /app

# expose port 3000
EXPOSE 3000

CMD npm start

The content of this file is pretty straightforward: it installs the dependencies and starts the server with npm start, so make sure package.json defines a matching start script. If you have any questions, feel free to ask in the comments below.


Next, execute the following command to build the image:

docker build . -t {host}/{project id}/{image name}:{version}
# ex:
docker build . -t gcr.io/project-id/express-ts:latest

Here, gcr.io is the host of the Google Cloud Artifact Registry, and project-id is the ID of the project you're currently working on. You can retrieve the project ID with the following command:

gcloud config get-value project


Then, run the Docker image to check its status using the following command:

docker run -dp 3000:3000 gcr.io/project-id/express-ts:latest
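
If the container started correctly, the server should respond on port 3000 of your local machine, for example:

# Quick local smoke test against the running container
curl http://localhost:3000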


3. Push the Docker image to GCP Artifact Registry

To push the Docker image, first make sure Docker is authenticated with Google Cloud (see the note below), then execute the following command:

docker push gcr.io/project-id/express-ts:latest
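
If the push is rejected with an authentication error, Docker is probably not yet authorized against the registry; the usual fix is to run these standard gcloud commands:

# Log in to Google Cloud (opens a browser window)
gcloud auth login

# Register gcloud as a Docker credential helper for gcr.io
gcloud auth configure-docker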


4. Deploy the image from Artifact Registry to GKE

First, create a cluster using the following command:

# gcloud container clusters create {cluster name} \
# --project {project id} \
# --zone {zone id} \
# --machine-type {machine type}

# ex:
gcloud container clusters create k8s-cluster \
--project project-id \
--zone asia-southeast1-a \
--machine-type e2-micro


Please wait until the cluster is created with a status of RUNNING. After that, you can check the current clusters as follows:

Clusters
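
Before kubectl can talk to the new cluster, it needs credentials for it. If they weren't fetched automatically during creation, the standard command is (using the example values above):

# Point kubectl at the newly created GKE cluster
gcloud container clusters get-credentials k8s-cluster \
  --zone asia-southeast1-a \
  --project project-id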


Next, create a file `deployment.yaml` with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    name: label-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label-name
  template:
    metadata:
      labels:
        app: label-name
    spec:
      containers:
        - name: express-ts
          image: gcr.io/project-id/express-ts:latest
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: service-name
  labels:
    service: label-name
spec:
  selector:
    app: label-name
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80 # Service port
      targetPort: 3000 # Pod (container) port

In the deployment.yaml file above, I've created a Deployment that starts a Pod running a container from our Docker image. At the same time, I've created a LoadBalancer Service that maps port 80 to the container's port 3000 so the application can be accessed from outside.

Replace project-id in the image path with your own project ID (or point it at whichever image you want to deploy), and you can change the values of label-name, deployment-name, and service-name to suit your needs.


Next, execute the following command to apply the configuration information to Kubernetes:

kubectl apply -f deployment.yaml

Please note that the kubectl apply command works like an upsert: whenever deployment.yaml changes, you only need to run the apply command again to create or update the configuration in Kubernetes.
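
For example, after editing deployment.yaml (say, increasing replicas), you can re-apply the file and watch the change roll out (deployment-name is the placeholder name used in the manifest above):

# Re-apply the updated manifest (create-or-update semantics)
kubectl apply -f deployment.yaml

# Watch the Deployment roll out the change
kubectl rollout status deployment/deployment-name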


You can check the status of the deployment and service with kubectl, as shown below.
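
The screenshots that follow were most likely produced with the standard commands:

# Check the Pods created by the Deployment
kubectl get pods

# Check the Service, including its EXTERNAL-IP once the LoadBalancer is ready
kubectl get services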

Pod info


Service info


Now, get the value of the EXTERNAL-IP field and access it to check the status of our deployed NodeJS server.
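
For example, with curl (replace EXTERNAL-IP with the address shown for the service):

# The Service listens on port 80 and forwards traffic to the container's port 3000
curl http://EXTERNAL-IP/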

Request by curl

Access by browser

If you enjoyed this, don't forget to hit the like button and share it with your friends!
