Deploying a NodeJS Server on Google Kubernetes Engine
Introduction to GKE
Google Kubernetes Engine (GKE) is a managed Kubernetes service on Google Cloud Platform that makes deploying Docker images simple and efficient. We only need to supply some configuration, such as the number of nodes, the machine types, and the number of replicas to run.
Some Concepts
Cluster
A Cluster is a collection of Nodes on which Kubernetes can deploy applications. A cluster includes at least one Master Node and one or more Worker Nodes; the Master Node manages the Worker Nodes.
Node
A Node is a server in the Kubernetes Cluster. Nodes can be physical servers or virtual machines. Each Node runs the Kubernetes node agent (kubelet), which handles communication between the Master Node and the Worker Node and manages the Pods and containers running on it.
Pod
A Pod is the smallest deployable unit in Kubernetes. Each Pod contains one or more containers, typically Docker containers. Containers in the same Pod share a network namespace, meaning they share the same IP address and port space.
In short, each Cluster is a group of machines (often virtual machines) joined together under Kubernetes. Within this cluster, users can deploy various applications (Pods) and set up auto-scaling based on traffic. Users don't need to worry about which Nodes the applications (Pods) run on; Kubernetes manages that.
[Image: Kubernetes]
Creating a NodeJS Server and Deploying it on GKE
The summarized process includes the following steps:
1. Create a NodeJS server
2. Build a Docker image
3. Push the Docker image to GCP Artifact Registry
4. Deploy the image from Artifact Registry to GKE
1. Create a NodeJS server
Here, I'll use the Express framework to create a simple NodeJS server as follows:
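Since the exact snippet isn't reproduced here, the following is a minimal sketch of such a server; the file name `index.js`, the port `8080`, and the response text are placeholder choices, not requirements:

```javascript
// index.js — a minimal Express server (sketch)
const express = require('express');

const app = express();
const PORT = process.env.PORT || 8080;

// A simple route to verify the server is up
app.get('/', (req, res) => {
  res.send('Hello from NodeJS on GKE!');
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
```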
You can deploy using either JavaScript or TypeScript. If you need to know how to set up a project using TypeScript, you can refer to this article.
2. Build a Docker image
First, let's create a Dockerfile with the following content:
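A typical Dockerfile for the server above might look like this sketch (the Node base image version and the exposed port are assumptions):

```dockerfile
# Use an official Node.js base image (version is an assumption)
FROM node:18-alpine

# Create the app directory inside the image
WORKDIR /usr/src/app

# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source code
COPY . .

# The port the Express server listens on
EXPOSE 8080

# Start the server
CMD ["node", "index.js"]
```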
The content of this file is pretty straightforward. If you have any questions, feel free to ask in the comments below.
Next, execute the following command to build the image:
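For example (the image name `node-server` and tag `v1` are placeholders):

```sh
docker build -t gcr.io/project-id/node-server:v1 .
```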
Here, gcr.io is the host of Google Cloud Artifact Registry, and project-id is the ID of the project you're currently working on. You can use the following command to retrieve the project ID:
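Assuming the gcloud CLI is installed and initialized, either of these works:

```sh
# Print the currently configured project ID
gcloud config get-value project

# Or list all projects you have access to
gcloud projects list
```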
Then, run the Docker image to check its status using the following command:
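For example, mapping the container's port to localhost (the mapping assumes the server listens on 8080):

```sh
docker run --rm -p 8080:8080 gcr.io/project-id/node-server:v1
# Then open http://localhost:8080 to verify the response
```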
3. Push the Docker image to GCP Artifact Registry
To push the Docker image, first log in to Google Cloud, then execute the following command:
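A typical sequence looks like this (the image name and tag follow the build step above):

```sh
# Authenticate and allow Docker to use your gcloud credentials for gcr.io
gcloud auth login
gcloud auth configure-docker

# Push the image to the registry
docker push gcr.io/project-id/node-server:v1
```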
4. Deploy the image from Artifact Registry to GKE
First, create a cluster using the following command:
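For example (the cluster name, zone, machine type, and node count below are illustrative; pick values that fit your project):

```sh
gcloud container clusters create my-cluster \
  --zone asia-southeast1-a \
  --machine-type e2-small \
  --num-nodes 2
```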
Please wait until the cluster is created with a status of RUNNING. After that, you can check the current clusters as follows:
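The following lists the clusters in the current project:

```sh
gcloud container clusters list
```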
[Image: Clusters]
Next, create a file `deployment.yaml` with the following content:
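Since the original file isn't reproduced here, the sketch below matches the description that follows: a Deployment plus a LoadBalancer Service. The image, ports, replica count, and the placeholder names label-name, deployment-name, and service-name are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: label-name
  template:
    metadata:
      labels:
        app: label-name
    spec:
      containers:
        - name: node-server
          image: gcr.io/project-id/node-server:v1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  type: LoadBalancer
  selector:
    app: label-name
  ports:
    - port: 80          # Port exposed by the external load balancer
      targetPort: 8080  # Port the container listens on
```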
In the deployment.yaml file above, I've created a Deployment that runs a Pod with a container started from the Docker image, along with a LoadBalancer Service that maps the container's port so the server can be reached from outside.
Replace project-id with your own project ID (or point to whichever image you want to deploy), and change the values of label-name, deployment-name, and service-name to suit your needs.
Next, execute the following command to apply the configuration information to Kubernetes:
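Assuming kubectl still needs credentials for the new cluster (fetched here with gcloud container clusters get-credentials, using the name and zone from the create step), the commands look like this:

```sh
# Point kubectl at the cluster created earlier
gcloud container clusters get-credentials my-cluster --zone asia-southeast1-a

# Create/update the Deployment and Service defined in the file
kubectl apply -f deployment.yaml
```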
Note that kubectl apply behaves like an upsert: whenever the deployment.yaml file changes, you only need to run the apply command again to create or update the configuration in Kubernetes.
You can use the following commands to check the status of the deployment and service:
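For example (service-name matches the Service defined in deployment.yaml):

```sh
# Pods created by the Deployment
kubectl get pods

# The Service and its EXTERNAL-IP (shown as pending until the load balancer is provisioned)
kubectl get service service-name
```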
[Image: Pod info]
[Image: Service info]
Now, get the value of the EXTERNAL-IP field and access it to check the status of our deployed NodeJS server.
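For example, with curl (EXTERNAL-IP stands for the address reported by kubectl get service):

```sh
curl http://EXTERNAL-IP/
```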
[Image: Request by curl]
[Image: Access by browser]
If you enjoyed this, don't forget to hit the like button and share it with your friends!