Posts

NodeJS Practice Series

Introduction

NodeJS is an open-source, cross-platform JavaScript runtime environment. Here are some key points about NodeJS:

- V8 Engine: NodeJS runs on the V8 JavaScript engine, which is also the core of Google Chrome. This makes NodeJS highly performant.
- Asynchronous and Non-Blocking: NodeJS uses an event-driven, non-blocking I/O model. It's lightweight and efficient, making it ideal for data-intensive real-time applications.
- Single-Threaded: NodeJS runs in a single process, handling multiple requests without creating new threads. Instead of blocking while it waits on I/O, it moves on to the next request.
- Common Language: Frontend developers who write JavaScript for browsers can use the same language for server-side code in NodeJS. You can even use the latest ECMAScript standards without waiting for browser updates.

This page compiles articles related to NodeJS, including how to integrate it with various libraries and relevant tech stacks. I will continue to update…
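As a quick illustration of the single-threaded, non-blocking model, here is a minimal sketch of my own (the file name server.js and port 3000 are assumptions, not from the series):

```
# Write and run a tiny HTTP server; file name and port are arbitrary choices.
cat > server.js <<'EOF'
// One process, one thread: the slow timer below does not block other requests,
// because the event loop keeps accepting connections while the timer is pending.
const http = require('http');

http.createServer((req, res) => {
  setTimeout(() => res.end('Hello from NodeJS\n'), 1000); // simulated slow I/O
}).listen(3000, () => console.log('Listening on http://localhost:3000'));
EOF

node server.js
```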

Kubernetes Horizontal Pod Autoscaling

Introduction

There are two common scaling methods: Vertical scaling and Horizontal scaling. Vertical scaling involves adding more hardware to a server, such as RAM or CPU. Horizontal scaling, on the other hand, means adding more instances of an app (or more server nodes) to fully utilize the available resources. However, horizontal scaling has its limits: once a node's resources are maxed out, vertical scaling becomes necessary. This article will focus on horizontal scaling using Kubernetes Horizontal Pod Autoscaling (HPA), which automatically scales resources up or down based on system demand.

Implementation Process

1. Build a Docker image for your application.
2. Deploy the image using a Deployment and a LoadBalancer service.
3. Configure HPA to automatically scale resources.

To use HPA for auto-scaling based on CPU/Memory, Kubernetes must have the metrics-server installed. If you're using a cloud provider, the metrics-server is usually installed…
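For step 3, HPA can be configured imperatively with kubectl. A minimal sketch (the deployment name my-app and the thresholds are placeholders, not from the article):

```
# Scale my-app between 1 and 5 replicas, targeting 70% average CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=70 --min=1 --max=5

# Watch the autoscaler react as load changes:
kubectl get hpa -w
```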

SSH to Google Compute Engine

Introduction

I previously wrote a guide on creating a Virtual Machine (VM) instance on Google Cloud and accessing it via gcloud. However, if your Google Cloud account lacks permission to manage VM instances, or if you want to create a VM instance that allows SSH for easy sharing with other users and compatibility with various SSH tools, follow the steps below.

Configure SSH access for a VM instance

Firstly, you need to create a compute instance as follows:

```
gcloud compute instances create {instance name} \
  --zone={zone} \
  --machine-type={machine type}

# ex:
gcloud compute instances create instance-1 \
  --zone=asia-southeast1-a \
  --machine-type=e2-micro
```

Next, SSH into this VM to perform the necessary configurations. Typically, a Google VM instance will have a Distributor ID of Debian. Use the following command to check this before proceeding with the next steps:

```
lsb_release -a
```

Next, set the password for the root account:

```
sudo passwd
```

Next, use the…
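The excerpt cuts off here. A common continuation for this kind of setup (my assumption, not confirmed by the article) is allowing password logins in the SSH daemon config:

```
# Assumption: enable password-based SSH logins on the Debian VM.
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config

# On Debian the SSH service is named "ssh":
sudo systemctl restart ssh
```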

Setting Up an EXTERNAL-IP for Local LoadBalancer Service

Introduction

If you've used a LoadBalancer service from a Cloud Provider, you'll know how convenient it is to have an EXTERNAL-IP assigned automatically. However, when using local Kubernetes, the default setting doesn't provide an EXTERNAL-IP. Building on our previous discussion, this guide will show you how to use `cloud-provider-kind` to assign an EXTERNAL-IP to your local LoadBalancer service.

First, make sure you've set up your local Kubernetes using Kind as outlined in my previous guide. This is necessary to proceed with the next steps.

Installing cloud-provider-kind

Since this is a Go package, you'll need to install Go first. Then, install the package as follows:

```
go install sigs.k8s.io/cloud-provider-kind@latest
```

Then run it:

```
cloud-provider-kind
```

Keep in mind that you need to keep this terminal running while using Kubernetes so the EXTERNAL-IP can be created.

Testing with a local EXTERNAL-IP

Create a deployment and expose…
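A minimal test sketch (the nginx name and image are my own choices; the article's example may differ):

```
# Create a deployment and expose it through a LoadBalancer service.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# With cloud-provider-kind running, EXTERNAL-IP should move from <pending> to a real address:
kubectl get svc nginx
```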

Setting up Kubernetes Dashboard with Kind

Introduction

In a previous article, I guided you through using Helm to deploy on Google Kubernetes Engine. However, if you want to cut down costs by using Kubernetes in your local environment instead of relying on a cloud provider during development, then Kind is your go-to. There are several tools to help set up Kubernetes locally, such as MiniKube, Kind, K3S, KubeAdm, and more. Each tool has its own pros and cons. In this article, I'll walk you through using Kind to quickly set up a Kubernetes cluster on Docker. Kind stands out for its compactness, making Kubernetes start up quickly, being user-friendly, and supporting the latest Kubernetes versions.

Working with Kind

Firstly, follow the instructions here to install Kind according to your operating system. If you're using Ubuntu, execute the commands:

```
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```
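Once Kind is installed, creating a cluster is a single command. A minimal sketch (the cluster name dev is my own):

```
# Create a local cluster running inside Docker, then point kubectl at it.
kind create cluster --name dev
kubectl cluster-info --context kind-dev
```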

Helm for beginners - Deploy nginx to Google Kubernetes Engine

Introduction

Helm is a package manager for Kubernetes, which simplifies the process of deploying and managing applications on Kubernetes clusters. Helm uses a packaging format called charts, which are collections of files that describe a related set of Kubernetes resources.

Key Components of Helm

- Charts: Helm packages are called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
- Values: Charts can be customized with values, which are configuration settings that specify how the chart should be installed on the cluster. These values can be set in a `values.yaml` file or passed on the command line.
- Releases: When you install a chart, a new release is created. This means that one chart can be installed multiple times into the same cluster, and each can be independently…
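To make the chart/values/release relationship concrete, here is a minimal sketch (the Bitnami nginx chart and the release name are my assumptions; the article may use a different chart):

```
# Add a chart repository, install a chart as a release, and override a value.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx --set service.type=LoadBalancer

# Each install is a separate release; list them:
helm list
```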

Setting up Gitlab CI

Introduction

Gitlab is a comprehensive platform designed for software development and version control using git. It provides a user-friendly web interface that speeds up working with git, making it easier to manage Git repositories. Gitlab offers a range of features, including:

- Free public and private repositories: host your code securely and privately, or share it with the world.
- Continuous Integration/Continuous Deployment (CI/CD): automate the testing and deployment of your code.
- Free private Docker image storage on the Container Registry.

In this article, I'll guide you on how to push a Docker image to the Gitlab Container Registry and set up CI to automatically build and push Docker images when you push code to a Gitlab repository.

Pushing a Docker Image to the Gitlab Container Registry

First, you'll need a Gitlab account and a repository (either public or private will work). Use the NodeJS Typescript Server project I introduced earlier, or any project…
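A manual push usually looks like the following sketch (the registry path placeholders are mine; replace them with your own namespace and project):

```
# Authenticate, then build and push an image tagged with the project's registry path.
docker login registry.gitlab.com
docker build -t registry.gitlab.com/<namespace>/<project>:latest .
docker push registry.gitlab.com/<namespace>/<project>:latest
```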