Posts

Helm for Beginners - Deploy nginx to Google Kubernetes Engine

Introduction
Helm is a package manager for Kubernetes, which simplifies the process of deploying and managing applications on Kubernetes clusters. Helm uses a packaging format called charts, which are collections of files that describe a related set of Kubernetes resources.
Key Components of Helm
Charts: Helm packages are called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Values: Charts can be customized with values, which are configuration settings that specify how the chart should be installed on the cluster. These values can be set in a `values.yaml` file or passed on the command line.
Releases: When you install a chart, a new release is created. This means that one chart can be installed multiple times into the same cluster, and each can be indep…
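To make the relationship between charts, values, and releases concrete, here is a minimal sketch; the chart directory, release names, and `replicaCount` value are illustrative assumptions, not taken from the post.

  # values.yaml (hypothetical defaults shipped with the chart)
  replicaCount: 2
  image:
    repository: nginx
    tag: alpine

  # install the chart as a release named "my-nginx", overriding one value
  helm install my-nginx ./my-chart --set replicaCount=3

  # installing the same chart again creates a second, independent release
  helm install my-nginx-2 ./my-chart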

Setup Gitlab CI

Introduction
Gitlab is a comprehensive platform designed for software development and version control using git. It provides a user-friendly web interface that speeds up working with git, making it easier to manage Git repositories. Gitlab offers a range of features, including:
Free public and private repositories: You can host your code securely and privately or share it with the world.
Continuous Integration/Continuous Deployment (CI/CD): Automate the testing and deployment of your code.
Free private Docker image storage on the Container Registry.
In this article, I'll guide you on how to push a Docker image to the Gitlab Container Registry and set up CI to automatically build and push Docker images when you push code to a Gitlab repository.
Pushing a Docker Image to the Gitlab Container Registry
First, you'll need a Gitlab account and a repository (either public or private will work). Use the NodeJS Typescript Server project I introduced earlier, or any proj…
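As a rough sketch of the two pieces the article walks through: the manual push uses the standard Docker CLI against registry.gitlab.com, and the automated build can live in a `.gitlab-ci.yml` that relies on Gitlab's predefined CI variables. The project path below is a placeholder, and a runner with Docker-in-Docker support is assumed.

  # manual push (replace <username>/<project> with your repository path)
  docker login registry.gitlab.com
  docker build -t registry.gitlab.com/<username>/<project>/server:latest .
  docker push registry.gitlab.com/<username>/<project>/server:latest

  # .gitlab-ci.yml sketch: build and push the image on every push
  build-image:
    image: docker:latest
    services:
      - docker:dind
    script:
      - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
      - docker build -t "$CI_REGISTRY_IMAGE:latest" .
      - docker push "$CI_REGISTRY_IMAGE:latest"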

Github CI/CD with Google Cloud Build

Introduction
Continuous Integration (CI): This is the process of building, testing, and performing necessary actions to ensure code quality before it gets merged into the main branch for deployment.
Continuous Delivery (CD): This usually happens after CI and includes steps to deploy the source code to various environments like staging and production.
This guide will show you how to set up CI/CD on Github using Google Cloud Build. While Github provides shared runners, if you or your organization have many jobs that need executing during development, setting up your own runner is a better choice. Before proceeding, you should understand some basics about Google Cloud Run to build and deploy Docker images. You can refer to this article for more details: Build Docker image for NodeJS Typescript Server.
Setting Up GitHub CI/CD
First, create a Github repository. You can choose either a public or private repository. You can use a NodeJS TypeScript application, following my guide o…
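For orientation, a Cloud Build pipeline is typically described in a `cloudbuild.yaml` at the repository root. The sketch below builds the Docker image, pushes it to Artifact Registry, and deploys it to Cloud Run; the repository name, service name, and region are placeholders rather than values from the guide.

  steps:
    # build the Docker image
    - name: gcr.io/cloud-builders/docker
      args:
        - build
        - -t
        - us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/express-ts:$SHORT_SHA
        - .
    # push it to Artifact Registry
    - name: gcr.io/cloud-builders/docker
      args:
        - push
        - us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/express-ts:$SHORT_SHA
    # deploy the pushed image to Cloud Run
    - name: gcr.io/google.com/cloudsdktool/cloud-sdk
      entrypoint: gcloud
      args:
        - run
        - deploy
        - express-ts
        - --image
        - us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/express-ts:$SHORT_SHA
        - --region
        - us-central1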

Using Google Cloud Run to Deploy Docker Image

Introduction
Google Cloud Run (GCR) makes deploying a Docker image as easy as running it locally. GCR also includes customizable configuration options for managing services, simplifying the deployment process significantly.
Build Docker Image
The key step in deploying with a Docker image is successfully building that image. In this guide, we'll use a NodeJS server Docker image created in this article. Follow the steps to build your Docker image (or use an existing one), and push it to Google Artifact Registry before proceeding.
Deploy Docker Image
To deploy a Docker image using Google Cloud Run, simply use the following command:

  gcloud run deploy express-ts --image {docker image} --port {container port} --region {region id} --max-instances {number of instances} --allow-unauthenticated

--image: the link to the Docker image on Google Artifact Registry or Docker Hub
--port: the container port you are exposing
--max-instances: the maximum number of instances the service can scale out to
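As a filled-in illustration of that command (the image path, port, and region below are placeholder values, not ones used in the article):

  gcloud run deploy express-ts \
    --image us-central1-docker.pkg.dev/my-project/my-repo/express-ts:latest \
    --port 3000 \
    --region us-central1 \
    --max-instances 2 \
    --allow-unauthenticated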

Monitoring with Grafana

Introduction
In my previous article, I guided you through setting up cAdvisor, Prometheus and Grafana on Docker, which are widely used for system monitoring. Now that you've successfully started Grafana, follow these next steps to start using it.
Using Grafana
When you access the Grafana login page, use the following default credentials for your first login. You can change them later as needed.
username: admin
password: admin
Here's how the homepage looks. First, you need to add Prometheus as a data source:
1. Go to Connections > Add new connection > select Prometheus.
2. Enter the Prometheus server URL that you defined when starting Docker Compose.
Once the data source is successfully added, create a new dashboard:
1. Click on New > New dashboard > Add visualization.
2. Select Prometheus as the data source and choose a metric to use, such as `container_memory_cache`.
After saving, you can view the results on your dashboard as follows:
Using Dash…
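If you prefer keeping that data source configuration as code instead of clicking through the UI, Grafana can also provision it from a YAML file mounted under /etc/grafana/provisioning/datasources/. A minimal sketch, assuming Prometheus is reachable at http://prometheus:9090 inside the Compose network (both the hostname and port are assumptions, not taken from the article):

  # prometheus-datasource.yml (provisioning sketch)
  apiVersion: 1
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: http://prometheus:9090
      isDefault: true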

Monitoring with cAdvisor, Prometheus and Grafana on Docker

Introduction
Monitoring a system is crucial after deploying a product to a production environment. Keeping an eye on system metrics like logs, CPU, RAM, disks, etc. helps you identify the system's status and performance issues, and provides timely solutions to ensure stable operations. While cloud providers like Google, Amazon, or Azure offer built-in monitoring systems, if your company needs to manage multiple applications/systems/containers and wants a centralized monitoring system for easier management, using cAdvisor, Prometheus, and Grafana is a sensible choice. These three popular open-source tools are widely used by DevOps teams, especially for monitoring container applications.
cAdvisor
Developed by Google, cAdvisor is an open-source project used to analyze resource usage, performance, and other metrics from container applications, providing an overview of all running containers. Find more details here.
Prometheus
Prometheus is a toolkit for system monitoring and a…
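To give a sense of how the three tools fit together on Docker, here is a trimmed-down docker-compose sketch; the published ports and the prometheus.yml mount are assumptions rather than the article's exact setup.

  services:
    cadvisor:
      image: gcr.io/cadvisor/cadvisor   # exposes per-container metrics (real setups also mount host paths such as /var/run/docker.sock)
      ports:
        - 8080:8080
    prometheus:
      image: prom/prometheus            # scrapes cAdvisor and stores the metrics
      volumes:
        - ./prometheus.yml:/etc/prometheus/prometheus.yml
      ports:
        - 9090:9090
    grafana:
      image: grafana/grafana            # dashboards on top of Prometheus
      ports:
        - 3000:3000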

Deploy React Application to Google Kubernetes Engine

Introduction
In this article, I will guide you through deploying a React Application to Google Kubernetes Engine (GKE). Previously, I wrote an article about deploying a NodeJS Application to GKE, which you can refer to for some basic information before continuing.
Steps to Follow
The process is quite similar to deploying a NodeJS Application and includes the following steps:
1. Create a React Application
2. Build a Docker image
3. Push the Docker image
4. Deploy the Docker image to GKE
You will notice that when working with Kubernetes, the main difference is in the step where you build the Docker image. Depending on the application you need to deploy, there are different ways to build the Docker image. However, the common point is that once you build the Docker image, you have completed almost half of the process. This is because the subsequent steps involving Kubernetes are entirely the same.
Detailed Process
1. Create a React Application
In this step, you can either use an existing R…
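Since building the Docker image is the step that differs most, here is a minimal multi-stage Dockerfile sketch for a React app served by nginx; the node version and the "build" output directory (which assumes Create React App) are assumptions, not details from the article.

  # Stage 1: compile the static React bundle
  FROM node:20-alpine AS build
  WORKDIR /app
  COPY package*.json ./
  RUN npm ci
  COPY . .
  RUN npm run build

  # Stage 2: serve the compiled bundle with nginx
  FROM nginx:alpine
  COPY --from=build /app/build /usr/share/nginx/html
  EXPOSE 80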

Using Nginx on Docker

Introduction
Nginx is a popular open-source web server known for its superior performance compared to the Apache web server. Nginx supports various functionalities, including deploying an API gateway (reverse proxy), load balancer, and email proxy. It was initially developed to build a web server capable of efficiently handling 10,000 concurrent connections with low memory usage.
Run Nginx with Docker
To use Nginx with Docker, simply execute the following command:

  docker run -dp 8080:80 nginx:alpine

By default, Nginx uses port 80, but you can map it to a different port if needed.
Custom Nginx Configuration
To customize the Nginx configuration, first create a `docker-compose.yml` file with the following content:

  services:
    serviceName:
      image: nginx:alpine
      ports:
        - 8080:80
      volumes:
        - ./default.conf:/etc/nginx/conf.d/default.conf
        - ./index.html:/usr/share/nginx/html/index.html

In the `volumes` field, note that I have mapped two file…
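Because the compose file mounts a `default.conf`, here is a minimal sketch of what that file could contain; the server block below is an illustrative assumption, not the article's actual configuration.

  # default.conf sketch: serve the mounted index.html on port 80
  server {
      listen 80;
      server_name localhost;

      location / {
          root /usr/share/nginx/html;
          index index.html;
      }
  }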