Deploying a Python Flask Server to Google Kubernetes Engine

Introduction

In this article, I will guide you through deploying a Python Flask Server to Google Kubernetes Engine (GKE). Previously, I wrote an article about deploying a NodeJS Application to GKE, which you can refer to for some basic information before continuing.

Steps to Follow

The process is quite similar to deploying a NodeJS Application and includes the following steps:
  1. Create a Python Flask Server
  2. Build a Docker image
  3. Push the Docker image
  4. Deploy the Docker image to GKE
You will notice that when working with Kubernetes, the main difference lies in the step where you build the Docker image, which varies depending on the application you need to deploy. Once the Docker image is built, you have completed almost half of the process, because the subsequent Kubernetes steps are exactly the same.

Detailed Process

1. Create a Python Flask Server

In this step, you can either use an existing Python project or create a new one. If you want to use the project I’m using in this article, follow these steps:

Create a file named `app.py` with the following content:

from datetime import datetime
from flask import Flask, json

app = Flask(__name__)

@app.route('/', methods=['GET'])
def index():
    data = {'title': 'Python Application', 'now': datetime.now()}
    return app.response_class(
        response=json.dumps(data),
        status=200,
        mimetype='application/json'
    )

if __name__ == '__main__':
    app.run(host='0.0.0.0')


Next, create a `requirements.txt` file to list the packages you need to install. The file should include:

Flask>=2.0


Then, install the packages with the following command:

pip install -r requirements.txt


By default, Flask uses port 5000. Let's execute the code and check the results:

python app.py
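
If the server starts correctly, a request from another terminal should return the JSON payload. A quick check (the exact `now` value will differ depending on when you call it):

curl http://localhost:5000/
# {"now": "...", "title": "Python Application"}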

2. Build Docker Image

Create a Dockerfile with the following content:

FROM python:alpine

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
CMD ["python", "app.py"]

  • Building a Docker image is quite straightforward. You just need to copy the source code, install the packages listed in the `requirements.txt` file, and run your project.
  • You'll perform the copy process twice: once to copy the `requirements.txt` file and once to copy all remaining resources into the image. Splitting this into two steps leverages Docker's layer caching: as long as `requirements.txt` does not change, the layer that installs the packages is reused on subsequent builds instead of being rebuilt.


Next, create a `.dockerignore` file to exclude any files that shouldn't be copied during the Docker image build process.
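
For example, a minimal `.dockerignore` might look like this (adjust it to match your own repository):

__pycache__/
*.pyc
venv/
.git
Dockerfile
.dockerignore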


To build the image, execute the following command:

docker build . -t python-app
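
Before pushing, you can run the image locally to make sure it works, mapping the container's port 5000 to the host:

docker run --rm -p 5000:5000 python-app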


3. Push Docker Image

To push your Docker image to Google Cloud Artifact Registry, check out this article I mentioned earlier. Alternatively, you can also push it to Docker Hub.
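
As a quick sketch, pushing to Google's registry comes down to tagging the image with the same path used in the deployment manifest below and pushing it (this assumes Docker is already authenticated with Google Cloud, e.g. via `gcloud auth configure-docker`):

docker tag python-app gcr.io/{project id}/python-app
docker push gcr.io/{project id}/python-app

Replace {project id} with your project ID, matching the image reference in the manifest.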

4. Deploy Docker Image to GKE

Now, let's create a cluster with the following command:

# gcloud container clusters create {cluster name} \
# --project {project id} \
# --zone {zone id} \
# --machine-type {machine type}

# ex:
gcloud container clusters create k8s-cluster \
--project project-id \
--zone asia-southeast1-a \
--machine-type e2-micro

Replace the placeholders for the cluster name, project ID, zone, and machine type as needed.
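
Once the cluster is up, fetch its credentials so that `kubectl` talks to it:

gcloud container clusters get-credentials k8s-cluster \
--zone asia-southeast1-a \
--project project-id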

As mentioned earlier, when deploying projects to Kubernetes, the main difference lies in how you build the Docker image, which varies by project type. Once you have the Docker image, the content of the `deployment.yml` file will be similar to the one used for deploying a NodeJS application. You just need to update the image and port information accordingly.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    name: label-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label-name
  template:
    metadata:
      labels:
        app: label-name
    spec:
      restartPolicy: Always
      containers:
        - name: python-app
          image: gcr.io/{project id}/python-app # or use image from docker hub
---
apiVersion: v1
kind: Service
metadata:
  name: service-name
  labels:
    service: label-name
spec:
  selector:
    app: label-name
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80 # port service
      targetPort: 5000 # port pod


After that, apply the manifest to initialize the resources:
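
kubectl apply -f deployment.yml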



Wait a moment, then check that the pod, service, and deployment have been created successfully:
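
kubectl get pods,services,deployments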



Then, access the EXTERNAL-IP of the LoadBalancer to see the results.
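
For example, using the external IP shown in the `kubectl get services` output (replace {external ip} accordingly):

curl http://{external ip}/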



To delete the resources, use the following command:
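
kubectl delete -f deployment.yml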


See you again in the next articles!
