DevOps in Action! – How We Built the DevDash Demo


Introduction to the DevDash Demo

It began with Paul Z suggesting that we build a cool, fun demo for Cisco Live 2022. A scavenger hunt, maybe. A few months later, it had grown into a full-blown web application with a DevOps deployment interacting with IoT devices. Our first DevDash demo went live at Cisco Live 2022 Vegas.

DevDash 1

Here's how the DevDash demo works. To accept the challenge, users are assigned to IoT race cars. Users take on the challenge of answering fun, developer- and computer-related multiple-choice questions. Every time a user answers a question correctly, their car moves toward the finish line. Answer a question incorrectly, and the car is set backward. Race results are recorded and posted to our "hall-of-fame" leaderboard. The users with the fastest times won some cool prizes at the event. And of course, the bragging rights.

DevDash is a fun project that showcases:

  • How to build a real-world, microservices-based web application using the F.A.R.M. stack: FastAPI, React JS, and MongoDB
  • How to build a production-quality, bare-metal Kubernetes cluster on Raspberry Pis to host the web application
  • How to build IoT 4WD race cars using the Freenove car kits integrated with Raspberry Pi

The frontend web application is written in JavaScript with the ReactJS library, the backend web services in Python with FastAPI, and persistent data is stored in MongoDB. All software is packaged as containers running on the Kubernetes cluster. The web services send REST API calls as commands to control the race car IoT devices.

Below is the network diagram:

Network Diagram

Let me walk you through how I put together all the pieces that make up the DevDash demo…

Kubernetes cluster on Raspberry Pi

I'll walk you step by step through how I built this bare-metal, 3-node Kubernetes cluster running on Raspberry Pis. A 4th Raspberry Pi in the cluster runs as a WiFi router (routing traffic to the IoT race cars in the 10.20.x.x subnet) and as the MongoDB database server for data persistence.

DevDash 3

The Hardware

Initially, I wanted to power the Raspberry Pis with Power over Ethernet (PoE) from the switch. Adding the PoE HAT to each Raspberry Pi made space too tight in the GeekPi case, so I powered them from the USB-C ports on the power strip instead. This option is a lot cheaper and simpler, since you don't need to buy PoE HATs for the Raspberry Pis. I taped the mini switch to the side of the case for easy cable management. The bottom Raspberry Pi powers the cooling fan, so I connected its USB-C cable to a 9W adapter to make sure there is enough power for the fan. Assembling the cluster is a simple process, and the instructions from GeekPi are easy to follow.

The base OSes

DevDash 4

I used Raspberry Pi Imager to flash all 4 of the Raspberry Pi SD cards with Ubuntu Server 20.04 LTS 64-bit (headless). I chose Ubuntu because it's well supported and I was quite familiar with the OS. There may be newer, more stable Ubuntu releases by the time you read this. With the Raspberry Pi Imager utility, you can preset the hostname, enable SSH, and set the username, password, and locale settings. I didn't enable wireless LAN during this process. I'll describe later how to enable wireless on one of the Pis and make it a WiFi router.

Set the hostname for these Pis as follows:

  • pi-server – This will be configured as the WiFi router and MongoDB database server
  • k3s-primary – This is the Kubernetes controller node
  • k3s-worker1 – This is Kubernetes worker node 1
  • k3s-worker2 – This is Kubernetes worker node 2

Refer to the network diagram above for more information. Configure each node once it boots:

I updated the /etc/hosts file on every node in the cluster to include the hostnames of all nodes. I also added them to the SSH config file (~/.ssh/config). SSH to each node and make sure you can ping the other nodes in the cluster. You can use Ansible to automate all of the above with a YAML playbook. A sketch of what those entries might look like follows below.
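For illustration, here is roughly what those entries could look like. The hostnames come from the list above; the 10.0.0.x addresses (apart from pi-server's, which appears in the netplan config later) and the devnet username are assumptions from my setup, so substitute your own:

# /etc/hosts (the same on every node) -- example addresses
10.0.0.51  k3s-primary
10.0.0.52  k3s-worker1
10.0.0.53  k3s-worker2
10.0.0.54  pi-server

# ~/.ssh/config on the workstation
Host k3s-* pi-server
    User devnet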

Make sure you run the apt update command to update the package index before installing Kubernetes and other software packages.

$ sudo apt update

K3s Kubernetes

Now I'm ready to install K3s Kubernetes on the cluster. Why K3s? K3s is a lightweight Kubernetes distribution, optimized for ARM. It also features a simplified installation and update process. K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

I use Docker to build and deploy containers to the K3s cluster, so first I need to install the Docker engine on all nodes.

  1. Install Docker on each K3s node with the instructions here.
  2. Instructions to install K3s on the Kubernetes cluster are here. It literally takes minutes to install K3s on all nodes. Make sure K3s is installed properly, with all nodes in the Ready state, and you're good to go.
$ kubectl get node 

 DevDash 5
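For reference, the K3s installation from the linked instructions boils down to a couple of one-liners (a sketch; check the K3s docs for current options — the --docker flag tells K3s to use the Docker engine we just installed):

# On k3s-primary (the server):
$ curl -sfL https://get.k3s.io | sh -s - --docker

# Grab the join token from the server:
$ sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node, join it to the server:
$ curl -sfL https://get.k3s.io | K3S_URL=https://k3s-primary:6443 K3S_TOKEN=<token> sh -s - --docker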

One of the cool features of K3s is that it includes Traefik by default, so you don't need to install a load balancer or an ingress controller. Everything is included and ready for you to use!

Build the DevDash web application containers

High-level architecture

All sources for this project are available in a GitHub repository here. I use the FARM stack to implement this demo because it is a full-stack framework well suited to web application development. You can use the basic framework of this web app as a template for almost any web application.

Here is the high-level architecture of the DevDash demo app using the FARM stack.

Devdash Architecture

The webapp has 4 main components in the Git repository:

  • backend – Backend services written in Python
  • frontend – Frontend GUI application written in JavaScript with the React JS library
  • deployment – YAML scripts to deploy the app to the Kubernetes cluster
  • devrel500 – Python app using FastAPI and the Freenove Python library to process REST API calls sent from the backend services to control the IoT race car.

DevDash consists of several different microservices, developed in ReactJS + Python, and packaged as Docker containers.

DevDash 7

Setting up your development environment

To start building this microservices-based application, you will need to install Docker, Node, Python, and kubectl on your workstation:

  • Go to the Docker download page and get Docker Personal Edition. When you are done, open a terminal on your workstation (i.e., Terminal or iTerm on Mac, Command Prompt or PowerShell on Windows) and check that Docker is correctly installed on your system with docker version.
  • Go to the Node download page and follow the instructions to install Node
  • I used the kubectl on Mac instructions to install the kubectl tool on my Mac. After the installation, copy /etc/rancher/k3s/k3s.yaml from the primary node (k3s-primary) to ~/.kube/config on your workstation (see the sketch after this list)
  • I use Python 3.9.10 to build the backend microservices, but any Python 3.x should work. Go to python.org to download and install Python.
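Copying the kubeconfig could look like this (a sketch; the devnet username and 10.0.0.51 address are assumptions from my setup). Note that the copied file points at 127.0.0.1, so replace the server address with the primary node's IP:

# k3s.yaml is root-owned on the node, so read it with sudo
$ ssh devnet@k3s-primary 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config
$ sed -i '' 's/127.0.0.1/10.0.0.51/' ~/.kube/config   # BSD sed syntax on macOS
$ kubectl get node                                    # verify you can reach the cluster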

All of the required code to build your DevDash application is stored in GitHub, a repository hosting service that supports the Git version control system. You can easily register for a free GitHub account, and you will need to install the Git CLI on your workstation.

Once installation is complete, check that Git is correctly installed on your system by running the following command in a terminal window:

$ git version

Now create a directory in your user home directory, where you will store all the DevOps content related to this tutorial, and enter it.

$ mkdir devdash
$ cd devdash

Inside this new directory you will now clone the content from the GitHub repository (aka repo) that hosts all the required code to build and deploy the DevDash containers.

$ git clone https://github.com/davidncsco/devdash.git

Build the DevDash containers

Let's start with the backend container.

$ cd backend

Check out the Python code in this directory:

  • model.py – defines the data model for DevDash
  • database.py – all database operations (CRUD)
  • main.py – REST API routes that can be called from the frontend web app
  • utils.py – utility functions
  • leader.py – leaderboard display.
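To give a feel for the FastAPI side, here is a minimal sketch of what a route in main.py might look like. The route path, model fields, and answer key are hypothetical, not the repo's actual code — check the repository for the real thing:

# Hypothetical sketch of a backend route, not the repo's actual main.py
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# The real app loads questions and answers from MongoDB (database.py)
ANSWER_KEY = {1: "B", 2: "D"}

class Answer(BaseModel):
    car_id: int
    question_id: int
    choice: str

@app.post("/answer")
def submit_answer(answer: Answer) -> dict:
    # The real backend would also send a REST command to the car's
    # web service to move it forward or backward
    correct = ANSWER_KEY.get(answer.question_id) == answer.choice
    return {"correct": correct}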

The Dockerfile in the backend directory defines how to build the backend container. Here, I define the DB_URL environment variable with the URL used to connect to MongoDB, which I'll describe later in the DB server section. Because we also deploy the same application virtually on a sandbox, I use the VIRTUAL environment variable to differentiate between build environments.

FROM python:3.9.10-slim as backend

COPY ./utils.py ./main.py ./database.py ./model.py /app/
COPY ./requirements.txt /app
COPY ./data /app/data
ARG DB_URL="mongodb://davidn:<password>@<db-host>:27017/"
ARG VIRTUAL=0
ENV DB_CONNECT_URL=${DB_URL}
ENV VIRTUAL_EVENT=${VIRTUAL}

WORKDIR /app

RUN pip3 install -r requirements.txt

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host=0.0.0.0"]

Execute the Docker build command to generate the container image for the backend server. Since we are building for Raspberry Pis running Linux on an ARM processor, we need to specify the target platform. Here, I used xitrum/backend as the container name and 3.0.1 as the tag.

$ docker buildx build --platform linux/arm64 -t xitrum/backend:3.0.1 .

You will see output similar to this from the docker buildx command:

DevDash 10

It takes almost 5 minutes to build the backend image on my MacBook. Next, I push this image to Docker Hub so I can deploy it later. You can sign up for a free personal Docker Hub account; click on the Docker Hub link for more info.

$ docker push xitrum/backend:3.0.1

Similarly, you can use the same process to build the frontend container.

$ cd ../frontend

Check out the code and resources in this directory that make up the frontend UI for the webapp:

  • src – the React frontend JavaScript source code.
  • static/questions – the database of questions used in the challenge. Each question is a PNG image.
  • package.json – stores the metadata associated with the project as well as the list of dependency packages.

Let's examine the content of the frontend Dockerfile used to build this component. Building the frontend image is a 2-stage process. First, we use Node to build the frontend JavaScript application. Then, in the second stage, we bundle those scripts with nginx as our web server to serve them. Here I use API_URL to define the Ingress public route for the Traefik reverse proxy, which I'll describe in more detail in the deployment section.

# Dockerfile - build the base image
FROM node:17.8.0-alpine as build-frontend
WORKDIR /app
ARG API_URL=http://devrel-500
ARG VIRTUAL=false
ENV PATH /app/node_modules/.bin:$PATH
ENV REACT_APP_API_URL=${API_URL}
ENV REACT_APP_VIRTUAL_EVENT=${VIRTUAL}
ENV WDS_SOCKET_PORT 0
COPY package.json .

RUN npm install --silent

COPY ./ /app/

#RUN npm install react-scripts@<version> -g --silent
RUN npm run build

# Build for production with nginx
FROM nginx:1.20.2-alpine

COPY --from=build-frontend /app/build/ /usr/share/nginx/html
COPY ./static/questions /usr/share/nginx/html/static/questions

$ docker buildx build --platform linux/arm64 -t xitrum/frontend:3.0.1 .

You will see output similar to this from the docker buildx command:

DevDash 11

It takes a little over 5 minutes to build the frontend image. I also push this image to Docker Hub along with the backend image.

$ docker push xitrum/frontend:3.0.1

Now that we have built both the frontend and backend Docker images, we need to build our MongoDB server for persistent data.

Build the MongoDB database server

MongoDB is the leading NoSQL database management system. It is based on what we call the document model, or collections of documents. These documents are like records in JSON format, which makes them a natural fit for our data model and Python dictionaries. There are different ways of building the MongoDB server for our application. We could use the prebuilt MongoDB container from Docker Hub and deploy it using the sample configuration YAML scripts in the deployment directory, utilizing Kubernetes volumes for data persistence.

A simpler way is to build our MongoDB server using a pre-compiled distribution for the linux/arm platform, like the Raspberry Pi. I followed the instructions here to install MongoDB Server v5.0.5 on the K3s primary node, k3s-primary. Note that the MongoDB server could be installed on any of the Raspberry Pis in our cluster. The installation process only takes about 5-10 minutes.

Once the DB server is up and running, you can use any NoSQL client to connect to the MongoDB server. I use Robo 3T, a free MongoDB client, to test and connect to our DB server. I then use it to create a user with admin privileges to perform database operations from our application. This user will be used to define our DB_URL connection string.
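If you prefer a shell, creating such a user might look like this (a sketch for mongosh or Robo 3T's shell; the davidn username matches the connection string below, while the password and role are placeholders — scope the role down as you see fit):

use admin
db.createUser({
  user: "davidn",
  pwd: "<password>",
  roles: [ { role: "root", db: "admin" } ]
})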

DB_URL="mongodb://davidn:<password>@<db-host>:27017/"
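A quick way to sanity-check the connection string from Python (a sketch assuming the pymongo package is installed; the devdash database and results collection names are hypothetical):

# Verify the DB_URL connection string works
from pymongo import MongoClient

client = MongoClient("mongodb://davidn:<password>@<db-host>:27017/")
db = client["devdash"]
db["results"].insert_one({"car": 1, "time": 42.0})  # documents are just Python dicts
print(db["results"].find_one({"car": 1}))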

Make sure that you enable the MongoDB service at startup on the installed node, k3s-primary:

$ sudo systemctl enable mongodb.service

Deploy the webapp containers

Now that we have built our web app containers, it's time to deploy them to the Kubernetes cluster.

$ cd deployment

Check out the two YAML files that I use to deploy the backend and frontend containers:

  • devrel500_backend.yaml
  • devrel500_frontend.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: backend
  labels:
    app: backend
    name: backend

spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
      task: backend
  template:
    metadata:
      labels:
        app: backend
        task: backend
    spec:
      containers:
        - name: backend
          image: xitrum/backend:3.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  ports:
    - name: backend
      port: 8000
      targetPort: 8000
  selector:
    app: backend
    task: backend
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: backend
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web

spec:
  rules:
    - host: devrel-500
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: backend
              port:
                number: 8000

I use the Traefik built-in Kubernetes ingress controller to manage access to cluster services by creating the Ingress spec in the backend YAML file. This YAML creates the backend Deployment with a ReplicaSet that brings up 2 backend pods for load balancing and high availability.

To create the backend deployment:

$ kubectl apply -f devrel500_backend.yaml

Similarly, to create the frontend deployment:

$ kubectl apply -f devrel500_frontend.yaml

Use this command to check whether the backend and frontend are deployed and running:

$ kubectl get all

Everything is ready when each pod appears as Running and READY 1/1.

DevDash 12

Execute this command to get more details about the paths to the backend and frontend services:

$ kubectl get ingress

DevDash 13

You can add these host and IP entries to /etc/hosts on the local workstations that will be used to run the DevDash web application. To run the frontend web app, simply enter the URL http://devrel-500-1 in a web browser (as defined in the devrel500_frontend.yaml Ingress section).
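For example, the workstation entries could look like this (the IP is an assumption from my setup — use the addresses reported by kubectl get ingress above):

# /etc/hosts on the workstation -- example address
10.0.0.51  devrel-500      # backend ingress host
10.0.0.51  devrel-500-1    # frontend ingress host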

DevDash 8

Build the IoT Race Cars

The last piece of the puzzle is to build the IoT race cars and communicate with them. In the DevDash webapp, a user must answer a series of questions to complete the challenge. Answer a question correctly, and the car marches toward the finish line. Answer a question incorrectly, and the car is set backward. So there must be some communication between the webapp and the IoT race cars.

DevDash 9

Install the base OS and configure the race car

First, flash the Raspberry Pi with Raspberry Pi OS Lite (64-bit) using Raspberry Pi Imager, like we did with the cluster nodes.

  • Set the hostname to pi-car-x (where x is the car number). This must match the car number set in the backend DB.
  • Enable SSH
  • Set the username/password
  • Set the SSID for the wireless network and the wireless LAN country to your country code (e.g., US)

We use the Freenove Python library to communicate with the I/O board, and FastAPI to create web services so we can communicate from the backend server using REST APIs. You can find these Python library files in the devrel500 folder.

Reboot and SSH to the RPi on the car. You need to do some basic configuration on your RPi:

$ sudo raspi-config

Select Interface Options, then choose to enable automatic loading of the I2C kernel module. This allows the Python library to communicate with the I/O board on the car, to control the motors that spin the wheels, and to read values from the infrared sensors.
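You can verify that the I/O board is visible on the I2C bus with i2cdetect from the i2c-tools package (the 0x40 address is what I'd expect for the PCA9685 motor controller on the Freenove board, but your hardware may differ):

$ sudo apt install i2c-tools
$ sudo i2cdetect -y 1    # the PCA9685 motor driver typically appears at address 0x40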

Assign a static IP address to the wireless interface on the RPi by adding the following lines to /etc/dhcpcd.conf (change the last octet for each car):

interface wlan0
static ip_address=10.20.0.xx/24
static routers=10.20.0.1
static domain_name_servers=10.20.0.1 8.8.8.8

Install the web services and Python library

Power-cycle the car to reboot the RPi. Raspberry Pi OS comes with Python 3.x by default; you only need to install the Python modules required to run the FastAPI web services. Temporarily connect the RPi's Ethernet port to your router so you have internet access to install the dependency packages.

The devrel500 folder in the Git repository contains all the files you need for our Python library. Use scp to upload these files to the RPi default user's home directory under a new folder named devrel500:

  • Buzzer.py, Motor.py, and PCA9685.py are the Freenove Python library files used to control the car
  • requirements.txt contains the list of modules required to run FastAPI
  • car.py implements the web services used to communicate with the car from the backend server (see the sketch after this list)
  • devrel500.service is the startup script that enables our services at boot time.
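To illustrate how the pieces fit together, here is a minimal sketch of the kind of endpoint car.py implements. The route and duty-cycle values are hypothetical assumptions; Motor.setMotorModel() is the Freenove driver call that sets a duty cycle for each of the four wheel motors:

# Hypothetical sketch of a car web service, not the repo's actual car.py
import time
from fastapi import FastAPI
from Motor import Motor   # Freenove motor driver (PCA9685-based)

app = FastAPI()
motor = Motor()

@app.post("/move/{direction}")
def move(direction: str) -> dict:
    # Drive forward or backward for a short burst; duty values are assumptions
    duty = 1000 if direction == "forward" else -1000
    motor.setMotorModel(duty, duty, duty, duty)  # one duty cycle per wheel
    time.sleep(0.5)
    motor.setMotorModel(0, 0, 0, 0)              # stop
    return {"moved": direction}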

Install all the dependency Python modules and enable the service at boot time. Because we are going to run this service as root, we need to install the dependency modules with sudo:

$ cd devrel500
$ sudo pip install -r requirements.txt
$ sudo cp devrel500.service /lib/systemd/system/
$ sudo systemctl enable devrel500.service

You can check whether the service is running after it has been enabled:

$ sudo systemctl status devrel500.service

Now the IoT smart car is ready to take commands from the backend server.

Configure a Raspberry Pi 4 as a WiFi router

We need to route traffic from our Kubernetes cluster network to the wireless network so we can run the DevDash web app from the workstations and also send REST API calls to the race cars. We could use a commercial router, but why not use a Raspberry Pi in our cluster? In this section, I'll show you how to turn a Raspberry Pi 4B into a WiFi router. The Raspberry Pi 4B wireless chip supports both the 2.4GHz and 5GHz spectrums.

Flash the Raspberry Pi with Ubuntu Server 20.04.4 LTS (64-bit) using Raspberry Pi Imager.

  • Set the hostname to pi-server
  • Enable SSH
  • Set the username and password. I use devnet as the username on all RPis in the cluster.
  • DO NOT configure the wireless LAN at this time.

Reboot and SSH to the RPi.

Assign a static IP address to the RPi by creating a new file, /etc/netplan/00-config.yaml, with this content:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [10.0.0.54/24]
      gateway4: 10.0.0.1
      nameservers:
        addresses:
           [10.0.0.1, 8.8.8.8]
    wlan0:
      dhcp4: false
      addresses:
        - 10.20.0.1/24

Execute these commands and reboot to apply the changes:

$ sudo netplan apply
$ sudo systemctl restart systemd-networkd

After the reboot, check that both the eth0 and wlan0 interfaces are up and assigned the correct IP addresses:

$ sudo ip a

Now we are going to install some new packages:

$ sudo apt update
$ sudo apt install hostapd
$ sudo apt install dnsmasq

Create /etc/hostapd/hostapd.conf and add this content for our AP configuration. Note that we are setting the access point to use:

  • 2.4GHz 802.11g, channel 6
  • SSID = routerpi, passphrase = devrel500

You can change this based on the frequency bands table:

  • 2.4GHz – b/g/n/ax with up to 14 channels, depending on the country
  • 5GHz – a/n/ac/ax with up to 37 channels, depending on the country

country_code=US
interface=wlan0
ssid=routerpi
hw_mode=g
channel=6
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=devrel500
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Add the following lines to the end of /etc/dnsmasq.conf for the DHCP configuration:

interface=wlan0
dhcp-range=10.20.0.20,10.20.0.30,255.255.255.0,300d
area=wlan
address=/gw.wlan/10.20.0.1

Note: I reserved the IP address range 10.20.0.11-19 as static IP addresses for the race cars.

Enable IP forwarding and routing between the wireless LAN and the wired network by uncommenting the following line in /etc/sysctl.conf:

net.ipv4.ip_forward=1

And execute the iptables command:

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
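Note that an iptables rule added this way does not survive a reboot on its own. One common approach (an assumption on my part — use whichever persistence mechanism you prefer) is the iptables-persistent package:

$ sudo apt install iptables-persistent
$ sudo netfilter-persistent save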

Reset and enable the AP and DHCP services:

$ sudo systemctl unmask hostapd
$ sudo systemctl enable hostapd
$ sudo systemctl enable dnsmasq.service
$ sudo systemctl daemon-reload
$ sudo reboot now

Wait a few minutes and you should see the routerpi WiFi network appear in the WiFi network list. Then check whether you can connect to this wireless network from your smartphone or laptop.

Voila! You now have a portable WiFi router in your cluster. Well done!

Epilogue

Thank you for going on this long journey with me! We have covered quite a few different technologies:

  • Built a bare-metal, portable Kubernetes cluster with Raspberry Pis
  • Used the FARM framework to implement a client/server web application
  • Built a WiFi router on a Raspberry Pi
  • DevOps in action – built and published Docker containers, and deployed them locally to create a real microservices-based application
  • Assembled IoT devices and created web services for wireless communication

I hope you have as much fun as I did with this project!

 
