How many highly technical acronyms can you possibly fit into the title of a blog post :-) Believe me, I originally planned to combine this post with two more topics… so please be gentle and read on. I tried hard.
This blog post is part of our smart mirror series – we’re recreating an existing showcase and putting special focus on true Edge AI and other cool technologies. Please also explore our posts about Transfer Learning & the Teachable Machine, an intro to the revised Smart Mirror v2 and our Smart Mirror Oktoberfest update!
For the updated version of the smart mirror, we intend to run all the processing and ML inference on a small, well-known computer: the Raspberry Pi. With the help of the Coral USB Accelerator we’re speeding up the processing. As the name suggests, the Coral USB Accelerator is connected to the Pi via USB – below is a quick image that shows our older setup, which – from a hardware perspective – was still using the Coral Teachable Machine that I blogged about earlier.
As we love Docker, we also set a goal early on to containerise our application. Looking at the application from a Docker perspective, these are the system requirements and particular challenges:
- We have a web app: We decided to use Python3’s Flask as a web framework in combination with the Flask-SocketIO plugin to provide server-to-client communication. We mainly need Socket.IO so the hardware button can tell the web UI to start our demo.
- We have an ML application with a hardware dependency: For the Machine Learning (ML) part to work, we need a) libraries such as the Edge TPU lib and its Python3 APIs installed, and b) access to the USB ports and the permissions to talk to them.
- We also have a physical button which is used to start/restart the demo – this means we need GPIO access via the Python3 RPi.GPIO library.
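Stripped of the hardware specifics, the button handling from the last point boils down to: watch an input pin, debounce it, and fire a callback. Here’s a library-agnostic sketch of that loop – the polling approach and all names are ours, not the mirror’s actual code; on the Pi you’d pass something like `lambda: GPIO.input(17)` from RPi.GPIO as the pin reader:

```python
import time

def watch_button(read_pin, on_press, debounce_s=0.05, poll_s=0.01, max_polls=None):
    """Poll an input pin and fire on_press on each falling edge.

    read_pin: callable returning the current pin level
    (1 = not pressed, 0 = pressed, assuming a pull-up resistor).
    max_polls: stop after this many iterations (None = run forever).
    """
    last = read_pin()
    polls = 0
    while max_polls is None or polls < max_polls:
        level = read_pin()
        if last == 1 and level == 0:   # falling edge: button pressed
            time.sleep(debounce_s)     # crude debounce: wait, then re-check
            if read_pin() == 0:
                on_press()
        last = level
        time.sleep(poll_s)
        polls += 1
```

In the real app, `on_press` would be the place to emit a Socket.IO event to the browser; RPi.GPIO’s `add_event_detect` can replace the polling loop entirely once you run on actual hardware.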
There are plenty of examples out there for how to get a web app running and exposing its ports via Docker. There is much less material on the Coral USB Accelerator, or on running a containerized application on a Raspberry Pi with GPIO access. We needed to find a way to give our container access to the USB Accelerator and the GPIO pins, which I describe in the next sections.
A Dockerfile for Web, Coral USB Accelerator and GPIO access
In case you came here for a quick copy & paste example of the Dockerfile, check the file below. It bundles the exposure of ports for the web app with all installations necessary for the Edge TPU libs. Special care has to be taken when the container is run – we need to grant extra privileges and device access – see the example commands below.
```dockerfile
FROM debian:buster

RUN apt update
RUN apt install curl gnupg ca-certificates zlib1g-dev libjpeg-dev -y
RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt update
RUN apt install libedgetpu1-std python3 python3-pip python3-edgetpu -y
RUN curl https://dl.google.com/coral/python/tflite_runtime-1.14.0-cp37-cp37m-linux_armv7l.whl > tflite_runtime-1.14.0-cp37-cp37m-linux_armv7l.whl
RUN pip3 install tflite_runtime-1.14.0-cp37-cp37m-linux_armv7l.whl

COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt

EXPOSE 5005
ENTRYPOINT ["python3"]
CMD ["mirror.py"]
```
Before looking into how we RUN this image, here are a few comments on the above Dockerfile:
- We extend our image from debian:buster as the latest Raspbian is based on Buster. It’s not as small as an Alpine base image, but at about 50 MB still small enough. The USB Accelerator installation is made for Debian-based systems; using Alpine here would likely complicate things. We have no particular goal of keeping the size small – by the way: 316 MB is the final compressed Docker image size.
- curl, gnupg and ca-certificates are mainly required for the installation of the Edge TPU dependencies. One could remove these afterwards and save a few MB. We need zlib1g-dev and libjpeg-dev for the Python3 Pillow library – its C bindings are compiled at installation time for our ARM-based Raspberry Pi. Again, the dev dependencies could be removed afterwards and a multi-stage Docker build could result in a slimmer image – but that’s not our main goal here.
- To support the Coral Edge TPU (via USB Accelerator) and to install the Python3 libs for it, we install these dependencies: libedgetpu1-std python3 python3-pip python3-edgetpu.
- For TensorFlow Lite itself, we use the “runtime only” installation, which saves some space and time.
- For the other Python3 dependencies such as Flask, Flask-SocketIO, or Pillow (image processing) we use a requirements.txt file and install them via ‘pip3 install -r requirements.txt’.
For completeness, here’s the requirements.txt file – these dependencies are installed via pip3:
```
Flask==1.1.1
Flask-SocketIO==4.2.1
Pillow==6.1.0
RPi.GPIO==0.7.0
```
To run the app, you first need to build the Docker image – on the Raspberry Pi itself. If you wanted to build it on your Mac, you’d need something like QEMU to emulate the ARM architecture, which makes things really complicated. Another option is to use a CI/CD system such as GitLab with a remote GitLab Runner on a Raspberry Pi that builds your container. The easiest option is really to just check out your project on the Pi, have Docker installed and build it there.
Here’s how to build it (easy, I know):
```shell
cd smartmirror
docker build -t hansamann/smartmirror:1.0 .
```
And if we assume it all works, we can run it – take special note of how ports are exposed and the container gets USB access for the Edge TPU:
```shell
docker run -d --restart unless-stopped --privileged --device /dev/gpiomem \
  -p 5005:5005 -v /dev/bus/usb:/dev/bus/usb hansamann/smartmirror:1.0
```
So here, we’re running a detached container (-d), which means Docker starts it in the background. For development and debugging, it might be handy to use -it instead, which attaches your terminal session to the container and routes keyboard input to it.
The option --restart unless-stopped makes sure our container starts again when the Raspberry Pi reboots or if the container crashes. It will only stay down if we explicitly stop it, e.g. via docker stop. That’s nice.
As our container needs to detect a physical button that is connected to the GPIO pins of the Raspberry Pi, we need --privileged access. Using --device /dev/gpiomem, we also make the GPIOs accessible from within the container.
The Flask web application is exposed on port 5005 of the local system via -p 5005:5005. The EXPOSE instruction in the Dockerfile documents this port.
Finally, a bind mount is used to make the USB devices, including the Coral USB Accelerator, available to the code running inside the container. On Linux, everything is a file :-) USB devices live under /dev/bus/usb on Raspbian – hence we mount that path into the container: -v /dev/bus/usb:/dev/bus/usb.
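To sanity-check that the accelerator is actually visible from inside the container, you can scan sysfs for the Coral’s USB vendor IDs – 1a6e before the Edge TPU runtime initializes the device, 18d1 afterwards. This little helper is our own sketch, not part of the mirror’s code, and it assumes /sys is visible in the container (Docker mounts it read-only by default):

```python
import pathlib

# Vendor IDs the Coral USB Accelerator enumerates with:
# 1a6e before the Edge TPU runtime initializes it, 18d1 afterwards.
CORAL_VENDOR_IDS = {"1a6e", "18d1"}

def find_coral(sys_usb="/sys/bus/usb/devices"):
    """Return the sysfs paths of USB devices that look like a Coral."""
    hits = []
    for dev in pathlib.Path(sys_usb).glob("*"):
        vendor_file = dev / "idVendor"
        if vendor_file.is_file() and vendor_file.read_text().strip() in CORAL_VENDOR_IDS:
            hits.append(str(dev))
    return hits

if __name__ == "__main__":
    print(find_coral() or "No Coral USB Accelerator visible")
```

If the list comes back empty inside the container but not on the host, the -v /dev/bus/usb:/dev/bus/usb mount or the --privileged flag is usually the culprit.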
Summary & Next up…
I hope you found this post really valuable – if you’re thinking of containerizing your EdgeTPU application for usage on a Raspberry PI, you should have a good tutorial at hand now. If you have any questions, just let Lars and me (Sven) know or comment below!
Next up, we will focus on refactoring the web-based UI of our demo. Luckily, with our new colleague Valentin on board, we’re very much looking forward to doing this. So watch out for a blog post about our updated UI soon. Before that, I might also share some updates on the hardware (we created a small, mirror-specific, laser-cut base that holds the main components and the button together) and some thoughts that might help in understanding transfer learning. For now – have fun!