Table of contents:
- Introduction
- Installing Docker
- Creating Dockerfile
- Building Image and Running Container
- Creating Volumes
- Using GitHub Actions for full automation

You will learn:
- how to write a Dockerfile
- how to build a Docker image and run the container
- how to use docker-compose
- how to make Docker keep files between the container's runs
- how to pass environment variables into the container
- how to use GitHub Actions for automation
- how to set up a self-hosted runner
- how to use runner secrets
Let's say you have a nice Discord bot written in Python and a VPS to host it on. The only question left is how to run it 24/7. You might have been advised to use the screen terminal multiplexer, but it has some disadvantages:
- Every time you update the bot, you have to SSH into your server, attach to the screen session, shut down the bot, run `git pull`, and start the bot again. You might have an extension management setup that lets you update the bot without restarting it, but there are other cons as well.
- If you update some dependencies, you have to update them manually.
- The bot doesn't run in an isolated environment, which is bad for security.

But there's a nice and easy solution to these problems: Docker! Docker is a containerization tool that automates things like updating dependencies and running the application in the background. So let's get started.
The best way to install Docker is to use the convenience script provided by the Docker developers themselves. You just need two commands:
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
To tell Docker what it has to do to run the application, we need to create a file named `Dockerfile` in our project's root.
- First, we need to specify the base image. This gives us a preconfigured environment with the tools we need to run the bot, such as the Python interpreter
FROM python:3.10-bullseye
- Next, we need to copy our requirements file to some directory inside the container. Let's call it `/app`
COPY requirements.txt /app/
- Now we need to set that directory as the working directory and install the requirements
WORKDIR /app
RUN pip install -r requirements.txt
- The only thing left to do is to copy the rest of the project's files and run the main executable
COPY . .
CMD ["python3", "main.py"]
The final version of Dockerfile looks like this:
FROM python:3.10-bullseye
COPY requirements.txt /app/
WORKDIR /app
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
Now update the project on your VPS, and we can run the bot with Docker.
- Build the image (the dot at the end is important: it sets the build context to the current directory)
$ docker build -t mybot .
- Run the container
$ docker run -d --name mybot mybot:latest
- Read the bot's logs (note that `docker logs` shows both STDOUT and STDERR)
$ docker logs -f mybot
If everything went successfully, your bot will go online and will keep running!
Just two commands to run a container is cool, but we can shorten it to one simple command. For that, create a `docker-compose.yml` file in the project's root and fill it with the following contents:
version: "3.8"
services:
  main:
    build: .
    container_name: mybot
Update the project on the VPS, remove the previous container with `docker rm -f mybot`, and run this command:

docker-compose up -d --build

Now Docker will automatically build the image for you and run the container.
Files created during a container's run are destroyed when the container is recreated. To keep some files around, we need to use volumes, which map a directory inside the container to a directory on the host's drive.
1. Create a new directory somewhere and copy its path

$ mkdir mybot-data && echo $(pwd)/mybot-data

My path is `/home/exenifix/mybot-data`; yours is most likely different.
2. In your project, store the files that need to be persistent in a separate directory (e.g. `data`)
3. Add a `volumes` section to `docker-compose.yml` so it looks like this:
version: "3.8"
services:
  main:
    build: .
    container_name: mybot
    volumes:
      - /home/exenifix/mybot-data:/app/data
The path before the colon `:` is the directory on the host's drive, and the path after it is the directory inside the container. All files saved in that directory inside the container are stored in the host's directory as well, and Docker will access them from the drive.
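With the volume in place, the bot only has to read and write files under the `data` directory; Docker handles the rest. A minimal sketch (the `counters.json` filename is just an example):

```python
import json
from pathlib import Path

# `data` is the directory mapped to the host via the volume
DATA_DIR = Path("data")
DATA_DIR.mkdir(exist_ok=True)

def save_counters(counters: dict) -> None:
    # Files written here survive container recreation
    (DATA_DIR / "counters.json").write_text(json.dumps(counters))

def load_counters() -> dict:
    path = DATA_DIR / "counters.json"
    if path.exists():
        return json.loads(path.read_text())
    return {}
```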
Now it's time to fully automate the process and make Docker update the bot automatically on every commit or release. For that, we will use a GitHub Actions workflow, which basically runs some commands when we need them. You can read more about workflows here.
We will not be able to use `.env` files with the workflow, so it's better to store the environment variables as Actions secrets.
- Head to your repository page -> Settings -> Secrets -> Actions
- Press `New repository secret`
- Give it a name like `TOKEN` and paste the value

Now we will be able to access its value in the workflow as `${{ secrets.TOKEN }}`. However, we also need to pass the variable into the container. Edit `docker-compose.yml` so it looks like this:
version: "3.8"
services:
  main:
    build: .
    container_name: mybot
    volumes:
      - /home/exenifix/mybot-data:/app/data
    environment:
      - TOKEN
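Inside the container, the bot can then read the token from the environment. A minimal sketch using only the standard library (the helper name is hypothetical):

```python
import os

def get_token() -> str:
    # TOKEN is injected through the `environment:` section of docker-compose.yml
    token = os.getenv("TOKEN")
    if token is None:
        raise RuntimeError("TOKEN environment variable is not set")
    return token

# Usage in the bot's entry point, e.g. with discord.py:
# bot.run(get_token())
```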
To run the workflow on our VPS, we need to register it as a self-hosted runner.
- Head to Settings -> Actions -> Runners
- Press `New self-hosted runner`
- Select the runner image and architecture
- Follow the instructions, but don't start the runner
- Instead, install it as a service
$ sudo ./svc.sh install
$ sudo ./svc.sh start
Now we have registered our VPS as a self-hosted runner and can run workflows on it.
Create a new file `.github/workflows/runner.yml` and paste the following contents into it (it is easy to understand, so I am not going to add many comments):
name: Docker Runner

on:
  push:
    branches: [ master ]

jobs:
  run:
    runs-on: self-hosted
    environment: production
    steps:
      - uses: actions/checkout@v3
      - name: Run Container
        run: docker-compose up -d --build
        env:
          TOKEN: ${{ secrets.TOKEN }}
      - name: Cleanup Unused Images
        run: docker image prune -f
Run `docker rm -f mybot` (this only needs to be done once) and push to GitHub. Now if you open the `Actions` tab of your repository, you should see a workflow running your bot. Congratulations!
I have made a nice utility that reads a Docker container's logs and stops upon meeting a certain phrase, and it might be useful for you as well.
- Install the utility on your VPS

$ pip install exendlr

- Add a step to your workflow that shows the logs until it meets the "ready" phrase. I recommend putting it before the cleanup step.
- name: Display Logs
  run: python3 -m exendlr mybot "ready"
Now you should see the logs of your bot until the stop phrase is met.
WARNING
The utility only reads from STDERR and redirects to STDERR; if you are using STDOUT for logs, it will not work and will wait for the stop phrase forever. The utility exits automatically if the bot's container is stopped (e.g. an error occurred during startup) or if a log line contains the stop phrase. Make sure that your bot always prints the stop phrase when it's ready, otherwise your workflow will get stuck.
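To satisfy that requirement, the bot can print the stop phrase to STDERR once it is fully up. A minimal sketch (the helper name is hypothetical; with discord.py it could be called from the `on_ready` event):

```python
import sys

def announce_ready() -> None:
    # exendlr reads STDERR, so the stop phrase must go there;
    # flush so the line is not stuck in a buffer
    print("ready", file=sys.stderr, flush=True)

# Example with discord.py:
# @bot.event
# async def on_ready():
#     announce_ready()
```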