@125m125
Last active March 2, 2024 00:28
Simple automatic backup with gpg asymmetric encryption
#!/bin/bash
# go to the working directory
echo "$(date) Preparing..."
set -e # exit immediately on any error
cd /root/backup/backups
# current date for the file names (careful if you run this more than once per day)
DATE=$(date +%Y%m%d)
NAME="$(uname -n)-${DATE}"
cleanup() {
    echo "$(date) cleaning up..."
    rm -r "/root/backup/backups/${DATE}/"
}
trap cleanup EXIT # Set the cleanup function to be called on exit
# create and use working directory
mkdir -m700 "${DATE}"
cd "${DATE}"
chown root:root .
# create and use files directory to collect files for the archive
mkdir -m700 files
cd files
# Create mysqldumps: read credentials from mysql-pw.txt; use the utf8mb4
# charset; hex-encode blobs for portability; dump each database in a single
# transaction; write one INSERT statement per row (diff-friendly, but not
# recommended for large databases); one database per file.
echo "$(date) dumping:"
mkdir mysqldump
cd mysqldump
echo "$(date) db1..."
mysqldump --defaults-extra-file=/root/backup/mysql-pw.txt --default-character-set=utf8mb4 --hex-blob --single-transaction --skip-extended-insert --triggers --routines --events --databases db1 -u dump > db1.sql
echo "$(date) db2..."
mysqldump --defaults-extra-file=/root/backup/mysql-pw.txt --default-character-set=utf8mb4 --hex-blob --single-transaction --skip-extended-insert --triggers --routines --events --databases db2 -u dump > db2.sql
# ....
# Copy all config files (careful with permissions if not root)
cd ..
echo "$(date) copying config files..."
cp -a /etc/ . # -a preserves permissions, ownership, symlinks, and timestamps
echo "$(date) backing up pihole..."
mkdir pihole
cd pihole
pihole -a -t
service pihole-FTL stop
cp /etc/pihole/pihole-FTL.db .
service pihole-FTL start
cd ..
echo "$(date) dumping mongodb..."
mkdir mongo
cd mongo
mongodump -o .
cd ..
# TODO: back up other important files, add some home folders, ... remember to end in the <date>/files directory
# compress files to tgz-archive
cd ..
echo "$(date) compressing..."
tar czf "${NAME}.tgz" files/
# asymmetrically encrypt the archive with the public key
echo "$(date) encrypting..."
gpg --output "../${NAME}.gpg" --encrypt --recipient backup@125m125.de "${NAME}.tgz"
cd ..
# scp encrypted archive to offsite backup server
echo "$(date) scping..."
scp "${NAME}.gpg" offsite:
# remove backups older than 6 days
echo "$(date) removing old backups..."
find . -maxdepth 1 -type f -name "$(uname -n)*" -mtime +6 -exec rm {} \;
echo "$(date) done..."
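The retention rule at the end of the script (`find ... -mtime +6`) can be rehearsed on throwaway files before pointing it at real backups. This sketch uses a temporary directory, made-up hostnames, and GNU `touch -d`:

```shell
# Demonstrate the 6-day retention rule on throwaway files.
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/myhost-20240101.gpg"  # older than 6 days: deleted
touch "$dir/myhost-20240110.gpg"                   # recent: kept
find "$dir" -type f -name 'myhost-*' -mtime +6 -exec rm {} \;
ls "$dir"  # only the recent file should remain
```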
# mysql-pw.txt
[client]
password=<password>
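Since `--defaults-extra-file` points mysqldump at this file, it should be readable by root only. One way to create it (in `/root/backup`, as the script assumes) without it ever being world-readable:

```shell
# Create the credentials file with restrictive permissions from the start.
umask 077                      # files created below get mode 600
cat > mysql-pw.txt <<'EOF'
[client]
password=<password>
EOF
chmod 600 mysql-pw.txt         # enforce owner-only read/write explicitly
```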
# Generate a gpg keypair on a private (trusted) computer
gpg --generate-key
# Make sure to create a secure backup of the private key, ideally in multiple
# locations and in a form that outlives the backups themselves (for example,
# print it as a QR code; verify that the software needed to restore the key
# still works after system updates). Without the private key you will not be
# able to decrypt the backup files. I wouldn't store it on the regular backup
# servers, since that defeats the point of asymmetric encryption (or at least
# make sure you encrypt it with a strong, unique password)...
# Export public key
gpg --export --armor --output backup.pub
# Copy key to server
scp backup.pub server:
# On the server
ssh server
# Create mysql backup user
mysql -uroot -p
CREATE USER 'dump'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT, RELOAD, PROCESS, LOCK TABLES, EXECUTE, SHOW VIEW ON db1.* TO 'dump'@'localhost';
GRANT SELECT, RELOAD, PROCESS, LOCK TABLES, EXECUTE, SHOW VIEW ON db2.* TO 'dump'@'localhost';
# ...
exit
# I assume you run the backup as root, since it needs read access to all the
# config files. This may not be best practice, but it keeps things simple.
# Feel free to create a dedicated user and grant it the required access instead.
sudo su
# Configure ssh access to the backup server.
# I recommend a new, dedicated ssh key without a passphrase, since automatic
# backups cannot enter one. The backup server can't use MFA either.
# I will leave the key setup as an exercise for the reader (there are a lot of tutorials out there ;) )
# configure access to backup server by adding the Host entry
nano ~/.ssh/config
# Import gpg key
gpg --import backup.pub
# Trust key
gpg --edit-key backup@125m125.de
trust
5 # ultimate
y # confirm ultimate
save # save and exit
# create backup main directory
mkdir -m700 /root/backup/
cd /root/backup
mkdir -m700 backups
# create mysql-pw.txt with the password for the user
nano mysql-pw.txt
# create script for creating the backups; remember to adjust variables and add important files
nano createBackup.sh
# add periodic updates
crontab -e
# for example "21 3 * * * bash /root/backup/createBackup.sh > /root/backup/backup.log 2>&1" to run every night at 03:21 while writing the logs to /root/backup/backup.log
# With monitoring using healthchecks.io: curl -fsS -m 10 --retry 5 https://hc-ping.com/your-uuid-here/start ; bash /root/backup/createBackup.sh > /root/backup/backup.log 2>&1 ; curl -fsS -m 10 --retry 5 --data-binary @/root/backup/backup.log https://hc-ping.com/your-uuid-here/$?
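The healthchecks.io one-liner can also be wrapped in a small function, which keeps the crontab entry readable. `run_monitored` is an illustrative name, not part of the original setup; the UUID placeholder is the same one used above:

```shell
# run_monitored LOG CMD...: ping the check's /start endpoint, run CMD with
# output captured in LOG, then report CMD's exit status (and the log) back.
hc_url="https://hc-ping.com/your-uuid-here"
run_monitored() {
    local log=$1; shift
    curl -fsS -m 10 --retry 5 "$hc_url/start" >/dev/null 2>&1 || true
    "$@" > "$log" 2>&1
    local rc=$?
    curl -fsS -m 10 --retry 5 --data-binary "@$log" "$hc_url/$rc" >/dev/null 2>&1 || true
    return $rc
}
# usage: run_monitored /root/backup/backup.log bash /root/backup/createBackup.sh
```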
# ~/.ssh/config
Host offsite
    HostName <hostname or IP>
    User <username on target server>
    IdentityFile /path/to/private_key
    Port <port>
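To restore, decrypt on a machine that holds the private key and unpack: `gpg --decrypt <name>.gpg | tar xzf -`. The whole encrypt/restore cycle can be rehearsed with a disposable key in a temporary GNUPGHOME; the uid `throwaway@example.org` and the file contents below are made up for the demo:

```shell
# Rehearse the backup's encrypt/restore cycle with a disposable key.
export GNUPGHOME=$(mktemp -d) && chmod 700 "$GNUPGHOME"
gpg --batch --passphrase '' --quick-generate-key throwaway@example.org
workdir=$(mktemp -d) && cd "$workdir"
mkdir files && echo "important data" > files/db1.sql
tar czf backup.tgz files/
gpg --batch --output backup.gpg --encrypt --recipient throwaway@example.org backup.tgz
# restore into a separate directory, as you would on a recovery machine:
mkdir restore && cd restore
gpg --batch --decrypt ../backup.gpg | tar xzf -
cat files/db1.sql
```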