I’ve got a couple of Ghost 6 sites that now represent a lot of work, yet I don’t have a good backup and restore solution. I keep looking at the docs, hoping that I’ll one day see a category called Backup/Restore, but that hasn’t happened yet. Instead, I have two sites with close to 8 months’ worth of content that I wouldn’t even begin to know how to recreate. It doesn’t help that I’m contemplating moving one of these sites from one hosting company to another.
While I’m starting to feel a little more comfortable with Docker, I gotta admit it’s a big change from the traditional way I’ve always deployed websites in the past…
Anyone have any good solutions for backing up an entire site?
Thanks for all your eyeballs and any pointers you might have - this forum has always been so helpful.
Get a dump of your MySQL database, and a copy of your content folder, periodically. This will be the most comprehensive backup you can have.
If you want to do it more professionally, write a simple bash script that takes a dump of your MySQL database and copies it to remote storage (like S3), together with your content folder. Then run that script every day (or week) with crontab.
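A minimal sketch of such a script, assuming the MySQL container is named ghost-db, the database is called ghost, the content folder lives at /opt/ghost/data/ghost, and the aws CLI is already configured — all of those names are placeholders to adjust for your setup:

```shell
#!/bin/bash
# Hypothetical backup sketch -- the container name, database name, paths,
# and bucket below are placeholders; adjust them to your own setup.
set -euo pipefail

backup_ghost() {
    local stamp
    stamp=$(date +%F)

    # Dump the database from inside the container. mysqldump with
    # --single-transaction gives a consistent snapshot without stopping
    # MySQL, unlike copying the raw data directory.
    docker exec ghost-db sh -c \
        'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" --single-transaction ghost' \
        | gzip > "/tmp/ghost-db-${stamp}.sql.gz"

    # Archive the content folder (themes, images, routes, etc.)
    tar czf "/tmp/ghost-content-${stamp}.tar.gz" -C /opt/ghost/data ghost

    # Ship both archives to remote storage
    aws s3 cp "/tmp/ghost-db-${stamp}.sql.gz"      "s3://my-backup-bucket/ghost/"
    aws s3 cp "/tmp/ghost-content-${stamp}.tar.gz" "s3://my-backup-bucket/ghost/"
}
```

Call backup_ghost at the end of the script and invoke it from cron, e.g. a crontab line like 0 3 * * * /home/ghost/scripts/ghost-backup.sh for a nightly run at 3am.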
Pretty sure I was able to back up my MySQL database using the above command, but I notice that the /opt/ghost/data directory contains both a /ghost and a /mysql directory. I assume these are the content directories?
Again, the variable I’m not familiar with is how the Docker containers interact with these directories. Do I only need the /ghost directory plus the MySQL dump, or do I need the full contents of /opt/ghost/data (both /ghost and /mysql) for a full backup?
Thanks again for your initial response. Hope I’m not being too dense.
I think you’re using the Docker Compose file provided by the Ghost team. Then yes, /opt/ghost/data should be your content folder, and that mysql folder is a copy of your MySQL Docker volume. But technically, copying that folder while MySQL is still running can lead to data loss, because MySQL doesn’t always write data to disk immediately. If you take a dump with the mysqldump command instead, you can be sure you’ve captured the data at that point without any loss. And importing from an SQL file is always easier than restoring a raw volume from disk.
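For reference, the dump and a later restore might look something like this — the container name ghost-db and database name ghost are assumptions taken from a typical compose setup, so substitute your own:

```shell
#!/bin/bash
# Hypothetical container/database names -- adjust to your compose file.

# Take a consistent dump without stopping MySQL:
dump_db() {
    docker exec ghost-db sh -c \
        'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" --single-transaction ghost' \
        > ghost.sql
}

# Import the dump into a fresh database later:
restore_db() {
    docker exec -i ghost-db sh -c \
        'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" ghost' \
        < ghost.sql
}
```

Running mysqldump inside the container via docker exec avoids needing a MySQL client installed on the host.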
Since you mentioned you have a few containers, here is a bash script I use that iterates through each of my Docker containers, pauses it, rsyncs its contents to another location using an SSH key, and then resumes the container. You can change the log, destination, source path, and SSH key to whatever you need, but don’t change the BACKUP_CONTAINERS line, as that builds the container list. I have this run from a cron job.
#!/bin/bash
# Where to write the backup log
BACKUP_LOG=/home/ghost/scripts/container_backup/logs/containerBackup.log
# List of all container IDs
BACKUP_CONTAINERS=$(docker ps -aq)
# Remote rsync destination (host and path)
BACKUP_DESTINATION="user@123.0.0.123:/home/ghost/backup/$(hostname)"
# Local directory holding the bind-mounted container data
BACKUP_SRCPATH=/home/ghost/containers
# Patterns to skip
EXCLUDE=('*log/' 'logs/' '*_o.jpg' '*_o.png')
SSHKEYS=/home/ghost/.ssh/id_sshkey

# Build the rsync --exclude options from the EXCLUDE array
exclude_opts=()
for item in "${EXCLUDE[@]}"; do
    exclude_opts+=( --exclude="$item" )
done

echo "Beginning backup" | tee -a "$BACKUP_LOG"
for BC in $BACKUP_CONTAINERS
do
    echo "Setting label" | tee -a "$BACKUP_LOG"
    # .Name comes back with a leading slash (e.g. /ghost), so strip it
    BACKUP_LABEL=$(docker inspect "$BC" --format '{{.Name}}' | sed 's|^/||')
    echo "Pausing ${BACKUP_LABEL}" | tee -a "$BACKUP_LOG"
    docker pause "$BC"
    rsync -avzhe "ssh -i ${SSHKEYS}" --no-perms "${exclude_opts[@]}" \
        "${BACKUP_SRCPATH}/${BACKUP_LABEL}" "$BACKUP_DESTINATION" --log-file="$BACKUP_LOG"
    echo "Resuming ${BACKUP_LABEL}" | tee -a "$BACKUP_LOG"
    docker unpause "$BC"
done
echo "Backup complete" | tee -a "$BACKUP_LOG"
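For the cron part, an entry along these lines works — the script path here is just an example:

```
# run the container backup every night at 02:30
30 2 * * * /home/ghost/scripts/container_backup/backup.sh
```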
One other thing I forgot to mention about that script: I have the volumes as a bind mount to the file system, not Docker volumes, so your compose file should have the volumes defined like this for the script to work.
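Something like this, with host paths bind-mounted instead of named volumes — the service names and paths are examples, not your exact compose file:

```yaml
services:
  ghost:
    image: ghost:6
    volumes:
      # bind mount: host path on the left, container path on the right
      - /home/ghost/containers/ghost/content:/var/lib/ghost/content
  db:
    image: mysql:8
    volumes:
      - /home/ghost/containers/ghost/mysql:/var/lib/mysql
```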