Ghost update many sites efficiently?

Hi,

I host many Ghost sites on a Linux (Ubuntu) server. All are working fine.

When it’s time to update… I go to each site (e.g. “cd /var/html/foo”, run “ghost update”, then “cd /var/html/bar”, run “ghost update”, and so on). The Ghost CLI dutifully does its job each time. No issues there.

I am wondering if there’s a more efficient way to update all the sites at once… or does it have to be done this way because the new version has to be linked into each site?

Thanks!

J

1 Like

I have created a bash script to do the job; we use Docker to manage all our demo instances.

That script could give you some ideas on how to do the same thing with the Ghost CLI.
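
For anyone curious what the Docker side of that might look like, here is an untested sketch. It assumes each instance lives in its own directory with a docker-compose.yml and a Compose service named “ghost” — the paths and service name are just placeholders:

#!/bin/bash
# Untested sketch: update Docker-based Ghost instances by pulling a newer
# image and recreating each container. Paths and the "ghost" service name
# are placeholders for whatever your setup actually uses.

INSTANCE_DIRS="/srv/demo1 /srv/demo2"

for dir in $INSTANCE_DIRS; do
    (
        cd "$dir" || exit 1
        docker compose pull ghost    # fetch the newer image
        docker compose up -d ghost   # recreate the container on the new image
    )
done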

2 Likes

If you don’t want to go the Docker route, a bash “for loop” saved in a file called /bin/updated-ghost-blogs will do. Something like this (untested):

#!/bin/bash

GHOST_DIRS="/sites/blog1 /sites/blog2"

for gdir in $GHOST_DIRS;
   cd $gdir;
   ghost update;
end;

Because this is still interactive, if it detects a permissions problem with one of them, you can see that and deal with the output.

A fancier version of this script would download the new tar file once at the beginning, and then symlink into each directory in the expected location so you don’t have to download the new release N times.

If all your directories under “/sites” are Ghost blogs, you could also dynamically look them up:

GHOST_DIRS=$(find /sites -maxdepth 1 -mindepth 1 -type d)
3 Likes

Hi,

Thanks for the quick response. I thought about the Bash option. Ideally I wanted a Docker container… but I ran into the issue that Ghost wanted access to systemd, which was not possible in Docker. That was about a year ago or so. Not sure if that’s still the case; I have not tried Docker lately.

Other CMSs offer the option of keeping the code in one directory and then creating symlinks into the other sites. I was considering that option, but I’m not sure Ghost would be happy with it.

I will give the Bash script a try. Thanks!

J

Ghost only uses systemd to manage the service.

If Ghost is run in a Docker container, there’s no need to set up the systemd service, since Docker itself can make sure the service starts at boot and is restarted if it fails.
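
For example, a rough sketch using the official ghost image from Docker Hub (the container name, URL, and volume name are placeholders), where the restart policy takes over the job the systemd unit would otherwise do:

# Placeholder names and values; --restart covers boot-time start and crash recovery.
docker run -d \
  --name ghost-blog1 \
  --restart unless-stopped \
  -p 2368:2368 \
  -e url=https://blog1.example.com \
  -v ghost_blog1_content:/var/lib/ghost/content \
  ghost:5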

Personally, I prefer systemd for service management, prefer Podman when I need containers, and use Docker only when necessary.
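
As an untested sketch of that combination (container name and port are placeholders), Podman can generate a systemd unit for a container so systemd stays the service manager:

# Run Ghost in a Podman container, then let systemd manage it as a user service.
podman run -d --name ghost-blog1 -p 2368:2368 docker.io/library/ghost:5

# Generate a unit file for the container and enable it.
podman generate systemd --new --files --name ghost-blog1
mkdir -p ~/.config/systemd/user
mv container-ghost-blog1.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-ghost-blog1.service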

2 Likes

Understood. Thanks for sharing the info.

If you get an error like:

syntax error near unexpected token

you can use the following syntax instead (do + done):

#!/bin/bash

GHOST_DIRS="/var/www/blog1 /var/www/blog2"

for gdir in $GHOST_DIRS;
do
   cd $gdir;
   ghost update;
done
2 Likes

Thanks. I might have been mixing Fish and Bash syntax…

1 Like

Hi,

I just updated my sites using the script provided (with minor changes), and it worked:

#!/bin/bash
# Mass Ghost upgrade…
# All dirs here must be Ghost installations

BDIR=$(pwd)            # Where are we now?
GHOST=$(/bin/ls -d */) # Only get dirs

for i in $GHOST;       # Start the loop
do
    cd "$i";
    ghost update --no-restart;
    cd "$BDIR";
done

After the script has run, remove the old code dirs with find and exec, if desired:

you@yourserver:/var/yourdir$ find . -type d -name "5.XY.Z" -exec rm -fr {} \;
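
To double-check before deleting anything, the same find can be run with -print first to see exactly which directories match:

you@yourserver:/var/yourdir$ find . -type d -name "5.XY.Z" -print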

You might need to reload NGINX and restart Ghost manually. Good luck!
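
Since --no-restart was used, a short follow-up pass (run from the same parent directory) can handle the restarts; sudo may or may not be needed for the NGINX reload in your setup:

#!/bin/bash
# Restart each install after the mass update, then reload NGINX.
for i in $(ls -d */); do
    (cd "$i" && ghost restart)
done
sudo systemctl reload nginx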

1 Like

Here’s a version of the script which allows customizing GHOST_DIRS through the environment:

#!/bin/bash
# Mass ghost upgrade…
# Run from directory above all your Ghost installs 
# or set "GHOST_DIRS" to space-separated list of custom dirs

: "${GHOST_DIRS:=$(ls -d */)}"
for i in $GHOST_DIRS; { (cd "$i" && ghost update --no-restart) }

Within the for loop, the parentheses create a subshell. The effect is that the “cd” only affects the subshell, so when the subshell exits after each directory is visited, the current directory reverts to that of the parent process, which never changed. That’s why it’s safe to remove the steps that store and later restore the current $PWD.

I’m not sure why --no-restart is used.

Here’s an example of how to pass a custom set of dirs:

GHOST_DIRS="./my-first ./my-second" ./mass-update.sh

Or export GHOST_DIRS in your .bash_profile to make the change permanent.
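
For example, in ~/.bash_profile (the paths are placeholders):

# ~/.bash_profile
export GHOST_DIRS="/var/www/blog1 /var/www/blog2"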

The script will still be somewhat slow and inefficient, because Ghost downloads a new copy of the release for each install. A speed-up could be gained by downloading the release once and copying it into place for each install.

I haven’t looked closely at how feasible or easy that is, but a lot of the time spent in the upgrade cycle is the “download and unpack” stage, which would be exactly the same for each install.

1 Like