Hi,
I am running Ghost self-hosted on Kubernetes, with two identical replicas deployed as part of a Kubernetes Deployment, so each replica has a roughly 50/50 chance of receiving any incoming request. Strangely, when I request a specific post (which exists and is published), one replica returns the post while the other returns the 404 page. I can confirm this is happening by examining each replica's logs.
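To make the flapping easy to see, here is a rough probe (TypeScript on Node 18+, with a placeholder URL) that tallies status codes over repeated requests. With round-robin load balancing across one healthy and one broken replica, the 200/404 counts come out roughly even:

```ts
// Hypothetical probe: repeatedly fetch the post and tally status codes.
// With two replicas behind a round-robin Service, a healthy/broken pair
// shows up as a roughly 50/50 split between 200 and 404.
const url = "https://blog.example.com/my-post/"; // placeholder URL

async function probe(attempts: number): Promise<void> {
  const counts = new Map<number, number>();
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url, { redirect: "manual" });
    counts.set(res.status, (counts.get(res.status) ?? 0) + 1);
  }
  console.log(Object.fromEntries(counts)); // e.g. { '200': 11, '404': 9 }
}

probe(20).catch(console.error);
```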
I am not sure whether this is a bug or intended behaviour. Is there some configuration I am missing for running Ghost with multiple replicas, or is Ghost only intended to run as a singleton? Thanks in advance for any help or advice.
I read through several forum posts about this and found multiple links to this FAQ page, but no discussion of why this technical limitation exists.
My guess is this: embedded in Ghost is a background task scheduling system that runs a number of tasks designed to be scheduled and run exactly once. I presume that if duplicate tasks ran in parallel on a twin server, bad things would happen.
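To illustrate the kind of hazard I mean, here is a minimal sketch (illustrative, not Ghost's actual scheduler code): a timer-driven job that publishes scheduled posts. Run it in two replicas and a due post can be processed twice, because each process keeps its own uncoordinated timer:

```ts
// Illustrative only: a naive interval-based job, one copy per process.
// Two replicas both run this loop, so a due post can be picked up and
// "published" (and e.g. emailed to subscribers) twice.
function publishDuePosts(): void {
  // Imagine: SELECT posts WHERE status = 'scheduled' AND publish_at <= now(),
  // then UPDATE the status and send the newsletter. Without coordination,
  // both replicas can read the row as 'scheduled' before either updates it.
  console.log("publishing due posts at", new Date().toISOString());
}

setInterval(publishDuePosts, 60_000); // fires independently in every replica
```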
Is there another technical reason besides this category of tasks that expect only one copy to be running?
I host over 100 Node.js apps that also use a built-in task scheduler, and this was an issue we had to solve. In the end, we were able to support both redundancy across EC2 instances on different hosts (in case one data center had a problem) and the Node.js cluster module for customers who needed more CPU processing; a simplified sketch of the single-host half is below.
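Roughly, that pattern looks like this (simplified, not our production code): only the cluster primary runs the scheduler, while the forked workers share the HTTP port. Across hosts you additionally need leader election or a shared lock so that exactly one primary fleet-wide runs each job.

```ts
// Simplified sketch: the scheduler runs only in the cluster primary;
// forked workers share the listening socket for HTTP traffic.
// Cross-host, an external lock (e.g. in the shared database or Redis)
// would still be needed so exactly one host runs the scheduled work.
import cluster from "node:cluster";
import http from "node:http";
import { cpus } from "node:os";

if (cluster.isPrimary) {
  // One scheduler per host, regardless of how many workers we fork.
  setInterval(() => console.log("running scheduled tasks"), 60_000);
  for (let i = 0; i < cpus().length; i++) cluster.fork();
} else {
  // All workers share port 3000; the cluster module distributes requests.
  http.createServer((_req, res) => res.end("ok")).listen(3000);
}
```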
I think the other thing is that Ghost has a lot of in-memory caches (e.g. settings), and as you probably know, the two hardest things in programming are naming things, cache invalidation, and off-by-one errors.
Correct, but I think there is also internal settings/state that would need to be synced (e.g. [I don’t recall if this is actually an issue] if you create a new post, only the node that received the request would be able to serve the slug unless the other nodes were restarted); see the sketch below.
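In miniature, the problem looks something like this (illustrative, not Ghost's internals): each process builds its slug-to-post routing table in memory, so only the replica that handled the write learns the new slug:

```ts
// Illustrative per-process cache: each replica holds its own copy.
const slugCache = new Map<string, number>(); // slug -> post id

function createPost(slug: string, id: number): void {
  // ...insert into the shared database...
  slugCache.set(slug, id); // only THIS replica's cache learns the slug
}

function getPost(slug: string): number | undefined {
  // The other replica's map never received the new entry, so it would
  // treat the slug as unknown (a 404) until a restart rebuilds the cache.
  return slugCache.get(slug);
}
```

Fixing that means either reading through to the database or broadcasting invalidations between processes (e.g. over Redis pub/sub), which is exactly the cache-invalidation problem from the previous post.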