Ghost goes down with many API requests

  • Version: 4.41.3

  • Installation: Bitnami installation

  • Problem:

We have a Lambda function that creates new members through the Ghost API. However, this Lambda function sometimes generates a very large number of requests (many new members) and repeatedly hits the Members API.

We find that once this happens, Ghost errors out and goes down.

Any help on why this may be happening or where we should look?

Thanks!

Perhaps you could use some sort of queue mechanism here? It looks like you’re just going to have to batch-process over a longer period of time. Maybe check out Cloudflare Queues?
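A minimal sketch of that idea, assuming the Lambda can drain its member list in small, throttled batches. `process_in_batches` and `send_batch` are hypothetical names here — `send_batch` would wrap whatever call your integration already makes to the Ghost Admin API:

```python
import time
from collections import deque

def process_in_batches(members, send_batch, batch_size=50, delay_s=1.0):
    """Drain `members` in small batches, pausing between batches so the
    Ghost API is never hit with one huge burst.

    `send_batch` is a hypothetical callable supplied by you — e.g. a
    function that POSTs each member in the batch to the Ghost Admin API.
    Returns the number of batches sent."""
    queue = deque(members)
    batches_sent = 0
    while queue:
        # Take up to batch_size members off the front of the queue.
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        send_batch(batch)
        batches_sent += 1
        if queue:
            time.sleep(delay_s)  # throttle before the next batch
    return batches_sent
```

A real queue service (SQS, Cloudflare Queues) does the same thing more durably — the point is just to spread the writes out over time instead of firing them all at once.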


Thanks! I think implementing a queueing system would definitely help!

However, after much digging we discovered it’s not just the API requests that take Ghost down. It’s actually part of a larger issue, explained here: Ghost goes down with high read/write operations (Prev. Newsletter Issue)

Will you be able to help us with this?

Thank you very much and Happy New Year!

First, congratulations on having so many new members at once that you are crashing Ghost.

You mentioned elsewhere that you have Apache as a reverse proxy in front of Ghost. Review the number of connections that Apache is tuned to support and consider turning that down. I don’t use Apache anymore, but look at settings like MaxRequestWorkers, MaxConnectionsPerChild, ListenBacklog, MaxSpareThreads, and MaxSpareServers. Some of these values may need to be turned *down*. See:

https://httpd.apache.org/docs/2.4/mod/prefork.html#minspareservers
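For reference, a prefork tuning sketch — the numbers below are purely illustrative, and the right values depend on your instance’s RAM and how much concurrency Ghost itself can absorb:

```apache
# Illustrative values only — tune to your server's memory and workload.
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers       50
    MaxConnectionsPerChild 1000
</IfModule>

# Requests beyond MaxRequestWorkers wait in the listen backlog.
ListenBacklog 511
```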

Share your Apache performance tuning settings if you want someone to peer-review those.

Enable your Apache server-status page and keep an eye on it to see how your actual workload compares to your settings.
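Enabling it looks roughly like this (mod_status must be loaded first, e.g. `a2enmod status` on Debian/Ubuntu; the IP below is a placeholder for your own admin address):

```apache
<Location "/server-status">
    SetHandler server-status
    # Restrict access — replace with your own IP or network.
    Require ip 203.0.113.10
</Location>
```

Then browse to `/server-status` on your server to see busy vs. idle workers in real time.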

If Apache gets more requests than it is configured to handle, they will be queued according to ListenBacklog. This works a bit like the job-queue design, in that it effectively queues and throttles the traffic, but without the complexity of adding a job queue. But if you are really sending a massive amount of API traffic at once, then at some point you can’t set ListenBacklog high enough, because even that would run out of memory. Then you need a job-queue design to throttle the traffic.