What is needed to make ghost highly available and scalable?
I currently have my Ghost blog set up on a single EC2 instance with a Cloudflare CDN, but I’m expecting this website to be high traffic. What else do I need to make sure it can handle growing traffic?
I have a couple of suggestions -
Cache as much as possible. Most of your content will be static, so there’s no reason to stress your instance by serving the same resource over and over. I personally use Varnish as the middleman between nginx and Ghost (note that my instance is pretty different from the recommended stack - I don’t require high availability, so it’s my experimental playground), and you can add cache headers that are validated with something like an ETag. I’m no caching expert, so take what I say with a grain of salt, but with proper caching in place your origin instance should rarely be hit
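To make the ETag idea above concrete, here’s a minimal sketch (not Ghost’s or Varnish’s actual internals - the function names are made up for illustration) of how ETag revalidation lets the origin answer with a tiny 304 instead of re-sending the whole page:

```python
import hashlib

def make_etag(body):
    # An ETag is just an opaque fingerprint of the response body
    return '"' + hashlib.md5(body).hexdigest() + '"'

def handle_request(body, if_none_match=None):
    """Simulated origin: if the client's cached ETag still matches,
    reply 304 Not Modified with an empty body instead of the full page."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""          # cache revalidated, nothing re-sent
    return 200, body             # full response (plus the ETag header)

body = b"<html>post content</html>"
status1, payload1 = handle_request(body)                  # first visit
status2, payload2 = handle_request(body, make_etag(body)) # revalidation
```

The first request pays the full transfer cost; every revalidation afterwards costs the origin almost nothing, which is why a well-cached blog can survive traffic spikes on a small instance.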
Make your page performant - i.e. minify assets. The more content you send, the more resources are required to fulfill each request, and the more resources each request needs, the fewer you can handle at once. For example, you don’t need to send a full 4K image weighing 12 MB for your 60px-square author profile photo - if you resize it (which would bring the size down to 0.5 MB), it will (theoretically) take 1/24 as long to send, which means your instance can spend the other 23/24 of the time responding to other requests
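The arithmetic in that example (the sizes are hypothetical round numbers, and it assumes transfer time scales linearly with payload size) works out like this:

```python
full_size_mb = 12.0   # hypothetical 4K original
resized_mb = 0.5      # hypothetical resized thumbnail

# Transfer time is roughly proportional to bytes sent,
# so the speedup is just the size ratio:
speedup = full_size_mb / resized_mb          # 24x faster

# Fraction of the original transfer time freed up for other requests:
time_freed = 1 - resized_mb / full_size_mb   # 23/24 of the time
```

In practice network overhead means the gain won’t be exactly linear, but the order of magnitude holds: shrinking assets directly multiplies how many requests one instance can serve.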
In addition to Vikaspotluri123’s suggestions above about CDNs and minifying assets, you may also want to take a look at https://philio.me/installing-ghost-high-availability/ (not my own site, I should mention - I found it via a Google search one day), which outlines an HA Ghost blog setup. Since you mentioned that you are using EC2, here’s what I would suggest for a high-traffic site. I should note that AWS costs can grow fairly quickly, so depending on your budget, this may not all be feasible.
- use a load balancer, probably an AWS ELB, so that you have the option of spreading the load across multiple EC2 instances hosting the same Ghost blog.
- consider off-loading the database to AWS RDS.
- the HA article above uses Gluster as a common, distributed filesystem across the EC2 instances. Another option is AWS’s EFS (Elastic File System), which, unless I’m mistaken, is also a kind of distributed filesystem that can be shared across EC2 instances.
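As a rough sketch of the EFS option (the filesystem ID, region, and paths below are placeholders - substitute your own), each EC2 instance would mount the shared volume over NFS and Ghost would be pointed at it:

```shell
# Hypothetical IDs/paths - run on every EC2 instance behind the ELB.
sudo mkdir -p /mnt/ghost-content
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/ghost-content

# Then point each Ghost install's content directory at the shared mount
# (e.g. via paths.contentPath in config.production.json), so uploaded
# images and themes are visible to all instances.
```

That gets all instances reading and writing the same content files, though as discussed below it doesn’t solve Ghost’s in-memory state problem.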
I thought that Ghost is made to run only on one instance.
It’s not so much that Ghost is made to run on only one instance; it’s that Ghost is only capable of running on one instance right now - see External in-memory cache option
Based on the article, it looks like the author is trying to isolate the components of the server, which is something that’s done in scaling, but you still have the “bottleneck” of the single instance of Ghost
The AWS ELB does, if I recall properly, allow the use of application cookies for sticky sessions so the user doesn’t “bounce around” across EC2 instances and lose the in-memory session data.
It seems this is a per-user thing rather than a global thing (I’m not super familiar with AWS, so this interpretation could be completely wrong). The issue is that Ghost itself caches quite a bit in memory in order to minimize database lookups, so you’re going to run into different users being served different content. This might not be too much of an issue on the client side (since the worst-case scenario is the client gets stale content), but it can be a huge risk on the admin side, because different instances will have different settings - which means (for example) a user who was demoted from the administrator role could still retain their permissions and wreak havoc
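To illustrate the risk (this is a toy model, not Ghost’s actual caching code - the class and names are invented), here are two app instances that each cache user roles in memory, where only the instance that handled the demotion invalidates its cache:

```python
class Instance:
    """Toy app server that caches user roles from a shared database."""
    def __init__(self, db):
        self.db = db
        self.role_cache = {}

    def get_role(self, user):
        # Serve from the in-memory cache when warm, to skip DB lookups
        if user not in self.role_cache:
            self.role_cache[user] = self.db[user]
        return self.role_cache[user]

db = {"eve": "administrator"}          # shared database
a, b = Instance(db), Instance(db)      # two instances behind the LB
a.get_role("eve"); b.get_role("eve")   # both caches are now warm

db["eve"] = "author"                   # eve is demoted via instance a...
a.role_cache.pop("eve")                # ...which invalidates only itself

role_on_a = a.get_role("eve")          # fresh: "author"
role_on_b = b.get_role("eve")          # stale: still "administrator"
```

Instance b keeps honoring the old administrator role until its cache is invalidated - exactly the shared-state problem the external in-memory cache proposal is meant to solve.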