Serving a file uploaded onto a VPS manually

I have a new Ghost installation running on a VPS server, and everything seems to be working well. My issue is that I want to link from within the Ghost content to files I’ve uploaded manually to my server (in this case, older plain HTML files, PDF files, etc).

It’s not clear to me how I can go about this. If I put a file into the directory that Ghost was installed into, which I understand to be the document root, and then I link to that file, I receive a 404 error.

For example:

I upload uploadedfile.html to /var/www/ghostinstall

and then use a hyperlink inside my Ghost content that points to that file, but the link resolves as a 404.

Part of the issue is that many of the HTML files I need to upload contain hyperlinks written as full URL paths, so these files need to sit in the document root for the hyperlinks within them to resolve correctly.

If anybody has any thoughts, I’d be most grateful.

Thank you in advance.

You can add the uploaded file to your theme as discussed in this thread:

I haven’t done it myself so unfortunately I can’t guide you more, but you will probably find answers if you ask for more details on that thread.


Thank you. Unfortunately, I’m dealing with a large number of files, so this doesn’t seem tenable, alas.


You can solve this by configuring your web server to not pass a specific path to Ghost and instead serve that directory… directly. Here’s an example syntax for Nginx:

    location /static/ {
      alias /sites/;
    }

There you can see I’ve told Nginx to map the /static directory to a particular directory inside my content folder. Adjust the paths to suit yourself.
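To make the mapping concrete, here is a sketch with placeholder paths (adjust them to your own install). Note that `alias` substitutes the matched location prefix with the given path, whereas `root` would append the full request URI to it:

```nginx
# A request for /static/report.pdf will be served from
# /var/www/ghost/content/archive/report.pdf on disk.
# (Paths here are placeholders, not your actual layout.)
location /static/ {
  alias /var/www/ghost/content/archive/;
}
```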


Thank you. This is very helpful.

Though, I don’t think I can do this for the document root?

If I open the site-specific .conf file in /etc/nginx/sites-enabled, I find that

location / {

has already been defined.

Is it possible to have the root URL resolve to Ghost, whilst simultaneously having another path resolve to some other part of the file system?


Could I manually define a location for every individual file I want served from the root instead?


That looks like it might work for the latter problem of serving individual files from the root.

However, unfortunately, I’m still getting a 404 when I try to implement this.


I have:

location /directory/ {
       alias /sites/;
}

But this is returning an nginx 404 when I try to load a file from that path.


I now have the first part of the problem:

location /directory/ {
       alias /var/www/xyz/content/archive/directory/;
}

working correctly. However, when I try to implement the solution below, for individual files that I need to be available at the root URL:

I receive an nginx 404 page.

The code is:

location /file.txt/ {
   alias var/www/xyz/content/archive/file.txt/;
}

where the ultimate path of the file is var/www/xyz/content/archive/file.txt, and I'm requesting it at the root URL of the site.

I’ve checked the permissions, and everything in the path has read and execute permissions.

Might not be the complete solution, but try removing the trailing slash from the alias? (So, .../archive/file.txt not .../archive/file.txt/)


Thank you. Unfortunately, this doesn’t seem to change anything.
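As an aside, another common nginx pattern for exposing a single file at a fixed URL is an exact-match location, which avoids trailing-slash ambiguity entirely. A sketch, with placeholder paths:

```nginx
# "=" makes this an exact match: only a request for exactly
# /file.txt hits this block; everything else falls through.
location = /file.txt {
  alias /var/www/xyz/content/archive/file.txt;
}
```

Note the alias here is an absolute path (leading slash) with no trailing slash, since it names a file rather than a directory.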

Since the location has already been defined, you can rename that block and then use a try_files directive.

Here’s an example from the Internet™: nginx try_files with a proxy_pass · GitHub

And here’s how you might use it with a Ghost config (don’t forget to back up your existing config!)

# ...

location / {
  try_files $uri @ghost;
}

# ...

location @ghost {
  internal; # Tells nginx that this isn't a public route
  # Config from `location /` block before it was changed
}
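Putting it together, a complete sketch of the two blocks might look like this. The `root` path, the proxy headers, and the upstream address are assumptions here: 127.0.0.1:2368 is Ghost's default port, but you should copy whatever is in your existing `location /` block rather than these placeholders.

```nginx
# Placeholder: directory containing your static files.
root /var/www/xyz/content/archive;

location / {
  # Serve the requested file from disk if it exists under
  # the root above; otherwise hand the request off to Ghost.
  try_files $uri @ghost;
}

location @ghost {
  internal; # not a publicly routable location
  # Copied from the original `location /` block; adjust to match yours.
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header Host $http_host;
  proxy_pass http://127.0.0.1:2368;
}
```

After editing, `nginx -t` will check the syntax before you reload the service.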