Many errors after update to 3.13.1

I think I may have gone down a rabbit hole trying to fix this and permanently broken things. At first, Ghost simply hung when I restarted it after updating. I found this, which I didn’t realize at first was for a user with sqlite3 installed, but since I had been getting the sharp error when running `npm rebuild` in versions/3.13.1, I figured it was at least related.

That didn’t resolve my issues, and I found another thread that suggested running `ghost buster`, which seems to have done something to permissions. So I deleted my original ghost-mgr user (I host on DigitalOcean) and created a new one with what I thought were the correct permissions, but now I’m getting the error below. I’m clearly out of my depth with symlinks or whatever this is, so any help would be greatly appreciated:

Debug Information:
OS: Ubuntu, v18.04.4 LTS
Node Version: v10.20.1
Ghost Version: 3.13.1
Ghost-CLI Version: 1.13.1
Environment: production
Command: `ghost update 3.13.1 --force`
An error occurred.
Message: 'EACCESS: permission denied, unlink '/var/www/ghost/versions/3.13.1/node_modules/dtrace-provider/build/''

Can you try running `ghost doctor`?

That should tell you if you have permission issues, and also give you a command to run to fix them.
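If `ghost doctor` does flag permissions, the fix it suggests usually boils down to a chown/chmod pass over the install directory. A sketch of that pattern is below, demonstrated on a throwaway directory so it is safe to run anywhere; on the server you would target /var/www/ghost with sudo, and the ghost-mgr user is just the one from this thread — substitute your own.

```shell
# Illustration of the ownership/permission layout Ghost-CLI expects, run
# against a throwaway directory so it is safe to execute as-is.
# On a real install the equivalent (needs sudo) would be roughly:
#   sudo chown -R ghost-mgr:ghost-mgr /var/www/ghost
#   sudo find /var/www/ghost -type d -exec chmod 775 {} \;
#   sudo find /var/www/ghost -type f -exec chmod 664 {} \;
demo="$(mktemp -d)"
mkdir -p "$demo/versions/3.13.1/node_modules"
touch "$demo/versions/3.13.1/node_modules/package.json"

# Directories need the execute bit so they can be traversed; files do not.
find "$demo" -type d -exec chmod 775 {} \;
find "$demo" -type f -exec chmod 664 {} \;

stat -c '%a' "$demo/versions/3.13.1/node_modules"              # → 775
stat -c '%a' "$demo/versions/3.13.1/node_modules/package.json" # → 664
```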

I’m unfortunately in a bad cycle there. I wasn’t getting this error before I messed with my user: I had “updated” Node to 10.20.1 (actually reverted down from 12.x), then updated ghost-cli, then force-updated Ghost to 3.13.1 with zero errors — until I tried to start Ghost, which hung and then gave me the ‘Could not communicate with Ghost’ error after about five minutes.

Now running `ghost doctor` fails at Checking binary dependencies, with the following message:
Message: The installed node version has changed since Ghost was installed.
Help: Run `ghost update 3.13.1 --force` to re-install binary dependencies

Which is, of course, exactly what was giving me the EACCESS error. The folder and file permission checks do pass before the binary dependencies failure.

Addendum: I had been using NVM previously, and during this rabbit-hole run I found articles saying it’s not good to use in production, and that Node should be installed globally per the main install instructions. I just remembered doing that pretty early on.

The quickest way to unpick this might be to upgrade back to Node.js 12. That way you won’t have to upgrade again later.
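For reference, if you install Node system-wide via the NodeSource apt repo (the method the Ghost install guide points to), moving to the 12.x line looks roughly like the commented lines below. They need sudo and network access, so only the harmless version check actually runs here; the `setup_12.x` URL is the NodeSource convention, assumed current for this Node line.

```shell
# Assumed recipe for a system-wide Node 12 via NodeSource (needs sudo and
# network, so these two lines are left as comments rather than executed):
#   curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
#   sudo apt-get install -y nodejs
# Then confirm what the shell actually resolves, in case an old NVM shim wins:
resolved="$(command -v node || echo "not on PATH")"
ver="$(node --version 2>/dev/null || echo "none")"
echo "node: $resolved ($ver)"
```

If `node` still resolves into ~/.nvm here, the NVM lines in your shell profile are still winning over the global install.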

Another solution here is to remove the problematic `node_modules` folder.

Or do both!

These might buy you a successful run of `ghost update 3.13.1 --force`.

If not, keep posting the errors here :slight_smile: we’ll get to the bottom of it.

Hrmm, I’m back on Node 12, but I don’t know if this is going to be recoverable. All of the files are there, but I just tried running `ghost ls` and it says there are no installed instances of Ghost. I think I may have messed up the whole thing by deleting the original ghost-mgr user. Is there something beyond what’s found here for setting a new user up with access to this Ghost install?

A new error shows when I run `ghost update 3.13.1 --force`:
Message: 'EACCESS: permission denied, rmdir '/var/www/ghost/versions/3.13.1/node_modules/iltorb/build/bindings''

Did you remove the node_modules folder?

I had not done that, no. Fantastic: I’m back to square one (major progress in my book; I thought all was lost). So the original error I had been receiving is back, after `ghost start` hangs for about five minutes:

Debug Information:
    OS: Ubuntu, v18.04.4 LTS
    Node Version: v12.16.2
    Ghost Version: 3.13.1
    Ghost-CLI Version: 1.13.1
    Environment: production
    Command: 'ghost start'
Message: Could not communicate with Ghost
Suggestion: journalctl -u ghost_IP -n 50
Stack: Error: Could not communicate with Ghost
    at Server.<anonymous> (/usr/local/lib/node_modules/ghost-cli/lib/utils/port-polling.js:56:20)
    at Object.onceWrapper (events.js:416:28)
    at Server.emit (events.js:310:20)
    at emitCloseNT (net.js:1657:8)
    at processTicksAndRejections (internal/process/task_queues.js:83:21)

You know what though, the ghost_IP.service is giving an error:
Failed to execute command: No such file or directory

Edit: It’s giving that error when I run the suggested command:
journalctl -u ghost_IP -n 50

Oh, it’s still looking in the NVM directory:
ghost_IP.service: Failed to execute command: No such file or directory
ghost_IP.service: Failed at step EXEC spawning /home/ghost-mgr/.nvm/v

Is there a way to fix that?
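The systemd unit Ghost-CLI generates hard-codes the path to the node binary, so when node moves (e.g. off NVM), the unit goes stale. A sketch of diagnosing that is below; the real server-side fix is in the comments (`ghost setup systemd` regenerates the unit), and the sample ExecStart path is made up for illustration — it stands in for the truncated one in the log above.

```shell
# On the server you would inspect and regenerate the unit with:
#   systemctl cat ghost_IP      # shows the stale ExecStart line
#   ghost setup systemd         # rewrites the unit using the current node
#   sudo systemctl daemon-reload
# A sample unit file stands in for the real one here (the path is made up):
unit="$(mktemp)"
cat > "$unit" <<'EOF'
[Service]
ExecStart=/home/ghost-mgr/.nvm/versions/node/v10.20.1/bin/node current/index.js
EOF

# A stale unit is easy to spot: the ExecStart still points into ~/.nvm
grep 'ExecStart' "$unit"
grep -q '\.nvm' "$unit" && echo "unit still points at NVM, regenerate it"
```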

Re-ran `ghost setup` and it asked to set up systemd again, and that worked. Thank you so much for getting me back close to a solution!


Ooh, glad we got you close enough to get it all back to working. Fantastic :raised_hands: