Rules regarding cleanup of json files in content/data?

And

  • How was Ghost installed and configured? Ubuntu - Self
  • What Node version, database, OS & browser are you using? Node v10, MySQL - Browser irrelevant
  • What errors or information do you see in the console? - Just growth of files
  • What steps could someone else take to reproduce the issue you’re having? I have no idea

So I was getting disk warnings, and the first cause was my own mistake: with the default “info” logging level I had 26GB of logs. I deleted them all and set the level to “error” in the configuration. That easily solved the immediate issue, but I kept digging.

Then I saw the content/data folder with 2.1GB of stuff. I use the MySQL connection, so I’m sure the ghost-dev.db from 2017 isn’t used, but that isn’t the problem. I started deleting a few of these files because they look like copies of the exports I do in the UI.

This file output is too large to post here, so here is a snippet of running a file list - https://connortumbleson.com/uploads/2021/01/ghost-output.txt

Can I safely delete these? It looks like something weird happened: some backups are duplicated, with the same exact file size, 20+ times.

Can I safely delete these?

If you don’t need them, yes. The files are created automatically when database changes are made during an upgrade; they’re a failsafe so data can be retrieved manually if something goes wrong. They’re not used by Ghost itself, so removing them won’t affect anything.
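If you do want to prune them, something like the following works — a sketch only, assuming GNU find/touch; the demo uses a temp dir so it’s safe to run as-is, but in practice you’d point it at your real content/data path:

```shell
# Demo in a temp dir -- swap "$dir" for your actual content/data path
dir=$(mktemp -d)
touch -d '40 days ago' "$dir/backup-from-export.json"   # stand-in for an old backup
touch "$dir/recent-export.json"                          # stand-in for a fresh one

# Dry run first: list JSON backups older than 30 days before removing anything
find "$dir" -name '*.json' -mtime +30 -print

# Once the listed files look right, delete them
find "$dir" -name '*.json' -mtime +30 -delete
ls "$dir"   # only the recent file should remain

rm -rf "$dir"
```

Running the dry-run `-print` pass before adding `-delete` is the safety net here.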


As for why you have so many for the same date a few seconds apart, I don’t know. How are you running upgrades?

My guess, after looking into it, is that those were days when upgrades did not go well for me. I normally just switch over to the ghost user and run ghost update, unless it complains about needing to update ghost-cli, in which case I update that globally first.

What I think happened on those days is that migrations failed, which has only happened a few times. My gut says systemd saw the launch fail with a non-zero exit and just retried launching over and over (which I think runs migrations each time?).

So it probably failed and generated a backup over and over until I noticed, which was nearly instant (a few minutes), but still enough time for that to go crazy.
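That theory is easy to sanity-check: if the same backup was written repeatedly, many files will share the exact same byte size. A quick way to count them (GNU coreutils assumed; the demo below builds a temp dir, but you’d run the `stat` pipeline against your real content/data):

```shell
# Demo in a temp dir -- in practice point the stat pipeline at content/data
dir=$(mktemp -d)
printf 'same bytes' > "$dir/backup-1.json"
printf 'same bytes' > "$dir/backup-2.json"
printf 'different contents' > "$dir/export.json"

# Count how many files share each exact byte size, biggest group first
stat -c '%s' "$dir"/*.json | sort -n | uniq -c | sort -rn

rm -rf "$dir"
```

A long run of identical sizes on one date would line up with a retry loop writing the same backup each attempt.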