Slow page creation via API, but why? Where to look?

I am creating a script that will populate a Ghost blog. At present, I am using the Ghost Admin API JavaScript library.

It works, but it seems slow to me. I want to see if this is a known issue. Since I intend to create thousands of pages, efficiency is important to me.

In my current test case, I am seeing the API call for creating a post take 20+ seconds to complete.

Here are some specs:

Ghost is running via Docker on a Raspberry Pi 4B with 8 GB RAM on the local network. There is no other load on the system, and it is not thrashing swap; in fact, no swap is used at all.

The Pi itself is running off an SD card, which I know is not the fastest thing in the world, but in general the responsiveness of the Pi has been OK, and this is way off the charts.

While it takes 20+ seconds to create a page, it is served to the browser in less than a second, as expected.

The page being created is well-formed HTML, bracketed by "<html>" and "</html>"

The total length of the generated html is ~220k bytes.

The 20+ seconds I am reporting begins after the HTML is fully generated. My expectation is that the API would create the page in the same time as, or less than, it takes to serve it.

Is there something I am missing or should take a look at? Is there some processing of the HTML happening inside Ghost during this call that takes a lot of time? If so, is there a way around it?

Thanks for your help in advance!

Are you using MySQL or SQLite?

Neither; MariaDB. It’s not a development install, although I’m pretty sure it’s in development mode.

Rebooting now, will get back to you with that info asap

from my docker-compose file:

image: yobasystems/alpine-mariadb:latest

The reason for the non-official MariaDB image is that MariaDB does not publish an arm/v7 Docker image to match the Raspberry Pi 4B.

I suppose I could switch it to mysql:latest if that is sufficiently compatible with ghost.

MySQL 8 is the only officially supported database for Ghost.

So, it is very compatible with Ghost. I am using the mysql:8.0 image myself.
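For reference, a minimal compose pairing of Ghost with the official MySQL 8 image looks something like this (a sketch; the service names, database name, and passwords are placeholders, not values from the poster's setup):

```yaml
services:
  ghost:
    image: ghost:5
    ports:
      - "2368:2368"
    environment:
      database__client: mysql
      database__connection__host: db
      database__connection__user: ghost
      database__connection__password: example
      database__connection__database: ghost
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: example
      MYSQL_ROOT_PASSWORD: example
```

Ghost reads its configuration from double-underscore environment variables, so the `database__connection__*` keys above map onto the nested `database.connection` config.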

Weird. I will have to look into why I have MariaDB. I am not partial; I just recall I had trouble finding an arm/v7 MariaDB image.

My guess is that I ended up with MariaDB because the original docker-compose file used it; I will have to double-check that.

I’d rather use an official build anyway in the long run.

Not sure why this would be the issue, but can’t rule it out at this stage.

I ran ghost log while creating a post and got this. So yes, almost 32 seconds to create the record:

[2024-04-25 16:56:58] INFO PostAddedEvent received, adding post 662a8b3d29d5f20001687901 to matching collections
[2024-04-25 16:56:58] INFO Adding one-off job to queue with current length = 0 called 'sendWebmentions'
[2024-04-25 16:56:58] INFO "POST /ghost/api/admin/posts/?source=html" 201 31839ms
[2024-04-25 16:56:58] INFO [Webmention] Sending all webmentions for http://localhost:2368/title-429/

My API call looks like this:

      api.posts
        .add(
          {
            title: json["Itemsx"]["Name"],
            html: html, // the generated HTML string (variable name assumed)
            status: "published",
          },
          { source: "html" }
        )
        .then((response) => {
          if (argv.debug) {
            console.log("Success - Data, Command: ",
              [JSON.stringify(lines), JSON.stringify(argv)].join(","));
          }
          console.log("----- finished");
        })
        .catch((error) => {
          console.log("Error (1): ", error);
        });
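To see exactly how much of the wall-clock time goes into each API round trip, the call can be bracketed with a small timing helper (a sketch; the `api` client and the `title`/`html` values are the names assumed from the script above):

```javascript
// Time any async call and log its duration in milliseconds.
async function timed(label, fn) {
  const start = Date.now();
  const result = await fn();
  const ms = Date.now() - start;
  console.log(`${label}: ${ms}ms`);
  return { result, ms };
}

// Hypothetical wiring against the Admin API client:
//   const { ms } = await timed("posts.add", () =>
//     api.posts.add({ title, html, status: "published" }, { source: "html" })
//   );
```

Logging this per post during a bulk run makes it easy to correlate client-side latency with the durations Ghost reports in its own log.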

ETA: It seems that while I was typing this, ghost log spat out a whole series of errors like the one below. I wasn’t interacting with it at all, so I’m not sure what caused this. No indication of the URL that is timing out :(

MESSAGE: Timeout awaiting 'request' for 15000ms

RequestError: Timeout awaiting 'request' for 15000ms
at ClientRequest.<anonymous> (/var/lib/ghost/versions/5.81.1/node_modules/got/dist/source/core/index.js:970:65)
at Object.onceWrapper (node:events:632:26)
at ClientRequest.emit (node:events:529:35)
at ClientRequest.emit (node:domain:489:12)
at origin.emit (/var/lib/ghost/versions/5.81.1/node_modules/@szmarczak/http-timer/dist/source/index.js:43:20)
at TLSSocket.socketErrorListener (node:_http_client:501:9)
at TLSSocket.emit (node:events:517:28)
at TLSSocket.emit (node:domain:489:12)
at emitErrorNT (node:internal/streams/destroy:151:8)
at emitErrorCloseNT (node:internal/streams/destroy:116:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
at Timeout.timeoutHandler [as _onTimeout] (/var/lib/ghost/versions/5.81.1/node_modules/got/dist/source/core/utils/timed-out.js:36:25)
at listOnTimeout (node:internal/timers:571:11)
at process.processTimers (node:internal/timers:512:7)

Hmm thanks everyone so far.

Here’s where I am at.

I don’t know why, when I originally set up this OS, I installed a 32-bit version of Ubuntu Server (it’s been a while!), but it seems that is what I did, and that’s why I couldn’t install the 64-bit ARM build of MySQL 8. There isn’t a 32-bit build, so that’s how I ended up with the third-party Docker image for MariaDB: it was the best I could do at the time. I remember that much; I don’t know why I didn’t think to check whether the problem was the 32-bit OS, but here we are.

In figuring this out, I pulled the latest versions of the Docker images, and that seems to have hosed what I had: the Ghost image won’t start because the MariaDB database holds a migration lock from a db upgrade. The last version I can see in the locks table is 5.81, but I think there is supposed to be a 5.82.

The docker logs on the ghost image say:

[2024-04-25 23:54:03] INFO Ghost is running in production…
[2024-04-25 23:54:03] INFO Your site is now available on http://localhost:2368/
[2024-04-25 23:54:03] INFO Ctrl+C to shut down
[2024-04-25 23:54:03] INFO Ghost server started in 2.484s
[2024-04-25 23:54:03] ERROR Migration lock was never released or currently a migration is running.

Migration lock was never released or currently a migration is running.

"If you are sure no migration is running, check your data and if your database is in a broken state, you could run yarn knex-migrator rollback."

Error ID:

MigrationsAreLockedError: Migration lock was never released or currently a migration is running.
at /var/lib/ghost/versions/5.82.2/node_modules/knex-migrator/lib/locking.js:62:23
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async DatabaseStateManager.getState (/var/lib/ghost/versions/5.82.2/core/server/data/db/DatabaseStateManager.js:40:13)
at async DatabaseStateManager.makeReady (/var/lib/ghost/versions/5.82.2/core/server/data/db/DatabaseStateManager.js:73:25)
at async initDatabase (/var/lib/ghost/versions/5.82.2/core/boot.js:69:5)
at async bootGhost (/var/lib/ghost/versions/5.82.2/core/boot.js:523:9)

[2024-04-25 23:54:03] WARN Ghost is shutting down
[2024-04-25 23:54:03] WARN Ghost has shut down
[2024-04-25 23:54:03] WARN Your site is now offline
[2024-04-25 23:54:03] WARN Ghost was running for a few seconds

Because the ghost image crashes so fast, I can’t attach a terminal to run the suggested recovery commands.
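Even with a crash-looping container, Docker can start a one-off container from the same image with a shell in place of the normal entrypoint, which is enough to run the suggested recovery commands (a sketch; the compose service name `ghost` is assumed from a typical setup):

```shell
# Throwaway container from the Ghost image, with bash instead of
# the normal entrypoint, attached to the same compose network:
docker compose run --rm --entrypoint /bin/bash ghost

# Alternatively, if you are certain no migration is actually running,
# knex-migrator's lock lives in the `migrations_lock` table and can be
# cleared directly in the database (verify the row first):
#   UPDATE migrations_lock SET locked = 0;
```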

So now I am going to see if I can rebuild the system with the right OS and then the right MySQL image. I am not worried about the data; that was all dev tests. My scripts and themes are in git repos, so I can push them up to GitHub and pull them down on the new system. A pain to rebuild the environment, but it seems like I will be fine.

Then I can test the timing of the API again and bring the info back here; probably sometime next week. I got a shiny new SSD to replace the SD card, so hopefully that will work. Who knows, maybe MariaDB just didn’t like the SD card. It was destined to be replaced anyway. So we will see.

If you have any feedback on recovering that db, I am interested in why the upgrade didn’t succeed. Maybe that’s worth a separate report?

Stay tuned :)


While I wait for the new OS to install on the SSD, I decided to try my script timing test on a fresh db with the latest Docker images for Ghost and MariaDB.

This time it was a lot faster. Still not as fast as I thought it would be, but 5 seconds instead of 31:

[2024-04-26 00:50:04] INFO PostAddedEvent received, adding post 662afa3a86392d0001121ded to matching collections
[2024-04-26 00:50:04] INFO Adding one-off job to queue with current length = 0 called 'sendWebmentions'
[2024-04-26 00:50:04] INFO [Webmention] Sending all webmentions for http://localhost:2368/aaa-2/
[2024-04-26 00:50:04] INFO "POST /ghost/api/admin/posts/?source=html" 201 5199ms

So now I have the following hypotheses; maybe folks reading can weigh in:

1 - The indexes in the old db got hosed and might be recoverable.
2 - The timing is related to the number of posts (this post was only the second in the fresh db; before, I was up to about 430).
3 - MariaDB doesn’t play nicely with SD cards past some point.
4 - The SD card itself is hosed.
5 - New info: since I don’t have a mail server running, emails are queuing, timing out, and failing.
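Hypothesis 2 above is easy to quantify: log the duration of each create call during a bulk run (e.g. with a timing wrapper around `posts.add`), collect them into an array, and compare early versus late batches. A minimal helper (a sketch, assuming the durations are in milliseconds):

```javascript
// Compare mean latency of the first half of a run against the second half.
// A ratio well above 1 suggests per-post latency grows with post count.
function latencyTrend(durationsMs) {
  const mean = (a) => a.reduce((sum, x) => sum + x, 0) / a.length;
  const half = Math.floor(durationsMs.length / 2);
  const early = mean(durationsMs.slice(0, half));
  const late = mean(durationsMs.slice(half));
  return { early, late, ratio: late / early };
}
```

If the ratio stays near 1 across a few hundred posts, the slowdown is more likely storage or mail timeouts (hypotheses 3 through 5) than table growth.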