I'm having trouble sending bulk email from a DO instance. Mailgun is set up and was working fine with under 100 subscribers, but it started failing now that I've gained 400 in a few days.
This is the error message I get (in part, the rest is repetitive):
The email service was unable to send an email batch."","stack":"UnhandledJobError: Processed job threw an unhandled error\n at /var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:108:40\n at async JobManager.worker (/var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:12:22)\n\nInternalServerError: The server has encountered an error.\n at new GhostError (/var/lib/ghost/versions/4.1.2/node_modules/@tryghost/errors/lib/errors.js:10:26)\n at sendEmailJob (/var/lib/ghost/versions/4.1.2/core/server/services/mega/mega.js:295:15)\n at async /var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:101:25\n at async JobManager.worker (/var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:12:22)\n\nError: insert into email_recipients (batch_id, email_id, id, member_email, member_id, member_name, member_uuid) select '608ff5bd20f8cd00010eb61f' as batch_id, '608ff5bd20f8cd00010eb61e' as email_id, '608ff5bd20f8cd00010eb620' as id, 'abc1@xyz.com' as member_email, '5fdeb23edd481a0007c542b3' as member_id, NULL as member_name, '4a832638-a4da-45d6-8e23-65d7fda70558' as member_uuid union all select '608ff5bd20f8cd00010eb61f' as batch_id, '608ff5bd20f8cd00010eb61e' as email_id, '608ff5bd20f8cd00010eb621' as id, 'abc2@xyz.com' as member_email,…
Just in case the issue is not the new Mailgun account…
I tried to format the logs better. There were about 450 emails in that single log line alone; I trimmed it to two emails for our sanity.
log
{"name":"Log","hostname":"ghost4029","pid":1,"level":50,"err":{"id":"14cce170-ad37-11eb-8921-ab910ea49dc5","domain":"https://opmglobal.com/opmwire","code":"SQLITE_ERROR","name":"UnhandledJobError","statusCode":500,"level":"critical","message":"Processed job threw an unhandled error","context":"\"The email service was unable to send an email batch.\"","stack":"UnhandledJobError: Processed job threw an unhandled error\n at /var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:108:40\n at async JobManager.worker (/var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:12:22)\n\nInternalServerError: The server has encountered an error.\n at new GhostError (/var/lib/ghost/versions/4.1.2/node_modules/@tryghost/errors/lib/errors.js:10:26)\n at sendEmailJob (/var/lib/ghost/versions/4.1.2/core/server/services/mega/mega.js:295:15)\n at async /var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:101:25\n at async JobManager.worker (/var/lib/ghost/versions/4.1.2/node_modules/@tryghost/job-manager/lib/job-manager.js:12:22)\n\nError: insert into `email_recipients` (`batch_id`, `email_id`, `id`, `member_email`, `member_id`, `member_name`, `member_uuid`) select '6091e3c47612d70001d6aa1c' as `batch_id`, '6091e3c47612d70001d6aa1b' as `email_id`, '6091e3c47612d70001d6abde' as `id`, 'dummyname@gmail.com' as `member_email`, '6091ad6120f8cd00010ebcee' as `member_id`, NULL as `member_name`, '09b8b005-ed70-42ab-93af-50f775d27364' as `member_uuid` union all select '6091e3c47612d70001d6aa1c' as `batch_id`, '6091e3c47612d70001d6aa1b' as `email_id`, '6091e3c47612d70001d6abdf' as `id`, 'dummyname@yahoo.com' as `member_email`, '6091bbe620f8cd00010ebd23' as `member_id`, NULL as `member_name`, '3d10d372-fdb7-4649-a9cb-8486fd1274e7' as `member_uuid` - SQLITE_ERROR: too many SQL variables"},"msg":"Processed job threw an unhandled error","time":"2021-05-05T00:16:04.874Z","v":0}
There is a similar issue here with the app Strapi. This user found a way to mitigate this:
For now I wrote simple chunk method, but I think it should be handled by strapi or bookshelf (they closed this issue).
We have to remember that:
this error did not happen with 50 emails,
but the error did happen with 450+ emails.
I also think there are some internal buffer/memory issues when dealing with hundreds or thousands of emails (regardless of whether we use MySQL or SQLite), but I'm not sure.
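(For illustration only, here is a minimal sketch of the kind of chunk method described above; it is not the Strapi user's actual code, and the helper names are hypothetical.)

```js
// Split an array of recipient rows into groups so that each INSERT stays
// small, instead of building one statement for all 450+ rows at once.
function chunk(rows, size) {
    const groups = [];
    for (let i = 0; i < rows.length; i += size) {
        groups.push(rows.slice(i, i + size));
    }
    return groups;
}

// e.g. 450 recipients with size 100 -> 5 separate inserts
// for (const group of chunk(recipients, 100)) {
//     await insertRecipients(group); // hypothetical insert helper
// }
```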
Delivery in chunks
Do you feel it would make sense to have Ghost send bulk email to members in chunks, in order to prevent this kind of issue?
Email recipient creation and sending are already done in batches. The specific problem here (SQLITE_ERROR: too many SQL variables) is that earlier versions of SQLite (and even current versions, if you have a lot of data) cannot cope with the number of bound variables in the queries generated for larger datasets. SQLite is intended for local development, not production-sized databases.
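To illustrate the arithmetic, here is a rough sketch only (assuming knex, which Ghost uses underneath Bookshelf; the constants and function name are hypothetical, not Ghost's actual code):

```js
// SQLite's default bound-variable limit is 999 in older builds
// (SQLITE_MAX_VARIABLE_NUMBER). Each recipient row in the failing INSERT
// binds 7 values (batch_id, email_id, id, member_email, member_id,
// member_name, member_uuid), so ~450 recipients need ~3150 variables in a
// single statement -> "SQLITE_ERROR: too many SQL variables".
const SQLITE_MAX_VARIABLES = 999;
const COLUMNS_PER_ROW = 7;
const chunkSize = Math.floor(SQLITE_MAX_VARIABLES / COLUMNS_PER_ROW); // 142 rows per INSERT

async function insertEmailRecipients(knex, recipients) {
    // knex.batchInsert splits `recipients` into groups of `chunkSize` and
    // runs one INSERT per group, keeping every statement under the limit.
    await knex.batchInsert('email_recipients', recipients, chunkSize);
}
```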
The other problem that's been reported a few times, with Mailgun limits on new accounts, is something else entirely and is new behaviour we've seen from Mailgun. We still need to investigate it and determine the impact; it's possible that simply reducing the batch size submitted via the API won't be enough and we'd actually need to implement a way of spreading the requests over time, which would require significant development effort.
Thanks Kevin… on Mailgun, they assure me there are “no limits on account at all”… but I suppose there might be something going on behind the scenes; they say they use AI to monitor all this.
@josephkman if you have the same error as @pascalandy (SQLITE_ERROR: too many SQL variables), the problem has nothing to do with your Mailgun account but with using SQLite as your database.
Just wanted to confirm this is almost certainly the solution (replacing SQLite with MySQL). We have yet to implement it, but I came across an article that points this out too:
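For anyone else landing here: on a self-hosted install the switch is a database config change plus migrating your content. A rough sketch of the relevant block in config.production.json (host and credentials below are placeholders):

```json
"database": {
  "client": "mysql",
  "connection": {
    "host": "127.0.0.1",
    "port": 3306,
    "user": "ghost",
    "password": "your-password",
    "database": "ghost_production"
  }
}
```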