Hello Community,
I am planning to improve Ghost's search by adding a chatbot that does not just keyword-match, but also makes sense of the content and gives proper answers (while still linking to the article for further information). My content is pretty long and finding the right answer can be tricky, especially when the keyword is not part of the headline.
This can be done with chatbot software like Flowise (open source), for example. Data-wise, however, I would need my Ghost content to be stored in a vector database like Pinecone, Qdrant, SimpleStore, Supabase, etc., and then use embeddings (I'd generate these with OpenAI) to search and cluster the knowledge and to generate answers.
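For anyone exploring the same route: before the OpenAI embedding step, each post's HTML body has to be converted to plain text and split into chunks small enough for an embedding call. A minimal, standard-library-only sketch of that preparation step (the function names and chunk sizes are my own assumptions, not anything Ghost or OpenAI prescribes):

```python
import re
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collects the text content from a Ghost post's HTML body."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)


def html_to_text(html: str) -> str:
    """Strip HTML tags and collapse whitespace."""
    parser = _TextExtractor()
    parser.feed(html)
    return re.sub(r"\s+", " ", " ".join(parser.parts)).strip()


def chunk_text(text: str, max_chars: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks so each fits one embedding call.

    The overlap keeps sentences that straddle a chunk boundary retrievable
    from either side.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks
```

Each chunk would then be sent to the embeddings endpoint and upserted into the vector store together with the post's slug/URL, so the chatbot can cite the source article in its answer.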
I know there is an Algolia integration available, but I don't think Algolia is the best fit for what I'm trying to achieve. It is a nice way to improve indexing and autocomplete search, and certainly a step up from the native search, but it is not made to provide proper AI-generated short answers based on my content.
So my idea is to somehow duplicate my Ghost content into a vector database and run the desired search queries from there. Not the smoothest solution, because it would require a lot of syncing to keep both copies in step.
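The syncing itself might be simpler than it sounds: the Ghost Content API returns an `updated_at` timestamp per post, so a periodic job can diff the live posts against what is already embedded and only re-process what changed. A small sketch of that diff logic (the function and the `indexed` bookkeeping structure are my own assumptions about how one might store state next to the vectors):

```python
def diff_posts(ghost_posts: list[dict], indexed: dict[str, str]):
    """Decide which posts need (re-)embedding and which vectors are stale.

    ghost_posts: post dicts from the Ghost Content API, each carrying
                 'id' and 'updated_at' (ISO 8601 string).
    indexed:     mapping of post id -> the 'updated_at' value recorded
                 when that post was last embedded.
    Returns (ids_to_upsert, ids_to_delete).
    """
    live = {p["id"]: p["updated_at"] for p in ghost_posts}

    # New posts, or posts edited since they were last embedded.
    to_upsert = [pid for pid, ts in live.items() if indexed.get(pid) != ts]

    # Posts deleted (or unpublished) in Ghost but still in the vector store.
    to_delete = [pid for pid in indexed if pid not in live]

    return to_upsert, to_delete
```

Running this on a schedule (or from a Ghost webhook on publish/update/delete events) would keep the vector store reasonably fresh without re-embedding the whole site each time.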
Does anyone have an idea for a different approach, or has anyone done something similar? Is there maybe an easy syncing solution available between Ghost and a vector store?