Hey Jo, chat will be quickest when you're inside a single object (e.g. a transcript, a document, or an insight). It should respond within a second or two. It will be slower when you're looking at a larger scope (e.g. a whole project, a folder, or a channel), since there's a lot more context that needs to be passed to the LLM.

On search: search performance is something we're working on, but the initial keyword search in the dialog should be fast. It's only when you click ‘Explore topics and questions' that it'll be a bit slower, as Dovetail is doing a deep semantic search and then summarizing all of the results.