some feedback on the prompting for AI summarisation (either within Search or the Chat window):
from the little i’ve played around with — both searching for data (using the AI search) and chatting with the data — i’ve noticed that references seem to favour raw data (e.g. interviews) rather than the insights themselves. again, not sure how prevalent that is, but i would expect more references to insights, rather than data
from our mental model: if one of the AI results links to 1-2 pieces of data (again, individual interviews, in most of our cases), that gives us much less confidence than if the AI results link to 1-2 insights (which tend to be based on multiple pieces of data).
in the hierarchy of how we weigh confidence: data is the smallest unit, you need several pieces of data to create an insight, and then you need several insights to have increased confidence in a wider pattern.
tl;dr if we ask the AI for the “biggest pains for our customers” and it only links to a handful of interviews, that reads as mostly anecdotal evidence and won’t give us much confidence in the results.
🧂 grain of salt: i’ve played very little with the AI features, and this perspective is limited to research done with interviews. if you mostly use surveys or other inputs, this might not apply.
❓ my question is: in the prompting for these features, is there a deliberate emphasis on data vs insights, or is this just a fluke?