Feedback on New Contextual Chat and Generative Insights Features

I showed the new contextual chat and generative insights features to some colleagues and they were excited, but had some feature requests:

  • Contextual chat at the Project Data level needs a way for the user to specify which sources it should look at

  • One colleague would really like a way to iterate on the generated insights, much as she does when working directly with an LLM chat interface. Something like:

  • "You focused on these pain points, but I don't agree they're the most important. I noticed XYZ. Please rewrite this insight to include discussion of these points

  • Benjamin Humphrey

    Thanks Emi! We're working on both of those things. Expect to see it improve week-on-week throughout the rest of this year.

  • Emi Fogg

    Got another one! The marketing team is using Dovetail and has hit some friction:

    • They ran the 16 calls and uploaded them to Dovetail

    • They want to use contextual chat on the whole data set but can’t

    • So they tried global search with the 16 calls selected, asked a question, and got “no results”

    • They have made some insights, but because they didn’t tag the interviews they don’t have highlights to add to them, although they’ve started making themes (pictured)

    They’re new to Dovetail and, unlike me, don’t tag interviews before running synthesis. Previously, I think they fed the transcripts into an LLM to aid with synthesis, and they’re looking for a similar interaction. I recall seeing something like their suggestion (pictured) in a promotional release video about Insights last year. Is this still on the roadmap/vision?

  • Benjamin Humphrey

    Thanks for the feedback! We're working as hard as we can on improving the chat experience. There are trade-offs between the amount of data chat considers and the resulting impact on performance and answer accuracy. 16 calls (I assume each is 30 minutes) is 8 hours of transcripts, which is quite a lot of tokens for the LLM to process. It's doable with certain models but can be slow. We're exploring every option we can to make it work!
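    As a rough sanity check on those numbers, here's a back-of-envelope sketch (my own illustration, not Dovetail's actual sizing logic; the words-per-minute and tokens-per-word rates below are common ballpark figures, not measured values):

    ```python
    # Back-of-envelope estimate of how many tokens 16 half-hour calls produce.
    # All rates here are ballpark assumptions, not Dovetail-specific figures.

    CALLS = 16
    MINUTES_PER_CALL = 30       # assumption from the thread
    WORDS_PER_MINUTE = 150      # typical conversational speaking pace (assumption)
    TOKENS_PER_WORD = 1.3       # common rule of thumb for English tokenizers (assumption)

    total_minutes = CALLS * MINUTES_PER_CALL            # 480 minutes = 8 hours
    total_words = total_minutes * WORDS_PER_MINUTE      # 72,000 words
    total_tokens = int(total_words * TOKENS_PER_WORD)   # ~93,600 tokens

    print(f"{total_minutes / 60:.0f} h of calls ≈ {total_words:,} words ≈ {total_tokens:,} tokens")
    ```

    Under those assumptions, the full data set is on the order of 90–100k tokens: within reach of long-context models, but a lot to process in a single pass, which lines up with the speed trade-off described above.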

  • Benjamin Humphrey

    Re: automatically adding highlights: yes, that's still something we're working on. However, we're going to go a bit further than that and rebuild insights as an AI-native feature that lets you work alongside AI to write your insights/reports and embed evidence.

  • Emi Fogg

    Now that second one has me interested. I do love a good tagging system, but other users aren't so keen on it. They're LLM-copilot-type users, so I'm sure they'll love that update!