Enhancing Dovetail for Non-Verbal Usability Research Insights
In our usability research workflow we run into a limitation that prevents us from capturing key behavioural observations in Dovetail.

Context
For our product research we focus primarily on what participants do rather than what they say. Participants complete a task in our lab without verbalising their thoughts: they read the assignment aloud and then carry out the task while we record the screen. Only after the task, during a retrospective interview, do they explain their actions and reasoning while watching the replay.

Current challenge
When uploading these task recordings to Dovetail, we cannot highlight specific interactions because highlights require a transcript segment. Since there is no speech during task execution, no transcript is generated, which means we cannot:
Mark a specific behavioural moment as a highlight
Attach tags to that segment
Add an observation or short description of what happened in that moment
This makes it difficult to analyse non-verbal behaviour and to keep our observations structured in Dovetail.

Feature request
Would it be possible to:
1. Create highlights without requiring a transcript, directly on the video timeline
2. Allow adding a written description to a highlight (e.g. “User hesitates before tapping”, “User scrolls past navigation”)
3. Enable tagging on such transcript-free highlights
This would allow teams like ours, who run non-verbal, task-based usability sessions, to use Dovetail for observation-driven research as well.

Why this matters
This feature would help us capture behavioural insights more accurately and maintain a consistent workflow across studies. It would also broaden Dovetail's applicability to moderated and unmoderated usability tests where participants remain silent during tasks.
