From eight interviews to organised findings in minutes

Drop a folder of recordings. Get back a report with the best quotes, grouped by screen and theme, with friction and delight already flagged. Ready to share.

Download for Mac
View on GitHub

Free and open source. macOS app or command line.

25-second product video

Find the quotes that matter

Every interview, distilled to the moments worth sharing. Filler words gone, meaning and emotion intact. Click a timecode to hear exactly what they said.

Screenshot: Quotes

Never lose context

Full transcripts with speaker identification and word-level highlighting synced to playback. When a stakeholder asks "what were they really saying?", you can show them.

Screenshot: Transcripts

Watch the moment, not the hour

Every quote links to its video. Click a timecode, watch ten seconds of context, close the player. No scrubbing through hour-long recordings.

Screenshot: Video

Organise your way

Tag quotes with your own codes or import a standard framework — Nielsen's heuristics, Garrett's Elements of UX, Morville's honeycomb. Drag, drop, colour-code.

Screenshot: Tags & codebooks

See where the pain is

Sentiment analysis surfaces frustration, confusion, delight, and trust across all your interviews. Signal cards tell you: three participants struggled with the same form. That's your finding.

Screenshot: Signals

Share without the setup

Export to CSV for Miro, FigJam, or a spreadsheet. Or hand over a self-contained HTML file — stakeholders open it in a browser, no install needed. Speaker codes, not names: the anonymisation boundary is built in.

Screenshot: Export

Analysis shouldn't cost more than the research

You've run eight interviews. The recordings are in a folder and stakeholders want findings. The professional tools cost more than your project budget. The DIY path means days of rewatching, copying quotes into sticky notes, hoping you haven't missed a pattern.

Bristlenose closes that gap. Point it at your recordings — audio, video, or transcripts from Zoom, Teams, or Google Meet — and it handles the mechanical work. Transcription runs on your laptop. An analysis pass extracts quotes, identifies speakers, groups everything by screen and theme. A typical study takes two to five minutes.

What you get back is a report designed for how researchers actually work. Star the quotes that belong in your deck. Hide the noise. Tag with your own codebook. Search, filter, export. The keyboard shortcuts assume you'll do this all day: j/k to move, s to star, t to tag, r to repeat the last tag.

Built by a practising user researcher, not a platform company. Free, open source, no accounts, no telemetry.

Three steps

  1. Point at a folder. Any mix of audio, video, or existing transcripts.
  2. Wait a few minutes. Transcription, speaker identification, quote extraction, thematic grouping.
  3. Curate and share. Star, tag, filter. Export the findings that matter.
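The three steps above boil down to a single command-line run. A sketch only: the binary name and argument shape here are assumptions drawn from the description on this page, not confirmed CLI syntax — check the project README for the real invocation.

```shell
# Hypothetical invocation: assumes the CLI is named `bristlenose`
# and accepts a folder of mixed recordings (audio, video, transcripts).
bristlenose ./interviews

# Transcription and speaker identification run locally; the analysis
# pass then extracts quotes and groups them by screen and theme.
```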

Your recordings never leave your laptop

No Bristlenose server. No account. No telemetry. Transcription is local. The analysis pass sends transcript text to your chosen AI provider — or stays entirely on your machine with Ollama. You own your data.
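For the fully local path, the setup looks roughly like this. `ollama pull` is the standard Ollama command; the Bristlenose provider flag is hypothetical and may differ in the actual tool.

```shell
# Pull a local model once; Ollama serves it from your machine,
# so transcript text never leaves your laptop.
ollama pull llama3.1

# Hypothetical flag — the real option name may differ; see the docs.
bristlenose ./interviews --provider ollama
```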

Pick your AI, pay what suits you

Pricing

Half price during beta

A typical study costs about $1.50 in API fees. You bring your own key.

Free from the command line with a local model.

Follow the build

UXR methods, new features, questions, and news. No spam.