Repo path
research/opportunities/leanne-time-consuming-tasks.md
A rich operational pain-point repository, especially useful for launch-readiness, fieldwork coordination, and proving time-to-value for deployment tooling.
Notes
No additional notes recorded.
Mapped workstreams
Mapped appendices
Raw source preview
Raw, unprocessed file text shown below. Preview truncated for readability.
---
title: 'Research Opportunity: Most Time-Consuming Tasks — Leanne'
source: notion
interviewee: Leanne
date: '2025-12'
type: opportunity-analysis
status: active
tags: [research, opportunities, fieldwork, ops, leanne]
---
> **Note:** A qualitative synthesis of this interview also exists at . This file contains the JTBD opportunity cards extracted from the same session.
---
# Understanding your most time-consuming tasks - Leanne Dec 2025
### Opportunity #1: Reduce the time burden of multi-market data checking
- **Customer Statement:** “I [need]… it takes like *45 minutes an hour* to… check the data… because I have to go through each market individually.”
- **How Might We:** How might we help ops validate multi-market data quality without repeating the same checks market-by-market?
- **Context:** During fieldwork on multi-market trackers (e.g., Bord Bia Beef; 7–8 markets).
- **Supporting Evidence:**
- Direct quote: “...it takes like 45 minutes an hour to like check the data because I have to go through each market individually...” (≈5:21)
- **Distinctness Check:** This is specifically about **multi-market repetition** (not the general pain of quality checks).
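As a sketch of what addressing this could look like: one rule set applied across every market in a single pass, producing a combined exception report instead of market-by-market manual checks. This is a minimal illustration, not the team's actual tooling; the field names (`market`, `age`, `loi_minutes`) and the rules themselves are assumptions.

```python
# Hypothetical sketch: run one set of validation rules across every market
# at once, rather than repeating the same checks market by market.
# Field names and thresholds below are illustrative assumptions.

def validate(respondent):
    """Return a list of issue labels for a single respondent record."""
    issues = []
    if respondent.get("age") is None:
        issues.append("missing_age")
    if respondent.get("loi_minutes", 0) < 3:
        issues.append("speeder")
    return issues

def check_all_markets(data_by_market):
    """Apply the same rules to each market; return one combined report."""
    report = {}
    for market, respondents in data_by_market.items():
        flagged = [(r["id"], validate(r)) for r in respondents]
        report[market] = [(rid, iss) for rid, iss in flagged if iss]
    return report

data = {
    "IE": [{"id": 1, "age": 34, "loi_minutes": 12},
           {"id": 2, "age": None, "loi_minutes": 1}],
    "UK": [{"id": 3, "age": 51, "loi_minutes": 9}],
}
print(check_all_markets(data))  # {'IE': [(2, ['missing_age', 'speeder'])], 'UK': []}
```

The point is structural: adding an eighth market means adding a key to the dict, not 45 more minutes of repeated checking.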
### Opportunity #2: Make survey-duration (LOI) checking easier in Decipher
- **Customer Statement:** “I [struggle]… with decipher… to get the duration of the survey you have to export a completely different set of data.”
- **How Might We:** How might we make LOI/duration validation available without separate exports?
- **Context:** Data checks in Decipher while fieldwork is live.
- **Supporting Evidence:**
- Direct quote: “...with decipher the to get the duration of the survey you have to export a completely different set of data.” (≈5:21–6:10)
- **Distinctness Check:** This is about **LOI extraction friction**, not bot/open-end review.
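A minimal sketch of sidestepping the separate export, assuming start/end timestamps are available in the main data file. Decipher's actual export schema is not assumed here; `start`, `end`, and the speeder threshold are illustrative.

```python
# Hypothetical sketch: derive LOI from start/end timestamps already in the
# main export, so duration checks don't require a second export.
from datetime import datetime

def loi_minutes(start, end, fmt="%Y-%m-%d %H:%M:%S"):
    """Survey duration in minutes from two timestamp strings."""
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

def flag_speeders(rows, median_loi, threshold=0.4):
    """Flag respondents whose LOI is under a fraction of the median."""
    return [r["id"] for r in rows
            if loi_minutes(r["start"], r["end"]) < threshold * median_loi]

rows = [
    {"id": 1, "start": "2025-12-01 10:00:00", "end": "2025-12-01 10:12:00"},
    {"id": 2, "start": "2025-12-01 10:00:00", "end": "2025-12-01 10:03:00"},
]
print(flag_speeders(rows, median_loi=10))  # [2]
```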
---
### Opportunity #3: Speed up open-ended response quality checks
- **Customer Statement:** “I [spend time]… scrolling and… looking for gibberish… classic bot responses…”
- **How Might We:** How might we help ops quickly detect gibberish/bot patterns in open ends?
- **Context:** First step of data checking; happens repeatedly during field.
- **Supporting Evidence:**
- Direct quote: “...I check the open lens… just me scrolling and like looking for gibberish… the classic bot responses...” (≈6:35–7:20)
- **Distinctness Check:** This is **open-end review** (separate from straight-lining, LOI, or trap questions).
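A hedged sketch of what a first-pass triage could look like, using cheap heuristics (vowel ratio, repeated characters, very short answers) to surface likely gibberish for human review. The heuristics and thresholds are illustrative assumptions, not a validated detector.

```python
# Hypothetical sketch: cheap heuristics for flagging gibberish / bot-like
# open-end responses, replacing manual scrolling with a first-pass triage.
import re

def looks_gibberish(text):
    t = text.strip().lower()
    if len(t) < 3:
        return True                      # too short to be meaningful
    letters = re.sub(r"[^a-z]", "", t)
    if letters and sum(c in "aeiou" for c in letters) / len(letters) < 0.2:
        return True                      # almost no vowels: keyboard mash
    if re.search(r"(.)\1{4,}", t):
        return True                      # same character repeated 5+ times
    return False

answers = {101: "The packaging was hard to open.",
           102: "asdfgh qwerty",
           103: "aaaaaaa"}
flagged = [rid for rid, a in answers.items() if looks_gibberish(a)]
print(flagged)  # [102, 103]
```

Anything flagged still goes to a human; the win is not scrolling past the obviously fine answers.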
---
### Opportunity #4: Handle multi-language open ends without manual translation hacks
- **Customer Statement:** “Unless the lead translated, which I just use excel’s little translate tool…”
- **How Might We:** How might we support reliable translation workflows for open-end review?
- **Context:** Open-end checking across markets/languages.
- **Supporting Evidence:**
- Direct quote: “...unless the lead translated, which I just use excel’s little translate tool.” (≈6:35–6:55)
- **Distinctness Check:** This is about **language/translation friction**, not the act of checking content.
---
### Opportunity #5: Catch “drop-off in effort” across multiple open ends per respondent
- **Customer Statement:** “Sometimes you’ll have someone who answers… fine, and then… next three… it’s like gibberish… they don’t care after that.”
- **How Might We:** How might we flag respondents whose quality degrades across later open ends?
- **Context:** Surveys with multiple open-ended questions.
- **Supporting Evidence:**
- Direct quote: “...answers… the first time, fine… next three… it’s like gibberish… they don’t care after that.” (≈6:55–7:35)
- **Distinctness Check:** This is specifically about **within-respondent quality drift**, not generic bot detection.
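A sketch of one possible drift flag: compare a respondent's first open end against their later ones, using answer length as a crude proxy for effort. Both the proxy and the thresholds are assumptions for illustration.

```python
# Hypothetical sketch: flag respondents whose open-end quality drops off
# after the first answer, using answer length as a cheap effort proxy.

def effort_drops_off(answers, min_ok=15):
    """True if the first open end looks fine but most later ones are low-effort."""
    if len(answers) < 2 or len(answers[0].strip()) < min_ok:
        return False
    later = answers[1:]
    low = sum(len(a.strip()) < min_ok for a in later)
    return low / len(later) >= 0.5

good_then_bad = ["I liked the taste and the price point.", "idk", "na", "asdf"]
consistent = ["Good value overall.", "Easy to find in store.", "Would buy again."]
print(effort_drops_off(good_then_bad), effort_drops_off(consistent))  # True False
```

A respondent who is gibberish from question one is caught by generic checks; this targets the "fine at first, gives up later" pattern the quote describes.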
---
### Opportunity #6: Reduce subjectivity and inconsistency in quality decisions
- **Customer Statement:** “It seems like a very subjective process… I usually say… around four straight lined questions…”
- **How Might We:** How might we help standardise quality thresholds while still allowing context-specific judgement?
- **Context:** Removing respondents based on multiple signals (opens, straight-lining, LOI, traps).
- **Supporting Evidence:**
- Direct quote: “...it seems like a very subjective process.” (≈10:12) and “...around four straight lined questions...” (≈9:00–9:40)
- **Distinctness Check:** This is about **decision rules/governance**, not tooling mechanics.
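One way to reduce subjectivity is to make the removal thresholds explicit and shared rather than per-analyst judgement. A sketch, with the “around four straight lined questions” rule from the quote encoded as an overridable default; the signal names and other defaults are illustrative assumptions.

```python
# Hypothetical sketch: encode removal rules as an explicit, shared config
# so quality decisions are consistent but still tunable per study.
# Signal names and default thresholds are illustrative assumptions.

DEFAULT_RULES = {"max_straightlined": 4,     # from "around four straight lined questions"
                 "max_gibberish_opens": 2,
                 "allow_trap_fail": False}

def should_remove(signals, rules=DEFAULT_RULES):
    """signals: per-respondent counts/flags gathered during checking."""
    if signals.get("straightlined", 0) >= rules["max_straightlined"]:
        return True
    if signals.get("gibberish_opens", 0) > rules["max_gibberish_opens"]:
        return True
    if signals.get("failed_trap", False) and not rules["allow_trap_fail"]:
        return True
    return False

print(should_remove({"straightlined": 4}))                       # True
print(should_remove({"straightlined": 2, "gibberish_opens": 1}))  # False
```

Context-specific judgement survives as a per-study override of the rules dict, while the default keeps decisions consistent across analysts.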
---
## Individual JTBD Opportunity Cards
*The following cards were extracted from this interview session.*
---
# Align client expectations with realistic field timelines
Context (When/Where): Client-driven deadlines (esp. Musgrave).
Customer Statement: Clients want it tomorrow, but we can’t get 1000 completes in Ireland in four days; it’s minimum seven.
Distinctness Check: Expectation-setting distinct from provider tooling or QC operations.
How Might We: How might we set and defend realistic fieldwork timelines earlier to avoid crunch?
Opportunity #: 20
Parent Opportunity: Keep fieldwork on track despite provider and timeline constraints
Parent item: Keep fieldwork on track despite provider and timeline constraints (Keep%20fieldwork%20on%20track%20despite%20provider%20and%20timel%202e72425bf10381c18173e4a86f97861b.md)
Source Document: EmpathyIQ Conversation_ understanding your most time consuming tasks (3).docx
Supporting Evidence Location: ≈16:15–16:35
Supporting Evidence Quote: ...I'm not going to get 1000 people in Ireland in four days… it's a minimum of seven...
user: Leanne
---
# Avoid late-stage fieldwork extensions caused by delayed cleaning
Context (When/Where): Near end of field when cleaning reveals lots of removals.
Customer Statement: If we let the project fill up and then clean, we may need to go back out with only days left and extend fieldwork.
Distinctness Check: Timeline risk distinct from daily QC mechanics.
How Might We: How might we prevent end-of-field panic caused by late discovery of bad completes?
Opportunity #: 17
Parent Opportunity: Keep fieldwork on track despite provider and timeline constraints
Parent item: Keep fieldwork on track despite provider and timeline constraints (Keep%20fieldwork%20on%20track%20despite%20provider%20and%20timel%202e72425bf10381c18173e4a86f97861b.md)
Source Document: EmpathyIQ Conversation_ understanding your most time consuming tasks (3).docx
Supporting Evidence Location: ≈19:02–20:10
Supporting Evidence Quote: ...let the project fill up… cleaning… go back out… with only a day left… extend field work...
user: Leanne
---
# Capture feasibility feedback earlier at questionnaire stage
Context (When/Where): Questionnaire drafting → client approval stage.
Customer Statement: I’m not asked to give feedback on questionnaires; I want to review before it goes to clients.
Distinctness Check: Upstream prevention distinct from downstream QC or adoption enablement.
How Might We: How might we catch feasibility mismatches before client sign-off?
Opportunity #: 46
Parent Opportunity: Make project launch and operations information discoverable and actionable
Parent item: Make project launch and operations information discoverable and actionable (Make%20project%20launch%20and%20operations%20information%20dis%202e72425bf10381268a9cd51c803f4fd5.md)
Source Document: EmpathyIQ Conversation_ understanding your most time consuming tasks (3).docx
Supporting Evidence Location: ≈1:26–2:25
Supporting Evidence Quote: ...not… asked to give feedback on questionnaires… I want… reviewer questionnaire before it goes to clients…
user: Leanne
---
# Catch within-respondent quality drop-off across multiple open ends
Context (When/Where): Surveys with multiple open-ended questions.
Customer Statement: I see people answer fine at first and then later open ends become gibberish when they stop caring.
Distinctness Check: About quality drift within a single respondent over time, not generic bot detection.
How Might We: How might we flag respondents whose quality degrades across later open ends?
Opportunity #: 5
Parent Opportunity: Deliver high-quality respondent data without consuming excessive ops time
Parent item: Deliver high-quality respondent data without consuming excessive ops time (Deliver%20high-quality%20respondent%20data%20without%20consu%202e72425bf1038162a542c3f72f07f483.md)
Source Document: EmpathyIQ Conversation_ understanding your most time consuming tasks (3).docx
Supporting Evidence Location: ≈6:55–7:35
Supporting Evidence Quote: ...answers… the first time, fine… next three… it’s like gibberish… they don’t care after that.
user: Leanne
---
# Create a single operational tracker across proposals and projects (“The monster”)
Context (When/Where): Cross-project operational visibility.
Customer Statement: We used to have ‘The monster’ that tracked every proposal and project in one place.
Distinctness Check: End-to-end operational visibility broader than budget/CPI lookup alone.
How Might We: How might we give the team a single system of record across projects and steps?
Opportunity #: 29
Parent Opportunity: Make project launch and operations information discoverable and actionable
Parent item: Make project launch and operations information discoverable and actionable (Make%20project%20launch%20and%20operations%20information%20dis%202e72425bf10381268a9cd51c803f4fd5.md)
Source Document: EmpathyIQ Conversation_ understanding your most time consuming tasks (3).docx
Supporting Evidence Location: ≈29:55–30:40
Supporting Evidence Quote: ...something called… The monster… tracked every proposal, every project… in one place...
user: Leanne
---
# Deliver high-quality respondent data without consuming excessive ops time
Context (When/Where): During live fieldwork and pre-handoff cleaning (multi-market trackers, daily checks, removals)
How Might We: How might we help ops produce high-quality, analysis-ready respondent data with far less manual checking, exporting, and subjective decision-making?
Parent Opportunity: Deliver high-quality respondent data without consuming excessive ops time
Sub-item:
- Reduce the time burden of multi-market data checking (Reduce%20the%20time%20burden%20of%20multi-market%20data%20checki%202e72425bf103819f962dfbafd46ec56e.md)
- Make survey-duration (LOI) checking easier in Decipher (Make%20survey-duration%20(LOI)%20checking%20easier%20in%20Deci%202e72425bf10381a7bd68f724918fdbee.md)
- Speed up open-ended response quality checks (Speed%20up%20open-ended%20response%20quality%20checks%202e72425bf1038123a3c4d8531808a846.md)
- Handle multi-language open ends without manual translation hacks (Handle%20multi-language%20open%20ends%20without%20manual%20tra%202e72425bf10381dd9994d9bbe1f075fd.md)
- Catch within-respondent quality drop-off across multiple open ends (Catch%20within-respondent%20quality%20drop-off%20across%20mu%202e72425bf103819f8116d3bf448d4d4c.md)
- Reduce subjectivity and inconsistency in quality decisions (Reduce%20subjectivity%20and%20inconsistency%20in%20quality%20d%202e72425bf103814d92a6f629c7fd099a.md)
- Support trap-question workflows that remove respondents after completion (Support%20trap-question%20workflows%20that%20remove%20respon%202e72425bf1038151ae13f195c9144517.md)
- Make straight-lining detection and removal more direct end-to-end (Make%20straight-lining%20detection%20and%20removal%20more%20di%202e72425bf10381b3bdeed0a320d561d1.md)
- Enable open-end cleaning/removal within the same tooling used for checks (Enable%20open-end%20cleaning%20removal%20within%20the%20same%20t%202e72425bf10381c9868cce348b23c648.md)
- Reduce “lots of downloading” during quality checks (Reduce%20%E2%80%9Clots%20