Supporting evidence
Strategy and GTM narrative

Internal narrative: the EmpathyIQ story

Longer-form articulation of the product promise, who it serves, and why the research workflow should be reframed around trust and continuity.

Repo path

strategy/internal-narrative-empathyiq-story.md

Notes

No additional notes recorded.

Raw source preview

Raw, unprocessed file text shown below.

---
title: "Internal Narrative: EmpathyIQ Story"
source: notion
notion_id: "2fe2425b-f103-80b5-ba10-f52b35511e89"
migrated: "2026-04-02"
status: active
tags: [strategy, narrative, internal]
---

# Internal Narrative: The EmpathyIQ story (an end-to-end survey experience)

*This document walks through how a project moves through EmpathyIQ step by step. For the full product strategy and positioning, see the companion document: [In Depth — What is EmpathyIQ](./in-depth-what-is-empathyiq.md).*

---

## What research actually feels like

When a researcher receives a brief, the real risk doesn't start at fieldwork. It starts earlier. Before any survey is written, the researcher is already asking themselves:

- Do I actually understand the problem?
- Do I have the right context?
- Am I about to ask the wrong questions in a very polished way?

That uncertainty is usually invisible, but it shapes everything that follows.

This walkthrough shows how EmpathyIQ supports the researcher at every stage, following a quantitative project from brief to report.

---

## Step 1: Creating a project and grounding the work

In EmpathyIQ, the researcher starts by creating a project, not a questionnaire or discussion guide.

The first thing she uploads is the brief. The platform doesn't immediately push her into authoring. Instead, it treats the brief as something to be understood and tested.

AI reviews the brief and highlights:

- Missing background information
- Contextual gaps that might affect interpretation
- Areas where assumptions are likely being made

The system produces a clear checklist of context that needs to be gathered: competitor context, category benchmarks, pricing references, and historical learnings from related research.

Nothing is auto-added. The researcher remains in control. She can upload materials herself, or ask AI to gather supporting information. The system can search the internet and produce a report for her to review. It can also refer back to the company's internal knowledge base, digging through old projects, past reports, and anything else that has been uploaded or ingested to surface context that already exists.

Only once she approves does anything become part of the project's knowledge base. At this point, the project has shared context, not just instructions. Everything added supports more than this one study: it becomes part of the company's growing knowledge base, reusable in future work.

---

## Step 2: Designing the research

With context in place, the researcher moves into research design. She can upload an existing questionnaire or discussion guide, or ask the platform to draft one based on the brief and background.

The system does not act as an auto-author. Instead, it:

- Reviews questions against best-practice principles built from the direct experience of working market researchers
- Highlights ambiguity, bias, and structural risk
- Asks clarifying questions where intent is unclear
- Makes logic explicit rather than implicit
- Formats Word documents into clean, platform-ready structure

Design decisions become visible and discussable, not buried in expertise. The questionnaire lives in a shared notepad: editable, reviewable, shareable with stakeholders, and independent of final programming.

When it's ready, the researcher publishes it and the platform handles the programming. No copy-paste. No silent interpretation. And if the researcher wants, she can go into the authoring platform to make manual edits directly.

---

## Step 3: Survey QA and pre-launch validation

Before anything launches, the researcher needs confidence that the survey works as intended: structurally, logically, and narratively.

As the researcher tests the survey:

- An AI agent reads along screen-by-screen
- Flow and sequencing are checked for logical continuity
- Structural issues are flagged where questions don't connect cleanly
- Potential respondent confusion is highlighted
- Suggested improvements are offered, with clear rationale

The researcher can navigate question-by-question, see all logic paths and conditions explicitly, and review how piped text behaves across different respondent journeys.

---

## Step 4: Translation QA (for multilingual studies)

For global studies, translation introduces a different kind of risk: meaning drift. EmpathyIQ separates translation from validation.

The researcher can use AI translation directly, or export the survey to Excel and work with human translators. When translations are imported back in, the platform does not assume correctness. AI Translation QA:

- Evaluates each translated item in the context of the original question
- Flags meaning drift, ambiguity, or loss of nuance
- Highlights inconsistencies across languages

Through the translation interface, the researcher can review translations question-by-question, see how piped text interacts with each question, and inspect how translations behave across different logic paths.
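As a rough illustration of what "reviewing each translated item in the context of the original" implies, the sketch below pairs source questions with their translations and runs basic completeness checks before any deeper, meaning-level QA. The field names and checks are hypothetical, not EmpathyIQ's actual schema or QA model.

```python
# Hypothetical sketch: pairing source questions with translations for QA.
# Field names and checks are illustrative only, not EmpathyIQ's schema.

from dataclasses import dataclass, field


@dataclass
class TranslatedItem:
    question_id: str
    source_text: str                  # original-language wording
    translations: dict[str, str]      # language code -> translated wording
    issues: list[str] = field(default_factory=list)


def basic_checks(item: TranslatedItem, languages: list[str]) -> list[str]:
    """Flag obvious gaps before deeper, meaning-level review."""
    issues = []
    for lang in languages:
        text = item.translations.get(lang, "").strip()
        if not text:
            issues.append(f"{item.question_id}: missing {lang} translation")
        elif text == item.source_text:
            issues.append(f"{item.question_id}: {lang} text identical to source")
    return issues
```

Checks like these only catch mechanical problems; the meaning-drift and nuance review described above still requires evaluating each translation against the intent of the source question.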

---

## Step 5: Pre-launch checkpoint

Once QA is complete, the platform presents a clear, explicit pre-launch checklist:

- Outstanding risks
- Unresolved issues
- Decisions still requiring human confirmation

At no point does the system say "you're good to go" without explanation. Launch confidence comes from explicitly understanding remaining risk, not from passing an opaque validation gate.

---

## Step 6: Sourcing respondents

Before fieldwork begins, the researcher configures sample within the project. EmpathyIQ connects natively to panel providers, including PureSpectrum and a growing network of supply partners, so targeting, quotas, and sample flow are managed directly within the platform. Every respondent that enters the study passes through EmpathyIQ's quality framework before their data reaches the dataset.

---

## Step 7: Fieldwork with live quality control

Once the study is live, the Trust Centre takes over.

This is not a dashboard buried in an analytics tab. It's the first thing the researcher sees when she logs in. Across all live studies, she sees the three core quality dimensions (Real, Unique, Engaged) with clear indicators of which studies have issues, which dimensions are affected, and how severe the problems are.

When something looks off, she clicks into the affected study and sees flagged respondents with clear reasons for each flag.

For each respondent, she can:

- Review why they were flagged
- See their individual answers
- Accept or reject them

Rejected responses trigger automatic quota replacement. The platform prompts this review daily, building the audit trail in real time.

A hands-on team can review every flagged respondent individually. A lighter-touch team can set thresholds so that critical flags are auto-rejected while only borderline cases surface for review. Either way, the quality framework runs on every study, every time.
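A lighter-touch threshold policy like the one described above can be expressed as a simple triage rule: critical flags auto-reject, borderline flags queue for human review. The sketch below is illustrative only; the flag dimensions come from the narrative, but the severity levels and function are hypothetical, not EmpathyIQ's actual implementation.

```python
# Illustrative sketch of a threshold-based review policy.
# Severity levels and triage logic are hypothetical examples.

from dataclasses import dataclass


@dataclass
class Flag:
    dimension: str   # "Real", "Unique", or "Engaged"
    severity: str    # "critical" or "borderline"


def triage(flags: list[Flag], auto_reject_critical: bool = True) -> str:
    """Return 'accept', 'reject', or 'review' for one respondent."""
    if not flags:
        return "accept"
    if auto_reject_critical and any(f.severity == "critical" for f in flags):
        return "reject"   # rejection would trigger automatic quota replacement
    return "review"       # borderline cases surface for a human decision
```

A hands-on team would set `auto_reject_critical=False` so every flagged respondent reaches the review queue; either way, every respondent is evaluated.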

---

## Step 8: Turning data into defensible insight

After fieldwork, EmpathyIQ already holds the brief, the background research, the questionnaire, and the cleaned data, all connected within the project.

The researcher explores results conversationally: identifying patterns, testing hypotheses, asking whether findings are supported elsewhere. The system surfaces corroborating research from past projects, highlights contradictions, and suggests relevant findings, regardless of when or how the original research was conducted.

When building a report, the researcher:

- Plans the narrative slide-by-slide
- Pulls in data deliberately
- Writes the story she wants to tell

AI reviews the narrative: checking whether claims are supported, identifying missing context, and suggesting additional evidence.

The final report is published with full traceability: how the insight was formed, what evidence supports it, and where judgment was applied.

---

## What happens after the project

At this point, the project doesn't just close. It becomes part of the system.

The brief, the context, the materials, the cleaned data, and the report all live in the knowledge base. They are structured, tagged, and queryable alongside every other project the company has ever run through the platform.

The next time a researcher starts a new study in a related area, the system already knows what was learned before. It can surface relevant past findings, flag where new research might duplicate existing knowledge, and help the team build on what already exists instead of starting from scratch.

Over time, this is what turns individual studies into a compounding body of customer understanding.

---

## Beyond the research team

As the knowledge base grows, it stops being something only the research team interacts with.

Product, marketing, sales, and commercial teams can access the knowledge through a conversational interface. They can ask questions of the research directly, without waiting for a report or going through the insight team as an intermediary.

The insight team stays in control of quality and methodology. But the knowledge they produce becomes available to everyone who needs it, in real time.

---

## What makes this different

At no point does EmpathyIQ try to replace the researcher. It follows the same path a good researcher already takes, and provides support at every stage.

- It surfaces risk before it compounds
- It preserves context so nothing is lost between stages
- It supports judgment with evidence rather than replacing it with automation
- It makes decisions auditable, so stakeholders can see how conclusions were reached

Confidence isn't assumed. It's constructed, reviewed, and made visible at every stage.

This is a market research platform built by market researchers, drawing on over two decades of experience running global research programmes, to solve the problems they know firsthand.