Repo path
strategy/confidence-positioning-brief-internal.md
The most direct articulation of the confidence thesis and how it should land with the business internally and in demos.
Raw source preview
Raw, unprocessed file text shown below. Preview truncated for readability.
---
title: "Confidence Positioning Brief (Internal)"
source: notion
notion_id: "3192425b-f103-807d-bd37-f80d55f380ed"
migrated: "2026-04-02"
status: active
tags: [strategy, confidence, positioning, internal]
---

# Confidence Positioning Brief (Internal thinking doc)

> Become the team the business trusts for customer understanding.
>
> EmpathyIQ makes every study your team delivers defensible — from design through fieldwork to the boardroom — and turns your research into a structured knowledge base the entire organisation depends on.

## The world your buyer lives in

If you run an insight team inside a mid-to-large enterprise today, you are under pressure from every direction. Your stakeholders want answers faster. They've seen what AI can do and they want to know why research still takes weeks. They don't care about methodology; they care about speed and cost.

At the same time, other teams have started going around you. Marketing is running SurveyMonkey polls. Product is using AI chatbots to "talk to customers." The C-suite is pulling data from social listening tools and making strategic decisions without you in the room. They're not doing this to undermine you; they're doing it because you're too slow, or too busy, or they don't know the value you provide.

Meanwhile, your own team is stretched. You've got junior researchers designing studies that you don't have time to review properly. You've got senior researchers spending their days cleaning data instead of thinking. You've got a decade of past research buried in shared drives that nobody can find, so every new project starts from scratch.

And underneath all of this, there's a deeper fear: that the work your team produces might not be as solid as it needs to be — and that one day, someone important is going to notice.

This is what the head of insight is actually worried about. Not survey tools. Not AI features.
They're worried about whether their team can keep up, whether the work can be trusted, and whether they'll still matter in two years.

EmpathyIQ is built to make that fear go away.

---

## The 20-second version

**EmpathyIQ makes every study your team produces defensible — before it launches, while it's in field, and when it reaches the boardroom.**

It does this by building confidence directly into the research process: catching design flaws before they become expensive mistakes, monitoring data quality in real time so problems are fixed during fieldwork instead of discovered after, and pressure-testing conclusions against evidence so the story you tell can be stood behind.

The result: your team delivers faster without cutting corners. Your juniors produce work you'd trust from a senior. Your seniors spend their time on thinking, not cleaning. And when your stakeholders ask "how do we know this is right?" — you have an answer.

---

## What confidence actually means — and why it matters now

Every research platform talks about quality. Most of them mean "our surveys look professional" or "we use AI to speed things up." That's not what we mean.

Confidence, in EmpathyIQ, is specific and measurable. It means:

**You can prove your research was designed properly.** Not because a senior researcher happened to review it, but because the system flagged ambiguous questions, identified leading language, caught missing context, checked screening alignment, and carried the important trade-offs through a visible review flow before anything launched. If your stakeholder asks "why did you ask it this way?" — the answer is documented.
**You can prove your data is clean.** Not because you spent hours in Excel after fieldwork, but because the platform monitored every respondent across 38+ quality metrics in real time — catching the ones who speed through grids while straightlining, the duplicates, the bots — and walked your team through a structured review process while the study was still live. If your stakeholder asks "can we trust this sample?" — you can show them exactly why.

**You can prove your conclusions are supported.** Not because your analyst is talented, but because the system checked claims against the underlying data, surfaced contradictions, and pulled in corroborating evidence from past research. If your stakeholder asks "is this just one study or do we really know this?" — you can point to the evidence trail.

This is what makes confidence different from quality. Quality is a feature. Confidence is an audit trail. It's the ability to defend every decision in the research process — not after the fact, but because the process itself was built to be defensible.

---

## Why this matters for the insight team's position

Confidence isn't just about better research. It's about the insight team's role in the organisation.

Right now, insight teams are being squeezed from two sides. On one side, there's pressure to adopt AI and deliver faster. On the other, there's the risk of being bypassed entirely — replaced by tools that promise instant answers without the rigour. EmpathyIQ resolves both pressures at once.

**It lets the team move faster without the fear of it falling over.** AI is embedded throughout the process — but it's not replacing the researcher's judgment. It's augmenting it. The system handles the grunt work (formatting questionnaires, monitoring respondent quality, scanning for design flaws) so the researcher can focus on the thinking that actually matters. Speed goes up. Risk doesn't.
**It makes the team's work visibly, provably better than what other teams can do on their own.** When marketing runs a quick poll or product asks ChatGPT about customers, there's no quality framework, no validation, no audit trail. When the insight team runs research through EmpathyIQ, every step is checked, every decision is traceable, and the conclusions are linked to evidence. That's the difference between "we think" and "we know" — and it's the difference that protects the insight team's relevance.

**It gives senior leaders oversight without requiring their time.** A head of insight can't personally review every questionnaire and every dataset. But with EmpathyIQ, they don't have to. The system applies the same rigour to every project regardless of who's running it. The senior researcher sets the standards. The platform enforces them. The result is consistent quality across the team, even when the head of insight isn't in the room.

**It turns the insight team from a service desk into an indispensable function.** When research is just "the team that runs surveys," it's easy to bypass. When research is the function that maintains the company's structured, validated, continuously growing understanding of its customers — and other teams depend on that understanding to do their jobs — it becomes infrastructure. You don't bypass infrastructure. You invest in it.

---

## How confidence works in practice

### Confidence before: catch it before it costs you

When a researcher uploads a brief, the system doesn't just accept it — it interrogates it. What context is missing? What assumptions are being made? What does past research already tell us about this question?

When the researcher designs a questionnaire or discussion guide, an AI review layer — built from the methodology expertise of working market researchers — checks every question against best-practice principles.
Ambiguity, bias, leading language, structural risk, screening misalignment, and the way earlier questions can distort later answers are all flagged with clear rationale before a single respondent sees it.

The strongest wedge today is not just "a better checker." It is a governed redesign workflow that carries the work from review into follow-up, structured changes, revision, and final QA. That is where the pre-launch confidence story is stronger than generic AI review, even if builder-native productisation and enterprise packaging are still catching up with incumbent platforms.

Before launch, the platform presents an explicit checklist of remaining risks. Not a green light. Not "you're good to go." A clear-eyed view of what's been resolved, what hasn't, and what judgment calls the researcher still needs to make.

**The outcome:** No more "I wish I'd asked that differently" moments discovered in field. No more expensive mistakes caught too late. The research team can move faster into fieldwork because the design has already been stress-tested — and the head of insight can trust that it was, even if they didn't review it personally.

### Confidence during: know your data is clean while you can still fix it

Once a study goes live, the Trust Centre monitors every respondent in real time across three core dimensions: are they real, are they unique, and are they engaged? This isn't a dashboard buried in an analytics tab. It's the first thing a researcher sees when they log in. Live studies are displayed with their quality status. Issues are surfaced immediately.

The Trust Centre covers more in-field respondent-quality dimensions than most platforms. Most platforms check response speed at the survey level — did this person finish a 15-minute survey in 3 minutes? EmpathyIQ checks speed at the question level and scores each respondent based on how many individual questions they rushed.
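To make the question-level speed check concrete, here is a minimal illustrative sketch. The threshold values, the `Answer` type, and the function names are assumptions made for this example only, not EmpathyIQ's actual metrics or implementation:

```python
from dataclasses import dataclass

# Illustrative thresholds only -- the platform's real metric names and
# cut-offs are not described here, so these values are assumptions.
RUSH_SECONDS = 2.0         # fastest plausible time to read and answer one question
RUSHED_QUESTION_LIMIT = 5  # flag a respondent who rushes this many questions

@dataclass
class Answer:
    question_id: str
    seconds: float  # time the respondent spent on this question

def rushed_questions(answers: list[Answer]) -> list[str]:
    """Per-question speed check: which individual questions were rushed?"""
    return [a.question_id for a in answers if a.seconds < RUSH_SECONDS]

def flag_respondent(answers: list[Answer]) -> bool:
    """Flag a respondent once enough individual questions were rushed,
    rather than judging total survey duration alone."""
    return len(rushed_questions(answers)) >= RUSHED_QUESTION_LIMIT
```

The point of the sketch is the unit of analysis: a respondent who answers most questions thoughtfully but rushes a handful is scored per question, which survey-level duration checks cannot distinguish.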
It then combines that with pattern recognition: if someone is straightlining a grid while also speeding through it, or alternating 1-2-1-2 at speed, the system flags that specific question as disengaged — not just the respondent overall.

Where it does not yet match Qualtrics Response Quality is on the later control point: post-field response-quality and fraud clean-up. That is a real product gap. The Trust Centre's stronger angle today is live review while fieldwork is still running, with clearer signal logic and human review before a final call is made.

More than 38 individual metrics feed into each respondent's quality profile. Flagged respondents are surfaced with clear reasons. The researcher reviews them, decides to accept or reject, and rejected respondents automatically trigger quota replacement. The platform prompts this review daily.

**The outcome:** Data quality is managed during fieldwork, not discovered after. The team no longer wastes hours in post-field Excel clean-up arguing with panel companies about which respondents were legitimate. The audit trail is built as you go — every decision documented, every rejection justified. When a stakeholder asks "how confident are you in this data?" the answer isn't "we think it's fine" — it's "here's exactly what we checked and what we removed."

The Trust Centre is designed to meet teams where they are. A hands-on team can review every flagged respondent individually. A lighter-touch team can set thresholds so that critical flags are auto-rejected while only borderline cases surface for review. Either way, the quality framework runs on every study, every time.

### Confidence after: stand behind the story

When the researcher builds a report, the system already holds the brief, the background research, the questionnaire, and the cleaned data — all connected within the project. The researcher explores results conversationally, testing hypotheses and identifying patterns.
Because the knowledge base spans projects, the system can surface findings from past research that corroborate or challenge the current story. AI reviews the narrative: are claims supported by the data? Is context missing? Are there findings from other studies that strengthen — or complicate — the conclusion?

As the knowledge base grows, this becomes a way for the insight team to deliver something no one else in the organisation can: not just "here's what this study found" but "here's what this study found, and here's how it connects to everything else we know." Cross-study corroboration turns individual findings into organisational conviction.

**The outcome:** Reports are defensible. Conclusions are traceable. And the insight team can say