Supporting evidence

Product and SKU readiness

Strategy portfolio narrative

Connects the product portfolio, proof gaps, ICP implications, and commercial open questions into one living narrative.

Repo path

strategy/strategy-portfolio-narrative-living-artifact.md

Notes

No additional notes recorded.

Raw source preview

Raw, unprocessed file text shown below.

---
title: "Strategy + Portfolio Narrative — Living Artifact"
status: active
tags: [context, internal, synthesis, strategy, portfolio]
last_updated: '2026-04-02'
---

# Strategy + Portfolio Narrative — Living Artifact

*This document bridges the strategic narrative (why EmpathyIQ exists, what it promises) and the portfolio database (what we are building, at what readiness level). It is intended to be maintained as the portfolio evolves. Update it when strategic decisions change, when portfolio priorities shift, or when new evidence from customers changes assumptions.*

---

## How the portfolio maps to the strategy

### The strategic promise and the portfolio response

**Promise:** "Become the team the business trusts for customer understanding."

**What delivers it:**

| Strategic layer | Products that deliver it | Readiness |
|---|---|---|
| Research Execution — run trustworthy research | Quant Surveys + Discussion Groups + AI Interviews + Panel | Quant Surveys ~95%, Discussion Groups separate, AI Interviews outsourced (Tellet), Panel live (Musgrave) |
| Research Design & Planning — design grounded, defensible research | Research Architect + Research Guard | Research Architect: early definition; Research Guard: beta redesign workflow whose review depth is stronger than today's productized surface |
| Knowledge — accumulate and access what you've learned | Insight Navigator + project-level research records | Being built; not deeply integrated |
| Confidence — prove the work was done right | Trust Centre (During) + Research Guard (Before) + Claim corroboration (After) | Trust Centre: built. Research Guard: beta, strongest on governed pre-launch redesign but still weaker on productization and enterprise packaging. Claim corroboration: not yet. |

### The core workflow → confidence mapping

The Quantitative Research Platform has five Core Workflows. Three are differentiating; two are hygiene:

| Core Workflow | Confidence stage | Differentiating? | Current state |
|---|---|---|---|
| Survey Design | Before | Yes — catch design flaws pre-launch | PoC (Questionnaire Reviewer) |
| Survey Deployment | During | Yes — real-time respondent quality (Trust Centre) | Built, core differentiator |
| Analysis & Insight | After | Yes — defensible, reusable, corroborated reporting | Earliest phase |
| Data Access | — | No — table stakes | Table stakes |
| Administration & Governance | — | No — table stakes (must be solid) | Functional |

### What the portfolio CSV tells us about build posture

The portfolio uses three readiness labels plus a prioritisation field:
- **There** — built and meeting expectations
- **Almost There** — built but needs refinement
- **Not Started** — planned but not built
- **Now / Next / Later** — prioritisation horizon

As of Q1 2026: Survey Deployment (Trust Centre) is the most complete Core Workflow. Survey Design / Research Guard has mechanism proof that now runs ahead of its product maturity; it still needs a more productized review surface. Analysis & Insight is early.
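When reviewing the CSV against this section, a quick build-posture summary can be pulled programmatically. This is a minimal sketch only: the column names `product`, `feature`, `readiness`, and `horizon` are assumptions for illustration, not the actual portfolio CSV schema.

```python
import csv
from collections import Counter

# Hypothetical readiness labels, matching the three defined above.
# Real portfolio CSV column names may differ.
READINESS_LABELS = {"There", "Almost There", "Not Started"}

def build_posture(path):
    """Count features per readiness label and per Now/Next/Later horizon."""
    readiness, horizon = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("readiness") in READINESS_LABELS:
                readiness[row["readiness"]] += 1
            if row.get("horizon"):
                horizon[row["horizon"]] += 1
    return readiness, horizon
```

A summary like this only supports the readiness-label review; the CSV itself remains the source of truth for portfolio structure.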

---

## Open strategic questions (as of Q1 2026)

These are the questions that have not been answered and that incoming evidence should resolve:

1. **Is "confidence before/during/after" differentiating or hygiene?** We believe it's differentiating. But the first 10 customer interviews need to validate whether researchers name this as a source of willingness to pay, or just "table stakes."

2. **Is "reduced uncertainty" the right umbrella language?** "Confidence" risks sounding like an admission that research might be wrong. "Defensibility", "audit trail", "governance" may land better with enterprise procurement. Test in interviews.

3. **What is the fastest success loop per product?** We need a proof metric that shows value within the first project a customer runs. Currently undefined for analysis & insight and knowledge repo.

4. **Pricing and packaging for enterprise conversion.** Post-pilot pricing is still open. See [strategy/pricing-model.md](../../Strategy/pricing-model.md) for current level-based structure — but exact seat pricing, commit thresholds, and enterprise contract terms are OPEN.

5. **Panel platform scope.** "Panel" is in the portfolio as a product concept. What does EmpathyIQ actually mean by panel? Recruitment aggregator? First-party panel management? A relationship directory? This is unresolved.

6. **Persona builder definition and feasibility.** Persona Builder PoC exists. But the product vision (AI-driven personas from real respondent data) requires the knowledge base to have critical mass. Define the minimum viable threshold.

---

## Customer evidence — what we know and what we don't

### What the interviews confirm (strong signal)

- **Reporting is the dominant time sink.** Danny, Michelle, and Stephen all described a gap between "story known" and "slides finished." This is not about analysis — it's about visualisation and evidence packaging. Confirms P-103.
- **Control over visual output is non-negotiable.** Researchers reject tools that make structural or visual decisions on their behalf. Auto-generated decks are distrusted. Any Analysis & Insight feature must preserve user control.
- **Fieldwork ops quality checking is heavily manual.** Leanne's interview surfaced 20+ specific pain points around multi-market QC, respondent removal, and open-end cleaning. This validates the Trust Centre positioning.
- **Qual design confidence is about priming control, not just question wording.** Stephen's interview added nuance: "confidence before" in a qual context means avoiding ordering mistakes that prime respondents, and adapting the guide through early fieldwork sessions.

### What we still don't know (open evidence gaps)

- **Willingness to pay.** No interviewee has explicitly named a price or a switching budget. Pain is clear; WTP signal is weak.
- **The fastest proof metric.** We know what the problems are. We don't know which solved-problem is enough to trigger an annual contract commitment.
- **Whether agency-side researchers have meaningfully different needs.** All interviews so far are client-side. The GTM decision document reflects this ICP choice, but the evidence base is thin.

---

## Architecture coverage: what's complete, partial, and missing

### Complete (content exists, no gaps identified)

- Strategy narrative: [strategy/in-depth-what-is-empathyiq.md](../../Strategy/in-depth-what-is-empathyiq.md)
- Confidence positioning (internal + external)
- Competitive feature review and JTBD value proposition analysis
- Market analysis (confidence lever evidence, pre-mortem, gap analysis)
- Trust Centre documentation embedded in portfolio features
- 4 of 6 PoC setups (questionnaire reviewer, persona builder, MEAT IQ, prompt improver)
- Pricing model (level-based structure)
- GTM readiness plan and key decisions

### Partial (exists but needs updating or expansion)

- **Analysis & Insight roadmap** — the "after" confidence thesis is the weakest in the portfolio. Feature families exist in the CSV but have minimal spec content.
- **Insight Navigator** (formerly Knowledge Repository) — the concept is clear in strategy docs but portfolio feature specs are sparse.
- **Qual Discussion Groups** — Core Workflows defined but Job Areas and Feature Families are less developed than the Quant product.
- **Insight patterns register** — referenced in interview output files (P-103, P-003, etc.) but no central patterns document exists in this repo.

### Missing (not yet captured anywhere in the repo)

- **Proof metrics per product** — what measurable outcome proves value in the first project
- **Pilot outcomes** — what happened with Musgrave and UKOmnibus Group during their pilots
- **WTP evidence** — any pricing experiment or explicit pricing conversation from interviews
- **CLAUDE.md** — an AI agent orientation file at the repo root (distinct from this document; closer to a `.cursorrules` or coding context file)

---

## How to maintain this document

Update this document when:

- A strategic decision changes (add to the relevant section; update the "open questions" block)
- New customer evidence arrives (add to the "what we know" section; close open evidence gaps)
- The portfolio CSV is updated (review the coverage table and update readiness labels)
- A new document is added to the repo that resolves an architectural gap

Do **not** use this document as the source of truth for portfolio structure (use the CSV) or for specific strategic claims (use the strategy documents). This document is a synthesis layer — it points to where things live, not what they contain.