Repo path: pm/decisions/strategy-decisions.md

Rich context on launch posture, suite strategy, delivery mode, shared services, and what still remains genuinely open.
Raw source preview (truncated for readability):
---
title: 'Strategy Decisions and Rationale (Addendum)'
source: notion
notion_id: '3032425b-f103-80b3-9f1a-d2b58a2d9b07'
migrated: '2026-04-02'
last_refreshed: '2026-02-05'
status: active
tags: [strategy, decisions, rationale, internal]
---

# Strategy Decisions and Rationale (Addendum)

**Last refreshed:** 2026-02-05

**Purpose:** A durable, portable record of my current thinking on EmpathyIQ's strategy, portfolio, and how the architecture ladders from product promises down to what we build. This is meant to be queried, challenged, and updated over time.

**Role:** Decision + rationale record (includes founder/CTO context and working assumptions).

**Use for:** why we believe what we believe, what's locked, what's open, and the reasoning behind portfolio structure and launch posture.

**Not authoritative for:** current portfolio structure, readiness, or Now/Next/Later labels (see the portfolio CSV + interpretation/change guides).

**Precedence:** If this doc conflicts with the latest CSV labels, treat it as "context" and use the CSV + guides for the current state.

**Companion doc:** the "Confidence Story" narrative explains the end-to-end experience and differentiation; this addendum captures decisions, rationale, and open questions.

---

## 1) The core problem we're solving (as I currently see it)

Enterprise client-side researchers already "do research." They already have tools, vendors, agencies, templates, and ways of getting things done. The real issue isn't that they're missing steps in the lifecycle. They're missing **trust** and **continuity** across steps.

What's broken is the invisible backbone:

- Too many handoffs
- Too many manual checks and specialist dependencies
- Too much rework late in the process
- Too much risk that the data is dirty or the story can't be defended
- Too much "deck rebuild" from scratch
- Too much fragmentation between quant, qual, and institutional knowledge

So the suite can't just say "we cover everything end-to-end." They already have end-to-end coverage in practice (just messy). The real question they ask (implicitly) is:

> "Why should I rip out or coexist with my current stack?"

The suite needs to win on system-level outcomes:

- **Reduced uncertainty**
- **Defensible decision-making**
- **Consistency and continuity**
- **Reduced operational burden (less babysitting)**

---

## 2) The portfolio "throughline": Confidence vs Certainty vs Speed

### The market narrative today: speed

Most competitors sell some form of "speed":

- faster studies
- faster analysis
- more qual at quant-like scale
- more with fewer people

Speed is real value, but it's also table-stakes language now.

### My current hypothesis: certainty (reduced uncertainty) is the real value

I think a stronger truth is this:

- Companies don't ultimately want speed.
- They want **certainty** (or at least reduced uncertainty) so they can act without regret.

That "certainty" breaks down into three places where teams bleed time, risk, and trust:

1. Certainty we're asking the right questions (and asking them well)
2. Certainty we're collecting real, clean, decision-grade data
3. Certainty we're surfacing the right insights and can defend them

Speed is often a byproduct of certainty:

- If we reduce rework and late surprises, the process becomes faster.
- If we make the chain defensible, the deck becomes faster to produce and reuse.
- If we detect quality issues early, fieldwork runs smoother.

### Why I'm careful with the word "certainty"

"Certainty" can sound like a guarantee, which research can't honestly promise. So the positioning I'm leaning toward is:

- **Reduce uncertainty**
- **Decision-grade confidence**
- **Defensible outcomes**
- **Fewer late surprises**

### The structure that still works (and maps to product design)

Even if "reduced uncertainty" is the umbrella, the best operating model I've found for actually building is:

**Confidence before / confidence during / confidence after research**

It's specific, it maps to workflows, and it keeps us honest about what we're improving.

---

## 3) Who we're building for (DECISION – locked)

### ICP (current)

- Enterprise buyers (not individual subscribers first)
- Client-side researchers (explicitly not agency-side first)
- Lower tolerance for complexity
- Higher need for defensibility and trust

### Why this matters operationally

Even a great product can fail here if it requires an org change that the customer can't absorb. So we need to design for:

- assisted onboarding
- guardrails
- reliability and governance
- a clear proof loop that makes adoption feel rational and safe

---

## 4) What the portfolio is (and why these tools belong together)

### Portfolio stance **(DECISION – locked)**

EmpathyIQ is a **Research OS / suite**. Not "just a survey tool."

### Portfolio products (current list)

In the portfolio today:

1. **Platform (Shared Services)**
2. **Quantitative research platform**
3. **Qualitative discussion group platform**
4. **Qualitative AI interviewer** (outsourced integration, branded as part of EmpathyIQ)
5. **AI Knowledge repository** (query uploaded documents/reports via natural language)

Planned / future (not yet fully defined):

6. **Panel platform** (rudimentary; not yet built)
7. **Persona builder** (future concept)

### Why these belong together (the non-hand-wavy version)

These products belong together if, and only if, they reinforce system-level outcomes that point tools don't deliver well. The suite must be able to say:

- we reduce uncertainty *across* research types (quant + qual + knowledge)
- we reduce operational burden via shared primitives (identity, governance, audit trail, templates, reuse)
- we preserve continuity so insights compound instead of resetting every project

---

## 5) The delivery reality: SaaS vs software-under-service (WORKING STANCE / HYPOTHESIS)

### Founder belief (important context)

The founder's view is that pure SaaS adoption in research is often an org change, and the market historically hasn't gone "all-in" on self-serve in large numbers. He believes "software under service" or "do-it-together" may be the durable wedge.

### My working stance

We can still build a product business, but we need delivery modes that match enterprise reality.

Delivery modes (strategy-level, not architecture):

1. **Self-serve**
2. **Do-it-together** (assisted adoption)
3. **Software-under-service** (managed delivery using our platform)

The risk is obvious: services can hijack the roadmap and turn into bespoke chaos. So the rule I want is: if we do service wrappers, they must be **productized** (repeatable templates + tight SKUs), not one-off custom work.

---

## 6) Launch posture and pilot ask (WORKING PLAN)

### Launch posture (current)

We likely launch with four products visible as a suite:

1. Quant tool (survey editor) — ~95% there
2. Discussion forum tool — owned, separate flow
3. AI Knowledge repository — being built; not deeply integrated yet
4. AI interviewer — outsourced integration, branded inside EmpathyIQ

The "suite posture" matters because the founder is emotionally attached to not being pigeonholed as only a survey tool.

### Pilot (the ask)

Working pilot structure:

- handful of customers (already testing with Musgrave and UKOmnibus Group; aiming for Ryanair)
- ask them to run applicable research on the platform (quant and/or qual)
- bi-weekly conversations with us to get to root issues
- pilot for ~3 months, free, with a "break clause"
- target outcome: convert to a 1-year paid contract after the pilot (price TBD)
- ideally allow logo/name usage (discount incentives possible)

What we provide:

- high-touch onboarding
- direct channel support
- priority fixes for pilot blockers
- influence over shaping the product

### What "success" looks like (directional)

- consistent in-platform usage (studies launched, exports, knowledge repo usage)
- evidence of reduced uncertainty / reduced rework (proxy metrics)
- willingness to sign an annual contract (killer signal)

(Exact thresholds are OPEN until we see baseline behaviour.)

---

## 7) Our planning system: how we structure everything (canonical)

### Why we needed an architecture system

The founder tends to zoom up to portfolio and suite-level thinking when we need to make near-term build decisions. I need a system that:

- shows how everything fits together
- makes scope and tradeoffs concrete
- keeps the portfolio discussion from looping forever
- lets us ladder features to outcomes and to the suite promise

### Canonical hierarchy (locked)

**Portfolio → Product → Core Workflow → Job Area → Feature Family → Feature → Document**

### Canonical system-of-record (locked)

- **Notion Architecture Nodes DB** is canonical
- nesting is represented via sub-items
- the main table stays lightweight for scanning and navigation
- details live in page bodies and nested Documents
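To make the ladder concrete, here is a minimal sketch of the node structure in Python. It is illustrative only: the `ArchNode` class, its fields, and the one-level-per-step check are my own shorthand, not an existing schema; the Notion Architecture Nodes DB remains the canonical record.

```python
from dataclasses import dataclass, field
from typing import Optional

# The seven canonical levels, in ladder order (Portfolio at the top).
LEVELS = [
    "Portfolio", "Product", "Core Workflow", "Job Area",
    "Feature Family", "Feature", "Document",
]

@dataclass
class ArchNode:
    """One row in a hypothetical architecture-nodes table.

    Mirrors the Notion convention: nesting via sub-items, with details
    living in the page body rather than in table columns.
    """
    name: str
    level: str
    parent: Optional["ArchNode"] = None
    children: list["ArchNode"] = field(default_factory=list)

    def add_child(self, child: "ArchNode") -> "ArchNode":
        # Enforce the ladder: a child must sit exactly one level below.
        if LEVELS.index(child.level) != LEVELS.index(self.level) + 1:
            raise ValueError(
                f"{child.level!r} cannot nest directly under {self.level!r}"
            )
        child.parent = self
        self.children.append(child)
        return child

# Example: laddering a workflow up to the suite.
portfolio = ArchNode("EmpathyIQ", "Portfolio")
quant = portfolio.add_child(ArchNode("Quantitative research platform", "Product"))
deploy = quant.add_child(ArchNode("Survey Deployment", "Core Workflow"))
```

The `add_child` guard is the code equivalent of keeping the ladder honest: a Feature cannot attach directly to a Product without naming the Core Workflow and Job Area it serves.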
---

## 8) Quantitative Research Platform: how it ladders

### Product promise (still being tightened)

The intent is:

- not another survey tool
- an end-to-end quant research product
- strongest value at the "fragile points" where teams second-guess, babysit, and rework

### Core Workflows (current)

These are the pillars of the quant product:

1. **Survey Design** (differentiating)
2. **Survey Deployment** (differentiating)
3. **Analysis & Insight** (differentiating)
4. **Data Access** (table stakes)
5. **Administration & Governance** (table stakes)

### How the "confidence model" maps

- **Before** = Survey Design
- **During** = Survey Deployment
- **After** = Analysis & Insight
- Hygiene foundations = Data Access + Admin/Governance

### Survey Design (the "before" thesis)

What I think matters most here:

- reassurance that the design is methodologically sound
- nothing important overlooked
- internal consistency
- reduced reliance on specialist review
- less fragile handoff to programming
- less manual formatting and renumbering-type effort
- easy collaboration with stakeholders/clients without chaos

### Survey Deployment (the "during" thesis)

The biggest deployment pains I'm focused on:

- real people vs bots
- data quality signals (speeding, straight-lining, etc.)
- flagging and removing/handling bad responses early
- fewer fieldwork manager "tests" and firefighting mid-field

If we crack this, fieldwork becomes calmer and more reliable, and confidence rises fast.
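To make the "during" signals concrete, here is a rough sketch of the kind of early flagging described above. The thresholds, field names, and function names are illustrative assumptions, not product decisions.

```python
import statistics

# Hypothetical shape of a completed response: duration plus one grid battery,
# e.g. {"id": "r1", "duration_s": 210, "grid": [3, 4, 2, 5, 3]}

def flag_response(resp: dict, median_duration: float) -> list[str]:
    """Return quality flags for one response (empty list = looks clean)."""
    flags = []
    # Speeding: finished in under a third of the median completion time.
    # (The 1/3 cutoff is a common heuristic, chosen here for illustration.)
    if resp["duration_s"] < median_duration / 3:
        flags.append("speeding")
    # Straight-lining: identical answers across an entire grid battery.
    grid = resp.get("grid", [])
    if len(grid) >= 4 and len(set(grid)) == 1:
        flags.append("straight-lining")
    return flags

def review_field(responses: list[dict]) -> dict[str, list[str]]:
    """Flag suspect responses mid-field so they can be handled early."""
    median_duration = statistics.median(r["duration_s"] for r in responses)
    return {
        r["id"]: flags
        for r in responses
        if (flags := flag_response(r, median_duration))
    }
```

The specific heuristics matter less than when they run: computing flags continuously during fieldwork is what makes removing or replacing bad responses cheap, instead of a post-field cleaning pass.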
### Analysis & Insight (the "after" thesis)

Two major drains:

1. Deck/report creation is painfully manual and repetitive
2. Open ends are time-consuming to interpret and synthesize

A real differentiator would be (one possible shape is sketched after this list):

- reusable reporting outputs
- the ability to define and reuse slide structures/templates
- pull forward prior presentation patterns without re-entering everything
- structured, defensible synthesis (especially for open ends)
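As a thought experiment on what "reusable slide structures" could look like as data (purely illustrative; none of these names exist in the product), a template could bind named slots to analysis outputs so that a prior deck's shape can be replayed against a new study:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SlideSpec:
    """One slide in a reusable deck template (hypothetical shape)."""
    title: str          # e.g. "Awareness by segment"
    chart: str          # chart type: "bar", "line", "table", ...
    source_query: str   # named analysis output this slide binds to

@dataclass(frozen=True)
class DeckTemplate:
    """A deck structure that can be replayed against a new study."""
    name: str
    slides: tuple[SlideSpec, ...]

# Defining the template once captures the "presentation pattern";
# re-running it against a new study fills the slots with fresh data.
quarterly_tracker = DeckTemplate(
    name="Quarterly brand tracker",
    slides=(
        SlideSpec("Topline awareness", "bar", "awareness_by_wave"),
        SlideSpec("Consideration by segment", "bar", "consideration_x_segment"),
        SlideSpec("Verbatim themes", "table", "open_end_themes"),
    ),
)
```

The template, not the finished deck, becomes the reusable asset, which is what would remove the "deck rebuild from scratch" drain named in section 1.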
### Data Access (hygiene)

I don't currently see a strong differentiator here. The plan is to keep it table stakes and revisit only if customers show a real pain.

### Administration & Governance (hygiene)

Needs to be solid and boring:

- project/study lifecycle
- permissions
- audit logs
- organization and retrieval

If it's weak, enterprise adoption dies.

---

## 9) Platform (Shared Services): what it is and why it exists

The platform exists so the suite can behave like an enterprise product:

- identity/auth
- authorization and governance
- billing and entitlements (SKUs across products)
- observability and reliability

I've listed platform workflows as:

1. Identity and Access Management
2. Authorization and governance
3. Observability
4. Billing, Entitlements & Access

There has been overlap in naming historically ("governance and access" vs "admin and governance"). This needs rationalization, but the intent is clear: