Supporting evidence: Strategy and GTM narrative

Ideas log

Captures key product- and GTM-shaping insights from the website build process, including commercial taxonomy, product naming, and founder-facing site logic.

Repo path

pm/ideas-log.md

Notes

No additional notes recorded.

Raw source preview

The raw, unprocessed file text is shown below. The preview is truncated for readability.

# Ideas Log

Most recent entries at the top.

---

## 2026-04-14 — Research Architect: the missing first step in the research workflow

**Origin:** Product and IA conversation while building the website content pass.

The platform had tools for executing research (Quant Surveys, Discussion Groups, AI Interviews), tools for protecting quality (Research Guard, Trust Centre), and a tool for retrieving and synthesising past work (Insight Navigator). What it did not have was anything at the very start — before the questionnaire is designed, before Research Guard reviews it, before fieldwork begins.

The gap: most insight teams receive a brief and then immediately try to design research without first gathering the context that should shape the design. The desk research, competitive landscape, industry analysis, and prior findings that would make the questionnaire sharper are either missing, scattered across drives and tabs, or handled inconsistently between projects.

**Desmond's framing:**
> "This one helps you organise all your thoughts so that your research materials and contextual materials are sufficiently available to then design research. Research Guard makes sure that the research you design meets the spec — Research Architect helps you organise everything so you have enough context to design it well in the first place."

The concept as defined:
- Upload the brief → get a structured checklist of what context is worth gathering
- Platform offers to conduct some of that context gathering (desk research, competitive, industry)
- All materials organised in one project container before the first question is written
- Platform-agnostic: conduct the study on any tool if needed, import the materials and results back in afterwards

**The value proposition that emerged:**
> "Give us your brief. We'll help you design better research and do some of the legwork for you."

Originally named **Compass** — renamed to **Research Architect** on 2026-04-15. The two-word functional naming pattern aligns with Research Guard and Insight Navigator.

**Full workflow sequence now explicit:** Research Architect → Research Guard → Fieldwork tools → Insight Navigator. Each tool hands off to the next and links across.

**Why it matters:** Research Architect fills the pre-design context gap and makes the platform's lifecycle story complete. It also positions `<un>peel` as the place research begins, not just the place it lives or gets reviewed — a stronger anchor for the whole relationship.

**Status:** Page built and live. Connects to: Website solution section — Refine → Collect → Surface (the "Refine" co-pilot vision described there as long-term is now being built as Research Architect).

---

## 2026-04-14 — Don't name competitor AI tools directly: position against the category

**Origin:** Insight Navigator copy session — decision on how to frame the competitor.

The first draft of the Insight Navigator Section 1 named ChatGPT, Claude, and Gemini explicitly. Desmond rejected this:

> "I don't want to call them out specifically. I want it more broad. I want to send the message that we've fully thought through our implementation of this and have specifically tailored it to research in a way that most other direct and indirect competitors likely have not."

The resulting framing: "generic AI solutions" as the category, with the contrast landing on architectural difference — generic tools default to keyword proximity because they lack research context (brand, market, method, audience, fieldwork window, metric). Insight Navigator was purpose-built around research context.

**The distinction matters:** Naming specific tools makes the copy feel reactive and date-sensitive. Positioning against a category ("general-purpose AI") makes the copy evergreen and puts `<un>peel` in a different class by default. The message becomes: this wasn't bolted onto a language model, it was designed specifically for how research retrieval needs to work.

**Why it matters:** Any team member making the internal business case for `<un>peel` over continuing to use generic AI tools will find "purpose-built for research" easier to argue than "better than ChatGPT." The category framing also holds if specific tools improve or get replaced.

**Status:** Applied to Insight Navigator Section 1. Principle extends to any copy where generic AI tools could be named as alternatives.

---

## 2026-04-14 — Insight Navigator headline: system of record over traceable answers

**Origin:** Positioning conversation while reviewing Insight Navigator hero copy.

Original headline: *"Turn past studies into traceable answers your team can use today."*

Strengths: actionable, "traceable" is a real differentiator, clear.
Weakness: tactical and reactive — "use today" implies urgency over compound value. Undersells what the product is actually building toward.

Alternatives considered, with the angles they represent:
- *"Build a system of record for everything your team knows about customers."* — enterprise-familiar, strategic, positions what you're building not just what you're finding ✓
- *"Stop starting from zero. Your research is already an answer."* — most emotionally resonant, addresses the pain directly
- *"The more your team researches, the more useful everything becomes."* — captures the flywheel but soft as a standalone
- *"Own a growing body of customer understanding — not just a folder of old decks."* — punchy, but "old decks" is slightly informal for enterprise

**Decision:** System of record. Enterprise buyers already use this phrase when justifying platform investments internally — using it in the headline makes the business case easier to articulate upward. It also positions Insight Navigator as infrastructure, not just a search feature.

**Why it matters:** The headline choice shapes how buyers categorise what they're buying. "Traceable answers" buys a feature. "System of record" buys a platform capability. The second framing justifies a larger commercial commitment.

**Status:** Live in siteContent.ts and source doc.

---

## 2026-04-14 — Discussion Groups: researchers explicitly sell it as a budget alternative to focus groups

**Origin:** Desmond flagged during the Discussion Groups product page review.

Researchers don't just use Discussion Groups because they prefer async qualitative work. A common and explicit use case is: a client needs in-depth qualitative insight but does not have budget for a full moderated focus group programme. Discussion Groups get sold in as the alternative.

This is a real commercial positioning that emerges in sales conversations — not just a feature description. The page should reflect it because it gives buyers a frame for when to choose this over commissioning a focus group, not just over running a survey.

**Implication for copy:** The "When teams use it" section should include budget/focus group positioning explicitly — not buried but not leading either. It validates a purchasing scenario that buyers already have in mind.

**Why it matters:** Buyers who arrive at the page with a focus group brief and a reduced budget are already pre-sold on the format — they just need confirmation that this tool is the credible version of it.

**Status:** Applied to Discussion Groups Section 2 on the product page. Also saved in product context memory.

---

## 2026-04-13 — White-label positioning should align to vendor truth, not vendor phrasing

**[2026-04-13 naming update]** The public tool name is now **AI Interviews** (was AI Interviewer). The route has moved to `/platform/ai-interviews`. The in-survey feature previously called "AI Chat questions" is now **Conversational Questions** — kept as a feature inside the Surveys tool, not a separate page or nav item.

**Origin:** AI Interviewer review after confirming the product is a white-labeled Tellet experience.

The AI Interviewer page should borrow from the truths Tellet is strongest on:

- guided conversational interviews
- qualitative depth at a speed manual programs struggle to match
- voice, video, or text responses
- faster access to transcripts, themes, quotes, and follow-up questions
- broader reach across markets and segments without the same scheduling burden

But the page should not borrow Tellet's slogans or tone directly. The job is to stay faithful to what the underlying product actually does while still sounding like `<un>peel` and fitting the wider platform story.

That means the public framing should stay anchored in:

- researcher control over the objective, guide, context, and boundaries
- evidence arriving fast enough to act on while work is still live
- qualitative outputs staying connected to the project record and to Insight Navigator

**Why it matters:** This keeps the copy honest, strengthens a partner product instead of fighting it, and avoids the weak middle ground where the page sounds either generic (too far from the real product truth) or derivative (too close to the vendor's own lines).

**Status:** Complete

---

## 2026-04-13 — Survey-platform differentiation is workflow control plus live quality control, not just authoring

**Origin:** Desmond chain-of-thought notes on what actually makes the survey tool valuable.

The survey platform should not be positioned like a standard authoring tool with a few convenience features attached. The stronger story is:

- building and launch setup are easier without heavy scripting
- sample buying, live respondent review, and quota management sit in the same workflow
- quality checks are built into both design and fieldwork rather than added as a separate expert-only phase
- the team can combine quant structure with targeted qualitative probing when needed
- reducing programming friction raises the team's focus up the stack, so more effort goes into research quality rather than technical survey stitching

The most important concrete truths to preserve in copy are:

- direct integration with Trust Centre for live review and respondent removal while fieldwork is happening
- stronger open-end evaluation for gibberish, irrelevance, and banned mentions
- question-level and logic-aware flag conditions that bring suspicious answers forward without forcing an automatic disqualify
- cleaned quotas updating after respondent removals so quality control does not break fieldwork operations
- launch-readiness reminders before a study goes live
- no-code setup for routing, quota structures, distribution, and translation
- native PureSpectrum integration for sample purchase inside the platform
- Conversational Questions inside quant studies [formerly "AI Chat"], where qualitative responses stay linked to the quant data for later filtering
- a testing workflow designed to keep the logic needed for question validation in one place, reducing back-and-forth and the number of full survey passes needed before launch

Features worth remembering but not leading with publicly right now:

- collaborative survey notepad, because adoption depends on changing existing document habits
- OMNI builder stakeholder-intake workflow, because it is less relevant to the current enterprise website story
- improved test mode that exposes referenced logic on one screen, because the public copy should stay careful until the exact implementation is launch-ready
- one-click programming from a reviewed questionnaire into the survey platform, because it is a future-state promise
- additional sample suppliers beyond PureSpectrum, because the current live integration story is simpler and more defensible

**Why it matters:** The product page should make the value legible to a buyer who cares about launch speed and data quality, not just to someone comparing feature checklists.

**Status:** Complete

---

## 2026-04-13 — Commercial taxonomy should mirror how buyers buy

**Origin:** Website naming and IA session

The site taxonomy should separate:

- **Research tools** - the products teams buy and use directly
- **Trust laye