Supporting evidence: Product and SKU readiness

Survey design readiness

Evidence that survey design and pre-launch confidence capabilities are close to launchable rather than merely aspirational.

Repo path

portfolio/quantitative-research-platform/survey-design.md

Notes

No additional notes recorded.

Raw source preview

Raw, unprocessed file text shown below.

---
type: core-workflow
product: Quantitative Research Platform
readiness: Almost There
---

# Survey Design

Enable researchers to design surveys with confidence that the right questions are being asked and everything is ready to launch — without time-consuming manual validation or reliance on specialist gatekeepers.

---

## Advanced Survey Authoring
**Readiness:** Almost There

Surveys can be authored exactly as designed without reliance on third-party research platforms.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Question Authoring Interface | Allow researchers to create and update questions efficiently without formatting friction | There | Done |
| Multi-Language Support | Allow the same survey to run across regions without duplicating work *(spec below)* | Almost There | Now |
| Automatic Translations | Allow surveys to be translated quickly without manual localisation effort | Not Started | Later |
| Automated AI Probing | Allow researchers to build qualitative discussion directly into quantitative surveys | There | Done |
| IF/Then/Else Piping | Display text based on conditional logic | Not Started | Later |
| Autopopulate Data | Autopopulate data and skip the question when only one option is selectable | There | Now |
| Survey Redirect Question Type | — | Almost There | Now |
| Discrete Choice Model Question Type | — | Not Started | Later |
| Quality Check / Validation Question Type | — *(spec below)* | Not Started | Now |
| Media Evaluation Question Type | — | Not Started | Later |
| Text Highlighter Question Type | — | Not Started | Later |

**Feature Families:** Conditional Logic Configuration (IF/Then/Else, Autopopulate) · Quantitative Question Library (specialist question types)

#### Question Authoring Interface

#### Multi-Language Support

**Readiness:** Almost There | **Priority:** Now | **Type:** Feature

Allow the same survey to run across regions without duplicating work.

**Problem:** Multi-market surveys require translating content into multiple languages (typically 7–8, sometimes more). Today this relies on external translation teams and spreadsheet-based workflows — long back-and-forth cycles, fragmented files, and high coordination overhead. Confirming translations are present, correctly placed, and grammatically correct in context is time-consuming and risky, especially with piping logic where inserted values can change grammar or conjugation.

**Target users:** Internal PMs implementing and validating translations (primary); translators reviewing/editing content (secondary).

**Intended outcomes:**
- Import translations incrementally as they arrive, without blocking other languages
- Quickly verify completeness and correct mapping
- Review translations in context without repeatedly running full survey tests
- Validate piped text scenarios without jumping between questions or test runs
- Confidently catch issues before launch

**Key capabilities:**

*Translation Input & Management*
- Export survey content for translation (Excel format)
- Import multi-language or single-language files
- Support incremental imports (only provided languages updated; see the sketch after this list)
- Auto-translate via AI (V2)
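
A minimal sketch of the incremental-import behaviour, assuming translations are keyed by item and language; `TranslationStore`, `ImportRow`, and `importTranslations` are illustrative names, not the platform's API.

```typescript
// Illustrative only: every name and shape here is an assumption.
type LanguageCode = string;            // e.g. "de", "fr"
type ItemId = string;                  // stable id for a translatable survey item

// Translations keyed by survey item, then by language.
type TranslationStore = Map<ItemId, Map<LanguageCode, string>>;

// Rows parsed from an imported Excel file; languages absent from the file
// simply do not appear in `translations`.
interface ImportRow {
  itemId: ItemId;
  translations: Record<LanguageCode, string>;
}

// Merge an import into the store: only languages present in the file are
// touched, so a single-language file never disturbs the other languages.
function importTranslations(store: TranslationStore, rows: ImportRow[]): void {
  for (const row of rows) {
    const existing = store.get(row.itemId) ?? new Map<LanguageCode, string>();
    for (const [lang, text] of Object.entries(row.translations)) {
      existing.set(lang, text);        // update or add just this language
    }
    store.set(row.itemId, existing);
  }
}
```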

*QA & Review Experience*
- Question-level navigation toolbar (jump to Q#)
- Side-by-side view: English + selected language on left, contextual preview on right
- Clicking a translation item scrolls preview to relevant question and highlights the text
- Preview supports static question context, language toggle, piped-text simulation via dropdown (sketched after this list)
- No need to simulate full survey logic flows in V1
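
A minimal sketch of piped-text simulation in the preview. A hypothetical `{{token}}` placeholder syntax is assumed here; the platform's real piping syntax is not specified in this document.

```typescript
// Illustrative sketch: the {{token}} placeholder syntax is an assumption.
function renderPipedPreview(
  translatedText: string,
  pipedValues: Record<string, string>,  // e.g. { Q3: "Berlin" }, chosen from the dropdown
): string {
  return translatedText.replace(/\{\{(\w+)\}\}/g, (match: string, token: string) =>
    pipedValues[token] ?? match,        // leave unknown tokens visible so QA can spot them
  );
}

// Toggling the dropdown re-renders the preview, so grammar or conjugation
// problems around the inserted value show up in context.
renderPipedPreview("Wie gefällt Ihnen {{Q3}}?", { Q3: "Berlin" });
```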

*Change Management*
- When English content changes: mark affected translations as outdated, block publishing for impacted languages (see the sketch after this list)
- Warn authors when editing content that already has translations
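
A minimal sketch of the outdated-marking and publish-blocking rules; `TranslatedItem`, `onSourceEdited`, and `canPublish` are assumed names, and the real data model will differ.

```typescript
// Illustrative sketch of the outdated-translation rule; names are assumptions.
interface TranslatedItem {
  sourceText: string;                               // current English content
  translations: Map<string, { text: string; outdated: boolean }>;  // keyed by language
}

// Editing the English content marks every existing translation of that item
// as outdated, which in turn blocks publishing for the affected languages.
function onSourceEdited(item: TranslatedItem, newSourceText: string): void {
  if (newSourceText === item.sourceText) return;    // no real change, nothing to mark
  item.sourceText = newSourceText;
  for (const t of item.translations.values()) t.outdated = true;
}

// Publish guard: a language is publishable only when every item has a
// current (non-outdated) translation for it.
function canPublish(items: TranslatedItem[], lang: string): boolean {
  return items.every((item) => {
    const t = item.translations.get(lang);
    return t !== undefined && !t.outdated;
  });
}
```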

*Roles & Permissions*
- Dedicated Translator role (edit translations only)
- Edit access: project leads, project builders, translators, admins
- Other roles: comment/suggest only (see the sketch after this list)
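
The access rules above reduce to a simple role map for translation content. Role identifiers here are assumptions made for illustration.

```typescript
// Illustrative access map for translation content; the shape is an assumption.
type TranslationAccess = "edit" | "commentOnly";

const accessByRole: Record<string, TranslationAccess> = {
  projectLead: "edit",
  projectBuilder: "edit",
  translator: "edit",      // dedicated Translator role: edits translations only
  admin: "edit",
};

// Every role not listed above falls back to comment/suggest.
function translationAccessFor(role: string): TranslationAccess {
  return accessByRole[role] ?? "commentOnly";
}
```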

**Out of scope V1:** AI-powered translation; full logic-path simulation; translation of non-survey content.

**Success bar:** An internal PM can implement and QA one language with significantly less cognitive load and fewer manual workarounds than on comparable survey platforms, without needing to repeatedly run live survey tests.

#### Automatic Translations

#### Automated AI Probing

#### IF/Then/Else Piping

#### Autopopulate Data

#### Survey Redirect Question Type

#### Discrete Choice Model Question Type

#### Quality Check Flagging — V1 Spec

**Readiness:** Not Started | **Priority:** Now | **Type:** Feature (within the Quantitative Question Library feature family)

Quality-check flagging will be implemented as a **question-level setting**, not a standalone question type. Researchers configure flagging rules directly on supported questions. When a rule is violated, the respondent is **flagged, not terminated** — they always finish the survey. Flags are stored for downstream review.

**Core rule model:**
> **If** [previous-question condition is true] **Then** [current-question answer must/must not satisfy rule] **Otherwise** [flag respondent]

This means V1 covers **conditional consistency checks** only, not standalone validation with no dependency on a prior answer.
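
One way the core rule model could be encoded, shown as a minimal TypeScript sketch; every identifier and shape here is an assumption for illustration, not the platform's schema.

```typescript
// Illustrative encoding of the rule model; all names and shapes are assumptions.
type Evaluator =
  | "is" | "isNot"
  | "greaterThan" | "greaterThanOrEqual"
  | "lessThan" | "lessThanOrEqual";

// THEN-part: the constraint the current answer must satisfy.
type ValidationRule =
  | { kind: "mustSelect"; optionId: string }
  | { kind: "mustNotSelect"; optionId: string }
  | { kind: "mustSelectOneOf"; optionIds: string[] }
  | { kind: "mustNotSelectOneOf"; optionIds: string[] };

interface QualityCheckRule {
  // IF: a condition on a previous question's answer, compared against either
  // a fixed configured value or another previous answer.
  condition: {
    questionId: string;
    evaluator: Evaluator;
    operand:
      | { kind: "fixedValue"; value: string | number }
      | { kind: "previousAnswer"; questionId: string };
  };
  // THEN: validation applied to the current question's answer.
  validation: ValidationRule;
  // OTHERWISE is implicit: a failed validation flags the respondent;
  // there is no termination outcome in V1.
}
```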

**V1 supported question types:** Single choice · Multi-choice · Carousel · Multi-choice carousel · Drop-down

**Excluded from V1:** Open ended · Free text · AI chat · Carousel rating · Rating · File upload · Info page · NPS · Ranking · Rank sort · Heatmap

**Conditioning engine (required evaluators):** is · is not · greater than · greater than or equal to · less than · less than or equal to — evaluating against a fixed configured value or a previous answer.

**Validation evaluators:** must select X · must not select X · must select one of · must not select one of

**Runtime behaviour:** Condition evaluated → if met, current answer checked → if validation fails, respondent flagged → respondent continues survey → flag stored.
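
A minimal sketch of that runtime pass. The validation-rule shape echoes the illustrative types above; `runQualityCheck`, the flag shape, and the answer representation are all assumptions, not the platform's data model.

```typescript
// Illustrative runtime pass; names and shapes are assumptions.
type ValidationRule =
  | { kind: "mustSelect"; optionId: string }
  | { kind: "mustNotSelect"; optionId: string }
  | { kind: "mustSelectOneOf"; optionIds: string[] }
  | { kind: "mustNotSelectOneOf"; optionIds: string[] };

interface Flag { questionId: string; reason: string }

function validationPasses(rule: ValidationRule, selected: Set<string>): boolean {
  switch (rule.kind) {
    case "mustSelect":         return selected.has(rule.optionId);
    case "mustNotSelect":      return !selected.has(rule.optionId);
    case "mustSelectOneOf":    return rule.optionIds.some((id) => selected.has(id));
    case "mustNotSelectOneOf": return !rule.optionIds.some((id) => selected.has(id));
  }
}

// Condition evaluated -> answer checked -> flag -> respondent continues -> flag stored.
function runQualityCheck(
  conditionMet: boolean,               // result of the IF evaluation
  rule: ValidationRule,
  currentQuestionId: string,
  selected: Set<string>,               // current question's selected option ids
  flags: Flag[],
): void {
  if (!conditionMet) return;                       // condition not met: no check
  if (validationPasses(rule, selected)) return;    // answer consistent: no flag
  flags.push({                                     // flag, never terminate
    questionId: currentQuestionId,
    reason: `quality check failed: ${rule.kind}`,
  });
  // The respondent continues the survey; the stored flag surfaces downstream
  // in Security Centre / respondent review.
}
```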

**Downstream:** Platform stores that respondent was flagged + enough detail to understand why. Surfaces in Security Centre / respondent review. V1 does NOT include automatic exclusion, weighted scoring, fraud classification, or complex aggregate logic.

**Builder UX requirements:**
- Dedicated Quality Check / Flagging section within supported question settings
- Clearly distinct from display logic, skip/routing logic, and termination logic
- Only expose on supported question types and valid evaluators for the question type
- UI makes clear: internal check only, does not terminate respondent, creates a review signal downstream

**Out of scope V1:** Termination as outcome · standalone validation question type · free-text semantic validation · AI-evaluated inconsistency checks · respondent-facing messaging that they've been flagged.

**Follow-on tickets should reference this spec for:** builder UX, conditioning engine evaluator expansion, rule builder implementation, runtime rule evaluation, respondent flag storage/data model, downstream reporting/Security Centre visibility, QA scenario design.

#### Media Evaluation Question Type

#### Text Highlighter Question Type

---

## Collaborative Survey Design
**Readiness:** There

Enable teams to align on survey structure, logic, and rationale without fragmented tools or miscommunication.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Question Structure Capture | Allow teams to agree on question flow without losing structural intent | There | Done |
| Programming Logic Capture | Allow skip and branch logic to be captured collaboratively without misinterpretation | There | Done |
| Design Notes Capture | Preserve design rationale so survey intent isn't lost over time or handoffs | There | Done |
| Share Collaborative Environment | Allow teams to share collaboration environments with external stakeholders without losing control or context | There | Done |

#### Question Structure Capture

#### Programming Logic Capture

#### Design Notes Capture

#### Share Collaborative Environment

---

## AI Assisted Questionnaire Design & Review
**Readiness:** Not There

Help researchers move from idea to credible first draft without starting from a blank page.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Brief to Questionnaire Drafting | Speed up first-draft creation without manually translating briefs into questions | Not There | Later |
| Best Practice Standards Analysis | Help researchers validate questionnaire quality without relying on manual reviews or expert checks | Not There | Next |
| Uploaded Questionnaire Review | Reduce setup time by reformatting existing questionnaires into structured documents automatically | Not There | Next |
| AI Assisted Survey Flow Review | Validate participant experience without running repeated live tests | Not Started | Later |
| Survey Testing Navigation Toolbar | Allow rapid validation of complex logic paths without navigating surveys manually | Not There | Now |

#### Brief to Questionnaire Drafting

#### Best Practice Standards Analysis

#### Uploaded Questionnaire Review

#### AI Assisted Survey Flow Review

#### Survey Testing Navigation Toolbar

---

## Survey QA & Completeness Checks
**Readiness:** Not Started

Catch survey design issues early so errors don't surface during fieldwork or analysis.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Automated Survey Testing | Populate surveys with test data to validate survey logic and surface data issues before launch, without exhaustive manual testing | Not Started | Later |

#### Automated Survey Testing