Appendix brief: Market and competitive evidence

Research Guard questionnaire review competitive analysis

A dated deep-dive benchmarking Research Guard's questionnaire-review workflow against Qualtrics ExpertReview, Conjointly, and adjacent competitors, clarifying where the product truly leads and where incumbents still have maturity advantages.

Repo path

competitive/research-guard-questionnaire-review-competitive-analysis-2026-04-18.md

Notes

No additional notes recorded.

Raw source preview

Raw, unprocessed file text shown below. Preview truncated for readability.

---
title: "Research Guard questionnaire review competitive analysis"
source: internal-research
created: "2026-04-18"
status: active
tags: [competitive, research-guard, questionnaire-review, qualtrics, internal]
---

# Competitive Analysis: Questionnaire Review Workflow vs Current Market Features

Date checked: 2026-04-18

## Scope

This note compares the questionnaire review workflow defined in this repo against external products that explicitly review, score, or guide questionnaire logic and design.

Important caveat:

- This repo is "the working space for designing, testing, and maintaining the artefacts" and "is not the production workflow itself" (`00_foundation/questionnaire_workflow_project_index_v1.md`).
- So this comparison separates:
  - what the repo demonstrably specifies today
  - what the repo implies as a product proposition if operationalized
  - what vendors currently claim in their published product/help pages

## Executive Summary

- The strongest external benchmark is Qualtrics overall, because it combines build-time survey review, post-collection response-quality controls, and a documented enterprise security posture.
- The closest direct design-review competitor is Conjointly Script review, because it explicitly checks wording, scales, order, screening, response bias, and other methodological issues before launch.
- SurveyMonkey Genius is a meaningful but lighter competitor for questionnaire-review governance. However, SurveyMonkey's broader AI surface (default sentiment analysis, response quality scrubbing, thematic analysis) is expanding faster than the questionnaire-review comparison alone suggests.
- Toluna Start is a meaningful competitor where sample quality and launch checks matter, but the "Quality Center" label used in earlier versions of this analysis was not confirmed on current Toluna pages. What Toluna actually advertises: QProbe (AI probing for insufficient answers during fieldwork) and embedded AI fraud detection — still launch-quality focused, not redesign governance.
- QuestionPro, Alchemer, and Medallia are more adjacent than direct. Alchemer's AI investment has concentrated in Alchemer Pulse (a post-collection VoC/open-text synthesis platform launched late 2025), not in pre-launch design review — so the "adjacent" classification is still correct, but now because of post-collection analytics rather than creation-time assistance.
- Forsta (formerly Confirmit + FocusVision, merged) competes for the same enterprise research OS positioning and should be tracked. It positions AI agents across setup, analysis, and reporting. It is not a direct questionnaire-review competitor, but it is relevant context for any full-platform conversation.
- The repo's biggest competitive strength is not just "finding issues." It is the staged governance model: early review, follow-up, canonical review, structured packaging, revision, final QA, provenance, and frozen run artefacts.
- The repo's biggest competitive weakness is productization: no visible production UI, no documented enterprise security/compliance controls, and no vendor-grade post-field response-fraud/data-quality layer.
- **Naming note:** Qualtrics support URLs now reference the umbrella term "QualityIQ," with ExpertReview appearing as a component or prior name. This analysis uses "ExpertReview" throughout, which remains accurate per Qualtrics' own documentation, but the product may be migrating to the QualityIQ label.

## What The Current Workflow Actually Offers

The current workflow is materially more than a survey checker.

| Current workflow capability | Repo evidence | Competitive significance |
|---|---|---|
| Run-first frozen execution bundles | `README.md`, `WORKFLOW_STATUS.md`, `run_management/initialize_run_bundle.py` | Strong auditability and reproducibility; unusual in survey-builder competitors |
| Separate early and canonical review passes | `25_best_practice_review_early/`, `55_best_practice_review_canonical/`, `99_review/best_practice_review_integration_handoff.md` | Differentiates between provisional early critique and target-aware final review |
| Review authority pack with rubric + handbook + provenance registry | `00_foundation/Questionnaire_Best_Practice_Review_Rubric_v1.md`, `00_foundation/Questionnaire_Best_Practice_Review_Handbook_v1_Compact.md`, `99_review/best_practice_claim_registry_v1.md` | Stronger methodological traceability than most in-product AI review features |
| Screening alignment checks | `25_best_practice_review_early/Questionnaire_Best_Practice_Review_Output_Schema_v1_Compact.md` | Explicitly checks audience criteria against screener, routing, quota, and termination logic |
| Ordered issue map across stages 25 / 45 / 40 | same schema | Creates an ordered live conversation spine, not just a flat list of warnings |
| One-by-one design-tradeoff follow-up | `45_best_practice_followup/`, integration handoff | Supports co-design and approvals rather than silent auto-rewrite |
| Design previews before approval | review schema, clarification schema | Strong user-trust feature; rare in competitor docs reviewed |
| Package candidates and structured change packaging | review schema + `60_change_packaging/` | Converts review into governable edits instead of leaving findings as prose |
| Revision receipt + final QA gate | `70_revision/`, `80_final_qa/` | Closes the loop after review; most competitors stop at suggestions |
| Stateless JSON persistence and provenance | project index, integration handoff, revision receipt schema | Good fit for API orchestration and audit-heavy workflows |
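The "stateless JSON persistence and provenance" row above implies a serializable run-bundle record. As a hedged illustration only — the field names below are assumptions for this sketch, not the repo's actual schema — such a record might look like:

```python
# Hypothetical sketch of a frozen run-bundle manifest illustrating the
# "stateless JSON persistence and provenance" capability described above.
# All field names are illustrative assumptions, not the repo's schema.
import json

run_bundle = {
    "run_id": "run-2026-04-18-001",
    "frozen": True,  # a frozen bundle is immutable once a run starts
    "stages": [
        "25_best_practice_review_early",
        "45_best_practice_followup",
        "55_best_practice_review_canonical",
        "60_change_packaging",
        "70_revision",
        "80_final_qa",
    ],
    "provenance": {
        "rubric": "Questionnaire_Best_Practice_Review_Rubric_v1.md",
        "claim_registry": "best_practice_claim_registry_v1.md",
    },
}

# Stateless persistence: the entire state round-trips through plain JSON,
# which is what makes API orchestration and audit trails straightforward.
serialized = json.dumps(run_bundle, indent=2)
restored = json.loads(serialized)
assert restored == run_bundle
```

Because the whole record survives a JSON round trip with no in-process state, any orchestrator that can read and write files can resume or audit a run.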

## Why The Repo Proposition Is Different

The workflow is optimized around a research-operations problem that most survey platforms do not center:

- recover a questionnaire from source material
- review it against a methodology authority pack
- ask only the live questions that matter
- convert approved findings into structured changes
- regenerate a revised canonical representation
- verify that the approved changes were actually applied

That makes the proposition closer to "questionnaire review and governed redesign system" than to "survey builder with a checker."
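The staged loop above can be sketched as an ordered gate: each step may only run after the previous one completes, which is what distinguishes governed redesign from a flat list of suggestions. Stage names and the gate rule below are illustrative assumptions, not the production workflow:

```python
# Illustrative-only sketch of the governed redesign loop described above:
# recover -> review -> follow-up -> package -> revise -> verify.
from dataclasses import dataclass, field

STAGES = ["recover", "review", "follow_up", "package", "revise", "verify"]

@dataclass
class GovernedRedesign:
    completed: list = field(default_factory=list)

    def advance(self, stage: str) -> None:
        # Gate rule: a stage may only run when all earlier stages are done,
        # so a finding cannot skip straight from review to revision.
        expected = STAGES[len(self.completed)]
        if stage != expected:
            raise ValueError(f"expected {expected!r}, got {stage!r}")
        self.completed.append(stage)

run = GovernedRedesign()
for s in STAGES:
    run.advance(s)
assert run.completed == STAGES
```

The design choice this models is that review output is an input to later gated stages, not a terminal artefact, which is the repo's main structural difference from in-builder checkers.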

## Feature-By-Feature Competitive Readout

| Workflow feature / proposition | Closest external analogue | Competitive read |
|---|---|---|
| Early review from fast scaffold plus intake | Qualtrics ExpertReview; SurveyMonkey Genius | Repo is stronger on strategic, pre-canonical review framing; vendors are stronger on direct builder integration and immediacy |
| Canonical target-aware review after reconciliation / clarification | No clear like-for-like competitor found in reviewed docs | Likely distinctive |
| Screening alignment checks against visible screener/routing/termination | Conjointly Script review partly overlaps via screening recommendations | Repo is more explicit and machine-structured |
| Ordered issue map preserving questionnaire order across live review | No clear equivalent found | Distinctive advantage |
| One-question-at-a-time design follow-up | Conjointly AI Helper chat is the nearest analogue | Repo is stronger on governed approvals and downstream linkage |
| Design preview before approval | No clear equivalent found in reviewed docs | Distinctive advantage |
| Package candidates linked to later change packaging | No clear equivalent found | Distinctive advantage |
| Revision + final QA after review | Qualtrics Response Quality is adjacent, but it is post-collection data cleaning, not pre-launch redesign QA | Repo addresses a different, earlier control point |
| Provenance-tagged authority pack | Qualtrics and SurveyMonkey cite research-backed guidance, but not with repo-visible claim registry / coverage matrix style provenance | Repo is stronger on internal methodological auditability |
| Response fraud / duplicate / bot / speeder handling | Qualtrics Response Quality | Major gap for the repo today |
| Enterprise security compliance and admin controls | Qualtrics Security | Major gap for the repo today |

## Competitor Comparison

| Competitor / feature | How direct a competitor it is | What it does well | Where the repo workflow is stronger | Where the competitor is stronger |
|---|---|---|---|---|
| Qualtrics ExpertReview (QualityIQ) | Very direct | Digital survey reviewer inside the builder; checks methodology, survey errors, compliance, sensitive-data requests; predicts survey quality | Multi-stage governance, follow-up, canonical targeting, packaging, revision, provenance | Product maturity, immediate in-builder feedback, broad project support, no workflow assembly required |
| Qualtrics Response Quality | Complementary rather than like-for-like | Flags speeders, bots, duplicates, completion-rate issues, sensitive data; supports filtering low-quality responses | Repo reviews the questionnaire before launch and links critique to redesign | Strong post-field data-quality and fraud controls that the repo currently lacks |
| SurveyMonkey Genius | Direct but lighter on questionnaire-review governance; broader AI surface expanding fast | ML scoring, predicted completion rate, question-linked recommendations; sentiment analysis and response quality scrubbing now enabled by default; thematic analysis now generates natural-language paragraph summaries | Far richer workflow, structured governance, target-aware packaging, redesign pipeline | Easier to use, native builder integration, immediate score and predictions; AI breadth is growing rapidly beyond questionnaire review |
| Conjointly Script review | Very direct | LLM-based script review for wording, scale consistency, missing answers, order, leading bias, screening, data quality | Structured downstream governance, canonical review, packaging, QA loop | Highly similar design-review surface, fast conversational review in-product, low-friction access |
| Toluna Start | Direct but narrower; "Quality Center" label not confirmed on current pages | Launch-readiness guidance, QProbe AI probing for insufficient answers, embedded AI fraud detection, panel-quality framing | Stronger methodology provenance and redesign pipeline | Better integrated with sample and field-quality operations |
| Forsta (formerly Confirmit + FocusVision) | Adjacent — competes on enterprise research OS positioning | AI agents across setup, analysis, and reporting; strong enterprise heritage in quant and qual | Pre-launch questionnaire review depth, governing redesign workflow, screening alignment | Deeper enterprise integration, longer track record with large research operations |
| QuestionPro advanced branching / logic features | Adjacent | Strong survey logic authoring and testing | Explicit design-review system, screening alignment, methodological governance | Better native authoring and logic execution environment |
| Alchemer Pulse / Alchemer AI | Adjacent — AI investment has pivoted to post-collection VoC | Alchemer Pulse (launched Dec 2025): AI-powered open-text synthesis, theme classification, sentiment across large feedback volumes; post-field insights automation | Pre-launch questionnaire design review, governed redesign workflow, screening alignment | Strong post-field open-text analysis and VoC automation at volume |
| Medallia survey design best practices | Adjacent | Templated best practices for CX/digital surveys | General questionnaire review depth and change governance | Stronger operational CX deployment context |

## Qualtrics Deep Dive

### 1. ExpertReview vs this workflow

Qualtrics ExpertReview is the strongest direct benchmark found in the review:

- It is described as a digital reviewer for surveys.
- It measures the quality of survey elements including questions, logic, and quotas.
- It recommends improvements and gives research-based explanations.
- It predicts the quality of the data likely to be collected.
- It checks methodology, survey errors, accessibility/compliance, and sensitive-data requests.

Where Qualtrics is better today:

- Native in-builder review while au