Repo path
portfolio/quantitative-research-platform/survey-deployment.md
Evidence that deployment, panel/sample operations, and in-field quality controls are already substantially specified.
Notes
No additional notes recorded.
Mapped workstreams
Mapped appendices
Raw source preview
Raw, unprocessed file text shown below. Preview truncated for readability.
---
type: core-workflow
product: Quantitative Research Platform
readiness: Almost There
---

# Survey Deployment

Enable teams to run surveys in the field with confidence that responses are genuine and data quality is being reviewed in real time, while the study is still live and before bad data becomes a reporting problem.

---

## Real Time Quality Assurance

**Readiness:** Almost There

Monitor respondent quality in real time, surface suspicious cases for live review, and act on clear problems before they pollute the study.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Trust Centre | Give researchers a live respondent-quality workflow with clear signals and reviewable decisions while fieldwork is still running | Almost There | - |

**Feature Families:** Real / Unique / Engaged (38+ metrics across three dimensions)

#### RTQA Risk Framework

Quality assurance is based on three risk pillars: **Real / Unique / Engaged**. Every flag maps to one of these pillars. Within each pillar, rules determine when a respondent is flagged as "not real", "not unique", or "not engaged", based on combinations of individual flags. Each flag has an internal severity: **Hard / Strong / Moderate / Weak**.

There are two layers of risk assessment:

1. Individual pillar-level risk (Real, Unique, Engaged)
2. Overall respondent-level risk (rolls up from all three pillars)

**Detailed rule docs:**

- [Global rules](trust-centre-risk-framework/global-rules.md)
- [Real rules](trust-centre-risk-framework/real-rules.md)
- [Unique rules](trust-centre-risk-framework/unique-rules.md)
- [Engaged rules](trust-centre-risk-framework/engaged-rules.md)

**Respondent-level risk definitions:**

| Risk Level | Conditions |
|---|---|
| Critical | Not Real + Not Unique + Not Engaged, OR Not Real + Not Unique, OR Hard fail on Real |
| High | Fail any 2 of Real/Unique/Engaged, OR fail 1 + flagged on another |
| Medium | Fail 1 of Real/Unique/Engaged, OR flagged (not failed) on 2+ pillars |
| Low | Flagged (not failed) on any pillar |

#### Real - Flag Definitions

**FLAG "NOT REAL" IF:** (1) any single Hard flag triggered, (2) 2+ Strong flags triggered, or (3) 1 Strong + 2 Moderate flags triggered.

| Flag | Severity |
|---|---|
| Emulator Detected | Hard |
| Bot Detection (BotD) Flag | Hard |
| TOR Network Detected | Hard |
| FIRDA Flag | Hard |
| Virtual Machine Detected | Hard |
| Location Spoofing Detected | Hard |
| Remote Control Software Detected | Hard |
| Man-in-the-Middle (MITM) Attack Suspected | Hard |
| Failed Bot Check | Hard |
| VPN Detected | Strong |
| Proxy Detected | Strong |
| Cloud Hosting Detected | Strong |
| Illegal Mobile Number | Strong |
| No Device Fingerprint | Strong |
| Phone Not Validated | Moderate |
| No Device Data | Moderate |
| Incognito Mode Detected | Moderate |
| Developer Tools Detected | Moderate |
| Non-Irish IP Address | Moderate |
| Ad Blocker Detected | Moderate |
| No Bot Check Performed | Weak |

#### Unique - Flag Definitions

**FLAG "NOT UNIQUE" IF:** (1) any Hard flag triggered, or (2) 2 Strong unique signals are triggered together, for example Duplicate IP Address + Matching Name and Date of Birth.
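The flag-to-pillar and pillar-to-respondent rollup rules above can be sketched as a small decision function. This is an illustrative simplification, not the platform's implementation: it applies the Real pillar's severity combinations (any Hard, 2+ Strong, or 1 Strong + 2 Moderate) to every pillar, whereas the Unique and Engaged pillars document narrower trigger conditions, and it treats any triggered flag as "flagged".

```python
def pillar_fails(severities: list[str]) -> bool:
    """Return True if the triggered flag severities fail a pillar
    ('not real' / 'not unique' / 'not engaged'), using the Real
    pillar's combination rule as a stand-in for all three."""
    hard = severities.count("Hard")
    strong = severities.count("Strong")
    moderate = severities.count("Moderate")
    return hard >= 1 or strong >= 2 or (strong >= 1 and moderate >= 2)


def respondent_risk(real: list[str], unique: list[str], engaged: list[str]) -> str:
    """Roll pillar outcomes up to the overall risk level, following the
    respondent-level risk table (Critical / High / Medium / Low)."""
    fails = [pillar_fails(real), pillar_fails(unique), pillar_fails(engaged)]
    flagged = [bool(real), bool(unique), bool(engaged)]  # any flag triggered at all
    n_fail = sum(fails)
    # Pillars that are flagged but did not fail outright
    n_flagged_only = sum(1 for fl, fa in zip(flagged, fails) if fl and not fa)

    real_fail, unique_fail, _ = fails
    if (real_fail and unique_fail) or "Hard" in real:
        return "Critical"  # Not Real + Not Unique, or Hard fail on Real
    if n_fail >= 2 or (n_fail == 1 and n_flagged_only >= 1):
        return "High"      # fail 2 pillars, or fail 1 + flagged on another
    if n_fail == 1 or n_flagged_only >= 2:
        return "Medium"    # fail 1 pillar, or flagged (not failed) on 2+
    if n_flagged_only >= 1:
        return "Low"       # flagged (not failed) on any pillar
    return "None"
```

A respondent with one Hard Real flag (e.g. Emulator Detected) is Critical immediately, while a lone Weak flag only reaches Low.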
| Flag | Severity |
|---|---|
| Duplicate IP Address | Strong |
| Mobile Number Already Used | Hard |
| Duplicate Device Fingerprint | Hard |
| Duplicate Email Address | Hard |
| Matching Name and Date of Birth | Strong |

`Duplicate ID` is not a standalone Trust Centre signal. Higher-confidence duplicate decisions come from combinations of the tracked signals above.

#### Engaged - Flag Definitions

**FLAG "NOT ENGAGED" IF:** (1) any Hard flag triggered, or (2) 2+ attention categories are flagged.

| Flag | Attention Category | Severity |
|---|---|---|
| Speeding | Speeding | Moderate |
| Section-level speeding | Speeding | Moderate |
| Straight Lining | Carousel Attention | Moderate |
| Repeating patterning | Carousel Attention | Moderate |
| Carousel Attention Check | Carousel Attention | Strong |
| Failed attention checks | Attention | Hard |
| Contradictory Responses | Attention | Strong |
| Copied open ends | Invalid Open Ends | Hard |
| Gibberish | Invalid Open Ends | Hard |
| Consistently short open ends | Invalid Open Ends | Moderate |
| Irrelevant open ends | Invalid Open Ends | Strong |
| Stimulus under-exposure | Under Exposure | Weak |

#### Trust Centre

**Readiness:** Almost There | **Type:** Feature

Provides researchers with a live respondent-quality workflow across active studies, so teams can see what looks suspicious, review the hard cases, and keep a record of the decisions made while fieldwork is still running.

**Problem:** Most teams discover data-quality issues too late, rely on blunt heuristics, or clean the sample after the damage is already in the dataset. That creates uncertainty, re-fielding risk, and slow defensive work later.

**Target users:** Researchers and fieldwork leads (primary); ResearchOps and Admins (secondary).

**Intended outcomes:**

- Researchers can quickly answer: "What needs review right now, and why?"
- Teams can review flagged respondents while fieldwork is still live instead of waiting for a post-field cleanup pass
- Quality decisions stay attached to the project record, making later reporting easier to defend

**Key capabilities (v1):**

- Respondent overview grouped into Real, Unique, and Engaged signals
- Question-level and composite flags visible while fieldwork is live
- Review actions that let the team quarantine, accept, or remove respondents with a record of why the call was made
- Study-level pattern visibility so teams can spot hotspots and improve setup over time

**Non-negotiable principles:**

1. Review-first, not blind automation - clear-cut problems can move quickly, but the hard cases stay reviewable
2. Explainable signals over opaque scoring - the team should be able to see why a respondent was flagged
3. Actions should be safe and reversible - avoid broad "easy button" controls that could harm legitimate use
4. Designed for scale - no alert fatigue

**Explicitly not v1:** full Qualtrics-style post-field response-quality coverage or a black-box cleanup engine.

**Success bar:** Researchers use Trust Centre as the default during-field quality workflow, suspicious cases are handled before they become reporting arguments, and teams face fewer late cleanup surprises.

---

## Quota Definition

**Readiness:** Almost There

Prevent quota mistakes before fielding, not after data loss.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Quota Variable Configuration | Allow researchers to define quota dimensions without technical setup | There | Done |
| Quota Structure & Nesting | Allow complex quota structures to be configured without logical errors | There | Done |

#### Quota Variable Configuration

#### Quota Structure & Nesting

---

## Quota Enforcement

**Readiness:** There

Maintain sample integrity in real time without constant monitoring or manual fixes.
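Quota structure and nesting, and the closure-and-routing behaviour that enforces it, can be modelled as interlocked cells produced by crossing quota dimensions, with each cell tracking its own target. The `QuotaCell` shape, field names, and routing outcomes below are illustrative assumptions, not EmpathyIQ's actual schema.

```python
from dataclasses import dataclass
from itertools import product


@dataclass
class QuotaCell:
    """One cell in a (possibly nested) quota grid,
    e.g. region='Dublin' x age='18-24'."""
    conditions: dict[str, str]   # quota variable -> required answer
    target: int                  # completes wanted for this cell
    completes: int = 0           # completes accepted so far

    def matches(self, respondent: dict[str, str]) -> bool:
        return all(respondent.get(k) == v for k, v in self.conditions.items())

    @property
    def open(self) -> bool:
        return self.completes < self.target


def build_interlocked(dimensions: dict[str, list[str]], target_per_cell: int) -> list[QuotaCell]:
    """Expand nested dimensions (e.g. region x age) into interlocked cells."""
    keys = list(dimensions)
    return [QuotaCell(dict(zip(keys, combo)), target_per_cell)
            for combo in product(*dimensions.values())]


def route(respondent: dict[str, str], cells: list[QuotaCell]) -> str:
    """Quota closure & routing: accept into the first matching open cell,
    otherwise redirect as over-quota; no matching cell means screen-out."""
    for cell in cells:
        if cell.matches(respondent):
            if cell.open:
                cell.completes += 1
                return "accept"
            return "over_quota"
    return "screen_out"
```

Because routing is checked on every entry, a full cell automatically starts redirecting traffic the moment its target is met, which is the over-collection guarantee the enforcement features describe.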
| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Quota Closure & Routing Logic | Prevent over-collection by automatically redirecting traffic when quotas are met | There | Done |
| Quota Rules & Thresholds | Allow quota behaviour to be enforced consistently without manual intervention | There | Done |
| Real-Time Quota Monitoring | Allow researchers to see quota progress instantly without manual tracking | There | Done |

#### Quota Closure & Routing Logic

#### Quota Rules & Thresholds

#### Real-Time Quota Monitoring

---

## Deploy Survey to Specific Audiences

**Readiness:** Almost There

Help researchers choose viable audiences without late-stage feasibility surprises.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Third Party Audience Collection | Allow surveys to be deployed to external panels without custom integrations | There | - |
| In-house Sample Collection | Allow researchers to target internal panels without duplicating audience definitions | Almost There | Next |
| Anonymous Audience Collection | Allow surveys to be distributed broadly without managing individual participants | There | Now |
| Direct Third Party Sample Integration | Directly integrate third-party sample provider APIs without custom connector work | Almost There | Now |
| In-house Panel Sample Feasibility Indicators | Provide early visibility into sample feasibility before surveys are launched | Almost There | Next |

#### Third Party Audience Collection

#### In-house Sample Collection

#### Anonymous Audience Collection

#### Direct Third Party Sample Integration

#### In-house Panel Sample Feasibility Indicators

#### What is an Audience?

An **Audience** is a sample source for a study - where respondents come from. In EmpathyIQ, an audience can be any combination of:

- **External sample provider** - e.g. Toluna supplies online respondents who are pre-screened/targetable (age, region, etc.)
- **Direct list (email)** - e.g. a customer gives us 10,000 emails to invite directly
- **Owned panel** - e.g. people recruited into our own panel (managed profiles, incentives, frequency rules)

> Different sources behave differently (cost, targeting fidelity, fraud risk, UX obligations). We often blend audiences to hit both feasibility and quality.

**Key characteristics:**

**1. Surveys can use multiple audiences**

- Niche + GenPop: use a youth-specialist provider for an 18-24 boost + a broad GenPop provider for the remainder
- Panel + Top-up: owned panel covers 500 completes, external provider tops up to 2,000

**2. Quotas are needed per audience to prevent overruns and protect budget**

- Per-provider caps prevent a high-cost provider over-delivering and blowing the budget

**3. Survey skip logic can be audience-specific**

- Unknown direct list: route to extra screening questions to validate quality
- Panel members: skip questions already answered in previous studies

**4. Audience targets can change mid-field**

- If Audience A under-performs, reallocate targets to Audience B live - without redeploying the survey

**5. Analyse performance and results by audience to catch issues early**

- Slice data by Audience to spot supplier mis-targeting (e.g. region answers do not match requested targeting) - gives you grounds to dispute price increases

**6. Each audience has a source-specific UX**

- External providers: must redirect to their complete/terminate/over-quota/quality URLs
- Owned panel: on completion, show panel-branded page with points earned
- White-label for external traffic to avoid exposing panel brand

**Who sets up audiences:** Project Manager / Fieldwork Lead (guided by Research Lead). They pick sources, set per-audience quotas, targeting, pricing, and UX/redirects, then monitor daily and reallocate as needed.
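The source-specific end-of-survey UX in characteristic 6 amounts to a lookup from audience type and survey outcome to a destination. The audience-type names, outcome labels, and URL patterns below are placeholders for illustration, not real provider endpoints or EmpathyIQ's routing API.

```python
# Outcome -> external provider redirect URL template.
# Domain and query parameters are invented placeholders.
EXTERNAL_REDIRECTS = {
    "complete": "https://provider.example/complete?pid={pid}",
    "terminate": "https://provider.example/terminate?pid={pid}",
    "over_quota": "https://provider.example/overquota?pid={pid}",
    "quality_fail": "https://provider.example/quality?pid={pid}",
}


def end_of_survey(audience_type: str, outcome: str, pid: str) -> str:
    """Pick the end-of-survey destination for a respondent.

    External provider traffic must bounce back to the provider's
    status URLs; owned-panel completes see a panel-branded page;
    everything else gets a white-label generic page."""
    if audience_type == "external_provider":
        return EXTERNAL_REDIRECTS[outcome].format(pid=pid)
    if audience_type == "owned_panel" and outcome == "complete":
        return "/panel/thanks"   # panel-branded page showing points earned
    return "/thanks"             # white-label page, panel brand not exposed
```

Keeping the mapping per audience is what lets one survey serve provider traffic, panel members, and a direct list simultaneously without leaking panel branding to external respondents.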
**Daily reporting cadence per audience provider:** Completes to date / Completes remaining / Incidence Rate (IR) / Completion Rate (CR) & drop-off / LOI (median/mean)

---

## Runtime Data Capture

**Readiness:** There

Ensure every response carries the correct contextual data, even when information comes from external sources.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| URL Variable Capture | Allow surveys to adapt based on incoming context without manual data stitching | There | Done |

#### URL Variable Capture

---

## Real Time Performance Monitoring

**Readiness:** There

Give researchers immediate access to real-time monitoring of study progress.

| Feature | Outcome / Promise | Readiness | Priority |
|---|---|---|---|
| Progress Dashboard | Allow researchers to view survey progress including incidence performance, LOI, and completion rates at a glance | There | Done |
| Email Templates | Create a custom email template per deployment without rebuilding from scratch | Almost There | Next |
| Survey-Level Theming | - | Not There | Later |

#### Progress Dashboard

#### Email Templates

#### Survey-Level Theming

---
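The daily per-audience reporting cadence described earlier (completes to date/remaining, IR, CR, LOI) can be computed as a simple rollup over per-respondent records. The record shape and the IR/CR definitions used here (IR as the share of entrants who qualify; CR as the share of qualified respondents who complete, with drop-off as its complement) are common fieldwork conventions, stated as assumptions rather than the platform's exact formulas.

```python
from statistics import mean, median


def daily_audience_metrics(records: list[dict], target_completes: int) -> dict:
    """Roll up one audience provider's respondent records into the
    daily reporting metrics. Assumed record shape:
    {'status': 'complete'|'screen_out'|'drop_off', 'loi_minutes': float|None}."""
    completes = [r for r in records if r["status"] == "complete"]
    drop_offs = [r for r in records if r["status"] == "drop_off"]
    started = len(records)
    qualified = len(completes) + len(drop_offs)  # passed screening
    lois = [r["loi_minutes"] for r in completes if r.get("loi_minutes") is not None]
    return {
        "completes_to_date": len(completes),
        "completes_remaining": max(0, target_completes - len(completes)),
        # IR: entrants who qualified / entrants who started
        "incidence_rate": qualified / started if started else 0.0,
        # CR: qualified respondents who finished (drop-off = 1 - CR)
        "completion_rate": len(completes) / qualified if qualified else 0.0,
        "loi_median": median(lois) if lois else None,
        "loi_mean": mean(lois) if lois else None,
    }
```

Running this once per day per provider gives exactly the line the cadence calls for, and slicing the same records by audience supports the mis-targeting checks in characteristic 5.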