Measurement is the critical building block for improvement

Help create better benchmarks and evaluations for democratic processes.

We need to be able to measure, evaluate, and benchmark democratic processes. Understanding what is important to measure, finding ways to gather the relevant data, creating the best analysis tools, and comparing practices will lead to more effective and capable processes. Without measurement infrastructure, our rate of progress will be significantly slower.

This page includes the following sections:

What we can't do today

This section lists the goals for improving the measurability of deliberative processes.

Measurability research questions

We've outlined the research questions that we think will help improve our ability to measure and evaluate processes.

The research questions below are filtered to the Measurability dimension.

What consent, anonymization, and data governance protocols (comparing opt-in vs. opt-out, persistent vs. temporary storage, restricted vs. open licensing) enable practitioners to balance participant privacy and autonomy against the research value of maintaining rich deliberative records?

Urgent

How do downstream effects from participation systematically vary across different deliberative process formats (comparing citizens' assemblies, deliberative polls, mini-publics, and online forums), and what process features predict effect heterogeneity?

Urgent

What particular knock-on effects from participation (spanning civic engagement, political efficacy, discussion spillover, network influence, or policy awareness) are most important to measure, and what longitudinal methods best capture them without excessive participant burden?

Urgent

What observable deliberative quality dimensions (such as turn-taking equity, argument depth, perspective inclusion, or respectfulness) can be reliably measured through automated content analysis or human observation in real time, and what does measurement reveal about facilitator behavior changes?

Urgent
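As a minimal sketch of what "reliably measured in real time" could mean for turn-taking equity: given a running log of speaking turns per participant, a Gini coefficient gives a single live-updatable summary (0 = perfectly equal participation, approaching 1 = fully dominated). The participant IDs and counts below are illustrative, not from any real session.

```python
# Hypothetical sketch: quantifying turn-taking equity from a turn log.
# "turns" maps each participant ID to their number of speaking turns; the
# Gini coefficient is one candidate automated, real-time equity measure.

def gini(counts):
    """Gini coefficient of an iterable of non-negative turn counts."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

turns = {"p1": 12, "p2": 11, "p3": 10, "p4": 1}  # one quiet participant
print(round(gini(turns.values()), 3))  # prints 0.25
```

A facilitator dashboard could recompute this after every turn and flag sessions drifting toward domination, which is one way to observe facilitator behavior changes in response to measurement.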

What measurement approaches (comparing explicit belief statements, semantic mapping, implicit preference tasks, or network analysis of argument adoption) best capture individual and group learning and preference shifts while remaining feasible to administer at deliberation intervals?

Urgent

How do different methods for measuring preference transformation (pre/post surveys, in-process journaling, exit interviews, or network tracking) correlate with one another and with long-term behavioral change, under different deliberative process formats?

Urgent

What recording modalities (comparing video, audio-only, spatial tracking, or multimodal combinations) most reliably preserve the substance of deliberation while remaining minimally intrusive and respectful of participant discomfort?

Urgent

Which transcription and annotation approaches (comparing human verbatim, human semantic, hybrid human-AI, or AI-only) best handle cross-talk, non-verbal communication, and emotional valence while maintaining accuracy standards?

Urgent

Which evaluation metrics (comparing single-dimension vs. composite indices) are sensitive enough to detect quality differences within similar processes but robust enough for valid comparison across different topics, geographies, and participant populations?

Urgent
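To make the single-dimension vs. composite trade-off concrete, here is one way a composite index could be assembled: min-max normalize each raw dimension score against its scale bounds, then take a weighted average. The dimension names, scale bounds, and weights are all illustrative assumptions, not a validated instrument.

```python
# Hypothetical sketch: a weighted composite quality index. All dimensions,
# scale bounds, and weights below are illustrative assumptions.

SCALES = {  # assumed (min, max) of each raw measurement scale
    "turn_taking_equity": (0.0, 1.0),
    "argument_depth":     (1.0, 5.0),
    "respectfulness":     (1.0, 7.0),
}
WEIGHTS = {"turn_taking_equity": 0.4, "argument_depth": 0.35, "respectfulness": 0.25}

def composite(raw):
    """Min-max normalize each dimension to [0, 1], then weighted-average."""
    norm = {k: (v - SCALES[k][0]) / (SCALES[k][1] - SCALES[k][0])
            for k, v in raw.items()}
    return sum(WEIGHTS[k] * norm[k] for k in norm)

score = composite({"turn_taking_equity": 0.8, "argument_depth": 4.0,
                   "respectfulness": 6.0})
print(round(score, 3))  # prints 0.791
```

Normalizing against fixed scale bounds (rather than against the observed sample) is what allows comparison across topics, geographies, and populations, at the cost of some within-process sensitivity.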

What constellation of outcomes (spanning legitimacy, recommendation quality, participant satisfaction, opinion change, and downstream policy impact) must any democratic process achieve to be considered successful, and how do these vary with process purpose?

Urgent

How can process outcomes (spanning legitimacy, recommendation quality, participant satisfaction, opinion change, and downstream policy impact) be operationalized as measurable indicators practitioners can feasibly collect?

Urgent
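One lightweight way to operationalize outcome constructs as collectable indicators is a simple construct-to-instrument mapping that practitioners can fill in per process. Every construct, measure, and instrument below is an illustrative example, not a validated battery.

```python
# Hypothetical sketch: mapping outcome constructs to concrete, collectable
# indicators. All entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Indicator:
    construct: str   # the outcome being operationalized
    measure: str     # what is actually recorded
    instrument: str  # how a practitioner collects it

INDICATORS = [
    Indicator("legitimacy",
              "share of participants endorsing the process as fair",
              "exit survey (5-point item)"),
    Indicator("opinion change",
              "mean absolute pre/post shift on policy items",
              "pre/post questionnaire"),
    Indicator("policy impact",
              "share of recommendations adopted within 12 months",
              "document tracking"),
]

for ind in INDICATORS:
    print(f"{ind.construct}: {ind.measure} (via {ind.instrument})")
```

A shared registry of such mappings would let different processes report comparable indicators while swapping in context-appropriate instruments.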

How can practitioners balance (through adaptive protocols or meta-evaluation frameworks) universal standards for cross-context learning against context-specific adaptations required by local stakeholder concerns and governance structures?

Urgent

These research questions target the capabilities needed to measure process runs and learn from them quickly.