Protocol for Testing AI and Verification Tools

The Protocol for Testing AI and Verification Tools responds to a growing challenge faced by fact-checkers, journalists, researchers, and analysts: while the volume of AI-generated content and sophisticated influence operations is increasing rapidly, the number of verification tools claiming to address these challenges has grown even faster. As a result, identifying, evaluating, and responsibly adopting effective tools has become a significant burden in its own right.

Developed collaboratively within BECID and aligned with the wider EDMO network, this protocol introduces a structured, transparent, and ethical methodology for testing AI-based and digital verification tools. It recognises that automation can support verification work at scale, but that human judgment must remain central. The protocol therefore balances technological innovation with investigative rigour, multilingual sensitivity, and ethical safeguards tailored to the Baltic context.

The protocol outlines a three-phase process – preparation, testing, and monitoring – covering everything from environmental scanning and tool selection to usability testing, bias and fairness assessments, and organisational impact analysis. It places strong emphasis on multilingual capability, GDPR compliance, ethical risk assessment, and real-world applicability within existing workflows. By standardising evaluation methods and sharing results openly, the protocol supports informed decision-making, reduces wasted effort, and strengthens collaboration both within BECID and across the EDMO network.

Ultimately, the protocol aims to turn scattered experimentation into actionable evidence, helping organisations decide which tools are worth adopting, under what conditions, and with what long-term impact on verification work and resilience against disinformation.

Read the full protocol here: