Effective and efficient artificial intelligence tools have been developed to help combat the increasingly sophisticated threat from online disinformation.
The vera.ai project aimed to develop advanced AI tools to address the significant impact of disinformation on public trust and societal resilience. These tools focus on content analysis, content enhancement, and evidence retrieval, including the detection of deepfakes and manipulated content and the tracking of disinformation campaign impacts. An intelligent verification assistant, powered by chatbot technologies, was also developed to support media professionals. The project brought together a multidisciplinary team of experts in social and communication science, machine learning, natural language processing, and media forensics. Prototypes were validated through real-world testing with media partners, while co-creation with journalists improved usability, transparency, and practical relevance. Continuous expert feedback ensured scientific robustness and impact.
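The project's own detection models are not reproduced here. Purely as an illustration of how a learned manipulation detector is typically applied in a verification workflow, the minimal sketch below runs a generic binary image classifier over a suspect image and reports a manipulation probability. The architecture choice, the checkpoint file manipulation_detector.pt, and the file names are assumptions for illustration, not vera.ai components.

```python
# Illustrative only: a generic binary "manipulated vs. authentic" image classifier.
# Nothing here is vera.ai code; the weights file and file names are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path: str) -> nn.Module:
    """Load a ResNet-18 backbone with a single-logit head for manipulation detection."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 1)   # one logit: P(manipulated)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def manipulation_probability(model: nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that the image was manipulated."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch).squeeze()
    return torch.sigmoid(logit).item()

if __name__ == "__main__":
    detector = load_detector("manipulation_detector.pt")   # hypothetical checkpoint
    score = manipulation_probability(detector, "suspect_image.jpg")
    print(f"Estimated manipulation probability: {score:.2f}")
```

In practice, operational verification tools combine several such detectors with other forensic cues and present the evidence to a human analyst, rather than relying on a single automated score.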
The vera.ai project successfully advanced explainable and trustworthy AI, emphasizing the crucial role of human oversight in making these technologies usable in practice. It delivered practical tools and methodological insights that are expected to enhance Europe’s capacity to detect, analyze, and respond to evolving AI-driven disinformation and coordinated manipulation campaigns. Key results, made publicly accessible, include updated tools for media professionals such as the Fake News Debunker verification plugin, Truly Media, and the Database of Known Fakes, as well as high-impact scientific publications, open-source repositories, and datasets.
Following the project’s completion, the vera.ai partners continue to support and enhance the tools and technologies developed, recognizing that online disinformation evolves constantly and that new threats require continued development of detection and analysis methods. This ongoing work is vital because coordinated disinformation campaigns can undermine public debate, distort electoral processes, and erode confidence in institutions and the media. In crisis situations, unverified information risks amplifying panic and causing real-world harm. For journalists, the inability to assess content quickly and reliably threatens their credibility. The vera.ai project’s work is expected to significantly strengthen information integrity, especially in journalism and fact-checking, by using AI-assisted content analysis, synthetic media detection, and coordinated inauthentic behavior monitoring to enhance speed, accuracy, and credibility. Its applications extend to public institutions, platform governance, and regulatory compliance, particularly in light of frameworks such as the Digital Services Act.
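Coordinated inauthentic behavior monitoring of the kind mentioned above often starts from a simple signal: many distinct accounts posting near-identical messages within a short time window. The sketch below illustrates that heuristic with TF-IDF cosine similarity over a few hard-coded example posts; the similarity threshold, time window, field names, and data are assumptions for illustration and do not reflect vera.ai's actual methods.

```python
# Illustrative only: flag near-duplicate messages posted by different accounts
# within a short time window, a common heuristic for coordinated inauthentic behavior.
# This is not vera.ai code; thresholds, field names, and data are assumptions.
from datetime import datetime, timedelta
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    {"account": "acct_a", "time": datetime(2024, 6, 1, 10, 0),
     "text": "Breaking: the dam has collapsed, evacuate now!"},
    {"account": "acct_b", "time": datetime(2024, 6, 1, 10, 2),
     "text": "BREAKING the dam collapsed, evacuate immediately!!"},
    {"account": "acct_c", "time": datetime(2024, 6, 1, 10, 3),
     "text": "The dam has collapsed, everyone must evacuate now"},
    {"account": "acct_d", "time": datetime(2024, 6, 2, 15, 0),
     "text": "Lovely weather at the lake today."},
]

SIMILARITY_THRESHOLD = 0.6          # cosine similarity treated as "near-duplicate" (illustrative value)
TIME_WINDOW = timedelta(minutes=10) # posts must appear within this window of each other

# Vectorize all posts and compute pairwise cosine similarities.
vectors = TfidfVectorizer().fit_transform([p["text"] for p in posts])
similarity = cosine_similarity(vectors)

# Keep pairs of different accounts whose posts are both similar and close in time.
suspicious_pairs = [
    (posts[i]["account"], posts[j]["account"])
    for i, j in combinations(range(len(posts)), 2)
    if posts[i]["account"] != posts[j]["account"]
    and abs(posts[i]["time"] - posts[j]["time"]) <= TIME_WINDOW
    and similarity[i, j] >= SIMILARITY_THRESHOLD
]

print("Account pairs posting near-duplicate content in a short window:")
for pair in suspicious_pairs:
    print(pair)
```

Real monitoring systems apply comparisons of this kind at a much larger scale and combine textual, visual, and temporal signals before anything is flagged for human review.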