Radiolytica

Harmful information, encompassing misinformation, disinformation, and hate speech, poses a severe threat in humanitarian crises, where it can undermine public health efforts and jeopardise the safety of both affected populations and aid workers. While digital tools have been developed to detect such content online, they are largely unsuitable for many humanitarian contexts in sub-Saharan Africa, where internet penetration is low and the vast majority of people rely on radio as their most trusted source of news. Despite radio's influence, humanitarian organisations face significant logistical hurdles in monitoring it, owing to language diversity, the absence of fixed broadcasting schedules, and the lack of automated tools for offline audio.

This project aims to provide rigorous evidence on the scale of this problem by developing an AI-powered system that records and transcribes radio broadcasts from local languages into written text. Using the Eastern Democratic Republic of Congo as a primary case study, we leverage Natural Language Processing and generative AI to investigate the magnitude of harmful information specifically surrounding health and epidemics. By identifying peak periods and key themes of harmful broadcasts, the project provides actionable insights that allow for targeted humanitarian countermeasures.
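The downstream analysis described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the project's actual pipeline: it assumes transcripts have already been produced by an ASR step (for example a speech-to-text model), and it stands in for the NLP/generative-AI classifier with a simple keyword match. The `HARMFUL_TERMS` list, the `Segment` fields, and all sample phrases are invented for demonstration.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical flagged phrases standing in for a trained NLP classifier.
HARMFUL_TERMS = {"cure-all", "outbreak hoax"}


@dataclass
class Segment:
    station: str
    hour: int          # broadcast hour, 0-23
    transcript: str    # text produced by the (assumed) ASR step


def flag_harmful(segments):
    """Return segments whose transcript matches any flagged phrase."""
    return [
        s for s in segments
        if any(term in s.transcript.lower() for term in HARMFUL_TERMS)
    ]


def peak_hours(flagged, top_n=3):
    """Count flagged segments per broadcast hour to surface peak periods."""
    return Counter(s.hour for s in flagged).most_common(top_n)
```

In a real deployment the keyword match would be replaced by a multilingual classifier, and the hourly counts would feed the reporting that guides targeted countermeasures.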

Furthermore, to determine which interventions are most effective at protecting vulnerable communities, we investigate the impact of radio content provided by humanitarian actors. We analyse differences in the prevalence of harmful information between radio stations that broadcast humanitarian media productions and those that do not. Ultimately, this project seeks to establish a scalable monitoring system that enables organisations to respond to misinformation with timely fact-checking and pre-emptive narratives, safeguarding the integrity of humanitarian efforts.
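The group comparison above amounts to contrasting the share of flagged segments between the two sets of stations. The following is a minimal sketch under invented inputs: each station is summarised as counts of flagged and total segments plus a flag for whether it carries humanitarian productions; a full analysis would add a formal statistical test rather than reporting the raw difference.

```python
def harmful_prevalence(flagged: int, total: int) -> float:
    """Share of broadcast segments flagged as harmful."""
    return flagged / total if total else 0.0


def group_difference(stations):
    """stations: iterable of (flagged, total, carries_humanitarian_media).

    Returns (prevalence_with, prevalence_without, gap), where gap is
    how much higher prevalence is at stations WITHOUT such productions.
    """
    with_f = with_t = without_f = without_t = 0
    for flagged, total, humanitarian in stations:
        if humanitarian:
            with_f += flagged
            with_t += total
        else:
            without_f += flagged
            without_t += total
    p_with = harmful_prevalence(with_f, with_t)
    p_without = harmful_prevalence(without_f, without_t)
    return p_with, p_without, p_without - p_with
```

Pooling counts before dividing weights each station by its airtime; an alternative design would average per-station prevalences to weight stations equally.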
