Combatting Abusive Language Online

This project develops methods to address abusive language online by generating counter speech and creating opportunities for engagement. Increasingly, social media platforms are turning to Natural Language Processing (NLP) and other machine learning methods to comply with governmental regulations on abusive language and to meet user expectations and their own ethics statements. Given the difficulty of detecting and evaluating abusive language automatically, these algorithms are notoriously error-prone. Drawing on work in NLP, political theory, and social activism, the Digital Democracies Group will create methods to turn hostile speech into more productive discussions and develop algorithms to determine what makes counter speech effective.
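
To make the detection difficulty concrete, the sketch below trains a toy bag-of-words abusive-language classifier. The data, labels, and model choice are illustrative assumptions, not the project's method; the point is that surface-level features cannot separate abuse from counter speech that quotes or references it, which is one source of the errors noted above.

```python
# Minimal sketch of a surface-level abusive-language classifier.
# The placeholder training data and labels below are illustrative only;
# real systems train on large annotated corpora and still struggle with
# context, sarcasm, and quoted or reclaimed language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are worthless and should leave",               # abusive (placeholder)
    "nobody wants you here, get lost",                   # abusive (placeholder)
    "thanks for sharing, interesting point",             # not abusive (placeholder)
    "I disagree, but I see where you're coming from",    # not abusive (placeholder)
]
train_labels = [1, 1, 0, 0]  # 1 = abusive, 0 = not abusive

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Counter speech often quotes the abuse it responds to, so it shares surface
# vocabulary with abusive posts; bag-of-words features cannot tell the two
# apart, a common source of false positives in automated moderation.
counter_speech = 'Telling someone "you are worthless" is exactly the kind of attack we should push back on.'
print(model.predict_proba([counter_speech]))  # probability of [not abusive, abusive]
```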