WAI2: New perspectives on bias and discrimination in language technology
University of Amsterdam, Amsterdam, Netherlands, November 4-5, 2024

Conference website | https://wai-amsterdam.github.io/
Submission link | https://easychair.org/conferences/?conf=wai21
Submission deadline | September 15, 2024
One of the central issues in discussions of the societal impact of language technology is that machine learning systems can contribute to discrimination, for instance by propagating human biases and stereotypes. Over the last few years, considerable effort has been invested in addressing these issues through the development of tools for measuring and mitigating bias.
Despite these efforts, we are far from protecting people from the potential harms of integrating this technology into their lives. At the same time, language technology is being deployed at breathtaking speed in all kinds of applications that reach millions of users all over the world. This creates a rather absurd situation: on the one hand, this fascinating and advanced technology continually surprises us with its growing capabilities. On the other hand, it holds us back by propagating existing (Western) stereotypes and biases to the global population at scale, and it poses new risks of disadvantaging vulnerable groups in society.
To address these risks more effectively, we need to take a step back and re-evaluate our approach. Questions to consider include:
- What is working about our methods of bias measurement, and what is not?
- What can we achieve with bias mitigation, and what are the (proximate and inherent) limitations of this approach?
- Should we fundamentally rethink the way we approach algorithmic discrimination, and if so, which alternatives should we explore?
- How can we effectively combine the insights and expertise from other disciplines to create new perspectives on bias and discrimination in language technology?
The goal of this workshop is to bring together researchers from different fields to discuss the state of the art in bias measurement and mitigation in language technology and to explore new avenues. We invite papers on new approaches to measurement and mitigation, as well as proposals for rethinking our approach to bias and algorithmic discrimination more generally.