Researchers from Microsoft Research and the University of Massachusetts Amherst surveyed 146 papers analyzing bias in NLP (natural language processing) systems, finding that the papers’ motivations are often vague, inconsistent, and lacking in normative reasoning, even though analyzing bias is an inherently normative process.
They further find that these papers’ proposed quantitative techniques for measuring or mitigating bias are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, the researchers describe the beginnings of a path forward, proposing three recommendations to guide work analyzing bias in NLP systems.
These recommendations rest on a greater recognition of the relationships between language and social hierarchies. They encourage researchers and practitioners to articulate their conceptualizations of “bias” (i.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statements) and to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities.