Artificial intelligibility: the role of gender in assigning humanness to natural language processing systems
Published online: 06 May 2024
In place of ‘Artificial Intelligence’, this article proposes artificial intelligibility as the more accurate term for describing the mistaken assignment of humanness to non-living objects. Artificial intelligibility is manifested when a user assumes an object has the capacity to understand them simply because it is understandable to them. Relatively simplistic Natural Language Processing systems may perform genres of humanness in conversational interactions to the degree that they are imagined to be sentient, as ‘Artificial Intelligence’. However, decolonial scholars have observed that humanness developed as a mutable, sociogenic construct. Decolonial gender theorists have further traced the role played by heteronormativity and repronormativity in upholding the power of this construct. Equating the intelligible performance of a gendered genre of humanness with intelligence risks obfuscating this interplay while reinforcing eugenic stratifications of life. This article reframes behavioural measures of ‘intelligence’ as gendered ‘intelligibility’ to explore the role of gender in enabling entry into the symbolic order of humanness. It presents three key findings from my doctoral thesis to question how and why some gendered, NLP-incorporating devices can be imagined to have lives that matter within the same economy of value that renders some humans and animals killable.