Whether we are willing to admit it or not, artificial intelligence (AI) is a part of life, and its inherent biases pose serious questions about the future of equity, inclusion, and the way we understand human diversity. The ascent of Large Language Models (LLMs) such as ChatGPT, Gemini, and Claude has revealed AI as both a harbinger of unprecedented opportunity and a mirror reflecting our societal prejudices. In other words, because humans made AI, it carries within it the seeds of the hate we expend so much energy trying to root out of our institutions and practices. Worse, even as we hope it will one day be more intelligent than we are, without intervention AI is also likely to build on the racist imperatives inherent in its programming. What follows is an exploration of the relationship between racial categorization and AI, one that considers the opportunities and dangers of AI storytelling.
The Paradox of Representation in AI
Language is a vehicle for human expression and perception. There are myriad ways in which the words we use to describe our interaction with the external world influence our cognition. Studies highlighting the impact of language on perception (Deutscher, 2010; Boroditsky, 2011) reveal how linguistic structures embody and propagate cultural biases. The phenomenon of language shaping thought illustrates the subtle yet profound influence of linguistic patterns on our understanding of the world. This insight is critical when considering AI's role in generating and framing discourse, emphasizing the need for vigilance against perpetuating biases through automated language production.
The narrative around racial bias in AI, notably in facial recognition technologies (Buolamwini & Gebru, 2018), spotlights the ethical tension at AI's core. The replication of racial biases in AI is not a flaw of the technology but a reflection of the biases embedded within the datasets it learns from. Yet even where the problem is recognized, steps to avoid replicating implicit bias in AI remain limited, in part because of how the bias is perceived. More research needs to be done on the rhetorical effects of an AI's implicit biases, particularly concerning LLMs and their capacity to tell stories about the world. Meredith Broussard, in Artificial Unintelligence and in the documentary Coded Bias, makes a compelling case for treating algorithmic bias as a central civil rights issue, advocating for critically examining the frameworks underpinning AI systems in order to mitigate these biases. Still, the problem is not merely an assumption of technological neutrality. Without a deep interrogation of the role of language in constructing reality, the capacity of LLMs to replicate implicit bias simply by selecting definitive terms goes unchecked.
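To make the underlying method concrete: the key move in the Gender Shades study was to disaggregate accuracy by intersectional subgroup rather than report a single headline number. The sketch below illustrates that kind of audit in Python; the subgroup labels echo the study's categories, but the records and figures are entirely hypothetical.

```python
# A minimal sketch of a disaggregated accuracy audit in the spirit of
# Buolamwini & Gebru's "Gender Shades": instead of one overall accuracy
# number, evaluate a classifier separately for each intersectional
# subgroup and inspect the spread. All data below is hypothetical.
from collections import defaultdict

# Hypothetical audit records: (subgroup, true class, predicted class).
records = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned female", "female", "male"),
    ("darker-skinned male",    "male",   "male"),
    ("darker-skinned male",    "male",   "female"),
    ("darker-skinned female",  "female", "male"),
    ("darker-skinned female",  "female", "male"),
]

# Tally correct predictions and totals per subgroup.
correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

# Report per-group accuracy and the gap between the best- and
# worst-served groups.
accuracies = {g: correct[g] / total[g] for g in total}
for group, acc in sorted(accuracies.items(), key=lambda kv: kv[1]):
    print(f"{group:24s} accuracy = {acc:.0%} (n={total[group]})")
gap = max(accuracies.values()) - min(accuracies.values())
print(f"disparity (max - min) = {gap:.0%}")
```

On real audit data, the disparity line is precisely where a single aggregate accuracy figure would have hidden the problem.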
Toward a Future of Inclusive Narratives
Reflecting on my interactions with AI, I note with some concern that the discourse should extend beyond the mechanics of bias to how racial identity is embodied and power is distributed through language. In addition to asking how implicit bias in programming and training might lead to the stereotyping of black bodies in surveillance, another line of inquiry should be how we are allowing AI to define and categorize terms such as "black," "stereotyping," and "surveillance." Without intervention at the symbolic level, we end up with AI like Gemini applying an extreme version of a superficial lay theory of diversity. While there has been speculation about how Google managed to get its AI diversity training so incredibly wrong, the answer is obvious to anyone who has taught a first-year university course that carries a diversity credit: a shallow approach to diversity, framed by empty jargon and soundbites, makes a mockery of any attempt at critical thought.
AI has the potential to replicate, amplify, and export America's unique brand of racial thinking and its associated discrimination. This matters because the cultural logics that permit Americans to navigate the complex rules of racial diversity are embedded in the language used to define it. The challenge transcends the technological realm, inviting a broader dialogue on harnessing AI to reflect human diversity beyond the confines of narrow racial categorization while confronting the embedded biases that threaten to undermine this goal. Telling different stories about what racial categories can actually tell us about human diversity is an excellent place to start.
The intersection of AI and storytelling invites us to reimagine the fabric of narrative creation, emphasizing the need for diversity and ethical consideration in AI development. By actively promoting inclusivity and adhering to ethical guidelines, we can steer the discourse toward a future where AI serves as a bridge connecting diverse human experiences. Like any growing intelligence (here I am thinking of a young child), if we tell AI that racial narratives are outdated, that is the reality that it will replicate. It is possible to leverage technology to change the future, especially in a world where fundamental narratives about the reality of human diversity transcend systemic inequities.
AI systems, especially those that manage the language shaping our reality, might be our last and best hope of confronting and reshaping racialized narratives. In their current state, however, they are likely to exacerbate and formalize racial prejudice and systemic inequities in ways we do not yet understand.
Things to read:
Selbst, A. D., boyd, d., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). "Fairness and Abstraction in Sociotechnical Systems." Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT*), 59-68. This work addresses the challenges of applying fairness in the design and deployment of AI systems.
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity. Benjamin offers a critical analysis of the relationship between machine bias and systemic racism.
Boroditsky, L. (2011). "How Language Shapes Thought." Scientific American, 304(2), 62-65. This article provides an overview of research into how different languages influence cognitive processes.
Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. MIT Press. This book critiques the myth of technological neutrality and delves into the biases embedded in AI.
Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77-91.
Deutscher, G. (2010). Through the Language Glass: Why the World Looks Different in Other Languages. Metropolitan Books. Deutscher explores how language shapes thought and cultural perception.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press. Eubanks discusses the impact of automated decision-making systems on low-income communities.
Gianfrancesco, M.A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). "Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data." JAMA Internal Medicine, 178(11), 1544-1547.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. Noble's work examines how search engines perpetuate racial biases and stereotypes.
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown. This book explores the societal impacts of data science and algorithmic decision-making.