By UBC Sociology Assistant Professor Laura K. Nelson
Artificial Intelligence (AI) has become an omnipresent force in our lives, permeating numerous aspects of society, from healthcare to transportation to finance to our daily interactions with digital assistants. The intersection of AI and society has sparked ongoing debates that highlight the multifaceted role of AI in shaping our world—and in enabling our understanding of that world—in ways that are deeply sociological.
While AI is a current buzzword, computer scientists, sociologists, and others have been developing and using the technology and methods underlying the current iterations for well over a decade. In 2013, Google introduced one of the pioneering large word embedding models, Word2Vec, which represented a significant breakthrough in natural language processing. (Large Language Models, the technology underlying GPTs such as ChatGPT, build on methods similar to Word2Vec.)
Trained on vast amounts of text data, these models revolutionized the field by capturing nuanced linguistic relationships and meaning. As the models gained prominence, researchers and society at large began to uncover a challenging issue: because they are trained on social data, these models encode biases present in that data, reflecting societal prejudices, stereotypes, and cultural associations. This revelation sparked intense debate about the ethical implications of AI and the responsibility of developers to address and rectify bias in machine learning systems, a discussion that continues today.
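The mechanics behind these bias findings can be sketched with a toy example. The three-dimensional vectors below are invented purely for illustration (real embedding models learn vectors of hundreds of dimensions from billions of words), but the arithmetic, measuring closeness in vector space with cosine similarity, is the same kind researchers use to probe what associations a model has absorbed:

```python
import math

# Toy word vectors, invented for illustration only; real embeddings are
# learned from large corpora. Dimensions here loosely encode:
# [royalty, maleness, femaleness]
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.0, 0.1, 0.1],
}

def cosine(u, v):
    """Cosine similarity: the standard measure of closeness in embedding space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The famous analogy arithmetic: king - man + woman should land near queen.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]
best = max((word for word in vectors if word not in {"king", "man", "woman"}),
           key=lambda word: cosine(target, vectors[word]))
print(best)  # queen

# Bias audits use the same arithmetic: if vec("doctor") sits closer to
# vec("man") than to vec("woman"), the model has absorbed that
# association from its training text.
```

Because the geometry is learned entirely from text written by people, whatever associations circulate in that text, including prejudices and stereotypes, end up measurable in the vector space.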
At the same time, sociologists and other social scientists leveraged the fact that these models encode social biases to better understand social processes (sociological theory, from Hegel to Mead to Du Bois, tells us why these models are so adept at capturing societal prejudices and stereotypes!). Sociologists have since used these methods to examine moral associations with weight, the cultural meaning of class, and changes in gendered stereotypes over the past 100 years, among other topics.
Sociologists also examine the social implications of AI in our daily lives. Themes include AI’s impact on labor markets and employment, the ways surveillance shapes work, AI’s role in the formation of online communities and in political polarization, and the consequences of its use in healthcare, criminal sentencing, policing, crime prediction, and welfare systems.
The entwinement of AI and society means that sociology is, and will remain, at the center of AI debates. A sociological imagination is essential to make sense of this new world.
Further Reading

Arseniev-Koehler, Alina, and Jacob G. Foster. 2022. “Machine Learning as a Model for Cultural Learning: Teaching an Algorithm What It Means to Be Fat.” Sociological Methods & Research 51(4): 1484-1539.
Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.
Brayne, Sarah. 2020. Predict and Surveil: Data, Discretion, and the Future of Policing. Oxford: Oxford University Press.
Levy, Karen. 2023. Data Driven: Truckers, Technology, and the New Workplace Surveillance. Princeton, NJ: Princeton University Press.
Related UBC Courses
SOCI 280: Data & Society
ARST 556Q: Accountable Computer Systems
This essay was featured in the Fall 2023 edition of UBC Sociology’s department newsletter, Think Sociology!