Mar 23
This week’s readings made me reflect not only on how LLMs represent culture, but also on who gets to define culture and to what end. While there is a growing body of work examining cultural bias in models, I found myself agreeing with the critiques about the lack of interdisciplinarity and situatedness in this research.
One aspect that stood out to me was the observation that most studies do not explicitly define culture, relying instead on proxies such as language, geography, or values. This made me question what exactly is being measured when researchers claim to study “culture.” If culture is operationalized through narrow and often convenient proxies, then these studies risk capturing only surface-level patterns rather than lived realities. The distinction between “thin” and “thick” descriptions of culture is particularly useful here, as it highlights how most NLP work relies on simplified, outsider representations rather than grounded, contextual understandings. This raises an important question: if these studies were situated in real-world contexts, would concerns about cultural misrepresentation remain as central, or would other issues emerge as more pressing?
This connects to a broader concern I had about interdisciplinarity. It is striking that research engaging with deeply social concepts like culture often does not meaningfully involve scholars from anthropology or sociology, a gap the authors themselves acknowledge. Given recent industry trends, including the dissolution of Responsible AI teams (e.g., at IBM and Microsoft FATE), I wonder whether this gap is likely to persist. If so, there is a risk that “culture” in AI becomes a technical variable to optimize rather than a complex social phenomenon to understand.
What I found myself returning to most is the question of material impact. The paper explicitly highlights the lack of situated studies examining how cultural misrepresentation in LLMs affects users in practice. This made me wonder: does the lack of cultural representation in AI systems actually have meaningful material consequences for the communities being discussed? Or is the significance of this issue being overstated relative to other, more structural concerns?
Here, I think Olúfẹ́mi O. Táíwò’s idea of elite capture becomes a useful lens. It made me wonder whether some strands of research on cultural representation in AI are shaped by relatively privileged actors who are able to define what counts as a problem. Is representation in AI, in a way, a form of identity politics that prioritizes non-material concerns such as representation and culture over material ones such as wages, housing, and healthcare? In Elite Capture, Táíwò argues that identity politics is often deployed by elites in service of their own interests rather than in service of the vulnerable people they claim to represent. If representation becomes the primary focus, there is a risk that resources such as funding, attention, and research effort are directed toward improving identity representation in models rather than addressing more immediate material inequalities such as access to quality education, healthcare, public infrastructure, or fair wages. In that sense, I found myself asking whether the current emphasis on representation in AI might sometimes reflect the priorities of globally connected academic and industry elites more than the needs of the communities they claim to represent.
This leads to a related question: are current efforts focused on improving representation in LLM outputs diverting attention from more fundamental material concerns that shape people’s lives? Even if a model becomes better at representing a culture in its responses, it is unclear how that translates into tangible improvements in living conditions. For instance, does better cultural representation in AI systems meaningfully impact access to education, healthcare, or economic opportunity? Or does it primarily operate at the level of discourse and visibility without addressing structural inequalities? While representation can be important, these readings made me question whether its current prioritization risks obscuring more urgent, material dimensions of inequality that lie outside the scope of AI systems altogether.