Mar 15

I found this week’s readings interesting because they made me think about LLMs not only as systems that reproduce bias, but also as tools that actively shape how culture and information circulate online. While many critiques focus on the harms of these systems, the papers also made me reflect on why people might still find LLM-generated content useful and appealing in everyday contexts.

One aspect that stood out to me in the paper by Bhagat et al. was how geographical disparities in training data appear in the outputs that language models generate. The authors show that when models produce travel recommendations or location-based stories, places in wealthier countries tend to receive more detailed and distinctive descriptions than places in poorer regions. For example, stories set in lower-income countries were more likely to emphasize hardship and sadness, while locations in wealthier countries received richer descriptions and more geographical references. This suggests that the uneven representation of places in training data shapes how different parts of the world are portrayed through AI-generated content.

At the same time, I also wondered how much of this problem comes from the underlying structure of the internet itself rather than from the models alone. If the online content used to train these models already overrepresents wealthier regions, then models might simply be reflecting existing inequalities in global knowledge production. This raises the question of whether improving training data diversity would actually solve the issue. If models are trained on more geographically balanced datasets, would that meaningfully change how they represent different regions, or would deeper structural inequalities in global information production continue to shape their outputs?

Another theme that resonated with me came from Agarwal et al., who study how AI writing suggestions influence users’ writing styles. Their experiment found that when participants from India used AI writing assistance, their writing gradually shifted toward Western linguistic styles and cultural references. In some cases, participants even described their own cultural practices through a Western framing after receiving suggestions from the model. This finding highlights how LLM-based writing tools can subtly influence not only what people write but also how they express cultural ideas.

However, I also found myself thinking about this phenomenon in the broader context of cultural diffusion. Throughout history, dominant cultures have shaped global communication through media, education, and language. English itself became a global language through similar processes of cultural influence. In that sense, LLMs might not be introducing an entirely new dynamic but rather accelerating patterns that already exist. If people adopt Western styles while using AI writing tools, it might partly be because these styles are already dominant in global communication spaces.

This also raises questions about the role of efficiency in shaping cultural expression, especially in professional contexts. In many corporate and business environments, communication already follows a fairly standardized style that may not align with everyone’s cultural or linguistic background. If AI writing tools help users write faster and produce text that sounds more professional within those environments, users might find it worthwhile to adopt the dominant style even if it does not reflect their own cultural way of communicating. In that sense, alignment with dominant cultural norms might not always be experienced as purely negative; it could also function as a form of practical assistance that helps people navigate professional spaces that already expect a certain tone or language. This makes me wonder whether users actually perceive this alignment as harmful cultural homogenization, or whether some see it as a helpful tool for adapting to institutional expectations in workplaces that prioritize standardized communication.