Large AI models are cultural and social technologies
Farrell, Henry, Alison Gopnik, Cosma Shalizi, and James Evans. 2025. "Large AI Models Are Cultural and Social Technologies." Science 387 (6739): 1153–56. https://doi.org/10.1126/science.adt9819.
Notes
- Distinction between models and agents
- Models are static statistical machines that regurgitate information they have already been fed
- Whereas the intelligence of humans arises from absorbing and synthesizing new information from the world
- That's where the concept of agents becomes more important, since a truly intelligent system should be capable of absorbing information from different mediums in real time and "thinking" more intelligently
- Large models are predominantly summarizing machines that represent and compress digitized human knowledge
- The authors' idea of creating "society-like" ecologies of multiple language models resonated with me deeply, as I too think it would be nearly impossible for a single model to represent every worldview
- The intelligence of a language model is similar to that of a single human, in the sense that a human's worldview at a given point is an amalgamation of all their previous experiences and accumulated knowledge
- While a person might be aware of multiple "realities", at a given time, a person has to choose the reality that they want to represent in the artifacts that they produce. These artifacts could be speech, writing, or any cultural object.
- Similarly, even if a large model is aware of multiple "realities" or worldviews, the artifacts it produces must ultimately represent a particular reality, since a single artifact cannot represent them all
- So to generate culturally representative large models, it might be futile to keep augmenting a single model with more data; instead, we could build multiple large models, where in each model a certain worldview takes precedence over others in the representation of its training data
- But I think this is only possible if the models also represent the "evils" present in a culture. By "evils", I just mean the socially undesirable characteristics a culture exhibits
- I think developing such a model would pose an interesting dilemma between "censoring" a model and "culturing" a model
In-text annotations
"A price, an election result, or a measure such as gross domestic product (GDP) summarizes large amounts of individual knowledge, values, preferences, and actions." (Page 1)
"Large models not only abstract a very large body of human culture, they also allow a wide variety of new operations to be carried out on it." (Page 2)
"Someone asking a bot for help writing a cover letter for a job application is really engaging in a technically mediated relationship with thousands of earlier job applicants and millions of other letter writers and RLHF workers." (Page 3)
"Combining and balancing these perspectives may provide more sophisticated means of solving complex problems" (Page 3)
"One way to do this may be to build "society-like" ecologies in which different perspectives, encoded in different large models, debate each other and potentially cross-fertilize to create hybrid perspectives (12) or to identify gaps in the space of human expertise (13) that might usefully be bridged." (Page 3)