Apr 19
This week’s readings focus on the complexities of doing participatory design. While participatory approaches are often framed as more inclusive and democratic, the readings question whether participatory design done by privileged technologists can actually achieve these goals. In particular, two issues stood out to me. First, as Anand Giridharadas argues in Winners Take All, there is often a tendency to “do good” while maintaining the status quo, where those in power attempt to fix problems without giving up control. Second, technologists and designers often come from very different social and cultural backgrounds, or “habitus” as described by Pierre Bourdieu, than the communities they design for. Because of this, even well-intentioned participatory efforts carry significant blind spots, where designers misinterpret needs or prioritize solutions that reflect their own perspectives rather than those of the communities involved.
One thing that stood out to me is how most participatory AI projects are structured around consultation rather than actual collaboration. As Delgado et al. show, the majority of projects involve stakeholders through activities like surveys or interviews, primarily to elicit preferences or values, rather than to shape the system itself. Very few projects involve participants in defining the problem, choosing the model, or even deciding whether AI should be used at all. In that sense, participation feels less like genuine collaboration and more like a way to validate pre-existing decisions.
This connects strongly to Giridharadas’ idea of philanthro-capitalism. There is a kind of “doing good without giving up power” dynamic here. Participatory design allows technologists and organizations to appear inclusive and socially responsible, while still retaining control over the system’s goals, design, and deployment. The fact that stakeholders are often selected by project leads further reinforces this, since participation itself becomes curated and controlled. It starts to feel like participation is less about empowerment and more about legitimacy.
Another issue that became clear to me is the role of habitus. The designers and technologists building these systems often come from very different social and cultural backgrounds than the communities they are working with. Even when they are well-intentioned, their assumptions about what problems matter and what solutions are appropriate are shaped by their own experiences. This shows up in subtle ways, such as relying on proxies or “representatives” to speak for communities, or using UX practitioners as intermediaries, which can actually limit direct engagement. In that sense, participatory design carries inherent blind spots, where important aspects of people’s lived realities may be overlooked or misunderstood because they fall outside the designers’ own frames of reference.
Suresh et al. push this critique further by showing that there are structural limits to participation itself, especially in the context of large-scale AI systems. They argue that corporate control over foundation models creates a “participatory ceiling,” where meaningful participation is constrained by business incentives and centralized decision-making. If companies are ultimately accountable to shareholders, there is little incentive to share real decision-making power with communities. This explains why participation often remains at the level of consultation rather than co-creation. It makes me wonder: especially in the context of foundation models, what would it actually take to move beyond this participatory ceiling? Is meaningful participation even possible within systems that are inherently centralized and corporate-controlled, or do we need entirely different institutional and governance structures to enable it?
What I found particularly interesting is the idea that not all participation is necessarily good. The paper highlights how participatory processes can be tokenistic or even extractive, reinforcing existing power dynamics instead of challenging them. This challenges the common assumption that “some participation is better than none.” In some cases, limited or superficial participation might actually close off possibilities for more meaningful engagement by giving the impression that inclusion has already been achieved.
All of this makes me think that what is needed is not just more participation, but a different orientation altogether. A more justice-oriented approach would require shifting control over problem definition, design, and decision-making to the communities themselves. This could also mean questioning whether AI is the right solution in the first place, something that most projects currently do not allow stakeholders to do. It also requires recognizing people not just as users or informants, but as designers and experts in their own contexts. This leads me to a broader question: how can we move toward a more justice-oriented design practice that actively accounts for these blind spots and enables communities to define problems and solutions on their own terms, rather than through the lens of external interventionists?
At the same time, the readings also make it clear that this is not easy to achieve. Organizational constraints, timelines, and funding structures all work against deep, long-term engagement. But that does not mean the current model is sufficient. If anything, it suggests that without addressing these structural constraints, participatory design risks becoming another layer of abstraction that makes systems appear more inclusive without actually changing who holds power.