The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice

Delgado, Fernando, Stephen Yang, Michael Madaio, and Qian Yang. 2023. “The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice.” In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23), 1–23. New York, NY, USA: ACM. https://doi.org/10.1145/3617694.3623261.

Notes

In-text annotations

"The majority of projects operationalized stakeholder participation via discrete preference or value elicitation." (Page 6)

"From our corpus analysis, we found that in motivating “why is participation needed”, the majority of projects described their goal as to improve the user experience (80 out of 80) or to better align AI with stakeholders’ preferences and values (52 out of 80). This reflects a state of participatory practice which focuses less on collaboration and stakeholder ownership and more on consultation and inclusion. In fact, only 10% (8 out of 80) of the projects involved stakeholders with the goal of shaping the system’s scope and purpose." (Page 6)

"Only 10% (8 of 80) of the projects allowed stakeholders to provide input into the type of model and features used, the design of objective or loss functions, or (for AI classification models) the decision thresholds." (Page 6)

"None of the projects allow stakeholders to rule out AI as a solution, since these projects were defined as “AI projects” in the first place. Mapping these observations onto the levels of participation framework, we can see that a vast majority of projects fall into the consultative realm in terms of “what’s on the table.”" (Page 6)

In Winners Take All, Anand Giridharadas argues that the "elite charade" allows the powerful to feel like they are "doing good" while they continue to benefit from the very systems that cause the harm they are trying to "fix."

"As it relates to “who is involved?,” the majority of stakeholders brought onto the projects in our corpus were identified and chosen by project leads (74 of 80)." (Page 6)

They often share a similar habitus, in Pierre Bourdieu's sense. Habitus creates blind spots for people from privileged social classes, who are often the ones designing these technological solutions. Even when such elites are well-intentioned, their solutions are shaped by their own lived experiences, which may be far removed from the lived experiences of the people for whom the solutions are designed.

"Even fewer projects (3 out of 80) had community stakeholders involved throughout the life cycle of the AI design project." (Page 7)

"As it relates to participatory methods, 85% (68 out of 80) of the projects engaged stakeholders in eliciting preferences or values through methods such as surveys and interviews, role-playing, and collaborative story-boarding." (Page 7)

"In fact, fewer than a quarter (18 out of 80) of the research articles we reviewed described engaging with the same participants more than once throughout the research project." (Page 7)

"Researchers and practitioners feel they have to substantially scale back their participatory ambitions. Interviewees across industry, the public sector, and academia reported that top-down organizational constraints limited the time and resources available for participatory approaches." (Page 8)

"When there is corporate interest in participatory projects, AI researchers and practitioners faced unanticipated expectations to include stakeholders other than those they were initially focused on based on “the business needs for that particular AI system”" (Page 8)

"For academic researchers conducting participatory work, the long-term relationship development needed for adequate understanding of stakeholders’ needs and social contexts is further undermined by the demand for rapid publication of results, the need to secure funding for students, and tenure requirements for faculty" (Page 8)

"Across all sectors, organizational timelines and resources were described as being at odds with involving a wide range of stakeholders, and certainly ill-equipped to support the “deep ethnography” [P9] that enables a rich understanding of the social landscape in which algorithms are being designed for." (Page 8)

"One interviewee went further to describe the notion of full participation as an impossibility, as for them this would require participants “literally coming to the office with me and making every decision with me and doing all these things; all of a sudden, they don’t have a life to live, right?” [P10]. This straw-man description of participation seems to serve at one level to justify a diluted version of stakeholder involvement, and another as a source of anxiety about never being able to do participation in design any justice." (Page 8)

"Some interviewees [P9, P10, P11] justified such pragmatic choices by arguing that participation—any form of participation, however limited—would be beneficial for stakeholders and better than the alternative of not having participated at all. As one researcher told us: “the reality is a lot of times is, when I say participatory, I mean two hours [...] but I would argue it’s better than nothing or like no participation” [P10]. As they elaborated, “I’m so in the mindset that if you get 80% closer to something, that’s a win. If you genuinely get closer to something, I think that’s better” [P10], while another interviewee expressed the importance of doing “whatever I can, and it won’t be perfect, and it will be rough and it will be messy” [P9]. Meanwhile, others highlighted the importance of developing “lighter-weight methods”" (Page 9)

"For example, in some cases, participants involved in design were not necessarily members of an affected stakeholder group themselves, but rather individuals who the project team perceived as being able to stand-in and voice stakeholders’ preferences and values based on lived and/or work experience" (Page 9)

"UX/HCI practitioners as mediators between AI experts and domain stakeholders. Another tactic described in the corpus and interviews consisted of team members described as having UX/HCI expertise “function as mediators” [P4] between impacted stakeholders and AI researchers in a way that (perhaps unintentionally) limited the interaction between these groups." (Page 9)

"However, it is critical to acknowledge that some participation may not be better than no participation, in situations where participation is extractive (and thus harms participants [65, 203]), tokenistic in ways that reify or amplify existing power structures within participating communities or between stakeholders and designers [54], or cases where “pseudo-participation” [164] may foreclose or co-opt more meaningful forms of participation" (Page 11)

"As Miceli et al. [147] and Sloane et al. [203] have argued, such forms of participation may be tokenistic, extractive, or may inadvertently reinforce or amplify existing power dynamics rather than challenging power structures, as in the aims of participatory design [201]." (Page 11)

"Stale modeling. In the case of algorithmic models as proxies, even if these models are able to faithfully capture people’s preferences at the moment of training, they risk becoming stale to people’s changing preferences over time. These fossilized preference models may lead to a substantive distinction between human agent preferences and algorithmic voting across policy choices [cf. 180], thus functioning as what some have referred to as a ‘technology of de-politicization’" (Page 12)