Class 4 Reflection
A key insight that stood out to me in this week’s readings is the importance of human intent and existing capabilities in shaping the success of technological interventions. In both Digital Study Hall (DSH) and Digital Green (DG), the role of mediators and the surrounding institutional infrastructure were central to outcomes. Technology appeared highly effective in contexts with strong foundations, but added little in places where human capacity was already constrained. These cases make it clear that technology alone cannot solve social development challenges. They also raise a harder question for me about whether investments in “technology for social good” are always justified, especially when the returns seem limited without parallel investments in people and systems.
The DG case, in particular, stood out for how its approach has evolved over time. The paper argues that video-based interventions were effective largely because of mediators, who did more than just disseminate information; they facilitated the support that the farmers required. In a recent project, I interviewed a senior leader at DG about their AI chatbot for farmers. They mentioned that the organization has been increasingly focusing on reaching farmers through Facebook ads to scale access, rather than relying primarily on extension workers, a slower and more resource-intensive model. This shift feels somewhat at odds with their earlier insights about the centrality of human mediators in driving impact. At the same time, I recognize the constraints they operate under. Extension workers are often embedded within government systems, and organizations like DG have limited control over that human infrastructure. This makes me wonder whether the shift reflects structural constraints, changing farmer behavior, or simply the affordances of newer technologies.
I also found the DSH findings striking in how closely they resemble patterns from my own fieldwork, despite the paper being over 14 years old. In a project where we studied teachers' use of an AI-based lesson planning tool, teachers with higher morale consistently made better use of the technology. More broadly, schools with good infrastructure, dedicated administrative support staff, and motivated leadership tended to have more teachers with high morale. This aligns closely with what DSH observed. In our case, while we could provide an AI-based tool, its actual impact depended heavily on factors outside our control.
These experiences push me to question where effort is best directed. If outcomes are so deeply shaped by underlying systems, would it be more productive to strengthen those systems rather than build tools that primarily target frontline workers? At the same time, this raises a more difficult question: what role can technology realistically play in improving these underlying systems, rather than simply layering on top of them?