Apr 27

I found both readings especially relevant to how I think about designing studies and eliciting participation in HCI and ICTD research. What I appreciated most about Ming et al.'s paper is that it treats response bias not as a small methodological issue, but as something deeply connected to power dynamics, study design, and the social relationship between researchers and participants. Although the paper focuses on accessibility research, I found many of the ideas equally applicable to other forms of ICTD research. The discussion of study fatigue, incentive bias, novelty bias, and the effect of social distance felt particularly important. For example, participants may give overly positive feedback simply because they are excited to try something new, want compensation, or feel grateful toward researchers whom they perceive as helping them. The paper also highlights how differences in social background between researchers and participants can intensify these effects: participants may respond more positively to foreign or socially distant researchers because of the perceived hierarchy.

This felt very relevant to my own research. One approach I have found useful is relying on mixed methods by combining qualitative interviews with quantitative data such as application usage, interaction logs, or behavioral traces from the system itself. I find this helpful because it allows some form of triangulation rather than relying only on self-reported experiences, especially when there may be strong power differentials or incentives shaping participant responses. At the same time, I am aware that this approach works only in certain contexts, particularly application-specific user studies where such data is available; I try to sketch what I mean below. In many other forms of fieldwork, there may not be a comparable source of quantitative validation. This made me think about a broader question: are there certain kinds of research questions that are more likely to produce response bias than others? Ming et al. discuss how some topics naturally create stronger bias because participants may feel judged, dependent, or grateful. But I wonder how study design strategies should change depending on the type of question being asked. For example, what kinds of methods work best when studying highly sensitive topics like health, disability, or financial insecurity, where participants may feel pressure to respond in socially desirable ways?
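To make the triangulation idea concrete, here is a minimal sketch of the kind of check I have in mind: comparing glowing interview ratings against engagement measures derived from interaction logs. Everything here is hypothetical for illustration, including the column names, the toy data, and the five-minute threshold; a real analysis would depend entirely on what the system actually logs.

```python
# Minimal sketch: triangulating self-reported ratings against usage logs.
# All data, column names, and thresholds below are hypothetical.
import pandas as pd

# Self-reported satisfaction from exit interviews (1-5 scale).
surveys = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "reported_satisfaction": [5, 5, 4, 5],
})

# Behavioral traces from the application's interaction logs.
logs = pd.DataFrame({
    "participant_id": ["p1", "p1", "p2", "p3", "p3", "p3", "p4"],
    "session_minutes": [22, 18, 2, 15, 20, 12, 1],
})

# Aggregate the logs into per-participant engagement measures.
usage = logs.groupby("participant_id").agg(
    sessions=("session_minutes", "size"),
    total_minutes=("session_minutes", "sum"),
).reset_index()

merged = surveys.merge(usage, on="participant_id")

# Flag participants whose very positive self-reports are not matched
# by actual engagement -- one possible signature of courtesy or
# incentive bias worth probing further in follow-up conversations.
suspect = merged[
    (merged["reported_satisfaction"] >= 4) & (merged["total_minutes"] < 5)
]
print(suspect)
```

The specific threshold matters less than the general point: the behavioral trace offers a second signal that does not pass through the same social filter as the interview, so large disagreements between the two are at least a prompt to ask why.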

Hamna et al.’s paper provides an interesting contrast because it shows a participatory approach to LLM evaluation through long-term engagement with community service organizations (CSOs) in healthcare settings. I particularly liked their decision not to create gold-standard answers for the benchmark, since healthcare questions are often open-ended, context-dependent, and shaped by lived experience rather than having one universally correct response. Their argument that creating gold-standard responses would flatten this variability and reintroduce institutional bias felt very convincing.

At the same time, while reading the paper, I kept thinking about the issue of participation fatigue discussed by Ming et al. Hamna et al. are clearly careful about respecting the limited time and resources of CSOs by using short and focused interviews, especially since these organizations are already balancing urgent priorities. But I still wonder what happens over time when the same organizations are repeatedly asked to participate in research projects. Even well-intentioned participatory work can become extractive if communities are constantly expected to contribute their expertise without long-term support or reciprocal benefit. This made me wonder about sustainability: how do we move from one-time community participation to genuine long-term collaboration? If participatory design depends on trust and repeated engagement, then how do researchers build partnerships that are not just project-based, but durable and mutually beneficial?