Methods for the Design and Evaluation of HCI+NLP Systems
Heuer, Hendrik, and Daniel Buschek. 2021. “Methods for the Design and Evaluation of HCI+NLP Systems.” arXiv. https://doi.org/10.48550/arXiv.2102.13461.
Notes
HCI methods for evaluating NLP systems
- User-centered NLP
  - “user studies ensure that users understand the output and the explanations of the NLP system” (Heuer and Buschek, 2021, p. 2)
- Co-creating NLP
  - “deep involvement from the start enables users to actively shape a system and the problem that the system is solving” (Heuer and Buschek, 2021, p. 2)
- Experience Sampling
  - “richer data collected by (active) users enables a deeper understanding of the context and the process in which certain data was created” (Heuer and Buschek, 2021, p. 2)
- Crowdsourcing
  - “an evaluation at scale with humans-in-the-loop ensures high system performance and could prevent biased results or discrimination” (Heuer and Buschek, 2021, p. 2)
- User Models
  - “simulating real users computationally can automate routine evaluation tasks to speed up the development” (Heuer and Buschek, 2021, p. 2)
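The user-model idea (simulating real users computationally to automate routine evaluation) can be sketched as a toy simulation. This is a minimal illustration, not the paper's method: the comprehension model, the `reading_skill` parameter, and the length-based heuristic are all hypothetical assumptions chosen for the example.

```python
import random

def simulated_user(output: str, reading_skill: float, rng: random.Random) -> bool:
    # Hypothetical user model: the simulated user "understands" an output
    # with a probability that drops as the text gets longer, scaled by the
    # user's reading skill (both assumptions, not from the paper).
    words = max(len(output.split()), 1)
    comprehension = reading_skill * min(1.0, 20 / words)
    return rng.random() < comprehension

def evaluate(outputs, n_users=1000, seed=0):
    # Fraction of simulated users (with randomly drawn skill levels)
    # who understood each candidate system output.
    rng = random.Random(seed)
    scores = []
    for out in outputs:
        ok = sum(
            simulated_user(out, rng.uniform(0.5, 1.0), rng)
            for _ in range(n_users)
        )
        scores.append(ok / n_users)
    return scores

short = "The flight is delayed."
long_ = " ".join(["word"] * 60)
print(evaluate([short, long_]))
```

Running many such simulated users over candidate outputs gives a cheap, repeatable proxy score during development; the paper's point is that this automates routine evaluation, not that it replaces studies with real users.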