|Title|Benefits and Barriers of User Evaluation in Software Engineering Research|
|Publication Type|Conference Paper|
|Year of Publication|2011|
|Authors|Buse, RPL, Sadowski, C, Weimer, W|
|Conference Name|OOPSLA '11: Proceedings of the ACM International Conference on Object-Oriented Programming, Systems, Languages, and Applications|
|Conference Location|Portland, Oregon, USA|
|Keywords|Human study, User evaluation|
In this paper, we identify trends about, benefits from, and barriers to performing user evaluations in software engineering research. From a corpus of over 3,000 papers spanning ten years, we report on the various subtypes of user evaluations (e.g., coding tasks vs. questionnaires) and relate user evaluations to paper topics (e.g., debugging vs. technology transfer). We identify external measures of impact, such as best paper awards and citation counts, that correlate with the presence of user evaluations. We complement this with a survey of over 100 researchers from over 40 different universities and labs, in which we identify a set of perceived barriers to performing user evaluations.