Managers of workforce training programs are often unable to afford costly, full-fledged experimental or nonexperimental evaluations to determine their programs’ impacts. Therefore, many rely on the survey responses of program participants to gauge program impacts.
Smith, Whalley, and Wilcox present the first attempt to assess such measures despite their already widespread use in program evaluations. They develop a multidisciplinary framework for addressing the issue and apply it to three case studies: the National Job Training Partnership Act Study, the U.S. National Supported Work Demonstration, and the Connecticut Jobs First Program.
Each of these programs was subjected to an experimental evaluation that also included a survey-based participant evaluation measure. The authors apply econometric methods developed specifically to estimate program impacts for individuals in these studies and then compare those estimates with the survey-based participant evaluation measures to assess the surveys' efficacy.
The authors also discuss how their findings fit into the broader literatures in economics, psychology, and survey research.
Upjohn project #69412
ISBN: 9780880996815 (cloth); 9780880996587 (pbk.); 9780880996594 (ebook)
Smith, Jeffrey A., Alexander Whalley, and Nathaniel T. Wilcox. 2021. Are Participants Good Evaluators? Kalamazoo, MI: W.E. Upjohn Institute for Employment Research. https://doi.org/10.17848/9780880996594
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.