COS 7-1 - Evaluating citizen science project performance: A science outcomes inventory and implementation guidelines

Monday, August 8, 2016: 1:30 PM
220/221, Ft Lauderdale Convention Center
Andrea Wiggins and Jonathan Brier, College of Information Studies, University of Maryland College Park, College Park, MD
Background/Question/Methods

Citizen science continues to increase in popularity across the scientific domains, especially in ecology; however, current evaluation tools focus primarily on participant outcomes and provide little guidance for evaluating a diverse portfolio of potential science products. Guidelines and criteria for evaluating project performance can provide needed decision information for peer review and funding decisions, as well as for policy and practice in both science and conservation. We present a Science Outcomes Inventory developed for citizen science project evaluation, based on outcomes drawn from highly successful projects such as eBird and Nature’s Notebook. The inventory includes three categories for evaluation: science products, such as publications, data, and public communications, are contextualized with details about organizational features and the data produced or processed by citizen science participants. We piloted the instrument with two citizen science projects at the Smithsonian Environmental Research Center, with the dual purpose of evaluating both the projects and the evaluation process itself across project structures and time. The pilot implementation walked science staff through the inventory in interview and focus group formats, concluding with questionnaire items about the evaluation process and instrument.

Results/Conclusions

Our pilot implementation verified the applicability of many of the inventory items for projects that were substantially different from those that formed the basis for the inventory. We also confirmed that the knowledge of multiple team members was required to complete the inventory in full detail; for example, PIs had detailed managerial knowledge, while technicians provided extensive process information. The evaluation helped staff identify their own knowledge gaps about the projects they worked with and reminded investigators of potential products that they felt they could or should be producing. For new members of the research group, the process revealed the most gaps in current knowledge, and could therefore play a role in onboarding new research team members. We observed that for implementations with repeated evaluations, inventory items could be classified as baseline, threshold (one-time), or ongoing metrics, supporting process efficiencies in longitudinal evaluation. In addition, we saw that open availability of project data was often a prerequisite for broader impacts, i.e., if data were not openly available, a number of other potential outcomes were unlikely to occur. Ongoing research will formalize the implementation guidelines for broader application.