Background/Question/Methods
Photographic identification offers many advantages as a non-invasive method of capture-mark-recapture (CMR). However, because computer vision for wildlife applications has not yet reached the point of fully automated matching, all forms of photo-ID remain constrained by the potential for human error. Error type and frequency greatly impact the structure of image databases, as well as the accuracy and precision of analyses and population estimates. Ten years of photographic CMR work, comprising more than 12,000 images of the Massachusetts state-threatened marbled salamander (Ambystoma opacum), provide an exceptional platform for interdisciplinary collaboration and for exploring human error rates through a blind, trial-based collection of matching information. Sixty students each examined a complete factorial series of 15 online trials (varying in database size and number of matching images), viewing a total of 2,625 image pairs. Covariates such as experience, time to decision, and trial order were also documented.
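To make the trial structure concrete, the sketch below enumerates one full factorial design consistent with the counts reported above. The specific levels are hypothetical (the abstract does not state them): five database sizes crossed with three match counts yield the 15 trials, and sizes summing to 875 reproduce the 2,625 pairs per observer (3 × 875 = 2,625) under the assumption that each trial compares one target image against every image in the trial database.

```python
from itertools import product

# Hypothetical factorial design: these levels are illustrative, not the
# study's actual values. Five database sizes crossed with three match
# counts give 15 trials; if each trial compares one target against every
# database image, the sizes below yield 2,625 pairs per observer.
DATABASE_SIZES = [25, 50, 100, 200, 500]  # images per trial database (assumed)
MATCH_COUNTS = [0, 1, 2]                  # matching images present (assumed)

trials = list(product(DATABASE_SIZES, MATCH_COUNTS))
pairs_per_observer = sum(size for size, _ in trials)

print(f"trials: {len(trials)}")                     # 15
print(f"pairs per observer: {pairs_per_observer}")  # 2625
```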
Results/Conclusions
False negatives (missed matches) accounted for almost all errors and occurred across a broad range of observer experience. Consequently, trials containing more matches generated more errors, regardless of database size. False positives (incorrect matches) were significantly less likely with increasing experience. Observer fatigue had a smaller impact on error frequency than photographic quality. These results form a springboard for the larger question of whether high-quality data can feasibly be collected from large numbers of citizen scientists or through computer-assisted approaches. We hope that an emphasis on simple, visual, binary questions (match or non-match) can bridge differences in age, gender, education level, and technological proficiency that might otherwise separate a community of learners, enabling photo-ID to become a vector for reciprocal learning between scientists and the public.
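Because the two error types behave so differently, any reanalysis or citizen-science pipeline needs to track them separately. The hypothetical helper below (a minimal sketch, not the study's analysis code) shows the bookkeeping: each viewed pair is reduced to the same binary question posed to observers, and false-negative and false-positive rates are computed over true matches and true non-matches respectively.

```python
def error_rates(decisions):
    """Summarize observer errors over a set of viewed image pairs.

    decisions: list of (is_true_match, observer_called_match) tuples,
    one per image pair, both booleans.
    Returns (false_negative_rate, false_positive_rate).
    """
    matches = [call for truth, call in decisions if truth]
    nonmatches = [call for truth, call in decisions if not truth]
    # Missed matches: true matches the observer failed to call.
    fn_rate = matches.count(False) / len(matches) if matches else 0.0
    # Incorrect matches: non-matches the observer called matches.
    fp_rate = nonmatches.count(True) / len(nonmatches) if nonmatches else 0.0
    return fn_rate, fp_rate

# Example: three pairs viewed; the one true match is missed (false
# negative) and one of two non-matches is called a match (false
# positive), giving rates of 1.0 and 0.5.
fn, fp = error_rates([(True, False), (False, True), (False, False)])
```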