PS 3-33 - Which measures of β-diversity are most resilient to method error?

Monday, August 8, 2016
ESA Exhibit Hall, Ft Lauderdale Convention Center
Philip Jason Schroeder, Biology, University of Central Florida, Orlando, FL, and David Jenkins, University of Central Florida, Orlando, FL
Background/Question/Methods

Much thought has been expended on the most theoretically appropriate measure of beta diversity, but comparatively little research has addressed the practical consideration of error when measuring it. In particular, little has been done to determine whether certain measures of beta diversity are more resilient to error than others. We examined the effect of three types of error on the measurement of beta diversity: taxonomic misidentification, numerical undersampling, and geographical undersampling. Simulated metacommunities were assembled to a steady state at two different scales and densities to establish initial patterns. Eight abundance-based and six presence/absence beta diversity measures, selected for a combination of popularity and conceptual distinctiveness, were then computed for each metacommunity. After the initial measurement at steady state, each metacommunity was exposed to five levels of each error type and its beta diversity was measured again. This process was repeated 1000 times per scale per error type, and measures were then compared by their error rates with 95% confidence intervals.
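The following is a minimal sketch (not the authors' code) of the error-resilience workflow described above: perturb a simulated metacommunity with one error type, recompute a beta diversity measure, and summarize relative error across many replicates. The Poisson/gamma community generator, the binomial undersampling perturbation, and the Whittaker-style multiplicative beta used here are illustrative stand-ins for the scenarios and fourteen measures actually tested.

```python
# Illustrative sketch of the error-rate workflow; assumes NumPy only.
import numpy as np

rng = np.random.default_rng(42)

def whittaker_beta(abund):
    """Multiplicative beta = gamma / mean alpha, from a sites x species abundance matrix."""
    alpha = (abund > 0).sum(axis=1).mean()      # mean species richness per site
    gamma = (abund.sum(axis=0) > 0).sum()       # pooled (regional) species richness
    return gamma / alpha

def undersample(abund, fraction):
    """Simulate numerical undersampling: each individual is detected with probability `fraction`."""
    return rng.binomial(abund, fraction)

def error_summary(n_reps=1000, n_sites=20, n_species=50, fraction=0.5):
    """Relative error of the measure under perturbation, summarized over replicates."""
    errors = []
    for _ in range(n_reps):
        # Hypothetical steady-state metacommunity: gamma-distributed mean abundances per species.
        abund = rng.poisson(lam=rng.gamma(1.0, 5.0, n_species), size=(n_sites, n_species))
        true_beta = whittaker_beta(abund)
        obs_beta = whittaker_beta(undersample(abund, fraction))
        errors.append(abs(obs_beta - true_beta) / true_beta)
    errors = np.array(errors)
    lo, hi = np.percentile(errors, [2.5, 97.5])  # 95% interval across replicates
    return errors.mean(), (lo, hi)

if __name__ == "__main__":
    mean_err, (lo, hi) = error_summary()
    print(f"mean relative error: {mean_err:.2%}, 95% interval: {lo:.2%}-{hi:.2%}")
```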

Results/Conclusions

For abundance data, the Bray-Curtis index and Legendre's total variance were most robust to sampling errors: error rates for the Bray-Curtis index ranged from 2.83% to 27.41% across all simulations, while Legendre's total variance ranged from 6.15% to 54.40%. For presence/absence data, Jaccard's and Cody's indices were most effective, with error rates of 2.99% to 21.49% and 4.45% to 42.69%, respectively. Notably, Cody's index and Legendre's total variance are identical for presence/absence data. Several popular methods, such as Sorensen's and Simpson's indices, were relatively poor performers for both data types and should not be used. Additionally, all measures based on minimum and maximum values (e.g., Simpson's index and Beta-2) were extremely sensitive to all forms of error, often more than doubling the error rate of the best performer. Ecological research that employed these measures should be revisited. The increasing prominence of citizen science and data mining requires numerical tools that can reliably handle sampling error; the Bray-Curtis, Legendre's, Jaccard's, and Cody's beta diversities are recommended.
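For reference, the sketch below shows the standard pairwise forms of the two recommended measures that are simplest to state: Bray-Curtis dissimilarity for abundance data and Jaccard dissimilarity for presence/absence data. These are the textbook formulas, not the authors' implementation, and the example site vectors are hypothetical.

```python
# Minimal sketch of two recommended pairwise measures; assumes NumPy only.
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors: sum|x-y| / sum(x+y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

def jaccard(x, y):
    """Jaccard dissimilarity between two presence/absence vectors: 1 - shared/total species."""
    a, b = np.asarray(x) > 0, np.asarray(y) > 0
    shared = np.logical_and(a, b).sum()
    total = np.logical_or(a, b).sum()
    return 1.0 - shared / total

# Hypothetical species counts at two sites.
site1 = [12, 0, 3, 5, 0]
site2 = [10, 2, 0, 4, 1]
print(f"Bray-Curtis: {bray_curtis(site1, site2):.3f}")  # abundance-based
print(f"Jaccard:     {jaccard(site1, site2):.3f}")      # presence/absence
```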