OOS 12-6
Comparing snapshot methods, time series analysis, and simple benchmarks for forecasting biodiversity

Monday, August 10, 2015: 3:20 PM
341, Baltimore Convention Center
Ethan P. White, Department of Wildlife Ecology & Conservation and the Informatics Institute, University of Florida, Gainesville, FL
Background/Question/Methods

There are traditionally two approaches to making predictions about how ecological systems will change in the future: snapshot methods, which use data from a single moment in time to predict what will happen next, and time-series analyses, which use information on the long-term dynamics of the system to inform forecasts. These approaches are rarely compared directly using empirical data, and they are also rarely compared to simple benchmarks. I will use existing examples and new analyses of data from the Breeding Bird Survey of North America to compare the performance of snapshot methods, time-series analyses, and simple benchmarks in forecasting biodiversity. The new analyses include the development of forecasting benchmarks and a comparison of space-for-time substitution snapshot models and time-series analyses using hindcasting.
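A hindcasting comparison of this kind can be sketched in a few lines: hold out the most recent years of a site's time series, forecast them from the earlier years, and score each method against the held-out observations. Everything below is illustrative only, with simulated richness data and a simple lag-1 autoregression standing in for a fitted time-series model; it is not the analysis from the talk.

```python
import numpy as np

# Simulated 30-year species richness series (hypothetical stand-in for real data)
rng = np.random.default_rng(0)
richness = rng.poisson(50, size=30).astype(float)

# Hindcast: train on all but the last 5 years, evaluate on the held-out 5 years
train, test = richness[:-5], richness[-5:]

# Two simple benchmarks: repeat the last observed value, or the long-term mean
bench_last = np.full(5, train[-1])
bench_mean = np.full(5, train.mean())

# A minimal time-series model: lag-1 autoregression fit by least squares,
# iterated forward over the forecast horizon
slope, intercept = np.polyfit(train[:-1], train[1:], 1)
ar_forecast = []
prev = train[-1]
for _ in range(5):
    prev = intercept + slope * prev
    ar_forecast.append(prev)
ar_forecast = np.array(ar_forecast)

def mae(forecast):
    """Mean absolute error against the held-out years."""
    return float(np.mean(np.abs(forecast - test)))

for name, f in [("last value", bench_last),
                ("long-term mean", bench_mean),
                ("lag-1 AR", ar_forecast)]:
    print(f"{name}: MAE = {mae(f):.2f}")
```

The point of the design is that any candidate model is scored on exactly the same held-out years as the benchmarks, so "better than the benchmark" has a concrete, testable meaning.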

Results/Conclusions

Understanding which forecasting methods to use in which situations requires understanding how different approaches perform relative to one another. Benchmarks, such as the long-term average of biodiversity at a site or the biodiversity of the last observed year, are important for assessing the overall effectiveness of these forecasts. Making these comparisons requires direct empirical testing. Benchmark development for the Breeding Bird Survey of North America data shows that simple benchmarks can explain on the order of 70% of the variation in short-term forecasts for species richness, and 45% for short-term forecasts of population densities of individual species. The quality of these benchmark forecasts decays through time. Snapshot and time-series based forecasts need to outperform these benchmarks in order to provide improved forecasts. I will discuss examples of cases where more detailed models outperform benchmarks, and cases where they do not.
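The decay of benchmark quality through time can be measured by scoring the same benchmark at increasing forecast horizons across many start years. The sketch below does this for the last-observed-value benchmark on a simulated random-walk richness series; the data and parameters are hypothetical, chosen only so that forecast error plausibly grows with horizon as it does in the results described above.

```python
import numpy as np

# Simulated random-walk-like richness series (hypothetical, not BBS data):
# under a random walk, forecast error is expected to grow with horizon
rng = np.random.default_rng(1)
richness = 50 + np.cumsum(rng.normal(0, 1, size=60))

max_h = 10
errors = {h: [] for h in range(1, max_h + 1)}

# Slide the forecast origin across the series and score the last-value
# benchmark at each horizon h
for t in range(20, len(richness) - max_h):
    for h in range(1, max_h + 1):
        errors[h].append(abs(richness[t + h] - richness[t]))

for h in range(1, max_h + 1):
    print(f"horizon {h}: mean abs error = {np.mean(errors[h]):.2f}")
```

Plotting mean error against horizon for benchmarks and candidate models on the same axes makes the decay, and any crossover point where a model stops beating the benchmark, directly visible.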