Uncertainty analysis: An evaluation metric for synthesis science
The methods for conducting reductionist ecological science are well known and widely used. In contrast, those used in ecological synthesis science are still being developed, vary widely from study to study, and often lack the rigor of reductionist approaches. This is unfortunate because the synthesis of ecological parts into a greater whole is critical to understanding many of the environmental problems society faces. Here, the question of how the rigor of synthesis science might be increased is explored by contrasting reductionist and synthesis science in terms of uncertainty.
The uncertainty associated with a result is a standard evaluation metric essential to scientific rigor. In reductionist science, measurement uncertainty (i.e., experimental error) is described by precision and accuracy. While rigorously determined in laboratory analyses and climatic measurements, in many field studies measurements are simply assumed to be acceptably precise and accurate. Natural variation in space and time is the main concern in most field studies and is characterized by repeated sampling to quantify classical statistical moments (e.g., mean and variance). Many measurements are transformed into variables of interest (e.g., diameter to biomass) using relationships contained in models. Uncertainty enters at this stage of analysis in two ways: uncertainty about model parameter values (so-called regression error) and uncertainty about the form of the relationships (model selection error). In reductionist science, regression error is often quantified, while model selection error is generally overlooked because one model is usually selected as the "best."

Synthesis science shares all these forms of uncertainty, but model selection error, already the least understood, is perhaps the most important. It matters to the synthesis process because it is tied to how subparts are put together: when knowledge about the relationships, or about how to model them, is imperfect, several plausible approaches can exist. A key step in increasing the rigor of synthesis science would therefore be to evaluate model selection error rather than favoring one plausible approach over another. By quantifying the uncertainty of a synthesis, one should be able to rigorously compare one synthetic result to another and judge whether the two differ within the bounds of measurement and knowledge. For this to become a standard method, however, best practices analogous to those used in reductionist science need to be developed and implemented.
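The idea of model selection error can be made concrete with a minimal sketch. Here, two hypothetical allometric models (a power law and a log-linear form, with illustrative coefficients not fitted to real data) both transform a tree diameter measurement into biomass; the spread among their predictions is taken as a rough measure of model selection uncertainty, rather than discarding all but the "best" model.

```python
import math

# Two hypothetical allometric models relating tree diameter (cm) to
# aboveground biomass (kg). Coefficients are illustrative only.
def model_power(d):
    # Power-law form: biomass = a * d^b
    return 0.12 * d ** 2.4

def model_log_linear(d):
    # Log-linear form: ln(biomass) = a + b * ln(d)
    return math.exp(-2.0 + 2.3 * math.log(d))

def model_selection_spread(diameter, models):
    """Propagate one measurement through several plausible models and
    report the mean prediction plus the prediction spread, a simple
    stand-in for model selection uncertainty."""
    preds = [m(diameter) for m in models]
    mean = sum(preds) / len(preds)
    spread = max(preds) - min(preds)
    return mean, spread, preds

mean, spread, preds = model_selection_spread(30.0, [model_power, model_log_linear])
print(f"mean biomass: {mean:.1f} kg, model spread: {spread:.1f} kg")
```

Reporting the spread (or a model-averaged interval) alongside the mean is one way a synthesis could carry model selection error forward instead of silently committing to a single functional form.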