Tuesday, August 5, 2008 - 10:10 AM

COS 31-7: Evaluating scaling models in biology using a hierarchical Bayesian framework

Charles A. Price1, Ethan P. White2, Joshua S. Weitz1, and Kiona Ogle3. (1) Georgia Institute of Technology, (2) Utah State University, (3) University of Wyoming

Background/Question/Methods

The past few decades have seen a surge of interest in scaling patterns in biology, in part due to the introduction of several predictive models purporting to explain their origin. A common feature of these models is that they rely on physical first principles to predict the scaling of prominent patterns, such as mammalian basal metabolic rate with body mass, or tree height with stem diameter. Almost without exception, tests of these models have relied on fitting a regression to log-transformed bivariate data and assessing whether the confidence interval for the scaling exponent contains the value predicted or hypothesized by a given model. Such approaches have led to vigorous debate regarding model goodness of fit, what types of data are appropriate (e.g., maximum vs. standard metabolism), and how to implement the regression model. Here we use a hierarchical Bayesian framework to compare scaling models. We evaluate a suite of models that describe the scaling of organismal morphology, including Stress Similarity, Elastic Similarity, Fractal Similarity, and Geometric Similarity. Our approach is unique in that we evaluate multiple scaling predictions simultaneously, using three allometric datasets: one for temperate forest trees, a second for Sonoran Desert annual and perennial plants, and a third for leaves. The advantages of the hierarchical Bayesian approach include: (1) a multivariate regression analysis that explicitly accounts for any correlation remaining after the scaling relationships have been fit; (2) explicit modeling of the uncertainty associated with all quantities, including a measurement-error model for the independent variable; and (3) explicit accounting of uncertainty in the estimates of the scaling parameters, permitting direct inference about how those estimates compare to hypothesized values.
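
To illustrate the kind of model described above, the sketch below fits a single log-log scaling relationship with a measurement-error model for the independent variable. It is a minimal, hypothetical example (simulated data, generic priors, and the PyMC library rather than the authors' actual implementation); the analysis in the abstract is multivariate and hierarchical across several traits and datasets.

import numpy as np
import pymc as pm

rng = np.random.default_rng(1)

# Simulated allometric data on log-log axes (hypothetical values):
# a true exponent of 2/3 mimics the elastic-similarity prediction for
# height vs. diameter, with observation noise on both variables.
n = 200
log_d_true = rng.normal(0.0, 1.0, n)                 # latent log diameter
log_h_obs = 0.5 + (2.0 / 3.0) * log_d_true + rng.normal(0.0, 0.1, n)
log_d_obs = log_d_true + rng.normal(0.0, 0.05, n)    # measurement error in x

with pm.Model() as scaling_model:
    # Scaling parameters: intercept (log normalization) and exponent
    intercept = pm.Normal("intercept", 0.0, 10.0)
    exponent = pm.Normal("exponent", 0.0, 10.0)

    # Measurement-error model for the independent variable
    mu_x = pm.Normal("mu_x", 0.0, 10.0)
    sd_x = pm.HalfNormal("sd_x", 1.0)
    log_d_latent = pm.Normal("log_d_latent", mu_x, sd_x, shape=n)
    pm.Normal("log_d_obs", log_d_latent, 0.05, observed=log_d_obs)

    # Regression of log height on the latent (error-corrected) log diameter
    sd_y = pm.HalfNormal("sd_y", 1.0)
    pm.Normal("log_h_obs", intercept + exponent * log_d_latent, sd_y,
              observed=log_h_obs)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# The posterior for "exponent" can then be compared directly to hypothesized
# values (e.g., 2/3 under elastic similarity, 1 under geometric similarity).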

Results/Conclusions

Bayesian goodness-of-fit statistics, such as the Deviance Information Criterion, suggest that general, less flexible models (with fewer parameters) that make static predictions do not perform as well as models that incorporate biological flexibility (with more parameters). However, all of the static models (Elastic, Stress, Fractal, and Geometric Similarity) performed reasonably well. Importantly, our results also highlight the need for more rigorous tests of model assumptions, such as self-similarity in biomechanical or hydrodynamic properties. In summary, our analysis indicates a tradeoff among model generality, explanatory power, and biological relevance.
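
For reference, the Deviance Information Criterion used for model comparison follows the standard definition: DIC = Dbar + pD, where Dbar is the posterior mean deviance, pD = Dbar - D(thetabar) is the effective number of parameters, and D(thetabar) is the deviance evaluated at the posterior mean of the parameters. Lower DIC indicates better fit after penalizing model complexity, which is how the tradeoff between flexibility and parsimony noted above is quantified.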