COS 36-9
Re-examining the causes and meaning of the risk allocation hypothesis
Background/Question/Methods

The risk allocation hypothesis has inspired numerous studies seeking to understand how the temporal pattern of predation risk affects the foraging behavior of prey. One of the hypothesis's more interesting predictions is that as periods of low predation risk become rarer, prey should increase their foraging effort during both low- and high-risk periods. I use a combination of dynamic state variable models and genetic algorithms to examine how the various assumptions made in the risk allocation model affect its predictions, and I seek to identify the causes of the predicted patterns. I manipulate the frequency of predation risk environments, the reliability of the information prey receive about the current environment, and the prey fitness function. The risk allocation hypothesis is often interpreted as a claim about how prey cognition regarding the probabilities of future environmental states affects behavior in the present. To test this interpretation, I allow prey to evolve cognitive rules for how they weigh information received about the current state of the environment against their expectations about future states, and I test their behaviors in other environments.
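To illustrate the modeling approach, the dynamic state variable component can be sketched as a backward-induction (stochastic dynamic programming) computation over energy reserves and risk environment. This is a minimal sketch under illustrative assumptions: all parameter values, the i.i.d. low/high-risk environment process, the discretization of reserves and effort, and the threshold fitness value are placeholders, not the actual model used in the study.

```python
import numpy as np

# Minimal sketch of a dynamic state variable model of foraging under
# temporally varying predation risk. All parameters are illustrative.
T = 50                                # foraging periods in the season
X_MAX = 20                            # maximum energy reserves (states 0..X_MAX)
EFFORTS = np.linspace(0.0, 1.0, 11)   # candidate foraging efforts
MU = {"low": 0.005, "high": 0.05}     # per-period predation rate per unit effort
GAIN = 2                              # expected energy gain per unit effort
COST = 1                              # metabolic cost per period
P_LOW = 0.5                           # frequency of low-risk periods (assumed i.i.d.)

def terminal_fitness(x, threshold=10):
    """Threshold fitness: reserves must meet the threshold at season's end."""
    return 1.0 if x >= threshold else 0.0

def solve():
    # V[t, e, x]: expected terminal fitness at time t, environment e, reserves x
    V = np.zeros((T + 1, 2, X_MAX + 1))
    policy = np.zeros((T, 2, X_MAX + 1))
    V[T, :, :] = [terminal_fitness(x) for x in range(X_MAX + 1)]
    for t in range(T - 1, -1, -1):
        for e, env in enumerate(["low", "high"]):
            for x in range(X_MAX + 1):
                best, best_u = -1.0, 0.0
                for u in EFFORTS:
                    survive = 1.0 - MU[env] * u          # survive this period
                    x2 = min(X_MAX, max(0, x + int(round(GAIN * u)) - COST))
                    # expectation over next period's environment
                    ev = P_LOW * V[t + 1, 0, x2] + (1 - P_LOW) * V[t + 1, 1, x2]
                    if survive * ev > best:
                        best, best_u = survive * ev, u
                V[t, e, x] = best
                policy[t, e, x] = best_u
    return V, policy

V, policy = solve()
```

Re-solving with a smaller `P_LOW` and comparing the resulting `policy` arrays is one way to probe the risk allocation prediction that effort rises in both environments as low-risk periods become rarer.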
Results/Conclusions
I find that the risk allocation pattern occurs not only for a threshold fitness function, the fitness function used in the original version of the model, but also for a linear fitness function. However, the causes of the risk allocation pattern and the robustness of the results differ between the two fitness functions. When prey receive imperfect information about the current state of the environment, the strength of the risk allocation pattern is reduced, but the pattern can persist even with fairly unreliable information. I find that individuals that evolved cognitive rules under the same environmental conditions (and thus differ little from one another) produce the risk allocation pattern when exposed to different frequencies of predation risk levels, and that the strength of the risk allocation pattern is largely unaffected by the frequencies of predation risk levels under which individuals evolved. Thus, the predicted patterns of the risk allocation hypothesis are not driven by prey expectations about future states of the environment, but rather by their current energetic state and the number of foraging periods remaining in the season. I discuss how these findings affect experimental designs for testing risk allocation and predictions about prey behavior.
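The contrast between the two fitness functions compared above can be sketched as below. The threshold value and the reserve scale are illustrative assumptions; the study's actual functional forms may differ.

```python
# Sketch of the two terminal fitness functions compared in the results.
# Threshold value and reserve scale are illustrative assumptions.

def threshold_fitness(x, threshold=10):
    """Fitness 1 if terminal energy reserves meet the threshold, else 0."""
    return 1.0 if x >= threshold else 0.0

def linear_fitness(x, x_max=20):
    """Fitness increases linearly with terminal energy reserves."""
    return min(x, x_max) / x_max
```

Under the threshold function, only reaching the reserve cutoff matters, so foraging decisions hinge on whether the cutoff is still attainable; under the linear function, every additional unit of reserves adds fitness.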