Box 1: Relationship between ‘evasion landscapes’ and
‘landscapes of fear’
Each prey individual’s habitat domain can be characterized by an
‘evasion landscape’, or spatial variability in its probability of
evading a predator during an encounter situation (e.g., as a function of
background features, proximity to refugia, terrain). Upon perceiving
predation risk (from background to immediate), prey individuals whose
evasion landscapes are heterogeneous during a given time period may
therefore move to locations that increase their likelihood of evading a
predator (e.g., by hiding successfully). These locations would generally
correspond to regions of the prey individual’s ‘landscape of fear’ (LOF,
the mapping of predation cost of foraging to the physical landscape;
Laundré et al. 2001; van der Merwe & Brown 2007) where its perceived
predation cost of foraging is relatively low, at least with respect to
the costs associated with the conditional probability of capture given
an encounter. In other words, all else being equal, we would expect peaks in
the topographic visualization of the predation cost of foraging (LOF) to
tend to match areas of the evasion landscape where the prey individual
has relatively low probability of evading a predator. Note, however,
that the true predation cost of foraging at any location on the LOF is
complex. It is the product of the risk of predation and the marginal
rate of substitution of energy for survivorship (Brown 1988, 1992). The
risk of predation itself is a product of the probability of encountering
a predator (which depends on where an individual is on the ‘encounter
landscape’) and the conditional probability of capture given an
encounter (which depends on where an individual is on the evasion
landscape and its means of resistance, if any). Both of these can be
altered by the prey’s risk management strategies (time allocation and
vigilance behavior) and the derring-do (willingness to risk injury to
improve the chance of capturing prey) of the predator (Brown et al. 2016).
Thus, for any prey species, measuring both the encounter landscape and the
inverse of the evasion landscape assists in delineating the LOF (Gaynor et al. 2019).
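Put in symbols (a shorthand sketch of the decomposition above, following
Brown 1988, 1992; the notation is ours rather than that of the cited papers),
the predation cost of foraging at a location x on the LOF can be written as

  P(x) = μ(x) × F / (∂F/∂e),   with   μ(x) = m(x) × c(x),

where μ(x) is the risk of predation at x, m(x) is the probability of
encountering a predator there (the encounter landscape), c(x) is the
conditional probability of capture given an encounter (the complement of the
evasion landscape), F is the forager's expected fitness if it survives, and
∂F/∂e is the marginal fitness value of energy, so that F/(∂F/∂e) is the
marginal rate of substitution of energy for survivorship.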
Box 2: State-dependent foraging games between gerbil prey and
owl predators
The interaction of predator and prey is a state-dependent foraging game
where the prey must manage risk using time allocation and vigilance
(Brown 1999), and the predators must manage fear: as prey become more
afraid, they become less catchable. The predator’s tools include time
allocation and derring-do; a more daring predator is more willing to
risk injury in order to capture its prey (Brown et al . 2016).
Here we focus on Allenby’s gerbil (Gerbillus andersoni allenbyi),
a nocturnal rodent of sand dunes in the Middle East, and its barn owl
(Tyto alba) predator. Within an outdoor vivarium (17 × 34 × 4.5 m), it is
possible to manipulate the energetic states, and subsequently quantify the
foraging behavior, of both gerbils and owls (Kotler et al. 2004).
In theory, a forager should exploit depletable resource patches until
the benefits of its harvest rate no longer exceed the sum of energetic,
predation, and missed opportunity costs of foraging (Brown 1988). The
food density at which this occurs is called the giving-up density (GUD)
and is a behavioral indicator of foraging costs for that context.
Energetic costs of foraging and risk factors should all lead to higher
GUDs, and do so in gerbils (Kotler et al. 1991; Kotler et al. 1993). The
predation cost is highly state-dependent as it equals
predation risk multiplied by the survivor’s fitness divided by the
marginal value of the food. Hungry animals and those in a low state or
with poor prospects should be less fearful and have lower GUDs.
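In symbols (a sketch of the patch-use rule just described, following Brown
1988; the notation is ours), a forager should quit a patch once its harvest
rate H, which falls as the patch depletes, satisfies

  H = C + P + MOC,   with   P = μ × F / (∂F/∂e),

where C is the energetic cost of foraging, MOC the missed opportunity cost,
μ the risk of predation, F the survivor's fitness, and ∂F/∂e the marginal
fitness value of food; the GUD is the food density at which this equality is
reached. A well-fed forager has a small marginal value of food and much
fitness to protect, so its P, and hence its GUD, is high; a hungry forager
shows the reverse, as stated above.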
In vivarium experiments, gerbils that received supplemental food,
relative to those that did not, used food patches less intensively, had
higher GUDs, and avoided risky open microhabitat (Kotler 1997; Kotler et al. 2004). These effects carried over into the subsequent
night when no gerbils received supplemental food. Gerbils that had
received supplemental food previously responded more strongly to owls
than those that did not (Kotler 1997). These results show how a higher
energetic state acts to magnify foraging costs and alter behaviors,
ultimately leading to diminished risk taking during phase two.
Tracking gerbil foraging over the course of lunar cycles revealed the
dynamic nature of risk management and feedbacks with state (Kotler et al. 2010). Starting at new moon, as the moon waxed, gerbils
increased vigilance to counter the greater ease of predator encounter,
and reduced their time allocation to limit their exposure to predators;
they sacrificed state to buy safety. By full moon, the gerbils increased
vigilance even further, but also increased foraging time; they defended state to
guard against starvation. As the moon waned, gerbils decreased vigilance
and increased foraging time to rebuild state. By new moon, vigilance was
at a minimum, and foraging time began to decline; state had been rebuilt
in time for another cycle (Kotler et al. 2010).
Prey foraging behavior also depends on the interaction between the state
of the prey and that of the predator. Using vivarium experiments, Berger-Tal
& Kotler (2010) showed that hungry barn owls (Tyto alba) were
4-7 times more active than their satiated counterparts. Gerbils
responded to this increase in predator activity by visiting fewer
patches and leaving them at higher GUDs, but only when in high energetic
state (Berger-Tal et al. 2010).
Predators, too, consider their state as well as that of their prey.
Hungry owls, for example, showed derring-do by performing dangerous attack
maneuvers (plunging into areas with stiff, spiky experimental shrubs) more
than twice as often as well-fed conspecifics (Embar et al. 2014a). Moreover,
owls chose between well-fed and hungry gerbils (Embar et al. 2014b). In spring when gerbils were
reproductive, owls favored well-fed gerbils; in the summer when they
were months away from breeding, owls favored hungry gerbils. That may
seem odd, but well-fed gerbils are more active in spring, when surplus energy
can be devoted to offspring, and hungry gerbils are more active than well-fed
gerbils in summer when survivorship to the next reproductive season is
paramount. Owls, when given the choice between gerbils with fleas and
gerbils without, chose the more active flea-free gerbils (Raveh 2018).
In all cases, then, owls sought more active prey.
In summary, foraging games between gerbils and their predators are
contingent on environmental factors such as microhabitat and moon phase
and biotic factors such as the energetic states of predators and prey.
Prey manage risk, predators manage fear, and these actions feed back
between the players and the environment throughout each night (Kotler et al.
2002), across moon phases (Kotler et al. 2002, 2010), and over the seasons
(Kotler et al. 2004).
Box 3: The timing of predation risk as an emergent driver of
contingency in NCEs
How prey invest in defense at any given time during phase two (prey
response to perceived risk) may depend on the temporal pattern of
intrinsic predation risk. Namely, according to the risk allocation
hypothesis, defensive investment should be greatest in response to
transient pulses of high risk against a background of relative safety
(given that periods during which safe feeding can occur should soon
return), and reduced when pulses of safety occur against a background of
elevated danger (Lima & Bednekoff 1999). By implication, prey in
systems where predation danger is highly punctuated may be able to
compensate for heavy anti-predator investment when predators are most
active (and/or lethal) by feeding during periods of predator inactivity.
For example, vicuñas (Vicugna vicugna) exploit puma (Puma concolor) downtimes
(during the day) to use their feeding grounds, but avoid these densely
vegetated areas when low light levels and ample stalking cover combine to
enhance puma lethality (Smith et al. 2019). Under these circumstances, demographic risk effects experienced
by prey populations and the potential for prey to transmit indirect NCEs
during phase three may be limited.
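As a rough numerical illustration of this allocation logic (a toy sketch of
our own, not the model of Lima & Bednekoff 1999; the function, the
fixed-intake constraint, and the assumption that safe periods are used for
feeding first are all illustrative assumptions), consider a prey animal that
must accumulate a fixed amount of feeding over a period in which a proportion
p of the time is dangerous:

# Toy sketch (our simplification, not Lima & Bednekoff's 1999 model): prey must
# meet a fixed intake requirement; feeding is done in safe periods first, and
# spills into dangerous periods only when danger becomes chronic.

def feeding_allocation(p_high_risk, required_intake):
    """Return (effort_low_risk, effort_high_risk), each bounded on [0, 1].

    p_high_risk     -- proportion of time at high risk (0-1)
    required_intake -- total feeding needed, as a fraction of the full period
    """
    safe_time = 1.0 - p_high_risk
    # Feed as much as necessary (up to full effort) while it is safe...
    effort_low = min(1.0, required_intake / safe_time) if safe_time > 0 else 0.0
    # ...and make up any shortfall during the risky periods.
    shortfall = max(0.0, required_intake - safe_time)
    effort_high = min(1.0, shortfall / p_high_risk) if p_high_risk > 0 else 0.0
    return effort_low, effort_high

if __name__ == "__main__":
    for p in (0.1, 0.5, 0.9):  # pulsed danger vs. chronic danger
        low, high = feeding_allocation(p, required_intake=0.6)
        print(f"p(high risk)={p:.1f}: effort when safe={low:.2f}, when risky={high:.2f}")

Across p = 0.1, 0.5, and 0.9, this sketch recovers the qualitative prediction
of the hypothesis: feeding effort (and thus reduced defensive investment)
spills into dangerous periods only as danger becomes chronic, whereas brief
pulses of danger can be sat out entirely.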
To date, empirical support for the risk allocation hypothesis has been
mixed (Ferrari et al. 2009), perhaps in part because prey
condition in some assessments has been high enough to allow for
continuous anti-predator investment even when risk is chronic (Matassa
& Trussell 2014), or because some focal prey species were not given
sufficient time to learn the risk regime (Moll et al. 2017). Our
review offers an additional, non-mutually exclusive explanation. Namely,
the temporal pattern of intrinsic risk experienced by a prey individual
is an emergent outcome of the interaction between the properties (e.g.,
activity) of the predator(s) by which it is threatened and the setting in
which an encounter might take place. Moreover, as outlined earlier, the
response of any prey individual/species to perceived intrinsic danger
cues during phase two hinges on its own properties (e.g., escape
tactics). Thus, proper quantification of the temporal pattern of risk
and how prey should respond to perceived stimuli in any situation
requires explicit consideration of each of these drivers of context
dependence, as well as their interplay. It is possible that, lacking the
capacity to be this comprehensive, some prior tests of the risk
allocation hypothesis may have misrepresented the temporal pattern of
risk. We view studies exploring this possibility as a fruitful line of
inquiry. In the meantime, a recent investigation by Dröge et al.
(2018) offers a path forward, at least in terms of accounting for
predator properties. Namely, their ability to explain vigilance
responses by African ungulates was greatest when immediate risk stimuli
(predator proximity) were considered in relation to patterns of
long-term risk associated specifically with the approaching predator
species rather than the predator guild overall.