Introduction
Communicating uncertainty in ecological models is a pressing challenge across ecology, motivating the need for new educational tools to train students in understanding and interpreting uncertainty in model predictions. Uncertainty in model predictions is inherent across ecological disciplines, including population and community ecology models (e.g., Halpern et al. 2006, Bird et al. 2021), disease ecology models (e.g., Briggs et al. 2009, McClintock et al. 2010), landscape ecology models (e.g., Wu et al. 2006, Lechner et al. 2012), and ecosystem models (e.g., Link et al. 2012, Melbourne-Thomas et al. 2012). Sources of uncertainty in ecological models include uncertainty in model parameter estimates, initial conditions, and the underlying processes being modeled (Dietze 2017). Together, these sources of uncertainty can have important implications for interpreting model results, as well as for their utility in decision-making (e.g., Berthet et al. 2016, Cheong et al. 2016). However, uncertainty is rarely communicated, or is communicated poorly (Boukhelifa and Duke 2009, Hullman 2020), hindering the use of model output for both advancing ecological understanding and informing decision-making (Joslyn and Savelli 2010, Milner-Gulland and Shea 2017). This is likely because uncertainty is a difficult concept for most individuals to understand (Belia et al. 2005), as well as to quantify mathematically and represent graphically (Spiegelhalter et al. 2011, Potter et al. 2012, Bonneau et al. 2015). Given low levels of visualization literacy in both the general and scientific populations (Maltese et al. 2015), educational tools to improve communication of ecological model uncertainty are critically needed.
Ecological forecasting provides a powerful framework for teaching students uncertainty communication and data science skills, which are increasingly needed for 21st century careers (Rieley 2018, Vought and Droegemeier 2020). Ecological forecasts, which are future, out-of-sample model predictions of ecological variables with quantified uncertainty (Table 1), can serve as useful decision support tools for a variety of users (Tulloch et al. 2020, Bodner et al. 2021). Because of the utility of forecasts in both informing decision-making and testing ecological theory (Dietze et al. 2018, Lewis et al. 2022a, Carey et al. 2022), ecological forecasting is a rapidly growing sub-field of ecology (Lewis et al. 2022b).
Many near-term (days to decades ahead) ecological forecasts are developed using the iterative forecasting cycle (Lewis et al. 2022b), which has the potential to teach students foundational ecological forecasting concepts (Moore et al. 2022a). The iterative, near-term forecasting cycle consists of multiple steps, which parallel the scientific method: 1) make a hypothesis-based prediction about an ecological phenomenon, 2) develop a model that represents that hypothesis, 3) quantify uncertainty around predictions, 4) generate a forecast with uncertainty, 5) communicate the forecast to users, 6) assess the forecast against observations, and 7) update the forecast with new data (Dietze et al. 2018, Moore et al. 2022a). Altogether, teaching this iterative framework in ecology courses could improve student understanding of complex ecological concepts (Selutin and Lebedeva 2017), as well as uncertainty visualization skills.
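To make these steps concrete, a minimal sketch in R (the language of the module’s Shiny application) of one pass through the cycle is shown below; the synthetic observations, random-walk model, and ensemble settings are illustrative assumptions, not components of any published forecasting system.

```r
# Minimal sketch of one pass through the iterative forecasting cycle,
# assuming a synthetic daily chlorophyll-a series and a random-walk model;
# all names and values here are illustrative, not from a real forecast system.
set.seed(1)
obs <- 20 + cumsum(rnorm(30, mean = 0, sd = 0.5))  # synthetic observations (days 1-30)

n_members <- 500   # ensemble size
horizon   <- 7     # forecast horizon (days)
today     <- 30

# Steps 1-2: hypothesis and model (tomorrow's state resembles today's)
# Step 3: quantify uncertainty (process error estimated from past day-to-day change)
sigma <- sd(diff(obs))

# Step 4: generate a probabilistic forecast as an ensemble
forecast <- matrix(NA, nrow = n_members, ncol = horizon)
forecast[, 1] <- obs[today] + rnorm(n_members, 0, sigma)
for (h in 2:horizon) {
  forecast[, h] <- forecast[, h - 1] + rnorm(n_members, 0, sigma)
}

# Step 5: communicate, e.g., the median and a 90% predictive interval
apply(forecast, 2, quantile, probs = c(0.05, 0.5, 0.95))

# Steps 6-7: once day 31 is observed, compare it to forecast[, 1] (assessment)
# and repeat with the new observation appended to obs (updating).
```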
Communicating and interpreting ecological forecast visualizations presents several unique challenges. First, forecasts are inherently uncertain, yet they are needed to guide environmental management decisions, making it critical to properly communicate the uncertainty associated with forecast predictions (Berthet et al. 2016). Second, while there are numerous studies on visualizing data uncertainty (Olston and Mackinlay 2002, Potter et al. 2012, Smith Mason et al. 2017, Wiggins et al. 2018), little consensus has emerged on the best approach for visualizing forecast uncertainty for both end user comprehension and decision support. Third, it is well documented that different approaches to visualizing uncertainty result in varying levels of comprehension by users (Ramos et al. 2013, Cheong et al. 2016, McKenzie et al. 2016, Kinkeldey et al. 2017). Altogether, these challenges emphasize the need for thoughtful representation of uncertainty in forecasts, as well as for educational materials that teach students how to interpret and develop forecast visualizations for decision support applications.
Several pedagogical methods may be useful for incorporating uncertainty visualization skills into introductory ecological forecasting education. First, having students create their own visualizations has been shown to improve data visualization literacy (Huron et al. 2014, Börner et al. 2016, 2019, Alper et al. 2017). Second, teaching students how to produce a range of visualizations for the same forecast using a toolbox of different visualization styles may enable them to communicate their forecast to a broader range of users, as well as adapt their visualizations for different user needs. For example, teaching students how to communicate uncertainty in a single forecast using multiple methods (e.g., representing uncertainty with numbers, words, icons, and graphs such as maps or time series; sensu Spiegelhalter et al. 2011) can help illustrate the multitude of ways uncertainty can be visualized and build students’ ability to interpret diverse forecast visualizations. Third, teaching students to communicate forecast uncertainty using thresholds that are directly meaningful for decision-making draws on an approach with proven utility in uncertainty communication (Kox et al. 2018). For example, communicating a forecast of the abundance of an endangered species as a forecast index (e.g., the likelihood of encountering that endangered species at a site) may be a more effective communication style for some forecast users because it places forecast output in a decision-making context (see Table 1 for definitions). Fourth, emphasizing the importance of identifying forecast users, and specifically the decisions that could be made with forecasts, could increase the relevance of ecological forecasting for students. Presenting ecological concepts in culturally and societally relevant contexts is known to stimulate student engagement (Cid and Pouyat 2013, Vance-Chalcraft and Jelks 2022, Henri et al. 2022), and can lead to more collaborative and effective research and management broadly within the scientific community (Armitage et al. 2009, Cvitanovic et al. 2013).
In addition to the pedagogical approaches above, integrating the concepts of decision science (e.g., through structured decision-making or decision use cases, see Table 1 for definitions; Clemen and Reilly 2004, Gregory et al. 2012) may help students better understand the needs of different forecast users, and correspondingly lead to improved forecast visualizations. Current ecological forecasting teaching materials have largely been methodology-focused, omitting application and communication components (Willson et al. 2022). This focus on methods skill-building, while very valuable, may fail to engage introductory students who have yet to master the computational and quantitative skills needed for forecast development.
To introduce students to key concepts in uncertainty visualization and communication in the context of using near-term ecological forecasts for real-world decision-making, we developed a 3-hour teaching module, “Using Ecological Forecasts to Guide Decision-Making,” as part of the Macrosystems EDDIE (Environmental Data-Driven Inquiry and Exploration; MacrosystemsEDDIE.org) program. The module entailed a short introductory lecture; three scaffolded, hands-on forecasting activities embedded within an online interactive tool built using an R Shiny application (Chang et al. 2022); and discussion questions. Instructors were also provided with a pre-module student handout, which included suggested readings and discussion questions to provide students with background information before beginning hands-on module activities. To test the effectiveness of our interactive teaching module on students’ ability to learn uncertainty communication and foundational ecological forecasting concepts within a decision support framework, we conducted pre- and post-module assessment surveys. We analyzed the student assessment data to determine how completion of the module affected: 1) students’ ability to interpret and communicate uncertainty in forecast visualizations, and 2) students’ understanding of foundational ecological forecasting concepts.
Methods
Module Overview
We designed Macrosystems EDDIE (Environmental Data-Driven Inquiry and Exploration; MacrosystemsEDDIE.org) Module 8 “Using Ecological Forecasts to Guide Decision-Making” to teach students uncertainty communication and foundational ecological forecasting concepts within a decision support framework. This is the 8th module in the Macrosystems EDDIE teaching module series (Carey et al. 2020, Hounshell et al. 2021, Moore et al. 2022a). Specifically, the module activities encompassed a range of decision support concepts and applications, such as structured decision-making through role-playing and identification of forecast user needs. The version of the module used for this study is archived and available for download from Woelmer et al. (2022a, 2022b). All module materials are publicly available for use and are iteratively updated following user feedback; the most recent version of the module can be accessed at: https://serc.carleton.edu/eddie/teaching_materials/modules/module8.html. Our assessment focused on measuring student understanding of uncertainty communication and foundational ecological forecasting as two important yet currently overlooked concepts within undergraduate ecology curricula (Willson et al. 2022).
This module, following the Macrosystems EDDIE pedagogical framework (Carey et al. 2020), consisted of a suite of three self-contained, scaffolded activities (Activities A, B, and C) which can be adapted to meet the needs of individual lecture or laboratory classes. The three activities taught students different ways to visualize forecasts (Activity A); how uncertainty in forecast visualizations can influence decision-making (Activity B); and how to create visualizations of probabilistic ecological forecasts tailored to a specific user (Activity C). All Macrosystems EDDIE modules follow the 5E Instructional Model (Bybee et al. 2006), which uses activities to enable engagement, exploration, explanation, elaboration, and evaluation. This module, as well as other Macrosystems EDDIE modules, is primarily geared towards the undergraduate level but can also be applied in graduate-level courses (e.g., Moore et al. 2022a).
Because uncertainty interpretation and communication are not commonly integrated into undergraduate ecology education (Willson et al. 2022), we introduced students to a broad suite of methods currently applied in visualization and decision science within the module. These methods include: 1) creating one’s own visualizations (Huron et al. 2014, Börner et al. 2016, 2019, Alper et al. 2017), 2) visualizing uncertainty in multiple ways (sensu Spiegelhalter et al. 2011), 3) using meaningful thresholds for decision-making (Kox et al. 2018), 4) identifying forecast users to increase engagement and relevance (Cid and Pouyat 2013, Henri et al. 2022, Vance-Chalcraft and Osborne Jelks 2022), and 5) considering forecast user decision needs to guide visualization development (Raftery 2016).
Our module assessment (described below) focused on two learning objectives (LOs) taught throughout the module activities. The two LOs were: LO1) describe what ecological forecasts are and how they are used (Activity A, B, C); and LO2) identify different ways to represent uncertainty in a visualization (Activity A, B, C). In addition to LO1 and LO2, this module included four additional LOs for instructors: LO3) identify the components of a structured decision (Activity B); LO4) discuss how forecast uncertainty relates to decision-making (Activity A, B, C); LO5) match forecast user needs with different levels of forecasting decision support (Activity A, C); and LO6) create visualizations tailored to specific forecast users (Activity C). The activities within the module were designed to meet all six LOs, with several activities targeting multiple LOs (Appendix S1: Table S1). Our focus on LO1 and LO2 for the assessment was motivated by the importance of increasing representation of foundational ecological forecasting and uncertainty communication concepts, respectively, in undergraduate ecology curricula (Appendix S1: Table S1).
Detailed module description
The module included an introductory PowerPoint lecture, a suite of three activities embedded within an R Shiny application accessed in a web browser, and discussion questions. First, the PowerPoint presentation (~20 minutes) introduced students to the key concepts taught in the module, including a general introduction to ecological forecasting and a case study of an ecological forecasting application with visualization examples. Instructor notes for each slide were provided, as well as an ‘Introduction to R Shiny’ guide for students and instructors who were not previously familiar with using R Shiny applications.
For the case study within the introductory PowerPoint lecture, students were given an example of a forecast of the future distribution of the invasive spongy moth (Lymantria dispar) and introduced to different types of forecast users and corresponding decisions that different forecast users could make, as well as different ways of visualizing the same forecast for individual forecast users’ decision use cases (Table 1). For example, a homeowner deciding whether to treat the oak trees on their property to prevent spongy moth invasion might benefit from a forecast index visualizing the percent likelihood of spongy moth colonization in a particular location. In contrast, a natural resource manager deciding where to prioritize conservation efforts for a native competitor of spongy moth might prefer a map of spongy moth densities and associated uncertainty across the region. Through the case study, students were shown a range of visualization types that can be altered to suit different decision use cases. Students were taught how uncertainty can be represented and communicated using several methods, including numbers, words, icons, and graphs. For example, using the same forecast, uncertainty could be communicated with numbers (‘22% chance of a spongy moth outbreak’), words (‘low risk of spongy moth outbreak’), an icon (a ‘traffic light’ symbol showing ‘green’ for low risk), or a graph (a map of the likelihood of an outbreak across a region) (Appendix S1: Figure S2). Within these four categories, students were taught how to communicate forecast output (e.g., the density of spongy moths in a given area; see Table 1 for an example), which uses output directly from a forecast model. In addition, they were taught to communicate using a forecast index, which is forecast output translated into an index based on a threshold that is meaningful for decision-making (e.g., the likelihood of a spongy moth outbreak; Table 1, Appendix S1: Figure S2).
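To illustrate the distinction between forecast output and a forecast index, the brief R sketch below converts a hypothetical ensemble forecast of spongy moth density into a percent likelihood of exceeding an outbreak threshold, and then into the corresponding words and icon category; the distribution, threshold, and category cut-offs are assumptions for illustration, not values used in the module.

```r
# Sketch of translating raw forecast output into a forecast index; the density
# distribution, outbreak threshold, and risk categories are hypothetical.
set.seed(42)
density_forecast <- rlnorm(1000, meanlog = log(8), sdlog = 0.6)  # moths per trap

outbreak_threshold <- 20   # density above which we assume an "outbreak"

# Forecast output: a summary of the model output itself
median(density_forecast)

# Forecast index as a number: percent likelihood of exceeding the threshold
p_outbreak <- 100 * mean(density_forecast > outbreak_threshold)

# Forecast index as words and an icon color: map the likelihood to categories
risk <- cut(p_outbreak, breaks = c(-Inf, 25, 75, Inf),
            labels = c("low risk (green)", "moderate risk (yellow)", "high risk (red)"))
paste0(round(p_outbreak), "% chance of a spongy moth outbreak: ", risk)
```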
Second, following the presentation, students were instructed to access the module via the R Shiny application and work through Activities A, B, and C with a partner. R Shiny is an interactive tool built within the R coding environment that allows users to interact with complex data through a simple web browser interface (Chang et al. 2021, Kasprzak et al. 2021), increasing ease of use. Applications developed using R Shiny have proven effective for teaching students challenging topics in a variety of educational settings (e.g., Fawcett 2018, Moore et al. 2022a). All module activities were designed to meet one or more of the module LOs (Appendix S1: Table S1).
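For readers unfamiliar with R Shiny, the minimal app sketched below conveys the flavor of such an interface: a user chooses how uncertainty is displayed and the app re-draws the same synthetic chlorophyll-a forecast accordingly. This is an illustrative sketch, not the module’s code, and the ensemble values and interface labels are assumptions.

```r
# Minimal illustrative Shiny app: switch between two uncertainty displays
# for the same synthetic 7-day chlorophyll-a ensemble forecast (200 members).
library(shiny)

set.seed(1)
horizon <- 1:7
ens <- t(replicate(200, 20 + cumsum(rnorm(7, 0, 1.5))))  # rows = members

ui <- fluidPage(
  selectInput("style", "Uncertainty visualization:",
              choices = c("90% interval", "ensemble members")),
  plotOutput("forecast_plot")
)

server <- function(input, output) {
  output$forecast_plot <- renderPlot({
    med <- apply(ens, 2, median)
    plot(horizon, med, type = "l", lwd = 2, ylim = range(ens),
         xlab = "Days ahead", ylab = "Chlorophyll-a (ug/L)")
    if (input$style == "90% interval") {
      lo <- apply(ens, 2, quantile, 0.05)
      hi <- apply(ens, 2, quantile, 0.95)
      polygon(c(horizon, rev(horizon)), c(lo, rev(hi)),
              col = rgb(0, 0, 1, 0.2), border = NA)
    } else {
      for (i in 1:50) lines(horizon, ens[i, ], col = rgb(0, 0, 0, 0.1))
    }
    lines(horizon, med, lwd = 2)  # re-draw the median on top
  })
}

shinyApp(ui, server)
```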
Within the Shiny app, students first completed Activity A, “Explore ecological forecast visualizations and decision use,” in which they individually selected an ecological forecast from a curated list of current forecasting systems (Appendix S1: Table S2), answered several embedded questions about how their selected forecast is visualized and how it can be used, and then compared their answers with their partner. Through these activities, students directly addressed LO1 (‘Describe what ecological forecasts are and how they are used’) by analyzing forecasts and identifying forecast users, and LO2 (‘Identify different ways to represent uncertainty in a visualization’) by analyzing how, or whether, their forecast visualizes uncertainty.
In Activity B, “Make decisions using an ecological forecast,” students completed an in-depth case study in which they role-played as resource managers and made decisions about optimizing multiple objectives using two different forecast visualizations (Figure 2A). The use of role-playing as an active form of learning has documented success in education, especially in science education (Howes and Cruz 2009), but has not been tested in ecological forecasting education specifically. Students were given a case study in which they were asked to role-play as water managers and make decisions about whether or not they should allow a swimming race in a drinking water reservoir given different forecasts of potentially toxic algal blooms occurring at the time of the race (see Appendix S1: Text S1 for a full description of the case study scenario).
As part of Activity B, students were taught to use structured decision-making techniques to define and apply their management objectives for the drinking water case study. Specifically, students were taught the PrOACT structured decision-making tool (see Table 1 for definition; e.g., Hammond et al. 2002, Hemming et al. 2022). With the goal of optimizing four different management objectives identified using the PrOACT tool (Figure 2A.3), students created hypotheses about how to manage the drinking water reservoir each day as the forecasts were iteratively updated over time (Figure 2A.1; Appendix S1: Figure S3). They completed this exercise twice, using forecast visualizations that represented uncertainty in two different ways (Figure 2A.1, 2A.2). Students were encouraged to work through this activity independently and consult with their partner as needed. Finally, students answered questions about how different forecast visualizations influenced their ability to make decisions about managing the reservoir. The culminating discussion of Activity B asked students to discuss how they might improve or alter the visualizations for their decision needs as a water resource manager. Students addressed LO1 in Activity B by using ecological forecasts to make decisions, and LO2 by making decisions using different types of uncertainty visualizations.
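One way to make the trade-off step of PrOACT tangible is a weighted consequence table; the R sketch below scores two alternatives against four management objectives, with all objectives, weights, and scores invented for illustration rather than taken from the module’s case study.

```r
# Minimal sketch of a weighted consequence table in the spirit of PrOACT;
# objectives, weights, and scores are hypothetical illustrations only.
objectives <- c("safe drinking water", "recreation revenue",
                "swimmer health", "public trust")
weights    <- c(0.4, 0.2, 0.3, 0.1)   # relative importance (sums to 1)

# Consequence scores (0 = worst outcome, 1 = best) for each alternative,
# given, say, a forecast of a 30% chance of a harmful algal bloom on race day
consequences <- rbind(hold_race   = c(0.7, 1.0, 0.5, 0.6),
                      cancel_race = c(1.0, 0.2, 1.0, 0.8))
colnames(consequences) <- objectives

# Trade-offs: weighted score for each alternative (higher is preferred)
rowSums(sweep(consequences, 2, weights, `*`))
```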
In Activity C, “Create a customized visualization of an ecological forecast for a forecast user,” students worked individually to choose a forecast user other than a drinking water manager (e.g., a swimmer) for the same drinking water forecast they used in Activity B (Figure 2B-C). Students identified a decision to be made by their forecast user (e.g., whether or not to go swimming in a lake based on an algae threshold). Based on the decision they identified, students created a customized forecast visualization for their user. Additionally, students explored the underlying forecast distribution by examining the mean, median, and upper ranges of the forecast to better understand its uncertainty. Lastly, students compared their visualizations with their partner, who chose a different forecast user. Activity C advanced student understanding of LO1 by connecting the forecast to a variety of potential users. By comparing across forecast users, students were also encouraged to think about how different users might benefit from different types of visualizations (Figure 2B and 2C), contributing to their understanding of LO2.
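The type of exploration performed in Activity C can be approximated in a few lines of R, as sketched below; the chlorophyll-a ensemble and the swimming threshold are hypothetical stand-ins for the module’s forecast.

```r
# Sketch of exploring a forecast distribution for a specific user (a swimmer);
# the ensemble and the 20 ug/L "too much algae to swim" threshold are assumed.
set.seed(7)
race_day <- rlnorm(1000, meanlog = log(15), sdlog = 0.4)  # chlorophyll-a, ug/L

# Explore the underlying distribution: mean, median, and an upper range
c(mean = mean(race_day), median = median(race_day),
  upper_95 = as.numeric(quantile(race_day, 0.95)))

# A user-tailored figure: the forecast distribution with the swimmer's threshold
hist(race_day, breaks = 30, main = "Race-day chlorophyll-a forecast",
     xlab = "Chlorophyll-a (ug/L)")
abline(v = 20, lwd = 2, lty = 2)   # assumed swimming threshold
mtext(paste0(round(100 * mean(race_day > 20)),
             "% of ensemble members above threshold"))
```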
At the end of Activity C (as well as between activities, time permitting), instructors were guided to bring the student pairs back together for a full group discussion and to answer any remaining questions. A list of discussion questions for instructors to use as prompts was provided for each activity in the Instructor Manual. For example, to recap Activity A, instructors could ask students to discuss how they were able to tell whether visualizations included uncertainty and whether some types of visualizations made it more or less difficult to recognize and interpret forecast uncertainty. For Activity B, instructors could ask students to present their decisions in the case study and explain how trade-offs among their management objectives influenced their decision-making. Lastly, for Activity C, instructors could ask students to discuss the visualization that they chose for their forecast user and how it related to that user’s decision needs, as well as what they would do if they had to create a visualization that served multiple forecast users’ needs.