Module Assessment
To assess the effects of our module on student learning and answer our
research questions, we administered assessment surveys to undergraduate
students before and after they completed the module (hereafter, pre- and
post-module surveys). In total, we tested the module in four undergraduate
courses at four different universities with N = 314 consenting students
and seven unique instructors (Table 2). Not all students completed every
question, so the number of responses varied by assessment question. All
students who completed the assessment were undergraduates in their
second year or later and were enrolled in General Ecology, Zoology, or
Freshwater Ecology courses. Because the module was taught across a
variety of institutions, course types, classroom formats, and student
experience levels, we were not able to control for these variables in
our design, and thus focused our analysis on the total pool of
consenting students who completed the module. Instructors were recruited
via personal communication, participation in conference workshops, or
through an email listserv. The module was taught both virtually and
in person (Table 2), though the majority of students (92%) completed
the module with in-person instruction.
As described above, the goal of the assessment was to measure the
effects of the module on students’ ability to understand foundational
ecological forecasting concepts (LO1) and uncertainty communication
(LO2; Figure 1, Table 3). We grouped the questions by LO: three questions
measured foundational ecological forecasting concepts (LO1) and five
measured uncertainty communication concepts (LO2; Figure 1).
The assessment included multiple-choice and qualitative, open-ended
questions (Table 3). Pre- and post-surveys were identical and
administered via an online, secure portal run by the Science Education
Research Center at Carleton College. All students and faculty consented
to participate in the study per our Institutional Review Board (IRB)
protocols (Virginia Tech IRB 19-669 and Carleton College IRB 19-20
065).
Analysis of assessment surveys
We analyzed multiple-choice and qualitative assessment questions from
the pre- and post-module surveys. Multiple-choice questions (Q1-2, 5-9)
were scored by whether students selected the correct answer. Qualitative
questions (Q3-4) were scored using a rubric developed by two
Macrosystems EDDIE coordinators, following a standardized two-step
process (see Appendix S1: Text S2 for methodology), based on the rubric
methodology of Moore et al. (2022a) and Miles et al. (2020). A detailed
description of the coding criteria for both Q3 and Q4 is included in
Appendix S1: Tables S5 and S6, respectively. We also screened answers to
Q4 (Table 3) for the presence of three keywords related to uncertainty
communication (‘icon’, ‘color’, and ‘forecast output/index’). We
recorded whether the keywords were present or absent in student
responses but did not consider responses correct unless students also
explained how the keywords were used to communicate uncertainty.
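For illustration, a minimal R sketch of this keyword screen is shown below; the data frame, column names, and regular expressions are assumptions for the sketch rather than the study's actual variables.

```r
# Illustrative keyword screen for Q4 responses; data and column names are
# hypothetical, not the study's actual variables.
q4 <- data.frame(
  student_id = 1:2,
  response = c("A color-coded icon could show forecast uncertainty.",
               "Report the forecast output as a single index.")
)

keywords <- c(icon = "icon", color = "colou?r", output = "forecast output|index")

# Flag presence/absence of each keyword (case-insensitive) in every response
for (k in names(keywords)) {
  q4[[paste0("has_", k)]] <- grepl(keywords[[k]], q4$response, ignore.case = TRUE)
}
```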
To determine the overall performance within and across LO1 (foundational
ecological forecasting) and LO2 (uncertainty communication), we
calculated each student's percent correct within each LO (i.e., one
score each for LO1 and LO2). For the two qualitative
questions, which included multiple open-ended responses, student
responses were considered ‘correct’ if they identified at least one
benefit of ecological forecasting (Q3) and at least one way of
communicating uncertainty (Q4).
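As a sketch, per-student LO scores of this form can be computed in R along the following lines; the question-to-LO assignments and column names here are illustrative, and the actual groupings follow Table 3.

```r
# Toy data: one row per student, 0/1 score per question, grouped by LO
scores <- data.frame(
  student = c("A", "B"),
  lo1_q1 = c(1, 0), lo1_q2 = c(1, 1), lo1_q3 = c(0, 1),
  lo2_q1 = c(1, 1), lo2_q2 = c(0, 1), lo2_q3 = c(1, 0),
  lo2_q4 = c(1, 1), lo2_q5 = c(0, 0)
)

# Percent correct within each LO for each student
lo1_cols <- grep("^lo1_", names(scores), value = TRUE)
lo2_cols <- grep("^lo2_", names(scores), value = TRUE)
scores$lo1_pct <- 100 * rowMeans(scores[lo1_cols])
scores$lo2_pct <- 100 * rowMeans(scores[lo2_cols])
```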
We used paired Wilcoxon signed-rank tests to analyze the differences
between pre- and post-survey responses on both multiple-choice and
qualitative questions, as well as on the grouped LO scores. Due to varying
class sizes, instruction, student experience levels, and teaching
modalities across the four institutions, all data were pooled and
analyzed together. Statistical significance was defined as p <
0.05. All analyses were conducted in R version 4.2.1 (R Core Team,
2022).
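For example, a paired comparison of this kind can be run in base R as sketched below; the score vectors are hypothetical and assume the same student ordering in the pre- and post-module surveys.

```r
# Hypothetical per-student scores on one question or LO, ordered identically
# in the pre- and post-module surveys
pre  <- c(0.33, 0.67, 0.33, 1.00, 0.67, 0.00)
post <- c(0.67, 0.67, 1.00, 1.00, 1.00, 0.33)

# Paired Wilcoxon signed-rank test of pre- vs. post-module scores;
# exact = FALSE uses the normal approximation, which handles tied values
wilcox.test(pre, post, paired = TRUE, exact = FALSE)
```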