Table \ref{396036} presents a simplified view of the sample data. The full dataset, with further columns providing course codes, all known course dates, and data sources, is available via a Zenodo archive \cite{tana}.
Before discussing Table \ref{396036}, I want to make the following caveats transparent.
The courses discovered are likely a small sample of the true US and UK population, given the significant constraints and the individualistic, ad-hoc nature of my search and the technology used: I could only encounter material that is online, unrestricted in access, English-language, and sufficiently keyword-matched and search-engine-ranked. Furthermore, the sampling is affected by selection bias: courses with a more discoverable digital footprint, for a given search strategy, may correlate with variables such as 'success', institutional resources, and, of course, recency. Another caveat is the subjectivity of scope: courses were excluded when judged to have either insufficient or excessive computational data content, and/or an imbalance in transferring computational data practices to HSSA rather than STEM contexts. A final caveat is that the course data collected represent (publicly stated) intentions rather than, for example, actual execution or evaluation.
With these caveats in mind, we examine Table \ref{396036}, the first set of data in support of the article's case for Jupyter notebook adoption.
Of the twenty courses in the sample, eighteen are at undergraduate level and two are from secondary schooling. Fourteen courses are US-based, with the remaining six from the UK. In terms of the programming language on which the most recent offering of each course is based, Python is the dominant choice in this sample; the only exceptions are Introductory Applied Econometrics (Index 1), which uses R, and Interdisciplinary Computer Modeling (Index 17), which uses Julia.
The final eight columns under the heading "Computational Data Practices" provide more relevant insight into this sample. This matrix contains boolean values, created manually by reviewing the most granular instructional material available, indicating which of eight categories of computational data tools and methods ubiquitous in STEM are covered within each course syllabus using Jupyter notebooks. Through this subset of the data, simply by inspecting each record's combination of Course Name, Department, and the Computational Data Practices marked, we start to build a picture of the interdisciplinary HSSA Jupyter notebook use-cases that are considered possible for students at this level in these regions.
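To make the structure of this boolean matrix concrete, the short sketch below shows how the full dataset in the Zenodo archive \cite{tana} could be tallied in a few lines of Python. It is illustrative only: the file name and the eight practice column labels are assumptions standing in for the actual headers in the archive.

\begin{verbatim}
# Minimal illustrative sketch; "courses.csv" and the column labels below are
# hypothetical placeholders, not the actual headers of the Zenodo dataset.
import pandas as pd

df = pd.read_csv("courses.csv")

# Assumed labels for the eight "Computational Data Practices" columns (0/1).
practice_cols = [
    "Data Acquisition", "Data Cleaning", "Exploratory Analysis",
    "Visualisation", "Statistics", "Machine Learning",
    "Simulation/Modelling", "Text/NLP",
]

# How many sampled courses cover each practice in a notebook context.
coverage = df[practice_cols].sum().sort_values(ascending=False)
print(coverage)

# How many practices each course covers, alongside identifying fields.
df["practices_covered"] = df[practice_cols].sum(axis=1)
print(df[["Course Name", "Department", "practices_covered"]])
\end{verbatim}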
Referencing data outside of Table \ref{396036}, I argue that many of the notebook use-cases can be seen as novel and pathbreaking given the stage of education. At UC Berkeley, the undergraduates taking Data, Law & Prediction (Index 14) are taught natural language processing techniques, then perform sentiment analysis to explore the question: "Did the way judges, prosecutors, and witnesses talk about moral culpability change after the Bloody Code was mostly repealed in 1827 (at the leading edge of a wave of legal reform in England)?" \cite{berkeleya}. At Notting Hill & Ealing High School, the Year 10 (9th Grade) students taking my Code Art course (Index 19), after receiving minimal instruction in basic vector math, started to customize the animated elements of the GIFs they were programming.
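For readers unfamiliar with what such a notebook cell might look like, the following is a minimal, hypothetical sketch in the spirit of the Data, Law & Prediction exercise. It uses NLTK's VADER sentiment analyser on invented placeholder excerpts; it does not reproduce the Berkeley course materials or the underlying trial transcripts.

\begin{verbatim}
# Illustrative sketch only: the excerpts are invented placeholders, not
# material from the course or from the historical trial record.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
analyzer = SentimentIntensityAnalyzer()

# Hypothetical stand-ins for courtroom language before and after 1827.
excerpts = {
    "pre_1827":  "The prisoner is a wicked and depraved wretch.",
    "post_1827": "The prisoner acted wrongly but may yet be reformed.",
}
for label, text in excerpts.items():
    scores = analyzer.polarity_scores(text)  # neg/neu/pos/compound scores
    print(label, scores["compound"])
\end{verbatim}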
The lack of a 'control' against which to compare even just syllabus design makes causal inference impossible: we cannot isolate the true educational effect of deploying Jupyter notebooks as the core pedagogical tool relative to the alternatives. However, this first set of data does hint at what these pioneering educators, seeking to provide interdisciplinary HSSA courses to undergraduate and secondary-level students with notebook tooling, consider feasible, actually executable, and, for the many sampled courses with some history of being offered \cite{tana}, potentially successful.
With the art of the possible in mind, the next section helps to fill in a few gaps by presenting the second set of data in support of the article's case: the user experiences of five stakeholders drawn from a handful of the sampled courses in Table \ref{396036}. These four instructors and one student provide more conclusive evidence of the pedagogical benefits they attribute to Jupyter notebooks in their course experiences.