Jupyter is quickly becoming the entrypoint to HPC for a growing class of users. The ability to provision different classes of resources and to integrate with HPC workload management systems through JupyterHub is an important enabler of easy-to-use interactive supercomputing.
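As one concrete illustration of this kind of integration, a JupyterHub deployment can hand off notebook-server launches to a batch scheduler via the community `batchspawner` package. The sketch below is a minimal, hypothetical `jupyterhub_config.py` fragment for a Slurm-managed system; the partition name and resource requests are invented placeholders, and a real deployment would tailor these to local queue policy.

```python
# jupyterhub_config.py -- illustrative sketch only.
# Assumes the community batchspawner package (jupyterhub/batchspawner);
# the partition name and resource values below are hypothetical.
import batchspawner  # noqa: F401  (registers the spawner classes)

# Launch each user's notebook server as a Slurm batch job
# instead of a local process on the Hub node.
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'

# Resource requests substituted into the generated sbatch script.
c.SlurmSpawner.req_partition = 'interactive'  # hypothetical queue name
c.SlurmSpawner.req_runtime = '02:00:00'       # wall-clock limit
c.SlurmSpawner.req_nprocs = '4'               # CPUs per server
```

Because the spawner is pluggable, the same Hub can offer several spawner profiles (e.g., a shared login-node server versus a dedicated compute-node job), which is one way sites expose heterogeneous resources without concealing them.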
Because of mission, design, and technological trends, supercomputers and the HPC centers that run them remain less homogeneous as a group than cloud providers. This means "one size fits all" solutions are sometimes harder to come by. And while providers of supercomputing power want to increase ease of use, they are not interested in homogenizing or concealing specialized capabilities from expert users. Developers working on Jupyter projects that intersect with HPC should be especially careful to avoid making assumptions about HPC center policy (e.g., queue configuration, submit and run limits, privileged access) and should seek input from HPC developers on how to generalize those assumptions. As long as Jupyter developers remain committed to extensibility, abstraction, and deployment-agnostic design, developers at HPC centers and their research partners can help fill the gaps.
There is considerable momentum around Jupyter and HPC. We have built a network of contacts at other HPC centers to collaborate with and learn from; in 2019, together with the Berkeley Institute for Data Science, we hosted a Jupyter Community Workshop on Jupyter at HPC and research facilities to kick-start that process.