Refer to the blog post from Zonca: https://zonca.dev/2015/09/ipython-jupyter-notebook-nersc-edison.html We realized that users of Cori's predecessor system (Edison) wanted to use Jupyter and were jumping through a lot of hoops to get it. We knew that Jupyter was a big deal (interactive, literate computing with broad adoption across the sciences), so we decided to embrace our Jupyter users. Then discuss Jupyter as a science gateway and what that gave our users.
Various things, mostly in chronological order:
Then we deployed Cori, and part of the plan was to have Jupyter on Cori; we wrote a custom JupyterHub spawner that used GSISSH to launch notebook servers on the system. => Customization
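The spawner pattern can be sketched as follows. This is a minimal illustration of the idea, not NERSC's actual spawner code; the host name, proxy path, and single-user command details are assumptions for the example.

```python
import os

def build_remote_launch_cmd(host, port, proxy_path="/tmp/x509_proxy"):
    """Build a gsissh invocation that starts a single-user Jupyter server
    on a remote HPC login node (illustrative sketch; paths are assumptions)."""
    # GSI-OpenSSH picks up the user's grid proxy via the X509_USER_PROXY
    # environment variable rather than a password prompt.
    os.environ["X509_USER_PROXY"] = proxy_path
    # The hub would pass the port (and auth state) to the single-user server.
    remote_command = f"jupyterhub-singleuser --port={port}"
    return ["gsissh", host, remote_command]

cmd = build_remote_launch_cmd("cori.nersc.gov", 8888)
```

A real JupyterHub spawner subclasses `Spawner` and implements `start`, `poll`, and `stop`; the essential customization point is exactly this: the hub stays generic while the site decides how the single-user server is actually launched.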
Integration into Spin: being able to manage Jupyter with Docker containers but without requiring Kubernetes => Customization and not being "locked in" (e.g., to k8s)
Next milestone was more nodes and MFA => Customization
JupyterHub Services => another abstraction/customization point we like and take advantage of
Binder for HPC. 
Customizing environments for users and collaborations: providing a "base" environment and kernels that users can rely on, while also giving users a locus of control over their own environments.
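The kernel mechanism is what makes this locus of control possible: a user or collaboration registers a small kernel spec pointing at their own environment, and it appears alongside the center-provided base kernels. A minimal sketch, assuming a hypothetical project path and name (not NERSC's actual layout):

```python
import json

# A kernel spec tells Jupyter how to launch a kernel. The interpreter path
# and display name below are hypothetical examples for a collaboration's
# own conda environment, not real NERSC paths.
kernel_spec = {
    # argv launches ipykernel from the collaboration's environment;
    # {connection_file} is filled in by Jupyter at launch time.
    "argv": [
        "/global/common/software/myproject/env/bin/python",
        "-m", "ipykernel_launcher",
        "-f", "{connection_file}",
    ],
    "display_name": "MyProject Python",
    "language": "python",
}

spec_json = json.dumps(kernel_spec, indent=2)
```

Dropping this JSON into a `kernel.json` under the user's kernels directory (e.g., `~/.local/share/jupyter/kernels/myproject/`) is enough for the kernel to show up in the notebook UI, with no action required from the center.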
This is simultaneously an R&D project and an active, officially supported core service. We have been able to reconcile those two roles through careful messaging and through our users' willingness to provide feedback and tolerate experimentation on our part, because in the end they get something out of it.

Conclusion

Jupyter in HPC is now commonplace. Through Jupyter we have been able to give hundreds of HPC users a rich user interface to supercomputing. In this context, we view Jupyter as a tool that makes it easier for our users to take advantage of supercomputing hardware and software, and some of that enablement will come from the supercomputing centers themselves. Each HPC center is different, so for Jupyter to remain useful to HPC centers and supercomputing, we ask that the project:

- avoid design decisions that break existing HPC deployments,
- avoid locking deployments into one way of doing things, and
- maintain its high level of abstraction, so that each center can adapt it to its own environment.

Acknowledgments

This work was supported by Lawrence Berkeley National Laboratory, through the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory.