# Customizing User Environment
This page contains instructions for common ways to enhance the user experience. For a list of all the configurable Helm chart options, see the Configuration Reference.
The user environment is the set of software packages, environment variables, and various files that are present when the user logs into JupyterHub. The user may also see different tools that provide interfaces to perform specialized tasks, such as JupyterLab, RStudio, RISE and others.
A Docker image built from a Dockerfile will lay the foundation for the environment that you will provide for the users. The image will for example determine what Linux software (curl, vim …), programming languages (Julia, Python, R, …) and development environments (JupyterLab, RStudio, …) are made available for use.
To get started customizing the user environment, see the topics below.
## Choose and use an existing Docker image
Project Jupyter maintains the jupyter/docker-stacks repository, which contains ready to use Docker images. Each image includes a set of commonly used science and data science libraries and tools. They also provide excellent documentation on how to choose a suitable image.
If you wish to use another image from jupyter/docker-stacks than the base-notebook used by default, such as the datascience-notebook image containing useful tools and libraries for data science, complete these steps:
Modify your `config.yaml` file to specify the image. For example:
```yaml
singleuser:
  image:
    # You should replace the "latest" tag with a fixed version from:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/HEAD/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: latest
  # `cmd: null` allows the custom CMD of the Jupyter docker-stacks to be used
  # which performs further customization on startup.
  cmd: null
```
Container image names cannot be longer than 63 characters.

Always use an explicit `tag`, such as a specific commit. Avoid using `latest`, as it might cause a several minute delay, confusion, or failures for users when a new version of the image is released.
Apply the changes by following the directions listed in apply the changes.
If you have configured prePuller.hook.enabled, all the nodes in your cluster will pull the image before the hub is upgraded to let users use the image. The image pulling may take several minutes to complete, depending on the size of the image.
Restart your server from JupyterHub control panel if you are already logged in.
If you’d like users to select an environment from multiple docker images, see Using multiple profiles to let users select their environment.
## Selecting a user interface
JupyterLab is the new user interface for Jupyter, which is meant to replace the classic notebook user interface (UI). If both are installed, users can already switch between the classic UI and JupyterLab by interchanging `/tree` and `/lab` in the URL. Deployments using JupyterHub 1.x and earlier default to the classic UI, while JupyterHub 2.0 makes JupyterLab the default.
To pick a user interface to launch by default for users, two customization items need to be set:

- the preferred default user interface (UI)
- the server program to launch
There are two main Jupyter server implementations:

- the modern `jupyter server`, which is launched when you use `jupyter lab` or other recent Jupyter applications, and
- the 'classic' legacy notebook server (`jupyter notebook`).

Most deployments will not see a difference, but there can be issues for certain server extensions. If unsure, new applications should choose the modern `jupyter server`.
In general, the default UI is selected in `config.yaml` by:

```yaml
singleuser:
  defaultUrl: ...
```
and the default server by:
```yaml
singleuser:
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "..."
```
Specifically, use one of these options to select the modern server:
```yaml
# this is the default with JupyterHub 2.0
singleuser:
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
```
or the classic notebook server:
```yaml
# the default with JupyterHub 1.x
singleuser:
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "notebook.notebookapp.NotebookApp"
```
You only need the above configuration when it is different from the default. However, JupyterHub 2.0 changes the default server from the classic notebook server to the modern `jupyter server`, so here we make the choice explicit in each example, so that the same configuration produces the same result with JupyterHub 1.x and 2.x. That way, your choice will be preserved across upgrades.
## Use JupyterLab by default
This is the default in JupyterHub 2.0 and Helm chart 2.0.
You can choose JupyterLab as the default UI with the following config in your config.yaml:
```yaml
singleuser:
  defaultUrl: "/lab"
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
```
You can also make JupyterLab the default UI without upgrading to the newer server implementation. This may help users who need to stick to the legacy UI with extensions that may not work on the new server.
```yaml
singleuser:
  defaultUrl: "/lab"
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "notebook.notebookapp.NotebookApp"
```
You need the `jupyterlab` package (installable via `pip` or `conda`) for this to work. All images in the jupyter/docker-stacks repository come pre-installed with it.
## Use classic notebook by default
This is the default in JupyterHub 1.x and Helm chart 1.x.
If you aren't ready to upgrade to JupyterLab, especially if you depend on custom notebook extensions without an equivalent in JupyterLab, you can always stick with the legacy notebook server (`NotebookApp`):
```yaml
# the default with JupyterHub 1.x
singleuser:
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "notebook.notebookapp.NotebookApp"
```
This will start the exact same server and UI as before.
If you install the `nbclassic` package, you can also default to the classic UI, running on the new server. This may be the best way to support users on both classic and new environments:
```yaml
singleuser:
  defaultUrl: /tree/
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
```
There are more Jupyter server extensions providing alternate UI choices, which can be used with JupyterHub.
For example, retrolab is a different notebook interface, built on JupyterLab, but which may be more comfortable for those coming from the classic Jupyter UI.
To install such an extension:

1. install the package (`pip install retrolab` or `conda install retrolab`) in your user container image
2. configure the default URL, and make sure ServerApp is used:
```yaml
singleuser:
  defaultUrl: /retro/
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
```
## Customize an existing Docker image
If you are missing something in the image that you would like all users to have, we recommend that you build a new image on top of an existing Docker image from jupyter/docker-stacks.
Below is an example Dockerfile building on top of the minimal-notebook image. This file can be built into a Docker image, pushed to an image registry, and finally configured in config.yaml to be used by the Helm chart.
```dockerfile
FROM jupyter/minimal-notebook:latest
# Replace `latest` with an image tag to ensure reproducible builds:
# https://hub.docker.com/r/jupyter/minimal-notebook/tags/
# Inspect the Dockerfile at:
# https://github.com/jupyter/docker-stacks/tree/HEAD/minimal-notebook/Dockerfile

# install additional package...
RUN pip install --no-cache-dir astropy

# Set the default command of the image here if you want a more complex
# startup than the default `jupyterhub-singleuser`. To launch the image's
# custom CMD instead of the default `jupyterhub-singleuser`,
# set `singleuser.cmd: null` in your config.yaml.
```
If you are using a private image registry, you may need to set up the image credentials. See the Configuration Reference for more details on this.
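As a sketch of what such credentials configuration can look like, the chart's `imagePullSecret` settings can create the registry secret for you (all values below are placeholders; check the Configuration Reference for the authoritative schema):

```yaml
# Placeholder values - replace with your registry's details
imagePullSecret:
  create: true
  registry: registry.example.com
  username: my-registry-user
  password: my-registry-password
```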
## Set environment variables
One way to affect your user’s environment is by setting environment variables. While you can set them up in your Docker image if you build it yourself, it is often easier to configure your Helm chart through values provided in your config.yaml.
```yaml
singleuser:
  extraEnv:
    EDITOR: "vim"
```
You can set any number of static environment variables in the config.yaml file.
Users can read the environment variables in their code in various ways. In Python, for example, the following code reads an environment variable’s value:
```python
import os

my_value = os.environ["MY_ENVIRONMENT_VARIABLE"]
```
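Direct indexing raises a `KeyError` when the variable is unset; if the variable may be absent, `os.environ.get` lets code fall back to a default. A small sketch (the variable names here are illustrative):

```python
import os

# Simulate a variable provided via `singleuser.extraEnv`
os.environ["EDITOR"] = "vim"

# Direct indexing: raises KeyError if the variable is unset
editor = os.environ["EDITOR"]

# `.get` returns a fallback instead of raising
missing = os.environ.get("MY_MISSING_VARIABLE", "some-default")
```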
## About user storage and adding files to it
It is important to understand the basics of how user storage is set up. By default, each user will get 10GB of space on a hard drive that will persist in between restarts of their server. This hard drive will be mounted to their home directory. In practice this means that everything a user writes to the home directory (`/home/jovyan`) will remain, and everything else will be reset in between server restarts.
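If the 10GB default does not fit your needs, the per-user volume size can be changed through the chart's storage settings. A minimal sketch (the capacity value is only an example):

```yaml
singleuser:
  storage:
    # Request a different per-user volume size (Kubernetes quantity syntax)
    capacity: 2Gi
```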
A server can be shut down by culling. By default, JupyterHub's culling service is configured to cull a user's server that has been inactive for one hour. Note that JupyterLab will autosave files, so as long as the file was within the user's home directory, no work is lost.
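The culling behavior is itself configurable through the chart's `cull` section; a sketch of adjusting it (the timeout values here are examples, in seconds):

```yaml
cull:
  enabled: true
  # Cull servers after two hours of inactivity instead of one
  timeout: 7200
  # Check for inactive servers every ten minutes
  every: 600
```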
In Kubernetes, a PersistentVolume (PV) represents the hard drive. KubeSpawner will create a PersistentVolumeClaim (PVC) that requests a PV from the cloud. By default, deleting the PVC will cause the cloud to delete the PV.
With this setup, the contents of the Docker image's `$HOME` directory will be hidden from the user. To make these contents visible to the user, you must pre-populate the user's filesystem. To do so, you would include commands in the `config.yaml` that would be run each time a user starts their server. The following pattern can be used in `config.yaml`:
```yaml
singleuser:
  lifecycleHooks:
    postStart:
      exec:
        command: ["cp", "-a", "src", "target"]
```
Each element of the command needs to be a separate item in the list. Note that this command will be run from the `$HOME` location of the user's running container, meaning that commands that place files relative to `./` will result in users seeing those files in their home directory. You can use commands like `wget` to place files where you like.
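For instance, a sketch of fetching a single file into the home directory at startup (the URL and filename are placeholders):

```yaml
singleuser:
  lifecycleHooks:
    postStart:
      exec:
        command:
          - "sh"
          - "-c"
          - >
            wget -O ./welcome.ipynb https://example.com/welcome.ipynb
```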
A simple way to populate the notebook user's home directory is to add the required files to the container's `/tmp` directory and then copy them to `/home/jovyan` using a `postStart` hook. This example shows the use of a multi-statement shell command:
```yaml
singleuser:
  lifecycleHooks:
    postStart:
      exec:
        command:
          - "sh"
          - "-c"
          - >
            cp -r /tmp/foo /home/jovyan;
            cp -r /tmp/bar /home/jovyan
```
Keep in mind that commands will be run each time a user starts their server. For this reason, we recommend using `nbgitpuller` to synchronize your user folders with a git repository.
## Using nbgitpuller to synchronize a folder
We recommend using the tool `nbgitpuller` to synchronize a folder in your user's filesystem with a git repository whenever a user starts their server. This synchronization can also be triggered by letting a user visit a special nbgitpuller link (e.g., as an alternative start URL).

To use `nbgitpuller`, first make sure that you install it in your Docker image. Once this is done, you'll have access to the `nbgitpuller` CLI from within JupyterHub. You can run it with a `postStart` hook with the following configuration:
```yaml
singleuser:
  lifecycleHooks:
    postStart:
      exec:
        command:
          [
            "gitpuller",
            "https://github.com/data-8/materials-fa17",
            "master",
            "materials-fa",
          ]
```
This will synchronize the master branch of the repository to a folder called `$HOME/materials-fa` each time a user logs in. See the nbgitpuller documentation for more information on using this tool.
nbgitpuller will attempt to automatically resolve merge conflicts if your user's repository has changed since the last sync. You should familiarize yourself with the nbgitpuller merging behavior prior to using the tool in production.
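For reference, the link-triggered synchronization mentioned above uses nbgitpuller's `git-pull` URL scheme. A sketch of such a link (the hub domain is a placeholder, and the repository matches the example above):

```
https://your-hub.example.com/hub/user-redirect/git-pull?repo=https://github.com/data-8/materials-fa17&branch=master
```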
## Allow users to create their own conda environments for notebooks
Sometimes you want users to be able to create their own conda environments. By default, any environments created in a JupyterHub session will not persist across sessions. To resolve this, take the following steps:
1. Make sure the `nb_conda_kernels` package is installed in the root environment (e.g., see Build a Docker image with repo2docker)

2. Configure Anaconda to install user environments to a folder within `$HOME`.

   Create a file called `.condarc` in the home folder for all users, and make sure that the following lines are inside:
```yaml
envs_dirs:
  - /home/jovyan/my-conda-envs/
```
The text above will cause Anaconda to install new environments to this folder, which will persist across sessions.
These environments are supposed to be used in notebooks, so a typical use case:

1. Create one with at least a kernel, e.g. for Python: `conda create -n myenv ipykernel scipy`
2. This environment should now be available in the list of kernels.
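One hedged way to create that `.condarc` automatically for every user is a `postStart` hook; a sketch (adjust the path if your home directory differs from `/home/jovyan`):

```yaml
singleuser:
  lifecycleHooks:
    postStart:
      exec:
        command:
          - "sh"
          - "-c"
          - >
            printf 'envs_dirs:\n  - /home/jovyan/my-conda-envs/\n' > /home/jovyan/.condarc
```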
## Using multiple profiles to let users select their environment
You can create configurations for multiple user environments and let users select from them once they log in to your JupyterHub. This is done by creating multiple profiles, each of which is attached to a set of configuration options that override your JupyterHub's default configuration (specified in your Helm chart). This can be used to let users choose among many Docker images, to select the hardware on which they want their jobs to run, or to configure default interfaces such as JupyterLab vs. RStudio.
Each configuration is a set of options for KubeSpawner, which defines how Kubernetes should launch a new user server pod. Any configuration options passed to the `profileList` configuration will overwrite the defaults in KubeSpawner (or any configuration you've added elsewhere in your Helm chart).
Profiles are stored under `singleuser.profileList`, and are defined as a list of profiles with specific configuration options each. Here's an example:
```yaml
singleuser:
  profileList:
    - display_name: "Name to be displayed to users"
      description: "Longer description for users."
      # Configuration unique to this profile
      kubespawner_override:
        your_config: "Your value"
      # Defines the default profile - only use for one profile
      default: true
```
The above configuration will show a screen with information about this profile displayed when users start a new server.
Here’s an example with four profiles that lets users select the environment they wish to use.
```yaml
singleuser:
  # Defines the default image
  image:
    name: jupyter/minimal-notebook
    tag: 2343e33dec46
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too much bells and whistles: Python."
      default: true
    - display_name: "Datascience environment"
      description: "If you want the additional bells and whistles: Python, R, and Julia."
      kubespawner_override:
        image: jupyter/datascience-notebook:2343e33dec46
    - display_name: "Spark environment"
      description: "The Jupyter Stacks spark image!"
      kubespawner_override:
        image: jupyter/all-spark-notebook:2343e33dec46
    - display_name: "Learning Data Science"
      description: "Datascience Environment with Sample Notebooks"
      kubespawner_override:
        image: jupyter/datascience-notebook:2343e33dec46
        lifecycle_hooks:
          postStart:
            exec:
              command:
                - "sh"
                - "-c"
                - >
                  gitpuller https://github.com/data-8/materials-fa17 master materials-fa;
```
This allows users to select from four profiles, each with its own environment (defined by each Docker image in the configuration above).
The "Learning Data Science" environment in the above example overrides the postStart lifecycle hook. Note that when using `kubespawner_override`, the values must be in a format that complies with the KubeSpawner configuration. For instance, when overriding the lifecycle hooks in `kubespawner_override`, the configuration key is `lifecycle_hooks` (snake_case) rather than `lifecycleHooks` (camelCase), which is how it is used directly under the `singleuser` configuration section. A further explanation for this can be found in this github issue.
## User-dependent profile options
It is also possible to configure the profile choices presented to the user depending on the user. You can do this by defining a custom pre-spawn hook that populates the profile list based on user identity. See this discourse post for some examples of how this works.
You can also control the HTML used for the profile selection page by using the KubeSpawner `profile_form_template` configuration. See the KubeSpawner configuration reference for more information.
## Set command to launch
Ultimately, a single-user server should launch the `jupyterhub-singleuser` command. However, an image may have a custom CMD that does this, with some preparation steps, additional command-line arguments, a custom wrapper command, etc.
If you have environment preparation at startup in your image, this is best done in the ENTRYPOINT of the image, and not in the CMD, so that overriding the command does not skip your preparation.
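A sketch of that pattern (the script name and contents are illustrative): the ENTRYPOINT runs the preparation and then `exec`s whatever command Kubernetes passes, so preparation still happens when the spawner overrides the CMD.

```dockerfile
FROM jupyter/minimal-notebook:latest

# Hypothetical preparation script; it must end with `exec "$@"` so
# control is handed to the CMD (or to the command set by the spawner)
COPY prepare-env.sh /usr/local/bin/prepare-env.sh

# Note: this replaces the base image's own ENTRYPOINT (e.g. tini in
# the docker-stacks images), so include or chain it if you need it
ENTRYPOINT ["/usr/local/bin/prepare-env.sh"]
CMD ["jupyterhub-singleuser"]
```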
By default, zero-to-jupyterhub will launch the command `jupyterhub-singleuser`. If you have an image (such as `jupyter/scipy-notebook` and other Jupyter Docker stacks) that defines a CMD with startup customization and ultimately launches `jupyterhub-singleuser`, you can choose to launch the image's default CMD instead by setting:
```yaml
singleuser:
  cmd: null
```
Alternately, you can specify an explicit custom command as a string or list of strings:
```yaml
singleuser:
  cmd:
    - /usr/local/bin/custom-command
    - "--flag"
    - "--other-flag"
```
Whichever you choose, zero-to-jupyterhub always respects the ENTRYPOINT of the image; `singleuser.cmd` only overrides the CMD (which Kubernetes calls `args`).
## Disable specific JupyterLab extensions
Sometimes you want to temporarily disable a JupyterLab extension on a JupyterHub by default, without having to rebuild your Docker image. This can be very easily done with `singleuser.extraFiles` and JupyterLab's `page_config.json`. The `page_config.json` file lets you set page configuration by dropping JSON files in a `labconfig` directory inside any of the directories listed when you run `jupyter --paths`. We just use `singleuser.extraFiles` to provide this file!
```yaml
singleuser:
  extraFiles:
    lab-config:
      mountPath: /etc/jupyter/labconfig/page_config.json
      data:
        disabledExtensions:
          jupyterlab-link-share: true
```
This will disable the link-share labextension, both in JupyterLab and RetroLab. You can find the name of the extension, as well as its current status, with `jupyter labextension list`.
```
jovyan@jupyter-yuvipanda:~$ jupyter labextension list
JupyterLab v3.2.4
/opt/conda/share/jupyter/labextensions
        jupyterlab-plotly v5.4.0 enabled OK
        jupyter-matplotlib v0.9.0 enabled OK
        jupyterlab-link-share v0.2.4 disabled OK (python, jupyterlab-link-share)
        @jupyter-widgets/jupyterlab-manager v3.0.1 enabled OK (python, jupyterlab_widgets)
        @jupyter-server/resource-usage v0.6.0 enabled OK (python, jupyter-resource-usage)
        @retrolab/lab-extension v0.3.13 enabled OK
```
This is extremely helpful if the same image is being shared across hubs, and you want some of the hubs to have some of the extensions disabled.