Jupyter on the Cluster

Running Jupyter on your local machine is straightforward, but sometimes you need more computational resources than your laptop provides, which means hosting your work on a remote computer. This article shows how to run Jupyter on a remote machine through an SSH tunnel so that you can interact with it in your local web browser.

Before we discuss running notebooks on the HPC clusters, we first point out some simpler options. Both myAdroit and myDella offer web interfaces for setting up and running Jupyter notebooks, as well as MATLAB, Stata and RStudio. To get started, choose “Interactive Apps” from the menu at the top of those sites. For those with an account on Tiger, Della or Perseus, another possibility is jupyter.rc, a standalone node designed for running interactive Jupyter notebooks. Note that jupyter.rc has one P100 GPU while the other options do not provide any.

If these options are insufficient for your needs then continue reading. The approach described below will give you more flexibility in comparison to the choices above.

On the Head Node

The first option is running Jupyter on the head node of one of the HPC clusters. In order to do this, we first need to log on to the relevant head node (tigercpu in this case):

ssh <yourusername>@tigercpu.princeton.edu

Then we launch Jupyter Lab or Jupyter Notebook as follows:

# load the anaconda environment module
module load anaconda3

# For Jupyter Lab (bind to the loopback address, since the
# tunnel below forwards to localhost on the head node):
jupyter-lab --no-browser --port=8889 --ip=127.0.0.1

# For Jupyter Notebook:
jupyter-notebook --no-browser --port=8889 --ip=127.0.0.1

Note that we selected port 8889 for the notebook server. If you don’t specify a port, Jupyter defaults to 8888, but that port may already be in use on either the remote machine or the local one (your laptop). If the port you selected is unavailable, you will get an error message, in which case just pick another one. It’s best to keep it above 1024, since lower ports are reserved for the system. I usually start with 8888 and increment by 1 on failure, e.g. 8888, 8889, 8890, and so on. In the remainder of this post we assume that you picked port 8889; if you are running on a different port, substitute your port number for 8889.
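Rather than guessing and retrying, you can let the operating system pick a free port for you. The following sketch (assuming `python3` is on the PATH, which it is once `anaconda3` is loaded) binds a socket to port 0, which tells the OS to choose any unused ephemeral port:

```shell
# Ask the OS for a free TCP port by binding to port 0, then release it
# and reuse the number for Jupyter.
port=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "Using port ${port}"
```

You would then pass `--port=${port}` to `jupyter-lab` and use the same number on both sides of the SSH tunnel.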

On the local machine, we then need to type:

ssh -N -f -L localhost:8889:localhost:8889 <yourusername>@tigercpu.princeton.edu

Looking at the man page for ssh, the relevant flags are:

-N   Do not execute a remote command.  This is useful for just
     forwarding ports.

-f   Requests ssh to go to background just before command execution.
     This is useful if ssh is going to ask for passwords or
     passphrases, but the user wants it in the background.
-L   Specifies that the given port on the local (client) host is to be
     forwarded to the given host and port on the remote side.

As the -f flag implies, the ssh tunnel will be running in the background. In order to kill the ssh tunnel, type lsof -i tcp:8889 to get the process id (PID) and use kill -9 <PID> to kill it.

In order to access Jupyter, navigate to http://localhost:8889/.

NOTE: If prompted for a “Password or token” the first time you connect, it can be found on the head node where Jupyter is running, as shown in the figure below.

On a Compute Node via salloc

If you need to run larger computations, you should not run them on the head node but rather on one of the compute nodes. One way of doing that is to request an interactive session using salloc. Once a compute node has been allocated, we can run Jupyter and connect to it much as we did in the previous section. One difference is that the compute nodes are not connected to the internet, so we have to slightly modify the local port forwarding.

First, from the head node, we ask for an interactive session with a compute node on the cluster. Here we are asking for 1 node, 1 core, for 5 minutes:

salloc -N 1 -n 1 -t 00:05:00
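The request above is deliberately minimal. Larger requests use the same flags; as a sketch (exact limits and flag availability depend on the cluster), asking for 4 cores, 8 GB of memory and two hours might look like:

```shell
# 1 node, 4 tasks, 8 GB of memory, for 2 hours (adjust to your needs)
salloc -N 1 -n 4 --mem=8G -t 02:00:00
```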

Once the node has been allocated, type hostname to get the name of the node. In the figure below, we have been assigned tiger-h26c2n22.
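Since the node name changes with every allocation, it can be convenient to have the compute node print the exact tunnel command for you. A small sketch (substitute your own username):

```shell
# Run on the compute node: capture the short hostname and print the
# ssh command to paste into a terminal on your laptop.
node=$(hostname -s)
echo "ssh -N -f -L 8889:${node}:8889 <yourusername>@tigercpu.princeton.edu"
```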

On that node, we first need to unset the XDG_RUNTIME_DIR environment variable to avoid a permission issue, then we launch Jupyter:

# Unset 'XDG_RUNTIME_DIR' to avoid a permission issue:
unset XDG_RUNTIME_DIR

# load the anaconda environment module
module load anaconda3

# For Jupyter Lab (bind to the compute node's hostname so the
# tunnel from the head node can reach it):
jupyter-lab --no-browser --port=8889 --ip=$(hostname)

# For Jupyter Notebook:
jupyter-notebook --no-browser --port=8889 --ip=$(hostname)

Then, on the local machine we set up the tunnel as:

ssh -N -f -L 8889:tiger-h26c2n22:8889 <yourusername>@tigercpu.princeton.edu

In order to access Jupyter, navigate to http://localhost:8889/.

On a Compute Node via sbatch

The third way of running Jupyter on the cluster is to submit a job via sbatch that launches Jupyter on a compute node.

In order to do this we need a submission script like the following called jupyter.sh:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time 00:05:00
#SBATCH --job-name jupyter-notebook
#SBATCH --output jupyter-notebook-%J.log

# get tunneling info
XDG_RUNTIME_DIR=""   # avoid a permission issue with Jupyter's runtime dir
port=8889            # must match the --port passed to jupyter-lab below
node=$(hostname -s)
user=$(whoami)
cluster="tigercpu"   # head node to tunnel through

# print tunneling instructions to the log file
echo -e "
Command to create ssh tunnel:
ssh -N -f -L ${port}:${node}:${port} ${user}@${cluster}.princeton.edu

Use a Browser on your local machine to go to:
localhost:${port}  (prefix w/ https:// if using password)
"

# load modules or conda environments here
module load anaconda3

# Run Jupyter
jupyter-lab --no-browser --port=${port} --ip=${node}

This job launches Jupyter on the allocated compute node and we can access it through an ssh tunnel as we did in the previous section.

First, from the head node, we submit the job to the queue:

sbatch jupyter.sh

Once the job is running, a log file will be created that is called jupyter-notebook-<jobid>.log, as seen in the figure below.

The log file contains information on how to connect to Jupyter, and the necessary token.

In order to connect to Jupyter that is running on the compute node, we set up a tunnel on the local machine as follows:

ssh -N -f -L 8889:tiger-h26c2n22:8889 <yourusername>@tigercpu.princeton.edu

where tiger-h26c2n22 is the name of the node that was allocated in this case.

In order to access Jupyter, navigate to http://localhost:8889/

In the directions on this page, the only packages available to the user are those provided by loading the anaconda3 module. If you have created your own Conda environment then you will need to activate it before running the “jupyter-lab” or “jupyter-notebook” command. Be sure that the “jupyter” package is installed in your environment (i.e., conda activate myenv; conda install jupyter).
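Putting those steps together, a sketch of the full workflow might look like the following, where “myenv” is the hypothetical environment name from the paragraph above:

```shell
# One-time setup: create a personal Conda environment that includes Jupyter
module load anaconda3
conda create -y --name myenv jupyter

# Each session: activate the environment before launching the server
module load anaconda3
conda activate myenv
jupyter-lab --no-browser --port=8889
```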

6 thoughts on “Jupyter on the Cluster”

  1. Also, is there a specific reason for using TCP port 8889? According to https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers 8888 is the default Jupyter notebook port. Can you use 8890 for binding the remote and local ports? Is there a safe/recommended range of ports to use?

    During the Help Session today, someone was having issues binding to 8889 which was being used by some process on his macOS system. Killing the process that was using the port and then starting the tunnel seemed to break the authentication token, but restarting the process with port 8890 worked. I suspect that a comment from https://github.com/jupyter/notebook/issues/3495 might explain that case:

    “You may have two different notebook servers running, one on port 8888 and one on port 8889. If the first one was started before you set the password, it won’t accept the new password.”

    But it might be good to include some more discussion of TCP port usage in this article.

  2. This was a great post! I am not getting the difference between the head node and the compute node. What are the issues with the head node? Also, is it possible to have both the local machine processors and the cluster nodes available?
