
**Caveat**

If you upgrade the drivers or the kernel, a reboot is necessary: the drivers and the libraries need to match, otherwise Docker will not run.

## Jupyterhub & Notebooks

**Notebooks**

We have created a build system where we first build a "base" notebook image with the tools that we use (based on the data-science notebook), and then use that as the base for an NVIDIA container (this works because both our image and NVIDIA's are based on the Ubuntu container). We therefore only need one notebook image whether you use a GPU or not (the tools will always be there, but you will not have access to a GPU unless you requested one).

The drawback is that the image becomes rather large (though it will not take up more space than multiple images on the node), and we always pre-populate the worker nodes with the image anyway.

As said before, when we create a new image we always tag it with today's date when we push it to the repo.
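
The date-tagging convention can be sketched like this; the registry path is a hypothetical placeholder, only the date-as-tag idea comes from the text above:

```python
from datetime import date

# Hypothetical registry path; only the date-tag convention is from the text.
image = "registry.example.com/base-notebook"
tag = f"{image}:{date.today().isoformat()}"  # tag format: <image>:YYYY-MM-DD
```
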
### Jupyterhub

I will not go through the code in detail, as it is pretty well described in the [code](https://git.cs.kau.se/jonakarl/jupyterhub/-/blob/master/jupyterhub_config.py) itself, but I will give a high-level overview of our design.

In overview, the design looks like this:

- install.sh --> network-keeper
- network-keeper --> creates the network
- network-keeper --> pulls the current image
- install.sh --> "pre-populates" docker-compose from the .env file
- install.sh --> starts Jupyterhub from docker-compose.yml
- docker-compose.yml --> sets environment variables for jupyterhub_config.py
- jupyterhub_config.py reads the environment variables and configures Jupyterhub
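
The last step can be sketched like this; the variable names and defaults are assumptions, not the exact ones used in our docker-compose.yml:

```python
import os

# Hypothetical environment variable names; the real ones are defined in the
# docker-compose.yml / .env files of the deployment.
nb_image = os.environ.get("NB_IMAGE", "base-notebook:latest")
mem_limit = os.environ.get("MEM_LIMIT", "8G")
cpu_limit = float(os.environ.get("CPU_LIMIT", "2"))

# In jupyterhub_config.py these values would then be applied to the spawner:
# c.DockerSpawner.image = nb_image
# c.DockerSpawner.mem_limit = mem_limit
# c.DockerSpawner.cpu_limit = cpu_limit
```
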

The network-keeper service and the pre-populating of the docker-compose file (and why it is needed) have been described before.

**jupyterhub_config.py**

Jupyterhub reads the environment variables defined in the docker-compose.yml file.

Below I will describe what happens when a user logs in and what we do.

1. A user surfs to our server (hub.cse.kau.se)
   - we set some default values:
     - memory limits
     - cpu limits
     - notebook image (nb_image)
2. The user is redirected to our OAuth provider (git.cse.kau.se) (unless already logged in)
3. We do a lookup (GitLab) to see if the user is a member of the gpu group, and show a dialog if so.
   - if the user wants to use a GPU, we set a gpu variable to 1.
4. We do two lookups to see if the user is a member of two of the three pre-defined GitLab groups (no-limits and admin), and check if the gpu variable is set to 1 (i.e. the user chose to use a GPU in step 3)
   - If admin, we modify the spawner mounts to show all users' home folders and make the shared folder writeable
   - If in the no-limits group, we remove the spawner cpu and memory limits
   - If the gpu variable is set, we set the spawner options to:
     1. request 1 (one) GPU
     2. remove cpu and memory limits
     3. increase shared memory to 24 GB for that container
     4. (change the notebook image to the gpu image)*
5. We check if the user has a "home folder" and, if not, we create it**
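
The per-user adjustments in step 4 can be sketched as follows. The attribute names follow DockerSpawner conventions, but the mount paths, group names, and the GPU request dict are assumptions; the real logic lives in jupyterhub_config.py:

```python
from types import SimpleNamespace

def apply_user_options(spawner, groups, wants_gpu):
    """Adjust spawner mounts and limits based on GitLab group membership (sketch)."""
    if "admin" in groups:
        # show all users' home folders (host path is hypothetical)
        spawner.volumes["/srv/homes"] = "/home/jovyan/homes"
    if "no-limits" in groups:
        # members of no-limits run without cpu/memory caps
        spawner.mem_limit = None
        spawner.cpu_limit = None
    if wants_gpu and "gpu" in groups:
        spawner.mem_limit = None
        spawner.cpu_limit = None
        spawner.extra_host_config = {
            "shm_size": "24G",                  # larger shared memory
            "device_requests": [{"count": 1}],  # request one GPU (sketch)
        }

# usage with a stand-in spawner object
spawner = SimpleNamespace(mem_limit="8G", cpu_limit=2.0,
                          volumes={}, extra_host_config={})
apply_user_options(spawner, groups={"gpu"}, wants_gpu=True)
```
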

*By default, the gpu image is the same as the normal image.

**The folders are created inside the jupyterhub container but are bind-mounted from the manager host (all folders on the server have the same user; we use docker mounts to separate user home folders).
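
Step 5 (home-folder creation) can be sketched as below; the base path on the manager host is an assumption:

```python
from pathlib import Path

# Base path for user home folders on the manager host; hypothetical.
HOME_ROOT = Path("/srv/jupyterhub/homes")

def ensure_home(username: str) -> Path:
    """Create the user's home folder if it does not already exist.

    All folders have the same owner; per-user separation is done with
    docker bind mounts, as described above.
    """
    home = HOME_ROOT / username
    home.mkdir(parents=True, exist_ok=True)
    return home
```
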