
Conda environment









The key step that you are missing is to register your ipython kernel. The Jupyter architecture means that the “notebook interface” is what gets started by SWAN, and it subsequently starts a kernel (Python, or indeed any other language) to do the code execution.

I have a prototype which also installs conda (in my case at SWAN start, and into /scratch), in which I register the ipython kernel in a SWAN “Environment script”. It is feasible to replace the existing “python3” kernel, which would effectively replace the default SWAN kernel for the duration of your SWAN session. Personally though, I just created a new kernel name, and as soon as my notebook started, switched the kernel that the notebook runs with.

In terms of code, you will need to ensure you’ve installed ipython_kernel (conda-installable as ipykernel), then:

# The kernels on SWAN are installed in the scratch user-site directory.

Here is the default Jupyter config for a standard SWAN setup for me:

$ jupyter --paths

scratch/pelson/.local/share/jupyter/runtime

There is no directory which is persistent and writable, so there is currently no way (FWICS) to configure Jupyter to pick up anything persistently. Therefore, if you want a custom kernel, you have to do something at SWAN startup - either set an environment variable (e.g. JUPYTER_PATH), or install a kernel into $SCRATCH_HOME.

The alternative approach is to put something into each of your notebooks to do the setup/config - I have an example of doing that to install a virtual environment and set up the Python path here. I personally don’t particularly like this approach, and prefer to set up the environment before any notebooks are loaded/executed, since it feels like an implementation detail of SWAN bleeding into the data analysis record.
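To make the registration step concrete, here is a sketch of what kernel registration actually amounts to: `python -m ipykernel install` simply writes a kernel.json like the one below into a “kernels” directory that Jupyter searches. The directory and the “my-conda-env” name here are illustrative (a temp dir stands in for the scratch user-site on SWAN):

```python
import json
import os
import sys
import tempfile

# Illustrative target; on SWAN this would sit under the scratch
# user-site, or any directory listed on JUPYTER_PATH.
kernels_dir = os.path.join(tempfile.mkdtemp(), "kernels", "my-conda-env")
os.makedirs(kernels_dir)

spec = {
    # argv tells Jupyter how to launch the kernel; sys.executable would
    # be the Python interpreter inside your conda environment.
    "argv": [sys.executable, "-m", "ipykernel_launcher",
             "-f", "{connection_file}"],
    "display_name": "Python (my-conda-env)",
    "language": "python",
}
with open(os.path.join(kernels_dir, "kernel.json"), "w") as fh:
    json.dump(spec, fh, indent=2)
```

Once a kernel.json like this is visible to Jupyter, the new kernel name shows up in the notebook’s kernel-switch menu, which is exactly what the “create a new kernel name, then switch to it” workflow relies on.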

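For the startup variant, a SWAN “Environment script” along the following lines covers both options at once; this is a hedged sketch, not the prototype itself, and the environment name and install prefix are assumptions (only $SCRATCH_HOME, JUPYTER_PATH and the `ipykernel install` command come from the discussion above):

```shell
#!/bin/bash
# Hypothetical SWAN environment script; "my-conda-env" and the prefix
# are illustrative. Register the environment's kernel under scratch...
python -m ipykernel install --prefix="$SCRATCH_HOME/.local" \
    --name my-conda-env --display-name "Python (my-conda-env)"

# ...and make sure Jupyter searches that location for kernelspecs.
export JUPYTER_PATH="$SCRATCH_HOME/.local/share/jupyter:$JUPYTER_PATH"
```

Because this runs before the notebook server starts, nothing SWAN-specific needs to appear inside the notebooks themselves.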








