TL;DR
While the reasoning is important, you may not be interested in all the frustrations I experienced while figuring out how to get things done. If you’re looking for a quick solution, skip to the “What eventually worked?” section. However, if you’re interested in the thought process behind the solution, keep reading.
Why?
Some might wonder why the hell I’d want to make my life so hard 🤣
We used to use `nodeenv` for that purpose. It provides a simple script that allows you to fetch any version of Node: you configure the `PATH` variable and you’re done. It’s very simple from the perspective of a Docker image operator. There’s one problem with `nodeenv` compared to `nvm`: popularity.
*(A popularity comparison of `nodeenv` and `nvm` appeared here.)*
I explained to a lot of people how to use `nodeenv`. No one knew it! But many of them knew `nvm` and kept asking: why can’t I just install the version of Node my project needs with nvm? That’s how I started thinking about better ways to deal with Node version management.

I support an organization with thousands of projects; the Node projects alone can be counted in the hundreds. It’s not possible to provide up-to-date base images for all the Node versions people would request. But I can provide a base that allows you to install any Node version you need on top of up-to-date base images. That’s what this article is about.
nvm is nice and simple, so how hard can it be to get it working with CI/CD?
Over the past few days, I’ve been working on providing `nvm` for both my company’s Docker base images and for CI/CD. However, it hasn’t been an easy task.

In general, `nvm` (known as Node Version Manager) is loved by frontend developers because it allows them to drop a `.nvmrc` file into their project, and each time they switch between projects, everything works seamlessly. `nvm` is responsible for installing (or activating) the version of Node required by a specific project. Similar functionality is provided by `nodeenv`, which is Python-based and built on `virtualenv`.
While setting up `nodeenv` is straightforward because it’s a regular CLI command, `nvm` is not. When you follow nvm’s installation instructions1, you end up with something like the following code in one of the startup files (`~/.bashrc`, `~/.profile`, or `~/.zshrc`):
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && . "$NVM_DIR/bash_completion"
Installing `nvm` involves “sourcing” (that’s what `.` does) `nvm.sh`, which defines some shell functions, into your current shell. This is not a big deal on your local machine, but it becomes an issue if you want to use `nvm` in non-interactive shells for your CI/CD.
Let’s take `bash` as an example. Suddenly, it becomes important to know which file you’ve put the `nvm` lines in, because not all of them are loaded for non-interactive shells2. The startup files for bash (in order) are:
- Login (called with `--login`): `/etc/profile`, then the first one found of `~/.bash_profile`, `~/.bash_login`, `~/.profile`
- Interactive non-login: `~/.bashrc`
- Non-interactive: none of these (only the file pointed to by `$BASH_ENV`, if set)
It’s worth noting that `~/.profile` usually loads `~/.bashrc` automatically (check yours). But what about `sh`, `dash`, or `zsh`? If you’re calling `nvm` on Jenkins, which shell will it use? This variety of combinations makes it incredibly hard to get `nvm` to behave consistently.
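As a quick illustration (plain `bash`, nothing nvm-specific): the `login_shell` option tells you which kind of shell a given invocation produced, and therefore whether files like `~/.profile` were read at all:

```shell
# bash sets the login_shell option only when started as a login shell.
# A plain `bash -c` is a non-login shell; `bash -lc` (or --login) is a login shell.
bash -c  'shopt -q login_shell && echo login || echo non-login'   # prints: non-login
bash -lc 'shopt -q login_shell && echo login || echo non-login'   # prints: login
```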
To better understand my use case, I use:
- Jenkins for CI/CD
- Which runs as a Docker container on a K8S cluster
- The container contains most of the tools needed by developers (but just a single LTS version of Node).
Getting it working in the Dockerfile
I eventually found a nice way to make `nvm` work in a Dockerfile: the `SHELL` directive. By default, it’s set to `["/bin/sh", "-c"]`. This shell starts non-interactive and does not load the files we need. We can fix that with:
SHELL ["/bin/bash", "--login", "-c"]
The whole Dockerfile might look like this:
FROM ubuntu:22.04 as fetcher

ENV NVM_VERSION v0.39.3

RUN apt-get update && \
    apt-get install -y git && \
    git clone \
        --depth 1 \
        --branch $NVM_VERSION \
        https://github.com/nvm-sh/nvm.git

FROM ubuntu:22.04

# we don't want to store cached files in the image
VOLUME /var/cache/apt

# prerequisites
RUN apt-get update && \
    apt-get install -y curl

SHELL ["/bin/bash", "--login", "-c"]

ENV NVM_DIR=/opt/nvm

# copy the nvm
COPY --from=fetcher nvm $NVM_DIR

# change ownership if needed
RUN chown -R $(id -un):$(id -gn) $NVM_DIR && \
    echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" --no-use' >> ~/.profile
What happens here?

1. A shallow clone of the `nvm` repository is performed with `git`, pinned to a specific version.
2. The final image gets the dependencies it needs, such as `bash` and `curl`.
3. `SHELL` is set to a login shell, so it reads `~/.profile` and behaves like an interactive one.
4. The `nvm` files are copied to `/opt/nvm`.
5. `chown` is used to set the ownership of the files.
6. The steps in the installation guide1 are followed, with some exceptions: the `NVM_DIR` environment variable is set directly in the Dockerfile, and bash completion is skipped.
7. The sourcing of `nvm.sh` is followed by `--no-use`, which loads `nvm` lazily so that no Node version is activated until it’s used3.
Lastly, you can test that it’s working by adding some commands at the end of the Dockerfile to check the `nvm` version, install different versions of Node, and check their versions as well.
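For example, a few hypothetical smoke-test lines appended to the Dockerfile above (remember that `SHELL` is already a login bash, so `~/.profile` gets sourced for each `RUN`):

```dockerfile
# Smoke tests: verify nvm loads and can install and switch Node versions.
RUN nvm --version
RUN nvm install --lts && node --version && npm --version
RUN nvm install 16 && nvm use 16 && node --version
```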
Getting it working in Jenkins
The pattern of running Jenkins agents as containers is great for packing development tools into Docker images and running them when needed. However, it can be challenging to get `nvm.sh` sourced properly when Jenkins agents are started as containers.

Now that we have `nvm` working in a Docker image, we can use it in Jenkins. The easiest way is to use the image we created above as the base for a custom Jenkins agent image, which includes `nvm` and all the necessary Node versions.
Example Dockerfile:
FROM our-company/our-nvm-base:latest
# you might need to create jenkins user earlier
USER jenkins
ENV NODE_VERSIONS="lts/* 16"
# install Node versions (nvm install takes one version at a time)
RUN for v in $NODE_VERSIONS; do nvm install "$v"; done && \
    nvm use "lts/*" && \
    npm install -g yarn
And that’s it! Right? 🤔
Was it enough to get it working?
Not really 😕. Why?
It all depends on how you call the commands. While everything works well at build time, it may not work when running the container or calling commands in other ways.
For example, it may not work if you use `docker exec` to run commands in containers, which we do when testing Docker base images. A similar situation happens when `sh` steps are used in Jenkins pipelines: `~/.profile` is not loaded, so `nvm.sh` is not sourced properly. While it’s possible to teach users to source `nvm.sh` in every `sh` step, this is cumbersome and error-prone. Therefore, we need a better solution.
What eventually worked?
After struggling with the previous approach, I started thinking of an easier, more straightforward way to work with `nvm` that avoids common mistakes. I wanted something that would work the same way on both my local machine and in the CI/CD environment.

I realized that `nvm` does two main things: it manages installations of different Node versions, and it loads them on demand by modifying the `PATH` environment variable.

While thinking about how to simplify this process, it occurred to me that it would be much easier if `nvm` were a command rather than a `bash` function. And then it hit me: why not make `nvm` a command?
To do this, I created a file with the following code:
#!/usr/bin/env bash
# load nvm
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
# call nvm with all the parameters
nvm "$@"
Next, I made this file executable and put it somewhere on the `PATH` by writing it to `/usr/local/bin/nvm`, `~/.bin/nvm`, or `~/.local/bin/nvm`, whatever works for you.
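As a sketch, the whole installation can be scripted (here I assume `~/.local/bin` is on your `PATH`; the heredoc body is the wrapper from above):

```shell
# Write the nvm wrapper into ~/.local/bin and make it executable.
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/nvm" <<'EOF'
#!/usr/bin/env bash
# load nvm
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
# call nvm with all the parameters
nvm "$@"
EOF
chmod +x "$HOME/.local/bin/nvm"
```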
With this solution, there was no longer any need to source `nvm.sh` before using `nvm`. Instead, it could be called directly as a command, which simplified and streamlined the process considerably.
I was able to install and manage different versions of Node. However, attempting to use the `node` or `npm` commands resulted in an error stating that the command was not recognized. This is because `nvm` installed this way no longer mangles the `PATH` environment variable. To fix this, there are two potential solutions.
The first solution involves manually setting the `PATH` variable to include the desired version of Node. If you only need one specific version in your Docker image, this might suffice. As a bonus, exposing the `bin` directory automatically exposes all custom binaries installed with `npm`. It’s as easy as adding this to the agent’s Dockerfile:
ENV PATH=$NVM_DIR/versions/node/v18.16.0/bin:$PATH
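This works because command lookup simply walks `PATH` left to right. A toy demonstration with a fake `node` script (nothing here is nvm-specific; the path and version string are made up):

```shell
# Whichever directory appears first on PATH wins the lookup for `node`.
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho v18.16.0\n' > /tmp/fakebin/node
chmod +x /tmp/fakebin/node
PATH=/tmp/fakebin:$PATH node --version   # prints: v18.16.0
```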
The second solution involves creating wrappers for the `node` and `npm` commands in the same style as the `nvm` command. While this approach has the downside of not automatically exposing binaries installed via `npm`, it is a great idea overall. When the `node` wrapper runs, it can work out which version of Node should be used by sourcing `nvm` and running it.
Here are examples of the wrapper scripts for `node` and `npm`.

Wrapper script for `node`:
#!/usr/bin/env bash
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
exec node "$@"
Wrapper script for `npm`:
#!/usr/bin/env bash
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
exec npm "$@"
I use `exec` here for `node` and `npm` but not for `nvm`, because `nvm` is a shell function while `node` and `npm` are executables. By using `exec` to run them, we eliminate one layer of `bash` shell, which saves a few megabytes of RAM in the container.
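The effect of `exec` is easy to observe with PIDs: the replacing process takes over the PID of the shell it replaced, so no extra shell process stays around:

```shell
# Without exec there would be two processes (bash plus its child); with exec
# the child takes over the bash process, so both lines report the same PID.
bash -c 'echo "shell pid: $$"; exec sh -c "echo \"child pid: \$\$\""'
```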
How to automatically expose binaries installed by npm?
The “wrappers” approach has a downside: binaries installed via `npm` are not immediately available. They have to be wrapped or linked, or the `PATH` has to be modified. But we can automate this in our script!

Here’s an improved version of `/usr/local/bin/npm`:
#!/usr/bin/env bash
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
set -e
npm "$@"
if [ "$1" == "install" ]; then
    find "$(dirname "$(nvm which node)")" \
        -executable \
        \( -type f -o -type l \) \
        -print \
        | sed '/node$/d;/npm$/d' \
        | xargs -I{} ln -sf {} /usr/local/bin/
fi
This script works by first executing `npm "$@"` normally, without `exec`. After a successful `npm install`, it finds all the executables in the same directory as the current `node` executable, except for `node` and `npm` themselves, and symlinks them into a directory already on the `PATH`, such as `/usr/local/bin`. Note that using `/usr/local/bin` requires root permissions. Alternatively, you can use `~/bin` or `~/.local/bin`, as long as one of these directories is on the `PATH`.
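The `sed` filter can be checked in isolation. Given a hypothetical list of executables from Node’s `bin` directory, only paths ending in `node` or `npm` are dropped (note that `npx` and tools like `yarn` survive and get linked):

```shell
# Drop paths ending in "node" or "npm"; keep everything else.
printf '%s\n' /opt/node/bin/node /opt/node/bin/npm /opt/node/bin/npx /opt/node/bin/yarn \
    | sed '/node$/d;/npm$/d'
# prints:
# /opt/node/bin/npx
# /opt/node/bin/yarn
```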
What about `npx`?

A colleague suggested to me that the `npx` command should be wrapped too. I used the same script as for `npm`, but without the linking step after install. The final script looks like this:
#!/usr/bin/env bash
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
exec npx "$@"
Complete example
I prepared an example repo with Dockerfiles implementing both approaches, plus some tests proving why the wrapper solution works better4.
The example provides a base image with just nvm on top of Ubuntu, without any version of Node available out of the box. You can use it as a base in a Docker image:
FROM tgagor/base-v2/nvm
# copy sources
COPY ./src ./
COPY .nvmrc ./
# install Node you need, set it as default, upgrade npm to latest version
RUN nvm install --default --latest-npm
# install your app
RUN npm install ...
Use these wrapper scripts in your CI/CD agents and you will be able to use nvm in the same way.
Final words
In conclusion, managing multiple versions of Node can be a hassle, but by creating a simple wrapper script we can turn `nvm` into a regular command and automate exposing the binaries installed via `npm`. With these techniques, we can easily manage multiple Node versions and ensure that our development and CI/CD environments work in the same way.

Hopefully, this article has provided some useful insights and tips for managing Node versions with `nvm`. Happy coding!
What’s pretty interesting: I reviewed a huge part of the Internet looking for solutions like this, and no one seems to do it this way. Am I wrong? Are there better ways to achieve it?