There are plenty of articles about Jenkins on Habré, but few of them show how Jenkins works with Docker agents. All popular project build services can already run builds inside Docker, and today Jenkins is no exception: Jenkins 2 works remarkably well with Docker.
Why did I start solving this problem?
In our company, projects are built directly on the hosts, which leads to a number of problems:
- a large number of runtimes that developers forget to clean up;
- conflicts between different versions of the same runtime;
- every developer needs a different set of components.
There are other problems as well, but let me get to the solution.
Jenkins in Docker
Since Docker is now firmly established in the development world, almost anything can be run with it. My solution is to run Jenkins itself in Docker and let it start other Docker containers. This question was already being raised back in 2013. In short, you just need to install the Docker CLI in the working container and mount the host's /var/run/docker.sock into it.
Here is the Dockerfile I ended up with for Jenkins:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        git \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce && \
    usermod -aG docker jenkins
RUN curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
RUN apt-get clean autoclean && apt-get autoremove --yes && rm -rf /var/lib/{apt,dpkg,cache,log}/
USER jenkins
This gives us a Jenkins container that can execute Docker commands on the host machine.
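As a sketch of how such a container might be started (the image tag, port mappings, and volume name here are my assumptions, not part of the original setup), the key point is bind-mounting the host's Docker socket so the docker CLI inside the container talks to the host daemon:

```shell
# Build the image from the Dockerfile above (tag name is arbitrary)
docker build -t myjenkins .

# Run Jenkins; containers started from inside Jenkins will
# actually run on the host thanks to the mounted socket
docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v jenkins_home:/var/jenkins_home \
    myjenkins
```

Note that mounting docker.sock effectively grants the container root-equivalent access to the host, so this is a trade-off to be aware of.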
Build setup
Not so long ago, Jenkins gained the ability to describe its build rules with a Jenkinsfile stored right in the repository.
So let's also put a dedicated Dockerfile in the repository itself, containing all the libraries needed for the build. That way the developer prepares a repeatable environment himself and no longer needs to ask OPS to install a specific version of Node.JS on the host.
FROM node:12.10.0-alpine
RUN npm install yarn -g
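Assuming this file is saved as Dockerfile.build next to the application code (the same file name the Jenkinsfile further below uses), a developer can reproduce the CI environment on a local machine; the tag here is illustrative:

```shell
# Build the same image the CI would build
docker build -f Dockerfile.build -t project-build:local .

# Run the build exactly as the Jenkins agent would
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app project-build:local \
    sh -c "yarn && yarn build"
```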
This build image is suitable for most Node.JS applications. But what if you need, say, an image for a JVM project with a Sonar scanner baked in? You are free to pick whatever components your build needs.
FROM adoptopenjdk/openjdk12:latest
RUN apt update \
    && apt install -y \
        bash unzip wget
RUN mkdir -p /usr/local/sonarscanner \
    && cd /usr/local/sonarscanner \
    && wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.3.0.1492-linux.zip \
    && unzip sonar-scanner-cli-3.3.0.1492-linux.zip \
    && mv sonar-scanner-3.3.0.1492-linux/* ./ \
    && rm sonar-scanner-cli-3.3.0.1492-linux.zip \
    && rm -rf sonar-scanner-3.3.0.1492-linux \
    && ln -s /usr/local/sonarscanner/bin/sonar-scanner /usr/local/bin/sonar-scanner
ENV PATH $PATH:/usr/local/sonarscanner/bin/
ENV SONAR_RUNNER_HOME /usr/local/sonarscanner/bin/
We have described the build environment, but what does Jenkins have to do with it? Jenkins agents can work with such Docker images and run the build inside them:
stage("Build project") {
    agent {
        docker {
            image "project-build:${DOCKER_IMAGE_BRANCH}"
            args "-v ${PWD}:/usr/src/app -w /usr/src/app"
            reuseNode true
            label "build-image"
        }
    }
    steps {
        sh "yarn"
        sh "yarn build"
    }
}
The agent directive uses the docker property, where you can specify:
- the name of the build container according to your naming policy;
- the arguments needed to start the build container; in our case we mount the current directory as the working directory inside the container.
And in the build steps we specify which commands to execute inside the Docker build agent. This can be anything, so I also run application deployments through Ansible.
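To illustrate the deployment case, such a step boils down to running a playbook inside a container the same way the build runs; the image name deploy-tools and the playbook and inventory paths here are hypothetical, just a sketch of the idea:

```shell
# Hypothetical deployment image with ansible preinstalled;
# playbook and inventory paths are illustrative only
docker run --rm \
    -v "$PWD":/usr/src/app -w /usr/src/app \
    deploy-tools:latest \
    ansible-playbook -i inventory/production deploy.yml
```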
Below is a generic Jenkinsfile that can build a simple Node.JS application.
def DOCKER_IMAGE_BRANCH = ""
def GIT_COMMIT_HASH = ""

pipeline {
    options {
        buildDiscarder(
            logRotator(
                artifactDaysToKeepStr: "",
                artifactNumToKeepStr: "",
                daysToKeepStr: "",
                numToKeepStr: "10"
            )
        )
        disableConcurrentBuilds()
    }
    agent any
    stages {
        stage("Prepare build image") {
            steps {
                sh "docker build -f Dockerfile.build . -t project-build:${DOCKER_IMAGE_BRANCH}"
            }
        }
        stage("Build project") {
            agent {
                docker {
                    image "project-build:${DOCKER_IMAGE_BRANCH}"
                    args "-v ${PWD}:/usr/src/app -w /usr/src/app"
                    reuseNode true
                    label "build-image"
                }
            }
            steps {
                sh "yarn"
                sh "yarn build"
            }
        }
    }
    post {
        always {
            step([$class: "WsCleanup"])
            cleanWs()
        }
    }
}
What happened?
With this method, we solved the following problems:
- build environment configuration time is cut to 10-15 minutes per project;
- a fully reproducible application build environment, since it can also be built on a local machine;
- no conflicts between different versions of build tools;
- always a clean workspace that does not get cluttered.
The solution itself is simple and obvious, and it brings real benefits. Yes, the entry threshold is a bit higher than with plain build commands, but now there is a guarantee that the project will always build, and the developer can pick everything his build process needs himself.
You can also use the image I built.
While this article was being written, a discussion came up about using agents on remote servers, so as not to load the master node, with the help of a plugin.
Source: habr.com