Building a CI/CD pipeline with TeamCity and Docker

TeamCity is a CI/CD solution that automatically runs your builds and tests and lets your team deliver quality software faster, at scale. Now that microservices are so popular, good automation over continuous integration and deployment matters more than ever, and TeamCity covers both parts.

Controlled rollouts, or canary releases, let you release features to a small subset of users at a time to verify functionality before rolling them out to everyone. They are a fundamental piece of an effective CI/CD pipeline, and essential to chaos engineering, because you can roll features out to production, test them there, and confirm functionality and reliability.


Set up TeamCity with Docker and Docker Compose

With the advent of microservices, we want to share how you can build your own CI/CD pipeline using TeamCity. There are plenty of options on the market to consider: Microsoft Azure Pipelines, Jenkins, GitHub Actions, Octopus Deploy, and many others.

In this tutorial, we are going to show you how to create a TeamCity build server on your on-premises server, how to use either Linux or Windows agents, how to integrate it with a Git repository (Bitbucket, GitHub), and how to build projects and packages and deploy them to your target machine(s).

So, first things first: you need a machine. We use a Linux-based machine running Ubuntu 18.04 (Bionic Beaver).

You need Docker up and running on that machine. On top of that, you need to install Docker Compose. 

How To Install and Use Docker on Ubuntu 18.04

First of all, we are going to use Docker Community Edition (CE). There are two editions of Docker: Docker CE (Community Edition) and Docker EE (Enterprise Edition). For small-scale projects, or just for learning purposes, Docker CE is more than enough.

For more information, please follow Docker's documentation, and make sure you meet the prerequisites (Ubuntu 18.04, sudo privileges, access to the Docker repositories, and so on).

We are going to show you the installation steps we have tried using apt (Advanced Package Tool).

1. In a terminal window, enter the command

sudo apt-get update

This updates the local database of software and makes sure you have access to the latest package revisions.

2. Install Docker on Ubuntu 18.04

sudo apt install docker.io

You may encounter issues, so please make sure you have met the prerequisites: any former Docker release is uninstalled, your local package database is updated, you are running the right release of Ubuntu, and you have the privileges to install packages.
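If a previous Docker installation is lingering on the machine, the uninstall command from Docker's documentation removes the legacy packages first; skip it on a clean machine:

sudo apt-get remove docker docker-engine docker.io containerd runc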

The next step is to make the Docker daemon start on each restart of your OS.

3. Run the Docker service at startup

sudo systemctl start docker
sudo systemctl enable docker
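If you want to confirm that the service is actually running and enabled at boot, the standard systemd checks work here:

sudo systemctl status docker
sudo systemctl is-enabled docker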

4. Check your installation

docker --version

If everything went well, the output of the last command shows the version of Docker that has been installed.

Installing docker-compose

Last but not least, you need Docker Compose. It saves you from starting each Docker container manually. The next three commands are self-explanatory:

1. Download a docker-compose release

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

2. Set execution privileges

sudo chmod +x /usr/local/bin/docker-compose

3. Check if docker-compose has been installed properly

docker-compose --version

Creating the docker-compose file

In this docker-compose file, we put NGINX in front of every request: it acts as a small web server that serves requests and routes them to the other containers. We use Let's Encrypt to have SSL on those requests, and we use the official TeamCity Docker images to build our CI/CD server.

So we have an nginx-proxy container, with the NGINX certificates folder mapped to a host folder. That folder is shared with the letsencrypt companion container so that the SSL binding is set up correctly. Please go to your DNS provider and change the DNS A record to route requests for your chosen domain to your new build server.
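Once the A record has propagated, you can check it from any machine; teamcity.example.com stands in for your own domain here:

dig +short teamcity.example.com

The command should print the public IP address of your build server.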

Next to it, we have a TeamCity server container and three agents. In this case, we are going to store our data in a Postgres database.

One caveat: as you can see, we use the DOCKER_IN_DOCKER environment variable, which allows us to build Docker images on our agents. You could outsource the image builds to Docker Hub itself, but for our case and our build frequency it is more than enough to place that work on the agents. (In general, we use two Linux agents on the same server and one Windows machine outside of it.) Keep in mind that TeamCity's free license limits the number of agents.
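As a reference, here is a minimal sketch of what such a docker-compose.yml can look like. It assumes the widely used jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion images for the proxy and SSL part, plus the official jetbrains/teamcity-server, jetbrains/teamcity-agent, and postgres images; the domain, email address, password, and host paths are placeholders to replace with your own values:

version: "3"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: unless-stopped                       # come back up after a reboot
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/certs:/etc/nginx/certs            # certificates folder shared with letsencrypt
      - ./nginx/vhost.d:/etc/nginx/vhost.d
      - ./nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro  # lets the proxy discover the other containers

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - ./nginx/certs:/etc/nginx/certs
      - ./nginx/vhost.d:/etc/nginx/vhost.d
      - ./nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - nginx-proxy

  db:
    image: postgres:12
    restart: unless-stopped
    environment:
      - POSTGRES_DB=teamcity
      - POSTGRES_USER=teamcity
      - POSTGRES_PASSWORD=change-me               # placeholder, use your own secret
    volumes:
      - ./postgres:/var/lib/postgresql/data

  teamcity-server:
    image: jetbrains/teamcity-server
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=teamcity.example.com         # picked up by nginx-proxy
      - VIRTUAL_PORT=8111                         # TeamCity's internal port
      - LETSENCRYPT_HOST=teamcity.example.com     # picked up by the letsencrypt companion
      - LETSENCRYPT_EMAIL=admin@example.com
    volumes:
      - ./teamcity/datadir:/data/teamcity_server/datadir
      - ./teamcity/logs:/opt/teamcity/logs
    depends_on:
      - db

  agent-1:
    image: jetbrains/teamcity-agent
    restart: unless-stopped
    privileged: true                              # required for DOCKER_IN_DOCKER
    environment:
      - SERVER_URL=http://teamcity-server:8111
      - AGENT_NAME=linux-agent-1
      - DOCKER_IN_DOCKER=start                    # starts a nested Docker daemon for image builds
    depends_on:
      - teamcity-server

Only one agent is shown; the other two follow the same pattern with their own names. The restart policy makes sure the whole stack comes back up after a reboot, and the privileged flag is what allows the nested Docker daemon inside each agent to start.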

To allow building images on your agents, you need to run a couple more commands:

sudo chmod 777 /var/run/docker.sock
sudo usermod -a -G docker $USER

Reboot your machine and check that everything is working properly. The Docker daemon, docker-compose, and TeamCity should be up and running, and you can access TeamCity using your domain name.
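Assuming the compose file sketched above is saved as docker-compose.yml in the current directory, bringing the stack up and keeping an eye on it looks like this:

sudo docker-compose up -d
sudo docker-compose ps
sudo docker-compose logs -f teamcity-server

The first command starts all the containers in the background, the second lists their status, and the third tails the TeamCity server log while it finishes its first start-up.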
