In this post, I will talk about Continuous Integration with Java today. When we think of CI systems, we often think of tools like Jenkins, Bamboo or similar. These systems are only a convenient support for a process that involves many aspects of daily development work. I will not explain how to configure Jenkins; instead, I will explain what you can do to optimize the build process so that you obtain a consolidated product every time.
First of all, we need to define “what” we want to do in response to an update in the code. Typically, the main operations are:
1) Perform a build to ensure that the code compiles
2) Perform tests on the released code to ensure that we didn’t introduce any bugs
3) Perform a static quality analysis on the code to ensure that it respects our quality gates
4) Notify other team members of the new code
5) Deliver new artifacts to our repository
6) In a Docker environment, build the new image starting from the built artifacts
7) Tag and push these images to a cloud registry
8) Put these new containers live
9) Generate API documentation and push it to a documentation site
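Wired together in a CI server, the steps above can be sketched as a declarative Jenkins pipeline. This is only an illustrative sketch: the stage names, Maven goals and registry URL are assumptions, not a ready-made configuration.

```groovy
// Hypothetical Jenkinsfile sketching the steps listed above.
pipeline {
    agent any
    stages {
        stage('Build & Test')    { steps { sh 'mvn clean verify' } }
        stage('Quality Gate')    { steps { sh 'mvn sonar:sonar' } }
        stage('Deploy Artifact') { steps { sh 'mvn deploy' } }
        stage('Docker Image')    { steps { sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .' } }
        stage('Push Image')      { steps { sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}' } }
    }
}
```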
2.1 Perform a build to ensure that code compiles
The first thing we need when starting a new project is to choose a build system that can manage dependencies and all build phases; in the Java world, the two main options are Maven and Gradle. Here you can find an interesting comparison between these two systems.
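As a minimal sketch, assuming a Maven project, the build descriptor (pom.xml) declares the project coordinates and dependencies; all coordinates below are placeholders:

```xml
<!-- Minimal pom.xml sketch; groupId/artifactId/version are placeholders -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>my-service</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <dependencies>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.9.3</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
```

With this in place, `mvn clean verify` compiles the code and runs the whole test suite in a single, repeatable command.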
2.2 Perform tests on the released code to ensure that we didn’t introduce any bugs
When we think of automated testing, we usually think of frameworks like JUnit for unit and integration tests. In this article, I spoke about unit and integration tests in a Spring Boot environment. Thanks to Maven integration, we can easily execute tests in every build cycle.
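As a minimal sketch of the kind of logic unit tests protect, here is a hypothetical class checked with plain Java assertions; in a real project the same checks would live in a JUnit test class annotated with @Test and run automatically by Maven during `mvn test`:

```java
// Hypothetical service class; the checks in main() stand in for
// what would normally be @Test methods in a JUnit test class.
public class PriceCalculator {

    // Applies a percentage discount to a price expressed in cents.
    public static long applyDiscount(long priceInCents, int discountPercent) {
        if (discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("discount must be between 0 and 100");
        }
        return priceInCents - (priceInCents * discountPercent / 100);
    }

    public static void main(String[] args) {
        // 20% off 1000 cents -> 800 cents
        System.out.println(applyDiscount(1000, 20)); // prints 800
    }
}
```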
2.3 Perform a static quality analysis on the code to ensure that it respects our quality gates
After making sure that our code compiles and is covered by tests, we must ensure that the code quality is good too. Of course, the code can be reviewed manually by a more experienced developer or a team leader/architect, but when many devs are involved this can soon become a bottleneck in the development process. Fortunately, there is a group of tools like SonarQube that can help us ensure that our software quality respects our company's standards and metrics. This kind of tool can be put into a build pipeline without too much effort and can notify developers of issues or code smells in the code.
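As a sketch, SonarQube analysis can be triggered straight from Maven; the host URL and project key below are placeholders for your own SonarQube instance:

```xml
<!-- Sketch: pointing the SonarQube Maven scanner at a hypothetical server.
     Run the analysis with: mvn clean verify sonar:sonar -->
<properties>
    <sonar.host.url>http://sonarqube.example.com:9000</sonar.host.url>
    <sonar.projectKey>my-service</sonar.projectKey>
</properties>
```

When the analysis runs in the pipeline, a failed quality gate can be used to break the build before anything gets delivered.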
2.4 Notify other team members of the new code
In our day-to-day development activity, we often make use of communication software like Slack, HipChat, Hangouts, etc. All these tools let us interact, but some of them, like Slack, are better suited to developers because they support “webhooks” that allow custom integrations. For example, we can be notified by our VCS, like SVN or Git, that new code has been committed, or that a quality analysis on the code has failed due to a code smell.
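As a sketch of such a custom integration, a build step could post a message to a Slack incoming webhook over plain HTTP; the webhook URL is a placeholder, since real ones are issued per channel by Slack:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: notifying a Slack channel through an incoming webhook.
public class BuildNotifier {

    // Builds the minimal JSON payload Slack incoming webhooks accept.
    static String buildPayload(String message) {
        return "{\"text\":\"" + message.replace("\"", "\\\"") + "\"}";
    }

    // Posts the payload; pass your real webhook URL here.
    static int notifySlack(String webhookUrl, String message) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildPayload(message)))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).statusCode();
    }

    public static void main(String[] args) {
        System.out.println(buildPayload("Build #42 passed"));
    }
}
```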
2.5 Deliver new artifacts to our repository
After being compiled, tested and analyzed, our code is ready to be shipped to our staging area. Before that, we need to package it, typically into a set of WAR/EAR/JAR packages. Thanks to our build management tool we will be able to create a versioned artifact; what we need after that is a place to store these artifacts for further use. The simplest storage location for artifacts I have seen over years of work is an FTP server. Of course, this kind of solution is not easy to maintain and is not managed. Fortunately, in this case too, we have systems like Nexus or Artifactory that can act as a repository for our artifacts. Both of them can act as a Maven repository and proxy, so we can limit outside network access to only the missing artifacts.
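As a sketch, assuming a Nexus (or Artifactory) instance, the target repositories are declared in the pom.xml; the URLs below are placeholders:

```xml
<!-- Sketch: publishing versioned artifacts via `mvn deploy`.
     Repository URLs are placeholders for your own instance. -->
<distributionManagement>
    <repository>
        <id>releases</id>
        <url>https://nexus.example.com/repository/maven-releases/</url>
    </repository>
    <snapshotRepository>
        <id>snapshots</id>
        <url>https://nexus.example.com/repository/maven-snapshots/</url>
    </snapshotRepository>
</distributionManagement>
```

With this in place, `mvn deploy` publishes the versioned artifact to the matching repository; the `id` values must match credentials configured in settings.xml.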
2.6 In a Docker environment, build the new image starting from the built artifacts
When we deploy our solution using Docker containers, we need to prepare images for those containers. Fortunately, thanks to Maven plugins like Spotify's dockerfile-maven-plugin, we can build images during the build phase (or in any other phase we want to bind to).
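As a sketch, Spotify's dockerfile-maven-plugin can be bound to the Maven `package` phase so that every packaged build also produces an image; the repository name below is a placeholder:

```xml
<!-- Sketch: binding Docker image creation to the `package` phase. -->
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.13</version>
    <executions>
        <execution>
            <id>build-image</id>
            <phase>package</phase>
            <goals><goal>build</goal></goals>
        </execution>
    </executions>
    <configuration>
        <repository>registry.example.com/myapp</repository>
        <tag>${project.version}</tag>
    </configuration>
</plugin>
```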
2.7 Tag and push these images to a cloud registry
Once our Docker images are built, we need a registry to push them to and a way to tag them for later reference.
2.8 Put these new containers live
Containers can now be started from the images pushed to the registries. This task can be automated by a script or by systems that perform periodic checks on the version of each container. In an orchestrated staging environment (e.g. a Kubernetes cluster), this task can be very hard to configure properly.
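As a sketch, in a Kubernetes cluster the running containers are described declaratively; the names and the image reference below are placeholders:

```yaml
# Sketch of a Kubernetes Deployment pulling a tagged image from a registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 8080
```

Rolling out a new build then amounts to updating the image tag, for example with `kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.0.1`.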
2.9 Generate API documentation and push them on a documentation site
I think we can speak of a good system only if we also have good documentation. Functional and technical documentation should be kept updated and synchronized with the code so that developers don't have to struggle with it. One of the systems I prefer for documenting API usage is Swagger. Documentation can be written in YAML files or produced from annotations in the code. Using plugins like Springfox, documentation generation can be automated and bound to any build phase, such as “package”.
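As a sketch of the hand-written variant, a Swagger/OpenAPI YAML description looks like the following; the path and fields are placeholders for your real API:

```yaml
# Sketch of a hand-written Swagger/OpenAPI description.
openapi: 3.0.1
info:
  title: My Service API
  version: 1.0.0
paths:
  /orders/{id}:
    get:
      summary: Fetch a single order
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The order was found
```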
API documentation is not the only thing we need when speaking about documentation. Functional documentation and other technical docs can be hosted on an intranet system like Confluence.
This post doesn’t aim to be exhaustive on the topic of continuous integration, but to give an overview of the kinds of systems you can put together to obtain a consolidated and repeatable build procedure. In your day-to-day job you may use only a few of these components, or maybe many more. I hope you found these suggestions interesting, and please feel free to suggest any other examples of build pipeline integrations.