Continuous Integration (CI) in Practice – #3


Continuous integration is an essential tool for developers. When launching programmes into operation, automation and standardisation efficiently prevent mistakes. Before you jump into this article, be sure to read both earlier parts of the series: Continuous Integration (CI) in Practice – #1 and Continuous Integration (CI) in Practice – #2.


There are numerous routine tasks in IT practice that are worth automating, because people are not robots. Certain processes in application development repeat frequently; their repetitive nature is dull, and automated tools, unlike people, perform them the same way every time without careless mistakes. Continuous integration offers a solution to such problems. So what uses could there be?

Continuous integration is, in my opinion, a vital component of software development that can also be exciting and entertaining at times. Jenkins CI, with which I have the most experience, is my favourite tool. There are several tools for continuous integration, build automation and deployment, each with its own set of pros and cons. Selection criteria might vary based on pricing, usability, and the ability to integrate with other tools and systems. But before we get into the specific technologies, let’s talk about continuous integration in general.

Application development proceeds in stages: design, development, and testing. Some stages may run in parallel, and especially during development, several specialists may build more or less interdependent pieces of the application.

The results must be merged into a functional unit for the consumer, which is not as simple as it seems. Each team member delivers work they have tested and can vouch for. However, we do not live in a perfect world: contributions from different people may occasionally conflict, or someone may introduce a mistake into the project through carelessness or ignorance.

Continuous integration helps keep mistakes away from both the client and the system’s users. It involves a number of tools created for this purpose, as well as processes that reduce the error rate.
Both large and small development teams must rebuild the project repeatedly.
This means that the elements produced by various teams (or people) must be combined into a working project or application whose modules communicate with one another smoothly. At the same time, the relevant people must be notified as soon as an error happens (the system notifies authorised personnel automatically).

After all, it would be a problem if, say, Internet banking users were unable to make payments because one of the developers made an implementation error and no one in charge was aware of it. Continuous integration technologies exist to guarantee that this does not occur and that everything runs smoothly as planned.

As a result, the continuous integration process should include automated testing.

Note: Of course, an application build and deployment may also be done manually, though we may debate whether that still counts as “continuous” integration. Having procedures depend on the presence of qualified humans is undesirable, quite apart from the potential error rate (and the time required). So using a tool built for the job makes more sense.

These tools include Travis CI, Teamcity, Atlassian Bamboo, Jenkins CI, and others. How to choose one is a question of personal taste; all of the options provided can handle the sub-tasks indicated in this article.
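Whichever tool you choose, the pipeline it drives boils down to the same few stages. A minimal sketch as a plain shell script (the stage bodies are placeholders, not real build logic):

```shell
#!/bin/sh
# Minimal CI pipeline skeleton; each stage body is a placeholder.
set -e  # abort the pipeline on the first failing stage

checkout() { echo "stage: checkout"; }   # fetch the latest code from SCM
build()    { echo "stage: build"; }      # resolve dependencies, compile
test_app() { echo "stage: test"; }       # run automated tests
deploy()   { echo "stage: deploy"; }     # ship to the target environment

checkout && build && test_app && deploy
```

A real CI tool adds what this script lacks: triggering on every commit, history, and notifications on failure.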


The first step towards continuous integration is downloading the newest code from a source code management (SCM) system.

An SCM system provides the following:

  1. Code verification
  2. Centralization of developed code for submission and verification
  3. Security setup
  4. Hosting

Because of this, implementing SCM is indeed necessary. I could go into more depth about it, but it would make this article too long. This is why I’ll only focus on the most crucial information.

I use the Git version control system; you might use Mercurial SCM or Apache Subversion.
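With Git, that first step can be sketched as follows. A throwaway local repository stands in for the real remote here, so the commands run anywhere without network access; the paths are illustrative:

```shell
set -e
# Create a throwaway "remote" repository locally (stands in for the real one).
rm -rf /tmp/ci-remote /tmp/ci-workspace
git init -q /tmp/ci-remote
cd /tmp/ci-remote
echo "hello" > app.txt
git add app.txt
git -c user.email=ci@example.com -c user.name=ci commit -qm "initial commit"

# What the CI server does on every run: clone (or pull) the newest revision
# into a clean workspace, ready to be built.
git clone -q /tmp/ci-remote /tmp/ci-workspace
cat /tmp/ci-workspace/app.txt
```

On subsequent runs the tool typically keeps the workspace and runs `git pull` instead of a fresh clone.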

A special configuration in the runtime environment, such as login credentials for the production database or API connection keys, should be an exception to the versioning principle. For security reasons, these should not be accessible to all developers, nor should they float around in the “public space.”

In addition to addressing security concerns, we occasionally need to set a particular debugging level that differs from the accepted standard. A local configuration, excluded from versioning, is used for this purpose.
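With Git, one common way to honour this exception is to ignore the real local file and version only a template (the file names below are illustrative, not prescribed by any tool):

```
# .gitignore — keep the machine-specific configuration out of the repository
config/local.php

# Versioned instead: config/local.php.dist, a template with placeholder
# values that each environment copies to config/local.php and fills in.
```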

In any case, versioned code alone may not be enough to run the programme in its “raw form”: after downloading the source code, we must make it executable.


A small project may potentially be a simple application on its own, composed of just a few classes.

In practice, we do not want to develop code from scratch, so it is wiser to start with one of the frameworks (such as Zend, Nette, Spring MVC, etc.) where the fundamental needs, such as security and the separation of the program’s logic from its presentation, have already been addressed. And of course, each framework offers components we can reuse rather than writing them from scratch.

There are many frameworks and libraries available that we may employ. But defining them and downloading them afterwards are issues that must be handled methodically. This is why commands such as gradle build (Gradle), mvn install (Maven) and composer install / composer update (PHP) are used. They fetch the relevant libraries and complete the project build.
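Which of those commands applies depends on the project’s ecosystem. A hedged sketch — the `resolve_deps` helper is illustrative and only the echoed commands are standard tool invocations:

```shell
# Illustrative helper: print the dependency-resolution command a CI job
# would run for a given ecosystem. Not a real tool, just a sketch.
resolve_deps() {
  case "$1" in
    gradle)   echo "gradle build" ;;      # Java/Kotlin: compile and package
    maven)    echo "mvn install" ;;       # Java: build and install artefacts
    composer) echo "composer install" ;;  # PHP: fetch libraries from composer.json
    *)        echo "unknown ecosystem: $1"; return 1 ;;
  esac
}

resolve_deps composer   # prints the command a CI job would run for a PHP project
```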


Applications have a front end (user interface) in addition to programme code. These days, front-end developers often write styles for the LESS and SASS pre-processors. As in the previous case, LESS / SASS documents must be compiled to CSS to get them into a publishable and usable form. We have tools to assist with this, such as Grunt (the JavaScript task runner) and/or Bower (a package manager for the web).


As I already explained, we usually do not receive the code in its final shape right away. We also try to compress the front-end code (CSS, JavaScript, and images), because our secondary goal is to save bandwidth and CPU power (on both the server and the client side).
The same rules apply to programme code and front-end code: we should check the syntax and look for possible errors.
Tools like Grunt (the JavaScript task runner) or Bower (a package manager for the web) handle tasks like minification, compilation, and testing.
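Minification itself is conceptually simple: strip comments and redundant whitespace. A toy illustration in plain shell — real pipelines use dedicated Grunt tasks for this, and the sed pattern below would not survive every real stylesheet:

```shell
# Toy CSS "minification": remove comments, newlines and repeated spaces.
# Only illustrates what a real minifier task does to the shipped file size.
printf 'body {\n  color: red;   /* brand colour */\n}\n' > /tmp/in.css
sed 's|/\*[^*]*\*/||g' /tmp/in.css | tr -d '\n' | tr -s ' ' > /tmp/out.css
wc -c < /tmp/in.css    # original size in bytes
wc -c < /tmp/out.css   # minified size in bytes
```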
The tools mentioned above help us produce a finished application that can be deployed. However, we need to test it before making it available to actual users. Even the most careful coder is prone to making mistakes. And when numerous developers collaborate, the individual components they produce may work well on their own while the combined whole behaves oddly or contains errors. That is why we must inspect and test the code thoroughly.


Depending on the time of initiation, the tests can be split into the following groups:

  1. Pre-commit (prior to handover)
  2. Pre-deploy (prior to deployment)
  3. Post-deploy (post-deployment into the target environment)

Depending on the test type and integration approach, the tests can be started either before the change is integrated, i.e., pre-commit (see, for example, the npm pre-commit package or SonarQube pre-commit analysis), or after integration, i.e., post-commit.
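A pre-commit check can be as small as a Git hook script. A hedged sketch of `.git/hooks/pre-commit` — the lint function here is a placeholder for the project’s real checker (php -l, a SonarQube scan, etc.):

```shell
#!/bin/sh
# .git/hooks/pre-commit — Git runs this before recording a commit.
# The lint function is a placeholder; substitute the project's real checks.
lint() { echo "linting staged files"; }

if ! lint; then
  echo "pre-commit checks failed; commit aborted" >&2
  exit 1   # a non-zero exit status makes Git abort the commit
fi
echo "pre-commit checks passed"
```

Git only aborts the commit when the hook exits non-zero, so every check’s result must feed into the exit status.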

Early-stage tests have nothing to do with continuous integration tools, because the code is tested on the developer’s side. With Eclipse and IntelliJ plugins, SonarQube analysis and pre-commit checks run directly in the developers’ IDE. Pre-commit tests are concerned with the source code. This means that we do not test the running application. Instead, we do syntax checks, ensure coding convention compliance, ensure there are no duplications in the proposed classes, and assess overall code quality.

Later, during pre-deploy, we run unit tests to examine the functioning and correct implementation of individual system units, in the form of classes and functions. Here we can speak of test-driven development, in which tests are written before the code itself. They capture the functionality requirements, and we cannot continue until the code satisfies them. If the tests pass, we may complete the deployment and go on to post-deployment testing.
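The test-first rhythm can be reduced to a very small sketch; the add function and its test below are illustrative, not from the article:

```shell
# TDD in miniature: the test exists first and specifies the behaviour.
test_add() {
  [ "$(add 2 3)" = "5" ] || { echo "FAIL"; return 1; }
  echo "PASS"
}

# Implementation written afterwards, to satisfy the test.
add() { echo $(( $1 + $2 )); }

test_add   # prints PASS once the implementation matches the specification
```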

Post-deploy tests are initiated following successful deployment:

  1. Automated user tests – created with Selenium IDE and run by the CI tool in a Selenium server environment.
  2. Integration tests – evaluate the functionality of the system as a whole.

The nature of some projects necessitates manual testing. These include programmes with security comparable to electronic banking that only allow human users to log in; operations protected by two-factor authentication are best exercised by actual human testers. Many components, such as calculators and forms, can still be evaluated automatically, and simple checks that an application is operative after an update and behaves as expected may be automated as well.

Long-term benefits of automated testing include:

  1. Saved time – compared to time-consuming manual user testing
  2. Repetitiveness – tests are conducted under the same conditions, using the same scenario.

Despite the development team’s best efforts, a build or test run might fail. Additionally, someone must always be informed of a negative outcome (reporting positive outcomes is not always necessary). A notification system, which we shall cover in the second section of the article, addresses this.

Be sure to check other articles on the topic of website development!

