Software is traditionally designed, coded and then tested. However, poor quality coding and a testing phase left to the end of the process can add significant time and risk to a project. Delayed and extended testing has a knock-on effect: the longer bugs remain undiscovered, the more likely it is that developers are building on poor quality software, causing further delays as more bugs are found. The ideal is a process that supports testing as early in the development life cycle as possible and which enables changes to be made quickly.

Sounds like nirvana? It is possible.

Continuous integration (CI) focuses on the ability to build and test an application every time a change has been, or needs to be, made. Manual testing works well with a small code base, but with software at the heart of so many products, this process needs to be automated to cope with the scale of the challenge. Solving the problem of software quality and time to market is an ongoing fight, but CI helps developers and engineers address these issues. There are five elements in the ultimate testing environment:

  • Tools that allow developers to test whenever they need to.
  • Tools that provide visibility of testing completeness and auto-generate test cases for incomplete code snippets.
  • A repository that automates the job scheduling of the integration process.
  • Parallelising and scaling of the test architecture to achieve the fastest build times (sketched in the example below).
  • Overlaying intelligence that determines the smallest set of retests required by a change to the source code.
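
To make the parallelisation element concrete, here is a minimal Python sketch of a job queue feeding a fixed pool of test workers. The suite names and the runner are hypothetical stand-ins; a real CI server such as Jenkins adds scheduling, retries and reporting on top of this basic pattern.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical job list; in practice these would come from the
# job-scheduling repository (the third element above).
TEST_JOBS = [f"suite_{n:02d}" for n in range(1, 21)]

def run_job(name: str) -> str:
    # Stand-in for invoking the real build-and-test tooling for one suite.
    time.sleep(0.1)
    return f"{name}: PASS"

# A fixed pool of workers runs jobs in parallel, shortening the
# overall wall-clock build time (the fourth element above).
with ThreadPoolExecutor(max_workers=9) as pool:
    for line in pool.map(run_job, TEST_JOBS):
        print(line)
```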

Shift-left testing
The ultimate CI engine enables testing early and often in order to prevent ‘Integration Hell’. In applications with millions of lines of code, leaving testing until last is dangerous; developers may be forced to abandon the project, while the company may face severe financial difficulty, and possible penalties, if bugs are discovered too late.

CI is intended to be combined with automated unit testing. Historically, it was thought best to run all unit tests in the developer’s local environment and to confirm they passed before committing code to the repository, to avoid ‘spreading’ broken code. As CI practices have developed, the concept of build servers has emerged to run unit tests automatically, and this has expanded to include the application of continuous QA processes. This improves software quality, reduces time to market and builds a solid foundation for the code’s future.
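
As a rough illustration of what such a build server does, the Python sketch below polls a git checkout and runs the unit tests on every new commit. The repository path is a placeholder and a pytest-style suite is assumed; real build servers add queueing, history and notifications.

```python
import subprocess
import time

def head_commit(repo: str) -> str:
    """Return the current HEAD commit of a local git checkout."""
    out = subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_unit_tests(repo: str) -> bool:
    """Run the repository's unit tests; a pytest-style suite is assumed."""
    return subprocess.run(["python", "-m", "pytest", repo]).returncode == 0

REPO = "/path/to/checkout"            # placeholder path
last_seen = None
while True:
    commit = head_commit(REPO)
    if commit != last_seen:           # a new change has been committed
        ok = run_unit_tests(REPO)     # test immediately, not at the end
        print(f"{commit[:8]}: {'PASS' if ok else 'BROKEN'}")
        last_seen = commit
    time.sleep(60)                    # poll once a minute
```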

Who and what is Jenkins?
Without Jenkins – a server-based, open source CI tool written in Java – an incremental build of an application can take hours to perform and tests can take weeks to run. Jenkins enables continuous testing each time a source code change is made.

Figure 1 – Jenkins Continuous Integration Server

Introduced in early 2005, Jenkins (see Figure 1) has more than 400 plugins that allow it to be used with other coding languages to support the build and test of any project. Jenkins is a ‘job server’, with no bias as to which job needs to be performed.

It provides a distributed testing infrastructure that: allows a list of ‘nodes’ – physical or virtual machines – to be defined; ‘tags’ nodes to indicate the types of job they can run; dispatches jobs to a list of nodes; and reports on job status when complete. Simply put, Jenkins is a ‘butler’, taking instructions in the form of a list of jobs to be run.
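
The Python sketch below models that dispatching behaviour: tagged nodes, a job list and dispatch to the least-loaded matching node. It is a conceptual model only, not Jenkins’ real API, and all node and job names are invented.

```python
from dataclasses import dataclass, field

# A conceptual model of Jenkins' node/tag dispatching, not its real API.
@dataclass
class Node:
    name: str
    tags: set[str]                        # kinds of job this node can run
    jobs: list[str] = field(default_factory=list)

@dataclass
class Job:
    name: str
    requires: str                         # tag the job needs, e.g. "arm"

def dispatch(jobs: list[Job], nodes: list[Node]) -> None:
    """Send each job to the least-loaded node carrying the required tag."""
    for job in jobs:
        candidates = [n for n in nodes if job.requires in n.tags]
        if not candidates:
            print(f"{job.name}: no node tagged '{job.requires}'")
            continue
        target = min(candidates, key=lambda n: len(n.jobs))
        target.jobs.append(job.name)
        print(f"{job.name} -> {target.name}")

nodes = [Node("build-01", {"linux", "docker"}),
         Node("build-02", {"linux"}),
         Node("target-rig", {"arm"})]
jobs = [Job("unit-tests", "docker"), Job("hw-tests", "arm"),
        Job("lint", "linux")]
dispatch(jobs, nodes)
```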

While Jenkins plays a vital role in CI, additional software is needed to manage the overall integration process and to complete a CI test environment. For CI to be most effective, all members of a software development team need to be able to share tests and be kept up to date with release readiness.

However, many current applications are deployed in multiple environments and configurations, so the build and test environment needs to cater for different operating system and hardware combinations. This is usually controlled with configuration files, macros and compiler options. Because it’s critical that testing is completed for each configuration, the CI engine needs tools to manage the process.
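
As a sketch of what that management looks like, the snippet below expands a hypothetical configuration matrix (the operating system, compiler and hardware names are invented) into one build-and-test job per combination:

```python
from itertools import product

# Hypothetical build matrix; real projects would read these
# combinations from configuration files.
OPERATING_SYSTEMS = ["linux", "windows", "rtos"]
COMPILERS = ["gcc", "clang"]
HARDWARE = ["x86_64", "armv8"]

# Every configuration must be tested, so the CI engine expands the
# matrix into one build-and-test job per combination.
for os_name, cc, hw in product(OPERATING_SYSTEMS, COMPILERS, HARDWARE):
    job = f"build-{os_name}-{cc}-{hw}"
    flags = [f"-DTARGET_OS={os_name.upper()}", f"-DTARGET_HW={hw.upper()}"]
    print(job, " ".join(flags))
```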

Adding parallelisation
Organising tests is one part of the CI process, but a parallel testing infrastructure is also crucial. When using Jenkins as the CI server, test targets need to be scrutinised, and a popular choice is Docker, which can replicate the desired target environment. Using Jenkins and Docker, developers can select which environments to test and discover which cases need to be rebuilt and run, based on source code changes. Developers can set up different configurations of the same target environment to run comparable tests, allowing for complete code coverage testing. Docker also makes testing more reliable: having a model of the target environment – sometimes before it is available – means there is less chance of encountering errors when tests run in the production environment.

A complete CI environment needs a platform to bring these elements together. The ideal solution will organise all test cases into groups that map onto the application’s architecture, allowing individual stacks to be tested and pushed forward for system tests.
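
On the Docker side, a minimal sketch using the Docker SDK for Python runs the same placeholder test command against several target images in parallel. The image list and the command are assumptions for illustration; a real job would mount the workspace and invoke the project’s test runner inside each container.

```python
import docker  # Docker SDK for Python (pip install docker)
from concurrent.futures import ThreadPoolExecutor

client = docker.from_env()

# Hypothetical target environments; each image models one deployment
# configuration of the application.
TARGETS = ["ubuntu:22.04", "debian:12", "alpine:3.19"]

def test_in(image: str) -> str:
    # Placeholder test command; returns the container's log output.
    logs = client.containers.run(image, ["echo", f"tests passed on {image}"],
                                 remove=True)
    return logs.decode().strip()

# Run the same suite against every modelled target in parallel.
with ThreadPoolExecutor(max_workers=3) as pool:
    for result in pool.map(test_in, TARGETS):
        print(result)
```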

Test what needs to be tested
With an environment that can carry out a complete suite of tests as quickly as possible, further improvement can be gained by running only those tests needed to restore 100% completeness when the source code changes. The well-known principle of change impact analysis is the final ingredient. A test automation tool such as VectorCAST has a change-based testing feature that automatically identifies the minimum set of tests for each code change. This means that, instead of running 10,000 tests, only a fraction may be needed following a change, reducing testing time from days to minutes.
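
VectorCAST’s implementation is proprietary, but the underlying idea of change impact analysis can be sketched in a few lines of Python, using a hypothetical map from test cases to the source modules they exercise:

```python
# A toy illustration of change impact analysis: map each test case to the
# source modules it exercises, then rerun only tests touching changed code.
# The module and test names are hypothetical.
TEST_DEPENDENCIES = {
    "test_checksum":   {"crc.c"},
    "test_packetiser": {"packet.c", "crc.c"},
    "test_scheduler":  {"sched.c"},
    "test_logging":    {"log.c"},
}

def tests_to_rerun(changed: set[str]) -> set[str]:
    """Select the minimum set of tests affected by the changed modules."""
    return {test for test, deps in TEST_DEPENDENCIES.items()
            if deps & changed}

print(tests_to_rerun({"crc.c"}))   # only the two tests that touch crc.c
```

Scaled up to thousands of test cases, this kind of selection is what turns a full overnight run into a few minutes of targeted retesting.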

Example project

Figure 2 – Jenkins Test Environment Setup

Our test automation tool has an example project of 20 environments containing 122 unit tests. The test environment (see Figure 2) comprises the test automation tool, a Jenkins server and nine slave nodes distributed over three slave servers, each running three Docker containers. The baseline build and execute time is 47 minutes using one slave node and one Docker container to run the complete set of tests.

To demonstrate the power of parallelism, 20 environment test jobs are created using the test automation tool mentioned above, which pushes them into the Jenkins build queue. Jenkins dispatches the first nine jobs to the slave nodes, where each executes in its own Docker container. This continues until the full build is complete. The result is a full distributed rebuild time of 7min 47s – six times faster than the baseline.
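
A back-of-envelope check, assuming the 47-minute serial run divides evenly across the 20 jobs, shows why the measured figure is plausible: nine nodes process 20 jobs in three ‘waves’.

```python
import math

SERIAL_MINUTES = 47          # one node, one container, full test run
JOBS = 20                    # environment test jobs
NODES = 9                    # slave nodes (3 servers x 3 containers)

per_job = SERIAL_MINUTES / JOBS     # ~2.35 min per job, if evenly sized
waves = math.ceil(JOBS / NODES)     # 20 jobs over 9 nodes = 3 waves
estimate = waves * per_job          # ~7 min predicted wall-clock time
measured = 7 + 47 / 60              # 7min 47s measured
print(f"predicted ~{estimate:.1f} min; measured {measured:.2f} min "
      f"= {SERIAL_MINUTES / measured:.1f}x faster")
```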

If we change two modules and start a rebuild, the test automation tool and Jenkins determine which tests need to be rerun, then push the jobs into the queue. The first module needs a complete rebuild and a rerun of its three test cases; the second needs only an incremental rebuild, executing two of its nine cases. Only five of the 122 test cases are rerun, and the rebuild executes in under two minutes.

This demonstrates the power of a parallelised distributed testing environment that supports continuously changing requirements.

This article originally featured in New Electronics, and has been edited for this post. A PDF version of the article can be downloaded here.