Software has become pervasive in the things we use in our everyday lives. It is not only in our office equipment but also in many other things – our cars, the toys we give to our kids, the buses we ride, our home appliances, and the medical equipment that keeps us healthy.

The trend over the last few years has been to move software from the desktop to the cloud, but the Industrial Internet of Things (IIoT) will push software back to the periphery of the network, and more and more products will contain embedded software applications. As a result, software quality will play a big role in determining the winners and losers as this migration continues.

As our lives become more dependent on products whose functionality is controlled by software, the quality of that software has come under scrutiny, particularly in situations where safety, security or human life is at risk if the software fails. For example, the software inside our home appliances now has a safety standard (IEC 60730) to prevent injury or, even worse, a fatality from a software defect. Many of the principles of this standard are inherited from the safety standard (IEC 61508) used in industrial automation systems such as nuclear reactors, robots, oil & gas systems and factory systems.

The biggest challenge software developers face is balancing testing completeness with time-to-market. Often the fear is losing the “first mover advantage” for the sake of testing completeness. However, sacrificing quality for time-to-market is a dangerous choice that can have a significant negative effect on brand value.

How can we quantify the balance between quality and time to market? Here is an example of the normal product life cycle for a software application:

Figure 1 – Example Releases

In this diagram the line labelled 1.0 is the initial release to customers, and the lines moving to the right represent subsequent releases in which bugs are fixed and functionality that was missing from 1.0 is released. The line labelled “?” is the point at which users are happy with the quality and the features of the product. The Quality Deficit sits between the first release of the product and the point at which the market considers the product to be of good quality. Minimising or eliminating the Quality Deficit should be high on the priority list of every organisation that is building software.

A second challenge that development teams face is allocating development resources between requirements, design, coding, and testing. Historically, the workflow has been as shown in this diagram:

Figure 2 – Historical Software Workflow

In most development teams, the highest priority is placed on coding, with less emphasis on the Application Programming Interface (API) and test case design. In fact, groups will generally assign their senior staff to code development and junior staff to testing. This model should be completely reversed. The most valuable software development products are a complete and flexible API, and the test cases that prove the correctness of this API. In the post ‘Technical debt, legacy code and the Internet of Things‘, we discuss strategies for working with legacy code to efficiently define the test cases around the API, which then allows the developer to confidently refactor the code. In this case we call the test cases ‘Characterisation Test Cases’, as they describe the behaviour of the legacy code.

If you develop a well-defined API, and tests that formalise the correct behaviour of this API, then the actual code writing can be done by junior staff, the code can be refactored with confidence, and the resultant quality will be greatly improved.
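As a sketch of the idea, here is what a characterisation test might look like in Python. The legacy pricing function and its discount rules are entirely hypothetical; the point is that each test records what the code *currently does*, so the function can later be refactored without changing its observed behaviour.

```python
# Characterisation tests pin down the observed behaviour of legacy
# code before refactoring. This legacy function is hypothetical.

def legacy_price(quantity, unit_price):
    # Legacy code: quantity discounts were bolted on over the years.
    total = quantity * unit_price
    if quantity >= 100:
        total *= 0.90      # 10% bulk discount
    elif quantity >= 10:
        total *= 0.95      # 5% discount
    return round(total, 2)

# Each test asserts what the code does today, not what we wish it did.
def test_no_discount_below_ten_units():
    assert legacy_price(5, 2.00) == 10.00

def test_five_percent_discount_from_ten_units():
    assert legacy_price(10, 2.00) == 19.00

def test_ten_percent_discount_from_hundred_units():
    assert legacy_price(100, 2.00) == 180.00
```

Once these tests pass against the legacy implementation, the body of `legacy_price` can be rewritten freely: any refactoring that keeps the tests green preserves the behaviour customers already rely on.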

A third challenge is that most groups maintain a variety of test types, and a different group in the organisation “owns” each type of test. It is very common for the developers to create and maintain low-level tests, while the Quality Assurance (QA) department is responsible for the others.

Figure 3 – Testing Types

The QA tests are generally run only after several weeks of development, when hundreds of source changes have been integrated into the code base. This makes finding the root cause of a broken test time-consuming and frustrating. The solution to this challenge is to treat test cases as a valuable asset of the organisation, and leverage them across the entire team and application life cycle. Expose all tests to all team members, make it easy to run them, and run them frequently.
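One simple way to make tests easy for everyone to run is to keep them in a standard framework with a single discovery command. The sketch below, with a hypothetical sensor-clamping function, is runnable by any team member via `python -m unittest`:

```python
# A shared test file any team member can run with "python -m unittest".
# The sensor-clamping function and its limits are hypothetical.
import unittest

def clamp_reading(celsius, low=-40, high=125):
    """Clamp a raw temperature reading to the sensor's rated range."""
    return max(low, min(high, celsius))

class TestClampReading(unittest.TestCase):
    def test_upper_bound_is_enforced(self):
        self.assertEqual(clamp_reading(150), 125)

    def test_lower_bound_is_enforced(self):
        self.assertEqual(clamp_reading(-60), -40)

    def test_in_range_value_passes_through(self):
        self.assertEqual(clamp_reading(21), 21)

if __name__ == "__main__":
    unittest.main()
```

Because the tests live alongside the code and need no special tooling, developers and QA alike can run the same suite after every change rather than waiting weeks for an integration pass.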

Tools that perform Change Impact Analysis when the code base changes can help team members quickly identify which test cases need to be run, and minimise the need to run redundant test cases.
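The core idea behind such tools can be sketched in a few lines: given a record of which source files each test exercises (typically gathered from coverage data), select only the tests whose covered files intersect the change set. The file and test names below are purely illustrative.

```python
# Toy change-impact analysis: pick only the tests affected by a
# change set. The coverage map and names here are illustrative;
# real tools derive this mapping from instrumented test runs.

TEST_COVERAGE = {
    "test_parser.py":  {"parser.c", "tokens.c"},
    "test_network.py": {"socket.c", "tls.c"},
    "test_logging.py": {"log.c"},
}

def impacted_tests(changed_files):
    """Return tests whose covered source files intersect the change set."""
    changed = set(changed_files)
    return sorted(test for test, files in TEST_COVERAGE.items()
                  if files & changed)

print(impacted_tests(["tokens.c", "log.c"]))
# -> ['test_logging.py', 'test_parser.py']
```

Here a change touching only `tokens.c` and `log.c` selects two of the three test files, so `test_network.py` can be safely skipped for that change.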

Today there is an increasingly important role for software quality as the industry adapts to the Industrial Internet of Things and the fourth industrial revolution. If organisations can successfully address the three challenges discussed in this post, they will be able to scale their software development processes to meet the upcoming demands.