The growth of Internet of Things (IoT) devices reflects a growing focus on driving results from sensor-based data and creating analytically rich data sets. These systems can deliver the most value in solving complex logistics, manufacturing, services, and supply chain problems. According to analysis from IHS Markit1, the number of connected IoT devices worldwide will grow by an average of 12 percent annually, from nearly 27 billion in 2017 to 125 billion in 2030. With this rapid growth, and the role of IoT in driving business intelligence and agility, the quality of the software is critical. With over 59%2 of IoT projects running late or being cancelled, time is the most challenging issue: keeping up with shorter release cycles, a growing scope of tests for each iteration, and an increasing number of supported platforms.

When we discuss quality in software, there is no way to establish the quality and correctness of your software other than by testing it. In fact, in systems where quality and correctness are critical (e.g. Advanced Driver Assistance Systems, Flight Control Computers, Pacemakers, etc.), it is typical for almost 50% of the development effort to be spent testing the software. As a result, anything that can be done to accelerate testing can yield significant benefits in developing an IoT device.

1 – Identify non-deterministic tests

One of the quickest ways for a development team to lose confidence in an automated test suite is through unstable or non-deterministic tests. A test is considered non-deterministic when it alternates between passing and failing without any noticeable change in the code, tests, or environment. Tests fail, are re-run, and subsequently pass; the failures appear to be random. This type of non-determinism or instability can plague any kind of test.
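As a minimal sketch of how this happens in practice (the battery function and the time-based rule below are invented for illustration, not taken from any real project), consider a test whose expected result depends on the time of day at which the suite runs. It alternates between passing and failing with no change to the code, the tests, or the environment:

```c
#include <assert.h>
#include <time.h>

/* Stub standing in for a device function under test (hypothetical). */
static int read_battery_level(void) { return 75; }

/* Non-deterministic: the expected value depends on the time of day at
 * which the suite happens to run, so the test alternates between
 * passing and failing without any change to code, tests or environment. */
static void test_battery_flaky(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    int expected_minimum = (t->tm_hour < 12) ? 80 : 50;  /* arbitrary time-based rule */
    assert(read_battery_level() >= expected_minimum);
}

/* Deterministic alternative: the expectation is fixed and derives only
 * from inputs the test itself controls. */
static void test_battery_stable(void)
{
    assert(read_battery_level() >= 50);
}

int main(void)
{
    test_battery_stable();
    (void)test_battery_flaky;   /* deliberately not part of the default run */
    return 0;
}
```

The deterministic variant derives its expectation only from inputs the test controls, which is what makes it usable as a regression test.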

Non-deterministic tests should be identified and fixed or removed as quickly as possible. One of the primary goals of an automated test framework is to act as a bug-detection mechanism when its tests are used as regression tests: when new software is committed, the regression tests are rerun, and if any of them fail it is immediately clear that a failure has been introduced. The smaller the change in software when a test failure occurs, the quicker it is to localise and identify the cause.

If there are non-deterministic tests, every failure needs to be investigated, and when it turns out to be a false positive that time is wasted. Eventually teams will start to ignore the results of non-deterministic tests, and when a real failure occurs it will be incorrectly disregarded as yet another symptom of the test's non-determinism. The issue can then manifest itself in a different area of the system several updates after the actual bug was introduced, at which point detecting the bug becomes extremely costly and time consuming.

Non-deterministic test cases are one of the biggest contributors to Technical Debt.

 

Figure 1 – Cycle for fixing non-deterministic tests

2 – Only run the tests impacted by changes to the code

It’s typical in current regression testing strategies to run the entire test suite to validate the changes that have been made to the software. This activity may be done nightly or weekly, depending on the size of the code base and its associated test cases. In some cases the volume of retesting is so large that the testing activity can take as long as 2-3 weeks! This means the changes introduced between retests can be significant, and hence considerable effort is required to identify the root cause of failing test cases.

To reduce the time taken to assess the quality of software after an update, the concept of categorising test cases was introduced. Tests are often grouped into a number of buckets, such as Smoke Tests or Regression Tests that validate all the workflows. The idea is that running a subset of tests means less test case execution time and faster feedback. However, looking at the example of regression tests that validate all workflows, does it make sense to retest ALL workflows for a minor change?

This is where change impact analysis can significantly cut down the time taken to retest software. The idea is that when a change is made to the software, ONLY the test cases associated with the impacted workflows should be retested. Revisiting the example above, for a minor change where only one or two workflows were impacted, only a small proportion of the regression tests need to be run to validate the change. The identification of these tests is known as ‘Change Impact Analysis’.

Figure 2 – Example of Change Impact Analysis

The benefit here is that significant value can be gained by testing incremental changes: the smaller the delta, the fewer test cases need to be run, and the quicker the feedback on the quality of the change. This enables retesting on code deltas, where the typical delta could be a ‘commit’. When we are able to test this way, it is immediately clear what the root cause of a defect is, and hence what corrective actions are required to rectify it. When integrated with a continuous integration process, this can be automated so that any commit that fails is automatically rejected and pushed back to the developer, while commits with clean test runs are automatically accepted and propagated to the final baseline. The selection and execution of the subset of impacted test cases is known as Risk-Based Testing.
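As a rough illustration of the selection step (the file names, suite names and hard-coded mapping below are purely hypothetical; a real tool would derive the mapping from coverage data or build dependencies), picking the impacted suites for a commit might look like this:

```c
#include <stdio.h>
#include <string.h>

/* Minimal sketch of change impact analysis: a mapping from changed
 * source files to the test suites that exercise them. */
typedef struct {
    const char *source_file;   /* file that was changed          */
    const char *test_suite;    /* suite that exercises that file */
} impact_entry;

static const impact_entry impact_map[] = {
    { "src/login.c",     "tests/workflow_login"     },
    { "src/checkout.c",  "tests/workflow_checkout"  },
    { "src/telemetry.c", "tests/workflow_telemetry" },
};

/* Print only the suites impacted by the files changed in this commit. */
static void select_impacted_suites(const char **changed, size_t n_changed)
{
    for (size_t i = 0; i < sizeof impact_map / sizeof impact_map[0]; i++) {
        for (size_t j = 0; j < n_changed; j++) {
            if (strcmp(impact_map[i].source_file, changed[j]) == 0) {
                printf("run: %s\n", impact_map[i].test_suite);
                break;
            }
        }
    }
}

int main(void)
{
    /* A minor change touching a single workflow... */
    const char *changed[] = { "src/checkout.c" };
    /* ...selects one suite instead of the full regression run. */
    select_impacted_suites(changed, 1);
    return 0;
}
```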

3 – Create and manage test data smartly

Tools used to implement test cases most commonly rely on either a single-test or a data-driven architecture.

Single-test architecture requires a new test driver for each test case or collection of test cases. This means more time is required during test execution. It also means that changes to the code being tested result in several compile/relink cycles, one for each of the impacted test drivers. Additionally, if there are changes to the API or to the types of test case parameters, the result is complex compilation failures and significant time spent correcting these issues. If several test cases are accumulated inside a single test driver, we also lose the ability to apply the change-based testing approach described above without adding significant complexity (and maintenance burden) to the test driver. Complex test drivers are another major contributor to Technical Debt.

In the data-driven architecture, a single test harness is created for all the units under test, and the test cases are defined as data values that are fed into the harness at execution time. When changes are made to the code, only one test driver needs to be rebuilt. When changes are made to parameters, because the test cases are data values, it is much easier to migrate test values to a new parameter, or to discard values that no longer match the new code and report them for review after the import is complete. The data-driven approach also makes it significantly easier to import test case data from other sources such as a model or real transactions. Finally, it makes it easier for testers who are not intimately familiar with the underlying code to quickly understand the interfaces, review the design and requirements, and implement test cases.
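A minimal sketch of the data-driven approach, assuming a trivial unit under test invented for the example: the harness is a single loop, and each test case is just a row in a table, so adding cases never requires a new driver:

```c
#include <stdio.h>

/* Unit under test (stub for illustration): clamps a raw sensor
 * reading to a valid percentage range. */
static int clamp_percent(int raw)
{
    if (raw < 0)   return 0;
    if (raw > 100) return 100;
    return raw;
}

/* Test cases are pure data: adding, removing or importing cases
 * never changes the harness logic below. */
typedef struct {
    int input;
    int expected;
} test_vector;

static const test_vector vectors[] = {
    {  -5,   0 },
    {   0,   0 },
    {  42,  42 },
    { 100, 100 },
    { 250, 100 },
};

int main(void)
{
    size_t failures = 0;
    for (size_t i = 0; i < sizeof vectors / sizeof vectors[0]; i++) {
        int got = clamp_percent(vectors[i].input);
        if (got != vectors[i].expected) {
            printf("case %zu: input %d, expected %d, got %d\n",
                   i, vectors[i].input, vectors[i].expected, got);
            failures++;
        }
    }
    printf("%zu failure(s)\n", failures);
    return failures ? 1 : 0;
}
```

Importing cases from a model or from captured transactions then amounts to generating more rows in the table.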

4 – Leverage the efficiency of Unit-Tests

The testing pyramid tells us that test cases closer to the top of the pyramid are far more brittle. Tests located near the top are typically end-to-end test cases: small changes in the software can have a big impact on the definition of these tests, and hence on the ability to rerun them and get meaningful results. Additionally, system tests may take a long time to run because of the initialisation required to bring the system up to a known state (e.g. start up services, handshake a network connection, etc.), so they require much more effort to maintain over time. Test cases located closer to the bottom of the pyramid are much leaner, and are often implemented at a level where developers understand the test case design intuitively.

Hence, identifying tests that can be pushed to a lower level makes the test suite faster and more reliable. Collaborate more closely with developers, review the unit tests, and add more tests at the unit level; such reviews also help avoid duplicating tests across multiple layers. Add integration tests where applicable to ensure APIs are stable and tolerant of inconsistent inputs, and ensure that only high-level workflows are covered in system tests. A minimal sketch of pushing a check down the pyramid follows the figure below.

 

Figure 4 – The Testing Pyramid
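As that sketch (the telemetry formatter here is invented for illustration), the formatting logic is extracted so it can be exercised directly as a unit test, with no services to start and no connection to handshake; the system test then only needs to confirm the high-level workflow:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Pure formatting logic, extracted so it can be exercised without
 * starting services or opening a network connection (illustrative). */
static int format_telemetry(char *out, size_t len, int device_id, int temp_c)
{
    return snprintf(out, len, "{\"id\":%d,\"temp\":%d}", device_id, temp_c);
}

/* A unit test at the bottom of the pyramid: no start-up sequence, no
 * handshake, millisecond execution, and intuitive to the developer who
 * wrote the formatting code. */
int main(void)
{
    char buf[64];
    int n = format_telemetry(buf, sizeof buf, 7, 21);

    assert(n > 0 && (size_t)n < sizeof buf);
    assert(strcmp(buf, "{\"id\":7,\"temp\":21}") == 0);
    return 0;
}
```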

5 – Make tests run faster

No developer wants to wait hours or days to find out whether their change works. Sometimes the change does not work (nobody is perfect) and the feedback loop needs to run multiple times. A faster feedback loop leads to faster fixes, and if the loop is fast enough, developers may even run the tests before checking in a change. Any time a test case needs to synchronise or wait for some activity to complete before it can progress, we introduce variance into the behaviour of the system and extend the delay before we get the test results.

This is further compounded in the IoT world, where the test cases very often need to interact with a remote device, typically built on a different computer architecture with a different set of resources to the PC where the software is actually being developed. Typically, when we want to run code on the device, we need to compile it with a compiler for the microprocessor of the IoT device (a cross-compiler), download it to the device through a debugger and probe, and finally use some sort of IO mechanism to retrieve the test results from the execution. This exercise can take from several seconds for a small device up to minutes for a much bigger and more complicated one. Scaled across the thousands of test cases that need to be run, it’s easy to see that we spend a lot of time getting ready to run the test cases versus actually running them!

To make tests run faster, we can apply a couple of methods. First, we can develop test cases that are less dependent on the physical hardware of the IoT device; second, we can introduce a simulation environment that provides the code with sufficient stimulus to exercise the required test case, without needing access to the entire system.

Looking at the first method, consider the software for a TCP/IP protocol stack. The TCP/IP stack is the backbone of internet communications and of the connected IoT device, but the logic of the stack is not directly coupled to the physical hardware that carries the protocol. If we want to test the stack, we could therefore run it anywhere, even on a PC, instead of on the physical IoT device. Developers can then run the tests without the need for, and complexity of, the IoT hardware, and because we are not working directly with the device, the test cases avoid the unnecessary delays.
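A minimal sketch of this kind of decoupling, with an invented two-byte length-header "protocol" standing in for the real stack: the logic talks to the hardware only through a small driver interface, so the test links in a fake driver and runs entirely on the host PC:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch: the protocol logic depends only on this small
 * driver interface, not on the physical Ethernet hardware, so the same
 * code can be linked against a fake driver and run on the host PC. */
typedef struct {
    int (*send)(const unsigned char *frame, size_t len);
} net_driver;

/* "Protocol" logic under test: prepends a 2-byte length header and
 * hands the frame to whichever driver it was given. */
static int proto_send(const net_driver *drv, const unsigned char *payload, size_t len)
{
    unsigned char frame[128];
    if (len + 2 > sizeof frame)
        return -1;
    frame[0] = (unsigned char)(len >> 8);
    frame[1] = (unsigned char)(len & 0xFF);
    memcpy(frame + 2, payload, len);
    return drv->send(frame, len + 2);
}

/* Host-side fake driver: captures the frame instead of touching hardware. */
static unsigned char captured[128];
static size_t captured_len;
static int fake_send(const unsigned char *frame, size_t len)
{
    memcpy(captured, frame, len);
    captured_len = len;
    return 0;
}

int main(void)
{
    const net_driver fake = { fake_send };
    const unsigned char payload[] = { 0xAB, 0xCD };

    assert(proto_send(&fake, payload, sizeof payload) == 0);
    assert(captured_len == 4);
    assert(captured[0] == 0x00 && captured[1] == 0x02);  /* length header */
    assert(captured[2] == 0xAB && captured[3] == 0xCD);  /* payload       */
    return 0;
}
```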

In the second method, we can use an instruction set simulator. These simulators run the final binary that would run on the real IoT device by simulating the behaviour of the microprocessor on the local PC. Because we are still running on the local PC, the simulator can also simulate time, so a test that would take several seconds on the physical hardware can run much faster while still validating the desired workflow. As with the first method, we avoid the complexities of working directly with the hardware, so test execution remains fast while we still gain the benefit of running the real binary code.
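A full instruction set simulator is a tool rather than something written inside the tests, but the same principle of simulated time can be sketched at the code level with an injected clock (the watchdog and clock below are invented for illustration): the test advances five-plus simulated seconds without waiting any real ones:

```c
#include <assert.h>

/* Simplified, code-level analogue of simulated time (not a full
 * instruction set simulator): the code under test reads time through an
 * injected clock, so a test can "advance" seconds instantly on the host. */
typedef struct {
    unsigned long now_ms;
} sim_clock;

static unsigned long clock_now(const sim_clock *c)              { return c->now_ms; }
static void          clock_advance(sim_clock *c, unsigned long ms) { c->now_ms += ms; }

/* Logic under test (illustrative): a watchdog that trips if it has not
 * been kicked for 5 seconds. */
typedef struct {
    unsigned long last_kick_ms;
} watchdog;

static void watchdog_kick(watchdog *w, const sim_clock *c) { w->last_kick_ms = clock_now(c); }
static int  watchdog_tripped(const watchdog *w, const sim_clock *c)
{
    return clock_now(c) - w->last_kick_ms > 5000;
}

int main(void)
{
    sim_clock clk = { 0 };
    watchdog wd;

    watchdog_kick(&wd, &clk);
    clock_advance(&clk, 4000);          /* 4 simulated seconds, zero wall time */
    assert(!watchdog_tripped(&wd, &clk));

    clock_advance(&clk, 2000);          /* now 6 s since the last kick */
    assert(watchdog_tripped(&wd, &clk));
    return 0;
}
```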

Conclusion

Detecting bugs earlier in the software process is critical to reducing the time, effort and cost of developing complex systems such as IoT devices. The 5 approaches discussed in this post aim to minimise waste, both in test execution time and in the analysis that follows it. The rules of the game are simple: catch defects as close as possible to where they are introduced, and spend your testing time actually testing.


 

  1. IHS Press Release – October 24, 2017
  2. Aspencore – 2017 Embedded Markets Study – Integrating IoT and Advanced Technology Designs, Application Development & Processing Environments – April 2017