The software quality deficit gap can be thought of as the time taken to evolve a product from its initial release until it is perceived to be of good quality. Its size varies with the number of copies of the software in circulation, how thoroughly it is used and, more importantly, the number of iterations required to fix the bugs plus the time taken between those iterations. A deficit therefore exists until quality is reached; a small deficit might be closed within a few quick iterations, while a large one will require many iterations over a long period to obtain good software quality.
Thus, development managers face a commercial dilemma: do thorough testing and slip the product’s launch date, or do ‘enough’ testing and hit the launch date, knowing that problems will have to be fixed down the road. In traditional software development methodologies, testing is conducted late in the project life cycle, usually during the QA process once all the finished components are being assembled; in the industry this is commonly referred to as integration testing. With this practice, many development teams treat testing as an outsourced function, more than likely offshore to reduce costs.
However, when testing is done this late in the process, the time taken to fix any subsequent issues is usually lengthy. Even in Boehm and Basili’s later work, they reduced the ratio of the cost of fixing bugs late in the development process to 5:1 for non-critical system software, but still supported a ratio of 100:1 for bugs in critical software systems [1]. The situation is protracted because the original developer may have moved on to a different project, and will need time to recall what they were trying to achieve with their code.
To overcome this issue, modern incremental development processes such as Agile and Scrum, promoted by the likes of Google, have made inroads as a solution to this dilemma. While these methods focus on the rapid development of useful software, they can nevertheless fall foul of missing the key point: making sure the software application has been thoroughly tested.
No matter what methodology is used, for many development managers time-to-market remains a top-three pressure pushing them to ship without thorough testing. Yet there is a sea change under way, with development teams increasingly measured on customer satisfaction and quality metrics. The solution for cutting the quality deficit in software development is to eliminate the seven common shortcomings that prevent obtaining software quality:
- Not having a clear set of requirements for the product.
- Not having a clearly defined API for each module with tests for all boundary conditions.
- Not taking a common sense approach to testing at a logical functional level.
- Not using code coverage to ascertain testing completeness.
- Not deconstructing testing in a layered approach until the faults are found, using unit tests only where necessary.
- Not understanding what needs to be re-tested when a change is made.
- Not having an environment where anyone can run any test anytime.
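To illustrate the second shortcoming, a clearly defined API with tests for all boundary conditions, the sketch below uses a hypothetical `clamp` function (not from the source) whose contract is fully stated, then probes each edge of that contract. The pattern, not the function, is the point: test at, just inside, and just outside every boundary the API documents.

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high].

    Raises ValueError if low > high, so the contract has no
    undefined region for callers to stumble into.
    """
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


def test_clamp_boundaries():
    # Interior value passes through unchanged.
    assert clamp(5, 0, 10) == 5
    # Exactly at each bound: still valid, returned as-is.
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
    # Just outside each bound: pulled back to the nearest edge.
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10
    # Degenerate range where low == high collapses to a single value.
    assert clamp(7, 7, 7) == 7
```

Run under any test runner (for example `pytest`), these cases also feed the fourth shortcoming: a coverage tool over this suite shows at a glance whether every branch of the API has been exercised.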
Implementing a software development process that imparts quality to every software application your organisation ships will not happen overnight, nor will it be simple. But no software company can expect customer loyalty if it relies on field usage to surface the majority of software issues. Clearly, building a robust software testing process into your overall development process is the way to deliver quality software from the first version.