Software Testing

The first Ariane 5 launch took place on June 4, 1996. Due to a malfunction in the expendable launch vehicle's (ELV) guidance system control software, the spacecraft was destroyed by its automated self-destruct system just thirty-seven seconds into the mission. This spectacular incident was one of the most high-profile and most expensive software failures in history (although not the first, and by no means the last major project failure).

The inertial guidance system software from the Ariane 4 ELV had been adapted for re-use in Ariane 5, despite the fact that the flight trajectories of the two vehicles were significantly different. Although the updated software was re-designed to take account of the new flight parameters, critical stages of the pre-flight simulation and testing were never undertaken (mainly as a result of a desire to cut costs, and because the additional testing was thought to be unnecessary).

As a result, a critical software error was overlooked, and soon after take-off the booster nozzle deflectors received incorrect data from the guidance control system, sending the spacecraft along a flight path for which the aerodynamic loading would tear it apart. The failure condition was detected, the auto-destruct sequence was triggered, and the vehicle was destroyed. For a full report of the subsequent investigation, see:
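The inquiry report traced the error to an unprotected conversion of a 64-bit floating-point value to a 16-bit signed integer: Ariane 5's steeper trajectory produced a value outside the 16-bit range, and the resulting operand error shut down the inertial reference system. The sketch below illustrates that class of bug; the function name and values are illustrative, not taken from the actual Ada code:

```python
def to_int16_unprotected(value: float) -> int:
    """Convert a float to a 16-bit signed integer, failing on overflow.

    On Ariane 5, the out-of-range case was assumed impossible (it could
    not occur on Ariane 4's trajectory), so no handler was provided and
    the hardware exception went uncaught.
    """
    if not (-32768 <= value <= 32767):
        raise OverflowError(f"{value} does not fit in a 16-bit signed integer")
    return int(value)

# An Ariane 4-like value stays in range, so the conversion succeeds.
print(to_int16_unprotected(20_000.0))      # prints 20000

# Ariane 5's flight profile produced a much larger value.
try:
    to_int16_unprotected(64_000.0)
except OverflowError as exc:
    print("conversion failed:", exc)
```

The point of the example is that the assumption "this value is always in range" was a property of the Ariane 4 trajectory, not of the code, and re-use invalidated it without any test catching the change.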

http://sunnyday.mit.edu/accidents/Ariane5accidentreport.html

Ariane 5 Flight 501 self-destructs


Software development is probably one of the most complex of human endeavours. Software projects often do not go as planned: by some estimates, around eighty percent fail to meet their completion deadlines and cost more than expected, and fewer than fifty percent deliver all of the specified functionality. The missing features are often promised for delivery in a later version in order to placate the customer.

High-profile disasters like the one described above (which cost several hundred million dollars even by conservative estimates) have served to bring the risks associated with large and complex software projects sharply into focus. Software development projects vary in size, complexity, and the nature of the system they are intended to implement.

A relatively small-scale application that has been designed and implemented by a single programmer is obviously going to require less time and effort to test than a huge corporate information system that requires the services of a large team of programmers and analysts over several months or even years. Generally speaking, therefore, the scale of the system to be tested will determine how much effort has to be expended on planning and implementing the test schedule.

The ability to ensure quality requires not only a rigorous approach to design and implementation, but an equally rigorous approach to testing. The most important aspect from the standpoint of quality is that the software should perform exactly as expected, and the only way to ensure that this is the case is to adequately test the software.

Testing should, in fact, be planned before any code is written, and the programmer should write their code with testing in mind.

Performance testing is concerned with the responsiveness of the system, which in turn depends on the efficiency of the underlying code. It also depends, however, on the environment in which the system is running, and the number of users accessing the system during any given time period. A system might work fine for a single user, but how does it perform for twenty users, or a hundred users?
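The single-user versus many-user question can be probed with a simple load-test harness. The sketch below is a minimal illustration of the idea, not a substitute for a dedicated load-testing tool; `handle_request` is a hypothetical stand-in for the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Stand-in for one unit of server work; returns its elapsed time."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulate 10 ms of processing
    return time.perf_counter() - start

def load_test(concurrent_users: int, requests_per_user: int) -> float:
    """Run the workload for many simulated users; report mean latency in seconds."""
    total_requests = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: handle_request(),
                                  range(total_requests)))
    return sum(latencies) / len(latencies)

# Compare responsiveness as the number of simulated users grows.
for users in (1, 20, 100):
    mean = load_test(users, 5)
    print(f"{users:>3} users: mean latency {mean * 1000:.1f} ms")
```

In a real system, `handle_request` would issue an actual request (for example, an HTTP call to a test server), and the interesting result is how the latency distribution degrades as concurrency rises.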

The nature of the test environment will depend on the scale of the application and the environment in which it will be used. An information system for a large corporation may warrant the use of a number of dedicated test machines, allowing tests to be carried out under a range of conditions, and for a variety of hardware configurations. As far as possible, the configuration of test hardware (and the operating system that is used) should accurately reflect the configuration that will be used in the target environment.

Software testing is the subject of an international standard (ISO/IEC/IEEE 29119). The standard comprises five parts, which we have briefly summarised below. The first three parts were published in 2013, with parts four and five being published in 2015 and 2016 respectively.

  1. Test definitions and concepts - introduces the vocabulary on which the standard is based and provides examples of its application; this part of the standard essentially establishes the terms of reference for applying the remaining parts of the standard.
  2. Test processes - establishes a generic model for software testing processes that can be used by organisations carrying out software testing at various organisational levels, and with different software development lifecycle models.
  3. Test documentation - includes templates and examples of the kind of test documentation that should be produced during the testing process.
  4. Test techniques - provides standard definitions of software test design techniques for use in test design and implementation.
  5. Keyword-driven testing - an approach to specifying software tests geared towards the creation of automated testing based on keywords.
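The keyword-driven approach described in part five can be sketched in a few lines: keywords name actions, a keyword library maps them onto implementations, and a test case is simply a table of keywords with arguments, which a generic runner executes. All of the keyword names and data below are illustrative, not drawn from the standard:

```python
# Keyword implementations: each takes a shared state dict plus arguments.
def open_application(state, name):
    state["app"] = name

def enter_text(state, field, text):
    state[field] = text

def verify_field(state, field, expected):
    assert state.get(field) == expected, (
        f"{field!r}: got {state.get(field)!r}, expected {expected!r}")

# Keyword library: maps human-readable keywords to code.
KEYWORDS = {
    "Open Application": open_application,
    "Enter Text": enter_text,
    "Verify Field": verify_field,
}

# A test case is just a table of (keyword, arguments...) rows, which a
# non-programmer could write or a tool could generate.
test_case = [
    ("Open Application", "login-form"),
    ("Enter Text", "username", "alice"),
    ("Verify Field", "username", "alice"),
]

def run(test_case):
    """Execute a keyword table against a fresh state; raise on any failure."""
    state = {}
    for keyword, *args in test_case:
        KEYWORDS[keyword](state, *args)
    return "PASS"

print(run(test_case))   # prints PASS
```

The separation between the keyword table and the keyword library is what makes the approach attractive for automation: test cases can be written and maintained independently of the code that drives the application.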

Further details of the standard can be found on the ISO website.