Interestingly, the process is only slightly slower and more costly (perhaps 15 percent) than the normal ad hoc processes used for commercial software. Since most software failures are caused by mistakes, eliminating mistakes at the earliest possible step is also a relatively inexpensive and reliable way to produce software.
The basic idea is that each step of the design process has outputs. If these outputs are tested for correctness and fixed, normal human mistakes cannot easily grow into dangerous or expensive problems. Most manufacturers follow the waterfall model to coordinate the design effort, but almost all explicitly permit earlier work to be revised. The result is often closer to a spiral model.
For an overview of embedded software see embedded system. The rest of this article assumes familiarity with that information, and discusses differences from commercial embedded systems.
In the U.S., avionic and other aircraft components have safety and reliability standards mandated by the Federal Aviation Regulations, Part 25. These standards are enforced by "designated engineering representatives" of the FAA, who are usually paid by a manufacturer and certified by the FAA.
In the European Union the International Electrotechnical Commission describes "recommended" requirements for safety-critical systems, which are in practice mandatory and are usually adopted without change by the relevant governments. A safe, reliable piece of avionics has a "CE Mark." The regulatory arrangement is remarkably similar to the regulation of fire safety in the U.S. and Canada: the government certifies testing laboratories, and the laboratories certify both manufactured items and organizations.
The regulatory requirements are not complex or burdensome. First, a document must be produced called the "Plan for Software Aspects of Certification." This describes why using software is a good idea and, ideally, how using software is safer than not using it, how much risk the software adds to the aircraft (its level), and how the software will be proven to work. The manufacturer must label each revision of the software unambiguously, and preserve a description of how it can be reproduced, along with the data to do so. Finally, a comprehensive maintenance manual must be produced, comprehensive enough to almost permit an airline to rebuild the unit from bare metal.
Avionic software is viewed as a rather nasty and unpredictable part of the avionics. Generally, an organization is required to commit to a particular method of producing and testing the software, then follow that method to the letter.
In the U.S., the approved software development standard is DO-178B, and it has five levels of compliance, based on how much damage the software can cause. Level A software can crash airplanes. Level B can kill or maim people. Level C can damage property or schedules. Level D can inconvenience the crew. Level E has no effect on safety or costs.
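These levels map naturally onto a simple enumeration in code. The C sketch below is purely illustrative; the identifier names are invented here rather than taken from the standard:

    /* Hypothetical sketch of the DO-178B software levels described
       above; names invented for illustration. */
    typedef enum {
        LEVEL_A,  /* catastrophic: failure can crash the airplane    */
        LEVEL_B,  /* hazardous: failure can kill or maim people      */
        LEVEL_C,  /* major: failure can damage property or schedules */
        LEVEL_D,  /* minor: failure inconveniences the crew          */
        LEVEL_E   /* no effect on safety or costs                    */
    } software_level_t;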
The differences between the levels lie in how much documentation must be preserved about the quality-assurance steps. Generally, a Level A development has a marketing specification, a hazard analysis, and an engineering specification, completed and reviewed before the design. Next there are a design document and a test plan, completed and reviewed before coding. Next, as code is produced, it is reviewed, then unit-tested, then integrated, then unit-tested again, and then integration-tested. At the end, the avionic unit is black-box tested (i.e. its external behavior is tested), and finally acceptance tested, usually with the black-box tests.
The marketing specification describes what the marketing department hopes to sell. It is usually reviewed by senior engineers to make sure it can actually be done.
Projects with substantial human interfaces are usually prototyped or simulated. The video tape should be retained, but the prototype should be retired immediately after testing, because otherwise senior management and customers may come to believe the system is complete.
The hazard analysis takes each block of a block diagram and considers the things that could go wrong with that block. Then the severity and probability of the hazards are estimated.
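A minimal sketch of how such an analysis might be recorded in code, assuming invented severity and probability scales and an illustrative acceptability rule (none of which come from any standard):

    #include <stdio.h>

    /* Hypothetical hazard-log entry; scales and rule are invented. */
    typedef enum { SEV_MINOR, SEV_MAJOR, SEV_HAZARDOUS, SEV_CATASTROPHIC } severity_t;
    typedef enum { PROB_EXTREMELY_IMPROBABLE, PROB_REMOTE, PROB_PROBABLE } probability_t;

    typedef struct {
        const char   *block;    /* block-diagram element under analysis */
        const char   *failure;  /* what could go wrong with that block  */
        severity_t    severity;
        probability_t probability;
    } hazard_t;

    /* Illustrative rule: the more severe the outcome, the less probable
       it must be for the hazard to be acceptable without mitigation. */
    static int hazard_acceptable(const hazard_t *h)
    {
        return (int)h->severity + (int)h->probability <= 2;
    }

    int main(void)
    {
        hazard_t h = { "air data computer", "stale airspeed output",
                       SEV_HAZARDOUS, PROB_REMOTE };
        printf("%s / %s: %s\n", h.block, h.failure,
               hazard_acceptable(&h) ? "acceptable" : "needs mitigation");
        return 0;
    }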
Projects involving military cryptographic security usually include a security analysis, using methods very like the hazard analysis.
The engineering specification is a document that describes what the software should do in foreseeable circumstances. The hazards feed into it to create more requirements. It is usually reviewed for completeness, conformance to the marketing requirements, mitigation of hazards, practicality and testability. If the requirements can't be tested, the software can't be proven to be done.
One of the major differences between avionics and commercial electronics is that everything on an aircraft can usually be semi-automatically tested in the time it takes to turn around the aircraft at an airport. This "built-in test" is a substantial effort that can literally make or break avionics in the commercial marketplace, by affecting its cost of ownership.
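A minimal sketch of the pattern, assuming hypothetical per-subsystem self-test functions (real built-in-test designs vary widely; this only shows the shape of running each check and reporting a summary):

    #include <stdio.h>

    typedef int (*self_test_fn)(void);             /* returns 0 on pass */

    /* Stub self-tests standing in for real hardware checks. */
    static int test_ram(void)   { return 0; }
    static int test_nvram(void) { return 0; }
    static int test_adc(void)   { return 0; }

    static const struct { const char *name; self_test_fn run; } bit_table[] = {
        { "RAM",   test_ram   },
        { "NVRAM", test_nvram },
        { "ADC",   test_adc   },
    };

    /* Run every self-test and report pass/fail; a return value of 0
       means the unit passed all checks. */
    int run_built_in_test(void)
    {
        int failures = 0;
        for (size_t i = 0; i < sizeof bit_table / sizeof bit_table[0]; i++) {
            int rc = bit_table[i].run();
            printf("BIT %-5s %s\n", bit_table[i].name, rc ? "FAIL" : "PASS");
            failures += (rc != 0);
        }
        return failures;
    }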
As soon as the engineering specification is complete, the software design can start. The design describes what software modules exist, what they do, and how they relate. Engineers review it for completeness, accuracy, safety, conformance to the engineering and marketing specifications, and practicality.
As soon as the engineering specification is complete, the test plan can start. The test plan describes a practical method to test every feature, especially the safety-critical ones. Often, the expense of testing can be dramatically reduced by adding a small amount of software and electronics to help the testing. The most common aid is an interface to an external computer, used to simulate inputs and read outputs.
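A hedged sketch of such an interface: a line-oriented command protocol (the commands and framing are invented here) that lets an external computer inject simulated inputs and read back outputs:

    #include <stdio.h>

    /* Hypothetical protocol, invented for illustration:
       "set <channel> <value>"  injects a simulated input
       "get <channel>"          reads back an output       */
    static double channels[16];

    void test_interface_handle(const char *line, char *reply, size_t n)
    {
        int ch;
        double v;
        if (sscanf(line, "set %d %lf", &ch, &v) == 2 && ch >= 0 && ch < 16) {
            channels[ch] = v;
            snprintf(reply, n, "ok");
        } else if (sscanf(line, "get %d", &ch) == 1 && ch >= 0 && ch < 16) {
            snprintf(reply, n, "%g", channels[ch]);
        } else {
            snprintf(reply, n, "err");
        }
    }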
As soon as the engineering specification is complete, writing the maintenance manual can start. There are several levels. A Level D product such as an in-flight entertainment unit (a flying TV) may escape with a schematic and procedures for installation and adjustment. A navigation system, autopilot or engine may have thousands of pages of procedures, inspections and rigging instructions. Documents are now (2003) routinely delivered on CD-ROM, in standard formats that include text and pictures.
The code is written, then programmers exchange the code and review someone else's code. Skilled organizations use a checklist of possible mistakes, and add to it when they find a new mistake. The code is also often examined by programs. Compilers or special checking programs like "lint" check whether data types are compatible with the operations performed on them. Another set of programs measures software metrics, to look for parts of the code that are likely to have mistakes. All the problems are fixed, or at least understood and double-checked.
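As an illustration of the kind of defect such checkers flag, consider the invented program below, where the format string promises an int but receives a double; lint and modern compilers' format warnings both catch this class of mismatch:

    #include <stdio.h>

    int main(void)
    {
        double airspeed_kts = 250.0;
        /* Flagged by lint or a compiler's format checks:
           "%d" expects an int but receives a double, so the
           printed value would be garbage. */
        printf("airspeed = %d\n", airspeed_kts);
        return 0;
    }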
Some code, such as digital filters, graphical user interfaces and inertial navigation systems, is so well understood that software tools have been developed to write it. In these cases, specifications are developed, and boring but reliable software is produced automatically.
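For example, a generated digital filter typically comes out looking like the routine below. This hand-written sketch uses a 4-tap moving average chosen only for illustration; a real tool would compute the coefficients from the filter specification:

    #define FIR_TAPS 4

    /* Fixed-coefficient FIR filter. A generator would emit coefficients
       computed from the specified passband and stopband. */
    static const double fir_coeff[FIR_TAPS] = { 0.25, 0.25, 0.25, 0.25 };
    static double fir_state[FIR_TAPS];

    double fir_filter(double sample)
    {
        double acc = 0.0;
        /* shift the delay line, then accumulate the weighted sum */
        for (int i = FIR_TAPS - 1; i > 0; i--)
            fir_state[i] = fir_state[i - 1];
        fir_state[0] = sample;
        for (int i = 0; i < FIR_TAPS; i++)
            acc += fir_coeff[i] * fir_state[i];
        return acc;
    }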
"Unit test" code is written that should exercise every instruction of the code at least once. A "coverage" tool should be used to verify that every instruction is executed. This test is among the most powerful. It forces detailed review of the program logic, and detects most coding, compiler and some design errors. Some organizations write the unit tests before writing the code, using the software design as a module specification. The unit test code is executed, and all the problems are fixed.
As pieces of code become available, they are added to a skeleton of code, and tested in place to make sure each interface works. This is called integration, and the testing is called integration testing. Usually the built-in-tests of the electronics should be finished first, to begin burn-in and radio emissions tests of the electronics. Next, the most valuable features of the software are integrated. It is very convenient for the integrators to have a way to run small selected pieces of code, perhaps from a simple menu system.
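A hedged sketch of such a menu (the entries are invented for illustration), letting an integrator run selected pieces of code on the target:

    #include <stdio.h>

    /* Stubs standing in for real integrated features. */
    static void run_bit(void)      { puts("running built-in test..."); }
    static void poll_sensors(void) { puts("polling sensor inputs..."); }

    int main(void)
    {
        for (;;) {
            int choice;
            printf("1) built-in test  2) poll sensors  0) quit\n> ");
            if (scanf("%d", &choice) != 1)
                return 0;
            switch (choice) {
            case 1:  run_bit();      break;
            case 2:  poll_sensors(); break;
            case 0:  return 0;
            default: puts("?");      break;
            }
        }
    }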