SWE-219 - Code Coverage for Safety Critical Software

1. Requirements

3.7.4 If a project has safety-critical software, the project manager shall ensure that there is 100 percent code test coverage using the Modified Condition/Decision Coverage (MC/DC) criterion for all identified safety-critical software components.

1.1 Notes

Updated from the Note in NPR 7150.2.

In MC/DC coverage, each entry and exit point is invoked, each decision takes every possible outcome (branch coverage), each condition in a decision takes every possible outcome (i.e., each condition is tested for both “true” and “false”), and every condition in a decision is shown to independently affect the outcome. Any deviations from 100 percent should be reviewed and waived, with rationale, by the Technical Authority (TA).

1.2 History

SWE-219 - Used first in NPR 7150.2D

Rev   SWE Statement

A     (none)

Difference between A and B: N/A

B     (none)

Difference between B and C: N/A

C     (none)

Difference between C and D: First use of this SWE

D     3.7.4 If a project has safety-critical software, the project manager shall ensure that there is 100 percent code test coverage using the Modified Condition/Decision Coverage (MC/DC) criterion for all identified safety-critical software components.



1.3 Applicability Across Classes

This requirement applies to Class A, B, C, and D projects that have safety-critical software (see Section 4, Small Projects).


2. Rationale

All safety-critical software decisions must be tested to protect against loss of crew or vehicle. MC/DC testing represents the minimal set of tests necessary to achieve test coverage over decisions that change the behavior/outcome/output of a computer program. Anything less than MC/DC leaves the risk that a safety-critical decision based on a particular combination of conditions goes untested. Aerospace and space guidance prioritizes safety above all else in the software development life cycle. MC/DC represents a compromise that balances rigor and effort, positioning itself between decision coverage (DC) and multiple condition coverage (MCC). MC/DC requires a much smaller number of test cases than MCC while retaining a high error-detection probability.

  • Similar standards requiring MC/DC testing for safety-critical code:
    • Aircraft - DO-178B (Safety-critical Level A or B)
    • Automotive - ISO 26262 (ASIL D)
    • Nuclear - IEC 61508-3 (SIL 1-3)
    • Spacecraft - NASA NPR 7150.2 (Class A Safety-critical)

3. Guidance

In MC/DC coverage, each entry and exit point is invoked, each decision takes every possible outcome (branch coverage), each condition in a decision takes every possible outcome (i.e., each condition is tested for both “true” and “false”), and every condition in a decision is shown to independently affect the outcome. For a full example of MC/DC testing coverage, see Topic 7.21 - Multi-condition Software Requirements.

MC/DC testing should be performed during the unit test phase and should ensure both that the software unit behaves as required/expected and that all meaningful conditional paths through the code are exercised. A specific type of cyclomatic complexity is not required; it is up to individual projects to determine whether to use strict, normal, or modified cyclomatic complexity.

A unit test tool can be used to identify or generate tests that adhere to the MC/DC criterion. The developer should ensure that the unit is functioning as expected.
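
As a minimal illustration of these criteria (the function, condition names, and test values below are hypothetical examples, not taken from any NASA project), a unit test that achieves MC/DC over a two-condition decision could look like the following sketch. For a decision with N independent conditions, a minimum of N + 1 test cases is typically required.

  #include <assert.h>
  #include <stdbool.h>

  /* Hypothetical safety-critical decision with two conditions (C1, C2). */
  static bool vent_valve_should_open(bool pressure_high, bool auto_mode)
  {
      return pressure_high && auto_mode;
  }

  int main(void)
  {
      /* MC/DC for "C1 && C2" needs N + 1 = 3 test cases:            */
      /* (T,T) vs (F,T) shows C1 independently affects the outcome.  */
      /* (T,T) vs (T,F) shows C2 independently affects the outcome.  */
      /* Each condition and the decision itself take both outcomes.  */
      assert(vent_valve_should_open(true,  true)  == true);
      assert(vent_valve_should_open(false, true)  == false);
      assert(vent_valve_should_open(true,  false) == false);
      return 0;
  }

In practice, these cases would be written in the project's unit test framework and traced back to the software requirement that the decision implements.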

See topic 7.21 - Multi-condition Software Requirements for additional guidance (see "gcov" in the topic). 

3.1 Previously Developed Software and Computing Systems

The project must validate and verify the safety requirements for reused computing system safety items. In addition, a project must validate and verify the safety requirements for third-party products. Using previously developed computing system safety items can reduce development time, because those components have already undergone design and testing. However, analysis of accidents where software was a contributing factor shows the risks in this approach. See also SWE-147 - Specify Reusability Requirements

Previously-developed computing system safety items include:

  • commercial off-the-shelf (COTS) software,
  • government off-the-shelf (GOTS) software, and
  • “reused” software.

Although another vendor may have developed the software or product, reducing the risks of using third-party products remains the responsibility of the project. These risk reduction efforts should include evaluating the differences between

  1. the computing system safety item’s role in the new system, and
  2. its use in the previous system. 

This analysis includes an assessment of any identified issues found during use in the previous system and implementation of all preconditions for its use in the new system.

For third-party computing system safety items, risk reduction efforts should include:

  1. verification of compliance with the developer’s specified uses for third-party products, and
  2. verification of safety requirements for its use in the system.

See also Topic 8.08 - COTS Software Safety Considerations and Topic 7.23 - Software Fault Prevention and Tolerance.

See also SWE-135 - Static Analysis and SWE-190 - Verify Code Coverage.

3.2 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:


3.3 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

SPAN Links

4. Small Projects

This requirement applies to all Class A, B, C, and D projects that have safety-critical software regardless of size.

5. Resources

5.1 References

5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

See references 384, 393, 394, 395, 396, and 397 for tools related to MC/DC. 

Gcov 486 is one tool that can be used to aid in MC/DC testing. 
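
As an illustrative sketch only (the file name and compiler options below are examples; gcov reports statement and branch coverage, and demonstrating full MC/DC generally requires additional analysis or a dedicated structural coverage tool), a typical gcov workflow for the unit tests of a C module might be:

  /*
   * Example gcov workflow (hypothetical file name "test_vent.c"):
   *
   *   gcc --coverage -O0 -o test_vent test_vent.c    build with coverage instrumentation
   *   ./test_vent                                    run the unit tests
   *   gcov -b test_vent.c                            report line and branch coverage
   *
   * The branch summary from "gcov -b" can then be reviewed against the
   * MC/DC analysis for each safety-critical decision.
   */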

6. Lessons Learned

There are currently no Lessons Learned identified for this requirement.

7. Software Assurance


7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Confirm that 100% code test coverage is addressed for all identified safety-critical software components or that software developers provide a technically acceptable rationale or a risk assessment explaining why the test coverage is not possible or why the risk does not justify the cost of increasing coverage for the safety-critical code component.

7.2 Software Assurance Products

  • Software assurance or software engineering status reports
  • Software design analysis
  • Software test analysis
  • Source code quality analysis
  • Evidence of confirmation that requirements for test code coverage, complexity, and testing of support files affecting hazardous systems have been met.
  • Software assurance risk assessment of any software developers' rationale if requirements are not met.


    Objective Evidence

    • Evidence of confirmation that 100% code test coverage is addressed for all identified safety-critical software components, or assurance that software developers provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component.
    • Evidence of confirmation that the values of the safety-critical loaded data, uplinked data, rules, and scripts that affect hazardous system behavior have been tested.

    Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

    • Observations, findings, issues, risks found by the SA/safety person and may be expressed in an audit or checklist record, email, memo or entry into a tracking system (e.g. Risk Log).
    • Meeting minutes with attendance lists or SA meeting notes or assessments of the activities and recorded in the project repository.
    • Status report, email or memo containing statements that confirmation has been performed with date (a checklist of confirmations could be used to record when each confirmation has been done!).
    • Signatures on SA reviewed or witnessed products or activities, or
    • Status report, email or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
      • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
      • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.
    • The specific products listed in the Introduction of 8.16 are also objective evidence as well as the examples listed above.

7.3 Metrics 

    • Software code/test coverage percentages for all identified safety-critical components (e.g., # of paths tested vs. total # of possible paths)
    • Test coverage data for all identified safety-critical software components.
    • # of Source Lines of Code (SLOC) tested vs. total # of SLOC

       Note: Metrics in bold type are required for all projects

See also Topic 8.18 - SA Suggested Metrics.

7.4 Guidance

Test coverage analysis can be considered a two-step process involving requirements-based coverage analysis and structural coverage analysis. The first step analyzes the test cases against the software requirements to confirm that the selected test cases satisfy the specified criteria. The second step ensures that the requirements-based test procedures exercise the code structure to the applicable coverage criteria. When testing a safety-critical computing system safety item, validation and verification must include testing by a test team. This validation and verification should take place within the development cycle and contribute iterative findings to the design of the computing system safety item.

See also Topic 8.57 - Testing Analysis

Confirm that 100% code test coverage is addressed for all identified safety-critical software components or ensure that software developers provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component.

Complete test coverage is needed for safety-critical code. Using untested code in hazardous conditions should not be considered acceptable. The requirement is to confirm that 100% code test coverage has been achieved or addressed for all identified safety-critical software components, or to provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component. If the safety-critical code has not been tested, we should understand why and discuss the risk associated with the hazard activity and the untested code. 

It is recommended that the Modified Condition/Decision Coverage (MC/DC) approach be used; MC/DC is a code coverage criterion commonly used in software testing. See Topic 7.21 - Multi-condition Software Requirements for additional guidance. 

Modified condition/decision coverage (MC/DC) is like condition coverage, but every condition in a decision must be tested independently to reach full coverage. This means that each condition must be executed twice, once evaluating to true and once to false, while the truth values of all other conditions in the decision are held fixed. It must also be shown that each condition independently affects the decision.

With this metric, some combinations of condition results turn out to be redundant and are not counted in the coverage result. A program's coverage is the number of executed statement blocks and non-redundant combinations of condition results, divided by the total number of statement blocks and required condition-result combinations.

Code coverage is a way of measuring the effectiveness of your test cases. The higher the percentage of code covered by testing, the less likely it is to contain bugs compared to code with a lower coverage score. There are three other code coverage types worth considering with MC/DC: Statement coverage, Decision coverage, and Multiple condition coverage.
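
As a hypothetical worked example (the decision and names below are illustrative only, not from the handbook), consider a decision with three conditions, A && (B || C). Multiple condition coverage would require all 2^3 = 8 combinations, decision coverage can be achieved with as few as 2 tests, while MC/DC needs only N + 1 = 4:

  #include <assert.h>
  #include <stdbool.h>

  /* Illustrative three-condition decision: A && (B || C). */
  static bool decision(bool a, bool b, bool c)
  {
      return a && (b || c);
  }

  int main(void)
  {
      /* A minimal MC/DC set uses 4 of the 8 possible combinations:    */
      /*   A shown independent by (T,T,F) vs (F,T,F): outcome T vs F.  */
      /*   B shown independent by (T,T,F) vs (T,F,F): outcome T vs F.  */
      /*   C shown independent by (T,F,T) vs (T,F,F): outcome T vs F.  */
      assert(decision(true,  true,  false) == true);
      assert(decision(false, true,  false) == false);
      assert(decision(true,  false, false) == false);
      assert(decision(true,  false, true)  == true);
      return 0;
  }

The three pairs noted in the comments show each condition independently changing the outcome while the other two are held fixed, which is the property that distinguishes MC/DC from simple condition/decision coverage.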

Why MC/DC?

Aerospace and space guidance prioritizes safety above all else in the software development life cycle. MC/DC represents a compromise that finds a balance between rigor and effort, positioning itself between decision coverage (DC) and multiple condition coverage (MCC). MC/DC requires a much smaller number of test cases than multiple condition coverage (MCC) while retaining a high error-detection probability.

Overview

MC/DC requires all of the following during testing:

  • Each entry and exit point is invoked.
  • Each decision takes every possible outcome.
  • Each condition in a decision takes every possible outcome.
  • Each condition in a decision is shown to affect the outcome independently.

MC/DC is used in the avionics software development guidance DO-178B and DO-178C to ensure adequate testing of the most critical (Level A) software, which is defined as that software which could provide (or prevent failure of) continued safe flight and landing of an aircraft. It is also highly recommended for SIL 4 in Part 3, Annex B, of IEC 61508 (the basic safety publication) and for ASIL D in Part 6 of the automotive standard ISO 26262.

Clarifications

  • Condition - A condition is a leaf-level Boolean expression (it cannot be broken down into simpler Boolean expressions).
  • Decision - A Boolean expression composed of conditions and zero or more Boolean operators. A decision without a Boolean operator is a condition.
  • Condition coverage - Every condition in the program's decision has taken all possible outcomes at least once.
  • Decision coverage - Every entry and exit point in the program has been invoked at least once, and every decision in the program has taken all possible outcomes at least once.
  • Condition/decision coverage - Every entry and exit point in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and every decision in the program has taken all possible outcomes at least once.
  • Modified condition/decision coverage - Every entry and exit point in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and each condition has been shown to affect the decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding all other possible conditions fixed. The condition/decision criterion does not guarantee the coverage of all conditions in the module; in many test cases, some conditions of a decision are masked by other conditions. Using the modified condition/decision criterion, each condition must be shown to act on the decision outcome by itself, with everything else held fixed. The MC/DC criterion is thus much stronger than condition/decision coverage.
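
To make the masking point concrete (a hypothetical example, not taken from the handbook), in the decision below the value of b cannot influence the outcome whenever a is false; a test pair that only toggles b while a is false therefore exercises both values of b without demonstrating its independent effect:

  #include <stdbool.h>

  /* Hypothetical decision "a && b". When a is false, b is masked:         */
  /* (false, true) and (false, false) both yield false, so this pair       */
  /* satisfies condition coverage for b but does not satisfy MC/DC.        */
  /* MC/DC instead requires the pair (true, true) -> true and              */
  /* (true, false) -> false, where only b varies and the outcome changes.  */
  static bool decision(bool a, bool b)
  {
      return a && b;
  }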


Confirm that the values of the safety-critical loaded data, uplinked data, rules, and scripts that affect hazardous system behavior have been tested.

Analyze the software design to ensure:

a. The use of partitioning or isolation methods in the design and code, and

b. That the design logically isolates the safety-critical design elements and data from those that are non-safety-critical (see the sketch below).
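
As a minimal sketch only, assuming a C code base (the module, variable, and function names below are hypothetical), one common way to logically isolate safety-critical data is to give it internal linkage in a dedicated module and expose it only through accessor functions that enforce range checks:

  /* safety_params.c - hypothetical safety-critical data module. */
  #include <stdbool.h>

  /* Safety-critical limit kept at internal (file) scope so that       */
  /* non-safety-critical code cannot reference it directly.            */
  static double max_chamber_pressure_kpa = 5000.0;

  /* Read access only; no unchecked setter is exported.                */
  double safety_get_max_chamber_pressure_kpa(void)
  {
      return max_chamber_pressure_kpa;
  }

  /* Range-checked update, intended to be called only from verified    */
  /* safety-critical code paths.                                       */
  bool safety_set_max_chamber_pressure_kpa(double value_kpa)
  {
      if (value_kpa <= 0.0 || value_kpa > 6000.0) {
          return false;   /* reject out-of-range or invalid values */
      }
      max_chamber_pressure_kpa = value_kpa;
      return true;
  }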

Participate in software reviews affecting safety-critical software products.

Early planning and implementation dramatically ease the developmental burden of these requirements. Depending on the failure philosophy used (fault tolerance, control-path separation, etc.), design and implementation trade-offs will be made. Trying to incorporate these requirements late in the life cycle will impact the project cost, schedule, and quality, and can also impact safety. An integrated design that incorporates software safety features such as those above allows the system perspective to be taken into account, and the design has a better chance of being implemented as needed to meet the requirements in an elegant, simple, and reliable way.

Where conflicts with program safety requirements exist, program safety requirements take precedence.

Additional information can be found in NASA/TM-20205011566, NESC-RP-20-01515, Cyclomatic Complexity and Basis Path Testing Study 377.

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:
