- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. Requirements
3.7.3 If a project has safety-critical software or mission-critical software, the project manager shall implement the following items in the software:
a. The software is initialized, at first start and restarts, to a known safe state.
b. The software safely transitions between all predefined known states.
c. Termination performed by the software functions is performed to a known safe state.
d. Operator overrides of software functions require at least two independent actions by an operator.
e. The software rejects commands received out of sequence when the execution of those commands out of sequence can cause a hazard.
f. The software detects inadvertent memory modification and recovers to a known safe state.
g. The software performs integrity checks on inputs and outputs to/from the software system.
h. The software performs prerequisite checks prior to the execution of safety-critical software commands.
i. No single software event or action is allowed to initiate an identified hazard.
j. The software responds to an off-nominal condition within the time needed to prevent a hazardous event.
k. The software provides error handling.
l. The software can place the system into a safe state.
1.1 Notes
These requirements apply to components that reside in a mission-critical or safety-critical system and that control, mitigate, or contribute to a hazard, as well as to the software used to command hazardous operations/activities.
1.2 History
1.3 Applicability Across Classes
2. Rationale
Implementing safety-critical software or mission-critical software design requirements helps ensure that the systems are safe and that the safety-critical software or mission-critical software requirements and processes are followed.
3. Guidance
This requirement applies to safety-critical software and mission-critical software. These items are design practices that should be followed when developing safety-critical software and mission-critical software.
The software safety requirements contained in NASA-STD-8739.8 (SWEREF-278) for safety-critical software are:
1. Analyze the software requirements and the software design and work with the project to implement NPR 7150.2 requirement items "a" through "l."
2. Assess that the source code satisfies the conditions in the NPR 7150.2 requirement "a" through "l" for safety-critical and mission-critical software at each code inspection, test review, safety review, and project review milestone.
3. Confirm 100% code test coverage is addressed for all identified safety-critical software components or ensure that software developers provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component.
4. Confirm that all identified safety-critical software components have a cyclomatic complexity value of 15 or lower. If not, assure that software developers provide a risk assessment explaining why the cyclomatic complexity value needs to be higher than 15 and why the software component cannot be structured to be lower than 15.
5. Confirm that the values of the safety-critical loaded data, uplinked data, rules, and scripts that affect hazardous system behavior have been tested.
6. Analyze the software design to ensure:
a. Use of partitioning or isolation methods in the design and code,
b. That the design logically isolates the safety-critical design elements and data from those that are non-safety-critical.
7. Participate in software reviews affecting safety-critical software products.
See the software assurance tab for additional guidance material.
Additional specific clarifications for a few of the requirement notes include:
Item a: (The software is initialized, at first start and restarts, to a known safe state.)
When establishing a known safe state, inspections include the state of the hardware and software, operational phase, device capability, configuration, file allocation tables, and boot code in memory.
Item d: (Operator overrides of software functions require at least two independent actions by an operator.)
Multiple independent actions by the operator help to reduce potential operator mistakes.
Item f: (The software detects inadvertent memory modification and recovers to a known safe state.)
Memory modifications may occur due to radiation-induced errors, uplink errors, configuration errors, or other causes. The computing system must be able to detect the problem and recover to a safe state. For example, computing systems may implement error detection and correction, software executable and data load authentication, periodic memory scrub, and space partitioning to protect against inadvertent memory modification. Features of the processor and/or operating system can be utilized to protect against incorrect memory use.
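As an illustration of one common mitigation, the periodic memory scrub mentioned above can be sketched as a CRC check over protected regions. This is a generic sketch, not flight code; the `region_intact` helper and the idea of recording a reference CRC at load time are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Standard CRC-32 (reflected, polynomial 0xEDB88320). */
uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Periodic scrub step: returns 1 if the protected region still matches the
 * CRC recorded at load time, 0 if an inadvertent modification is detected
 * (the caller would then recover to a known safe state). */
int region_intact(const uint8_t *base, size_t len, uint32_t recorded_crc)
{
    return crc32(base, len) == recorded_crc;
}
```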
Item g: (The software performs integrity checks on inputs and outputs to/from the software system.)
The software needs to accommodate both nominal inputs (within specifications) and off-nominal inputs, from which recovery may be required. The software needs to accommodate start-up transient inputs from the sensors. Specify system interfaces clearly and thoroughly, and document the required action or actions to take should an interface check fail.
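A minimal sketch of such an input integrity check follows; the sensor range limits and staleness threshold are hypothetical values chosen for illustration, not from any NASA standard.

```c
#include <stdint.h>

/* Assumed sensor limits and staleness threshold (illustrative only). */
#define TEMP_MIN_C  (-60.0)
#define TEMP_MAX_C  ( 85.0)
#define MAX_AGE_MS  (500u)

typedef enum { INPUT_OK, INPUT_OUT_OF_RANGE, INPUT_STALE } input_status_t;

/* Integrity check on a sensor input: reject stale samples (e.g., start-up
 * transients that have not refreshed) and off-nominal values rather than
 * propagating them into control logic. */
input_status_t check_temp_input(double value_c, uint32_t age_ms)
{
    if (age_ms > MAX_AGE_MS)
        return INPUT_STALE;
    if (value_c < TEMP_MIN_C || value_c > TEMP_MAX_C)
        return INPUT_OUT_OF_RANGE;
    return INPUT_OK;
}
```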
Item h: (The software performs prerequisite checks prior to the execution of safety-critical software commands.)
The requirement is intended to preclude the inappropriate sequencing of commands. Appropriateness is determined by the project and conditions designed into the safety-critical system. Safety-critical software commands are commands that can cause or contribute to a hazardous event or operation. One must consider the inappropriate sequencing of commands (as described in the original note) and the execution of a command in the wrong mode or state. Safety-critical software commands must perform when needed (must work) or be prevented from performing when the system is not in a proper mode or state (must-not work).
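A prerequisite ("must work" / "must not work") check might be sketched as below; the system modes, interlock flag, and FIRE command are hypothetical examples, not taken from the standard.

```c
/* Hypothetical system modes for an arm/fire command sequence. */
typedef enum { MODE_SAFE, MODE_ARMED, MODE_FIRING } sys_mode_t;

typedef struct {
    sys_mode_t mode;
    int interlock_closed;   /* hardware interlock status            */
    int armed;              /* a prior ARM command was accepted     */
} sys_state_t;

/* Prerequisite check executed before the hazardous FIRE command:
 * returns 1 only when the system is in the proper mode and state,
 * so the command is inhibited ("must not work") otherwise. */
int fire_prerequisites_met(const sys_state_t *s)
{
    return s->mode == MODE_ARMED && s->interlock_closed && s->armed;
}
```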
Item j: (The software responds to an off-nominal condition within the time needed to prevent a hazardous event.)
The intent is to establish a safe state following the detection of an off-nominal indication. The safety mitigation must complete between the time the off-nominal condition is detected and the time the hazard would occur without the mitigation. The safe state can either be an alternate state from normal operations or can be accomplished by detecting and correcting the fault or failure within the timeframe necessary to prevent a hazard and continuing with normal operations. The intent is to design software to detect and respond to a fault or failure before it causes the system or subsystem to fail. If failure cannot be prevented, then design in the software's ability to place the system into a safe state from which it can later recover. In this safe state, the system may not have full functionality but will operate with this reduced functionality.
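The timing relationship described above can be sketched as a simple deadline check; the time-to-effect value is a placeholder that would come from the project's hazard analysis.

```c
#include <stdint.h>

/* Assumed time from fault occurrence to hazardous effect, taken from the
 * hazard analysis (the number here is illustrative only). */
#define HAZARD_TIME_TO_EFFECT_MS 200u

/* Returns 1 if a mitigation with the given worst-case execution time,
 * started detect_ms after the fault occurred, completes before the hazard
 * can take effect; returns 0 if the response would be too late. */
int mitigation_in_time(uint32_t detect_ms, uint32_t mitigation_wcet_ms)
{
    return detect_ms + mitigation_wcet_ms < HAZARD_TIME_TO_EFFECT_MS;
}
```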
Item k: (The software provides error handling.)
Error handling is an implementation mechanism or design technique by which software faults and/or failures are detected, isolated, and recovered to correct run-time program execution. The software error handling features that support safety-critical functions must detect and respond to hardware, software, and operational faults and failures and faults in software data and commands from within a program or from other software programs. Minimize common failure modes.
Item l: (The software can place the system into a safe state.)
The system's design must provide sufficient sensors and effectors, and self-checks within the software, to detect and respond to potential system hazards. Identify safe states early in the design and have these fully checked and verified for completeness. A safe state is a system state in which hazards are inhibited and all hazardous actuators are in a non-hazardous state. The system can have more than one safe state. Ensure that failures of dynamic system activities result in the system achieving a known and identified safe state within a specified time.
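A safing routine per this item might be sketched as below; the effectors (a valve and a heater) and the command inhibit are hypothetical illustrations of driving every hazardous actuator to its non-hazardous state.

```c
/* Hypothetical hazardous effectors (illustrative only). */
typedef struct {
    int valve_open;          /* hazardous when open */
    int heater_on;           /* hazardous when on   */
    int commands_inhibited;  /* hazardous commands blocked in safe state */
} effectors_t;

/* Drive every hazardous actuator to its non-hazardous position and inhibit
 * further hazardous commands: one possible known safe state. */
void enter_safe_state(effectors_t *e)
{
    e->valve_open = 0;
    e->heater_on = 0;
    e->commands_inhibited = 1;
}

/* Self-check that the system is actually in the safe state. */
int is_safe_state(const effectors_t *e)
{
    return !e->valve_open && !e->heater_on && e->commands_inhibited;
}
```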
Additional Safety-Critical Software Design guidelines include:
- Minimize complexity - for safety-critical code, any component with a cyclomatic complexity value over 15 should be assessed for testability, maintainability, and code quality.
- Avoid complex flow constructs, such as goto and recursion.
- All loops must have fixed bounds. This prevents runaway code.
- Avoid heap memory allocation.
- Use a minimum of two runtime assertions per function.
- Restrict the scope of data to the smallest possible.
- Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
- Use the preprocessor sparingly.
- Limit pointer use to a single dereference, and do not use function pointers.
- Compile with all possible warnings active; all warnings should then be addressed before the release of the software.
- Appropriate security posture and mindset should be applied to all levels of development.
4. Small Projects
This requirement applies to all projects regardless of size.
5. Resources
5.1 References
- (SWEREF-014) SSP 50038, Revision B, NASA International Space Station Program, 1995.
- (SWEREF-017) Constellation Computing Safety Requirements, CxP 70065, Revision A, 2005.
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-260) This NASA-only resource is available to NASA-users at https://nen.nasa.gov/web/faultmanagement.
- (SWEREF-271) NASA-STD-8719.13, Revision C, 2013-05-07.
- (SWEREF-276) NASA-GB-8719.13, NASA, 2004. Access NASA-GB-8719.13 directly: https://swehb.nasa.gov/download/attachments/16450020/nasa-gb-871913.pdf?api=v2
- (SWEREF-278) NASA-STD-8739.8B, NASA Technical Standard, approved 2022-09-08, superseding NASA-STD-8739.8A.
- (SWEREF-375) IEC 62304:2006, Medical device software — Software life cycle processes A copy of this standard is available from https://www.iso.org/standard/38421.html
- (SWEREF-376) ISO 26262-1:2011, Road vehicles — Functional safety — Part 1: Vocabulary A copy of this standard is available from: https://www.iso.org/standard/43464.html
- (SWEREF-432) For Public Release. (2006). Lessons Learned Reference.
- (SWEREF-521) Public Lessons Learned Entry: 740.
- (SWEREF-603) Carnegie Mellon University course 18-642 updated Fall 2020, Koopman, Phil
5.2 Tools
NASA users can find tools relative to this requirement in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
6. Lessons Learned
6.1 NASA Lessons Learned
Early planning and coordination between software engineering, software safety, and software assurance on the applicability and implementation of the SWE-134 software safety requirements will reduce schedule impacts.
The NASA Lesson Learned database contains the following lessons learned related to safety-critical software:
Deficiencies in Mission Critical Software Development for Mars Climate Orbiter (MCO) (1999). Lesson Number 0740 (SWEREF-521): "The root cause of the MCO mission loss was an error in the "Sm_forces" program output files, which were delivered to the navigation team in English units (pounds-force seconds) instead of the specified metric units (Newton-seconds). Comply with preferred software review practices, identify software that is mission-critical (for which staff must participate in major design reviews, walkthroughs, and review of acceptance test results), train personnel in software walkthroughs, and verify consistent engineering units on all parameters."
6.2 Other Lessons Learned
Demonstration of Autonomous Rendezvous Technology (DART) spacecraft Type A Mishap (SWEREF-432): "NASA has completed its assessment of the DART MIB (Mishap Investigation Board) report, which included a classification review by the Department of Defense. The report was NASA-sensitive but unclassified because it contained information restricted by International Traffic in Arms Regulations (ITAR) and Export Administration Regulations (EAR). As a result, the DART mishap investigation report was deemed not releasable to the public." The LL also "provides an overview of publicly releasable findings and recommendations regarding the DART mishap."
7. Software Assurance
a. The software is initialized, at first start and restarts, to a known safe state.
b. The software safely transitions between all predefined known states.
c. Termination performed by the software functions is performed to a known safe state.
d. Operator overrides of software functions require at least two independent actions by an operator.
e. The software rejects commands received out of sequence when the execution of those commands out of sequence can cause a hazard.
f. The software detects inadvertent memory modification and recovers to a known safe state.
g. The software performs integrity checks on inputs and outputs to/from the software system.
h. The software performs prerequisite checks prior to the execution of safety-critical software commands.
i. No single software event or action is allowed to initiate an identified hazard.
j. The software responds to an off-nominal condition within the time needed to prevent a hazardous event.
k. The software provides error handling.
l. The software can place the system into a safe state.
7.1 Tasking for Software Assurance
1. Analyze the software requirements and the software design and work with the project to implement NPR 7150.2 requirement items "a" through "l."
2. Assess that the source code satisfies the conditions in the NPR 7150.2 requirement "a" through "l" for safety-critical and mission-critical software at each code inspection, test review, safety review, and project review milestone.
3. Confirm 100% code test coverage is addressed for all identified safety-critical software components or ensure that software developers provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component.
4. Confirm that all identified safety-critical software components have a cyclomatic complexity value of 15 or lower. If not, assure that software developers provide a risk assessment explaining why the cyclomatic complexity value needs to be higher than 15 and why the software component cannot be structured to be lower than 15.
5. Confirm that the values of the safety-critical loaded data, uplinked data, rules, and scripts that affect hazardous system behavior have been tested.
6. Analyze the software design to ensure:
a. Use of partitioning or isolation methods in the design and code,
b. That the design logically isolates the safety-critical design elements and data from those that are non-safety-critical.
7. Participate in software reviews affecting safety-critical software products.
7.2 Software Assurance Products
- Software Assurance Status Reports
- Software Design Analysis
- SA analysis of software requirements and design to implement items "a" through "l."
- SA analysis of the design to satisfy "a" and "b" in task 6.
- Source Code Analysis
- Verification Activities Analysis
- SA assessment that source code meets "a" through "l" at inspections and reviews, including any risks and issues.
- Evidence of confirmation that requirements for test code coverage, complexity, and testing of support files affecting hazardous systems have been met.
- SA risk assessment of any software developers' rationale if requirements are not met.
Objective Evidence
- Evidence of confirmation that 100% code test coverage is addressed for all identified safety-critical software components, or that software developers provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component.
- Evidence of confirmation that all identified safety-critical software components have a cyclomatic complexity value of 15 or lower or provide a risk assessment explaining why the cyclomatic complexity value needs to be higher than 15 and why the software component cannot be structured to be lower than 15.
- Evidence of confirmation that the values of the safety-critical loaded data, uplinked data, rules, and scripts that affect hazardous system behavior have been tested.
- NPR 7150.2 and NASA-STD-8739.8 requirements mapping matrices signed by the engineering and SMA technical authorities for each development organization.
7.3 Metrics
- Software cyclomatic complexity # for all identified safety-critical software components;
- Software code/test coverage percentages for all identified safety-critical components (e.g., # of paths tested vs. total # of possible paths)
- Test coverage data for all identified safety-critical software components.
- # of software work product Non-Conformances identified by life-cycle phase over time
- # of Non-Conformances from reviews (Open vs. Closed; # of days Open)
- # of safety-related requirement issues (Open, Closed) over time
- # of safety-related non-conformances identified by life-cycle phase over time
- # of Hazards containing software that has been successfully tested vs. total # of Hazards containing software
- # of Source Lines of Code (SLOC) tested vs. total # of SLOC
Note: Metrics in bold type are required by all projects
7.4 Guidance
The sub-requirements and notes included in the requirement are a collection of best practices for implementing safety-critical software. These sub-requirements apply to components that reside in a safety-critical system and that control, mitigate, or contribute to a hazard, as well as to the software used to command hazardous operations/activities. Software engineering and software assurance disciplines each have specific responsibilities for providing project management with work products that meet the engineering, safety, quality, and reliability requirements of a project.
Step 1- Analyze the software requirements and the software design and work with the project to implement NPR 7150.2 requirement items "a" through "l."
Additional specific clarifications for a few of the requirement notes include:
Item a: (The software is initialized, at first start and restarts, to a known safe state.) When establishing a known safe state, inspections include the state of the hardware and software, operational phase, device capability, configuration, file allocation tables, and boot code in memory.
Item d: (Operator overrides of software functions require at least two independent actions by an operator.) Multiple independent actions by the operator help to reduce potential operator mistakes.
Item f: (The software detects inadvertent memory modification and recovers to a known safe state.) Memory modifications may occur due to radiation-induced errors, uplink errors, configuration errors, or other causes, so the computing system must detect the problem and recover to a safe state. For example, computing systems may implement error detection and correction, software executable and data load authentication, periodic memory scrub, and space partitioning to protect against inadvertent memory modification. Features of the processor and/or operating system can be utilized to protect against incorrect memory use.
Item g: (The software performs integrity checks on inputs and outputs to/from the software system.) The software needs to accommodate both nominal inputs (within specifications) and off-nominal inputs, from which recovery may be required. The software needs to accommodate start-up transient inputs from the sensors. Specify system interfaces clearly and thoroughly, and document the required action or actions to take should an interface check fail.
Item h: (The software performs prerequisite checks prior to the execution of safety-critical software commands.) The requirement is intended to preclude the inappropriate sequencing of commands. Appropriateness is determined by the project and conditions designed into the safety-critical system. Safety-critical software commands are commands that can cause or contribute to a hazardous event or operation. One must consider the inappropriate sequencing of commands (as described in the original note) and the execution of a command in the wrong mode or state. Safety-critical software commands must perform when needed (must work) or be prevented from performing when the system is not in a proper mode or state (must-not work).
Item j: (The software responds to an off-nominal condition within the time needed to prevent a hazardous event.) The intent is to establish a safe state following the detection of an off-nominal indication. The safety mitigation must complete between the time the off-nominal condition is detected and the time the hazard would occur without the mitigation. The safe state can either be an alternate state from normal operations or can be accomplished by detecting and correcting the fault or failure within the timeframe necessary to prevent a hazard and continuing with normal operations. The intent is to design software to detect and respond to a fault or failure before it causes the system or subsystem to fail. If failure cannot be prevented, then design in the software's ability to place the system into a safe state from which it can later recover. In this safe state, the system may not have full functionality but will operate with this reduced functionality.
Item k: (The software provides error handling.) Error handling is an implementation mechanism or design technique by which software faults and/or failures are detected, isolated, and recovered to correct run-time program execution. The software error handling features that support safety-critical functions must detect and respond to hardware, software, and operational faults and failures and faults in software data and commands from within a program or from other software programs. Minimize common failure modes.
Item l: (The software can place the system into a safe state.) The system's design must provide sufficient sensors and effectors, and self-checks within the software, to detect and respond to potential system hazards. Identify safe states early in the design and have these fully checked and verified for completeness. A safe state is a system state in which hazards are inhibited and all hazardous actuators are in a non-hazardous state. The system can have more than one safe state. Ensure that failures of dynamic system activities result in the system achieving a known and identified safe state within a specified time.
Step 2 - Assess that the source code satisfies the conditions in the NPR 7150.2 requirement "a" through "l" for safety-critical and mission-critical software at each code inspection, test review, safety review, and project review milestone.
Step 3 - Confirm 100% code test coverage is addressed for all identified safety-critical software components or ensure that software developers provide a risk assessment explaining why the test coverage is not possible for the safety-critical code component.
Complete test coverage is needed for safety-critical code; using untested code in hazardous conditions should not be considered acceptable. The requirement is to confirm that 100% code test coverage has been achieved or addressed for all identified safety-critical software components, or to provide a risk assessment explaining why that test coverage is not possible for the safety-critical code component. If safety-critical code has not been tested, the project should understand why and discuss the risk associated with the hazard activity and the untested code. The Modified Condition/Decision Coverage (MC/DC) approach, a code coverage criterion commonly used in software testing, is recommended. See topic 7.21 - Multi-condition Software Requirements for additional guidance.
Modified condition/decision coverage (MC/DC) is like condition coverage, but every condition in a decision must be tested independently to reach full coverage. This means that each condition must be executed twice, with the results true and false, but with no difference in the truth values of all other conditions in the decision. Also, it needs to be shown that each condition independently affects the decision.
With this metric, some combinations of condition results turn out to be redundant and are not counted in the coverage result. A program's coverage is the number of executed statement blocks and non-redundant combinations of condition results, divided by the number of statement blocks and required condition result combinations.
Code coverage is a way of measuring the effectiveness of your test cases. The higher the percentage of code covered by testing, the less likely it is to contain bugs compared to code with a lower coverage score. There are three other code coverage types worth considering with MC/DC: Statement coverage, Decision coverage, and Multiple condition coverage.
Why MC/DC?
MC/DC Coverage video example (SWEREF-603).
Aerospace and space guidance prioritizes safety above all else in the software development lifecycle. MC/DC represents a compromise that finds a balance between rigor and effort, positioning itself in between decision coverage (DC) and multiple condition coverage (MCC). MC/DC requires a much smaller number of test cases than multiple condition coverage (MCC) while retaining a high error-detection probability.
Overview
MC/DC requires all of the below during testing:
- Each entry and exit point is invoked.
- Each decision takes every possible outcome.
- Each condition in a decision takes every possible outcome.
- Each condition in a decision is shown to affect the outcome of the decision independently.
- The independence of a condition is shown by proving that only one condition changes at a time.
MC/DC is used in the avionics software development guidance DO-178B and DO-178C to ensure adequate testing of the most critical (Level A) software, which is defined as software whose anomalous behavior could prevent continued safe flight and landing of an aircraft. It is also highly recommended for SIL 4 in Part 3, Annex B, of the basic safety publication IEC 61508 and for ASIL D in Part 6 of the automotive standard ISO 26262 (SWEREF-376).
Clarifications
- Condition - A condition is a leaf-level Boolean expression (it cannot be broken down into simpler Boolean expressions).
- Decision - A Boolean expression composed of conditions and zero or more Boolean operators. A decision without a Boolean operator is a condition.
- Condition coverage - Every condition in the program's decision has taken all possible outcomes at least once.
- Decision coverage - Every entry and exit point in the program has been invoked at least once, and every decision in the program has taken all possible outcomes at least once.
- Condition/decision coverage - Every entry and exit point in the program has been invoked at least once. Every condition in a decision in the program has taken all possible outcomes at least once, and every decision in the program has taken all possible outcomes at least once.
- Modified condition/decision coverage - Every entry and exit point in the program has been invoked at least once. Every condition in a decision in the program has taken all possible outcomes at least once, and each condition has been shown to affect that decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding fixed all other possible conditions. The condition/decision criterion does not guarantee the coverage of all conditions in the module. In many test cases, some conditions of a decision are masked by the other conditions. Using the modified condition/decision criterion, each condition must be shown to act on the decision outcome by itself, everything else being held fixed. The MC/DC criterion is thus much stronger than the condition/decision coverage.
An example
Assume we want to test the following code extract:
if ( (A || B) && C )
{
/* instructions */
}
else
{
/* instructions */
}
A, B, and C represent atomic boolean conditions (i.e., not divisible into simpler boolean sub-expressions).
In order to ensure the Condition coverage criterion for this example, A, B, and C should each be evaluated at least once to "true" and once to "false" during tests, which would be the case with the 2 following tests:
- A = true / B = true / C = true
- A = false / B = false / C = false
In order to ensure the Decision coverage criterion, the decision ( (A || B) && C ) should also be evaluated at least once to "true" and once to "false". Indeed, in our previous test cases:
- A = true / B = true / C = true ---> decision is evaluated to "true"
- A = false / B = false / C = false ---> decision is evaluated to "false"
and Decision coverage is also realized.
However, these two tests do not ensure Modified condition/decision coverage, which requires that each boolean condition be evaluated once to "true" and once to "false" while independently affecting the decision's outcome; that is, changing the value of only that condition changes the decision's outcome. With only the two previous tests, it is impossible to know which condition influences the decision's evaluation.
In practice, for a decision with n boolean conditions, we have to find at least n+1 tests in order to be able to ensure modified condition/decision coverage. As there are 3 boolean conditions (A, B, and C) in our example, we can (for instance) choose the following set of tests:
- A = false / B = false / C = true ---> decision is evaluated to "false"
- A = false / B = true / C = true ---> decision is evaluated to "true"
- A = false / B = true / C = false ---> decision is evaluated to "false"
- A = true / B = false / C = true ---> decision is evaluated to "true"
Indeed, in this case:
- between the 1st and 4th test scenarios, only A changed value, which also made the decision's outcome change ("false" in the 1st case, "true" in the 2nd);
- in the same way, between the 1st and 2nd, only B changed value, which also made the decision's outcome change (passing from "false" to "true");
- finally, between the 2nd and 3rd, only C changed value, and the decision's outcome also changed (passing from "true" to "false").
Besides, the Decision and Condition coverage criteria are still respected (each boolean condition and the decision's outcome itself each take the values "true" and "false" at least once). The modified condition/decision coverage is then ensured.
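The four MC/DC test vectors above can be checked mechanically with a small harness that mirrors the code extract's decision:

```c
/* Mirrors the decision ((A || B) && C) from the code extract, so each of
 * the four MC/DC test vectors can be asserted directly. */
int decision(int a, int b, int c)
{
    return (a || b) && c;
}
```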
Step 4 - Confirm that all identified safety-critical software components have a cyclomatic complexity value of 15 or lower. If not, assure that software developers provide a risk assessment explaining why the cyclomatic complexity value needs to be higher than 15 and why the software component cannot be structured to be lower than 15.
The requirement is to minimize risk, minimize testing, and increase reliability associated with safety-critical software code components, thus reducing the chance of software failure during a hazardous event. A section of source code's cyclomatic complexity is the number of linearly independent paths within it, where "linearly independent" means that each path has at least one edge that is not in one of the other paths. For instance, if the source code contained no control flow statements (conditionals or decision points), the complexity would be 1, since there would be only a single path through the code. If the code had one single-condition IF statement, there would be two paths through the code: one where the IF statement evaluates to TRUE and another where it evaluates to FALSE, so the complexity would be 2. Two nested single-condition IFs, or one IF with two conditions, would produce a complexity of 3.
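The path counts described above can be made concrete with three tiny functions; the function bodies are illustrative only.

```c
/* No control flow: one path, cyclomatic complexity 1. */
int straight(int x)
{
    return x + 1;
}

/* One single-condition IF: two paths (TRUE, FALSE), complexity 2. */
int one_if(int x)
{
    if (x > 0)
        return 1;
    return 0;
}

/* One IF with two conditions (equivalent to two nested single-condition
 * IFs): three linearly independent paths, complexity 3. */
int two_conditions(int x, int y)
{
    if (x > 0 && y > 0)
        return 1;
    return 0;
}
```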
Purpose:
- Limiting complexity during development
- Measuring the "structuredness" of a program - concerned with determining how well the program's control flow graph can be reduced to structured programming constructs.
- Implications for software testing - Another application of cyclomatic complexity is determining the number of test cases necessary to achieve thorough test coverage of a particular module.
- Correlation to the number of defects - Some studies find a positive correlation between cyclomatic complexity and defects: functions and methods with the highest complexity tend to contain the most defects. However, the correlation between cyclomatic complexity and program size (typically measured in lines of code) has been demonstrated many times. Studies that controlled for program size (i.e., comparing modules with different complexities but similar size) are generally less conclusive, with many finding no significant correlation while others do find one. Some researchers question the validity of the methods used by the studies that found no correlation.
- Code maintainability
- Reduces the coupling of code. The higher the cyclomatic complexity number, the more coupled the code is. Highly coupled code cannot be modified easily and independently of other code.
- Ease of understanding the code increases as the complexity decreases. With a higher complexity number, the programmer has to deal with more control paths in the code, which leads to more unexpected results and defects.
- Ease of testing. If a method has a cyclomatic complexity of 10, there are 10 independent paths through the method. This implies that at least 10 test cases are needed to test all the different paths through the code. The lower the number, the easier it is to test.
What is Cyclomatic Complexity?
Cyclomatic complexity is a software metric used to measure the complexity of a program. It measures the number of independent paths through program source code, where an independent path is defined as a path with at least one edge that has not been traversed in any other path. Cyclomatic complexity can be calculated for functions, modules, methods, or classes within a program.
Thomas J. McCabe developed this metric in 1976, and it is based on a control flow representation of the program. Control flow depicts a program as a graph that consists of Nodes and Edges.
In the graph, nodes represent processing tasks while edges represent control flow between the nodes.
Flow graph notation for a program:
Flow graph notation for a program defines several nodes connected through edges; flow diagrams are typically drawn for constructs such as if-else, while, until, and a normal sequence of flow.
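The node/edge description above corresponds to McCabe's standard formula M = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single routine). A minimal sketch for the flow graph of a single if/else statement:

```python
# Cyclomatic complexity from an assumed control-flow graph of one if/else.
# Nodes: entry, decision, then-branch, else-branch, exit.

edges = [
    ("entry", "decision"),
    ("decision", "then"),   # condition true
    ("decision", "else"),   # condition false
    ("then", "exit"),
    ("else", "exit"),
]
nodes = {n for edge in edges for n in edge}

E, N, P = len(edges), len(nodes), 1  # P = connected components
M = E - N + 2 * P
print(M)  # 2: the two independent paths through an if/else
```

The result, 2, matches the earlier counting argument: one decision point yields two linearly independent paths.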
The application of the requirement is to limit the complexity of routines during program development: programmers should measure the cyclomatic complexity of the modules they are developing and split a module into smaller modules whenever its complexity exceeds 15. The NIST Structured Testing methodology adopted this practice, observing that the original limit of 10 had received substantial corroborating evidence but that there are occasional reasons for going beyond the agreed-upon limit. It phrased its recommendation as: "For each module, either limit cyclomatic complexity to 15 or provide a written explanation of why the limit was exceeded."
Several studies have investigated the correlation between cyclomatic complexity numbers and the frequency of defects occurring in a function or method. Studies have found a positive correlation between cyclomatic complexity and defects: functions and methods with the highest complexity also tend to contain the most defects. Reflecting this, international safety standards like ISO 26262 and IEC 62304 mandate coding guidelines that enforce low code complexity.
Use of Cyclomatic Complexity:
- Limit code complexity.
- Determine the number of test cases required.
- Determining the independent path executions, which has proven very helpful for developers and testers.
- It can make sure that every path has been tested at least once.
- This helps to focus more on uncovered paths.
- Code coverage can be improved.
- The risk associated with the program can be evaluated.
- Using these metrics early in the program helps reduce risk.
Higher cyclomatic complexity numbers are bad, and lower numbers are good: code with high complexity is difficult to test and likely to produce errors, while code with low complexity is easier to test and less likely to produce errors.
The following table gives an overview of complexity numbers and their meaning:

| Complexity Number | Meaning |
|---|---|
| 1-10 | Structured and well-written code; high testability; low cost and effort |
| 10-20 | Complex code; medium testability; medium cost and effort |
| 20-40 | Very complex code; low testability; high cost and effort |
| >40 | Not at all testable; very high cost and effort |
For safety-critical code, anything over 15 should be assessed for testability, maintainability, and code quality.
If the safety-critical software components have a cyclomatic complexity value of 16 or higher, then work with engineering to provide a risk assessment showing why the cyclomatic complexity value needs to be higher than 15 and why the software component cannot be structured to be 15 or lower.
Step 5: Confirm that the values of the safety-critical loaded data, uplinked data, rules, and scripts that affect hazardous system behavior have been tested.
Step 6: Analyze the software design to ensure:
a. Use of partitioning or isolation methods in the design and code,
b. That the design logically isolates the safety-critical design elements and data from those that are non-safety-critical.
Step 7. Participate in software reviews affecting safety-critical software products.
Early planning and implementation dramatically ease the developmental burden of these requirements. Depending on the failure philosophy used (fault tolerance, control-path separation, etc.), design and implementation trade-offs will be made. Trying to incorporate these requirements late in the life cycle will impact project cost, schedule, and quality. It can also impact safety: an integrated design that incorporates software safety features such as those above allows the system perspective to be taken into account, and the design has a better chance of being implemented as needed to meet the requirements in an elegant, simple, and more reliable way.
Note that where conflicts with program safety requirements exist, program safety requirements take precedence.