
This topic is generic and applies to all SWEHB versions. All links to version-specific SWEs and Topics have been removed. SWEs and Topics appear in green bold underlined text to make them easy to find.

Refer to the appropriate Requirements or Topics buttons in the SWEHB version you are using to locate the SWE or Topic you need.

8.24 - Software Assurance Risk

1. Introduction and Chart

This chart summarizes software assurance (SA) risks by risk phase/area.

Legends

Phase / Area 

  • Prog - Programmatic 
  • Req - Requirements
  • Design - Design
  • Code
  • Test - Testing
  • Ops - Operations

Risk Levels 

Risk levels are color coded for the reviews in which they are discussed. 

Review Types

  • MCR = Mission Concept Review
  • SRR = System Requirements Review
  • MDR = Mission Definition Review
  • SDR = System Definition Review
  • PDR = Preliminary Design Review
  • CDR = Critical Design Review
  • SIR = System Integration Review
  • TRR = Test Readiness Review
  • SAR = System Acceptance Review
  • ORR = Operational Readiness Review

All data is also available in an Excel Spreadsheet: Software Assurance Risks v3.xlsx

1.1 Development Risks


Risk | Phase / Area | MCR | SRR | MDR | SDR | PDR | CDR | SIR | TRR | SAR | ORR

R001 - Incomplete code peer reviews

The risk arises from the high likelihood of undiscovered software defects and diminished code quality due to insufficient code peer reviews, which are critical for identifying and resolving issues early in the software development lifecycle.

Click here for "What is the risk?": R001 - Incomplete Code Peer Reviews

Code









R002 - Incomplete implementation of static code analysis results

The incomplete implementation or omission of static code analysis poses a significant risk of undiscovered software defects, reduced code quality, missed schedule milestones, and increased operational costs.

Click here for "What is the risk?": R002 - Incomplete Implementation Of Static Code Analysis Results

Code









R003 - Software configuration management system

Software code is not in a configuration management system before the start of software testing. 

Click here for "What is the risk?": R003 - Software Configuration Management System

Code




 

 

 

 

 

R004 - Software audit findings

Software assurance audits have produced significant evidence and multiple findings that the software development team(s) are not following the software processes.

Click here for "What is the risk?": R004 - Software audit findings

Code




 

 

 

 

 

R005 - Use of a non-secure coding standard

The lack of a secure coding standard significantly increases the risk of introducing security vulnerabilities into the software, which could compromise both software and system security, ultimately threatening project success and mission objectives.

Click here for "What is the risk?": R005 - Use of a non-secure coding standard

Code









R006 - Lack of coding standards or insufficient coding

Lack of coding standards or insufficient use of the coding standard for developing safety-critical software for use in critical, real-time operations; coding standards not enforced due to poor software development practices; and/or lack of code reviews or lack of static/dynamic code analysis tools to find code defects. 

Click here for "What is the risk?": R006 - Lack of coding standards or insufficient coding

Code




 

 

 

 

 

R008 - Existence of compiler warnings

The existence of unresolved compiler warnings poses a significant risk to the project, including the possibility of major software defects, degraded system reliability, and operational failures.

Click here for "What is the risk?": R008 - Existence Of Compiler Warnings

Code




 

 

 

 

 

R009 - Software common cause risk

Software common cause is not addressed in the software and avionics design. Common cause failures arise from faults in software or systems that impact multiple redundant or independent components simultaneously, undermining the benefits of redundancy typically built into aerospace systems. This could occur due to coding/logic errors, processor resource overruns, database errors, or malicious software (e.g., computer viruses).

Click here for "What is the risk?": R009 - Software Common Cause Risk

Code




 

 

 

 

 

R010 - Incomplete code test coverage

The software code/test coverage percentages for all identified safety-critical components fall below the required thresholds at key program milestones, specifically: less than 80% at the Critical Design Review (CDR), less than 90% at the System Integration Review (SIR), and less than 100% at the Test Readiness Review (TRR). These thresholds ensure that safety-critical software components are adequately exercised, verified, and validated to meet reliability, safety, and functional requirements before advancing through project phases. Failure to meet these coverage targets increases the risk of undetected defects, non-compliance with standards, and potential mission or system failures.
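
To make the thresholds above concrete, here is a minimal sketch (not part of the SWEHB) of how a project might flag safety-critical components whose measured coverage falls below the milestone limits; the component names and report format are hypothetical, and the percentages come from the risk statement above.

    # Hypothetical helper: compare measured coverage for safety-critical
    # components against the milestone thresholds cited in this risk.
    MILESTONE_THRESHOLDS = {"CDR": 80.0, "SIR": 90.0, "TRR": 100.0}

    def coverage_gaps(milestone, coverage_by_component):
        """Return components whose coverage is below the threshold for the review."""
        required = MILESTONE_THRESHOLDS[milestone]
        return {name: pct for name, pct in coverage_by_component.items() if pct < required}

    # Example with made-up measurements (e.g., exported from a coverage tool):
    measured = {"gnc_filter": 92.5, "cmd_handler": 78.0, "fault_mgr": 85.0}
    print(coverage_gaps("CDR", measured))   # -> {'cmd_handler': 78.0}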

Click here for "What is the risk?": R010 - Incomplete Code Test Coverage

Code




Thresholds: CDR 80% | SIR 90% | TRR 100%

 

 

R011 - Cybersecurity vulnerabilities

The number of cybersecurity vulnerabilities and weaknesses identified by automated tools exceeds five, indicating potential security risks to the system. These vulnerabilities may include coding issues, misconfigurations, insecure interfaces, or exploit-prone components that could compromise the integrity, confidentiality, or availability of mission-critical software. Failure to address these vulnerabilities promptly increases the risk of unauthorized access, data breaches, system failures, or disruptions during mission operations.

Click here for "What is the risk?": R011 - Cybersecurity Vulnerabilities

Code




 

 

 

 

 

R012 - Use of different compilers for test code

The use of different compilers for test code and flight code introduces the risk of software behavior discrepancies between the testing and operational environments. Differences in compiler implementations, optimizations, and code generation can lead to variations in execution, timing, and functionality, potentially masking defects during testing or introducing undetected errors in flight software. This practice undermines the fidelity of the testing process and increases the risk of mission-critical failures during deployment. To ensure consistency and reliability, the same compiler and configuration should be used for both testing and flight software wherever possible.

Click here for "What is the risk?": R012 - Use Of Different Compilers For Test Code

Code




 

 

 

 

 

R013 - Coding standard violations

A significant number of coding standard violations have been identified in the software codebase, indicating non-compliance with predefined coding guidelines such as NASA-STD-8739.8, MISRA, or project-specific standards. These violations may include improper naming conventions, unsafe practices, undocumented code, or structural inconsistencies, which can compromise software maintainability, reliability, and safety. If left unresolved, such violations increase the risk of defects, hinder collaborative development, and reduce compliance with safety-critical requirements. Addressing these violations promptly is essential to ensure code quality, adherence to standards, and long-term project success.

Click here for "What is the risk?": R013 - Coding Standard Violations

Code









R014 - Excessive cyclomatic complexity on safety-critical software components

Safety-critical software components with cyclomatic complexity exceeding 15 present a heightened risk to system reliability, maintainability, and testability. Cyclomatic complexity, which measures the number of linearly independent paths through a program's source code, serves as an indicator of the code's structural complexity. Values above 15 are considered excessive, making the code harder to understand, more prone to defects, and challenging to fully test, particularly in safety-critical contexts. Components with such high complexity may lead to undetected errors, insufficient test coverage, and increased effort for debugging and maintenance. To mitigate these risks, high-complexity components should be refactored or redesigned to simplify their logic and ensure compliance with safety and software engineering standards.
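
For illustration only, the following sketch approximates cyclomatic complexity for a Python function by counting decision points plus one and flags anything above the limit of 15 named above; it is an approximation of the McCabe metric, not a substitute for a qualified analysis tool.

    # Illustrative approximation of McCabe cyclomatic complexity for a Python
    # function: count decision points (branches) plus one, then flag any
    # function whose score exceeds the limit of 15 cited above.
    import ast

    DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                      ast.ExceptHandler, ast.IfExp, ast.comprehension)

    def cyclomatic_complexity(source):
        tree = ast.parse(source)
        decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
        return decisions + 1

    def flag_if_too_complex(name, source, limit=15):
        score = cyclomatic_complexity(source)
        if score > limit:
            print(f"{name}: complexity {score} exceeds limit {limit}; refactor or justify")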

Click here for "What is the risk?": R014 - Excessive Cyclomatic Complexity On Safety-Critical Software Components

Code




 

 

 

 

 

R015 - Software Requirements Volatility

Exceeding the acceptable software requirements volatility thresholds at key program milestones—10% at the Test Readiness Review (TRR), 20% at the System Integration Review (SIR), and 40% at the Critical Design Review (CDR)—indicates instability in the definition and control of software requirements. Requirements volatility refers to the percentage of requirements added, removed, or modified after baseline establishment, and excessive changes at these milestones can lead to schedule delays, increased development costs, and incomplete or inconsistent implementation. High volatility at critical points jeopardizes system reliability, traceability, and verification efforts, increasing the risk of failing to meet mission objectives. To minimize impact, requirements changes should be strictly controlled, thoroughly reviewed, and well-documented to ensure alignment with project goals and constraints.
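
As an illustration of the metric described above, a short sketch that computes volatility as the percentage of baselined requirements added, modified, or deleted; the counts are assumed to come from the project's requirements-management tool.

    # Requirements volatility as defined above: the percentage of baselined
    # requirements that have been added, modified, or deleted since baseline.
    def requirements_volatility(baseline_count, added, modified, deleted):
        if baseline_count == 0:
            raise ValueError("baseline must contain at least one requirement")
        return 100.0 * (added + modified + deleted) / baseline_count

    # Example: 500 baselined requirements; 30 added, 55 modified, 15 deleted.
    volatility = requirements_volatility(500, 30, 55, 15)
    print(f"Volatility: {volatility:.1f}% (limit at SIR: 20%)")   # 20.0%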

Click here for "What is the risk?": R015 - Software Requirements Volatility

Code




Thresholds: CDR 40% | SIR 20% | TRR 10%

 

 

R016 - Static analysis findings risk

The identification of a high number of static analysis code errors and warnings categorized as "positives" (i.e., confirmed issues rather than false positives) indicates significant defects or inefficiencies in the codebase that require attention. These issues, detected through automated static analysis tools, may include violations of coding standards, potential security vulnerabilities, memory management issues, and other quality-related problems. A large volume of confirmed findings can hinder timely development, increase debugging efforts, reduce confidence in software reliability, and pose risks to system safety and performance, especially in critical systems. Addressing these errors and warnings systematically, prioritizing by severity and relevance, is essential to ensure compliance with standards and overall software quality.

Click here for "What is the risk?": R016 - Static Analysis Findings Risk

Code






 

 

 

R017 - Unit test results are not repeatable.

Unit test results must adhere to established test procedures and be repeatable to ensure consistency, reliability, and accuracy in software validation. Properly documented procedures provide a clear, step-by-step approach to executing tests, including setup, inputs, execution conditions, and expected outcomes. Repeatability ensures that the same set of tests, when executed under identical conditions, produces consistent results. Failure to follow procedures or achieve repeatability undermines the credibility of the test outcomes, increases the risk of undetected defects, and complicates regression testing. Adhering to these principles is critical for building confidence in the quality and correctness of the software, particularly in safety and mission-critical systems.
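
A brief, illustrative example of one common repeatability practice: pinning every source of randomness and asserting that repeated executions produce identical results. The seed value and the function under test are hypothetical.

    # Pin every source of randomness so the same unit test, rerun under
    # identical conditions, produces identical results.  The seed and the
    # function under test (estimate_bias) are hypothetical.
    import random

    SEED = 20240117   # recorded with the test results

    def estimate_bias(samples):
        return sum(samples) / len(samples)

    def test_estimate_bias_is_repeatable():
        def run():
            random.seed(SEED)                              # fixed seed -> same inputs
            samples = [random.gauss(0.0, 1.0) for _ in range(1000)]
            return estimate_bias(samples)
        assert run() == run()                              # identical result every execution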

Click here for "What is the risk?": R017 - Unit Test Results Are Not Repeatable

Code






 

 

 

R018 - Incomplete code peer reviews on safety or mission critical software

Any percentage of safety or mission-critical code that has not undergone a formal peer review represents a significant gap in quality assurance and risk mitigation. Peer reviews are essential for identifying defects, verifying adherence to coding standards, and ensuring that the code meets safety, reliability, and functional requirements. Omission of a peer review for safety-critical software increases the likelihood of undetected defects, which could compromise system performance, safety, or mission objectives. To maintain reliability and compliance with industry standards (e.g., NASA-STD-8739.9), 100% of safety-critical code must be subjected to a thorough peer review process that is documented, repeatable, and executed by qualified personnel.

Click here for "What is the risk?": R018 - Incomplete Code Peer Reviews On Safety Or Mission Critical Software

Code






 

 

 

R019 - Incomplete software peer reviews

Missing hardware support at software peer reviews. The absence of hardware support during software peer reviews can lead to critical gaps in the validation and verification of software functionality, particularly in systems designed to interact with hardware components. Hardware involvement in peer reviews is essential for ensuring proper integration, compatibility, and alignment between software and hardware requirements. Missing hardware support may result in overlooked issues such as incorrect assumptions about hardware capabilities, communication protocols, timing constraints, or resource dependencies, which can lead to system failures during integration or deployment. To mitigate these risks, peer reviews should involve representatives with hardware expertise and, where feasible, utilize hardware simulations or prototypes to assess the software's behavior in relation to its target environment. This approach ensures thorough validation of safety-critical or mission-critical systems.

Click here for "What is the risk?": R019 - Incomplete Software Peer Reviews

Design






 

 

 

R020 - Unvalidated software tools

Flight software development tool(s) are not validated and accredited.

The lack of validation and accreditation of flight software development tools poses a significant risk to the reliability, safety, and overall quality of the software being produced. Development tools, such as compilers, code generators, static analyzers, and testing frameworks, must be verified to ensure they function as intended and produce correct, predictable, and consistent outputs. Unvalidated or unaccredited tools may introduce undetected errors, inaccuracies, or vulnerabilities into the software, which can compromise mission success and safety, particularly in critical systems.

Tool validation and accreditation involve demonstrating that the tools meet their intended purpose, are free from critical defects, and align with system and regulatory requirements (e.g., DO-178C or NASA software standards). This process typically includes scenarios such as generating representative outputs, verifying error-detection capabilities, and ensuring compatibility with the software architecture.

A failure to validate and accredit tools can lead to increased development costs, schedule delays, and non-compliance with industry or mission-specific standards. Ensuring proper validation and accreditation of tools is critical to maintaining confidence in the flight software's integrity, performance, and safety under all operating conditions.

Click here for "What is the risk?": R020 - Unvalidated Software Tools

Design






 

 

 

R021 - Data dictionary completeness

Having less than 95% of the data dictionary's data definitions complete represents a critical gap in ensuring the clarity, consistency, and maintainability of a software system, particularly in safety or mission-critical applications. A complete and accurately defined data dictionary is essential for supporting the proper understanding, tracking, and validation of all data elements used within the system. This ensures that parameters are well-documented, unambiguous, and tied to their intended purpose.

The attributes defined in the data dictionary must include, but are not limited to:

  1. Derivation or Origin of Parameters: The source, method of calculation, or reference of each parameter must be documented to enable traceability, maintainability, and visibility of dependencies between related parameters.

  2. Parameter Type: Specification of the data type, such as enumeration, alphanumeric, floating point, or integer, is critical to avoiding type mismatches and ensuring proper usage within the system.

  3. Values and Constraints:

    • Nominal Value: The expected default or standard value of the parameter.
    • Precision: The level of detail or granularity in numerical representation.
    • Accuracy: The degree to which the parameter value represents the true value.
    • Allowable Range: The permissible minimum and maximum values for numeric types, ensuring robustness and error handling.
  4. Physical Units and Reference Frames: For numerical and physical parameters, units of measurement (e.g., meters, seconds, kilograms) must be specified, as well as any applicable reference frames (e.g., inertial, body-fixed). Consistency in units and frames must be verified automatically wherever possible to avoid integration errors.

  5. Attributes for Non-Numeric Types: When parameters are not numeric, additional descriptors, such as data organization (e.g., list, table, structure) and format (e.g., string length, character encoding), must be specified to ensure clear interpretation.

Failing to meet these completeness criteria (i.e., having less than 95% of the data definitions finalized) increases the likelihood of misunderstandings, integration errors, and unexpected system behavior, particularly during the later stages of development or operation. This can result in cost overruns, schedule delays, and potential safety risks. To mitigate this, teams must ensure that all parameters are thoroughly documented early in the development lifecycle and updated continuously to reflect changes in system design. Additionally, automated tools should be utilized for consistency checks and validation of data dictionary completeness.
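
As a sketch of the automated consistency checks mentioned above (not a mandated format), the following checks data-dictionary entries against the attribute list and reports the completeness percentage relative to the 95% criterion.

    # Sketch of an automated completeness check over data-dictionary entries,
    # using the attribute set listed above and the 95% criterion.  The entry
    # layout (one dict per parameter) is an illustrative assumption.
    REQUIRED_FIELDS = ("derivation", "type", "nominal_value", "precision",
                       "accuracy", "allowable_range", "units", "reference_frame")

    def entry_is_complete(entry):
        return all(entry.get(field) not in (None, "") for field in REQUIRED_FIELDS)

    def dictionary_completeness(entries):
        if not entries:
            return 0.0
        return 100.0 * sum(entry_is_complete(e) for e in entries) / len(entries)

    sample = [
        {"derivation": "IMU spec", "type": "float", "nominal_value": 0.0,
         "precision": 1e-6, "accuracy": 1e-4, "allowable_range": (-10.0, 10.0),
         "units": "rad/s", "reference_frame": "body-fixed"},
        {"derivation": "", "type": "int"},                 # incomplete entry
    ]
    print(f"Data dictionary completeness: {dictionary_completeness(sample):.0f}%")  # 50%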

Click here for "What is the risk?": R021 - Data Dictionary Completeness

Design






 

 

 

R022 - Missing software design analysis

Incomplete or missing software design analysis activities or the absence of a comprehensive Software Architecture Review Board (SARB) analysis introduces significant risks to the quality, reliability, and maintainability of software systems, particularly in safety-critical or mission-critical applications. These analyses are foundational for ensuring that the software design and architecture meet system requirements, adhere to industry standards, and can support long-term scalability, performance, and safety.

Click here for "What is the risk?": R022 - Missing Software Design Analysis

Design






 

 

 

R023 - Flawed system software or architecture

A flawed system architecture, flawed software implementation, or misconfigured software could cause loss of vehicle control and loss of crew as a result of an inadequate software development process or inadequate guidelines and best practices. The system/software architecture may contain assumptions that lead to unnecessary or unacceptable levels of risk.

Click here for "What is the risk?": R023 - Flawed System Software Or Architecture

Design






 

 

 

R024 - Unauthorized use of software applications, issuing of commands, and changes to the software configuration

Inability of the software design to address or catch external acts that could bypass or contravene security policies, practices, or procedures, leading to unauthorized use of software applications, issuing of commands, and changes to the software configuration.

Click here for "What is the risk?": R024 - Unauthorized Use Of Software Applications, Issuing Of Commands, And Changes To The Software Configuration

Design






 

 

 

R025 - Data is incorrect or incomplete

Data that is incorrect, incomplete, or improperly processed compromises the integrity, reliability, and usability of software systems, particularly in safety-critical and mission-critical applications. Such data issues can arise due to errors in how the data is displayed, processed, converted, or transmitted. When left unaddressed, these errors can lead to misinterpretation, faulty system behavior, or even catastrophic failures.

Click here for "What is the risk?": R025 - Data Is Incorrect Or Incomplete

Design






 

 

 

R026 - Corrupted commands, data, or loads, and memory faults allocated to the software. 

Flight software shall be designed with robust mechanisms to detect, mitigate, and respond safely to corrupted commands, data, memory faults, or other anomalies, including hardware-induced issues such as stuck bits and Single Event Effects (SEE). These functionalities are essential to ensure mission success, safe operation, and system resilience in challenging and unpredictable environments such as space, where exposure to radiation and other external factors can lead to transient or permanent faults.
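
One of the mitigations named above, detecting corrupted commands before acting on them, can be illustrated with a minimal sketch; the packet layout (payload plus a trailing CRC-32) and the response to a failed check are assumptions, not a prescribed design.

    # Reject a command whose CRC does not match before acting on it.  The
    # packet layout (payload + trailing big-endian CRC-32) and the use of
    # zlib's CRC-32 are illustrative assumptions.
    import zlib

    def command_is_intact(packet: bytes) -> bool:
        if len(packet) < 5:
            return False                                   # too short for payload + CRC
        payload, received = packet[:-4], int.from_bytes(packet[-4:], "big")
        return zlib.crc32(payload) == received

    def handle_command(packet: bytes) -> None:
        if not command_is_intact(packet):
            # Flight software would raise a fault-management event here rather
            # than silently dropping the command.
            print("command rejected: CRC mismatch")
            return
        print("command accepted")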

Click here for "What is the risk?": R026 - Corrupted Commands, Data, Or Loads, And Memory Faults Allocated To The Software

Design






 

 

 

R027 - Missing or incomplete software implementation

Incomplete or missing software implementations, as derived from the flow-down of software requirements or designs, represent a significant compliance, functionality, and quality gap within the development lifecycle. The extent of these missing implementations, measured as greater than or equal to 30%, 20%, 10%, or any percentage, directly correlates to the severity of risks it poses to safety, mission success, and project timelines. This issue highlights a failure to translate software requirements or design specifications into fully realized, testable code, leading to downstream impacts on system verification, validation, and overall operability.

Click here for "What is the risk?": R027 - Missing Or Incomplete Software Implementation

Design




Thresholds by milestone: 30% | 20% | 10% | Any%

 

R028 - Missing Changes in Code

All expected or agreed-upon changes of significant impact, as derived from requirements, design updates, reviews, or testing feedback, are not fully reflected in the code. This omission undermines the integrity of the software, leading to unaddressed functionality gaps, non-compliance with stakeholder expectations, and increased risks during validation, integration, or operation, particularly in critical systems.

Click here for "What is the risk?": R028 - Missing Changes In Code

Design






 

 

 

R029 - Missing software user guide

The absence of a user guide for the code creates a significant barrier to effective user interaction, understanding, and adoption of the product, service, or application. As a key external document, a user guide provides essential instructions, workflows, and contextual information to help users operate the software efficiently, understand its features, and resolve common issues. Its absence negatively impacts user experience, usability, and the overall accessibility of the software, particularly for external stakeholders with limited technical expertise.

Click here for "What is the risk?": R029 - Missing Software User Guide

Ops






 

 

 

R030 - Missing or incomplete software configuration management planning

A missing or incomplete software configuration management (SCM) plan compromises the ability to control, track, and manage changes to software artifacts throughout the development lifecycle. The SCM plan is a critical document that defines processes, tools, roles, and responsibilities for managing software configurations, including version control, change management, and build management. Its absence or incompleteness increases the risk of inconsistencies, undocumented changes, loss of traceability, integration errors, and potential non-compliance with industry standards, ultimately impacting the software’s reliability, maintainability, and quality.

Click here for "What is the risk?": R030 - Missing Or Incomplete Software Configuration Management Planning

Prog






 

 

 

R031 - Unimplemented IV&V findings 

Independent Verification and Validation (IV&V) findings that are not being addressed by the project highlight a critical breakdown in the resolution of identified risks, issues, or non-compliances related to software functionality, quality, and safety. IV&V findings are essential for ensuring the software meets its intended requirements and standards. Failing to address these findings can lead to unresolved defects, incomplete risk mitigation, and potential impacts on mission success, as well as non-compliance with organizational or industry standards, jeopardizing overall project goals and stakeholder trust.

Click here for "What is the risk?": R031 - Unimplemented IV&V Findings

Prog






 

 

 

R032 - Unimplemented process audit findings 

Unaddressed software process audit findings indicate a failure to resolve documented non-compliances or inefficiencies within the software development lifecycle processes. Software process audits are critical for ensuring adherence to established standards, methodologies, and best practices. Neglecting to address these findings can lead to recurring process issues, reduced software quality, non-compliance with regulatory requirements, and increased risks to project delivery, ultimately compromising the reliability and success of the software and associated systems.

Click here for "What is the risk?": R032 - Unimplemented Process Audit Findings

Prog






 

 

 

R033 - Missing or no software acceptance criteria

The absence of clear software acceptance criteria for all software products creates ambiguity in determining whether the delivered software meets the required quality, functionality, and performance standards. Acceptance criteria (AC) define the specific conditions under which a software product will be deemed acceptable by users, customers, or interfacing systems. Without well-defined AC, stakeholders lack a clear basis for validating the software’s compliance with requirements, leading to risks such as unmet user expectations, delayed approvals, misaligned deliverables, and potential disputes over project outcomes.

Click here for "What is the risk?": R033 - Missing Or No Software Acceptance Criteria

Prog






 

 

 

R034 - Missing or incomplete software hazards

The absence of software hazard conditions in the hazard analysis poses significant risks to system safety and mission success. Hazard analysis should account for the software's ability, by design, to contribute to, mitigate, or control specific hazards within the system. Omitting software from the overall hazard analysis neglects critical failure modes, including common-mode failures, which can occur when redundant systems (e.g., flight computers) run identical software. Failing to identify and address software-related hazards increases the likelihood of undetected safety-critical issues, reducing the system's robustness and reliability and potentially leading to catastrophic failures in safety-critical operations. Incorporating software-specific considerations into the system hazard analysis is a best practice to ensure a comprehensive and effective risk management approach.

Click here for "What is the risk?": R034 - Missing Or Incomplete Software Hazards

Prog






 

 

 

R035 - Insider threat activities

The absence of build processes that account for insider threat mitigation introduces critical vulnerabilities to an organization's networks, systems, and data. Insider threats, originating from individuals within the organization—such as employees, contractors, vendors, or partners—often involve misuse of legitimate access credentials to compromise or harm the organization's infrastructure. Build processes that fail to incorporate controls, monitoring, and risk mitigation activities to address insider threats increase exposure to cybersecurity risks, such as data breaches, espionage, intellectual property theft, or system sabotage. Integrating comprehensive insider threat mitigation measures into the software build process is essential to safeguard the organization's assets and reduce the likelihood of unauthorized actions.

Click here for "What is the risk?": R035 - Insider Threat Activities

Prog







 

 

R036 - Immature products at major milestone reviews

The inability to access software and software development status creates a lack of visibility and transparency into the progress, quality, and overall health of the development process. This deficiency can prevent stakeholders, including project managers, developers, and quality assurance teams, from monitoring milestones, identifying risks, or addressing potential delays. Without timely access to this information, organizations risk misaligned expectations, poor decision-making, missed deadlines, and failure to detect issues early in the development lifecycle, which can lead to increased costs, reduced quality, and potential project failure. Ensuring accessible and up-to-date tracking of software and development status is essential to maintain effective communication, collaboration, and project control.

Click here for "What is the risk?": R036 - Immature products at major milestone reviews

Prog






 

 

 

R037 - Unrepaired defects for flight release 

A large number of critical software defects and operational workarounds in the flight release code indicate significant quality and reliability issues that can jeopardize mission success and safety. Critical defects are issues that severely impact the software's functionality, performance, or ability to meet mission requirements, while operational workarounds are temporary fixes that circumvent these defects instead of resolving them. The presence of such defects and workarounds in the flight release code suggests inadequate testing, rushed development, poor requirements management, or unresolved technical debt. This not only increases operational risk but also adds complexity for end users, potentially resulting in mission-critical failures. Reducing critical defects and eliminating operational workarounds through robust development, rigorous testing, and defect management processes is crucial to delivering reliable, high-quality, and safe flight software.

Click here for "What is the risk?": R037 - Unrepaired Defects For Flight Release

Prog






 

 

 

R038 - Inability to track safety critical tests

The late delivery of hazard requirement flow-down tracing significantly hampers the ability to identify and prioritize safety-critical test cases associated with hazard controls during verification. Hazard requirement flow-down tracing ensures that high-level safety-critical requirements are properly decomposed into detailed software and system requirements and that their corresponding hazard controls are implemented and verified. When tracing is delayed, teams lack the necessary insight to identify which tests directly address safety-critical functionalities, resulting in insufficient or misaligned testing efforts. This increases the risk of unverified hazard mitigations, undetected deficiencies, and potential safety hazards in the final system. Timely delivery of hazard requirement tracing is essential to support early planning, prioritization, and execution of safety-critical verification tests, ensuring system safety and compliance with mission and regulatory requirements.

Click here for "What is the risk?": R038 - Inability To Track Safety Critical Tests

Prog






 

 

 

R039 - Severity 1 or 2 IV&V findings

Failure to address Severity 1 or Severity 2 Independent Verification and Validation (IV&V) findings poses significant risks to project success, system safety, and mission objectives. Severity 1 findings indicate critical issues that could lead to system failure or loss of mission, while Severity 2 findings represent major deficiencies that could seriously impact system performance, reliability, or safety. Ignoring or deprioritizing such findings suggests a breakdown in risk management and quality assurance processes, increasing the likelihood of undetected or unresolved issues in the final system. This oversight can result in costly rework, schedule delays, operational failures, or compromised safety in the deployed system. Ensuring that all Severity 1 and 2 IV&V findings are properly acknowledged, addressed, and mitigated is fundamental to delivering a reliable, high-quality product and preventing catastrophic outcomes.

Click here for "What is the risk?": R039 - Severity 1 or 2 IV&V findings

Prog






 

 

 

R040 - Missing implementation of cybersecurity requirements from NASA-STD-1006

The absence of software requirement implementation for NASA-STD-1006, the Space System Protection Standard, represents a critical shortfall in meeting mandated protection and cybersecurity practices. NASA-STD-1006 levies protection requirements on missions, such as safeguarding the command link (e.g., command authentication or encryption), providing a backup command and control capability, and reporting purposeful interference, and many of these requirements flow down into software requirements. Missing implementation of these requirements can leave command and data interfaces exposed to unauthorized access or tampering, reduce compliance with NASA directives, and increase the risk of loss of mission or loss of control of the spacecraft. Ensuring full implementation of the applicable NASA-STD-1006-derived software requirements is essential to uphold security, safety, and mission success throughout the development lifecycle.

Click here for "What is the risk?": R040 - Missing Implementation Of Cybersecurity Requirements From NASA-STD-1006

Req






 

 

 

R041 - Missing software requirements for encryption

The absence of a software requirement for encryption on uplink and downlink communications poses a significant security risk to the mission. Encryption is critical to ensuring the confidentiality, integrity, and authenticity of data transmitted between ground systems and spacecraft. Without encryption, the uplink (commands sent to the spacecraft) and downlink (telemetry and mission data sent to Earth) are susceptible to interception, unauthorized access, data corruption, or malicious tampering. This vulnerability could result in compromised mission operations, data breaches, or unauthorized control of critical systems. To mitigate these risks, it is essential to include and implement robust encryption requirements aligned with industry best practices and mission-specific security standards. Failure to do so jeopardizes mission security, data integrity, and overall operational safety.

Click here for "What is the risk?": R041 - Missing Software Requirements For Encryption

Req






 

 

 

R042 - Missing cybersecurity software requirements from project protection plan assessment

Space flight software development organizations work with the Project Protection Plan to evaluate all software security risks identified in that plan. The project performs a software cybersecurity assessment on the software components per Agency security policies and the project requirements, including risks posed by the use of COTS, GOTS, MOTS, OSS, or reused software components.

Click here for "What is the risk?": R042 - Missing Cybersecurity Software Requirements From Project Protection Plan Assessment

Req






 

 

 

R043 - Inadequate software requirements quality

A high software requirements quality risk score (score of 3 or higher) indicates significant deficiencies in the clarity, completeness, testability, or feasibility of the software requirements. This metric is used to quantify and track the quality of software requirements, as poor-quality requirements can lead to costly rework, defects, and failures during development and testing phases. A risk score of 3 or higher corresponds to "high risk," suggesting requirements may lack specificity, have conflicting or ambiguous details, or fail to adequately define conditions for success. Ideally, fewer than 5% of the requirements should be categorized as high risk or very high risk, as percentages higher than this signal systemic issues in requirements management.

When such risks are prevalent—especially exceeding tolerances—this undermines the foundation of the project by increasing the likelihood of faulty implementation, missed functionality, and hazards in the operational system. Addressing high-risk requirements promptly through risk mitigation processes, clarification, stakeholder collaboration, and requirement validation ensures a higher-quality requirements baseline, reducing downstream risks and enhancing project reliability and mission success.

Click here for "What is the risk?": R043 - Inadequate Software Requirements Quality

Req






 

 

 

R044 - Missing software and software assurance requirements tailoring approval

Missing approvals for tailored significant and risky requirements in the requirement matrices (NPR 7150.2 and NASA-STD-8739.8) by both Engineering and SMA TAs.

Click here for "What is the risk?": R044 - Missing Software And Software Assurance Requirements Tailoring Approval

Req






 

 

 

R045 - Incomplete, missing, or unclear Software Requirements

Incomplete traceability from software requirements to code/design, or code or design functions without traceable software requirements. Software requirements not fully defined or incorrectly translated from requirement to design because of insufficient requirements or design reviews. Errors in software requirements caused by not involving the necessary stakeholders in implementing the desired functionality. Software requirements errors resulting from invalid or insufficient data used in simulations and models developed for verification. Errors not detected and/or removed due to a lack of design reviews or poor software development practices that omit the necessary stakeholder participation on design review teams.

Click here for "What is the risk?": R045 - Incomplete, Missing, Or Unclear Software Requirements

Req






 

 

 

R046 - Late baselining of the software requirements (after CDR)

Software requirements are baselined late in the software development life cycle.

Click here for "What is the risk?": R046 - Late Baselining Of The Software Requirements (After CDR)

Req






 

 

 

R047 - Missing software capability to detect adversarial actions

The lack of software requirements addressing the detection of adversarial actions represents a significant risk to system security, resilience, and mission success. Adversarial actions, such as cyberattacks, unauthorized access, data manipulation, or malicious interference, pose a threat to the integrity, availability, and confidentiality of critical systems and operations. Without explicitly defined requirements for detecting such threats, the system is likely to lack the capabilities for monitoring, alerting, and responding to malicious activities in real time, leaving it vulnerable to exploitation.

Detection requirements are a foundational element for proactive defense mechanisms, including intrusion detection systems (IDS), anomaly detection, logging and monitoring, and correlations to identify patterns indicative of adversarial behavior. Missing these requirements increases the likelihood of undetected intrusions, operational disruptions, data breaches, and potential compromise of safety-critical functions.

To mitigate these risks, the project must ensure that software requirements include robust and comprehensive specifications for detecting, logging, and mitigating adversarial actions. These requirements should be aligned with cybersecurity standards and best practices to safeguard the system and ensure continued functionality during emerging threats. Failure to incorporate such requirements compromises system security and jeopardizes mission success.

Click here for "What is the risk?": R047 - Missing Software Capability To Detect Adversarial Actions

Req






 

 

 

R048 - Undefined fault management system requirements

Unclear or undefined mission and fault management requirements for software implementation represent a critical risk to system reliability, operational safety, and mission success. Mission and fault management requirements define how the software should respond to faults, failures, or anomalies during development, testing, and flight operations. The absence of a clear framework for implementing diagnostic and test capabilities leads to challenges in identifying, isolating, and resolving issues in a timely manner, potentially resulting in prolonged mission downtime, loss of system functionality, or catastrophic failures during flight.

To address this, test and diagnostic code must be explicitly designed and integrated into the software early in the development lifecycle, ensuring it is accessible through flight interfaces. This proactive approach enables real-time fault detection, troubleshooting, and resolution at both the element and flight system levels. Early incorporation ensures that diagnostics are not an afterthought but a critical component of system design, enhancing fault isolation and recovery capabilities during system verification and operations.

Failure to establish clear mission and fault management requirements undermines the system's ability to respond effectively to unexpected conditions, jeopardizes problem resolution efforts, and increases the likelihood of unmanageable faults during flight. Comprehensive and well-defined requirements for mission and fault management are essential to ensure system resilience, reduce development risk, and enable rapid recovery across all stages of the mission lifecycle.

Click here for "What is the risk?": R048 - Undefined Fault Management System Requirements

Req






 

 

 

R049 - Missing software sensor range checking capabilities

Missing software requirements for sensor range checking create a critical risk to system safety and reliability. Sensor range checking ensures that the software validates sensor data against predefined operational limits to detect out-of-range or faulty values. Without this capability, the system may process invalid data, leading to incorrect decisions, degraded performance, or unsafe operations.

The software must include clear requirements for defining sensor limits, handling out-of-range values (e.g., triggering alerts, ignoring invalid data, or activating fail-safes), and logging anomalies. Failure to implement these checks increases the risk of undetected sensor faults, which can cause system failures, misbehavior, or mission-critical errors. Including sensor range checking early in development is essential to prevent invalid data from compromising operations and ensure mission success.
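
A minimal sketch of the range-checking behavior described above, assuming hypothetical sensor names and limits: validate against predefined limits, log the anomaly, and substitute a safe fallback rather than propagating invalid data.

    # Validate a reading against predefined limits, log the anomaly, and fall
    # back to a safe value instead of propagating invalid data.  Sensor names
    # and limits are hypothetical.
    import logging

    SENSOR_LIMITS = {"tank_pressure_kpa": (0.0, 5000.0),
                     "battery_temp_c": (-40.0, 85.0)}

    def validated_reading(sensor, raw_value, fallback):
        low, high = SENSOR_LIMITS[sensor]
        if low <= raw_value <= high:
            return raw_value
        logging.warning("%s out of range: %.2f not in [%.2f, %.2f]; using fallback",
                        sensor, raw_value, low, high)
        return fallback    # a fail-safe mode could also be commanded here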

Click here for "What is the risk?": R049 - Missing Software Sensor Range Checking Capabilities

Req






 

 

 

R050 - Testing for OTS software

The off-the-shelf (OTS) software component has not been verified and validated to meet the same rigorous standards required for a custom-developed software component intended for the same use. This creates a significant risk to system reliability, safety, and mission assurance, as the OTS software may contain undetected faults, vulnerabilities, or performance limitations that could impact critical operations.

To mitigate this risk, the OTS software must undergo a thorough verification and validation process to demonstrate that it meets all functional, safety, and operational requirements for its intended use. This includes evaluating the software against applicable standards, performing comprehensive testing under expected operational conditions, assessing its compatibility with the overall system, and analyzing its ability to handle edge cases or failure scenarios. Any gaps in assurance between the OTS component and the custom-developed equivalent must be addressed before acceptance and integration.

Failure to meet the required verification and validation standards for OTS components increases the likelihood of operational failures, undetected defects, and security vulnerabilities, potentially jeopardizing mission success. All components, regardless of their origin, must meet the same level of scrutiny to ensure system integrity and reduce risk.

Click here for "What is the risk?": R050 - Testing For OTS Software 

Req






 

 

 

R051 - Missing Data Quality Indicators

Missing data quality indicators. The semantics of data conveyed across public interfaces, whether inside an executable or as input to or output from an executable, shall be clearly specified and, if possible, verified in an automated way at build time. Data semantics may include range, precision, physical units of measure, and coordinate frames, as applicable. This principle applies primarily to public interfaces that cross system, subsystem, or component boundaries, specifically boundaries between software elements developed by different programmers. The importance of critical (formal) verification of these interfaces should be commensurate with the organizational separation between the developers on each side of the interface.

Click here for "What is the risk?": R051 - Missing Data Quality Indicators

Req






 

 

 

R052 - Missing OTS software requirements

Missing detailed software requirements for Commercial off-the-shelf (COTS), Government off-the-shelf (GOTS), Modified off-the-shelf (MOTS), Open-source software (OSS), or reused software components poses a significant risk to system functionality, compatibility, security, and reliability. Without tailored requirements, these components may fail to meet critical operational, performance, safety, or cybersecurity needs, leading to integration challenges, unexpected behaviors, or vulnerabilities within the system.

Clear, specific requirements must be established for all such software components to ensure they are appropriately configured, tested, verified, and validated for their intended use. These requirements should include functionality specifications, performance expectations, compatibility constraints, interfaces, cybersecurity measures, maintenance plans, and documentation needs. Additionally, requirements must address licensing, intellectual property compliance, and support for future upgrades or modifications.

Failure to define detailed software requirements for COTS, GOTS, MOTS, OSS, or reused components increases the likelihood of inconsistencies, inefficiencies, operational failures, and security exposure, which can jeopardize mission success. Proper requirements development ensures these software components integrate seamlessly and perform reliably within the overall system architecture.

Click here for "What is the risk?": R052 - Missing OTS Software Requirements

Req






 

 

 

R053 - Traceability completion

Bi-directional traceability for all detailed software requirements being less than (CDR 75%, SIR 90%, TRR 100%) complete indicates a significant gap in tracking and verifying that all requirements are properly implemented and tested. Bi-directional traceability ensures that every requirement is mapped to its corresponding design, implementation, and test cases, and vice versa, to confirm that nothing is overlooked or misinterpreted at any stage of development.
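
For illustration, a small sketch of how bi-directional traceability completeness might be computed from forward (requirement-to-test) and backward (test-to-requirement) mappings; the data layout is an assumption, since such mappings normally come from a requirements-management tool.

    # A requirement counts as traced only if it maps forward to at least one
    # verification artifact AND appears in the backward map from artifacts to
    # requirements.  The mapping layout is an assumption.
    def bidirectional_traceability(req_to_tests, test_to_reqs):
        if not req_to_tests:
            return 0.0
        covered_backward = {r for reqs in test_to_reqs.values() for r in reqs}
        traced = [r for r, tests in req_to_tests.items() if tests and r in covered_backward]
        return 100.0 * len(traced) / len(req_to_tests)

    # Compare against the milestone thresholds (CDR 75%, SIR 90%, TRR 100%).
    pct = bidirectional_traceability({"SRS-001": ["TC-10"], "SRS-002": []},
                                     {"TC-10": ["SRS-001"]})
    print(f"Bi-directional traceability: {pct:.0f}%")      # 50%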

Without achieving the specified levels of traceability—75% by Critical Design Review (CDR), 90% by System Integration Review (SIR), and 100% by Test Readiness Review (TRR)—there is a higher risk of undetected design errors, incomplete implementation, untested functionality, and non-compliance with mission objectives. This can lead to inefficiencies, delays, and failures during testing, integration, or operational deployment.

Achieving and maintaining bi-directional traceability ensures that all requirements are accounted for, changes are managed effectively, and the system fulfills its intended purpose. Traceability gaps must be addressed early to avoid costly rework, schedule impacts, or mission-critical failures.

Click here for "What is the risk?": R053 - Traceability Completion

Req




Thresholds: CDR 75% | SIR 90% | TRR 100%

 

 

R054 - Missing or incomplete software requirements

Risk Statement:
Software requirements that are incomplete or missing as a result of poor or insufficient flow-down from system requirements pose critical risks to the development, testing, and integration of software components. This risk is quantified in terms of the proportion of system requirements that lack corresponding detailed software requirements (e.g., ≥30%, 20%, 10%, or any percentage at key milestones). Missing or incomplete requirements indicate that the necessary details for transforming high-level system requirements into actionable software design, implementation, and testing steps have not been fully defined, leading to ambiguity, gaps, and misalignment of the software with overall system objectives.

This deficiency can result in software that fails to meet functional, performance, safety, or security expectations, introduces rework, and delays project schedules. Specifically, it increases the likelihood of missed stakeholder needs, untested or improperly implemented software features, and miscommunication between teams.

To mitigate this risk, all system requirements must be thoroughly analyzed and flowed down to complete, unambiguous, measurable, and testable software requirements at predefined milestones (e.g., Critical Design Review). Any gaps must be identified, addressed, and validated to ensure that the software aligns seamlessly with system needs and mission goals. Failure to resolve these issues early can lead to significant integration challenges, cost overruns, or mission failure.

Click here for "What is the risk?": R054 - Missing Or Incomplete Software Requirements

Req


Thresholds by milestone: 30% | 20% | 10% | Any%

 

 

 

R055 - Missing or incomplete software design

Risk Statement:
If software designs are incomplete or missing due to insufficient flow-down from detailed software requirements (e.g., ≥30%, 20%, 10%, or any percentage of missing design components), this poses a significant risk to the development, integration, and verification of software. Software design translates what the system is required to do (as stated in software requirements) into how the software will achieve it. Weaknesses in this flow-down lead to ambiguities, functional gaps, and integration issues that affect the quality, usability, and reliability of the delivered system.

Click here for "What is the risk?": R055 - Missing Or Incomplete Software Design

Req



Thresholds by milestone: 30% | 20% | 10% | Any%

 

 

R074 - A high ratio of estimated Source Lines of Code (SLOC) to detailed software requirements 


Risk Statement:
A high ratio of estimated Source Lines of Code (SLOC) to detailed software requirements (i.e., greater than 50 lines of code per requirement) indicates a potential mismanagement of requirements' granularity and system complexity. This condition risks overcomplication in the software development process, untraceable code, and an increased likelihood of defects during implementation, testing, and maintenance.

Click here for "What is the risk?": R074 - A high ratio of estimated Source Lines of Code (SLOC) to detailed software requirements 

Req









R075 - # of TBD/TBC/TBRs

If the percentage of TBDs (To Be Determined), TBCs (To Be Confirmed), or TBRs (To Be Reviewed) in the software requirements document exceeds 1%, the project faces significant risks related to clarity, completeness, and stability of requirements. This uncertainty drives downstream challenges in software design, development, testing, and integration, increasing the likelihood of project delays, rework, and cost overruns.
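
As an illustration of tracking this metric, a short sketch that counts requirements containing TBD/TBC/TBR markers and compares the percentage against the 1% threshold; the input format (one plain-text string per requirement) is an assumption.

    # Count requirements containing TBD/TBC/TBR markers and compare the
    # percentage against the 1% threshold.  Input format (one plain-text
    # string per requirement) is an assumption.
    import re

    PLACEHOLDER = re.compile(r"\b(TBD|TBC|TBR)\b", re.IGNORECASE)

    def placeholder_percentage(requirements):
        if not requirements:
            return 0.0
        open_items = sum(bool(PLACEHOLDER.search(text)) for text in requirements)
        return 100.0 * open_items / len(requirements)

    reqs = ["The FSW shall sample the IMU at 100 Hz.",
            "The FSW shall downlink telemetry at a rate of TBD kbps."]
    print(f"{placeholder_percentage(reqs):.1f}% of requirements contain TBD/TBC/TBR (limit: 1%)")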

Click here for "What is the risk?":  R075 - # of TBD/TBC/TBRs

Req









R076 - Incomplete software regression testing

Software regression testing is not identified as part of the test strategy. Failure to identify software regression testing as an integral part of the test strategy poses significant risks to software quality, stability, and the overall success of the project. Without regression testing, changes to the software (e.g., bug fixes, enhancements, or updates) may unintentionally introduce new defects or break existing functionality, potentially leading to critical failures during integration, testing, or operational use.

Click here for "What is the risk?":  R076 - Incomplete software regression testing

Test









R077 - Incomplete cybersecurity assessment testing

Missing or incomplete cybersecurity testing introduces critical risks to the confidentiality, integrity, and availability of software systems. Without comprehensive testing, vulnerabilities may remain undetected, exposing the system to potential exploitation by malicious actors. This could lead to data breaches, system downtime, financial losses, reputational damage, legal liabilities, and regulatory non-compliance. Cybersecurity is paramount, particularly for software in regulated industries, safety-critical systems, or applications involving sensitive data.

Click here for "What is the risk?": R077 - Incomplete cybersecurity assessment testing

Test









R078 - Use of untested code on flight vehicle

Risk Statement:
The use of uncertified flight software on flight vehicles during testing and launch preparation introduces significant risks to mission success, safety, reliability, and schedule dependability. Operating uncertified flight software on a mission-critical vehicle before formal verification and validation (V&V) is complete increases the likelihood of undetected software defects or anomalies, potentially compromising vehicle operations, leading to system failures, and causing harm to personnel or assets during pre-launch or testing phases.

Click here for "What is the risk?": R078 - Use of untested code on flight vehicle

Test









R079 - Incomplete Software test planning

Risk Statement:
Incomplete software test plans introduce potentially severe risks to software quality, project schedules, cost overruns, and system reliability. A comprehensive test plan ensures that all system requirements are properly verified and validated, risks are mitigated, and defects are identified and resolved early in the lifecycle. Without a complete test plan, critical functionalities may not be tested adequately, defects may go undetected, and the delivered system may fail to meet project objectives, stakeholder expectations, or regulatory compliance requirements.

Click here for "What is the risk?":  R079 - Incomplete Software test planning

Test









R080 - Software test fidelity

Risk Statement:
The use of a software test environment with insufficient fidelity or the absence of test avionics hardware exposes the project to significant risks related to undetected software defects, unreliable system behavior, integration challenges, and mission failure. Testing software in an environment that does not accurately simulate real-world operational conditions increases the likelihood of environmental, hardware-software interface, and system-level anomalies going unnoticed until later stages, where fixes are costlier, time-consuming, or infeasible.

Click here for "What is the risk?": R080 - Software test fidelity

Test









R081 - Software test schedule does not have enough time to complete adequate software testing

Risk Statement:
Inadequate time allocated for software testing introduces significant risks of deploying a system that has not been comprehensively verified or validated against requirements. When test schedules are compressed, critical testing activities may be skipped, rushed, or poorly executed, leading to undetected defects, suboptimal software performance, and potential system failure in later phases. This can result in costly rework, delayed timelines, non-compliance with requirements, and dissatisfaction among stakeholders or end users.

Click here for "What is the risk?":  R081 - Software test schedule does not have enough time to complete adequate software testing

Test









R082 - Software test procedure maturity

Risk Description:
The maturity of software test procedures refers to the degree of completeness, accuracy, robustness, and reliability of test procedures throughout the software lifecycle. Immature software test procedures create risks such as unverified system requirements, undetected defects, inefficiencies in testing workflows, and inability to adapt to design changes, leading to compromised software quality, safety, and compliance. Ensuring test procedure maturity is critical, particularly in projects where the software operates in safety-critical systems (e.g., aerospace, healthcare, automotive).

Click here for "What is the risk?": R082 - Software test procedure maturity

Test









R083 - Use of simulation test bed versus flight hardware

Risk Statement:
Relying on a Simulation Test Bed (STB) as a substitute for flight hardware during flight software certification poses significant risks to the validity, reliability, and completeness of the certification process. While simulation test beds can accelerate development and provide early validation, they may fall short in replicating the exact physical, electrical, and environmental conditions of flight hardware. This gap between the simulated and real-world environments increases the likelihood of undetected issues, software-hardware integration failures, certification non-compliance, and, ultimately, mission-critical failures during operational deployment.

Simulation Test Beds are valuable tools for preliminary and developmental testing; however, their limitations make them insufficient as the sole basis for certification, especially for safety-critical or mission-critical flight applications.

Click here for "What is the risk?": R083 - Use of simulation test bed versus flight hardware

Test









R084 - No independence for software testing

Risk Description:
A lack of independence in software testing means that the testing is performed by the same team or individuals who developed the software, limiting objective evaluation. This practice introduces significant risks of missed defects, biased test execution, and inadequate verification of software functionality. In systems that are mission- or safety-critical, such as aerospace, automotive, or healthcare, the absence of independent testing can lead to catastrophic failures, regulatory non-compliance, and stakeholder dissatisfaction.

Click here for "What is the risk?": R084 - No independence for software testing

Test









R087 - EFT test results used for software certification

Risk Description:
Using Engineering Flight Test (EFT) results as the basis for certifying a real flight vehicle introduces risks due to differences in test environments, configurations, and operational constraints. While EFTs are crucial for validating preliminary design and performance, they may not fully represent the actual flight vehicle's operational conditions. Over-reliance on EFT test results for flight certification may lead to undetected issues in areas such as integration, environmental stress, hardware-software interactions, and mission-critical scenarios. This approach may compromise safety, certification standards, and mission success.

Click here for "What is the risk?": R087 - EFT test results used for software certification

Test






 

 

 

R088 - Missing software test reports

Risk Statement:
The lack of comprehensive, accurate, and traceable software test reports introduces significant risks to the verification and validation process during software development, especially for safety-critical or mission-critical systems. Issues such as missing test reports, selective recording of successful tests (while ignoring failed tests), and undefined test procedure outputs and criteria compromise accountability, traceability, and compliance with certification standards. These deficiencies can lead to undetected defects, incomplete test coverage, regulatory non-compliance, and wasted resources during rework, ultimately jeopardizing system reliability and operational safety.

Click here for "What is the risk?":  R088 - Missing software test reports

Test






 

 

 

R089 - Uncertified software test simulators

Risk Description:
The use of uncertified software simulations for formal software testing creates significant risks in safety-critical and mission-critical systems. Simulations are often used to emulate complex operational environments, test edge cases, and verify software behaviors under various scenarios. However, if the simulations are not certified or validated for accuracy, reliability, and alignment with real-world conditions, the test results cannot be trusted as evidence of compliance, safety, or operational readiness. This introduces risks of undetected defects, inaccurate test coverage, and non-compliance with regulatory standards, potentially leading to system malfunctions, safety incidents, and certification failure.

Click here for "What is the risk?": R089 - Uncertified software test simulators

Test






 

 

 

R090 - Missing software test criteria

Risk Statement:
Software test criteria define the objectives, conditions, and expected outcomes for validating whether a system satisfies its requirements. Missing or incomplete test criteria introduce significant risks during the verification and validation (V&V) process by creating ambiguities, reducing test coverage, and allowing defects to go unnoticed. Especially in safety-critical and mission-critical systems, undefined, vague, or inconsistent test criteria lead to inadequate verification, regulatory non-compliance, system failures, and increased costs associated with late-stage issue discovery.

Click here for "What is the risk?": R090 - Missing software test criteria

Test






 

 

 

R091 - Missing software test approaches

Risk Overview:
Failure to define a clear software testing approach for specific areas such as arrays, commands, and data stored in contiguous memory locations, or for multi-logic/complex decision-based conditions, introduces substantial risks. These risks are particularly pertinent in embedded systems, safety-critical applications (e.g., aerospace, automotive, medical devices), and performance-critical software. Undefined test strategies in such areas may lead to undetected memory corruption, incorrect indexing, untested boundary conditions, faulty logic evaluations, and inconsistent system behaviors. Ultimately, this can compromise system reliability, lead to non-compliance with regulatory standards, and increase operational and maintenance costs.
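To make the gap concrete, the sketch below is a minimal illustration in C (the table size, accessor, and heater-enable decision are hypothetical, not drawn from any project) of the kinds of cases a defined test approach would call out: array boundary indices on a contiguous table, and a multi-condition decision exercised by varying one condition at a time.

    #include <assert.h>
    #include <stdio.h>

    #define TABLE_LEN 8  /* hypothetical contiguous parameter table */

    /* Hypothetical accessor: returns 0 on success, -1 on an out-of-range index. */
    static int read_entry(const int table[TABLE_LEN], int index, int *out)
    {
        if (index < 0 || index >= TABLE_LEN) {
            return -1;              /* reject indices outside the table */
        }
        *out = table[index];
        return 0;
    }

    /* Hypothetical multi-condition decision: enable a heater only when power is
     * available, the temperature is below the setpoint, and no fault is latched. */
    static int heater_enable(int power_ok, int temp_below_setpoint, int fault_latched)
    {
        return power_ok && temp_below_setpoint && !fault_latched;
    }

    int main(void)
    {
        int table[TABLE_LEN] = {0};
        int value = 0;

        /* Boundary-value cases for the contiguous table: first, last, one past last, negative. */
        assert(read_entry(table, 0, &value) == 0);
        assert(read_entry(table, TABLE_LEN - 1, &value) == 0);
        assert(read_entry(table, TABLE_LEN, &value) == -1);
        assert(read_entry(table, -1, &value) == -1);

        /* Decision coverage: vary each condition independently from a passing baseline. */
        assert(heater_enable(1, 1, 0) == 1);   /* baseline: all conditions satisfied */
        assert(heater_enable(0, 1, 0) == 0);   /* power not available */
        assert(heater_enable(1, 0, 0) == 0);   /* temperature at or above setpoint */
        assert(heater_enable(1, 1, 1) == 0);   /* fault latched */

        puts("R091 example cases passed");
        return 0;
    }

A defined test approach would enumerate such cases explicitly for each array, command, and decision construct rather than leaving their selection to individual testers.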

Click here for "What is the risk?": R091 - Missing software test approaches

Test






 

 

 

R092 - No Test Recording Procedure

Test recording procedures are defined for less than the expected percentage of tests at the relevant milestones (80% by TRR, 90% by SRR, 100% by ORR).

Test recording procedures must provide a consistent and complete method for documenting the outcomes of testing activities during key project milestones (see the illustrative sketch after the list below). In this case:

  • Test Readiness Review (TRR): 80% — Focuses on verifying the system's readiness for formal testing (e.g., unit, integration, and system tests).
  • System Readiness Review (SRR): 90% — Examines whether the system design and implementation are complete, validated, and in alignment with the requirements.
  • Operational Readiness Review (ORR): 100% — Ensures the system is fully tested, functioning correctly, and operationally ready for production or deployment.
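As a purely illustrative sketch (the field names and example values are assumptions, not a mandated format), a minimal test record might capture the procedure step, the requirement verified, the expected and actual results, and the disposition, so that every executed step, including failures, leaves a traceable artifact:

    #include <stdio.h>

    /* Hypothetical minimal test record; a real project would define its own
     * fields and tie them to the governing test procedure and requirements. */
    typedef struct {
        const char *test_id;        /* procedure/step identifier */
        const char *requirement;    /* requirement(s) verified by this step */
        const char *expected;       /* expected result stated before the run */
        const char *actual;         /* observed result, recorded for pass AND fail */
        int         passed;         /* 1 = pass, 0 = fail (failures are never omitted) */
        const char *date_utc;       /* execution date */
        const char *operator_name;  /* who ran the step */
    } test_record_t;

    static void log_record(const test_record_t *r)
    {
        printf("%s | req %s | expected: %s | actual: %s | %s | %s | %s\n",
               r->test_id, r->requirement, r->expected, r->actual,
               r->passed ? "PASS" : "FAIL", r->date_utc, r->operator_name);
    }

    int main(void)
    {
        test_record_t rec = {
            "TP-042/step-3", "SRS-118", "mode transitions to SAFE within 2 s",
            "transition observed at 1.4 s", 1, "2024-01-15", "J. Doe"
        };
        log_record(&rec);
        return 0;
    }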

 Click here for "What is the risk?": R092 - No Test Recording Procedure

Test






80%

90%

100%

R093 - Incomplete software command testing

Less than 100% of flight software commands have been tested. Commands in flight software are integral to the control, monitoring, and operation of complex systems such as spacecraft, aircraft, or unmanned aerial vehicles (UAVs).
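A minimal sketch of the underlying bookkeeping is shown below (the command dictionary size and opcodes are hypothetical): track which opcodes the executed tests have exercised and report the untested remainder, so command coverage is a measured number rather than an impression.

    #include <stdio.h>

    #define NUM_COMMANDS 5  /* hypothetical size of the flight command dictionary */

    /* 1 if a test has exercised the command with this opcode, else 0. */
    static int tested[NUM_COMMANDS];

    static void record_command_test(int opcode)
    {
        if (opcode >= 0 && opcode < NUM_COMMANDS) {
            tested[opcode] = 1;
        }
    }

    int main(void)
    {
        /* Pretend the executed test procedures covered only three of five commands. */
        record_command_test(0);
        record_command_test(1);
        record_command_test(3);

        int covered = 0;
        for (int op = 0; op < NUM_COMMANDS; op++) {
            if (tested[op]) {
                covered++;
            } else {
                printf("UNTESTED command opcode %d\n", op);
            }
        }
        printf("command test coverage: %d/%d (%.0f%%)\n",
               covered, NUM_COMMANDS, 100.0 * covered / NUM_COMMANDS);
        return 0;
    }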

Click here for "What is the risk?": R093 - Incomplete software command testing

Test









R094 - Incomplete flight software data input testing

Less than 100% of flight software data inputs have been tested, including software data loads, data configuration loads, I-loads, and flight software configuration files, or the verification approaches for data-driven architectures are missing or incomplete. In flight software systems, data inputs (e.g., software data loads, data configuration loads, I-loads, and configuration files) act as critical parameters for proper system operation. Data-driven architectures rely heavily on these inputs for decision-making, state transitions, and mission execution. Data testing is as important as testing the software functionality, since corrupted, incomplete, or incorrect data can mislead the software, leading to system failures, mission-critical anomalies, or safety hazards.
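As one hedged illustration (the parameter names, limits, and checksum scheme are hypothetical), data-input testing can confirm that a loader rejects out-of-range values and corrupted tables instead of silently accepting them:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical I-load table: a few tunable parameters plus a simple checksum. */
    typedef struct {
        double   max_gimbal_rate_dps;   /* must be in (0, 10] */
        uint32_t telemetry_period_ms;   /* must be in [100, 60000] */
        uint32_t checksum;              /* derived from the fields above */
    } iload_table_t;

    static uint32_t compute_checksum(const iload_table_t *t)
    {
        return (uint32_t)(t->max_gimbal_rate_dps * 1000.0) + t->telemetry_period_ms;
    }

    /* Returns 0 if the load is usable, -1 if it must be rejected. */
    static int validate_iload(const iload_table_t *t)
    {
        if (t->max_gimbal_rate_dps <= 0.0 || t->max_gimbal_rate_dps > 10.0) return -1;
        if (t->telemetry_period_ms < 100 || t->telemetry_period_ms > 60000) return -1;
        if (t->checksum != compute_checksum(t)) return -1;  /* corrupted or stale table */
        return 0;
    }

    int main(void)
    {
        iload_table_t good = { 2.5, 1000, 0 };
        good.checksum = compute_checksum(&good);

        iload_table_t bad = good;
        bad.telemetry_period_ms = 5;            /* out of range: should be rejected */

        printf("good load: %s\n", validate_iload(&good) == 0 ? "accepted" : "rejected");
        printf("bad load:  %s\n", validate_iload(&bad)  == 0 ? "accepted" : "rejected");
        return 0;
    }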

Click here for "What is the risk?":  R094 - Incomplete flight software data input testing

Test









R095 - Incomplete testing of the software update capabilities

Software update capabilities have not been tested. In modern systems, especially in embedded systems, safety-critical applications, and connected systems, software update mechanisms (e.g., Over-the-Air (OTA) updates in vehicles, firmware upgrades in aircraft, or patch management in IoT systems) are essential for maintaining functionality, fixing bugs, enhancing performance, and addressing security vulnerabilities. If software update capabilities are not tested, the risks go beyond the update process itself and extend into the operational behavior of the entire system.
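A minimal sketch of one such test, assuming a hypothetical image header and a CRC-32 integrity check (not any particular project's update protocol), would exercise both branches: an intact image is accepted, and a corrupted image is rejected so the currently installed version remains active.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical update image header: length and CRC of the payload that follows. */
    typedef struct {
        uint32_t length;
        uint32_t crc32;
    } image_header_t;

    /* Minimal bitwise CRC-32 (reflected polynomial), for illustration only. */
    static uint32_t crc32_calc(const uint8_t *data, uint32_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (uint32_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1u)));
            }
        }
        return ~crc;
    }

    /* An update-capability test exercises both outcomes: a valid image is accepted,
     * and a corrupted image is rejected so the current version stays active. */
    static int accept_update(const image_header_t *hdr, const uint8_t *payload)
    {
        return crc32_calc(payload, hdr->length) == hdr->crc32;
    }

    int main(void)
    {
        uint8_t payload[] = { 0x01, 0x02, 0x03, 0x04 };
        image_header_t hdr = { sizeof payload, crc32_calc(payload, sizeof payload) };

        printf("intact image:    %s\n", accept_update(&hdr, payload) ? "accepted" : "rejected");
        payload[2] ^= 0xFF;  /* simulate corruption in transit */
        printf("corrupted image: %s\n", accept_update(&hdr, payload) ? "accepted" : "rejected");
        return 0;
    }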

Click here for "What is the risk?": R095 - Incomplete testing of the software update capabilities

Test









R096 - Missing or undefined software stress testing

Missing or undefined software stress testing approach. Software stress testing is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation. Stress testing is particularly important for "mission critical" software, but it is used for all types of software.
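The sketch below illustrates the idea (the queue capacity, offered load, and load-shedding policy are hypothetical): a stress case deliberately offers work beyond the nominal design rate and checks that the software degrades predictably, with bounded queues and counted drops, rather than failing silently.

    #include <stdio.h>

    #define QUEUE_CAPACITY 16   /* hypothetical nominal sizing for the input queue */

    static int queue_depth = 0;
    static int dropped = 0;

    /* Offer one message; under overload, the design choice here is to drop and count. */
    static void offer_message(void)
    {
        if (queue_depth < QUEUE_CAPACITY) {
            queue_depth++;
        } else {
            dropped++;          /* graceful, observable degradation */
        }
    }

    int main(void)
    {
        /* Nominal case: offered load fits within capacity. */
        for (int i = 0; i < QUEUE_CAPACITY; i++) offer_message();

        /* Stress case: offer several times the nominal load without draining the queue. */
        for (int i = 0; i < 3 * QUEUE_CAPACITY; i++) offer_message();

        printf("queue depth = %d, dropped = %d\n", queue_depth, dropped);
        /* A stress-test criterion might require: no crash, bounded depth, drops counted. */
        return queue_depth <= QUEUE_CAPACITY ? 0 : 1;
    }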

Click here for "What is the risk?": R096 - Missing or undefined software stress testing

Test









R097 - Missing or incomplete software verification

Software verifications are incomplete or missing per flowdown from the software requirements (>=30% at SIR, 20% at TRR, 10% at SAR, 0% at ORR). Software verification ensures that software meets its specified requirements and behaves as intended under both normal and abnormal operating conditions. Missing or incomplete verification represents a critical gap in the software development lifecycle (SDLC) and jeopardizes the reliability, safety, security, and quality of the system.
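The trigger is simple arithmetic, illustrated below with made-up counts: the fraction of software requirements whose verifications remain open, compared against the threshold for each upcoming review.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical status: 200 software requirements, 150 with closed verifications. */
        const int total_reqs = 200;
        const int verified   = 150;

        const double open_pct = 100.0 * (total_reqs - verified) / total_reqs;

        /* Thresholds from the risk statement: open verifications at or above these
         * percentages flag the risk at the corresponding review. */
        const double sir_threshold = 30.0;
        const double trr_threshold = 20.0;
        const double sar_threshold = 10.0;
        const double orr_threshold = 0.0;

        printf("open verifications: %.1f%%\n", open_pct);
        printf("SIR flag: %s\n", open_pct >= sir_threshold ? "yes" : "no");
        printf("TRR flag: %s\n", open_pct >= trr_threshold ? "yes" : "no");
        printf("SAR flag: %s\n", open_pct >= sar_threshold ? "yes" : "no");
        printf("ORR flag: %s\n", open_pct >  orr_threshold ? "yes" : "no");
        return 0;
    }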

Click here for "What is the risk?": R097 - Missing or incomplete software verification

Test





SIR: 30%  TRR: 20%  SAR: 10%  ORR: 0%

1.2 Programmatic Risks

Risk 

Phase / Area    MCR    SRR    MDR    SDR    PDR    CDR    SIR    TRR    SAR    ORR

R057 - Software make/buy strategy

The software acquisition or make strategy carries significant software risk. Key risks arise from the capability maturity, experience, and process reliability of the chosen software development organization, whether internal or external.

Click here for "What is the risk?": R057 - Software make/buy strategy

Prog









R058 - Unrealistic software schedules

Unrealistic software development or test schedules. Unrealistic schedules for software development or testing occur when timelines are not aligned with the software’s complexity, the organization’s capability maturity, resource availability, or risk factors.

Click here for "What is the risk?": R058 - Unrealistic software schedules

Prog









R060 - Missing IV&V support

Missing IV&V support or limited IV&V support. Objectives of performing IV&V include facilitating early detection and correction of cost and schedule variances, enhancing management insight into process and product risk, and supporting project life cycle processes to ensure compliance with regulatory, performance, schedule, and budget requirements.

Click here for "What is the risk?": R060 - Missing IV&V support

Prog









R061 - Missing electronic access to all software products

Lack of or delayed electronic access for NASA to all software products, software data, software source code, and software metrics. Having electronic access to all software products (e.g., code, documentation, test reports, requirements, binaries, and configuration items) by authorized users or stakeholders is essential for the management, traceability, maintenance, and delivery of software systems.

Click here for "What is the risk?": R061 - Missing electronic access to all software products

Prog









R062 - Delayed start of IV&V and or software assurance support

Delayed start to software assurance and/or IV&V activities. Independent Verification and Validation (IV&V) and software assurance support are essential components of software quality assurance, ensuring that software products meet functional, performance, safety, and security requirements.

Click here for "What is the risk?": R062 - Delayed start of IV&V and or software assurance support

Prog









R063 - Missing software measurement data

Missing or incomplete software measurement program. Measurement programs provide the data-driven insights needed to track program progress, manage risks, and improve decision-making across the software development lifecycle (SDLC).

Click here for "What is the risk?": R063 - Missing software measurement data

Prog









R064 - Incomplete flowdown of Agency software requirements

Incomplete flowdown of NPR 7150.2 and NASA-STD-8739.8 requirements.

Click here for "What is the risk?": R064 - Incomplete flowdown of Agency software requirements

Prog









R065 - Incorrect software classification

Incorrect software classifications

Click here for "What is the risk?":  R065 - Incorrect software classification

Prog









R066 - Lack of certifiable software development practices

Lack of verifiable or certifiable software development practices by the organizations developing the critical software components.

Click here for "What is the risk?": R066 - Lack of certifiable software development practices

Prog









R085 - Software re-use feasibility incompatible

Software assigned or assumed to be re-used is either not a good match for the proposed mission or is not fully understood because feasibility studies are missing or incomplete.

Click here for "What is the risk?": R085 - Software re-use feasibility incompatible

Prog









R086 - Operational scenarios

Operational scenarios are not sufficiently defined to support the definition and allocation of system and software requirements.

Click here for "What is the risk?": R086 - Operational scenarios

Prog









1.3 Resource Risks

Risk 

Phase / Area    MCR    SRR    MDR    SDR    PDR    CDR    SIR    TRR    SAR    ORR

R056 - Project cost allocation for software assurance resources

The project cost allocation is insufficient, or software assurance resources are unavailable, for the project's software assurance support to meet the requirements.

Click here for "What is the risk?":  R056 - Project cost allocation for software assurance resources

Prog









R059 - Insufficient software workforce skills

Insufficient software workforce or software skillsets

Click here for "What is the risk?": R059 - Insufficient software workforce skills

Prog









R072 - Underperforming on allocated software resources

Software development activities are behind schedule due to underperforming resources, missing resources, medical-related issues, or personnel priorities.

Click here for "What is the risk?": R072 - Underperforming on allocated software resources

Prog









R073 - High turnover of software engineering or software assurance personnel on the project

Software development activities are behind schedule due to high turnover of software engineering or software assurance personnel on the project

Click here for "What is the risk?": R073 - High turnover of software engineering or software assurance personnel on the project

Prog









1.4 Technological Risks

Risk 

Phase / Area    MCR    SRR    MDR    SDR    PDR    CDR    SIR    TRR    SAR    ORR

R007 - Violations of margin for CPU utilization

Violations of margin for CPU utilization. CPU utilization is a key metric that measures the percentage of time the CPU spends handling processes; high CPU utilization by any task is flagged so that performance issues can be investigated. Timely detection of and a planned response to oversubscription can preserve critical system capabilities. There are several common methods for tolerating these situations, most of which involve reducing demand from non-essential items, especially when they are the source of the oversubscription. Software development is complicated by inadequate or impractical timing allocations or margins associated with the selected CPU. The software design should contain a robust response to situations where computer resources are oversubscribed, and the action to be taken in such situations shall be specified as part of the requirements on the design.
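A minimal sketch of the planned response described above (the cycle period, 70% margin requirement, and task names are hypothetical) compares measured busy time against the budget each cycle and sheds non-essential work when the margin is violated:

    #include <stdio.h>

    #define CYCLE_BUDGET_MS   100.0  /* hypothetical minor-frame period */
    #define CPU_MARGIN_LIMIT   70.0  /* hypothetical requirement: stay below 70% utilization */

    static int science_task_enabled = 1;   /* non-essential task that can be shed */

    /* Planned response to oversubscription: shed non-essential demand first. */
    static void check_cpu_margin(double busy_ms)
    {
        double utilization = 100.0 * busy_ms / CYCLE_BUDGET_MS;

        if (utilization > CPU_MARGIN_LIMIT) {
            science_task_enabled = 0;      /* reduce demand from non-essential items */
            printf("cycle utilization %.1f%% exceeds %.0f%% margin: shedding science task\n",
                   utilization, CPU_MARGIN_LIMIT);
        } else {
            printf("cycle utilization %.1f%% within margin\n", utilization);
        }
    }

    int main(void)
    {
        check_cpu_margin(55.0);   /* nominal cycle */
        check_cpu_margin(82.0);   /* oversubscribed cycle triggers the planned response */
        printf("science task enabled: %d\n", science_task_enabled);
        return 0;
    }

The specific response (shedding, rescheduling, or mode change) is a project design decision; the point is that the behavior is specified, implemented, and verifiable rather than left implicit.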

Click here for "What is the risk?": R007 - Violations of margin for CPU utilization

Code




 

 

 

 

 

R067 - Programming language selection or insufficient software tools and training

Poor software programming language selection by the project, insufficient software analysis and development tools, or a software development organization that is not trained in the use of the selected software language.

Click here for "What is the risk?": R067 - Programming language selection or insufficient software tools and training

Prog









R068 - Inadequate training

Personnel supporting the project do not have adequate experience or training to perform the processes.

Click here for "What is the risk?": R068 - Inadequate training

Prog









R069 - Concurrent avionics hardware development or changing or undefined hardware interface requirements

Risk associated with developing software at the same time as the hardware is being developed, including the risk of misunderstanding the software-hardware interfaces and of not having the hardware available for use during software development.

Click here for "What is the risk?": R069 - Concurrent avionics hardware development or changing or undefined hardware interface requirements

Prog









R070 - Inadequate data throughput margins

Software development is complicated by inadequate or impractical data throughput allocations or margins associated with the requirements and bus selections.
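A worked example of the margin arithmetic is sketched below (the bus capacity, message sizes, rates, and 30% margin requirement are hypothetical): sum the allocated traffic, compare it with the bus capacity, and check the remaining margin against the requirement.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical data bus budget: capacity and the allocations against it. */
        const double bus_capacity_kbps = 800.0;
        const double required_margin   = 0.30;   /* e.g., keep 30% unallocated */

        /* Allocated traffic: messages/second * bits/message, summed over producers. */
        const double allocated_kbps =
              50 * 1024 / 1000.0      /* guidance data    */
            + 20 * 2048 / 1000.0      /* instrument data  */
            + 10 * 4096 / 1000.0;     /* telemetry frames */

        const double utilization = allocated_kbps / bus_capacity_kbps;
        const double margin      = 1.0 - utilization;

        printf("allocated: %.1f kbps of %.1f kbps (%.0f%% used, %.0f%% margin)\n",
               allocated_kbps, bus_capacity_kbps, 100.0 * utilization, 100.0 * margin);
        printf("margin requirement %s\n",
               margin >= required_margin ? "met" : "VIOLATED");
        return 0;
    }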

Click here for "What is the risk?": R070 - Inadequate data throughput margins

Prog









R071 - Software processes

Software development is complicated by undefined software processes or misunderstood processes

Click here for "What is the risk?": R071 - Software processes"

Prog










See also 7.19 - Software Risk Management Checklists, SWE-086 - Continuous Risk Management 

1.5 Additional Guidance

Links to Additional Guidance materials for this subject have been compiled in the Related Links table. Click here to see them in the Resources tab.

2. Resources

2.1 References

[Click here to view master references table.]

No references have been currently identified for this Topic. If you wish to suggest a reference, please leave a comment below.



2.2 Tools


Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

2.3 Additional Guidance

This topic is generic and applies to all SWEHB versions. All links to version specific SWEs and Topics have been removed. SWEs and Topics are in Green Bold Underlined text to make them easy to find. 

Refer to the appropriate Requirements or Topics buttons in the SWEHB version you are using to locate the SWE or Topic you need.

Additional guidance related to this requirement may be found in the following materials in this Handbook:

Related Links

  • SWE-086 - Continuous Risk Management

  • 7.19 - Software Risk Management Checklists

2.4 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

SPAN Links

2.5 Related Activities

This Topic is related to the following Life Cycle Activities:

3. Lessons Learned

3.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this requirement.

3.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.