


8.29 - Software Certification in Human-Rated Missions

1. Introduction

This checklist identifies the data and evidence required to certify software for human-rated missions. It ensures compliance with applicable safety standards and regulatory requirements (NASA NPR 7150.2D, SSP 50038, FAA 450.141 Computing Systems, NASA-STD-8739.8B), mission-critical functionality, and stakeholder acceptance of residual risks, demonstrating that the software is safe, reliable, and mission-ready for crewed spaceflight operations.



PAT-082 - Software Certification in Human-Rated Missions Checklist

2. Key Compliance Data Needs

2.1 Summary Table of Key Compliance Data Needs

Category                  | Key Data/Documentation
--------------------------|-------------------------------------------------
Requirements              | System/Software Requirements Traceability; Hazard Control Requirements
Design                    | Software Architecture; Fault Containment and Management; Safeguard Integration; Interface Control Documents; Data Dictionaries
Development               | Development Plans; Fault Tolerance Implementation Logs
Verification & Validation | Test Results (Initialization, Recovery, Redundancy); IV&V Reports
Hazard Analysis           | Hazard Reports; Operator Action Validation; Safing Procedures
Configuration Management  | Baseline Documentation; Change Logs
Operational Procedures    | Control Sequences; OCAD Validation; Manual Safing Data

2.2 Key Compliance Data Needs

  1. Software Requirements
    1. High-level system/software requirements
    2. Detailed software requirements (or the developer's equivalent documentation)
    3. All known software safety constraints
    4. Software bi-directional traceability data
    5. Specifications for defining and testing internal and external software interfaces
    6. Encryption protocols, authentication mechanisms, secure coding practices, and access control procedures.
  2. Software Design
    1. Description of the software design
    2. Hardware design data on safety-critical subsystems
    3. Data Dictionary: input/output data formats, telemetry parameters, and command sequences.
  3. Software Development
    1. All software analyses results
    2. Completed Time-to-effect (TTE) analysis
    3. Completed Fault Tree Analyses
    4. Completed Failure Mode and Effects Analysis
    5. Software process audit results
    6. Developer software process training records
  4. Software Verification and Validation (software testing)
    1. Software test data
    2. Safety-critical requirements test results
    3. Fault injection test results
    4. End-to-end integration testing results
    5. Penetration testing results (resilience testing and telemetry plans against unauthorized system access and cyberattacks)
    6. Test results and data showing command execution timing within acceptable limits
    7. Test results and data confirming adequate system resource margins
    8. Detailed description of the software test environments, including accreditation data
    9. Software interface (internal and external) test results
    10. Code test coverage data
    11. Software static analysis results reports
    12. Number and types of static analysis tools used
    13. Results of a security vulnerability analysis: detected and resolved vulnerabilities in the software's security framework
    14. All Independent Verification and Validation (IV&V) assessment results
    15. Data showing that the safety-critical software components meet complexity thresholds
    16. Evidence that the code's structural quality presents low risk
  5. Hazards
    1. Hazards and mitigation controls that include software
    2. List of any unresolved hazards
  6. CM
    1. Processes used for version control, change tracking, and baseline management.
    2. Identification of flight-ready software configurations,
    3. Identification of mission data loads.
  7. Flight readiness and Operations
    1. Clear understanding of the operational environment for the mission.
    2. Operational procedures for updating the software and data
    3. Any operational-environment threats to software operation
    4. List of and access to all open software defects
    5. List of and access to all open and closed high-risk software defects
    6. Stakeholder-approved sign-off on any unavoidable operational software-related risks
    7. Evidence of adherence to validated development processes, coding guidelines, and testing protocols.
    8. Deliverables required for regulatory certification
    9. Software Version Description Document (VDD)
    10. FRR Exit Criteria Sign-Off for software
    11. Crew software user guides, operational procedures, and troubleshooting documentation.
    12. Documentation showing mechanisms to handle errors, recover from failures, and preserve system operation under degraded conditions.
    13. Data demonstrating the feasibility (i.e., that they can be completed in the time available) and effectiveness of emergency procedures for crew and controllers.


3. Safety Case for Human-Rated Software Certification

This safety case demonstrates that the software used in this human-rated mission adheres to rigorous safety, quality, and regulatory standards. Based on the evidence provided, the software is flight-ready and capable of supporting critical mission operations while ensuring the safety of the crew and spacecraft under both nominal and adverse conditions.

1. Requirements and Traceability

  • Argument: The software requirements are clearly defined, traceable, and aligned with safety-critical mission needs.
  • Evidence:
    • Comprehensive Software Requirements Specification (SRS) covering high-level mission-critical systems (e.g., navigation, propulsion, anomaly detection, life support, and abort operations).
    • Verified safety requirements (fault tolerance, redundancy, and safe initialization/termination).
    • Acceptable quality of detailed low-level safety-critical requirements, including specifics like algorithm designs and timing constraints.
    • A completed and validated Requirements Traceability Matrix (RTM) showing bi-directional traceability from requirements through design, code, and test results.
    • Reviewed system-level safety analyses to document "Must Work" (MWF) and "Must Not Work" (MNWF) requirements, prerequisite checks for hazardous commands, and mitigation strategies.
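The bi-directional traceability evidence above can be spot-checked mechanically. A minimal sketch (requirement and test-case IDs are hypothetical) that flags requirements with no verifying test and tests traced to no requirement:

```python
# Minimal bi-directional traceability check (hypothetical IDs).
# Forward: every requirement traces to at least one verifying test;
# backward: every test traces back to at least one requirement.

def check_traceability(req_to_tests, test_to_reqs):
    """Return (untested_requirements, orphan_tests)."""
    untested = sorted(r for r, tests in req_to_tests.items() if not tests)
    orphans = sorted(t for t, reqs in test_to_reqs.items() if not reqs)
    return untested, orphans

rtm = {"SRS-101": ["TC-001"], "SRS-102": ["TC-002", "TC-003"], "SRS-103": []}
tests = {"TC-001": ["SRS-101"], "TC-002": ["SRS-102"],
         "TC-003": ["SRS-102"], "TC-099": []}

untested, orphans = check_traceability(rtm, tests)
print(untested)  # ['SRS-103'] -- requirement with no verifying test
print(orphans)   # ['TC-099'] -- test not traced to any requirement
```

In practice the RTM export from the project's requirements tool supplies both mappings; any non-empty result is a traceability gap that must be closed before certification.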

2. Software Design and Architecture

  • Argument: The software architecture is resilient, modular, and designed for fault tolerance and safety-critical operations.
  • Evidence:
    • Architecture documentation detailing modular fault isolation, redundancy, and resiliency mechanisms.
    • Block diagrams illustrating fault containment, fail-safe control paths, and separation of critical functions.
    • Documentation and analysis of safety-critical subsystems (e.g., propulsion, crew displays, navigation) with clearly defined responsibilities.
    • Verified Interface Control Documents (ICDs), ensuring compatibility between internal software, hardware systems, and external interactions.
    • Safety validation evidence for safeguards like fault containment, error detection, operator validation, integrity checks, and anomaly recovery processes.
    • Independent redundant system designs ensuring physical and logical separation to mitigate single points of failure.
    • Validation of fault-tolerant mechanisms, including cosmic radiation protection in CPU designs.

3. Hazard Analysis and Safety Evidence

  • Argument: All hazards associated with software functionality are identified, analyzed, and mitigated to acceptable levels of risk.
  • Evidence:
    • A complete Hazard Analysis Report (HAR) identifying software-driven hazards and the mitigation strategies in place.
    • Fault Tree Analysis (FTA) and Failure Mode and Effects Analysis (FMEA) showing robust fault prevention and recovery mechanisms or a completed System Theoretic Process Analysis (STPA) showing robust fault prevention and recovery mechanisms. 
    • Time-to-effect (TTE) analyses ensuring hazardous conditions can be addressed by safing systems within operational thresholds.
    • Residual risk documentation showing resolution or acceptance of remaining risks by stakeholders.
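The TTE argument above reduces to a margin comparison: the worst-case safing chain must complete inside the hazard's time-to-effect. A hedged sketch, with hypothetical latency values standing in for real analysis results:

```python
# Time-to-effect (TTE) margin check sketch. The hazard TTE and the safing
# chain's worst-case latencies (all values hypothetical) would come from
# the project's timing analysis, here expressed in milliseconds.

def safing_margin_ms(tte_ms, latencies_ms):
    """Worst-case safing time vs. the hazard's time-to-effect."""
    worst_case = sum(latencies_ms)   # detection + command + actuation
    return tte_ms - worst_case       # positive => safing completes in time

# Hypothetical hazard with a 500 ms time-to-effect.
chain = [120, 80, 150]   # detect fault, issue safing command, actuate
margin = safing_margin_ms(500, chain)
print(margin)  # 150 -> 150 ms of margin; a negative value fails the check
```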

4. Verification and Validation (V&V) Evidence

  • Argument: Rigorous testing, validation, and coverage analyses demonstrate software compliance with safety-critical requirements.
  • Evidence:
    • Coverage analysis demonstrating:
      • 100% Statement Coverage
      • 100% Decision Coverage
      • 100% Modified Condition/Decision Coverage (MC/DC) for safety-critical components
    • Unit testing, system integration testing, end-to-end validation, and operational flight simulations confirming that expected functional performance aligns with safety goals.
    • Validation of reused components (COTS, GOTS, OSS, MOTS) to ensure compatibility and reliable integration into human-rated environments.
    • Static analysis reports showing compliance with coding standards and identification/remediation of software defects.
    • Fault injection testing results validating responses to corrupted data, anomalies during power disruptions, and memory errors.
    • Worst-case response timing analysis confirming safing systems meet TTE requirements under degraded conditions.
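Fault injection of the kind cited above can be illustrated on a single telemetry frame. This sketch assumes a hypothetical frame format (payload plus a CRC-32 trailer) and shows an injected bit-flip being detected and rejected:

```python
# Illustrative fault-injection sketch (hypothetical frame format):
# corrupt a telemetry frame and confirm the integrity check rejects it.
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 trailer to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def frame_ok(frame: bytes) -> bool:
    """Recompute the CRC and compare against the trailer."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == crc

frame = make_frame(b"\x01\x02\x03\x04")
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]   # injected bit-flip fault

print(frame_ok(frame))      # True  -- nominal frame accepted
print(frame_ok(corrupted))  # False -- injected fault detected
```

A real campaign injects faults at many points (memory, bus, power transitions) and records each detection and recovery outcome as V&V evidence.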

5. Configuration Management and Change Tracking

  • Argument: Configuration management processes ensure version control and traceability for all software changes.
  • Evidence:
    • Documentation showing version-controlled baselines for flight-ready software and data loads, including configuration hashes and release notes.
    • Audit records verifying modifications, regression testing, impact analyses, and stakeholder approvals.
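The configuration hashes mentioned above let reviewers confirm that the loaded binary matches the certified baseline. A minimal sketch (the image contents are a hypothetical stand-in for a flight-software build):

```python
# Baseline integrity sketch: record a SHA-256 configuration hash for a
# flight-software image so the loaded binary can be matched against the
# certified baseline. Image contents are hypothetical.
import hashlib

def baseline_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

image = b"flight-software-build-4.2.1"   # stand-in for the binary image
recorded = baseline_hash(image)          # stored in the release record

# At load time, re-hash the image and compare against the release record.
print(baseline_hash(image) == recorded)          # True  -- baseline intact
print(baseline_hash(image + b"x") == recorded)   # False -- modified image
```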

6. Cybersecurity and Security Validation

  • Argument: The software architecture incorporates robust cybersecurity measures to mitigate threats in operational environments.
  • Evidence:
    • Security validation reports demonstrating encryption protocols, authentication mechanisms, access control, and secure coding practices.
    • Penetration testing results validating resilience against cyberattacks and unauthorized system access during pre-launch and flight.
    • Vulnerability analysis reports confirming detection, resolution, and closure of security-related risks. 
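One authentication mechanism the evidence above might cover is command authentication, where a ground command is accepted only if its message authentication code verifies under a shared key. A hedged sketch (key and command strings are hypothetical):

```python
# Command-authentication sketch using an HMAC (hypothetical key and
# command format): forged or altered commands fail verification.
import hashlib
import hmac

KEY = b"demo-shared-key"   # stand-in for a properly managed flight key

def sign(cmd: bytes) -> bytes:
    return hmac.new(KEY, cmd, hashlib.sha256).digest()

def accept(cmd: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking tag information.
    return hmac.compare_digest(sign(cmd), tag)

cmd = b"SAFE_MODE_ENTER"
tag = sign(cmd)
print(accept(cmd, tag))                # True  -- authentic command
print(accept(b"SAFE_MODE_EXIT", tag))  # False -- tag does not match
```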

7. Defect Management and Residual Risks

  • Argument: All software defects have been resolved or mitigated to acceptable levels of residual risk.
  • Evidence:
    • Defect reports showing all open and closed defects categorized by severity and justifications for acceptance of residual risks.
    • Logs documenting defect resolutions and testing data validating the outcomes of mitigation measures.
    • Residual risk acceptance documentation signed off by stakeholders, with sufficient evidence showing safe system behavior despite unresolved minor risks.

8. Resource Utilization and Performance Metrics

  • Argument: The software demonstrates sufficient resource margins and acceptable performance under normal and worst-case conditions.
  • Evidence:
    • Validation test results confirming acceptable command execution timing (e.g., abort triggers).
    • Resource utilization analysis showing CPU utilization remains below 80% even under maximum load conditions.
    • Methods for anomaly detection and recovery to safe states outlined and validated.
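The 80% CPU ceiling above is a simple margin check against peak utilization. A sketch with hypothetical utilization samples (percent) taken under maximum simulated load:

```python
# Resource-margin check sketch against the 80% CPU ceiling cited above.
# Utilization samples under maximum simulated load are hypothetical.

CPU_LIMIT = 80.0  # percent

def margin_report(samples):
    """Summarize peak utilization and margin against the ceiling."""
    peak = max(samples)
    return {"peak": peak, "margin": CPU_LIMIT - peak, "pass": peak < CPU_LIMIT}

samples = [61.2, 74.8, 68.5, 77.9]
report = margin_report(samples)
print(report["pass"])              # True -- peak stays under the ceiling
print(round(report["margin"], 1))  # 2.1  -- percentage points of margin
```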

9. Team Training and Software Process Compliance

  • Argument: Development teams adhere to validated processes and are properly trained in safety-critical mission standards.
  • Evidence:
    • Records of team training addressing human-rated software workflows, defect management, and compliance with coding guidelines.
    • Process compliance reports documenting adherence to validated development processes.
    • Operator manuals ensuring deliberate, independent actions are necessary to execute critical safety commands.

10. Certification and Regulatory Compliance

  • Argument: The software complies with all applicable standards and safety regulations for human-rated missions.
  • Evidence:
    • Certification artifacts for compliance with standards such as NASA NPR 7150.2D, NASA SSP 50038, FAA requirements, and NASA-STD-8739.8B.
    • IV&V certification reports confirming operational maturity and compliance with safety standards by independent entities.
    • Regulatory compliance statements from authorities certifying readiness for human-rated missions.
    • Validation of software updates (patched or upgraded) ensuring continued compliance with safety requirements.

11. Flight Readiness Review (FRR) Certification

  • Argument: The software is flight-ready and capable of safely supporting mission operations.
  • Evidence:
    • Software Version Description Document (VDD) completion demonstrating proper documentation of the deployed software.
    • Final test results confirming readiness during flight operations in all mission environments.
    • FRR exit criteria signed off by stakeholders, certifying acceptance or resolution of all known risks, hazards, defects, and anomalies.

12. Flight Software Structural Quality

  • Argument: The software architecture and implementation are structurally sound and meet all quality standards for safety-critical applications.
  • Evidence:
    • Cyclomatic complexity analysis showing all safety-critical components meet thresholds (≤ 15).
    • Documentation verifying fault-tolerant mechanisms for error handling, failure recovery, and system operation under degraded conditions.
    • Maintainability analysis supporting modular coding practices for long-term sustainability and easy updates.
    • Code quality reports validating compliance with architecture, standards, security, and testability requirements.
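The ≤ 15 cyclomatic complexity threshold above can be screened with a rough branch count per function. This is only an illustration built on Python's `ast` module (the sample function is hypothetical); certification evidence would come from a qualified analysis tool:

```python
# Rough cyclomatic-complexity screen against the <= 15 threshold.
# Counts branching nodes via the ast module; illustrative only.
import ast

BRANCHES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
            ast.ExceptHandler, ast.IfExp)

def complexity(func_src: str) -> int:
    tree = ast.parse(func_src)
    # McCabe's V(G): start at 1, add one per decision point.
    return 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(tree))

src = """
def safing_check(p, t):
    if p > 100 or t > 400:
        return "SAFE_MODE"
    for _ in range(3):
        if p < 0:
            return "FAULT"
    return "NOMINAL"
"""
print(complexity(src))        # 5 -- two ifs, one 'or', one loop, plus 1
print(complexity(src) <= 15)  # True -- within the stated threshold
```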


4. Resources

4.1 References



4.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.


4.3 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

4.4 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

SPAN Links


