- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
- 8. Objective Evidence
1. Requirements
4.5.2 The project manager shall establish and maintain:
a. Software test plan(s).
b. Software test procedure(s).
c. Software test(s), including any code specifically written to perform test procedures.
d. Software test report(s).
1.1 Notes
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 History
1.3 Applicability Across Classes
| Class | A | B | C | D | E | F |
|---|---|---|---|---|---|---|
| Applicable? |  |  |  |  |  |  |

Key:
- Applicable
- Not Applicable
1.4 Related Activities
This requirement is related to the following Activities:
| Related Links |
|---|
2. Rationale
Having plans and procedures in place ensures that all necessary and required tasks are performed and performed consistently. The development of plans and procedures provides the opportunity for stakeholders to give input and assist with the documentation and tailoring of the planned testing activities to ensure the outcome will meet the expectations and goals of the project. Test reports ensure that the results of verification activities are documented and stored in the configuration management system for use in acceptance reviews or readiness reviews.
Following templates for test plans, procedures, and reports promotes consistency of documents across projects, supports proper planning, ensures that the planned activities and their results are captured, and helps prevent repeating problems of the past.
Ensuring high-quality, reliable, and mission-compliant software for NASA projects requires a well-defined and robust testing process. This requirement mandates that the project manager establish and maintain critical artifacts—test plans, test procedures, tests, and test reports—because these documents and outputs serve as a foundation for validating the software against its requirements and proving its readiness for deployment. Below is a detailed rationale for each part of the requirement:
Requirement 4.5.2a: Software Test Plan(s)
Rationale:
The Software Test Plan (STP) is the roadmap for the testing process. It defines the "what, why, when, where, and how" of testing, ensuring a structured approach to verifying software functionality, performance, and reliability.
Purpose of the Test Plan:
- Establishes the scope, objectives, environmental prerequisites, and resources required for software testing.
- Provides a comprehensive overview of testing activities, including schedules, levels of testing (e.g., unit, integration, system, acceptance testing), and testing criteria.
- Serves as a control mechanism to ensure all necessary testing is aligned with the project lifecycle and system-level milestones.
Importance:
- Ensures that all stakeholders (e.g., developers, test engineers, managers) share the same understanding of testing expectations.
- Mitigates project risks by identifying and planning for resource limitations, schedule constraints, and potential testing challenges before issues arise.
- Documented testing roadmaps are especially critical for safety- or mission-critical systems, where precise testing practices ensure mission success.
Requirement 4.5.2b: Software Test Procedure(s)
Rationale:
The Software Test Procedure(s) translate the test plan into actionable testing activities by detailing the step-by-step instructions for executing test cases.
Purpose of the Test Procedures:
- Define the "how" of execution for each test identified in the test plan, including setup, execution steps, inputs, expected outputs, and pass/fail criteria (a minimal executable sketch appears at the end of this subsection).
- Ensure the repeatability of the test process, even when executed by different testing personnel.
- Provide detailed actions that allow for traceability between requirements, test cases, and defects uncovered during testing.
Importance:
- Establishes consistency and precision, ensuring that the software is tested against all functional and non-functional requirements without deviation or oversight.
- Increases test coverage, as well-defined procedures ensure that edge cases, corner cases, and integration points are systematically tested.
- Facilitates efficient troubleshooting by outlining failure recovery actions, evaluation criteria, and steps for post-test analysis.
- Ensures traceability and compliance with NASA standards and guidelines, particularly for safety- and mission-critical systems.
- Procedures also provide evidence during post-mission audits that thorough and systematic testing was performed.
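To make the "how" concrete, below is a minimal sketch of a procedure step encoded as an executable check, assuming Python and pytest are available. The conversion routine, step identifiers, and tolerances are hypothetical placeholders chosen for illustration, not part of any NASA baseline.

```python
# Minimal sketch: one written procedure step (input, expected output, pass/fail criterion)
# expressed as an automated test. "scale_reading" is a hypothetical unit under test.
import pytest

def scale_reading(raw_counts: int) -> float:
    """Hypothetical stand-in for the unit under test: converts ADC counts to volts."""
    return raw_counts * 5.0 / 1023.0

# Each tuple mirrors one row of a written procedure: step ID, input, expected output, tolerance.
PROCEDURE_STEPS = [
    ("TP-001-step-1", 0,    0.0,   1e-6),   # lower boundary
    ("TP-001-step-2", 1023, 5.0,   1e-6),   # upper boundary
    ("TP-001-step-3", 512,  2.502, 0.005),  # nominal mid-scale value
]

@pytest.mark.parametrize("step_id, raw, expected, tol", PROCEDURE_STEPS)
def test_procedure_step(step_id, raw, expected, tol):
    # Pass/fail criterion: measured output within tolerance of the expected value.
    assert abs(scale_reading(raw) - expected) <= tol, f"{step_id} failed"
```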
Requirement 4.5.2c: Software Test(s), Including Any Code Specifically Written to Perform Test Procedures
Rationale:
The Software Tests refer to the actual execution of planned and procedural actions to evaluate the software's behavior or performance under specified conditions. In addition to tests themselves, code written specifically to support testing (e.g., test scripts, stubs, drivers, simulators) must also be maintained because it directly ensures proper testing coverage.
Purpose of the Tests:
- Provide tangible results on whether the software meets its functional and performance requirements.
- Detect and diagnose errors, oversights, or inconsistencies in the software to ensure its reliability before deployment.
- Validate interactions between software components, between the software and the hardware, and between the software and external systems.
Importance:
- Helps uncover critical bugs early in the development cycle, reducing cost and schedule risk.
- For mission-critical systems, testing is the only way to simulate real-world scenarios that the system will face, including failures, anomalies, or extreme inputs.
- Test code (e.g., stubs and drivers) ensures that test scenarios can address software in isolation, allowing defective subsystems to be identified without contamination from other parts of the system.
- Comprehensive testing, including boundary, stress, regression, and interface testing, ensures software robustness and reliability under worst-case conditions.
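The following minimal sketch illustrates the kind of test code covered by 4.5.2c: a stub stands in for a hardware bus so the unit under test can be exercised in isolation. The `HeaterController` class, its bus interface, and the setpoint limits are illustrative assumptions only.

```python
# Minimal sketch: test code (a stub replacing hardware) that should be kept under
# configuration management alongside the tests it supports.
from unittest import mock
import unittest

class HeaterController:
    """Hypothetical unit under test: commands a heater through a data bus."""
    def __init__(self, bus):
        self.bus = bus

    def enable_heater(self, setpoint_c: float) -> bool:
        if not 0.0 <= setpoint_c <= 40.0:       # guard against out-of-range commands
            return False
        self.bus.write("HTR_SETPOINT", setpoint_c)
        return True

class HeaterControllerTest(unittest.TestCase):
    def test_valid_setpoint_is_commanded(self):
        stub_bus = mock.Mock()                   # stub isolates the software from real hardware
        ctrl = HeaterController(stub_bus)
        self.assertTrue(ctrl.enable_heater(21.5))
        stub_bus.write.assert_called_once_with("HTR_SETPOINT", 21.5)

    def test_out_of_range_setpoint_is_rejected(self):
        stub_bus = mock.Mock()
        ctrl = HeaterController(stub_bus)
        self.assertFalse(ctrl.enable_heater(99.0))
        stub_bus.write.assert_not_called()       # no command should reach the bus

if __name__ == "__main__":
    unittest.main()
```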
Requirement 4.5.2d: Software Test Report(s)
Rationale:
The Software Test Report (STR) serves as the formal documentation of testing efforts, outcomes, and resulting evaluations. It confirms whether the system has met its testing objectives and provides evidence of software maturity.
Purpose of the Test Reports:
- Summarizes test results, identifying tests that passed, failed, or encountered unexpected behavior.
- Provides analysis of failures or anomalies encountered during testing, including root cause analysis and corrective action reports.
- Captures deviations from the test plan or procedure, documenting their resolution or justification.
- Provides a historical record of testing activities, outcomes, and overall confidence in the product's reliability for stakeholders or auditors.
Importance:
- Ensures accountability and transparency in the testing process, allowing technical leads and managers to confidently endorse system readiness.
- Demonstrates traceability between project requirements and test results, ensuring that all requirements (functional, performance, safety) have been tested thoroughly.
- Generates evidence for NASA's rigorous safety and quality assurance processes, thereby serving as formal documentation for agency reviews, external authorities, or mission partners.
- Test reports serve as a key reference in future lifecycle phases, especially when evaluating issues discovered post-launch or during operational use.
Overall Importance of Maintaining These Testing Artifacts
Risk Mitigation:
- Testing is the frontline defense against software defects reaching mission-critical operations. These artifacts (plans, procedures, tests, and reports) ensure that errors are detected and resolved early in the lifecycle, minimizing risks to cost, schedule, and mission success.
Compliance and Certification:
- NASA projects operate under strict safety, quality, and mission assurance requirements. Testing artifacts provide demonstrable evidence of compliance with agency standards and guidelines, without which key stakeholders may not approve or certify system readiness.
Repeatability and Reproducibility:
- Documented test plans, procedures, and reports allow tests to be repeated across teams, facilities, and lifecycle phases. This is essential for regression testing during updates, upgrades, or maintenance activities.
Traceability and Accountability:
- Testing artifacts establish traceability between software requirements, testing activities, and final deliverable quality. This fosters accountability among teams and ensures that no requirement is overlooked or inadequately tested.
- For example, a test failure can be traced back to its source (requirement, code, or design defect), enabling focused corrective action.
Support for Audits and Stakeholder Confidence:
- NASA missions are often scrutinized by internal and external entities. Well-maintained testing artifacts provide stakeholders with confidence that every necessary step has been taken to validate and verify the software.
Preparation for Real-World Scenarios:
- The increasingly complex and autonomous nature of modern NASA systems requires rigorous testing to ensure the software behaves predictably in real-world conditions. Test plans, procedures, and results ensure the system is validated for extreme edge cases and operational anomalies.
Facilitates Knowledge Transfer and Scalability:
- Detailed testing documents help future teams understand project testing history, workflows, and decisions. This is invaluable when scaling existing systems or developing derivatives of prior software.
Alignment with NASA's Mission Assurance Philosophy
NASA mandates stringent software assurance processes for all projects, particularly those involving flight and mission-critical systems. This requirement directly supports the agency's goals to minimize mission risk, maintain software quality, and ensure the safety of crew members, hardware, and scientific payloads.
By developing and maintaining detailed test plans, procedures, tests, and reports, projects can ensure the software functions as intended, is robust under all operating conditions, and aligns with NASA's high standards for mission success and safety.
3. Guidance
To ensure the successful execution of software testing and adherence to NASA's high standards for quality, safety, and reliability, this enhanced guidance establishes clear expectations for creating, executing, documenting, and maintaining test plans, procedures, and reports. Following these principles ensures traceability, accountability, and the ability to adapt dynamically to project changes.
1. Software Test Plans, Procedures, and Reports: Comprehensive Development Guidelines
Test Plans (STP):
Objective:
- Define the scope, purpose, and strategy for software testing activities, including responsibilities, resources, test levels, and test timelines.
- Establish high-level testing goals to ensure that all software requirements (functional, performance, safety, and interface) are met.
Content Recommendations:
- Use the content guidance from Topic 5.10 - Software Test Plan (STP) to include:
- Scope of testing.
- Test objectives and test levels (unit, integration, system, regression, etc.).
- Roles and responsibilities for test execution.
- Tools, environments, and hardware/software configurations.
- Risk assessment and mitigation plans for testing.
- Resource estimates, schedules, and budget allocations.
Planning for Progressive Builds:
- For software developed in multiple builds, test plans must include phased validations to ensure requirements implemented incrementally in earlier builds are tested thoroughly. Final testing should incorporate end-to-end integration and verification.
Test Procedures:
Objective:
- Provide detailed, actionable instructions for executing each test case identified in the test plan.
- Ensure consistency and repeatability across execution efforts by clearly defining inputs, expected outputs, and step-by-step actions.
Guidelines:
- Develop test cases that align with SWE-187 and are documented through Topic 5.14 - Test Procedure Guidance:
- Covering all functional and design requirements, including boundary conditions, error handling, and performance constraints.
- Addressing all software interfaces between internal and external systems or units.
- Including stress, load, and fault recovery tests to simulate real-world and worst-case scenarios.
- Include procedural steps to evaluate:
- Limits and boundary condition handling.
- Algorithms and correctness of calculations.
- Operational accuracy of hazard mitigations and fault recovery mechanisms.
Reuse and Legacy Software:
- Legacy or reused software components must undergo comprehensive testing with updated requirements to ensure compatibility and correctness:
- Test all modified components.
- Test all critical components, regardless of whether they were modified.
- Target components with known or past performance risks.
Dry Runs:
- Require all software test procedures to undergo dry runs to:
- Confirm procedure completeness and adequacy.
- Ensure tools, test data, and environmental resources are ready.
- Identify potential gaps or missing steps prior to formal execution.
Test Execution:
Independence in Testing (Topic 3.1):
- Establish clear independence in software testing for Classes A, B, and safety-critical Class C software:
- Testers must be independent of the personnel responsible for the detailed design, implementation, or unit testing of the software item.
- Personnel with design and implementation knowledge are encouraged to assist the process by providing insight for test cases.
- Independence reduces bias and improves defect detection during testing.
Testing in the Target Environment (Topic 3.3):
- Perform qualification and final testing on hardware that closely matches the target system's operational configuration, including:
- Processor architecture, memory size, timing, and performance characteristics.
- Interfaces and data I/O rates.
- High-fidelity simulations (see SWE-073 - Platform or Hi-Fidelity Simulations).
- Testing on high-fidelity hardware ensures that system-level performance, timing, and operational requirements are met.
Test Rig Sufficient for Objectives:
- Verify that test configurations include sufficient hardware and software fidelity to comprehensively simulate actual use conditions. This minimizes the introduction of false negatives due to unrealistic environmental constraints.
Test Reports:
Objective:
- Document the outcomes of the software testing, analyze results, assess anomalies, and provide traceability to test plans and procedures.
Key Content:
- Include all required artifacts described in Topic 5.11 - Software Test Report Guidance and ensure the report covers:
- Test cases executed, pass/fail results, and deviations from expected outcomes.
- Analysis of data captured during testing, including evidence of requirement fulfillment.
- Any failures or anomalies observed, accompanied by root cause analysis and recommendations for corrective actions.
- Confirm traceability between test results and project requirements, test plans, and procedures.
2. Specialized Testing Recommendations
Regression Testing (SWE-191):
- After any modification (e.g., bug fixes, enhancements, or new requirements), regression testing must be conducted to ensure no unintended impacts on previously tested functionality. Include test procedures to validate:
- Newly modified or added code.
- Existing functionality and interfaces against updated code.
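One common way to keep the regression subset repeatable is to tag it explicitly. The sketch below assumes pytest and a project-defined `regression` marker; the marker name and the functions are illustrative assumptions, not a mandated toolchain.

```python
# Minimal sketch: tagging tests so the regression subset can be rerun after every change.
import pytest

def apply_gain(value: float, gain: float) -> float:
    """Hypothetical previously delivered function protected by regression tests."""
    return value * gain

@pytest.mark.regression
def test_existing_gain_behavior_unchanged():
    # Guards previously verified behavior against unintended side effects of new changes.
    assert apply_gain(2.0, 3.0) == 6.0

def test_new_feature_added_with_the_change():
    # New-functionality test added alongside the modification that introduced it.
    assert apply_gain(1e6, 1.0) == 1e6
```

With the marker registered in the pytest configuration, `pytest -m regression` re-executes only the tagged subset after each modification, while the full suite still runs at scheduled verification points.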
Fault Recovery and Robustness Testing:
- Create test cases that simulate fault and recovery scenarios to evaluate:
- The software’s ability to detect, respond to, and recover from failures.
- Correct operation during low-power modes, unexpected shutdowns, or resource unavailability.
Stress and Performance Testing:
- Evaluate the software's ability to operate under peak loads, high data rates, and adverse environmental conditions.
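Below is a minimal sketch covering both the fault-recovery and timing checks described above, assuming pytest. The telemetry reader, the injected fault, and the 50 ms budget are hypothetical values chosen for illustration.

```python
# Minimal sketch: fault injection plus a simple timing-budget check.
import time

def read_telemetry(channel: dict) -> dict:
    """Hypothetical routine: on sensor failure, flag the fault and fall back to last-known-good data."""
    try:
        value = channel["read"]()
    except IOError:
        return {"value": channel.get("last_good", 0.0), "fault": True}
    channel["last_good"] = value
    return {"value": value, "fault": False}

def test_recovery_from_sensor_fault():
    def failing_read():
        raise IOError("sensor dropout")          # injected fault
    channel = {"read": failing_read, "last_good": 12.5}
    result = read_telemetry(channel)
    assert result["fault"] is True               # fault detected
    assert result["value"] == 12.5               # safe fallback value used

def test_reads_complete_within_time_budget():
    channel = {"read": lambda: 7.0}
    start = time.perf_counter()
    for _ in range(1000):                        # simple stress loop
        read_telemetry(channel)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.05                        # hypothetical 50 ms budget for 1000 reads
```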
3. Documentation Maintenance
Dynamic Updates to Test Plans, Procedures, and Reports:
Trigger Points for Updates:
- Updates are required when:
- The project design evolves, or requirements change (SWE-071).
- New test tools or resources are introduced.
- Test results identify inadequacies in coverage or procedures.
- Test documents must also evolve when software classification or safety-criticality changes.
Change Management:
- Incorporate updates via formal review and approval processes. Changes must be reviewed, peer-inspected, and validated to align with evolving project needs.
4. Software Assurance Role in Testing
Witness Testing:
- Ensure software test procedures are dry-run before formal witnessed testing (Topic 3.4). The presence of software assurance during formal tests ensures:
- Procedures are executed as planned.
- Results are properly documented and discrepancies addressed.
- Review results to confirm whether requirements verification and validation are complete.
Review Analysis and Documentation:
- Software assurance must evaluate test reports to verify:
- Coverage of all requirements.
- Adequacy of regression tests and failure analyses.
- Updates accurately reflect changes to requirements or system configurations.
5. Process Improvement and Best Practices
Leverage Historical Data:
- Integrate lessons learned from previous projects (reuse test assets where applicable) to improve efficiency and reduce risks.
Incorporate Metrics and Audits:
- Track metrics for test effectiveness (e.g., defect density, requirements coverage, open vs. closed defects).
- Audit the testing process to ensure adherence to documented plans and procedures.
Adhering to these guidelines ensures comprehensive, repeatable, and traceable testing processes, ultimately contributing to the software's safety, reliability, and mission success. Software testing is not only a verification and validation activity—it forms the cornerstone of high-quality software engineering.
Projects create test plans, procedures, and reports following the content recommendations in Topic 7.18 - Documentation Guidance, Topic 5.10 - STP - Software Test Plan, and Topic 5.14 - Test - Software Test Procedures.
See also Topic 7.06 - Software Test Estimation and Testing Levels, SWE-191 - Software Regression Testing, SWE-073 - Platform or Hi-Fidelity Simulations, Topic 7.15 - Relationship Between NPR 7150.2 and NASA-STD-7009, Topic 8.13 - Test Witnessing, SWE-194 - Delivery Requirements Verification, and Topic 5.11 - STR - Software Test Report.
NASA users should consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources related to the test plan, test procedures, and test reports, including templates and examples.
3.2 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:
3.3 Center Process Asset Libraries
SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN (SWEREF-197). Available to NASA only: https://nen.nasa.gov/web/software/wiki
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).
| SPAN Links |
|---|
4. Small Projects
For small projects, it is essential to streamline test documentation to reduce overhead while ensuring that sufficient testing rigor is maintained. Small projects often have limited resources, including personnel, time, and budget, requiring a practical approach to meet testing objectives. The following enhanced guidance provides strategies to balance efficiency with quality in software testing documentation.
1. Combining Test Documentation
Small projects can benefit from combining various test documents to reduce duplication of effort and simplify management while maintaining traceability and thoroughness.
Best Practices:
Test Plan, Procedures, and Results in a Single Document:
- Instead of creating separate documents for the Software Test Plan (STP), Software Test Procedures (STPR), and Test Results, small projects can consolidate them into one unified document.
- In this format:
- Use one section to define test objectives, scope, and responsibilities.
- Detail the step-by-step test procedures, with space to directly record test results and observations.
- Include placeholders for analyzing results and documenting anomalies or corrective actions.
Benefits:
- Reduces the number of deliverables to manage, review, and maintain.
- Provides end-to-end traceability in a single location.
- Simplifies updates when requirements or procedures change.
Templates with Embedded Result Fields:
- Develop test procedures with embedded fields for directly capturing test results.
- Example: Test step descriptions, input/output parameters, execution timestamps, pass/fail status, and remarks can all be documented in the same table or structured section.
- This reduces the need for separate, standalone test results documents while ensuring that results are directly traceable to their corresponding procedures (a minimal sketch of such a combined record appears at the end of this subsection).
Leverage Lightweight Documentation:
- Use concise formats such as tables or checklists for smaller test cases or simpler software modules:
- Include basic fields such as test ID, requirement verified, test description, expected result, actual result, pass/fail status, and notes.
- Avoid excessive formality while still adhering to essential documentation standards.
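As noted above, a single record can carry both the procedure step and its result. Below is a minimal sketch of such a combined record written to CSV, assuming Python is available; the field names and file path are illustrative, not a required format.

```python
# Minimal sketch: a lightweight, combined procedure/result record kept as a CSV log.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class TestRecord:
    test_id: str          # e.g., "TC-014" (hypothetical identifier)
    requirement: str      # requirement verified, e.g., "SRS-103"
    description: str      # what the step does
    expected: str         # expected result from the procedure
    actual: str           # filled in at execution time
    status: str           # "pass" / "fail"
    notes: str = ""       # anomalies, waivers, remarks

def write_records(path: str, records: list) -> None:
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=[f.name for f in fields(TestRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

if __name__ == "__main__":
    write_records("test_log.csv", [
        TestRecord("TC-001", "SRS-101", "Power-on self test",
                   "All built-in test checks pass", "All built-in test checks pass", "pass"),
    ])
```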
2. Standardized Testing Frameworks
For organizations that manage multiple small projects, establishing standardized testing documentation and processes can reduce repeated efforts while ensuring consistency across all projects.
Framework Development:
Create Flexible, Modular Templates:
- Develop standardized templates for:
- Test Plans: Include placeholders for custom project-specific information, such as scope, resource allocation, and schedules.
- Test Procedures: Provide a baseline set of procedural steps that can be tailored for each project, reducing the need to create procedures from scratch.
- Test Reports: Include pre-defined fields for results summaries, trends, and analysis.
- Templates should follow best practices but remain lightweight, allowing projects to complete only the sections relevant to their needs.
Include a Checklist-Driven Approach:
- Standardize testing processes using checklists that align with high-level requirements:
- Ensure coverage for key areas such as functional verification, performance testing, boundary testing, and fault recovery.
- Checklists can simplify reporting by serving as both the procedure and a validation artifact when completed.
Maintain Flexibility:
- Allow projects to customize and scale down the framework to suit their size and complexity:
- Projects can add sections, tailor procedural steps, or omit unnecessary detail, depending on their specific software scope and classification.
Centralized Resources for Small Projects:
Create and Maintain a Reusable Test Repository:
- Store reusable test cases, test data, and prior validation artifacts in a Process Asset Library (PAL) or testing repository.
- Small projects can pull pre-existing test assets and adapt them to their own requirements, leveraging institutional knowledge from past projects.
Tool Recommendations for Standardization:
- Use shared tools or platforms to enforce consistency in documentation:
- For example, using NASA’s standardized tools (or custom organization-developed tools) for test planning, execution, and reporting, which provide pre-defined frameworks.
3. Maximizing Efficiency in Small Projects
Small projects often face constraints and competing priorities, so efficient testing practices are critical.
Lean Testing Strategies:
Risk-based Testing:
- Focus testing efforts on high-risk and mission-critical requirements. Use a simplified risk assessment to prioritize test cases:
- Rank requirements by risk factors such as failure impact, likelihood, and importance to overall functionality.
- Defer less critical testing tasks to later stages or minimize testing of low-risk functionality.
Systematic Reuse of Legacy Tests:
- For projects involving reused or legacy software, prioritize regression and integration testing over complete re-verification of unmodified components:
- Test only modified or high-risk sections of the reused software.
- Use prior test documentation as a baseline, ensuring new test procedures and results only extend the existing framework.
Encourage Cross-Disciplinary Roles:
- In small teams, testing roles may overlap with development or design roles. While independence of testers is ideal (per the guideline for Classes A, B, and safety-critical Class C software), small teams should:
- Ensure critical tests are peer-reviewed or reviewed externally to maintain testing objectivity.
- Leverage tools or automation to increase consistency and reduce bias when tester independence cannot be fully achieved.
Minimize Overhead with Automation and Tools:
Automated Test Scripts:
- Develop reusable test scripts that can quickly execute test procedures and collect results, especially for time-consuming regression testing.
- Tools like continuous integration pipelines (e.g., Jenkins, GitLab CI) can automate testing workflows and reduce overall documentation and execution time; a minimal runner sketch appears at the end of this subsection.
Use Simple Tracking Systems for Test Documentation:
- Instead of complex document management systems, consider lightweight tools (e.g., shared spreadsheets or simple databases) to track test cases, procedures, and results.
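Tying the two ideas together, here is a minimal sketch of a low-overhead automation wrapper, assuming pytest and the project-defined `regression` marker used in the Guidance section sketch; it runs the tagged subset and appends the outcome to a simple CSV tracking log.

```python
# Minimal sketch: run the regression subset and append the outcome to a CSV tracking log.
import csv
import datetime
import pytest  # assumes pytest is installed in the project environment

def run_regression(log_path: str = "test_runs.csv") -> int:
    exit_code = pytest.main(["-m", "regression", "-q"])   # run only regression-marked tests
    with open(log_path, "a", newline="") as fh:
        csv.writer(fh).writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            "regression",
            "pass" if exit_code == 0 else f"fail (exit code {int(exit_code)})",
        ])
    return int(exit_code)

if __name__ == "__main__":
    raise SystemExit(run_regression())
```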
4. Maintenance of Documentation
Even for small projects, documentation must remain up-to-date to ensure continuity, traceability, and compliance throughout the project lifecycle.
Guidelines for Effective Maintenance:
Use Configuration Management:
- Store test documents in a version-controlled repository, allowing updates to reflect changes in requirements, design, or test scope.
- Document updates such as:
- New or revised requirements (SWE-071).
- Changes resulting from defect reports or anomalies.
- Modifications due to new hardware/software tools included in testing.
Periodic Reviews:
- Even small projects should conduct periodic reviews of test documentation and results:
- Use peer reviews or lightweight audits to validate documents for relevance and sufficiency.
- Ensure completed tests are adequately marked as resolved and traceable to the original requirements.
Carry Test Documentation into Operations and Maintenance:
- Maintain accurate test documentation into operational phases:
- This ensures that any future updates, maintenance, or anomaly investigations have a reliable baseline from which to work.
5. Collaboration and Knowledge Sharing
Shared Teams, Shared Knowledge:
- Encourage small project teams to collaborate and share test strategies, lessons learned, and reusable assets to reduce development time.
- Implement tools for quick knowledge sharing, such as Confluence pages, chat platforms, or internal wikis.
Summary of Improved Small Project Strategies:
| Guidance Area | Key Improvement |
|---|---|
| Combine Documentation | Use unified documents for plans, procedures, and results to streamline testing and reduce redundancies. |
| Standardization Framework | Establish reusable templates and checklists that can be customized per project. |
| Efficiency Practices | Employ risk-based testing, reuse legacy test assets, and focus on high-priority requirements. |
| Automation Tools | Leverage automation and lightweight tracking systems for consistent, low-overhead execution. |
| Maintenance | Use configuration management and conduct periodic reviews to maintain up-to-date documentation. |
| Shared Knowledge | Develop repositories and foster collaboration between teams for reusable testing resources. |
By focusing on reducing overhead, leveraging standardization, and prioritizing high-value activities, this improved guidance helps small projects meet NASA's rigorous quality standards while operating within resource constraints.
5. Resources
5.1 References
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-209) IEEE Computer Society, IEEE Std 1012-2016 (Revision of IEEE Std 1012-2012), Published September 29, 2017, NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards. Non-NASA users may purchase the document from: http://standards.ieee.org/findstds/standard/1012-2012.html
- (SWEREF-211) IEEE Computer Society, IEEE STD 1059-1993, 1993. NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards.
- (SWEREF-276) NASA-GB-8719.13, NASA, 2004. Access NASA-GB-8719.13 directly: https://swehb.nasa.gov/download/attachments/16450020/nasa-gb-871913.pdf?api=v2
- (SWEREF-478) Aerospace Report No. TOR-2004(3909)-3537, Revision B, March 11, 2005.
- (SWEREF-561) Public Lessons Learned Entry: 1529.
- (SWEREF-573) Public Lessons Learned Entry: 2419.
- (SWEREF-579) Lessons Learned Entry: 991.
- (SWEREF-581) CAMS 10188. In NASA Engineering Network.
- (SWEREF-695) The NASA GSFC Lessons Learned system. Lessons submitted to this repository by NASA/GSFC software projects personnel are reviewed by a Software Engineering Division review board. These Lessons are only available to NASA personnel.
5.2 Tools
6. Lessons Learned
6.1 NASA Lessons Learned
The NASA Lessons Learned database contains crucial insights from past incidents related to software testing and testing plans. Proper planning, preparation, execution, and review of test procedures are essential to ensure safe, reliable, and effective testing in NASA projects, especially for safety-critical systems and scenarios involving hardware integration. This section summarizes key lessons learned and actionable recommendations from the database to help avoid similar shortcomings in future projects:
1. Aquarius Reflector Over-Test Incident
Lesson Number: 2419
This incident highlighted the importance of comprehensive test procedures and clearly defined roles and responsibilities to prevent confusion during operations and ensure successful test execution.
Key Points:
Lesson Learned No. 1:
- "The Aquarius Reflector test procedure lacked complete instructions for configuring the controller software before the test."
- Actionable Takeaway: Test procedures must be fully detailed, including software setup and configuration steps, to minimize the risk of errors or omissions during execution. Missing instructions can lead to operational anomalies or test failures.
Lesson Learned No. 4:
- "The roles and responsibilities of the various personnel involved in the Aquarius acoustic test operations were not documented. This could lead to confusion during test operations."
- Actionable Takeaway: Ensure test plans and procedures clearly define the roles and responsibilities of all personnel involved. This includes engineers, quality assurance personnel, and safety monitors—role clarity is critical for smooth execution and timely responses to anomalies.
Recommendation: Review test procedures for completeness and ensure personnel roles are documented within the test plan to prevent misunderstandings during execution.
2. Planning and Conducting Hazardous Tests
Lesson Number: 0991
Testing involving hazardous conditions, such as extreme temperatures, pressures, energy storage, or deployable systems, requires heightened precautions. This set of lessons emphasized special measures to mitigate risks to personnel, flight hardware, and facilities.
Key Points:
Comprehensive Test Documentation:
- Test procedures must be well written, well organized, and easy to interpret for both engineering and quality assurance personnel.
- Actionable Takeaway: Simplify and clarify documentation for high-risk tests, ensuring technical details are accurate and understandable by all stakeholders.
Pre-Test Training:
- Document inherent test anomalies (known issues associated with the test equipment or conditions, including likely causes, effects, and remedies) and include them in pre-test training.
- Actionable Takeaway: Equip personnel with knowledge of historical anomalies and potential challenges before testing begins, to reduce error frequency and improve test readiness.
Safety-Critical Data Readouts:
- Ensure test control data is presented in clear and easily understood formats (e.g., audible alarms, visible indicators, or graphical visualizations).
- Actionable Takeaway: Use intuitive safety mechanisms to protect flight hardware during hazardous tests.
Test Readiness Reviews and Equipment Verification:
- A formal test readiness review must confirm that all Ground Support Equipment (GSE), test devices, and sensors have been properly calibrated and maintained.
- Actionable Takeaway: Validate equipment health and configuration prior to test execution to avoid premature failures or misreading.
Ensuring Quality Assurance Oversight:
- Quality assurance personnel need to be actively involved throughout hazardous testing to monitor adherence to procedures and prescribed responses to anomalies.
- Actionable Takeaway: Make quality assurance witness testing mandatory for high-risk scenarios to complement engineering oversight.
Recommendation: For hazardous tests, implement rigorous procedural reviews, comprehensive pre-test training, well-maintained equipment, and clear safety-critical data displays to protect personnel and hardware.
3. Proper Test Configuration for Fully Loaded Scenarios
Lesson Number: Not Specified (Testing for Configurations)
Inadequate test planning contributed to testing gaps in integrated and formal test levels. Testing scenarios did not account for concurrent operation of BAT06 and BAT04 in a fully loaded launch configuration, leading to timing-related code errors.
Key Points:
- Integrated and Acceptance Testing:
- "Neither test plan had steps where BAT06 and BAT04 were running concurrently in a launch configuration scenario. Thus, no test runs were conducted reflecting the new fully loaded console configuration."
- Actionable Takeaway: Test plans must include accurate operational scenarios, including all relevant hardware/software interactions, to ensure tests reflect system behavior under real-world mission conditions.
Recommendation: Include fully loaded configuration scenarios in integrated and acceptance testing to uncover timing or interaction-related issues that might occur during actual use.
4. Ensure Test Monitoring Software Prevents Over-Test
Lesson Number: 1529
This lesson emphasized the critical role of test monitoring software and hardware safeguards to avoid over-testing, a condition that could lead to damaging flight hardware.
Key Points:
- Test Monitoring Software Limits:
- "Under the principle of 'First, Do No Harm,' ensure test monitoring and control software is programmed or limiting hardware devices are installed to prevent over-test conditions under all circumstances."
- Actionable Takeaway: Implement software safeguards or physical limiting devices as part of the test environment to prevent over-exposure or unintended stress on flight equipment during testing.
Recommendation: Develop automated limits in test monitoring software or insert hardware safety devices to avoid harmful over-test conditions.
Summary of Actionable Recommendations from Lessons Learned
Complete Documented Procedures:
- Ensure test procedures fully detail all necessary steps, configurations, and personnel roles to avoid confusion and test deficiencies.
Comprehensive Testing for Operational Scenarios:
- Reflect accurate mission configurations (fully loaded conditions) during integrated and formal testing to identify timing or interaction-related errors.
Handle Hazardous Tests with Care:
- Use extra precautions, including pre-test training, quality assurance oversight, clear documentation, intuitive safety-critical data displays, and rigorous test readiness reviews to ensure personnel and hardware safety during high-risk tests.
Prevent Over-Test Conditions:
- Include safeguards within test monitoring software or hardware devices to prevent unintended stress or damage to flight hardware.
By integrating these lessons learned into future test plans, projects can significantly reduce testing risks, ensure thorough validation coverage, and improve the chances of mission success while safeguarding both personnel and hardware.
6.2 Other Lessons Learned
The Goddard Space Flight Center (GSFC) Lessons Learned online repository (SWEREF-695) contains the following lessons learned related to software test planning, procedures, and execution. Select the titled link below to access the specific Lessons Learned:
- Test plans should cover all aspects of testing. Lesson Number 56: The recommendation states: "Test plans should cover all aspects of testing, including specific sequencing and/or data flow requirements."
- Apply Change Management principles to test hardware/software. Lesson Number 65: The recommendation states: "Apply Change Management principles to test hardware/software."
- Proper sequencing of stress tests can make root cause analysis easier when failures occur. Lesson Number 68: The recommendation states: "Proper sequencing of stress tests can make root cause analysis easier when failures occur."
- Hire people on the FOT side in prelaunch to focus on ground system testing. Lesson Number 97: The recommendation states: "Hire 2-3 people on the FOT side in prelaunch to focus on ground system testing and not put this on the flight ops personnel of the FOT."
- Incorporate automation into operations prior to launch. Lesson Number 98: The recommendation states: "Incorporate automation into operations prior to launch, instead of waiting until after launch."
- "Day in the Life" simulations using automation prior to launch. Lesson Number 99: The recommendation states: "Execute "Day in the Life" simulations using automation prior to launch."
- Leverage planned testing activities to verify ground system requirements. Lesson Number 122: The recommendation states: "Leverage planned testing activities to verify ground system requirements."
- Use the Flight Ops team to perform ground system acceptance testing. Lesson Number 123: The recommendation states: "Use the Flight Ops team to perform ground system acceptance testing."
- Impacts caused by interfaces that are not tested pre-launch. Lesson Number 124: The recommendation states: "Develop mitigations for impacts caused by interfaces that are not tested pre-launch."
- Perform pre-launch end-to-end testing between the spacecraft and all primary ground stations. Lesson Number 126: The recommendation states: "Perform pre-launch end-to-end testing between the spacecraft and all primary ground stations."
- If ground systems are not available, a dedicated test needs to be performed. Lesson Number 144: The recommendation states: "Maintaining spacecraft schedule is critical: if ground systems are not available, a dedicated test needs to be performed."
- For a flight mission, plan and budget from outset for full end-to-end testing simulating an "orbit in the life". Lesson Number 161: The recommendation states: "For a flight mission, plan and budget from outset for full end-to-end testing simulating an "orbit in the life"."
- End-to-End Testing through satellite I&T. Lesson Number 172: The recommendation states: "End-to-End Testing should be planned for smaller events spread out through satellite (i.e., spacecraft with integrated payload/science instruments) I&T."
- Software Requirement Sell-Off Expedience. Lesson Number 177: The recommendation states: "As early as feasible in the program (EPR-CDR time frame) ensure that the project will be provided with all relevant test articles well in advance of the test’s run-for-record (will likely require NASA Program Management buy-in as well). This will allow the time necessary for: review of requirement test coverage, accumulation of all comments (especially if IV&V are supporting the program), and vendor disposition of all comments to project satisfaction. In this manner, when test artifacts from the FQT run-for-record are provided for requirement sell-off, the Flight Software SME will have a high level of confidence in the artifacts provided (knowing how each requirement has been tested) to expedite the sign-off process. This lesson can also be applicable for Instrument Software, Simulator Software, and Ground System Software."
- Going Beyond the Formal Qualification Test (FQT) Scripts: Data Reduction/Automation. Lesson Number 295: The recommendation states: "As early as feasible in the program (pre-FQT time frame), ascertain whether automated testing is planned for Software FQT and ensure that the vendor will provide all relevant test articles well in advance of test run-for-record (will likely require NASA Program Management buy in and support as well). Identify any calls to open up additional views to EGSE, Simulators, raw hex dumps, etc., that may be used to assist with data analysis/processing/reduction in the scripts. Request clarification on how data captured in those views will be used and have snapshots provided (or travel to vendor site) to fully understand verification extent. For automated testing, the Software Systems Engineer should evaluate whether the provider has allocated sufficient time and training to fully understand how the automated testing program will exercise and verify all required functions and behaviors. This lesson can also be applicable for Instrument Software, Simulator Software, and Ground System Software."
- Consider a streamlined review process for lower maturity products. Lesson Number 332: The recommendation states: "Start with a small group for initial review, and then add reviewers later."
- Key Mission Ops Tests essential to timely V&V of flight design/mission ops concept & launch readiness. Lesson Number 342: The recommendation states: "Develop/iterate/execute system level tests to verify/validate data system/mission Concept of Operations during Observatory I&T (e.g., the Comprehensive Performance Test (CPT) and Day-in-the-Life (DiTL) test). The CPT should be: a) thorough (exercising all copper paths, as many key data paths as reasonable, and using operational procedures); b) executed prior to/post significant events throughout Spacecraft & Observatory I&T; and c) designed comprehensive, yet short enough to be executed multiple times (e.g., the PACE CPT was specifically designed to be 4-5 days). The multi-pass DiTL test can demonstrate nominal operational procedures/processes and, when executed prior to the pre-environmental CPT, can be the basis for the instrument functionals during the environmental cycles and post environmental functional checkouts of the instruments."
7. Software Assurance
4.5.2 The project manager shall establish and maintain:
a. Software test plan(s).
b. Software test procedure(s).
c. Software test(s), including any code specifically written to perform test procedures.
d. Software test report(s).
7.1 Tasking for Software Assurance
For part a:
1. Confirm that software test plans have been established, contain correct content, and are maintained.
2. Confirm that the software test plan addresses the verification of safety-critical software, specifically the off-nominal scenarios.
For part b:
1. Confirm that the software test procedures address:
a. Coverage of the software requirements.
b. Acceptance or pass/fail criteria.
c. The inclusion of operational and off-nominal conditions, including boundary conditions.
d. Requirements coverage and hazards per SWE-066 and SWE-192, respectively.
e. Requirements coverage for cybersecurity per SWE-157 and SWE-210.
For part c:
1. Confirm that the project creates and maintains any code specifically written to perform test procedures in a software configuration management system.
For part d:
1. Confirm that the project creates and maintains the test reports throughout software integration and test.
2. Confirm that the project records the test report data and that the data contains the as-run test data, the test results, and required approvals.
3. Confirm that the project records all issues and discrepancies found during each test.
7.2 Software Assurance Products
This enhanced guidance provides a structured framework for software assurance (SA) activities, ensuring robust validation and verification of test artifacts, procedures, and results across the software lifecycle. The goal of this guidance is to emphasize traceability, safety, and continuous quality improvement while reducing risks through proactive oversight of testing processes.
Software assurance contributes to ensuring that test plans, procedures, and reports meet project and safety objectives. Below are key SA deliverables and corresponding responsibilities for each stage of the test lifecycle:
1. Test Plan Review and Confirmation
Correct Test Plan Content:
- Ensure that test plans include all applicable content as specified in NPR 7150.2 Guidance and 7.18 - Documentation Guidance:
- Objectives, scope, and strategy of testing.
- Traceability to software requirements, including safety-critical requirements.
- Coverage of operational, off-nominal, boundary, and failure scenarios.
- Confirm updates to the test plan as requirements or project objectives evolve.
Safety-Critical Requirements:
- Verify that the plan addresses all safety-critical requirements and hazard controls.
- Ensure test objectives cover operational safety scenarios as well as failure detection, mitigation, and recovery.
Peer Review Results:
- Assess the results of test plan peer reviews to identify deficiencies.
- Track and confirm that all issues and corrective actions associated with peer reviews have been resolved.
Evidence of Approval:
- Provide formal evidence (signatures, approvals, or documented assessments) verifying that the software assurance team has approved the test plans.
2. Test Procedure Review and Maintenance
Established and Maintained Procedures:
- Confirm that test procedures are developed, maintained, and updated as tests, requirements, or designs change through the software lifecycle.
- Verify that procedures align with updates to software safety analyses, hazard reports, or other critical project documentation.
Identify Issues During Procedure Peer Reviews:
- Participate in and review peer evaluations of test procedures to identify potential deficiencies or discrepancies.
- Ensure corrective actions for any identified issues are implemented and documented.
Procedure Attributes:
- Review and analyze test procedures for:
- Coverage: Ensure procedures encapsulate all software requirements (functional, interface, boundary, and safety-related).
- Pass/Fail Criteria: Confirm the presence of clear and measurable evaluation criteria for each test.
- Scenario Testing: Validate that procedures address:
- Normal operational conditions.
- Off-nominal and boundary conditions.
- Stress, performance, and fault recovery scenarios.
- Traceability: Confirm test procedures link explicitly to software requirements, design elements, hazard reports, and system-level tests.
Traceability of Requirements to Procedures:
- Use and validate traceability matrices to ensure all software requirements are covered by appropriate tests.
- Pay special attention to requirements derived from safety analyses (e.g., fault-tree analysis, hazard reports) and verify that hazard controls are rigorously tested.
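To illustrate the traceability check described above, here is a minimal sketch assuming the requirement-to-test links can be exported as a simple mapping; the requirement and test-case identifiers are hypothetical.

```python
# Minimal sketch: flag requirements that have no linked test cases.
def find_untraced(requirements: set, trace: dict) -> set:
    """Return the requirements with no associated test cases."""
    return {req for req in requirements if not trace.get(req)}

if __name__ == "__main__":
    requirements = {"SRS-101", "SRS-102", "HZD-CTRL-7"}   # hypothetical identifiers
    trace = {
        "SRS-101": ["TC-001", "TC-002"],
        "SRS-102": ["TC-003"],
        # "HZD-CTRL-7" intentionally left unmapped to show the gap report
    }
    missing = find_untraced(requirements, trace)
    if missing:
        print("Requirements with no test coverage:", ", ".join(sorted(missing)))
```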
3. Test Execution Monitoring and Assessment
Assessment of Test Status:
- Continuously monitor testing progress and compliance with the test plan.
- Analyze and report on the status of test execution, highlighting test completion rates, anomalies identified, and corrective actions taken.
SA Role in Safety-Critical Testing:
- Witness safety-critical tests to ensure adherence to test procedures and evaluate the proper functioning of hazard controls.
- Focus on fault detection, isolation, recovery mechanisms, and the software’s performance under concurrent hardware or software failures.
Approval of Test Reports:
- Where required (e.g., for safety-critical software), provide formal approval of test reports after verifying:
- All test objectives were met.
- All discrepancies were adequately addressed.
- All safety-related requirements were verified.
- Ensure test results reflect an accurate assessment of the system’s quality and safety readiness.
4. Issues and Discrepancies Resolution
Identification of Testing Issues:
- Track and document issues, anomalies, and discrepancies identified during test planning, test execution, or peer reviews of test procedures.
- Analyze the root causes of testing issues and recommend corrective actions to prevent recurrence.
Types of Issues to Monitor:
- Non-Conformances: Software misbehavior during tests, such as deviation from expected results or unhandled exceptions.
- Safety Gaps: Missing or insufficient test cases for safety-critical requirements, hazard mitigations, or fault recovery.
- Requirement Inadequacies: Unclear, conflicting, or incomplete requirements leading to testing ambiguity.
- Test Procedure Deficiencies: Gaps, errors, or inconsistencies within the testing procedures or expected results.
Feedback and Continuous Improvement:
- Provide detailed issue summaries and corrective action recommendations to the project team for process improvement and future testing iterations.
5. Software Safety Testing
Validation of Safety Mechanisms:
- Ensure that the test plan and procedures validate the software’s fault detection, isolation, and recovery mechanisms as derived from safety analyses (e.g., PHA, FMEA, and fault-tree analysis).
- Confirm that testing encompasses:
- Interface robustness testing for hardware/software interactions.
- Multiple concurrent failure scenarios (e.g., simultaneous hardware and software faults).
- FDIR (Fault Detection, Isolation, and Recovery) operation under nominal, degraded, and failure conditions.
Unit and Component Testing for Safety Features:
- Confirm that safety features are tested at the unit level for both normal and unexpected inputs (e.g., out-of-sequence, malformed, or extreme data).
- Ensure test artifacts such as drivers, stubs, and simulations used for unit testing are maintained for future regression testing.
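Below is a minimal sketch of unit-level testing of a safety check against nominal, boundary, malformed, and extreme inputs, assuming pytest; the validation function and its limits are hypothetical illustrations.

```python
# Minimal sketch: exercising a hypothetical hazard-control check with unexpected inputs.
import math
import pytest

def command_is_safe(thrust_percent) -> bool:
    """Hypothetical hazard control: reject commands outside 0-100% or non-numeric values."""
    if isinstance(thrust_percent, bool) or not isinstance(thrust_percent, (int, float)):
        return False
    if math.isnan(thrust_percent) or math.isinf(thrust_percent):
        return False
    return 0.0 <= thrust_percent <= 100.0

@pytest.mark.parametrize("value, expected", [
    (50.0, True),            # nominal
    (0.0, True),             # lower boundary
    (100.0, True),           # upper boundary
    (-0.1, False),           # just below range
    (100.1, False),          # just above range
    (float("nan"), False),   # malformed numeric input
    (float("inf"), False),   # extreme input
    ("full", False),         # wrong type
    (None, False),           # missing value
])
def test_safety_check_handles_unexpected_inputs(value, expected):
    assert command_is_safe(value) is expected
```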
6. Reporting and Documentation
Test Results Analysis:
- Analyze the outcome of tests and summarize findings in detailed, actionable reports.
- Develop a categorized list of issues and discrepancies observed during testing to inform management and future projects.
Maintain Artifacts:
- Ensure all test-related documentation (plans, procedures, reports, etc.) is up to date and accurately reflects the current software baseline.
- Provide ongoing oversight during updates caused by changes in requirements, design, or implementation.
7.3 Software Assurance Metrics
Software assurance should utilize metrics to monitor and improve the effectiveness of testing activities. These metrics provide a quantifiable basis for assessing progress, identifying trends, and implementing data-driven decisions.
Recommended Metrics Categories:
Test Coverage Metrics:
- Total number of requirements versus completed tests.
- Number of safety-critical tests executed versus those witnessed by SA.
- Detailed requirements tested versus total detailed requirements.
Discrepancy Metrics:
- Types and severity of issues identified during testing.
- Open versus closed non-conformances, with time to closure.
Risk and Non-Conformance Metrics:
- Risks or non-conformances related to test code or test procedures.
Process Trends:
- Trends in test outcomes (e.g., pass/fail percentages over time).
- Open versus closed action items, risks, and non-conformances.
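As a simple illustration of how such metrics can be derived from raw counts, the sketch below computes a few of them; the input numbers are hypothetical placeholders.

```python
# Minimal sketch: computing a few suggested assurance metrics from raw counts.
def coverage_percent(completed: int, total: int) -> float:
    return 100.0 * completed / total if total else 0.0

def closure_percent(closed: int, opened: int) -> float:
    return 100.0 * closed / opened if opened else 100.0

if __name__ == "__main__":
    print(f"Requirements coverage: {coverage_percent(182, 200):.1f}%")   # tests completed vs. total requirements
    print(f"SA-witnessed safety-critical tests: {coverage_percent(14, 16):.1f}%")
    print(f"Non-conformance closure: {closure_percent(37, 45):.1f}%")    # closed vs. opened
```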
See Also: Topic 8.18 - SA Suggested Metrics.
7.4 Software Assurance Activities and Reviews
Software assurance personnel should perform the following at each stage of the software lifecycle:
Preliminary Design Review (PDR):
- Confirm the test plan has been started and includes placeholders for safety-critical scenarios.
Critical Design Review (CDR):
- Assess that test procedures have been started, align with system requirements, and address operational, off-nominal, and boundary conditions.
Implementation and Beyond:
- Monitor updates to test plans and procedures as requirements, hazards, or designs change.
- Actively witness critical tests, validate results, and ensure traceability to requirements.
Post-Test Activities:
- Verify that corrective actions are completed and regression tests are re-executed for modified software.
By embedding software assurance throughout the test lifecycle and prioritizing traceability, analysis, and safety, NASA can ensure the successful verification and validation of software systems while maintaining the highest standards for mission safety and quality.
See also SWE-023 - Software Safety-Critical Requirements.
7.5 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:
8. Objective Evidence
To ensure compliance with Requirement 4.5.2, objective evidence should be demonstrable, measurable, and documented to substantiate that all software testing activities—including test plans, test procedures, actual tests, and reports—are being created, maintained, and utilized effectively. Below is a detailed breakdown of the required objective evidence that addresses each aspect of the requirement:
This requirement involves the creation and upkeep of the following software testing artifacts, with documentation and records serving as tangible, auditable proof of compliance.
1. Test Plans
Evidence:
Test Plan Document:
- A finalized and version-controlled document that outlines the scope, objectives, testing phases, environments, resources, risk mitigations, and schedules.
- The document must include traceability to software requirements and a rationale for safety considerations.
- Reference: Include document revision control logs or an artifact repository record (e.g., showing "baselined at CDR" with updates tracked).
Approval Record:
- Evidence of approval/signoff by:
- Project manager or technical lead.
- Software assurance or quality assurance teams (SA/QA).
- Independent reviewers (if required for safety or criticality).
Peer Review Artifacts:
- Peer review reports or meeting minutes, including:
- List of attendees.
- Anomalies, action items, and resolutions (e.g., tracked through change request systems).
Content Coverage Analysis:
- Records demonstrating that the test plan covers:
- Functional and non-functional requirements (e.g., performance, safety, boundary conditions, fault recovery, off-nominal scenarios).
- Verification of safety-critical components and hazardous conditions (linked to system-level hazard analyses).
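As a hedged example of how the content coverage analysis above could be supported by a simple automated check, the sketch below compares the requirement IDs cited in a test plan against the requirements baseline and flags gaps in either direction. The requirement identifiers and the in-memory representation are assumptions, not a format prescribed by the Handbook.

```python
# Hypothetical coverage check; requirement IDs are invented for illustration.
baseline_requirements = {"SRS-101", "SRS-102", "SRS-103", "SRS-104"}
requirements_cited_in_plan = {"SRS-101", "SRS-103"}

untested = sorted(baseline_requirements - requirements_cited_in_plan)
unknown = sorted(requirements_cited_in_plan - baseline_requirements)

if untested:
    print("Requirements not covered by the test plan:", ", ".join(untested))
if unknown:
    print("Plan references requirements not in the baseline:", ", ".join(unknown))
```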
2. Test Procedures
Evidence:
Test Procedure Document:
- Detailed instructions covering the steps, configurations, inputs, and pass/fail criteria for the execution of test cases.
- The procedures should identify the test artifacts used by each test case, with each case traced to the software requirements it verifies.
Traceability Matrix:
- A matrix linking software test cases and procedures to:
- Software requirements (functional, safety-critical, interface, and performance).
- Hazard controls (for systems with hazard reports).
Dry Run Records:
- Logs, test data, or engineer-noted outcomes of dry-run executions, with evidence of updates/refinements based on pre-execution findings.
Configuration Management Evidence:
- Maintenance records showing test procedures are updated (e.g., after changes in requirements, design, or implementation). Include version-controlled revisions in configuration tools.
Approval and Audit Records:
- Records of SA/QA signoff for updated or revised procedures after periodic reviews or significant changes.
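A traceability matrix of the kind described above can be maintained as simple structured data and queried in both directions. The sketch below, with invented requirement, hazard-control, and test-case identifiers, shows one possible representation and a reverse lookup from requirements to the test cases that verify them; it is an assumption about tooling, not a required format.

```python
# Hypothetical traceability records linking test cases to requirements
# and hazard controls; all identifiers are invented for illustration.
trace = [
    {"test_case": "TC-001", "requirements": ["SRS-101"], "hazard_controls": []},
    {"test_case": "TC-002", "requirements": ["SRS-102", "SRS-103"], "hazard_controls": ["HR-07"]},
]

# Build the reverse view: which test cases verify each requirement?
by_requirement: dict[str, list[str]] = {}
for row in trace:
    for req in row["requirements"]:
        by_requirement.setdefault(req, []).append(row["test_case"])

for req, cases in sorted(by_requirement.items()):
    print(f"{req} is verified by: {', '.join(cases)}")
```

Exporting such a structure from the project's requirements or test-management tool provides auditable evidence that every requirement and hazard control maps to at least one procedure.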
3. Test Execution and Data
Evidence:
Test Execution Logs:
- Timestamped and version-controlled records of actual test runs, detailing:
- Test case ID (from the procedure or matrix).
- Input data, output results, and observed anomalies.
- Test environment configuration (platform, simulator, hardware, software versions).
Automation Evidence (if applicable):
- Logs or screenshots from automated test frameworks/tools (e.g., Jenkins, Selenium, or Python-generated test results).
- Evidence of compliance with automated regression test cycles.
Test Monitoring and Witness Checklists:
- Recorded observations from software assurance personnel or third-party witnesses confirming:
- Tests were executed per the approved procedures.
- Safety-critical tests were executed in the specified operational or simulation environment(s).
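The execution records described above can be captured as structured, timestamped entries that are easy to place under configuration control. The JSON layout below is only a sketch of one possible format; every field name, value, and the newline-delimited log file are assumptions for illustration.

```python
import json
from datetime import datetime, timezone

# Hypothetical execution-log entry; the schema is illustrative only.
log_entry = {
    "test_case_id": "TC-002",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "inputs": {"command": "MODE_SAFE", "rate_hz": 10},
    "observed_output": "transitioned to SAFE within 120 ms",
    "result": "pass",
    "anomalies": [],
    "environment": {
        "platform": "flight-software simulator",
        "software_version": "v2.3.1",
        "hardware": "engineering model",
    },
}

# Append to a version-controlled log file (newline-delimited JSON).
with open("test_execution_log.jsonl", "a", encoding="utf-8") as fh:
    fh.write(json.dumps(log_entry) + "\n")
```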
4. Test Reports
Evidence:
Software Test Reports:
- Formal, version-controlled artifacts documenting the following:
- Test cases executed and their outcomes (pass/fail criteria met).
- Summary of discrepancies, root causes, corrective actions, and resolutions.
- Coverage analysis (e.g., percentage of requirements tested, untested requirements, gaps, and mitigations).
Metrics or Performance Analysis:
- Data-driven evidence of test effectiveness, such as:
- Tests executed versus total planned tests.
- Requirements tested versus total requirements.
- Defect density trends over time to demonstrate improvement.
- Compliance with SWE-191: Regression Testing metrics for modified code/components.
Approvals:
- Test reports signed off by SA/QA with evidence showing their verification of accuracy and completeness.
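As a hedged sketch of how the report metrics above might be derived, the snippet below rolls pass/fail outcomes and open anomalies from execution logs into a summary suitable for a test report. It reads the hypothetical newline-delimited log format used earlier in this section; the planned-test total and file name are likewise assumptions.

```python
import json
from collections import Counter

# Read the hypothetical newline-delimited execution log written earlier.
with open("test_execution_log.jsonl", encoding="utf-8") as fh:
    entries = [json.loads(line) for line in fh]

outcomes = Counter(e["result"] for e in entries)
open_anomalies = [a for e in entries for a in e["anomalies"]]

executed = len(entries)
planned = 40  # assumed total from the test plan
print(f"Tests executed: {executed}/{planned} ({executed / planned:.0%} of plan)")
print(f"Outcomes: {outcomes['pass']} passed, {outcomes['fail']} failed")
print(f"Open anomalies carried into the report: {len(open_anomalies)}")
```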
5. Safety Assurance Evidence
For safety-critical software, additional evidence is required to verify that hazardous scenarios or off-nominal events have been considered and tested adequately:
Evidence:
Hazard Traceability:
- A traceability matrix showing links between:
- System-level hazards (from hazard reports) and safety-related software requirements.
- Safety-related software requirements and corresponding test cases/procedures.
Off-Nominal Test Logs:
- Results of fault-injection testing, stress/stability testing, and boundary testing to confirm:
- System responses to failures and hazardous states (e.g., power loss, invalid inputs, timing issues).
- Validation of fault detection, isolation, and recovery (FDIR) mechanisms.
Witness Checklist from Hazard Testing:
- Documentation indicating hazardous tests were witnessed and validated (SWE-194) by software assurance or safety experts.
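Off-nominal and fault-injection testing of the kind listed above is frequently scripted so that the evidence is repeatable. The sketch below is a hypothetical pytest-style case that injects an out-of-range sensor value and checks that a stand-in FDIR routine isolates the fault; `classify_sensor_reading` is defined here only so the example runs and does not represent any project's actual fault-handling code.

```python
# Hypothetical fault-injection test; `classify_sensor_reading` stands in for
# the project's real FDIR logic and is defined here only so the example runs.
def classify_sensor_reading(value_kelvin: float) -> str:
    if value_kelvin < 0 or value_kelvin > 400:
        return "FAULT_ISOLATED"  # out-of-range reading rejected, not propagated
    return "NOMINAL"


def test_out_of_range_reading_is_isolated():
    # Inject an impossible temperature and expect the fault path, not a crash.
    assert classify_sensor_reading(-50.0) == "FAULT_ISOLATED"


def test_nominal_reading_passes_through():
    assert classify_sensor_reading(293.0) == "NOMINAL"
```

Archived results from such tests, together with the witness checklists noted above, provide the traceable evidence that FDIR behavior was exercised.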
6. Updates and Maintenance Evidence
Evidence:
Configuration Management and Change Logs:
- Documented updates to test plans, procedures, and reports after any changes to requirements, designs, or code (SWE-071 compliance).
- Historical logs showing:
- What was changed.
- Why the change was necessary (link to defect reports, design updates, or requirement changes).
- When and who approved the changes.
Revised Test Artifacts:
- Updated versions of test documents after defect resolution or corrective actions.
- Records of re-executed test cases (e.g., regression tests) after code updates with the revised test results.
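One way to keep the change history described above auditable is to record each update as a structured entry capturing what changed, why, when, and who approved it. The sketch below uses a simple dataclass; the artifact names, change-request IDs, and field layout are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestArtifactChange:
    """Hypothetical change-log entry for a test plan, procedure, or report."""
    artifact: str              # what was changed
    reason: str                # why (link to defect report, design update, etc.)
    approved_by: str           # who approved the change
    approved_on: date          # when it was approved
    linked_items: list[str] = field(default_factory=list)  # related CRs/DRs


change = TestArtifactChange(
    artifact="STP-001 Software Test Plan, rev C",
    reason="Requirement SRS-102 updated after CDR action item",
    approved_by="Project Manager",
    approved_on=date(2024, 3, 15),
    linked_items=["CR-0042", "DR-0117"],
)
print(change)
```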
7. Metrics-Based Monitoring Evidence
Evidence:
Metrics provide a quantifiable view of software progress, quality, and risks. Examples of evidence include:
Requirements Coverage:
- Charts or reports tracking:
- Number of requirements tested versus total number of requirements.
- Percentage of safety-critical requirements executed successfully.
Defect and Anomaly Trends:
- Reports or plotted graphs showing:
- Non-conformances (open/closed) over time.
- Severity and resolution timeframes for defects.
- Defect density trends (e.g., issues per test case or line of code).
Testing Progress:
- Summary reports of completed versus planned tests.
- Execution breakdowns across testing levels (unit, subsystem, integration, and acceptance).
Safety Metrics:
- Number of safety-critical tests executed versus witnessed.
- Non-conformances in safety mitigations or fault-handling procedures.
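As a small, hedged example of the defect and anomaly trend evidence above, the sketch below computes open versus closed non-conformances and mean time to closure from a set of records; the IDs, dates, and record layout are assumptions for illustration only.

```python
from datetime import date

# Hypothetical non-conformance records; dates and IDs are illustrative.
nonconformances = [
    {"id": "NC-01", "opened": date(2024, 1, 10), "closed": date(2024, 1, 24)},
    {"id": "NC-02", "opened": date(2024, 2, 2),  "closed": None},
    {"id": "NC-03", "opened": date(2024, 2, 20), "closed": date(2024, 3, 5)},
]

closed = [nc for nc in nonconformances if nc["closed"] is not None]
open_items = [nc for nc in nonconformances if nc["closed"] is None]
closure_days = [(nc["closed"] - nc["opened"]).days for nc in closed]

print(f"Open: {len(open_items)}, Closed: {len(closed)}")
if closure_days:
    print(f"Mean time to closure: {sum(closure_days) / len(closure_days):.1f} days")
```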
8. Tools and Repository Artifacts
Evidence:
Test Repository Records:
- Screenshots or exports from tools like JIRA, TestRail, or equivalent (if applicable), showing:
- Test plans, procedures, execution logs, and approvals in one central repository.
- Issue and action item tracking linked to test artifacts.
Automation Artifacts:
- Logs from automation frameworks for regression testing and repeated execution of test scripts.
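As one illustration of retaining automation artifacts, the sketch below runs a regression suite with pytest and archives its JUnit-style XML output and console log under timestamped names. The use of pytest, the `tests/` directory, and the `test_artifacts/` archive location are assumptions about the project's tooling, not requirements.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

# Assumed locations; adjust for the project's actual repository layout.
archive_dir = Path("test_artifacts")
archive_dir.mkdir(exist_ok=True)

stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
report_path = archive_dir / f"regression_{stamp}.xml"

# pytest's --junitxml option writes machine-readable results for archiving.
completed = subprocess.run(
    ["pytest", "tests/", f"--junitxml={report_path}"],
    capture_output=True,
    text=True,
)

# Keep the console log alongside the XML report for traceability.
(archive_dir / f"regression_{stamp}.log").write_text(completed.stdout + completed.stderr)
print(f"Regression run exit code {completed.returncode}; artifacts in {archive_dir}/")
```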
Summary of Objective Evidence Types
| Artifact | Example Evidence |
|---|---|
| Test Plan | Baseline document, version control logs, review approval reports. |
| Test Procedure | Detailed test steps, traceability matrices, peer review comments. |
| Test Results | Execution logs, automation logs, safety-critical test records with traces to requirements. |
| Test Reports | Signed-off reports, discrepancy analysis summaries, metrics tracking reports. |
| Safety Artifacts | Traceability to hazard reports, fault injection results, boundary/stress test reports. |
| Maintenance Artifacts | Configuration change logs, newly updated artifacts reflecting requirements or design changes. |
| Metrics | Reports showing testing trends, requirements/test coverage, defect closure rates, and safety-critical test data. |
By maintaining this body of evidence, compliance with Requirement 4.5.2 can be demonstrated effectively, ensuring thorough verification and validation of mission-critical software.


