- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. Requirements
4.5.2 The project manager shall establish and maintain:
- Software test plan(s).
- Software test procedure(s).
- Software test report(s).
1.1 Notes
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 History
1.3 Applicability Across Classes
2. Rationale
Having plans and procedures in place ensures that all necessary and required tasks are performed and performed consistently. The development of plans and procedures provides the opportunity for stakeholders to give input and assist with the documentation and tailoring of the planned testing activities to ensure the outcome will meet the expectations and goals of the project. Test reports ensure that results of verification activities are documented and stored in the configuration management system for use in acceptance reviews or readiness reviews.
Having test plans, procedures, and reports follow templates promotes consistency of documents across projects, supports proper planning, ensures that the planned activities and their results are captured, and helps prevent repeating problems of the past.
3. Guidance
Projects create test plans, procedures, and reports following the content recommendations in topic 7.18 - Documentation Guidance.
The objective of software test procedures is to perform software testing in accordance with the following guidelines:
- Software testing is performed to demonstrate to the project that the software requirements have been met, including all interface requirements.
- If a software item is developed in multiple builds, its software testing will not be completed until the final build for the software item, or possibly until later builds involving items with which the software item is required to interface. Software testing in each build is interpreted to mean planning and performing the test of the current build of each software item to ensure that the software item requirements to be implemented in that build have been met. 478
Independence in software item testing
For Class A, B, and safety-critical Class C software, the person(s) responsible for software testing of a given software item should not be the person(s) who performed detailed design, implementation, or unit testing of the software item. This does not preclude persons who performed detailed design, implementation, or unit testing of the software item from contributing to the process, for example by contributing test cases that rely on knowledge of the software item's internal implementation. 478
Software Test Procedure Development Guidelines
The project should “establish test cases (in terms of inputs, expected results, and evaluation criteria), test procedures, and test data for testing the software.” 478 The test cases and test procedures should cover the software requirements and design, including, as a minimum:
- Correct execution of all interfaces (including between software units), statements, and branches.
- All error and exception handling.
- All software unit interfaces, including limits and boundary conditions.
- End-to-end functional capabilities.
- Performance testing, operational input and output data rates, and timing and accuracy requirements.
- Stress testing and worst case scenario(s).
- Fault detection, isolation, and recovery handling.
- Resource utilization.
- Hazard mitigations.
- Start-up, termination, and restart (when applicable).
- All algorithms.
Legacy reuse software should be tested for all modified reuse software, for all reuse software units where the track record indicates potential problems, and for all critical reuse software components, even if the reuse software component has not been modified. 478
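For illustration, the following sketch (not from the Handbook; the function under test, its limits, and the test framework choice are assumptions) shows how test cases can capture inputs, expected results, and evaluation criteria while exercising nominal values, boundary conditions, and error handling:

```python
# Illustrative sketch only: the unit under test (compute_thruster_duty_cycle)
# and its limits are hypothetical, not taken from the Handbook or any project.
import pytest

def compute_thruster_duty_cycle(commanded_thrust, max_thrust=100.0):
    """Hypothetical unit under test: returns a duty cycle in [0.0, 1.0]."""
    if commanded_thrust < 0 or commanded_thrust > max_thrust:
        raise ValueError("commanded thrust out of range")
    return commanded_thrust / max_thrust

# Each case records the input, the expected result, and (via the tolerance
# below) the evaluation criterion.
NOMINAL_AND_BOUNDARY_CASES = [
    (0.0, 0.0),      # lower boundary
    (50.0, 0.5),     # nominal mid-range value
    (100.0, 1.0),    # upper boundary
]

@pytest.mark.parametrize("commanded, expected", NOMINAL_AND_BOUNDARY_CASES)
def test_nominal_and_boundary(commanded, expected):
    # Pass/fail criterion: result within a fixed tolerance of the expected value.
    assert compute_thruster_duty_cycle(commanded) == pytest.approx(expected, abs=1e-9)

@pytest.mark.parametrize("commanded", [-0.1, 100.1])
def test_error_handling_out_of_range(commanded):
    # Off-nominal inputs must be rejected, exercising the error/exception path.
    with pytest.raises(ValueError):
        compute_thruster_duty_cycle(commanded)
```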
All software testing should follow the defined test cases and procedures.
“Based on the results of the software testing, the developer [should] make all necessary revisions to the software, perform all necessary retesting, update the SDFs and other software products as needed... Regression testing ... [should] be performed after any modification to previously tested software.” 478
Ensure the test rig configuration is sufficient for the planned testing (e.g., sufficient real hardware is included).
Testing on the target computer system
Software testing should be performed using the target hardware. The target hardware used for software qualification testing should be as close as possible to the operational target hardware and should be in a configuration as close as possible to the operational configuration. 478 (see SWE-073) Typically, a high-fidelity simulation has the same processor, processor performance, timing, memory size, and interfaces as the target system.
Software Assurance Witnessing
The software test procedure developer should “dry run the software item test cases and procedures to ensure that they are complete and accurate and that the software is ready for witnessed testing. The developer should record the results of this activity in the appropriate SDFs and should update the software test cases and procedures as appropriate.” 478
Formal and acceptance software testing is witnessed by software assurance personnel to verify satisfactory completion and outcome. Software assurance is required to witness or review/audit the results of software testing and demonstration.
Software Test Report Guidance
The software tester is required to analyze the results of the software testing and record the test and analysis results in the appropriate test report.
Ensure that the data captured is analyzed.
Software Test Documentation Maintenance
Once these documents are created, they need to be maintained to reflect the current project status, progress, and plans, which will change over the life of the project. When requirements change (SWE-071), test plans, procedures, and the resulting test reports may also need to be updated or revised to reflect the changes. Changes to test plans and procedures may result from:
- Inspections/peer reviews of documentation.
- Inspections/peer reviews of code.
- Design changes.
- Code maturation and changes (e.g., code changes to correct bugs or problems found during testing, interfaces revised during development).
- Availability of relevant test tools that were not originally part of the test plan (e.g., tools freed up from another project, funding becomes available to purchase new tools).
- Updated software hazards and mitigations (e.g., new hazards identified, hazards eliminated, mitigations are added or revised).
- Execution of the tests (e.g., issues found in test procedures).
- Test report/results analysis (e.g., incomplete, insufficient requirements coverage).
- Changes in test objectives or scope.
- Changes to schedule, milestones, or budget changes.
- Changes in test resource numbers or availability (e.g., personnel, tools, facilities).
- Changes to software classification or safety criticality (e.g., a research project not intended for flight becomes destined for use on the ISS (International Space Station)).
- Process improvements relevant to test activities.
- Changes in the project that affect the software testing effort.
Just as the initial test plans, procedures, and reports require review and approval before use, the project team ensures that updates are also reviewed and approved following project procedures.
Maintaining accurate and current test plans, procedures, and reports continues into the operations and maintenance phases of a project.
NASA users should consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources related to the test plan, test procedures, and test reports, including templates and examples.
NASA-specific test documentation information and resources are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.
Additional guidance related to the test plan, test procedures, and test reports may be found in related requirements in this Handbook.
4. Small Projects
No additional guidance is available for small projects.
5. Resources
5.1 References
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-209) IEEE Computer Society, IEEE Std 1012-2012 (Revision of IEEE Std 1012-2004), This link requires an account on the NASA START (AGCY NTSS) system (https://standards.nasa.gov ). Once logged in, users can access Standards Organizations, IEEE and then search to get to authorized copies of IEEE standards.
- (SWEREF-211) IEEE Computer Society, IEEE STD 1059-1993, 1993. NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards.
- (SWEREF-276) NASA-GB-8719.13, NASA, 2004. Access NASA-GB-8719.13 directly: https://swehb-pri.msfc.nasa.gov/download/attachments/16450020/nasa-gb-871913.pdf?api=v2
- (SWEREF-478) Aerospace Report No. TOR-2004(3909)-3537, Revision B, March 11, 2005.
- (SWEREF-561) Public Lessons Learned Entry: 1529.
- (SWEREF-573) Public Lessons Learned Entry: 2419.
- (SWEREF-579) Lessons Learned Entry: 991.
- (SWEREF-581) CAMS 10188. In NASA Engineering Network.
5.2 Tools
NASA users can find tool recommendations in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
6. Lessons Learned
6.1 NASA Lessons Learned
The NASA Lessons Learned database contains the following lessons learned related to insufficiencies in software test plans:
- Aquarius Reflector Over-Test Incident (Procedures should be complete.) Lesson Number 2419 573: Lessons Learned No. 1 states: "The Aquarius Reflector test procedure lacked complete instructions for configuring the controller software before the test." Lesson Learned No. 4 states: "The roles and responsibilities of the various personnel involved in the Aquarius acoustic test operations were not documented. This could lead to confusion during test operations."
- Planning and Conduct of Hazardous Tests Require Extra Precautions (2000-2001) (Special measures needed for potentially hazardous tests.) Lesson Number 0991 579: "When planning tests that are potentially hazardous to personnel, flight hardware or facilities (e.g., high/low temperatures or pressure, stored energy, deployables), special measures should be taken to ensure that:
- "Test procedures are especially well written, well organized, and easy to understand by both engineering and quality assurance personnel.
- "Known test anomalies that history has shown to be inherent to the test equipment or conditions (including their likely causes, effects, and remedies) are documented and included in pre-test training.
- "Readouts of safety-critical test control data are provided in an easily understood form (e.g., audible, visible, or graphic format).
- "Test readiness reviews are held, and test procedures require confirmation that GSE test equipment and sensors have been properly maintained.
- "Quality assurance personnel are present and involved throughout the test to ensure procedures are properly followed, including prescribed responses to pre-identified potential anomalies."
- Test plans should reflect proper configurations 581: "Testing of the software changes was inadequate at the Unit, Integrated and Formal test level. In reviewing test plans...neither had test steps where BAT06 and BAT04 were running concurrently in a launch configuration scenario. Thus no test runs were done with the ... program that would reflect the new fully loaded console configuration. Had the launch configuration scenarios been included in integrated and acceptance testing, this might have revealed the code timing problems."
- Ensure Test Monitoring Software Imposes Limits to Prevent Overtest (2003) (Include test monitoring software safety steps.) Lesson Number 1529 561: Recommendation No. 2 states: "Before the test, under the test principle of 'First, Do No Harm' to flight equipment, assure that test monitoring and control software is programmed or a limiting hardware device is inserted to prevent over-test under all conditions..."
6.2 Other Lessons Learned
No other Lessons Learned have currently been identified for this requirement.
7. Software Assurance
4.5.2 The project manager shall establish and maintain:
- Software test plan(s).
- Software test procedure(s).
- Software test report(s).
7.1 Tasking for Software Assurance
For requirement a:
- Confirm that software test plans have been established, contain correct content, and are maintained.
- Confirm that the software test plan addresses the verification of safety-critical software, specifically the off-nominal scenarios.
For requirement b:
- Confirm that test procedures have been established and are updated when changes to tests or requirements occur.
- Analyze the software test procedures for:
  a. Coverage of the software requirements.
  b. Acceptance criteria and pass/fail criteria for each test.
  c. Operational conditions, off-nominal conditions, and boundary conditions.
  d. Requirements coverage per SWE-066 and SWE-192.
For requirement c:
- Confirm that the project creates and maintains the test reports throughout software integration and test.
- Confirm that the project records the test report data and that the data contains the as-run test data, the test results, and required approvals.
- Confirm that the project records all issues and discrepancies found during each test.
- Confirm that the project tracks to closure the errors, defects, etc. found during testing.
7.2 Software Assurance Products
For 65a:
- Confirmations that test plans have correct content, including verification of safety-critical software, and are updated, as needed.
- Results of any peer reviews on the test plans, including any issues and corrective actions.
- Evidence that Software Assurance has approved or signed off on the software test plans.
For 65b:
- Evidence of confirmation that test procedures are established and maintained as tests or requirements change.
- Issues and corrective actions identified with the test procedures or during any test procedure peer reviews.
- Software Assurance analysis of the test procedure attributes listed in a through d.
For 65c:
- Software assurance assessment of project test status.
- SA approval for test reports, where required (e.g., safety-critical software).
- List of types of issues and discrepancies found during testing.
Objective Evidence
- Software test plan
- Software test procedures
- Software test reports
7.3 Metrics
For 65a:
- # of safety-related non-conformances identified by life-cycle phase over time
For 65b:
- # of Software Requirements (e.g. Project, Application, Subsystem, System, etc.)
- # of software requirements with completed test procedures over time
- # of Software Requirements being met via satisfactory testing vs. total # of Software Requirements
- # of Software Requirements without associated test cases
- # of software work product Non-Conformances identified by life-cycle phase over time
- # of safety-related requirement issues (Open, Closed) over time
- # of safety-related non-conformances identified by life-cycle phase over time
- # of Non-Conformances and risks open vs. # of Non-Conformances, risks identified with test procedures
- # of hazards with completed test procedures/cases vs. total number of hazards over time
- # of software requirements with completed test procedures/cases over time
- # of Non-Conformances identified when the approved, updated requirements are not reflected in test procedures
- # of Non-Conformances identified while confirming hazard controls are verified through test plans/procedures/cases
- # of Requirements tested successfully vs. total # of Requirements
- # of detailed software requirements tested to date vs. total # of detailed software requirements
- # of issues and risks/corrective actions open versus total # of issues and risks/corrective actions identified with test procedures.
For 65c:
- Total # of Non-Conformances over time (Open, Closed, # of days Open, and Severity of Open)
- # of Non-Conformances in the current reporting period (Open, Closed, Severity)
- # of Closed action items vs. # of Open action items
- # of software work product Non-Conformances identified by life-cycle phase over time
- Total # of tests completed vs. number of test results evaluated and signed off
- # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
- # of tests executed vs. # of tests successfully completed
- # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
- # of Requirements tested successfully vs. total # of Requirements
- # of tests successfully completed vs. total # of tests
- # of detailed software requirements tested to date vs. total # of detailed software requirements
- Trends of open versus closed problem/change reports over time.
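As an informal illustration of how some of the coverage-style metrics above can be computed (a sketch only; it assumes a hypothetical CSV export from the project's traceability or test management tool, and the column names are not a prescribed format):

```python
# Illustrative sketch only. Assumes a hypothetical CSV export with columns
# "requirement_id", "has_test_procedure" (yes/no), and "test_status"
# (passed/failed/not_run); real tools and exports will differ.
import csv

def coverage_metrics(path):
    total = with_procedures = tested_ok = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["has_test_procedure"].strip().lower() == "yes":
                with_procedures += 1
            if row["test_status"].strip().lower() == "passed":
                tested_ok += 1
    return {
        "requirements_total": total,
        "with_test_procedures": with_procedures,
        "tested_successfully": tested_ok,
        "procedure_coverage_pct": 100.0 * with_procedures / total if total else 0.0,
        "test_success_pct": 100.0 * tested_ok / total if total else 0.0,
    }

if __name__ == "__main__":
    # Hypothetical file name; substitute the project's actual export.
    print(coverage_metrics("requirements_trace.csv"))
```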
7.4 Guidance
Guidance for parts a and b:
Software assurance will confirm that software test plans are started during the preliminary design period and baselined at the end of CDR. Whenever changes occur that affect the test plan information, confirm that the test plan has been updated. Review the expected test plan contents in 7.18 - Documentation Guidance and assess that the expected contents have been included. Confirm that the test plan specifically addresses the coverage of hazard controls, particularly the off-nominal scenarios.
Software assurance will confirm that the software test procedures are started around the end of the CDR and are refined and updated to reflect changes in the requirements, design, or software through implementation. Any updates to requirements, design, safety, or software changes may cause changes in the test procedures. Software assurance should confirm that the expected content for test procedures is included in the project test procedures, using the guidance for test procedure content in 7.18 - Documentation Guidance.
Software assurance will assess the test procedures for the following:
- Coverage of the software requirements (See the chart for recommended coverage in the software guidance for SWE-189)
- Acceptance criteria for the test procedures; pass/fail criteria for each test
- Operational conditions, off-nominal conditions as well as boundary conditions are tested
- Requirements coverage as per SWE-066 and SWE-192
Software assurance personnel will want to use the traceability matrices to help determine whether the tests are defined to cover the requirements and whether the safety aspects are adequately covered. As explained below, some of the safety-related software can only be tested at a unit or component level, so software assurance will want to check whether that has been considered in the testing. Also, often the safety requirements are found in a hazard report or safety plan and need to be included in the test planning.
Traceability is a link or definable relationship between two or more entities. Requirements are linked from their more general form (e.g., the system specification) to their more explicit form (e.g., subsystem specifications). They are also linked forward to the design, source code, and test cases. Many software safety-related hazard events, conditions, causes, controls, or mitigations are derived from multiple sources (the system safety analysis, risk assessments, or organizational, facility, vehicle, or system-specific generic hazards). The hazard reports (HRs) need to be updated as those sources change and mature. Also, the resulting software requirements linked to those HRs need to be maintained and updated as needed. Changes to the software related to HRs also need to be fed back into the HRs so that changes are reflected in both directions.
Tracing requirements is a vital part of system verification and validation, especially in safety verifications. Full requirements test coverage is virtually impossible without some form of requirements traceability. Tracing also provides a way to understand and communicate the impact on the system of changing requirements or modification of software elements. A tracing system can be as simple as a spreadsheet or as complicated as an automatic tracing tool.
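For example, a spreadsheet-level trace can be checked for requirements that have no linked test cases. This is a minimal sketch; the requirement and test case identifiers are hypothetical placeholders for whatever the project's spreadsheet or tracing tool actually exports:

```python
# Minimal sketch of a forward-trace check: flag requirements with no linked
# test cases. The identifiers and the safety naming convention are assumptions.
requirement_to_tests = {
    "SRS-101": ["TC-001", "TC-002"],
    "SRS-102": [],                     # no test case linked yet
    "SRS-103-SAFETY": ["TC-010"],
}

def untested_requirements(trace):
    return sorted(req for req, tests in trace.items() if not tests)

def untested_safety_requirements(trace):
    # Safety-related requirements (marked here by a naming convention, which is
    # an assumption) warrant a separate, explicit report.
    return [req for req in untested_requirements(trace) if "SAFETY" in req]

print("Requirements without test cases:", untested_requirements(requirement_to_tests))
print("Safety requirements without test cases:", untested_safety_requirements(requirement_to_tests))
```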
The relationship of software requirements to hazards, controls, conditions, and events is usually kept in the hazard or safety report as well as in the requirements traceability document, where the requirement(s) associated with safety-critical functions are traceable to the hazard report (HR), risk analyses, or Critical Items List (CIL). Enough detail is flowed down with the resulting safety-implicated requirement(s) to capture needed conditions, triggers, contingencies, etc. Tests need to be established for all these safety features.
Plans for unit and component testing also need to take into account the testing of safety features, controls, inhibits, mitigations, data and command exchanges, and execution at the unit or component level. Unit-level testing is often the only place where the software paths can be completely checked, both for the full range of expected inputs and for the response to wrong, out-of-sequence, or garbled inputs. The stubs and drivers, test suites, test data, models, simulations, and simulators used for unit testing are very important to capture and maintain for future regression testing and as proof of thorough safety testing. The reports of unit-level testing of safety-critical software components need to be thoroughly documented as well.
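As a minimal sketch of the stub-and-driver idea (the sensor interface, limits, and function names are hypothetical), a stub can stand in for the hardware interface so that a safety check's response to an out-of-range, off-nominal input can be exercised at the unit level:

```python
# Illustrative sketch: a stubbed sensor interface lets a unit-level test drive
# the safety check through off-nominal input without real hardware.
# All names and limits here are hypothetical.
from unittest.mock import Mock

def pressure_within_limits(sensor, low=10.0, high=90.0):
    """Unit under test: returns True only when the reading is inside safe limits."""
    reading = sensor.read_pressure()
    return low <= reading <= high

def test_over_pressure_is_rejected():
    stub_sensor = Mock()
    stub_sensor.read_pressure.return_value = 150.0   # off-nominal, out-of-range value
    assert pressure_within_limits(stub_sensor) is False

def test_nominal_pressure_is_accepted():
    stub_sensor = Mock()
    stub_sensor.read_pressure.return_value = 50.0
    assert pressure_within_limits(stub_sensor) is True
```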
Software safety testing will include verification and validation that the implemented fault and failure mode detection and recovery work as derived from the safety analyses, such as PHAs, subsystem hazard analyses, failure modes and effects analyses, and fault tree analyses. This can include software failures, hardware failures, interface failures, or multiple concurrent hardware failures. Fault detection, isolation, and recovery (FDIR) is often used in place of failure detection because software can detect and react to faults before they become failures. Refer to NPR 7150.2 (requirement SWE-134 in Revision C).
For part c:
Software assurance will review the test results and develop a list of the types of software issues and discrepancies discovered during software and system testing. The types of issues should be reported to management and the project as information for future improvement.
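One possible way to build such a list (a sketch only; the record fields and category names are assumptions, not a required format) is to tally the discrepancy records from the test reports by type:

```python
# Illustrative sketch: tally discrepancies recorded in test reports by type so
# the resulting list can be reported to management and the project.
# The record fields and category names are hypothetical.
from collections import Counter

test_report_issues = [
    {"id": "DR-041", "type": "requirements gap", "severity": "major"},
    {"id": "DR-042", "type": "procedure error", "severity": "minor"},
    {"id": "DR-043", "type": "coding defect", "severity": "major"},
    {"id": "DR-044", "type": "coding defect", "severity": "minor"},
]

def issue_type_summary(issues):
    """Return counts of issue types, most common first."""
    return Counter(issue["type"] for issue in issues).most_common()

for issue_type, count in issue_type_summary(test_report_issues):
    print(f"{issue_type}: {count}")
```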