- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. Requirements
4.5.3 The project manager shall test the software against its requirements.
1.1 Notes
A best practice for Class A, B, and C software projects is to have formal software testing planned, conducted, witnessed, and approved by an independent organization outside of the development team.
1.2 History
1.3 Applicability Across Classes
2. Rationale
Software testing is required to ensure that the software meets its agreed requirements and design, that the application works as expected and does not contain serious defects, and that the software fulfills its intended use per user expectations.
3. Guidance
Per section 3.2 of the IEEE 730-2014 IEEE Standard for Software Quality Assurance Processes 469, “software testing is an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.” Per the ISO/IEC TR 19759:2005 Software Engineering -- Guide to the Software Engineering Body of Knowledge (SWEBOK) 020, software testing is “the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behavior.”
The developer performs software testing to demonstrate to the acquirer that the software item requirements have been met, including the interface requirements. If the software item is developed in multiple builds, its software item qualification testing will not be completed until the final build for that software item. The persons responsible for qualification testing of a given software item should not be the persons who performed detailed design or implementation of the software item. This does not preclude persons who performed detailed design or implementation of the software item from contributing to the process.
Software testing is essential for the following reasons:
- Point out defects and errors made during development.
- Ensure the reliability of the application.
- Verify the quality of the product.
- Confirm the effective performance of the software application or product.
See also Topic 8.02 - Software Reliability, 5.10 - STP - Software Test Plan, and SWE-068 - Evaluate Test Results.
3.1 Code Coverage
One intent of software testing is to test all paths through the code (every decision, and every nominal and off-nominal path) by executing test cases. Code coverage metrics identify additional tests that need to be added to the test run. Code coverage tools monitor the paths the software executes and can be used during test runs to identify code paths that were not executed by any test. By analyzing these missed areas, tests can be identified and implemented to exercise the missed paths. Achieving 100% coverage is challenging because some off-nominal and hardware conditions are not possible or not advisable to exercise during a test run (e.g., radiation effects, hardware failures). Code coverage metrics can also identify sections of orphaned or unused code (dead code). See also Topic 8.01 - Off Nominal Testing and 8.19 - Dead / Dormant Code and Safety-Critical Software.
Measuring code coverage can require the code to be compiled and instrumented in a specific manner and then executed in the test environment so that the coverage achieved by the tests can be recorded. This instrumentation invalidates functional acceptance testing, so acceptance testing requires separate test runs without the instrumented code. Code coverage should be verified as part of the in-line development test schedule; coverage measurement of software-only unit tests minimally impacts that schedule and can be recorded as part of the code coverage metrics. Newer hardware-only coverage tools can provide the metrics without intrusive instrumentation of the code. See also SWE-062 - Unit Test.
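The instrumentation and reporting mechanics depend on the language and toolchain (e.g., compiler-based instrumentation for C/C++, interpreter tracing for Python). As a minimal sketch only, assuming a Python test suite in a hypothetical tests directory and the third-party coverage.py package, coverage can be collected and reported programmatically:

```python
# Minimal sketch: collect statement and branch coverage for a Python test run.
# Assumes the third-party "coverage" package (coverage.py) and a hypothetical
# "tests" directory of unittest cases; adapt the discovery path to the project.
import unittest
import coverage

cov = coverage.Coverage(branch=True)   # branch=True also records decision coverage
cov.start()

# Discover and run the project's unit tests while coverage is being recorded.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=2).run(suite)

cov.stop()
cov.save()

# Report code that no test executed; show_missing lists the uncovered lines.
cov.report(show_missing=True)
cov.html_report(directory="coverage_html")  # browsable report for later analysis
```

The missed lines and branches reported here are the starting point for identifying the additional tests described above.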
Consider using code coverage as a part of a project’s software testing metrics. Code coverage (also referred to as structural coverage analysis) is an important verification tool for establishing the completeness and adequacy of testing. Traceability between code, requirements, and tests is complemented by measuring the structural coverage of the code when the tests are executed. Where coverage is less than 100%, this points to:
- Code that is not traceable to requirements.
- Inadequate tests.
- Incomplete requirements.
- A combination of the above.
When using requirements-based testing, 100% code coverage means that, subject to the coverage criteria used, no code exists that cannot be traced to a requirement. For example, every function is traceable to a requirement (though individual statements within it may not be). What 100% code coverage does not mean is:
- The code is correct. The test cases, when aggregated, exercise every line of code, but this is not sufficient to show there are no bugs (see the sketch following this list). As long ago as 1969, Edsger Dijkstra noted that "testing shows the presence of bugs, not their absence"; in other words, just because testing doesn't show any errors, it doesn't mean they are not present.
- The software requirements are correct. This is determined through the validation of the requirements with the customer.
- 100% of the requirements have been tested. Merely achieving 100% code coverage is not enough; the requirements are fully verified only if the project also has a test for 100% of the requirements and every test passes.
- The compiler translated the code correctly. The compiler might be inserting errors that cause incorrect results in some situations (ones the project hasn’t tested for).
- 100% of the object code is covered. Even when all statements and conditions of the source code are being executed, the compiler can introduce additional structures into the object code.
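To illustrate the first point above, the following sketch (a hypothetical function and test, not project code) achieves 100% statement and branch coverage while still containing a defect, because no test exercises the failing input:

```python
# Hypothetical example: the single test below executes every statement and both
# branches of average_rate(), so a coverage tool reports 100%, yet the function
# still fails for samples == 0 (ZeroDivisionError). Coverage says nothing about
# inputs that were never tried.
def average_rate(total, samples):
    if samples < 0:
        raise ValueError("samples must be non-negative")
    return total / samples          # defect: samples == 0 is not handled


def test_average_rate():
    # Exercises the 'if' branch (negative case) and the normal return path.
    try:
        average_rate(10, -1)
    except ValueError:
        pass
    assert average_rate(10, 2) == 5


if __name__ == "__main__":
    test_average_rate()
    print("tests pass with full coverage; the defect is still present")
```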
Consider requiring code coverage tools for determining testing completeness; automated identification of untested code supports assessing test quality and provides metrics for determining test completeness.
See also SWE-189 - Code Coverage Measurements, SWE-190 - Verify Code Coverage, Topic 7.06 - Software Test Estimation and Testing Levels.
3.2 Additional Software Test Guidance
4.3.5 Testing
"Testing serves several purposes: to find defects; to validate the system or an element of the system; and to verify functionality, performance, and safety requirements. The focus of testing is often on the verification and validation aspects. However, defect detection is probably the most important aspect of testing. While you cannot test quality into the software, you can certainly work to remove as many defects as possible." 276
Software testing has many levels, including unit testing, integration testing, and system testing, including functionality, performance, load, stress, safety, and acceptance testing. While the development team typically performs unit testing, some testing, such as integration, system, or regression testing, may be performed by a separate and/or independent test group.
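As a small illustration of the unit level (a sketch using Python's standard unittest framework and a hypothetical limit-check function, not a prescribed format), a developer-written unit test typically exercises nominal, off-nominal, and boundary cases of a single function or module:

```python
# Minimal sketch of a unit-level test. The function under test and its name are
# hypothetical; higher levels of testing (integration, system) exercise the
# assembled software and are usually run by a separate test group.
import unittest


def within_limit(value, lower, upper):
    """Hypothetical function under test: True if value lies in [lower, upper]."""
    return lower <= value <= upper


class WithinLimitTest(unittest.TestCase):
    def test_nominal_value_inside_limits(self):
        self.assertTrue(within_limit(5.0, 0.0, 10.0))

    def test_off_nominal_value_outside_limits(self):
        self.assertFalse(within_limit(11.0, 0.0, 10.0))

    def test_boundary_values(self):
        self.assertTrue(within_limit(0.0, 0.0, 10.0))
        self.assertTrue(within_limit(10.0, 0.0, 10.0))


if __name__ == "__main__":
    unittest.main()
```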
Keep in mind that formal testing, such as acceptance testing, is witnessed by an external organization, such as software assurance (see NASA-STD-8739.8, Software Assurance and Software Safety Standard 278). See also Topic 8.13 - Test Witnessing.
4.3.5 Testing
"Some basic principles of testing are:
- All tests need to be traceable to the requirements, and all requirements need to be verified by one or more methods (e.g., test, demonstration, inspection, analysis).
- Tests need to be planned before testing begins. Test planning can occur as soon as the relevant stage has been completed. System test planning can start when the requirements document is complete.
- The "80/20" principle applies to software testing. In general, 80 percent of errors can be traced back to 20 percent of the components. Anything you can do ahead of time to identify components likely to fall in that 20 percent (e.g., high risk, complex, many interfaces, demanding timing constraints) will help focus the testing effort for better results.
- Start small and then integrate into the larger system. Finding defects deep in the code is difficult to do at the system level. Such defects are easier to uncover at the unit level.
- You can't test everything. However, a well-planned testing effort can test all parts of the system. Missing logic paths or branches may mean missing important defects, so test coverages need to be determined.
- Testing by an independent party is most effective. It is hard for developers to see their bugs. While unit tests are usually written and run by the developer, it is good to have a fellow team member review the tests. A separate testing group will usually perform the other tests. An independent viewpoint helps find defects, which is the goal of testing.
Scheduling testing phases is always an art and depends on the expected quality of the software product. Relatively defect-free software passes through testing within a minimal time frame. An inordinate amount of resources can be expended testing buggy software. Previous history, either of the development team or similar projects, can help determine how long testing will take. Some methods (such as error seeding and Halstead's defect metric) exist for estimating defect density (number of defects per unit of code) when historical information is not available." 276
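As a rough illustration of the error-seeding idea mentioned in the quoted material (a sketch of the concept, not a NASA-prescribed estimator), the detection ratio of deliberately seeded faults can be used to project how many indigenous faults remain:

```python
# Sketch of the classic error-seeding estimate: if testing finds a known fraction
# of faults that were deliberately seeded, assume the same detection ratio holds
# for the real (indigenous) faults. All numbers below are illustrative only.
def estimate_indigenous_faults(seeded, seeded_found, indigenous_found):
    if seeded_found == 0:
        raise ValueError("no seeded faults found; the estimate is undefined")
    detection_ratio = seeded_found / seeded
    return indigenous_found / detection_ratio


# Example: 20 faults seeded, 15 of them found, and 30 real faults found.
# Estimated total of ~40 real faults, so roughly 10 may remain after this phase.
print(estimate_indigenous_faults(seeded=20, seeded_found=15, indigenous_found=30))
```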
NASA-GB-8719.13, NASA Software Safety Guidebook 276, includes a chapter on testing with a focus on safety testing. Some general testing highlights of that chapter include:
- Software testing beyond the unit level (integration and system testing) is usually performed by someone other than the developer, except in the smallest teams.
- Normally, software testing ensures that the software performs all required functions correctly and can exhibit graceful behavior under anomalous conditions.
- Integration testing is often done in a simulated environment, and system testing is usually done on the actual hardware. However, hazardous commands or operations need to be tested in a simulated environment first.
- During testing, any problems discovered need to be analyzed and documented in discrepancy reports and summarized in test reports.
- Create and follow written test procedures for integration and system testing.
- Perform regression testing after each change to the system.
- Prepare a Test Report upon completion of a test.
- Verify that commercial-off-the-shelf (COTS) software operates as expected.
- Follow problem reporting and corrective action procedures when defects are detected.
- Perform testing either in a controlled environment, using a structured test procedure and monitoring of results, or in a demonstration environment where the software is exercised without interference.
- Analyze tests before use to ensure adequate test coverage.
- Analyze test results to verify that requirements have been satisfied and that all identified hazards are eliminated or controlled to an acceptable level of risk.
See also SWE-192 - Software Hazardous Requirements, SWE-193 - Acceptance Testing for Affected System and Software Behavior.
See also Topic 8.08 - COTS Software Safety Considerations.
Other useful practices include:
- Plan and document testing activities to ensure all required testing is performed. See SWE-065 - Test Plan, Procedures, Reports
- Have test plans, procedures, and test cases inspected and approved before use.
- Use a test verification matrix to ensure coverage of all requirements (see the sketch following this list).
- Consider dry running test procedures in offline labs with simulations before actual hardware/software integration tests.
- Consider various types of testing to achieve more comprehensive coverage. (See Software QA and Testing Frequently-Asked-Questions 207 or NASA-GB-8719.13, NASA Software Safety Guidebook 276, for a list with descriptions.)
- When time and resources are limited, identify areas of highest risk and set priorities to focus effort to achieve the greatest benefit with the available resources. (See Software QA and Testing Frequently-Asked-Questions 207 or NASA-GB-8719.13, NASA Software Safety Guidebook 276, for suggested risk analysis considerations.)
- As necessary and appropriate, include support from the software development and/or test team when performing formal testing of the final system. Support could include:
- Identifying system test requirements unique to software.
- Providing input for software to system test procedures.
- Providing software design documentation.
- Providing software test plans and procedures.
- Predefine verification/validation needed for all configuration data loads (CDLs)
- Predeclare configuration data load (CDL) values which are expected/allowed to change with associated nominal verification activities
- Any tests (formal or informal) which fail should be rerun and verified before software change tickets are closed, in the original environment, or as close to it as possible. Preferably this would be done with the original author of the software change ticket but with appropriate control board approval.
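As referenced in the list above, a test verification matrix can be as simple as a mapping from requirement identifiers to the test cases that verify them. The following sketch uses hypothetical identifiers and flags requirements that have no associated test case:

```python
# Minimal sketch of a requirements-to-test verification matrix. The requirement
# and test-case identifiers are hypothetical; the point is to expose requirements
# that are not yet covered by any test.
verification_matrix = {
    "SRS-001": ["TC-010", "TC-011"],   # requirement -> test cases that verify it
    "SRS-002": ["TC-012"],
    "SRS-003": [],                     # not yet covered by any test case
}

untested = [req for req, tests in verification_matrix.items() if not tests]

for req, tests in sorted(verification_matrix.items()):
    print(f"{req}: {', '.join(tests) if tests else 'NO TEST CASES'}")

print(f"requirements without test cases: {untested}")
```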
While NASA Centers typically have their own procedures and guidance, NASA-GB-8719.13, NASA Software Safety Guidebook 276, lists and describes the following types of testing, which need to be considered when planning any software test effort:
- Functional system testing.
- Stress testing.
- Stability tests.
- Resistance to failure testing.
- Compatibility tests.
- Performance testing.
The following chart shows a basic flow for software testing activities from planning through maintenance. Several elements of this flow are addressed in related requirements in this Handbook (listed in the table at the end of this section).
Tools that may be useful when performing software testing include the following non-exhaustive list. Each project needs to evaluate and choose the appropriate tools for the testing for that project.
- Software analysis tools.
- Reverse engineering, code navigation, metrics, and cross-reference tools.
- Debuggers.
- Compilers.
- Coding standards checkers.
- Memory management tools.
- Screen capture utilities.
- Serial interface utilities.
- Telemetry display utilities.
- Automated scripts.
- Etc.
See also SWE-070 - Models, Simulations, Tools, and Topic 7.15 - Relationship Between NPR 7150.2 and NASA-STD-7009.
3.3 Guidance for the Test Lead When Preparing for Testing
The TEST REVIEW CHECKLIST FOR TEST LEADS PAT-026 can be used to help a test lead determine whether the team is ready for testing and what processes and other preparations need to be in place before beginning testing.
See also Topic 8.57 - Testing Analysis, PAT-026 - Test Review Checklist For Test Leads, and PAT-027 - Test Review Checklist For Review Teams.
3.4 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:
3.5 Center Process Asset Libraries
SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki 197
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).
4. Small Projects
Software testing is required regardless of project size. The level of rigor of the tests can be determined by the risk posture of the project. Safety-critical code needs to be tested and is the first priority; areas of higher risk, or areas determined to be critical to success, take the next priority. Unit tests, coverage analyses, and other lower-level tests can be used to support higher-level tests, but this should be done with caution. Test articles (models, simulators, etc.) may or may not be used; because testing on the actual "flight" hardware may damage equipment, that risk should be considered (see SWE-073 - Platform or High-Fidelity Simulations). In all cases, maintaining records and the ability to repeat the tests in the same configuration is required to prove that issues found have been resolved.
5. Resources
5.1 References
- (SWEREF-001) Software Development Process Description Document, EI32-OI-001, Revision R, Flight and Ground Software Division, Marshall Space Flight Center (MSFC), 2010. See Chapter 12. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.
- (SWEREF-008) 580-CK-032-02, Software Engineering Division, NASA Goddard Space Flight Center, 2010.
- (SWEREF-020) IEEE, Version 3.0 describes generally accepted knowledge about software engineering. Its 15 knowledge areas (KAs) summarize basic concepts and include a reference list pointing to more detailed information.
- (SWEREF-071) Acceptance Review Checklist, NASA Marshall Space Flight Center (MSFC). This NASA-specific information and resource may be available in Software Processes Across NASA (SPAN), accessible to NASA-users from the SPAN tab in this Handbook.
- (SWEREF-112) Test Readiness Review Checklist, NASA Marshall Space Flight Center. This NASA-specific information and resource may be available in Software Processes Across NASA (SPAN), accessible to NASA-users from the SPAN tab in this Handbook.
- (SWEREF-115) Verification Handbook, Volume 1: Verification Process, MSFC-HDBK-2221, NASA Marshall Space Flight Center (MSFC), 1994.
- (SWEREF-152) Chillarege, Ram (1999). Technical Report RC 21457 Log 96856. Center for Software Engineering/IBM Research.
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-207) Hower, Rick (December, 2016). Software QA and Testing Resource Center.
- (SWEREF-234) Software Stress-Testing Guide, JPL D-24472, NASA Jet Propulsion Laboratory (JPL), 2001. This NASA-specific information and resource may be available in Software Processes Across NASA (SPAN), accessible to NASA-users from the SPAN tab in this Handbook.
- (SWEREF-276) NASA-GB-8719.13, NASA, 2004. Access NASA-GB-8719.13 directly: https://swehb.nasa.gov/download/attachments/16450020/nasa-gb-871913.pdf?api=v2
- (SWEREF-278) NASA-STD-8739.8B, NASA Technical Standard, approved 2022-09-08, superseding NASA-STD-8739.8A.
- (SWEREF-299) Pettichord, Bret, editor (August, 2001). In PrismNet: The Complete Business Internet. Retrieved August 04, 2015 from http://www.prismnet.com/~wazmo/qa/.
- (SWEREF-469) IEEE Std 730: 2014, 2014. NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards.
- (SWEREF-529) Public Lessons Learned Entry: 938.
- (SWEREF-530) Public Lessons Learned Entry: 939.
- (SWEREF-537) Public Lessons Learned Entry: 1104.
- (SWEREF-538) Public Lessons Learned Entry: 1106.
- (SWEREF-545) Public Lessons Learned Entry: 1197.
- (SWEREF-685) Ariane 5 Inquiry Board - European Space Agency
5.2 Tools
NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
5.3 Process Asset Templates
- PAT-026 - Test Review Checklist For Test Leads: SWE-066, tab 3.3. Also in Testing Analysis and in the Test Results and Documentation Process Asset Templates.
- PAT-027 - Test Review Checklist For Review Teams: SWE-066, tab 7.4. Also in Test Docs.
6. Lessons Learned
6.1 NASA Lessons Learned
The NASA Lessons Learned database contains the following lessons learned related to the importance of and potential issues related to software testing:
- International Space Station Program/Hardware-Software/Qualification Testing-Verification and Validation (Issues related to using software before completion of testing.) Lesson Number 1104 537: "Some hardware is being used in MEIT before it has completed qualification testing. Software is also often used before its verification and validation are complete. In both cases, modification to the hardware or software may be required before certification is completed, thereby potentially invalidating the results of the initial MEIT testing."
- International Space Station Program/Hardware-Software/Integration Testing (The importance of end-user involvement in the testing process.) Lesson Number 1106 538: "Astronaut crew participation in testing improves the fidelity of the test and better familiarizes the crew with systems and procedures."
- MPL Uplink Loss Timer Software/Test Errors (1998) (The importance of recognizing and testing high-risk aspects of the software.) Lesson Number 0939 530: 1) "Recognize that the transition to another mission phase (e.g., from EDL to the landed phase) is a high-risk sequence. Devote extra effort to planning and performing tests of these transitions. 2) Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
- Deep Space 2 Telecom Hardware-Software Interaction (1999) (Considerations for performance testing.) Lesson Number 1197 545: The Recommendation states: "To fully validate performance, test integrated software and hardware over the flight operational temperature range... ('test as you fly, and fly as you test')."
- Probable Scenario for Mars Polar Lander Mission Loss (1998) (Testing failures.) Lesson Number 0938 529: "1) Project test policy and procedures should specify actions to be taken when a failure occurs during the test. When tests are aborted or known to have had flawed procedures, they must be rerun after the test deficiencies are corrected. When test article hardware or software is changed, the test should be rerun unless there is a clear rationale for omitting the rerun. 2) All known hardware operational characteristics, including transients and spurious signals, must be reflected in the software requirements documents and verified by test."
Ariane 5 - The Inquiry Board's Recommendations 685:
- Prepare a test facility including as much real equipment as technically feasible, inject realistic input data, and perform complete, closed-loop system testing. Complete simulations must take place before any mission. High test coverage has to be obtained.
- Include trajectory data in specifications and test requirements.
- Review the test coverage of existing equipment and extend it where deemed necessary.
- Give the justification documents the same attention as the code. Improve the technique for keeping code and its justifications consistent.
- Set up a team that will prepare the procedure for qualifying software, propose stringent rules for confirming such qualification, and ascertain that specification, verification, and testing of software are of consistently high quality in the Ariane-5 Programme. Inclusion of external RAMS (Reliability, Availability, Maintainability, Safety) experts is to be considered.
6.2 Other Lessons Learned
- All software requirements with multiple logic conditions require Formal Qualification Testing (FQT) to exercise all logic conditions.
- If unable to provide this via FQT, the FQT test designer must confirm coverage via other means, e.g., by leveraging lower-level unit tests. Exceptions are to be documented and approved by the software control board.
- Guidance for test case coverage must be documented.
- Be judicious in identifying software flaws during testing
- Review all test scripts and test procedures for occurrences of non-flight-like actions. All occurrences must be approved (program to decide appropriate approval level).
- Apply the ‘Test Like You Fly’ exception process to test scripts and procedures applied to the scenario and validation testing.
- Hardware/software integration testing campaign
- Avoid reliance on lower-level software tests to verify system-level requirements.
- Hardware/Software Integration test campaign must be used to verify critical vehicle and system functions (system spec) and utilize a high fidelity test environment (e.g., flight-like hardware in the loop), especially for external and internal interfaces.
- Ensure test rig configuration sufficiency for planned testing (e.g., sufficient real hardware included) and the data captured is analyzed.
- Perform end-to-end mission scenario testing
- Programs must establish an end-to-end “run for record” test before each flight to include all applicable dynamic/critical phases of flight using a maximum available suite of flight hardware.
- Simulation validation
- Simulations/emulations must be validated by the system provider and with real hardware data signatures.
- Simulations/emulations must be kept in sync with hardware/software updates.
- Increase involvement of SE&I in the development life cycle
- Quality spacecraft software development and test require deep and persistent partnership and joint accountability between SE&I, subsystem designers, software community, and external suppliers.
- Software Change Tickets
- Any tests (formal or informal) that fail should be rerun and verified before software change tickets are closed, in the original environment, or as close to it as possible. Preferably this would be done with the original author of the software change ticket but with appropriate control board approval.
- Be cautious about closing overlapping software change tickets to ensure that the full scope of all associated software change tickets is addressed via retest. Be careful about missing any unique elements to the individual software change ticket.
- Modifications to board decisions
- Any aspects of control board decisions that are modified must be re-approved by the board (e.g., impacted artifacts that need updating).
7. Software Assurance
7.1 Tasking for Software Assurance
1. Confirm test coverage of the requirements through the execution of the test procedures.
7.2 Software Assurance Products
- Test Witnessing Signatures
Objective Evidence
- Test coverage metric data.
- Confirmation that the system safety data package contains newly identified software contributions to hazards, events, or conditions found during testing.
- Software test plan(s).
- Software test procedure(s).
- Software test report(s)
7.3 Metrics
- # of Software Requirements (e.g., Project, Application, Subsystem, System, etc.)
- # of software requirements with completed test procedures/cases over time
- # of safety-critical requirement verifications vs. total # of safety-critical requirement verifications completed
- # of Open issues vs. # of Closed over time
- # of Source Lines of Code (SLOC) tested vs. total # of SLOC
- # of detailed software requirements tested to date vs. total # of detailed software requirements
- # of tests completed vs. total # of tests
- Software code/test coverage percentages for all identified safety-critical components (e.g., # of paths tested vs. total # of possible paths)
- # of Hazards containing software that have been tested vs. total # of Hazards containing software
- # of Requirements tested vs. total # of Requirements
- # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
- # of tests executed vs. # of tests completed
- # of Non-Conformances identified while confirming hazard controls are verified through test plans/procedures/cases
- # of safety-related non-conformances identified by life cycle phase over time
- # of safety-related requirement issues (Open, Closed) over time
- # of TBD/TBC/TBR requirements trended over time
- # of Software Requirements without associated test cases
- # of Software Requirements being met via satisfactory testing vs. total # of Software Requirements
- # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
- Total # of tests completed vs. number of test results evaluated and signed off.
Note: Metrics in bold type are required by all projects
See also Topic 8.18 - SA Suggested Metrics
7.4 Guidance
Software assurance will review the test procedures and either review test results or witness the tests being run to confirm the test coverage of the requirements. This assumes that the bidirectional tracing of the test procedures and test requirements has been done previously and shows that all requirements have been traced to one or more tests. See SWE-052 - Bidirectional Traceability for requirements traceability requirements and guidance and SWE-190 - Verify Code Coverage for code coverage.
In projects with safety-critical code, software assurance will perform extra rigor to ensure that all safety-related features are thoroughly tested. This may involve witnessing the tests or doing a more thorough review of the test results to check that all safety features have been tested successfully. In many cases, the requirements for the specific safety features are captured in the hazard reports, so it is important to ensure all of these safety features have been included in the trace to tests. Tests for safety features should include testing in operational scenarios, nominal scenarios, off-nominal conditions, stress conditions, and error conditions that require bringing the system to a safe mode. See also Topic 8.01 - Off Nominal Testing.
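As a hedged illustration of off-nominal testing of a safety feature (all names below are hypothetical and not drawn from any NASA project), such a test injects an error condition and confirms that the software commands the required safe-mode response:

```python
# Sketch of an off-nominal safety-feature test using hypothetical names: the test
# drives an error condition (a sensor timeout) and confirms the controller
# commands a transition to safe mode, as a hazard report might require.
import unittest


class Controller:
    """Hypothetical stand-in for the flight software component under test."""

    def __init__(self):
        self.mode = "NOMINAL"

    def handle_sensor_timeout(self):
        # Safety response: any sensor timeout forces a safe-mode transition.
        self.mode = "SAFE"


class SafeModeTransitionTest(unittest.TestCase):
    def test_sensor_timeout_enters_safe_mode(self):
        controller = Controller()
        controller.handle_sensor_timeout()   # inject the off-nominal condition
        self.assertEqual(controller.mode, "SAFE")


if __name__ == "__main__":
    unittest.main()
```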
Projects should perform regression testing for any changes made to the software during the test process, following the project's change management process. Tests that exercise any safety features should be part of the regression test set. See SWE-080 - Track and Evaluate Changes for tracking and evaluating changes and SWE-191 - Software Regression Testing for regression testing.
All software requirements with multiple logic conditions require Formal Qualification Testing (FQT) to exercise all logic conditions.
- If unable to provide this via FQT, the FQT test designer must confirm coverage via other means, e.g., by leveraging lower-level unit tests. Exceptions are to be documented and approved by the software control board.
- Guidance for test case coverage must be documented.
7.4.1 Guidance for Checking Test Readiness and for Review Teams
The TEST REVIEW CHECKLIST FOR REVIEW TEAMS PAT-027 can be used by both Software Assurance personnel and members of test review teams to help determine whether a project is ready to move into a testing phase. The maturity of the software development effort and the preparation for the actual testing are considered in the checklist.
7.4.2 Guidance for Checking Test Readiness and for Test Leads
The TEST REVIEW CHECKLIST FOR TEST LEADS PAT-026 is designed for test leads to use when preparing for testing. It can help a test lead determine whether the team is ready for testing and what processes and other preparations need to be in place before beginning testing.
7.5 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook: