SWE-066 - Perform Testing

1. Requirements

3.4.2 The project shall perform software testing as defined in the Software Test Plan.

1.1 Notes

A best practice for Class A, B, and C software projects is to have formal software testing conducted, witnessed, and approved by an independent organization outside of the development team. Testing could include software integration testing, systems integration testing, validation testing, end-to-end testing, acceptance testing, white and black box testing, decision and path analysis, statistical testing, stress testing, performance testing, regression testing, qualification testing, simulation, and others. Use of automated software testing tools is also to be considered in software testing. Test breadth and accuracy can be increased through the use of test personnel independent of the software design and implementation teams, software peer reviews/inspections of Software Test Procedures and Software Test Results, and the use of impartial test witnesses.

1.2 Applicability Across Classes

Class E Not Safety Critical and Class G are labeled with "P (Center)." This means that an approved Center-defined process which meets a non-empty subset of the full requirement can be used to achieve this requirement. 

For Class E, Not Safety Critical software, no test plans are required, but the project shall perform software testing.

Class:        A_SC | A_NSC | B_SC | B_NSC | C_SC | C_NSC | D_SC | D_NSC | E_SC | E_NSC |  F  |  G   |  H

Applicable?:       |       |      |       |      |       |      |       |      | P(C)  |     | P(C) |

Key:  A_SC = Class A Software, Safety-Critical | A_NSC = Class A Software, Not Safety-Critical | ... | X - Applicable with details, read above for more | P(C) - P(Center), follow center requirements or procedures

2. Rationale

Per NASA-GB-8719.13, NASA Software Safety Guidebook 276, "Testing serves several purposes: to find defects, to validate the system or an element of the system, and to verify functionality, performance, and safety requirements. The focus of testing is often on the verification and validation aspects. However, defect detection is probably the most important aspect of testing. While you cannot test quality into the software, you can certainly work to remove as many defects as possible."

Following a plan helps ensure that all necessary and required testing tasks are performed. Development of the test plan provides the opportunity for stakeholders to give input and assist with the documentation and tailoring of the planned test activities for the project. 

3. Guidance

Software testing has many levels, including unit testing, integration testing, and system testing, which can include functionality, performance, load, stress, safety, and acceptance testing. While unit testing is typically performed by the development team, some testing, such as integration, system, or regression testing, may be performed by a separate and/or independent test group.

Keep in mind that formal testing, such as acceptance testing, is to be witnessed by an external organization, such as software assurance (see NASA-STD-8739.8, Software Assurance Standard 278).

"Scheduling testing phases is always an art, and depends on the expected quality of the software product. Relatively defect free software passes through testing within a minimal time frame. An inordinate amount of resources can be expended testing buggy software. Previous history, either of the development team or similar projects, can help determine how long testing will take. Some methods (such as error seeding and Halstead's defect metric) exist for estimating defect density (number of defects per unit of code) when historical information is not available." (NASA-GB-8719.13, NASA Software Safety Guidebook 276)

The following basic principles of testing come from NASA-GB-8719.13, NASA Software Safety Guidebook 276:

  • All tests need to be traceable to the requirements and all requirements need to be verified by one or more methods (e.g., test, demonstration, inspection, analysis).
  • Tests need to be planned before testing begins. Test planning can occur as soon as the relevant stage has been completed. System test planning can start when the requirements document is complete.
  • The "80/20" principle applies to software testing. In general, 80 percent of errors can be traced back to 20 percent of the components. Anything you can do ahead of time to identify components likely to fall in that 20 percent (e.g., high risk, complex, many interfaces, demanding timing constraints) will help focus the testing effort for better results.
  • Start small and then integrate into the larger system. Finding defects deep in the code is difficult to do at the system level. Such defects are easier to uncover at the unit level.
  • You can't test everything. Exhaustive testing cannot be done except for the most trivial of systems. However, a well-planned testing effort can test all parts of the system. Missing logic paths or branches may mean missing important defects, so test coverage needs to be determined (a minimal requirements-coverage check is sketched after this list).
  • Testing by an independent party is most effective. It is hard for developers to see their own bugs. While unit tests are usually written and run by the developer, it is a good idea to have a fellow team member review the tests. A separate testing group will usually perform the other tests. An independent viewpoint helps find defects, which is the goal of testing.
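
To make the traceability and coverage points above concrete, the following minimal Python sketch (with hypothetical requirement and test identifiers) builds a simple requirements-to-verification matrix and flags any requirement that nothing verifies. A real project would normally maintain this matrix in its requirements or test management tool rather than in code.

```python
# Hypothetical requirements-to-verification matrix: keys are requirement IDs,
# values are the tests (or other verification methods) that cover them.
trace_matrix = {
    "SRS-001": ["TC-010", "TC-011"],
    "SRS-002": ["TC-020"],
    "SRS-003": [],              # not yet verified -- should be flagged
    "SRS-004": ["ANALYSIS-03"], # verified by analysis rather than test
}

unverified = [req for req, methods in trace_matrix.items() if not methods]
coverage = 1 - len(unverified) / len(trace_matrix)

print(f"Requirements verification coverage: {coverage:.0%}")
if unverified:
    print("Unverified requirements:", ", ".join(unverified))
```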

NASA-GB-8719.13, NASA Software Safety Guidebook, 276 includes a chapter on testing with a focus on safety testing. Some general testing highlights of that chapter include:

  • Software testing beyond the unit level (integration and system testing) is usually performed by someone other than the developer, except in the smallest of teams.
  • Normally, software testing ensures that the software performs all required functions correctly, and can exhibit graceful behavior under anomalous conditions.
  • Integration testing is often done in a simulated environment, and system testing is usually done on the actual hardware. However, hazardous commands or operations need to be tested in a simulated environment first.
  • Any problems discovered during testing need to be analyzed and documented in discrepancy reports and summarized in test reports.
  • Create and follow written test procedures for integration and system testing.
  • Perform regression testing after each change to the system (a minimal, rerunnable regression suite is sketched after this list).
  • Prepare a test report upon completion of a test.
  • Verify COTS (Commercial Off the Shelf) software operates as expected.
  • Follow problem reporting and corrective action procedures when defects are detected.
  • Perform testing either in a controlled environment, using a structured test procedure and monitoring of results, or in a demonstration environment where the software is exercised without interference.
  • Analyze tests before use to ensure adequate test coverage.
  • Analyze test results to verify that requirements have been satisfied and that all identified hazards are eliminated or controlled to an acceptable level of risk.
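
The regression-testing item in the list above can be satisfied with almost any automated test framework; the hypothetical Python unittest sketch below simply shows the shape of a suite that is rerun unchanged after every modification to the system, with a non-zero exit status signaling a regression.

```python
import unittest

# Hypothetical unit under test; in practice this would be imported from the
# project's code base rather than defined alongside the tests.
def scale_telemetry(raw_count, scale=0.1):
    """Convert a raw telemetry count to engineering units."""
    return raw_count * scale

class RegressionSuite(unittest.TestCase):
    """Rerun after each change; any failure indicates a regression."""

    def test_nominal_scaling(self):
        self.assertAlmostEqual(scale_telemetry(100), 10.0)

    def test_zero_input(self):
        self.assertEqual(scale_telemetry(0), 0.0)

if __name__ == "__main__":
    unittest.main()  # exits non-zero if any test fails
```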

Other useful practices include:

  • Plan and document testing activities to ensure all required testing is performed.
  • Have test plans, procedures, and test cases inspected and approved before use.
  • Use a test verification matrix to ensure coverage of all requirements.
  • Consider dry running test procedures in offline labs with simulations prior to actual hardware/software integration tests.
  • Consider various types of testing to achieve more comprehensive coverage. (See Software QA and Testing Frequently-Asked-Questions 207 or NASA-GB-8719.13, NASA Software Safety Guidebook, 276 for a list with descriptions.)
  • When time and resources are limited, identify areas of highest risk and set priorities to focus effort where the greatest benefit will be achieved with the available resources; a simple risk-ranking sketch follows this list. (See Software QA and Testing Frequently-Asked-Questions 207 or NASA-GB-8719.13, NASA Software Safety Guidebook, 276 for suggested risk analysis considerations.)
  • As necessary and appropriate, include support from the software development and/or test team when performing formal testing of the final system. Support could include:
    • Identifying system test requirements unique to software.
    • Providing input for software to system test procedures.
    • Providing software design documentation.
    • Providing software test plans and procedures.
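
One simple way to identify areas of highest risk and set priorities, as suggested in the list above, is to score and rank the components to be tested. The Python sketch below uses hypothetical components and a basic likelihood-times-impact score; a project would substitute its own risk criteria and scales.

```python
# Hypothetical components scored 1 (low) to 5 (high) for likelihood of
# defects and impact of failure; risk = likelihood * impact.
components = [
    {"name": "command_handler",  "likelihood": 4, "impact": 5},
    {"name": "telemetry_format", "likelihood": 2, "impact": 3},
    {"name": "ui_display",       "likelihood": 3, "impact": 1},
]

ranked = sorted(components,
                key=lambda c: c["likelihood"] * c["impact"],
                reverse=True)

for comp in ranked:
    risk = comp["likelihood"] * comp["impact"]
    print(f"{comp['name']:16s} risk = {risk}")

# Test effort is then allocated from the top of the ranking downward
# until the available time and resources are consumed.
```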

While NASA Centers typically have their own procedures and guidance, NASA-GB-8719.13, NASA Software Safety Guidebook, 276 lists and describes the following types of testing, which need to be considered when planning any software test effort:

  • Functional system testing.
  • Stress testing.
  • Stability tests.
  • Resistance to failure testing.
  • Compatibility tests.
  • Performance testing (a minimal timing check is sketched after this list).

The following chart shows a basic flow for software testing activities from planning through maintenance. Several elements of this flow are addressed in related requirements in this Handbook (listed in the table at the end of this section).

Tools that may be useful when performing software testing include those in the following non-exhaustive list. Each project needs to evaluate and choose the appropriate tools for the testing to be performed on that project.

  • Software analysis tools.
  • Reverse engineering, code navigation, metrics, and cross-reference tools.
  • Debuggers.
  • Compilers.
  • Coding standards checkers.
  • Memory management tools.
  • Screen capture utilities.
  • Serial interface utilities.
  • Telemetry display utilities.
  • Automated scripts (an example script that runs a test suite and summarizes the results follows this list).
  • Etc.
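
As one example of the automated-scripts entry in the list above, the Python sketch below runs a unittest suite programmatically and writes a short summary that could feed a Software Test Report; the directory name, file name, and format are illustrative assumptions, not a prescribed report layout.

```python
import unittest
from datetime import datetime, timezone

# Discover and run all tests under a hypothetical "tests" directory.
loader = unittest.TestLoader()
suite = loader.discover("tests")
result = unittest.TextTestRunner(verbosity=1).run(suite)

# Write a minimal summary for later incorporation into a test report.
with open("test_summary.txt", "w") as report:
    report.write(f"Test run:  {datetime.now(timezone.utc).isoformat()}\n")
    report.write(f"Tests run: {result.testsRun}\n")
    report.write(f"Failures:  {len(result.failures)}\n")
    report.write(f"Errors:    {len(result.errors)}\n")
```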

Consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources related to software testing.

Additional guidance related to software testing, including specifics of plan, procedure, and report contents may be found in the following related requirements in this Handbook:

SWE-067 - Verify Implementation
SWE-068 - Evaluate Test Results
SWE-069 - Document Defects and Track
SWE-104 - Software Test Plan
SWE-114 - Test Procedures
SWE-118 - Software Test Report


4. Small Projects

Software testing is required regardless of project size.  However, when time and resources are limited, areas of highest risk may be identified and priorities set to focus effort where the greatest benefit will be achieved with the available resources. (See Software QA and Testing Frequently-Asked-Questions 207 or NASA-GB-8719.13, NASA Software Safety Guidebook, 276 for suggested risk analysis considerations.)

5. Resources

5.1 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN).

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to the importance of and potential issues related to software testing:

  • International Space Station Program/Hardware-Software/Qualification Testing-Verification and Validation (Issues related to using software before completion of testing.) Lesson Number 1104: "Some hardware is being used in MEIT before it has completed qualification testing. Software is also often used before its verification and validation is complete. In both cases, modification to the hardware or software may be required before certification is completed, thereby potentially invalidating the results of the initial MEIT testing." 537
  • International Space Station Program/Hardware-Software/Integration Testing (The importance of end user involvement in the testing process.) Lesson Number 1106: "Astronaut crew participation in testing improves fidelity of the test and better familiarizes the crew with systems and procedures." 538
  • MPL Uplink Loss Timer Software/Test Errors (1998) (The importance of recognizing and testing high risk aspects of software.) Lesson Number 0939: 1) "Recognize that the transition to another mission phase (e.g. from EDL to the landed phase) is a high risk sequence. Devote extra effort to planning and performing tests of these transitions.  2) Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested." 530
  • Deep Space 2 Telecom Hardware-Software Interaction (1999) (Considerations for performance testing.) Lesson Number 1197: The Recommendation states: "To fully validate performance, test integrated software and hardware over the flight operational temperature range... ('test as you fly, and fly as you test...')." 545
  • Probable Scenario for Mars Polar Lander Mission Loss (1998) (Testing failures.) Lesson Number 0938: "1) Project test policy and procedures should specify actions to be taken when a failure occurs during test. When tests are aborted, or known to have had flawed procedures, they must be rerun after the test deficiencies are corrected. When test article hardware or software is changed, the test should be rerun unless there is a clear rationale for omitting the rerun.  2) All known hardware operational characteristics, including transients and spurious signals, must be reflected in the software requirements documents and verified by test." 529