5.10 - STP - Software Test Plan

Return to 7.18 - Documentation Guidance

1. Minimum Recommended Content 

At a high level, the minimum recommended content for the Software Test Plan includes:

    1. Test levels (separate test efforts, each with its own documentation and resources, e.g., unit or component testing, software integration testing, system integration testing, end-to-end testing, acceptance testing).
    2. Test types: There are many types of testing that can be used, each generally intended to verify different aspects of the software. A test effort may draw test cases from many of these types, or an exhaustive set of test cases may be drawn from a single type (for example, functional testing). Test types may include:
        1. Functional Testing (Requirements-based Testing).
        2. Stress Testing.
        3. Performance Testing.
        4. Endurance Testing.
        5. Interface Testing (both user interface and interfaces to other system functions).
        6. Boundary Conditions Testing.
        7. Coverage Testing (both path coverage and statement coverage).
        8. Mutation Testing or Perturbation Testing.
        9. Types of testing often used in safety-critical systems:
          1. Fault Insertion Testing.
          2. Failure Modes and Effects Testing.
          3. Perturbation or Mutation Testing.
    3. Test classes (designated grouping of test cases).
    4. Test progression (the order in which test classes for each test level will be performed).
    5. Test schedules.
    6. Acceptance (or exit) criteria for the set of tests (example: 95% of test cases must pass, i.e., meet expected results).
    7. Test coverage (breadth and depth) or other methods for ensuring sufficiency of testing.
    8. Plan for test witnessing, if the system is safety-critical.
    9. Data recording, reduction, and analysis.
    10. Requirements traceability (or verification matrix).
    11. Any risks or issues identified with testing.

2. Rationale

Planning the software testing activity allows thorough deliberation of tasks, methods, environments, and related criteria before they are implemented. Planning also allows the project team to improve a current project, based on lessons learned from previous projects, including using more appropriate or efficient techniques and ensuring the performance of steps previously missed or not included.

As with any task, having a plan in place ensures that all necessary and required tasks are performed. Development of that plan provides the opportunity for stakeholders to give input and assist with the documentation and tailoring of the planned testing activities to ensure the outcome will meet the expectations and goals of the project.

Ensuring the Software Test Plan follows a template and includes specified information ensures consistency of test plans across projects, ensures proper planning occurs, and prevents repeating problems of the past. 

3. Guidance

The Software Test Plan may be standalone or part of the Software Management Plan. 

Follow Center policies and procedures when determining which approach to use for a particular project. 

The Software Test Plan describes the plans for software component level testing, software integration testing, software qualification testing, and system qualification testing of software systems. The plan describes the software test environment, development, and test activities. The plan provides an overview of software testing, test schedules, and test management procedures.


The Software Test Plan may be tailored by software classification. Goddard Space Flight Center's (GSFC's) 580-STD-077-01, Requirements for Minimum Contents of Software Documents, provides one suggestion for tailoring a Software Test Plan based on the recommended contents and the classification of the software being tested.

As shown above, the Software Test Plan is recommended to contain specific information. Below are suggestions for the type of information to consider for the named contents. Note that this information may be added to the Software Test Plan in phases; IEEE Std 1059-1993, IEEE Guide for Software Verification and Validation Plans, 211 states: "The requirements phase produces the ...acceptance and system test plans. The design phase produces the ... component and integration test plans."

See also SWE-036 - Software Process Determination

3.1 Test Levels

Specific types of testing may have their own plans, resources, tools, schedules, and other information particular to the testing being performed. Where plans exist to document this information, they need not be duplicated in the Software Test Plan, but they need to be identified and referenced to provide an overall view of the project's testing efforts. This section of the plan also describes the levels at which testing will be performed: configuration item level, system level, etc.

See also Topic 7.06 - Software Test Estimation and Testing Levels.

3.2 Unit Testing

The intent of unit testing is to confirm that a unit performs the capability assigned to it, correctly interfaces with other units and data, and represents a faithful implementation of the unit design. 452

See also SWE-062 - Unit Test, SWE-186 - Unit Test Repeatability, SWE-190 - Verify Code Coverage, SWE-191 - Software Regression Testing

In accordance with IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology, 222, a unit is defined as:

(1) A separately testable element specified in the design of a computer software component.
(2) A logically separable part of a computer program.
(3) A software component that is not subdivided into other components.

The Software Test Plan includes an overview of the unit testing process to be used for this project. The section(s) of the Software Test Plan focused on unit testing addresses the following:

  • Use of test cases.
  • Type of testing to be used: path testing, analytical testing, user interface testing.
  • Exercising functions in the unit.
  • Testing all paths in the unit. (This is a goal, although it may not always be possible.)
  • Testing boundary conditions and error situations. (A unit-test sketch illustrating these follows this list.)
  • Assessment of timing, sizing, accuracy.
  • Testing of safety features.
    • NASA-GB-8719.13, NASA Software Safety Guidebook, 276 recommends formal unit test plans and procedures for safety-critical units.
    • Unit tests for non-critical units may be informally documented, e.g., in laboratory notebooks.
  • Tools to be used.
  • Testing assignment: Who will carry out the testing? (Typically, the developer.)
  • Testing iteration: testing until success criteria are achieved (define objective criteria).
  • Documenting and correcting issues.
    • Typically, during unit testing, issues are corrected and the unit retested.
    • Typically, issues related to requirements or design follow the relevant process for correcting those issues.
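
To make several of the items above concrete (see the boundary-conditions bullet), the following is a minimal unit-test sketch in Python using the pytest framework. The unit under test, `clamp_thruster_level`, its limits, and its error behavior are hypothetical illustrations, not taken from any NASA project.

```python
# Minimal unit-test sketch (pytest). The unit under test, its limits,
# and its expected behavior are hypothetical illustrations.
import pytest

def clamp_thruster_level(level: float) -> float:
    """Hypothetical unit: clamp a commanded thruster level to [0.0, 1.0]."""
    if level != level:  # NaN is the only value not equal to itself
        raise ValueError("level is NaN")
    return max(0.0, min(1.0, level))

# Nominal value: exercises the unit's assigned capability.
def test_nominal_value():
    assert clamp_thruster_level(0.5) == 0.5

# Boundary conditions: exact limits and values just outside them.
@pytest.mark.parametrize("value,expected", [
    (0.0, 0.0), (1.0, 1.0),      # on the boundaries
    (-0.1, 0.0), (1.1, 1.0),     # just outside the boundaries
])
def test_boundaries(value, expected):
    assert clamp_thruster_level(value) == expected

# Error situation: invalid input is rejected, not silently passed on.
def test_error_situation():
    with pytest.raises(ValueError):
        clamp_thruster_level(float("nan"))
```

Pass/fail here is objective (each assertion either holds or does not), which supports the "testing until success criteria are achieved" item above.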

3.3 Software Integration Testing

According to NASA-GB-8719.13, 276 "Integration testing deals with how the software components will be incorporated into the final software system and what will be tested at each integration step."

The Software Test Plan includes an overview of the software integration testing process to be used for this project. The section(s) of the Software Test Plan focused on software integration testing addresses the following:

  • Testing environment.
  • Integration sequence.
  • Testing of safety features, including confirmation that non-critical components cannot influence critical components.
  • Interactions among the units.
  • Assessment of timing, sizing, accuracy.
  • Performance at boundaries and under stress conditions.
  • Use of automated tools, when available, for analysis of the results.
  • Repetition of integration testing until success criteria are achieved (define objective criteria).
  • Use of independent personnel to perform testing.
  • Identification of test harnesses or drivers required and whether they exist or need to be developed.
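
Where a required test harness or driver does not yet exist, a stub can stand in for the missing component. The sketch below is a minimal illustration using Python's standard unittest.mock; the `Controller` unit, the sensor interface, and the temperature threshold are assumptions invented for the example.

```python
# Integration-test sketch: verify the interaction between two units
# using a stub for a component that is not yet available.
from unittest import mock

class Controller:
    """Hypothetical unit that consumes readings from a sensor interface."""
    def __init__(self, sensor):
        self.sensor = sensor

    def safe_mode_needed(self) -> bool:
        return self.sensor.read_temperature() > 100.0

def test_controller_sensor_interaction():
    # Test driver: stub the sensor interface with a canned reading.
    sensor_stub = mock.Mock()
    sensor_stub.read_temperature.return_value = 120.0

    controller = Controller(sensor_stub)

    # Interaction among the units: the controller must query the sensor
    # and act on the out-of-range value.
    assert controller.safe_mode_needed() is True
    sensor_stub.read_temperature.assert_called_once()
```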

See also Topic 5.02 - IDD - Interface Design Description

3.4 Systems Integration Testing

The Software Test Plan includes an overview of the testing process used to test software integrated into the larger system. The section(s) of the software test plan focused on systems integration testing addresses the following:

  • Stress, load, disaster, stability testing.
  • Boundary testing (data, interfaces).
  • Functional testing in a complete system environment.
  • Testing of safety features.
  • Pass/fail criteria for each test.
  • Use of independent personnel to perform testing.

3.5 End-to-end Testing

The Software Test Plan includes an overview of the process used for end-to-end testing. The section(s) of the Software Test Plan focused on end-to-end testing addresses the following:

  • Stress, load, disaster, stability testing.
  • Functional testing.
  • Testing of safety features.
  • Testing all paths in the system. (This is a goal, although it may not always be possible.)
  • Exercising all branches in the system.
  • Execution of each statement at least once.
  • Testing of boundary conditions for all inputs, as well as nominal and invalid input values.
  • Objective pass/fail criteria for each test.
  • Any special procedures, constraints, dependencies for implementing and running tests.
  • Use of independent personnel to perform testing.

3.6 Acceptance Testing

Acceptance testing needs to be conducted after the appropriate readiness review has been successfully completed. This type of testing is the customer acceptance test, and the Software Test Plan includes an overview of the process used. The section(s) of the Software Test Plan focused on acceptance testing includes the following:

  • Plans to assure the customer that the system is safe.
  • Plans to confirm that software correctly implements system and software requirements in an operational environment.
  • Use of independent personnel to perform testing.
  • Identification of criteria for stopping testing, e.g., fraction of requirements covered, number of errors remaining, reliability goals (a sample exit-criteria check follows this list).
  • Provisions for witnessing of tests.
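
As one illustration of an objective stopping criterion (the "95% of test cases must pass" example from Section 1), a results summary can be checked mechanically. The sketch below is a minimal example; the threshold and function name are assumptions.

```python
# Exit-criteria sketch: decide whether testing may stop, given a
# results summary. The 95% threshold is an example, not a mandate.
def exit_criteria_met(passed: int, executed: int,
                      required_pass_rate: float = 0.95) -> bool:
    if executed == 0:
        return False  # no evidence yet; testing cannot stop
    return passed / executed >= required_pass_rate

# Example: 96 of 100 executed test cases met their expected results.
assert exit_criteria_met(96, 100) is True
assert exit_criteria_met(94, 100) is False
```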

See also Topic 8.13 - Test Witnessing

3.7 Regression Testing

Regression tests are used to ensure that changes made to software have had no unintended side effects.

The section(s) of the Software Test Plan focused on regression testing addresses the following:

  • Exercising the maximum number of critical functions.
  • Selection of the subset of the total number of tests (from the original test suites) to be used as the regression test set, e.g., a key 10 percent (a tagging sketch follows this list).
  • Use of independent personnel to perform testing.
  • Use of automated test tools.
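
One common way to carve out the regression subset is to tag critical test cases so they can be selected at run time. The sketch below uses pytest's marker mechanism; the marker name `regression` and the function under test are assumptions for illustration.

```python
# Regression-subset sketch: tag critical test cases so the subset can
# be selected with `pytest -m regression`. Register the marker in
# pytest.ini to avoid warnings:
#   [pytest]
#   markers = regression: critical tests run on every change
import pytest

def issue_command(cmd: str) -> str:
    """Hypothetical critical function under test."""
    return "ACK" if cmd == "SAFE_MODE" else "NAK"

@pytest.mark.regression
def test_critical_command_path():
    # Tagged: part of the regression set exercised after every change.
    assert issue_command("SAFE_MODE") == "ACK"

def test_non_critical_command_path():
    # Untagged: runs in the full campaign, not the regression subset.
    assert issue_command("DUMP_LOG") == "NAK"
```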

See also Topic 7.06 - Software Test Estimation and Testing Levels

3.8 Test Classes (Designated Grouping of Test Cases)

Test classes describe the grouping of tests that will be performed throughout the various types of testing. When developing the Software Test Plan, consider:

  • Timing tests.
  • Erroneous input tests.
  • Maximum capacity tests.
  • Other factors.

3.9 General Test Conditions

General test conditions are conditions that apply to all of the planned tests or to a specific group of tests. When documenting general test conditions, consider statements and conditions, such as these taken from Langley Research Center's NPR 7150.2 Class A Required Testing Documents With Embedded Guidance:

  • "Each test should include nominal, maximum, and minimum values."
  • "Each test of type X should use live data."
  • "Execution size and time should be measured for each software item."
  • Extent of testing to be performed, e.g., percent of some defined total, and the associated rationale.

3.10 Test Progression

Test progression addresses the sequence or ordering of tests. The Software Test Plan describes dependencies among tests that require that tests be performed in a particular order.

3.11 Data Recording, Reduction, and Analysis

This section of the Software Test Plan describes processes and procedures for capturing and evaluating/analyzing results and issues found during all types of testing. Consider "manual, automatic, and semi-automatic techniques for recording test results, manipulating the raw results into a form suitable for evaluation, and retaining the results of data reduction and analysis." 401 For specific suggestions for carrying out these activities, see SWE-068 - Evaluate Test Results in this Handbook.
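
As a minimal illustration of semi-automatic recording and reduction, raw results can be captured as structured records and reduced to a summary suitable for evaluation. The record fields and file name below are assumptions, not a prescribed format.

```python
# Data recording and reduction sketch: capture raw results as structured
# records, then reduce them to a summary suitable for evaluation.
import json
from collections import Counter

raw_results = [  # recording: one record per executed test case
    {"test_id": "TC-001", "verdict": "pass", "duration_s": 1.2},
    {"test_id": "TC-002", "verdict": "fail", "duration_s": 0.8},
    {"test_id": "TC-003", "verdict": "pass", "duration_s": 2.4},
]

# Retain the raw data for later analysis and audits.
with open("test_results.json", "w") as f:
    json.dump(raw_results, f, indent=2)

# Reduction: collapse raw records into an evaluable summary.
verdicts = Counter(r["verdict"] for r in raw_results)
summary = {
    "executed": len(raw_results),
    "passed": verdicts["pass"],
    "failed": verdicts["fail"],
    "pass_rate": verdicts["pass"] / len(raw_results),
}
print(summary)  # analysis input, e.g., compared against exit criteria
```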

NASA-GB-8719.13 276 recommends test reports be created for unit tests of safety-critical items. 

See also Topic 8.57 - Testing Analysis

3.12 Test Coverage (Breadth and Depth) or Other Methods for Ensuring Sufficiency of Testing

If not addressed elsewhere in the Software Test Plan, a description of the methods to be used for ensuring sufficient test coverage is provided. Methods could include:

  • Reviews/inspections.
  • Automated tools.
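
As an example of an automated tool for measuring statement coverage, the sketch below uses the coverage.py package's programmatic API; the availability of the package (pip install coverage) and the `run_all_tests` entry point are assumptions.

```python
# Statement-coverage sketch using coverage.py (pip install coverage).
import coverage

def run_all_tests():
    """Hypothetical entry point that executes the project's test suite."""
    ...

cov = coverage.Coverage()
cov.start()
run_all_tests()   # exercise the code under measurement
cov.stop()
cov.save()
cov.report()      # prints per-file statement-coverage percentages
```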

3.13 Planned Tests, Including Items and Their Identifiers

If not already included in sections of the plan focused on specific types of testing (unit, integration, etc.), all planned tests, test cases, data sets, etc., that will be used for the project need to be identified in the Software Test Plan, along with the software items they will be used to test. Each item needs to have its own unique identifier to ensure proper execution and tracking of the planned tests. Consider capturing the following information for each test:

  • Objective.
  • Test level.
  • Test type.
  • Test class.
  • Requirements addressed.
  • Software item(s) tested.
  • Type of data to be recorded.
  • Assumptions, constraints, limitations (timing, interfaces, personnel, etc.).
  • Safety, security, privacy considerations.

See also SWE-015 - Cost Estimation

3.14 Test Schedules

NASA-GB-8719.13 276 notes that: "Scheduling testing phases is always an art, and depends on the expected quality of the software product... Previous history, either of the development team or similar projects, can help determine how long testing will take. Some methods (such as error seeding and Halstead's defect metric) exist for estimating defect density (number of defects per unit of code) when historical information is not available."
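
The error-seeding method mentioned in the quote can be illustrated with the classic capture-recapture estimate: if N_s defects are seeded and testing finds n_s of them along with n_i indigenous defects, the indigenous total is estimated as n_i × N_s / n_s. A minimal sketch with illustrative numbers:

```python
# Error-seeding sketch (capture-recapture estimate). Assuming testing
# finds the same fraction of seeded and indigenous defects:
#   estimated_indigenous_total = found_indigenous * seeded / found_seeded
def estimate_indigenous_defects(seeded: int, found_seeded: int,
                                found_indigenous: int) -> float:
    if found_seeded == 0:
        raise ValueError("no seeded defects found; estimate undefined")
    return found_indigenous * seeded / found_seeded

# Illustrative numbers: 20 defects seeded, 16 of them found, plus 40
# indigenous defects found -> about 50 indigenous defects in total,
# i.e., roughly 10 still latent.
print(estimate_indigenous_defects(seeded=20, found_seeded=16,
                                  found_indigenous=40))  # 50.0
```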

Information to consider for test schedules includes:

  • List or chart showing time frames for testing at all test sites.
  • Schedule of test activities for each test site, including on-site test setup, on-site testing, data collection, retesting, etc.
  • Milestones relevant to the development life cycle.

See also SWE-065 - Test Plan, Procedures, Reports, SWE-066 - Perform Testing. See also Topic 7.08 - Maturity of Life Cycle Products at Milestone Reviews, SWE-024 - Plan Tracking

3.15 Requirements Traceability (or Verification Matrix)

One of the key purposes of testing is to confirm that the requirements for the project have been met. The Software Test Plan includes a traceability matrix that maps tests to software requirements. If the matrix already exists, the test plan includes a reference to the matrix.
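
A traceability matrix can be as simple as a mapping from requirement identifiers to the tests that verify them, which also makes coverage gaps mechanically detectable. The identifiers in the sketch below are hypothetical.

```python
# Requirements-traceability sketch: map requirement IDs to verifying
# tests and flag any requirement with no test coverage.
requirements = {"SRS-001", "SRS-002", "SRS-003"}   # hypothetical IDs

trace_matrix = {
    "SRS-001": ["TC-001", "TC-004"],
    "SRS-002": ["TC-002"],
    # SRS-003 intentionally untraced to show gap detection.
}

untraced = sorted(r for r in requirements if not trace_matrix.get(r))
if untraced:
    print("Requirements with no verifying test:", untraced)
```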

3.16 Qualification Testing Environment, Site, Personnel, and Participating Organizations

This section or topic in the Software Test Plan lists items such as these:

  • Sites where testing will occur, identified by name.
  • Software and version necessary to perform the planned testing activities at each site, for example:
    • Compilers, operating systems, communications software.
    • Test drivers, test data generators, test control software.
    • Input files, databases, path analyzers.
    • Other.
  • Hardware and firmware, including versions, that will be used in the software test environment at each site.
  • Manuals, media, licenses, instructions, etc., required to set up and perform the planned tests.
  • Items to be supplied by the site and those items that will be delivered to the test site.
  • Organizations participating in the tests at each site and their roles and responsibilities.
  • Number, type, skill level of personnel required to carry out testing at each site.
  • Training and/or orientation required for testing personnel at each site.
  • Tests to be performed at each site.

In addition to the information required above, test plans address the following information for all types of testing:

  • Resources (personnel, tools, equipment, facilities, etc.).
  • Risks that require contingency planning.
  • What is to be tested and what is not to be tested.
  • Test completion criteria.

If not identified elsewhere, the Software Test Plan identifies the metrics to be collected for each type of testing. Suggested metrics include:

  • Number of units tested.
  • Hours spent.
  • Number of defects found.
  • Average defects found per line of code (a sample defect-density computation follows this list).
  • Measures of test coverage, software reliability and maintainability.
  • Other.
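
For the defects-per-line-of-code metric above, defect density is commonly normalized per thousand source lines of code (KSLOC). A minimal sketch with illustrative figures:

```python
# Defect-density sketch: defects found per thousand source lines (KSLOC).
def defect_density_per_ksloc(defects_found: int, sloc: int) -> float:
    if sloc <= 0:
        raise ValueError("source line count must be positive")
    return defects_found / (sloc / 1000.0)

# Illustrative figures: 12 defects found in 8,000 source lines.
print(defect_density_per_ksloc(12, 8000))  # 1.5 defects per KSLOC
```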

Consider the following best practices for Software Test Plans:

  • Begin development of the Software Test Plan(s) early, as soon as the relevant stage has been completed:
    • Helps identify confusing or unclear requirements.
    • Helps identify un-testable design features before implementation.
    • Allows for acquisition/allocation of test resources.
  • Involve the right people in the plan development (quality engineers, software engineers, systems engineers, etc.).

Use the right Sources of Information (first column in table below) as appropriate for the project and for each type of testing, such as:


| Sources of Information | Unit Test* | SW Integration Test* | Systems Integration Test* | End-to-End Test* | Acceptance Test* | Regression Test* |
| --- | --- | --- | --- | --- | --- | --- |
| Software Requirements Specification (SRS) | X |  | X | X | X | X |
| Software Design Description (SDD) | X | X |  |  |  |  |
| Design traceability | X | X |  |  |  |  |
| Interface documents | X | X | X | X | X | X |
| Draft user documentation | X |  |  |  |  |  |
| Code coverage analyzer specifications | X |  |  |  |  |  |
| Criticality analysis |  | X |  |  |  |  |
| Draft operating documents |  | X | X | X |  |  |
| Draft maintenance documents |  | X |  |  |  |  |
| Final operating documents |  |  |  |  | X |  |
| Final user documentation |  |  |  |  | X |  |
| Concept documents |  |  | X | X |  |  |
| Requirements traceability |  |  | X | X | X | X |
| Expected customer usage patterns and conditions |  |  | X | X | X | X |

*May be a separate test plan referenced in the overall Software Test Plan or part of the overall Software Test Plan.

  • Have the Software Test Plan reviewed/inspected before use (SWE-087 - Software Peer Reviews and Inspections for Requirements, Plans, Design, Code, and Test Procedures).
    • Include Software Assurance and Software Safety personnel to verify safety-critical coverage.
  • Have changes to the Software Test Plan evaluated for their effect on system safety.
  • Keep the Software Test Plan maintained (up to date) and under configuration control.
  • Identify early and focus testing on the components most likely to have issues (high risk, complex, many interfaces, demanding timing constraints, etc.).
    • May require some level of analysis to determine optimal test coverage. (See NASA-GB-8719.13 276 for a list of automated test coverage analysis tools.)
  • Plan to use independent testing (e.g., fellow programmers, separate test group, separate test organization, NASA Independent Verification & Validation) where possible and cost-effective, as new perspectives can turn up issues that authors might not see.
  • Include coverage of user documentation, e.g., training materials, procedures.

3.17 Additional Guidance

Links to Additional Guidance materials for this subject have been compiled in the Relevant Links table. See the Additional Guidance in the Resources tab.

4. Small Projects

Software Test Plans are necessary for all software projects, but for projects with small budgets or small teams, starting with an existing test plan from a project of a similar type and size could help reduce the time and effort required to produce a test plan for a new project. Working with someone experienced in writing test plans, perhaps from another project and on a short-term basis, could help the project team prepare the document in a timely fashion without overburdening team resources. Where applicable, the test plan could reference other project documents rather than reproduce their contents, avoiding duplication of effort and reducing maintenance activities.

Since the Software Test Plan may be standalone or part of the Software Management Plan, incorporating the test plan into a larger project document may be useful for document tracking, review, etc. 

Follow Center policies and procedures when determining which approach to use for a particular project. 

The Software Test Plan may be tailored by software classification. Goddard Space Flight Center's (GSFC's) 580-STD-077-01, Requirements for Minimum Contents of Software Documents, provides one suggestion for tailoring a Software Test Plan based on the recommended contents and the classification of the software being tested. This tailoring could reduce the size of the Software Test Plan and, therefore, the time and effort to produce and maintain it.

5. Resources

5.1 References



5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

5.3 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

5.4 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 



6. Lessons Learned

6.1 NASA Lessons Learned

  • MPL Uplink Loss Timer Software/Test Errors (1998) (Plan to test against full range of parameters). Lesson Number 0939 530: "Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
  • Deep Space 2 Telecom Hardware-Software Interaction (1999) (Plan to test as you fly). Lesson Number 1197 545: "To fully validate performance, test integrated software and hardware over the flight operational temperature range."
  • International Space Station (ISS) Program/Computer Hardware-Software/Software (Plan realistic but flexible schedules). Lesson Number 1062 536: "NASA should realistically reevaluate the achievable ... software development and test schedule and be willing to delay ... deployment if necessary rather than potentially sacrificing safety."
  • Thrusters Fired on Launch Pad (1975) (Plan for safe exercise of command sequences). Lesson Number 0403 507: "When command sequences are stored on the spacecraft and intended to be exercised only in the event of abnormal spacecraft activity, the consequences should be considered of their being issued during the system test or the pre-launch phases."

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.
