SWE-068 - Evaluate Test Results

1. Requirements

3.4.4 The project shall evaluate test results and document the evaluation.

1.1 Implementation Notes from Appendix D

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 Applicability Across Classes

Class G is labeled "P (Center)," meaning that an approved Center-defined process that meets a non-empty subset of the full requirement can be used to satisfy this requirement.

Class       | A_SC | A_NSC | B_SC | B_NSC | C_SC | C_NSC | D_SC | D_NSC | E_SC | E_NSC | F | G    | H
Applicable? |      |       |      |       |      |       |      |       |      |       |   | P(C) |

Key: A_SC = Class A Software, Safety-Critical | A_NSC = Class A Software, Not Safety-Critical | ... |
X - Applicable with details, read above for more | P(C) - P (Center), follow Center requirements or procedures

2. Rationale

Test results are the basis for confirming that the team has fulfilled the software requirements in the resulting software product. To make that confirmation, test results must be reviewed and evaluated using a documented, repeatable process. The team can draw sound conclusions by capturing the actual test results, comparing them to the expected results, analyzing them against pre-established criteria, and documenting the analysis and evaluation process.

It is important to document and retain elements used to generate and analyze the results for future regression testing and related test results analysis.

3. Guidance

Per NASA-STD-8719.13, NASA Software Safety Standard, 271 and NASA-GB-8719.13, NASA Software Safety Guidebook, 276 the analysis methodology for software and system test results includes the following steps (a coverage-check sketch in code follows the list):

  • Verify that software and system test data meet the requirements for verifying all functional software safety requirements and safety-critical software elements.
  • Verify that the requirements verification evaluation, inspection, or demonstration data meet the requirements stated in NASA-STD-8719.13, NASA Software Safety Standard. 271
  • Verify via test coverage analysis that all safety requirements, functions, controls, and processes have been completely covered within the unit, component, system, and acceptance level tests.
  • Verify that all software safety requirements have been tested, evaluated, inspected, or demonstrated.
  • Verify that all software safety functions are correctly performed and that the software system does not perform unintended functions.
  • Verify that all safety requirements have been satisfied.
  • Verify that all identified hazards have been eliminated or controlled to an acceptable level of risk.
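
As one illustration of the coverage-analysis step above, the following Python sketch cross-references a set of safety requirements against the test cases that claim to verify them and flags any requirement with no coverage or no passing verification. The requirement and test identifiers are hypothetical, and the sketch is not drawn from NPR 7150.2 or the safety standard.

    # Sketch only: cross-check safety requirements against recorded test results.
    # Requirement and test identifiers are hypothetical examples.
    safety_requirements = {"REQ-SAFE-001", "REQ-SAFE-002", "REQ-SAFE-003"}

    # Trace data: test case -> (requirements it verifies, recorded result)
    test_results = {
        "TC-101": ({"REQ-SAFE-001"}, "pass"),
        "TC-102": ({"REQ-SAFE-001", "REQ-SAFE-002"}, "fail"),
    }

    covered = set()    # requirements exercised by at least one test
    verified = set()   # requirements exercised by at least one passing test
    for reqs, result in test_results.values():
        covered |= reqs
        if result == "pass":
            verified |= reqs

    print("No test coverage:", sorted(safety_requirements - covered))
    print("No passing verification:", sorted(safety_requirements - verified))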

The following from IEEE-STD-1012-2004, IEEE Standard for Software Verification and Validation, 209 are also appropriate considerations when developing a test results analysis methodology:

  • Validate that software correctly implements the design.
  • Validate that the test results trace to test criteria established by the test traceability in the test planning documents.
  • Validate that the software satisfies the test acceptance criteria.
  • Verify that the software components are integrated correctly.
  • Validate that the software satisfies the system requirements.

Other elements for the evaluation methodology include:

  • Verify that the test results cover the requirements.
  • Determine if actual results match expected results.
  • Verify adequacy and completeness of test coverage.
  • Determine appropriateness of test standards and methods used.

For all levels of software testing (unit, component, integration, etc.), capture and document the items used to generate and collect the results. These items are an important part of analyzing the test results since some anomalies could have been caused by the tests themselves. The following are captured not only for results analysis but also for future regression testing (a manifest sketch in code follows the list):

  • Simulators.
  • Test drivers and stubs.
  • Test suites.
  • Test data.
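
One lightweight way to keep these items retrievable for later regression testing is to record a manifest of the retained artifacts alongside the test results. The Python sketch below is illustrative only; the file names are hypothetical, and a project would substitute its own artifact list and storage conventions.

    import hashlib
    import json
    from pathlib import Path

    def build_manifest(artifact_paths):
        """Record the path, size, and SHA-256 digest of each retained test artifact."""
        return [{"path": str(p),
                 "bytes": p.stat().st_size,
                 "sha256": hashlib.sha256(p.read_bytes()).hexdigest()}
                for p in map(Path, artifact_paths)]

    # Hypothetical artifacts retained with the test results; only files that
    # actually exist on disk are recorded.
    candidates = ["sim/orbit_sim.py", "stubs/gps_stub.c",
                  "suites/acceptance_suite.xml", "data/test_vectors.csv"]
    manifest = build_manifest([p for p in candidates if Path(p).is_file()])
    Path("test_artifact_manifest.json").write_text(json.dumps(manifest, indent=2))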

In addition to the information used to generate test results, the following may be important inputs to the results analysis:

  • Discrepancies found during testing (e.g., discrepancies between expected and actual results).
  • Disposition of discrepancies.
  • Retest history.

When performing the actual test results analysis/evaluation, consider the following practices 047:

  • Use application or domain specialists as part of the analysis/evaluation team.
  • Use checklists to assist in the analysis and ensure consistency.
  • Use automated tools to perform the analysis, when possible.
  • Capture a complete account of the procedures that were followed.
  • If a test cannot be evaluated, capture that fact and the reasons for it.
  • Plan the criteria to be used to evaluate the test results; consider the following (from Software Test Description and Results 407, USC Center for Systems and Software Engineering), as sketched in code after this list:
    • The range or accuracy over which an output can vary and still be acceptable.
    • Minimum number of combinations or alternatives of input and output conditions that constitute an acceptable test result.
    • Maximum/minimum allowable test duration, in terms of time or number of events.
    • Maximum number of interrupts, halts, or other system breaks that may occur.
    • Allowable severity of processing errors.
    • Conditions under which the result is inconclusive and retesting is to be performed.
    • Conditions under which the outputs are to be interpreted as indicating irregularities in input test data, in the test database/data files, or in test procedures.
    • Allowable indications of the control, status, and results of the test and the readiness for the next test case (may be output of auxiliary test software).
    • Additional criteria not mentioned above.
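
Several of these criteria can be expressed directly in an automated check, which also supports the earlier suggestion to use automated tools where possible. The Python sketch below is a minimal illustration assuming a single numeric output; the thresholds and field names are assumptions, not values from the cited template.

    from dataclasses import dataclass

    @dataclass
    class TestCriteria:
        output_tolerance: float   # allowable deviation of an output from its expected value
        max_duration_s: float     # maximum allowable test duration, in seconds
        max_interrupts: int       # system breaks allowed before the result is inconclusive

    @dataclass
    class TestResult:
        expected_output: float
        actual_output: float
        duration_s: float
        interrupts: int

    def evaluate(result: TestResult, criteria: TestCriteria) -> str:
        """Classify a result as pass, fail, or inconclusive against pre-established criteria."""
        if result.interrupts > criteria.max_interrupts:
            return "inconclusive"   # condition under which retesting is to be performed
        if result.duration_s > criteria.max_duration_s:
            return "fail"
        if abs(result.actual_output - result.expected_output) > criteria.output_tolerance:
            return "fail"
        return "pass"

    print(evaluate(TestResult(10.0, 10.02, 4.8, 0), TestCriteria(0.05, 5.0, 1)))  # -> pass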

When documenting the outcome of the analysis, important items to include are (a structured-record sketch in code follows the list):

  • Major anomalies.
  • Problem reports generated as a result of the test.
  • Operational difficulties (e.g., constraints or restrictions imposed by the test, aspects of the requirement under test that could not be fully verified due to test design or testbed limitations).
  • Abnormal terminations.
  • Reasons/justifications for discrepancies (e.g., caused by test cases or procedures, not a product issue).
  • Any known requirement deficiencies present in the software element tested.
  • Corrective actions taken during testing.
  • Success/failure status of the test.
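
These outcome items can also be captured in a structured record so that every evaluation is documented consistently. The Python sketch below shows one possible shape for such a record; the field names are assumptions, not a mandated report format (see SWE-118 for required test report content).

    from dataclasses import dataclass, field

    @dataclass
    class TestEvaluationRecord:
        """Illustrative record of one test results evaluation (not a mandated format)."""
        test_id: str
        status: str                                          # success/failure status of the test
        major_anomalies: list = field(default_factory=list)
        problem_reports: list = field(default_factory=list)  # e.g., problem report identifiers
        operational_difficulties: list = field(default_factory=list)
        abnormal_terminations: list = field(default_factory=list)
        discrepancy_justifications: list = field(default_factory=list)
        known_requirement_deficiencies: list = field(default_factory=list)
        corrective_actions: list = field(default_factory=list)

    record = TestEvaluationRecord(test_id="TC-102", status="fail",
                                  problem_reports=["PR-0042"])
    print(record)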

Additional guidance related to software test results may be found in the following related requirements in this Handbook:

SWE-066  Perform Testing
SWE-067  Verify Implementation
SWE-069  Document Defects and Track
SWE-118  Software Test Report

4. Small Projects

No additional guidance is available for small projects. The community of practice is encouraged to submit guidance candidates for this paragraph.

5. Resources


5.1 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN).

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

A documented lesson from the NASA Lessons Learned database notes the following:

  • Flight Software Reviews (Have test results peer reviewed.) Lesson Number 1294: "Rigorous peer reviews of spacecraft bus software resulted in good on-orbit performance. A lack of rigorous peer reviews of the instrument software have resulted in numerous on-orbit patches and changes." 548
