

SWE-068 - Evaluate Test Results

1. Requirements

4.5.5 The project manager shall evaluate test results and record the evaluation.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

SWE-068 - Last used in rev NPR 7150.2D

Rev | SWE Statement
A | 3.4.4 The project shall evaluate test results and document the evaluation.
Difference between A and B | No change
B | 4.5.5 The project manager shall evaluate test results and record the evaluation.
Difference between B and C | No change
C | 4.5.5 The project manager shall evaluate test results and record the evaluation.
Difference between C and D | No change
D | 4.5.5 The project manager shall evaluate test results and record the evaluation.



1.3 Applicability Across Classes

Class       |      A      |      B      |      C      |      D      |      E      |      F      
Applicable? |             |             |             |             |             |             

Key:    - Applicable | - Not Applicable


2. Rationale

Test results are the basis for confirming that the team has fulfilled the software requirements in the resulting software product. To make such decisions, test results must be reviewed and evaluated using a documented, repeatable process. The team can derive quality conclusions by capturing the actual test results, comparing them to expected results, analyzing those results against pre-established criteria, and documenting that analysis/evaluation process.

It is important to document and retain elements used to generate and analyze the results for future regression testing and related test results analysis.

3. Guidance

3.1 Evaluation Of Software Testing

Evaluation of software testing is a complicated activity, and it is further complicated by the amount of data produced by the software test. All software test data should be evaluated and compared to the expected results for the test. The evaluation process and the evaluation tools used should be recorded for future assessments; if a software test evaluation tool has an error, the evaluation team may misinterpret the software test data. The process for software test evaluations should be repeatable, and repeatable tests are a key element of evaluating the test results. The context of the test (including tools, compilers, code under test, procedures, etc.) needs to be recorded to provide context for the evaluation and its limitations.
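As an illustration only, the sketch below shows one way a team might automate the comparison of actual versus expected results and record the evaluation context alongside the verdict. The file names, fields, and tolerance value are hypothetical assumptions, not items prescribed by this requirement; recording the host, tools, and tolerance with the findings is what makes the evaluation repeatable and auditable later.

```python
"""Minimal sketch of a repeatable test-result evaluation step.
All names, fields, and the tolerance are illustrative assumptions."""
import json
import platform
from datetime import datetime, timezone

TOLERANCE = 0.01  # example pre-established acceptance tolerance


def evaluate(actual: dict, expected: dict) -> list:
    """Compare actual outputs to expected outputs and flag discrepancies."""
    findings = []
    for key, expected_value in expected.items():
        actual_value = actual.get(key)
        if actual_value is None:
            findings.append({"output": key, "status": "missing"})
        elif abs(actual_value - expected_value) > TOLERANCE:
            findings.append({"output": key, "status": "out_of_tolerance",
                             "expected": expected_value, "actual": actual_value})
    return findings


def record_evaluation(findings: list, context: dict,
                      path: str = "evaluation_record.json") -> None:
    """Record the evaluation and the context needed to repeat it."""
    record = {
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "evaluation_host": platform.platform(),
        "context": context,          # tools, compiler, code version, procedures
        "tolerance": TOLERANCE,
        "findings": findings,
        "result": "pass" if not findings else "fail",
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)


if __name__ == "__main__":
    expected = {"thrust_margin": 1.25, "response_time_s": 0.50}
    actual = {"thrust_margin": 1.24, "response_time_s": 0.62}
    context = {"test_tool": "example-harness 2.1", "code_under_test": "fsw v3.4.1"}
    record_evaluation(evaluate(actual, expected), context)
```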

Software testing is required regardless of project size. The level of rigor applied to the tests can be determined by the risk posture of the project. Safety-critical code needs to be tested and is the first priority; areas of higher risk or areas determined to be critical to success should take the next priority. Unit, coverage, and other tests can be used to support higher-level testing, but this should be done with caution. While test articles (models, simulators, etc.) may or may not be used, testing on the actual “flight” hardware may damage equipment, so its use should be considered carefully (see SWE-073 - Platform or Hi-Fidelity Simulations). In all cases, maintaining records and the ability to repeat the tests in the same configuration is required to prove that issues found are resolved.

Per NASA-GB-8719.13, NASA Software Safety Guidebook   276,  the analysis methodology for software and system test results includes the following steps: 

  • Verify that software and system test data meet the requirements for verifying all functional software safety requirements and safety-critical software elements. 
  • Verify via test coverage analysis that all safety requirements, functions, controls, and processes have been completely covered within the unit, component, system, and acceptance level tests. 
  • Verify that all software safety requirements have been tested, evaluated, inspected, or demonstrated. 
  • Verify that all software safety functions are correctly performed and that the software system does not perform unintended functions. 
  • Verify that all safety requirements have been satisfied. 
  • Verify that all identified hazards have been eliminated or controlled to an acceptable level of risk. 
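The coverage-oriented steps above lend themselves to a simple automated cross-check. The sketch below assumes a hypothetical trace from requirement IDs to test IDs and verdicts; the field names and requirement IDs are illustrative, not defined by the guidebook.

```python
"""Minimal sketch of a coverage cross-check between safety requirements
and test results; inputs are hypothetical stand-ins for project artifacts."""
import csv
from collections import defaultdict


def load_trace(path: str) -> dict:
    """Map requirement ID -> list of (test_id, verdict) from a CSV trace export."""
    trace = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            trace[row["requirement_id"]].append((row["test_id"], row["verdict"]))
    return trace


def coverage_report(safety_requirement_ids: list, trace: dict) -> dict:
    """Classify each safety requirement as passed, failed, or untested."""
    report = {"passed": [], "failed": [], "untested": []}
    for req in safety_requirement_ids:
        results = trace.get(req, [])
        if not results:
            report["untested"].append(req)
        elif all(verdict == "pass" for _, verdict in results):
            report["passed"].append(req)
        else:
            report["failed"].append(req)
    return report


if __name__ == "__main__":
    # In practice the trace would come from load_trace("requirement_to_test_trace.csv");
    # an in-memory example is used here so the sketch runs standalone.
    trace = {"SR-001": [("TC-010", "pass")], "SR-002": [("TC-011", "fail")]}
    print(coverage_report(["SR-001", "SR-002", "SR-017"], trace))
```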

3.2 Developing A Test Results Analysis Methodology

The following from IEEE-STD-1012-2004, IEEE Standard for Software Verification and Validation, 209 are also appropriate considerations when developing a test results analysis methodology: 

  • Validate that software correctly implements the design. 
  • Validate that the test results trace to test criteria established by the test traceability in the test planning documents. 
  • Validate that the software satisfies the test acceptance criteria. 
  • Verify that the software components are integrated correctly. 
  • Validate that the software satisfies the system requirements. 

Other elements for the evaluation methodology include: 

  • Verify that the test results cover the requirements. 
  • Determine if actual results match expected results. 
  • Verify adequacy and completeness of test coverage. 
  • Determine the appropriateness of test standards and methods used. 

3.3 Items Used To Generate And Collect The Results

For all levels of software testing (unit, component, integration, etc.) capture and record items used to generate and collect the results. These items are an important part of analyzing the test results since some anomalies could have been caused by the tests themselves. The following are captured, not only for results analysis but for future regression testing: 

  • Simulators. 
  • Test drivers and stubs. 
  • Test suites. 
  • Test data. 
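One lightweight way to capture these items is a manifest that records each artifact and a content hash, so a later regression run can confirm it is using the same simulators, drivers, suites, and data. The sketch below is illustrative; the paths, categories, and output file name are assumptions.

```python
"""Minimal sketch of a manifest recording the items used to generate test
results so a run can be repeated later; paths and fields are hypothetical."""
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, for later integrity checks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(items: dict) -> dict:
    """items maps a category (simulator, driver, suite, data) to file paths."""
    return {
        category: [{"path": str(p), "sha256": sha256_of(Path(p))} for p in paths]
        for category, paths in items.items()
    }


if __name__ == "__main__":
    # Stand-in artifact so the sketch runs standalone; real projects would point
    # at their actual simulators, test drivers/stubs, suites, and data files.
    Path("nominal_profile.csv").write_text("time,thrust\n0,0.0\n")
    manifest = build_manifest({"test_data": ["nominal_profile.csv"]})
    Path("test_run_manifest.json").write_text(json.dumps(manifest, indent=2))
```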

3.4 Other Inputs To Analysis

In addition to the information used to generate test results, the following may be important inputs to the analysis of the result: 

  • Discrepancies found during testing (e.g., discrepancies between expected and actual results). 
  • Disposition of discrepancies. 
  • Retest history. 

3.5 Analysis Practices

When performing the actual test results analysis/evaluation, consider the following practices 047: 

  • Use application or domain specialists as part of the analysis/evaluation team. 
  • Use checklists to assist in the analysis and ensure consistency. 
  • Use automated tools to perform the analysis, when possible. 
  • Capture a complete account of the procedures that were followed. 
  • If a test cannot be evaluated, capture that fact and the reasons for it. 
  • Plan the criteria to be used to evaluate the test results (a sketch capturing such criteria as data appears after this list); consider the following, from a 1997 University of Southern California Center for System and Software Engineering project file entitled “Software Test Description and Results”: 
    • The range or accuracy over which an output can vary and still be acceptable. 
    • The minimum number of combinations or alternatives of input and output conditions that constitute an acceptable test result. 
    • Maximum/minimum allowable test duration, in terms of time or number of events. 
    • The maximum number of interrupts, halts, or other system breaks that may occur. 
    • Allowable severity of processing errors. 
    • Conditions under which the result is inconclusive and retesting is to be performed. 
    • Conditions under which the outputs are to be interpreted as indicating irregularities in the input test data, in the test database/data files, or in the test procedures. 
    • Allowable indications of the control, status, and results of the test, and the readiness for the next test case (these may be the output of auxiliary test software). 
    • Additional criteria not mentioned above. 
  • Any information about the setup of the test (including versions of tools, hardware, simulations, etc.) needed to make the test repeatable and to provide context on assumptions and limitations.
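As a sketch of the criteria-planning item above, the example below captures a few of the listed criteria as a small data structure and applies them to a single test run summary. The field names and threshold values are illustrative assumptions, not values drawn from the referenced project file.

```python
"""Minimal sketch of pre-established evaluation criteria captured as data
and applied to a test run summary; fields and thresholds are illustrative."""
from dataclasses import dataclass


@dataclass
class EvaluationCriteria:
    output_tolerance: float        # allowable variation in outputs
    max_duration_s: float          # maximum allowable test duration
    max_interrupts: int            # maximum system breaks permitted
    max_error_severity: int        # highest tolerable processing-error severity


def judge(run: dict, criteria: EvaluationCriteria) -> str:
    """Return 'pass', 'fail', or 'inconclusive' for one test run summary."""
    if run["duration_s"] > criteria.max_duration_s:
        return "inconclusive"      # overran its window; retest required
    if run["interrupts"] > criteria.max_interrupts:
        return "fail"
    if run["worst_error_severity"] > criteria.max_error_severity:
        return "fail"
    if run["max_output_deviation"] > criteria.output_tolerance:
        return "fail"
    return "pass"


if __name__ == "__main__":
    criteria = EvaluationCriteria(0.02, 300.0, 0, 2)
    run = {"duration_s": 284.0, "interrupts": 0,
           "worst_error_severity": 1, "max_output_deviation": 0.015}
    print(judge(run, criteria))    # -> pass
```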

When recording the outcome of the analysis, important items to include are: 

  • Major anomalies. 
  • Problem reports generated as a result of the test. 
  • Operational difficulties (e.g., constraints or restrictions imposed by the test, aspects of the requirement under test that could not be fully verified due to test design or testbed limitations). 
  • Abnormal terminations. 
  • Reasons/justifications for discrepancies (e.g., caused by test cases or procedures, not a product issue). 
  • Any known requirement deficiencies present in the software element tested. 
  • Corrective actions taken during testing. 
  • Success/failure status of the test. 
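The sketch below shows one possible structured record for these outcome items so they can be stored and queried consistently. The field names mirror the list above, and the identifiers in the usage example are hypothetical.

```python
"""Minimal sketch of a structured record for a test-results evaluation outcome;
field names mirror the list above and are illustrative only."""
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class TestEvaluationRecord:
    test_id: str
    status: str                                                 # "pass" / "fail"
    major_anomalies: List[str] = field(default_factory=list)
    problem_reports: List[str] = field(default_factory=list)    # PR identifiers
    operational_difficulties: List[str] = field(default_factory=list)
    abnormal_terminations: int = 0
    discrepancy_justifications: List[str] = field(default_factory=list)
    known_requirement_deficiencies: List[str] = field(default_factory=list)
    corrective_actions: List[str] = field(default_factory=list)


if __name__ == "__main__":
    record = TestEvaluationRecord(
        test_id="INT-042",
        status="fail",
        major_anomalies=["Telemetry dropout at T+120 s"],
        problem_reports=["PR-1187"],
        corrective_actions=["Re-ran with corrected stub configuration"],
    )
    print(json.dumps(asdict(record), indent=2))
```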

Additional guidance related to software test results may be found in the following related requirements in this Handbook: 

NPR 7150.2 - Section 4.5 SWEs including: 

3.6 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.7 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

4. Small Projects

Software testing is required regardless of project size. The level of rigor applied to the tests can be determined by the risk posture of the project. Safety-critical code needs to be tested and is the first priority; areas of higher risk or areas determined to be critical to success should take the next priority. Unit, coverage, and other tests can be used to support higher-level testing, but this should be done with caution. While test articles (models, simulators, etc.) may or may not be used, testing on the actual “flight” hardware may damage equipment, so its use should be considered carefully (see SWE-073 - Platform or Hi-Fidelity Simulations). In all cases, maintaining records and the ability to repeat the tests in the same configuration is required to prove that issues found are resolved.

5. Resources

5.1 References


5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

6.1 NASA Lessons Learned

A documented lesson from the NASA Lessons Learned database notes the following:

  • Flight Software Reviews (Have test results peer-reviewed.) Lesson Number 1294 548: "Rigorous peer reviews of spacecraft bus software resulted in good on-orbit performance. A lack of rigorous peer reviews of the instrument software has resulted in numerous on-orbit patches and changes."

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-068 - Evaluate Test Results
4.5.5 The project manager shall evaluate test results and record the evaluation.

7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Confirm that test results are assessed and recorded. 

2. Confirm that the project documents software non-conformances in a tracking system.

3. Confirm that test results are sufficient verification artifacts for the hazard reports.

7.2 Software Assurance Products

  • SA assessment of verification adequacy for hazard reports.


    Objective Evidence

    • Software test report(s)
    • Software problem report or defect data.
    • Software test coverage metric data.

    Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

    • Observations, findings, issues, and risks found by the SA/safety person; these may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., Risk Log).
    • Meeting minutes with attendance lists, SA meeting notes, or assessments of the activities, recorded in the project repository.
    • Status report, email, or memo containing statements that confirmation has been performed, with the date (a checklist of confirmations could be used to record when each confirmation has been done).
    • Signatures on SA reviewed or witnessed products or activities, or
    • Status report, email or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
      • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
      • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.
    • The specific products listed in the Introduction of topic 8.16 are also objective evidence, as are the examples listed above.

7.3 Metrics

  • # of software work product Non-Conformances identified by life cycle phase over time
  • # of safety-related Non-Conformances
  • Total # of Non-Conformances over time (Open, Closed, # of days Open, and Severity of Open)
  • # of Non-Conformances in the current reporting period (Open, Closed, Severity)
  • # of safety-critical requirement verifications vs. total # of safety-critical requirement verifications completed
  • # of Open issues vs. # of Closed over time
  • # of tests completed vs. total # of tests
  • # of Hazards containing software that have been tested vs. total # of Hazards containing software
  • # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
  • # of tests executed vs. # of tests completed
  • # of Non-Conformances identified while confirming hazard controls are verified through test plans/procedures/cases
  • # of hazards with completed test procedures/cases vs. total number of hazards over time
  • # of safety-related non-conformances identified by life cycle phase over time
  • # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
  • Total # of tests completed vs. number of test results evaluated and signed off

See also Topic 8.18 - SA Suggested Metrics
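As an illustration, a few of these metrics can be derived directly from the project's non-conformance records. The sketch below assumes a simple record format (opened date, closed date, safety flag) as a stand-in for whatever tracking system the project actually uses.

```python
"""Minimal sketch deriving a few of the metrics above from non-conformance
records; the record fields are assumptions, not a defined NASA data format."""
from datetime import date


def nonconformance_metrics(records: list, period_start: date) -> dict:
    """Count open, closed, and safety-related non-conformances, plus those opened this period."""
    return {
        "total_open": sum(1 for r in records if r["closed"] is None),
        "total_closed": sum(1 for r in records if r["closed"] is not None),
        "safety_related": sum(1 for r in records if r["safety_related"]),
        "opened_this_period": sum(1 for r in records if r["opened"] >= period_start),
    }


if __name__ == "__main__":
    records = [
        {"opened": date(2024, 3, 2), "closed": None, "safety_related": True},
        {"opened": date(2024, 1, 15), "closed": date(2024, 2, 1), "safety_related": False},
    ]
    print(nonconformance_metrics(records, period_start=date(2024, 3, 1)))
```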

7.4 Guidance

Software assurance needs to review all software test reports and assess whether the results of the test(s) have been accurately captured. Software assurance will confirm that any discrepancies/non-conformances found during the test(s) are fully described in the test report and documented in the project tracking system. The discrepancies/non-conformances need to be addressed and resolutions agreed upon before software assurance signs off on test completion. See also Topic 8.57 - Testing Analysis for additional details on analysis by SA. 

Software assurance will review the test reports, confirm that all software safety-related verifications in the Hazard Report or Safety Package have been tested, and ensure they are performed according to the test plan, test procedures, and/or safety plan. Testing of these features should include correct and safe operation in known operational and nominal configurations as well as the ability to handle off-nominal conditions and transition to a safe state. Testing should also include exercising the software under load, stress, and off-nominal conditions, including the operation of software controls and mitigations in various modes and states. Any discrepancies or non-conformances should be documented and addressed before the closure of the hazard verification.
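As a sketch of this confirmation, the example below checks that every software verification listed in a hazard report maps to a passing, SA signed-off test result and reports any gaps. The input structures are hypothetical stand-ins for the project's hazard reports and test records.

```python
"""Minimal sketch of a software assurance check that hazard-report software
verifications map to evaluated, passing tests; inputs are hypothetical."""


def unverified_hazard_controls(hazard_reports: list, test_results: dict) -> list:
    """Return (hazard_id, verification_id) pairs lacking a passing, signed-off test."""
    gaps = []
    for hazard in hazard_reports:
        for verification in hazard["software_verifications"]:
            result = test_results.get(verification)
            if result is None or result["verdict"] != "pass" or not result["sa_signed_off"]:
                gaps.append((hazard["hazard_id"], verification))
    return gaps


if __name__ == "__main__":
    hazards = [{"hazard_id": "HR-07",
                "software_verifications": ["VER-101", "VER-102"]}]
    results = {"VER-101": {"verdict": "pass", "sa_signed_off": True}}
    print(unverified_hazard_controls(hazards, results))   # -> [('HR-07', 'VER-102')]
```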

See also Topic 8.01 - Off Nominal Testing.

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

