SWE-062 - Unit Test

1. Requirements

4.4.5 The project manager shall unit test the software code.

1.1 Notes

For safety-critical software, the unit testing should follow the requirement established in 3.7.4 of this document.

1.2 History

SWE-062 - Last used in rev NPR 7150.2D

Rev   SWE Statement

A     3.3.4 The project shall ensure that the software code is unit tested per the plans for software testing.

Difference between A and B: No change.

B     4.4.5 The project manager shall unit test the software code per the plans for software testing.

Difference between B and C: Removed "per the plans for software testing".

C     4.4.5 The project manager shall unit test the software code.

Difference between C and D: No change.

D     4.4.5 The project manager shall unit test the software code.



1.3 Applicability Across Classes

Class          A      B      C      D      E      F

Applicable?

Key:    - Applicable | - Not Applicable


2. Rationale

Unit testing is the process of testing the range of inputs to a unit to ensure that only the intended outputs are produced. By doing this at the lowest level, fewer issues will be discovered when the components are later integrated and tested as a whole. Therefore, during unit testing, it is important to check the maximum and minimum values, invalid values, empty and corrupt data, etc. for each input and output to ensure the unit properly handles the data (processes or rejects it).
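
For example, the boundary and invalid-value checks described above can be written as ordinary unit test cases. The following Python sketch is illustrative only; the clamp() function, its limits, and the test values are hypothetical stand-ins for a real unit under test:

  import unittest

  def clamp(value, low, high):
      """Hypothetical unit under test: limit value to the range [low, high]."""
      if low > high:
          raise ValueError("low must not exceed high")
      return max(low, min(value, high))

  class ClampBoundaryTests(unittest.TestCase):
      def test_minimum_boundary(self):
          self.assertEqual(clamp(-10, -10, 10), -10)  # exact minimum value

      def test_maximum_boundary(self):
          self.assertEqual(clamp(10, -10, 10), 10)    # exact maximum value

      def test_out_of_range_input(self):
          self.assertEqual(clamp(-11, -10, 10), -10)  # below-minimum input is handled

      def test_invalid_range_rejected(self):
          with self.assertRaises(ValueError):         # invalid configuration is rejected
              clamp(0, 10, -10)

  if __name__ == "__main__":
      unittest.main()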

Unit testing can be described as the confirmation that the unit performs the capability assigned to it, correctly interfaces with other units and data, and represents a faithful implementation of the unit design. 

Ensuring that developers perform unit testing following written test plans helps build quality into the software from the beginning and allows bugs to be corrected early in the project life cycle when such corrections cost the least to the project.

3. Guidance

3.1 Unit Test

The project manager shall ensure that the unit test results are repeatable. See SWE-186 - Unit Test Repeatability.
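
One common threat to repeatability is hidden dependence on wall-clock time or unseeded random data. As a minimal sketch (the is_stale() unit and its threshold are hypothetical), injecting a fixed timestamp instead of reading the clock inside the unit makes the result identical on every run:

  import unittest
  from datetime import datetime, timezone

  def is_stale(sample_time, now, max_age_s=5.0):
      """Hypothetical unit under test: flag data older than max_age_s seconds."""
      return (now - sample_time).total_seconds() > max_age_s

  class StalenessTests(unittest.TestCase):
      def test_old_sample_is_flagged(self):
          # A fixed "now" is passed in rather than calling datetime.now(),
          # so the verdict does not depend on when or how fast the test runs.
          now = datetime(2024, 1, 1, 12, 0, 10, tzinfo=timezone.utc)
          sample = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
          self.assertTrue(is_stale(sample, now))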

If the code is safety-critical, the unit tests need to include Modified Condition/Decision Coverage (MC/DC). See SWE-219 - Code Coverage for Safety Critical Software.
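
MC/DC requires showing that each condition in a decision independently affects the decision's outcome. As an illustrative sketch only (a qualified coverage tool is still needed to measure MC/DC, and the vent_allowed() decision below is hypothetical), a minimal MC/DC set for the decision (armed and pressure_high) or override needs four cases:

  import unittest

  def vent_allowed(armed, pressure_high, override):
      """Hypothetical safety-critical decision with three conditions."""
      return (armed and pressure_high) or override

  class VentAllowedMCDCTests(unittest.TestCase):
      # Each pair of cases below differs in exactly one condition and flips
      # the outcome, demonstrating that condition's independent effect.
      def test_case_1_baseline_true(self):
          self.assertTrue(vent_allowed(True, True, False))

      def test_case_2_armed_flips_outcome(self):      # pairs with case 1
          self.assertFalse(vent_allowed(False, True, False))

      def test_case_3_pressure_flips_outcome(self):   # pairs with case 1
          self.assertFalse(vent_allowed(True, False, False))

      def test_case_4_override_flips_outcome(self):   # pairs with case 3
          self.assertTrue(vent_allowed(True, False, True))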

Per IEEE Std 610.12-1990 222, IEEE Standard Glossary of Software Engineering Terminology, a "unit" is defined as:

  1. A separately testable element specified in the design of a computer software component. 
  2. A logically separable part of a computer program.
  3. A software component that is not subdivided into other components.

Given the low-level nature of a unit of code, the person best able to fully test that unit is the developer who created it. Unit tests should be performed with full insight into the code under test and should include off-nominal and error tests. Unit tests should test more than just the requirements of the unit of code. See also Topic 8.01 - Off Nominal Testing and Topic 7.06 - Software Test Estimation and Testing Levels.
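
Off-nominal and error tests deliberately feed the unit data it should reject. As a hedged sketch (parse_packet() and its 4-byte format are hypothetical), such tests confirm that corrupt, empty, or mistyped inputs produce a controlled failure rather than silently bad data:

  import unittest

  def parse_packet(raw):
      """Hypothetical unit under test: decode a 4-byte big-endian counter."""
      if not isinstance(raw, (bytes, bytearray)):
          raise TypeError("raw must be bytes")
      if len(raw) != 4:
          raise ValueError("packet must be exactly 4 bytes")
      return int.from_bytes(raw, "big")

  class PacketOffNominalTests(unittest.TestCase):
      def test_empty_input_rejected(self):
          with self.assertRaises(ValueError):
              parse_packet(b"")

      def test_truncated_input_rejected(self):
          with self.assertRaises(ValueError):
              parse_packet(b"\x00\x01")

      def test_wrong_type_rejected(self):
          with self.assertRaises(TypeError):
              parse_packet("0001")  # a string, not bytes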

3.2 Prepare For Unit Testing

Projects ensure that the appropriate test environment, test materials, and personnel training (SWE-017 - Project and Software Training) are in place and then conduct unit tests per the approved plans (5.10 - STP - Software Test Plan), according to the schedule (SWE-016 - Software Schedule), and with proper monitoring per the software assurance plan, making sure that:

  • Criteria for a successful test are established before the test.
  • The test environment represents the inputs, outputs, and stimuli the unit will experience in operation.
  • Weaknesses or differences between the unit test environment and the actual target environment are captured.
  • The approved plans for unit testing are followed:
    • Unit test results are captured (a minimal results-capture sketch follows this list).
    • Issues are identified and documented (some minor issues, such as typos, as defined by the project, may simply be corrected without documentation).
    • Unit test issues are corrected; these may include:
      • Issues found in the code.
      • Issues found in test instruments (e.g., scripts, data, procedures).
      • Issues found in testing tools (e.g., setup, configuration).
    • Unit test corrections are captured (for root cause analysis, as well as proof that the unit test plans were followed).
  • Unit test results are evaluated by someone other than the tester to confirm the results, as applicable and practical; evaluation results are captured.
  • Unit test data, scripts, test cases, procedures, test drivers, and test stubs are captured for reference and any required regression testing.
  • Notes captured in software engineering notebooks or other documents are captured for reference.
  • Objective evidence that unit tests were completed and unit test objectives met is captured in the Software Development Folders (SDFs) or other appropriate project locations as called out in the project documentation (e.g., 5.08 - SDP-SMP - Software Development - Management Plan, 5.06 - SCMP - Software Configuration Management Plan).
  • Unit test metrics are captured, as appropriate and defined for the project.
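
One lightweight way to capture unit test results as objective evidence is to write the run log to a file that can be archived in the SDF. The sketch below uses only the Python standard library and assumes, for illustration, that the tests live under a tests/ directory and that unit_test_results.txt is an acceptable artifact name:

  import sys
  import unittest

  if __name__ == "__main__":
      # Discover every test module under tests/ and record the verbose run
      # log in a file suitable for archiving as unit test evidence.
      suite = unittest.TestLoader().discover("tests")
      with open("unit_test_results.txt", "w") as log:
          result = unittest.TextTestRunner(stream=log, verbosity=2).run(suite)
      # A nonzero exit status lets a continuous integration job fail the
      # build when a unit test regresses.
      sys.exit(0 if result.wasSuccessful() else 1)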

Per NASA-GB-8719.13 276, NASA Software Safety Guidebook, software assurance is to verify that unit testing and data verification are completed before the unit is integrated. Either software assurance or Independent Verification and Validation (IV&V) personnel "Verify unit tests adequately test the software and are performed." When less formal confirmation of unit testing is needed, a software team lead or other designated project member may verify the completeness and correctness of the testing by comparing the results to the test plan to ensure that all logic paths have been tested and by verifying that the test results are accurate.

Unit tests provide a mechanism for developers to ensure that their code is working the way it is intended to. The two types of unit tests are automated and manual. Unit tests that run automatically are recommended because they can be integrated into a regression suite of tests. Unit testing tools and some integrated development environments (IDEs) can auto-generate unit tests based on the code. These tools provide a quick way to generate unit tests but may not completely exercise the unit of code. Rerun unit tests each time the unit is updated to ensure the code continues to work as expected. When continuous integration is part of the life cycle, all of the unit tests are rerun each time the code is updated to ensure only working code is integrated. See SWE-066 - Perform Testing.

Documented test results, results evaluations, issues, problem reports, corrections, and tester notes can all serve as evidence that unit tests were completed. Comparing those documents to the software test plans for unit testing can ensure the tests were completed following those documented procedures.

Make sure evidence of all test passes is captured.

See also SWE-191 - Software Regression Testing

NASA-GB-8719.13, NASA Software Safety Guidebook, further states in the section on safety-critical unit test plans that "documentation is required to prove adequate safety testing of the software." Therefore, unit test results can play an important role in supporting reviews of safety-critical software.

See also SWE-219 - Code Coverage for Safety Critical Software, SWE-157 - Protect Against Unauthorized Access, SWE-190 - Verify Code Coverage

Consult Center PALs for Center-specific guidance and resources related to unit testing.

3.3 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.4 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

4. Small Projects

Projects with limited budgets and personnel may choose to perform unit testing or capture unit test results and artifacts in a less formal manner than projects with greater resources. Regardless of the formality of the procedures used, the software test plans for unit testing need to describe the test environment/setup, the results captured, simple documentation procedures, and compliance checks against the procedures. Some Centers have tailored lean unit test procedures and support tools specifically for small projects.

5. Resources

5.1 References

5.2 Tools


Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

6.1 NASA Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to unit testing:

  • MPL Uplink Loss Timer Software/Test Errors (1998) (Plan to test against a full range of parameters.) Lesson Number 0939 530: Lesson Learned No. 2 states: "Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
  • Computer Software/Configuration Control/Verification and Validation (V&V) (Unit-level V&V needed for auto-code and auto-code generators.) Lesson Number 1023 533: "The use of the Matrix X auto code generator for ISS software can lead to serious problems if the generated code and Matrix X itself are not subjected to effective configuration control or the products are not subjected to unit-level V&V. These problems can be exacerbated if the code generated by Matrix X is modified by hand."

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-062 - Unit Test
4.4.5 The project manager shall unit test the software code.

7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Confirm that the project successfully executes the required unit tests, particularly those testing safety-critical functions.

2. Confirm that the project addresses or otherwise tracks to closure errors, defects, or problem reports found during unit testing.

7.2 Software Assurance Products

  • None at this time.


    Objective Evidence

    • Unit test results.
    • Software problem or defect report findings related to issues identified in unit testing.

    Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

    • Observations, findings, issues, risks found by the SA/safety person and may be expressed in an audit or checklist record, email, memo or entry into a tracking system (e.g. Risk Log).
    • Meeting minutes with attendance lists or SA meeting notes or assessments of the activities and recorded in the project repository.
    • Status report, email or memo containing statements that confirmation has been performed with date (a checklist of confirmations could be used to record when each confirmation has been done!).
    • Signatures on SA reviewed or witnessed products or activities, or
    • Status report, email or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
      • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
      • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.
    • The specific products listed in the Introduction of 8.16 are also objective evidence, in addition to the examples listed above.

7.3 Metrics

  • # of planned unit test cases vs. # of actual unit test cases completed
  • # of tests completed vs. total # of tests
  • # of tests executed vs. # of tests completed
  • # of software work product Non-Conformances identified by life cycle phase over time
  • # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
  • # of Requirements tested vs. total # of Requirements
  • Total # of Non-Conformances over time (Open, Closed, # of days Open, and Severity of Open)
  • # of Non-Conformances in the current reporting period (Open, Closed, Severity)
  • # of safety-related non-conformances identified by life cycle phase over time
  • # of Closed action items vs. # of Open action items
  • # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
  • # of detailed software requirements tested to date vs. total # of detailed software requirements
  • # of safety-critical requirement verifications vs. total # of safety-critical requirement verifications completed
  • # of Open issues vs. # of Closed over time
  • # of Hazards containing software that have been tested vs. total # of Hazards containing software
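
Many of these metrics reduce to simple ratios tracked per reporting period. As a minimal sketch with hypothetical counts (the values below are illustrative only, not project data):

  # Hypothetical counts for one reporting period (illustrative values only).
  planned_cases = 120
  completed_cases = 96
  requirements_total = 40
  requirements_tested = 33

  def percent(part, whole):
      """Return part as a percentage of whole, guarding against divide-by-zero."""
      return 100.0 * part / whole if whole else 0.0

  print(f"Unit test cases completed: {percent(completed_cases, planned_cases):.1f}%")
  print(f"Requirements tested:       {percent(requirements_tested, requirements_total):.1f}%")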

7.4 Guidance

Software assurance will confirm that a thorough set of unit tests is included as part of the software test plan and that the developers are executing those tests as planned. It is particularly important to include any code performing a safety-critical function in the unit tests, since that is often the only place those functions can be tested well. When confirming the tests are being run, be aware that the unit tests must also be repeatable, per SWE-186 - Unit Test Repeatability. See the SA guidance on SWE-186 for the information needed to make the tests repeatable. The software guidance provides the following list of activities that should occur during unit testing:

  • Criteria for a successful test are established before the test.
  • The test environment represents the inputs, outputs, and stimuli the unit will experience in operation.
  • Weaknesses or differences between the unit test environment and the actual target environment are captured.
  • The approved plans for unit testing are followed:
    • Unit test results are captured.
    • Issues are identified and documented (some minor issues, such as typos, as defined by the project, may simply be corrected without documentation).
    • Unit test issues are corrected; these may include:
      • Issues found in the code.
      • Issues found in test instruments (e.g., scripts, data, procedures).
      • Issues found in testing tools (e.g., setup, configuration).
    • Unit test corrections are captured (for root cause analysis, as well as proof that the unit test plans were followed).
  • Unit test results are evaluated by someone other than the tester to confirm the results, as applicable and practical; evaluation results are captured.
  • Unit test data, scripts, test cases, procedures, test drivers, and test stubs are captured for reference and any required regression testing.
  • Notes captured in software engineering notebooks or other documents are captured for reference.
  • Objective evidence that unit tests were completed and unit test objectives met is captured in the Software Development Folders (SDFs) or other appropriate project locations as called out in the project documentation (e.g., 5.08 - SDP-SMP - Software Development - Management Plan, 5.06 - SCMP - Software Configuration Management Plan).
  • Unit test metrics are captured, as appropriate and defined for the project.

Finally, software assurance will confirm that any errors, defects, etc., found during the unit tests are addressed or tracked to closure. Unit tests should be rerun to verify that the changes made to the software fixed the problem. Any unit tests for safety functions may need to be rerun to make sure the safety-critical behavior has not been affected.

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

