Guidance for software assurance personnel performing test witnessing.
...
1. Preparation for Test Witnessing
2. Activities during Test Execution
3. Activities Following Test Execution
4. Resources
5. Lessons Learned
...
1. Preparation For Test Witnessing
- Software Assurance personnel chosen to witness testing should be familiar with the following governing documents:
  - NASA-STD-8739.8 requirements related to testing, (SWEREF 278)
  - NPR-7150.2 requirements related to testing, and (SWEREF 083)
  - Any Center or project-level standards or procedures relating to test witnessing and safety-critical software, if applicable.
- Test Witnesses should have the appropriate training for the facility.
- Software Assurance planning activities associated with test witnessing should be based on the software classification and software safety-criticality of the software under test. Software Assurance should witness all software tests for the safety-critical software components.
- Software Assurance personnel witnessing the test should be familiar with the following project-specific documents:
- Project software requirements,
- Software design,
- Software bi-directional traceability,
- Open software problem reports,
- Software data loads required for the test,
- Fidelity of the test environments,
- Fidelity and maturity of any simulations,
- Emulator or models used in the software testing,
- Software configuration and software configuration management state for the software under test,
- Test plans, procedures, test cases, acceptance criteria for each test set and expected results for each test.
- Software Assurance personnel should be familiar with the operational scenarios and have some knowledge of the project domain.
- Software Assurance personnel should develop a checklist of items to be checked before, during, and after the test(s). Be sure to provide a space to record any observations.
- Software Assurance should verify that all tests listed in the test plan trace back to one or more requirements and trace back to the hazard reports, as applicable.
- All requirements should trace to one or more tests; for safety-critical software, all software features in hazard reports (e.g., mitigations, controls, warnings, barriers and other safety designs, etc.) should trace to one or more tests.
- Check to determine if planned software tests provide good coverage of the requirements under test.
- Verify that test set includes limit/range/boundary testing, operational scenario testing (day-in-the-life-of), off-nominal conditions, end-to-end tests, regression tests, stress testing, load and performance testing, as well as security testing and hazard report verifications (as applicable), etc.
- Verify that any COTS, MOTS, GOTS, Open Source, or reused code is being used within its operational assumptions and is tested just as thoroughly as the developed code.
- Some functionality can only be tested at the unit test level. For these items, Software Assurance personnel should review the unit tests and unit test results, especially for safety-critical capabilities/functions/requirements.
- If Software Assurance or the project has identified high risk, high complexity, and highly critical system components, confirm that the planned tests adequately cover those components.
- Software Assurance personnel should confirm that the project is ready to do the testing. See Test Readiness Review information in NASA-HDBK-2203, Topic 7.8 - Maturity of Life-Cycle Products at Milestone Reviews. Perform a formal or informal Test Readiness Review before any formal testing. A few reminders:
- Note the software version(s) to be tested.
- Software, test scripts, input data files, etc. to be used in the test need to be under configuration management.
- Test plans, procedures, and test cases need to be peer-reviewed and under configuration management.
- The operational environment or high-fidelity test environment needs to be ready.
- The defect tracking system needs to be in place.
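The bidirectional traceability checks above (every test traces to requirements and hazard reports; every requirement and hazard control traces to tests) can be sketched in a few lines. This is a minimal illustration only, assuming the traceability matrix has been exported as simple ID mappings; all names and data are hypothetical, not from any real project or tool.

```python
# Illustrative sketch of a bidirectional traceability gap check.
# Assumes traceability data exported as dicts of ID -> set of linked IDs;
# all identifiers below are made up for the example.

def find_traceability_gaps(requirements, tests, hazard_controls):
    """Return (untested requirements, untested hazard controls, orphan tests).

    requirements    : dict mapping requirement ID -> set of test IDs
    tests           : dict mapping test ID -> set of requirement IDs
    hazard_controls : dict mapping hazard-control ID -> set of test IDs
    """
    untested_reqs = {r for r, linked in requirements.items() if not linked}
    untested_controls = {h for h, linked in hazard_controls.items() if not linked}
    orphan_tests = {t for t, linked in tests.items() if not linked}
    return untested_reqs, untested_controls, orphan_tests

reqs = {"SRS-101": {"TC-1"}, "SRS-102": set()}
tests = {"TC-1": {"SRS-101"}, "TC-2": set()}
hazards = {"HR-7-control-1": {"TC-1"}}

print(find_traceability_gaps(reqs, tests, hazards))
# ({'SRS-102'}, set(), {'TC-2'})
```

In practice the input mappings would come from the project's requirements management or traceability tool; the point is that each direction of the trace must be checked independently, since a complete requirement-to-test trace does not guarantee that every test traces back to a requirement.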
2. Activities During Test Execution
Depending on the type of test results expected, the following may happen after test execution:
3. Activities Following Test Execution
...
4. Resources
4.1 References
...
...
...
SWEREFs NOT called out in text but listed as germane: none
SWEREFs called out in text: 083, 278
4.2 Tools
...
5. Lessons Learned

5.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this topic.

5.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this topic.


