4.3.3 The project shall, for each planned software peer review/inspection: a. Use a checklist to evaluate the work products. NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement. Classes C through E, Safety-Critical, are labeled "P (Center) + SO." This means that the requirement applies to the safety-critical aspects of the software and that an approved Center-defined process that meets a non-empty subset of the full requirement can be used to achieve it. Class C, Not Safety-Critical, and Class G are labeled "P (Center)," meaning that an approved Center-defined process that meets a non-empty subset of the full requirement can be used to achieve it. Class F is labeled "X (not OTS)," meaning that the requirement does not apply to off-the-shelf software for this class.
Class | A_SC | A_NSC | B_SC | B_NSC | C_SC | C_NSC | D_SC | D_NSC | E_SC | E_NSC | F | G | H
Applicable? | X | X | X | X | P(C)+SO | P(C) | P(C)+SO | | P(C)+SO | | X (not OTS) | P(C) |
Key: A_SC = Class A Software, Safety-Critical | A_NSC = Class A Software, Not Safety-Critical | ... |
This requirement calls out four important best practices associated with effective inspections: a. Using a checklist supports the peer review/inspection team members by giving them a memory aid for the quality aspects they are responsible for in the document under review. Checklists also provide a concrete way for the inspection process to improve over time: defect types that continually slip through peer reviews/inspections are added to the checklist so that future teams know they are important to look for, while checklist items that no longer lead to defects being found are candidates for deletion. Kept up to date in this way, checklists provide a timely and efficient list of the types of issues on which review time should be spent. b.
Readiness and completion criteria ensure that peer review/inspection time is spent effectively and that confidence can be placed in the outcome. Readiness criteria must be satisfied before an inspection can begin. They represent the minimal set of quality characteristics that must hold before it is worthwhile to have a team of subject matter experts spend significant time understanding, assessing, and discussing the product under review/inspection; they also indicate the preparedness of the peer review/inspection team to conduct the review/inspection. Readiness criteria may specify standards and guidelines to be adhered to, set project-specific criteria such as the level of detail or a particular policy to be followed, and require the use of automated tools (such as static analysis or traceability tools). Completion criteria represent a set of measurable activities that must be completed at the end of an inspection so that statements can be made with confidence regarding the outcome. For example, completion criteria may require that all process steps have been completed and documented, that metrics have been collected, or that all major defects have been resolved and the resolutions approved. c. Action items are tracked through completion to ensure that the inspection has a positive impact on software quality. Teams that, under time pressure, identify significant numbers of defects in an inspection but never take the time to resolve them waste that effort; tracking the action items ensures that such an outcome is avoided. Beyond the impact on software quality, this practice also keeps the morale of inspection teams high: nothing is more demoralizing for a team than investing significant time in identifying and reporting software defects that are never fixed.
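The record-keeping behind practices a through c can be sketched in a few lines. This is a minimal illustration only, not NASA tooling or a prescribed process: the names (Inspection, ActionItem), the example criteria, and the completion rule shown here are all hypothetical assumptions.

```python
# Hypothetical sketch: checklist-driven inspection with readiness gating
# and action items tracked to closure. Names and rules are illustrative.
from dataclasses import dataclass, field

@dataclass
class ActionItem:
    description: str
    resolved: bool = False   # practice c: track each item to closure

@dataclass
class Inspection:
    work_product: str
    checklist: list          # practice a: quality aspects reviewers examine
    readiness: dict          # practice b: criterion -> satisfied?
    actions: list = field(default_factory=list)

    def ready(self):
        # All readiness criteria must hold before reviewers invest time.
        return all(self.readiness.values())

    def record_defect(self, description):
        # Every defect found becomes an action item to be tracked.
        self.actions.append(ActionItem(description))

    def complete(self):
        # Example completion rule: inspection was ready and every
        # action item has been resolved.
        return self.ready() and all(a.resolved for a in self.actions)

insp = Inspection(
    work_product="SRS section 3.2",
    checklist=["requirements are testable", "TBDs are tracked"],
    readiness={"document spell-checked": True, "reviewers trained": True},
)
insp.record_defect("Requirement 3.2.4 is not verifiable")
assert insp.ready() and not insp.complete()   # open action blocks completion
insp.actions[0].resolved = True
assert insp.complete()
```

The point of the sketch is that completion is defined over measurable state (criteria flags, action-item status), so "done" can be asserted rather than asserted by feel.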
d. Effective peer reviews/inspections begin with a planning phase that establishes the scope of the document under review, the time available, and other key parameters. One of the most important issues to address in this step is which perspectives or stakeholders are needed so that all quality aspects can be adequately covered. Applying a rigorous inspection process will not automatically yield an effective outcome if the relevant engineering knowledge and expertise are never brought to bear on analyzing the document.
The creators of NASA-STD-2202-93, Software Formal Inspection Standard, which is currently under development, suggest several best practices related to the use of checklists. They recommend that:
The Standard offers detailed suggestions as to what types of quality aspects need to be covered by checklists in a variety of different circumstances.
Best practices related to the establishment of readiness and completion criteria include:
Best practices related to tracking actions identified in the reviews until they are resolved include:
Best practices related to the identification of required participants include:
The Fraunhofer Center 421 in Maryland maintains a public website that collects checklists from NASA and other contexts, which can be applied to the types of work products mentioned in these requirements.
Consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources, such as templates, related to peer reviews and inspections.
Checklists for various types of inspections can be found at the Fraunhofer Center website 421. Various inspection tools can be used to reduce the effort of tracking the information associated with inspections. See the "Tools" section of the
Resources tab for a list of tools. Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. The list is informational only and does not represent an "approved tool list," nor does it represent an endorsement of any particular tool; its purpose is to provide examples of tools being used across the Agency and to help projects and Centers decide which tools to consider. Over the course of hundreds of inspections and analysis of their results, the Jet Propulsion Laboratory (JPL) has identified key lessons learned that lead to more effective inspections, including:
1. Requirements
b. Use established readiness and completion criteria.
c. Track actions identified in the reviews until they are resolved.
d. Identify required participants.
1.1 Notes
1.2 Applicability Across Classes
Key: X - Applicable | (blank) - Not Applicable | X (not OTS) - Applicable with details, read above for more | P(C) - P (Center), follow Center requirements or procedures
2. Rationale
3. Guidance
4. Small Projects
5. Resources
5.1 Tools
6. Lessons Learned
SWE-088 - Software Peer Reviews and Inspections - Checklist Criteria and Tracking
Web Resources