This version of SWEHB is associated with NPR 7150.2B.
5.3.4 The project manager shall, for each planned software peer review or software inspection, record basic measurements.
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 Applicability Across Classes
If Class C or Class D software is safety critical, this requirement applies to the safety-critical aspects of the software.
Classes F and G are labeled with an X and “not OTS,” which indicates the project is required to meet this requirement with the exception of off-the-shelf software.
Class A B C CSC D DSC E F G H Applicable?
Key: - Applicable | - Not Applicable
A & B = Always Safety Critical; C & D = Not Safety Critical; CSC & DSC = Safety Critical; E - H = Never Safety Critical.
As with other engineering practices, it is important to monitor defects, pass/fail results, and effort. This is necessary to ensure that peer reviews and software inspections are being used in an appropriate way as part of the overall software development life cycle, and to be able to improve the process itself over time. Moreover, key measurements are required to interpret inspection results correctly. For example, if very little effort is expended on an inspection or key phases (such as individual preparation) are skipped altogether, it is very unlikely that the inspection will have found a majority of the existing defects.
NASA-STD-8739.9, Software Formal Inspections Standard, includes lessons that have been learned by practitioners over the last decade.
The Software Formal Inspections Standard suggests several best practices related to the collection and the use of inspection data. 277
This requirement collects effort, number of participants, number and types of defects found, pass/fail results, and identification of the inspected work product in order to ensure the effectiveness of the inspection. Where peer reviews and software inspections yield less than expected results, some questions to address may include:
- Are peer reviews/inspections being deployed for the appropriate artifacts? As described in the rationale for SWE-087, this process often is most beneficial when applied to artifacts such as requirements and test plans.
- How are peer reviews and software inspections being applied with respect to other verification and validation (V&V) activities? It may be worth considering whether this process is being applied only after other approaches to quality assurance (e.g., unit testing) that are already finding defects, perhaps less cost-effectively.
- Are peer review and software inspection practices being followed appropriately? Tailoring away key parts of the inspection process (e.g., planning or preparation), or undertaking inspections with key expertise missing from the team, will not produce the best results.
As with other forms of software measurement, best practices for ensuring that the collection and analysis of peer review and software inspection metrics are done well include:
- Clear triggers indicating when the metrics are gathered and analyzed (e.g., after every inspection; once per month).
- Clear assignment of responsibility for performing this task.
- Consistent recording of the units of measure (e.g., one inspection should not record effort in person-hours while another records it in calendar-days).
- Consistency checking for collected measures, including investigation of outliers to verify whether the data was entered correctly and the correct definitions were applied.
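The consistency check above can be automated in a simple way. The sketch below flags effort values that differ from the median by a large factor, so they can be investigated for data-entry errors (e.g., calendar-days recorded where person-hours were expected). The record layout and the threshold factor are illustrative assumptions, not prescribed by the handbook.

```python
# Hypothetical outlier check for inspection effort data; the factor of 3
# and the list-of-floats input format are assumptions for illustration.

def flag_outliers(efforts_person_hours, factor=3.0):
    """Return indices of effort values more than `factor` times the
    median, or less than 1/factor of it -- candidates for verifying
    that the data was entered correctly and in the right units."""
    ordered = sorted(efforts_person_hours)
    median = ordered[len(ordered) // 2]
    return [i for i, e in enumerate(efforts_person_hours)
            if e > factor * median or e < median / factor]

efforts = [6.0, 5.5, 40.0, 7.0, 6.5, 0.5]  # 40.0 and 0.5 look suspicious
print(flag_outliers(efforts))  # indices of entries to investigate
```

Flagged entries are not necessarily wrong; the point is to trigger the investigation the bullet above calls for.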
Best practices related to the collection and analysis of inspection data include:
- The moderator is responsible for compiling and reporting the inspection data.
- The project manager explicitly specifies the location and the format of the recorded data.
- Inspections are checked for process compliance using the collected inspection data, for example to verify that:
- Any inspection team consists of at least three persons.
- Any inspection meeting is limited to approximately 2 hours, and if the discussion looks likely to extend far longer, the remainder of the meeting is rescheduled for another time when inspectors can be fresh and re-focused.
- The rate of inspection adheres to the recommended or specified rate for different inspection types.
- A set of analyses is performed periodically on the recorded data to monitor progress (i.e., number of inspections planned versus completed) and to understand the costs and benefits of inspection.
- The outcome of the analyses is leveraged to support the continuous improvement of the inspection process.
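The process-compliance checks listed above lend themselves to a small automated pass over the recorded data. In this sketch, the three-person minimum and the two-hour meeting cap come from the text; the dictionary record format and the rate fields are assumptions for illustration.

```python
# Illustrative compliance check over one inspection record; the record
# keys (team_size, meeting_hours, etc.) are invented for this example.

def compliance_issues(inspection):
    """Return a list of process-compliance issues for one inspection."""
    issues = []
    if inspection["team_size"] < 3:
        issues.append("team smaller than three persons")
    if inspection["meeting_hours"] > 2.0:
        issues.append("meeting exceeded ~2 hours; reschedule remainder")
    if inspection["pages_per_hour"] > inspection["max_rate_pages_per_hour"]:
        issues.append("inspection rate above recommended rate")
    return issues

record = {"team_size": 2, "meeting_hours": 2.5,
          "pages_per_hour": 25, "max_rate_pages_per_hour": 20}
print(compliance_issues(record))  # all three checks fire for this record
```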
In an acquisition context, there are several important considerations for assuring proper inspection usage by software provider(s):
- The metrics to be furnished by software provider(s) must be specified in the contract.
- It must be clear and agreed upon ahead of time whether or not software providers can define their own defect taxonomies. If providers may use their own taxonomy, request that the software providers furnish the definition or the data dictionary of the taxonomy. It is also important (especially when the provider team contains subcontractors) to ensure that consistent definitions are used for: defect types; defect severity levels; effort reporting (how comprehensive or restrictive are the activities that are part of the actual inspection).
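When providers are allowed their own defect taxonomies, the acquirer typically needs to map reported defect types onto one agreed taxonomy before aggregating data across subcontractors. A minimal sketch of such a mapping follows; every category name here is invented for illustration, not taken from any NASA taxonomy.

```python
# Hypothetical mapping from a provider's defect taxonomy to a common
# one agreed in the contract; all names below are illustrative only.

PROVIDER_TO_COMMON = {
    "logic": "Functionality",
    "interface": "Interface",
    "doc": "Documentation",
}

def normalize_defect_type(provider_type):
    """Map a provider-specific defect type onto the common taxonomy;
    unmapped types are flagged so the provider's data dictionary can
    be requested, as recommended above."""
    return PROVIDER_TO_COMMON.get(provider_type.lower(), "UNMAPPED")

print(normalize_defect_type("Logic"))   # maps to the common category
print(normalize_defect_type("timing"))  # UNMAPPED: follow up with provider
```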
Example Software Peer Review Base Metrics

- Lines of code or document pages planned to be inspected
- Lines of code or document pages actually inspected or peer reviewed
- Time required to complete the inspection; if done over several meetings, sum the total time required
- Total number of hours spent on planning and preparation for the review
- Total number of hours spent in the inspection meeting (meeting time multiplied by the number of participants)
- Total number of hours spent by the author making improvements based on the findings
- Major defects found: number of major defects found during the review
- Minor defects found: number of minor defects found during the review
- Major defects corrected: number of major defects corrected during rework
- Minor defects corrected: number of minor defects corrected during rework
- Number of inspectors: number of people, not counting observers, who participated in the review
- Review team's assessment of the work product (accepted, accepted conditionally, review again following rework, review not complete, etc.)
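One way to keep these base measurements consistent across inspections is a single record type with the units fixed in one place. The field names and units below are illustrative assumptions based on the list above.

```python
# A minimal record for the base measurements listed above; field names
# and units are illustrative, not a prescribed NASA format.
from dataclasses import dataclass

@dataclass
class PeerReviewRecord:
    size_planned: int       # lines of code or document pages planned
    size_inspected: int     # lines of code or pages actually inspected
    prep_hours: float       # planning + preparation, person-hours
    meeting_hours: float    # meeting time x number of participants
    rework_hours: float     # author effort on fixes, person-hours
    major_found: int
    minor_found: int
    major_corrected: int
    minor_corrected: int
    inspectors: int         # excludes observers
    disposition: str        # e.g. "accepted", "review again after rework"

    @property
    def total_effort_hours(self) -> float:
        """Total inspection effort across all phases, person-hours."""
        return self.prep_hours + self.meeting_hours + self.rework_hours

rec = PeerReviewRecord(500, 480, 6.0, 8.0, 4.0, 2, 9, 2, 8, 4, "accepted")
print(rec.total_effort_hours)  # 18.0
```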
Peer Review Defects
The Peer Review Defect metric measures the average number of defects per peer review to determine defect density over time.
Number of defects found per peer review = [Total number of defects] / [Total number of peer reviews]
Additional guidance regarding software peer review/inspection measures can be found in Handbook Topic 7.18 – Documentation Guidance.
NASA users should consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources, such as templates, related to peer reviews and inspections.
4. Small Projects
Projects with small budgets or a limited number of personnel need not use complex or user-intensive data collection logistics.
Given the amount of data typically collected, well-known and easy to use tools such as Excel sheets or small databases (e.g., implemented in MS Access) are usually sufficient to store and analyze the inspections performed on a project.
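As a concrete illustration of the "small database" option, the sketch below stores inspection records in SQLite, which ships with the Python standard library; the table schema is an assumption for the example.

```python
# Minimal sketch of a small inspection database using Python's built-in
# sqlite3 module; the schema and sample row are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path on a real project
conn.execute("""CREATE TABLE inspections (
    artifact TEXT, inspectors INTEGER,
    effort_hours REAL, major INTEGER, minor INTEGER)""")
conn.execute("INSERT INTO inspections VALUES (?, ?, ?, ?, ?)",
             ("SRS section 3", 4, 12.5, 2, 9))

# Simple periodic analysis: total major defects found to date.
total_major, = conn.execute("SELECT SUM(major) FROM inspections").fetchone()
print(total_major)  # 2
```

A spreadsheet works just as well at this scale; the database form simply makes the periodic analyses queryable.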
Tools relative to this SWE may be found in the table below. You may wish to reference the Tools Table in this handbook for an evolving list of these and other tools in use at NASA. Note that this table should not be considered all-inclusive, nor is it an endorsement of any particular tool. Check with your Center to see what tools are available to facilitate compliance with this requirement.
SPAN - Accessible to NASA users via SPAN tab in this Handbook. By Request - Non-NASA users, contact User for a copy of this tool.
Excel workbook that provides instructions for conducting a peer review, an overview of the peer review process, and product-specific checklists used during reviews. Areas for documenting issues and concerns, assigning action items, tracking issues to resolution, and documenting metrics are included. In SPAN search for LARC_TL_20120821_Peer_Review_Toolkit_v13
Collaborator is a code review tool that helps development, testing and management teams work together to produce high quality code. It allows teams to peer review code, user stories and test plans in a transparent, collaborative framework — instantly keeping the entire team up to speed on changes made to the code.
Centers using this tool: LaRC, MSFC, KSC
6. Lessons Learned
Over the course of hundreds of inspections and analysis of their results, the Jet Propulsion Laboratory (JPL) has identified key lessons learned that lead to more effective inspections, including:
- Capturing statistics on the number of defects, the types of defects, and the time expended by engineers on the inspections 235.