SWE-119 - Software Documentation Requirements - Software Inspection, Peer Reviews, Inspections

1. Requirements

5.3.3.1 The Software Peer Review/Inspection Report shall include:

a. Identification information (including item being reviewed/inspected, review/inspection type (e.g., requirements inspection, code inspection, etc.), and review/inspection time and date).
b. Summary on total time expended on each software peer review/inspection (including total hour summary and time participants spent reviewing/inspecting the product individually).
c. Participant information (including total number of participants and participant's area of expertise).
d. Total number of defects found (including the total number of major defects, total number of minor defects, and the number of defects in each type such as accuracy, consistency, completeness).
e. Peer review/inspection results summary (i.e., pass, re-inspection required).
f. Listing of all review/inspection defects.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 Applicability Across Classes

Classes C through E, Safety Critical, are labeled "P (Center) + SO." "P (Center)" means that an approved Center-defined process, which meets a non-empty subset of the full requirement, can be used to achieve this requirement, while "SO" means that the requirement applies only to the safety-critical portions of the software. Class C, Not Safety Critical, and Class G are labeled "P (Center)," meaning that an approved Center-defined process, which meets a non-empty subset of the full requirement, can be used to achieve this requirement. Class F is labeled "X (not OTS)," meaning that this requirement does not apply to off-the-shelf (OTS) software for that class.

Class       | A_SC | A_NSC | B_SC | B_NSC | C_SC      | C_NSC | D_SC      | D_NSC | E_SC      | E_NSC | F           | G    | H
Applicable? | X    | X     | X    | X     | P(C) + SO | P(C)  | P(C) + SO |       | P(C) + SO |       | X (not OTS) | P(C) |

Key: A_SC = Class A Software, Safety-Critical | A_NSC = Class A Software, Not Safety-Critical | ... | X - Applicable with details, read above for more | P(C) - P (Center), follow Center requirements or procedures | (blank) - Not Applicable

2. Rationale

The metrics and data called out in this requirement are ones that allow monitoring of the effectiveness of peer reviews/inspections and that help develop an understanding of how to plan and perform an inspection for optimal effectiveness.

3. Guidance

a. Identification information: At the most basic level, a peer review's/inspection's results need to be traceable to the document inspected, so that the team can make sure problems get fixed and so that a quality history of key system components can be developed. It is also important to be able to trace results to the type of document inspected. Research and experience have shown that many key inspection parameters vary greatly from one type of document to another, even though the same basic inspection procedure applies. For example, teams are often larger for reviews of requirements documents, since more stakeholders may need to have their concerns incorporated. The review time and date can be used to detect whether inspection planning or results have varied over time, e.g., over different phases of the project.

b. Total time expended: When done correctly, peer reviews/inspections require a non-trivial amount of effort. This metric helps build baselines of how much time is required to perform an inspection, so that future project plans can be constructed realistically, allocating appropriate time for the inspections needed. The effort expended on an inspection also allows an analysis of the return on investment. Moderators pay attention to the person-hours expended on a peer review/inspection and the benefit received, e.g., the number of defects found, to make sure that the time is well spent and to find opportunities for improving the process next time.

c. Participant info: The number and perspectives of participants are factors greatly correlated with inspection success, and they are aspects over which the moderator has direct control. A good rule of thumb is that the number of participants involved in the inspection should be between four and six people, regardless of the type of document being inspected. These values reflect the fact that teams of fewer than four people are likely to lack important perspectives, while larger teams are more likely to experience dynamics that limit full participation.
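To make the report contents concrete, the following is a minimal sketch, in Python, of how a project might record a single peer review/inspection report. The class and field names are illustrative assumptions (they are not defined by NPR 7150.2 or NASA-STD-2202-93); they simply mirror items a. through f. of the requirement and the guidance above.

```python
# Illustrative sketch only: names and structure are not mandated by SWE-119.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Defect:
    """One review/inspection defect (items d. and f.)."""
    identifier: str
    severity: str               # e.g., "major" or "minor"
    defect_type: str            # e.g., "accuracy", "consistency", "completeness"
    description: str
    disposition: str = "open"   # tracked until fixed or otherwise dispositioned


@dataclass
class PeerReviewInspectionReport:
    """Minimal record mirroring the report contents called out in 5.3.3.1."""
    # a. Identification information
    item_reviewed: str
    inspection_type: str        # e.g., "requirements inspection", "code inspection"
    date_time: datetime
    # b. Summary on total time expended
    meeting_hours: float
    individual_review_hours: List[float] = field(default_factory=list)
    # c. Participant information (one entry per participant)
    participant_expertise: List[str] = field(default_factory=list)
    # e. Results summary
    result: str = "pass"        # "pass" or "re-inspection required"
    # d./f. Defects found, listed individually
    defects: List[Defect] = field(default_factory=list)

    @property
    def total_person_hours(self) -> float:
        """Individual review time plus meeting time for all participants."""
        return sum(self.individual_review_hours) + self.meeting_hours * len(self.participant_expertise)
```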
d. Defect found: A key indicator of peer review/inspection benefit is the number of defects found, i.e., how many defects can be removed from the system early, rather than waiting until later, when more work has been done based on those defects and the corrections required are more expensive.

e. Results summary: Teams indicate whether the document under peer review/inspection passes, i.e., once the corrections are made it will be of sufficient quality for the development process to proceed, or whether it should be sent for a re-inspection. Re-review/inspection should be chosen when a large number of defects have been found or when the corrections to the defects found would result in major and substantial changes to the work product.

f. Listing of all defects: The defects found in an inspection are recorded so that they can be tracked until they are resolved.

The metrics called out in this requirement are all intrinsic to a well-run peer review/inspection. If the inspection process is being followed adequately, these metrics are already collected along the way to support key stakeholders and their decisions. For example, the names of participants and their areas of expertise are used during planning to help the moderator compose a team capable of covering the important quality aspects of the document under review. Likewise, defects are recorded and counted to ensure that they are tracked until actually fixed or otherwise dispositioned.
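As an illustration of the kind of roll-up these metrics support once collected, the sketch below (Python, with assumed function names) counts defects by severity and type, computes defects found per person-hour, and applies a simple re-inspection indicator. The threshold is hypothetical and shown only to make the decision explicit; a project's own baselines, moderator, and team would drive the real pass/re-inspect decision.

```python
# Illustrative arithmetic only: the summary fields and the re-inspection
# threshold are hypothetical, not taken from NPR 7150.2 or NASA-STD-2202-93.
from collections import Counter
from typing import Dict, List


def summarize_defects(defects: List[Dict[str, str]], total_person_hours: float) -> Dict[str, float]:
    """Roll up the defect and effort data called out in items b. and d."""
    by_severity = Counter(d["severity"] for d in defects)
    by_type = Counter(d["type"] for d in defects)
    summary = {
        "total_defects": len(defects),
        "major": by_severity["major"],
        "minor": by_severity["minor"],
        "defects_per_person_hour": len(defects) / total_person_hours if total_person_hours else 0.0,
    }
    summary.update({f"type_{name}": count for name, count in by_type.items()})
    return summary


def reinspection_recommended(summary: Dict[str, float], major_threshold: int = 5) -> bool:
    """Hypothetical rule of thumb: flag re-inspection when many major defects were found."""
    return summary["major"] >= major_threshold


if __name__ == "__main__":
    defects = [
        {"severity": "major", "type": "completeness"},
        {"severity": "minor", "type": "consistency"},
        {"severity": "major", "type": "accuracy"},
    ]
    report = summarize_defects(defects, total_person_hours=14.0)
    print(report, "re-inspect:", reinspection_recommended(report))
```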
NASA-STD-2202-93, NASA Software Formal Inspection Standard, is currently being updated and revised to include lessons that have been learned by practitioners over the last decade. This Standard provides a more detailed list of the inspection data and metrics to be collected, and its creators suggest additional best practices related to the collection and content of inspection data and metrics, to the various activities that employ the information recorded in the Software Peer Review/Inspection Report, and to the creation of baselines of inspection performance.

Anomalies or defects identified in inspection meetings are typically classified using a defect severity classification taxonomy (e.g., major and minor defects) and are further classified according to a pre-defined defect type taxonomy (e.g., accuracy, consistency, completeness); a defect taxonomy of this kind is frequently used to classify code-related anomalies or defects.

Consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources, such as templates, related to peer reviews and inspections.

4. Small Projects

No additional guidance is available for small projects. The community of practice is encouraged to submit guidance candidates for this paragraph.

5. Resources

5.1 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. The list is informational only and does not represent an "approved tool list," nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and Centers decide which tools to consider.

6. Lessons Learned

Both experience and research have shown that the parameters under the control of the moderator have a significant effect on the results of an inspection. Where teams do not have their own baselines of data, Agency-wide heuristics have been developed to assist in planning. For example, in an analysis of a database of over 2,400 inspections across the Agency, researchers found that inspections with team sizes of 4 to 6 inspectors find 12 defects on average, while teams outside this range find only 7 on average [320].

Heuristics for how many pages a team can review during a single inspection vary greatly according to the type of document, which is to be expected, since the density of information varies greatly between a page of requirements, a page of a design diagram, and a page of code. Similar to the heuristics for team size, inspection teams that follow the heuristics for document size also find, on average, significantly more defects than those that do not [320]. The recommended heuristics for document size are listed below. All of these values assume that inspection meetings will be limited to 2 hours.

Inspection Type   | Target   | Range
Functional Design | 20 Pages | 10 to 30 Pages
Software Req.     | 20 Pages | 10 to 30 Pages
Arch. Design      | 30 Pages | 20 to 40 Pages
Detailed Design   | 35 Pages | 25 to 45 Pages
Source Code       | 500 LOC  | 400 to 600 LOC
Test Plans        | 30 Pages | 20 to 40 Pages
Test Procedures   | 35 Pages | 25 to 45 Pages

Teams have consistently found that inspection meetings should last at most 2 hours at a time. Beyond that, it is hard, if not impossible, for teams to retain the required level of intensity and freshness to grapple effectively with the technical issues found. This heuristic is a common rule of thumb found across the inspection literature, and it is applied along with the other heuristics in the planning sketch at the end of this section.

Over the course of hundreds of inspections and analysis of their results, NASA's Jet Propulsion Laboratory (JPL) has identified key lessons learned that lead to more effective inspections.
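A minimal planning sketch, in Python, applying the heuristics above: the team size of 4 to 6, the document-size targets in the table, and the 2-hour meeting limit. The function and dictionary names are assumptions made for illustration; only the numeric values come from this section.

```python
# Illustrative planning aid; names are hypothetical, values come from the
# heuristics quoted above (team size 4-6, document-size ranges, 2-hour meetings).
from typing import Dict, List, Tuple

# (target, low, high) per inspection type; pages, except Source Code, which is LOC
SIZE_HEURISTICS: Dict[str, Tuple[int, int, int]] = {
    "functional design": (20, 10, 30),
    "software requirements": (20, 10, 30),
    "architectural design": (30, 20, 40),
    "detailed design": (35, 25, 45),
    "source code": (500, 400, 600),
    "test plans": (30, 20, 40),
    "test procedures": (35, 25, 45),
}

MEETING_LIMIT_HOURS = 2.0


def check_inspection_plan(inspection_type: str, size: int, team_size: int,
                          planned_meeting_hours: float) -> List[str]:
    """Return warnings for a planned inspection that falls outside the heuristics."""
    warnings: List[str] = []
    if not 4 <= team_size <= 6:
        warnings.append(f"team size {team_size} is outside the 4 to 6 heuristic")
    _, low, high = SIZE_HEURISTICS[inspection_type.lower()]
    if not low <= size <= high:
        warnings.append(f"size {size} is outside the {low} to {high} range for {inspection_type}")
    if planned_meeting_hours > MEETING_LIMIT_HOURS:
        warnings.append(f"meetings longer than {MEETING_LIMIT_HOURS:g} hours tend to lose effectiveness")
    return warnings


if __name__ == "__main__":
    # Example: a 750-LOC code inspection with a 3-person team produces two warnings.
    print(check_inspection_plan("source code", size=750, team_size=3, planned_meeting_hours=2.0))
```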