{alias:SWE-119}
{tabsetup:1. The Requirement|2. Rationale|3. Guidance|4. Small Projects|5. Resources|6. Lessons Learned}

{div3:id=tabs-1}

h1. 1. Requirements

5.3.1 The Software Peer Review/Inspection Report shall include:

      a. Identification information (including item being reviewed/inspected, review/inspection type (e.g., requirements inspection, code inspection, etc.) and review/inspection time and date).
      b. Summary on total time expended on each software peer review/inspection (including total hour summary and time participants spent reviewing/inspecting the product individually).
      c. Participant information (including total number of participants and participant's area of expertise).
      d. Total number of defects found (including the total number of major defects, total number of minor defects, and the number of defects in each type such as accuracy, consistency, completeness).
      e. Peer review/inspection results summary (i.e., pass, re-inspection required).
      f. Listing of all review/inspection defects.


h2. {color:#003366}{*}1.1 Notes{*}{color}

NPR 7150.2A does not include any notes for this requirement.

h2. 1.2 Applicability Across Classes

Classes C through E and Safety Critical are labeled with "P (Center) + SO". "P (Center)" means that an approved Center-defined process which meets a non-empty subset of the full requirement can be used to achieve this requirement, while "SO" means that the requirement applies only for safety-critical portions of the software.

Class C and not safety critical, as well as Class G, are labeled with "P (Center)". This means that an approved Center-defined process which meets a non-empty subset of the full requirement can be used to achieve this requirement.

Class F is labeled with "X (not OTS)". This means that this requirement does not apply to off-the-shelf software for this class.


{applicable:asc=1|ansc=1|bsc=1|bnsc=1|csc=1*|cnsc=1p|dsc=1*|dnsc=10|esc=1*|ensc=0|f=1*|g=p|h=*0}

{div3}
{div3:id=tabs-2}

h1. 2. Rationale


The metrics and data called out in this requirement allow a project to monitor the effectiveness of peer reviews/inspections and to develop an understanding of how to plan and perform an inspection for optimal effectiveness.

*a) Identification information:* At the most basic level, a peer review/inspection's results need to be traceable back to the document inspected, so that the team can make sure problems get fixed and so that a quality history can be developed for key system components. But it is also important to be able to trace results back to the type of document inspected. Research and experience have shown that many key inspection parameters vary greatly from one type of document to another, even though the same basic inspection procedure applies. For example, teams are often larger for reviews of requirements documents since more stakeholders may need to have their concerns incorporated. The review time and date can be used to detect whether inspection planning or results have varied over time, e.g., over different phases of the project.

*b) Total time expended:* When done correctly, peer reviews/inspections require a non-trivial amount of effort. This metric can help build up baselines of how much time is required to perform an inspection, so that future project plans can be constructed realistically, allocating appropriate time for the inspections needed. The effort expended on an inspection also allows an analysis of the return on investment: moderators pay attention to the person-hours expended on a peer review/inspection and the benefit received (e.g., the number of defects found), to make sure that the time is well spent and to find opportunities for improving the process next time.

* It is worth noting that the time spent on individual review needs to be reported separately. This information can help the inspection moderator assess whether to reschedule the meeting. If participants spend too little time preparing, inspection meetings tend to be inefficient: because reviewers are looking at the material for the first time, it is hard to get into a meaningful discussion of key technical details. Inspections with too little individual preparation time may find some defects, but they are unlikely to find the majority of them.

*c) Participant info:* The number and perspectives of participants are factors strongly correlated with inspection success, and aspects over which the moderator has direct control. A good rule of thumb is that the number of participants involved in the inspection should be between four and six people, regardless of the type of document being inspected. These values reflect the fact that teams of fewer than four people are likely to lack important perspectives, while larger teams are more likely to experience dynamics that limit full participation.

* It is important to note that the areas of participants' expertise are also reported. This reflects the idea that the best review process in the world cannot find defects if the required human expertise is missing. Reporting the expertise or job categories of the participants allows an analysis of whether the right judgments have been brought to bear for the type of system and the type of document.

*d) Defects found:* A key indicator of peer review/inspection benefit is the number of defects found: that is, how many defects can be removed from the system early, rather than waiting until later when more work has been done based on these defects and the corrections required are more expensive.

* Teams report the number of major and minor defects separately. Looking for trends in these measures can provide important indications to a team, such as whether the documents being inspected are of very low quality (a consistently high number of major defects being found) or whether the inspection process may not be focused on the most important issues (the vast majority of defects being found are minor defects).

* Teams report the number of defects according to some categorization scheme. This information allows teams to look for trends over time that they should be aware of: For example, if defects of "completeness" are routinely found to be the majority of defects found in inspections, the team should consider whether corrective actions could be taken to help developers understand the components or aspects that need to be included.

*e) Results summary:* Teams indicate whether the document under peer review/inspection passes (i.e., once the corrections are made it will be of sufficient quality for the development process to proceed) or whether it should be sent for re-inspection. Re-inspection should be chosen when a large number of defects has been found, or when the corrections to the defects found would result in substantial changes to the work product.

*f) Listing of all defects:* The defects found in an inspection are recorded, so that they can be tracked until they are resolved.


{div3}
{div3:id=tabs-3}

h1. 3. Guidance

The metrics called out in this requirement are all intrinsic to a well-run peer review/inspection. If the inspection process is being followed adequately, these metrics are already collected along the way in order to support key stakeholders and their decisions: For example, the names of participants and their areas of expertise are used to support the moderator during planning, to compose a team capable of covering the important quality aspects of the document under review. Likewise, defects are recorded and counted to ensure that they are tracked until actually fixed or otherwise dispositioned.
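As an illustration only, the information in items (a) through (f) of the requirement maps naturally onto a simple structured record. The sketch below is a hypothetical Python representation; the class and field names are illustrative and not defined by NPR 7150.2A or the NASA Software Formal Inspection Standard, and most projects capture the same information in an inspection form, spreadsheet, or tracking tool.

{code:language=python}
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Participant:
    name: str
    expertise: str        # item (c): participant's area of expertise
    prep_hours: float     # item (b): time spent reviewing the product individually

@dataclass
class Defect:
    severity: str         # "major", "minor", or "clerical"
    defect_type: str      # e.g., "accuracy", "consistency", "completeness"
    description: str      # item (f): entry in the listing of all defects

@dataclass
class PeerReviewInspectionReport:
    item_reviewed: str                # item (a): product being reviewed/inspected
    inspection_type: str              # item (a): e.g., "requirements inspection"
    inspection_datetime: datetime     # item (a): review/inspection time and date
    meeting_hours: float              # item (b): time spent in the inspection meeting
    participants: List[Participant]   # item (c): number and expertise of participants
    result: str                       # item (e): "pass" or "re-inspection required"
    defects: List[Defect] = field(default_factory=list)   # item (f)

    def total_hours(self) -> float:
        """Item (b): meeting time plus all individual preparation time."""
        return self.meeting_hours + sum(p.prep_hours for p in self.participants)

    def defect_counts_by_severity(self) -> Dict[str, int]:
        """Item (d): total number of defects by severity."""
        counts: Dict[str, int] = {}
        for d in self.defects:
            counts[d.severity] = counts.get(d.severity, 0) + 1
        return counts
{code}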

The NASA Software Formal Inspection Standard is currently being updated and revised to include lessons that have been learned by practitioners over the last decade. This standard provides a more detailed list of the inspection data and metrics to be collected. 

The creators of the NASA Software Formal Inspection Standard suggest additional best practices related to the collection and content of inspection data and metrics. They recommend that:

* To the extent possible, metrics are collected as they become available during the inspection process, rather than compiled after the entire inspection is over.
* Teams maintain the data records across the inspections that they perform, so that they can look for trends between the parameters they control and the number and types of defects detected. Parameters that are under an inspection moderator's control include:
** The number and expertise of inspectors. 
** The size of the document inspected.
** The amount of effort spent by the inspection team.

Best practices related to the various activities that employ the information recorded in the Software Peer Review/Inspection Report include:

* The moderator reviews individual inspectors' preparation effort to decide whether sufficient preparation has been done to proceed with the inspection meeting.
* The moderator ensures that all major defects have been resolved.
* A set of analyses is performed periodically on the recorded data, for example, to monitor progress (i.e., the number of inspections planned versus completed) and to understand the effort and benefits of inspections (a simple sketch follows this list).
* The outcomes of the analyses are leveraged to support the continuous improvement of the inspection process. 
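As a hypothetical illustration of this kind of periodic analysis, the fragment below computes two simple indicators over the accumulated inspection data: planned-versus-completed progress and defects found per person-hour. The function names and the (defects found, person-hours) input format are assumptions for the sketch; real projects typically obtain these numbers directly from their inspection tracking tool.

{code:language=python}
from typing import List, Tuple

def inspection_progress(planned_count: int, completed_count: int) -> float:
    """Progress monitoring: fraction of planned inspections actually completed."""
    return completed_count / planned_count if planned_count else 0.0

def defects_per_person_hour(inspections: List[Tuple[int, float]]) -> float:
    """Effort versus benefit: defects found per person-hour expended, given
    (defects_found, person_hours) pairs, one per completed inspection."""
    total_defects = sum(defects for defects, _hours in inspections)
    total_hours = sum(hours for _defects, hours in inspections)
    return total_defects / total_hours if total_hours else 0.0
{code}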

In creating baselines of inspection performance in this way, it is important to ensure that:

* The units of measure are recorded consistently, e.g., one inspection does not record effort in person-hours and another in calendar-days.
* The definitions of measures are consistent; for example, preparation time and other activities are counted the same way across all inspections.

* Zeroes and missing values are handled consistently; for example, if a value is blank, it is clear whether the data is missing or 0 hours were spent on the given activity.
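One lightweight way to enforce these conventions is to normalize values when they are recorded. The sketch below is illustrative only (the function names and the 8-hour workday conversion are assumptions, not requirements): all effort is stored in person-hours, and a missing value is kept distinct from an explicit zero.

{code:language=python}
from typing import Optional

HOURS_PER_WORKDAY = 8.0   # assumed conversion factor; defined once so every inspection uses it

def to_person_hours(value: float, unit: str) -> float:
    """Store all effort in a single unit (person-hours), whatever unit it was reported in."""
    if unit == "hours":
        return value
    if unit == "days":
        return value * HOURS_PER_WORKDAY
    raise ValueError(f"Unknown effort unit: {unit}")

def prep_hours_or_missing(raw_value: str) -> Optional[float]:
    """Keep 'not reported' (None) distinct from an explicit zero hours."""
    raw_value = raw_value.strip()
    return None if raw_value == "" else float(raw_value)
{code}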

The following defect severity classification is typically used for classifying anomalies or defects identified in an inspection meeting:

* *Major Defect:* A defect in the product under inspection which, if not corrected, would either cause a malfunction or prevent the attainment of a required result, and would result in a Discrepancy Report.
* *Minor Defect:* A defect in the product under inspection which, if not fixed, would not cause a malfunction, would not prevent the attainment of a required result, and would not result in a Discrepancy Report, but could result in difficulties in terms of operations, maintenance, and future development.
* *Clerical Defect:* A defect in the product under inspection at the level of editorial errors, such as spelling, punctuation, and grammar.

The types of identified defects are further classified according to a pre-defined defect taxonomy. The following taxonomy has frequently been used for classifying code-related anomalies or defects:
# *Algorithm / method:* An error in the sequence or set of steps used to solve a particular problem or computation, including mistakes in computations, incorrect implementation of algorithms, or calls to an inappropriate function for the algorithm being implemented.
# *Assignment / initialization:* A variable or data item that is assigned a value incorrectly or is not initialized properly, or where the initialization scenario is mishandled (e.g., incorrect publish or subscribe, incorrect opening of a file, etc.).
# *Checking:* Inadequate checking for potential error conditions, or an inappropriate response is specified for error conditions.
# *Data:* Error in specifying or manipulating data items, incorrectly defined data structure, pointer or memory allocation errors, or incorrect type conversions.
# *External interface:* Errors in the user interface (including usability problems) or the interfaces with other systems.
# *Internal interface:* Errors in the interfaces between system components, including mismatched calling sequences and incorrect opening, reading, writing, or closing of files and databases.
# *Logic:* Incorrect logical conditions on if, case, or loop blocks, including incorrect boundary conditions ("off by one" errors are an example) being applied, or incorrect expression (e.g., incorrect use of parentheses in a mathematical expression).
# *Non-functional defects:* Includes non-compliance with standards, failure to meet non-functional requirements such as portability and performance constraints, and lack of clarity of the design or code to the reader - both in the comments and the code itself.
# *Timing / optimization:* Errors that will cause timing (e.g., potential race conditions) or performance problems (e.g., unnecessarily slow implementation of an algorithm).
# *Other:* Anything that does not fit any of the above categories that is logged during an inspection of a design artifact or source code.
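Both taxonomies lend themselves to fixed enumerations, so that every inspection classifies defects against the same set of values and the per-category counts called for in item (d) of the requirement can be tallied mechanically. The sketch below is illustrative only; the enumeration and function names are not mandated by the standard or by NPR 7150.2A.

{code:language=python}
from collections import Counter
from enum import Enum
from typing import Iterable

class Severity(Enum):
    MAJOR = "major"
    MINOR = "minor"
    CLERICAL = "clerical"

class DefectType(Enum):
    ALGORITHM_METHOD = "algorithm / method"
    ASSIGNMENT_INITIALIZATION = "assignment / initialization"
    CHECKING = "checking"
    DATA = "data"
    EXTERNAL_INTERFACE = "external interface"
    INTERNAL_INTERFACE = "internal interface"
    LOGIC = "logic"
    NON_FUNCTIONAL = "non-functional"
    TIMING_OPTIMIZATION = "timing / optimization"
    OTHER = "other"

def defect_type_trends(defect_types: Iterable[DefectType]) -> Counter:
    """Tally defects by type so recurring categories (e.g., logic or checking
    errors) stand out when looking across multiple inspections."""
    return Counter(defect_types)
{code}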


{div3}
{div3:id=tabs-4}

h1. 4. Small Projects


{div3}
{div3:id=tabs-5}

h1. 5. Resources


h2. 5.1 Tools


{div3}
{div3:id=tabs-6}

h1. 6. Lessons Learned


{div3}
{tabclose}