2.2.6 The project shall regularly hold reviews of software activities, status, and results with the project stakeholders and track issues to resolution.
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 Applicability Across Classes
Class G is labeled with "P (Center)." This means that an approved Center-defined process which meets a non-empty subset of the full requirement can be used to achieve this requirement.
Regular reviews of software status and activities assure that all needed tasks and deliverables are managed and achieved. Regular reviews include both formal reviews (e.g., Preliminary Design Review (PDR), Critical Design Review (CDR)) and informal reviews (e.g., weekly status reviews). Issues presented or discovered during these activities are communicated to the appropriate personnel. Tracking these issues to closure assures that errors and shortcomings in the requirements, architecture, design, and/or build of the software are corrected and prevented from recurring.
Stakeholders typically are those materially affected by the outcome of a decision or a deliverable. In the context of software work product development, a stakeholder may be inside or outside the organization doing the work. The key reason for holding regular reviews with project stakeholders is to keep them and their organizations informed, since they are both participants in the software work product development and advocates for the activity.
Regular reviews are those held on a scheduled basis throughout the software development life cycle. They may be weekly, monthly, or quarterly, or held as needed, depending on the size and scope of the software development activity. They include the internal reviews conducted periodically to assess project status and the major technical reviews that occur at various phases of the project (see the NASA Systems Engineering Handbook and the NASA Space Flight Program and Project Management Requirements documents). The latter are the major reviews in the software development life cycle (e.g., PDR, CDR, etc.). Take into consideration the results of the software classification process (see SWE-020) and the safety criticality determination process (see SWE-133) when choosing the review frequency.
Regular reviews cover details of work in software planning, requirements development, architecture, detailed design, coding and integration, testing plans, testing results, and overall readiness for flight. The individual project or development activity determines the specific content of each review, considering the current position of the activities within the software development life cycle. Review content is based on the specific project needs. However, the major technical reviews often must show evidence that Entrance and Exit (Success) criteria have been satisfied. See Topic 7.9 for a listing of potential criteria to use in reviews.
The evaluation of metrics developed from a software measures process (see SWE-091, SWE-092, SWE-093, and SWE-094) and the assessment of milestone status provide quantitative determinations of work progress. Risk identification and mitigation, safety, and problem identification and resolution are all part of the regular reviews. Risk identification (see SWE-086) and mitigation efforts are tracked in a controlled manner, whether in a database tool or in a software package written for risk management.
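The quantitative determination of work progress described above can be sketched as a simple milestone-completion measure. The field names and the example data are illustrative assumptions, not a defined NASA measure:

```python
# Minimal sketch of a quantitative progress check built from milestone
# status data. The "complete" field and the dict shape are illustrative
# assumptions for this example only.
def milestone_progress(milestones: list[dict]) -> float:
    """Return the fraction of planned milestones completed to date."""
    if not milestones:
        return 0.0
    done = sum(1 for m in milestones if m["complete"])
    return done / len(milestones)
```

A project would typically compare this figure against the planned completion fraction for the current life-cycle phase and raise the gap as an issue at the next regular review.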
Issues identified during a regular review that cannot be closed at the review are documented and tracked until they are officially dispositioned and/or closed. Issues can be tracked in a suitable tool, such as a risk management system, a configuration management system, or a problem reporting and corrective action (PRACA) system. The configuration management and control system selected for the project is documented in the Configuration Management Plan, which records the methods and tools used for tracking issues to closure (see SWE-079 and SWE-103).
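The life of a review issue, from identification through official disposition, can be sketched as a small record with a closure state. The field names and status values below are assumptions for illustration, not a mandated schema or the interface of any particular PRACA tool:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative status values; a real tracking tool defines its own states.
OPEN, CLOSED = "Open", "Closed"

@dataclass
class ReviewIssue:
    """Hypothetical record for an issue raised at a regular review."""
    identifier: str                  # e.g., a tracking number assigned by the tool
    description: str
    originating_review: str          # e.g., "PDR" or "weekly status review"
    status: str = OPEN
    disposition: Optional[str] = None
    closed_on: Optional[date] = None

    def close(self, disposition: str, when: date) -> None:
        """Record the official disposition; an issue is not closed without one."""
        self.disposition = disposition
        self.status = CLOSED
        self.closed_on = when

def open_issues(issues: list["ReviewIssue"]) -> list["ReviewIssue"]:
    """Issues still requiring tracking at the next regular review."""
    return [i for i in issues if i.status != CLOSED]
```

The point of the sketch is the invariant it enforces: an issue leaves the open list only by receiving a recorded disposition, which is what tracking to closure means in practice.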
The interpretation of the term "stakeholder" for this requirement can be taken to include representatives from the following organizations:
- Quality assurance.
- Systems engineering.
- Independent testing.
- Independent Verification and Validation (IV&V).
- Project management.
- Other organizations performing project activities.
However, other external stakeholders at the program or project level (e.g., principal investigators, the science community, the technology community, the public, the education community, and/or a Mission Directorate sponsor) are not regularly included in the internal reviews held to satisfy this requirement. In contrast to relevant or internal stakeholders, external project stakeholders generally participate only in the major milestone reviews.
A best practice related to the involvement of stakeholders is to determine and invite the relevant stakeholders, i.e., those who are typically involved in the review because they are engaged in the development and/or who have a vested interest in the work products being produced.
Additional guidance related to holding reviews of software activities, status, and results may be found in the following related requirements in this Handbook:
- Software Life Cycle.
- Software Peer Reviews and Inspections for Requirements, Test Plans, Design, and Code.
- Software Peer Reviews and Inspections - Checklist Criteria and Tracking.
- Software Peer Reviews and Inspections - Basic Measurements.
4. Small Projects
This requirement applies to all projects, subject to the software classification determined for the project (see SWE-020). A smaller project, however, may justify less frequent reviews if the software product poses a low risk to the overall project or program. Periodic re-evaluation of the software classification and risk level may validate the use of less frequent reviews or suggest an increase in their frequency.
6. Lessons Learned
The NASA Lessons Learned database contains the following lessons learned related to or applicable to the software reviews: