Regular reviews are those held on a scheduled basis throughout the software development life cycle. They may be weekly, monthly, quarterly, or as needed, depending on the size and scope of the software development activity. They include the internal reviews conducted periodically to assess project status, and the major technical reviews that occur at various phases of the project (see the NASA Systems Engineering Handbook and the NASA Space Flight Program and Project Management Requirements documents). The latter are the major reviews in the software development life cycle (e.g., the Preliminary Design Review (PDR) and the Critical Design Review (CDR)). Take into consideration the results of the software classification level process (see SWE-020) and the safety criticality determination process (see SWE-133) when choosing the review frequency.
Regular reviews cover details of work in software planning, requirements development, architecture, detailed design, coding and integration, testing plans, testing results, and overall readiness for flight. The individual project or development activity determines the specific content of each review, with consideration of where the activities currently stand within the software development life cycle. Review content is based on the specific project needs. However, the major technical reviews typically must show evidence that entrance and exit (success) criteria have been satisfied. See Topic 7.9 for a listing of potential criteria to use in reviews.
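As a hypothetical illustration of checking entrance criteria before convening a major technical review, the criteria can be held as a simple checklist and the unmet items surfaced for disposition. The criterion names below are illustrative assumptions, not criteria taken from Topic 7.9 or the Handbook.

```python
# Hypothetical sketch: checking entrance (or exit/success) criteria for a
# milestone review. Criterion names are illustrative, not from the Handbook.

def unmet_criteria(criteria):
    """Return the names of criteria that are not yet satisfied."""
    return [name for name, satisfied in criteria.items() if not satisfied]

entrance_criteria = {
    "Software Development Plan baselined": True,
    "Requirements traceability matrix available": True,
    "Open issues from prior review dispositioned": False,
}

blocking = unmet_criteria(entrance_criteria)
if blocking:
    print("Entrance criteria not met:", ", ".join(blocking))
```

The same pattern applies to exit (success) criteria evaluated at the close of the review.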
The evaluation of metrics developed from a software measures process (see SWE-091, SWE-092, SWE-093, and SWE-094) and the assessment of milestone status provide quantitative determinations of work progress. Risk identification and mitigation, safety, and problem identification and resolution are all parts of the regular reviews. Risk identification (see SWE-086) and mitigation efforts are tracked in a controlled manner, whether in a database tool or in a software package written for risk management.
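A quantitative progress determination from milestone status can be as simple as the fraction of planned milestones completed. The sketch below is a hypothetical illustration in the spirit of the software measures process; the milestone names and record fields are assumptions, not a prescribed measure set.

```python
# Hypothetical sketch: a quantitative progress check from milestone status.
# Milestone names and the record structure are illustrative assumptions.

def percent_complete(milestones):
    """Percentage of milestones reported complete."""
    done = sum(1 for m in milestones if m["complete"])
    return 100.0 * done / len(milestones)

milestones = [
    {"name": "Requirements baselined", "complete": True},
    {"name": "Architecture reviewed", "complete": True},
    {"name": "Unit tests passing", "complete": False},
    {"name": "Integration complete", "complete": False},
]

print(f"Milestone progress: {percent_complete(milestones):.0f}%")  # 50%
```

In practice such figures would come from the project's measurement repository and be trended across successive regular reviews rather than computed ad hoc.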
Issues identified during a regular review that cannot be closed at the review are documented and tracked until they are officially dispositioned and/or closed. Each issue can be tracked in a suitable tool, such as a risk management system, a configuration management system, or a problem reporting and corrective action (PRACA) system. The configuration management and control system selected for the project is documented in the Configuration Management Plan, which records the methods and tools used to track issues to closure (see SWE-079 and SWE-103).
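Tracking an issue to closure amounts to enforcing a controlled set of state transitions so that nothing is closed without first being dispositioned. The sketch below is a hypothetical illustration of that discipline; the state names and fields are assumptions, not the schema of any real PRACA or configuration management tool.

```python
# Hypothetical sketch of tracking a review issue to closure, in the style
# of a PRACA-like tool. States and fields are illustrative assumptions.

ALLOWED_TRANSITIONS = {
    "open": {"in_work", "dispositioned"},
    "in_work": {"dispositioned"},
    "dispositioned": {"closed"},
    "closed": set(),
}

class ReviewIssue:
    def __init__(self, ident, description):
        self.ident = ident
        self.description = description
        self.state = "open"
        self.history = ["open"]  # audit trail of state changes

    def transition(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

issue = ReviewIssue("CDR-042", "Timing margin not demonstrated")
issue.transition("in_work")
issue.transition("dispositioned")
issue.transition("closed")
```

The retained history provides the evidence of official disposition that the review record requires.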
The interpretation of the term "stakeholder" for this requirement can be taken to include representatives from the following organizations:
- Quality assurance.
- Systems engineering.
- Independent testing.
- Independent Verification and Validation (IV&V).
- Project management.
- Other organizations performing project activities.
However, other external stakeholders at the program or project level (e.g., principal investigators, the science community, the technology community, the public, the education community, and/or a Mission Directorate sponsor) are not routinely included in the internal reviews held to satisfy this requirement. In contrast to the relevant or internal stakeholders, external project stakeholders generally participate only in the major milestone reviews.
A best practice for stakeholder involvement is to determine and invite the relevant stakeholders, i.e., those who are typically involved in the review because they are engaged in the development and/or have a vested interest in the work products being produced.
Additional guidance related to holding reviews of software activities, status, and results may be found in the following related requirements in this Handbook:
- Software Life Cycle
- Software Peer Reviews and Inspections for Requirements, Test Plans, Design, and Code
- Software Peer Reviews and Inspections - Checklist Criteria and Tracking
- Software Peer Reviews and Inspections - Basic Measurements