A software organization is expected to collect metrics in accordance with a common procedure so that the data collected are uniform. The software organization designates a storage location so that project metric data can be viewed and used across the organization. An effective storage approach provides long-term access to metric information for use in trending assessments and analyses. The data in these repositories are used to manage projects, to assure safety and quality, and to improve overall software engineering practices. Measurement repositories can be maintained at the Center level or at an organizational level within a Center; each Center should decide which approach works best for it.
In general, Center-level measurement systems are designed to meet the following high-level goals:
- To improve future planning and cost estimation.
- To provide realistic data for progress tracking.
- To provide indicators of software quality.
- To provide baseline information for future process improvement activities.
The software measurement repository stores metrics history to be used to evaluate data on current and/or future projects. The availability of past metrics data can be the primary source of information for calibration, planning estimates, benchmarking, and process improvement activities.
With the high-level goals in mind, Centers are to establish and maintain a specific measurement repository for their particular programs and projects to enable reporting on the following minimum reporting categories:
- Software development tracking data:
This data is tracked throughout the life cycle and includes, but is not limited to, planned and actual values for software resources, schedule, implementation status, and test status. This information may be reported in the Software Metrics Report (see Metrics).
- Software functionality achieved data:
This is data that monitors the functionality of the software and includes, but is not limited to, the planned vs. the actual number of requirements and function points. It also includes data on the utilization of computer resources. This information may be reported in the Software Metrics Report (see Metrics).
- Software quality data:
This data is used to determine the quality of the software produced during each phase of the software life cycle. Software quality data includes measurements regarding software problem reports/change requests, review item discrepancies, peer reviews/software inspections, software audits, and software risks and mitigations. This information may be reported in the Software Metrics Report (see Metrics).
- Software development effort and cost:
Effort and cost data are used to monitor the cost and progress of software development. This type of data can include progress toward accomplishing planned activities, number of schedule delays, resource and tool expenses, costs associated with test and verification facilities, and more. This information is typically captured in some type of status reporting tool.
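As an illustrative sketch only, the reporting categories above could be captured in a simple record structure per project and reporting period. All type and field names here are assumptions for the example, not a NASA-defined schema:

```python
from dataclasses import dataclass

# Hypothetical record types for the minimum reporting categories.
# Field names are illustrative assumptions, not a NASA-defined schema.

@dataclass
class TrackingData:
    planned_effort_hours: float   # development tracking data
    actual_effort_hours: float
    planned_tests: int
    tests_passed: int

@dataclass
class MetricsRecord:
    project_id: str
    reporting_period: str            # e.g., "2024-Q1"
    tracking: TrackingData
    requirements_planned: int = 0    # functionality achieved data
    requirements_implemented: int = 0
    open_problem_reports: int = 0    # software quality data
    cost_to_date: float = 0.0        # effort and cost data

record = MetricsRecord(
    project_id="PRJ-001",
    reporting_period="2024-Q1",
    tracking=TrackingData(1000.0, 1100.0, 50, 42),
)
print(record.tracking.tests_passed)
```

A structure like this keeps planned and actual values side by side, which is what makes the repository useful for the trending and estimation goals listed earlier.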
Since this is a Center repository, measurements are collected across programs and projects. Proper data collection requires an agreed-to plan for the Center's software measurement repository. This plan is based on the evaluation and interpretation of specified software measures that have been captured, validated, and analyzed according to the approved procedures in the programs'/projects' software measurement plans. The collection plan begins with clear information that includes:
- A description of all data to be provided.
- A precise definition of terms.
- A description of responsibilities for data provision and analyses.
- A description of time frames and responsibilities for receiving the data and analyses provided.
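To make the four plan elements above concrete, a single collection-plan entry might look like the following sketch. Every name and value here is an illustrative assumption, not a prescribed format:

```python
# Hypothetical collection-plan entry covering the elements listed above:
# data description, definition of terms, responsibilities, and time frames.

collection_plan_entry = {
    "data_item": "open_problem_reports",
    "definition": "Count of problem reports not yet closed at period end",
    "format": "integer, reported monthly via CSV",
    "provider": "project software lead",        # responsibility for provision
    "analyst": "Center metrics group",          # responsibility for analysis
    "due": "5th working day after period end",  # time frame for reception
}
print(sorted(collection_plan_entry))
```

Recording each item this explicitly is what allows data from many projects to be compared once it reaches the Center repository.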
Each Center develops its own measurement program that is tailored to its needs and objectives and is based on an understanding of its unique development environment.
Activities within the data collection procedure include:
- A clear description of all data to be provided. This includes a description of each item and its format, a description of the physical or electronic form to be used, and the location or address to which the data are to be sent (i.e., the measurement repository).
- A clear and precise definition of terms. This includes the project- or organization-specific criteria and definitions, and a description of how to perform each step of collecting data and storing it in the measurement repository.
Activities within the data storage procedure, which covers placing data in the measurement repository, include:
- A description of the checking, validation, and verification (V&V) of the quality of the data sets collected. This includes checks for proper formats, logical consistency, missing entries, repetitive entries, and typical value ranges. Expect these checks to be oriented toward validation at the set level, since individual data checking and validation will already have been performed at the project level.
- A description of what data sets or intermediate analyses will be made and kept, or made and discarded (assuming the analyses can be reconstructed if needed). This includes a listing of requested analyses by Center stakeholders, lists of continuing metrics, and descriptions of changes to the analyses to reflect advances in the software development life cycle.
- The identification of a Center’s measurement repository, and site and management steps to access, use, and control the data and its appropriate database management system (DBMS) in the repository. The use of a DBMS in the repository allows multiple projects and organizations to access the data in a format that supports their specific or organizational objectives.
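The set-level checks described above (missing entries, repetitive entries, typical value ranges) can be sketched as a simple validation pass over incoming records. The column names and range limits are assumptions for the example:

```python
# Sketch of set-level data checks: missing entries, repetitive entries,
# and typical value ranges. Column names and limits are assumptions.

def validate_dataset(rows, required_keys, ranges):
    """Return a list of issue strings for a list of dict records."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        missing = [k for k in required_keys if row.get(k) in (None, "")]
        if missing:
            issues.append(f"row {i}: missing {missing}")
        key = tuple(row.get(k) for k in required_keys)
        if key in seen:                      # repetitive entry check
            issues.append(f"row {i}: repetitive entry {key}")
        seen.add(key)
        for k, (lo, hi) in ranges.items():   # typical value range check
            v = row.get(k)
            if isinstance(v, (int, float)) and not (lo <= v <= hi):
                issues.append(f"row {i}: {k}={v} outside [{lo}, {hi}]")
    return issues

rows = [
    {"project": "A", "period": "Q1", "defects": 3},
    {"project": "A", "period": "Q1", "defects": 3},   # repetitive entry
    {"project": "B", "period": "Q1", "defects": -1},  # out of range
]
issues = validate_dataset(rows, ["project", "period"], {"defects": (0, 500)})
print(issues)
```

Checks like these operate on whole submitted data sets, consistent with the point above that individual-record validation has already happened at the project level.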
Metrics (or indicators) are computed from measurements using the Center’s analysis procedures. They are quantifiable indices used to compare software products, processes, or projects or to predict their outcomes. They show trends of increasing or decreasing values, relative only to the previous value of the same metric. They also show containment or breaches of pre-established limits, such as allowable latent defects.
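The two indicator behaviors just described, a trend relative to the previous value of the same metric and a breach of a pre-established limit, can be sketched as follows. The limit value is an illustrative assumption:

```python
# Minimal sketch of the two indicator behaviors described above:
# trend relative to the previous value, and breach of a pre-set limit.

def indicator(history, limit):
    """Classify the latest metric value against its predecessor and a limit."""
    current, previous = history[-1], history[-2]
    if current > previous:
        trend = "increasing"
    elif current < previous:
        trend = "decreasing"
    else:
        trend = "flat"
    breached = current > limit   # e.g., allowable latent defects exceeded
    return trend, breached

latent_defects = [4, 6, 9]               # successive measurements
print(indicator(latent_defects, limit=8))  # ('increasing', True)
```

Note that the trend is defined only against the previous value of the same metric, matching the description above; it says nothing about comparisons across different metrics.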
Management metrics are measurements that help evaluate how well software development activities are performing across multiple Centers, development organizations, programs, or projects. Trends in management metrics support forecasts of future progress, early detection of trouble, and realistic plan adjustments once candidate approaches are quantified and analyzed.
NASA-specific establishing and maintaining software measurement repository information and resources are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.
Additional guidance related to software measurements may be found in the following related requirements in this Handbook: