Fortunately, there are many resources to help a Center develop data collection and storage procedures for its software measurement program. The NASA "Software Measurement Guidebook" is aimed at helping organizations begin or improve a measurement program. The Software Engineering Institute at Carnegie Mellon University details specific practices for collecting, interpreting, and storing data within its CMMI-DEV, Version 1.3 model. The Software Technology Support Center (STSC) at Hill Air Force Base offers its "Software Metrics Capability Evaluation Guide", and Westfall's "12 Steps to Useful Software Metrics" is an excellent primer for the development of useful software metrics. Other resources are suggested in section 5 [Resources].
The guidance in SWE-091 discusses recognized types of measures and provides guidance for selecting measures and recording the measurement selection decisions. SWE-092 calls for the specification and recording of the data collection and storage procedures for the software measures, and for the actual collection and storage of the measurements themselves. If a metric does not have a customer, there is no reason to produce it. Software measures are expensive to collect, report, and analyze, so if no one is using a metric, producing it is a waste of time and money.
Each organization or Mission Directorate develops its own measurement program, tailored to its needs and objectives and based on an understanding of its unique development environment. Typical software measurement programs have three components: data collection, technical support, and analysis and packaging. To properly execute the collection of the data, an agreed-to collection procedure is needed within the Software Development Plan (see SWE-102).
Activities within the data collection procedure include:
- A clear description of all data to be provided. This includes a description of each item and its format, a description of the physical or electronic form to be used, and the location or address to which the data are to be sent.
- A clear and precise definition of terms. This includes a description of project- or organization-specific criteria and definitions, and a description of how to perform each step in the collection process.
"When we use terms like defect, problem report, size, and even project, other people will interpret these words in their own context with meanings that may differ from our intended definition. These interpretation differences increase when more ambiguous terms like quality, maintainability, and user-friendliness are used."
- Who is responsible for providing which data. This may be easily expressed in matrix form, with clarifying notes appended to any particular cell of the matrix.
- When and to whom the data are to be provided. This describes the recipient(s) and management chain for data submission; it also specifies the submission dates, periodic intervals, or special events that trigger collection and submission. A sketch of how these elements can be captured in a structured record follows this list.
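As a minimal sketch of these procedure elements, the record below captures each item's description, format, definition, responsible party, recipient, and schedule. The field names and example values are illustrative assumptions, not prescribed by this guidance or by SWE-092.

```python
from dataclasses import dataclass

@dataclass
class CollectionItem:
    """One entry in a data collection procedure (illustrative fields)."""
    name: str          # the measure to be provided, e.g., "defects found in test"
    definition: str    # precise, project-specific definition of the term
    data_format: str   # physical or electronic form to be used
    responsible: str   # role or group that provides the datum
    recipient: str     # who receives the submission
    schedule: str      # submission date, periodic interval, or triggering event

# A who-provides-what matrix, expressed as one record per cell.
PROCEDURE = [
    CollectionItem(
        name="defects found in test",
        definition="any deviation from requirements logged as a problem report",
        data_format="CSV export from the problem-tracking tool",
        responsible="software test team",
        recipient="measurement analyst",
        schedule="weekly, each Friday",
    ),
]
```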
Data collection involvement by the software development team works better if the team's time spent collecting the data is minimized. Software developers and software testers are the two groups responsible for collecting and submitting significant amounts of the required data. If the developers or testers see this as a non-value-added task, data collection will become sporadic and data quality will suffer, increasing the effort of the technical support staff who must validate the collected data. Any step that automates some or all of the data collection will contribute to the efficiency and quality of the data collected; one such step is sketched below.
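For example, a size measure can often be collected by a script rather than by hand. The sketch below counts non-blank source lines under a directory tree; the file suffix and the notion of "size" are assumptions that would need to match the project's recorded definitions.

```python
import os

def count_source_lines(root: str, suffix: str = ".py") -> int:
    """Automated size measure: count non-blank lines in source files.

    A sketch of one automatable collection step; the definition of
    'size' must match the project's recorded definition of the term.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            if filename.endswith(suffix):
                path = os.path.join(dirpath, filename)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += sum(1 for line in f if line.strip())
    return total

print(count_source_lines("src"))  # submit this value per the recorded schedule
```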
Activities within the data storage procedure include:
- A description of the checking, validation, and verification of the quality of the data collected. This includes automated analysis of the data: logic checks, detection of missing or repetitive entries, and comparison against typical value ranges. A sketch of such checks follows this list.
- A description of which data or intermediate analyses will be made and kept, or made and discarded (assuming the discarded analyses can be reconstructed if needed). This includes a listing of analyses requested by stakeholder organizations, lists of continuing metrics, and changes to the analyses to reflect progression through the software development life cycle.
- The identification of a proper storage system or site, along with the management steps to access, use, and control the appropriate database management system (DBMS). The use of a DBMS allows multiple projects and organizations to access the data in a format that supports their project-specific or organizational objectives.
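As a minimal sketch of the automated quality checks named above, the functions below flag missing fields, out-of-range values, and repetitive entries. The field names and the value range are illustrative assumptions; real checks would follow the project's recorded definitions and limits.

```python
def validate_record(record: dict) -> list[str]:
    """Illustrative quality checks on one collected measurement record."""
    issues = []
    for field in ("project", "measure", "value", "date"):
        if not record.get(field):                        # missing entries
            issues.append(f"missing field: {field}")
    value = record.get("value")
    if isinstance(value, (int, float)) and not (0 <= value <= 1_000_000):
        issues.append(f"value {value} outside typical range")  # range check
    return issues

def find_duplicates(records: list[dict]) -> list[dict]:
    """Flag repetitive entries (same project, measure, and date)."""
    seen, dupes = set(), []
    for record in records:
        key = (record.get("project"), record.get("measure"), record.get("date"))
        if key in seen:
            dupes.append(record)
        seen.add(key)
    return dupes
```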
Metrics (or indicators) are computed from measures using the approved analysis procedures (see SWE-093). They are quantifiable indices used to compare software products, processes, or projects, or to predict their outcomes. They show trends of increasing or decreasing values relative to previous values of the same metric, and they show containment or breaches of pre-established limits, such as the allowable number of latent defects. Management metrics are measurements that help evaluate how well software development activities are performing across multiple development organizations or projects. Trends in management metrics support forecasts of future progress, early detection of trouble, and realism in plan adjustments across all candidate approaches that are quantified and analyzed. The method for developing and recording these metrics is also written into the Software Development Plan (SWE-102).
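To make the containment idea concrete, the sketch below compares the latest value in a per-build series of latent-defect counts against a pre-established limit and reports the trend. The series, the limit, and the function name are illustrative assumptions, not part of the cited requirements.

```python
def latent_defect_status(defect_counts: list[int], limit: int) -> str:
    """Sketch of a containment check against a pre-established limit.

    `defect_counts` is a per-build series of latent defects (illustrative);
    `limit` is the allowable-latent-defect threshold from the plan.
    """
    current = defect_counts[-1]
    rising = len(defect_counts) > 1 and current > defect_counts[-2]
    trend = "increasing" if rising else "steady or decreasing"
    status = "breach" if current > limit else "contained"
    return f"latent defects: {current} ({trend}), limit {limit}: {status}"

print(latent_defect_status([3, 4, 6], limit=5))
# -> latent defects: 6 (increasing), limit 5: breach
```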
As the project objectives evolve and mature, and as requirements understanding improves and solidifies, software measures can be evaluated and updated where necessary or beneficial. Updated measurement selections, data to be recorded, and recording procedures are documented during update and maintenance of the Software Development Plan.