Many resources exist to help a Center develop a software measurement program. The NASA "Software Measurement Guidebook" is aimed at helping organizations begin or improve a measurement program. The Software Engineering Institute at Carnegie Mellon University has detailed specific practices for measurement and analysis within its CMMI-DEV, Version 1.3 model. NASA resources that will help with the selection of measures for your project are the "Software Metrics Selection Presentation" and the "Project-Type/Goal/Metric Matrix". The Software Technology Support Center (STSC) at Hill Air Force Base has its "Software Metrics Capability Evaluation Guide". Other resources are suggested in the Resources section of this SWE.
Each organization or Mission Directorate develops its own measurement program that is tailored to its needs and objectives, and is based on an understanding of its unique development environment (see SWE-095 and SWE-096). Once a manager has the ability to track actual project measures against planning estimates, any observed differences are used to evaluate the status of the project and to support decisions to take corrective actions. The SEPG (Software Engineering Process Group) can also use this data to improve the software development processes. The manager may also consider comparing actual measures to established norms or benchmarks, either from the Center measurement program or from industry, for other possible insights.
When choosing project measures, check to see if your Center has a pre-defined set of measurements that meets the project's objectives. If so, then the specification of measures for the project begins there. Review the measures specified and initially choose those required by your Center. Make sure they are all tied to project objectives or are measures that are required to meet your organization's objectives.
To determine if any additional measures are required or if your Center does not have a pre-defined set of measures, think about the questions that need to be asked to satisfy project objectives. For example, if an objective is to complete on schedule, the following might need to be asked:
- How long is the schedule?
- How much of the schedule has been used and how much is left?
- How much work has been done? How much work remains to be done?
- How long will it take to do the remaining work?
From these questions, determine what needs to be measured to get the answers to key questions. One possible set of measures for the above set of questions is "earned value" (the sum of the budgeted cost for tasks and products that have actually been produced (completed or in progress) at a given point in the schedule). Similarly, think about the questions that need to be answered for each objective and see what measures will provide the answers. If several different measures will provide the answers, choose the measures that are already being collected or those that are easily obtained from tools.
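As an illustration, the earned-value measure described above can be computed directly from a task list. This is a minimal sketch; the task names, budgets, percent-complete figures, and planned value below are invented for the example, not data from any NASA project or guidebook.

```python
# Minimal earned-value sketch. All task data here are illustrative
# assumptions, not values from any real project.

def earned_value(tasks):
    """Sum of budgeted cost weighted by the fraction of work completed."""
    return sum(t["budget"] * t["percent_complete"] / 100.0 for t in tasks)

tasks = [
    {"name": "requirements", "budget": 40.0,  "percent_complete": 100},
    {"name": "design",       "budget": 60.0,  "percent_complete": 50},
    {"name": "coding",       "budget": 100.0, "percent_complete": 10},
]

ev = earned_value(tasks)                 # 40 + 30 + 10 = 80.0
planned_value = 120.0                    # budgeted cost of work scheduled to date (assumed)
schedule_variance = ev - planned_value   # negative => behind schedule

print(f"Earned value: {ev}, schedule variance: {schedule_variance}")
```

Comparing earned value against the planned value answers the schedule questions above: a negative variance indicates less work has been completed than the schedule called for.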
The presentation, "Software Metrics Selection Presentation" , gives a method for choosing project measures and provides a number of examples of measurement charts, with information showing how the charts might be useful for the project. The "Project-Type/Goal/Metric Matrix" is also a matrix developed following the series of NASA software workshops at Headquarters that might be helpful in choosing the project's measures. This matrix specifies the types of measures a project might want to collect to meet a particular goal, based on project characteristics, such as size.
The measurements need to be defined so project personnel collect data items consistently. The measurement definitions are documented in the project Software Management Plan (see SWE-102) or Software Metrics Report (see SWE-117), along with the measurement objectives. Items to be included as part of a project's measurement collection and storage procedure are:
- A clear description of all data to be provided.
- A clear and precise definition of terms.
- Who is responsible for providing which data.
- When and to whom the data are to be provided.
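One lightweight way to make the four items above explicit is to capture each measure in a structured record. The fields below mirror the list; the example measure, its definition, and the named roles are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class MeasureDefinition:
    name: str        # clear description of the data to be provided
    definition: str  # clear and precise definition of terms and units
    provider: str    # who is responsible for providing the data
    frequency: str   # when the data are to be provided
    recipient: str   # to whom the data are to be provided

# Hypothetical example entry, not taken from any NASA measurement program:
defect_density = MeasureDefinition(
    name="Defect density",
    definition="Defects found in test per KSLOC of delivered source code",
    provider="Test lead",
    frequency="Monthly, by the 5th business day",
    recipient="Software project manager",
)

print(defect_density)
```

Keeping the definitions in a form like this makes it easy to check that every measure has an owner, a schedule, and an unambiguous definition before collection begins.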
Data collection by the software development team works better if the time the team spends collecting data is minimized. If the software developers see data collection as a non-value-added task, it will become sporadic, affecting data quality and usefulness. Some suggestions for specifying measures:
- Don't collect too many measures. Be sure the project is going to use the measures.
- Think about how the project will use them. Visualize how the charts will look so they best communicate the information.
- Make sure measures apply to project objectives (or are being provided to meet sponsor or institutional objectives).
- Consider whether suitable measures already exist or whether they can be collected easily. The use of tools that automatically collect needed measures helps ensure consistent, accurate collection.
Issue tracking tools (e.g., JIRA) can be used to track and report measures and to provide status reports of all open and closed issues, including their position in the issue tracking life cycle. This can be used as a measure of progress.
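As a sketch of how issue-tracker data becomes a progress measure, the snippet below tallies issues by life-cycle state and reports the fraction closed. The issue records and state names are hypothetical; in practice they would be exported from a tool such as JIRA, whose default workflow states differ.

```python
from collections import Counter

# Hypothetical issue records standing in for an issue-tracker export.
# The ids and state names are illustrative assumptions.
issues = [
    {"id": 1, "state": "closed"},
    {"id": 2, "state": "in_review"},
    {"id": 3, "state": "open"},
    {"id": 4, "state": "closed"},
    {"id": 5, "state": "in_progress"},
]

by_state = Counter(i["state"] for i in issues)
closed = by_state["closed"]
total = len(issues)

print(f"{closed}/{total} issues closed ({100 * closed / total:.0f}%)")
for state, count in sorted(by_state.items()):
    print(f"  {state}: {count}")
```

Tracking this ratio over successive reporting periods gives the trend line that managers compare against the planned schedule.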
Static analysis tools (e.g., Coverity, CodeSonar) can provide measures of software quality and identify software characteristics at the source code level.
Measures such as requirements volatility can be tracked with general-purpose requirements development and management tools (e.g., DOORS), which can also provide reports on software functionality and software verification progress.
Links to the aforementioned tools can be found in section "5.1: Tools" of this SWE.