1. Purpose

This document provides guidance for projects implementing the NPR 7150.2 requirements that address or include software measurement, including SWE-032, SWE-090, SWE-091, SWE-092, SWE-093, SWE-094, SWE-102, and SWE-117. This guidance is intended for all persons responsible for the software measurement process, from the planning stage through the implementation stages of collection, storage, analysis, and reporting.

1.1 Roles


  • Software Project Lead: Prepares the measurement plan; ensures measures are collected and stored as planned; analyzes measures and takes appropriate actions; reports measurement results to appropriate stakeholders. Responsible for using measurements to assist with the management of the software project.
  • Project Measurement Support Person: Collects measurements from project members and project tools; stores measures in the specified storage location; generates measurement charts ready for analysis.
  • Project Members: Supply measurement data and populate tools designed to capture measurement data.
  • SEPG Lead or Organizational Measurement Team Representative: Provides the software project with the organizational measurement objectives and the organizationally defined set of measures; collects the organizationally defined set of measures, analyzes the data across projects, and develops organizational models to aid in evaluating project performance and identifying opportunities for process improvement.
  • Relevant Line and Project Managers: Review measurement reports and analysis results to identify any areas that need higher-level management attention.


2. Planning the Measurement Activities

When a software lead on a project begins to plan the project's software measurement task, he/she needs to determine the project's information needs and how to satisfy them. Two commonly used methodologies help a software lead determine these needs: the Goal/Question/Metric (GQM) methodology, originally developed by Victor Basili of the Software Engineering Laboratory, and the similar Goal/Question/Indicator/Metric (GQIM) methodology, developed by the Software Engineering Institute.

The first step in both is to define the goals or objectives desired by the project for the measurement task. The goal specifies the purpose of the measurement, the object to be measured (usually a project product, process or resource), the issue to be measured and the viewpoint to be considered. An example of a typical project goal is to ensure that the software is delivered on schedule.

The next step is to refine the goal into questions that break down the issue into its major components. For the timely delivery goal, some questions might be: What is the planned delivery schedule? How much work needs to be completed before the delivery can be made? What is the planned rate of completing that work? Is the project completing the work as planned?

Using the questions, it is possible to determine what metrics can be collected to help answer the questions. Some potential measures might be the planned and the actual schedule as well as the current rate of work compared to the planned rate of work. Using this process to derive the measures is consistent with the GQM methodology.

In the case of the GQIM methodology, there is an extra step after refining the goal into questions in which consideration is given regarding the information to be displayed for easy analysis; in other words, consideration is given to what the charts look like (or what indicators would be helpful). In the above example, several charts might be useful to a software manager such as a Gantt chart with the planned versus actual schedule or an earned value chart showing the value of the work actually performed compared to the value of the work that was planned by this point in the schedule. (A higher level manager may only be interested in a top level indicator like a red/yellow/green flag to indicate the project schedule health.) As in the GQM methodology, once the questions and indicators have been determined, the metrics can also be determined.
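The GQM decomposition described above can be sketched as a simple traceability structure. This is a minimal illustration for the on-time-delivery goal; the field names and metric names are hypothetical, not a prescribed schema:

```python
# Hypothetical GQM breakdown for the goal "deliver the software on schedule".
# The structure and names are illustrative only.
gqm = {
    "goal": "Ensure the software is delivered on schedule",
    "questions": {
        "What is the planned delivery schedule?": ["planned schedule"],
        "How much work remains before delivery?": ["work remaining"],
        "What is the planned rate of completing that work?": ["planned work rate"],
        "Is the project completing the work as planned?": ["planned vs. actual work rate"],
    },
}

# Every metric traces back to the goal through at least one question.
metrics = sorted({m for ms in gqm["questions"].values() for m in ms})
print(metrics)
```

Keeping the questions explicit preserves the traceability from each collected metric back to the project goal, which is the point of both GQM and GQIM.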

Planning steps for the software lead are:

Step 1: Establish the measurement objectives for the project. (See SWE-090)

Step 2: Identify the measurement analyses that support these objectives (questions, indicators, as above)

Step 3: Specify the measures to be collected (See SWE-091)

Step 4: Specify the data collection and storage procedures (See SWE-092)

Step 5: Specify the analysis procedures (See SWE-093)

2.1 Step 1: Establish the measurement objectives for the project

In order to establish the measurement objectives or goals of the project, there are usually several things to consider. Typically, software projects are most interested in the following:

  • Managing the project with objective data
  • Providing data for planning future projects
  • Meeting any specific requirements that may have been levied on them

There is often an overlap in the information needed for the items above. For example, Class A and B projects need to consider measurements that might be expected to help satisfy CMMI (Capability Maturity Model Integration) requirements and all classes of software need to comply with the requirements in NPR 7150.2. Measurement objectives that are established for the project are to be consistent with the objectives specified in the NPR and any specific measurement objectives specified by the project's organizational measurement groups.

In addition to the organizational objectives and the NASA requirements, other sources for objectives include management, technical, product, or process implementation needs such as:

  • Achieve completion by the scheduled date
  • Complete the project within the budget
  • Devote adequate resources to each process area

Corresponding objectives might be:

  • Measure project progress to ensure that it is adequate to achieve completion by scheduled date
  • Track cost and effort to ensure project is within budget
  • Measure the resources devoted to each process area

Additional sources for information needs and objectives include strategic plans, high level requirements, the project Software Management Plan and operational concepts.

Once the project has established the measurement objectives and information needs, they are documented in either the project's Software Management Plan (See SWE-102) or in a separate measurement plan.

2.2 Step 2: Identify the measurement analyses that support these objectives

The software lead identifies the measurement analyses that will be done to support the objectives chosen. In order to help with this, think about the questions that need to be answered to provide the needed information. If there are multiple analyses that can be used, prioritize the candidates and select the key analysis that will provide the best information to satisfy the objective. There are several resources that could potentially assist with selecting the measurement analyses and also with the selection of the corresponding measures to be collected. Goddard Space Flight Center has a Measurement Planning Table that shows some common objectives with measurement analyses and corresponding measures. There is also a Project/Type Goal Matrix developed following the Headquarters Software Measurement Workshop that lists goals and corresponding measures based on the project's size. The selected analyses are documented in the Software Management Plan or measurement plan, maintaining the traceability to the objectives. Note: Analysis can be as simple as comparing the planned values to the actual values over time.

2.3 Step 3: Specify the measures to be collected

Continue to use the methodology and select the measures needed to support the analyses and the objectives already chosen. It is helpful to think about what types of indicators or charts are needed in each area. Possibilities include tables, trend charts, stoplight charts, bar charts comparing points in time, etc. The resources listed in Step 2 can assist with the selection of measures. According to SWE-091, measures are required to be selected in the following five areas:

  • Software progress tracking
  • Software functionality
  • Software quality
  • Software requirements volatility
  • Software characteristics

The first four areas provide information that can be used in the management of the project, addressing objectives such as completion of the project on time and within budget, delivery of the planned functionality, and delivery of high-quality software. The software characteristics measures are intended to provide enough background information on the project to allow roll-ups of measures across similar projects for organizational models. Measures selected for the project are defined by name, type, and unit and documented in the measurement plan or Software Management Plan.

Some examples of potential measures and possible chart representations are shown below for each of the first four categories.

Progress tracking measure examples:

Figure 1: Scheduled Activities Planned Versus Scheduled Activities Completed shows whether the activities planned on the project are being completed on schedule.

Figure 2: Planned Progress Points versus Actual Progress Points. For this chart, a certain number of points is assigned to each activity on the project (or a portion of the project). The points are then scheduled according to the dates when the activities need to be completed. As work progresses, points are earned toward the completion of the activities. Often a point represents a day or hour of estimated work. This method provides a more objective way of determining whether activities within the schedule are progressing satisfactorily. For more information on point counting, see the instructions for the Point Counting Tool listed in the Tools Table in the Resources section of this topic.
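
The planned-versus-earned comparison behind a chart like Figure 2 can be sketched as follows. The activity list, point values, and dates are invented for illustration:

```python
from datetime import date

# Hypothetical activity list: (name, points, scheduled completion, actual completion or None).
activities = [
    ("design module A", 5, date(2024, 3, 1), date(2024, 3, 1)),
    ("design module B", 8, date(2024, 3, 15), None),        # not yet complete
    ("code module A",   5, date(2024, 4, 1), date(2024, 4, 5)),
]

def points_status(activities, as_of):
    """Points scheduled to be complete by 'as_of' versus points actually earned."""
    planned = sum(p for _, p, sched, _ in activities if sched <= as_of)
    earned = sum(p for _, p, _, done in activities if done is not None and done <= as_of)
    return planned, earned

planned, earned = points_status(activities, date(2024, 4, 10))
print(planned, earned)  # earned well below planned suggests a schedule problem
```

Plotting the two totals over successive status dates produces the planned-versus-actual curves the figure describes.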
Figures 3 and 4: Earned Value – Schedule and Cost Performance. Earned value is a methodology used to measure the progress of a project taking into account the work complete, the time taken and the costs incurred to complete that work. Using this methodology, it is possible to predict the final cost and schedule, based on the current performance of the project. For more information on earned value, see NASA's Earned Value Management (EVM) website.
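
The standard earned-value indices behind charts like Figures 3 and 4 are simple ratios. A minimal sketch, with made-up dollar figures:

```python
def evm_indices(pv, ev, ac):
    """Standard earned-value performance indices.
    SPI = EV/PV (schedule) and CPI = EV/AC (cost); values below 1.0
    indicate behind schedule or over budget, respectively."""
    return ev / pv, ev / ac

# Hypothetical status: $100K of work planned to date, $80K earned, $90K spent.
spi, cpi = evm_indices(pv=100, ev=80, ac=90)
print(round(spi, 2), round(cpi, 2))
```

Because both indices normalize against the plan, they also support simple forecasts (e.g., dividing the total budget by CPI estimates cost at completion under current performance).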

Figure 5: Planned Staffing versus Actual Staffing shows whether the project has the planned amount of staff working on the project.
Figure 6: Planned Cost versus Actual Cost shows whether the costs incurred by the project are the costs expected at that point in the project.
Figure 7: Units Planned for Design versus Units Completed Design shows the number of units where design has been completed compared to the total number of units that need to be designed.
Figure 8: Tests Planned versus Tests Successfully Completed shows testing progress against plan.

This sample of progress tracking charts shows that there are many ways to examine progress and many factors the project might want to consider to determine whether satisfactory progress is being made. Although only one set of progress tracking measures is required by SWE-091, more than one set is often needed to give the project a clear picture of whether it is progressing satisfactorily. For example, if a project chose only to track actual costs against planned costs, and the charts showed that costs were well below plan, the assumption might be that the project was doing very well. However, a chart showing the progress of the activities completed might show that progress is far behind schedule. One possible reason might be that the project does not have all the staff necessary to complete the work as scheduled.

Often, a particular type of progress tracking chart is more applicable in a particular phase of the project such as the charts shown in Figures 7 and 8 for progress during the design and testing phases, respectively.
Figure 9: Number of Units Planned for Each Build shows the incremental count of units planned for each build as the builds progress. This is an example of a functionality chart.

For this chart, you can see that the largest number of units was planned for the first build in the initial build plan. In the second plan, the number of units to be delivered in build 1 has decreased and the numbers to be delivered in later builds have increased. In the third plan, even more of the units have shifted into the third build. This trend is generally a clear indication that the project is having difficulty delivering the planned functionality on time.
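
The deferral trend that a chart like Figure 9 reveals can be seen directly in the numbers. This sketch compares successive build plans; the unit counts are invented:

```python
# Units planned per build (build 1, build 2, build 3) in three successive
# build plans. Hypothetical counts for illustration.
plans = {
    "plan 1": [60, 30, 10],
    "plan 2": [45, 35, 20],
    "plan 3": [40, 30, 30],
}

# Units deferred out of build 1 relative to the initial plan.
deferred = plans["plan 1"][0] - plans["plan 3"][0]
print(deferred)
```

A steadily growing deferral count across replans, with the total held constant, is the numeric signature of functionality sliding into later builds.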
Figure 10: Requirements Changes per Month and Total Requirements is an example of a requirements volatility chart.

This chart shows a baseline of the number of expected requirements, the actual number of requirements over time, and the number of requirements changes by month. Ideally, the number of requirements changes would decrease to zero as the project moves into the design and implementation phases. Large changes, or large numbers of changes, in requirements in the later phases are likely to cause additional work or rework for the project and may affect staffing requirements or delivery schedules.
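
A volatility summary of the kind plotted in a chart like Figure 10 can be derived from a change log. The months, counts, and cutoff below are invented for illustration:

```python
from collections import Counter

# Hypothetical change log: the month in which each requirement change was approved.
change_log = ["2024-01", "2024-01", "2024-02", "2024-02", "2024-02", "2024-04"]

changes_per_month = Counter(change_log)
print(dict(changes_per_month))

# Flag changes arriving after an assumed design-phase start of 2024-03,
# since late changes are the ones likely to cause rework.
late_months = {m: n for m, n in changes_per_month.items() if m >= "2024-03"}
print(late_months)
```

Plotting `changes_per_month` alongside the running requirements total gives the two series the figure describes.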
Figure 11: Number of Open, Closed and New Defects Over Time is an example of a quality chart.

This chart shows a fairly large number of defects still open, and the number being closed is not increasing very rapidly. There are still a number of new defects, but the numbers seem to be decreasing. The software is not ready for delivery, since it still has a large number of open severity 1 defects and new ones are still being discovered. Note that a small number of defects may not indicate good quality software; it may instead indicate that inadequate testing has been done. If the organization has a baseline of typical numbers of defects/KSLOC (thousand source lines of code) for software similar to the project, it is helpful to compare the project's defect status with the organizational baseline.
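
The defect picture in a chart like Figure 11 reduces to a few counts per reporting period. A sketch with invented weekly counts, including the defects/KSLOC comparison against an assumed organizational baseline of 4 to 8 defects/KSLOC:

```python
# Hypothetical weekly defect counts: (new, closed) per week.
weekly = [(12, 2), (10, 5), (8, 6), (7, 6)]

# Running count of open defects; a flat or rising trend argues against delivery.
open_defects = 0
open_trend = []
for new, closed in weekly:
    open_defects += new - closed
    open_trend.append(open_defects)
print(open_trend)

# Defect density versus an assumed organizational baseline of 4 to 8 defects/KSLOC.
# Falling below the baseline may signal inadequate testing, not good quality.
total_defects = sum(new for new, _ in weekly)
ksloc = 10  # hypothetical product size
density = total_defects / ksloc
within_baseline = 4 <= density <= 8
print(density, within_baseline)
```

A density outside the baseline range in either direction warrants investigation, as the surrounding text notes.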

2.4 Step 4: Specify the data collection and storage procedures

During this step, the project decides how it will handle the logistics of collecting the chosen measurement data. The items to be decided and documented for each measure collected are the following: Who is responsible for collecting the data? How often will it be collected? Where will the responsible person find the data? Is there any data manipulation to be done (e.g., changes in units, or extraction from one tool and entry into another to produce charts)? Where will the data be stored? (Consider both the raw data and the resultant charts with analysis.) Once the details of collection and storage have been determined, they are documented in a collection and storage procedure that describes these specific details for the project. An example template for a collection and storage procedure can be found in the NASA PAL (Process Asset Library). Measurement collection is always easier if there are tools that provide the measures as the work is being performed.
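
A collection record answering the who/what/when/where questions above could be as simple as one row per measure. The field names and entries below are illustrative, not the PAL template:

```python
import csv
import io

# Hypothetical collection-procedure entries: one row per measure, recording
# who collects it, how often, where it comes from, and where it is stored.
fields = ["measure", "collector", "frequency", "source", "storage"]
rows = [
    ["actual effort", "measurement support", "weekly", "timecard system", "project repository"],
    ["open defects", "measurement support", "weekly", "defect tracker", "project repository"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(fields)
writer.writerows(rows)
print(buf.getvalue())
```

Keeping the procedure in a tabular, machine-readable form makes it easy to audit that every planned measure actually has an owner, a cadence, and a storage location.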

2.5 Step 5: Specify the analysis procedures

The final step in planning for measurement is to develop the analysis and reporting procedures. Essentially, this step defines the objective criteria used to evaluate the information provided by the collected measurement data. This type of information can help the project manager determine whether the data shown on the measurement charts indicates a problem or a good trend.

For many of the types of measurement charts, the analysis will be based on whether the data is within certain expected or planned boundaries as specified in the analysis procedures. For example, to analyze a chart showing cost over time compared with the planned cost, the analysis might be to compare the planned and actual costs and investigate further if the actual cost deviates from the plan by more than 10%. Often to complete the analysis, other measurement charts and information need to be considered. In the case of cost, the next step might be to look at the other information affecting cost. (Is the staffing higher than planned? Have the requirements increased? Did some piece of equipment cost more than planned?)
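
The 10% cost-deviation rule mentioned above can be written as a simple check. The threshold and the figures in the example are illustrative:

```python
def needs_investigation(planned, actual, threshold=0.10):
    """Flag a measure whose actual value deviates from plan by more than
    the threshold fraction (10% by default, as in the example above)."""
    return abs(actual - planned) / planned > threshold

print(needs_investigation(planned=200, actual=230))  # 15% over plan: investigate
print(needs_investigation(planned=200, actual=210))  # 5% over plan: within bounds
```

The same check applies symmetrically: an actual cost well below plan also trips the threshold, which, as noted earlier, can mean understaffing rather than efficiency.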

Another type of boundary used for analysis is an organizational "norm" for that type of data. For example, an organization might have a normal or expected range for the number of defects found during each test phase. If a project has considerably more or fewer defects during that phase, further investigation is warranted.

Analysis procedures typically specify the thresholds or boundaries used to trigger further investigation. The analysis procedures also specify other charts that might provide a more complete picture of the information being provided by the measurements.

The reporting procedures specify the types of measurement charts that will be reported to each level of management. Typically, the technical software managers need more detailed levels of measurement data and the higher level managers are more interested in seeing red/yellow/green indicators that only provide an indication of potential problems.  When red/yellow/green indicators are used, more detailed data can be made available to support these indicators. An example of an analysis and reporting procedure can be found on the NASA PAL.
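
A red/yellow/green roll-up of the kind described above might map a percentage deviation from plan onto a status flag. This is a minimal sketch; the band boundaries of 5% and 10% are assumptions, not a NASA standard:

```python
def status_flag(planned, actual, yellow=0.05, red=0.10):
    """Map schedule or cost deviation from plan to a stoplight status.
    Band boundaries (5% and 10% here) are illustrative assumptions."""
    deviation = abs(actual - planned) / planned
    if deviation > red:
        return "red"
    if deviation > yellow:
        return "yellow"
    return "green"

print(status_flag(100, 103), status_flag(100, 108), status_flag(100, 115))
```

The detailed planned-versus-actual data behind each flag is what the software manager keeps ready to explain any non-green indicator.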


3. Measurement Activities During the Project

Several primary measurement activities are conducted continuously throughout the project:

  • Collect the measurement data
  • Analyze the collected data
  • Take appropriate corrective actions, based on the data
  • Store collected data and analysis results
  • Communicate the results to the project team and appropriate management

During collection of the measurement data, the responsible measurement person collects all the measurement data specified in the project's Software Management Plan, or measurement collection and storage plan, and performs any data manipulation needed to produce the charts and graphs specified in the plan. Different types of data will be collected at different points in the project's life cycle and at different intervals. For example, earned value information may be collected in all phases of the project, but other progress tracking information, such as units designed versus units planned or the number of tests performed and passed versus the number of tests planned, would be collected only in the design or test phase, respectively. The raw data is stored for potential further analysis later in the project or for use by the organization to establish baselines.

Once the data has been collected and put into the planned charts or tables, the software manager will need to analyze the collected data. The software manager uses the guidelines established in the analysis and reporting procedure developed during the planning process. The purpose of the data analysis is to get an objective view of the project's health and to determine whether there are any issues that may need corrective action. In order to determine the full status of the project, it is often necessary to consider several sources of information together. For example, a progress tracking chart may show that several subsystems are not being implemented as quickly as planned. If the chart showing planned versus actual staffing is examined, it may show that the staffing level has fallen well below the planned effort. Now the software manager may need to investigate further to determine whether to take corrective action. It is possible that several team members have been on vacation or have spent a portion of their time on another project. This may have been a temporary problem, but it could affect the delivery date. Another possibility is that the team has lost one or more members. In this case, the manager may need to bring on additional staff or adjust the delivery date.

Another example of a situation where measures can highlight an issue for the software manager is shown in the following case of defect data during testing. Analysis of defect data that shows the number of defects open versus the number of defects closed and the number of new defects may show that a large number of defects are still open and that new defects are still being discovered. This would indicate that the project is not close to completing this test cycle. If the project had planned to end testing shortly and deliver the software, the software manager may need to extend the delivery schedule or plan for an additional build to fix the remaining errors.

It may be helpful to see the data represented in different ways to obtain a complete analysis of a situation. Using trend charts rather than point-in-time charts often provides more useful information to help determine whether a corrective action is required. In the previous case of defect data, one chart representation might simply show the number of defects opened or closed during the week. A more useful representation might show the trend of the total number opened and closed over several weeks.
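
The difference between the point-in-time view and the trend view is just cumulation. A minimal sketch using invented weekly defect counts:

```python
from itertools import accumulate

# Hypothetical defects opened and closed each week (the point-in-time view).
opened_per_week = [12, 10, 8, 7]
closed_per_week = [2, 5, 6, 6]

# Trend view: cumulative totals over the same weeks. A widening gap between
# the two series means the backlog of open defects is growing.
opened_trend = list(accumulate(opened_per_week))
closed_trend = list(accumulate(closed_per_week))
print(opened_trend)
print(closed_trend)
```

Either view can be produced from the same raw data, which is one reason the raw counts, and not just the charts, are worth storing.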

Analysis of the measurement charts is recorded with the measurement charts and both charts and analysis are stored together. Since the raw data was stored previously when it was collected, the project now has both the measurement data and its analysis stored for future reference.

During all phases of the project, the software manager reviews the measurement data regularly, analyzing it to determine whether any corrective action is necessary on the project. These activities occur whenever there is a need to determine the health of the project.

At regular intervals during the project, the software manager reports measurement results to appropriate stakeholders. When reporting to the team, customers, line managers, or other interested stakeholders, the software manager will want to use some of the measurement charts to show how the project is progressing, choosing the charts that are most applicable to the phase of the project and most useful for the particular stakeholder. For example, line managers and mission project managers are generally interested in knowing whether the project is on schedule and within budget. A line manager who helps assign staffing will be interested in seeing the staffing profiles. A requirements volatility chart is probably most interesting during the requirements and early design phases.

Often the higher level managers are not interested in actually seeing detailed measurement charts, but would like to see red/yellow/green indicators of status or trend arrows to show the general direction a project is going. If the indicators do not all indicate a positive status, then the prudent software manager is prepared to describe the situation using the more detailed measurement information he has already analyzed.


4. Project Completion

At the end of the project, all the measurement data on the project is archived along with the rest of the project information. Selected measurement data is sent to the organizational measurement team who will use this data to create planning models to describe the expected behavior of projects. For the purposes of this collection, project completion is generally defined as the delivery of the final version of the system for operations and maintenance. Typical data collected organizationally includes the project cost and size estimates with the corresponding assumptions (initial estimates, as well as those re-estimates made at milestones), milestone dates, effort by phase, size of the system as implementation progresses and completes, number of requirements and number of changes, and defects per testing phase. This type of information across a number of similar projects can provide baselines that future projects can use for comparison with their measurement results. The collection of the estimation data throughout the project and the actual size and effort data at completion will provide information on the accuracy of the estimates and will help refine future estimates on similar projects.
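
The estimation-accuracy comparison described above amounts to computing the relative error of each milestone estimate against the final actual. The milestone names, estimates, and actual below are invented:

```python
# Hypothetical effort estimates (staff-months) made at successive milestones,
# compared with the actual effort at project completion.
estimates = {"SRR": 100, "PDR": 110, "CDR": 118}
actual = 120

# Relative error of each estimate; negative values mean underestimation.
errors = {m: round((e - actual) / actual, 2) for m, e in estimates.items()}
print(errors)
```

Across several similar projects, error profiles like this show how estimates typically converge toward actuals, which is exactly the information future projects need to calibrate their own early estimates.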


5. Resources

5.1 Tools