

1. Requirements

2.1.3.9 For Class A, B, C, and safety critical software projects, the Center Director shall establish and maintain a software measurement repository for software project measurements containing at a minimum:
a. Software development tracking data.
b. Software functionality achieved data.
c. Software quality data.
d. Software development effort and cost data.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.


2. Rationale

Software measurement programs are established to meet measurement objectives and goals at multiple levels. The data gained from these programs are used in the management of projects, in assuring safety and quality, and in improving overall software engineering practices. Software measurement repositories help manage these data and provide the insight needed on projects. Measurement repositories should be established for collecting, storing, analyzing, and reporting measurement data based on the requirements of the projects and the Center. The repository enables the Center to assess its current software status and the engineering capabilities of providers for future work. Once these measurement systems are established, the software measures and analysis results they produce enable Center-wide assessments of workforce abilities and skills and of opportunities for improving software development.


3. Guidance

A software organization is expected to collect metrics in accordance with a common procedure to facilitate uniformity in the data collected. The software organization designates a storage location so that project metric data can be viewed and used by the organization. An effective storage approach allows long-term access to the metric information for use in trending assessments and analyses. The data gained from these repositories are used in the management of projects, for assuring safety and quality, and in improving overall software engineering practices. Measurement repositories can be at the Center level or at an organizational level within a Center; each Center should decide which approach works best for it.

In general, Center-level measurement systems are designed to meet the following high-level goals:

  • To improve future planning and cost estimation.
  • To provide realistic data for progress tracking.
  • To provide indicators of software quality.
  • To provide baseline information for future process improvement activities.

The software measurement repository stores metrics history that can be used to evaluate current and future projects. The availability of past metrics data can be the primary source of information for calibration, planning estimates, benchmarking, and process improvement activities.
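As an illustration of how stored history supports calibration and planning estimates, the sketch below (Python) derives a productivity baseline from completed projects in a repository and applies it to a rough effort estimate for a new project. The project data, field names, and sizing figures are entirely hypothetical and are not part of NPR 7150.2 or any Center procedure.

# A minimal sketch, assuming hypothetical historical records, of using repository
# history for calibration: derive a productivity baseline, then apply it.
past_projects = [
    {"name": "Project A", "ksloc": 42.0, "labor_hours": 21000},
    {"name": "Project B", "ksloc": 18.5, "labor_hours": 11100},
    {"name": "Project C", "ksloc": 65.0, "labor_hours": 35750},
]

# Productivity baseline: average labor hours per KSLOC across completed projects.
hours_per_ksloc = sum(p["labor_hours"] / p["ksloc"] for p in past_projects) / len(past_projects)

# Rough effort estimate for a planned 30-KSLOC project, calibrated to this history.
estimated_hours = 30.0 * hours_per_ksloc
print(f"Baseline: {hours_per_ksloc:.0f} hours/KSLOC; estimate: {estimated_hours:.0f} hours")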

With these high-level goals in mind, Centers are to establish and maintain a specific measurement repository for their particular programs and projects to enable reporting on the minimum required categories (a minimal data-model sketch follows the list below):

  • Software development tracking data:

This data is tracked throughout the life cycle, and includes, but is not limited to, the planned and actual values for software resources, schedule, implementation status, and test status.  This information may be reported in the Software Metrics Report (see Metrics).

  • Software functionality achieved data:

This is data that monitors the functionality of software and includes, but is not limited to, the planned vs. actual number of requirements and function points.  It also includes data on utilization of computer resources. This information may be reported in the Software Metrics Report (see Metrics).

  • Software quality data:

This data is used to determine the quality of the software produced during each phase of the software life cycle. Software quality data includes figures and measurements regarding software problem reports/change requests, review item discrepancies, peer reviews/software inspections, software audits, and software risks and mitigations. This information may be reported in the Software Metrics Report (see Metrics).

  • Software development effort and cost:

Effort and cost data is used to monitor the cost and progress of software development.  This type of data can include progress toward accomplishing planned activities, number of schedule delays, resource and tool expenses, costs associated with test and verification facilities, and more. This information is typically captured in some type of status reporting tool. 
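The sketch below (Python) illustrates one way a per-project, per-period record covering these four minimum categories might be structured. All class and field names are hypothetical and are offered only as a starting point, not as a prescribed repository schema.

# A minimal data-model sketch, assuming hypothetical field names, for a record
# covering the four minimum data categories of the requirement.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackingData:            # a. software development tracking data
    planned_schedule_weeks: float
    actual_schedule_weeks: float
    implementation_status_pct: float
    test_status_pct: float

@dataclass
class FunctionalityData:       # b. software functionality achieved data
    planned_requirements: int
    actual_requirements: int
    function_points: int
    cpu_utilization_pct: Optional[float] = None

@dataclass
class QualityData:             # c. software quality data
    open_problem_reports: int
    peer_review_findings: int
    audit_findings: int
    open_risks: int

@dataclass
class EffortCostData:          # d. software development effort and cost data
    labor_hours: float
    cost_dollars: float
    schedule_delays: int

@dataclass
class ProjectMeasurementRecord:
    project_id: str
    reporting_period: str      # e.g., "2024-Q2"
    tracking: TrackingData
    functionality: FunctionalityData
    quality: QualityData
    effort_cost: EffortCostData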

Since this is a Center repository, measurements are collected across programs and projects.  Proper execution of data collection necessitates an agreed-to plan for the Center’s software measurement repository. This plan is based on the evaluation and interpretation of specified software measures that have been captured, validated and analyzed according to the approved procedures in the programs/projects’ software measurement plans. The collection plan begins with clear information that includes:

  • A description of all data to be provided.
  • A precise definition of terms.
  • A description of responsibilities for data provision and analyses.
  • A description of time frames and responsibilities for reception of the data and analyses being provided.

Each Center develops its own measurement program that is tailored to its needs and objectives, and is based on an understanding of its unique development environment.

Activities within the data collection procedure include the following (an illustrative data description appears after the list):

  • A clear description of all data to be provided. This includes a description of each item and its format, a description of the physical or electronic form to be used, and the location or address to which the data are to be sent (i.e., the measurement repository).
  • A clear and precise definition of terms. This includes project- or organization-specific criteria and definitions, and a description of how to perform each step in collecting the data and storing it in the measurement repository.
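As an illustration, a Center might publish its data description as a machine-readable specification like the sketch below (Python). The item names, formats, submission mechanism, destinations, and due dates are hypothetical examples, not Handbook requirements.

# A minimal sketch, assuming hypothetical data items and responsibilities, of a
# published description of the data each project provides to the repository.
DATA_COLLECTION_SPEC = {
    "labor_hours": {
        "definition": "Actual staff hours charged to the software project this period",
        "format": "float, hours, one decimal place",
        "form": "CSV upload via the Center measurement portal",   # hypothetical mechanism
        "destination": "center_measurement_repository/effort",    # hypothetical location
        "provider": "project software lead",
        "due": "5 business days after the end of the reporting period",
    },
    "open_problem_reports": {
        "definition": "Count of software problem reports open at period end",
        "format": "non-negative integer",
        "form": "CSV upload via the Center measurement portal",
        "destination": "center_measurement_repository/quality",
        "provider": "software assurance lead",
        "due": "5 business days after the end of the reporting period",
    },
}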

Activities within the data storage procedure, which covers placing data in the measurement repository (SWEREF-329), include the following (a set-level validation sketch appears after the list):

  • A description of the checking, validation, and verification (V&V) of the quality of the data sets collected. This includes checks for proper formats, logical consistency, missing entries, repetitive entries, and typical value ranges (expect these to be oriented toward validation at the set level, since individual data checking and validation will already have been performed at the project level).
  • A description of what data sets, or intermediate analyses will be made and kept, or made and discarded (assuming the analyses can be reconstructed if needed). This includes a listing of requested analyses by Center stakeholders, lists of continuing metrics, and descriptions of changes to the analyses to reflect advances in the software development life cycle.
  • The identification of the Center's measurement repository, and the site and management steps to access, use, and control the data and the associated database management system (DBMS) in the repository. The use of a DBMS in the repository allows multiple projects and organizations to access the data in a format that supports their specific project or organizational objectives.
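The sketch below (Python) illustrates the kinds of set-level checks named above: proper formats, missing entries, repetitive entries, logical consistency, and typical value ranges. The field names and ranges are hypothetical; a Center's actual checks would follow its own approved procedures.

# A minimal sketch, assuming hypothetical fields and ranges, of set-level
# validation run before a submitted data set is accepted into the repository.
def validate_data_set(records):
    """Return a list of issues found; an empty list means the set passes."""
    issues = []
    required = {"project_id", "reporting_period", "labor_hours", "open_problem_reports"}
    seen = set()
    for i, rec in enumerate(records):
        missing = required.difference(rec.keys())
        if missing:                                              # missing entries
            issues.append(f"record {i}: missing fields {sorted(missing)}")
            continue
        key = (rec["project_id"], rec["reporting_period"])
        if key in seen:                                          # repetitive entries
            issues.append(f"record {i}: duplicate submission for {key}")
        seen.add(key)
        if not isinstance(rec["labor_hours"], (int, float)):     # format check
            issues.append(f"record {i}: labor_hours is not numeric")
        elif not (0 <= rec["labor_hours"] <= 100_000):           # typical value range
            issues.append(f"record {i}: labor_hours outside the expected range")
        if rec["open_problem_reports"] < 0:                      # logical consistency
            issues.append(f"record {i}: negative problem report count")
    return issues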

Metrics (or indicators) are computed from measures using the Center's analysis procedures. They are quantifiable indices used to compare software products, processes, or projects, or to predict their outcomes. They show trends of increasing or decreasing values relative to the previous value of the same metric. They also show containment or breaches of pre-established limits, such as the allowable number of latent defects.
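The sketch below (Python) shows how such an indicator might be derived from stored measures: a defect-density value, its trend relative to the prior period, and its containment against a pre-established limit. The 0.5 defects/KSLOC limit is purely illustrative and is not a NASA or Handbook threshold.

# A minimal sketch of deriving an indicator: a trend relative to the previous
# value and containment against a pre-established (illustrative) limit.
def defect_density_indicator(defects, ksloc, previous_density,
                             allowed_defects_per_ksloc=0.5):
    density = defects / ksloc
    trend = "increasing" if density > previous_density else "decreasing or flat"
    within_limit = density <= allowed_defects_per_ksloc
    return {"density": density, "trend": trend, "within_limit": within_limit}

# Example: 12 latent defects in 30 KSLOC, prior-period density 0.35 defects/KSLOC.
print(defect_density_indicator(12, 30.0, 0.35))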

Management metrics are measurements that help evaluate how well software development activities are performing across multiple Centers, development organizations, programs or projects. Trends in management metrics support forecasts of future progress, early trouble detection, and realism in plan adjustments of all candidate approaches that are quantified and analyzed.

NASA-specific establishing and maintaining software measurement repository information and resources are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook. 

Additional guidance related to software measurements may be found in the following related requirements in this Handbook: 


4. Small Projects

No additional guidance is available for small projects. 


5. Resources

(References table)

(Tools table)


6. Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to Measurement Selection:

  • Selection and use of Software Metrics for Software Development Projects. Lesson Learned Number 3556:  "The design, development, and sustaining support of Launch Processing System (LPS) application software for the Space Shuttle Program provide the driving event behind this lesson.

"Metrics or measurements provide visibility into a software project's status during all phases of the software development life cycle in order to facilitate an efficient and successful project." The Recommendation states that: "As early as possible in the planning stages of a software project, perform an analysis to determine what measures or metrics will used to identify the 'health' or hindrances (risks) to the project. Because collection and analysis of metrics require additional resources, select measures that are tailored and applicable to the unique characteristics of the software project, and use them only if efficiencies in the project can be realized as a result. The following are examples of useful metrics:

    • "The number of software requirement changes (added/deleted/modified) during each phase of the software process (e.g., design, development, testing).
    • "The number of errors found during software verification/validation.
    • "The number of errors found in delivered software (a.k.a., 'process escapes').
    • "Projected versus actual labor hours expended.
    • "Projected versus actual lines of code, and the number of function points in delivered software."
      Swerefn
      refnum577


  • Flight Software Engineering Lessons. Lesson Learned Number 2218: "The engineering of flight software (FSW) for a typical NASA/Caltech Jet Propulsion Laboratory (JPL) spacecraft is a major consideration in establishing the total project cost and schedule because every mission requires a significant amount of new software to implement new spacecraft functionality."

Recommendation No. 8 of this lesson learned provides the following step, among others, to mitigate the risk from defects in the FSW development process:

"Use objective measures to monitor FSW development progress and to determine the adequacy of software verification activities. To reliably assess FSW production and quality, these measures include metrics such as the percentage of code, requirements, and defined faults tested, and the percentage of tests passed in both simulation and test bed environments. These measures also identify the number of units where both the allocated requirements and the detailed design have been baselined, where coding has been completed and successfully passed all unit tests in both the simulated and test bed environments, and where they have successfully passed all stress tests."

(SWEREF-572)