
1. The Requirement

2.6.2.2 The project shall require the software supplier(s) to provide software metric data as defined in the project's Software Metrics Report.

1.1 Notes

The requirement for the content of a Software Metrics Report is defined in Chapter 5 [section 5.3.1 of NPR 7150.2, NASA Software Engineering Requirements].

1.2 Applicability Across Classes

Class D Not Safety Critical and Class G are labeled "P (Center)." This means that an approved Center-defined process that meets a non-empty subset of the full requirement can be used to achieve this requirement.


  • Class A: applicable (Safety Critical and Not Safety Critical)
  • Class B: applicable (Safety Critical and Not Safety Critical)
  • Class C: applicable (Safety Critical and Not Safety Critical)
  • Class D: applicable (Safety Critical); P (Center) (Not Safety Critical)
  • Class E: applicable (Safety Critical); not applicable (Not Safety Critical)
  • Class F: applicable
  • Class G: P (Center)
  • Class H: not applicable




2. Rationale

The software development team needs to acquire software metrics data in a timely and periodic manner to assure the effectiveness of the insight and oversight activities being performed by NASA. The specification of those metrics, and when they are due, is effectively accomplished by identifying them in the software acquisition contract statement of work (SOW) clauses and instructions (see SWE-048). Effective insight and oversight of the software development by the NASA software development team is achieved by obtaining and reviewing these metrics and making decisions based on them.



3. Guidance

This is a key requirement that must be addressed on all NASA software projects. Access to supplier metric data needs to be defined up front in the SOW, task agreement, software plans, or other assignment paperwork. Take special care to identify this requirement clearly in in-house documentation and in prime contractor and subcontractor requirements. NASA needs direct insight into software metrics on NASA software projects.

Measurement is defined as the process of assigning numerical values to process, product, or project attributes according to defined criteria (SWE Ref. 254).

The measurement process is based on estimation or direct measurement. Estimation activities result in planned or expected measures; direct measurement activities result in actual measures. A "measure" is the result of counting or otherwise quantifying an attribute of a process, project, or product; examples of measures include size, cost, and defects. A "metric" is a measurement that provides a basis for making a decision or taking action (SWE Ref. 254).
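To illustrate the distinction between measures and a metric, the short sketch below (illustrative only; the class name, values, and threshold are invented and are not part of the handbook) derives a defect-density metric from two raw measures.

# Illustrative example only: the class, values, and threshold are invented.
from dataclasses import dataclass

@dataclass
class BuildMeasures:
    """Raw measures collected for one software build."""
    sloc: int      # size measure: source lines of code
    defects: int   # count measure: defects found during test

def defect_density(m: BuildMeasures) -> float:
    """Derived metric: defects per thousand source lines of code (KSLOC)."""
    return m.defects / (m.sloc / 1000)

build = BuildMeasures(sloc=42_000, defects=63)
print(f"Defect density: {defect_density(build):.2f} defects/KSLOC")
# Acting on the metric (e.g., adding inspections above an agreed threshold)
# is what turns the underlying counts into a basis for decisions.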

Measurement is a key process area for successful management and is applied to all engineering disciplines. Measurement helps to define and implement more realistic plans, as well as monitor progress against those plans. Measurement data provides objective information that helps project management perform the following.

  • More accurately plan a project or program that is similar to one that has been completed.
  • Identify and correct problems early in the life cycle (more proactive than reactive).
  • Assess impact of problems that relate to project or program objectives.
  • Make proper decisions that best meet program objectives.
  • Defend and justify decisions.

Software metrics are typically used for estimation (e.g., size, effort, and cost), productivity measurements, reliability measurements, quality measurements, and project and task management. A metric quantifies a characteristic of a process or product and defines what is to be measured. Metrics help in managing and controlling software projects and in learning more about the way the organization operates and performs. Metrics are also a tool that highlights potential problems or deficiencies in the development process or in the products themselves. They provide quantitative and qualitative measures that help focus management's attention and resources, when necessary, on the prevention and/or correction of problems.

The term "indicator" is used to help an organization define and measure progress toward organizational goals. Once an organization has analyzed its mission, identified all its relevant stakeholders, and defined its objectives, it needs a way to measure progress toward those goals. Indicators are those measurements. If data is used to make safety decisions (either by a human or the system), then the data is safety-critical, as is all the software that acquires, processes, and transmits the data. Refer to NASA-STD-8719.13, NASA Software Safety Standard

sweref
271
271
for direction on how to handle safety-critical data.

A successful measurement process is characterized by decision making that regularly draws on the results of analyzing objective measurement data. To ensure successful implementation of a project measurement process, the following activities are needed (a sketch of how these activities might fit together is given after the list):

  • Organizational goals/objectives are defined/changed.
  • Information needs for the measurement activities are identified and planned.
  • An appropriate set of measures, driven by organizational objectives, is derived.
  • Required data is collected, stored, analyzed and reported.
  • Indicators are used to provide an objective basis for decision making.
  • Measurement processes and measures are tracked and evaluated.
  • Improvements and best practices are captured and communicated, and it is determined whether modifications to goals, metrics, or strategy are required.
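The sketch below is illustrative only (the class names, field names, values, and threshold are invented, not drawn from the handbook); it shows one minimal way the collect/store/analyze/report activities above could be wired together so that an indicator provides an objective basis for a decision.

# Illustrative sketch only: names, values, and the threshold are invented.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Measure:
    name: str      # e.g., "open_defects"
    value: float
    period: str    # reporting period, e.g., "2024-03"

@dataclass
class MeasurementRepository:
    """Collects and stores measures so they can be analyzed and reported."""
    records: list = field(default_factory=list)

    def collect(self, measure: Measure) -> None:
        self.records.append(measure)

    def indicator(self, name: str) -> float:
        """A simple indicator: the mean of a measure across reporting periods."""
        values = [m.value for m in self.records if m.name == name]
        return mean(values) if values else 0.0

repo = MeasurementRepository()
repo.collect(Measure("open_defects", 12, "2024-01"))
repo.collect(Measure("open_defects", 19, "2024-02"))

# The indicator provides an objective basis for a decision.
if repo.indicator("open_defects") > 15:
    print("Indicator exceeds the agreed threshold; review the defect backlog.")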

The figure below describes a typical software metric process flow. The measurement process below is based upon the Software Engineering Institute's Capability Maturity Model Integration (CMMI) (SWE Ref. 157). The blue highlighting indicates the planning activities needed when new organizational goals, metrics, and/or strategy are indicated.

Figure 3.1 Software Metric Process Flow

WHY WE MEASURE

It is often difficult to accurately understand the status of a project or determine how well development processes are working without some measures of current performance and a baseline for comparison purposes. Metrics support better management and control of software projects and work to establish greater insight into the way the organization is operating. There are four major reasons for measuring software processes, products, and resources. They are to Characterize, Evaluate, Predict, and Improve.

  1. Characterization is performed to gain understanding of processes, products, resources, and environments, and to establish baselines for comparison with future efforts.
  2. Evaluation is used to determine status with respect to plans. Measures are the signals that provide knowledge and awareness when projects and processes are drifting off track, so that they can be brought back under control. Evaluations are also used to assess achievement of quality goals and to assess the impacts of technology and process improvements on products and processes.
  3. Predictions are made so that planning can be performed more proactively. Measuring for prediction involves understanding the relationships among processes and products so that the values observed for some attributes can be used to predict others. The aim is to establish achievable goals for cost, schedule, and quality so that appropriate resources can be applied and managed. Projections and estimates based on historical data help analyze risks and support design and cost tradeoffs (see the sketch after this list).
  4. An organization measures to improve when it gathers quantitative data and information to help identify inefficiencies and opportunities for improving product quality and process performance. Measures help to plan and track improvement efforts. Measures of current performance give baselines to compare against, so that an organization can determine whether improvement actions are working as intended. Good measures also help to communicate goals and convey the reasons for improving.
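As a concrete, purely illustrative example of measuring for prediction, the snippet below estimates effort for a new component from invented historical size/effort pairs using average observed productivity; real projects would use their own historical data and estimation models.

# Purely illustrative prediction from historical data; all numbers are invented.
# Effort for a new component is estimated from past (size, effort) pairs
# using average observed productivity.

historical = [
    (12_000, 2_400),   # (size in SLOC, effort in hours) from past projects
    (8_500, 1_900),
    (20_000, 4_100),
]

# Average productivity observed so far, in hours per source line of code.
hours_per_sloc = sum(effort for _, effort in historical) / sum(size for size, _ in historical)

predicted_size = 15_000                      # estimated size of the new component
predicted_effort = predicted_size * hours_per_sloc
print(f"Predicted effort: {predicted_effort:.0f} hours")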

Measurement is an important component of any project and product development effort. It is applied to all facets of software development and engineering disciplines. Before a process can be efficiently managed and controlled, it has to be measured.

The content of the Software Metrics Report (section 5.3.1 of NPR 7150.2; see SWE-117) is set up as a common approach to collecting and reporting software metrics. It requires that metrics information be reported on a CSCI (Computer Software Configuration Item) basis. All NASA software development follows some level of defined software processes. The reporting processes used in a software development activity can be derived from a set of common processes defined at the Agency, Center, or organizational level. As a minimum, whichever processes are used, the following reporting categories shown in SWE-117 are required for summarizing and organizing the minimum information needed (one possible way to organize a report around these categories is sketched after the list):

  • Software progress tracking
  • Software functionality
  • Software quality
  • Software requirements volatility
  • Software characteristics
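The following sketch shows one hypothetical way a project might structure a per-CSCI entry in a metrics report around the five SWE-117 categories; the class name, field names, and example entries are invented and are not prescribed by NPR 7150.2.

# Hypothetical structure only: names and entries are invented and are
# not prescribed by NPR 7150.2 or SWE-117.
from dataclasses import dataclass, field

@dataclass
class CsciMetricsReport:
    """One CSCI's entry in a Software Metrics Report, organized by the
    five reporting categories listed above."""
    csci_name: str
    progress_tracking: dict = field(default_factory=dict)        # e.g., units coded vs. planned
    functionality: dict = field(default_factory=dict)            # e.g., requirements implemented
    quality: dict = field(default_factory=dict)                  # e.g., defects open/closed
    requirements_volatility: dict = field(default_factory=dict)  # e.g., requirement changes per period
    characteristics: dict = field(default_factory=dict)          # e.g., size, language mix

report = CsciMetricsReport(csci_name="GNC_FSW")
report.progress_tracking["units_coded"] = 42
report.quality["defects_open"] = 7
report.requirements_volatility["changes_this_period"] = 3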

The NASA approach to contractor-developed software work products requires that contractor terms and deliverables be explicitly listed in the contract SOW. To be most effective, this includes the list of software metrics needed to manage the insight and oversight activities. This requirement is levied on the contractor as a provision in the software acquisition agreement or contract SOW.

Additional guidance related to software measurement determination, collection, analysis, and reporting may be found in the following related requirements in this handbook:


  • SWE-090, Measurement Objectives
  • SWE-091, Measurement Selection
  • SWE-092, Measurement Collection and Storage
  • SWE-093, Analysis of Measurement Data
  • SWE-094, Reporting of Measurement Analysis
  • SWE-095, Directorate Measurement System
  • SWE-096, Directorate Measurement Objectives
  • SWE-117, Software Metrics Report


Examples of software metrics are shown in Table 3.1.

Table 3.1 Software Metric Examples



4. Small Projects

No additional guidance is available for small projects. The community of practice is encouraged to submit guidance candidates for this paragraph.



5. Resources


[References table]

[Tools table]



6. Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to software metrics:

  1. Selection and use of Software Metrics for Software Development Projects, Lesson No. 3556: The design, development, and sustaining support of Launch Processing System (LPS) application software for the Space Shuttle Program provide the driving event behind this lesson.

    "Metrics or measurements should be used to provide visibility into a software project's status during all phases of the software development life cycle in order to facilitate an efficient and successful project."  The Recommendation is: "As early as possible in the planning stages of a software project, perform an analysis to determine what measures or metrics will used to identify the "health" or hindrances (risks) to the project. Because collection and analysis of metrics require additional resources, select measures that are tailored and applicable to the unique characteristics of the software project, and use them only if efficiencies in the project can be realized as a result. The following are examples of useful metrics:

          - The number of software requirement changes (added/deleted/modified) during each phase of the software process (e.g., design, development, testing)
          - The number of errors found during software verification/validation
          - The number of errors found in delivered software (a.k.a. "process escapes")
          - Projected versus actual labor hours expended
          - Projected versus actual lines of code, and the number of function points in delivered software."
    (SWE Ref. 577)

  2. Flight Software Engineering Lessons, Lesson No. 2218: "The engineering of flight software (FSW) for a typical NASA/Caltech Jet Propulsion Laboratory (JPL) spacecraft is a major consideration in establishing the total project cost and schedule because every mission requires a significant amount of new software to implement new spacecraft functionality."

    The lesson learned statement provides this step as well as other steps to mitigate the risk from defects in the FSW development process:

    "Use objective measures to monitor FSW development progress and to determine the adequacy of software verification activities. To reliably assess FSW production and quality, these measures should include metrics such as the percentage of code, requirements, and defined faults tested, and the percentage of tests passed in both simulation and test bed environments. These measures should also identify the number of units where both the allocated requirements and the detailed design have been baselined, where coding has been completed and successfully passed all unit tests in both the simulated and test bed environments, and where they have successfully passed all stress tests."
    (SWE Ref. 572)

  3. Acquisition and Oversight of Contracted Software Development (1999), Lesson No. 0921: "The loss of Mars Climate Orbiter (MCO) was attributed to, among other causes, the lack of a controlled and effective process for acquisition of contractor-developed, mission critical software. Under the MCO procurement strategy, JPL placed full responsibility for flight software development in the hands of a contractor/industrial partner and did not monitor the quality of the contractor's product."

    The collection of appropriate software metrics through an insight and oversight process enables the software development team to monitor the quality of the contractor's product.
    (SWE Ref. 528)