
{alias:SWE-094}

Tabsetup:
1. The Requirement
2. Rationale
3. Guidance
4. Small Projects
5. Resources
6. Lessons Learned


{div3:id=tabs-1}

h1. 1. Requirements

4.4.5 The project shall report measurement analysis results periodically and allow access to measurement information by Center-defined organizational measurement programs.

h2. 1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

h2. 1.2 Applicability Across Classes

Classes C-E and Safety Critical are labeled "P (Center) + SO." This means that this requirement applies to the safety-critical aspects of the software and that an approved Center-defined process which meets a non-empty subset of this full requirement can be used to meet the intent of this requirement.

Class C and Not Safety Critical and Class G are labeled with "P (Center)." This means that a Center-defined process which meets a non-empty subset of this full requirement can be used to meet the intent of this requirement.

Class F is labeled with "X (not OTS)". This means that this requirement does not apply to off-the-shelf (OTS) software for these classes.

{applicable:asc=1|ansc=1|bsc=1|bnsc=1|csc=*|cnsc=p|dsc=*|dnsc=0|esc=*|ensc=0|f=*|g=p|h=0}
{div3}

{div3:id=tabs-2}

h1. 2. Rationale

The intent of this requirement is that the software development project organizations provide or allow access to the software metric data during the project life cycle by those Center-defined organizations responsible for assessing and utilizing the metric data. Access can be provided to a number of organizations tasked to review or assess software development progress and quality. When a software effort is acquired from an industry partner, the contractor and/or subcontractors provide NASA with access to the software metric information in a timely manner to allow usage of the information.

NASA established software measurement programs to meet measurement objectives at multiple levels within the Agency. In particular, measurement programs are established at the project and also at the Mission Directorate and Mission Support levels (see [SWE-095|SWE-095] and [SWE-096|SWE-096]) to satisfy organizational, project, program, and Directorate needs. Centers have measurement systems and record repositories that are used for records retention, and for subsequent analysis and interpretation to identify overall levels of Center competence in software engineering, and to identify future opportunities for training and software process improvements.

The data gained from these measurement programs assist in the management of projects, assuring and improving safety and product and process quality, and in improving overall software engineering practices. In general, the project level and Mission Directorate/Mission Support Office level measurement programs are designed to meet the following high-level goals:

* To improve future planning and cost estimation.
* To provide realistic data for progress tracking.
* To provide indicators of software quality.
* To provide baseline information for future process improvement activities.

Information generated to satisfy other requirements in NPR 7150.2 (see [SWE-090|SWE-090], [SWE-091|SWE-091], [SWE-092|SWE-092], and [SWE-093|SWE-093]) provides the software measurement data and the analysis methods that are used to produce the results reported under this SWE-094. The information is recorded in a [Software Metrics Report|SWE-117].


{div3}
{div3:id=tabs-3}

h1. 3. Guidance

Metrics (or indicators) are computed from measures using approved and documented analysis procedures. They are quantifiable indices used to compare software products, processes, or projects, or to predict their outcomes. They show trends of increasing or decreasing values, relative only to the previous value of the same metric. They also show containment or breaches of pre-established limits, such as allowable latent defects. Management metrics are measurements that help evaluate how well software development activities are being conducted across multiple development organizations. Trends in management metrics support forecasts of future progress, early trouble detection, and realism in current plans. In addition, adjustments to software development processes can be evaluated once they are quantified and analyzed. The collection of Center-wide data and analysis results provides information and guidance to Center leaders for evaluating overall Center capabilities, and for planning improvement activities and training opportunities for more advanced software process capabilities.
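The trend and limit-containment notions above can be sketched in a few lines. The metric (latent defects) and the limit value below are illustrative assumptions, not values prescribed by NPR 7150.2:

```python
# Minimal sketch of trend and limit containment for one metric.
# The metric (latent defects) and the limit of 5 are illustrative only.

def trend(values):
    """Classify the latest value relative only to the previous value
    of the same metric, as described above."""
    if len(values) < 2 or values[-1] == values[-2]:
        return "flat"
    return "increasing" if values[-1] > values[-2] else "decreasing"

def contained(values, limit):
    """True while the latest value stays within a pre-established
    limit, such as allowable latent defects."""
    return values[-1] <= limit

latent_defects = [3, 4, 4, 6]            # one value per reporting period
print(trend(latent_defects))             # increasing
print(contained(latent_defects, 5))      # False: the limit was breached
```

A real reporting procedure would apply checks like these across the project's approved set of measures and thresholds, flagging breaches for the appropriate stakeholders.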

To ensure the measurement analysis results are communicated properly, on time, and to the appropriate people, the project develops reporting procedures for the specified analysis results (see [SWE-091|SWE-091] and [SWE-093|SWE-093]). It makes these analysis results available on a regular basis (as designed) to the appropriate distribution systems.

Things to consider when developing these reporting methods and making the analysis results available to others include the following:
* *Stakeholders*: Who receives which reports? What level of reporting depth is needed for particular stakeholders? Software developers and software testers (the collectors of the software measures) may only need a brief summary of the results, or maybe just a notification that results are complete and posted online where they may see them. Task and project managers will be interested in the status and trending of the project's metrics (these may be tailored to the level and responsibilities of each manager). Center leaders may need only normalized values that assist in evaluations of Center competence levels overall (which in turn provides direction for future training and process improvement activities).

* *Management chain*: Does one detailed report go to every level? Will the analysis results be tailored (abbreviated, synopsized) at progressively higher levels in the chain? Are there multiple chains (engineering, projects, safety and mission assurance)?

* *Timing and periodicity*: Are all results issued at the same frequency? Are weekly, monthly, or running averages reported? Are some results issued only upon request? Are results reported as deltas or cumulative?

* *Formats for reports*: Are spreadsheet-type tools used (bar graph, pie chart, trending lines)? Are statistical analyses performed? Are hardcopy, PowerPoint, or email/online tools used to report information?

* *Appropriate level of detail*: Are only summary results presented? Are there criteria in place for deciding when to go to a greater level of reporting (trend line deterioration, major process change, mishap investigation)? Who approves data format and release levels?

* *Specialized reports*: Are there capabilities to run specialized reports for individual stakeholders (safety-related analysis, interface-related defects)? Can reports be run outside of the normal "Timing and periodicity" cycle?

* *Correlation with project and organizational goals*: Are analysis results directly traceable or relatable to specific project and Center software measurement goals? Who performs the summaries and synopses of the traceability and relatability? Are periodic reviews scheduled and held to assess the applicability of the analysis results to the software improvement objectives and goals?

* *Interfaces with organizational and Center-level data repositories*: Are analysis results provided regularly to organizational and Center-level database systems? Is access open, or by permission (password) only? Is it project specific, or will only normalized data be made available? Where are the interfaces designed, maintained, and controlled?
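The "deltas versus cumulative" reporting choice above amounts to a simple conversion between two views of the same data. A minimal sketch, with hypothetical monthly error counts:

```python
# Converting between cumulative totals and per-period deltas, the two
# reporting views mentioned above. The error counts are hypothetical.
from itertools import accumulate

def as_deltas(cumulative):
    """Per-period deltas from cumulative per-period totals."""
    return [b - a for a, b in zip([0] + cumulative[:-1], cumulative)]

def as_cumulative(deltas):
    """Cumulative totals from per-period deltas."""
    return list(accumulate(deltas))

errors_by_month = [2, 5, 9]              # running total of errors found
print(as_deltas(errors_by_month))        # [2, 3, 4]
print(as_cumulative([2, 3, 4]))          # [2, 5, 9]
```

Either view can be derived from the other, so the reporting procedure only needs to state which one it uses and apply it consistently.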

The project reports analysis results periodically according to established collection and storage procedures (see [SWE-092|SWE-092]) and the reporting procedures developed according to this SWE. These reporting procedures are contained in the Software Development or Management Plan (see [SWE-102|SWE-102]).


{div3}
{div3:id=tabs-4}

h1. 4. Small Projects

Data reporting activities may be restricted to measures that support safety and quality assessments and the overall organization's goals for software process improvement activities. Data reporting timing may be limited to annual or major review cycles. Small projects may consider software development environments or configuration management systems that provide automated collection, tracking, and storage of measurement data. Many projects within NASA have been using the JIRA environment (see section 5.1, Tools) with a variety of plug-ins that help capture measurement data associated with software development.
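As a hedged sketch of the kind of automated collection mentioned above, a project could pull one simple measure (an open defect count) from a JIRA server's REST search endpoint. The base URL, project key, and JQL filter here are placeholders, and a real Center instance would also require authentication:

```python
# Hedged sketch: fetch an open-defect count from a JIRA REST search
# endpoint. JIRA_BASE and the JQL filter are placeholder assumptions.
import json
import urllib.parse
import urllib.request

JIRA_BASE = "https://jira.example.nasa.gov"  # placeholder, not a real host

def defect_jql(project_key):
    """JQL for unresolved bugs in one project (illustrative query)."""
    return (f'project = "{project_key}" AND issuetype = Bug '
            'AND resolution = Unresolved')

def open_defect_count(project_key):
    """Ask JIRA for the total match count only (maxResults=0 suppresses
    the issue bodies, returning just the 'total' field)."""
    query = urllib.parse.urlencode(
        {"jql": defect_jql(project_key), "maxResults": 0})
    with urllib.request.urlopen(
            f"{JIRA_BASE}/rest/api/2/search?{query}") as resp:
        return json.load(resp)["total"]
```

Scheduling a query like this from the build or CM environment gives the periodic data points this requirement asks the project to report.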
{div3}
{div3:id=tabs-5}

h1. 5. Resources

{refstable}

{toolstable}



{div3}
{div3:id=tabs-6}

h1. 6. Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to reporting of measurement analysis:

* *Selection and use of Software Metrics for Software Development Projects, Lesson No. 3556*: "The design, development, and sustaining support of Launch Processing System (LPS) application software for the Space Shuttle Program provide the driving event behind this lesson. \\ \\ "Metrics or measurements are used to provide visibility into a software project's status during all phases of the software development life cycle in order to facilitate an efficient and successful project." The Recommendation states that "As early as possible in the planning stages of a software project, perform an analysis to determine what measures or metrics will be used to identify the 'health' or hindrances (risks) to the project. Because collection and analysis of metrics require additional resources, select measures that are tailored and applicable to the unique characteristics of the software project, and use them only if efficiencies in the project can be realized as a result. The following are examples of useful metrics: {sweref:577}
** "The number of software requirement changes (added/deleted/modified) during each phase of the software process (e.g., design, development, testing).
** "The number of errors found during software verification/validation.
** "The number of errors found in delivered software (a.k.a., 'process escapes').
** "Projected versus actual labor hours expended.
** "Projected versus actual lines of code, and the number of function points in delivered software."
* *Flight Software Engineering Lessons, Lesson No. 2218*: "The engineering of flight software (FSW) for a typical NASA/Caltech Jet Propulsion Laboratory (JPL) spacecraft is a major consideration in establishing the total project cost and schedule because every mission requires a significant amount of new software to implement new spacecraft functionality." \\ \\ The lesson learned Recommendation No. 8 provides this step as well as other steps to mitigate the risk from defects in the FSW development process: \\ \\ "Use objective measures to monitor FSW development progress and to determine the adequacy of software verification activities. To reliably assess FSW production and quality, these measures include metrics such as the percentage of code, requirements, and defined faults tested, and the percentage of tests passed in both simulation and test bed environments. These measures also identify the number of units where both the allocated requirements and the detailed design have been baselined, where coding has been completed and successfully passed all unit tests in both the simulated and test bed environments, and where they have successfully passed all stress tests." {sweref:572}

Consider the following lessons learned when capturing software measures in support of Center/organizational needs:

* *Know How Your Software Measurement Data Will Be Used, Lesson No. 1772*: "Prior to Preliminary Mission & Systems Review (PMSR), the Mars Science Laboratory (MSL) flight project submitted a Cost Analysis Data Requirement (CADRe) document to the {term:IPAO} that included an estimate of source lines of code (SLOC) and other descriptive measurement data related to the proposed flight software. The IPAO input this data to their parametric cost estimating model. The project had provided qualitative parameters that were subject to misinterpretation, and provided physical SLOC counts. These SLOC values were erroneously interpreted as logical SLOC counts, causing the model to produce a cost estimate approximately 50 percent higher than the project's estimate. It proved extremely difficult and time-consuming for the parties to reconcile the simple inconsistency and reach agreement on the correct estimate." \\ \\ The Recommendation states that "Prior to submitting software cost estimate support data (such as estimates of total SLOC and software reuse) to NASA for major flight projects (over $500 million), verify how the NASA recipient plans to interpret the data and use it in their parametric cost estimating model. To further preclude misinterpretation of the data, the software project may wish to duplicate the NASA process using the same or a similar parametric model, and compare the results with NASA's." {sweref:567}

{div3}
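The physical-versus-logical SLOC confusion behind Lesson No. 1772 can be illustrated with a toy counter. The counting rules below are deliberately simplified assumptions, not those of any real parametric cost model:

```python
# Toy illustration of why physical and logical SLOC diverge.
# Both counting rules are simplified assumptions for this sketch.

def physical_sloc(source):
    """Count non-blank, non-comment physical lines (simplified rule)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("//")
    )

def logical_sloc(source):
    """Approximate logical statements by counting ';' terminators --
    a crude stand-in for a real logical-SLOC counter."""
    return source.count(";")

snippet = ("// three statements on one physical line\n"
           "int a = 1; int b = 2; int c = 3;\n")
print(physical_sloc(snippet))   # 1
print(logical_sloc(snippet))    # 3 -- three times the physical count
```

Feeding one count into a model calibrated for the other is exactly the mismatch the lesson describes, so the counting convention should be stated alongside any SLOC figure that is reported.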

