
This version of the SWEHB is associated with NPR 7150.2B; the latest version of the SWEHB is based on NPR 7150.2D.

SWE-093 - Analysis of Measurement Data

1. Requirements

5.4.3 The project shall analyze software measurement data collected using documented project-specified and Center/organizational analysis procedures.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 Applicability Across Classes

Classes F and G are labeled with “X (not OTS)”.  This means that this requirement does not apply to off-the-shelf software for these classes.

Class:       A | B | C | CSC | D | DSC | E | F | G | H

Applicable?  [Per-class applicability marks are shown as icons in the SWEHB table and are not reproduced here; per the note above, classes F and G are marked "X (not OTS)".]

Key: X - Applicable | (blank) - Not Applicable


2. Rationale

NASA software measurement programs are designed [297] to provide the specific information necessary to manage software products, projects, and services. Center/organizational, project, and task goals (see SWE-090) are determined in advance, and then measurements and metrics are selected (see SWE-091) based on those goals. These software measurements are used to make effective management decisions as they relate to the established goals. Documented procedures are used to calculate and analyze the metrics that indicate overall effectiveness in meeting the goals.

Typically, the effectiveness of the project in producing a quality product is characterized by measurement levels associated with the previously chosen metrics. Using measurement functions and analysis procedures chosen in advance helps ensure that Center/organizational goals are being addressed.

3. Guidance

SWE-093 requires analysis of the collected software measurements using the documented project-specified and Center/organizational analysis procedures. Implicit in the requirement is the need to investigate, evaluate, and select the appropriate analysis procedures and software metrics. The Software Development or Management Plan (see SDP-SMP) lists software metrics as part of its content, which indicates the need to develop the software metrics for the project early in the software development life cycle (see SWE-019). The evolution of the software development project and its requirements may necessitate a similar evolution in the required software measures and software metrics (see SWE-092).

Metrics can be classified as primitive metrics (also called base metrics) or as derived metrics (also called computed metrics). Primitive metrics are those that can be directly measured or observed, such as program size (in source lines of code (SLOC)), the number of defects observed in unit testing, or the total development time for the project. Computed metrics are those that cannot be directly measured but are computed in some manner from software measures or other metrics. Examples of computed metrics are those commonly used to measure productivity, such as SLOC produced per person-month, or the number of defects per thousand lines of code (KSLOC). Computed metrics are combinations of software measures or other metric values and are often more valuable in understanding or evaluating the software development process than primitive metrics. [252]
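
To make the distinction concrete, the following sketch expresses both kinds of metrics in Python; the names and values are hypothetical examples, not data from any NASA project.

    # Primitive (base) metrics -- measured or counted directly.
    # All values below are hypothetical examples.
    sloc = 42_000            # program size in source lines of code
    unit_test_defects = 63   # defects observed in unit testing
    person_months = 28.0     # total development effort

    # Derived (computed) metrics -- calculated from the primitive metrics above.
    defects_per_ksloc = unit_test_defects / (sloc / 1000)  # defect density
    productivity = sloc / person_months                    # SLOC per person-month

    print(f"Defect density: {defects_per_ksloc:.1f} defects/KSLOC")
    print(f"Productivity:   {productivity:.0f} SLOC per person-month")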

It is important to understand that the analysis procedure defines how the project's software metrics are calculated. As stated above, primitive metrics are measured directly, and their analysis procedure may consist of converting them to a simple plot, bar chart, or table. Examples of software measures or primitive metrics include the number of lines of code reviewed during an inspection, or the hours spent preparing for an inspection meeting.

Derived metrics (usually, but not always, more complex) are calculated by applying analysis procedures (e.g., mathematical combinations, equations, or algorithms) to the base software measures or to other derived measures. An example of a derived metric is a peer inspection's preparation rate, modeled as the number of lines of code reviewed divided by the number of preparation and review hours.
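
Continuing the sketch above, the preparation-rate metric just described might be computed as follows (sample values again hypothetical):

    def preparation_rate(loc_reviewed: int, prep_and_review_hours: float) -> float:
        """Derived metric: lines of code reviewed per preparation/review hour."""
        return loc_reviewed / prep_and_review_hours

    # Hypothetical inspection: 400 lines reviewed over 8 hours of preparation and review.
    print(preparation_rate(400, 8.0))  # -> 50.0 LOC per hour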

Many analysis procedures produce metrics with an element of simplification. This is both the strength and the weakness of using metrics. When we create a model to use as our analysis procedure, we need to be pragmatic. If we try to include all of the software measures that might affect the metric used to characterize the software product, the model can become so complicated that it is useless. Being pragmatic means not trying to create the most comprehensive analysis procedure. It means picking the aspects that are the most important. Remember that the analysis procedures can always be modified to include additional levels of detail in the future.

Ask yourself these questions:

    • Does the analysis procedure produce more information than we have now?
    • Is the resulting information of practical benefit?
    • Does it tell us what we want to know?
    • Does it help satisfy the goals and objectives of the software measurement program?

The importance of defining software measures and associated analysis procedures can be illustrated by considering the lines of code metric. "Lines of code" is one of the most used, and most often misused, of all software metrics. (The problems, variations, and anomalies of using lines of code were well documented many years ago [236].) There is no industry-accepted standard for counting lines of code, so if you are going to use a metric based on lines of code, it is critical to define a specific measurement method. Include a description of this metric in all reports and analyses so that stakeholders (customers and managers) understand the definition of the metric. Without this, invalid comparisons with other data are almost inevitable. [137]
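
The following illustrative sketch shows how two simplified counting rules, neither of which is an endorsed standard, yield different counts for the same source fragment; this is exactly why the measurement method must be stated explicitly.

    def physical_sloc(source: str) -> int:
        """Count non-blank lines that are not full-line comments (simplified rule)."""
        lines = [ln.strip() for ln in source.splitlines()]
        return sum(1 for ln in lines if ln and not ln.startswith("//"))

    def naive_logical_sloc(source: str) -> int:
        """Approximate logical statements by counting semicolons (simplified rule)."""
        return source.count(";")

    sample = """\
    // compute the sum of the first n integers
    int total = 0;
    for (int i = 0; i < n; i++)
        total += i;
    """
    print(physical_sloc(sample))       # 3 physical lines of code
    print(naive_logical_sloc(sample))  # 4 'logical' statements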

There are two basic approaches for selecting an analysis procedure to produce the software metric(s): (1) use an existing model, or (2) create a new one. In many cases, there is no need to "re-invent the wheel."

Many software analysis procedures exist that other organizations have used successfully. These are documented in the current literature and in prior NASA software development projects. With a little research, you can identify many candidate analysis procedures that require little or no adaptation to match your own project needs and environments. For example, review the material in the NASA Software Measurement Guidebook [329] to see previously used software metrics. The analysis procedures to develop these metrics are typically straightforward.

The second approach is to create your own model. The best advice here is to talk to the people who are actually responsible for the software product or the software development processes and procedures. They are the experts; they know what factors are important. With their assistance, the key software measures, analysis procedures, and resulting software metrics needed to support the project's specific measurement objectives will become apparent (see SWE-090). If you create a new analysis procedure for calculating the project's software metrics, ensure the analysis procedure is intelligible to your customers and management chain. You must also show that it is a valid analysis procedure for what you are trying to measure. Sometimes this validation can occur only through the application of statistical techniques [137] or through application to previous software development projects in your local environment.
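
As one hedged illustration of such a statistical check, the sketch below computes the correlation between a candidate metric and the outcome it is meant to predict; the historical data shown is invented for the example.

    from statistics import mean, stdev

    def pearson_r(xs, ys):
        """Sample correlation between a candidate metric and a project outcome."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    # Hypothetical history: defect density in test (defects/KSLOC) vs. the
    # outcome it should predict (defects found in the first year of operations).
    metric = [1.2, 2.5, 0.8, 3.1, 1.9, 2.2]
    outcome = [14, 30, 9, 41, 22, 27]
    print(f"r = {pearson_r(metric, outcome):.2f}")  # values near 1.0 suggest predictive value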

Good metrics facilitate the development of models that are capable of predicting process or product parameters, not just describing them.

As you analyze the collected software measurement data, keep in mind that ideal metrics are:

  • Traceable to an organizational or project objective.
  • Simple, precisely definable.
  • Objective.
  • Easily obtainable (i.e., at reasonable cost).
  • Valid (the metric effectively measures what it is intended to measure).
  • Robust (the metric is relatively insensitive to insignificant changes in the process or product).

Good metrics may have different types of attributes:

  • Control type: Used to monitor software processes, products, and services, and to identify areas where corrective or management action is required.
  • Evaluate type: Used to examine and analyze the measurement information as part of the decision-making processes.
  • Understand and predict type: Used to predict future status of the software development activity; adds to the level of confidence in the quality of the software product.

When analyzing the collected software measures, ask the following:

  • Are the software measurement sets complete?
  • Is the data objective or subjective?
  • What is the integrity and accuracy of the data?
  • How stable is the product or process being measured?
  • What is the variation in the data set?

Software metrics can provide the information needed by engineers for technical decisions as well as information required by management [355]. According to the International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC) 15939, Software Engineering – Software Measurement Process, decision criteria are the thresholds, targets, or patterns used to determine the need for action or further investigation. [378]
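
A minimal sketch of decision criteria in this sense, assuming a project-chosen target threshold and 3-sigma control limits (all values hypothetical):

    from statistics import mean, stdev

    def check_decision_criteria(values, target):
        """Flag observations that breach the target or the 3-sigma control limits."""
        m, s = mean(values), stdev(values)
        upper, lower = m + 3 * s, m - 3 * s
        findings = []
        for week, v in enumerate(values):
            if v > target:
                findings.append(f"week {week}: {v} exceeds target {target} -- action needed")
            if not lower <= v <= upper:
                findings.append(f"week {week}: {v} outside control limits -- investigate")
        return findings

    # Hypothetical weekly open-defect counts against a project target of 25.
    print(check_decision_criteria([18, 21, 19, 23, 22, 31], target=25))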

Center and organization analysis procedures may include reporting and distribution functions. These include defining the report format (tables, trend lines, bar graphs), the data extraction and reporting cycle (dates, triggers for collection, exception basis), reporting mechanisms (hard copy, online Database Management System (DBMS)), distribution (email blasts, management chain), and availability. Software measurement data collection cycles may or may not be the same as the data reporting cycles. Alternatively, the data collection and storage procedures may capture the descriptions of these reporting and distribution functions (see SWE-092).
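
As one hedged illustration, the sketch below captures a few of these reporting choices in a configuration structure and emits a simple trend table; the field names, values, and address are invented for the example.

    import csv
    import sys

    # Hypothetical reporting configuration for a monthly metrics report.
    REPORT_CONFIG = {
        "format": "table",             # table | trend_line | bar_graph
        "reporting_cycle": "monthly",  # may differ from the data collection cycle
        "distribution": ["software-lead@example.nasa.gov"],
    }

    def write_trend_table(rows, out=sys.stdout):
        """Emit the metric trend as a simple CSV table for distribution."""
        writer = csv.DictWriter(out, fieldnames=["month", "defects_per_ksloc"])
        writer.writeheader()
        writer.writerows(rows)

    write_trend_table([
        {"month": "2015-01", "defects_per_ksloc": 2.4},
        {"month": "2015-02", "defects_per_ksloc": 1.9},
    ])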

Additional guidance related to software measurements may be found in the related requirements SWE-090, SWE-091, and SWE-092 in this Handbook.

4. Small Projects

While SWE-091 may limit the types and collection intervals of measures to be recorded, the project still needs to collect and analyze the selected software measurement data to develop the key software metrics. Using previously defined analysis procedures can help a project reduce the time and effort needed to develop procedures. Also, certain development environments, such as JIRA (see section 5.1, Tools) and its associated plug-ins, or configuration management systems, can help automate the collection and distribution of information associated with the analysis of development metrics, as in the sketch below.
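
As a hedged illustration of such automation, the sketch below counts recently opened defects through JIRA's standard REST search endpoint; the server URL, project key, JQL filter, and credentials are placeholders to adapt to a real environment.

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical JIRA instance and query -- adjust for your environment.
    JIRA_URL = "https://jira.example.nasa.gov"
    JQL = 'project = FSW AND issuetype = Bug AND created >= -7d'

    def weekly_defect_count(session):
        """Count issues matching the JQL via JIRA's search endpoint."""
        resp = session.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={"jql": JQL, "maxResults": 0},  # maxResults=0 returns the total only
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["total"]

    session = requests.Session()
    session.auth = ("metrics-bot", "api-token-placeholder")  # placeholder credentials
    print(f"Defects opened this week: {weekly_defect_count(session)}")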

5. Resources

5.1 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN).

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

A documented lesson from the NASA Lessons Learned database notes the following related to capturing software measures in support of Center/organizational needs:

  • Know How Your Software Measurement Data Will Be Used, Lesson No. 1772: Prior to the Preliminary Mission & Systems Review (PMSR), the Mars Science Laboratory (MSL) flight project submitted a Cost Analysis Data Requirement (CADRe) document to the Independent Program Assessment Office (IPAO) that included an estimate of source lines of code (SLOC) and other descriptive measurement data related to the proposed flight software. The IPAO input this data into their parametric cost estimating model. The project had provided qualitative parameters that were subject to misinterpretation, as well as physical SLOC counts. These SLOC values were erroneously interpreted as logical SLOC counts, causing the model to produce a cost estimate approximately 50 percent higher than the project's estimate. It proved extremely difficult and time-consuming for the parties to reconcile the simple inconsistency and reach agreement on the correct estimate.

Prior to submitting software cost estimate support data (such as estimates of total SLOC and software reuse) to NASA for major flight projects (over $500 million), verify how the NASA recipient plans to interpret the data and use it in their parametric cost estimating model. To further preclude misinterpretation of the data, the software project may wish to duplicate the NASA process using the same or a similar parametric model, and compare the results with NASA's. [567]

