5.4.3 The project manager shall analyze software measurement data collected using documented project-specified and Center/organizational analysis procedures.
1.1 Notes
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 History
Click here to view the history of this requirement: SWE-093 History
1.3 Applicability Across Classes
Class:       A   B   CSC  C   D   DSC  E   F   G   H
Applicable:  1   1   1    1   0   1    0   0   0   0

(1 = the requirement applies to that software class; 0 = it does not.)
2. Rationale
NASA software measurement programs are designed (SWEREF-297) to provide the specific information necessary to manage software products, projects, and services. Center/organizational, project, and task goals are determined in advance (see SWE-090), and then measurements and metrics are selected based on those goals (see SWE-091). These software measurements are used to make effective management decisions related to the established goals. Documented procedures are used to calculate and analyze the metrics that indicate overall effectiveness in meeting those goals.
Typically, the effectiveness of the project in producing a quality product is characterized by the measurement levels associated with the previously chosen metrics. Choosing measurement functions and analysis procedures in advance helps assure that Center/organizational goals are being addressed.
3. Guidance
SWE-093 requires analysis of the collected software measurements using the documented project-specified and Center/organizational analysis procedures. Implicit in the requirement is the need to investigate, evaluate, and select appropriate analysis procedures and software metrics. The Software Development Plan (SDP) or Software Management Plan (see 5.08 - SDP-SMP - Software Development - Management Plan) lists software metrics as part of the SDP content, which indicates the need to develop the project's software metrics early in the software development life cycle. The evolution of the software development project and its requirements may necessitate a similar evolution in the required software measures and software metrics (see SWE-092).
Metrics can be classified as primitive metrics (or base metrics) and derived metrics (or computed metrics). Primitive metrics are those that can be directly measured or observed, such as program size in source lines of code (SLOC), the number of defects observed in unit testing, or total development time for the project. Computed metrics cannot be measured directly; they are computed in some manner from software measures or other metrics. Examples of computed metrics are those commonly used to measure productivity, such as SLOC produced per person-month, or the number of defects per thousand lines of code (KSLOC). Because computed metrics combine software measures or other metric values, they are often more valuable than simple metrics for understanding or evaluating the software development process (SWEREF-252).
It is important to understand that the analysis procedure defines how the project's software metrics are calculated. As stated above, primitive metrics are measured directly, and their analysis procedure may consist of converting them to a simple plot, bar chart, or table. Examples of primitive metrics include the number of lines of code reviewed during an inspection, or the hours spent preparing for an inspection meeting.
Derived metrics (usually, but not always, more complex) are calculated by applying analysis procedures (e.g., mathematical combinations, equations, or algorithms) to the base software measures or to other derived measures. An example of a derived metric is a peer inspection's preparation rate, modeled as the number of lines of code reviewed divided by the number of preparation and review hours.
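As a minimal sketch of how a documented analysis procedure for derived metrics can be mechanized, the Python fragment below computes defect density and an inspection preparation rate from base measures. The measure names and values are illustrative only, not taken from NPR 7150.2.

```python
# Minimal sketch of a documented analysis procedure for two derived
# metrics. The base measures and values below are illustrative only.

def defect_density(defects_found: int, sloc: int) -> float:
    """Derived metric: defects per thousand source lines of code (KSLOC)."""
    return defects_found / (sloc / 1000.0)

def preparation_rate(loc_reviewed: int, prep_and_review_hours: float) -> float:
    """Derived metric: lines of code reviewed per preparation/review hour."""
    return loc_reviewed / prep_and_review_hours

# Base (primitive) measures collected per SWE-092 -- example values.
measures = {"defects_found": 42, "sloc": 12_500,
            "loc_reviewed": 640, "prep_and_review_hours": 8.0}

print(f"Defect density: {defect_density(measures['defects_found'], measures['sloc']):.1f} defects/KSLOC")
print(f"Preparation rate: {preparation_rate(measures['loc_reviewed'], measures['prep_and_review_hours']):.0f} LOC/hour")
```

Writing the procedure down, even this simply, fixes the definition of each metric so later reporting periods are computed the same way.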
Many analysis procedures produce metrics with an element of simplification. This is both the strength and the weakness of using metrics. When creating a model to use as an analysis procedure, be pragmatic. A model that tries to include every software measure that might affect the metric can become so complicated that it is useless. Being pragmatic means not attempting the most comprehensive analysis procedure possible; it means picking the aspects that are most important. Remember that analysis procedures can always be modified later to include additional levels of detail.
Ask yourself these questions:
Does the analysis procedure produce more information than we have now?
Is the resulting information of practical benefit?
Does it tell us what we want to know?
Does it help satisfy the goals and objectives of the software measurement program?
The importance of defining software measures and associated analysis procedures can be illustrated by the lines of code metric. "Lines of code" is one of the most used, and most often misused, of all software metrics. (The problems, variations, and anomalies of using lines of code were well documented many years ago (SWEREF-236).) There is no industry-accepted standard for counting lines of code, so if you are going to use a metric based on lines of code, a specific measurement method must be defined. Include a description of this method in all reports and analyses so that stakeholders (customers and managers) understand the definition of the metric. Without it, invalid comparisons with other data are almost inevitable (SWEREF-137).
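To make the counting ambiguity concrete, the hypothetical counter below applies two plausible counting rules to the same source text and gets different totals. Both rules are invented for illustration; neither is a NASA or industry standard.

```python
# Illustrative only: two plausible "lines of code" definitions applied to
# the same source text give different counts, so the counting rule must
# be documented alongside the metric.

SAMPLE = '''\
# compute area
def area(w, h):
    # width times height
    return w * h

'''

def physical_sloc(source: str) -> int:
    """Count every line, including blanks and comments."""
    return len(source.splitlines())

def noncomment_sloc(source: str) -> int:
    """Count only non-blank lines that are not pure comments."""
    lines = (ln.strip() for ln in source.splitlines())
    return sum(1 for ln in lines if ln and not ln.startswith("#"))

print(physical_sloc(SAMPLE))    # 5
print(noncomment_sloc(SAMPLE))  # 2
```

A factor-of-two spread from the same file is exactly the kind of inconsistency that undermines comparisons when the counting rule goes unstated.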
There are two basic approaches to selecting an analysis procedure for producing the software metric(s): (1) use an existing model, or (2) create a new one. In many cases, there is no need to "re-invent the wheel."
Many software analysis procedures exist that other organizations have used successfully. These are documented in the current literature and in prior NASA software development projects. With a little research, you can identify many candidate analysis procedures that require little or no adaptation to match your project's needs and environment. For example, review the material in the NASA Software Measurement Guidebook (SWEREF-329) to see previously used software metrics. The analysis procedures used to develop these metrics are typically straightforward.
The second method is to create your own model. The best advice here is to talk to the people who are responsible for the software product or the software development processes and procedures. They are the experts. They know what factors are important. With their assistance, the key software measures, analysis procedures, and resulting software metrics needed to support and meet the project's specific measurement objectives will become apparent (see SWE-090). If you create a new analysis procedure for calculating the project's software metrics, ensure the procedure is intelligible to your customers and management chain. You must also demonstrate that it is a valid analysis procedure for what you are trying to measure. Sometimes this validation can occur only through the application of statistical techniques (SWEREF-137) or by application to previous software development projects in your local environment.
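One way such statistical validation is sometimes approached is to check whether the candidate metric correlates with the outcome it is meant to indicate across past projects. The sketch below computes a Pearson correlation; all data values are invented, and a high correlation supports, but does not by itself prove, validity.

```python
import math

# Illustrative validation sketch: does a candidate metric (e.g., defect
# density at unit test) track an outcome of interest (e.g., operational
# defects) across past projects? All numbers are invented examples.

metric  = [3.1, 4.6, 2.2, 5.8, 3.9]  # candidate metric, one value per past project
outcome = [12,  19,  8,   24,  15]   # outcome the metric should predict

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(metric, outcome)
print(f"correlation = {r:.2f}")  # a high |r| supports, but does not prove, validity
```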
Note: Good metrics facilitate the development of models that are capable of predicting process or product parameters, not just describing them.
As you analyze the collected software measurement data, keep in mind that ideal metrics are:
Traceable to an organizational or project objective.
Simple, precisely definable.
Objective.
Easily obtainable (i.e., at a reasonable cost).
Valid (the metric effectively measures what it is intended to measure).
Robust (the metric is relatively insensitive to insignificant changes in the process or product).
Good metrics may have different types of attributes:
Control type: Used to monitor the software processes, products, and services; and identify areas where corrective or management action is required.
Evaluate type: Used to examine and analyze the measurement information as part of the decision-making processes.
Understand and predict type: Used to predict the future status of the software development activity; adds to the level of confidence in the quality of the software product.
When analyzing the collected software measures, ask the following (a simple variation check is sketched after this list):
Are the software measurement sets complete?
Is the data objective or subjective?
What is the integrity and accuracy of the data?
How stable is the production process being measured?
What is the variation in the data set?
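For the stability and variation questions in particular, a control-chart-style screen is one common technique. The sketch below derives limits from an assumed stable baseline period and flags later observations outside mean plus or minus three standard deviations; the data and the three-sigma rule are illustrative choices, not requirements of SWE-093.

```python
import statistics

# Illustrative stability check for a repeating measure (e.g., weekly
# defect counts). All values and the 3-sigma rule are example choices.

baseline = [4, 6, 5, 7, 5, 6]           # earlier, believed-stable weeks
recent   = {"week 7": 18, "week 8": 5}  # new observations to screen

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, max(0.0, mean - 3 * sigma)

for label, value in recent.items():
    status = "within limits" if lower <= value <= upper else "outside limits -- investigate"
    print(f"{label}: {value} ({status})")
```

Deriving the limits from a baseline period, rather than from the data being screened, keeps a single outlier from inflating its own control limits.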
Software metrics can provide the information needed by engineers for technical decisions as well as the information required by management (SWEREF-355). According to the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 15939 standard, Software Engineering – Software Measurement Process, decision criteria are the thresholds, targets, or patterns used to determine the need for action or further investigation (SWEREF-378).
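Decision criteria of this kind can be recorded right alongside the metrics they govern. The sketch below pairs each metric with a documented threshold and the action it triggers; the metric names, thresholds, and actions are invented examples, not values from the standard.

```python
# Hedged sketch of ISO/IEC 15939-style decision criteria: each metric is
# paired with a documented threshold and the action it triggers. The
# metric names, thresholds, and actions are invented examples.

DECISION_CRITERIA = {
    # metric name: (threshold, comparison, action when criterion is met)
    "defects_per_ksloc": (10.0, "greater", "open a corrective action"),
    "requirements_volatility_pct": (15.0, "greater", "re-baseline estimates"),
}

def evaluate(metric_values: dict) -> list:
    """Return the actions triggered by the current metric values."""
    actions = []
    for name, value in metric_values.items():
        threshold, comparison, action = DECISION_CRITERIA[name]
        met = value > threshold if comparison == "greater" else value < threshold
        if met:
            actions.append(f"{name}={value}: {action}")
    return actions

print(evaluate({"defects_per_ksloc": 12.3, "requirements_volatility_pct": 9.0}))
```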
Center and organizational analysis procedures may include reporting and distribution functions. This includes defining the report format (tables, trend lines, bar graphs), the data extraction and reporting cycle (dates, triggers for collection, exception basis), reporting mechanisms (hard copy, online Database Management System (DBMS)), distribution (email blasts, management chain), and availability. Software measurement data collection cycles may or may not be the same as the data reporting cycles. Alternatively, the data collection and storage procedures may capture the descriptions of these reporting and distribution functions (see SWE-092).
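Such a reporting and distribution definition can be captured in a simple structure so it stays documented and repeatable. The fields below mirror the elements named in the paragraph above; every value shown is a placeholder.

```python
from dataclasses import dataclass, field

# Placeholder values throughout; the fields mirror the reporting and
# distribution elements named in the text above.

@dataclass
class MetricReportSpec:
    metric: str
    report_format: str            # e.g., table, trend line, bar graph
    reporting_cycle: str          # e.g., monthly, or a triggering event
    mechanism: str                # e.g., hard copy, online DBMS
    distribution: list = field(default_factory=list)

spec = MetricReportSpec(
    metric="defects_per_ksloc",
    report_format="trend line",
    reporting_cycle="monthly",
    mechanism="online DBMS",
    distribution=["project manager", "software assurance"],
)
print(spec)
```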
Additional guidance related to software measurements may be found in related requirements in this Handbook (see SWE-090, SWE-091, and SWE-092).
While SWE-091 may place limits on the types and intervals of measures to be recorded, the project still needs to collect and analyze the selected software measurement data to develop the key software metrics. Using previously defined analysis procedures can reduce the time and effort a project needs to develop its own. Certain development environments, such as JIRA (see section 5.2, Tools) and its associated plug-ins, or configuration management systems, can help automate the collection and distribution of information associated with the analysis of development metrics.
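As one hedged illustration of that kind of automation, Jira's REST search endpoint can feed a base measure into a metrics pipeline. The server URL, project key, JQL query, and credentials below are placeholders, and the exact endpoint or authentication scheme may differ across Jira versions.

```python
import requests  # third-party; pip install requests

# Hedged sketch: pull a base measure (open defect count) from Jira's
# REST search endpoint. Server URL, project key, and credentials are
# placeholders -- adjust to your instance and API version.

JIRA = "https://jira.example.com"
JQL = "project = DEMO AND issuetype = Bug AND resolution = Unresolved"

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": JQL, "maxResults": 0},  # maxResults=0 returns only the total
    auth=("metrics-bot", "app-password"),
    timeout=30,
)
resp.raise_for_status()
print("open defects:", resp.json()["total"])
```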
5. Resources
5.1 References
This section cites the following references: SWEREF-137, SWEREF-236, SWEREF-252, SWEREF-297, SWEREF-329, SWEREF-355, SWEREF-378, and SWEREF-567.
5.2 Tools
Tools relevant to this requirement may be found in this Handbook's Tools Table.
6. Lessons Learned
6.1 NASA Lessons Learned
A documented lesson from the NASA Lessons Learned database notes the following related to capturing software measures in support of Center/organizational needs:
Know How Your Software Measurement Data Will Be Used, Lesson No. 1772 (SWEREF-567): Before the Preliminary Mission & Systems Review (PMSR), the Mars Science Laboratory (MSL) flight project submitted a Cost Analysis Data Requirement (CADRe) document to the Independent Program Assessment Office (IPAO) that included an estimate of source lines of code (SLOC) and other descriptive measurement data related to the proposed flight software. The cost office input this data into its parametric cost estimating model. The project had provided qualitative parameters that were subject to misinterpretation, along with physical SLOC counts. These SLOC values were erroneously interpreted as logical SLOC counts, causing the model to produce a cost estimate approximately 50 percent higher than the project's estimate. It proved extremely difficult and time-consuming for the parties to reconcile this simple inconsistency and reach agreement on the correct estimate.
Before submitting software cost estimate support data (such as estimates of total SLOC and software reuse) to NASA for major flight projects (over $500 million), verify how the NASA recipient plans to interpret the data and use it in their parametric cost estimating model. To further preclude misinterpretation of the data, the software project may wish to duplicate the NASA process using the same or a similar parametric model, and compare the results with NASA's.
6.2 Other Lessons Learned
No other Lessons Learned have currently been identified for this requirement.
7. Software Assurance
SWE-093 - Analysis of Measurement Data
7.1 Tasking for Software Assurance
1. Confirm software measurement data analysis conforms to documented analysis procedures.
2. Analyze the software assurance measurement data collected.
7.2 Software Assurance Products
SA metrics analysis results relating to software meeting or exceeding requirements, including any risks or issues.
Objective Evidence:
Software measurement or metric data
Trends and analysis results on the metric set being provided
Status presentations showing metrics and trending data
Software assurance audit reports on software metric processes
(See this Handbook's definition of objective evidence.)
7.3 Metrics
Measures relating to status and performance as identified in other requirements (e.g., schedule deviations, closure of corrective actions, product and process audit results, peer review results).
7.4 Guidance
Task 1
Software assurance reviews the software development plan/software management plan or the measurement plan to confirm that the project has chosen documented analysis procedures from an Agency, Center, or project library, or has developed project-specific analysis procedures for the measures it has chosen to collect. Confirm that the software measurement analysis the project has performed on its collected measures follows the documented project analysis procedures. When project measures exceed a documented threshold, verify that the project has examined the potential root causes of the variation and has chosen a corrective action to prevent further problems.
Task 2
Software assurance will take the software assurance measurement data and analyze it using the software assurance measurement analysis procedures documented in the software assurance plan. Measurement trends, potential root causes of problem areas, and potential corrective actions should receive special attention. An in-depth analysis of the data and trends should be done to understand the causes of any undesirable trends or indicators; understanding these causes is key to determining how to make corrections and improve software assurance performance. For example, if the charts of software assurance activities performed versus software assurance activities planned show that many planned activities have not been performed on schedule, there could be many reasons why (not enough staff to perform all planned activities, the project is behind schedule and planned activities could not be completed, software assurance was focused on unplanned work, etc.). Based on the analysis, corrective actions can be planned to improve assurance work. More information on analyzing the measurement data can be found in the guidance for this software requirement.
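To make the planned-versus-performed example concrete, the sketch below computes a completion ratio per reporting period and flags periods below a chosen threshold. The periods, counts, and the 80 percent threshold are illustrative assumptions, not values from the software assurance plan.

```python
# Illustrative only: planned vs. performed software assurance activities
# per reporting period, flagging periods below an example 80% threshold.

periods = {
    # period: (activities planned, activities performed)
    "2024-Q1": (20, 19),
    "2024-Q2": (22, 15),
    "2024-Q3": (18, 17),
}

THRESHOLD = 0.80  # example decision criterion, not from NPR 7150.2

for period, (planned, performed) in periods.items():
    ratio = performed / planned
    flag = "" if ratio >= THRESHOLD else "  <-- investigate root cause"
    print(f"{period}: {performed}/{planned} = {ratio:.0%}{flag}")
```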