- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
5.4.2 The project manager shall establish, record, maintain, report, and utilize software management and technical measurements.
NASA-HDBK-2203 contains a set of candidate management indicators that might be used on a software development project. The NASA Chief Engineer may identify and document additional Center measurement objectives, software measurements, collection procedures and guidelines, and analysis procedures for selected software projects and software development organizations. The software measurement process includes collecting software technical measurement data from the project's software developer(s).
See SWE-090 History for the history of this requirement.
1.3 Applicability Across Classes
Key: - Applicable | - Not Applicable
A & B = Always Safety Critical; C & D = Sometimes Safety Critical; E & F = Never Safety Critical.
Numerous years of experience on many NASA projects demonstrate the following three key reasons for software measurement activities 329:
- To understand and model software engineering processes and products.
- To aid in assessing the status of software projects.
- To guide improvements in software engineering processes.
It is often difficult to accurately understand the status of a project or determine how well development processes are working without some measures of current performance and a baseline for comparison purposes. Metrics support management and control of projects and work to establish greater insight into the way the organization is operating.
There are four major reasons for measuring processes, products, and resources. They are to Characterize, Evaluate, Predict, and Improve:
- Characterizations are performed to gain an understanding of processes, products, resources, and environments, and to establish baselines for comparisons with future efforts.
- Evaluation is used to determine status with respect to plans. Measures are the signals that provide knowledge and awareness when projects and processes are drifting off track so that they are brought back under control. Evaluations are also used to assess the achievement of quality goals and to assess the impacts of technology and process improvements on products and processes.
- Predictions are made so that planning is performed more proactively. Measuring for prediction involves gaining an understanding of the relationships among processes and products so that the values observed for some attributes are used to predict others. This is accomplished because of a desire to establish achievable goals for cost, schedule, and quality so that appropriate resources are applied and managed. Projections and estimates based on historical data help analyze risks and support design and cost tradeoffs.
- An organization measures to improve when quantitative data and information are gathered to help identify inefficiencies and opportunities for improving product quality and process performance. Measures help to plan and track improvement efforts. Measures of current performance give baselines to compare against so that an organization can determine whether the improvement actions are working as intended. Good measures also help to communicate goals and convey reasons for improving.
Measurement is an important component of the project and product development effort and is applied to all facets of the software engineering disciplines. Before a process can be efficiently managed and controlled, it has to be measured.
Software measurement programs are established with this rationale in mind, to meet objectives at multiple levels, and are structured to satisfy particular organization, project, program, and Mission Directorate needs. The data gained from measurement programs assist in managing projects, assuring quality, and improving overall software engineering practices. In general, the organizational level, project level, and Mission Directorate/Mission Support Office level measurement programs are designed to achieve specific time-based goals, such as the following:
- To provide realistic data for progress tracking.
- To assess the software's functionality when compared to the requirements and the user's needs.
- To provide indicators of software quality that provide confidence in the final product.
- To assess the volatility of the requirements throughout the life cycle.
- To provide indicators of the software's characteristics and performance when compared to the requirements and the user's needs.
- To improve future planning and cost estimation.
- To provide baseline information for future process improvement activities.
At the organizational level, it can be said that "We typically examine high-level strategic goals like being the low-cost provider, maintaining a high level of customer satisfaction, or meeting projected resource allocations. At the project level, we typically look at goals that emphasize project management and control issues or project level requirements and objectives. These goals typically reflect the project success factors like on-time delivery, finishing the project within budget or delivering software with the required level of quality or performance. At the specific task level, we consider goals that emphasize task success factors. Many times these are expressed in terms of the entry and exit criteria for the task." 355
With these goals in mind, projects are to establish specific measurement objectives for their particular activities. Measurement objectives document the reasons for doing software measurement and the accompanying analysis. They also provide a view of the types of actions or decisions that may be made based on the results of the analysis. Before implementing a measurement program or choosing measures for a project, it is important to know how the measures are going to be used. Establishing measurement objectives in the early planning stages helps ensure organizational and project information needs are being met and that the measures that are selected for collection will be useful.
Projects are to select and document specific measures. The measurement areas chosen to provide specific measures are closely tied to the NASA measurement objectives listed above and in NPR 7150.2.
Software technical measurements
Given below is a set of candidate management indicators that might be used on a software development project:
- Requirements volatility: The total number of requirements and requirements changes over time.
- Bidirectional traceability: percentage complete of System-level requirements to Software Requirements, Software Requirements to Design, Design to Code, Software Requirements to Test Procedures.
- Software size: planned and actual number of units, lines of code, or other size measurements over time.
- Software staffing: planned and actual staffing levels over time.
- Software complexity: the complexity of each software unit.
- Software progress: planned and actual number of software units designed, implemented, unit tested, and integrated over time; code developed.
- Problem/change report status: total number, number closed, the number opened in the current reporting period, age, severity.
- Software test coverage: a measure used to describe the degree to which the source code of a project is tested by a particular test suite.
- Build release content: planned and actual number of software units released in each build.
- Build release volatility: planned and actual number of software requirements implemented in each build.
- Computer hardware and data resource utilization: planned and actual use of computer hardware resources (such as processor capacity, memory capacity, input/output device capacity, auxiliary storage device capacity, and communications/network equipment capacity, bus traffic, partition allocation) over time.
- Milestone performance: planned and actual dates of key project milestones.
- Scrap/rework: the amount of resources expended to replace or revise software products after they are placed under any level of configuration control above the individual author/developer level.
- Effect of reuse: a breakout of each of the indicators above for reused versus new software products.
- Cost performance: identifies how efficiently the project team has turned costs into progress to date.
- Budgeted cost of work performed: identifies the cumulative work that has been delivered to date.
- Audit performance: indicators for whether the project is following defined processes, number of audits completed, audit findings, number of audit findings opened and closed.
- Risk mitigation: number of identified software risks, risk mitigation status.
- Hazard analysis: number of hazard analyses completed, hazards mitigation steps addressed in software requirements and design, number of mitigation steps tested.
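Several of these indicators reduce to simple period-by-period arithmetic. As a minimal sketch (all counts and field names below are hypothetical, not drawn from any NASA tool), requirements volatility can be computed as the changes in a period relative to the current requirement total:

```python
# Sketch: a requirements-volatility indicator from per-period counts.
# The periods, counts, and field layout are hypothetical illustrations.

periods = [
    # (period, total requirements, added, deleted, modified)
    ("2024-Q1", 120, 12, 2, 9),
    ("2024-Q2", 130, 5, 1, 14),
    ("2024-Q3", 131, 1, 0, 4),
]

def volatility(total, added, deleted, modified):
    """Requirement changes in the period as a fraction of the total."""
    return (added + deleted + modified) / total

for period, total, added, deleted, modified in periods:
    v = volatility(total, added, deleted, modified)
    print(f"{period}: {total} requirements, volatility {v:.1%}")
```

A rising trend in this ratio late in the life cycle would be one signal that requirements are not yet stable enough to proceed without undue risk.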
Here is an example from the MSFC Organizational Metric Plan. This example is a table showing the metrics, who collects the information, the frequency of the collection, and where the information is reported.
There are specific example measurements listed in the software metrics report (see Metrics) that were chosen based on identified informational needs and from questions shared during several NASA software measurement workshops.
There is some cost to collecting and analyzing software measures, so it is desirable to keep the measurement set to the minimum that will satisfy informational needs. When measures are tied to objectives, a minimum useful set can be easily selected. Defining measurement objectives helps to select and prioritize the candidate measures to be collected. If a measurement isn't tied to an objective, it probably doesn't need to be collected.
Metrics (or indicators) are computed from measures using approved and documented analysis procedures. They are quantifiable indices used to compare software products, processes, or projects or to predict their outcomes. They show trends of increasing or decreasing values, relative only to the previous value of the same metric. They also show containment or breaches of pre-established limits, such as allowable latent defects. The establishing and collection of Center-wide data and analysis results provide information and guidance to Center leaders for evaluating overall Center capabilities, and in planning improvement activities and training opportunities for more advanced software process capabilities. The method for developing and recording these metrics is written in the Software Development Plan (see SDP-SMP).
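The trend and limit checks described above can be sketched in a few lines; the limit value and metric series here are hypothetical illustrations, not established NASA thresholds:

```python
# Sketch: flagging trend direction and breaches of a pre-established
# limit for a metric series. The limit and the values are hypothetical.

LATENT_DEFECT_LIMIT = 10  # hypothetical allowable latent defects

def assess(series, limit):
    """Return (trend, breached) for the latest value in the series."""
    latest, previous = series[-1], series[-2]
    trend = "increasing" if latest > previous else (
        "decreasing" if latest < previous else "flat")
    return trend, latest > limit

defects_per_build = [4, 7, 6, 12]
trend, breached = assess(defects_per_build, LATENT_DEFECT_LIMIT)
print(f"trend: {trend}, limit breached: {breached}")
```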
As the project objectives evolve and mature, and as requirements understanding improves and solidifies, software measures can be evaluated and updated if necessary or if beneficial. Updated measurement selections, data to be recorded and reported, recording and reporting procedures are documented during the update and maintenance of the Software Development Plan.
Measurement objectives document the reasons why software measurement and analysis are performed. Measurement objectives consider both the perspectives of the organization as well as the project. The sources for measurement objectives may be management, technical, project, product or process implementation needs, such as:
- Deliver software by the scheduled date.
- Improve cost estimation.
- Complete projects within budget.
- Devote adequate resources for each process area.
Corresponding measurement objectives for these needs might be:
- Measure project progress to ensure it is adequate to achieve completion by the scheduled date.
- Measure the actual project planning parameters against the estimated planning parameters to identify deviations.
- Track the project cost and effort to ensure project completion within budget.
- Measure resources devoted to each process area to ensure they are sufficient.
The measurement objectives a project defines are to be based on their Center's objectives and NASA's objectives. Usually, project objectives are focused on informational needs. The project has to provide information for managing and controlling the project. When establishing measurement objectives, ask what questions will be answered with the data, why you are measuring something and what types of decisions will be made with the data. A project may also have additional objectives to provide information to their Center or to NASA. This information may be used for process improvement or for developing organizational baselines and trends. For example, Goddard Space Flight Center (GSFC) has defined two high-level objectives for its software projects as follows:
- To assure that the effort is on track for delivery of the required functionality on time and within budget.
- To improve software development practices both in response to immediate customer issues and in support of the organizational process improvement goals.
As can be seen, one of these objectives focuses on the project's ability to manage and control the project with quantitative data and the other objective focuses on organizational process improvement.
In addition, GSFC's Measurement Planning Table Tool lists the following more specific measurement objectives for their projects:
- Ensure schedule progress is within acceptable bounds.
- Ensure project effort and costs remain within acceptable bounds.
- Ensure project issues are identified and resolved in a timely manner.
- Deliver the required functionality.
- Ensure the performance measures are within margins.
- Ensure that the system delivered to operations has no critical or moderate severity errors.
- Minimize the amount of rework due to defects occurring during development.
- Ensure requirements are complete and stable enough to continue work without undue risk.
- Support future process improvement.
When choosing project measures, check to see if your Center has a pre-defined set of measurements that meets the project's objectives. If so, then the specification of measures for the project begins there. Review the measures specified and initially choose those required by your Center. Make sure they are all tied to project objectives or are measures that are required to meet your organization's objectives.
To determine if any additional measures are required or if your Center does not have a pre-defined set of measures, think about the questions that need to be asked to satisfy project objectives. For example, if an objective is to complete on schedule, the following might need to be asked:
- How long is the schedule?
- How much of the schedule has been used and how much is left?
- How much work has been done? How much work remains to be done?
- How long will it take to do the remaining work?
From these questions, determine what needs to be measured to get the answers to key questions. Similarly, think about the questions that need to be answered for each objective and see what measures will provide the answers. If several different measures will provide the answers, choose the measures that are already being collected or those that are easily obtained from tools.
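The schedule questions above map onto a simple burn-rate projection. The sketch below, with hypothetical figures, estimates the remaining duration assuming the completion rate to date holds:

```python
# Sketch: projecting remaining schedule from work completed so far.
# All figures are hypothetical illustrations.

def weeks_to_finish(units_total, units_done, weeks_elapsed):
    """Estimate remaining weeks assuming the current rate holds."""
    rate = units_done / weeks_elapsed          # units completed per week
    remaining = units_total - units_done
    return remaining / rate

# 200 planned units, 80 done after 10 weeks -> 8 units per week
est = weeks_to_finish(units_total=200, units_done=80, weeks_elapsed=10)
print(f"estimated weeks remaining: {est:.1f}")
```

Comparing this projection against the remaining calendar time answers the question of whether the project is on track to complete on schedule.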
The presentation by Carolyn Seaman, "Software Metrics Selection Presentation", gives a method for choosing project measures and provides a number of examples of measurement charts, with information showing how the charts might be useful for the project. The "Project-Type/Goal/Metric Matrix" 089 is also a matrix developed following the series of NASA software workshops at Headquarters that might be helpful in choosing the project's measures. This matrix specifies the types of measures a project might want to collect to meet a particular goal, based on project characteristics, such as size.
The measurements need to be defined so that project personnel collect data items consistently. The measurement definitions are documented in the project Software Management Plan (see SDP-SMP) or Software Metrics Report (see Metrics) along with the measurement objectives. Items to be included as part of a project's measurement collection and storage procedure are:
- A clear description of all data to be provided.
- A clear and precise definition of terms.
- Who is responsible for providing which data.
- When and to whom the data are to be provided.
Some suggestions for specifying measures:
- Be sure the project is going to use the measures.
- Think about how the project will use them. Visualize the way charts look to best communicate information.
- Make sure measures apply to project objectives (or are being provided to meet sponsor or institutional objectives).
- Consider whether suitable measures already exist or whether they can be collected easily. The use of tools that automatically collect needed measures helps ensure consistent, accurate collection.
Tools (e.g., JIRA) can be used to track and report measures and provide status reports of all open and closed issues, including their position in the issue tracking life cycle. This can be used as a measure of progress.
Static analysis tools (e.g., Coverity, CodeSonar) can provide measures of software quality and identify software characteristics at the source code level.
Characterizations of measures like requirements volatility can be tracked with general-purpose requirements development and management tools (e.g., DOORS), along with tracking verification progress. They can also provide reports on software functionality and software verification progress.
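As an illustration of tool-derived measures, open/closed counts from a JIRA-style issue export can serve as a progress indicator. The records and field names below are hypothetical stand-ins for a real tracker export:

```python
# Sketch: deriving a progress measure from issue-tracker records.
# The records and field names are hypothetical stand-ins for a
# JIRA-style export; a real export would be queried from the tool.

issues = [
    {"id": "SW-1", "status": "Closed", "severity": "minor"},
    {"id": "SW-2", "status": "Open",   "severity": "major"},
    {"id": "SW-3", "status": "Closed", "severity": "major"},
    {"id": "SW-4", "status": "Open",   "severity": "minor"},
]

def status_counts(records):
    """Summarize total, closed, and open issue counts."""
    total = len(records)
    closed = sum(1 for r in records if r["status"] == "Closed")
    return {"total": total, "closed": closed, "open": total - closed}

counts = status_counts(issues)
print(f"{counts['closed']}/{counts['total']} issues closed")
```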
Links to the aforementioned tools are found in the Tools listing in the Resources section of this SWE.
Fortunately, there are many resources to help a Center develop data collection and storage procedures for its software measurement program. The NASA "Software Measurement Guidebook" 329 is aimed at helping organizations begin or improve a measurement program. The CMMI Institute has specific detailed practices within its CMMI-DEV 157 model for collecting, interpreting, and storing data. The Software Technology Support Center (STSC), at Hill Air Force Base, has its "Software Metrics Capability Evaluation Guide." 336 Westfall's "12 Steps to Useful Software Metrics" 355 is an excellent primer for the development of useful software metrics. Other resources are suggested in the Resources section.
Typical software measurement programs have three components: Data collection, technical support, and analysis and packaging. To properly execute the collection of the data, an agreed-to procedure for this is needed within the Software Development Plan (see SDP-SMP).
Activities within the data collection procedure include:
- A clear description of all data to be provided. This includes a description of each item and its format, a description of the physical or electronic form to be used, and the location or address to which the data are to be sent.
- A clear and precise definition of terms. This includes a description of the project or organization-specific criteria, definitions, and a description of how to perform each step in the collection process.
- Who is responsible for providing which data. This may be easily expressed in matrix form, with clarifying notes appended to any particular cell of the matrix.
- When and to whom the data are to be provided. This describes the recipient(s) and management chain for the submission of the data; it also specifies the submission dates, periodic intervals, or special events for the collection and submission of the data.
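One lightweight way to record these four elements is a per-measure specification. The sketch below uses a hypothetical Python dataclass; the field names and example values are illustrative only, not a prescribed NASA format:

```python
# Sketch: capturing the collection-procedure elements (what, definition,
# who, when, and to whom) per measure. Fields and values are hypothetical.
from dataclasses import dataclass

@dataclass
class MeasureSpec:
    name: str          # description of the data item
    definition: str    # precise definition of terms
    responsible: str   # who provides the data
    frequency: str     # when the data are provided
    recipient: str     # to whom the data are submitted

specs = [
    MeasureSpec("SLOC", "logical source lines, excluding comments",
                "software lead", "monthly", "project metrics repository"),
    MeasureSpec("Open PRs", "problem reports not yet dispositioned",
                "test lead", "weekly", "software manager"),
]

for s in specs:
    print(f"{s.name}: {s.responsible} -> {s.recipient} ({s.frequency})")
```

A table of such specifications, one row per measure, is a compact way to express the responsibility matrix described above.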
The data collection involvement by the software development team works better if the team's time to collect the data is minimized. Software developers and software testers are the two groups that are responsible for collecting and submitting significant amounts of the required data. If the software developers or testers see this as a non-value added task, data collection will become sporadic and data quality will suffer, thus increasing the effort of the technical support staff that has to validate the collected data. Any step that automates some or all of the data collection will contribute to the efficiency and quality of the data collected.
If a metric does not have a customer, there is no reason for it to be produced. Software measures are expensive to collect, report, and analyze so if no one is using a metric, producing it is a waste of time and money.
To ensure the measurement analysis results are communicated properly, on time, and to the appropriate people, the project develops reporting procedures for the specified analysis results. It makes these analysis results available on a regular basis (as designed) to the appropriate distribution systems.
Things to consider when developing these reporting methods in order to make them available to others are as follows:
- Stakeholders: Who receives which reports? What level of reporting depth is needed for particular stakeholders? Software developers and software testers (the collectors of the software measures) may only need a brief summary of the results, or maybe just a notification that results are complete and posted online where they may see them. Task and project managers will be interested in the status and trending of the project's metrics (these may be tailored to the level and responsibilities of each manager). Center leaders may need only normalized values that assist in evaluations of Center competence levels overall (which in turn provides direction for future training and process improvement activities).
- Management chain: Does one detailed report go to every level? Will the analysis results be tailored (abbreviated, synopsized) at progressively higher levels in the chain? Are there multiple chains (engineering, projects, safety, and mission assurance)?
- Timing and periodicity: Are all results issued at the same frequency? Are weekly, monthly, or running averages reported? Are some results issued only upon request? Are results reported as deltas or cumulative?
- Formats for reports: Are spreadsheet-type tools used (bar graph, pie chart, trending lines)? Are statistical analyses performed? Are hardcopy, PowerPoint, or email/online tools used to report information?
- The appropriate level of detail: Are only summary results presented? Are there criteria in place for deciding when to go to a greater level of reporting (trend line deterioration, major process change, mishap investigation)? Who approves data format and release levels?
- Specialized reports: Are there capabilities to run specialized reports for individual stakeholders (safety-related analysis, interface-related defects)? Can reports be run outside of the normal "Timing and periodicity" cycle?
- Correlation with the project and organizational goals: Are analysis results directly traceable or relatable to specific project and Center software measurement goals? Who performs the summaries and synopses of the traceability and relatability? Are periodic reviews scheduled and held to assess the applicability of the analysis results to the software improvement objectives and goals?
- Interfaces with organizational and Center-level data repositories: Are analysis results provided regularly to organizational and Center-level database systems? Is access open, or by permission (password) only? Is it project-specific, or will only normalized data be made available? Where are the interfaces designed, maintained, and controlled?
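To illustrate the timing-and-periodicity choices, the same weekly series can be reported as deltas, cumulative totals, or a running average; the defect counts here are hypothetical:

```python
# Sketch: reporting one weekly series three ways (deltas, cumulative
# totals, 3-week running average). The counts are hypothetical.

weekly_defects = [3, 5, 2, 6, 4]

def cumulative(series):
    """Running total of the series."""
    out, total = [], 0
    for v in series:
        total += v
        out.append(total)
    return out

def running_average(series, window=3):
    """Average over the trailing window (shorter at the start)."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print("deltas:    ", weekly_defects)
print("cumulative:", cumulative(weekly_defects))
print("3-wk avg:  ", [round(a, 1) for a in running_average(weekly_defects)])
```

Which of the three views is reported, and to whom, is exactly the kind of decision the reporting procedure should document.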
The project reports analysis results periodically according to established collection and storage procedures and the reporting procedures developed according to this SWE. These reporting procedures are contained in the Software Development or Management Plan (see SDP-SMP).
Management and technical measurements should be analyzed and used in managing the software project. Measurements are used to help evaluate how well software development activities are being conducted across multiple development organizations. They show containment or breaches of pre-established limits, such as allowable latent defects. Trends in management metrics support forecasts for future progress, early trouble detection, and realism in current plans. In addition, adjustments to software development processes can be evaluated, once they are quantified and analyzed.
Additional guidance related to software measurements may be found in the following related requirements in this Handbook:
NASA-specific software measurement collection information and resources are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.
Projects should check with their Center to ensure they have chosen objectives that support their Center-level objectives.
4. Small Projects
Since small projects are typically constrained by budget and staff, they choose the objectives most important to them to help keep the cost and effort within budget. Some Centers have tailored measurement requirements for small projects. Be sure to check your Center's requirements when choosing objectives and associated measures.
A few key measures to monitor the project's status and meet sponsor and institutional objectives may be sufficient. Data collection timing may be limited in frequency. The use of tools that collect measures automatically helps considerably.
While a small project may propose limited sets and relaxed time intervals for the measures to be collected and recorded, the project still needs to select and record in the Software Development Plan the procedures it will use to collect and store software measurement data. Small projects may consider software development environments or configuration management systems that contain automated collection, tracking, and storage of measurement data. Many projects within NASA have been using the JIRA environment (see section 5.1, Tools) with a variety of plug-ins that help capture measurement data associated with software development.
Data reporting activities may be restricted to measures that support safety and quality assessments and the overall organization's goals for software process improvement activities. Data reporting timing may be limited to annual or major review cycles.
Often small projects have difficulty affording tools that would enable automatic measurement collection. There are several solutions to this issue. Some Centers, such as GSFC, have developed simple tools (often Excel-based) that will produce the measurements automatically. GSFC examples are the staffing tool, the requirements metrics tool, the action item tool, the risk tool, and the problem reporting tool.
Other solutions for small projects involve organizational support. Some organizations support a measurement person on staff to assist the small projects with measurement collection, storage, and analysis, and some Centers use tools that can be shared by small projects.
5. Resources
- CMMI Development Team (2010). "CMMI for Development, Version 1.3: Improving Processes for Developing Better Products and Services," CMU/SEI-2010-TR-033, Software Engineering Institute.
- Basili, V. R., et al. (May 2002). "Lessons Learned from 25 Years of Process Improvement: The Rise and Fall of the NASA Software Engineering Laboratory," University of Maryland, College Park, Experimental Software Engineering Group (ESEG). Lessons Learned Reference.
6. Lessons Learned
6.1 NASA Lessons Learned
The NASA Lessons Learned database contains lessons learned addressing topics that should be kept in mind when planning a software measurement program. The lessons learned below are applicable when choosing objectives and related measures:
- Know How Your Software Measurement Data Will Be Used. Lesson Number 1772 567: When software measurement data used to support cost estimates is provided to NASA by a project without an understanding of how NASA will apply the data, discrepancies may produce erroneous cost estimates that disrupt the process of project assessment and approval. Major flight projects should verify how NASA plans to interpret such data and use it in their parametric cost estimating model, and consider duplicating the NASA process using the same or a similar model prior to submission.
- Space Shuttle Processing and Operations Workforce. Lesson Number 1042 583: Operations and processing in accordance with the Shuttle Processing Contract (SPC) have been satisfactory. Nevertheless, lingering concerns include the danger of not keeping foremost the overarching goal of safety before schedule before cost; the tendency in a success-oriented environment to overlook the need for continued fostering of frank and open discussion; the press of budget inhibiting the maintenance of a well-trained NASA presence on the work floor; and the difficulty of a continued cooperative search for the most meaningful measures of operations and processing effectiveness.
- A recommendation from Lesson Number 1042 states "NASA and SPC should continue to search for, develop, test, and establish the most meaningful measures of operations and processing effectiveness possible."
- Selection and use of Software Metrics for Software Development Projects. Lesson Learned Number 3556 577: "The design, development, and sustaining support of Launch Processing System (LPS) application software for the Space Shuttle Program provides the driving event behind this lesson.
"Metrics or measurements provide visibility into a software project's status during all phases of the software development life cycle in order to facilitate an efficient and successful project." The Recommendation states that: "As early as possible in the planning stages of a software project, perform an analysis to determine what measures or metrics will be used to identify the 'health' or hindrances (risks) to the project. Because collection and analysis of metrics require additional resources, select measures that are tailored and applicable to the unique characteristics of the software project, and use them only if efficiencies in the project can be realized as a result. The following are examples of useful metrics:
- "The number of software requirement changes (added/deleted/modified) during each phase of the software process (e.g., design, development, testing).
- "The number of errors found during software verification/validation.
- "The number of errors found in delivered software (a.k.a., 'process escapes').
- "Projected versus actual labor hours expended.
- "Projected versus actual lines of code, and the number of function points in delivered software."
- Flight Software Engineering Lessons. Lesson Learned Number 2218 572: "The engineering of flight software (FSW) for a typical NASA/Caltech Jet Propulsion Laboratory (JPL) spacecraft is a major consideration in establishing the total project cost and schedule because every mission requires a significant amount of new software to implement new spacecraft functionality."
- The lesson learned Recommendation No. 8 provides this step as well as other steps to mitigate the risk from defects in the FSW development process:
- "Use objective measures to monitor FSW development progress and to determine the adequacy of software verification activities. To reliably assess FSW production and quality, these measures include metrics such as the percentage of code, requirements, and defined faults tested, and the percentage of tests passed in both simulation and testbed environments. These measures also identify the number of units where both the allocated requirements and the detailed design have been baselined, where coding has been completed and successfully passed all unit tests in both the simulated and testbed environments, and where they have successfully passed all stress tests."
6.2 Other Lessons Learned
- Much of the software development experience gained in the NASA/GSFC Software Engineering Laboratory (SEL) is captured in "Lessons Learned from 25 Years of Process Improvement: The Rise and Fall of the NASA Software Engineering Laboratory" 430. The document describes numerous lessons learned that are applicable to the Agency's software development activities. From their early studies, the SEL was able to build models of the environment and develop profiles for their organization. One of the key lessons in the document is "Lesson 6: The accuracy of the measurement data will always be suspect, but you have to learn to live with it and understand its limitations."
7. Software Assurance
7.1 Tasking for Software Assurance
1. Confirm that a measurement program establishes, records, maintains, reports, and uses software assurance, management, and technical measures.
2. Perform trending and analyses on metrics (quality metrics, defect metrics) and report the results.
7.2 Software Assurance Products
- Reports of SA trending and analyses performed on software and assurance measurements.
- Evidence that confirms a software measurement program, per Task 1.
- Evidence of a software assurance measurement program, including:
  - collection, storage, and analysis procedures
  - SA measures collected, stored, and analyzed
  - SA measurement results reported and used for management and improvement
- Evidence of use of the SA metric information for SA improvement and software assessments.
Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:
- Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., Risk Log).
- Meeting minutes with attendance lists, SA meeting notes, or assessments of the activities, recorded in the project repository.
- Status report, email, or memo stating that the confirmation has been performed, with the date (a checklist of confirmations could be used to record when each confirmation was done).
- Signatures on SA-reviewed or witnessed products or activities, or
- Status report, email, or memo containing a short summary of the information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
- To confirm that “IV&V Program Execution exists,” the summary might be: “The IV&V Plan is in draft; it is expected to be complete by (some date).”
- To confirm that “Traceability between software requirements and hazards with SW contributions exists,” the summary might be: “x% of the hazards with software contributions are traced to the requirements.”
- Note: Each project chooses its own set of metrics, depending on their information needs. See SWE Handbook Topic 8.18 - SA Suggested Metrics for additional guidance.
Organizational Goals of Software Assurance Metrics:
Goal: Assure delivery of quality software requirements to support safe and secure products, mission success, and customer objectives.
Quality Software Requirements
Are the software requirements detailed enough for development and test?
- The ratio of the number of detailed software requirements to the number of SLOC to be developed by the project.
- The percentage complete for each area of traceability.
Are the requirements stable?
- Software requirements volatility, trended after the project baseline (e.g., number of requirements added, deleted, or modified; number of TBDs).
Are the software hazards adequately addressed in the software requirements?
- The percentage complete of traceability for each hazard with software items. (New)
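The requirements-stability measures above reduce to simple ratios against the baselined requirement count. A minimal sketch of one way a project might compute the volatility trend (the data class, field names, and sample counts are illustrative assumptions, not from any NASA tool):

```python
from dataclasses import dataclass

@dataclass
class BaselineDelta:
    """Requirement changes recorded in one reporting period (hypothetical record)."""
    period: str
    added: int
    deleted: int
    modified: int

def volatility(deltas, baseline_count):
    """Volatility per period: changed requirements as a fraction of the baseline."""
    return {d.period: (d.added + d.deleted + d.modified) / baseline_count
            for d in deltas}

# Illustrative data: 400 baselined requirements, two reporting periods.
history = [BaselineDelta("2024-Q1", 12, 3, 20),
           BaselineDelta("2024-Q2", 4, 1, 6)]
print(volatility(history, baseline_count=400))
# A rising fraction after baseline would signal unstable requirements.
```

A project would typically plot these fractions per period and watch for the trend to flatten as the baseline matures.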
Goal: Assure delivery of quality, safe, and secure code.
Is the code secure, and has it addressed the cybersecurity requirements?
- Number of secure coding violations per number of developed lines of code;
- List of the types of secure coding violations found.
Is the safety-critical code safe?
- Software cyclomatic complexity data for all identified safety-critical software components.
What is the quality of the code?
- Number of defects or issues found in the software after delivery;
- The number of defects or non-conformances found in flight code, ground code, tools, and COTS products used.
Do the requirements adequately address cybersecurity?
- Number and type of identified cybersecurity vulnerabilities and weaknesses found by the project.
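The violations-per-lines-of-code measure above is a density ratio, usually normalized per thousand source lines (KSLOC) so projects of different sizes can be compared. A minimal sketch, with made-up counts (the function name and numbers are illustrative, not from any NASA tool):

```python
def violations_per_ksloc(violations: int, sloc: int) -> float:
    """Secure-coding violation density per thousand developed source lines."""
    return violations / (sloc / 1000.0)

# Illustrative: 18 static-analysis secure-coding violations in 45,000 SLOC.
print(violations_per_ksloc(18, 45000))  # 0.4 violations per KSLOC
```

Tracking this density over successive scans, rather than the raw violation count, keeps the measure meaningful as the code base grows.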
Goal: Continuously improve the quality and adequacy of software testing to assure that safe and reliable products and services are delivered.
Quality Software Testing
Does the test program provide adequate coverage?
- Software code coverage data;
- Software requirements test coverage percentages, including the percentage of testing completed and the percentage of the detailed software requirements successfully tested to date;
- Number of issues and discrepancies found during each test;
- The number of lines of code tested.
Does the software test program test all of the safety-critical code?
- Test coverage data for all identified safety-critical software components.
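The coverage percentages listed above all share the same arithmetic: items covered over total items. A minimal sketch, assuming hypothetical counts for requirements and lines of code (none of these numbers come from a real project):

```python
def coverage_pct(covered: int, total: int) -> float:
    """Percentage of items (requirements, tests, lines) covered to date."""
    return 100.0 * covered / total if total else 0.0

# Illustrative: 342 of 380 detailed requirements successfully tested,
# 45,210 of 51,300 lines of code exercised by the test suite.
print(coverage_pct(342, 380))      # 90.0 (% of requirements tested)
print(coverage_pct(45210, 51300))  # ~88.1 (% of code exercised)
```

For safety-critical components, a project would typically report these percentages per component and expect them to approach 100% before delivery.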
Goal: Continuously monitor software projects to improve the management of software plans, procedures, and defects, assuring that quality products and services are delivered on time and within budget.
Quality Software Plans, Procedures, and Defect Tracking
Is the SW project proceeding as planned?
- Compare the initial cost estimate and the final actual cost, noting assumptions and differences in cost parameters.
Is the SW project addressing identified problems?
- The number of findings from process non-compliances and process maturity assessments.
Is the SW project using peer reviews to increase product quality?
- Number of peer reviews performed vs. the number planned; the number of defects found in each peer review.
How well is the project following its processes and procedures?
- Number of audit findings per audit;
- The time required to close the audit findings.
What is the defect tracking status, and why did the defects occur?
- Problem/change report status: total number, number closed, number opened in the current reporting period, age, severity;
- Number of defects or issues found in the software after delivery;
- The number of defects or non-conformances found in flight code, ground code, tools, and COTS products used;
- Number of software non-conformances at each severity level for each software configuration item;
- The number of root cause analyses performed; list of findings identified by each root cause analysis;
- The trend showing the closure of corrective actions over time.
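The closure trend in the last bullet can be derived from per-period open and close counts as a running balance of open corrective actions. A minimal sketch with invented counts (the function and data are illustrative, not a prescribed NASA method):

```python
def open_over_time(opened, closed):
    """Running count of open corrective actions per reporting period."""
    balance, trend = 0, []
    for new, resolved in zip(opened, closed):
        balance += new - resolved  # items carried over plus new, minus closed
        trend.append(balance)
    return trend

# Illustrative: four reporting periods of corrective-action activity.
print(open_over_time(opened=[5, 3, 2, 0], closed=[1, 4, 3, 2]))  # [4, 3, 2, 0]
```

A trend that declines toward zero, as in this example, is the desired signal; a flat or growing balance suggests corrective actions are being raised faster than they are closed.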
Goal: Maintain and advance organizational capability in software assurance processes and practices to meet NASA-STD-8739.8 requirements.
SA Process Improvements
Are SA findings providing value to software development?
- The number of SA findings (e.g., number open, closed, latency, number accepted) mapped against SA activities through the life cycle, including process non-compliances and process maturity.
- The number of defects found by software assurance during each peer review activity.
Is the SA effort proceeding as planned?
- Trend the software assurance cost estimates through the project life cycle;
- Planned SA resource allocation versus actual SA resource allocation;
- Percent of the required training completed for each of the project's SA personnel;
- The number of compliance audits planned vs. the number completed, and trends on non-conformances from the audits.
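The planned-versus-actual comparisons above are commonly reported as a percentage variance from plan. A minimal sketch, assuming hypothetical staffing figures in full-time equivalents (the function name and numbers are illustrative):

```python
def variance_pct(planned: float, actual: float) -> float:
    """Percent over (+) or under (-) the planned value."""
    return 100.0 * (actual - planned) / planned

# Illustrative: 12.0 FTE of SA effort planned, 13.5 FTE actually expended.
print(variance_pct(planned=12.0, actual=13.5))  # 12.5 (% over plan)
```

The same calculation applies to the training and audit measures: a persistent negative variance on audits completed, for example, indicates the SA effort is falling behind plan.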