- 1. Requirements
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. Requirements
2.1.1.2 The NASA OCE shall periodically benchmark each Center's software engineering capability against requirements in this directive.
1.1 Notes
Capability Maturity Model® Integration (CMMI®) for Development (CMMI-DEV) appraisals are the preferred benchmarks for objectively measuring progress toward software engineering process improvement at NASA Centers.
1.2 History
1.3 Related Activities
This requirement is related to the following Activities:
| Related Links |
|---|
2. Rationale
The Headquarters Office of the Chief Engineer (OCE) is responsible for ensuring that the Agency-level software engineering requirements and policies are being followed throughout the Agency.
SWE-004 specifies that the NASA Office of the Chief Engineer (OCE) must periodically benchmark each NASA Center’s software engineering capability against the requirements outlined in the directive. This benchmarking process is critical for ensuring a standard level of excellence and fostering continuous improvement across the agency. The rationale for this requirement is outlined below:
1. Ensuring Compliance with Agency Standards
NASA operates as a decentralized organization with multiple Centers, each contributing to its diverse suite of missions. Without periodic benchmarking:
- Centers may interpret or implement NPR 7150.2 (SWEREF-083) requirements inconsistently.
- Deviations in adherence to software engineering standards may arise, resulting in uneven quality across projects.
By benchmarking Centers’ software engineering capabilities:
- The OCE ensures that agency-wide policies and processes are consistently applied.
- Each Center is held accountable for aligning with NPR 7150.2 requirements, helping to maintain software quality, safety, and reliability.
This is particularly important for safety-critical systems in human spaceflight, planetary exploration, and spacecraft operations, where software defects can have catastrophic consequences.
2. Improving Mission Reliability and Success Rates
NASA’s missions depend on software to control spacecraft, instruments, and ground systems. Benchmarking helps identify and address gaps or deficiencies in Center capabilities that could pose risks to mission success. For example:
- Strengthening Requirements Management ensures that software requirements fully represent mission needs.
- Enhancing Testing Rigor ensures that potential failure modes are caught early.
- Identifying capability gaps allows the OCE to implement targeted improvements that reduce risks of software failures.
3. Driving Continuous Improvement in Software Processes
The periodic benchmarking process enables NASA to maintain a culture of continuous improvement in how software is developed, managed, and assured. This serves to:
- Promote the adoption of best practices and lessons learned across all Centers.
- Help Centers advance their maturity in software engineering processes.
- Ensure that even high-performing Centers continue refining their approaches in response to new challenges, technologies, and methodologies.
Benchmarking aligns with Capability Maturity Model Integration (CMMI) principles, providing quantitative feedback for process improvements and enabling Centers to better optimize their resources.
4. Identifying Agency-Wide Trends and Strengthening Consistency
Without benchmarking, NASA risks variability in the quality and maturity of software engineering practices across its Centers. Benchmarking serves to:
- Identify agency-wide strengths, allowing NASA to highlight effective practices and replicate them across multiple Centers.
- Pinpoint common weaknesses or challenges across Centers, enabling the OCE to address systemic issues.
- Improve the consistency of coding standards, assurance practices, test protocols, and risk management processes across the agency.
This approach ensures that NASA remains a unified organization with robust, standardized software engineering practices.
5. Resource Allocation and Support for Underperforming Centers
Periodic benchmarking allows the OCE to:
- Identify Centers that may require additional training, funding, or technical support to meet agency requirements.
- Prioritize resource allocation to teams or Centers facing specific challenges, while ensuring that high-priority missions do not suffer from software gaps.
- Measure the effectiveness of software engineering improvement plans at each Center and provide targeted recommendations for further enhancement.
Benchmarking ensures that all Centers, regardless of past performance, have access to necessary resources to meet NASA’s high standards.
6. Adapting to Emerging Challenges and Technologies
Software engineering is a rapidly evolving discipline, with new technologies, methodologies, and tools emerging regularly. Benchmarking facilitates:
- Identification of Centers that are successfully adopting new practices (e.g., model-based systems engineering, autonomous systems testing, DevOps) and sharing those advancements across the agency.
- Detection of Centers that may lag in adapting to evolving mission requirements, allowing the OCE to provide training or additional resources.
By benchmarking capabilities periodically, the OCE ensures that NASA’s software engineering practices remain effective and responsive to the challenges posed by increasingly complex and autonomous systems.
7. Minimizing Risk in Inter-Center and Interdisciplinary Collaboration
Many NASA missions require collaboration between multiple Centers and involve diverse interdisciplinary teams. Without a baseline understanding of each Center’s software engineering capability:
- Integration and coordination challenges may arise between Centers with differing levels of software quality, risk management practices, and testing rigor.
- Disparities in competency could jeopardize mission-critical systems.
Benchmarking ensures that all Centers:
- Operate at a comparable level of software engineering capability.
- Share a common understanding of expectations and requirements, leading to smoother collaborations.
8. Enhancing Accountability and Transparency
Benchmarking enhances transparency by providing an objective assessment of each Center's software engineering capability. This:
- Encourages Centers to take ownership of their software processes, embracing accountability for quality and compliance.
- Ensures clear communication with stakeholders, including mission managers, contractors, and external partners, about the strengths and weaknesses of software practices.
9. Supporting NASA’s Commitment to Excellence
NASA’s commitment to excellence in engineering and mission assurance is recognized globally. Benchmarking is a vital tool for maintaining this reputation by:
- Upholding the highest standards in software engineering, adherence to safety-critical requirements, and reliability.
- Demonstrating NASA’s proactive approach to identifying opportunities for improvement agency-wide.
Benchmarking Framework: Metrics and Outcomes
To evaluate each Center comprehensively, benchmarking involves:
- Measuring compliance metrics for each requirement in NPR 7150.2 (e.g., requirements management, configuration management, testing practices, documentation quality); a simple roll-up of such metrics is sketched at the end of this subsection.
- Assessing the maturity of software process improvements (e.g., alignment with CMMI levels).
- Reviewing lessons learned and the implementation of corrective actions from past projects at each Center.
- Examining workforce competency, use of advanced tools and practices, and incorporation of software safety principles.
The outcomes of benchmarking include:
- Actionable feedback for Centers to address gaps in compliance.
- Agency-wide reports to benchmark performance trends and inform future initiatives.
- Identification of high-performing Centers whose practices might become models for others.
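As a concrete illustration of how such compliance metrics could be rolled up during a benchmarking cycle, the hypothetical Python sketch below aggregates per-requirement assessment results into a simple per-Center compliance score. The requirement IDs, status values, weights, and the 80% review threshold are illustrative assumptions, not prescriptions from NPR 7150.2 or the OCE.

```python
# Hypothetical sketch: rolling up per-requirement compliance assessments
# into a simple per-Center score. Requirement IDs, status values, weights,
# and the review threshold are illustrative assumptions, not NPR 7150.2 content.
from dataclasses import dataclass

@dataclass
class Assessment:
    center: str       # e.g., "Center A"
    requirement: str  # e.g., "SWE-004"
    status: str       # "compliant", "partial", or "noncompliant"

# Weights assigned to each status for the roll-up (an assumption).
STATUS_WEIGHT = {"compliant": 1.0, "partial": 0.5, "noncompliant": 0.0}

def compliance_scores(assessments):
    """Return {center: average compliance score across assessed requirements}."""
    totals, counts = {}, {}
    for a in assessments:
        totals[a.center] = totals.get(a.center, 0.0) + STATUS_WEIGHT[a.status]
        counts[a.center] = counts.get(a.center, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

if __name__ == "__main__":
    sample = [
        Assessment("Center A", "SWE-004", "compliant"),
        Assessment("Center A", "SWE-032", "partial"),
        Assessment("Center B", "SWE-004", "compliant"),
        Assessment("Center B", "SWE-032", "compliant"),
    ]
    for center, score in sorted(compliance_scores(sample).items()):
        flag = "review" if score < 0.8 else "ok"  # 0.8 threshold is illustrative
        print(f"{center}: {score:.0%} of assessed requirements met ({flag})")
```

In practice, the OCE and Centers would substitute their own assessment categories, weighting scheme, and reporting thresholds.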
Conclusion
The requirement for the OCE to periodically benchmark Centers’ software engineering capabilities against NPR 7150.2 is necessary to maintain high standards in software development and mission assurance. It fosters continuous improvement, cross-Center consistency, and greater accountability, ensuring NASA’s ability to reliably deliver safe, successful, and cost-effective missions. Benchmarking enhances the agency’s resilience in the face of evolving challenges, ultimately aligning every Center’s efforts with NASA’s rigorous engineering and operational expectations.
3. Guidance
To fulfill this requirement, the Headquarters Office of the Chief Engineer (OCE) employs a comprehensive set of methods to assess and benchmark the software engineering capabilities of NASA Centers. This ensures consistent compliance with NPR 7150.2 (SWEREF-083), the advancement of software engineering maturity, and the identification of best practices and areas for improvement across the Agency. The guidance below clarifies the processes, roles, objectives, and methodologies involved.
3.1 Methods Employed by OCE for Benchmarking
The OCE achieves this requirement through a combination of assessments, reviews, and collaborations across NASA Centers. These methods provide a holistic view of compliance with software engineering requirements, the effectiveness of process implementation, and progress toward agency goals:
- OCE Project Surveys:
- Periodic assessments are conducted at the Centers and within programs/projects to verify compliance with NPR 7150.2. These surveys provide oversight into how Centers implement agency-level requirements and policies.
- Review of CMMI® Appraisal Results:
- Evaluate Centers using CMMI for Development (CMMI-DEV) appraisals to objectively assess the maturity of their software engineering processes.
- Provide actionable insights for improvement based on specific and general practices benchmarked against industry standards.
- Participation in Program and Project Reviews:
- Attend program, project, and milestone reviews to assess compliance with software engineering and assurance requirements in the context of specific missions or systems.
- Review of Organizational and Project-Level Artifacts:
- Assess planning documents, project schedules, progress reports, and other organizational artifacts.
- Validate the appropriate allocation of resources and alignment with project objectives and requirements.
- Review of Center and Project Waivers:
- Analyze waivers submitted by Centers or projects to ensure proper justification, risk assessment, and adherence to the waiver approval process.
- Collaboration Through the NASA Software Working Group (SWG):
- Gather feedback and share status updates from Centers during SWG meetings to identify common challenges, lessons learned, and potential areas for improvement.
- Software Inventory Data Review:
- Utilize comprehensive software inventory data to identify trends, high-risk areas, or anomalies related to software classification, documentation, and life-cycle management.
- Feedback from External Entities:
- Incorporate feedback and responses to inquiries from external stakeholders (e.g., contractors, other government agencies) to maintain high standards and awareness of external expectations.
3.2 Objectives and Core Elements of OCE Surveys
The OCE surveys play a central role in assessing compliance, maintaining internal control, and overseeing operations at the Center and project levels.
3.2.1 Survey Objectives
These surveys are aligned with the following objectives:
- Ensure Compliance: Validate compliance with agency-level software engineering requirements and policies defined in NPR 7150.2.
- Assess Implementation: Assess the implementation of the Software Engineering Technical Authority (SETA) on projects and at Centers.
- Identify Deficiencies and Risks: Highlight systemic issues, deficiencies, or risks that may impede mission success or compliance.
- Recognize Best Practices: Identify areas of excellence that can be scaled or replicated across NASA.
- Gather Feedback: Collect suggestions from stakeholders regarding areas where policies or requirements may need refinement.
3.2.2 Focus Areas of Survey Core Elements
The following are the primary focus areas of OCE surveys:
- Compliance with Software Engineering Requirements:
- Verify adherence to NPR 7150.2 and related standards specific to software engineering.
- Implementation of Technical Authority (SETA):
- Confirm that the structured approval process is followed when managing software-related decisions and issues.
- Waivers and Dissent Management:
- Assess the waiver process and responses to dissenting opinions to ensure proper evaluation and approval of deviations.
- Software Risk Management:
- Examine risk identification, analysis, and mitigation approaches for software-related issues in project plans.
- Documentation and Records Management:
- Review the completeness and accuracy of key software documentation, including requirements, designs, and test artifacts.
- Software Classification Processes:
- Evaluate classification processes, ensuring appropriate rigor for Class A and B software and effective application of tailored requirements for other classifications.
- Software Safety and Assurance Practices:
- Inspect the integration of safety and assurance processes into software engineering workflows.
- Training and Workforce Development:
- Assess the availability of training programs and the qualifications of personnel working in software engineering disciplines.
- Software Architecture and Design:
- Review the clarity and robustness of software architecture and detailed design for critical systems.
- Use of Metrics and Data-Driven Decision Making:
- Evaluate Centers' use of software metric data in decision-making and process improvement.
3.3 Use of CMMI® in Benchmarking
The Capability Maturity Model Integration (CMMI®) for Development (CMMI-DEV) framework (SWEREF-689) is a cornerstone of the benchmarking process, serving as an industry-standard method for assessing a Center’s software engineering maturity level. Its adoption in NASA benchmarking efforts provides a consistent, objective methodology for measuring progress and identifying improvement opportunities.
3.3.1 Purpose of CMMI Benchmarking
- Objective Evaluation of Capabilities:
- CMMI appraisals assess specific and general software engineering practices to determine the maturity of the Center’s software engineering processes.
- Progress Tracking:
- With follow-up appraisals, the improved capabilities of Centers can be measured against previously established baselines.
- Risk Mitigation:
- CMMI results help NASA identify process-related risks within software development organizations (both internal and external) and measure effectiveness in mitigating those risks.
3.3.2 Benefits of CMMI to NASA
- Best Practices Alignment:
- Measures the alignment of Center practices with industry-proven methods for software development and sustainment.
- Consistency and Comparability:
- Establishes a common yardstick for comparing the maturity of processes across Centers, missions, and external contractors.
- Improved Performance:
- Promotes measurable improvements in cost estimation accuracy, schedule adherence, and defect reduction.
3.3.3 Center Role in Benchmarking
Centers play an active role in supporting CMMI evaluation by:
- Preparing for appraisals (e.g., compiling documentation, demonstrating practices).
- Participating actively during appraisals conducted by certified CMMI assessors.
- Reviewing and addressing findings from appraisals to align with agency objectives.
CMMI benchmarking fosters collaboration and accountability while providing NASA with insights to improve both Center-specific and agency-wide processes. For additional detail, refer to SWE-032 - CMMI Levels for Class A and B Software.
3.4 Feedback and Continuous Improvement
A key principle guiding NASA’s benchmarking efforts is the use of feedback loops to drive continuous improvement. This is accomplished through:
- Input from project surveys, SWG discussions, and external reviews to revise policies as needed.
- Sharing lessons learned—successes and challenges—across Centers to promote a holistic improvement culture.
- Monitoring trends in software maturity over time to ensure ongoing alignment with agency mission goals.
3.5 Conclusion
The benchmarking requirement ensures that NASA maintains consistent, high-quality software engineering and process rigor across its Centers. By employing multi-faceted assessment methods, focusing on compliance and continuous improvement, and leveraging CMMI benchmarking as an industry-standard tool, the OCE ensures that NASA’s software development capabilities remain robust, responsive, and reliable to meet the demands of evolving mission priorities. As Centers and projects actively engage in this process, NASA strengthens its ability to deliver safe, cost-effective, and mission-critical software products.
See also SWE-003 - Center Improvement Plans, SWE-036 - Software Process Determination, and SWE-209 - Benchmarking Software Assurance and Software Safety Capabilities.
3.6 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:
| Related Links |
|---|
3.7 Center Process Asset Libraries
SPAN - Software Processes Across NASA
SPAN contains links to Center-managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN (available to NASA only): https://nen.nasa.gov/web/software/wiki (SWEREF-197).
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).
| SPAN Links |
|---|
4. Small Projects
Benchmarking the software engineering capability of small projects does not need to impose excessive overhead or duplicate the comprehensive processes designed for larger programs. Instead, the benchmarking process for small projects should focus on scalability, proportionality, and practicality, ensuring that the effort is aligned with the smaller scope, reduced resources, and generally lower risk profiles of these projects. The tailored guidance below helps small projects participate effectively in OCE benchmarking efforts.
4.1 Guidance for Small Projects
4.1.1 Understanding the Context of Small Projects
- Small projects are typically characterized by limited budgets, smaller teams, and shorter timelines. Often, the software involved may not fall into the Class A or Class B category, although it can still be critical to mission success.
- For small projects, benchmarking should focus on essential software engineering practices rather than exhaustive appraisals of all NPR 7150.2 requirements.
4.1.2 Right-Sizing the Benchmarking Process
- OCE Project Surveys for Small Projects
- Small projects should ensure they address core software engineering elements relevant to their size and classification, such as requirements engineering, testing, risk management, and lessons learned.
- Preparation for project surveys should focus on clear, concise documentation that demonstrates compliance with the small project's specific tailored subset of NPR 7150.2 requirements.
- Participation in Program and Project Reviews
- For small projects, OCE participation in program and project reviews should be streamlined. Focus discussions and evaluations on critical concerns relevant to the project's software engineering practices, such as:
- Risk identification and mitigation processes.
- Validation and verification (V&V) efforts to ensure quality.
- Adequacy of developer training for their roles.
- Assessment of Software Planning Documents
- Simplify and consolidate key project documents such as project plans, schedules, and test reports.
- Small projects can combine related documents (e.g., requirements and test strategies) to minimize the documentation burden while demonstrating compliance.
- Waiver Management and Lessons Learned
- If a small project needs waivers for NPR 7150.2 requirements, ensure these waivers are carefully justified by the project's classification and criticality.
- Regularly integrate lessons learned from prior small projects into software development processes to demonstrate continuous improvement during benchmarking.
4.1.3 Focus Areas for Small Projects During Benchmarking
To ensure benchmarking is proportional to the scope of small projects, the OCE benchmarking process should emphasize the following areas:
Compliance Tailored to Scope:
- Ensure that software engineering practices align with the size, complexity, and safety criticality of the small project.
- Demonstrate that small projects have appropriately tailored workflows while maintaining adherence to core principles of NPR 7150.2.
Engagement in Essential Software Activities:
- Requirements engineering: Show that requirements are well-documented, clear, and traceable.
- Testing and validation: Demonstrate that test coverage reflects the small project's risk profile and complexity.
- Risk management: Highlight key software-related risks and mitigation strategies.
Practical Training and Workforce Competency:
- For small projects, compliance metrics regarding workforce skills and training should focus on ensuring the availability of just-in-time training for critical software engineering practices.
- Leverage resources provided by NASA’s Software Engineering Initiative (e.g., online modules, templates).
Software Safety and Assurance Practices:
- For small projects with safety-critical components, ensure these practices are in place proportionally:
- Early identification of safety risks.
- Independent assurance or peer reviews of critical systems.
- Where safety is not a concern, focus resources on test rigor and defect prevention.
Use of Software Metrics:
- Small projects should track lightweight, essential metrics sufficient to measure software quality and process efficiency without adding significant administrative burden; one illustrative approach is sketched after this list.
Inter-Center or External Collaboration:
- If small projects involve collaboration with other Centers or contractors, ensure that communication protocols, interface specifications, and shared responsibilities are well-documented.
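To make the idea of lightweight, essential metrics concrete, the following hypothetical Python sketch shows how a small project might compute a test pass rate and a count of open defects by severity from simple project records. The data layout and metric selection are assumptions for illustration only; a real project would choose metrics proportional to its classification and risk.

```python
# Hypothetical lightweight metrics for a small project: test pass rate and
# open defect count by severity. The record layout and metric choices are
# illustrative assumptions, not requirements from NPR 7150.2.
from collections import Counter

def test_pass_rate(test_results):
    """test_results: list of booleans (True = passed). Returns fraction passed."""
    return sum(test_results) / len(test_results) if test_results else 0.0

def open_defects(defects):
    """defects: list of dicts with 'severity' and 'status'. Returns open counts by severity."""
    return Counter(d["severity"] for d in defects if d["status"] != "closed")

if __name__ == "__main__":
    results = [True, True, False, True, True]
    defects = [
        {"id": 1, "severity": "major", "status": "open"},
        {"id": 2, "severity": "minor", "status": "closed"},
        {"id": 3, "severity": "minor", "status": "open"},
    ]
    print(f"Test pass rate: {test_pass_rate(results):.0%}")
    print(f"Open defects by severity: {dict(open_defects(defects))}")
```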
4.1.4 CMMI Benchmarking for Small Projects
For small projects, participating in CMMI benchmarking should focus on simplified and targeted improvements aligned specifically with the project’s scope. Since small projects are unlikely to involve as complex a set of processes as large-scale Class A or B projects, CMMI benchmarking efforts should emphasize:
- Adopting Scalable CMMI Practices:
- Use a lighter version of CMMI-inspired frameworks, focusing on areas such as:
- Requirements and design documentation.
- Risk tracking and process improvement loops.
- Small projects can align with CMMI practices for maturity at the project level, rather than Center-wide evaluations.
- Leverage NASA's Pre-Defined Resources and Templates:
- Use process templates and reusable assets available from the OCE or Center Process Asset Libraries (PALs) (SWEREF-197) to meet benchmarking criteria efficiently.
- Focus benchmarking efforts on a few selected CMMI practices that are directly applicable to the project's goals.
- Small Project Scenarios for Benchmarking:
- For instance, a CubeSat project might prioritize simple incremental development methods, test automation, small-team collaboration, and a focused defect-tracking system over broader process improvements.
4.1.5 Tailoring OCE Benchmarking Expectations
When engaging with small projects, the OCE should tailor benchmarking expectations based on the classification of the software and the complexity of the mission. The tailored approach should consider:
- Reduced compliance metrics for low-risk, small-scale software efforts.
- Proportional application of software lifecycle models, focusing on areas that have the greatest impact on scope and mission outcomes.
- Minimalist documentation requirements (e.g., combining plans where applicable).
4.1.6 Support from NASA Centers and the SWG
Role of Centers:
- Centers should provide direct support to small projects by:
- Facilitating proactive participation in OCE benchmarking efforts.
- Offering mentorship or assistance from subject matter experts (SMEs) to fulfill documentation and compliance reviews.
- Centers can also act as mediators in translating agency-wide benchmarking processes to the scale of small projects.
Role of the Software Working Group (SWG):
- The SWG can help small projects by:
- Sharing lessons learned from other small projects to mitigate repeated errors.
- Acting as a forum for disseminating best practices and lightweight tools.
- Providing feedback on improvement areas highlighted during benchmarking.
4.2 Example of Small Project Guidance During Benchmarking
Case Study: CubeSat Mission
Scenario: A small CubeSat mission involves Class D software, with a limited team of developers operating on a strict timeline to deliver flight software.
Application of Benchmarking Guidance:
Compliance Tailoring:
- Focus only on essential NPR 7150.2 requirements for Class D software, such as lightweight requirements validation, unit testing, and interface compatibility testing.
CMMI Alignment:
- Benchmark CMMI compliance for essential practices like configuration management and incremental testing processes rather than addressing the full suite of CMMI standards.
Streamlined Documentation:
- Combine requirements documentation and the test plan into one optimized artifact.
- Document only software risks that are directly tied to mission success or safety.
Ongoing Training:
- Use just-in-time resources or online training tools to ensure team members are familiar with mission-specific software assurance practices.
4.3 Conclusion
Benchmarking small projects should prioritize essential processes that align with their specific scope, classification, and complexity. By focusing on critical elements, minimizing overhead, and leveraging OCE tools and templates, small projects can maintain compliance with NPR 7150.2 while maximizing efficiency. Tailored guidance ensures that benchmarking efforts are achievable, meaningful, and directly contribute to the improved quality and effectiveness of small projects.
5. Resources
5.1 References
- (SWEREF-038) Release 1.0, NASA Office of the Chief Engineer, 2002.
- (SWEREF-082) NPR 7120.5F, Office of the Chief Engineer, Effective Date: August 03, 2021, Expiration Date: August 03, 2026.
- (SWEREF-083) NPR 7150.2D, Effective Date: March 08, 2022, Expiration Date: March 08, 2027. https://nodis3.gsfc.nasa.gov/displayDir.cfm?t=NPR&c=7150&s=2D. Contains a link to the full text in PDF format. Search for "SWEREF-083" for links to older NPR 7150.2 versions.
- (SWEREF-157) CMMI Development Team (2010). CMU/SEI-2010-TR-033, Software Engineering Institute.
- (SWEREF-197) Software Processes Across NASA (SPAN) website in NEN. SPAN is a compendium of Processes, Procedures, Job Aids, Examples, and other recommended best practices.
- (SWEREF-278) NASA-STD-8739.8B, NASA Technical Standard, Approved 2022-09-08, superseding NASA-STD-8739.8A.
- (SWEREF-689) Capability Maturity Model Integration (CMMI) Model V3.0, ISACA, April 6, 2023. NASA users can access the CMMI models with a CMMI Institute account at https://cmmiinstitute.com/dashboard. Non-NASA users may purchase the document from https://cmmiinstitute.com.
5.2 Tools
6. Lessons Learned
6.1 NASA Lessons Learned
The NASA Lessons Learned Library provides multiple examples that emphasize the importance of consistent benchmarking, compliance assessment, and continuous improvement in software engineering practices across NASA Centers. These lessons reveal how discrepancies in adherence to processes or the absence of periodic assessments can lead to mission failures or inefficiencies. Below are applicable lessons learned that reinforce the need for periodic benchmarking in alignment with this requirement.
6.1.1 Relevant NASA Lessons Learned
1. Mars Climate Orbiter Loss (1999)
Lesson Number: 0740
Summary:
The loss of the Mars Climate Orbiter (MCO) highlighted critical failures in software engineering processes due to unverified assumptions, lack of thorough reviews, and missing compliance with established workflows. One of the root causes was the failure to detect that output files contained inconsistent units—imperial units instead of the required metric units. This discrepancy resulted in the spacecraft’s trajectory deviation and mission loss.
Key Relevance to Benchmarking:
- Standards such as unit compatibility must be benchmarked across Centers to ensure uniform understanding and adherence to processes during software design, development, and integration.
- Benchmarking fosters consistent implementation of requirements, such as those prescribed by the Software Management and Development Plan (SMDP), preventing flaws from cascading through multiple phases of development.
Lesson Learned:
- Conduct periodic evaluations of whether Centers are adhering to validation, testing, and verification processes for mission-critical software components.
- Use benchmarking to evaluate whether teams at all Centers properly implement process-related requirements and have a common understanding of standards and metrics.
2. Mars Polar Lander Loss (1999)
Lesson Number: 0938
Summary:
The loss of the Mars Polar Lander (MPL) occurred because of incomplete software requirements and inadequate testing protocols. The flight software did not account for transient signals generated by hardware components during descent, which were wrongly interpreted as a touchdown. This led to premature engine shutdown and the spacecraft's destruction.
Key Relevance to Benchmarking:
- This incident demonstrates the importance of benchmarking software engineering capabilities center-wide to ensure robust requirements management and testing practices.
- Benchmarking can uncover whether Centers have adequately implemented systems to detect and address risks introduced by hardware/software interactions.
Lesson Learned:
- Benchmarking efforts should include assessments of how well requirements are defined and traced to testing activities across Centers.
- Periodic reviews should verify that software engineering practices are in place to address edge cases and risks in mission-critical software.
3. Software Metrics and Management Oversight
Lesson Number: 1964
Summary:
A project experienced significant issues with software cost and schedule overruns, largely due to the lack of standardized software engineering metrics and incomplete oversight of software development processes. This resulted in unmanaged code growth and insufficient progress tracking, ultimately jeopardizing the program timeline.
Key Relevance to Benchmarking:
- NASA software engineering benchmarking ensures Centers are using consistent metrics for cost, schedule, quality, and risk management.
- Benchmarking can identify gaps in how Centers measure, analyze, and report software performance metrics, allowing the OCE to implement corrective actions.
Lesson Learned:
- Centers should be required to demonstrate how they collect, use, and analyze software metrics for insight into software quality and compliance with processes.
- Benchmarking can provide a mechanism to share best practices in software metrics and management oversight across Centers.
4. Independent Oversight of Software Processes
Lesson Number: 0331
Summary:
A lack of independent software process evaluation resulted in project teams underestimating technical risks associated with software. This contributed to design and testing weaknesses that could have been detected earlier with independent reviews.
Key Relevance to Benchmarking:
- Benchmarking addresses the lack of oversight by ensuring OCE periodically evaluates and compares Center capabilities through objective criteria.
- Independent benchmarking activities, like CMMI appraisals, reduce the risk of undetected issues caused by poor execution of software engineering processes.
Lesson Learned:
- The OCE’s benchmarking activities should reinforce the need for Centers to institutionalize independent oversight mechanisms across projects.
- Benchmarking evaluations should monitor how well Centers have established consistent reviews, including independent process assessments, to mitigate risks.
5. Space Network Ground System Software Issues
Lesson Number: 2294
Summary:
The project experienced significant underperformance of ground system software, which was attributed to poor software requirements management, testing insufficiencies, and weak configuration control practices. These issues were exacerbated by over-reliance on underperforming contractors, whose subpar practices went undetected due to inadequate benchmarking and reviews.
Key Relevance to Benchmarking:
- Benchmarking ensures both internal and external software engineering processes are aligned with NASA's requirements.
- Regular assessments of software engineering maturity at Centers support better risk identification and management across internal teams and contractors.
Lesson Learned:
- Centers must demonstrate compliance with software engineering standards to ensure their software systems, including contractor-provided systems, meet requirements for reliability and performance.
- Benchmarking should include an appraisal of contractor oversight processes to ensure contractor performance aligns with NASA's expectations.
6. Consistency and Knowledge Sharing Across Centers
Lesson Number: 2278
Summary:
Variability in software development practices across NASA Centers caused unnecessary rework and inefficiencies during software integration for a multi-Center mission. This was due to differences in process maturity, inconsistent documentation standards, and a lack of shared best practices.
Key Relevance to Benchmarking:
- Benchmarking helps unify processes across Centers, creating a consistent baseline for software engineering practices.
- Regular reviews of Center practices can detect variability early and promote knowledge sharing, reducing integration challenges.
Lesson Learned:
- Benchmarking is a critical tool for identifying discrepancies in software engineering capabilities across Centers and ensuring a consistent application of NPR 7150.2.
- Lessons learned from one Center should be shared agency-wide to foster continuous improvement and prevent repeated errors.
7. Training and Workforce Competency in Software Processes
Lesson Number: 1125
Summary:
A project encountered software delivery delays and quality issues because the workforce lacked sufficient training in applying new software engineering tools and methodologies. The gaps in knowledge led to inefficient development practices and increased defect rates.
Key Relevance to Benchmarking:
- Benchmarking efforts should assess not only process maturity but also the adequacy of workforce training and competency at Centers.
- Regular reviews can ensure that training programs are implemented and maintained, enabling staff to keep pace with emerging technologies and methodologies.
Lesson Learned:
- Include assessments of software engineering training programs during benchmarking to verify that project teams are well-prepared to execute processes effectively.
- Benchmarking should encourage Centers to adopt just-in-time training for small projects to address proficiency gaps in critical skills.
6.1.2 Conclusion
The lessons learned from NASA’s missions underscore the critical need for periodic benchmarking of software engineering capabilities. These lessons highlight how deficiencies in compliance, requirements management, testing, metrics, contractor oversight, and workforce training can lead to significant mission impacts. By leveraging benchmarking as a proactive tool, the NASA Office of the Chief Engineer (OCE) can ensure consistent application of best practices, risk mitigation, and the overall improvement of software engineering processes across all Centers. These lessons reinforce the importance of fostering a culture of continuous improvement and knowledge sharing within NASA’s software engineering community.
6.2 Other Lessons Learned
No other Lessons Learned have currently been identified for this requirement.
7. Software Assurance
7.1 Tasking for Software Assurance
None identified at this time.
7.2 Software Assurance Products
Software Assurance (SA) products are tangible outputs created by Software Assurance personnel to support oversight, validate compliance, manage risks, and ensure the quality of delivered products. These products are essential to demonstrate that SA objectives are being met, and they serve as evidence of the thoroughness and effectiveness of the assurance activities performed.
No specific deliverables are currently identified.
7.3 Metrics
No standard metrics are currently specified.
7.4 Guidance
This requirement ensures that NASA maintains a high standard of software engineering capability across all Centers by evaluating compliance with NPR 7150.2 (SWEREF-083) requirements. Software assurance (SA) plays a critical role in benchmarking activities by verifying, monitoring, and assessing each Center's implementation of software assurance processes and identifying areas of improvement.
7.4.1 Software Assurance Focus
Software assurance guidance should focus on ensuring:
- The accuracy and consistency of benchmarking data related to software assurance practices.
- That Centers apply corrective actions to address gaps in software assurance processes or compliance.
- Continued improvement in software assurance capabilities across the Agency.
7.4.2 Software Assurance Responsibilities
- Support OCE in Benchmarking Activities
- Collaborate with the OCE and Centers:
- Actively participate in benchmarking activities by providing input related to software assurance processes and compliance.
- Share expertise, processes, tools, and lessons learned that can further enhance software assurance practices across Centers.
- Prepare Assurance Metrics:
- Support the OCE by identifying software assurance-specific metrics during benchmarking (e.g., defect reduction rates, test coverage, compliance with risk assurance processes).
- Provide clear, measurable, and reliable assurance data to inform benchmarking.
- Provide Input on Assessing Key Assurance Areas
Software assurance processes should be assessed alongside software engineering capability. Key areas to evaluate include:
- Compliance with Software Assurance Requirements (NPR 7150.2 and NASA-STD-8739.8 (SWEREF-278)):
- Evaluate whether assurance procedures are being correctly implemented across the software lifecycle based on Center-specific activities.
- Ensure that waivers or deviations are documented, approved, and justified for software assurance requirements.
- Effectiveness of Independent Verification and Validation (IV&V):
- Verify that IV&V processes, if applicable, are being performed for critical software systems and aligned with their classification (e.g., Class A/B safety-critical).
- Risk Management for Software Assurance:
- Assess if each Center maintains effective risk identification, tracking, and mitigation strategies for software risks, particularly safety-critical or mission-critical ones.
- Testing Rigor and Coverage:
- Ensure software testing and assurance practices align with lifecycle benchmarks specified in NPR 7150.2.
- Lessons Learned and Corrective Actions:
- Evaluate whether lessons learned from prior assurance and engineering reviews have been incorporated back into Center processes to strengthen compliance.
- Ensure Consistency Across Centers
- Collaborate with the OCE to define uniform benchmarks for assessing software assurance practices across all NASA Centers, ensuring consistency in how assurance processes are measured, evaluated, and reported.
- Act as a liaison to share best practices and solutions adopted by high-performing Centers to address gaps at Centers with less mature assurance practices.
- Identify and Address Gaps in Assurance Capability
- Participate in benchmarking gap analyses to identify software assurance deficiencies specific to Centers, including:
- Lack of compliance with NPR assurance requirements.
- Inconsistent or inadequate risk and safety evaluations.
- Resource limitations (staff, tools, or training) affecting assurance quality.
- Collaborate with Center-level software assurance teams to:
- Develop corrective action plans for identified gaps.
- Recommend tailored training, tools, or process improvements to close gaps.
- Establish Ongoing Communication and Reporting
- Provide ongoing status updates to the OCE about assurance-related benchmarking findings. This includes:
- Center-specific strengths and weaknesses in assurance processes.
- Recommendations for improving processes or tools.
- Recurrent deficiencies that require Agency-wide policy or resource adjustments.
- Document benchmarking participation, including completed reviews, identified gaps, and corrective actions taken.
- Promote Assurance Workforce Development
- Encourage Centers to invest in their software assurance workforce to ensure staff are trained in modern and emerging assurance techniques, including automation, agile practices, DevSecOps, and model-based assurance.
- Advocate for standardization of assurance tools and practices across Centers to reduce variability and improve benchmarking outcomes.
7.4.3 Best Practices for Software Assurance Benchmarking
7.4.3.1 Establish Assurance Metrics for Benchmarking
Develop measurable indicators to assess software assurance performance (two of these indicators are computed in the sketch following this list):
- Percentage of safety-critical (Class A and B) software systems with formal assurance reviews completed.
- Number of assurance issues or non-compliances identified during lifecycle phases.
- Test coverage percentages as tracked by assurance teams.
- Average time taken to resolve assurance-related defects or risks.
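A hypothetical sketch of how two of these indicators might be computed from simple benchmarking records is shown below: the formal-review completion rate for Class A/B systems and the average time to resolve assurance-related defects. The record fields, class labels, and data values are illustrative assumptions rather than a NASA-defined data model.

```python
# Hypothetical computation of two assurance benchmarking indicators.
# Field names, class labels, and sample data are illustrative assumptions.
from datetime import date

def formal_review_completion_rate(systems):
    """Fraction of Class A/B systems whose formal assurance review is complete."""
    critical = [s for s in systems if s["sw_class"] in ("A", "B")]
    if not critical:
        return 0.0
    return sum(s["formal_review_complete"] for s in critical) / len(critical)

def avg_days_to_resolve(defects):
    """Average days between 'opened' and 'resolved' for closed assurance defects."""
    closed = [d for d in defects if d.get("resolved")]
    if not closed:
        return 0.0
    return sum((d["resolved"] - d["opened"]).days for d in closed) / len(closed)

if __name__ == "__main__":
    systems = [
        {"name": "FSW-1", "sw_class": "A", "formal_review_complete": True},
        {"name": "GSW-2", "sw_class": "B", "formal_review_complete": False},
        {"name": "Tool-3", "sw_class": "D", "formal_review_complete": False},
    ]
    defects = [
        {"opened": date(2024, 1, 10), "resolved": date(2024, 1, 20)},
        {"opened": date(2024, 2, 1), "resolved": date(2024, 2, 5)},
    ]
    print(f"Class A/B formal review completion: {formal_review_completion_rate(systems):.0%}")
    print(f"Average days to resolve assurance defects: {avg_days_to_resolve(defects):.1f}")
```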
7.4.3.2 Foster Center Collaboration and Knowledge Sharing
- Encourage Centers to share assurance-related challenges, successes, and best practices during benchmarking reviews.
- Support the creation of a central repository for assurance-specific lessons learned, tools, and training materials.
7.4.4 Outcomes of Software Assurance Benchmarking
By participating in and supporting the benchmarking process, Software Assurance ensures:
- Continual Improvement: Opportunities to enhance assurance practices are identified and addressed at both the Center and Agency level.
- Compliance Management: Centers meet NPR software assurance requirements across all software classifications.
- Harmonization: Best practices in software assurance are shared and implemented uniformly across NASA Centers.
- Risk Reduction: Assurance gaps are addressed early, preventing risks from escalating into safety or mission-critical issues.
- Accountability: All Centers remain accountable for maintaining high-quality assurance capabilities that align with Agency goals.
7.4.5 Conclusion
Software Assurance is an integral component of benchmarking activities performed by the OCE. By actively engaging in the assessment of assurance processes, providing measurable inputs, addressing gaps, and fostering cross-Center collaboration, Software Assurance ensures that NASA maintains consistent and high-performing assurance practices that support mission success, safety, and compliance.
7.5 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:


