{alias:SWE-027}
\\

{tabsetup:1. The Requirement|2. Rationale|3. Guidance|4. Small Projects| 5. Resources|6. Lessons Learned}

{div3:id=tabs-1}

h1. 1. Requirements

Paragraph 2.3.1 The project shall ensure that when a COTS, GOTS, MOTS, reused, or open source software component is to be acquired or used, the following conditions are satisfied:

# The requirements that are to be met by the software component are identified.
# The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
# Proprietary, usage, ownership, warranty, licensing rights, and transfer rights have been addressed.
# Future support for the software product is planned.
# The software component is verified and validated to the same level of confidence as would be required of the developed software component.

h2. {color:#003366}{*}1.1 Notes{*}{color}\\

The project responsible for procuring off-the-shelf software is responsible for documenting, prior to procurement, a plan for verifying and validating the off-the-shelf software to the same level of confidence that would be needed for an equivalent class of software if obtained through a "development" process. The project ensures that the COTS, GOTS, MOTS, reused, and open source software components and data meet the applicable requirements in this NPR assigned to its software classification, as shown in Appendix D.

Note from NPR 7150.2: For these types of software components consider the following:
# Supplier agreement to deliver or escrow source code or third party maintenance agreement is in place.
# A risk mitigation plan to cover the following cases is available:
## Loss of supplier or third party support for the product.
## Loss of maintenance for the product (or product version).
## Loss of the product (e.g., license revoked, recall of product, etc.).
# An agreement has been obtained giving the project access to defects discovered by the community of users. When available, the project can consider joining a product users group to obtain this information.
# A plan to provide adequate support is in place; the plan needs to include maintenance planning and the cost of maintenance.
# Any documentation changes required to the software management, development, operations, or maintenance plans that are affected by the use or incorporation of COTS, GOTS, MOTS, reused, and legacy/heritage software.
# A review of any open source software licenses by the Center Counsel.

h2. 1.2 Applicability Across Classes\\ \\

{applicable:asc=1\|ansc=1\|bsc=1\|bnsc=1\|csc=1\|cnsc=1\|dsc=1\|dnsc=0\|esc=1\|ensc=0\|f=1\|g=p\|h=0}\\
{div3}{div3:id=tabs-2}

h1. 2. Rationale

This requirement exists in NPR 7150.2 because some software (e.g., COTS, GOTS) is purchased with no direct NASA or NASA contractor software engineering involvement in software development. Projects using this type of COTS/GOTS software must know that the acquisition and maintenance of the software is expected to meet NASA requirements as spelled out in this section of NPR 7150.2.

This requirement also exists in NPR 7150.2 because some software, whether purchased as COTS or GOTS, or developed/modified in house, may contain Open Source Software (OSS). If OSS exists within the project software, there may be implications for how the software can be used in the future, including internal/external releases or reuse of the software.

Finally, this requirement exists in NPR 7150.2 because some software may be heritage (or legacy) software, or may have been developed or purchased before current software engineering processes were in place.
{div3}
{div3:id=tabs-3}

h1. 3. Guidance


Guidance on OTS software is broken down into elements as a function of the type of OTS software. The following is an index of the guidance elements:
3.1 COTS/GOTS Software
3.2 MOTS Software
3.3 Legacy/Heritage Code
3.4 Open Source Software
3.4.1 What is Open Source Software?
3.4.2 Planning ahead for the inclusion of Open Source Software
3.4.3 Releasing NASA code containing Open Source Software
3.4.4 Identifying and using high pedigree Open Source Software in NASA code
3.4.5 Procurement of software by NASA -- Open Source Provisions
3.4.6 Embedded Open Source Software

h2. 3.1 COTS/GOTS Software

COTS (Commercial Off-the-Shelf) software and GOTS (Government Off-the-Shelf) software are unmodified, out-of-the-box software solutions that can range in size from a portion of a software project to an entire software project. COTS/GOTS software can include software tools (e.g., word processor or spreadsheet applications), simulations (e.g., aeronautical and rocket simulations), and modeling tools (e.g., dynamics/thermal/electrical modeling tools).
If you are planning to use COTS/GOTS products, be sure to complete the checklist tables under the Tools section. The purpose of these tables is to ensure that the table entries are considered in your software life cycle decisions from software acquisition through software maintenance.
If COTS/GOTS software is used for a portion of the software solution, the software requirements pertaining to that portion should be used in the testing, verification, and validation of the COTS/GOTS software. For example, if a MIL-STD-1553 serial communications link is the design solution for the project communications link requirements, and the COTS/GOTS software design solution is used along with the COTS/GOTS hardware design solution, then the project software requirements for the serial communications link should be used to test, verify, and validate the COTS/GOTS 1553 software. Other functionality that might be present in the COTS/GOTS MIL-STD-1553 software may not be covered by the project requirements. This other functionality should be either disabled or determined to be safe by analysis and testing.
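The wrap-and-test approach described above can be sketched in code. This is a minimal illustration, not a real 1553 API: {{Cots1553Stub}} stands in for a vendor driver, and the RT/subaddress assignments are example project requirements only (the 1-32 data-word range does reflect MIL-STD-1553 message limits). The wrapper exposes only the requirement-covered operation, so untested vendor functionality stays disabled.

```python
# Hypothetical sketch: requirements-based verification of a COTS component
# behind a project-owned wrapper. All names are illustrative.

class Cots1553Stub:
    """Stand-in for a vendor MIL-STD-1553 driver (invented API)."""
    def send(self, remote_terminal, subaddress, words):
        if not 0 <= remote_terminal <= 31:
            raise ValueError("RT address out of range")
        if not 1 <= len(words) <= 32:
            raise ValueError("1553 messages carry 1-32 data words")
        return {"rt": remote_terminal, "sa": subaddress, "words": list(words)}

class ProjectBusLink:
    """Project wrapper: exposes only requirement-covered operations."""
    def __init__(self, driver):
        self._driver = driver

    def send_telemetry(self, words):
        # Example project requirement: telemetry goes to RT 5, SA 1,
        # in messages of at most 32 data words.
        return self._driver.send(remote_terminal=5, subaddress=1, words=words)

def verify_link():
    """Test the COTS component against the project requirement only."""
    link = ProjectBusLink(Cots1553Stub())
    ok = link.send_telemetry([0x1234] * 32)      # maximum legal message
    assert ok["rt"] == 5 and len(ok["words"]) == 32
    try:
        link.send_telemetry([0] * 33)            # exceeds the 1553 limit
    except ValueError:
        return True                              # rejected, as required
    return False

print(verify_link())  # prints True
```

The point of the sketch is the boundary: project tests exercise the wrapper against project requirements, while vendor features outside those requirements are simply not reachable.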

h2. 3.2 MOTS Software

As defined in Appendix A of NPR 7150.2, A.18:
Modified Off-The-Shelf (MOTS) Software. When COTS, legacy/heritage software is reused, or heritage software is changed, the product is considered "modified." The changes can include all or part of the software products and may involve additions, deletions, and specific alterations. An argument can be made that any alterations to the code and/or design of an off-the-shelf software component constitutes "modification," but the common usage allows for some percentage of change before the off-the-shelf software is declared to be MOTS software. This may include the changes to the application shell and/or glueware to add or protect against certain features and not to the off-the-shelf software system code directly. See off-the-shelf definition in Appendix A of NPR 7150.2.
In cases where legacy/heritage code is modified, MOTS is considered to be an efficient method to produce project software, especially if the legacy/heritage code is being used in the same application area of NASA. For example, Expendable Launch Vehicle simulations have been successfully modified to accommodate solid rocket boosters, payload release requirements, or other such changes. Further, if the "master" code has been designed with reuse in mind, such code becomes an efficient and effective method of producing quality code for succeeding projects.
An IV&V Facility report, "Software Reuse Study Report," April 29, 2005, examines in detail changes made to reused software. The conclusions are positive but caution against underestimating the extent and costs of reusing software.
The DoD has had extensive experience in COTS/GOTS and MOTS. A Lessons Learned item, Commercial Item Acquisition: Considerations and Lessons Learned, specifically includes lessons learned from MOTS. Concerns in these lessons learned included commercial software vendors attempting to modify existing commercial products, limiting the changes to "minor" modifications, and underestimating the extent and schedule impacts of testing of modified code.
Extreme caution should be exercised when attempting to purchase or modify COTS or GOTS code which was written for another application realm or for which key documentation is missing (e.g. requirements, architecture, design, tests).

h2. 3.3 Legacy/Heritage Code

"Legacy" and "Heritage" code will be used interchangeably here to identify code which may have been produced before modern software development processes were put into place. Reused code may in many cases be considered legacy/heritage code. Legacy code can range in size from small units of code (e.g. subroutines) to large, complete software systems (e.g. Atlas-Centaur Expendable Launch Vehicle Simulation).
It may be desirable to maintain legacy code largely intact due to one or more of the following factors:
* The code may have a history of successful application over many runs
* No new software errors have been found in the code in some time and it has been reliable through many years of use
* The cost of upgrading the legacy code (e.g. a new software development) may be uneconomical or unaffordable in terms of time or funding
* Upgrading the legacy code could add new software errors
* Software personnel are familiar with the legacy code
* Safety reviews have been conducted on the legacy code under similar applications

On the other hand, it may be desirable to replace legacy code due to one or more of the following factors:
* No active civil servants or contractors are familiar with the code or its operation
* One or more of the following documents are missing: architecture, requirements, traceability, design, source code, unit through integration test cases, validation results, user operational manuals, non-conformances, waivers, coding standards, or other key documents.
* The conditions for installing or using the software, or for reconstituting the software development environment, are lacking
* No safety review has been done on the new code in its old or new operational environment
* The legacy code may contain Open Source Software with questionable license terms or rights
* The source code language compilers may be years out of date or even inaccessible
* Emulators may not be available
* Maintenance responsibility is unknown
* Legacy code may operate on out-of-date or unavailable operating systems
* Unknown IP, licensing, exportability constraints, if any.

Determining which path to follow (keep or replace legacy code) is largely a cost-risk analysis. Further guidance on this facet of legacy code will be provided in future iterations of this Electronic Handbook.
If the decision is made to maintain the use of the legacy code, it is recommended that incremental improvements be made as the code is used. Don't wait until all the experts retire\! Specifically,
* Document the requirements, if not already available
* Create verification and validation documents based on testing against any vendor documentation, including user's manuals
* Start configuration management on the reused code
* Document the software architecture and design, if not already available
* Document the test cases
* Document software debugging and error reporting
* Have Software Assurance and Safety personnel review the legacy code and documentation
* Gather and store all documentation, test results, maintenance history, and other such records associated with the legacy code if it is anticipated the code will be reused

One author, Michael C. Feathers,^12^ has defined legacy code as code that has no tests, and he proceeds to advise his readers on how to work with legacy code. See the reference link.
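Feathers's "characterization testing" idea can be sketched briefly: run the legacy code on representative inputs, record what it actually does, and pin that observed behavior down in tests before making any changes. A minimal sketch follows; the {{legacy_checksum}} function is invented purely for illustration.

```python
# Minimal sketch of a characterization test for legacy code
# (the legacy function here is hypothetical).

def legacy_checksum(data):
    """Imagine this is undocumented legacy code we dare not rewrite yet."""
    total = 0
    for i, b in enumerate(data):
        total = (total + b * (i + 1)) % 65536
    return total

def test_characterization():
    # Step 1: run the legacy code on representative inputs and record
    # the observed outputs. Step 2: assert those observations, so any
    # future refactoring that changes behavior fails loudly.
    observed = {
        (): 0,
        (1, 2, 3): 14,       # 1*1 + 2*2 + 3*3, recorded then hand-checked
        (255, 255): 765,     # 255*1 + 255*2
    }
    for data, expected in observed.items():
        assert legacy_checksum(list(data)) == expected

test_characterization()
print("characterization tests pass")
```

The tests assert what the code does today, not what any specification says it should do; once they are in place, the legacy code can be refactored or modified with a safety net.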

h2. 3.4 Open Source Software

Open Source Software is considered a form of Off-The-Shelf software. Even if most of the software on a NASA project is developed in-house, it is common to find embedded Open Source Software within the code. It is often more efficient for a software engineer to use widely available and well tested code developed in the software community for common functions than to "reinvent the wheel".
Open Source Software is specifically mentioned in the SWE-027 requirement in NPR 7150.2.

h3. 3.4.1 What is Open Source Software?

* In general usage:
"Open Source Software (OSS)" is not to be confused with other forms of inexpensive or "free" software. The intention of SWE-027 is to cover any software used in the software system that was not developed in-house. More generalized information on OSS is available from Wikipedia at [http://en.wikipedia.org/wiki/Open_source_software]. Verify information from Wikipedia as necessary.
* NASA specific definition:
In NPR 2210.1C, "Release of NASA Software," under Appendix A, Definitions:
A.1.1.7 "Open Source Software" means software where the recipient is free to use the software for any purpose, to make copies of the software and to distribute the copies without payment of royalties, to modify the software and to distribute the modified software without payment of royalties, to access and use the source code of the software, and to combine the software with other software in accordance with Open Source licenses/agreements. Open Source Software is a subcategory of Publicly Releasable software.

h3. 3.4.2 Planning ahead for the inclusion of Open Source Software

Whether open source software is acquired or developed by NASA or a NASA contractor, a usage policy should be established up front to avoid any possible legal issues that may arise. This policy may be developed in conjunction with advice from the Software Release Agent (even if you do not plan to release the software) and/or your Center's IP Legal Counsel.

h3. 3.4.3 Releasing NASA code containing Open Source Software

When software is released to another NASA Center, NASA project, or external organization, it is important to inform the receiving party of any licenses and restrictions under which the software is released. It is important to note that additional software required to "run" the released software is not part of the software release. For example, a web application that runs under the Apache Web Server does not need to include the Apache Public License as part of the relevant licenses.
Software releases are also performed when software is submitted for Space Act Awards such as the NASA Software of the Year Award. For more information on software releases one should contact the Software Release Authority at the NASA Center at which the software is being or was developed.
There are requirements and processes associated with software releases. See NPR 2210.1C, "Release of NASA Software", located at [http://nodis3.gsfc.nasa.gov/displayDir.cfm?t=NPR&c=2210&s=1A]
 
A cautionary item from NPR 2210.1C is worth repeating here:
(NPR 2210.1C paragraph 3.2.2.2) If a proposed release of Open Source Software includes the release of external Open Source Software, care shall be taken to ensure that the pertinent license for such external Open Source Software is acceptable. For example, at least one widely used external Open Source license does not currently include an indemnification provision and further requires that all software distributed with that external Open Source Software be distributed under the same license terms.
Therefore, except for an Approved for Interagency Release or Approved for NASA Release, both the Center Office or Project that is responsible for the software and Center Patent or IP Counsel shall review and approve any proposed distribution of Open Source Software that includes external Open Source Software.
Caution: Open Source Software may itself contain other Open Source Software\!

h3. 3.4.4 Identifying and using high pedigree Open Source Software in NASA code

Going back to NPR 7150.2, requirement 2.3.1.e states: "The software component is verified and validated to the same level of confidence as would be required of the developed software component." To achieve this level of confidence, it is recommended that software developers use only OSS (and COTS, GOTS, MOTS, and legacy software as well) that has a high pedigree, i.e., a gold standard. Such OSS will typically have the following characteristics:
* There should be a strong software development model, including defined processes for:
** Bug reporting: identification, triage, and correction
** Code modification: review and approval to commit fixes and features to the source code
** Testing: thorough descriptions of test cases, test runs, and test configurations
** Code review: as part of the code modification process
** Documentation: detailed documentation should be kept up to date
** Discussion lists for questions: this may include a wiki, mailing list, or live chat site
** Leadership: ensures that the community works in a coordinated fashion to define target functionality for each release and overall product focus
* Usually, a high quality, established open source project will have a large number of developers. Usage of the project's product(s) will occur across multiple industries, both nationally and internationally.
* Metrics (e.g., number of developers) can be used to evaluate the quality of an Open Source project via sites that track a large proportion of Open Source Software projects (e.g., [http://www.ohloh.net/]). Norms, such as what constitutes a large number of developers, change as the number of Open Source projects and developers grow.
* The project should provide a listing of all Open Source Software included, embedded, or modified within the piece of Open Source Software
Open Source Software may provide more opportunity to perform verification and validation of the software to the same level of confidence as if it were obtained through a "development" process. Often an OSS project will provide online access to detailed development and test artifacts (as described above), which may be difficult to obtain from COTS vendors.
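The listing of all embedded Open Source Software mentioned above has to start with an inventory. The following is a minimal sketch of a first-pass scan for common license markers in a source tree; the marker list is an illustrative assumption, not exhaustive, and any matches are a starting point for IP Counsel review, not a legal determination.

```python
# Hypothetical first-pass scan for open source license markers in a
# source tree. The marker patterns below are illustrative, not complete.
import os
import re

LICENSE_MARKERS = {
    "GPL": re.compile(r"GNU General Public License", re.I),
    "LGPL": re.compile(r"GNU Lesser General Public License", re.I),
    "Apache-2.0": re.compile(r"Apache License,? Version 2\.0", re.I),
    "MIT": re.compile(r"Permission is hereby granted, free of charge", re.I),
    "BSD": re.compile(r"Redistributions of source code must retain", re.I),
}

def scan_tree(root):
    """Return {relative_path: [license names matched]} for files under root."""
    findings = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    head = f.read(4096)  # license headers sit near the top
            except OSError:
                continue
            hits = [lic for lic, rx in LICENSE_MARKERS.items()
                    if rx.search(head)]
            if hits:
                findings[os.path.relpath(path, root)] = hits
    return findings
```

An output such as `{"lib/parser.c": ["GPL"]}` flags a file for closer inspection; dedicated scanners and the Open Source project's own license listing should supplement a simple text scan like this.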

h3. 3.4.5 Procurement of software by NASA -- Open Source Provisions

A cautionary item from NPR 2210.1C, "Release of NASA Software", is worth reiterating here:
[NPR 2210.1C paragraph 1.8.3] Open Source Software Development, as defined in paragraph A.1.1.8 [of NPR 2210.1C], may be used as part of a NASA project only if the Office or Project that has responsibility for acquisition or development of the software supports incorporation of external Open Source Software into software. In addition, the Office or Project responsible for the software acquisition or development shall:
* Determine the ramifications of incorporating such external Open Source Software during the acquisition planning process specified in NASA FAR Supplement Subpart 1807.1, Acquisition Plans; and
* Consult with the Center Patent or IP Counsel early in the planning process (see 2.4.2.1) as the license under which the Open Source Software was acquired may negatively impact NASA's intended use.
 

h3. 3.4.6 Embedded Open Source Software

Embedded software applications written by/for NASA are commonly used by NASA for engineering software solutions. Embedded software is software specific to a particular application as opposed to general purpose software running on a desktop. Embedded software usually runs on custom computer hardware ("avionics"), often on a single chip.

Care must be taken when using vendor-supplied board support packages (BSPs) and hardware-specific software (drivers), which are typically supplied with off-the-shelf avionics systems. BSPs and drivers act as the software layer between the avionics hardware and the embedded software applications written by/for NASA. Most CPU boards have BSPs provided by the board manufacturer, or by third parties working with the board manufacturer. Driver software is provided for serial ports, USB ports, interrupt controllers, modems, printers, and many other hardware devices.

BSPs and drivers are hardware dependent and are often developed by third parties using hardware/software development tools that may not be accessible years later. Risk mitigation should therefore cover hardware-specific software such as BSPs and drivers.

Many BSPs and drivers are provided by board manufacturers as binary code only, which could be an issue if the supplier is no longer available and BSP/driver errors are found. It is recommended that a project using BSPs/drivers maintain a configuration-managed version of any BSPs, with release dates and notes. Consult with the avionics (hardware) engineers on the project to see what actions, if any, may be taken to configuration manage the BSPs/drivers.
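The configuration-management recommendation above can be made concrete with a hash manifest. A minimal sketch follows, assuming a directory of vendor-delivered BSP/driver binaries; the paths and manifest format are invented for illustration.

```python
# Hypothetical sketch: record a SHA-256 manifest of vendor-supplied
# BSP/driver binaries so a project can later tell exactly which build
# it used, even if the vendor is no longer available.
import hashlib
import json
import os
import time

def build_manifest(bsp_dir, notes=""):
    """Hash every file under bsp_dir and return a manifest dict."""
    entries = {}
    for dirpath, _dirs, files in os.walk(bsp_dir):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                entries[os.path.relpath(path, bsp_dir)] = \
                    hashlib.sha256(f.read()).hexdigest()
    return {
        "recorded": time.strftime("%Y-%m-%d"),
        "notes": notes,          # e.g., vendor release-notes reference
        "files": entries,
    }

def verify_manifest(bsp_dir, manifest):
    """Return the files whose current hash differs from the manifest."""
    current = build_manifest(bsp_dir)["files"]
    return [p for p, h in manifest["files"].items() if current.get(p) != h]

def save_manifest(manifest, path):
    """Store the manifest alongside other configuration-managed items."""
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
```

The saved manifest, kept under configuration management with the vendor's release notes, lets the project detect silent changes to binary deliverables and tie a flown load to a specific vendor release.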

Consideration should also be given to how BSP/driver software updates are to be handled, if and when they are made available, and to how the project will learn that updates are available.

Vendor reports and user forums should be monitored from the time hardware and associated software are purchased through a reasonable time after deployment. Developers should monitor suppliers or user forums for bugs, workarounds, security changes, and other modifications to software that, if unknown, could derail a NASA project. Consider the following snippet from a user forum:

{panel}"Manufacturer Pt. No." motherboard embedded complex electronics contains malware
Published: 2010-xx-xx

A "Manufacturer" support forum identifies "manufacturer's product" motherboards which contain harmful code. The embedded complex electronics for server management on some motherboards may contain malicious code. There is no impact on either new servers or non-Windows based servers. No further information is available regarding the malware, malware mitigation, the serial numbers of motherboards affected, nor the source. {panel}



{div3}
{div3:id=tabs-4}

h1. 4. Small Projects


This requirement applies to all projects regardless of size.
{div3}




{div3:id=tabs-5}

h1. 5. Resources

# [COTS Foundations: Essential Background and Terminology|http://www.ippa.ws/IPPC2/PROCEEDINGS/Article_5_Baron.pdf]
By Sally J. F. Baron, International Public Procurement Conference Proceedings, 21-23 September 2006. This paper defines COTS by giving a comprehensive history, explaining essential elements and defining terms and acronyms. It focuses on the recent history since the landmark "Perry Memo" of 1994, to current progress. Important issues such as intellectual property are also presented. The purpose of this paper is to provide a background as well as a working reference for academics and government procurement officials.
# [NASA Study on Flight Software Complexity|http://nen.nasa.gov/files/FSWC_Final_Report.pdf]
March 2009. Commissioned by the NASA Office of Chief Engineer, Technical Excellence Program (Adam West, Program Manager), and edited by Daniel L. Dvorak, Systems and Software Division, Jet Propulsion Laboratory, this study reviews the mixed blessings of COTS, identifying and minimizing incidental FSW complexity regarding COTS, COTS integration risks, and COTS lifecycle cost risks.
# [The Commandments of COTS: Still in Search of the Promised Land, SEI, Carnegie Mellon|http://www.stsc.hill.af.mil/crosstalk/1997/05/commandments.asp]
By David J. Carney and Patricia A. Oberndorf, Software Engineering Institute, Carnegie Mellon University, May 1997. This article examines current government trends toward using commercial-off-the-shelf (COTS) products. It discusses both the positive and the negative effects of these trends and suggests some high-level issues for policy makers to consider.
# [Decision Point: Will Using a COTS Component Help or Hinder Your DO-178B Certification Effort?|http://www.stsc.hill.af.mil/crosstalk/2003/11/0311budden.html]
November 2003, CrossTalk - Journal of Defense Software Engineering, Timothy J. Budden, {color:#330066}AVISTA.{color} Avionics software developers today are continually challenged to cut costs and reduce time to market, without compromising the safety of their application. Many project leaders look to commercial off-the-shelf (COTS) software components as a possible means to reduce software development costs and development time. The requirements to "prove" software quality under Defense Order (DO)-178B may be difficult, but the opportunity demands consideration of COTS module integration where possible. Understand what is certifiable, how to get the right information from your vendor, and the importance of DO-178B traceability.
# [Added Sources of Costs in Maintaining COTS-Intensive Systems|http://www.stsc.hill.af.mil/crosstalk/2007/06/0706clarkclark.html]
June 2007, CrossTalk -- Journal of Defense Software Engineering, Drs. Brad and Betsy Clark, Software Metrics, Inc. Ten years ago, work was begun at the Center for Systems and Software Engineering at the University of Southern California to develop a cost model for commercial off-the-shelf (COTS)-based software systems. A series of interviews were conducted to collect data to calibrate this model. A total of 25 project managers were interviewed; for eight of these projects, data was collected during the original system development and maintenance phases. A common sentiment heard from the people maintaining these systems was that they turned out to be more expensive to maintain than originally envisioned and, in fact, were more costly than a comparable custom-built system. At the same time, several people expressed frustration about the difficulty of communicating to upper management the reasons why COTS-based systems were so expensive to maintain. Anecdotal evidence from these interviews is used to discuss the added sources of maintenance cost. Three different approaches or strategies for system maintenance were observed and are summarized in this article.
# [Sick of COTS Acronyms?|http://www.pennwellblogs.com/mae/2008/01/sick-of-cots-acronyms-yet.html]
January 2008. John McHale, Executive Editor of Military & Aerospace Electronics magazine. This humorous blog gives a bit of history on the COTS acronym, among others.
# [COTS: Commercial Off-The-Shelf or Custom Off-The-Shelf?|http://www.stsc.hill.af.mil/crosstalk/2007/06/0706BackTalk.html]
June 2007, Wiley F. Livingston, Jr., P.E., Software Technology Support Center (STSC), Hill AFB. A refreshing look at the cost and complexity of customizing an otherwise OTS product.
# [Open-source vs. proprietary software bugs: Which get squashed fastest?|http://news.cnet.com/8301-13505_3-9786034-16.html]
This article from CNET News, September 26, 2007, looks at which type of software is more robust: open source or proprietary.
# [COTS Software Integration Cost Modeling Study|http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.143.1227&rep=rep1&type=pdf]
June 1997. From the University of Southern California Center for Software Engineering, performed for the USAF Electronic Systems Center, this study represents a first effort towards the goal of developing a comprehensive COTS integration cost modeling tool.
# [Researchers: Bugs in Open Source Software are waning|http://www.betanews.com/article/Researchers-Bugs-in-open-source-software-are-waning/1211324896] May 2008. By Jacqueline Emigh, Betanews. Developers of the Linux OS, Apache Web server, and about 250 other different open source projects have removed more than 8,500 individual bugs from their code over the past two years, according to a study released this week.
# [Open Source Licenses|http://www.opensource.org/licenses/category]
Open Source Initiative, September, 2010. A list and some guidance for Open Source Licenses {anchor:Legacy1}
# {color:#0000ff}{+}Working Effectively with Legacy Code{+}{color}, by Michael C. Feathers, ISBN 0-13-117705-2
Michael Feathers starts with legacy code defined as code without tests. He introduces "Characterization testing" as an important concept and an essential tool for software developers dealing with legacy code. This is a highly recommended book if you are a software developer or manager working with legacy code. Examples are given in C, C++, C#, Ruby and Java.
# [Change-out: A system of systems approach to COTS management|http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04127316]
IEEE Xplore, Sixth International IEEE Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems (ICCBSS'07), 0-7695-2785-X/07, by Sally J. F. Baron, Ph.D, Management Consulting. This paper examines such complexity, provides a visual framework for a system of systems and the relevance and importance of change-out in general. From the Sixth International IEEE Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems (ICCBSS'07) {anchor:_Lessons_Learned_with}
# [AIAA Guide for Managing the Use of Commercial off the Shelf (COTS) Software Components for Mission-Critical Systems|https://netforum.aiaa.org/eweb/DynamicPage.aspx?Action=Add&ObjectKeyFrom=1A83491A-9853-4C87-86A4-F7D95601C2E2&WebCode=ProdDetailAdd&DoNotSave=yes&ParentObject=CentralizedOrderEntry&ParentDataObject=Invoice%20Detail&ivd_formkey=69202792-63d7-4ba2-bf4e-a0da41270555&ivd_cst_key=00000000-0000-0000-0000-000000000000&ivd_prc_prd_key=4FBFBBC1-5790-436A-B966-4C6CB206B320](G-118-2006e) (ONLINE) The purpose of this Guide is to assist development and maintenance projects (teams and individuals) that have to address the use of, or consideration of, COTS products within large, complex systems, including but not limited to mission critical systems. This assistance is provided by capturing a set of information about COTS products (benefits, risks, recommended practices, lifecycle activity impacts) and mission critical systems (variety of MCS, special needs for MCS, differences between MCS and other types of systems) and then providing some linkage between these topics so that various types of stakeholders can find useful information. The document should be of value to both management and technical individuals/teams. It should also be of value to teams that are dealing with non-MCS, in that the scope is not limited to only MCS.



{toolstable}



h2. 5.2 Checklist Tools

h3. *Requirement*: 2.3.1 (SWE-027) checklist

| Steps | Y/N |
| (a) Have requirements to be met by the software component been identified? | |
| (b) Does the software component include documentation to fulfill its intended purpose (e.g., usage instructions)? | |
| (c1) Have the proprietary rights been addressed? | |
| (c2) Have the usage rights been addressed? | |
| (c3) Have the ownership rights been addressed? | |
| (c4) Has the warranty been addressed? | |
| (c5) Have the licensing rights been addressed? | |
| (c6) Have the transfer rights been addressed? | |
| (d) Has future support for the software product been planned? | |
| (e) Has the software component been verified and validated to the same level of confidence as would be required of the developed software component? | |

h3. *Guidelines:* Risk Mitigation Plan Topic Checklist

| | Y/N |
| From the [Lessons Learned|#_Lessons_Learned_with] section of this document, underestimation of life cycle costs and risks of OTS software can be a show stopper. Has a thorough study been done to identify and mitigate lifecycle costs and risks? | |
| If the lifecycle costs and risks of the OTS software have been identified in the study above, is there a mitigation plan if these costs and risks are exceeded? | |
| Is there a supplier agreement to deliver or escrow source code, or is a third party maintenance agreement in place? | |
| Is there a supplier agreement to provide developer training, development environment test tools and artifacts, development environment documentation for development hardware and software in part to mitigate the loss of the supplier? | |
| Is there a supplier agreement to transfer any maintenance agreement to a third party? | |
| Are there any trade studies or market analysis completed that identify equivalent or alternate products to mitigate the loss of the supplier's product? | |
| Is there an analysis that addresses the risk of loss of supplier or supplier's product such as stability of supplier, supplier's product recall history, supplier's business and technical reputation? | |
| Has a plan been completed that describes what to do if supplier or third party support for the product is lost? | |
| Has a plan been completed that describes what to do if there is a loss of maintenance for the product (or product version)? | |
| Has a plan been completed that describes what to do if there is a loss of the product (e.g., license revoked, recall of product)? | |
| If hardware changes, is software affected? If so, has a plan been completed that describes what to do with software if hardware changes? | |
| If software changes, is hardware affected? If so, has a plan been completed that describes what to do with hardware if software changes? | |
| Has there been a review of the pertinent items from the [Lessons Learned|#_Lessons_Learned_with] section of this document? | |

h3. *Guidelines:* Suitability Analysis for OTS, Legacy/Heritage Software Checklist

It is recommended that the following table items be addressed as early as possible, perhaps during trade studies. Some of this table is based on materials presented at a NASA software safety training session by Martha Wetherholt, OSMA. Other parts of the table are based on comments from the [Lessons Learned|#_Lessons_Learned_with] section of this document.
\\
| | Y/N |
| Is the pedigree of the software known, including the development environment and software practices and processes used for development and testing? | |
| Is adequate documentation available to determine suitability, such as functionality (requirements), interfaces, design, architecture, tests, verification, validation, and user manuals? | |
| If legacy/heritage code, are the old and new operational environments known and understood? | |
| If legacy/heritage code, are the old and new hardware differences, such as memory size, memory speed, CPU speed, and hardware interfaces, understood? | |
| If legacy/heritage code, are task scheduling functions affected by new hardware? | |
| Are the external data interfaces well known and understood? {color:#ff0000}guideline{color} | |
| Is source code available to better understand the application? | |
| Is there an updated record of defects (bugs) and fixes available? | |
| Are the internal software interfaces available? | |
| Is "glueware" required to fill gaps in functionality? | |
| Is there extra functionality, and, if so, is there the ability to turn the extra functionality off (e.g., recompile options, disable flags, or wrapper software)? | |
| If extra functionality is present and cannot be disabled, has it been determined by analysis and testing that the extra functionality is safe? | |
| Are there potential conflicts with the system into which the OTS or legacy/heritage code is being integrated, such as memory overwrites or monopolization of system resources (e.g., processor time or stack space)? | |
| Have hazard analyses (fault trees, FMEAs, etc.) been performed on the software and made available? | |
| Have analyses, tests, and/or verifications been performed, documented, and made available? | |
| Does the software meet the NASA Software Safety Standard? {color:#ff0000}requirement{color} | |
| Has the software been adequately tested and analyzed to an acceptable level of risk, will it remain safe in the context of its planned use, and has the resulting documentation been made available? | |
| Has a prioritized list of desired COTS features been developed? | |
| Have discussions been held with local experts who have experience in similar areas, and have frequent peer and design reviews been performed? | |
| Have demonstration versions of the COTS products been obtained as well as customer references from the vendor? | |
| Has a product been selected that is appropriately sized for your application and closely aligned with your project's requirements, from a vendor whose size will permit a working relationship? | |
| Are vendor tutorials, documentation, and vendor contacts being used during the COTS evaluation period? | |
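Several of the checklist items above (a prioritized list of desired COTS features, a product aligned with project requirements, comparison of candidate products) amount to a weighted trade study. As an illustrative sketch only — the feature names, weights, and scores below are hypothetical examples, not values taken from this guidance — candidate COTS products can be ranked like this:

```python
# Weighted trade-study scoring for COTS candidates (illustrative sketch).
# Feature names, weights, and scores are hypothetical placeholders.

def score_candidate(weights, scores):
    """Return the weighted score for one candidate.

    weights: dict mapping feature -> importance weight
    scores:  dict mapping feature -> how well the candidate meets it (0-10)
    """
    return sum(weights[f] * scores.get(f, 0) for f in weights)

# Prioritized feature list, expressed as weights (higher = more important).
weights = {"meets_requirements": 5, "documentation": 3,
           "vendor_support": 4, "source_available": 2}

# Evaluation scores for each candidate product (from demos, references, etc.).
candidates = {
    "Product A": {"meets_requirements": 8, "documentation": 6,
                  "vendor_support": 7, "source_available": 0},
    "Product B": {"meets_requirements": 7, "documentation": 9,
                  "vendor_support": 5, "source_available": 10},
}

# Rank candidates from highest to lowest weighted score.
ranked = sorted(candidates,
                key=lambda c: score_candidate(weights, candidates[c]),
                reverse=True)
print(ranked)  # → ['Product B', 'Product A']
```

A spreadsheet serves the same purpose; the point is that the weighting is agreed on before scoring, so the selection is traceable back to the prioritized requirements.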



{div3}
{div3:id=tabs-6}







h1. 6. Lessons Learned

The following lessons learned are taken primarily from the NASA Engineering Network Lessons Learned Database.
# The following information comes from the [NASA Study on Flight Software Complexity |http://nen.nasa.gov/files/FSWC_Final_Report.pdf] listed in the reference section of this document.
 
Summary: In 2007, a relatively new organization in the DoD (the Software Engineering and System Assurance Deputy Directorate) reported its findings on software issues based on approximately 40 program reviews in the preceding 2½ years (Baldwin 2007). It found several software systemic issues that were significant contributors to poor program execution. Among the seven listed was the following on COTS:
#* Immature architectures, COTS integration, interoperability.
\\
\\
Later, in partnership with the NDIA, they identified the top seven software issues, drawn from an acquisition and oversight perspective. Among the seven listed was the following on COTS:
\\
#* Inadequate attention is given to total life cycle issues for COTS/NDI impacts on life cycle cost and risk.
 
In partnership with the NDIA, they made seven corresponding top software recommendations. Among the seven listed was the following on COTS:
#* Improve and expand guidelines for addressing total life cycle COTS/NDI issues.
 
# The following information comes from the NASA Lessons Learned Repository at the NeN portal.
 
Summary: The Shuttle Program selected off-the-shelf GPS and EGI units that met the requirements of the original customers. It was assumed that off-the-shelf units with proven design and performance would reduce acquisition costs and require minimal adaptation and minimal testing. However, the time, budget and resources needed to test and resolve firmware issues exceeded initial projections.
\\
Details: see the NASA Lessons Learned Repository at the NeN portal, Lesson 1370: [Link|http://nen.nasa.gov/portal/site/llis/index.jsp?epi-content=LLKN_DOCUMENT_VIEWER&llknDocUrl=http%3A%2F%2Fnen.nasa.gov%2Fllis_content%2F1370.html&llknDocTitle=Lessons%20Learned%20Entry:%201370|http://nen.nasa.gov/portal/site/llis/index.jsp?epi-content=LLKN_DOCUMENT_VIEWER&llknDocUrl=http%3A%2F%2Fnen.nasa.gov%2Fllis_content%2F1370.html&llknDocTitle=Lessons%20Learned%20Entry:%201370]
 
# The following information comes from the NASA Lessons Learned Repository at the NeN portal. The Lessons Learned Study Final Report for the Exploration Systems Mission Directorate (Langley Research Center, August 20, 2004) had the following comments on COTS:
 
Summary: There has been an increasing interest in utilizing commercially available hardware and software as portions of space flight systems and their supporting infrastructure. Experience has shown that this is a very satisfactory approach for some items, and a major mistake for others. In general, COTS (products) should not be used as part of any critical systems because of the generally lower level of engineering and product assurance used in their manufacture and test. In those situations where COTS (software) has been applied to flight systems, such as the laptop computers utilized as control interfaces on International Space Station (ISS), the cost of modifying and testing the hardware/software to meet flight requirements has far exceeded expectations, potentially defeating the reason for selecting COTS products in the first place. In other cases, such as the Checkout Launch Control System (CLCS) project at JSC, the cost of maintaining the commercial software had not been adequately analyzed and drove the project's recurring costs outside the acceptable range.
\\
Recommendation: Ensure that candidate COTS products are thoroughly analyzed for technical deficiencies and life cycle cost implications before levying them on the program.
\\
#* COTS systems have potential to reduce system costs, but only if all of their characteristics are considered beforehand and included in the planned application. (Standards)
 
#* COTS systems that look good on paper may not scale well to NASA needs for legitimate reasons. These include sustaining engineering/update cycle/recertification costs, scaling effects, and dependence on third-party services and products. Assure that life-cycle costs have been considered correctly. (HQ - CLCS)
Details: see the NASA Lessons Learned Repository at the NeN portal:
[http://nen.nasa.gov/llis_lib/doc/1016526main_LL_Task_Final_Report.doc|http://nen.nasa.gov/llis_lib/doc/1016526main_LL_Task_Final_Report.doc]
 
# The following information comes from the NASA Lessons Learned Repository at the NeN portal.
\\
Summary: The purpose of the Standard Autonomous File Server (SAFS) is to provide automated management of large data files without interfering with the assets involved in the acquisition of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated level of fail-over processing to enhance reliability. The successful integration of COTS products into the SAFS system has been key to its becoming accepted as a NASA standard resource for file distribution, and leading to its nomination for NASA's Software of the Year Award in 1999.
 
Lessons learned:
Match COTS tools to project requirements. Deciding to use a COTS product as the basis of a system software design is potentially risky, but the potential benefits include quicker delivery, lower cost, and greater reliability in the final product. The following lessons were learned in the definition phase of the SAFS/CSAFS development.
#* Use COTS products and re-use previously developed internal products.
#* Create a prioritized list of desired COTS features.
#* Talk with local experts having experience in similar areas.
#* Conduct frequent peer and design reviews.
#* Obtain demonstration \[evaluation\] versions of COTS products.
#* Obtain customer references from vendors.
#* Select a product appropriately sized for your application.
#* Choose a product closely aligned with your project's requirements.
#* Select a vendor whose size will permit a working relationship.
#* Use vendor tutorials, documentation, and vendor contacts during the COTS evaluation period.
 
Test and prototype COTS products in the lab: The prototyping and test phase of the COTS evaluation allows problems to be identified as the system design matures. These problems can be mitigated (often with the help and cooperation of the COTS vendor) well before the field-testing phase, at which time it may be too costly or impossible to retrofit a solution. The following lessons were learned in the prototyping and test phase of the SAFS/CSAFS development:
 
#* Prototype your system's hardware and software in a lab setting as similar to the field environment as possible
#** simulate how the product will work on various customer platforms
#** model the field operations
#** develop in stages with ongoing integration and testing
 
#* Pass pertinent information on to your customers
#* Accommodate your customers, where possible, by building in alternative options
#* Don't approve all requests for additional options by customers or new projects that come on line
#* Select the best COTS components for product performance even if they are from multiple vendors
#* Consider the expansion capability of any COTS product
#* Determine if the vendor's support is adequate for your requirements
 
Install, operate, and maintain the COTS field and lab components. The following lessons were learned in the installation and operation phase of the SAFS/CSAFS development:
 
#* Personally perform on-site installations whenever possible
#* Have support/maintenance contracts for hardware and software through development, deployment, and first year of operation
#* Create visual representations of system interactions where possible.
#* Obtain feedback from end users
#* Maintain the prototype system after deployment
#* Select COTS products with the ability to do internal logging
 
Details: see the NASA Lessons Learned Repository at the NeN portal, Lesson 1346:
[Link|http://nen.nasa.gov/portal/site/llis/index.jsp?epi-content=LLKN_DOCUMENT_VIEWER&llknDocUrl=http%3A%2F%2Fnen.nasa.gov%2Fllis_content%2F1346.html&llknDocTitle=Lessons%20Learned%20Entry:%201346]
 
# The following information comes from the NASA Lessons Learned Repository at the NeN portal
Summary: Shortly after the commencement of science activities on Mars, the Mars Exploration Rover (MER) lost the ability to execute any task that requested memory from the flight computer. The cause was incorrect configuration parameters in two operating system software modules that control the storage of files in system memory and flash memory. Seven recommendations cover enforcing design guidelines for COTS software, verifying assumptions about software behavior, maintaining a list of lower priority action items, testing flight software internal functions, creating a comprehensive suite of tests and automated analysis tools, providing downlinked data on system resources, and avoiding the problematic file system and complex directory structure.
\\
Recommendations:
#* Enforce the project-specific design guidelines for COTS software, as well as for NASA-developed software. Assure that the flight software development team reviews the basic logic and functions of commercial off-the-shelf (COTS) software, with briefings and participation by the vendor.
#* Verify assumptions regarding the expected behavior of software modules. Do not use a module without detailed peer review, and assure that all design and test issues are addressed.
#* Where the software development schedule forestalls completion of lower priority action items, maintain a list of incomplete items that require resolution before final configuration of the flight software.
#* Place high priority on completing tests to verify the execution of flight software internal functions.
#* Early in the software development process, create a comprehensive suite of tests and automated analysis tools. Ensure that reporting flight computer related resource usage is included.
#* Ensure that the flight software downlinks data on system resources (such as the free system memory) so that the actual and expected behavior of the system can be compared.
#* For future missions, implement a more robust version of the dosFsLib module, and/or use a different type of file system and a less complex directory structure.
 
\\
Details: see the NASA Lessons Learned Repository at the NeN portal, Lesson 1483:
[Link|http://nen.nasa.gov/portal/site/llis/index.jsp?epi-content=LLKN_DOCUMENT_VIEWER&llknDocUrl=http%3A%2F%2Fnen.nasa.gov%2Fllis_content%2F1483.html&llknDocTitle=Lessons%20Learned%20Entry:%201483]
# The following information comes from the NEN Lessons Learned Repository.
 
Summary: International Space Station Lessons Learned as Applied to Exploration (KSC, July 22, 2009) had the following comments on COTS:
(23-Lesson): Use Commercial Off-the-Shelf Products Where Possible
An effective strategy in the ISS program was to simplify designs by utilizing commercial off-the-shelf (COTS) hardware and software products for non-safety, non-critical applications. Application to Exploration: The use of COTS products should be encouraged whenever practical in exploration programs.
Details: see the NASA Lessons Learned Repository at the NeN portal, at:
[http://nen.nasa.gov/llis_lib/pdf/1022932main_ISSLessonsLearnedJuly2009.pdf|http://nen.nasa.gov/llis_lib/pdf/1022932main_ISSLessonsLearnedJuly2009.pdf]
# The following information is from [Commercial Item Acquisition: Considerations and Lessons Learned|http://www.acq.osd.mil/dpap/Docs/cotsreport.pdf], July 14, 2000, Office of the Secretary of Defense.
 
\\
Summary: This document is designed to assist the DoD in acquiring and supporting commercial items. It provides an overview of the considerations inherent in such acquisitions and summarizes lessons learned from a wide variety of programs. Although it is written with the DoD acquirer in mind, it can provide useful information and assist you as we move down this increasingly important path.
\\
Details: see [Commercial Item Acquisition: Considerations and Lessons Learned|http://www.acq.osd.mil/dpap/Docs/cotsreport.pdf]\\
{div3}

{tabclose}