1. Requirements

Paragraph 2.3.1

The project shall ensure that when a COTS (Commercial Off the Shelf), GOTS (Government Off the Shelf), MOTS (Modified Off the Shelf), reused, or open source software component is to be acquired or used, the following conditions are satisfied:

  1. The requirements that are to be met by the software component are identified.
  2. The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
  3. Proprietary, usage, ownership, warranty, licensing rights, and transfer rights have been addressed.
  4. Future support for the software product is planned.
  5. The software component is verified and validated to the same level of confidence as would be required of the developed software component.
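The five conditions above can be tracked as a simple pre-acquisition checklist. The sketch below is illustrative only (the class and field names are not from NPR 7150.2); it shows one way a project might record which SWE-027 conditions remain open for a candidate off-the-shelf component.

```python
# Illustrative sketch only: track the five SWE-027 conditions for an
# off-the-shelf software component before approving acquisition or use.
# Names are hypothetical, not taken from NPR 7150.2.
from dataclasses import dataclass


@dataclass
class OtsComponentAssessment:
    name: str
    requirements_identified: bool = False   # condition 1
    documentation_adequate: bool = False    # condition 2 (fulfills intended purpose)
    rights_addressed: bool = False          # condition 3 (proprietary, licensing, transfer)
    future_support_planned: bool = False    # condition 4
    vv_to_same_confidence: bool = False     # condition 5 (V&V as for developed software)

    def unmet_conditions(self) -> list:
        """Return the field names of SWE-027 conditions not yet satisfied."""
        return [cond for cond, ok in vars(self).items()
                if cond != "name" and not ok]

    def approved_for_use(self) -> bool:
        """All five conditions must be satisfied before acquisition or use."""
        return not self.unmet_conditions()


assessment = OtsComponentAssessment("example-rtos", requirements_identified=True)
print(assessment.approved_for_use())   # False: four conditions remain open
print(assessment.unmet_conditions())
```

A real project would of course keep this record in its software management plan or tracking system; the point is only that approval is gated on all five conditions, not a subset.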
1.1 Notes

The project responsible for procuring off-the-shelf software is responsible for documenting, prior to procurement, a plan for verifying and validating the off-the-shelf software to the same level of confidence that would be needed for an equivalent class of software if obtained through a "development" process. The project ensures that the COTS, GOTS, MOTS, reused, and open source software components and data meet the applicable requirements in this NPR [NPR 7150.2, NASA Software Engineering Requirements] assigned to its software classification as shown in Appendix D.

Note: For these types of software components consider the following:

  1. A supplier agreement to deliver or escrow source code, or a third party maintenance agreement, is in place.
  2. A risk mitigation plan to cover the following cases is available:
    1. Loss of supplier or third party support for the product.
    2. Loss of maintenance for the product (or product version).
    3. Loss of the product (e.g., license revoked, recall of product, etc.).
  3. An agreement that the project has access to defects discovered by the community of users has been obtained. When available, the project can consider joining a product users group to obtain this information.
  4. A plan to provide adequate support is in place; the plan needs to include maintenance planning and the cost of maintenance.
  5. Documentation changes to the software management, development, operations, or maintenance plans that are affected by the use or incorporation of COTS, GOTS, MOTS, reused, and legacy/heritage software.
  6. Open source software license review by the Center Counsel.
1.2 Applicability Across Classes

Class G is labeled with "P (Center)." This means that an approved Center-defined process which meets a non-empty subset of the full requirement can be used to achieve this requirement.


Applicability by class: A (SC and NSC), B (SC and NSC), C (SC and NSC), D (SC only), E (SC only), and F; Class G is "P (Center)." Not applicable: D (NSC), E (NSC), and H.





2. Rationale

This requirement exists in NPR 7150.2 because some software (e.g., COTS, GOTS) is purchased with no direct NASA or NASA contractor software engineering involvement in software development. Projects using this type of COTS/GOTS software must know that the acquisition and maintenance of the software is expected to meet NASA requirements as spelled out in this section of NPR 7150.2 (see Topic 7.3 - Acquisition Guidance).

This requirement also exists in NPR 7150.2 because some software, whether purchased as COTS or GOTS, or developed/modified in house, may contain Open Source Software. If Open Source Software exists within the project software, there may be implications for how the software can be used in the future, including internal/external releases or reuse of the software.

Finally, this requirement exists in NPR 7150.2 because some software may be heritage (or legacy) software, or developed/purchased before current software engineering processes were in place.



3. Guidance

OFF-THE-SHELF SOFTWARE

Guidance on off-the-shelf (OTS) software is broken down into elements as a function of the type of OTS software. The following is an index of the guidance elements:
3.1 COTS/GOTS Software
3.2 MOTS Software
3.3 Legacy/Heritage Code
3.4 Open Source Software
3.4.1 What is Open Source Software?
3.4.2 Planning ahead for the inclusion of Open Source Software
3.4.3 Releasing NASA code containing Open Source Software
3.4.4 Identifying and using high pedigree Open Source Software in NASA code
3.4.5 Procurement of software by NASA – Open Source Provisions
3.5 Embedded Open Source Software

Software reuse:

Software reuse (either software acquired commercially or existing software from previous development activities) comes with a special set of concerns. Reuse of commercially acquired software includes COTS, GOTS, and MOTS. Reuse of in-house software may include legacy or heritage software. Open Source Software is also reused software. Reused software often requires modification to be usable by the project. Modification may be extensive, or may just require wrappers or glueware to make the software usable. Acquired and existing software must be evaluated during the selection process to determine the effort required to bring it up to date. The basis for this evaluation is typically the criteria used for developing software as a new effort. The evaluation of the reused software requires ascertaining whether the quality of the software is sufficient for the intended application. The requirement statement for SWE-027 calls for five conditions to be satisfied. In addition, the note indicates six additional items to consider. The key item in the above listings is the need to assure that the verification and validation (V&V) activity for the reused software is performed to the same level of confidence as would be required for a newly developed software component.

Software certification by outside sources:

Outside certifications can be taken into account depending on the intended NASA use of the software and the use of the software in the environment in which it was certified. Good engineering judgment has to be used in cases like this to determine the acceptable degree of use. An example: RTOS (Real-Time Operating System) vendor certification data and testing can be used in conjunction with data certification done in the NASA software environment. Even though software has been determined "acceptable" by a regulatory entity (e.g., the FAA), good engineering judgment still has to be used in these cases to determine the acceptable degree of use.

3.1 COTS/GOTS Software

COTS (commercial off-the-shelf) software and GOTS (government off-the-shelf) software are unmodified, out-of-the-box software solutions that can range in size from a portion of a software project to an entire software project. COTS/GOTS software can include software tools (e.g., word processor or spreadsheet applications), simulations (e.g., aeronautical and rocket simulations), and modeling tools (e.g., dynamics/thermal/electrical modeling tools).

If you are planning to use COTS/GOTS products, be sure to complete the checklist tables under the Tools section. The purpose of these tables is to ensure that the table entries are considered in your software life cycle decisions from software acquisition through software maintenance.

If COTS/GOTS software is used for a portion of the software solution, the software requirements pertaining to that portion should be used in the testing and V&V of the COTS/GOTS software. For example, if a MIL-STD-1553 serial communications link is the design solution for the project communications link requirements, and the COTS/GOTS software design solution is used along with the COTS/GOTS hardware design solution, then the project software requirements for the serial communications link should be used to test, verify, and validate the COTS/GOTS MIL-STD-1553 software. Other functionality that might be present in the COTS/GOTS MIL-STD-1553 software may not be covered by the project requirements. This other functionality should be either disabled or determined to be safe by analysis and testing.
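The practice described here, testing the requirement-covered portion of a COTS/GOTS component and dispositioning the rest, can be sketched as a simple coverage check. The feature and requirement names below are hypothetical, chosen only to mirror the MIL-STD-1553 example.

```python
# Hypothetical sketch: partition a COTS component's functions into those
# covered by project requirements (to be tested and V&V'd) and those that
# are not (to be disabled or shown safe by analysis and testing).
cots_functions = {"1553_bus_controller", "1553_remote_terminal",
                  "builtin_self_test", "vendor_debug_console"}

# Assumed mapping of project requirements onto COTS functions.
requirement_to_function = {
    "COMM-001 serial command uplink": "1553_bus_controller",
    "COMM-002 telemetry downlink": "1553_remote_terminal",
}

covered = set(requirement_to_function.values())
uncovered = cots_functions - covered

print("Test against project requirements:", sorted(covered))
print("Disable or analyze for safety:", sorted(uncovered))
```

The value of even this trivial bookkeeping is that unmapped functionality is surfaced explicitly rather than silently shipped untested.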

COTS software can range from simple software (e.g., handheld electronic device) to progressively more complicated software (e.g., launch vehicle control system software). A software criticality assessment (see SWE-133) and a risk assessment can be made to determine if the use of this software will result in an acceptable level of risk, even if unforeseen hazard(s) result in a failure. The results of these assessments can be used to set up the approach to using and verifying the COTS/GOTS software.

3.2 MOTS Software

As defined in Appendix A of NPR 7150.2, A.18:
"Modified Off-The-Shelf (MOTS) Software. When COTS, legacy/heritage software is reused, or heritage software is changed, the product is considered 'modified.' The changes can include all or part of the software products and may involve additions, deletions, and specific alterations. An argument can be made that any alterations to the code and/or design of an off-the-shelf software component constitutes 'modification,' but the common usage allows for some percentage of change before the off-the-shelf software is declared to be MOTS software. This may include changes to the application shell and/or glueware to add or protect against certain features and not to the off-the-shelf software system code directly. See the off-the-shelf definition."

In cases where legacy/heritage code is modified, MOTS is considered to be an efficient method to produce project software, especially if the legacy/heritage code is being used in the same application area of NASA. For example, Expendable Launch Vehicle simulations have been successfully modified to accommodate solid rocket boosters, payload release requirements, or other such changes. Further, if the "master" code has been designed with reuse in mind, such code becomes an efficient and effective method of producing quality code for succeeding projects.

An Independent Verification and Validation (IV&V) Facility report, "Software Reuse Study Report," April 29, 2005, examines in detail changes made to reused software. The conclusions are positive but caution against over- or underestimating the extent and costs of reused software.

The DoD has had extensive experience with COTS/GOTS and MOTS. A Lessons Learned item, Commercial Item Acquisition: Considerations and Lessons Learned, specifically includes lessons learned from MOTS. Concerns in these lessons learned included commercial software vendors attempting to modify existing commercial products, limiting the changes to "minor" modifications, and underestimating the extent and schedule impacts of testing of modified code.

Extreme caution should be exercised when attempting to purchase or modify COTS or GOTS code that was written for another application realm, or for which key documentation is missing (e.g., requirements, architecture, design, tests).

Engineering judgment and a consideration of possible impacts to the software development activity are needed when determining whether software is MOTS, legacy, or heritage.

3.3 Legacy/Heritage Code

"Legacy" and "heritage" code will be used interchangeably here to identify code that may have been produced before modern software development processes were put into place. Reused code may in many cases be considered legacy/heritage code. Legacy code can range in size from small units of code (e.g., subroutines) to large, complete software systems (e.g., Atlas-Centaur Expendable Launch Vehicle Simulation).

It may be desirable to maintain legacy code largely intact due to one or more of the following factors:

  • The code may have a history of successful applications over many runs.
  • No new software errors have been found in the code in some time and it has been reliable through many years of use.
  • The cost of upgrading the legacy code (e.g., a new software development) may be uneconomical or unaffordable in terms of time or funding.
  • Upgrading the legacy code could add new software errors.
  • Software personnel are familiar with the legacy code.
  • Safety reviews have been conducted on the legacy code under similar applications.

On the other hand, it may be desirable to replace legacy code due to one or more of the following factors:

  • No active civil servants or contractors are familiar with the code or its operation.
  • One or more of the following documents are missing: architecture, requirements, traceability, design, source code, unit through integration test cases, validation results, user operational manuals, non-conformances, waivers, or coding standards.
  • Conditions for installation or use of the software, or for the software development environment, are lacking.
  • No safety review has been done on the new code in its old or new operational environment.
  • The legacy code may contain Open Source Software with questionable license terms or rights.
  • The source code language compilers may be years out of date or even inaccessible.
  • Emulators may not be available.
  • Maintenance responsibility is unknown.
  • Legacy code may operate on out-of-date or unavailable operating systems.
  • IP, licensing, or exportability constraints, if any, are unknown.

Determining which path to follow (keep or replace legacy code) is largely a cost-risk analysis. Further guidance on this facet of legacy code will be provided in future iterations of this Electronic Handbook.

If the decision is made to maintain the use of the legacy code, it is recommended that incremental improvements be made as the code is used. Don't wait until all the experts retire! Specifically,

  • Requirements should be documented if not already available.
  • Create V&V documents based on testing against any vendor documentation, including user's manuals.
  • Start configuration management on the reused code.
  • Software architecture and design should be documented, if not already available.
  • Test cases should be documented.
  • Software debugging and error reporting should be documented.
  • Software Assurance and Safety personnel should review the legacy code and documentation.
  • All documentation, test results, maintenance history, and other such documents associated with legacy code should be gathered and stored, if it is anticipated the code will be reused.

One author, Michael C. Feathers, has defined legacy code as code which has no tests, and proceeds to advise his readers on how to work with legacy code.

Engineering judgment and a consideration of possible impacts to the software development activity are needed when determining whether software is MOTS, legacy, or heritage.
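Feathers' remedy for code without tests is the characterization test: record what the legacy routine actually does today, then guard against unintended change during maintenance. A minimal sketch, using a hypothetical legacy function invented for illustration:

```python
# Hypothetical legacy routine whose documented behavior is lost; we
# characterize its *current* behavior rather than its intended behavior.
def legacy_scale(raw: int) -> float:
    # Legacy code: no requirements or design documents survive.
    return raw * 0.3048 + 1.0   # offset discovered only by inspection


def test_characterize_legacy_scale():
    # Characterization tests pin down observed outputs so any future
    # modification that changes them is detected immediately.
    assert legacy_scale(0) == 1.0
    assert abs(legacy_scale(100) - 31.48) < 1e-9
    assert abs(legacy_scale(-10) - (-2.048)) < 1e-9


test_characterize_legacy_scale()
```

Such tests do not prove the legacy behavior is correct; they establish a baseline so that V&V and incremental improvement can proceed without silently altering behavior the project may depend on.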

3.4 Open Source Software

Open Source Software is considered a form of off-the-shelf software. Even if most of the software on a NASA project is developed in-house, it is common to find embedded Open Source Software within the code. It is often more efficient for a software engineer to use widely available and well tested code developed in the software community for common functions than to "reinvent the wheel."

3.4.1 What is Open Source Software?

  • In general usage: "Open Source Software" is not to be confused with other forms of inexpensive or "free" software. The intention of SWE-027 is to cover any software used in the software system that was not developed in-house. More generalized information on this subject of Open Source Software is available from Wikipedia at: http://en.wikipedia.org/wiki/Open_source_software. Verify information from Wikipedia if necessary.
  • NASA specific definition: In NPR 2210.1C, "Release of NASA Software," under Appendix A, definitions, A.1.7, "'Open Source Software' means software where the recipient is free to use the software for any purpose, to make copies of the software and to distribute the copies without payment of royalties, to modify the software and to distribute the modified software without payment of royalties, to access and use the source code of the software, and to combine the software with other software in accordance with Open Source licenses/agreements. Open Source Software is a subcategory of Publicly Releasable software."
3.4.2 Planning ahead for the inclusion of Open Source Software

Whether Open Source Software is acquired or developed by NASA or a NASA contractor, a usage policy should be established up front to avoid any possible legal issues that may arise. This policy may be developed in conjunction with advice from the Software Release Authority (even if you do not plan to release the software) and/or your Center's IP Legal Counsel.
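One concrete input to such a usage policy is an inventory of open source components and their licenses, so that the licenses needing Center Counsel review are identified before they are embedded. The component names and the review policy below are assumptions for illustration, not NASA policy.

```python
# Hypothetical sketch: inventory open source components and flag the
# license families that most often warrant IP/Legal Counsel review
# (e.g., copyleft licenses with share-alike distribution terms).
components = {
    "parser-lib": "MIT",
    "math-kernel": "GPL-3.0",
    "plot-widget": "Apache-2.0",
    "driver-shim": "LGPL-2.1",
}

# Assumed review policy: copyleft families trigger counsel review;
# permissive licenses are recorded and proceed through normal release.
NEEDS_COUNSEL_REVIEW = {"GPL-2.0", "GPL-3.0", "LGPL-2.1", "AGPL-3.0"}

flagged = sorted(name for name, lic in components.items()
                 if lic in NEEDS_COUNSEL_REVIEW)
print("Refer to Center Counsel:", flagged)
```

Which license families actually require review is a legal question for the Center's IP Counsel; the sketch only shows that the triage can be mechanical once the policy is decided.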

3.4.3 Releasing NASA code containing Open Source Software

When software is released to another NASA Center, NASA project, or external organization, it is important to inform the receiving party of any licenses and restrictions under which the software is released. It is important to note that additional software required to "run" the released software is not part of the software release. For example, a web application that runs under the Apache Web Server does not need to include the Apache Public License as part of the relevant licenses. Software releases are also performed when software is submitted for Space Act Awards, such as the NASA Software of the Year Award. For more information on software releases, contact the Software Release Authority at the NASA Center at which the software is being or was developed. There are requirements and processes associated with software releases; see NPR 2210.1C, "Release of NASA Software." [373]

A cautionary item from NPR 2210.1C, paragraph 3.2.2.2, is worth repeating here: "If a proposed release of Open Source Software includes the release of external Open Source Software, care shall be taken to ensure that the pertinent license for such external Open Source Software is acceptable. For example, at least one widely used external open source license does not currently include an indemnification provision and further requires that all software distributed with that external Open Source Software be distributed under the same license terms. Therefore, except for an Approved for Interagency Release or Approved for NASA Release, both the Center Office or Project that is responsible for the software and Center Patent or IP Counsel shall review and approve any proposed distribution of Open Source Software that includes external Open Source Software."

Caution: Open Source Software may itself contain other Open Source Software.
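As the NPR excerpt above notes, the license of any embedded external Open Source Software must be reviewed before release. A minimal sketch of an automated pre-screen is shown below; the license categories and component names are illustrative assumptions for this example, not NASA policy, and Center Patent or IP Counsel remains the approval authority.

```python
# Illustrative pre-screen for license review, not a substitute for IP Counsel.
# The license categories below are assumptions for the example, not NASA policy.

COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}       # require same-license distribution
PERMISSIVE = {"MIT", "BSD-3-Clause", "Apache-2.0"}  # generally compatible

def flag_for_review(components):
    """Return the subset of (name, license) pairs needing legal review.

    A component is flagged when its license is copyleft or unknown,
    since either case can constrain how the combined release is licensed.
    """
    flagged = []
    for name, license_id in components:
        if license_id in COPYLEFT or license_id not in PERMISSIVE:
            flagged.append((name, license_id))
    return flagged

if __name__ == "__main__":
    # Hypothetical components in a proposed release.
    release = [("fastmath", "MIT"), ("plotkit", "GPL-3.0"), ("telemetry-io", "Unknown")]
    for name, lic in flag_for_review(release):
        print(f"review needed: {name} ({lic})")
```

A screen like this only raises flags for the humans who make the actual determination; it cannot decide license acceptability on its own.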

3.4.4 Identifying and using high pedigree Open Source Software in NASA code

Recall NPR 7150.2 requirement 2.3.1.e, which states: "The software component is verified and validated to the same level of confidence as would be required of the developed software component." To achieve this level of confidence, it is recommended that software developers use only Open Source Software (and COTS, GOTS, MOTS, and legacy software as well) that has a high pedigree, i.e., a gold standard. Such Open Source Software will typically have the following characteristics:

  • There should be a strong software development model, including defined processes for bug reporting that cover identification, triage, and correction.
    • Code modification: review and approval to commit fixes and features to the source code.
    • Testing: thorough descriptions of test cases, test runs, and test configurations.
    • Code review (as part of the code modification process).
    • Documentation: detailed documentation should be kept updated.
    • Discussion lists for questions: this may include a wiki, mailing list, or live chat site.
    • Leadership: ensures that the community works in a coordinated fashion to define target functionality for each release and overall product focus.
  • Usually, a high quality, established open source project will have a large number of developers. Usage of the project's product(s) will occur across multiple industries, both nationally and internationally.
  • Metrics (e.g., number of developers) can be used to evaluate the quality of an Open Source project via sites that track a large proportion of Open Source Software projects (e.g., [ ]). Norms, such as what constitutes a large number of developers, change as the number of Open Source projects and developers grows.
  • The project should provide a listing of all Open Source Software included, embedded, or modified within the piece of Open Source Software.
  • Open Source Software may provide more opportunity to perform V&V of the software to the same level of confidence as if obtained through a "development" process. Often an Open Source Software project will provide online access to detailed development and test artifacts (as described above), which may be difficult to obtain from COTS vendors.
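One characteristic above is a listing of all Open Source Software embedded within a component. When no such listing is supplied, a rough inventory can be sketched by scanning the source tree for license files. The filename patterns below are common conventions only, not an exhaustive or authoritative set, and a hit is merely a prompt for manual review.

```python
import os

# Common license-file names; a heuristic, not an exhaustive set.
LICENSE_NAMES = {"license", "license.txt", "license.md", "copying", "notice"}

def inventory_license_files(root):
    """Walk a source tree and return paths of license files found.

    Each hit suggests an embedded component whose license should be
    recorded in the project's Open Source Software listing and reviewed.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            if fname.lower() in LICENSE_NAMES:
                hits.append(os.path.join(dirpath, fname))
    return sorted(hits)
```

Running this over a vendor drop or third-party directory gives a starting point for the listing the pedigree criteria call for; it cannot find embedded code that ships without a license file.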
3.4.5 Procurement of software by NASA -- Open Source Provisions

A cautionary item from page 9 of NPR 2210.1C, "Release of NASA Software" [373], is worth reiterating here: "Open Source Software Development, as defined in paragraph A.1.8 of NPR 2210.1C, may be used as part of a NASA project only if the Office or Project that has responsibility for acquisition or development of the software supports incorporation of external Open Source Software into software. In addition, the Office or Project responsible for the software acquisition or development shall:

  • "Determine the ramifications of incorporating such external Open Source Software during the acquisition planning process specified in NASA FAR Supplement Subpart 1807.1, Acquisition Plans; and
  • "Consult with the Center Patent or IP Counsel early in the planning process (see 2.4.2.1) as the license under which the Open Source Software was acquired may negatively impact NASA's intended use."

3.5 Embedded Software

Embedded software applications written by/for NASA are commonly used by NASA for engineering software solutions. Embedded software is software specific to a particular application, as opposed to general purpose software running on a desktop. Embedded software usually runs on custom computer hardware ("avionics"), often on a single chip.

Care must be taken when using vendor-supplied board support packages (BSPs) and hardware-specific software (drivers), which are typically supplied with off-the-shelf avionics systems. BSPs and drivers act as the software layer between the avionics hardware and the embedded software applications written by/for NASA. Most CPU boards have BSPs provided by the board manufacturer, or by third parties working with the board manufacturer. Driver software is provided for serial ports, USB ports, interrupt controllers, modems, printers, and many other hardware devices.

BSPs and drivers are hardware dependent, often developed by third parties on hardware/software development tools which may not be accessible years later. Risk mitigation should include hardware-specific software, such as BSPs, software drivers, etc. Many BSPs and drivers are provided by board manufacturers as binary code only, which could be an issue if the supplier is not available and BSP/driver errors are found. It is recommended that a project using BSPs/drivers maintain a configuration managed version of any BSPs, with release dates and notes. Consult with the avionics (hardware) engineers on the project to see what actions, if any, may be taken to configuration manage the BSPs/drivers. Consideration should also be given to how BSP/driver software updates are to be handled, if and when they are made available,


and how it will become known to the project that updates are available.
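The recommendation above, keeping a configuration-managed record of BSPs/drivers with release dates and notes, can be as simple as a versioned manifest checked into the project's configuration management system. The fields and sample entry below are illustrative assumptions of this sketch, not a prescribed format.

```python
import json

def make_bsp_record(name, version, release_date, source, notes):
    """Build one manifest entry for a vendor BSP or driver.

    Keeping these records under configuration management preserves the
    pedigree of binary-only deliverables even if the supplier later
    becomes unavailable.
    """
    return {
        "name": name,
        "version": version,
        "release_date": release_date,  # ISO 8601 date string
        "source": source,              # e.g., board manufacturer, third party
        "notes": notes,
    }

if __name__ == "__main__":
    # Hypothetical manifest for one avionics board.
    manifest = [
        make_bsp_record("cpu-board-bsp", "2.4.1", "2010-03-15",
                        "board manufacturer", "binary only; no source escrow"),
    ]
    print(json.dumps(manifest, indent=2))
```

Versioning the manifest alongside the binaries gives the project a record of exactly which BSP/driver releases flew with which build.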

Vendor reports and user forums should be monitored from the time hardware and associated software are purchased through a reasonable time after deployment. Developers should monitor suppliers or user forums for bugs, workarounds, security changes, and other modifications to software that, if unknown, could derail a NASA project. Consider the following snippet from a user forum:


Panel

"Manufacturer Pt. No." motherboard embedded complex electronics contains malware
Published: 2010-xx-xx

A "Manufacturer" support forum identifies "manufacturer's product" motherboards which contain harmful code. The embedded complex electronics for server management on some motherboards may contain malicious code. There is no impact on either new servers or non-Windows based servers. No further information is available regarding the malware, malware mitigation, the serial number of motherboards affected, nor the source



Div
idtabs-4

4. Small Projects

This requirement applies to all projects regardless of size.


Div
idtabs-5

5. Resources


refstable
h2. 5.1 Tools

Besides the Checklist Tools in section 5.2, you may wish to reference the [Tools Table|http://nasa7150.onconfluence.com/display/7150/Tools+Table] in this handbook for an evolving list of these and other tools in use at NASA. Note that this table should not be considered all-inclusive, nor is it an endorsement of any particular tool. Check with your Center to see what tools are available to facilitate compliance with this requirement.




h2. 5.2 Checklist Tools

h3. *Requirement*: 2.3.1 (SWE-027) checklist

| Steps | Y/N |
| (a) Have requirements to be met by the software component been identified? | |
| (b) Does the software component include documentation to fulfill its intended purpose (e.g., usage instructions)? | |
| (c1) Have the proprietary rights been addressed? | |
| (c2) Have the usage rights been addressed? | |
| (c3) Have the ownership rights been addressed? | |
| (c4) Has the warranty been addressed? | |
| (c5) Have the licensing rights been addressed? | |
| (c6) Have the transfer rights been addressed? | |
| (d) Has future support for the software product been planned? | |
| (e) Has the software component been verified and validated to the same level of confidence as would be required of the developed software component? | |
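A checklist like the one above can also be tracked mechanically. The sketch below stores a condensed version of each SWE-027 item and reports which are still open; the item wording is abbreviated from the table, and representing the checklist as a dictionary is just one convenient format, not a required one.

```python
# Condensed SWE-027 checklist items, keyed by the table's step letters.
# Storing answers as data makes it easy to audit completeness, but this
# structure is an example convention, not part of the requirement.

CHECKLIST = {
    "a": "Requirements to be met by the software component identified",
    "b": "Documentation included to fulfill intended purpose",
    "c": "Proprietary, usage, ownership, warranty, licensing, transfer rights addressed",
    "d": "Future support for the software product planned",
    "e": "V&V to the same level of confidence as developed software",
}

def open_items(answers):
    """Return the ids of checklist items not yet answered 'Y'."""
    return [item for item in CHECKLIST if answers.get(item) != "Y"]
```

For example, a project that has only identified its requirements would see items b through e reported as open.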

h3. *Guidelines:* Risk Mitigation Plan Topic Checklist

| | Y/N |
| From the [Lessons Learned|#_Lessons_Learned_with] section of this document, underestimation of life cycle costs and risks of OTS software can be a show stopper. Has a thorough study been done to identify and mitigate life cycle costs and risks? | |
| If the life cycle costs and risks of the OTS software have been identified in the study above, is there a mitigation plan if these costs and risks are exceeded? | |
| Is there a supplier agreement to deliver or escrow source code, or is a third party maintenance agreement in place? | |
| Is there a supplier agreement to provide developer training, development environment test tools and artifacts, and development environment documentation for development hardware and software, in part to mitigate the loss of the supplier? | |
| Is there a supplier agreement to transfer any maintenance agreement to a third party? | |
| Are there any trade studies or market analysis completed that identify equivalent or alternate products to mitigate the loss of the supplier's product? | |
| Is there an analysis that addresses the risk of loss of supplier or supplier's product such as stability of supplier, supplier's product recall history, supplier's business and technical reputation? | |
| Has a plan been completed that describes what to do if supplier or third party support for the product is lost? | |
| Has a plan been completed that describes what to do if there is a loss of maintenance for the product (or product version)? | |
| Has a plan been completed that describes what to do if there is a loss of the product (e.g., license revoked, recall of product, etc)? | |
| If hardware changes, is software affected? If so, has a plan been completed that describes what to do with software if hardware changes? | |
| If software changes, is hardware affected? If so, has a plan been completed that describes what to do with hardware if software changes? | |
| Has there been a review of the pertinent items from the [Lessons Learned|#_Lessons_Learned_with] section of this document? | |

h3. *Guidelines:* Suitability Analysis for OTS, Legacy/Heritage Software Checklist

It is recommended that the following table items be addressed as early as possible, perhaps during trade studies. Some of this table is based on materials presented at a NASA software safety training session by Martha Wetherholt, OSMA. Other parts of the table are based on comments from the [Lessons Learned|#_Lessons_Learned_with] section of this document.
| | Y/N |
| Is the pedigree of the software known, including the development environment and software practices and processes used for development and testing? | |
| Is adequate documentation available to determine suitability such as functionality (requirements), interfaces, design, architecture, tests, verification, validation, user manuals? | |
| If legacy/heritage code, are the old and new operational environments known and understood? | |
| If legacy/heritage code, are the old and new hardware differences, such as memory size, speed, CPU speed and hardware interfaces understood? | |
| If legacy/heritage code, are task scheduling functions affected by new hardware? | |
| Are the external data interfaces well known and understood? {color:#ff0000}guideline{color} | |
| Is source code available to better understand the application? | |
| Is there an updated record of defects (bugs) and fixes available? | |
| Are the internal software interfaces available? | |
| Is "glueware" required to fill gaps in functionality? | |
| Is there extra functionality, and, if so, the ability to turn the extra functionality off (e.g., recompile options, disable flags, or writing wrapper software)? | |
| If extra functionality is present and cannot be disabled, has it been determined by analysis and testing that the extra functionality is safe? | |
| Are there potential conflicts with the system into which the OTS or legacy/heritage code is being integrated such as memory overwrites, monopolizing system resources such as processor time or stack space? | |
| Have hazard analyses (fault trees, FMEA's, etc) been performed on the software and made available? | |
| Have analysis, test and/or verification tests been performed, documented and been made available? | |
| Does the software meet the NASA Software Safety Standard? {color:#ff0000}requirement{color} | |
| Has the software been adequately tested and analyzed to an acceptable level of risk and will the software remain safe in the context of its planned use, and has the resulting documentation been made available? | |
| Has a prioritized list of desired COTS features been developed? | |
| Have discussions been had with local experts having experience in similar areas and frequent peer and design reviews been performed? | |
| Have demonstration versions of the COTS products been obtained as well as customer references from the vendor? | |
| Has a product been selected which is appropriately sized for your application and which is closely aligned with your project's requirements and a vendor whose size will permit a working relationship? | |
| Are vendor tutorials, documentation, and vendor contacts being used during the COTS evaluation period? | |


toolstable


Div
idtabs-6

6. Lessons Learned

The following information comes from the NASA Study on Flight Software Complexity listed in the reference section of this document:

"In 2007, a relatively new organization in DoD (the Software Engineering and System Assurance Deputy Directorate) reported their findings on software issues based on approximately 40 program reviews in the preceding 2½ years (Baldwin 2007). They found several software systemic issues that were significant contributors to poor program execution." Among the seven listed were the following on COTS:

  • "Immature architectures, COTS integration, interoperability."

"Later, in partnership with the NDIA, they identified the seven top software issues that follow, drawn from a perspective of acquisition and oversight." Among the seven listed were the following on COTS:

  • "Inadequate attention is given to total life cycle issues for COTS/NDI impacts on life cycle cost and risk."

"In partnership with the NDIA, they made seven corresponding top software recommendations." Among the seven listed were the following on COTS:

  • "Improve and expand guidelines for addressing total life cycle COTS/NDI issues." [040]

The following information is from Commercial Item Acquisition: Considerations and Lessons Learned July 14, 2000, Office of the Secretary of Defense:

This document is designed to assist DoD acquisition of commercial items. According to the introductory cover letter, "it provides an overview of the considerations inherent in such acquisitions and summarizes lessons learned from a wide variety of programs." Although it's written with the DoD acquirer in mind, it can provide useful information to you and assist you as we move down this increasingly significant path. [426]

The NASA Lessons Learned database contains the following lessons learned related to the use of commercial, government, and legacy software:

  • Lessons Learned From Flights of Off the Shelf Aviation Navigation Units on the Space Shuttle, GPS. Lesson Number 1370:  "The Shuttle Program selected off-the-shelf GPS and EGI units that met the requirements of the original customers. It was assumed that off-the-shelf units with proven design and performance would reduce acquisition costs and require minimal adaptation and minimal testing. However, the time, budget and resources needed to test and resolve firmware issues exceeded initial projections." [551]
  • Lessons Learned Study Final Report for the Exploration Systems Mission Directorate, Langley Research Center; August 20, 2004. Lessons Learned Number 1838: "There has been an increasing interest in utilizing commercially available hardware and software as portions of space flight systems and their supporting infrastructure. Experience has shown that this is a very satisfactory approach for some items, and a major mistake for others. In general, COTS [products] should not be used as part of any critical systems [but see the recommendation later in this Lesson Learned] because of the generally lower level of engineering and product assurance used in their manufacture and test. In those situations where COTS [software] has been applied to flight systems, such as the laptop computers utilized as control interfaces on [International Space Station] (ISS), the cost of modifying and testing the hardware/software to meet flight requirements has far exceeded expectations, potentially defeating the reason for selecting COTS products in the first place. In other cases, such as the [Checkout Launch Control System] (CLCS) project at JSC, the cost of maintaining the commercial software had not been adequately analyzed and drove the project's recurring costs outside the acceptable range.
    Recommendation: Ensure that candidate COTS products are thoroughly analyzed for technical deficiencies and life-cycle cost implications before levying them on the program.
    • COTS systems have potential to reduce system costs, but only if all of their characteristics are considered beforehand and included in the planned application. (Standards)
       
    • COTS systems that look good on paper may not scale well to NASA needs for legitimate reasons. These include sustaining engineering/update cycle/recertification costs, scaling effects, dependence on third party services and products. Need to assure that a life-cycle cost has been considered correctly. (Headquarters - CLCS)
      sweref 424
  • ADEOS-II NASA Ground Network (NGN) Development and Early Operations – Central/Standard Autonomous File Server (CSAFS/SAFS) Lessons Learned. Lesson Number 1346: "The purpose of the Standard Autonomous File Server (SAFS) is to provide automated management of large data files without interfering with the assets involved in the acquisition of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated level of fail-over processing to enhance reliability. The successful integration of COTS products into the SAFS system has been key to its becoming accepted as a NASA standard resource for file distribution, and leading to its nomination for NASA's Software of the Year Award in 1999."
     
    Lessons Learned:
    "Match COTS tools to project requirements. Deciding to use a COTS product as the basis of system software design is potentially risky, but the potential benefits include quicker delivery, less cost, and more reliability in the final product. The following lessons were learned in the definition phase of the SAFS/CSAFS development.
    • "Use COTS products and re-use previously developed internal products.
    • "Create a prioritized list of desired COTS features.
    • "Talk with local experts having experience in similar areas.
    • "Conduct frequent peer and design reviews.
    • "Obtain demonstration [evaluation] versions of COTS products.
    • "Obtain customer references from vendors.
    • "Select a product appropriately sized for your application.
    • "Choose a product closely aligned with your project's requirements.
    • "Select a vendor whose size will permit a working relationship.
    • "Use vendor tutorials, documentation, and vendor contacts during COTS evaluation period."
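The "prioritized list of desired COTS features" lesson above can be made concrete with a simple weighted scoring matrix. The sketch below is purely illustrative: the product names, features, weights, and ratings are hypothetical assumptions, not data from the SAFS/CSAFS project.

```python
# Illustrative weighted scoring of candidate COTS products against a
# prioritized feature list. All product names, features, weights, and
# ratings are hypothetical; none are drawn from the SAFS/CSAFS project.

# Each desired feature carries a weight reflecting its priority.
feature_weights = {
    "automated_failover": 5,
    "internal_logging": 4,
    "vendor_support": 3,
    "platform_portability": 2,
}

# Degree to which each candidate satisfies each feature (0.0 to 1.0),
# as judged during hands-on evaluation of demonstration versions.
candidates = {
    "ProductA": {"automated_failover": 1.0, "internal_logging": 0.5,
                 "vendor_support": 1.0, "platform_portability": 0.5},
    "ProductB": {"automated_failover": 0.5, "internal_logging": 1.0,
                 "vendor_support": 0.5, "platform_portability": 1.0},
}

def score(product_features, weights):
    """Weighted sum of a product's feature-satisfaction ratings."""
    return sum(w * product_features.get(f, 0.0) for f, w in weights.items())

# Rank candidates; the highest score best matches the prioritized list.
ranked = sorted(candidates,
                key=lambda p: score(candidates[p], feature_weights),
                reverse=True)
```

A scoring matrix of this kind makes trade-offs visible and auditable during peer and design reviews, rather than leaving product selection to unstated preference.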
       
      "Test and prototype COTS products in the lab. The prototyping and test phase of the COTS evaluation allows problems to be identified as the system design matures. These problems can be mitigated (often with the help and cooperation of the COTS vendor) well before the field-testing phase at which time it may be too costly or impossible to retrofit a solution. The following lessons were learned in the prototyping and test phase of the SAFS/CSAFS development:
       
    • "Prototype your systems hardware and software in a lab setting as similar to the field environment as possible.
    • "Simulate how the product will work on various customer platforms.
    • "Model the field operations.
    • "Develop in stages with ongoing integration and testing."
         
    • "Pass pertinent information on to your customers.
    • "Accommodate your customers, where possible, by building in alternative options.
    • "Don't approve all requests for additional options by customers or new projects that come on line.
    • "Select the best COTS components for product performance even if they are from multiple vendors.
    • "Consider the expansion capability of any COTS product.
    • "Determine if the vendor's support is adequate for your requirements.
       
      "Install, operate and maintain the COTS field and lab components. The following lessons were learned in the installation and operation phase of the SAFS/CSAFS development:
       
    • "Personally perform on-site installations whenever possible.
    • "Have support/maintenance contracts for hardware and software through development, deployment, and first year of operation.
    • "Create visual representations of system interactions where possible.
    • "Obtain feedback from end users.
    • "Maintain the prototype system after deployment.
    • "Select COTS products with the ability to do internal logging."
      sweref 550
  • MER Spirit Flash Memory Anomaly (2004).  Lesson Number 1483: "Shortly after the commencement of science activities on Mars, the Mars Exploration Rover (MER) lost the ability to execute any task that requested memory from the flight computer. The cause was incorrect configuration parameters in two operating system software modules that control the storage of files in system memory and flash memory. Seven recommendations cover enforcing design guidelines for COTS software, verifying assumptions about software behavior, maintaining a list of lower priority action items, testing flight software internal functions, creating a comprehensive suite of tests and automated analysis tools, providing downlinked data on system resources, and avoiding the problematic file system and complex directory structure."
    Recommendations:
    • "Enforce the project-specific design guidelines for COTS software, as well as for NASA-developed software. Assure that the flight software development team reviews the basic logic and functions of commercial off-the-shelf (COTS) software, with briefings and participation by the vendor.
    • "Verify assumptions regarding the expected behavior of software modules. Do not use a module without detailed peer review, and assure that all design and test issues are addressed.
    • "Where the software development schedule forestalls completion of lower priority action items, maintain a list of incomplete items that require resolution before final configuration of the flight software.
    • "Place high priority on completing tests to verify the execution of flight software internal functions.
    • "Early in the software development process, create a comprehensive suite of tests and automated analysis tools.
    • "Ensure that reporting of flight computer-related resource usage is included.
    • "Ensure that the flight software downlinks data on system resources (such as the free system memory) so that the actual and expected behavior of the system can be compared.
    • "For future missions, implement a more robust version of the dosFsLib module, and/or use a different type of file system and a less complex directory structure."
      sweref 557
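The MER recommendation to downlink system-resource data so that "the actual and expected behavior of the system can be compared" can be sketched as a simple ground-side envelope check. This is a minimal illustration under stated assumptions: the sample values and the 10% tolerance are invented for the example and do not come from the MER mission.

```python
# Hypothetical sketch of the telemetry-comparison recommendation: flag
# downlinked free-memory readings that fall outside a predicted envelope,
# so actual and expected system behavior can be compared on the ground.
# All numbers and the 10% tolerance are illustrative assumptions.

def within_envelope(actual_kib, expected_kib, tolerance=0.10):
    """Return True when actual free memory is within +/- tolerance
    (as a fraction) of the predicted value."""
    return abs(actual_kib - expected_kib) <= tolerance * expected_kib

# Illustrative downlink samples: (mission day, actual free KiB, predicted KiB).
samples = [(1, 4800, 5000), (2, 4500, 5000), (3, 3200, 5000)]

# Days whose readings fall outside the envelope warrant investigation.
anomalies = [day for day, actual, expected in samples
             if not within_envelope(actual, expected)]
```

Had such a comparison been routine, the divergence between actual and modeled memory consumption on Spirit might have been flagged before it prevented task execution.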
       
  • International Space Station Lessons Learned as Applied to Exploration, KSC, July 22, 2009 (Lesson 23): Use Commercial Off-the-Shelf Products Where Possible.
    • An effective strategy in the ISS program was to simplify designs by utilizing commercial off-the-shelf (COTS) hardware and software products for non-safety, non-critical applications.
    • Application to Exploration: Use of COTS products should be encouraged whenever practical in exploration programs.
      sweref 425