
1. Requirements


3.1.14 The project manager shall satisfy the following conditions when a Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software component is acquired or used:

    1. The requirements to be met by the software component are identified.
    2. The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
    3. Proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights have been addressed.
    4. Future support for the software product is planned and adequate for project needs.
    5. The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
    6. The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components.

1.1 Notes

The project responsible for procuring off-the-shelf software is responsible for documenting, before procurement, a plan for verifying and validating the software to the same level required for a developed software component. The project ensures that the COTS, GOTS, MOTS, reused, and auto-generated code software components and data meet the applicable requirements in this directive assigned to its software classification, as shown in Appendix C.

1.2 History

For the history of this requirement, see SWE-027 History.

1.3 Applicability Across Classes

Class A: Applicable
Class B: Applicable
Class C: Applicable
Class D: Applicable
Class E: Not applicable
Class F: Applicable


2. Rationale

All software used on a project must meet the project requirements and be tested, verified, and validated, including incorporated Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software components. The project must confirm that each COTS, GOTS, MOTS, OSS, or reused software component meets NASA requirements; that all legal requirements for proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights are understood and met by the project’s planned use; and that future support for the software is planned. To reduce the risk of failure, the project performs periodic assessments of vendor-reported defects to ensure the defects do not impact the selected software components.


3. Guidance

Several key concepts are addressed in this requirement for software that is used in a project but is not hand-generated by the project. Open Source Software is considered a software component for this requirement.

Identify Requirements

Identifying the requirements to be met by the software component allows the project to perform analysis to ensure the software component is adequate for the function it is intended to fulfill.  Identifying the requirements for the software component also allows the project to determine how the risk of using Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software components affects the overall risk posture of the software system. Identifying the requirements to be met by the COTS, GOTS, MOTS, OSS, or reused software components allows the project to perform testing to ensure that the COTS, GOTS, MOTS, OSS, or reused software component performs the required functions. Without requirements for the COTS, GOTS, MOTS, OSS, or reused software components, the software functionality cannot be tested.

Obtain Documentation

The second concept is to ensure that the COTS, GOTS, MOTS, OSS, or reused software components include documentation to fulfill their intended purpose (e.g., usage instructions).  Usage instructions are important to ensure the purpose, installation, and functions of the components are properly understood and used in the project.

Obtain Usage Rights

For COTS, GOTS, MOTS, OSS, or reused software components, projects need to be sure that any proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights have been addressed to ensure that NASA has all the rights necessary to use the software component in the project.  It may help for the project to work with procurement to determine the documents needed from the software developer.

Understand the license.

Read the legal code, not just the deed. The human-readable deed is a summary of, but not a replacement for, the legal code. It does not explain everything you need to know before using the licensed material.
Make sure the license grants permission for what you want to do. There are different licenses. Some licenses prohibit the sharing of adaptations.
Take note of the particular version of the license. The license version may differ from prior versions in important respects. Similarly, the jurisdiction ports may differ in certain terms, such as dispute resolution and choice of law.

Scope of the license.

Pay attention to what exactly is being licensed. The licensor should have marked which elements of the work are subject to the license and not. For those elements that are not subject to the license, you may need separate permission.
Consider clearing rights if you are concerned. The license does not contain a warranty, so if you think there may be third-party rights in the material, you may want to clear those rights in advance.
Some uses of licensed material do not require permission under the license. If the use you want to make of the work falls within an exception or limitation to copyright or similar rights, you may do so. Those uses are unregulated by the license.

Know your obligations.

Provide attribution. Some licenses require you to provide attribution and mark the material when you share it publicly. The specific requirements vary slightly across versions.
Do not restrict others from exercising rights under the license. Some licenses prohibit you from applying effective technological measures or imposing legal terms that would prevent others from doing what the license permits.
Determine what, if anything, you can do with adaptations you make. Depending on what type of license is applied, you are limited in whether you can share your adaptation and, if so, what license you can apply to your contributions.
Termination is automatic. Some licenses terminate automatically when you fail to comply with their terms.

Consider licensor preferences.

Consider complying with non-binding requests by the licensor. The licensor may make special requests when you use the material. We recommend you do so when reasonable, but that is your option and not your obligation.

Plan Future Support

COTS, GOTS, MOTS, OSS, or reused software components may have limited lifetimes, so it is important to plan for future support of this software adequate for project needs.  As applicable, consider the following:

  • Get a supplier agreement in place to deliver or escrow source code or a third-party maintenance agreement.
  • Ensure a risk mitigation plan is in place to cover the following cases:
    • Loss of supplier or third-party support for the product.
    • Loss of maintenance for the product (or product version).
    • Loss of the product (e.g., license revoked, recall of the product, etc.).
  • Obtain an agreement that the project has access to defects discovered by the community of users. When available, the project can consider joining a product user group to obtain this information.
  • Ensure a plan to provide adequate support is in place; the plan needs to include maintenance planning and maintenance costs.
  • Document changes to the software management, development, operations, or maintenance plans affected by the use or incorporation of COTS, GOTS, MOTS, or reused software components.  

Ensure Fitness for Use

Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software components are required to be tested, verified, and validated to the level required to ensure their fitness for use in the intended application.  The software should be verified and validated, to the extent possible, in the same manner as hand-generated software, using the project classification and criticality as the basis for the level of effort to be applied.

For COTS, GOTS, MOTS, OSS, or reused software components like commercial real-time operating systems, it is sufficient to test, in the project environment, the features being used to meet the software system’s requirements.  It is not necessary to test every claim made for the software.  On Class A projects, when software test suites for the COTS, GOTS, MOTS, OSS, or reused software components are available, they are to be used when appropriate to address the intended environment of use, the interfaces to the software system, and the requirements of the project.
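The scoping rule above — test, in the project environment, only the features the project actually uses — can be sketched as a small test suite. In this sketch the standard-library json module stands in for an off-the-shelf component; the record format, feature list, and test names are illustrative assumptions, not part of the requirement.

```python
import json
import unittest

# The project uses only two features of the off-the-shelf component
# (the standard-library json module stands in for it here): serializing
# flat telemetry records and round-tripping them without loss. Only
# those features are tested; other vendor claims are out of scope.

class TestUsedFeatures(unittest.TestCase):
    RECORD = {"channel": "temp_1", "value": 21.5, "valid": True}

    def test_serializes_flat_record(self):
        text = json.dumps(self.RECORD)
        self.assertIn('"channel"', text)

    def test_round_trip_preserves_values(self):
        self.assertEqual(json.loads(json.dumps(self.RECORD)), self.RECORD)

# Run the used-feature tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestUsedFeatures)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all used-feature tests pass:", result.wasSuccessful())
```

Untested functionality in the component would still need to be disabled or shown safe by analysis, as discussed elsewhere in this guidance.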

Assess Vendor-Reported Defects

The project should periodically assess vendor-reported defects in the COTS, GOTS, MOTS, OSS, or reused software components.  This assessment plan should include the frequency of evaluations, likely within a short period following the vendor defect reports' release to users.  The plan should capture how the project will assess the impact of vendor-reported defects in the project’s environment.  This plan could refer back to procedures used to ensure the COTS, GOTS, MOTS, OSS, or reused software component’s fitness for use in the project environment and capture any additional activities necessary to ensure the defects do not impact the system quality, reliability, performance, safety, etc.
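One way to sketch the periodic-assessment step is a triage script that flags vendor reports affecting component versions the project actually uses. All component names, versions, and report IDs below are hypothetical.

```python
# Sketch of a periodic vendor-defect triage step (all names and data
# are hypothetical). Each vendor report lists affected versions; the
# project flags any report that touches a component version in use.

COMPONENTS_IN_USE = {"rtos_kernel": "4.2.1", "can_driver": "1.0.3"}

VENDOR_REPORTS = [
    {"id": "VR-101", "component": "rtos_kernel", "affected": ["4.2.0", "4.2.1"]},
    {"id": "VR-102", "component": "can_driver", "affected": ["1.1.0"]},
]

def reports_needing_assessment(in_use, reports):
    """Return IDs of vendor reports whose affected versions include a version in use."""
    return [r["id"] for r in reports
            if in_use.get(r["component"]) in r["affected"]]

print(reports_needing_assessment(COMPONENTS_IN_USE, VENDOR_REPORTS))  # ['VR-101']
```

Each flagged report would then be assessed for impact in the project’s environment, per the plan described above.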

Off-The-Shelf Software

The remaining guidance on off-the-shelf (OTS) software is broken down into elements as a function of the type of OTS software. The following is an index of the guidance elements:

3.1 COTS / GOTS Software

3.2 MOTS Software

3.3 Auto-generated Software

3.4 Embedded Software

3.5 Open Source Software (OSS)

Software reuse

Software reuse (either software acquired commercially or existing software from previous development activities) comes with a special set of concerns. The reuse of commercially acquired software includes COTS, GOTS, MOTS, and OSS. Reuse of in-house software may include legacy or heritage software. Reused software often requires modification to be used by the project. Modification may be extensive or may require only wrappers or glueware to make the software usable. The acquired and existing software must be evaluated during the selection process to determine the effort required to bring it up to date. The basis for this evaluation is typically the criteria used for developing software as a new effort. The evaluation of the reused software requires ascertaining whether the software's quality is sufficient for the intended application. The requirement statement for SWE-027 calls for six conditions to be satisfied, and the accompanying note indicates additional items to consider. The key item in the above listings is the need to ensure the verification and validation (V&V) activity for the reused software is performed to the same level of confidence required for a newly developed software component.
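Where wrappers or glueware are needed, a common pattern is to validate inputs and outputs at the boundary of the reused component so faults are caught before they propagate into the system. This minimal sketch uses math.sqrt as a stand-in for a reused routine; the qualified input range is an assumed example.

```python
import math

# Minimal glueware sketch: the project wraps a reused component function
# (math.sqrt stands in for the reused routine) so that out-of-range
# inputs and unexpected outputs are caught at the boundary instead of
# propagating into the incorporating system.

def wrapped_sqrt(x, max_input=1.0e6):
    # Input guard: reject values the reused code was never qualified for.
    if not (0.0 <= x <= max_input):
        raise ValueError(f"input {x!r} outside qualified range [0, {max_input}]")
    result = math.sqrt(x)
    # Output guard: sanity-check the reused code's result before use.
    if not (0.0 <= result <= math.sqrt(max_input)):
        raise RuntimeError("reused component returned out-of-range result")
    return result

print(wrapped_sqrt(144.0))  # 12.0
```

The wrapper itself is newly developed code and is tested to the project's normal standards.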

Software certification by outside sources

Outside certifications can be taken into account depending on the intended NASA use of the software and the environment in which the software was certified. Good engineering judgment has to be used in cases like this to determine the acceptable degree of use. For example, real-time operating system (RTOS) vendor certification data and testing can be used in conjunction with certification data from the NASA software environment. This judgment applies even when the software has been determined "acceptable" by a regulatory entity [e.g., the Federal Aviation Administration (FAA)].

3.1 COTS/GOTS Software

Commercial Off the Shelf (COTS) software and Government Off the Shelf (GOTS) software are unmodified, out-of-the-box software solutions that can range in size from a portion of a software project to an entire software project. COTS/GOTS software can include software tools (e.g., word processor or spreadsheet applications), simulations (e.g., aeronautical and rocket simulations), and modeling tools (e.g., dynamics/thermal/electrical modeling tools).

If you plan to use COTS/GOTS products, be sure to complete the checklist tables under the Tools section of this guidance. The purpose of these tables is to ensure that the table entries are considered in your software life cycle decisions from software acquisition through software maintenance.

If COTS/GOTS software is used for a portion of the software solution, the software requirements for that portion should be used in the testing, verification, and validation of the COTS/GOTS software. For example, if MIL-STD-1553 serial communication is the design solution for the project communications link requirements, and a COTS/GOTS software design solution is used along with a COTS/GOTS hardware design solution, then the project software requirements for the serial communications link should be used to test, verify, and validate the COTS/GOTS MIL-STD-1553 software. The project requirements may not cover other functionality present in the COTS/GOTS MIL-STD-1553 software; this other functionality should be either disabled or determined to be safe by analysis and testing.

COTS software can range from simple software (e.g., for a handheld electronic device) to progressively more complicated software (e.g., launch vehicle control system software). A software safety assessment and a risk assessment can be made to determine whether this software's use will result in an acceptable level of risk, even if an unforeseen hazard occurs. These assessments can be used to set up the approach to using and verifying the COTS/GOTS software.

Example: Checklist for selecting an RTOS

From “Selecting a Real-Time Operating System,” Greg Hawley, Embedded Systems Programming, March 1999.  Updated 10/21/2020 by NASA Software Safety Guidebook Team.

Each criterion below is followed by the considerations for it.

Language/Microprocessor Support

The first step in finding an RTOS for your project is to look at those vendors supporting the language and microprocessor you’ll be using.

Tool Compatibility

Make sure your RTOS works with your ICE, compiler, assembler, linker, source code debuggers, and static code analysis tools.

Services

Operating systems provide a variety of services. Make sure your OS supports the services (queues, timers, semaphores) you expect to use in your design.

Footprint

RTOSes are often scalable, including only those services you end up needing in your applications. Based on what services you’ll need and the number of tasks, semaphores, and everything else you expect to use, make sure your RTOS will work in the RAM and ROM you have.

Performance

Can your RTOS meet your performance requirements? Ensure you understand the benchmarks vendors give you and how they apply to the hardware you will be using.

Software Components

Are the required components (protocol stacks, communications services, real-time databases, Web services, virtual machines, graphics libraries, and so on) available for your RTOS? How much effort will it be to integrate them? Ensure that the RTOS is compatible with the other software components the project is using.

Device Drivers

If you’re using common hardware, are device drivers available for your RTOS?

Debugging Tools

RTOS vendors may have debugging tools that help find harder to find defects with source-level debuggers (such as deadlocks, forgotten semaphore puts, and so on).

Standards Compatibility

Are there safety or compatibility standards your application demands? Make sure your RTOS complies.

Technical Support

Phone support is typically covered for a limited time after your purchase or on a year-to-year basis through support. Sometimes applications engineers are available. Additionally, some vendors provide training and consulting.

Source vs. Object Code

With some RTOSes, you get the source code to the operating system when you buy a license. In other cases, you get only object code or linkable libraries.

Licensing

Make sure you understand how the RTOS vendor licenses their RTOS. With some vendors, run-time licenses are required for each board shipped, and development tool licenses are required for each developer.

Reputation

Make sure you’re dealing with someone you’ll be happy with.

Services

Real-time operating systems provide developers a full complement of features: several types of semaphores (counting, mutual exclusion), timers, mailboxes, queues, buffer managers, memory system managers, events, and more.

Priority Inheritance

The RTOS must support priority inheritance, or priority inversion can result.

Test Suite

If possible, purchase a vendor test suite for accreditation and regression testing purposes.

Certification

Ensure that the appropriate approval from an authorized individual or office is obtained and that the RTOS is certified for the capability that the project requires.

3.2 MOTS Software

As defined in Appendix A of NPR 7150.2, A.17:

"Modified Off-The-Shelf (MOTS) Software. When COTS, legacy/heritage software is reused, or heritage software is changed, the product is considered 'modified.' The changes can include all or part of the software products and may involve additions, deletions, and specific alterations. An argument can be made that any alterations to the code and/or design of an off-the-shelf software component constitute 'modification.' Still, the common usage allows for some percentage of change before the off-the-shelf software is declared to be MOTS software. This may include changes to the application shell and/or glueware to add or protect against certain features and not to the off-the-shelf software system code directly. See the off-the-shelf [definition]."

In cases where legacy/heritage code is modified, MOTS is considered to be an efficient method to produce project software, especially if the legacy/heritage code is being used in the same application area as NASA. For example, Expendable Launch Vehicle simulations have been successfully modified to accommodate solid rocket boosters, payload release requirements, or other such changes. Further, if the "master" code has been designed with reuse in mind, such code becomes an efficient and effective method of producing quality code for succeeding projects.

An Independent Verification and Validation (IV&V) Facility report, "Software Reuse Study Report," April 29, 2005, examines changes made on reused software. The conclusions are positive but caution against underestimating the extent and costs of modifying reused software.

The Department of Defense (DoD) has had extensive experience in COTS and MOTS. A Lessons Learned item, Commercial Item Acquisition: Considerations and Lessons Learned, specifically includes lessons learned from MOTS. Concerns in these lessons learned included commercial software vendors attempting to modify existing commercial products, limiting the changes to "minor" modifications, and underestimating the extent and schedule impacts of testing modified code.

Extreme caution should be exercised when attempting to purchase or modify COTS or GOTS code written for another application realm or for which key documentation is missing (e.g., requirements, architecture, design, tests).

Engineering judgment, including consideration of possible impacts on the software development activity, needs to be applied when determining whether software is MOTS, legacy, or heritage.

3.3 Auto-generated Software

Auto-generated software results from translating a model of the system behavior created by software engineers into different languages such as C or Ada by the appropriate code generator. Changes are made by revising the model, and code is generated from the revised model.
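A toy illustration of the model-then-generate workflow follows; the model format and generator are invented for this sketch, and real generators target languages such as C or Ada from far richer models. The point it shows is that behavior changes are made in the model and the code is regenerated, never edited by hand.

```python
# Toy illustration of auto-generation (hypothetical model format): the
# behavior lives in a model, and source code is produced from it by a
# generator. Revising the model and regenerating replaces hand edits.

MODEL = {"name": "clamp", "low": 0, "high": 100}

def generate(model):
    """Emit Python source for a limiter function described by the model."""
    return (
        f"def {model['name']}(x):\n"
        f"    return max({model['low']}, min({model['high']}, x))\n"
    )

source = generate(MODEL)
namespace = {}
exec(source, namespace)          # compile and load the generated code
print(namespace["clamp"](150))   # 100
```

Changing the limits means editing MODEL and rerunning generate(), which is why the model itself deserves the same scrutiny as source code.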

As is required for software developed without using a model or code generator, auto-generated software is required to be verified and validated to the level required to ensure its fitness for use in the intended application.  Recommendations include subjecting the model to the same level of scrutiny and review as source code, subjecting the auto-generated code to the same rigorous inspection, analysis, and test as hand-generated code, and including system engineers and software engineers in joint reviews to identify misunderstood or unclear requirements.

There are many considerations to be reviewed when deciding whether using auto-generated software is the right choice for a project, including determining what is necessary to ensure future support for the generated software.  

3.4 Embedded Software

NASA commonly uses embedded software applications written by/for NASA for engineering software solutions. Embedded software is software specific to a particular application as opposed to general-purpose software running on a desktop. Embedded software usually runs on custom computer hardware ("avionics"), often on a single chip.

Care must be taken when using vendor-supplied board support packages (BSPs) and hardware-specific software (drivers), typically supplied with off-the-shelf avionics systems. BSPs and drivers act as the software layer between the avionics hardware and the embedded software applications written by/for NASA. Most central processing unit (CPU) boards have BSPs provided by the board manufacturer or third parties working with the board manufacturer. Driver software is provided for serial ports, universal serial bus (USB) ports, interrupters, modems, printers, and many other hardware devices.

BSPs and drivers are hardware dependent, often developed by third parties on hardware/software development tools that may not be accessible years later. Risk mitigation should include hardware-specific software, such as BSPs, software drivers, etc.

Board manufacturers provide many BSPs and drivers as binary code only, which could be an issue if the supplier is not available and BSP/driver errors are found. It is recommended that a project using BSPs/drivers maintain a configuration-managed version of any BSPs with release dates and notes. Consult with avionics (hardware) engineers on the project to see what actions may be taken to manage the BSPs/drivers.
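A configuration-managed record of binary-only BSPs and drivers can be as simple as a manifest of file hashes, versions, and release notes, so a delivered binary can be re-verified later even if the supplier disappears. The file name, version, and notes below are hypothetical; the script creates its own stand-in binary for illustration.

```python
import hashlib
import json
from pathlib import Path

# Sketch of configuration-managing vendor BSP/driver binaries (paths and
# names are hypothetical): record each file's SHA-256, version, and
# release notes in a manifest so a binary-only deliverable can be
# re-verified against what was originally received.

def bsp_manifest_entry(path, version, notes):
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {"file": str(path), "version": version,
            "sha256": digest, "release_notes": notes}

# Example: hash a stand-in BSP image created locally for illustration.
Path("bsp_image.bin").write_bytes(b"vendor binary contents")
entry = bsp_manifest_entry("bsp_image.bin", "2.1", "adds CAN driver fix")
print(json.dumps(entry, indent=2))
```

The manifest itself would live under the project's normal configuration management alongside the binaries.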

Consideration should also be given to how BSP/driver software updates are to be handled, if and when they are made available, and how the project will learn that updates are available.

Vendor reports and user forums should be monitored from the time the hardware and associated software are purchased through a reasonable time after deployment. Developers should monitor suppliers or user forums for bugs, workarounds, security changes, and other modifications to the software that, if unknown, could derail a NASA project. Consider the following snippet from a user forum:


"Manufacturer Pt. No." motherboard embedded complex electronics contains malware.
Published: 2010-xx-xx

A "Manufacturer" support forum identifies "manufacturer's product" motherboards that contain harmful code. The embedded complex electronics for server management on some motherboards may contain malicious code. There is no impact on either new servers or non-Windows based servers. No further information is available regarding the malware, malware mitigation, the serial number of motherboards affected, or the source.

3.5 Open Source Software (OSS)

OSS can range in size from a portion of a software project to an entire software project. OSS software can include software tools (e.g., word processor or spreadsheet applications), simulations (e.g., aeronautical and rocket simulations), and modeling tools (e.g., dynamics/thermal/electrical modeling tools).

If you plan to use OSS products, be sure to complete the checklist tables under the Tools section of this guidance. The purpose of these tables is to ensure that the table entries are considered in your software life cycle decisions from software acquisition through software maintenance.

For OSS:

  1. The requirements to be met by the software component are identified.
  2. The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
  3. Proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights have been addressed. The OSS license must be reviewed and approved by NASA legal before use on any government system.
  4. Future support for the software product is planned and adequate for project needs.
  5. The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
  6. The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components.
  7. An OSS software origin analysis should be done on all open source software before use.
  8. OSS software is required to be scanned or assessed for security vulnerabilities before use.
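Items 7 and 8 above — an origin analysis and a security screen before use — might be sketched as a check of planned components against a list of known-vulnerable versions. In practice a project would consult a real scanner or vulnerability feed rather than a hand-kept list; the component names and advisory ID below are hypothetical.

```python
# Sketch of a pre-use OSS screen (all data is hypothetical; a real
# project would consult a scanner or a vulnerability feed). Each planned
# component/version pair is checked against known-vulnerable versions.

KNOWN_VULNERABLE = {
    ("fastparse", "1.2.0"): "hypothetical advisory HA-1",
}

PLANNED_OSS = [("fastparse", "1.2.0"), ("tinylog", "0.9.1")]

def screen(planned, known_bad):
    """Return planned components that must be remediated before use."""
    return [(name, ver, known_bad[(name, ver)])
            for name, ver in planned if (name, ver) in known_bad]

print(screen(PLANNED_OSS, KNOWN_VULNERABLE))
```

A hit means choosing a patched version, applying a mitigation, or rejecting the component before it enters the system.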



Example Questions for Assessing COTS, MOTS, OSS, and Reused Software for Use by or in a System

This is not a complete list. Each Center or project should add to, remove from, or alter the list as it applies to their tools. Not all questions apply to all tool types. This checklist aids the thought process when considering COTS, MOTS, OSS, and reused software or software tools. If the system has safety-critical components that could contribute to a hazard, either by providing a false or inaccurate output or through flawed algorithms, paths, execution timing, etc., consider using the example safety checklist below. Answer each question Yes (Y), No (N), Not Applicable (NA), or Unknown (?).


1.       What requirements does the intended OTS or OSS or reuse software fulfill for the system?

a.         Is it a standalone tool used to produce, develop, or verify software (or hardware) for a safety-critical system?

b.        Is this an embedded product?

2.      Why this COTS, MOTS, OSS, or reuse software product, why this vendor? 

a.       What is the COTS, MOTS, OSS, or reuse software pedigree?

b.       Is it a known and respected company?

c.        What does the market/user community say about it?

d.       Does the purchasing company/industry have a track record of using this product?

e.        What is the volatility of the product? Of the vendor?

f.        What agreements and services will the vendor provide?

g.       Is escrow of the COTS, MOTS, OSS, or reuse software a viable option?

h.       What documentation is available from the vendor?

i.         Are operators/user guides, installation procedures, etc., normally provided?

j.         Is additional requested documentation available? (perhaps at additional cost) Examples might include requirements, design, tests, problem reports, development and assurance processes, plans, etc.

3.         What training is provided or available?

4.         Does the vendor share known problems with their clients?

a.       Is there a user group?

b.       Is there a useful means to notify customers of problems (and any solutions/workarounds) found by the company or by another customer?

c.        Do they share their risk and or safety analyses of new and existing problems/errors?

5.        What plan/agreement is in place if the vendor or product ceases to exist?

a.        What if the vendor goes out of business?

b.        What if another company buys the vendor?

c.         What if the vendor stops supporting either the product or the version of the product used?

6.      Why not develop it in-house (the answer may or may not be obvious)?

7.       How are those requirements traced throughout the life of the product? 

8.       What performance measures are expected and needed?

9.        What requirements does it not meet that will need to be fulfilled with developed code?

10.       Will wrappers and or glueware be needed?

11.       Will other tools and support software be needed  (e.g., special tools for programming the cots, adaptors for specific interfaces, drivers specific to an operating system or output device, etc.)?

12.    Does it need to be programmed? Do one or more applications run on the COTS? 

13.    What features does the COTS, MOTS, OSS, or reuse software have that are not used?

a.       How are they “turned off,” if that is possible?

b.       Is a wrapper necessary to assure correct inputs to and/or outputs from the COTS, MOTS, OSS, or reuse software?

c.        Can the unwanted features be made “safe,” i.e., prevented from inadvertent functioning?

d.       Could operators/users/maintenance incorporate the unused features in the future?

               i.            What would be the implications?

              ii.            What would be the controls?

             iii.             What would be the safety ramifications?

14.    How can it be verified and validated functionally, for performance, and for stress/fault tolerance?

a.       Outside the intended system?

b.       With any programming or applications?

c.        With wrappers and or glueware?

d.       As part of the incorporating system? 

e.        What performance measures are to be used?

f.       What tests can be performed to stress the COTS, MOTS, OSS, or reuse software standalone or within the system?

g.        What fault-tolerant tests can be performed either standalone or within the system, fault injection?

15.    Will it be used in a safety-critical system?

a.       Do the functions performed by the COTS, MOTS, OSS, or reuse software meet the SW safety criticality criteria?

b.       Has a preliminary hazard analysis identified the COTS, MOTS, OSS, or reuse software functions as safety-critical?

c.      If so, what hazard contributions could it make?

            i.         Think functionally at first. What happens if the function it is to perform fails?

           ii.            Then work through common/generic faults and failure modes.

d.       How does it fail? List and test for all the failure modes.

e.       Will wrapper code be developed to protect the system from this COTS, MOTS, OSS, or reuse software?

f.        What potential problems could the unused portions of the COTS, MOTS, OSS, or reuse software cause? 

16.     For an Operating System:

a.       Is it a reduced or “safe” OS (e.g., Real-Time Operating System VxWorks, Integrity, the DO-178B ARINC 653 version sold only to the aviation, life-critical software market)?

b.       How are exceptions handled?

c.        What compilers are needed? What debuggers?

d.       What is it running on? Is that an approved, recommended platform?

e.        Are the processing time, scheduler, and switching time adequate?

f.        Will partitioning be needed? How well does the OS perform partitioning?

17.    Will it be used in a system that must be highly reliable?

a.       Is there reliability information from the vendor?

b.      How will its reliability be measured within the system it is operating in/contributing to?

c.        Is there company experience of this COTS, MOTS, OSS, or reuse software to be drawn from? Which version of the COTS, MOTS, OSS, or reuse software?

d.       What are the error/discrepancy metrics that can be collected? 

             i.            From the vendor?

            ii.            From use in developing and testing the COTS, MOTS, OSS, or reuse software both within and outside the system?

e. Do the system functional FMEAs include the functions to be performed by the COTS, MOTS, OSS, or reuse software?

            i.          Have known potential faults and failures been adequately analyzed and documented?

18.    What happens when versions of the COTS, MOTS, OSS, or reuse software change?

a.       Is there an upgrade plan?

              i.            During development?

             ii.            During operations/maintenance?

            iii.            What does the upgrade plan take into consideration?

b.     Is there a maintenance plan/agreement?

c.        Is there a support agreement for addressing any errors found? 

d.       Should the COTS, MOTS, OSS, or reuse software be put in escrow?

e.       Should there be an agreement to have the software revert to the company after a set number of years? 

f.       Should the company purchase the rights to the COTS, MOTS, OSS, or reuse software code and documentation?

19.     What is the licensing agreement, and what are the limitations?

a.       How many seats are there?

b.       Is vendor support included?

c.       Can licenses be transferred? 

d.      Does the licensing agreement meet project needs?

20.    For software development  and debugging tools:

a.       Which compilers and libraries have been chosen?

b.      Is there a reduced instruction set, and are there code standards to be followed?

c.       Is there more than one debug tool to be used? 

             i.           What are their false positive and false negative rates?

d.       Autocode generators: 

              i.            What are their limitations and their known defects?

             ii.            What are their settings and parameters? (Are they easy to use? Do they meet project needs?)

            iii.            Are results usable and repeatable?

            iv.            What are the support agreements?

             v.            Is there verification and validation support? How will they be verified and validated?

e.        Modeling tools

f.      Development environment tools

21.  For infrastructure tools (e.g., databases, configuration management and release tools, verification tools): 

a. Does it meet the requirements?

b. Can it grow and expand if needed, or has it been specified for only current needs?

c. Will the tool be verifying, creating (e.g., auto code generator), building, assembling, or burning in safety-critical software? 

d. How would the loss of data stored in the tool or accessed by the tool impact the project?

     i. Could safety data be lost, say, from the tool that stores hazard reports or problem reporting information?

e. Are there sufficient and frequent enough back-ups? How and where are those stored?

f. How much training is required to use the tools?

g. Are there restrictions on and levels of access?

     i. How are access levels managed?

h. Are any security features needed, either built-in or via access limitations?
Div
idtabs-4

4. Small Projects

This requirement applies to all projects regardless of size.

Div
idtabs-5

5. Resources

5.1 References

refstable



5.2 Tools


Include Page
Tools Table Statement
Tools Table Statement

Div
idtabs-6

6. Lessons Learned

6.1 NASA Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to the use of commercial, government, and legacy software:

  • MER Spirit Flash Memory Anomaly (2004).  Lesson Number 1483
    Swerefn
    refnum557
    :
    "Shortly after the commencement of science activities on Mars, the Mars Exploration Rover (MER) lost the ability to execute any task that requested memory from the flight computer. The cause was incorrect configuration parameters in two operating system software modules that control files' storage in system memory and flash memory. Seven recommendations cover enforcing design guidelines for COTS software, verifying assumptions about software behavior, maintaining a list of lower priority action items, testing flight software internal functions, creating a comprehensive suite of tests and automated analysis tools, providing downlinked data on system resources, and avoiding the problematic file system and complex directory structure."

Recommendations:

    • "Enforce the project-specific design guidelines for COTS software, as well as for NASA-developed software. Assure that the flight software development team reviews the basic logic and functions of commercial off-the-shelf (COTS) software, including the vendor's briefings and participation.
    • "Verify assumptions regarding the expected behavior of software modules. Do not use a module without detailed peer review and ensure that all design and test issues are addressed.
    • "Where the software development schedule forestalls completion of lower priority action items, maintain a list of incomplete items that require resolution before final configuration of the flight software.
    • "Place a high priority on completing tests to verify the execution of flight software internal functions.
    • "Early in the software development process, create a comprehensive suite of tests and automated analysis tools.
    • "Ensure that reporting flight computer-related resource usage is included.
    • "Ensure that the flight software downlinks data on system resources (such as the free system memory) so that the actual and expected behavior of the system can be compared.
    • "For future missions, implement a more robust version of the dosFsLib module, and/or use a different type of file system and a less complex directory structure."


  • Lessons Learned From Flights of Off the Shelf Aviation Navigation Units on the Space Shuttle, GPS. Lesson Number 1370
    Swerefn
    refnum551
    : "The Shuttle Program selected off-the-shelf GPS and EGI units that met the requirements of the original customers. It was assumed that off-the-shelf units with proven design and performance would reduce acquisition costs and require minimal adaptation and minimal testing. However, the time, budget, and resources needed to test and resolve firmware issues exceeded initial projections."


  • ADEOS-II NASA Ground Network (NGN) Development and Early Operations – Central/Standard Autonomous File Server (CSAFS/SAFS) Lessons Learned. Lesson Number 1346
    Swerefn
    refnum550
    : "The purpose of the Standard Autonomous File Server (SAFS) is to provide automated management of large data files without interfering with the assets involved in the acquisition of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over processing level to enhance reliability. The successful integration of COTS products into the SAFS system has been key to its becoming accepted as a NASA standard resource for file distribution, and leading to its nomination for NASA's Software of the Year Award in 1999."

    Lessons Learned:
    "Match COTS tools to project requirements. Deciding to use a COTS product as the basis of system software design is potentially risky. The potential benefits include quicker delivery, less cost, and more reliability in the final product. The following lessons were learned in the definition phase of the SAFS/CSAFS development.
    • "Use COTS products and re-use previously developed internal products.
    • "Create a prioritized list of desired COTS features.
    • "Talk with local experts having experience in similar areas.
    • "Conduct frequent peer and design reviews.
    • "Obtain demonstration [evaluation] versions of COTS products.
    • "Obtain customer references from vendors.
    • "Select a product appropriately sized for your application.
    • "Choose a product closely aligned with your project's requirements.
    • "Select a vendor whose size will permit a working relationship.
    • "Use vendor tutorials, documentation, and vendor contacts during the COTS evaluation period."

      "Test and prototype COTS products in the lab. The COTS evaluation prototyping and test phase allow problems to be identified as the system design matures. These problems can be mitigated (often with the help and cooperation of the COTS vendor) well before the field-testing phase, at which time it may be too costly or impossible to retrofit a solution. The following lessons were learned in the prototyping and test phase of the SAFS/CSAFS development:
    • "Prototype your system's hardware and software in a lab setting as similar to the field environment as possible.
      • "Simulate how the product will work on various customer platforms.
      • "Model the field operations.
      • "Develop in stages with ongoing integration and testing."
    • "Pass pertinent information on to your customers.
    • "Accommodate your customers, where possible, by building in alternative options.
    • "Don't approve all requests for additional options by customers or new projects that come online.
    • "Select the best COTS components for product performance, even if they are from multiple vendors.
    • "Consider the expansion capability of any COTS product.
    • "Determine if the vendor's support is adequate for your requirements.

      "Install, operate, and maintain the COTS field and lab components. The following lessons were learned in the installation and operation phase of the SAFS/CSAFS development:
    • "Personally perform on-site installations whenever possible.
    • "Have support/maintenance contracts for hardware and software through development, deployment, and first year of operation.
    • "Create visual representations of system interactions where possible.
    • "Obtain feedback from end-users.
    • "Maintain the prototype system after deployment.
    • "Select COTS products with the ability to do internal logging."
  • Lessons Learned Study Final Report for the Exploration Systems Mission Directorate, Langley Research Center; August 20, 2004. Lessons Learned Number 1838
    Swerefn
    refnum424
    : "There has been an increasing interest in utilizing commercially available hardware and software as portions of space flight systems and their supporting infrastructure. Experience has shown that this is a very satisfactory approach for some items and a major mistake for others. In general, COTS [products] should not be used as part of any critical systems [but see the recommendation later in this Lesson Learned] because of the generally lower level of engineering and product assurance used in their manufacture and test. In those situations where COTS [software] has been applied to flight systems, such as the laptop computers utilized as control interfaces on [International Space Station] (ISS), the cost of modifying and testing the hardware/software to meet flight requirements has far exceeded expectations, potentially defeating the reason for selecting COTS products in the first place. In other cases, such as the [Checkout Launch Control System] (CLCS) project at JSC, the cost of maintaining the commercial software had not been adequately analyzed and drove the project's recurring costs outside the acceptable range.

Recommendation: Ensure that candidate COTS products are thoroughly analyzed for technical deficiencies and life-cycle cost implications before levying them on the program.

      • COTS systems can reduce system costs, but only if all of their characteristics are considered beforehand and included in the planned application. (Standards)
         
      • COTS systems that look good on paper may not scale well to NASA's needs for legitimate reasons. These include sustaining engineering/update cycle/recertification costs, scaling effects, dependence on third-party services and products. We need to ensure that a life-cycle cost has been considered correctly. (Headquarters - CLCS)

6.2 Other Lessons Learned

  • The following information comes from the NASA Study on Flight Software Complexity listed in the reference section of this document
    Swerefn
    refnum040
    :

"In 2007, a relatively new organization in DoD (the Software Engineering and System Assurance Deputy Directorate) reported their findings on software issues based on approximately 40 program reviews in the preceding 2½ years (Baldwin 2007). They found several software systemic issues that were significant contributors to poor program execution." Among the seven listed were the following on Commercial Off The Shelf (COTS):

      • "Immature architectures, COTS integration, interoperability."

"Later, in partnership with the NDIA, they identified the seven top software issues that follow, drawn from a perspective of acquisition and oversight." Among the seven listed were the following on COTS:

      • "Inadequate attention is given to total life cycle issues for COTS/NDI impacts on life cycle cost and risk."

"In partnership with the NDIA, they made seven corresponding top software recommendations." Among the seven listed were the following on COTS:

      • "Improve and expand guidelines for addressing total life cycle COTS/NDI issues."

  • The following information is from Commercial Item Acquisition: Considerations and Lessons Learned July 14, 2000, Office of the Secretary of Defense
    Swerefn
    refnum426
    :

This document is designed to assist DoD acquisition of commercial items. According to the introductory cover letter, "it provides an overview of the considerations inherent in such acquisitions and summarizes lessons learned from a wide variety of programs." Although it is written with the DoD acquirer in mind, it can provide useful information and assistance as NASA moves down this increasingly significant path.

  • International Space Station Lessons Learned as Applied to Exploration, KSC, July 22, 2009
    Swerefn
    refnum425
    :
    (23-Lesson): Use Commercial Off-the-Shelf Products Where Possible.
    • An effective strategy in the ISS program was to simplify designs by utilizing commercial off-the-shelf (COTS) hardware and software products for non-safety, non-critical applications.
    • Application to Exploration: The use of COTS products should be encouraged whenever practical in exploration programs.


Div
idtabs-7

7. Software Assurance

Excerpt Include
SWE-027 - Use of Commercial, Government, and Legacy Software
SWE-027 - Use of Commercial, Government, and Legacy Software

7.1 Tasking for Software Assurance

  1. Confirm that the conditions listed in "a" through "f" are complete for any COTS, GOTS, MOTS, OSS, or reused software that is acquired or used.

7.2 Software Assurance Products

  • No products have been identified at this time.


    Note
    titleObjective Evidence
    • The requirements for any COTS, GOTS, MOTS, OSS, or reused software that is acquired or used.
    • COTS, GOTS, MOTS, OSS, or reused software documentation.
    • Test procedures and test reports that show that any COTS, GOTS, MOTS, OSS, or reused software is verified and validated to the same level required to accept a similar developed software component for its intended use.
    • Data showing a review of vendor-reported defects.
    Expand
    titleDefinition of objective evidence

    Include Page
    SITE:Definition of Objective Evidence
    SITE:Definition of Objective Evidence

7.3 Metrics

  •  # of Software Requirements (e.g., Project, Application, Subsystem, System)

7.4 Guidance

When a Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software component is acquired or used:

    1. The requirements to be met by the software component are identified: Assess if the software requirements specification has requirements identified for any feature, function, or capability provided by the COTS, GOTS, MOTS, OSS, or reused software component,  not just a requirement to use a COTS, GOTS, MOTS, OSS, or reused software component.  The requirements are needed to drive testing of the feature, function, or capability provided by the COTS, GOTS, MOTS, OSS, or reused software components.  Identify risks or issues if the software requirements specification does not have well-written requirements identified for the feature, function, or capability provided by the COTS, GOTS, MOTS, OSS, or reused software component.  This includes requirements for RTOS software.
    2. The software component includes documentation to fulfill its intended purpose (e.g., usage instructions): Assess whether the software component is delivered with documentation sufficient to fulfill its intended purpose, such as usage instructions.
    3. Proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights have been addressed: Verify that engineering has addressed these rights for all COTS, GOTS, MOTS, OSS, or reused software components used.
    4. Future support for the software product is planned and adequate for project needs: - Assess the long-term support plans for all COTS, GOTS, MOTS, OSS, or reused software components used, including version support update plans by the project.  Verify that you have access to all discrepancies identified by any user of the COTS, GOTS, MOTS, OSS, or reused software components.
    5. The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
    6. The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components. Verify that you have access to all discrepancies identified by any user of the COTS, GOTS, MOTS, OSS, or reused software components.
    7. Perform or review any risk analysis, trade studies, or heritage analyses that have been done to assess any potential impacts on safety, quality, security, or reliability.
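Condition 6 above (periodic assessment of vendor-reported defects) can be supported by a simple scripted comparison of the vendor's published defect list against the component versions the project actually uses. The component inventory, defect-feed structure, and entries below are hypothetical, for illustration only; a real project would parse the vendor's actual advisory feed:

```python
# Sketch of a periodic vendor-defect assessment.
# The component inventory and defect-feed entries are illustrative assumptions.

used_components = {"rtos-kernel": "2.3", "fs-lib": "1.1"}  # project inventory

vendor_defects = [  # would normally be parsed from a vendor advisory feed
    {"id": "VD-101", "component": "rtos-kernel", "affects": ["2.2", "2.3"],
     "summary": "scheduler latency spike under heavy interrupt load"},
    {"id": "VD-102", "component": "fs-lib", "affects": ["1.0"],
     "summary": "directory corruption on power loss"},
]

def assess(defects, inventory):
    """Return the vendor defects whose affected versions match a component in use."""
    return [d for d in defects if inventory.get(d["component"]) in d["affects"]]

for hit in assess(vendor_defects, used_components):
    print(f'{hit["id"]}: {hit["component"]} - {hit["summary"]}')
```

Defects that match a component in use would then be dispositioned through the project's normal problem-reporting process.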

If the COTS, MOTS, GOTS, or OSS is identified as safety-critical, evidence of the following needs to be evaluated by SA as part of the acceptance process:

  1. Software contributions to hazards, conditions or events have been identified.
  2. All controls for hazards, conditions, or events that require software implementation have been identified and properly implemented.
  3. All software requirements associated with safety and safety design elements have been identified and tracked.
  4. All software requirements associated with safety and safety design elements have been successfully implemented, validated, or waivers and deviations have been approved.
  5. All software requirements associated with safety and safety design elements have been properly verified, or waivers and deviations have been approved.
  6. All discrepancies in safety-critical software have been dispositioned with the SMA concurrence.
  7. All operational workarounds associated with discrepancies in safety-critical software have the concurrence of the acquiring SMA and operations.

Example COTS Safety Checklist

This is not a complete list. Each Center or project should add to, remove from, or alter the list as applies to its tools. Not all questions apply to all COTS, MOTS, GOTS, OSS, or reuse software or tool types. This checklist supports the thought process when considering whether software tools or programs (embedded or standalone) could contribute to a hazard, either by providing false or inaccurate output or by producing software with flawed algorithms, paths, execution timing, etc.  
1. Were any risk analyses or trade-off analyses performed?

a.    Where and how are the COTS, MOTS, GOTS, OSS, or reuse software planned to be used?

b.    What features will not be used, and how can they be prevented from inadvertent access?

c.     What changes to the rest of the system are needed to incorporate the COTS, MOTS, GOTS, OSS, or reuse software?

d.     Where are the results of the trade study documented, and are they being maintained?
2. How adequately does the SW Management Plan address the COTS, MOTS, GOTS, OSS, or reuse software in its system(s), or is there a standalone COTS, MOTS, GOTS, OSS, or reuse software management plan?

a.   Does the plan address how version changes and problem fixes to the COTS, MOTS, GOTS, OSS, or reuse software will be handled during development?

i.    What is the decision-making process for what upgrades will be made and when they will be made?

ii.   How does it address version control for the COTS, MOTS, GOTS, OSS, or reuse software and any wrappers or glueware?

iii.   If there are multiple COTS, MOTS, GOTS, OSS, or reuse software products that interact, how are upgrades coordinated?

iv.   What retesting and additional analyses will take place to assure smooth incorporation?

b.   How will COTS, MOTS, GOTS, OSS, or reuse software be included in the Data Acceptance package and version description documents?

c.   What Software Classification is assigned to the COTS, MOTS, GOTS, OSS, or reuse software or the SW System in which the COTS, MOTS, GOTS, OSS, or reuse software is used?

d.   Does SA agree with the Software Classification? With the Safety Assessment?

e.   Is the plan complete for the appropriate level(s) of SW Classification?

f.    How will risks be captured and managed?

g.   Does it cover the issues listed above?

h.   Does the plan make sense?
3. Other SW or system plans will need to be reviewed to assure that they address the COTS, MOTS, GOTS, OSS, or reuse software:

a.   Has the software maintenance plan been reviewed?

i.    How will COTS, MOTS, GOTS, OSS, or reuse software be upgraded or replaced once in operation?

ii.   What trigger points will be used to determine the need/benefits vs. potential instability caused by upgrades or replacement of COTS, MOTS, GOTS, OSS, or reuse software?

b.   Have retirement plans been reviewed?

c.   Have safety plans been reviewed?

d.   Have assurance plans (which address all that is listed here and possibly more) been reviewed?
4. A review of the requirements the COTS, MOTS, GOTS, OSS, or reuse software is supposed to be fulfilling: 

a.   Functional requirements,

b.   Interface requirements,

c.   Performance requirements

d.   Wrapper software requirements

e.   Has the functionality of the COTS, MOTS, GOTS, OSS, or reuse software that will not be used been identified, and how will it be prevented from being used?

f.    How are the requirements fulfilled by COTS, MOTS, GOTS, OSS, or reuse software being traced from beginning to delivery and beyond?

g.    Have realistic and complete operational and failure scenarios been written?
5. Participation in the design reviews of how the COTS, MOTS, GOTS, OSS, or reuse software is to be architected into the system; at a minimum, the PDR and CDR of the systems should address the COTS, MOTS, GOTS, OSS, or reuse software.

a. Does it meet the requirements placed on it?

b. Have the risk analyses been performed?

c. Have the safety analyses been performed and presented at the appropriate phase?
6. Safety: How will hazard analyses be run on the COTS, MOTS, GOTS, OSS, or reuse software or on systems containing COTS, MOTS, GOTS, OSS, or reuse software?

a. By its functions, or just as inputs and outputs through a wrapper?

b. If it is an OS, are the safety personnel aware of how to cover an OS in a hazard analysis (HA)?

c. Possible hazard causes and effects on safety-critical systems?

d. Its applications, glueware, and/or wrappers?

e. How will possible hazards that the COTS, MOTS, GOTS, OSS, or reuse software could trigger be mitigated?
7. Review of verification and validation plans, procedures, results,

a. How will it be tested?

       i. White box testing?

       ii. In situ testing?

       iii. Can it be tested standalone to ensure it meets the needs it is intended for?

b.   How are upgrades to the COTS, MOTS, GOTS, OSS, or reuse software verified and validated?

c.   What are the plans and procedures?

d.    Proof that it does not utilize undesired features?

e.   Are any safety controls and mitigations tested sufficiently?

f.    When is it best to participate in testing to assure the COTS, MOTS, GOTS, OSS, or reuse software is working properly and has been incorporated properly?
8. Reliability

a.   What are the performance measures?

i.    Expected?

ii.   Measured?

iii.   What are the issues with crashes, input, or memory overloads?

b.   How does it fail?

i.    What are the conditions that lead to failure or fault?

ii.    What are the operational limits?

iii.    What are the impacts of those failures or faults?

iv.    Are there any predictors that can measure and lead to the prevention of a failure?

v.    What protections need to be provided?

1. In the operations?

2. In the glueware/wrappers?

c.   How does the COTS, MOTS, GOTS, OSS, or reuse software provide notifications of faults and failures?

d.   How does it get reset?

e.   What measurements should be taken, and when, to understand the reliability of the COTS, MOTS, GOTS, OSS, or reuse software?

i. During integration and incorporation into the system (interface problems, trouble with support SW, etc.)?

ii. During systems checkout and testing?

iii. During operations?
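One way to make item 8.e concrete is to log fault times in each phase and compute a simple mean-time-between-failures (MTBF) figure. The operating hours and fault timestamps below are invented for illustration:

```python
# Sketch: estimating mean time between failures (MTBF) from logged fault times.
# The fault timestamps and total operating time are illustrative assumptions.

fault_times_hours = [120.0, 340.0, 610.0, 980.0]  # cumulative hours at each fault
total_hours = 1000.0  # total observed operating time

def mtbf(total_time, fault_times):
    """Mean time between failures = observed operating time / number of failures."""
    if not fault_times:
        return float("inf")  # no failures observed in the window
    return total_time / len(fault_times)

print(mtbf(total_hours, fault_times_hours))  # 250.0
```

Tracking this figure separately during integration, system checkout, and operations gives the phase-by-phase measurements the item asks for.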
9. Metrics should be determined to assure performance and quality within the system or as a standalone tool.
10.    Any associated developed software needs to carry the same SW Classification and safety assessment, and thus receive the appropriate software engineering and software assurance:

a.   glueware

b.   wrappers

c.   applications

d.   Interfaces

i.     human

ii.   other systems/software

iii.   Hardware including Programmable Logic Devices
11. Lessons learned of problems, changes, adaptations, usage, programmability, etc.:

a.    Including its applications, glueware, and/or wrappers?

b.    Provide information and evidence if the COTS, MOTS, GOTS, OSS, or reuse software product(s) worked, and provide documentation of both problems and solutions.