
Software Design Analysis

1. Introduction

The Software Design Analysis product focuses on analyzing the software design that has been developed from the requirements (software, system, and/or interface). This topic describes some of the methods and techniques Software Assurance and Software Safety personnel may use to evaluate the quality of the architecture and design elements that were developed.

The software design process begins with a good understanding of the requirements and the system architecture and system design. The architectural design begins with the development of a basic architecture and a high-level preliminary design. The architectural design is then expanded into a low-level detailed design. By the time the detailed design is complete, software engineering should be able to implement it into the code of the desired software system or application.

Since the design primarily guides the code implementation, it is important to ensure that the architecture and design are correct, safe, secure, complete, and understandable, and that they capture the intent of the requirements. The detailed design captures the low-level component-based approach to implementing the software requirements, including the requirements associated with fault management, security, and safety. When the detailed design is complete, the analysis of the requirements traceability documents should show the relationship between the software design components and the software requirements and provide evidence that all requirements are accounted for. The information in this topic is divided into several tabs as follows:

  • Tab 1 – Introduction
  • Tab 2 – Software Design Analysis Guidance – provides general guidance for doing software design analysis 
  • Tab 3 – Safety Analysis During Design – provides additional guidance when safety critical software is involved with analysis emphasis on safety features
  • Tab 4 - Analysis Reporting Content – provides guidance on the analysis report product content
  • Tab 5 – Resources for this topic

The following is a list of the applicable SWE requirements that relate to the generation of the software design analysis product:


NPR 7150.2 Requirement

NASA-STD-8739.8 Software Assurance and Software Safety Tasks


The project manager shall define and document the acceptance criteria for the software.

1. Confirm software acceptance criteria are defined and assess the criteria based on guidance in the NASA Software Engineering Handbook, NASA-HDBK-2203.


If a project has safety-critical software or mission-critical software, the project manager shall implement the following items in the software:
a. The software is initialized, at first start and restarts, to a known safe state.
b. The software safely transitions between all predefined known states.
c. Termination performed by software of functions is performed to a known safe state.
d. Operator overrides of software functions require at least two independent actions by an operator.
e. Software rejects commands received out of sequence when execution of those commands out of sequence can cause a hazard.
f. The software detects inadvertent memory modification and recovers to a known safe state.
g. The software performs integrity checks on inputs and outputs to/from the software system.
h. The software performs prerequisite checks prior to the execution of safety-critical software commands.
i. No single software event or action is allowed to initiate an identified hazard.
j. The software responds to an off-nominal condition within the time needed to prevent a hazardous event.
k. The software provides error handling.
l. The software can place the system into a safe state.

6. Analyze the software design to ensure:
a. Use of partitioning or isolation methods in the design and code,
b. That the design logically isolates the safety-critical design elements and data from those that are non-safety-critical.


The project manager shall transform the requirements for the software into a recorded software architecture.

1. Assess that the software architecture addresses or contains the software structure, qualities, interfaces, and external/internal components.

2. Analyze the software architecture to assess whether software safety and mission assurance requirements are met.


The project manager shall perform a software architecture review on the following categories of projects:
a. Category 1 Projects as defined in NPR 7120.5.
b. Category 2 Projects as defined in NPR 7120.5 that have Class A or Class B payload risk classification per NPR 8705.4.

1. Assess the results of or participate in software architecture review activities held by the project.


The project manager shall develop, record, and maintain a software design based on the software architectural design that describes the lower-level units so that they can be coded, compiled, and tested.

1. Assess the software design against the hardware and software requirements, and identify any gaps.

2. Assess the software design to verify that the design is consistent with the software architectural design concepts and that the software design describes the lower-level units to be coded, compiled, and tested.

3. Assess that the design does not introduce undesirable behaviors or unnecessary capabilities.

4. Confirm that the software design implements all of the required safety-critical functions and requirements.

5. Perform a software assurance design analysis.


The project manager shall track and evaluate changes to software products.

1. Analyze proposed software and hardware changes to software products for impacts, particularly to safety and security.


The project manager shall identify the software configuration items (e.g., software records, code, data, tools, models, scripts) and their versions to be controlled for the project.

2. Assess that the software safety-critical items are configuration managed, including hazard reports and safety analysis.


The project manager shall implement mandatory assessments of reported non-conformances for all COTS, GOTS, MOTS, OSS, or reused software components.

2. Assess the impact of non-conformances on the safety, quality, and reliability of the project software.
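Several of the safety-critical behaviors required above (initialization to a known safe state, rejection of out-of-sequence commands, prerequisite checks before safety-critical commands, and the ability to safe the system) can be sketched in a few lines. The states, commands, and class names below are hypothetical illustrations, not taken from any NASA system:

```python
from enum import Enum, auto

class State(Enum):
    SAFE = auto()
    ARMED = auto()
    FIRING = auto()

class FlightSoftware:
    """Hypothetical controller illustrating several SWE-134 behaviors."""

    # Allowed predecessor states for each commanded transition
    # (supports rejecting out-of-sequence commands).
    ALLOWED = {
        State.ARMED: {State.SAFE},
        State.FIRING: {State.ARMED},
        State.SAFE: {State.SAFE, State.ARMED, State.FIRING},
    }

    def __init__(self):
        # Initialize to a known safe state at start and restart.
        self.state = State.SAFE

    def command(self, target: State, prerequisites_met: bool = True) -> bool:
        # Prerequisite checks before safety-critical commands.
        if target is not State.SAFE and not prerequisites_met:
            return False
        # Reject commands received out of sequence.
        if self.state not in self.ALLOWED[target]:
            return False
        self.state = target
        return True

    def safe(self) -> None:
        # The software can always place the system into a safe state.
        self.state = State.SAFE
```

A real implementation would also log each rejected command and tie the safe-state transition to hazard controls; this sketch only shows the state-checking skeleton.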

2. Software Design Analysis Guidance


In software design, software requirements are transformed into the architectural design with a software architecture and a high-level preliminary design followed by the more specific detailed software design. The architecture establishes the interfaces, overall layout/structure, and data flow of the software. The high-level preliminary design identifies the specific individual components (e.g., files, functions, subroutines, classes, modules) for each software program/application along with a description of what that piece does. In addition, it should include items such as the inputs, outputs, units, and data types along with databases and interfaces (e.g., hardware, operator/user, software program/applications, system and subsystems). 

The detailed design takes the high-level components, files, functions, subroutines, classes, etc. and breaks them down to the point where they become pseudo-code with variable names and associated descriptions identified and the logic flow stubbed out. As project budgets tighten, more and more software organizations are embedding the detailed design in the source code and extracting it with tools like Javadoc and Doxygen. (Note: This is not an endorsement of these tools.) So, Software Assurance and Software Safety personnel should be aware they may receive the detailed design documentation in a less traditional manner. For small software systems, the architectural and detailed design may be combined.
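As a sketch of what embedded detailed design documentation can look like, the hypothetical function below carries its design description (inputs, outputs, units, logic, and an invented requirement trace tag) in a structured comment that documentation extractors such as Doxygen can process; every identifier and value here is illustrative:

```python
def compute_throttle(sensor_temp_c: float, setpoint_c: float) -> float:
    """Proportional heater throttle command.

    Embedded design notes (extractable by documentation tools):

    Inputs:  sensor_temp_c -- measured temperature, degrees C
             setpoint_c    -- commanded setpoint, degrees C
    Output:  throttle command in [0.0, 1.0] (dimensionless duty cycle)
    Logic:   error = setpoint - measured; command = Kp * error,
             clamped to the valid actuator range.
    Traces:  SRS-HTR-012 (hypothetical requirement identifier)
    """
    KP = 0.05  # proportional gain, 1/degC (illustrative value only)
    command = KP * (setpoint_c - sensor_temp_c)
    return max(0.0, min(1.0, command))  # clamp to actuator range
```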

Design analysis addresses both the software architectural design and the software detailed design. The objective of design analysis is to ensure that the design:

  • is a correct, accurate, and complete transformation of the software requirements that will meet the operational needs under nominal and off-nominal conditions,
  • is safe,
  • is secure with known weaknesses and vulnerabilities mitigated,
  • introduces no unintended features, and
  • does not result in unacceptable operational risk.

The design should also be created considering portability, performance, and maintainability so future changes can be made quickly without the need for significant redesign.

There are several design techniques described below that help with the analysis of the design. Each of these may be used by Software Assurance and Software Safety personnel to help ensure a more robust design. Additionally, these personnel should be aware of the Topic section – Software Design Principles – that addresses specific aspects of the design.

Tab 3 (Safety Design Analysis) contains a more extensive list of analysis techniques that may be used by the Software Safety personnel.

Software Assurance and Software Safety tasks in NASA-STD-8739.8 that relate to design analysis are found in SWE-052, SWE-058, SWE-060, SWE-087, SWE-134, and SWE-157.

2.1 Use of Checklists and Known Best Practices

As part of the design analysis, Software Assurance and Software Safety personnel review the design to ensure that good general design practices have been implemented. There are several checklists in this Handbook that list some of the design best practices. The use of the SADESIGN Checklist (see below) is important when evaluating the software design as it highlights many good general design practices. Another checklist that can be used for safety-critical software is found in this Handbook, under the Programming Checklists Topic: 6.1 - Design for Safety Checklist. The “Software Design Principles” tab under Topics provides information for specific design aspects that should be considered during the analysis for both safety critical software and non-safety critical software. Teams may decide to formulate some of this information into a checklist that is applicable to their project.


SADESIGN Checklist:

    1. Has the software design been developed at a low enough level for coding?
    2. Is the design complete and does it cover all the approved requirements?
    3. Have complex algorithms been correctly derived, do they provide the needed behavior under off-nominal and assumed conditions, and is the derivation approach known and understood well enough to support future maintenance?
    4. Does the design introduce any undesirable behaviors or any capabilities that are not in the requirements?
    5. Have all requirements sources been considered when developing the design (e.g., system requirements, interface requirements, databases, etc.)?
    6. Have the interfaces with COTS, MOTS, GOTS, and Open Source been designed (e.g., APIs, .dlls)?
    7. Have all internal and external software interfaces been designed for all (in-scope) interfaces with hardware, user, operator, software, and other systems and are they detailed enough to enable the development of software components that implement the interfaces?
    8. Are all safety features (e.g., mitigations, controls, barriers, must-work requirements, must-not-work requirements) included in the design?
    9. Does the design provide the dependability/reliability and fault tolerance required by the software, and is the design capable of controlling identified hazards?  Does the design create any hazardous conditions?
    10. Does the design adequately address the identified security requirements both for the software and security risks, including the integration with external components as well as information and data utilized, stored, and transmitted through the software?   
    11. Does the design prevent, control, or mitigate any identified security threats, weaknesses and vulnerabilities? Are any unmitigated weaknesses and vulnerabilities documented as risks and addressed as part of the software and software operations?
    12. Have operational scenarios been considered in the design (for example, use of multiple individual programs to obtain one particular result may not be operationally efficient or reasonable; transfers of data from one program to another should be electronic, etc.)?
    13. Have users/operators been consulted during design to identify any potential operational issues?
    14. Maintainability: Has maintainability been considered? Is the design modular? Is the design easily extensible? Is it designed to allow for the addition of new capabilities and functionality?
    15. Portability: Has portability been considered? Are environmental variables used? Can the software be moved to other environments quickly?
    16. Can additions and changes be made quickly?
    17. Is the design easy to understand?
    18. Is the design unnecessarily complicated?
    19. Is the design adequately documented for usability and maintainability?
    20. Does the design address error handling?
    21. Has software performance been considered during design? Has the software design been optimized for efficiency to reduce system load, run-time length/speed, etc.?
    22. Has the level of coupling (interactivity between modules) been kept to a minimum?
    23. Has software planned for reuse and OTS software in the system been examined to determine if it meets the requirements and performs appropriately within the required limits for this system? Has the software been evaluated for security vulnerabilities and weaknesses?
    24. Does this software introduce any undesirable capabilities or behaviors?
    25. Has the software design been peer reviewed?
    26. Are components referenced by more than one application, file, module, function, subroutine, or class stored in a common area such as a library, class, or package?

Some good general design practices to be considered are:

  • Begin by breaking the design into smaller chunks.
  • Keep the design simple.
  • Keep the design modular so it will be easier to test and maintain.
  • Keep boundaries, interfaces, and constraints in mind.
  • Strive for maximum cohesion and minimum coupling. (Cohesion groups together the things that make sense; coupling is the relative dependence between the modules)
  • Use abstraction to increase the reusability of modules. (Abstraction is the reduction of a body of data to a simplified representation of the whole.)
  • Understand how users will interact with the system.
  • Include error handling in the designs.
  • Don’t repeat code portions – if a code portion needs to be used repeatedly, put it into a function, package, or subroutine that can be called.
  • Prototype new approaches or designs for difficult requirements.
  • Peer review designs, particularly interfaces, data flows, and logic flows.
  • Use design documentation, pseudo code, process diagrams, and logic diagrams to aid in evaluating the design.

Additional guidance and some key design practices can be found in SWE-058, tab 7.
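Two of the practices above, high cohesion with low coupling and not repeating code by factoring shared logic into a callable function, can be sketched as follows; all names and calibration values are invented for illustration:

```python
# Low coupling: the conversion helper depends only on the values passed in,
# not on shared global state or the internals of other modules.
def scale_counts(raw_counts: int, scale: float, offset: float) -> float:
    """Convert raw sensor counts to engineering units (shared helper)."""
    return raw_counts * scale + offset

# High cohesion: each function does one related thing, built on the helper
# instead of repeating the conversion arithmetic (the "don't repeat code" rule).
def temperature_c(raw: int) -> float:
    return scale_counts(raw, scale=0.125, offset=-40.0)   # illustrative calibration

def bus_voltage(raw: int) -> float:
    return scale_counts(raw, scale=0.01, offset=0.0)      # illustrative calibration
```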

2.2 Use of peer reviews or inspections

Design items designated in the software management/development plans are peer reviewed or inspected. Some of the items to look for during these meetings are:

  1. Assess the software design against the hardware and identify any gaps.
  2. Assess the software design against the system requirements and design and identify any gaps.
  3. Confirm that the detailed design is consistent with the architectural design and describes the program’s or application’s components at a low enough level for coding.
  4. Confirm the design does not contain undesirable functionality.
  5. Confirm the safety-related requirements (e.g., SWE-134) have been taken into account for safety-critical software.
  6. Confirm the design addresses possible unauthorized access, vulnerabilities, and weaknesses.

2.3 Review of Traceability

Review the bi-directional tracing between requirements and design and ensure they are complete. As the project moves into implementation, the bi-directional tracing between design and code should also be checked.
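One way to mechanize this completeness check is sketched below; the requirement and design identifiers are invented, and a real project would pull these mappings from its requirements management tool:

```python
# Hypothetical trace data: requirement IDs and design element IDs are invented.
req_to_design = {
    "SRS-001": ["DES-CMD-1"],
    "SRS-002": ["DES-TLM-1", "DES-TLM-2"],
    "SRS-003": [],                      # forward gap: requirement with no design
}
design_to_req = {
    "DES-CMD-1": ["SRS-001"],
    "DES-TLM-1": ["SRS-002"],
    "DES-TLM-2": ["SRS-002"],
    "DES-EXTRA": [],                    # backward gap: design with no requirement
}

# Forward check: every requirement traces to at least one design element.
untraced_reqs = sorted(r for r, d in req_to_design.items() if not d)
# Backward check: every design element traces to at least one requirement.
orphan_design = sorted(d for d, r in design_to_req.items() if not r)

print("Requirements with no design element:", untraced_reqs)
print("Design elements with no requirement:", orphan_design)
```

Orphan design elements are a common signal of unintended functionality, which ties this check back to the "no undesirable capabilities" criteria above.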

2.4 Analysis by Software Architecture Review Board (SARB) - applies to NASA projects only

The Software Architecture Review Board (SARB) is a NASA-wide board that engages with flight projects in the formative stages of software architecture. The objectives of the SARB are to manage and/or reduce flight software complexity through better software architecture, improve mission software reliability, and save costs. NASA projects that meet certain criteria (for example, large projects, projects with safety-critical concerns, or projects destined for considerable reuse) may request that the SARB review and assess their architecture. For more guidance on the SARB, see Tabs 3 and 7 in SWE-143 - Software Architecture Review.

2.5 Problem/Issue Tracking System

Per SWE-088 – Task 2, all analysis non-conformances, findings, defects, issues, concerns, and observations are documented in a problem/issue tracking system and tracked to closure. These items are communicated to the software development personnel and possible solutions discussed. The level of risk associated with the finding/issue should be reflected in the priority given in the tracking system. The analysis performed by Software Assurance and Software Safety may be reported in one combined report, if desired.

3. Safety Design Analysis


3.1 Review Software Design Analysis

There are many considerations for analyzing the design with respect to safety. Most of the design analysis used for non-safety projects is still applicable to safety-critical software. So, to begin with, the Software Safety personnel should either review, or ensure that the Software Assurance personnel have reviewed, the set of items listed in Tab 2 - Software Design Analysis Guidance. The first of these is the SADESIGN checklist (previously in Topic 7.18). Another checklist that can be used for safety-critical software is found in this Handbook, under the Programming Checklists Topic: 6.1 - Design for Safety Checklist.

3.2 Design peer reviews or design walkthroughs

Design peer reviews or design walkthroughs are recommended for safety-critical components to identify design problems or other issues. One of the most important aspects of a software design for safety-critical software is to design for minimum risk. “Minimum risk” includes the hazard risk, the risk of software defects, the risk of human operator errors, and other types of risk such as programmatic, cost, and schedule risk. When possible, eliminate identified hazards and risks or reduce the associated risk through design. Some of the ways risk can be reduced through design are listed below. This list can be used by attendees of design peer reviews or walkthroughs to help evaluate the design with respect to safety and risk considerations.

   Safety Considerations during Design Peer Reviews/Walk-throughs:

    • Reduce the complexity of the software and interfaces.
    • Design for user safety rather than only user-friendliness.

    • Design for testability during development and integration.

    • Give more design “resources” (such as time, effort) to the higher risk aspects such as hazard controls.

    • Include separation of commands, functions, files, and ports.

    • Include design for Shutdown/Recovery/Safing.

    • Plan for monitoring and detection.

    • Isolate the components containing safety-critical requirements as much as possible.

    • Interfaces between safety-critical components should be designed for minimum interaction.

    • Document the positions and functions of safety critical components in the design hierarchy.

    • Document how each safety-critical component can be traced back to the original safety requirements and how the requirements are implemented.

    • Specify safety-related design and implementation constraints.

    • Document execution control, interrupt characteristics, initialization, synchronization, and control of the components. For high risk systems, interrupts should be avoided since they may interfere with software safety controls. Any interrupts used should be priority-based.

    • Specify any error detection or recovery schemes for safety-critical components.

    • Consider hazardous operations scenarios.

    • The design of safing and recovery actions should fully consider the real-world conditions and the corresponding time to criticality. Automatic safing is often required if the time to criticality is shorter than the realistic human operator response time, or if there is no human in the loop. This can be performed by either hardware or software or a combination depending on the best system design to achieve safing.

    • Select a strategy for handling faults and failures. Some of the techniques that can be used in fault management are below:

      • To prevent fault propagation (cascading of a software error from one component to another), safety-critical components must be fully independent of non-safety-critical components and must be able to detect an error and not pass it along.
      • Shadowing: A higher level process emulates lower level processes to predict expected performance and decides if failures have occurred in the lower processes. The higher level process implements appropriate redundancy switching when it detects a discrepancy.
      • Built-in Test: Fault/Failure Detection, Isolation and Recovery (FDIR) can be based on self-test (BIT) of lower tier processors where the lower level units test themselves and report their status to the higher processor. The higher processor switches out units reporting a failed or bad status.
      • Majority voting: Some redundancy schemes are based on majority voting. This technique is especially useful when the criteria for diagnosing failures are complicated (e.g., when an unsafe condition is defined by exceeding an analog value rather than simply a binary value). An odd number of parallel units is required to achieve majority voting.
      • Fault Containment Regions: Establish a Fault Containment Region (FCR) to prevent fault propagation, such as from non-critical software to safety-critical components, from one redundant software unit to another, or from one safety-critical component to another. Techniques such as firewalling or “come from” checks should be used to provide sufficient isolation of FCRs to prevent hazardous fault propagation. FCRs are best partitioned or firewalled by hardware. A typical method of obtaining independence between FCRs is to host them on different and independent hardware processors.
      • Redundant architecture: In a redundant architecture, there are two versions of the operational code that do not need to operate identically. The primary version is a high-performance version with all required functionality and performance requirements. If problems occur with this version, the other version (called a safety kernel) is given control. This version may have the same functionality, or it may have a more limited scope.
      • Recovery blocks: These use multiple software versions to find and recover from faults. Outputs from a block are checked against an acceptance test. If the test fails, another version computes the output and the process continues. Each successive version is typically simpler and more reliable but less efficient. If the last block fails, the program must determine some way to fail safe.
      • Self-checks: This is a type of dynamic fault detection. Self-checks can include replication (copies must be identical if the data is to be considered correct), reasonableness (is the data reasonable, based on other data in the system), and structural (are components manipulating complex data correctly).
    • Consider any potential issues with the use of COTS, Open Source, reused, or inherited code.
    • Select sampling rates with consideration for noise levels and expected variations of control system and physical parameters.
    • Identify test and/or verification methods for each safety-critical design feature.
    • Design for testability. Include ways that the internals of a component can be adequately tested to verify that they are working properly.
    • Consider maintainability in the design (For example: anticipate potential changes in the software, use a modular design, object-oriented design, uniform conventions, and naming conventions, use coding standards that support safety practices, use documentation standards, common tool sets)
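As one concrete illustration of the majority-voting technique listed above, a minimal 2-of-3 voter for redundant analog channels might look like the sketch below; the tolerance handling and fail-safe behavior are illustrative, not a flight-qualified design:

```python
def majority_vote(a: float, b: float, c: float, tolerance: float) -> float:
    """2-of-3 voter for redundant analog channels (illustrative sketch).

    Any pair of channels agreeing within `tolerance` forms a majority;
    the vote is their average. If no pair agrees, raise so the caller
    can transition to a safe state.
    """
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2.0
    raise ValueError("no two channels agree; caller should fail to a safe state")
```

Note the odd number of channels, as required for majority voting; with an even count, a split vote could leave the diagnosis ambiguous.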

A few more safety-specific design considerations are below:

  • Are the design and its safety features appropriately flowed down from the requirements and the evolving hazard analyses?
  • Has the design been reviewed to ensure that the software design’s implementation of safety controls or processes does not compromise other system safety features or the functionality of the software?
  • Have additional system hazards, causes, or contributions discovered during the software design analysis been documented in the required system safety documentation (e.g., Safety Data Package and/or Hazard Reports)?
  • Have Safety reviews approved the controls, mitigations, inhibits, and safety design features to be incorporated into the design?
  • Are all needed or identified safety conditions, constraints, parameters, trigger points, boundary conditions, environments, and other software circumstances for safe operation, in the appropriate modes and states, flowed down from the software requirements and incorporated into the design?
  • Does the design maintain the system in a safe state during all modes of operation, and can it transition to a safe state when and if necessary?
  • Are any partitioning or isolation methods used in the design effective at logically isolating the safety-critical design elements from those that are non-safety-critical? This is particularly important with the incorporation of COTS or the integration of legacy, heritage, and reuse software. Any software that can write or provide data to safety-critical software is also considered safety critical unless isolation is built in, in which case the isolation design itself is considered safety critical.
  • Is appropriate fault and/or failure tolerance incorporated into the software design as designated?
  • If heritage code is being used, is there a clear understanding of the design and constraints associated with any fault management in the heritage code? Are they appropriate for the current system being developed?

3.3 Analysis of Other Aspects of the Design

All of these design analyses would be useful to perform, but they require more time and effort so the safety team should choose those they feel would provide the most value, depending on the areas where risk is highest in the design. Some of the other available design analysis methods are below:

a. Acceptable Level of Safety: Once the design is fairly mature, a design safety analysis can be done to determine whether an acceptable level of safety will be attained by the designed system. This analysis involves analyzing the design of the safety components to ensure that all the safety requirements are specified correctly. The requirements may need to be updated once the design has determined exactly what safety features will be included in the system. Then review the design, looking for the places and conditions that could lead to unacceptable hazards. Consider the credible faults or failures that could occur and evaluate their effects on the designed system. Does the designed system produce the desired result with respect to the hazards?

b. Prototyping or simulating: Prototyping or simulating parts of the design may show where the software can fail. In addition, this can demonstrate whether the software can meet the constraints it might have, such as response time, or data conversion speed. This could also be used to provide the operator’s inputs on the user interface. If the prototypes show that a requirement cannot be met, the requirement must be modified as appropriate or the design may need to be revised.

c.  Independence Analysis: To perform this analysis, map the safety-critical functions to the software components, and then map the software components to the hardware hosts and FCRs. All the input and output of each safety-critical component should be inspected.  Consider global or shared variables, as well as the directly passed parameters.  Consider “side effects” that may be included when a component is run. 

d. Design Logic Analysis: The Design Logic Analysis (DLA) evaluates the equations, algorithms, and control logic of the software design. Logic analysis examines the safety-critical areas of a software component. A technique for identifying safety-critical areas is to examine each function performed by the software component. If it responds to, or has the potential to violate, one of the safety requirements, it should be considered critical and undergo logic analysis. A technique for performing logic analysis is to compare design descriptions and logic flows and note discrepancies. The most rigorous form of this analysis uses Formal Methods. Less formal DLA involves a human inspector reviewing a relatively small quantity of critical software products (e.g., PDL, prototype code) and manually tracing the logic. Safety-critical logic to be inspected can include failure detection and diagnosis, redundancy management, variable alarm limits, and command inhibit logical preconditions.

e. Design Data Analysis: The Design Data Analysis evaluates the description and intended use of each data item in the software design. Data analysis ensures that the structure and intended use of data will not violate a safety requirement.  A technique used in performing design data analysis is to compare the description to the use of each data item in the design logic.                       

Interrupts and their effect on data must receive special attention in safety-critical areas.  Analysis should verify that interrupts and interrupt handling routines do not alter critical data items used by other routines.

The integrity of each data item should be evaluated with respect to its environment and host.  Shared memory and dynamic memory allocation can affect data integrity.  Data items should also be protected from being overwritten by unauthorized applications.

f. Design Interface Analysis: The Design Interface Analysis verifies the proper design of a software component's interfaces with other components of the system. The interfaces can be with other software components, with hardware, or with human operators.  This analysis will verify that the software component's interfaces, especially the control and data linkages, have been properly designed.  Interface requirements specifications (which may be part of the requirements or design documents, or a separate document) are the sources against which the interfaces are evaluated.

Interface characteristics to be addressed should include inter-process communication methods, data encoding, error checking, and synchronization.

The analysis should consider the validity and effectiveness of checksums, CRCs, and error correcting code.  The sophistication of error checking or correction that is implemented should be appropriate for the predicted bit error rate of the interface.  An overall system error rate should be defined and budgeted to each interface.
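As a hedged illustration of the error-checking concern above, the following sketch appends a CRC-32 to a message and verifies it on receipt. The `zlib.crc32` call is Python standard library, but the frame layout (payload plus a 4-byte big-endian trailer) is invented for the example:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 trailer (4 bytes, big-endian) to the payload."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check(framed: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = framed[:-4], framed[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

msg = frame(b"SET_HEATER ON")
print(check(msg))                              # True: intact frame
corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip one bit
print(check(corrupted))                        # False: CRC detects the error
```

Interface analysis would then ask whether a 32-bit CRC is proportionate to the interface's predicted bit error rate and its share of the budgeted system error rate.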

g. Design Traceability Analysis: This analysis ensures that each safety-critical software requirement is included in the design. Tracing the safety requirements throughout the design (and eventually into the source code and test cases) is vital to making sure that no requirements are lost, that safety is “designed in”, that extra care is taken during the coding phase, and that all safety requirements are tested. A safety requirement traceability matrix is one way to implement this analysis.   
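One minimal way to mechanize such a matrix (with invented requirement IDs and design-element names, not from any real project) is a mapping from safety requirements to design elements, plus a check for requirements that trace to nothing:

```python
# Hypothetical safety requirement traceability matrix: requirement ID ->
# design elements that implement it. An empty list exposes an untraced
# requirement, which is exactly what this analysis looks for.
trace_matrix = {
    "SR-001": ["FaultMonitor.check_limits"],
    "SR-002": ["CmdValidator.inhibit_gate", "CmdValidator.arm_check"],
    "SR-003": [],    # no design element yet -- a finding
}

untraced = sorted(req for req, elems in trace_matrix.items() if not elems)
print(untraced)   # ['SR-003']
```

The same structure extends forward through the lifecycle by adding columns for source code units and test cases as they are developed.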

3.4 Documenting and Reporting of Results of the Design Analysis:

Any design analysis done in the interim between status reports or prior to milestone reviews should be reported to management and the rest of the team. When a project has safety-critical software, any analysis done by Software Assurance should be shared with the Software Safety personnel. The results reporting should include:

  • Identification of what was analyzed: Mission/Project/Application
  • Person or group doing analysis
  • Period/Timeframe/Phase analysis performed during
  • Documents used in analysis (e.g., requirements version, etc.)
  • Description or identification of analysis techniques used
  • Overall assessment of design, based on analysis
  • Major findings and associated risk
  • Current status of findings: open/closed; projection for closure timeframe

3.5 Problem/Issue Tracking System

Per SWE-088 – Task 2, all analysis non-conformances, findings, defects, issues, concerns, and observations are documented in a problem/issue tracking system and tracked to closure. These items are communicated to the software development personnel, and possible solutions are discussed. The level of risk associated with the finding/issue should be reflected in the priority given in the tracking system. The analysis performed by Software Assurance and Software Safety may be reported in one combined report, if desired.

4. Design Analysis Report Content
When the design is analyzed, the Software Design Analysis work product is generated to document the results. It should include a detailed report of the design analysis results. Analysis results should also be reported in a high-level summary and conveyed as part of weekly or monthly SA Status Reports. The high-level summary should provide an overall evaluation of the analysis, any issues/concerns, and any associated risks. If a time-critical issue is uncovered, it should be reported to management immediately so that the affected organization may begin addressing it at once.

When a project has safety-critical software, analysis results should be shared with the Software Safety personnel. The results of analysis conducted by Software Assurance personnel and those done by Software Safety personnel may be combined into one analysis report, if desired.

4.1 High-Level Analysis Content for SA Status Report

Any design analysis performed since the last SA Status Report or project management meeting should be reported to project management and the rest of the Software Assurance team. When a project has safety-critical software, any analysis done by Software Assurance should be shared with the Software Safety personnel.

When reporting the results of an analysis in a SA Status Report, the following defines the minimum recommended contents:

  • Identification of what was analyzed: Mission/Project/Application
  • Period/Timeframe/Phase analysis performed during
  • Summary of analysis techniques used
  • Overall assessment of design, based on analysis
  • Major findings and associated risk
  • Current status of findings: open/closed; projection for closure timeframe

4.2 Detailed Content for Analysis Product:

The detailed results of all software design analysis activities are captured in the Software Design Analysis product. This document is placed under configuration management and delivered to the project management team as the Software Assurance record for the activity. When a project has safety-critical software, this product should be shared with the Software Safety personnel.

When reporting the detailed results of the software design analysis, the following defines the minimum recommended content:

  • Identification of what was analyzed: Mission/Project/Application
  • Person(s) or group performing the analysis
  • Period/Timeframe/Phase analysis performed
  • Documents used in analysis (e.g., versions of the system and software requirements, interfaces document, architectural and detailed design)
  • Description or identification of the analysis techniques used, including an evaluation of their effectiveness
  • Overall assessment of design, based on analysis results
  • Major findings and associated risk – The detailed reporting should include where the finding, issue, or concern was discovered and an assessment of the amount of risk involved with the finding.
  • Minor findings
  • Current status of findings: open/closed; projection for closure timeframe
    • Include counts for those discovered by SA and Software Safety
    • Include overall counts from the Project’s problem/issue tracking system.

5. Resources

5.1 References

No references have currently been identified for this Topic. If you wish to suggest a reference, please leave a comment below.

5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.