- 1. Introduction
- 2. SW Requirements Analysis Techniques
- 3. SW Safety Requirements Analysis
- 4. Requirements Analysis Report
- 5. References
The primary purpose of the Software Requirements Analysis is to ensure that the documented set of requirements is the best set that can be defined. The documented requirements guide the software developers in building a correct, robust software system that meets or exceeds all the operational needs of the system. Thus, during the software requirements analysis phase, all aspects of the requirements are examined carefully to see where any improvements might be needed before design and implementation begin.
Many characteristics of the software requirements are considered: requirements should be complete, correct, understandable, unambiguous, testable, traceable to the higher-level requirements, consistent, able to meet the user’s expectations, and detailed enough to include boundary conditions, constraints, desired controls, etc.
There are many tools and techniques that can be used to do a thorough requirements analysis. The requirements in NPR 7150.2 and NASA-STD-8739.8 specify some methods that are required; those are listed in tab 2, Software Requirements Analysis Techniques. Many other techniques are available and help locate other types of requirements problems. Both the software team and the software assurance and safety personnel should be involved in the requirements analysis activities. Each team should perform the analysis activities required by NPR 7150.2 and NASA-STD-8739.8 and then choose as many of the other listed techniques as would help the team do a more thorough job of analyzing the requirements. Note that the requirements analysis portion of the Software Safety Analysis is an important part of the requirements analysis for safety-critical systems and, as such, is included in this topic (see tab 3). The results of the techniques used are documented in a Software Requirements Analysis Report (including the Safety Requirements Analysis).
For safety-critical software, the requirements portion of the Safety Analysis should be done in conjunction with the Software Requirements Analysis. See tab 3 for items to be included.
Tab 4 of this topic has some guidance on reporting the results of the requirements analysis performed.
2. SW Requirements Analysis Techniques
This section lists several different techniques that can be used by either software assurance or safety personnel to aid in analyzing the software requirements. The few techniques that are required for software assurance or safety personnel are noted below. It is recommended that at least one other method be chosen for inclusion in the requirements analysis. Consider the areas where the project requirements seem weak or where most of the requirements issues have been found previously and tailor the approach. This tab contains some checklists and guidance for requirements analysis. Other guidance to consider while analyzing requirements can be found in this SWE Handbook in tabs 3 and 7 of the following requirements: SWE-052, SWE-134, SWE-050, SWE-051, and SWE-184.
NASA Missions go through a logical decomposition in defining their requirements. Requirements analysis addresses a system’s software requirements including analysis of the functional and performance requirements, hardware requirements, interfaces external to the software, and requirements for qualification, quality, safety, security, dependability, human interfaces, data definitions, user requirements, installation, acceptance, user operation, and user maintenance.
- Requirements and/or Operations Concepts Walkthroughs - this technique provides an understanding of the system
- The following roles need a good understanding of the requirements and should attend/participate in the walkthroughs: Systems Engineers, Software Developers, Software Testers (including IV&V), Software Assurance personnel, Software and System Safety personnel, Operations personnel, and users.
- Generally, these types of walkthroughs give the participants a good view of the expected flows through the system to perform the necessary operations as well as the data flow that is needed for each operation. This can help determine whether all the correct requirements are in place to support all the operational scenarios as well as whether the necessary data has been defined. These walkthroughs can also provide a good background understanding of the system’s operations that will help identify potential system hazards.
- Walkthroughs also provide an opportunity for open discussion of the requirements. These in-depth discussions may lead to the identification of additional requirements as the requirements and their intent are better understood.
- Peer Reviews or Formal Inspections
- Peer Reviews or Formal Inspections can be used to focus on smaller areas of concern or to look at potential problem areas of the requirements.
- For example, a peer review could focus on just correctness or consistency:
Requirements are considered correct if they "respond properly to situations" 001 and are appropriate to meet the objectives of higher-level requirements. A method for determining correctness is to compare the requirements set against operational scenarios developed for the project.
Requirements are consistent if they do not conflict with each other within the same requirements set and if they do not conflict with system (or higher-level) requirements. It is helpful to have at least one person read through the entire set of requirements to confirm the use of consistent terms/terminology throughout.
Peer reviews or formal inspections are particularly important for the areas where the requirements seem the weakest or where the organization typically finds the most problems with requirements.
3. Checklists: Several checklists can be used to help analyze requirements. It is often useful to consider using one or more of these to supplement other analysis methods.
The checklist SAANALYSIS, shown below (previously located in 7.18), is a good general checklist that covers many areas to be considered in your analysis:
When evaluating the software requirements, consider the list of items below:
- Is the approach to requirements decomposition reasonable, appropriate, and consistent?
- Are the system’s software requirements both individually and in aggregate of high quality (clear, concise, complete, consistent, feasible, implementation independent, necessary, singular, traceable, accurate, unambiguous, and verifiable)?
Clear and concise. The requirement ensures that statements can only be interpreted unambiguously: “The terms and syntax used must be simple, clear and exact.” Weak terms, synonyms, and unclear sentence structure lead to misunderstandings and work against a clear, concise requirement.
Complete. The requirement describes adequately “the capability and characteristics to meet the stakeholder’s needs”. Further explanation and enhancement of the requirement are not necessary.
Consistent. The requirement has no conflicts. Defined terms are used consistently throughout the requirement.
Feasible. The requirement can be implemented technically and does not need further advanced technologies. The system constraints are considered regarding legal, cost, and schedule aspects.
Implementation independent. The requirement is specified independently from the implementation: “The requirement states what is required, not how the requirement should be met”.
Necessary. The requirement contains relevant information and is not deprecated.
Singular. A requirement cannot be divided into further requirements. It includes one single statement.
Traceable. “The requirement is upwards [and downwards] traceable”. Every requirement at each development stage can be traced to a requirement either to the current or to the previous and subsequent development stage. The requirement considers the dependency and possible conflicts among software.
Accurate. Each requirement must accurately describe the functionality to be built.
Unambiguous. A requirements document is unambiguous if and only if every requirement stated therein has only one interpretation.
Verifiable. The requirement necessitates the verification of the statement by using the standard methods of inspection, analysis, demonstration, or test.
- Will requirements adequately meet the needs of the system and expectations of its customers and users?
- Do requirements consider the operational environment under nominal and off-nominal conditions? Are the requirements complete enough to avoid the introduction of unintended features? Specifically:
- Do the requirements specify what the system is supposed to do?
- Do requirements guard against what the system is not supposed to do?
- Do the requirements describe how the software responds under adverse conditions?
- Is this requirement necessary?
- Are the requirements understandable?
- Are the requirements organized in a manner such that additions and changes can be made easily?
- Are the requirements unnecessarily complicated?
- Has system performance been captured as part of the requirements?
- Are the system boundaries (or perhaps operational environment) well defined?
- Is a requirement realistic given the current technology?
- Is the requirement singular in nature, or could it be broken down into several requirements? (Consider the grammar, not whether it can be decomposed further.)
- Within each requirement level, are requirements at an appropriate and consistent level of abstraction?
- In the traceability, are the parent requirements represented by appropriate child requirements?
- Do the parent requirements include outside sources such as:
- Hardware specifications
- Computer/Processor/Programmable Logic Device specifications
- Hardware interfaces
- Operating system requirements and board support packages
- Data/File definitions and interfaces
- Communication interfaces, including bus communications
- Software interfaces
- Derived from Domain Analysis
- Fault Detection, Isolation and Recovery requirements
- Commercial Software interfaces and functional requirements
- Software Security Requirements
- User Interface Requirements
- Legacy or Reuse software requirements
- Derived from Operational Analysis
- Prototyping activities
- Software Test Requirements
- Software Fault Management Requirements
- Hazard Analysis
- Does the Software Requirements Specification contain the following information:
- System overview.
- CSCI requirements:
- Functional requirements.
- Required states and modes.
- External interface requirements.
- Internal interface requirements.
- Internal data requirements.
- Adaptation requirements (data used to adapt a program to a given installation site or given conditions in its operational environment).
- Safety requirements.
- Performance and timing requirements.
- Security and privacy requirements.
- Environment requirements.
- Computer resource requirements:
- Computer hardware resource requirements, including utilization requirements.
- Computer software requirements.
- Computer communications requirements.
- Software quality characteristics.
- Design and implementation constraints.
- Personnel-related requirements.
- Training-related requirements.
- Logistics-related requirements.
- Precedence and criticality of requirements.
- FDIR requirements for system, hardware, and software failures
- Software State transitions, state diagrams
- Assumptions for design and operations are documented
- Qualification provisions (e.g., demonstration, test, analysis, inspection).
- Bidirectional requirements traceability.
- Requirements partitioning for phased delivery.
- Testing requirements that drive software design decisions (e.g., special system-level timing requirements/checkpoint restart).
- Supporting requirements rationale.
- Is there bidirectional traceability between parent requirements, requirements, and preliminary design components?
- Do the detailed software requirements trace to a reasonable number of parent requirements? (e.g., does the ratio between detailed requirements and parent requirements look reasonable, or do all of the detailed software requirements trace to too few parent requirements, indicating inadequate system requirement definitions?)
- Are trace links of high quality (e.g., avoidance of widows and orphans, circular traces, traces within a requirements level, etc.)?
- Have high-risk behaviors or functions been identified, and does SA agree with the identified behaviors? This should result in a list of critical activities to be performed by software and analyzed further by software assurance.
- For critical activities, are associated requirements correct and complete against SA understanding of the system behavior? Note: consider additional analysis rigor to address critical activities.
- Are interface requirements with the hardware, users, operators, or other systems adequate to meet the needs of the system concerning expectations of its customer and users, the operational environment, safety and fault tolerance, and both functional and non-functional perspectives?
- Has a fault tolerance strategy for fault detection, identification, and recovery (FDIR) from faults been provided, and is it reasonable?
- Is the role of software as part of the FDIR understood?
- Are software-related activities associated with the fault tolerance strategy for fault detection and identification and recovery captured as requirements?
- Is human safety addressed through the requirements?
- Have hazards and hazard causes been adequately identified, with associated software detection and mitigations captured as requirements?
- Are must-work and must-not-work requirements understood and specified?
- Have requirements addressed the security threats and risks identified within the system concept specifications and the system security concept of operations (e.g., System Security Plan)
- Do requirements define appropriate security controls to the system and software?
- Can the requirement(s) be efficiently and practically tested?
- Do the requirements address the configuration of, and any associated constraints, associated with COTS, GOTS, MOTS, and Open Source software?
- Do the requirements appropriately address operational constraints?
- Does the requirement conflict with domain constraints, system constraints, policies, or regulations (local and governmental)?
- Have users/operators been consulted during requirements development to identify any potential operational issues?
- Have the software requirements been peer-reviewed?
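Some of the textual checks in the list above (weak terms, ambiguity) can be pre-screened automatically before a peer review. The sketch below is illustrative only: the weak-term list is not an approved set and the requirement IDs and statements are invented for the example.

```python
import re

# Illustrative "weak" words that often signal ambiguous requirements.
# This is NOT an approved list; tailor it to the project.
WEAK_TERMS = ["adequate", "appropriate", "as needed", "robust",
              "timely", "user-friendly"]

def flag_weak_terms(req_id, text):
    """Return (req_id, term) pairs for each weak term found in one statement."""
    lowered = text.lower()
    return [(req_id, t) for t in WEAK_TERMS
            if re.search(r"\b" + re.escape(t) + r"\b", lowered)]

# Hypothetical requirement statements.
reqs = {
    "SRS-001": "The software shall respond to operator commands within 50 ms.",
    "SRS-002": "The software shall provide an appropriate, user-friendly display.",
}
findings = [f for rid, txt in sorted(reqs.items())
            for f in flag_weak_terms(rid, txt)]
```

A pre-screen like this only flags candidates; each hit still needs human judgment during the peer review or inspection.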
4. Traceability
For the basic requirements on traceability, see SWE-052 in NASA-STD-8739.8. Software Assurance should confirm that the bidirectional trace is complete and includes traces for all the levels specified in SWE-052.
The requirements traceability matrix will show whether all of the system-level and higher-level requirements assigned to software have been addressed and broken down into software requirements that can be coded. It will also show whether any software requirements do not have parents (and may not be necessary). Traceability is important for the full life cycle since it shows where the requirements were allocated and are being designed, implemented, and tested. Traceability is particularly important when changes need to be addressed throughout the system. For example, if a requirement changes while the code is being generated, the traceability will show where the requirement affects the design and the code. Similarly, the traceability between the requirements and the tests shows whether all the requirements in the system are being tested, and the traceability between the system hazards and the requirements helps determine whether all the system hazards are being addressed.
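The widow/orphan checks described above can be mechanized once the trace matrix exists. A minimal sketch, assuming the trace data has been exported as parent-to-child mappings; all requirement IDs here are hypothetical:

```python
# Hypothetical trace export: parent requirement -> child software requirements,
# and child requirement -> its parent (None means no parent recorded).
parents = {"SYS-10": ["SWR-1", "SWR-2"], "SYS-11": []}
children = {"SWR-1": "SYS-10", "SWR-2": "SYS-10", "SWR-3": None}

# A parent with no children ("widow") may not have been allocated to software.
widows = sorted(p for p, kids in parents.items() if not kids)

# A child with no parent ("orphan") may be unnecessary or mis-traced.
orphans = sorted(c for c, parent in children.items() if parent is None)
```

Running the same check against the requirements-to-test trace identifies requirements that are not yet covered by any test.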
5. Verify Accuracy of Mathematical Specifications
While the requirements analysis is in process, the algorithms and mathematical specifications need to be verified to ensure that, for the inputs to the system, the specifications and algorithms produce correct values. Ensure that correct units have been specified for all components of the specifications/algorithms.
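One lightweight way to check units in a specification is simple dimensional bookkeeping. The sketch below hand-rolls dimensions as (length, time, mass) exponent tuples; a real project might use a dedicated units library, and the formulas checked here are illustrative:

```python
# Dimensions as (length, time, mass) exponents:
# meters = (1,0,0), seconds = (0,1,0), kilograms = (0,0,1).
M, S, KG = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def mul(a, b):
    # Multiplying quantities adds dimension exponents.
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # Dividing quantities subtracts dimension exponents.
    return tuple(x - y for x, y in zip(a, b))

# Spec line: velocity = distance / time  ->  expect m/s.
velocity = div(M, S)

# Spec line: force = mass * velocity / time  ->  expect kg*m/s^2.
force = div(mul(KG, velocity), S)
```

Comparing the computed dimension of each specified formula against the expected dimension of its output catches unit mistakes before they reach design and implementation.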
Many other techniques can be used in requirements analysis to assist in getting the best requirements set possible. Some others are mentioned below:
6. Modeling of inputs and outputs; Use of Data Flow Diagrams, Control Flow Diagrams, Scenario-Based Modeling, Behavioral-Based Modeling-TBS
7. Use Cases, User Stories (Typically used in Agile)
i. To analyze the requirements in an Agile software development environment, it is important to understand the relationship between user stories and requirements and where use cases fit in.
ii. Requirements have typically been used for NASA system developments.
Requirements describe the features of the system being built and convey the user’s expectations. They tend to be very specific and go into detail on how the software should work. Traditionally, the requirements are written by the product manager, the systems engineer, or the technical lead. Projects using many of the traditional development methodologies use the written requirements and go directly into the design phase after their requirements analysis. In Agile developments where they are provided with standard NASA requirements, the Product Owner typically breaks the requirements into agile user stories and/or use cases.
iii. Agile User Stories:
User Stories are typically used with Agile Methodologies.
The Agile Alliance describes a user story as work to be done divided into functional increments.
A more simplified definition can be attributed to Atlassian: “A user story is an informal, general explanation of a software feature written from the perspective of the end-user. Its purpose is to articulate how a software feature will provide value to the customer.”
User stories are often used to plan the work in an Agile software development environment. They usually consist of one or two sentences in a specific structure to identify the user, what they want, and why. User stories are all about providing value to the customer. It’s what the user wants (the “results”).
iv. Agile Use Cases
Use cases are more detailed than user stories and focus on exactly how the software will work. Use cases are all about the “behavior” that is built into the software to meet the customer/user’s needs.
When Agile user stories/use cases are used, analysis is still required but may take on a different flavor. User stories and use cases give context to the requirements. Some important considerations:
Ensure the User Stories are well formulated. To convey the appropriate information, User Stories are commonly in the form of:
- As a <role>,
- I want <requirement>,
- so that <rationale/goal>.
Ensure that the set of user stories/use cases capture all of the information that may exist in the more traditional forms of requirements.
Consider how the user stories can be tested and ensure that all the required capabilities can be tested through the user stories.
Perform bidirectional traces between the traditional requirements and the user stories/use cases. Ensure that all safety-critical hazard controls trace to user stories and use cases.
Review Appendix A in NASA-STD-8739.8 to ensure that all applicable software hazard causes have been considered for inclusion in the user stories and use cases.
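The "As a <role>, I want <requirement>, so that <rationale/goal>" structure above can also be checked mechanically before deeper analysis. A minimal sketch; the story texts are invented for illustration:

```python
import re

# Canonical user-story template: "As a <role>, I want <want>, so that <goal>."
STORY = re.compile(
    r"^As an? (?P<role>.+?), I want (?P<want>.+?),? so that (?P<goal>.+)$",
    re.IGNORECASE,
)

def parse_story(text):
    """Return the story's parts if it is well formed, otherwise None."""
    m = STORY.match(text.strip())
    return m.groupdict() if m else None

good = parse_story("As an operator, I want to abort a burn, so that the crew stays safe.")
bad = parse_story("Add an abort button.")
```

Stories that fail to parse are not necessarily wrong, but they are candidates for rewriting so that the role, requirement, and rationale are all explicit.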
8. Use of a Requirements Analysis Tool, such as Innoslate, or requirements modeling tools such as UML
Many tools can be used to check some of the attributes being reviewed during the requirements analysis activities.
3. SW Safety Requirements Analysis
As mentioned previously, there are requirements analysis activities that apply particularly to systems with safety-critical software. The safety team needs to focus on safety considerations while the requirements analysis is being done and ensure that these safety-critical considerations have been addressed. The safety requirements analysis portions should be done in conjunction with the software requirements analysis, and the results combined when reporting on the analysis results.
a. Ensure that the requirements analysis discussed in Topic 8.9 - Software Safety Analysis has been performed.
b. Below is a checklist of items that should be considered in addition to the other requirements analysis techniques during the safety requirements analysis:
- Has all the safety-critical software been identified?
- Have agreements been reached on criticality designations among the project, Software Assurance personnel, and IV&V personnel?
- Is the bidirectional trace complete between the software requirements and the software-related system hazards, including hazard controls, hazard mitigations, hazardous conditions, and hazardous events? (Required by Task 1, SWE-052)
- Has there been an analysis to determine that all necessary safety requirements have been flowed down to the software requirements?
- Have all system hazards been reviewed to identify software contributions, mitigations, or controls?
- Have the requirements been reviewed to ensure they contain all the necessary safety and security-related requirements? Consider:
- Fault Detection, Isolation, and Recovery Requirements
- Software Fault Management Requirements
- Software Mitigations, Controls, etc. identified in Hazard Analysis Reports
- System constraints such as activities the hardware must not do or limitations in sensor precision.
- Software Security Requirements (For example, command authentication schemes may be necessary for network-controlled systems. Remember that unauthorized access may be either inadvertent or malicious.)
- This could be for:
- Access control to the environment or data
- Communication controls per NASA-STD-1006.
- Review of COTS, MOTS, OTS, GOTS, and OSS software for security vulnerabilities and weaknesses
- Has a list of generic hazards been reviewed for applicability? (A set of software hazard causes can be found in Appendix A of NASA-STD-8739.8 or topic 8.21 - Software Hazard Causes.)
- Are there requirements associated with the hazards that should be included in the requirements document?
- Have generic safety requirements been reviewed to see if any are applicable for this project? (The Safety Requirements Checklist listed below is one example of this. There may be other generic safety requirements lists in your Center Asset Library)
- Have software assurance personnel or software safety personnel verified that the configuration items include safety and security requirements, hazard reports, and safety analysis reports? (Required by Task 2, SWE-081)
- Do the requirements include those listed in items a through l of SWE-134? (Required by Task 1, SWE-134)
- Have the requirements in the Safety Requirements Checklist been considered?
- Are assumptions and boundary conditions identified for safety-related functions?
- Have the software safety requirements been derived from appropriate parent requirements, and do they include modes, states of operation, and safety-related constraints?
- Do the software safety requirements “maintain the system in a safe state” and provide adequate proactive and reactive responses to potential failures? For example, do the requirements include capabilities like system failover, redundant systems, backup servers, the ability to shut down gracefully, etc.?
- Have timing, data throughput, and performance been considered, and are the requirements for them feasible to meet the safety requirements? Has adequate human operator and control system response time been included? Are there adequate margins of capacity for all critical resources?
- Are any safety-related constraints between the hardware and software included in the software requirements documentation?
- Have safety “Best Practices” been included in the requirements? Some examples of safety “Best Practices” are:
- Notifying the controller when an automated safety-critical process is executed.
- Requiring hazardous commands to involve multiple, independent steps to execute.
- Requiring hazardous commands or data to differ from non-hazardous commands by multiple bits.
- Making the current state of all inhibits available to controllers (human or executive program).
- Ensuring unused code cannot cause a hazard if executed.
- Have any planned COTS, MOTS, GOTS, Open Source, or reused software modules been included in the safety requirements analysis? (See Topic 8.8 COTS Software Safety Considerations)
- Have all changes in the requirements been evaluated for impacts to safety or security?
- Have verification methods for the safety requirements been considered? Often considering the verification methods for a requirement can highlight a requirement that is ambiguous, conflicting, or not testable.
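Several of the trace-related items above lend themselves to a simple coverage pre-check once the hazard reports and trace matrix exist. A sketch with invented IDs:

```python
# Hypothetical hazard reports mapped to the software requirements that
# implement their controls/mitigations.
hazard_trace = {
    "HR-01": ["SWR-10", "SWR-11"],
    "HR-02": [],   # no software requirement traces to this hazard
}

# Hazards with no traced requirement need either a new requirement or a
# documented rationale for why software plays no role in the hazard.
uncovered = sorted(h for h, traced in hazard_trace.items() if not traced)
```

As with the other sketches, this only flags gaps; deciding whether a gap is a missing requirement or an acceptable non-software hazard is a safety-engineering judgment.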
c. Has a Fault Tree Analysis been performed? (See Topic 8.7 - Software Fault Tree Analysis for information on performing a Fault Tree Analysis.)
d. Has a Software Failure Modes and Effects Analysis (SFMEA) or a Software Failure Modes, Effects and Criticality (SFMECA) been performed? (See Topic 8.5 - SW Failure Modes and Effects Analysis for information on performing a SFMEA.)
4. Requirements Analysis Report
The primary purpose of the Software Requirements Analysis Report is to document the results of the analysis and to capture the findings and corrective actions that need to be addressed to improve the overall requirements set. A secondary objective is to record the scope and types of analysis techniques used to provide information on the robustness of the analysis done and to provide information on the methodologies that produced the most useful results.
With those objectives in mind, the following information should be captured in the Software Requirements Analysis Report:
- Basic information identifying the product being analyzed:
- Mission/Project/Application being reviewed in the analysis
- Set of Software Requirements included in this requirements analysis (e.g., Requirements Documents, Interface Documents, system requirements, etc.). Reference the set of project (parent) requirements documents used for the traces. Include version numbers or document revision numbers for all documents included.
- Personnel performing the analysis and roles represented. Lead Software Assurance Personnel and Lead Software Safety Personnel should sign the report documenting the results of the Software Requirements Analysis and Software Safety Analysis.
- A high-level scope and description of the methodologies used in the analysis
- Use the list of possible analysis methodologies listed in Tab 2 as a starting point.
- For each methodology on the list, state why or why not the methodology was used.
- List any additional methodologies used that were not included in the Tab 2 list.
- Summary of results found using each methodology
- How many findings resulted from each methodology?
- Difficulty/Ease of methodology used
- The general assessment of the methodology
- High-Level Summary of the findings
- Detailed information on Software Requirements Analysis results, findings, and corrective actions
- Overall assessment of the quality/completeness of the requirements
- Either list each result, finding, or corrective action or summarize them and list the links to the detailed findings.
- Documentation should include:
- Missing requirements
- Requirements that need rewriting because they are incomplete, incorrect, unclear, not testable/verifiable, etc.
- Requirements with safety concerns
- Requirements with security concerns (e.g., access control, vulnerabilities, weaknesses, etc.)
- Interfaces that are incompatible or not clearly defined
- Issues in traceability (Child with no parent, parent with no children, etc.)
- Requirements with unnecessary functions
- Requirements not detailed enough to provide the information needed to develop a detailed design that can be implemented in the code
- Hazards and safety-related software controls, constraints, features not included in the requirements
- Any other requirements issues discovered during the analyses
5. References
No references have been currently identified for this Topic. If you wish to suggest a reference, please leave a comment below.
NASA users can find a list of tools in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.