

5.10 - STP - Software Test Plan

Return to 7.18 - Documentation Guidance

1. Minimum Recommended Content

1.1 Purpose

The Software Test Plan describes, at a high level, the test activities that will be performed to verify that the software has been implemented in a manner that satisfies the project's functional and non-functional requirements (e.g., performance, reliability, safety, security, availability, usability) that are verified via testing, as defined in the Software Requirements Specification (SRS). As a result, this Plan addresses how the project will satisfy SWE-065 - Test Plan, Procedures, Reports, SWE-066 - Perform Testing, and SWE-191 - Software Regression Testing. This plan also defines the test methodology (i.e., qualification provisions) to be used to certify that the implemented software system satisfies the operational needs of the project.

Planning the software testing activities allows for the thorough deliberation of tasks, methods, environments, and related criteria before they are implemented. Planning also allows the project team to improve upon a previous project's testing by adopting more appropriate, modern, or efficient techniques and by including steps previously missed or omitted.

As with any task, having a plan in place ensures that all necessary and required tasks are performed. Developing that plan gives stakeholders the opportunity to provide input and to help document and tailor the planned testing activities so the outcome meets the expectations and goals of the project. A preliminary draft of the Software Test Plan should be available for review as part of the Preliminary Design Review (PDR) work package, and the plan is baselined as part of the Critical Design Review (CDR) activities, with the baselined version ready at the exit of CDR. As the life cycle progresses, it may be necessary to update the Test Plan after it is baselined; however, it should be complete and final by the exit of the Test Readiness Review (TRR).

Ensuring the Software Test Plan follows a template and includes the specified information promotes consistency of test plans across projects, ensures proper planning occurs, and helps prevent repeating problems of the past.

1.2 Recommended Content

The minimum recommended content for the Software Test Plan is listed below. The Test Plan defines the approach for testing on a project or class of projects, including the methodology for each type of test to be performed. It addresses the following items:

    1. Test levels (separate test efforts that have their own documentation and resources, e.g., unit or component testing, software integration testing, system integration testing, end-to-end testing, acceptance testing).
    2. Test types: There are many types of testing that can be used. Each type is generally intended to verify different aspects of the software. Depending on the type of testing, there may be test cases chosen from many of the types of testing and/or an exhaustive set of test cases may be chosen from one type (for example, functional testing). Test types may include:
        1. Unit Testing
        2. Software Integration Testing
        3. System Integration Testing
        4. End-to-End Testing
        5. Acceptance Testing
        6. Regression Testing
        7. Functional Testing (Requirements-based Testing)
        8. Stress Testing
        9. Performance Testing
        10. Endurance Testing
        11. Interface Testing (Both User Interface and Interfaces to Other System Functions)
        12. Boundary Conditions Testing
        13. Coverage Testing (Both Path Coverage and Statement Coverage)
        14. Mutation Testing or Perturbation Testing
        15. Types of Testing often used in Safety-Critical Systems:
          1. Fault Insertion Testing
          2. Failure Modes and Effects Testing
          3. Perturbation or Mutation Testing
    3. Test classes (designated grouping of test cases).
    4. Test progression (order in which test classes for each test level will be performed).
    5. Test schedules. 
    6. Acceptance (or exit) criteria for a set of tests (e.g., 95% of test cases must pass, i.e., meet expected results).
    7. Test coverage (breadth and depth) or other methods for ensuring sufficiency of testing.
    8. Plan for test witnessing, if the system is safety-critical.
    9. Data recording, reduction, and analysis.
    10. Any risks or issues identified with testing.
    11. Qualification - Testing Environment, Site, Personnel, and Participating Organizations 

2. Test Levels

There are many levels of testing. Several are used regularly in the Software Development Life Cycle. Others are used to test in ways that probe for weaknesses in software design or robustness, or to cover specific cases in safety-critical software. The main levels are defined below and all are discussed further in this topic.

Each software test level plays a crucial role in ensuring that defects are identified early, interactions between components are validated, and the system meets functional, non-functional, and business requirements. Properly planning and executing these test levels ensures the software delivers a high-quality and seamless experience to end-users.

See also Topic 7.06 - Software Test Estimation and Testing Levels.

2.1 Test Levels Overview Chart

The chart below summarizes, for each test level, its key objective, its scope, and who typically performs it.

2.1.1 Unit Testing

Overview:

  • Definition: Testing individual components, methods, or modules of the application to ensure they work as intended.
  • Focus:
    • Validate logic correctness.
    • Test edge cases and boundary conditions for a module.
    • Identify coding errors.

Objective:

  • Ensure that each unit or function performs its assigned task correctly in isolation from other parts of the software.

Key Tools:

  • JUnit (Java), NUnit (.NET), pytest (Python), Jasmine (JavaScript).

Example:

  • Testing a function that calculates discounts based on user input parameters.
Scope:

  • Smallest testable units of software (e.g., functions, classes, API endpoints).

Performed By:

  • Developers (usually supported by automated tools).

2.1.2 Integration Testing

Overview:

  • Definition: Testing interactions between integrated modules, components, or third-party systems to ensure they work together as expected.
  • Focus:
    • Validate data flow and communication between modules.
    • Identify interface and interaction errors.

Objective:

  • Ensure smooth communication between components and catch defects in the integration process (e.g., API calls, external system sync).

Types of Integration Testing:

  • Top-Down Integration Testing:
    • Testing starts with high-level modules, coupled with stubs to simulate subordinated modules.
  • Bottom-Up Integration Testing:
    • Testing starts with low-level modules and relies on drivers to simulate higher-level components.
  • Big Bang Integration Testing:
    • All modules and subsystems are integrated simultaneously and tested together.
  • Incremental Testing:
    • Modules are integrated and tested step-by-step.

Key Tools:

  • SOAP UI, Postman (API testing), Karate.

Example:

  • Verifying that the login module interacts successfully with the user database authentication system.
Scope:

  • Groups of connected modules or subsystems.

Performed By:

  • Test team or developers.

2.1.3 System Testing

Overview:

  • Definition: Testing the complete, integrated system as a whole to ensure it meets specified requirements.
  • Focus:
    • Functional validation (all features are working).
    • Non-functional validation (performance, usability, scalability).
    • Testing as a user would use it.

Objective:

  • Verify whether the software meets both functional and non-functional requirements.

Types of System Testing:

  • Functional Testing.
  • Usability Testing.
  • Load/Performance Testing.
  • Security Testing.
  • Localization Testing.

Key Tools:

  • Selenium, Appium (automation tools), JMeter (performance), OWASP ZAP (security).

Example:

  • Testing the full flow of an e-commerce site from product search to order placement and payment
Scope:

  • End-to-end testing of the entire software system.

Performed By:

  • Dedicated testers in an environment similar to production.

2.1.4 Acceptance Testing

Overview:

  • Definition: The final level of testing, where stakeholders, end-users, or clients verify that the software meets their business requirements and is ready for deployment.
  • Focus:
    • Validate user expectations.
    • Ensure the software addresses business needs.

Objective:

  • Ensure the software satisfies its end-users and stakeholders before production rollout.

Types of Acceptance Testing:

  • User Acceptance Testing (UAT):
    • Performed by end-users to validate workflows.
  • Alpha Testing:
    • Conducted by internal stakeholders before release.
  • Beta Testing:
    • Conducted by external users in a real-world environment.

Key Tools:

  • TestRail, Xray, Trello (UAT tracking tools).

Example:

  • Testing whether a banking app allows customers to securely transfer money and view their transaction history accurately.
Scope:

  • Entire software, focusing on user workflows and business criteria.

Performed By:

  • Stakeholders, end-users, domain experts, or test teams.

2.1.5 Regression Testing

Overview:
  • Definition: Testing performed after updates, bug fixes, or feature changes to ensure the modifications haven't negatively impacted existing functionality.
  • Focus:
    • Revalidate impacted areas of code.
    • Ensure stability and backward compatibility.

Objective:

  • Detect any unintended side effects of changes before software deployment.

Key Tools:

  • Selenium WebDriver, TestNG, Appium (mobile testing).

Example:

  • Testing the shopping cart feature after introducing a coupon discount functionality to ensure no disruptions occur in checkout processing.
Scope:

  • Previously tested functionality.

Performed By:

  • Developers or test teams, often using automated tools.

2.1.6 Performance Testing

Overview:

  • Definition: Testing conducted to evaluate the software's responsiveness, stability, and scalability under expected and extreme workloads.
  • Focus:
    • Stress Testing: Test limits under heavy conditions.
    • Load Testing: Evaluate software under normal workload thresholds.
    • Scalability Testing: Test the system's ability to grow with increased data/traffic.

Objective:

  • Validate the system's performance against benchmarks and identify bottlenecks.

Key Tools:

  • JMeter, LoadRunner, Gatling.

Example:

  • Simulating 10,000 users accessing an e-commerce platform concurrently to identify response bottlenecks.
Scope:

  • Entire system, focusing on non-functional requirements.

Performed By:

  • Performance testing specialists.

2.1.7 Security Testing

Overview:

  • Definition: Testing performed to uncover vulnerabilities, risks, or threats to the system's security.
  • Focus:
    • Penetration testing.
    • Authentication and authorization mechanisms.
    • Data encryption standards.

Objective:

  • Ensure the system is safeguarded against unauthorized access, data breaches, and malicious attacks.

Key Tools:

  • OWASP ZAP, Burp Suite, Nessus, Acunetix.

Example:

  • Testing whether a web application is protected against SQL injection attacks.
Scope:

  • Entire software system, focusing on sensitive data processing.

Performed By:

  • Security testers or ethical hackers.

2.1.8 Smoke Testing

Overview:

  • Definition: A brief and shallow test performed on builds to ensure critical features work and the software is testable.
  • Focus:
    • "Sanity check" for basic functionality.

Objective:

  • Quickly determine whether the software build is stable enough to proceed with more comprehensive testing.

Example:

  • Testing whether an existing build functions properly when a new version of the OS is installed.
Scope:

  • High-priority functionalities.

Performed By:

  • Developers or test teams.

2.1.9 Exploratory Testing

Overview:

  • Definition: Ad-hoc and unscripted testing performed by experienced testers to evaluate software creatively.
  • Focus:
    • Identify unexpected issues and gaps missed by structured test cases.

Objective:

  • Discover defects or weaknesses by exploring the software dynamically.
Scope:

  • Entire software.

Performed By:

  • End-users or test teams.


3. Test Types

There are many types of testing that can be used. Each type is generally intended to verify different aspects of the software. Depending on the type of testing, there may be test cases chosen from many of the types of testing and/or an exhaustive set of test cases may be chosen from one type (for example, functional testing).

Test types may include the following (each is described in the subsections below):

3.1 Unit Testing

The intent of unit testing is to confirm that a unit performs the capability assigned to it, correctly interfaces with other units and data, and represents a faithful implementation of the unit design. 452

In accordance with IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology, 222, a unit is defined as:

  1. A separately testable element specified in the design of a computer software component.
  2. A logically separable part of a computer program.
  3. A software component that is not subdivided into other components.

The Software Test Plan includes an overview of the unit testing process to be used for this project. The section(s) of the Software Test Plan focused on unit testing addresses the following:

  • Use of test cases.
  • Type of testing to be used: path testing, analytical testing, user interface testing.
  • Exercising functions in the unit.
  • Testing all paths in the unit. (This is a goal, although it may not always be possible.)
  • Testing boundary conditions and error situations.
  • Assessment of timing, sizing, accuracy.
  • Testing of safety features.
    • It is recommended that formal unit test plans and procedures be written for safety-critical units. This also ensures repeatability (see SWE-186 - Unit Test Repeatability).
    • Unit tests for non-critical units may be informally documented, e.g., in laboratory notebooks.
  • Tools to be used.
  • Testing assignment: Who will carry out the testing? (Typically, the developer.)
  • Testing iteration: Testing until success criteria are achieved (define objective criteria).
  • Documenting and correcting issues.
    • Typically, during unit testing, issues are corrected and the unit retested.
    • Typically, issues related to requirements or design follow the relevant process for correcting those issues.

See also SWE-062 - Unit Test, SWE-186 - Unit Test Repeatability, SWE-190 - Verify Code Coverage, SWE-191 - Software Regression Testing, SWE-219 - Code Coverage for Safety Critical Software
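
A minimal sketch of what such a unit test might look like, assuming pytest as the test framework; the clamp function and its range-checking behavior are hypothetical stand-ins for a real unit, and the cases exercise nominal values, boundary conditions, and an error situation:

import pytest

# Hypothetical unit under test: clamps a value to the range [low, high].
def clamp(value, low, high):
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_nominal_value_is_unchanged():
    assert clamp(5, 0, 10) == 5

def test_boundary_values():
    assert clamp(0, 0, 10) == 0      # at the lower boundary
    assert clamp(10, 0, 10) == 10    # at the upper boundary
    assert clamp(-1, 0, 10) == 0     # just outside the lower boundary
    assert clamp(11, 0, 10) == 10    # just outside the upper boundary

def test_error_situation():
    # An invalid range is rejected rather than silently accepted.
    with pytest.raises(ValueError):
        clamp(5, 10, 0)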

3.2 Software Integration Testing

Integration testing deals with how the software components will be incorporated into the final software system and what will be tested at each integration step. The Software Test Plan includes an overview of the software integration testing process to be used for this project. The section(s) of the Software Test Plan focused on software integration testing addresses the following:

  • Testing environment.
  • Integration sequence.
  • Testing of safety features, including confirmation that non-critical components cannot influence critical components.
  • Interactions among the units.
  • Assessment of timing, sizing, accuracy.
  • Performance at boundaries and under stress conditions.
  • Use of automated tools, when available, for analysis of the results.
  • Repetition of integration testing until success criteria are achieved (define objective criteria).
  • Use of independent personnel to perform testing.
  • Identification of test harnesses or drivers required and whether they exist or need to be developed.

See also Topic 5.02 - IDD - Interface Design Description

3.3 End-to-End Testing

The Software Test Plan includes an overview of the process used for end-to-end testing. The section(s) of the Software Test Plan focused on end-to-end testing addresses the following:

  • Stress, load, disaster, stability testing.
  • Functional testing.
  • Testing of safety features.
  • Testing all paths in the system. (This is a goal, although it may not always be possible.)
  • Exercising all branches in the system.
  • Execution of each statement at least once.
  • Testing of boundary conditions for all inputs, as well as nominal and invalid input values.
  • Objective pass/fail criteria for each test.
  • Any special procedures, constraints, dependencies for implementing and running tests.
  • Use of independent personnel to perform testing.

3.4 Acceptance Testing

Acceptance testing needs to be conducted after the appropriate readiness review has been successfully completed. This type of testing is the customer acceptance test, and the Software Test Plan includes an overview of the process used. The section(s) of the Software Test Plan focused on acceptance testing includes the following:

  • Plans to assure the customer that the system is safe.
  • Plans to confirm that software correctly implements system and software requirements in an operational environment.
  • Use of independent personnel to perform testing.
  • Identification of criteria for stopping testing, e.g., fraction of requirements covered, number of errors remaining, reliability goals.
  • Provisions for witnessing of tests.

See also Topic 8.13 - Test Witnessing

3.5 Regression Testing

Regression tests are used to ensure that changes made to software have had no unintended side effects.

The section(s) of the Software Test Plan focused on regression testing addresses the following:

  • Exercising the maximum number of critical functions.
  • Selection of the subset of the total number of tests (from the original test suites) to be used as the regression test set, e.g., key 10 percent.
  • Use of independent personnel to perform testing.
  • Use of automated test tools.
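
One common way to implement the "key subset" idea is to tag critical-function tests so they can be selected as the regression set. A minimal sketch, assuming pytest markers; the marker name regression and both functions under test are illustrative, and the marker would normally be registered in pytest.ini:

import pytest

# Hypothetical units under test, standing in for critical and non-critical functions.
def sum_telemetry(samples):
    return sum(samples)

def format_report(data):
    return ", ".join(f"{key}={value}" for key, value in data.items())

@pytest.mark.regression
def test_critical_telemetry_sum():
    # Tagged for the regression set: exercises a critical function.
    assert sum_telemetry([1, 2, 3]) == 6

def test_report_formatting():
    # Untagged: runs in the full suite but not in the quick regression pass.
    assert format_report({"mode": "safe"}) == "mode=safe"

# Run only the regression subset with:  pytest -m regression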

See also Topic 7.06 - Software Test Estimation and Testing Levels

3.6 Functional Testing (Requirements-Based Testing)

Software Functional Testing, often referred to as Requirements-Based Testing (RBT), is a type of software testing aimed at verifying that a software application or system performs according to its specified functional requirements. This type of testing evaluates whether the system behaves as expected and meets its intended purpose, as defined by the requirements documentation. It focuses on what the software is supposed to do, rather than how it does it.

3.6.1 Key Components of Functional Testing (Requirements-Based Testing)

  1. Alignment with Requirements:
    • Each test is derived from specific functional requirements or user stories documented during the software development process.
    • The goal is to ensure the software satisfies the explicitly stated needs and expectations of stakeholders.
  2. Evaluation of Functional Behavior:
    • Functional testing focuses on whether the software can perform the tasks and operations it was designed for.
    • Example functional behaviors may include data input/output, calculations, commands, and user interactions.
  3. Independent of Implementation Details:
    • Tests are not concerned with the underlying code or internal architecture but focus solely on whether the output matches the required functionality.
  4. Input-Output Verification:
    • Functional testing involves providing different inputs (valid, invalid, edge cases) to the software and verifying that it produces the correct outputs or responses.

3.6.2 Types of Functional Testing

There are several subcategories of functional testing that fall under requirements-based testing:

  1. Unit Testing: Testing individual components or modules of the software to ensure they function correctly.
  2. Integration Testing: Testing interactions between integrated components to verify they work together as expected.
  3. System Testing: Evaluating the complete system to see whether it fulfills requirements.
  4. User Acceptance Testing (UAT): Ensuring the software meets the needs of end-users and stakeholders.

3.6.3 Functional Testing Process

  1. Understand Requirements: Review functional requirements and specifications to understand what functionalities need to be tested.
  2. Design Test Cases: Create test cases that map directly to the documented requirements. For example, if the requirement states "The system shall allow users to log in with a valid username and password," the corresponding test cases would include scenarios confirming correct login behavior and rejecting invalid credentials.
  3. Execute Test Cases: Run the test cases on the software to evaluate its behavior according to requirements.
  4. Analyze Results: Compare actual outputs with expected outputs and determine whether the software functions correctly or has defects.
  5. Report Defects: If discrepancies or bugs are found during testing, they are logged for corrective action.

3.6.4 Example of a Functional Test Case

Requirement: "The software shall calculate the sum of two numbers entered by the user."

Functional Test Cases:

    1. Verify the software correctly calculates the sum of positive numbers.

      • Input: 2, 3
      • Expected Output: 5
    2. Verify the software handles negative numbers.

      • Input: -2, -3
      • Expected Output: -5
    3. Verify the software handles mixed positive and negative numbers.

      • Input: -2, 3
      • Expected Output: 1
    4. Verify invalid input handling.

      • Input: "abc, 3"
      • Expected Output: Error message indicating invalid input.
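
These test cases translate directly into automated checks. A minimal sketch, assuming pytest and a hypothetical add_numbers function that implements the requirement:

import pytest

# Hypothetical implementation of the requirement: sum two user-entered numbers.
def add_numbers(a, b):
    try:
        return float(a) + float(b)
    except (TypeError, ValueError):
        raise ValueError("invalid input")

@pytest.mark.parametrize("a, b, expected", [
    (2, 3, 5),      # positive numbers
    (-2, -3, -5),   # negative numbers
    (-2, 3, 1),     # mixed positive and negative
])
def test_sum_is_correct(a, b, expected):
    assert add_numbers(a, b) == expected

def test_invalid_input_is_rejected():
    # Invalid input produces an error rather than a wrong answer.
    with pytest.raises(ValueError):
        add_numbers("abc", 3)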

3.6.5 Importance of Functional Testing

Functional (Requirements-Based) testing is essential because it:

    1. Ensures the software fulfills its intended purpose and meets stakeholder expectations.
    2. Validates compliance with documented requirements.
    3. Identifies any functional defects early, reducing the risk of system failure.
    4. Improves overall software quality, usability, and reliability.

This form of testing provides confidence that the software operates as intended and is ready for deployment or further development stages.

3.7 Stress Testing

Software Stress Testing is a type of performance testing that evaluates how a system behaves under extreme or unfavorable conditions. It is designed to test the system's robustness, stability, and reliability by pushing it beyond normal operational limits, such as high data input, maximum concurrent users, insufficient resources, or other stress scenarios that might cause failure.

The purpose of stress testing is to determine the system's ability to recover gracefully from extreme situations and ensure that it can maintain performance under adverse or unexpected conditions.

Stress testing plays a critical role in ensuring that software or systems can handle extreme conditions gracefully and deliver consistent performance under stress.

3.7.1 Objectives of Stress Testing

  1. Evaluate System Stability: Ensure the system remains stable when subjected to extreme loads or resource constraints.
  2. Identify Breaking Points: Discover the maximum capacity the system or application can handle before failing.
  3. Assess Recovery Behavior: Investigate how the system reacts and recovers after reaching its breaking point or abnormal failure scenarios.
  4. Prevent Catastrophic Failures: Identify bottlenecks, crashes, or issues that could potentially escalate during heavy usage or under extreme conditions.
  5. Validate Fail-Safe Mechanisms: Verify if fallback, recovery, or backup processes work as intended during stress conditions.

3.7.2 When is Stress Testing Performed?

Stress testing is typically performed:

  • After functional testing is complete.
  • During later stages of the software development life cycle (SDLC).
  • Often as part of performance testing efforts.

3.7.3 Scenarios Tested During Stress Testing

Stress testing involves evaluating the system under conditions that mimic extreme real-world usage. Common scenarios include:

  1. Excessive Concurrent Users:

    • Test how the application performs when the number of concurrent users significantly exceeds expected levels.
    • Example: A web application meant for 10,000 simultaneous users is stress-tested with 50,000 users.
  2. High Volume Data Input:

    • Test large data loads or excessive requests in limited time frames to observe how the system handles high throughput.
    • Example: Sending millions of transactions within a short time interval.
  3. Resource Degradation (Memory, CPU, Network):

    • Reduce available system resources (e.g., memory, disk space, CPU usage) and test performance while running critical operations.
    • Example: Simulating memory leaks or limiting system resources to evaluate behavior.
  4. Network Failures:

    • Degrade network resources (low bandwidth, dropped connections, delays) and observe system reaction.
    • Example: An app that relies on API calls can be tested by simulating poor or intermittent network behavior.
  5. Prolonged Usage:

    • Stress the system over extended periods of continuous operation to detect stability issues such as slowdowns, crashes, or memory leaks.
    • Example: Running a high-load server non-stop for 48 hours.
  6. Hardware Failures:

    • Simulate hardware issues, such as disk crashes, power outages, or server unavailability, to examine the system's ability to handle such disruptions.
  7. Simultaneous Heavy Processing:

    • Execute multiple high-complexity operations simultaneously to test system throughput and response times under extreme stress.
    • Example: Processing hundreds of database queries while performing heavy file uploads.

3.7.4 Steps for Performing Stress Testing

  1. Identify Stress Test Boundaries: Define the extreme conditions you want to test (e.g., maximum users, resource constraints, peak data load).
  2. Create Stress Test Scenarios: Develop specific scenarios to test the system under extreme situations. Tools like JMeter, LoadRunner, or custom scripts are often used.
  3. Prepare Test Environment: Set up a reliable and isolated testing environment (ideally resembling production) with necessary monitoring tools installed.
  4. Inject Load and Stress: Use automated tools or scripts to apply the stress conditions progressively or abruptly.
  5. Monitor System Behavior: Collect metrics such as response times, CPU/memory usage, error rates, transaction failures, and system crashes during testing.
  6. Analyze Results: Study logs and metrics to identify bottlenecks, failure points, and system vulnerabilities.
  7. Fix Issues & Re-Test: Address any weaknesses discovered during stress testing and conduct re-testing to validate fixes under the same stress conditions.
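
A minimal sketch of step 4 (injecting load), using only the Python standard library; handle_request is a stand-in for the real operation under stress, and the request count and worker pool deliberately exceed a nominal design load:

import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for the operation under stress (e.g., an API call or a transaction).
def handle_request(i):
    time.sleep(0.01)   # simulate work
    return i

def stress_test(total_requests=5000, concurrency=200):
    errors = 0
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(handle_request, i) for i in range(total_requests)]
        for future in as_completed(futures):
            try:
                future.result()
            except Exception:
                errors += 1   # failed transactions feed the error-rate metric
    elapsed = time.perf_counter() - start
    print(f"{total_requests} requests with {concurrency} workers: "
          f"{elapsed:.1f} s, {errors} errors, "
          f"{total_requests / elapsed:.0f} requests/s throughput")

if __name__ == "__main__":
    stress_test()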

3.7.5 Metrics Tested in Stress Testing

During stress testing, teams monitor the following critical metrics:

  • Response Time: How quickly the system responds to requests under stress.
  • Error Rates: Frequency of errors or failed transactions during testing.
  • Transaction Throughput: Number of transactions completed successfully.
  • Resource Utilization: CPU, memory, and disk usage under load.
  • System Downtime: Frequency and duration of outages.
  • Recovery Time: Time taken to recover after a failure.

3.7.6 Benefits of Stress Testing

  1. Prepares for Real-World Scenarios: Helps developers identify how systems behave under unexpected peak conditions or failures.
  2. Prevents Crashes: Identifies potential bottlenecks early before the system suffers a catastrophic failure in production.
  3. Improves Scalability: Detects how systems scale under extreme loads so improvements can be made.
  4. Enhances Reliability: Verifies fail-safe mechanisms, recovery processes, and data integrity in case of a system failure.

3.7.7 Example of Stress Testing

Imagine an e-commerce app during Black Friday sales:

  • Normal operation handles 2,000 transactions/min.
  • Stress testing simulates 10,000 transactions/min over a prolonged period.
  • During the test, developers monitor response times, error rates, and app crashes.
  • Resources like system storage, processing power, and network bandwidth are intentionally throttled to evaluate how the app handles resource limits.

3.8 Performance Testing

Software Performance Testing is a type of testing focused on determining how well a software application or system performs under various conditions. It evaluates factors such as speed, scalability, reliability, and resource consumption to ensure that the software meets performance expectations when deployed in real-world environments.

Performance testing ensures the system is ready to handle real-world scenarios without compromising responsiveness, reliability, or scalability, making it essential for mission-critical and user-facing applications.

3.8.1 Objectives of Performance Testing

  1. Measure System Responsiveness: Ensure the application's response time is acceptable under expected workloads.
  2. Evaluate Scalability: Determine how the application scales under increasing loads (e.g., more users, larger data sets).
  3. Identify Bottlenecks: Pinpoint performance issues caused by inadequate resource utilization or inefficient system behavior.
  4. Ensure Stability: Verify that the system remains stable and doesn't crash while handling extended or high workloads.
  5. Validate Throughput: Measure the number of transactions or requests handled within a given time frame to ensure the application can meet workload demand.
  6. Optimize Resource Usage: Analyze CPU, memory, disk, and network usage to identify opportunities for optimization.

3.8.2 Goals of Performance Testing

  • Ensure software meets speed and efficiency benchmarks.
  • Provide a seamless experience for end-users.
  • Prevent performance-related failures in production environments.
  • Support capacity planning and infrastructure scaling.

3.8.3 Types of Performance Testing

Performance testing encompasses several specialized testing approaches:

  1. Load Testing:

    • Evaluates the system's behavior under anticipated workloads (e.g., regular user traffic or transactions).
    • Example: Testing a website with 1,000 concurrent users to verify response time during typical operation.
  2. Stress Testing:

    • Determines how the system behaves under extreme conditions, pushing it past its normal capacity.
    • Example: Testing an application with 10x the expected data load to identify breaking points or recovery mechanisms.
  3. Endurance Testing (Soak Testing):

    • Tests system stability over extended periods under a sustained load to detect memory leaks or degradation.
    • Example: Running a server continuously for 48 hours under normal traffic load.
  4. Spike Testing:

    • Examines the system's response to sudden, dramatic increases in user or transaction load.
    • Example: Simulating a surge in traffic during an unexpected event (e.g., a flash sale).
  5. Volume Testing:

    • Tests the system with large volumes of data to evaluate its capacity for handling high data loads.
    • Example: Uploading millions of records into a database to test its ability to process.
  6. Scalability Testing:

    • Determines how the system performs as the workload increases proportionally.
    • Example: Adding more virtual users to see how system performance is affected.

3.8.4 Metrics Measured in Performance Testing

Performance testing typically evaluates the following metrics:

  1. Response Time: The time the system takes to respond to a user request.

  2. Throughput: The number of transactions processed over a specific period of time.

  3. Load Time: The time taken by a page or application to load completely.

  4. Resource Utilization: CPU, memory, disk, and network usage during testing.

  5. Error Rate: The percentage or number of errors encountered during testing.

  6. Concurrent Users: The number of users the system can handle simultaneously while maintaining performance.

  7. Maximum Load Capacity: The highest load the system can handle before performance degrades or failure occurs.

3.8.5 Steps in Performance Testing

  1. Define Test Goals: Identify performance benchmarks and acceptable thresholds (e.g., "average response time must be under 2 seconds").
  2. Prepare Test Environment: Set up a testing environment that closely resembles the production setup. This includes network configurations, servers, hardware, and software.
  3. Identify Test Scenarios: Develop scenarios that simulate typical, peak, and worst-case usage conditions.
  4. Select Performance Testing Tools: Use appropriate tools for generating load and monitoring system behavior (e.g., JMeter, LoadRunner, Gatling).
  5. Conduct Tests: Run the test scenarios under varying load conditions.
  6. Monitor System Behavior: Collect metrics during testing, such as response time, throughput, resource usage, and error rates.
  7. Analyze Results: Identify bottlenecks, performance degradation, or violations of acceptable benchmarks.
  8. Optimize and Re-Test: Implement fixes or optimizations, then conduct performance tests again to validate improvements.
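
A small sketch of steps 5 through 7 for a single scenario, using only the Python standard library; search_catalog is a hypothetical operation being measured against an assumed response-time benchmark:

import statistics
import time

# Hypothetical operation whose response time is being measured.
def search_catalog(term):
    time.sleep(0.02)   # simulate query latency
    return [term]

def measure_response_times(samples=100, threshold_s=0.05):
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        search_catalog("widget")
        times.append(time.perf_counter() - start)
    average = statistics.mean(times)
    p95 = statistics.quantiles(times, n=20)[-1]   # 95th percentile
    print(f"average = {average * 1000:.1f} ms, 95th percentile = {p95 * 1000:.1f} ms")
    # Compare against the benchmark identified in step 1.
    assert p95 < threshold_s, "95th-percentile response time exceeds the benchmark"

if __name__ == "__main__":
    measure_response_times()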

3.8.6 Tools Used for Performance Testing

  1. Apache JMeter: Open-source tool for simulating load and monitoring responses for web applications and APIs.
  2. LoadRunner: Enterprise-class tool by Micro Focus for load and stress testing.
  3. Gatling: Open-source tool designed for scalability testing of web applications.
  4. NeoLoad: Test automation tool for performance testing and monitoring.
  5. Dynatrace: Used for monitoring system metrics during performance testing.
  6. BlazeMeter: Cloud-based testing tool supporting JMeter scripts and scalability testing.

3.8.7 Benefits of Performance Testing

  1. Improved User Experience: Ensures the application is fast and responsive, providing a seamless experience for users.
  2. Scalable Applications: Determines if the system can accommodate growth (e.g., increased users or data).
  3. Avoids Downtime: Ensures the system can handle peak loads without crashing.
  4. Cost Optimization: Detects overutilization or underutilization of resources so adjustments can be made.
  5. Builds Market Success: Reliable and efficient performance positively impacts user satisfaction and system adoption.

3.8.8 Example of Performance Testing

Scenario: Performance testing an e-commerce web application.

  • Test Goals: Measure response time and throughput under normal traffic (2,000 users) and peak traffic (10,000 users). Ensure that the page load time remains under 3 seconds.

  • Setup: Use Apache JMeter to simulate concurrent users browsing, adding items to carts, checking out, and searching for products.

  • Metrics Monitored: Response times, errors, CPU utilization, and database query times during both normal and peak traffic.

Outcome: Identify bottlenecks in the checkout process, optimize database queries to improve response time, and test scalability by deploying additional servers.

3.9 Endurance Testing (Soak Testing)

Software Endurance Testing, also known as Soak Testing, is a type of performance testing conducted to evaluate the system's behavior over an extended period of continuous usage. Its focus is on detecting issues such as memory leaks, performance degradation, resource exhaustion, and stability problems that may not become apparent during short-duration tests.

The primary goal of endurance testing is to ensure that the application or system can operate reliably for prolonged periods under expected usage conditions without failing or degrading in performance.

Endurance testing is critical for applications that require consistent performance over extended periods and helps identify long-term issues that could affect the system's reliability, scalability, and user satisfaction.

3.9.1 Objectives of Endurance Testing

  1. Verify Long-Term Stability: Ensure the application remains functional and stable over extended periods.
  2. Detect Memory Leaks: Identify issues like gradual increases in memory usage over time, which could lead to system crashes or slowdowns.
  3. Evaluate Resource Utilization: Monitor how system resources (CPU, memory, network, disk space) are used over time and ensure there are no inefficiencies.
  4. Identify Performance Degradation: Detect problems like response time increases, throughput reductions, or system slowdowns under prolonged usage.
  5. Validate System Reliability: Confirm that the application consistently delivers expected performance and doesn't encounter unforeseen failures during long-term operation.

3.9.2 Typical Scenarios for Endurance Testing

  1. 24/7 Web Applications: Test applications expected to run continuously for extended periods, such as banking portals, e-commerce platforms, or SaaS products.
  2. Embedded and IoT Devices: Evaluate devices or systems (e.g., flight control systems, medical devices) that must operate continuously for days, weeks, or longer.
  3. Database Applications: Test database systems to ensure queries and operations remain efficient over extended usage with growing data volumes.
  4. Server Systems and APIs: Assess backend services that will handle ongoing user requests or automated tasks throughout their lifecycle.

3.9.3 Key Focus Areas of Endurance Testing

  1. Memory Usage: Ensure that memory does not drain or grow uncontrollably, potentially leading to crashes or fragmentation (e.g., due to memory leaks).
  2. CPU Utilization: Monitor CPU load over time to detect spikes, inefficiencies, or progressive slowdowns.
  3. Disk Space: Validate that logs, backups, or temporary files do not cause storage exhaustion.
  4. Network Stability: Evaluate how the application handles continuous network traffic over long periods.
  5. Data Integrity: Ensure that prolonged operations don't lead to data corruption or inconsistency.

3.9.4 Steps for Performing Endurance Testing

  1. Define Test Goals: Clearly define the performance metrics to monitor and the expected duration of the test (e.g., 24, 48, or more hours).
  2. Prepare Test Environment: Set up an environment similar to the production setup, with appropriate hardware, software, and configurations.
  3. Create Test Scenarios: Simulate realistic, sustained user and workload scenarios over an extended period (e.g., a web application's normal traffic plus database queries).
  4. Run the Tests: Execute the endurance test continuously, keeping system load consistent or with slight variations based on real-world patterns.
  5. Monitor System Behavior: Use monitoring tools to observe real-time memory usage, CPU utilization, disk space growth, response times, and other metrics.
  6. Collect and Analyze Metrics: Compare performance over time to identify trends such as gradual increases in resource consumption and potential degradation.
  7. Report Findings: Log all anomalies, bottlenecks, and degradation patterns for developers to address.
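
A minimal sketch of step 5 (monitoring long-running behavior), assuming the third-party psutil package for resource metrics; the workload, duration, and sampling interval are illustrative, since a real soak test would run for hours or days:

import time
import psutil   # third-party package, assumed available for resource monitoring

# Stand-in for the sustained workload applied during the soak test.
def workload():
    return sum(i * i for i in range(10_000))

def soak_test(duration_s=60, sample_interval_s=5):
    process = psutil.Process()
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        workload()
        rss_mb = process.memory_info().rss / 1_048_576
        samples.append(rss_mb)
        time.sleep(sample_interval_s)
    # A steadily rising memory trend across samples suggests a leak.
    print(f"first sample: {samples[0]:.1f} MB, last sample: {samples[-1]:.1f} MB")

if __name__ == "__main__":
    soak_test()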

3.9.5 Metrics Measured in Endurance Testing

  1. Response Time: Average time taken by the application to respond to user actions over extended periods.
  2. Resource Utilization: CPU, memory, and network usage trends throughout the test.
  3. Memory Leaks: Gradual increases in memory usage without release or restoration.
  4. Error Rate: Frequency of errors generated during prolonged operation.
  5. Throughput: Consistency in the number of transactions processed over time.
  6. Unexpected Failures: Number of crashes or downtime events during prolonged usage.

3.9.6 Tools Used for Endurance Testing

  1. Apache JMeter: Suitable for running sustained load tests on web applications and APIs.
  2. LoadRunner: Built for enterprise-grade soak testing and long-term resource observation.
  3. Dynatrace: An APM tool to monitor real-time metrics for applications during long-running tests.
  4. Nagios: Helps track system resource health (CPU, memory, storage) during soak testing.
  5. AppDynamics: Useful for detecting memory leaks and performance degradation during endurance tests.

3.9.7 Benefits of Endurance Testing

  1. Early Detection of Memory Leaks: Identifies subtle memory leak issues that could cause crashes during prolonged operation.
  2. Reliability Validation: Builds confidence in the application’s ability to run continuously without degradation or failure.
  3. Performance Optimization: Highlights code inefficiencies or resource utilization problems that only appear with prolonged usage.
  4. Scalability Planning: Provides insights into how the system handles prolonged and scaled usage over time.

3.9.8 Example of Endurance Testing

Scenario: Endurance testing is carried out for a video streaming application that serves users 24/7.

  • Test Goals: Check stability, response time consistency, memory usage, and the effect of prolonged streaming on backend servers.
  • Setup: Simulate 1,000 users continuously streaming videos over 48 hours. Monitor memory, CPU utilization, server response times, and disk usage for temporary files.
  • Observations:
    • Memory use increases after 12 hours due to unoptimized garbage collection.
    • Response times remain consistent over the first 24 hours but slow slightly after 30 hours.

Outcome: Developers fix the memory leak by optimizing garbage collection and database code. After retesting, performance metrics remain stable for the 48-hour test.

3.10 Interface Testing (Both User Interface and Interfaces to Other System Functions)

Software Interface Testing is a type of testing that focuses on verifying the interactions between various software components, systems, or modules. Its primary goal is to ensure that interfaces (communication points) between different parts of the system work as intended, exchanging data correctly, handling failures gracefully, and maintaining consistency and integrity.

Software interfaces include APIs (Application Programming Interfaces), web services, messaging protocols, and graphical user interfaces (GUIs), as well as interactions between databases, servers, or hardware devices. Interface testing plays a crucial role in ensuring seamless communication within a system.

Interface testing is critical for ensuring successful integration and communication between components in modern systems, especially in distributed architectures, cloud-based environments, and applications with external dependencies like APIs or third-party services.

3.10.1 Objectives of Interface Testing

  1. Validate Communication: Ensure data is correctly transmitted between system components, with no missing or corrupted information.
  2. Detect Integration Issues: Identify problems caused by poor interaction between modules, such as mismatching data formats or incompatible protocols.
  3. Ensure Error Handling: Verify that the system handles unexpected input, network failures, or interface malfunctions gracefully.
  4. Verify Consistency: Ensure consistent behavior between modules, such as adhering to agreed data structures, message formats, or API contracts.
  5. Test Boundary Conditions: Evaluate edge cases or unusual scenarios that might affect interface functionality (e.g., high transaction loads, empty responses).

3.10.2 Types of Interfaces Tested

  1. Application Programming Interfaces (APIs): Validates REST, SOAP, or other APIs for proper communication, authentication, data exchange, and error handling.
  2. Web Services: Tests communication protocols (e.g., HTTP/HTTPS, XML/JSON) used between web applications.
  3. Graphical User Interfaces (GUIs): Ensures proper interaction between graphical elements and backend systems.
  4. Hardware Interfaces: Validates communication between software and hardware components (e.g., embedded systems, sensors, devices).
  5. Database Interfaces: Tests interactions between the application and databases, ensuring proper execution of queries, data storage/retrieval, and data integrity.
  6. Messaging Systems: Verifies interactions in distributed systems using protocols like RabbitMQ, Kafka, or other message brokers.

3.10.3 Steps to Perform Interface Testing

  1. Identify Interfaces to Test:
    • Analyze the system architecture to find all critical interfaces (e.g., APIs, modules, third-party systems, web services).
    • Analyze the user interface to determine how users interact with the system.
  2. Understand Requirements: Gather requirements such as input/output formats, protocols, authentication methods, and system behavior during interfacing.
  3. Prepare Test Cases: Develop test cases for all expected scenarios, including normal, edge, and error conditions (e.g., invalid input).
  4. Simulate Dependencies: Use stubs, mock services, or simulators to replicate dependent systems for isolated testing of the interface.
  5. Perform Testing: Execute test cases manually or with interface testing tools and frameworks, and monitor the responses.
  6. Analyze Data Exchange: Validate correctness (e.g., accurate data, proper structure) and consistency of the transmitted information.
  7. Test Error Handling: Examine how the interface handles communication failures, invalid requests, or unexpected data.
  8. Log & Report Issues: Report any defects or interface problems and collaborate with developers to resolve them.

3.10.4 Key Areas of Interface Testing

  1. Data Accuracy: Ensuring all transmitted data is correct, complete, and complies with expected formats (e.g., JSON, XML).
  2. Data Flow: Verifying the correct sequence and structure of data exchange between modules.
  3. Error Handling: Ensuring the system responds appropriately to faulty scenarios like network delays, incorrect inputs, or system crashes.
  4. Performance: Evaluating how efficiently the interface handles large data payloads or high transaction volumes.
  5. Security: Testing authentication methods, authorization protocols, and encryption of sensitive data during transfer.

3.10.5 Common Challenges in Interface Testing

  1. Dependency on External Modules: Interfaces often depend on external systems that may not be available during testing. Simulators or mock tools are used as replacement fixtures.
  2. Protocol Mismatches: Mismatches between agreed-upon data protocols (e.g., XML vs. JSON) can cause integration issues.
  3. Error Propagation: Problems in one module may affect other dependent modules through the interface.
  4. Dynamic Configurations: Many interfaces rely on dynamically changing input/output structures, making comprehensive testing challenging.

3.10.6 Tools for Interface Testing

Many tools are available for testing interfaces, particularly APIs and web services:

  1. Postman: A popular API testing tool for validating request/response data and handling automation tasks.

  2. SoapUI: Used for testing SOAP and REST web services, as well as validating APIs.

  3. Swagger: Provides API documentation and testing directly from the API description.

  4. JUnit/TestNG: Java-based testing frameworks for interface validation in backend systems.

  5. WireMock: Creates mock APIs and stubs for interface testing without dependence on actual systems.

  6. Karate: A framework for API testing and mocking that supports REST/GraphQL interfaces.

  7. Fiddler: Captures and analyzes web requests, primarily useful for interface debugging and testing.

3.10.7 Example of Interface Testing

Scenario: Testing an e-commerce platform's interface between its web application and payment gateway.

  1. Interface Requirement:

    • The payment gateway must accept payment details in JSON format and return success/failure responses.
  2. Test Cases:

    • Test a valid payment request.
    • Test invalid payment details (e.g., expired credit card).
    • Simulate network failure while processing payment.
    • Test high transaction volume.
  3. Execution:

    • Use Postman or SoapUI to send requests and validate responses.
    • Simulate edge cases using mock services for the payment gateway.

Outcome: Ensure seamless communication and error handling between the platform and payment gateway.
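
A sketch of the first two test cases, assuming the requests library; the endpoint URL, test card numbers, and JSON response contract are hypothetical, and in practice the gateway would be a sandbox or a mock service (e.g., WireMock):

import requests

# Hypothetical sandbox endpoint and payload format; the real contract would come
# from the interface requirements or the gateway's API documentation.
GATEWAY_URL = "https://sandbox.example.com/payments"

def submit_payment(card_number, amount):
    return requests.post(
        GATEWAY_URL,
        json={"card": card_number, "amount": amount},
        timeout=5,
    )

def test_valid_payment_is_accepted():
    response = submit_payment("4111111111111111", 25.00)
    assert response.status_code == 200
    assert response.json()["status"] == "success"

def test_invalid_payment_is_rejected():
    response = submit_payment("4000000000000069", 25.00)   # illustrative expired-card number
    assert response.status_code == 402
    assert response.json()["status"] == "failure"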

3.10.8 Benefits of Interface Testing

  1. Improved Integration: Ensures smooth communication between software components.
  2. Early Detection of Bugs: Finds integration issues before they cause major failures in the system.
  3. Reliability: Ensures consistent and dependable data exchange between components.
  4. Security: Validates authentication and data protection mechanisms, ensuring secure communication.
  5. Performant Interfaces: Optimizes interfaces by identifying inefficiencies in data exchange or system connectivity.

3.11 Boundary Conditions Testing

Software Boundary Conditions Testing, also referred to as Boundary Value Testing, is a type of software testing focused on evaluating the behavior of a system at the boundaries of input values. It tests the application with values that are at the edge boundaries (maximum, minimum, and just inside or outside of the limits) to identify potential defects or inconsistencies that occur at these limits.

Boundary conditions testing is an important part of black-box testing techniques, as it ensures the software can handle edge-case scenarios properly, which are often prone to defects.

Boundary Conditions Testing is an essential part of ensuring software reliability and robustness, particularly in edge scenarios where bugs are more prone to occur. It complements other testing techniques by focusing specifically on the limits and edges, making it invaluable for quality assurance in software development.

3.11.1 Objectives of Boundary Conditions Testing

  1. Validate Input Handling: Ensure the application correctly processes inputs at the edges of allowable ranges.
  2. Detect Edge-Case Failures: Identify issues that could arise from handling extreme or out-of-range values (e.g., overflow, underflow).
  3. Ensure System Stability: Verify that the system remains stable and functional with boundary values or extreme cases.
  4. Improve Reliability: Test conditions that users are most likely to encounter unintentionally, ensuring system robustness.

3.11.2 Key Characteristics of Boundary Conditions

Boundary conditions are usually the maximum, minimum, and values just inside or outside of the expected input range. For example:

  • If the input range is 1 - 100, the boundary conditions would be:
    • Boundary Values: 1 (minimum), 100 (maximum)
    • Near Boundaries: 0 (just outside minimum), 101 (just outside maximum), 2 (just inside minimum), and 99 (just inside maximum).

3.11.3 Difference Between Boundary Conditions Testing and Equivalence Partitioning

  • Equivalence Partitioning:
    • Divides inputs into ranges (partitions) representing valid and invalid sets.
    • Only one representative value from each partition is tested.
  • Boundary Conditions Testing:
    • Focuses specifically on values that are at the boundaries of partitions or input ranges.

3.11.4 Example of Boundary Conditions

  • Input Range: 1-100
    • Equivalence Partition Testing focuses on representative values inside valid (e.g., 50) or invalid (e.g., 150) partitions.
    • Boundary Conditions Testing evaluates values at the edges (e.g., 1, 100, 0, 101, 99).

3.11.5 Steps to Perform Boundary Conditions Testing

  1. Identify Input Ranges: Analyze requirements to determine the valid and invalid input ranges of the application.
  2. Determine Boundary Values: Identify boundary values for all inputs, including extreme valid limits and values outside the allowable range.
  3. Create Test Cases:
    • Design test cases to include:
      • Minimum value
      • Maximum value
      • Values just inside the boundary (valid range)
      • Values just outside the boundary (invalid range)
  4. Execute Test Cases: Test the application with these boundary values and observe its behavior.
  5. Compare Results: Validate the actual output against the expected output for each condition tested.
  6. Report Defects: Record any disruptions, errors, or crashes caused by boundary values and provide them to the development team.

3.11.6 Example of Boundary Conditions Testing

Consider a login form that accepts a username/password, with both limited to 5 - 10 characters.

  1. Boundary Values:

    • Minimum boundary: 5 characters (e.g., abcde)
    • Maximum boundary: 10 characters (e.g., abcdefghij)
    • Just inside valid boundary:
      • 6 characters (e.g., abcdef)
      • 9 characters (e.g., abcdefghi)
    • Just outside valid boundary:
      • 4 characters (e.g., abcd) [Too few characters]
      • 11 characters (e.g., abcdefghijk) [Too many characters]
  2. Test Cases:

    • Valid: Test with 5, 6, 9, and 10 characters.
    • Invalid: Test with 4 and 11 characters.
  3. Execution & Observation:

    • Valid inputs should pass successfully.
    • Invalid inputs should return appropriate error messages (e.g., "Password must be between 5-10 characters").
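
The length checks above map naturally onto a parameterized test. A minimal sketch, assuming pytest and a hypothetical is_valid_credential validator implementing the 5-10 character rule:

import pytest

# Hypothetical validator implementing the 5-10 character requirement.
def is_valid_credential(value):
    return 5 <= len(value) <= 10

@pytest.mark.parametrize("value, expected", [
    ("abcde", True),         # minimum boundary (5 characters)
    ("abcdefghij", True),    # maximum boundary (10 characters)
    ("abcdef", True),        # just inside the minimum
    ("abcdefghi", True),     # just inside the maximum
    ("abcd", False),         # just outside the minimum
    ("abcdefghijk", False),  # just outside the maximum
])
def test_credential_length_boundaries(value, expected):
    assert is_valid_credential(value) is expected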

3.11.7 Benefits of Boundary Conditions Testing

  1. Early Bug Detection: Identifies defects or failures at edge cases, which are often overlooked by other testing techniques.
  2. Improved System Reliability: Ensures the application handles edge cases gracefully, reducing the risk of crashes or errors.
  3. Simplified Test Design: Testing edge values often reveals issues without needing exhaustive testing of all intermediate values.
  4. Enhanced User Experience: Prevents issues that users are likely to encounter with extreme inputs or invalid ranges.

3.11.8 Challenges in Boundary Conditions Testing

  1. Complex Validation: Identifying all relevant boundary conditions for large or complex systems can be time-consuming.

  2. Dynamic Boundaries: Boundaries that change due to runtime conditions or configurations may require additional efforts to test dynamically.

  3. Unexpected Edge Cases: Some issues may occur not at defined boundaries but at system-design-specific conditions (e.g., unexpected peaks in resource usage).

3.11.9 Tools Used for Boundary Conditions Testing

Boundary conditions testing primarily involves manual and automated testing using standard test tools.

3.11.10 Real-Life Example of Boundary Conditions Testing

Scenario: Testing an online shopping cart that accepts quantities between 1 and 99:

  1. Boundary Values:

    • Minimum valid: 1
    • Maximum valid: 99
    • Just inside: 2, 98
    • Just outside: 0, 100
  2. Tests:

    • Add 1 item -> Should pass.
    • Add 99 items -> Should pass.
    • Add 0 items -> Should fail.
    • Add 100 items -> Should fail.

Outcome: Identify any issues with quantity validation logic (e.g., accepting more than 99 or rejecting valid quantities).

3.12 Coverage Testing

Software Coverage Testing is used to measure the extent to which the source code of a program has been tested during its execution. It ensures that every part of the code (statements, paths, branches, etc.) has been exercised at least once during testing. There are multiple types of code coverage techniques, including Statement Coverage and Path Coverage, which focus on different aspects of the code's execution.

Testing coverage is essential to verify the thoroughness of test cases and ensure the software does not contain untested, potentially defective code.

Coverage testing is crucial for ensuring comprehensive software quality, whether the focus is on statement coverage for basic validation or path coverage for rigorous logic testing. Both techniques are often combined during testing to achieve higher reliability and reduce code-level defects.

3.12.1 Statement Coverage

Statement Coverage is a metric used to measure whether each individual statement in the source code has been executed at least once during testing. This ensures that every line of code has been exercised to expose any potential defects in basic control structures.

How Statement Coverage Works:

  • Statement coverage counts the number of executed statements out of the total number of statements in the program.
  • The goal is to test every line of code.
  • It is expressed as a percentage: Statement Coverage = (Number of statements executed / Total number of statements) × 100

Example of Statement Coverage:

Consider the following code snippet:

def calculate_discount(price):
    if price > 100:
        return price * 0.10
    else:
        return price * 0.05

Test Cases for Full Statement Coverage:

    1. Input: price = 150 (Executes the if branch: return price * 0.10)
    2. Input: price = 50 (Executes the else branch: return price * 0.05)

With these two cases, all statements (if, else, and both return statements) are executed, achieving 100% Statement Coverage.
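
These two inputs can be captured as ordinary unit tests and measured with a coverage tool. The sketch below assumes a pytest-style test module and that calculate_discount() lives in a module named discount (an illustrative name, not a requirement); running it under Coverage.py (e.g., "coverage run -m pytest" followed by "coverage report") would confirm that every statement is executed.

# test_discount.py - exercises both branches of calculate_discount()
import pytest
from discount import calculate_discount   # module name assumed for illustration

def test_discount_above_100():
    assert calculate_discount(150) == pytest.approx(15.0)   # executes the "if" branch

def test_discount_100_or_below():
    assert calculate_discount(50) == pytest.approx(2.5)     # executes the "else" branch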

Benefits of Statement Coverage:

  1. Detects unexecuted statements in the code.
  2. Ensures that all lines of code are tested at least once.
  3. Improves confidence that basic functionality is covered.

Limitations of Statement Coverage

  1. Statement coverage does not guarantee logical correctness, as it does not consider multiple paths or branches within the code.
  2. It may miss bugs in conditional statements because it does not test every possible branch.
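
The second limitation is easy to demonstrate. In the hypothetical function below, a single test executes every statement (100% statement coverage) yet exercises only one of the four possible paths through the two independent conditions.

def classify(x, y):
    labels = []
    if x > 0:
        labels.append("x positive")
    if y > 0:
        labels.append("y positive")
    return labels

def test_classify_once():
    # Every statement above runs, but only the (x > 0, y > 0) path is tested;
    # the other three condition combinations remain unexercised.
    assert classify(1, 1) == ["x positive", "y positive"]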

3.12.2 Path Coverage

Path Coverage is a more thorough form of coverage testing that ensures all possible execution paths in the code are tested at least once. A path is any unique sequence of instructions from the start to the end of the program or a method. Path coverage tests all control-flow paths through conditional statements, loops, and branches.

How Path Coverage Works:

  • Path coverage ensures that every possible route (e.g., true branch, false branch) of the program is exercised.
  • The number of possible paths depends on the complexity of the code, especially the number of decisions (e.g., if-else conditions).
  • It is expressed as a percentage: Path Coverage = {Number of paths executed} / {Total number of paths} * 100

Example of Path Coverage:

Consider the following code snippet:

def check_number(num):
    if num > 0:
        if num % 2 == 0:
            return "Positive Even"
        else:
            return "Positive Odd"
    else:
        return "Negative or Zero"

Here, possible paths are:

  1. num > 0 and num % 2 == 0 → Returns "Positive Even".
  2. num > 0 and num % 2 != 0 → Returns "Positive Odd".
  3. num <= 0 → Returns "Negative or Zero".

Test Cases for Full Path Coverage:

  1. Test with num = 4 → Path: num > 0 and num % 2 == 0.
  2. Test with num = 7 → Path: num > 0 and num % 2 != 0.
  3. Test with num = -3 → Path: num <= 0.

       All possible paths are tested, achieving 100% Path Coverage.

Benefits of Path Coverage:

  1. Covers all potential branches and paths, ensuring thorough testing.
  2. Detects hidden bugs in complex decision structures.
  3. Improves code quality by ensuring all paths are functional.

Limitations of Path Coverage:

  1. May not be practical for very complex code, as the number of paths increases exponentially (combinatorial explosion).
  2. It requires extensive effort and multiple test cases for comprehensive coverage.

3.12.3 Statement Coverage vs Path Coverage

Aspect | Statement Coverage | Path Coverage
------ | ------------------ | -------------
Focus | Ensures all individual statements are executed. | Ensures all possible paths through the code are executed.
Thoroughness | Basic level of code coverage. | Thorough testing of control flow and logic.
Number of Test Cases | Usually requires fewer test cases. | Requires significantly more test cases for complex logic.
Bug Detection | May miss bugs in conditional structures. | Detects bugs in decision-making and branches.
Complexity | Simple and straightforward to implement. | More complex to analyze and achieve full coverage.
Use Case | Best for basic checks to confirm lines are executed. | Best for detailed testing of logic-heavy code or complex workflows.

3.12.4 Steps to Perform Coverage Testing

  1. Instrument the Code: Use tools or frameworks that can track which parts of the code are executed (e.g., JaCoCo for Java, Coverage.py for Python).
  2. Write Test Cases: Design test cases to target maximum code coverage. Prioritize paths and statements not previously executed.
  3. Execute Tests: Run test cases and collect coverage metrics.
  4. Analyze Results: Compare the achieved coverage percentage against expected levels (e.g., 80-100%, depending on project quality standards).
  5. Refine Test Cases: Add additional test cases to cover missing statements/paths and improve overall coverage percentage.
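
As a hedged illustration of steps 1 through 4, Coverage.py can also be driven from Python code rather than the command line. The sketch below assumes the calculate_discount() example from Section 3.12.1 lives in a module named discount (an illustrative name).

import coverage

cov = coverage.Coverage()                  # step 1: instrument the code
cov.start()

from discount import calculate_discount    # steps 2-3: exercise the code while coverage is recording
calculate_discount(150)
calculate_discount(50)

cov.stop()
cov.save()
total = cov.report(show_missing=True)      # step 4: print a coverage table; returns the overall percentage
print(f"Total statement coverage: {total:.1f}%")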

3.12.5 Example of Practical Usage

 Imagine testing a billing system that calculates discounts:

  • Statement Coverage: Ensures that every line of the discount calculation function is executed (e.g., "if discount > 5%, apply additional fee" is tested).
  • Path Coverage: Tests every path related to discount types (e.g., normal discount, bulk discount, VIP customer discount) to ensure no bugs in logic related to specific paths.

3.13 Software Mutation Testing

Software Mutation Testing, also known as Perturbation Testing, is a type of software testing used to evaluate the effectiveness of test cases by introducing small changes (mutations) to the code and checking whether the existing test cases can detect those changes. The goal is to ensure that the test suite is robust enough to catch faults or anomalies introduced into the software intentionally.

This testing method assesses the quality of the tests, rather than the software itself, by determining whether the tests can identify the "mutated" (faulty) versions of the program.

Mutation testing is a powerful technique for assessing the quality of test cases and identifying weaknesses in coverage, making it ideal for applications requiring high robustness and reliability. Although resource-intensive, it provides valuable insights into improving software testing practices.

3.13.1 Key Concepts in Mutation Testing

  1. Mutants:
    • Mutants are modified versions of the original code, created by introducing predefined small changes such as altering operators, variables, or logic.
    • Example: Changing + to - or > to <.
  2. Mutation Operators:
    • Rules defining the types of modifications to be introduced into the code.
    • Common mutation operators include:
      • Arithmetic operator replacement: Replace + with - or *.
      • Relational operator replacement: Replace > with <, or == with !=.
      • Logical operator replacement: Replace || with &&.
      • Variable replacement: Replace one variable with another.
      • Constant replacement: Change a numeric, Boolean, or string constant.
  3. Killed Mutants:
    • If a test case identifies a failure caused by the mutant (i.e., the output differs from the expected behavior), the mutant is considered "killed."
    • This shows that the test suite is effective.
  4. Survived Mutants:
    • If a mutant passes the tests and is not detected as faulty, it is considered "survived."
    • This indicates a weakness in the test suite.
  5. Mutation Score:
    • A metric that assesses the effectiveness of the test suite.
    • Mutation Score formula:    Mutation Score = {Killed Mutants}/{Total Mutants} * 100 
    • Example: If 80 out of 100 mutants are killed, the mutation score is 80%.

3.13.2 Steps in Mutation Testing

  1. Generate Mutants: Introduce small changes to the code using mutation operators. Mutants are created programmatically using tools.
  2. Run Test Cases: Execute the existing test suite on both the original code and all mutants.
  3. Analyze Test Results:
    • Compare the outputs of the mutants with the expected outputs.
    • Mutants producing incorrect outcomes should be detected by the test suite (i.e., killed).
  4. Evaluate Mutation Score: Calculate the mutation score and determine the effectiveness of the existing test suite.
  5. Improve Test Coverage (if needed): If the mutation score is low (i.e., many mutants survive), identify gaps in the test cases and improve them.

3.13.3 Example of Mutation Testing

Original Code:

def add_numbers(a, b):
    return a + b

Mutant Code (Arithmetic operator mutation):

def add_numbers(a, b):
    return a - b  # "+" is replaced with "-"

Test Case:

assert add_numbers(2, 3) == 5 # Expected output for original code

Result:

    • For the original code, the test passes.
    • For the mutant code, the test fails (2 - 3 != 5), so the mutant is killed.
    • This indicates the test suite is effective at catching this type of error.
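
The kill/survive bookkeeping and the mutation score can be computed mechanically. The following is a minimal hand-rolled sketch (real mutation tools generate mutants automatically); the second mutant is deliberately equivalent to the original, illustrating the "equivalent mutants" limitation discussed below.

def add_numbers(a, b):
    return a + b                      # original implementation

def mutant_subtract(a, b):
    return a - b                      # arithmetic operator mutation: "+" replaced with "-"

def mutant_swapped(a, b):
    return b + a                      # equivalent mutant: behaves identically to the original

def suite_passes(func):
    # Stand-in for running the whole test suite against an implementation.
    return func(2, 3) == 5

mutants = [mutant_subtract, mutant_swapped]
killed = sum(1 for m in mutants if not suite_passes(m))   # a mutant is "killed" when a test fails
mutation_score = killed / len(mutants) * 100
print(f"Mutation score: {mutation_score:.0f}%")           # 50%: the equivalent mutant survives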

3.13.4 Benefits of Mutation Testing

  1. Improves Test Suite Effectiveness: Helps identify gaps or weaknesses in test coverage.
  2. Detects Hidden Defects: Ensures that tests can detect subtle code issues that might otherwise go unnoticed.
  3. Quantifies Test Quality: Provides a measurable metric (mutation score) to evaluate the robustness of test cases.
  4. Focuses on Fault Tolerance: Ensures the software can resist changes introduced by mistakes or poor code practices.

3.13.5 Limitations of Mutation Testing

  1. Computational Cost: Generating and testing a large number of mutants can be resource-intensive, especially for large programs.
  2. Equivalent Mutants Issue: Some mutants are functionally identical to the original code (e.g., changing x * 1 to x) and cannot be detected as faulty, reducing the usefulness of testing.
  3. Time-Consuming: Requires significant time to set up mutation operators, generate mutants, and execute tests.
  4. Complex Analysis: Analyzing survived mutants to improve test cases may require a deep understanding of the code.

3.13.6 Tools for Mutation Testing

Several tools are available for performing mutation testing, such as PIT (Pitest) for Java, mutmut for Python, and Stryker for JavaScript.

3.13.7 When to Use Mutation Testing

  • Strengthening Test Suites: When you want to ensure your test cases are comprehensive and robust.
  • Critical Systems: For applications with high reliability requirements (e.g., medical systems, financial software, aviation systems).
  • Test Automation: In automated testing environments to improve coverage and quality of test cases.

3.13.8 Mutation Testing vs Other Testing Techniques

Aspect | Mutation Testing | Statement Coverage Testing
------ | ---------------- | --------------------------
Purpose | Evaluate the test suite’s ability to detect faults. | Check whether statements are executed during testing.
Focus | Introduces artificial faults to test detection quality. | Measures coverage of executed code statements.
Complexity | High complexity due to mutant generation and analysis. | Lower complexity, focuses on simple execution checks.
Outcome | Determines gaps in test cases and improves robustness. | Provides a coverage percentage for tested statements.
Cost | Computationally expensive for large programs. | Less expensive compared to mutation testing.

3.13.9 Example Use Case

Scenario: You are testing an e-commerce application’s discount calculation module.

  1. Apply mutation testing to intentionally alter the discount formula (e.g., replace * with /) and generate mutants.
  2. Run your test suite against the mutants to validate whether test cases can identify the incorrect calculation.
  3. Refine your tests if any mutants survive, ensuring comprehensive test coverage for the entire logic.

3.14 Smoke Testing

Smoke Testing is a type of software testing used to verify whether the most crucial functionalities of the application are working properly and the software build is stable enough for further testing. Think of it as a pre-check that allows testers and developers to validate that the basic functionality of the system is intact before diving into more detailed testing (e.g., System Testing or Regression Testing).

Smoke Testing is often referred to as a "sanity check" or build verification testing and is typically performed early on in the testing lifecycle to catch major issues before investing time and effort in more comprehensive testing.

Smoke testing plays a vital role in the software testing lifecycle by ensuring that basic functionalities and critical features of the application are working as expected after each build. It acts as the first line of defense to identify major flaws and build instability before investing in thorough testing. Combined with continuous builds and automated tools, smoke testing ensures rapid feedback and smooth progression into more advanced testing phases, such as regression or system testing. 

3.14.1 Key Characteristics of Smoke Testing

  1. High-Level Validation: Focuses on validating the core functionality of the application, not deep or detailed testing.
  2. Quick Execution: Aims to save time by ensuring the system's overall "testability" without going into intricate test cases.
  3. Critical Path Testing: Tests only high-priority features, basic workflows, and functionalities that, if broken, would render the software unusable.
  4. Trigger for Further Testing: Acts as a gateway; the build must pass smoke testing before entering advanced testing stages (e.g., functional or regression testing).
  5. Non-comprehensive: Does not cover edge cases or advanced scenarios; it ensures the application doesn’t fail in its basic operation.

3.14.2 Objectives of Smoke Testing

  • Verify whether a new software build is stable enough for further, more detailed testing.
  • Detect critical bugs early in the testing lifecycle (e.g., broken login functionality, crashed application).
  • Avoid wasting time conducting elaborate tests on a defective or unstable build.
  • Validate basic functionality after new updates, features, or changes to the codebase.

3.14.3 When is Smoke Testing Performed?

Smoke testing is typically performed in the following scenarios:

  1. After Every New Build: Following code integration, a new build is released. Smoke testing checks whether this build is functional.
  2. After Fixing Critical Bugs: Ensures a defect fix hasn’t caused new showstopper issues.
  3. Before Detailed Testing Begins: Confirms whether the application is ready for detailed testing phases (e.g., regression testing).

3.14.4 Types of Smoke Testing

  1. Manual Smoke Testing:
    • Testers manually perform smoke testing by running a set of high-priority test cases.
    • Best suited for smaller projects or builds with minimal changes.
  2. Automated Smoke Testing:
    • Automated tools execute smoke testing scripts for predefined areas of the application.
    • Best suited for frequent builds or larger projects with CI/CD pipelines.

3.14.5 What is Tested in Smoke Testing?

  1. Critical Functionality:
    • Basic application workflows.
    • Examples:
      • Can users log in to the system?
      • Can they perform basic navigation between screens/pages?
  2. High-Priority Features:

    • Features constrained by business requirements.
    • Examples:
      • Can payment processing work successfully in an e-commerce application?
      • Does the admin dashboard load correctly?
  3. Major Integrations:

    • Interfaces with external systems and APIs.
    • Examples: Does user authentication via external services (e.g., Google login) work?
  4. Key Dependencies:

    • Areas impacted by recent changes or updates.
    • Examples: Does the system crash after deploying a new build/update?

3.14.6 Advantages of Smoke Testing

  1. Early Detection of Major Issues: Quickly identifies critical defects, minimizing backtracking and saving time.
  2. Prevention of Time and Resource Waste: Blocks comprehensive testing on unstable builds, ensuring instability is resolved first.
  3. Better Build Quality: Gradually improves build reliability and stability for downstream testing phases.
  4. Optimized Feedback Loop: Helps development teams identify issues sooner in agile and iterative workflows.

3.14.7 Smoke Testing Process

Follow these structured steps to conduct smoke testing:

  1. Identify Critical Features To Test
    • Select core functionalities critical for system operation—these are often derived from business requirements or user stories.
    • Focus on features such as login/logout, navigation, and database connection.
  2. Prepare Test Cases
    • Write high-level test cases or scripts for smoke testing.
    • Test cases should be simple, focusing on whether a feature works (pass/fail criteria).
  3. Execute Tests
    • Run the test cases manually or using automated smoke testing tools on the new build.
    • Focus on quickly checking stability in essential areas.
  4. Document Results
    • Track and record test results.
    • Note any showstopper defects in functionality (e.g., crashes, errors, or missing features).
  5. Provide Feedback
    • Share testing results with the development team for fixes.
    • If smoke testing passes successfully, move to the next level of testing (e.g., functional/system/regression testing).
  6. Fix Issues and Retest
    • Address defects identified and retest critical features until the build passes smoke testing without issues.
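
A minimal automated smoke test might look like the sketch below. The base URL, endpoints, and credentials are hypothetical placeholders; a real suite would target the project's own critical paths identified in step 1.

# smoke_test.py - quick pass/fail checks on a new build's most critical paths
import requests

BASE_URL = "https://staging.example.test"   # hypothetical deployment under test

def test_application_responds():
    # Build verification: the application is reachable at all.
    assert requests.get(BASE_URL, timeout=10).status_code == 200

def test_login_page_loads():
    # Critical functionality: users can reach the login screen.
    assert requests.get(f"{BASE_URL}/login", timeout=10).status_code == 200

def test_login_with_valid_credentials():
    # Critical workflow: a known test account can authenticate (API shape assumed).
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"user": "smoke_test_user", "password": "********"},
        timeout=10,
    )
    assert response.status_code == 200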

3.14.8 Smoke Testing Template

Here’s a sample documentation template for a smoke testing checklist:

Test Case ID | Feature | Test Description | Expected Outcome | Actual Outcome | Status (Pass/Fail) | Comments
------------ | ------- | ---------------- | ---------------- | -------------- | ------------------ | --------
ST001 | Login Feature | Verify login with valid credentials | User successfully logs in | User logs in | Pass | N/A
ST002 | Navigation Feature | Verify user can navigate to Dashboard | Dashboard loads correctly | Blank page displays | Fail | Dashboard module missing!
ST003 | Payment Processing | Verify payment functionality | Payment completes successfully | Payment error shown | Fail | Incorrect API configuration!

3.14.9 Tools for Automated Smoke Testing

  1. Selenium WebDriver:
    1. Automate basic UI functional tests.
    2. Language Support: Python, Java, JavaScript, etc.
  2. Appium: Mobile application smoke testing on Android/iOS.
  3. Jenkins: Automates smoke testing steps in a CI/CD pipeline.
  4. Postman: Smoke testing APIs for basic interactions.
  5. TestNG: Automation frameworks for basic test scripts execution.

3.14.10 Challenges in Smoke Testing

  1. Incomplete Test Coverage: Since smoke testing targets only basic functionality, deeper issues may remain unidentified.
  2. Dependencies on Test Environments: Issues like broken builds or unstable environments may disrupt testing.
  3. Time Constraints: Effective smoke testing requires concise test case designs within limited windows.

3.15 Software Testing for Safety-Critical Systems

Safety-critical NASA systems often demand rigorous testing methodologies due to their potential impact on human safety, health, and mission-critical objectives. These systems must adhere to strict standards and reliability requirements. The following are types of software testing commonly used in safety-critical systems to ensure their reliability, robustness, and compliance.

Safety-critical systems require stringent testing covering functional, non-functional, fault tolerance, and compliance-related aspects to prevent system failures that could harm lives, property, or the environment. Leveraging specialized testing methodologies like fault injection, hazard analysis, and compliance testing ensures these systems meet strict reliability and safety standards.

  1. Functional Testing (see Section 3.6)
    Functional testing (a.k.a. Requirements-Based Testing) verifies that the system behaves according to its defined requirements and specifications.
    • Purpose: Validate the functionality of the system to ensure it performs as intended.
    • Example: Testing whether a medical monitoring device correctly measures and displays a patient's heart rate.
    • Why it’s used in safety-critical systems:
      • Guarantee compliance with requirements.
      • Identify defects in system functionality that could lead to hazards.

  2. Failure Mode Testing (FME Testing)
    Failure Mode Testing evaluates how the system handles potential failure conditions.
    • Purpose: Test the system's ability to transition to a safe state under specific failure scenarios (failure modes).
    • Example: For an aircraft autopilot system, simulate sensor failures and ensure the autopilot disengages safely.
    • Why it’s used in safety-critical systems:
      • Assess resilience and fault tolerance.
      • Ensure system behavior in abnormal conditions prevents accidents.

  3. Boundary Conditions Testing (see Section 3.11)
    Boundary Conditions Testing focuses on testing the limits of inputs and outputs to ensure the system operates correctly within defined boundaries.
    • Purpose: Detect defects that occur at the extremes of allowable input ranges.
    • Example: Testing a pressure sensor to confirm it safely handles maximum and minimum pressures.
    • Why it’s used in safety-critical systems:
      • Prevent incorrect behavior at the boundaries that may lead to catastrophic failures.
      • Validate adherence to design specifications.

  4. Robustness Testing
    Robustness Testing evaluates the system's ability to perform dependably in unpredictable or adverse conditions.
    • Purpose: Ensure the system does not crash or behave unexpectedly under stress or when exposed to invalid data.
    • Example: Testing a medical infusion pump with corrupted drug dosage data to ensure the system halts operation safely.
    • Why it’s used in safety-critical systems:
      • Detect vulnerabilities under abnormal conditions.
      • Guarantee fail-safe mechanisms are in place to protect human life.

  5. Reliability Testing
    Reliability Testing measures the ability of the system to perform correctly and consistently over an extended period.
    • Purpose: Validate the system's stable functioning and reliability over time.
    • Example: Testing a fire alarm system continuously for 48 hours to ensure no interruptions or malfunctions.
    • Why it’s used in safety-critical systems:
      • Minimize the possibility of faults in long-term operation.
      • Demonstrate compliance with safety reliability standards.

  6. Stress Testing (See Section 3.7)
    Stress Testing evaluates the system's ability to handle extreme workloads or conditions without breaking down.
    • Purpose: Identify system limitations and behavior when subjected to extreme resource usage or load conditions.
    • Example: Testing a heart monitor's ability to compute data from 1,000 simultaneous inputs without latency.
    • Why it’s used in safety-critical systems:
      • Assess system performance under worst-case scenarios.
      • Validate emergency operation capability.

  7. Integration Testing (See Section 3.2)
    Integration Testing focuses on verifying interactions between various software modules and hardware components.
    • Purpose: Ensure seamless communication and functionality between system components.
    • Example: Testing interactions between an air traffic control radar system and its corresponding communication module.
    • Why it’s used in safety-critical systems:
      • Prevent integration errors from propagating and causing malfunctions.
      • Verify consistent operation across subsystem interfaces.

  8. Regression Testing (See Section 3.5)
    Regression Testing ensures that new changes or updates to the codebase have not introduced new defects.
    • Purpose: Retest previously passing features after modifications or patches are applied.
    • Example: Testing a pacemaker after software updates to confirm existing functionality remains intact.
    • Why it’s used in safety-critical systems:
      • Ensure stability after software updates or bug fixes.
      • Avoid introducing safety-critical defects once software changes are deployed.

  9. Safety Testing
    Safety Testing focuses on ensuring the application operates in a secure and safe manner without causing harm to users or the environment.
    • Purpose: Validate the system's safe state under normal and abnormal conditions.
    • Example: Testing a collision avoidance system to confirm that the vehicle avoids obstacles.
    • Example techniques: MC/DC (Modified Condition/Decision Coverage) testing and cyclomatic complexity assessments.
    • Why it’s used in safety-critical systems:
      • Guarantee compliance with NASA Software Assurance and Software Safety Standard (NASA-STD-8739.8).
      • Ensure mitigation of health or safety risks.

  10. Usability Testing
    Usability Testing evaluates the user interface and user experience to ensure it is intuitive and error-free for end-users.
    • Purpose: Confirm the system is user-friendly, particularly for non-technical users in emergency or high-pressure scenarios.
    • Example: Testing the ease of use of a ventilator control panel in an emergency medical situation.
    • Why it’s used in safety-critical systems:
      • Prevent mis-operation that could result in errors or hazards.
      • Optimize user response time during critical scenarios.

  11. Performance Testing (See Section 3.8)
    Performance Testing ensures the system meets speed, responsiveness, and stability requirements under normal and heavy loads.
    • Purpose: Evaluate the timing and latency requirements for real-time systems.
    • Example: Testing a drone navigation system for consistent real-time processing while moving through obstacle-laden terrain.
    • Why it’s used in safety-critical systems:
      • Guarantee the system responds promptly to critical events.
      • Meet stringent real-time performance benchmarks.

  12. Hazard and Risk Testing
    Hazard Testing identifies system scenarios that could lead to failure or dangerous conditions.
    • Purpose: Evaluate risks and hazards associated with system behavior and verify risk mitigation mechanisms.
    • Example: Testing an automated surgical robot to ensure correct behavior if sensors fail during operation.
    • Why it’s used in safety-critical systems:
      • Reduce the likelihood of harm caused by unexpected failures.
      • Guarantee compliance with NASA Software Assurance and Software Safety Standard (NASA-STD-8739.8).

  13. Fault Injection Testing
    Fault Injection Testing intentionally introduces faults (e.g., memory errors, hardware failures, or corrupted inputs) into the system to assess its ability to recover gracefully.
    • Purpose: Ensure the system can handle faults and transitions to safer states as expected.
    • Example: Simulating network failures in an intensive care unit's monitoring system to confirm safe behavior.
    • Why it’s used in safety-critical systems:
      • Evaluate fault tolerance and error recovery mechanisms.
      • Validate fail-safe responses in dangerous scenarios.

  14. Compliance Testing
    Compliance Testing ensures the system meets industry safety-critical standards and regulations.
    • Purpose: Verify compliance with laws, policies, and industry standards.
    • Common Standards:
      • NASA Software Assurance and Software Safety Standard (NASA-STD-8739.8).
      • Other standards
        • ISO 26262 (Automotive)
        • IEC 62304 (Medical devices)
        • DO-178C (Avionics software)
        • MIL-STD-882 (Military systems)
    • Example: Testing whether NASA flight software complies with NASA-STD-8739.8, or whether medical device software complies with IEC 62304.
    • Why it’s used in safety-critical systems:
      • Avoid regulatory violations that could result in accidents or recalls.
      • Ensure software is properly validated per industry best practices.

  15. Endurance Testing (See Section 3.9)
    Endurance Testing evaluates the system's performance and stability over prolonged periods under typical conditions.
    • Purpose: Detect problems like memory leaks or degraded performance that may emerge after continuous operation.
    • Example: Testing a space shuttle’s navigation software continuously for weeks.
    • Why it’s used in safety-critical systems:
      • Ensure long-term stability in mission-critical or 24/7 operation scenarios.

  16. Embedded Testing
    Embedded Testing ensures the integration of software with hardware in embedded systems.
    • Purpose: Verify real-time interactions between the software and hardware components.
    • Example: Testing an airbag's deployment logic in response to sensor data.
    • Why it’s used in safety-critical systems:
      • Guarantee system responsiveness in hardware-dependent operations.
      • Ensure timely execution of embedded software functions.


4. Test Classes

In software testing, Test Classes refer to categories or classifications of testing techniques based on their goals, approaches, or scopes. These test classes help in structuring the testing process systematically to ensure comprehensive coverage and quality assurance.

Test classes represent distinct approaches to testing software with different goals, techniques, and coverage scopes. Depending on the application's type, purpose, and criticality (e.g., safety-critical environments), testers choose appropriate combinations of test classes to guarantee quality, reliability, and user satisfaction.

Below is a breakdown of commonly recognized software test classes:

  1. Functional Testing Class
    • Definition: Focuses on verifying that the software behaves as expected according to requirements and specifications.
    • Purpose: Test what the software is supposed to do (its functionality).
    • Techniques:
      • Unit Testing: Test individual functions or modules for specific functionality.
      • Integration Testing: Test interactions between modules or components.
      • System Testing: Test the entire application as a unified system.
      • Regression Testing: Validate that new code changes do not break existing functionality.
    • Example: Testing if a login feature allows users to successfully log into the application with valid credentials.

  2. Non-Functional Testing Class
    • Definition: Evaluates non-functional aspects of the software such as performance, usability, reliability, etc.
    • Purpose: Test how the software performs (behavior under certain conditions).
    • Techniques:
      • Performance Testing: Measure speed, responsiveness, and stability under load.
      • Stress Testing: Test system behavior under extreme conditions or loads.
      • Usability Testing: Assess the ease of use and user-friendliness of the application.
      • Scalability Testing: Analyze the system's ability to scale with user growth.
      • Security Testing: Check the robustness of the application against potential security breaches.
    • Example: Testing a website's responsiveness when 1,000 concurrent users attempt to access it.

  3. Structural Testing Class (White-Box Testing)
    • Definition: Tests the internal workings of the software by understanding the code structure, algorithms, and logic.
    • Purpose: Ensure internal implementations and logic are correct and avoid defects.
    • Techniques:
      • Statement Coverage Testing: Check if every line of code is executed during testing.
      • Path Coverage Testing: Ensure every possible path in the code is executed.
      • Branch Coverage Testing: Validate the behavior of code branches, such as if-else conditions.
      • Mutation Testing: Introduce small code changes (mutants) and verify if the test cases can catch them.
    • Example: Verifying a sorting algorithm by examining its logic to ensure it handles all edge cases correctly.

  4. Acceptance Testing Class
    • Definition: Determines whether the software meets the requirements and expectations of the end-users or stakeholders.
    • Purpose: Confirm the application is ready for deployment.
    • Techniques:
      • User Acceptance Testing (UAT): Validates the system against user requirements and business processes.
      • Alpha Testing: Performed by internal teams before the software is released.
      • Beta Testing: Performed by actual end-users in a real-world environment.
    • Example: Testing if an e-commerce website allows users to successfully purchase products, as expected by business stakeholders.

  5. Exploratory Testing Class
    • Definition: A testing approach where testers explore the application without predefined test cases.
    • Purpose: Find defects by dynamically interacting with the software.
    • Techniques:
      • Ad hoc testing without formal documentation.
      • Testers use their expertise and experience to identify potential weak areas.
    • Example: Random navigation across an application's features to identify unexpected crashes or errors.

  6. Compatibility Testing Class
    • Definition: Ensures the software works as intended across different platforms, devices, or environments.
    • Purpose: Verify cross-platform functionality and adaptability.
    • Techniques:
      • Browser Compatibility Testing: Test if a web application performs consistently across browsers.
      • OS Compatibility Testing: Ensure compatibility across operating systems like Windows, macOS, or Linux.
      • Device Compatibility Testing: Validate functionality across mobile, desktops, tablets, etc.
    • Example: Testing if a mobile app works appropriately on both Android and iOS devices.

  7. Security Testing Class
    • Definition: Tests the software for vulnerabilities and ensures the system is protected against unauthorized access or threats.
    • Purpose: Identify weaknesses that could be exploited by hackers or malicious entities.
    • Techniques:
      • Penetration Testing: Simulate attacks to check for vulnerabilities and security holes.
      • Authentication Testing: Validate password mechanisms, access restrictions, and identity verification processes.
      • Encryption Testing: Verify data protection mechanisms.
    • Example: Testing whether a banking app properly encrypts sensitive data like usernames, passwords, and transaction details.

  8. Regression Testing Class
    • Definition: Focuses on testing previously working functionality after code changes, updates, or bug fixes.
    • Purpose: Ensure the new changes have not introduced any new issues or broken older functionality.
    • Techniques:
      • Retesting: Re-running test cases that previously passed.
      • Automated Regression Testing: Using tools to automatically execute test cases after every release.
    • Example: Re-testing the checkout process in an online store after updating the database schema.

  9. Localization Testing Class
    • Definition: Validates that software is properly adapted for specific languages, regions, and cultural contexts.
    • Purpose: Ensure the application functions correctly for end-users in different global markets.
    • Techniques:
      • Test date/time formats, currency symbols, language translations, and text direction (e.g., left-to-right or right-to-left).
      • Verify language support for special characters.
    • Example: Testing if a mobile app displays prices in Euros when accessed in European countries.

  10. Load Testing Class
    • Definition: Tests how the software performs under expected loads or usage conditions.
    • Purpose: Validate performance, stability, and responsiveness under normal load conditions.
    • Techniques:
      • Simulate a large number of requests, users, or transactions.
      • Tools like Apache JMeter, LoadRunner, or Locust help generate virtual user loads (see the load test sketch after this list).
    • Example: Testing a payroll application by simulating 1,000 concurrent employees accessing their pay slips.

  11. Stress Testing Class
    • Definition: Evaluates how the software behaves under extreme conditions or overstress.
    • Purpose: Identify the breaking point of the system (e.g., capacity or resource limits).
    • Example: Testing a real-time GPS tracking system by sending continuous location updates for thousands of vehicles simultaneously.

  12. Recovery Testing Class
    • Definition: Tests the system's ability to recover from crashes, failures, or disasters.
    • Purpose: Validate fail-safe mechanisms and fault tolerance.
    • Techniques:
      • Simulate data corruption or server failures.
      • Measure how quickly and effectively the system resumes functioning after a failure.
    • Example: Testing how a database recovers data after a sudden power outage.

  13. End-to-End Testing Class
    • Definition: Validates the complete workflow of an application, from start to finish, simulating real-world conditions.
    • Purpose: Test the overall application flow to ensure everything works seamlessly together.
    • Example: Testing an online bookstore, from searching for a book to completing payment, generating receipt, and tracking shipment.

  14. Unit Testing Class
    • Definition: Tests individual functions or modules in isolation.
    • Purpose: Validate correctness and logic at the smallest level.
    • Techniques:
      • Write test cases for each function independently.
      • Use tools like JUnit (Java), pytest (Python), or NUnit (.NET).
    • Example: Testing a simple function that calculates a discount percentage independently.

  15. Accessibility Testing Class
    • Definition: Ensures the software is accessible to users with disabilities (e.g., vision, hearing, or motor impairments).
    • Purpose: Conform to accessibility standards like WCAG (Web Content Accessibility Guidelines).
    • Example: Testing a website’s compatibility with screen readers like NVDA or JAWS.
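
Returning to the Load and Stress Testing classes (items 10 and 11 above), the sketch below shows the general shape of a scripted load test using Locust, one of the tools named elsewhere in this topic; the host and endpoints are hypothetical.

# locustfile.py - simulate shoppers browsing a catalog (hypothetical endpoints)
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    wait_time = between(1, 3)           # each simulated user pauses 1-3 seconds between tasks

    @task(3)
    def browse_catalog(self):
        self.client.get("/products")    # weighted 3x: browsing dominates normal load

    @task(1)
    def view_cart(self):
        self.client.get("/cart")

# Example run with 1,000 simulated users spawned at 50 per second:
#   locust -f locustfile.py --host https://staging.example.test -u 1000 -r 50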

5. Test Progression

Testing progresses from smaller, focused areas to broader and integrated systems:

  1. Unit → Integration → System → Acceptance → Regression
  2. Non-functional tests (Performance, Security, Compatibility) are conducted after integration/system testing.

Following software test progression ensures defects are discovered early, the software meets functional and non-functional requirements, and it is ready for deployment in production environments. Different tests occur at different stages, but progression is always aimed at improving quality and reliability.

Test progression addresses the sequence or ordering of tests. The Software Test Plan describes any dependencies among tests that require them to be performed in a particular order.

Software Test Progression refers to the systematic stages or phases in which testing is conducted during the software development life cycle (SDLC) to ensure the software meets its requirements and is free from defects. Test progression encompasses a step-by-step advancement of testing methods, complexity, and scope, starting from testing individual components to evaluating the system as a whole.

In modern Agile and DevOps methodologies, testing progression is often compressed into continuous testing cycles that occur alongside development. Key concepts include:

  1. Shift Left Testing: Testing begins early in the SDLC (e.g., during requirements and design phases) to catch defects sooner.
  2. Continuous Testing: Automated testing is integrated into the CI/CD pipeline to perform frequent regression and performance tests on every build.

Below is an in-depth explanation of software test progression, covering testing phases and their goals:

  1. Unit Testing (Lowest Level Testing)
    • Definition: Unit testing is the first step in testing progression, where individual components, functions, or methods of the software are tested in isolation.
    • Objective:
      • Validate the correctness of the smallest testable parts of the code.
      • Catch bugs early in development.
    • Focus: Logic errors, edge cases, boundary conditions, and exceptions within individual modules.
    • Performed By: Developers (often automated using frameworks like JUnit, NUnit, pytest).
    • Example: Testing a function that calculates the total price after applying discounts.
    • Tools: JUnit (Java), NUnit (.NET), pytest (Python), Jasmine (JavaScript).

  2. Integration Testing
    • Definition: After unit testing, integration testing focuses on testing the interaction between integrated modules or components.
    • Objective: Detect interface defects, data flow issues, or communication errors between modules.
    • Focus: Testing interfaces, communication protocols, and data exchanges between subsystems.
    • Performed By: Testers or developers.
    • Approaches:
      • Top-Down Integration: Start testing high-level modules and progressively test lower-level modules.
      • Bottom-Up Integration: Test lower-level modules first and progressively integrate higher-level modules.
    • Example: Testing a web application where the login module interacts with the user database.
    • Tools: Postman (API testing), SOAP UI, Karate, or other interface testing tools.

  3. System Testing
    • Definition: Once all modules are integrated and working together, system testing evaluates the software as a complete unit.
    • Objective: Test the application against specified system requirements to ensure it performs as expected.
    • Focus: Functional testing, non-functional testing (performance, scalability, etc.), usability, and reliability.
    • Performed By: Testers in a controlled environment similar to production.
    • Example: Testing an e-commerce website to ensure it allows users to search products, add items to the cart, and make purchases smoothly.
    • Tools: Selenium (Web automation), Appium (Mobile testing), LoadRunner (performance testing).

  4. Acceptance Testing
    • Definition: Acceptance testing evaluates whether the application meets the business requirements and is ready for deployment.
    • Objective: Ensure the software satisfies end-users, stakeholders, and business workflows.
    • Focus: Verifying user requirements, functionality, and overall system quality.
    • Performed By: Stakeholders, domain experts, end-users, or Test teams.
    • Types:
      • User Acceptance Testing (UAT): Validation by end-users.
      • Alpha Testing: Performed by internal stakeholders.
      • Beta Testing: Performed by external users in real-world environments.
    • Example: Testing whether a banking app allows customers to securely send money and view their transaction history.

  5. Regression Testing
    • Definition: Regression testing is conducted after new code changes, enhancements, or bug fixes to ensure they do not negatively impact the existing functionality.
    • Objective: Confirm stability and consistency of the software after modifications.
    • Focus: Re-running existing test cases for previously tested features.
    • Performed By: Developers, Testers (often automated).
    • Example: Testing an e-commerce application’s checkout process after adding new payment methods.
    • Tools: Selenium, TestNG, Appium, or automated regression testing frameworks.

  6. Performance Testing
    • Definition: Performance testing evaluates how the application responds under various workloads, ensuring it meets timing and scalability requirements.
    • Objective: Assess system behavior under expected and peak loads.
    • Focus: Speed, responsiveness, stability, resource usage, and capacity.
    • Performed By: Testers specializing in non-functional testing.
    • Types
      • Load Testing: Test average workload conditions.
      • Stress Testing: Test the system's limits under extreme conditions.
      • Scalability Testing: Measure performance as the system scales up with user growth.
    • Example: Simulating 10,000 concurrent users accessing a banking website.
    • Tools: Apache JMeter, LoadRunner, Gatling, Locust.

  7. Security Testing
    • Definition: Security testing ensures the software is protected against vulnerabilities and unauthorized access.
    • Objective: Test for vulnerabilities such as data breaches, improper authorization, or injection attacks.
    • Focus: Authentication, data encryption, penetration testing, secure communication.
    • Performed By: Security testers or ethical hackers.
    • Example: Identifying potential SQL injection vulnerabilities in a database-backed web application.
    • Tools: OWASP ZAP, Burp Suite, Nessus, Acunetix.

  8. Compatibility Testing
    • Definition: Compatibility testing ensures the software performs correctly across different environments, platforms, devices, or systems.
    • Objective: Validate the application across various configurations (e.g., browsers, resolutions, OS versions, devices).
    • Focus: Platform, browser, device, and configuration compatibility.
    • Performed By: Developers, testers.
    • Example: Testing a website for consistent rendering on Chrome, Firefox, Safari, and Edge browsers.
    • Tools: BrowserStack, CrossBrowserTesting, Sauce Labs.

  9. Recovery Testing (Resilience Testing)
    • Definition: Recovery testing evaluates how resilient the system is after encountering errors, crashes, or other failures.
    • Objective: Ensure the system gracefully recovers and resumes operations after failures.
    • Focus: Fail-safe mechanisms, data recovery procedures, and reliability in failure scenarios.
    • Performed By: Testers and system engineers.
    • Example: Testing how a database-backed system recovers after unexpected power outages.

  10. Exploratory Testing
    • Definition: Exploratory testing involves unscripted testing where testers dynamically explore the application. It is often ad-hoc.
    • Objective: Identify unexpected bugs or weak points that structured test cases might miss.
    • Focus: Tester creativity, random inputs, edge cases.
    • Performed By: End-users, Test teams, or other experienced testers with a deep understanding of the application.
    • Example: Clicking random buttons on a banking app to ensure no hidden crashes.

  11. Specialized Testing
    This refers to additional testing approaches depending on the specific requirements of the software. Examples include:
    1. Localization Testing: Validate applicability for different languages, regions, or cultural settings.
    2. Compliance Testing: Ensure regulatory compliance (e.g., HIPAA for healthcare, ISO 26262 for automotive systems).
    3. Endurance Testing: Test system stability for long-term operation.

6. Test Schedules

Software Test Schedules outline the timeline and plan for executing testing activities during the software development life cycle (SDLC). A well-defined test schedule ensures that testing is implemented systematically, deadlines are met, and deliverables are completed on time. Scheduling testing activities involves identifying test phases, allocating resources, estimating effort, ordering tasks, and planning around project milestones.

A structured software test schedule is critical for ensuring that testing activities are performed efficiently and aligned with project milestones, resource availability, and system requirements. By planning phases like unit, integration, system, and regression testing systematically, teams can deliver high-quality software within the project timeframe. Use schedules as living documents that adapt to project changes and continuously review progress for successful delivery.

Information to consider for test schedules includes:

  • List or chart showing time frames for testing at all test sites.
  • Schedule of test activities for each test site, including on-site test setup, on-site testing, data collection, retesting, etc.
  • Milestones relevant to the development life cycle.
  • If the software is safety-critical, test witnessing participants' (e.g., Software Assurance) availability.

See also SWE-065 - Test Plan, Procedures, Reports, SWE-066 - Perform Testing, and SWE-024 - Plan Tracking. See also Topic 7.08 - Maturity of Life Cycle Products at Milestone Reviews.

6.1 Software Test Schedules, Their Components, And How To Create One Effectively

6.1.1 Components of Software Test Schedules

  1. Testing Phases:

    • Unit Testing
    • Integration Testing
    • System Testing
    • Acceptance Testing
    • Regression Testing
    • Non-functional Testing (e.g., Performance, Security)
  2. Activities:

    • Test Planning
    • Test Design (writing test cases)
    • Test Execution
    • Reporting defects
    • Retesting fixes (defect closure)
    • Test Metrics and Analysis
  3. Resources:

    • Testers (e.g., QA team, system or independent testers)
    • Test Witnesses (e.g., Software Assurance) for safety-critical components
    • Tools (automation tools, bug trackers, performance testing frameworks)
  4. Deadlines:

    • Align testing activities with development timelines and ensure testing completion before deployment.
  5. Dependencies:

    • Ensure testing aligns with dependencies like code release, availability of environments, and stakeholder input.

6.1.2 Steps to Create an Effective Software Test Schedule

Follow these steps to design a robust software test schedule:

  1. Identify Testing Phases
    Define all phases of testing, their scope, and objectives to match the development stage:
    • Unit Testing during early development.
    • Integration Testing after individual modules are ready.
    • System Testing following complete integration.
    • User Acceptance Testing closer to deployment.

      Example:
      • Phase: System Testing → Objectives: Validate end-to-end functionality across modules within a staging environment.

  2. Define Testing Activities
    Break testing into specific tasks or activities such as:
    • Preparing test cases.
    • Setting up test environments.
    • Executing tests.
    • Logging and tracking defects.

      Example Activities:
      • Activity: Test Environment Setup → Time: 2 days → Dependency: Staging environment availability.

  3. Prioritize Testing Tasks
    Testing activities should be prioritized based on the criticality of features, project deadlines, and potential risks:
    • Critical modules (e.g., payment systems or data encryption logic) are tested first.
    • Lower-risk modules (e.g., UI aesthetics) can be scheduled later.

  4. Estimate Effort
    Calculate the time required for testing activities:
    • Consider the number of test cases, complexity of the application, resources, and dependencies.
    • Effort estimation is often performed using models such as Test Point Analysis, Experience-Based Estimation, or Work Breakdown Structure (WBS).

      Example:
      • Activity: Writing 100 test cases → Estimated Time: 5 days → Based on historical project data.

  5. Allocate Resources
    Assign team members and tools to specific testing activities:
    • Tester roles (manual or automation testing).
    • Test witnessing roles (from Safety & Mission Assurance organization)
    • Tools such as Selenium (functional), JMeter (performance), OWASP ZAP (security).
    • Ensure resource availability for peak workload periods.

  6. Define Milestones
    Set milestones to track testing progress and align them with the overall project timeline.
    • Examples of milestones:
      • Completion of Unit Testing.
      • Completion of Functional Testing.
      • Generation of the final test report.

  7. Manage Dependencies
    • Testing often relies on development deliverables, environments, and tools being available.
    • Plan buffer time for risks such as delayed code releases or test environment issues.

  8. Track Progress
    • Develop regular checkpoints or schedules to monitor the progress of testing.
    • Use tools like Test Management Systems (e.g., JIRA, TestRail) for visibility into ongoing tasks.

6.1.3 Example Test Schedule Template

Here's an example schedule for software testing:

Phase | Activity | Team | Start Date | End Date | Dependencies | Status
----- | -------- | ---- | ---------- | -------- | ------------ | ------
Unit Testing | Write unit test cases | Dev Team | Day 1 | Day 3 | Developers complete module code | In progress
Integration Testing | API integration tests | Test Team | Day 4 | Day 6 | Functional API ready | Yet to start
System Testing | End-to-end testing | Test Team | Day 7 | Day 10 | All modules integrated | Yet to start
Stress Testing | Simulate max load | Perf Team | Day 11 | Day 12 | Test environment setup | Yet to start
User Acceptance | UAT session with client | Client, End-Users | Day 13 | Day 15 | Acceptance criteria aligned | Yet to start

6.1.4 Common Challenges in Test Scheduling

  1. Unrealistic Deadlines: Inadequate time given for writing and executing test cases.

  2. Poor Planning: Missing activities such as regression or risk-based testing during the schedule.

  3. Resource Availability: Testers or environments may not be available at critical stages of testing.

  4. Changing Requirements: If requirements change frequently, it impacts the testing timeline and schedule.

6.1.5 Tips for Effective Test Scheduling

  1. Start testing early in the development lifecycle (Shift-Left Testing).
  2. Plan for buffer time to account for delays or unforeseen bugs.
  3. Use test management tools (e.g., Xray, Zephyr, TestRail) to automate scheduling and tracking.
  4. Collaborate with development and stakeholder teams to align schedules with dependencies and deadlines.
  5. Revisit and refine schedules periodically based on progress or risks.

6.1.6 Tools for Test Scheduling and Management

  1. JIRA: Workflow management with test scheduling plugins (e.g., Zephyr).
  2. TestRail: A test management tool for tracking test executions and schedules.
  3. Microsoft Excel: Simpler scheduling for smaller teams using spreadsheet templates.
  4. Azure DevOps: Integrated scheduling for Agile and DevOps projects.
  5. Monday.com/Trello: Kanban boards for tracking testing progress visually.


7. Acceptance Criteria 

Software Acceptance Criteria are predefined conditions, standards, or requirements that a software product must meet to be considered acceptable by stakeholders, end-users, or clients. These criteria serve as a benchmark for determining whether the software fulfills its intended purpose and business objectives, and whether it is ready for deployment.

Acceptance criteria are typically created during the early stages of the software development lifecycle, often as part of the requirements gathering and planning phase, and are used as the basis for User Acceptance Testing (UAT) and validating software quality.

Acceptance criteria act as the bridge between requirements and validated deliverables in software projects. They ensure that all stakeholders have a unified understanding of what the software should do and provide a clear definition of completion, ultimately contributing to consistent quality and user satisfaction. 

Acceptance (or exit) criteria may also be defined for a set of tests as a whole; for example, 95% of test cases must pass (i.e., meet their expected results).

7.1 Key Characteristics of Acceptance Criteria

  1. Clear and Concise: Criteria should be well-defined, specific, and easy to understand to avoid ambiguity.

  2. Testable: Acceptance criteria must be measurable and testable to verify whether the software meets the requirements.

  3. Traceable: Acceptance criteria should trace back to the original business requirements or user stories, ensuring alignment.

  4. Negotiable: Criteria can be refined based on stakeholder feedback.

  5. Binary Outcome: Evaluation should result in "Pass" or "Fail" to clearly indicate whether the software meets the expectations.

7.2 Types of Software Acceptance Criteria

  1. Functional Acceptance Criteria
    • Define what the software is supposed to do.
    • Examples:
      • "The system must allow users to reset their passwords."
      • "The application must calculate sales tax correctly based on the user's location."
  2. Non-Functional Acceptance Criteria
    • Address qualities such as performance, scalability, security, and usability.
    • Examples:
      • "The page load time must not exceed 2 seconds under normal load conditions."
      • "The system must adhere to WCAG 2.0 accessibility standards."
  3. Business Acceptance Criteria
    • Ensure alignment with stakeholder requirements.
    • Examples:
      • "The user registration process must have a completion rate of at least 95% during beta testing."
  4. Regulatory Compliance Criteria
    • Ensure the software complies with applicable legal, regulatory, or industry standards.
    • Examples:
      • "The software must comply with GDPR regulations concerning data privacy."
      • "The system must meet ISO 26262 standards for automotive safety-critical software."
  5. User Interface (UI)/User Experience (UX) Criteria
    • Define usability and design expectations.
    • Examples:
      • "The application must provide a search functionality that displays results within 1 second."
      • "All buttons must use consistent styling and labeling."
  6. System Integration Acceptance Criteria
    • Ensure successful connectivity and communication between systems or components.
    • Examples:
      • "The payment gateway must integrate successfully with the checkout module."
      • "The system must sync user data with the client’s internal CRM tool."
  7. Security Acceptance Criteria
    • Define security requirements for the software.
    • Examples:
      • "All user passwords must be stored using SHA-256 encryption."
      • "The system must lock the user account after 5 failed login attempts."

7.3 Examples of Software Acceptance Criteria

Here are examples to illustrate how acceptance criteria might look for different applications:

  1. Functional Criteria:
    • Feature: User Login
      • The system must allow registered users to log in using their email and password.
      • Error messages must display for incorrect credentials.
  2. Non-Functional Criteria:
    • Performance: The application must handle up to 1,000 concurrent users without downtime.
    • Security: Sensitive data must be encrypted during transmission using SSL/TLS.
  3. Business Criteria:
    • Success Metrics: The shopping cart abandonment rate should decrease by 20% after implementation of the new payment flow.
  4. UX/UI Criteria: 
    • Design Consistency: Every form field must display a placeholder text and provide visual feedback (green checkmark or error message) upon validation.

7.4 How Acceptance Criteria Are Used

  1. User Stories:

    • Often written alongside user stories in Agile development to define “done” for a feature.
    • Example user story: "As a user, I want to reset my password so that I can regain access to my account."
    • Acceptance Criteria:
      • "The user must receive a reset password email."
      • "The new password must conform to specified complexity rules."
  2. Testing Guidance:

    • Test cases for UAT, system testing, and functional testing are derived directly from acceptance criteria.
    • Example: Test case for "The page load time must not exceed 2 seconds" includes performance testing under simulated loads.
  3. Collaboration and Validation: Serve as a common reference for developers, testers, and business stakeholders to validate requirements.

  4. Project Closure: Used during UAT or client reviews to determine whether the software meets delivery goals.

7.5 Benefits of Software Acceptance Criteria

  1. Ensures Alignment: Provides clarity between teams (developers, testers, stakeholders) about expectations.

  2. Reduces Ambiguity: Creates a clear definition of what constitutes "completed" or "done."

  3. Improves Testing: Facilitates writing well-structured test cases and ensures traceability between requirements and testing.

  4. Enhances Quality: Promotes adherence to high-quality standards and avoids unmet expectations.

  5. Prevents Scope Creep: Limits scope by defining clear boundaries for features and requirements.

7.6 Best Practices for Defining Acceptance Criteria

  1. Use SMART Criteria: Specific, Measurable, Achievable, Relevant, Time-bound.

  2. Collaborate with Stakeholders: Define criteria with input from developers, testers, and business teams.

  3. Make Criteria Actionable:

    • Avoid vague statements (e.g., "The software should work well").
    • Use actionable language (e.g., "The software must process 500 transactions per second").
  4. Organize by Priority: Focus on high-priority features first, and tackle less critical criteria when they do not interfere with deadlines.

  5. Review and Refine: Continuously review criteria alongside changes in requirements or scope during development.

  6. Adopt Gherkin Syntax for Clarity (Optional):

    • Gherkin syntax (used in Behavior-Driven Development tools like Cucumber) employs "Given, When, Then" format for acceptance criteria.
    • Example:
      • Given a registered user is on the login page,
      • When incorrect credentials are entered,
      • Then an error message is displayed.
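
Where a project chooses to automate its acceptance criteria, the Given/When/Then structure maps directly onto test code. The following is a minimal sketch in Python using pytest conventions; the login() function, its messages, and the credential data are hypothetical stand-ins for the system under test, not part of any specific project.

```python
# Minimal sketch: one acceptance criterion expressed as an automated check.
# login() and its return values are hypothetical stand-ins for the system
# under test; the Given/When/Then structure is the point of the example.

def login(email: str, password: str) -> dict:
    """Hypothetical stand-in for the real authentication service."""
    registered = {"user@example.com": "CorrectHorse!42"}
    if registered.get(email) == password:
        return {"status": "ok", "error": None}
    return {"status": "error", "error": "Invalid email or password."}


def test_login_rejects_incorrect_credentials():
    # Given a registered user is on the login page
    email = "user@example.com"
    # When incorrect credentials are entered
    result = login(email, "wrong-password")
    # Then an error message is displayed (binary pass/fail outcome)
    assert result["status"] == "error"
    assert result["error"] == "Invalid email or password."
```

Each criterion evaluated this way yields a binary pass/fail outcome, consistent with the key characteristics listed earlier in this section.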

8. Test Coverage

8.1 Test Coverage Or Other Methods For Ensuring Sufficiency Of Testing

If not addressed elsewhere in the Software Test Plan, provide a description of the methods to be used for ensuring sufficient test coverage. 

Test coverage refers to the extent to which the software is tested, both in terms of functionality (breadth) and thoroughness (depth). It ensures that all critical areas of the application are validated against requirements, reducing the risk of undetected defects.

Methods for Ensuring Sufficient Test Coverage

  1. Requirements Traceability Matrix (RTM)
    • Definition: A document that maps each requirement to one or more test cases.
    • Purpose: Ensures that all functional and non-functional requirements are tested.
    • Best Practices:
      • Maintain the RTM throughout the project lifecycle.
      • Update it as requirements or test cases evolve.
    • Benefit: Guarantees complete functional coverage and supports auditability.

  2. Code Coverage Analysis
    • Definition: A metric that shows which parts of the codebase are executed during testing.
    • Types:
      • Statement Coverage: Checks if each line of code is executed.
      • Branch Coverage: Ensures all possible branches (e.g., if/else) are tested.
      • Path Coverage: Validates all possible execution paths.
    • Best Practices:
      • Use automated tools (e.g., JaCoCo, Cobertura).
      • Integrate with CI/CD pipelines.
    • Benefit: Identifies untested code and improves test effectiveness.

  3. Boundary Value Analysis (BVA) and Equivalence Partitioning (EP)
    • Definition:
      • BVA: Tests values at the edges of input ranges.
      • EP: Divides input data into valid and invalid partitions.
    • Purpose: Reduces the number of test cases while maximizing defect detection.
    • Best Practices:
      • Apply to all input fields and data-driven logic.
    • Benefit: Detects edge-case defects and improves input validation (see the executable sketch at the end of this subsection).

  4. Risk-Based Testing
    • Definition: Prioritizes testing based on the likelihood and impact of failure.
    • Purpose: Focuses testing efforts on high-risk areas.
    • Best Practices:
      • Conduct risk assessments with stakeholders.
      • Use risk matrices to guide test prioritization.
    • Benefit: Efficient use of resources and improved defect detection in critical areas.

  5. Exploratory Testing
    • Definition: Simultaneous learning, test design, and execution without predefined scripts.
    • Purpose: Uncovers unexpected issues through intuitive exploration.
    • Best Practices:
      • Use charters and time-boxed sessions.
      • Document findings and insights.
    • Benefit: Enhances depth of testing and complements scripted tests.

  6. Peer Reviews and Walkthroughs
    • Definition: Collaborative review of test cases and coverage plans.
    • Purpose: Identify gaps and improve test quality.
    • Best Practices:
      • Include cross-functional team members.
      • Use checklists to guide reviews.
    • Benefit: Improves accuracy and completeness of test coverage.

By combining structured techniques (like RTM and code coverage) with exploratory and risk-based approaches, the test plan ensures both breadth (all features are tested) and depth (each feature is tested thoroughly). These methods collectively enhance the reliability, maintainability, and quality of the software product.
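
To illustrate how Boundary Value Analysis and Equivalence Partitioning translate into concrete test cases (referenced above), the sketch below exercises a hypothetical validate_age() input check with values at and around its boundaries plus one representative from each partition. The 18-65 valid range and the use of pytest are assumptions for illustration only.

```python
# Minimal BVA/EP sketch: boundary values (17, 18, 65, 66) plus one
# representative value from each equivalence partition.
# validate_age() and its 18-65 valid range are hypothetical.
import pytest


def validate_age(age: int) -> bool:
    """Stand-in input check: accept ages 18 through 65 inclusive."""
    return 18 <= age <= 65


@pytest.mark.parametrize(
    ("age", "expected"),
    [
        (17, False),  # just below lower boundary (invalid partition)
        (18, True),   # lower boundary
        (40, True),   # representative of the valid partition
        (65, True),   # upper boundary
        (66, False),  # just above upper boundary (invalid partition)
        (-1, False),  # representative of the negative invalid partition
    ],
)
def test_validate_age_boundaries(age, expected):
    assert validate_age(age) is expected
```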

See also SWE-219 - Test Coverage for Safety Critical Software Components,  SWE-189 - Code Coverage Measurements, and SWE-190 - Verify Code Coverage.

9. Test Witnessing

Software Test Witnessing is a formal process where external parties—such as stakeholders, Software Assurance, Software Safety, clients, regulatory bodies, or auditors—observe and verify critical testing activities to ensure compliance, functionality, and performance. The purpose is to provide transparency, validate the accuracy of test results, ensure adherence to standards, and build confidence in the software.

A solid plan for software test witnessing ensures the process runs smoothly, avoids disruptions, and delivers the necessary outcomes. Below is a detailed guide to help you develop a comprehensive Software Test Witnessing Plan.

9.1 Objective of the Test Witnessing Plan

The plan should clearly define:

  1. Why witnessing is being conducted (e.g., regulatory compliance, client validation, certification).
  2. What testing activities will be witnessed (e.g., functional tests, performance tests, compliance tests).
  3. Who will participate (e.g., clients, auditors, SMA team members).
  4. When and how the witnessing will occur.

9.2 Key Steps to Plan for Software Test Witnessing

  1. Define Purpose and Scope
    • Identify the Purpose:

      • Compliance verification: Ensure adherence to NASA and industry standards (e.g., ISO, FDA, IEC).
      • Client validation: Allow clients or stakeholders to confirm the system’s functionality.
      • Certification review: Validate testing results for regulatory approval (e.g., aviation software complying with DO-178C).
    • Define the Scope:

      • Which tests will be witnessed?
        • Functional Testing (does it meet requirements?).
        • Non-functional Testing (performance, reliability, security).
        • Integration or System Testing.
        • User Acceptance Testing (UAT).
        • Regression Testing for critical features.
    • Example Scope: "Software Assurance will witness the end-to-end functional and integration tests of the safety-critical modules and security testing for data encryption compliance."

  2. Identify Stakeholders
    • Determine who will participate in the test witnessing process:
      • Witnesses: Include Software Assurance, Software Safety, clients, regulators, auditors, or stakeholders authorized to verify test results.
      • Testing Team: test engineers, developers, and test managers responsible for conducting tests.
      • Observers (optional): Other participants who may provide feedback but do not actively validate tests.
    • Ensure Documentation:
      • Document and share the witnessing agreements with everyone involved, including witness roles, responsibilities, and ground rules.

  3. Select Tests to Be Witnessed

    • Choose tests that:
      1. Are critical for project success or compliance.
      2. Cover high-risk areas that require validation.
      3. Are tied to business requirements or contractual obligations.
      4. Are relevant to safety, security, quality, or functional correctness.
    • Examples:
      • Witness the cybersecurity tests.
      • Witness boundary conditions testing for safety-critical software.
      • Witness end-to-end business workflows (e.g., e-commerce checkout).

  4. Prepare Test Documentation
    Ensure all test artifacts are prepared and shared with witnesses:

    1. Test Plan: Provide a detailed outline of the tests, objectives, timeline, and resources.
    2. Test Cases: Include test cases that will be executed during witnessing and link them to requirements.
    3. Requirements Traceability Matrix (RTM): Demonstrate how test cases map back to the software requirements.
    4. Metrics and Logs: Share performance metrics, test logs, and defect trends to back up results.
    5. Entry and Exit Criteria: Clearly define conditions for starting and completing the witnessing process.

  5. Schedule the Witnessing

    • Timeframe: Schedule witnessing activities after relevant setups (test environments, tools, data) are complete.
    • Duration: Allocate sufficient time for actual tests, review, delays, retesting, and feedback.
    • Witness Availability: Coordinate availability with stakeholders, clients, or auditors for minimal disruptions.
    • Alignment: Schedule witnessing to align with software milestones (e.g., after system integration or UAT).

      Example Schedule:

      Date | Test | Witness | Duration | Location
      Oct 10, 2023 | Payment Module Testing | Client | 2 Hours | Remote Session
      Oct 12, 2023 | Security Audit Testing | ISO Auditor | 3 Hours | On-Site

  6. Prepare the Test Environment

    • Environment Availability: Ensure the test environment is configured and ready (e.g., staging with production-like data).
    • Test Data: Prepare meaningful and sufficient test datasets for witnessing.
    • Testing Tools: Ensure tools and automation scripts work and are accessible during witnessing.
    • Backup Plan: Have contingency plans in place for technical failures or downtime.

  7. Conduct the Test Witnessing

    1. Welcome Witnesses: Provide an agenda, introduce the testing team, and set expectations.
    2. Demonstrate Test Execution:
      • Execute test cases while witnesses observe.
      • Share real-time outputs (logs, test results, reports) during witnessing.
    3. Explain Context: Clarify test case objectives and expected outcomes before execution.
    4. Allow Interaction: Permit witnesses to ask questions during testing.
    5. Document Observations: Note witness feedback or approvals in formal meeting notes/reports.

  8. Report and Document Results
    After test witnessing:

    1. Formal Test Report: Include test results, observations, and a summary of witness feedback.
    2. Sign-Off: Witnesses (e.g., clients, regulators) provide formal approval or documentation stating acceptance.
    3. Resolved Issues: Track issues (if any) identified during witnessing and plan resolutions.
    4. Retention: Archive witnessing reports, sign-offs, defect logs, and supporting documents for audit trails.

      Report Template Example:

      Test Name | Result | Observer Comments | Status
      Payment Gateway | Passed | No concerns | Approved
      Security Testing | Passed | ISO encryption verified | Approved

  9. Address Feedback
    Conduct a post-test witnessing session to:

    • Discuss feedback or concerns provided by witnesses.
    • Agree on how identified issues will be addressed, fixed, and retested; offer follow-up demonstrations if required.
    • Communicate progress to stakeholders.

  10. Manage Risks
    Anticipate risks associated with witnessing and plan mitigation strategies:

    • Disruptions: Encourage clear communication and pre-test readiness.
    • Unclear Criteria: Ensure test entry/exit criteria are understood by all parties.
    • Technical Issues: Plan backups (e.g., retesting schedules, contingency environments).

9.3 Advantages of Test Witnessing

  1. Transparency: Ensures stakeholders see how tests are conducted and validated.
  2. Builds Trust: Increases confidence in the system from clients or regulators.
  3. Certifications: Proof for audits and compliance reviews.
  4. Accountability: Encourages testing teams to adhere to processes and standards.

9.4 Tools to Facilitate Test Witnessing

  1. Test Management Tools: JIRA, TestRail, Zephyr: Help track test progress and share documentation.
  2. Remote Tools: Zoom, Microsoft Teams for virtual witnessing.
  3. Automation Tools: Selenium, Appium for live demonstrations.

9.5 Summary Template for Software Test Witnessing Plan

  • Purpose: Validate software functionality, performance, compliance, etc.
  • Scope: Critical end-to-end workflows, integration points, security tests.
  • Stakeholders: Software Assurance, Software Safety, other SMA team members, auditors, testers.
  • Duration: Scheduled for October 10–12, 2023.
  • Environment: Ready production-like staging environment with test data.
  • Key Milestones:
    1. Prepare test artifacts (Oct 9, 2023).
    2. Conduct payment module testing (Oct 10, 2023).
    3. Receive witnessing feedback and approvals (Oct 12, 2023).

With proper planning, software test witnessing helps ensure that your testing activities meet stakeholder expectations and external compliance standards, fostering trust and confidence in the final deliverable.

If the system is safety-critical, provide provisions for witnessing of tests.

See also SWE-066 - Perform Testing and Topic 8.13 - Test Witnessing

10. Data Recording, Reduction, And Analysis

This section of the Software Test Plan describes processes and procedures for capturing and evaluating/analyzing results and issues found during all types of testing. Establishing these processes and procedures defines, before testing begins, how the project will perform these activities. This section should include processes and procedures for:

  • Capturing test execution results.
  • Logging and tracking defects.
  • Analyzing test outcomes.
  • Ensuring timely resolution of issues.
  • Supporting continuous improvement and quality assurance.

When writing these processes and procedures, consider including "manual, automatic, and semi-automatic techniques for recording test results, manipulating the raw results into a form suitable for evaluation, and retaining the results of data reduction and analysis." 401

10.1 Test Execution and Result Capture

Define how the test results will be captured and documented. 

  1. Test Execution Process
    • Testers execute test cases as per the test schedule.
    • Each test case is marked as:
      • Pass: Expected result matches actual result.
      • Fail: Expected result does not match actual result.
      • Blocked: Test cannot proceed due to an unresolved dependency.
      • Not Executed: Test was not run during the cycle.
  2. Tools Used
    • Test Management Tools: e.g., Azure DevOps, TestRail, JIRA Xray, HP ALM.
    • Automation Frameworks: e.g., Selenium, JUnit, Postman (for API testing).
  3. Data Captured
    • Test case ID and description.
    • Execution date and tester name.
    • Actual vs. expected results.
    • Screenshots or logs (for failures).
    • Environment and configuration details.
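
As one way to make the captured data concrete, the sketch below records a single test execution as a structured entry appended to a JSON-lines log. The field names and file name are illustrative, not mandated by any standard or tool.

```python
# Minimal sketch of capturing one test execution result as a structured
# record appended to a JSON-lines log. Field names are illustrative.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class TestResult:
    test_case_id: str
    description: str
    tester: str
    status: str          # "Pass", "Fail", "Blocked", or "Not Executed"
    expected: str
    actual: str
    environment: str
    executed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    evidence: list = field(default_factory=list)  # paths to logs/screenshots


def record_result(result: TestResult, log_path: str = "test_results.jsonl") -> None:
    """Append one execution record to the results log."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(result)) + "\n")


record_result(TestResult(
    test_case_id="TC-042",
    description="User login with valid credentials",
    tester="J. Doe",
    status="Pass",
    expected="User reaches the dashboard",
    actual="User reached the dashboard",
    environment="Staging build 1.4.2, Chrome 126",
))
```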

It is recommended that test reports be created for unit tests of safety-critical items. This will aid in ensuring that there is a record of the results for repeatability (see SWE-186 - Unit Test Repeatability).

See also SWE-065 - Test Plan, Procedures, Reports.

10.2 Defect Logging and Tracking

Document how defects will be logged and tracked. If this process is documented in another document (e.g., CM Plan, Software Development Plan), provide a reference. The process should include:

  1. Defect Reporting Process
    • When a test fails, a defect is logged in the defect tracking system.
    • Each defect includes:
      • Unique ID
      • Summary and detailed description
      • Steps to reproduce
      • Severity and priority
      • Screenshots/logs
      • Environment details
      • Assigned developer
  2. Defect Lifecycle
    • New → Assigned → In Progress → Fixed → Retested → Closed or Reopened
    • New → Assigned → Closed → Permanent Limitation / Non-Issue
  3. Severity and Priority Classification
    1. Severity: Impact on the system (e.g., Critical, Major, Minor).
    2. Priority: Urgency of fixing the issue (e.g., High, Medium, Low).
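
The defect lifecycle above can also be encoded so that tooling rejects status changes that are not part of the agreed workflow. The following is a minimal sketch assuming the states listed in this subsection; the transition rules are illustrative and should be adapted to the project's defect tracking process.

```python
# Minimal sketch of the defect lifecycle as a set of allowed transitions.
from enum import Enum


class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    IN_PROGRESS = "In Progress"
    FIXED = "Fixed"
    RETESTED = "Retested"
    CLOSED = "Closed"
    REOPENED = "Reopened"


ALLOWED_TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.IN_PROGRESS, DefectState.CLOSED},
    DefectState.IN_PROGRESS: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.RETESTED},
    DefectState.RETESTED: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.ASSIGNED},  # reopened defects re-enter the workflow
    DefectState.CLOSED: set(),                     # terminal in this sketch
}


def transition(current: DefectState, new: DefectState) -> DefectState:
    """Move a defect to a new state, rejecting transitions outside the lifecycle."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
    return new


state = transition(DefectState.NEW, DefectState.ASSIGNED)   # OK
state = transition(state, DefectState.IN_PROGRESS)          # OK
```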

10.3 Evaluation and Analysis of Results

Analyzing test outcomes is essential for understanding the effectiveness of the testing process, identifying defects, assessing software quality, and making informed decisions about release readiness. It transforms raw test execution data into actionable insights. Document how the test results will be analyzed and evaluated. This may include:

  1. Test Execution Summary
    • Total Test Cases: Number of test cases planned and executed.
    • Execution Status: Summary of test cases that Passed, Failed, Blocked, and Not Executed.
    • Pass Rate: Percentage of test cases that passed.
    • Failure Rate: Percentage of test cases that failed.
  2. Defect Correlation
    • Link failed test cases to logged defects.
    • Analyze:
      • Number of defects per module.
      • Severity and priority distribution.
      • Defect trends over time.
  3. Coverage Analysis
    • Requirements Coverage: Ensure all requirements have corresponding test cases (see the cross-checking sketch after this list).
    • Code Coverage: Use tools to measure how much of the code was exercised.
    • Risk Coverage: Confirm that high-risk areas have been adequately tested.
  4. Defect Triage Meetings
    • Cross-functional team reviews new and open defects.
    • Prioritize based on business impact and release timelines.
    • Assign owners and define resolution timelines.
  5. Root Cause Analysis (RCA)
    • Performed for high-severity or recurring defects.
    • Identifies whether the issue was due to:
      • Requirement gaps
      • Design flaws
      • Coding errors
      • Test case deficiencies
    • RCA outcomes are documented and used for process improvement.
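
As a small illustration of the defect correlation and coverage checks above, the sketch below cross-references a hypothetical requirements-to-test mapping with execution results and defect links to flag untested requirements and failed tests with no logged defect. All identifiers are illustrative.

```python
# Minimal sketch of two analysis checks: requirements with no test case,
# and failed test cases with no linked defect. Identifiers are illustrative.

# Requirement -> test case IDs (a tiny Requirements Traceability Matrix extract)
rtm = {
    "SRS-101": ["TC-001", "TC-002"],
    "SRS-102": ["TC-003"],
    "SRS-103": [],                      # gap: requirement not yet covered by a test
}

# Test case -> execution status
results = {"TC-001": "Pass", "TC-002": "Fail", "TC-003": "Pass"}

# Test case -> defect IDs logged against it
defects = {"TC-002": []}                # gap: failure with no defect logged

untested_requirements = [req for req, cases in rtm.items() if not cases]
unlogged_failures = [
    tc for tc, status in results.items()
    if status == "Fail" and not defects.get(tc)
]

print("Requirements with no test case:", untested_requirements)
print("Failed test cases with no defect logged:", unlogged_failures)
```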

10.4 Reporting and Metrics

Document how the test metrics and testing status will be reported. This should include the following (a small computation sketch for several of these metrics appears after the list):

  1. Key Metrics Tracked
    • Test case execution rate
    • Pass/fail percentage
    • Defect density
    • Defect leakage (defects found post-release)
    • Mean time to defect resolution
  2. Daily Test Status Reviews
    • Conducted by the test lead or test manager.
    • Review test progress, pass/fail rates, and blockers.
    • Update stakeholders on test health.
  3. Test Summary Reports
    • Generated at the end of each test cycle.
    • Includes:
      • Overall test coverage
      • Defect trends
      • Risk areas
      • Recommendations for improvement
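
To keep the metric definitions unambiguous, the sketch below computes several of the metrics above from raw counts. The formulas shown (e.g., defect density per thousand source lines of code) are common conventions, not requirements, and should be replaced with the project's own definitions.

```python
# Minimal sketch of common test metrics computed from raw counts.
# Formula choices (e.g., defects per KSLOC) are illustrative conventions.

def pass_fail_percentages(passed: int, failed: int, blocked: int, not_executed: int) -> dict:
    executed = passed + failed
    total = executed + blocked + not_executed
    return {
        "execution_rate_pct": 100.0 * executed / total if total else 0.0,
        "pass_pct": 100.0 * passed / executed if executed else 0.0,
        "fail_pct": 100.0 * failed / executed if executed else 0.0,
    }


def defect_density(defects_found: int, size_sloc: int) -> float:
    """Defects per thousand source lines of code (KSLOC)."""
    return 1000.0 * defects_found / size_sloc if size_sloc else 0.0


def defect_leakage_pct(found_post_release: int, found_pre_release: int) -> float:
    """Share of all known defects that escaped to operations/production."""
    total = found_post_release + found_pre_release
    return 100.0 * found_post_release / total if total else 0.0


def mean_resolution_days(resolution_times_days: list) -> float:
    """Mean time to defect resolution, in days."""
    return (sum(resolution_times_days) / len(resolution_times_days)
            if resolution_times_days else 0.0)


print(pass_fail_percentages(passed=180, failed=12, blocked=5, not_executed=3))
print("Defect density:", defect_density(defects_found=24, size_sloc=18000), "per KSLOC")
print("Defect leakage:", defect_leakage_pct(found_post_release=2, found_pre_release=24), "%")
print("Mean time to resolution:", mean_resolution_days([1.5, 3.0, 2.0]), "days")
```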

10.5 Best Practices

  • Maintain detailed and consistent documentation.
  • Maintain traceability between test cases, requirements, and defects.
  • Use version control for test artifacts.
  • Automate repetitive test result collection and reporting where possible.
  • Encourage collaboration between the test, development, and SMA teams.
  • Continuously refine test cases based on defect trends and RCA.


See also SWE-068 - Evaluate Test Results and Topic 8.57 - Testing Analysis

11. Risks 

Document any risks or issues identified with testing. 

Documenting Risks and Issues Identified in Software Testing is an essential practice to ensure transparency, accountability, and proactive resolution. It helps teams anticipate potential problems, mitigate their impact, track unresolved issues, and improve overall testing processes. Below is a structured approach to documenting risks and issues related to software testing.

Documenting risks and issues in software testing allows teams to address both potential problems and existing defects systematically. Proper documentation ensures transparency across stakeholders, proactive resolution of challenges, and ultimately contributes to higher software quality and successful project delivery. Tools like JIRA, Confluence, or Excel spreadsheets can be useful for maintaining and tracking risk and issue logs.

See also SWE-086 - Continuous Risk Management and SWE-201 - Software Non-Conformances.

11.1 Risks in Software Testing

Risks are potential problems or uncertainties that, if left unchecked, could impact the testing process, the software quality, or project timelines. Documenting risks involves identifying, assessing, prioritizing, and planning mitigation strategies. Here are some common risks in Software Testing:

  1. Incomplete/Inaccurate Requirements:

    • Risk: Requirements that are unclear or continuously changing lead to ineffective test cases and missing functionality.
    • Mitigation: Regular requirement reviews, stakeholder validation, and using a Requirements Traceability Matrix (RTM).
  2. Insufficient Testing Time:

    • Risk: Testing schedules are compressed due to delays in development phases or aggressive timelines, increasing the likelihood of undetected defects.
    • Mitigation: Prioritize critical test cases, adopt risk-based testing, and negotiate realistic timelines.
  3. Limited Resources:

    • Risk: Inadequate test environments, insufficient skilled testers, or unavailable tools.
    • Mitigation: Early resource planning, cross-training testers, and using virtual/cloud-based testing environments.
  4. Technical Challenges:

    • Risk: Issues with test environments, data inconsistency, or unsupported configurations.
    • Mitigation: Maintain dedicated test environments and conduct pre-testing checks for stability.
  5. Defects in Automation Scripts:

    • Risk: Errors in testing scripts may lead to false positives/negatives and missed defects.
    • Mitigation: Regular maintenance and verification of scripts, and conducting manual spot-checks.
  6. Security Compliance Risks:

    • Risk: Testing may fail to uncover security vulnerabilities or software may not comply with regulatory standards.
    • Mitigation: Conduct thorough security testing (e.g., penetration tests), and ensure compliance with standards (e.g., GDPR, HIPAA).
  7. Test Coverage Gaps:

    • Risk: Certain functionalities or edge cases may not be adequately covered by testing.
    • Mitigation: Use test coverage analysis tools and peer reviews of test cases.
  8. Unstable Test Environments:

    • Risk: Fluctuating environments (e.g., incomplete configurations or dependencies) impact test reliability.
    • Mitigation: Freeze test environments early, and simulate production environments closely.
  9. Stakeholder Delays:

    • Risk: Delayed approvals for requirements or lack of availability of witnesses during critical testing phases.
    • Mitigation: Set clear approval deadlines and communicate regularly with stakeholders.
  10. Defect Leakage:

    • Risk: Defects may escape to production and cause operational disruption or poor user experience.
    • Mitigation: Implement thorough regression testing and performance testing before release.
  11. Third-Party Integration Issues:

    • Risk: Dependencies on external systems or APIs may create delays or compatibility problems.
    • Mitigation: Mock third-party systems during testing or collaborate with vendors early.

11.2 Issues Identified During Software Testing

Issues are problems detected during the testing process that need immediate resolution. Documenting issues ensures they are tracked properly, mitigated, and communicated with relevant teams. Here are some common issues identified in Software Testing: 

  1. Functional Defects:

    • Problem: Features fail to meet specified requirements or behave incorrectly.
    • Example: The "search" functionality does not return expected results for specific filters.
  2. Performance Bottlenecks:

    • Problem: The application is slow under normal or peak load conditions.
    • Example: Page load time exceeds 10 seconds when 500 concurrent users are logged in.
  3. Database Issues:

    • Problem: Data retrieval, insertion, or manipulation fails due to schema conflicts or inconsistent test data.
    • Example: Transactions aren't processed correctly due to data corruption.
  4. Environment Instability:

    • Problem: Frequent crashes, missing configuration files, or communication errors with dependent systems.
    • Example: The testing environment isn't configured correctly for payment gateway integration.
  5. UI/UX Design Issues:

    • Problem: Design elements do not match specifications or usability is compromised.
    • Example: A "Submit" button is not visible on smaller screen resolutions.
  6. Security Vulnerabilities:

    • Problem: Exposure of sensitive data, improper authentication methods, or unencrypted transmissions.
    • Example: A user is able to extract confidential transaction details using URL tampering.
  7. Regression Failures:

    • Problem: Existing, previously working functionality breaks after new code changes.
    • Example: The checkout system fails after implementing discount coupon logic.
  8. Automation Errors:

    • Problem: Automated test scripts fail due to incorrect configurations or outdated logic.
    • Example: The login script does not recognize updated session handling.
  9. Untested Scenarios:

    • Problem: Specific use cases or edge cases were overlooked due to incomplete test coverage.
    • Example: The system crashes when an invalid character is entered into the username field.
  10. Test Data Issues:

    • Problem: Inconsistent or incorrect test data results in unreliable test outcomes.
    • Example: Email validation fails due to missing sample emails in the test dataset.

11.3 Risk/Issue Documentation Template

Use a structured template to document risks and issues identified during software testing.

  1. Risk Template

    Risk ID | Description | Impact | Likelihood | Priority | Mitigation Plan | Owner | Status
    R001 | Limited resources for testing tools | High | Medium | Critical | Allocate budget for tools early | Project Manager | Open
    R002 | Test environment stability issues | Medium | High | High | Perform daily environment health checks | Test Lead | Mitigated


  2. Issue Template
    Issue ID | Description | Severity | Impact Area | Steps to Reproduce | Proposed Solution | Owner | Status
    I001 | Search filter not returning results | High | Functional Module | 1. Go to search page. | Adjust search logic | Developer | In Progress
    I002 | Encryption logic not working correctly | Critical | Security Compliance | 1. Login as Admin. 2. Extract data. | Fix encryption method | Security Engineer | Resolved

11.4 Best Practices in Documenting Risks and Issues

  1. Regular Reviews:

    • Conduct regular risk and issue reviews during testing cycles.
    • Keep documentation updated.
  2. Assign Ownership: Assign responsible team members for each risk and issue to ensure accountability.

  3. Track Resolution Progress: Use tools like JIRA, Bugzilla, or Trello to monitor the status of risks and issues.

  4. Communicate Early: Escalate high-severity risks or issues to stakeholders and resolve them collaboratively.

  5. Categorize: Differentiate between risks (potential problems) and issues (existing problems).

  6. Retrospection: Analyze recurring risks or issues to improve processes for future testing.


12. Qualification Testing

Qualification Testing is a formal process used to verify that a software system or component meets its specified requirements and is ready for deployment or certification. It is typically conducted in a controlled environment and follows predefined procedures and acceptance criteria. This section of the Software Test Plan defines the parameters of the qualification testing, such as:

  • Sites where testing will occur, identified by name.
  • Software and version necessary to perform the planned testing activities at each site, for example:
    • Compilers, operating systems, communications software.
    • Test drivers, test data generators, test control software.
    • Input files, databases, path analyzers.
    • Other.
  • Hardware and firmware, including versions, that will be used in the software test environment at each site.
  • Manuals, media, licenses, instructions, etc., required to set up and perform the planned tests.
  • Items to be supplied by the site and those items that will be delivered to the test site.
  • Organizations participating in the tests at each site and their roles and responsibilities.
  • Number, type, skill level of personnel required to carry out testing at each site.
  • Training and/or orientation required for testing personnel at each site.
  • Tests to be performed at each site.

In addition to the information required above, address the following information for all types of testing in the Test Plan:

  • Resources (personnel, tools, equipment, facilities, etc.).
  • Risks that require contingency planning.
  • What is to be tested and what is not to be tested.
  • Test completion criteria.

13. Additional Content

13.1 General Test Conditions

General test conditions are conditions that apply to all of the planned tests or to a specific group of tests. When documenting general test conditions, consider statements and conditions, such as these taken from Langley Research Center's NPR 7150.2 Class A Required Testing Documents With Embedded Guidance:

  • "Each test should include nominal, maximum, and minimum values."
  • "Each test of type X should use live data."
  • "Execution size and time should be measured for each software item."
  • Extent of testing to be performed, e.g., percent of some defined total, and the associated rationale.

13.2 Planned Tests, Including Items and Their Identifiers

If not already included in sections of the plan focused on specific types of testing (unit, integration, etc.), all planned tests, test cases, data sets, etc., that will be used for the project need to be identified in the Software Test Plan, along with the software items they will be used to test. Each item needs to have its own unique identifier to ensure proper execution and tracking of the planned tests. Consider the following information as information to capture for each test:

  • Objective.
  • Test level.
  • Test type.
  • Test class.
  • Requirements addressed.
  • Software item(s) tested.
  • Type of data to be recorded.
  • Assumptions, constraints, limitations (timing, interfaces, personnel, etc.).
  • Safety, security, privacy considerations.

See also SWE-015 - Cost Estimation

13.3 Additional Information In Plans

If not identified elsewhere, the Software Test Plan identifies the metrics to be collected for each type of testing. Suggested metrics include:

  • Number of units tested.
  • Hours spent.
  • Number of defects found.
  • Average defects found per line of code.
  • Measures of test coverage, software reliability and maintainability.
  • Other.

14. Best Practices

14.1 Best Practices For Software Test Plans:

  • Begin development of the Software Test Plan(s) early, as soon as the relevant life cycle stage has been completed:
    • Helps identify confusing or unclear requirements.
    • Helps identify un-testable design features before implementation.
    • Allows for acquisition/allocation of test resources.
  • Involve the right people in the plan development (quality engineers, software engineers, systems engineers, etc.).

Use the right Sources of Information (first column in table below) as appropriate for the project and for each type of testing, such as:

Sources of Information | Unit Test* | SW Integration Test* | Systems Integration Test* | End-to-End Test* | Acceptance Test* | Regression Test*
Software Requirements Specification (SRS) | X |  | X | X | X | X
Software Design Description (SDD) | X | X |  |  |  | 
Design traceability | X | X |  |  |  | 
Interface documents | X | X | X | X | X | X
Draft user documentation | X |  |  |  |  | 
Code coverage analyzer specifications | X |  |  |  |  | 
Criticality analysis |  | X |  |  |  | 
Draft operating documents |  | X | X | X |  | 
Draft maintenance documents |  | X |  |  |  | 
Final operating documents |  |  |  |  | X | 
Final user documentation |  |  |  |  | X | 
Concept documents |  |  | X | X |  | 
Requirements traceability |  |  | X | X | X | X
Expected customer usage patterns and conditions |  |  | X | X | X | X

*May be a separate test plan referenced in the overall Software Test Plan or part of the overall Software Test Plan.

  • Have the Software Test Plan reviewed/inspected before use (SWE-087 - Software Peer Reviews and Inspections for Requirements, Plans, Design, Code, and Test Procedures).
    • Include Software Assurance and Software Safety personnel to verify safety-critical coverage.
  • Have changes to the Software Test Plan evaluated for their effect on system safety.
  • Keep the Software Test Plan maintained (up to date) and under configuration control.
  • Identify early and focus testing on the components most likely to have issues (high risk, complex, many interfaces, demanding timing constraints, etc.).
  • Plan to use independent testing (e.g., fellow programmers, separate test group, separate test organization, NASA Independent Verification & Validation) where possible and cost-effective as new perspectives can turn up issues that authors might not see.
  • Include coverage of user documentation, e.g., training materials, procedures.

15. Small Projects

Software Test Plans are necessary for all software projects, but for projects with small budgets or small teams, starting with an existing test plan from a project of a similar type and size could help reduce the time and effort required to produce a test plan for a new project. Working with someone experienced in writing test plans, perhaps from another project and on a short-term basis, could help the project team prepare the document in a timely fashion without overburdening team resources. Where applicable, the test plan could reference other project documents rather than reproduce their contents, avoiding duplication of effort and reducing maintenance activities.

Since the Software Test Plan may be standalone or part of the Software Management Plan, incorporating the test plan into a larger project document may be useful for document tracking, review, etc. 

Follow Center policies and procedures when determining which approach to use for a particular project. 

The Software Test Plan may be tailored by software classification. Goddard Space Flight Center's (GSFC's) 580-STD-077-01, Requirements for Minimum Contents of Software Documents provides one suggestion for tailoring a Software Test Plan based on the required contents and the classification of the software being tested. This tailoring could reduce the size of the Software Test Plan and, therefore, the time and effort to produce and maintain it. 

16. Resources

16.1 References

16.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.


16.3 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

16.4 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki 197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 


16.5 Related Activities

This Topic is related to the following Life Cycle Activities:

17. Lessons Learned

17.1 NASA Lessons Learned

  • MPL Uplink Loss Timer Software/Test Errors (1998) (Plan to test against full range of parameters). Lesson Number 0939530: "Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
  • Deep Space 2 Telecom Hardware-Software Interaction (1999) (Plan to test as you fly). Lesson Number 1197545: "To fully validate performance, test integrated software and hardware over the flight operational temperature range."
  • International Space Station (ISS) Program/Computer Hardware-Software/Software (Plan realistic but flexible schedules). Lesson Number 1062536: "NASA should realistically reevaluate the achievable ... software development and test schedule and be willing to delay ... deployment if necessary rather than potentially sacrificing safety."
  • Thrusters Fired on Launch Pad (1975) (Plan for safe exercise of command sequences). Lesson Number 0403507: "When command sequences are stored on the spacecraft and intended to be exercised only in the event of abnormal spacecraft activity, the consequences should be considered of their being issued during the system test or the pre-launch phases."

17.2 Other Lessons Learned

The Goddard Space Flight Center (GSFC) Lessons Learned online repository 695 contains the following lessons learned related to software requirements identification, development, documentation, approval, and maintenance based on analysis of customer and other stakeholder requirements and the operational concepts. Select the titled link below to access the specific Lessons Learned: