- 1. Minimum Recommended Content
- 2. Test Levels
- 3. Test Types
- 4. Classes
- 5. Progression
- 6. Schedules
- 7. Acceptance Criteria
- 8. Coverage
- 9. Witnessing
- 10. Data
- 11. Risks
- 12. Qualification
- 13. Additional Content
- 15. Best Practices
- 16. Small Projects
- 17. Resources
- 18. Lessons Learned
Return to 7.18 - Documentation Guidance
1. Minimum Recommended Content
1.1 Purpose
The Software Test Plan describes, at a high level, the test activities that will be performed to verify that the software has been implemented in a manner that satisfies the project's functional and non-functional requirements that are verified via testing (e.g., performance, reliability, safety, security, availability, usability), as defined in the Software Requirements Specification (SRS). As a result, this Plan addresses how the project will satisfy SWE-065 - Test Plan, Procedures, Reports, SWE-066 - Perform Testing, and SWE-191 - Software Regression Testing. This plan also defines the test methodology (i.e., qualification provisions) to be used to certify that the implemented software system satisfies the operational needs of the project.
Planning the software testing activities allows for thorough deliberation of tasks, methods, environments, and related criteria before they are implemented. Planning also allows the project team to improve upon a previous project's testing by devising a plan for the implementation of more appropriate, modern, or efficient techniques and ensuring the inclusion of steps previously missed or not included.
As with any task, having a plan in place ensures that all necessary and required tasks are performed. Development of that plan provides the opportunity for stakeholders to give input and assist with the documentation and tailoring of the planned testing activities to ensure the outcome will meet the expectations and goals of the project. A preliminary draft of the Software Test Plan should be available for review as part of the Preliminary Design Review (PDR) work package, with a baselined version ready at the exit of the Critical Design Review (CDR). As the life cycle progresses, it may be necessary to update the Test Plan after it is baselined; however, it should be complete and final by the exit of the Test Readiness Review (TRR).
Ensuring the Software Test Plan follows a template and includes specified information ensures consistency of test plans across projects, ensures proper planning occurs, and prevents repeating problems of the past.
1.2 Recommended Content
Minimum recommended content for the Software Test Plan. The Test Plan defines the approach for testing on a project or class of projects, including the methodology for each type of test to be performed. It addresses the following items:
- Test levels (a separate test effort that has its own documentation and resources, e.g., unit or component testing, software integration testing, system integration testing, end-to-end testing, acceptance testing).
- Test types: There are many types of testing that can be used. Each type is generally intended to verify different aspects of the software. Depending on the type of testing, there may be test cases chosen from many of the types of testing and/or an exhaustive set of test cases may be chosen from one type (for example, functional testing). Test types may include:
- Unit Testing
- Software Integration Testing
- System Integration Testing
- End-to-End Testing
- Acceptance Testing
- Regression Testing
- Functional Testing (Requirements-based Testing)
- Stress Testing
- Performance Testing
- Endurance Testing
- Interface Testing (both user interfaces and interfaces to other system functions)
- Boundary Conditions Testing
- Coverage Testing (both path coverage and statement coverage)
- Mutation Testing or Perturbation Testing
- Types of Testing often used in Safety-Critical Systems:
- Fault Insertion Testing
- Failure Modes and Effects Testing
- Perturbation or Mutation Testing
- Test classes (designated grouping of test cases).
- Test progression (order in which test classes for each test level will be performed).
- Test schedules.
- Acceptance (or exit) criteria for each set of tests (e.g., 95% of test cases must pass, i.e., meet expected results).
- Test coverage (breadth and depth) or other methods for ensuring sufficiency of testing.
- Plan for test witnessing, if the system is safety-critical.
- Data recording, reduction, and analysis.
- Any risks or issues identified with testing.
- Qualification - Testing Environment, Site, Personnel, and Participating Organizations
2. Test Levels
There are many levels of testing. Several are used regularly in the software development life cycle. Others are used to test in ways that explore for weaknesses in software design or robustness, or for specific cases in safety-critical software. The main levels are defined below, and all are discussed further in this topic.
Each software test level plays a crucial role in ensuring that defects are identified early, interactions between components are validated, and the system meets functional, non-functional, and business requirements. Properly planning and executing these test levels ensures the software delivers a high-quality and seamless experience to end-users.
See also Topic 7.06 - Software Test Estimation and Testing Levels.
2.1 Test Levels Overview Chart
| Test Level | Key Objective | Scope | Performed By |
|---|---|---|---|
| Unit Testing | Testing individual components, methods, or modules of the application to ensure they work as intended. | Smallest testable units of software (e.g., functions, classes, API endpoints). | Developers (usually supported by automated tools). |
| Integration Testing | Testing interactions between integrated modules, components, or third-party systems to ensure they work together as expected. | Groups of connected modules or subsystems. | Test team or developers. |
| System Testing | Testing the complete, integrated system as a whole to ensure it meets specified requirements. | End-to-end testing of the entire software system. | Dedicated testers in an environment similar to production. |
| Acceptance Testing | The final level of testing, where stakeholders, end-users, or clients verify that the software meets their business requirements and is ready for deployment. | Entire software, focusing on user workflows and business criteria. | Stakeholders, end-users, domain experts, or Test teams. |
| Regression Testing | Testing performed after updates, bug fixes, or feature changes to ensure the modifications haven't negatively impacted existing functionality. | Previously tested functionality. | Developers, Test teams, often using automated tools. |
| Performance Testing | Testing conducted to evaluate the software's responsiveness, stability, and scalability under expected and extreme workloads. | Entire system, focusing on non-functional requirements. | Performance testing specialists. |
| Security Testing | Testing performed to uncover vulnerabilities, risks, or threats to the system's security. | Entire software system, focusing on sensitive data processing. | Security testers or ethical hackers. |
| Smoke Testing | A brief and shallow test performed on builds to ensure critical features work and the software is testable. | High-priority functionalities. | Developers, Test Teams |
| Exploratory Testing | Ad-hoc and unscripted testing performed by experienced testers to evaluate software creatively. | Entire software. | End-users, Test teams |
3. Test Types
There are many types of testing that can be used. Each type is generally intended to verify different aspects of the software. Depending on the type of testing, there may be test cases chosen from many of the types of testing and/or an exhaustive set of test cases may be chosen from one type (for example, functional testing).
Test types may include those listed under Minimum Recommended Content above: unit, software integration, system integration, end-to-end, acceptance, regression, functional (requirements-based), stress, performance, endurance, interface, boundary conditions, coverage, and mutation or perturbation testing.
4. Test Classes
In software testing, Test Classes refer to categories or classifications of testing techniques based on their goals, approaches, or scopes. These test classes help in structuring the testing process systematically to ensure comprehensive coverage and quality assurance.
Test classes represent distinct approaches to testing software with different goals, techniques, and coverage scopes. Depending on the application's type, purpose, and criticality (e.g., safety-critical environments), testers choose appropriate combinations of test classes to guarantee quality, reliability, and user satisfaction.
Below is a breakdown of commonly recognized software test classes:
- Functional Testing Class
- Definition: Focuses on verifying that the software behaves as expected according to requirements and specifications.
- Purpose: Test what the software is supposed to do (its functionality).
- Techniques:
- Unit Testing: Test individual functions or modules for specific functionality.
- Integration Testing: Test interactions between modules or components.
- System Testing: Test the entire application as a unified system.
- Regression Testing: Validate that new code changes do not break existing functionality.
- Example: Testing if a login feature allows users to successfully log into the application with valid credentials.
- Non-Functional Testing Class
- Definition: Evaluates non-functional aspects of the software such as performance, usability, reliability, etc.
- Purpose: Test how the software performs (behavior under certain conditions).
- Techniques:
- Performance Testing: Measure speed, responsiveness, and stability under load.
- Stress Testing: Test system behavior under extreme conditions or loads.
- Usability Testing: Assess the ease of use and user-friendliness of the application.
- Scalability Testing: Analyze the system's ability to scale with user growth.
- Security Testing: Check the robustness of the application against potential security breaches.
- Example: Testing a website's responsiveness when 1,000 concurrent users attempt to access it.
- Structural Testing Class (White-Box Testing)
- Definition: Tests the internal workings of the software by understanding the code structure, algorithms, and logic.
- Purpose: Ensure internal implementations and logic are correct and avoid defects.
- Techniques:
- Statement Coverage Testing: Check if every line of code is executed during testing.
- Path Coverage Testing: Ensure every possible path in the code is executed.
- Branch Coverage Testing: Validate the behavior of code branches, such as if-else conditions.
- Mutation Testing: Introduce small code changes (mutants) and verify if the test cases can catch them.
- Example: Verifying a sorting algorithm by examining its logic to ensure it handles all edge cases correctly.
- Acceptance Testing Class
- Definition: Determines whether the software meets the requirements and expectations of the end-users or stakeholders.
- Purpose: Confirm the application is ready for deployment.
- Techniques:
- User Acceptance Testing (UAT): Validates the system against user requirements and business processes.
- Alpha Testing: Performed by internal teams before the software is released.
- Beta Testing: Performed by actual end-users in a real-world environment.
- Example: Testing if an e-commerce website allows users to successfully purchase products, as expected by business stakeholders.
- Exploratory Testing Class
- Definition: A testing approach where testers explore the application without predefined test cases.
- Purpose: Find defects by dynamically interacting with the software.
- Techniques:
- Ad hoc testing without formal documentation.
- Testers use their expertise and experience to identify potential weak areas.
- Example: Random navigation across an application's features to identify unexpected crashes or errors.
- Compatibility Testing Class
- Definition: Ensures the software works as intended across different platforms, devices, or environments.
- Purpose: Verify cross-platform functionality and adaptability.
- Techniques:
- Browser Compatibility Testing: Test if a web application performs consistently across browsers.
- OS Compatibility Testing: Ensure compatibility across operating systems like Windows, macOS, or Linux.
- Device Compatibility Testing: Validate functionality across mobile, desktops, tablets, etc.
- Example: Testing if a mobile app works appropriately on both Android and iOS devices.
- Security Testing Class
- Definition: Tests the software for vulnerabilities and ensures the system is protected against unauthorized access or threats.
- Purpose: Identify weaknesses that could be exploited by hackers or malicious entities.
- Techniques:
- Penetration Testing: Simulate attacks to check for vulnerabilities and security holes.
- Authentication Testing: Validate password mechanisms, access restrictions, and identity verification processes.
- Encryption Testing: Verify data protection mechanisms.
- Example: Testing whether a banking app properly encrypts sensitive data like usernames, passwords, and transaction details.
- Regression Testing Class
- Definition: Focuses on testing previously working functionality after code changes, updates, or bug fixes.
- Purpose: Ensure the new changes have not introduced any new issues or broken older functionality.
- Techniques:
- Retesting: Re-running test cases that previously passed.
- Automated Regression Testing: Using tools to automatically execute test cases after every release.
- Example: Re-testing the checkout process in an online store after updating the database schema.
- Localization Testing Class
- Definition: Validates that software is properly adapted for specific languages, regions, and cultural contexts.
- Purpose: Ensure the application functions correctly for end-users in different global markets.
- Techniques:
- Test date/time formats, currency symbols, language translations, and text direction (e.g., left-to-right or right-to-left).
- Verify language support for special characters.
- Example: Testing if a mobile app displays prices in Euros when accessed in European countries.
- Load Testing Class
- Definition: Tests how the software performs under expected loads or usage conditions.
- Purpose: Validate performance, stability, and responsiveness under normal load conditions.
- Techniques:
- Simulate a large number of requests, users, or transactions.
- Tools like Apache JMeter or LoadRunner help generate virtual user loads.
- Example: Testing a payroll application by simulating 1,000 concurrent employees accessing their pay slips.
- Stress Testing Class
- Definition: Evaluates how the software behaves under extreme conditions or overstress.
- Purpose: Identify the breaking point of the system (e.g., capacity or resource limits).
- Example: Testing a real-time GPS tracking system by sending continuous location updates for thousands of vehicles simultaneously.
- Recovery Testing Class
- Definition: Tests the system's ability to recover from crashes, failures, or disasters.
- Purpose: Validate fail-safe mechanisms and fault tolerance.
- Techniques:
- Simulate data corruption or server failures.
- Measure how quickly and effectively the system resumes functioning after a failure.
- Example: Testing how a database recovers data after a sudden power outage.
- End-to-End Testing Class
- Definition: Validates the complete workflow of an application, from start to finish, simulating real-world conditions.
- Purpose: Test the overall application flow to ensure everything works seamlessly together.
- Example: Testing an online bookstore, from searching for a book to completing payment, generating receipt, and tracking shipment.
- Unit Testing Class
- Definition: Tests individual functions or modules in isolation.
- Purpose: Validate correctness and logic at the smallest level.
- Techniques:
- Write test cases for each function independently.
- Use tools like JUnit (Java), pytest (Python), or NUnit (.NET).
- Example: Testing a simple function that calculates a discount percentage independently (a minimal sketch appears after this list).
- Accessibility Testing Class
- Definition: Ensures the software is accessible to users with disabilities (e.g., vision, hearing, or motor impairments).
- Purpose: Conform to accessibility standards like WCAG (Web Content Accessibility Guidelines).
- Example: Testing a website’s compatibility with screen readers like NVDA or JAWS.
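To make the unit testing class above concrete, here is a minimal pytest sketch (pytest is one of the tools named above). The `calculate_discount` function, its threshold, and the chosen boundary values are illustrative assumptions, not requirements from any project.

```python
# Minimal unit-test sketch using pytest (hypothetical function and rules).
import pytest


def calculate_discount(order_total: float) -> float:
    """Return the discount rate for an order total (illustrative rules)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 100.0:
        return 0.10   # 10% discount at or above the 100.00 threshold
    return 0.0        # no discount below the threshold


@pytest.mark.parametrize(
    "order_total, expected_rate",
    [
        (0.0, 0.0),      # lower boundary of the valid range
        (99.99, 0.0),    # just below the discount threshold
        (100.0, 0.10),   # exactly at the threshold (boundary value)
        (250.0, 0.10),   # well above the threshold
    ],
)
def test_calculate_discount_boundaries(order_total, expected_rate):
    assert calculate_discount(order_total) == expected_rate


def test_calculate_discount_rejects_negative_total():
    with pytest.raises(ValueError):
        calculate_discount(-1.0)
```

The parametrized cases also illustrate boundary value analysis: values at and around the threshold are tested explicitly because defects cluster at the edges of input ranges.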
5. Test Progression
Testing progresses from smaller, focused areas to broader and integrated systems:
- Unit → Integration → System → Acceptance → Regression
- Non-functional tests (Performance, Security, Compatibility) are conducted after integration/system testing.
Following software test progression ensures defects are discovered early, the software meets functional and non-functional requirements, and it is ready for deployment in production environments. Different tests occur at different stages, but progression is always aimed at improving quality and reliability.
Test progression addresses the sequence or ordering of tests. The Software Test Plan describes dependencies among tests that require that tests be performed in a particular order.
Software Test Progression refers to the systematic stages or phases in which testing is conducted during the software development life cycle (SDLC) to ensure the software meets its requirements and is free from defects. Test progression encompasses a step-by-step advancement of testing methods, complexity, and scope, starting from testing individual components to evaluating the system as a whole.
In modern Agile and DevOps methodologies, testing progression is often compressed into continuous testing cycles that occur alongside development. Key concepts include:
- Shift Left Testing: Testing begins early in the SDLC (e.g., during requirements and design phases) to catch defects sooner.
- Continuous Testing: Automated testing is integrated into the CI/CD pipeline to perform frequent regression and performance tests on every build.
Below is an in-depth explanation of software test progression, covering testing phases and their goals:
- Unit Testing (Lowest Level Testing)
- Definition: Unit testing is the first step in testing progression, where individual components, functions, or methods of the software are tested in isolation.
- Objective:
- Validate the correctness of the smallest testable parts of the code.
- Catch bugs early in development.
- Focus: Logic errors, edge cases, boundary conditions, and exceptions within individual modules.
- Performed By: Developers (often automated using frameworks like JUnit, NUnit, pytest).
- Example: Testing a function that calculates the total price after applying discounts.
- Tools: JUnit (Java), NUnit (.NET), pytest (Python), Jasmine (JavaScript).
- Integration Testing
- Definition: After unit testing, integration testing focuses on testing the interaction between integrated modules or components.
- Objective: Detect interface defects, data flow issues, or communication errors between modules.
- Focus: Testing interfaces, communication protocols, and data exchanges between subsystems.
- Performed By: Testers or developers.
- Approaches:
- Top-Down Integration: Start testing high-level modules and progressively test lower-level modules.
- Bottom-Up Integration: Test lower-level modules first and progressively integrate higher-level modules.
- Example: Testing a web application where the login module interacts with the user database (a stub-based sketch appears after this list).
- Tools: Postman (API testing), SOAP UI, Karate, or other interface testing tools.
- System Testing
- Definition: Once all modules are integrated and working together, system testing evaluates the software as a complete unit.
- Objective: Test the application against specified system requirements to ensure it performs as expected.
- Focus: Functional testing, non-functional testing (performance, scalability, etc.), usability, and reliability.
- Performed By: Testers in a controlled environment similar to production.
- Example: Testing an e-commerce website to ensure it allows users to search products, add items to the cart, and make purchases smoothly.
- Tools: Selenium (Web automation), Appium (Mobile testing), LoadRunner (performance testing).
- Acceptance Testing
- Definition: Acceptance testing evaluates whether the application meets the business requirements and is ready for deployment.
- Objective: Ensure the software satisfies end-users, stakeholders, and business workflows.
- Focus: Verifying user requirements, functionality, and overall system quality.
- Performed By: Stakeholders, domain experts, end-users, or Test teams.
- Types:
- User Acceptance Testing (UAT): Validation by end-users.
- Alpha Testing: Performed by internal stakeholders.
- Beta Testing: Performed by external users in real-world environments.
- Example: Testing whether a banking app allows customers to securely send money and view their transaction history.
- Regression Testing
- Definition: Regression testing is conducted after new code changes, enhancements, or bug fixes to ensure they do not negatively impact the existing functionality.
- Objective: Confirm stability and consistency of the software after modifications.
- Focus: Re-running existing test cases for previously tested features.
- Performed By: Developers, Testers (often automated).
- Example: Testing an e-commerce application’s checkout process after adding new payment methods.
- Tools: Selenium, TestNG, Appium, or automated regression testing frameworks.
- Performance Testing
- Definition: Performance testing evaluates how the application responds under various workloads, ensuring it meets timing and scalability requirements.
- Objective: Assess system behavior under expected and peak loads.
- Focus: Speed, responsiveness, stability, resource usage, and capacity.
- Performed By: Testers specializing in non-functional testing.
- Types:
- Load Testing: Test average workload conditions.
- Stress Testing: Test the system's limits under extreme conditions.
- Scalability Testing: Measure performance as the system scales up with user growth.
- Example: Simulating 10,000 concurrent users accessing a banking website.
- Tools: Apache JMeter, LoadRunner, Gatling, Locust.
- Security Testing
- Definition: Security testing ensures the software is protected against vulnerabilities and unauthorized access.
- Objective: Test for vulnerabilities such as data breaches, improper authorization, or injection attacks.
- Focus: Authentication, data encryption, penetration testing, secure communication.
- Performed By: Security testers or ethical hackers.
- Example: Identifying potential SQL injection vulnerabilities in a database-backed web application.
- Tools: OWASP ZAP, Burp Suite, Nessus, Acunetix.
- Compatibility Testing
- Definition: Compatibility testing ensures the software performs correctly across different environments, platforms, devices, or systems.
- Objective: Validate the application across various configurations (e.g., browsers, resolutions, OS versions, devices).
- Focus: Platform, browser, device, and configuration compatibility.
- Performed By: Developers, testers.
- Example: Testing a website for consistent rendering on Chrome, Firefox, Safari, and Edge browsers.
- Tools: BrowserStack, CrossBrowserTesting, Sauce Labs.
- Recovery Testing (Resilience Testing)
- Definition: Recovery testing evaluates how resilient the system is after encountering errors, crashes, or other failures.
- Objective: Ensure the system gracefully recovers and resumes operations after failures.
- Focus: Fail-safe mechanisms, data recovery procedures, and reliability in failure scenarios.
- Performed By: Testers and system engineers.
- Example: Testing how a database-backed system recovers after unexpected power outages.
- Exploratory Testing
- Definition: Exploratory testing involves unscripted testing where testers dynamically explore the application. It is often ad-hoc.
- Objective: Identify unexpected bugs or weak points that structured test cases might miss.
- Focus: Tester creativity, random inputs, edge cases.
- Performed By: End-users, Test teams, or other experienced testers with a deep understanding of the application.
- Example: Clicking random buttons on a banking app to ensure no hidden crashes.
- Specialized Testing
This refers to additional testing approaches depending on the specific requirements of the software. Examples include:
- Localization Testing: Validate applicability for different languages, regions, or cultural settings.
- Compliance Testing: Ensure regulatory compliance (e.g., HIPAA for healthcare, ISO 26262 for automotive systems).
- Endurance Testing: Test system stability for long-term operation.
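To illustrate stub-based integration testing in the top-down approach described above, the sketch below exercises a hypothetical login module against a stubbed user database using Python's built-in unittest.mock; the class and method names (`LoginModule`, `find_user`) are assumptions made for this example.

```python
# Top-down integration sketch: the login module is exercised while the
# lower-level user database is replaced by a stub (hypothetical names).
from unittest.mock import Mock


class LoginModule:
    """Higher-level module under test; depends on a user database component."""

    def __init__(self, user_db):
        self.user_db = user_db

    def login(self, username: str, password: str) -> bool:
        record = self.user_db.find_user(username)
        return record is not None and record["password"] == password


def test_login_succeeds_with_valid_credentials():
    # Stub the lower-level database so the interface can be tested
    # before the real implementation is integrated.
    stub_db = Mock()
    stub_db.find_user.return_value = {"username": "alice", "password": "s3cret"}

    assert LoginModule(stub_db).login("alice", "s3cret") is True
    stub_db.find_user.assert_called_once_with("alice")


def test_login_fails_for_unknown_user():
    stub_db = Mock()
    stub_db.find_user.return_value = None

    assert LoginModule(stub_db).login("bob", "whatever") is False
```

In bottom-up integration the roles reverse: the lower-level component is tested first with a driver standing in for the higher-level caller.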
6. Test Schedules
Software Test Schedules outline the timeline and plan for executing testing activities during the software development life cycle (SDLC). A well-defined test schedule ensures that testing is implemented systematically, deadlines are met, and deliverables are completed on time. Scheduling testing activities involves identifying test phases, allocating resources, estimating effort, ordering tasks, and planning around project milestones.
A structured software test schedule is critical for ensuring that testing activities are performed efficiently and aligned with project milestones, resource availability, and system requirements. By planning phases like unit, integration, system, and regression testing systematically, teams can deliver high-quality software within the project timeframe. Use schedules as living documents that adapt to project changes and continuously review progress for successful delivery.
Information to consider for test schedules includes:
- List or chart showing time frames for testing at all test sites.
- Schedule of test activities for each test site, including on-site test setup, on-site testing, data collection, retesting, etc.
- Milestones relevant to the development life cycle.
- If the software is safety-critical, the availability of test witnessing participants (e.g., Software Assurance).
See also SWE-065 - Test Plan, Procedures, Reports; SWE-066 - Perform Testing; Topic 7.08 - Maturity of Life Cycle Products at Milestone Reviews; and SWE-024 - Plan Tracking.
6.1 Software Test Schedules, Their Components, And How To Create One Effectively
6.1.1 Components of Software Test Schedules
Testing Phases:
- Unit Testing
- Integration Testing
- System Testing
- Acceptance Testing
- Regression Testing
- Non-functional Testing (e.g., Performance, Security)
Activities:
- Test Planning
- Test Design (writing test cases)
- Test Execution
- Reporting defects
- Retesting fixes (defect closure)
- Test Metrics and Analysis
Resources:
- Testers (e.g., QA team, system or independent testers)
- Test Witnesses (e.g., Software Assurance) for safety-critical components
- Tools (automation tools, bug trackers, performance testing frameworks)
Deadlines:
- Align testing activities with development timelines and ensure testing completion before deployment.
Dependencies:
- Ensure testing aligns with dependencies like code release, availability of environments, and stakeholder input.
6.1.2 Steps to Create an Effective Software Test Schedule
Follow these steps to design a robust software test schedule:
- Identify Testing Phases
Define all phases of testing, their scope, and objectives to match the development stage:
- Unit Testing during early development.
- Integration Testing after individual modules are ready.
- System Testing following complete integration.
- User Acceptance Testing closer to deployment.
Example:
- Phase: System Testing → Objectives: Validate end-to-end functionality across modules within a staging environment.
- Define Testing Activities
Break testing into specific tasks or activities such as:
- Preparing test cases.
- Setting up test environments.
- Executing tests.
- Logging and tracking defects.
Example Activities:
- Activity: Test Environment Setup → Time: 2 days → Dependency: Staging environment availability.
- Prioritize Testing Tasks
Testing activities should be prioritized based on the criticality of features, project deadlines, and potential risks:
- Critical modules (e.g., payment systems or data encryption logic) are tested first.
- Lower-risk modules (e.g., UI aesthetics) can be scheduled later.
- Estimate Effort
Calculate the time required for testing activities:
- Consider the number of test cases, complexity of the application, resources, and dependencies.
- Effort estimation is often performed using models such as Test Point Analysis, Experience-Based Estimation, or Work Breakdown Structure (WBS).
Example:
- Activity: Writing 100 test cases → Estimated Time: 5 days → Based on historical project data.
- Allocate Resources
Assign team members and tools to specific testing activities:
- Tester roles (manual or automation testing).
- Test witnessing roles (from Safety & Mission Assurance organization)
- Tools such as Selenium (functional), JMeter (performance), OWASP ZAP (security).
- Ensure resource availability for peak workload periods.
- Define Milestones
Set milestones to track testing progress and align them with the overall project timeline. Examples of milestones:
- Completion of Unit Testing.
- Completion of Functional Testing.
- Generation of the final test report.
- Manage Dependencies
- Testing often relies on development deliverables, environments, and tools being available.
- Plan buffer time for risks such as delayed code releases or test environment issues.
- Track Progress
- Develop regular checkpoints or schedules to monitor the progress of testing.
- Use tools like Test Management Systems (e.g., JIRA, TestRail) for visibility into ongoing tasks.
6.1.3 Example Test Schedule Template
Here's an example schedule for software testing:
| Phase | Activity | Team | Start Date | End Date | Dependencies | Status |
|---|---|---|---|---|---|---|
| Unit Testing | Write unit test cases | Dev Team | Day 1 | Day 3 | Developers complete module code | In-progress |
| Integration Testing | API integration tests | Test Team | Day 4 | Day 6 | Functional API ready | Yet to start |
| System Testing | End-to-end testing | Test Team | Day 7 | Day 10 | All modules integrated | Yet to start |
| Stress Testing | Simulate max load | Perf Team | Day 11 | Day 12 | Test environment setup | Yet to start |
| User Acceptance | UAT session with client | Client, End-Users | Day 13 | Day 15 | Acceptance criteria aligned | Yet to start |
6.1.4 Common Challenges in Test Scheduling
Unrealistic Deadlines: Inadequate time given for writing and executing test cases.
Poor Planning: Missing activities such as regression or risk-based testing during the schedule.
Resource Availability: Testers or environments may not be available at critical stages of testing.
Changing Requirements: If requirements change frequently, it impacts the testing timeline and schedule.
6.1.5 Tips for Effective Test Scheduling
- Start testing early in the development lifecycle (Shift-Left Testing).
- Plan for buffer time to account for delays or unforeseen bugs.
- Use test management tools (e.g., Xray, Zephyr, TestRail) to automate scheduling and tracking.
- Collaborate with development and stakeholder teams to align schedules with dependencies and deadlines.
- Revisit and refine schedules periodically based on progress or risks.
6.1.6 Tools for Test Scheduling and Management
- JIRA: Workflow management with test scheduling plugins (e.g., Zephyr).
- TestRail: A test management tool for tracking test executions and schedules.
- Microsoft Excel: Simpler scheduling for smaller teams using spreadsheet templates.
- Azure DevOps: Integrated scheduling for Agile and DevOps projects.
- Monday.com/Trello: Kanban boards for tracking testing progress visually.
7. Acceptance Criteria
Software Acceptance Criteria are predefined conditions, standards, or requirements that a software product must meet to be considered acceptable by stakeholders, end-users, or clients. These criteria serve as a benchmark for determining whether the software fulfills its intended purpose and business objectives, and whether it is ready for deployment.
Acceptance criteria are typically created during the early stages of the software development lifecycle, often as part of the requirements gathering and planning phase, and are used as the basis for User Acceptance Testing (UAT) and validating software quality.
Acceptance criteria act as the bridge between requirements and validated deliverables in software projects. They ensure that all stakeholders have a unified understanding of what the software should do and provide a clear definition of completion, ultimately contributing to consistent quality and user satisfaction.
As noted in the recommended content above, define acceptance (or exit) criteria for each set of tests (e.g., 95% of test cases must pass, i.e., meet expected results).
7.1 Key Characteristics of Acceptance Criteria
Clear and Concise: Criteria should be well-defined, specific, and easy to understand to avoid ambiguity.
Testable: Acceptance criteria must be measurable and testable to verify whether the software meets the requirements.
Traceable: Acceptance criteria should trace back to the original business requirements or user stories, ensuring alignment.
Negotiable: Criteria can be refined based on stakeholder feedback.
Binary Outcome: Evaluation should result in "Pass" or "Fail" to clearly indicate whether the software meets the expectations.
7.2 Types of Software Acceptance Criteria
- Functional Acceptance Criteria
- Define what the software is supposed to do.
- Examples:
- "The system must allow users to reset their passwords."
- "The application must calculate sales tax correctly based on the user's location."
- Non-Functional Acceptance Criteria
- Address qualities such as performance, scalability, security, and usability.
- Examples:
- "The page load time must not exceed 2 seconds under normal load conditions."
- "The system must adhere to WCAG 2.0 accessibility standards."
- Business Acceptance Criteria
- Ensure alignment with stakeholder requirements.
- Examples:
- "The user registration process must have a completion rate of at least 95% during beta testing."
- Regulatory Compliance Criteria
- Ensure the software complies with applicable legal, regulatory, or industry standards.
- Examples:
- "The software must comply with GDPR regulations concerning data privacy."
- "The system must meet ISO 26262 standards for automotive safety-critical software."
- User Interface (UI)/User Experience (UX) Criteria
- Define usability and design expectations.
- Examples:
- "The application must provide a search functionality that displays results within 1 second."
- "All buttons must use consistent styling and labeling."
- System Integration Acceptance Criteria
- Ensure successful connectivity and communication between systems or components.
- Examples:
- "The payment gateway must integrate successfully with the checkout module."
- "The system must sync user data with the client’s internal CRM tool."
- Security Acceptance Criteria
- Define security requirements for the software.
- Examples:
- "All user passwords must be stored using SHA-256 encryption."
- "The system must lock the user account after 5 failed login attempts."
7.3 Examples of Software Acceptance Criteria
Here are examples to illustrate how acceptance criteria might look for different applications:
- Functional Criteria:
- Feature: User Login
- The system must allow registered users to log in using their email and password.
- Error messages must display for incorrect credentials.
- Non-Functional Criteria:
- Performance: The application must handle up to 1,000 concurrent users without downtime.
- Security: Sensitive data must be encrypted during transmission using SSL/TLS.
- Business Criteria:
- Success Metrics: The shopping cart abandonment rate should decrease by 20% after implementation of the new payment flow.
- UX/UI Criteria:
- Design Consistency: Every form field must display a placeholder text and provide visual feedback (green checkmark or error message) upon validation.
7.4 How Acceptance Criteria Are Used
User Stories:
- Often written alongside user stories in Agile development to define “done” for a feature.
- Example user story: "As a user, I want to reset my password so that I can regain access to my account."
- Acceptance Criteria:
- "The user must receive a reset password email."
- "The new password must conform to specified complexity rules."
Testing Guidance:
- Test cases for UAT, system testing, and functional testing are derived directly from acceptance criteria.
- Example: Test case for "The page load time must not exceed 2 seconds" includes performance testing under simulated loads.
Collaboration and Validation: Serve as a common reference for developers, testers, and business stakeholders to validate requirements.
Project Closure: Used during UAT or client reviews to determine whether the software meets delivery goals.
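Building on the testing guidance above, the following minimal sketch shows a test case derived directly from an acceptance criterion (here, the account-lockout criterion listed in 7.2). The `AccountService` class and its interface are hypothetical assumptions, not part of any required design.

```python
# Acceptance-criterion-derived test sketch (hypothetical AccountService):
# "The system must lock the user account after 5 failed login attempts."


class AccountService:
    """Illustrative stand-in for the system under test."""

    MAX_FAILED_ATTEMPTS = 5

    def __init__(self):
        self.failed_attempts = 0
        self.locked = False

    def login(self, password: str, correct_password: str = "right") -> bool:
        if self.locked:
            return False
        if password != correct_password:
            self.failed_attempts += 1
            if self.failed_attempts >= self.MAX_FAILED_ATTEMPTS:
                self.locked = True
            return False
        return True


def test_account_locks_after_five_failed_attempts():
    service = AccountService()
    for _ in range(5):
        assert service.login("wrong") is False
    # Criterion: the account is now locked, so even the correct password fails.
    assert service.locked is True
    assert service.login("right") is False
```

Because the criterion is testable and binary (locked or not), the derived test produces a clear pass/fail result that can be traced back to the requirement.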
7.5 Benefits of Software Acceptance Criteria
Ensures Alignment: Provides clarity between teams (developers, testers, stakeholders) about expectations.
Reduces Ambiguity: Creates a clear definition of what constitutes "completed" or "done."
Improves Testing: Facilitates writing well-structured test cases and ensures traceability between requirements and testing.
Enhances Quality: Promotes adherence to high-quality standards and avoids unmet expectations.
Prevents Scope Creep: Limits scope by defining clear boundaries for features and requirements.
7.6 Best Practices for Defining Acceptance Criteria
Use SMART Criteria: Specific, Measurable, Achievable, Relevant, Time-bound.
Collaborate with Stakeholders: Define criteria with input from developers, testers, and business teams.
Make Criteria Actionable:
- Avoid vague statements (e.g., "The software should work well").
- Use actionable language (e.g., "The software must process 500 transactions per second").
Organize by Priority: Focus on high-priority features first, and tackle less critical criteria when they do not interfere with deadlines.
Review and Refine: Continuously review criteria alongside changes in requirements or scope during development.
Adopt Gherkin Syntax for Clarity (Optional):
- Gherkin syntax (used in Behavior-Driven Development tools like Cucumber) employs "Given, When, Then" format for acceptance criteria.
- Example:
- Given a registered user logs in,
- When incorrect credentials are entered,
- Then display an error message.
8. Test Coverage
8.1 Test Coverage Or Other Methods For Ensuring Sufficiency Of Testing
If not addressed elsewhere in the Software Test Plan, provide a description of the methods to be used for ensuring sufficient test coverage.
Test coverage refers to the extent to which the software is tested, both in terms of functionality (breadth) and thoroughness (depth). It ensures that all critical areas of the application are validated against requirements, reducing the risk of undetected defects.
Methods for Ensuring Sufficient Test Coverage
- Requirements Traceability Matrix (RTM)
- Definition: A document that maps each requirement to one or more test cases.
- Purpose: Ensures that all functional and non-functional requirements are tested.
- Best Practices:
- Maintain the RTM throughout the project lifecycle.
- Update it as requirements or test cases evolve.
- Benefit: Guarantees complete functional coverage and supports auditability.
- Code Coverage Analysis
- Definition: A metric that shows which parts of the codebase are executed during testing.
- Types:
- Statement Coverage: Checks if each line of code is executed.
- Branch Coverage: Ensures all possible branches (e.g., if/else) are tested.
- Path Coverage: Validates all possible execution paths.
- Best Practices:
- Use automated tools (e.g., JaCoCo, Cobertura).
- Integrate with CI/CD pipelines.
- Benefit: Identifies untested code and improves test effectiveness (a coverage sketch appears after this list).
- Boundary Value Analysis (BVA) and Equivalence Partitioning (EP)
- Definition:
- BVA: Tests values at the edges of input ranges.
- EP: Divides input data into valid and invalid partitions.
- Purpose: Reduces the number of test cases while maximizing defect detection.
- Best Practices:
- Apply to all input fields and data-driven logic.
- Benefit: Detects edge-case defects and improves input validation.
- Risk-Based Testing
- Definition: Prioritizes testing based on the likelihood and impact of failure.
- Purpose: Focuses testing efforts on high-risk areas.
- Best Practices:
- Conduct risk assessments with stakeholders.
- Use risk matrices to guide test prioritization.
- Benefit: Efficient use of resources and improved defect detection in critical areas.
- Exploratory Testing
- Definition: Simultaneous learning, test design, and execution without predefined scripts.
- Purpose: Uncovers unexpected issues through intuitive exploration.
- Best Practices:
- Use charters and time-boxed sessions.
- Document findings and insights.
- Benefit: Enhances depth of testing and complements scripted tests.
- Peer Reviews and Walkthroughs
- Definition: Collaborative review of test cases and coverage plans.
- Purpose: Identify gaps and improve test quality.
- Best Practices:
- Include cross-functional team members.
- Use checklists to guide reviews.
- Benefit: Improves accuracy and completeness of test coverage.
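To show the difference between statement and branch coverage described above, the sketch below uses a function containing an `if` with no `else`: a single test can execute every statement, yet full branch coverage also requires a test in which the condition is false. The function, its values, and the pytest-cov invocation in the comment are illustrative assumptions.

```python
# Statement vs. branch coverage sketch (hypothetical function).
# Coverage could be measured, for example, with the pytest-cov plugin:
#   pytest --cov --cov-branch
def shipping_cost(weight_kg: float) -> float:
    cost = 5.0                 # base cost
    if weight_kg > 10.0:       # branch with no 'else'
        cost += 2.0            # surcharge for heavy packages
    return cost


def test_heavy_package():
    # Executes every statement above (100% statement coverage on its own),
    # but only the "condition true" branch.
    assert shipping_cost(12.0) == 7.0


def test_light_package():
    # Needed for full branch coverage: exercises the "condition false" path.
    assert shipping_cost(2.0) == 5.0
```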
By combining structured techniques (like RTM and code coverage) with exploratory and risk-based approaches, the test plan ensures both breadth (all features are tested) and depth (each feature is tested thoroughly). These methods collectively enhance the reliability, maintainability, and quality of the software product.
See also SWE-219 - Test Coverage for Safety Critical Software Components, SWE-189 - Code Coverage Measurements, and SWE-190 - Verify Code Coverage.
9. Test Witnessing
Software Test Witnessing is a formal process where external parties—such as stakeholders, Software Assurance, Software Safety, clients, regulatory bodies, or auditors—observe and verify critical testing activities to ensure compliance, functionality, and performance. The purpose is to provide transparency, validate the accuracy of test results, ensure adherence to standards, and build confidence in the software.
A solid plan for software test witnessing ensures the process runs smoothly, avoids disruptions, and delivers the necessary outcomes. Below is a detailed guide to help you develop a comprehensive Software Test Witnessing Plan.
9.1 Objective of the Test Witnessing Plan
The plan should clearly define:
- Why witnessing is being conducted (e.g., regulatory compliance, client validation, certification).
- What testing activities will be witnessed (e.g., functional tests, performance tests, compliance tests).
- Who will participate (e.g., clients, auditors, SMA team members).
- When and how the witnessing will occur.
9.2 Key Steps to Plan for Software Test Witnessing
- Define Purpose and Scope
Identify the Purpose:
- Compliance verification: Ensure adherence to NASA and industry standards (e.g., ISO, FDA, IEC).
- Client validation: Allow clients or stakeholders to confirm the system’s functionality.
- Certification review: Validate testing results for regulatory approval (e.g., aviation software complying with DO-178C).
Define the Scope:
- Which tests will be witnessed?
- Functional Testing (does it meet requirements?).
- Non-functional Testing (performance, reliability, security).
- Integration or System Testing.
- User Acceptance Testing (UAT).
- Regression Testing for critical features.
- Example Scope: "Software Assurance will witness the end-to-end functional and integration tests of the safety-critical modules and security testing for data encryption compliance."
- Identify Stakeholders
- Determine who will participate in the test witnessing process:
- Witnesses: Include Software Assurance, Software Safety, clients, regulators, auditors, or stakeholders authorized to verify test results.
- Testing Team: test engineers, developers, and test managers responsible for conducting tests.
- Observers (optional): Other participants who may provide feedback but do not actively validate tests.
- Ensure Documentation:
- Share responsibilities for witnessing agreements with everyone involved, including witness roles and rules.
- Select Tests to Be Witnessed
- Choose tests that are:
- Critical for project success or compliance.
- High-risk areas that require validation.
- Business requirements or contractual obligations.
- Relevant to safety, security, quality, or functional correctness.
- Examples:
- Witness the cybersecurity tests.
- Witness boundary conditions testing for safety-critical software.
- Witness end-to-end business workflows (e.g., e-commerce checkout).
- Prepare Test Documentation
Ensure all test artifacts are prepared and shared with witnesses:
- Test Plan: Provide a detailed outline of the tests, objectives, timeline, and resources.
- Test Cases: Include test cases that will be executed during witnessing and link them to requirements.
- Requirements Traceability Matrix (RTM): Demonstrate how test cases map back to the software requirements.
- Metrics and Logs: Share performance metrics, test logs, and defect trends to back up results.
- Entry and Exit Criteria: Clearly define conditions for starting and completing the witnessing process.
- Schedule the Witnessing
- Timeframe: Schedule witnessing activities after relevant setups (test environments, tools, data) are complete.
- Duration: Allocate sufficient time for actual tests, review, delays, retesting, and feedback.
- Witness Availability: Coordinate availability with stakeholders, clients, or auditors for minimal disruptions.
- Alignment: Schedule witnessing to align with software milestones (e.g., after system integration or UAT).
Example Schedule:
| Date | Test | Witness | Duration | Location |
|---|---|---|---|---|
| Oct 10, 2023 | Payment Module Testing | Client | 2 Hours | Remote Session |
| Oct 12, 2023 | Security Audit Testing | ISO Auditor | 3 Hours | On-Site |
- Prepare the Test Environment
- Environment Availability: Ensure the test environment is configured and ready (e.g., staging with production-like data).
- Test Data: Prepare meaningful and sufficient test datasets for witnessing.
- Testing Tools: Ensure tools and automation scripts work and are accessible during witnessing.
- Backup Plan: Have contingency plans for technical failures/downtime.
- Conduct the Test Witnessing
- Welcome Witnesses: Provide an agenda, introduce the testing team, and set expectations.
- Demonstrate Test Execution:
- Execute test cases while witnesses observe.
- Share real-time outputs (logs, test results, reports) during witnessing.
- Explain Context: Clarify test case objectives and expected outcomes before execution.
- Allow Interaction: Permit witnesses to ask questions during testing.
- Document Observations: Note witness feedback or approvals in formal meeting notes/reports.
- Report and Document Results
After test witnessing:
- Formal Test Report: Include test results, observations, and a summary of witness feedback.
- Sign-Off: Witnesses (e.g., clients, regulators) provide formal approvals or documentation stating acceptance.
- Resolved Issues: Track issues (if any) identified during witnessing and plan resolutions.
- Retention: Archive witnessing reports, sign-offs, defect logs, and supporting documents for audit trails.
Report Template Example:
| Test Name | Result | Observer Comments | Status |
|---|---|---|---|
| Payment Gateway | Passed | No concerns | Approved |
| Security Testing | Passed | ISO Encryption Verified | Approved |
- Address Feedback
Conduct a post-test witnessing session to:
- Discuss feedback or concerns provided by witnesses.
- Discuss how identified issues will be addressed, fixed, and retested; offer follow-up demonstrations if required.
- Communicate progress to stakeholders.
- Manage Risks
Anticipate risks associated with witnessing and plan mitigation strategies:
- Disruptions: Encourage clear communication and pre-test readiness.
- Unclear Criteria: Ensure test entry/exit criteria are understood by all parties.
- Technical Issues: Plan backups (e.g., retesting schedules, contingency environments).
9.3 Advantages of Test Witnessing
- Transparency: Ensures stakeholders see how tests are conducted and validated.
- Builds Trust: Increases confidence in the system from clients or regulators.
- Certifications: Proof for audits and compliance reviews.
- Accountability: Encourages testing teams to adhere to processes and standards.
9.4 Tools to Facilitate Test Witnessing
- Test Management Tools: JIRA, TestRail, Zephyr help track test progress and share documentation.
- Remote Tools: Zoom, Microsoft Teams for virtual witnessing.
- Automation Tools: Selenium, Appium for live demonstrations.
9.5 Summary Template for Software Test Witnessing Plan
- Purpose: Validate software functionality, performance, compliance, etc.
- Scope: Critical end-to-end workflows, integration points, security tests.
- Stakeholders: Software Assurance, Software Safety, other SMA team members, auditors, testers.
- Duration: Scheduled for October 10–12, 2023.
- Environment: Ready production-like staging environment with test data.
- Key Milestones:
- Prepare test artifacts (Oct 9, 2023).
- Conduct payment module testing (Oct 10, 2023).
- Receive witnessing feedback and approvals (Oct 12, 2023).
With proper planning, software test witnessing helps ensure that your testing activities meet stakeholder expectations and external compliance standards, fostering trust and confidence in the final deliverable.
If the system is safety-critical, provide provisions for witnessing of tests.
See also SWE-066 - Perform Testing and Topic 8.13 - Test Witnessing.
10. Data Recording, Reduction, And Analysis
This section of the Software Test Plan describes the processes and procedures for capturing and evaluating/analyzing results and issues found during all types of testing. The point of establishing these processes and procedures is to outline how the project is going to perform these activities. This section should include processes and procedures for:
- Capturing test execution results.
- Logging and tracking defects.
- Analyzing test outcomes.
- Ensuring timely resolution of issues.
- Supporting continuous improvement and quality assurance.
When writing these processes and procedures, consider including "manual, automatic, and semi-automatic techniques for recording test results, manipulating the raw results into a form suitable for evaluation, and retaining the results of data reduction and analysis." 401
10.1 Test Execution and Result Capture
Define how the test results will be captured and documented.
- Test Execution Process
- Testers execute test cases as per the test schedule.
- Each test case is marked as:
- Pass: Expected result matches actual result.
- Fail: Expected result does not match actual result.
- Blocked: Test cannot proceed due to an unresolved dependency.
- Not Executed: Test was not run during the cycle.
- Tools Used
- Test Management Tools: e.g., Azure DevOps, TestRail, JIRA Xray, HP ALM.
- Automation Frameworks: e.g., Selenium, JUnit, Postman (for API testing).
- Data Captured
- Test case ID and description.
- Execution date and tester name.
- Actual vs. expected results.
- Screenshots or logs (for failures).
- Environment and configuration details.
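A minimal sketch of capturing the execution data listed above as structured records is shown below; the field names and the JSON output file are assumptions for illustration, since most projects will rely on their test management tool's native record format.

```python
# Minimal sketch: capture test execution results as structured records
# and write them to a JSON log (field names are illustrative).
import json
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class TestResult:
    test_case_id: str
    description: str
    tester: str
    executed_on: str          # ISO date string
    status: str               # "Pass", "Fail", "Blocked", or "Not Executed"
    expected: str
    actual: str
    environment: str
    evidence: str = ""        # path to screenshot or log for failures


results = [
    TestResult(
        test_case_id="TC-042",
        description="Login with valid credentials",
        tester="J. Doe",
        executed_on=date.today().isoformat(),
        status="Pass",
        expected="User reaches dashboard",
        actual="User reaches dashboard",
        environment="Staging build 1.4.2",
    ),
]

with open("test_results.json", "w") as fh:
    json.dump([asdict(r) for r in results], fh, indent=2)
```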
It is recommended that test reports be created for unit tests of safety-critical items. This will aid in ensuring that there is a record of the results for repeatability (see SWE-186 - Unit Test Repeatability).
See also SWE-065 - Test Plan, Procedures, Reports.
10.2 Defect Logging and Tracking
Document how defects will be logged and tracked. If this process is documented in another document (e.g., CM Plan, Software Development Plan), provide a reference. The process should include:
- Defect Reporting Process
- When a test fails, a defect is logged in the defect tracking system.
- Each defect includes:
- Unique ID
- Summary and detailed description
- Steps to reproduce
- Severity and priority
- Screenshots/logs
- Environment details
- Assigned developer
- Defect Lifecycle
- 1. New → 2. Assigned → 3. In Progress → 4. Fixed → 5. Retested → 6. Closed or Reopened
- 1. New → 2. Assigned → 3. Closed → 4. Permanent Limitation / Non-Issue
- Severity and Priority Classification
- Severity: Impact on the system (e.g., Critical, Major, Minor).
- Priority: Urgency of fixing the issue (e.g., High, Medium, Low).
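A minimal sketch of a defect record that carries the lifecycle states and severity/priority classifications described above is shown below; the schema and state names are illustrative assumptions, since real projects follow their defect tracking tool's workflow.

```python
# Minimal defect-record sketch with lifecycle states (illustrative schema).
from dataclasses import dataclass, field
from enum import Enum


class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    IN_PROGRESS = "In Progress"
    FIXED = "Fixed"
    RETESTED = "Retested"
    CLOSED = "Closed"
    REOPENED = "Reopened"


@dataclass
class Defect:
    defect_id: str
    summary: str
    steps_to_reproduce: list
    severity: str              # e.g., "Critical", "Major", "Minor"
    priority: str              # e.g., "High", "Medium", "Low"
    environment: str
    assigned_to: str = ""
    state: DefectState = DefectState.NEW
    history: list = field(default_factory=list)

    def transition(self, new_state: DefectState) -> None:
        """Record each state change so the lifecycle is auditable."""
        self.history.append((self.state, new_state))
        self.state = new_state


bug = Defect(
    defect_id="DEF-101",
    summary="Checkout total ignores applied discount",
    steps_to_reproduce=["Add item", "Apply discount code", "Open checkout"],
    severity="Major",
    priority="High",
    environment="Staging build 1.4.2",
)
bug.transition(DefectState.ASSIGNED)
bug.transition(DefectState.IN_PROGRESS)
```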
10.3 Evaluation and Analysis of Results
Analyzing test outcomes is essential for understanding the effectiveness of the testing process, identifying defects, assessing software quality, and making informed decisions about release readiness. It transforms raw test execution data into actionable insights. Document how the test results will be analyzed and evaluated. This may include:
- Test Execution Summary
- Total Test Cases: Number of test cases planned and executed.
- Execution Status: Summary of test cases that Passed, Failed, Blocked, and Not Executed.
- Pass Rate: Percentage of test cases that passed.
- Failure Rate: Percentage of test cases that failed.
- Defect Correlation
- Link failed test cases to logged defects.
- Analyze:
- Number of defects per module.
- Severity and priority distribution.
- Defect trends over time.
- Coverage Analysis
- Requirements Coverage: Ensure all requirements have corresponding test cases.
- Code Coverage: Use tools to measure how much of the code was exercised.
- Risk Coverage: Confirm that high-risk areas have been adequately tested.
- Defect Triage Meetings
- Cross-functional team reviews new and open defects.
- Prioritize based on business impact and release timelines.
- Assign owners and define resolution timelines.
- Root Cause Analysis (RCA)
- Performed for high-severity or recurring defects.
- Identifies whether the issue was due to:
- Requirement gaps
- Design flaws
- Coding errors
- Test case deficiencies
- RCA outcomes are documented and used for process improvement.
10.4 Reporting and Metrics
Document how the test metrics and testing status will be reported. This should include:
- Key Metrics Tracked (a calculation sketch appears after this list)
- Test case execution rate
- Pass/fail percentage
- Defect density
- Defect leakage (defects found post-release)
- Mean time to defect resolution
- Daily Test Status Reviews
- Conducted by the Test lead or test manager.
- Review test progress, pass/fail rates, and blockers.
- Update stakeholders on test health.
- Test Summary Reports
- Generated at the end of each test cycle.
- Includes:
- Overall test coverage
- Defect trends
- Risk areas
- Recommendations for improvement
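To make the key metrics tracked above concrete, the sketch below computes them from raw counts; all counts and the defect-density unit (defects per thousand source lines of code) are hypothetical placeholders.

```python
# Minimal sketch: computing common test metrics from raw counts
# (counts are hypothetical placeholders).
executed = 180
passed = 171
planned = 200
defects_found_in_test = 24
defects_found_post_release = 2
size_ksloc = 12.5                      # thousand source lines of code
total_resolution_days = 60             # summed over all resolved defects
resolved_defects = 20

execution_rate = executed / planned                    # 0.90 -> 90% executed
pass_rate = passed / executed                          # 0.95 -> 95% pass rate
defect_density = defects_found_in_test / size_ksloc    # defects per KSLOC
defect_leakage = defects_found_post_release / (
    defects_found_in_test + defects_found_post_release
)                                                      # share escaping to release
mean_time_to_resolution = total_resolution_days / resolved_defects  # days/defect

print(f"Execution rate:  {execution_rate:.0%}")
print(f"Pass rate:       {pass_rate:.0%}")
print(f"Defect density:  {defect_density:.2f} per KSLOC")
print(f"Defect leakage:  {defect_leakage:.0%}")
print(f"Mean resolution: {mean_time_to_resolution:.1f} days")
```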
10.5 Best Practices
- Maintain detailed and consistent documentation.
- Maintain traceability between test cases, requirements, and defects.
- Use version control for test artifacts.
- Automate repetitive test result collection and reporting where possible.
- Encourage collaboration between the test, development, and SMA teams.
- Continuously refine test cases based on defect trends and RCA.
See also SWE-068 - Evaluate Test Results and Topic 8.57 - Testing Analysis.
11. Risks
Document any risks or issues identified with testing.
Documenting Risks and Issues Identified in Software Testing is an essential practice to ensure transparency, accountability, and proactive resolution. It helps teams anticipate potential problems, mitigate their impact, track unresolved issues, and improve overall testing processes. Below is a structured approach to documenting risks and issues related to software testing.
Documenting risks and issues in software testing allows teams to address both potential problems and existing defects systematically. Proper documentation ensures transparency across stakeholders, proactive resolution of challenges, and ultimately contributes to higher software quality and successful project delivery. Tools like JIRA, Confluence, or Excel spreadsheets can be useful for maintaining and tracking risk and issue logs.
See also SWE-086 - Continuous Risk Management and SWE-201 - Software Non-Conformances.
11.1 Risks in Software Testing
Risks are potential problems or uncertainties that, if left unchecked, could impact the testing process, the software quality, or project timelines. Documenting risks involves identifying, assessing, prioritizing, and planning mitigation strategies. Here are some common risks in Software Testing:
Incomplete/Inaccurate Requirements:
- Risk: Requirements that are unclear or continuously changing lead to ineffective test cases and missing functionality.
- Mitigation: Regular requirement reviews, stakeholder validation, and using a Requirements Traceability Matrix (RTM).
Insufficient Testing Time:
- Risk: Testing schedules are compressed due to delays in development phases or aggressive timelines, increasing the likelihood of undetected defects.
- Mitigation: Prioritize critical test cases, adopt risk-based testing, and negotiate realistic timelines.
Limited Resources:
- Risk: Inadequate test environments, insufficient skilled testers, or unavailable tools.
- Mitigation: Early resource planning, cross-training testers, and using virtual/cloud-based testing environments.
Technical Challenges:
- Risk: Issues with test environments, data inconsistency, or unsupported configurations.
- Mitigation: Maintain dedicated test environments and conduct pre-testing checks for stability.
Defects in Automation Scripts:
- Risk: Errors in testing scripts may lead to false positives/negatives and missed defects.
- Mitigation: Regular maintenance and verification of scripts, and conducting manual spot-checks.
Security Compliance Risks:
- Risk: Testing may fail to uncover security vulnerabilities or software may not comply with regulatory standards.
- Mitigation: Conduct thorough security testing (e.g., penetration tests), and ensure compliance with standards (e.g., GDPR, HIPAA).
Test Coverage Gaps:
- Risk: Certain functionalities or edge cases may not be adequately covered by testing.
- Mitigation: Use test coverage analysis tools and peer reviews of test cases (a minimal requirements-coverage check is sketched after this list).
Unstable Test Environments:
- Risk: Fluctuating environments (e.g., incomplete configurations or dependencies) impact test reliability.
- Mitigation: Freeze test environments early, and simulate production environments closely.
Stakeholder Delays:
- Risk: Delayed approvals for requirements or lack of availability of witnesses during critical testing phases.
- Mitigation: Set clear approval deadlines and communicate regularly with stakeholders.
Defect Leakage:
- Risk: Defects may escape to production and cause operational disruption or poor user experience.
- Mitigation: Implement thorough regression testing and performance testing before release.
Third-Party Integration Issues:
- Risk: Dependencies on external systems or APIs may create delays or compatibility problems.
- Mitigation: Mock third-party systems during testing or collaborate with vendors early.
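As one illustration of the RTM and coverage-gap mitigations above, the sketch below flags requirements that no test case traces to. The requirement and test-case identifiers are invented for the example.

```python
# Minimal sketch: flag requirements with no tracing test case (coverage gap check).
# Identifiers below are hypothetical examples, not project data.
requirements = {"SRS-001", "SRS-002", "SRS-003", "SRS-004"}

test_cases = {
    "TC-101": {"SRS-001"},
    "TC-102": {"SRS-002", "SRS-003"},
}

covered = set().union(*test_cases.values())
uncovered = sorted(requirements - covered)

if uncovered:
    print("Requirements with no test case:", ", ".join(uncovered))
else:
    print("All requirements trace to at least one test case.")
```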
11.2 Issues Identified During Software Testing
Issues are problems detected during the testing process that need immediate resolution. Documenting issues ensures they are tracked properly, resolved, and communicated to the relevant teams. Here are some common issues identified in software testing:
Functional Defects:
- Problem: Features fail to meet specified requirements or behave incorrectly.
- Example: The "search" functionality does not return expected results for specific filters.
Performance Bottlenecks:
- Problem: The application is slow under normal or peak load conditions.
- Example: Page load time exceeds 10 seconds when 500 concurrent users are logged in.
Database Issues:
- Problem: Data retrieval, insertion, or manipulation fails due to schema conflicts or inconsistent test data.
- Example: Transactions aren't processed correctly due to data corruption.
Environment Instability:
- Problem: Frequent crashes, missing configuration files, or communication errors with dependent systems.
- Example: The testing environment isn't configured correctly for payment gateway integration.
UI/UX Design Issues:
- Problem: Design elements do not match specifications or usability is compromised.
- Example: A "Submit" button is not visible on smaller screen resolutions.
Security Vulnerabilities:
- Problem: Exposure of sensitive data, improper authentication methods, or unencrypted transmissions.
- Example: A user is able to extract confidential transaction details using URL tampering.
Regression Failures:
- Problem: Existing, previously working functionality breaks after new code changes.
- Example: The checkout system fails after implementing discount coupon logic.
Automation Errors:
- Problem: Automated test scripts fail due to incorrect configurations or outdated logic.
- Example: The login script does not recognize updated session handling.
Untested Scenarios:
- Problem: Specific use cases or edge cases were overlooked due to incomplete test coverage.
- Example: The system crashes when an invalid character is entered into the username field.
Test Data Issues:
- Problem: Inconsistent or incorrect test data results in unreliable test outcomes.
- Example: Email validation fails due to missing sample emails in the test dataset.
11.2.2 Risk/Issue Documentation Template
Use a structured template to document risks and issues identified during software testing. A machine-readable sketch of the same structure follows the example tables below.
Risk Template
| Risk ID | Description | Impact | Likelihood | Priority | Mitigation Plan | Owner | Status |
|---|---|---|---|---|---|---|---|
| R001 | Limited resources for testing tools | High | Medium | Critical | Allocate budget for tools early | Project Manager | Open |
| R002 | Test environment stability issues | Medium | High | High | Perform daily environment health checks | Test Lead | Mitigated |
Issue Template
| Issue ID | Description | Severity | Impact Area | Steps to Reproduce | Proposed Solution | Owner | Status |
|---|---|---|---|---|---|---|---|
| I001 | Search filter not returning results | High | Functional Module | 1. Go to search page. | Adjust search logic | Developer | In Progress |
| I002 | Encryption logic not working correctly | Critical | Security Compliance | 1. Login as Admin. 2. Extract data | Fix encryption method | Security Engineer | Resolved |
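If the project maintains its log in a tool or spreadsheet, the same template can also be kept as a machine-readable record. The sketch below mirrors the risk template columns above and writes a simple CSV log; the example row and file name are illustrative assumptions.

```python
# Minimal sketch: keep the risk log as structured records and export it as CSV.
# Field names mirror the template columns; the example row is illustrative only.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class Risk:
    risk_id: str
    description: str
    impact: str
    likelihood: str
    priority: str
    mitigation_plan: str
    owner: str
    status: str

log = [
    Risk("R001", "Limited resources for testing tools", "High", "Medium",
         "Critical", "Allocate budget for tools early", "Project Manager", "Open"),
]

with open("risk_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(Risk)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in log)
```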
11.2.3 Best Practices in Documenting Risks and Issues
Regular Reviews:
- Conduct regular risk and issue reviews during testing cycles.
- Keep documentation updated.
Assign Ownership: Assign responsible team members for each risk and issue to ensure accountability.
Track Resolution Progress: Use tools like JIRA, Bugzilla, or Trello to monitor the status of risks and issues.
Communicate Early: Escalate high-severity risks or issues to stakeholders and resolve them collaboratively.
Categorize: Differentiate between risks (potential problems) and issues (existing problems).
Retrospection: Analyze recurring risks or issues to improve processes for future testing.
12. Qualification Testing
Qualification Testing is a formal process used to verify that a software system or component meets its specified requirements and is ready for deployment or certification. It is typically conducted in a controlled environment and follows predefined procedures and acceptance criteria. This section of the Software Test Plan defines the parameters of the qualification testing, such as the items listed below (an illustrative per-site record capturing these parameters is sketched at the end of this section):
- Sites where testing will occur, identified by name.
- Software and version necessary to perform the planned testing activities at each site, for example:
- Compilers, operating systems, communications software.
- Test drivers, test data generators, test control software.
- Input files, databases, path analyzers.
- Other.
- Hardware and firmware, including versions, that will be used in the software test environment at each site.
- Manuals, media, licenses, instructions, etc., required to set up and perform the planned tests.
- Items to be supplied by the site and those items that will be delivered to the test site.
- Organizations participating in the tests at each site and their roles and responsibilities.
- Number, type, and skill level of personnel required to carry out testing at each site.
- Training and/or orientation required for testing personnel at each site.
- Tests to be performed at each site.
In addition to the information required above, address the following information for all types of testing in the Test Plan:
- Resources (personnel, tools, equipment, facilities, etc.).
- Risks that require contingency planning.
- What is to be tested and what is not to be tested.
- Test completion criteria.
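One way to record the qualification-test parameters listed above consistently across sites is a structured per-site record. The sketch below is illustrative only; all site names, versions, roles, and test identifiers are invented.

```python
# Minimal sketch: a per-site qualification test environment record.
# All names, versions, roles, and identifiers below are invented placeholders.
test_sites = [
    {
        "site": "Example Integration Lab",
        "software": {"compiler": "gcc 12.3", "os": "RHEL 8.9", "test_driver": "sim-driver 2.1"},
        "hardware": ["Flight-equivalent processor card rev B", "1553 bus analyzer"],
        "materials": ["User manual v1.2", "tool licenses", "setup instructions"],
        "supplied_by_site": ["Bench power supplies"],
        "delivered_to_site": ["Software build 3.0.1", "test data sets"],
        "organizations": {"Test team": "execute tests", "SMA": "witness and review"},
        "personnel": {"test_engineers": 2, "skill_level": "experienced in integration testing"},
        "training": ["Facility safety orientation"],
        "tests": ["QT-001", "QT-002"],
    },
]

for site in test_sites:
    print(f"{site['site']}: {len(site['tests'])} qualification tests planned")
```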
13. Additional Content
13.1 General Test Conditions
General test conditions are conditions that apply to all of the planned tests or to a specific group of tests. When documenting general test conditions, consider statements and conditions, such as these taken from Langley Research Center's NPR 7150.2 Class A Required Testing Documents With Embedded Guidance:
- "Each test should include nominal, maximum, and minimum values."
- "Each test of type X should use live data."
- "Execution size and time should be measured for each software item."
- Extent of testing to be performed, e.g., percent of some defined total, and the associated rationale.
13.2 Planned Tests, Including Items and Their Identifiers
If not already included in sections of the plan focused on specific types of testing (unit, integration, etc.), all planned tests, test cases, data sets, etc., that will be used for the project need to be identified in the Software Test Plan, along with the software items they will be used to test. Each item needs to have its own unique identifier to ensure proper execution and tracking of the planned tests. Consider capturing the following information for each test (an example record is sketched at the end of this subsection):
- Objective.
- Test level.
- Test type.
- Test class.
- Requirements addressed.
- Software item(s) tested.
- Type of data to be recorded.
- Assumptions, constraints, limitations (timing, interfaces, personnel, etc.).
- Safety, security, privacy considerations.
See also SWE-015 - Cost Estimation.
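A lightweight way to keep each planned test uniquely identified, with the attributes listed above, is a structured record keyed by the test identifier. The sketch below is illustrative only; all identifiers and values are invented.

```python
# Minimal sketch: one structured record per planned test, keyed by a unique ID.
# All identifiers and values are invented for illustration.
planned_tests = {
    "ST-INT-014": {
        "objective": "Verify command dispatch under peak telemetry load",
        "test_level": "Software Integration",
        "test_type": "Performance",
        "test_class": "Automated",
        "requirements": ["SRS-102", "SRS-118"],
        "items_tested": ["cmd_dispatcher", "tlm_router"],
        "data_recorded": ["CPU margin", "queue depth", "dropped packets"],
        "assumptions_constraints": ["Flight-like timing", "simulated ground interface"],
        "safety_security_privacy": "No safety-critical commands issued",
    },
}

for test_id, record in planned_tests.items():
    print(test_id, "->", ", ".join(record["requirements"]))
```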
13.3 Additional Information In Plans
If not identified elsewhere, the Software Test Plan identifies the metrics to be collected for each type of testing. Suggested metrics include:
- Number of units tested.
- Hours spent.
- Number of defects found.
- Average defects found per line of code.
- Measures of test coverage, software reliability and maintainability.
- Other.
14. Best Practices
14.1 Best Practices For Software Test Plans:
- Begin development of the Software Test Plan(s) early, as soon as the relevant life-cycle stage has been completed. Early planning:
- Helps identify confusing or unclear requirements.
- Helps identify un-testable design features before implementation.
- Allows for acquisition/allocation of test resources.
- Involve the right people in the plan development (quality engineers, software engineers, systems engineers, etc.).
- Use the right Sources of Information (first column in the table below) as appropriate for the project and for each type of testing, such as:
| Sources of Information | Unit Test* | SW Integration Test* | Systems Integration Test* | End-to-End Test* | Acceptance Test* | Regression Test* |
|---|---|---|---|---|---|---|
| Software Requirements Specification (SRS) | X | X | X | X | X | |
| Software Design Description (SDD) | X | X | | | | |
| Design traceability | X | X | | | | |
| Interface documents | X | X | X | X | X | X |
| Draft user documentation | X | | | | | |
| Code coverage analyzer specifications | X | | | | | |
| Criticality analysis | X | | | | | |
| Draft operating documents | X | X | X | | | |
| Draft maintenance documents | X | | | | | |
| Final operating documents | X | | | | | |
| Final user documentation | X | | | | | |
| Concept documents | X | X | | | | |
| Requirements traceability | X | X | X | X | | |
| Expected customer usage patterns and conditions | X | X | X | X | | |
*May be a separate test plan referenced in the overall Software Test Plan or part of the overall Software Test Plan.
- Have the Software Test Plan reviewed/inspected before use (SWE-087 - Software Peer Reviews and Inspections for Requirements, Plans, Design, Code, and Test Procedures).
- Include Software Assurance and Software Safety personnel to verify safety-critical coverage.
- Have changes to the Software Test Plan evaluated for their effect on system safety.
- Keep the Software Test Plan maintained (up to date) and under configuration control.
- Identify early and focus testing on the components most likely to have issues (high risk, complex, many interfaces, demanding timing constraints, etc.).
- This may require some level of analysis to determine optimal test coverage (see Topic 8.1 - Test Coverage Or Other Methods For Ensuring Sufficiency Of Testing for more information).
- Reminder: 100% code test coverage using the Modified Condition/Decision Coverage (MC/DC) criterion is required for all identified safety-critical software components (see SWE-219 - Code Coverage for Safety Critical Software). A basic coverage-measurement sketch follows this list.
- Plan to use independent testing (e.g., fellow programmers, separate test group, separate test organization, NASA Independent Verification & Validation) where possible and cost-effective as new perspectives can turn up issues that authors might not see.
- Include coverage of user documentation, e.g., training materials, procedures.
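As noted above, coverage measurement supports focusing and sizing the test effort. A minimal sketch using coverage.py (an assumed, commonly available Python tool; the module and function names are hypothetical) illustrates general statement and branch coverage measurement; the MC/DC coverage required for safety-critical components typically requires specialized, qualified tools rather than this kind of general-purpose utility.

```python
# Minimal sketch: measure statement/branch coverage with coverage.py
# (assumed installed via "pip install coverage"). Illustrates general coverage
# measurement only; MC/DC for safety-critical code needs specialized tools.
import coverage

cov = coverage.Coverage(branch=True)
cov.start()

# Exercise the code under test here, e.g., by running its test suite or
# calling its functions directly (hypothetical module and entry point below).
import my_module          # hypothetical module under test
my_module.run_self_checks()  # hypothetical entry point

cov.stop()
cov.save()
cov.report(show_missing=True)  # prints per-file coverage with uncovered lines
```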
15. Small Projects
Software Test Plans are necessary for all software projects, but for projects with small budgets or small teams, starting with an existing test plan from a project of a similar type and size can reduce the time and effort required to produce a test plan for a new project. Working with someone experienced in writing test plans, perhaps borrowed from another project on a short-term basis, can help the project team prepare the document in a timely fashion without overburdening team resources. Where applicable, the test plan can reference other project documents rather than reproduce their contents, avoiding duplication of effort and reducing maintenance activity.
The Software Test Plan may be stand-alone or part of the Software Management Plan; incorporating the test plan into a larger project document may simplify document tracking, review, and maintenance.
Follow Center policies and procedures when determining which approach to use for a particular project.
The Software Test Plan may be tailored by software classification. Goddard Space Flight Center's (GSFC's) 580-STD-077-01, Requirements for Minimum Contents of Software Documents provides one suggestion for tailoring a Software Test Plan based on the required contents and the classification of the software being tested. This tailoring could reduce the size of the Software Test Plan and, therefore, the time and effort to produce and maintain it.
16. Resources
16.1 References
- (SWEREF-031) SEL-84-101, Revision 1, Software Engineering Laboratory Series, NASA Goddard Space Flight Center, 1990.
- (SWEREF-047) SEL-81-305, Revision 3, Software Engineering Laboratory Series, NASA Goddard Space Flight Center, 1992.
- (SWEREF-110)
- (SWEREF-209) IEEE Computer Society, IEEE Std 1012-2016 (Revision of IEEE Std 1012-2012), Published September 29, 2017, NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards. Non-NASA users may purchase the document from: http://standards.ieee.org/findstds/standard/1012-2012.html
- (SWEREF-211) IEEE Computer Society, IEEE STD 1059-1993, 1993. NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards.
- (SWEREF-215) IEEE Computer Society, IEEE Std 829-2008, 2008. NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards.
- (SWEREF-222) IEEE STD 610.12-1990, 1990. NASA users can access IEEE standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of IEEE standards.
- (SWEREF-276) NASA-GB-8719.13, NASA, 2004. Access NASA-GB-8719.13 directly: https://swehb.nasa.gov/download/attachments/16450020/nasa-gb-871913.pdf?api=v2
- (SWEREF-278) NASA-STD-8739.8B, NASA TECHNICAL STANDARD, Approved 2022-09-08 Superseding "NASA-STD-8739.8A"
- (SWEREF-401) Federal Aviation Administration (FAA), December 1999. DI-IPSC-81438A.
- (SWEREF-452) SED Unit Test Guideline, 580-GL-062-02, Systems Engineering Division, NASA Goddard Space Flight Center (GSFC), 2012. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA-users from the SPAN tab in this Handbook. Replaces SWEREF-081
- (SWEREF-507) Public Lessons Learned Entry: 403.
- (SWEREF-530) Public Lessons Learned Entry: 939.
- (SWEREF-536) Public Lessons Learned Entry: 1062.
- (SWEREF-545) Public Lessons Learned Entry: 1197.
- (SWEREF-695) The NASA GSFC Lessons Learned system. Lessons submitted to this repository by NASA/GSFC software projects personnel are reviewed by a Software Engineering Division review board. These Lessons are only available to NASA personnel.
16.2 Tools
16.3 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:
16.4 Center Process Asset Libraries
SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).
17. Lessons Learned
17.1 NASA Lessons Learned
- MPL Uplink Loss Timer Software/Test Errors (1998) (Plan to test against full range of parameters). Lesson Number 0939 (SWEREF-530): "Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
- Deep Space 2 Telecom Hardware-Software Interaction (1999) (Plan to test as you fly). Lesson Number 1197 (SWEREF-545): "To fully validate performance, test integrated software and hardware over the flight operational temperature range."
- International Space Station (ISS) Program/Computer Hardware-Software/Software (Plan realistic but flexible schedules). Lesson Number 1062 (SWEREF-536): "NASA should realistically reevaluate the achievable ... software development and test schedule and be willing to delay ... deployment if necessary rather than potentially sacrificing safety."
- Thrusters Fired on Launch Pad (1975) (Plan for safe exercise of command sequences). Lesson Number 0403 (SWEREF-507): "When command sequences are stored on the spacecraft and intended to be exercised only in the event of abnormal spacecraft activity, the consequences should be considered of their being issued during the system test or the pre-launch phases."
17.2 Other Lessons Learned
The Goddard Space Flight Center (GSFC) Lessons Learned online repository (SWEREF-695) contains the following lessons learned related to software test planning and execution. Select the titled link below to access the specific Lessons Learned:
- Test plans should cover all aspects of testing. Lesson Number 56: The recommendation states: "Test plans should cover all aspects of testing, including specific sequencing and/or data flow requirements."
- Proper sequencing of stress tests can make root cause analysis easier when failures occur. Lesson Number 68: The recommendation states: "Proper sequencing of stress tests can make root cause analysis easier when failures occur."


