3. Guidance
The Secure Coding Best Practices document defines specific guidelines to assist in designing and developing secure code. The best practices identified in the document cover the entire software development life cycle, from requirements through operations. As with most aspects of software development, the earlier in the life cycle issues and problems are identified and resolved, the less impact those problems have on schedule, budget, and rework. Security flaws and vulnerabilities can be introduced at any point during the life cycle, so it is important to apply secure coding best practices throughout the life cycle.

3.1 Requirements
It is important to consider security issues and secure coding principles during the development of requirements. Well-defined and complete security requirements help drive a secure design and provide traceability through design, implementation, and testing to ensure a secure system. Security requirements should be derived from NASA standards, NPRs, interfacing projects, documentation on the integrated system, the use of secure development frameworks, and the project risk and protection plan documents. Security requirements for acquisitions, including OTS and OSS, should be considered, levied on the providers, and implemented by them. See also SWE-050 - Software Requirements.

3.2 Architecture
During the development of the architecture, it is important to consider the security vulnerabilities of the different architecture options. The team must weigh these vulnerabilities against the benefits of each option to make informed decisions that will provide the best solution for the system to be built. The level of security required by the system must also be considered. Architectural decisions can establish security boundaries (physical or software) and need to ensure that impacts are contained, similar to fault containment regions. Architectural decisions should also be secure by default; for example, deny by default and allow only specific access, rather than allow by default with only specific denials.

3.3 Design
In the detailed design of the system, there are many more security-related decisions to be made that directly impact how the system will be implemented. Make design decisions that simplify the implementation, eliminate or minimize security vulnerabilities, and satisfy the security requirements of the system. Note that the practices in this section are not additional tasks that must be done; instead, they guide making good design choices. Design choices that are otherwise good may still conflict with security considerations. For example, error messages should be explicit and helpful for the user, but in a security context this may expose sensitive information to an attacker (a brief sketch follows the list below). Examples of sensitive information include:
- Memory addresses (e.g., RAM)
- PII
- Usernames/groups
- Inner workings of the software system (function names, architecture, …)
- Filenames and locations
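To illustrate the error-message trade-off described in 3.3, the minimal C sketch below (function names and file paths are hypothetical) logs the detailed diagnostic to a protected audit log while returning only a generic message and an opaque reference ID to the user, so that paths, addresses, and library error strings are not exposed (compare CWE-209).

```c
/* Hypothetical sketch: report failures without leaking internals (CWE-209).
 * The detailed cause goes to a protected log; the user only sees a generic
 * message and an opaque reference ID. File paths are examples only. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <time.h>

/* Writes full diagnostics to a log that only operators can read. */
static void log_internal(const char *detail)
{
    FILE *log = fopen("/var/log/app/secure_audit.log", "a");  /* example path */
    if (log != NULL) {
        fprintf(log, "[%ld] %s\n", (long)time(NULL), detail);
        fclose(log);
    }
}

/* Returns 0 on success; on failure the user-facing buffer gets a generic
 * message with no paths, addresses, or library error strings. */
static int load_config(const char *path, char *user_msg, size_t user_msg_len)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL) {
        char detail[256];
        snprintf(detail, sizeof detail, "load_config: fopen(%s) failed: %s",
                 path, strerror(errno));                 /* internal use only */
        log_internal(detail);
        snprintf(user_msg, user_msg_len,
                 "Configuration could not be loaded (ref CFG-01).");
        return -1;
    }
    /* ... parse configuration ... */
    fclose(fp);
    return 0;
}

int main(void)
{
    char msg[128];
    if (load_config("/etc/app/app.conf", msg, sizeof msg) != 0) {
        puts(msg);                 /* user sees only the generic message */
    }
    return 0;
}
```

The reference ID lets operators correlate the user-visible failure with the detailed log entry without revealing the system's internals.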
For more information, see CWE-200, CWE-209, and CWE-1295. During selection of OTS software, consideration needs to be given to the security policies of that software. OTS software may not use secure protocols by default, may require the user to secure it, or may have supply chain issues. See SWE-211 - Test Levels of Non-Custom Developed Software and SWE-156 - Evaluate Systems for Security Risks. See also SWE-058 - Detailed Design.

3.4 Implementation
Even with good requirements and good design, many vulnerabilities are introduced during implementation through poor coding practices. The development team must be trained in secure coding so that they are aware of the possible security vulnerabilities and know how to avoid them. The implementation practices below (e.g., static analysis) may be used to identify weaknesses and, if applied, result in stronger code.

3.5 Automated Static Analysis
Automated static analysis is useful in detecting problems and issues in the code, including secure coding issues. Static analysis can be performed as soon as the first code is developed, even before the system is executing. It is good practice to define, acquire, and configure the static analysis tools for the project before coding begins so that developers can perform static analysis on the code from the start and regularly thereafter. Performing static analysis throughout development, as opposed to once at the end, is typically more efficient and results in better code sooner.

Automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives. It also might not be able to detect the use of custom API functions or third-party libraries that indirectly invoke OS commands, leading to false negatives, especially if the API/library code is not available for analysis. It generally does not account for environmental considerations when reporting out-of-bounds memory operations, which can make it difficult for users to determine which warnings should be investigated first. For example, an analysis tool might report buffer overflows that originate from command line arguments in a program that is not expected to run with setuid or other special privileges. Due to the possibility of false positives and false negatives, manual inspection should still be performed, and developers should not rely solely on the automated static analysis results. For false negatives, reviewers may be able to find the problem, and running multiple automated static analysis tools can help mitigate this issue. For false positives, the project should have a process of manual review and documentation of the results to provide evidence that a finding is a false positive; the project may then use the tool's configuration to ignore that finding. Caution should be used when customizing the tool's configuration to ensure that only intended false positive results are suppressed.

3.6 Manual Static Analysis
Manual static analysis is commonly referred to as code inspection, code review, or peer review. It involves manually examining the code for errors, possible security flaws, and compliance with coding standards. It is recommended to perform automated static analysis on the code before the manual code review so that the obvious issues identified by automated tools are resolved before the manual inspection. The manual inspection can then focus on logic issues, security practices, business rules, and other types of errors that cannot be discovered through automated analysis. The sketch below illustrates the difference between a weakness an automated tool typically flags and one that requires manual review.
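As a hypothetical illustration of these two kinds of findings, the C sketch below contains an unbounded copy of a command line argument, the type of buffer overflow an automated tool typically reports (see the example in 3.5), and a logic error in an authorization check that tools generally cannot judge because it depends on the intended business rule. Command and role names are invented for the example.

```c
/* Hypothetical sketch: two weaknesses with different detection paths. */
#include <stdio.h>
#include <string.h>

/* Weakness 1: unbounded copy of attacker-controlled input (CWE-120).
 * Automated static analysis typically flags this strcpy() call. */
static void store_command(const char *arg)
{
    char buf[32];
    strcpy(buf, arg);               /* overflow if arg is longer than 31 chars */
    printf("queued command: %s\n", buf);
}

/* Weakness 2: a logic error an automated tool will usually not report.
 * The intended rule is that maintenance commands require BOTH an operator
 * role AND an unlocked maintenance mode, but '||' grants access if either
 * condition holds. Only a manual review against the business rule catches it. */
static int maintenance_allowed(int is_operator, int maintenance_unlocked)
{
    return is_operator || maintenance_unlocked;   /* should be '&&' */
}

int main(int argc, char **argv)
{
    if (argc > 1) {
        store_command(argv[1]);
    }
    printf("maintenance allowed: %d\n", maintenance_allowed(0, 1));
    return 0;
}
```

A bounded copy such as snprintf(buf, sizeof buf, "%s", arg) removes the first weakness; the second can only be confirmed against the documented rule, which is exactly what a manual reviewer brings to the inspection.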
Manual static analysis can be performed at any point once the code has been written. As with automated static analysis, it is recommended to perform manual static analysis incrementally as the code is being developed, rather than one time once the implementation is complete. Incremental evaluation results in more effective reviews because the amount of code being reviewed at any one time is smaller, resulting in shorter reviews. Caution should be taken when doing many smaller reviews, because assumptions or "requirements" from other code may be forgotten and missed; therefore, it is recommended to do a final integrated review to ensure that these areas are not missed. Additionally, errors and bad practices can be caught early in development and corrected so that the development team learns and improves throughout the implementation cycle.

3.7 Build
It is important when building the code to use tools and available compiler features to ensure the highest quality code possible. Pay attention to compiler warnings and resolve all warnings, especially those related to secure coding weaknesses. It is recommended to compile the code with all warnings enabled (i.e., no compiler exception options used). A recommended technique is to use the compiler to convert all warnings to errors, thus preventing compilation and enforcing fixes to the warnings. When building the software, it is recommended that a cryptographically secure hash (see NIST Computer Security Resource Center Hash Functions) be generated and recorded in a Software Authorization Notice (SAN), Software Bill of Materials (SBOM), or similar documentation, for verification that an unaltered approved build is used for execution. If building applications for a desktop or mobile device, code signing certificates should be used to ensure that the approved version of the code is distributed.

3.8 Automated Dynamic Analysis
Automated dynamic analysis examines the code behavior during execution of the system and automatically identifies possible issues. As a result, automated dynamic analysis can only be performed once the system is working. Memory leak checkers, security scanning tools, and port mappers are examples of dynamic analysis tools that must be run on the code/program to provide confidence in the security of the system. As with automated static analysis, no single tool or method will identify all of the issues in the system, so it is best to use a combination of tools to provide sufficient coverage. Some of the methods below are time-consuming, and some may not provide sufficient payback to warrant their use, depending on the system being examined. Evaluate the different methods below and choose the ones that will be most effective and efficient for the project of interest.

3.9 Manual Dynamic Analysis
Manual dynamic analysis examines the code behavior during execution of the system but requires manual analysis to identify possible issues. Manual dynamic analysis must be performed once the system is working so that there is some behavior to analyze. Examples of manual dynamic analysis include a tester entering credentials manually to see if the system responds as expected, users creating requests (to test allocation/deallocation of resources with a denial-of-service mindset), and testers trying to access resources to test access permissions. A simple probe of the kind a tester might use for the resource-exhaustion case is sketched below.
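As a sketch of the resource allocation/deallocation probing mentioned above, and assuming a POSIX system with a placeholder target path, a tester might use a small helper like the one below to drive a process toward resource exhaustion and then observe how the surrounding system reports and recovers from the failure.

```c
/* Hypothetical sketch of a manual test probe: repeatedly allocate a resource
 * (here, file descriptors on a placeholder path) to observe how allocation
 * failure is reported and whether everything is released cleanly afterward.
 * A tester would adapt the resource and limit to the system under test. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>

#define MAX_PROBES 4096        /* upper bound so the probe always terminates */

int main(void)
{
    const char *path = "/tmp/resource_probe_target";   /* placeholder path */
    int fds[MAX_PROBES];
    int count = 0;

    /* Make sure the target exists for the probe. */
    int seed = open(path, O_CREAT | O_WRONLY, 0600);
    if (seed >= 0) close(seed);

    /* Allocate until the OS refuses or the probe limit is reached. */
    while (count < MAX_PROBES) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            printf("allocation refused after %d descriptors: %s\n",
                   count, strerror(errno));             /* expect EMFILE */
            break;
        }
        fds[count++] = fd;
    }

    /* Release everything and confirm cleanup succeeded. */
    for (int i = 0; i < count; i++) {
        close(fds[i]);
    }
    printf("released %d descriptors; probe complete\n", count);
    return 0;
}
```

The manual part of the analysis is the tester's observation of the wider system while resources are scarce: the error messages produced, what is logged, and whether the system recovers once the resources are released.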
The advantage of manual dynamic analysis is that, because the behavior of the system is analyzed by a person, errors in business rules and unexpected behavior in different scenarios can be identified. However, because it is manual, it requires more time and effort than automated dynamic analysis. As with some of the other analysis types, it is not realistic to perform all of the types of dynamic analysis below. No single tool or method will identify all of the issues in the system, so a combination of methods is needed to provide sufficient coverage. Some of these methods are very time-consuming and expensive or require special knowledge or experience. Choose the methods that are most effective and efficient for the project of interest.

3.10 Testing
The testing methods below likely require additional effort beyond the normal testing performed. However, each type of specialized testing provides some benefit in increasing the security and robustness of the system being developed. Once again, it is not practical to perform testing for all of the phases below (see 3.12, Guideline for a Roadmap to Cyber Resilient Software), but depending on the system and the level of security required, one or more of the phases may provide some benefit. See also SWE-159 - Verify and Validate Risk Mitigations.

3.11 Operation/System Configuration
Once the system has been developed and tested, the security of the system cannot be forgotten. It is important to be aware of security vulnerabilities and weaknesses that can be introduced in the deployment, configuration, and operation of the system. Choose the guidelines below that apply to the system and its testing and operational environments and provide the most benefit.

3.12 Guideline for a Roadmap to Cyber Resilient Software
Phase 1 - Basic Security
- Apps run in separate processes
- Processes run with non-root (non-administrative) service accounts (see the privilege-drop sketch after this list)
- Operating system (OS) hardening and compiler security settings are used
- Cryptographic integrity checks on executables
- Security audit logs
- Enforced file system access controls
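The non-root service account item above can be illustrated with a minimal sketch. Assuming a POSIX system and a hypothetical service account named appsvc, a process started with elevated privileges would drop them immediately after any privileged setup and verify that the drop cannot be reversed (setgroups() is not strictly POSIX but is widely available on Linux/BSD systems).

```c
/* Hypothetical sketch: drop root privileges to a dedicated service account
 * ("appsvc" is an assumed account name) immediately after any privileged
 * setup, and verify the drop is irreversible before continuing. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>    /* setgroups(): non-POSIX but widely available */

static void drop_privileges(const char *account)
{
    struct passwd *pw = getpwnam(account);
    if (pw == NULL) {
        fprintf(stderr, "service account not found\n");  /* generic message */
        exit(EXIT_FAILURE);
    }

    /* Order matters: clear supplementary groups, then set gid, then uid. */
    if (setgroups(0, NULL) != 0 ||
        setgid(pw->pw_gid) != 0 ||
        setuid(pw->pw_uid) != 0) {
        fprintf(stderr, "privilege drop failed\n");
        exit(EXIT_FAILURE);
    }

    /* Verify: regaining root must not be possible. */
    if (setuid(0) == 0) {
        fprintf(stderr, "privilege drop could be reversed\n");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    /* ... privileged setup (e.g., binding a low port) would happen here ... */
    drop_privileges("appsvc");
    printf("running as uid %ld\n", (long)getuid());
    /* ... normal service work continues with least privilege ... */
    return 0;
}
```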
Phase 2 - Secure Response and Recovery
- Security lockdown mode
- Secure system recovery
- Secure backups (including configuration files)
- Secure software updates
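As a minimal sketch of the secure software update item, which also ties back to the build-time hash discussed in 3.7, the example below assumes OpenSSL's EVP API and compares the SHA-256 digest of an update image against the digest recorded for the approved build in the SAN/SBOM; the file name and the hard-coded digest are placeholders.

```c
/* Hypothetical sketch: before applying a software update, recompute the
 * SHA-256 digest of the update image and compare it against the digest
 * recorded in the SAN/SBOM for the approved build. File name and expected
 * digest are placeholders; link with -lcrypto (OpenSSL 1.1+). */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/crypto.h>    /* CRYPTO_memcmp: constant-time comparison */

#define DIGEST_LEN 32          /* SHA-256 output size in bytes */

static int sha256_file(const char *path, unsigned char out[DIGEST_LEN])
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL) return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = (ctx != NULL) && (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) == 1);

    unsigned char buf[4096];
    size_t n;
    while (ok && (n = fread(buf, 1, sizeof buf, fp)) > 0) {
        ok = (EVP_DigestUpdate(ctx, buf, n) == 1);
    }

    unsigned int len = 0;
    ok = ok && (EVP_DigestFinal_ex(ctx, out, &len) == 1) && (len == DIGEST_LEN);

    EVP_MD_CTX_free(ctx);
    fclose(fp);
    return ok ? 0 : -1;
}

int main(void)
{
    /* In practice the approved digest comes from the SAN/SBOM record,
     * not a hard-coded placeholder of zeros. */
    const unsigned char approved[DIGEST_LEN] = { 0 };
    unsigned char actual[DIGEST_LEN];

    if (sha256_file("update_image.bin", actual) != 0) {
        fprintf(stderr, "Update rejected: unable to hash image\n");
        return 1;
    }
    if (CRYPTO_memcmp(approved, actual, DIGEST_LEN) != 0) {
        fprintf(stderr, "Update rejected: digest does not match approved build\n");
        return 1;
    }
    printf("Integrity check passed; update may proceed\n");
    return 0;
}
```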
Phase 3 - Role-Based Access Control (RBAC) and Intrusion Detection
- Authenticate commands from all sources (a command authorization sketch follows this list)
- Multiple levels of authorization (e.g., administer, operator)
- Secure boot
- Algorithmic intrusion detection
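To illustrate multiple levels of authorization and deny-by-default command handling, the sketch below uses hypothetical command names and two roles; any command not explicitly listed in the table is rejected, consistent with the secure-by-default guidance in 3.2.

```c
/* Hypothetical sketch: deny-by-default command authorization with two roles.
 * Command names and role levels are examples only. */
#include <stdio.h>
#include <string.h>

typedef enum { ROLE_OPERATOR = 1, ROLE_ADMINISTRATOR = 2 } role_t;

typedef struct {
    const char *command;
    role_t      min_role;    /* minimum role allowed to issue the command */
} command_rule_t;

/* Anything not in this table is rejected. */
static const command_rule_t rules[] = {
    { "status",            ROLE_OPERATOR },
    { "collect_telemetry", ROLE_OPERATOR },
    { "update_software",   ROLE_ADMINISTRATOR },
    { "change_keys",       ROLE_ADMINISTRATOR },
};

static int command_authorized(const char *command, role_t requester)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        if (strcmp(rules[i].command, command) == 0) {
            return requester >= rules[i].min_role;
        }
    }
    return 0;   /* deny by default: unknown commands are never authorized */
}

int main(void)
{
    printf("operator -> status:          %d\n",
           command_authorized("status", ROLE_OPERATOR));           /* 1 */
    printf("operator -> update_software: %d\n",
           command_authorized("update_software", ROLE_OPERATOR));  /* 0 */
    printf("admin    -> unknown_cmd:     %d\n",
           command_authorized("unknown_cmd", ROLE_ADMINISTRATOR)); /* 0 */
    return 0;
}
```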
Phase 4 - Zero Trust, Mandatory Access Control
- Zero trust message bus
- SELinux mandatory access control on kernel calls
Phase 5 - Advanced Security
- AI/ML intrusion detection
- Memory safe programming language
- Secure microkernel of operating system
3.13 Maintenance Of The Software
A plan for executing updates, running maintenance tasks (compacting logs, rotating files…), and managing software patches as they are provided by vendors or the team must be in place for the operational modes of the software. This plan must contain guidance on fixing vulnerabilities in the software itself as well as disclosure mechanisms to any customers. These plans can be updated as situations change, but risk should be taken into account (e.g., weigh the risk of updating software right before a major mission milestone with limited testing time). An operations plan is also needed for security incident response, giving personnel a plan for analyzing the code/program, coordinating with any IT security operations centers, and containing the impact of the security vulnerability.

3.14 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.15 Center Process Asset Libraries
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).