

SWE-207 - Secure Coding Practices

1. Requirements

3.11.6 The project manager shall identify, record, and implement secure coding practices.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

SWE-207 - Last used in rev NPR 7150.2D

Rev   SWE Statement

A     (no statement; requirement did not exist in this revision)

      Difference between A and B: N/A

B     (no statement; requirement did not exist in this revision)

      Difference between B and C: NEW

C     3.11.8 The project manager shall identify, record, and implement secure coding practices.

      Difference between C and D: No change

D     3.11.6 The project manager shall identify, record, and implement secure coding practices.


1.3 Applicability Across Classes

Class         A      B      C      D      E      F

Applicable?

Key:    - Applicable | - Not Applicable


2. Rationale

Secure coding practices should be identified, recorded, and implemented across all life cycle phases of the project. Unsafe coding practices result in costly vulnerabilities in application software that lead to the theft of sensitive data. Secure practices include, but are not limited to, strict language adherence and the use of automated tools to identify problems at compile time or run time. Language-specific guidance, domain-specific guidance, local standards for code organization and commenting, and a standard file header format are often specified as well.

3. Guidance

The Secure Coding Best Practices document 004 defines specific guidelines to assist in designing and developing secure code. The best practices identified in the document cover the entire software development life cycle, from requirements through operations. As with most things in software, the earlier in the life cycle issues and problems are identified and resolved, the less impact those problems have on schedule, budget, and rework. Additionally, security flaws and vulnerabilities can be introduced at any point during the life cycle. Therefore, it is important to implement secure coding best practices throughout the life cycle.

3.1 Requirements

It is important to consider security issues and secure coding principles during the development of requirements. Well-defined and complete security requirements help to drive a secure design and provide traceability through design, implementation, and testing to ensure a secure system. Security requirements should be derived from NASA standards, NPRs, interfacing projects, documentation on the integrated system, secure development frameworks, and the project risk and protection plan documents. Security requirements for acquisitions, including off-the-shelf (OTS) and open-source software (OSS), should be considered, levied on the providers, and implemented by them.

See also SWE-050 - Software Requirements

3.2 Architecture

During the development of the architecture, it is important to consider the security vulnerabilities of the different architecture options. The team must weigh these vulnerabilities against the benefits of each option to make informed decisions that will provide the ideal solution for the system to be built. The level of security required by the system must also be considered. These architectural decisions can be security boundaries (physical or software) and need to ensure that impacts are contained, similar to fault containment regions.  Architectural decisions need to be secure by default, for example, default disallow and only allow specific access instead of default allow with only specific denials.

3.3 Design

In the detailed design of the system, there are many more security-related decisions to be made which directly impact how the system will be implemented. Make design decisions that simplify the implementation, eliminate or minimize security vulnerabilities, and satisfy the security requirements of the system. Note that the practices in this section are not additional tasks that must be done; instead, they guide making good design choices. It is possible for otherwise good design choices to conflict with security considerations. For example, error messages should be explicit for the user, but, in a security context, an explicit message may expose sensitive information to an attacker. Examples of sensitive information include:

  • Memory addresses (e.g. RAM)
  • PII
  • Usernames/groups
  • Inner workings of the software system (function names, architecture, …)
  • Filenames and locations

For more information see CWE-200 602, CWE-209 604, and CWE-1295 605.
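As an illustrative sketch of this design choice (the function, paths, and messages here are hypothetical, not from any NASA system), the following shows one way to keep the detailed failure cause in an internal log while returning a sanitized, generic message to the user:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def open_config(path):
    """Return (config_text, error_message), reporting failures without
    leaking internals.

    The detailed cause (filename, location, OS error) goes to the internal
    log; the user sees only a generic message plus an opaque reference ID
    they can report to support.
    """
    try:
        with open(path) as f:
            return f.read(), None
    except OSError as exc:
        ref = uuid.uuid4().hex[:8]                           # opaque correlation ID
        log.error("config load failed [%s]: %s", ref, exc)   # internal detail only
        # User-facing message avoids filenames, paths, and system details
        return None, f"Unable to load configuration (reference {ref})."

data, err = open_config("/nonexistent/app.cfg")
print(err)
```

The correlation ID lets support staff match the user's report to the detailed internal log entry without exposing filenames or memory layout to a potential attacker.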

During selection of OTS software, consideration needs to be given to the security policies of that software. OTS software may not use secure protocols (leaving it to the user to secure them) or may have supply chain issues. See SWE-211 - Test Levels of Non-Custom Developed Software and SWE-156 - Evaluate Systems for Security Risks.

See also SWE-058 - Detailed Design

3.4 Implementation

Even with good requirements and good design, many vulnerabilities are introduced during implementation through poor coding practices. The development team must be trained in secure coding so that they are aware of the possible security vulnerabilities and know how to avoid them. The implementation practices below (e.g. static analysis) may be used to identify weaknesses and, if implemented, result in stronger code. 

3.5 Automated Static Analysis

Automated static analysis is useful in detecting problems and issues in the code, including secure coding issues. Static analysis can be performed as soon as the first code is developed even before the system is executing. It is good practice to define, acquire, and configure the static analysis tools for the project before coding begins so that developers can perform static analysis on the code from the start and regularly thereafter. Performing static analysis throughout development as opposed to once at the end is typically more efficient and results in better code sooner.

Automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives. It also might not be able to detect the usage of custom API functions or third-party libraries that indirectly invoke OS commands, leading to false negatives, especially if the API/library code is not available for analysis. It generally does not account for environmental considerations when reporting out-of-bounds memory operations. This fact can make it difficult for users to determine which warnings should be investigated first. For example, an analysis tool might report buffer overflows that originate from command line arguments in a program that is not expected to run with setuid or other special privileges.

Due to the possibility of false positives and false negatives, manual inspection should still be performed, and developers should not rely solely on the automated static analysis results. In the case of false negatives, reviewers may find the problem manually; running multiple static analysis tools can also help mitigate this issue. For false positives, the project should have a process of manual review and documentation of the results to provide evidence that a finding is a false positive; the project may then configure the static analysis tool to suppress that finding. Use caution when customizing the tool's configuration to ensure that only confirmed false positives are suppressed.
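As a toy illustration of the idea (the banned-function list and the `sca:ignore` annotation are hypothetical, not the behavior of any particular tool), a minimal scanner that flags discouraged C calls and honors a reviewed-and-documented suppression marker might look like:

```python
import re

# Illustrative only: calls that secure C coding standards commonly discourage.
BANNED = {"strcpy": "use strncpy/strlcpy", "gets": "use fgets",
          "sprintf": "use snprintf"}
SUPPRESS = "// sca:ignore"   # hypothetical reviewed-false-positive annotation

def scan(source: str):
    """Return (line_number, function, advice) for each unsuppressed finding."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SUPPRESS in line:
            continue  # documented false positive: reviewed and suppressed
        for func, advice in BANNED.items():
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func, advice))
    return findings

code = """\
strcpy(dst, src);
gets(buf); // sca:ignore  (reviewed: test harness only)
snprintf(out, sizeof out, "%d", n);
"""
print(scan(code))   # only the unsuppressed strcpy call on line 1 is reported
```

Real tools are far more sophisticated, but the workflow is the same: every suppression should correspond to a documented manual review, and the suppression configuration itself should be audited.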

3.6 Manual Static Analysis

Manual static analysis is commonly referred to as code inspection, code review, or peer review. It involves manually examining the code for errors, possible security flaws, and compliance with coding standards. It is recommended to perform automated static analysis on code before performing the manual code review so that the obvious issues identified by automated tools are resolved before the manual inspection. The manual inspection can then focus on logic issues, security practices, business rules, and other types of errors that cannot be discovered through automated analysis.

Manual static analysis can be performed at any point once the code has been written. As with automated static analysis, it is recommended to perform manual static analysis incrementally as the code is being developed, rather than once after the implementation is complete. Incremental evaluation results in more effective reviews because the amount of code being reviewed at any one time is smaller, resulting in shorter reviews. Use caution when doing many smaller reviews: assumptions or "requirements" from other code may be forgotten and missed, so it is recommended to do a final integrated review to ensure that these areas are not overlooked. Additionally, errors and bad practices can be caught early in development and corrected so that the development team learns and improves throughout the implementation cycle.

3.7 Build

It is important when building the code to use tools and available compiler features to ensure the highest quality code possible. Pay attention to compiler warnings and resolve all warnings, especially those related to secure coding weaknesses. It is recommended to compile the code with all warnings enabled (i.e., no compiler exception options used). A recommended technique is to use the compiler to convert all warnings to errors, thus preventing compilation and enforcing fixes to warnings. When building the software, it is recommended that a cryptographically secure hash (see NIST Computer Security Resource Center Hash Functions 305) be generated and used in a Software Authorization Notice (SAN), Software Bill of Materials (SBOM), or similar documentation, for verification that an unaltered, approved build is used for execution. If building applications for a desktop or mobile device, code signing certificates should be used to ensure that the approved version of the code is distributed.
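A minimal sketch of this hash-and-verify step (the file here is a placeholder for the build artifact; a real SBOM/SAN workflow would record the digest alongside the artifact's identity and version):

```python
import hashlib
import os
import tempfile

def artifact_digest(path, chunk_size=65536):
    """Compute the SHA-256 digest of a build artifact, reading in chunks
    so that large binaries do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, recorded_digest):
    """Compare the on-disk artifact against the digest recorded at build time."""
    return artifact_digest(path) == recorded_digest

# Demonstration with a temporary file standing in for the build output
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example build output")
    name = tmp.name
recorded = artifact_digest(name)          # digest captured at build time
print(verify_artifact(name, recorded))    # True: artifact unaltered
os.remove(name)
```

Any later modification of the artifact changes the digest, so a mismatch at load or deployment time indicates the build is not the approved one.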

3.8 Automated Dynamic Analysis

Automated dynamic analysis examines the code behavior during the execution of the system and automatically identifies possible issues. As a result, automated dynamic analysis must be performed once the system is working. Tools such as memory leak checkers, security scanning tools, and port mappers are examples of dynamic analysis tools that must be run on the code/program to provide confidence in the security of the system.

As with automated static analysis, no single tool or method will identify all of the issues in the system. It is best to use a combination of tools to provide sufficient coverage. Some of the methods below are time-consuming and some may not provide sufficient payback to warrant their use, depending on the system being examined. Evaluate the different methods below and choose the ones that will be most effective and efficient for the project of interest.

3.9 Manual Dynamic Analysis

Manual dynamic analysis examines the code behavior during the execution of the system but requires manual analysis to identify possible issues. It must be performed once the system is working so that there is some behavior to analyze. Examples of manual dynamic analysis include a tester entering credentials manually to see if the system responds as expected, users creating requests (to test allocation/deallocation of resources with a denial-of-service mindset), and testers trying to access resources to test access permissions.

The advantage of manual dynamic analysis is that since the behavior of the system is manually analyzed, errors in business rules and unexpected behavior in different scenarios can be identified. However, since it is manual, it requires more time and effort than automated dynamic analysis.

As with some of the other analysis types, it is not realistic to perform all of the types of dynamic analysis below. No single tool or method will identify all of the issues in the system. It is required to use a combination of methods to provide sufficient coverage. Some of these methods are very time-consuming and expensive or require special knowledge or experience. Choose the methods that are most effective and efficient for the project of interest.

3.10 Testing

The testing methods below likely require effort additional to the normal testing performed. However, each type of specialized testing provides some benefit in increasing the security and robustness of the system being developed. Once again, it is not practical to perform all of the phases below (see 3.12, Guideline for a Roadmap to Cyber Resilient Software), but depending on the system and the level of security required, one or more of the phases may provide some benefit.

See also SWE-159 - Verify and Validate Risk Mitigations

3.11 Operation/System Configuration

Once the system has been developed and tested, the security of the system cannot be forgotten. It is important to be aware of security vulnerabilities and weaknesses that can be introduced in the deployment, configuration, and operation of the system. Choose the guidelines below that apply to the system and its testing and operational environments and provide the most benefit.

3.12 Guideline for a roadmap to Cyber Resilient Software

Phase 1 - Basic Security

  • Apps run in separate processes
  • Processes run with non-root (administrative) service accounts
  • Operating Systems (OS) hardening and compiler security settings are used
  • Cryptographic integrity checks on executables
  • Security audit logs
  • Enforced file system access controls

Phase 2 - Secure response and recovery

  • Security lockdown mode
  • Secure system recovery
  • Secure backups (including configuration files)
  • Secure software updates

Phase 3 - Role Based Access Control (RBAC) and intrusion detection

  • Authenticate commands from all sources
  • Multiple levels of authorization (e.g., administer, operator)
  • Secure boot
  • Algorithmic intrusion detection
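A hedged sketch of the "multiple levels of authorization" item above, combined with the secure-by-default (default-deny) principle from the architecture guidance (the roles and commands here are illustrative only, not from any NASA system):

```python
# Hypothetical role hierarchy: a higher rank implies all lower-rank permissions.
ROLE_RANK = {"observer": 0, "operator": 1, "administrator": 2}

COMMAND_POLICY = {          # minimum role required per command (illustrative)
    "read_telemetry": "observer",
    "send_command": "operator",
    "update_software": "administrator",
}

def authorize(role: str, command: str) -> bool:
    """Default-deny check: unknown roles or unlisted commands are rejected,
    so access must be explicitly granted rather than explicitly denied."""
    required = COMMAND_POLICY.get(command)
    if required is None or role not in ROLE_RANK:
        return False                       # secure by default: deny
    return ROLE_RANK[role] >= ROLE_RANK[required]

print(authorize("operator", "send_command"))      # True
print(authorize("operator", "update_software"))   # False
print(authorize("guest", "read_telemetry"))       # False (unknown role)
```

Because the policy lists what is allowed rather than what is forbidden, adding a new command without a policy entry fails closed instead of failing open.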

Phase 4 - Zero trust, mandatory access control

  • Zero trust message bus
  • SELinux mandatory access kernel calls

Phase 5 - Advanced Security

  • AI/ML intrusion detection
  • Memory safe programming language
  • Secure microkernel of operating system

3.13 Maintenance Of The Software

Having a plan for executing updates, running maintenance tasks (compacting logs, rotating files, …), and managing software patches as they are provided by vendors or the team must be in place for the operational modes of the software. This plan must contain guidance on fixing vulnerabilities in the software itself as well as disclosure mechanisms for any customers. These plans can be updated as situations change, but measurements of risk should be taken into account (e.g., weigh the risk of updating software right before a major mission milestone against limited testing time). An operations plan is also needed for security incident response, providing personnel a plan for analyzing the code/program, coordinating with any IT security operations centers, and containing the impact of the security vulnerability.

3.14 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.15 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

4. Small Projects

Unfortunately for small projects, software security is applicable in all situations: malicious actors will often use the weakest point to gain access to more systems, which means that even though the project is scaled back, software security must still be addressed. For small projects, the first step is understanding the risk postures and attack surfaces (both human and technological); once these are identified, practices can be applied to reduce the attack surface.

Independent documents for coding standards and secure coding practices do not have to be created (which would add overhead); they can be combined into a single document. Software development teams should have and use these methodologies on all projects rather than creating a new set of practices for each instance. Security requirements may be reused or gathered from previous projects to supplement the specific software requirements for the mission. Software security architecture and design patterns need to be learned by developers so that security practices become an automatic part of normal software development activities. Testing must be at the same rigor level, but automation and other techniques may be used to reduce the workload.

5. Resources

5.1 References

  • (SWEREF-004) This site supports the development of secure coding standards for commonly used programming languages such as C, C++, Java, and Perl, and the Android™ platform. Top ten plus two bonus practices.
  • (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
  • (SWEREF-305) NIST, Information Technology Laboratory, Computer Security Resource Center,
  • (SWEREF-602) Common Weakness Enumeration, MITRE Corporation. CWE is a community-developed list of common software and hardware weakness types that could have security ramifications.
  • (SWEREF-604) Common Weakness Enumeration, MITRE Corporation. CWE is a community-developed list of common software and hardware weakness types that could have security ramifications.
  • (SWEREF-605) Common Weakness Enumeration, MITRE Corporation. CWE is a community-developed list of common software and hardware weakness types that could have security ramifications.
  • (SWEREF-664) OCE site in NASA Engineering Network, Portal that houses information for software developers to develop code in a secure fashion, Formerly known as "Secure Coding"
  • (SWEREF-665) NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP).
  • (SWEREF-666) CVE® is a dictionary of publicly disclosed cybersecurity vulnerabilities and exposures that is free to search, use, and incorporate into products and services, per the terms of use.


5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.


6. Lessons Learned

6.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this requirement.

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-207 - Secure Coding Practices
3.11.6 The project manager shall identify, record, and implement secure coding practices.

7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Assess that the software coding guidelines (e.g., coding standards) include secure coding practices.

7.2 Software Assurance Products

  • Source Code Analysis
  • SA assessment of software coding guidelines for inclusion of secure coding practices.

  • The results of SA independent static code analysis, on the source code, showing that the source code follows the defined secure coding practices. 


Objective Evidence

  • The software development organization secure coding standard.
  • Static/Dynamic analysis results.

Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

  • Observations, findings, issues, risks found by the SA/safety person and may be expressed in an audit or checklist record, email, memo or entry into a tracking system (e.g. Risk Log).
  • Meeting minutes with attendance lists or SA meeting notes or assessments of the activities and recorded in the project repository.
  • Status report, email or memo containing statements that confirmation has been performed with date (a checklist of confirmations could be used to record when each confirmation has been done!).
  • Signatures on SA reviewed or witnessed products or activities, or
  • Status report, email or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
    • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
    • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.
  • The specific products listed in the Introduction of 8.16 are also objective evidence as well as the examples listed above.


7.3 Metrics

  • # of Non-Conformances identified in Cybersecurity coding standard compliance (Open, Closed)

See also Topic 8.18 - SA Suggested Metrics.

7.4 Guidance

  1. Confirm the coding guidelines (e.g., coding standards) address secure coding practices.  The selection of which coding standard to use should be done during the planning part of the software project.

Some of the widely used coding standards that consider safety are:

  • For C language: MISRA C, SEI CERT C Coding Standard. The SEI CERT C Coding Standard is a software coding standard for the C programming language, developed by the CERT Coordination Center to improve the safety, reliability, and security of software systems.
  • For C++ language: MISRA C++, JSF AV C++ Coding Standard, SEI CERT C++ Coding Standard, AUTOSAR C++ Coding Guidelines.

2. Confirm that secure coding practices are used.

Review the Software Development Plan to see which coding standards for secure coding practices have been selected. Confirm with the project (the software development lead and with the project manager) that the code standards selected are actually being used. Viewing the results obtained from a standard checker can help verify that the standard is being followed. If no standard checker is being used, a quick spot check can also be done on the code to verify that the secure coding practices recommended by the standards are being followed in the code.

3. Perform an independent code analysis for secure coding practices.

Use a code analysis tool on the source code to look for compliance with the coding standard rules. Doing this manually without a tool is nearly impossible, due to the number of coding rules and the amount of code. Manual spot checking may still be necessary in specific cases, for example, reviewing artifacts in languages that do not have automated analysis tools, such as ladder logic on Programmable Logic Controllers.

If engineering is running a tool that does the standard checking, then SA can look at and use the tool output to determine if the code meets the code standards. 

It is best if SA runs a code standard checker on the source code. Part of this is to get SA more involved directly in the source code product so SA won't just rely on what engineering is saying about the source code. 

IV&V may be able to help with the use of independent code analysis for secure coding practices.

Check whether the engineering team and the project have run an analysis tool to assess the cybersecurity vulnerabilities and weaknesses in the source code; if so, check that the findings from the analysis tool have been addressed by the team.

Confirm that the engineering team and the project have addressed any identified cybersecurity vulnerabilities and weaknesses in the software requirements, design, and code, and that any changes to address these vulnerabilities have been tested or are planned for testing.

A method of identifying weaknesses and vulnerabilities is to use the National Vulnerability Database 665 from NIST, the U.S. government repository of standards-based vulnerability data. Publicly disclosed vulnerabilities and exposures can be identified using Common Vulnerabilities and Exposures (CVE) 666, a dictionary maintained by MITRE.

See the secure coding site 664  for more information (NASA access only).

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:
