SWE-185 - Secure Coding Standards Verification

1. Requirements

3.11.9 The project manager shall verify that the software code meets the project’s secure coding standard by using the results from static analysis tool(s). 

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

To view the history of this requirement, see SWE-185 History.

1.3 Applicability Across Classes

 

Class          A      B      C      D      E      F

Applicable?

Key:  - Applicable | - Not Applicable
A & B = Always Safety Critical; C & D = Sometimes Safety Critical; E & F = Never Safety Critical.

2. Rationale

The use of uniform software coding methods, standards, and/or criteria ensures uniform coding practices, reduces errors through safe language subsets, and improves code readability. Verification that these practices have been adhered to reduces the risk of software malfunction for the project during its operations and maintenance phases.

3. Guidance

Verification of the developed software against the coding standards provides the greatest benefit when performed from software development inception to completion. Coding standards are selected at the start of the software development effort. Verification activities for the software work products include reviews, such as peer reviews and inspections (see SWE-087), and assessments of how the coding standards are used to develop the software work products.

The use of automated tools for assessing adherence to standards at appropriate reviews, or even in overnight batch runs, will assist the project team in adherence and verification. “Code should be mechanically checked against the standards with the help of state-of-the-art static source code analyzers. ... Flight code should be checked nightly for compliance with a coding standard and subjected to rigorous analysis with state-of-the-art [static source code analysis tools]. The warnings generated by each of these tools are combined with the output of mission-specific checkers that secure compliance with naming conventions, coding style, etc. In addition, all warnings, if any (there should be none), from the standard C compiler, used in pedantic mode with all warnings enabled, should be provided to the software developers... [who] are required to close out all reports before a formal code review is initiated. In peer code reviews, an additional source of input is provided by designated peer code reviewers... Separately, key parts of the software design can be also checked for correctness and compliance with higher-level design requirements with the help of logic model checkers.” (SWEREF-477)
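The “mission-specific checkers” described in the quote above can be as simple as a script scanned over the source tree during the nightly build. The following is a minimal sketch; the banned-function list and the lower-case naming rule are illustrative assumptions, not rules from any NASA standard:

```python
import re

# Illustrative banned C functions and naming rule -- stand-ins for the
# project-specific rules a real mission checker would encode.
BANNED = {"gets", "sprintf", "strcpy"}
CALL = re.compile(r"\b(" + "|".join(sorted(BANNED)) + r")\s*\(")
# Function definitions: return type, name, parameter list, opening brace.
FUNC_DEF = re.compile(r"^\s*\w[\w\s\*]*?\b([A-Za-z_]\w*)\s*\([^;]*\)\s*\{", re.M)

def check_source(text):
    """Return (line_number, message) findings for one C translation unit."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for m in CALL.finditer(line):
            findings.append((lineno, f"banned function '{m.group(1)}'"))
    for m in FUNC_DEF.finditer(text):
        name = m.group(1)
        if name != name.lower():  # illustrative rule: lower_snake_case names only
            lineno = text[:m.start()].count("\n") + 1
            findings.append((lineno, f"function '{name}' violates naming convention"))
    return findings
```

A checker like this would run alongside, not instead of, a full static analyzer and the compiler in pedantic mode with all warnings enabled.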

Training should be provided for the software development team in the use of logic model checkers for the analysis and verification of flight software. (SWEREF-477)

Manual analysis to verify the complete application of safety or security coding standards is all but impossible. (SWEREF-476)


Additional guidance related to software coding standards and verification may be found in other related requirements in this Handbook, such as SWE-087.

4. Small Projects

No additional guidance is available for small projects.

5. Resources

5.1 References

  • (SWEREF-476)

    Michael Aguilar, NASA Engineering and Safety Center, October 21, 2014.

  • (SWEREF-477)

Klaus Havelund and Gerard J. Holzmann, Laboratory for Reliable Software (LaRS), Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91109-8099.

  • (SWEREF-664)

OCE site in the NASA Engineering Network, a portal that houses information for software developers on developing code in a secure fashion.

  • (SWEREF-665)

NVD is the U.S. government repository of standards-based vulnerability management data, represented using the Security Content Automation Protocol (SCAP).

  • (SWEREF-666)

    CVE® is a dictionary of publicly disclosed cybersecurity vulnerabilities and exposures that is free to search, use, and incorporate into products and services, per the terms of use.


5.2 Tools


6. Lessons Learned

6.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this requirement.

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-185 - Secure Coding Standards Verification
3.11.9 The project manager shall verify that the software code meets the project’s secure coding standard by using the results from static analysis tool(s). 

7.1 Tasking for Software Assurance

  1. Analyze the engineering data or perform independent static code analysis to verify that the code meets the project’s secure coding standard requirements.

7.2 Software Assurance Products

  • Source code analysis results: the analysis of engineering results, or the SA independent static code analysis of the source code, showing that secure coding practices were followed.
  • Identification of any risks or issues with the use of secure coding practices.


    Objective Evidence

    • None
     Definition of objective evidence

    Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

    • Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., a Risk Log).
    • Meeting minutes with attendance lists, or SA meeting notes or assessments of the activities, recorded in the project repository.
    • Status report, email, or memo containing statements that confirmation has been performed, with the date (a checklist of confirmations could be used to record when each confirmation has been done).
    • Signatures on SA-reviewed or witnessed products or activities, or
    • Status report, email, or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
      • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
      • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.

7.3 Metrics

  • # of software work product Non-Conformances identified by life-cycle phase over time
  • Document the Static Code Analysis tools used with associated Non-Conformances
  • # of total errors and warnings identified by tool
  • # of errors and warnings evaluated vs. # of total errors and warnings identified by tool
  • # of Non-Conformances raised by SA vs. total # of raised Non-Conformances
  • # of static code errors and warnings identified as “positives” vs. # of total errors and warnings identified by tool
  • # of static code errors and warnings resolved by Severity vs. # of static code errors and warnings identified by Severity by tool
  • # of static code “positives” over time (Open, Closed, Severity)
  • # of Cybersecurity vulnerabilities and weaknesses identified by tool
  • # coding standard violations identified (Open, Closed, type of violation, Severity)
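Several of the metrics above can be computed mechanically from a findings export. A minimal sketch, assuming a hypothetical export format in which each finding record carries a severity, an open/closed status, a triage verdict, and who raised it (the field names are illustrative, not from any specific tool):

```python
from collections import Counter

# Hypothetical findings export; field names are illustrative.
findings = [
    {"id": 1, "severity": "high", "status": "closed", "true_positive": True,  "raised_by": "SA"},
    {"id": 2, "severity": "high", "status": "open",   "true_positive": True,  "raised_by": "Eng"},
    {"id": 3, "severity": "low",  "status": "closed", "true_positive": False, "raised_by": "Eng"},
    {"id": 4, "severity": "med",  "status": "open",   "true_positive": True,  "raised_by": "SA"},
]

def metrics(findings):
    """Compute a few of the SWE-185 metrics from a findings list."""
    total = len(findings)
    positives = [f for f in findings if f["true_positive"]]
    return {
        "total": total,
        # "positives" vs. total errors and warnings identified by tool
        "positives_vs_total": (len(positives), total),
        # Non-Conformances raised by SA vs. total raised
        "raised_by_sa_vs_total": (sum(f["raised_by"] == "SA" for f in findings), total),
        # open findings broken out by severity
        "open_by_severity": Counter(f["severity"] for f in findings if f["status"] == "open"),
    }
```

Tracking these counts over time (per build or per life-cycle phase) gives the trend data the metrics list asks for.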

7.4 Guidance

  1. Confirm the coding guidelines (e.g., coding standards) address secure coding practices. The selection of which coding standard to use should be made during the planning phase of the software project.

Some of the widely used coding standards that consider safety are:

For C language: MISRA C, SEI CERT C Coding Standard. The SEI CERT C Coding Standard is a software coding standard for the C programming language, developed by the CERT Coordination Center to improve the safety, reliability, and security of software systems.

For C++ language: MISRA C++, JSF AV C++ Coding Standard, SEI CERT C++ Coding Standard, AUTOSAR C++ Coding Guidelines.

2. Confirm that secure coding practices are used.

Confirm that the project is actually using the code standard selected.

3. Perform an independent static code analysis for secure coding practices.

Use a static code analysis tool on the source code to check compliance with the coding standard rules. Doing this by hand without a tool is nearly impossible; there are too many coding rules and too much code.

If engineering is running a tool that performs the standard checking, then SA can review and use the tool output to determine whether the code meets the coding standard.
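When SA works from engineering's tool output rather than rerunning the analysis, even a short script can turn raw diagnostics into a reviewable summary. A sketch that assumes gcc/clang-style diagnostic lines (`file:line:col: warning: message [rule]`); real analyzers vary, and a different parser may be needed:

```python
import re
from collections import defaultdict

# gcc/clang-style diagnostic lines: "file.c:12:5: warning: text [-Wflag]".
DIAG = re.compile(
    r"^(?P<file>[^:]+):(?P<line>\d+):\d+:\s+(?P<kind>warning|error):"
    r"\s+(?P<msg>.*?)(?:\s+\[(?P<rule>[^\]]+)\])?$"
)

def summarize(tool_output):
    """Group warnings/errors from raw tool output by rule tag."""
    by_rule = defaultdict(list)
    for line in tool_output.splitlines():
        m = DIAG.match(line)
        if m:
            rule = m.group("rule") or "untagged"
            by_rule[rule].append((m.group("file"), int(m.group("line"))))
    return dict(by_rule)

# Hypothetical output lines used only to exercise the parser.
sample_output = """\
main.c:10:5: warning: unused variable 'tmp' [-Wunused-variable]
io.c:42:9: error: implicit declaration of function 'read_frame' [-Wimplicit-function-declaration]
main.c:77:3: warning: format string is not a string literal [-Wformat-security]
"""
```

A per-rule summary like this makes it easier to confirm that each class of violation has been dispositioned, rather than scanning thousands of raw lines.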

It is best if SA runs a code standard checker on the source code itself. Part of the reason is to get SA more directly involved with the source code product, rather than relying only on what engineering says about the source code.

IV&V may be able to help you with an independent static code analysis for secure coding practices.

Check to see whether the engineering team and the project have run a static analysis tool to assess the cybersecurity vulnerabilities and weaknesses in the source code. If so, check that the findings from the static analysis tool have been addressed by the team.
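One concrete form of "findings have been addressed" is that every analyzer finding has been dispositioned before the formal code review gate, echoing the close-out practice quoted in the Guidance above. A sketch, assuming a hypothetical findings list in which each record carries a status of "open", "closed", or "waived":

```python
def ready_for_review(findings):
    """Gate check: every static-analysis finding must be dispositioned
    (closed, or formally waived with rationale) before a formal code review."""
    blocking = [f for f in findings if f["status"] not in ("closed", "waived")]
    return len(blocking) == 0, blocking

# Hypothetical triage state for a handful of findings.
findings = [
    {"id": "F-101", "status": "closed"},
    {"id": "F-102", "status": "waived"},
    {"id": "F-103", "status": "open"},
]
```

The second return value lists exactly which findings block the gate, which is the evidence SA would record.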

A method of identifying weaknesses and vulnerabilities is to use the National Vulnerability Database (SWEREF-665) from NIST, the U.S. government repository of standards-based vulnerability data. Software weaknesses can be identified using the Common Weakness Enumeration (CWE) (SWEREF-666), a dictionary created by MITRE.

See the secure coding site (SWEREF-664) for more information (NASA access only).


