Source Code Quality Analysis

1. Introduction

The major purpose of source code analysis during the implementation phase is to determine whether the generated code correctly implements the verified design and meets the quality standards desired by the project, such as few or no defects, reliability, maintainability, efficiency, and understandability. Some of the code analysis techniques mirror those used in design analysis. However, the results may differ significantly from those obtained during earlier development phases because the final code may differ substantially from what was expected or predicted. Even though these analyses seem like repeats, during implementation they are run on actual code, whereas previously they were applied to detailed design, code-like products.

There are also many other attributes that cannot really be analyzed until code is available, such as code size, complexity, timing, resource usage, etc. There are many tools available to help with automating the analysis done during implementation, but most of them require a fairly mature code base. For some techniques, even if they can be done early in implementation, they may need to be repeated when the final version of the code is available. This is particularly true if any of the early analyses discover issues in the requirements or design that require considerable rework.


Tab 2 in this section consists largely of references to other areas in this Handbook that contain guidance for implementation.

Tab 3 in this section discusses a variety of analysis techniques and methods that can be used to improve the quality of the code for any type of software.

Tab 4 focuses on analysis techniques and methods for safety-critical software.

2. Code Quality Guidance

This page is designed to pull together much of the good guidance that is already in other places in this Handbook and provide references to those places. There are a number of requirements in NPR 7150.2 and tasks in NASA-STD-8739.8 that require certain good practices to be used during the code implementation phase. In addition, this page includes a generic list of “best coding practices” which can be considered during implementation.

Use of Coding Standards: See SWE-060, tab 3 for a discussion on coding standards. Coding standards encourage the uniform use of coding practices, making it easier to understand, read, and debug the finished code. They also encourage the use of best practices. Typically, coding standards are language-specific and often include best practices for other attributes such as safety and security. For example, the CERT C Secure Coding Standard includes a number of secure coding practices that should be used to improve a program’s security profile. Projects with safety-critical software should use a coding standard that includes both safety and security if possible.
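To illustrate the kind of rule a secure coding standard enforces (CERT C, for instance, recommends checking the return values of standard library calls and bounding all reads), a minimal sketch follows; the `read_line` helper is hypothetical, not taken from any standard:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative secure-coding practice, in the spirit of CERT C rules:
   check return values and bound all reads. read_line is a hypothetical helper. */
int read_line(char *buf, size_t size, FILE *in) {
    if (fgets(buf, (int)size, in) == NULL) {
        return -1;                      /* EOF or read error: report it, never ignore it */
    }
    buf[strcspn(buf, "\n")] = '\0';     /* strip the trailing newline, if any */
    return 0;
}
```

Because `fgets` never writes more than `size` bytes and the failure path is explicit, two common defect classes (buffer overflow and ignored error returns) are ruled out by construction.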

SWE-060 tab 3 also recommends the use of an Integrated Development Environment (IDE), particularly for larger projects. This provides the same integrated set of tools to everyone on the project and can enhance productivity and improve communication.

SWE-061 tab 3 contains a Checklist for C Programming Practices. This checklist is designed for safety-critical projects, but could be used for others as well.

Use of Accredited Tools: SWE-136, tab 3 contains guidance on using accredited tools for development or maintenance. The tab lists a number of places where accredited tools can contribute significantly to the quality and safety of the software and it provides information on how to get the tools accredited.

Use of Static Analyzers: SWE-135, tab 3 contains guidance on static analyzers which are very useful to identify errors in coding, security problems and safety issues. Static analyzers are also language specific and often focus on particular types of errors so it is necessary to ensure the analyzers the project chooses actually support the features needed by the project (e.g., safety, security). The use of more than one static analyzer tool is recommended since the available tools vary in the types of errors they catch.

Unit Testing: SWE-062, tab 3 and SWE-186, tab 3 focus on unit testing, which is considered an important part of implementation. It is particularly important for safety-critical software since it is often not possible to check some of the safety features once the system is integrated. For safety-critical units, the unit tests should be carefully planned and documented, with the results recorded and any errors found captured in the defect/issue tracking system. Unit tests should be repeatable.

Some Coding Best Practices to keep in mind while coding:

  • Keep the code modular
  • Keep the modules small (less than 50 lines recommended)
  • Make sure the code is readable
  • Use comments and document the code
  • Don’t repeat code – if it is needed multiple times, make it a function or routine
  • Keep the code simple
  • Avoid hard-coding
  • Use descriptive names (Action words can be used for the start of functions to designate their function)
  • Use version control
  • Keep the code consistent across the team
  • Let team mates review your code (outside of peer reviews)
  • Set time and budget estimates realistically

3. Code Quality Analysis Techniques

There are many ways to evaluate the quality of the generated code and to determine whether the code meets the necessary capabilities and requirements for the project. Some of these techniques are already required by NPR 7150.2 and/or NASA-STD-8739.8 and quite a few others are listed in the guidance for various requirements in this Handbook.  Some of these are discussed below. These techniques may be used on all types of code, including safety critical code. See tab 2 for general guidance on good coding practices that apply to both safety critical and non-safety critical software. See tab 4 for more information on analysis techniques for safety-critical software.


  1. Peer Reviews, Code Walk-throughs or Inspections – The requirements for peer reviews and inspections are addressed in SWE-086, SWE-087, and SWE-089 of NPR 7150.2. The primary purpose of these types of reviews is to identify errors in the code or to familiarize the stakeholders with the code. For code, these reviews are required as documented in the Project Plan or Development Plan and are often limited to critical or complex sections of the code. Characteristics of these reviews are: 1) advance preparation by attendees; 2) use of a checklist; 3) verification that the product meets the requirements; 4) participation by both team members and other peers; and 5) documentation of the results, with errors and issues addressed following the reviews. Much more information on these can be found under the guidance for the requirements above and in Topic 7.10 in this Handbook. These types of reviews are recommended for safety-critical areas of the code.
  2. Checklists – Checklists can be used to verify whether certain practices or processes have been followed during the code development of the software. One such checklist, the Checklist of C Programming Practices for Safety, is found in the software guidance tab (tab 3) of SWE-060 in this Handbook. This checklist was designed for safety-critical software and will help determine whether many of the safety-related best practices have been followed.
  3. Static Code Analysis – SWE-135 in NPR 7150.2 requires the use of static analyzer tools during development and testing. Modern static code analysis tools can identify a variety of issues and problems, including but not limited to dead code, non-compliance with coding standards, security vulnerabilities, race conditions, memory leaks, and redundant code. Software peer reviews/inspections of code items can include reviewing the results from static code analysis tools. One issue with static code analyzers is that they may generate a number of false positives, which need to be resolved and can be very time-consuming. Static code analyzers are not available for all platforms or languages. For critical code, it is essential to use sound and complete static analyzers. Sound and complete analyzers guarantee that all errors are flagged and that no false negatives (i.e., an erroneous operation classified as safe) are generated. Such commercial analyzers are expensive but necessary for critical applications. Note that sound and complete static analyzers are now available free of charge for C and C++ software systems. More information on static analyzers can be found in the software guidance tab (tab 3) of SWE-135 in this Handbook. The use of static code analyzers is required for safety-critical software.
  4. Bi-Directional Traceability – The bi-directional traceability of the software requirements to the design components, and of the design components to the software code, required in SWE-052, provides the information needed to determine whether all of the requirements have been included in the design and code.
  5. Interface Analysis – While interface analysis is not always done, it can identify many problems earlier in the life cycle. Interface errors are one of the most common types of errors. The coded interface should be checked against the interface definition documentation to be sure it has been coded properly.
  6. Security Source Code Review – This is a targeted review for security where the reviewer launches a code analyzer that checks for potential security issues and steps through the code line by line to evaluate any potential issues.
  7. Analysis for COTS, GOTS, OSS, reused code – All categories of reused code should be checked for a number of potential problems before being included in the final code base. Items to look for are: 1) unused code/dead code; 2) unnecessary functionality; 3) whether the reused code works properly under the same assumptions and constraints as the system being developed (think about boundary conditions); and 4) potential security problems. There are tools that can help with some of these questions, but for others additional vendor information may be necessary. For COTS or OTS software where the source code is not available, it may be possible to get some security information by reviewing the version history and looking for previous security problems.
  8. Unit Testing – Unit testing is considered part of implementation. It is required in SWE-062 and is very important for checking the individual functionality of each unit of code. It must be done before code integration since after integration, the individual component inputs and outputs are often no longer accessible.
  9. SEI Code Analysis for Architecture, Quality and Security Assessments – The IV&V Facility is currently working to prototype a version of the SEI analysis process to assess code quality for NASA use. This code analysis is an approach to evaluate code quality by using an automated analysis of the software to assess the degree to which it satisfies one or more desired quality attributes. At some future point, NASA may be able to use this process to assess certain code quality attributes of interest to NASA.
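As an illustration of the defect classes that static analysis catches, the following deliberately flawed C fragment contains the kinds of problems a typical analyzer reports; the functions are hypothetical, and the comments mark the expected findings:

```c
#include <stdlib.h>
#include <string.h>

/* Deliberately flawed examples of defects static analyzers commonly flag. */

char *copy_message(const char *msg) {
    char *buf = malloc(16);
    if (buf == NULL) {
        return NULL;
    }
    strcpy(buf, msg);   /* flagged: possible buffer overflow, length never checked */
    return buf;         /* flagged: memory leak if the caller never frees buf */
}

int check_mode(int mode) {
    if (mode > 0) {
        return 1;
    } else if (mode <= 0) {
        return 0;
    }
    return -1;          /* flagged: dead code, this branch is unreachable */
}
```

Note that a compiler happily accepts all of this; the overflow, leak, and unreachable branch only surface through analysis tools or at run time, which is why SWE-135 requires the tools rather than relying on compilation alone.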

4. Safety Code Analysis

Some of the same analysis techniques listed in Tabs 2 and 3 are also applicable here, and many are recommended or required for safety-critical software; the Software Assurance team may use several of them, such as checklists. It is during software implementation (coding) that software controls of safety hazards are actually realized. Safety requirements have been passed down through the designs to the coding level. Programmers must recognize not only the specific safety-related design elements but should also be aware of the types of errors that can be introduced into the non-safety code that can compromise safety controls. Coding checklists should be provided to check for common errors. Safety checklists can be used to verify that all the safety features and safety requirements have been included in the code. Programming checklists (see the section on Programming Checklists under the Topics in this Handbook) may be used to check for best practices, compliance with coding standards, common errors, and problems noted in lessons learned. When checklists are used, the Software Safety personnel should review the results and make sure that any issues found have been addressed. Similarly, with static code analysis, the Software Safety personnel will generally review the results, particularly noting any issues that might cause safety problems and verifying that they have been addressed. Safety personnel should attend the peer reviews of any safety-critical modules. Software Safety personnel should also examine the bi-directional traceability to ensure that all of the safety-related requirements have been correctly designed and correctly converted from the design into the code.

  1. Unit Test Analysis – The Software Safety Personnel should review or witness the unit testing of the safety-critical modules to be sure they produce the expected results. Unit testing is particularly important with the safety-critical software since many of the safety features are very difficult to test once the whole system has been integrated. Also, see the information in Topic 8.16 - Tab 5 on unit testing safety critical software.
  2. Code Logic Analysis – Code logic analysis evaluates the sequence of operations represented by the coded program to detect logic errors in the coded software. Generally, this analysis is only applied to safety-critical software modules as it is a time-consuming activity. To do this, flow charts are developed from the actual code and compared with the design descriptions and flow diagrams. Similarly, the equations in the code are compared with the equations in the design materials. Finally, memory decoding is used to identify critical instruction sequences even when they may be disguised as data. The analyst should determine whether each instruction is valid and if the conditions under which it can be executed are valid.  Memory decoding should be done on the final code.
  3. Code Data Analysis – The objective of code data analysis is to ensure that the data is being defined correctly and used properly. The usage and value of the data items in the code should be compared with their descriptions in the design. Another concern is to ensure that the data is not being altered inadvertently or over-written. Also, check to see that interrupt processing is not interfering with the safety-critical data.
  4. Code Interface Analysis – Code interface analysis is intended to verify that the interfaces have been implemented properly. Check that parameters are properly passed across interfaces.  Verify that data size, measurement unit, byte sequence, and bit order within bytes are the same on all sides of the interface.
  5. Unused Code Analysis – Unused code is a problem because it can contain routines that might be hazardous if inadvertently executed and because it adds unnecessary complexity and resource usage. Unused code can generally be identified by using static code analyzers.
  6. Interrupt Analysis – This analysis focuses on the effect of interrupts on program flow and potential data corruption. For example, can an interrupt keep a safety-critical task from completing? Can a low-priority process interrupt a high-priority task and change its data? When analyzing interrupts, think about the following: program segments where interrupts are locked out, re-entrant code, interruptible code segments (protect a timing-critical segment from interrupts if a delay would be unacceptable), priorities, and undefined interrupts.
  7. Final Timing, Throughput, and Sizing Analysis - With the completion of the coding phase, the timing, throughput, and sizing parameters can be measured. The size of the executable component (storage size) is easily measured, as is the amount of memory space used by the running software.  Special tests may need to be run to determine the maximum memory used, as well as timing and throughput parameters.  Some of these tests may be delayed until the testing phase, where they may be formally included in functional or load/stress tests.  However, simple tests should be run as soon as the appropriate code is stable, to allow verification of the timing, throughput, and sizing requirements.

Performing these types of analyses will likely result in finding a number of coding errors in addition to areas where changes or additions need to be made to the requirements. All errors found should be documented in a tracking system and tracked to closure. If errors are found in the requirements, these requirements changes should go through the formal change process and when approved the design and code should be changed accordingly. Hazard analyses should be updated to reflect the changes.

Any safety analysis done should be reported at reviews and regular status meetings with the project. Reporting should include the identification of the source code analyzed, the types of analyses performed, the types and numbers of errors/issues found, the timeframe for resolution of the issues, and the overall status of the code based on the analyses done. Include an assessment of any risks identified.

5. Resources

5.1 References

No references have currently been identified for this Topic. If you wish to suggest a reference, please leave a comment below.

5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
