Simple coding mistakes that can lead to critical vulnerabilities

Software security is a major concern for software applications, since the exploitation of a single vulnerability can have far-reaching consequences. Most software vulnerabilities stem from a small number of common programming errors, which developers introduce during the implementation phase, mainly due to a lack of security expertise or to mistakes made under delivery time pressure. In order to mitigate these errors and protect software applications against critical vulnerabilities, appropriate security mechanisms should be put in place.

Below is the 2022 edition of the list of the top 25 most dangerous software weaknesses, which is published as part of the Common Weakness Enumeration (CWE) [1].
Figure 1: The list of the top 25 most dangerous software weaknesses of 2022, as published by the Common Weakness Enumeration (CWE)

This list contains the most common and, from a security viewpoint, most impactful weaknesses of software systems. It is telling that weaknesses like “Out-of-bounds Read”, “Out-of-bounds Write”, and “Improper Input Validation” rank so high in the list: although these are relatively simple mistakes that can be easily fixed in the code, they remain among the most common weaknesses found in software systems, and they can have severe consequences. To stress how easily such critical vulnerabilities could be avoided with simple code checks, we provide the examples presented in the table below:

Table 1: Examples of common vulnerabilities along with their fixes

The first example depicted in the table above corresponds to a Buffer Overflow vulnerability. This vulnerability is common in memory-unsafe programming languages like C and C++. The issue arises when the software application does not perform bounds checking and allows input to be written beyond the end of an allocated buffer, thereby overwriting adjacent memory locations. These locations may contain data or executable code, so the overflow can lead to unexpected program behavior, including memory access errors, incorrect results, and crashes. In the given example, the function fun() receives a parameter str and copies it into the buffer array without checking the bounds of either buffer, potentially causing an overflow. The issue is addressed by properly checking the sizes of the two buffers before performing the copy.
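Since Table 1 is shown as an image, the following C sketch illustrates the pattern described above. The names fun, str, and buffer follow the text, while the buffer size, the error handling, and the main() driver are illustrative assumptions; the exact code in Table 1 may differ.

#include <stdio.h>
#include <string.h>

/* Vulnerable version: copies str into a fixed-size buffer without
   any bounds checking (CWE-787: Out-of-bounds Write). */
void fun(const char *str) {
    char buffer[16];
    strcpy(buffer, str);   /* writes past the end of buffer if str holds 16+ characters */
    printf("%s\n", buffer);
}

/* Fixed version: checks the source length against the destination
   size before copying, as described in the text. */
void fun_fixed(const char *str) {
    char buffer[16];
    if (strlen(str) >= sizeof(buffer)) {
        fprintf(stderr, "Input too long, rejecting it\n");
        return;            /* alternatively, truncate the input, depending on the requirements */
    }
    strcpy(buffer, str);   /* safe: the length was verified above */
    printf("%s\n", buffer);
}

int main(void) {
    fun_fixed("short input");                                   /* accepted and printed */
    fun_fixed("an input that is far too long for the buffer");  /* rejected */
    return 0;
}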

The second example presented in the table above corresponds to an OS Command Injection vulnerability. This vulnerability arises when the software product constructs an OS command from user-defined (i.e., tainted) input without proper validation or neutralization, which could allow attackers to execute unexpected and dangerous commands directly on the operating system. In the given example, the p variable receives user-defined data from a user request, and these data are used directly to execute a command. To mitigate this issue, as shown in the table above, the user-defined input should be checked for illegal characters (i.e., input validation), and if illegal characters are present, it should be sanitized by removing them (i.e., input sanitization/neutralization). In addition, instead of string concatenation, the final command should be constructed using a dedicated method (i.e., parameterization). Alternatively, the command and the data could be passed as individual parameters to a ProcessBuilder object instead of using the Runtime.exec() method.
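The following Java sketch illustrates the ProcessBuilder alternative described above. The variable name p follows the text, while the ping command, the allow-list pattern, and the rejection-based validation (rather than character stripping) are illustrative assumptions; the exact code in Table 1 may differ.

import java.io.IOException;
import java.util.regex.Pattern;

public class PingService {

    // Allow-list: accept only characters that can appear in a hostname
    // (illustrative; the exact validation rule depends on the application).
    private static final Pattern SAFE_HOST = Pattern.compile("^[a-zA-Z0-9.-]+$");

    // Vulnerable pattern: tainted input concatenated into a command string
    // (CWE-78: OS Command Injection).
    static void pingUnsafe(String p) throws IOException {
        Runtime.getRuntime().exec("ping -c 1 " + p);  // p may smuggle in extra commands
    }

    // Fixed pattern: validate the input first, then pass the command and its
    // arguments as separate parameters to ProcessBuilder, so the user input
    // is never interpreted as part of the command itself.
    static void pingSafe(String p) throws IOException {
        if (!SAFE_HOST.matcher(p).matches()) {
            throw new IllegalArgumentException("Illegal characters in host: " + p);
        }
        new ProcessBuilder("ping", "-c", "1", p).start();
    }

    public static void main(String[] args) throws IOException {
        pingSafe("example.com");               // accepted by the allow-list check
        // pingSafe("example.com; rm -rf /"); // would be rejected before execution
    }
}

Passing the command and its data as separate ProcessBuilder parameters ensures that no shell ever interprets the user input, which removes the injection vector even if the validation rule is imperfect.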

The above examples lend further support to the common observation that the majority of critical vulnerabilities are introduced by relatively simple coding mistakes made by developers during implementation, which could have easily been avoided in the first place. They also stress the need for mechanisms (e.g., static code analyzers) that can help developers detect and fix these simple issues before the software is released on the market.

To address the aforementioned challenge, within the context of the IoTAC project and as part of our Software Security-by-Design (SSD) Platform, we propose the Security Evaluation Framework (SEF). SEF applies security-specific static analysis to detect security issues (i.e., potential vulnerabilities) residing in the source code of a given software application and provides feedback on how these issues can be fixed. Regularly executing SEF during the development of software applications can lead to the identification and elimination of critical vulnerabilities before release, resulting in more secure solutions. More information about the SEF component of the SSD Platform can be found in one of our previous posts.

[1] https://cwe.mitre.org/top25/archive/2022/2022_cwe_top25.html
