For over half a century, software engineers have known that malicious actors can exploit a class of software defect called “memory safety vulnerabilities” to compromise applications and systems. During that time, experts have repeatedly warned of the problems these vulnerabilities cause. Memory unsafe code even led to a major internet outage in 1988, when the Morris worm exploited a buffer overflow to spread across the early internet. Just how big a problem is memory unsafety? In a blog post, Microsoft reported that “~70% of the vulnerabilities Microsoft assigns a CVE [Common Vulnerabilities and Exposures entry] each year continue to be memory safety issues.” Google likewise reported that “the Chromium project finds that around 70% of our serious security bugs are memory safety problems.” Mozilla reported that, in an analysis of its security vulnerabilities, “of the 34 critical/high bugs, 32 were memory-related.”
These vulnerabilities are not theoretical; attackers exploit them against real people. For example, Google’s Project Zero team analyzed vulnerabilities that were exploited in the wild before being reported to software providers (also called “zero-day vulnerabilities”) and found that “out of the 58 [such vulnerabilities] for the year, 39, or 67% were memory corruption vulnerabilities.” Citizen Lab has likewise uncovered spyware used against civil society organizations that exploited memory safety vulnerabilities.
In what other industry would the market tolerate such well-understood and severe dangers for users of products for decades?
Over the years, software engineers have invented numerous clever but ultimately insufficient mitigations for this class of vulnerability, including techniques like address space layout randomization and sandboxing that reduce impact, and tools for static and dynamic code analysis that reduce occurrence. In addition to those tools, organizations have spent significant time and money training their developers to avoid unsafe memory operations. There are also several parallel efforts to improve the memory safety of existing C/C++ code. Despite these efforts (and their associated costs in complexity, time, and money), memory unsafety has remained the most common type of software security defect for decades.
There are, however, a few areas that every software company should investigate. First, there are some promising memory safety mitigations in hardware. The Capability Hardware Enhanced RISC Instructions (CHERI) research project uses modified processors to give memory unsafe languages like C and C++ protection against many widely exploited vulnerabilities. Another hardware-assisted technology comes in the form of memory tagging extensions (MTE) that are available in some systems. While some of these hardware-based mitigations are still making the journey from research to shipping products, many observers believe they will become important parts of an overall strategy to eliminate memory safety vulnerabilities.
Second, companies should investigate memory safe programming languages. Most modern programming languages other than C/C++ are already memory safe. Memory safe programming languages manage the computer’s memory so the programmer cannot introduce memory safety vulnerabilities. Compared to other available mitigations that require constant upkeep – whether developing new defenses, sifting through vulnerability scans, or other human labor – once code is written in a memory safe programming language, no further work is needed to keep it memory safe.
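As a brief illustration (not drawn from the report above), consider how a memory safe language such as Rust handles an out-of-bounds read, a bug that in C would silently read adjacent memory and is a classic source of memory safety vulnerabilities:

```rust
fn main() {
    let data = vec![10, 20, 30];

    // The checked accessor returns None for an invalid index
    // instead of touching memory outside the allocation.
    match data.get(10) {
        Some(value) => println!("value: {value}"),
        None => println!("index 10 is out of bounds"),
    }

    // Direct indexing is also safe: `data[10]` would abort the
    // program with a panic rather than corrupt memory.
    println!("first element: {}", data[0]);
}
```

The key point is that the language runtime, not the programmer, enforces the bounds check, so this entire class of defect cannot silently slip into production code.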
What was lacking until a few years ago was a language that combined the speed of C/C++ with built-in memory safety assurances. In 2006, a software engineer at Mozilla began working on a new programming language called Rust. Rust version 1.0 was officially announced in 2015. Since then, several prominent software organizations have started to use it in their systems, including Amazon, Facebook, Google, Microsoft, Mozilla, and many others. Rust is also now supported for development of the Linux kernel.
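To sketch what those built-in assurances look like in practice (an illustrative example, not taken from any of the organizations named above), Rust’s ownership rules stop use-after-free bugs at compile time, before the code can ever ship:

```rust
fn main() {
    let message = String::from("hello");

    // Ownership of the heap allocation moves into `owned`;
    // `message` is no longer valid after this line.
    let owned = message;

    // The compiler rejects any later use of `message`, preventing
    // the use-after-free/dangling-pointer bugs possible in C/C++:
    // println!("{message}"); // error[E0382]: use of moved value

    println!("{owned}");
}
```

Because the check happens at compile time, there is no runtime cost, which is how Rust keeps C/C++-class performance while ruling out this vulnerability class.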
Different products will require different investment strategies to mitigate memory unsafe code. The balance between C/C++ mitigations, hardware mitigations, and memory safe programming languages may even differ between products from the same company. No one approach will solve all problems for all products. The one thing software manufacturers cannot do, however, is ignore the problem. The software industry must not kick the can down the road another decade through inaction.
CISA’s secure by design white paper outlines three core principles for software manufacturers: take ownership of customer security outcomes, embrace radical transparency, and lead security transformations from the top of the organization. Solutions to the memory unsafety problem will incorporate all three principles.
CISA urges software manufacturers to make it a top-level company goal to reduce and eventually eliminate memory safety vulnerabilities from their product lines. To demonstrate such a commitment, companies can publish a “memory safety roadmap” that includes information about how they are modifying their software development lifecycle (SDLC) to accomplish this goal. A roadmap might include details such as the date after which the company will build new products or components in a memory safe programming language, and plans to support the memory safety initiatives of open source libraries in its supply chain.
Memory unsafety has plagued the software industry for decades and will continue to be a major source of vulnerabilities and real-world harm until top business leaders at software manufacturers make appropriate investments and take ownership of their customers’ security outcomes. As we recognize National Coding Week, we look forward to participants across the software industry working together to make software that is safer by design, and memory safety is key to achieving that goal.