The post AI Is Writing the Next Wave of Software Vulnerabilities — Are We “Vibe Coding” Our Way to a Cyber Crisis? appeared first on RunSafe Security.
For decades, cybersecurity relied on shared visibility into common codebases. When a flaw was found in OpenSSL or Log4j, the community could respond: identify, share, patch, and protect.
AI-generated code breaks that model. Instead of reusing an open source component and complying with its license restrictions, a developer can prompt AI to generate a near-identical version that never touches the original open source code.
I recently attended SINET New York 2025, joining dozens of CISOs and security leaders to discuss how AI is reshaping our threat landscape. One key concern surfaced repeatedly: Are we vibe coding our way to a crisis?
At the SINET New York event, Tim Brown, VP Security & CISO at SolarWinds, pointed out that with AI coding, we could lose insights into common third-party libraries.
He’s right. If every team builds bespoke code through AI prompts, including components that are similar to but distinct from open source libraries, there is no longer a shared foundation. Vulnerabilities become one-offs. Without common components, vulnerability intelligence can’t be shared, and a flaw in your product could also exist in someone else’s without either of you knowing it.
The ripple effect is enormous. Without shared components, there’s no community-driven detection, no coordinated patching, and no visibility into risk exposure across the ecosystem. Every organization could be on its own island of unknown code.
Even more concerning, AI doesn’t “understand” secure coding the way experienced engineers do. It generates code based on probabilities and its training data. A known vulnerability could easily reappear in AI-generated code, alongside any new issues.
Veracode’s 2025 GenAI Code Security Report found that “across all models and all tasks, only 55% of generation tasks result in secure code.” That means that “in 45% of the tasks the model introduces a known security flaw into the code.”
For those of us at RunSafe, where we focus on eliminating memory safety vulnerabilities, that statistic is especially concerning. Memory-handling errors — buffer overflows, use-after-free bugs, and heap corruptions — are among the most dangerous classes of software vulnerability in history, responsible for incidents like Heartbleed and URGENT/11 and exploited in the ongoing Volt Typhoon campaign.
Now, the same memory errors could appear in countless unseen ways. AI is multiplying risk one line of insecure code at a time.
Nick Kotakis, former SVP and Global Head of Third-Party Risk at Northern Trust Corporation, underscored another emerging problem: signature detection can’t keep up with AI’s ability to obfuscate its code.
Traditional signature-based defenses depend on pattern recognition — identifying threats by their known fingerprints. But AI-generated code mutates endlessly. Each new build can behave differently and conceal new attack vectors.
In this environment, reactive defenses like signature detection or rapid patching simply can’t scale. By the time a signature exists, the exploit may already have evolved.
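A toy illustration makes the problem concrete. Signatures reduce to byte-level fingerprints, so hashing two functionally identical code stubs that differ by one cosmetic mutation (an inserted NOP) yields completely different signatures. This sketch uses FNV-1a purely for intuition; it is not a model of how real detection engines work.

```c
#include <stdint.h>
#include <stddef.h>

/* FNV-1a over raw bytes: a stand-in for a signature fingerprint. */
uint64_t fnv1a(const uint8_t *buf, size_t len) {
    uint64_t h = 1469598103934665603ULL;      /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 1099511628211ULL;                /* FNV prime */
    }
    return h;
}
/* Example: {0x48,0x31,0xC0,0xC3} (xor rax,rax; ret) and
   {0x48,0x31,0xC0,0x90,0xC3} (the same code plus a NOP) behave
   identically, yet hash to different values, so an exact-match
   signature on the first misses the second. */
```

When a model regenerates code on every prompt, every build is such a "mutant," and the fingerprint never stabilizes.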
So how do we protect against vulnerabilities that no one has seen — and may never report?
At RunSafe, we focus on one of the most persistent and damaging categories of software risk: memory safety vulnerabilities. Our goal is to address two of the core challenges introduced by AI-generated code:
By embedding runtime exploit prevention directly into applications and devices, RunSafe prevents the exploitation of memory-based vulnerabilities, including those that are unknown or zero days.
That means even before a patch exists, and even before a vulnerability is discovered, RunSafe Protect keeps code secure whether it’s written by humans, AI, or both.
AI-generated code is here to stay. It has the potential to speed up development, lower costs, and unlock new capabilities that would have taken teams months or years to build manually.
However, when every product’s codebase is unique, traditional defenses — shared vulnerability intelligence, signature detection, and patch cycles — can’t keep up. The diversity that makes AI powerful also makes it unpredictable.
That’s why building secure AI-driven systems requires a new mindset that assumes vulnerabilities will exist and designs in resilience from the start. Whether it’s runtime protection, secure coding practices, or proactive monitoring, security must evolve alongside AI.
At RunSafe, we’re focused on one critical piece of that puzzle, protecting software from memory-based exploits before they can be weaponized. As AI continues to redefine how we write code, it’s our responsibility to redefine how we protect it.
Learn more about Protect, RunSafe’s code protection solution built to defend software at runtime against both known and unknown vulnerabilities long after the last patch is available.
The post Fixing OT Security: Why Memory Safety and Supply Chain Visibility Matter More Than Ever appeared first on RunSafe Security.
Unlike traditional IT, OT systems power critical infrastructure like energy grids, water management, manufacturing floors, and more. These devices often run on low-powered hardware with long lifespans and were never designed for modern connectivity. They were secured by locked doors, not firewalls.
Fast forward to today, and these devices are increasingly connected to the internet—and exposed.
Many vulnerabilities in common OT products are caused by buffer overflows or memory corruption flaws. While systemic, these vulnerabilities can be proactively addressed with memory safety protections.
“If you can eliminate entire classes of vulnerabilities before software hits the field, you don’t need to play whack-a-mole with patches,” says Joe Saunders, RunSafe’s CEO.
RunSafe’s approach focuses on preventing exploitation at the binary level, effectively making vulnerabilities non-exploitable without requiring post-deployment patching.
Even the simplest industrial device could include thousands of open-source software components. Without visibility into the Software Bill of Materials (SBOM), organizations are left guessing about what’s inside.
“If a vendor can’t tell you what’s in their product, chances are, they don’t know either,” says Saunders.
Knowing your software’s components—and their vulnerabilities—is critical for compliance. It’s also critical for managing risk across the supply chain, identifying attack surfaces, and making smart, prioritized decisions.
Patching in OT isn’t like clicking “update” on your phone. It can require physical access to remote locations and months of planning. Worse, many vulnerabilities go unpatched for 180+ days, leaving critical infrastructure exposed for far too long.
This makes proactive protection methods—like RunSafe’s memory randomization techniques and runtime protection—essential tools in a modern OT defense strategy.
Joe Saunders outlines a simple yet powerful framework:
This shift toward accountability and visibility reduces operational costs and futureproofs infrastructure.
Fixing OT security won’t happen with checklists and wishful thinking. It’ll take:
“The real question isn’t whether we can fix OT security,” Saunders concludes. “It’s whether we want to—and who’s willing to lead the charge.”
The post Reducing Your Exposure to the Next Zero Day: A New Path Forward appeared first on RunSafe Security.
Our goal at RunSafe is to give defenders a leg up against attackers, so we wondered: What if we could quantify this seemingly unquantifiable risk? What if we could take meaningful action to implement zero-day protection for systems before vulnerabilities are even discovered?
To dig into these questions, we partnered with Ulf Kargén, Assistant Professor at Linköping University, who developed the CReASE (Code Reuse Attack Surface Estimation) tool, which underpins RunSafe’s Risk Reduction Analysis.
VulnCheck, in their “2024 Trends in Vulnerability Exploitation” report, found that 23.6% of all actively exploited vulnerabilities in 2024 were zero-day flaws. We’re seeing nation-state actors like Volt Typhoon and Salt Typhoon specifically target these unknown vulnerabilities to achieve their objectives, as noted in research from Google Threat Intelligence Group, which tracked 75 zero-day vulnerabilities exploited in the wild in 2024.
Most of the industry’s response to zero days has been trying to detect and prevent threats by looking for indicators of attack, suspicious behavior, and patterns that might tip us off. But attackers have gotten really good at hiding and masking their activity. What’s been left wide open is the underlying risks in software itself. Instead of securing the foundation, we’ve built bigger walls around our systems.
That might work in a data center where systems live behind firewalls and racks of gear. But in the world of IoT and embedded devices, there are no walls. These systems are deployed far from the protection of the network where they are alone, exposed, and vulnerable. They need to be self-reliant. They need to be like samurai—able to defend themselves without backup.
Because of this, we saw the need for a method to quantify the risk of zero days and a way to make devices intrinsically more robust against exploitation, regardless of what vulnerabilities might exist within them. If you can quantify risk with real technical rigor, you can make smart decisions to reduce your attack surface and make a compelling argument to leadership on where to focus resources.
Modern cyberattacks frequently use a technique called Return-Oriented Programming (ROP). When traditional code injection attacks became difficult due to improved security measures, attackers evolved to use “code reuse” attacks instead.
Modern exploits repurpose a program’s own code, using existing code snippets (called “gadgets”) within a program and chaining them together to create malicious functionality. The program’s own code is weaponized against itself.
This insight gives us a way to measure memory-based zero-day risk specifically. While it’s impossible to predict all potential vulnerabilities in code, we can analyze whether useful ROP chains exist in a binary that could lead to the successful exploitation of a vulnerability.
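Real gadget finders like CReASE decode instructions and follow data flow, but even a deliberately naive scan conveys the idea: every x86-64 `ret` opcode (byte 0xC3) in executable code is a point where one gadget can hand control to the next, so counting them gives a crude upper bound on gadget surface. This is a sketch for intuition only, far simpler than what any real analysis tool does.

```c
#include <stdint.h>
#include <stddef.h>

/* Count candidate ROP gadget endpoints: each 0xC3 ("ret") byte in a
   code region is a potential chaining point. Real tools disassemble
   and analyze reachability; this sketch only counts the raw bytes. */
size_t count_gadget_endpoints(const uint8_t *code, size_t len) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        if (code[i] == 0xC3)
            n++;
    return n;
}
```

Even small binaries typically contain hundreds of such endpoints, which is why the interesting question is not whether gadgets exist but whether useful chains can be built from them.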

We worked alongside researcher Ulf Kargén at Linköping University, who developed the Code Reuse Attack Surface Estimation (CReASE) tool to quantify previously unmeasurable risk. You can listen to Ulf discuss the tool and how it works in this webinar.
CReASE scans binaries to identify potential ROP gadgets and determines whether they could be chained together to perform dangerous system calls. It doesn’t try to predict where specific vulnerabilities might exist but instead analyzes whether the code structure would allow successful exploitation if a vulnerability were discovered.
It answers the question: Are any useful ROP chains available to an attacker?
Unlike existing tools that focus on guaranteeing working exploit chains (often sacrificing scalability or completeness), CReASE uses novel data flow analysis to achieve both scalability and completeness comparable to a human attacker.
The result is a risk scoring system that quantifies the probability that the next memory-based zero-day vulnerability could be exploited to achieve specific dangerous outcomes like remote code execution, file system manipulation, or privilege escalation.
The CReASE tool underlies RunSafe’s Risk Reduction Analysis, which you can use to analyze your exposure to CVEs and memory-based zero days.
To understand why this approach is so powerful, we need to recognize two critical facts:
These numbers tell us that memory safety vulnerabilities constitute a significant risk in our codebases. When a memory vulnerability is exploited, attackers can execute arbitrary code, take control of devices, crash systems, exfiltrate data, or deploy ransomware.
By focusing our risk quantification and mitigation efforts on memory-based vulnerabilities specifically, we’re addressing a common and dangerous attack vector for zero-day exploits.
Once we quantify the risk, what can be done about it? Traditional memory protection like Address Space Layout Randomization (ASLR) provides some security by randomizing where blocks of code are loaded in memory. However, ASLR still loads functions contiguously, making it vulnerable to information leak attacks.
RunSafe’s approach takes randomization to the function level. Instead of randomizing where the entire binary loads, we randomize each function independently. In a typical binary with 280 functions, this creates 280 factorial possible memory layouts — more than 10^400 combinations.
Even if a memory-based zero-day vulnerability exists, with RunSafe’s Load-time Function Randomization (LFR), attackers can’t reliably construct a working ROP chain because they can’t predict where the necessary gadgets will be located. We’ve effectively made the vulnerability inert.

The most effective approach to memory-based zero-day risk combines analysis and protection:
Our customers typically see a risk reduction that changes the odds from “the next zero-day can compromise the system” to “maybe one in the next 10,000 zero-days might succeed.” That’s a dramatic improvement in security posture.
While no solution can eliminate all types of zero-day vulnerabilities, addressing memory-based vulnerabilities targets the most common and dangerous attack vector. In a world where zero-days will always exist, making them ineffective is the next best thing to eliminating them entirely.
Want to try out the Risk Reduction Analysis tool for yourself? All you’ll need to do is create an account and upload a binary to get your results.
Run an analysis here.
The post Memory Safety KEVs Are Increasing Across Industries appeared first on RunSafe Security.
In a webinar hosted by Dark Reading, RunSafe Security CTO Shane Fry and VulnCheck Security Researcher Patrick Garrity discussed the rise of memory safety vulnerabilities listed in the KEV catalog and shared ways organizations can manage the risk.
Data from VulnCheck shows a clear increase in memory safety KEVs over the years, reaching a high in 2024 of around 200 total KEVs.
“We’re seeing the number of known exploited vulnerabilities associated with memory safety grow,” Patrick said. “If you look at CISA’s KEV list, the concentration is quite high as far as volume.”

Data from VulnCheck, Memory Safety Known Exploited Vulnerabilities
Memory safety KEVs are also found across industries, including network edge devices, hardware and embedded systems, industrial control systems (ICS/OT), device management platforms, operating systems, and open source software.

Data from VulnCheck, Memory Safety Known Exploited Vulnerabilities by Industry
Patrick emphasized the universal nature of the threat: “If you look at this list, there’s manufacturing impacted, medical devices, embedded systems, and critical infrastructure. Across the board from an industry perspective, you’re going to see these vulnerabilities everywhere.”
Not only are memory safety KEVs widespread, many are also classified as critical, with high CVSS scores. Six memory safety weakness types are now included in MITRE’s list of the top 25 most dangerous software weaknesses for 2024.

Data from VulnCheck, Memory Safety Known Exploited Vulnerabilities by CVSS Criticality
Memory safety vulnerabilities—like buffer overflows, use-after-free bugs, and out-of-bounds writes—have long plagued compiled code. “About 70% of the vulnerabilities in compiled code are memory safety related,” explained Shane Fry.
When attackers exploit these bugs, the results can be severe. Organizations may face:
One recent KEV, an out-of-bounds write (CWE-787), affected several Ivanti products and was linked to the Hafnium threat actor group. Patrick called out the speed at which this vulnerability moved from discovery to exploitation: “The vendor identifies there’s a vulnerability, there’s exploitation, they disclose the vulnerability, they get it in a CVE, and then CISA adds it—all in the same day.”
Typically, the disclosure process does not flow so quickly, but in this case it was a good thing as the exploit targeted a security product. Shane observed: “One of the very interesting philosophical questions that I think about often in cybersecurity spaces is how impactful a security vulnerability in a security product can be. Most people think that if it’s a security product, it’s secure. And off they go.”
A heap-based buffer overflow flaw (CVE-2024-49775) in Siemens’ industrial control systems exposed critical infrastructure to risks of arbitrary code execution and disruption. The vulnerability exemplifies the widespread impact memory safety issues can have across product lines when they affect common components.
The accelerating growth of memory safety KEVs has not gone unnoticed by global security organizations. In 2022, the National Security Agency (NSA) issued guidance stating that memory safety vulnerabilities are “the most readily exploitable category of software flaws.”
Their guidance recommended two approaches:
Similarly, CISA has emphasized memory safety in its Secure by Design best practices, advocating for organizations to develop memory safety roadmaps.
The European Union’s Cyber Resilience Act (CRA) takes a broader approach, emphasizing Software Bill of Materials (SBOM) to help organizations understand vulnerabilities in their supply chain. As Shane noted, “We saw a shift in industry when the CRA became law that, hey, now we have to actually do this. We can’t just talk about it.”
Given the growing threat landscape, organizations need practical approaches to address memory safety vulnerabilities.
For most companies, a full rewrite in Rust or another memory-safe language isn’t realistic. Instead, start by identifying high-risk, externally facing components and consider targeted rewrites. Shane suggested starting with software or devices that most often interact with untrusted data.
Implementing secure development practices can help prevent introducing new vulnerabilities.
“There’s a lot of aspects of Secure by Design, like code scanning and secure software development life cycles and Software Bill of Materials, that can help you understand what you’re shipping in your supply chain,” Shane said.
Runtime hardening is an effective defense for legacy or third-party code that can’t be rewritten. Runtime protections prevent the exploitation of memory safety vulnerabilities by randomizing code so that attackers can’t reliably target flaws in memory.
RunSafe accomplishes this with our Protect solution. “Every time the device boots or every time your process is launched, we reorder all the code around in memory,” Shane said.
It also buys time, allowing organizations to avoid having to ship emergency patches overnight because their software is already protected.
Memory safety vulnerabilities are becoming more common across industries. The risks are serious, especially when attackers can use these flaws to take control of systems or steal data.
Organizations need to take action now. By rewriting the highest-risk code, following secure development practices, and using runtime protections where needed, companies can reduce their exposure to memory safety threats.
Memory safety problems are widespread, but they can be managed. Secure by Design practices and runtime protections offer a path forward for more secure software and greater resilience.
The post Converting C++ to Rust: RunSafe’s Journey to Memory Safety appeared first on RunSafe Security.
At RunSafe Security, I had the opportunity to lead the transition of our 30,000-line C++ codebase to Rust. Two things influenced our decision:
The transition wasn’t without its challenges. Converting a large, established C++ codebase to Rust required careful planning, creative problem-solving, and plenty of patience. In this blog, I’ll walk you through why we chose to make the switch, the obstacles we encountered along the way, and the results we achieved. I hope these insights provide value to anyone considering a similar journey.
RunSafe chose the Rust programming language because of several advantages it offers.
The most important advantages from our perspective were the combination of memory safety and lack of garbage collection.
Rust’s advantages extended beyond security. We also saw opportunities for:
Migrating a C++ codebase to Rust is not a decision without obstacles. For RunSafe, challenges stemmed from both technical limitations and philosophical differences in how C++ and Rust approach certain concepts.
C++ often permits unrestricted mutability, allowing developers to directly manipulate global state. Rust’s borrow checker, which enforces ownership rules, fundamentally rejects this assumption. Overcoming this required rewriting significant portions of the codebase to adhere to Rust’s stricter ownership principles.
C++’s compatibility with a wide range of platforms, including esoteric ones, is unmatched. Rust’s smaller ecosystem and more limited targets presented a hurdle when considering some low-level platforms that RunSafe’s software relied on.
If you’re considering converting a C++ codebase to Rust, I recommend taking a structured approach to cover all your bases. Here’s how RunSafe managed the transition:
Overall, we found the transition to Rust to be successful. Key outcomes included:
The answer depends on how critical memory safety is to your project and how often you find yourself chasing bugs caused by undefined or unreliable behavior in C++. The key question is: how much of your code actually needs rewriting?
Here are some critical factors to consider:
Rewriting an entire million-line codebase isn’t realistic—it’s neither cost-effective nor time-efficient. However, you might identify smaller, high-risk sections of your codebase that are prone to memory safety issues. Even rewriting small portions of critical code can be enough to reduce your bug surface area.
The post Understanding Memory Safety Vulnerabilities: Top Memory Bugs and How to Address Them appeared first on RunSafe Security.
Memory safety vulnerabilities remain one of the most persistent and exploitable weaknesses across software. From enabling devastating cyberattacks to compromising critical systems, these vulnerabilities present a constant challenge for developers and security professionals alike.
Both the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) have emphasized the importance of addressing memory safety issues to defend critical infrastructure and stop malicious actors. Their guidance highlights the risks associated with traditional memory-unsafe languages, such as C and C++, which are prone to issues like buffer overflows and use-after-free errors.
In February 2025, CISA drilled down even deeper with their guidance, issuing an alert on “Eliminating Buffer Overflow Vulnerabilities.”
Why do memory corruption vulnerabilities still exist, how do they manifest in practice, and what strategies can organizations implement to mitigate their risks effectively? Let’s take a look.
Memory safety vulnerabilities occur when a program performs unintended or erroneous operations in memory. These issues can lead to dangerous consequences like data corruption, unexpected application behavior, or even full system compromise. The Common Weakness Enumeration (CWE), a community-maintained catalog of software weakness types, highlights these as some of the most severe weaknesses in software today.
Memory safety issues are inherently tied to programming languages and runtime environments. Languages like C and C++ offer control and performance but lack built-in memory safety mechanisms, making them more prone to such vulnerabilities.
Attackers leverage memory corruption vulnerabilities as access points to infiltrate systems, exploit weaknesses, and execute malicious actions. Addressing memory vulnerabilities is essential for safety and security, especially for industries like critical infrastructure, medical devices, aviation, and defense.
There are many different types of memory safety vulnerabilities, but several in particular deserve the attention of developers and security professionals. The six explained below appear on the 2024 Common Weakness Enumeration (CWE™) Top 25 Most Dangerous Software Weaknesses list (CWE™ Top 25) and are familiar faces from previous years. The CWE Top 25 ranks weaknesses that are easy to exploit and carry significant consequences.
A buffer overflow occurs when a program writes more data to a buffer than it can safely hold. This overflow can corrupt adjacent memory, potentially leading to crashes, data corruption, or even allowing attackers to execute arbitrary code.
Example of a Buffer Overflow
A notable example of a buffer overflow vulnerability is CVE-2023-4966, also known as “CitrixBleed,” which affected Citrix NetScaler ADC and Gateway products in 2023. This critical flaw allowed attackers to bypass authentication, including multi-factor authentication, by exploiting a buffer overflow in the OpenID Connect Discovery endpoint.
The vulnerability enabled unauthorized access to sensitive information, including session tokens, which could be used to hijack authenticated user sessions. Discovered in August 2023, CitrixBleed was actively exploited by various threat actors, including ransomware groups like LockBit, leading to high-profile attacks such as the Boeing ransomware incident.
This vulnerability highlights the ongoing significance of buffer overflow vulnerabilities in critical infrastructure and the importance of prompt patching and session invalidation to mitigate potential compromises.
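The underlying bug class is easy to reproduce in C: an unbounded copy such as `strcpy` keeps writing past a fixed buffer when the input is longer. A bounded copy that truncates and always NUL-terminates is the minimal fix. This is an illustrative sketch of the pattern, not Citrix's actual code.

```c
#include <string.h>
#include <stddef.h>

/* Unsafe pattern: strcpy(dst, src) writes past the end of dst when
   src is longer than dst -- the classic buffer overflow.
   Safe pattern: copy at most dst_size - 1 bytes, then NUL-terminate. */
size_t copy_bounded(char *dst, size_t dst_size, const char *src) {
    if (dst_size == 0)
        return 0;                   /* nothing can be stored */
    size_t n = strlen(src);
    if (n >= dst_size)
        n = dst_size - 1;           /* truncate, leave room for '\0' */
    memcpy(dst, src, n);
    dst[n] = '\0';
    return n;                       /* bytes actually copied */
}
```

The caller passes the destination's real capacity, so oversized input is truncated instead of corrupting adjacent memory.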
A heap-based buffer overflow occurs when a program writes more data to a buffer located in the heap memory than it can safely hold. This can lead to memory corruption, crashes, privilege escalation, and even arbitrary code execution by attackers manipulating the heap memory structure.
Example of a Heap-Based Buffer Overflow
An example of a recent critical heap-based buffer overflow is CVE-2024-38812, a vulnerability in VMware vCenter Server, discovered during the 2024 Matrix Cup hacking competition in China. With a CVSS score of 9.8, this flaw allows attackers with network access to craft malicious packets exploiting the DCERPC protocol implementation, potentially leading to remote code execution. This heap overflow vulnerability was initially patched but required a subsequent update to fully address the issue.
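Heap variants follow the same shape, except the undersized buffer comes from `malloc`, where an off-by-one in the size arithmetic overruns adjacent heap data. A hedged sketch of the safe allocation pattern (illustrative, unrelated to the VMware code):

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

/* Duplicate at most max_len bytes of src on the heap. The "+ 1" for
   the terminator is exactly where off-by-one heap overflows are born;
   omitting it makes the final '\0' write land past the allocation. */
char *dup_bounded(const char *src, size_t max_len) {
    size_t n = 0;
    while (n < max_len && src[n] != '\0')
        n++;                         /* bounded length scan */
    char *out = malloc(n + 1);       /* room for the '\0' */
    if (out == NULL)
        return NULL;
    memcpy(out, src, n);
    out[n] = '\0';
    return out;
}
```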
Use-after-free errors arise when a program continues to use a memory pointer after the memory it points to has been deallocated. This can lead to system crashes, data corruption, or exploitation through arbitrary code execution.
Example of a Use-After-Free Error
CVE-2021-44710 is a critical use-after-free (UAF) vulnerability discovered in Adobe Acrobat Reader DC, affecting multiple versions. The vulnerability has a CVSS base score of 7.8, indicating its high severity. If successfully exploited, an attacker could potentially execute arbitrary code on the target system, leading to various severe consequences including application denial-of-service, security feature bypass, and privilege escalation.
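Use-after-free bugs survive because `free` leaves the caller's pointer dangling. One common hardening idiom, a mitigation rather than a cure since other copies of the pointer may still exist, is to free through a helper that also clears the pointer, so a later use faults on NULL instead of silently reusing recycled memory:

```c
#include <stdlib.h>
#include <stddef.h>

/* Free the block *pp points at, then clear the pointer so any later
   dereference is a detectable NULL access rather than a silent
   use-after-free on recycled heap memory. free(NULL) is a no-op,
   so calling this twice is also harmless. */
void free_and_null(void **pp) {
    if (pp != NULL) {
        free(*pp);
        *pp = NULL;
    }
}
```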
An out-of-bounds write occurs when a program writes data outside the allocated memory buffer. This can corrupt data, cause crashes, or create vulnerabilities that attackers can exploit.
Example of an Out-of-Bounds Write
CVE-2024-7695 is a critical out-of-bounds write vulnerability affecting multiple Moxa PT switch series. The flaw stems from insufficient input validation in the Moxa Service and Moxa Service (Encrypted) components, allowing attackers to write data beyond the intended memory buffer bounds.
With a CVSS 3.1 score of 7.5 (High), this vulnerability can be exploited remotely without authentication. Successful exploitation could lead to a denial-of-service condition, potentially causing significant downtime for critical systems by crashing or rendering the switch unresponsive.
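In code, the difference between an exploitable out-of-bounds write and its fix is often a single bounds check on an externally influenced index. A sketch of the safe pattern (illustrative, not Moxa's code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Reject any write whose index falls outside the buffer, instead of
   letting an attacker-controlled index land wherever it points. */
bool write_at(int *buf, size_t len, size_t idx, int value) {
    if (buf == NULL || idx >= len)
        return false;               /* out of bounds: refuse */
    buf[idx] = value;
    return true;
}
```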
Improper input validation occurs when a system fails to adequately verify or sanitize inputs before they are processed. This flaw can lead to unintended behaviors, including command injection, buffer overflows, or unauthorized access. Attackers exploit this weakness by crafting malicious inputs, often bypassing security controls or causing system failures. Input validation issues are particularly common in web applications and embedded systems where external data is heavily relied upon.
Example of Improper Input Validation
CVE-2024-5913 is a medium-severity vulnerability affecting multiple versions of Palo Alto Networks PAN-OS software. This improper input validation flaw allows an attacker with physical access to the device’s file system to elevate privileges.
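A minimal allow-list validator shows the general idea: decide up front exactly what shape external input may take, and reject everything else before it reaches parsing or copying code. The identifier rules here are hypothetical, chosen only for illustration:

```c
#include <stdbool.h>
#include <string.h>
#include <ctype.h>

/* Accept only non-empty identifiers of at most 32 bytes consisting of
   ASCII letters, digits, and underscores; anything else is rejected
   before downstream code ever touches it. */
bool is_valid_ident(const char *s) {
    if (s == NULL)
        return false;
    size_t n = strlen(s);
    if (n == 0 || n > 32)
        return false;
    for (size_t i = 0; i < n; i++) {
        unsigned char c = (unsigned char)s[i];
        if (!isalnum(c) && c != '_')
            return false;
    }
    return true;
}
```

Allow-listing (define what is valid) is generally safer than deny-listing (enumerate what is dangerous), because attackers are better at inventing inputs than defenders are at anticipating them.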
Integer overflow or wraparound occurs when an arithmetic operation results in a value that exceeds the maximum (or minimum) limit of the data type, causing the value to “wrap around.” This vulnerability can lead to unpredictable behaviors, such as buffer overflows, memory corruption, or security bypasses. Attackers exploit this weakness by manipulating inputs to trigger overflows, often resulting in system crashes or unauthorized actions. This issue is common in low-level programming languages like C and C++, where integer operations are not inherently checked.
Example of an Integer Overflow
CVE-2022-2329 is a critical vulnerability (CVSS 3.1 Base Score: 9.8) affecting Schneider Electric’s Interactive Graphical SCADA System (IGSS) Data Server versions prior to 15.0.0.22074. This Integer Overflow or Wraparound vulnerability can cause a heap-based buffer overflow, potentially leading to denial of service and remote code execution when an attacker sends multiple specially crafted messages. Schneider Electric released a patch to address this vulnerability.
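The classic shape of this bug is a size computation like `len + header_size` that wraps to a tiny value, passes a size check, and then undersizes the allocation that follows. Checking for wrap before using the sum closes the hole; this sketch does it by hand, though GCC/Clang's `__builtin_add_overflow` and C23's `ckd_add` offer the same guarantee:

```c
#include <stdbool.h>
#include <stdint.h>

/* Add two 32-bit sizes, refusing to produce a wrapped result.
   Without this check, a + b silently wraps modulo 2^32, and an
   allocation sized by the wrapped sum ends up far too small for
   the data later copied into it. */
bool checked_add_u32(uint32_t a, uint32_t b, uint32_t *sum) {
    if (a > UINT32_MAX - b)
        return false;       /* true sum would exceed UINT32_MAX */
    *sum = a + b;
    return true;
}
```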
Recently, nation-state campaigns like Volt Typhoon have demonstrated the potential real-world impact of memory safety vulnerabilities in the software used to run critical infrastructure.
Additionally, in the last few years, memory safety vulnerabilities within ICS have seen a steady upward trend. There were fewer than 1,000 CVEs in 2014 but nearly 3,000 in 2023 alone.

Here are a few examples of memory safety vulnerabilities directly impacting critical infrastructure.
Ivanti Connect Secure Flaw
A zero-day vulnerability (CVE-2025-0282) in Ivanti’s Connect Secure appliances allowed remote code execution, enabling malware deployment on affected devices.
Siemens UMC Vulnerability
A heap-buffer overflow flaw (CVE-2024-49775) in Siemens’ industrial control systems exposed critical infrastructure to risks of arbitrary code execution and disruption.
Mercedes-Benz Infotainment System
Over a dozen vulnerabilities in the Mercedes-Benz MBUX system could allow attackers with physical access to disable anti-theft measures, escalate privileges, or compromise data.
Rockwell Automation Vulnerability
CVE-2024-12372 is a denial-of-service and possible remote code execution vulnerability in Rockwell Automation’s PowerMonitor 1000 Remote product. This heap-based buffer overflow could compromise system integrity.
Memory vulnerabilities represent a significant share of software-based attacks. According to a study by CISA, two-thirds of vulnerabilities in compiled code stem from memory safety issues. These vulnerabilities can impact industries that depend heavily on legacy systems written in C and C++—industries like aerospace, manufacturing, and energy infrastructure.
Organizations can address memory safety vulnerabilities by taking proactive measures.
RunSafe Security is committed to protecting critical infrastructure, and a major key to doing so is eliminating memory-based vulnerabilities in software. Following CISA’s guidance and Secure by Design is an important first step. However, CISA’s guidance to rewrite code in memory-safe languages is impractical for companies that produce dozens, hundreds, or even thousands of embedded software products, often with 10- to 30-year lifespans.
This is where RunSafe steps in, offering a far more cost-effective and immediate way to stop the exploitation of memory-based vulnerabilities. RunSafe Protect mitigates cyber exploits through Load-time Function Randomization (LFR), relocating software functions in memory every time the software runs. The resulting unique memory layout prevents attackers from exploiting memory-based vulnerabilities; in all, LFR prevents the exploitation of 86 memory safety CWEs.
Rather than waiting years to rewrite code, RunSafe protects embedded systems today, allowing software to defend itself against both known and unknown vulnerabilities.
Interested in understanding your exposure to memory-based CVEs and zero days? You can request a free RunSafe Risk Reduction Analysis here.
The post Understanding Memory Safety Vulnerabilities: Top Memory Bugs and How to Address Them appeared first on RunSafe Security.
It’s for this reason, among other national security, economic, and public health concerns, that the Cybersecurity and Infrastructure Security Agency (CISA) has made memory safety a key focus of its Secure by Design initiatives.
Now, CISA is urging software manufacturers to publish a memory safety roadmap by January 1, 2026, outlining how they will eliminate memory safety vulnerabilities in code, either by using memory-safe languages or by implementing hardware capabilities that prevent memory safety vulnerabilities.
Though manufacturers are on the hook for the security of their products, the responsibility doesn’t rest on their shoulders alone. Buyers of software in the OT sector have an equally important role to play in addressing memory safety and building the resilience of their mission-critical OT systems against attack.

“The roadmap to memory safety is a great starting point for asset owners to talk to their suppliers, saying this is a big concern of mine, especially for my OT software,” said Joseph M. Saunders, Founder and CEO of RunSafe Security. “Then, what we’re looking for from product manufacturers is that they have a mature process to assess how to achieve memory safety.”
Why all the fuss about memory safety, and why now? Memory safety vulnerabilities consistently rank among the most dangerous software weaknesses, and they are alarmingly common. Within industrial control systems, memory safety vulnerabilities have been steadily rising, growing from fewer than 1,000 CVEs in 2014 to nearly 3,000 in 2023 alone.

In one example, programmable logic controllers were found vulnerable to memory corruption flaws that could enable remote code execution. In the OT world, where systems control critical industrial processes, such vulnerabilities aren’t just security risks — they’re potential catastrophes waiting to happen.
CISA has set a clear deadline: January 1, 2026. With this date in mind, OT software manufacturers and buyers can begin to have important conversations about addressing memory safety, both for existing products written in memory-unsafe languages and for new products to be released down the line.
What should be on the agenda for discussion when building and evaluating a memory safety roadmap? Here are four key areas to look at.
Start with a comprehensive Software Bill of Materials (SBOM) to identify and prioritize memory-based vulnerabilities in OT software. Think of it as a detailed inventory of every component in your software and the known vulnerabilities each one carries.
Once vulnerabilities are identified, manufacturers should take steps to eliminate them, and OT software buyers should discuss the available remediation options with their suppliers.
Software buyers should also discuss with their suppliers how they are incorporating memory safety into their product lifecycle planning and how they plan to address it in future releases.
A memory safety roadmap is a great opportunity for software manufacturers and buyers to open up conversations about memory safety and collaborate to find a path forward. When considering working with a supplier, evaluate their willingness to engage openly on these questions.
By working together, software buyers and manufacturers can not only meet CISA’s memory safety mandate but also build more resilient OT systems.
“All asset owners should do a study with their suppliers to understand the extent to which they are exposed to memory safety vulnerabilities,” Saunders said.
From there, software manufacturers can build a roadmap to tackle the memory safety challenge once and for all.
Learn more about how RunSafe Security protects critical infrastructure and OT systems from memory-based vulnerabilities.
The post CISA’s 2026 Memory Safety Deadline: What OT Leaders Need to Know Now appeared first on RunSafe Security.
For a deeper discussion on Volt Typhoon’s tactics and the national security stakes, listen to our RunSafe Security Podcast episode featuring experts unpacking the threat to critical infrastructure.
Unlike ransomware attacks, Volt Typhoon’s aims are focused on the long term, not the short term. Ransomware culprits want quick transactions, with only as much disruption to their targets as is required to extract payment. Nation-state aligned cyber offense, however, has a two-fold purpose: (1) blunt military capabilities and (2) create disruption and overall panic in society.
These actors take a long-term perspective, preparing in advance to be able to achieve both goals when a conflict arises. In the case of the PRC-backed Volt Typhoon, that conflict could be the PRC’s attack on Taiwan. If Volt Typhoon is effective in meeting its goals, it could impact the response of the United States, potentially deterring the US from engaging in the conflict. We are in an age of cyber as a means to achieve geopolitical goals.

The kill chain construct originates in kinetic warfare. In the 1990s, for example, General John Jumper coined a kill chain for airborne targets, F2T2EA (find, fix, track, target, engage, assess), which is meant to be completed as rapidly as possible. Cyber kill chains, on the other hand, can unfold over a much longer period of time.
In the cyber kill chain coined by Lockheed Martin, Reconnaissance is the first phase before Weaponization. The Reconnaissance phase can be prolonged; it is also referred to as the preparation of the battlefield stage. The attacker is in the system, searching for weaknesses, and in some cases living off the land.
Volt Typhoon is in the Reconnaissance stage within critical infrastructure, preparing to move to the next stages of the kill chain if geopolitical circumstances warrant.
The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has published detailed descriptions of the typical pattern of attack involving Volt Typhoon. Drawing from a February 2024 CISA notice, a Volt Typhoon cyber kill chain typically involves:
Surveillance starts by profiling network traffic into and out of a target organization, with the goal of identifying the network’s architecture, its defenses, and potential points of entry.
Compromising the edge devices of a target network is important because those devices are often configured to raise alerts when indicators of compromise occur. By compromising the edge devices themselves (routers, etc.), the attacker can “blind” the sensor network. Exploitation targets the operating systems and applications on the edge devices. Once compromised, those devices become part of the surveillance network, but with a privileged view of behavior “inside the network.” Volt Typhoon has used a wide variety of exploitation techniques to gain this foothold.
Using the privileged position on edge devices, the attacker tries to determine administrative credentials so that further access to devices will no longer require the exploitation from step 2 above. Utilizing credentials also means that the attacker can maintain persistence on a device, even if the exploitation vector from step 2 is patched.
Compromising the domain controller is a key objective, because it means that all further accesses will have the patina of legitimacy. Identifying fraudulent logins from legitimate user names and passwords requires sophisticated tools that most organizations don’t use.
With “domain controller” access, the attacker now moves into the stage of executing their mission on target.
The attack scenario above highlights that a memory safety vulnerability is part of the Volt Typhoon cyber kill chain. This is a reasonable approach, given that: (1) six of MITRE’s top 25 most dangerous CWEs relate to memory safety; (2) roughly 70% of Google and Microsoft security patches relate to memory safety, as referenced in the NSA’s advisory on the subject; and (3) six of the 11 VxWorks Urgent/11 vulnerabilities related to memory safety.
It is reasonable to hypothesize that Volt Typhoon cyber kill chains also target OT/ICS devices, which are attractive as both edge devices (step 2 above) and as controllers of critical infrastructure. Capture of these devices enables Volt Typhoon to directly disrupt critical infrastructure by sending malicious commands and/or erroneous data to these devices. Additionally, these OT/ICS devices are often good pathways into the IT infrastructure.
In the case of an attack on Taiwan, Volt Typhoon could deter the US from taking action by exploiting a memory-based vulnerability, for example, to disrupt a major power grid.
Approaches to memory safety exist now, including hardening the software in OT/ICS devices to withstand attacks against both known and unknown memory-based vulnerabilities. Unlike static defenses that can be easily bypassed, techniques like moving target defense randomize the memory layout of software binaries at a granular level. By constantly shifting the software landscape, they leave attackers unable to predictably exploit memory vulnerabilities.
For example, RunSafe Security’s Protect solution mitigates cyber exploits by dynamically relocating software functions in memory every time the software is loaded. By ensuring that the memory layout changes every time software is loaded, RunSafe effectively thwarts repeated attack attempts and prevents bad actors, like Volt Typhoon, from compromising critical infrastructure.
As outlined, there is time to foil Volt Typhoon’s Exploit stage and its ultimate goal of shaping the United States’ response if a conflict arises. Only critical infrastructure owners can close the gap, by demanding Secure by Design systems and insisting on runtime protection against memory-based exploits.
Unlike CISA, which provides guidance and recommendations, critical infrastructure owners purchase and update the ICS/OT devices installed in their plants, grids, and factories. It falls to them to require, demand, and pound the table for their ICS/OT device vendors to increase the cyber resilience of embedded devices. These devices must be able to operate through attacks in real time, without the need for human, centralized monitoring and response.
By demanding memory safety and other defense measures from their ICS/OT device vendors now, critical infrastructure owners can prevent the success of Volt Typhoon in the future.
Want to hear from RunSafe leaders on how to approach this challenge? Tune in to the RunSafe Security Podcast episode on Volt Typhoon to explore real-world implications and actionable strategies.
Learn more about RunSafe Security’s Protect solution in our technical deep dive.
The post Don’t Let Volt Typhoon Win: Preventing Attacks During a Future Conflict appeared first on RunSafe Security.
The software industry faces a persistent challenge: how to address memory safety vulnerabilities, one of the most common and exploited flaws in modern systems. Memory safety vulnerabilities expose systems to remote control, data breaches, and disruptions, often with devastating consequences. In response, Secure by Design principles offer a forward-thinking solution to the challenge by embedding security into the foundation of software development.
In a recent webinar hosted by RunSafe Security, industry experts Doug Britton, EVP and Chief Strategy Officer at RunSafe, and Shane Fry, Chief Technology Officer at RunSafe, provided insights into the critical role that Secure by Design can play in mitigating these vulnerabilities.
Here are the key takeaways from their discussion and how RunSafe’s approach supports a proactive cybersecurity model.
Access the full webinar discussion on memory safety here.
Memory safety vulnerabilities are among the oldest and most widespread issues in software development. Doug Britton emphasized how these vulnerabilities, despite being known for decades, persist in many codebases and continue to be exploited by attackers.
“The classic memory safety issues are among the oldest and most insidious in software,” Doug explained. “We’ve known about them for decades, but they persist, and attackers continue to exploit them.”
According to Jack Cable, senior technical advisor at CISA, two-thirds of vulnerabilities in compiled code are related to memory safety. Attackers frequently weaponize these flaws, leading to widespread breaches and disruptions. Even advanced testing tools and compilers often fail to detect all vulnerabilities, leaving organizations exposed. These vulnerabilities include buffer overflows, which Doug and Shane referenced as a common point of exploitation.
Secure by Design is an approach to software development that prioritizes security from the very beginning. Rather than bolting on security measures after the fact, Secure by Design integrates security into every stage of the development lifecycle. The method aims to reduce the likelihood of vulnerabilities by making security an inherent part of the system architecture.
In the case of memory safety, this often involves transitioning to memory-safe languages like Rust and Go, which can drastically reduce the risk of memory-related bugs. However, as Doug noted, transitioning entire codebases to these languages is not a quick fix. “Most vendors we’ve spoken to have said it’s going to take them at least eight years to get a significant portion of their code running in memory-safe languages,” he shared.
This timeline introduces a significant challenge: organizations cannot afford to wait for years while attackers continue to exploit existing vulnerabilities. Immediate solutions are needed to bridge the gap.
RunSafe Security provides a practical, scalable solution to address memory safety vulnerabilities while organizations work toward the long-term goal of transitioning to memory-safe languages. Instead of requiring developers to rewrite their entire codebase, RunSafe’s technology mitigates vulnerabilities without changing a single line of code.

As Shane Fry explained, RunSafe’s approach focuses on randomizing code layouts in memory. This method prevents attackers from predicting where code is located, making it significantly harder to exploit memory vulnerabilities such as buffer overflows.
“What we do is randomize where the code is in memory,” Shane said. “Even if an attacker finds a buffer overflow, they won’t know where to point the processor to execute their malicious code because everything has been randomized.”
This process, known as Moving Target Defense, effectively protects against both known and unknown vulnerabilities at runtime. RunSafe’s solution can be integrated in as little as 30 minutes, making it accessible for organizations of all sizes without disrupting development workflows.
Despite its clear benefits, adopting Secure by Design principles is not without its challenges. One of the most significant barriers is the transition to memory-safe languages. As Doug and Shane discussed, this process is resource-intensive and requires specialized expertise, which many organizations do not yet possess.
“To implement Rust or Go, developers need to learn these languages, and that takes time,” Doug pointed out. “It’s not as simple as downloading a module and instantly knowing Rust. It’s going to take an equivalent amount of time and effort as it took to build the original C and C++ codebases.”
Additionally, there are significant economic barriers to rewriting code, especially in industries reliant on legacy systems. Shane highlighted that while Secure by Design offers a path toward a safer future, organizations need interim solutions to protect themselves in the present.
This is where RunSafe’s solution becomes invaluable. By randomizing code layout, organizations can protect their existing systems while planning for the long-term transition to memory-safe languages. This hybrid approach allows companies to mitigate immediate risks without requiring massive investments in rewriting code from scratch.
While Secure by Design requires upfront investment, its long-term benefits make it a critical strategy for organizations looking to build more secure and resilient systems.
Shane emphasized that acting now is crucial, as attackers won’t wait for organizations to fully transition to memory-safe languages. “The bugs that exist today are going to be exploited tomorrow,” he warned. “It’s essential that we take immediate steps to protect our systems.”
Key advantages include more resilient systems, reduced exposure to vulnerabilities, and better-protected operations.
Secure by Design principles offer a clear path forward for organizations looking to protect themselves from memory safety vulnerabilities. RunSafe Security provides a critical solution that helps companies bridge the gap between today’s risks and the long-term goal of transitioning to memory-safe languages.
By embracing Secure by Design and leveraging RunSafe’s technology, organizations can build more resilient systems, reduce their exposure to vulnerabilities, and protect their operations from the ever-present threat of cyberattacks.
For a deeper dive into Secure by Design strategies and memory safety solutions, download our full webinar and learn from industry experts Doug Britton and Shane Fry how to protect your codebase and future-proof your organization’s cybersecurity strategy.
The post Secure by Design: Building a Safer Future Through Memory Safety appeared first on RunSafe Security.
The Hidden Costs of Rewriting Code
The Memory Safety Crisis: A Growing Concern
Innovative Approaches to Memory Safety
Addressing Memory Safety: A Comprehensive Approach
Memory safety vulnerabilities are a persistent and pervasive issue in the software development world, leading to some of the most severe and costly security breaches. From buffer overflows to dangling pointers, these vulnerabilities are a common attack vector, exploited by malicious actors to gain unauthorized access, cause data corruption, or crash systems. Traditionally, the go-to solution has been to rewrite the affected codebase to ensure memory safety. However, this approach is often infeasible, fraught with challenges including substantial time investment, high costs, and the inherent complexity of reworking existing systems without introducing new bugs.
Rewriting code for memory safety is akin to renovating an old house; it’s labor-intensive, expensive, and often reveals unforeseen problems that further complicate the project. For developers and IT security professionals, the idea of re-engineering vast amounts of legacy code for hundreds of different types of products and often millions of fielded devices is daunting, often leading to project delays and increased pressure on already stretched resources. The technology leaders and system architects face the additional burden of justifying these costs and disruptions to stakeholders.
But what if there were a better way?
Advances in technology now offer innovative solutions that enhance memory safety without the need for extensive code rewrites. These cutting-edge approaches mitigate risks while saving time and resources, enabling product manufacturers to secure their systems more efficiently and effectively. Read on to discover these groundbreaking methods and explore how they can transform the landscape of software security.
Rewriting code for memory safety is usually a monumental endeavor that consumes significant time and resources. For software developers and engineers, the process involves learning new programming languages, testing open source components that may not be compatible, hiring new developers with different skills, testing all over again, and then getting your customers to buy new versions of the device they just purchased. This meticulous task demands a high level of expertise and substantial man-hours, diverting valuable resources from other critical projects.
Moreover, the process of rewriting code can inadvertently introduce new bugs and vulnerabilities. As developers modify and restructure the code, there is always the risk of human error, leading to new security flaws that could be even more challenging to detect and rectify. This not only undermines the initial objective of enhancing security but can also require further rounds of testing and debugging, stretching timelines and budgets even more.
The disruption caused by rewriting code extends beyond the development team. Existing workflows are interrupted, and scarce resources that could be focused on new features are diverted to redoing existing ones, leading to delays in project timelines and deviations from carefully planned product roadmaps. For technology leaders and system architects, this upheaval can create significant strategic challenges, as they must balance the urgent need for security with the equally pressing demands of innovation and market competitiveness.
In light of these hidden costs, it becomes evident that the traditional approach to memory safety is far from ideal. Product owners and development teams need solutions that address security vulnerabilities without derailing their operations and straining their resources.
Memory safety vulnerabilities have become a pressing issue in today’s digital landscape, with far-reaching consequences. According to Microsoft, nearly 70% of the vulnerabilities it addresses each year stem from memory safety issues, underscoring the critical nature of this threat. High-profile incidents, such as the Heartbleed bug and the WannaCry ransomware attack, highlight the devastating impact these vulnerabilities can have. These incidents not only compromised sensitive data but also caused billions of dollars in damages and disrupted services globally.
As software systems grow increasingly complex, maintaining memory safety becomes more challenging. Modern applications often integrate numerous third-party libraries and dependencies, each with its own potential vulnerabilities. This complexity amplifies the difficulty of ensuring that every component adheres to stringent memory safety standards. For software developers and security professionals, the task of safeguarding these intricate systems is akin to finding a needle in a haystack, requiring continuous vigilance and comprehensive testing.
Given the escalating scale and sophistication of these threats, the need for proactive and effective solutions is more urgent than ever. Traditional methods like rewriting code are no longer sufficient. Product manufacturers must adopt innovative strategies that can address memory safety vulnerabilities swiftly and efficiently, without compromising their operational capabilities.
New approaches to memory safety are transforming the way product manufacturers address vulnerabilities. Advanced software hardening techniques have emerged as a game-changer, providing robust security enhancements without disrupting existing workflows. These methods integrate seamlessly with current software development processes, ensuring that memory safety is maintained without compromising operational efficiency.
Key features of these advanced approaches include real-time monitoring and threat detection, which continuously scan applications for suspicious activity. Automated response and recovery mechanisms further bolster security by swiftly neutralizing threats and restoring systems to a safe state, minimizing downtime and mitigating the impact of attacks.
Moreover, these innovative solutions are designed to be compatible with a wide range of software environments, ensuring that they can be deployed across diverse platforms and applications. This flexibility makes it easier to adopt these techniques without extensive modifications to existing infrastructure. For product managers and security leaders, the benefits are clear: enhanced security, reduced risk of breaches, and a more resilient software ecosystem, all achieved without the significant resource investment typically associated with code rewrites.
Addressing memory safety requires a comprehensive and multifaceted approach that goes beyond implementing innovative technologies. While advanced software hardening techniques are crucial, their effectiveness is amplified when combined with best practices and robust organizational policies. This holistic strategy ensures that every aspect of the software development lifecycle prioritizes memory safety, creating a resilient and secure foundation.
Collaboration is key to achieving this goal. Developers, security teams, and stakeholders must work together to identify vulnerabilities, develop mitigation strategies, and implement effective solutions. By fostering open communication and collaboration, product teams can ensure that memory safety is not just a technical issue but a shared responsibility. This unified effort helps to align priorities, streamline processes, and ensure that security considerations are integrated into every phase of development.
Continuous training and awareness are also vital components of a comprehensive memory safety strategy. Regular training sessions and workshops help keep teams updated on the latest threats, best practices, and technological advancements. Encouraging a culture of knowledge-sharing and continuous learning ensures that everyone, from junior developers to senior architects, remains vigilant and informed.
By combining innovative solutions with collaborative efforts and ongoing education, product manufacturers can build a robust defense against memory safety vulnerabilities. This comprehensive approach not only enhances security but also promotes a culture of secure software development, ensuring long-term protection and resilience.
To achieve memory safety and calculate your potential attack surface reduction, consider implementing software memory protections without rewriting a single line of code. Imagine how much your CFO will appreciate the efficiency and cost savings of this proactive security measure. Start enhancing your software’s defense today!
The post The Real Cost of Rewriting Code for Memory Safety – Is There a Better Way? appeared first on RunSafe Security.
The post The Memory Safety Crisis: Understanding the Risks in Embedded Software appeared first on RunSafe Security.
Risks of Memory Vulnerabilities in Embedded Software
Challenges of Addressing Memory Safety
RunSafe’s Innovative Approach to Memory Safety
Software Supply Chain Security with RunSafe
Ensuring Security in Embedded Systems, ICS, and OT
Practical and Cost-Effective Memory-Based Vulnerability Protection
Memory safety is a foundational aspect of software development, ensuring that programs operate reliably and securely without accessing or manipulating memory incorrectly. In embedded systems, where software controls critical functions such as transportation systems or power grids, the importance of memory safety cannot be overstated.
The National Security Agency (NSA) has issued guidance emphasizing the severity of such vulnerabilities, prompting major tech companies like Google and Microsoft to underscore their prevalence. Likewise, the Cybersecurity and Infrastructure Security Agency (CISA) has issued an implementation plan to fortify and defend the digital landscape.
This blog post delves into the risks posed by memory vulnerabilities in embedded software, the challenges in addressing them, and how embedded software security solutions like RunSafe Security can enhance memory safety without extensive code rewrites or performance degradation.
Memory safety in embedded software is not just a concern; it’s a substantial and pressing threat to software deployed within critical infrastructure. This concern is further amplified by the NSA’s November 2022 guidance, which underlines the gravity of the risk posed by memory-based vulnerabilities. These vulnerabilities have the potential to compromise the integrity and security of essential systems, a risk that cannot be ignored.
An analysis from MITRE reveals a sobering reality: three of the top eight most dangerous software weaknesses are memory safety issues. Google and Microsoft echo these concerns, reporting that nearly 70% of their vulnerabilities in native code stem from memory-based flaws.
The NSA’s recommendation, a fast transition to memory-safe alternatives like Go, Java, Ruby, Rust, and Swift, highlights the situation’s urgency. However, the monumental task of rewriting code for memory safety means touching billions of lines of code across countless code bases and products. This presents significant challenges to any organization, both in terms of financial investment and opportunity cost.
Traditionally, the recommended approach to addressing memory safety in embedded software has been rewriting code in languages like Rust, which is known for its memory safety features. However, rewriting billions of lines of code across numerous code bases and products entails significant costs and time investments. Moreover, this may disrupt existing workflows and introduce unnecessary complexities into development processes.
RunSafe Security presents a pioneering solution in contrast to conventional methods, providing organizations with the capability to attain memory safety without requiring extensive code alterations or sacrificing performance. Utilizing its cutting-edge technology, RunSafe employs a method of hardening code by randomizing the placement of functions in memory, ensuring a unique memory layout for each binary during runtime.
By embedding protective measures directly into the software during the build process, RunSafe effectively addresses memory-based vulnerabilities while maintaining system performance. This approach offers a practical and economically viable alternative to traditional security measures, mitigating the risk of exploitation without imposing significant overhead on operations.
Moreover, RunSafe seamlessly integrates with continuous integration and continuous delivery (CI/CD) pipelines, streamlining the incorporation of enhanced security measures into the software supply chain. This integration ensures that developers can maintain their productivity while simultaneously fortifying the security of their applications, significantly improving the resilience of deployed software against potential threats.
By implementing RunSafe technology within CI/CD pipelines, organizations gain the capability to reinforce proprietary software compiled internally or by suppliers, alongside deploying hardened iterations of incorporated open-source components. Leveraging RunSafe’s CI integrations with GitLab and GitHub, customers can automate SBOM generation, and integrate security measures at build time—all without compromising developer efficiency or system performance.
Securing operational technology (OT), industrial control systems (ICS), and other critical embedded systems poses distinctive challenges. RunSafe’s technology tackles these obstacles by thwarting memory-based attacks and safeguarding embedded software at runtime, without imposing undue administrative burdens.
Through our automated tool, protective measures are integrated at build time and activated at deployment to protect embedded software during runtime. Our deployment experience spans devices ranging from server firmware, to the interoperability software that lets electric vehicle charging stations communicate with the energy grid, to software used in industrial automation facilities.
In addition to this, RunSafe’s technology can be integrated into DevSecOps workflows, ensuring that security measures are applied consistently throughout the development process and across development teams. By incorporating protections at the build stage, organizations can mitigate vulnerabilities early in the software development lifecycle, reducing the risk of exploitation in production environments.
RunSafe’s compatibility is not limited by operating system or instruction set. RunSafe products run across various operating systems (such as LynxOS, VxWorks, Linux, Android, QNX, and others) and most instruction sets (Intel, ARM 32- and 64-bit, PowerPC, and more). This versatility ensures that organizations across diverse sectors can confidently leverage RunSafe’s technology to enhance the security posture of their embedded systems, knowing that it can adapt to their specific needs.
The imperative to address memory safety in embedded software is a critical issue that demands immediate attention. RunSafe Security offers a practical and cost-effective solution to this problem, allowing organizations to strengthen their software against memory-based vulnerabilities without the need for extensive code rewrites or performance sacrifices.
As the threat landscape evolves, embracing innovative approaches like RunSafe Security becomes essential for safeguarding critical infrastructure and ensuring the resilience of embedded systems.
Take charge of your organization’s memory safety today in your software deployments and mitigate the risks posed by memory-based vulnerabilities with RunSafe Security.
The post The Memory Safety Crisis: Understanding the Risks in Embedded Software appeared first on RunSafe Security.
Memory Safety Through Hardening System Code
A static target is a sitting duck. Hitting a moving target is far more challenging: it requires accounting for the target’s speed, direction, and distance, and factors such as the attacker’s skill, the weapon used, and environmental conditions add to the difficulty. The embedded software world, in which our country’s critical infrastructure (government, business, military, utilities, and more) runs, is a static target. That’s changing. It has to change. The embedded software of our most vital systems is vulnerable, and constantly under attack.
Weapons systems, in particular, are tempting targets. According to Schneier on Security, “Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning. Over the past decade, militaries have established cyber commands and developed cyberwar doctrine. However, much of the current discussion is about offense. Increasing our offensive capabilities without being able to secure them is like having all the best guns in the world, and then storing them in an unlocked, unguarded armory. They just won’t be stolen; they’ll be subverted.”
Schneier is speaking to the broader aspects of cybersecurity, but the point is valid. When it comes to cybersecurity, we need to turn our focus toward defense: defending the memory-based vulnerabilities in our weapons systems, and in all of our critical infrastructure.
For more than a year now, the U.S. government has been communicating the severity of the risk that memory-based vulnerabilities pose in embedded software deployed across critical infrastructure. According to MITRE, 3 of the top 8 (and 8 of the top 25) most dangerous software weaknesses are memory safety related. Microsoft and Google have each stated that software memory safety issues are behind approximately 70% of their vulnerabilities. The NSA recommends that software producers using C and C++ rewrite their products in memory-safe languages such as Rust, a modern systems language designed to guarantee memory safety without garbage collection.
But therein lies the heart of the problem. Making the move to memory-safe languages is more than difficult. It’s both time- and cost-prohibitive to rewrite all your system code in another language. So how do you ensure memory safety, and thereby close the loopholes that adversaries are taking advantage of? The key is in how we think about it. We must shift our thinking away from merely patching problems toward a more strategic approach. Let’s not just fill in the holes in the road (again and again and again); let’s build a new road!
The first step in taking immediate action toward solving the problem is to research what other options are available. Existing scanning, patching, and monitoring tools do not solve the memory vulnerabilities inherent in C and C++ coded systems. Perhaps it’s time to consider a solution that would protect your current code, while you make the transition to memory-safe languages for future coding. It’s important to look for tools that will enable your software development team to achieve memory safety without rewriting code and without affecting system performance.
Specifically, this suite of tools should cover the three most common use cases: dropping security into the code at build time, downloading protected versions of open-source software packages, and flagging instabilities and vulnerabilities at runtime.
Currently, attackers have the advantage over defenders in cybersecurity. The right tool shifts the balance of power from the attacker to the defender while preventing data loss. To shift that balance, you first need to know how exposed your software is to memory-related exploits; in other words, you need a risk assessment.
The right tool will utilize the National Vulnerability Database (NVD) to analyze software vulnerabilities in hundreds of thousands of software packages (open-source, commercial, and proprietary software) and use a fast and powerful methodology to examine the software bill of materials (SBOM), assess the risk to the system, and alert stakeholders.
Lastly, the tool you choose should automate the hardening of high-risk code with moving-target defense technologies, reducing the attack surface by eliminating the exploitation of entire classes of vulnerabilities and data theft.
You want insights within minutes, including a detailed list of recommendations.
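As a sketch of what the assessment step does, consider this minimal Python example. The package names, versions, and CVE identifier here are hypothetical placeholders, not real NVD data; a real tool would match version ranges and severity scores rather than exact pairs.

```python
# Toy illustration of SBOM-driven risk assessment: flag components that
# appear in a known-vulnerability index, as an NVD-backed tool would.
def assess_risk(sbom, vuln_index):
    """Return {package: [cve_ids]} for every SBOM entry with known flaws."""
    findings = {}
    for package, version in sbom.items():
        cves = vuln_index.get((package, version), [])
        if cves:
            findings[package] = cves
    return findings

# Hypothetical inputs: an SBOM extracted from a build, and a pre-built
# index of vulnerable (package, version) pairs.
sbom = {"libexample": "1.0.2", "libother": "3.1.0"}
vuln_index = {("libexample", "1.0.2"): ["CVE-0000-00001"]}

print(assess_risk(sbom, vuln_index))  # {'libexample': ['CVE-0000-00001']}
```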
As discussed, the toughest target to hit is a moving target. You want to turn your code into that moving target, hardening your code using load-time function randomization (LFR). This stops hackers from exploiting memory-related vulnerabilities. Rather than taking 5-10 years and millions (if not tens of millions) of dollars to rewrite every line of code to memory-safe languages, LFR can be applied in minutes without rewriting software.
This software diversity relocates where functions load into memory, uniquely for every load of every deployed instance, denying attackers the ability to exploit memory-based weaknesses.
With no new software and no change to the lines of code, there is no change in system performance or functionality. Because this Moving Target Defense (MTD) technique is applied at load time, the hardening process incurs no runtime performance overhead. Hackers have no way to learn the distances between functions or to craft exploits that depend on knowing how functions are laid out, let alone launch and scale attacks on your systems.
Hardening the code helps you achieve the transition from a static target to a moving target, reducing the software attack surface by eliminating the exploitation of software memory issues and protecting your data.
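To see why a per-load shuffle defeats precomputed exploits, here is a toy Python model (an illustration of the concept, not RunSafe's implementation): the attacker hardcodes the address a function occupied in the static layout, and the exploit only works if that function happens to land there again.

```python
import random

FUNCTIONS = [f"fn_{i}" for i in range(280)]  # a binary with 280 functions

def load_binary(seed):
    """Toy load-time function randomization: a fresh layout on every load."""
    layout = FUNCTIONS[:]
    random.Random(seed).shuffle(layout)
    return {name: address for address, name in enumerate(layout)}

# The attacker studies an unprotected copy and hardcodes fn_42's address.
static_layout = {name: address for address, name in enumerate(FUNCTIONS)}
target_address = static_layout["fn_42"]

# Replay the exploit against 1,000 randomized loads: it only succeeds
# when fn_42 happens to land back at its original address (~1 in 280).
hits = sum(
    1 for seed in range(1000)
    if load_binary(seed)["fn_42"] == target_address
)
print(f"exploit succeeded on {hits} of 1000 randomized loads")
```

Against the static layout the exploit works every time; against randomized loads it degrades to blind chance.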

The post Memory Safety Through Hardening System Code appeared first on RunSafe Security.
Safeguarding LYNX MOSA.ic: Lynx and RunSafe’s Security Partnership
Why does memory safety matter to Lynx users?
No impact on Runtime performance? Development schedule?
Easier compliance with security requirements
What about the Software Bill of Materials?
How to immunize LYNX MOSA.ic against 70% of code vulnerabilities?
RunSafe Security is excited to announce a strategic partnership with Lynx Software Technologies (Lynx). The partnership between Lynx and RunSafe protects LYNX MOSA.ic against 70% of the most common vulnerabilities with no developer impact.
At the launch of the partnership with Lynx Software Technologies, the RunSafe protection applies to the Buildroot Linux operating system, applications, and customer software. In early 2024, the team intends to release the LynxOS-178 version of the protections.
Segments including aircraft manufacturers, industrial controls, defense systems, and IoT devices are challenged to meet delivery schedules at a time when system complexity is increasing rapidly. Our effort with Lynx goes beyond “marketecture” and PowerPoint: the purpose of this engagement is to deliver RunSafe technology as a proven component in LYNX MOSA.ic. The “ic” stands for Integration Center, and we believe one critical piece of reducing program risk and development schedule is integrating cybersecurity protections during product development. Lynx will deliver RunSafe’s technology as part of its LYNX MOSA.ic product.
In compiled code, memory safety bugs are the single largest class of bugs. In the embedded and real-time operating system spaces, compiled code represents the vast majority of code. A few statistics, based on research at North Carolina State University:
Time is on the side of the attackers. Deployment cycles of systems in military, aerospace, and federal markets are long. In the case of planes, they can be measured in decades. Patches are hard to deploy to these complex, fielded systems. Meanwhile, highly resourced and skilled attackers develop cyber kill chains, finding (or buying on the dark web) zero days. With RunSafe, systems running Lynx RTOS are protected and future-proofed against the majority of these zero-day attacks without patching.
For those Lynx customers selling to US government programs, there is an increased focus on the memory safety of acquired systems and software. Figure 1 below shows actions undertaken by the US government to tighten up memory safety.

Figure 1
The reordering of functions happens when the process is set up in memory. Instead of jumping straight to the normal entry point, the binary reorders its functions first, then begins normal operations. This reordering typically adds only 1%-3% to process setup time: a process that took 1 second to set up before will now take 1.01 to 1.03 seconds.
From that point forward, the RunSafe protection is passive. There is no change in instruction count, control flow, or the like, and no additional memory reads, writes, or lookups. Our customers, even on the most stringent real-time systems, have never found a measurable impact on runtime performance.
Lynx customers are better able to document their compliance with RMF (DoDI 8510.01), which incorporates NIST 800-53. Consequently, programs accelerate time to ATO at a lower cost. For high-impact systems, RunSafe enhances compliance with more than 20% of the controls. For many RunSafe-impacted controls, the only alternative is thousands of hours of testing.
RunSafe provides “last-mile integrity” into the running memory, increasing confidence that the code operates in memory the way the developer intended. By mitigating an entire class of vulnerabilities, RunSafe makes possible incident handling responses that were previously impossible. This protection can be applied to every layer of the system’s software. Table 1 shows the breakout of impacted controls across the various levels of system integrity. A whitepaper with more details about the NIST 800-53 controls can be found here.

Table 1
Machine-readable SBOMs are required by EO 14028 and are making their way into FAR clauses. RunSafe’s tools make it possible to build a complete and accurate SBOM for each Lynx project, whether or not a package manager is used for the compiled code. By hooking into the build process, RunSafe identifies every file coming into the build (every dependency, library, and include file) and builds a full tree of dependencies between the components. For Lynx Software Technologies customers, this is a zero-effort activity.
A future webinar and configuration document will describe the configuration steps, but this would involve a slight change to a build script, instructions on where to include the license key, and which configuration files to edit. No other changes are necessary (see figure 2).

Figure 2
Memory safety bugs (stack overflows, heap overflows, etc.) wreak havoc on their targets by using code already in memory in unintended ways. For example, by moving the instruction pointer one byte past an add instruction, the processor may execute a branch instruction. For these attacks to work, the attacker must have highly predictable insight into the layout of memory on the victim process.
Figure 3 below shows a few different memory scenarios. The first column is the memory layout with no protection applied, giving the attacker a perfectly deterministic layout across all devices. The second column shows a technique called Address Space Layout Randomization (ASLR), which randomizes the base address of binaries or libraries; the body of the binary still has a deterministic organization relative to that base address. Given the prevalence of memory-address leaks, ASLR hasn’t done much to slow down attackers in the last 15 years. RunSafe’s protection randomizes the individual functions that comprise a binary, as shown in the fine-grained randomization graphic.
On average, binaries tend to have around 280 functions. That gives our attacker roughly 280! (280 factorial) permutations to consider. To grasp that scale: if an attacker could attempt 100 trillion randomizations every nanosecond, it would take roughly 3×10^524 lifetimes of the universe to hit every possible permutation.

Figure 3
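The scale of that claim can be checked in a few lines of Python, assuming a 13.8-billion-year universe age and the 100-trillion-guesses-per-nanosecond rate above:

```python
import math

# log10(280!), via the log-gamma function: lgamma(n + 1) == ln(n!)
log10_permutations = math.lgamma(281) / math.log(10)
print(f"280! is about 10^{log10_permutations:.0f}")  # roughly 10^565

attempts_per_second = 100e12 * 1e9               # 100 trillion per nanosecond
universe_age_seconds = 13.8e9 * 365.25 * 24 * 3600
log10_universes = (log10_permutations
                   - math.log10(attempts_per_second)
                   - math.log10(universe_age_seconds))
print(f"exhausting them takes about 10^{log10_universes:.1f} universe lifetimes")
```

The result, about 10^524.6 (a few times 10^524) universe lifetimes, matches the order of magnitude quoted above.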
After analyzing thousands of binaries and hundreds of thousands of functions in open-source software, RunSafe Security was able to show that, at the function level, there aren’t enough bits of code to misuse to create attacks that work against protected code.

The post Safeguarding LYNX MOSA.ic: Lynx and RunSafe’s Security Partnership appeared first on RunSafe Security.
Securing Critical Infrastructure with Memory Safety in Software
The issue at hand: memory vulnerabilities
The challenge of transitioning to memory-safe languages
Current efforts and the role of memory protections
The urgency of the matter: threats on the horizon
Improving critical infrastructure: memory safety now
How secure is the code underpinning our most vital systems? In our increasingly digitized world, memory safety in software has transitioned from a technical detail to a cornerstone of critical infrastructure protection.
As the NSA and other prominent bodies underscore its importance, it becomes evident that software vulnerabilities, especially in C or C++ applications, are not just code flaws but potential gateways for malicious actors. These gateways are particularly concerning when considering infrastructures like energy grids and manufacturing plants powered by industrial control systems (ICS) and SCADA systems. The stakes are high, and the challenges manifold.
Let’s take a deep dive into the memory-based vulnerabilities landscape, the daunting task of transitioning to memory-safe languages, and the innovative solutions, such as RunSafe, that aim to immunize your software today and for the future.
Foundational software, especially the software running our critical infrastructure, is built on the backbone of C and C++. These languages, while powerful, have an Achilles heel: vulnerabilities arising from their handling of memory.
At its core, the software governing crucial sectors like our energy grids and manufacturing hubs must be flawless. But, in the reality of C and C++-coded systems, memory mismanagement isn’t rare. Such missteps can serve as exploitable vulnerabilities for those with ill intent.
A key concern here is the inherent predictability within these languages. Historically, if a bad actor could decipher vulnerabilities in one software instance, they could design a universally effective exploit. This isn’t a bug but a golden ticket for those with malicious objectives.
To tackle this, it’s not enough to just patch these memory-based vulnerabilities. Instead, the approach needs to be more strategic. We can boost unpredictability by changing how software interacts with memory and introducing elements of randomness in function loads. This makes each software instance unique, diminishing the chances of a single exploit affecting all.
Critical infrastructure sectors often boast an expansive product portfolio, developed and refined over many years. Consider a company with over 1,000 diverse products, each a snapshot of technological innovation at its time of conception. The challenge arises when a sizable fraction of these products, entrenched in our critical infrastructures, are susceptible to memory vulnerabilities.
Transitioning to memory-safe languages for newer software products seems a plausible solution, but what about the legacy? Software products with life spans bridging decades are already deeply embedded in vital sectors. Their transition involves not just technical adaptation but logistical, strategic, and continuity considerations.
Amidst these complex shifts, vulnerabilities can subtly emerge. It’s comparable to a structure being most at risk when undergoing renovation. Software can also inadvertently expose itself to unforeseen threats during these transformative phases.
Thus, as industries evolve, striking a balance between embracing the future and ensuring robust memory safety in software becomes an intricate dance; one that demands proactive strategies, innovative solutions, and unwavering vigilance against potential exploits.

The cybersecurity landscape is constantly evolving, and as threats grow more sophisticated, so must our defense approaches. At the heart of this evolution is the principle: “secure by design, secure by default.” This philosophy underscores the idea that software must be developed from its inception with robustness and resilience in mind rather than relying on patches and retroactive fixes.
Why does this matter?
For one, we’re talking about more than just individual pieces of software. It’s about the digital backbone of critical infrastructure — the systems that power our factories, manage our utilities, and keep our society running smoothly.
In this sphere, we advocate for preemptive memory protections. Not as an optional add-on but as a foundational layer of defense. While the industry ponders over a holistic shift to memory-safe languages, a gap remains.
What do we do about the here and now?
The answer lies in identifying memory-based vulnerabilities and actively safeguarding against them. By diversifying the memory layout in each software instance, the predictability, a significant advantage for adversaries, is negated. Every attempt to exploit becomes a unique challenge, deterring large-scale attacks.
The RunSafe suite of tools reflects this philosophy, emphasizing code hardening to ensure resilient applications. Instead of merely providing a bandaid, we focus on true resilience.
From integrating security at the code’s build phase to offering protected versions of open-source software and real-time vulnerability alerts, our approach is comprehensive. By leveraging RunSafe code hardening techniques, software becomes more resilient and more unpredictable to potential attackers.
In the ever-turbulent realm of geopolitics, the ripples of conflict don’t just reverberate on the battlefield; they cascade into the cyber arena. Take, for instance, the escalating tensions between China and Taiwan. Beyond the immediate geopolitical ramifications, such conflicts have profound implications for cybersecurity, particularly concerning critical infrastructure.
With its vast interconnected digital networks, U.S. infrastructure presents a tempting target in these tense situations. And as history has shown, where there’s political tension, cyber-espionage and cyber-attacks are seldom far behind. State-sponsored digital incursions can often disrupt systems, cause considerable financial losses, and even endanger lives.
An exploitable critical weak point in these scenarios is memory vulnerabilities. Every unsecured code line, every unprotected function, becomes a potential entry point. And in a large-scale conflict, these weak spots can be exploited for espionage and sabotage. Imagine the chaos if a nation’s energy grid, transportation systems, or communication networks were compromised during a crisis.
The message is clear: the need for comprehensive memory protection is not a distant future requirement – it’s an immediate necessity. As geopolitical tensions loom, the urgency to immunize our software from potential exploitation has never been more pressing.
In today’s rapidly advancing cyber landscape, the need for robust memory protections is more evident than ever. RunSafe champions this philosophy of applying memory protections today as a pragmatic bridge to the memory-safe languages of tomorrow. While the allure of these new languages is undeniable, the transition will require time — time that critical infrastructures might not have, especially in the face of rising cyber threats.
Every moment we wait, we leave our critical infrastructure — the backbone of our nation’s economy, safety, and well-being — vulnerable to debilitating attacks. With every delay, we edge closer to the precipice of cyber catastrophe. RunSafe’s approach serves as a sentinel in these challenging times, offering an immediate solution that doesn’t demand the upheaval of current systems or delay essential protections.
While the future lies in memory-safe languages, the present calls for immediate action. To ensure the security and integrity of our critical systems, don’t wait for tomorrow. Connect with RunSafe today and fortify your defenses against the threats of the digital age.

The post Securing Critical Infrastructure with Memory Safety in Software appeared first on RunSafe Security.
A Bridge to Memory Safety: Leveraging Load-time Function Randomization for Immediate Protection and Liability Shift
Addressing Critical Infrastructure Vulnerabilities: Insights and Recommended Actions
Advancing Memory Safety Measures
Mitigating Memory Exploits: The Power of Load-time Function Randomization
Enabling Immediate Memory Safety
The looming threat that China poses to US critical infrastructure, whether in conjunction with a broader military conflict or independent of one, raises the question: what can we do about the most serious class of weaknesses enabling potential disruptions in service, memory-based vulnerabilities?
The default answer today, through NSA guidance and a call to action by CISA Director Jen Easterly, is for industries to adopt memory-safe languages. However, this rewriting of code will take 10-20 years, cost billions of dollars, and create significant disruption, and as such will not be ready in time to fend off an attack from China.
To avoid the long adoption timeline of memory-safe languages, and to save money by protecting software without rewriting it, the US Government can enable memory safety without delay.
With the release of the Office of the National Cyber Director’s (ONCD) National Cybersecurity Strategy (NCS) and the subsequent Cybersecurity and Infrastructure Security Agency (CISA) announcement of Secure by Design / Secure by Default (SBD^2), the US Government has taken a very strong step toward mitigating the memory safety issues that adversaries use to exploit software across critical infrastructure.
Although SBD^2 principles are sound and proven to help industries move forward with guidance, shifting to memory safe languages today is extremely difficult – and the combination will not yield memory safety immediately. This document summarizes actions that can be taken today to ensure memory safety is viable today.
By incorporating these recommendations into the rollout of Memory Safety programs going forward, the US Government can give industries the ability to shift liability and enable safe harbor immediately.
Let’s solve Memory Safety now – and let’s be prepared if China attempts to disrupt our critical infrastructure as part of a broader geopolitical conflict.
The post A Bridge to Memory Safety: Leveraging Load-time Function Randomization for Immediate Protection and Liability Shift appeared first on RunSafe Security.
How Does 5G Impact National Security?
Securing the 5G Global Network
Will There Be a National 5G Network?
Watch the 5G Bash at CyberWeek 2020
RunSafe Security hosted a 5G Cyber Bash webinar as a part of CyberWeek 2020. Our thought leadership two years ago is still relevant today and will continue to be into the future as we protect 5G devices and networks with our patented cyberhardening technologies.
As industry rolls out 5G technology, there is a lot at stake both economically and from a national security perspective.
Gilman Louie, venture capitalist from Alsop Louie Partners, opened the panel by discussing how much we can learn from the United States’ rollout of 4G LTE which sets the stage for what the implementation of 5G could mean.
5G is the information infrastructure of the future; the global network must therefore be secured so that open commerce within the Western liberal order can operate without disruption from centrally planned, top-down governments keen to control commerce and conduct surveillance. The rollout also has implications for the US Department of Defense.
Lisa Porter, co-founder of LOGIQ and former DUSD of R&E, shared her thoughts on the Department of Defense’s (DoD) response to 5G. She emphasized the DoD’s recognition that they need to accelerate adoption of 5G and embrace the importance of closely partnering with the private sector to do so. She also reiterated the notion that there is no finish line when it comes to the advancement of technology and the DoD’s adoption of zero-trust architecture.
“Trust is vulnerability…we can’t trust anything, so we need to architect solutions that address that,” she shared.
We then heard comments from Randy Clarke, vice chair of the National Spectrum Consortium. He discussed the importance of continuing to invest in software solutions, saying “Prototyping makes good policy.”
He discussed why 5G is so critical to the economic drivers of this nation and the world, and why its security is therefore vital: it is mission critical. The discussion closed with the panelists debating whether there will ever be a national 5G network.
Our mission at RunSafe aligns with protecting software across interdependent 5G networks so that bad actors cannot exploit vulnerabilities on 5G devices. To learn more about RunSafe and how we harden code to stop hackers from exploiting memory-related vulnerabilities, click here.
The post How Does 5G Impact National Security? appeared first on RunSafe Security.
New Survey Results: Cyber Decision Makers Are Unaware about the State of Firmware Security
Are Cyber Risk Decision Makers Truly Informed?
What Tools Are Available to Fill Cyber Knowledge Gaps?
The software world continues to undergo dramatic change. From digital transformation to DevOps and shift-left practices, organizations are reinventing their software development lifecycle processes with an eye toward automation and agile or continuous practices.
With that said, understanding risk across your software infrastructure includes understanding the supply chain in detail. Most organizations are still struggling to secure the embedded firmware their devices and supply chains rely upon, leaving themselves extremely vulnerable.
Eclypsium conducted a survey to determine how much cyber risk decision makers in financial services companies know (or don’t know) about the state of firmware security in their device fleet and supply chains.
Eclypsium surveyed a total of 350 IT security decision makers in May 2022, from organizations with a minimum of 1,000 employees. The respondents came from a variety of locations, including the US (150), Canada (50), Singapore (50), Australia and New Zealand (50), and Malaysia (50). All respondents were from organizations in the financial services sector.
RunSafe Security protects firmware for several organizations and specializes in reducing risk across your software supply chain—whether open source, third party, or proprietary code. See what RunSafe’s product lineups can do for you with a hassle-free trial.
The post New Survey Results: Cyber Decision Makers Are Unaware about the State of Firmware Security appeared first on RunSafe Security.
Paul Rosenzweig: 5G Networks and Cybersecurity Metrics
Currently, Rosenzweig is a cybersecurity consultant, practicing attorney, Senior Fellow at the R Street Institute, and law professor at George Washington University.
Before defining the three dimensions of trust Rosenzweig subscribes to, we must look at trust in the context of cybersecurity. You can think of trust as the point at which you can no longer filter out any more risk. Your software is protected, and in turn, so is your company.
The three different dimensions of trust Rosenzweig discusses are:
Beyond the dimensions of trust, there is heavy significance on zero-trust and 5G network technologies.
For example, 5G technology faces a different risk than the software security risk you incur from downloading an app, such as LinkedIn, on your phone. If you look at these risks on a spectrum, you can see the optimal level of trust to take on each stake.
With 5G technology, you must allow access to networks and the ability to manipulate them. But if you download LinkedIn on your phone, you must only worry about data access and privacy concerns.
And with 5G, we should ask ourselves:
Do you consider cybersecurity to be an art or a science? If you view cybersecurity the way Rosenzweig does, you would agree that it's more of an art today, but that it can become more of a science through capturing and monitoring key metrics.
But today we aren't able to measure cybersecurity efforts and reproduce them efficiently, so cybersecurity can't yet be classified as a scientific field.
For example, we think two-factor authentication helps protect us. But how much does it help? If cybersecurity were a science, we would measure precisely how much protection two-factor authentication provides and improve based on that data. Today, no such precise measurement exists.
If cybersecurity isn't a measurable field, why do we need cyber metrics? The answer: if we don't establish universal metrics to measure cybersecurity efforts, the rate of change will outpace our ability to keep control over technology.
In addition to metrics, we need to standardize definitions and questions that pertain to cybersecurity. By establishing universal definitions and questions in the field, we will have a baseline to optimize the metrics over time.
It doesn’t have to be perfect at the beginning. But if we don’t start now, it’ll become much more challenging to create as time goes on.
So who or what should be in charge of beginning this process? Rosenzweig advocates that the United States government should start measuring its cybersecurity efforts.
We should begin by aggregating metrics that are similar but measured differently.
For example, if we were to collect 50 different definitions of a data breach, all 50 would be alike but not uniform. However, this would give us a starting point to create a standard report for a data breach.
After standardizing cybersecurity definitions and questions, the government should think of new things that we should measure.
By doing so, we're also addressing Rosenzweig's most challenging lesson in cybersecurity: human experience is imperfect, so don't let perfect be the enemy of good.
Looking to increase the speed and effectiveness of your response to a cyber breach and learn more about immunizing your software? RunSafe’s new Alkemist:Flare continuously monitors the health of your systems during runtime to provide indicators of stability, reliability and vulnerability while instantly flagging failures and potential attacks. Request a free 30-day analysis of runtime vulnerabilities today.
The post Paul Rosenzweig: 5G Networks and Cybersecurity Metrics appeared first on RunSafe Security.
]]>The post Shared Security in a Cloud Environment appeared first on RunSafe Security.
]]>Shift Left for Shared Cloud Security
Cloud deployments introduce major new shared security considerations for organizations, changing some key operational imperatives for development, security, and IT professionals. On one hand, commercial cloud providers' "Infrastructure as Code" delivers multiple layers and types of network and server security out of the box. Paradoxically, that can create a false sense of security around applications, containers, and workloads—all of which come with vulnerabilities that cloud tools can't catch. Vulnerabilities replicate at light speed in the cloud via "golden images" that can massively expand the scale of the exploitable attack surface. Often, a significant amount of time passes (in the case of open source, frequently years) before a vulnerability is identified and fixed. And once code is deployed, it's expensive and time-intensive to patch, remediate, quarantine, or roll back. So, while cloud providers take ownership of the Infrastructure as Code elements of the stack, dev and devops teams inside the organization become responsible for a broader application security mandate than in traditional pre-cloud environments.
As the slide below illustrates, when outsourcing core infrastructure to the cloud, application and infrastructure security teams must “shift left” to focus on SDLC and CI/CD pipelines they control, and remove as much vulnerability as possible before deployment. Kudos to Snyk for drilling down to what this really takes in practice!
Slide presented at SNYKCON keynote, 10/18/2020
Scanning, testing, and patch management are essential parts of this discipline, but as a recent RunSafe study shows, scanning tools missed as many as 97.5% of known memory vulnerabilities over the last 10 years, as well as 100% of the "unknown unknown" Zero-Day vulnerabilities. And MITRE ranked memory CVEs at the top of its most recent list of the most dangerous threats to software.
Cloud Workload Protection
In its recent Cloud Workload Protection Platform (CWPP) research note, Gartner identified memory exploit protection as an essential component of any organization’s CWP strategy (Gartner, Cloud Workload Protection Platforms, 2020):
“Exploit Prevention/Memory Protection–application control solutions are fallible and must be combined with exploit prevention and memory protection capabilities…”—Gartner CWPP 2020
Gartner CWPP, 2020
Per the report, in the Risk-Based Hierarchy of Workload Protection Controls, exploit prevention/memory protection is a core workload protection strategy:
We consider this a mandatory capability to protect from the scenario in which a vulnerability in a whitelisted application is attacked and where the OS is under the control of the enterprise (for serverless, requiring the cloud providers’ underlying OS to be protected). The injected code runs entirely from memory and doesn’t manifest itself as a separately executed and controllable process (referred to as “fileless malware”). In addition, exploit prevention and memory protection solutions can provide broad protection against attacks, without the overhead of traditional, signature-based antivirus solutions. They can also be used as mitigating controls when patches are not available. Another powerful memory protection approach used by some CWPP offerings is referred to as “moving target defense” — randomizing the OS kernel, libraries and applications so that each system differs in its memory layout to prevent memory-based attacks.
Cloud-Native Application Security
Cloud deployments rely on well-defined devops workflows to create operational efficiency and resilience. Organizations use many different tools and methods to make that happen, but best practices include the components/steps shown below:
https://medium.com/faun/devops-without-devops-tools-3f1deb451b1c
It’s a good assumption that every new code release contains new memory vulnerabilities. This has been borne out over years of experience, with accelerated release cycles and a growing reliance on open source components expanding the exposure. Beyond best practices in coding and native IDE functionality, however, there is industry consensus that adding more vulnerability prevention steps to sprint cycles is counterproductive. So, in RunSafe’s view, the most effective DevSecOps strategy is to treat every code object as vulnerable.
Immunization from memory vulnerabilities can be inserted anywhere from the Build through Deploy steps. RunSafe’s Alkemist tools integrate with any devops product in this toolchain via simple RESTful API calls, repo gets, secure containers, or CLI.
This can provide comprehensive memory threat protection for all three universal building blocks of cloud computing: code, containers, and workloads.
How Can RunSafe Help with Your Cloud Deployment?
RunSafe provides memory protection at the code, container, or workload level for compiled software and firmware. With this protection in place, downstream consumers in the devops workflow can treat protected code as “memory safe” relative to unprotected workloads. This enables several DevSecOps advantages:
Adding code immunization into cloud DevSecOps workflows can completely neutralize memory vulnerabilities in any compiled code. This produces significant ROI, since 40-70% of identified vulnerabilities are memory-based, depending on the code stack. The value and importance of this step are even greater today given Synopsys’ most recent Black Duck Open Source Security finding that more than 95% of new enterprise applications include open source components, which by their nature increase the incidence and persistence of unknown Zero-Day vulnerabilities.
The post Shared Security in a Cloud Environment appeared first on RunSafe Security.
]]>The post The Devil in the Details: How The Caching Daemon Keeps Our Yocto Customers Running Safe appeared first on RunSafe Security.
]]>RunSafe is in the business of helping developers, and the organizations that employ them, to reduce risk. A key part of that purpose is making sure our customers have the right tools, working in the way that is optimal for their use case.
There is a whole universe of memory-based attacks: stack, heap, and buffer overflows, along with a myriad of other memory abuse techniques, can all be potential attack vectors. At RunSafe, our technology limits risk by preventing attackers from weaponizing memory bugs, even as those bugs continue to exist in the underlying software.
Among the many environments where RunSafe supports customers is in Yocto build environments, which are commonly used in the embedded space. The Yocto Project is run by the Linux Foundation and is an open source effort that has found increasingly broad adoption.
With our Alkemist:Source product, RunSafe provides load-time function randomization (LFR) capabilities. It’s a security feature applied to compiled binaries, such that whenever a binary is loaded, the memory layout of its functions is randomized, preventing certain classes of attacks. RunSafe’s delivery model enables easy integration of LFR into the Yocto build process, so that the image you get at the end of the Yocto build has built-in protection.

With Yocto and Alkemist:Source, every time you boot your device and every time applications load, they look different to an attacker. And no, this isn’t theoretical: it’s a simple five-minute process (we’ve outlined the process in a previous blog post).
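To make the idea above concrete, here is a toy sketch of load-time function randomization. This is not RunSafe's implementation; it simply illustrates the principle that each "load" can shuffle a table of functions with a fresh seed, so the layout an attacker observes on one device tells them nothing about another. The `XorShift` PRNG and `randomized_layout` helper are invented for this example.

```rust
// Toy illustration only: a function table whose order is reshuffled per "load".
struct XorShift(u64);

impl XorShift {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

fn alpha() -> &'static str { "alpha" }
fn beta() -> &'static str { "beta" }
fn gamma() -> &'static str { "gamma" }

// Fisher-Yates shuffle of a function-pointer table, seeded per "load".
fn randomized_layout(seed: u64) -> Vec<fn() -> &'static str> {
    let mut table: Vec<fn() -> &'static str> = vec![alpha, beta, gamma];
    let mut rng = XorShift(seed | 1); // avoid the all-zero PRNG state
    for i in (1..table.len()).rev() {
        let j = (rng.next() % (i as u64 + 1)) as usize;
        table.swap(i, j);
    }
    table
}

fn main() {
    // Two different "loads" with different seeds can yield different layouts,
    // while each table still contains exactly the same functions.
    let load_a = randomized_layout(0xDEAD_BEEF);
    let load_b = randomized_layout(0x1234_5678);
    for f in &load_a {
        println!("{}", f());
    }
    assert_eq!(load_a.len(), load_b.len());
}
```

The real technique randomizes actual function addresses in a compiled binary at load time; the shuffle here only stands in for that idea.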
The Yocto Resource Utilization Challenge
We have many customers that use Alkemist:Source for one or two binaries. With Yocto, it’s not just one or two binaries being protected; it’s everything in the system, for full image protection.
When dealing with embedded devices that have limited resources, the challenges can differ from protecting a single binary on a big server. That’s what happened with one of our customers that deployed Alkemist:Source for a Yocto build project on a resource-constrained device.
What the organization found is that when their image was built with LFR enabled, there was a measurable performance hit on startup: applications were taking longer to load than they wanted. In this case, our customer had an unusually heavy startup procedure, which is not true of most embedded devices, and they were running on a slow processor.
We knew we had to come up with a solution to this problem. That’s where the caching daemon comes into the picture.
How the Caching Daemon Works
With Alkemist:Source, when a binary is loaded into memory it’s randomized. With a shared library, that creates additional resource utilization: because shared libraries are mapped with copy-on-write protection, randomizing one produces a private copy of that library in memory.

So, in effect, when the shared library loads, RunSafe randomizes it and copies it to a new memory location. If 10 different applications all use that shared library, they end up making 10 copies and randomizing it 10 different times. That leads to memory bloat and a potential performance hit, especially on resource-constrained devices like the one in our customer’s Yocto deployment.
The solution we came up with at RunSafe to minimize this shared library randomization hit is the caching daemon. With the caching daemon, there is only one randomized copy of the shared library in memory, and that one copy is used by all the other binaries that link to the shared library.
Basically we keep a record of things in memory. Rather than re-randomize when we see a given shared object used again, we pass the location of that shared object already in memory for re-use.
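The record-keeping described above boils down to a cache keyed by library path: the first request pays for randomization, and every later request for the same library reuses the result. Here is a minimal sketch of that idea. All names (`LibraryCache`, `RandomizedCopy`, `layout_id`) are invented for illustration; the real daemon hands back a file descriptor and relocation metadata rather than a struct.

```rust
use std::collections::HashMap;

// Stand-in for the daemon's cached artifact (really an fd + relocation metadata).
#[derive(Clone)]
struct RandomizedCopy {
    path: String,
    layout_id: u64,
}

// Sketch of the cache: one randomized copy per library path.
struct LibraryCache {
    entries: HashMap<String, RandomizedCopy>,
    randomize_count: u64, // counts how often the expensive work actually ran
}

impl LibraryCache {
    fn new() -> Self {
        LibraryCache { entries: HashMap::new(), randomize_count: 0 }
    }

    // Return the cached randomized copy, creating it only on first use.
    fn get_or_randomize(&mut self, path: &str) -> RandomizedCopy {
        if let Some(copy) = self.entries.get(path) {
            return copy.clone();
        }
        self.randomize_count += 1; // the costly randomization happens only here
        let copy = RandomizedCopy {
            path: path.to_string(),
            layout_id: self.randomize_count,
        };
        self.entries.insert(path.to_string(), copy.clone());
        copy
    }
}

fn main() {
    let mut cache = LibraryCache::new();
    // Ten applications linking against the same shared library...
    for _ in 0..10 {
        cache.get_or_randomize("/usr/lib/libssl.so");
    }
    // ...trigger exactly one randomization instead of ten.
    assert_eq!(cache.randomize_count, 1);
    println!("randomizations: {}", cache.randomize_count);
}
```

The payoff is exactly the scenario from the customer story: ten consumers of one shared library cost one randomization, not ten.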
Technical Details on Caching Daemon
RunSafe has a provisional patent on the caching daemon, as it introduces a number of unique innovations. We took great pains to secure the daemon itself, writing it in the open source Rust programming language, which has built-in memory safety features and is considered type-safe.
LFR Library Caching consists of two major components: a new cache daemon that keeps memory mappings for each loaded library and enforces security policy, and the existing libLFR, modified to communicate with the cache daemon.
The cache daemon is responsible for caching code randomized by LFR. Before libLFR randomizes a library in a client process, it asks the daemon for a cached copy. If the daemon has a cached copy, it sends the file descriptor and additional relocation metadata for that library to the client process.
To control the resource usage of the cache daemon and avoid caching libraries that are no longer in use, the daemon will have a tunable maximum cache size, specified in memory size or number of libraries. The daemon will evict libraries from the cache using a least recently used (LRU) strategy, so that frequently used libraries always stay in cache, as long as there is space. When the daemon evicts a library from the cache it will close the file descriptor. The kernel will then free the memory when the last process with that file mapped exits (or unloads the library).
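The eviction policy above can be sketched in a few lines of Rust. This is a simplified model with invented names (`LruLibraryCache`, a `u64` standing in for the cached file descriptor), not the daemon's actual code: entries are tracked in recency order, and inserting past the tunable maximum evicts the least recently used library.

```rust
use std::collections::HashMap;

// Simplified LRU cache with a tunable maximum number of entries.
struct LruLibraryCache {
    max_entries: usize,
    order: Vec<String>,            // front = least recently used, back = most
    entries: HashMap<String, u64>, // path -> stand-in for a cached fd
}

impl LruLibraryCache {
    fn new(max_entries: usize) -> Self {
        LruLibraryCache { max_entries, order: Vec::new(), entries: HashMap::new() }
    }

    // Move a path to the most-recently-used position.
    fn touch(&mut self, path: &str) {
        self.order.retain(|p| p != path);
        self.order.push(path.to_string());
    }

    // Insert or refresh an entry, evicting the least recently used on overflow.
    fn insert(&mut self, path: &str, fd: u64) {
        self.entries.insert(path.to_string(), fd);
        self.touch(path);
        while self.entries.len() > self.max_entries {
            let victim = self.order.remove(0); // least recently used
            self.entries.remove(&victim);      // real daemon would close the fd here
        }
    }

    // A hit refreshes recency, so hot libraries stay cached.
    fn get(&mut self, path: &str) -> Option<u64> {
        if self.entries.contains_key(path) {
            self.touch(path);
            return self.entries.get(path).copied();
        }
        None
    }
}

fn main() {
    let mut cache = LruLibraryCache::new(2);
    cache.insert("libc.so", 3);
    cache.insert("libssl.so", 4);
    cache.get("libc.so");            // libc is now most recently used
    cache.insert("libcrypto.so", 5); // evicts libssl, the LRU entry
    assert!(cache.get("libssl.so").is_none());
    assert!(cache.get("libc.so").is_some());
}
```

A production version would cap by memory size as well as entry count and defer actually freeing memory to the kernel, as the post describes; the linear-scan recency list here is only for brevity.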

Memory Safety that Respects Performance Demands
There is no shortage of academic research on how to limit the risks of memory attacks. Research is all well and good, but with Alkemist:Source, RunSafe helps organizations implement memory security in a practical way.
We have moved beyond the first challenge of mitigating memory risks to helping our customers with the deeper underlying issues of performance and minimizing our impact on resource-constrained systems.
You can have performance and you can have security.
RunSafe provides a simple and seamless option for Yocto developers to neutralize zero-day and other memory-based vulnerabilities without patching. With a five-minute, one-time integration into the native Yocto build stage, RunSafe’s Alkemist technology immunizes binaries against memory attacks, so that every image is functionally identical but logically unique. This shifts hacker economics back in favor of the manufacturers and users of embedded devices.
You can get started today by registering at alkemist.runsafesecurity.com.
The post The Devil in the Details: How The Caching Daemon Keeps Our Yocto Customers Running Safe appeared first on RunSafe Security.
]]>