The post The Top 6 Risks of AI-Generated Code in Embedded Systems appeared first on RunSafe Security.
RunSafe Security’s 2025 AI in Embedded Systems Report, based on a survey of more than 200 professionals across the US, UK, and Germany who work on embedded systems in critical infrastructure, captures the scale of AI’s adoption: 80.5% use AI in development, and 83.5% already have AI-generated code running in production.
As AI-generated code becomes a permanent part of embedded software pipelines, embedded systems teams need a clear view of where vulnerabilities are most likely to emerge.
According to respondents, six risks stand out as the most urgent for embedded security in 2025.

Security vulnerabilities in AI-generated code remain the single most significant concern among embedded systems professionals. More than half of survey respondents identified this as their primary worry.
AI models are trained on vast repositories of existing code, much of which contains security flaws. When AI generates code, it can replicate and scale these vulnerabilities across multiple systems simultaneously. Unlike human developers, who might introduce a single bug in a single module, AI can reproduce the same flawed pattern across entire codebases.
The challenge is particularly acute in embedded systems, where code often runs on safety-critical devices with limited ability to patch after deployment. A vulnerability in an industrial control system or medical device can persist for years, creating long-term exposure. Traditional code review processes, designed to catch human errors at human speed, struggle to keep pace with the volume and velocity of AI-generated code.
Nearly half of embedded systems professionals worry about the difficulty of debugging and maintaining AI-generated code. When a human engineer writes code, they understand the logic and intent behind every decision. When AI generates code, that understanding often doesn’t transfer.
When bugs appear, tracing their root cause becomes significantly harder. Engineers must reverse-engineer the AI’s logic rather than reviewing their own design decisions. When modifications are needed—whether for new features or security patches—developers must understand code they didn’t write and may struggle to modify without introducing new issues.
In embedded systems, where code often has a lifespan measured in decades, maintainability is critical. If the original code is AI-generated and poorly documented, each maintenance cycle becomes progressively more difficult and error-prone.
41% of survey respondents flagged regulatory and compliance uncertainty as a major concern with AI-generated code, and the certification processes organizations rely on today weren’t designed to account for it. In regulated industries such as medical devices or aerospace, obtaining code certification requires extensive documentation of design decisions and validation procedures. When AI generates the code, much of that documentation doesn’t exist in traditional forms.
This creates several challenges. Organizations must make their own determinations about what constitutes adequate validation for AI-generated code, with limited regulatory guidance. When incidents occur, the question of liability becomes murky: Who is responsible when AI-generated code fails?
AI models learn from existing code, and that code is often flawed. More than a quarter of embedded systems professionals worry that AI tools will perpetuate and scale insecure coding patterns across their systems.
This risk is particularly concerning because it’s systemic rather than isolated. If an AI model is trained on C/C++ code that commonly contains memory safety issues—and C/C++ historically dominates embedded systems—the AI will likely generate code with similar vulnerabilities.
This risk isn’t just theoretical. Memory safety vulnerabilities such as buffer overflows and use-after-free errors account for roughly 60-70% of all security vulnerabilities in embedded software. If AI tools perpetuate these patterns at scale, the industry could see a multiplication of one of its most persistent and exploitable vulnerability classes.
AI-generated code often functions as a black box. The code works, but understanding why it works or why it fails can be extraordinarily difficult. 26% of embedded systems professionals cite this lack of transparency as a significant concern.
In embedded systems, where reliability and safety are paramount, this opacity creates serious problems. Engineers need to understand not just what the code does, but how it handles edge cases, error conditions, and unexpected inputs. With AI-generated code, that understanding is often incomplete or absent.
The survey reveals additional concern about what happens when AI-generated code becomes increasingly bespoke. Historically, when developers used shared libraries, a vulnerability discovered in one place could be patched across an entire ecosystem. If AI generates unique implementations for each deployment, this shared vulnerability intelligence fragments, making collective defense more difficult.
Nearly one in five embedded systems professionals identifies legal and licensing risks as a concern with AI-generated code. AI models are trained on vast amounts of code, much of it open source with specific licensing requirements. When AI generates code, questions arise: Does the output constitute a derivative work? Who owns the copyright to AI-generated code?
These questions remain largely unresolved, and different jurisdictions may answer them differently. For embedded systems manufacturers, this creates software supply chain risk. If AI-generated code inadvertently reproduces proprietary algorithms or patented methods from its training data, manufacturers could face infringement claims.
For organizations in regulated industries or those serving government customers, these legal uncertainties can be deal-breakers. Defense contractors, for example, must provide clear provenance and licensing information for all software components.

The 2025 landscape reveals an industry at a critical juncture. AI has fundamentally changed how embedded software is developed, and that transformation is accelerating. 93.5% of survey respondents expect their use of AI-generated code to increase over the next two years.
But this acceleration is happening faster than security practices have evolved. The tools and processes that worked for human-written code at human speed aren’t designed for AI-generated patterns at machine velocity.
The good news is that awareness is high and investment is following: 91% of organizations plan to increase their embedded software security spend over the next two years.
Understanding these six critical risks provides a roadmap for where design decisions, security investments, and process changes will have the most significant impact. Organizations that address these risks proactively—through better tooling, enhanced testing, runtime protections, and clearer governance—will not only strengthen their systems but also position themselves as industry leaders.
The insights in this post are based on RunSafe Security’s 2025 AI in Embedded Systems Report, a survey of embedded systems professionals across critical infrastructure sectors.
Explore the full report to see the data, trends, and strategic guidance shaping the future of secure embedded systems development.
The post Beyond the Battlefield: How Generative AI Is Transforming Defense Readiness appeared first on RunSafe Security.
In a recent episode of Exploited: The Cyber Truth, RunSafe Security CEO Joseph M. Saunders and Ask Sage’s Arthur Reyenger joined host Paul Ducklin to discuss how AI is transforming mission readiness. Instead of focusing on sci-fi scenarios, their conversation looked at the ways AI is already supporting missions behind the scenes.
For decades, defense teams have struggled under the weight of processes, documentation, testing requirements, and the sheer volume of data needed to support modern missions. Whether you’re analyzing electromagnetic spectrum threats, vetting new technology, or validating weapons systems, the bottleneck is almost always the same: time.
That’s exactly where generative AI is already having an outsized impact.
Organizations are using AI to speed up tasks that previously slowed entire programs—think requirements gathering, testing cycles, red-team scenario planning, and acquisition paperwork. Reyenger described one real-world deployment where a combat command used generative AI to evaluate new technologies faster:
“We saved them 95% of the time and the cost to be able to go through those processes.”
That kind of acceleration doesn’t just make workflows cleaner—it moves capability into the field when warfighters actually need it.
If there’s a misconception about AI in defense, it’s that its greatest value lies in autonomous weapons. In reality, AI is transforming less glamorous, but mission-critical areas like code development and sustainment.
Saunders emphasized that AI is already reshaping how embedded systems and defense software are built. Instead of teams getting buried in boilerplate code, AI handles the repeatable pieces, letting engineers focus on architecture, performance, and security. The result is faster innovation and more secure systems.
Another example comes straight from the U.S. Navy. Ships equipped with 3D printers previously had to request schematics and documentation from shore through slow, satellite-connected networks. Now, generative AI models running locally can help crews identify the right parts, understand dependencies, and produce what they need instantly, even while offline.
This is the kind of operational lift that rarely makes headlines but changes everything. Missions recover faster. Readiness improves. Warfighters stay effective in environments where bandwidth, connectivity, and time are scarce.
As the Department of Defense continues to adopt AI, one principle remains non-negotiable: humans stay in the loop. The most powerful applications of generative AI are the ones reducing cognitive load so people can make better decisions.
Reyenger captured this well when discussing how AI fits into modern workflows:
“Technology should not be dictating the way that organizations define their workflows. It should be supporting them. If you’re doing it a certain way, it was because it was right at a time.”
This mindset also extends to the cybersecurity and model-security challenges surrounding AI. Ask Sage’s “fire-and-forget” architecture, for example, ensures sensitive data doesn’t persist inside models—an essential requirement for defense environments where security, privacy, and zero-trust principles are table stakes.
As Saunders emphasized in the episode, the goal isn’t just choosing the best foundation model today; it’s ensuring defense teams aren’t locked into a single vendor or platform, and that AI remains flexible enough to evolve with the mission.
The more generative AI takes on repetitive work—documentation, analysis, testing, search, troubleshooting—the more time experts can spend on creativity, strategy, and judgment. And that’s where warfighters deliver their greatest value.
AI’s impact in defense isn’t about the machines. It’s about freeing people to think, decide, innovate, and act faster and with more confidence.
Listen to the full episode here.
The post Meeting ICS Cybersecurity Standards With RunSafe appeared first on RunSafe Security.
As software supply chains grow in complexity and ICS devices take on more digital functionality, operators face risk from vulnerabilities buried deep within firmware, dependencies, and proprietary code. Strengthening security and demonstrating compliance begins with improving the integrity, transparency, and resilience of that software.
RunSafe helps industrial organizations achieve this by hardening code against exploitation, increasing visibility into software components through build-time Software Bill of Materials (SBOM) generation, and extending protection to systems that can’t easily be patched or rebuilt.
These capabilities align directly with the technical controls required across major ICS cybersecurity standards, helping operators close gaps in their security posture.
| ICS Standard / Regulation | Relevant Requirements | RunSafe Capability That Supports It |
|---|---|---|
| IEC 62443 (including SR 3.4: Software & Information Integrity) | Software integrity, tamper prevention, secure component management | Protect: Runtime exploit prevention stops unauthorized code execution even when vulnerabilities exist. Identify: Build-time SBOMs document components for integrity verification. |
| NIST 800-82 (Guide to ICS Security) | System integrity (SI), configuration management (CM), continuous monitoring (RA/CA), incident response | Identify: SBOMs support configuration management and vulnerability assessment. Protect: Runtime exploit mitigation enhances system integrity. Monitor: Crash analytics & exploit detection support continuous monitoring. |
| NIST Risk Management Framework (RMF) | Ongoing assessment, vulnerability management, security controls validation | Identify: SBOMs accelerate risk assessment and control verification. Monitor: Evidence and telemetry support ongoing authorization and assessment. |
| NERC CIP | Software integrity, vulnerability assessments, incident reporting, BES Cyber System security | Identify: SBOMs shorten vulnerability assessment cycles. Protect: Hardens embedded systems to maintain operational integrity. Monitor: Provides supporting data for CIP-008 incident response. |
| EU Cyber Resilience Act (CRA) | Mandatory SBOMs, secure-by-design software, vulnerability handling, lifecycle security | Identify: Build-time SBOM generation identifying all components, including for C/C++ builds. Protect: Code hardening reduces exploitability for both known and unknown vulnerabilities. |
| U.S. Federal SBOM Mandates (NTIA, DHS, DoD, FDA) | Accurate, complete, machine-readable SBOMs; traceability; vulnerability identification | Identify: Comprehensive CycloneDX SBOMs generated at build-time that support all mandatory NTIA fields. |
| UK Cybersecurity and Resilience Bill | Supply chain assurance, software integrity, rapid incident reporting | Identify: SBOMs enable supply chain verification and vulnerability tracking. Protect: Code hardening reduces exploitability for both known and unknown vulnerabilities. |
| ISA/IEC 62443-4-1 (Secure Development Lifecycle) | Component inventory, secure build processes, threat mitigation | Identify: SBOM visibility integrated into SDLC and build processes. Protect: Mitigates memory-based vulnerabilities for devices in the field even before patches are available. |
IEC 62443 defines security levels (SL-1 to SL-4) to counter cyber threats to ICS systems. Security Requirement 3.4 requires mechanisms to ensure software and information integrity by detecting and preventing unauthorized modifications, which is essential for defending against zero-day exploits.
RunSafe Security supports this with runtime code protection and automated defenses that maintain software trustworthiness in ICS devices, aligning with these IEC 62443 integrity requirements.
NIST SP 800-82 is a specialized guidance document focused on Industrial Control Systems (ICS) and Operational Technology (OT) environments. It defines 19 control families tailored to these unique contexts, addressing operational, technical, and management controls relevant to ICS security.
RunSafe’s Protect solution assists in meeting NIST standards by hardening software across firmware, applications, and operating systems to reduce vulnerabilities, especially memory-based and zero-day threats. This aligns with minimizing risks outlined in NIST 800-82, such as unauthorized modifications, malware infections, and system exploitation.
NERC CIP applies to bulk electric systems and mandates stringent access control, security monitoring, and incident response to protect critical grid infrastructure.
RunSafe’s automated software hardening strengthens embedded software against vulnerabilities, including zero-day attacks, helping to meet NERC CIP mandates for cybersecurity system management and reducing the attack surface of BES Cyber Systems.
The EU Cyber Resilience Act imposes mandatory cybersecurity requirements on manufacturers placing products with digital elements into the European market. The regulation requires comprehensive SBOM documentation, vulnerability disclosure processes, and Security by Design principles throughout the product lifecycle.
RunSafe empowers organizations to meet EU CRA requirements through automated build-time SBOM generation, embedded software hardening, and proactive vulnerability identification.
The UK’s proposed legislation extends cybersecurity obligations across critical national infrastructure sectors. The bill emphasizes supply chain security and mandates incident reporting within strict timeframes, creating accountability for operators and vendors.
RunSafe Security supports compliance with the UK Cybersecurity and Resilience Bill by providing embedded software security designed specifically for ICS systems and software supply chain transparency through build-time SBOM generation.
RunSafe improves ICS security posture by providing:

- Identify: build-time SBOM generation that documents every software component, including proprietary and C/C++ code
- Protect: runtime exploit prevention that hardens binaries against memory-based attacks, even on devices that can’t easily be patched
- Monitor: crash analytics and exploit detection that support continuous monitoring and incident response
Together, these capabilities directly support key ICS cybersecurity requirements.

ICS cybersecurity risks increasingly stem from software complexity. PLCs, HMIs, sensors, gateways, and controllers rely on layered stacks of compiled code, RTOS kernels, communication libraries, protocol implementations, and third-party components. As this software ecosystem expands, several categories of risk emerge:
Industrial devices often incorporate dozens or hundreds of software elements, both internally developed and externally sourced. Many of these components lack update mechanisms or clear lifecycle management. When vulnerabilities are disclosed, asset owners frequently lack the visibility needed to determine whether their systems are exposed.
Memory safety remains one of the most common contributors to ICS vulnerabilities. Buffer overflows, use-after-free flaws, and out-of-bounds writes still account for a significant portion of CVEs in industrial and embedded software. These weaknesses persist in critical infrastructure because:

- C/C++ still dominates embedded and industrial development
- devices often remain in service for decades
- patching fielded systems is difficult, slow, or sometimes impossible
Andy Kling, VP of Cybersecurity at Schneider Electric, a major player in the ICS/OT space, found that “memory safety was easily the largest percentage of recorded security issues that we had,” with 94% of those weaknesses coming from third-party components.
While memory safety is not the only category of ICS risk, it remains one of the most damaging, often enabling remote code execution or multi-stage exploit chains.
Software supply chain cyberattacks frequently target the software dependencies and build environments behind industrial products. Without reliable SBOMs, operators cannot:

- determine whether a newly disclosed vulnerability affects their deployed systems
- verify the provenance of third-party and open source components
- produce component-level evidence when suppliers, auditors, or regulators request it
The lack of software transparency turns compliance into guesswork and slows incident response.
Industrial environments face major deployment challenges:

- devices often run in air-gapped or bandwidth-constrained networks
- fielded systems can’t easily be patched or rebuilt
- strict uptime requirements leave little room for maintenance windows
These realities make it difficult to rely solely on patch management, network segmentation, or perimeter defenses.
Because ICS software interacts directly with physical equipment, software vulnerabilities can lead to:

- disrupted or halted industrial processes
- damage to physical equipment
- safety risks to operators and the public
Software risk in ICS is therefore both digital and physical, with potentially severe outcomes.
Given the depth of software risk in modern ICS environments, organizations need solutions that both reduce exploitability and produce the evidence required for rising compliance standards.
RunSafe delivers this by integrating directly into existing development and maintenance workflows, making it possible to improve security posture without operational disruption.
Begin by embedding RunSafe’s SBOM generation directly into your CI/CD pipeline or offline build environment. Whether you’re working with embedded Linux, Yocto/Buildroot builds, or legacy RTOS toolchains, RunSafe’s Identify capability produces CycloneDX-compliant SBOMs and supports all mandatory NTIA fields.
You’ll gain full component visibility—down to libraries, files, and versions, including proprietary components—so you can quickly assess exposure, audit supplier code, enforce license policy, and meet SBOM-mandate requirements for ICS environments.
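For illustration, a minimal CycloneDX SBOM covering the NTIA baseline fields (supplier, component name, version, unique identifier, dependency relationship, author, and timestamp) might look like the following sketch; the firmware and component names here are hypothetical:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "metadata": {
    "timestamp": "2025-01-15T10:00:00Z",
    "authors": [{ "name": "Build Pipeline" }],
    "component": {
      "bom-ref": "controller-fw",
      "type": "firmware",
      "name": "controller-fw",
      "version": "2.4.1"
    }
  },
  "components": [
    {
      "bom-ref": "zlib",
      "type": "library",
      "name": "zlib",
      "version": "1.2.13",
      "supplier": { "name": "zlib project" },
      "purl": "pkg:generic/zlib@1.2.13"
    }
  ],
  "dependencies": [
    { "ref": "controller-fw", "dependsOn": ["zlib"] }
  ]
}
```

Generating this at build time, rather than scanning finished binaries, is what lets the SBOM capture statically linked and proprietary components that post-hoc tools miss.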
Protecting your software with RunSafe Protect is as easy as installing the packages from our repositories and making a one-line change to your build environment. Once installed, you can automatically integrate Protect into your existing build process.
RunSafe Protect hardens compiled binaries against memory-based exploits and zero-day attacks by applying Load-time Function Randomization (LFR). Even legacy PLC firmware, vendor-supplied binaries, or devices in air-gapped networks can benefit from exploit mitigation. Because the protection works independently of patch status, you’re reducing risk proactively while maintaining operational continuity.
Deploy RunSafe’s Monitor capability across your hardened device fleet to capture crash indicators, detect unusual behavior patterns, and differentiate between benign failures and potential exploit attempts.
Securing industrial control systems requires more than perimeter defenses or periodic patch cycles. It demands protections that operate inside the software itself—across legacy devices, modern embedded platforms, and complex software supply chains.
RunSafe provides that foundation by hardening binaries against exploitation, generating accurate SBOMs at build time, and delivering operational insight through lightweight monitoring. Together, these capabilities give ICS operators a practical path to strengthen system integrity, reduce exploitability, and demonstrate compliance with the world’s most important cybersecurity standards.
With the right protections applied directly to the software running your critical processes, resilience becomes achievable rather than aspirational.
Request a consultation to get started with RunSafe or to assess your embedded software security and risk reduction opportunities.
Can RunSafe protect legacy ICS devices that can no longer be patched?
Yes. RunSafe hardens compiled binaries at runtime, letting operators secure decades-old PLCs, RTUs, and embedded controllers even when patches are unavailable or cannot be applied.
Does RunSafe impact real-time performance or PLC scan cycles?
RunSafe takes an agentless approach with very low overhead and has been deployed successfully in many resource-constrained environments.
Can RunSafe be deployed in completely air-gapped ICS environments?
Yes. RunSafe supports offline licensing and local-only operations. All analysis, hardening, and SBOM generation can be performed inside secure, disconnected networks. This is particularly valuable for ICS environments with strict isolation requirements or regulatory prohibitions against cloud connectivity.
How does RunSafe help with IEC 62443 SR 3.4 and software integrity requirements?
IEC 62443 SR 3.4 requires mechanisms to prevent unauthorized modification or execution of software components. RunSafe delivers this by making memory-based exploits—including zero days—non-exploitable. Even if a vulnerability exists, exploit attempts fail, helping operators maintain software integrity even on unpatched or legacy systems.
Does RunSafe support NIST 800-82 and NERC CIP incident response and integrity controls?
Yes. RunSafe contributes to several core NIST and NERC CIP requirements:

- system integrity, through runtime exploit mitigation
- vulnerability assessment and configuration management, through build-time SBOMs
- incident response, through crash analytics and exploit detection data
This helps operators produce clear, evidence-backed compliance documentation.
How does RunSafe support SBOM requirements in the EU Cyber Resilience Act and U.S. federal mandates?
RunSafe generates build-time SBOMs, capturing every component, including low-level C/C++ libraries and embedded dependencies often missed by scanning tools. RunSafe’s Identify capability produces CycloneDX-compliant SBOMs and supports all mandatory NTIA fields.
Can RunSafe help reduce zero-day exploitability in ICS or embedded software?
Yes. RunSafe’s patented Load-time Function Randomization defends software from memory-based zero days by altering the memory layout of an application each time it runs. This prevents attackers from leveraging memory-based vulnerabilities, such as buffer overflows, to attack a device or gain remote control.
How does RunSafe differ from network-based ICS security tools?
Network tools (IDS, DPI, segmentation) detect or contain attacks, but they cannot prevent exploitation inside the device. RunSafe operates within the software itself, transforming binaries so they cannot be exploited even if the attacker reaches the device or bypasses perimeter defenses. It complements—not replaces—existing ICS security layers by addressing the root of software exploitability.
What types of ICS platforms and RTOS environments does RunSafe support?
RunSafe supports a broad range of ICS platforms, including: VxWorks, QNX, Yocto, Buildroot, Linux, Bare Metal, and more. View a full list of integrations and supported platforms here.
The post Safety Meets Security: Building Cyber-Resilient Systems for Aerospace and Defense appeared first on RunSafe Security.
In aerospace and defense, a software glitch, a failed component, or a cyber intrusion can have the same catastrophic impact: a system that doesn’t behave as intended when lives and missions are on the line.
Patrick Miller, Product Manager at Lynx, has spent his career at the intersection of safety, security, and performance, working across aerospace, defense, enterprise cloud, and embedded systems.
In this Q&A, Patrick shares how architecture, separation, and long-term thinking can help engineers and product teams design resilient weapons systems.
Patrick Miller: The biggest lesson is that every domain taught me something different about risk. Security is as much about governance and auditability as it is about technical controls. In defense and aerospace, I learned that resilience isn’t just about preventing failures, it’s about designing systems that degrade gracefully when something does go wrong.
But the connective tissue across all these domains is, particularly in product management, you must know your customer’s actual need, not just the problem you think you’re solving.
Early in my career, I realized that the “why” behind a security requirement often matters more than the requirement itself.
Patrick: They are not separate priorities but two expressions of the same goal: keep the system doing what it’s supposed to do, when it’s supposed to do it, and in the face of adversity.
Safety asks, “What happens when things fail?”
Security asks, “What happens when someone tries to make them fail?”
In aerospace, especially, legacy systems were designed in an era when the threat model was simple: physical tampering or insider threat. Now we have connected avionics and software-defined platforms with attack surfaces we didn’t have to think about ten years ago.
A safety failure and a security failure can look identical from the flight deck, and both result in the aircraft not doing what the crew intended. The intersection is in architecture. If you design a system with strong separation, say at the kernel level, between safety-critical functions and everything else, you’re solving for both.
Patrick: A key risk is that a vulnerability in one aircraft or subsystem can theoretically affect an entire fleet. However, this shift also creates the opportunity to build security and resilience from day one, rather than bolting it on afterward.
Trying to add security to 20-year-old real-time operating systems in order to modernize embedded platforms is like retrofitting a house with a new foundation. Software-defined systems let you architect with separation, modularity, and defense-in-depth from the start. You can implement zero-trust and cyber-resilience principles in real-time environments in ways you couldn’t with monolithic systems.
Patrick: This is one of the most complex problems in the industry. A crewed aircraft certified today will probably still be flying forty years from now, just as there are crewed aircraft certified forty years ago still flying today. By then, the threat landscape will have evolved dramatically. Cryptographic algorithms considered secure today may be obsolete in a post-quantum world.
The mitigation is architectural resilience. First, design with modularity so that security updates can be applied surgically to vulnerable components without recertifying the entire system. Second, implement strong separation so that compromising one module doesn’t cascade through the entire aircraft. Finally, at the program level, it means thinking about your supply chain and third-party dependencies not as static decisions, but as ongoing risk management.
Patrick: The step-change in the evolution of UAVs flips the traditional paradigm on its head. With a crewed aircraft, safety is paramount because at least one, or more likely, many human lives are at stake. In a contested environment, the calculus is different for a single-use tactical UAV. You might accept a higher technical risk if it means fielding new capabilities faster.
Here’s where I push back on the “disposable” framing: even if the platform is disposable, the capability often isn’t. If an adversary captures and reverse-engineers your UAV, they can gain insights into your tactics, sensors, and comm architecture; staying ahead of that requires constant iteration. So even for “disposable” systems, I ask: what’s worth protecting architecturally?
You can design a UAV that’s tactically expendable but still prevents an adversary from extracting intelligence or spoofing commands.
Patrick: This is where data-driven prioritization really matters. The temptation is always to boil the ocean: add every new security feature, refactor the entire architecture, and implement the latest standards. Instead, measure the opportunity cost of each modernization decision. What are my top three security gaps today? What are my top three performance bottlenecks? Which modernization efforts address both?
Implementing strong separation boundaries improves real-time performance by preventing one task from blocking another, particularly in multi-core processors. At a practical level, invest in your DevSecOps pipeline early and institute automated testing, static analysis, and security scanning to build confidence to modernize faster.
Patrick: “Designing for separation” means treating compartmentalization as a primary architectural concern, not an afterthought. It’s the difference between saying “we’ll secure the perimeter and hope nobody gets through” and saying “someone eventually gets through, here are the north-south and east-west limits that prevent further intrusion, here is how we know the threat actor entered and how to neutralize them.”
In practice, that means defining your trust boundaries early. What functions are safety-critical? What modules are network-connected? What functions are mission-critical but not safety-critical? A separation kernel enforces those boundaries between partitions at the hypervisor level: one partition can’t access another’s memory; one partition can’t interfere with another’s timing. In real-time systems, timing is paramount, so this isolation protects both safety and security simultaneously.
Patrick: The Department of Defense’s Software Fast-Track initiative is pushing contractors to adopt modern DevSecOps practices and accelerate secure software delivery timelines. We’re also watching how NIST 800-53 and 800-171 requirements cascade down through the supply chain, forcing even smaller tier-two and tier-three suppliers to implement rigorous security controls.
The Software Bill of Materials (SBOM) mandate is particularly interesting because it’s forcing manufacturers to have real visibility into their software dependencies, which is foundational for long-term supply chain security.
On the civil aviation side, DO-326A and DO-356A are pushing the industry from a compliance-checkbox approach toward continuous monitoring and threat assessment throughout the aircraft lifecycle. Zero-trust mandates across both defense and critical infrastructure are also driving architectural changes at the platform level, which aligns well with what we’re building at Lynx.
Patrick: It means I can trust the displays and controls to follow my inputs and trust the instruments. It means that if there’s a compromise somewhere in the system or sensor, it fails safely, perhaps with a display warning, but not with an unexpected output from the aircraft. A pilot already has enough mental load as it is and does not want to have to think or worry about cybersecurity while flying.
There’s a saying: “aviate, navigate, communicate.” Built-in resilience means the architects and engineers did their job right so that cybersecurity is invisible to me, a redundancy so the aircraft operates reliably.
Patrick: Know your customer’s actual problem, not just the requirement they gave you. Sometimes the requirement is a proxy for something deeper. Sometimes the customer doesn’t even know how to articulate it yet. For me, that means talking directly with customers, attending industry conferences, asking hard questions, and actively listening.
What’s your threat model today? How are you thinking about long-term sustainability? What architecture decisions are constraining you? Be honest about tradeoffs. You can’t optimize for everything, but if you understand your customer’s actual priorities, you can make product feature choices that strike the right balance.
In aerospace and defense systems, safety and security directly overlap. The same design choices that protect flight safety also determine cyber resilience. Architectural separation, modularity, and supply chain transparency are prerequisites for survivability in the digital battlespace.
RunSafe Security and Lynx have partnered to advance this mission through technical collaboration. The integration of LYNX MOSA.ic and RunSafe Protect delivers the industry’s first DAL-A certifiable, memory-safe RTOS platform, uniting safety, security, and operational efficiency in a single solution.
Read our joint white paper for more on this partnership: Integrating RunSafe Protect with the LYNX MOSA.ic RTOS
The post Safety Meets Security: Building Cyber-Resilient Systems for Aerospace and Defense appeared first on RunSafe Security.
The post The RunSafe Security Platform Is Now Available on Iron Bank: Making DoD Embedded Software Compliant and Resilient appeared first on RunSafe Security.
The challenge for defense programs is that meeting these requirements, particularly for embedded systems, often results in increased labor and difficulty in getting tools approved and deployed.
That’s why we’re excited to share that the RunSafe Security Platform is now available on Iron Bank, the DoD’s hardened repository of pre-assessed and approved DevSecOps solutions.
As a verified publisher, RunSafe provides DoD software development teams with access to Software Bill of Materials (SBOM) generation, supply chain risk management (SCRM), and code protection through an ecosystem they already trust.
Iron Bank is built to help defense programs quickly deploy new tools without spending months navigating approval processes. Every product listed on Iron Bank goes through rigorous security assessments, container hardening, and compliance validation. Because the containers are scanned daily for vulnerabilities, DoD teams gain access to resilient tools that keep the software supply chain secure and get software to deployment faster.
With RunSafe listed as a verified publisher, DoD teams and integrators can now pull down the platform directly from Iron Bank, making it easier for defense programs to integrate and use.
The RunSafe Security Platform addresses some of the toughest challenges in embedded software security. Here’s what you can access through Iron Bank:
RunSafe provides the authoritative build-time SBOM generator for embedded systems and C/C++ projects. Automating SBOM generation is critical for meeting DoD requirements, especially for unstructured C/C++ code where traditional SBOM tools fall short.
SCRM capabilities enable DoD teams to take action, not just generate a static SBOM. Teams can monitor for new vulnerabilities and check license enforcement and provenance. With a complete, correct SBOM, teams can implement required SCRM practices.
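Conceptually, that monitoring step is a join between SBOM components and a vulnerability feed. The sketch below is illustrative rather than RunSafe’s implementation; the component shape loosely follows CycloneDX, and the feed data is hard-coded for the example:

```python
# Illustrative sketch: match SBOM components against a vulnerability feed.
# Not RunSafe's implementation; the feed here is hard-coded for the example.

def find_affected(sbom_components, vuln_feed):
    """Return (component, cve_id) pairs for components with known CVEs."""
    hits = []
    for comp in sbom_components:
        key = (comp["name"], comp["version"])
        for cve in vuln_feed.get(key, []):
            hits.append((comp["name"], cve))
    return hits

sbom = [
    {"name": "zlib", "version": "1.2.11"},
    {"name": "openssl", "version": "3.0.8"},
]
# In production this mapping would come from a continuously updated feed.
feed = {("zlib", "1.2.11"): ["CVE-2018-25032"]}

print(find_affected(sbom, feed))  # [('zlib', 'CVE-2018-25032')]
```

Run on every new feed update, a loop like this turns a static SBOM into an ongoing monitoring capability.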
RunSafe hardens binaries against exploitation through moving target defense, a form of Runtime Application Self-Protection (RASP), defending weapons systems at runtime to increase resilience. This resilience extends to future zero-days as well, protecting fielded weapon systems between software upgrade cycles that can be two years apart.
If your program is working to modernize DevSecOps practices, automate SBOM generation, or secure embedded systems without code rewrites, RunSafe is now available directly on Iron Bank.
You can find our containers by logging in and accessing the Iron Bank repository here.
Request a consultation to learn more about the RunSafe Security Platform and Iron Bank.
The post From Black Basta to Mirth Connect: Why Legacy Software Is Healthcare’s Hidden Risk appeared first on RunSafe Security.
Hospitals and medical device manufacturers are facing a quiet crisis rooted not in cutting-edge exploits or nation-state hackers, but in old software.
Across healthcare, legacy code is turning routine cybersecurity weaknesses into real-world patient safety risks. The problem is simple to explain and hard to solve: devices built to last for decades are now connected to modern networks, yet run on outdated and difficult-to-patch code.
As connected devices become the norm, this technical debt has become a liability that extends to patient care.
In a discussion on the business realities of medical device cybersecurity, Shane Fry, CTO of RunSafe Security, Patrick Garrity, Security Researcher at VulnCheck, and Phil Englert, VP of Medical Device Security at Health-ISAC, explored the vulnerability and compliance landscape and where software security comes into play.
Watch the full webinar for more on medical device cybersecurity here.
A medical device can remain in use for 15-20 years. That longevity might make sense for hospitals managing costs, but it means the software inside those devices is often frozen in time. Meanwhile, the threat landscape moves forward.
“These devices can be used for decades,” said Patrick Garrity, Security Researcher at VulnCheck. “That becomes a real challenge. Manufacturers have to be mindful of that.”
Imagine a connected infusion pump or imaging system that still relies on a Windows 7 or even XP base. Patches stop, drivers go unsupported, and over time, the device becomes a soft target on an otherwise modern network.
And because medical systems are tightly integrated—feeding data into hospital EMRs, remote dashboards, and cloud platforms—an outdated component in one corner of the network can expose an entire healthcare operation.

The stakes became clear during the Black Basta ransomware attack on Ascension Health earlier this year. Hospitals were forced to revert to paper-based systems. Electronic medical records, scheduling systems, and digital imaging were suddenly inaccessible.
RunSafe Security CTO Shane Fry summed up the real-world impact: “If your network’s down, you can’t do surgery.”
Beyond the immediate operational disruption, the consequences for patients were serious. Doctors faced delays accessing treatment histories. Pharmacists couldn’t verify prescriptions electronically. In some facilities, even infusion pumps and lab equipment had to be taken offline as a precaution.
Ransomware may be the headline, but the underlying vulnerability is the same—cybersecurity weaknesses left unaddressed.
As Phil Englert, VP of Medical Device Security at Health-ISAC, noted: “Cyber is a failure mode. It’s a way for things not to work or not to work as intended when you want them to.”
When software failures and weak security controls ripple into care delivery, cybersecurity is a patient safety imperative.

Most healthcare breaches don’t start with exotic zero-days. They start with vulnerabilities everyone already knows about.
Attackers target what’s common: outdated Microsoft servers, unpatched remote access tools, misconfigured network gateways, and open-source components left to age quietly inside medical devices.
Garrity pointed to examples such as NextGen Healthcare’s Mirth Connect, a popular data exchange system exploited in ransomware campaigns. The flaw wasn’t obscure, as it had been publicly documented and patched. Yet more than a year later, vulnerable systems remained exposed online, still running unpatched versions.
“Threat actors are going to opportunistically target anything and everything. And… they’re just using what’s already published and off-the-shelf,” Garrity said. “Even outdated remote management tools or cloud connectors can become attack surfaces.”

Legacy software turns these well-known weaknesses into long-term liabilities. Once a system goes unpatched, every new connection—every piece of cloud integration or remote monitoring—adds to the risk.
The consequences of cybersecurity weaknesses aren’t limited to downtime or headlines—they directly affect revenue and market access.
According to RunSafe Security’s 2025 Medical Device Cybersecurity Index, 83% of healthcare buyers now include cybersecurity standards in their RFPs, and 46% have declined to purchase medical devices due to security concerns. Outdated or insecure software doesn’t just pose a technical problem; it can cost sales.
For device manufacturers, the message from buyers is unmistakable: security maturity equals market readiness. Procurement teams are treating cybersecurity posture as a business criterion alongside clinical performance and cost.
Hospitals, too, are taking notice. Many are implementing procurement checklists requiring vendors to provide Software Bills of Materials (SBOMs), vulnerability response plans, and clear lifecycle support documentation. Without those, even innovative technologies struggle to clear the contracting stage.
Managing legacy code in a regulated, high-stakes industry isn’t easy, but it’s not impossible. The most resilient organizations are taking pragmatic, layered steps to reduce risk without overhauling every device.
Create and maintain Software Bills of Materials (SBOMs) during the build process, not after. This ensures visibility into every dependency and allows for continuous monitoring of vulnerabilities over time.
Focus patching on vulnerabilities with known exploitation in the wild, not just those with high CVSS scores.
Where patches aren’t possible, use segmentation, strict access controls, and runtime protections to reduce exposure.
Reserve processing and storage capacity for future updates and plan for cryptographic agility so devices remain secure over their full lifespan.
Communicate openly about support timelines and risk mitigation options. Buyers and regulators increasingly view transparency as part of good cybersecurity hygiene.
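The exploitation-first patching guidance above can be sketched as a simple intersection with CISA’s Known Exploited Vulnerabilities (KEV) catalog. The CVE lists below are illustrative, and a production workflow would load the KEV catalog from CISA’s published JSON feed rather than hard-code it:

```python
def triage(device_cves, kev_ids):
    """Split a device's CVE list into patch-now and backlog buckets."""
    kev = set(kev_ids)
    urgent = [c for c in device_cves if c in kev]
    backlog = [c for c in device_cves if c not in kev]
    return urgent, backlog

# Illustrative inputs; load the real catalog from CISA's KEV JSON feed.
device_cves = ["CVE-2021-44228", "CVE-2020-11111", "CVE-2019-22222"]
kev_ids = ["CVE-2021-44228"]  # Log4Shell appears in the real KEV catalog

urgent, backlog = triage(device_cves, kev_ids)
print(urgent)  # ['CVE-2021-44228']
```

The point is the ordering: a medium-CVSS flaw with known exploitation lands in the urgent bucket ahead of a critical-CVSS flaw nobody is attacking.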
Healthcare is shifting from a “check-the-box” approach to one centered on resilience. Regulators are reinforcing that shift: the FDA’s premarket guidance now requires SBOMs and vulnerability management plans, while the EU’s Cyber Resilience Act pushes similar expectations globally.
The result is a new baseline where cyber hygiene and secure design aren’t just best practices, they’re business necessities.
“If you don’t know what’s in the software you’re deploying to your networks, then how can you know that a vulnerability affects you?” Fry said. “Without that Software Bill of Materials, you’re going to be very limited.”
For manufacturers and healthcare providers alike, addressing legacy code is about security and trust. It’s about maintaining operational continuity. And ultimately, it’s about keeping patients safe in a world where every connected device is part of the care equation.
As Fry put it: “Everything that we should be doing in cybersecurity should be viewed through … the lens of making sure the patient can get the best care they need as quickly as they can.”
For more on medical device challenges and defenses, listen to our panel discussion: From Ransomware to Regulation: The New Business Reality for Medical Device Cybersecurity.
The post Stopping Copyleft: Integrating Software License Compliance & SBOMs appeared first on RunSafe Security.
Embedded engineering teams are aware of the risks and looking for tooling that surfaces license risk early in the development pipeline. RunSafe’s license compliance feature addresses this need by detecting licenses in your code and enforcing your organization’s risk profile to prevent the release of affected code. Teams can ship faster, with the permissions they’ve set, and without risking IP.
Software license compliance means following the legal terms attached to every piece of code in your product, including proprietary, open source, or vendor-supplied. When you ship a device with firmware or deploy a software update, you’re accepting the obligations tied to every component inside.
Embedded teams face unique challenges:

Copyleft is a licensing approach that uses copyright law to keep software open. If you distribute a program containing copyleft-licensed code, you’re typically required to release your modifications—and sometimes related components—under the same copyleft license. That reciprocal obligation separates copyleft from permissive licenses.
If a copyleft license is violated, the potential implications include:
The longer violations exist in shipped products, the more devices are affected and the harder remediation becomes.
Without an accurate SBOM and automated license enforcement, it’s difficult to stop copyleft from entering your codebase. That’s why embedded teams need RunSafe’s file-level SBOMs and license compliance, which surface licenses early and then allow you to block or approve them before release based on your specific risk profile.

RunSafe’s license compliance feature gives embedded teams control over licenses to prevent violations before code ships. We combine build-time Software Bill of Materials (SBOM) generation with automated policy enforcement to simplify and standardize the process.

RunSafe lets you define clear licensing policies across your entire organization, and will be adding support for project-level license compliance to allow for more granularity and flexibility in how you configure your rules. Specify which licenses are approved, which are banned, and which require review. Whether you need to block GPL variants, flag AGPL dependencies, or restrict any copyleft terms, RunSafe allows you to set rules that make sense for your organization.
In the RunSafe Security Platform, you’ll see a list of all the licenses in your software detected by RunSafe’s build-time SBOM generator. You can also view a list of common open-source licenses and choose which to allow or deny. By defaulting to the licenses actually present in your software’s SBOM, your organization can focus on dependencies in use without getting bogged down by unnecessary compliance reviews.
This is where RunSafe balances control with practicality. For any license you haven’t explicitly classified (unset licenses), you choose one of two approaches:
Allow by default: New dependencies flow through automatically unless they match your explicitly denied list. This keeps development moving while blocking known copyleft risks.
Deny by default: Any unrecognized license halts the pipeline until you review and approve it. This guarded posture ensures maximum protection as your dependencies evolve.
Once configured, enforcement happens automatically in your CI/CD pipeline. As your CI tool runs your builds, RunSafe generates SBOMs and checks them against your license policy. Pipelines containing denied licenses will fail with clear output in your logs, identifying exactly which licenses triggered the block.
As your team adds new libraries or updates existing ones, newly detected licenses automatically appear in your unset list. Depending on your enforcement posture, they either flow through (if allowed by default) or stop the pipeline for review before releasing code (if denied by default). You can adjust individual license decisions at any time, moving them between allowed and denied as your policy matures.
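As a mental model, the allow-by-default and deny-by-default postures can be expressed in a few lines. This is an illustrative stand-in, not RunSafe’s implementation; the policy fields and license lists are assumptions for the example:

```python
# Illustrative sketch of allow/deny license policy enforcement.
# Not RunSafe's implementation; policy shape and licenses are assumed.

def check_licenses(sbom_licenses, policy):
    """Return the licenses that should fail the pipeline."""
    violations = []
    for lic in sbom_licenses:
        if lic in policy["denied"]:
            violations.append(lic)
        elif lic not in policy["allowed"] and policy["default"] == "deny":
            # Unset license under deny-by-default: halt for review.
            violations.append(lic)
    return violations

policy = {
    "allowed": {"MIT", "Apache-2.0", "BSD-3-Clause"},
    "denied": {"GPL-3.0-only", "AGPL-3.0-only"},
    "default": "allow",  # flip to "deny" for the guarded posture
}

licenses = ["MIT", "GPL-3.0-only", "Zlib"]  # e.g., from a build-time SBOM
print(check_licenses(licenses, policy))  # ['GPL-3.0-only']
```

Under allow-by-default, the unclassified Zlib license flows through; switching `"default"` to `"deny"` would flag it for review as well.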

You can’t enforce license policy if you don’t know what’s in your build. SBOMs solve that, but most SBOM tools fail in C/C++ environments because license data lives in:
Without file-level SBOM accuracy, compliance becomes guesswork. This is where RunSafe differentiates itself. By generating SBOMs at the file level during build-time, RunSafe can accurately capture license information for embedded projects. This then leads to greater confidence in license compliance.
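One way to picture file-level detection is scanning each source file for an SPDX license identifier instead of relying on a package manifest. This is a minimal illustrative sketch; real scanners also match full license texts, COPYING files, and repository metadata:

```python
import re

# Minimal sketch of file-level license detection for C/C++ sources.
# Real tools go further: full-text matching, COPYING files, metadata.
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)")

def detect_license(source_text):
    """Return the SPDX identifier from a file header, or None."""
    match = SPDX_RE.search(source_text)
    return match.group(1) if match else None

header = "/* SPDX-License-Identifier: GPL-2.0-only */\n#include <stdio.h>\n"
print(detect_license(header))  # GPL-2.0-only
```

Because detection happens per file, a single GPL-licensed source file buried in an otherwise permissive tree still surfaces in the SBOM.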
By enforcing license policy at build time and pairing it with accurate SBOMs, you can reduce copyleft risk before it reaches production.
Interested in giving it a try? Sign up for a free trial of the RunSafe Security Platform.
Does dynamic linking avoid copyleft obligations?
Sometimes, but it depends on the license. The LGPL permits dynamic linking with conditions, while the GPL’s position on dynamic linking remains ambiguous. For embedded systems—which overwhelmingly use static linking—copyleft risk is much higher.
How often should SBOMs be generated?
Every build. Automating SBOM generation ensures accuracy as dependencies change.
What tools can prevent GPL or copyleft code from entering my firmware build?
Look for tools that generate accurate SBOMs at build-time and enforce license policy in the CI/CD pipeline. The most effective solutions automatically flag or block high-risk licenses before code is merged or released. RunSafe provides this capability by combining file-level build-time SBOMs with pipeline enforcement for embedded projects.
How do I automatically enforce open-source license policies in CI/CD?
You need pipeline-level enforcement, not manual reviews. Modern tools can apply license rules (allow, deny, or review) and stop risky code from reaching release or merge. RunSafe integrates directly into your CI/CD pipeline, ensuring that disallowed licenses never reach release branches or production firmware.
How can I detect licenses in C/C++ code when there’s no package manifest?
Most scanners depend on manifests and package metadata, which C/C++ projects often lack. Instead, you need file-level detection that reads license headers and repository artifacts. RunSafe’s build-time SBOM generator does exactly this, making license visibility possible even in C/C++ codebases.
How can I block copyleft without slowing down developers?
Choose a tool that supports both “allow-by-default” and “deny-by-default” modes. That allows developers flexibility for a fast flow in most work and strict control when needed. RunSafe supports both, so teams can balance velocity and risk.
How can I automate GPL license compliance for firmware?
Automation requires two layers: (1) license detection via build-time SBOMs, and (2) policy enforcement in CI/CD. When these steps are automated, teams avoid manual review and prevent GPL from slipping into release artifacts. RunSafe delivers both in a single workflow.
How do I enforce open-source license rules in GitHub or GitLab CI?
Use a tool that integrates into your CI/CD pipeline and can block merges or releases based on your set license policies. RunSafe ties directly into GitHub and GitLab CI pipelines so enforcement happens automatically with each build.
The post The Decade Ahead in Aerospace Cybersecurity: AI, Resilience, and Disposable Weapons Systems appeared first on RunSafe Security.
The aerospace and defense sector is entering a new chapter. With networked systems, distributed architectures, and mission-critical connectivity, cybersecurity is as vital as physical shielding.
In a recent discussion on aerospace cybersecurity strategy, Shane Fry, CTO of RunSafe Security, and Patrick Miller, Product Manager at Lynx, discussed how the next five to ten years will reshape how we defend aerospace assets.
What follows are four key trends that highlight where the industry is headed.
Watch the full webinar for more on aerospace cybersecurity here.
Artificial intelligence is emerging as both a revolutionary tool and a potential liability in aerospace cybersecurity. Shane noted how AI is reshaping nearly every aspect of defense systems, from vulnerability detection to operational optimization.
“There’s a lot of really cool research being done in penetration testing and finding vulnerabilities and using AI to assist operators in security,” he said.
AI is now helping engineers and analysts identify weaknesses faster and automate portions of cyber defense previously handled manually. But as Shane pointed out, the same technology that accelerates innovation can also magnify risks.
“One of the things that is a negative,” he warned, “is many of these new capabilities are trained on software that’s not secure.”
Large language models and AI code-generation tools often learn from open-source repositories riddled with known flaws. That means they might produce software that appears sound but hides vulnerabilities—like memory corruption or buffer overflows—deep within the codebase.
“We’re going to see a rise in software vulnerabilities,” Shane predicted, “as more developers use these code-assistants to produce faster code that looks good but may actually have subtle memory corruption vulnerabilities in them.”
For a sector where software directly underpins mission safety and national defense, that’s a sobering reality. The next phase of AI integration, Shane cautioned, may bring turbulence before it delivers real progress.
“The next six months to a year or two years might be really rough,” he admitted, “but ultimately, the progress will be for the overall good.”
One of the most striking shifts Shane highlighted was the rise of low-cost, “disposable” weapon systems, particularly unmanned aerial vehicles (UAVs). In a time where speed and affordability are driving procurement decisions, these assets are designed to complete their mission, but not necessarily to return.
“When we talk about disposable UAVs,” Shane explained, “there’s a lot of interest in having lower-cost solutions that we don’t care if they survive the mission. We just need them to accomplish the mission.”
That philosophy is reshaping how system owners think about design and risk. Yet, Shane cautioned that the push for cheaper, faster production can come at a dangerous price.
“As we strive to cut as much as we can to bring costs down,” he said, “we’ve got to make sure that we’re still doing enough security, enough safety so that we can accomplish the mission.”
In a world where digital compromise can have physical consequences, even a minor software flaw could be catastrophic. “We don’t want to end up in a situation where a UAV flying overhead has a trivial vulnerability that gets exploited, and the drone turns around and bombs an allied target,” Shane said.
That vivid example underscores a growing tension in modern defense programs: how to balance affordability and agility with assurance and control. The Department of Defense’s efforts to modernize its software approval and certification processes for future readiness will be critical.
A core component of that, Shane noted, is “having good, accurate SBOMs and being able to understand what the risk is in your software that you’re shipping.”
Additionally, this will help ensure that even low-cost or disposable systems can be deployed responsibly.
In short, the aerospace industry is entering an era where scale and security must coexist. Disposable systems may not be built to last, but their cybersecurity must endure long enough to protect the mission, the data, and the allies they serve.
Cyber defense of aerospace is not a solo endeavor. Governments, aerospace primes, and vendors will need to align to defend complex systems.
Shane observed that modern systems integrate legacy code, which is becoming increasingly interconnected, heightening risk. To manage this complexity, the industry is leaning into partnerships and certified deployment paths.
For example, Lynx’s secure hypervisor technology, when paired with RunSafe’s memory protection, delivers a layered, modular architecture that strengthens system isolation and resilience in the field.
When discussing future integration of AI, Patrick noted: “Lynx has taken the approach of enabling those safety-focused or non-critical applications to run alongside, but totally separate from, the safety-critical applications and subjects within that same hardware.”
This collaborative, layered approach is one example of working together to reduce overall risk.
Overarching all of these trends is the principle of resilience. In an environment where AI may introduce new vulnerabilities, collaboration expands the ecosystem, geopolitics raises the stakes, and disposable systems proliferate, aerospace defenders must build platforms that can endure, adapt, and recover.
As Shane explained: “You’re going to get the most out of your hardware and your systems by having a more robust and modular system, with security baked in.”
He continued: “Having a modular system lets you get new software, new features, new capabilities onto your platforms faster.”
Patrick noted that the partnership model between Lynx and RunSafe demonstrates what “defense-in-depth” looks like in practice.
“With the Lynx and RunSafe partnership,” Patrick said, “pairing that with RunSafe’s memory protection, you’re able to remove a whole category of exploits you’d otherwise have to defend against.”
The next decade for aerospace cybersecurity will be defined by convergence between AI and assurance, collaboration and software supply chain transparency, pre-emptive design and agile operations.
Building resilience and adopting safety-focused engineering will enable faster innovation without leaving cybersecurity by the wayside.
For more on RunSafe and Lynx’s work in aerospace cybersecurity, read our white paper on “Integrating RunSafe Protect with the LYNX MOSA.ic RTOS.”
The post Defending the Factory Floor: How to Outsmart Attackers in Smart Manufacturing appeared first on RunSafe Security.
That’s the question host Paul Ducklin explored with Joseph M. Saunders, CEO and Founder of RunSafe Security, in an episode of Exploited: The Cyber Truth.
From the limitations of traditional security models to the growing importance of software quality, this conversation revealed why the cybersecurity playbook for industrial automation needs a re-write for today’s threats.
Modern factories are no longer isolated networks of robots and sensors. They’re deeply connected ecosystems, merging OT (Operational Technology) and IT, often with cloud integration and industrial IoT devices.
As Joe put it, “Connected devices bring productivity gains, but also new levels of security consideration.”
That connectivity has blurred the once-clear boundaries between factory floor systems and IT networks. Attackers can now exploit weak spots at every layer—from programmable logic controllers (PLCs) to human-machine interfaces (HMIs), SCADA systems, and even the cloud.
The Purdue Security Model, long a foundation for industrial security, assumes clear segmentation between these layers. But in an era where a single sensor might communicate via Bluetooth, Wi-Fi, or cellular, those boundaries collapse.
Attackers no longer need “write access” to cause harm. Even read-only access—like viewing operational data or production schedules—can provide immense competitive or geopolitical advantage.
Example: Knowing how much of a certain alloy a manufacturer has in stock could reveal supply chain bottlenecks or production delays.
Too many organizations still approach cybersecurity as a checkbox exercise. Standards like IEC 62443, developed by the ISA and IEC, help guide secure industrial automation practices—but they’re only as good as the commitment behind them.
Joe cautioned that “checkbox compliance” misses the point. Security should be viewed as an extension of software quality and operational excellence.
Building security into the software development lifecycle (SDLC)—from coding and testing to patching—is essential. Companies that automate and embed these processes don’t just produce safer software; they create better products, faster.
Many industrial environments still rely on legacy systems with equipment that can’t easily be replaced or updated. These older devices, often running outdated firmware, represent some of the most vulnerable points in a factory’s network.
Joe explained how memory safety protections can extend the secure life of legacy systems without requiring new software agents or hardware upgrades.
RunSafe’s Load-Time Function Randomization provides runtime protection that prevents exploitation even when patches aren’t available—adding security without disrupting operations.
“You can extend the life of legacy systems by applying memory safety protection in a way that doesn’t add software or slow them down,” Joe said.
That’s not just risk mitigation—it’s cost savings, uptime assurance, and long-term resilience.
Securing smart factories also requires a focus on understanding the entire software supply chain. To protect devices, you need to know what software is built into each one.
Manufacturers, suppliers, and customers each play a role in ensuring the integrity of the products they build and deploy.
Joe recommends evaluating partners and suppliers based on:
RunSafe’s recent Medical Device Industry Report showed similar trends. Organizations are starting to reject insecure products altogether, even when they meet performance requirements. That mindset shift is now reaching industrial automation.
The conversation ultimately returned to a core truth: Security isn’t about perfection—it’s about resilience.
When software is developed securely, vulnerabilities are reduced. When runtime protections are added, attackers are denied an easy path to exploit. And when organizations collaborate across the software supply chain, entire industries become more secure.
“Security isn’t a checkbox—it’s a reflection of quality,” Joe reminded listeners.
RunSafe Security protects embedded software across critical infrastructure, delivering automated vulnerability identification and software hardening to defend the software supply chain and critical industrial systems without compromising performance or requiring code rewrites.
Learn more about how we do it in this case study: “Vertiv Enhances Critical Infrastructure Security for Embedded Systems with RunSafe Integration.”
The post AI Is Writing the Next Wave of Software Vulnerabilities — Are We “Vibe Coding” Our Way to a Cyber Crisis? appeared first on RunSafe Security.
For decades, cybersecurity relied on shared visibility into common codebases. When a flaw was found in OpenSSL or Log4j, the community could respond: identify, share, patch, and protect.
AI-generated code breaks that model. Instead of reusing an open source component and complying with its license restrictions, a developer can prompt AI to produce a near-identical rewrite that never uses the exact open source version.
I recently attended SINET New York 2025, joining dozens of CISOs and security leaders to discuss how AI is reshaping our threat landscape. One key concern surfaced repeatedly: Are we vibe coding our way to a crisis?
At the SINET New York event, Tim Brown, VP Security & CISO at SolarWinds, pointed out that with AI coding, we could lose insights into common third-party libraries.
He’s right. If every team builds bespoke code through AI prompts, generating components similar to but distinct from open source libraries, there’s no longer a shared foundation. Vulnerabilities become one-offs: without common components, there is no way to share vulnerability intelligence, and a flaw in your product may also exist in someone else’s without either of you knowing.
The ripple effect is enormous. Without shared components, there’s no community-driven detection, no coordinated patching, and no visibility into risk exposure across the ecosystem. Every organization could be on its own island of unknown code.
Even more concerning, AI doesn’t “understand” secure coding the way experienced engineers do. It generates code based on probabilities and its training data. A known vulnerability could easily reappear in AI-generated code, alongside any new issues.
Veracode’s 2025 GenAI Code Security Report found that “across all models and all tasks, only 55% of generation tasks result in secure code.” That means that “in 45% of the tasks the model introduces a known security flaw into the code.”
For those of us at RunSafe, where we focus on eliminating memory safety vulnerabilities, that statistic is especially concerning. Memory-handling errors — buffer overflows, use-after-free bugs, and heap corruptions — are among the most dangerous software vulnerabilities in history, behind incidents like Heartbleed, URGENT/11, and the ongoing Volt Typhoon campaign.
Now, the same memory errors could appear in countless unseen ways. AI is multiplying risk one line of insecure code at a time.
Nick Kotakis, former SVP and Global Head of Third-Party Risk at Northern Trust Corporation, underscored another emerging problem: signature detection can’t keep up with AI’s ability to obfuscate its code.
Traditional signature-based defenses depend on pattern recognition — identifying threats by their known fingerprints. But AI-generated code mutates endlessly. Each new build can behave differently and conceal new attack vectors.
In this environment, reactive defenses like signature detection or rapid patching simply can’t scale. By the time a signature exists, the exploit may already have evolved.
So how do we protect against vulnerabilities that no one has seen — and may never report?
At RunSafe, we focus on one of the most persistent and damaging categories of software risk: memory safety vulnerabilities. Our goal is to address two of the core challenges introduced by AI-generated code:
By embedding runtime exploit prevention directly into applications and devices, RunSafe prevents the exploitation of memory-based vulnerabilities, including those that are unknown or zero days.
That means even before a patch exists, and even before a vulnerability is discovered, RunSafe Protect keeps code secure whether it’s written by humans, AI, or both.
AI-generated code is here to stay. It has the potential to speed up development, lower costs, and unlock new capabilities that would have taken teams months or years to build manually.
However, when every product’s codebase is unique, traditional defenses — shared vulnerability intelligence, signature detection, and patch cycles — can’t keep up. The diversity that makes AI powerful also makes it unpredictable.
That’s why building secure AI-driven systems requires a new mindset that assumes vulnerabilities will exist and designs in resilience from the start. Whether it’s runtime protection, secure coding practices, or proactive monitoring, security must evolve alongside AI.
At RunSafe, we’re focused on one critical piece of that puzzle: protecting software from memory-based exploits before they can be weaponized. As AI continues to redefine how we write code, it’s our responsibility to redefine how we protect it.
Learn more about Protect, RunSafe’s code protection solution built to defend software at runtime against both known and unknown vulnerabilities long after the last patch is available.
The post AI Is Writing the Next Wave of Software Vulnerabilities — Are We “Vibe Coding” Our Way to a Cyber Crisis? appeared first on RunSafe Security.
In healthcare cybersecurity, one of the biggest challenges is protecting medical devices that are difficult to patch and written in memory-unsafe languages. Unlike web applications or mobile software, which can be updated overnight, medical devices are often built to last 10–15 years and are designed for reliability and patient safety—not constant code revisions.
Yet cyber threats are growing, and FDA regulations are tightening. Manufacturers and healthcare providers are now under pressure to secure legacy systems while keeping patients safe. The question is: how can this be done without rewriting a single line of code?
This was the focus of a recent episode of Exploited: The Cyber Truth, featuring Phil Englert (VP of Medical Device Security at Health-ISAC) and Joseph M. Saunders (Founder & CEO of RunSafe Security). Their insights offer a practical roadmap that blends compensating controls, regulatory awareness, and industry collaboration.
Medical devices weren’t designed with today’s cybersecurity challenges in mind. Hospitals rely on equipment that often stays in service for a decade or more, from MRI machines to pacemakers, because replacing them isn’t financially or operationally feasible. These devices also run on limited computing resources and cannot tolerate downtime, making traditional patching nearly impossible.
As Englert explained, “We’ve painted a target on our back” by connecting these devices to networks for efficiency and data sharing but without always providing the necessary safeguards. That combination of longevity, limited resources, and operational necessity makes securing these devices a unique and ongoing challenge.
When patching or rewriting isn’t an option, the focus shifts to compensating controls, or ways to secure devices without touching their software, as well as opportunities for code protection.
These approaches are not one-size-fits-all. The strategy for an implanted pacemaker is very different from that for a helium-filled MRI machine. But the principle remains: if you can’t harden the device itself, you must harden its environment.
Another theme from the discussion was the rise of Software Bills of Materials (SBOMs). Much like a nutrition label on food, SBOMs give visibility into the “ingredients” inside a medical device. This transparency allows healthcare providers to quickly assess whether known vulnerabilities, like Log4j, impact their devices, hold manufacturers accountable, and make smarter, risk-based decisions about deployment.
As Saunders noted, SBOMs are most valuable when generated close to the point of software production, ensuring accuracy and reliability.
For years, FDA cybersecurity guidance was considered “best practice.” That changed in December 2022 when Congress gave the FDA statutory authority over device cybersecurity under the PATCH Act. By March 2023, manufacturers were required to follow a secure software development lifecycle, account for the full environment in which devices operate, and maintain controls and documentation throughout the device’s lifespan.
This represents a major shift. Compliance is now enforceable, and the focus has expanded from protecting data to ensuring patient safety across interconnected healthcare ecosystems.
Cybersecurity lapses aren’t abstract IT problems—they have real consequences for patient outcomes. Studies show that clinical performance can decline for up to 18 months following a hospital breach, as resources are diverted to recovery efforts. The “blast radius” often extends beyond one hospital, affecting neighboring facilities that absorb overflow patients.
Among organizations that experienced cybersecurity incidents affecting medical devices, 75% said the incidents caused at least a moderate impact on patient care, 46% had to fall back on manual processes to maintain operations, and 24% had to transfer patients to other facilities.
As Saunders emphasized, “Cybersecurity is an enabler of patient safety.” Even the most advanced medical care can be undermined without strong cybersecurity practices in place.
Perhaps the most actionable takeaway is that no single organization can address these challenges alone. Manufacturers, healthcare providers, regulators, and third-party service organizations all have roles to play.
Practical steps include:
Englert summed it up best: “80% of anything is better than 100% of nothing. Start where you can with the resources you have.”
For more insights on medical device cybersecurity, download RunSafe’s 2025 Medical Device Cybersecurity Index.
The post Beyond Patching: How to Secure Medical Devices and Meet FDA Compliance Without Changing Code appeared first on RunSafe Security.
For decades, aviation has operated under a simple but powerful principle: safety first. The industry’s rigorous certification standards have created some of the world’s most reliable systems, with aircraft designed to account for every conceivable mechanical failure, weather condition, and human error.
But that very mindset—safety above all—has created a blind spot. While the aviation industry perfected flight safety, it overlooked an equally urgent priority: cybersecurity. Modern aircraft are hyper-connected flying computers, linked to ground networks, satellite systems, and the internet itself.
In their 2025 report, the Cyberspace Solarium Commission offered a warning. The aviation industry is facing escalating threats from ransomware attacks, GPS spoofing, and sophisticated cyber intrusions.
A new aviation cybersecurity strategy is now mission-critical for protecting passengers, operations, and national security.
The aviation industry relies on DO-178C and similar safety standards, which focus on ensuring that flights land safely despite system failures, hardware malfunctions, or software bugs. These standards have been remarkably successful, as commercial aviation remains one of the safest forms of transportation.
However, these safety protocols were designed for an era when the primary threats were mechanical failures and human error, not malicious attacks. DO-178C accounts for everything that should be on an aircraft, but it doesn’t address threats from sources that shouldn’t be there, like hackers infiltrating flight systems through network connections.
As DO-356, the aviation industry’s newer security standard, explicitly states: “Safety and security are not the same thing; however, there is a strong overlap.” The document acknowledges what many industry professionals are only now realizing: a security breach can quickly become a safety issue. If flight systems are designed with safety in mind, but not security, they are not truly safe. A breach of security will cause a violation of safety.
Recognition of the problem is the first step toward solving it. The Federal Aviation Administration proposed new cybersecurity requirements in August 2024 that would make cyber protection a standard part of airworthiness for newly built airplanes and equipment.
Additionally, one of the recommendations stemming from the CSC report is that “The FAA and TSA should harmonize cybersecurity regulatory requirements for the aviation subsector.” This includes referencing existing NIST frameworks and adding guidelines for supply chain security unique to the needs of the aviation industry.
Compliance with regulations, however, is far from simple and comes at a significant cost, particularly for legacy or long-lived systems. Take the F-35, for example. Its prototype and design work began in the 1990s, well before today’s cybersecurity threats had taken shape. While it incorporates cutting-edge technology, much of its foundational architecture was conceived in a pre-cyber era. These systems must now be retrofitted or augmented to meet security measures within the constraints of rigid defense budgets that often make comprehensive overhauls impossible.
Where should the aviation industry invest its time and dollars? The first step is elevating software security to the same level of importance as flight safety. A July 2024 study by SecurityScorecard, a cybersecurity firm, found that the aviation industry overall scores a “B” on cybersecurity and that aviation-specific software and IT vendors scored the lowest in cybersecurity readiness across industries.
Improving this score requires implementing Software Bills of Materials (SBOMs) to track every software component in aviation systems and prioritizing vulnerability management with the same rigor as mechanical maintenance.
For older systems, runtime code protection technologies can strengthen cybersecurity without requiring full code rewrites, bridging the gap between legacy architecture and modern security standards.
Also on the horizon are security solutions that achieve safety-of-flight certifiability, making security a far easier and more natural addition to highly regulated aircraft.
Aviation’s cybersecurity blind spot didn’t develop overnight, and isn’t easily resolved. However, the industry’s legendary commitment to safety provides a strong foundation for building equivalent security standards. The same methodical, evidence-based approach that made flying safer than driving can be applied to making it more secure.
The industry that taught the world how to fly safely now has the opportunity to show how to fly securely as well.
Read more about how RunSafe supports an overall aviation cybersecurity strategy in our white paper: “RunSafe Security Safety of Flight Approach.”
The post How Aviation Cybersecurity Strategy Became the Industry’s Biggest Blind Spot appeared first on RunSafe Security.
Unlike modern languages with centralized package managers, standardized toolchains, and strict conventions, C/C++ operates like the Wild West. Developers can link in dependencies in a variety of ways, from copy-paste to remote fetches during compilation. When you need to understand what your software is made of for compliance and vulnerability management, things get sticky fast.
Software Bills of Materials (SBOMs) have become a best practice for managing software supply chain risk as well as a regulatory requirement. But for C/C++, generating a complete and accurate SBOM is notoriously difficult due to the lack of a package manager.
How can developers report on everything that goes into a C/C++ build? In this article, I look at how C/C++ dependency chaos undermines SBOM accuracy and why a build-time, file-based approach is often the only way to generate trustworthy SBOMs for this legacy language ecosystem.
Modern ecosystems like Python, JavaScript, and Rust have centralized package registries (e.g., PyPI, npm, crates.io) and well-defined manifest files (e.g., requirements.txt, package.json, Cargo.toml). This allows SBOM tools to easily detect:
C/C++ has none of this.
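By contrast, reading dependencies out of a manifest-driven ecosystem is nearly trivial. A minimal sketch, assuming a requirements.txt-style manifest (the file contents and parsing rules here are invented for illustration):

```python
# Minimal sketch: in package-managed ecosystems, names and versions are
# explicit, so an SBOM tool can read dependencies straight from one file.
def parse_requirements(text: str) -> dict:
    deps = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        name, _, version = line.partition("==")
        deps[name.strip()] = version.strip() or "unpinned"
    return deps

manifest = """\
# hypothetical project manifest
requests==2.31.0
cryptography==42.0.5
"""
print(parse_requirements(manifest))  # {'requests': '2.31.0', 'cryptography': '42.0.5'}
```

Nothing comparable exists for a typical C/C++ source tree, which is exactly the gap the rest of this article walks through.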
Instead, developers rely on informal, ad-hoc methods that leave no clear metadata trail, making it nearly impossible for SBOM tools to determine what’s in your codebase, let alone which components are vulnerable or out of date.
Here are several examples I regularly encounter:
These challenges result in manual dependency tracking and versioning difficulties, creating security concerns when dependencies are overlooked or forgotten. If your SBOM generator can’t capture these dependencies, you lose visibility into your codebase’s composition, making it nearly impossible to accurately identify and address security vulnerabilities.

Despite being widely disliked, Git Submodules are one of the most popular dependency management approaches in C/C++. They provide a way to embed external repositories while maintaining some level of version control.
The challenge with submodules is that they frequently reference Git commit hashes rather than semantic versions. Instead of depending on “library v1.2.0,” you’re depending on commit hash ‘a1b2c3d4’. This makes it difficult to map dependencies to Common Platform Enumeration (CPE) identifiers or vulnerability databases that expect product versions, not version control hashes.
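To see why, consider what `git submodule status` actually exposes. A sketch that parses its output (illustrative; real tooling would also consult `.gitmodules`) shows that a commit hash is all you get, with no product version to feed a CVE lookup:

```python
import re

# Parse "git submodule status" output lines, which look like:
#   "<status-char><40-hex-sha> <path> (<describe>)"
# Note what is missing: no product name and no semantic version, so nothing
# here maps cleanly to a CPE identifier or a CVE database entry.
STATUS_LINE = re.compile(r"^[ +\-U]?([0-9a-f]{40})\s+(\S+)")

def parse_submodule_status(output: str) -> list:
    entries = []
    for line in output.splitlines():
        m = STATUS_LINE.match(line)
        if m:
            entries.append({"commit": m.group(1), "path": m.group(2), "version": None})
    return entries

sample = " a1b2c3d4" + "e" * 32 + " third_party/somelib (heads/main)"
print(parse_submodule_status(sample))
```

The best an SBOM tool can do with this is record the hash and the path; "version" stays empty unless a human or a heuristic fills it in.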
Despite this limitation, Git Submodules are probably one of the easiest dependency approaches to work with from an SBOM perspective, which says something about how challenging the alternatives can be.
SBOM Challenge: Git commit hashes don’t map to CVE databases.
Result: Even if a tool includes the submodule in your SBOM, it won’t be able to determine if it’s vulnerable.
OpenCV, one of the most popular computer vision libraries, exemplifies the challenges of C/C++ dependency management. OpenCV’s build tree includes a “3rdparty/” directory full of other open source libraries. These are copied into the codebase at a specific moment in time, without any version tracking or external linkage.
Here’s an actual SBOM entry for a file embedded via OpenCV:
{
  "name": "predictor_enc.c",
  "authors": [
    {
      "name": "Google Inc"
    }
  ],
  <... cut by me for space in message ...>
  "copyright": "Copyright 2016 Google Inc",
  "properties": [
    {
      "name": "filePath",
      "value": "/plugfest/opencv/3rdparty/libwebp/src/enc/predictor_enc.c"
    }
  ]
},
Upon investigating the repository, you can confirm that the “3rdparty/” dependencies are included manually rather than through Git submodules or CMake’s built-in `FetchContent` mechanism. As a result, this open source software would be missed and not reported as a dependency.
SBOM Challenge: No traceable source, version, or update mechanism.
Result: Traditional SBOM tools miss these entirely because they look for package manifests, not files buried in a subdirectory.
Mongoose, a popular embedded web server, takes an even more direct approach. Their official documentation instructs users to copy exactly two files—mongoose.c and mongoose.h—anywhere in their codebase.
This approach creates several challenges:
SBOM Challenge: Completely invisible unless you trace every file.
Result: These dependencies blend in with the rest of your source tree. Unless your SBOM tool analyzes each compiled file and scans license headers, you’ll miss them completely.
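The header scan itself is conceptually simple, even if production tools match far more patterns. A sketch of the idea (illustrative only, not RunSafe's implementation):

```python
import re

# Illustrative sketch: flag a source file whose header carries a third-party
# copyright notice, a strong hint that it was copy-pasted in rather than
# written in-house. Real scanners match many more license/copyright patterns.
COPYRIGHT = re.compile(
    r"Copyright\s+(?:\(c\)\s*)?(\d{4})(?:\s*[-–]\s*\d{4})?[,\s]+(.+)",
    re.IGNORECASE,
)

def scan_header(text: str, max_lines: int = 30) -> dict:
    """Return {'year', 'holder'} from the first copyright notice found, else {}."""
    for line in text.splitlines()[:max_lines]:
        m = COPYRIGHT.search(line)
        if m:
            return {"year": m.group(1), "holder": m.group(2).strip(" */")}
    return {}

# The libwebp file shown in the earlier OpenCV SBOM excerpt carries exactly
# this kind of header:
print(scan_header("// Copyright 2016 Google Inc. All Rights Reserved."))
```

Run over every file that reaches the compiler, a scan like this at least surfaces candidates for vendored code, even when no manifest ever mentioned them.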
SQLite demonstrates the most invisible form of dependency management. Some build systems will fetch SQLite source code directly from the web during compilation using commands like wget or curl. This dependency exists nowhere in your source code as it only appears during the build process.
SBOM Challenge: Only exists at compile time.
Result: A static SBOM generator has no way to know the file was downloaded unless it observes the build process in real time.
These real-world examples illustrate why traditional package-based SBOM generation fails for C/C++. When dependencies can be copied directly into source trees, embedded as Git Submodules, fetched dynamically during builds, or integrated through copy-paste instructions, they are all too easy to miss.
To overcome these challenges, SBOM tools must watch the build process itself and not just analyze the source code or look for packages. A file-based, build-time SBOM generator tracks every file that is compiled, linked, or fetched, and extracts metadata like:
Every file used in a build gets recorded, providing visibility into the actual composition of software. That visibility leads to better vulnerability identification, less software supply chain risk, and compliance with SBOM regulations.
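The recording step itself is straightforward once you can observe the build. A minimal sketch (illustrative, not RunSafe's implementation) that turns a list of observed files into file-level component entries, using the CycloneDX-style fields shown in the earlier SBOM excerpt:

```python
import hashlib
import time
from pathlib import Path

# Given the set of files observed during a build (compiled, linked, or
# fetched), emit a file-level component entry for each. The SHA-256 hash
# lets the entry be matched later even though no version string exists.
def file_component(path: Path) -> dict:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "type": "file",
        "name": path.name,
        "hashes": [{"alg": "SHA-256", "content": digest}],
        "properties": [{"name": "filePath", "value": str(path)}],
    }

def build_time_sbom(observed_files: list) -> dict:
    return {
        "bomFormat": "CycloneDX",
        "metadata": {"timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())},
        "components": [file_component(p) for p in observed_files],
    }
```

Observation is the hard part: in practice it means hooking compiler and linker invocations (or tracing the build's file accesses) so that even a `wget`-ed SQLite source file shows up in `observed_files`.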
Having built a C/C++-specific SBOM generator at RunSafe, I’ve learned that you can’t force C/C++ dependency management into the expected package manager approach. And you shouldn’t try.
C/C++ development is complex and full of legacy habits that defy modern package management. What we need are tools that support the embedded C/C++ world. When we try to force this rich, complex, legacy ecosystem into modern packaging paradigms, we lose a lot of critical information. RunSafe’s build-time, file-based approach aims to capture that missing information.
The Wild West of C/C++ development isn’t going away. But with the right tools and approaches, we can bring order to the chaos without losing the flexibility that makes C/C++ so powerful in the first place.
What unconventional build systems have you encountered in your C/C++ projects? Share your stories. The more we understand the chaos, the better we can support accurate, secure, and compliant SBOM generation.
The post The Wild West of C/C++ Development & What It Means for SBOM Generation appeared first on RunSafe Security.
In August 2025, the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Homeland Security (DHS) released a new draft of the Minimum Elements for a Software Bill of Materials (SBOM). It’s the first major revision since 2021, when the National Telecommunications and Information Administration (NTIA) outlined a simple baseline of just seven fields.
The new draft is a genuine turning point. It raises the bar for SBOMs, pushing them beyond check-the-box compliance and closer to being a real tool for managing risk. But while these changes are a big step forward, they still leave embedded systems—built largely in C and C++—struggling to fit into a framework designed with modern package-managed software in mind.
As Kelli Schwalm, SBOM Director at RunSafe Security, puts it: “The recommended 2025 SBOM minimum elements show how far the industry has come since 2021. We’ve moved from a theoretical checklist to practical requirements that reflect how SBOMs are actually used in the real world.”
However, Kelli warns, the implicit assumption in the draft recommendations is that a software component equals a package. For embedded systems, that’s not the case. Without explicit recognition of file-based SBOMs, we risk leaving critical systems out of the picture.
An SBOM is an ingredients list for software. To be useful, every SBOM must include certain key data fields, or the minimum elements.
In 2021, the NTIA’s list (name, version, identifier, timestamp) was a good starting point, but as software development has evolved, it now looks far too basic.
To reflect the reality of software development today, the 2025 draft adds fields like:
The updates reflect how organizations are actually using SBOMs today to manage software supply chain risk.
The 2021 requirements made it possible to deliver a barebones SBOM with nothing more than an Excel spreadsheet. Those quickly went stale and carried little value. The new requirements—particularly hashes, license data, and generation context—make that shortcut nearly impossible, forcing a move toward automated SBOM generation.
As Kelli explained: “By requiring more fields—hashes, authorship, generation context—CISA is making it almost impossible to get by with an outdated Excel spreadsheet. These new elements push the industry toward automated, accurate SBOM generation, which is the only way to keep pace with today’s threat environment.”
License information is now a minimum requirement. Licensing is not just a compliance issue: license restrictions can directly impact how software can be used or redistributed. By including it, CISA and DHS are addressing a real-world gap that often goes unnoticed until it becomes a legal or operational problem.
“Licensing impacts how organizations use and share software,” Kelli said. “Ignoring it in SBOMs left a blind spot in the software supply chain, and closing that gap is long overdue.”
Including generation context—stating where in the lifecycle the SBOM was created—is a small addition with outsized impact. For example, “Build-time SBOMs give the clearest and most accurate view of software,” Kelli said. “This requirement could pressure suppliers to deliver them, raising the quality bar across the ecosystem.”
By making generation context mandatory, the draft puts pressure on suppliers to produce higher-quality SBOMs and discourages binary-only or “black box” approaches.
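Put side by side, the difference between a 2021-baseline entry and one meeting the 2025 draft is easy to see. A sketch (field names are CycloneDX-style and chosen for illustration; consult the draft itself for the authoritative list):

```python
# 2021 NTIA baseline: a handful of fields was enough.
entry_2021 = {
    "name": "libwebp",
    "version": "1.3.2",
    "identifier": "pkg:generic/libwebp@1.3.2",
    "timestamp": "2025-08-01T12:00:00Z",
}

# The 2025 draft adds, among others: cryptographic hashes, license data,
# authorship, and the lifecycle phase in which the SBOM was generated.
entry_2025 = dict(
    entry_2021,
    hashes=[{"alg": "SHA-256", "content": "..."}],   # placeholder digest
    licenses=[{"license": {"id": "BSD-3-Clause"}}],
    authors=[{"name": "Google Inc"}],
    generationContext="build",
)

# The 2021 fields alone can't prove a binary matches its SBOM; the hash can.
print(sorted(set(entry_2025) - set(entry_2021)))
# ['authors', 'generationContext', 'hashes', 'licenses']
```

The spreadsheet-era shortcut dies here: none of the added fields can be maintained by hand at the pace software actually changes.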
Compare SBOM generation approaches.
RunSafe sees the 2025 SBOM minimum elements as a much-needed course correction. They reflect four years of learning, discourage paper-thin compliance practices, and push the industry toward automated, accurate, and timely SBOMs.
But gaps remain, especially for embedded systems. Most federal guidance still implicitly equates “software component” with “package.” For modern applications built with package managers, that’s fine. For embedded devices, it’s a problem.
“Most federal guidance still assumes software components come neatly packaged, but embedded software in C and C++ rarely works that way,” Kelli said. “File-level SBOMs are harder to generate, but they’re also where you get the most precise vulnerability data.”
That precision matters most in embedded systems—the software inside medical devices, critical infrastructure, and defense technology—where false positives waste time and false negatives risk lives.
“If we don’t account for embedded use cases, we risk leaving out some of the most critical systems,” Kelli said.
RunSafe believes the draft’s generalized fields are a step in the right direction, but federal guidance should go further. It should explicitly recognize that SBOMs can be generated at the file level, not just at the package level, and that embedded contexts demand this granularity.
The 2025 SBOM minimum elements draft is a milestone. It raises expectations, improves accuracy, and pressures suppliers to move beyond token compliance. That’s progress.
But for SBOMs to fulfill their promise across the entire software ecosystem, we must ensure embedded systems are not left behind. File-level SBOMs are essential for securing the most critical software our society relies on.
At RunSafe, we applaud the direction of the new guidance and will continue to advocate for embedded-first thinking in SBOM guidance. The security of the software supply chain depends on it.
Learn more about RunSafe’s approach to SBOM generation and software supply chain security. Download our white paper or view our SBOM tool comparison.
The post The 2025 SBOM Minimum Elements: A Turning Point But Embedded Systems Still Risk Being Left Behind appeared first on RunSafe Security.
In a recent episode of Exploited: The Cyber Truth, RunSafe Security CEO Joe Saunders joined Gabriel Gonzalez of IOActive to discuss the real-world vulnerabilities threatening connected cars and, more importantly, how the industry can build resilience from the ground up. Their insights shed light on both the dangers and the opportunities shaping the future of mobility.
As Joe puts it: “Vehicles are loaded with software. If you think about it, they are software systems with wheels as opposed to cars with individual components.”
Consider the humble telematics unit, the “black box” that connects vehicles to cellular networks. Gabriel Gonzalez and his team discovered that insecure MQTT configurations in fleet vehicles could let attackers intercept messages, track cars, and even take remote control.
“They found that they could actually fully control the car. It could unlock the car and get all the messages from the car,” Gabriel explained about the vulnerability his team discovered.
It’s a chilling reminder that an entire fleet could be compromised by one misconfigured server.
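The class of misconfiguration described here can often be caught with even a crude configuration audit. A sketch (the config keys below are invented for this example, not drawn from IOActive's research):

```python
# Hypothetical fleet-telematics MQTT configuration audit. Illustrative only:
# the keys and policy checks are invented for the example.
def audit_mqtt_config(cfg: dict) -> list:
    findings = []
    if not cfg.get("tls"):
        findings.append("plaintext broker traffic: messages can be intercepted")
    if not (cfg.get("username") and cfg.get("password")):
        findings.append("anonymous access: anyone can subscribe or publish")
    if cfg.get("acl") in (None, "allow-all"):
        findings.append("no per-vehicle topic ACL: one client can see the whole fleet")
    return findings

insecure = {"host": "mqtt.fleet.example", "port": 1883, "tls": False, "acl": "allow-all"}
for finding in audit_mqtt_config(insecure):
    print("FINDING:", finding)
```

Checks this simple would have flagged all three properties of the fleet deployment Gonzalez's team exploited: unencrypted transport, weak authentication, and no per-vehicle isolation.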
This is a technical flaw and a business risk. For logistics companies, mobility providers, or public safety fleets, a cyber incident could mean service disruption, financial loss, and reputational damage.
Today’s vehicles contain dozens of applications, from ECUs to infotainment systems and advanced driver assistance modules. “There are hundreds of millions of lines of code now on automobiles and vehicles today. There is a lot of attack surface and therefore a lot of opportunity for things to go awry,” Joe noted.
As Gabriel points out: “In a taxi or similar autonomous vehicles, you don’t know what the previous passenger did to the vehicle.” Even simple actions like moving a seat could become dangerous if controlled by an attacker during driving.
Software supply chain complexity adds another layer of risk. “In automotive, there are different types of companies. Some of them are just integrators. They don’t even own the code for the ECUs,” Gabriel explained, highlighting how vulnerabilities can originate far from the final vehicle manufacturer.
It’s not enough to patch vulnerabilities after the fact. Security must be part of the design of modern vehicles, not an afterthought.
So what’s the path forward? Both Joe and Gabriel emphasized one guiding principle: security-by-design.
Key strategies include:
“Building security into those components prior to shipping can certainly reduce the exposure and the problem that you would suffer from having to patch components,” Joe emphasized. “Build-in security that protects the device at runtime, I think, ultimately is a good approach.”
One of the most encouraging shifts in the industry is cultural. Fifteen years ago, researchers were often ignored—or worse, threatened—for disclosing vulnerabilities. Today, manufacturers increasingly partner with researchers and CERTs to responsibly patch flaws.
“Maybe 15 years ago when you submitted a vulnerability, companies didn’t even know what we were talking about. Some companies were thinking that we were trying to get money out of them,” Gabriel recalled. Today’s reality is starkly different: “Nowadays, all the other certs and all these entities, they help with the process, especially with companies that are not well known and they are not too large.”
From DEF CON’s Car Hacking Village to Pwn2Own competitions, automakers now recognize that transparency and collaboration lead to stronger defenses. In fact, inviting researchers to “break” cars in controlled environments is becoming a badge of maturity, not weakness.
The race toward autonomy isn’t just about AI, sensors, or customer experience—it’s about trust. Without cybersecurity, the risks could outweigh the rewards. RunSafe Security’s 2025 Connected Car Cyber Safety & Security Index supports this. Consumers are more aware of cybersecurity risks than ever before. 70% of drivers said they would consider buying an older, less connected car just to reduce cyber risk.
For industry leaders, that means:
If we get this right, autonomous vehicles could transform mobility in ways we’ve only begun to imagine. If we get it wrong, the consequences will be measured in more than just software bugs—they’ll affect safety, privacy, and public confidence.
At RunSafe Security, we help OEMs and suppliers build resilience from the inside out by hardening software and reducing exploitability without impacting performance.
Learn more in our white paper: Driving Security: Safeguarding the Future of Automotive Software.
The post Why Securing Autonomous Vehicles Must Start Now appeared first on RunSafe Security.
As reported by healthcare executives across the U.S., UK, and Germany in RunSafe Security’s 2025 Medical Device Cybersecurity Index, the most critical vulnerabilities in medical devices fall into seven categories: malware infections, network intrusions, ransomware targeting device operations, remote access exploitation, supply chain compromises, vendor-identified vulnerabilities, and data exfiltration.
These are the weaknesses shaping the medical device cybersecurity landscape in 2025.

Malware remains the most widespread vulnerability in medical devices, as reported by healthcare leaders. 51% listed malware infections requiring device quarantine as the most significant medical device cybersecurity incident at their organization.
Often, malware infections are targeted campaigns that force organizations to quarantine devices and disconnect systems. For example, malware can force radiology departments to take entire imaging systems offline to prevent spread, leading to delays in diagnostics. In some cases, malware has been used to wipe device firmware or corrupt system files, requiring full reinstallation before devices can be put back into service.
The consequence is not just downtime but cascading disruption—if a key device type, such as ventilators or pumps, is quarantined, entire treatment processes can be delayed.
Nearly half of organizations experienced intrusions into networks hosting medical devices. Network intrusions targeting medical devices are among the most severe forms of attack, as they often remain undetected for extended periods. Attackers can gain unauthorized access through poorly segmented networks, default credentials, or outdated communication protocols.
An example is when adversaries pivot from compromised IT systems into clinical device networks, gaining visibility into—and sometimes control over—networked equipment such as monitoring systems or infusion pumps. Intruders may then install backdoors, capture sensitive data in transit, or manipulate device functions.
Unlike IT breaches focused on stealing files, network intrusions into medical devices often create silent footholds that persist until they are specifically identified and removed.
More than one-third of organizations reported ransomware specifically disrupting device operations. Unlike traditional ransomware, these campaigns target availability rather than just data.
For example, ransomware designed to target imaging systems can lock operators out of MRI or CT machines, effectively halting diagnostic capabilities until ransom demands are met. Similarly, ransomware targeting infusion pumps or surgical robots can force organizations to suspend treatment procedures.
The distinguishing factor here is intent: attackers understand that the availability of these devices is essential, making disruption itself the pressure point rather than data encryption alone.
Remote access is vital for servicing, diagnostics, and software updates—but it has also become a primary entry point for attackers. Nearly three in ten organizations reported that remote access was exploited to compromise devices.
This typically occurs when attackers identify unsecured remote desktop sessions, weak VPN configurations, or vendor accounts with excessive privileges. Once inside, adversaries can move laterally across device networks or alter system settings.
A common scenario is the exploitation of default or reused credentials for remote maintenance tools, which grants attackers the same level of control as authorized technicians.
Software supply chain attacks introduce vulnerabilities long before devices are deployed in hospitals. One in four organizations experienced compromises traced back to third-party software, libraries, or hardware components embedded in their devices.
A well-known example outside healthcare is the SolarWinds compromise, but similar attacks in the medical sector can introduce malicious code into firmware or software updates distributed by a trusted vendor. When those updates are applied, every customer organization inherits the vulnerability.
Due to their scale, software supply chain compromises can impact thousands of devices across multiple healthcare systems simultaneously, making them particularly challenging to contain.
Nearly a quarter of organizations faced critical vulnerabilities identified and disclosed by their device vendors. While transparency is essential, patching medical devices poses unique challenges compared to traditional IT.
Devices often require downtime for testing and validation before updates can be applied, and in regulated environments, some patches may trigger re-certification processes. For example, if an imaging device firmware update conflicts with existing calibration protocols, the device must be retested before it can be returned to service.
This means that even when fixes are available, the process of applying them can be slow and disruptive, leaving exploitable windows open for attackers.
Data theft remains a significant issue, with nearly one in four organizations reporting exfiltration from medical devices. These devices often process highly sensitive patient information, including imaging results, diagnostic data, and treatment histories.
For example, compromised cardiology devices can leak continuous heart monitoring data, or imaging systems can be used to extract thousands of patient scans. In some attacks, stolen data is packaged and sold on underground markets, where medical records often fetch higher prices than financial information.
As devices become more interconnected and data-rich, they expand the potential attack surface for adversaries seeking to capture and monetize sensitive healthcare information.
Download our guide to securing medical device software throughout its lifecycle, from development through deployment.
Attackers are going after the backbone of patient care. Cybercriminals are successfully targeting the very systems healthcare providers depend on most for patient diagnosis, treatment, and monitoring.

The 2025 landscape makes one thing clear: medical devices are now at the center of the cybersecurity conversation. From malware and ransomware to supply chain compromises, attackers are finding multiple pathways to exploit weaknesses that directly affect device reliability and trust.
For manufacturers, the implications are significant. Healthcare providers, regulators, and procurement teams are scrutinizing device cybersecurity more closely than ever. Vulnerabilities are no longer viewed as isolated technical flaws but are seen as risks to adoption, compliance, and long-term market success.
Understanding these seven critical vulnerabilities provides a roadmap for where design choices, testing protocols, and security investments will have the greatest impact. Manufacturers that take these threats seriously will not only strengthen their products but also differentiate themselves in a market where resilience is becoming a baseline requirement.
The insights in this post are based on RunSafe Security’s 2025 Medical Device Cybersecurity Index, a comprehensive survey of healthcare decision-makers on medical device cybersecurity. For manufacturers, it’s a clear signal of where to focus security efforts and how to meet the expectations of regulators and healthcare buyers.
Explore the full report to see the data, trends, and guidance shaping the future of secure medical devices.
The post The Top 7 Medical Device Vulnerabilities of 2025 appeared first on RunSafe Security.
RunSafe Security’s new 2025 Connected Car Cyber Safety & Security Index reveals that consumers are more aware of cybersecurity risks than ever before, and they’re ready to make buying decisions based on how automakers respond.
For years, automotive cybersecurity was treated as a technical issue best left to engineers. Today, it’s become a mainstream consumer concern. In our survey of 2,000 connected car owners across the U.S., the UK, and Germany, 65% of drivers believe remote hacking of a vehicle is possible, yet only 19% feel “very confident” that their car is protected.

That confidence gap is profound. Drivers perceive their vehicles as more vulnerable than other connected devices, such as smartphones, which receive regular security updates. And they’re right to be concerned. Recent years have seen everything from mass remote recalls to exploits that allowed researchers to take control of vehicles from miles away.
One of the most striking findings is that drivers now view connected car security as a matter of life and death. An overwhelming 79% say protecting their physical safety from cyberattacks is more important than safeguarding the personal data inside their cars.
Unlike traditional cybersecurity breaches that expose sensitive data, automotive hacks can directly compromise safety-critical systems like steering, braking, and acceleration. Consumers understand this risk and expect automakers to treat it with the seriousness it deserves.
Modern vehicles aren’t built by a single company; they’re the product of a complex ecosystem of suppliers. Our survey found 77% of drivers recognize third-party components as cybersecurity risks, and 83% want transparency about software origins.

This demand for disclosure puts new pressure on automakers. Consumers don’t want vague assurances about safety and security. They want to know what’s inside their vehicles, where it came from, and how it’s being protected.
Perhaps the most important finding of all: cybersecurity now has the power to make—or break—a sale. Eighty-seven percent of consumers say strong protections influence their buying decisions, with 35% willing to pay premium prices for enhanced security.

That’s a stunning shift in the way drivers view their cars. Security has moved from a behind-the-scenes technical feature to a frontline differentiator, on par with performance, comfort, and fuel economy.
Automakers that ignore this reality risk losing customers to more security-conscious competitors—or worse, driving them toward older, less profitable vehicles. In fact, 70% of drivers said they would consider buying an older, less connected car just to reduce cyber risk.
The 2025 Connected Car Cyber Safety & Security Index shows that automotive cybersecurity is a business imperative that directly impacts brand loyalty, market share, and revenue potential.
The message from consumers is clear: build security in, be transparent about software supply chains, and treat cybersecurity as seriously as safety. Those who act now will gain a durable competitive edge.
Download the full 2025 Connected Car Cyber Safety & Security Index to see the complete findings and insights for automakers and suppliers.
The post Connected Cars, Connected Risks: Automotive Cybersecurity Is in High Demand appeared first on RunSafe Security.
The regulatory landscape for product security has fundamentally shifted. What was once a “nice-to-have” consideration has become mandatory compliance across industries, with cybersecurity now sitting at the center of product development, risk management, and go-to-market strategies.
Product leaders today face mounting pressure from multiple regulations—the EU Cyber Resilience Act (CRA), FDA cybersecurity requirements, and a growing list of industry-specific mandates—while still needing to maintain innovation speed and profitability. The stakes couldn’t be higher: non-compliance risks include fines up to €15 million or 2.5% of global turnover under the CRA, market access restrictions, and potentially devastating reputational damage.
But here’s the critical insight that forward-thinking product leaders are discovering: these regulations don’t have to be a burden. When approached strategically, regulatory compliance can become a competitive differentiator that strengthens products, builds customer trust, and creates sustainable business advantages.
The 2024-2025 period has seen new cybersecurity regulations appear across sectors, representing a fundamental shift in how responsibility for product security is assigned and enforced.
The most significant change is the shift in liability. Under traditional models, end users often bore responsibility for securing the products they purchased. Today’s regulations flip this dynamic, making manufacturers the primary guardians of product security throughout the entire lifecycle. The CRA rebalances responsibility toward manufacturers and sets new standards across product lifecycles, fundamentally changing how companies must approach product development and support.
Supply chain transparency has become another critical factor. New requirements for Software Bills of Materials (SBOMs) and vulnerability disclosure mean that product leaders can no longer treat their supply chains as black boxes. Every component, every dependency, and every potential vulnerability must be catalogued, monitored, and managed.
As RunSafe Founder and CEO Joe Saunders has emphasized, “If a vendor can’t tell you what’s in their product, chances are, they don’t know either.” This lack of knowledge will no longer fly with consumers, regulators, or internal risk management teams.

Perhaps most importantly, these regulations demand a cultural transformation within organizations. As noted by cybersecurity experts at IMD Business School in a June 2025 Qt Group analysis, “the EU Cyber Resilience Act demands a fundamental cultural and leadership shift in organizations,” moving away from security as a bolt-on feature to security as a foundational element of product design.
The CRA represents the most comprehensive product security regulation to date, with implications that extend far beyond European borders. Understanding its requirements isn’t just about compliance; it’s about understanding where the entire industry is heading.
The CRA entered into force on December 10, 2024, but the most critical date for product leaders is December 11, 2027, when most obligations become enforceable. This timeline creates both urgency and opportunity: companies that start preparing now will have significant advantages over competitors who wait until the last minute.
The regulation’s scope is deliberately broad, covering all connected products and software sold in the EU, regardless of where the manufacturer is located. This means that any company selling digital products globally needs to consider CRA compliance as a baseline requirement.
The CRA’s “Secure-by-Design” mandate isn’t just regulatory language but a complete shift in how products must be conceived, developed, and maintained. Security can no longer be retrofitted; it must be integral from the earliest design phases.
Vulnerability management under the CRA requires manufacturers to report actively exploited vulnerabilities within 24 hours of becoming aware of them and to implement coordinated disclosure processes. This creates new operational requirements but also opportunities for companies that excel at rapid response and transparent communication.
The documentation requirements are extensive, covering security documentation, risk assessments, and conformity declarations. While this creates an administrative burden, it also forces companies to develop more rigorous security practices that typically result in higher-quality, more resilient products.
Post-market obligations represent perhaps the biggest shift, requiring ongoing security updates for a minimum of five years or the expected product lifetime. This transforms the economics of product development, making long-term security support a core business consideration rather than an afterthought.
The FDA’s 2025 cybersecurity guidance updates represent a maturation of medical device security requirements, but their implications extend beyond healthcare. As one of the most regulated industries, medical devices often preview compliance approaches that eventually spread to other sectors.
The FDA’s latest guidance mandates that cybersecurity must be demonstrated from pre-market design through post-market support, with strong documentation on vulnerabilities and supply chain transparency for all device components. This lifecycle approach mirrors the CRA’s philosophy and suggests a convergence toward comprehensive product security requirements across industries.
The emphasis on SBOM requirements for connected medical devices creates new transparency obligations but also opportunities for companies that can demonstrate superior supply chain security. Companies that proactively implement robust component tracking and vulnerability management will find themselves better positioned for both regulatory compliance and customer trust.
The FDA’s approach changes go-to-market strategies for medical technology companies. Security documentation is now part of the regulatory submission process, meaning that security considerations must be built into product development timelines from the beginning.
This creates new resource allocation challenges, as companies need dedicated cybersecurity expertise within product teams. However, it also creates competitive advantages for companies that develop this expertise early and can demonstrate superior security practices to customers and regulators.
Learn more about navigating vulnerability identification and postmarket cybersecurity for medical devices in this video: On-Demand Webinar: Medical Device Cybersecurity Challenges
With limited resources and expanding regulatory requirements, product leaders must prioritize their security investments strategically. The most successful companies focus on areas that provide both regulatory compliance and business value.
SBOM requirements appear across multiple regulations—the CRA, FDA guidance, and emerging requirements in other sectors. This makes supply chain transparency a high-leverage investment that addresses multiple compliance requirements simultaneously.
The business case extends beyond compliance. Companies with comprehensive SBOM capabilities can respond faster to supply chain vulnerabilities, reduce incident response costs, and demonstrate superior risk management to customers and partners. The key is implementing automated SBOM generation and continuous component monitoring rather than treating it as a one-time documentation exercise.
Both the CRA’s 24-hour reporting requirement and the FDA’s lifecycle security obligations demand sophisticated vulnerability management capabilities. Companies that excel in this area gain competitive advantages that extend far beyond compliance.
Proactive vulnerability management reduces breach costs significantly—studies show comprehensive vulnerability management programs can reduce incident costs by millions of dollars. Research indicates that the average cost of a data breach reached $4.88 million in 2024, according to IBM’s Cost of a Data Breach Report. More importantly, companies known for rapid, transparent vulnerability response build trust with customers and partners that translates into business growth.
The implementation challenge is building systems that can automatically detect, assess, and respond to vulnerabilities across complex product portfolios. This requires integration between threat intelligence, asset management, and incident response processes.
The CRA’s Secure-by-Design requirements and the FDA’s lifecycle approach both emphasize building security into products from the ground up. Companies that master secure development practices don’t just achieve compliance—they build products that customers trust and competitors struggle to match.
Key elements include integrated threat modeling, secure coding standards, and security testing throughout the development lifecycle. The goal isn’t just to pass security audits but to build products that are inherently more resilient and trustworthy.
The most successful product leaders are discovering that proactive product compliance creates business value that far exceeds the investment required.
In increasingly security-conscious markets, products with built-in security capabilities command premium pricing and win more procurement decisions. This trend is particularly pronounced in healthcare, where RunSafe Security’s 2025 Medical Device Cybersecurity Index found that 60% of healthcare organizations prioritize built-in cybersecurity protections when selecting vendors, with 79% willing to pay a premium for devices with advanced runtime protection.
Security-first product positioning also builds long-term customer relationships. Companies that can demonstrate transparent security practices and rapid vulnerability response develop customer loyalty that extends beyond individual product transactions.
Secure-by-Design development practices reduce technical debt by preventing security issues rather than retrofitting solutions. This approach typically results in lower long-term development and maintenance costs, even accounting for upfront security investments.
Good security practices also streamline regulatory audits and compliance verification. Companies with mature security programs spend less time and resources on compliance activities because their standard practices already meet or exceed regulatory requirements.
The financial benefits of proactive security extend beyond cost reduction. Healthcare organizations demonstrate this market reality clearly, as seen in RunSafe’s 2025 Medical Device Cybersecurity Index. 79% of healthcare buyers are willing to pay a premium for devices with advanced runtime protection, with 41% willing to pay up to 15% more for enhanced security. Additionally, 83% of healthcare organizations now integrate cybersecurity standards directly into their RFPs, while 46% have declined to purchase medical devices due to cybersecurity concerns.

Risk reduction creates additional financial value through lower incident response costs, reduced legal exposure, and improved cyber insurance rates. Companies with strong security programs often achieve significantly lower insurance premiums and better coverage terms. The documented financial impact of attacks like WannaCry, which cost the NHS £92 million, demonstrates that prevention is far more cost-effective than recovery.
The regulatory landscape will continue to evolve, with several emerging trends that product leaders should monitor and prepare for.
Based on the regulatory trends and business opportunities outlined above, several strategic recommendations emerge for product leaders navigating this complex landscape:
Manual compliance processes are unsustainable given the complexity and pace of regulatory change. Companies that build automated compliance capabilities will have significant advantages over competitors still relying on manual processes for monitoring, reporting, and verification.
Few companies have all the expertise needed to excel across the full spectrum of security and compliance requirements. Strategic partnerships with specialized security vendors can provide access to expertise and capabilities that would be expensive to develop internally while accelerating time-to-market for compliant products.
Rather than playing defense against individual threats, prioritize technologies that can eliminate broad categories of vulnerabilities. Runtime protection solutions that prevent exploitation at the device level represent this approach—they provide comprehensive protection without requiring constant updates or patches. This strategy significantly reduces risk while simplifying compliance management across product portfolios.
The regulatory landscape continues to evolve rapidly, and companies that can anticipate changes rather than just react to them will maintain competitive advantages. Establish dedicated resources for monitoring regulatory trends and translating them into product development requirements.
Rather than building separate compliance programs for different markets, design for the most stringent requirements across all target markets. This approach reduces complexity while ensuring products can be sold in any market without additional compliance engineering.
The convergence of the EU Cyber Resilience Act, FDA cybersecurity requirements, and other emerging regulations represents both the greatest challenge and the greatest opportunity facing product leaders today. The companies that view these requirements as innovation drivers rather than compliance burdens will build more secure, resilient, and successful products.
The critical insight is that waiting is not a viable strategy. The December 2027 CRA deadline and evolving FDA requirements create urgency, but companies that start building security capabilities now will discover that the benefits extend far beyond regulatory compliance.
For product leaders ready to turn cybersecurity compliance into a competitive advantage, the path forward is clear: embrace security as a core product differentiator, invest in the capabilities needed to excel at both security and compliance, and build the partnerships needed to stay ahead. The companies that make these investments today will be the market leaders of tomorrow.
Learn more about how to safeguard your code to up your product compliance. Get the white paper: “Safeguarding Code: A Comprehensive Guide to Addressing the Memory Safety Crisis.”
The post What Product Leaders Need to Know About EU CRA, FDA, and Cyber Regulations appeared first on RunSafe Security.
The post Secure Coding Practices: A Q&A with Industry Expert Rolland Dudemaine appeared first on RunSafe Security.
Even with decades of hard-earned security wisdom and modern verification tools, embedded software still suffers from the same kinds of bugs. Why do these mistakes keep showing up in code written by seasoned engineers? How do you write software that’s both secure and shippable, especially when staring down a deadline?
To dig into these questions, we spoke with Rolland Dudemaine, Director of Field Engineering at TrustInSoft. Rolland has spent more than 25 years in the embedded software world, working on the design and development of safety-critical and security-sensitive systems. He’s a regular open-source and AUTOSAR contributor, and he’s seen the industry’s best practices evolve alongside the pitfalls that stubbornly refuse to disappear.
In this Q&A, Rolland offers a straight-from-the-trenches look at secure coding, from the easy-to-miss mistakes that cause the biggest headaches, to the right way to layer security tools, to what “memory safety” really means in practice. Whether you’re writing firmware every day or steering an organization’s embedded security strategy, you’ll find insights here you can put to work.
Rolland Dudemaine: In general, the remaining coding mistakes relate to the corner cases of the software. These often lead to runtime errors that can cause crashes, or worse, silent data corruption that may be exploitable.
Among those that remain, off-by-one errors (leading to buffer overflow or underflow) and arithmetic overflow/underflow are the most typical, because they are not necessarily easy to reach during functional testing. When using a programming language that requires manual memory allocation, use-after-free remains a very visible cause of trouble. MITRE’s CWE list does a good job of cataloguing such issues.
One of the reasons why these issues remain is that it is not possible to functionally test for these corner cases: there is an almost infinite number of ways to corrupt data. Instead, using appropriate tools is the only way to detect these kinds of issues.
Rolland: Projects reusing code always underestimate the cost of ownership of open-source libraries. It’s not that these libraries inherently have lower quality; rather, the specific use of such libraries within the project may not be the same as the original intended usage, and projects often reach buggy corner cases. If you use third-party code, you become responsible for it.
For project-specific software, processes often focus on form over function. While using coding rules is the best way to improve maintenance cost, and well-tested, consistent code is likely to have fewer bugs, this doesn’t mean such bugs are eliminated! It’s only risk reduction, without guarantee.
During testing and code audits, using appropriate tools to check for mistakes is important. This includes static analysis, coverage tools, and sanitizers. TrustInSoft Analyzer is one such tool that covers all of this in one go, but using separate tools is already a start.
Rolland: Security is all about layering. Much like serious network security always advises applying multiple network encryption schemes, code security goes through examination of the code through different angles and levels of protection.
“Security is all about layering. … Code security goes through examination of the code through different angles and levels of protection.”
That said, good security (and safety!) planning tries to avoid failures, and also plans for swift reaction in case of error after deployment. Similarly, static analysis, formal verification, and fuzzing are great examples of tools to be used during development, while runtime protection is efficient to ensure that any remaining failures will still be caught and handled gracefully in the field. RunSafe’s runtime protection is a state-of-the-art example of such a scheme that will detect and report any failure observed in production.
Rolland: Perfection doesn’t exist in this world. What remains is how close the project needs to approach perfection. From there, various decisions can be made to focus on the pain points and dire consequences of field failures. And the effort to avoid them will have to be adapted in consequence.
“Perfection doesn’t exist in this world. What remains is how close the project needs to approach perfection.”
A similar pattern is to make or buy: Do you reuse third-party code? Open-source code? Use free or commercial tools to work?
When you start to get serious about your job, it quickly becomes visible that a thorough, reviewed, and if possible exhaustive approach should be used. Again, this can range from simply following coding guidelines and using systematic and formal verification tools to conducting an independent vulnerability analysis.
A heavy but interesting worldwide reference is the Common Criteria specification. Unless one has an extremely critical asset (think nation-level top-secret) to protect, the list is too extreme to be reasonably applied as is. However, it is a fantastic description of the methods to develop and verify software: selecting the right level for your needs and challenges will always push things in the right direction.
Rolland: Based on feedback from our customers, the most common “potentially serious bugs” are accesses to uninitialized variables, as well as off-by-one errors. Since they are caught ahead of production, the true damage they could have caused is hard to predict, but can range from a mere malfunction to a potentially devastating bypassable security gate.
Another example, which we at TrustInSoft recently presented at the CYSAT Paris event, is a series of bugs that we found in NASA’s cFE (core Flight Executive). That open-source component has been used in many space devices in production (the James Webb Space Telescope, among others), yet we recently managed to find a few runtime errors that could be damaging, including accesses to uninitialized variables.
Rolland: The adoption of systematic security audits, sanitizers, and other formal verification tools, such as TrustInSoft Analyzer, has helped raise the bar and limit the amount and types of bugs that pass through.
That said, everyone working on C or C++ language code has started to look at Rust and other “memory-safe” languages. We’re actually adding Rust support to TrustInSoft Analyzer and it will ship in our next release.
However, early analyses of customer projects using Rust show that runtime errors persist, at a lower but still visible level. One reason is that memory safety in Rust doesn’t mean the risk disappears; rather, when a failure is detected at runtime, the code panics deterministically instead of doing something unpredictable. That’s much better, but it will not prevent a DoS (denial of service), for instance.
All in all, adopting a new language presents an opportunity to transition to more modern practices: code is rewritten with greater experience, better design, refined coding practices, improved testing, and more precise specifications. The new language itself isn’t the sole cause of improvement, but it’s a pretext for change for the better, and an opportunity to use efficient verification tools.
Rolland: The European CRA (Cyber Resilience Act), the US Cyber Trust Mark, and the Chinese CSL (Cybersecurity Law) all mandate SBOMs for a reason: listing what you use and ship is minimum security hygiene. If you don’t even know what you’re shipping, how could you even start to evaluate the risks?
Once such a list is established and mapped to the system and software architecture, it becomes possible to determine which items are risky in terms of attack surface, and consequently where the verification effort should be focused.
An SBOM does not make a system inherently secure: it allows projects to gauge risk. So it definitely goes in the right direction, even when it just looks like paperwork at first.
Rolland: It’s all too easy to answer “AI” to this question, as AI is seen as the answer to everything these days. However, AI in this specific case is a risk: humans using AI for coding are no longer the designer/logician/artist of the code, but merely reviewers. There is consequently a higher risk of subtle security flaws in AI-generated code, which makes it all the more important to use formal verification tools, more thorough human reviews, or both.
On a more positive note, the move to memory-safe programming languages is opening the eyes of many developers and managers to the fact that good practices lead to much better code quality. We see more interest in formal verification tools, and TrustInSoft Analyzer is being trusted more than ever to verify critical code, regardless of its origin.
“Good practices lead to much better code quality.”
Securing embedded systems is an ongoing process that demands rigor, the right tools, and a willingness to adapt as threats evolve. As Rolland Dudemaine’s insights show, achieving meaningful improvements in software security requires both technical discipline and strategic planning, from catching elusive corner-case bugs to layering defenses that protect systems long after deployment.
If your team is looking to strengthen its approach to embedded security, RunSafe Security offers solutions designed to neutralize the entire class of memory safety vulnerabilities and protect against runtime exploits without disrupting your development workflow.
Learn how our runtime code hardening eliminates memory corruption risks and see how we help organizations in critical industries ship safer, more resilient systems.
Explore RunSafe Protect.
The post Secure Coding Practices: A Q&A with Industry Expert Rolland Dudemaine appeared first on RunSafe Security.
Security can either be your biggest margin killer or your most powerful profit enabler. From the beginning, our goal at RunSafe has been to put control back into the hands of the defenders. And that means building solutions that meaningfully reduce risk across your product portfolio. As cyber defense champions, we can quantify the economic benefits of security solutions that improve your product line profitability.
Security incidents now average $4.88 million per breach, according to IBM’s 2024 Cost of a Data Breach Report, but that figure only scratches the surface. The real damage comes from the operational drag that reactive security creates long before any breach occurs.
Consider an example from a software manufacturer, one of RunSafe’s customers. Implementing RunSafe’s runtime code protection saved the company over $1 million per year, with reduced patching representing the largest cost saving.

Calculate your potential Total Cost of Ownership here.
Being proactive about security (deploying RunSafe Protect) rather than reactive (relying on patching) saved this company a significant amount of money. And it’s not just money. It’s also about opportunity cost.
The hidden costs of reactive security include:
Competitive Disadvantage: Slower release cycles compared to competitors who ship features faster with built-in security.
The problem with scan-and-patch security is that it is both inefficient and ineffective. For example, in our work on embedded devices, we see daily that memory safety vulnerabilities account for 40-70% of identified vulnerabilities in embedded code.
A study by North Carolina State University of Linux operating system software over a 10-year period found that only 2.5% of memory vulnerabilities were identified by vulnerability scanning tools. Scanning, though widely adopted, thus leaves systems exposed.
Similarly, multiple studies show that companies and users generally don’t patch on time, citing a lack of knowledge, the effort of coordinating change, process slowdowns, fear of breaking the current setup, and other barriers. At its best, patching is reactive. More often, costs and other barriers mean patching is delayed, if done at all.
Even when vulnerabilities are found, patching faces massive barriers.
The math is brutal. If you’re patching reactively, you’re not just paying for the patch—you’re paying for the disruption, the delays, the testing cycles, and the opportunity costs of having your best engineers chasing down someone else’s vulnerabilities instead of building your next breakthrough feature.
Here’s where the economics flip completely. Runtime security—integrating code protection directly into your development process—transforms security from a margin killer to a competitive advantage.
RunSafe’s approach demonstrates this transformation. RunSafe Protect eliminates an entire class of vulnerabilities common in embedded software, defending your software from the very beginning and dramatically reducing your attack surface. Protect safeguards your systems during runtime without compromising performance or requiring post-deployment modifications.
The results speak for themselves. RunSafe deployed code protection for an industrial automation leader shipping HMI products, reducing the attack surface by 70%. The company was able to measurably reduce risk and protect software in hard-to-update facilities within critical infrastructure.
Understand the total exposure of your embedded software and quantify your risk reductions with RunSafe Protect. Give your code a scan.
The broader business impact includes:
Beyond cost savings, the right security approach actually opens new revenue streams. Companies with robust security profiles win contracts that others can’t touch. In RunSafe Security’s 2025 Medical Device Cybersecurity Index, we saw that 83% of healthcare organizations now integrate cybersecurity standards directly into their RFPs, and 46% have declined to purchase medical devices due to cybersecurity concerns. A lack of security quickly leads to lost revenue in this competitive market.
On the other hand, strong security opens the door to increased product line profitability: 79% of healthcare buyers are willing to pay a premium for devices with advanced runtime protection. Similarly, in RunSafe’s 2025 Connected Car Cyber Safety & Security Survey, 87% of participants said a car brand that offers strong cybersecurity and privacy would influence their purchase decision, with 35% willing to pay more.
Customers are saying security is worth the cost. That’s good news for product teams looking to make smart investments.
Security doesn’t have to be a necessary evil that drains profitability. When implemented early, systematically, and with business impact in mind, security becomes a competitive advantage that drives margin improvement and sustainable growth.
The companies that figure this out first will have operational advantages their competitors can’t match: faster development cycles, lower operational costs, stronger customer relationships, and access to markets that others can’t reach.
Can you afford to keep subsidizing reactive security approaches that are killing your margins and slowing your growth?
See how runtime security can transform your product line profitability. Calculate your potential ROI with RunSafe Protect or schedule a call with our team to discuss your specific business impact.
The post Is Your Security Helping or Hurting Your Product Line Profitability? appeared first on RunSafe Security.