When Open Source Gets You Into Hot Water: Copyleft Risk in Embedded Systems
https://runsafesecurity.com/podcast/copyleft-risk-embedded-systems/


Open-source components unlock speed for engineering teams—but in embedded systems, a single licensing oversight can lead to legal exposure, forced source-code disclosure, blocked shipments, or compliance failures you can’t fix after the fact.

In this episode of Exploited: The Cyber Truth, Salim Blume, Director of Security Applications at RunSafe Security, joins RunSafe CEO Joseph M. Saunders to break down what happens when the problem isn’t a CVE, but a clause buried in a copyleft license deep inside your firmware. Salim explains:

  • Why license compliance is harder in embedded systems than in cloud or web apps
  • How GPL, AGPL, and other restrictive licenses can obligate you to open-source proprietary code
  • Real-world examples where unnoticed copyleft clauses triggered major consequences, such as Vizio’s GPL lawsuit and Cisco’s WRT54G router family
  • How SBOMs at build time reveal hidden licensing and vulnerability exposure
  • Why automated license-policy enforcement in CI/CD pipelines is essential
  • How embedded RASP techniques and memory-safety protections help mitigate future risk

Whether you’re building robotics, industrial devices, transportation systems, or connected products, this discussion offers clear, actionable guidance to protect your IP—and your business—from copyleft surprises.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker – Salim Blume, Director of Security Applications, RunSafe Security

Salim Blume is RunSafe Security’s Director of Security Applications. His team brings RunSafe’s technology to customers by way of the RunSafe Security Platform, making it seamless to integrate SBOM generation and memory randomization into pipelines and benefit from vulnerability management, license compliance, and more.

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security.

[Paul] (00:01)

Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:19)

Greetings, Paul.

[Paul] (00:21)

We have a special guest this week, Salim Blume, Director of Security Applications at RunSafe. Hello and welcome, Salim.

[Salim] (00:32)

Hello, Paul. Thanks for having me.

[Paul] (00:34)

We have an intriguing title this week, namely, when open source gets you into hot water. And that’s source as in source code, not as in tomato sauce. And although we’re all techies talking here on this podcast, the licensing aspects relating to the technology you produce can be a terribly complicated part of making, supplying, selling, and supporting software, can’t they?

[Salim] (01:09)

Absolutely. It becomes so complex. It’s like most people just try and ignore it, but it’s something you absolutely cannot ignore. It doesn’t matter how big or small your company is. At one point, your software is being consumed by a customer of yours. And if it contains the wrong licenses, you’re putting yourself and, at times, your customers at risk.

[Paul] (01:29)

Now, a lot of people misunderstand what is meant by the very general term open source, don’t they? You can have free software, which is closed source; in other words, the source code is never revealed at all. Free has two legal senses: free as in freedom to use it, and also free as in not having to pay for it.

[Salim] (01:53)

The way that you can make that determination is by looking at the license of the software. If it is open source that you can go and look at, maybe it’s on GitHub, and it has a license file, you can see this is how you are allowed to consume it. And that’s the only way to know which free you’re dealing with. And it ends up being this huge legalese contract. If you’re just one guy trying to get started, how are you supposed to parse all this? Or, at the other end of the scale, if you have thousands of repos, consuming tens of thousands of components, how are you supposed to handle every single license file that you’re putting into your own software?

[Paul] (02:30)

And I guess the other problem is if you’re consuming software that has come floating to you down the supply chain, A, how do you know what licenses really apply there? And B, how do you know whether the people upstream of you have made any effort at all to comply properly? Because in the end, you carry the can when your software goes to market.

[Salim] (02:54)

You’re right, and there are different combinations of those licenses. Somebody may write software and license it to say, if you’re non-commercial, it’s licensed this way. And if you are commercial, you are licensed this way.

[Paul] (03:06)

Wow, so in other words, it’s free and open source if you’re a hobbyist, or perhaps if you’re a certain type of organization, but not if you’re considered commercial. Then you’re faced with deciding which side of that boundary you land on.

[Salim] (03:23)

And it may be once you hit a certain revenue amount. And who’s keeping track of that? You may be in compliance as long as you’re shipping a small quantity of devices, but as soon as you hit it big, who’s keeping track of whether you’re still compliant with the license you were compliant with last month?

[Paul] (03:42)

So do you want to give a description of some of the main sorts of open source licenses that are out there?

[Salim] (03:49)

Yeah, sure. There are hundreds, if not thousands, of licenses out there, but they do fall into some broad categories: you see restrictive licenses and permissive licenses, and there are different types of restrictive licenses. If you are poking around software licenses, you’re going to see GPL, you’re going to see MIT, you’re going to see AGPL, and Apache, and BSD; there are tons of them. MIT is a very popular permissive license, because a license isn’t just for the person consuming the software, it’s for the person creating the software. So an MIT license is helpful for someone who writes, we’ll say, a tool, puts it on the internet, and says: I’m proud of this. I think people should use it, but I don’t want to be held liable for any bugs that you may find if you do put this into production. But I don’t care if you do use it in commercial software. So that’s more of a permissive license. Whereas among the more restrictive licenses, GPL is again probably the most popular.
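As a concrete (and deliberately simplified) illustration of the buckets Salim describes, a build script might classify components by their SPDX license identifiers along these lines. A minimal Python sketch; the identifiers and buckets shown are an example policy, not a complete or legally authoritative mapping:

```python
# Illustrative only: bucketing a few common SPDX license identifiers
# into the broad categories discussed above. This is a simplified
# example, not a complete list and not legal advice.
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause"}
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}
WEAK_COPYLEFT = {"LGPL-2.1-only", "MPL-2.0"}

def categorize(spdx_id: str) -> str:
    """Return the broad policy bucket for an SPDX license identifier."""
    if spdx_id in PERMISSIVE:
        return "permissive"
    if spdx_id in COPYLEFT:
        return "copyleft (strong)"
    if spdx_id in WEAK_COPYLEFT:
        return "copyleft (weak)"
    return "unknown -- route to a human for legal review"

for lic in ("MIT", "GPL-3.0-only", "SSPL-1.0"):
    print(f"{lic}: {categorize(lic)}")
```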

[Paul] (04:46)

GPL is short for General Public License, not GNU Public License, although it comes from the GNU project and the Free Software Foundation originally, doesn’t it?

[Salim] (04:57)

There are also multiple versions of the GPL, and it is sort of the tip of the spear of this concept of copyleft, which a lot of people have probably heard about. If they’re listening to this podcast, they’re probably interested in: how am I going to defend myself from this boogeyman of copyleft?

[Paul] (05:14)

Now, copyleft, if I remember correctly, actually started as a joke back in the 70s: copyleft, all wrongs reserved. It was meant as a joke, but it came to be taken in a sort of paradoxical way to mean that this software is not in the public domain; you can’t do whatever you want with it. And the legalities turn out to be quite different, don’t they, in terms of your liabilities if you use GPL software?

[Salim] (05:19)

Yes. It was codified when the GPL was released for the first time as an actual license: going from a joke to something that people can actually be held accountable to. The idea being, if I write software and release it under a license that could be considered copyleft, and you then use my software, you are obligated to license your software under an equivalent copyleft license. Which is to say, if mine is, from the corporate point of view, a very restrictive license where you must open-source all of your code, and you use it, now you must license your commercial code in the same way and potentially release all of your software.

[Paul] (06:28)

So in theory, you could have a giant product that has millions of lines of code in it, and you import into your code base one tiny little implementation, say of a cryptographic algorithm that might be 20 or 30 lines that is under a GPL, a copyleft style license. And if you then publish your program with that code in it, your entire code base essentially should become subject to that copyleft agreement as well and you have to publish all of your secrets for everyone to see. Is that right?

[Salim] (07:03)

Not only are you on the hook; there are organizations out there that believe so strongly in that, that they will come and look for violators of licenses such as the GPL and take them to court to force them to open-source their software. Vizio, the TV manufacturer, is currently on the hook for exactly that thing. Most famous would be Cisco, with its WRT54G router family.

In 2003, it was found that the software on these routers included GPL-licensed code. But it’s a lot more complicated than that. Cisco had purchased Linksys for half a billion dollars. Linksys was making those routers. Linksys didn’t put the software there either; they got it from Broadcom with the Wi-Fi chip. Broadcom didn’t put it there either; they used a third-party contractor who wrote software that used a GPL-licensed tool for the 802.11g driver. And that made it through four levels: from the contractor into Broadcom, into Linksys, and through a merger and acquisition into Cisco. Cisco was on the hook for that and did end up having to release all the source code for that particular router.

[Paul] (08:19)

Sort of a happy ending there, wasn’t there, in that they decided, well, if we have to do this, we’ll create a version of the router that you can basically flash with your own version of Linux. It worked as very positive marketing for them in the end, didn’t it?

[Salim] (08:33)

It did. And it became, for a long time, the best-selling router because it was so easily modifiable. And there was such a community around doing that.

[Paul] (08:42)

Now in that case, you imagine a company like Cisco, you think, well, they’ll just get the checkbook out and they’ll just write a giant check and say, okay, well, we’ll pay you back licensing fees. But in most cases with GPL infringement, copyleft infringements, that doesn’t work, does it? Because the organization that’s going after you doesn’t want your money, it wants you to comply with the open source, the freedom side of things. Either you reveal all your source code, even the stuff you wanted to keep proprietary, or you stop selling the product entirely with that code in it and start again from scratch. Even if you have deep pockets, you can’t necessarily buy your way out.

[Salim] (09:25)

That’s what Cisco saw. That’s what Vizio is in the middle of fighting. You’re absolutely right. And not only are the organizations that are going after companies interested in that, the licenses are written in such a way that there is no backdoor. Even if the organization suing was interested in a monetary gain, that’s not how the license is written. We’re kind of abbreviating copyleft into GPL because it’s the most popular. There are a whole bunch of others, but when talking GPL, that’s not how it’s written.

[Paul] (09:49)

Yes.

[Salim] (09:55)

You can’t skirt out by cutting a check, as you say.

[Joe] (09:58)

And I might add right there, one of the standards or tests has been that it’s not triggered if you’re just using the software internally for your own development purposes, but only if, in fact, you distribute the software. And there’s been kind of an evolution there, where under an AGPL license a SaaS-enabled service is considered a form of distribution of software. So even though you’re not shipping software to third parties, if you’re delivering a service over a network, that can also constitute a software distribution, and under the AGPL variation you would be obligated to copyleft as well.

[Paul] (10:40)

A is for Affero, isn’t it? Which I believe is the company that first came up with it. My understanding is that it was intended, if you like, to fight back against industry giants like Google, who would run a million Linux servers and make billions of dollars out of it, but you couldn’t get hold of their software, because technically they’d say, well, it’s actually part of our network. Even ISPs, who provide routers where you don’t own the router but they lease it to you and you plug it in in your household, would say, that’s part of our network, so that’s not covered by an open source license. But in the modern era, it might be, because of the existence of licenses like the AGPL. Have I got that correct, Joe?

[Joe] (11:28)

Yeah, one hundred percent. Just as Salim identified the Cisco case, famously Oracle was found in violation of an AGPL license from Qualcomm. Oracle had thought that because it was a network service they were offering, a SaaS-enabled offering through their cloud hosting, it wouldn’t be subject to it. But just as you say, they were making a lot of money using a component that was intended to be copyleft.

[Paul] (11:53)

And am I right that for many of the fans of open source software, it isn’t really about the money at all? Oracle might be in their sights because they are just so wealthy, but even if you’re a tiny company, for many open source fans, it’s the principle that’s at stake. And if you’re a tiny business, if you suddenly have to stop selling a product, that could have a disastrous effect upon you.

[Salim] (12:17)

Even as a small company, you may be forking an open source project on GitHub. If you’re just getting started, you may not realize the mistake you’re making, and you’re doing so very openly. If you make a fork of a tool that is licensed under a copyleft license, it’s very obvious to everybody out there that you’ve just taken that action. And so you put a target on your own back.

[Paul] (12:38)

And it’s my understanding that if you do rush into building software and you embrace open source or you think you have, but you don’t think about complying with all the necessary licenses, whether they’re copyleft or MIT style, whatever they might be, if you don’t think about that from the start, then if you do get into trouble later, even if you’re quite happy to comply, you say, yes, we’ll release all the source code, we’ll get everything together, that itself can be an enormous task, can’t it? Getting that right in a hurry is very difficult indeed.

[Salim] (13:15)

Yeah, I would say especially so for companies that release software on embedded devices, where it’s not just a website that’s out there, accessible on a single domain name, where as soon as you catch the issue you can push a new version and everything’s fine. If instead you’re a device manufacturer, say building robots for a factory, you have many versions of software, and you have no way to really know what versions of your software exist out in the world. For instance, GPLv3 section 8 has a grace period where, depending on the violation, you have 30 or 60 days to cure your license violation. That’s a lot easier, like I say, if it’s just pushing a new dependency to a website. It’s monumentally more difficult on wide distributions of software running on end devices.

[Paul] (14:10)

It doesn’t matter how nice you’re trying to be about it, just finding out what you need to do to complete your compliance could take you a lot more than that 30 or 60 day grace period.

[Salim] (14:22)

In this perfect world, you may have a perfect understanding of all of the software you’ve ever shipped, and you may still have a customer who says, I’m not going to upgrade. There’s nothing you can do about it. You can’t then cure the situation.

[Paul] (14:34)

Or you might not realise that the offending code was in one of your older products because you weren’t keeping track of things back then. Yeah. How do you go retrospectively and find it before somebody else decompiles your firmware and goes, ha ha ha, I’ve caught you out?

[Salim] (14:50)

I think what Cisco decided was that it’s far cheaper just to open source it.

[Paul] (14:54)

Yes, that can be difficult if you can’t get your hands on the source code that you had back then because maybe you weren’t quite so disciplined about version control and stuff like that. So, Salim, that raises the intriguing question. How do you keep track of all of this? How do you make sure that you haven’t accidentally included something that you did not expect? That one little super-efficient cryptographic algorithm that turns your MIT-licensed project into a GPL-licensed one.

[Salim] (15:29)

It starts with accurate SBOMs.

[Paul] (15:32)

That’s Software Bill of Materials.

[Salim] (15:35)

Yes, you are definitively stating this is what my software is. And that is how you can then say, okay, if I know that I was using this third-party software, now I can recognize, at that time on that version, that was GPL licensed. Gosh, I should do something about that. You can’t do that without an accurate assessment of your own software. 

And that also means the third-party software that you’re pulling into your own. That does mean that this is where it gets more difficult. If you’re pulling in a binary from somebody else, they had better be including their own SBOM where they attest this is what was in this binary before you use it. You want that for security reasons as well as for future vulnerability scanning. Now I know that this software that I’m putting into my system and then selling as a more valuable package, it is vulnerable to X, Y, Z in the future. Just as with the license problem of knowing do I need to remove dependencies that violate licenses? Now I need to upgrade or change dependencies for vulnerabilities. It all starts with an accurate SBOM.

[Paul] (16:43)

So it sounds as though there are some very good reasons just from a cybersecurity point of view, let alone a legal correctness licensing and community centric point of view, to know what’s in your software anyway. If you can’t tell what license applies to your program, how do you know what now well-known buffer overflows are latent inside it as well?

[Salim] (17:07)

I think you have stated it well. You can solve the license problem. You can solve the vulnerability problem.

[Paul] (17:12)

So how do you make sure that your Software Bill of Materials really does reflect reality? We talked with Kelli in a recent podcast about, if you like, the difference between just opening up your cupboard and saying, well, I’ve only got this giant list of ingredients, so it’s probably going to be some or all of those, which A, may over-include things and B, you may accidentally have used one that wasn’t in the cupboard. Yep. Or you wait till the cake’s baked and then you have a taste afterwards and you go, yeah, it’s probably got sugar, bicarb, and a bit of salt. That’ll do.

[Salim] (17:48)

You have to watch as the cake is made, as the software is built, and write down every single thing. And it helps when you have a recipe. Then you can say, yes, I did just put two eggs in there; I did just put in a cup of flour. You have to monitor every step of the process, and the only way to do that is to automate it. Then you need to keep track of what you wrote down. A good way to do that is through a central repository of all of your SBOM information; then it’s very easy to say, in these same automated processes, especially built on CI/CD pipelines, let’s look at what went in, and what now needs to be remediated.

[Paul] (18:29)

And there are some standardized formats for how you record that bill of materials data, aren’t there? Yep. If you follow one of those well-known standards, it makes it much easier for the people who are upstream and downstream of you to consume that data automatically, instead of having to read through 17 different ways of writing a list of ingredients and possibly getting caught up in ambiguities.

[Salim] (18:56)

The biggest one is CycloneDX. There’s also SPDX, and SWID. CycloneDX is huge.

[Paul] (19:03)

Do you have to put cryptographic hashes in there so you can identify the exact files that you use?

[Salim] (19:10)

You don’t have to, but the spec does allow for that. And it’s a great thing to do, especially again, when you’re consuming third-party software that does then allow you to put those into your license and vulnerability management tools. And then later look backwards if necessary and say, Hey, third-party supplier, this is something you may need to take a look at. Or I think I’m now at risk because of something you’ve introduced. This is something you have to deal with.
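To make that concrete, here is a minimal sketch of recording a component, its license, and its cryptographic hash in a CycloneDX-style JSON document, as just discussed. The field names follow the CycloneDX JSON format; the library name, version, license, and file path are made up for illustration:

```python
import hashlib
import json

# Hash the exact artifact being shipped so consumers can verify it later.
# "libwidget.so" is an illustrative path; point this at a real file.
with open("libwidget.so", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# A minimal CycloneDX-style SBOM containing a single component.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "libwidget",          # illustrative component
            "version": "2.4.1",
            "purl": "pkg:generic/libwidget@2.4.1",
            "licenses": [{"license": {"id": "GPL-3.0-only"}}],
            "hashes": [{"alg": "SHA-256", "content": digest}],
        }
    ],
}

print(json.dumps(sbom, indent=2))
```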

[Paul] (19:34)

Now Joe, you in previous podcasts have made some intriguing remarks about the potential liabilities of, if you like, trying too hard, or perhaps I even mean too casually, with generative AI: taking, say, a well-known open source tool, throwing it into a generative AI system and saying, hey, can you improve this a bit? You then end up with something that is slightly different, that doesn’t match, where the source code has a different hash, and then you do run the risk that you will fall out of visibility both to licensing checks and to vulnerability disclosures. So that’s maybe a little bit of a problem waiting to happen, isn’t it?

[Joe] (20:21)

It is a problem that’s waiting to happen in part because you just may not know there’s vulnerabilities that otherwise you would be aware of had you used the original binary or component or library in that case.

[Paul] (20:34)

And this is not somebody being malevolent, saying, rewrite this so that it doesn’t look like the original so I can deliberately pretend that it wasn’t licensed. This could happen just with the best will in the world. You think, well, let’s see if we can make the code a bit shorter, a bit faster, a bit cooler.

[Joe] (20:49)

Right. But it also raises the question, there are really good reasons why you do want to use open source software. You don’t really want to recreate all these different components because that becomes inefficient. There’s a lot to gain from leveraging open source software, in part sharing the vulnerabilities so that you can ensure that your software is protected, but also to ensure that you don’t have to rebuild something and then maintain something that would otherwise be supported and generally available and used by thousands of other software companies or products or people or teams.

[Paul] (21:24)

In other words, if you make some tweaks that just happen to work for you, then when you go back to the community and say, hey, look, I made these changes, you should use them too. Not everyone else might agree with you and you might end up with something that you have to maintain completely independently with no help for years and years and years in the future. Whereas if you’ve been a little more agreeable to the community, then you’d have the community on your side forever more.

[Joe] (21:49)

And to Salim’s point earlier, by checking these things and doing it right in the first place, you get the benefit of the open source community, and therefore its support going forward. And as a result of that, it does raise some questions about different use cases. As we talked about earlier, when a company acquires another software company, they do need to check for compliance with these open source licenses. But your product team should also be checking this before they release code.

And there’s all sorts of good ways to enforce that teams do check, so enforcing license checks like that at build time before you release software is one of those great opportunities to ensure you don’t fall into the cycle of having to catch up later after your customers find out the software could be in violation.

[Paul] (22:35)

So, Salim, for those of our listeners who aren’t developers themselves, do you want to say a little bit about CI/CD? That’s continuous integration, continuous deployment or continuous delivery. And the idea of a CI/CD pipeline, and how that influences the way that most people build software these days.

[Salim] (22:59)

Sure. At a high level, a CI/CD pipeline allows you to say: I think this version of software is ready to go to a customer. Generally, what people want their pipelines to do is test the software, build it in a final form, and deliver it where it needs to go. Before you deliver your software anywhere is the best time to make sure you haven’t introduced additional vulnerabilities and haven’t introduced additional license problems. Do that in your pipeline. When you’re saying, hey, I think this neat little widget is going to be something cool that customers will like, let me pull it in from this open source library; yep, it works as I expected; let’s ship it. Then you come to find out that was a GPL widget that somebody built in their spare time and feels very strongly about. They are able to determine that you consumed that software, and they come after you and that cool widget. If you did not have a license check in place in your pipeline, you are now downstream and it’s too late.
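As a sketch of what such a gate can look like, the script below reads a CycloneDX JSON SBOM produced by an earlier pipeline step and fails the build if any component carries a denylisted license. The file name and the denylist are illustrative assumptions; a real policy would come from your legal team:

```python
import json
import sys

# Example policy only: licenses not approved for linking into
# proprietary firmware. Adjust to your organization's actual policy.
DENYLIST = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

# "sbom.json" is assumed to have been generated by an earlier build step.
with open("sbom.json") as f:
    sbom = json.load(f)

violations = []
for comp in sbom.get("components", []):
    for entry in comp.get("licenses", []):
        lic_id = entry.get("license", {}).get("id")
        if lic_id in DENYLIST:
            violations.append(
                f"{comp.get('name')}@{comp.get('version')}: {lic_id}"
            )

if violations:
    print("License policy violations:")
    for v in violations:
        print(" ", v)
    sys.exit(1)  # non-zero exit code fails this CI/CD stage

print("License check passed.")
```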

[Paul] (24:01)

Now the continuous delivery side, I guess that’s much more relevant for things like web apps, where you can update them every afternoon if you want, than it is with embedded devices where you may deliberately want to minimize the number of times that you have to do an update because it’s harder and takes a lot longer to apply them for small devices which are distributed globally. So what are the emerging tools or practices other than just saying CI/CD that companies that make and sell embedded devices ought to be looking out for if they haven’t already crossed this licensing correctness bridge?

[Salim] (24:48)

We’ve talked about license enforcement. We’ve talked about vulnerability management. That’s important. But to reiterate a point that I think we’ve made throughout: for embedded software, it is more critical than usual to catch things early. A twist on that, though, is: what can you integrate early that will protect you later? So, yes, if you can integrate a license compliance check, if you can integrate a vulnerability check, great. But what else can you do to protect yourself later? There are embedded RASP solutions where you can introduce security at build time so that, once you’ve shipped, even if a vulnerability is detected, you can still know that you are protected from that vulnerability.

[Paul] (25:32)

Joe, do you just want to say what is meant by that term, RASP?

[Joe] (25:36)

RASP stands for runtime application self-protection. And there’s different variations of it. From the RunSafe perspective, what we consider a great runtime protection is the ability to ensure that you can prevent exploitation at runtime by implementing RunSafe memory safety protections. And in that case, what we do is we relocate where functions load into memory uniquely every time the software loads, making it very difficult, if not nearly impossible, for the attacker to know where a weakness or a vulnerability is in the first place.
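To see the general idea of load-time randomization at work, run the snippet below a few times on a Linux or macOS machine; the reported address of printf will normally change on each run, because the C library lands at a different base address every time the process loads. Note this demonstrates ordinary operating-system ASLR, not RunSafe’s per-function relocation; it is only an analogy for the “moves every load” behavior Joe describes:

```python
import ctypes
import ctypes.util

# Load the platform C library and ask where one of its functions
# ended up in this process's address space.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
addr = ctypes.cast(libc.printf, ctypes.c_void_p).value

print(f"printf is at {addr:#x} in this run")
```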

[Paul] (26:14)

So the idea there is that it is an active defense that happens every time the software runs, rather than merely a build-time precaution that makes sure you don’t put the wrong stuff in there.

[Joe] (26:27)

It is an active defense. I think ultimately the biggest difference RunSafe brings to the table is other organizations would have put some other kind of software agent or wrapper around a solution to create that runtime defense. With RunSafe, you can achieve the same outcome without adding any new software on the target system.

[Paul] (26:48)

So presumably the idea there is that, given that it often is very difficult and time-consuming to update embedded devices, you don’t want the situation where, if some modest vulnerability appears, there’s little you can do except push out patches, possibly to the chagrin of your customers. Instead, you may be able to say: look, we’re proactively protected against that, so if we only patch it later on, when we do a whole load of other changes, we can show you that that is good enough.

[Joe] (27:23)

And imagine the value if you can generate a Software Bill of Materials, ensure license checks are made before you distribute software, and you can protect against vulnerabilities, whether a patch is available or not, all at the same time, leveraging RunSafe. So that’s my little plug for RunSafe this week. You are able to do all of those things with RunSafe Security.

[Salim] (27:47)

The whole point is to do it early.

[Paul] (27:49)

It sounds like Mr. Miyagi in The Karate Kid: best way to avoid punch, no be there. It is very hard to fix these things retrospectively. So as far as you can, you want to make sure that you’re compliant right from the get-go, even if that adds a tiny little bit of complexity and time to that build-and-ship process that you have. It will work well for licensing, and as Joe says, it will also help you with knowing what vulnerabilities you and your customers are exposed to in the future. Because if you can’t answer the licensing questions, you probably can’t answer the vulnerability questions either, can you?

[Salim] (28:27)

I think that captures it really well. You can’t put yourself in that position to find out.

[Paul] (28:31)

There isn’t a magic bullet that can just fix these things retrospectively. So with that, let me just say thank you so much, Joe and Salim, for talking about what is a thorny problem that is actually very critical in software engineering, even though it’s legal rather than technical. So thank you so much for your thoughtful insights. Thanks to everybody who tuned in and listened. That is a wrap for this episode of Exploited: The Cyber Truth.

If you enjoy this podcast, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media too. And don’t forget to tell everyone in your team about us. Once again, thanks for listening. And remember, stay ahead of the threat. See you next time!

The Asymmetric Advantage: How Cybersecurity Can Outpace Adversaries
https://runsafesecurity.com/podcast/asymmetric-cybersecurity-advantage/


Defenders don’t have to outspend or outnumber attackers—they just need to change the rules.

In this episode, RunSafe Founder & CEO Joseph M. Saunders explains how organizations can adopt an asymmetric cyber defense strategy that dramatically reduces exploitability across entire classes of vulnerabilities. Instead of reacting to every new CVE, Joe outlines how automated build-time protections can neutralize memory safety flaws, disrupt attacker workflows, and free teams from constant firefighting.

Joe and host Paul Ducklin explore why adversaries currently enjoy a resource advantage, how embedded and OT systems face unique risks, and why a shift toward Secure by Design is essential for resilience across critical infrastructure.

Watch to learn:

  • How to flip attacker economics to favor defenders
  • Why memory safety flaws remain the top driver of cyber risk
  • Where patchless exploit prevention fits into modern security strategies
  • How AI-generated code could introduce new silent vulnerabilities
  • What teams can do today to build more resilient systems for tomorrow

If you’re building or protecting devices that must run safely for years—or even decades—this is a must-watch conversation.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security.

[Paul] (00:06)

Welcome back everybody to this episode of Exploited: The Cyber Truth. I am Paul Ducklin and I am joined by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:21)

Hey Paul, I’m ready to shake it up and duke it out. Let’s go.

[Paul] (00:25)

Before we start, it’s probably worth saying that we just happened to be recording this on 11.11, Armistice Day, or Veterans Day, as you call it in the US. So we should say, lest we forget. And when we’re thinking about things that we should not forget, and what we can do for the greater good of all, Joe, this week’s topic is a lovely one.

The Asymmetric Advantage: How Cybersecurity Can Outpace Adversaries. To read some angles of cybersecurity coverage in the media, you’d think that cyber criminals and state-sponsored attackers are light years ahead, and we’re never going to catch up, and we’re all doomed. But you don’t see it that way, do you?

[Joe] (01:16)

Well, I think competition between China and the U.S. actually elevates the question, because if you think about China: FBI Director Wray, about a year and a half ago, almost two years ago now, demonstrated that they have a 50-to-one manpower advantage in so-called cyber warfare. And so if China has a 50-to-one advantage over the U.S., they have a massive advantage over everybody else.

And so I do think it begs the question, how do you come at the problem if someone has a 50-to-one advantage?

[Paul] (01:53)

I guess implicit in that 50-to-one advantage is not just that they have a much bigger population than the United States, they don’t have 50 times as many people, it just suggests that they’ve got more money to throw into the problem as well, and maybe more laws like, hey if you find a vulnerability you have to tell the government first and then they decide which ones get revealed. So if the playing field isn’t level, then you have to find a cool way of making the balls roll differently, don’t you?

[Joe] (02:24)

You do. And your point is valid that it’s beyond just the manpower; it’s the other advantages too. Well, one thing that can help to neutralize that advantage is technology and innovation. That’s where the concept of an asymmetric approach to cyber defense comes from. How can you change the playing field? How can you shift the tectonic plates in a way that resets the playing field, resets the rules, or at least changes the dynamics between adversary and defender? That’s where there is all sorts of room for creativity and innovation, and certainly for approaches to cyber defense that help shift that balance in general.

[Paul] (03:10)

Yes, it’s always worth investing in things that make bad things less likely to happen, isn’t it? A great example might be automotive safety or road safety. It’s my understanding that in the United Kingdom, if you go back 100 years to the 1920s, there were far fewer cars and far fewer people and far fewer miles driven. Despite that, there were still more road deaths then than there are now. And some of that is that drivers have got better, and driver training has got better, and licensing has got more relevant. But a lot of it is that the vehicles we drive around in are just harder to do damage with by mistake, because we’ve put effort into making them safer even if the driver is still a… that seems to me pretty strong evidence that you can make a huge difference.

[Joe] (04:10)

It can make a huge difference. In traditional cyber defense, you’re thinking about: how do I patch this vulnerability to stop that attack? Well, that’s a sort of one-to-one response, and you’re always reactive; you’re always playing defense, in a sense. But how can you go on offense and create greater resilience by shifting the rules? In your automotive example, the rule shift is not how do we improve security,

but how do we ensure safety, and that safety mechanism over time has meant fewer deaths. And also from a security perspective, as we see productivity gains in autonomous driving and the like, well, the auto industry is very attuned to safety, so security in an autonomous world must equate to safety. My point is, ultimately, if you think about it as traditional defense (how am I going to patch this vulnerability?), then you may not have the breakthrough idea. But if you think about it from a safety perspective, or a different perspective, then you’ve got an opportunity to make a step-function shift in cyber defense in general. And so I have other examples I’d be happy to go into.

[Paul] (05:24)

Why don’t you start with one of those examples right now, Joe. Just lead away, because I think it’s fascinating how much you can do if you pick a few smart things to do that affect the entire playing field, as it were, rather than, as you say, just saying, well, every time the other guy gets a body blow in, I’ll make sure I get a body blow back. Every time he hits me in the head, I’ll make sure I return a blow to the head.

Even if you win in the end, it’s going to be a bit of a Pyrrhic victory, isn’t it? You need something that kind of rewrites the rules so that it’s too hard for the other guy to continue.

[Joe] (06:03)

Certainly counterpunching is a form of deterrence. And I think that leads to a difficult situation that escalates, because it may continue: maybe you fight back a little bit harder, you punch a little bit harder, and someone else wants to retaliate again. To give you two examples: certainly near and dear to the founding story of RunSafe is the story of memory vulnerabilities in software. These vulnerabilities have existed since the eighties, for as long as we’ve known about vulnerabilities at all.

[Paul] (06:35)

Well, we just passed the anniversary of the internet worm, didn’t we? The 2nd of November. How much we have learned in all those years, 37 years, Joe. It was weak passwords, misconfigured servers, and a buffer overflow. Plus ça change, plus c’est la même chose.

[Joe] (06:41)

Yes. The more things change, the more they stay the same. Patching vulnerabilities has sort of been a go-to, necessary step for everybody. But at some point, constantly chasing patches is exhausting; you’re always one step behind the adversary, and you’re always reactive. So can you change the playing field? Can you make it so that even if something is not patched, you can still prevent exploitation?

Changing that mindset is what we set out to do at RunSafe. Some folks might call it patchless security, meaning even if there isn’t a patch, you can still prevent the exploitation, without even knowing what the attack vector actually looks like before the vulnerability is discovered. How do you prevent a zero-day from becoming a zero-day? The way we do it at RunSafe is we address the vulnerabilities in a different way: we relocate those functions uniquely in the software every time the software loads on a device out in the field. The benefit of that is that the attacker never knows exactly where that vulnerability is going to be in memory. That’s a very specific example, obviously near and dear to the RunSafe founding story, of finding a different way to do cyber defense that is asymmetric.

[Paul] (08:22)

Now Joe, you’re not claiming that this obviates the need for patches when they’re required, or that everyone else has got it wrong, are you? But what you are saying is that it’s better to be in a situation where, if you can’t get a patch out, or if it’s going to take weeks, months, or even years, which is much more likely in an embedded system than it is on the average Windows laptop, then you still have a fighting chance.

And that also means that you don’t need to break into a sprint every time there’s a vulnerability reported, with the result that you’re basically sprinting to finish a marathon. You’re able to pick the sprints that you need to do for the patches you absolutely cannot avoid and not spend time just always having to counterpunch.

[Joe] (09:14)

Exactly right. If you can smooth out the patching process, free up resources in a more predictable way, and introduce updates in a more rational way, not a reactive way but a predictive, proactive way, then you can focus your development efforts on new feature development, as opposed to constantly reacting and chasing the next patch or the next fix or the next bug. That’s where the tectonic-plate shift comes from: by redefining the landscape, you’re changing the cost structure. It really becomes an economic difference as well, this asymmetric concept. It’s a very powerful thing. And let me give you one other example, which isn’t really about security, but it is about innovation and technology. For 40, 50 years, maybe even longer, Moore’s law was all about doubling capacity and compute resources every 18 months. That was sustainable only for so long. At some point it’s going to taper off.

[Paul] (10:22)

Yes, you’ve run out of nanometers.

[Joe] (10:25)

You run out of nanometers. So what do you do? You’ve got to come up with something different. The compute needs and the energy needs in today’s AI environment are completely different than they were 5, 10, 15 years ago. You look at GPUs, you look at Nvidia, and you look at what’s going to happen going forward with things like system-on-wafer technology. If you can find a way to combine four or eight chips on one wafer, so to speak, without those things overheating, then all of a sudden you’ve got a massive increase, a step-function increase, in compute resources. Those are the kinds of things, those breakthroughs in technology, that I think make the interesting economic shifts out there.

[Paul] (11:15)

So Joe, what do you say to those people who can be quite vocal on technological forums on social media about the real problem being, say, C and C++ and old school languages, and what we need to do is throw out the baby, the bath, and the bathwater, and rewrite absolutely everything in a memory-safe language like Rust instead. Could we do that even if we wanted to?

[Joe] (11:43)

I think you can do it in some sectors. I don’t think it’s easy to do in embedded software in certain areas across critical infrastructure. Part of the reason for that is there are compatibility issues. There’s also resource availability and know-how issues. And then of course there’s economics that work against the industry in a way, because the buyers of the technology, those that are buying these embedded devices, expect to capitalize those over 10, 15, 20, 30 years. And so they don’t necessarily just want to update software, like in a web-based environment or cloud infrastructure environment, where it might be a lot easier to apply a patch. They also don’t want to replace the capital investment that they made because they’re trying to eke out performance for many years. In that example, there’s a reason there’s an economic benefit to come up with a way to ensure your software is memory safe now, but you don’t have to go through the pain of rewriting all your software in a memory safe language, because that will prevent you from investing in new innovation and new breakthroughs by rewriting everything. You’ll be crowding out your development effort focused on solving the memory safety issue.

[Paul] (13:01)

And it wouldn’t be a very asymmetric shift either, would it? If you think, well, we’ve got an adversary who’s got 50 times as many people on the attack job as we have, and they got 50 times as much money. What are we going to do about it? A solution that says, well, why don’t we just stop doing what we’re doing, produce no new software, no patches for a few years, and spend 50 times as much money and start again? That’s a little bit of a fool’s errand, isn’t it?

[Joe] (13:30)

Yeah, it’s almost the opposite step function: there’d be a step function backwards for a period of time in order to get the benefits. So ideally you can have your cake and eat it too: have memory safety achieved today, without rewriting all your code, so that you have resilience in the investments that you’ve made, all the while freeing up resources to do new development in the areas you want to, not just rewriting legacy code to replace it.

[Paul] (14:05)

And Joe, what would you say to people who note that memory-based attacks against systems like iOS or macOS and Windows are now, let’s be fair, much, much more expensive and much more difficult? In other words, there has been a somewhat asymmetric shift thanks to memory protections that have been put in, like ASLR (address space layout randomization), stack protection, and control flow guard. All of that stuff works well on desktop- and server-scale systems, doesn’t it?

Because you can build a bigger operating system and a bigger cocoon in it. But with embedded devices, it’s more like saying, well, we can’t wrap the whole car in a million airbags, because it still has to get down these tiny lanes. And you’re stuck with those tiny lanes in the embedded market, aren’t you? A) because the devices are embedded and they’re supposed to last for decades, B) because there are zillions of them, and C) because they’ve been tested and proved to work correctly from a safety and a timing and a performance regulatory point of view as they were. New software might be more secure from a vulnerability point of view, but it won’t necessarily meet the safety requirements that the original software was approved for.
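For readers who want to check whether a Linux binary was built with some of the protections Paul mentions, here is a rough sketch that shells out to the binutils readelf tool: an ELF type of DYN indicates a position-independent executable that ASLR can relocate, and the presence of the __stack_chk_fail symbol indicates stack-smashing protection. The target path is illustrative, and this is a heuristic, not an exhaustive audit:

```python
import subprocess

def check_hardening(path: str) -> None:
    """Heuristic check for PIE and stack-canary support in an ELF binary."""
    header = subprocess.run(
        ["readelf", "-h", path], capture_output=True, text=True
    ).stdout
    # Position-independent executables report an ELF type of DYN.
    pie = "DYN" in header

    symbols = subprocess.run(
        ["readelf", "-s", path], capture_output=True, text=True
    ).stdout
    # Binaries compiled with -fstack-protector* reference this symbol.
    canary = "__stack_chk_fail" in symbols

    print(f"{path}: PIE={'yes' if pie else 'no'}, "
          f"stack canary={'yes' if canary else 'no'}")

check_hardening("/bin/ls")  # illustrative target
```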

[Joe] (15:29)

Yeah. And we’re operating in highly constrained environments where the compute resources might be restricted as well. Safety standards are certainly key aspects: you have to invest in getting safety certified in general, and that can be a lengthy, laborious process. You have to demonstrate deterministically that everything will still operate the way you expect. So certification is an investment on the product manufacturer’s side to ensure that safety. But also, the hardware that’s already been invested in may come with low power and low compute resources, and you can’t just rewrite that in new software and expect that you’re going to be able to meet all the requirements that are on there.

You have performance issues, you have hardware constraints, you have economic constraints, and you have policy and regulatory constraints around safety certifications, all of which make rewriting a bigger task than just wiping things out and starting over. And if you think of the environment on laptops: as those become more and more powerful, you can afford more and more compute resources going towards managing memory and managing exploit prevention in general. You can’t necessarily do that in these highly constrained environments in critical infrastructure, say in the energy grid or in a data center, where this stuff has to operate 24/7 or the data processing just won’t happen.

[Paul] (16:58)

Because for a lot of these devices, although I guess technically they have an operating system layer, it’s not like macOS or Windows, is it? There you have the operating system, which is multiple gigabytes’ worth of stuff with a huge number of libraries, on top of which you install apps. Here, you want to have exactly one app that performs exactly one function in an exact and repeatable way.

So the software and the operating system are kind of one thing, aren’t they? So you can’t just wait until the operating system vendor says, oh, we’ve added all this new extra stuff. Oh, and we can sell you some cybersecurity tools, EDR stuff that you can add in as well. You have to, if you like, go back not exactly to basics, but literally to fundamentals so that the entire system, the entire firmware you deliver runs the same code. So you don’t get to change the code, but you get to change its exploitability in a way that essentially ruins the determinism of an attack without affecting the determinism of its real-time behavior.

[Joe] (18:09)

Yeah. And I like to say functionally identical, but logically unique. Right. And what I mean by that is from the attacker’s perspective, it looks different, even though functionally everything behaves the exact same way as you would expect. And that’s the premise behind relocating where functions load into memory at load time so that the system still operates. But from the attacker’s perspective, they can no longer find the vulnerability itself because it’s moved from the last time they’ve seen it.

[Paul] (18:39)

So Joe, what other threats do you see affecting particularly the embedded marketplace, where we can effectively counterpunch without landing one blow at a time in exact lockstep with our adversaries, but by building things that are more secure from the start? What changes do we need, both technically and almost socially from a DevOps point of view, to be able to build software that is sufficiently similar that it still passes all its regulatory checks, but sufficiently different that it’s no longer as easy for an attacker to break into.

[Joe] (19:21)

Well, I think you hit on the notion of Secure by Design. If you think about having highly reproducible builds that allow you to add in security as you’re compiling software, as opposed to trying to defend the network and prevent people from getting in, it’s a much more efficient process to incorporate security into your software development process in the first place: adding in security protections there, and identifying and analyzing the vulnerabilities at build time, so you have a chance to understand what forms of mitigation are needed. You can also analyze how those vulnerabilities shift over time. But I do want to offer maybe a cautionary tale as well, which is that there are movements, of course, where generative AI may start to produce some code. And I do think that could be an asymmetric shift in software development; certainly people get great productivity gains out of the software development process. But let’s think about this for a second. There are plenty of open source software components that would work in these embedded systems quite nicely. And so we should focus the generative AI development efforts on writing new code in those areas where new code is needed. We shouldn’t necessarily try to rewrite existing open source components using AI. That almost feels like a waste of time; sure, maybe you could get some performance gains here and there, or you might want to have something that’s a little different from what everyone else is doing.

But in the end, one of my fears in this whole equation is if a generative AI application is writing code and it rewrites an open source component so it’s a near lookalike, but it’s not the same code base, it becomes very hard to identify vulnerabilities that someone else might see in the true open source component that also exists in the near look-alike open source component that was written in generative AI. And so you lose the effect of sharing information about vulnerabilities and disclosures about vulnerabilities and information about known exploits. And someone might have a near look-alike version of that, have no idea it’s the same thing as an open source component. And it still may have its own memory-based vulnerability, let’s say. And so that system’s exposed without the network effect of sharing vulnerabilities across open source systems in general. 

So I think it cuts both ways. Technology could lead to productivity gains, but could lead to some new security exposure and enable vulnerabilities that are hard to detect. And as a result of not getting the benefit from the disclosures through, say, the CVE program that hopes to share with everybody what the underlying vulnerabilities are in certain code, in certain open source software, it could put these systems at risk even further. But in that case, I do think taking this asymmetric shift in cyber defense also helps. It can prevent exploitation even on that AI-written code, even when we don’t know that the vulnerability exists in that code in the first place.
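One modest defense against the “near look-alike” failure mode Joe describes is to pin vendored components to the hashes of the official upstream releases, so that any AI-assisted (or human) rewrite is flagged before it silently falls out of the CVE-matching net. A minimal sketch, with an illustrative file path and a placeholder digest:

```python
import hashlib

# Map of vendored files to the SHA-256 of the official upstream release.
# The path and digest below are placeholders for illustration.
KNOWN_UPSTREAM = {
    "third_party/parser.c": "<sha256 of the real upstream release file>",
}

for path, expected in KNOWN_UPSTREAM.items():
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    if actual != expected:
        print(
            f"WARNING: {path} no longer matches upstream; vulnerability "
            "disclosures for the upstream project may not cover this copy."
        )
```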

[Paul] (22:41)

So Joe, if we look to the future, what do you think are the biggest challenges and opportunities? Not just for cybersecurity in general, but more specifically in the OT arena. To do things that with hindsight we wished we’d done 10 or 15 years ago, but we didn’t. And where it’s not quite as easy to fix the sins of the past as it is, as you like to say, with a web app. Where you can quite literally fix it between one person visiting the website and the next visitor arriving.

[Joe] (23:16)

Yeah, I think some of the challenges and some of the opportunities center around exploitability: knowing where there is exposure, where the vulnerabilities exist, and whether they’re exploitable. If we can take an asymmetric shift that takes a majority of the vulnerabilities off the table and makes them not exploitable, that means we can focus development in other areas, and that’s a tricky thing. For example, in the medical device arena, a big part of vulnerability management, whether it’s code quality or exploitability itself, is looking at those items and demonstrating that things are no longer exploitable; that is really kind of the standard in order to ship software. I think the opportunity is to look for ways to prevent exploitation in an asymmetric way, so that we can focus our development in other areas.

[Paul] (24:15)

And it sounds as though things like the Cyber Resilience Act, the CRA, in the European Union, even though it feels like quite a challenge to many people, could have a very positive effect because it’s sort of saying that if you make software and you sell it, then you have to take the liability for what you’ve built and therefore it makes economic as well as ethical and social sense to get it right in the first place.

In other words, it’s a little bit of a stick rather than just a carrot that will help get everybody moving, even those who may have been a bit reluctant so far. Would you agree with that?

[Joe] (24:56)

I would. And I think if you look at the Cyber Resilience Act as a blessing, if you see it as an opportunity to open up the way you think about resilience and cybersecurity, and as an opportunity to improve your overall processes, including your software development practices, and to incorporate security from the get-go, it’s going to lead to better code quality in the end. So my point is that things like the Cyber Resilience Act could be the trigger that helps you transform how you go about your software development process; that becomes the asymmetric shift in cyber defense, redoing your approach to security in the first place and building security in as you go forward. And what that means is that for new products, you can start to demonstrate these best practices, shift the landscape yourself, and change your support costs going forward.

[Paul] (25:53)

Sure, I think that’s an excellent way to finish because it turns something that I know at least some people see, like GDPR when it came out, as just this expense that they’re going to have to go through because some regulatory body said so. But actually, if you see it as something that can help you build better products that more people will want to buy and that will last longer, that’s great for all of us. So Joe, thank you so much for your passion and your insight in this fascinating topic. That is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media as well.

That means a lot to us. And don’t forget to recommend us to everybody in your team so they too can benefit from Joe’s wisdom and insight. Once again, thanks for listening, everybody. And remember, stay ahead of the threat. See you next time.

The post The Asymmetric Advantage: How Cybersecurity Can Outpace Adversaries appeared first on RunSafe Security.

Smarter Vulnerability Management in OT Systems: Building Resilience https://runsafesecurity.com/podcast/ralph-langner-smarter-vulnerability-management/ Thu, 20 Nov 2025 13:44:15 +0000 https://runsafesecurity.com/?post_type=podcast&p=255266 The post Smarter Vulnerability Management in OT Systems: Building Resilience appeared first on RunSafe Security.


Industrial control systems power the world, but their long lifespans and insecure-by-design devices make vulnerability management uniquely challenging. In this episode of Exploited: The Cyber Truth, Stuxnet authority Ralph Langner joins RunSafe CEO Joseph M. Saunders for a candid, experience-driven look at how defenders can strengthen resilience across critical OT environments.

Ralph explains why CVE-driven approaches often miss the mark in OT, where attackers can exploit built-in features just as easily as known flaws. Joe discusses how memory-based vulnerabilities continue to open doors for ransomware groups and nation-state actors and why eliminating entire exploit classes offers a powerful defensive advantage. Together, they break down the operational realities, architectural gaps, and practical steps teams can take today to meaningfully reduce OT risk.

You’ll learn:

  • Three types of OT vulnerabilities and which ones matter most
  • Why insecure-by-design systems will remain in place for decades
  • The role of ransomware, IT-side exposure, and poor segmentation
  • How class-level protections strengthen resilience across the device lifecycle
  • Incremental improvements organizations can implement right now

If you’re responsible for OT or critical infrastructure security, this conversation with one of the field’s most respected voices offers a grounded roadmap for smarter, safer vulnerability management.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker – Ralph Langner, Founder and CEO of Langner Inc.:

Ralph Langner is founder and CEO of Langner Inc. and host of the weekly Common Sense OT Security webcast (each Tuesday at noon on LinkedIn, YouTube, and X). He is one of the founders of the OT security field and received global recognition for his analysis of the Stuxnet malware. Langner’s OTbase OT asset management software is used by Fortune 500 companies in Manufacturing and Oil & Gas.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security. 

[Paul] (00:04)

Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:20)

Greetings, Paul. Great to be here.

[Paul] (00:21)

You’re on the road again, aren’t you? Just in case the drinks get delivered in the background and people wonder what the background noise is. You’ve taken time out of your traveling schedule to be with us, so thanks for that. We have a very special guest today, and that is Ralph Langner. He is Founder and CEO of Langner Inc. Hello, Ralph.

[Ralph] (00:45)

Hi Paul, thanks for having me.

[Paul] (00:47)

A pleasure. Now, Ralph, many of our listeners will recognize your name and associate you very specifically with the Stuxnet virus, and I’m sure we will have at least a bit to say about that. However, our topic is much more general than that.

Our title is “Smarter Vulnerability Management in OT Systems.” I’m sure we will talk about all sorts of vulnerability management because it’s rare these days that you have an OT system that is not interconnected to some IT system at some point. However, before we start, I’d like you and Joe to give our listeners an insight into what drew you into operational technology and industrial control systems rather than a more traditional sort of IT career.

[Ralph] (01:40)

Well, for me, that was a very long journey because it started very early in the 80s when I got fascinated with cyber-physical systems, and you may not believe it, but at the time I was studying psychology.

[Paul] (01:54)

That probably came in handy later on, right?

[Ralph] (01:57)

Yeah. And so I was focusing on things like psychophysiology, where the mind and the body interconnect. And a couple of years later, I found myself working in software, and it so happened that this field where information meets physics also attracted my attention. So I focused on developing software products for connecting the first PCs at the time with factory automation equipment.

And then a couple of years later, the whole industry shifted from point-to-point serial connections to Ethernet and to networks as we know them today. And since I had a pretty good idea of how those protocols operated, et cetera, it was totally clear to me that the whole industry was moving into one gigantic cyber risk scenario. So from that day, I shifted my attention to cybersecurity, and especially in the OT space.

[Paul] (02:56)

Joe, your turn.

[Joe] (02:58)

In my case, I followed Ralph and joined the party much later, I would say around 2015, 2016. And for me, it was the intersection, maybe not of psychology, but of economics and technical issues: when you combine those, it becomes a national security issue. Given the rise in geopolitical tensions and the targeting of energy, data centers, and other systems like that, the consequences are quite high, and the problems are challenging because it’s very hard to update these systems.

For me, it became sort of an economic question: how do you solve a large swath of the vulnerabilities in an area that’s constrained both technologically and operationally? It’s a very interesting angle, and the national security implications are quite high at the same time.

[Paul] (03:49)

And if we go back to, well, around 2010 and talk about matters of national security, Ralph, that does bring us to the infamous Stuxnet virus. For me, one of the most fascinating things about that malware when it was decompiled is that there was still a part of it that was essentially impossible to understand, namely, what was it supposed to target? It’s going to mess with some motor control systems, but which ones, where, and why?

[Ralph] (04:23)

For me, it was very fascinating: the fact that we did not know the target, and we didn’t even know whether the malware had already been executed or not. That certainly got me alerted, because I was under the impression that, well, we might have a chance to prevent disaster from happening. The vendor in question at the time, Siemens, said as much in their public statements: we might never know what the target is. So it pretty much got me up on my heels because, understanding the sophistication of the malware, I saw that disaster might strike, and it might well have struck in the United States or in Germany. And this is something that I thought would be worthwhile preventing.

[Paul] (05:13)

So, in an OT-type attack, if you can’t be certain what the intended target is, do you have to start with the assumption that it could be anybody? And if so, how do you prioritize which potential attacks you should take on first?

[Ralph] (05:32)

To get back to the first part of your question: let’s just say during your evening walk, you come across a rocket launcher. That’s scary. But you will figure out rather quickly that whoever put that rocket launcher there, whoever is intending to use it, is not going to use it against the bakery next door or the pizza parlor, right? So the size and nature of the weapon, especially if it’s custom built, tells you something about the intended target. 

When you see the sophistication that you can see in Stuxnet, when you see what is possible with cyber-physical attacks, you will certainly get the impression that this is going to be the next arena of warfare; we’re going to end up in cyber war. That is certainly what I had expected. And then, fast forward, it actually never happened. As a striking example, if you look at the ongoing war in Ukraine, cyber doesn’t play a role. And honestly, I have to say, I’m very surprised about that. What we see instead is old-fashioned kinetic war with the addition of drones.

[Joe] (06:40)

I need to jump in there. I mean, there are active cyber attacks that have happened in Ukraine, and in fact, the cyber attacks precede many of the kinetic attacks. If you think about activity groups like KAMACITE and Electrum, which are active in Ukraine and are associated with Russia, there are significant cyber attacks happening in Ukraine. I look at Taiwan and I see that there are 2.4 million cyber attacks a day, and a lot of them originate from China itself. 

Part of the activity that happens in Ukraine that I think is instructive for Taiwan is that groups like KAMACITE will enter by way of IT systems and provide access for activity groups like Electrum that are targeting OT systems. I have seen plenty of data, and I’ve seen heat maps following the cyber attacks that precede kinetic attacks. And it’s all in the western part of Ukraine, where the kinetic attacks occur.

[Paul] (07:38)

Joe, would you also say that there’s a significant risk, even with vulnerabilities or exploits that are not specifically used for what you might traditionally call an attack, where you’re aiming to break in and force somebody else’s industrial control systems to do something unintended? Because there’s an awful lot of data available that can be milked out of the average industrial control system if you think of something like a power grid or a sewerage control system or traffic control systems across a large city. And all of that matters too, doesn’t it?

[Joe] (08:18)

I think it does matter. We’ve seen movement in that area in the past couple of years in the United States. And part of it is knowing and testing: if you have a water facility, what is its ability to respond? You can check and test and see what the responses are and learn a lot about a system; if you’re an attacker, you can then think about how you might approach other water facilities across the country.

[Paul] (08:44)

Managing all of this is a little bit different from a traditional IT system and very different from a traditional web app where you can just change the code and the next person who arrives at the website gets the new version. So what are the biggest challenges and considerations, Joe and Ralph, that you find in what we might refer to traditionally as vulnerability management in OT or industrial control systems?

[Joe] (09:10)

Well, from my perspective, we’ve got the product manufacturers who produce these technologies, and of course, then you have the operators of infrastructure who are deploying those assets inside their infrastructure. The operators need to protect the network itself, but there is a separation of cost and consequence: if the operator ultimately bears the risk, but the product manufacturer produces that technology, there needs to be a lot of coordination between the organizations. So there’s the difficulty of getting updates out into the system, and there’s the capitalized expense; this is where my economics interest comes in. 

Equipment purchases are expected to be capitalized over 10, 20, 30 years, and these gadgets, these devices, these systems are intended to operate for a long time. Their age creates part of the vulnerability; as we’ve seen with memory-based vulnerabilities on these embedded, cyber-physical systems, the vulnerabilities have been around since the 80s. Now, there are human elements that help, and there are technical approaches that help, but I think the distributed nature, the separation of product manufacturer from operator, and the complexity of getting updates out to these systems all contribute.

[Ralph] (10:29)

So let me give you my perspective on this. First of all, I think it’s important to sort out the landscape when you think about OT vulnerabilities, because there are so many different aspects to it. Most people, when they hear the term OT vulnerability management, think it’s just about knocking down CVEs. And if you try to do that, you face a unique challenge, because there are hundreds of thousands of known vulnerabilities for which you have a CVE.

So you would need an army of engineers to actually fix even half of these. But here’s the other thing, since we have been talking about cyber war, et cetera, that most people don’t know: a competent attacker would not exploit those CVEs. Just to give you an example, for pretty much every single PLC model with an embedded web server, you will have CVEs, because it appears impossible for mere humans to design a web server to put on a PLC that doesn’t allow for cross-site scripting and all that nonsense. 

But the competent attacker would never try to take advantage of these vulnerabilities because the competent attacker knows if they want to succeed in OT, they just need to exploit features. Those are not considered bugs. They are not considered vulnerabilities. They are just legitimate product features. And the backstory is that those OT systems, those controllers, etc., they’re insecure by design and they’re going to stay that way for quite a while. So this is what we have to deal with. 

Depending on the context that you discuss this in, like when you think about critical infrastructure and national security, et cetera, the CVEs don’t play that much of a role. It’s more the question: what would a competent attacker do? And this is something that you can analyze and protect against pretty decently.

And then there is even a third group of vulnerabilities in this context that you also have to consider, which is mere stupidity when it comes to configuration. Since we have been talking about municipal water facilities: when they are hacked, the problem is that somebody put the controller or HMI on the public internet without even thinking about a decent password. Those three topics are very, very different. Unfortunately, as I see it, the majority of organizations in this space, meaning the asset owners, focus too much on shooting down the CVEs, which is not the most rewarding job.

[Joe] (13:01)

I’d just add that the quantity of vulnerabilities is one thing, and the severity of vulnerabilities is another; you can start to prioritize what does matter. I think the traditional thinking of patching every single one is in fact a losing proposition, in that there needs to be a more asymmetric shift in technology that doesn’t target vulnerabilities per se, but looks at classes of vulnerabilities and eliminates the exploitation of those. 

Specifically, I’m thinking about the memory-based vulnerabilities that a lot of these activity groups are using. If you apply asymmetric technology to prevent exploitation for a majority of them, then the problem in your funnel of known vulnerabilities goes way down, and your ability to adapt goes way up. And certainly, there are ways to prevent exploitation of potential zero-days without even knowing what the attack vector is. If you apply that on the build side of the equation, then you have a much deeper set of resilience, so you can focus on the other areas of defense in depth.
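As a rough illustration of the class-level idea, here is a small Python sketch showing how treating an entire weakness class as mitigated shrinks the patch-now funnel; the CVE IDs and CWE assignments are made up for demonstration:

    # Illustrative sketch: triaging a vulnerability backlog by weakness class.
    # All IDs and CWE assignments below are hypothetical.

    MEMORY_SAFETY_CWES = {"CWE-119", "CWE-125", "CWE-416", "CWE-787"}

    backlog = [
        {"id": "CVE-A", "cwe": "CWE-787"},   # out-of-bounds write
        {"id": "CVE-B", "cwe": "CWE-79"},    # cross-site scripting
        {"id": "CVE-C", "cwe": "CWE-416"},   # use-after-free
        {"id": "CVE-D", "cwe": "CWE-125"},   # out-of-bounds read
    ]

    # With a class-level runtime mitigation in place, memory-safety CVEs are
    # treated as non-exploitable and drop out of the patch-now funnel.
    remaining = [v for v in backlog if v["cwe"] not in MEMORY_SAFETY_CWES]
    mitigated = len(backlog) - len(remaining)
    print(f"mitigated by class-level protection: {mitigated}")
    print(f"still needing individual attention: {[v['id'] for v in remaining]}")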

[Ralph] (14:12)

Let me add another facet to this context, and that would be to just look at the present threat landscape. It is very clear that the majority of actual cyber attacks in the OT space over the last couple of years were of one specific kind, and that is simply ransomware. Luckily, we didn’t see any real cyber-physical attacks, but we were seeing thousands of ransomware attacks. Even though I don’t think these ransomware operators are really targeting specific industries such as manufacturing or critical infrastructure (I think those are opportunistic attacks), it’s very clear how this whole thing plays out: it is always associated with a vulnerable Windows box. And this is where, in OT, you find a target-rich environment.

[Joe] (15:04)

Consider what could go wrong: you look at China’s pre-positioning in critical infrastructure, and you look at things like Volt Typhoon and Salt Typhoon and the like, and the problem goes well beyond ransomware. I think what most people in the industry are concerned about is also the loss of operational capability, whether it’s taking out data centers or other facilities, and the vulnerabilities in those systems by virtue of the HMI connecting to the thermal controllers and the cooling systems and things like that. 

The problem is complex. I see a great opportunity to improve the security posture by shoring up the vulnerabilities related to memory-based attacks. And just as Ralph asked, well, what could go wrong? There’s a lot that can go wrong with all these memory-based vulnerabilities across critical infrastructure. We have already seen attacks on critical infrastructure in Ukraine and Taiwan, and certainly pre-positioning by China in US critical infrastructure. I think the stakes are high and growing, and there’s certainly risk that needs to be mitigated.

[Ralph] (16:11)

Let’s just focus our attention on the more interesting cyber-physical attacks. Let’s focus on taking down a data center, or multiple data centers, for that matter. You would not attempt to do this by trying to attack all those NVIDIA machines directly. You would go through the building automation part, which is considerably easier, because usually those systems are not well protected. And unfortunately, the cybersecurity guys in charge of security for that data center never even considered all that network shit that is basically driving the whole building automation, and the lack of security that you usually find there. But this could be fixed fairly simply.

[Paul] (16:56)

Now Ralph, earlier you mentioned, almost in passing, that OT systems, in most cases, are effectively insecure by design. So what do we do, both for OT systems, for IT systems, and for their nexus, the network connections that bind them together? What do we do to change the world so that secure by design becomes something that we can take for granted, rather than something that organizations seem to avoid, either because they’re afraid they’ll never get there or because they think it will simply cost too much?

[Ralph] (17:36)

That is a wish that OT security experts have had for decades: why don’t we just try to push the automation vendors to actually build security in? But so far, it has never happened. And should it ever happen, you and I will not live to see it, for a very simple reason. You have already addressed the long lifetime of these systems; just think about the installed base. 

Presently, a rough global estimate would be that we are talking about 300 million industrial control systems, and pretty much all of them are insecure by design. If you just imagine how long it would take to replace even a tenth or maybe 20% of that installed base, it’s going to take decades. What I could envision is that through some breakthrough technology shift, we would see more secure products; when you think about the revival of U.S. manufacturing, that could probably involve new designs, new architectures that would, over time, actually address and fix that problem. However, I’m actually not that concerned about those insecure-by-design controllers, RTUs, actuators, sensors, you name it, because, well, we have certainly learned by now how to arrive at secure network design.

[Paul] (18:59)

And yet we have ransomware attacks all the time, well-documented, that affect people across the entire industrial base.

[Ralph] (19:08)

Don’t get carried away, because so far the ransomware attacks that we have seen didn’t involve the industrial control systems.

[Paul] (19:17)

What I’m saying is that they did involve the network, so the idea that we have solved the network security problem seems to be, how can I put it, wishful thinking.

[Ralph] (19:25)

Hear me out. So certainly we haven’t solved network security. We will never be able to solve network security, but we are making incremental improvements.

[Paul] (19:35)

Why can’t we make those incremental improvements in OT systems as well, without having to go after each vulnerability one by one?

[Ralph] (19:42)

Yes, certainly we are making progress in that area. And one strong push would come if you start implementing the basics. What everybody should try to accomplish rather quickly, if they haven’t done so already, is to separate the enterprise network or the IT side from those process networks with a DMZ. 

For reasons beyond my comprehension, that hasn’t happened everywhere so far, and we have seen cases where the asset owner was under the impression that they had a DMZ when in fact they didn’t. That happened in the Triconex attack, where the safety controller was compromised: the asset owner thought the networks were properly separated, when a risk assessment after the fact showed that, no, it was not a DMZ. So that would be a good starting point: just segregate those two worlds from each other.
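As a toy illustration of the segregation Ralph recommends, here is a minimal Python sketch that checks a hypothetical table of observed network flows for direct IT-to-OT paths that bypass the DMZ; the flow records are made up for demonstration:

    # Illustrative sketch: verifying that no network flow crosses directly
    # from the enterprise (IT) zone into the process (OT) zone without
    # passing through the DMZ. The flow records are hypothetical.

    flows = [
        {"src_zone": "IT",  "dst_zone": "DMZ"},
        {"src_zone": "DMZ", "dst_zone": "OT"},
        {"src_zone": "IT",  "dst_zone": "OT"},   # this one violates the design
    ]

    violations = [f for f in flows
                  if f["src_zone"] == "IT" and f["dst_zone"] == "OT"]

    if violations:
        print(f"{len(violations)} direct IT->OT flow(s): no real DMZ in place")
    else:
        print("all IT/OT traffic is brokered through the DMZ")

A check like this, run against real flow logs, is one way to catch the “we thought we had a DMZ” situation before an attacker does.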

[Paul] (20:35)

Joe, where do the issues relating to liability and lifetime come in? And I’m thinking specifically of discussions we’ve had in the past about things like the CRA, the Cyber Resilience Act in the European Union, which, as I understand it, is trying to be a stick that presses manufacturers to be liable for poor decisions they make, for insecurity by design, and also requires them to commit to the lifetime over which they will actually support their products, whether they’re IT or OT or IoT or whatever they might be. Do you think that’s absolutely needed if we want to get anywhere?

[Joe] (21:20)

Well, I think it can be helpful. Do I think it’s absolutely needed? No. But I do think the stick helps raise part of the awareness, and I also think it starts to change the thinking in terms of economic implications. Today, yes, maybe reputation could be a problem: if you are Schneider Electric and your gadgets were ultimately targeted inside an operator, that comes down pretty hard on Schneider Electric, even if it was Saudi Aramco, for example, that was attacked. There is reputational risk, but I do think that a stick that tries to impose liability, and requires other things as well, does start to elevate people’s awareness of the kinds of responsibilities they have. 

Secure by Design, building security into your products, is a good idea and also good practice. It may in fact lead to improvements in code quality and byproducts like that, simply by looking at what your development processes are, and what your methods are for building, compiling, testing, and automating those processes, to ensure that you’re producing technology that is ultimately resilient.

[Paul] (22:30)

This certainly sounds as though it will lead to a world in which we will not have the kind of response that Ralph talked about earlier in the Stuxnet incident, where it sounds like they just threw up their hands and said, well, how will we ever know? We may never find out. That seems to be a very self-serving and defeatist attitude. And if manufacturers and vendors did not think that way, do you agree that the world would be a better place from a cybersecurity perspective?

[Joe] (22:58)

Yeah, I think if people improve their software development processes, then we’ll have higher-quality code and more resilient code in the first place. And it does take folks like Ralph, who thought about what the target could be and what was going on, despite the manufacturer in that case not focusing on it. That has major implications for how we approach cybersecurity going forward. So I’m grateful to Ralph for what he has done and what he is doing. 

And I agree that there are human elements. There are also technical approaches to create asymmetric shifts. I often think about things in both a silicon-based approach and a carbon-based approach. I think we need to look for a technology solution that does create an asymmetric shift in cyber defense. But I also concede on the carbon-based side that we do need a human-based approach to ensure that you are managing the operations of your security programs effectively as well.

[Ralph] (23:59)

Let me add another dimension here, which is money. In my experience, that’s the most important dimension, because if you look back at what the automation vendors did, let me put it in simple terms: we know how to build secure-by-design products. That’s not a mystery. The real problem is that once you do that, your secure-by-design controller is all of a sudden, let’s just say, 50% more costly. It’s more expensive than the regular one that you’ve been using for 20 years now, that other model of which you have, let’s just say, 5,000 pieces in store. That is the real issue. 

We have seen that play out many, many times after Stuxnet. For example, we used to consult for automation vendors on how to design secure controllers, and then we saw those projects being stopped because the vendor realized: oh, we will never be able to sell this thing, because it’s going to cost, let’s just say, 50% more.

[Joe] (25:01)

From our side, we always promote that for a 5% increase in cost, you can solve 80% of the problem. So I think 5% is a good number in some situations, calculated on an individual product basis. And in terms of updating software development processes, these manufacturers are quite large, so if you spread that redesign process over multiple teams, it’s far less than the 50% that you suggest or might see in an individual case.

[Ralph] (25:32)

Okay, so let me make this clear: I’m talking about end-user cost. First of all, you have to factor in that the new product will certainly have a higher price ticket on it. But there’s also the cost that comes with the additional effort it takes to train your existing workforce to actually use those secure-by-design products properly; that’s another cost factor.

The way that I pitched it back in the day, well, you know, think of it as a business opportunity because all those insecure controllers will be replaced and that’s a gigantic business opportunity. But according to their calculations, it didn’t pan out so far.

[Paul] (26:10)

Ralph, I’m conscious of time, so to conclude, can I put a question and just give you and Joe a chance to give a pointed, suggestive answer to our listeners? If you could recommend just one change that manufacturers or vendors could make in the next year, what would it be?

[Ralph] (26:30)

Just be open about your security issues, I would say. And we have seen a lot of progress in that area. What I would like to see is every automation vendor making their vulnerabilities and their other problems publicly available, downloadable via a REST API or something like that, because you need automation to actually process it. That would be a huge improvement.
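As a sketch of the automation Ralph is asking for, here is a minimal Python example that pulls a machine-readable advisory feed and filters it for the products you operate; the URL, product names, and response shape are all hypothetical placeholders, not any real vendor’s endpoint:

    # Illustrative sketch: pulling a vendor's machine-readable advisory feed
    # and filtering for the products you run. Everything here is hypothetical.
    import json
    import urllib.request

    FEED_URL = "https://vendor.example/security/advisories.json"  # placeholder
    MY_PRODUCTS = {"PLC-9000", "HMI-Panel-X"}                     # placeholder

    with urllib.request.urlopen(FEED_URL) as response:
        advisories = json.load(response)   # assume a JSON list of advisories

    relevant = [a for a in advisories if a.get("product") in MY_PRODUCTS]
    for advisory in relevant:
        print(advisory.get("id"), advisory.get("severity"), advisory.get("title"))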

[Paul] (26:57)

Joe?

[Joe] (26:58)

Solving memory-based vulnerabilities at a class level instead of individual CVE and vulnerability level would result in an asymmetric shift in cyber defense and is easy to implement. So I would solve the memory-based vulnerabilities as an easy first step.

[Paul] (27:13)

Excellent. I’ll summarize that into three words: onwards and upwards. Ralph and Joe, thank you so much for your insights, and in particular for the discussion of the history, the present, and the future of OT security. I’m sure our listeners have enjoyed it greatly. 

So thanks to everybody who tuned in and listened. That is a wrap for this episode of Exploited: The Cyber Truth. If you enjoy this podcast, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media as well, and don’t forget to tell your whole team about us so that they, too, can listen to Joe and Ralph’s opinions and expertise. Once again, thanks to everybody who tuned in and listened, and remember, stay ahead of the threat. See you next time.

The post Smarter Vulnerability Management in OT Systems: Building Resilience appeared first on RunSafe Security.

]]>
Clean Files, Safe Operations: Defending Federal and OT Systems from AI-Driven Threats https://runsafesecurity.com/podcast/kelly-davis-clean-files-safe-operations/ Thu, 13 Nov 2025 15:50:47 +0000 https://runsafesecurity.com/?post_type=podcast&p=255244 The post Clean Files, Safe Operations: Defending Federal and OT Systems from AI-Driven Threats appeared first on RunSafe Security.


 

As AI accelerates both innovation and cyberattacks, federal, defense, and OT systems now face an overwhelming surge of seemingly harmless files that conceal sophisticated, AI-generated threats. Attackers are producing thousands of new malware variants per hour, hiding malicious payloads inside everyday PDFs, Office documents, and other trusted formats.

In this episode of Exploited: The Cyber Truth, host Paul Ducklin talks with RunSafe Security Founder and CEO Joseph M. Saunders and Glasswall Senior Solutions Architect Kelly Davis to uncover how organizations can stay ahead of AI-enhanced file-based attacks.

Kelly explains how Content Disarm and Reconstruction (CDR) ensures that only verified clean files reach secure networks. Joe connects these practices to embedded and OT systems, where compromised software can have physical, real-world consequences.

Together, they explore:

  • How attackers hide malware deep inside PDFs, Office documents, and workflows that users trust
  • Why detection-based security is too slow—and how AI is widening the gap
  • The four-step CDR process and its role in both inbound and outbound protection
  • How federal agencies can adopt file-level defenses using pilots, boundary controls, and workflow APIs
  • How runtime defenses and binary diversification protect OT systems from memory-based attacks
  • Why generating SBOMs at build time is essential for software supply chain integrity
  • How organizations can use technology to reverse attacker economics and regain the advantage

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker – Kelly Davis, Senior Solutions Architect at Glasswall: 

Kelly Davis, Glasswall’s Senior Solutions Architect, brings deep expertise in DevOps, IT architecture, and Zero Trust security. Previously, he was a Lead IT Specialist at the Command Control and Communication Tactical Directorate Communications Networks Division in the DoD, delivering secure, scalable solutions in high-stakes environments. At Glasswall, he applies this experience to drive innovation and resilience across the company’s cybersecurity solutions.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security. 

[Paul] (00:07)

Welcome back, everybody, to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:22)

Hello Paul, look forward to the discussion today.

[Paul] (00:24)

You’re on the road again, aren’t you? From The Woodlands, Texas, I believe.

[Joe] (00:29)

It’s fun to travel, and it’s fun to be in The Woodlands. I’m visiting family, but I’m actively engaged in working as well. So happy to be here.

[Paul] (00:35)

Yes, cybersecurity doesn’t take any rest, does it? And with that, let me introduce today’s special guest, Kelly Davis, who is Senior Solutions Architect at Glasswall. Welcome, Kelly.

[Kelly] (00:50)

Thank you, Paul. Thank you, Joe, for having me.

[Paul] (00:52)

Very provocative title this week, Clean Files, Safe Operations, with the subtitle of Defending Federal and OT Systems from AI-Driven Threats. Kelly, why don’t you open our innings by telling us what the problem is with clean files, or more importantly with files that are not clean, in IT in general and in federal government circles specifically.

[Kelly] (01:20)

Yeah, most certainly. I spend most of my time working with federal and defense customers, typically helping them solve file-based threats.

[Paul] (01:29)

And that’s not just files that are coming into an organization or somebody’s inbox. It’s also the stuff that you produce and then need to deliver.

[Kelly] (01:40)

That’s exactly it. It can be files at rest, files that are being utilized in day-to-day routines. What we do is prevent that endless game of whack-a-mole, trying to figure out what’s actually guaranteed safe and what’s not. At Glasswall, we don’t look for the bad stuff that may exist. We rebuild files to make sure they’re clean, period.

[Paul] (02:00)

So do you want to say something about the kind of risks that files such as documents or spreadsheets or PDFs or whatever it might be pose? It’s not like you get one document in March and then maybe produce an updated version in June. There are new versions of files all the time that are moving around inside and outside an organization. So how do you keep that under control? And what are the attackers trying to do to probe systems with rogue files?

[Kelly] (02:30)

To start with, we’re in a new landscape right now where AI is the big boom, right? Everything is AI. You have access to AI at your fingertips, from your phone, your mobile devices, or the web. It’s a wild world we’re in right now. So attackers are generating 20 to 40,000 new malware variants every hour, and obviously that’s extremely scary. They’re using AI to study our defenses.

[Paul] (02:54)

When we talk about malware, that’s not necessarily traditional malware like, here is a program, an executable file that does bad things. In fact, it could even be something like a document that doesn’t contain any particular malicious code, but contains malicious or misleading instructions, state-of-the-art phishing if you like, that lures people into taking actions themselves that after they’ve done so, they desperately wish they hadn’t. And unlike the old days, they’ll probably be correctly spelled, have correct grammar, and be reasonable looking. How do you cope with that?

[Kelly] (03:35)

I’ll go back to what we’re seeing with AI and how attackers are able to hide malware in the most innocuous places, like embedded in a PDF. PDF documents are extremely convoluted. If you deconstruct a PDF document, a lot of people won’t really understand this, because at the human-eye level it just looks like a nice document. But at the binary layer and the data-structure layer of a PDF, there can be hidden JavaScript and all sorts of tucked-away macros and AcroForms.

[Paul] (04:01)

Yes, the last time I personally looked at the PDF standards documents was about a decade ago. Oh my. Even then, they were something like 600 pages long. That’s a pretty daunting challenge, isn’t it?

[Kelly] (04:17)

And that’s how the attackers, with AI, are injecting the so-called payloads, tucked away within the data in the binary layers of these files. The same goes for Excel files and macros. It can look completely legitimate: you open it up, you click a cell, and then a payload detonates. That’s essentially how they’re able to manipulate and gain access to your various systems through these documents. In this landscape, files remain the primary attack vector. PDF files and Office docs are the main trusted formats, and nobody typically questions them until it’s too late. It’s a massive threat vector these days.
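As a rough illustration of the risky PDF structures Kelly is describing, here is a crude Python sketch that flags high-risk PDF tokens. Note that this is a detection-style scan for illustration only, not CDR, and the file name is a placeholder:

    # Illustrative sketch: a crude scan for risky PDF features. Real CDR
    # rebuilds the file; this only flags suspicious structural markers.

    RISKY_TOKENS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
                    b"/Launch", b"/EmbeddedFile"]

    def scan_pdf(path):
        """Count occurrences of high-risk structural tokens in a PDF's bytes."""
        data = open(path, "rb").read()
        counts = {tok.decode(): data.count(tok) for tok in RISKY_TOKENS}
        return {name: n for name, n in counts.items() if n}

    hits = scan_pdf("incoming.pdf")   # placeholder path
    print(hits or "no high-risk tokens found")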

[Paul] (04:55)

And that old-school advice that says things like, well, don’t open documents from people you don’t know is pretty much useless these days, isn’t it? It’s no use saying, for example, to someone who works in the HR department, don’t open resumes from people who don’t yet work for the company and that you’ve never met before, because that’s their job, is to open those files and see if the candidate is suitable. 

So when you talk about clean files, does that mean taking a file that has arrived, deconstructing it, and rebuilding it so that the informative content is the same as before, but all the bits that aren’t strictly necessary have been removed so that they can’t lie dormant inside the system to do something bad later?

[Kelly] (05:46)

This is essentially how it works. It’s a simple four-step process; with Glasswall, it’s called Content Disarm and Reconstruction, or CDR. Step one is inspect: we break down the file into its constituent components to validate the structure of the file. Step two is rebuild: we repair the invalid and malformed structures that could potentially be within a file at the binary layer. Step three is clean: we remove all the high-risk elements that do not match up to the vendor specification, like that PDF spec you talked about, stripping out, based on policy, the macros, the JavaScript, or the embedded files that could be within that file. And the fourth step is deliver, and the file is fully functional: you can’t tell that it has been inspected and rebuilt back to its compliance standard. The users don’t even know it’s happening when a file is being CDR’d by Glasswall.
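To make those four steps concrete, here is a deliberately simplified toy of the CDR flow in Python; real CDR operates on the file’s binary structure, not on a dictionary like this, and the element names are placeholders:

    # Illustrative toy of the four CDR steps (inspect, rebuild, clean,
    # deliver) over a simplified document model. Not Glasswall's engine.

    HIGH_RISK_KEYS = {"javascript", "macros", "embedded_files"}

    def cdr(document, policy=HIGH_RISK_KEYS):
        # 1. Inspect: break the file into its constituent components.
        components = dict(document)
        # 2. Rebuild: keep only well-formed (string-keyed) structures.
        rebuilt = {k: v for k, v in components.items() if isinstance(k, str)}
        # 3. Clean: remove high-risk elements according to policy.
        cleaned = {k: v for k, v in rebuilt.items() if k not in policy}
        # 4. Deliver: return a fully functional, standards-shaped document.
        return cleaned

    doc = {"text": "Quarterly report", "javascript": "app.alert(1)", "macros": "..."}
    print(cdr(doc))   # {'text': 'Quarterly report'}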

[Paul] (06:43)

Now, does this mean that some customers may need to adjust some of their policies and procedures, or maybe improve the tools that they use in, say, the automatic generation of documents? If someone is triggering alerts or alarms very regularly with documents that they genuinely and legitimately created, then you may actually have uncovered a flaw in the document-building process that they themselves are using. And I guess when it comes to passing documents around in secure environments, particularly ones where documents may move between different security levels, that’s a great thing to know: that someone is inadvertently not playing by the rules but never realized it.

[Kelly] (07:32)

There’s a stat there: about one in 100,000 files contains potentially malicious content, which is a significant threat surface. If you think about it, that’s just day-to-day usage.

[Paul] (07:44)

So how does the issue of deconstructing and rebuilding files differ when those files need to move between different security levels or from place to place inside a segmented network? Does that mean that there are constructs inside things like Word documents and PDF files that you tolerate at one security level but might want to strip out at a more strict security level? Because there’s more that can go wrong.

[Kelly] (08:15)

These environments are brutal for security, when you’re thinking about various classification levels, whether it’s air-gapped networks or files that need to be transferred into a different classification level. Randy Resnick of the DoD CIO’s office (the DoW now) admitted that something has gone hugely wrong, given the poor job of integrating security over the last 30 years. What we were doing was basically duct-taping certain things and poorly implementing solutions to protect the various classification levels. 

Obviously, in these environments, there are certain items that are not suitable for specific enclaves. Broken down by security levels, classification levels, and enclave levels, these various files are restricted by which personnel can view certain components. When you’re passing data through Glasswall, you have the ability to modify your policies to cleanse certain data to prevent spillage while the file is transferring into another environment, so that certain personnel can view these documents without breaching rules. So from a security standpoint, we help these organizations at those levels protect their data through transfers and air-gapped environments within the various classification levels.

[Paul] (09:32)

So it’s fairly obvious how AI comes into the attack side. Although it doesn’t create new types of attack, it makes it really easy for attackers to produce not just tens or hundreds of different variants of a known attack, but as you say tens or hundreds of thousands of new samples per day or even per hour. When it comes to moving stuff out of the organization, how do you make sure that incorrect or malicious material hasn’t been injected on the inside before it goes out? Is that a similar process just done in the outbound direction?

[Kelly] (10:13)

Exactly that. The way you clean your files inbound is the same way you clean your files outbound with Glasswall. We have specific tooling that can be put in place and embedded into specific firewalls. If you’re familiar with the ICAP protocol, it’s a protocol you can enable that allows you to proxy traffic to various servers. You can use Glasswall in that capacity, whether outbound or inbound: files that are being transported out can lean on Glasswall to sanitize the documents or redact specific content, whether they’re leaving the organization or coming into it. And if you think about what AI is doing to help attackers study their target environments, you can only imagine what we’re doing on the other side with AI to make sure we meet the standards and keep that leg up for the security tooling.
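For readers unfamiliar with ICAP (RFC 3507), here is a bare-bones Python sketch of an ICAP OPTIONS request against a hypothetical content-processing service, just to show the shape of the protocol; the host, port, and service name are placeholders, not Glasswall’s actual endpoints:

    # Illustrative sketch: a minimal ICAP OPTIONS request (RFC 3507).
    import socket

    HOST, PORT, SERVICE = "icap.example", 1344, "cdr"   # placeholders

    request = (
        f"OPTIONS icap://{HOST}/{SERVICE} ICAP/1.0\r\n"
        f"Host: {HOST}\r\n"
        "Encapsulated: null-body=0\r\n"
        "\r\n"
    ).encode()

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(request)
        reply = sock.recv(4096)

    print(reply.decode(errors="replace"))   # e.g. "ICAP/1.0 200 OK" + headers

In a real deployment, the proxy or firewall speaks ICAP to the sanitization service on the user’s behalf, which is why users never notice it happening.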

[Paul] (11:01)

We can get the AI automatically to deal with the “work harder” side of things. It doesn’t get bored looking at 200,000 new documents in a day, whereas a human would just run out of the ability to focus. And that leaves much, much more time for expert humans to work smarter, so we get to work smarter and harder at the same time.

[Kelly] (11:25)

Yes, I wholeheartedly agree, given where we are now with the advancement of AI and how it’s being used to plan offensive attacks. The real truth is that the old, traditional detection approach is outdated, right? It was always playing catch-up. If you think about the time it takes for a detection system or a signature within malware-scanning software to get updated, you generally see an average of about 18 days before a new threat gets detected based on a signature that has just been released.

So you’re always playing catch-up. And if you think about federal environments, which move at a snail’s pace, that 18-day gap could seem like 18 years. They’re definitely playing catch-up there; the numbers show the approach is outdated.

[Paul] (12:09)

So Joe, maybe I can flip things over from the more conventional IT side to the OT side, where generally you’re not sending specifications documents and spreadsheets and PowerPoint files to your embedded devices, but you are sending very specialized executable code files.

You can’t rewrite those executables by changing the way they behave and the actual operations they perform, because they may be mandated to do certain things in a certain way within a certain time in order to get their certification. But you can nevertheless build a security component into those files, can’t you? So that they actually perform in exactly the same way, but if they do misbehave, then they’re much less likely to be exploitable or to go haywire in an uncontrolled fashion.

[Joe] (13:07)

Yeah, exactly right. If you think about what Kelly’s talking about with these kinds of files and all the information contained within them, that’s certainly a serious threat that has to be mitigated, and the approach Kelly’s describing makes a lot of sense. But as you suggest, the OT environment might be a little different. We do see various levels of defense in depth out there. On one level, what people are doing is saying, well, at least I know that the software I booted matches the software that was shipped, with some kind of attestation or signing to get to that secure boot. But I think where you’re going, Paul, is: well, that’s great, but then those files, as they get loaded into memory, are exposed to runtime attacks, and what are the approaches to stop those?

[Paul] (13:57)

If you’re going to put some extra special magic into the file, then it needs to be done in advance so the file is protected before it’s signed, before it’s delivered, before it’s installed, before it’s launched.

[Joe] (14:09)

Right. And so that entry point is through the software supply chain. And then ultimately, whatever that malicious act is attempting to achieve, there’s a good chance it’s going to be attempted at runtime, when the software is loaded into memory. What you want to do is act as those files are getting loaded into memory and relocate where those vulnerabilities could be, so that these memory-based attacks can’t be realized. If you can ship software binaries that then load uniquely every time, so that at runtime attackers can’t deliver the payload or exploit the software, I think that’s part of the difference. And there’s still lots of really good information in these OT systems, just like you might find in the files that Kelly’s talking about. There’s operational data, and the consequences are also very significant.

Fending off these attacks by randomizing where those functions load into memory, to prevent exploitation in the first place, even if someone compromises the supply chain or does get on-system and tries to introduce arbitrary code, is ultimately the goal. The approaches we’re talking about here are very different, but the end result is trying to come up with novel ways to stay ahead of those attackers. And in the world of AI, I think that only becomes more complex. The approach Kelly’s taking on the IT side, I think, is great for trying to stay ahead of those maneuvers: not chasing the ever-evolving signature, but eliminating the vulnerability in the first place.
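As a toy illustration of the randomization Joe describes, here is a small Python simulation: an exploit hard-wired to the function addresses of one device fails on devices whose binaries loaded with a different layout. The function names, slot counts, and seeds are made up, and this is a statistical sketch rather than a model of any particular product:

    # Illustrative simulation: per-device load-address randomization breaks
    # exploits that hard-code one device's layout. All values are made up.
    import random

    FUNCTIONS = ["parse", "auth", "log", "update"]

    def randomized_layout(seed):
        """Assign each function a distinct, shuffled load slot."""
        rng = random.Random(seed)
        slots = rng.sample(range(1000), k=len(FUNCTIONS))
        return dict(zip(FUNCTIONS, slots))

    # The attacker studies one device and hard-codes its addresses...
    known = randomized_layout(seed=1)

    # ...but every other device booted with a different layout.
    hits = sum(randomized_layout(seed=s)["auth"] == known["auth"]
               for s in range(2, 1002))
    print(f"exploit address valid on {hits} of 1000 other devices")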

[Paul] (15:52)

And Joe, when it comes to the concept of clean files and computer software, programs, executables, binaries, there’s the whole issue of whether a file should be considered clean in the supply chain, combined with the question of: did you actually include the clean file that you intended when you built the software, or did it somehow surreptitiously get swapped out at the last moment? And that’s where Software Bills of Materials, or SBOMs, come in, isn’t it?

[Joe] (16:30)

Yeah, and I think the timing is key: building that Software Bill of Materials as close as possible to when the binary is produced in the first place. Generating it at the same time as the binary is the best approach, because then what goes into the Software Bill of Materials matches exactly what went into the binary. What a lot of people are doing in the embedded software space, and in these OT systems, is deriving an SBOM from the binary after the fact, maybe six, eight, twelve months after it’s been produced; that distance, that time gap, creates a lot of risk. So generating that Software Bill of Materials and knowing, with 100% completeness and 100% correctness, exactly what went into that binary in the first place, and then securing that binary and sharing it in a trusted way, becomes a really good way to vouch for what’s in that binary.

But then these other techniques, these defense in depth strategies, ensuring secure boot, ensuring runtime defense, those also then play a role in the overall defense posture.
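To illustrate the build-time SBOM idea, here is a minimal Python sketch that records the components and ties the SBOM to the binary by its hash at the moment it is produced; the file and component names are placeholders, and this is not RunSafe’s actual tooling or any standard SBOM format:

    # Illustrative sketch of "SBOM at build time": record exactly what went
    # into the binary, plus the binary's own hash, the moment it is built.
    import hashlib
    import json

    def sha256(path):
        return hashlib.sha256(open(path, "rb").read()).hexdigest()

    def build_time_sbom(binary_path, components):
        """Emit a minimal SBOM tied to the binary by its digest."""
        return {
            "binary": {"name": binary_path, "sha256": sha256(binary_path)},
            "components": components,   # known exactly: we just compiled them
        }

    sbom = build_time_sbom("firmware.bin",                        # placeholder
                           [{"name": "zlib", "version": "1.2.13"}])
    print(json.dumps(sbom, indent=2))

Because the digest is computed at build time, anyone receiving the binary later can re-hash it and confirm the SBOM really describes that exact artifact.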

[Paul] (17:41)

So Kelly, if I can return to you now: when Joe talks about protecting binaries, executable files that are built and supplied, say, for embedded systems or very specialized devices, there’s a necessary limit on the number of distinct final executables that get pushed from the development environment into the wild. It’s very different when it comes to files like documents and PDFs, because they typically circulate and quite purposefully get modified along the way, possibly by many people legitimately inside an organization or the organizations that it works with. So what sort of controls do you think federal government organizations could introduce, that perhaps they haven’t already, to make it more likely that they will construct what you might call clean files in the first place, and to reduce the risk that they will inadvertently introduce malicious content? Sorry, that was rather a long question.

[Kelly] (18:52)

That’s a great question, and it becomes tactical very quickly. Here’s the playbook I would use for this type of protection. First, you have the obvious entry points into these organizations; the main facets are the email gateways and web downloads. The way to protect those is to have some sort of integration at that point, whether it’s an ICAP server or existing proxies, and users don’t even know it’s happening. Second, I would focus on boundary protection. In the federal space, every file crossing a classification level or a network boundary would obviously get sanitized, or sandboxed, put into a different environment to detonate and to make sure it’s cleansed; although we know sandboxing is a little outdated, it takes time, and you may not actually get the full report. The next step would be embedding this into your workflow. As you mentioned, you have the developers developing, and then a pipeline or some sort of workflow where work gets pushed over to the QA team; they generate an artifact, and the artifact triggers another pipeline where an SBOM, or whatever that payload may be, gets produced. I know we have specific partners and agencies using our RESTful APIs as a way to automatically clean files, whether it’s in your SharePoint environment or your cloud S3 bucket environments. Wherever the files live, embedding this in your day-to-day workflow is what makes it great from a security standpoint, and then you scale it. The main key is policy differentiation within these enclaves or these various classification levels, as in the sketch below: maybe you strip all the macros from a file coming in from the internet, while on the other side you’re more comfortable with internal documents. You’re basically configuring to your own risk tolerance at that point.
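As a sketch of the policy differentiation Kelly describes, here is a minimal Python example applying stricter stripping rules to internet-sourced files than to internal ones; the zone names and element lists are hypothetical:

    # Illustrative sketch: per-zone sanitization policies. Zone names and
    # element lists are made up for demonstration.

    POLICIES = {
        "internet": {"macros", "javascript", "embedded_files", "remote_links"},
        "internal": {"javascript", "embedded_files"},
    }

    def elements_to_strip(file_elements, zone):
        """Return which of a file's active elements the zone policy removes."""
        return file_elements & POLICIES[zone]

    elements = {"macros", "javascript", "images"}
    print(elements_to_strip(elements, "internet"))   # strips macros + javascript
    print(elements_to_strip(elements, "internal"))   # strips javascript only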

[Paul] (20:34)

It sounds as though there are some compromises that users might be expected to make, and that they might be a little resistant to at first. It is quite difficult to persuade people to give up IT conveniences they’ve had in the past, even if you’re only asking them to give up a very little bit. We’ve seen that with multi-factor authentication, haven’t we? With people saying, how can you expect me to spend an extra 30 seconds a day typing in this code or presenting this magic key? And then, once they get used to it, they realize, you know what, it wasn’t that hard after all. Is it the same when it comes to managing things like document flow inside a big and possibly bureaucratic organization?

[Kelly] (21:20)

I wholeheartedly agree that introducing the various authentication protocols can definitely be cumbersome, and users are not fond of it. But when they realize the protection behind it, typically it just works into the day-to-day workflow. From a document standpoint, yes, it depends on how you implement this, and there are a few seamless ways you can do so. You can integrate via proxy, so the users have no idea it’s happening in the back end. You can utilize a modular open-systems approach, querying the relevant APIs or applications to sanitize those files, or to sanitize what’s happening in the back end, without the users knowing. And then there’s also something really cool that we’re working on here at Glasswall. 

It’s our foresight AI capability: it predicts what these files may look like and provides you with an intelligent threat report on what will get stripped out of the file before it happens. So this can be bubbled up and provided to your IT system way ahead of time, while files are in transit through your email gateways or your proxies. So you’re not just protecting the files, you’re also learning about what’s happening and what’s targeting you.

[Paul] (22:23)

So presumably for your outbound files, where you’ve constructed things that you think are OK and nobody’s complained about before, you might actually learn ways in which you could simplify your workflow, simplify the types of documents that you produce, which will actually save you time and money, and make everybody that you send these documents to safer at the same time, for a true sort of win-win situation.

[Kelly] (22:49)

I would agree. If an organization is a little timid about getting their feet wet or diving off the deep end, you start with a pilot, right? Pick your highest-risk files and the flow that they may be coming in on, and protect them first, right? Maybe it’s inbound email for executives, or files crossing from an unclassified to a classified environment; show that value, and then expand based on that. The Navy did a project, you can all look it up, it’s public, called Project Flank Speed, an initiative focused on how to protect data coming in and out.

And they did it by an iterative approach, tackling one component at a time, and then going down the cycle and ensuring that it met that standard. You start with the pilot, and then you continue to move forward with it.

[Paul] (23:29)

I’m conscious of time, Joe, so I’d like to ask you if you would provide some, what I might call, encouraging concluding remarks. It’s clear that a little bit of discretion goes an awful long way. What would your main advice be, particularly for federal government departments that think this is all too hard and that they’ll never get there? How can we get started in a way that will let us stay ahead?

[Joe] (23:57)

Well, I do think we can all agree that attackers are creative. They’re often well-funded, and it’s a constantly evolving landscape; with the introduction of AI, either the volume is increasing or the sophistication is increasing. Yeah, or both. And so the attacks are evolving, and having a good process that’s not trying to chase the newest innovation is, I think, part of the breakthrough in security.

And we look for these asymmetric shifts in defense tools, such as cleaning files, both inbound and outbound. For the government, part of the opportunity here is not to try to reinvent all these ideas. Product companies that are producing this stuff, their technology, their software, do drive tremendous innovation and bring that asymmetric shift to the equation.

That’s what we try to do at RunSafe, and with cleaning files, I think that’s what Kelly and team are trying to do here. So we do need to rely on technology and be mindful of the effects of AI, but certainly look for asymmetric shifts ultimately.

[Paul] (25:12)

So loosely speaking, Joe, you’re sort of talking about changing the economic equation, if you like. So it’s become much cheaper and easier for attackers to generate thousands or hundreds of thousands of malware variants, for example. But that doesn’t mean that we can’t nevertheless continually make it more expensive for the attackers, even though they have these things that they consider optimizations.

[Joe] (25:40)

Well said, Paul. A podcast dedicated to staying ahead of the attack is apropos as well.

[Paul] (25:47)

Yes. Well, with that, let me say that is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. Thank you so much, Kelly and Joe, for, I guess, just scratching the surface of this broad and deep field of cybersecurity that we live in.

If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media as well and don’t forget to recommend us to everybody in your team. Here’s to fighting back against the attackers in a way that means we really do work harder and smarter at the same time. Stay ahead of the threat. See you next time.


Designing Security into Life-Critical Devices: Where Innovation Meets Regulation https://runsafesecurity.com/podcast/life-critical-device-security/ Thu, 06 Nov 2025 15:23:28 +0000 https://runsafesecurity.com/?post_type=podcast&p=255214


 

As connected healthcare evolves, medical device cybersecurity has become inseparable from patient safety. In this episode of Exploited: The Cyber Truth, RunSafe Security Founder and CEO Joseph M. Saunders joins host Paul Ducklin to discuss how medtech organizations are designing security into devices from day one—embedding protection across concept, development, and maintenance phases.

Joe unpacks what “secure-by-design” really means in practice, how the FDA’s new Secure Product Development Frameworks (SPDFs) are shaping engineering collaboration, and why cultural change is essential to making cybersecurity a core part of product quality.

This episode offers practical guidance for developers, compliance officers, and product leaders on:

  • Building security into device lifecycles—not bolting it on later
  • Meeting regulatory expectations while accelerating innovation
  • Managing post-market security for long-lifecycle devices
  • Earning trust and ensuring patient safety in connected care systems

If you work in medtech, regulatory compliance, or embedded security, this discussion will help you understand how to stay audit-ready, innovate faster, and lead with security-by-design.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:04)

Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello Joe.

[Joe] (00:05)

Hey Paul, looking forward to today’s topics of discussion.

[Paul] (00:24)

You’re sort of half working, half vacationing, aren’t you, in the mighty state of California? Where the weather is probably a bit better than in a lot of the rest of the United States at this time of year.

[Joe] (00:35)

It is perfect weather, it’s cold and crisp and I went for a nice long walk this morning so that’s the vacation side. And then the flip side is I’m working a full day today. So both work and vacation at the same time I guess.

[Paul] (00:47)

Since we’ve spoken about life-affirming experiences, like going for a walk on a crisp and beautiful morning, let’s delve into this week’s topic, Designing Security into Life-Critical Devices. Our subtitle is Where Innovation Meets Regulation. Joe, in the healthcare industry, traditionally it’s all been about technology and fancy new stuff, hasn’t it?

But cyber security is now becoming a very, very pressing concern.

[Joe] (01:23)

At a high level, we’ve got requirements in the US, FDA requirements, to build in security and ensure that medical devices are protected from cyber attack. We can all understand why. It’s absolutely incredible, the number of devices. And of course, there are different classes of devices. The common denominator is that these devices are connected, they are performing life-critical functions, and security is important because the consequences are stark. And so what we want to ensure is that these medical devices are safe, are secure, are resilient. It comes back, then, to the software development practices: incorporating security into your processes overall and building secure medical devices from the get-go.

[Paul] (02:14)

And that’s not something that is traditionally associated with that kind of device, is it? If you’re building some kind of amazing new surgical robot, or upgrading your fantastic MRI scanner to give even more detail, as a scientist and an engineer you want the technology to be as fancy and as amazing as possible. But as you say, with thousands or even tens of thousands of these embedded devices in the average hospital these days, there’s quite a lot that could go wrong, possibly at the same time.

[Joe] (02:52)

Yeah, if you rewind about 10 years, the FDA started to get serious about cybersecurity of medical devices, and we’ve come a really long way since then.

[Paul] (03:02)

That’s quite an interesting thing to think about, isn’t it? The FDA, for listeners outside North America, is the Food and Drug Administration? Yes. It’s now cybersecurity that falls under their remit, as much as, or even more importantly than, their traditional remit.

[Joe] (03:22)

Therein lies sort of the trap from a product development perspective. As you said, we want to develop new products that are innovative, to boost quality of life and quality of patient care. The trap, as I say, could be that you don’t think about security as much as you do about the innovation and the patient care aspects, which of course are primary. My thought, though, as we’ll get into further, I’m sure, is that they’re not mutually exclusive. You don’t have to sacrifice innovation for security’s sake. The FDA has set the expectation, and of course medical device manufacturers are complying. Incorporating security into your software development process, where it can be an enabler of all the security expectations the FDA is setting, while you focus in on the innovative features and capabilities of the medical devices you’re trying to produce, is the right mindset to have ultimately. We’ve used the phrase, and we talk about it all the time.

Secure by Design is the right mindset.

[Paul] (04:25)

Secure by Design is kind of like saying: take this idea of checkbox compliance and just throw it out of the window. Get to a point where you tend to comply because you actually thought about, and acted on, all these security issues right from the start.

[Joe] (04:43)

I believe the right mindset is to incorporate security from the start, just as you said, and not bolt it on later. And doing that gets back to some basics in software development. I would love to dive into that deeper as we think about this challenge.

[Paul] (04:59)

Let’s dive into that at least a little bit right away and talk about Secure Product Development Frameworks, or SPDFs for short, which are now strongly emphasised by the Food and Drug Administration’s latest cybersecurity guidance.

[Joe] (05:18)

Think about the medtech arena: you look at secure product design, threat modeling, and identifying a risk management framework; ensuring you understand what the expectations of the FDA are; and then developing that secure architecture for your devices, especially if there are communications on those devices, and of course authentication of devices and their use within your overall system. The idea then is to not only build in all those security controls, but also to generate a Software Bill of Materials to help communicate all the different software that goes onto that device.

[Paul] (05:59)

So that Software Bill of Materials means that you are able to come up with a list of ingredients in your product, which in turn means that if one of them is later found, and publicized, to have a vulnerability, which is something we discussed in that great podcast recently with Kelli, you know that it affects your product, and you know that you need to take stock of that and come up with some answer for your customers. And that is a strength and not a weakness, isn’t it?

[Joe] (06:34)

It is a strength and not a weakness. And part of that is because we know that not all the software that goes into these devices originates from the medical device manufacturer itself. They have suppliers; they’re using open source software. So really understand not just the mix of those components that are on the device, but where they originated. What’s the provenance of that software? And what are the vulnerabilities in all the software that comes into your device, from third parties, from open source, and in the software that the medical device manufacturer produces themselves? Developing a Software Bill of Materials to understand all those components, making sure that you’re not violating any open source license restrictions, but then also identifying all those components so you can more effectively identify the vulnerabilities associated with them and address those.

Because part of what the regulations ultimately require is to mitigate the vulnerabilities on those devices. It all comes together. You generate a Software Bill of Materials, you understand the software provenance, you identify the vulnerabilities, and then you find ways to mitigate those. And if you have a robust methodology, with security integrated into your software development process, then you have a better chance of minimizing all of those effects and reducing the cost of supporting, from a security perspective, your overall compliance. You have a better, more robust, more resilient product that you put out in the field.
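For readers who want to see what “identify the components, check the licenses, flag the risk” can look like in practice, here is a minimal sketch that scans a CycloneDX-format SBOM for licenses that a shipping-firmware policy might forbid. The policy list is illustrative only, not legal advice, and a real pipeline would check vulnerability feeds as well.

    import json

    # Illustrative policy: copyleft licenses this product's policy forbids in firmware.
    COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

    with open("sbom.cdx.json") as f:
        sbom = json.load(f)

    violations = []
    for comp in sbom.get("components", []):
        for lic in comp.get("licenses", []):
            lic_id = lic.get("license", {}).get("id", "")
            if lic_id in COPYLEFT:
                violations.append((comp.get("name"), comp.get("version"), lic_id))

    for name, version, lic_id in violations:
        print(f"policy violation: {name} {version} is licensed {lic_id}")

    # In CI, a non-empty violation list fails the build before the product ships.
    raise SystemExit(1 if violations else 0)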

[Paul] (08:08)

So it’s not just a case of knowing what ingredients went in, in case you need to vouch for one of them later, after the product is fielded. It also provides a mechanism for making sure that you aren’t accidentally putting in ingredients that you didn’t expect when you build the software product, because somebody substituted something without asking, with the best will in the world, or, much worse, some devious attacker tricked you into substituting something without you even realizing it.

[Joe] (08:44)

Yes. People often will just say SOUP: software of unknown provenance.

[Paul] (08:52)

Yes, I have a cryptography library of some sort. Yeah, but which one? Yes. Which version? What options did you use when you compiled it? All of those things can be critical, can’t they?

[Joe] (09:05)

If you’re producing a device that has vulnerabilities in your software that is of unknown provenance, then you are putting patients at risk. You are putting hospital systems and healthcare systems at risk. So really understanding the provenance of the software and the vulnerabilities associated with them while mitigating and addressing them is of utmost importance. And how could you argue otherwise? That’s why I think the industry has come so far.

[Paul] (09:31)

Exactly.

[Joe] (09:35)

Safety is of utmost importance to ensure patient care. And with that, understanding the underlying software components is an obvious step: you have to have a good feel for what’s in these devices you’re manufacturing and producing.

[Paul] (09:49)

And Joe, if we can just zoom in a little bit on the creation of a Software Bill of Materials, you have some quite strong opinions about how and when those should be created, don’t you? Some people will say, well, you just need to know all the source code that you could pick from. So let’s go through the larder and write down everything that’s in there, and we’ll know that it must be at least one of those, we hope. Another way says, well, you wait until the cake’s been baked, and then you get some taster to come in and figure out what went into it. But the so-called build-time Software Bill of Materials, where you identify, log, and manage every single component, package, or even file that actually gets used in the baking of the cake, in your opinion is a much, much stronger way of vouching for what you’ve made.

[Joe] (10:40)

Exactly right. If you think about the different ways you can generate a Software Bill of Materials, you can do it from source code. Yes. And you’re sort of assuming what’s going to be put into that software; you sort of have a plan, but that doesn’t tell the whole story. And as you say, if you try to do it from the binary, working backwards, like the baked-cake analogy, trying to understand its ingredients, you don’t quite get all the way there either. The best moment to get the full picture is at software build time, where you have perfect visibility: you have 100% completeness in identifying all those files, all those components, all those packages that are used to create the ultimate medical device product. Going back to the purpose of understanding the underlying risk in this software, it’s safety, it’s security, and it’s patient care. So why would you cut corners? Why would you take an inferior approach when there’s a perfectly good way to be more complete?
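One way to picture the build-time approach: instrument the build itself so that every file the compiler actually consumes gets recorded. Real SBOM generators capture far more (packages, provenance, transitive dependencies), but a toy compiler wrapper shows why build time gives complete visibility. The wrapper mechanism, environment variables, and manifest format here are all illustrative assumptions.

    #!/usr/bin/env python3
    # Toy compiler wrapper: point a build at this script (e.g., CC=sbom-cc) and it
    # records a hash of every input file before invoking the real compiler.
    import hashlib
    import os
    import subprocess
    import sys

    MANIFEST = os.environ.get("SBOM_MANIFEST", "build-manifest.txt")
    REAL_CC = os.environ.get("REAL_CC", "cc")

    def record(path):
        # Hash the file exactly as the build saw it.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        with open(MANIFEST, "a") as m:
            m.write(f"{digest}  {path}\n")

    # Any argument that names an existing file is an input to this build step.
    for arg in sys.argv[1:]:
        if os.path.isfile(arg):
            record(arg)

    # Run the real compiler with the original arguments, preserving its exit code.
    sys.exit(subprocess.call([REAL_CC] + sys.argv[1:]))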

[Paul] (11:42)

By getting to the point where you can actually demonstrate or begin to field your product faster, surely that’s got to be better for your innovative engineers than having them think they’ve finished a product and then run around in circles for two or three years afterwards, trying to get it from “hey, it works in the lab” to “we’re allowed to sell it in the field.”

[Joe] (12:04)

That’s why I think it comes back to the overall architecture and software development practices: standardize your overall architecture and software development approach as much as possible. Now, you can’t do it perfectly on all devices, because the requirements are different, but to the extent that you can standardize, you have economies of scale. You can address the same vulnerabilities across your products, and you can build the same disciplines into your software development process in the first place.

[Paul] (12:36)

In other words, if you’re going to make mistakes on product A, then it’s very much better if you consciously and practically avoid making the same or similar mistakes on product A plus 1, A plus 2 and A plus 3. As you proceed, your development times should in theory get shorter and shorter with better and better results.

[Joe] (13:01)

And more innovation over time because you’re freeing up your resources to build new innovative products and features going forward.

[Paul] (13:09)

Yes, and if there are going to be vulnerabilities, particularly exploitable ones, in the future, having that bill of materials means that you know where those problems are and what you need to do at a minimum and perhaps at a maximum to fix them. So it’s not just about being proactive, it’s about being able to deal with bad situations more effectively if they should occur.

[Joe] (13:38)

My feeling is we shouldn’t be recreating the entire effort every single time. And if you can standardize a bit, and then identify those imperfections in process, those software bugs, those vulnerabilities that are unique to that device, you’re really working on the exceptions, and you’ve got the overall process and platform secure. What that means is your developers are more efficient, your products are less costly, and they have a higher security posture overall.

Why would you solve the same vulnerability over and over and over again across 20 or 30 different products when you could standardize your overall approach? I think you’re right. Focus in on that overall process, so that when you do identify an exception, a software bug or vulnerability, you have the focus to mitigate it. One of the challenges out there is that a lot of false positives end up consuming developer time. I do think that with a robust methodology that focuses on true-positive vulnerabilities and minimizes the false positives, developers will be more efficient and your products will be more secure and safe.

[Paul] (14:38)

Absolutely.

So Joe, what do you have to say to the kind of person who still thinks that something like a detailed Software Bill of Materials just advertises what vulnerabilities you might have, so openly that it actually makes you more of a target, and gives you less protection than if you had a little bit of secrecy slash obscurity in the mix?

[Joe] (15:20)

Security researchers who end up developing exploits because they have found bugs or weaknesses in components that you have, they’re going to find vulnerabilities whether you publish your SBOM or not.

[Paul] (15:33)

Absolutely, yes.

[Joe] (15:35)

The better approach is to embrace those facts and build an SBOM that allows you to communicate what you’ve done, and to ensure that you’ve addressed as many of the issues as possible, if not all of them. And I would argue, going a step further, that adding in security protections that anticipate that new bugs will be found is a good way to do that. The question becomes: if you want to obscure things, and hide, and not communicate, you’re going to get exploited, because you probably don’t have your eye on the ball.

[Paul] (16:08)

Absolutely.

[Joe] (16:09)

Your overall approach is so much better when you embrace transparency, embrace communication of your vulnerabilities, and address them in a proactive way; the chances of getting exploited go way down. And it’s not about publishing the Software Bill of Materials, because it’s easy enough for an attacker to identify the underlying components in the software anyway. It’s about engaging with and managing your risk, as opposed to trying to obscure your code.

[Paul] (16:39)

And those attackers can just use binary analysis, can’t they? But if they do find a problem, the one thing you can be sure of is that they are not going to tell you. Whereas if you are open and honest about all of this stuff, you not only are better prepared yourself and less likely to succumb to something bad, you also have a very good chance that one of the good guys will find the problem and disclose it to you responsibly, for the greater good of all.

[Joe] (17:10)

And you bring up an excellent point. If you think about what an attacker does to analyze a binary, and by a binary I mean the software that’s deployed on one of these medical devices, there are tools out there used by security researchers, whether they’re attackers or, let’s say, the good guys trying to identify vulnerabilities that can be fixed ahead of time. There are binary analysis tools, as you say, that really look at the underlying weaknesses in those software binaries, on which, or against which, an attacker can build an exploit. And a good example of that is looking for the underlying return-oriented programming gadgets, ROP gadgets or ROP chains, that exist in compiled code in software binaries.

[Paul] (18:00)

Those are chunks of already-executable code that you don’t have to poke in there, such that if you can just deviate the flow of control very slightly, you may actually be able to get the software to misbehave so that it nearly, but doesn’t quite, crash; and when it has nearly but not quite crashed, you, the attacker, end up controlling what it does next. Yep. Unauthorised, unwanted, uncontrolled, unregulated.
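To make the ROP idea less abstract, here is a minimal sketch of what gadget-hunting tools do: scan executable bytes for the x86-64 RET opcode and disassemble the short instruction runs that end there. Real tools such as ROPgadget are far more thorough; this is a toy illustration, and it assumes the capstone disassembly library is installed.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    md = Cs(CS_ARCH_X86, CS_MODE_64)

    def find_gadgets(code, base_addr, depth=5):
        gadgets = []
        for i, byte in enumerate(code):
            if byte != 0xC3:          # RET opcode on x86-64
                continue
            # Try progressively longer byte runs ending at this RET.
            for start in range(max(0, i - depth), i):
                insns = list(md.disasm(code[start : i + 1], base_addr + start))
                # Keep the run only if it decodes cleanly and ends in RET.
                if insns and insns[-1].mnemonic == "ret":
                    text = "; ".join(f"{x.mnemonic} {x.op_str}".strip() for x in insns)
                    gadgets.append((base_addr + start, text))
                    break
        return gadgets

    # Example bytes: "pop rdi; ret", the classic gadget for loading an argument.
    for addr, text in find_gadgets(bytes.fromhex("5fc3"), 0x400000):
        print(hex(addr), text)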

[Joe] (18:28)

By doing that, you’re accessing legitimate functions. Just as you say, the attacker can string them together in a way that wasn’t originally intended by the software developer. And the reason I went into that specific detail is that we asked, should I publish a Software Bill of Materials? The answer is yes. You want to communicate transparently with your customers. Why? Because attackers have these other tools to really understand what’s going on. I know you know this, Paul, but if you do a search on eBay, you can probably buy one of these gadgets or devices. Yes. On eBay. So what does an attacker do? They buy one of the existing devices, because people are selling off their devices and recouping some cash. When you buy one, what can you do with it? Well, you can analyze it for months on end. They don’t need to look at a Software Bill of Materials.

[Paul] (19:02)

Yes.

[Joe] (19:22)

They’ve got advanced tools to analyze the binaries, look for those underlying ROP gadgets and ROP chains, and find their points of entry. They’ll look at communications that they can leverage to gain access. That’s what leads to exfiltrating data from devices. That leads to manipulating the code to do something it’s not supposed to do. What that means is, yes, it’s a big, tall order to take on folks like that. Yes, they have lots of time to prepare; in some circles, you might call it preparation of the battlefield, so they can execute their exploits later. It’s still an economic equation.

Yes. If you take the right steps to thwart even the best-prepared cyber attacker, who might spend five or six months developing exploits to work reliably on a target device, if you can thwart them and make that much harder, they’re simply going to look elsewhere. You want to be robust enough that they don’t want to spend six months on your effort, because it’s going to be wasted effort. All of that comes back to the software methodology: incorporating security into the practice in general.

[Paul] (20:30)

Joe, you mentioned there the issue of data exfiltration, and of remote code execution, where you get some code to execute, leading to perhaps much greater, much worse things being fetched, installed, and used later. How should manufacturers rethink their defenses for medical devices that are now, very loosely speaking, almost always on the internet, rather than completely self-contained and disconnected from it?

[Joe] (21:01)

You make a good point, which is that these devices are in fact generating a bunch of data that is used, for good reason, to help monitor the effectiveness of care of the patient. If anybody’s been in a hospital, you have seen all the screens, all the monitors. And when you look at all these devices, whether they’re taken home with the patient or incorporated into the hospital, they are connected, because they’re communicating results and data about the effectiveness of care. You may go to a specialized spot where there’s a medical technician who’s going to do the MRI or the x-ray, let’s say, and that information will all be shared with radiologists, who might be in a different physical location altogether. And those radiologists are reviewing multiple scans and results and communicating back.

If you look at that whole ecosystem, the cloud environment, the communications channels, and the connected devices, quote unquote, on the edge, then you can imagine that you have to think about security in the broader sense: the cloud, the communications, and the devices. And so it’s a lot for a hospital system to think about. I do think there needs to be zero trust architecture built in there, encrypted communications in there, and then, of course, protection of the software itself on those devices.

Why? Because, as we have seen in many industries, it’s these open communications channels that enable somebody to get on a device and exercise their cyber attack or their exploit. And it’s this connectedness that benefits society, by improving both the efficiency and effectiveness of patient care, but also creates that exposure to attack methods that a cyber researcher or cyber exploit developer will rely on to administer their cyber attack in the first place.
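As one concrete fragment of what “zero trust plus encrypted communications” can mean at the device level, here is a minimal sketch of a device reporting telemetry over mutually authenticated TLS. The host name, port, and certificate paths are hypothetical; the point is that the device both verifies the hospital service and proves its own identity with a client certificate, so neither side talks to strangers.

    import socket
    import ssl

    # Trust only the hospital's own CA, and present this device's certificate.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="hospital-ca.pem")
    ctx.load_cert_chain(certfile="device-cert.pem", keyfile="device-key.pem")

    HOST = "telemetry.example.internal"   # hypothetical service name

    with socket.create_connection((HOST, 8443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            # Both endpoints are authenticated; the channel is encrypted end to end.
            tls.sendall(b'{"device_id": "infusion-pump-17", "status": "ok"}\n')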

[Paul] (23:01)

So to summarize the ethos of Secure by Design, you might think of it this way: reactive security is a heck of a lot easier if you get the proactive part of security right in the first place.

[Joe] (23:16)

100%. I think of it as almost an optimization equation. Yes. You want to maximize security and minimize effort. If you do everything right from a Secure by Design perspective, and look at your overall security architecture, and have security built in, then when something does happen, it should be the exception, and it should be more easily addressable at that stage.

[Paul] (23:21)

Yes.

So Joe, to finish up, with all of what you’ve said in mind: how will we actually know that we’ve reached what you might call a good point in healthcare security? That we’ve really embraced proactive security and Secure by Design, from innovation in embedded devices all the way to care delivery in clinics, care homes, doctors’ surgeries, and hospitals?

[Joe] (24:15)

I would like to put out the challenge of looking at the Software Bill of Materials, and the communication of those materials from medical device maker to hospital system, and ensuring that everybody is looking at and reviewing those. That’s one milestone I have. It seems simple, but when people are doing that on a consistent basis, and can communicate things like mean time to resolution, or how long it takes to get a product approved and on the market, those are some measurements. But I think the act of producing a Software Bill of Materials and communicating it is an important step.

Over time, what we ought to see is faster response times in mitigating vulnerabilities when they are found down the road, and more time for developing new features, with less time chasing false positives and unknown risks. So I think there’s got to be this balance towards proactive cyber defense. And what I would like to see across the board is a robust software methodology that is willing to share the Software Bill of Materials, and ultimately to have fewer false positives on the vulnerability side and a faster time to resolution when a vulnerability does get introduced into the ecosystem.

[Paul] (25:39)

So, Joe, if you will forgive me using a trifecta of clichés to close out with: in cybersecurity, it really is a case of, if you don’t measure it, then you cannot manage it. It’s also a case that security is very definitely a journey and not a destination. And perhaps even more importantly, it should be treated as a value to be maximized rather than a cost to be cut to the bone.

[Joe] (26:09)

I agree 100%, Paul, and I think that’s a great summary.

[Paul] (26:13)

Excellent. I wish we didn’t have to stop because you can probably hear that Joe is getting more and more passionate and I just wanted to hear more and more of that. But that’s a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. Thanks to Joe for his passion, his enthusiasm and for looking attentively to the future. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media as well, and don’t forget to recommend us to everybody in your team so they can listen to Joe’s passion. And please remember, stay ahead of the threat. See you next time.


How Generative AI Is Addressing Warfighter Challenges https://runsafesecurity.com/podcast/arthur-reyenger-generative-ai-warfighter/ Thu, 30 Oct 2025 14:05:22 +0000 https://runsafesecurity.com/?post_type=podcast&p=255165


 

In today’s fast-paced defense environment, speed and intelligence win battles before they begin. In this episode of Exploited: The Cyber Truth, Joseph M. Saunders of RunSafe Security and Arthur Reyenger of Ask Sage explore how generative AI is revolutionizing military operations—from accelerating acquisition and mission planning to enabling predictive analytics and secure collaboration.

They share powerful insights on:

  • The evolution of AI in defense and why “do more with less” is mission-critical
  • Real examples of AI accelerating approval processes by 95%
  • How digital twins and synthetic data enhance readiness without risk
  • Why COTS AI outperforms custom-built systems in agility and cost
  • The importance of responsible, human-in-the-loop AI for national security

Tune in to hear how generative AI is reshaping decision-making, reducing cognitive load, and empowering the next generation of warfighters.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Special Guest: Arthur Reyenger, Generative AI Strategy Executive for Channels and Commercial, Ask Sage, Inc.

Arthur is a Generative AI Strategy Executive with Ask Sage, where he leads engagement development with customers across a wide range of use cases. Prior to joining Ask Sage, Arthur was a founding member of CloudInsyte, where he built a successful cloud consulting practice specializing in big data solutions for the gaming and hospitality verticals. Before CloudInsyte, Arthur led the cloud practice for International Integrated Solutions (IIS), where he developed sales training and enablement programs, and designed, implemented, and brought to market hybrid cloud managed services. Going further back, Arthur was a Hybrid Cloud Architect at NetApp’s Emerging Products Group and a Strategic Consultant to the Enterprise at Verizon, covering the Northeast.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:07)

Welcome back everybody to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello Joe.

[Joe] (00:20)

Greetings, Paul. Look forward to the discussion.

[Paul] (00:23)

Now we’re joined in this episode, Joe, by someone who is not only a good friend of yours, but who is also in effect a colleague, because you just happen to be the chairman of the board at the company where he works. So please welcome our guest Arthur Reyenger, who is Generative AI Strategy Executive at Ask Sage. Welcome Arthur.

[Art] (00:45)

Paul, really nice to work with you.

[Paul] (00:48)

Right, we have a fascinating-sounding title, “How Generative AI Is Addressing Warfighter Challenges.” Now Arthur, when I first saw the word warfighter, I thought: that’ll be the marines who are parachuted behind enemy lines at the very start of a very special operation. But we’re talking about a much broader picture. Do you want to say something about the kind of challenges that we are trying to address?

[Art] (01:14)

Sure, it certainly does include those individuals at the tip of the spear who are actually executing those missions at the front lines, but it’s also all the supporting teams behind them that are ultimately helping them to plan, prepare, and execute that mission, and get the right technology into their hands. So it’s a much broader user community that we’re talking about here.

[Paul] (01:33)

So this is not just supporting some individuals in some specific missions, it’s also about enabling a larger community with intelligence, with information, with operational improvements across the board.

[Art] (01:49)

Absolutely, it’s all the same things. Do more with less, be able to differentiate, and have a tactical advantage when you’re going to execute that mission.

[Paul] (01:57)

Do more with less is a bit of a mantra for everybody these days, isn’t it? It is. What makes the challenge different now from what it was even just five years ago, let alone ten, fifteen, twenty years ago?

[Art] (02:12)

I think it used to be a much more straightforward approach. You had your domains well covered: it was land, sea, or air. Today, there are numerous fronts that warfighters need to be aware of and need to incorporate. We have disinformation campaigns. We have an electromagnetic spectrum that has to be considered. The speed of decision-making needs to increase. And they’re running into the same issues that others have with analysis paralysis: we almost have too much data coming from too many sources.

And that’s why you need to have tools that can ultimately help you sort through that and make the correct decision.

[Paul] (02:44)

That’s a similar sort of challenge that traditional cybersecurity folks face with, say, malware. Forty years ago, when a new virus came out, everyone got excited and they spent a month analyzing it. Today, we’re talking about hundreds of thousands of new malware samples per day. You can’t use the old methods to solve new problems. The volume is simply too high.

[Art] (03:08)

Now you have to interrogate those things and determine where you want to spend your time, where it’s going to be the most applicable, where you’re the most vulnerable. It’s very similar to the same challenges that cybersecurity faces.

[Paul] (03:19)

And I presume you also have a significant problem with misinformation that’s deliberately disseminated by your adversaries in order to try and enlarge your analysis paralysis, as you might say.

[Art] (03:33)

Absolutely. That is a major concern, and I would actually say that cybersecurity and disinformation are probably where things are starting most frequently.

[Paul] (03:41)

The idea of generative AI sounds very futuresome, but it’s not quite as new as people think, is it? And it’s something that has been of great value in the defense community for many years already.

[Art] (03:56)

AI is in no way, shape, or form a new concept. It’s something that’s been leveraged within the DoD for a very long time. It goes back to 1950, with Turing’s paper asking, can machines think? And since then, we’ve seen adoption through the use of both computer vision and autonomous systems within the DoD and the federal government. As a matter of fact, legislation was just amended in DoD Directive 3000.09, last revised in 2023, to allow for lethal autonomous weapons systems to ultimately execute without human oversight.

So again, these are not new concepts or new systems or a new way to leverage technology. I think the difference here is the technology has evolved to the point where it’s now in the hands of every warfighter. Everyone can use it rather than it being very specialized for autonomous systems and computer vision applications.

[Paul] (04:51)

And those warfighters increasingly are not behind enemy lines, or even at the front lines. They may be in some secure bunker or office far, far away, setting the parameters for things like drones to do their work, and not necessarily needing to pilot or control those drones for their entire mission. So can you give us an example of how generative AI is already being used to support mission outcomes?

[Art] (05:20)

At Ask Sage, we were fortunate enough to work with a large combatant command that was really working to get new technology from the commercial segment tested, authorized, and then into the hands of the warfighter. Being able to use generative AI to speed up that authority-to-operate process, and to test and vet that technology, is really helping to accelerate getting the specific things that those warfighters need in order to make a difference and have an advantage. I believe they said that we saved them 95% of the time and the cost of going through those processes.

[Paul] (05:52)

That’s not just using generative AI in a mission, it’s using generative AI to improve the speed at which you can produce deliverables of any sort. Software, firmware, hardware, whatever it might be.

[Art] (06:04)

Absolutely.

Then there are aspects of red teaming and planning where this is invaluable: being able to run more scenarios, to look at those different vectors, and to provide more contingency planning. Then there’s something that we do here in the U.S. that a lot of our adversaries don’t, and that’s really test weapon systems. So being able to have generative AI help you generate synthetic data to test and vet the sensors on a particular system, before it even winds up in a plane, before it even winds up on the field, puts us at a strategic advantage. And I know those things are being done today.

[Paul] (06:39)

Is that the idea of a digital twin? Yep. You can actually use synthetic tools to create test results that are considered equivalent in practice, not merely in theory, to crashing the real thing.

[Art] (06:54)

You nailed it. Instead of crashing 20 planes to determine what that maximum load is, now we can do that synthetically, ultimately validate that system, compare it against another, before it ever gets rolled out into production. Pretty incredible.
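A toy illustration of the synthetic-testing idea, with all numbers invented: instead of instrumenting real airframes, generate thousands of simulated sensor traces and estimate how often a structural load limit would be exceeded. Real digital twins model physics far more faithfully; this just shows the Monte Carlo shape of the approach.

    import random

    def synthetic_load_trace(n=1000, nominal=2.5, noise=0.3):
        """Simulated g-load readings around a nominal maneuver profile."""
        return [random.gauss(nominal, noise) for _ in range(n)]

    def exceeds_limit(trace, limit=4.0):
        # Did any reading in this synthetic flight exceed the structural limit?
        return any(reading > limit for reading in trace)

    trials = 10_000
    failures = sum(exceeds_limit(synthetic_load_trace()) for _ in range(trials))
    print(f"estimated exceedance probability: {failures / trials:.4%}")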

[Paul] (07:06)

Joe, do you want to say something at this point about how generative AI can help the process of software engineering for weapon systems?

[Joe] (07:18)

The starting point is that the productivity gain accelerates execution of any of these programs, whether that’s software development or cybersecurity compliance or other initiatives. Generative AI can be super helpful. And I know folks at Ask Sage see 35x productivity gains in some cases. I’m sure Art has even higher productivity gains, because he’s leveraging Ask Sage in his day-to-day work. Development teams can benefit if you can imagine a workflow where not only can you help shape and refine customer feedback, but you can define requirements, and you can build some initial code.

You can really enhance development of basic components, so developers can focus in on the real creative aspects of what they might want to build. I think that today, in embedded systems, generative AI and the developer need to work together: creating artifacts, testing, developing some initial code, identifying some bugs, fixing some bugs, fixing some vulnerabilities. There’s a major impact on software development nonetheless, and we’re just in the early days. I mean, despite AI being around forever, 50, 70, 80 years or what have you, the generative AI impact has been significant only in the last couple of years. And with that, I think we’re going to make dramatic improvements in software development. And the bottom line is that all of this gets to driving innovation faster.

So an organization like the US Department of War can remain competitive or be the leader. With that in mind, there’s a massive impact that generative AI has today, but it’ll even increase going forward.

[Paul] (09:00)

Joe, you have in previous podcasts identified what you might call a feedback loop between today’s processing needs for AI to do the kind of stuff that wasn’t possible five years ago and legacy embedded systems in that the biggest, bestest, fanciest data centers with the best protected servers in the world are no good to you if somebody can jump in and shut off the air conditioning.

[Joe] (09:29)

Yeah, we’ve seen in America’s AI Action Plan how energy and data centers are essential for the US winning the AI competition, if you will, or the AI race. You need a flexible grid that’s adaptive; you need resilient data centers so the large language models can process. We all get excited about the next large language model, but with applications like Ask Sage, and infrastructure that’s resilient, the idea of putting productivity tools in people’s hands to make a big difference today has a huge effect.

And if the data center goes down, or the large language models don’t process, or the energy consumption outpaces what can be delivered to a base or a user, that just undermines the ability and the progress that people want to make. Really, it will start to affect their day-to-day lives, because people are becoming more and more dependent on generative AI solutions. I’ll say it for Art: there are hundreds of thousands of users in the Department of Defense, or Department of War, in the US, and that’s a testament to the need to accelerate innovation for the warfighter.

[Paul] (10:37)

Arthur, do you want to say something at this point about AI in general, or generative AI specifically, in the defense world? Because I think if you talk to the average person, when they think about AI in the military, they imagine conventional old-school warfare conducted by autonomous robots. That’s not really what it’s all about, is it? As Joe said, there’s this slew of departments and people and organizations and private partners in the background who rely on generative AI just to make everything work.

[Art] (11:10)

I’ll give you a story first. We had a very interesting use case that came up with the Navy. The Navy has 3D printers on ships, which ultimately help them machine parts that they need at will. But prior to this, they would have to go and get all the documentation, the schematics; they would have to phone home through geosynchronous satellites and a network in order to get all that information. So now we’re able to provide lightweight, fine-tuned models, deployed on those ships, to interface with those technicians and those warfighter-supporting teams, and to load those schematics and machine those parts without ever having to leave the local network on the ship itself.

It’s not a very sexy use case, but it’s, again, something that’s ultimately driving a lot of efficiency and providing a lot of tactical advantage. If you’re not leveraging AI to help, then you’re going to be left behind. That’s really what we’re doing here: we’re enabling folks to be strategic and more creative in the things that they’re doing, rather than getting mired down in the minutiae. You get to have that engineer, the human in the loop, who can ultimately figure out exactly what’s wrong, take that back to the 3D printer, get the parts that you need, and you’re back on mission without having to stall out waiting on standard maintenance on parts.

[Paul] (12:20)

Yes, it’s not just being able to make the part, it’s also knowing which part to make at exactly what time.

[Art] (12:28)

And is that part going to have a cascading effect on all the other parts that were ultimately put in place? Yes. Are there things that I need to anticipate further down the chain? Is this a symptom of a larger problem? You get to leverage that incredible power of generative AI to support those things while you stay focused on, again, the very tactical need of: I have to fix XYZ.

[Paul] (12:33)

Of course. It’s also, if you like, a way of reducing the cognitive load for everyone in the process. Instead of fixing the part and then having to hold a series of lengthy meetings to decide whether that means there are going to be problems elsewhere in the system, all of that could have been taken care of proactively.

[Art] (13:11)

It’s providing those strategic personnel with an army of AI assistants for whatever discipline they need in order to get their work done.

[Paul] (13:19)

So this is very, very different from The Terminator and stuff like that. What’s happened is that the backroom boys and girls became many times more effective. The actual humans involved have been freed up to do what you might call higher-order tasks.

[Art] (13:38)

Absolutely. If you just look at Ask Sage ourselves as our own use case: as Joe alluded to earlier, our founder saw a 35x, and then actually a 50x, velocity increase in the way that he was developing the platform. So, as we like to say, Ask Sage wrote 90% of itself. Instead of being that developer, you become the orchestrator of AI dev assistants in order to complete your mission.

[Paul] (14:03)

Most code only has, what, 1 to 10% clever bits in it. Yep. The rest is just needed to support all of that.

[Art] (14:11)

If you’re trying to put a well-architected framework in place, and you have the blueprint for the application that you’re looking to deploy, then to your point, I really only need to focus on the very innovative functions that I need, and then I can be very prescriptive with the AI. Let the AI assistant do what’s embarrassingly parallel, and allow yourself to focus on what hasn’t been done yet.

[Paul] (14:31)

One thing that strikes me at this point is that defense contractors and the military are notoriously fussy about dotting the i’s and crossing the t’s with very, very good reason. So they have very well established processes and procedures that have evolved over decades. How do you take comparatively new technology like generative AI and add it into that system without forcing anybody to break the safeguards that they think are important?

[Art] (15:03)

That’s more of the art than the science, I would say. The way that we like to approach that problem is: we’ll analyze the existing workflow that’s manual, we’ll start with the one step where generative AI would add a lot of value, and then we’ll bookend it from there, looking at the other points throughout that process where generative AI can be inserted. It’s my personal belief that technology should not be dictating the way that organizations define their workflows; it should be supporting them. If you’re doing it a certain way, it’s because it was right at the time.

So let’s analyze what value that was providing you, and make sure that it doesn’t get lost when we leverage this new technology.

[Paul] (15:39)

So it’s almost finding a way to help the system evolve so that everybody is satisfied with it, rather than insisting on a revolution that may lose some received wisdom along the way.

[Art] (15:51)

Yeah, we’re not interested in throwing the baby out with the bathwater. We want to have those humans in the loop. You want to have those subject matter experts, and you want to have transparency in what the generative AI is doing through the process, so that you can essentially be George Jetson: you can hit stop at the Spacely Sprocket factory when you’re ready to come in and make a change.

[Paul] (15:54)

Excellent.

A good analogy might be, if you think back to the Second World War and the First World War, and military calculations then, like producing artillery tables: the word computer referred to a type of job that was performed by people who were really good at doing arithmetic calculations accurately. And those people now don’t have to do that anymore. They can go and actually design the systems, and assume that someone can compute the necessary sines and cosines automatically and precisely every time. It’s the same sort of process, isn’t it?

[Art] (16:42)

Transistors were invented, but the technology wasn’t there yet for you to really use them, so you still needed that human calculation aspect. And then, as the technology evolved and got smaller, you could build more autonomous systems, which became the computers that we know today for those calculations. Similarly with generative AI: it was the evolution of the GPU that really took this thing that has existed for a while and made it accessible.

[Paul] (17:05)

So what makes the defense community want to do this via public-private partnerships, rather than building their own completely separate AI engine and keeping it secret from everybody else?

[Art] (17:20)

Well, those are two very conflicting schools of thought. So that’s the standard GOTS, government-built systems, versus COTS, which are commercially owned and operated and ultimately presented back.

[Paul] (17:32)

That’s COTS, commercial off-the-shelf. And I presume there that the fear somebody in the military might have is: well, if they sold it to us, what if they sell it to somebody else?

[Art] (17:42)

That is a concern, but with any of those solutions coming from the defense industrial base and the supply web that exists, there’s a series of vetting that has to take place, FOCI (foreign ownership, control, or influence) reviews and other things, to understand where the investment and the loyalty lie in the cap table of that particular company, and how it’s ultimately going to be consumed. So there is that fear, but there are also safeguards in place to ensure that the organization is aligned with the mission.

[Paul] (18:11)

Now, I can imagine why a long-serving general or admiral might be concerned about outsourcing the AI aspects of what they’re doing, and it’s something that’s concerning, say, copyright holders in the civilian world. What happens if the AI absorbs some information, but in such a way that it accidentally, unintentionally, emerges at a later stage in some other project, where it wasn’t supposed to?

[Art] (18:40)

It comes down, again, to how you’re going to architect the system, and the checks and balances that you want to have in place. On the one hand, you could have very lightweight models, either completely developed by the government or a third party, or ones that are just open source, fine-tuned and trained, that can be completely segmented off. But then you have to worry about drift, and you have to make sure that you’re maintaining that model, which hasn’t been trained on a huge corpus of data. And on the other side of that, you could leverage the much larger commercial models that companies are putting out, and then you have to trust that they are configuring those models with the right indemnities and the right controls in place. We’re agnostic.

We try to enable both of those strategies.

[Paul] (19:19)

So do you think that if the Department of Defense tried to do this all on its own, that would be a little bit of a fool’s errand, because they would probably need an infrastructure as big as the one we already have for all the other uses of AI? Would that be an unachievable goal?

[Art] (19:36)

Iron sharpens iron. Without that, you don’t have anyone competing in the space. And although I do agree that it would provide additional security advantages, without that innovation and without competition, you’re losing out on the innovative capabilities that ultimately leapfrog ahead. That’s why, from an industry standpoint, things that were built for government use, or built by the government, are viewed as inferior tools much of the time.

So the approach instead is really to take the best and brightest within the commercial space and what they’re doing, put a well-architected framework around that technology so you have those checks and balances, and leverage those innovative capabilities to provide that tactical advantage that we’re looking for.

[Joe] (20:14)

Folks like Ask Sage really embrace the benefits of the COTS approach. The government having to build something on its own is really a waste of taxpayer dollars when a platform like Ask Sage exists. Ask Sage also has a fundamental philosophy, as Art was referring to, of putting the choice of large language model, and of hosting, in the hands of the users themselves, so they can always take advantage of best-in-class, the best models for the tasks they want to get done.

And if those are predetermined and built into some kind of solution that limits the user’s choice, it will result ultimately in inferior productivity and will reduce innovation overall. The approach that the department has taken is to accelerate the adoption of COTS software, the fact that SAGE is agnostic to large language models and agnostic to cloud hosting helps avoid that kind of lock-in and that degradation of advantage over time by empowering users ultimately to make the best choices. And that’s a huge advantage, not only for the users, but for the US.

[Paul] (21:24)

I already mentioned one aspect that concerns people in the civilian world about the data it's training itself on, and that is what happens if somebody feeds in private or personal data by mistake and it later emerges, so the data leakage side. What about, however, AI models that are relied upon for critical scenarios and that have been deliberately poisoned, by what you might call adversarial data, injected by people with your worst interests at heart?

[Art] (21:56)

You really want to build in the necessary safeguards and do quality-control checks against the outputs. You want to have benchmarks in place so that you can ultimately train for that. It's knowing where the model came from and what data it had been trained on before it was released. A lot of these models are fixed in time. So if the terms have already been set so that the model no longer trains on data after it's released, then there shouldn't be a way for you to put additional malicious data in, or to change the way that model leverages its parameters, to put something malicious in a response.

[Joe] (22:29)

The AskSage platform really, in some ways, abstracts away some of these security concerns that you bring up. There's really advanced zero trust built around label-based access controls. And there's also the context of AskSage being something of a pioneer around fire and forget, so that the data and its context don't reside on the model side but on the platform side, which ultimately prevents a lot of what you're talking about.

[Paul] (22:57)

Right, actually if you really wanted to, you could regenerate the model, leaving out the data that you’ve now decided that you distrust, and everybody else’s use of that model or that service will have no effect on you at all.

[Joe] (23:11)

Yeah, so fire and forget means you submit your prompt in that query, and then that information is not retained on the model side, it’s retained on the platform side. And that platform then is in a highly secure environment to protect the users. You won’t leak information, and you certainly will be under less influence from any kind of hallucination or tainting of the models themselves.

[Paul] (23:35)

So I guess an analogy for that might be what we did with passwords in the late 1970s and early 1980s, when we decided that instead of letting the server store the actual password, we would store a cryptographic hash of the password. We can still validate it, but the server never needs to remember or record what the actual password was. And if it doesn't do that, then it's impossible for it to leak it by mistake. Same sort of idea.
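
To see that idea in miniature, here is a rough sketch of the salted, iterated hashing approach Paul describes, using only Python's standard library; the iteration count and salt size are illustrative choices, not recommendations.

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # illustrative work factor, not a recommendation

    def store_password(password: str) -> tuple[bytes, bytes]:
        """Derive a salted hash to store; the plaintext itself is never kept."""
        salt = os.urandom(16)  # per-user random salt defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        """Re-derive from the login attempt and compare in constant time."""
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(attempt, stored)

    salt, digest = store_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False

The server can validate any future login attempt, but a database leak reveals only digests that cannot feasibly be reversed, which is the same property the fire-and-forget design aims for with prompt data.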

[Art] (24:02)

That is a great analogy, yes. All of our communications with those models through that fire-and-forget API are ephemeral, so that none of that data is ultimately retained and then trained into that model or into a future model. The fire-and-forget API that we're leveraging is stateless in the way that it communicates with those models.

[Paul] (24:18)

What does the idea of responsible AI mean in a military context, particularly when you’re talking about autonomous weapons? A vessel that might be running around at sea might itself decide to fire up some other aerial drone or something like that. How do you go about building in safeguards?

[Art] (24:38)

That goes back to the red teaming and having the human that ultimately decides that this is the plan, that this is the mission that we’re ultimately going to undertake. Right. It’s up to that individual to determine what those autonomous systems can actually do. And then once they’ve decided that that’s necessary to complete whatever the mission is, there’s no real need to necessarily have to follow the bullet once it’s shot from the gun. So I think it’s measuring twice and cutting once. You’re going through all the planning steps necessary.

You’re understanding what those contingencies and what the outcomes are if you were ultimately to take that action. And then after that, you’re ultimately letting the chessboard evolve based on that directive.

[Paul] (25:16)

I love that saying, measure twice, cut once. It’s easy to remember. And if only we did a little bit more of it when it came to making cybersecurity decisions.

[Art] (25:29)

Absolutely. Responsible AI, and I'm really offering this as Arthur Reyenger's opinion, not necessarily as an AskSage representative: it's a term that's kind of been co-opted, like a lot of other terms that become very, very important or hold meaning within the culture. For me, it refers to the way that you approach the ethical design, development, deployment, and then use of a system. And I think that the United States, and specifically the Department of War, does a really good job of testing the systems that it's going to use, making sure it's not putting bad weapons systems or capabilities on the front lines, and knowing what the outcomes are going to be. So as long as you're doing that, you can feel more confident that you're leveraging these tools responsibly.

[Paul] (26:15)

And importantly, you can use AI to help you test those very tools more thoroughly without actually having to, as you say, build 20 planes and crash them deliberately.

[Art] (26:28)

Absolutely. And without having to waste taxpayer money or loss of life in order to do that, we’re really putting the measure twice, cut once into practice.

[Paul] (26:37)

And I guess there's a strong element of deterrence in all of this as well, that may make your adversaries think twice or thrice before doing something from their side.

[Art] (26:48)

It could be as simple as this: based on the analysis and the planning of a particular scenario, you realize all you really need to do is jam comms at a particular port, and you don't actually have to deploy any human capital or resources in order to have a show of force or power, or to disrupt the adversary. I think this also gives you the ability to run more creative planning, more creative scenarios, things that save time, money, and, hopefully, human life. So, absolutely.

[Paul] (27:14)

Sort of a less-is-more approach.

Do you want to say something about the kind of collaboration between research institutions and national labs?

[Art] (27:24)

We've had a lot of success within the national laboratory space. The one that I can think of off the top of my head is Argonne National Laboratory, or ANL. They had some homegrown tools that they had developed, like a lot of other folks, but they also had researchers working on best-in-breed and next-generation applications for generative AI. And then they had all of their supporting teams within that same national lab that needed to be able to support those researchers in their efforts. So we were able to come in and provide a single, well-architected platform that can take in all of the different generative AI technologies they have, make sure they have that siloed capability to do the work from a research standpoint, and also support them with some of the commercial models that those supporting teams would need. It fostered collaboration, it allowed them to standardize on security, and it really allowed them to gain a lot of the benefits from generative AI without having to build these things from scratch. We'd love to use that as the micro example that we could apply to the Department of War at scale.

[Paul] (28:22)

I’m conscious of time, so to finish up, maybe I’ll put a question to you, Joe, specifically, and that is, if you could challenge one entrenched assumption, let’s say in the Pentagon, about the adoption of AI, what would that be?

[Joe] (28:38)

There's been so much investment in generative AI and in large language models, and a lot of the mindshare and attention of folks gets them to a spot where they think the real value is in the foundation models themselves. That leads to vendor lock-in. We saw that in the cloud services world. I believe that key assumption in the department, that the value is in the foundation models, has to be challenged. We want the users to have a choice, and we certainly don't want to hurt the warfighter. We want to drive innovation to the warfighter. I think the key thing for everybody to look out for is not just what the best model is today for the task, but what the best path is to drive innovation for the future.

[Paul] (29:27)

I think that’s a fantastic way to finish up, Joe. It’s sort of a warning that it’s okay to choose one path, but actually in the future you may find that you want to have different parts of your organization on different paths. And if you’re locked in, you can’t do that, can you? So a little bit of liberty goes an awful long way. So that is a wrap for this episode of Exploited: The Cyber Truth.

Thanks to everybody who tuned in and listened, and a very special thanks to Joe and Arthur for their very passionate and reasoned responses to my questions. If you liked this podcast, please don’t forget to subscribe so you know when each new episode drops. Please like and share on social media as well. Please share us with all of your team so they too can benefit from Joe and Arthur’s wisdom, insight and passion.

And don’t forget, stay ahead of the threat, see you next time!

The post How Generative AI Is Addressing Warfighter Challenges appeared first on RunSafe Security.

]]>
What the 2025 SBOM Minimum Elements Mean for Software Supply Chain Security https://runsafesecurity.com/podcast/2025-sbom-minimum-elements/ Thu, 23 Oct 2025 15:48:30 +0000 https://runsafesecurity.com/?post_type=podcast&p=255135 The post What the 2025 SBOM Minimum Elements Mean for Software Supply Chain Security appeared first on RunSafe Security.

]]>

 

The updated 2025 SBOM Minimum Elements guidance from CISA and DHS introduces new requirements for the data and context that should be included in every SBOM suppliers generate. In this episode of Exploited: The Cyber Truth, host Paul Ducklin sits down with RunSafe Security's Kelli Schwalm, Director of SBOM, and Joseph M. Saunders, RunSafe Founder & CEO, to explore the updates, how they impact embedded software development, and why more detailed SBOMs are a benefit to software security.

Kelli explains the new technical standards, from component-level cryptographic hashes to generation context metadata that clarifies how SBOMs are produced. Joe discusses how these changes move SBOMs from static compliance artifacts to living tools for transparency and risk reduction.

They also dive into:

  • The challenge of implementing SBOMs in embedded and legacy systems
  • How build-time visibility improves vulnerability management
  • Why accurate license and dependency tracking is key for compliance and security
  • The future of SBOMs in protecting critical infrastructure and national resilience

If you’re a software builder, security engineer, or policymaker, this episode offers practical insights for adapting to the new SBOM landscape.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Special Guest:  Kelli Schwalm, Director of SBOM, RunSafe Security

Kelli Schwalm is SBOM Director at RunSafe Security, where she leads the team developing RunSafe’s unique approach to generating build-time SBOMs for embedded software, particularly software written in C/C++. Prior to joining RunSafe, Kelli worked on embedded security technologies for mission-critical systems with a focus on Linux Kernel development.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:01)

Welcome back everybody to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:20)

Greetings, Paul. Great to be here.

[Paul] (00:22)

And today we have a very special guest indeed: that is Kelli Schwalm, who is Software Development Director, leading SBOM development at RunSafe. Hello, Kelli!

[Kelli] (00:35)

Hi, I’m excited to be here as well.

[Paul] (00:37)

And we have some surprisingly important stuff to talk about. This week’s title is “What the 2025 SBOM Minimum Elements Mean for Software Supply Chain Security,” which is quite a mouthful. Let me just introduce that by reminding everyone that SBOM, without a B at the end, doesn’t blow things up, it gets them into order, stands for Software Bill of Materials.

The important thing behind this episode is that CISA and the DHS, that’s the Cybersecurity and Infrastructure Security Agency and the Department of Homeland Security in the US have just released the first major update to their minimum elements for a Software Bill of Materials since 2021. This raises the bar for what suppliers should provide in their Software Bill of Materials. So Kelli let us start at the beginning. What has changed in this new draft of the minimum SBOM elements? What have they added and why does it matter?

[Kelli] (01:49)

They made some major and minor updates to a lot of the components that are either newly required, or haven't been required previously, or now need a higher level of detail to really satisfy that minimum level of what an SBOM should represent. So for example, a component hash is a new field that is required within the SBOM, and that really just identifies what the component is.

A component can have the same name, the same properties, the same information, but across versions, and depending on who supplies it, that hash can change. We like to talk about how a file can also represent a component, and that file can be modified throughout the lifetime of a build; we want to report those as distinct components. Other major changes are identifying generation context, dependency relationships, and requiring transitive dependencies.

[Paul] (02:48)

Now, transitivity, for our listeners, is something that many people won’t have heard of since primary school, where you learn about all the special rules of addition and multiplication. But loosely speaking, transitivity means that if A depends on B, and B depends on C, then there’s no getting around the fact that A depends on C as well, and that set of links really matters.

It’s not just enough to go back one step in the chain, is it? You might have to go back two or three or four, depending on the industry you’re in.

[Kelli] (03:23)

Absolutely, and that's a very frequent scenario. When a user installs a package on their system, that will often install other packages that the first one requires. So it is very important to track that, as vulnerabilities can be present in any one of those pulled-in dependencies.
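
As a rough illustration of that transitive closure in code, here is a minimal Python sketch; the package names and the graph itself are hypothetical.

    # Hypothetical dependency graph: package -> packages it directly requires.
    DEPS = {
        "app": ["libssl", "libjson"],
        "libssl": ["libcrypto"],
        "libjson": [],
        "libcrypto": ["zlib"],
        "zlib": [],
    }

    def transitive_deps(pkg: str, graph: dict) -> set:
        """Walk the graph so that A->B plus B->C also records A->C."""
        seen = set()
        stack = list(graph.get(pkg, []))
        while stack:
            dep = stack.pop()
            if dep not in seen:
                seen.add(dep)
                stack.extend(graph.get(dep, []))
        return seen

    print(sorted(transitive_deps("app", DEPS)))
    # ['libcrypto', 'libjson', 'libssl', 'zlib'] -- zlib matters even though
    # "app" never names it directly; a vulnerability in zlib is still app's problem.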

[Paul] (03:36)

Yes.

So it’s not quite like a cake recipe where it will expressly say, you need 50 grams of sugar and 150 grams of sodium bicarbonate. There might be an ingredient that’s just put in 200 grams of this other product. And when you get that product out and you look at its list of ingredients, there’s 50 more things in that and each one of those has 50 more things in it. And all of them make a difference in the end.

[Kelli] (04:08)

Yes, and as someone with a food allergy, I love that analogy because I’m frequently typing all of the ingredients on every component I add into my recipes.

[Paul] (04:19)

Yes, that's a great analogy for transitive dependency, isn't it? And that can really make a huge difference in software. You think you've got one dependency, but in fact you can have a whole web of intrigue in the background, any component of which could be buggy because it's not looked after well. Or, even worse, as we've seen recently in many cases, it could have been deliberately modified by cybercriminals or even state-sponsored actors, who've possibly spent a long time building trust in the ecosystem so they can poison something far upstream, with the intention of attacking some specific target downstream, and to hell with the consequences for all the other people they affect as well.

[Kelli] (05:03)

Absolutely. And the way we identify vulnerabilities is really interesting, because we rely on package identifiers called CPEs, which are Common Platform Enumeration entries. A CPE is essentially an ID for a package that is well recognized, and it comes from a central database. CPEs identify a package, which would be a dependency. They're a way for the entire industry to recognize what that package is named, what it's called, and how to attribute it to CVEs, the Common Vulnerabilities and Exposures records, which are the IDs of vulnerabilities. So there's a strict mapping between CPEs, the package IDs, and CVEs, the vulnerability IDs.
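
For a concrete picture of what one of those identifiers looks like, here is a real CPE 2.3 string (the OpenSSL build affected by Heartbleed, CVE-2014-0160) pulled apart with a deliberately naive Python sketch; production parsers must also handle escaped colons.

    # CPE 2.3 identifier for OpenSSL 1.0.1f, a version linked to CVE-2014-0160.
    cpe = "cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*"

    FIELDS = ["cpe", "cpe_version", "part", "vendor", "product", "version",
              "update", "edition", "language", "sw_edition", "target_sw",
              "target_hw", "other"]

    # Naive split for illustration only; escaped colons would break this.
    record = dict(zip(FIELDS, cpe.split(":")))
    print(record["vendor"], record["product"], record["version"])
    # -> openssl openssl 1.0.1f
    # Matching tools look up this (vendor, product, version) tuple to find
    # the CVE records mapped against it.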

[Paul] (05:49)

So the idea there is that if you have a particular CPE identifier in your product, and then one of those CPEs turns out to have some critical vulnerability, you will not only know about it, you will be able to track it down and fix it. 

[Kelli] Correct. 

[Paul] So there’s an awful lot at stake here isn’t there?

[Kelli] (06:09)

There is. And interestingly, CPEs are only created as matching CVEs need them, so there can be a backlog before a CPE even gets created just to link to that CVE.

[Paul] (06:23)

So you mean unless and until someone figures out that a particular package might be bad, it's kind of assumed to be harmless. Yes. And it doesn't appear on the list. Now, is that why this idea of individual cryptographic hashes for the actual packages you use has been introduced, so you can say "the files that I used are exactly these", for the sake of repeatability? Is that the idea there?

[Kelli] (06:48)

I think there's certainly a significant impact from that. For example, a supplier might modify a package. Many RTOSes, or real-time operating systems, package things like their own version of GCC, and it may be modified; we don't really know. So those cryptographic hashes are able to say, "Is this modified?" They're also able to attribute a package to the vulnerabilities from a different supplier if needed.

I don't know if that's the intent of the cryptographic hashes, but it's certainly a byproduct that's very compelling.
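The hashing itself is the easy part; here is a minimal sketch in Python, where the toolchain paths are hypothetical.

    import hashlib

    def sha256_of(path: str) -> str:
        """Hash a component file in chunks so large binaries stay cheap to process."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Two toolchains shipping "the same" library can now be told apart:
    # any modification, however small, produces a completely different digest.
    vendor_a = sha256_of("toolchain_a/libfoo.a")  # hypothetical paths
    vendor_b = sha256_of("toolchain_b/libfoo.a")
    print("identical" if vendor_a == vendor_b else "modified somewhere")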

[Paul] (07:22)

It certainly sounds as though it forces people to provide a more definitive description of what they actually used, rather than just the title that it happened to have when they downloaded it.

[Kelli] (07:34)

I'm going to go on a tangent a little bit, but I think something that's very cool about SBOMs is that they comply with specific schema standards, through CycloneDX or SPDX. There's also, I think, SWID, which is generally not used, from what we've seen. Because these are well-defined schema formats, there are so many tools to analyze the data. So really, when it comes down to it, more data is better, because this isn't meant to be reviewed by a human. It's meant to be reviewed by some tool that takes in the JSON, for example, and can strip out all the information it needs and analyze it.
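
To make that concrete, here is a skeleton of a CycloneDX-style document built and serialized in Python; the field names follow the published CycloneDX JSON schema, but the component values are invented for illustration.

    import json

    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": "zlib",
                "version": "1.2.13",
                "hashes": [
                    # digest truncated here purely for display
                    {"alg": "SHA-256", "content": "b3a24de9..."},
                ],
                "licenses": [{"license": {"id": "Zlib"}}],
            }
        ],
    }

    print(json.dumps(sbom, indent=2))
    # Because the schema is fixed, any downstream tool can pull out names,
    # hashes, and licenses without a human translating formats by hand.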

[Paul] (08:14)

So presumably the idea of that is that it forces everybody to describe their packages A) in a machine-readable way, which helps with automation but does no harm for humans, and B) so that you don't have to spend hours and hours translating everybody's data, perhaps slightly incorrectly: everybody is singing from the same song sheet. Now Kelli, there's a new item called

[Kelli] (08:40)

Yes.

[Paul] (08:44)

Generation context. Can you explain something about that?

[Kelli] (08:49)

Yes, so there are different categories of SBOMs and that really just defines when they’re generated. There is a binary-based SBOM that’s just generated on a resulting binary, no source code included.

 

[Paul] (09:04)

That’s basically taste the cake and go, yeah, probably a bit too much sugar in that. But maybe you don’t quite taste the arsenic that somebody snuck in there. Yes.

[Kelli] (09:10)

Yes.

Sometimes that is very useful. For example, with my food allergy, I can’t have gluten, so I will have my husband taste test food to be like, mm.

[Paul] (09:28)

Now that’s the way to do it.

[Kelli] (09:30)

This has happened a few times. I'll look and be like, nope, that's not safe. However, that doesn't give as much information, and isn't as reliable, as going straight to that ingredient list. And again, going on that analogy, my husband might taste the food and know there's gluten in it just because of the texture, but he doesn't know what the actual flour mixture is. So you still don't have the ingredient list you're expecting; you're more just looking for a particular ingredient.

[Paul] (09:58)

Sounds like an excuse to buy a gas chromatograph just for hobby use at home.

[Kelli] (10:02)

You have a good point.

[Paul] (10:05)

Ha ha ha ha ha.

[Kelli] (10:07)

Another category is a source-based SBOM. So that's when you're just taking a static look at the source code. Right. And you're saying, I know everything that could go in. So that's all the ingredients in your pantry. You know everything that could go into that recipe. However, not all of it may actually go into the recipe. So you might have three types of flour, but only two are mixed in. 

And so then my favorite category is the build-time SBOM, where you really are just sitting at build, at compilation, and you’re seeing everything that’s coming in. So that’s someone essentially watching the recipe unfold, watching every ingredient go into the mixture and recording it.

[Paul] (10:50)

Presumably that provides strong protection against A) accidents and B) malevolence.

[Kelli] (10:58)

Yes, absolutely. And builds can be very complicated. They can also call out to external resources to download even more dependencies that may not be on the system prior to the build.

[Paul] (11:12)

In other words, someone could in all innocence say, here’s my bill of materials, but they haven’t taken into account or haven’t noticed that when they do a build, they actually suck in more things. Perhaps a rogue update that just gets blindly accepted without human review, which means that the bad guys get their way. Yes. So Joe, if we just switch to what you might call the bigger picture for a moment, how do these regulatory changes move SBOMs into something that actually drives resilience and as you like to call it elevation of process in the software supply chain?

[Joe] (11:50)

I think increasing transparency is ultimately the goal so that software teams can share with their customers exactly those ingredients that are in the products that they deploy in their infrastructure, in their operational technology networks. With that level of transparency by sharing the information, including having complete and correct identification of components, you’re better able to share vulnerabilities and risk that might be in a software system.

If I put my RunSafe hat on for a minute, I take it even a step further, to help increase the security posture. That is, we want to encourage people to disclose vulnerabilities so everybody knows that a vulnerability exists, so the corrections and the patches can get applied. But if you're already protecting the software, you can disclose with confidence. So if you have things like RunSafe's security protections on the software and you identify a vulnerability, well, great news: you should share it with everyone so everybody knows.

And you should tell your customers you're already protected. So the idea is not only to be transparent, but to disclose more readily, sooner in the process, so everybody can increase their defenses, and we don't blindly assume we just won't be attacked. If you combine good insight from having the best approach to generating a Software Bill of Materials with code protection, then you have the best opportunity to link all those vulnerabilities, to share them with your customers, and ultimately to boost transparency with your customers overall. If you're just relying on checking a box, and you're not that serious about security, then you're probably going to miss some of these things, and your customers will ultimately pay the price.

[Paul] (13:31)

Now I know that a lot of embedded software is still written in C or C++. It’s very well understood. The development tools are readily available even for esoteric embedded systems. But SBOMs in the C and C++ world have some special challenges of their own, don’t they?

[Kelli] (13:52)

I love talking about this because it is such a headache. Okay, so really with any kind of legacy C/C++, or really any embedded compilation software development, anything goes. I like to refer to it as the wild west of software development.

[Paul] (13:56)

Eee, that’s great! I’m all in.

[Kelli] (14:13)

People will copy and paste specific versions of files right into their code base. And your only indication might be a commit that references it (a commit being really just a name and a description assigned to a change set), if there is even version tracking in that repository. Other projects might use Git submodules, but Git submodules are kind of the best of the worst.

So it’s a way of tracking those packages through some means.

[Paul] (14:47)

So that’s where you divide your project into some sort of logically useful subdivisions rather than just having one giant file called program.c.

[Kelli] (14:58)

Yes, which also can happen, unfortunately.

[Paul] (15:01)

Ha ha ha ha!

Now there's also another hassle with even modern C code, let alone legacy code, isn't there: when you're supporting lots of different chipsets or lots of different device types as they've evolved over the years, you can have what's called conditional compilation, which depends on all kinds of build-time settings. So you look at the code and you'll have 17 versions of some, say, hashing algorithm, and which one gets selected cannot be determined unless and until the software is actually built.

[Kelli] (15:36)

Yes, that's very true. And I like to use an example that ties everything together. In a compilation environment where you're building for multiple chipsets, you might also build a library that is a dependency of the final binary. That library has the same name for each chipset it's compiled for, and in the same compilation process it has been linked into the final binary for that chipset. You have no way to differentiate those libraries across the different chipsets, because they are proprietary, they have the same name, the same location, almost the same everything.

[Paul] (16:14)

Probably have exactly the same version number in, won’t they?

[Kelli] (16:17)

Correct. So build time is great because then you can look at everything coming in during that process and say, well, this library depends on these source files that are particular to this chipset.
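
A hypothetical illustration of that problem: the same source file, compiled twice with different chipset defines, produces artifacts with identical names but different contents. The sketch below assumes gcc is on the PATH and that crypto.c selects code with #ifdef CHIP_A and #ifdef CHIP_B; both names are invented.

    import hashlib
    import subprocess

    for chip in ["CHIP_A", "CHIP_B"]:
        # Same source, same output name; only the -D flag differs.
        subprocess.run(
            ["gcc", "-c", f"-D{chip}", "crypto.c", "-o", "crypto.o"],
            check=True,
        )
        with open("crypto.o", "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        print(chip, digest[:16])
    # Same file name, same "version", different bytes: only information
    # captured at build time (the -D flags, the input files) tells them apart.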

[Paul] (16:31)

So the deal there is that by monitoring what happens at build time, you can not only tie binary objects or libraries back to specific source projects or sub-projects, you can also record the actual build parameters that were used, which is good for repeatability. But I guess it also means that if there's a CVE for that particular library, and you often do find CVEs that say "this applies to the ARM processor version only" or "Intel x64 only", you can be much more precise about assessing vulnerabilities and vulnerability disclosure. So you also save panic and false positives as well.

[Kelli] (17:13)

Yes.

That notion of trying to pare down which vulnerabilities actually apply is called triaging: triaging these false positives that alert on CVEs, on vulnerabilities, that don't really affect you. Being able to filter out the different chipset dependencies is a great way to triage, and to redevote resources to other vulnerabilities that do apply and that you do need to mitigate.

[Paul] (17:43)

So what are some of the things that you can do to solve these problems? What actually went into the cake and what effect did that have on the cake that we didn’t think of before?

[Kelli] (17:54)

So my favorite approach for embedded is to look at the files that are being accessed. The files tell us a lot; they tell us the source code. For example, many CVE descriptions will contain something along the lines of "this vulnerability is present in these files of this package." We know the files coming in, so we can again apply that filtering mechanism if we report by file. And we operate, for the most part, on file-based operating systems: builds are being done on systems where everything is treated as a file, more than you would ever anticipate. That includes libraries, that includes applications, that includes the actual source files, and that includes the artifacts that go from the source file to the compiled output. So treating things on a by-file basis really gives you all of the information you could want about what goes into that final resulting binary.
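
One rough way to observe those file accesses from outside the build, sketched under the assumption of a Linux host with strace installed; the build command and the suffix filter are illustrative.

    import re
    import subprocess

    # Record every file-open attempt made by the build and its child processes.
    subprocess.run(
        ["strace", "-f", "-e", "trace=openat", "-o", "build.trace", "make"],
        check=True,
    )

    accessed = set()
    pattern = re.compile(r'openat\(.*?"([^"]+)"')
    with open("build.trace") as trace:
        for line in trace:
            m = pattern.search(line)
            if m and "ENOENT" not in line:  # skip files the build probed but never found
                accessed.add(m.group(1))

    for path in sorted(p for p in accessed if p.endswith((".c", ".h", ".a", ".so"))):
        print(path)  # candidate ingredients for a file-level SBOM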

[Paul] (18:54)

And presumably also gives you intimate insight into files that are generated temporarily and used, for example, as scripts that affect the build process, even if they’re then removed afterwards. Yes. Because my understanding is a lot of supply chain attacks these days involve targeting the build process with the intention of corrupting the build environment while nevertheless building a correct binary at the end in the hope that nobody will notice.

[Kelli] (19:24)

Yes. And like I said before, embedded is the wild west of compilation. People will actually generate a library for one chipset, then generate it for another chipset, over and over again, all in the same build process. Is that particularly hygienic for a build process? No, it is not. But if you do a file-based SBOM, you can see it.

[Paul] (19:48)

How do you prevent that?

[Kelli] (19:53)

Every file that's coming in, you can see: well, we referenced those 73 chipsets before we got to this one. Or, through dependency mapping, which is that dependency relationship mentioned in the minimum requirements, we can say: this binary actually depends on this file that's really suspect. That's weird; where is that coming from? Another compelling alternative to build-time SBOM generation, especially in the embedded world, is doing these hashes not just of components but of individual code snippets, and trying to identify them that way. That's very useful with open source, where you can obtain all of these code snippets and recognize where they're coming from; the GCCs of the world you can still recognize. But in embedded, so many things are proprietary, and those code snippets aren't accessible, so that becomes a much more complex problem to handle.

[Paul] (20:50)

Do you think that means that vendors may need to be pushed to provide, perhaps under special licenses or NDAs, access to their source code? Or do you think we're stuck with the fact that some of the things that we compile into our software are just going to be intractable blobs that we have to take on trust?

[Kelli] (21:10)

I think if we lived in a software utopia, it would be great to have access to these code snippets to properly identify. However, we don’t. So I do think it is trying to push the market into a place that will never exist. So we do need other ways to identify components that go into an SBOM that aren’t just code snippets.

[Paul] (21:33)

We may have made it sound more difficult than it really is, even though the whole idea is that huge amounts of this can be reliably automated. So if a developer or product security manager listens to this episode and thinks, nah, it’s too hard, I’m just going to stick to checkbox compliance, Joe and Kelli, what would either or both of you say to try and help them get over that hurdle?

[Kelli] (22:01)

Personally, I would say to just jump into the community. There are a lot of people very passionate about educating everyone on what’s available, the complexities that occur, how to overcome those complexities. And people are very passionate about cybersecurity in this community. So just go to the conferences, look on LinkedIn, see who’s talking about it and ask.

[Joe] (22:27)

And from my perspective, and I've heard Kelli say this in the past, one of the great capabilities of RunSafe Security is to make it very simple to integrate, and to give access to a lot of information, a lot of intelligence, a lot of insight, at that moment of build, for improving your overall security posture, while making it easy to deploy. It really is a core competency of RunSafe to integrate at build time and to make that very simple for people.

So you can extract all that insight that we've been talking about. And all the complexity that we've heard about is stuff that can be automated, as you say, Paul, and really streamlined, given the standard formats and even the identification of the minimum elements that are needed.

[Paul] (23:10)

And vulnerability disclosures aren’t a sign of weakness, are they? Trying to sweep all this stuff under the carpet, like many vendors used to do in the olden days, just doesn’t work. Because if you don’t find the problem, the cybercriminals or the state-sponsored actors are going to find it for you.

[Kelli] (23:29)

And I do feel like there may be no better feeling than finding a bug before it’s ever encountered in the wild and fixing it and knowing that you just were able to resolve it. And everything’s good in the world.

[Paul] (23:41)

Do think that there are still people who think I’m just going to stick my head in the sand and I’m just going to try and do it the old way because it’s too hard to change?

[Kelli] (23:51)

It’s more likely that they are concerned about what might be revealed about their software through an SBOM.

[Paul] (24:00)

You mean they’ll give away some information about the secret source?

[Kelli] (24:02)

Correct. But generally the software is patented in some way and really the payout of being able to quickly mitigate any vulnerabilities before they are ever exploited is way higher than the risk of giving away too much information.

[Paul] (24:22)

Joe, I think you’ve said in previous podcasts that more or less 80% of the software modules that go into modern embedded software tools will be open source. If that’s the case, you’re not really going to lose much with your SBOM.

[Joe] (24:41)

I think the percentage of open source across embedded systems probably varies tremendously, and some people are shifting, perhaps, from real-time operating systems to embedded Linux. As a result of all of that, I do think the shifting landscape matters. And, dare I bring up generative AI, but I do think that in some cases you could take an open source component and make a close version of it your own using generative AI. From an outsider's view, that would look like a brand-new component altogether; it wouldn't match the open source file it came from. I do think that could create some complexity as well, so the true counts of open source may not be well known, because there could be copycats or near versions. Part of my thinking on that is, again, that you want to have cyber defenses that work even if a near clone contains the same vulnerability as the original.

You want to make sure you can either identify those components or prevent the exploitation of them, regardless of its source.

[Paul] (25:43)

Even though it's free and open source, the licensing requirements can get surprisingly complicated, can't they? You might have a license that says you can do what you want with this; others say you can use this without sharing your source code, but you have to put this copyright notice in; and then there's the GPL copyleft, where if you use my code, you have to show everybody yours. My understanding is that these new rules for SBOMs require people to get serious about providing genuine license information so that these problems don't arise.

[Joe] (26:17)

Well, I think part of the reason why it's important is that you certainly don't want to ship or distribute a product that contains a restrictively licensed component if you are not, in fact, complying with that license. Having checks and balances in your build process and in your software development process, to ensure that you do catch those more restrictive open source licenses, becomes important, so that the full package, the full product, beyond just the open source component you included, doesn't itself become subject to those obligations.

So you want to be very careful about how you distribute software when it does involve open source licensed components.
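
One way such a check-and-balance might look in practice is a minimal CI gate that reads a CycloneDX-style SBOM and fails the pipeline on forbidden license IDs. The policy set and the file name below are assumptions; a real policy list should come from your own legal guidance.

    import json
    import sys

    FORBIDDEN = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}  # example policy

    with open("sbom.cdx.json") as f:  # hypothetical SBOM path
        sbom = json.load(f)

    violations = []
    for comp in sbom.get("components", []):
        for entry in comp.get("licenses", []):
            spdx_id = entry.get("license", {}).get("id", "")
            if spdx_id in FORBIDDEN:
                violations.append((comp.get("name"), spdx_id))

    if violations:
        for name, lic in violations:
            print(f"policy violation: {name} is licensed {lic}")
        sys.exit(1)  # break the build before the product ships
    print("license check passed")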

[Paul] (26:54)

That’s not just for moral and ethical reasons, is it? It also can have severe legal consequences up to and including the position where you could be required to stop selling your product and even to remove existing products for the market.

[Joe] (27:09)

Yeah, and I think it also plays out if your company gets acquired, there’ll be a full review of those license checks as well. So it’s just a good practice to have a discipline internally to ensure you’re not violating those terms.

[Paul] (27:22)

So if we can finish up: my understanding is that the current draft of the SBOM minimum elements document is still open for feedback. Joe and Kelli, what things do you think aren't in the minimum elements standard that really ought to be there? And if you had to push for one or two things to be added, what would they be, and why?

[Kelli] (27:44)

I still think these minimum requirements don’t adequately address embedded SBOM generation. For the most part, SBOMs are assumed to reference languages like Python, like Rust, that have this notion of a package manager. You can refer to the manifest that already exists.

[Paul] (28:04)

In other words, they’re relying on metadata that’s already well established in the industry. Like you say, in the embedded market where you’ve got perhaps legacy C code, it is the wild west, east, north and south, isn’t it? What do you do about that? What could be added to the minimum elements that would help iron out that problem?

[Kelli] (28:11)

Correct.

I think really it just needs to be a perspective shift. A lot of the descriptions have been made more general: components can be a file, can be a library, can be an application. And yet few entities really understand that a file is a valid component, and that a file can contain a lot of very helpful information. For example, we've been toying with the idea of mapping files to a package and figuring out how we can relate them; the schemas don't really address that scenario, because in most cases you would just report the packages.

[Paul] (29:03)

So you just say this particular tarball, for example, and you wouldn’t concern yourself with the 10, 50, a thousand individual files and scripts that might be inside it.

[Kelli] (29:15)

Yes, and those files can contain valuable information about licensing. They're very important to include, because they validate the reference that we make when we report a license or an author or a copyright, all things that are referenced in the minimum elements in some capacity.
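
The CycloneDX schema can already nest file-level entries inside a package entry, which is one way to express the mapping Kelli describes; the names, hashes, and license below are invented for illustration.

    import json

    package = {
        "type": "library",
        "name": "libexample",  # hypothetical package
        "version": "2.1.0",
        "components": [  # the individual files that substantiate the claims above
            {
                "type": "file",
                "name": "src/parser.c",
                "hashes": [{"alg": "SHA-256", "content": "9f2c41aa..."}],
            },
            {
                "type": "file",
                "name": "LICENSE",
                "licenses": [{"license": {"id": "MIT"}}],
            },
        ],
    }

    print(json.dumps(package, indent=2))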

[Paul] (29:20)

Absolutely.

The whole idea is to know all the ingredients that went into the cake, so just saying "I took a package that said flavor enhancer on it and threw in some of that" doesn't sound like enough to me. But do you think, if you push for that, you will meet stiff resistance from the package-manager community, the Node.js people and the Python people and so on? Do you think they'll go, "Oh, no, no, you're going to make it too hard," even though you would undoubtedly make it better and safer?

[Kelli] (30:05)

No, I think people are moving towards that, wanting to recognize all scenarios. SBOMs are a fairly modern concept, where we had to hit the most value the fastest, so we relied on manifest files to get the bare minimum information. That's why the 2021 minimum elements, I think it was, did a great job of priming the community to start down this process. But now we need to expand to more cases, and really understand that a lot of critical infrastructure relies on embedded software. And if that embedded software is vulnerable, we have a problem.

[Paul] (30:49)

Absolutely.

[Kelli] (30:51)

Having gone to VulnCon, for example, in North America, a lot of people are very interested in solving that problem. It's just that not a lot of people know how yet.

[Paul] (31:01)

Joe, you've made the point in previous podcasts that if you want to attack America's AI capability, then the obvious way is, let's break into the heavily protected, super-secure servers and let's try to affect the software. But if you can't do that, maybe just shutting down the aircon would be more than enough.

[Joe] (31:24)

Absolutely. And I think the resilience of the infrastructure is very important, whether that’s the cooling system inside the data center or the reliability of the energy grid itself. I do like to say without infrastructure resilience, there can be no AI dominance in the U.S.

[Paul] (31:33)

Yes.

So a final thought for our listeners, if I might ask, what’s the one thing that security leaders and software development can take on board today so that they’re ready for these new SBOM rules and regulations?

[Kelli] (31:57)

I would say just dive in. There’s a wealth of resources out there, a wealth of software that they can play around with to start generating SBOMs and see what comes of it.

[Joe] (32:09)

And I would just add, consistent with what we said around the analogy of the baked cake: you can have your cake and eat it too. When you generate a Software Bill of Materials, you offer insight that not only makes you transparent with your customers, but also boosts your security operations, without slowing down your development process.

[Paul] (32:32)

Excellent. I think that’s a fantastic point on which to end. It’s not as hard as you think. You’re going to have to do it. So you might as well get started anyway. It will be good for everybody. So don’t delay. Do it today. That is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. Thanks especially to Joe and Kelli for their valuable and thoughtful insights.

If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like us and share us on social media as well. Please also don’t forget to share us with everyone in your team. And remember, stay ahead of the threat. See you next time.

The post What the 2025 SBOM Minimum Elements Mean for Software Supply Chain Security appeared first on RunSafe Security.

]]>
Collaboration in Cyberspace with Madison Horn https://runsafesecurity.com/podcast/madison-horn-collaboration-in-cyberspace/ Thu, 16 Oct 2025 14:40:35 +0000 https://runsafesecurity.com/?post_type=podcast&p=255090 The post Collaboration in Cyberspace with Madison Horn appeared first on RunSafe Security.

]]>
 

In this episode of Exploited: The Cyber Truth, host Paul Ducklin welcomes Madison Horn, National Security & Critical Infrastructure Advisor at World Wide Technology, alongside Joseph M. Saunders, Founder & CEO of RunSafe Security. Together, they explore the power of collaboration in cyberspace—and why unity between government and industry is key to defending our most critical systems.

Madison draws from her extensive background in national security and infrastructure resilience to discuss how public-private partnerships can evolve beyond check-the-box compliance. Joe adds perspective on the economic and operational alignment needed to ensure a well-functioning, secure society.

Tune in to hear:

  • What defines effective public-private collaboration in cybersecurity
  • How AI and emerging technologies reshape critical infrastructure defense
  • Why resilience depends on communication between operators and suppliers
  • The case for security-by-design as a shared responsibility
  • The role of deterrence, cyber forces, and software resilience in national security

Whether you’re in the public sector, tech manufacturing, or cyber policy, this episode offers a candid look at how cooperation—and not competition—will define the next era of cybersecurity.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Special Guest:  Madison Horn, National Security & Critical Infrastructure Advisor at World Wide Technology

Madison Horn is a seasoned cybersecurity executive and two-time federal candidate, most recently running for U.S. Congress in the 2024 cycle. She brings over 15 years of experience leading cyber strategy and incident response across critical infrastructure, national security, and regulated industries. Madison has held senior roles at Siemens Energy, PwC, and Accenture Security, where she built and led global portfolios, advised C-suites on digital risk, and guided organizations through major transformation and resilience initiatives. She also founded RoseRock Advisory to support startups and investors at the intersection of cybersecurity, geopolitics, and innovation.

Madison now serves as National Security Strategy & Policy Advisor for WWT, where she is focused on advancing strategic cybersecurity initiatives, strengthening public-private partnerships, and supporting national resilience across defense and critical infrastructure sectors.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:04)

Welcome back, everybody, to another episode of Exploited: The Cyber Truth. I am Paul Ducklin, and I'm joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe. You're looking forward to this one, aren't you?

[Joe] (00:21)

Greetings, Paul.

Very much so. We have a great guest today.

[Paul] (00:28)

Yes, we have a super special guest today: that is Madison Horn, who is National Security and Critical Infrastructure Advisor at World Wide Technology. Greetings, Madison.

[Madison] (00:43)

Greetings, Paul and Joe. I’m going to go back and listen to more of the podcast to see if you guys call everyone a special guest.

[Paul] (00:49)

Remember, the thing about a word like "best" is that you can have equal best: two people can win a gold medal. It's only when you say "better" that, by being relative, it becomes absolute. Please don't panic about that. Today's topic is collaboration in cyberspace. Madison, I hope you don't mind if I start off by asking you to tell us about your background, and in particular how you came to see cybersecurity as the thing you think it is today: basically, something that is not just a product that you need to buy, but almost something that we need as part of a well-functioning society.

[Madison] (01:31)

Yeah, sure. I love the way that you shaped that intro. I think everyone asks, "Tell us about your background," but I appreciate the "why the hell are you passionate about the space?", because for me it certainly is personal. I think it's personal for a lot of people who work in cyber; we don't dedicate our time and make the sacrifices at home purely to make money. And so I would love to hear from more people within the space about why they are actually passionate about cyber. So thanks for the question. Long story short, I've been in cyber for the past 17 years and have had several bends and turns in my career. I started out doing basic project management and assessments in the critical infrastructure space, and absolutely fell in love with the world of critical infrastructure, which is obviously why I've stayed in it. I helped lead a red team, built out an incident response team, was in the startup space, and went through Accenture and PwC.

I found a lovely home at Siemens Energy, and then did this wild thing, and I joke that I took a sabbatical: I ran for the U.S. Senate, and I ran for Congress, really to help elevate the need for a national conversation around the importance of cyber, but one that is rooted in the industry and also has a technical background, because of the intersection of technology, policy, and social impact. And I don't mean social impact as it relates to where people's brains typically go. I mean that if the electric grid goes down, there are hospitals that could be impacted, and people's lives that are going to be impacted by it. It's personal to me.

And the personal element, to put a bow on it so we don't get stuck here, is that I come from a place that is very, very impoverished, where the system is so fragile that if you create any type of little hiccup, you're talking about potential loss of life or real human hurt. And so for me, cyber is really about that defensive posture, and about remembering that we work in cyber to protect people. So that's why I'm passionate about cyber. I don't see it as a product. I see it as something that is in the nation's best interest: from a security perspective, economic, human life, societal, all those things.

[Paul] (04:00)

Yes, I agree, because when you think about political arguments and elections in almost any Western democracy in the world, the things that come up are: well, I want better roads, or I want cheaper electricity, or I want better health care, and all of that stuff. But behind all of those separate issues is the fact that in the modern era, thanks to the way our critical infrastructure works, if we don't get cybersecurity right, it doesn't matter how clever you want to be about any of those issues; they could all be threatened in a similar way, couldn't they?

[Madison] (04:34)

What we are seeing, and have been seeing since 2014, are real-life examples, for the first time, within the critical infrastructure space. Sure, we can talk about Stuxnet, perhaps, but what we have seen is almost a test bed in Ukraine for how important it is to defend our critical infrastructure. It is now a basic maneuver in warfare and a play against our enemies, whether it's around pre-positioning or something during active conflict. We don't have to ask the question, "Is this important?" No, we're seeing it in real time.

[Paul] (05:13)

So do you want to just say maybe a little bit more about how technology, policy and geopolitics have kind of woven themselves into each other and the changes that that has made to the attitudes we really need when it comes to cyber security?

[Madison] (05:31)

Sure. I mean, this could take all day, my goodness.

[Paul] (05:36)

When I’m laughing there, I’m laughing out of anxiety rather than because it really is humorous.

[Madison] (05:42)

My role right now at WWT involves three things, and I sit within this triangle, which allows me to give this perspective. At the top of that triangle is being an advisor to our critical infrastructure sector on all things regulation, emerging threats, emerging technology, how it's being implemented, what new risks are being introduced, et cetera, et cetera. Another side of that is helping liaise, in a different capacity, with the think tank community and the nonprofits that are helping push healthy policy as it relates to critical infrastructure and the intersection that you're asking about. The other part of that triangle is the cyber product landscape, and I get to advise our cyber product friends and fellows on what we need for the critical infrastructure sector. Are we going to take advantage of the fact that the public sector is reaching for the private sector and saying, "What do we do?" Now, we have to accept that there is bureaucracy in place, but it is an opportunity for us to really play that liaison to the public sector and say, hey, this is what we need and this is our perspective. But from a technology landscape, we're almost reaching this same paradox. And let me say the word: AI.

It's almost as if we're creating the internet again. And we didn't try to regulate the internet at the beginning of its time; I think that we need to treat this very, very similarly. And we can see the current administration is really leaning in and saying, hey, we're not going to try to over-regulate this space. What we're going to ask the private sector is: how do you want us to regulate it? And if I were a business owner, I'd be like, I don't want any rules. I want to make up my own rules.

Let me be very clear: there just need to be some type of guardrails in place, or otherwise that's a crazy train heading towards a dumpster fire, driven by a feral animal. I think that is a very, very different approach than what we have seen in the past, where we have more or less been told what we are going to do from a regulatory perspective, versus this heavy leaning into industry. The question is just: is the industry going to lean in?

[Paul] (07:59)

The leaning in that you talk about, I guess that's a sort of metaphorical way of talking about private companies that don't actually need to cooperate, or have no regulatory reason to cooperate, deciding to do so because actually that will make them better competitors and in general probably produce a better result for society. Would you agree with that?

[Madison] (08:23)

I wouldn't just say yes; I want to flip that a little bit. Again, I work within the critical infrastructure space. Yes. There is no way to say that anyone operating within the critical infrastructure space doesn't have skin in the game to participate. Good. I think what we have to flip is: what is their responsibility? You want to invest the time, but what are we going to get out of it? Let me create an example.

When CISA was first established, there was a lot of excitement around the potential of what CISA could become. So regardless of whether there has been a perception that perhaps CISA hasn't been the most effective governing body, let's lean in, and let's say why and how we would like to see CISA mature. And when I say lean in, I mean lean into the spaces that the industry hasn't been happy with. If we don't lean in, then we're going to continue getting the same thing.

[Paul] (09:28)

Joe, if I can just ask you at this point, this sounds very much like we really need to concentrate on getting the IT industry in general and embedded security in particular into what you might call the post-checkbox compliance era, where people are not simply doing the minimum that is required, they have to come at it from a different angle and want to comply because of the way they run their businesses.

[Joe] (09:58)

Yeah, and I think compliance is part of it, but I also think it's the alignment of the economic interest. Yes. With critical infrastructure in general, part of the goal is to ensure that the economy thrives and that we do have a well-functioning society, and water is delivered and energy is delivered and data centers operate. And with all of those things, there's an economic interest and a consequence of a cyber attack. And so if you look at it from the consequences perspective, and look at it from what we can all do to ensure that critical infrastructure continues to operate as expected, then you start to look at what the economic consequences are and what the economic interests are. I think there is an alignment of interests in the spirit that Madison's been describing.

Certainly coming back to where she grew up in Oklahoma, when things don't operate, but also just more generally with business depending on energy. And if you think about AI, and Madison brought up artificial intelligence, I like to say there's no AI dominance without critical infrastructure resilience. And that includes the energy infrastructure and the data center infrastructure that powers AI. So I come back to your question then, Paul, and say the alignment of economic interests means that good practices in your software development, good practices in the product development that you deliver, and good practices to share information with your ecosystem, with your customers, and yes, with the government, so everyone can benefit, end up being good business for everybody, and ensure that we do have reliable infrastructure in the end.

[Paul] (11:31)

And it’s not particularly difficult, is it? It doesn’t have to cost a lot to start doing this. And when you start doing it, you’ll probably find that a lot of things that used to be expensive for you, like producing patches every few months even for devices that are hard to patch, suddenly get mitigated and everybody benefits, including your own business. It’s almost like self-serving altruism, if you like.

Madison, maybe you could give us some examples of what you might consider best-in-class collaboration between government and the private sector from the past. Or examples where that sort of partnership has not worked out well and what the critical difference is between them.

[Madison] (12:17)

So I'm going to lean into the financial sector. Maturity, or investment, chases money. And this sector is by far, I would say, the most resilient. They are not necessarily quick to adopt new technology, but they're not scared of it. And so they lean in, in a really interesting way, with a level of sophistication. But I think that is because, historically, it was the first, air quotes, critical infrastructure sector that demanded interaction between the private and public sector. I would say an example that doesn't work, or didn't work traditionally, is the one that we've already talked about a little bit. And I don't mean to beat a dead horse, and I don't want to call anyone out; doing something for the very first time is hard.

And the example that I'm talking about is the relationship between CISA and, I would say, the energy sector. The energy sector has so many problems with the legacy equipment that we already understand, but no one wants to say, hey, Congress, we need $1 billion. And I'm making this number up. No one wants to go to Congress and say, we need $1 billion to actually protect just the utility grid.

Who is going to go do that?

[Paul] (13:44)

That number sounds quite cheap to me. Totally! I bet you it would be a lot more if you just decided, let's throw money at it, instead of bringing about a long-lasting change in the way the sector conducts itself, if you like.

[Madison] (13:58)

I don't want to blame it on the way that they've conducted themselves, because there are incredible people who work within the critical infrastructure sector, again, within the energy sector specifically. It is that they're operating with a very, very, very thin budget that is based on rates that are capped. And so how can you continuously increase a budget when you can't have rate increases? Your hands are tied. So sure, there are constraints in the way that we operate, in allocating funds. But I think we're just getting to a point, we being the collective public sector, of: okay, now we understand the problem. It's all this outdated legacy equipment that was never intended to be on the internet. And holy crap, how do we rebuild the entire energy grid around the United States and ensure that China can't be sitting in it on a day-to-day basis? I mean, that's what we would have to do. Yes.

[Joe] (14:54)

Part of the issue is that the legacy code is a massive problem, as you describe. The practices that went into the development of the legacy code, and then the constrained resources to defend that legacy code, make for a very, very difficult problem to solve, particularly in interoperability software to connect grids to other sources. Those happen to be areas I think that could get attacked. Nonetheless, I do think your point is valid that these systems were not built to be defended. They were built to ensure energy got from one point to the other, and they do exactly that.

[Paul] (15:28)

From the energy sector point of view, I feel sorry for the experts there. If you drive past, say, an electrical substation where high-tension lines come in from power stations and then get redistributed, sometimes it looks like a terrifying dystopian industrial landscape. But if you look at the shapes of things like conductors and insulators and how that power is managed, that is art, science, and engineering at an extreme level.

I guess if you’re working in the energy sector you’re going, we are handling 1 million volt DC power lines. That’s what we’re good at. They don’t see cybersecurity as something that ever used to be important to them because they’re already in this fascinatingly complex industry anyway, just like rocket scientists. So I guess that’s a barrier that we have to break down, isn’t it? Suddenly, because these things, as you say Madison, are on the internet, they require a whole lot of extra art and science and engineering that was never traditionally part of that discipline.

[Joe] (16:32)

And especially with constrained economic resources.

[Paul] (16:35)

Absolutely.

[Madison] (16:36)

Our security teams have always been saying, hey, a major breach is possible, but it would be an act of war. Let's be very, very clear. That's where we are. And that's why the energy sector hasn't seen a major outage, in my opinion. It's a little bit of this argument of likelihood, and hoping that it doesn't happen and that we don't get to that point. We've known it's important. How do we do it in a manner based on priority, and with the budget we have right now, and evangelize what cybersecurity is, ensuring that they understand it's not an insurance policy?

[Paul] (17:18)

Well said. So where does AI, the elephant in the room if you like, fit into all of that? These are huge changes and they’re being adopted by almost everybody, sometimes apparently without much thought at all. It’s just, hey, this is new, we should try it.

[Madison] (17:37)

Part of my media training: we never get into an argument. Again, keeping it to the world of critical infrastructure. And this is where we have to make the delineation between IT and OT. Part of your statement was that there's this adoption of AI and it's just the wild, wild west. That is just absolutely not the case in the world of critical infrastructure. Because if you did that, then, I mean, the risk is a widespread blackout in DC. They have to be methodical. Adoption is naturally going to happen quicker in the IT space: your traditional enterprise, HR processes, email systems, et cetera, et cetera. But in the OT world, when we're talking about operational technology, AI is still very much in, I would say, that R&D phase, that use case and development phase. It's a lot around monitoring, understanding normal in the environment, helping with maintenance windows, and doing some predictions on that front. In the OT space, you can't deploy something and get it wrong. You just can't. If Instagram goes down for 30 minutes, I mean, there will be people who panic. It's a whole different type of panic if the power goes down in DC for 30 minutes at 1 a.m.

[Paul] (19:06)

Yes, there’s a very big difference between a Windows update taking a little bit longer than you expected and making you five minutes late for a Zoom call compared to the flaps deploying on your plane as you’re landing coming five seconds too late. It is a very different world, isn’t it?

[Madison] (19:24)

I don’t mean to go immediately to doom and gloom, but I create these extreme scenarios so that we can understand the potential.

[Paul] (19:34)

Madison and Joe, if you had to challenge one old assumption about cyberspace and collaboration, where would you start? What would the first change you wanted to make be?

[Madison] (19:48)

I don't know if it would be a change so much as continuing the path.

[Paul] (19:54)

That’s a better situation to be in, isn’t it, than having to stop people doing one thing and start doing something different.

[Madison] (20:00)

I am a glass-half-full type of gal, right? Good. Great. I don't care if my house is on fire, I am gonna see something good about it.

[Paul] (20:09)

No more cockroaches.

[Madison] (20:11)

Sure.

So yeah, that's hysterical; that actually threw me off for a second. The World Economic Forum comes out with a survey every single year of what CISOs, what executives, what industry leaders believe are the major problems. One of the major problems still in our space is that we cannot articulate the risk times the likelihood and the impact, which is the investment discussion. That's still one of our major problems, but society is starting to understand cyber. It means that we're making progress.

[Joe] (20:46)

From my perspective, I think communication between operators of infrastructure and suppliers of technology is a strong opportunity for improvement, just as Madison talked about the risks and the consequences and the economic costs of cyberattacks. One of the big challenges we have in critical infrastructure, I think, is that the organization that bears the risk ends up being on the operator side, whereas the technology and the cyber defense could often come from the supplier side. And so there's a disconnect on an economic agency level. There's a couple of factors at play. Part of it is that product manufacturers try to upsell organizations in order to derive benefit from cyber enhancements in their products. And that's not necessarily in the interest of the operator. And the organizations that purchase the technology are usually making long-term capital investments, and they want that technology to last 10, 15, 20, 30 years.

So that's one big area. I think another big area is that there's been a lot of talk about memory safety and memory-safe languages, and the issue there, of course, is in infrastructure. Whereas web-enabled applications and the software infrastructure in enterprise IT can be swapped out more easily, rewriting software and deploying that across critical infrastructure is a non-starter. And so for me, ultimately there is a national security aspect, there's an economic question here, and then there's a technological question here. And in my view, that collaboration between the operator and the supplier is a key one to help really understand what the shared security concerns could be, and whether there are more economical ways to produce an outcome that prevents exploitation or some really grave consequence.

[Paul] (22:38)

So this is a sort of Secure by Demand, Secure by Design cooperation or combination, isn't it? And that's not Secure by Demand where you bash your hand on the table. It's basically saying that, as the consumer of a product, you will try and draw the market in a way where it takes you where you want to go. Because, Joe, you did a survey recently about automotive consumers, didn't you? Where a very large percentage of people said they expect cybersecurity to improve, but a very small percentage said, and we're prepared to pay extra for it. It was quite clear that they said, it's your job to build it in the first place, which is quite a warming thing to hear, isn't it? It sends quite a clear message to the industry itself.

[Joe] (23:23)

It does. And then I'd point out that in the healthcare ecosystem, health providers are starting to really discriminate when they buy new medical devices. Yes. Between those that have security protections built in and those that do not.

[Madison] (23:37)

I love that you brought up security by design. I'm obsessed. I think it is, again, a huge, huge sign of maturity within not just the product space, but the cybersecurity field in general, really ensuring that we understand as an industry that cyber resilience, dare I say the term, is a shared responsibility. It is not just the customer's responsibility; it is also the vendors', the individuals who implement the technology, and also our end users'.

[Paul] (24:12)

I agree with that very strongly. It really is something for all of us, isn’t it? It’s not just, well, some industry expert must do it or some government body should mandate it. We all have to care and we all have to make it happen.

[Madison] (24:27)

The problem and the threat are too large for us to go it alone.

[Paul] (24:30)

Absolutely.

[Joe] (24:31)

With China having already pre-positioned, I'll call them cyber bombs, in US critical infrastructure, there is a risk here from a national security perspective also to add to the equation. There is a government interest. I do see critical infrastructure as an extension of national security, and fulfilling that mission to ensure we maintain a well-functioning society and that critical infrastructure operates is an extra dimension that has to be factored in. There have to be ways that the government can enable, or help industry enable themselves, to adopt Secure by Demand and Secure by Design principles.

[Paul] (25:10)

So, Madison and Joe, to finish up, because I'm conscious of time: what bold steps, if that's not too bold a term in its own right, would you like to see government and industry leaders agreeing on to improve safety and security in critical infrastructure over the next 10 years or more?

[Madison] (25:28)

I do think that, within our space, ego and ownership and the bureaucracies get in the way. And so, going back to the beginning of this conversation, when you asked me what I think we could do more of in the public-private partnership: continue to lean in, and do it from a place that is based on that initial mission and why you're dedicated to the cyber landscape. We all get so exhausted from the conference calls, from feeling like we're constantly saying the same things over and over again. But I think what we need to continue to do is own our space, advocate for our space, and always have a perspective that is void of ego. And I think that we can achieve whatever we damn well please.

[Joe] (26:22)

In my view, and I've talked about this in the context of Taiwan in the past, and I think it's true in the US today, there is a form of deterrence that takes the shape of a couple of different layers. One is to punch back. If there's information sharing that critical infrastructure can use, and the US government can punch back, that's a form of deterrence. I also think that there may be some form of incident response that can be enabled across the US to help.

Some form of cyber force in the US, to train people and get people up to speed further. And then a third one I have, which shouldn't surprise anybody: I do think there's a form of resilience in software, a level that needs to be adopted. And when we talk about legacy code, I envision a way to prevent exploitation of legacy code in general. In my mind, then, if you think about those three areas, punching back, having a form of incident response, and then having software that protects software that's already deployed so you don't have to rewrite it: those are three forms of resilience that I think matter and make a difference, and ultimately will maintain a well-functioning society.

[Madison] (27:28)

I feel like, as you're talking, you just created three other podcasts.

[Joe] (27:33)

HAHAHAHA

[Paul] (27:34)

You’re not wrong.

[Madison] (27:36)

One, a conversation around deterrence, and obviously the fact that it is now being discussed as a new tool within the diplomatic toolkit, which I have some questions around; we could talk about the US government's role in deterrence and what that looks like with our energy sector. And then cyber force, obviously with the launch of the new commission of cyber force.

That could be a whole other conversation. I think we should spend the first five minutes on the hope that the uniforms are not just black hoodies. Those would be awesome additions, and I look forward to conversations ahead as we see both of those areas, I would say over the next two to three years, really teased out, and real action put behind both those areas, Joe.

[Paul] (28:23)

Madison, I think that’s a really upbeat and positive way on which to finish. So I’d like to say at this point that that is a wrap for this episode of Exploited: The Cyber Truth. I think you have shown, if I may quote a truism, that things work better together when we work better together, and very definitely that is the case in cyber security. So thanks to everybody who tuned in and listened.

If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media as well. Don’t forget to recommend us to all of your team so they too can benefit from Madison and Joe’s insights, wisdom and passion for cybersecurity. And remember everybody, stay ahead of the threat. See you next time.

 

The post Collaboration in Cyberspace with Madison Horn appeared first on RunSafe Security.

Risk Reduction at the Core: Securing the Firmware Supply Chain https://runsafesecurity.com/podcast/firmware-supply-chain/ Thu, 09 Oct 2025 14:37:04 +0000 https://runsafesecurity.com/?post_type=podcast&p=255069 The post Risk Reduction at the Core: Securing the Firmware Supply Chain appeared first on RunSafe Security.


Firmware forms the foundation of all embedded and connected devices—but it’s often overlooked in cybersecurity discussions. In this episode of Exploited: The Cyber Truth, Joseph M. Saunders, Founder and CEO of RunSafe Security, explains why attackers are increasingly targeting firmware to gain persistence and control across critical sectors like healthcare, automotive, energy, and defense.

Joe details how firmware determinism, third-party dependencies, and complex supply chains create high-stakes vulnerabilities. He also shares practical strategies for breaking determinism to thwart attackers, understanding firmware Software Bills of Materials (SBOMs), and implementing protections at build time to reduce risk.

Whether you’re a CISO, security leader, or device manufacturer, this episode provides actionable insights to secure the foundation of your systems and strengthen resilience across your enterprise or operational environment.

Key topics include:

  • Real-world ways adversaries exploit firmware vulnerabilities
  • Risks inherited from third-party firmware and complex supply chains
  • How “shifting security down the stack” enhances trust for all systems above it
  • Practical steps CISOs, security leaders, and device manufacturers can take to harden firmware

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:01)

Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:19)

Greetings, Paul. Look forward to the discussion.

[Paul] (00:22)

Very intriguing title this week, “Risk Reduction at the Core: Securing the Firmware Supply Chain.” Historically, we sort of divided computing devices into hardware and software, didn’t we? Software you could load off a disk, and the hardware was the actual chips and the resistors and the wires and the capacitors and maybe the keyboard. But these days when we talk about firmware, it is part of the system and it typically includes the operating system and the applications that an embedded device will run. And it can’t just be fixed by loading a different one from disk next time you boot up, can it?

[Joe] (01:04)

Yeah, it presents all sorts of issues, security issues different from general IT infrastructure, in part because this firmware, these embedded devices, are deployed across critical infrastructure, and they may be in parts that are hard to reach. So not only is the technology a little different, but where it gets deployed and how it's used is different from a user in an enterprise. So these IoT devices are special, and also very vital for how our economy works and our infrastructure works.

[Paul] (01:37)

In the old days, firmware was burned into a ROM, read-only memory, in a way that it could not be updated. You literally had to desolder or pull out the chip and put in a new one to get an upgrade. These days with flash memory, you can update firmware automatically without changing the chips, which is very convenient for the good guys, but it also introduces a massive risk because what we can update easily when we really need to, the bad guys may be able to update without authorization when we don’t want them to.

[Joe] (02:13)

Yeah, certainly risk for the good guys, opportunity for the bad guys. Yeah. For exactly the reasons you say.

[Paul] (02:22)

It sounds a lot worse when you put it like that. Our risk, their benefit. And we have to try and turn that on its head, don’t we?

[Joe] (02:29)

Yes, and so there's always that trade-off with progress: the ability to offer updates and reach these connected devices, and the ability to actually do updates on-device, if you will. That means there may be enhancements, there may be additional security benefits offered down the road, or other capabilities that are introduced. And for that benefit, it does mean that there is exposure.

[Paul] (02:55)

So this really is a critical attack vector, isn't it? Because it's not just as though, well, maybe some of us might have booby-trapped online meeting applications on our phones; that would be unfortunate for us and the people we have meetings with. This could be malware, malicious software, that gets embedded in devices that are quite literally all over the country, and that may not easily be able to be updated again for days or weeks or months, because of the fact that they are in far-flung places and they can't just be updated as easily as a mobile phone or a web app.

[Joe] (03:38)

You think about embedded devices in operational technology networks, embedded in maybe the water supply system or the energy grid, or even in a manufacturing plant. Part of the concern is that, with the proliferation of these devices, and as these devices have gotten smarter and connected and involved in these massive portions of critical infrastructure, there is a real consequence: disrupting critical infrastructure, disrupting the operations, disrupting the energy grid itself. We don't want to let that happen.

And so with that, there’s always this issue of commercial organizations shipping products into infrastructure. That infrastructure may also be managed by another commercial entity. And so there’s a whole relationship in thinking about how to secure not only access to the infrastructure, then the devices themselves. And the example I like to always use to kind of illustrate why this attack vector is so significant is that of cooling systems inside data centers. And you might think, you know, that’s a good supporting capability that keeps data centers at the right temperature. And with the proliferation of large language models, we actually have an increase in energy consumption. And in order for these systems to keep operating, there may be controllers and sensors and other things around the data centers taking temperature or doing other things. And if those things get compromised and the cooling system fails, then the very large language models inside that data center won’t process. The idea is that in this kind of infrastructure, disruption is a consequence that could cause economic loss or in the case of the water supply, even worse. So it’s very important to then think about how do you secure these devices because there’s always the potential that these types of devices can be compromised and attacked.

[Paul] (05:39)

Yes, and if you think about water supply, there's the other side of that too: the ability to drain water away successfully and control what happens to the waste. All of these things are typically controlled these days by hundreds of thousands, possibly millions, of individual components like valves, each of which is manipulated by some kind of embedded device with its own firmware. So a bug in one of those, if it's used in thousands of pump rooms or wastewater processing plants around the country, could actually cause a disruption to all of them, even if that wasn't what the attackers originally intended. They might build a tool to attack one city, for example, and suddenly realise, hey, this works in all 50 states and the District of Columbia.

[Joe] (06:33)

You’re exactly right. The proliferation of these devices across geography means that there is then suddenly an attack vector that is beyond geographic bounds. And so whereas these individual points or plants or portions of infrastructure may not have been as connected in the past, now the common attack surface is the very software that’s common across all the systems and all the geographies. And that common software means if there’s a vulnerability in one, there’s a vulnerability in all of them. And if an attacker has figured out how to compromise that device, they can replicate that. I always like to say that one of the great promises in software is the ability to produce millions and millions of copies of the same code. And when you put that on a piece of hardware, all those components will operate the exact same way. With all the same inputs, you’ll get the same outputs. It’s deterministic.

[Paul] (07:29)

It’s not like an old analogue system like the centrifugal advance on the points in the car. It mostly worked the same, but it did depend on temperature and metallic wear and all sorts of external factors. With a digital system, in theory, for the same input you should get the same output every time. And that’s great for testing, but it’s also a terrible risk if something goes wrong to one of them. It goes wrong theoretically to all of them possibly at the very same time.

 

[Joe] (08:01)

Yeah, so that determinism also then works in the favor of the attacker. Yeah. If they compromise one, if they reverse-engineer one that they bought, say, off eBay, or someone had an extra one and they got their hands on it.

[Paul] (08:15)

Before people think,  well, this kind of stuff doesn’t come up on eBay, bear in mind that almost all of the famous ATM hacks that have been demonstrated over the years have been done by people like the late great Barnaby Jack buying used ATMs off eBay, sometimes for just hundreds of dollars a time. Go think.

[Joe] (08:39)

Exactly right. Sometimes there are legitimate reasons to buy that hardware, because you might be developing something new and you want to test things. And so there is a market that's legitimate for those things, but that very market then also creates the opportunity for someone to do perhaps something nefarious with it. I guess that determinism ultimately works in the favor of the attacker as well. If there's a vulnerability in one, there's a vulnerability in many, as we just said. You know, that's one of the tricks. It's very hard to update some of these devices, for different reasons, and I'm sure we'll get into that.

[Paul] (09:14)

Maybe we could get into that right now, Joe, and you could just say something about: how do you build a system that is essentially deterministic, yet not identical, in a way that thwarts the attacker?

[Joe] (09:28)

The way I like to describe it is: how do you make those devices remain functionally identical, but in fact be logically unique, so that you break that determinism for the attacker? The way we do it at RunSafe, of course, is we insert security protections. When you build that software and produce that binary to put on those devices, we add in security as it's getting manufactured. Then, when that software loads on the device out in the field, and that software loads into memory, we randomize where those functions go. We relocate, uniquely every time, where those functions go in memory when that software loads on that device. And what that means is you are breaking that determinism.

If there's a vulnerability in function ABC that always loads at memory location ABC, well, on your device, Paul, it may load at memory location XYZ; in mine, it might be at memory location QRS. And think about the benefit of that compared with other ways that these devices could be protected. One of the big constraints in infrastructure, of course, is the power and the compute capacity, if you will. If you can't increase those things easily, and you can't add new software onto a device because it has limited power and limited compute resources, then you can't put agents on the software that might monitor things. There are some limitations in infrastructure: agents could change behavior, could slow things down, could change the support process, and introduce risk in and of themselves as well.
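
To picture the load-time randomization Joe describes, here is a minimal Python sketch, purely illustrative and not RunSafe's actual mechanism; the function names, seeds, and address ranges are all invented for illustration:

import random

# Illustrative only: simulate two devices loading the same firmware image,
# with each load placing the firmware's functions at randomized addresses.
FUNCTIONS = ["parse_packet", "update_valve", "log_event"]  # hypothetical names

def load_firmware(boot_seed):
    """Assign each function a randomized load address for this boot."""
    rng = random.Random(boot_seed)
    addresses = rng.sample(range(0x10000, 0x80000, 0x100), k=len(FUNCTIONS))
    return dict(zip(FUNCTIONS, addresses))

device_a = load_firmware(boot_seed=101)  # functionally identical firmware...
device_b = load_firmware(boot_seed=202)  # ...but a logically unique layout

# An exploit hard-coded against device A's layout misses on device B.
target = device_a["parse_packet"]
print("exploit still lands on device B:", device_b["parse_packet"] == target)

The point of the sketch: with identical code but unique memory layouts, the attacker's one-exploit-fits-all economics break down.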

[Paul] (11:10)

You may not even have kilobytes to spare, let alone megabytes or gigabytes. So you can’t just keep adding new stuff to it, can you? You have to deliver exactly the same machine instructions, but shuffle the deck so when you deal it you get the same game, but not by adding lots of extra blank cards that make it behave significantly differently.

[Joe] (11:32)

Right, because as you increase the resources involved on these devices, the cost goes up. Also, if you consider, say, satellites or flight software or drones, airplanes, even military weapons, one of the key considerations, of course, is: can you deliver software on time? But also size, weight, and power.

You're also increasing the cost. You want to avoid increasing size, weight, and power in a lot of these systems. If you're introducing other hardware to monitor these devices in transportation or autonomous systems or space systems, or even in aviation in general, or weapons programs, it comes at a cost. There's a downside to that. It's just as important as not overloading that embedded device inside the data center or in the energy grid.

It’s also true in these transportation and space applications where you can’t afford new hardware in the first place.

[Paul] (12:38)

To put it in perspective for those of our listeners who might have worked on things like web apps, you don’t even have to wait ‘til tomorrow. You can just do the update before the next person visits the website and you go, whew, well, we dodged that problem. There’s none of that in any of the embedded space really, is there? Not only is it impractical to do it because of the complexities of updating, there are just regulatory and operational reasons why it cannot be done.

[Joe] (13:06)

Yeah, there's definitely regulatory reasons. And I would just say, part of the ecosystem to distribute this kind of technology is that someone producing a component or firmware on a device may be delivering that, in part, to an OEM that's producing a broader system overall. And so that creates a layer of distribution. That OEM may have a distributor who's then reaching out and putting these systems or these devices into the infrastructure itself.

That could be yet another layer. Then you start to think about: how do I push updates through two or three or four layers, and who supports that? And what's the cost of that, and what's the timing, and how often can that even happen? You can't do it continuously, for sure. There's even a cost to doing it. So it's only happening periodically. And I think in a lot of these systems, one of the challenges is that even though you can provide some of these updates, a lot of the updates take a long time to even reach the final end product.

You can imagine if a technician has to go around and touch all these systems to update them, because there might be security reasons or others. The complexity of the problem, given the number of suppliers, is high, and you certainly have to understand what software, what operating system, what components, what libraries are on my devices, and how at risk my devices are in the first place. And so it poses a pretty significant security question for the supply chain itself: how do you get reliable information? How do you assess the risk of the software that's on those devices? And what do you do about it? How do you manage your supply chain? Then think about the number of developers that touch these kinds of devices, if you think of those thousand suppliers: they are using open source software.

They themselves are using third parties for some of the development. And then they themselves are developing some of it in-house. I think the best approach is if the manufacturers have a good set of standards on what has to be done before those devices reach the manufacturing plant itself. 

In my view, what needs to happen, given the distributed complex software supply chain implied across these thousands of vendors, is a way to understand what’s contained within the Software Bill of Materials on these devices I’m deploying in my infrastructure and what are the vulnerabilities associated with those and what, if anything, has been done on those devices to prevent exploitation, to protect those, to monitor those, so that we don’t find ourselves in one of these situations.

[Paul] (15:41)

Joe, when you say Software Bill of Materials, or SBOM for short, that’s BOM not BOMB, that’s not just the recipe for how you might make the software if you wanted to start from scratch. Because that just tells you what you thought you wanted to put in it. It doesn’t necessarily say what actually went into it when you built it. And it doesn’t vouch for the fact that nobody had fiddled with some of those ingredients before you baked the cake.

How do you make sure that the person who’s one upstream from you is delivering you what you thought so you can deliver the right thing to the guy who’s one downstream from you?

[Joe] (16:22)

Yeah, I think generating a Software Bill of Materials as close as possible to the time you produce the software binary that's getting deployed on that device is the best approach. And it's for the reasons you're starting to describe, which is: if you do it after the fact, that's like determining what measurements went into the baked cake that you just produced. If you do it before you bake the cake, well, what about that little chef's magic touch?

And in this case, the chef is actually, I guess, the build system, and maybe even the compiler that converts the source code into the software binary in the first place. What actually gets produced when you deliver that binary isn't as straightforward as what the original developer planned to go into it. Ingredients get added at the final moments.
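
To make the "measure the ingredients as you bake" idea concrete, here is a minimal Python sketch of a build step that writes out a CycloneDX-style SBOM; CycloneDX is one widely used SBOM format, but the component names, versions, and file name here are assumptions for illustration, and a real build system would record what the linker actually consumed:

import json
from datetime import datetime, timezone

# Hypothetical build step: record each component as the build links it,
# rather than guessing the contents from the finished binary afterwards.
components = [
    {"type": "library", "name": "openssl", "version": "3.0.13"},  # assumed
    {"type": "library", "name": "zlib", "version": "1.3.1"},      # assumed
]

sbom = {
    "bomFormat": "CycloneDX",  # one widely used SBOM standard
    "specVersion": "1.5",
    "version": 1,
    "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
    "components": components,
}

with open("firmware.cdx.json", "w") as f:  # hypothetical output file name
    json.dump(sbom, f, indent=2)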

[Paul] (17:16)

Or they get substituted with things that somebody thought, what’s the difference between sodium bicarbonate and sodium carbonate? Surely it won’t make a difference. So how do you equalise all that in something as complicated as the firmware supply chain for embedded and critical devices?

[Joe] (17:34)

You do need to produce those Software Bills of Materials as that software is getting compiled. Otherwise you may end up with margarine in your cake and not butter. And you know, that would be catastrophic.

[Paul] (17:47)

Yes. So Joe, there are tools out there that claim they can essentially taste the cake after it's been baked, and you've said in previous podcasts that they can actually do a very good job if you've got nothing else. But by "very good job", they might get 80% of the ingredients correct. And not knowing 20% of what went into your cake... 20% sounds like an awful gap.

[Joe] (18:13)

Well, it comes at a cost. It comes with chasing false positives and false negatives. Yes. And the reason for that is that what the binary-based SBOM generation tools do is look for components and then make assumptions, usually based on heuristics: if I see the following types of libraries or components in this software, then I'm going to assume that you also have X, Y, and Z. We can't just rely on heuristics.

We will miss things, we’ll be chasing the wrong components. And when we identify components and associate vulnerabilities with them, it only compounds the problem when that component doesn’t actually exist in that binary. What I think is a great opportunity, and it’s where RunSafe fits, is to be able to apply security protections and almost be independent or agnostic to what those architectural implications are. We work across instruction sets, we work across operating systems.

We support many different kinds of build toolchains themselves. If you think about an organization that may produce a lot of different gadgets and have slightly different build tools, or may have different settings for the kinds of embedded Linux that go on these devices, it does matter. And you need to have a standard process to get the ingredients at build time, so you have an authoritative SBOM you can trust. What sounds like a relatively simple request, hey, what software is in my device, is actually the product of a very complex ecosystem, as we said. And what we really want is to have really good information about what's in the device, and to make that easy for developers to produce upstream, so downstream users know with confidence how to assess the risk of a device in the first place.
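
A deliberately naive Python sketch of the heuristic trap Joe is describing, with an invented byte signature; real binary scanners are more sophisticated, but the failure mode is the same in kind:

# Illustrative only: a byte-signature heuristic of the kind an
# after-the-fact binary scanner might rely on.
SIGNATURES = {
    b"zlib 1.2.11": ("zlib", "1.2.11"),  # invented signature
}

def guess_components(binary_image):
    """Report a component as 'present' whenever its signature bytes appear."""
    return [comp for sig, comp in SIGNATURES.items() if sig in binary_image]

# False positive: a binary that merely logs a message mentioning zlib
# matches, even though the library itself was never linked in.
image = b'\x7fELF...printf("compatible with zlib 1.2.11")...'
print(guess_components(image))  # [('zlib', '1.2.11')], the wrong conclusion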

[Paul] (19:59)

What are the regulators and perhaps just as importantly, what are customers and the industry itself thinking about how they might reduce these risks and improve their supply chain?

[Joe] (20:11)

I can actually respond on three different regulatory frameworks, if you will. Yes. In the medical device arena, at least in the U.S., there is a requirement to produce a Software Bill of Materials for medical devices. So that's one set of requirements. You have to produce it and know what's in there, and also then address the vulnerabilities. But the second one is relatively new in the United States, and it's coming due. You have to be ready by 2027, and you have to start to have the framework put together by March of 2026, and that's for the automotive industry. And part of the issue is, if you think about all these components, the U.S. wants to make sure that you're not incorporating components developed by an entity from China or Russia, because there is a supply chain risk. And so that's a new requirement in the automotive industry. And in fact, just to point out how complex this can get for a request like that: for automotive companies who want to sell their cars in the United States, you have to ensure that a component doesn't come from China or Russia, as I said. But you as the OEM, the product manufacturer, the brand we know and love, whether it's Ford or Honda or BMW, if you're going to sell your car in the United States, you have to go four layers deep into your supply chain to ensure that there isn't a component originating from an entity in one of those countries.

So that’s a pretty significant demonstration of how complex it is. The third one is in the EU, there’s the Cyber Resilience Act.

[Paul] (21:43)

I was hoping you had mentioned that.

[Joe] (21:45)

Of course, I save the best for last.  The way I think about it is we’ve talked about in the past, Paul, is that if you’re going to have to do this anyway, don’t cut corners. Invest in a software bill of materials that will somehow pay dividends in other ways. And what I mean by that is something that can help set you onto a path to boost resilience, to boost your security posture, maybe to improve things in your software supply chain.

[Paul] (21:56)

Absolutely.

[Joe] (22:13)

Practices in the first place.

[Paul] (22:15)

I guess what you're saying is: if you have the words "checkbox compliance" floating around anywhere in your company, boot those words out right away. Build your software, build your products, in a way that they naturally tend to comply. And if they don't, well, you can easily go back and fix it. We should do this because we want to, not merely because we need to.

[Joe] (22:39)

Yeah, I think it’s a good business practice to think about security as an element of quality, to think about software bugs as an element of quality, safe systems by definition are quality systems. And I think the key is building a really solid foundation and methodology and benchmark the process you go through to make sure that it’s repeatable and reproducible. If you could do all that and do it in a timely fashion, then you’re on your way to being a high quality software development organization. And let’s face it, software development is hard. Getting rid of all bugs is an impossible task, but at the same time, you do need to meet the standards and probably exceed the standards if you’re gonna be perceived as a provider with any semblance of quality in your product. If you think about organizations, they have their own internal governance. 

Yes, policies that product teams need to adhere to, in addition to industry standards. And so when you have a combination of policy, internal governance, and industry standards, those are good controls to put into your software development and really operationalize, so that you're producing in a consistent way. Let's not forget, though, that there is an adversary out there. The sort of hidden stakeholder is the adversary: if you don't produce something of decent quality, with some semblance of security, or a safe system, then attackers will find it, because you're exposed.

There are a lot of forces at play here to ensure that you develop high-quality software that's safe and secure. And ultimately, differentiation in general is a good reason to invest in security, and to invest in understanding what all those components are and being able to communicate transparently, with good customer service, to your customers. Then they know if they're exposed, they know they can communicate with you when new vulnerabilities come out, and they can get a straight answer. Why would you take an incomplete approach and just check the box? You might as well invest, because otherwise you're going to be chasing your tail down the road.
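
As one hedged example of operationalizing that combination of policy, governance, and standards, the sketch below imagines a CI gate that fails the build when the build-time SBOM or its vulnerability findings violate internal policy; the file names, fields, supplier list, and CVSS threshold are all made up for illustration:

import json
import sys

# Hypothetical CI policy gate; inputs and thresholds are assumptions.
MAX_CVSS = 7.0
BANNED_SUPPLIERS = {"UntrustedVendor"}  # placeholder policy, not a real list

def enforce_policy(sbom_path, findings_path):
    components = json.load(open(sbom_path)).get("components", [])
    findings = json.load(open(findings_path))  # e.g. [{"component": "zlib", "cvss": 9.8}]

    violations = [f"supplier policy: {c['name']}"
                  for c in components
                  if c.get("supplier", {}).get("name") in BANNED_SUPPLIERS]
    violations += [f"vulnerability policy: {v['component']} (CVSS {v['cvss']})"
                   for v in findings if v["cvss"] >= MAX_CVSS]

    for v in violations:
        print("POLICY VIOLATION:", v)
    return 1 if violations else 0  # nonzero exit code fails the pipeline

if __name__ == "__main__":
    sys.exit(enforce_policy("firmware.cdx.json", "findings.json"))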

[Paul] (24:50)

Yes, I was speaking at a conference in Berlin last week and Berlin Airport was famously affected by a supply chain attack basically. The company that runs their check-in services and their bag drop services was taken off the air and the airport was in chaos. The next time some airport goes out to buy check-in software, they might be asking more difficult questions than they did in the past. With that in mind, what are your top recommendations for CISOs or for software engineering teams who want to be able to produce software that passes what you might call the truth in engineering test? Where do you start if you haven’t done so already?

[Joe] (25:36)

We live in a digitally connected, interconnected world, and we all benefit from it. We all benefit from the convenience, the ability to scan things and have doors open and just walk right in, in a trusted environment. There are lots of good ways that we've applied technology, and connected technology, to improve the quality of life and the creature comforts we expect in today's world. And with that, it does mean that the CISO you mentioned does have a very difficult job, and it's challenging.

[Paul] (26:08)

It's not just what you buy, it's as much what you supply to the next person along.

[Joe] (26:14)

Right.

And so my top recommendations are to certainly ask for a Software Bill of Materials, and to use that in a way that helps you get vulnerability data and risk data about the devices, so that you can really prioritize which systems, which vendors, which devices you need to look at more carefully, and work with the suppliers. My second recommendation is: have a seat at the table. Ask questions and give feedback, and be constructive and not just demanding.
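
As a rough sketch of that first recommendation, the hypothetical snippet below walks a build-time SBOM and queries the public OSV.dev vulnerability API for each component; a production integration would likely pin each package's ecosystem or use purl identifiers rather than bare names, and would handle errors and rate limits:

import json
import urllib.request

# Hypothetical triage step: look each SBOM component up in the OSV.dev
# vulnerability database to decide which suppliers to engage first.
def osv_lookup(name, version):
    query = json.dumps({"package": {"name": name}, "version": version}).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

sbom = json.load(open("firmware.cdx.json"))  # assumed build-time SBOM
for comp in sbom.get("components", []):
    vuln_ids = osv_lookup(comp["name"], comp["version"])
    if vuln_ids:
        print(f"{comp['name']} {comp['version']}: {', '.join(vuln_ids)}")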

[Paul] (26:47)

Yes, nobody’s perfect, but if everyone gets a little bit better, that’s a lot more productive for the future than one or two companies becoming fantastic and everyone else lagging behind. Because there will be a little bit of everybody’s ingredients in most cakes.

[Joe] (27:03)

Yeah, for sure. And so: generate a Software Bill of Materials, use that information to help you prioritize what areas to look at, engage your suppliers, and give them the feedback that you're seeing based on the risk you've prioritized. And then even go a step further: are there ways you can ask for Secure by Design practices? CISOs are in a strong spot. The catchphrase, of course, is Secure by Demand, but the idea is to ask for security built in, for the benefit of your ecosystem, your network, your operations, and your environment.

[Paul] (27:35)

Yes, because neither secure by design nor secure by demand, opposite ends of the market if you like, neither of those can really work on its own.

[Joe] (27:46)

Yeah, it seems like we need secure by engagement, secure by collaboration. And that's really the point. It's two ends of the same problem, and you're both stakeholders. The operator is a stakeholder, the manufacturer is a stakeholder, and we need cooperation in between. We need best-of-breed Software Bills of Materials, and we need them done at build time, so everyone has a clear, visible view of the vulnerabilities and the risk at play.

[Paul] (27:51)

Absolutely. So if we can just finish up by putting the shoe on the other foot, if you like, and imagining the CISO at the supplier who is providing software and firmware to third parties down the line. What can they do in terms of things like vulnerability disclosure? How do you deal with vulnerability disclosures in a way that is not doing your company down, but at the same time isn't inadvertently misleading people or making things sound less severe than they really are? How do you tell, if you like, truth with honour?

[Joe] (28:46)

Well, it's a really good question. And I think in the old days, we might sort of avoid telling people about vulnerabilities until we had a fix or a patch. Yes. And then we'd race it out there and say, you've got to do it immediately. And that just creates tension in communications. Hurry up, implement this. Well, how long have you known about it? I've known about it for three weeks and you're just telling me now.

[Paul] (29:07)

Suddenly it's your fault. What do you mean you haven't patched in half an hour? Well, why didn't you fix it four months ago?

[Joe] (29:10)

Right.

Exactly. So that's not really a healthy relationship. Now, if you build security into your products and you have protections that prevent exploitation even when a patch is not available, guess what that means? That means you can disclose immediately upon finding it. If you already have protection in place, if you build in security, then you can make your communications more transparent, more functional, less dysfunctional, and gain confidence by disclosing. It's like a low-key flex, as we say these days. Bad news: there's a vulnerability in your system. But we've already got you covered. You've demonstrated that your software development process, your security investment, your support process, and your communication with your customers are all in alignment. You're not having to face that trade-off of should I disclose or not; you're disclosing and inspiring confidence in your customer.

[Paul] (30:14)

This is something that affects all of us who are involved in the software industry in any way. We can't just wait for somebody else to do it all for us. We all have to work together. And it's very much as the Air Force might say, per aspera ad astra. Through hard work, you can reach the stars. So thanks to everybody who tuned in and listened. That is a wrap for this episode of Exploited: The Cyber Truth.

Thank you so much to Joe Saunders once again for his thoughtful and insightful deliberations. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like us and share us on social media as well. Please also don’t forget to share us with everyone in your team so they can hear Joe’s words of wisdom. Remember everybody, stay ahead of the threat. See you next time.

The post Risk Reduction at the Core: Securing the Firmware Supply Chain appeared first on RunSafe Security.

When Machines Get Hacked: A Manufacturer’s Guide to Embedded Threats https://runsafesecurity.com/podcast/manufacturing-embedded-threats/ Thu, 02 Oct 2025 14:42:48 +0000 https://runsafesecurity.com/?post_type=podcast&p=255028 The post When Machines Get Hacked: A Manufacturer’s Guide to Embedded Threats appeared first on RunSafe Security.

 

Cyber adversaries are exploiting the weakest points in manufacturing—embedded devices, PLCs, and legacy systems that keep industries running. In this episode, RunSafe Security Founder and CEO Joseph M. Saunders joins host Paul Ducklin to reveal how attackers infiltrate operational technology, gain access to system calls, and even turn software supply chain components into weapons.

Drawing on lessons from recent attacks and U.S. government red team exercises, Joe explains why memory safety matters, how Secure by Design practices reduce risk, and why runtime protections can neutralize exploits before they succeed. With the rise of AI and increasingly connected systems, the conversation underscores why manufacturers can no longer afford to treat cybersecurity as an afterthought.

Key topics include:

  • How adversaries infiltrate embedded and industrial devices
  • The role of nation-state motivations, economic espionage, and insider threats
  • Why memory-unsafe languages remain a root cause of critical vulnerabilities
  • How Secure by Design practices and runtime protections can harden devices without disrupting operations
  • What manufacturers must watch as AI-driven attack paths begin to emerge

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:06)

Welcome back to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:19)

Greetings, Paul. Great to be back.

[Paul] (00:22)

Today’s topic is very simple to say but difficult to deal with and that is “When Machines Get Hacked.” And our subtitle is “A Manufacturer’s Guide to Embedded Threats.” To put it simply Joe, we’re going to be focusing today on what you might call the lower levels of the Purdue model, the bits that affect the parts of the system that aren’t traditionally associated with IT, and are correspondingly difficult to look after because they could be anywhere, like buried inside a lathe or a pump house or a ship. What motivates today’s attackers to go after embedded systems?

[Joe] (01:07)

The answer to that question is the same thing that motivates attackers in other systems. But the OT networks, the operational technology that helps critical infrastructure operate, whether it's water systems or the energy grid or other areas, maybe carries a special motivation on top of ransom and financial gain, or just ideology, trying to do something to a nation state to disrupt its operations.

And I do think it carries an extra level of attention, because if you are a nation state targeting another country's infrastructure, there's probably a motivation that at some point in the future, if there's some kind of kinetic warfare or some kind of geopolitical tension, you can detonate, let's call it a cyber bomb out of convenience, some kind of cyber exploit, at a time of your choosing, and then you can really disrupt how citizens perceive their own government. And I think it's as much about the strategic statecraft as it is about the financial motivations, or maybe the ideology in general, that you would otherwise traditionally have.

[Paul] (02:21)

I guess you have the problem that even if an attacker turns out not to have precise control over every valve in a system, if they're able to mess with one or two, that's rather unsettling for anybody who's worried about an attack that might unfold. And even if there isn't what you might call kinetic warfare going on at the time, it's still pretty unnerving, isn't it, to think that, whether they're nation-state attackers or, let's call them, old-school cybercriminals, you've got these bad actors wandering around in your water system, in your wastewater pumping system, in your port, in your manufacturing plants. That's not particularly cheery news to get, is it?

[Joe] (03:07)

Our society relies on a well-functioning set of utilities and public goods like water or energy or other things. So it is unsettling to know that some kind of cyber attacker, some kind of bad actor, could manipulate those systems, could taint those systems, could disrupt those systems. And of course, if you took that to its furthest degree, what would really be unsettling is a mass unintended migration of people, because there’s no potable water, there’s no energy, and there are no working systems. I don’t anticipate that day anytime soon, but I do think people are pre-positioning, and when I say people, I mean nation state actors are pre-positioning, as we’ve seen in the US in critical infrastructure. And so they have the means to provide some level of disruption at this point.

[Paul] (03:57)

And worse than that, even if they’re not able to get a sufficient level of control to interfere actively with the systems, or even if they don’t intend to, just by snooping around, they get an awful lot of information, don’t they? Not only about how society and its systems function, but also you could say up to and including intellectual property about the technology that makes those systems work.

So it’s the worst of both worlds, isn’t it?

[Joe] (04:29)

It is the worst of both worlds. And China, for example, has its own massive domestic economic footprint. And so if China can steal intellectual property, use it for its own domestic needs, and export that newfound technology for its own gain and influence, economic influence, in other countries, then you can see how simply the theft of intellectual property is a form of economic espionage and sabotage.

I say that just to paint the picture of how the theft of intellectual property has propagated into more of a technology offensive to influence other countries. As China has caught up technologically, they have certainly invested more and more over the past 10 to 15 years in R&D. But let’s face it, if they can get an advantage over a US company by exploiting or leveraging US technology and then repurposing it for their own gain with their own companies, it’s every bit a part of China’s strategy.

[Paul] (05:30)

Yes, I guess I underspoke when I mentioned the theft of intellectual property, because whether it’s state-sponsored actors or cybercriminals out to make millions of dollars and leech it from our economy, personally identifiable information has huge value and we’ve seen terrible blunders lately, haven’t we? A recent example, not in embedded systems, being Allianz Life in the United States, who had to admit that the majority of their 1.4 million customers had their data stolen. You can’t really mitigate that once it’s out of the door.

[Joe] (06:06)

You can’t mitigate it. And imagine artificial intelligence and good, solid surveillance technology and analytical methods combining. Let’s pick social media for a second, with Instagram and TikTok and all the information that you can gather about subscribers. That kind of information can be exploited, but I’ll take it a step further, Paul: the OPM hack in the United States in 2014, or 2015 I should say.

[Paul] (06:38)

Yes, huge amounts of data about everybody, or at least most government employees, right down to the kind of information you’d need for identity theft.

[Joe] (06:47)

Exactly. It leads to identity theft, but it happens to be all the personnel that have security clearance in the United States. And guess what’s in that information? All that personally identifiable information contains birth dates and identifiers of children of security clearance holders. And so as those kids, let’s say they were born in 2010, they’re now 15 years old today. Their identifiers are out there. Maybe if they were born in 2006, they’d be 18, 19 years old.

You can combine their social media behavior with connection to that security clearance data pretty easily using analytic methods. And so I think there is a long-term warfare around identity theft and cyber attack and generally the theft of intellectual property that all amounts to a form of economic warfare to undermine U.S. corporations and undermine U.S. systems. If I can anticipate what people’s needs are, know what systems they’re using, know what their consumer trends are, and know which water systems affect certain groups, then I can execute a long-term strategy to disrupt an otherwise well-functioning society and start to influence how people perceive their own government.

[Paul] (07:59)

So Joe, we’re supposed to be focusing on embedded systems, but the whole picture matters, because after all, if you’ve got somebody’s personally identifiable information, and you can maybe figure out their password, or you can masquerade as them to trick some IT person with social engineering skills to get access into a network, that can further your ability to dig deeper and deeper and go down right to the lowest level.

Can you share a recent example of an attack on embedded systems or what you might call industrial control systems rather than just an IT based attack?

[Joe] (08:34)

We’ve talked about some of these in the past on different podcast episodes, but I do think the Cyber Avengers, affiliated with Iran’s IRGC, are a really good example of targeting programmable logic controllers inside water systems. And we saw that in 2024. We saw attacks in Texas on water systems where attackers were able to control and maybe even overflow certain water tanks and water systems to disrupt service.

And then, separately from those two, we also see PRC-sponsored attacks through actors like Volt Typhoon and even Salt Typhoon, targeting different elements of critical infrastructure. And in a lot of cases, the methods are meant to gain access to these systems and then find ways to control them remotely or run arbitrary code to do something that the system wasn’t originally intended to do.

And so those are three good examples. I think the state of Texas had an attack in 2024. I think generally we know what the Cyber Avengers affiliated with the IRGC have been doing. And we have seen, over the past couple of years, Volt Typhoon and Salt Typhoon, with Salt Typhoon in particular disrupting the telecommunications infrastructure.

[Paul] (09:51)

Now, Joe, in the Cyber Avengers case, it may not be that they actually set out to target wastewater systems. They could simply have said, let’s wander around. Once we’re into the IT part of the system, let’s see what parts of the OT or the industrial control network have additional vulnerabilities that let us percolate, if you like, lower and lower down. So do you want to say something about how some of these attacks were actually pulled off? Like, what did they do, and what blunders, let’s be blunt, did we make on our side, making it easy for them to get in?

[Joe] (10:29)

So a couple of things, there are access methods that nation states can use if they can compromise the human machine interface, which is that level two of the Purdue model that you’re referring to, to then access PLCs, the programmable logic controllers. That is a method to gain access.

[Paul] (10:48)

Now, generally that HMI, the human machine interface, loosely speaking, those are supposed to be things like control panels that are in a pump room, or that are attached to a lathe, that have things like open valve, close valve, emergency stop. And they’re supposed to be operated, again loosely speaking, by someone who’s actually standing there. They’re not machine-to-machine interfaces. They’re just like the buttons you have on your TV: on, off. And yet, if they have flaws in them, that means attackers can sort of skip over all the other layers. They can sit in a country far, far away and pretend that they’ve actually driven out into the middle of nowhere in Texas with a physical key for a pump room, opened it up, gone in, and pressed the actual button themselves. That’s quite a dizzying amount of power to hand over, isn’t it? So what do you do about that?

[Joe] (11:42)

Well, it’s not like there’s a physical lever that you’re opening and closing the valve with these days. It’s all digital controls, obviously, and these systems are digitally connected. And they’re not just digitally connected: they have communication ports and connectivity to the broader ecosystem. And that, in fact, is what offers the potential for attackers to gain access through the HMI to these PLCs. And so you can obviously segment your networks more often. You can build security into all those devices to try to restrict the ability to compromise those systems. And then, at the end of the day, you also need to look at what else can be done with those PLCs that are connected to, say, the robotics on the manufacturing plant floor. All of those are attack vectors. 

In one sense, what you need to do is work with all your suppliers. If you’re managing a facility that’s now connected to the internet and you have a bunch of different vendors providing core components of your infrastructure, of your operational technology network, you need to understand the risk posture of those devices. You also need to then segment that network, like I said, and make sure that you have defense in depth. But I think the most important thing is that relationship with your vendors, because if the supply chain creates vulnerabilities that you don’t know about, then you don’t have a great chance of defending against them. Operators and asset owners really need to understand the security posture of all the assets in their infrastructure.

[Paul] (13:13)

You get that in spades with embedded systems, because they’re often using much older hardware, maybe with very limited memory, maybe with very limited CPU power. So while they might not be at risk from a “let’s put this giant untested library in” decision, they are at risk from the fact that it’s harder to design them to be secure in the first place. And many of them date from an era when people just weren’t doing that.

So in particular, what makes what are called these days memory unsafe languages a particular concern as you go down the Purdue stack, as you get to the smaller and more specialized devices at the bottom levels?

[Joe] (14:00)

The memory unsafe languages, languages like C and C++, are widely adopted across critical infrastructure. The underlying problem is that they have inherent weaknesses: even the best programmers will leave certain parameters or entry points exposed, allowing a well-informed attacker to find a weakness in the underlying software and exploit it.
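
To make that concrete, here is a minimal, illustrative C sketch of the kind of inherent weakness Joe is describing: a fixed-size stack buffer filled from attacker-controlled input with no length check. The function and packet names are invented for illustration; this is not code from any real device.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: the incoming length is never checked against
       the size of the destination buffer, the classic C memory-safety
       mistake. */
    static void handle_packet(const char *payload, size_t len) {
        char buf[64];
        memcpy(buf, payload, len);  /* if len > 64, this smashes the stack,
                                       including the saved return address */
        printf("first byte: 0x%02x\n", (unsigned char)buf[0]);
    }

    int main(void) {
        const char probe[] = "hello";        /* benign input works fine... */
        handle_packet(probe, sizeof probe);  /* ...but an attacker-chosen
                                                len can hijack control */
        return 0;
    }

A simple bounds check before the copy removes the bug, but at the scale of millions of lines of C, some mistake of this shape almost always slips through.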

[Paul] (14:23)

Because those languages come from an age when people expected to be allowed to have direct access to memory, to go back to the old BASIC language days and PEEK and POKE memory at will, in order to achieve things that were otherwise impossible because there was no library code or no well-designed interface to do it safely. And of course, if a crook can stick the knitting needle into the wrong hole, all sorts of trouble emerges, doesn’t it?

[Joe] (14:54)

All sorts of trouble emerges, and we, RunSafe Security, did a recent analysis on an embedded system. What we looked for were those underlying gadgets that are reachable by an attacker by virtue of these memory-based vulnerabilities. In general, you call those return-oriented programming gadgets, or ROP gadgets. And when you string a couple of gadgets together, they become a chain. So you have ROP chains.

And essentially, what those gadgets and those chains allow an attacker to do, if he or she finds them, is gain control of, for example, syscalls, or system calls, inside the software. By virtue of gaining access to syscalls, you can achieve actions that weren’t otherwise available to you through the standard functions that were written into the program in the first place.
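
What does a ROP chain actually look like? At its simplest, it is just a list of addresses the attacker writes over the stack, so that each tiny code fragment ending in a return instruction hands control to the next. The C sketch below shows the shape of such a chain for x86-64 Linux; every address and gadget in it is invented for illustration, since real chains are assembled from whatever fragments happen to exist in the target binary.

    #include <stdint.h>
    #include <stdio.h>

    /* Conceptual shape of a ROP chain: each stack slot is either the
       address of a gadget (a code fragment ending in RET) or data that
       a preceding "pop" gadget loads into a register. All addresses
       here are made up. */
    static const uintptr_t rop_chain[] = {
        0x401823,  /* gadget: pop rdi; ret  - load the first argument    */
        0x404000,  /* data:   address of an attacker-controlled string   */
        0x4019f1,  /* gadget: pop rax; ret  - load the syscall number    */
        0x3b,      /* data:   59 = execve on x86-64 Linux                */
        0x401c02,  /* gadget: syscall       - hand control to the kernel */
    };

    int main(void) {
        /* Printing the chain just shows its structure; nothing here is
           executed as an exploit. */
        for (size_t i = 0; i < sizeof rop_chain / sizeof rop_chain[0]; i++)
            printf("stack slot %zu -> 0x%llx\n", i,
                   (unsigned long long)rop_chain[i]);
        return 0;
    }

If the fragments those hard-coded addresses point at move, the whole chain collapses, which is the intuition behind the relocation approach Joe describes next.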

[Paul] (15:46)

So you might turn, say, a command to read from a file into the equivalent write command, maybe even by flipping one bit, and now you’ve got control over the file. Maybe you tell it to add a password that lets you in later. Maybe you say, hey, here’s some new code I want you to run, and don’t ask anybody. Think of the Pwn2Own hacking competitions: the attackers there are doing responsible disclosure, so I’m in favour of that. That’s great.

And they turn up and I think they get exactly 30 minutes to pull off their attack and there’s a timer that’s shown on video. But what you don’t see is that they may have spent a whole year practicing and preparing that attack so they can pull it off in the 30 minute period. And some of the top attackers, they only need seconds because they’ve practiced well enough that they just know the attack is going to work. The complexity of finding the attack is very different from the complexity of pulling it off.

Once you’ve figured it out, you can either sell it on to somebody else who wants to use it or hand it to a team of attackers who can use it at will. They don’t have to do that three, six, nine months of research for every valve or for every lathe or for every water drainage pump.

[Joe] (17:02)

And that actually is the economic equation on which RunSafe was founded and launched, which was: if we could find a way to disrupt attackers, even if they know these blueprints, even if they know these methods, then we’re achieving something that’s costing them money and time and ideally forcing them to look elsewhere. That is, if you apply the RunSafe techniques to relocate where all those functions are, so you can no longer find the underlying weakness, the gadget, the ROP gadget, the ROP chain that I’m talking about, then the attacker will be disrupted, because they are no longer finding those gadgets that they can grab onto to manipulate the system calls to do something different. A recent study that a red team did, a US government red team I should say, was super interesting, and I’d like to share it, Paul. One of the more devastating attacks historically, or at least exploits that was out there, was called Urgent 11.

[Paul] (17:57)

That sounds like a film, but I really shouldn’t laugh. It’s a nice sounding name, but with rather devastating potential impact.

[Joe] (18:07)

Yes, and it did affect products in the energy arena. But the idea there is that with Urgent 11, there happen to be 11 underlying vulnerabilities that are accessible to attackers. And at least six, possibly seven, of them are memory-based vulnerabilities that exist in the underlying operating system. I think in that particular case it was VxWorks. It tells the story of the supply chain.

You’re not aware that in VxWorks some of the underlying communication ports and TCP/IP components are accessible, and of where there are dependencies that allow an attacker to grab onto a system. And so what the red team found was that in a certain system, built on a real-time operating system with an application on top of it, there were, brace yourself Paul, 14,500 ROP gadgets found in that software.

[Paul] (19:04)

So those are little fragments of code that can be stitched together in arbitrary ways, although they don’t look like the kind of code a human would write. They can do things like add 6 to this number, subtract 5, access this memory address, jump to somewhere that I’ve chosen earlier, and by stitching them together in the right order, you can basically build any old program. It may look like a mess, and if you were a human who wrote that kind of code, you’d get told off in a code review, but if you’re an attacker, you don’t care about the quality of your code. You just care, will it work 999 times out of 1000?

[Joe] (19:39)

And in this case with Urgent 11, with those 14,500 gadgets accessible, the attacker only needs to find 11 gadgets to do what they want to do. Out of 14,500, find 11. And guess what? Those 11 exist multiple times. I bet you can’t guess how many gadgets were remaining after RunSafe was applied, Paul. But I’m going to tell you.

[Paul] (20:02)

I’ve guessed in my mind, so you tell me and I’ll tell you whether I was right. How’s that for a casino bet, Joe? Yes, I’ve got a black chip down, Joe. I’m not showing you my cards. Go on.

[Joe] (20:09)

That’s great, you can’t lose. Trust me.

OK, so I think you should hold up your number, just to help verify. So from 14,500 gadgets accessible, after RunSafe was applied the number went down to zero gadgets available to the attacker. This is a monumental feat in computer science in my book, and the RunSafe team accomplished it. Imagine what you can do if you can virtually eliminate, or reduce to zero in this case, the gadgets accessible to an attacker. That means that the vulnerabilities you do know about, and the ones you don’t know about, are no longer accessible to the attacker. And that’s why I feel so strongly, so passionately, as you’ve pointed out, Paul.

[Paul] (21:00)

I’m hearing it now, Joe. Our listeners obviously can’t see it, but I can see you on video: you’re getting closer and closer to the microphone, and your smile is getting bigger and bigger and bigger. Which is great, because I guessed three, that there would be three ROP gadgets left.

[Joe] (21:15)

I’m really curious why you picked three. I guess maybe those are your three favorite ones that you always know exist that no one else knows about, but the red team certainly didn’t find those. But there are tools to count gadgets, and I think it’s a wonderful metric: gadget counts before and after. From that perspective, you can eliminate the risk of these memory vulnerabilities. And we got onto all of this because of memory unsafe languages.

The issue, of course, is that memory unsafe languages, what I like to consider really efficient code, C and C++, are everywhere in critical infrastructure. They’re in medical devices, in the energy grid, in automobiles, in aviation systems. And certainly the memory safety set of vulnerabilities implied in that needs to be addressed. In our own little way, RunSafe is trying to help make these unsafe systems safe by preventing these memory-based vulnerabilities from being exploited in the first place.
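
Joe mentions that there are tools to count gadgets before and after protection is applied. Real gadget finders (ROPgadget is one well-known example) disassemble backwards from every return instruction to enumerate usable fragments; the toy C program below only counts candidate gadget end-points, the 0xC3 RET opcode bytes in a raw x86-64 binary image, but it gives the flavour of the before-and-after metric.

    #include <stdio.h>

    /* Toy gadget census: count 0xC3 (RET on x86-64) bytes in a binary
       image. Every RET is the tail of potentially many overlapping
       gadgets, so this is only a rough proxy for what a real gadget
       finder would map out. */
    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary-image>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        unsigned long ret_bytes = 0;
        int c;
        while ((c = fgetc(f)) != EOF) {
            if (c == 0xC3) {
                ret_bytes++;
            }
        }
        fclose(f);
        printf("candidate gadget end-points (RET bytes): %lu\n", ret_bytes);
        return 0;
    }

Run against the same firmware image before and after a protection pass, the count gives a crude version of the metric Joe describes.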

[Paul] (22:16)

And your skeptics might say, well, what’s the big deal? If you’re on Windows 11 or a recent version of Linux, you’ve got address space layout randomization, you’ve got all kinds of kernel options you can set that will load security modules, and you can add all these flags to the compiler that add all these runtime checks, and that fixes the problem. But you often don’t have that luxury on an embedded device, do you? If you were to wrap it in this protective cocoon, it might seem to work okay, but in an emergency, you couldn’t verify that it would close the valve in time, that it would meet its specifications, or that it wouldn’t fail for other reasons, like suddenly running out of memory. 
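
Paul’s point about address space layout randomization is easy to see on a desktop-class system. Compiled as an ordinary position-independent executable on Linux, the little C program below prints different code, data, and stack addresses on every run; on an embedded platform without ASLR, the numbers would be identical on every boot, which is exactly what an exploit writer counts on.

    #include <stdio.h>

    static int global_marker;   /* lives in the data segment */

    int main(void) {
        int stack_marker;       /* lives on the stack */
        /* With ASLR, these addresses differ from run to run; without
           it, they are the same every time the device starts up. */
        printf("code   : %p\n", (void *)main);
        printf("globals: %p\n", (void *)&global_marker);
        printf("stack  : %p\n", (void *)&stack_marker);
        return 0;
    }

Run it twice and compare: addresses that never change are addresses an attacker can hard-code.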

So I guess what you’re alluding to here is the concept of secure by design, where you try to make sure that, as far as you can, you think about security before you start, while you’re developing, and afterwards while you’re supporting. But you don’t leave everything until the back end, where you just add patch on patch on patch until you’ve got car bodywork where, when you take a magnet to it, it doesn’t stick anywhere. You have that complexity in embedded systems, don’t you? You can’t just make the changes that you want anytime you want to.

[Joe] (23:36)

And these systems last for a long time in the infrastructure. Yes, the compute resources and the power resources may be limited. I like what you’re saying: you can’t just patch and patch and patch. I would argue, you know, why patch if we’ve got bubble gum and band-aids? We can put things on these systems and prevent attacks that way, and infrastructure should simply work. Obviously I’m joking, but the idea is that you want efficient systems. You don’t want patched-up systems. And if you can eliminate the risk of exploitation, you should do it.

But your point around all the compiler settings and flags, and the different ways you can build your system and build security into it with tools available from the operating system or otherwise, is that it also comes at a cost. That cost could be increasing the dependencies, and ultimately increasing the size of the binaries that get produced. And bringing in dependencies that have vulnerabilities is probably one of the biggest things that happens. Developers need to reduce dependencies, reduce vulnerabilities, reduce footprint, and have the most efficient code out there. So you could potentially add all sorts of settings and flags, but that comes at a cost that requires further and further use of tools to perform hygiene and analyze where the next attack is going to come from. My mind always goes towards keeping it as clean and simple as possible, as efficient as possible, and still having a way to mitigate against entire classes of vulnerability.

[Paul] (25:07)

So maybe we can finish up if I ask you: looking to the future, what are the emerging threats and trends that you think manufacturers of embedded devices should be looking out for?

[Joe] (25:19)

Well, I hate to say it, because this is going to sound like the core topic of the day, but it’s the emerging vulnerabilities around AI systems and generative AI systems. The more we see AI systems interacting with each other, the more chance there is for attackers to exploit the inputs of AI systems, I guess is a way to see it.

[Paul] (25:41)

Yes, it’s sort of like ROP gadgets for the AI processing engine. It’s supposed to detect that you’re asking it a question it’s not supposed to answer, but you word it in such a way that you bypass its protections, and it goes and generates code that does something bad, or suggests an action that is unsuitable. That’s always going to be a risk, isn’t it?

[Joe] (26:00)

Exactly.

It is a risk, but we see the advent and the really fast adoption of the Model Context Protocol, MCP, and of two systems interacting. And if an attacker sits in between those systems and figures out how to manipulate the messaging, you certainly can see how that would disrupt critical infrastructure. So I think that’s a way off in the future for some of these systems, because safety is front of mind and maybe AI is not going to be adopted immediately.

But the world is changing fast, and the way business is being done is changing fast. You said forward thinking; I’m looking at all the aspects of AI that could be exploited. With that said, the US government, the Trump administration, just recently put out America’s AI Action Plan. And what’s interesting for me about it is how the US is going to win the artificial intelligence race when China is such a formidable competitor in this arena.

Part of the plan that really struck me was the emphasis on secure, resilient data centers, and a secure, resilient energy grid that’s able to manage interoperability with distributed energy sources to adapt to the needs of the moment. And let’s face it, large language models consume energy, and that’s part of the driver for the data center build-out. So it is natural to see the importance of a resilient data center and a resilient energy grid. And from a RunSafe perspective, we’ve been protecting components of data centers and the energy grid from the get-go. 

And so for me, I think all that demand is only going to increase for us. And it’s potentially a warning call to everybody else to look out for what’s going to spawn from the artificial intelligence race, and what the attack methods are. I gave one example, the equivalent of a man in the middle between two MCP servers, but there’s also the importance of the underlying critical infrastructure, to avoid large language models being disrupted from processing in the first place, because in the future, business and operations and critical infrastructure will depend on some form of artificial intelligence.

[Paul] (28:18)

Joe, that’s very well put and I think it emphasizes as much as we possibly can that cybersecurity is a journey, it is not a destination. It’s something that we all need to be thinking about. And if I can refer back to the podcast where we had Leslie Grandy as a guest, she said, you don’t have to use AI yourself. You may decide that it’s not for you and you don’t need it in your systems.

But you need to think like a red team person, you need to know what your attackers would find out about your system if they used it. Simply put, we can’t close our eyes to anything and I guess the price of freedom is eternal vigilance. I’m smiling but I’m not laughing when I’m saying that. So, Joe, thank you so much once again for your passion. It really makes me feel good about the future of cybersecurity to have people like you in the industry.

So thanks so much for your time and thanks to everybody who tuned in and listened. If you found this podcast insightful, please don’t forget to subscribe, please like and share us on social media and please recommend us to all of your team. And don’t forget, stay ahead of the threat. See you next time.

 

The post When Machines Get Hacked: A Manufacturer’s Guide to Embedded Threats appeared first on RunSafe Security.

]]>
Weapons Cybersecurity: The Challenges Facing Aerospace and Defense https://runsafesecurity.com/podcast/weapons-cybersecurity/ Thu, 25 Sep 2025 14:07:12 +0000 https://runsafesecurity.com/?post_type=podcast&p=254959 The post Weapons Cybersecurity: The Challenges Facing Aerospace and Defense appeared first on RunSafe Security.

]]>

Modern weapons are no longer just hardware—they’re deeply connected, software-driven systems vulnerable to cyber attack. In this episode of Exploited: The Cyber Truth, RunSafe Security’s Dave Salwen joins Paul Ducklin and Joseph M. Saunders to discuss the cultural and technical challenges of securing the future of Aerospace & Defense.

From GPS jamming and supply chain risks to the dangers of relying on outdated patch cycles, Dave outlines how adversaries exploit weaknesses and why resilience requires more than traditional IT defenses. Discover why Secure by Design, proactive defense against unknown vulnerabilities, and cultural change across the defense ecosystem are critical to keeping mission-critical systems secure.

Key topics include:

  • How adversaries exploit software flaws in unpatched, mission-critical systems
  • Why cultural change inside the DoD and its ecosystem is as vital as its technical defenses
  • The role of Secure by Design in weapons development lifecycles
  • The risks of open-source and supply chain dependencies in defense programs
  • Why resilience and runtime defenses are critical to mission survivability

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn


Guest Speaker: Dave Salwen, VP of Embedded Systems at RunSafe Security

Dave leads RunSafe’s global Public Sector go-to-market efforts, bringing expertise in rapid technology development and public sector sales. He previously led business development for Raytheon Space and Airborne Systems’ $500M R&D division and held leadership roles at Leidos (SAIC) in advanced technology and electronic warfare. Earlier, he worked in commercial tech at ScoreBoard and PSINet. Dave holds a BS from the University of Pennsylvania and an MBA from MIT Sloan.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security. 

[Paul] (00:01)

Welcome back, everybody, to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined today as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello, Joe. And our very special guest today is Dave Salwen, who is SVP of business development at RunSafe Security.

[Joe] (00:20)

Greetings, Paul.

[Paul] (00:31)

And we have a very intriguing title, I must say: Weapons Cybersecurity, the Challenges Facing Aerospace and Defence. Now, before we start, I’ll just say, Dave, when I saw this title, with my own limited knowledge of weapons safety and security, I was thinking, well, you know, if you have a revolver, that normally has a transfer bar, so if you drop it, it doesn’t go off. And when you’ve finished using it, you normally have a safe that you lock it up in.

But we are talking about safety and security of a very different sort in the modern era, aren’t we? It’s not about locking the things up, it’s actually keeping them safe while they’re out in the field and active.

[Dave] (01:14)

These weapons are billion dollar, even trillion dollar systems. They’re aircraft, they’re radars, they’re sensors, they’re communication devices, and actual munitions, kinetic and non-kinetic. Like everything around us, like autos, like our smartphones, they’re becoming more and more and more about software. Yes, it’s a real topic of discussion inside the DOD, how best to secure weapon systems from a cyber attack perspective.

[Paul] (01:47)

So it’s not like what you might think of as weapons a century ago. This is not just the actual munition, it’s the entire system that gathers data, that helps you understand where you need to go, who needs to do what.

[Dave] (02:03)

Exactly. These systems have communications. These systems have command and control. These systems have GPS. These systems have algorithms and radars that are pulling in lots of data to make them more effective, to provide information back to the military on what they’re doing and how successful they’ve been. I mean, these are very connected, very software driven systems.

[Paul] (02:26)

Yes, and you mentioned that they’re communicating back. If you think of colonels or generals sitting back at HQ watching what’s going on, they’re not watching on embedded systems. They’ve probably got laptops or dedicated IT systems, which are somehow receiving data from the field, probably in real time. So there’s a two-way communication that means you have all the risks of the embedded systems and the specialized networks combined with risks to what you might call the regular IT side as well.

[Dave] (03:00)

When I think about the weapons systems, generally they’re in the field, and generally they’re without cyber support. They’re not those enterprise IT data systems at the headquarters of an air operations center or what have you. That has its own challenges, the enterprise IT-like challenge, and that’s where most cyber defense fits really well. But when you’re dealing with a weapon system that’s out in the field, when you’re dealing with a weapon system that isn’t going to get a software upgrade or a software patch for one or two years, that is a whole different beast when it comes to cyber defense. Those systems that are in the field, that have mission critical roles, that are under cyber attack from the most sophisticated adversaries, what’s happening with them? Are they cyber secure enough? I, and many others, would argue not even close. And so that’s where we are today.

[Paul] (03:56)

Can you share an example of a recent cyber incident, in detail if you can speak publicly about it or just illustratively if not, that gives us an example of the kind of risks that these systems, as distinct from traditional IT systems, suffer from?

[Dave] (04:13)

It is hard to talk about specific systems. Let’s just look at what’s going on in the newspapers. You can start with Volt Typhoon, likely Chinese backed, getting into critical infrastructure, thought of as commercial critical infrastructure: water systems, energy systems, IT systems, and the like. But it’s showing you there are sophisticated adversaries connected to the Chinese who are getting into these public infrastructure systems.

Do we think they’re not thinking about how to get into weapon systems? Of course that’s going on. Transitioning, well, we’ve got a conflict, unfortunately, going on with Russia’s invasion of Ukraine. Again, we’re seeing cyber attacks in the nature of Volt Typhoon on Ukrainian infrastructure. And to me, when you talk about GPS spoofing, GPS jamming, you’re right on the cusp.

[Paul] (05:10)

Yes, I was thinking of that. That’s something that affects everybody, but it definitely affects standalone or autonomous systems that rely on GPS to tell them where they are because there’s no one else to do it.

[Dave] (05:23)

That often is considered sort of more in the domain of electronic warfare, not cyber warfare, but that distinction starts to blur. And so now, all of a sudden, back to this premise: you have these weapons systems that are in the field, that aren’t getting patched, that are subject to attacks from absolutely the best adversaries. To me, the approach of the US government to weapons cybersecurity

[Paul] (05:32)

I agree.

[Dave] (05:53)

is a bit too much like the approach to enterprise IT security. It’s not putting enough rigor into the cyber defense.

[Paul] (06:02)

Traditional security approaches: Patch Tuesday once a month; update your iPhone. I did mine to iOS 26 the day it came out, and I figured, you know, if it doesn’t work, I’ll probably just go back to iOS 18. I’ve got a reserve phone; I’ll use that for the few hours that my phone isn’t available. It just doesn’t work like that with embedded systems in general, and it really doesn’t work like that with things as important as weapon systems specifically, does it?

[Dave] (06:31)

Exactly. Again, Patch Tuesday is great for the enterprise IT system. These weapon systems are very complex. Every time they change the software, that triggers a massive test cycle. You don’t just update your phone and, as you were saying, hey, if it works, great, and if it doesn’t, I’ll update it again. No, when these weapon systems are updated, that triggers, appropriately, a very rigorous testing cycle. So guess what? These systems are only updated once every year or once every two years. And by the way, to update them, they’re not accessible over the air like an iPhone.

[Paul] (07:09)

Well, you’d rather hope not, wouldn’t you?

[Dave] (07:12)

You’ve got to get those weapons systems back into a depot. I really want to bring in the word culture. Right. I know this is about technology, but again, to me, the gap in weapons system cyber defense begins with a shift in culture inside the DOD and its ecosystem. What is the vision? What is the goal in cyber defense for these weapons systems?

[Paul] (07:41)

It’s never really worked, even for consumer laptops, just papering over the cracks every week or every month. It’s much better if you actually get things right upfront, and if you can bake in things that mean that if there is a vulnerability in the future, you don’t necessarily need to patch immediately to be able to mitigate it.

[Dave] (08:05)

Exactly, it’s that second point. What you’re describing of building it in is a key change in culture. And the second thing again, and you touched on it, is you can’t just go after the known vulnerabilities. You have to acknowledge, hey, these are very complex software systems. They have vulnerabilities in them despite best coding practices. We have to be proactive and we have to not just focus on the known vulnerabilities, you have to focus on the unknown vulnerabilities as well right from the start.

[Paul] (08:39)

Joe, maybe I can ask you at this point to say something about a topic dear to your heart, which is also more of a cultural matter than a technological one, a way of thinking as much as a way of doing, and that is the idea of Secure by Design.

[Joe] (08:54)

Yeah, I think the big challenge people have to think about from a software development perspective, when you think about weapons systems is it’s not just as simple as doing a few updates and then pushing the release back out like you might in a web environment or something else. There is a hurdle that weapons programs have to go through to achieve authority to operate and to meet a standard and expectation of quality and rigor to include security and testing.

We want to make sure, and everybody wants to make sure that these weapons and all their components work exactly as intended. There are pushes then to improve the efficiency of those processes, but historically we would have, I would say a waterfall kind of software development life cycle for these weapons programs. And more recently we have seen changes towards a more repeatable DevOps process.

[Paul] (09:49)

Now DevOps, that’s development operations. Yes. So instead of just sending the developers away for seven months to build something, after which they come back and show you what they’ve done and then you see whether it all works together, every time anyone changes anything, you make sure that you haven’t gone off on a wrong tangent.

[Joe] (10:07)

Ultimately, it comes down to what is your process to ensure you can achieve authority to operate and meet both the timeline and the budget to push out your weapons on schedule. To the extent we can do that faster, it means that the Department of Defense or Department of War is more competitive and more innovative. And that ultimately, I think, is the balancing act of ensuring software is bug free and has authority to operate, versus that timeline to achieve it. We’re certainly trying to accelerate development life cycles without compromising quality.

[Dave] (10:44)

When you talked about secure by design, Joe and Paul, to me, there’s a big element of cultural change involved in that movement. Again, I think there’s a gap in that the DOD weapons systems are not being treated with that specialness, that they are more like operational technology, less like enterprise IT. They’re out in the field, they’re unpatched, and again, subject to cyber attacks by the most sophisticated attackers on the planet, aside from the US attackers.

[Paul] (11:21)

With autonomous systems and with weapon systems that aren’t so much about blowing things up as actually acquiring information about who’s doing what where, there are a lot more moving parts and many more of those moving parts are just pure software, aren’t they?

[Dave] (11:37)

Exactly. It’s that change. And again, it’s happening in other industries as well, like the automobile industry. Yes, where physical and mechanical and physically isolated was the norm, now it’s so much more about software and connectivity. And additionally, to go into a bit more detail, the software is sometimes making great use of open source, and any attacker can get their hands on open source. People are also trying to make software uniform and modular across systems, and that’s making it easier for the attacker. If they find an issue, most likely that’s going to be relevant to other systems as well, because there’s so much of that modularity and reuse going on. Because it’s about software; it’s not about custom hardware applications anymore.

[Paul] (12:31)

Yes, and we’ve seen recent attacks in the open source space, like the XZ Utils hack, where someone going by the name Jia Tan, and no one seems to know who he is even now, spent something like one to two years earning trust with archiving compression tools so that he was trusted to work with these projects, but his ultimate goal was to hack into the version of OpenSSH which was used specifically in the Debian flavour of Linux. And you just think, wow, there’s such time and money available to whoever the attackers were. These are challenges that mean even when you think you can trust your source, you may not be able to, and you have to be nimble enough to deal with the fact that you might be wrong.

[Dave] (13:20)

Yeah, I mean, combating a foe that is willing to spend years building that kind of a cover story is just a very big task, alongside all the other challenges we have from the exploitation of vulnerabilities that were not put there on purpose. Writing code is not easy. And despite best practices, the best coders, when they’re dealing with embedded system languages like C and C++, end up with vulnerabilities in their code.

And then the attackers find them and develop exploits against them. As you were saying, Paul, in the military, in the defense world, they’re much more disciplined. They don’t launch an attack just to show off. Yes, there’s a concept of war reserve mode. You can develop your attacks, you can test them, you can validate them, but you are not going to expose them to the enemy until there’s an actual conflict. At Jaguar Land Rover, right now, today, as we’re recording this, global plants are still shut down. They’ve been shut down for two weeks now, and some people are saying they’re going to be shut down until November.

[Paul] (14:26)

Boy. I noticed when that first happened, but I didn’t realise it was becoming such a long-running saga. If I’m not wrong, the current suspicion is that this was simply that person A phoned up person B and said, hey, will you let me in? And they said, yeah, okay, since you asked so nicely, here you go.

[Dave] (14:44)

Getting out of my swim lane, dealing in that enterprise IT world, you can’t call up a jet fighter, per se. But again, when we’re seeing these attacks in the domain of IT, and we know the vulnerabilities exist in the DOD weapons systems, and we know that non-kinetic attacks are very much a part of the craft of war,

[Paul] (15:08)

That’s part of what’s referred to as the grey zone, isn’t it? You’re almost attacking, but you’re stopping just short of anyone being able to point a finger at you and do anything about it. As much as there are many moving parts in the bits you need to defend, there are a lot of moving parts in every cyber attack, aren’t there?

[Dave] (15:13)

Yes, as you’ve seen, there’s that gray zone, that preparation of the battlefield. Some of that preparation of the battlefield goes beyond weapon systems, right, into just public critical infrastructure. When you have these complex weapon systems in operation, executing missions that are critical, of course they’re going to be under attack by sophisticated adversaries. To me, there’s a big gap, and I believe it begins with the cultural approach of, right now,

[Paul] (15:42)

Yes.

[Dave] (16:01)

adopting enterprise IT cyber defense for weapon cyber defense, not recognizing that weapon cyber defense is so different. And then the second result of that is this focus on known vulnerabilities. You’ve got to focus on the unknown, because these systems are in the field and don’t get patches except on one or two year cycles.

It’s just naive to think that our adversaries aren’t developing cyber attacks for weapons systems. It’s naive to think that these weapons systems can’t be exploited. So the right answer is that cultural change that brings unknown vulnerabilities into the requirement set, that brings proactive cyber into the requirement set.

[Paul] (16:45)

So Dave, you want to say something about, since we’re talking about culture more than technology, when it comes specifically to weapons security rather than automotive or power grid or mobile phones for that matter, what does that shift look like in practice, say for people in different parts of the organisation, for say engineers, for program management and for defence leaders?

[Dave] (17:12)

To me, a close relative of cultural is organizational. A lot of my background before RunSafe was in the electronic warfare domain. The interesting dynamic in electronic warfare is that the people working on electronic warfare attack work shoulder to shoulder with the people working on electronic warfare defense. The attackers turn around to their defender colleagues and say, hey, I just developed this attack for this enemy adversary weapon system; by the way, it would work on ours.

[Paul] (17:44)

Oh dear.

[Dave] (17:47)

That’s one thing that I think really should start to happen more. The cyber attackers in the DOD should be informing the cyber defenders. Again, it’s that mindset of moving away from “this is an enterprise IT system” to “this is a weapon system.” These are what real attacks on weapon systems look like; hey, we should be preparing our weapon systems for those types of attacks. And in fact, it is the dynamic that happens at RunSafe: our technical leads are now in cyber defense, but they started in cyber attack. That dynamic of using what you know from cyber attack to do really cutting edge cyber defense is, I think, how we’re able to do some unique things in the market.

[Paul] (18:32)

Joe, you’ll remember a few weeks ago we did a podcast with Leslie Grandy, who was talking about premeditatio malorum, imagining what could possibly go wrong. She said you may decide in your organisation that you’re not interested in AI, you’re not going to use it, you don’t even need it. But you’d better try it out to see what answers people who do use it are going to get. Because if they find something you don’t like, they are going to use it against you. And that applies everywhere in the chain, doesn’t it?

[Joe] (19:06)

It does apply everywhere in the chain. I think Dave’s point is, hey, we’ve got to look at this from different perspectives. Interestingly enough, in the US we’re thinking about what a US cyber force would be and how much cyber offense is involved in all that. If that’s the case, we also need to be setting the stage for even more deterrence. But even more than deterrence, we should be looking at resilience, and the resilience of weapons programs, because if the US escalates its cyber offense, then I do think that its resilience, and the need for resilience, will have to increase as well.

[Paul] (19:40)

These days there’s a lot more collaboration with all sorts of different private sector industries, isn’t there? And as Dave said, those development teams, to be more productive and more responsive, are in turn opening up not just to commercial off-the-shelf software, but to open source software as well. So what form does that cultural collaboration take in this new era, where everything’s a bit more open, but parts of it still have to be as closed as ever they were back in the day?

[Joe] (20:14)

Well, we have integrated supply chains, and that creates a reason for collaboration in part. I would even look outside aerospace and defense: companies like General Motors are making a substantial investment in the software world. These are not just hardware companies anymore, like they were yesterday. You look at organizations like Lockheed Martin: they have, I don’t know, thousands and thousands, tens of thousands of software developers, and the solutions they offer are software driven. That necessitates an understanding of what the software development lifecycle looks like, but also of what the software supply chain looks like. 

You have to start to open up and think about what the ramifications are if you’re taking software from a third party. What are the ramifications if you’re using open source? And what are the ramifications as you connect all these new software applications into some environment that’s communicating with another set of systems? All of this means that the attack surface, from a cyber perspective, is increasing, all the while we’re being more open and transparent about the software tools that we’re using.

[Paul] (21:20)

So at the risk of sounding salesy, which I don’t mean to, but I don’t see why I shouldn’t, let’s talk about the technology you can bring to bear on getting security right before you deliver, rather than papering over the cracks afterwards. Dave, what are some of the steps that people who want to make that shift to secure by design can take in the software components that they deliver?

[Dave] (21:43)

To back up a little bit from your question, to restate my wish list for cybersecurity of weapons systems, it’s sort of threefold. First is to recognize that weapons systems are not enterprise IT systems, and that unknown vulnerabilities have to be part of the solution. That would be number one. Number two is that organizational change: the cyber defenders of weapons systems should be more informed by the cyber attackers of weapons systems within the DoD organization. 

And then the third thing on my wish list, and this is happening, and this sort of gets to your direct answer: these weapons system programs are always complex, they’re always under budget pressure, they’re always under schedule pressure. But RunSafe is starting to work with some of the first movers, the early adopters, and starting to bring that proactive defense to these systems without changing the functionality of these systems. And I’m really excited as these first mover systems start to tell their friends and family within the DoD that there are solutions that recognize that weapon systems are different, that are more tuned to embedded systems, but that can also be proactive and go after the unknown as well as the known vulnerabilities. And it’s real; in DoD speak, it’s TRL 9, it’s in deployed systems.

These can be adopted not just by the first movers, but more generally by weapon systems.

[Paul] (23:15)

I learned a brand new acronym today, which I actually really love, from Joe Saunders, and that is RASP. R-A-S-P. Tell me something about that.

[Dave] (23:25)

To spell it out, runtime application self-protection. That is at the heart of RunSafe’s protection approach. And it has to do with giving that weapon system, that embedded system, that defense automatically. Really it’s manifested as a moving target defense. Memory locations are unknowable by the attacker. And in that sense, their attacks, they can still launch them, but they will fail. So the attack fails, that’s goodness.

And in addition, in the DOD context, a failed attack is really good information for the person being attacked. It’s revealing of your enemy’s capabilities, of your enemy’s strategies.
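
As a rough intuition for that moving-target idea (a simplified sketch, not RunSafe’s actual mechanism), the C program below asks the operating system to place a memory region at a differently randomized address on each run, so its location cannot be known in advance. A real load-time defense relocates the program’s own functions in the same spirit, so that hard-coded gadget addresses go stale.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    /* Simplified moving-target sketch for 64-bit Linux: map one page at
       a randomized hint address each run and report where it landed. */
    int main(void) {
        srand((unsigned)time(NULL) ^ (unsigned)getpid());
        uintptr_t hint = 0x200000000ULL +
                         ((uintptr_t)(rand() & 0xFFFFF) << 12);
        void *region = mmap((void *)hint, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("region landed at %p this run\n", region);
        munmap(region, 4096);
        return 0;
    }

An exploit that depends on a fixed address lands somewhere useless, and as Dave notes, on a defense system the failed attempt is itself valuable intelligence about the attacker.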

[Paul] (24:08)

Even more so perhaps than in any other sector. The idea that, hey, we’ll just rewrite all our code in some fancy new language that gives us protections that never existed when C and C++ were invented, e.g. Rust: in the weapon systems environment, that’s essentially undoable.

[Dave] (24:30)

Rust is a good language, but, for example, you cannot use Rust if you have a safety-of-flight system. There’s a whole raft of weapons that, even if you had the money and even if you had the time, are never going to get approved, because of safety-of-flight concerns. It’s too new a language; people aren’t comfortable with it in that safety-of-flight domain. I think for all weapons systems, and for plenty of commercial systems, the expense and the time of moving all the legacy code to Rust is just a non-starter. You need something that can be proactive, that can deal with the unknown, that can deal with those long patching cycles on the legacy systems. And that’s where RunSafe fits so well, and it’s super exciting.

[Paul] (25:12)

So to finish up, I’ll just throw a question out to both of you and either or both of you can answer it. Do you think that aerospace and defence can stay ahead of the cyber threat? If so, what should we start doing immediately that we maybe haven’t quite done yet?

[Dave] (25:30)

So there’s no question the aerospace and defense market can get it right. I mean, the amount of creativity and really, really amazing ideas and thinkers is there.

[Paul] (25:43)

Yes, I’ll agree with that. If you have the kind of scientists and engineers, the kind of minds that can send a probe to a tiny asteroid, break up its surface and bring the stuff back to Earth so we can analyse it, we’ve certainly got the cleverness. Yes. And we’ve probably got the will. The question is how to make it so.

[Dave] (26:04)

 I don’t know that we have the will. Really? And that’s where I get a little pessimistic. I don’t think that my assessment of the gap in weapons system cybersecurity is universal.

[Paul] (26:19)

I think I’m mixing up, Dave, the words will and desire, which are actually not synonyms, I’ve just realised.

[Dave] (26:29)

There are so many priorities that people are dealing with, and so many distractions. While we have the cleverness, this may not get the priority it deserves until, unfortunately, in my opinion, there is a horrible event due to a successful attack that changes the culture.

[Paul] (26:47)

Joe, how can we avoid doing it that way round? How can we have the defence ahead of the attack, if you know what I mean, which has always been an issue in cyber security, a vital issue, but I guess in aerospace and defence you can put that to the power of itself almost.

[Joe] (27:04)

Yeah, I think part of the goal is to drive innovation as fast as possible. Yes. And as we drive innovation, we have to take into consideration resilience, which means that software development practices need to find a way to continue to accelerate, and we need to find ways to build in security so that runtime defense is available even when a patch is not. I think Dave set the context right.

We can’t just easily update these systems. So when we build them, we have to be very careful to anticipate both the known potential vulnerabilities that exist and what could lead to and result from unknown vulnerabilities when we build systems. Accelerate innovation, increase resilience, ensure weapon survivability in all contested environments, including the cyber domain.

[Paul] (27:56)

Wow, Joe, I think that’s a fantastic way to finish. Very, very strong words. But it’s clear that having the desire to do something is not the same as having the will to do it, and even having the will to do it is not the same as actually doing it. So for those people who haven’t yet taken their first step, in whatever sector, towards secure by design, now more than ever is a good time to do it. That’s a wrap for this episode of Exploited: The Cyber Truth.

If you find this podcast insightful, please subscribe so you know when every new episode drops. Please like and share us on social media. Please recommend us to your entire team so that they can hear Joe and Dave’s words of wisdom as well. Thanks to everybody who tuned in and listened. And remember, stay ahead of the threat. See you next time.

The post Weapons Cybersecurity: The Challenges Facing Aerospace and Defense appeared first on RunSafe Security.

]]>
Can Taiwan Survive a Digital Siege? https://runsafesecurity.com/podcast/taiwan-digital-siege/ Thu, 18 Sep 2025 14:07:29 +0000 https://runsafesecurity.com/?post_type=podcast&p=254939 The post Can Taiwan Survive a Digital Siege? appeared first on RunSafe Security.

]]>
 

Taiwan faces millions of cyberattacks daily, and with nearly 90% of the world’s advanced semiconductors produced on the island, the stakes couldn’t be higher. In this episode of Exploited: The Cyber Truth, host Paul Ducklin and RunSafe Security CEO and Founder Joseph M. Saunders dissect what a digital siege on Taiwan could look like, and why the consequences would ripple far beyond the region.

They discuss the fragility of Taiwan’s energy grid and telecom networks, the exposure of undersea cables, and the risks of a cyber-first campaign designed to paralyze the island before any kinetic attack begins. Drawing parallels to Ukraine and Israel, they highlight where resilience measures have succeeded and where Taiwan still has gaps to close.

Key topics include:

  • Taiwan’s “super sector” semiconductor industry and its global impact
  • How gray-zone tactics, cyberattacks, and disinformation could destabilize the island
  • Why energy and telecom resilience are essential for survival
  • The urgent need for memory safety and software supply chain security in critical infrastructure
  • What Taiwan’s digital defense—or failure—means for the U.S. and global security

A must-listen for policymakers, cybersecurity professionals, and anyone concerned about the future of global stability.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:01)

Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello Joe.

[Joseph M. Saunders] (00:20)

Hello, Paul. Great to be here today.

[Paul] (00:23)

So our topic is, can Taiwan survive a digital siege? For those who’ve never actually looked at a map of the South China Sea and that region, it’s kind of important to know that Taiwan is just about 20% bigger than Belgium, which is a modestly sized European country that’s quite densely populated, but has more than twice as many people.

And it is also home, of course, to TSMC, the Taiwan Semiconductor Manufacturing Company Limited, which is also used by other semiconductor giants who have their own factories, including Intel and TI in the US, and, if I’m not wrong, STMicroelectronics and companies like NXP, which used to be Philips, in Europe. So it is of massive global importance.

[Joseph M. Saunders] (01:21)

Well, as you say, Taiwan’s economically a very important country, not only for its semiconductor industry, but for all of its electronics and everything that it produces. I think it’s a top-20 country in terms of gross domestic product, top 20 in terms of output per year. And it’s a small island nation, which just happens to sit roughly 100 miles away from mainland China.

[Paul] (01:48)

Yes, which is one of the biggest countries in the world with the second biggest population. Talk about a little bit of a David and Goliath situation.

[Joseph M. Saunders] (01:57)

Bit of a contrast, for sure. And it’s separated from the mainland by the Taiwan Strait. And so there’s all sorts of economic activity going through the shipping ports in the region. There’s the economic output of Taiwan itself. And its position is geographically strategic not only for US interests, but for lots of countries’ interests.

[Paul] (02:18)

It certainly has global economic innovation at its core, doesn’t it? And yet it relies very heavily on imports to keep all that modern stuff ticking over. I believe they still need to use coal for about 40% of their electricity. They use methane for about 42% of it. Almost all of their LNG, that methane, is imported. I believe they have a supply chain that’s about two weeks long.

And that introduces massive challenges all of its own, doesn’t it?

[Joseph M. Saunders] (02:52)

It does introduce challenges. In some kind of blockade, preventing liquefied natural gas from coming into the island is one method to really put pressure on Taiwan. And that certainly would have an effect, for all the reasons you already mentioned around its importance to the global economy. Another angle is concern that at some point in the future, China will attack the island through military action of some sort.

But we also need to consider the cyber risk the island has because of some of these important supply chain questions that you raise, with that risk to energy supply. You can imagine that there needs to be strong infrastructure to ensure that when supplies arrive, they can be distributed. If you think about the energy risk, the semiconductor industry, and the geographic position of Taiwan, there’s a lot at risk here and a lot of reason to protect the island not only from military action and blockades, but also from cyber attack.

[Paul] (03:54)

Yes, I’m just looking in front of me at a list of some of the well-known companies for whom TSMC makes chips. Now I mentioned Intel and TI and NXP and companies like that because they are chip companies that have their own fabrication plants but that also rely on TSMC. But there are lots of so-called fabless companies these days. They basically take their design and say build me 17 trillion of these.

[Joseph M. Saunders] (04:25)

And to put that in perspective, I mean, I think 90% of the world’s advanced semiconductor chips are in fact produced in Taiwan. And so there is this global ecosystem with massive companies. Certainly TSMC has looked for ways to expand its footprint, including a build-out in Arizona. But there are all sorts of logistical issues, and the expertise, the local expertise that exists when manufacturing is managed in Taiwan, may or may not convey to a fab plant in Arizona, for example. So it’s yet to be seen whether it has truly diversified its supply chain. There’s a lot still to come on that story.

 

[Paul] (05:06)

So what would an attack on the actual digital side of an infrastructure like that look like? Obviously a very specific problem for Taiwan, given its island nature and its size and its location. But in truth, a problem for almost any industry in any country of the world. How do we even know that an attack has started?

[Joseph M. Saunders] (05:30)

It’s certainly a digital world. Again, you’ve spelled it out quite well, but I like to think of it this way: if everything were mechanical, you’d have to go around one by one to every traffic light or every pump and do something to it. But in a digital world that’s connected, access is so much more straightforward and so much more wide-scale that things can be disrupted quite easily if not for cyber prevention, or cyber protection in general. The interesting thing about this is that in the US we like to say we have 16, 17 sectors that comprise critical infrastructure. Well, Taiwan has one that the US doesn’t have, and that is technology parks. And technology parks are where these fabrication plants are.

[Paul] (06:14)

The sector that drives and helps the 16 or 17 sectors in the US.

[Joseph M. Saunders] (06:20)

Yeah, so let’s call it a super sector, which is exactly why I brought it up. And so these technology parks in Taiwan need to be made safe. But there are other priorities in Taiwan as well.

[Paul] (06:32)

As you said, having all these digital switches that allow you to manipulate and to fix devices without having to go to each and every one is very much a blessing. But it can turn into a curse if that remote access goes wrong, because it means an enemy who hasn’t even set foot on your territory can reach out from a laptop screen somewhere and do much the same thing if you aren’t careful. So how do you build that carefulness into the system? Where does the money come from? How much does it cost?

[Joseph M. Saunders] (07:06)

In critical infrastructure, there are some very high priority items that are essential for an island nation like Taiwan to stay connected. And some of the high priority areas in Taiwan include all of the industrial control systems in the energy grid, and then also telecommunications and certainly financial services. And if we think about all the cyber attacks that are happening, can you imagine an island without energy, communications, and an ability to make financial payments? It would be devastating. You can see why potential vulnerabilities in cyber could be such a massive risk to the country itself.

[Paul] (07:48)

My understanding is that Taiwan has recently increased its military defence budget. How much of that money should be going towards cyber resilience rather than, say, towards another aircraft carrier? As we discussed when we talked to Sparky Braun a few episodes ago, in South Korea they’ve decided: you know what, you look at the history of the USS Nimitz and the USS Gerald R. Ford and you think, wow, those are really important vessels, but you know what?

We’re not going to do that anymore. We’re going to go for autonomous vessels. We’re going to go for the drone type approach. So we’re more vigorous, more resilient, more adaptable. So how does cyber resilience come out of a military budget, if that’s the place for it? And how do you make sure that the right amount of money does get spent?

[Joseph M. Saunders] (08:37)

Well, certainly Taiwan has its own Ministry of Digital Affairs and a Ministry of Defense. And as you know, I was part of a panel that included retired Admiral Chen, who’s a current legislator in the Taiwanese government, and he is actually responsible for the defense budget. Having spoken to him just last week, Paul, he mentioned how he agrees that protecting critical infrastructure is an extension of national defense and national security, and that Taiwan does need to do more because of what are called gray-zone tactics of attacking, say, through cyber means.

It may not be a full-blown kinetic attack, but it might be something that can be very disruptive. Admiral Chen is leading the charge on that budget increase. It went from just over 2% to 3.2% this next year. And certainly they’re going to spend a good portion of that on traditional military needs. It could be training, weapons programs, or other forms of technology. But a portion of that, as you say, will likely go towards cyber tools of different types. Those could be cyber offensive tools, cyber defensive tools, or open source threat intelligence tools.

And I imagine that at least 300 million US dollars a year could go towards those kinds of software tools, which could help Taiwan achieve an asymmetric shift in its cyber posture, especially if you consider what’s at stake given a threat like China, which has probably the largest cyber army in the world, by some counts 50 times bigger than even the US’s cyber army. What I think we’re looking at is that Taiwan could benefit from the right investment in technology, to create an asymmetric shift that makes it very difficult for that massive cyber army to attack.

[Paul] (10:36)

Yes, I watched the video of that panel session of which you were a part. And you expect to hear an Admiral waxing lyrical about naval stuff. And in fact, he included in the things that Taiwan, or indeed any other country needs to defend against potential attacks, whether they’re in time of war or in peace, he mentioned, and I wrote them down here, the power grid, gadgets in general.

So that could include even things that people have around their homes that you rely upon to make society work, medical systems, and more. So it’s very much thinking beyond “let’s build another aircraft carrier”. We really are in a different era, aren’t we?

[Joseph M. Saunders] (11:20)

We are in a different era. And just to go back to an earlier theme, the connectivity of all those systems is high: the connectivity of the medical systems and the health care systems and the medical devices themselves. You would be surprised if you looked at how much of infrastructure is dependent on the telecommunications infrastructure, and how much is dependent on the energy infrastructure. And so there are some key priority areas, telecom and energy being very high among them.

[Paul] (11:51)

Yes, and they hunt in pairs, don’t they? Because the energy grid relies upon a strong telecommunications network in order to let all the parts of the grid know what they’re doing so it can be well balanced. And of course, the telecoms network relies on a regular and reliable supply of electricity to function at all.

[Joseph M. Saunders] (12:10)

And so those two in particular are vital for Taiwan to withstand any kind of cyber siege. That’s actually where part of my recommendation lies: really bolster telecom equipment and telecom networking, as well as aspects of the energy sector itself.

[Paul] (12:27)

So if you had, let’s say, $300 million to spend, you’ve said that a key focus would be on telecoms and the energy grid. As the Admiral himself said, there’s a lot more to it. There are the gadgets, there are the medical systems, there are all the other things. You mentioned traffic lights. So that’s where you’d spend it, but what would you spend it on? Of course, this applies whether you’re in Taiwan, or in a Pacific island that’s even less well connected, like Vanuatu or somewhere like that, or even in the continental United States of America.

[Joseph M. Saunders] (13:02)

Yeah, I think it applies everywhere. And for Taiwan specifically, my view is there are a couple of areas. There’s AI-enabled open source threat intelligence, which I think is necessary in part to make sure all the necessary threat intel is collected, but also to help ensure there is a sharing of information with partners and allies and the like.

[Paul] (13:25)

So by that you mean that you may use what you might even call old-fashioned techniques to collect information, but you have to get an edge in picking out the stuff that really matters.

[Joseph M. Saunders] (13:36)

Yeah, exactly right. You need that as early warning to help prioritize where you ought to be looking. So I think that’s one key aspect.

[Paul] (13:44)

What about software development in general, then, Joe? We’ve got all these embedded devices, some of them have been around for 5, 10, 15, 20 years. We’re going to be building new ones, we’re going to be trying to fix the old ones. How do we avoid making some of the mistakes that we made in the past?

[Joseph M. Saunders] (14:02)

Yeah, exactly. And I think that’s where the second level of recommendation comes in. And that is to harden the software that goes on these devices that get deployed across critical infrastructure, working within the software development process to add security protections, so that these systems are protected even when a patch to resolve a vulnerability isn’t available at some point in the future. These systems should still remain resilient.

Memory-based protection is essential, and it also gives a very significant asymmetric advantage: if you add hardening to these devices, then even if an attacker knows how to compromise a single device, they can’t build a reliable exploit that works across devices. So I think protecting the firmware, the software, the application layer, and the operating system on these devices, so that they cannot be exploited in the first place, would go a long way, and would free up resources to be used in other areas as well.

 

[Paul] (15:05)

Now on systems like Windows and macOS and Linux, we sort of take some aspects of that for granted because of a thing called ASLR, address space layout randomization, where when a program loads (or, more precisely, on Windows, only every time your computer reboots) the deck gets reshuffled, so that programs don’t load in exactly predictable areas of memory like they used to in the Windows XP days.

On Windows we have that protection, but it is a little bit limited, because we still get plenty of attacks despite ASLR. And as I said, on Windows it’s not every time a program runs, it’s only every time you reboot your system. But there’s even more of a problem with that on embedded devices, isn’t there? Because you’re not looking at a laptop with 16 gig of memory and a virtual memory system that lets you run massive programs just fine. You might be looking at a device that was designed to fit in something the size of a matchbox, to run on a 3-volt battery, and to last for 20 years, with 128 kilobytes of memory. So you have to work smarter as well as harder, don’t you?

[Joseph M. Saunders] (16:17)

Those pesky little boxes that are both rugged and able to survive on low power with low compute happen to be very, very reliable. But as you say, they don’t have the luxury of being weighed down with extra software. And so you do need to find ways, much like ASLR, to disrupt the attacker’s ability to identify the areas they could compromise. The difference with what we’ve tried to do at RunSafe is that ASLR, not to get too technical here, can be defeated with a single information leak.

[Paul] (16:54)

And that could be something as innocent as a log file entry that just happens to record a memory address because the programmer thought it might be useful. They gave away the keys to the castle.
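
[Illustration: the kind of “innocent” leak just described can be as small as one line of C. The sketch below is ours, with invented names, not code from any real product. Under coarse-grained ASLR, every function keeps a fixed offset from its module’s load base, so leaking one address gives away them all.]

#include <stdio.h>

static void handle_request(void)
{
    /* ... application logic would go here ... */
}

int main(void)
{
    /* A developer logs a function pointer "for debugging".
     * Coarse ASLR slides the whole module as one block, so an
     * attacker who reads this log line can subtract the function's
     * known link-time offset to recover the load base, and from
     * that the address of every other function and gadget in
     * the module. */
    printf("debug: request handler loaded at %p\n",
           (void *)&handle_request);

    handle_request();
    return 0;
}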

[Joseph M. Saunders] (17:06)

The whole kit and caboodle, exactly right. And so I do think it’d be economically feasible for Taiwan to deploy, across all its devices and all its systems, what I would call Load-time Function Randomization, which can’t easily be defeated even if there is a single information leak, doesn’t require new hardware or upgrades, and does work in low-power, highly constrained compute environments. That would be a significant portion of my recommendation, should anybody ask. So along with the cyber offensive tools, and software and device hardening to prevent exploitation, there are other areas further into the cybersphere that they can pursue from there as well.

[Paul] (17:51)

The obvious one that springs to mind because we sort of touched on the concept earlier when I spoke about the fact that Taiwan, I believe, generates close to half its electrical energy from methane and it has about a two week supply. So you should be hearing the words supply chain concerns in that statement. Now, it’s a different sort of thing in software, isn’t it? You’re not worried that somebody might cut you off from your software supply chain.

It’s almost the opposite these days, isn’t it? You’ve got this abundance of choice, which raises the question: what if you pick something in your software supply chain that later gets poisoned, either by accident or, as we’re increasingly seeing, by design? Possibly by attackers who aren’t individuals or money-motivated cyber criminals, but are state-sponsored attackers who may spend months, or in some cases even years, worming their way, pun intended, into a position of trust in the open-source community so they can, figuratively at least, drop a bombshell on the world by sneaking something in that shouldn’t be there.

[Joseph M. Saunders] (19:05)

Yes. And for a country like Taiwan, which does see critical infrastructure as an extension of national security, looking for ways to ensure there’s rigor behind software supply chain security, I think the government itself could ask everybody to provide a solid review of the software supply chain. And that would include generating the Software Bill of Materials, analyzing the vulnerabilities, understanding the risk associated with potential zero days that could compromise systems in the future, and really enforcing that, to ensure that everybody has a complete and transparent view of what the risk looks like.

Let’s face it, when it’s a country that could potentially be under siege by an adversary looking to change the course of history, you can’t really afford to wait and find out if there’s going to be a compromise or an attack or some kind of disruption in service. You need to be as prepared as possible. And I think one way to do that is to analyze the risk in the supply chain as much as defending the software and preventing exploitation.

[Paul] (20:12)

So in a situation like this, where the output of Taiwanese semiconductor factories is of critical importance to the economy of the US, what do you think the US should be doing when it comes to something like cyber coordination, intelligence sharing and software development technologies that make a secure digital infrastructure possible?

[Joseph M. Saunders] (20:38)

Yeah, I think US companies offer a huge advantage in many of the cyber technologies. And so I do think that there is a really strong potential partnership for Taiwan to engage US companies. If it’s going to increase its defense budget and buy some of that technology, I think that’s one way. I also know, more specifically to your question, there is the Defense Security Cooperation Agency inside the Department of Defense.

And I believe that when it comes to cyber resilience, that organization could provide a transfer of capabilities and technology to other countries who want to ensure that their critical infrastructure remains protected. So through the Department of Defense’s DSCA, I think some of these countries can make requests to secure cyber technology and methods and training, to ensure that a country like Taiwan is prepared for a full-on cyber attack.

[Paul] (21:38)

So I mean it’s not just enough to say, let’s put aside $300 million to spend on this. You have to spend it in a way that will deliver measurable returns quickly. And that kind of quick return is particularly difficult in the embedded market, isn’t it? If you have a web app, hey, well, we’ll just update it tomorrow. Heck, let’s do it this afternoon. But you don’t have that luxury with the embedded market, whether it’s military equipment or things like pump rooms, power stations, and telecommunications kit.

[Joseph M. Saunders] (22:17)

Yeah, I think given the nature of the threat and the size of the cyber army in China, Taiwan does need a form of asymmetric shift in its cyber defense. And when that comes to embedded systems and critical infrastructure itself, there are technologies and techniques to create a game-changing shift. And I think that’s part of what might be appropriate in this case, given the substantial risk that the cyber siege does represent to Taiwan.

[Paul] (22:46)

We’ve already talked about how to actually know what form the digital side of the threat is taking right now. And you’ve spoken about how you can increase your ability to know what bad actors are doing and how you can share that information. But what about exactly the same sort of threat to other places in the world? Either because they’re allies of Taiwan or simply because, hey, what worked once might work elsewhere.

Could this same approach be used against the US, the UK, or European countries, or any number of African and South American countries? Not to mention places like South Korea, Japan, the Philippines.

[Joseph M. Saunders] (23:29)

Yeah, and I think about places like San Diego and Norfolk, even places with large maritime presence and ports.

[Paul] (23:37)

For our overseas listeners, particularly for our British listeners, that’s Norfolk, Virginia, not Norfolk on the eastern coast of England.

[Joseph M. Saunders] (23:46)

Yes, Norfolk, Virginia and Southern Virginia. So yeah, San Diego, Norfolk, Houston, these are vitally important ports. It really does affect seaports around the world in the same way that it could affect ports in an island nation like Taiwan. And what’s funny is I often joke about wanting to take a trip across Eastern Europe. And I particularly want to stop in Poland and talk to folks about the cyber attacks they experience because it’s been well known and documented that Russia tests its cyber technology in Poland before it does campaigns around the world. 

You know, I’m sure there are plenty of areas where China is testing certain kinds of attacks and certainly meddling with infrastructure inside Taiwan as well. Your point is very well taken that Taiwan could be under siege for geopolitical reasons, for economic reasons, for competition reasons.

But I also think that the lessons there or lessons from other countries could be applied anywhere in the United States or any other country around the world.

[Paul] (24:55)

And we see that in miniature with ransomware cyber criminals, don’t we? They choose a company to attack because it happens to be at the top of their list. If they get in, next thing they want a million dollars in blackmail money. And after they’ve succeeded at place A, then they will attack place B and place C and place D as well. Because that gives them more power, it makes them more feared, and, let’s face it, it makes them more money. So why would it be any different in the field of international influence, industrial espionage, and, I don’t know what the right term is, power projection? Is that what you call it?

 

[Joseph M. Saunders] (25:32)

Call it force projection, or power projection.

[Paul] (25:35)

So there are a lot of things we can do, Joe, but if you had one particular takeaway that you wanted to offer to policymakers and cybersecurity listeners, what should they be thinking about now? Where should they start?

[Joseph M. Saunders] (25:51)

Well, I think from a strategic view of risk, for places like Taiwan and really any other country, one of the most important things to realize is that cyber tactics are a part of modern warfare. And a part of modern warfare includes gray-zone tactics that might poke people, but not provoke a full-on attack.

[Paul] (26:16)

So grey zone, that’s a term that sort of means you’re putting lots of pressure on someone, but you haven’t done anything that anybody could point a finger at and say: that’s an act of war. So you’re swinging your fist, but you’re stopping it just in front of the person’s nose.

[Joseph M. Saunders] (26:30)

Exactly.

Or you might land a couple of body blows, but you don’t punch them in the face. The point is cyber is part of modern warfare, and gray-zone tactics are a part of modern warfare. With that said, I think the biggest takeaway is that protecting critical infrastructure is one step, and certainly protecting the software deployed across critical infrastructure is an essential step. And I say that because we don’t want some of these cyber attacks to go any further, we don’t want them to succeed, because at some point down the road those may be considered acts of war. And with that, we don’t want escalation when we could be preventing it.

So I think protecting software across critical infrastructure is an essential step. And when I look at what happened in Ukraine, I think there’s kind of a related topic. If you see where the kinetic attacks were, they were preceded by cyber attacks in the same area. There’s no doubt that cyber and kinetic warfare tactics are intertwined and part of the future of warfare. We have to be ready. We have to be resilient. We have to defend our infrastructure. And we need to maintain communications, energy, payment networks, and a well-functioning government in order to ensure that we have something to continue to fight for.

[Paul] (27:57)

And when it comes to topics like industrial espionage, I guess you have to remember that if someone is getting right in your face, if they are swinging their fist and stopping it a centimetre from your nose, and you get away without getting thumped, you still have to be careful that while they’re doing that, they haven’t slipped their hand into your jacket and made off with your wallet and your mobile phone at the same time. Particularly when you’re a country like Taiwan, and some of the stuff that could be purloined relates to semiconductor secrets for all of the laundry list of global companies that I mentioned earlier. It all matters a lot, and as you mentioned, Joe, all of the components are interconnected. Your telecommunications grid won’t work without a good electricity supply, and vice versa.

[Joseph M. Saunders] (28:49)

Yeah. And all of these areas need defense in depth. I think part of that is cyber hardening, and part of that is redundancy and other tactics to ensure that you have good infrastructure in place that’s resilient. But there’s no doubt that the cyber siege is real and that cyber protection is needed.

[Paul] (29:09)

And therefore cybersecurity very much is a value to be sought and cherished and not merely a cost to be itemized and minimized. Well Joe, that’s heady stuff I must admit. Thanks to everybody who tuned in and listened. Thanks especially to Joe for his very very pertinent and thoughtful insights. If you find this podcast insightful please don’t forget to subscribe so you know when each new episode drops. 

Please like and share us on social media as well, and don’t forget to share us with all of your team so they can benefit from Joe’s wisdom as well. Once again, thanks to everybody who tuned in and listened. That is a wrap for this episode of Exploited: The Cyber Truth. Remember, stay ahead of the threat. See you next time!

 

The post Can Taiwan Survive a Digital Siege? appeared first on RunSafe Security.

]]>
Build-Time Protections vs. Post-Production Panic https://runsafesecurity.com/podcast/build-time-vs-post-production/ Thu, 11 Sep 2025 14:21:25 +0000 https://runsafesecurity.com/?post_type=podcast&p=254858 The post Build-Time Protections vs. Post-Production Panic appeared first on RunSafe Security.

]]>
 

In this episode of Exploited: The Cyber Truth, host Paul Ducklin and RunSafe Security CEO Joe Saunders explore a critical question: should we keep chasing patches or stop attackers before code ships?

Joe draws on decades of experience in cybersecurity and national security to show how build-time protections—like automated memory safety, Software Bills of Materials (SBOMs), and code-hardening—shift the balance in favor of defenders. From aerospace to energy grids, patching isn’t always an option, and waiting on post-production fixes can leave life-critical systems exposed.

Listeners will learn how proactive defense strategies:

  • Eliminate the “whack-a-mole” patching cycle
  • Reduce the costs and risks of delayed software updates
  • Improve resilience for embedded and operational technology (OT) systems

Tune in for a clear-eyed discussion on what it really means to build secure software and why patching after the fact is no longer enough.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:03)

Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello there, Joe.

[Joe] (00:19)

Greetings, Paul. How are you?

[Paul] (00:21)

I am very well, thank you, and I’m particularly looking forward to this episode. Now, I think both of us tend to shy away from doing a podcast that’s a sales spiel. In this case, there’s no reason why we should avoid talking about RunSafe’s products and services because they’re the background to all of this. So let me throw you in at the deep end. Our title is Build Time Protection versus Post Production Panic.

With everyday operating systems, say Windows 11 at home, there is a bit of a post-production panic on the second Tuesday of every month, when you get the notification that updates are available, and then you sit back and think, golly, will my computer reboot? Will it start up again? Will my scanner still work afterwards? Now in the embedded software market, things like industrial control systems and operational technology, OT, you don’t always have that luxury of doing monthly patches even if you want to, do you? So you have to find another approach.

[Joe] (01:24)

You do have to find another approach, and in the embedded software market, oftentimes that kind of software or that kind of code is embedded in cyber-physical systems deployed across critical infrastructure. And I think we all want infrastructure to be available, so that the services we have come to rely on in a well-functioning society still operate. Part of that is the software updates that keep software secure and up to date, but you don’t want these systems to be unavailable, and oftentimes they’re hard to reach and hard to update.

What that means is we need greater and greater resilience in the software that we build and deploy in the first place, so that we’re not chasing vulnerabilities after the fact, because as we know, that’s what ultimately can put infrastructure at risk.

[Paul] (02:14)

And indeed, the very name embedded software kind of says it all, doesn’t it? On a regular Windows laptop, there are probably 10 different operating systems you could choose to use. And then on top of those, there are dozens, hundreds, thousands of different software apps you can have or not have. In an embedded system, the hardware and the software come almost as a sealed unit, don’t they? Because often they’re designed to do exactly one thing perfectly, over and over again, possibly for decades. So they don’t have to be a spreadsheet today and a word processor tomorrow, but they do have to operate that valve exactly correctly, to specification, for years and years and years, come hell or high water.

[Joe] (03:00)

Come hell or high water, exactly. So what that means is the software needs to be reliable. It needs to be secure. It needs to be safe. And there’s a lot of discipline that goes into making software trustworthy in that sense. If you look at the aviation industry, flight control software and airworthiness are a huge bar to clear in software development. Because let’s face it, if you’re building a car, or you’re building an airplane, or you’re building something of this nature, you want to ensure that safety is realized.

And so that’s why there are pretty robust milestones that software developers seek to meet in order to ensure that their software remains safe and reliable. And part of that is determinism, knowing that everything that’s going to happen is going to happen as you say, in the fashion you expect it and in the time horizons and specifications that you anticipate it.

[Paul] (03:53)

Yes, and people forget that, don’t they, as we’ve said before, when they hear the term real-time software. They imagine that’s all about speed and performance and frames per second and no lag when you’re listening to music or watching a video online. But it’s a different kind of performance. A computer might by design be very slow and use very little power, but to fulfil its remit, it must perform specific tasks within specific hard limits every time. Even if those limits are a minute or an hour or a day, it can’t be a day and a second. And therefore every time you patch it, and every time you try to plaster over any cracks in it and change the software even a bit, you run the risk that it will no longer comply with the requirements that were there at the beginning.

[Joe] (04:43)

100%. You look at the software development practices involved for some of these mission-critical applications or systems or devices deployed in critical infrastructure. There’s a reason you have a robust methodology: to ensure that there’s determinism, that there’s reliability, that there’s a ruggedness to the software itself, so that there is a high expectation of little or no downtime, and a high expectation of few or no bugs being found that could put the system in jeopardy.

And so there’s a lot of testing. There are a lot of processes involved, and lo and behold, if you issue an update, well, you still have to go through all that same testing anyway. What that means is there are probably fewer updates. It’s a highly fragmented market, and it’s fragmented in the sense that there are many different use cases. We have safety of flight. We have autonomous driving.

We also have industrial automation and manufacturing plants that are…

[Paul] (05:27)

Yes.

[Joe] (05:42)

moving around robots and blades and different things that also have safety requirements.

[Paul] (05:48)

Yes.

That’s a little different than, oh dear, my Zoom call crashed and I had to join again. Yes. The sword-waving robot went nuts. That’s a very different proposition, isn’t it?

[Joe] (05:59)

Yeah, a sentient forklift is a scary thought. 

[Paul] (06:03)

Yes, I was interested, just as an aside, to see that there’s recently been a lot of publicity about someone who hacked a product called BellaBot, which is a very, very basic food delivery robot that comes out of China and basically just brings your plates of food to your table. And they figured out, hey, I could eat your meal, I can have it redirected to me. And you think: if that’s a risk with something as simple as a wheeled robot that isn’t sentient or AI-driven in any way…

How much more concerned do we have to be about safety of flight, safety of driving? One rotten apple really would spoil the barrel.

[Joe] (06:42)

Yeah. And since we’re on the topic of artificial intelligence and robotics and things like that, I do want to bring up a pressing point here in the States. The US government published a policy statement, America’s AI Action Plan, and it really has three pillars. One of them gets to the security and safety that we’re talking about in the software development process. The first pillar is that the US needs to win the large language model race, the AI race itself.

That’s pillar one. Pillar two, though, gets to what we’re talking about: the infrastructure, the energy grid, the data centers, and how those need to be reliable and secure. Because, as they say, there really is no chance of AI dominance in the world if you don’t have energy and data center resilience. Even in those industries, you need to have very reliable systems, like cooling systems and other things, that help the data infrastructure run and the energy get delivered.

[Paul] (07:41)

The idea that we’ve come to accept, at least for things like laptops and servers, is that, well, if something goes wrong, we can just wrap another layer of runtime protection around it. We can stick in some antivirus. We can stick in some threat protection. We can stick in some address space layout randomization, and all of this sort of stuff. You don’t quite get that luxury with embedded systems, do you? Because every time you wrap them in another layer of cotton wool, you’re taking up more memory.

You’re taking up more time. You can’t go, well, we’ll just add another 32 gig of RAM, another 64 gig of RAM, when you might be talking about systems that only have 64 kilobytes of RAM because they were built to be tiny, to be low power, and they were built 25 years ago to last indefinitely.

[Joe] (08:28)

Exactly. I think the cost of patching, the burden, the economic hurdle in these industrial use cases, is high. And what that means is we want to put more and more at the front of the equation, at build time. It’s the whole analogy of playing whack-a-mole: a lot of the time we just find a vulnerability, patch it, and issue an update. And you don’t quite have that luxury, for the reasons we’re discussing. Absolutely, there is a better way. There’s a way to build security into the process in the first place.

And it isn’t strictly limited to having the most sophisticated, advanced software development processes. You also need to supplement those with good techniques that can help prevent exploitation even when a patch is not available; that, I think, is ultimately part of the point. And that really gets to how you do things at build time.

[Paul] (09:22)

So Joe, there’s a sort of trinity of parts to the RunSafe security platform. If I can mention products by name, I don’t see any reason why I shouldn’t. And I’ll go for them in reverse order if I may. There’s RunSafe Monitor, which I hope we’ll have time to get onto later, but that’s more about watching software while it’s running. Before that, there’s RunSafe Protect, which aims to build protection into the software without wrapping it in so many layers of extra stuff that it performs differently. And even before that, there’s RunSafe Identify, which makes sure that you are building the right stuff to start with. Let’s start at the beginning with RunSafe Identify, which is a kind of super special version of Software Bill of Materials control, isn’t it?

[Joe] (10:12)

It is. What we try to do at RunSafe is identify risk, prevent exploitation to protect code, and then, as you say, monitor software. If you think about identifying risk, you need to take a holistic view when you’re looking at your Software Bill of Materials in these embedded systems. If you don’t know everything that’s in your software, and you can’t anticipate the extent of the vulnerabilities before you ship, then you’re going to have a hard time proving that your software is reliable in the first place.

So it starts with identifying and enumerating all the precise ingredients. Although it sounds simple, maybe, on the surface, what happens in these embedded systems is that when you compile source code into software binaries, you are pulling in different packages that might have certain kinds of dependencies in them that you weren’t even aware of, and those get pulled into your final finished product. A build-time Software Bill of Materials sees all of that material going into your binary, which is why build time is the best time to create one.
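
[Illustration: one way to capture the “ground truth” of a link, sketched by us in C for a GNU toolchain; it is not a description of RunSafe Identify’s internals. If you build with gcc … -Wl,-Map=firmware.map, the map file records every object file and archive member the linker actually pulled in, which is exactly the raw material a build-time SBOM starts from.]

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s firmware.map\n", argv[0]);
        return 1;
    }

    FILE *map = fopen(argv[1], "r");
    if (map == NULL) {
        perror("fopen");
        return 1;
    }

    char line[4096];
    while (fgets(line, sizeof line, map) != NULL) {
        /* GNU ld lists each linked-in input as "LOAD <path>",
         * including archive members you never named yourself. */
        if (strncmp(line, "LOAD ", 5) == 0) {
            printf("component: %s", line + 5); /* line keeps its newline */
        }
    }

    fclose(map);
    return 0;
}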

[Paul] (11:16)

You have some fascinating statistics that you’ve shared with us before, maybe you want to do it again, about the number of ingredients that you get in a typical modern software recipe. We’re talking about not just tens or hundreds, but maybe thousands of different ingredients, and each of those ingredients could have its own chain of ingredients that it just brings in without you even realizing.

[Joe] (11:41)

Exactly right. There’s things that the compiler will pull in. There’s things based on the operating system settings that might change the output of what goes into your overall package that you’re shipping.

[Paul] (11:52)

Absolutely. Anyone who’s a programmer will know that sinking feeling when you look at a C source file and you see #ifdef this, #else, #ifdef that, #ifdef the other, and you think, oh dear: you’ve got an old ARM processor that doesn’t have floating point, so the build decides, “I know, I’ll use this whole new library,” one that you never even knew was there before. That can be quite an unpleasant surprise when it comes to post-production, hence potential pain.
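
[Illustration: a minimal, self-contained C example of that #ifdef trap. The HAVE_LIBM flag is invented for this sketch: flip one -D option and the shipped binary contains a completely different implementation, with its own accuracy limits and its own bugs, even though the source tree never changed.]

#include <stdio.h>

#ifdef HAVE_LIBM
/* "Normal" build: rely on the maths library (link with -lm). */
#include <math.h>
static double root(double x)
{
    return sqrt(x);
}
#else
/* Constrained build, say an old core with no maths library:
 * fall back to a hand-rolled Newton iteration. A different
 * component is now baked into the binary, and a binary-only
 * SBOM scan may never notice. */
static double root(double x)
{
    double guess = x > 1.0 ? x : 1.0;
    for (int i = 0; i < 30; i++) {
        guess = 0.5 * (guess + x / guess);  /* Newton's method */
    }
    return guess;
}
#endif

int main(void)
{
    printf("root(2) = %f\n", root(2.0));
    return 0;
}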

[Joe] (12:19)

100% right. And with that said, there are other folks who will derive Software Bills of Materials from the binary. That’s really trying to taste the food after it’s been cooked, tasting the cake and trying to derive exactly, precisely, all the ingredients that went into it. And, you know, there are some good taste testers, I think, that can identify a good number of them. But in the software world, you get about 80% right.

[Paul] (12:41)

So does that work by looking for things like strings that might contain a version number? You find “OpenSSL 3.5.1”, and you kind of guess that that’s probably the version that was compiled in.

[Joe] (12:54)

Yeah, exactly. And you might ultimately then rely on past experience, or on other code and heuristics that suggest, well, if I do see these components, then I likely have this other package. And in the end, what you end up with is a Software Bill of Materials, but one that probably has a lot of false positives.
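
[Illustration: a deliberately crude C sketch, ours, of that string-scanning heuristic, showing why binary-derived SBOMs suffer both kinds of error. It hunts for printable runs shaped like “Name digits.digits.digits”; anything matching by accident is a false positive, and any component that embeds no such string is a false negative. Real scanners are far more elaborate, but the fundamental guesswork is the same.]

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    char run[256];
    size_t n = 0;
    int c;

    while ((c = fgetc(f)) != EOF) {
        if (isprint(c) && n < sizeof run - 1) {
            run[n++] = (char)c;      /* accumulate a printable run */
        } else {
            run[n] = '\0';
            char name[64];
            int major, minor, patch;
            /* Crude guess at a component: a word, a space, then d.d.d */
            if (n >= 5 && sscanf(run, "%63s %d.%d.%d",
                                 name, &major, &minor, &patch) == 4) {
                printf("maybe component: %s %d.%d.%d\n",
                       name, major, minor, patch);
            }
            n = 0;
        }
    }

    fclose(f);
    return 0;
}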

[Paul] (13:11)

and a lot of false negatives. If you don’t even know what to look for in the first place, you’re never going to find it.

[Joe] (13:14)

Exactly.

And so this is why I think a lot of people in the industry will say the best Software Bill of Materials gets created as you’re building the binary itself, at least in these embedded systems. When you stitch together those object files into a binary, you want to have a robust, complete Software Bill of Materials that says, yes, this is exactly what went into my binary. Why? Because with all those false positives and false negatives that you highlight, the problem just compounds, because what you want to do next is associate the vulnerabilities with the components that you actually have in your binary and end product.

If you don’t have the bill of materials right, you’re relying on imperfect information to find vulnerabilities, and you won’t find the whole story in a complex software supply chain where you have open source software and you have third-party developers.

You have contractors, you have your in-house developers. And as you said, as a lead-up to this whole discussion, you end up having hundreds or thousands of components for a small little piece of firmware that gets shipped. You might have thousands of components, and 80% of them are coming from the outside. So how do you identify, then, where your risk is? How do you build stable code? How do you ship reliable software that’s deterministic and meets those performance metrics? Part of the discipline is knowing exactly what goes in. And as we have said a couple of times on this podcast, Paul, including today, it’s hard to know unless you really capture that data at the ground-truth moment, as you’re implementing the software itself.

[Paul] (15:02)

It’s easy to make an honest mistake when you’ve written a #include of some file name and you kind of assume that the file you’re going to pull in is a library that you’ve been using for years. But today, because something else changed in the system, or some other developer decided to upgrade something, or some automatic security fix happened, suddenly what you’re building into the software may not be exactly what you expect. And that’s the goal of RunSafe Identify, isn’t it?

[Joe] (15:32)

Yeah, that’s part of it. It’s to identify the exact components that are in your software, so that you can link all those individual components to their vulnerabilities, and understand what kind of risk, what kind of set of vulnerabilities, you are looking at. And then devise a way to go about ensuring that you build a very reliable set of software.

In addition to enumerating what’s in there, there is an opportunity to add in security, but there’s also an opportunity to enforce good discipline, good policy, corporate governance in the build process as well, for exactly the reasons you describe. You want to be able to enumerate exactly what’s in that software, and then use that to help identify risk and enforce policy, whether that’s license checks for open source license violations, or vulnerabilities that need to be resolved or mitigated or somehow addressed to prevent the exploitation thereof.

[Paul] (16:27)

So Joe, let’s move forward to the next stage in the equation, which is RunSafe Protect. Now, technically, I suppose you could say that’s runtime protection, because it guards against things like potential vulnerabilities, potential exploits, and software misbehaviors at runtime. But it is not injected into the software at runtime the way that antivirus, process monitoring, memory spying, and all those things we’re used to in laptop EDR software are. Do you want to say something about how that works, and why?

[Joe] (17:05)

Absolutely. So what we do at RunSafe is we monitor what gets built by collecting some metadata about all the functions that are created and used in the software binary that gets produced.

[Paul] (17:21)

And even for a tiny, tiny program, say a valve actuator, that might be hundreds or even thousands of functions even in a modestly sized C program that compiles to a few tens or hundreds of kilobytes at the most.

[Joe] (17:35)

I think the average C or C++ application will have 217 functions. Many have a lot more and some have fewer, but the average, I believe, is 217.

[Paul] (17:48)

And that’s before it calls all the other components that went in there that each have their 217 functions.

[Joe] (17:54)

Exactly. It compounds, as you say. And so what we do is we identify all those individual functions at build time so that our process can then relocate where those functions get loaded uniquely every time the software loads out in the field. And the benefit of that is to then provide runtime protection by preventing exploitation of weaknesses or vulnerabilities that could in fact still exist despite your testing.

And despite all your best efforts, we’re preventing exploitation at runtime as you say, but the key is for those that are fans of the compilation process and the linking process, we intercept the linker process. We measure and collect information about all those functions so that at load time, when the software loads on a device out in the field, we can relocate uniquely only those functions and not the data associated inside that binary itself.

When you have fine-grained randomization at the function level, like what RunSafe does, even if you find one card in the deck, you still don’t know the order of all the other cards in that deck. And so what that means is you are in fact denying the attacker the determinism they need for their exploit to work in the first place. And that’s what’s kind of so beautiful about this.

I like to say the software world is built on determinism. If you produce one copy of software or firmware or what have you, and you stamp that out a million times, then prior to RunSafe, prior to address space layout randomization, identical memory would exist in all one million copies. The functions would all load in the exact same spot. With address space layout randomization, all you’re doing is shifting the memory by a little bit, and then everything else remains in the same order. With RunSafe, all the functions are relocated uniquely every time the software loads. Even though the software remains functionally identical, the determinism, to the attacker, is broken. It’s logically unique. So the exploit that works in the lab no longer works on the device out in the field. You’ve denied the attacker the determinism they need for their exploit to work.
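
[Illustration: a small, self-contained C demo of the determinism Joe is describing; it is ours and purely conceptual, not RunSafe’s implementation. Build it as a position-independent executable on Linux and run it twice: under ASLR the absolute addresses change, but the distances printed in parentheses never do. Find one card and you know where the rest are; load-time function randomization breaks exactly that property by reordering the functions themselves every time the binary loads.]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void alpha(void)   { puts("alpha");   }
static void bravo(void)   { puts("bravo");   }
static void charlie(void) { puts("charlie"); }

int main(void)
{
    uintptr_t a = (uintptr_t)&alpha;
    uintptr_t b = (uintptr_t)&bravo;
    uintptr_t c = (uintptr_t)&charlie;

    /* The base address moves from run to run under ASLR, but the
     * inter-function offsets are fixed at link time, so one leaked
     * address reveals the whole layout. */
    printf("alpha   at %p\n", (void *)a);
    printf("bravo   at %p (alpha %+td)\n", (void *)b, (ptrdiff_t)(b - a));
    printf("charlie at %p (alpha %+td)\n", (void *)c, (ptrdiff_t)(c - a));

    return 0;
}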

[Paul] (20:15)

And this reshuffling of the deck but with the same 52 cards in it will happen every time that you actually load that particular binary.

[Joe] (20:24)

Yeah, every time the software loads and the beauty is we’re not adding more cards. We’re not taking 52 cards and making it 78 cards.

[Paul] (20:33)

And I think what a lot of people forget about ASLR on something like, say, Windows, as useful as it is and as necessary as it is, is that once you’ve loaded a program once, say Notepad, or more importantly a DLL that every program in the system uses, like kernel32.dll, then unless and until you reboot the system, once you know where it loads today, that’s where it will load tomorrow, and a week later, and a month later, if you haven’t rebooted. People think, when I exit my browser I flush all the cookies, that’s a safety thing, so presumably when I load it again, I’ll also get new ASLR. Well, generally speaking, you won’t.

[Joe] (21:15)

Going back to sort of the fundamental principle that we had when we started RunSafe, it was to think about an asymmetric way to change cyber defense for critical infrastructure and for the embedded software that gets deployed there. And so think about it this way: what if you could make a simple change in your build process that doesn’t cost you build time, that is reliable in the sense that you’re not changing the functionality of the software and you’re not adding new software onto the device, and that prevents an entire class of attacks, an entire class of vulnerabilities, representing the lion’s share, maybe 70%, of the vulnerabilities in critical infrastructure, in compiled code, in these embedded systems? If you can do that, then you are fundamentally shifting the economic equation of cyber defense. That is the principle on which RunSafe was founded.

We wanted an asymmetric shift in cyber defense, denying attackers the determinism they need to exploit your devices so that your long lasting assets that get shipped out into infrastructure can last five, 10, 15, 20 years and be immune from a cyber attack even when a patch is not available.

[Paul] (22:30)

And Joe, these RunSafe products, Identify and Protect, they don’t require a huge culture or technology shift inside your development team or your organization, do they? It’s not like you’re asking people to rewrite their software, go out and learn a whole new language, or throw out the compiler that’s the only one they’ve got for this particular device and try to knit another one. So it is comparatively easy to integrate this into your continuous integration, continuous delivery process, if you already have one, compared to saying: right, no more C, everything’s in Rust, even though we don’t have a Rust compiler for all the embedded devices that we support.

[Joe] (23:16)

Exactly right. I think if you can make an incremental change to your build process and have an asymmetric change on your cyber defense and all the while have a more complete understanding of all the individual components that go into your software, you have made a dramatic shift in your security posture by simply adding a couple steps into your build process to take advantage of this kind of tooling. Like I said, the premise was to have an asymmetric shift on cyber defense.

But we’re doing it in a way that makes it very simple to adopt, very easy to implement, and without the downstream effects on system performance on the devices out in the field. The alternative that people face today is the challenge of: do I rewrite all my software in a different language? Should I rewrite it in Rust instead of C++? And the answer is you don’t have to. And that is a massive shift.

Because if you’re changing the language, then you’re thinking about the compiler, you’re thinking about the test harness, you’re thinking about the hardware, you’re thinking about the compute that you need. And with all of those things, you’re almost re-architecting your products in the first place. That’s not really viable in some of these industries. That’s not viable for airworthiness. You can’t just bring in new hardware and ship it out next month. You can’t do that in the auto industry. You certainly can’t do that in manufacturing plants, where we make long-term investments to derive tremendous output from our manufacturing production.

[Paul] (24:45)

And in some of those environments, Joe, if I’m not wrong, there are regulatory problems: even if you wanted to rewrite some of your software in, say, Rust, the compilers for it might not yet be ratified. So even if you think they produce better, safer, cleaner, more memory-safe code, you may simply not be allowed to use them, because, understandably, the regulators figured: we want to go with what we know, rather than potentially introducing so many changes that things actually get worse rather than better.

[Joe] (25:20)

Exactly. Again, our take at RunSafe has been that you won’t solve every single vulnerability, but you can prevent the exploitation thereof. Therefore, maintaining the determinism that the safety standards, the security standards, the compliance standards require to ensure that these devices will act as you expect out in the field.

[Paul] (25:41)

And you’re not being like the notorious three monkeys, are you? See no evil, hear no evil, speak no evil. You’re not producing these products so that people can go, you know what, I’m just going to carry on with all the bad habits of the past and RunSafe will somehow fix it for me. What you’re doing is saying if you’re going to spend time rewriting some of your code that you can, or adopting Secure by Design practices, why don’t we make it easy for you to do so?

So that you have more time to get a patch ready and pushed out, so that you don’t have this post-production panic. You don’t have the SharePoint situation, where one month’s fix needed another fix the month after.

[Joe] (26:22)

It would be a great world if our mistakes…

[Paul] (26:25)

HA! Sorry,

that’s not really funny, but I know what you mean.

[Joe] (26:31)

Well, it would be a great world if we could just cover up our mistakes, cover up our vulnerabilities, and no one ever did anything with them. And unfortunately, there are vulnerability researchers, there are red teams, there are customers, there are nation states, there are hacktivist groups, there are cyber attackers of different flavors. There are those that do ransomware. There are those that seek money for their findings in bug bounty programs and whatnot.

[Paul] (27:03)

But apart from those few opponents that we have…

[Joe] (27:09)

Someone’s gonna find the vulnerability, that’s the point.

[Paul] (27:12)

Exactly. And they’re not necessarily going to tell you, especially if they are a cyber criminal who thinks they can make a million dollars out of it. Or even more importantly, if they’re a state-sponsored actor that thinks, you know what, in nine months’ time, 12 months’ time, 18 months’ time, this could come in very handy.

[Joe] (27:33)

And of course, we’ve talked about the prowess of China’s cyber research arm, if you will, and the attackers that come out of IRGC and Iran and North Korea and even Russia. And these are formidable research teams. They’re looking to exact some kind of outcome or some kind of effect at some point in the future at a time of their choosing because it aligns with their interests, their ideology, their nation state plans or what have you. 

At RunSafe, we want to make critical infrastructure safe so that the economy can thrive and we don’t give the upper hand to China or other potential adversaries of the United States or Western-aligned countries. And so the idea of using that as a mission that keeps us going is something that is true across RunSafe and our team. And the idea that you can add in security at build time at relatively low cost, both in performance and economic terms, and have an asymmetric shift in cyber defense, is a great equation for everybody, including for the national security reasons that exist to prevent exploitation in critical infrastructure.

[Paul] (28:48)

And ironically, paradoxically, I don’t know what the right word is, astonishingly, amazingly, brilliantly, in fact, adopting technologies like RunSafe Identify and RunSafe Protect actually could give you the extra time you need to make the long-term changes that you want. So that instead of going, golly, I have to rewrite this whole thing in Rust, it’s going to take me forever, I’m never going to get it finished, you can actually bring some new culture into your organization without affecting your business, without giving your customers cause for alarm, and, as we said in the title at the beginning, without any post-production panic. So, Joe, do you want to finish off? Although strictly speaking it’s not build-time protection, it integrates with RunSafe Protect in such a way that I like to think of it that way. Do you want to say something about RunSafe Monitor?

[Joe] (29:45)

If RunSafe Identify helps you identify risk within your software code and your software supply chain, and RunSafe Protect allows you to prevent exploitation of software at runtime, what RunSafe Monitor is designed to do is to help you identify indicators of compromise and potential bugs in your software code by collecting information about a software crash. And this is passive monitoring that doesn’t cause runtime overhead or slowdown of any sort.

[Paul] (30:17)

So this isn’t like those Windows solutions that poke instrumentation instructions into your code at load time so they can call some hundred megs’ worth of antivirus or whatever. This is a monitoring system that can tell you either, well, you’ve had a problem and this is what we learned about it, which means you can fix it more quickly; or perhaps even, we’ve seen some anomalies, and it may not be an exploitable vulnerability, but…

[Joe] (30:29)

Exactly it.

[Paul] (30:47)

If you want somewhere to look next, this is a good place to start. So it’s sort of preventative as well as merely detective, if that’s a word. I don’t think it is, but it is now.

[Joe] (30:57)

And the original vision for RunSafe Monitor was much like a signal flare: it’s just to send up a message to say, wow, we just captured something that you need to be aware of. We sit there passively until a software crash happens. When a crash happens, we collect about 20 state variables at the point of the crash, and you can tell a lot from that, something far short of a core dump. You can start to get a good indication of what exactly went wrong at the moment of the crash, and whether that should be looked at by your security operations team, because it might be an indication of compromise, or whether it looks more like a bug that needs to go back to the development team to see if there’s something that needs to be fixed.

Incidentally, what we found is that it’s not just in production that RunSafe Monitor is useful. It also comes in handy during testing. RunSafe Monitor is meant not just for runtime production monitoring, but also for the simulations you run in your test environment. The idea, then, is to give feedback to the development team or the security operations team and give them a heads-up that there could be an indication of compromise, or an underlying weakness or bug in the software.
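
As a concrete illustration of that “signal flare” pattern, here is a minimal sketch, in Python, of the general idea: sit passively until something goes wrong, then capture a small, bounded set of state variables instead of a full core dump. This is purely illustrative, not RunSafe Monitor’s actual implementation; the report fields, and printing the report rather than uploading it, are assumptions made to keep the example self-contained.

```python
import json
import platform
import sys
import time
import traceback

def capture_crash_state(exc_type, exc_value, exc_tb):
    """Collect a small, bounded set of state variables at crash time,
    far short of a full core dump (no heap contents, no secrets)."""
    frame = traceback.extract_tb(exc_tb)[-1] if exc_tb else None
    report = {
        "timestamp": time.time(),
        "exception": exc_type.__name__,
        "message": str(exc_value)[:200],   # truncated: keep the payload small
        "file": frame.filename if frame else None,
        "line": frame.lineno if frame else None,
        "function": frame.name if frame else None,
        "platform": platform.platform(),
        "python": platform.python_version(),
    }
    # A real system would queue this for upload to a SIEM; here we just
    # print the few hundred bytes we collected.
    print(json.dumps(report))

def main():
    # Install the hook; the process then runs normally ("sits passively")
    # until an unhandled error occurs.
    sys.excepthook = capture_crash_state
    buffer = [0, 1, 2]
    buffer[10]  # deliberate out-of-range access to trigger the hook

if __name__ == "__main__":
    main()
```

An embedded implementation would hook native crash signals and grab registers and a few stack words instead, but the economics are the same: a few hundred bytes of context rather than gigabytes of memory image.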

[Paul] (32:11)

Because core dumps come with fascinating challenges all of their own, don’t they? They’re great for debugging, but in the real world they can actually be a bit of a cybersecurity panic situation. Because the idea is, hey, let’s capture everything so that they can completely reconstruct this back at HQ, including passwords, authentication tokens, all sorts of stuff that was only ever supposed to be in memory and was never supposed to be saved. And I guess the other problem, too: anyone who’s had Windows telling them, your system has blue-screened and now we’re going to prepare the crash dump, knows they can be absolutely enormous, and good luck, A, fitting that in, and B, being able to download it from some of the embedded devices out there.

[Joe] (33:00)

Yeah. And being able to store some of that information, or send it off to your SIEM, without all the extra cost or risk associated with the overall core dump itself. We’re talking really small bits of data here, a limited number of state variables that can give you good insight into what happened. A core dump, obviously, can be a massive file and can cost a lot in terms of data upload or download, depending on how you look at it. If you can collect this information at runtime and you can use it to inform people, then you’re ahead of the curve in that regard as well.

[Paul] (33:37)

If you can get 98% of the knowledge with 2% of the intervention, the size, and the risk of exposing information you shouldn’t, what a great thing that is. Joe, I’m conscious of time, so I think we’d better wrap up, because I could easily listen to you for another 30 minutes on this. And I really just want to say thank you: you have absolutely once again shown yourself to be the kind of person who says, you know what, I have these products to sell you because I think that the problems they solve are really important and this is a good way to solve them, rather than saying, hey, I think these problems are important because I just happen to have the products to sell you. So if any of our listeners are interested in knowing more, they can just head to runsafesecurity.com.

[Joe] (34:25)

Yeah, come to RunSafeSecurity.com and we can help you identify risk, protect code, and monitor software.

[Paul] (34:31)

Well said, Joe. Thank you so much. That is a wrap for this episode of Exploited: The Cyber Truth. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like us and share us on social media as well. Please also don’t forget to share us with everyone in your team so they can hear Joe’s words of wisdom. Thanks for listening, everybody. And remember, stay ahead of the threat.

The post Build-Time Protections vs. Post-Production Panic appeared first on RunSafe Security.

]]>
What Drivers Really Think About Connected Car Safety https://runsafesecurity.com/podcast/connected-car-safety/ Thu, 04 Sep 2025 13:55:36 +0000 https://runsafesecurity.com/?post_type=podcast&p=254782 The post What Drivers Really Think About Connected Car Safety appeared first on RunSafe Security.

]]>

 

Cybersecurity isn’t just an automotive industry concern, it’s becoming a consumer expectation. RunSafe Security’s 2025 Connected Car Cyber Safety & Security Survey reveals how drivers view cyber risks in connected and autonomous vehicles and who is responsible for managing them.

In this episode of Exploited: The Cyber Truth, Paul Ducklin is joined by RunSafe CEO Joe Saunders to unpack what the survey results mean for automakers, regulators, and drivers.

Key discussion points include:

  • Why 65% of drivers believe remote hacking is possible
  • Why 79% prioritize physical safety over data privacy
  • How 87% say strong cybersecurity influences their buying decision
  • Concerns about over-the-air updates and the risks of interference
  • The role of regulation and industry standards in building trust
  • How cybersecurity is becoming inseparable from vehicle safety and brand loyalty

Whether you’re an OEM, policymaker, or consumer, this conversation highlights why cybersecurity must be treated as a must-have feature that is also fundamental to vehicle safety.

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul]

Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined by Joe Saunders, CEO and Founder of RunSafe Security.

Hello, Joe.

[Joe]

Hey, Paul, great to be here, and I love today’s topic.

[Paul]

I somehow thought you would, because it opens, I don’t want to say a can of worms, because worm has very special meaning when it comes to cybersecurity. But some fascinating insights will come out today, because our topic is what drivers really think about connected car safety. RunSafe Security’s 2025 Connected Car Cyber Safety and Security Survey just dropped, in which 2,000 drivers, i.e. consumers, not manufacturers, not OEMs, not cyber security experts, were asked what they thought about vehicle safety and security. So let’s start with a simple question. Do you think remote hacking of vehicles is possible? 65% of drivers said, yes, I do think it’s possible.

That kind of suggests that 35% think it can’t be done, despite years of evidence to the contrary. What do you make of that?

[Joe]

Well, I think it’s interesting. You noted there were 2,000 respondents. They are consumers.

They’re based in the United States. They’re based in the UK. They’re based in Germany.

And you would think that in all areas, consumers would be well-informed. But since we did a cross-section, I think it is interesting, having interviewed 2,000 people, that the findings are what they are. In my mind, people are generally aware that hacking, so to speak, or attacking cars is certainly possible.

And only a subset believe that they’re fully protected. And maybe the reason there is this confidence gap is in part that people don’t anticipate they would necessarily be targeted. So they can’t envision they would be attacked.

But they somehow know at the same time that it is possible.

[Paul]

I hear you, Joe. I guess what you’re saying is that they’re not really answering, do I think that, in theory, somebody the attackers wanted to go after could get pwned? It’s more like, what’s the chance that I’ll be driving along the freeway and someone will swerve my car into the Armco?

Probably very little. That’s the wrong way to think about it, isn’t it?

[Joe]

Of course, because we have to look at this in a broader sense. And that includes a couple of things. The consequence of an attack could lead to maybe an accident and someone could get injured or even worse.

And also we need to look at it from an insurance and liability perspective, and at the different business costs for the manufacturers to produce products that are safe. In my mind, the bigger story is about safety. And this is a factor that suggests that manufacturers should continue to invest in safety.

Safety is one of those key issues that people care about. Naturally, from my perspective, I consider cybersecurity a really important subset to the ongoing safety of vehicles going forward, especially as we get further into various forms of autonomy, whether that’s autonomous driving itself or driver-assisted capabilities and controls that might affect cars on the road today.

[Paul]

So I guess that does sort of explain why 79%, let’s call it eight out of 10 people, said protecting physical safety in connected cars is more important than protecting personal data. But it definitely does not mean that those 80% of people don’t care about protecting personal data. But one interesting thing I noticed, Joe, is that there’s a separate question in there about over-the-air updates.

Now, on that issue, 80% of respondents were concerned that over-the-air updates could be interfered with by cybercriminals. That doesn’t seem to mesh with the idea that only 65% of drivers thought that remote hacking was possible, because I’d have thought that hacking and cracking an over-the-air update is the ultimate form of remote hack because it’s 100% persistent. You basically take over the car forever, and officially at that. 

So how do you explain that difference? Do you think that’s just a matter of understanding or of semantics? 

[Joe]

Really, as we thought about this internally at RunSafe, we realized that over-the-air updates are probably more commonly utilized in certain types of vehicles that are newer on the market.

[Paul]

Yes. 

[Joe]

Let me back up first and say over-the-air updates are essential to fix bugs, to ensure that safety persists on vehicles. And with 100 million lines of code on modern vehicles, there is reason to believe that we should do everything we can. Just as we patch our mobile phone, we should be patching our mobile vehicles because of all the software that’s on it these days.

Then there are the autonomous features themselves, features that are coming fast and furious down the road. But do all drivers utilize autonomous features today? The answer is no.

It’s a small subset. And so I think, over time, these statistics are going to change, in part because more and more vehicles are incorporating more and more autonomous features.

And there are more advanced applications being used on vehicles, on devices. We will see more and more concerns about over-the-air updates. And in the subset of people that have vehicles that are updated often, I do think they probably have a higher incidence of being concerned about the updates over the air getting compromised.

[Paul]

I guess if we can get there with mobile phones, I’m sure we can get there with cars. And if you’re concerned about updating your mobile phone, you should definitely be concerned about updating your mobile car, shouldn’t you? Not least because most connected cars these days, they include all the features of your mobile phone.

Apps, infotainment, a mobile connection that you don’t even choose. It just comes with the car and always works. They’re a mobile phone and much, much more.

I think the reason that over-the-air updates could be a great savior in the automotive industry is that historically, these kind of fixes have been done with product recalls, haven’t they? Please show up at your dealer within the next six weeks. If you show up, that’s great.

But what if you forget?

[Joe]

It’s very true that not updating software can lead to tremendous risk because you’re still exposed to a vulnerability that’s not been patched on your device. I actually get annoyed now if my overnight updates on my iPhone don’t take place because all of a sudden I’m exposed. I do think it’s just kind of the maturation of the process.

It’s a new environment and over time, people will settle in and just be normal. Like we do with phones, I think with cars, having regular updates will be acceptable at some point for sure. But everyone, I think, realizes the safety aspects of automobiles means that those updates have to be perfect. 

[Paul]

And the cryptographic correctness of those firmware blobs that get downloaded and updated is important. I don’t think we should be surprised, and indeed we should expect, that automotive vendors will be even more careful to make sure that the firmware blobs they deliver to their customers will be the ones that arrive and the only ones that actually get installed and used.

[Joe]

Exactly right. And I think automotive makers are very cautious about safety violations or risk to drivers’ safety. And so with that, they go through very extensive software testing.

I don’t think we will see the kind of compromise in a software update where vehicles get sent something that hasn’t been tested, as you might see in other domains. Automotive makers know that they need to, in fact have to, adhere to the safety expectations that not only consumers but regulators have.
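
As a sketch of the cryptographic correctness Paul is describing, here is a minimal example of how an in-vehicle updater might verify a downloaded firmware blob before installing it. This is a simplified illustration, not any vendor’s real OTA mechanism: it assumes an Ed25519 vendor public key provisioned into the device (the key bytes below are placeholders) and uses the widely available Python `cryptography` package. Real OTA systems add rollback protection, staged A/B installs, and recovery paths on top of this check.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Vendor's public key, provisioned into the device at manufacture time.
# Placeholder bytes here; a real key would come from the vendor's signing HSM.
VENDOR_PUBLIC_KEY = bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
)

def verify_firmware(blob: bytes, signature: bytes) -> bool:
    """Accept a firmware image only if the vendor's signature checks out.
    Any tampering in transit changes the blob and invalidates the signature."""
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    try:
        public_key.verify(signature, blob)
        return True
    except InvalidSignature:
        return False

def install_update(blob: bytes, signature: bytes) -> None:
    if not verify_firmware(blob, signature):
        # Refuse to install; keep the current known-good image running.
        raise RuntimeError("firmware signature check failed; update rejected")
    # ...write blob to the inactive slot, mark it bootable, then reboot...
```

The design point is that the device never trusts the transport: even if the download channel is hijacked, a forged or modified blob fails the signature check and is simply rejected.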

[Paul]

Absolutely. Having said that, Joe, do you think that standards like ISO 26262, which includes a part called ASIL, which I believe stands for Automotive Safety Integrity Level, particularly relevant for driver-assistance and self-driving features in cars, will extend to include what you might call ACIL, an Automotive Cybersecurity Integrity Level?

Do you think that we will see government-mandated cybersecurity standards for vehicles that can deal both with safety-related firmware updates and what you might call functionality firmware updates, like the infotainment Bluetooth stack? Or games, which I see are the latest thing that the car vendors are getting into, supplying in-vehicle, cloud-based gaming for your kids while you drive. What could possibly go wrong?

 So where do you think the regulators will go?

[Joe]

I think the regulators will stay in the safety arena when it comes to cyber mandates, in all honesty, although I will caution everyone to say that the infotainment systems are an access point onto vehicles from which attackers might jump to another segment of a vehicle and do some kind of cyber attack. Over time, I think there’ll be better network segmentation on vehicles.

[Paul]

By network segmentation, you mean that if you break into the infotainment network, it will be somewhere between very hard and almost impossible to jump across. My understanding is that in the aeronautics industry, those networks have tended to stay quite well apart, whereas in automotive systems, it hasn’t quite worked like that, has it? The same touchscreen menu that the driver has to use to select engine-related settings like fuel economy, power boost, hill-start assist, all of that stuff, the very same interface also deals with Bluetooth and what parental controls I want over the movies my kids can watch in the back seat.

So they have traditionally in automotive not been as well segmented as perhaps they should have been. Would you agree with that?

[Joe]

I would agree, and I think part of it is that if you were designing a vehicle from the ground up, quote unquote whole cloth, you would be able to implement architectures without any kind of legacy dependencies. And we know in the automotive industry there are subsystems that have been around for dozens of years, if not 30, 40, 50 years. And the CAN bus protocol, and the CAN bus itself, has been an essential part of the majority of vehicles, helping deliver messaging from one component to another.

 And so if you were able to design around the CAN bus itself, then you probably could have greater network segmentation as a result. I think there’s a legacy aspect that has affected how security itself evolves. And all of this is changing. 

I mean, we’ve come a long way in the 10 years since that original Jeep hack, whether it was 2014 or 2015 at this point, I can’t quite remember. But let’s say 10 years ago, when that Jeep got driven off the side of the road due to a remote attack.

[Paul]

 That’s the video with Andy Greenberg of Wired driving his Cherokee. I presume he just hired it without telling anybody. The hackers were sitting at Charlie Miller’s house, weren’t they?

[Joe]

Yeah.

[Paul]

They’d done it before. They’d hacked his car while they were plugged into the OBD port. So they were in the back seat.

But this time they said, no, we want to prove we can do it wirelessly. And I just think he made a terrible blunder. He should have made one of them get in the car with him as an insurance policy. 

Because they basically cut the car off while he was on the freeway at a part that had no emergency lane. And it wasn’t downhill, so he couldn’t coast and escape at the next off ramp. So yeah, that was a bit of an eye-opener, wasn’t it? 

[Joe]

Yeah. And with much less publicity, in the early days at RunSafe we coordinated with the FBI and the Virginia State Police and demonstrated how even law enforcement vehicles could be compromised while law enforcement officers were driving them. And it was kind of a scary thought for law enforcement to think that they could be targeted. 

[Paul]

Wow, yes. Because even a simple denial-of-service attack, like where the vehicle splutters or cuts out, or you can’t put the blues and twos on, the sirens and the lights, to attend an emergency, would be quite catastrophic.

[Joe]

Our premise at the time, and I think it’s still in part true, is that those badges are potential targets, and logos are targets, if you think about fleets. And so there are other forms of motivation beyond consumers.

 [Paul]

For someone to be able to collect data about an entire brand’s fleet, that would be valuable to script kiddies, to cyber criminals, and very definitely to state-sponsored actors, wouldn’t it?

[Joe]

Exactly. And so you might find out route information, you might just disrupt deliveries in general. Imagine the kind of activity that’s done with trucks or delivery companies and logistics companies.

I think UPS, I think FedEx, I think the major trucking companies are pretty sophisticated when it comes to cybersecurity. What’s changing in the industry though is the autonomous nature and the proliferation of software on these devices. So I think 10 years ago, that was accurate.

I think it’s still true, but the landscape is changing with all the different kinds of communication ports that are on these systems and the dramatic increase in lines of code. The risk is changing because of the amount of software exposed.

[Paul]

You know, Joe, another interesting pair of facts, to my eyes: one very good and the other possibly even better. And that is that, despite the fact that we said only 65% of drivers think that remote hacking is even possible, nevertheless 87% actually said that strong cybersecurity would influence their buying decision, which is very good news, isn’t it? And perhaps even more interestingly, only 35% of them said, we’re prepared to pay more for that.

They’re saying, you know what? This is so important that we’re not paying extra for it. We expect it to be in the car in the same way that we don’t expect to pay a premium to have brakes that work or steering that can go left as well as right.

[Joe]

And it points out that, I think, there is room for regulation when it affects safety. I do think consumers care about safety, as I’ve mentioned several times, but I do think that cybersecurity is top of mind and could be listed as a strength in one’s vehicle, especially given all the complex systems in there. People want to have confidence that their devices are going to be secure.

If you have a gadget at home, if you have a full network that operates your house, maybe you have a surveillance system. I think if you saw that there’s one surveillance system that is cyber hardened and one isn’t, I think that could make a difference. 

[Paul]

Exactly.

[Joe]

And I think in vehicles, people have an expectation that cybersecurity would be there. But again, I would point out that difference: the gap between the 87% who say strong cybersecurity will influence their buying and the 35% willing to pay a premium does feel like a disconnect in one sense. But as you say, it may be because they have an expectation.

I think regulators need to consider that kind of data. I think it’s important for them to consider that consumers have an expectation that their vehicles will be safe and secure, and that it’s not necessarily on them to pay for it.

[Paul]

Yes. And it also suggests that good old market forces themselves could have a very positive influence on cybersecurity because the other way of reading that is 65% of the people surveyed said, we’re not prepared to pay more for cybersecurity, but we’re certainly prepared to pay nothing at all. In other words, we’ll ditch your brand and we’ll shop somewhere else.

[Joe]

And I think it’s important to point out that, if everything else is equal, maybe the cybersecurity does break the tie. There are still other main reasons why people buy cars. Say it’s a family driving kids to school.

They want the safety or they want the comfort because someone has a long commute or they use the vehicle for their work and they have lots of communication needs and lots of connectivity needs and they need to make sure that they can continue to operate.

 [Paul]

I’m smiling at you, Joe, if you can see me on the video. I’m just thinking, will he mention heated seats and soft-close doors? And those are all things that do influence people’s decisions, but I bet you wouldn’t get 80% of people in a survey to say, yes, I definitely need the soft-close doors.

They can probably do without those, but you did get 87% of people saying strong cybersecurity will influence their buying decisions. So do you genuinely think that strong cybersecurity from a particular automotive vendor will become a major differentiator, and that people will look at it in the same way they look at fuel efficiency, emissions, and safety ratings? 

[Joe]

I do think that vendors or OEMs and automakers today look at cybersecurity from a safety perspective and do everything they can to minimize safety risk to consumers.

[Paul]

Yes.

[Joe]

And so for me, I don’t think that’s necessarily a differentiator. I think it’s a must-have for all vehicles.

[Paul]

Yes, particularly when the safety aspect increasingly depends on the cybersecurity aspect anyway. For example, you’d better secure your over-the-air updates if you’re fixing something to do with braking, steering, lighting, et cetera. So, Joe, that brings me to another statistic which I found intriguing, and that is that nearly 30% of respondents, 28%, said that they weren’t confident that their car is properly protected from hacking.

Do you think that is down to the fact that they’re right, or simply that communications from vendors about their cybersecurity measures should or could be improved? When somebody does have a competitive advantage in cybersecurity in their vehicles, how do they communicate that to consumers without falling into the sales-spiel, marketing-hype trap? 

[Joe]

Yeah, no doubt manufacturers can incorporate cybersecurity into their branding and into their products and give assurances to customers. I think OEMs and auto manufacturers need to put forward a basic level of confidence to all consumers, a seal of approval, if you will, that coincides with safety but also cybersecurity. And one day I hope that is the case, that people will sign up for a form of validation that they adhere to the strictest security methods.

And until then, I think we will have an information gap. I don’t expect that car salesmen or saleswomen will be asking people, instead of kicking the tires, to hack the car to see if they can break in. I don’t think that’s ever going to happen.

[Paul]

So, Joe, one more interesting set of numbers before we wrap up and summarise. And that was, whom do you believe should be held responsible if a cyberattack on a connected car due to a third-party vulnerability causes an accident? 33% of people said OEM, 20% said OEM slash supplier, 14% just the supplier.

But fascinatingly, 10% of people thought that the fault would lie with the driver. And understandably, perhaps, 10% of people said, look, the crooks, the cybercriminals, the attackers should bear the liability. So what do you make of those stats?

Where should the liability lie? And how can we all do our bit, even if we decide it ends with someone who isn’t the attacker, but maybe is the OEM or the supplier who provided the insecure part?

[Joe]

I think ultimately the OEM is responsible for certifying that their car is safe, is secure, does what it says it’s supposed to do, and will operate as expected. And so I believe that these cyberattacks are foreseeable events, even though they are not necessarily predictable.

[Paul]

Yes.

[Joe]

And what I mean by that is there are ways to do mitigation on vehicles that is in fact the responsibility of the OEM. And so with that, I do think it’s funny, like you point out, that 10% of folks said that the driver is responsible.

[Paul]

If the crash was down to the cyberattack, not down to poor driving, what’s the driver supposed to do? Thanks to regulations in the industry, this isn’t like a hobbyist computer that they built themselves.

[Joe]

I think maybe respond to the incident and recover and get the car back into a normal operating mode, or whatever. However, we do need to be aware that it is the driver’s fault if they’re speeding, or if it starts to rain and they don’t turn their headlights on. I do think there are probably conditions in which a driver has to be able to respond to some unexpected event and maintain control of the vehicle.

[Paul]

Yes. On the other hand, if you have a vehicle that says when I detect that it’s raining and it’s not light enough, I will automatically turn the headlights on. While the driver should intervene if they can see that that system hasn’t worked, you would expect that that system would work correctly very, very, very much more often than it failed.

Like the hill start assist. So you don’t use the brake, you just let the clutch out and drive off. If that fails, you’re going to roll back and hit the car behind you.

Now, who’s liable? Is it the driver? Because they should have covered the brake anyway.

Or is it the manufacturer who said, no, this system allows you to drive without using the brake. I suspect that there are lots of open questions there. But it is interesting that there is at least a suggestion that, well, we all have at least a part to play.

Because it wasn’t 99% of people said, it’s the person who made the part. Or it wasn’t 80% of people said, oh, well, the driver should have just been clever enough to fix it. Though I was intrigued that 4% of people said, it’s the regulator’s fault. 

A software bug is hardly their fault, is it?

[Joe]

It’s hardly their fault, but they are producing standards against which OEMs and the suppliers should be building their vehicles. To the extent that the automakers themselves are not solving the problem, I do think there’s a backdrop where regulators have a role. But like you said, I think ultimately the OEMs and the suppliers need to have a solid program to build security into the vehicles.

They don’t want to have an accident result from a cyber attack in one of their consumers, one of their passengers, one of their drivers. And as a result of that, they ultimately have the most motivation to ensure that the cars stay safe. 

[Paul]

 So Joe, I hope you don’t mind if, to finish up, I ask you a very forward-looking question. And that is, if we ignore just the remote hacking of vehicles, the, oh, I swerved you off the freeway and you crashed and there was nothing you could have done about it, what other emerging cyber risks do you think we will see in the automotive industry, in consumer vehicles, in the next three to five years? 

[Joe]

I think there could be ransomware. I think there could be locking of vehicles.

[Paul]

Oh, you mean, hey, pay me $3,000 or I’ll melt your car down and it’ll cost you $30,000 to get it fixed by the dealership.

[Joe]

Yeah, or maybe multiple cars are affected and they go after the manufacturer to pay a ransom.

[Paul]

Oh, right, your whole fleet will not start from 10am tomorrow.

[Joe]

Yes.

[Paul]

All vehicles will cut out, pull to the side of the road, and stop. Wow. 

[Joe]

So if it’s not really a safety concern and it’s a financially motivated one, I think the deepest pockets are the car makers and not the individual drivers themselves.

[Paul]

Well, it’s hard to see how something like a ransomware attack that stops vehicles would not be a safety concern, because at least some of them are going to stop where they jolly well shouldn’t. If they’re delivering fresh food, then there’s the whole supply-chain, safety-of-society problem. So what do you hope will be different, perhaps, or what different questions would you like to ask, in future editions of the RunSafe Security Connected Car Survey?

 [Joe]

Yeah, I think we probably want to get some comparison data from some of the different stakeholders in the ecosystem.

[Paul]

Yes.

[Joe]

I’d love to supplement this with some feedback about how OEMs are creating standards for their supply chain to incorporate security into it, what the state of the art is there. Also, how are organizations changing their development practices? How are they sharing information about vulnerabilities?

And what information could be shared at which levels to further substantiate bolstering cybersecurity in the vehicles? It is great to get the consumer perspective. I also think we need a little bit more from the OEMs and their supply chain, or even their customers that are not consumers, the fleets.

[Paul]

So rather than just waiting for the regulators to tell the OEMs and the vendors what they need to do, you’d like to see what the OEMs and the vendors are throwing forward to the regulators to say, here are some new standards we’ve come up with all on our own, and this is what we expect you to hold us to in the future. That would be much more proactive and very much the opposite of checkbox compliance, wouldn’t it?

[Joe]

And I think the OEMs are doing a lot with their suppliers even today. 

[Paul]

I agree.

 [Joe]

I’m encouraged by developing common frameworks and architectures that allow people to have more mature software in these vehicles. Not that it’s been immature lately, but doing more robust framework and architecture development with suppliers so that we can minimize the vulnerabilities throughout the entire supply chain.

[Paul]

Joe, thank you so much for your passion. You know so much about this and you talk about it with such breadth and depth and without getting into any kind of sales spiel mode, I deeply appreciate that. I’m sure our listeners do too.

So thank you very much and thank you to everybody who tuned in and listened. If you find this podcast insightful, please subscribe so you know when each new episode drops. Please like us and share us on social media as well and don’t forget to share us with everybody in your team.

That’s a wrap for this episode of Exploited: The Cyber Truth and remember, stay ahead of the threat. See you next time.

The post What Drivers Really Think About Connected Car Safety appeared first on RunSafe Security.

]]>
When IT Falls, OT Follows: Inside the SharePoint Breach with Ron Reiter https://runsafesecurity.com/podcast/sharepoint-breach/ Thu, 28 Aug 2025 12:56:28 +0000 https://runsafesecurity.com/?post_type=podcast&p=254695 The post When IT Falls, OT Follows: Inside the SharePoint Breach with Ron Reiter appeared first on RunSafe Security.

]]>
 


The explosive SharePoint vulnerabilities (CVE-2025-53770 and CVE-2025-53771) are already wreaking havoc across hundreds of organizations, exposing sensitive data and creating dangerous footholds for attackers.

In this episode of Exploited: The Cyber Truth, host Paul Ducklin sits down with Joe Saunders, CEO of RunSafe Security, and Ron Reiter, CTO and co-founder of Sentra, to break down what makes this vulnerability so severe, how attackers are bypassing authentication to gain full access, and why traditional patching strategies won’t close the door on risk.

Key topics include:

  • The mechanics of the SharePoint exploit and its widespread impact
  • How IT breaches can escalate into OT disruptions
  • The critical role of customer trust and data protection beyond compliance
  • The top three actions organizations must take immediately

If your IT or OT systems rely on secure data flow, this is an episode you can’t afford to miss.

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Special Guest – Ron Reiter, CTO & Co-Founder of Sentra: Ron Reiter is CTO and Co-Founder at Sentra. Ron has over 20 years of hands-on tech and leadership experience in cybersecurity, cloud, big data, and machine learning. As a serial entrepreneur and seed investor, Ron has contributed to the success of several startups, including Axonius, Firefly, Guardio, Talon Cyber Security, and Lightricks, after founding a company acquired by Oracle.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] 00:00:06  Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello, Joe. You’re on the road again, aren’t you?

[Joe] 00:00:21  I am on the road and it’s great to be here though. Look forward to the conversation. Paul.

[Paul] 00:00:25  Excellent, because we have a super special guest today. And that is Ron Reiter, who is CTO and co-founder at Sentra. Hello, Ron.

[Ron] 00:00:35  Hi.

[Paul] 00:00:36  Now, Ron, today we are going to be talking about SharePoint under siege: the anatomy of the vulnerabilities. So why don’t you start by explaining what we mean by the SharePoint vulnerability that was all over the news lately?

[Ron] 00:00:53  Sure. So, the recent SharePoint vulnerability, noted CVE-2025-53770.

[Paul] 00:01:02  And there was a second one, wasn’t there, which is 53771.

[Ron] 00:01:07  Correct.

[Paul] 00:01:08  They kind of hunt as a team, if you like.

[Ron] 00:01:10  Sadly, yes. The other name, the more common name, was the ToolShell exploit.

[Ron] 00:01:15  If I remember correctly.

[Paul] 00:01:17  Yes. That’s right. And for our listeners, basically a shell is the Unix term for a window in which you can enter commands and the system will respond. So getting a shell on somebody else’s system loosely means you can tell it what commands to run, even though you’re not really supposed to.

[Ron] 00:01:37  Exactly. So the type of vulnerability is, I think, what hackers deem the most lucrative, or the strongest, type of vulnerability, which is the remote code execution vulnerability. Right?

[Paul] 00:01:50  RCE, yes.

[Ron] 00:01:52  That’s basically the ability to select a server and say, I’m going to execute whatever I want there, take over the server, and then, from there, do anything: read the emails that are on the server, hack into the organization from within that server, or whatnot. That is the CVE that was recently published.

[Paul] 00:02:15  It’s the one that kind of describes itself the best, isn’t it? Remote. The person could be on the other side of the world. Code is shorthand for program or programs.

[Paul] 00:02:26  Unknown. And execution means I make you run it. So it literally is: take over and, ultimately, perhaps with a little bit of fiddling, do whatever you want.

[Ron] 00:02:39  Correct. And what recently was announced, and I think the reason that is so this is such a, such a critical vulnerability, is that when a remote code execution vulnerability is known to the open right, if someone publishes it, someone talks about it, suddenly people know how to exploit that vulnerability and it becomes widespread knowledge. The question becomes how broad is the damage? So is there only one server that is open to the internet, or is millions of servers potentially could be open to the internet? And it’s also a combination, right? Like you have to have a server that is running it. But that server also has to be accepting connections from the outside world. So things like a mail server, an application server, a web server, these things are usually open to the internet because they welcome people to connect to them. Right.

[Ron] 00:03:39  And that’s exactly what happened.

[Paul] 00:03:41  And that’s the problem with something like SharePoint, isn’t it? The hint is in the name. It is the point from which you share stuff inside the company, outside the company, perhaps with suppliers, perhaps with contractors, perhaps with working from home employees, Ways, perhaps web pages that you want the general public to look at.

[Ron] 00:04:03  The numbers that I’ve collected, I’ve already heard about 400 different organizations that were actually directly affected by this hack. We don’t know if that’s the only number. We can only assume that the number is much, much greater.

[Paul] 00:04:17  It’s certainly not going to be less, is it?

[Ron] 00:04:22  Exactly. A month ago, there were at least 9000 servers that were exposed to the open internet. So from an impact perspective, you know, it’s definitely something very, very severe. And again, these are only the initial numbers. Of course the numbers could be much greater.

[Paul] 00:04:40  And I think it’s also important to mention at this point because we talked about two CVE numbers at the start.

[Paul] 00:04:47  The important one here of course is the remote code execution. But it’s sort of partner in crime, if you like, is what’s called a authentication bypass Heart vulnerability. That means not only can the initial exploit be triggered remotely. The person who’s triggering it doesn’t even have to have the most basic form of login on the targeted system. With a bit of effort, they can probably authorise themselves to get in and then, loosely speaking, implant any rogue executable code or malware that they like.

[Ron] 00:05:26  Completely correct. I think when we assess the impact of a remote code execution vulnerability, this is definitely one of the parameters, right? So the first parameter is how many servers in the world are running this vulnerable software. And then the second question is how many of these are open to the internet. And then the third question is: is the remote code execution vulnerability a type of vulnerability that requires you to be authenticated or not? Because if it’s an unauthenticated remote code execution, it’s definitely the worst type. Maybe I can give a small insight into what exactly that vulnerability means.

[Paul] 00:06:05  Please go ahead.

[Ron] 00:06:06  So basically what happened is that the hacker that found this vulnerability noticed that there is a way to get something called a machine key. And that key allows you to create something called a view state. Now it doesn’t really matter what a view state is, but what you need to know is that this view state contains something that if you can manipulate it, if you can change it, then you can put something in it. That is basically what you want to execute. Please take over the server. Please send me all of the passwords that you can find, or the emails that you can extract from the server. And if you can manipulate that view state, then you can essentially do whatever you want with the server. You can basically go and get that machine key, and then you can forge a malicious view state and then basically do whatever you want on that server. The SharePoint servers usually are servers that contain a lot of corporate data.
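
To see why leaking the machine key is so devastating, it helps to model what the server does with a view state. Grossly simplified, the server treats a view state as a serialized payload plus a MAC computed with the machine key, and it will deserialize and act on anything whose MAC checks out. The sketch below is a toy model of that trust logic, not the actual ASP.NET wire format (the real attack chain involves .NET serialization gadgets and tooling built for that purpose):

```python
import hashlib
import hmac

def sign_viewstate(machine_key: bytes, payload: bytes) -> bytes:
    """Server side: append a MAC keyed with the machine key. The MAC is
    the *only* thing proving the blob came from the server itself."""
    return payload + hmac.new(machine_key, payload, hashlib.sha256).digest()

def validate_viewstate(machine_key: bytes, blob: bytes) -> bytes:
    """Server side: accept the blob only if the MAC matches, then act on it."""
    payload, mac = blob[:-32], blob[-32:]
    expected = hmac.new(machine_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("view state rejected")
    return payload  # ...which the server then deserializes and processes

# The attack in miniature: once the machine key leaks, an attacker can
# sign *any* payload, and the server cannot tell it from its own.
leaked_key = b"machine-key-extracted-via-CVE-2025-53770"
forged = sign_viewstate(leaked_key, b"<attacker-chosen serialized object>")
assert validate_viewstate(leaked_key, forged)  # server happily accepts it
```

The key insight is that the MAC only proves possession of the machine key, not good intent; once that key is out, the integrity check that the whole mechanism relies on works for the attacker instead of against them.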

[Paul] 00:07:12  Now internally it’s actually IIs, isn’t it?

[Ron] 00:07:16  Yes.

[Paul] 00:07:16  If you have SharePoint running, you actually have an IIs server in the background doing all the webby stuff.

[Ron] 00:07:22  Right. And that server usually contains many more things. Not only the IIs server, which is a web server, it could potentially have other servers installed in the same machine, for example in Exchange Server. Right.

[Paul] 00:07:36  Absolutely.

[Ron] 00:07:37  So your emails could also be there.

[Paul] 00:07:39  Oh dear.

[Ron] 00:07:40  And it’s usually the case by the way usually people put the exchange server on the SharePoint servers because why not. Right. But even if it’s not taking over a server that is running Windows in a server environment probably means that you have access to other servers in the organization.

[Paul] 00:08:00  That SharePoint server probably itself communicates. Collects, manipulates, organizes data from far inside the network, not necessarily servers or services that are on the Windows network. They could be other servers, perhaps even out in the cloud, but do things like collect surveillance video, collect telemetry data from vehicle fleets, all sorts of stuff.

[Ron] 00:08:28  If you think about it, the two most sensitive things in an organization are usually the SharePoint server and the Exchange server, right?

[Paul] 00:08:36  Yes.

[Ron] 00:08:36  What’s more sensitive than reading the emails of people and reading the internal knowledge base of everything that is owned by the organization, right?

[Paul] 00:08:46  Yes, you’ve got contracts, you’ve got calendars, you’ve got sales forecasts.

[Ron] 00:08:52  Yeah.

[Paul] 00:08:53  You might even have discussions about the last breach that you just had.

[Ron] 00:08:58  Exactly. And what about the IT materials? Right. What about the, maybe there are passwords that are stored in the SharePoint. These things happen all the time that will enable you to take over the organization.

[Paul] 00:09:10  Absolutely. And even worse for employees inside the organization. Many, if not most, countries in the world these days require employers to collect and hold very detailed, what you might call know-your-customer, information about their staff. So they probably have things like scans of driving licenses, scans of passports, tax return details, bank account details, medical insurance history, all of that sort of stuff.

[Joe] 00:09:42  And I might add, Paul, your list and Ron’s list is pretty scary in the first place. But let’s not forget intellectual property and even operational data.

[Joe] 00:09:53  The IP of an organization, when that gets stolen, is obviously a devastating thing, and it’s a form of economic espionage and sabotage. And so when you see systems like this that are widespread, that do have these vulnerabilities and do provide access, the key is to look at the motivation of the attackers trying to get in and what they are trying to do. And let’s just not forget intellectual property and operational data.

[Paul] 00:10:17  Absolutely. I’m thinking about me. Oh no. They’ve got a scan of my driving license. But what if they’ve got, as you say, the intellectual property that makes the company valuable? Maybe they’ve not just got my driving license. Maybe they’ve potentially put my job on the line as well.

[Ron] 00:10:33  Organizations need to protect two kinds of data: their customers’ data and their corporate data. The customer data is what they need to protect to make sure that they are giving a loyal service to their customers, or their customers’ customers, and making sure that they preserve privacy and stay compliant with the, you know, different privacy frameworks or compliance frameworks to protect that customer data.

[Paul] 00:11:02  Ron, can I just say how pleased I am to hear you talk about loyalty to the customer first, before you said the word compliant: oh, and being compliant, rather than just doing it the other way around and thinking, well, we’ll be compliant and then maybe they’ll think we’ve got some loyalty.

[Ron] 00:11:19  Of course.

[Paul] 00:11:20  I’m sure Joe agrees very strongly with that as well, because he’s very opposed to checkbox compliance, aren’t you, Joe?

[Joe] 00:11:27  Yeah, 100%.

[Ron] 00:11:28  If a customer gets hurt because he gave his details to an organization that couldn’t keep his private information private, then who can he trust? Right. This type of breach of trust is terrible, and every company that holds customer data has to really take care of it. And the second type of data is the corporate data. Right. So intellectual property is the number one example. Of course there are always contracts, business agreements, employee agreements; things that you don’t want out, like the salaries of all of your employees.

[Ron] 00:12:03  For some companies, maybe it’s a small thing, but for other companies, this could destroy the business. Right.

[Paul] 00:12:09  And it’s one thing if your competitors get it, it’s even worse if some kind of hostile enemy state, for want of a better way of putting it, gets hold of it. And instead of crowing over that, just squirrels that information away in a cupboard so that they can use it later, either for competitive advantage or for intelligence gathering, or for undermining the confidence of your own community.

[Ron] 00:12:37  Yeah, exactly. Some people have even given this SharePoint flaw a doom-laden nickname, which is a funny term.

[Paul] 00:12:44  Yeah, maybe that’s going a bit too far. But I suppose it’s fair enough if it takes that to focus your mind on actually patching promptly and knowing what’s what in your organization.

[Ron] 00:12:55  Yeah. And now there’s basically a race. Every organization is now in a race to upgrade their SharePoint servers, so that the hackers won’t actually exploit the vulnerability in time.

[Paul] 00:13:08  Ron, at this point, can I just ask you whether I’m assuming correctly here: the fact that this vulnerability allows an attacker to extract things like machine keys…

[Paul] 00:13:21  That means that there’s no username, there’s no password, there’s no multi-factor authentication code required. So you just don’t show up in traditional logs, do you?

[Ron] 00:13:33  When this vulnerability was a zero-day, no one knew, up until the vulnerability was published, that people were actually using it. But since it was discovered, basically all of the different threat detection tools, cybersecurity tools, have added a detection mechanism that allows them to automatically and quickly find out if someone is exploiting that vulnerability.

[Paul] 00:14:01  So clearly patching is absolutely vital. If you haven’t patched yet, then it’s no longer a zero-day, or even a one-day, or a three-day, or a 12-day. The wry term used is an N-day, sometimes for very large values of N. So if you haven’t patched yet, you’re probably not sending out the best message to your customers or to your staff. But patching alone is not enough, is it? Because in cases like this, particularly where things like web servers or data-sharing servers are involved, the attackers will almost always add an additional backdoor of their own.

[Paul] 00:14:42  Say something like a web shell that will keep on working even after the hole they used in the first place has been shut off.

[Ron] 00:14:50  Correct. If you look at it, since day zero, when this vulnerability was discovered, basically all of the IT teams in the world had to make sure that all of their SharePoint servers were correctly patched, and the faster they did it, the safer they were. And after they patch, you’re very much correct: what they need to do now is understand and analyze whether there was an attack on their servers. They need to go back to the logs and try to see if someone tried to exploit that server. They need to investigate whether or not a malicious hacker managed to put some sort of backdoor into that server, or steal information. And then, of course, if they discover something like that, they have to disclose it, right? They have to make sure that, if customer data was stolen, they disclose it according to regulations, or at least the newer regulations.

[Ron] 00:15:47  So, yeah, it definitely something that created a lot of work for the security teams and throughout the world.

[Paul] 00:15:53  Now, Joe, there’s yet another dimension to all of this, and that is that many organizations may have things like operational technology or industrial control system devices on a separate network that they may consider largely insulated from the internet, because it only ever connects to the internal network, for example to upload telemetry data about what’s happening in a pump room, what’s happening in a pressure vessel, how many items the lathe has turned out today, et cetera. But often that interface may actually work in two directions. So if attackers can get a good foothold inside the IT network, they may be able to reach further and start messing with the things that actually make the physical parts of your business work, for example if you’re a manufacturing company.

[Joe] 00:16:50  Yeah, no doubt. And certainly with operational technology and OT networks and IT networks converging the risk of somebody moving laterally, as Ron had said, and finding their way to other servers inside an organization’s overall networks is certainly possible.

[Joe] 00:17:07  And it's one of the great concerns, especially if they somehow maintain some kind of persistent access. With that, one of my concerns in attacks like this on SharePoint is the operational data that ultimately finds its way back into the enterprise: yes, for managing workloads, communicating capacity, forecasting future performance, and all that kind of information. But then you may also find your way onto the plant room floor, or to those systems that are out there. We did see something like that with Equifax, going way back, when in fact it went the other way: people found their way in through web-based infrastructure, and once they were able to do that, they traversed the network, moving across and finding other things. So it goes both ways.

[Paul] 00:17:59  Yes, we had a great example of exposing more than you might initially have thought in a recent podcast, when we spoke to Gabriel Gonzalez of IoActive, didn't we? He spoke about an MQTT server that had been set up incorrectly, such that, by watching what was going on in that server, attackers could not only read out information about where every vehicle in a fleet was at any moment, which gives them an incredible amount of competitive information about a business, or even about a society.

[Paul] 00:18:36  They could, in fact, also inject commands into the vehicles and do things like lock and unlock them. So not only do they know where your drivers are, they could go and steal all your cars as well. The telemetry information that's coming out of an industrial control network is hugely valuable, but sometimes being able to poke data back in could actually affect the physical operation of the business, up to and including people's safety.

[Joe] 00:19:06  Absolutely. And, you know, I think it raises the question of thinking through your own risk management framework: what's at risk in your enterprise, and how you have set things up to protect the different assets and the different corporate data that you have, including your operational data but also your intellectual property. And with this set of vulnerabilities and the exposure that's out there, around 9,000 servers and hundreds of organizations have already been affected, and those are just the ones we know about. We ultimately need to go back and look at the motivation of the people behind it and what they are ultimately going after, and to try to anticipate, when we do our own security planning, what kind of data we want to protect.

[Paul] 00:19:48  So, Ron, maybe you could say something now about the kind of things that an organization should do when it's confronted by a set of vulnerabilities like this one.

[Paul] 00:20:00  It sounds very specific to start with: it's only affecting the SharePoint server. But as you've pointed out, this vulnerability could actually be the key, or the gateway, to the entire network.

[Ron] 00:20:12  Let's look at what exactly happened. Those servers probably serve the organization itself, not external customers, which is another use case; usually, SharePoint servers are internal. That's where network security comes in. A lot of the time, the most basic network security could probably have avoided most of these issues. When you look at an organization, you want to make sure that you have layered defense mechanisms.

[Paul] 00:20:42  Yes.

[Ron] 00:20:43  The first layer you want to get right is the network layer. You want to make sure that people cannot physically access the server if they're not supposed to access it. After you do that, then you can take care of the other layers, for example patch management, the security posture of servers, stronger passwords, and so forth. At the end of the day, you have to have a multi-layered approach here.

[Ron] 00:21:08  And those 9,000 servers that were open to the internet probably shouldn't have been open to the internet.
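
As a concrete illustration of that point, a quick exposure check can be sketched in a few lines of Python using only the standard library. The hostnames and ports are placeholders for servers you believe should be internal-only, and the script should be run from a vantage point outside your own network.

    # Exposure-check sketch: from an external vantage point, verify that
    # internal-only servers do not accept connections from the internet.
    # Hostnames and ports are illustrative placeholders.
    import socket

    SHOULD_BE_INTERNAL_ONLY = [
        ("sharepoint.example.com", 443),
        ("intranet.example.com", 80),
    ]

    def is_reachable(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in SHOULD_BE_INTERNAL_ONLY:
        if is_reachable(host, port):
            print(f"WARNING: {host}:{port} is reachable; review your firewall rules")
        else:
            print(f"OK: {host}:{port} is not reachable from here")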

[Paul] 00:21:15  So they could have been compromised in some other way. The attackers could have used a phishing attack, or they could have bought a password on the dark web, then found the SharePoint servers, then exploited them. That’s not impossible, but it’s very much harder than just going, hey, look! They left the front door open. If you’re exposing yourself unnecessarily, you’re just making it more likely that something bad is going to happen.

[Ron] 00:21:40  Exactly. There are three types of hackers. There are the script kiddies, who usually look for the easy wins. If there is an open vulnerability, they'll write a script, scan the internet, see what open servers are out there, exploit them, and try to get something out of it: maybe some sort of quick ransomware attack demanding Bitcoin, or something very generic. That's one type. The second type is the ones that target organizations and try to find ways to get in.

[Ron] 00:22:09  That's where you want to make sure that you're fortified against those attackers. And then there's the third type, which is the nation state. You want to make sure that, at the very least, you're not leaving a trivial attack surface out there, where every script that someone writes because a new vulnerability has come out directly impacts your organization. That's something every security leader must remember. You also have a commitment to know exactly what servers you have open to the internet, and how easy it is for people to exploit those servers. That's why pen testing is done regularly, and that's why scanners are used. What you don't want to do is leave a server unpatched against a CVE that is well known to the world. That is the most irresponsible thing that a security person can do.

[Paul] 00:23:06  Absolutely. And if you don't find it, you can be quite sure that the cybercriminals or the state-sponsored attackers will. And unlike a typical cybersecurity researcher, they ain't going to tell you.

[Paul] 00:23:20  So, Joe, it sounds as though this is, if you like, another angle on bills of materials, isn't it? Now, I know you're very passionate about software bills of materials, and that's obviously important here. But there's a bit more to it than that, isn't there? It also means that you need to know what the configuration of your network is, to make sure that connectivity only works in the way you designed or intended it to, not the way it accidentally ended up getting implemented.

[Joe] 00:23:52  One of the things I picked up from what Ron said is that in this particular case, yes, you can patch the systems, but there's still work to do: going through your logs and finding out if someone has attempted an attack on your system is a key step. Unfortunately, in today's environment, it's hard enough to make sure people provide or apply the necessary patches. It's another thing entirely to make sure they go through the right steps to confirm they weren't infected or compromised in the first place.

[Joe] 00:24:26  And so we do need to be vigilant in our operations, in managing our endpoints, in our systems and our servers. Yes, you have to apply the patch, but you also have to do some digging to see if you were compromised in the first place.

[Paul] 00:24:39  Yes, because it's not unusual for attackers, once they're inside, particularly if they're worried about other attackers following them in, to apply patches for you, basically closing the door behind themselves, because after all, they're already inside.

[Joe] 00:24:56  Yeah, and we don’t want anybody to be a sitting duck, let alone in the digital world where access to information and sensitive corporate data is at risk. So we’ve got to be more vigilant.

[Paul] 00:25:06  Joe, if there are duck puns to be done, I think we'll leave them to me. This is very clearly something that's not just, "Hey, Microsoft did a boo-boo; this is Microsoft's fault." I've seen a lot of stuff in the media waving fingers at Microsoft, and sure, you can criticize their developers for having these bugs in the first place.

[Paul] 00:25:29  My understanding is that there were bugs of this sort found and patched, but the patches weren't quite enough, and someone figured out how to get past the original patches. But as Ron said, this is no longer a zero-day. It's no longer even a one-day, or a three-day, or a 12-day. So for anyone who hasn't moved yet, there's not much point in pointing the finger at anybody else. Maybe you just have to look in the mirror and go, "There's the person who can help me get this sorted out."

[Ron] 00:25:59  Yeah, for vendors that create software in general, there will always be vulnerabilities. I don't think it's fair to accuse Microsoft of having bad software, right? The more successful you are, the more hackers try to target your software.

[Paul] 00:26:17  Indeed. And when you look at your typical Patch Tuesday updates, although people talk about "the Windows updates" being out, it's not Windows in the same way that you might talk about bugs in the Linux kernel. It's Windows plus hundreds of applications, broad and deep, that go along with it.

[Ron] 00:26:40  It's up to the IT person to make sure they don't run software that is outdated. That is the number one cause for concern when it comes to hacking. The first thing hackers always do is look for publicly known CVEs and just hope that the IT administrators have forgotten to upgrade their servers. It sounds trivial, but you would be surprised by how common unpatched servers are on the internet.

[Paul] 00:27:13  So, Ron, I'm conscious of time. Maybe we can finish up by you just giving us three exercises, or three simple steps, that system administrators can take, regardless of whether they're Windows shops, Mac shops, Linux shops, or whatever, to make sure that they're not just focusing on, "Oh, there's a patch, I'll apply it." What should they be doing to make sure that they have a good, organization-wide, holistic view of cybersecurity?

[Ron] 00:27:43  I mentioned two of them already. I think the first one is the adoption of network security: making sure that your organization's resources are only available to your organization's employees.

[Ron] 00:28:00  That is the first step, so network security is a must. The second thing is a good patch management approach, right? Being able to know about every server, and being able to know immediately if you have an unpatched server. And the third thing is to have a good understanding of where your sensitive data is. Those are the three things that every security leader needs to do in order to make sure they're not surprised by a hack like this.
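
Ron's second step, knowing immediately when a server is unpatched, lends itself to simple automation. A minimal sketch, assuming a hypothetical inventory of server build numbers (the version strings are illustrative, not real patch baselines):

    # Patch-status sketch: compare each server's recorded build number against
    # the minimum patched build. Inventory and build numbers are hypothetical;
    # feed this from your real asset inventory and vendor advisories.
    inventory = {
        "sp-prod-01": "16.0.10417.20018",
        "sp-prod-02": "16.0.10396.20000",
    }
    MINIMUM_PATCHED_BUILD = "16.0.10417.20018"

    def build_tuple(version):
        return tuple(int(part) for part in version.split("."))

    for server, build in sorted(inventory.items()):
        if build_tuple(build) < build_tuple(MINIMUM_PATCHED_BUILD):
            print(f"UNPATCHED: {server} is on {build}")
        else:
            print(f"OK: {server}")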

[Paul] 00:28:32  If I can summarize, maybe oversimplifying things: it's okay to put all your eggs in one basket if you watch that basket really carefully, but it's much better to put only the eggs you need in each of several separate baskets, and to protect them separately, depending on the risk associated with each one.

[Ron] 00:28:57  Correct. So I would say: know your data, take a multi-layered approach to security, and have a good patch management program. Those are the three.

[Paul] 00:29:08  Excellent. I think that’s a great point on which to end. Gentlemen, thank you very much for your thoughtfulness, for your passion and for this in-depth discussion.

[Paul] 00:29:18  That is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. If you found this podcast insightful, please don't forget to subscribe! Please share the podcast with everyone in your team, and like and share it on social media as well. Thanks again for listening, and remember: stay ahead of the threat. See you next time.

The post When IT Falls, OT Follows: Inside the SharePoint Breach with Ron Reiter appeared first on RunSafe Security.

Protecting Smart Factories from Smart Attackers https://runsafesecurity.com/podcast/secure-smart-factories/ Thu, 21 Aug 2025 14:26:02 +0000 https://runsafesecurity.com/?post_type=podcast&p=254657 The post Protecting Smart Factories from Smart Attackers appeared first on RunSafe Security.

 

Smart factories promise efficiency, automation, and global competitiveness—but they also expand the attack surface for cyber adversaries. In this episode of Exploited: The Cyber Truth, Paul Ducklin and RunSafe CEO Joe Saunders dive into the realities of protecting industrial control systems (ICS), operational technology (OT), and IoT-connected environments from nation-state actors, supply chain risks, and creative attackers.

Key discussion points include:

  • The evolution of the Purdue model in cloud-connected operations
  • Competitive risks of industrial espionage and data exfiltration
  • Why compliance is not enough—and how robust software practices improve both safety and quality
  • Practical approaches to safeguarding legacy devices without slowing performance
  • The importance of SBOMs (Software Bills of Materials) and visibility across industrial ecosystems

Whether you’re a manufacturer, supplier, or operator, this episode equips you with the strategies needed to secure your smart factory and protect your competitive edge.

 


Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] 00:00:06  Welcome back to Exploited: The Cyber Truth. I am Paul Ducklin, joined today by Joe Saunders, CEO and Founder of RunSafe Security. Hello there Joe, you have a big smile on your face.

[Joe] 00:00:20  Greetings, Paul. Great to be here as always.

[Paul] 00:00:23  I suspect the smile is, at least in part, because this is a fascinating topic that you just happen to know an awful lot about. And today's title is Protecting Smart Factories from Smart Attackers. That is almost a boundless subject, isn't it? Because a factory isn't just a bunch of welding machines or a bunch of industrial robots. It will have delivery yards, it will have collection points. It will probably have a whole office campus associated with it, with its own IT and its own non-factory workers working there. It's kind of the worst of all worlds mixed into one, isn't it?

[Joe] 00:01:04  It's a fascinating area, and you may say, from a cyber defense perspective, the worst of all worlds, but certainly an exciting place to be.

[Joe] 00:01:12  Yes, especially with all the advancements in technology, the digitization of all the robotics that goes on, the sensors around the facility, and whatnot. So I find it a fascinating area at the forefront of automation and robotics, and certainly of autonomous systems in general.

[Paul] 00:01:33  So, Joe, when it comes to building a secure environment that allows you a mixture of automated devices, from, say, a welding machine or a temperature sensor all the way up to the IT infrastructure that runs around the factory and the offices surrounding it, there's a thing known as the Purdue security model. But that is rather based on the idea that things are, well, segregated level by level, isn't it? Which isn't necessarily the case in a cloud world.

[Joe] 00:02:07  Yeah, that's the whole issue. Historically, we might look at the Purdue model to segment operational technology into layers, from level zero up to level five. Those layers start at the ground level: level zero would include the sensors and robots and actuators. Then at level one you start to see the different types of controllers, the PLCs, that are interacting with and sending signals to that equipment on the floor.

[Paul] 00:02:35  Now, PLC, for our listeners: that's a programmable logic controller. It's a very special type of computer that is typically programmed from, say, a Windows computer on the IT network, which downloads a special program that precisely controls things like, well, dare we say, centrifuges, if you think back to the Stuxnet virus, but also temperature sensors and pressure sensors: things that work in an environment that is very different from the one where a typical Windows computer sits.

[Joe] 00:03:07  Exactly. So if you look at the signaling and control that might be going out to that shop floor or factory floor industrial equipment, you can imagine that layer being a key access point to all of that equipment; those PLCs, those controllers, are not easily manageable in that sense. So that brings us to the next level of control, where you have your HMI, your human-machine interface.

[Paul] 00:03:34  So that would be like the panel with the buttons.

[Joe] 00:03:37  Exactly. And that's where things get interesting, because you start to figure out: okay, how are we communicating with those devices? How are they communicating with the controllers? And how is all that connected to the factory floor down there at level zero?

[Joe] 00:03:51  So we're at level two. And alongside those human-machine interfaces, where you might have that panel you talked of, we've got the SCADA systems that allow you to monitor and control industrial processes in general. With that said, it makes sense that you have the structure of the Purdue model, where you have those three levels. Up from there, you are going to have systems that collect data, monitor historical activity, and give you workstations to gain access to those SCADA systems and whatnot. And the whole question then is: do you have that whole OT infrastructure divided from your IT systems? Those IT systems at the enterprise level would be levels four and five. In some views there's a DMZ between the IT and the OT.
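
To make the layering Joe just walked through easier to picture, here is a toy sketch of the Purdue levels as a segmentation policy in Python. The adjacency-only rule is a deliberate simplification; real deployments add DMZs and carefully controlled exceptions.

    # Purdue-model sketch: levels 0-5, with a toy policy that only allows
    # traffic between the same or adjacent levels. Purely illustrative.
    PURDUE_LEVELS = {
        0: "Physical process (sensors, actuators, robots)",
        1: "Basic control (PLCs)",
        2: "Supervisory control (HMIs, SCADA)",
        3: "Site operations (historians, engineering workstations)",
        4: "Business logistics (enterprise IT)",
        5: "Enterprise network and internet-facing services",
    }

    def traffic_allowed(src_level, dst_level):
        """Toy rule: communication only between the same or adjacent levels."""
        return abs(src_level - dst_level) <= 1

    # A level-4 ERP host talking straight to a level-1 PLC should be flagged:
    print(traffic_allowed(4, 1))  # False
    # A level-1 controller driving a level-0 actuator is expected:
    print(traffic_allowed(1, 0))  # True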

[Paul] 00:04:40  The traditional way of thinking of a DMZ, if you're thinking of, say, your home network, is your router: there's one wire that goes to the internet and another wire that goes to your wireless access point, and never the twain shall meet. But we also know that these days, even on a home network, there may be lots of devices that can connect to the internet anyway.

[Paul] 00:05:04  They might come with their own Bluetooth. They might come with their own mobile phone SIM card, air quotes, "for emergency backup". So in that nice layered model with the DMZ in it, there are all sorts of tentacles sticking out from the side, possibly at every layer. What if you have a valve actuator that just happens to be able to communicate in two different ways, by wireless and by Bluetooth? And what if both of them are connected at the same time by mistake? How would you ever know?

[Joe] 00:05:33  Exactly. And so that becomes the big question: what if these factories are then connected to the cloud? And maybe there's really good reason for it. Maybe it's inventory tracking in systems that are tied to other systems in the outside world. Maybe it's ways to get better management across factory floors and to connect data. The whole notion of having industrial IoT devices that bring value, and that may actually be connected to the outside world, well, I wouldn't say it makes the Purdue model irrelevant; it just changes the complexity a little bit.

[Joe] 00:06:10  And with that said, I think what we're finding in general with smart devices, in smart factories, in a world of smart attackers, is that these connected devices of course bring productivity gains, but they also bring a new level of security consideration.

[Paul] 00:06:27  And I guess there's also the issue that, although we should be deeply concerned about attackers getting into, say, a factory network and getting write or execute access by fiddling with things that they shouldn't be in, and we can talk about that in a moment, even if all you have is the ability to look at a webcam or to retrieve data from some SCADA device, even if all you're doing is essentially spying on what's going on in the machinery, that can give you an enormous competitive advantage. Commercial advantage, national-level intelligence advantage, couldn't it?

[Joe] 00:07:08  You certainly can imagine it if you get creative. Would a manufacturer in China want to know what's being produced in Germany, or some other scenario like that? And what are the considerations there? In an age of supply chain risk and, as you say, competitive intelligence to really understand where you stand against competitors, there are a lot of different motives, quite apart from a motivated attacker who simply wants to sabotage something and take systems down.

[Joe] 00:07:39  But exfiltrating data in these environments is certainly one of the top risks, and one of the drivers of the cyber threat in the first place.

[Paul] 00:07:48  So it's not just that the robots have run amok, or that the welding machines have welded each other together and shut down the factory. It could be something much less obvious, such as: hey, someone who does not have our best interests at heart knows how much chromium we've got in the stores for manufacturing. Somebody knows that we're falling behind on the production of goods. Somebody knows that there's a whole new project. So read-only access inside a factory is actually a clear and present danger, isn't it?

[Joe] 00:08:21  It is, a "present and clear danger", so to speak. I guess if you think about some of these manufacturing companies, they are multinational firms; in many cases, they have plants around the world.

[Paul] 00:08:34  Yes.

[Joe] 00:08:34  And with that, there is a need for the enterprise itself to know all this information about its own operations. But then again, that same data matters in the national security realm: data is one of the new forms of oil, if you will.

[Joe] 00:08:49  It's one of the key units of production that really makes a difference in understanding what's going on. And so if you combine data collection with analytics, then you can certainly have a head start over your competition, or, maybe worse, in country-level, nation-state-level competition.

[Paul] 00:09:06  Well, we already spoke about this, if you like, when we covered Salt Typhoon in an earlier podcast. Those Chinese hackers were probing industries around the world, and seemed to have a particular predilection for getting into telecommunications companies. And then, in the US, they realized: hey, there's warranted surveillance that's been collected for things like criminal court cases; why don't we just take that? It's going to tell us all sorts of exciting and interesting stuff. A smart factory needs to be smart so that you can run it smartly, but a smart attacker doesn't have to know exactly what they want before they break in. Once they're in, they can have a look around and go: right, that's interesting; that's even more interesting; let's take all of it.

[Joe] 00:09:55  So oftentimes I joke that it’s really just up to the level of creativity of the attacker.

[Joe] 00:10:00  But your question is a good one, and it points out a different angle that is very important to consider, which is that you might not know what you're looking for, necessarily, and then you might find some goods; all the more reason to have more robust security and segmentation. But you're right: for the well-motivated attacker, the first step is to see whether they can gain access. Then they'll figure out what they can gain access to once they're inside. And then they might figure out: okay, how do I want to persist, and how do I leverage what I found once I got in there?

[Paul] 00:10:33  So Joe, given that there are so many different risks at so many different levels in a typical factory environment, what standards exist that a factory owner or operator either has to comply with, or ought to aspire to comply with, because it means they've thought the problem through, at least in part?

[Joe] 00:10:55  Yeah. Certainly, if you think about factory floors, there are safety issues and security issues, and those security issues would be cyber-related.

[Joe] 00:11:05  So standards like IEC 62443 help guide you along on the security side. Are they required? No, they're not mandatory, but they are a strong indicator of your commitment to the security posture of your enterprise, and they will go a long way to ensuring that you've got the right practices in place. 62443 is widely adopted, and acknowledged as being the right level of detail for OT systems inside industrial automation facilities, with good reason, because, as I've mentioned in the past, when we think about autonomous systems we're also thinking about safety, or safety of flight. And in industrial automation we've got blades, we've got equipment, we have forklifts, and we have autonomous systems within it all.

[Paul] 00:11:55  Some factories and manufacturing plants these days do actually also have flight, don't they? They use drones to move things around. So it's all of the above.

[Joe] 00:12:06  All of the above. And it's far different from the Rouge plant south of Detroit, when it was all raw materials brought in and you built everything within your own walls, so to speak.

[Joe] 00:12:18  Here you're getting component parts and manufacturing things, but you have all this industrial equipment that's doing a lot of the operations, with forklifts, with drones, and with other devices that are automated. Certainly safety is a concern.

[Paul] 00:12:32  So, Joe, that 62443 standard is a joint standard of the ISA, the International Society of Automation, and the IEC, the International Electrotechnical Commission. So it's not just something that some bean counters thought of. Given that it's not compulsory, do you want to say something about standards and compliance, and how you should fit that into, if you like, the spirit of your organization? Because I think there are still an awful lot of companies out there that go, "It's a checkbox you have to check off, and then your riches will multiply." You should be approaching it from a completely different angle, shouldn't you?

[Joe] 00:13:19  Yeah. Certainly, checkbox compliance is an approach that some people take, and essentially you're saying: we're going to do the bare minimum, we're going to avoid disrupting operations.

[Joe] 00:13:31  And we're going to simply do what we can on paper to suggest that we're in compliance. But think about what security means when we're dealing with software, with industrial assets, with industrial automation facilities: quality is one of the defining aspects of these organizations, and having good practices around software and software security is a subset of quality. I don't think people want to just accept that whatever level of bugs they have in their software is simply okay and move on, because it's not only about the end product; it's a reflection of the organization and how it approaches quality in general. So with that, safety and security of course matter, and the more robust your software practices are, the higher the quality standards you have in your products, and in fact, perhaps, the more efficient your operations will be. I tend to find that the organizations that have more robust software development, more automated tooling, more automated processes, and fewer manual tests have fewer bugs, fewer vulnerabilities, better security, and better products.

[Paul] 00:14:48  And that's not because they've used AI to eliminate humans, is it? It's because they've used automation and AI to free up the humans to do higher-order tasks that actually bring security down from the top, not just trying to patch it in afterwards like in the bad old days.

[Joe] 00:15:06  Exactly, because in the bad old days you were chasing patches. What we want to do is increase code quality, reduce security exposure, and, as a result, elevate the overall safety and quality of the programs and products that get produced.

[Paul] 00:15:25  Joe, maybe I can ask you a possibly quite tricky question to answer, both technically and culturally. If you're somebody who believes in secure by demand, which is where you would prefer to acquire products and services from somebody who can show that they at least take cybersecurity seriously, then you might ask the question: do you have IEC 62443 certification? And if the answer comes back "yes", what question do you ask after that to make sure that you're not just talking to a checkbox complier?

[Joe] 00:16:03  Well, I always want to ask about your software development practices and your software development lifecycle and what you’re doing in the software development lifecycle to integrate security.

[Joe] 00:16:18  And that really tells me: are people bolting on security after the fact, maybe just trying to complete that checkbox? Or have they thought through their processes more completely, with robust processes to ensure that security is part of the equation, integrated and built in from the first? Another follow-up question would be: what does your patching process look like? What does your testing process look like, and how do you manage that? And how do you manage the trade-offs between tech debt and new feature development?

[Paul] 00:16:56  "Tech debt"? Now, you know that I don't like that term, because I think it's a bit of a euphemism. It kind of means, "We took all sorts of shortcuts to get the product out of the door in the first place, and we've never gone back and corrected the sins of the past." But I know what you mean. All companies will eventually accumulate code or products or components that aren't perfect, possibly because, with the best will in the world, they were only found to be flawed after they were deployed.

[Paul] 00:17:25  How good are you, and how willing are you to go back and confront that? That is a very important question, isn’t it?

[Joe] 00:17:32  It is. In a lot of software development efforts, we look at the initial development and we think we're done, but obviously there's a whole other set of phases beyond the initial release: all the maintenance and all the support that you have for a product. Even in these industrial control systems and OT networks in industrial automation facilities, there is a need to update and patch software from time to time. That's really what I mean: once those support and maintenance efforts kick in, you're supporting ongoing existing code, and you might not be developing new features because you have maintenance work that you need to do. Your point is well taken, though; what I consider tech debt, in some cases, is the patching and the prioritization of vulnerabilities that have to get fixed after the fact. And as those pile up, you're crowding out future development, because you're consuming resources on the patches.

[Joe] 00:18:37  So my view is that if you have a more robust software development lifecycle, then you have a more efficient way to address those patches. But you might also have other things built in. At RunSafe, for example, we advocate inserting runtime defense into your software from the get-go, so that systems can prevent exploitation even if a patch is not available; the idea there is to add in robust security. Then, going back to your original question, I want to understand what people's software development processes are like, what their patching processes are like, and what their testing processes are like, because behind every compliance claim there are a lot of processes, and you want to dig into those processes and understand: are people committed to safety and security, or are they committed to checkbox security?

[Paul] 00:19:30  And so you can tell a lot, can’t you, from an organization’s attitude to vulnerability disclosure. And if a company has a robust practice for revealing its vulnerabilities and explaining how it was able to mitigate or fix those, that’s a very good sign, isn’t it?

[Joe] 00:19:50  Exactly right.

[Joe] 00:19:51  If you're not disclosing vulnerabilities and not embracing that, then I would be concerned as a customer of your product. At RunSafe, we disclose things. We also build security into our products, we adopt secure-by-design practices to boost code quality, and we make our technology accessible, so that people have the confidence of knowing that we don't just defend other people's code; we also look at our own. We live by the same practices that we hope our customers are living by. And if you consider adding exploit prevention into the products that you deliver, then you get the luxury of the best of both worlds: you can disclose and fix, but know that you're already protected. What that means is that it really alleviates a lot of this concern about trying to hide vulnerabilities. It becomes: hey, the majority of the ones we do find are not accessible, they're not exploitable; we're still going to fix them, but we're telling you that we're already ahead of the curve, ahead of where the attackers are.

[Joe] 00:20:57  That’s just a really good example of a way to embrace security and use it to help alleviate operational pressures, alleviate security pressures, and certainly find ways to thwart attackers even when a patch is not available.

[Paul] 00:21:12  So, Joe, maybe we can just zoom back in to that level zero of the Purdue model, the very low level: the devices that open and close an individual valve, or that monitor the pressure in one vessel. Those obviously may be years, even decades, old. It may be very difficult to replace them, because they might have been built into devices like a lathe, something that you can't just simply open up and fiddle with. How do you deal with protecting those very low-level devices against a smart attacker who's decided: hey, I've milked the network for all the information I'd like, but I also want to know how I could disrupt this factory, if I wanted to, at some time in the future? Where do you start with that?

[Joe] 00:22:02  Well, I'd just run for the hills. Just kidding, of course.

[Paul] 00:22:08  When I heard you say the word "run", I thought: oh, I know what's coming next.

[Joe] 00:22:12  What I would actually recommend, instead of running for the hills, is that you run safely with RunSafe. Not to get too commercially oriented about our own products, but I do think you point out a really significant challenge that people face. When you're looking at everything from a risk management framework, trying to prioritize which assets to do what with, and knowing that there are some devices that have low compute power and limited compute resources available, applying protections that don't add new code, don't add software agents, don't slow things down, and don't consume more memory is really one of the best options. So you can extend the life of legacy systems by applying memory safety protection in a way that doesn't put any new software on the device. If you move things around inside the device without disrupting its operational execution, you can imagine that that makes it harder for the attacker to find the vulnerability in the first place and to take that system down.

[Joe] 00:23:18  But I think what you want to do is assess your whole network: look at what's reachable, what's exploitable, and what the consequences are; prioritize; and then look at those items. When you have hardware shortcomings, and you lack power and compute resources on devices, you still have good alternatives; I think that's the key thing. With that, you can apply Load-time Function Randomization from RunSafe, our proprietary technique that allows you to add in security even without a patch, and that's a good opportunity for folks to extend the life of those systems. All the while, these organizations are thinking about when to replace certain devices, and part of me thinks the answer is when you get so much more value out of a new device than from simply letting the current one operate. There could be a lot of added value emerging in the industry, based on the new architectures of newer devices that bring some of these smart capabilities. So when you're buying those products as well, you certainly want to really understand what the security is, because oftentimes these are connected devices that may be getting signals from elsewhere on the factory floor.
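
As a loose intuition for why moving things around raises the attacker's cost, and emphatically not a description of RunSafe's actual proprietary implementation, consider a toy Python dispatch table whose entries land in different slots on every load:

    # Toy illustration of load-time randomization: an attacker who hard-codes
    # a slot (think: a memory address or offset) breaks when the layout moves.
    # A pedagogical sketch only, not how any real product implements it.
    import random

    def open_valve():  print("valve opened")
    def close_valve(): print("valve closed")
    def read_sensor(): print("sensor read")

    functions = [open_valve, close_valve, read_sensor]
    random.shuffle(functions)  # "load time": layout differs on every run

    dispatch = dict(enumerate(functions))
    by_name = {fn.__name__: fn for fn in functions}

    by_name["open_valve"]()  # a legitimate caller, looking up by name, still works
    dispatch[0]()            # an "attacker" who hard-coded slot 0 gets pot luck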

[Paul] 00:24:31  Whether you're a manufacturer of products that help factories operate, like those valve actuators; or the owner of a factory who wants to buy valve actuators; or somebody who wishes to choose a factory to manufacture your goods: what would be your primary advice to someone in any of those three classes for upping their game when it comes to cybersecurity?

[Joe] 00:25:00  In this scenario you describe, where you have maybe the end customer, the manufacturer, and then the supplier, there are security questions that pop up at every one of those levels: making sure that the final product has security, that the manufacturing plant itself is secure, and that the software you derive from your supply chain is secure. I view that as asking for insights into the security posture, starting with standards in these industrial automation facilities. There are five reasons why security gets adopted. One of them is the governance of the manufacturer itself, its policies. One is the compliance that we talked about. One is the known threat actors that are targeting these kinds of devices.

[Joe] 00:25:48  And what are their go-to moves? We think about China and other nation states, and the things they're going to try to compromise. Then, getting to part of your question, there's what customers are asking for. And finally, is there any security mechanism that helps differentiate the product, maybe lower cost with more resilience, so that you can expect a longer lifespan? I would go down that checklist and ask: what is your governance policy? What is your security compliance? What threat actors are out there? What are you doing to differentiate your products, and what are you doing to satisfy customer requests? That gives you a very macro-level view of what's happening. Then, within that, I think there are micro-level views that are super interesting. What's really interesting to me is understanding the software bills of materials of all these devices and all these components that come into the factory floor, into the industrial automation facilities. Why? Because heretofore we just sort of saw these things as black boxes that could be compromised, without really knowing what could go wrong.

[Joe] 00:26:55  But there are so many tools out there that help you, the manufacturer, or you, the factory owner, know exactly what's going on in your infrastructure and understand what your risk is. So I would look at the macro view, the drivers of security adoption, as one way to get a picture, and then I would look at the micro drivers, and the software bills of materials in particular, across all my devices, and look at which of those systems are most vulnerable.
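
That micro-level view can start very simply. The sketch below assumes a CycloneDX-style JSON SBOM on disk (the file name is a placeholder) and queries the public OSV vulnerability database by package URL:

    # SBOM triage sketch: list the components declared in a CycloneDX JSON SBOM
    # and ask the public OSV database (https://api.osv.dev) about each one.
    import json
    import urllib.request

    with open("device-sbom.json") as fh:  # placeholder file name
        sbom = json.load(fh)

    for component in sbom.get("components", []):
        purl = component.get("purl")  # purls normally embed the version OSV needs
        if not purl:
            continue
        query = json.dumps({"package": {"purl": purl}}).encode()
        req = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=query,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            vulns = json.load(resp).get("vulns", [])
        print(f"{component.get('name')} {component.get('version')}: "
              f"{len(vulns)} known vulnerabilities")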

[Paul] 00:27:23  And there are some encouraging signs on the secure-by-demand angle of things, aren't there? Now, I don't want to suggest for a moment that a hospital is a factory, but there are some similarities: there are lots of embedded devices all over the place, and eventually it all connects to some IT network, and so on. In the recent RunSafe health industry report, there was an encouraging number of consumers of medical products who said there were products they would have loved to buy, because they would have been great for medical care.

[Paul] 00:27:58  But they declined the purchase specifically because they felt that the supplier did not take security seriously enough.

[Joe] 00:28:05  Absolutely. And I do think, in the case of industrial automation in general, when you look at the overall picture, the investment in all these underlying industrial IoT devices and the robotics and all the machinery in there, you have to consider not only the initial purchase but also the lifespan of those devices and their security posture. Security has become a very key element in the decision of which equipment to buy, because we don't want these things to be disrupted, to go down, or to be outdated in a short amount of time; you want a nice, long lifespan. These organizations are capitalizing these purchases over many years, and the security has to be complementary to that economic equation for the facility.

[Paul] 00:28:56  I think that's a very positive point on which to end: the idea that when it comes to cybersecurity, to some extent, the buck stops with all of us. We all have to do our bit, and we can do it at all levels.

[Paul] 00:29:11  Once again, thank you so much for your passion and your informed commentary on a difficult and extensive topic. That is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everyone who tuned in and listened. If you enjoyed this podcast, please don’t forget to subscribe so you can keep up with each week’s episode. Please like us, share us, link to us on social media and be sure to tell your whole team about us. And remember folks, stay ahead of the threat. See you next time.


After DEF CON: What the Maritime Hacking Village Revealed About Real-World ICS Risk https://runsafesecurity.com/podcast/defcon-maritime-hacking-village-ics-risk/ Thu, 14 Aug 2025 14:48:36 +0000 https://runsafesecurity.com/?post_type=podcast&p=254626 The post After DEF CON: What the Maritime Hacking Village Revealed About Real-World ICS Risk appeared first on RunSafe Security.

 

DEF CON 33 made history with its first-ever Maritime Hacking Village, bringing together hackers, engineers, and policymakers to test the resilience of autonomous vessels, port cranes, and other maritime systems.

In this Exploited: The Cyber Truth episode, host Paul Ducklin is joined by Joe Saunders and Shiv Saxena from RunSafe Security to unpack what went down in Las Vegas. From real-world exploits to the broader implications for ICS and OT security, they explore how the maritime industry is adapting to emerging threats.

The discussion covers:

  • Firsthand stories from the narco sub and port crane challenges
  • How maritime hacking compares to past DEF CON villages
  • Why memory safety and legacy vulnerabilities are critical blind spots
  • How public hacking competitions accelerate industry-wide security improvements

Whether you’re defending ports, building autonomous systems, or securing embedded devices, this conversation offers valuable takeaways on staying ahead of adversaries—before the wake-up call arrives.


Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Special Guest – Shiv Saxena: Shiv is a sales rep at RunSafe Security. He’s currently examining memory safety and supply chain security in medical devices, vehicles, and critical infrastructure. He is generally interested in the relationship between security and performance. Shiv has worked at security and observability companies, including Black Duck, Synopsys, and Datadog.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] 

Welcome back to Exploited: The Cyber Truth. I am Paul Ducklin, and I'm joined as usual by Joe Saunders, CEO and Founder of RunSafe Security.

Hello, Joe. 

[Joe] 

Hello, Paul. Great to be here and excited for today’s show. 

[Paul] 

You and I both, because there is a secret about DEF CON 33 that we don't yet know, and that our special guest, Shiv Saxena, who is a sales executive at RunSafe, is going to tell us; it's waiting for us. Shiv, maybe before we start, I'll just remind people what we're planning to talk about in this episode, and that is DEF CON in review: maritime hacking and beyond.

In particular, what happened at the Maritime Hacking Village. Just for our listeners, the hacking villages are a noble DEF CON conference tradition. And there have been hacking villages in the past that have taken on satellites, voting machines, medical equipment, automobiles, all of that sort of stuff. But this is the first time that the maritime industry has been put to the test. 

So Shiv, tell us what was up for hacking and how successful were the experts trying to break in? 

[Shiv] 

Yeah, happy to. Like you said, even though this was the first time that maritime had its own village, you wouldn't have been able to tell once you entered. They had a big, diverse group of challenges and experts.

You were able to hack into an autonomous sub from Havoc AI. They had a captured drug smuggling vessel, actually. 

[Paul] 

A narco sub? 

[Shiv]

Yes. 

[Paul] 

Grabbed from underwater by the US Coast Guard, I believe. Yes. So here’s the question that Joe and I don’t know the answer to. Did anyone get in?

 

[Shiv] 

Yes, yes. So a few people got into the container as well. And then there was some social engineering required to identify some of the drug smuggling. 

[Paul] 

So the container challenge, that was a port crane, which had come all the way from Los Angeles, I believe. If you hacked the crane, then you could get into the container. Someone actually got in, didn’t they? 

Right the way into the container. 

[Shiv] 

Yeah, yeah. 

[Paul] 

And what did they find? 

[Shiv] 

Hannock and Spencer Beer made it in, and I believe they found a couple crates of alcohol. 

[Paul] 

A beer bust. So in a real world scenario, that sort of attack could have been state-sponsored actors manipulating the contents of something that had either just arrived or was just about to depart from a port. So that would be a huge risk to the supply chain, wouldn’t it, in real life? 

[Shiv] 

Yeah, that’s correct. I think if you follow the main megatrends in society today, all industries are becoming increasingly connected and intelligent. And while that creates a lot of growth in society and a lot of innovation, that is also a much greater surface area for attackers like those who managed to crack the code at DEF CON. 

[Paul] 

They got into the container. They found loads of beer, I guess. Good for them. 

But what happens now? How does the world become a better place because of what they managed to do in the world’s eye? Why does this benefit the good guys rather than the bad guys going, hey, we’ll do that? 

[Shiv]

 

In my preparation for visiting the Maritime Hacking Village, I did some studying of naval history, and I found this academic, Andrew Lambert, who has a thesis that a society, in order to become a sea power, has to massively change its culture and investments. And I think what you saw with the Maritime Hacking Village was a concerted effort to redirect the very cool and diverse hacking community in the West towards focusing on this new intelligent infrastructure.

This is just starting to scratch the surface, getting people familiar with the idea of container ports or pump controls or autonomous subs. So I think the benefit overall to society is that we are taking this expertise and applying it to the critical infrastructure that our economy runs on. Before I started studying this, just as an American millennial, my exposure to ships, and maybe to attacking ships, came from the likes of Pirates of the Caribbean, where you'd see Jack Sparrow and Orlando Bloom commandeering a vessel.

And back in the early 1700s, if you wanted to commandeer a vessel, that meant replacing the people who were operating the propulsion, the weaponry, and the navigation with your own people.

[Paul] 

Yes, it was haul alongside, fire your cannons, board, fight with cutlasses, take over the vessel in person. 

[Shiv] 

Exactly. But if you fast-forward to today, a lot of these vessels don't have any people. All of those functions, again, propulsion, navigation, payloads, are handled by machines and computers.

So if you actually want to attack and take control of those vessels today, you need the type of offensive engineering expertise that we saw at the Maritime Hacking Village at DEF CON this year. 

[Paul] 

If the legitimate pilot can be remote, then the pirate pilot, whether it’s a ship or an aircraft, could be remote as well. So in amongst all of that, what was a standout conversation for you? 

[Shiv] 

It's hard to say. There were a lot of different groups. You could have conversations with policymakers, or with people coming from naval strategy out of Penn State, and then you could also have just deep offensive engineering conversations.

Duncan Woodbury's experience of hacking into automobiles being applied to maritime vessels was quite illuminating to me, because they faced very similar challenges in terms of a disparate supply chain and the increasing complexity of the capabilities of these vessels. And that's all happening at the same time as we're seeing the West at large facing a new sort of near-peer adversary.

[Paul] 

When it comes to security problems that we might consider specific to what you might call level zero of industrial control systems, the motor actuators, the valve operators, the pump switchgear, and all of that stuff, what sort of interest did you see in memory safety issues in particular? That is, I know, something that everyone at RunSafe is very, very passionate about, with very good reason.

[Shiv] 

Well, in terms of memory safety, I would say that it’s very important to get started today in maritime hacking because we do need to get through some of the low-hanging fruit. And a lot of that can be in really basic things like attacking network connectivity with old credentials and that type of thing. So I think what you’ll see in the beginning of these challenges is that a lot of people are focused on some of the entry-level things. 

But then, when it comes to the deeper controls within these embedded systems, as you start adding more chips and more compute, that's where you'll start finding very old systems that have been running for decades, which this generation of hackers at DEF CON doesn't necessarily have experience with. Those are also some of the most damaging and most sophisticated types of exploits, ones that a lot of our adversaries are skilled with. So I would say, from a memory safety perspective, it's very important to invest today in a deep understanding of the supply chain stack and of some of that legacy code, which has all these memory safety vulnerabilities in it that are not so easy to patch.

[Paul] 

So Joe, maybe I can ask you at this point, what was it that led you and RunSafe to want to sponsor the Maritime Hacking Village specifically? What brought the passion about the ships? 

[Joe] 

Well, I think you can tell, even talking with Shiv, that RunSafe is mission-focused around the threats that affect critical infrastructure and society, and certainly US interests and national security interests. If you add all that up and consider all those factors, then look at how much commerce is done by way of shipping and transportation-related cargo, and then combine that with the advent of autonomous systems in general, what's happening in the South China Sea, and just the geopolitical nature of it, from our perspective it's vitally important that we get security right, not only for these kinds of vessels but for all of critical infrastructure.

And so, with the Maritime Hacking Village, I think a lot of people inside RunSafe Security recognized its importance because of its national security and geopolitical implications. And I love the fact that Shiv talked about the historical nature of naval power, its influence on the world, and how those that control the seas ultimately do control a lot of power in the geopolitical struggles between nations. So if you add all that up, like I said, it's vitally important to have security at the forefront of today's modern maritime industry.

[Paul] 

Do you think that there is a tendency in the cybersecurity industry in general, if I could use a maritime metaphor, to put up the periscope and just try to look as far forward as possible and go, "Oh, let's embrace all this new technology," while we're still struggling with what I generally refer to as the sins of the past?

[Joe] 

There is a pressure to think about, even with new technology, what features am I releasing? Do I have the best capability, the best differentiation on my particular vessel? Do I have the latest technology? 

And so a lot of it is about what new capability we bring. But in the cybersecurity realm, thinking through the consequences of some kind of compromise or some kind of attack is also extremely important. So I almost think we need two periscopes, in that sense.

One to look at what features we need in these vessels, and the other at what the potential consequences of an attack might be. And as we've seen with other security researchers, I think that by bringing the Maritime Hacking Village to DEF CON under Duncan's leadership, and getting creative minds around what can be done and what can be exploited, everyone will be helped to put the proper perspective on the consequences of an attack while balancing forward-thinking features and new product development in general.

[Shiv] 

And from a maritime perspective, I don’t know that the layperson would even be familiar with the fact that we have drones that are autonomous, driving themselves through the ocean today. 

[Paul] 

Yes, when you hear drone, you think of something that flies. You don’t tend to think of them as ships on or below the surface that basically go to sea and maybe spend their whole working life never coming back to port. 

[Shiv] 

Yeah, exactly. I mean, a lot of these are solar-powered. They’re navigating themselves. 

They’re communicating with satellites. They have Starlink. So I don’t think people are necessarily familiar with the fact that technology has gotten to that point in maritime strategy. And I would say it’s understandable for America and the West at large to develop these features for the capabilities that they provide. But every new innovation, every new feature that you create is another opportunity for the attacker to exploit. Not only is there an increased attack surface, where every line of code, every new chip that’s added to the vessel is another opportunity for a mistake or for innovation on the attacker’s side.

It also means the payoff of getting control of that vessel is higher because now you have something capable of much more. 

[Paul] 

So what do you say to the naysayers, and you do hear them quite frequently, who hear about things like Maritime Hacking Villages, and this was certainly a thing when voting machines were put up for hacking for the first time at DEF CON, who say, why don’t you do all this secretively? Why don’t you do all this in the back room? Because if you do the hacking publicly, then aren’t you tipping off the attackers? 

How does that benefit the good guys doing this all publicly? 

[Shiv] 

I would go back to Andrew Lambert’s Seapower States, which is that you can’t do this in secret, because you need to shift the entire culture of a society to gain these capabilities. If it’s just a small group of people that no one’s ever heard about, they might be able to make some advancements, and they absolutely have, but until you actually take the entire power and workforce of a society and put them towards a singular goal, you’re not gonna be able to compete, and you’re certainly not gonna be able to compete with China, which has a massive labor advantage in offensive cybersecurity, as well as entire departments dedicated to maritime strategy.

[Paul] 

And they also have laws now, don’t they, that if you find, say, a zero-day bug in China, then although eventually you can disclose it and bask in the reflected glory, you’re obliged to reveal it privately to the state apparatus first. If you’re the good guys, you better be ahead because otherwise you’re going to be further and further behind, so to speak. Would you agree with that? 

[Shiv] 

Yeah, correct. China has a very sophisticated approach to the zero-day market that we can definitely learn from. 

[Paul] 

So Joe, do you think the outlook from what happened at DEF CON in particular and in the hacking villages bodes well for us this year? Do you think that it will cause, dare I say it, a sea change in the industry’s attitude towards security, particularly in autonomous or industrial control systems?

[Joe] 

I can imagine that folks that participate in the Maritime Hacking Village will bring forward their enthusiasm for what they learned and what transpired there. And I do think that a majority of the industry probably needs to hear more about what the Maritime Hacking Village is about and what was accomplished there. Product manufacturers who produce these kinds of vessels can look to Shiv as an expert in this area, or at least someone who’s participated and knows a lot of the folks involved in these things and RunSafe’s role in it. 

I expect Shiv’s going to have a lot of great conversations just himself, and if that represents how others will talk about their experiences at DEF CON and the Maritime Hacking Village, then the word will spread pretty quickly. In that regard, I do think there’s good hope that this will help elevate the security posture in general. And at the same time, I know these things don’t change overnight either. 

It’s kind of a sustained way of thinking about these problems: really having conversations directly, one-on-one, with folks who are producing these devices and whose information or operations depend on them, about the security risks and what the consequences are if a nation state attacks their device or if some kind of hacker looks to compromise a device in general. I do think it will improve security, and I think it’s a function also of all the conversations that I know RunSafe will have, Shiv will have, and all the participants, and especially Duncan, who led the Maritime Hacking Village.

[Paul] 

You’re willing to go and engage with technical communities face-to-face, actually get stuck in, and not just rely on reports or papers or things that may come out six months, nine months, 12 months later. It’s a little bit more a case of confronting the problem head-on, isn’t it?

[Joe] 

Yeah, and I think it’s also community and engaging with like-minded people. And I think those experiences inform how you go about your work efforts going forward. I actually would welcome Shiv’s thoughts on that because I’m sure he made really good contact with people and had interesting conversations, and that’s how community is built. 

DEF CON fosters that kind of environment with the Hacking Villages, and Maritime Hacking Village, I think, is an important one going forward for everybody. 

[Shiv] 

Yeah, DEF CON has a tremendous density and diversity of talent.

[Paul] 

That’s a great way of putting it, density and diversity. There are things that you can learn there that would probably take you years to find out elsewhere if you ever found them out at all. 

[Shiv] 

In that one village, I spoke to people about memory safety issues. I spoke about policy concerns. I spoke about how ships work today versus a hundred years ago. 

I even randomly had a conversation with someone who was focused on quantum computing for computational chemistry. I mean, it was a very diverse group of people. Wow. 

And when you have the opportunity to get those people in one area, focusing on one problem, you can really shift the way society operates, and our capabilities at large, at an accelerated rate that you can’t necessarily get when everyone’s off doing their own thing or potentially chasing after one hype cycle. I think it’s important to build and develop institutions if you want to actually progress as a society.

[Paul] 

Well said. You’re a sales guy. Normally you expect salespeople to talk about all the deals they’re going to close and all the contacts they made.

You haven’t mentioned that at all. You’ve just spoken about the fascinating, as you say, depth and density and diversity, which is really great to hear because as Joe has said many times before, confronting your own cybersecurity weaknesses is a strength. It’s not a weakness, is it? 

[Shiv] 

That’s correct. And I would say a lot of people, certainly a lot of engineering orgs, can often see security as a blocker or something that gets in the way. Yes. 

But I would challenge that many of the strongest engineers that I’ve met have an offensive mindset. Somebody who I look up to, Halvar Flake, said in a presentation, the attacker is the only person paid to understand the entire system. And I think when you have that level of understanding of a system, you can innovate in ways that you wouldn’t have expected. 

So I would just challenge all engineering organizations to really invest in security because it can have benefits beyond just maybe some of the compliance or boring stuff that people think about. 

[Paul] 

Joe, I can see you on my video nodding vigorously, because I know you are really passionate about not doing any sort of checkbox compliance where you just do it so you get the certificate that you can put in a frame and put on the wall. What new conversations do you think might come out of what you learned specifically from the Maritime Hacking Village? Changes in engineering operations, in the software development life cycle, and so on.

[Joe] 

Naturally, checkbox compliance doesn’t really move the needle for folks; it’s really just asking, how do I do the minimum I can? And I think one of the exciting things that Shiv brought back and mentioned about the Maritime Hacking Village is the depth of conversation that you can have with folks.

More specifically, in terms of new conversations going forward related to maritime, we definitely see that the US government is investing in the Navy and investing in Indo-Pacific Navy-related assets. And a key aspect of that budget spend happens to be about autonomous vessels.

[Paul] 

Yes, I think in a recent podcast, we had a guest who’d just come back from South Korea and noted that the military experts in South Korea are saying, we don’t need aircraft carriers anymore. Let’s take those billions and billions of dollars and let’s have loads of autonomous vessels that can go out and do lots more in lots of different places. That, once again, really is a sea change, pardon the pun, isn’t it? 

[Joe] 

It is a sea change. If you’re collecting data in and around Taiwan about the movement of vessels, about ports, or you’re detecting what happened when an underwater cable was severed, I think compromising those devices is as much about information collection as it is around disrupting an adversary and whatever their intentions might be. And so just as we saw in Ukraine, drone warfare is here.

And I think in the waterways, it’s likely the same, and then some with the information collection that’s going on. 

[Paul] 

So Shiv, back to you. What was the most exciting technical feedback that you heard? What was the thing that lit up your brain cells the most? 

[Shiv] 

Well, I got into a decent conversation about using memory corruption attacks and return-oriented programming chains. And it’s interesting to hear the mindset of hackers, because in one sense, a lot of them don’t necessarily want to deal with memory if they don’t have to, because it’s so complicated and you get so low level. So it was just interesting to hear the perspective that if I don’t have to do it, I don’t want to do it necessarily.

But at the same time, I recognize that it is one of the most powerful ways to actually get root access to a device. So it’s something I hear often when speaking to offensive engineers. It seems like people who are really adept at memory attacks are a subset of another subset of engineers.

So I just thought it was an interesting perspective that I’ve been exploring over the past year. 
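For readers who want a feel for what a return-oriented programming chain actually is, here is a deliberately simplified Python stand-in. Real ROP chains are sequences of raw machine-code addresses inside an existing binary, not Python functions, so everything here is purely illustrative:

```python
# Toy Python stand-in for a ROP chain; real chains are lists of raw code
# addresses already present in the target binary, not Python functions.
def gadget_load_value():
    print("gadget 1: load an attacker-chosen value")

def gadget_move_to_register():
    print("gadget 2: move that value into a register")

def gadget_trigger_syscall():
    print("gadget 3: make a system call, e.g. spawn a shell")

# The attacker injects no code of their own: they overwrite the stack so
# that each 'return' jumps to the next borrowed fragment in the chain.
rop_chain = [gadget_load_value, gadget_move_to_register, gadget_trigger_syscall]

for gadget in rop_chain:
    gadget()  # control hops gadget to gadget, doing work the program never intended
```

Chains like this are exactly what load-time code randomization is designed to break: if the gadget addresses differ on every load, a precomputed chain no longer lands where the attacker expects.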

[Paul] 

So you mean that because you can still achieve an awful lot by social engineering attacks, guessing a password that hasn’t been changed for seven years, finding an MQTT server that was set up in a hurry and never correctly configured before it went live. I guess if that’s the way that people are getting in at the moment, there may be, unfortunately, a sort of untapped well of much lower level vulnerabilities that for all we know, state-sponsored actors may already have discovered and may be keeping in their little secret cupboard for the day when they need them sometime in the future. 

[Shiv] 

I agree. I think today a lot of maritime security might look like penetration testing of a cruise ship, where you get access to the network in a few minutes, you pick a few locks, and you can really do a lot of damage that way. I think we need to put the time and investment into getting to those deeper, more destructive types of exploits, because the state-level actors with eight-figure, nine-figure budgets are definitely going to get them.

And until we invest in that level of capabilities, we’ll be behind. 

[Paul] 

You don’t need an infotainment system on a vessel that isn’t gonna have any passengers or staff. You can’t hack into it that way. So the old-school techniques suddenly come back to the fore. 

[Joe] 

Yeah, and I think the key is looking at what communications technology is on these devices. And then what can you do if you compromise a particular device? What can go wrong?

And if you look at whether there is an attack surface available and whether there is a meaningful consequence that might motivate an attacker of a certain profile to go after that device, that really helps you think through what the risk scenarios are. If it is infotainment, if it is large cruise liners with lots of passengers, it may be more financially motivated, or consumer data and things like that that you might be going after, or possibly even ransom, going after the cruise liner itself.

[Paul] 

That will certainly attract state-sponsored actors. And we know that some countries do carry out ransom attacks, because it’s a way of getting foreign exchange. But when you’re at the cruise liner level, even what you might call common-or-garden cybercriminals would find that exciting.

And who knows what they might uncover at the time and where they might choose to sell that information on. 

[Joe] 

Yeah, certainly. And those are the motivations. Then you look at these more military-oriented, naval-oriented systems. 

The motivations behind that ultimately are to find ways to disrupt operations or to gain information that is otherwise not accessible. So it’s more about the information or the sabotage in those use cases. And perhaps even military-oriented ways to defeat an adversary and disrupt their fleet in general. 

[Paul] 

And perhaps to finish up, we can say something about a topic dear, in fact, to all of our hearts, because it is such a thorny matter. And a particular problem I imagine in the maritime sector, A, because it maybe hasn’t been dealt with quite as well as in some other sectors yet, and B, because of the sheer size and diversity of the maritime sector. And that is problems with the software supply chain. 

How well is the maritime industry doing at the moment? What’s the room for improvement? 

[Joe] 

Well, I think we’re just getting started in understanding the supply chain in the software, but there’s a lot of commercial off-the-shelf components going into these autonomous systems. And so to the extent that those systems are adopting commercial off-the-shelf components, then it puts more and more pressure on the OEM, if you will, that’s bringing and assembling all those parts together to understand the security posture overall. And they will need to understand at some point the security posture of those individual components. 

Unlike government build-out of big Navy fleets and ships, where there might be a deeper discipline around understanding the software supply chain and controlling those components and inspecting them, when you look to the commercial providers who are seeking the commercial off-the-shelf components for their vessels, part of their motivation is to reduce cost. And part of their motivation is to standardize what they purchase. And the more custom and the more tailored those components are, the higher the cost is. 

There is an element where understanding what’s on those individual components is needed

and coming up with security to solve the risk implied in those commercial off-the-shelf software components. 

[Paul] 

Shiv, if you think back 10 years or however long ago it was to the infamous Jeep hack, where suddenly people realize, hey, automobiles may be at risk while they’re being driven. That was a sort of wake-up call. We haven’t really had a wake-up call quite like that in the maritime sector, have we? 

How do you think the maritime industry can get ahead without having to have something terrible to happen first? 

[Shiv] 

Well, I would say the benefit of the Jeep hacking wake-up call is that it was done in a controlled environment by white-hat researchers. So I think something like that happening in the maritime sector would be beneficial, because it could happen in a way that doesn’t actually harm anybody.

[Paul] 

Hence the Maritime Hacking Village, right? It’s a real-world scenario, but under controlled, well-regulated circumstances with responsible disclosure.

[Shiv] 

Exactly, and I think that’s the explicitly stated goal of the director of the hacking village, Duncan Woodbury, which is that he wants to evolve maritime hacking culture in the same way that we’ve seen the evolution in automotive security in the last 10 years. There are a lot of very similar types of CAN buses across all these things. So he even said, if you’re able to pop a zero-day on some of these systems, it’ll be relevant to the entire industry.

So something like that is probably coming, I would guess, in the next year or two. 

[Paul] 

Oh, you mean that there might be something in the automotive supply chain that turns out to introduce exactly the same bug into the maritime supply chain in the same way that sometimes we hear about a bug in the Chrome browser, and then a month later, Apple will say, oh, by the way, we found that that same bug affected Safari. Who knew? 

[Shiv] 

Yeah, certainly. I mean, that’s possible. In fact, a lot of the engine makers who work in the automotive sector also build engines for maritime vessels as well.

[Paul] 

Of course, yes. So Joe, will we get there? 

[Joe] 

Yeah, totally we’re gonna get there. 

[Paul] 

Good. 

[Joe] 

Folks like Shiv and Duncan and all the participants at the Maritime Hacking Village, there’s a community around that. We all recognize that seaports and ships and submarines and navies and autonomous vessels are more connected, and they’re driven by automation and digital control and communications more so than compasses and captains and people. Obviously, we have to balance progress in features against risk of compromise, but with some of the tools that are out there, with a mature software development life cycle, with a recognition that things can go wrong, and with forms of exploit prevention available in software, the industry has a lot to learn from the other hacking villages and the other industries that have really invested in security.

And part of that, those lessons, is to rely on proven tools that can help dramatically reduce the attack surface, and to build it in. I think Shiv said it well: if we incorporate security into our products as opposed to kind of waiting for something to happen, the maritime industry can extend the lessons from all the other industries as well, and we’ll get there faster. And I also think we’ll have to, because I think conflict is coming; competition in the South China Sea or the Indo-Pacific in general is gonna set an expectation that commercial providers of autonomous systems will need to incorporate security.

So it’s better to do it now than later. 

[Paul] 

Absolutely. And the good news is if you do it for yourself and for your product and for your services and for your company, then you’re essentially doing it for everyone else anyway. If I can conclude by borrowing from the Air Force and applying it to the Navy, it’s very much a case of onwards and upwards. 

So gentlemen, thank you so much for your passion and your very, very community focused attitude to all of this. I’m glad they got into the container and that there was a party prize inside and I hope we learn an awful lot from that. So that is a wrap for this episode of Exploited: The Cyber Truth.

The post After DEF CON: What the Maritime Hacking Village Revealed About Real-World ICS Risk appeared first on RunSafe Security.

]]>
Software Assurance at Mission Speed: Securing Code Without Delaying Programs https://runsafesecurity.com/podcast/software-assurance-at-mission-speed/ Thu, 07 Aug 2025 12:43:42 +0000 https://runsafesecurity.com/?post_type=podcast&p=254595 The post Software Assurance at Mission Speed: Securing Code Without Delaying Programs appeared first on RunSafe Security.

]]>

In this episode of Exploited: The Cyber Truth, RunSafe Security CEO Joe Saunders joins host Paul Ducklin to explore the software assurance challenges facing today’s defense programs. From layered supplier networks and open-source dependencies to the necessity of deterministic behavior in real-time systems, Joe lays out a roadmap for building secure code at speed.

Key topics include:

  • Why innovation and compliance don’t have to be at odds
  • The role of automation and DevSecOps in accelerating secure development
  • Understanding your full software supply chain through SBOMs
  • Managing security across mission-critical environments

If you’re developing embedded systems, leading a defense software program, or working to secure critical infrastructure, this episode delivers insights to help you move fast—without breaking things.


Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined by Joe Saunders, CEO and founder of RunSafe Security. Now, Joe, you’re on the road this week, aren’t you, in New York City of all places?

[Joe] Yes. I’m in New York City. I’m in the heart of both the financial district, and I’m also in a coworking spot in an upscale part of town. So it’s fantastic. 

[Paul] I like my life in suburban Oxford, which is a city with its own attractions. But I do envy you a little bit sitting on Manhattan Island, living the high life.

[Paul] However, Joe, we have some important material to cover in this episode. And our title is Software Assurance at Mission Speed. Now the subtitle is Securing Code Without Delaying Programs. And I think, Joe, that we’ve been set up by our chums in the marketing department with what is essentially a pun: programs as in binaries, as in software and firmware, the actual apps that get delivered, but also programs as in business plans or a program of delivery. And both of those are important parts of this story, aren’t they?

[Joe] I think they’re both important. And if you think about the programs on the delivery side, when we talk about the aerospace industry and even defense programs, they’re usually programs of record that represent concerted efforts to build certain kinds of outcomes, certain kinds of systems, certain kinds of weapon systems, and the like. And so there are programs of record and programs in general: the US Department of Defense, for example, is organized around programs and program executive offices, or what we call PEOs. And so one PEO may have 100 programs, and those 100 programs are both delivery, like you say, and they represent a bunch of software programs underneath that make up that overall target system they’re trying to build.

[Paul] Yes. Because there’s almost no military hardware these days, in the same way that there’s no automotive hardware, almost no medical hardware these days, that doesn’t have firmware and software somewhere in it to control it. The days of a simple switch that turns a light on and off are days of the past, aren’t they? 

[Joe] They are days of the past, and I guess it’s been a dozen years or so since Marc Andreessen said software’s eating the world. If you think about military systems and how long they last, and then also systems that get deployed in other parts of critical infrastructure, like energy infrastructure, that technology, as we’ve said many times, endures for a long, long time. And so with that, there are efforts to modernize programs across all of these elements of critical infrastructure.

[Joe] And one part of modernization is a result of these items being connected, and another part of that is certainly advances in new digital techniques and technology and and architecture to make better mousetraps, if you will. 

[Paul] There’s a strong element here of what’s good for the goose is good for the gander. Maybe I really mean what’s good for the goose can also be quite bad for the goose. 

[Joe] Yes. And I would say the goose and the gander in this case. Let’s say the goose is resilience and the gander is innovation. What we really wanna know is, can you have resilient systems that are innovative, so that different companies, different countries, different militaries can be using the most advanced technology in a way that those systems are secure and doesn’t compromise whatever operation they’re trying to run? The US Department of Defense since World War Two has been a tremendous innovator in putting R&D dollars towards new technologies, modernizing systems. And that has changed tremendously over the years. And one of the things that has come along is the bureaucracy, or maybe better stated, all the steps one has to go through in order to deliver on new systems.

[Joe] There’s a lot of things that go into making an innovative product resilient and ready for prime time, ready for the market. As you’ve heard me say before, it’s a good thing that new model cars take three years or more to produce, because you wouldn’t want a car produced in a day or a week; it probably wouldn’t be very safe. And so what is the right balance of innovation and resilience and safety, given constrained resources?

[Paul] Absolutely. We made a joke, didn’t we, in a previous episode? That old school motto that says, move fast and break things, could hardly be a less appropriate metaphor for the automotive industry, or for the aeronautics industry for that matter. Because that’s exactly what you don’t want to do. You might want the product to move fast, but you certainly don’t want it to break while you’re in the middle of using it. 

[Joe] Absolutely. And I guess the joke that keeps on giving over and above that is our medieval castle. And that is that if we didn’t modernize, then we would be stuck in the dark ages potentially. Progress forward on new systems, new technology makes companies more efficient, makes employees more productive, makes systems more effective, and you can achieve outcomes, you know, faster and better. The trick though, as you say is, how do you do that, in a secure way? How do you do that safely? And how do you apply the best techniques and get the right approvals to get some stuff done?

[Joe] I think that’s where standard processes come into play, and testing procedures, and ways to get approvals to release new technology, including updates to software where there may be vulnerabilities.

[Paul] Regulations and standards often sound like the boring part of cybersecurity or computer science or product development, don’t they? And they sound like, oh, it’s just some petty regulator getting in the way and standing against innovation. But particularly in, say, defense products, where you’re not just perhaps looking at a few products that one country might want to put together and use, you’re actually looking for components that might work between a whole raft of allies, all of whom have developed those products largely independently, then it really, really matters, doesn’t it? 

[Joe] Yeah. A hundred percent. In the defense industry, there’s this notion of authority to operate. If you make changes to, let’s say, an airplane or a ship or a submarine or some kind of rocket launcher, well, then you need to seek that authority again. And that means you go through a lot of those compliance checks, a lot of those reviews, a lot of testing.

[Joe] You have to check the safety compliance side of it. You have to check the security elements. And in an old world, say ten years ago, it may have taken six to twelve months to design and implement a new feature, then another six or twelve months to test it, and then another six months to get all the additional approvals for the authority to operate. Well, that’s two and a half years from innovation day, or idea, or concept, before that even makes it out onto a program. And is that fast enough for the US military to remain competitive?

[Joe] And a lot of people would say no, and I would even agree that takes too long. So that’s where faster software development processes come in: the notion of DevOps and DevSecOps, building in automated security testing and scanning, and finding ways to automate those steps, so you’re breaking down the bureaucracy into technical advantage with mature software development processes. And this isn’t strictly unique to the US Department of Defense. It would be true of any military around the world, but it’s also true in other industries. And we’ve already mentioned the automotive industry.

[Joe] The aviation industry has safety of flight. 

[Paul] And development teams may be much more compact, which is an important part of innovation. You’re actually able to do go-ahead stuff. But then you’re stuck with the fact that small development teams don’t build the whole project, do they? They rely on a network of suppliers, and possibly on software and firmware components that were developed, say, by hobbyist groups or open source teams or people from other countries that they’re not necessarily sure of.

[Paul] How do you deal with that? 

[Joe] You’re exactly right. The software ecosystem, the software development ecosystem, the software supply chain is filled with many players. Just like in the automotive industry, where you have tier one suppliers shipping components that get assembled in the final assembly plant, a lot of the product manufacturers who are producing devices that get shipped into critical infrastructure, let’s say the energy grid, are actually OEMs, original equipment manufacturers, who are getting components from third parties. And they’re using open source software.

[Joe] They’re using third party suppliers. They’re using third party developers, as well as their own software development teams. And I think the statistic is that roughly 20% of the software on these devices is written by the manufacturer producing the final product.

[Paul] So I guess the way to read that is that 80% of it comes from, potentially, persons unknown.

[Joe] Exactly right. What that means is that the final group that assembles the final product has to account for the quality of all the components that they incorporate. So they do need to come up with standards with their named suppliers. But to the point that you make, in the open source world there are two sides to it. On the one hand, you get very innovative products where you have lots of developers whose eyes are on the code, and you have mature components developed by open source teams that have input from thousands of organizations around the world, and they find vulnerabilities and they fix them. And then you may have other open source components that may not have a huge distribution and only a couple of people working on them, and that represents risk.

[Joe] I’ll tell you an example. We have a customer whom I won’t mention who analyzed their software bill of materials, and they found in a firmware product over a thousand components. 

[Paul] Oh, so that one product had a further 1,000 bits in it.

[Joe] Exactly right. 

[Paul] Imagine a gearbox with a thousand moving parts. 

[Joe] Exactly right. And 80% of those components were open source. And so you need to have good, robust ways to test and review software for bugs, and to test and review software for vulnerabilities. And in that example where I said maybe there’s thousands of people working on one open source project and another one might only have two or three, well, the ones with only two or three represent a pretty significant risk. One, they may not have as robust a process to issue updates.

[Joe] So when you do find a vulnerability, it may take much longer for something to get updated. And two, you may be very dependent on just a couple of people. What if they decide not to develop the product anymore? In that example, with the company with a thousand libraries and components, they did identify one where they were dependent on a single individual user who was not paid by the company. If you’re looking at your software supply chain and you’re leveraging a software bill of materials, you start to get insights about who the contributors are to those components.

[Joe] You can assess risk. And that is, in fact, the responsibility of the company that’s producing the final product, if they’re gonna stand behind it.

[Paul] And ideally, everyone in the chain would have done something similar in their turn, so that those bills of materials can be compared just in case somebody missed something. 

[Joe] Exactly right. And in the automotive industry, there are some regulations that say you need to go four layers deep if you’re the auto OEM, meaning you need to go tier one, tier two, tier three suppliers, and maybe even a fourth step to make sure that you understand all those components that are in your final products.

[Joe] The key is you need to understand your software bill of materials and all the dependencies in the software that you produce. For those dependencies on third party libraries, dynamic libraries, and static libraries that you’re pulling from outside your organization, you really need to understand the risk associated with those individual components that comprise your software product.
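As a rough illustration of the triage Joe describes, here is a minimal Python sketch that walks a CycloneDX-style SBOM and flags components worth a closer look. The file name is a placeholder, and the maintainer count is a hypothetical enrichment field merged in from repository metadata, not something the SBOM standard itself records:

```python
# Minimal SBOM triage sketch. "sbom.json" is a placeholder path to a
# CycloneDX-style document; "maintainers" is a hypothetical enrichment
# field, since contributor counts are not a standard SBOM property.
import json

with open("sbom.json") as f:
    sbom = json.load(f)

for comp in sbom.get("components", []):
    name = comp.get("name", "?")
    version = comp.get("version", "?")
    supplier = comp.get("supplier", {}).get("name")
    maintainers = comp.get("maintainers")  # e.g. merged in from repo metadata

    flags = []
    if supplier is None:
        flags.append("no supplier recorded")
    if isinstance(maintainers, int) and maintainers <= 2:
        flags.append("bus-factor risk: 2 or fewer maintainers")

    if flags:
        print(f"{name} {version}: {', '.join(flags)}")
```

A component that trips the maintainer check is the single-contributor scenario Joe mentions a few exchanges earlier: one unpaid individual underpinning a shipping product.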

[Paul] And you have to be ready to take bad news and act on it quickly, don’t you? So that if you reason that this particular component I’m using has been maintained by one guy in Minnesota for the last twelve years and he’s decided he’s getting out of software engineering, then either you have to take it over and invest in it yourself, or you have to find something else. Alternatively, you might find that a supplier or the creator of a component includes people in their team that you either, by regulations, are not allowed to trust or that you are inclined to distrust either because of their anti patriotic leanings or their history of incompetent or irrational behavior or whatever it might be. There are lots of things that can suddenly throw a spanner in the works.

[Joe] Yes. What you’re ultimately looking for is determinism in the outcomes and the way that the software behaves, so that you know all the potential actions that the software can take. And so in some of those areas where you might have riskier components, or less mature software processes behind them, you’d better test and ensure you understand how those components behave. Software development isn’t just as simple as writing some code and throwing it out there and seeing if it works. In the Internet age, if you go back, the idea was to produce code fast.

[Joe] And where it broke, you would just fix it fast. You can’t afford that in critical infrastructure, so you need to go the other way. You need to make sure that the software behaves in a deterministic way and in a safe way. 

[Paul] And yet everyone still wants to achieve the move fast, but it’s move fast, but don’t break anything, which is a much more challenging way of working, but ultimately much more satisfactory both intellectually and practically, isn’t it? 

[Joe] It is. And so it’s a fun set of tensions there when you try to be resilient and innovative. At RunSafe, we try to help people be both resilient and innovative, so they can push technology onto systems faster knowing that they’re safe, knowing that they’re secure. And we do that in a couple of different ways: leveraging software bills of materials and understanding everything in there, so you have good visibility into your software supply chain and you can communicate transparently to your customers what’s in there and what you’ve done, and then adding in exploit prevention even when a patch is not available.

[Paul] Because the advanced runtime protection products that RunSafe provides are designed, if you like, to retrofit security protections to products where maybe you just have a big lump of source code and an old compiler that can’t be updated, or you have a device where you can’t just go and change the code all the time, and you can’t add more and more and more layers of protection like we’ve had in operating systems like macOS and Windows. Oh, let’s add ASLR.

[Paul] Let’s add another wrapper around the program. Because that device might have a very specific function where the way its performance is measured is not how fast it can do things, but how predictably, deterministically, and reliably it can do them within set times, which is a very different prospect. 

[Joe] It is a different prospect, and there are safety standards out there. In aviation, the one everybody talks about is referred to by letters and numbers, as we do in the industry: DO-178. And in the automotive industry, there’s ISO 26262 and the ASIL standard, to ensure that systems operate safely.

[Joe] And you need that determinism. And what we’re doing is ensuring that our technology at RunSafe is qualified under those safety standards. So if you are a product manufacturer, then you can inherit our compliance to those standards so that you can add in the security protections and maintain that safety compliance standard. 

[Paul] And that’s done without taking an existing product and wrapping it in a whole load of extra code, extra memory use, extra runtime, but instead by rearranging the program carefully so that it is much less likely that somebody will be able to craft an attack against it. And if they try, that you will be able to shut it down in a measured and useful way from which you can learn what went wrong.

[Paul] But without, for example, changing the time that it takes for a valve to be slammed shut in an emergency. 
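As a toy illustration of that idea, and emphatically not a description of RunSafe's actual mechanism, the following Python sketch shows why per-load layout randomization defeats an exploit that hardcodes where a dangerous routine lives:

```python
# Toy illustration of load-time layout randomization; a metaphor only,
# not how a real binary (or RunSafe's product) is implemented.
import random

def open_valve():
    print("valve opened")

def close_valve():
    print("valve closed")

def log_reading():
    print("reading logged")

functions = [open_valve, close_valve, log_reading]

ATTACKER_SLOT = 0  # exploit hardcodes "slot 0 is open_valve" from one studied device

layout = functions[:]
random.shuffle(layout)  # a fresh layout on every "boot"

target = layout[ATTACKER_SLOT]
print("attacker's exploit actually reached:", target.__name__)
# With 3 functions the hardcoded guess still works 1 time in 3; with
# thousands of relocated code fragments the odds collapse, and a wrong
# hit can be detected and shut down safely rather than silently hijacked.
```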

[Joe] It’s a 100% right, and the determinism is the key, if you can deploy technology without compromising deterministic outcomes in software. The great promise of software is you can produce one copy or you can produce a million copies: they should operate exactly the same way, and that works to the advantage of the user. It also works to the advantage of the attacker.

[Paul] Absolutely. 

[Joe] If you add in security that takes away that determinism for the attacker, without compromising the determinism for the customer or for the manufacturer, then the manufacturer and the customer get what they want. They get resilient systems, and they get the most innovative systems.

[Paul] Joe, do you think it’s fair to say that a lot of people, when they hear of terms like real time programming or real time operating systems, tend to confuse that notion or that concept with speed? In other words, to do something in real time, it just means you have to be able to do it fast enough that you don’t have to wait for the results.

[Paul] So it’s like if you send an email, it will just essentially arrive immediately, and you don’t have to wait till tomorrow morning. But it’s quite the opposite with many real time systems. Isn’t it? It’s not the speed. It is the reliability, the predictability, and as you’ve said, the determinism that matters.

[Paul] So something could happen slowly, provided that you know exactly how long it will take, and not a millisecond longer.
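A minimal Python sketch of that distinction, with an illustrative 50 ms budget standing in for a real control deadline:

```python
# Minimal sketch: real-time means meeting a known deadline on every
# cycle, not being fast on average. The 50 ms budget and the body of
# do_control_step() are illustrative placeholders.
import time

DEADLINE_S = 0.050  # e.g. the valve must respond within 50 ms, every time

def do_control_step():
    time.sleep(0.020)  # stand-in for the real control work

for _ in range(100):
    start = time.monotonic()
    do_control_step()
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        # In a safety-certified system a single miss is a fault, even if
        # the average case looks "fast enough".
        raise SystemExit(f"deadline miss: {elapsed * 1000:.1f} ms")

print("100 cycles, every deadline met")
```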

[Joe] If you add to that the notion that a cyber attack compromises the speed, compromises the reliability, compromises the determinism. 

[Paul] Of course. Yes. 

[Joe] So it’s really a fun conversation when you talk about safety standards.

[Joe] And so at RunSafe, we don’t wanna introduce any change in behavior or anything that might slow down a system. But let’s be realistic: the attacker’s changes to systems do slow them down and create that non-deterministic outcome, putting airworthiness at risk, putting safety-certified autonomous driving at risk. And so cyber is a big deal, because it gets to safety, and with that, the goal is to make those systems as resilient as possible.

[Paul] Now, Joe, in a few recent episodes that we’ve done, you’ve talked about CI/CD as a development process for software that can make sure you don’t fall behind in testing.

[Paul] That’s continuous integration, continuous delivery. Do you want to say something about the form that that takes and why it’s a useful discipline to have when you’re trying to move fast but not break things?

[Joe] Yeah. I think part of the key is incorporating the right tools when you’re writing software and building systems, so that you have automation. You have automated tools that produce repeatable build processes without manual intervention, and also automated testing processes, so that every time a developer commits new code, it goes through the build process and is ready for testing and then ready for deployment.

[Joe] It’s fully automated. So we talked about build tool chains. And in embedded systems, those build tool chains will include some form of Git repository for maintaining versions of source code and whatnot, and certainly the development environment. And then naturally, when you’re working with native code that starts with source code and has to be produced into a binary, it goes through a compiler. Compilers are a very sophisticated set of tools to ensure that you can convert source code into a software binary.

[Joe] But there are still other steps, and some of those other steps are the automated testing. So when that binary gets produced, you can scan it for vulnerabilities. You can subject it to dynamic analysis and testing of different types; you might even do fuzz testing, and the like. That is a very complex process. But if you can commit code and have all those steps automated, then you save a lot of time, and you really look at the outcome of all those automated tests and say, are there any findings?

[Joe] Are there any vulnerabilities? Are there any bugs I need to address? And it becomes an iterative process that’s repeatable. You wanna really look at those steps where you’re testing the quality of the code, testing for vulnerabilities, ensuring that you produce software that will be reliable when it gets released out into the market.
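To make that concrete, here is a minimal Python sketch of such a commit-triggered pipeline. The three command names are placeholders for whatever build wrapper, scanner, and fuzzer a real pipeline would invoke; the point is that every step runs unattended and any finding stops the line:

```python
# Minimal sketch of a commit-triggered pipeline. "fw-build", "vuln-scan",
# and "fuzz-harness" are placeholder command names, not real tools.
import subprocess
import sys

STEPS = [
    ["fw-build", "--reproducible", "--output", "firmware.bin"],  # repeatable build
    ["vuln-scan", "firmware.bin"],                               # binary vulnerability scan
    ["fuzz-harness", "firmware.bin", "--max-time", "600"],       # bounded fuzz run
]

for step in STEPS:
    print("running:", " ".join(step))
    if subprocess.run(step).returncode != 0:
        # Any finding blocks the merge; the developer fixes and re-commits.
        sys.exit(f"pipeline failed at: {step[0]}")

print("all checks passed; artifact ready for the deployment gate")
```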

[Paul] And that also ensures that you can easily, rapidly, and essentially deterministically rebuild the very build environment.

[Paul] So it actually keeps the entire system, well, honest and consistent, you might say, doesn’t it? 

[Joe] In the UK, GCHQ did a lot of analysis on some telecommunications-related software coming out of China. On the surface, it looked like everybody was using the exact same source code and the same build environment, and yet there was a slight change in the build environment: the compiler had a slightly different setting. And so you do have to be very, very cautious, because that setting ended up grabbing maybe one other component, and that then changed the overall outcome of the system. It opened up a backdoor into some telecom equipment.

[Paul] And we have seen attacks where what got built in the end, the actual final product, had the right hash and passed all its tests, but there was some step in the build process that had actually compromised the build environment. So in those cases, the attackers aren’t so much concerned about one particular final product. Their goal might be to introduce vulnerabilities into the build process so they can steal intellectual property, wander around through source code, and perhaps, in the future, sneak in code changes that somehow get accepted, and that the system will then make sure get baked into every future product. So there’s a lot at stake here.
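For illustration, here is a minimal Python sketch of the output-hash check Paul mentions, which is necessary but, as this kind of attack shows, not sufficient on its own. The file name and the pinned digest are placeholders:

```python
# Minimal sketch of output-hash verification. It proves this artifact
# matches a known build, but says nothing about whether the build
# environment that produced the reference was itself clean.
import hashlib

EXPECTED_SHA256 = "replace-with-pinned-digest"  # from a trusted, independent rebuild

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("firmware.bin") != EXPECTED_SHA256:
    raise SystemExit("artifact hash mismatch: do not ship")
# A matching hash only helps if the reference digest came from an
# independently rebuilt environment -- hence reproducible builds.
```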

[Joe] SolarWinds is a good example of that.

[Paul] I shouldn’t laugh, Joe, but 

[Joe] And it looks like there were some legitimate changes to software when in fact they were not legitimate changes, but they got approved as if they were legitimate. So vulnerabilities were built in. 

[Paul] Yes. And that’s crazy, isn’t it? Because that means removing the vulnerabilities will actually create a security warning.

[Paul] Oh, no. The program’s not coming out correctly anymore. You go, no. No. Now it’s coming out correctly for the first time.

[Paul] So, Joe, clearly, there is a hugely important role to be played by technology leadership in companies that build, well, software and firmware products of any sort, but especially those in industries such as defense, automotive, aeronautics. How does the world go around building that strong leadership? 

[Joe] Ironically, I think the faster you produce software may be suggestive of higher quality software because you have more mature processes, you have automation. Now it doesn’t mean you can do it super fast. What I’m getting at is you need mature processes that enable you to respond quickly, to test, to iterate quickly, and to improve software.

[Joe] If you’re taking several weeks to make a merge request or you’re taking several weeks to test things and you’re doing it manually, you’re likely gonna miss things. 

[Paul] Yes. 

[Joe] One of the key elements is to have very mature software processes to automate as much as possible and to really empower developers to act on vulnerabilities and bugs right away. We really need to have mature software processes, but also then mature ways to communicate and share. And I think transparency is the key term there.

[Paul] Yes. 

[Joe] You end up building trust and confidence. Your customers will gain confidence in your processes as you’re showing them your weaknesses. And it seems counterintuitive, but it is quite powerful in that sense. And that’s part of the reason why you need to have full visibility into the software supply chain.

[Joe] And I maintain that having a full, complete, accurate software bill of materials, 100% complete and 100% correct, is the foundation for transparent communications, because you can then share it with your customers, you can then identify risk within your supply chain, and then you can associate all the things that you’ve done to improve the process. So everybody gains that confidence through your transparency.

[Paul] Yes. If you’re not willing to confront and to reveal and to deal with the weaknesses and the vulnerabilities in your software or your firmware, you can bet your boots that cyber criminals or state sponsored actors are going to do it for you. 

[Joe] Yeah. The state sponsored actors are actually very well funded security researchers looking for those missteps you might take in your coding process that could create a vulnerability. 

[Paul] And they’re not going to brag about them. They’re just going to keep them for a time when they might be useful. 

[Joe] Yes. They’re not exactly transparent.

[Paul] Quite the opposite. 

[Joe] They’re looking for ways to take advantage of that because their motivation may be much more than generating that ransom payout. It might be to collect information or monitor systems or crash systems and things that might advance their intelligence objectives or their nation state objectives against an adversary. It’s much easier if you find them and communicate them. And I think a lot of these organizations that are producing technology that goes into critical infrastructure, the ones that are the most innovative and most mature in the software development processes probably produce code faster with fewer vulnerabilities.

[Joe] And it does seem counterintuitive, but I do believe in automation, in mature software development practices, in generating a full, visible understanding of your entire software supply chain and all the components in your product, and in ensuring you build in security before you ship product. A lot of organizations take a lot of pride in doing that really well. And so you want leaders who have a culture of creating robust, mature processes. And what we want are resilient systems that deliver innovative technology in ways that don’t slow down developers, but accelerate product release schedules.

[Paul] In other words, if the phrase secure by design doesn’t mean anything to you yet, and you’re a software vendor, then it probably should. Because what that is saying is not just move fast, but don’t break things.

[Paul] It’s saying move as fast as you like, but do it right in the first place. And the flip side of that, secure by demand, is where the people who are buying or consuming those software products are actually favoring or preferring the companies that operate that way. So there is hope for us all in software engineering provided that both the producers and the consumers meet in the middle in saying, let’s be secure before we start developing, while we’re developing, and after we’ve developed a product. 

[Joe] And if I had a way to wrap it up, Paul, I would say, in order to achieve resilient innovative systems, you need mature processes that are transparent, that rely on a deep understanding of everything that goes into your product, all the while investing in great testing and adding security into the process to prevent exploits. 

[Paul] Very well said, Joe.

[Paul] I think that is a fantastic way to wrap it up. Thank you so much for your deep thoughtfulness. You obviously care very, very greatly about this, to the point that although you’re the CEO and founder of a company that sells cybersecurity-related products, you’ve actually spent the entire time we’ve been recording this episode talking about what we as a community can and should be doing for each other to make sure that we don’t get, if you like, hoist by our own petard in the future. So, Joe, thank you so much for your time. That’s a wrap for this episode of Exploited: The Cyber Truth.

[Paul] Thanks to everybody who tuned in and listened. If you find this podcast insightful, please subscribe so you can keep up with each week’s episode. Please also share with your colleagues, with your friends, and on social media. And remember, stay ahead of the threat. See you next time.

The post Software Assurance at Mission Speed: Securing Code Without Delaying Programs appeared first on RunSafe Security.

]]>
From Research to Resilience: Securing the Future of Autonomous Vehicles https://runsafesecurity.com/podcast/securing-autonomous-vehicles/ Thu, 31 Jul 2025 12:29:24 +0000 https://runsafesecurity.com/?post_type=podcast&p=254578 The post From Research to Resilience: Securing the Future of Autonomous Vehicles appeared first on RunSafe Security.

]]>

As the automotive industry accelerates toward autonomy—with cloud-connected fleets, advanced infotainment systems, and software-defined vehicles—cybersecurity risks are becoming more complex and harder to ignore. In this episode of Exploited: The Cyber Truth, RunSafe Security CEO Joe Saunders joins Gabriel Gonzalez, Director of Hardware Security at IOActive, to expose the vulnerabilities buried deep within today’s connected vehicles.

They explore how researchers are uncovering critical flaws in telematics systems, ECUs, and supply chain software—revealing how entire fleets could be remotely accessed or controlled. Gabriel shares the story behind a recent MQTT misconfiguration that exposed live vehicle data, while Joe explains how Secure by Design principles, build-time memory protections, and proactive collaboration can help manufacturers prevent exploitation before cars hit the road.

Whether you’re building autonomous platforms, managing embedded security programs, or guiding compliance with automotive safety standards, this episode delivers a front-line perspective on how to stay ahead of threats as mobility evolves.


Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn


Guest Speaker – Gabriel Gonzalez- Director of Hardware Security at IOActive

Gabriel Gonzalez is the Director of Hardware Security at IOActive, and has many years’ experience in hardware security, embedded systems, reverse engineering, and source code review. He has led research uncovering critical vulnerabilities in automotive systems and a range of embedded technologies, with a career spanning roles as Principal Security Consultant at IOActive and Principal Embedded Software & Security Engineer at Intelligent R&D. Gabriel’s expertise extends from network equipment to satellite communications and industrial systems, with a particular focus on smart grid and automotive environments. He is recognized for his hands-on approach to vulnerability research. Gabriel holds a BSc in Computer Engineering from Universidad de Valladolid and has earned certifications in software-defined networking and cryptography. 

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:03)

Welcome back to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello there, Joe.

[Joe] (00:19)

Good day, Paul. Great to be here.

[Paul] (00:21)

And this episode’s special guest is Gabriel Gonzalez, is Director of Hardware Security at IOActive. Welcome, indeed, Gabriel.

[Gabriel] (00:33)

Thank you for having me.

[Paul] (00:34)

It’s a great pleasure. Gabriel, maybe we can start by you telling our listeners how you got into cybersecurity and particularly how you got into embedded cybersecurity and what caused your great passion for it.

[Gabriel] (00:50)

Like most researchers, we began at a very young age playing with open source software, trying to learn how binaries worked and the internals of the system. For example, mobile phones, and even a keyboard, were very interesting to me. And also cars back then, right? I was very interested in how people were able to change the horsepower. That was a very important thing.

[Paul] (01:21)

We won’t ask too many questions about that. 

[Gabriel] (01:24)

Yeah. So that’s how I got into security. Then I went to university, but I kept my interest in security. And then I made my passion my work. In the things that I do, automotive is a very big part of the research, and also of the work that I do for clients at IOActive.

[Paul] (01:46)

Gabriel, can you share a recent example of a vulnerability that you and your team actually found in an automotive system? Something that was already there, that had gone out into the wild and either was exploited or could have put people at risk? What made that particularly interesting to you?

[Gabriel] (02:06)

So, well, at IOActive we were the ones that put out the Jeep hack a long time ago, which maybe some of the viewers remember. It was very interesting, right? But it was a while ago. More recently, a couple of my coworkers, Ramiro and Justin, did some work that has been published, so we can openly talk about it. They were actually looking at a T-Box, a telematics box, which is the part of the vehicle that communicates with the outside world.

Most of the time via cellular networks. And they found that this particular unit used MQTT. For those that don’t know, MQTT is a protocol that is mostly used to communicate via message queues.

[Paul] (02:52)

So that’s a message queuing system that’s very much more efficient and easy to use than say HTTP or HTTPS, right? So it’s quite well favored in the embedded industry.

[Gabriel] (03:00)

Yeah, exactly.

They found that these devices were using that protocol. And of course, when we see something that operates on the cloud, it’s one of the points that we like to look at. And they found out that the MQTT broker, so basically the server that all the devices communicate with, was not correctly configured: they could connect to the broker and see all the messages from all the vehicles.

Of course, it took them a little while to reverse engineer the binary protocol, but it turns out, in summary, that they were able to pinpoint any single vehicle. It was possible for them to send binary commands, including specific commands to the CAN bus. So they found that they could actually fully control the car: they could unlock the car, and get all the messages from the car. So this is, I think, a particularly concerning vulnerability.

[Paul] (03:58)

So from a surveillance point of view, if that were in the hands of say a nation state attacker, they’d be able to track pretty much whoever they want. Or if they were a car thief, they’d basically be able to open the car and drive off as though it was theirs.

[Gabriel] (04:05)

Yeah, exactly.

This fleet of vehicles was fully exposed on the internet. Of course, the researchers, Ramiro and Justin, contacted the relevant authorities and everything got fixed in time. Everything was properly disclosed. But yeah, it’s the type of vulnerability that I think is one of the worst scenarios you could have, right? Someone getting access to your fleet of vehicles.

[Paul] (04:37)

Yes, particularly if you’re a business and you run a whole load of vehicles, that’s an awful lot of data that you probably don’t want in your attackers’ hands. So that wasn’t actually a flaw in the protocol itself. It was actually an incorrectly configured part of, if you like, what you might call the data supply chain.

[Gabriel] (04:57)

Yeah, exactly. It was on the backend. In MQTT, the isolation between clients needs to be correctly configured. And with the queues that MQTT has, you can allow and restrict write and read access. All those things were not correctly set up. And of course, there’s the actual authentication to the system: only authorized clients with the right type of authentication should be the ones that can access the server. Not everyone on the internet.
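
To make the shape of that misconfiguration concrete, here is a minimal defensive sketch, assuming the paho-mqtt 1.x Python client API; the broker hostname is hypothetical, and the point is to probe a broker you own. If an anonymous connection succeeds and a wildcard subscription starts returning other clients’ traffic, authentication and per-client ACLs are effectively absent.

```python
# Minimal sketch, assuming the paho-mqtt 1.x callback API.
# BROKER is a hypothetical hostname; point this at a broker you own.
import paho.mqtt.client as mqtt

BROKER = "telematics.example.com"

def on_connect(client, userdata, flags, rc):
    # rc == 0 means the broker accepted a client with NO credentials.
    print("connect result:", rc)
    if rc == 0:
        # A wildcard subscription only returns other clients' traffic if
        # per-client ACLs are missing or wide open, which is the
        # misconfiguration described above.
        client.subscribe("#")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload[:32])

client = mqtt.Client()          # note: no username_pw_set(), no TLS
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)    # 1883 is the default unencrypted MQTT port
client.loop_forever()
```

A correctly configured broker would refuse the anonymous connection outright, or return nothing for topics the client is not authorized to read.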

[Paul] (05:28)

Joe, maybe I can ask you now why this kind of thing continues to happen. Why aren’t we there yet and how can we get there so that this kind of thing is prevented in advance rather than fielding it and then hoping that Gabriel and his team find it before the bad guys do?

[Joe] (05:46)

Yeah. And actually, with folks like Gabriel, we want to make sure organizations do find them before the bad guys do. Researchers inspire me because they do think about what can go wrong. And the fact remains also that if you go back to the Jeep and forward, and you think about all the hundreds of millions of lines of code now on automobiles and vehicles today, there is a lot of attack surface and therefore a lot of opportunity for things to go awry.

I think in some autonomous vehicles there may be as many as a hundred applications; certainly 40, 50, 60 is a reasonable number in the automotive industry when you consider infotainment and all the digital controls on a vehicle. It’s interesting because, as the architectures of the vehicles change, there’s a lot of stuff that is still built around the CAN bus and things like that. And so there are always little things that can be done to manipulate the CAN bus protocol and trick it into sending some test messages and whatnot that might result in something that the driver’s not intending to do. And just to go back to the original question, I think mature software development practices are one thing people can certainly do. And the other thing is we need researchers like Gabriel to help us make sure we do have that extra perspective on our processes and our systems before the bad actors find them.

[Paul] (07:08)

So Gabriel, it’s obvious from that rather terrifying story that you just told that an important part of what happened after you’d found these vulnerabilities was the responsive and rapid collaboration between your researchers and the vendor, who was able to fix the problem. How much do you see that researcher-vendor collaboration improving in embedded systems in general, but in the automotive industry in particular?

[Gabriel] (07:38)

Things have improved a lot. Maybe 15 years ago when you submitted a vulnerability, companies didn’t even know what we were talking about. Some companies were thinking that we were trying to get money out of them.

[Paul] (07:52)

So would you sometimes get threatened with legal action: “We’ll sue you if you say a word”?

[Gabriel] (07:57)

Yeah, yeah. Sometimes that would happen, because it was totally new, yes. Most companies didn’t have cybersecurity teams, and maybe not even a security consultant. Some had maybe never even heard of a cybersecurity review. But those things have changed. Also, there are more government-related entities, like all the CERTs, and organizations that help by sitting in the middle and working with the end companies. Everything has improved.

Large companies nowadays, most of them, have very well established cybersecurity teams. Then when a vulnerability reaches them, they have a strict process on the timelines and how to handle that vulnerability internally, and also how to work with the consultants, the researchers in this case, to help them fix the vulnerability and get the patch out there so everyone can benefit from it.

Nowadays, all the CERTs and all these entities help with this process, especially with companies that are not well known and not too large. I think it’s good to go through the CERTs.

[Paul] (09:08)

The supply chain into the automotive industry is quite huge, and I imagine some of the vulnerabilities you find in the automotive sector might not be in code that’s directly created and managed by the big name. It might be code from some smaller company that provides its product, perhaps not only to the automotive industry.

[Gabriel] (09:29)

Yeah, exactly. As you said, in automotive there are different types of companies. Some of them are just integrators. So they get all the ECUs, everything from OEMs: they get the battery management system, the infotainment unit, the PMS, whatever, everything. And then they plug it all together with their own requirements; they’re basically integrators. They don’t even know all the code for the ECUs. So in that case, when you find an issue, it’s not even theirs. And sometimes these OEMs have another supplier, right? Like, for example, a modem. There are a large number of companies that produce modems, and as I just said, they produce modems for automotive, for IoT, for whatever.

[Paul] (10:11)

Joe, it sounds like we’re talking here not so much about a supply chain as a supply web. Specifically in the automotive industry, there are some standards like ISO 26262, which firstly are compulsory, so you have to comply with them, but which also dictate just how far you’re supposed to chase that supply chain back in the automotive world. And you have to go at least four steps, don’t you?

Do you want to say something about that standard and how it should help to improve security?

[Joe] (10:47)

Well, certainly there are good guidelines that the standard offers. In particular, when you get to the ASIL side of it, the Automotive Safety Integrity Levels, there are four levels of ASIL compliance, depending on just how involved the autonomous controls are. Is it pure driverless, or is it driver assisted and related functionality, if you will?

[Paul] (11:09)

Is that the difference between maybe keeping you in your lane when you’re driving on the freeway and sitting in the backseat reading a book while the vehicle drives you along?

[Joe] (11:19)

Exactly.

But when you consider the supply chain then, yes, you have other tier one suppliers, which are major corporations providing components. As we talked about earlier, you might be getting your infotainment system from one provider and your braking system from someone else and your battery system from somebody else. So it is important then as the OEM that you have a view on the security practices and the level of compliance that those systems have, but the supply chain goes further and includes things like the underlying operating system. And one of the most dominant operating systems on vehicles today is from BlackBerry. It’s QNX real-time operating system.

[Paul] (12:02)

That’s ironic, isn’t it? People think BlackBerry is gone and forgotten because they don’t see its phones anymore, but actually, in the embedded space, it’s the provider of the operating system, the real-time operating system itself.

[Joe] (12:15)

Exactly. So now BlackBerry, obviously a great company: QNX is on 250 million vehicles or more globally, I think, the last I checked. You can imagine the complexity of all this. From one angle, organizations want to know where these software components are coming from. From another angle, there may be drivers or low-level system components that contain vulnerabilities. Just last year, or in the past two years now, I’m losing track.

Tesla had a classic buffer overflow, discovered in Tesla Model 3 vehicles. It was a specific driver in the operating system that was putting Tesla vehicles at risk, and it was the kind of vulnerability that could potentially enable an attacker to execute arbitrary code on the target system. Those kinds of vulnerabilities are some of the most severe. I go into all that detail to say that the supply chain itself, all these suppliers, talking to your second cousin, your third cousin, your fourth cousin, it matters, because these are complex systems and you do have to have a sense for what the security is. And so what that means is people should be looking at the vulnerabilities across the entire ecosystem, especially if you’re complying with a safety standard.

[Paul] (13:32)

Joe, you mentioned ASIL there, the standards for autonomous driving, all the way from basic driver assist (I’ll help you do a hill start; you just use the clutch, don’t worry about the brake, I’ll take care of it for you, and you’d better hope it works or you’re going to roll backwards unexpectedly) all the way to sitting in the passenger seat and watching a movie: I’ll drive you to Pasadena, or whatever it is. Are we going to find a different class of vulnerabilities relating to autonomous driving that are based on just the sheer complexity of the system itself? Or do we still have to focus on problems like, hey, there’s a bug in the modem? What difference do you think the increasing number of autonomous vehicles on the road will make to vehicle security and to automotive security research?

[Gabriel] (14:21)

Well, that’s a good question. I think we will continue to see similar kinds of vulnerabilities to the ones we’ve been seeing out there now, right? Companies are getting better at securing against things like buffer overflows, as Joe mentioned, but it’s very difficult to get rid of everything, so we will still see things here and there. Of course, autonomous vehicles have a large exposure to the internet, and also in the fully autonomous taxi kinds of services.

Also, physical attacks, I think, will probably become more relevant. In a taxi or similar autonomous vehicle, you don’t know what the previous passenger did to the vehicle.

[Paul] (15:00)

Indeed. I don’t own a car. When I need a car, I just go and rent one. And it always takes me about 10 minutes to go through the infotainment system to get rid of all the previous renters’ Bluetooth pairings and the MP3 files they’ve uploaded. It’s incredible what gets left behind, even by people who aren’t trying to do anything bad.

[Gabriel] (15:16)

Yeah, exactly. So I think that’s something the companies are probably already working on; they already have teams thinking about how to secure cars physically. Of course, if there is Wi-Fi exposed, that’s going to be very relevant. Bluetooth, NFC, even USB, right? They will probably need to have a very strong gap between whatever the user can touch or attach to and the rest of the vehicle.

[Paul] (15:43)

Indeed.

[Gabriel] (15:53)

Because otherwise it’s going to be a nightmare. So in autonomous vehicles, I’ll say it again, those that are going to be fully autonomous and shared between people, it will be a new research area, and probably also a little bit of a headache.

[Paul] (16:11)

So how do you think security collaboration between researchers and automotive vendors will evolve over the next, say, two, five, ten years, as cars get ever more complicated?

[Gabriel] (16:26)

In terms of collaboration, I think nowadays it works well with most, or almost all, of the manufacturers. Cybersecurity problems or issues, everyone is aware of them. Nowadays there are more certifications that vehicles need to comply with to be sold in different countries, and all these things help to increase security. And as everyone knows, there are these events where they put out a vehicle so everyone can go there and play with it and try to find vulnerabilities.

[Paul] (17:00)

So this would be something like the car hacking village at DEFCON or cars showing up at the Pwn2Own competition in Vancouver.

[Gabriel] (17:03)

Yeah, exactly.

I don’t know in 10 years what this will look like.

[Paul] (17:11)

But certainly 10 years ago, automotive vendors would not have been bringing their products along at all. They’d have had a much more closed approach, wouldn’t they? Whereas today they sort of recognise that that’s a fantastic way of learning about weaknesses that might otherwise get found first by cyber criminals and state-sponsored actors.

[Gabriel] (17:20)

Yeah, exactly.

Think of the Jeep hack that we did years ago: it was a good example of the things that researchers can do with a little bit of time and access to a vehicle. That helped the industry realize that there was a problem. Now, as everyone knows, companies are much more proactive, and they work hard to improve the security of their vehicles and all the things that they use.

[Paul] (17:58)

So Joe, maybe we can go back. Instead of talking about the whole web of intrigue that is the AI that goes into self-driving and all the cloud services, let’s zoom in to what I guess would be level zero in the Purdue model: the actual individual devices that might be in a car, like the thing that detects that you’ve turned the steering wheel a bit left or a bit right. Those can be harder to patch, and it’s much better to get them right in the first place, particularly when we’ve just talked about steering.

I would like to think that they were correct out of the box, not only after 17 patches. What sort of approaches can manufacturers and vendors take for devices that aren’t quite so amenable to over-the-air updates as infotainment systems?

[Joe] (18:45)

I mean, there’s a couple different philosophies people can take because in the automotive industry, there are ECUs, right? Electronic control units, things that might control individual components that you’re talking about, brakes and others. And many of those are proprietary oriented products, yet there are several versions of open system architecture components. In fact, in the auto industry, you talk about AUTOSAR. You look at POSIX compliant components on a vehicle.

And so I think it’s as much about the architecture that you take as well as the standards and the working groups and the frameworks that are available that help you navigate some of the security issues. Also, having a good robust software development process is good. And I always say it’s a good thing cars are not produced in five days or 30 days and a new model year might take two, three years to come out. And so if you don’t have a way to really update the vehicles, building security into those components prior to shipping can certainly reduce the exposure and the problem that you would suffer from having to patch components.

[Paul] (19:54)

So that’s something that you do at build time, which either detects that something has gone wrong in the build process, so you’re not really building what you think you are.

[Joe] (20:05)

Built-in security that protects the device at runtime is, I think, ultimately a good approach.

[Paul] (20:11)

So can you describe a real world scenario where build time security prevented a possible exploit, an automotive one?

[Joe] (20:19)

Take the examples I gave earlier around a buffer overflow on an infotainment system: having the memory protected from exploitation, and certainly preventing an exploit from targeting ROP gadgets and ROP chains to take advantage of those buffer overflows, is probably the perfect example of things that have been out there that have gotten thwarted. There were some disclosures earlier this year about vulnerabilities in infotainment systems that I think people can look up. And with that said, I think there are other advances where suppliers and OEMs are working together on some of these open frameworks. I bring it back to that because I do think that if you think about researchers, you think about the software development process, and you think about some of these open frameworks, when those three items are working together and OEMs and tier one suppliers are committing to an overall architecture, then I think we have a better chance, as an ecosystem, of helping to defend the underlying systems.

What really makes me nervous in the modern world for software-defined vehicles is when you have proprietary systems that are black boxes. And in the end, that’s where I think security researchers are going to find more vulnerabilities than the ones where there’s collaboration and an open framework in place.

[Paul] (21:42)

I’ll put this question openly to both of you. It strikes me that what we refer to as the infotainment system in the average car is very different from the way you might think of an infotainment system in an aircraft, where you’ve got a little screen on the back of the seat in front of you, you can press some buttons, and you can choose a movie or listen to music or watch the flight path across the globe, etc. Certainly on most of the cars I’ve rented recently, in the same interface there’ll be a button that says: do you want the lane keeping assistance on? Do you want the hill start assistant on or off? Would you like the display to show you liters per hundred kilometres or miles per gallon? There seems to be much less natural separation between what you might call the infotainment system, with all its input and output systems, and the rest of the car itself, compared to systems where the infotainment system was plumbed in later, like CarPlay. Gabriel, do you have a thought on that?

[Gabriel] (22:43)

That’s a good way to put it, right? On the plane you have limited functionality, and in the car they want to have, for example, even Wi-Fi. They’ve got a USB port to connect your phone, to have the maps loaded and get access to your music as well. And at the same time, you have control of certain aspects of the car, like maybe even moving a seat back and forth. That’s something that could be dangerous if it can be controlled by an attacker, right? If you’re driving, you could just flip.

[Paul] (23:15)

It gives a whole new dangerous meaning to sudden relaxation. I hadn’t even thought of that. There is quite a lot where you think: it doesn’t control the engine, so I shouldn’t worry about it. But it does control your ability to control the vehicle safely.

[Gabriel] (23:31)

Yeah, exactly. There are safety-related issues with all that functionality. And as you mentioned, there’s the separation between, let’s say, pure car functions versus entertainment or more pleasure kinds of functions. And when we do reviews, those are the kinds of things that we look at, right? Because sometimes the infotainment units allow installing apps. So what access can these third-party applications have in the infotainment unit? Can they send messages to the car seat, for example? Are they properly containerized and correctly isolated? In fact, the infotainment unit is very complicated, and very important in terms of security as well.

[Paul] (24:16)

And I guess, particularly since you mentioned the idea that many modern car infotainment systems will allow the user to add their own apps, just like they can on a mobile phone, that expands the attack surface even more, particularly if those app stores accept submissions from all over the place to give you the best experience. If somebody has your worst interests at heart, they could be just looking for a way to have an app approved that will then find its way into a large number of vehicles.

Maybe not so they can control a car, but so they can just learn more about the operation of a business, the movement of people, something about how society works. All of the things where you kind of think: I wouldn’t have given that information away willingly if somebody had asked me. So Joe, what sort of innovations or changes do you think we need, for example in the automotive software supply chain, to deal with all these burgeoning problems?

[Joe] (25:18)

Yeah, these are certainly software systems with wheels, as opposed to cars with individual components; at this point, they’re integrated systems.

[Paul] (25:25)

Well put. I think that’s a great way to think about it. And like you say, the software-defined car.

[Joe] (25:31)

They are loaded with software. If you think about it from the auto manufacturers’ or the OEMs’ perspective, they are motivated to deliver a driver experience, and an experience in the vehicle, that helps differentiate their vehicles. And if you’re BMW or you’re Mercedes or you’re Acura or you’re Ford, and you’re trying to compete, you’re looking for ways to delight consumers with that overall experience. We’ve become well-trained to bring our own devices, connect phones, and leverage AirPlay or CarPlay in the vehicle; we have become accustomed to some of those comforts, and some of those sound systems and video displays, that make that overall experience just different. I guess the point of the question, then, is what can you do security-wise to enable the drive and the motivation to differentiate on driver experience and customer experience? I believe security plays a key role. I believe collaboration around open standards and frameworks plays a key role. I believe pen testing and doing your own security research on components and modules plays a key role. I do believe in understanding the vulnerabilities that come from your supply chain: analyzing Software Bills of Materials and the underlying components, what components are in those products you get from your suppliers.

I believe all these play a role, and I would also add that inserting security protections when you build the software is a lot better than trying to chase things after the fact. You want to embed security into your devices and make it a part of that experience. It’s a full discipline. It requires cooperation and open standards and really good software and security methodology.

[Paul] (27:20)

Well said. Gabriel, maybe we can finish up by me asking you a very open-ended, future-looking question, if you don’t mind. What do you think are the biggest opportunities for innovation and improvement in the vulnerability research side in the automotive world?

[Gabriel] (27:41)

Well, I think the best way is for companies to keep doing what they are doing right now and improving their processes. I think they are doing a good job improving their SDL and getting third-party reviews.

[Paul] (27:56)

So SDL, that’s the Secure Development Lifecycle. So that’s where security comes at the beginning, the middle, and the end of development, not just something where you wait till there’s a problem and then say, hey, let’s paint over it and hope nobody notices.

[Gabriel] (28:00)

And keep working with independent researchers as well. That’s a way to have more people looking at the devices, not only in automotive. We look for vulnerabilities not for pride; it’s just about trying to apply our knowledge and make the world safer for everyone.

[Paul] (28:31)

Yes, it’s not showing off. It’s not trying to rejoice in somebody else’s weakness. It’s trying to make it obvious so that the good guys find it first. Therefore, if there is a problem that got through, it can be fixed before the bad guys get there. I wish we could all be out of a job tomorrow because security was a done deal. But cybersecurity is always going to be a journey and not a destination. I like it. Gabriel, it’s great to have people like you being part of that journey and helping us all to do it better. And Joe, lovely once again to hear your passion about getting the whole community to do things right, not just standing up and saying: I own a company that sells something, buy it. I really love that. So that is a wrap for this episode of Exploited: The Cyber Truth. If you enjoyed this episode, please don’t forget to subscribe so you know when each week’s new episode arrives.

Don’t forget to recommend us on social media and to share us with all of your team. And don’t forget, stay ahead of the threat. See you next time.

The post From Research to Resilience: Securing the Future of Autonomous Vehicles appeared first on RunSafe Security.

Iranian Hackers and the Threat to US Critical Infrastructure https://runsafesecurity.com/podcast/iranian-hackers-critical-infrastructure/ Thu, 24 Jul 2025 14:16:40 +0000 https://runsafesecurity.com/?post_type=podcast&p=254529 The post Iranian Hackers and the Threat to US Critical Infrastructure appeared first on RunSafe Security.


As nation-state cyber threats grow more strategic, the United States’ industrial control systems (ICS) and operational technology (OT) are facing mounting pressure. In this episode, Exploited: The Cyber Truth host Paul Ducklin is joined by RunSafe Security CEO Joe Saunders to explore the very real threat Iranian hackers pose to U.S. critical infrastructure.

Joe unpacks recent reports about Iranian-linked actors like CyberAv3ngers targeting human-machine interfaces (HMIs) and programmable logic controllers (PLCs) used in utilities, manufacturing, and healthcare. These attacks are disturbingly effective—not because they’re highly sophisticated, but because many devices are still running with default credentials and out-of-date software.

Listeners will learn:

  • How attackers gain access to and manipulate ICS/OT systems
  • What small and rural municipalities are up against when it comes to cyber defense
  • Why secure development practices like SBOMs and runtime protections are critical
  • How attackers use persistence as a weapon and how defenders can flip that script

Joe also introduces the concept of a National Cyber Guard, explores the role of public-private partnerships, and advocates for a culture of Secure by Design in critical infrastructure technology. This is a must-listen for cybersecurity professionals, OEMs, policymakers, and anyone invested in the resilience of America’s most vital systems.

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] Welcome to Exploited: The Cyber Truth. I am Paul Ducklin. I’m joined by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe. Welcome back.

[Joe] Hi there, Paul. Great to be back. 

[Paul] An intriguing, interesting, important, and worrying topic all in equal measure for this episode, Joe, Iranian hackers and the threat to US critical infrastructure. Let me start at the beginning and ask you to give us an overview of the recent reports that have come from the US government about Iranian attackers targeting specifically industrial control systems and operational technology rather than just general hacking like, hey, let’s do some ransomware and keep the money because it’s foreign exchange. What are the key takeaways from all of that? 

[Joe] Well, from an overview perspective, we have this cyber actor group called CyberAv3ngers, and that’s Av3ngers with the three in it, not an e. 

[Paul] Let’s not confuse them with the television and film franchise of the same name. 

[Joe] Yes.

[Joe] So these are the Iranian related cyber actors affiliated with the Islamic Revolutionary Guard Corps. And of course, the IRGC operates both internally and externally, and they have affiliated cyber actors doing stuff. And in this case, it appears the CyberAv3ngers who have been around operating for a few years now have been targeting aspects of US critical infrastructure and specifically targeting our good friends, the human machine interface, otherwise known as HMI, and the PLCs, the programmable logic controllers that help operate these OT systems inside critical infrastructure. 

[Paul] So they’re not necessarily targeting the valve actuator itself. They’re targeting the little LCD control panel that may have been built twenty three years ago with a few user interface buttons on it that let you do things like open valve, close valve, increase flow rate, etcetera.

[Joe] That’s exactly what they’re doing in this particular case, or in these reports. Unfortunately, there was a manufacturer of these kinds of devices, in this case Unitronics PLCs, and their devices, unfortunately, didn’t have the best security protections in the first place.

[Paul] That’s very diplomatically put, Joe. 

[Joe] And we’ll get into it in more detail, I suppose. So the CyberAv3ngers, I guess, saw an opportunity.

[Joe] And, in fact, working on behalf of the IRGC, certainly motivated by calling attention to any conflict that Iran and Israel has had over the past couple years. So IRGC, working in conjunction with these cyber actors, doing things to undermine the credibility of Israel in different ways. And so targeting US infrastructures, targeting even UK infrastructure and other areas around the world is something that the CyberAv3ngers are doing.

[Paul] So part of this is kind of showmanship scary PR, but you have to assume that the other part of it is, hey, if we can find exploitable vulnerabilities, maybe there’s a time where we’ll want to actually press the button that actuates the valve at the wrong time. 

[Joe] We’ve seen that in US critical infrastructure where we find cyber bombs, if you will, implanted in US critical infrastructure that could be detonated at a time of the choosing of the bad actor behind them.

[Paul] And those could be at multiple levels, couldn’t they? It could be some flaw in the actuator itself, just thinking of valves. It could be a problem in the HMI that allows it to be triggered when it shouldn’t be. Or it could be something in the IT part of the system, where there’s some software that’s simply supposed to take stock of the settings in the network, but turns out to have an API bug that means it can actually be used to take control rather than just to read out data. So, Joe, do you want to say something about the particular methods that these threat actors have been using?

[Joe] Yeah.

[Joe] So in these particular cases, the CyberAv3ngers were leveraging factory installed standard passwords. 

[Paul] Oh, dear. 

[Joe] But then what they did is they figured out: what else can we do, once we get on the device, to disrupt operations from there? They were modifying different files in the file system, doing things that would fall back to previous versions, and eliminating certain communications ports.

[Paul] So they could actually, if you like, reintroduce bugs that had been patched in the past?

[Joe] Reintroduce bugs and then lock out administrators from gaining access. 

[Paul] Oh, so they’ve got root access because they’ve broken in, and then they very carefully closed the door behind them. 

[Joe] Yep. They disabled things so you couldn’t upload or download anything, and they prevented the operator from getting back into it. There were lots of concerns in 2024 about which public utilities were at risk, and the local water companies, in this case, were the ones that were exposed.

[Joe] It creates a big problem when you have these smaller water systems, perhaps in rural areas, perhaps in major cities, perhaps in suburbs and exurbs in between. But what is the right level of response from the federal government, from law enforcement, to help mitigate what happened, which was affecting local systems that may not have been aware of what techniques were being employed in the first place?

[Paul] Yes. It’s always a difficult thing to balance blame, isn’t it? Clearly, the attackers shouldn’t be doing this, and it’s both criminal and dangerous in equal measure. At the same time, you think: well, the people who either build or sell or procure these devices that aren’t secure, how much more could or should they be doing? You don’t want to blame the victim, but you kinda think maybe we need to build an environment where people are less likely to be attackable in the first place. So how do you go about dealing with that? Particularly, as you say, in small towns and small municipalities?

[Paul] They don’t have much money at all. They’re not commercial enterprises that can go, hey, we’ll put the price of tomatoes up a bit, and we’ll take some of those profits and we’ll spend it on cybersecurity. 

[Joe] Yeah. We all appreciate a well functioning society, especially when something goes wrong like this. 

[Paul] Indeed.

[Joe] These are vital systems. These are public services that help us with the level of comfort we have in our lives today. For these organizations, I think, the immediate options that they had were to disconnect those devices from the Internet, update all the passwords on those PLCs and HMIs, and even update software systems to more current versions.

[Joe] You know, a lot of these systems don’t end up getting updated in a timely fashion. And so those were immediate steps, but then you ask yourself, okay. What else should have been done or could have been done? And unfortunately, when you ask that question after an incident like this, the general response is everything you can possibly imagine should be done. And you could probably list out 50 different things that the manufacturer and the distributor and the operator and the government should all be doing.

[Joe] And for me, it comes back to the basic fundamentals of those technology providers and those product manufacturers and what they ought to be doing in their devices in the first place. And of course, an obvious one there is not to just have default passwords made available. 

[Paul] Hallelujah. Yes. I can see why vendors do it.

[Paul] It means you get the device, and there’s a little ticket in there that’s the same for everyone, saying: log in as admin, password admin, and then set up the device following these steps. The problem is that if you neglect to do those steps, the device still works properly. And I think you should assume that every default password ever programmed into any device ever made is on a list that every cybercriminal and every state sponsored actor already has in their pocket. If you think of it like that, it’s pretty obvious why you should not have working passwords that are the same for everybody.
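
As a minimal sketch of the alternative Paul is describing, with hypothetical names throughout, firmware can refuse to enter normal operation until the factory default has been replaced with a per-device random credential. The hashing here is illustrative rather than production-tuned.

```python
# Minimal sketch of a first-boot credential policy; names are hypothetical.
import hashlib, hmac, secrets

def hash_pw(pw: str, salt: bytes) -> bytes:
    # PBKDF2 for illustration; real firmware would tune parameters carefully.
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)

FACTORY_SALT = b"factory-salt"                    # hypothetical fixed salt
FACTORY_HASH = hash_pw("admin", FACTORY_SALT)     # the shipped default

def must_rotate(stored_hash: bytes) -> bool:
    # Stay in setup mode while the factory default is still live.
    return hmac.compare_digest(stored_hash, FACTORY_HASH)

def provision_password() -> str:
    # Per-device random credential, generated at first boot, so no two
    # units ever share a working password.
    return secrets.token_urlsafe(12)

if must_rotate(FACTORY_HASH):
    print("first boot: forcing credential change ->", provision_password())
```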

[Joe] I was thinking about it. Did you know, Paul, that in many cars, if you disconnect the battery, there could be an anti-theft provision in there for the radio, and the radio may not work anymore? And guess what the answer is? It is to enter a default password that’s the same for all the radios produced in that same model.

[Paul] Let me guess. It starts with zero.

[Joe] Mine starts with a five. But it’s the same concept. Right? You know, you need a way to try to slow people down, but you have to make it accessible for support reasons and others. 

[Paul] Exactly. We spoke about that in the podcast with Leslie Grandy, didn’t we? Where she said: you have to be really careful that you don’t knit a security straitjacket so restrictive that people go, I’m just gonna find my way around it. It’s too hard.

[Joe] And for me, that begins with those product manufacturers and the culture they set around quality, around safety, around security, and perhaps most importantly, around the overall software development process, to enable those core pillars of good business for the people who supply technology to critical infrastructure.

[Paul] So that’s SDLC. Right? Secure development life cycle. We don’t use that old school waterfall model where you do a whole load of development, then you do the testing. Oh, we found some bugs. Let’s stick some polyfiller in the cracks.

[Paul] Let’s sand it back nicely. Paint over it. She’ll be right. Because she probably won’t. 

[Joe] And I do believe that if you don’t upgrade your SDLC, if you don’t bring it forward into a more security minded set of practices, at some point, that means that your devices are not secure, and that means there’s room for other products to disrupt your marketplace, to disrupt your customer base, and bring something that is more secure that may even have more features.

[Joe] I think organizations that have weak security practices face a competitive threat. I think it’s fundamental that these organizations embrace a more secure development life cycle and build security into their products, building in all these automated practices that I’ve talked about over time. And it’s really looking out for your customers’ interests in the end, because there are well motivated nation state actors, like the ones affiliated with the IRGC, who will find exposure, and they do look for those organizations, those products, that have weaknesses.

[Paul] So from a community, from a social, from a society point of view, you should want to do it. From a legal and ethical point of view, you ought to do it. But the flip side of that coin is that, increasingly, market forces may actually compel you to do it, because customers are showing that they prefer products that do take security seriously. And the recent RunSafe healthcare survey of medical device acquirers showed that strongly, didn’t it? Wasn’t it almost 50% of the people you surveyed in the medical industry who said: we had products that we would have loved to buy because they’d be great for health care, but we said no, we’re not buying them, not good enough from a security point of view? So you can have the most fantastical surgical robot, but you might not be able to sell it if it’s not secure enough.

[Joe] And I think this is where programs like Secure by Design emanate from. The idea that you do need to build in security for the benefit of your customers. Let’s also then look at it from the operator’s perspective.  They may have limited resources as we said.

[Joe] Some of these wastewater systems in smaller jurisdictions may serve 2,000 households or maybe 500 households. And they might have an IT person who’s looking after OT systems and security and internal systems and all the workstations in the enterprise. 

[Paul] And helping the next municipality and the three unincorporated towns around. 

[Joe] And it’s a big job for anybody. Right?

[Joe] And there’s a lot of responsibility. And so I do think, from that perspective, from the operator’s perspective, we’ve talked about the notion of Secure by Demand: asking about the security practices of your suppliers, and that gets to the health care report that you talked about.

[Paul] And it’s not being difficult, is it? It’s not just getting the price down because I don’t think your security is good enough. It’s just saying, if you can bring me security, then I’m naturally going to be more likely to buy your product.

[Joe] Yeah. And as they say, it takes two to tango. So supplier and operator need to work in concert. Again, going back to the wastewater system example, so that’s the operator side and the product manufacturer side. And then there’s still this effect on the society, the town, the city, and all the citizens there.

[Joe] And so then, there is this extra question of what the government ought to be doing. And that’s actually an interesting question in this particular case, because there was legislation passed since then to help the US government provide grants and assistance to wastewater systems and other local utilities, to help bolster or assess their systems, and to do that in a way that allows them to get resources, or access to resources, that they wouldn’t otherwise have. So that’s yet another response that has resulted from these events, because of the nature of what was targeted.

[Paul] I guess some of these HMI systems that were targeted may be general purpose. They may not just be there to control valves or water supply or water drainage systems.

[Paul] They might be the same systems that could also be used to control a lathe, or to control an automated cutting machine. And when you think about the name PLC, programmable logic controller, the whole idea is that you can reprogram them, so they can run your Christmas tree lights, but they could also run the street lights of a small town. So if the wrong person can reprogram it, game over.

[Joe] Yeah. The digitization of these switches to enable the ability to control things, to monitor things in a more efficient way is ultimately the goal. And then the connectivity that resulted from all that is what exposes these digitally enabled controllers and switches and whatnot to be targeted in the first place. And then when you have a distributed system with lots of customers and you have support considerations, well, you end up finding some examples where there are default passwords put in play. And that, in this case, came at a serious cost. Part of it is just how can we prevent the next attack, and part of it is what does the wastewater system team do to respond to the existing attack. And I still think there’s other philosophical ways that these things can be approached going forward.

[Joe] After these events, there was yet another consideration that some people had. And as we went into a new administration, some other ideas were bubbling up as ways to address these things, and one was to have some kind of incident response team that could be deployed quickly, on behalf of the government, to some of these systems that do get compromised. The idea being that there will still always be attacks on other systems, so how do we get mobilized forces out there quickly to deal with them? Should those be volunteers? Should those be private companies? Or should we have people at the ready?

[Paul] So that’s like a concept of a first responder like you might have for bushfires or home fires or road traffic accidents. 

[Joe] And I’m just speculating now. These things are not real, what I’m about to say. But what if you had the National Cyber Guard?

[Joe] Or what if you had programs with the local universities? So you have students learning their cyber skills and cyber crafts and their computer science skills. Instead of volunteering for EMT, you’re volunteering for the cyber EMT. 

[Paul] Yes. 

[Joe] You know, it’s great opportunity to build up skills for students and could provide a great service in certain areas.

[Paul] Well, if you think about medical degrees, in most countries one year of a medical degree will be some kind of residency or internship. You’re learning, but your lectures actually happen in the ward while you’re following doctors and nurses around and being a kind of entry-level nurse and doctor at the same time. Maybe that’s a kind of apprenticeship model for the cybersecurity industry, where you’re not just out there interning at some big commercial company in the hope of getting a job. You’re actually part of an emergency response team where you will learn to fight the good fight, sometimes in quite difficult circumstances.

[Joe] And the trick then, bringing it back to CyberAv3ngers.

[Joe] These are highly skilled, well trained cyber actors looking for systemic vulnerabilities where they can wreak some havoc. It may be that they are, in some cases, doing ransomware for purposes of raising money. In other ways, maybe they’re motivated by trying to undermine government in different ways. And so with the IRGC, the US is certainly a cyber target for trying to attack critical infrastructure. And that’s what we saw in these attacks with the CyberAv3ngers on wastewater systems.

[Paul] So, Joe, what are some of the immediate steps for both vendors and consumers, municipalities, for example? What are the simple, immediate steps that they could take to improve security in ICS or OT systems?

[Joe] One of my strongest recommendations is for these organizations to simply ask for a Software Bill of Materials that includes known vulnerabilities associated with the underlying software components. 

[Paul] So that’s the recipe, the complete recipe that goes into the cake, including all the things that maybe you wish you hadn’t actually put in there when you baked it. 

[Joe] Yeah. Those little morsels that could turn into some problem down the road. 

[Paul] Three dead flies. Sorry about that. 

[Joe] Three dead flies.

[Paul]  I’m getting a bad feeling in my throat now.

[Paul] But that’s an important part of that whole process, isn’t it? Your attitude to vulnerability disclosure. What happens when something is found? That needs to go into what is effectively the bill of materials. Right.

[Paul] So that if somebody needs to chase down the dead flies, they know where to start looking. 

[Joe] Yeah. 

[Joe] So, ultimately, my point is that with those kinds of ingredients, with an understanding of those components and the vulnerabilities that go with them, you start to get your arms around where the more significant problem areas are that you may wanna focus in on. And then from there, I do think it’s okay to start to ask those vendors: what are your security protections, and what kind of security have you built in? And what are you doing to prevent attacks even when a patch is not available?

[Joe] How can you help me manage my OT network and my OT systems? I say start with the Software Bill of Materials, then start to work with your priority vendors where you see opportunity to reduce risk, and then start to ask them those questions about what kind of security they build in. And some of those questions about their software development processes, their secure development life cycle methods, and their Secure by Design techniques. And if you do that, all of a sudden, I think you have a really good feel for what else you may need to do in your own operations to help fortify where there may be weaknesses. I think ultimately, you wanna help affect the security that your suppliers build into the systems they ship.

[Joe] If you can do that, then your systems will be protected, and society will be better off. 
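
As a concrete illustration of that first step, starting with the Software Bill of Materials and its known vulnerabilities, here is a minimal sketch that checks a single SBOM component against the public OSV vulnerability database. The query shape follows OSV’s documented API; the package and version shown are stand-ins for entries from your own SBOM.

```python
# Minimal sketch: look up known advisories for one SBOM component via OSV.
import json, urllib.request

query = {
    "version": "2.4.1",
    "package": {"name": "jinja2", "ecosystem": "PyPI"},  # stand-in component
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

print(len(vulns), "known advisories for this component")
for v in vulns[:5]:
    print(v["id"], v.get("summary", "")[:60])
```

Run once per component, a loop like this gives an operator the view Joe describes: where the more significant problem areas are, before the conversation with each vendor even starts.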

[Paul] So, Joe, that covers, if you like, the Secure by Demand side where the person who’s actually fronting up the money and the location for the devices says, show me what you’re doing. Now if you are a vendor and you’re asked for a Software Bill of Materials, if you can’t come up with that Software Bill of Materials, A, you probably should be able to, and B, if you’re going to sell into Europe, the Cyber Resilience Act that’s coming into force soon says, thou shalt jolly well learn how to come up with the Software Bill of Materials. What should your first steps be so that you can actually get that complete recipe? Because they’re not like a cake which might have 12 ingredients.

[Paul] There could be hundreds or even thousands of distinct components that mix in at multiple steps of the supply chain. So where do you start from the Secure by Design side?

[Joe] So you can certainly analyze your source code. You can certainly analyze the binaries. But there is an optimal spot to generate a Software Bill of Materials, and that is as the software product is getting produced. You could look at the recipe and say, oh yeah, I can see where I might have some security vulnerabilities when I bake this cake. Or you could look at the finished cake and say, I see the icing, and I bet what’s underneath the icing looks really good. But you don’t really know until you cut it open and eat it, and that’s when you find those three flies in the cake.

[Joe] In the software development process, especially in these embedded systems that comprise the technology in critical infrastructure, essentially you take source code, you compile it, and you produce a binary. During that compilation process, the compiler takes all the intended instructions and all the source code from the developer, and pulls in other components and dependencies and third-party libraries and all that good stuff. That’s the moment to generate the Software Bill of Materials.

[Paul] Right.

[Joe] Because that’s where you have ground truth of what is actually ultimately going into that binary. 

[Paul] So you get a much richer, much more precise picture of what could go right and what could go wrong.

[Joe] And some people will argue, and I am one of them, that you should produce the Software Bill of Materials as close as possible to the moment when you produce the software binary that you’re going to ship to your customers. And the reason for all of this, then: you asked about, well, what should the product manufacturers be doing? What should be the supplier’s approach to all of this? There are a couple of items, but one of them is thinking about what your customers’ needs are. And if you are shipping technology to go into all forms of wastewater systems across, say, the United States, where there are, I think, 50,000 wastewater systems in the country, you have to realize not all of them have the same level of security on their side, so you can do more.
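
What generating the SBOM at that moment looks like varies by toolchain, but the output is typically a standard document such as CycloneDX. Here is a minimal sketch that emits a CycloneDX-style SBOM; the component list is hardcoded as a stand-in for whatever your build system reports the linker actually consumed.

```python
# Minimal sketch of emitting a CycloneDX-style SBOM at build time.
# The linked_components list is a stand-in for real link-step output.
import json, uuid
from datetime import datetime, timezone

linked_components = [
    {"name": "openssl", "version": "3.0.13"},   # hypothetical inputs
    {"name": "zlib",    "version": "1.3.1"},
]

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
    "components": [
        {"type": "library", "name": c["name"], "version": c["version"]}
        for c in linked_components
    ],
}
print(json.dumps(sbom, indent=2))
```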

[Joe] And so I think it’s good customer service to start to really analyze what’s going in these software products so you can reduce downstream effects when a cyber actor tries to compromise the system.

[Paul] So, Joe, at the risk perhaps of seeming crassly commercial, perhaps you’d like to say something briefly about a couple of products and services that RunSafe does provide. One, building that bill of materials automatically, helping you to do that; and the other, building your software in a way that makes it more resilient to exploitability, even if you have inadvertently left vulnerabilities in it.

[Joe] At RunSafe, we took a philosophical approach to disrupt the economics associated with cyberattacks. A lot of these attack groups, like the CyberAv3ngers, are persistent.

[Joe] They might spend six months analyzing and developing their exploit and figuring out their attack methods and how to get on systems and things like that. And what we wanted to do was make it so frustrating that every time they go back and spend another six months on a target device, they get thwarted. And so that kind of approach means you’re disrupting those very economics. You’re turning that persistence, which is maybe one of their strengths, into one of their weaknesses.

[Paul] So in other words, they might spend three months getting an exploit to work against one instance of the valve actuator that they’ve bought and they’ve got in their lab, and then they go and try it on the other 999,999, and they realize: oh dear, we need to re-tailor the exploit.

[Paul] You haven’t disrupted the determinism because the devices will still perform correctly, but you’ve disrupted the ease with which they can find a one size fits all attack. 

[Joe] Right. And so if you take the Unitronics example, then, if they had, say, security mechanisms built into their systems: one, they could save some time with their own developers, eliminating a whole bunch of vulnerabilities from being exploited even when a patch is not available. Two, they could be enhancing their customers’ operations as well, reducing downtime and ensuring that systems aren’t disrupted.

[Joe] With that in mind, that’s a philosophical approach to the technical thing that we did, which is insert security at build time. That security gets invoked on a device when it loads out in the field in that wastewater system for the benefit of runtime protection. And so it’s a very simple step for developers, for those product manufacturers, for those companies that ship the technology to their customers and critical infrastructure to enable security when they’re baking the cake, so to speak, from earlier. 

[Paul] And it doesn’t require a super modern, super special compiler. It doesn’t require all sorts of extra compiler options that generate all sorts of extra code that could interfere particularly with real time devices that can’t tolerate additional delays and additional checks.

[Paul] The same code is running. It’s just running in a sufficiently different way that you would assume that its performance will be the same, but its exploitability will be sufficiently unpredictable that an entry to one will not be an entry to all. 
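
As a toy illustration of that idea (a simulation only, not RunSafe’s actual mechanism), consider a loader that places the same functions at different addresses on every device. An exploit that jumps to the address found on the attacker’s lab unit then fails on most fielded devices, even though every device runs the same code.

```python
# Toy simulation of load-time layout randomization; all names hypothetical.
import random

FUNCTIONS = ["open_valve", "close_valve", "log_event", "update_firmware"]

def load_image(seed: int) -> dict:
    # Shuffle function placement per device, like a randomized memory layout.
    layout = FUNCTIONS[:]
    random.Random(seed).shuffle(layout)
    return {0x1000 + i * 0x100: name for i, name in enumerate(layout)}

LAB_ADDRESS = 0x1000  # where "open_valve" happened to live on the lab unit

hits = sum(load_image(seed)[LAB_ADDRESS] == "open_valve"
           for seed in range(1000))
print(f"hardcoded exploit lands on {hits} of 1000 simulated devices")
```

With only four functions the hardcoded exploit still lands about one time in four; a real binary has thousands of possible placements, so the odds against a one-size-fits-all exploit get far longer.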

[Joe] And that is an asymmetric shift in cyber defense on the economic side and on a technological side. And so that same point where you can add in the security happens to be that exact same point where you can build with RunSafe a Software Bill of Materials that is complete and correct and accurate. 

[Paul] Because it’s looking at what’s going into the cake, it’s not sampling the cake and trying to guess from all the chemical reactions that happened during the baking what was there beforehand.

[Joe] Yep. All we wanna do is sprinkle a little pixie dust as you’re producing that cake so that that cake is resilient down the road. And maybe that’s not quite the right analogy. But with that said, we wanna make it so simple that you can add security into your technology systems so they are resilient when they’re out in the field. 

[Paul] So it should be fast, efficient, and safe. Pick all three of three. 

[Joe] Yeah. Take all three of three, because we know projects are managed around budget, cost, and scope. And so if you can not disrupt the budget, not disrupt the scope, and not disrupt the timelines themselves, and still have a more robust, resilient system from a security perspective, then everybody wins, including your customers.

[Paul] Absolutely. Joe, I think that’s a great spot at which to finish: the idea that this really is something that affects all of us, and that all of us, even if it’s only a little bit, can actually help and make a difference, whether we’re a consumer, a user, a supplier, a vendor, or whatever.

[Paul] So that’s a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. If you enjoy this podcast, please don’t forget to subscribe, so you know when each new episode drops.

[Paul] Please like, share, and promote us on social media. Please be sure to recommend us to everyone in your team. Don’t forget folks, stay ahead of the threat. See you next time.

The post Iranian Hackers and the Threat to US Critical Infrastructure appeared first on RunSafe Security.
