AI Has Democratized the Most Dangerous Cyberattack Yet With Dylan Deanda From Doppel
Security Unfiltered · May 04, 2026
Episode 237
00:52:12 · 35.91 MB



Get ready to dive deep into the raw truths of modern cyber threats, the evolution of social engineering, and how AI is transforming the battlefield. With cybersecurity veterans Dylan DeAnda and Joe, we uncover the alarming acceleration of digital threats, the brilliance behind proactive defense, and the game-changing power of AI-driven tools.


Timestamps:

00:00 - Why it took years to get this conversation with Dylan scheduled

02:24 - The importance of remembering everyone is just a person, regardless of titles

03:51 - Dylan’s journey from military signals intelligence to AI-driven cybersecurity

05:36 - What it takes to master language and signals in special operations and cybersecurity

08:08 - The rapid evolution of AI and its impact on social engineering techniques

11:37 - How multi-channel attacks target human vulnerabilities at machine speed

12:43 - The rise of agentic AI ecosystems and their role in cyber assaults

15:02 - The shift from traditional perimeter security to detecting external signals online

16:53 - The terrifying accuracy of AI deepfakes and voice impersonations in social engineering

18:10 - Harnessing AI-powered agents to automate and supercharge security operations

20:53 - The implications of tokenized AI and its influence on corporate productivity and risk

22:38 - Doppel’s approach to preventing impersonation campaigns and pre-emptive attack disruption

24:31 - Finding quiet signals: the art of subtlety in military and cyber defense

26:53 - How LLMs automate signal detection and the importance of human oversight

28:01 - Military precision in offensive cyber operations and applying it to enterprise defense

29:02 - Threat models in multi-cloud environments and the underestimated risks

31:38 - The inadequacy of current security awareness training against AI-enabled deception

33:02 - How AI can create realistic, convincing threats that are almost indistinguishable from reality

34:07 - The futility of traditional phishing tests in the age of AI deepfakes

36:12 - Building resilience and measuring human risk to reduce social engineering success

37:05 - The importance of drill, discipline, and fundamentals in cybersecurity and military training

40:09 - The incredible skill and precision in military operations and their digital parallels

43:25 - The critical need for scenario-based training to prepare for real-world cyber and physical threats

47:21 - The revolutionary potential of AI-generated, interactive, and adaptive security training tools

48:35 - Connecting with Dylan and exploring Doppel’s cutting-edge platform for your organization

 

Learn More About Doppel: https://www.doppel.com/
Connect With Dylan: https://www.linkedin

Support the show

Follow the Podcast on Social Media!

Tesla Referral Code: https://ts.la/joseph675128

YouTube: https://www.youtube.com/@securityunfilteredpodcast

Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast

Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE

➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout

*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.

Opening And Guest Welcome

SPEAKER_01

How's it going, Dylan? It's great to get you on the podcast. You know, I know that Doppel and I have been trying to get this thing scheduled for a couple of months now. It seemed like it just took the stars aligning for us to get this going. But I'm really excited for our conversation today.

SPEAKER_02

Yeah, likewise, Joe. Longtime listener, first time caller. So I'm very excited and grateful to be here.

SPEAKER_01

Yeah, absolutely. It's always kind of shocking to hear when people say, oh yeah, I've been watching you since like episode 10 or whatever it is. When I started, my goal was literally having 10 people listen; that was a major win for me, right? That's how you have to set it, right? You have to have those low expectations, and then, hopefully, something else will be able to come from it.

SPEAKER_02

Yeah, absolutely. I mean, you've been working hard at this. Uh, you've been doing this about what three, four years now for this podcast alone? Five years. Five years.

Confidence With High-Title Guests

SPEAKER_01

Holy cow, it really flew by. It's almost embarrassing when I go and watch the older episodes. I almost want to go back and redo them or something, because the production quality has changed so significantly. My own interactions, my ability to interview and talk with a whole myriad of people, have changed and really grown in a lot of significant ways. And when I have people on that I had on in those early days, they tell me, man, you're completely different from how you were before. And that's great, because every year I try to get better, make different leaps and bounds in terms of production quality. So it's always good to hear that.

unknown

Yeah.

SPEAKER_02

If you were to go back to yourself five years ago, what one piece of advice would you give for getting where you are now?

Sponsor Message And Real-World Setup

SPEAKER_01

Hmm. Wow, that's a really good question. That is actually a really good question. I would say you have to remember that everyone's just a person at the end of the day. Because in the very beginning, it would get in my head, like, oh, I'm interviewing a CISO or a CEO or a COO or a board member, this famous person within the security community, and it kind of would make me nervous. And now I have a good friend who just reminds me, hey, at the end of the day, they bleed red blood, right? The blood that comes out of them is not blue, it's not purple, it's red. We're all people at the end of the day. Titles are nothing, you know, just have a good conversation. And I remind myself of that now. And I wish I would have had that mentality back then. How's it going, everyone? Welcome back for another episode of the Security Unfiltered Podcast. Today, Doppel is sponsoring this episode. I'm interviewing a really great person from Doppel. You know, I actually ran into the use case for Doppel a year ago, before I knew about Doppel. It's a real story; I tell it in the episode. If we would have had this sort of technology, it would have prevented it right from the very beginning. And so that's why, when I was introduced to Doppel, I had to partner with them. I had to get them to sponsor the podcast, because I love innovative new technologies that are coming out on the market. So I wanted to bring you guys this episode, because I think the tech is really cool and I think you would find it really interesting. So go ahead and check it out. All of the links are in the description of the episode. And now we can continue on with the episode. Thanks, everyone.
Going into some of those conversations, you could tell in those early interviews that I'm a little bit nervous, you know, talking to this person and whatnot. And I don't feel like that anymore. That's for sure.

SPEAKER_02

Yeah, I mean, you've covered an impressive bit of ground, with the number of authors you've interviewed, generals you've interviewed, you know, NSA targeting officers in episodes that still haven't been released yet.

SPEAKER_00

Yeah.

SPEAKER_02

But, you know, again, you're doing the security community a great service. And you're also an educator as well, right? Adjunct professor and a practitioner. So you get to see a lot of different angles that many people don't get to see. So thanks for bringing this podcast to light.

From Army Linguist To CTO

SPEAKER_01

Yeah, thanks. I appreciate that. I typically don't talk about all the things that I also do, you know, like the adjunct stuff and the courses that I create and whatnot. And I don't know why; I should probably mention it every episode, as a little mid-roll or whatever it might be, but we can definitely dive into that in another conversation. But you know, Dylan, I really want to hear about you. How did you get into the space? Because everyone has such a unique journey, such a unique path. I've done over 250 episodes, and I literally haven't heard the same path, the same journey, twice.

SPEAKER_02

Yeah. So, well, my name is Dylan Deanda, and I'm based in beautiful Montana. You can see it behind me. But I started my career in security, or in cyber, as a signals intelligence analyst and a Korean cryptolinguist for the US Army. And in that role, my job was pattern recognition in adversary communications, and making sure that we identified those signals before those threats went kinetic. And one of the things I learned really fast in that work was that the most dangerous signal is never the loud one. It's always the quiet one that fits perfectly into the background noise. And from that point, once I left the military, I spent another 30 years moving through enterprise security and federal contracting and building out teams at companies like Loudcloud, Tanium, Kratos Defense, HP, and McAfee. And I was fortunate enough to join Doppel in early 2025 as their field chief technology officer. But I think the one common thread through all of that career is the same thing: find the signal before it becomes any damage or harm. And today I'm doing that at the human layer of the stack, which is where we're seeing that pretty much every meaningful attack is landing right now.

SPEAKER_01

Hmm. Wow. That is a really interesting background. I mean, can you tell me a little bit about what it takes to be in that kind of role? That's such a unique, I don't want to call it a niche, right? But it is kind of a niche, because you're specializing in Korea overall, right? But what does that training regimen look like? Because I would imagine you're not just learning the technical side of it. You have to learn the language. I mean, you have to be fluent in the language to be able to pick up these different signals and whatnot, especially if they're speaking Korean. That must be extremely challenging going through that.

Language School And SIGINT Tradecraft

Agentic AI Supercharges Social Engineering

SPEAKER_02

Yeah. Well, you know, first you have to go through basic training, and then you get to take this thing called the DLAB, the Defense Language Aptitude Battery. It's about a six-hour test, and they bring you into a room; in my room, there were about 60 people. And they sit you down and say, okay, we're going to test you on your reading, writing, and listening comprehension of a dead language. And that language was Esperanto. And so they gave you the syntax, they gave you the rules, they gave you vocabulary: if A, that means B. And from there, you go through a reading and writing and listening battery. And I remember, about two hours into the six hours, a sergeant next to me just crumpled up his paper, threw it down, said, I'm done, and walked out. That happened a couple more times. But once I qualified, they asked me what languages I wanted. And I said, ah, you know what? I would love to speak Russian. They said, no, Russian? That threat is dead. The wall came down, the Cold War's over, buddy. And I said, okay, how about Japanese? And they said, that's pretty much an officer's language, for liaison. We can get you into a class in about two years. And then I said, okay, how about Korean? And they said, perfect. We can get you into a Korean class in about 30 days. And so from there, I got the good fortune of going to Monterey, California, to the Defense Language Institute, where you have all these joint services and agencies coming together to learn pretty much every language in the world. And we started out with, again, about 60 or 70 people in our class, and there were about 15 of us who graduated at the end. But it was just daily immersion into the culture, into the language, into listening, into reading and writing and speaking.
And then from there, you go into electronic warfare training, learning all about radio frequencies, intercepting, jamming, misinformation, and even social engineering as well. So there's a big human element as a part of that. But it was a great career, and I got to travel the world. I met my beautiful wife in Hawaii, where I was stationed. And since then, it's been cyber; it's been defending organizations, defending humans, and watching the adversary evolve rapidly over time, and especially now, how we're seeing just a hyper-evolution of these attackers, especially around exploiting human trust. That's the new vulnerability that we're seeing out there.

SPEAKER_01

Yeah. Yeah. And, you know, the field is evolving so quickly, too. I mean, it's insane to think that a year ago, agentic AI was literally just starting to be talked about. It wasn't anything that anyone was deploying in an enterprise environment or using at home. I mean, there was a handful of people doing it, and I was not one of them, right? I literally remember hearing about it for the first time, I think in May of last year, on a business-related podcast, where someone was saying how they were using agentic AI to do their marketing and accounting and all this stuff, right? But even from the perspective of social engineering, I mean, social engineering just took almost a 180, right? In terms of difficulty, and now trying to actually identify it and figure it out. I've told the story before, but it's very applicable, especially to the industry. I started at a company a while ago, and one of the very first things we did, getting up to speed, was learning about how the environment interacts with itself, how the CEO interacts with the rest of the organization. Is there anything he does that could be considered a little bit abnormal, right? And one of those things was, every once in a while, it was a fairly regular occurrence, but it was random because it wasn't on a regular interval, the CEO would reach out to the CFO and say, hey, wire this $15 million or $10 million or whatever it was to this account over here, and he would just do it. It was fine. That's what was expected in that environment. And when I found this out from legal, I said, well, look, I'm very uncomfortable with that. We shouldn't be letting the CEO do that. But I know that I can't say, no, you can't do that, right?
Like, he's the CEO at the end of the day. I'll probably be gone if I tell him that he can't do that, right? And so I encouraged very strongly that we implement a passcode that rotates after every use and rotates every week, because it wasn't a weekly thing that he was doing it, but we still needed to rotate it, and store it in a secrets vault where only he and the CFO were able to retrieve it, and maybe an admin like myself, if something went wrong and we had to generate a new secret or whatever. And literally two weeks after implementing it, the CFO gets a call fairly late at night, I think it was like 10 p.m. Sounds exactly like the CEO; it's not hard to fake anymore. Sounds exactly like the CEO, requests 10 million bucks to be sent to some account. And the CFO had been told, it was really ingrained into him: hey, if this goes wrong, if you do not request that passcode, or if he can't answer the passcode and you still send it, or whatever it might be, if it was wrong, you're fired. Immediately, you can hand in your laptop, that's what you can expect. So it was really ingrained into him: hey, I have to make sure I ask for this thing.

unknown

Yeah.

SPEAKER_01

So he asked for that passcode, and they couldn't give it to him. And so he hung up the call and reported it to us in the morning. And, you know, sure enough, like it was never the CEO, right?
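The out-of-band verification scheme in Joe's story can be sketched in a few lines. This is a toy illustration, not the company's actual implementation: the in-memory `PasscodeVault` class stands in for a real secrets manager, and the requester check is only noted in a comment. The key properties from the story are there, though: only a short allowlist can retrieve the code, and it rotates after every use so a replayed or overheard code is worthless.

```python
import hmac
import secrets

class PasscodeVault:
    """Toy rotating-passcode vault for out-of-band wire-transfer checks."""

    def __init__(self):
        self._code = secrets.token_hex(4)  # initial shared passcode

    def current(self, requester: str) -> str:
        # A real system would authenticate `requester` against the short
        # allowlist (CEO, CFO, one admin) before returning the code.
        return self._code

    def verify_and_rotate(self, spoken_code: str) -> bool:
        # Constant-time compare, then rotate regardless of the outcome,
        # so a replayed or leaked code is useless on the next call.
        ok = hmac.compare_digest(self._code, spoken_code)
        self._code = secrets.token_hex(4)
        return ok

vault = PasscodeVault()
code = vault.current("cfo")
assert vault.verify_and_rotate(code) is True    # legitimate caller passes
assert vault.verify_and_rotate(code) is False   # replay fails: code rotated
```

A deepfaked voice defeats the "sounds like the CEO" check, but not this one: the caller in the story could not produce the current code, which is exactly the failure mode the control is designed to surface.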

Multi-Channel Attacks At Machine Speed

SPEAKER_02

It never is. And you know, people still think of social engineering as just phishing emails. One email, one click, one mistake; if you do it three times, you're out. But that's not what we're seeing out there today. It's coordinated, multi-channel operations. Like you said, your CFO gets a LinkedIn direct message, and it looks like it came from your CISO. And then they get a Slack message from IT about an urgent wire transfer, and then they get a calendar invite to approve it in 40 minutes, because they've used three channels in one operation. And the security team is scrambling because their tools just saw three separate low-priority alerts, while that attacker saw a clean shot and a direct avenue right into the organization. And there are these agentic ecosystems out there that you can pay for, things like WormGPT, SpamGPT, FraudGPT. And now with OpenClaw, you can build out your own dedicated set of recon agents and exploitation developers. And it enables that hyper-personalization, for your recon agent to go out and find out everything about Joe or about Dylan. And it's being done at machine speed. It used to take days or weeks, and you had to have some pretty solid Kali Linux skills and Social Engineering Toolkit skills. But now you can spin up a synthetic identity, you can spoof an executive or a help desk employee, and you can scale the campaign at near-zero marginal cost. And that makes the ROI through the roof for the attacker if they get it even once. And they can continuously pivot and adapt in real time based on what they're seeing as successful. And it's a 24/7 operation.

SPEAKER_01

Yeah. Yeah. I mean, we're starting to see nation-state-level attacks from just the everyday attacker. That's where AI is going; it's eliminating that skill gap so significantly that these highly sophisticated attacks are now available to anyone and everyone, basically. It's a scary time, because it used to be the mentality that, oh, okay, Stuxnet-level attacks are reserved for the military, for the agencies. There's a handful of people that have the skill set to even pull that off. I mean, there's literally a handful of people on the globe that can pull that off; that's not a long shot to say, right? But now I could go spin up a GPT, do it all local. No one ever has to know. I never have to be told by my LLM that I can't do it, right? And by the end of the day, I'll have malware with a very similar degree of capability to Stuxnet, right? I mean, that's a crazy thing to think about, that we've evolved so quickly, literally in three years, and that's where we're at now.

SPEAKER_02

Yeah, and Anthropic, you know, there's been a lot of news coverage about Anthropic and the new mythos model.

SPEAKER_01

Yeah.

SPEAKER_02

And, you know, it's finding 20-year-old vulnerabilities in OpenBSD in the full stack, and it's building 20-step chained exploits to really go after whatever the objective may be, whether it's financial, ideological, or personal. But it is a time to be alive, and I'm glad to be here. I think, you know, why go after the hardened systems anymore when you can go after the people that own those systems? Because the greatest vulnerability is right between your ears. And you don't need to go bang on the firewall. You just need to basically hunt where that human is going to be. And these attackers are waiting at the watering holes where people visit every single day: LinkedIn, Slack, email, Teams, social media. And right now, most organizations have no one watching those watering holes or that public exposure.

SPEAKER_00

Hmm.

SPEAKER_01

Yeah, you know, I saw a video earlier, it might have been yesterday or the day before, where they were showing, I mean, I don't even want to call it a phishing attack, because it was so sophisticated. The attackers sent a meeting invite that looked exactly like a Zoom invite. The URL looked exactly the same, the page was good, there were no grammar issues, the domain it came from was trusted, right? So I'm looking at that and I'm saying, well, I would be fooled by that. All of my checks would check out. That's everything I would have looked for at this point. It's like we almost need an AI to tell us when we get another advanced AI social engineering attack, right? I mean, that's the only option at this point, because now you can emulate my voice, right? So now my mom doesn't know if it's me calling or you calling; it sounds exactly like me, there's no issues, right? It'll have the same mannerisms. That's probably the problem, right? Because I'm the same person on camera as I am off camera. So you watch a podcast and you've got me down pretty easily, right?

SPEAKER_02

Yeah, it's, you know, as I said before, the most dangerous signal is not the loud one, it's the quiet one that fits perfectly into the background noise. And that's what these social engineers are doing, and that's what agentic AI is doing. It doesn't announce itself, it just belongs. And so you almost think you need your own personal security detail of agents to create a barrier in front of you before you take any actions.

SPEAKER_01

Yeah. I mean, you basically do at this point. Agentic AI, I haven't dove into, but I keep running into use cases where it's like, okay, I could totally use it for this, you know, but I'm holding myself back from getting that Mac Mini to actually deploy it on my home network. A little bit nervous to do that.

SPEAKER_02

I've got OpenClaw running very cautiously in a sandbox, but I give it autonomy to do what it needs within that sandbox. And I ask it every day: I want you to build something new. And when I wake up in the morning, I want you to give me a briefing on what you built and how it's going to improve the overall objective. And it does. And it's a tremendous lever of productivity for humans, for businesses, but also for the attackers out there, and they're using it pretty effectively.

How To Prompt Agents Responsibly

SPEAKER_01

Hmm. That's interesting. Do you give it any other parameters besides "build something new," like a newsletter or whatever it might be? What does that look like?

SPEAKER_02

Yeah, so I've got content generation agents, and they can take care of the workflow of the research, the writing style, give it voices, and then the publishing thereof. But one of my favorite things to do is, I give it a kind of filtering process before it gives me an answer. So I'll give it its identity and its objective. And I'll say, before you give me an answer, you're going to come up with three answers. And I want you to rank them on the probability of success of the objective, from zero to one. And then take those three answers and build a tribe of mentors that are experts on that particular subject. And they can be any number you want, from living to dead. Heck, it could even be a metaphor if you want. But you need to present those three answers to your tribe of mentors, have them evaluate your answer, give you an improved answer, rank that on the probability of success from zero to one, accept no answers below a 0.85 probability, and then coalesce all those answers into one super answer. And so that forces the agents to go through a decision process, through a self-evaluation process, not just: I'm going to guess what Dylan wants to hear based on the prompt he's giving me, I'm going to try to guess the next segment or token. Go forward and put it to the test. And I found that to be pretty successful. It also gives you insights into how it thinks and its kind of filtering criteria.
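The filtering process Dylan describes is a prompt-engineering pattern: generate three candidates, score each on a zero-to-one probability of meeting the objective, have a "tribe of mentors" critique and improve them, reject anything below 0.85, then coalesce the survivors. A minimal sketch of how such a prompt could be assembled follows; the `build_filter_prompt` helper and its exact wording are illustrative, not Dylan's actual prompt.

```python
# Sketch of the self-evaluation filter as a reusable prompt builder.
# The helper just assembles the instruction text; you would pass the
# result to whatever LLM client you use.

def build_filter_prompt(identity: str, objective: str,
                        threshold: float = 0.85) -> str:
    return (
        f"You are {identity}. Your objective: {objective}.\n"
        "Before answering, produce three candidate answers.\n"
        "Rank each candidate on its probability of achieving the "
        "objective, from 0 to 1.\n"
        "Assemble a tribe of expert mentors on this subject. Have them "
        "critique each candidate, propose an improved answer, and "
        "re-rank it from 0 to 1.\n"
        f"Reject any answer scoring below {threshold}.\n"
        "Coalesce the surviving answers into one final answer, and show "
        "the scores so the decision process is visible."
    )

prompt = build_filter_prompt(
    identity="a content-strategy agent",
    objective="draft a morning briefing on what was built overnight",
)
print(prompt)
```

The point of the pattern is to force the model through explicit generation, scoring, and critique steps rather than emitting its first next-token guess, and the visible scores give you the insight into its filtering criteria that Dylan mentions.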

SPEAKER_01

Hmm. Yeah, you really have to challenge these agents and LLMs, these models, to actually provide you valid information. You know, when I was doing my PhD, I was using ChatGPT, and it was ChatGPT 3 or an early version of 4, whatever it might have been. And I mean, it would just give me complete garbage: completely false information, false articles that never existed, never even been written. It was so frustrating. And Google was completely useless. So I had to switch it up. I went with another model, and I'm building in those parameters; I have mine pretty finely tuned, where it's like, hey, this has to not just exist, it has to be publicly available. I have to literally be able to go to the link that you give me and download it. If I cannot do that, then you cannot give me that article, right? Because I'm not paying for a whole bunch of different services to give me these research articles and whatnot. It should just be publicly available.

SPEAKER_02

True. And uh tokens are getting expensive too these days.

SPEAKER_01

Yeah, I mean, that's another thing, right? Because NVIDIA's CEO, and I'll butcher his name if I ever try to say it, other than Jensen, right? But he was mentioning that he's going to start allocating tokens as a part of total compensation. I mean, how crazy is that? Because that essentially means they have to have a marketplace for these tokens somewhere, to be bought, traded, sold, in some way, shape, or form, for that to really make sense and hold value, right? Especially when you're including it in total comp. At least that's my mentality with it.

SPEAKER_02

And I wonder what the tax structure will be like on those tokens. Right. It also, you know, does make sense economically, in that: think about how much more productive you are now with generative AI on your side. We were just talking about content generation, or research, or other projects. That's true output, and that's true work. And so it may become a medium of exchange, and Jensen Huang may be onto something.

What Doppel Monitors Outside Perimeter

SPEAKER_01

Yeah. Yeah, that's interesting. We're going into a weird place, where a token is now, and it's not a cryptocurrency; even a cryptocurrency was a hard thing to wrap people's heads around, how that has value. And it's still, in my opinion, questionable, right? Because I can't go buy a car with Bitcoin. It doesn't work like that. So what's the actual intrinsic value of it? But yeah, it's a weird place that we're going into, for sure. With these evolving threats, how does Doppel approach the problem? And how does Doppel actually shed light on these attacks that are going on?

SPEAKER_02

Well, Doppel is an AI-native social engineering defense platform. That means we operate outside the traditional corporate perimeter. We're monitoring for all of these impersonation campaigns before they ever reach that organization's employees, executives, or even clients. We see fake domains, cloned executive identities, spoofed social accounts, look-alike login portals, health portals, employee benefit portals. And we map how all those external signals connect and scale together. And then we dismantle that entire attacker's campaign before it becomes that urgent wire transfer, or before Grandma Sadie loses her entire retirement savings account. And that is our main thesis: we want to destroy and disrupt all things social engineering, because it is one of the most pressing and urgent problems that we see out there today. And we're backed by a16z. I used to work for Marc Andreessen and Ben Horowitz in their first startup, called Loudcloud. Doppel made it to the Fortune Cyber 60 this year. But I think, beyond just the good luck and the hard work of getting onto the Fortune Cyber 60, it's because the market is starting to pay attention, because that threat is real. And the tooling that we see out there historically was built for the last era, for 2015, and it's not equipped for this new era. You know, my grandfather used to be able to walk the tree line behind his property, and he could tell you whether the deer had been through there, he could tell you how many, which direction they were going, and not because he saw them, but because he knew how to read the signs that they left behind. And I've been reading signs my whole career, and now I'm excited to be at Doppel, because Doppel does that same sign reading at machine scale.

Finding Signals Then Taking Campaigns Down

SPEAKER_01

Hmm. So when you are sifting through the noise, looking for that almost silent signal, how do you find it, and how does it compare to the work that you did in the military? I assume it's probably the same process overall, but talk me through it, because it sounds really difficult. I had an instructor at one point who talked about, well, how do you pass a shared secret if you don't trust the internet, right? Which you probably shouldn't; you don't trust the encrypted tunnel that's allegedly end-to-end encrypted and whatnot. Really, the only other solution is to either physically hand them a key that's encrypted and then tell them the secret yourself, or you basically pick up the phone, fill it with white noise, and whisper your secret into it. They know how to filter out the white noise, hear the secret, and be able to use it, right? How do you filter through that noise to hear what's actually going on, what the actual intent is?

Cloud Blast Radius And CISO Risk Gaps

SPEAKER_02

Yeah, when I was in Signals Intelligence, you know, we would sit there. This was before the world went digital. So we were still on RF, radar frequencies, and you would sit there at a station with headphones on, and we called it spinning and grinning, and you would just go through all of the frequencies slowly looking for deviations in signal strength, amplitude, and you would pick up a transmission, and you had to discern whether it was Tagalog or Vietnamese, North Korean or South Korean or Mandarin or Cantonese. And the discernment of that was having a basic understanding of each one of those other languages, but being a specialist in your own, and then being able to lock onto that and orient to that and try to discern who they were, what unit they were with, where they were operating, and then lock on and determine that that's your signal. And it's the same thing today on the surface web, the deep web, the dark web. You know, we ingest billions of signals per day from across all of those different sources, from social media, from e-commerce sites, third-party mobile apps, paid ads, email, telephone numbers, and of course typosquatted domains, even payment platforms. And we see those impersonations out there. And we use LLMs to basically run those scans. And then as a part of our fully managed service, we have a three-tiered stock. And the first tier of that, we've automated using agents. And the agents do what they do really well is they ingest very large corpuses of data and they distill those down to determine what's real and what's fake. And then from there to give weighting to that, to give enrichment to that. And then it goes up to the humans in the loop. Without those LLMs and those agents, we wouldn't be able to scale to that because I think the humans just weren't built for that problem. And the challenge is now that you've found it, what do you do with that? And so taking that to the next level, you know, in the military, we were offensive. 
In this world, we call it proactive. I call it getting left of bang. In the military, getting left of bang meant that you could identify the threat and neutralize it before it could go kinetic. And that's the same thing we do here: we dismantle their entire infrastructure, meaning we have it taken down and keep it down, again using those LLMs for continuity. And that helps the organization, the brand, the executive, their families, their employees, their clients, in that before the attackers can actually act, before they can pull that trigger, they've already been taken down, and taken down again, and taken down again. And that changes their economics. Spinning up a synthetic executive identity, or a deepfaked voice clone, or an already aged and seasoned domain or LinkedIn profile with six months of posting history, that used to cost them days of skilled labor and some real technical sophistication. Today it costs almost nothing, and it runs at machine speed around the clock. I would say that it used to take a skilled craftsman a full day to pick a lock. Now there's a machine that tries every combination in 30 seconds while that craftsman is home having dinner. It's not that the lock got weaker; it's that the attack got cheaper, and we're still selling the same locks in security. So I think there has to be a new solution, a new approach that moves at the speed of the attacker. I have a question for you, though, Joe. You get to speak to some really interesting people and get some unique perspectives. You're an educator, you're a practitioner, and you live in cloud security every day.
When you think about that economic asymmetry we're talking about, on top of an environment where, in your context, even one over-permissioned identity can unlock a multi-cloud blast radius, what does that threat model actually look like from where you sit? And do you think CISOs have priced that into their operating models yet?
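The tiered triage Dylan describes, agents scoring billions of raw signals and escalating only the likely impersonations to the humans in the loop, can be sketched in miniature. This is an illustrative toy, substituting a simple edit-distance heuristic for the LLM scoring; the brand list and thresholds are invented:

```python
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["doppel.com", "example-bank.com"]  # hypothetical watch list

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def triage(candidate_domains):
    """Tier-1 'agent' pass: score each observed domain against the
    protected brands and bucket it for human review or enrichment."""
    findings = []
    for domain in candidate_domains:
        best = max(PROTECTED_BRANDS, key=lambda b: similarity(domain, b))
        score = similarity(domain, best)
        if domain == best:
            continue  # the genuine domain, not an impersonation
        if score >= 0.85:
            findings.append((domain, best, score, "escalate"))  # near-clone: human review
        elif score >= 0.6:
            findings.append((domain, best, score, "enrich"))    # gather more signals first
    return findings

for domain, brand, score, action in triage(["dopple.com", "d0ppel.com", "unrelated.org"]):
    print(f"{domain} ~ {brand} ({score:.2f}) -> {action}")
```

A production system would weight many more signals (domain age, posting history, ad placement), but the shape is the same: machines distill the corpus, humans see only the weighted residue.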

SPEAKER_01

Hmm. That's a really good question. And I feel like they don't. They don't typically price that in. They don't look at it the same way; they still kind of view multi-cloud as separate things, right? I was literally talking to a customer recently who has an AWS environment and an Azure environment. And I asked very specifically, well, what happens if AWS gets breached? Can the attacker just come across the tunnel to Azure? And it paused them for a minute, because they weren't exactly sure. And I told them, look, you have an open tunnel to an environment that you don't care about, where you're storing just one type of data, but the attackers look at that and say, well, their defenses in Azure are really good. Oh, but what's this tunnel doing? That goes over to AWS. Let's go take a look at AWS, right? Just asking basic questions, like, hey, do you have MFA enabled on your AWS accounts? Do you still have your root account enabled, and are you using it? Is it stored securely and whatnot? I feel like they're still not addressing the risk properly, and they're still, to some extent, not even accepting the use case of CSPMs properly, because a CSPM would provide you that capability, that insight. But they're still not, especially if they're in Microsoft, right? If they're in Azure, why go anywhere else? Because you have Security Center, and Microsoft has everything under the sun, and they'll be real good and bundle it all into one SKU for you, especially if you get an E5 license. Now with E5, you get everything, right?
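The basic hygiene questions Joe lists (is MFA enforced, is the root account still in use, what is that cross-cloud tunnel doing) are the kind of checks a CSPM encodes as rules over account state. A minimal sketch; the account facts and rule names are invented for illustration, and a real CSPM would pull these facts from the cloud providers' APIs:

```python
# Hypothetical snapshot of account facts a CSPM might collect.
account = {
    "provider": "aws",
    "root_account_in_use": True,
    "mfa_enforced": False,
    "cross_cloud_tunnels": ["azure-prod"],  # open peering into another cloud
}

RULES = [
    ("root-disabled", lambda a: not a["root_account_in_use"],
     "Root account should be locked away, not used day to day"),
    ("mfa-enforced", lambda a: a["mfa_enforced"],
     "MFA should be required on all accounts"),
    ("no-cross-cloud-tunnels", lambda a: not a["cross_cloud_tunnels"],
     "Each cloud should be its own blast radius; audit any tunnel"),
]

def evaluate(account):
    """Return the rules this account fails, with the reason for each."""
    return [(name, reason) for name, check, reason in RULES if not check(account)]

for name, reason in evaluate(account):
    print(f"FAIL {name}: {reason}")
```

The value of the rule form is exactly Joe's point: the cross-cloud question gets asked on every account, every scan, instead of only when someone thinks to ask it.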

Why Awareness Training Breaks Under Deepfakes

SPEAKER_02

Just got to worry about burning those credits. And I'll have you put on your educator hat now. Do you think that security awareness training is even close to equipped to handle that kind of coordinated pressure? And as I said before, are we asking humans to do something they were never designed to do consistently under the pressure of social engineering, with the urgency it brings and the credibility you think you're seeing? These social engineers are using someone who looks like your CEO, talks like your CEO, walks like your CEO, and no one is questioning who walks up to the front door. They're opening the front door for him. Come on in, sir. Come on in, ma'am. Do you think the students you're working with are being trained for that appropriately?

SPEAKER_01

No, I don't think so. I almost think that it's not even possible to be trained for that properly, because everything we know to be true to assess any sort of threat is completely invalid when you start putting AI behind it that can replicate my voice and replicate my video, my background, right? It'll replicate how I'm moving in the background, just within this video. I'm thinking about it from the approach of: if I'm teaching a class and it's virtual, how do my students know that it's actually me teaching the class? Because AI can do every single thing that they're seeing in this video right now. There's no problem with it. And so the AI is literally using what we've been trained on our entire lives, literally from birth. Okay, that's your parent, that's your sibling, right? This is how their voice sounds, this is how they sound when they're angry, when they're mad or sad, or whatever it might be. It's taking all of those characteristics and using them against us, even from a technical perspective. I was looking at a phishing email, and I'm looking at it like, this would get me. And that happens once a week now. It's literally at the level where even the phishing tests that I used to run... I used to run phishing tests for a company I was working for, and I would be so harsh with my phishing tests. I would have a blast when I would get a 98% failure rate. The CISO would be pretty mad at me because of that 98% failure rate, but I'm sitting here thinking the attackers are not going to give you a break. They're going to look for any way in. So shouldn't I train people to be prepared to look at it in these ways? And I had a good point, but he also wanted me to dial it down a little bit so people could feel better about themselves.

SPEAKER_02

Wasn't punitive.

SPEAKER_01

Right, you know, and now just thinking through it, I mean, it's fooling security professionals. I do this all day long. I've done it for 12 years at this point, and it would still fool me every single time when I see that email or that video come across. You need another AI to literally combat it.

SPEAKER_02

Absolutely. And it's SMS, it's Telegram, it's Signal, it's email as well. But I think 10 years ago, security taught people not to click links, and that was the right lesson for 2015. Today's attacks don't need a malicious link. They need your help desk to reset a password for someone who sounds exactly like your CTO. They need your finance team to feel that sense of urgency on a wire transfer because the request came from what looks like an internal Slack channel. It's an exploitation of human psychology. It's no longer technical.

SPEAKER_01

And I think it was Caesars or MGM that had that massive hack where someone just called up the help desk, and the help desk guy did everything that he was supposed to do, and they still got in.

Measuring Human Risk With Real Simulations

SPEAKER_02

And by virtue of their name, the help desk, they want to help. And social engineering is exploitation of that human desire to help. It's also playing on emotions, looking for credibility, coming with a solid pretext about who you are and what your mission is, and then time-boxing all of that to get the target to make a very urgent decision. And with context to the training that we're putting out there today, and measuring that training and the completion rates: how do people actually behave under real attack pressure? That's another area we play in, human risk management. That's the practice of identifying, measuring, and reducing the cyber risk created by human behavior, and building resilience so that you know what a multi-channel social engineering attack looks like. Your employees, your contractors, your third-party BPOs, those are a critical attack surface, and now you can monitor and prove it, because if you can't measure it, you can't improve it. And we see a demonstrable reduction in risk of about 23% just by running realistic simulations across the same channels the attackers are using. How do your people actually respond? Can you close the gaps that you find? Right now, most organizations are flying blind on that human layer, but still very focused on hardening everything from the perimeter in. One of my friends says there's a difference between knowing the words to a song and being able to sing it when the lights come on and the room goes quiet. I think most security training teaches people the words, but with human risk management, that's where you really find out whether they can actually sing.
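Measuring that human layer is mostly bookkeeping once the simulations run. A rough sketch of per-round failure rates and the relative reduction between rounds; the channels and results below are invented for illustration:

```python
def failure_rate(results):
    """results: list of (channel, failed) tuples from one simulation round."""
    failed = sum(1 for _, f in results if f)
    return failed / len(results)

def reduction(before, after):
    """Relative drop in failure rate between two rounds, as a fraction."""
    return (failure_rate(before) - failure_rate(after)) / failure_rate(before)

# Hypothetical rounds across the same channels attackers use.
round1 = [("email", True), ("sms", True), ("voice", True), ("email", False)]
round2 = [("email", True), ("sms", False), ("voice", False), ("email", False)]

print(f"round 1: {failure_rate(round1):.0%} failed")
print(f"round 2: {failure_rate(round2):.0%} failed")
print(f"reduction: {reduction(round1, round2):.0%}")
```

With these invented rounds, the failure rate drops from 75% to 25%, a 67% relative reduction; the point is that resilience becomes a number you can track per channel rather than a completion rate on a training module.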

SPEAKER_01

Hmm. That's a really good way of putting it. I never thought about it like that, but yeah, that's a really good way of putting it, because even from a technical perspective, it's one thing to play around with it in a lab and get your Kali Linux going and Metasploit going. And it's another thing to do it for real, where it's like, hey, I get one shot. If I make one mistake, it's not going to work. I don't get redos, I don't get retries. And all these people are watching me, watching to make sure that I don't type a command wrong or whatever it might be. When I was talking to that cyber warfare officer, he was telling me, and you could probably speak to it as well, but you don't have to, I know he shouldn't have been talking to me, he was talking about how difficult it is, how high the bar actually is. If you type in the wrong command or have one line of code that isn't optimal, they just send you home. They don't play around. They give you a teaching period, and after that teaching period, however many weeks it was, it's like, no, you're being tested every single day in every single way imaginable.

SPEAKER_02

It truly is life or death, and you're supporting the warfighters, and those folks are doing tremendous work. It's like the special operations of the cyber world. It matters at that point.

SPEAKER_01

And I always found the correlation between the two kinds of training interesting, right? You always hear about how the SEALs train, how Delta trains, how the Rangers train. And they have the same standards for the digital world, which is really interesting to me. Whenever I hear interviews with different SEALs and Delta guys, they all say, hey, you could literally approach a door with the wrong foot forward and you're sent home. And they don't tell you why. They're just like, yeah, you're not going to be a fit. It's like, it was a foot, what's the big deal? But to them, every little minute detail matters. If you're a little bit off center or something like that, they don't want you. It's like, no, we need the perfect operator. And it's the same thing in the cyber world.

Special Operations Mindset And Fundamentals

SPEAKER_02

And a lot of people who've not served in those communities think it's superhuman, that you're learning all these magical tactics and skills. Really, you're just drilling fundamentals over and over and over again so that everyone knows which foot they're going to put forward, and when. I had the good fortune and the bloody pain of serving with 1st of the 75th Ranger Regiment and 3rd of the 160th Special Operations Aviation Regiment. The 160th were the pilots that flew in on the bin Laden raid, as well as on the Venezuela raid. And so I've flown with them, and they're incredibly precise, but everything is about the fundamentals. You drill it over and over, and you drill it as a team, and that's all that matters: getting those fundamentals right, in the right order, and bringing everybody home.

SPEAKER_01

Hmm. Yeah, it's fascinating, the level of skill that those guys have, because you know that when you're at that level, that's all that you do. You're not thinking about anything else. You're probably spending more time in the air than you are on the ground, specifically for those pilots, right? And I heard someone talk about how the helicopter actually crashed at bin Laden's compound. I'm not going to butcher the story, because I don't know it inside and out like that. But he was talking about how the pilot identified that there was an issue just by the feel of the aircraft, before there were any warning alarms or anything like that. And when he felt the aircraft that way, he was able to pitch the nose in a way that preserved everyone's life on the helicopter. Even though he crashed the helicopter, he knew, okay, there's no opportunity to save this thing; this is the one millisecond that I have to do it, right? And they were even saying that if he hadn't made that choice when he did, they likely would have lost at least a couple of guys in that crash, just because of how it was. And I don't know the details, the nuances of it, but that just tells you the level of skill, not only of the cyber operators that we have in the military and the different agencies. Now that skill level is coming to every hacker through AI, through LLMs, because it's all digital. These AIs, these LLMs, they live in the digital world. If it's digital at all, they can read it, they can learn from it, they can manipulate future attacks to be more successful, even.

Playbooks And Planning For The Worst

SPEAKER_02

And exploit it. And there's one more key detail about that, which really brings us back to resilience and human risk management. I had the good fortune of hearing Rob O'Neill speak at a conference a few weeks ago. When they were planning this mission for months, he said, we were going through every contingency scenario as we were running these tabletop exercises. And a commander asked, well, what could go wrong? And the youngest guy on the team raised his hand and said, the helicopter could fall out of the sky. And everybody kind of looked at him and groaned and said, well, that's kind of a dumb idea. But the commander said, no, we're going to prepare for that one. We're going to build contingencies for that one, so we know what we're going to do, which direction everybody is going to go, and how we're going to get out of there. And the worst fear that they had was being stuck in the helicopter as it was going down, because that meant compromise and harm to those warfighters. And I think it's the same thing in the corporate world today: you have to really ask the question, what could go wrong? Could my BPO in the Philippines reset an MFA for someone who called up and said, hey, this is Dylan, I'm flying back from London and I left my phone in a taxi, I had to buy a new iPhone in the airport, and I've got to jump on a meeting in a hot second. Can you just reset my MFA to this new phone number, please? And they want to be helpful. And I think that's where we really need to train and send those real-world scenarios and attacks, so that when they do happen, no one gets harmed.

SPEAKER_01

Yeah. Yeah, that's a really good point. And I talk to my customers a lot about this, actually, where one of the questions I'll always ask them is, do you have playbooks? What does it look like if something goes wrong in your environment? Is your help desk following process and procedures? Are the steps outlined somewhere where everyone can go and follow them, no matter what? And almost every single time that I've asked that question, they never have the proper degree of playbooks or runbooks that they should. And I always describe it like this, because you have to paint the picture properly for people to really understand. What if I'm the only one that knows how to troubleshoot a problem, or the architecture of a certain client that I have? I'm literally the only one. They are my client, no one else would ever talk to them, and it's a unique deployment. Maybe me and maybe a dev that worked on it with me would know anything about this situation. If I go and get hit by a bus, as brutal as that is to say to people, if I get hit by a bus, and now the new guy on the team doesn't even know Linux, doesn't know the environment, is still learning our application overall, he now has to take over this client, and they have an upgrade scheduled or whatever it might be. And maybe it takes months for this client to get upgrades. You can't cancel it, but you can't do a normal upgrade like you would. What happens, right? I always approached it from that mentality with my own documentation, my own troubleshooting guides, where I became infamous because I had the most robust troubleshooting guide in the company, with a whole section on SELinux.
If someone says that they have STIGs or SELinux on their server, here's the process that you have to go through. If you run into this error, it means that they missed these STIGs, and these are the exact commands that you have to run, and whatnot. And that took months of troubleshooting. It was months of me restoring to a known good state and trying again, because at the time, the 2010 to 2012 time frame, with SELinux and STIGs and everything, the NSA had basically just released it to the public, and no one in the public was using it, because as soon as you deploy the thing, your entire server bricks up. You have to run like 50 different commands just to make sure that your application is running like it should. So I'm literally over here with the NSA white paper on it, and then nothing else, and I'm trying to make this thing work no matter what for our customers, because my customers were all custom; they were all DoD and military clients. They all had their own custom deployment, and they all had SELinux fully enabled. So God forbid anything happens to me.
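As an aside, the first step of that SELinux troubleshooting loop, pulling the useful fields out of an AVC denial in the audit log, is mechanical enough to script; in practice, tools like audit2why and audit2allow do this properly. A rough sketch against a made-up but representative denial line:

```python
import re

# Matches the standard AVC denial format found in /var/log/audit/audit.log.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\}.*?"
    r"comm=\"(?P<comm>[^\"]+)\".*?"
    r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+tclass=(?P<tclass>\S+)"
)

def parse_avc(line):
    """Pull the useful fields out of one AVC denial line, or return None."""
    m = AVC_RE.search(line)
    if not m:
        return None
    return {
        "permissions": m.group("perms").split(),
        "process": m.group("comm"),
        "source": m.group("scontext"),
        "target": m.group("tcontext"),
        "class": m.group("tclass"),
    }

# Invented but representative denial: httpd blocked from reading a file
# labeled with a user home context.
line = ('type=AVC msg=audit(1699999999.123:456): avc: denied { read } for '
        'pid=1234 comm="httpd" name="app.conf" dev="dm-0" ino=789 '
        'scontext=system_u:system_r:httpd_t:s0 '
        'tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file')

print(parse_avc(line))
```

The source and target contexts are usually the whole story: here the mismatch between `httpd_t` and `user_home_t` is exactly the kind of thing a troubleshooting guide can key its fix steps on.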

Deepfaked Experts For Interactive Runbooks

SPEAKER_02

Yeah, and like you said, you can brick a machine quite easily. But can you imagine living in this agentic world of AI where you could take that troubleshooting guide and that deployment guide and actually deepfake yourself, Joe, and turn that into a corpus of training that's interactive? So when you bring someone in new, a junior practitioner, and it's 2 a.m. on a cold, dark, and rainy night and nothing's going right, all of a sudden they pop up Joe's training guide, Joe's troubleshooting guide, and they're asking you questions. Well, virtual you, virtual Joe, and you're giving them the right answers: hey, run sed, awk, whatever that may be, and this will get you out of that. Now validate it with this, make sure that STIG didn't override another control, and go from there. And that's actually the reason why we built our security awareness training like that. Not only to simulate the real-world attacks, but also to build agentic content so that you can have interactive conversations with a body of knowledge, with a document, with one of your libraries. And all of a sudden, now you're amplifying and getting a greater lever on your productivity, as we said before.

Where To Connect And Final Takeaways

SPEAKER_01

That's fascinating. I mean, that's really the only way to do it, and the only way that you would even build a product like that is with a background like yours, right? No one else would think to build a product like that, to be able to filter through the noise, identify the one silent signal that no one's paying attention to, and raise an alarm about it. In security, we're always taught to be cautious of everything, always question everything, don't trust anything implicitly. And you're really embodying that mentality with the product. Look, on the next episode that we do, I really want to see a demo. I really want you guys to deepfake me and show me how it works. Let's do it. Yeah. No, that would be awesome. Dylan, we're unfortunately at the top of our time. I feel like I could keep talking to you all day long, but it was a fantastic conversation. I really appreciate you taking the time to come on.

SPEAKER_02

Likewise. And thank you for your past work with the DOD and federal agencies, and thank you for what you're doing in education and in cloud security. And most importantly, thank you for bringing this podcast out and sharing unique perspectives of practitioners, of authors, of leaders out there because it enriches the community. And so I encourage everybody to like and subscribe on the Security Unfiltered Podcast.

SPEAKER_01

Well, thanks. I really appreciate that. I always forget to tell my audience to do that. I'm in the process of improving my own workflows. So I really do appreciate that. But Dylan, before I let you go, how about you tell my audience, you know, where they can find you if they wanted to connect with you and where they could find Doppel if they wanted to learn more? And, you know, I'll leave all the links in the description of this episode. If you want to check out Doppel and schedule a call with them, you'll definitely be able to do so in the link in the description.

SPEAKER_02

Yeah. So if you've got any feedback, you can reach me at dylan@doppel.com. That's D-Y-L-A-N at D-O-P-P-E-L dot com. Or you can send me a DM on LinkedIn and we'll get back to you. I'm really looking forward to continuing these conversations, Joe, and I look forward to deepfaking you. And I'd like to leave your audience with just a couple of thoughts about defending themselves against the social engineering threats out there today. We talked about time boxing, where urgency is the exploit. You talked about verifying the channel, not just the message, out of band, with your CFO story. But lastly, your digital footprint is your attack surface. So I would really encourage everyone out there to take a look at what your external exposure looks like. And if you have any questions, please get in contact with Doppel at doppel.com.

SPEAKER_01

Awesome. Well, thanks everyone. I really hope that you enjoyed this episode. It was a fantastic conversation. Make sure that you go and check out Doppel and Dylan at the links below in the description of this episode. And of course, if you enjoy this kind of content, you know, please subscribe and like the content or like the video. All right. Thanks, everyone.
